Dataset columns: id, title, abstract, authors, published_date, link, markdown
2304.09083
Anomalies, $\eta$, $\eta'$ as keys to glueballs
Glueballs are the most straightforward prediction of QCD, yet while they have likely been produced, none has been unequivocally identified. We pursue a backdoor approach through anomalies, and singularly the $\eta$ and $\eta'$, which bring light to this irritating situation. In particular, we advocate considering the full decay chain $J/\psi \rightarrow X \gamma, X \rightarrow \eta \eta'$ (production in glue-rich states followed by glue-rich decays). We also suggest new BES III searches, namely for the $\pi_1$ into $\eta(') \pi^0$ (this would be the partner of their recently observed $\eta_1(1855)$). Another useful investigation would be of other (or semi-inclusive) $f_0(1500)$ decay channels (see the last section).
Jean-Marie Frère
2023-04-18T15:54:07Z
http://arxiv.org/abs/2304.09083v1
# Anomalies, \(\eta\), \(\eta^{\prime}\) as keys to glueballs

###### Abstract:
Glueballs are the most straightforward prediction of QCD, yet while they have likely been produced, none has been unequivocally identified. We pursue a backdoor approach through anomalies, and singularly the \(\eta\) and \(\eta^{\prime}\), which bring light to this irritating situation. In particular, we advocate considering the full decay chain \(J/\psi\to X\gamma,X\to\eta\eta^{\prime}\) (production in glue-rich states followed by glue-rich decays). We also suggest new BES III searches, namely for the \(\pi_{1}\) into \(\eta(^{\prime})\pi^{0}\) (this would be the partner of their recently observed \(\eta_{1}(1855)\)). Another useful investigation would be of other (or semi-inclusive) \(f_{0}(1500)\) decay channels (see the last section).

## 1 Introduction

While glueballs (colour-neutral bound states of 2 or more gluons) are an obvious and compelling prediction of Quantum Chromodynamics (QCD), none has thus far been identified with certainty. In a way, most physicists have ceased searching for them, satisfying themselves that they have certainly been produced but are lost in a complex forest of quark states with which they mix, pointing in particular at the total number of \(0^{++}\) or \(0^{-+}\) states between 1 and 2 GeV. More conspicuous, however, would be "exotics", namely mesons with the "wrong" charge-parity assignment with respect to the naive quark model, as a result of the presence of an extra gluon in their composition; here also, an ambiguity may exist with 4-quark states, a pair of quarks simulating a gluon.

While glueball searches have thus become a backwater, it is a bit shocking that such a fundamental test of QCD (and, accessorily, of lattice calculations) is neglected. Fortunately, some long-overdue systematic searches combining gluon-rich production and gluon-rich decays have now been realized, thanks on the one hand to the identification of the \(\eta,\eta^{\prime}\) as possible markers, and on the other hand to BESIII in China coming into full operation, allowing for a never-before-reached resolution in the study of \(J/\psi\) radiative decays.

## 2 Are glueball decays flavor blind?

The lack of trustworthy calculations (we are deep in the domain of really strong forces, far away from asymptotic freedom) has led to a number of qualitative assumptions about the decays of glueballs.

* The most straightforward assumption is that decays are "flavor-blind", since pure glue ignores this quantum number, with the mitigation of phase space.
* Very quickly, qualifications and possible deviations were suggested: wave-function overlap (a quantum-mechanical approach) would suggest that heavier particles, closer to the 1-2 GeV mass of the glueballs, would be preferred, say \(K\) pairs over \(\pi\)'s.
* This tendency is also advocated by the supposition that 2 gluons cannot couple to light quarks, since the decay into a pseudoscalar meson would necessitate a mass insertion.
* It should however be noted that the above argument does not really apply in general, since it is well known that quantum anomalies precisely relate the divergence of the axial current of light quarks to pure gluons, \(\widetilde{G}^{\mu\nu}G_{\mu\nu}\): this is precisely the term which appears in the evaluation of the \(\eta,\eta^{\prime}\) states (the anomaly relation is recalled at the end of this section). A more detailed discussion can be found in ref. [1].
* The above arguments are mostly limited to 2-body decays into pseudoscalars, but it is not absurd to assume that flavor-neutral states would decay predominantly via 2 light \(\sigma\)'s (the very broad partner of the \(\pi\), only seen through partial-wave interference), those particles decaying in turn into pion pairs (thus at least 4 \(\pi\)).
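For reference, the anomaly relation alluded to above reads, for a single light-quark flavor of mass \(m\) (a standard form; normalization conventions vary between texts):

\[\partial_{\mu}j_{5}^{\mu}=2m\,\bar{q}\,i\gamma_{5}\,q+\frac{\alpha_{s}}{4\pi}\,G_{\mu\nu}^{a}\widetilde{G}^{a\,\mu\nu},\qquad\widetilde{G}^{a\,\mu\nu}\equiv\tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}G_{\rho\sigma}^{a}.\]

The gluonic term survives in the chiral limit \(m\to 0\), which is why no quark-mass insertion is needed for pure glue to couple to the \(\eta^{(\prime)}\).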
## 3 Enter the \(\eta\) and \(\eta^{\prime}\)

Some hints of modern glueball searches came early (early 1980's) with the GAMS experiment [2], [3], which investigated a "gluon-rich" (central-production) environment using a lead-glass wall, mostly sensitive to photons. This had the effect of giving access to the \(\eta\) and \(\eta^{\prime}\) decay modes, and resulted in at least one glueball candidate (which they called G(1590), now the \(f_{0}(1500)\); the discrepancy in mass is most probably due to the limited phase space in the \(\eta\eta^{\prime}\) channel, which leads to severe distortion). Interestingly, the same experiment (which was conducted in parallel at CERN and Serpukhov) led to the discovery of an exotic candidate, then called M(1406), now probably the \(\pi_{1}(1400)\) (also sometimes called \(\widetilde{\rho}\), with exotic quantum numbers \(1^{-+}\), impossible for a quark-antiquark meson), seen decaying into \(\eta(^{\prime})\pi\).

In both cases, the role of the \(\eta^{\prime}\) meson was key, as pointed out by Gherstein et al. [4]. A special connection between glue and the \(\eta^{\prime}\) was indeed known both from theoretical considerations (the QCD anomaly) and from the observation of the large \(J/\psi\to\eta^{\prime}\gamma\) branching ratio. It must be noted that _no quark mass insertion is needed in this case_, due to the anomaly.

**We have developed a complete formalism to handle radiative decays of the type \(V\to P\gamma\) or \(P\to V\gamma\) on the basis of QCD anomalies [6], [7], further refined in [8], which led to a successful description of those processes without recourse to additional parameters. The larger decay rate of \(J/\psi\) into \(\eta^{\prime}\gamma\) than into \(\eta\gamma\), despite an unfavorable phase space, is then put in relation with the greater admixture of the anomaly contribution in the physical \(\eta^{\prime}\).**

For \(J/\psi\) decays, the emission of a photon leaves a glue-rich situation, which eventually couples to the \(\eta(^{\prime})\). This is thus a different "glue-rich" production source, in addition to central production. While intensity-limited by the production mechanism, it offers a much "cleaner" situation in which to look for glueball states. Unfortunately, \(J/\psi\) radiative decays into neutral mesons have proven very difficult experimentally, and the results of glueball searches were confusing at first. While SLAC ceased this activity, more powerful machines preferred to turn to the heavier \(B\) mesons. It was not until BES III in Beijing started taking data in recent years that difficult channels could be studied with (impressive) accuracy.

Figure 1: \(J/\psi\) radiative decay as a glue-rich source coupled to the \(\eta(^{\prime})\); notice that no mass insertion is needed.

## 4 Who are the Glueballs?

Many attempts have been made to identify the glueballs in the 1-2 GeV meson spectroscopy, with approaches including various elements (in particular, including or excluding the anomaly-mediated channels). A popular approach (see references in [1]) attempts to parametrize the various contributions according to topologies assumed from the schematic mechanisms described, for instance, in fig.
3, or possibly in the left part of fig. 2. While tempting, this approach should be taken with a grain of salt. Due to strong interactions, these diagrams are just oversimplified evocations of processes, and in no way on the level of Feynman graphs! Each of them should be enriched with a large number of extra QCD contributions, the phases are unknown, etc.

Like others, we tried this approach a few years ago, and tried to compensate for the insufficient theoretical basis with a long list of processes to be fitted. On the basis of this analysis, we tried 2 fits, one without assuming the often-advocated chiral suppression (against which we already listed arguments), and the more traditional one for comparison. Both fits (in line with the tendency of the time) tended to present the \(f_{0}(1710)\) as a mostly-glueball state and the \(f_{0}(1500)\) mostly as a "normal" meson. As part of the fit, we had come up with the fit/prediction for the decay of these mesons into \(\eta\eta^{\prime}\) shown in figure 4 (the references in brackets refer to the then-available data, see the original paper). These attempts certainly suggested that the \(f_{0}(1710)\) would be observed in the \(\eta\eta^{\prime}\) channels once the BES III data became available!

Figure 2: \(J/\psi\) radiative decay as a glue-rich source leading to glueball formation and decay, here in the \(\eta\eta^{\prime}\) channels.

Figure 3: 3 graphs (in addition to figure 2) describing possible decay topologies, from ref. [1].

## 5 The new results from BES III change the perspective

We have finally come close to the holy grail! BES III recently published a partial wave analysis of the radiative \(J/\psi\) decays, complete with the \(\eta\) modes [9]. This is precisely the tool which has been missing for tens of years (more from lack of interest than from lack of feasibility at other facilities!). And it comes with a surprise (we will deal with another surprise, in the "exotics", in a later section). Namely, the \(f_{0}(1710)\) is simply not seen in the full decay chain \(J/\psi\to X\gamma,X\to\eta\eta^{\prime}\), where it should dominate the \(f_{0}(1500)\). Even accepting a confusion with the \(f_{0}(1810)\) (remember that the "mass" in the tables is not always the latest measurement, and that partial wave analysis may be affected by interferences and phase-space distortions), we see that it is suppressed in the ratio 0.007/3.05. See however also a comprehensive analysis in ref. [12]. **This suggests another attitude, and strongly points to the combined decay chain as a prominent test for glueballs.**

Figure 4: Our previous fit to the \(f_{0}(1710)\) and \(f_{0}(1500)\) decays into \(\eta\eta^{\prime}\), allowing for the severe phase-space suppression of the latter; the brackets refer to the data available at the time [1], and the whole fit is now superseded by BES III data.

Figure 5: The partial wave analysis of the full decay chain \(J/\psi\to X\gamma,X\to\eta\eta^{\prime}\), from table II of [9].

Ironically, this most recent result brings us back to the initial observations by GAMS, with their G(1590), now identified with the \(f_{0}(1500)\), as a leading candidate. Here also, the shift in the estimated mass (from the very early and sparse observations) may be understood by the severe distortion of the phase space (the \(\eta\eta^{\prime}\) channel being at the nominal threshold, and only open thanks to the state's width).
This of course prompts us to look back into the unusual properties of the \(f_{0}(1500)\), but also to check what the BES III data tell us about the other "weird" observation of GAMS, namely the \(1^{-+}\) exotic. This will occupy the next 2 sections.

## 6 The very unusual decays of the \(f_{0}(1500)\) particle

It is worth taking a new peek into the PDG tables to learn more about this state, which is back in the spotlight. In this table, we note that, as already mentioned, the \(\eta\eta^{\prime}\) mode, while at threshold, is very significant (and would dominate the \(\eta\eta\) mode after phase-space corrections), but also that the main observed decay mode is into \(4\pi\), overwhelming the \(2\pi\) channel. An interesting question remains: there may very well be other, not yet observed modes (multi-pion), for which the analyses have not been performed or would prove impractical. In fact, a semi-inclusive search for \(J/\psi\to\gamma X(1500)\), where \(X\) is any recoiling state with invariant mass in the relevant range, could be instructive! Remaining with the \(4\pi\) decay mode, this can certainly suggest a decay through a pair of very wide \(0^{++}\) \(\sigma\) states. Given its quantum numbers, the \(\sigma\) can certainly be suspected to mix into the "glueball complex".

Figure 6: The main decay modes of the \(f_{0}(1500)\) according to the Particle Data Group.

## 7 Return to the Exotics

Exotics is the name traditionally given to meson states which cannot be reached by the association of just one quark and an antiquark. Such basic combinations indeed have their parity and charge conjugation directly related to the orbital momentum and total spin respectively, namely \(P=(-)^{l+1},C=(-)^{l+s}\). As a result, a \(J^{PC}=1^{-+}\) state is forbidden (a short check is given at the end of this section). If a gluon (or a quark-antiquark pair mimicking a gluon) is added, the selection rule fails. Such a state was advocated by the GAMS collaboration (see above) [5], with a decay into \(\eta(^{\prime})\pi\) and a mass of 1406 MeV (initially named M(1406) and probably currently known as \(\pi_{1}(1400)\)). While it was a clear sign of an exotic, the phenomenological analyses conducted then were not enthusiastic, since prejudice disfavored this precise decay mode - once again, an error due to disregarding the axial anomalies and the special role of the \(\eta^{\prime}\)! We advocated already at that time that the inclusion of the anomaly would, on the contrary, make it a leading decay mode [10]. Little work has gone into this search since, but it is interesting that BESIII has now identified a similar exotic, this time in the closely related \(\eta\eta^{\prime}\) channel, namely an \(\eta_{1}(1855)\) with \(I^{G}J^{PC}=0^{+}1^{-+}\) [9]. This time, the theoretical interpretation did not miss the role of the anomaly [11], coming to a conclusion similar to ours in the case of the \(\pi_{1}\). This comforts us in believing that the \(\eta\) system, through quantum anomalies, is a key to understanding the glueball and exotics system!
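As a short check of the selection rule above: a \(q\bar{q}\) state with orbital momentum \(l\) and total spin \(s\) has

\[P=(-)^{l+1}=-1\;\Rightarrow\;l\ \text{even},\qquad C=(-)^{l+s}=+1\;\Rightarrow\;l+s\ \text{even}\;\Rightarrow\;s=0,\]

and \(s=0\) with even \(l\) gives \(J=l\in\{0,2,\dots\}\), never \(J=1\). Hence \(1^{-+}\) is unreachable for a quark-antiquark pair, while an extra gluon (or a second \(q\bar{q}\) pair) evades the rule.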
## 8 Conclusions and Work to Do

Obviously, our work in understanding the role of gluons in exotic mesons and glueballs is far from complete. Some comfort may be found in the fact that a large body of experimental evidence is now becoming available in what was for a long time a neglected field. Many theorists had probably given up in the face of the complexity and the lack of fresh data, but hope is returning and results are lining up nicely. Many channels still remain worth exploring.

In particular, the search for the exotic \(\pi_{1}\) in the \(\eta^{\prime}\pi^{0}\) mode should be conducted at BES III, confirming the old GAMS result and giving extra support for a partner of their \(\eta_{1}\) state. Further decay modes (ideally a semi-inclusive study) of the \(f_{0}(1500)\) would also help to better understand this state. The next step would be to use the then better-established glueballs as probes for heavier states (assuming that a glueball \(\to\) glueball + X decay would be favored). We may see a new sunrise on glueballs!

## 9 Acknowledgements

This work is supported by IISN (Belgium) and the Brout-Englert-Lemaitre Center (Brussels). I wish to thank the organizers of the Corfu meetings for the occasion to evoke those issues in beautiful surroundings.
2308.00414
Ab-initio Study of Electronic and Lattice Dynamical Properties of monolayer ZnO under Strain
First-principles density functional theory based calculations have been performed to investigate the strain-induced modifications in the electronic and vibrational properties of monolayer (ML) ZnO. A wide range of in-plane tensile and compressive strains along different directions is applied to analyse the modifications in detail. The electronic band gap reduces under both tensile and compressive strains, and a direct to indirect band gap transition occurs for high values of biaxial tensile strain. The relatively low rate of decrease of the band gap, and the large strain required for the direct to indirect band gap transition compared to other 2D materials, are analysed. A systematic decrease in the frequency of the in-plane and an increase in that of the out-of-plane optical phonon modes with increasing tensile strain are observed. The in-plane acoustic modes show linear dispersion for the unstrained as well as strained cases. However, the out-of-plane acoustic mode (ZA), which shows quadratic dispersion in the unstrained condition, turns linear with strain. The dispersion of the ZA mode is analysed using shell elasticity theory and the possibility of ripple formation with strain is examined. The strain-induced linearity of the ZA mode indicates the absence of rippling under strain. Finally, the stability limit of ML-ZnO is investigated, and it is found that at $18\%$ biaxial tensile strain the structure shows instability, with the emergence of imaginary phonon modes. Furthermore, the potential of ML-ZnO to be a good thermoelectric material is analyzed in an intuitive way based on the calculated electronic and phononic properties. Our results thus not only highlight the significance of strain-engineering in tailoring the electronic and vibrational properties but also provide a thorough understanding of the lattice dynamics and mechanical strength of ML-ZnO.
Saumen Chaudhuri, A. K. Das, G. P. Das, B. N. Dev
2023-08-01T09:50:10Z
http://arxiv.org/abs/2308.00414v1
# Ab-initio Study of Electronic and Lattice Dynamical Properties of monolayer ZnO under Strain

###### Abstract
First-principles density functional theory based calculations have been performed to investigate the strain-induced modifications in the electronic and vibrational properties of monolayer (ML) ZnO. A wide range of in-plane tensile and compressive strains along different directions is applied to analyse the modifications in detail. The electronic band gap reduces under both tensile and compressive strains, and a direct to indirect band gap transition occurs for high values of biaxial tensile strain. The relatively low rate of decrease of the band gap, and the large strain required for the direct to indirect band gap transition compared to other 2D materials, are analysed. A systematic decrease in the frequency of the in-plane and an increase in that of the out-of-plane optical phonon modes with increasing tensile strain are observed. The in-plane acoustic modes show linear dispersion for the unstrained as well as strained cases. However, the out-of-plane acoustic mode (ZA), which shows quadratic dispersion in the unstrained condition, turns linear with strain. The dispersion of the ZA mode is analysed using shell elasticity theory and the possibility of ripple formation with strain is examined. The strain-induced linearity of the ZA mode indicates the absence of rippling under strain. Finally, the stability limit of ML-ZnO is investigated, and it is found that at 18% biaxial tensile strain the structure shows instability, with the emergence of imaginary phonon modes. Furthermore, the potential of ML-ZnO to be a good thermoelectric material is analyzed in an intuitive way based on the calculated electronic and phononic properties. Our results thus not only highlight the significance of strain-engineering in tailoring the electronic and vibrational properties but also provide a thorough understanding of the lattice dynamics and mechanical strength of ML-ZnO.

Keywords: DFT, Mechanical strain, Monolayer ZnO, Rippling

## I Introduction

Since the discovery of graphene [1] and its unique physical and chemical properties, much interest has been drawn toward atomically thin crystals [2; 3; 4; 5; 6]. Within the continuously growing family of layered materials, those with a sizable band gap have attracted tremendous attention due to their potential for electronic and optical applications [7; 8; 9; 10] and numerous other applications in materials science [11; 12; 13; 14]. Recent advances in material engineering have shown that many of the binary compounds which crystallize in the wurtzite structure can be exfoliated into graphene-like planar monolayer sheets from their bulk counterparts [15; 16; 17; 6]. Bulk ZnO, which crystallizes in the wurtzite structure, can be exfoliated into a stable, graphene-like 2D monolayer [18]. The single-layer counterpart of ZnO offers attractive physical and chemical properties, for example a wide electronic band gap, high structural stability, and spin transport upon doping. In experiments, a single-atom-thick ZnO layer was grown on graphene substrates using the atomic layer deposition method [19]. The growth of a monolayer (ML) and few layers of ZnO on Au (111) has shown that the band gap decreases with the increasing number of layers, from 4.48 eV for a monolayer to 3.42 eV for four layers [20]. The existence of a wide band gap attests to the potential of ML-ZnO for future applications in various optoelectronic devices.
The tunability of the electronic, optical and mechanical properties of 2D nanomaterials is highly desirable for their potential application in low-power flexible transistors and numerous other opto- and electro-mechanical systems [21]. Among the possible ways to tune the electronic and optical properties of ML-ZnO, strain engineering is an efficient one due to its comparatively easy implementation and reversible mode of application. Our recent study on single-layer MoS\({}_{2}\) [22] suggests that the electronic and vibrational properties can be altered effectively and efficiently with mechanical strain. So far, mechanical strain has been applied successfully to graphene [23] as well as other 2D materials, and the optical, electronic and vibrational properties have been tuned efficiently [24]. In addition, a proper investigation of the phonon band dispersions as a function of strain can give information about the presence or absence of ripples in the nano-sheet and the critical strain beyond which the structure becomes unstable. Ripples are found to exist not only in free-standing graphene [25] but also in graphene grown on various substrates [26]. Using shell elasticity theory, the formation of ripples can be analysed, particularly by examining the behaviour of the out-of-plane acoustic (ZA) mode. The dispersion relation of the acoustic modes can be fitted with the model equation \(\omega^{2}=\text{A}k^{4}+\text{B}k^{2}\), where A and B are related to the bending modulus of the sheet and the applied strain, respectively. Mechanical strains are almost invariably generated within nano-sheets during their integration onto substrates. Thus, a proper understanding of the behaviour of the ZA mode, and hence of the formation or non-formation of ripples upon strain application, is essential. From the fitting of the dispersion of the ZA mode with the model equation, the coefficients of the linear (B) and the quadratic (A) terms can be extracted. The increasing linearity (increase in B) of the ZA mode, which shows dominant quadratic behaviour in the unstrained condition, has been used as an indication that monolayers of graphene [27] and MoS\({}_{2}\) [28] may not show rippling in the strained condition.

In this work, the choice of the exchange-correlation (XC) functional is first made after critically evaluating the geometrical and electronic properties of ML-ZnO with different XC functionals and Hubbard parameters (U) in DFT+U calculations. The lattice parameter, electronic properties and the relative position of the Zn-\(d\) states obtained from experiments and HSE06 hybrid functional calculations are taken as the reference for determining the appropriate XC functional. We then consider both tensile and compressive strains applied within the plane of graphitic ML-ZnO along different directions, namely armchair (AC), zigzag (ZZ) and homogeneous biaxial (BI), to investigate the strain-induced modifications in the electronic and lattice dynamical properties. Due to the high probability of strain development within the monolayer sheet during its growth or integration on any substrate, a thorough and accurate study of the impact of strain on various physical and chemical properties of ML-ZnO is necessary. Furthermore, a qualitative analysis of the possibility of ripple formation with strain, and of the maximum amount of strain that a single layer of ZnO can withstand, is essential, and is lacking in the available literature.
In addition, the potential of ML-ZnO to be a good thermoelectric material is identified based on chemical intuition and the calculated electronic and phonon band structures. The efficient tunability of the electronic and phononic properties with strain application also suggests the use of ML-ZnO as a flexible thermoelectric material, where the thermoelectric efficiency and transport properties can be tuned with strain. Our results not only show the tuning of electronic and phononic properties under a wide range of strain profiles but also provide a detailed understanding of the lattice dynamics and stability limit of ML-ZnO.

## II Computational details

First-principles calculations using ab-initio density functional theory (DFT) have been performed as implemented within the Vienna Ab initio Simulation Package (VASP) [29; 30], together with projector augmented wave (PAW) potentials to account for the electron-ion interactions [31]. For the electronic exchange and correlation, the LDA+U method is used with a Hubbard U of 8 eV for the Zn-\(d\) states [32]. The choice of the exchange-correlation (XC) functional is made based on a series of calculations on the geometrical and electronic properties of ML-ZnO using DFT+U calculations with different U values and hybrid functional calculations with the HSE06 functional. An appropriate U value is necessary for correcting the self-interaction error, and has been used in earlier studies [33; 34]. In order to minimize the interaction between adjacent layers along the out-of-plane direction, a supercell containing a sufficient vacuum of \(20\,\text{\AA}\) is used in all calculations. The internal positions of the atoms in the unstrained as well as in the strained cases are relaxed using a conjugate gradient scheme until the forces on the atoms are less than \(0.01\,\text{eV/\AA}\). In the hexagonal geometry, a well-converged Monkhorst-Pack [35] k-point set of \(21\times 21\times 1\) together with a plane-wave cut-off energy of 450 eV are used for the geometry relaxation, and a much denser converged k-point set is used for the post-relaxation calculations. For the orthorhombic geometry, the k-mesh is correspondingly scaled. The phonon dispersion curves are computed using the finite difference method as implemented within the Phonopy package [36] with a fixed displacement amplitude of \(0.015\,\text{\AA}\). In order to obtain the interatomic force constants (IFC) with sufficient accuracy, a large supercell of dimension \(4\times 4\times 1\) with respect to the primitive unit cell together with a strict energy convergence criterion of \(10^{-8}\) eV is used.

## III Results and discussion

### Unstrained monolayer ZnO

We first investigate the dependence of the geometrical and electronic properties of monolayer (ML) ZnO on the choice of the XC functional. In order to explore this dependency, we have performed first-principles DFT+U calculations with different U values and hybrid functional calculations on ML-ZnO. Finally, the appropriate XC functional is chosen as the one for which the lattice parameter, electronic band gap and the relative position of the Zn-\(d\) states agree well with experimental observations. Explicitly calculating the Hubbard-U parameters using the ACBN0 method, an earlier report suggested that a U value as high as \(\sim\)14 eV for the Zn-\(d\) states is required to correct the over-delocalization issue of normal LDA or GGA based DFT calculations [37].
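For concreteness, here is a minimal sketch of the setup just described (LDA+U with U = 8 eV on the Zn-\(d\) states, 450 eV cutoff, \(21\times 21\times 1\) k-mesh, \(\sim\)20 Å vacuum), assuming ASE's VASP calculator interface; the structure, parameter names and values are our illustration, not the authors' actual input files:

```python
# Hypothetical LDA+U setup for ML-ZnO via ASE's VASP interface.
# Parameter values mirror the text; a working VASP/POTCAR setup is assumed.
from ase import Atoms
from ase.calculators.vasp import Vasp

a = 3.07  # relaxed hexagonal lattice parameter in Angstrom (Table 1)
zno = Atoms(
    "ZnO",
    scaled_positions=[(0.0, 0.0, 0.5), (1 / 3, 2 / 3, 0.5)],
    cell=[(a, 0, 0), (-a / 2, a * 3**0.5 / 2, 0), (0, 0, 20.0)],  # ~20 A vacuum
    pbc=True,
)

zno.calc = Vasp(
    xc="lda",
    encut=450,              # plane-wave cut-off (eV)
    kpts=(21, 21, 1),       # Monkhorst-Pack mesh for the hexagonal cell
    ldau=True,
    ldautype=2,             # Dudarev-type DFT+U
    ldau_luj={"Zn": {"L": 2, "U": 8.0, "J": 0.0},   # U on Zn-d
              "O": {"L": -1, "U": 0.0, "J": 0.0}},  # no correction on O
    ediff=1e-8,             # strict convergence, as used for the IFCs
)
```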
In our study, the choice of the XC functional and the U value is made after critically evaluating the electronic and geometrical properties of ML-ZnO using different XC functionals and U values. A summary of the calculated lattice parameter and band gap with different XC functionals and U values, together with the experimentally observed ones, is presented in Table 1. Note that the experimentally found lattice parameter for graphene-like planar ZnO is 3.09 Å, with a band gap of 3.5 eV [38]. Depending on the substrate used, for example Au or graphene, different values for the lattice parameter and the band gap of ML-ZnO can be found in the literature [19; 39; 40]. Similarly, a wide range of values for the lattice parameter and band gap of ML-ZnO prevail in the literature from theoretical calculations using different XC functionals and U values [34; 41; 42]. In the absence of experiments conducted on a free-standing ZnO monolayer, the values calculated with hybrid functionals can be taken as a reliable reference. The band gap of ML-ZnO is found to be 3.34 eV using the HSE06 functional, which is in good agreement with earlier reports using hybrid functionals (3.25 eV) [34; 43] and the GW approximation (3.57 eV) [44], as well as with experimental observations [38; 40]. It can be seen from Fig. 1 that the band structures calculated using the LDA+U and HSE06 functionals are nearly identical from the qualitative aspect; however, the band gap increases to 3.34 eV with the HSE06 functional. Using the computationally much less demanding LDA+U (U = 8 eV) calculations, we obtained a lattice parameter of 3.07 Å and a band gap of 2.79 eV for the ZnO monolayer. These results with LDA+U calculations are not only close to the experimentally obtained values, but also agree well with the hybrid functional calculations. We therefore affirm that the choice made here of the XC functional (LDA) and the value of U (8 eV) is legitimate.

\begin{table}
\begin{tabular}{c c c c c c c c}
 & exp & LDA+U & LDA+U & LDA+U & GGA+U & GGA+U & GGA+U \\
 & ref. [38] & U = 0 eV & U = 4 eV & U = 8 eV & U = 0 eV & U = 4 eV & U = 8 eV \\
\hline
a (Å) & 3.09 & 3.20 & 3.15 & 3.07 & 3.28 & 3.23 & 3.15 \\
\hline
d (Å) & 1.784 & 1.848 & 1.821 & 1.773 & 1.893 & 1.869 & 1.821 \\
\hline
E\({}_{\text{g}}\) (eV) & 3.5 & 1.71 & 2.26 & 2.79 & 1.68 & 2.16 & 2.74 \\
\end{tabular}
\end{table}
Table 1: The calculated geometrical properties, namely the lattice parameter (a), the Zn-O bond length (d) and the electronic band gap (E\({}_{\text{g}}\)) of ML-ZnO using different XC functionals and U values. The calculated values are compared with the experimental values of ref. [38].

Figure 1: Electronic band structure of ML-ZnO in the unstrained condition, calculated using (a) LDA+U (U = 8 eV) and (b) the HSE06 functional.

Furthermore, we have used the experimentally determined position of the Zn-\(d\) states as a further reference in choosing the XC functional and the U value. In the literature, using photoemission experiments on wurtzite ZnO, the position of the Zn-\(d\) states is found to be peaked at 7.4 eV [45]. Considering the Zn-\(d\) states as semi-core states, we expect this position not to change significantly in the graphene-like planar structure of ZnO. It has been reported that hybrid functional calculations, although they improve the band gap, lead to a large underestimation of around 2 eV in the energy corresponding to the peak of the Zn-\(d\) states [43].
However, using the LDA+U or GGA+U XC functionals, which are computationally much cheaper, a better agreement on the relative position of the Zn-\(d\) states is achieved. From the DOS plots shown in Fig. 2, it can be seen that with an appropriate value of U, which is 8 eV in this case, the DOS peaks corresponding to the Zn-\(d\) states appear around 7.5 eV, agreeing well with experiments. With lower U values, a large disagreement with the experimentally found energy value is observed. This suggests that LDA+U with U = 8 eV is appropriate for ML-ZnO to correctly describe the geometrical and electronic properties.

Before going into the details of the strain-induced effects on the electronic and phononic properties of ML-ZnO, we first present a brief overview of the calculated geometrical, electronic and lattice dynamical properties of ML-ZnO in the unstrained condition. The relaxed geometry of the ML-ZnO sheet possesses a graphene-like structure, with the hexagonal primitive unit cell containing one Zn and one O atom, as shown in Fig. 3. In order to apply in-plane tensile and compressive strains independently along the two non-equivalent directions, namely the armchair (AC) and the zigzag (ZZ) directions, an orthorhombic unit cell has been used instead of the hexagonal primitive unit cell. The orthorhombic unit cell contains two Zn and two O atoms, as can be seen in Fig. 3. In the chosen orthorhombic unit cell, the AC and ZZ crystallographic directions are desirably aligned with the orthogonal lattice vectors. The lattice parameters of both unit cells in the relaxed geometry are given in Table 2. The lattice parameters calculated herein are found to be in good agreement with earlier reports [34; 46]. ML-ZnO in the unstrained condition is found to be semiconducting, with a band gap of 2.79 eV and with both the valence band maximum (VBM) and the conduction band minimum (CBM) at the same high-symmetry point (\(\Gamma\)) of the Brillouin Zone (BZ), thus resulting in a direct band gap semiconductor (see Fig. 3). The band gap value agrees well with earlier reports using the same exchange-correlation (XC) functional [34].

\begin{table}
\begin{tabular}{c c c c}
\hline
Crystal structure & a (Å) & b (Å) & \(\gamma\) (degree) \\
\hline
Hexagonal & 3.07 & 3.07 & 120 \\
\hline
Orthorhombic & 3.07 & 5.31 & 90 \\
\hline
\end{tabular}
\end{table}
Table 2: Calculated lattice parameters of the hexagonal unit cell and the orthorhombic conventional cell of monolayer ZnO.

Figure 2: The electronic density of states (DOS) of ML-ZnO using (a) LDA+U and (b) GGA+U exchange-correlation functionals with U = 0, 4, 8 eV. Black and red lines represent the total DOS and the contribution from the Zn-\(d\) states, respectively. The zero energy (E = 0 eV) corresponds to the Fermi level.

From the density of states (DOS), shown in Fig. 3(d), it is clear that the valence band edge is mostly composed of O-\(p\) orbitals. The phonon band structure of single-layer ZnO in the unstrained condition is also shown in Fig. 3. The absence of imaginary phonon branches throughout the BZ confirms the stability of monolayer ZnO in the graphitic form. The phonon band structure consists of three low-energy acoustic branches, namely the longitudinal acoustic (LA), transverse acoustic (TA) and out-of-plane acoustic (ZA) branches, and three optical branches (ZO, TO and LO). In the unstrained condition, the LA and TA branches exhibit linear behaviour near the zone centre (\(\Gamma\)), whereas the ZA branch shows quadratic behaviour.
The band dispersion and the frequencies of the optical branches are in good agreement with previous reports [41].

### Strained monolayer ZnO

To explore the importance of strain-engineering in tuning the electronic properties of ML-ZnO, a series of in-plane tensile and compressive strains has been applied: uniaxial strains along the AC and ZZ directions, and homogeneous biaxial strain along both. The applied strain is quantified as \(\epsilon=\frac{\mathrm{a}-\mathrm{a}_{0}}{\mathrm{a}_{0}}\times 100\%\), where \(\mathrm{a}_{0}\) and a are the unstrained and strained lattice parameters, respectively. The modifications in the electronic band structure of ML-ZnO under the application of tensile and compressive strains of different magnitudes and along different directions are presented in Fig. 4, and the corresponding DOS of the strained structures are shown in Fig. 5. The strain-induced changes in the band gap and the band edge transition energies are summarized in Fig. 6. It can be seen that with the application of uniaxial strains along the AC and ZZ directions, the degeneracy at the top of the valence band (at \(\Gamma\)) is lifted, whereas the degeneracy prevails under homogeneous biaxial strain. This can be understood from the breaking of the hexagonal symmetry under uniaxial strains, which does not happen under biaxial strain. Both tensile and compressive strains reduce the band gap of ML-ZnO, and the rate of reduction is more or less the same for strains along different directions. Although the band gap decreases for both tensile and compressive strains, the mechanism behind the reduction is very different, as has been pointed out in an earlier study [42].

Figure 3: (a) Crystal structure of a monolayer of ZnO (Zn and O atoms are shown in grey and red, respectively). The arrows show the arm-chair and zig-zag directions. The hexagonal unit cell and the orthorhombic conventional cell are shown in red and black, respectively. (b) Electronic and (c) phonon band structure and (d) density of states (DOS) of unstrained ML-ZnO plotted along the high symmetry path of the irreducible Brillouin zone (BZ).

Within the applied strain range (\(\pm\)10%), the band gap of ML-ZnO reduces from 2.79 eV to 2.26 eV for tensile strains and to 2.44 eV for compressive strains. Also, for biaxial strain, a direct (\(\Gamma\)-\(\Gamma\)) to indirect (K-\(\Gamma\)) band gap transition occurs at a strain of 8%, as the VBM shifts from the \(\Gamma\)-point to the K-point. Under tensile strains, the valence band edge at K moves upward in energy with no significant change at the conduction band edge; thereby, the VBM shifts from \(\Gamma\) to K (see Fig. 4 and Fig. 6). Due to this upward shift of the valence band edge at K, the density of states (DOS) at the valence band edge is significantly enhanced, as can be seen from Fig. 5. Unlike in ML-MoS\({}_{2}\) or other 2D materials in general, the rate of reduction of the band gap of ML-ZnO is much lower, and the direct to indirect band gap transition also occurs at a relatively high value of strain [22; 34; 47]. For ML-MoS\({}_{2}\), a direct to indirect band gap transition occurs at low values of strain, and a semiconductor (band gap \(\sim\) 1.8 eV) to metal transition occurs at a strain of 10% [22]. Thus, it is clear that mechanical strain does not affect the electronic properties of single-layer ZnO the way it impacts other 2D materials.
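As an aside, the strained cells themselves are straightforward to generate; below is a minimal sketch under stated assumptions (ASE for the structure handling, the orthorhombic cell of Table 2, and our labeling of ZZ along x and AC along y; the atoms are subsequently re-relaxed as described in Sec. II):

```python
# Sketch: build the orthorhombic ML-ZnO cell (Table 2: a = 3.07 A, b = 5.31 A,
# ~20 A vacuum) and apply eps = (a - a0)/a0 along chosen in-plane directions.
import numpy as np
from ase import Atoms

ortho_zno = Atoms(
    "Zn2O2",
    scaled_positions=[(0.0, 0.0, 0.5), (0.5, 0.5, 0.5),       # Zn sublattice
                      (0.0, 1 / 3, 0.5), (0.5, 5 / 6, 0.5)],  # O sublattice
    cell=[3.07, 5.31, 20.0],
    pbc=True,
)

def apply_strain(atoms, eps_zz=0.0, eps_ac=0.0):
    """Return a strained copy; eps is fractional (0.04 means +4%)."""
    strained = atoms.copy()
    cell = np.array(strained.get_cell())
    cell[0] *= 1.0 + eps_zz   # x lattice vector: zigzag direction
    cell[1] *= 1.0 + eps_ac   # y lattice vector: armchair direction
    strained.set_cell(cell, scale_atoms=True)  # internal relaxation follows
    return strained

# Biaxial scan over the +/-10% range used in the text, in 2% steps:
structures = [apply_strain(ortho_zno, eps, eps)
              for eps in np.arange(-0.10, 0.101, 0.02)]
```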
In order to understand this in greater detail, the electronic structure of ML-ZnO and the orbital contributions of the constituent atoms have been investigated. It is found that the valence band edges at K and \(\Gamma\) are composed of the hybridized states of the Zn-\(d\) and O-\(p\) orbitals. Thus, when strain is applied, the edge states at both K and \(\Gamma\) feel a similar shift in energy. Also, the large energy difference between the valence band edge states at \(\Gamma\) and K (\(\sim\) 0.6 eV) makes the transition of the band edge from one k-point to another even more difficult. Therefore, the direct to indirect band gap transition of ML-ZnO occurs at a much higher strain of around 8%, compared to 2% or less in ML-MoS\({}_{2}\) [22; 47].

Next, the impact of in-plane tensile and compressive strains on the phonon dispersion and lattice dynamics of monolayer ZnO is explored. From an experimental point of view, the application of in-plane compressive strain on a monolayer sheet is a daunting task, whereas in-plane tensile strains can be applied conveniently and reliably. There have been efforts in recent times to apply tensile strain to monolayer or few-layer sheets of various 2D materials by growing them on a stretchable polymer substrate and then stretching or bending the substrate [48; 49; 50]. We therefore restricted the study of the lattice dynamics of ML-ZnO to tensile strains, keeping the practical aspect in mind. To investigate the impact of tensile strains on the phonon properties and lattice dynamics of ML-ZnO, and to find out the critical strain at which the structure becomes unstable, in-plane strains along the different crystallographic directions (AC, ZZ and BI) have been applied. The modifications in the phonon band structure at representative strain values are shown in Fig. 7.

Figure 4: Electronic band structure plots of ML-ZnO at different values of tensile and compressive strains applied along the (a) arm-chair and (b) zig-zag directions, and (c) under homogeneous biaxial strain. The zero energy level (E = 0 eV) corresponds to the Fermi energy.

Figure 5: Electronic density of states plots of ML-ZnO at different values of tensile and compressive strains applied along the (a) arm-chair and (b) zig-zag directions, and (c) under homogeneous biaxial strain. The zero energy level (E = 0 eV) corresponds to the Fermi energy.

Figure 6: Variation in the (a) band gap and (b) band edge transition energies of ML-ZnO as a function of tensile and compressive strains along different directions.

In the unstrained condition, the LO and TO modes are degenerate at the zone centre (\(\Gamma\)). With the application of uniaxial tensile strains along the AC and ZZ directions, the degeneracy is lifted, whereas homogeneous biaxial strain preserves the degeneracy. This is due to the fact that the hexagonal crystal symmetry is retained under homogeneous biaxial strain only, and not otherwise. Hence, the LO and TO modes remain degenerate only under biaxial strains. Unlike in ML-MoS\({}_{2}\), where the frequencies of all the optical modes decrease systematically [22] with increasing tensile strain, in ML-ZnO the frequencies of the LO and TO modes decrease, whereas the frequency of the ZO mode increases. Such different behaviour of the optical modes, i.e. softening of the LO and TO modes and hardening of the ZO mode with applied strain, is due to the different values of the Grüneisen parameter associated with the phonon modes.
In order to have a complete understanding of this phenomenon, a thorough study of the mode-resolved phonon parameters is required. The variation in the frequencies of the optical modes with strains of different magnitudes and along different directions is summarized in Table 3. Another aspect to note here is that the large frequency gap between the ZO mode and the LO and TO modes present in the unstrained condition reduces with applied tensile strain. For large biaxial strain (\(\sim 8\%\)), the frequency gap collapses and the ZO mode merges with the LO and TO modes.

The out-of-plane acoustic mode, or flexural mode (ZA), is a characteristic vibrational mode in 2D materials such as graphene, boron nitride (BN) and transition metal di-chalcogenide (TMDC) sheets. The acoustic phonon modes usually have a linear relationship between the phonon frequency (\(\omega\)) and the wave vector (\(k\)). However, from the phonon dispersion curves of ML-ZnO, it can be seen that two of the acoustic modes, namely LA and TA, show linear behaviour, while the ZA mode has a significant quadratic (\(k^{2}\)) contribution. This mode is generally associated with the rippling of 2D materials [27; 28]. Now, strain is almost invariably generated within a monolayer nanosheet while it is grown on a substrate or transferred from one substrate to another. Thus, an accurate study of the behaviour of the ZA mode with strain application is required to understand the formation or non-formation of ripples within the ML-ZnO sheet. The control of ripples is important for growing ultra-flat sheets of monolayer ZnO on various substrates. The behaviour of the ZA mode can be explained within shell elasticity theory. The dispersion relation of the acoustic modes is usually given by the model equation \(\omega^{2}=\mathrm{A}k^{4}+\mathrm{B}k^{2}\), where A and B are related to the bending modulus of the sheet and the amount of strain applied, respectively. To quantify the respective dependence of \(\omega^{2}\) on the \(k^{4}\) and \(k^{2}\) terms, we fitted the dispersion relation of the ZA mode near the zone centre (\(\Gamma\)) with the model equation (a fitting sketch is given below). The values of A and B extracted for the unstrained as well as the strained cases are presented in Table 4. It can be seen that the values of A decrease, while those of B increase, for strains applied along all three directions. This is indicative of the strain-induced hardening of the ZA phonon branch. Due to this continuous increase in the values of B, the ZA mode near the \(\Gamma\)-point becomes nearly linear. Although the values of A and B change in a similar fashion for both uniaxial and biaxial strains, the magnitude of the change is much larger for homogeneous biaxial strain.

Figure 7: Phonon band structure plots of ML-ZnO at different values of tensile strain (4% and 8%) applied along the (a) arm-chair and (b) zig-zag directions, and (c) under homogeneous biaxial strain.

\begin{table}
\begin{tabular}{c c c c c c c c c c}
Strain & ZO-AC & LO-AC & TO-AC & ZO-ZZ & LO-ZZ & TO-ZZ & ZO-BI & LO-BI & TO-BI \\
\hline
0\% & 8.96 & 17.89 & 17.89 & 8.96 & 17.89 & 17.89 & 8.96 & 17.89 & 17.89 \\
\hline
4\% & 9.38 & 17.18 & 15.20 & 9.42 & 16.44 & 15.78 & 9.68 & 14.48 & 14.48 \\
\hline
8\% & 9.69 & 15.90 & 11.71 & 9.62 & 16.75 & 10.74 & 10.03 & 9.77 & 9.77 \\
\hline
\end{tabular}
\end{table}
Table 3: Variation in the phonon frequencies (THz) of the optical modes (ZO, LO and TO) at \(\Gamma\) for the unstrained as well as the 4% and 8% strained structures along the armchair (AC), zigzag (ZZ) and biaxial (BI) directions.
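To make the extraction of A and B concrete, here is a minimal fitting sketch. It is an illustration only: the arrays below are synthetic stand-ins generated from the unstrained coefficients of Table 4, whereas in practice \(k\) and \(\omega\) would be sampled from the computed ZA branch near \(\Gamma\), in units consistent with Table 4.

```python
# Least-squares fit of the ZA branch to omega^2 = A k^4 + B k^2 near Gamma.
# Since the model is linear in A and B, an ordinary linear fit suffices.
import numpy as np

# Synthetic stand-in for the computed ZA dispersion (unstrained values
# A = 8144, B = 7.5 from Table 4, plus 1% noise); replace with real data.
rng = np.random.default_rng(0)
k = np.linspace(0.01, 0.15, 30)
omega = np.sqrt(8144.0 * k**4 + 7.5 * k**2) * (1 + 0.01 * rng.standard_normal(k.size))

# Design matrix [k^4, k^2]; solve for (A, B) in the least-squares sense.
X = np.column_stack([k**4, k**2])
(A, B), *_ = np.linalg.lstsq(X, omega**2, rcond=None)
print(f"A = {A:.0f}, B = {B:.2f}")  # compare with the entries of Table 4
```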
The large increase in the linear behaviour of the ZA mode with strain suppresses the possibility of ripple formation or instability of an ML-ZnO sheet grown on a flat substrate.

We then went on to examine the behaviour of the ML-ZnO sheet under large biaxial strains. The phonon dispersion curves can be used to find a possible instability in the lattice. The general condition for a stable structure, in terms of the phonon dispersion, is that all phonon modes have real frequencies; imaginary frequencies of any phonon mode within the BZ indicate an instability of the structure or a possible phase transition. The phonon band structures of ML-ZnO at large biaxial strain values are presented in Fig. 8. Since the impact of biaxial strain on the lattice dynamics of ML-ZnO is more pronounced than that of the uniaxial strains, it is assumed that the uniaxially strained structures will remain stable at least within the stability limit under biaxial strains. It can be seen that up to a biaxial strain of 15% no imaginary phonon branches emerge within the BZ. Starting from 18% biaxial strain, an acoustic phonon mode, namely the in-plane TA mode, becomes imaginary throughout the BZ. Thus, it can be assumed that the structure of ML-ZnO will become unstable if a biaxial tensile strain of 18% or higher is applied. For graphene also, the emergence of an imaginary phonon mode is observed, due to the softening of an acoustic mode at a biaxial strain of 15% [27].

Finally, the promising signs of ML-ZnO as a good thermoelectric material are identified in an intuitive way, based on the calculated electronic and phononic properties. The bonding character of ZnO in the wurtzite structure is primarily covalent, with a significant contribution from ionic bonding. This results in a large direct band gap in unstrained ML-ZnO. The semiconducting nature of single-layer ZnO makes it suitable for thermoelectric applications in the first place. With strain application, the band gap reduces at a slow rate, and ML-ZnO remains a semiconductor throughout the strain range, unlike ML-MoS\({}_{2}\), where a semiconductor to metal transition is seen [47; 22]. With tensile strain, the bond length between Zn and O increases, whereby the orbital overlap between the Zn-\(d\) and O-\(p\) orbitals decreases. Therefore, the dispersion of the valence band top decreases, resulting in an increased density of states (DOS) at the valence band edge, as can be seen from Fig. 5. Enhanced DOS at the band edges usually results in an increased Seebeck coefficient and thus improves the overall thermoelectric performance.

Figure 8: Phonon band structure plots of ML-ZnO under large biaxial strains of 15% and 18%. In the 18% strained case, the appearance of an imaginary phonon branch (the in-plane TA mode) throughout the Brillouin Zone can be seen.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline
Strain & A-AC & B-AC & A-ZZ & B-ZZ & A-BI & B-BI \\
\hline
0\% & 8144 & 7.5 & 8144 & 7.5 & 8144 & 7.5 \\
\hline
4\% & 7055 & 131 & 7534 & 217 & 6250 & 336 \\
\hline
8\% & 5127 & 287 & 6426 & 490 & 3544 & 688 \\
\hline
\end{tabular}
\end{table}
Table 4: Variation in the coefficients A and B of the model equation \(\omega^{2}=\mathrm{A}k^{4}+\mathrm{B}k^{2}\) as a function of strain along the AC, ZZ and BI directions. The values of A and B are extracted from the fit of the model equation to the dispersion of the ZA mode near the \(\Gamma\)-point.
Now, due to the low-dimensional structure and the resulting interface phonon scattering and phonon confinement effects, ML-ZnO will offer a low lattice thermal conductivity (\(\kappa_{L}\)) compared to its bulk counterpart, or to bulk structures in general. A low \(\kappa_{L}\) is essential for efficient thermoelectric performance. Furthermore, the large frequency gap between the acoustic and optical modes in the unstrained condition ceases to exist with increasing tensile strain, and the modes nearly merge at 8% biaxial tensile strain. Therefore, the acoustic-optical phonon scattering will increase with tensile strain, and \(\kappa_{L}\) will be further lowered. Thus, based on the analysis of the electronic and phononic properties of ML-ZnO in the unstrained as well as strained conditions, it is clear that the signs of ML-ZnO as a potential thermoelectric material are encouraging. In some recent studies [51; 52], the prospects of ML-ZnO as a thermoelectric material have been explored in the pristine condition based on first-principles calculations. Further studies on the thermoelectric and transport properties of ML-ZnO in the unstrained as well as strained conditions will reveal its actual potential.

## IV Conclusion

In summary, first-principles DFT calculations have been performed to investigate the impact of strain on the electronic and lattice dynamical properties of ML-ZnO. It is found that mechanical strain can effectively modify both the electronic and phononic band structures. With the application of both tensile and compressive strains along different directions (AC, ZZ and BI), the band gap of ML-ZnO decreases. For large biaxial tensile strain, a direct to indirect band gap transition occurs due to the shifting of the VBM from \(\Gamma\) to K. The slow rate of decrease of the band gap and the large strain required for the direct to indirect band gap transition, compared to other 2D materials, are analysed through the investigation of the orbital contributions to the electronic structure. With strain, two of the three optical phonon modes soften, while the ZO mode hardens. The ZA mode, which shows quadratic behaviour with the phonon wave vector in the unstrained condition, turns linear with increasing strain. The behaviour of the ZA phonon mode with strain has been analysed using shell elasticity theory to understand the role of strain in ripple formation in a monolayer ZnO sheet. The increasing linearity of the ZA mode with strain indicates that single-layer ZnO can be grown on a flat substrate without rippling or resulting instability. Furthermore, the behaviour of the ML-ZnO sheet under large biaxial strain is tested to find the critical strain value at which the structure becomes unstable. At a homogeneous biaxial strain of 18%, the monolayer sheet becomes unstable with the emergence of an imaginary phonon mode. Finally, clear indications that ML-ZnO can be a good thermoelectric material, and that its thermoelectric performance improves with strain application, are found based on the analysis of the calculated electronic and phononic properties. Our results may pave a very effective way for tailoring the electronic and vibrational properties of ML-ZnO, thus enabling device tunability.

## Conflict of interest

The authors declare that they have no conflict of interest.

###### Acknowledgements.
The first-principles calculations were performed using the supercomputing facility of IIT Kharagpur, established under the National Supercomputing Mission (NSM), Government of India, and supported by the Centre for Development of Advanced Computing (CDAC), Pune. SC acknowledges MHRD, India, for financial support.
2301.03416
ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization
Task-agnostic knowledge distillation attempts to address the problem of deploying large pretrained language models in resource-constrained scenarios by compressing a large pretrained model, called the teacher, into a smaller one, called the student, such that the student can be directly finetuned on downstream tasks and retains comparable performance. However, we empirically find that there is a generalization gap between the student and the teacher in existing methods. In this work, we show that we can leverage multi-task learning in task-agnostic distillation to advance the generalization of the resulting student. In particular, we propose Multi-task Infused Task-agnostic Knowledge Distillation (MITKD). We first enhance the teacher by multi-task training it on multiple downstream tasks and then perform distillation to produce the student. Experimental results demonstrate that our method yields a student with much better generalization, significantly outperforms existing baselines, and establishes a new state-of-the-art result on in-domain, out-domain, and low-resource datasets in the setting of task-agnostic distillation. Moreover, our method even exceeds an 8x larger BERT$_{\text{Base}}$ on SQuAD and four GLUE tasks. In addition, combined with ERNIE 3.0, our method achieves state-of-the-art results on 10 Chinese datasets.
Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu
2023-01-09T15:12:50Z
http://arxiv.org/abs/2301.03416v1
# ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

###### Abstract
Task-agnostic knowledge distillation attempts to address the problem of deploying large pretrained language models in resource-constrained scenarios by compressing a large pretrained model, called the teacher, into a smaller one, called the student, such that the student can be directly finetuned on downstream tasks and retains comparable performance. However, we empirically find that there is a generalization gap between the student and the teacher in existing methods. In this work, we show that we can leverage multi-task learning in task-agnostic distillation to advance the generalization of the resulting student. In particular, we propose Multi-task Infused Task-agnostic Knowledge Distillation (MITKD). We first enhance the teacher by multi-task training it on multiple downstream tasks and then perform distillation to produce the student. Experimental results demonstrate that our method yields a student with much better generalization, significantly outperforms existing baselines, and establishes a new state-of-the-art result on in-domain, out-domain, and low-resource datasets in the setting of task-agnostic distillation. Moreover, our method even exceeds an 8x larger BERT\({}_{\text{Base}}\) on SQuAD and four GLUE tasks. In addition, combined with ERNIE 3.0, our method achieves state-of-the-art results on 10 Chinese datasets.

## 1 Introduction

Pretrained language models (PLMs) Devlin et al. (2019); Liu et al. (2019); Clark et al. (2020); He et al. (2021) have achieved great success in a wide range of natural language processing tasks; however, their enormous parameter counts often bring challenges to serving them in real-life applications where computational resources are limited. Knowledge distillation (KD) Hinton et al. (2015) has been widely utilized to tackle this problem. KD aims to compress a large PLM, called the teacher, into a smaller one, called the student, by transferring knowledge from the teacher to the student. In the context of PLM compression, KD is usually applied in two different settings: task-specific Sun et al. (2019); Tang et al. (2019); Jiao et al. (2020); Su et al. (2021) and task-agnostic Sanh et al. (2019); Wang et al. (2020). The former transfers task-specific knowledge from teacher to student for a given task and often yields a student with better performance than the latter, but poses one disadvantage: task-specific KD needs to be performed for every downstream task. Task-agnostic KD, on the other hand, eliminates the need for distillation for every single task by transferring general knowledge to the student in a once-for-all fashion: the student needs to be distilled only once and can then be directly finetuned on downstream tasks, similar to the pretrain-finetune paradigm.

A natural research question is whether we can combine the advantages of favorable downstream performance and easy deployment from these two types of distillation. A previous attempt Mukherjee et al. (2021) injects downstream task knowledge into task-agnostic distillation by performing task-agnostic distillation on a teacher finetuned on a downstream task. Although this approach can improve the performance of the student, knowledge from a single task may not be sufficient to yield a generalizable student. In this work, we show that the downstream generalization of the student can be further improved by fusing multi-task learning (MTL) into task-agnostic distillation. Existing works in MTL Liu et al.
(2019); Aghajanyan et al. (2021) point out that a model learns a representation that generalizes better to new tasks and domains through finetuning on multiple tasks.

Figure 1: Performance on GLUE and SQuAD.

We propose a distillation method, Multi-task Infused Task-agnostic Knowledge Distillation (MITKD), to show that the generalizable representation brought by MTL can also benefit task-agnostic distillation. In particular, we first finetune the teacher on multiple downstream tasks under the setting of MTL to learn a generalizable representation and then perform task-agnostic distillation on it. Specifically, our contributions include: 1. We present a novel and simple method that combines the advantages of task-agnostic and task-specific distillation by fusing MTL into task-agnostic distillation to improve the generalization of the student. 2. We conduct extensive experiments to verify the effectiveness of MITKD on in-domain datasets, out-domain datasets, and low-resource datasets. Empirical results show that MITKD consistently outperforms state-of-the-art baselines in all three foregoing scenarios, even outperforming an 8x larger BERTBase on SQuAD and four GLUE tasks, as shown in Figure 1 and Table 2. Moreover, by applying MITKD to ERNIE 3.0, we obtain ERNIE 3.0 Tiny, which achieves state-of-the-art results on 10 Chinese datasets. 3. Our empirical results bring out the message that, apart from pursuing a stronger teacher, we should also pay attention to the knowledge embedded in the teacher in order to improve the student's downstream performance. ## 2 Related Work **Knowledge Distillation.** Knowledge distillation Hinton et al. (2015) guides the student's training by having the student mimic the output representation of the teacher. In the context of PLMs, task-specific methods aim to compress a task-specific teacher over task-specific data Sun et al. (2019); Tang et al. (2019). Liu et al. (2019) strengthen the teacher through multi-task learning. Clark et al. (2019) compress multiple task-specific models into one student model. On the other hand, task-agnostic methods aim to compress a pretrained teacher in such a way that the resulting student can be easily adapted to downstream tasks via finetuning Sanh et al. (2019). In addition, task-agnostic methods typically perform distillation on pretraining data (i.e. the unsupervised corpora on which the teacher is pretrained). Wang et al. (2020, 2021) propose to mimic the self-attention of student and teacher. Khanuja et al. (2021) compress multiple teachers trained in different languages into a single student. XtremeDistilTrans Mukherjee et al. (2021) distills a single-task finetuned teacher on augmented transfer data generated from unsupervised data. **Multi-Task Learning.** Multi-task learning learns multiple tasks jointly so that the knowledge learned from one task can benefit the others Caruana (1997); Luong et al. (2016). Liu et al. (2019) show that adding MTL at the finetuning stage can boost the performance of PLMs. Aghajanyan et al. (2021) propose an MTL stage, named the pre-finetuning stage, before the traditional finetuning stage. Wei et al. (2022) combine MTL and prompting to improve zero-shot performance. ## 3 Method In this section, we first categorize the existing works in task-agnostic distillation into two different types; then we propose MITKD and show the difference between it and these existing works. 
**Vanilla Task-agnostic Distillation.** Vanilla task-agnostic distillation transfers general knowledge from a pretrained teacher to the student without leveraging any downstream knowledge. **Single-task Enhanced Task-agnostic Distillation.** Single-task enhanced task-agnostic distillation exploits downstream knowledge by finetuning the teacher on a single task and performing distillation on it. For example, XtremeDistilTrans Mukherjee et al. (2021) develops a cumbersome distillation method that first intensively searches for a source task inducing better transferability, then finetunes the pretrained teacher on the source task. After that, it performs task-agnostic distillation on the finetuned teacher with augmented transfer data generated from unsupervised data, using a previously task-agnostic distilled student as a warm start. **Multi-task Infused Task-agnostic Distillation (MITKD).** While previous work leverages task knowledge through carefully hunting for a task inducing better transferability, we argue that we can simply utilize the power of MTL to infuse task knowledge into task-agnostic distillation and improve the student's generalizability. In particular, we propose a two-stage distillation method: we first finetune a pretrained teacher on multiple tasks and then perform vanilla task-agnostic distillation on it. Unlike previous work, which requires searching for the best transferable task and distilling on augmented transfer data, our method simply applies MTL to the teacher and only needs to distill on pretraining data, as vanilla task-agnostic distillation does. Moreover, our empirical results illustrate that MITKD does not only inject task knowledge into the distillation but, more importantly, brings in generalization that improves the downstream performance of the student dramatically. ## 4 Experiment **Experiment Setup.** First, we finetune the teacher with MTL. In particular, we adopt the base version of Muppet's Aghajanyan et al. (2021) released checkpoint 1 as our finetuned teacher. It is obtained by finetuning a pretrained RoBERTa\({}_{\text{Base}}\) model Liu et al. (2019) on around 50 tasks jointly. The datasets used for the MTL finetuning are listed in Appendix A.4. Footnote 1: [https://huggingface.co/facebook/muppet-roberta-base](https://huggingface.co/facebook/muppet-roberta-base) Then, we perform task-agnostic distillation on the multi-task finetuned teacher using a classic task-agnostic distillation method, MiniLMv2 Wang et al. (2021), which mimics a more fine-grained version of self-attention between the teacher and the student. Following MiniLMv2, we utilize the pretraining datasets used in RoBERTa as the distillation data. All the students in the experiment are 6-layer transformers with hidden size 384, intermediate size 1536, and 12 attention heads. All the results in the experiment section are reported on the development set and are an average of 4 runs. We also list all the hyperparameters used in the experiment in Appendix A.2. **Baselines.** We select the classic task-agnostic distillation method MiniLMv2 Wang et al. (2021) as our baseline for _vanilla task-agnostic distillation_. It utilizes the same task-agnostic distillation algorithm as ours but is distilled from a larger teacher, RoBERTa\({}_{\text{Large}}\). In order to ablate the effectiveness of our method, we also reproduce a MiniLMv2 with RoBERTa\({}_{\text{Base}}\), which is essentially the same teacher as ours but without MTL training. 
We denote the one distilled from RoBERTa\({}_{\text{Large}}\) as MiniLMv2-L and the other as MiniLMv2-B. As for _single-task enhanced task-agnostic distillation_, we select the state-of-the-art method XtremeDistilTrans Mukherjee et al. (2021) as the baseline. In particular, it utilizes an ELECTRA\({}_{\text{Base}}\) Clark et al. (2020) finetuned on MNLI Williams et al. (2018) as the teacher and performs distillation from it on a student that was previously task-agnostic distilled. **Tasks and Datasets.** To demonstrate how MTL can benefit task-agnostic distillation, we evaluate the student both on tasks utilized in the MTL stage of the teacher and on tasks which are not used in MTL. For simplicity, we call the former in-domain tasks and the latter out-domain tasks. In order to verify the generalization improvement brought by MITKD on out-domain datasets, we experiment with nine datasets from seven domains. Refer to Appendix A.1 for the dataset details. As for the evaluation on in-domain tasks, we select the GLUE benchmark Wang et al. (2018) and SQuAD 2.0 Rajpurkar et al. (2018) among the 50 tasks trained in the MTL stage of the teacher. **Out-domain Generalization.** As MTL brings the teacher better generalization to new tasks and domains Aghajanyan et al. (2021), we empirically show that it also boosts the generalization of the student through distillation by evaluating the downstream performance of the student on out-domain tasks. In particular, we compare our method with the mentioned baselines in Table 1. Experimental results illustrate that our method significantly outperforms the other baselines, demonstrating the generalization brought by our method. Comparing XtremeDistilTrans and MiniLMv2-B, we can see that introducing task knowledge to distillation can improve the performance of the student. Moreover, MITKD further brings in the multi-task knowledge and generalization led by MTL, pushing the improvement by 2.0 average points. \begin{table} \begin{tabular}{l|l|c c c c c c c c c|c} \hline \hline **Method** & Teacher & CHEMPROT & ANLI & ACL-ARC & SCIERC & PARTISAN & REIPURL & SCOUPS & LEDGAR & FinBlank & Avg. \\ \hline Domain & - & Biology & General & Computer Science & News & Review & & Law & Finance & \\ \hline RoBERTa\({}_{\text{Large}}\) & - & **86.9** & **57.8** & **82.5** & **90.6** & **87.5** & 87.7 & **77.6** & **88.6** & **90.1** & **80.3** \\ RoBERTa\({}_{\text{Base}}\) & - & 83.9 & 53.4 & 81.2 & 89.9 & 84.2 & 87.7 & 74.6 & 87.4 & 88.8 & 81.2 \\ RoBERTa\({}_{\text{Base}}\) w/ MTL & - & 84.0 & 54.9 & 81.6 & 90.4 & 85.8 & **87.8** & 74.6 & 87.9 & 89.0 & 81.8 \\ \hline MiniLMv2-L & RoBERTa\({}_{\text{Large}}\) & 74.5 & 48.7 & 70.0 & 80.3 & 77.0 & 87.3 & 66.6 & 86.6 & 85.1 & 75.1 \\ MiniLMv2-B & RoBERTa\({}_{\text{Base}}\) & 72.2 & 48.0 & 68.8 & 76.7 & 74.7 & 87.1 & 66.2 & 86.5 & 83.7 & 73.8 \\ XtremeDistilTrans & ELECTRA\({}_{\text{Base}}\)* & 73.3 & **80.5** & 72.8 & 81.4 & 77.7 & 87.1 & 66.4 & 86.5 & 85.7 & 75.8 \\ \hline **MITKD** & RoBERTa\({}_{\text{Base}}\) w/ MTL & **79.0** & 50.0 & **74.4** & **82.4** & **82.0** & **87.5** & **71.2** & **86.8** & **85.9** & **72.7** \\ \hline \hline \end{tabular} \end{table} Table 1: Out-domain results on the development sets. All results are produced by us using the publicly available checkpoints. For ANLI, we merge the train sets and dev sets of all three rounds into one train set and one dev set, and use them for training and evaluation. * means that ELECTRA\({}_{\text{Base}}\) is finetuned on MNLI. 
Together, we observe that adding MTL to task-agnostic distillation results in an impressive improvement of 3.9 points on out-domain tasks compared to the vanilla task-agnostic distillation baseline MiniLMv2-B. In addition, it even exceeds MiniLMv2-L by 2.6 points. **In-domain Performance.** Besides the generalization improvement on out-domain datasets, MITKD also improves the performance on in-domain tasks. Table 2 shows the dev set results on GLUE and SQuAD 2.0. MITKD achieves state-of-the-art results on most tasks in GLUE and on SQuAD 2.0, exceeding the best baseline by a margin of 1.2 points. It even outperforms an 8x larger BERTBase on four GLUE tasks and SQuAD 2.0 while being 5.3x faster than BERTBase.3 Footnote 3: Refer to Appendix A.3 for the details of speedup and model size calculation. This again suggests that, apart from pursuing a stronger teacher, we should also pay attention to the knowledge embedded in the teacher. **Performance on Chinese Datasets.** To verify the effectiveness of MITKD on Chinese datasets, we apply MITKD to ERNIE 3.0 Sun et al. (2021) to produce ERNIE 3.0 Tiny. In particular, we follow the ERNIE 3.0 training process to reproduce a Large version of ERNIE 3.0 and use it as the teacher. It is a 24-layer transformer with hidden size 1024, intermediate size 4096, and 16 attention heads. 28 datasets are selected for the MTL finetuning of the teacher and are listed in Appendix A.5. All the students in this experiment are 4-layer transformers with hidden size 312 and 12 attention heads. The finetuning hyperparameters for the students are listed in Appendix A.7. We compare ERNIE 3.0 Tiny with the Chinese version of TinyBERT4 Jiao et al. (2020) on in-domain, out-domain, and low-resource datasets and list the results in Table 5. Refer to Appendix A.6 for the dataset details. ERNIE 3.0 Tiny outperforms the baseline by a margin of 4.3 average points, establishing a new state-of-the-art result. Footnote 4: [https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) ## 5 Conclusion In this work, we propose a simple method to improve the generalization of task-agnostic distillation by leveraging MTL: the teacher is first augmented by MTL and then distilled. Empirical results show that our method outperforms several baselines on in-domain, out-domain, and low-resource tasks. ## 6 Limitations Compared with vanilla task-agnostic distillation, MITKD has an additional multi-task finetuning stage, which may require additional computational resources.
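To make the two-stage recipe above concrete, here is a minimal PyTorch sketch of the stage-2 objective. It is an illustrative reading of the method, not the authors' released code: the tensor shapes, helper names, and the row-wise KL relation loss are our assumptions standing in for the MiniLMv2 objective, and stage 1 (multi-task finetuning of the teacher) is assumed to have already produced `teacher_out`.

```python
import torch
import torch.nn.functional as F

def attention_relations(hidden: torch.Tensor, num_heads: int) -> torch.Tensor:
    # Split final-layer hidden states into heads and form scaled dot-product
    # "self-attention relations", a MiniLMv2-style quantity whose shape
    # (batch, heads, seq, seq) is independent of the model's hidden size.
    b, t, d = hidden.shape
    head_dim = d // num_heads
    h = hidden.view(b, t, num_heads, head_dim).transpose(1, 2)
    return (h @ h.transpose(-1, -2)) / head_dim ** 0.5

def relation_distill_loss(student_h, teacher_h, num_heads=12):
    # Row-wise KL divergence between the teacher's and student's
    # relation distributions; minimized on unlabeled pretraining text.
    s = F.log_softmax(attention_relations(student_h, num_heads), dim=-1)
    t = F.softmax(attention_relations(teacher_h, num_heads), dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

# Dummy final-layer outputs for one distillation step.
student_out = torch.randn(2, 16, 384)  # 6-layer / 384-dim student
teacher_out = torch.randn(2, 16, 768)  # multi-task finetuned teacher (stage 1)
print(relation_distill_loss(student_out, teacher_out))
```

Because the relation matrices are seq-by-seq, the loss is well defined even though teacher and student have different hidden sizes, which is one reason this family of objectives is convenient for task-agnostic distillation.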
2310.15535
Integrable (3+1)-dimensional generalization for dispersionless Davey--Stewartson system
This paper introduces a (3+1)-dimensional dispersionless integrable system, utilizing a Lax pair involving contact vector fields, in alignment with methodologies presented by A. Sergyeyev in 2018. Significantly, it is shown that the proposed system serves as an integrable (3+1)-dimensional generalization of the well-studied (2+1)-dimensional dispersionless Davey-Stewartson system. This way, an interesting new example of integrability in higher dimensions is presented, with potential applications in modern mathematical physics. The work lays the foundation for future research into symmetries, conservation laws, and Hamiltonian structures, offering avenues for further exploration.
Antonio J. Pan-Collantes
2023-10-24T05:43:07Z
http://arxiv.org/abs/2310.15535v1
# Integrable (3+1)-dimensional generalization for dispersionless Davey-Stewartson system ###### Abstract This paper introduces a (3+1)-dimensional dispersionless integrable system, utilizing a Lax pair involving contact vector fields, in alignment with methodologies presented by A. Sergyeyev in 2018. Significantly, it is shown that the proposed system serves as an integrable (3+1)-dimensional generalization of the well-studied (2+1)-dimensional dispersionless Davey-Stewartson system. This way, an interesting new example of integrability in higher dimensions is presented, with potential applications in modern mathematical physics. The work lays the foundation for future research into symmetries, conservation laws, and Hamiltonian structures, offering avenues for further exploration. ## 1 Introduction Within the framework of partial differential equations (PDEs), integrable systems are of great interest in modern mathematics and physics [1, 2, 3, 4]. In particular, taking into account that the vast majority of physical theories commonly accept that there are three spatial dimensions and one temporal dimension, the search for integrable systems in (3+1) dimensions becomes particularly important and challenging, cf. e.g. [5, 3, 6, 7]. The paper [7] gives a systematic approach for constructing (3+1)-dimensional integrable dispersionless systems using contact Lax pairs. These are nonisospectral Lax pairs associated with contact vector fields and so, at least in principle, they are amenable to the inverse scattering transform and other techniques like the dressing method [4, 8, 9]. Remarkably, the systems obtained using the method of [7] admit a straightforward reduction to (2+1)-dimensional systems, which suggests that the said method allows for finding integrable (3+1)-dimensional generalizations of certain (2+1)-dimensional systems. In this paper we implement this strategy to find a (3+1)-dimensional integrable generalization of the dispersionless Davey-Stewartson (dDS) system, which is currently a subject of intense research (see e.g. [10, 11, 12, 13] and references therein), in view of its providing a dispersionless limit for the dispersive Davey-Stewartson system. The latter plays a significant role in mathematical physics, as it represents a (2+1)-dimensional generalization of the nonlinear Schrodinger equation and describes many important wave processes [14, 15, 16, 17]. In what follows, after a summary of relevant facts in Section 2, we will employ the method from [7] to construct a new (3+1)-dimensional dispersionless integrable system (10) in Section 3, and we will show that this new system is a (3+1)-dimensional generalization of the dDS system (see Theorem 1). We note that it could be of interest to study the symmetries, conservation laws, Hamiltonian operators, and other related structures for (10) using e.g. the techniques from [18, 19]. 
## 2 Preliminaries In the present paper a PDE system in four independent variables will be called a (3+1)-dimensional dispersionless system (note that dispersionless systems are also known as hydrodynamic-type systems) [20, 21, 7, 22] if it can be written in the form \[A_{0}(\mathbf{u})\mathbf{u}_{t}+A_{1}(\mathbf{u})\mathbf{u}_{x}+A_{2}(\mathbf{u})\mathbf{u}_{y}+A_{3}(\mathbf{u})\mathbf{u}_{z}=0, \tag{1}\] where \(x,y,z,t\) are independent variables, \(\mathbf{u}=(u_{1},\ldots,u_{N})^{T}\) is an \(N\)-dimensional vector of unknown functions, and \(A_{i}(\mathbf{u})\) are \(M\times N\) matrices whose entries depend on the components of \(\mathbf{u}\). Both \(M\) and \(N\) are positive integers satisfying \(M\geq N\). All functions under consideration are tacitly assumed to be smooth enough to ensure the validity of all computations. In what follows a (3+1)-dimensional dispersionless system will be said to be _integrable_ if it admits a Lax pair. The recent paper [7] presents a systematic approach for constructing integrable (3+1)-dimensional dispersionless systems, which we briefly review here for the reader's convenience. Consider, following [7], the class of Lax pairs \(L\chi=0,M\chi=0\), where \[\begin{split} L&=\partial_{y}-X_{f},\\ M&=\partial_{t}-X_{g},\end{split} \tag{2}\] for suitable functions \(f=f(p,\mathbf{u})\) and \(g=g(p,\mathbf{u})\), and with \[X_{h}=h_{p}\partial_{x}+(ph_{z}-h_{x})\partial_{p}+(h-ph_{p})\partial_{z}.\] Note that \(p\) plays the role of the spectral parameter, since \(\mathbf{u}_{p}\equiv 0\) by assumption. These nonisospectral Lax pairs have a profound connection to contact geometry: the vector field \(X_{h}\) formally looks exactly like a contact vector field corresponding to a contact Hamiltonian \(h\) on a contact 3-manifold with local coordinates \(x,z,p\) and the associated contact one-form \(dz+pdx\) (see [23, 24] and the references therein). We state without proof the following proposition, which corresponds to Proposition 1 in [7]: **Proposition 1**.: A nonisospectral Lax pair of the form (2) is compatible if and only if the functions \(f,g\) satisfy \[f_{t}-g_{y}+\{f,g\}=0, \tag{3}\] with the bracket \(\{\cdot,\cdot\}\) defined as \[\{f,g\}=f_{p}g_{x}-g_{p}f_{x}-p\left(f_{p}g_{z}-g_{p}f_{z}\right)+fg_{z}-gf_{z}.\] Observe that equation (3) can be rewritten as \[\sum_{i=1}^{N}f_{u_{i}}\left(u_{i}\right)_{t}-g_{u_{i}}\left(u_{i}\right)_{y}+\left(f_{p}g_{u_{i}}-g_{p}f_{u_{i}}\right)\left(u_{i}\right)_{x}+\left(\left(f-pf_{p}\right)g_{u_{i}}-\left(g-pg_{p}\right)f_{u_{i}}\right)\left(u_{i}\right)_{z}=0, \tag{4}\] and if the dependence of \(f\) and \(g\) on \(p\) is rational, we can bring equation (4) to a common denominator. Then the vanishing of the expression in question is equivalent to that of its numerator, and the latter is achieved by equating to zero the coefficients at all powers of \(p\). Consequently, Proposition 1 allows us to produce integrable systems of the form (1) by making appropriate choices of the functions \(f,g\). Importantly, not every pair of functions \(f,g\) is _admissible_: a pair is admissible if equation (3) leads to an expression that can be transformed into a system of Cauchy-Kowalevski type by a suitable change of variables. 
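As an aside, the bracket above is straightforward to experiment with symbolically. The following is a minimal SymPy sketch (SymPy and the helper name are our illustrative assumptions, not part of [7]) of \(\{f,g\}\); as a sanity check, for \(p\)-independent \(f,g\) it reduces to the purely "contact" correction \(fg_{z}-gf_{z}\).

```python
import sympy as sp

x, z, p = sp.symbols('x z p')

def contact_bracket(f, g):
    # {f,g} = f_p g_x - g_p f_x - p (f_p g_z - g_p f_z) + f g_z - g f_z
    fp, fx, fz = f.diff(p), f.diff(x), f.diff(z)
    gp, gx, gz = g.diff(p), g.diff(x), g.diff(z)
    return sp.expand(fp*gx - gp*fx - p*(fp*gz - gp*fz) + f*gz - g*fz)

# p-independent contact Hamiltonians: the bracket reduces to F*G_z - G*F_z
F, G = sp.Function('F')(x, z), sp.Function('G')(x, z)
print(contact_bracket(F, G))
```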
In this sense, it is shown in [7, 25] that for any natural numbers \(m\) and \(n\) the pairs of Lax functions \[f=p^{n+1}+\sum_{j=0}^{n}v_{j}p^{j},\quad g=p^{m+1}+\frac{m}{n}v_{n}p^{m}+\sum_{k=0}^{m-1}w_{k}p^{k}, \tag{5}\] and \[f=\sum_{j=1}^{n}\frac{a_{j}}{v_{j}-p},\quad g=\sum_{k=1}^{m}\frac{b_{k}}{w_{k}-p}, \tag{6}\] are admissible, with \(\mathbf{u}\) being in these cases \[\mathbf{u}=\left(v_{0},\ldots,v_{n},w_{0},\ldots,w_{m-1}\right)^{T}\] and \[\mathbf{u}=\left(a_{1},\ldots,a_{n},b_{1},\ldots,b_{m},v_{1},\ldots,v_{n},w_{1},\ldots,w_{m}\right)^{T},\] respectively. Furthermore, [26] presents an example of an algebraic, rather than rational, admissible pair of Lax functions not belonging to any of the classes above. _Remark 1_.: As highlighted in [7, 25, 26], under the assumption \(\mathbf{u}_{z}=0\), system (1) reduces to a (2+1)-dimensional dispersionless system. If (1) admits a contact Lax pair (2), then the system obtained from (1) by putting \(\mathbf{u}_{z}=0\) admits a Lax pair involving Hamiltonian vector fields (as opposed to contact vector fields), with the Lax operators given by \[L=\partial_{y}-\tilde{X}_{f},\quad M=\partial_{t}-\tilde{X}_{g}, \tag{7}\] where now \[\tilde{X}_{h}=h_{p}\partial_{x}-h_{x}\partial_{p}.\] Detailed discussions of this type of (2+1)-dimensional integrable systems can be found e.g. in [21, 27]. An important example of a (2+1)-dimensional integrable dispersionless system is given by the dispersionless Davey-Stewartson system, which is a physically relevant generalization of the dispersionless nonlinear Schrodinger equation, cf. e.g. [10]. This system has the form \[\begin{split} U_{t}+2\left(US_{x}\right)_{x}+2\left(US_{y}\right)_{y}&=0,\\ S_{t}+S_{x}^{2}+S_{y}^{2}-\delta\phi&=0,\\ \phi_{xy}-\frac{1}{2}\left(U_{xx}+U_{yy}\right)&=0.\end{split} \tag{8}\] Observe that we can eliminate \(\phi\) from this system by expressing it from the second equation and substituting the result into the third one, which yields \[\begin{split} U_{t}+2\left(US_{x}\right)_{x}+2\left(US_{y}\right)_{y}&=0,\\ \frac{1}{\delta}\left(S_{t}+S_{x}^{2}+S_{y}^{2}\right)_{xy}-\frac{1}{2}\left(U_{xx}+U_{yy}\right)&=0.\end{split} \tag{9}\] ## 3 Integrable (3+1)-dimensional generalization of the (2+1)-dimensional dDS system Consider the following system \[\begin{split} u_{t}&=2rv_{x}+vr_{x}+su_{z}-us_{z}+wu_{x}-2vw_{z}+s_{y},\\ v_{t}&=2qu_{z}-uq_{z}+sv_{z}-2vs_{z}+vw_{x}+wv_{x}+q_{y},\\ w_{y}&=-2ru_{x}+rv_{z}+2vr_{z}+uw_{z},\\ r_{y}&=ru_{z}+ur_{z},\\ q_{x}&=2cvu_{x}+cvv_{z}+\frac{q}{v}v_{x},\\ s_{x}&=\frac{q}{v}u_{x}-3cvu_{z}-2cv_{y}+\frac{2cuv-2q}{v}v_{z}+2q_{z},\end{split} \tag{10}\] where \(u,v,w,r,q,s\) are unknown functions of the independent variables \(x,y,z,t\), and \(c\in\mathbb{R}\) is a constant. This system can be transformed into evolutionary form, so that the Cauchy-Kowalevski theorem can be applied to it (in particular, the system in question is neither underdetermined nor overdetermined). Indeed, the coefficients of the variables \(u_{z},v_{z},w_{z},q_{z},r_{z},s_{z}\) in the above equations constitute a non-degenerate matrix \[\begin{bmatrix}s&0&-2v&0&0&-u\\ 2q&s&0&-u&0&-2v\\ 0&r&u&0&2v&0\\ r&0&0&0&u&0\\ 0&cv&0&0&0&0\\ -3cv&\dfrac{2cuv-2q}{v}&0&2&0&0\end{bmatrix}\] whose determinant is \[cv\left(3cu^{4}v-4qu^{3}-16rv^{3}+4su^{2}v\right)\neq 0, \tag{11}\] so the system can be put in evolutionary form with \(z\) playing the role of the evolution parameter; a symbolic check of this determinant is sketched below. 
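The non-degeneracy condition (11) can be double-checked symbolically. Here is a short SymPy sketch (again our illustration, not part of the paper) that recomputes the determinant of the coefficient matrix above and factors it into the form (11).

```python
import sympy as sp

u, v, w, q, r, s, c = sp.symbols('u v w q r s c')

# coefficient matrix of (u_z, v_z, w_z, q_z, r_z, s_z) in system (10)
M = sp.Matrix([
    [s,      0,                -2*v, 0,  0,   -u],
    [2*q,    s,                 0,  -u,  0, -2*v],
    [0,      r,                 u,   0,  2*v,  0],
    [r,      0,                 0,   0,  u,    0],
    [0,      c*v,               0,   0,  0,    0],
    [-3*c*v, (2*c*u*v - 2*q)/v, 0,   2,  0,    0],
])

print(sp.factor(M.det()))
# c*v*(3*c*u**4*v - 4*q*u**3 + 4*s*u**2*v - 16*r*v**3), i.e. (11) up to term order
```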
Our main result establishes that system (10) is integrable and that it is a (3+1)-dimensional generalization of (9): **Theorem 1**.: The (3+1)-dimensional dispersionless system (10) admits the Lax pair \[L=\partial_{y}+\dfrac{v}{p^{2}}\partial_{x}-\left(u+\dfrac{2v}{p}\right)\partial_{z}-\left(u_{z}p-u_{x}+v_{z}-\dfrac{v_{x}}{p}\right)\partial_{p},\] \[M=\partial_{t}-\left(2rp+w-\dfrac{q}{p^{2}}-\dfrac{2cv^{2}}{p^{3}}\right)\partial_{x}+\left(rp^{2}-s-\dfrac{2q}{p}-\dfrac{3cv^{2}}{p^{2}}\right)\partial_{z}\] \[-\left(r_{z}p^{3}+(w_{z}-r_{x})p^{2}+(s_{z}-w_{x})p+q_{z}-s_{x}+\dfrac{2cvv_{z}-q_{x}}{p}-\dfrac{2cvv_{x}}{p^{2}}\right)\partial_{p}. \tag{12}\] Moreover, for \(c=-1\) the system (10) is an integrable (3+1)-dimensional generalization of the (2+1)-dimensional dispersionless Davey-Stewartson system (9). _Remark 2_.: Observe that (10) is also a generalization of the (3+1)-dimensional integrable system appearing in Example 1 in [7]: if we assume \(s=q=0\) and \(c=0\) in (10) we obtain \[\begin{split}u_{t}&=2rv_{x}+vr_{x}+wu_{x}-2vw_{z},\\ v_{t}&=vw_{x}+wv_{x},\\ w_{y}&=-2ru_{x}+rv_{z}+2vr_{z}+uw_{z},\\ r_{y}&=ru_{z}+ur_{z},\end{split} \tag{13}\] which coincides with system (38) in [7]. ## 4 Proof of Theorem 1 First, observe that the Lax pair (12) is of the form (2), with the functions \[\begin{split}f&=u+\frac{v}{p},\\ g&=rp^{2}+wp+s+\frac{q}{p}+c\frac{v^{2}}{p^{2}}.\end{split} \tag{14}\] Now, according to Proposition 1, this Lax pair is compatible if and only if \[f_{t}-g_{y}+\{f,g\}=0. \tag{15}\] Therefore, to show that system (10) implies the compatibility of the Lax pair (12), it is enough to substitute (14) into (15) and check that the resulting expression is equivalent to (10) upon equating to zero the coefficients at all powers of \(p\), which can be verified by straightforward algebraic manipulations. To prove the second part of the theorem, observe first that, according to Remark 1, if we assume \[u_{z}=v_{z}=w_{z}=q_{z}=r_{z}=s_{z}=0 \tag{16}\] in system (10) we obtain a (2+1)-dimensional integrable system. If we further assume, in addition to (16), that \(r=1\), system (10) gives rise to the system \[\begin{split}u_{t}&=wu_{x}+s_{y}+2v_{x},\\ v_{t}&=vw_{x}+wv_{x}+q_{y},\\ w_{y}&=-2u_{x},\\ q_{x}&=2cvu_{x}+\frac{q}{v}v_{x},\\ s_{x}&=-2cv_{y}+\frac{q}{v}u_{x}.\end{split} \tag{17}\] Now note that for \(c=-1\) system (17) can be reduced, by means of the following substitutions \[u=S_{y},\quad v=\frac{1}{4}\delta U,\quad w=-2S_{x},\quad q=-\frac{1}{2}\delta US_{y},\quad s=\frac{1}{2}\delta W-S_{y}^{2},\] where \(\delta\) is a constant and \(S,U,W\) are functions of \(x,y,t\), to the system \[U_{t}=-2(US_{x})_{x}-2(US_{y})_{y}, \tag{18a}\] \[W_{x}=U_{y}, \tag{18b}\] \[S_{ty}=-2S_{x}S_{xy}+\frac{1}{2}\delta W_{y}-2S_{y}S_{yy}+\frac{1}{2}\delta U_{x}. \tag{18c}\] We can rearrange equation (18c) to obtain \[(S_{t})_{y}=-(S_{x}^{2})_{y}-(S_{y}^{2})_{y}+\frac{\delta}{2}(W_{y}+U_{x}), \tag{19}\] and differentiating (19) with respect to \(x\) gives us \[(S_{t})_{xy}=-(S_{x}^{2})_{xy}-(S_{y}^{2})_{xy}+\frac{\delta}{2}(W_{xy}+U_{xx}). \tag{20}\] Finally, we observe that if \(S,U,\) and \(W\) satisfy (18) then, using (18b) in the form \(W_{xy}=U_{yy}\), \(S\) and \(U\) also satisfy the system \[U_{t}=-2(US_{x})_{x}-2(US_{y})_{y}, \tag{21a}\] \[(S_{t})_{xy}=-(S_{x}^{2})_{xy}-(S_{y}^{2})_{xy}+\frac{\delta}{2}(U_{yy}+U_{xx}), \tag{21b}\] which is nothing but (9). 
So we see that system (18) is an integrable generalization of (9), system (17) is an integrable generalization of (18), and finally system (10) is an integrable (3+1)-dimensional generalization of (17), and hence _a fortiori_ of (9); the result follows. _Remark 3_.: It is worth noting that the pair of functions (14) does not belong to any of the previously known classes (5) and (6) of admissible pairs of rational Lax functions yielding (3+1)-dimensional dispersionless integrable systems. ## Acknowledgments The author thanks the _University of Cadiz_ for financial support through the internal funding program 'Plan Propio de Estimulo y Apoyo a la Investigacion y Transferencia 2022/2023', and _Junta de Andalucia_ for support through the research group FQM-377. The author would also like to extend his sincere gratitude to A. Sergyeyev for stimulating discussions and encouragement.
2308.16276
Separating Subversion Forcing Principles
We study a family of variants of Jensen's subcomplete forcing axiom, $\mathsf{SCFA}$, and subproper forcing axiom, $\mathsf{SubPFA}$. Using these we develop a general technique for proving non-implications of $\mathsf{SCFA}$, $\mathsf{SubPFA}$ and their relatives and give several applications. For instance we show that $\mathsf{SCFA}$ does not imply $\mathsf{MA}^+(\sigma$-closed$)$ and $\mathsf{SubPFA}$ does not imply Martin's Maximum.
Hiroshi Sakai, Corey Bacal Switzer
2023-08-30T19:05:28Z
http://arxiv.org/abs/2308.16276v1
# Separating subversion forcing principles ###### Abstract. We study a family of variants of Jensen's _subcomplete forcing axiom_, \(\mathsf{SCFA}\), and _subproper forcing axiom_, \(\mathsf{SubPFA}\). Using these we develop a general technique for proving non-implications of \(\mathsf{SCFA}\), \(\mathsf{SubPFA}\) and their relatives and give several applications. For instance we show that \(\mathsf{SCFA}\) does not imply \(\mathsf{MA}^{+}(\sigma\)-closed) and \(\mathsf{SubPFA}\) does not imply Martin's Maximum. 2010 Mathematics Subject Classification: 03E17, 03E35, 03E50 _Acknowledgments:_ The first author would like to thank JSPS for the support through grant numbers 18K03397 and 21K03338. The second author would like to thank the Austrian Science Fund (FWF) for the generous support through grant number Y1012-N35. In this paper we combine the \(\infty\)-versions of these forcing classes with a further parametrization "above \(\mu\)" for cardinals \(\mu\), initially investigated, somewhat sparingly, by Jensen in [12, Section 3]. This leads to a large family of forcing axioms \(\infty\)-\(\mathsf{SubPFA}\upharpoonright\mu\) and \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\), where \(\infty\)-\(\mathsf{SubPFA}\) and \(\infty\)-\(\mathsf{SCFA}\) coincide with \(\infty\)-\(\mathsf{SubPFA}\upharpoonright 2^{\aleph_{0}}\) and \(\infty\)-\(\mathsf{SCFA}\upharpoonright 2^{\aleph_{0}}\) respectively. The main motivation of this work is to investigate how these axioms relate to one another and to other, better-known axioms such as \(\mathsf{MM}\) and \(\mathsf{MA}^{+}(\sigma\)-closed). Formal definitions will be given in the second part of this introduction and in Section 2, but the definitions of these axioms alongside well-known results provide almost immediately that the following diagram of implications holds for cardinals \(2^{\aleph_{0}}\leq\nu<\mu\).

Figure 1. Subversion forcing principles and then some

The main result of this work is that essentially no arrows are missing from Figure 1. **Main Theorem 1.1**.: _Let \(2^{\aleph_{0}}\leq\nu\leq\lambda<\mu=\lambda^{+}\) be cardinals with \(\nu^{\omega}<\mu\). Assuming the consistency of a supercompact cardinal, the implications given in Figure 1 are complete in the sense that if no composition of arrows exists from one axiom to another then there is a model of \(\mathsf{ZFC}\) in which the implication fails2._ Footnote 2: Except for the trivial \(\forall\kappa\,\neg\Box_{\kappa}\to\forall\kappa\geq 2^{\aleph_{0}}\,\neg\Box_{\kappa}\), which did not fit aesthetically into the picture. As a corollary of this theorem and its proof we obtain separations of several "subversion" forcing principles from other, more well-studied reflection principles and forcing axioms. For instance we show the following. **Theorem 1.1** (See Theorem 3.1).: _Assuming the consistency of a supercompact cardinal, \(\mathsf{SCFA}\) does not imply the failure of \(\Box_{\aleph_{1}}\) when \(\mathsf{CH}\) fails._ **Corollary 1.2**.: _Assuming the consistency of a supercompact cardinal, \(\mathsf{SCFA}\) does not imply \(\mathsf{MA}^{+}(\sigma\)-closed)._ The rest of this paper is organized as follows. In the remainder of this introduction we give the relevant background and terminology. In Section 2 we introduce the variants \(\infty\)-subcompleteness and \(\infty\)-subproperness above \(\mu\) and discuss some of their properties. 
In Section 3 we study the forcing axioms associated to these classes and show, amongst other things, that they are distinct, as well as the fact that \(\infty\)-\(\mathsf{SCFA}\) implies neither \(\mathsf{MA}^{+}(\sigma\)-closed) nor \(\neg\Box_{\kappa}\) for any \(\kappa<2^{\aleph_{0}}\). In Section 4 we continue this investigation and show that \(\infty\)-\(\mathsf{SubPFA}\) does not imply \(\mathsf{MM}\). Section 5 concludes with some final remarks and open problems. ### Preliminaries We conclude this introduction with the key definitions we will use throughout, beginning with those of subproperness and subcompleteness. These are two classes of forcing notions defined by Jensen in [14] which have found several applications, see e.g. [17, 13, 6, 9]. More discussion of these concepts can be found in [14] or [10]. Before beginning with the definition we will need one preliminary notion. Below we denote by \(\mathsf{ZFC}^{-}\) the axioms of \(\mathsf{ZFC}\) without the power set axiom. **Definition 1.3**.: A transitive set \(N\) (usually a model of \(\mathsf{ZFC}^{-}\)) is _full_ if there is an ordinal \(\gamma\) so that \(L_{\gamma}(N)\models\mathsf{ZFC}^{-}\) and \(N\) is regular in \(L_{\gamma}(N)\), i.e. for all \(x\in N\) and \(f\in L_{\gamma}(N)\), if \(f:x\to N\) then \(\operatorname{ran}(f)\in N\). **Definition 1.4**.: Let \(\mathbb{P}\) be a forcing notion and let \(\delta(\mathbb{P})\) be the least size of a dense subset of \(\mathbb{P}\). 1. We say that \(\mathbb{P}\) is _subcomplete_ if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta\), if \(\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}\) is generic then there is a \(p\in\mathbb{P}\) so that if \(p\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta\), 2. \(\sigma^{\prime}``\bar{G}\subseteq G\), 3. \(\operatorname{Hull}^{N}(\delta(\mathbb{P})\cup\operatorname{ran}(\sigma))=\operatorname{Hull}^{N}(\delta(\mathbb{P})\cup\operatorname{ran}(\sigma^{\prime}))\). 2. We say that \(\mathbb{P}\) is _subproper_ if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), \(p\in N\cap\mathbb{P}\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta\), there is a \(q\in\mathbb{P}\) so that \(q\leq p\) and if \(q\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta\), 2. \((\sigma^{\prime})^{-1}``G\) is \(\bar{\mathbb{P}}\)-generic over \(\bar{N}\), 3. \(\operatorname{Hull}^{N}(\delta(\mathbb{P})\cup\operatorname{ran}(\sigma))=\operatorname{Hull}^{N}(\delta(\mathbb{P})\cup\operatorname{ran}(\sigma^{\prime}))\). Note that the special case where \(\sigma=\sigma^{\prime}\) is properness (for subproperness) and (up to forcing equivalence) \(\sigma\)-closedness (for subcompleteness). It was pointed out in [10] that the "Hulls" condition 3. in both definitions is somewhat unnatural. 
Indeed, it is never used in applications and appears solely for the purpose of proving the iteration theorem, [14, Theorem 3]. In [10] Fuchs and the second author showed that by iterating with Miyamoto's _nice iterations_ this condition could be avoided. As such it makes sense to define the following. **Definition 1.5**.: Let \(\mathbb{P}\) be a forcing notion. 1. We say that \(\mathbb{P}\) is \(\infty\)-_subcomplete_ if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta\), if \(\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}\) is generic then there is a \(p\in\mathbb{P}\) so that if \(p\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta})=\mathbb{P},s,\theta\), 2. \(\sigma^{\prime}``\bar{G}\subseteq G\). 2. We say that \(\mathbb{P}\) is \(\infty\)-_subproper_ if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), \(p\in N\cap\mathbb{P}\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta\), there is a \(q\in\mathbb{P}\) so that \(q\leq p\) and if \(q\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta})=p,\mathbb{P},s,\theta\), 2. \((\sigma^{\prime})^{-1}``G\) is \(\bar{\mathbb{P}}\)-generic over \(\bar{N}\). To be clear, this is just the same as the definitions of the "non-\(\infty\)" versions, simply with the additional "Hulls" condition removed. As mentioned, these classes come with an iteration theorem. **Theorem 1.6** (Theorem 3.19 (for subcomplete) and Theorem 3.20 (for subproper) of [10]).: _Let \(\gamma\) be an ordinal and \(\langle\mathbb{P}_{\alpha},\dot{\mathbb{Q}}_{\alpha}\mid\alpha<\gamma\rangle\) be a nice iteration in the sense of Miyamoto so that for all \(\alpha<\gamma\) we have \(\Vdash_{\mathbb{P}_{\alpha}}``\dot{\mathbb{Q}}_{\alpha}\) is \(\infty\)-subproper (respectively \(\infty\)-subcomplete)''. Then \(\mathbb{P}_{\gamma}\) is \(\infty\)-subproper (respectively \(\infty\)-subcomplete)._ We note that the above theorem in the case of \(\infty\)-subproper forcing was first proved independently by Miyamoto in [16]. A consequence of this theorem (initially observed for the non-\(\infty\) versions by Jensen) is that, modulo a supercompact cardinal, these classes have a consistent forcing axiom. **Definition 1.7**.: Let \(\Gamma\) be a class of forcing notions. The _forcing axiom for \(\Gamma\)_, denoted \(\mathsf{FA}(\Gamma)\), is the statement that for all \(\mathbb{P}\) in \(\Gamma\) and any \(\omega_{1}\)-sequence \(\{D_{i}\mid i<\omega_{1}\}\) of dense subsets of \(\mathbb{P}\) there is a filter \(G\subseteq\mathbb{P}\) which intersects every \(D_{i}\). If \(\Gamma\) is the class of (\(\infty\)-)subproper forcing notions we denote \(\mathsf{FA}(\Gamma)\) by (\(\infty\)-)\(\mathsf{SubPFA}\). Similarly, if \(\Gamma\) is the class of (\(\infty\)-)subcomplete forcing notions we denote \(\mathsf{FA}(\Gamma)\) by (\(\infty\)-)\(\mathsf{SCFA}\). 
It is not known whether, up to forcing equivalence, each class is simply equal to its "\(\infty\)"-version, or whether their corresponding forcing axioms are equivalent. However, since the "\(\infty\)"-versions are more general (or appear to be) and avoid the unnecessary technicality of computing hulls, we will work with them in this paper. Nearly everything written here, however, could be formulated for the "non-\(\infty\)" versions equally well, though we leave the translation to the oddly interested reader. If \(\Gamma\subseteq\Delta\) then \(\mathsf{FA}(\Delta)\) implies \(\mathsf{FA}(\Gamma)\), so we get the following collection of implications, which are part of Figure 1. **Proposition 1.8**.: \(\mathsf{MM}\to\infty\)_-_\(\mathsf{SubPFA}\to\mathsf{PFA}\) _and \(\mathsf{MM}\to\infty\)_-_\(\mathsf{SubPFA}\to\infty\)_-_\(\mathsf{SCFA}\)_._ Here \(\mathsf{MM}\), known as _Martin's Maximum_ and introduced in [5], is the forcing axiom for forcing notions which preserve stationary subsets of \(\omega_{1}\) (all \(\infty\)-subproper forcing notions have this property), and \(\mathsf{PFA}\) is the forcing axiom for proper forcing notions. It is known from the work of Jensen (see also [10]) that none of the above implications can be reversed, with the exception of whether \(\mathsf{SubPFA}\) implies \(\mathsf{MM}\). In this paper we will show the consistency of \(\mathsf{SubPFA}+\neg\mathsf{MM}\), see Theorem 4.1 below. On that note we move to our last preliminary. Many of the theorems in this paper involve showing that we can preserve some fragment of \(\infty\)-\(\mathsf{SCFA}\) (or \(\infty\)-\(\mathsf{SubPFA}\)) via a forcing killing another fragment of it. Towards this end we will need an extremely useful theorem due to Cox. Below, recall that a class of forcing notions \(\Gamma\) is _closed under restrictions_ (see Definition 39 of [2]) if for all \(\mathbb{P}\in\Gamma\) and all \(p\in\mathbb{P}\) the lower cone \(\mathbb{P}\upharpoonright p:=\{q\in\mathbb{P}\mid q\leq p\}\) is in \(\Gamma\). One can check that both the classes of \(\infty\)-subcomplete and \(\infty\)-subproper forcing notions (as well as the restrictions "above \(\mu\)" defined in Section 2) have this property. **Theorem 1.9** (Cox, see Theorem 20 of [2]).: _Let \(\Gamma\) be a class of forcing notions closed under restrictions and assume \(\mathsf{FA}(\Gamma)\) holds. Let \(\mathbb{P}\) be a forcing notion. Suppose that for every \(\mathbb{P}\)-name \(\dot{\mathbb{Q}}\) for a forcing notion in \(\Gamma\) there is a \(\mathbb{P}\ast\dot{\mathbb{Q}}\)-name \(\dot{\mathbb{R}}\) for a forcing notion so that the following hold:_ 1. \(\mathbb{P}\ast\dot{\mathbb{Q}}\ast\dot{\mathbb{R}}\) _is in_ \(\Gamma\)_._ 2. **If**__\(j:V\to N\) _is a generic elementary embedding,_ \(\theta\geq|\mathbb{P}\ast\dot{\mathbb{Q}}\ast\dot{\mathbb{R}}|^{+}\) _is regular in_ \(V\)_, and_ _a)_ \(H^{V}_{\theta}\) _is in the wellfounded part of_ \(N\)_;_ _b)_ \(j``H^{V}_{\theta}\in N\) _has size_ \(\omega_{1}\) _in_ \(N\)_;_ _c)_ \(\operatorname{crit}(j)=\omega_{2}^{V}\)_;_ _d) there exists a_ \(G\ast H\ast K\) _in_ \(N\) _that is_ \((H^{V}_{\theta},\mathbb{P}\ast\dot{\mathbb{Q}}\ast\dot{\mathbb{R}})\)_-generic,_ **then**__\(N\) _believes that_ \(j``G\) _has a lower bound in_ \(j(\mathbb{P})\)_._ _Then \(\Vdash_{\mathbb{P}}\mathsf{FA}(\Gamma)\), i.e. \(\mathbb{P}\) preserves the forcing axiom for \(\Gamma\)._ See [2] for more on strengthenings and generalizations of this wide-ranging theorem. ## 2. \(\infty\)-Subcompleteness and \(\infty\)-Subproperness above \(\mu\)
Most theorems in this paper filter through the notions of \(\infty\)-_subcompleteness_ (respectively \(\infty\)-_subproperness_) _above \(\mu\)_ for a cardinal \(\mu\). These are technical strengthenings of \(\infty\)-subcompleteness (respectively \(\infty\)-subproperness). In this section we define these strengthenings and make some elementary observations which will be used in the rest of the paper. **Definition 2.1**.: Let \(\mu\) be a cardinal and \(\mathbb{P}\) a forcing notion. 1. We say that \(\mathbb{P}\) is \(\infty\)-_subcomplete above_ \(\mu\) if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=\mathbb{P},s,\theta,\mu\), if \(\bar{G}\subseteq\bar{\mathbb{P}}\cap\bar{N}\) is generic then there is a \(p\in\mathbb{P}\) so that if \(p\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=\mathbb{P},s,\theta,\mu\), 2. \(\sigma^{\prime}``\bar{G}\subseteq G\), 3. \(\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}\). 2. We say that \(\mathbb{P}\) is \(\infty\)-_subproper above_ \(\mu\) if for all sufficiently large \(\theta\) and \(\tau>\theta\) so that \(H_{\theta}\subseteq N:=L_{\tau}[A]\models\mathsf{ZFC}^{-}\), \(s\in N\), \(p\in N\cap\mathbb{P}\), and \(\sigma:\bar{N}\prec N\) countable, transitive and full with \(\sigma(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=p,\mathbb{P},s,\theta,\mu\), there is a \(q\in\mathbb{P}\) so that \(q\leq p\) and if \(q\in G\) is \(\mathbb{P}\)-generic over \(V\) then in \(V[G]\) there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that 1. \(\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{s},\bar{\theta},\bar{\mu})=p,\mathbb{P},s,\theta,\mu\), 2. \((\sigma^{\prime})^{-1}``G\) is \(\bar{\mathbb{P}}\)-generic over \(\bar{N}\), 3. \(\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}\). Concretely, being \(\infty\)-subcomplete above \(\mu\) simply means that \(\mathbb{P}\) is \(\infty\)-subcomplete and, moreover, for any \(\sigma:\bar{N}\prec N\) the corresponding \(\sigma^{\prime}\) (in \(V[G]\)) witnessing the subcompleteness can be arranged to agree with \(\sigma\) "up to \(\mu\)", i.e. on the ordinals below \(\sigma^{-1}``\mu\) (and idem for \(\infty\)-subproperness). The "non-\(\infty\)" versions of these classes were first introduced by Jensen in [13] under the names "\(\mu\)-subcompleteness" and "\(\mu\)-subproperness". They were investigated further by Fuchs in [7], who introduced the terminology "above \(\mu\)" and made several of the elementary observations we repeat below. Following Fuchs (as opposed to Jensen) we have moved the parameter \(\mu\) to the end to avoid the awkwardness of "\(\mu\)-\(\infty\)-subcomplete/\(\mu\)-\(\infty\)-subproper". The following is immediate from the definitions. **Observation 2.2**.: _Let \(\mu<\nu\) be cardinals. 
If \(\mathbb{P}\) is \(\infty\)-subcomplete (respectively \(\infty\)-subproper) above \(\nu\) then it is \(\infty\)-subcomplete (respectively \(\infty\)-subproper) above \(\mu\), and it is \(\infty\)-subcomplete (respectively \(\infty\)-subproper) without any restriction._ It is easy to see that being \(\infty\)-subcomplete (respectively \(\infty\)-subproper) is equivalent to being \(\infty\)-subcomplete (respectively \(\infty\)-subproper) above \(\omega_{1}\); however, more is true, an observation due independently to the first author and to Fuchs (see [7, Observation 4.2]). **Proposition 2.3**.: _Let \(\mathbb{P}\) be a forcing notion. \(\mathbb{P}\) is \(\infty\)-subcomplete (respectively \(\infty\)-subproper) if and only if \(\mathbb{P}\) is \(\infty\)-subcomplete above \(2^{\aleph_{0}}\) (respectively \(\infty\)-subproper above \(2^{\aleph_{0}}\))._ As noted above, this proposition (in the case of subcompleteness) is proved as Observation 4.2 of [7], but we give a detailed proof in order to help the reader get accustomed to \(\infty\)-subversion forcing, as well as to include the mild difference for subproperness. Let us note, however, that essentially the point is that, using the definable well order of \(L_{\tau}[A]\), the reals of \(\bar{N}\) code the cardinality of the continuum. Proof.: We prove the case of \(\infty\)-subcompleteness and leave the reader to check the case of \(\infty\)-subproperness. Let \(\mathbb{P}\) be a forcing notion. It is immediate, as noted above, that if \(\mathbb{P}\) is \(\infty\)-subcomplete above \(2^{\aleph_{0}}\) then it is \(\infty\)-subcomplete, so we need to check just the reverse direction. Thus assume that \(\mathbb{P}\) is \(\infty\)-subcomplete, and let \(\tau>\theta\) be cardinals and \(\sigma:\bar{N}\prec N:=L_{\tau}[A]\), with \(H_{\theta}\subseteq N\), be as in the definition of \(\infty\)-subcompleteness. Let \(\bar{G}\) be a \(\bar{\mathbb{P}}\)-generic filter over \(\bar{N}\), with \(\bar{\mathbb{P}}\) the preimage of \(\mathbb{P}\) under \(\sigma\). Finally, let \(p\in\mathbb{P}\) force that there is a \(\sigma^{\prime}:\bar{N}\prec N\) so that \(\sigma^{\prime}(\bar{\mathbb{P}})=\mathbb{P}\) and \(\sigma^{\prime}``\bar{G}\subseteq G\) for any generic \(G\ni p\) (the existence of such a condition is the heart of the definition of \(\infty\)-subcompleteness, of course). We need to show that \(p\) forces that \(\sigma^{\prime}\upharpoonright 2^{\aleph_{0}}=\sigma\upharpoonright 2^{\aleph_{0}}\), where, to be clear, \(2^{\aleph_{0}}\) denotes the cardinal (as computed in \(\bar{N}\)) which bijects onto the continuum (as defined in \(\bar{N}\)). To avoid confusion, let us denote the cardinal \(2^{\aleph_{0}}\) by \(\kappa\) (in \(V\) and hence \(N\)) and the preimage of \(\kappa\) in \(\bar{N}\) under \(\sigma\) by \(\bar{\kappa}\). First note that by the absoluteness of \(\omega\) we have that for all reals \(x\in\bar{N}\) it must be the case that \(\sigma(x)=\sigma^{\prime}(x)=x\) (and being a real is absolute between \(\bar{N}\) and \(V\)). Moreover, since \(N=L_{\tau}[A]\) there is a definable well order of the universe, and in particular there is a definable bijection of the reals onto \(\kappa\), say \(f:2^{\omega}\to\kappa\). By elementarity, in \(\bar{N}\) there is a definable bijection \(\bar{f}:2^{\omega}\cap\bar{N}\to\bar{\kappa}\). 
But since \(f\) is definable we have \(\sigma(\bar{f})=\sigma^{\prime}(\bar{f})=f\), and hence for all \(\alpha\in\bar{\kappa}\), writing \(\alpha=\bar{f}(x)\) for a real \(x\in\bar{N}\), we get \(\sigma(\alpha)=f(\sigma(x))=f(x)=f(\sigma^{\prime}(x))=\sigma^{\prime}(\alpha)\), as needed. In fact, \(2^{\aleph_{0}}\) is best possible in \(\mathsf{ZFC}\): Jensen showed that Namba forcing is \(\infty\)-subcomplete above \(\omega_{1}\) assuming \(\mathsf{CH}\), while it is not even \(\infty\)-subproper above \(\omega_{2}\) in \(\mathsf{ZFC}\), a consequence of the next observation. **Lemma 2.4**.: _Let \(\mu\) be a cardinal._ 1. _If_ \(\mathbb{P}\) _is_ \(\infty\)_-subproper above_ \(\mu\) _then any new countable set of ordinals less than_ \(\mu\) _added by_ \(\mathbb{P}\) _is covered by an old countable set of ordinals (less than_ \(\mu\)_). In particular, if_ \(\Vdash_{\mathbb{P}}\)_“_\(\mathrm{cf}(\mu)=\omega\)_” then_ \(\mathrm{cf}(\mu)=\omega\) _(in_ \(V\)_)._ 2. _If_ \(\mathbb{P}\) _is_ \(\infty\)_-subcomplete above_ \(\mu\) _then_ \(\mathbb{P}\) _adds no new countable sets of ordinals below_ \(\mu\)_._ Proof.: The proofs of both items are similar to the corresponding proofs that every new countable set of ordinals added by a proper forcing notion is contained in an old countable set of ordinals, and that \(\sigma\)-closed forcing notions do not add new countable sets of ordinals at all, respectively. The point is that to show the corresponding fact "below \(\mu\)" one only needs \(\infty\)-subproperness (respectively \(\infty\)-subcompleteness) above \(\mu\). Let us begin with the first item. Assume that \(\mu\) is a cardinal, \(\mathbb{P}\) is \(\infty\)-subproper above \(\mu\), \(p\in\mathbb{P}\) and \(\dot{x}\) is a \(\mathbb{P}\)-name so that \(p\Vdash\dot{x}:\omega\to\mu\). We need to find a \(q\leq p\) and a countable \(X\subseteq\mu\) so that \(q\Vdash\mathrm{im}(\dot{x})\subseteq\check{X}\). To this end, let \(\sigma:\bar{N}\prec N\) be as in the definition of \(\infty\)-subproperness above \(\mu\) with \(\sigma(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}\). We claim that \(X:=\sigma``\bar{\mu}\) is as needed. Indeed, by the definition of \(\infty\)-subproperness above \(\mu\) there is a \(q\leq p\) forcing that there is an embedding \(\sigma^{\prime}:\bar{N}\prec N\) so that \(\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}\) and \(\sigma^{\prime}\upharpoonright\bar{\mu}=\sigma\upharpoonright\bar{\mu}\). Fix such a \(q\) and let \(q\in G\) be \(\mathbb{P}\)-generic over \(V\). Note that by elementarity we have that \(\bar{p}\Vdash\dot{\bar{x}}(n)\in\bar{\mu}\) for all \(n<\omega\). Also, since \(\bar{G}:=(\sigma^{\prime})^{-1}``G\) is \(\bar{\mathbb{P}}\)-generic over \(\bar{N}\), we can find in \(\bar{N}\) ordinals \(\bar{\mu}_{n}<\bar{\mu}\) so that \(\bar{N}[\bar{G}]\models\dot{\bar{x}}^{\bar{G}}(n)=\bar{\mu}_{n}\) for each \(n<\omega\). Finally, we have that \(\sigma^{\prime}(\bar{p},\bar{\mathbb{P}},\bar{\mu},\dot{\bar{x}})=p,\mathbb{P},\mu,\dot{x}\) and hence, by elementarity again, alongside the fact that \(\sigma\upharpoonright\bar{\mu}=\sigma^{\prime}\upharpoonright\bar{\mu}\), we get \(N[G]\models\dot{x}^{G}(n)=\sigma^{\prime}(\bar{\mu}_{n})=\sigma(\bar{\mu}_{n})\in X\) for all \(n<\omega\). This completes the proof of the first item. 
The second case is nearly verbatim, noting that since for \(\infty\)-subcomplete forcing we can choose \(\bar{G}\) in the ground model, we can actually compute \(\sigma``\dot{\bar{x}}^{\bar{G}}=(\sigma^{\prime})``\dot{\bar{x}}^{\bar{G}}\) in \(V\). As mentioned before Lemma 2.4, an immediate consequence is the following. **Lemma 2.5**.: _Namba forcing is not \(\infty\)-subproper above \(\omega_{2}\). In particular, Namba forcing is not \(\infty\)-subproper if \(\mathsf{CH}\) fails._ Finally, we end this section with some observations about the forcing axioms associated with the classes we have been discussing. **Definition 2.6**.: Let \(\mu\) be a cardinal. Denote by \(\infty\)-\(\mathsf{SubPFA}\upharpoonright\mu\) the forcing axiom for forcing notions \(\mathbb{P}\) which are \(\infty\)-subproper above \(\mu\), and by \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\) the same for \(\mathbb{P}\) which are \(\infty\)-subcomplete above \(\mu\). The following is immediate by Observation 2.2. **Proposition 2.7**.: _Let \(\mu<\nu\) be cardinals. We have that \(\infty\)-\(\mathsf{SCFA}\) implies \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\), which implies \(\infty\)-\(\mathsf{SCFA}\upharpoonright\nu\). Similarly for the variants of \(\infty\)-\(\mathsf{SubPFA}\)._ In the next section we will show that (in many cases) the reverse implications do not hold. ## 3. Separating the \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\) Principles In this section we show that, under certain cardinal arithmetic assumptions, \(\infty\)-\(\mathsf{SCFA}\upharpoonright\nu\) does not imply \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\) for \(\mu<\nu\). Before proving this general theorem we introduce our technique with the simple example of separating \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{1}\) from \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\). This involves showing that adding a \(\square_{\omega_{1}}\)-sequence to a model of \(\infty\)-\(\mathsf{SCFA}\) preserves \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\); however, it follows from a result of Jensen (see [14]) that \(\mathsf{SCFA}+\mathsf{CH}\) implies the failure of \(\square_{\omega_{1}}\). ### The Case of \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\): Adding Non-Reflecting Structures of Size \(\aleph_{2}\) Recall that for an uncountable cardinal \(\lambda\) a \(\square_{\lambda}\)-sequence is a sequence \(\langle C_{\alpha}\mid\alpha\in\lambda^{+}\cap\mathrm{Lim}\rangle\) so that for all \(\alpha\) the following hold: 1. \(C_{\alpha}\) is club in \(\alpha\); 2. \(\mathrm{ot}(C_{\alpha})\leq\lambda\); 3. for each \(\beta\in\mathrm{lim}(C_{\alpha})\) we have that \(C_{\alpha}\cap\beta=C_{\beta}\). We recall the poset \(\mathbb{P}_{0}\) from [3, Example 6.6] for adding a square sequence. Conditions \(p\in\mathbb{P}_{0}\) are functions so that the domain of \(p\) is \((\beta+1)\cap\mathrm{Lim}\) for some \(\beta\in\lambda^{+}\cap\mathrm{Lim}\) and: 1. for all \(\alpha\in\mathrm{dom}(p)\) we have that \(p(\alpha)\) is club in \(\alpha\) with order type \(\leq\lambda\); and 2. if \(\alpha\in\mathrm{dom}(p)\) then for each \(\beta\in\mathrm{lim}(p(\alpha))\) we have \(p(\alpha)\cap\beta=p(\beta)\). The order is end extension. We remark that a moment's reflection confirms that this poset is \(\sigma\)-closed (a sketch follows below). Moreover, it is \(<\!\lambda^{+}\)-strategically closed (see [3]). In particular it preserves cardinals up to \(\lambda^{+}\). 
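For the reader's convenience, here is a sketch of the \(\sigma\)-closedness claim (our elaboration of the "moment's reflection", not quoted from [3]). Let \(p_{0}\geq p_{1}\geq\dots\) be a descending sequence of conditions with \(\operatorname{dom}(p_{n})=(\beta_{n}+1)\cap\mathrm{Lim}\); we may assume the \(\beta_{n}\) are strictly increasing (otherwise the sequence is eventually constant and there is nothing to do). Letting \(\beta^{*}=\sup_{n}\beta_{n}\), set \[q=\bigcup_{n<\omega}p_{n}\cup\bigl\{\bigl\langle\beta^{*},\{\beta_{n}\mid n<\omega\}\bigr\rangle\bigr\}.\] The new club \(\{\beta_{n}\mid n<\omega\}\) has order type \(\omega\leq\lambda\) and no limit points below \(\beta^{*}\), so the coherence requirement 2. is vacuous at \(\beta^{*}\) and \(q\) is a condition extending every \(p_{n}\).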
**Theorem 3.1**.: _Assume \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\) and let \(\mathbb{P}_{0}\) be the forcing notion defined above for adding a \(\square_{\omega_{1}}\)-sequence. Then \(\Vdash_{\mathbb{P}_{0}}\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\). In particular, if the existence of a supercompact cardinal is consistent with \(\mathsf{ZFC}\) then \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}+\square_{\omega_{1}}\) is consistent as well._ Before proving this theorem we need to define one more poset. Recall that if \(G\subseteq\mathbb{P}_{0}\) is generic and \(\vec{\mathcal{C}}_{G}=\langle C_{\alpha}\mid\alpha\in\lambda^{+}\cap\mathrm{Lim}\rangle\) is the generic \(\square_{\lambda}\)-sequence added by \(G\), then for any cardinal \(\gamma\leq\lambda\) we can _thread the square sequence_ via the following poset \(\mathbb{T}_{G,\gamma}\). Conditions are closed, bounded subsets \(c\subseteq\lambda^{+}\) so that \(c\) has order type \(<\gamma\) and for all limit points \(\beta\in c\) we have that \(\beta\cap c=C_{\beta}\). See [4, SS6] and [15, p. 7] for more on this threading poset. The point is the following. **Fact 3.2** (Lemma 6.9 of [4]).: _Let \(\gamma\leq\lambda\) be cardinals, \(\mathbb{P}_{0}\) the forcing notion described above for adding a \(\square_{\lambda}\)-sequence, and \(\dot{\mathbb{T}}_{\dot{G},\gamma}\) the \(\mathbb{P}_{0}\)-name for the forcing to thread the generic square sequence with conditions of size \(<\gamma\). Then \(\mathbb{P}_{0}*\dot{\mathbb{T}}_{\dot{G},\gamma}\) has a dense \(<\gamma\)-closed subset._ We can now prove Theorem 3.1. Proof.: We let \(\mathbb{P}_{0}\) be the forcing described above for adding a \(\square_{\omega_{1}}\)-sequence (so \(\lambda=\omega_{1}\)). Let \(\gamma=\aleph_{1}\), so in \(V^{\mathbb{P}_{0}}\) the threading poset \(\dot{\mathbb{T}}:=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}\) consists of countable closed subsets of \(\omega_{2}\). We want to apply Theorem 1.9 to \(\mathbb{P}_{0}\). Note that if \(\dot{\mathbb{Q}}\) is a \(\mathbb{P}_{0}\)-name for a forcing notion which is \(\infty\)-subcomplete above \(\omega_{2}\), then \(\dot{\mathbb{T}}=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}\) is absolute between \(V^{\mathbb{P}_{0}}\) and \(V^{\mathbb{P}_{0}*\dot{\mathbb{Q}}}\) by Lemma 2.4 (2). **Claim 3.3**.: _It is enough to show that for any \(\mathbb{P}_{0}\)-name \(\dot{\mathbb{Q}}\) for a forcing notion which is \(\infty\)-subcomplete above \(\omega_{2}\), the three-step iteration \(\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}\) is \(\infty\)-subcomplete above \(\omega_{2}\)._ Proof of Claim.: This is because \(\mathbb{T}\) adds a lower bound to \(j``G\) as described in the statement of Theorem 1.9. In more detail, let \(\dot{\mathbb{Q}}\) be a \(\mathbb{P}_{0}\)-name for a forcing notion which is \(\infty\)-subcomplete above \(\omega_{2}\); we want to show that, with \(\dot{\mathbb{R}}=\dot{\mathbb{T}}\), the hypotheses of Theorem 1.9 are satisfied, assuming that \(\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}\) is \(\infty\)-subcomplete above \(\omega_{2}\). Since this is exactly the first clause, we only need to concern ourselves with the second one. 
Recall that, relativized to this situation, this says that **if** \(j:V\to N\) is a generic elementary embedding, \(\theta\geq|\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}|^{+}\) is regular in \(V\), and

a) \(H^{V}_{\theta}\) is in the wellfounded part of \(N\);
b) \(j\)"\(H^{V}_{\theta}\in N\) has size \(\omega_{1}\) in \(N\);
c) \(\mathrm{crit}(j)=\omega_{2}^{V}\);
d) there exists a \(G*H*K\) in \(N\) that is \((H^{V}_{\theta},\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}})\)-generic,

**then** \(N\) believes that \(j\)"\(G\) has a lower bound in \(j(\mathbb{P}_{0})\).

So fix some \(\theta\) and \(j:V\to N\) as described in a) to d). Note that \(j\)"\(G=G\) by c) and the fact that \(G\) is coded as a subset of \(\omega_{2}^{V}\). Thus it suffices to find a lower bound of \(G\) in \(j(\mathbb{P}_{0})\). The point now is that since \(G*H*K\in N\) we can in particular form \(\bigcup K\in N\), which is a club subset of \(\omega_{2}^{V}=\sup_{p\in G}\operatorname{dom}(p)\) and coheres with all of the elements of \(G\); hence \((\bigcup G)\cup\langle\omega_{2}^{V},\bigcup K\rangle\) is as needed. 

Let us now show that \(\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}\) is \(\infty\)-subcomplete above \(\omega_{2}\). Let \(\tau>\theta\) be sufficiently large cardinals and \(\sigma:\bar{N}\prec N=L_{\tau}[A]\supset H_{\theta}\) be as in the definition of \(\infty\)-subcompleteness above \(\omega_{2}\). Let \(\sigma(\bar{\mathbb{P}}_{0},\dot{\bar{\mathbb{Q}}},\dot{\bar{\mathbb{T}}})=\mathbb{P}_{0},\dot{\mathbb{Q}},\dot{\mathbb{T}}\). Let \(\bar{G}*\bar{H}*\bar{K}\) be \(\bar{\mathbb{P}}_{0}*\dot{\bar{\mathbb{Q}}}*\dot{\bar{\mathbb{T}}}\)-generic over \(\bar{N}\). There are a few things to note. First, let us point out that \(\bar{G}\) and \(\bar{K}\) are (coded as) subsets of \(\bar{\omega}_{2}\), the second uncountable cardinal from the point of view of \(\bar{N}\) (so \(\sigma(\bar{\omega}_{2})=\omega_{2}\)). Next, note that \(\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}\) is forcing equivalent to \(\mathbb{P}_{0}*\dot{\mathbb{T}}*\dot{\mathbb{Q}}\) since both \(\dot{\mathbb{Q}}\) and \(\dot{\mathbb{T}}\) are in \(V^{\mathbb{P}_{0}}\), and the same holds for the "bar" versions in \(\bar{N}\). Now note that since \(\mathbb{P}_{0}*\dot{\mathbb{T}}\) has a \(\sigma\)-closed dense subset, \(\sigma\)"\(\bar{G}*\bar{K}\) has a lower bound (in \(N\)), say \((p,t)\) (\(t\) is in the ground model and the \(\sigma\)-closed dense subset is simply the collection of conditions whose second coordinate is a check name decided by \(p\)). By \(\sigma\)-closedness, \((p,t)\) forces that there is a unique lift of \(\sigma:\bar{N}\prec N\) to some \(\sigma_{0}:\bar{N}[\bar{G}]\prec N[G]\) with \(\sigma_{0}(\bar{G})=G\) for any \(\mathbb{P}_{0}\)-generic \(G\ni p\) (technically we need to work in the extension by \(\mathbb{P}_{0}*\dot{\mathbb{Q}}\), but we only want to specify the embedding of the \(\bar{\mathbb{P}}_{0}\) extension). Fix such a \(G\) (from which \(\sigma_{0}\) is defined) and work in \(V[G]\). Note that \(\sigma_{0}\)"\(\bar{K}=\sigma\)"\(\bar{K}\) has \(t\in N\) as a lower bound.
Since \(\mathbb{Q}:=\dot{\mathbb{Q}}^{G}\) is \(\infty\)-subcomplete above \(\omega_{2}\), we can now apply the definition of \(\infty\)-subcompleteness to \(\sigma_{0}:\bar{N}[\bar{G}]\prec N[G]\) to obtain a condition \(q:=\dot{q}^{G}\in\mathbb{Q}\) so that if \(H\ni q\) is \(\mathbb{Q}\)-generic over \(V[G]\) then in \(V[G][H]\) there is a \(\sigma_{1}:\bar{N}[\bar{G}]\prec N[G]\) so that \(\sigma_{1}(\bar{G},\bar{\mathbb{P}}_{0},\dot{\bar{\mathbb{Q}}}^{\bar{G}},\dot{\bar{\mathbb{T}}}^{\bar{G}})=G,\mathbb{P}_{0},\mathbb{Q},\mathbb{T}\) where \(\mathbb{T}\in V[G]\) is \(\dot{\mathbb{T}}^{G}\), \(\sigma_{1}\)"\(\bar{H}\subseteq H\) and \(\sigma_{1}\upharpoonright\bar{\omega}_{2}=\sigma\upharpoonright\bar{\omega}_{2}\). Note also that by condensation we have that \(\bar{N}=L_{\bar{\tau}}[\bar{A}]\) and hence we can ensure that \(\sigma_{1}\upharpoonright\bar{N}:\bar{N}\prec N\). Now by the first observation above we know that since \(\bar{G}\) and \(\bar{K}\) are coded as subsets of \(\bar{\omega}_{2}\) it must be the case that in fact \(\sigma_{1}\upharpoonright\bar{G}=\sigma\upharpoonright\bar{G}\), and likewise for \(\bar{K}\). In particular \((p,t)\) is still a lower bound of \(\sigma_{1}\)"\(\bar{G}*\bar{K}\). Putting all of these observations together now ensures that the triple \((p,\dot{q},t)\in\mathbb{P}_{0}*\dot{\mathbb{Q}}*\dot{\mathbb{T}}\) forces that \(\sigma_{2}:=\sigma_{1}\upharpoonright\bar{N}\) witnesses that the three-step iteration is \(\infty\)-subcomplete above \(\omega_{2}\), as needed. 

Note the following corollary of Theorem 3.1.

**Corollary 3.4**.: \(\infty\)_-_SCFA _does not imply \(\mathsf{MA}^{+}(\sigma\text{-closed})\) assuming the consistency of a supercompact cardinal. In particular \(\infty\)-_SCFA _does not imply \(\mathsf{SCFA}^{+}\)._

Proof.: Begin with a model of \(\infty\)-\(\mathsf{SCFA}+2^{\aleph_{0}}=2^{\aleph_{1}}=\aleph_{2}\) (for instance a model of \(\mathsf{MM}\)). Force with \(\mathbb{P}_{0}\) to preserve these axioms and add a \(\square_{\omega_{1}}\)-sequence. Then \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\) and \(\square_{\omega_{1}}\) hold in the extension by Theorem 3.1. But, since \(\mathbb{P}_{0}\) does not collapse cardinals (by \(2^{\aleph_{1}}=\aleph_{2}\)) or add reals, the continuum is still \(\aleph_{2}\), hence \(\infty\)-\(\mathsf{SCFA}\) holds, yet \(\mathsf{MA}^{+}(\sigma\text{-closed})\) fails since this axiom implies that \(\square_{\kappa}\) fails for all \(\kappa\), see [5]. 

The proof of Theorem 3.1 can be generalized in many ways. Observe that very little about \(\mathbb{P}_{0}\) is used. For instance, an almost analogous proof gives the following.

**Theorem 3.5**.: _Assume \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\). The forcing \(\mathbb{S}_{\omega_{2}}\) to add an \(\omega_{2}\)-Souslin tree preserves \(\infty\)-\(\mathsf{SCFA}\upharpoonright\omega_{2}\)._

Sketch.: Let \(\mathbb{S}_{\omega_{2}}\) be the standard forcing to add an \(\omega_{2}\)-Souslin tree: conditions are binary trees \(p\subseteq 2^{<\omega_{2}}\) of size \(<\aleph_{2}\) ordered by end extension. This adds an \(\omega_{2}\)-Souslin tree and is \(\sigma\)-closed. Let \(\dot{T}_{\dot{G}}\) be the canonical name for the tree added, i.e., if \(G\subseteq\mathbb{S}_{\omega_{2}}\) is generic over \(V\) then \((\dot{T}_{\dot{G}})^{G}=\bigcup G\). Let \(\dot{\mathbb{Q}}\) be a \(\mathbb{S}_{\omega_{2}}\)-name for a forcing notion which is \(\infty\)-subcomplete above \(\omega_{2}\).
As before it is enough to show that \(\mathbb{S}_{\omega_{2}}*\dot{\mathbb{Q}}*\dot{T}_{\dot{G}}\) is \(\infty\)-subcomplete above \(\omega_{2}\), where \(\dot{T}_{\dot{G}}\) is the name for the tree as a forcing notion. The proof now proceeds exactly as before, noting that, since \(\dot{T}_{\dot{G}}\in V^{\mathbb{S}_{\omega_{2}}}\), we again have that \(\mathbb{S}_{\omega_{2}}*\dot{\mathbb{Q}}*\dot{T}_{\dot{G}}\) is forcing equivalent to \(\mathbb{S}_{\omega_{2}}*\dot{T}_{\dot{G}}*\dot{\mathbb{Q}}\), and the generic is coded by a subset of \(\omega_{2}\), hence not moved by the new embedding added by \(\dot{\mathbb{Q}}\). 

We have the following corollary, similar to Corollary 3.4 above, by invoking a model of \(\infty\)-\(\mathsf{SCFA}+2^{\aleph_{0}}=\aleph_{2}\).

**Corollary 3.6**.: _Assuming the consistency of a supercompact cardinal we have the consistency of \(\mathsf{SCFA}+\neg\mathsf{CH}+\neg\mathrm{TP}(\omega_{2})\)._

Here \(\mathrm{TP}(\omega_{2})\) is the tree property at \(\omega_{2}\), i.e., there are no \(\omega_{2}\)-Aronszajn trees. This result contrasts with [18, Corollary 4.1], which shows that under Rado's Conjecture, another forcing axiom-like statement compatible with \(\mathsf{CH}\), \(\mathrm{TP}(\omega_{2})\) is equivalent to \(\neg\mathsf{CH}\).

### The General Case

The proof of Theorem 3.1 can be easily generalized to establish that for any uncountable cardinal \(\mu\) adding a \(\square_{\mu}\)-sequence via \(\mathbb{P}_{0}\) preserves \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu^{+}\).

**Theorem 3.7**.: _Let \(\mu\) be an uncountable cardinal and assume \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu^{+}\) holds. If \(\mathbb{P}_{0}\) is the forcing from the previous subsection to add a \(\square_{\mu}\)-sequence then \(\mathbb{P}_{0}\) preserves \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu^{+}\)._

Proof.: In \(V^{\mathbb{P}_{0}}\) let \(\dot{\mathbb{T}}:=\dot{\mathbb{T}}_{\dot{G},\aleph_{1}}\). We only give the proof of the claim obtained from Claim 3.3 by replacing \(\omega_{2}\) with \(\mu^{+}\). The rest of the proof is exactly the same as that of Theorem 3.1 (just replace \(\omega_{2}\) with \((\mu^{+})^{V}\)). Suppose \(j:V\to N\), \(\theta\) and \(G*H*K\) are as in the proof of Claim 3.3. Let \(\beta:=(\mu^{+})^{V}=\sup_{p\in G}\mathrm{dom}(p)\). Then \(\bigcup K\in N\) is a club subset of \(\beta\) and coheres with all of the elements of \(G\). Note that all initial segments of \(\bigcup K\) are countable sets in \(V\). So \(K^{*}:=j\) "\(\bigcup K\) is club in \(\beta^{*}:=\sup(j\) "\(\beta)\) and coheres with all of the elements of \(G^{*}:=j\) "\(G\). Hence \((\bigcup G^{*})\cup\langle\beta^{*},K^{*}\rangle\) is a lower bound of \(j\)"\(G\) in \(j(\mathbb{P}_{0})\). 

Note that in particular \(\mathsf{SCFA}\) does not imply the failure of \(\square_{\mu}\) for any \(\mu<2^{\aleph_{0}}\). On the other hand, as in Figure 1 in the introduction, we have the following theorem, which is essentially known.

**Theorem 3.8**.: _Let \(2^{\aleph_{0}}\leq\nu\leq\kappa<\mu=\kappa^{+}\) be cardinals with \(\nu^{\omega}<\mu\). Modulo the existence of a supercompact cardinal \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu+\neg\infty\)-\(\mathsf{SCFA}\upharpoonright\nu\) is consistent._

Proof.: By Theorem 3.7 we know that \(\infty\)-\(\mathsf{SCFA}\upharpoonright\mu\) is consistent with \(\square_{\kappa}\), hence it suffices to show that \(\infty\)-\(\mathsf{SCFA}\upharpoonright\nu\) implies the failure of \(\square_{\kappa}\). This is essentially known, though it needs to be pieced together from a few sources.
First, to get the failure of \(\square_{\kappa}\), Jensen uses the forcing notion (at \(\kappa\)) from [14, Lemma 6.3]. Second, [8, Lemma 3.5] implies that this forcing notion is indeed \(\infty\)-subcomplete above \(\nu\) under the cardinal arithmetic assumptions mentioned in the theorem statement. See the proof of [8, Lemma 3.5] and the discussion therein for more details. 

## 4. Separating \(\mathsf{MM}\) from \(\mathsf{SubPFA}\)

In this section we prove the following result.

**Theorem 4.1**.: _Assume there is a supercompact cardinal. Then there is a forcing extension in which \(\infty\)-\(\mathsf{SubPFA}\) holds but \(\mathsf{MM}\) fails. In particular, modulo the large cardinal assumption, \(\infty\)-\(\mathsf{SubPFA}\) does not imply \(\mathsf{MM}\)._

The idea behind this theorem is a combination of the proof technique from [1, Theorem 2.6] and the proof of Theorem 3.1. Starting from a model of \(\mathsf{MM}\) we will force to add a non-reflecting stationary subset of \(2^{\aleph_{0}}\) (\(=\aleph_{2}\) since \(\mathsf{MM}\) holds). This kills \(\mathsf{MM}\) by the results of [5] but preserves \(\infty\)-\(\mathsf{SubPFA}\) by an argument similar to that of [1, Theorem 2.6]. The interesting difference is that \(\infty\)-\(\mathsf{SubPFA}\) (in fact \(\mathsf{SCFA}\)) implies that there are no non-reflecting stationary sets above the continuum (and more), so here \(\aleph_{2}\) matters; this is not true for \(\mathsf{PFA}\) (though the proof for \(\aleph_{2}\) is the same for \(\mathsf{PFA}\) and its "subversion"). We begin by recalling the relevant definitions.

**Definition 4.2**.: Let \(\kappa\) be a cardinal of uncountable cofinality and \(S\subseteq\kappa\). For a limit ordinal \(\alpha<\kappa\) of uncountable cofinality we say that \(S\) _reflects to \(\alpha\)_ if \(S\cap\alpha\) is stationary in \(\alpha\). We say that \(S\) is _non-reflecting_ if it does not reflect to any \(\alpha<\kappa\) of uncountable cofinality.

**Fact 4.3** (See Theorem 9 of [5]).: \(\mathsf{MM}\) _implies that for every regular \(\kappa>\aleph_{1}\) every stationary subset of \(\kappa\cap\operatorname{Cof}(\omega)\) reflects._

Compare this with the following.

**Fact 4.4** (See Lemma 6 of [14]).: \(\mathsf{SCFA}\) _implies that for every regular \(\kappa>2^{\aleph_{0}}\) every stationary subset of \(\kappa\cap\operatorname{Cof}(\omega)\) reflects._

_Remark 1_.: Note that in [14] it is claimed that \(\mathsf{SCFA}\) implies that the above holds for all \(\kappa>\aleph_{1}\), regardless of the size of the continuum. However, Sean Cox observed3 that the proof only works for \(\kappa>2^{\aleph_{0}}\). In light of Theorem 3.1 this bound is optimal. Footnote 3: Private communication, see also the discussion in [8] preceding Lemma 3.5.

There is a natural forcing notion to add a non-reflecting stationary subset \(S\subseteq\kappa\cap\operatorname{Cof}(\omega)\) for a fixed regular cardinal \(\kappa\). The definition and basic properties are given in Example 6.5 of [3]. We record the basics here for reference.

**Definition 4.5**.: Fix a regular cardinal \(\kappa>\aleph_{1}\). The forcing notion \(\mathbb{NR}_{\kappa}\) is defined as follows. Conditions are functions \(p\) with domain the set of countably cofinal ordinals below some ordinal \(\alpha<\kappa\), mapping into \(2\), with the property that if \(\beta\leq\sup(\operatorname{dom}(p))\) has uncountable cofinality then there is a set \(c\subseteq\beta\) club in \(\beta\) which is disjoint from \(p^{-1}(1)=\{\alpha\in\operatorname{dom}(p)\mid p(\alpha)=1\}\).
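In symbols, merely restating the clauses of Definition 4.5, a condition is a member of

\[\mathbb{NR}_{\kappa}=\Big\{\,p:\alpha\cap\operatorname{Cof}(\omega)\to 2\ \Big|\ \alpha<\kappa\text{ and for every }\beta\leq\sup(\operatorname{dom}(p))\text{ of uncountable cofinality there is a club }c\subseteq\beta\text{ with }c\cap p^{-1}(1)=\emptyset\,\Big\}.\]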
The extension relation is simply \(q\leq_{\mathbb{NR}_{\kappa}}p\) if and only if \(q\supseteq p\). Proofs of the following can be found in [3].

**Proposition 4.6**.: _For any regular \(\kappa>\aleph_{1}\) the forcing \(\mathbb{NR}_{\kappa}\) has the following properties._

1. \(\mathbb{NR}_{\kappa}\) _is_ \(\sigma\)_-closed._
2. \(\mathbb{NR}_{\kappa}\) _is_ \(\kappa\)_-strategically closed and in particular preserves cardinals._
3. _If_ \(G\subseteq\mathbb{NR}_{\kappa}\) _is generic then_ \(S_{G}:=\bigcup_{p\in G}p^{-1}(1)\) _is a non-reflecting stationary subset of_ \(\kappa\)_._

We omit the definition of strategic closure since we will not need it beyond the fact stated above; see [4] or [3] for a definition. Let \(\kappa\) be as above, let \(G\subseteq\mathbb{NR}_{\kappa}\) be generic over \(V\), and let \(S_{G}:=\bigcup_{p\in G}p^{-1}(1)\) be the generic non-reflecting stationary set. We want to define a forcing to kill \(S_{G}\) (this will be the "\(\dot{\mathbb{R}}\)" in our application of Theorem 1.9). Specifically we will define a forcing notion \(\mathbb{Q}_{S_{G}}\) so that forcing with \(\mathbb{Q}_{S_{G}}\) will add a club to \(\kappa\setminus S_{G}\) and hence kill the stationarity of \(S_{G}\). Note that since \(S_{G}\) is non-reflecting, its complement must also be stationary and indeed has to be _fat_, i.e., contain continuous sequences of arbitrary length \(\alpha<\kappa\) cofinally high.

**Definition 4.7**.: Borrowing the notation from the previous paragraph, define the forcing notion \(\mathbb{Q}_{S_{G}}\) as the set of closed, bounded subsets of \(\kappa\setminus S_{G}\) ordered by end extension.

Clearly the above forcing generically adds a club to the complement of \(S_{G}\), thus killing its stationarity; see [3, Definition 6.10]. It is also \(\omega\)-distributive. We are now ready to prove Theorem 4.1.

Proof of Theorem 4.1.: Assume \(\infty\)-\(\mathsf{SubPFA}\) holds (the consistency of this is the only application of the supercompact). Note that the continuum is \(\aleph_{2}\) and will remain so in any cardinal preserving forcing extension which adds no reals. Let \(\mathbb{P}=\mathbb{NR}_{\aleph_{2}}\), let \(G\subseteq\mathbb{P}\) be generic over \(V\), and work in \(V[G]\). Obviously in this model we have "there is a non-reflecting stationary subset of \(\aleph_{2}\)" and thus \(\mathsf{MM}\) fails by Fact 4.3. We need to show that \(\infty\)-\(\mathsf{SubPFA}\) holds. We will apply Theorem 1.9 much as in the proof of Theorem 3.1. Let \(\dot{\mathbb{Q}}\) be a \(\mathbb{P}\)-name for an \(\infty\)-subproper forcing notion and let \(\dot{\mathbb{R}}\) name \(\mathbb{Q}_{S_{\dot{G}}}\) in \(V^{\mathbb{P}*\dot{\mathbb{Q}}}\) (NOT just in \(V^{\mathbb{P}}\); this is different from the proof of Theorem 3.1 and crucial). By exactly the same argument as in the proof of Theorem 3.1 it suffices to show that \(\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}\) is \(\infty\)-subproper (in \(V\)). This is because (2) from Theorem 1.9 follows from the fact that, borrowing the notation from that theorem applied to our situation, \(\dot{\mathbb{R}}\) shoots a club through the complement of \(S_{G}\), hence \(j``S_{G}=S_{G}\) is non-stationary in its supremum and so \(j\)"\(G\) has a lower bound in \(N\). So we show that \(\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}\) is \(\infty\)-subproper. This is very similar to the proof of Theorem 3.1, but enough details are different to warrant repeating everything for completeness.
Let \(\tau>\theta\) be sufficiently large cardinals and \(\sigma:\bar{N}\prec N=L_{\tau}[A]\supseteq H_{\theta}\) be as in the definition of \(\infty\)-subproperness. Let \(\sigma(\bar{\mathbb{P}},\dot{\bar{\mathbb{Q}}},\dot{\bar{\mathbb{R}}},\bar{\omega}_{2})=\mathbb{P},\dot{\mathbb{Q}},\dot{\mathbb{R}},\omega_{2}\). Let \((p_{0},\dot{q}_{0},\dot{r}_{0})\) be a condition in \(\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}\) with \(\sigma(\bar{p}_{0},\dot{\bar{q}}_{0},\dot{\bar{r}}_{0})=(p_{0},\dot{q}_{0},\dot{r}_{0})\). Applying the \(\sigma\)-closure of \(\mathbb{P}\) we can find a \(\bar{\mathbb{P}}\)-generic \(\bar{G}\) over \(\bar{N}\) and a condition \(p\leq p_{0}\) so that \(p\) is a lower bound of \(\sigma``\bar{G}\) and, letting \(\alpha=\sup(\sigma``\bar{\omega}_{2})\), we have \(p(\alpha)=0\) (i.e., \(p\) forces \(\alpha\) to not be in the generic stationary set). Let us assume \(p\in G\) and note that this condition forces \(\sigma``\bar{G}\subseteq G\), and hence \(\sigma\) lifts uniquely to a \(\tilde{\sigma}:\bar{N}[\bar{G}]\prec N[G]\) such that \(\tilde{\sigma}(\bar{G})=G\) and \(\alpha:=\sup(\sigma``\bar{\omega}_{2})\notin S_{G}\). Let \(\bar{\mathbb{Q}}=\dot{\bar{\mathbb{Q}}}^{\bar{G}}\) as computed in \(\bar{N}[\bar{G}]\) and let \(\bar{q}_{0}=\dot{\bar{q}}_{0}^{\bar{G}}\in\bar{N}[\bar{G}]\). Applying the fact that \(\dot{\mathbb{Q}}\) is forced to be \(\infty\)-subproper, let \(q\leq q_{0}=\tilde{\sigma}(\bar{q}_{0})\) be a condition forcing that if \(H\subseteq\mathbb{Q}\) is \(V\)-generic with \(q\in H\) then there is a \(\sigma^{\prime}\in V[G][H]\) so that \(\sigma^{\prime}:\bar{N}[\bar{G}]\prec N[G]\) as in the definition of \(\infty\)-subproperness (with respect to \(\tilde{\sigma}\)). Note that, as in the proof of Theorem 3.1, \(\sigma^{\prime}\upharpoonright\bar{N}:\bar{N}\prec N\) and \(\sigma^{\prime}\upharpoonright\bar{\omega}_{2}=\sigma\upharpoonright\bar{\omega}_{2}\). Let \(\tilde{\sigma}^{\prime}:\bar{N}[\bar{G}][\bar{H}]\to N[G][H]\) be the lift of \(\sigma^{\prime}\), where \(\bar{H}=(\sigma^{\prime})^{-1}``H\).

**Claim 4.8**.: _In \(V[G][H]\) the set \(S_{G}\) does not contain a club._

Proof of Claim.: Since \(\aleph_{2}\) is the continuum in \(V[G]\), note that \(\omega_{2}^{V[G]}\) remains uncountably cofinal in \(V[G][H]\) (though of course it can be collapsed to \(\omega_{1}\)). Suppose towards a contradiction that \(S_{G}\) contains a club and note that, since we chose \(\theta\) etc. sufficiently large, we have \(N[G][H]\models``\exists C\) which is club and \(C\subseteq S_{G}\)". By elementarity there is a \(\bar{C}\in\bar{N}[\bar{G}][\bar{H}]\) so that

\[\bar{N}[\bar{G}][\bar{H}]\models\bar{C}\subseteq\bar{S}_{G}\;\text{is\,club}\]

where \(\bar{H}:=(\sigma^{\prime})^{-1}``H\) is \(\bar{\mathbb{Q}}\)-generic over \(\bar{N}[\bar{G}]\) by the definition of \(\infty\)-subproperness and the choice of \(q\). But now note that if \(C=\tilde{\sigma}^{\prime}(\bar{C})\) then \(C\cap\alpha\) is cofinal in \(\alpha\) by elementarity, so \(\alpha\in C\) but \(\alpha\notin S_{G}\), which is a contradiction. 

Given the claim we know that \(\omega_{2}^{V}\setminus S_{G}\) is a stationary set in \(V[G][H]\) and hence \(\mathbb{R}:=\dot{\mathbb{R}}^{G*H}\) is the forcing to shoot a club through a stationary set. Let \(\bar{\mathbb{R}}\in\bar{N}[\bar{G}][\bar{H}]\) be \(\dot{\bar{\mathbb{R}}}^{\bar{G}*\bar{H}}\).
Note that for each \(\beta\in\bar{N}\cap\bar{\omega}_{2}\) it is dense (in \(\bar{N}[\bar{G}][\bar{H}]\)) that there is a condition \(\bar{r}\in\bar{\mathbb{R}}\) with \(\max(\bar{r})\geq\beta\). It follows that if \(\bar{K}\) is generic for \(\bar{\mathbb{R}}\) over \(\bar{N}[\bar{G}][\bar{H}]\) with \(\bar{K}\ni\bar{r}_{0}:=\dot{\bar{r}}_{0}^{\bar{G}*\bar{H}}\) then \(\tilde{\sigma}^{\prime}``\bar{K}\) unions to a club in \(\alpha\setminus S_{G}\). Since \(\alpha\notin S_{G}\) we have that \(r:=\bigcup\tilde{\sigma^{\prime}}``\bar{K}\cup\{\alpha\}\) is a condition in \(\mathbb{R}\) which is a lower bound of \(\tilde{\sigma^{\prime}}``\bar{K}\), and hence \(r\leq\dot{r}_{0}^{G*H}\). Finally let \(K\ni r\) be \(\mathbb{R}\)-generic over \(V[G][H]\). It is now easy to check that the condition \((p,\dot{q},\dot{r})\) and \(\sigma^{\prime}\upharpoonright\bar{N}\) collectively witness the \(\infty\)-subproperness of \(\mathbb{P}*\dot{\mathbb{Q}}*\dot{\mathbb{R}}\), so we are done. 

We note that by the same proof, adding a non-reflecting stationary subset of \(\mu\cap\text{Cof}(\omega)\) for larger cardinals \(\mu\) preserves \(\infty\)-\(\mathsf{SubPFA}\upharpoonright\mu\). The following therefore holds.

**Theorem 4.9**.: _Let \(2^{\aleph_{0}}\leq\mu\leq\lambda<\nu=\lambda^{+}\) be cardinals with \(\mu^{\omega}<\nu\). Modulo the existence of a supercompact cardinal \(\infty\)-\(\mathsf{SubPFA}\upharpoonright\nu+\neg\infty\)-\(\mathsf{SubPFA}\upharpoonright\mu\) is consistent._

The proof of this theorem completes the proof of all non-implications involved in Main Theorem 1.1.

## 5. Conclusion and Open Questions

We view this paper, alongside its predecessor [10], as showing, amongst other things, that the continuum forms an interesting dividing line for subversion forcing: below the continuum the "sub" plays no role, as witnessed by the fact that the same non-implications hold as for the non-sub versions; above the continuum, it adds considerable strength to the associated forcing axioms. However, as of now we only know how to produce models of \(\mathsf{SCFA}\) in which the continuum is either \(\aleph_{1}\) or \(\aleph_{2}\). The most pressing question in this area is therefore whether \(\mathsf{SCFA}\) can consistently co-exist with a larger continuum.

_Question 1_.: Is \(\mathsf{SCFA}\) consistent with the continuum \(\aleph_{3}\) or greater?

We note here that the most obvious attempt to address this question, i.e., starting with a model of \(\mathsf{SCFA}\) and adding \(\aleph_{3}\)-many reals with e.g. ccc forcing, does not work, an observation due to the first author.

**Lemma 5.1**.: _Suppose \(\mathbb{P}\) is a proper forcing notion adding a real. Then \(\mathsf{SCFA}\) fails in \(V^{\mathbb{P}}\)._

All that is needed about "properness" here is that being proper implies that stationary subsets of \(\kappa\cap\operatorname{Cof}(\omega)\) are preserved. The proof is standard and generalizes the proof of Lemma 2.4 above (swapping subproper for proper and removing the bound by the continuum).

Proof.: Assume \(\mathbb{P}\) is proper. Let \(G\) be a \(\mathbb{P}\)-generic filter over \(V\). For a contradiction, assume \(\mathsf{SCFA}\) holds in \(V[G]\). Take a regular cardinal \(\nu>2^{\omega}\) in \(V[G]\). In \(V\), take stationary partitions \(\langle A_{k}:k<\omega\rangle\) of \(\nu\cap\operatorname{Cof}(\omega)\) and \(\langle D_{i}:i<\omega\rangle\) of \(\omega_{1}\). In \(V[G]\), take a subset \(r\) of \(\omega\) which is not in \(V\). Let \(\{k(i)\}_{i<\omega}\) be the increasing enumeration of \(r\).
By [14, Lemma 7.1], in \(V[G]\), there is an increasing continuous function \(f:\omega_{1}\to\nu\) such that \(f[D_{i}]\subseteq A_{k(i)}\) for all \(i<\omega\). Let \(\alpha:=\sup(\operatorname{range}(f))\). Then, in \(V[G]\), we have that \(r=\{k\in\omega:A_{k}\cap\alpha\) is stationary in \(\alpha\}\). But the set \(\{k\in\omega:A_{k}\cap\alpha\) is stationary in \(\alpha\}\) is absolute between \(V\) and \(V[G]\): \(\mathbb{P}\) is proper and hence preserves the stationarity of sets of \(\operatorname{Cof}(\omega)\) points, while non-stationarity is automatically preserved upwards. But then \(r\) is in \(V\), which is a contradiction. 

This shows that either \(\mathsf{SCFA}\) implies that the continuum is at most \(\aleph_{2}\) (though given the results of this paper this seems difficult to prove by methods currently available), or else new techniques for obtaining \(2^{\aleph_{0}}\geq\aleph_{3}\) are needed, which is well known to be, in general, an open and difficult area on the frontiers of set theory.
2303.14713
Engineering Software Systems for Quantum Computing as a Service: A Mapping Study
Quantum systems have started to emerge as a disruptive technology and enabling platforms - exploiting the principles of quantum mechanics - to achieve quantum supremacy in computing. Academic research, industrial projects (e.g., Amazon Braket), and consortiums like 'Quantum Flagship' are striving to develop practically capable and commercially viable quantum computing (QC) systems and technologies. Quantum Computing as a Service (QCaaS) is viewed as a solution attuned to the philosophy of service-orientation that can offer QC resources and platforms, as utility computing, to individuals and organisations who do not own quantum computers. To understand the quantum service development life cycle and pinpoint emerging trends, we used evidence-based software engineering approach to conduct a systematic mapping study (SMS) of research that enables or enhances QCaaS. The SMS process retrieved a total of 55 studies, and based on their qualitative assessment we selected 9 of them to investigate (i) the functional aspects, design models, patterns, programming languages, deployment platforms, and (ii) trends of emerging research on QCaaS. The results indicate three modelling notations and a catalogue of five design patterns to architect QCaaS, whereas Python (native code or frameworks) and Amazon Braket are the predominant solutions to implement and deploy QCaaS solutions. From the quantum software engineering (QSE) perspective, this SMS provides empirically grounded findings that could help derive processes, patterns, and reference architectures to engineer software services for QC.
Aakash Ahmad, Muhammad Waseem, Peng Liang, Mahdi Fehmideh, Arif Ali Khan, David Georg Reichelt, Tommi Mikkonen
2023-03-26T12:50:22Z
http://arxiv.org/abs/2303.14713v1
# Engineering Software Systems for Quantum Computing as a Service: A Mapping Study ###### Abstract Quantum systems have started to emerge as a disruptive technology and enabling platforms - exploiting the principles of quantum mechanics - to achieve quantum supremacy in computing. Academic research, industrial projects (e.g., Amazon Braket), and consortiums like 'Quantum Flagship' are striving to develop practically capable and commercially viable quantum computing (QC) systems and technologies. Quantum Computing as a Service (QCaaS) is viewed as a solution attuned to the philosophy of service-orientation that can offer QC resources and platforms, as utility computing, to individuals and organisations who do not own quantum computers. To understand the quantum service development life cycle and pinpoint emerging trends, we used evidence-based software engineering approach to conduct a systematic mapping study (SMS) of research that enables or enhances QCaaS. The SMS process retrieved a total of 55 studies, and based on their qualitative assessment we selected 9 of them to investigate (i) the functional aspects, design models, patterns, programming languages, deployment platforms, and (ii) trends of emerging research on QCaaS. The results indicate three modelling notations and a catalogue of five design patterns to architect QCaaS, whereas Python (native code or frameworks) and Amazon Braket are the predominant solutions to implement and deploy QCaaS solutions. From the quantum software engineering (QSE) perspective, this SMS provides empirically grounded findings that could help derive processes, patterns, and reference architectures to engineer software services for QC. Quantum Software Engineering, Quantum Service Computing, Systematic Mapping Study, Software Services. ## I Introduction Quantum computing (QC) has started to emerge as a disruptive technology and an enabling platform - exploiting the principles of quantum mechanics - relying on Quantum Bits (QuBits) that manipulate Quantum Gates (QuGates) to tackle computationally intensive tasks efficiently [1]. QC systems are in a phase of continuous evolution and despite being in a state of their infancy, such systems have started to computationally outperform their classical counterparts (i.e., digital computers) in applications such as quantum information processing, bio-inspired computing, and simulation of quantum mechanics [2]. Academic research [1][3] and industrial initiatives led by technology giants such as IBM, Google, and Microsoft [4] are striving hard to achieve strategic advantages associated with quantum systems and software technologies in a so-called 'race to quantum economy'. A recent report presented at the World Economic Forum titled _State of Quantum Computing: Building a Quantum Economy_ highlights that by the year 2022, public and private investments in quantum computing technologies totalled $35.5 billion [5]. Despite the strategic capabilities that can be attained via quantum supremacy in computing; programming, operationalising, and maintaining a quantum computer is a complex and radically distinct engineering paradigm [1]. To augment the quantum hardware development, state-funded projects, and global consortiums are proactively funding initiatives such as the Quantum Flagship [6] and National Quantum Initiative [7] to develop software ecosystems, networking technologies, and human expertise for the alleged quantum leap in computing [8]. 
**Motivation: Pay-per-use QCaaS -** Service-oriented systems have proven to be useful in supporting the utility computing model that relies on a pay-per-use approach, allowing individuals and organisations to utilize a multitude of computing services without the need to own or maintain them [9]. Since their early adoption, pay-per-use service-driven systems have grown from data storage, video streaming, and entertainment to resource-sharing applications that represent a multi-billion dollar industry in service economies [10][11]. The 'as-a-Service' (aaS) model has provided the impetus for wide-scale adoption of service systems that can offer a plethora of computing services, including but not limited to storage, computation, infrastructure, platform, and software, to end-users [10]. The aaS model can enable users and developers to exploit the QC platforms (e.g., processors, memory, simulators) offered by quantum vendors, such as Amazon, Google, and IBM [12]. Quantum Computing as a Service (QCaaS), as in Figure 1, is a recent and quantum-specific genre of the aaS model that is built on the philosophy of service-orientation of QC, i.e., pay-per-shot at QC resources instead of owning, programming, and/or maintaining quantum computers [13]. Quantum vendors view pay-per-shot as an opportunistic business model to generate revenue streams from their QC infrastructures, where a shot is a single execution of a quantum algorithm on a quantum processing unit (QPU). QCaaS can alleviate the need to own or maintain quantum computers and can help software and service developers to rely on existing knowledge and best practices to develop software services that can be executed on QC platforms [14]. Quantum software services enable developers to wrap data and computations inside loosely coupled, fine-grained modules of source code to execute tasks such as prime factorisation, key encryption, or bio-simulations on QC platforms [13]. There is a growing interest in the academic community and among industrial vendors to research and develop solutions for enabling quantum service-orientation [12][13].

**Needs for the mapping study:** Systematic mapping studies (SMS) rely on an evidence-based software engineering approach to systematically and reproducibly identify, analyse, synthesise, and document (i.e., map the trends of) existing research on the topic under investigation [15]. The academic community aims to exploit QC platforms for empiricism in quantum research [12], while the vendors pursue a revenue stream as well as validation and testing of their under-development quantum platforms, offered as a service [13]. Synergising research and development on quantum computing with service-orientation can allow researchers and developers to leverage existing design principles, patterns, architectural styles, and modelling languages of services computing to offer QC as a service [14]. Moreover, systemising the efforts to architect and implement QCaaS requires discovering new patterns and developing innovative frameworks rooted in empirically grounded guidelines to research emerging challenges and develop futuristic solutions [16]. We conducted this SMS based on two research questions to investigate (i) _existing solutions in terms of designing, implementing, deploying, and operationalizing quantum software services_ and (ii) _emerging trends that can indicate futuristic research on QCaaS_.
The results indicate that during quantum service-orientation, software modeling languages (e.g., UML) and patterns (e.g., service wrapping, API gateway) help in mapping functional requirements to service implementations (e.g., Python code) that can be deployed on QC platforms (e.g., Amazon Braket). Emerging trends indicate non-functional aspects, model-driven engineering (low-code development), empirically discovered tactics, human roles, and process-centric development of QCaaS. **Contributions and implications** of this SMS are to:

* identify and document the collective impact of existing research (published academic evidence) to investigate the extent to which classical and quantum-specific service-orientation can be applied to QCaaS.
* identify existing gaps that reflect the dimensions of future research to develop emerging and next-generation QCaaS solutions.

Academic researchers can rely on the mapping of existing research and the vision for future work to research and develop QCaaS solutions in the broader context of QSE [1][6]. Practitioners can rely on academic references about patterns (reusability), modeling notations (representation), and implementation (prototyping) to develop QCaaS [12].

## II Research Context and Method

We now contextualise service-orientation for QCs in Section II-A and present the research method in Section II-B.

### _Context: Service-Orientation for Quantum Computing_

#### II-A1 Quantum Computing Systems

We briefly overview a QC system that comprises quantum hardware and software elements, as shown in Figure 2a). Fundamental to achieving quantum computations are Quantum Bits (QuBits) that represent the basic unit of quantum information processing by manipulating Quantum Gates (QuGates) [2][8]. Traditional Binary Digits (Bits) in classical systems (i.e., digital computers) are represented as [1, 0], where 1 represents the computation state as On and 0 represents the state as Off, to manipulate binary gates in digital circuits. In comparison, a QuBit represents a two-state quantum system expressed as \(|0\rangle\) and \(|1\rangle\). The basis states of a single QuBit can be expressed as \(|0\rangle=\begin{bmatrix}1\\ 0\end{bmatrix}\) and \(|1\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\), and quantum superposition allows a QuBit to attain a linear combination of both states:

\[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle=\alpha\begin{bmatrix}1\\ 0\end{bmatrix}+\beta\begin{bmatrix}0\\ 1\end{bmatrix},\qquad|\alpha|^{2}+|\beta|^{2}=1 \tag{1}\]

Based on Figure 2a), we distinguish between a Bit and a QuBit such that a Bit can take a value of either 'Off-0' or 'On-1' with 100% probability. In comparison, a QuBit can be in the state \(|0\rangle\) or \(|1\rangle\), or in a superposition state with 50% \(|0\rangle\) and 50% \(|1\rangle\). In addition, two QuBits can be entangled, and entangled QuBits are linked in a way that observing (i.e., measuring) one of the QuBits can reveal the state of the other QuBit. Extended details about QuBits and QuGates to develop and operate QC systems are reported in studies like [1] and [12].

Fig. 1: A Generic View of the QCaaS Model

To utilise quantum computing resources, such as the quantum processor and memory, there is a need for control software that can program QuBits to manage the QuGates of a QC system. Quantum software systems rely on quantum source code compilers that allow quantum algorithm designers and programmers to write, build, and execute software for quantum computers. For example, a programmer can use a quantum programming language, such as Q# (by Microsoft) or Qiskit (by IBM), and use a quantum compiler to enable programmable quantum computations [4][17]. Software systems that can manage and control quantum hardware find their applications in areas including but not limited to quantum cryptography, bio-inspired computing, and quantum information processing [1]. However, the scarcity of quantum hardware, the lack of quantum software professionals, and the economics of owning or maintaining QC are some critical factors that impede commercially viable quantum computers [2][3]. Vendors who offer QCaaS platforms view quantum service-orientation as an opportunistic business model that offers QC resources to customers as utility computing [9][12].

#### II-A2 Service-Oriented Computing

Services computing follows the SOA style that allows service users to discover and utilise a multitude of available software services that encapsulate computing resources and applications offered by service vendors/providers [10]. Figure 2b) shows SOA style-based quantum servicing where a QC user (i.e., service requester) can utilise the QC resources offered by quantum vendors (i.e., service providers) by means of quantum services. In most cases, QC systems of today are not capable of executing quantum algorithms wrapped with large amounts of data, inputs, and outputs [2][8]. As shown in Figure 2a), large volumes of data in quantum algorithms require more QuBits and complex QuGates that result in deep quantum circuiting and consequently increased errors, characteristic of noisy intermediate-scale quantum (NISQ) systems [4]. To address issues like NISQ, the classic-quantum split pattern slices the overall quantum software or application into classical modules (pre/post-processing) and quantum modules (quantum computation), which results in hybrid applications [16]. One of the prime examples of the classic-quantum split pattern is Shor's algorithm, which involves quantum computations for finding the prime factors of an integer, with applications in computer security and cryptography. Quantum service-orientation, when viewed from a utility computing perspective, can minimise the quantum divide, a prevailing issue highlighted at the World Economic Forum 2023, between states/entities that own or lack QC systems, technologies, and infrastructures [18].

### _Research Method for the SMS_

We now discuss the research method, driven by three phases as illustrated in Figure 3, based on the guidelines to conduct the SMS [15].

#### II-B1 Phase I - Specifying the Research Questions

Research questions (RQs) are fundamental to conducting the SMS and documenting the results. We outlined two RQs for this SMS.

**RQ-1**: What solutions are reported in the literature to support the development of quantum computing as a service?

**Objective(s)** - To investigate the state-of-the-art in terms of existing solutions that enable or enhance QCaaS computing. A multi-perspective analysis can reveal the functionality offered by the solutions, modeling languages and patterns to design the solutions, and programming technologies along with deployment platforms to implement and operationalise the solutions.

**RQ-2**: What are the emerging trends of research on quantum computing as a service?

**Objective(s)** - To identify and discuss the emerging trends that can help pinpoint the prevalent challenges and their solutions as dimensions of potentially futuristic research on QCaaS.
The emerging trends can help to provide a road map for progressing research and development on QCaaS.

#### II-B2 Phase II - Identifying and Selecting the Literature for SMS

Based on the guidelines for literature search, we formulated a generic string to be executed on prominent Electronic Data Sources (EDS) [19] including IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, and Wiley Online Library. Google Scholar was used as a complementary EDS to ensure that we did not miss any relevant study for selection. The search string presented in Figure 3 is generic; it combines logical operators (AND, OR) to compose the key terms (e.g., Quantum AND Service OR Cloud) and was customised for each EDS individually. Customised search strings are provided as part of the SMS protocol [20]. We conducted a pilot search to assess the need for any customisation to the search string(s) or any filters applied on specific EDS, to avoid an exhaustive search resulting in a significant number of unrelated studies. For example, we limited our search on IEEE Xplore from 'Full Text & Metadata' to 'Document Title', as searching for our defined key terms in full text and metadata yielded a significant number of irrelevant studies (e.g., cloud services, quantum hardware).

**Screening and Quality Assessment:** By executing the customised search strings on the five selected EDS, the SMS process retrieved a total of 55 potentially relevant studies. To complement the automated search on EDS, we applied the forward snowballing process [17] as a manual effort.

Fig. 2: An Overview of the Quantum Service-Orientation

The forward snowballing approach involves looking up the references or bibliography sections of the 55 studies, referred to as the seed set in snowballing, to see if any relevant cited literature can be found. The forward snowballing helped us to identify a total of 13 studies, resulting in a total of 68 studies (55: EDS and 13: snowballing). To assess and select the studies for review, we performed study screening based on the criteria (Step I: S1 - S4) in Table I. Most of the studies identified during the snowballing failed the screening criteria S3 and S4, which means the studies were either duplicate studies or secondary/survey studies that cannot be included in the review. Based on the screening of the identified studies, more specifically reading through the titles, abstracts, and conclusions, we shortlisted a total of 11 potentially relevant studies to be qualitatively assessed (Step II: Q1 - Q5) for their inclusion in the review for the SMS. Based on the quality assessment, we excluded 2 studies to finally select a total of 9 studies to be included in the review. The list of selected studies for the SMS is provided in **Appendix A**.

#### II-B3 Phase III - Documenting the Results

To document the results, i.e., answering the RQs objectively, we extracted the data from the selected studies in Appendix A and documented it using a structured format, having seven criteria, in Table II. The criteria focus on conceptualising, designing, developing, and deploying quantum software services by following the IBM SOA foundation life cycle (SOA life cycle for short) [21]. To contextualise QCaaS from Figure 2 and to ensure a fine-grained analysis of the SMS data, we have divided the 'Model' activity from the SOA life cycle into two activities, namely Conception and Model, to distinguish between functional needs (conception) and representation (modeling) of quantum service design.
Model represents the conception as the design specification of functional needs for quantum services. We do not include the 'Manage' phase from the SOA life cycle, as we could not find any evidence in the literature that supports identity, compliance, and business metrics management of quantum services.

* **Conception** as the initial activity in the service life cycle aims to conceptualise the functional aspects of quantum services by capturing the details of required functionality, i.e., functional requirements. Conception aims to identify: _what are the business needs of a quantum service?_
* **Model** activity aims to translate the conception into a design that acts as a blueprint for implementing quantum services. Model focuses on: _how to represent the conception as the design of the solution?_
* **Assemble** focuses on implementing the design to produce a concrete, i.e., executable, specification of quantum services. Assemble aims to address: _how to implement the design as executable services?_
* **Deploy** as the last activity aims to deploy the assembled solution for operationalisation and usage of quantum services. Deploy focuses on: _what platforms can be used to deploy the assembled (implemented) solution?_

#### II-B4 Threats to the Validity of SMS

Systematic literature reviews and mapping studies are prone to a number of validity threats that refer to deviation, limitations, or invalidation of study results when applied to a theoretical or practical context. **Construct validity** of the SMS corresponds to the rigor of the study protocol and methodological details to extract, analyse, and synthesise the data to objectively answer the RQs and present the data systematically. To avoid this threat, i.e., avoiding bias in data extraction and documentation, we applied well-practiced guidelines [15, 19], derived the search strings (Figure 3), and devised a structured format (Table II) to collect and present the data consistently. **Internal validity** examines SMS design, conduct, and analysis to answer the RQs without bias. To minimize this threat, we synthesized the data based on the well-known IBM SOA life cycle [21] that structures the results into fine-grained life cycle activities. We documented the results while performing a quality assessment (Table I) and using a well-defined service life cycle template. **External validity** of the SMS refers to the extent to which the findings of the study can be generalised/externalised to research and development projects. It is challenging to foresee and outline the predictive implications of the study results. We have outlined the implications, and the generalisation of the study findings (Table II, Figure 4, Figure 5) can provide the basis for creating a reference architecture for QCaaS as future work. The documented results are discussed in Section III (RQ-1) and Section IV (RQ-2).

TABLE I: Criteria for Screening and Qualitative Assessment of Selected Studies

**Study Selection Step I - Screening of Identified Studies**
- S1 - The study does not discuss any solution or proposal for quantum computing as a service
- S2 - The study is not reported in English
- S3 - The study is a duplicate study. Duplicate studies are studies with overlapping contents, e.g., a conference paper extended as a journal article.
- S4 - The study is a secondary study/survey paper

Exclude the study if the answer to any of the criteria in Step I (S1 - S4) results in Yes; otherwise, include the study for quality assessment in Step II.

**Study Selection Step II - Quality Assessment of the Identified Studies** (each criterion scored: Yes = 1, Partially = 0.5, No = 0)
- Q1 - Study objectives and contributions are clear
- Q2 - Research method to conduct the study is reported
- Q3 - Design and/or implementation details of the solution are provided
- Q4 - Details for experiments/evaluation/demonstration of the solution are provided
- Q5 - Study limitations and needs for future research are discussed

Exclude the study if its quality assessment score (Q1 - Q5) is less than 2.0; otherwise, include the study for review and data extraction in Table II.

Fig. 3: Overview of the Research Method

## III Engineering Software for QCaaS (RQ-1)

We now discuss the existing solutions, reported in the literature, that support the development of quantum services to operationalise QCaaS solutions. The data extracted from the selected studies is presented in Table II and visualised in Figure 4. Table II can be viewed as a catalogue that organises a summary of the core findings to answer RQ-1 based on the four activities of the SOA life cycle (Phase III, Figure 3).

**Illustrative Example**: Figure 4a) exemplifies the service life cycle with _functional aspects_ that require a quantum service to compute the prime factors of an integer. The functional aspects need _modeling_, which uses the Unified Modeling Language (UML) component diagram [22] as the modeling notation to specify computational elements (components) and their interconnections (connectors) in a service. The Classic-Quantum Split pattern [16] is applied to slice the functionality between a classical computer (pre-/post-processing, e.g., the Gen_Num component) and a quantum computer (prime factorisation, the Factorize component). The model acts as a blueprint to support the _assembly_ of a service using a programming language (a Qiskit code snippet) that converts the design specifications of a service into executable specifications. Finally, the _deployment_ activity is shown as a UML deployment diagram to configure the assembled service on a quantum computer provided by the quantum vendor (Amazon Braket).

### **Conception**: Functional Aspects

During the conception activity, functional aspects of a service relate to identifying and outlining the functionality to be offered by a quantum service. The functionality can be achieved via service execution on a quantum computer (provider), whereas the service is developed or invoked by the user (requester). Figure 2 shows that, in order to exploit QC platforms for quantum functionality, service requesters can develop/discover new and/or available services, such as quantum simulation or quantum cryptography, using the SOA patterns [10].
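To make the illustrative example above more tangible, the following minimal Python/Qiskit sketch shows the shape of the Classic-Quantum Split for the prime-factorisation service. The function names (`gen_num`, `period_finding_circuit`, `factors_from_period`) merely mirror the components of Figure 4a) and are hypothetical, and the quantum part is a skeleton rather than a full Shor implementation:

```python
from math import gcd
from qiskit import QuantumCircuit

# Classical module (cf. the Gen_Num component): pre-processing on a
# classical host; a real service would receive N from the requester.
def gen_num() -> int:
    n = 15  # toy value
    assert n % 2 == 1 and n > 3, "trivial cases are handled classically"
    return n

# Quantum module (cf. the Factorize component): the circuit that a QCaaS
# request would submit as a batch of 'shots'. A full Shor implementation
# needs controlled modular exponentiation and an inverse QFT; only the
# skeleton is sketched here.
def period_finding_circuit(n_count: int = 4) -> QuantumCircuit:
    qc = QuantumCircuit(n_count, n_count)
    qc.h(range(n_count))  # superposition over the counting register
    # ... controlled modular exponentiation + inverse QFT go here ...
    qc.measure(range(n_count), range(n_count))
    return qc

# Classical module again: post-processing a measured period r into factors.
def factors_from_period(n: int, a: int, r: int) -> tuple[int, int]:
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)
```

Keeping the QPU-bound circuit small (fewer QuBits, shallower QuGate depth) is exactly the NISQ-mitigation rationale of the split discussed in Section II.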
In the SMS, we identified a multitude of functional aspects for quantum services and organized them into five categories, namely _experimental_, _service delivery_, _number crunching_, _data searching_, and _natural computing_, as shown in Table II. For example, Figure 4a) highlights the generic functional aspect of number crunching that involves prime factorisation of integers for Shor's algorithm [1][2]. The diversity of existing functional aspects is proportional to the capability of the current era of quantum computers, which is considered limited due to a number of factors such as simplistic quantum circuitry (e.g., fewer QuBits/QuGates) and quantum errors (e.g., NISQ) [4]. Quantum software services that support functional aspects of QC are merely capable of checking the status of quantum circuits (experimental) or the generation of random numbers using quantum hardware (number crunching). Functional aspects reflect only a partial view of system design in terms of offered functionality and should not overlook the non-functional or quality aspects of QCaaS solutions. For example, resource efficiency, in terms of utilizing the minimal available QuBits to generate prime factors, can ensure the required functionality and desired quality (e.g., service efficiency, execution performance) of QCaaS.

### **Modeling**: Notations and Patterns

During service modeling, modeling notations, such as Q-UML or ontological models, can help create a blueprint for the implementation of functional aspects [22]. Patterns can complement the notations by providing reusable knowledge and design rationale to architect the quantum services [13][16]. _Modeling notations_ are fundamental to the creation, maintenance, and evolution of models such as ontological structures and graph-based diagrams that provide a visual representation, whereas architectural description languages support a textual specification for software-intensive systems [22]. Recent trends in software engineering that promote model-driven and low-code application development have resulted in transitioning developers' focus from coding to software modeling for implementation [7]. The low-code application development process leverages the principles and practices of model-driven engineering to utilise model(s) as first-class entities in software development [17][23]. Investigating software models and modeling notations that help create service models is essential to support a model-driven perspective on QSE, consequently facilitating quantum code developers to abstract implementation-specific complexities, via model-driven QSE, while developing quantum services [23]. This SMS indicates three main types of notations to model services in QCaaS, namely the Unified Modeling Language (UML), graph-based models, and process models, highlighted in Figure 4b). UML-based models are represented via a multitude of notations, such as class and component diagrams that represent the structure, while sequence and deployment diagrams represent the runtime or behavioural view of QCaaS. Graph-based models contain directed graphs and ontologies, whereas process models rely on automating the business processes of an enterprise as quantum services. For example, the study [S3] reports a class diagram as a structural view of the system to represent the attributes and methods of entities (user, service provider, authentication etc.) of quantum computing as a service.
UML diagrams and profiles represent the status quo in software modeling and are seen as the de-facto notation in the software and service community to model classical software, with growing adoption in QSE [22]. _Design patterns_ represent the concentrated wisdom of software designers that can be leveraged to address design and implementation issues, addressing functionality and quality, effectively and efficiently. Considering the lack of professional expertise in QSE (e.g., quantum domain engineers, quantum algorithm designers, quantum software architects etc.), patterns as artifacts of reuse can help novice developers during quantum software development to rely on existing best practices [1][13]. This SMS highlights that the literature on QCaaS reports five patterns, namely the _API Gateway_, _Layered Architecture_, _Classic-Quantum Split_, _Service Wrapping_, and _Repository Pattern_. Figure 4c) depicts pattern thumbnails as an abstract view of the identified patterns. Patterns are generally documented as templates or pattern languages; here we only focus on overviewing the reported patterns for QCaaS, while details on pattern representation and documentation can be found in [23]. The Classic-Quantum Split pattern [16] is a quantum version of the Splitter pattern, driven by quantum workflow, that splits computation into tasks that can be generated and executed on classical machines (e.g., random number generation) and tasks that can be executed on quantum machines (e.g., prime factorisation). The pattern aims to address issues like NISQ by splitting quantum software into classical and quantum parts as a hybrid application [2][4].

**Modeling notations** can assist software engineers to shift their focus from implementation towards the design perspective. Modeling can incrementally transform functional aspects into service models, leading to service implementation via model-driven engineering or low-code development. **Patterns** (classical or quantum-specific) can facilitate developers to architect and implement quantum-age software services by relying on reusable knowledge and best practices of service-orientation.

Fig. 4: An Overview of the Mapping Study Results

Recently, a number of studies have focused on organising quantum software patterns as a body of knowledge in QSE; however, there is no evidence of empirically-derived methods to discover and document patterns for quantum services computing. SOA-specific patterns like the API Gateway and Service Wrapping patterns can be tailored to address QCaaS solutions. There is a need for mining repositories and knowledge resources to discover reusable knowledge and best practices from quantum software development projects that can be documented as tactics and patterns for QCaaS solutions.

### _Assembly: Application Domain and Programming_

Assembling the quantum services involves identifying the application domains and exploiting the programming languages as implementation technologies to develop executable specifications from the service model [10][12]. _Application Domain_ is also referred to as the implementation use cases or practical context to which the QCaaS solutions can be applied. For example, quantum security represents an application domain for quantum software servicing where a service can be invoked to implement cryptography protocols to generate and manage a secure quantum key [8]. The results of this SMS indicate four application domains, namely _Optimisation_, _Process Automation_, _Mathematics_, and _Quantum Simulation_.
The application domains may impact the selection of programming languages and tools for service implementation. For example, the study [S3] uses Q# as the programming language, which can be developed and compiled with the Microsoft .NET framework for executing quantum algorithms. _Service Implementation_ involves programming languages that represent a system of notation or source-coding scripts for implementing quantum services to manage and operationalise QC resources [17][22]. In recent years, a number of Quantum Programming Languages (QPLs), including but not limited to Q# by Microsoft or Cirq by Google, have emerged to provide specialized programming syntax, frameworks, and environments to develop, execute, and deploy quantum source code. Insights into programming languages can reveal if classical programming languages (e.g., C, Java, Python etc.) suffice for QCaaS implementation or if there is a need for more specialised QPLs (Q#, Cirq etc.). The SMS results indicate three programming languages, namely Python, Java, and Q#, as the preferred languages to implement quantum services. Based on the details of source coding, Figure 4f) distinguishes between the native code of a language and specialised libraries/application programming interfaces (APIs) developed using a specific language. For example, the studies [1, 1] used native Python code to implement quantum micro-servicing for experimentation. In comparison, the study [1] used Flask, a Web framework written in Python, to develop a solution for optimal delivery of quantum services on Amazon Braket (see the sketch below). Python is the most preferred programming language, both in terms of native code and of specialised Python libraries, which include Flask and Qiskit as open-source frameworks and Cirq, which is developed by Google.

**Application domains** represent the practical context/use cases of quantum services and may impact the selection of programming languages and tools for implementation. **Programming languages** provide a system of notation for source-coding of quantum services. Classical programming languages such as Python represent the predominant choice over QPLs to implement QCaaS, owing to Python's more comprehensive documentation and its familiarity in the service developers' community.
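To make this concrete, the minimal Flask sketch below exposes a quantum task behind a classical REST endpoint, in the spirit of the Service Wrapping pattern. The quantum backend is simulated with Python's `random` module, since no specific vendor SDK is assumed; the endpoint and function names are illustrative.

```python
import random
from flask import Flask, jsonify

app = Flask(__name__)

def sample_quantum_bits(n: int) -> list:
    # Stand-in for a quantum random-number routine; a real service
    # would submit a measurement circuit to quantum hardware.
    return [random.randint(0, 1) for _ in range(n)]

@app.route("/qrng/<int:n_bits>")
def quantum_random_number(n_bits: int):
    # Expose the quantum task as a classical REST service
    bits = sample_quantum_bits(n_bits)
    value = int("".join(map(str, bits)), 2)
    return jsonify({"bits": bits, "value": value})

if __name__ == "__main__":
    app.run(port=5000)  # e.g., GET http://localhost:5000/qrng/8
```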
### _Deployment: Quantum Platform_

The deployment activity supports the selection of QC platforms on which services can be deployed for their operationalisation and execution. Platform providers, also referred to as quantum vendors, offer computing infrastructure in terms of hardware as well as software that allows service developers to develop and/or utilise quantum services. Deployment represents the last activity in the SOA life cycle and is represented as a UML deployment diagram in Figure 4a). This SMS identified a total of three quantum vendors for the deployment of quantum services, namely _Amazon Braket_, _IBM Quantum_, and _Rigetti_. Amazon Braket (a managed Amazon Web Services (AWS) offering) is the most preferred platform to design, test, and run quantum algorithms. One of the reasons for selecting Amazon Braket for service deployment is that it allows service users/developers to design their own quantum algorithms. This is particularly handy for novice developers unfamiliar with the technicalities of quantum systems, who can utilise a set of pre-built algorithms, tools, and documents to develop and manage quantum services on the Amazon platform.

**Quantum platforms** leverage cloud computing infrastructures to support quantum service-orientation. Amazon Braket is the predominant quantum vendor that can enable novice developers to exploit pre-built algorithms, programming tools, and service documentation that are lacking on other quantum platforms.

## IV Emerging Trends of Research on QCaaS (RQ-2)

We now answer RQ-2, which aims to report the emerging trends that indicate dimensions of potential future research on QCaaS. To identify these trends, we specifically reviewed the details about objectives/contributions, evaluation/demonstrations, and limitations/future research in each study during quality assessment (Q-1, Q-4, Q-5 in Table I). We have visualised the identified emerging trends in Figure 5 as per the SOA life cycle activities. For example, Figure 5 highlights that during the conception of quantum services, Quantum Significant Requirements (QSRs) should include quality aspects that complement the functional aspects to ensure the required functionality and desired quality of QCaaS as part of the quantum domain engineering activity in QSE.

### _Process and Human Roles in QCaaS Development_

Process-centred engineering is manifested in the service life cycle to structure a multitude of activities, such as service design, development, and delivery, into a coherent process that can be executed in an incremental fashion [16]. Although the evidence from the selected studies is accumulated and organized under the SOA life cycle in Table II, at the individual scale the studies lacked a process-centric approach and highlighted the need for processes attuned to QSE [16]. These processes, such as Quantum DevOps, Quantum microservicing, or agile methods for quantum service/software, are seen as a quantum genre of SE processes that are better aligned with the needs of QSE [17]. For example, in the agile method for QSE, quantum domain engineering can help accumulate the domain information of a quantum system (hardware/software) to develop a design model that can act as a blueprint to implement quantum software and services. Process-centric approaches can also support tools (for automation) and professional roles (human decision support) to engineer quantum services. We identified the need for human roles as QSE professionals to effectively undertake QCaaS development.

**Process-centred** approaches, such as Quantum DevOps, Quantum microservicing, or agile methods, can enable incremental development and delivery of quantum software services. **Human roles** can enrich the process - synergising knowledge of quantum physics with practices of software engineering - to support activities of quantum domain engineering, software architecting, and simulation management, expertise that is currently lacking in quantum service-orientation.

### _Quantum Significant Requirements_

The concept of Quantum Significant Requirements (QSRs) can be traced to the well-established role of Architecturally Significant Requirements (ASRs) in software and service engineering. QSRs, as part of the quantum domain engineering activity, can help elicit and document the functional and non-functional aspects (also quality aspects or attributes) of quantum software. Existing research primarily focuses on the functional aspects of services while overlooking the non-functional aspects, which include but are not limited to QuBit utilisation, energy efficiency, quantum vendor lock-ins, QPU elasticity, and quantum error mitigation, and which impact the development and operationalisation of QCaaS solutions.
For example, during the quantum domain engineering activity, the hardware aspects (operations of QuGates) can be mapped to software aspects (components and connectors), which can help with splitting the computation tasks between classical and quantum computers using the Classic-Quantum Split pattern.

* The notion of **QSRs** is rooted in the concept of ASRs and aims to identify, classify, and document the functional and quality aspects of quantum software services. QSRs for quantum services can complement the functional aspects with quality requirements to ensure the required functionality and desired quality of service.

### _Model-driven Quantum Software Servicing_

Model-driven Service Engineering (MDSE) enables software engineers and architects to rely on models that can abstract complex and implementation-specific details with human-comprehensible visual notations to design and implement software services. Specifically, by exploiting MDSE, software engineers can apply model transformation to transform design models into implementation (source code) and validation (test case) models. Modeling notations, such as the Service-oriented architecture Modeling Language (SoaML) and, more specifically, the Q-UML specification, provide a metamodel and a UML profile for the specification and design of services within a service-oriented architecture. MDSE can benefit novice developers by enabling them to map the flow of algorithms and modules of source code to graphical models for low-code (model-driven) development of quantum services.

* **Model-driven Quantum Service Development** can leverage the principles of MDSE and modeling notations like SoaML and Q-UML to abstract implementation-level complexities with design-level models driven by QSRs. To exploit model-driven development, there is a need for tool support that could enable automation (e.g., model transformation tools) and human decision support (e.g., quantum software architects) for quantum software servicing using MDSE.

### _Empiricism in Mining Quantum Service Patterns_

Pattern-based software service engineering relies on architectural design and implementation strategies and best practices that can be reused to deliver software services [21]. Existing solutions employ a number of patterns, such as the Classic-Quantum Split and Service Wrapping patterns; however, there is no evidence of an empirical discovery of patterns and tactics as reusable knowledge [30]. A lack of empiricism in discovering new patterns hinders the reusability of service design and implementation knowledge during quantum service engineering. One possible dimension for pattern discovery is mining software repositories or social coding platforms (e.g., GitHub) that contain raw knowledge that can be mined as patterns (see the sketch after this subsection). Pattern-based solutions could complement human expertise with available best practices for service design and implementation.

* **Pattern discovery** via empirically-grounded methods, i.e., mining repositories or social coding platforms of quantum software development, can help leverage reusable design rationale for quantum service engineering. Pattern languages can empower the role of quantum software architects and algorithm designers to rely on reusable knowledge and best practices as opposed to ad-hoc and one-off solutions.
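As a starting point for such mining, the sketch below queries GitHub's public repository-search API for quantum-service projects and tallies recurring topic labels as crude candidate evidence of recurring design concerns. The query string, and the premise that topic frequencies hint at patterns, are illustrative assumptions rather than a validated discovery method.

```python
from collections import Counter
import requests

# GitHub's public repository-search endpoint (unauthenticated, rate-limited)
URL = "https://api.github.com/search/repositories"
params = {"q": "quantum microservice language:python", "per_page": 50}
headers = {"Accept": "application/vnd.github+json"}

resp = requests.get(URL, params=params, headers=headers, timeout=10)
repos = resp.json().get("items", [])

# Count recurring topic labels as rough, candidate evidence of recurring
# design concerns that could later be distilled into patterns
topic_counts = Counter(t for r in repos for t in r.get("topics", []))
for topic, count in topic_counts.most_common(10):
    print(f"{topic}: {count}")
```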
### _Continuous Testing and Delivery of Quantum Services_

With the adoption of agile software engineering in the quantum software development context [24], there is a need for lightweight and adaptive methods to ensure continuous development and delivery of quantum software services. The literature suggests a lack of solutions for testing quantum software services. Continuous testing and continuous delivery (CT/CD) can help test the services against QSRs more effectively and deliver them rapidly. Quantum service testing can involve simulation or regression tests against the QSRs.

* **Continuous Testing and Continuous Delivery** (CT/CD) relies on agile software engineering methods to help deliver quantum software services rapidly and reliably. CT/CD can provide strategic benefits to vendors by adding new services to their quantum platforms.

## V Related Survey-based and Empirical Studies

We now review the relevant research rooted in survey-based evidence or empirical studies on (i) engineering and architecting quantum software (Section V-A), and (ii) quantum services computing (Section V-B), as in Table III. **Interpretation of the Review**: To compare and summarise objectively, Table III highlights each study using five self-explanatory criteria, including (a) type of study (adopted from [15]), (b) focus of study, (c) core findings, (d) QSE activity supported by the study, and (e) year of publication, each exemplified below. For example, the study [26] presents a systematic review, available since 2021, as part of evidence-based software engineering, that focuses on architecting quantum software. It presents core findings about the architectural life cycle and the state of the art in architectural modeling, patterns, and tools that help architects develop quantum software.

### _Architecting and Engineering Quantum Software_

Quantum Software Engineering (QSE) has emerged as the most recent genre of software engineering (SE) that allows practitioners to adopt a process-centric and systematic approach to develop software-intensive systems and applications for QC [25, 28]. Some recently published survey-based studies on QSE highlight that existing SE principles and practices can be tailored to develop quantum software; however, issues specific to QC, such as operationalising QuBits/QuGates, quantum domain engineering, and the quantum-classic software split, require new engineering methods to tackle these challenges [28].

Fig. 5: Overview of Emerging Research Trends

To derive new methods and processes, the QSE research community is striving to organize a body of knowledge and academic collaborations that can be stimulated via dedicated publication forums. Academic discussions at these publication fora and published results [1][4] have highlighted that, in addition to engineering principles and practices, practitioners require fundamental knowledge of quantum mechanics to effectively design algorithmic solutions for QSE projects [17]. The current generation of software architects and developers who lack the foundational knowledge of quantum mechanics may be hindered or find themselves under-prepared for quantum software development [26]. Software architecture is viewed as a solution that can abstract the complexities of implementation by representing the software to be developed as architectural components and connectors.
A recently conducted systematic review of architectural solutions puts forward five architecting activities to guide software designers in engineering quantum software solutions in an incremental manner [26].

### _Quantum Services Computing_

Quantum service-orientation is a recent trend, initially pushed by QC vendors, to allow developers to compose and invoke software services on remotely hosted quantum systems and infrastructures [14]. Quantum service computing initiatives, such as Amazon Braket and the IBM Q Experience, have paved the way for academic research to propose solutions for the quantum cloud, quantum services computing, and quantum as a service. Recent research reviews report on the current progress and emerging challenges for quantum services computing, which include but are not limited to hardware availability, quantum noise, and the quantum-classic split [29]. Software researchers are striving to synergise existing methods of microservicing and quantum software development, such as Quantum DevOps, to develop quantum microservices [24].

## VI Conclusions and Future Work

Quantum service orientation allows QC vendors to offer end-users access to computational resources - enabling shots at QPUs - by following the utility computing model. Recently, research and development on QSE have started to synergize the principles of service-orientation and the practices of QC (algorithms, simulations, QuGates etc.) to enable the adoption of QCaaS by individuals and enterprises to attain quantum supremacy in modern-day computing. To this end, we used the systematic mapping study approach to investigate (i) existing solutions for quantum service development that enable or enhance QCaaS, and (ii) emerging trends that highlight the needs for ongoing and future research on QCaaS. The results of this SMS, as structured information (Table II) and visual illustrations (Figures 2-4), highlight the strengths, limitations, and potential for future research from the QSE perspective. **Implications and Future Work**: The primary implication of this SMS is to establish an evidence-based body of knowledge for service development that leverages design notations, patterns, and architectural approaches, highlighting the needs for human-centric and process-driven QCaaS development. The results of this SMS establish the foundations for future work in three directions: (a) _conducting practitioners' surveys_, (b) _implementing the reference architecture_, and (c) _mining social coding platforms_ for empiricism in QCaaS computing. These literature-based foundations can help us design an empirical study that engages service developers and engineers in a workshop (activity and survey) to seek their feedback and synthesise the results as practitioners' perspectives, complementing the evidence from academic research. We also plan to mine social coding platforms (e.g., GitHub) to empirically discover knowledge and understand the practices adopted by developers' communities in open-source QCaaS.
2301.02708
Few-shot Node Classification with Extremely Weak Supervision
Few-shot node classification aims at classifying nodes with limited labeled nodes as references. Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes). Nevertheless, on real-world graphs, it is usually difficult to obtain abundant labeled nodes for many classes. In practice, each meta-training class can only consist of several labeled nodes, known as the extremely weak supervision problem. In few-shot node classification, with extremely limited labeled nodes for meta-training, the generalization gap between meta-training and meta-test will become larger and thus lead to suboptimal performance. To tackle this issue, we study a novel problem of few-shot node classification with extremely weak supervision and propose a principled framework X-FNC under the prevalent meta-learning framework. Specifically, our goal is to accumulate meta-knowledge across different meta-training tasks with extremely weak supervision and generalize such knowledge to meta-test tasks. To address the challenges resulting from extremely scarce labeled nodes, we propose two essential modules to obtain pseudo-labeled nodes as extra references and effectively learn from extremely limited supervision information. We further conduct extensive experiments on four node classification datasets with extremely weak supervision to validate the superiority of our framework compared to the state-of-the-art baselines.
Song Wang, Yushun Dong, Kaize Ding, Chen Chen, Jundong Li
2023-01-06T20:40:32Z
http://arxiv.org/abs/2301.02708v1
# Few-shot Node Classification with Extremely Weak Supervision

###### Abstract.

Few-shot node classification aims at classifying nodes with limited labeled nodes as references. Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes). Nevertheless, on real-world graphs, it is usually difficult to obtain abundant labeled nodes for many classes. In practice, each meta-training class can only consist of several labeled nodes, known as the _extremely weak supervision_ problem. In few-shot node classification, with extremely limited labeled nodes for meta-training, the generalization gap between meta-training and meta-test will become larger and thus lead to suboptimal performance. To tackle this issue, we study a novel problem of few-shot node classification with extremely weak supervision and propose a principled framework X-FNC under the prevalent meta-learning framework. Specifically, our goal is to accumulate meta-knowledge across different meta-training tasks with extremely weak supervision and generalize such knowledge to meta-test tasks. To address the challenges resulting from extremely scarce labeled nodes, we propose two essential modules to obtain pseudo-labeled nodes as extra references and effectively learn from extremely limited supervision information. We further conduct extensive experiments on four node classification datasets with extremely weak supervision to validate the superiority of our framework compared to the state-of-the-art baselines.

Graph Neural Networks; Few-shot Learning; Weak Supervision
Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: Information systems Data mining + Footnote †: journal: model performance will be deteriorated by the _under-generalizing_ problem due to extremely inadequate support nodes. Specifically, given extremely limited labeled nodes for each meta-training class, the support nodes in each meta-training task are only sampled from a small set of labeled nodes. Therefore, the effectiveness of meta-learning in extracting meta-knowledge from different classes will be greatly weakened. As a result, the generalizability of the model to meta-test classes will drop significantly (i.e., under-generalizing). Second, with extremely weak supervision, the meta-training efficacy will be severely impacted by the _over-fitting_ problem due to extremely inadequate query nodes. Recent few-shot node classification studies (Liu et al., 2018; Li et al., 2018; Li et al., 2019; Li et al., 2019) generally require a large number of query nodes during meta-training for model optimization. Nevertheless, with extremely weak supervision from the meta-training classes, the number of query nodes for optimization during meta-training is significantly reduced. As a result, the model will be easily over-fitted and result in suboptimal performance. To tackle the aforementioned challenges, we propose a novel framework for few-shot node classification with extremely weak supervision from the meta-training classes, named as _X-FNC_. Essentially, our framework consists of two innovative modules to handle the _under-generalizing_ and _over-fitting_ issues, respectively. First, to compensate for the insufficient support nodes during meta-training, we perform label propagation to obtain abundant pseudo-labeled nodes based on Poisson Learning (Li et al., 2018). With the pseudo-labeled nodes, we can expand the support set in each meta-task to better extract discriminative meta-knowledge for each class. Second, to alleviate the negative impact of over-fitting caused by inadequate query nodes, we propose to optimize the model by both classifying nodes and filtering out irrelevant information (e.g., decisive classification information for classes not used in a meta-task) based on Information Bottleneck (IB) (Li et al., 2019). As a result, in addition to learning with supervision information, the model will also learn to ignore irrelevant information during the meta-learning process, which relieves over-fitting caused by insufficient supervision information. In summary, our main contributions are as follows: * We investigate a novel research problem of few-shot node classification with extremely weak supervision. 
* We develop a novel few-shot node classification framework under the extremely weak supervision scenario with two essential modules: (1) a label propagation module based on Poisson Learning to expand the support set in each meta-task by obtaining pseudo-labeled nodes; (2) an optimization strategy based on the Information Bottleneck to learn from classifying query nodes while reducing irrelevant information.
* We conduct extensive experiments on four node classification datasets with extremely weak supervision. Experimental results demonstrate the superiority of our framework.

## 2. Related Work

### Few-shot Node Classification

Few-shot learning aims to achieve considerable classification performance using limited labeled samples as references. The general approach is to accumulate transferable knowledge from tasks with abundant labeled samples and then generalize such knowledge to novel tasks with few labeled samples. Generally, there are two main categories of approaches for few-shot learning: (1) _Metric-based_ approaches focus on learning a metric function to match the query set with the support set for classification (Kang et al., 2017; Li et al., 2019). For example, Prototypical Networks (Kang et al., 2017) learn prototypes for classes and classify query samples based on the Euclidean distances between the query set and the prototypes. (2) _Optimization-based_ approaches aim to optimize model parameters based on gradients on support samples in each meta-task (Li et al., 2019; Li et al., 2019; Li et al., 2019). As a classic example, MAML (Li et al., 2018) learns the parameter initialization for different meta-tasks with the proposed meta-optimization strategy. On graph data, many research efforts have been devoted to studying few-shot learning on graphs with limited labeled nodes (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). For example, Meta-GNN (Li et al., 2019) combines meta-learning (Li et al., 2018) with GNNs to reduce the requirement of labeled nodes. GPN (Li et al., 2018) estimates node importance and leverages Prototypical Networks (Kang et al., 2017) for few-shot node classification. TENT (Kang et al., 2017) proposes to reduce the variance among tasks to improve generalization performance.

### Semi-supervised Few-shot Learning

Several recent approaches aim to combine semi-supervised or self-supervised learning with few-shot learning to improve the performance on few-shot classification tasks with unlabeled data. Ren et al. (Ren et al., 2017) extend Prototypical Networks with unlabeled data based on the Soft k-Means method. TPN (Ren et al., 2018) propagates labels of given data to unlabeled data, combined with a meta-learning strategy for optimization. On the other hand, the Information Bottleneck (IB) principle is also leveraged in self-supervised representation learning. DVIB (Li et al., 2018) first utilizes IB in neural networks for robust representation learning. Moreover, GIB (Li et al., 2019) develops information-theoretic modeling of graph structures and node features for graph representation learning.

## 3. Preliminaries

### Few-shot Node Classification

We denote an attributed graph as \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\), where \(\mathcal{V}\) and \(\mathcal{E}\) denote the set of nodes and edges, respectively. \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the node feature matrix, where \(d\) is the feature dimension.
Moreover, we denote the set of node classes as \(\mathcal{C}\), which can be further divided into two sets: \(\mathcal{C}_{train}\) and \(\mathcal{C}_{test}\). Note that \(\mathcal{C}=\mathcal{C}_{train}\cup\mathcal{C}_{test}\) and \(\mathcal{C}_{train}\cap\mathcal{C}_{test}=\emptyset\), where \(\mathcal{C}_{train}\) and \(\mathcal{C}_{test}\) denote the set of meta-training and meta-test classes, respectively. General few-shot settings assume that labeled nodes in \(\mathcal{C}_{train}\) are abundant, while labeled nodes in \(\mathcal{C}_{test}\) are generally scarce. However, it is usually unrealistic in practice to obtain adequate labeled nodes for all classes in \(\mathcal{C}_{train}\). With extremely weak supervision, the number of labeled nodes in \(\mathcal{C}_{train}\) is severely limited. Subsequently, our goal is to develop a learning model such that after meta-training on extremely limited labeled nodes, the model can accurately predict labels for the nodes in \(\mathcal{C}_{test}\) with only \(K\) labeled nodes for each of \(N\) randomly sampled classes as the reference. In this way, the problem is called \(N\)-way \(K\)-shot node classification.

### \(N\)-way \(K\)-shot Meta-learning

We follow the prevalent episodic meta-learning paradigm, which has demonstrated superior performance in few-shot learning (Li et al., 2019; Li et al., 2019; Li et al., 2019). Particularly, we employ \(\mathcal{C}_{train}\) and \(\mathcal{C}_{test}\) for meta-training and meta-test, respectively. During meta-training, the model learns from a series of _meta-training tasks_. Each meta-training task consists of a support set \(\mathcal{S}\) as the reference and a query set \(\mathcal{Q}\) to be classified. Here \(\mathcal{S}=\{(v_{1},y_{1}),(v_{2},y_{2}),\ldots,(v_{N\times K},y_{N\times K})\}\) contains \(N\) classes randomly sampled from \(\mathcal{C}_{train}\) and \(K\) labeled nodes for each of these \(N\) classes (i.e., \(N\)-way \(K\)-shot), where \(v_{i}\in\mathcal{V}\) is a node in \(\mathcal{G}\) and \(y_{i}\) is the class of \(v_{i}\). The query set \(\mathcal{Q}=\{(v_{1}^{*},y_{1}^{*}),(v_{2}^{*},y_{2}^{*}),\ldots,(v_{Q}^{*},y_{Q}^{*})\}\) consists of totally \(Q\) different nodes from these \(N\) classes. Note that during the classification process in each meta-task, all nodes on the graph other than nodes in this meta-task are considered unlabeled and can be leveraged to advance classification performance. During meta-test, the model is evaluated on meta-test tasks, which share a similar structure with meta-training tasks, except that the classes are in \(\mathcal{C}_{test}\). Under the meta-learning framework [9; 14; 43], we first fine-tune the model based on support nodes and then conduct classification on query nodes.

## 4. The Proposed Framework

We first present an overview of our proposed framework X-FNC. Specifically, we formulate the problem of _few-shot node classification with extremely weak supervision_ under the prevalent \(N\)-way \(K\)-shot meta-learning framework. In practice, we conduct meta-training on a series of randomly sampled meta-tasks, where a meta-task contains \(K\) nodes for each of \(N\) classes as the support set and several query nodes to be classified. Due to the extremely limited labeled nodes during meta-training, the model performance will be severely deteriorated by two problems: _under-generalizing_ (caused by inadequate support nodes) and _over-fitting_ (caused by inadequate query nodes). Therefore, we propose two essential modules, _Poisson Label Propagation_ and _Information Bottleneck Fine-tuning_, which address these two problems by obtaining pseudo-labeled nodes and by maximally learning decisive information, respectively.
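For concreteness, the following minimal Python sketch shows how an \(N\)-way \(K\)-shot meta-task could be sampled from labeled node sets; the data layout (a dict mapping each class to its labeled node ids) is an illustrative assumption, not the paper's actual implementation.

```python
import random

def sample_meta_task(labels_by_class, N, K, Q):
    """Sample an N-way K-shot meta-task (support set + query set).

    labels_by_class: dict mapping class id -> list of labeled node ids.
    Returns (support, query) as lists of (node_id, class_id) pairs.
    """
    classes = random.sample(sorted(labels_by_class), N)
    support, query = [], []
    for c in classes:
        # Draw K support shots plus this class's share of the Q queries
        nodes = random.sample(labels_by_class[c], K + Q // N)
        support += [(v, c) for v in nodes[:K]]
        query += [(v, c) for v in nodes[K:]]
    return support, query

# Example: a 2-way 3-shot task with 4 query nodes from toy labels
toy = {0: list(range(10)), 1: list(range(10, 20)), 2: list(range(20, 30))}
S, Qset = sample_meta_task(toy, N=2, K=3, Q=4)
```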
### Poisson Label Propagation

To alleviate the problem of under-generalizing caused by extremely limited support nodes during meta-training, we propose to obtain pseudo-labeled nodes based on Poisson Learning [3]. Specifically, Poisson Learning was recently proposed to propagate labels from relatively limited labeled samples to unlabeled samples, based on the assumption that samples that are close to each other potentially share similar classes. By recursively aggregating label information from close samples, the unlabeled samples can be pseudo-labeled based on label information propagated from labeled samples. Thus, it is helpful for obtaining pseudo-labeled nodes under the extremely weak supervision setting. However, it remains non-trivial to perform Poisson Learning for few-shot node classification with extremely weak supervision due to the following two reasons. First, Poisson Learning cannot fully take advantage of structural information on graph data when leveraged to obtain pseudo-labeled nodes. Originally proposed for image classification, Poisson Learning constructs a graph solely based on the Euclidean distances between images. However, on graph data, the graph structures encode crucial information for node classification and thus cannot be ignored. Second, Poisson Learning cannot effectively handle a varying class set. Few-shot learning models are required to deal with various classes across different meta-tasks, which contradicts the fact that Poisson Learning was originally proposed to operate on a fixed class set. Nevertheless, the ability to handle various classes is crucial for few-shot node classification [6; 14]. To overcome these two difficulties, as illustrated in Fig. 1, we propose to construct a subgraph in each meta-task based on graph structures and node features, where the subgraph consists of support nodes and randomly sampled unlabeled nodes. In addition, we include the neighbors of these nodes and the corresponding edges in the subgraph to effectively leverage the local structures of support nodes. Moreover, we utilize the constructed subgraph in each meta-task instead of the entire graph, so that we can perform label propagation regarding the varying classes in different meta-tasks. Specifically, consider a meta-task \(\mathcal{T}=(\mathcal{S},\mathcal{Q})\). We first aim to sample a number of unlabeled nodes for label propagation based on Poisson Learning. Basically, the neighbors of nodes in \(\mathcal{S}\) bear a higher chance of belonging to classes in \(\mathcal{S}\) than other random nodes. In addition, only using these neighboring nodes can be insufficient when the average node degree is small. Therefore, we sample unlabeled nodes via two strategies: neighbor sampling and random sampling. For the neighbor sampling, we select 2-hop neighbors of the labeled nodes in \(\mathcal{S}\), since neighboring nodes maintain explicit connections to these labeled nodes and are thus more likely to share the same classes. In particular, denoting the set of 2-hop neighbors of node \(v_{i}\) as \(\mathcal{N}_{i}\), the node set obtained via neighbor sampling is \(\mathcal{V}_{n}=\bigcup_{i=1}^{NK}\mathcal{N}_{i}\).
For the random sampling, we randomly select \(R\) nodes from the remaining node set \(\mathcal{V}\setminus(\mathcal{S}\cup\mathcal{V}_{n})\) to form a random node set \(\mathcal{V}_{r}\), where \(|\mathcal{V}_{r}|=R\). Then similarly, we extract the 2-hop neighbors of nodes in \(\mathcal{V}_{r}\) as \(\mathcal{V}_{rn}\). In consequence, combining the nodes obtained from the two sampling strategies, we achieve the final node set \(\mathcal{V}_{s}=\mathcal{S}\cup\mathcal{V}_{n}\cup\mathcal{V}_{r}\cup\mathcal{V}_{rn}\) for the subgraph. Nevertheless, the sampled nodes in \(\mathcal{V}_{s}\) can be distributed across the entire graph and potentially unconnected, which greatly hinders the process of label propagation. Therefore, we propose to construct a subgraph with these nodes based on both the structural and feature information. More specifically, we first extract the corresponding edge set \(\mathcal{E}_{s}\) from \(\mathcal{E}\) according to \(\mathcal{V}_{s}\). Then we denote \(\mathbf{A}^{\prime}\in\mathbb{R}^{|\mathcal{V}_{s}|\times|\mathcal{V}_{s}|}\) as the adjacency matrix obtained from the graph structures (i.e., \(\mathcal{E}_{s}\)), where \(\mathbf{A}^{\prime}_{ij}=1\) if the \(i\)-th node in \(\mathcal{V}_{s}\) connects to the \(j\)-th node in \(\mathcal{V}_{s}\), and \(\mathbf{A}^{\prime}_{ij}=0\) otherwise. In this way, we can construct edges without losing the original structural information. Furthermore, to incorporate feature information, we propose to compute another edge weight matrix based on the Euclidean distances between node features [3] as follows:

\[\mathbf{A}^{\prime\prime}_{ij}=\exp\left(-\eta\|\mathbf{x}_{i}-\mathbf{x}_{j}\|\right), \tag{1}\]

where \(\eta\in\mathbb{R}\) is a hyper-parameter to control the scale of \(\mathbf{A}^{\prime\prime}\) and \(\|\cdot\|\) is the \(\ell_{2}\)-norm. In this way, all nodes in \(\mathcal{V}_{s}\) are also connected according to their distances, which further advances the label propagation. Finally, we combine the two matrices to form the final adjacency matrix \(\mathbf{A}=\lambda\mathbf{A}^{\prime}+(1-\lambda)\mathbf{A}^{\prime\prime}\) with a scaling hyper-parameter \(\lambda\in[0,1]\). As a result, the edges can absorb information from both graph structures and node features, which effectively promotes the label propagation process based on Poisson Learning on this subgraph.
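A minimal NumPy sketch of this adjacency construction, combining the structural matrix \(\mathbf{A}'\) with the feature-distance matrix \(\mathbf{A}''\) of Eq. (1); the toy inputs are illustrative.

```python
import numpy as np

def build_subgraph_adjacency(A_struct, X, eta=1.0, lam=0.5):
    """Combine structural and feature edges: A = lam*A' + (1-lam)*A''.

    A_struct: (n, n) binary adjacency built from the extracted edges E_s.
    X:        (n, d) node features of the sampled subgraph nodes.
    """
    # Pairwise Euclidean distances between node features (Eq. 1)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A_feat = np.exp(-eta * dist)
    return lam * A_struct + (1.0 - lam) * A_feat

# Toy example: 4 subgraph nodes with 2-d features
A_struct = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                     [0, 1, 0, 0], [0, 0, 0, 0]], dtype=float)
X = np.random.rand(4, 2)
A = build_subgraph_adjacency(A_struct, X)
```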
Then, with the learned adjacency matrix \(\mathbf{A}\), we can perform Poisson Learning on this subgraph to obtain pseudo-labeled nodes. Denote \(\mathbf{u}_{i}\in\mathbb{R}^{N}\) as the label vector of the \(i\)-th node \(v_{i}\) in \(\mathcal{V}_{s}\) to be learned, where the index of the largest element in \(\mathbf{u}_{i}\) indicates the class that \(v_{i}\) belongs to. Intuitively, Poisson Learning (Beng et al., 2015) assumes that the label vector of an unlabeled node is the weighted average of its neighbors' label vectors, where the weight is from the corresponding entry in \(\mathbf{A}\). Moreover, the label vectors of given labeled nodes are their corresponding classes minus the average label vector of all labeled nodes. In this way, the objective of Poisson Learning can be formulated as follows:

\[\begin{cases}\sum_{j=1}^{|\mathcal{V}_{s}|}\mathbf{A}_{ij}\left(\mathbf{u}_{i}-\mathbf{u}_{j}\right)=0,&\text{if }NK+1\leq i\leq|\mathcal{V}_{s}|,\\ \mathbf{u}_{i}=\mathbf{y}_{i}-\bar{\mathbf{y}},&\text{if }1\leq i\leq NK,\end{cases} \tag{2}\]

satisfying \(\sum_{i=1}^{|\mathcal{V}_{s}|}d_{i}\mathbf{u}_{i}=0\), where \(d_{i}=\sum_{j=1}^{|\mathcal{V}_{s}|}\mathbf{A}_{ij}\). Here \(\mathbf{y}_{i}\in\mathbb{R}^{N}\) is the one-hot label vector whose \(j\)-th element is \(1\) if \(v_{i}\) belongs to the \(j\)-th class, and whose other elements are \(0\). \(\bar{\mathbf{y}}=\sum_{i=1}^{NK}\mathbf{y}_{i}/NK\) is the average label vector. To solve Eq. (2), we iteratively update the prediction matrix \(\mathbf{U}\in\mathbb{R}^{|\mathcal{V}_{s}|\times N}\) based on (Beng et al., 2015) as follows:

\[\mathbf{U}^{(t)}\leftarrow\mathbf{U}^{(t-1)}+\mathbf{D}^{-1}\left(\mathbf{B}^{\top}-\mathbf{L}\mathbf{U}^{(t-1)}\right), \tag{3}\]

where \(t\in\{1,2,\dots,T_{l}\}\) and \(T_{l}\) is the number of label propagation steps. \(\mathbf{D}\) is a diagonal matrix with \(\mathbf{D}_{ii}=\sum_{j=1}^{|\mathcal{V}_{s}|}\mathbf{A}_{ij}\). \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) is the unnormalized Laplacian matrix, and \(\mathbf{B}=[\mathbf{F}-\bar{\mathbf{y}},\mathbf{O}]\), where \(\mathbf{O}\in\mathbb{R}^{N\times(|\mathcal{V}_{s}|-NK)}\) is a zero matrix. \(\mathbf{F}\in\mathbb{R}^{N\times NK}\) denotes the label matrix of the \(NK\) labeled nodes, whose \(i\)-th column is \(\mathbf{y}_{i}\). The \(i\)-th row of the final result \(\mathbf{U}^{(T_{l})}\) is the obtained label vector of \(v_{i}\). The iteration is achieved by replacing the label vector (i.e., \(\mathbf{u}_{i}\)) with the weighted average of label vectors from neighboring nodes of \(v_{i}\). In this way, we can obtain a considerable number of pseudo-labeled nodes to compensate for the lack of support nodes during meta-training. Nevertheless, some of the pseudo-labeled nodes could be incorrect and would thus deteriorate the classification performance if all pseudo-labeled nodes were used to expand the support set. Therefore, we propose to select pseudo-labeled nodes with high prediction confidence for fine-tuning in each meta-task. Specifically, we compute the confidence score for each pseudo-labeled node according to the entropy of the prediction result as follows:

\[c_{i}=-\sum_{j=1}^{N}u_{ij}\log u_{ij}, \tag{4}\]

where \(u_{ij}\) is the \(j\)-th element of \(\mathbf{u}_{i}\) after softmax. In this way, we can select the top \(M\) pseudo-labeled nodes with the highest confidence (i.e., the lowest entropy scores). Note that \(\mathcal{V}_{s}\) contains the \(NK\) support nodes, which will be ignored during the selection since they are already labeled. Then the support set can be augmented as \(\widetilde{\mathcal{S}}=\mathcal{S}\cup\mathcal{S}_{p}\), where \(\mathcal{S}_{p}\) is the set of selected pseudo-labeled nodes and \(|\widetilde{\mathcal{S}}|=NK+M\).
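The update in Eq. (3) is straightforward to sketch in NumPy; the snippet below is a simplified illustration of the propagation loop, assuming nodes are ordered so that the \(NK\) labeled nodes come first and that every node has positive degree (which holds for the combined adjacency above).

```python
import numpy as np

def poisson_propagation(A, Y, T_l=100):
    """Iterate Eq. (3): U <- U + D^{-1} (B^T - L U).

    A: (n, n) combined adjacency of the subgraph.
    Y: (m, N) one-hot labels of the first m = N*K labeled nodes.
    Returns U: (n, N); argmax over each row gives the pseudo-label.
    """
    n, (m, N) = A.shape[0], Y.shape
    d = A.sum(axis=1)                   # degrees d_i (assumed > 0)
    L = np.diag(d) - A                  # unnormalized Laplacian
    Bt = np.zeros((n, N))
    Bt[:m] = Y - Y.mean(axis=0)         # source terms y_i - y_bar
    U = np.zeros((n, N))
    for _ in range(T_l):
        U += (Bt - L @ U) / d[:, None]  # one propagation step
    return U
```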
### Information Bottleneck Fine-tuning

With the augmented support set \(\widetilde{\mathcal{S}}\), we can conduct fine-tuning on \(\widetilde{\mathcal{S}}\) for fast adaptation to the given meta-task \(\mathcal{T}\) and then meta-optimize the model on the query set \(\mathcal{Q}\). However, although the support set is augmented, the query set \(\mathcal{Q}\) could still be inadequate for optimization with extremely weak supervision. In other words, the model can be easily influenced by irrelevant information (e.g., decisive classification information for classes not in \(\mathcal{T}\)), which leads to _over-fitting_. Therefore, we aim to fine-tune the model with extremely limited query nodes while ignoring the irrelevant information as much as possible. Particularly, the Information Bottleneck (IB) (Shen et al., 2015) provides an essential principle to extract classification information while maximally reducing the negative impact of irrelevant information. Moreover, the IB principle can also encourage the model to benefit from incorrect pseudo-labeled nodes by learning to neglect irrelevant information. Nevertheless, it remains non-trivial to utilize the IB principle on graph data, due to the fact that graph data does not follow the i.i.d. assumption used in previous IB-based models (Zhu et al., 2019). Thus, we further derive two variational bounds for IB to fine-tune the model in a more tractable manner. Specifically, the objective of the IB principle can be formulated as follows:

\[\min\operatorname{IB}(D,Y;Z)\triangleq[-I(Y;Z)+\beta I(D;Z)], \tag{5}\]

where \(Y\) denotes the class set of nodes in \(\widetilde{\mathcal{S}}\) and \(|Y|=N\). \(Z\) denotes the node representations to be learned. \(D\) denotes the structural and feature information of nodes in \(\widetilde{\mathcal{S}}\). \(\beta\) is a positive scalar to balance the trade-off between the desire to preserve classification information and being invariant to irrelevant graph structures and node features (Zhu et al., 2019). In particular, the IB aims to learn representations that are maximally informative for classification (i.e., maximizing \(I(Y;Z)\)) while reducing irrelevant information (i.e., minimizing \(I(D;Z)\)).

Figure 1. The pseudo-labeling process with our Poisson Label Propagation module. For each meta-task, we construct a subgraph based on support nodes and randomly sampled unlabeled nodes, including their neighboring nodes. Then we perform Poisson Label Propagation to obtain pseudo-labeled nodes. After that, we further select pseudo-labeled nodes with high confidence to form the augmented support set. The augmented support set will be used for fine-tuning in this meta-task.

Furthermore, it becomes more useful in few-shot learning since each meta-task is only conducted on \(N\) classes, and thus the irrelevant information \(D\) can be more redundant. Nevertheless, it is difficult to directly optimize the objective in Eq. (5), since it is intractable [40]. Thus, we propose to derive an upper bound for each of the two terms in Eq. (5) for optimization. Specifically, the first term can be expressed using entropy as follows:

\[-I(Y;Z)=-\left[H(Y)-H(Y|Z)\right], \tag{6}\]

where \(H\) is the entropy. Since we aim to optimize the model for better \(Z\), we can ignore the unrelated term \(H(Y)\). Then we can obtain the explicit form of \(H(Y|Z)\) based on the definition of entropy:

\[\begin{split} H(Y|Z)&=-\sum_{i=1}^{N}\sum_{j=1}^{|\widetilde{\mathcal{S}}|}p(y_{i},z_{j})\log\frac{p(y_{i},z_{j})}{p(z_{j})}\\ &=-\sum_{i=1}^{N}\sum_{j=1}^{|\widetilde{\mathcal{S}}|}p(z_{j}|y_{i})p(y_{i})\log p(y_{i}|z_{j}),\end{split} \tag{7}\]

where \(y_{i}\) denotes the \(i\)-th class and \(z_{j}\) denotes the representation of the \(j\)-th node \(v_{j}\) in \(\widetilde{\mathcal{S}}\).
Since each meta-task contains \(K\) support nodes for each of \(N\) classes, we can assume that the prior distribution of \(Y\) is uniform, and thus \(p(y_{i})\) is a constant. To further estimate \(p(z_{j}|y_{i})\), we compute it via \(p(z_{j}|y_{i})=\mathbb{1}(z_{j}\in y_{i})\), where \(\mathbb{1}(z_{j}\in y_{i})=1\) if \(z_{j}\) belongs to \(y_{i}\); otherwise \(\mathbb{1}(z_{j}\in y_{i})=0\). In this way, the objective of \(-I(Y;Z)\) is formulated as a cross-entropy loss:

\[-I(Y;Z)\rightarrow\mathcal{L}_{Y}=-\sum_{i=1}^{|\widetilde{\mathcal{S}}|}\log p(y^{\prime}_{i}|z_{i}), \tag{8}\]

where \(y^{\prime}_{i}\) denotes the specific label that the \(i\)-th node \(v_{i}\) belongs to. Then, to estimate \(p(y^{\prime}_{i}|z_{i})\), we further utilize a GNN\({}_{\theta}\) followed by an MLP classifier MLP\({}_{\theta}\). Specifically, for the \(i\)-th node \(v_{i}\) in \(\widetilde{\mathcal{S}}\), we extract its 2-hop neighboring nodes to form a subgraph, represented by \((\mathbf{A}_{i},\mathbf{X}_{i})\). Here \(\mathbf{A}_{i}\) and \(\mathbf{X}_{i}\) denote the adjacency and feature matrix, respectively. Then we compute the output prediction score as

\[\mathbf{s}_{i}=\text{MLP}_{\theta}\left(\text{GNN}_{\theta}(\mathbf{A}_{i},\mathbf{X}_{i})\right), \tag{9}\]

where \(\mathbf{s}_{i}\in\mathbb{R}^{N}\) is the unnormalized prediction score of \(v_{i}\). With a softmax function, we can normalize \(\mathbf{s}_{i}\) to finally obtain \(p(y^{\prime}_{i}|z_{i})\). In this way, the model learns the crucial information for the classification of classes in \(Y\) via maximizing \(I(Y;Z)\). For the other term, \(I(D;Z)\), we first express it via the expectation:

\[I(D;Z)=\mathbb{E}\left(\log\frac{p(Z|D)}{p(Z)}\right), \tag{10}\]

where \(D\) denotes the structural and feature information of nodes in \(\widetilde{\mathcal{S}}\). It is noteworthy that nodes on graphs do not follow the i.i.d. assumption and thus are inherently correlated. Hence, although the fine-tuning is conducted on a specific meta-task, \(D\) should incorporate the information from the entire graph due to the correlations among nodes (i.e., \(D=(\mathcal{E},\mathbf{X})\)). However, it is difficult to estimate \(p(Z|D)\), since the only available \(D\) is the entire graph. Therefore, we introduce another distribution \(q(Z)\) to approximate the true posterior \(p(Z|D)\). In this way, we can further derive an upper bound of \(I(D;Z)\) for optimization:

\[\begin{split}\mathbb{E}\left(\log\left(\frac{p(Z|D)}{q(Z)}\frac{q(Z)}{p(Z)}\right)\right)&=\mathbb{E}\left(\log\frac{p(Z|D)}{q(Z)}\right)-\text{KL}\left(p(Z)||q(Z)\right)\\ &\leq\mathbb{E}\left(\log\frac{p(Z|D)}{q(Z)}\right),\end{split} \tag{11}\]

where \(\text{KL}(\cdot||\cdot)\) denotes the KL-divergence of distributions. In this way, the final objective is to minimize the KL-divergence between \(p(Z|D)\) and \(q(Z)\). In practice, we utilize GNN\({}_{\theta}\) to instantiate \(p(Z|D)\). However, incorporating structural and feature information from the entire graph can be inefficient and redundant for classification in a meta-task. Thus, we leverage the local-dependence assumption [40] of graph data to define \(D\) as the specific structural and feature information of each node in \(\widetilde{\mathcal{S}}\). In this way, GNN\({}_{\theta}\) can learn to provide a comprehensive estimation of \(p(Z|D)\) based on the specific \(D\) in each meta-task, since \(D\) changes with \(\widetilde{\mathcal{S}}\) in different meta-tasks.
On the other hand, \(q(Z)\) is a prior distribution for \(Z\) and is thus difficult to estimate. Therefore, we propose to instantiate \(q(Z)\) with another GNN parameterized by \(\phi\) (i.e., GNN\({}_{\phi}\)). Meanwhile, since \(q(Z)\) is not conditioned on \(D\), it is necessary to alleviate the inevitable influence of \(D\). Therefore, we propose to randomly mask the corresponding graph structures and node features in the subgraph \((\mathbf{A}_{i},\mathbf{X}_{i})\) of each node in \(\widetilde{\mathcal{S}}\). Specifically, for a subgraph represented by \((\mathbf{A}_{i},\mathbf{X}_{i})\), each entry in \(\mathbf{A}_{i}\) and \(\mathbf{X}_{i}\) has a probability of \(\gamma\) to be masked (i.e., becomes zero), and the masked matrices are denoted as \((\mathbf{\widetilde{A}}_{i},\mathbf{\widetilde{X}}_{i})\). As a result, the model can learn to extract the decisive information for classification while maximally ignoring irrelevant information in \(D\).

Figure 2. The optimization process with our IB fine-tuning module and meta-learning strategy. For each node in the augmented support set, we construct a 2-hop subgraph and a masked 2-hop subgraph. Then we utilize two GNNs, GNN\({}_{\theta}\) and GNN\({}_{\phi}\), to perform IB fine-tuning via \(T\) steps. After that, we calculate the loss on the query set and meta-update model parameters.

Then for the \(i\)-th node \(v_{i}\) in \(\widetilde{\mathcal{S}}\), as illustrated in Fig. 2, we obtain the two representations produced by \(\text{GNN}_{\theta}\) and \(\text{GNN}_{\phi}\) as follows: \[\mathbf{h}_{i}=\text{GNN}_{\theta}(\mathbf{A}_{i},\mathbf{X}_{i}),\;\;\; \widetilde{\mathbf{h}}_{i}=\text{GNN}_{\phi}(\widetilde{\mathbf{A}}_{i}, \widetilde{\mathbf{X}}_{i}), \tag{12}\] where \(\mathbf{h}_{i}\) and \(\widetilde{\mathbf{h}}_{i}\) denote the representations of \(v_{i}\) from the two GNNs, respectively. To minimize the KL-divergence between \(p(Z|D)\) and \(q(Z)\), we utilize a predictor \(p_{\theta}\) (a two-layer MLP) (Krizhevsky et al., 2014; Goodfellow et al., 2015) that uses \(\mathbf{h}_{i}\) to produce a prediction \(p_{\theta}(\mathbf{h}_{i})\) for \(\widetilde{\mathbf{h}}_{i}\). After normalizing both \(p_{\theta}(\mathbf{h}_{i})\) and \(\widetilde{\mathbf{h}}_{i}\), the mean squared error can be defined as follows: \[\text{MSE}(p_{\theta}(\mathbf{h}_{i}),\widetilde{\mathbf{h}}_{i})=\left\|\frac{p_{\theta}(\mathbf{h}_{i})}{\|p_{\theta}(\mathbf{h}_{i})\|}-\frac{\widetilde{\mathbf{h}}_{i}}{\|\widetilde{\mathbf{h}}_{i}\|}\right\|^{2}=2-2\cdot\frac{p_{\theta}(\mathbf{h}_{i})\cdot\widetilde{\mathbf{h}}_{i}}{\|p_{\theta}(\mathbf{h}_{i})\|\|\widetilde{\mathbf{h}}_{i}\|}, \tag{13}\] where \(\|\cdot\|\) denotes the \(\ell_{2}\)-norm. In this way, the loss becomes: \[I(D;Z)\rightarrow\mathcal{L}_{D}=-\sum_{i=1}^{|\widetilde{\mathcal{S}}|}\frac{p_{\theta}(\mathbf{h}_{i})\cdot\widetilde{\mathbf{h}}_{i}}{\|p_{\theta}(\mathbf{h}_{i})\|\|\widetilde{\mathbf{h}}_{i}\|}, \tag{14}\] i.e., the negative cosine similarity between \(p_{\theta}(\mathbf{h}_{i})\) and \(\widetilde{\mathbf{h}}_{i}\). Then the final fine-tuning loss can be defined as \(\mathcal{L}=\mathcal{L}_{Y}+\beta\mathcal{L}_{D}\), where \(\beta\) is the hyper-parameter in the IB principle to trade off the two mutual information terms. ### Meta Learning-based Optimization In this part, we elaborate on the optimization process of X-FNC. As illustrated in Fig. 2, our optimization process consists of two main stages: fine-tuning and meta-optimization. 
Given a specific meta-task \(\mathcal{T}\), we first obtain the augmented support set \(\widetilde{\mathcal{S}}\) via the proposed Poisson Label Propagation module introduced in Sec. 4.1. Then we fine-tune our framework on \(\widetilde{\mathcal{S}}\) for a fast adaptation to this meta-task. Furthermore, to ensure adaptations to each meta-task during evaluation, we utilize the prevalent strategy (Krizhevsky et al., 2014; Goodfellow et al., 2015), which meta-optimizes model parameters according to the loss on the query set. The original strategy optimizes an entire model with one meta-learning rate. However, X-FNC consists of multiple modules with various purposes. Therefore, we propose to separately optimize the modules in X-FNC based on different losses. Specifically, let \(\theta\) denote the total parameters of \(\text{GNN}_{\theta}\), \(\text{MLP}_{\theta}\), and the predictor \(p_{\theta}\). For the fine-tuning process, we first initialize the parameters for fine-tuning as \(\theta_{0}\leftarrow\theta\). Then we conduct \(T\) steps of fine-tuning based on the loss \(\mathcal{L}\) calculated on \(\widetilde{\mathcal{S}}\) as follows: \[\theta_{t}\leftarrow\theta_{t-1}-\alpha\nabla_{\theta_{t-1}}\mathcal{L}\left(\widetilde{\mathcal{S}};\theta_{t-1}\right), \tag{15}\] where \(t\in\{1,2,\dots,T\}\) and \(\mathcal{L}(\widetilde{\mathcal{S}};\theta_{t-1})\) denotes that the loss is calculated based on \(\widetilde{\mathcal{S}}\) with the parameters \(\theta_{t-1}\). \(\alpha\) is the learning rate in each fine-tuning step. It is noteworthy that during fine-tuning, the other parameters of our framework (i.e., \(\text{GNN}_{\phi}\) parameterized by \(\phi\)) are kept unchanged, since simultaneously optimizing \(\text{GNN}_{\theta}\) and \(\text{GNN}_{\phi}\) can result in collapse (e.g., a constant representation) (Krizhevsky et al., 2014; Goodfellow et al., 2015). After \(T\) steps of fine-tuning, we meta-optimize the model with the loss calculated on the query set \(\mathcal{Q}\). Meanwhile, since different modules serve different purposes, we optimize them with two meta-learning rates and losses. More specifically, on the query set \(\mathcal{Q}\), we meta-optimize \(\theta\) and \(\phi\) with the following update functions: \[\theta\leftarrow\theta-\beta_{1}\nabla_{\theta}\mathcal{L}(\mathcal{Q};\theta_{T}),\;\;\;\phi\leftarrow\phi-\beta_{2}\nabla_{\phi}\mathcal{L}_{D}(\mathcal{Q};\theta_{T}), \tag{16}\] where \(\beta_{1}\) and \(\beta_{2}\) are meta-learning rates for \(\theta\) and \(\phi\), respectively. Note that \(\text{GNN}_{\phi}\) is only used to calculate \(\mathcal{L}_{D}\). Thus, \(\text{GNN}_{\phi}\) will be meta-optimized regarding \(\mathcal{L}_{D}\) instead of \(\mathcal{L}\), while \(\theta\) (i.e., parameters of \(\text{GNN}_{\theta}\), \(\text{MLP}_{\theta}\), and \(p_{\theta}\)) is meta-optimized based on \(\mathcal{L}\). Moreover, it is noteworthy that our framework does not explicitly prevent collapse with extra operations (e.g., the negative samples used in contrastive learning (Krizhevsky et al., 2014; Goodfellow et al., 2015)) when minimizing \(\mathcal{L}_{D}\). Nevertheless, the loss design of our framework naturally avoids converging to a degenerate minimum for both \(\theta\) and \(\phi\) (e.g., a trivial constant representation). 
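As a concrete illustration of the fine-tuning stage, the following self-contained PyTorch sketch implements the \(\mathcal{L}_{D}\) branch (Eqs. (12)-(14)) and the inner loop of Eq. (15). The one-layer "GNN", the dimensions, and all names are illustrative stand-ins under our own simplifying assumptions (e.g., the node of interest sits at row 0 of its subgraph); this is a sketch, not the actual X-FNC implementation:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """Stand-in encoder: mean aggregation over A (with self-loops) + linear map."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, A, X):
        A_hat = A + torch.eye(A.size(0))
        deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((A_hat / deg) @ X))

def l_d(gnn_theta, gnn_phi, predictor, A, X, gamma=0.1):
    """Eqs. (12)-(14): predict the masked-view embedding (the q(Z) branch) from
    the clean view (the p(Z|D) branch); the loss is the negative cosine similarity."""
    h = gnn_theta(A, X)[0]
    A_m = A * (torch.rand_like(A) > gamma).float()   # mask edges with prob. gamma
    X_m = X * (torch.rand_like(X) > gamma).float()   # mask features with prob. gamma
    with torch.no_grad():                            # phi is frozen while fine-tuning
        h_tilde = gnn_phi(A_m, X_m)[0]
    return -F.cosine_similarity(predictor(h), h_tilde, dim=0)

# Inner loop (Eq. 15): T gradient steps on a copy of theta; phi stays fixed.
gnn_theta, gnn_phi = TinyGNN(8, 16), TinyGNN(8, 16)
predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
A = (torch.rand(5, 5) > 0.5).float()
X = torch.randn(5, 8)

fast = copy.deepcopy(nn.ModuleList([gnn_theta, predictor]))  # theta_0 <- theta
opt = torch.optim.SGD(fast.parameters(), lr=0.1)             # lr = alpha
for _ in range(3):                                           # T fine-tuning steps
    opt.zero_grad()
    l_d(fast[0], gnn_phi, fast[1], A, X).backward()          # plus L_Y in the full loss
    opt.step()
# The outer step (Eq. 16) would then evaluate on the query set and update the
# original theta with L = L_Y + beta * L_D and phi with L_D alone.
```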
Different from BYOL (Grill et al., 2020) and BGRL (Thakoor et al., 2022), which utilize a momentum strategy, we propose two different losses (i.e., \(\mathcal{L}_{Y}\) and \(\mathcal{L}_{D}\)) for meta-optimization regarding \(\theta\) and \(\phi\). As a result, the meta-optimization targets are different for \(\theta\) and \(\phi\) and thus will not cause collapse during meta-optimization. In addition, collapse will also not occur during fine-tuning, since only \(\theta\) is updated while \(\phi\) remains unchanged in this step. After meta-training on a specific number of meta-training tasks, we evaluate the performance of our framework X-FNC on the meta-test tasks, which are sampled from \(\mathcal{C}_{test}\). ## 5. Experimental Evaluations ### Datasets To evaluate the performance of X-FNC on few-shot node classification with extremely weak supervision, we conduct experiments on four prevalent real-world graph datasets: Amazon-E (Liu et al., 2018), DBLP (Krizhevsky et al., 2014), Cora-full (Liu et al., 2018), and ogbn-arxiv (Krizhevsky et al., 2014). Each dataset is a graph and consists of a considerable number of node classes to ensure that the meta-test tasks contain a variety of classes for a more comprehensive evaluation. Specifically, we obtain the Amazon-E and DBLP datasets from (Krizhevsky et al., 2014). Cora-full and ogbn-arxiv are from their corresponding sources. Then we conduct experiments on these datasets under the extremely weak supervision setting. In particular, we choose three different settings: 5/10/20 labels per class. In other words, each meta-training class only consists of 5/10/20 labeled nodes (50/100/200 for ogbn-arxiv due to the large size of the graph), where the total labeled nodes are approximately 1%/2%/4% of the nodes on the graph. It is noteworthy that we randomly select these labeled nodes from the training classes in the original datasets. The detailed statistics of these datasets are summarized in Table 2, where the class split setting denotes the number of classes used for training/validation/test. ### Experimental Settings To achieve a comparison of X-FNC with competitive baselines, we conduct experiments with state-of-the-art few-shot node classification methods. **Prototypical Networks (PN)** (Wang et al., 2019) and **MAML** (Krizhevsky et al., 2014) are conventional few-shot methods, and we apply them to graph data. **G-Meta** (Liu et al., 2018), **GPN** (Krizhevsky et al., 2014), and **RALE** (Krizhevsky et al., 2014) are recently proposed studies on few-shot node classification. During meta-training, we randomly sample \(\mathcal{T}_{train}\) meta-training tasks from the meta-training classes for model optimization. Here the support set and the query set in each meta-task will only be sampled from labeled nodes (i.e., 5/10/20 labeled nodes in each class). Then during meta-test, we evaluate the model on a series of randomly sampled meta-test tasks from the entire node set of meta-test classes. The final averaged classification accuracy on meta-test tasks will be used as the evaluation metric. For the Poisson Label Propagation module, we set the number of label propagation steps \(T_{l}\) as 10. The number of randomly sampled nodes \(R\) to construct the subgraph for label propagation is set as 10. The scaling parameter of \(\mathbf{A}^{\prime\prime}\) is set as 100, and the hyper-parameter \(\lambda\) is set as 0.5. The number of selected pseudo-labeled nodes \(M\) is 20. For IB fine-tuning, the number of fine-tuning steps \(T\) is 40. 
The mask rate \(\gamma\) is set as 0.1. The learning rate \(\alpha\) during fine-tuning is 0.1. The meta-learning rates \(\beta_{1}\) and \(\beta_{2}\) during meta-optimization are set as 0.005. The trade-off hyper-parameter \(\beta\) is set as 1. The hidden sizes of GNN\({}_{\theta}\) and GNN\({}_{\phi}\) are both 64. The hidden size of MLP\({}_{\theta}\) is 64, while the hidden sizes of the two MLP layers in \(p_{\theta}\) are 128 and 64, respectively. The number of training epochs \(T_{train}\) is 5,000, and the number of meta-test tasks \(T_{test}\) is 500. The dropout rate is 0.5. The query set size \(|\mathcal{Q}|\) is 10. Our code can be found at [https://github.com/SongW-SW/X-FNC](https://github.com/SongW-SW/X-FNC). ### Performance Comparison Table 1 presents the performance comparison of our framework X-FNC and all baselines on few-shot node classification with extremely weak supervision. Specifically, we choose two different few-shot settings to obtain a more comprehensive comparison: 5-way 3-shot and 10-way 3-shot. We use the average classification accuracy over 10 repetitions as the evaluation metric. From Table 1, we make the following observations: (1) Our framework X-FNC achieves the best results compared with all other baselines on all datasets. It also consistently outperforms the baselines under different settings, which validates the superiority of X-FNC on few-shot node classification with extremely weak supervision. (2) When the number of labels per class decreases from 20 to 5, X-FNC has the smallest performance drop among all methods. The main reason is that X-FNC obtains pseudo-labeled nodes via Poisson Label Propagation to alleviate the under-generalizing problem with extremely weak supervision. (3) The performance improvement of X-FNC over other baselines is slightly larger on DBLP. This is due to the fact that DBLP has a larger average node degree, which helps improve the pseudo-labeling accuracy during meta-training for better performance. (4) When the value of \(N\) increases (i.e., more classes in each meta-task), all methods encounter a significant performance drop, since query nodes are classified from a larger class set in each meta-task. Nevertheless, under the extremely limited setting, X-FNC consistently outperforms other baselines. This is because X-FNC can better extract decisive information for classification with a larger value of \(N\) via IB fine-tuning. ### Ablation Study We conduct an ablation study on Amazon-E and Cora-full to evaluate the effectiveness of different components in our framework X-FNC (results on the other datasets are similar). Specifically, we compare X-FNC with three degenerate versions: (a) X-FNC without pseudo-labeling (X-FNC\(\dagger\)); (b) X-FNC without IB-based fine-tuning (X-FNC\(\ddagger\)); (c) X-FNC without both (X-FNC\(\dagger\ddagger\)). More specifically, X-FNC\(\dagger\) removes the pseudo-labeling process such that the support set only consists of the given labeled nodes. X-FNC\(\ddagger\) replaces the IB fine-tuning process with a simple classifier during fine-tuning, while X-FNC\(\dagger\ddagger\) combines the two variants. From Fig. 3, we can obtain several observations. 
Table 1. Performance comparison (accuracy in %, ± std over 10 repetitions) on few-shot node classification with extremely weak supervision. Cells list results with 5/10/20 labels per class (50/100/200 for ogbn-arxiv); the best results are in bold.

| Method | DBLP 5-way 3-shot | DBLP 10-way 3-shot | Amazon-E 5-way 3-shot | Amazon-E 10-way 3-shot |
| --- | --- | --- | --- | --- |
| PN | 49.4±3.2 / 51.9±3.1 / 53.3±3.9 | 36.3±3.8 / 38.5±2.8 / 40.2±3.9 | 51.6±2.3 / 52.2±2.3 / 53.8±2.3 | 36.7±3.3 / 38.2±2.2 / 41.3±3.9 |
| MAML | 50.9±3.1 / 51.8±1.1 / 56.1±2.1 | 39.4±2.3 / 44.3±2.0 / 45.4±3.1 | 48.8±2.4 / 49.4±3.3 / 53.9±2.7 | 39.0±3.3 / 40.3±3.2 / 41.5±3.2 |
| G-Meta | 59.8±3.3 / 61.8±3.5 / 63.3±4.1 | 44.9±2.9 / 51.0±2.3 / 52.9±3.6 | 53.4±2.2 / 55.7±3.6 / 56.6±4.3 | 39.6±4.1 / 41.9±3.0 / 45.6±4.3 |
| GPN | 58.6±3.8 / 62.5±2.8 / 66.9±4.3 | 50.6±3.9 / 52.7±2.4 / 54.6±3.4 | 56.0±4.1 / 60.7±4.7 / 63.0±2.3 | 42.1±4.8 / 45.8±3.3 / 52.1±4.8 |
| RALE | 64.7±4.1 / 66.9±4.7 / 67.9±4.0 | 51.3±4.2 / 55.0±3.2 / 56.9±4.0 | 60.4±4.5 / 64.0±4.8 / 66.1±4.5 | 47.8±4.4 / 48.6±4.8 / 52.4±3.3 |
| X-FNC | **70.1±0.0 / 75.5±3.5 / 76.8±3.3** | **57.2±2.3 / 63.6±3.3 / 65.8±3.1** | **69.9±3.9 / 72.8±3.4 / 76.0±4.8** | **49.2±4.1 / 51.5±2.8 / 56.3±3.4** |

| Method | Cora-full 5-way 3-shot | Cora-full 10-way 3-shot | ogbn-arxiv 5-way 3-shot | ogbn-arxiv 10-way 3-shot |
| --- | --- | --- | --- | --- |
| PN | 45.5±2.7 / 48.1±3.6 / 48.9±3.8 | 28.2±2.3 / 31.6±3.3 / 34.4±2.5 | 39.1±2.5 / 40.8±3.7 / 42.6±3.1 | 23.1±3.6 / 24.4±4.3 / 27.7±3.5 |
| MAML | 46.9±2.6 / 48.6±3.0 / 49.2±2.7 | 32.7±2.7 / 33.2±2.3 / 35.8±1.9 | 41.0±2.4 / 41.9±1.9 / 43.1±3.4 | 23.2±2.2 / 25.4±3.1 / 28.0±3.2 |
| G-Meta | 57.7±3.9 / 58.7±3.6 / 59.8±2.6 | … | … | … |

First, our framework outperforms all other variants, which further validates that each module plays an important role in few-shot node classification with extremely weak supervision. Second, removing the Poisson Learning module deteriorates the performance on Cora-full more than on Amazon-E. The reason is that Cora-full consists of significantly fewer meta-training classes than Amazon-E, and obtaining pseudo-labeled nodes becomes more crucial in this scenario. Third, without IB fine-tuning, the performance drops more significantly on 10-way settings than on 5-way settings. 
The result further indicates that IB fine-tuning is critical for model generalization to meta-test classes, especially when each meta-task includes more classes. ### Effect of Loss \(\mathcal{L}_{D}\) In this part, we conduct experiments to study the effect of the loss \(\mathcal{L}_{D}\) in IB fine-tuning, which is calculated according to Eq. (14) with two hyper-parameters (i.e., the loss weight \(\beta\) and the mask rate \(\gamma\)). Specifically, in X-FNC, \(\beta\) represents the level of attention the model pays to the irrelevant local structures for classification on a specific node. According to the IB principle, with a higher loss weight \(\beta\), the model will focus more on filtering out irrelevant information for classification and less on extracting decisive classification information. On the other hand, the mask rate \(\gamma\) represents the approximate ratio of irrelevant information in the local structure of each node, which should be adjusted according to different datasets. To demonstrate the joint impact of these two hyper-parameters, we present the results with different values of \(\beta\) and \(\gamma\) on Amazon-E and Cora-full. From Fig. 4, we can observe that a mask rate of 0.1 generally provides better performance than other values. This is mainly because a small mask rate can be insufficient to filter out irrelevant structural information, while a larger mask rate can result in the loss of helpful information in local structures. Moreover, the performance drop on Cora-full is slightly larger than on Amazon-E. The reason is that in Cora-full, the average node degree is significantly larger than in Amazon-E. As a result, the graph structure encodes more decisive information for classification, which is more easily impacted by a large mask rate. ### Random Sampling in Pseudo-labeling In this part, we study the impact of factors that affect the pseudo-labeling accuracy during label propagation. Specifically, the sample number \(R\) controls the ratio of random unlabeled nodes in the constructed subgraph during pseudo-labeling. On the other hand, the distance-based adjacency matrix \(\mathbf{A}^{\prime\prime}\) acts as flexible connections between randomly sampled nodes and the limited labeled nodes (i.e., support nodes). Hence, we also adjust the values of \(R\) and the scaling hyper-parameter \(\lambda\) to evaluate their influence. From Fig. 5, we observe that the two parameters affect the pseudo-labeling accuracy differently. In particular, increasing the number of randomly sampled nodes \(R\) first increases the pseudo-labeling accuracy and then keeps it stable, because a more complex structure of the constructed subgraph can help the label propagation process. In addition, a higher \(\lambda\) (i.e., the scaling hyper-parameter for \(\mathbf{A}^{\prime\prime}\)) first increases the pseudo-labeling accuracy while later deteriorating it. The reason is that with a higher value of \(\lambda\), the model focuses more on label propagation to random nodes instead of the neighbors of support nodes. As a result, a higher \(\lambda\) can help discover more nodes that share the same classes as the support nodes. ## 6. Conclusion In this paper, we study the problem of few-shot node classification with extremely weak supervision, which focuses on predicting labels for nodes in meta-test classes while utilizing extremely limited labeled nodes for meta-training. 
Furthermore, to tackle the challenges caused by extremely limited labeled nodes, we propose an innovative framework X-FNC to obtain pseudo-labeled nodes via Poisson Learning and conduct fine-tuning based on the IB principle. As a result, our framework can expand the support set in each meta-task to alleviate the problem of under-generalizing while filtering out irrelevant information for classification to avoid over-fitting. We conduct extensive experiments on four node classification datasets with extremely weak supervision, and the results validate the superiority of our framework over other state-of-the-art baselines. ## Acknowledgements This work is supported by the National Science Foundation under grants IIS-2006844, IIS-2144209, IIS-2223769, CNS-2154962, and BCS-2228534, the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, the Jefferson Lab subcontracts JSA-22-D0311 and JSA-23-D0163, the Commonwealth Cyber Initiative awards HV-2Q23-003 and VV-1Q23-007, the 4-VA Collaborative Research grant, and the UVA 3Cavaliers Seed Research Grant.

Figure 3. Ablation study on our framework on Amazon-E (left) and Cora-full (right) in the \(N\)-way \(K\)-shot setting (\(N\), \(K\)).

Figure 4. Results of our framework on Amazon-E (left) and Cora-full (right) with different mask rates.

Figure 5. Results of pseudo-labeling accuracy (in %) on Amazon-E (left) and Cora-full (right) with different \(\lambda\) and \(R\).
2310.13144
Optimal Symbolic Bound Synthesis
The problem of finding a constant bound on a term given a set of assumptions has wide applications in optimization as well as program analysis. However, in many contexts the objective term may be unbounded. Still, some sort of symbolic bound may be useful. In this paper we introduce the optimal symbolic-bound synthesis problem, and a technique that tackles this problem for non-linear arithmetic with function symbols. This allows us to automatically produce symbolic bounds on complex arithmetic expressions from a set of both equality and inequality assumptions. Our solution employs a novel combination of powerful mathematical objects -- Gr\"obner bases together with polyhedral cones -- to represent an infinite set of implied inequalities. We obtain a sound symbolic bound by reducing the objective term by this infinite set. We implemented our method in a tool, AutoBound, which we tested on problems originating from real Solidity programs. We find that AutoBound yields relevant bounds in each case, matching or nearly-matching upper bounds produced by a human analyst on the same set of programs.
John Cyphert, Yotam Feldman, Zachary Kincaid, Thomas Reps
2023-10-19T20:38:34Z
http://arxiv.org/abs/2310.13144v1
# Optimal Symbolic Bound Synthesis ###### Abstract. The problem of finding a _constant_ bound on a term given a set of assumptions has wide applications in optimization as well as program analysis. However, in many contexts the objective term may be unbounded. Still, some sort of _symbolic_ bound may be useful. In this paper we introduce the _optimal symbolic-bound synthesis problem_, and a technique that tackles this problem for _non-linear arithmetic_ with function symbols. This allows us to automatically produce symbolic bounds on complex arithmetic expressions from a set of both equality and inequality assumptions. Our solution employs a novel combination of powerful mathematical objects--_Gröbner bases_ together with _polyhedral cones_--to represent an infinite set of implied inequalities. We obtain a sound symbolic bound by _reducing_ the objective term by this infinite set. We implemented our method in a tool, AutoBound, which we tested on problems originating from real Solidity programs. We find that AutoBound yields relevant bounds in each case, matching or nearly-matching upper bounds produced by a human analyst on the same set of programs. ## 1. Introduction In this paper we introduce and address the following problem, which we call the _optimal symbolic-bound synthesis (OSB)_ problem: **Given** a (potentially non-linear) formula \(\phi\) representing assumptions and axioms and an objective term \(t\), **find** a term \(t^{*}\) such that

1. (Bound) \(\phi\models t\leq t^{*}\)
2. (Optimality) For every term \(s\) that satisfies the first condition, \(t^{*}\preceq s\) holds,

where \(\preceq\) represents some notion of "term desirability." A solution to this problem has many applications in the automatic analysis of programs. For example, a common program-analysis strategy is to extract the semantics of a program as a logical formula, say \(\phi\). However, such a formula can contain temporary variables and disjunctions, and it is therefore difficult for a human to understand the dynamics of the program from \(\phi\). An instance of the OSB problem allows a user to specify a term of interest \(t\), e.g., representing some resource, such as time, space, or the value of some financial asset, and a term order \(\preceq\) that strictly favors terms only over input parameters. In this instance an OSB solver produces a sound upper bound \(t^{*}\) on the resource \(t\) with all the temporary variables projected out. The problem of finding _constant_ bounds on a term \(t\) given assumptions \(\phi\) is commonly addressed in the field of optimization. However, in the context of program analysis we are often not interested in constant bounds on such a term--either because \(t\) is unbounded, or because the bound on \(t\) is so loose as to be uninformative. An alternative approach--the one adopted in this paper--is to find a _symbolic_ bound, given assumptions \(\phi\). The OSB problem as given above is very general. Namely, we have yet to specify any restrictions on \(\phi\), \(\models\), or the term-desirability order \(\preceq\). In the future, we would like others to consider methods to address OSB problems for various instantiations of \(\models\) and \(\preceq\). In this paper, we consider the OSB problem in the context in which \(\phi\), \(\models\), and \(t\) are interpreted over _non-linear_ arithmetic, and \(\preceq\) gives an intuitive, human notion of a simpler term. Moreover, we do not place a restriction on the form of \(\phi\). 
That is, \(\phi\) is an arithmetic formula with the usual boolean connectives. This setting introduces the significant challenge of non-linear arithmetic reasoning: (i) in the case of rational arithmetic, it is undecidable to determine whether a given \(t^{\prime}\) is a bound on \(t\), let alone find an optimal bound; (ii) in the case of real arithmetic, reasoning is often prohibitively expensive. In the setting of finding bounds, the challenge is finding a finite object to represent the infinite set of upper bounds implied by the formula \(\phi\). In the case of _linear_ arithmetic, convex polyhedra can be represented finitely, and can completely represent the set of inequality consequences of a linear formula \(\phi\). Moreover, manipulating polyhedra is often reasonably efficient. However, in the non-linear rational case no such complete object exists, and in the non-linear real case manipulating the corresponding object1 is computationally challenging. Footnote 1: See the discussion on Positivstellensatz in §8. To address this challenge we introduce a mathematical object we call a _cone of polynomials_ (§4.3) to hold on to an infinite set of non-linear inequalities. A cone of polynomials consists of a polynomial ideal (§3.1), which captures equations, and a polyhedral cone (§3.2), which captures inequalities. Cones of polynomials strike a balance between expressiveness and computational feasibility--using _non_-linear reasoning on _equalities_ through the ideal, and _linear_ reasoning on _in_equalities through the polyhedral cone, gives efficient yet powerful _non_-linear reasoning on _in_equalities. We utilize cones of polynomials to address the non-linear OSB problem in a two-step process: (1) From \(\phi\) create an implied cone of polynomials \(C\). That is, \(C\) is an infinite collection of inequalities, each implied by \(\phi\). (2) _Reduce_ the term \(t\) by \(C\) to obtain \(t^{*}\). Due to the difficulties of non-linear arithmetic the first step is necessarily incomplete. However, the second step (reduction) is _complete_ with respect to cones of polynomials. Our reduction method for cones of polynomials makes use of a sub-algorithm that reduces a linear term by a polyhedron. That is, in §4.2 we give an algorithm (Alg. 2) that _solves_ the OSB problem where \(\phi\) is a conjunction of linear inequalities (a polyhedron), \(\models\) is interpreted as linear arithmetic, and \(\preceq\) is an order that encodes preferability of the dimensions of the returned bound. This method makes use of a novel _local projection_ (§4.1) method. Local projection can be seen as an incomplete method of quantifier elimination for polyhedra that avoids the blow-up of a full-projection method such as Fourier-Motzkin. Nevertheless, local projection suffices in the case of the OSB problem for polyhedra. In §7, we compare our reduction method based on local projection with a perhaps more obvious approach based on multi-objective linear programming. We find that our algorithm solves the problem much more efficiently than the LP approach. In §4.3, we show (Thm. 4.13) how the polyhedral OSB solution can be extended to the setting of cones of polynomials. This means that in particular we are able to completely solve OSB with respect to a polynomial \(t\) and a polynomial cone \(C\), which has the property that the result, \(t^{*}\), is optimal with respect to any other bound \(s\) implied by the cone \(C\). This method works for desirability orders \(\preceq\) that are _monomial orders_. 
With these methods in hand, §5 shifts to the following problem: Given a formula \(\phi\), extract an implied cone \(C\) to which the methods from §4 can be applied. Due to the issues of non-linear arithmetic, such an extraction process is necessarily incomplete. However, in §5 we give a heuristic method for extracting a cone from a non-linear formula that works well in practice (§7). Moreover, our method allows \(\phi\) and \(t\) to contain function symbols, such as \(\lfloor\cdot\rfloor\) and \(\frac{1}{\cdot}\) (reciprocal), both of which are outside the signature of polynomial arithmetic. Overall, our two-step method is sound with respect to non-linear arithmetic augmented with additional function symbols. In §6, we introduce the _effective-degree order_. Effective degree is essentially a degree monomial order, extended to the case of non-polynomial function symbols. Effective-degree orders capture the intuitive notion that terms with fewer products are simpler. Also, variable restriction can be encoded as an effective-degree order. Our saturation and reduction methods, combined with effective degree, result in a powerful yet practical method for addressing the non-linear OSB problem. In §7, we give experimental results showing that our method, using effective degree as a term-desirability order, produces interesting and relevant bounds on a set of benchmarks extracted from Solidity code by industry experts in smart-contract verification. Our tool is able to produce, in seconds or minutes, bounds that match or nearly match human-produced bounds, as well as bounds where ones were previously unknown to human experts.

Contributions.

1. The introduction of the optimal symbolic-bound synthesis problem
2. The local-projection method for projecting polyhedra (§4.1)
3. Algorithms for reducing a term \(t\) by a polyhedron (§4.2) and a polynomial cone (§4.3)
4. A saturation method that extracts a polynomial cone from a non-linear formula with additional function symbols (§5)
5. The introduction of the effective-degree order on terms, which is amenable to automation, and in practice results in useful bounds (§6)
6. An experimental evaluation demonstrating the power and practicality of our method (§7)

§8 discusses related work. Proofs of theorems are given in appendices. ## 2. Overview To motivate the optimal symbolic-bound synthesis problem, as well as understand how we address it, consider the code in Fig. 1(a). This code presents us with an interesting non-linear inequational-reasoning problem, which arises from a common smart-contract pattern. A typical "rebase" or "elastic" smart contract holds some amount of "tokens," which can vary over time, and each user holds a certain amount of "shares" in the tokens. While the number of tokens may vary (e.g., to control the price), a given number of shares that the user holds should correspond to a largely-unchanging percentage of the total tokens. The utility class _Rebase_, which is based on real-world Solidity code, tracks the total number of tokens in _elastic_ and the total number of shares in _base_. The function _add_ increases the number of tokens, and the number of available shares accordingly. However, a given amount \(v\) should be represented by the same number of shares even after an _add_ operation. Thus, for given values \(v\) and \(a\), if we execute the sequence \[x\ =\ toBase(v)\ ;\ add(a)\ ;\ y\ =\ toBase(v),\] the term \(t=x-y\) should be \(0\), or close to \(0\). 
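To make the drift concrete, here is a small Python model of the pattern just described. The update rules are reconstructed from the formula \(\phi\) given in the next paragraph (\(a2=\lfloor ab/e\rfloor\), \(e^{\prime}=e+a\), \(b^{\prime}=b+a2\)); the class and method names are an illustrative sketch rather than a transcription of the Solidity code, and the assertion checks the analyst's bound quoted below:

```python
from math import floor

class Rebase:
    """Minimal model of the rebase pattern: elastic tracks tokens, base tracks shares."""
    def __init__(self, elastic, base):
        self.elastic, self.base = elastic, base
    def to_base(self, v):                    # shares representing v tokens
        return floor(v * self.base / self.elastic)
    def add(self, a):                        # deposit a tokens, mint shares
        minted = floor(a * self.base / self.elastic)
        self.elastic += a
        self.base += minted

# Probe x - y for a few concrete values against the analyst's bound 1 + floor(v/(e+a)).
for (e, b, v, a) in [(7, 5, 100, 3), (10, 9, 55, 1), (3, 2, 17, 4)]:
    r = Rebase(e, b)
    x = r.to_base(v); r.add(a); y = r.to_base(v)
    assert x - y <= 1 + floor(v / (e + a))
    print(f"e={e} b={b} v={v} a={a}: x-y = {x - y}")
```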
Plugging in concrete values shows that \(t\) is not identically \(0\), but how far from \(0\) is it? The answer can depend on \(v\) and \(a\), as well as the initial values of _elastic_ and _base_, so a precise characterization involves a _symbolic_ expression. Indeed, a verification expert that analyzed this problem came up with the bound \(t\leq 1+\lfloor v/(e+a)\rfloor\), where \(e\) is the initial value of _elastic_. Can we automate this creative process of _generating_ a bound, which for humans often involves much trial-and-error, while even _validating_ a guess for a bound is challenging? In this case, can we automatically find lower and upper symbolic bounds on the term \(t=x-y\)? The same question can be translated to an OSB problem by writing the following conjunctive formula that represents the assumptions about the initial state, together with the program's execution: \[\phi\triangleq x=\left\lfloor\frac{vb}{e}\right\rfloor\wedge y=\left\lfloor\frac{vb^{\prime}}{e^{\prime}}\right\rfloor\wedge a2=\left\lfloor\frac{ab}{e}\right\rfloor\wedge e^{\prime}=e+a\wedge b^{\prime}=b+a2\wedge a,b,e,v\geq 0.\] The goal is to produce a term \(t^{*}\) such that \(\phi\models x-y\leq t^{*}\). Furthermore, we are interested in terms that are "insightful" in some sense. For example, we would like to produce a bound that does not contain any of the temporary variables \(e^{\prime}\), \(a2\), or \(b^{\prime}\), nor the variables \(x\) and \(y\). This variable restriction does not alone determine a desirable bound, but for this example we at least require a bound to satisfy this constraint. For this example, as well as our experiments, we use the effective-degree order (§6) as a stand-in for term desirability. Using effective degree we can encode variable restriction into the order, ensuring that the variables \(e^{\prime}\), \(a2\), \(b^{\prime}\), \(x\), and \(y\) are absent from the bound we produce if such a bound exists. Effective degree goes further and roughly minimizes the number of products in the result. Fig. 1(b) gives an outline of our method. The first step to produce an implied cone is to _purify_ the formula \(\phi\) into a formula using only polynomial symbols. We do this by introducing a new variable for each non-polynomial function, and placing the variable assignment in a _foreign-function map_. That is, for every non-polynomial function symbol \(f(w)\) we introduce a new variable, \(u\), add \(u\mapsto f(w)\) to our map, and replace \(f(w)\) with \(u\). Purifying the formula \(\phi\) we obtain \[\phi^{\prime}\triangleq x=u_{4}\wedge y=u_{5}\wedge a2=u_{6}\wedge e^{\prime}=e+a\wedge b^{\prime}=b+a2\wedge a,b,e,v\geq 0\] \[TM=\{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\lfloor vbu_{1}\rfloor,u_{5}\mapsto\lfloor vb^{\prime}u_{2}\rfloor,u_{6}\mapsto\lfloor abu_{3}\rfloor\}\] The purpose of purification is to produce a \(\phi^{\prime}\) which contains no function symbols. The original formula \(\phi\) is equivalent to \(\phi^{\prime}\wedge\bigwedge_{u\mapsto f(w)}u=f(w)\). By making this separation of \(\phi\) into \(\phi^{\prime}\) and \(TM\) we can create methods that can separately work on \(\phi^{\prime}\), \(TM\), or the combination of \(\phi^{\prime}\) and \(TM\) when required. We then strengthen \(\phi^{\prime}\) by adding properties of the functions in \(TM\). This is the Instantiate Axioms step in Fig. 1(b). For example, \(u_{4}\) represents a floor term and so satisfies more properties than a generic polynomial variable. 
Thus, after purification our method uses the term map \(TM\) to instantiate axioms for floor and inverse for each occurrence of a floor and inverse term appearing in the map, and adds them to \(\phi^{\prime}\). That is, we create \(\phi^{\prime\prime}\) by adding the instantiated axioms \((u_{1}e=1)\), \((u_{2}e^{\prime}=1)\),..., \(vbu_{1}-1\leq u_{4}\leq vbu_{1}\), \(vb^{\prime}u_{2}-1\leq u_{5}\leq vb^{\prime}u_{2}\),..., \(e\geq 0\ \Longrightarrow\ u_{1}\geq 0\), \(vbu_{1}\geq 0\implies u_{4}\geq 0\), etc., to \(\phi^{\prime}\). At this point \(\phi^{\prime\prime}\) is \[\phi^{\prime\prime}\ \triangleq\ \phi^{\prime}\wedge(e\geq 0\implies u_{1}\geq 0)\wedge\dots\wedge(eu_{1}=1)\wedge\dots\wedge(vbu_{1}-u_{4}\geq 0)\wedge\dots\] After axioms have been instantiated, \(\phi^{\prime\prime}\) and the term map \(TM\) are used to construct a _cone of polynomials_ (§4.3). A cone of polynomials is a composite of a _polynomial ideal_ (§3.1) and a _polyhedral cone_ (§3.2). The ideal and polyhedral cone are each represented by a finite set of basis equations and inequalities, respectively. The ideal consists of its basis equations, as well as all other equations that are _polynomially implied_ by the basis equations. That is, the ideal consists of polynomials \(p_{1},\dots,p_{k}\), representing assumptions \(p_{i}=0\), as well as any polynomial of the form \(h_{1}p_{1}+\dots+h_{k}p_{k}\) for polynomials \(h_{1},\dots,h_{k}\). The polyhedral cone consists of its basis inequalities as well as all other inequalities that are _linearly implied_ by the basis inequalities. That is, the polyhedral cone consists of polynomials \(q_{1},\dots,q_{r}\), representing assumptions \(q_{i}\geq 0\), as well as any other polynomial of the form \(\lambda_{1}q_{1}+\dots+\lambda_{r}q_{r}\) for scalars \(\lambda_{i}\geq 0\). Overall, the cone consists of terms of the form \(p+q\) where \(p\) is a member of the ideal and \(q\) is a member of the polyhedral cone. Because \(p\) is an implied equation and \(q\) is an implied inequality, we have \(p+q\geq 0\). We call the process of creating a cone of polynomials from \(\phi^{\prime\prime}\) and \(TM\) _saturation_. We describe the saturation process in §5 using the running example from this section. At a high level, saturation is an iterative process that extracts equalities and inequalities that are implied by \(\phi^{\prime\prime}\) and \(TM\). A cone of polynomials is created by adding extracted equalities to the ideal part and extracted inequalities to the polyhedral-cone part. The methods that we use include congruence closure (§5.1), linear consequence finding (§5.2), and "taking products" (§5.3). By taking products, we mean that from the inequalities \(w\geq 0\) and \(z\geq 0\), we can derive \(w^{2}\geq 0\), \(wz\geq 0\), \(z^{2}\geq 0\), etc. There is an infinite set of products we could add, so our method takes products up to a given _saturation depth_. In our experiments a saturation depth of 3 worked well. Bounding the set of products we add, together with our incomplete consequence-finding method, makes saturation incomplete for full non-linear arithmetic. However, our experiments show that saturation works well in practice (§7). 
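As an illustration of the purification step, the following SymPy sketch replaces floor and reciprocal subterms bottom-up with fresh variables while recording their definitions in a term map. The traversal and naming scheme are our own; this is a schematic rendering of the step, not AutoBound's implementation:

```python
import sympy as sp

def purify(expr, term_map, fresh=iter(f"u{i}" for i in range(1, 100))):
    """Bottom-up purification: replace floor and reciprocal subterms with fresh
    variables, recording their definitions in term_map (a sketch only)."""
    if expr.is_Atom:
        return expr
    new = expr.func(*[purify(a, term_map, fresh) for a in expr.args])
    if isinstance(new, sp.floor) or (new.is_Pow and new.exp.is_negative):
        u = sp.Symbol(next(fresh))
        term_map[u] = new       # remember u |-> floor(...) or u |-> (...)**-1
        return u
    return new

tm = {}
v, b, e = sp.symbols("v b e")
print(purify(sp.floor(v * b / e), tm))   # e.g. u2
print(tm)                                # e.g. {u1: 1/e, u2: floor(b*u1*v)}
```

On \(\lfloor vb/e\rfloor\) this produces a fresh variable for the reciprocal and then one for the floor term, mirroring the shape of \(TM\) above; the instantiate-axioms step would then add constraints such as \(u_{1}e=1\) and \(vbu_{1}-1\leq u_{2}\leq vbu_{1}\) for the resulting variables.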
As detailed in §5, saturation produces the following cone of polynomials on the running example: \[C\triangleq\langle x-u_{4},y-u_{5},a2-u_{6},e^{\prime}-e-a,b^{\prime}-b-a2,u_{3}-u_{1},eu_{1}-1,(e+a)u_{2}-1\rangle+\] \[\qquad\qquad[e,a,v,b,e^{2},ea,\dots,vbu_{1}-u_{4},vu_{2}u_{6},vbu_{2}-vbu_{1}+vu_{2},u_{5}-vu_{2}u_{6}-vbu_{2}+1,1];\] \(\langle x-u_{4},\dots\rangle\) is the ideal and \([e,\dots,1]\) is the polyhedral cone. In other words, saturation extracted the equations \(x-u_{4}=0\), \(y-u_{5}=0\), etc., as well as the inequalities \(e\geq 0\), \(a\geq 0\), and many more.5 Footnote 5: For this example, saturation extracted 814 inequalities. The next step in addressing the OSB problem is to _reduce_ our term of interest by \(C\). That is, we need to find the best \(t^{*}\) such that the cone implies \(t\leq t^{*}\). Equivalently, the problem is to find the best \(t^{*}\) such that the cone contains \(t^{*}-t\). Our reduction procedure for polynomials and polynomial cones works by first reducing the polynomial of interest by the ideal, and then reducing the result by the polyhedron. The process of reducing the polynomial by the ideal is a standard method in computational algebraic geometry (§3.1); however, we present a novel polyhedral reduction method (§4.2), which in turn uses a novel projection method (§4.1). The main idea of our polyhedral reduction method is to order the dimensions of the polyhedron, which in our setting correspond to monomials, and successively _project_ out the worst dimension of the polyhedron until the term of interest \(t\) becomes unbounded. We show (Thm. 4.5) that the bound on \(t\) right before \(t\) becomes unbounded is optimal in the order of dimensions. In §4.3, we show that the combination of the standard ideal reduction with the polyhedral reduction yields a reduction method for the combined cone. For the example from Fig. 1(a), we instantiate an effective-degree order that favors terms without temporary variables. From the saturated cone of polynomials \(C\), we have the following equations in the basis of the ideal, \(x-u_{4}\) and \(y-u_{5}\), as well as the following inequalities in the basis of the polyhedral cone: \[vbu_{1}-u_{4}\geq 0\qquad va2u_{2}+vbu_{2}-vbu_{1}+vu_{2}\geq 0\qquad u_{5}-va2u_{2}-vbu_{2}+1\geq 0\] Reducing \(x-y\) by the equations yields \(u_{4}-u_{5}\). The polyhedral reduction method can then be seen as rewriting \(u_{4}-u_{5}\) to \(vu_{2}+1\) via the justification \[vu_{2}+1-(u_{4}-u_{5})=vbu_{1}-u_{4}+va2u_{2}+vbu_{2}-vbu_{1}+vu_{2}+u_{5}-va2u_{2}-vbu_{2}+1\] The right-hand side is non-negative. Thus, \(x-y\leq vu_{2}+1\). Before returning the final result, our system _unpurifies_ this bound by replacing \(u_{2}\) with its definition in \(TM\). Consequently, our system returns the final result as "\(x-y\leq\frac{v}{a+e}+1\)." Our system can also be used to automatically find a lower bound for a term. In our example, the lower bound that it finds is "\(x-y\geq-1\)." These bounds, \(-1\leq x-y\leq\frac{v}{a+e}+1\), which the implementation of our method found in a matter of seconds, are very nearly the bounds, \(0\leq x-y\leq\left\lfloor\frac{v}{a+e}\right\rfloor+1\), found manually by a human analyst. Differences between the bound we compute automatically and the bound produced by a human sometimes stem from slightly different preferences in the tension between the bound's simplicity and tightness, but in this case a deeper issue is at play. 
Our method has a limited capacity to perform inequality reasoning _inside_ a floor term; for instance, we do not produce the inequality \(\lfloor t_{1}\rfloor\leq\lfloor t_{2}\rfloor\) even when \(t_{1}\leq t_{2}\) is known, if \(t_{1}\) or \(t_{2}\) are not present in the input formula. We do obtain the slightly weaker \(\lfloor t_{1}\rfloor\leq t_{2}\), which, for instance, does not precisely cancel with \(-\lfloor t_{2}\rfloor\), leading to slightly weaker bounds. Our initial experience with the system is that it is able to produce interesting upper bounds that are challenging to come up with manually. In one case (fixed-point integer arithmetic--see §7), we asked a human analyst to propose a bound for a problem they knew, but had previously attempted only a bound in the other direction (whereas our system computes both at the same time). After approximately fifteen minutes, and correcting the derivation at least once, they came up with a bound that nearly matches the bound that our system generated in less than a second. ## 3. Background Our method is based on the construction and manipulation of a cone, which as stated consists of a polynomial ideal to hold on to equations and a linear cone to hold on to inequalities. Part of our contribution is the use and manipulation of this composite object. However, we borrow many techniques and ideas from the study of the individual components. In this section, we give background on the definitions and properties of polynomial ideals and linear cones. Overall, our method works for any ordered field. That is, our techniques are sound with respect to the theory of ordered fields. We will write \(\models_{OF}\) to denote entailment modulo the theory of ordered fields when we want to indicate soundness. Since \(\mathbb{R}\) and \(\mathbb{Q}\) are ordered fields, \(\models_{OF}\) implies entailment with respect to non-linear real and non-linear rational arithmetic. ### Polynomials and Ideals In this section, we give definitions of polynomial ideals, as well as highlight algorithms and results that we will need in order to manipulate and reason about polynomial ideals. For a more in-depth presentation of polynomial ideals and algorithms for manipulating them, see Cox et al. (2009). Our use of the phrase monomial refers to the standard definition. We consider polynomials as finite linear combinations of monomials over some (ordered) field \(\mathbb{K}\). For example, \(\mathbb{K}\) could be \(\mathbb{Q}\) or \(\mathbb{R}\). In this paper we use polynomial ideals to represent an infinite set of equality consequences. Due to some classical results from ring theory, as well as a result due to Hilbert concerning polynomial ideals, we can take the following as a definition for any ideal of polynomials. **Definition 3.1**: _Let \(\{p_{1},\ldots,p_{k}\}\) be a finite set of polynomials. \(\langle p_{1},\ldots,p_{k}\rangle\) denotes the ideal generated by the basis \(\{p_{1},\ldots,p_{k}\}\). Furthermore,_ \[\langle p_{1},\ldots,p_{k}\rangle=\{h_{1}p_{1}+\ldots+h_{k}p_{k}\mid h_{i}\text{ is an arbitrary polynomial for }1\leq i\leq k\}.\] If we consider a set of polynomials \(p_{1},\ldots,p_{n}\) as a given set of polynomial equations, i.e., \(p_{i}=0\) for each \(i\), then the ideal generated by \(p_{1},\ldots,p_{n}\) consists of equational consequences. 
**Example 3.2**: _One way to see that \(x-1-t=0\wedge y-1-t^{2}=0\models_{\mathrm{OF}}x^{2}-2x=y-2\) is by observing_ \[x^{2}-2x-y+2 \in\langle x-1-t,y-1-t^{2}\rangle\] \[x^{2}-2x-y+2 =(x-1+t)(x-1-t)+(-1)(y-1-t^{2})\] As will be highlighted shortly, determining ideal membership for \(\mathbb{Q}[x_{1},\ldots,x_{n}]\) and \(\mathbb{R}[x_{1},\ldots,x_{n}]\) is _decidable_. However, in the general context of non-linear rational arithmetic or non-linear integer arithmetic, determining polynomial consequences is _undecidable_. Therefore, in general ideals give a sound but incomplete characterization of equational consequences. A simple example illustrating this fact is \(x^{2}=0\models_{\mathrm{OF}}x=0\), but \(x\notin\langle x^{2}\rangle\). While polynomial ideals are in general incomplete, they have the advantage of having useful algorithms for manipulation and reasoning. Namely, the multivariate-polynomial division algorithm and the use of Gröbner bases give a practical method to _reduce_ a polynomial by an ideal and check ideal membership. These techniques are integral to our overall method, so we now briefly highlight some of these ideas. Algorithms for manipulating polynomials often consider the monomials one at a time. Thus, we often orient polynomials using a _monomial order_. **Definition 3.3**: _Let \(\ll\) be a relation on monomials. \(\ll\) is a monomial order if_ 1. \(\ll\) _is a total order._ 2. _For any monomials_ \(a\)_,_ \(b\)_, and_ \(m\)_, if_ \(a\ll b\)_, then_ \(a\cdot m\ll b\cdot m\)__ 3. _For any monomial_ \(a\)_,_ \(a\gg 1\)_._ With a monomial order defined, we often write polynomials as a sum of their monomials in decreasing order. A strategy for ensuring termination of algorithms is to process monomials in decreasing order with respect to some monomial order, while guaranteeing that intermediate steps do not introduce larger monomials. Because monomial orders are well-founded, termination is ensured. Common monomial orders are the lexicographic monomial order, the degree lexicographic order, and the degree reverse lexicographic order (grevlex). The details of these monomial orders are unimportant for this work, but in practice the grevlex order tends to yield the best performance in implementations. **Definition 3.4**: _With respect to a monomial order, the leading monomial of a polynomial \(p\), denoted \(\mathrm{LM}(p)\), is the greatest monomial of \(p\)._ Once a monomial order has been defined, we can use the multivariate polynomial division algorithm to divide a polynomial \(f\) by an ideal \(\langle p_{1},\ldots,p_{k}\rangle\). This algorithm successively divides the terms of \(f\) by the leading terms of the set of \(p_{i}\)'s until no more divisions can be performed. The result is a remainder \(r\). The value of this remainder can be used for various purposes. For example, if \(r=0\), then \(f\in\langle p_{1},\ldots,p_{k}\rangle\). However, examples can be constructed that show that performing multivariate division on an _arbitrary_ basis does not necessarily yield a unique result. In other words, it is possible to have another basis \(p_{1}^{\prime},\ldots,p_{k}^{\prime}\) with \(\langle p_{1}^{\prime},\ldots,p_{k}^{\prime}\rangle=\langle p_{1},\ldots,p_{k}\rangle\), but dividing \(f\) by \(p_{1}^{\prime},\ldots,p_{k}^{\prime}\) will yield a different remainder \(r^{\prime}\). The solution to this issue is to divide by a _Gröbner basis_. 
That is, to divide a polynomial \(f\) by an ideal \(\langle p_{1},\ldots,p_{k}\rangle\), we do not divide \(f\) by \(p_{1},\ldots,p_{k}\); instead we construct a Gröbner basis \(g_{1},\ldots,g_{s}\) with \(\langle g_{1},\ldots,g_{s}\rangle=\langle p_{1},\ldots,p_{k}\rangle\). It can then be shown that dividing \(f\) by \(g_{1},\ldots,g_{s}\) will yield a unique remainder. The exact definition of a Gröbner basis is technical and not required for this paper. What is required to know is that, given an ideal \(\langle p_{1},\ldots,p_{k}\rangle\), there are algorithms, such as Buchberger's algorithm [3, 4] and the F4 [11] and F5 [12] algorithms, for constructing a Gröbner basis \(g_{1},\ldots,g_{s}\) with \(\langle g_{1},\ldots,g_{s}\rangle=\langle p_{1},\ldots,p_{k}\rangle\). Furthermore, using the multivariate division algorithm to divide a polynomial \(f\) by a Gröbner basis yields a remainder with certain special properties. **Definition 3.5**: _Let \(B\) be a Gröbner basis, and \(f\) a polynomial. We call the process of dividing \(f\) by \(B\) using the multivariate division algorithm and taking the remainder reduction, and denote the process by \(\textbf{red}_{B}(f)\)._ **Theorem 3.6**: _[_9_, Section 2.6]_ _Let \(B=\{g_{1},\ldots,g_{s}\}\) be a Gröbner basis for an ideal \(I\), \(f\) a polynomial, and \(r=\textbf{red}_{B}(f)\). \(r\) is the unique polynomial with the following properties:_ 1. _No term of_ \(r\) _is divisible by any of_ \(\operatorname{LT}(g_{1}),\ldots,\operatorname{LT}(g_{s})\)_._ 2. _There is a_ \(g\in I\) _with_ \(f=g+r\)_._ **Lemma 3.7**: \(\operatorname{LT}(f)\gg\operatorname{LT}(\textbf{red}_{B}(f))\) _for any_ \(f\) _and basis_ \(B\)_._ **Corollary 3.7**: _Let \(B=\{g_{1},\ldots,g_{s}\}\) be a Gröbner basis for an ideal \(I\), \(f\) a polynomial, and \(r=\textbf{red}_{B}(f)\). \(r\) is the optimal remainder in the monomial order. That is, for any other \(r^{\prime}\) with \(f=g^{\prime}+r^{\prime}\) for some \(g^{\prime}\in I\), \(\operatorname{LT}(r^{\prime})\gg\operatorname{LT}(r)\)._ **Corollary 3.8**: _Let \(B=\{g_{1},\ldots,g_{s}\}\) be a Gröbner basis for an ideal \(I\), and \(f\) a polynomial. \(f\in I\) if and only if \(0=\textbf{red}_{B}(f)\)._ ### Polyhedral Cones In this section, we give background on polyhedral cones. Mirroring the process of using ideals to represent equations, we use polyhedral cones to represent inequalities. The reader should keep in mind that our method uses two different kinds of cones. We have an inner cone which is used to hold on to linear inequalities, and an outer cone which consists of an ideal and the inner cone. The inner cone is a _polyhedral cone_ and is the main subject of this section. We will describe the outer cone in more detail in §4.3. To make the distinction between the two concepts clear, we will use the terms "polyhedral cone" and "cone of polynomials" to refer to the inner and outer cone, respectively. **Definition 3.8**: _Let \(\mathbb{K}\) be an ordered field (e.g., \(\mathbb{R}\) or \(\mathbb{Q}\)) and \(V\) be a vector space over \(\mathbb{K}\). A polyhedral cone \(C\) is the conic combination of finitely many vectors. That is, there is a set of vectors \(\{v_{1},\ldots,v_{n}\}\) with \(C=\{\lambda_{1}v_{1}+\ldots+\lambda_{n}v_{n}\mid\lambda_{i}\geq 0\}\)._ 
We use \(C=[v_{1},\ldots,v_{n}]\) to denote that \(C\) is generated by the vectors \(\{v_{1},\ldots,v_{n}\}\). While we use polyhedral cones to _represent_ a set of linear consequences, we frame some of our reduction algorithms (§4.2) in terms of _convex polyhedra_. Fortunately, there is a very strong connection between polyhedral cones and convex polyhedra. There are multiple equivalent definitions for a convex polyhedron that lead to different representations. In this paper we only represent a polyhedron using a set of inequality constraints, sometimes called the _constraint representation_. **Definition 3.9**: _Let \(\mathbb{K}\) be an ordered field. A linear constraint over variables \(x_{1},\ldots,x_{n}\) is of the form \(a_{1}x_{1}+\cdots+a_{n}x_{n}+b\geq 0\), \(a_{1}x_{1}+\cdots+a_{n}x_{n}+b>0\), or \(a_{1}x_{1}+\cdots+a_{n}x_{n}+b=0\), where \(a_{1},\ldots,a_{n},b\in\mathbb{K}\). A (convex) polyhedron is the set of points of \(\mathbb{K}^{n}\) satisfying a set of linear constraints._ Because each equality can be represented as two inequalities, we could consider polyhedra to not have equality constraints. However, having explicit equalities can allow algorithms to be more efficient in their calculations. We do not take a strong stance on whether all of the equalities of a polyhedron are explicit or not. In §4, we sometimes consider equality constraints as being explicitly part of a polyhedron, but our methods work the same if the equalities are implicit. If we look at the constraints of the polyhedron as given inequality assumptions, taking conic combinations of the constraints gives a sound set of inequality consequences. Moreover, Farkas' Lemma shows that this set of consequences is also _complete_. **Lemma 3.10**: _(Variant of Farkas' Lemma) Let \(P\) be a non-empty polyhedron with non-strict constraints \(N=\{c_{1}\geq 0,\ldots,c_{k}\geq 0\}\) and strict constraints \(S=\{s_{1}>0,\ldots,s_{l}>0\}\). Let \(P^{*}\) denote the polyhedral cone \([c_{1},\ldots,c_{k},s_{1},\ldots,s_{l},\mathbf{1}]\). Then \(P\models_{\text{LRA}}t\geq 0\) if and only if \(t\in P^{*}\)._ We close this section by observing that a polyhedral cone that represents inequalities can also represent equalities, in the sense that for some vector \(v\in C\) it is possible for \(-v\in C\) to hold as well. If \(C\) is holding onto non-negative vectors, we would have \(v\geq 0\) and \(-v\geq 0\), so \(v=0\). **Definition 3.11**: _If the only vector \(v\) of \(C\) with \(-v\in C\) is \(\mathbf{0}\), then \(C\) is called salient._ In the case of polyhedral cones we can determine whether a cone is salient by looking at the generators. **Lemma 3.12**: _If \(C=[v_{1},\ldots,v_{n}]\) is not salient, then there is an \(i\in\{1,\ldots,n\}\) with \(-v_{i}\in C\)._ **Remark**: _Lem. 3.10 can be modified to also say something about the strict inequality consequences of \(P\). However, we would have to add the condition that the "witness" of \(t\in P^{*}\) has at least one of the multipliers on the strict inequality constraints non-zero. Consequently, the same machinery presented here can be used to hold on to and decide strict inequalities. We just need to make sure that we keep appropriate track of which constraints are strict and which ones are not. AutoBound does distinguish between strict and non-strict inequalities, and so it is possible for the tool to return a bound \(t^{*}\) and know that \(t<t^{*}\) rather than \(t\leq t^{*}\). 
## 4. Reduction

In this section we present our algorithm for efficiently reducing a term w.r.t. a cone of polynomials. We first present its key technical component, the algorithm for local projection (§4.1), then explain how to use it to perform reduction w.r.t. an (ordinary) polyhedron (§4.2), and finally extend it to operate w.r.t. the extra ideal to handle the more general case of a cone of polynomials (§4.3).

### Local Projection

An important polyhedral operation is _projection_. Our reduction method uses a weaker projection operation, which we call _local projection_. We could use a standard polyhedral-projection operation such as Fourier-Motzkin elimination to yield the same result. However, using full Fourier-Motzkin elimination to remove a single variable from a polyhedron of \(n\) constraints can result in \(\mathcal{O}(\frac{n^{2}}{4})\) constraints in the projection. Projecting out \(d\) variables can result in \(\mathcal{O}(\left(\frac{n}{4}\right)^{2^{d}})\) constraints, although many are redundant. The number of necessary constraints grows only singly exponentially, at the expense of some additional work detecting redundant constraints at each step. In contrast, using local projection to remove a single variable is linear in time and space. Thus, projecting out \(d\) variables takes \(\mathcal{O}(dn)\) time and space. The caveat is that local projection only results in a subset of the real projection result, but, as we will show, the real projection result can be finitely covered by local projections. In the worst case the number of partitions for projecting out a single variable is \(\mathcal{O}(n^{2})\), so local projection does not give a theoretical advantage compared to Fourier-Motzkin. However, in our case, we often do not need to compute the full projection result. Instead, we only require parts of it, and so using local projection gives us a _lazy_ method for computing objects of interest. Local projection can also be understood as a method of model-based projection [19, Section 5], specialized to the setting of polyhedra. Komuravelli et al. [19] give a model-based projection for LRA based on the quantifier-elimination technique of Loos and Weispfenning [21]. Thus, our specialization is very similar to these prior methods.

**Definition 4.1**: _Let \(P\) be a polyhedron and \(d_{i}\) a dimension of \(P\). \(\mathrm{Proj}(P,d_{i})=\{m\mid m\models_{LRA}\exists d_{i}.P\}\)._

Let \(P\) be a polyhedron represented by a conjunction of equality and inequality constraints, and let \(x\) be a dimension of \(P\). The constraints of \(P\) can be divided and rewritten as follows:

* Let \(E_{x}\triangleq\{x=-\frac{f_{i}}{e_{i}}\mid e_{i}x+f_{i}=0\in P\}\), where each \(e_{i}\) is a constant and each \(f_{i}\) is \(x\)-free.
* Let \(L_{x}\triangleq\{x\geq-\frac{b_{i}}{a_{i}}\mid a_{i}x+b_{i}\geq 0\in P\}\) and \(L_{x}^{s}\triangleq\{x>-\frac{b_{i}^{\prime}}{a_{i}^{\prime}}\mid a_{i}^{\prime}x+b_{i}^{\prime}>0\in P\}\), where each \(a_{i},a_{i}^{\prime}\) is a constant greater than \(0\), and each \(b_{i},b_{i}^{\prime}\) is \(x\)-free.
* Let \(U_{x}\triangleq\{x\leq-\frac{d_{i}}{c_{i}}\mid c_{i}x+d_{i}\geq 0\in P\}\) and \(U_{x}^{s}\triangleq\{x<-\frac{d_{i}^{\prime}}{c_{i}^{\prime}}\mid c_{i}^{\prime}x+d_{i}^{\prime}>0\in P\}\), where each \(c_{i},c_{i}^{\prime}\) is a constant less than \(0\), and each \(d_{i},d_{i}^{\prime}\) is \(x\)-free.
* Other constraints, \(C\), not involving \(x\).

_Local Projection._ Let \(x\) be some dimension of a polyhedron \(P\) that is represented by equality constraints \(E_{x}\), lower-bound constraints \(L_{x}\) and \(L_{x}^{s}\), upper-bound constraints \(U_{x}\) and \(U_{x}^{s}\), and other constraints \(C\). Let \(m\) be a model of \(P\). The _local projection_ of \(x\) from \(P\) w.r.t. \(m\), denoted by \(\mathrm{LProj}(m,P,x)\), is a polyhedron defined by a set of constraints as follows:

* If \(E_{x}\) is not empty, then let \(x=e\in E_{x}\). Then \(\mathrm{LProj}(m,P,x)\) is \[E_{x}[x\mapsto e]\cup L_{x}[x\mapsto e]\cup L_{x}^{s}[x\mapsto e]\cup U_{x}[x\mapsto e]\cup U_{x}^{s}[x\mapsto e]\cup C.\]
* If \(E_{x}\), \(L_{x}\), and \(L_{x}^{s}\) are empty, then \(\mathrm{LProj}(m,P,x)\triangleq C\).
* Otherwise, let \(lb^{*}\) be a lower bound from \(L_{x}\cup L_{x}^{s}\) that is binding under \(m\) (i.e., one whose value at \(m\) is maximal), and let \(L_{x}^{\prime}\) denote the remaining lower bounds. If \(lb^{*}\) corresponds to a non-strict constraint, then \(\mathrm{LProj}(m,P,x)\) is \[\{lb\leq lb^{*}\mid lb\in L_{x}^{\prime}\}\cup\{lb^{*}\leq ub\mid x\leq ub\in U_{x}\}\cup\{lb^{*}<ub^{\prime}\mid x<ub^{\prime}\in U_{x}^{s}\}\cup C\]
* If \(lb^{*}\) corresponds to a strict constraint, then \(\mathrm{LProj}(m,P,x)\) is \[\{lb\leq lb^{*}\mid lb\in L_{x}^{\prime}\}\cup\{lb^{*}<ub\mid x\leq ub\in U_{x}\}\cup\{lb^{*}<ub^{\prime}\mid x<ub^{\prime}\in U_{x}^{s}\}\cup C\]

The idea of a local projection is identical to a full projection, except that in local projection we only consider the lower bound \(lb^{*}\) that is binding with respect to the given model. In general there are other models with different binding lower bounds, so a full projection needs to consider these alternative cases. However, because there are only finitely many possible binding lower bounds, local projection finitely covers the full projection. These ideas are formally captured by Lem. 4.2. For a more detailed comparison of local projection versus full projection see the proof of the lemma.

**Lemma 4.2**: _Let \(P\) be a polyhedron. For a model \(m\models P\) and a dimension \(x\), the following are true:_

1. \(m\models\mathrm{LProj}(m,P,x)\)
2. \(\mathrm{LProj}(m,P,x)\models\mathrm{Proj}(P,x)\)
3. \(\{\mathrm{LProj}(m,P,x)\mid m\models P\}\) _is a finite set_

Fig. 2 gives a geometric picture of local projection and projection. Consider Fig. 2(a), where the goal is to project out the \(z\) dimension. Take the red region for example. Any model in the red region has the lower-front-facing triangle as a binding constraint; therefore, locally projecting to the \(x\)-\(y\) plane yields the red triangle. The union of the red, gray, olive, and blue regions gives the full projection. Fig. 2(b) is a similar diagram, but for projecting out \(y\) then \(z\). Fig. 2(c) shows the result of projecting out \(z\) then \(y\). The result is a line segment in the \(x\) dimension. In Figs. 2(b) and 2(c), the resulting projections are depicted as being slightly displaced from the \(x\)-axis for clarity. Local projection can also be used to project out multiple dimensions by projecting out each dimension sequentially.
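Before turning to multiple dimensions, here is a small sketch of single-variable local projection for the non-strict, equality-free case, using exact rational arithmetic (the encoding of constraints is our own; AutoBound's data structures differ).

```python
# Sketch of LProj(m, P, x) for non-strict inequalities only, assuming no
# equality constraints. A constraint {v: c, ..., 1: b} encodes sum(c*v)+b >= 0;
# the model m maps variable names to Fractions.
from fractions import Fraction

def scale(c, k):
    return {v: k * a for v, a in c.items()}

def add(c1, c2):
    return {v: c1.get(v, Fraction(0)) + c2.get(v, Fraction(0))
            for v in set(c1) | set(c2)}

def lproj(m, ineqs, x):
    lower, upper, other = [], [], []
    for c in ineqs:
        a = c.get(x, Fraction(0))
        rest = {v: k for v, k in c.items() if v != x}
        if a > 0:
            lower.append((a, rest))    # encodes the lower bound x >= -rest/a
        elif a < 0:
            upper.append((a, rest))    # encodes the upper bound x <= -rest/a
        else:
            other.append(c)
    if not lower:                      # no lower bounds: drop x's upper bounds
        return other
    def bound_at(a, rest):             # value of -rest/a under the model m
        val = sum(k * m[v] for v, k in rest.items() if v != 1)
        return -(val + rest.get(1, Fraction(0))) / a
    a0, r0 = max(lower, key=lambda t: bound_at(*t))   # binding lower bound lb*
    out = list(other)
    for a, r in lower:                 # lb <= lb*
        out.append(add(scale(r, Fraction(1) / a), scale(r0, Fraction(-1) / a0)))
    for a, r in upper:                 # lb* <= ub
        out.append(add(scale(r0, Fraction(1) / a0), scale(r, Fraction(-1) / a)))
    return out

# P: x >= 0, y - x >= 0, 2 - y >= 0; project out y w.r.t. the model (1, 1).
P = [{'x': Fraction(1)}, {'y': Fraction(1), 'x': Fraction(-1)},
     {'y': Fraction(-1), 1: Fraction(2)}]
print(lproj({'x': Fraction(1), 'y': Fraction(1)}, P, 'y'))
# yields constraints equivalent to 0 <= x <= 2, the full projection here
```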
**Definition 4.3**.: _Given a list of dimensions, we use \(\operatorname{LProj}(m,P,[d_{1},\ldots,d_{k}])\) to denote \(\operatorname{LProj}(m,\operatorname{LProj}(m,\ldots,\operatorname{LProj}(m,P,d_{1}),\ldots,d_{k-1}),d_{k})\). Similarly for \(\operatorname{Proj}(P,[d_{1},\ldots,d_{k}])\)._

Crucially, locally projecting out a set of variables has the same relationship to the full projection of the same variables as we have in Lem. 4.2 for locally projecting out a single variable.

**Theorem 4.4**.: _Let \(P\) be a polyhedron. For a model \(m\models P\) and a list of dimensions \([d_{1},\ldots,d_{k}]\), the following are true:_

1. \(m\models\operatorname{LProj}(m,P,[d_{1},\ldots,d_{k}])\)
2. \(\operatorname{LProj}(m,P,[d_{1},\ldots,d_{k}])\models\operatorname{Proj}(P,[d_{1},\ldots,d_{k}])\)
3. \(\{\operatorname{LProj}(m,P,[d_{1},\ldots,d_{k}])\mid m\models P\}\) _is a finite set_

It is well known that, for full projection, the order in which the dimensions are projected does not matter; in the case of local projection, however, the order _does_ matter. To see why, compare Figs. 2(b) and 2(c). In Fig. 2(b) the red, green, gray, and blue line segments are the possible results from projecting out \(y\) then \(z\). However, in Fig. 2(c), either of the olive line segments (the ones furthest from the axis) are possible results from projecting out \(z\) then \(y\). There is no segment in Fig. 2(b) corresponding to either of the olive line segments. However, Thm. 4.4 ensures that the set of possible local projections is finite, and that they exactly cover the full projection.

Figure 2. Local projections of a polyhedron.

### Polyhedral Reduction

In this section we present our algorithm for optimally reducing a polyhedron \(P\) with respect to a linear term \(t\) and an order on dimensions \(\ll\). Alg. 2 will produce a bound \(t^{*}\) over dimensions \(d_{j},\ldots,d_{k}\) such that \(P\models_{LRA}t\leq t^{*}\) and \(t^{*}\) is optimal with respect to \(\ll\). That is, for any other \(b\) with \(P\models_{LRA}t\leq b\), \(b\) is an expression over the dimensions \(d_{i},\ldots,d_{k}\) with \(d_{j}\ll d_{i}\). Another way to think about the optimality is that the "leading dimension" of \(t^{*}\) is minimal.

Fig. 3 gives a geometric representation of Alg. 2. Suppose that we wish to upper-bound some term \(t\) that is an expression over \(x\) and \(y\), under the assumption that \(x\) and \(y\) are restricted to the (unbounded) dark-blue region. Let \(T=t\). Then the floating orange plane is the term to optimize. Suppose that we favor an upper bound containing \(x\) over an upper bound containing \(y\). In Fig. 3, the optimal upper bound corresponds to the constraint \(u_{1,x}\). The algorithm lazily explores the polyhedron by getting a model of the floating orange polyhedron. Suppose that the first model sampled by Alg. 2, say \(m_{1}\), has an assignment for \(T\) smaller than the constant \(u_{2,c}\) shown in Fig. 3. Alg. 2 calls Alg. 1 with \(m_{1}\). Alg. 1 explores the local projection of the floating orange plane with respect to the model \(m_{1}\). Note that on the initial call to Alg. 1, \(T\) always has an upper bound, namely \(t\). Thus Alg. 1 will successively project out dimensions until \(T\) is unbounded. In the case of Fig. 3, the dimension \(y\) is locally projected out first, yielding the orange region in the \(x\)-\(T\) plane. This region does have an upper bound, \(u_{1,x}\), which can simply be read off the constraint representation of the orange region. However, at this point it is unknown if there is another bound in fewer dimensions. So, Alg. 1 continues and locally projects out the \(x\) dimension, obtaining the interval \(l_{1,c}\leq T\leq u_{2,c}\). There are no more dimensions to project, so Alg. 1 returns the conjectured upper bound of \(u_{2,c}\). Note that this is _not_ a true bound. That is, there is a model of the floating orange plane that is strictly larger than \(u_{2,c}\).
This situation means that the while loop in Alg. 2 will execute again with a new model \(m_{2}\) that is still within the polyhedron, but is strictly larger than \(u_{2,c}\). Thus, it will pass off this new model to Alg. 1, to get new conjectured bounds. Alg. 1 will, again, project out the dimensions \(y\) then \(x\) to obtain the interval \(u_{2,c}\leq T\). However, in this case projecting out \(y\) then \(x\) gives an interval that is unbounded above. Thus, Alg. 1 will go back one step, to when \(T\) still had an upper bound, namely \(T\leq u_{1,x}\). Note that the upper bound \(u_{1,x}\) is an upper bound containing the variable \(x\). Because \(u_{1,x}\) is the most optimal upper bound for \(T\) using the model \(m_{2}\), Alg. 1 will return \(u_{1,x}\). In this case \(u_{1,x}\) _is_ a true upper bound: there is no model in which \(T\) is strictly greater than \(u_{1,x}\). For this reason, the loop in Alg. 2 will terminate with \(U=\{u_{2,c},u_{1,x}\}\). For the loop to terminate, one of these bounds must be true. So, the algorithm finishes by filtering \(U\) by which ones are true bounds, that is, which \(ub\in U\) have \(P\models_{LRA}T\leq ub\). This check can easily be accomplished by an SMT solver. In short, such a bound must exist in \(U\) because any "upper-bound" face of the full-projection polyhedron with minimal dimensions \(d_{i},\ldots,d_{k}\) is a true upper bound on \(T\). Moreover, Alg. 1 only returns "upper-bound" faces of local projections that are in \(d_{i},\ldots,d_{k}\) dimensions or fewer. If the upper bound Alg. 1 returns is in _fewer_ dimensions then it is not a true bound, and another model will be sampled in Alg. 2. Since local projection finitely covers the full projection, eventually a model will be sampled that produces a face of the full projection in \(d_{i},\ldots,d_{k}\). For a more detailed explanation see the proof of Thm. 4.5. For this particular choice of \(m_{1}\) and \(m_{2}\), Alg. 2 took two rounds to find a true upper bound. However, if instead \(m_{2}\) had been selected as the first model, then the true bound would have been found and returned in only one round. For this reason, the performance of Alg. 2 is heavily dependent on the models returned by the SMT solver.

Figure 3. Reduction by local projection.

Theorem 4.5.: _Let \(t^{*}\) be a term produced by Alg. 2 for inputs \(P\), \(t\), and \(\ll\). Let \(d_{1},\ldots,d_{k}\) be the dimensions of \(P\) and \(t\) sorted greatest-to-smallest with respect to \(\ll\). Let \(d_{j},\ldots,d_{k}\) be the dimensions used in \(t^{*}\). Then the following are true of \(t^{*}\):_

1. \(P\models_{\mathit{LRA}}t\leq t^{*}\)
2. \(t^{*}\) _is optimal in the sense that for any other_ \(b\) _with_ \(P\models_{\mathit{LRA}}t\leq b\)_,_ \(b\) _is an expression over the dimensions_ \(d_{i},\ldots,d_{k}\) _with_ \(d_{j}\ll d_{i}\)_._

Thm. 4.5 says Alg. 2 solves the OSB problem for polyhedra and "dimensional orders". Furthermore, due to Farkas' lemma, an optimal bound can also be found with respect to polyhedral cones.

Corollary 4.5.1.: _Let \(Q\) be a polyhedral cone over \(\mathbb{K}^{k}\) (\(\mathbb{Q}^{k}\) or \(\mathbb{R}^{k}\)). Let \(\ll\) be an order on the dimensions of \(\mathbb{K}^{k}\) and \(t\) a vector in \(\mathbb{K}^{k}\). Alg. 2 can be used to solve the problem of finding a \(t^{*}\) with the following properties:_

1. \(t^{*}-t\in Q\)
2. \(t^{*}\) _is optimal in the sense that for any other_ \(b\) _with_ \(b-t\in Q\)_,_ \(b\) _is an expression over the dimensions_ \(d_{i},\ldots,d_{k}\) _with_ \(d_{j}\ll d_{i}\)_._
#### 4.2.1. LP Reduction

Our main approach for reducing with respect to a polyhedral cone is Alg. 2. However, an alternative method based on linear programming is also possible. The idea is based on the observation that given a polyhedral cone \(Q=[q_{1},\ldots,q_{r},1]\) and terms \(t\) and \(t^{\prime}\) over dimensions \(d_{1},\ldots,d_{n}\) and constant dimension \(d_{n+1}\), it is possible to check whether \(t^{\prime}-t\in Q\) with an LP query. That is, let \(t^{c}_{j}\), \(t^{\prime c}_{j}\), and \(q^{c}_{i,j}\) denote the coefficients of \(t\), \(t^{\prime}\), and \(q_{i}\) on the \(j\)'th dimension. Then \(t^{\prime}-t\in Q\) if and only if there are non-negative \(\lambda_{1},\ldots,\lambda_{r+1}\) with \(t^{\prime c}_{j}-t^{c}_{j}=\lambda_{1}q^{c}_{1,j}+\ldots+\lambda_{r}q^{c}_{r,j}\) for each \(1\leq j\leq n\) and \(t^{\prime c}_{n+1}-t^{c}_{n+1}=\lambda_{1}q^{c}_{1,n+1}+\ldots+\lambda_{r}q^{c}_{r,n+1}+\lambda_{r+1}\). This system can be used to decide if a given concrete \(t^{\prime}\) has the property \(t^{\prime}-t\in Q\); moreover, the system can also represent the space of \(t^{\prime}\) with this property by leaving each \(t^{\prime c}_{j}\) undetermined, i.e., by considering the linear system over the variables \(t^{\prime c}_{1},\ldots,t^{\prime c}_{n+1},\lambda_{1},\ldots,\lambda_{r+1}\). Therefore, it is possible to reduce the OSB problem over a polyhedral cone (and consequently a polyhedron) to linear programming with a lexicographic objective function [17]. Without loss of generality, assume the dimensions \(d_{1},\ldots,d_{n}\) are ordered in terms of preference. An optimal \(t^{*}\) can be found by asking whether this system has a solution with \(t^{*,c}{}_{1}=0,\ldots,t^{*,c}{}_{n}=0\). If not, then we see if there is a solution with \(t^{*,c}{}_{1}=0,\ldots,t^{*,c}{}_{n-1}=0\), and then \(t^{*,c}{}_{1}=0,\ldots,t^{*,c}{}_{n-2}=0\), etc., until a solution is found. We found in practice that Alg. 2 was much faster (see §7).
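As a concrete sketch of this LP-based alternative, the code below (assuming SciPy; per footnote 14, our actual experiments used Z3's Optimize module instead) searches for the longest prefix of preferred dimensions that can be zeroed out, and returns one witness bound.

```python
# Sketch of LP reduction (Sec. 4.2.1), assuming SciPy; not AutoBound's code.
import numpy as np
from scipy.optimize import linprog

def lp_reduce(gens, t):
    """gens: generator vectors of Q over dims d_1..d_n plus a trailing constant
    coordinate; t: coefficient vector of the term to bound. Returns a bound
    t* = t + sum(lambda_i * q_i) that avoids as many leading dims as possible."""
    A = np.array(gens, dtype=float).T         # rows = coordinates, cols = gens
    n = A.shape[0] - 1                        # last coordinate is the constant
    t = np.asarray(t, dtype=float)
    for m in range(n, 0, -1):                 # try to zero out d_1..d_m
        res = linprog(c=np.zeros(A.shape[1]),
                      A_eq=A[:m, :], b_eq=-t[:m],
                      bounds=[(0, None)] * A.shape[1])
        if res.status == 0:                   # feasible: t* avoids d_1..d_m
            return t + A @ res.x
    return t                                  # no dimension can be eliminated

# P: 0 <= x <= 1, so P* has generators x, 1 - x, and 1 over (x, const).
print(lp_reduce([[1, 0], [-1, 1], [0, 1]], [1, 0]))  # [0. 1.]: x <= 1
```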
### Cone of Polynomials

In this section, we extend the results from the previous section to the case of a cone of polynomials.

Definition 4.6.: _Let \(p_{1},\ldots,p_{k}\), \(q_{1},\ldots,q_{r}\) be polynomials. The cone of polynomials \(C\) generated by \(p_{1},\ldots,p_{k}\) and \(q_{1},\ldots,q_{r}\) is the set_

\[C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]=\{p+q\mid p\in\langle p_{1},\ldots,p_{k}\rangle,q\in[q_{1},\ldots,q_{r},1]\}.\]

At first glance there seems to be a slight mismatch in the definition. We defined polynomial ideals as consisting of polynomials, whereas polyhedral cones are defined as a collection of vectors. However, there is no issue because polynomials can be viewed as vectors in the infinite-dimensional vector space that has monomials as basis vectors. A cone of polynomials generated by \(p_{1},\ldots,p_{k}\) and \(q_{1},\ldots,q_{r}\) gives a sound set of consequences for the assumptions \(p_{1}(\mathbf{x})=0,\ldots,p_{k}(\mathbf{x})=0\) and \(q_{1}(\mathbf{x})\geq 0,\ldots,q_{r}(\mathbf{x})\geq 0\).

Lemma 4.7 (Soundness).: _Let \(g\in C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]\). Then,_

\[(\bigwedge_{i=1}^{k}p_{i}(\mathbf{x})=0)\wedge(\bigwedge_{j=1}^{r}q_{j}(\mathbf{x})\geq 0)\models_{OF}g(\mathbf{x})\geq 0.\]

Lem. 4.7 is the main reason for our interest in cones of polynomials. Furthermore, we will show that we can perform reduction on a cone of polynomials. However, we take this moment to discuss the power of this object and the issue of completeness. In the linear case, by Farkas' lemma, polyhedral cones are complete with respect to a conjunction of linear inequalities; however, there is no such analogue for the case of a cone of polynomials.7 That is, cones of polynomials are _incomplete_ for non-linear arithmetic.

Footnote 7: In the case of real arithmetic, positivstellensatz theorems are the analogue of Farkas’ lemma for a different kind of cone object. However, they exhibit computational difficulties. See §8 for more discussion.

Example 4.8.: _From an empty context, we have \(\models_{NRA}(x^{n})^{2}\geq 0\) for any \(n\in\mathbb{N}\). If cones of polynomials were complete we would need to have \((x^{n})^{2}\) in the "empty cone" \(\langle 0\rangle+[1]\)._

On the other hand, because of the inclusion of the ideal, a cone of polynomials does hold onto some non-linear consequences.

Example 4.9 (Extension of Ex. 3.2).: \(x^{2}-2x+2\in\langle x-1-t,y-1-t^{2}\rangle+[y,1]\):

\[x^{2}-2x+2=(x-1+t)(x-1-t)+(-1)(y-1-t^{2})+(1)y+(0)(1).\]

_Thus, a cone of polynomials can establish the consequence \(x^{2}-2x+2\geq 0\) from the assumptions \(x-1-t=0\), \(y-1-t^{2}=0\), \(y\geq 0\)._

The use of cones of polynomials balances expressiveness and computational feasibility. Before we give the method for reducing a polynomial with respect to a cone of polynomials, we need to introduce the idea of a _reduced_ cone of polynomials. The only difference is that in the case of a reduced cone we require the \(q_{1},\ldots,q_{r}\) polynomials to be reduced with respect to a Grobner basis for the ideal \(\langle p_{1},\ldots,p_{k}\rangle\).

Definition 4.10.: _Let \(C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]\) be a cone of polynomials and \(\ll\) a monomial order. \(C\) is reduced with respect to \(\ll\) if \(p_{1},\ldots,p_{k}\) is a Grobner basis for \(\langle p_{1},\ldots,p_{k}\rangle\) and for every \(q_{i}\) we have that no monomial of \(q_{i}\) is divisible by any of \(\operatorname{LM}(p_{1}),\ldots,\operatorname{LM}(p_{k})\)._

**Theorem 4.11**.: _Let \(C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]\) be a cone of polynomials. Let \(B=\{g_{1},\ldots,g_{s}\}\) be a Grobner basis for \(\langle p_{1},\ldots,p_{k}\rangle\) and let \(t_{i}=\textbf{red}_{B}(q_{i})\) for each \(q_{i}\). Then \(C^{\prime}=\langle g_{1},\ldots,g_{s}\rangle+[t_{1},\ldots,t_{r},1]\) is a reduced cone with \(C=C^{\prime}\)._

#### 4.3.1. Reduction

With the notion of a reduced cone in hand, we immediately arrive at a method to reduce a polynomial \(t\) by a cone \(C\) with respect to a monomial order \(\ll\). All we need to do is reduce \(t\) by the equality part of \(C\), i.e., the Grobner basis, and then reduce the result by the polyhedral-cone part, using the method from §4.2. More explicitly, given an arbitrary monomial order \(\ll\), cone of polynomials \(C\), and polynomial \(t\), we can reduce \(t\) by \(C\), obtaining \(t^{*}\), using the following steps:

1. From \(C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]\), compute a Grobner basis \(B=\{g_{1},\ldots,g_{s}\}\) for \(\langle p_{1},\ldots,p_{k}\rangle\), and for each \(q_{i}\) compute \(t_{i}=\textbf{red}_{B}(q_{i})\). From \(B\) and \(\{t_{1},\ldots,t_{r}\}\) construct the reduced cone \(C^{\prime}=\langle g_{1},\ldots,g_{s}\rangle+[t_{1},\ldots,t_{r},1]\).
2. "Equality reduce" \(t\) by the Grobner basis \(B\). That is, let \(t^{\prime}=\textbf{red}_{B}(t)\).
3. "Inequality reduce" \(t^{\prime}\) with Alg. 2 to obtain the result \(t^{*}\). That is, treat the polynomials of \(\{t^{\prime},t_{1},\ldots,t_{r}\}\) as vectors in the finite-dimensional subspace of \(\mathbb{K}[\mathbf{X}]\) spanned by the monomials present in \(\{t^{\prime},t_{1},\ldots,t_{r}\}\), and run Alg. 2.

**Definition 4.12**.: _We denote the process of reducing a polynomial \(t\) by a cone of polynomials \(C\) with respect to a monomial order \(\ll\) by \(\textbf{cred}_{C}(t)\)._

**Theorem 4.13**.: _Let \(t\) be a polynomial, \(C\) a cone of polynomials, and \(\ll\) a monomial order. Let \(t^{*}=\textbf{cred}_{C}(t)\). \(t^{*}\) has the following properties:_

1. \(t^{*}-t\in C\)
2. \(t^{*}\) _is optimal in the sense that for any other_ \(b\) _with_ \(b-t\in C\)_,_ \(\operatorname{LM}(b)\gg\operatorname{LM}(t^{*})\)_._

**Example 4.14**.: _Consider the example from §2. In that example, we said that we had the equations \(x-u_{4}\) and \(y-u_{5}\) in the basis of the ideal, and the inequalities \(vbu_{1}-u_{4}\geq 0\), \(va2u_{2}+vbu_{2}-vbu_{1}+vu_{2}\geq 0\), and \(u_{5}-va2u_{2}-vbu_{2}+1\geq 0\) in the basis of the polyhedral cone. The ideal and polyhedral cone created for this example have many more equations and inequalities, but these are the ones that are relevant for reduction. Using the new terminology, in §2 we reduced \(t=x-y\) by the reduced cone_

\[C=\langle x-u_{4},y-u_{5},\ldots\rangle+[vbu_{1}-u_{4},va2u_{2}+vbu_{2}-vbu_{1}+vu_{2},u_{5}-va2u_{2}-vbu_{2}+1,\ldots,1].\]

_To reduce \(x-y\) by \(C\) we first equality reduce \(x-y\) by the ideal, and obtain \(u_{4}-u_{5}\). Then, to reduce \(u_{4}-u_{5}\) by the polyhedral cone we treat each unique monomial as a separate dimension, and run Alg. 2. For example, we might create the map_

\[\{d_{1}\mapsto u_{4},d_{2}\mapsto u_{5},d_{3}\mapsto vbu_{1},d_{4}\mapsto va2u_{2},d_{5}\mapsto vbu_{2},d_{7}\mapsto vu_{2},\ldots\}.\]

_We then reduce \(d_{1}-d_{2}\) by the polyhedron \(P\triangleq\{d_{3}-d_{1}\geq 0,d_{4}+d_{5}-d_{3}+d_{7}\geq 0,d_{2}-d_{4}-d_{5}+1\geq 0,\ldots\}\) and get the result \(d_{7}+1\), or equivalently, \(vu_{2}+1\). By Thm. 4.13, \(vu_{2}+1-(x-y)\in C\), and \(vu_{2}+1\) is optimal. Furthermore, by Lem. 4.7, these equalities and inequalities entail \(x-y\leq vu_{2}+1\)._

_Optimality._ Thm. 4.13 gives optimality with respect to a monomial order \(\ll\), which is a total order on monomials. However, when extending \(\ll\) to polynomials, the comparison becomes a _pre-order_. For example, \(x+y\) and \(x\) have the same leading monomial if \(x\gg y\). Furthermore, coefficients are not compared in the monomial order (for example, \(5x\) and \(2x\) are equivalent in the monomial order). For this reason, there can be multiple distinct optimal terms that satisfy the conditions of Thm. 4.13. The reduction method is not guaranteed to return all of the optimal terms. Thm. 4.13 guarantees that the reduction will return one of the optimal bounds.
## 5. Saturation

In this section, we give a heuristic method for the non-linear OSB problem. The idea is that from a formula \(\phi\) we extract implied polynomial equalities and polynomial inequalities, and construct a cone of polynomials from the result. We illustrate the process using the example from §2.

_Purification._ The first step is to _purify_ all the terms in the formula. For each non-polynomial function symbol, we introduce a fresh variable, replace all occurrences of the function with the introduced symbol, and remember the assignment in a foreign-function map. We perform this process on the function arguments as well. Thus, the foreign-function map consists of assignments \(u\mapsto f(p)\), where \(f\) is a non-polynomial function symbol, but \(p\) has also been purified and is therefore a polynomial. Purifying the example from §2 results in the following formula and foreign-function map:

\[\phi^{\prime} \triangleq x=u_{4}\wedge y=u_{5}\wedge a2=u_{6}\wedge e^{\prime}=e+a\wedge b^{\prime}=b+a2\wedge a,b,e,v\geq 0\]
\[TM \triangleq\{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\lfloor vbu_{1}\rfloor,u_{5}\mapsto\lfloor vb^{\prime}u_{2}\rfloor,u_{6}\mapsto\lfloor abu_{3}\rfloor\}\]

As mentioned in §2, we can also consider additional axioms satisfied by the non-polynomial functions. For example, \(e\geq 0\implies\frac{1}{e}\geq 0\). Formally, we consider these instantiated axioms as being provided by the user in the original formula; however, for convenience, in our implementation and experiments, we use templates and instantiate these axioms automatically using the foreign-function map, and then conjoin the result onto the original formula.

Once the formula has been purified and a function map created, the task becomes to extract an implied cone from the combination of the function map and the purified formula. We refer to this process as _saturation_. Within a saturation step, two flavors of implied equalities and inequalities can be produced. There are ones that are (linearly) implied by the formula, and there are others that are _implied_ by the cone, but not _in_ the cone. For example, consider the cone \(C_{1}=\langle 0\rangle+[x,1]\). \(x\in C_{1}\) corresponds to \(x\geq 0\), which implies \(x^{2}\geq 0\); however, \(x^{2}\notin C_{1}\). If we add \(x^{2}\) to \(C_{1}\), we get \(C_{2}=\langle 0\rangle+[x^{2},x,1]\) with \(C_{1}\subseteq C_{2}\). Such a step, where we add implied equalities and inequalities to a cone that are not members of the cone, is referred to as a _strengthening_ step. Fig. 4 gives an overview of our saturation method. The process is iterative until no more equalities or inequalities can be added to the cone of polynomials.

### Equality Saturation

This subsection covers steps (1), (2), and (3) of Fig. 4. We first assume we have some new implied equations \(\{p_{1}=0,\ldots,p_{l}=0\}\) and inequalities \(\{q_{1}\geq 0,\ldots,q_{r}\geq 0\}\), which have been produced from a yet-to-be-explained method. We take the equations, add them to an ideal, and compute a Grobner basis; we take the inequalities and add them to a polyhedral cone. For the example from §2, suppose that we are given the equations \(x=u_{4}\), \(y=u_{5}\), \(a2=u_{6}\), \(e^{\prime}=e+a\), \(b^{\prime}=b+a2\), \(eu_{1}=1\), \(e^{\prime}u_{2}=1\), \(eu_{3}=1\), and no inequalities. We add the equalities to an ideal and compute a Grobner basis, which for this example would yield: \(\langle x-u_{4},y-u_{5},a2-u_{6},e^{\prime}-e-a,b^{\prime}-b-a2,eu_{1}-1,eu_{2}+au_{2}-1,eu_{3}-1\rangle\).
We have now finished step (1), with the following cone, map, and formula with instantiated axioms:

\[\phi^{\prime\prime} \triangleq a,b,e,v\geq 0\wedge(e\geq 0\implies u_{1}\geq 0)\wedge(\mathit{vbu_{1}}\geq 0\implies u_{4}\geq 0)\wedge\ldots\]
\[C \triangleq\langle x-u_{4},y-u_{5},a2-u_{6},e^{\prime}-e-a,b^{\prime}-b-a2,eu_{1}-1,eu_{2}+au_{2}-1,eu_{3}-1\rangle+[1]\]
\[TM \triangleq\left\{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\left\lfloor\mathit{vbu_{1}}\right\rfloor,u_{5}\mapsto\left\lfloor\mathit{vb^{\prime}u_{2}}\right\rfloor,u_{6}\mapsto\left\lfloor\mathit{abu_{3}}\right\rfloor\right\}\]

Step (2) is a strengthening step, where we perform a type of congruence-closure process on the ideal and the foreign-function map. Consider the running example. In the foreign-function map we have \(u_{1}\mapsto e^{-1}\) and \(u_{3}\mapsto e^{-1}\). By the axiom \(w=w^{\prime}\implies f(w)=f(w^{\prime})\), for any function \(f\), it is clear that \(u_{1}=u_{3}\). However, \(u_{1}-u_{3}\) is _not_ a member of the ideal. The purpose of step (2) (Closure) is to find these equalities. Our closure algorithm works by considering each pair of assignments \(w\mapsto f(p)\) and \(w^{\prime}\mapsto f^{\prime}(p^{\prime})\) in the foreign-function map, where \(f\) and \(f^{\prime}\) are the same function symbol. To check if the arguments are equal we check whether \(p-p^{\prime}\) is a member of the ideal, which by Cor. 3.7.2 can be done by checking if the result of reducing \(p-p^{\prime}\) by the Grobner basis is \(0\). If the result is \(0\) then we have that the ideal entails \(p=p^{\prime}\), so \(f(p)=f^{\prime}(p^{\prime})\) and \(w=w^{\prime}\). In this case we add the new equality \(w-w^{\prime}\) (meaning \(w-w^{\prime}=0\)) into the ideal and compute a new Grobner basis. The new ideal might uncover new equalities of the map, so we have to check each pair of functions again until no new equalities are discovered.

Lemma 5.1.: _Closure terminates._

After equality saturation, we reduce the set of inequalities by the newly returned Grobner basis to keep the cone reduced in the sense of Defn. 4.10. This reduction is step (3) in Fig. 4. Returning to the running example, after step (3) we have the following cone, foreign-function map, and formula with instantiated axioms:

\[\phi^{\prime\prime} \triangleq a,b,e,v\geq 0\wedge(e\geq 0\implies u_{1}\geq 0)\wedge(\mathit{vbu_{1}}\geq 0\implies u_{4}\geq 0)\wedge\ldots\]
\[C \triangleq\langle x-u_{4},y-u_{5},a2-u_{6},e^{\prime}-e-a,b^{\prime}-b-a2,eu_{1}-1,eu_{2}+au_{2}-1,u_{3}-u_{1}\rangle+[1]\]
\[TM \triangleq\left\{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\left\lfloor\mathit{vbu_{1}}\right\rfloor,u_{5}\mapsto\left\lfloor\mathit{vb^{\prime}u_{2}}\right\rfloor,u_{6}\mapsto\left\lfloor\mathit{abu_{3}}\right\rfloor\right\}\]
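The closure test of step (2) is itself just repeated ideal-membership checking. A minimal sketch, assuming SymPy, with the foreign-function map represented as (symbol, argument-polynomial) pairs (our own encoding):

```python
# Sketch of closure (step (2)), assuming SymPy; not AutoBound's code.
from sympy import symbols, groebner, reduced

c, e, u1, u3 = symbols('c e u1 u3')
gens = (c, e, u1, u3)
ideal = [c - e]                          # the ideal entails c = e
TM = {u1: ('inv', e), u3: ('inv', c)}    # two occurrences of the same function

B = groebner(ideal, *gens, order='grevlex')
new_eqs = []
items = list(TM.items())
for i, (w, (f, p)) in enumerate(items):
    for w2, (f2, p2) in items[i + 1:]:
        # same function symbol, and arguments equal modulo the ideal?
        # (ideal membership via Cor. 3.7.2)
        if f == f2 and reduced(p - p2, list(B.exprs), *gens,
                               order='grevlex')[1] == 0:
            new_eqs.append(w - w2)       # learn w = w2; add it to the ideal
print(new_eqs)                           # [u1 - u3]
```

In the full procedure, each learned equality is added to the ideal, a new Grobner basis is computed, and the pairwise checks repeat until no new equalities appear (Lem. 5.1).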
### Consequence Finding

In step (4), our goal is two-fold. First, we want to make the polyhedral cone of inequalities salient in the sense of Defn. 3.11; second, we want to extract inequalities implied by the formula. For both of these goals, we generate a set of potential consequences and use a _linear_ SMT solver to filter the potential consequences down to a set of true consequences.

We first need to explain the relevance of making the polyhedral cone salient. Suppose that we had the cone of polynomials \(C=\langle p_{1},\ldots,p_{k}\rangle+[q_{1},\ldots,q_{r},1]\). The polyhedral cone \(P=[q_{1},\ldots,q_{r},1]\) represents inequalities; i.e., \(v\in P\) corresponds to \(v\geq 0\). If \(P\) is not salient, then there exists a polynomial \(f\in P\) with \(-f\in P\), implying that we can derive \(f\geq 0\) and \(-f\geq 0\), so \(f=0\); however, assuming that the cone was reduced, \(f\notin\langle p_{1},\ldots,p_{k}\rangle\). Ideal reasoning is stronger than polyhedral-cone reasoning, so in this situation if we created \(C^{\prime}=\langle f,p_{1},\ldots,p_{k}\rangle+Q\), where \(Q\) is \(P\) with \(f\) removed, we would have \(C\subsetneq C^{\prime}\). Fortunately, we can reformulate Lem. 3.12 to say that if there are no implied equations among \(\{q_{1},\ldots,q_{r}\}\) then there are no implied equations in all of \(P\). Thus, we can make \(P\) salient by asking if any of the equalities \(q_{1}=0,\ldots,q_{r}=0\) are implied. If so, these are newly discovered equations that will be added to the ideal in step (1) on the next saturation round.

Also, in this step we want to extract other inequalities that are implied by the formula. Consider the running example. We have that \(C\) implies \(e\geq 0\), and from \(\phi^{\prime\prime}\) we have \(e\geq 0\implies u_{1}\geq 0\). Thus, \(u_{1}\geq 0\), but \(u_{1}\) is _not_ a member of \(C\). To extract \(u_{1}\geq 0\) as a true consequence, we generate a finite list of potential consequences by adding each atom of the negation normal form of \(\phi^{\prime\prime}\) as a potential consequence. For example, from \(\phi^{\prime\prime}\) some potential consequences are \(e<0\), \(u_{1}\geq 0\), \(vbu_{1}<0\), and \(u_{4}\geq 0\). Note that even in the linear case this method is incomplete. That is, there are inequalities that are implied by the formula that are not present in the formula. For example, \((x\geq 0)\land(x\leq 1\implies y\geq x)\land(1\leq x\leq 2\implies y\geq-x+2)\land(x\geq 2\implies y\geq\frac{1}{2}x-1)\) entails \(y\geq 0\), but \(y\geq 0\) is not found in any atom of the negation normal form of the formula.8

Footnote 8: In this step we can also extract equalities. We can generate potential equalities along with inequalities by looking at the formula for equality atoms. However, if we only want to look for inequalities, saturation will still work, because inequalities that are actually equalities will get upgraded in some later round of saturation.

We collect both the potential equality consequences and the potential inequality consequences into a list, reduce them with the ideal, and then use a Houdini-like algorithm [15] with a _linear_ SMT solver to filter the potential consequences to a list of true consequences. We use a linear SMT solver as opposed to a non-linear one to avoid the aforementioned issues of non-linear reasoning. That is, we replace each monomial with a fresh variable before determining true consequences. For the running example, we do not yet have any known inequalities, so we do not have any inequalities to potentially upgrade to equalities; however, from the formula \(\phi^{\prime\prime}\) we generate the potential consequences \(Cons=\{a\geq 0,b\geq 0,e\geq 0,v\geq 0,e<0,u_{1}\geq 0,vbu_{1}<0,u_{4}\geq 0,\ldots,vbu_{1}-u_{4}\geq 0,\ldots\}\). We then filter \(Cons\) to \(Cons^{*}\) where

\[Cons^{*}=\{c\in Cons\mid\phi^{\prime\prime}\wedge x-u_{4}=0\wedge y-u_{5}=0\wedge a2-u_{6}=0\wedge\ldots\wedge u_{3}-u_{1}=0\models_{LRA}c\}.\]

\(Cons^{*}\) gives a set of equalities and inequalities that we will add to the cone of polynomials. However, we have to take one more step before we add the inequalities. For this example, \(Cons^{*}=\{e\geq 0,\ldots,vbu_{1}-u_{4}\geq 0,\ldots\}\).
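The filtering itself is a sequence of linear entailment queries. A minimal sketch with Z3's Python API (Z3 is the solver AutoBound uses, but the encoding here is our own simplification; monomials such as \(vbu_{1}\) are assumed to have already been replaced by fresh variables, and we check each candidate independently rather than batching the queries Houdini-style):

```python
# Sketch of consequence filtering (step (4)), assuming Z3's Python bindings.
from z3 import Real, Solver, Implies, Not, And, unsat

e, u1, m_vbu1, u4 = Real('e'), Real('u1'), Real('m_vbu1'), Real('u4')

# A fragment of phi'' after linearization (m_vbu1 stands for the monomial vbu1).
phi = And(e >= 0, Implies(e >= 0, u1 >= 0), Implies(m_vbu1 >= 0, u4 >= 0))
candidates = [u1 >= 0, u4 >= 0, e < 0]

true_consequences = []
for cand in candidates:
    s = Solver()
    s.add(phi, Not(cand))        # cand is implied iff phi /\ not(cand) is unsat
    if s.check() == unsat:
        true_consequences.append(cand)
print(true_consequences)         # [u1 >= 0]
```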
### Taking Products

Before we add inequalities to the cone, we "take products," which is a strengthening step indicated as step (5) in Fig. 4. The process of taking products is one of the main reasons our method gives interesting answers, and it is what leads to the main expense of the overall method. Suppose that from step (4) we obtain the inequalities \(x\geq 0,y\geq 0,z\geq 0\). In non-linear arithmetic, from these inequalities we have \(x^{2}\geq 0,xy\geq 0,xz\geq 0,y^{2}\geq 0,yz\geq 0,xz^{2}\geq 0,x^{3}\geq 0\), etc. Moreover, none of these "product" inequalities is a member of \([x,y,z]\). We could strengthen the cone by adding all of these product inequalities; however, the set of all such products is infinite. In our implementation, we heuristically cut off the depth of products at some parameterized value \(N\). That is, we assign each inequality \(w\geq 0\) in the cone a depth \(i\), which we denote by \(w\geq_{i}0\). Newly discovered inequalities, i.e., the ones produced from step (4), have a depth of 1. Product inequalities can be generated by the rule: \(w\geq_{i}0\) and \(z\geq_{j}0\) yields \(wz\geq_{i+j}0\). When we add inequalities to the cone, we make sure to add all products that have a depth less than or equal to \(N\). For example, suppose that the polyhedral cone \([x,y,x^{2},xy,y^{2}]\) corresponds to the following inequalities with indicated depths \(\{x\geq_{1}0,y\geq_{1}0,x^{2}\geq_{2}0,xy\geq_{2}0,y^{2}\geq_{2}0\}\), \(N=2\), and we have the newly discovered inequality \(z\geq_{1}0\). We make sure to take all products within the new inequalities (\(\{z^{2}\geq_{2}0\}\)), as well as products with the polyhedral basis (\(\{xz\geq_{2}0,yz\geq_{2}0\}\)), to obtain the new inequalities \(\{z\geq_{1}0,z^{2}\geq_{2}0,xz\geq_{2}0,yz\geq_{2}0\}\). Thus, after taking products and adding the results to the cone of polynomials we would have \(C=\langle p_{1},\ldots,p_{k}\rangle+[x,y,x^{2},xy,y^{2},z,z^{2},xz,yz,1]\). For the running example we use a saturation depth of \(N=3\), and would generate many inequalities from \(Cons^{*}\) from §5.2. Generated inequalities would include \(e\geq_{1}0,e^{2}\geq_{2}0,e^{3}\geq_{3}0,b\geq_{1}0,eb\geq_{2}0,vbu_{1}\geq_{3}0\), and many more, all of which would be added to the cone of polynomials.
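The product-generation rule is easy to sketch (assuming SymPy expressions; the dictionary maps each basis inequality to its depth, an encoding of our own):

```python
# Sketch of depth-bounded product saturation (step (5)); not AutoBound's code.
from sympy import symbols, expand

x, y, z = symbols('x y z')

def add_with_products(cone, new, N):
    """Add the depth-1 inequalities in `new`, plus all products of depth <= N."""
    frontier = dict(new)
    while frontier:
        cone.update(frontier)
        nxt = {}
        for p, i in list(frontier.items()):
            for q, j in list(cone.items()):
                if i + j <= N:
                    prod = expand(p * q)
                    if prod not in cone and prod not in nxt:
                        nxt[prod] = i + j
        frontier = nxt
    return cone

cone = {x: 1, y: 1, x**2: 2, x*y: 2, y**2: 2}
print(add_with_products(cone, {z: 1}, 2))
# adds z (depth 1) and z**2, x*z, y*z (depth 2), matching the example above
```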
### Putting it All Together

For the running example, after going through one round of saturation, we have the following:9

Footnote 9: To simplify the presentation, we have removed some clauses from \(\phi^{\prime\prime}\) that are no longer useful.

\[\phi^{\prime\prime} \triangleq(vbu_{1}\geq 0\implies u_{4}\geq 0)\wedge\ldots\]
\[C \triangleq\langle x-u_{4},y-u_{5},a2-u_{6},e^{\prime}-e-a,b^{\prime}-b-a2,eu_{1}-1,eu_{2}+au_{2}-1,u_{3}-u_{1}\rangle+[e,e^{2},e^{3},b,\ldots,vbu_{1},\ldots,1]\]
\[TM \triangleq\{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\lfloor vbu_{1}\rfloor,u_{5}\mapsto\lfloor vb^{\prime}u_{2}\rfloor,u_{6}\mapsto\lfloor abu_{3}\rfloor\}\]

However, going through the saturation steps again will generate even more information. For example, in \(\phi^{\prime\prime}\) we have \(vbu_{1}\geq 0\implies u_{4}\geq 0\). When we performed consequence finding in step (4), \(u_{4}\geq 0\) was not a true consequence, because it was not _linearly_ implied by \(C\) or \(\phi^{\prime\prime}\). However, by taking products of the inequalities \(v\geq 0\), \(b\geq 0\), and \(u_{1}\geq 0\), we now have \(vbu_{1}\) as a basis polynomial in the polyhedral cone. Therefore, \(u_{4}\geq 0\) now _is_ linearly implied by \(C\) and \(\phi^{\prime\prime}\), so running through the steps again would establish \(u_{4}\geq 0\). Similarly, it may be possible for new equations to be generated in steps (2) or (4) as well.

The saturation process starts with an "empty" cone \(C_{0}=\langle 0\rangle+[1]\). Then, at each step of saturation a stronger cone is produced that is still implied by the original formula. That is, starting from \(C_{0}\) a sequence of cones is produced \(C_{0},C_{1},\ldots,C_{n}\) where \(C_{i}\) is created by running one step of saturation on \(C_{i-1}\). Each cone is implied by the original formula together with the foreign-function map. Furthermore, \(C_{i-1}\subsetneq C_{i}\) for \(i\leq n\).

Theorem 5.2.: _For each \(i\) and every \(c\in C_{i}\),_

\[\phi\wedge\bigwedge_{w\mapsto f(p)\in TM}(w=f(p))\models_{UFOF}c\geq 0,\]

_where \(\models_{UFOF}\) denotes entailment w.r.t. the theory of ordered fields with uninterpreted functions._

What is key, though, is that saturation will eventually terminate. That is, \(C_{n}=C_{n+1}\) for some \(n\in\mathbb{N}\). The reason is two-fold. With respect to inequalities, the depth bound ensures that there are only finitely many candidates: the potential inequalities come from the finite formula, and products of a finite set up to a bounded depth again form a finite set. With respect to equalities, we can appeal to Hilbert's basis theorem (see §3) and say that the set of equalities must eventually stabilize, because every polynomial ideal is finitely generated.

For the running example, running the saturation procedure until it stabilizes (using a saturation bound of 3) produces a cone \(C=\langle x-u_{4},y-u_{5},\ldots\rangle+[vbu_{1}-u_{4},vu_{2}u_{6}+vbu_{2}-vbu_{1}+vu_{2},u_{5}-vu_{2}u_{6}-vbu_{2}+1,\ldots,1]\). Reducing \(x-y\) using the procedure from §4.3 gives the upper bound \(vu_{2}+1\).

## 6. Effective Degree Order

The methods of §4 can reduce a polynomial \(t\) by a cone of polynomials \(C\) with respect to an arbitrary monomial order. Thus, a user can take the results of saturation (a purified term \(t\) to rewrite, a cone of polynomials \(C\), and a foreign-function map \(TM\)) and supply whatever monomial order suits a downstream task. However, determining an appropriate order is a challenging task. In this section we present the _effective-degree order_, which is the monomial order we use in AutoBound.

Our definition of effective degree includes a set of variables \(W\), specified by the user, which indicates the variables to keep. That is, by specifying a set \(W\) the user is indicating that any term containing a variable not in \(W\) is worse than a term containing only \(W\) variables. In this way, the set \(W\) encodes a "variable restriction" on the preference of terms. Incorporating variable restriction into a traditional monomial order is straightforward. However, in our setting we have the additional challenge of function symbols in the foreign-function map \(TM\). Suppose we have the assignment \(x\mapsto f(p)\) in \(TM\). If \(p\) contains only \(W\) variables, then in the monomial order the variable \(x\) should also be thought of as referring to only \(W\) variables.
However, it may be the case that \(p\) contains some variables _not_ in \(W\), but there could be another polynomial \(p^{\prime}\) with \(p=p^{\prime}\) and \(p^{\prime}\) _does_ contain only \(W\) variables. For example, consider the assignment \(u_{2}\mapsto e^{\prime-1}\) in the running example, and let \(W=\{a,b,e,v\}\). \(e^{\prime}\) is _not_ in \(W\), but we have \(e^{\prime}=e+a\) implied by the ideal. In other words, we have \(u_{2}\mapsto(e+a)^{-1}\), and now \(u_{2}\) refers to a function containing only \(W\) variables. To solve this challenge we must first "reduce" the foreign-function map \(TM\) using the ideal in \(C\). The idea is that we have an initial definition of effective degree with variable restriction. We then reduce each polynomial \(p\) in each assignment in \(TM\). Each reduction may rewrite \(p\) into another \(p^{\prime}\) which is lower in effective degree and thus may contain only \(W\) variables. We then update the effective-degree order and repeat until the process stabilizes.10

Footnote 10: In our experiments we always use an effective-degree order, so this reduction step occurs during closure, discussed in §5.1, and not as a separate step.

Let \(W\) be a set of variables and \(TM:Y\rightarrow\operatorname{Term}\) a foreign-function map. Let \(m\) be a monomial over variables \(X\) with \(Y\subseteq X\). The _effective degree_ of a monomial \(m\), denoted \(\operatorname{eff.deg}_{TM}^{W}(m)\), is a pair of natural numbers and is defined recursively as follows (for a variable \(x\), a constant \(n\), and a product \(st\)):

\[\operatorname{eff.deg}_{TM}^{W}(x)\triangleq\begin{cases}(0,1)&\text{if }x\notin TM\text{ and }x\in W\\ (1,0)&\text{if }x\notin TM\text{ and }x\notin W\\ \operatorname{LM}(p)&\text{if }x\mapsto f(p)\in TM\end{cases}\]
\[\operatorname{eff.deg}_{TM}^{W}(n)\triangleq(0,0)\qquad\qquad\operatorname{eff.deg}_{TM}^{W}(st)\triangleq\operatorname{eff.deg}_{TM}^{W}(s)+\operatorname{eff.deg}_{TM}^{W}(t)\]

In the above, \(+\) is taken pointwise, and \(\operatorname{LM}(p)\) is \(\max(\operatorname{eff.deg}_{TM}^{W}(m_{1}),\ldots,\operatorname{eff.deg}_{TM}^{W}(m_{n}))\) with \(\max\) taken in lexicographic order w.r.t. the monomials \(m_{1},\ldots,m_{n}\) of \(p\). To make \(\operatorname{eff.deg}_{TM}^{W}(-)\) induce a total order we break ties between \(\operatorname{eff.deg}_{TM}^{W}(x)\) and \(\operatorname{LM}(p)\) for \(x\mapsto f(p)\in TM\) by taking \(x\gg\operatorname{LM}(p)\).

Example 6.1.: _Consider the un-reduced map \(TM\) from earlier, and let \(W=\{a,b,e,v\}\):_

\[TM\triangleq\ \{u_{1}\mapsto e^{-1},u_{2}\mapsto e^{\prime-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\lfloor vbu_{1}\rfloor,u_{5}\mapsto\lfloor vb^{\prime}u_{2}\rfloor,u_{6}\mapsto\lfloor abu_{3}\rfloor\}\]

\(\operatorname{eff.deg}_{TM}^{W}(u_{5})=\operatorname{eff.deg}_{TM}^{W}(v)+\operatorname{eff.deg}_{TM}^{W}(b^{\prime})+\operatorname{eff.deg}_{TM}^{W}(u_{2})=(0,1)+(1,0)+(1,0)=(2,1)\). _Now consider the reduced map \(TM^{\prime}\), using the ideal of \(C\):_

\[TM^{\prime}\triangleq\ \{u_{1}\mapsto e^{-1},u_{2}\mapsto(e+a)^{-1},u_{3}\mapsto e^{-1},u_{4}\mapsto\lfloor vbu_{1}\rfloor,u_{5}\mapsto\lfloor v(b+u_{6})u_{2}\rfloor,u_{6}\mapsto\lfloor abu_{3}\rfloor\}\]

\(\operatorname{eff.deg}_{TM^{\prime}}^{W}(u_{5})=(0,5)\).

Lemma 6.2.: _Reducing a foreign-function map \(TM\) using effective degree eventually stabilizes._

In §7 we use effective degree to mean the effective degree w.r.t. a reduced map \(TM\).
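A sketch of the recursion over SymPy expressions (our own encoding, not AutoBound's; here \(TM\) maps each purified variable to the argument polynomial of its foreign function, since the function symbol itself does not affect the degree):

```python
# Sketch of eff.deg (Sec. 6) over SymPy expressions.
from sympy import symbols

a, b, e, v, bp, u2, u5 = symbols("a b e v bp u2 u5")   # bp stands for b'

W = {a, b, e, v}
TM = {u2: bp,            # u2 -> inv(b'): argument is b'
      u5: v * bp * u2}   # u5 -> floor(v*b'*u2): argument is v*b'*u2

def eff_deg(m, TM, W):
    if m.is_Number:
        return (0, 0)
    if m.is_Symbol:
        if m in TM:
            return lm(TM[m], TM, W)
        return (0, 1) if m in W else (1, 0)
    if m.is_Pow:
        base, exp = m.args
        d0, d1 = eff_deg(base, TM, W)
        return (d0 * exp, d1 * exp)
    # a product: effective degrees of the factors add pointwise
    ds = [eff_deg(f, TM, W) for f in m.args]
    return (sum(d[0] for d in ds), sum(d[1] for d in ds))

def lm(p, TM, W):   # LM(p): lexicographic max over the monomials of p
    return max(eff_deg(term, TM, W) for term in p.as_ordered_terms())

print(eff_deg(u5, TM, W))   # (2, 1), matching Ex. 6.1 for the un-reduced map
```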
## 7. Experiments

We implemented our technique in a tool called AutoBound,11 handling arithmetic terms with floors and divisions.12 We use Z3 (de Moura and Bjørner, 2008) for SMT solving, and the FGb library (Faugère, 2010)13 for Grobner-basis calculations. Our evaluation of AutoBound targets the following questions, grouped into two categories:

1. **Performance**: These questions address the run-time behavior of AutoBound, its scalability, and how the time breaks down by component. We study:
 1. How much **overall time** does it take for AutoBound to produce a bound?
 2. How does the overall time **break down** into the time spent in the **saturation step** and in the **reduction step**?
 3. What is the **size of the (representation of the) cone** that AutoBound generates--how many **equalities** are produced to form the ideal, and how many **inequalities** over how many distinct **monomials** are produced to form the polyhedral cone? (The number of monomials produced gives the number of dimensions of the corresponding polyhedron, and the number of inequalities gives the number of constraints of the polyhedron.)
 4. How does Alg. 2 compare with the naive method of polyhedral-cone reduction that is based on linear programming, outlined in §4.2.1?14
 Footnote 14: In our experiments we used Z3’s Optimize module to solve the multi-objective linear program.
 5. How does AutoBound **scale** as the **saturation bound** is increased?
2. **Bound quality**: These questions examine the output bound that AutoBound synthesizes.
 1. Is the bound **optimal** w.r.t. the effective-degree order?
 2. Is the bound **desirable**--tight yet meaningful?
 3. What **saturation depth** suffices for AutoBound to synthesize desirable bounds?

We investigated these questions using a set of benchmarks that we collected from Solidity code provided to us by industry experts in smart-contract verification, which we modelled in our tool as a set of equational and inequational assumptions over initial and intermediate variables in the program. Overall, we find that AutoBound is able to produce optimal, meaningful bounds in a few seconds in nearly all the benchmarks. We begin with a brief description of the benchmarks.

### Benchmarks

We briefly describe each of the problems that we considered in our evaluation.

_Elastic_. This is the example that was described in detail in §2.

_Fixed-point arithmetic_. Here, non-integer numbers are represented by multiplying them with a scaling factor _sf_, i.e., a rational number \(x\in\mathbb{Q}\) is approximately represented by rounding \(x\cdot\textit{sf}\) to an integer. Multiplication and division need to correct for the double occurrence of the scaling factor, dividing by _sf_ upon multiplication and multiplying by _sf_ upon division. All these divisions are _integer_ divisions and so not exact. We are interested in what happens when a number represented by \(a\in\mathbb{Z}\) is multiplied and then divided by the same number represented by \(b\in\mathbb{N}\), that is, in the term \(t=\left\lfloor\frac{\left\lfloor\frac{ab}{\textit{sf}}\right\rfloor\cdot\textit{sf}}{b}\right\rfloor\), and seek a bound over any of the variables present, under the inequational assumptions \(b\geq 0,\textit{sf}\geq 0\). Note the nested structure of floor terms.
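Taking the term \(t\) as written above, a quick brute-force check (ours, not part of AutoBound) confirms on sampled integer inputs the bounds AutoBound reports for this benchmark in Tab. 2:

```python
# Numeric sanity check of the fixed-point benchmark's reported bounds:
# a - 1 - sf/b <= t <= a, for t = floor(floor(a*b/sf) * sf / b).
from fractions import Fraction
from itertools import product

for a, b, sf in product(range(30), range(1, 12), range(1, 12)):
    t = (((a * b) // sf) * sf) // b
    assert a - 1 - Fraction(sf, b) <= t <= a, (a, b, sf)
print("bounds hold on all sampled points")
```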
_Manual price_. There is an auction with a price that decreases linearly with time. The price at time \(\tilde{t}\in[\textit{startTime},\textit{endTime}]\) is computed by

\[\textit{drop}(\tilde{t})=\left\lfloor\frac{\textit{startPrice}-\textit{minimumPrice}}{\textit{endTime}-\textit{startTime}}\right\rfloor,\quad\textit{price}(\tilde{t})=\textit{startPrice}-\textit{drop}(\tilde{t})\cdot(\tilde{t}-\textit{startTime}).\]

To show that the price always resides in the expected interval, we want bounds on the term \(t=\textit{price}(\tilde{t})\), under the inequational assumptions \(\textit{startTime}\leq\tilde{t}\leq\textit{endTime}\) and \(\textit{minimumPrice}\leq\textit{startPrice}\), and over any of the variables present.

_Manual-price monotonicity._ In the context of the previous benchmark, to show monotonicity of the price with time, we are interested in an upper bound on the term \(t_{2}=\textit{price}(\tilde{t}_{2})-\textit{price}(\tilde{t}_{1})\) under the inequational assumptions \(\textit{startTime}\leq\tilde{t}_{1}\leq\tilde{t}_{2}\leq\textit{endTime}\) and \(\textit{minimumPrice}\leq\textit{startPrice}\), and over any of the variables present.

_Token._ In one implementation of an "elastic" smart contract by TrustToken15, one property of interest is that a client cannot increase their gain by splitting transactions; specifically, that _balanceSplit_, the balance of a client after \(\textit{withdraw}(x)\) ; \(\textit{withdraw}(x)\), is not greater than _balanceJoined_, the balance after executing \(\textit{withdraw}(2x)\) from the same initial state. Each _withdraw_ operation modifies the client's balance, including a fee for the operation, as well as the total supply held by the contract, in ways that involve several multiplications and divisions with another quantity called _funds_. We are interested in bounds for the term \(t=\textit{balanceSplit}-\textit{balanceJoined}\), under inequational assumptions that the client's balance and total supply are large enough to withdraw \(2x\), and that _funds_ is non-negative, over the variables \(x\), the initial values of the client's balance and supply, and _funds_ (but not over values of intermediate variables).

Footnote 15: [https://github.com/trusttoken](https://github.com/trusttoken)

_NIRN._ In a different implementation of an "elastic" smart contract by NIRN16, the property of interest is that when a client deposits an amount and receives a corresponding amount of shares (see §2), the number of shares does not vary much at different times for the same amount. Specifically, the question is what happens under an execution of \(\textit{shares1}=\textit{deposit}(x)\) ; \(\textit{shares2}=\textit{deposit}(x)\) to the term \(t=\textit{shares1}-\textit{shares2}\), under appropriate inequational assumptions that ensure that deposit operations execute successfully. We would like to find a bound for \(t\) defined over the variable \(x\) and the initial values alone (not over values of intermediate variables). Each _deposit_ operation modifies the total supply and the client's balance, in a calculation that is more involved than in the benchmark _Token_, described above.

Footnote 16: [https://github.com/indexed-finance/nirn](https://github.com/indexed-finance/nirn)
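Before turning to performance, note that the simpler benchmark properties are easy to cross-check mechanically. For instance, the manual-price monotonicity bound of Tab. 2 can be confirmed with Z3 by modelling the floor term _drop_ as an arbitrary non-negative constant (an over-approximation of its definition; the sketch is ours):

```python
# Sanity check (ours) of the manualPriceMonotone bound with Z3.
from z3 import Reals, Solver, And, Not

t1, t2, startTime, startPrice, drop = \
    Reals('t1 t2 startTime startPrice drop')
price1 = startPrice - drop * (t1 - startTime)
price2 = startPrice - drop * (t2 - startTime)

s = Solver()
s.add(And(startTime <= t1, t1 <= t2, drop >= 0),
      Not(price2 - price1 <= 0))   # try to violate the upper bound 0
print(s.check())                   # unsat: the price is monotonically decreasing
```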
### Performance

Table 1 displays the running time of AutoBound on each of the benchmarks with saturation depth 3. Time measurements are averaged over 5 runs, on an Ubuntu 18.04 VM on an X1 Carbon 5th Generation with an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz.

\begin{table} \begin{tabular}{||l|r|r|r|r|r|r||r|r||r||} \hline Benchmark name & \#eq & \#in & \#floors & \#c-eq & \#c-in & \#c-m & time (s) & cast (s) & reduce (s) & reduce-lp (s) \\ \hline \hline elastic & 5 & 3 & 3 & 8 & 814 & 413 & 2.4 & 1.4 & 1.0 & 8.5 \\ \hline fixedPointInt & 0 & 2 & 2 & 2 & 162 & 131 & 0.3 & 0.2 & 0.1 & 0.7 \\ \hline manualPrice & 3 & 4 & 1 & 4 & 163 & 168 & 0.5 & 0.4 & 0.1 & 0.6 \\ \hline manualPriceMonotone & 6 & 5 & 2 & 8 & 218 & 228 & 0.7 & 0.6 & 0.1 & 1.7 \\ \hline token & 10 & 4 & 3 & 13 & 815 & 288 & 2.0 & 1.1 & 0.9 & 2.8 \\ \hline nirn & 10 & 5 & 6 & 16 & 4057 & 1351 & 81.0 & 9.6 & 71.4 & 963.1 \\ \hline \end{tabular} \end{table} Table 1. Performance of AutoBound on the examples. \#eq and \#in are resp. the number of equality and inequality assumptions initially given (not including instantiated axioms); \#floors is the number of integer-division (floor of division) terms in the assumptions. \#c-eq and \#c-in are resp. the number of equalities/inequalities in the generated cone’s ideal/polyhedron; \#c-m is the number of distinct monomials in the inequalities. time is the overall execution time of AutoBound (all times in seconds). cast is the time to saturate the cone. reduce is the time to reduce w.r.t. the cone using local projection. reduce-lp is the time to reduce using linear programming instead of local projection. All experiments in this table were taken using a product saturation depth of 3.

#### 7.2.1. Overall time

In the majority of our benchmarks, AutoBound is able to produce the bound in less than a few seconds. The only exception is the NIRN benchmark, where the upper bound requires about a minute and a half, which we attribute to the larger number of floor and division terms, and accordingly a larger resulting cone.

#### 7.2.2. Breakdown

In most benchmarks, saturation time is slightly larger than reduce time. The only exception is NIRN, where reduce time dominates. This may indicate that the time to reduce increases more steeply than the time to saturate as the cone size increases. This is also apparent in the trends when the saturation depth is increased (shown in §E).

#### 7.2.3. Cone size

The size of the cone affects the running time of both saturation and reduction. The number of equalities in the ideal grows with the number of equational assumptions in the input, although not steeply. The number of inequalities in the cone grows rather steeply with the number of inequational assumptions, and especially with the number of integer divisions--these growth patterns are likely because floor and division terms trigger the instantiation of inequational axioms, which the saturation process further amplifies by taking products of inequational assumptions and inequational terms from axiom instantiations.

#### 7.2.4. Reduction with Linear Programming

In all benchmarks, performing the reduction step using linear programming is significantly slower than using our local-projection method, often by one or more orders of magnitude, justifying our use of local projection for reduction.

#### 7.2.5. Scalability with Saturation Depth

Fig. 5 shows how the overall time of AutoBound changes as the saturation depth is increased, using a timeout of 90 seconds (each data point averaged over 3 runs). We see that running time increases steeply, which is expected because the number of product terms grows exponentially with the saturation depth.
We are pleasantly surprised by the fact that AutoBound tolerates even larger saturation depths for some of the benchmarks. However, a saturation depth of 8 causes AutoBound not to terminate within the time limit on any of the benchmarks. Further data on the cone size and the run-time breakdown as the saturation depth is increased (up to a 10-minute timeout) appear in §E.

### Bound Quality

Tab. 2 presents the bounds we obtain for the benchmarks with saturation depth 3, as well as the bounds generated manually by a verification expert (where available). \(\star\) indicates that AutoBound has generated a bound we deem uninformative. As explained in §4, there are potentially multiple optimal upper bounds. Moreover, the technique can discover several upper bounds of the same effective degree in one run of Alg. 1; in AutoBound we simply return multiple bounds in that case. In the table, this arose in fixedPointInt (5 upper bounds), where it was easy to select the most appealing of them (the others involving additional terms); and NIRN (2 lower bounds), where both bounds were uninformative, as indicated in the table by \(\star\). The latter case we discuss separately, below in §7.3.2.

Figure 5. Total running time as the saturation depth is increased. Missing points indicate a time greater than the timeout of 90 seconds. For each benchmark, the depth that suffices for both the upper and lower bounds to be the ones given in Tab. 2 is marked by a larger, black marker of the same type as the line plot for the benchmark.

#### 7.3.1. Optimality

The bounds that AutoBound produces are nearly optimal w.r.t. the effective-degree order. The lower bound for _elastic_, and the upper bounds for _manual-price monotonicity_, _token_, and _NIRN_, are all constant, which is always optimal. The upper bounds for _fixed point_ and _manual price_ are also optimal, consisting merely of one variable. (These two results are optimal because there is no valid constant bound; moreover, there is no other bound consisting of a single variable that is valid.) It is harder to evaluate whether the remaining bounds are close to optimal w.r.t. the effective-degree order; however, they have the same effective degree as the expert bounds, where the latter are available.

#### 7.3.2. Desirability

As Tab. 2 demonstrates, the bounds that AutoBound computes match or nearly match the bounds produced by a domain expert, and differ in constant or nearly-constant (e.g., \(\frac{1}{b}\)) terms. We attribute these differences to the challenge of inequality reasoning inside a function, which we discussed in §2. The lower bound for NIRN that AutoBound computes does not seem desirable; it is complex, with an effective degree of 5, four floor terms, and three levels of floor nesting. This result may indicate that a saturation bound larger than 3 is necessary, but we believe that the ultimate cause is inequational assumptions that are present in the original code but are missing in our model (this was our experience with the token example and the NIRN upper bound).

#### 7.3.3. Sufficient Saturation Depth

The smallest saturation depths that suffice to produce the bounds of Table 2 are marked on Fig. 5 using a larger marker of the same type as the other data points of the benchmark. All are 3 or below, and 2 suffices for some.
_Optimization modulo theories._ Optimization modulo theories is the problem of minimizing (or maximizing) an objective function subject to a feasible region that is defined by a first-order formula (Bigarella et al., 2011; Bigarella et al., 2012; Bigarella et al., 2013). This paper investigates a variation of this problem, in which the desired result is a _term_ rather than a value. Most work on optimization modulo theories is concerned with linear arithmetic. An exception is Bigarella et al. (2011), which--similarly to our work--handles non-linearity by "linearizing" the problem. Bigarella et al. (2011) incorporates a linear optimization modulo theories solver into an abstraction-refinement loop with an exact non-linear solver that generates lemmas on demand. In contrast, our method eagerly communicates lemmas between a linear solver and a cooperating (incomplete) non-linear solver (based on _cones of polynomials_).

\begin{table} \begin{tabular}{||l|c|c|c|c|c||} \hline \hline Benchmark name & \(t\) & AutoBound bound & ed & Expert bound & ed \\ \hline \hline elastic & \(x-y\) & \(\leq\frac{p}{e+a}+1\) & \(2\) & \(\leq\left\lfloor\frac{p}{e+a}\right\rfloor+1\) & \(2\) \\ & & \(\geq-1\) & \(0\) & \(\geq 0\) & \(0\) \\ \hline fixedPointInt & \(\left\lfloor\frac{\frac{ab}{f}}{b}\right\rfloor\cdot g\) & \(\leq a\) & \(1\) & \(\leq a\) & \(1\) \\ & & \(\geq a-1-\frac{sf}{b}\) & \(2\) & \(\geq a-\frac{sf-1}{b}\) & \(2\) \\ \hline manualPrice & \(price(\bar{t})\) & \(\leq\)_startPrice_ & \(1\) & \(\leq\)_startPrice_ & \(1\) \\ & & \(\geq\)_minimalPrice_ & \(1\) & \(\geq\)_minimalPrice_ & \(1\) \\ \hline manualPriceMonotone & \(price(\bar{t}_{2})-price(\bar{t}_{1})\) & \(\leq 0\) & \(0\) & \(\leq 0\) & \(0\) \\ \hline token & \(balanceJoined-balanceSplit\) & \(\leq 2\) & \(0\) & \(\leq 1\) & \(0\) \\ & & \(\geq-1/2-1/2\cdot funds\) & \(1\) & n.a. & – \\ \hline nirn & \(shares1-shares2\) & \(\leq 1.88\) & \(0\) & n.a. & – \\ & & \(\star\) & \(5\) & n.a. & – \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of the bounds that AutoBound generates for input term \(t\) (saturation depth=3) and the bounds curated by a human expert. \(\star\) indicates a result that is too large to include in the table. ed shows the (second component of the) effective-degree of the bound (the first component is 0).

_Non-linear abstract domains._ The set of _cones_ can be seen as an abstract domain, in which elements are conjunctions of polynomial inequalities. In this sense, cones are analogous to convex polyhedra, which represent conjunctions of linear inequalities. Unlike the case of convex polyhedra, the concretization function that maps cones to their solutions is not injective; that is, "inequivalent" cones may represent the same set of points (e.g., the cones generated by \(\{x,y\}\) and \(\{x+y,xy\}\) are different, but the formulas \(x\geq 0\wedge y\geq 0\) and \(x+y\geq 0\wedge xy\geq 0\) represent the same points). The saturation steps (§5) can be conceived as semantic reduction [8]: each saturation step derives new valid inequalities, while leaving the _solutions_ to those inequalities unchanged. There are a number of non-linear abstract domains that include semantic-reduction steps akin to cone saturation [1, 5, 16, 18]. The _wedge_ abstract domain introduced in Kincaid et al.
[18] is most similar to our saturation step--it uses a combination of Gröbner-basis, congruence-closure, and polyhedral techniques for saturation. Unlike the wedge domain, this paper uses a _systematic_ rather than _heuristic_ method for applying the rule that the product of two non-negative polynomials is non-negative. In this regard, our method is similar to the domain of polynomial inequalities proposed in Bagnara et al. [1], and the domain of polynomial equalities proposed in [7].

_Positivstellensatz._ In the case of real arithmetic, Positivstellensatz theorems [22, 23] give completeness results concerning which polynomials are positive given a set of inequality assumptions. They are the non-linear analogue of Farkas' lemma. For Farkas' lemma, conical combinations with non-negative scalar coefficients are sufficient; Positivstellensatz theorems, however, consider an ideal and conical combinations with more general terms, e.g. sum-of-squares polynomials in the case of Putinar [22]. In the Krivine-Stengle Positivstellensatz, sum-of-squares polynomial combinations are taken with respect to products of the basis inequalities, which is similar to our process of taking products in §5.3. Chatterjee et al. [6], Feng et al. [14] have both used Positivstellensatz theorems with a degree bound in the context of program analysis by reducing to semi-definite programming, and consequently give numerical solutions. Our method does not perform such a reduction and can therefore easily give exact results. However, we lack an analogue of their completeness results.
2304.11814
Stochastic Soiling Loss Models for Heliostats in Concentrating Solar Power Plants
Reflectance losses on solar mirrors due to soiling are a significant challenge for Concentrating Solar Power (CSP) plants. Soiling losses can vary significantly from site to site -- with (absolute) reflectance losses varying from fractions of a percentage point up to several percentage points per day (pp/day), a fact that has motivated several studies in soiling predictive modelling. Yet, existing studies have so far neglected the characterization of statistical uncertainty in their parameters and predictions. In this paper, two reflectance loss models are proposed that model uncertainty: an extension of a previously developed physical model and a simplified model. A novel uncertainty characterization enables Maximum Likelihood Estimation techniques for parameter estimation for both models, and permits the estimation of parameter (and prediction) confidence intervals. The models are applied to data from ten soiling campaigns conducted at three Australian sites (Brisbane, Mount Isa, Wodonga). The simplified model produces high-quality predictions of soiling losses on novel data, while the semi-physical model performance is mixed. The statistical distributions of daily losses were estimated for different dust loadings. Under median conditions, the daily soiling losses for Brisbane, Mount Isa, and Wodonga are estimated as $0.53 \pm 0.66$, $0.08 \pm 0.08$, and $0.58 \pm 0.15$ pp/day, respectively. Yet, higher observed dust loadings can drive average losses as high as $2$ pp/day. Overall, the results suggest a relatively simple approach characterizing the statistical distributions of soiling losses using airborne dust measurements and short reflectance monitoring campaigns.
Giovanni Picotti, Michael E. Cholette, Cody B. Anderson, Theodore A. Steinberg, Giampaolo Manzolini
2023-04-24T04:20:35Z
http://arxiv.org/abs/2304.11814v2
# Stochastic Soiling Loss Models for Heliostats in Concentrating Solar Power Plants

###### Abstract

Reflectance losses on solar mirrors due to soiling pose a formidable challenge for Concentrating Solar Power (CSP) plants. Soiling can vary significantly from site to site -- from fractions of a percent to several percentage points per day (pp/day), a fact that has motivated several studies in soiling predictive modelling. Yet, existing studies have so far neglected the characterization of statistical uncertainty in their parameters and predictions. In this paper, two reflectance loss models are proposed that model uncertainty: an extension of a previously developed physical model and a simplified model. A novel uncertainty characterization enables Maximum Likelihood Estimation techniques for parameter estimation for both models, and permits the estimation of parameter (and prediction) confidence intervals. The models are applied to data from ten soiling campaigns conducted at three Australian sites (Brisbane, Mount Isa, Wodonga). The simplified model produces high-quality predictions of soiling losses on novel data, while the semi-physical model performance is mixed. The statistical distributions of daily losses were estimated for different dust loadings. Under median conditions, the daily soiling losses for Brisbane, Mount Isa, and Wodonga are estimated as \(0.53\pm 0.62\), \(0.1\pm 0.1\), and \(0.57\pm 0.14\) pp/day, respectively. Yet, higher observed dust loadings can drive average losses as high as 2.50 pp/day. Overall, the results suggest a relatively simple approach characterizing the statistical distributions of soiling losses using airborne dust measurements and short reflectance monitoring campaigns.

keywords: CSP, Heliostat, Reflectance, Dust deposition, Site soiling assessment, Uncertainty characterization, Solar Energy

## 1 Introduction

Maintaining high reflectance for Concentrating Solar Power (CSP) heliostats is of paramount importance for the economics of the plant. One of the key degradation modes is the loss of reflectance due to the accumulation of dust on the surface of the heliostats. These losses are typically mitigated via artificial cleaning with brushes and (possibly high-pressure) water [1], which represents a significant Operation & Maintenance (O&M) cost for many plants [2]. In an effort to optimally balance productivity losses against direct costs, a number of studies have been devoted to cleaning planning and resource allocation, including methods based on mathematical programming [3; 4] or Markov Decision Processes (MDPs) [5; 6]. Yet, these approaches rely on estimation of the local soiling dynamics, which are influenced by local environmental factors, exhibit significant randomness, and may be affected by seasonality. As a result of these factors, local soiling rates for CSP reflectors may vary from a few tenths of a percentage point to a few percentage points per day [7; 8; 9] -- a variation that would require vastly different cleaning strategies and strongly influence the economic viability of a plant. Therefore, accurately characterizing and predicting the dependence of soiling losses on location-specific conditions is essential to plan cleaning approaches and reduce the financial risk during site selection [10]. A number of soiling predictive models have been developed in the literature for both PV and CSP [8; 11]. A main distinction can be made between "physical" and "statistical" approaches.
The statistical approaches typically utilize extensive experimental data provided by reflectance measurements from hand-held reflectometers [12] or custom semi-automated devices (e.g. the TraCS [13] or AVUS [14]) together with weather measurements (e.g. wind speed, humidity, PM10, etc.). A model is then constructed via regression analysis [15; 12; 16], Artificial Neural Networks [12; 16], autoregressive models [17], linear discrete-time state-space models [18; 19], Markov Regime Switching Models [20], or other data-driven approaches to describe the dependence of soiling losses on a number of environmental parameters. While these statistical approaches avoid the need for a detailed model, they do not provide any insight into the relevant physical processes, and extrapolation to novel weather conditions or novel sites is dubious. On the other hand, physical models describe the deposition of dust particles and their interaction with solar collector surfaces through physical laws that describe the individual mechanisms at work in the different stages of the overall process. Picotti et al. [8; 21] developed and validated a model for CSP soiling losses which divided the process into four main phases (dry deposition, adhesion, removal, and reflectance losses). Each was separately modelled through physical and/or geometrical equations. Wolfertstetter et al. [22] adopted a similar deposition model and fit and validated this model on two data sets. In both cases, the models show good agreement with the experiments, but no effort was made to provide a detailed statistical characterization of the reflectance loss predictions. Lozano-Santamaria et al. [23] applied similar considerations to assess the dust deposition on air coolers in CSP (fouling), demonstrating the relevance of many of these environmental parameters to the deposition process. Other authors focused on the impact of deposited dust on reflectance losses, exploiting the concept of "turbid medium" and the Beer-Lambert law to identify the impact of different incidence angles [24], or applying Mie scattering theory to measured gravimetric density and an assumed deposited dust size distribution to compute the cleanliness of the solar collectors (both PV and CSP) [25]. In the PV literature, physical models have also been employed in a number of studies. You et al. [26] computed the dry deposition velocity to assess airborne dust deposition and subsequently applied a linear relation between deposited dust and efficiency loss. Following a very similar approach, Coello and Boyle [27] computed the dry deposition velocity using standard models from the atmospheric literature, and eventually assessed the soiling ratio as a function of total mass accumulated on the PV panels. Fernandez-Solas et al. [28] managed to exploit the physics of PV modules to assess soiling from the available electrical measurements. These latter approaches grounded in physical models have the advantage that they offer insight into the relevant physical processes, but the simplified models typically used for key phenomena (e.g. a logarithmic profile for wind speeds, atmospheric stability) are difficult to validate and still require some experimental data from the site for tuning to produce accurate predictions. Taken together, the existing studies have demonstrated the promise of both statistical and physical approaches to predict soiling losses, with the former offering accurate predictions and the latter offering more physical insight. However, there are a number of gaps: 1.
**Statistical characterization of soiling losses**. While many of the existing models assess the quality of the fit (e.g. via mean-square error of forecasts), there has been little effort to model the impact of the various sources of uncertainty on the resulting soiling rate predictions (e.g. provide prediction confidence intervals), despite the significant impact that such uncertainty has on cleaning operations [6]. This is particularly true for physical soiling models [21; 22], which do not include error terms, but is also true of the statistical models, which tend to focus on forecasts using expected values rather than uncertainties. A more complete stochastic description of different uncertainties in the soiling process and measurements can yield 1) insights into how these uncertainties affect the parameter estimates (i.e. parameter confidence intervals), and 2) statistical confidence/credible intervals on predictions to quantify soiling loss risks. 2. **Use of short-term reflectance measurement campaigns**. For existing CSP studies, while the predictions are of good quality, they have only been evaluated at a small number of sites (typically one or two) using long-term data sets (\(\sim\)4-48 months) [22; 19; 20; 13]. Such long-term reflectance campaigns either require laborious measurements from hand-held reflectometers [21; 12; 20] or elaborate custom devices [29]. It is therefore advantageous to explore the possibility of using shorter-term reflectance campaigns to assess the soiling rates at a particular site. Such short-term experiments would clearly have a lower accuracy, but by using the aforementioned statistical analysis, the accuracy of the predictions could be estimated and inform a decision on whether or not more data needs to be collected. 3. **Understanding soiling losses in Australia**. Existing CSP soiling loss studies have been primarily conducted in Europe and North Africa, and soiling studies from other CSP-relevant locations (e.g. Australia) are lacking. 4. **CSP reflectance loss models are simplistic**. Existing physical CSP soiling models use geometrical area coverage to assess the reflectance loss of the deposited dust [21; 22]. Yet, it is well known that the losses are governed by Mie Scattering [25; 30], and thus the scattering effective area is likely significantly different from the aerodynamic cross-section. It is therefore advantageous to integrate the Mie Scattering models (e.g. similar to [30; 25]) with the physical deposition models. In this paper, two new stochastic soiling loss models for solar mirrors -- a _semi-physical_ and simplified _constant-mean_ model -- are developed that aim at a characterization of uncertainty in the soiling process by extending the physical model previously developed in [21]. The contributions are as follows: 1) novel uncertainty models for the deposition velocity and measurement errors, 2) improvement of the CSP reflectance loss model to include Mie scattering, 3) derivation of expressions for the statistical distribution of soiling losses and Maximum Likelihood Estimation (MLE) methods, and 4) a novel simplified model based on the insights from the first three developments. These developments enable, for the first time, the combination of physical insight with an assessment of parameter estimation uncertainties induced by the uncertain deposition and measurements.
Moreover, both the semi-physical model and the simplified model are deployed on a total of ten experimental campaigns taken at three different sites around Australia with significantly different climates. The results permit an evaluation of the suitability of the models to describe CSP-relevant soiling losses across different sites using short (\(\sim\)1 week) soiling campaigns and longer-term, automatically acquired weather data. The remainder of the paper is organized as follows. Section 2 describes the physical soiling model, which is subsequently extended to describe the statistical distribution of reflectance changes and develop a simplified model. Section 3 describes the ten experimental campaigns as well as the parameter estimation and testing of the developed models. Finally, Section 4 concludes the paper and suggests areas for future work. ## 2 Modeling The models developed in this paper are comprised of two components: a deterministic physics-based deposition model based on an extension of the authors' earlier work [21], and a wholly novel stochastic component. Each component will be discussed in the sequel, and an expression for the statistical distribution of reflectance changes will be developed. This development will serve to 1) clarify some important phenomena for the observed reflectance of CSP collectors subject to soiling and 2) enable the development of the statistical likelihood of observations, which is the basis for the estimation of the model parameters. ### Physical model Firstly, the physical model will be discussed. While the deposition model largely follows [21], the discussion here will focus on details that are pertinent to later developments and will introduce a refined loss model based on Mie Theory, following approaches similar to [25; 30]. #### 2.1.1 Reflectance Loss Model Consider the following model for the reflectance of the heliostat1: Footnote 1: The wavelength dependence is omitted here. The surface is assumed to be grey or the reflectance is assumed to be a solar weighted average. \[\rho\left(A_{soil}(t),\phi\right)=\rho_{0}\left[1-\frac{A_{soil}(t)}{A_{ref}} \cdot g(\phi)\right] \tag{1}\] where \(A_{soil}\) is the effective soiled area when the beam incidence is normal to the surface (i.e. \(\phi=0\)) 2, \(\phi\) is the angle of incidence at which the reflectance is assessed/measured, \(\rho_{0}\) is the nominal reflectance (assumed to be directionally uniform), \(A_{ref}\) is the unsoiled reflective area3, and \(g(\phi)\geq 1\) is an incidence factor that accounts for the additional blocking and shading when \(\phi>0\). This factor depends on the nature of the reflector, particularly whether it is a first- or second-surface reflector -- a fact which will be discussed later. In the absence of cleaning (and rain)4, the total soiled area of the deposited particles can be computed as: Footnote 4: Cleaning/rain models are considered outside the scope of this paper. \[A_{soil}(t)=A_{soil}(t_{0})+\frac{\pi}{4}\int_{0}^{\infty}D^{2}N(t_{0},t,D) \gamma(D,\phi_{a})dD \tag{2}\] where \(D\) is the particle diameter and \(N\left(t_{0},t,D\right)\) is the number of particles of diameter \(D\) accumulated on the mirror between \(t_{0}\) and \(t\), \(\phi_{a}\) is the acceptance half-angle of the reflectance measurements (which is assumed to be constant in this study). 
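Before detailing the weighting factor \(\gamma\), a minimal numerical sketch of the loss model (1) may help fix ideas; it is our illustration (the values of \(\rho_{0}\) and \(A_{ref}\) are placeholders, not fitted ones), using the second-surface incidence factor \(g(\phi)=2/\cos(\phi)\) introduced below:

```python
import numpy as np

def reflectance(A_soil, phi, rho0=0.95, A_ref=1.0):
    """Eq. (1): rho = rho0 * [1 - A_soil/A_ref * g(phi)].

    g(phi) = 2/cos(phi) is the second-surface incidence factor of Eq. (5),
    valid for incidence angles sufficiently far from zero; rho0 and A_ref
    are illustrative placeholder values.
    """
    g = 2.0 / np.cos(phi)
    return rho0 * (1.0 - (A_soil / A_ref) * g)

# e.g. effective soiled area equal to 1% of the reflective area,
# measured at the reflectometer's 15-degree incidence angle
print(reflectance(A_soil=0.01, phi=np.radians(15.0)))   # ~0.930
```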
The function \(\gamma(D,\phi_{a})\) is a weighting factor that is calculated based on Mie Theory [30] \[\gamma(D,\phi_{a})\triangleq 2\pi\int_{0}^{\infty}Q_{ext}\left(\frac{\pi D}{\lambda},m\right)\,g(\phi_{a})\bar{E}(\lambda)d\lambda \tag{3}\] where \(m\) is the complex refractive index of the dust, \(Q_{ext}\left(\frac{\pi D}{\lambda},m\right)\) is the usual extinction coefficient from Mie Theory, i.e. the ratio of the extinction area to the cross-sectional area of the particle, and \(\bar{E}(\lambda)\) is the normalized solar spectral distribution used for the solar-weighted average. Finally, \[g(\phi_{a})\triangleq\int_{-1}^{\cos(\phi_{a})}p(\mu)d\mu \tag{4}\] where \(p(\mu)\) is the scattering function5. This last integral accounts for the small, but finite, acceptance angle of the measurement device. Footnote 5: The scattering function used here is normalized to integrate to one.

This area loss model is similar to [21] with one important extension: the weighting factor \(\gamma(D,\phi_{a})\) allows particles of different diameters to have different effective cross-sectional areas for light scattering -- a fact that may be important for medium-sized particles (0.1-10\(\mu m\)), which have dimensions similar to the wavelengths of the incident light. Indeed, this model simplifies to that of [21] if \(\gamma(D,\phi_{a})=1\) for all \(D\). When the reflectance measurement angle is normal (\(\phi=0^{\circ}\)), the reflectance lost is simply the effective cross-sectional area, i.e. \(g(\phi)=1\) in (1). However, for non-zero incidence, \(g(\phi)\neq 1\) and an expression must be developed. For this work, a simplified geometrical model (albeit using the effective cross-sectional area from the Mie calculations) is used to model the incidence angle effect. In the case where the dust is deposited directly on the reflector surface, \(g(\phi)=\frac{1+\sin(\phi)}{\cos(\phi)}\) [21]. For a second-surface reflector with a covering glass a few millimeters thick and incidence angles sufficiently far from zero (e.g. greater than \(1^{\circ}\)), the total area loss is well-approximated by \[g(\phi)=\frac{2}{\cos(\phi)} \tag{5}\] Therefore, once the number of particles at each diameter is known at some \(t_{0}\), the reflectance can be computed using (1). It is likely that \(t_{0}\) will be just after cleaning, and \(A_{soil}(t_{0})=0\) if perfect cleaning is assumed.

#### 2.1.2 Deposition Model

Due to the measurements available for airborne dust, it is convenient to express the number distribution as a function of the cumulative _mass_ deposited on the mirror \(M(t_{0},t,D)\) \[N\left(t_{0},t,D\right)=\frac{M(t_{0},t,D)}{\frac{1}{6}\rho\pi D^{3}} \tag{6}\] which assumes a constant density (both as a function of diameter and time). The mass deposited is modeled as \[M(t_{0},t,D)=\int_{t_{0}}^{t}f(\tau,D)d\tau \tag{7}\] where \(f(\tau,D)\) is the particle mass flux for particles of diameter \(D\) at time \(\tau\). The mass flux is modelled as the product of the mass of the particles in the air, the vertical velocity of the particles, and the horizontal projection of the mirror area [21]: \[f(t,D)=m(t,D)\cdot v_{d}(w(t),T(t),D;\,hrz0)\cdot\cos(\theta(t)) \tag{8}\] where \(m(t,D)\) is the mass concentration distribution of airborne particles of diameter \(D\), \(\theta(t)\) is the tilt angle of the mirror, and \(v_{d}(w(t),T(t),D;\,hrz0)\) is the deposition velocity.
This last term is based on the well-known resistance model [31] and has a functional dependence on the diameter, wind speed \(w(t)\), air temperature \(T(t)\), and the site-dependent parameter \(hrz0\) (the ratio of a reference height to the surface roughness of the site). Beyond this functional dependence, the details of the model are not particularly important for the later developments in this paper, so the interested reader is referred to [21] for more details. Unfortunately, the mass distribution in (8) is not usually known. Typically, the cumulative mass up to a few diameters -- often just one -- is measured. The most commonly available are \(PM_{10}\) and \(TSP\), which are the cumulative particle mass up to a diameter of \(10\mu\)m, and the total suspended particles, respectively. Assuming constant density, the mass distribution is factored into the following form \[m(t,D)=\rho\frac{1}{6}\pi D^{3}\cdot\alpha(t)\cdot\hat{n}(D) \tag{9}\] where \(\hat{n}(D)\) is the well-known tri-modal log-normal number distribution [32]6: Footnote 6: The \(\ln(10)\,D\) term is due to the transformation of the independent variable from \(\log D\) to \(D\). The interested reader is referred to [32, p. 333] for details. \[\hat{n}(D)=\sum_{i=1}^{3}\frac{N_{i}}{\ln(10)\,D\,\sqrt{2\pi}\log\sigma_{i}}\exp\left[-\frac{(\log(D)-\log(\mu_{i}))^{2}}{2(\log(\sigma_{i}))^{2}}\right] \tag{10}\] where \(N_{i}\), \(\mu_{i}\), and \(\sigma_{i}\) are parameters of the different modes. Prototype parameters can be selected based on the nature of the site (e.g. urban, rural, etc.) from the literature [31]. The time-dependent term \(\alpha(t)\) is the ratio of the relevant airborne dust measurement to its corresponding value for the prototype mass distribution. To be more specific, let \(c_{\delta}(t)\) be the concentration of airborne dust with diameters less than \(\delta\); then \(\alpha(t)\) can be expressed as \[\alpha(t)=\frac{c_{\delta}(t)}{\frac{\pi\rho}{6}\int_{0}^{\delta}D^{3}\cdot\hat{n}(D)\,dD}. \tag{11}\] In other words, \(\alpha(t)\) is the ratio of an observed mass concentration below diameter \(\delta\) and the corresponding value of the mass concentration implied by the prototype distribution. Thus, the prototype distribution is "scaled" according to the measurements. Putting (6)--(11) together yields: \[N\left(t_{0},t,D\right)=\hat{n}(D)\cdot\int_{t_{0}}^{t}\alpha(\tau)\cdot v_{d}(w(\tau),T(\tau),D;\,hrz0)\cdot\cos(\theta(\tau))\,d\tau \tag{12}\] This integral may be computed approximately by discretizing the timeline into \(\Delta t\) intervals. Let interval \(k\) denote the time interval \([k\Delta t,(k+1)\Delta t)\) and assume that the measurements are taken at the beginning of the interval. Moreover, assume that the time-varying quantities are constant at their average values for the interval. The integral (12) can be approximated as: \[N(k_{0}\Delta t,k\Delta t,D)\approx\hat{n}(D)\cdot\sum_{i=k_{0}}^{k-1}\alpha_{i}\cdot v_{d}(w_{i},T_{i},D;hrz0)\cdot\cos(\theta_{i})\Delta t \tag{13}\] where \(w_{i}\triangleq w(i\Delta t)\) and an analogous definition holds for each of \(\alpha_{i}\), \(T_{i}\), and \(\theta_{i}\). Equation (13) enables the computation of the number of particles that fall on a surface between two times.
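As an illustration of the discretized accumulation (13), the following sketch (ours; the deposition velocity is a stub standing in for the resistance model of [21], and the mode parameters are placeholders rather than the prototypes of [31]) accumulates the number distribution over a time series of measurements:

```python
import numpy as np

def n_hat(D, N=(1e3, 1e2, 1e0), mu=(0.03, 0.5, 8.0), sigma=(1.7, 2.0, 2.2)):
    """Tri-modal log-normal number distribution, Eq. (10).
    The mode parameters here are placeholders, not the prototypes of [31]."""
    out = np.zeros_like(D)
    for Ni, mi, si in zip(N, mu, sigma):
        out += (Ni / (np.log(10) * D * np.sqrt(2 * np.pi) * np.log10(si))
                * np.exp(-(np.log10(D) - np.log10(mi))**2 / (2 * np.log10(si)**2)))
    return out

def accumulated_number(alpha, w, T, theta, D, dt, v_d):
    """Eq. (13): number of particles of each diameter D accumulated over the
    intervals (alpha_i, w_i, T_i, theta_i); v_d(w, T, D) is a user-supplied
    deposition-velocity function standing in for the resistance model of [21]."""
    total = np.zeros_like(D)
    for a, wi, Ti, th in zip(alpha, w, T, theta):
        total += a * v_d(wi, Ti, D) * np.cos(th) * dt
    return n_hat(D) * total

# Toy usage: 24 hourly intervals, constant conditions, illustrative v_d
D = np.logspace(-2, 1.5, 200)                                    # diameters [um]
v_d_stub = lambda w, T, D: 1e-4 * (1.0 + w) * np.ones_like(D)    # [m/s], made up
N_acc = accumulated_number(alpha=np.ones(24), w=2.0 * np.ones(24),
                           T=293.0 * np.ones(24), theta=np.zeros(24),
                           D=D, dt=3600.0, v_d=v_d_stub)
```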
However, it is possible that these particles are removed from the surface at some point due to wind, sliding, rolling, or other removal forces [33]. For the purposes of this model, only competition between the rolling moment and gravity removal is considered -- resulting in a critical diameter \(D_{c}(\theta)\) above which all particles are removed [21]. For this study, it is assumed that there is a \(D^{\prime}_{c}\) above which no dust is deposited on the mirror. This is a reasonable assumption in two important situations: 1. Fixed-tilt mirrors (e.g. for soiling experiments), where \(\theta=constant\). 2. When tracking mirrors frequently achieve some high tilt angle. This can happen if mirrors are either stowed at a high tilt (close to \(90^{\circ}\)) or achieve some high tilt as a part of the stowing process (e.g. they are near vertical during the manoeuvre to the face-down stowing position). In both of these cases, dust that is regularly removed is simply ignored and the dust accumulated on the mirror is \[N(k_{0}\Delta t,k\Delta t,D)\approx\begin{cases}\text{Eq. (13)}&D\leq D^{\prime}_{c}\\ 0&D>D^{\prime}_{c}\end{cases} \tag{14}\]

### Stochastic model & the distribution of changes in measured reflectance

The physical model is capable of utilizing weather and airborne dust measurements and a few key choices (prototype distribution, \(hrz0\)) to provide predictions of reflectance losses at different angles of incidence. However, these _deterministic_ predictions are subject to a number of assumptions (e.g. spherical particles, constant density), and thus the deposited dust is likely subject to significant uncertainty. Moreover, the reflectance measurements (particularly for soiled samples) are inherently uncertain. In the remainder of this section, a model will be developed to attempt to quantify the effects of uncertainty in the deposition flux and measurements. Consider the reflectance measurements at time index \(k\), \(r_{k}\). Because the reflectometer-measured average reflectance has significant uncertainty, the measured values are modelled as \[r_{k}=\rho\left(A_{soil,k},\phi_{k}\right)+\epsilon_{r,k} \tag{15}\] where \(\epsilon_{r,k}\sim\mathcal{N}\left(0,\sigma_{r,k}^{2}\right)\) is allowed to vary with time, and \(k\) ranges over the subset of time indices in \(\{0,1,\ldots\}\) at which the reflectance is measured. The measurement noise variances \(\sigma_{r,k}^{2}\) can be estimated by taking repeated measurements at different locations and computing the estimated variance of the mean. The time index is included since this variance has been noted to change as the mirror becomes more soiled. Now, consider two consecutive measurements taken at time indices \(\ell<m\). The change in the reflectance loss is: \[r_{m}-r_{\ell} =\rho\left(A_{soil,m},\phi_{m}\right)-\rho\left(A_{soil,\ell},\phi_{\ell}\right)+\epsilon_{r,m}-\epsilon_{r,\ell}\] \[=\frac{\rho_{0}}{A_{mirror}}\left[g\left(\phi_{\ell}\right)A_{soil,\ell}-g\left(\phi_{m}\right)A_{soil,m}\right]+\epsilon_{r,m}-\epsilon_{r,\ell} \tag{16}\] Let \(g(w_{j},T_{j},D;\,hrz0)\,\triangleq\,\alpha_{j}\,\hat{n}(D)\,v_{d}(w_{j},T_{j},D;\,hrz0)\) and assume that the soiled area at \(t_{0}=k_{0}\Delta t\) is known7.
The soiled area at any \(k>k_{0}\) can be computed as: \[A_{soil,k} =A_{soil,k_{0}}+\frac{\pi}{4}\int_{0}^{\infty}D^{2}\,N(k_{0}\Delta t,k\Delta t,D)\gamma(D,\phi_{a})dD\] \[=A_{soil,k_{0}}+\frac{\pi}{4}\Delta t\sum_{i=k_{0}}^{k-1}\cos(\theta_{i})\,\int_{0}^{D_{c}}D^{2}\,\gamma(D,\phi_{a})\,g(w_{i},T_{i},D;\,hrz0)dD\] \[=A_{soil,k_{0}}+\sum_{i=k_{0}}^{k-1}\alpha_{i}\cdot\mu(w_{i},T_{i};\,hrz0)\cdot\cos(\theta_{i}) \tag{17}\] where \(\mu(w_{i},T_{i};hrz0)\,\triangleq\,\frac{\pi}{4}\Delta t\int_{0}^{D_{c}}D^{2}\,\gamma(D,\phi_{a})\,\hat{n}(D)\cdot v_{d}(w_{i},T_{i},D;hrz0)dD\). Since the area loss rate has significant uncertainty, it is reasonable to include a noise term in the following manner: \[A_{soil,k}=A_{soil,k_{0}}+\sum_{i=k_{0}}^{k-1}\alpha_{i}\cdot\cos(\theta_{i})\cdot[\mu(w_{i},T_{i};hrz0)+\varepsilon_{i}] \tag{18}\] where \(\varepsilon_{i}\sim\mathcal{N}(0,\sigma_{dep}^{2})\) are assumed to be i.i.d. noise terms. In other words, the model captures the mean of the reflective area loss, while the variance in this loss is modelled via \(\sigma_{dep}^{2}\). The distribution of \(A_{soil,k}\) can now be easily established as the sum of linearly transformed random variables \[A_{soil,k}\sim\mathcal{N}\left(m_{k},s_{k}^{2}\right) \tag{19}\] where \[m_{k} \triangleq A_{soil,k_{0}}+\sum_{i=k_{0}}^{k-1}\alpha_{i}\,\cos(\theta_{i})\,\mu(w_{i},T_{i};hrz0)\] \[s_{k}^{2} \triangleq\sigma_{dep}^{2}\sum_{i=k_{0}}^{k-1}\alpha_{i}^{2}\,\cos^{2}(\theta_{i})\] We can now compute the change in reflectance using only observable quantities using (16): \[r_{m}-r_{\ell} =b(\phi_{\ell})A_{soil,\ell}-b(\phi_{m})A_{soil,m}+\epsilon_{r,m}-\epsilon_{r,\ell}\] \[=\left[b(\phi_{\ell})-b(\phi_{m})\right]A_{soil,\ell}\] \[\qquad-b(\phi_{m})\sum_{j=\ell}^{m-1}\alpha_{j}\,\cos(\theta_{j})\cdot\left(\mu(w_{j},T_{j};hrz0)+\varepsilon_{j}\right)+\epsilon_{r,m}-\epsilon_{r,\ell} \tag{20}\] where \(b(\phi)=\frac{\rho_{0}}{A_{mirror}}g(\phi)\). Equation (20) is interesting in its own right. The first term describes how a change in measurement angle will yield a change in observed reflectance, even when no additional dust is deposited. The second term accounts for the losses due to the additional dust deposited from time index \(\ell\) to index \(m\), and the last two terms model the imperfect measurements. In the case of a tracking heliostat, the incidence angle will change significantly throughout the day and this first term will contribute significantly to the observed changes in reflectance. However, reflectance is typically measured using a fixed-incidence-angle reflectometer, so \(\phi_{\ell}=\phi_{m}\) and thus \(\left[b(\phi_{\ell})-b(\phi_{m})\right]=0\), and the first term has no effect on the observed reflectance change.
The fixed-incidence condition -- together with the fact that (20) is an affine transformation of normal random variables -- implies that the statistical distribution of the reflectance changes may be written as: \[r_{m}-r_{\ell}\sim\mathcal{N}\left(\mu_{m,\ell},\sigma_{m,\ell}^{2}\right) \tag{21}\] where \[\mu_{m,\ell}=-b(\phi_{m})\sum_{j=\ell}^{m-1}\alpha_{j}\,\cos(\theta_{j})\cdot\mu(w_{j},T_{j};hrz0) \tag{22}\] \[\sigma_{m,\ell}^{2}=\sigma_{dep}^{2}\,b(\phi_{m})^{2}\,\sum_{j=\ell}^{m-1}\alpha_{j}^{2}\,\cos^{2}(\theta_{j})+\sigma_{r,m}^{2}+\sigma_{r,\ell}^{2} \tag{23}\] This expression permits the evaluation of the probability density of reflectance changes given meteorological parameters (airborne dust concentration, the ambient temperature, wind speed), measurement parameters (acceptance and incidence angles), and the tilt history.

### Likelihood & parameter estimation

Consider the data set \(\mathcal{D}\), which consists of data from \(L\) experiments, each consisting of measurements taken at time indices \(k_{i}\), \(i=1,2,\ldots,N_{\ell}\), where \(k_{j}>k_{i}\) \(\forall j>i\). Let \(\bar{w}_{i}=(w_{i},T_{i},c_{i},\theta_{i})\) be the tuple of input variables at time index \(i\) and let \(hrz0=h\) and \(\sigma_{dep}=q\). The data can be written as: \[\mathcal{D}=\bigcup_{\ell=1}^{L}\mathcal{D}_{\ell} \tag{24}\] \[\mathcal{D}_{\ell}=\left\{\left(\Delta r_{\ell,k_{i+1}},\bar{w}_{\ell,k_{i}},\bar{w}_{\ell,k_{i}+1},\ldots,\bar{w}_{\ell,k_{i+1}-1},\bar{w}_{\ell,k_{i+1}}\right),\,i=0,1,\ldots,N_{\ell}-1\right\} \tag{25}\] where \(\Delta r_{\ell,k_{i}}=r_{\ell,k_{i}}-r_{\ell,k_{i-1}}\) is the reflectance loss of experiment \(\ell\) between successive measurements. The likelihood for this data is \[L\left(\mathcal{D}\mid h,q\right)=\prod_{\ell=1}^{L}\prod_{i=1}^{N_{\ell}}p(\Delta r_{\ell,k_{i}}\mid h,q) \tag{26}\] where \(p(\cdot\mid h,q)\) is the density of the normal distribution from (21) with \(hrz0=h\) and \(\sigma_{dep}=q\). Using this probability model, the log-likelihood can be written as \[\ell\left(\mathcal{D}\mid h,q\right)=\sum_{\ell=1}^{L}\sum_{i=1}^{N_{\ell}}-\frac{1}{2}\log(2\pi)-\frac{1}{2}\log\left(\sigma_{\ell,k_{i},k_{i-1}}^{2}\right)-\frac{(\Delta r_{\ell,k_{i}}-\mu_{\ell,k_{i},k_{i-1}})^{2}}{2\sigma_{\ell,k_{i},k_{i-1}}^{2}} \tag{27}\] where \(\mu_{\ell,k_{i},k_{i-1}}\) and \(\sigma_{\ell,k_{i},k_{i-1}}^{2}\) are the mean and variance of the reflectance loss for the \(i\)th difference in the \(\ell\)th experiment, which can be computed from (22) and (23). Note that neither \(h\) nor \(q\) appears explicitly here, but they are embedded in the computation of \(\mu_{\ell,k_{i},k_{i-1}}\) and \(\sigma_{\ell,k_{i},k_{i-1}}^{2}\) in (22) and (23). The following procedure can be used to compute the likelihood: 1. Compute \(D_{c}\) for the fixed tilt or stow angle. 2. Compute \(\mu(w_{i},T_{i})\) with \(hrz0=h\), setting \(v_{d}(w_{i},T_{i},D^{\prime};\,h)=0\) \(\forall D^{\prime}\geq D_{c}\). 3. Compute \(\mu_{\ell,k_{i},k_{i-1}}\) via (22) and \(\sigma_{\ell,k_{i},k_{i-1}}^{2}\) via (23) with \(\sigma_{dep}=q\) for all \(i=1,2,\ldots,N_{\ell}\) and sum. 4. Repeat 3 for \(\ell=1,2,\ldots,L\) and sum the results. For estimation, the measurement noise variances \(\sigma_{r,k}^{2}\) can be estimated separately using repeated measurements on the same reflector and the sample variance of the mean.
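A minimal sketch of the log-likelihood (27) and its maximization follows; it is our illustration, not the HelioSoil API, and `means_and_variances` is a hypothetical helper standing in for the evaluation of (22)-(23). The log-transforms anticipate the parameterization described next.

```python
import numpy as np
from scipy.optimize import minimize

def log_likelihood(delta_r, mu, var):
    """Eq. (27) for one experiment: delta_r are measured reflectance changes,
    mu and var the per-interval means (22) and variances (23), which carry
    the dependence on hrz0 and sigma_dep."""
    delta_r, mu, var = map(np.asarray, (delta_r, mu, var))
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * np.log(var)
                  - (delta_r - mu) ** 2 / (2.0 * var))

def neg_ll(params, data):
    # transformed decision variables enforce hrz0 > 1 and sigma_dep > 0
    loglog_hrz0, log_sigma_dep = params
    h = np.exp(np.exp(loglog_hrz0))
    q = np.exp(log_sigma_dep)
    mu, var = means_and_variances(data, h, q)  # hypothetical helper: Eqs. (22)-(23)
    return -log_likelihood(data["delta_r"], mu, var)

# res = minimize(neg_ll, x0=np.array([0.0, -8.0]), args=(data,), method="Nelder-Mead")
```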
The two remaining free parameters -- \(hrz0\) and \(\sigma_{dep}\) -- were estimated via maximization of (27): \[\widehat{hrz0},\hat{\sigma}_{dep}=\arg\max_{h,q}\ell\left(\mathcal{D}\mid h,q\right)\] Because of the bounds on the variables, the optimization decision variables were \(\log\left[\log\left(hrz0\right)\right]\) and \(\log\left(\sigma_{dep}\right)\) to ensure that the parameters obey the physical limits of \(hrz0>1\) and \(\sigma_{dep}>0\) during optimization. The parameter confidence intervals are found using Fisher Information in the log-transformed space and inverting the transformation for the upper and lower bounds.

### Constant mean deposition velocity simplification

The above development suggests the following simplification when no weather data is available: \[\mu\left(w_{i},T_{i};hrz0\right)\rightarrow\tilde{\mu} \tag{28}\] in (18) (i.e. we ignore the mean prediction of the physical model). The measured reflectance loss in (16) will therefore be distributed as \[\Delta r_{\ell,k_{i}}\sim\mathcal{N}\left(\tilde{\mu}_{\ell,k_{i},k_{i-1}},\sigma_{\ell,k_{i},k_{i-1}}^{2}\right) \tag{29}\] with \[\tilde{\mu}_{\ell,k_{i},k_{i-1}}=-\tilde{\mu}\cdot b(\phi_{k_{i}})\sum_{j=k_{i-1}}^{k_{i}-1}\alpha_{j}\,\cos(\theta_{j}) \tag{30}\] and the same variance equation (23) still holds. In this case, the deposition and Mie loss components of the physical model are absorbed into a single average, and only the incidence angle, tilt angle, and the dust scaling factor are needed as inputs. This model was also estimated by maximizing (27) with \(\tilde{\mu}_{\ell,k_{i},k_{i-1}}\) in place of \(\mu_{\ell,k_{i},k_{i-1}}\). For the estimation of this simplified model, \(\log\left(\tilde{\mu}\right)\) replaces \(\log\left[\log\left(hrz0\right)\right]\) as a decision variable.

### Statistical distribution of daily losses for the constant-mean model

The constant-mean model permits a simple procedure for computing the daily reflectance loss distribution, which is a key practical parameter for CSP. The procedure described here can be extended to the semi-physical model through straightforward modifications, albeit with a more complex simulation procedure. For the constant-mean model, if the parameter uncertainty is neglected, the statistical distribution of losses can be computed via (29), but with \(\sigma_{r,k_{i}}=\sigma_{r,k_{i-1}}=0\) in order to predict the "true" reflectance instead of the noise-polluted measurements. Note that this statistical distribution is dependent on the total dust loading through the sums: \[\sum_{j=k_{i-1}}^{k_{i}-1}\alpha_{j}\,\cos(\theta_{j}) \tag{31}\] \[\sum_{j=k_{i-1}}^{k_{i}-1}\alpha_{j}^{2}\,\cos^{2}(\theta_{j}) \tag{32}\] which provide convenient metrics for identifying the dust loading of a particular time interval (e.g. one day)8. Including the parameter uncertainty is also straightforward if one is willing to resort to Monte Carlo simulation, since \(\log(\tilde{\mu})\) and \(\log(\sigma_{dep})\) are approximately jointly normal under MLE. A simple Monte Carlo simulation can thus be conducted to obtain the reflectance changes for any day as follows: Footnote 8: These loadings are not sufficient for the semi-physical model, where deposition has some complex dependencies on the wind speed and ambient temperature as well as the airborne dust. For this model, the mean loss in (22) is the only meaningful quantification of the loading. 1.
Compute the dust loading sums \(\sum_{j=k_{i-1}}^{k_{i}-1}\alpha_{j}\,\cos(\theta_{j})\) and \(\sum_{j=k_{i-1}}^{k_{i}-1}\alpha_{j}^{2}\,\cos^{2}(\theta_{j})\) for all days where airborne dust is available (but not necessarily reflectance measurements). Here, \(k_{i-1}\) is the index of the first interval for the day and \(k_{i}\) is the index of the last measurement interval. 2. Sample the daily dust loading from those computed in step 1. 3. Sample \(\log(\tilde{\mu})\) and \(\log(\sigma_{dep})\) from a joint normal distribution. 4. Compute the mean and variance via (23) and (30) with \(\sigma_{r,k_{i}}=\sigma_{r,k_{i-1}}=0\). 5. Sample reflectance changes via (29). 6. Repeat steps 2 to 5 until the desired number of samples have been obtained. The statistics of the obtained samples can be used to estimate the characteristics of the daily losses.

## 3 Experiments & Results

The accuracy of the model described in Section 2 was assessed against experimental data collected at three locations in Australia. The parameters (\(hrz0\), \(\sigma_{dep}\), \(\tilde{\mu}\)) were fitted using the data, and the measured mirror reflectance was compared with the simulated values, including analysis of their uncertainties. Ten approximately week-long experimental campaigns conducted at three different sites were used to fit and test the model predictions. Four of the campaigns were conducted at the first site -- the roof of a building at the Queensland University of Technology (QUT), Gardens Point campus, located in Brisbane. Three campaigns were conducted at the second site, which was near the remote outback mining town of Mount Isa, Queensland, and another three campaigns were conducted at the third site near a factory in Wodonga, Victoria. The three environments provided a diverse range of conditions and weather characteristics for testing the predictive performance of the models. The experimental setups for the three sites and the data collection procedure are described in the remainder of the section. Subsequently, the fitting of the models is discussed and their predictive performance on novel data sets is examined.

### Experiment & model setup

The three experimental setups were deployed in different years at the three locations, and are clearly exposed to different conditions. However, their conceptual design is similar and they all share the same main components. The two main pieces of equipment deployed to perform the experiments are 1) the dust sampler and 2) the mirror rig. Two different types of dust sampler were used for the experimental campaigns. The dust sampler used in the experiments conducted at QUT and in Mount Isa is a Protinus 1000 purchased from Ecotech that measures Total Suspended Particles (TSP), and is equipped with additional weather sensors to measure wind speed and direction, relative humidity, and air temperature, among others. The dust sampler used in the experiments conducted in Wodonga is a QAMS Dust Master Pro purchased from Thomson Environmental Systems that simultaneously measures five different Particulate Matter (PM) fractions, and is equipped with additional weather sensors to measure wind speed and direction, relative humidity, air temperature, and rain intensity, among others. Both dust samplers use a light-scattering method to assess the mass of airborne dust, which may be complemented by a gravimetric analysis of collected samples.
The mirror rigs are composed of a few mirror samples tilted at different angles to simulate the changing position of heliostats tracking the sun in an actual solar field. Finally, a device is required to measure the reflectance of the mirrors, in both the clean and soiled states. In all experiments, a 15R-RGB reflectometer with configurable wavelengths and acceptance angles, purchased from Devices & Services Company (D&S), was used to collect measurements. The measurement repeatability reported in the technical sheet of the instrument is 0.2%. The incidence angle is fixed by the device at \(\phi=15^{\circ}\). Detailed information on the devices can be found in [21, 34]. The experimental setup at QUT is located on an intermediate level on the roof of a twelve-story building in the CBD of Brisbane, and it is comprised of the dust and weather sampler and a six-mirror test-rig with mirrors tilted at \(0^{\circ}\), \(15^{\circ}\), \(30^{\circ}\), \(45^{\circ}\), \(60^{\circ}\), and \(90^{\circ}\), represented in Fig. 1. The D&S reflectometer was set to the green wavelength (\(\lambda=560\) nm) and a (half) acceptance angle \(\phi_{a}=12.5\) mrad. Four separate measurement campaigns are available and the details of these campaigns can be found in [21].

Figure 1: Mirrors Setup at QUT

The experimental setup in the Australian outback is located a few kilometers away from Mount Isa, and it is surrounded by typical low bush vegetation and red soil. The dust sampler and mirror rig were approximately 20 m apart, and fences were constructed for protection from animals (e.g. kangaroos, emus, cows) that inhabit the area. The test-rig was made of a total of 18 mirrors -- four groups of four mirrors facing the major cardinal directions with tilts of \(5^{\circ}\), \(30^{\circ}\), \(60^{\circ}\), and \(85^{\circ}\). Two additional mirrors were added for control purposes: a horizontal (\(0^{\circ}\)) one in the North arm of the test-rig and a vertical (\(90^{\circ}\)) one facing East. The D&S reflectometer was set to the red wavelength (\(\lambda=650\) nm) and a (half) acceptance angle \(\phi_{a}=12.5\) mrad. Details of the dust sampler and the mirror test-rig are visible in Figs. 2(a) and 2(b). Three separate experimental campaigns were conducted where the reflectance was measured twice daily using an average of nine different positions on each sample mirror.

Figure 2: Experimental Setup in the Australian Outback - Details

The experimental setup in Wodonga, Victoria, is located in the parking lot of a factory on the outskirts of the town. It is comprised of a mirror test-rig, a dust sampler, and a weather station. The mirror rig is made of 5 mirrors deployed at different tilt angles: M1 - 0\({}^{\circ}\), M2 - 5\({}^{\circ}\), M3 - 30\({}^{\circ}\), M4 - 30\({}^{\circ}\), and M5 - 60\({}^{\circ}\), facing East (M1 to M3) and West (M4 and M5). The weather station measured temperature, relative humidity, ambient pressure, wind speed, wind direction, and precipitation intensity, among other parameters. The dust sampler measured five fractions of PM (Particulate Matter): PM1, PM2.5, PM4, PM10, and PM20. The D&S reflectometer was set to the red wavelength (\(\lambda=650\) nm) and a (half) acceptance angle of \(\phi_{a}=12.5\) mrad. Details of the dust sampler and the mirror test-rig are visible in Figs. 3(a) and 3(b).
Three separate experimental campaigns were conducted where the reflectance was measured twice a day, sampling nine different positions on each sample mirror and taking the average of those measurements.

Figure 3: Experimental Setup in Rural Australia - Details

For the semi-physical model, a number of parameters regarding the size and composition of airborne dust must be specified. The airborne dust distributions were assumed based on those from [21]: the Urban prototype was used for the QUT experiments and the Rural prototype was used for both Mount Isa and Wodonga. In all cases, the dust was assumed to be composed of homogeneous quartz spheres. The (half) acceptance angle of the reflectometer was 12.5 mrad and the incidence angle was 15\({}^{\circ}\). Other parameter values were set to those in Appendix B of [21]. Both the semi-physical and constant-mean models were implemented in Python and are now available in the HelioSoil GitHub repository9. The implementations rely mostly on NumPy [35] and SciPy [36], and the maximization of the log-likelihood is carried out via the minimization of the negative log-likelihood using scipy.optimize.minimize. Mie extinction coefficients are computed using miepython10. The data for all experimental campaigns, model inputs, and model parameter values can also be found in the mirror_soiling_data GitHub repository11. Footnote 9: [https://github.com/cholette/HelioSoil](https://github.com/cholette/HelioSoil) Footnote 10: [https://github.com/scottprahl/miepython](https://github.com/scottprahl/miepython) Footnote 11: [https://github.com/cholette/mirror_soiling_data](https://github.com/cholette/mirror_soiling_data)

### Fitting and prediction results

For each site, the parameters of the models were estimated using data from the horizontal mirrors. For the QUT experiments, the first two (out of four) campaigns were used for fitting, while the first campaign (out of three) was used for Mount Isa and Wodonga. For Wodonga, two additional fittings were performed using two and three campaigns to assess the impact of including more data.

Figure 4: QUT Experiments: results for the **semi-physical model**.

The results of the semi-physical model fitting can be seen in Table 1, and the model performance can be seen in Figs. 4, 5, and 6 for the QUT, Mount Isa, and Wodonga experiments, respectively. In these plots, each column of subplots shows a different experimental campaign, while each row represents a mirror with a different tilt. The black lines denote the predictions of the model from the first measurement onward, while the grey shaded areas indicate the two-standard-deviation prediction interval. The remaining lines denote the mean of nine reflectance measurements, with the error bars denoting the sample standard deviation of this mean. The yellow shading denotes time periods used for fitting the model parameters. The QUT experiments can be found in Fig. 4. A few observations may be made regarding the training experiments (the first two columns of subplots): 1) the model describes the effect of increasing tilt without the need for training on additional angles; 2) the width of the prediction intervals (grey shaded areas) increases with increasing airborne dust concentration.
This is most evident when the TSP increased toward the end of Experiment 0, where a corresponding increase in the growth rate of the prediction interval is observed.

\begin{table} \begin{tabular}{c|c c c} **Location** & \(\Delta t\) & \(\widehat{\textbf{hrz0}}\) & \(\hat{\boldsymbol{\sigma}}_{dep}\times 10^{4}\) \\ \hline QUT & 1 hr & 1.88 [1.51, 2.62] & 2.75 [1.66, 4.57] \\ Mount Isa & 5 min & 5.07 [2.93, 11.6] & 1.61 [0.432, 5.99] \\ Wodonga (one exp.) & 5 min & 2.72 [2.40, 3.14] & \(\sim 0\) [0.00, \(\infty\)] \\ Wodonga (two exp.) & 5 min & 3.33 [2.19, 6.33] & 1.20 [0.71, 2.02] \\ Wodonga (three exp.) & 5 min & 5.68 [2.99, 15.7] & 1.41 [0.992, 2.00] \\ \end{tabular} \end{table} Table 1: Maximum likelihood estimates and 95% confidence interval for the **semi-physical model**.

Figure 5: Mount Isa Experiments: results for the **semi-physical model**.

The validation experiments (the final two columns of subplots in Fig. 4) demonstrate good agreement between the model and experimental measurements. The semi-physical model results for the Mount Isa experiments can be seen in Fig. 5. In this experiment, there were a number of mirrors with different orientations to evaluate the influence of wind direction, hence the additional experimental curves present (particularly at 5-85 degrees of tilt). Note that for the visual comparison of the reflectance degradation trends, the initial reflectances (which differed by about 1 percentage point) were all shifted upward so that they have a common starting point (1.0). Interestingly, all orientations exhibit similar losses for the same tilts (rows in Fig. 5), with the West and East orientations perhaps showing higher losses at the end of the test. However, this difference in losses is small and well within the measurement mean error bars. In the fitting campaign, the effect of the tilt angle is modelled well once again. Moreover, the third campaign shows excellent agreement with the model predictions using the parameter fit from the first campaign; however, the losses are quite low. The results for the second campaign are mixed. A high dust event was observed around Day 2 (TSP near \(300\,\frac{\mu g}{m^{3}}\)) and the losses are clearly over-predicted by the model, even if the model does communicate a lack of confidence in the predictions through a significant expansion of the prediction interval. Both before and after this event, the loss rate is still predicted quite well by the model, though this loss rate is quite low again. Interestingly, losses were observed on the vertical (\(90^{\circ}\)) and near-vertical (\(85^{\circ}\)) mirrors, which indicates the presence of a horizontal deposition mode that is neglected in both models. This may be the culprit for the trend from overestimation of losses for flat mirrors toward underestimation of losses for the vertical mirror during the high-dust event. Finally, the losses do not seem to show significant orientation dependence.

Figure 6: Wodonga Experiments: results for the **semi-physical model**.

The results for the Wodonga experiments can be seen in Figs. 6 and 7. Initial reflectances were again all shifted up so that they have a common starting point (1.0) for visual comparison. The semi-physical model appears to capture the tilt angle effects in the first campaign, but testing on the other two campaigns yields poor performance. Moreover, the estimated \(\sigma_{dep}\) is very small, which is the reason that no confidence interval is visible in the plots (it is too small to be seen).
However, the confidence interval for this parameter is trivially large due to a poorly conditioned Hessian at the maximum, clearly indicating that one should be suspicious of this estimate. Adding the second experiment to the training set improves the confidence interval on \(\sigma_{dep}\), as can be seen in Table 1. However, there are two indications here that the model is missing something: 1) the \(hrz0\) value for the two-experiment fit is outside the confidence interval of the one-experiment fit, and 2) the fitting and prediction results seen in Fig. 7 show that the fit is still poor on the second experiment, despite the fact that it was used in the parameter estimation. When all three experiments are used (Table 1), \(hrz0\) increases, albeit inside the two-experiment confidence interval this time, but the confidence interval expands to the right. Taken together, the three Wodonga fittings suggest that some phenomena driving the differences in soiling rates between the experiments are not captured by the semi-physical model. The fitting results for the constant-mean models can be seen in Table 2. The results of the simplified (constant-mean) model on the QUT experiments are shown in Fig. 8, which shows performance on both the training data and validation experiments strikingly similar to that of the semi-physical model. The one interesting area of difference is the performance on the final day of the first experiment -- the constant-mean model slightly over-predicts the losses while the semi-physical model does not. This is likely due to the simplified model ignoring the wind speed, which increases during this period. On the other hand, the semi-physical model fits nicely to this behaviour. The results for the simplified model on the Mount Isa experiments can be seen in Fig. 9. The performance of this model on the first and third campaigns is similar to the semi-physical model. However, the simplified model performs significantly better on the second campaign -- it captures the average behaviour during the high-dust event quite well. There is a hint of some bias in this prediction starting at the 60\({}^{\circ}\) tilt, perhaps owing to the aforementioned apparent "horizontal" deposition mode during the high-dust event.

\begin{table} \begin{tabular}{c|c c c} **Location** & \(\Delta t\) & \(\hat{\tilde{\mu}}\times 10^{4}\) & \(\hat{\mathbf{\sigma}}_{dep}\times 10^{4}\) \\ \hline QUT & 1 hr & 0.967 [0.567, 1.65] & 2.54 [1.49, 4.34] \\ Mount Isa & 5 min & 0.304 [0.179, 0.516] & 2.10 [0.890, 4.98] \\ Wodonga (one exp.) & 5 min & 0.240 [0.197, 0.292] & 0.382 [0.119, 1.22] \\ Wodonga (two exp.) & 5 min & 0.260 [0.217, 0.312] & 0.608 [0.283, 1.31] \\ Wodonga (three exp.) & 5 min & 0.249 [0.211, 0.294] & 0.842 [0.545, 1.30] \\ \end{tabular} \end{table} Table 2: Maximum likelihood estimates and 95% confidence interval for the **constant-mean model**.

Figure 7: Wodonga Experiments: results for the **semi-physical model**, this time trained on the horizontal mirrors from the first two experiments.

Figure 8: QUT Experiments: results for the simplified **constant-mean** model.
As can be seen in Table 2, the confidence interval on \(\sigma_{dep}\) is quite wide for the one-experiment fit, and adding experiments tends to increase the value (though still within the confidence interval of the one-experiment fitting).

### Statistical distributions of losses at the three sites

Using the method described in Section 2.5, the statistical distribution of losses for the scenarios was computed for a flat mirror (\(\theta_{k}=0\ \forall k\)) at the three sites. Since the airborne dust data is limited during the experimental campaigns, the sampling of daily loadings in Step 1 is omitted and four dust loading scenarios were considered in its place: low, medium, high, and maximum dust loading scenarios, which are defined as the 5, 50, 95, and 100 %-ile of the daily sums of (31) in the data collected during all of the campaigns. The results are shown in Fig. 11. The histograms are the result of the Monte Carlo simulations (and therefore include parameter uncertainty), while the solid curves are the analytical distributions using the MLE estimates of the parameters. The losses are summarized in Table 3 as well.

Figure 9: Mount Isa Experiments: results for the simplified **constant-mean** model.

Figure 10: Wodonga Experiments: results for the **constant-mean** model.

A number of observations can be made from these results. Firstly, typical daily losses during the measurement campaigns (quantified by the medium scenarios) show losses of around 0.5 percentage points per day (pp/day) for both QUT and Wodonga, but are predicted to vary between 0 and 1 pp/day
for the same dust loading. For Mount Isa, the typical losses for the observed dust loading are quite low, around 0.1 pp/day, and the variance of this distribution is quite low as well. Secondly, the distributions show some probability of a negative loss, which is a result of the normal distribution for the modelling errors. This choice, however, was made to reflect the real possibility of an increase in reflectance due to removal modes that are not included in the physical model (e.g. wind removal). Thirdly, the variance of the losses increases considerably with dust loading, which is a consequence of modelling uncertainty in the deposition velocity. This is most evident in the "maximum" scenario in the QUT and Mount Isa data: the very high dust loadings lead to large increases in both the mean and variance of the losses. The highest Mount Isa scenario is plotted separately in Fig. 11d to better show this distribution. Note that this dust loading corresponds to the high dust event observed during the second campaign and is responsible for the large increase in the prediction uncertainty observed during this time (see Fig. 9). Finally, the relative impact of the three uncertainty sources can be assessed. The differences in the scenario distributions clearly indicate that random fluctuations in the airborne dust have a large impact. Within a scenario, the histograms have slightly higher variance than the curves due to the parameter uncertainty. However, the difference is small, indicating that the within-scenario variance is dominated by uncertainty in the deposition velocity (i.e. \(\sigma^{2}_{\ell,k_{i},k_{i-1}}\)), a dominance that will increase if more data is used and the parameter confidence intervals are decreased.

## 4 Discussion & Conclusions

Concentrating Solar Power (CSP) systems face challenges due to soiling losses, which must be accurately assessed at current and future plant sites. This study extends a previously developed physical soiling model by including 1) an uncertainty model for the deposition model, 2) a loss model based on Mie scattering, and 3) Maximum Likelihood Estimation techniques to estimate model parameters from reflectance loss measurements and weather parameters.
Two models were developed, one simplified and one physical, both of which were trained and tested on data sets obtained through experimental campaigns in different environments in Australia (Brisbane, Mount Isa, Wodonga). The results demonstrate the usefulness of the statistical perspective, with parameter estimates and their confidence intervals obtained via MLE methods, and prediction confidence intervals that respond to changes in airborne dust. The simplified model achieved better generalization to other experimental campaigns and performs well on all three data sets. The physical model shows promise in fitting the deposition rate dependence on wind speed in the QUT experiments but performs significantly worse than the simplified model on the Mount Isa and Wodonga experiments. The results suggest that the simplified model can be used to obtain reasonable estimates of soiling rates by measuring airborne dust, undertaking a short reflectance monitoring campaign, fitting the simplified prediction model to the data, and using longer-term airborne dust measurements and the fitted model to predict the distribution of losses that would have been observed. The provided confidence intervals may be used to assess if more/longer experiments are needed for a particular site. Moreover, the results suggest that airborne dust measurements are important tools for understanding and predicting deposition rates, which has also been noted by other researchers [15].

There are some important limitations of this work. Firstly, there is some risk in ignoring the wind effect, which may be significant under different conditions at different sites. Indeed, the results presented clearly show that a model that works well under some conditions (e.g. the QUT data set) may work quite poorly under different conditions (e.g. Wodonga). Yet, the physical model in its current form clearly does not model wind effects accurately. This is probably due in part to the simple model of the atmospheric boundary layer (ABL) and other assumptions on the composition, shape, and size distribution of the airborne dust. Secondly, the incidence angle model in (5) is an approximation and there is evidence that a different dependence on the incidence angle may occur for naturally soiled mirrors [24]. Regarding the statistical distributions in Section 3.3, the use of the scenarios should be replaced by sampling from long-term dust data, when available. This will complete the picture of the statistical distribution of airborne dust loadings and, in turn, the statistical distribution of daily losses.

Future work will focus on developing a better understanding of the ABL and the nature of airborne dust using experimental measurements to identify and address the deficiencies of the physical model. Moreover, experiments to assess the quality of the reflectance loss model will be pursued, and will be used to assess the need for a more sophisticated model. Finally, more experimental campaigns will be carried out across Australia and North America to expand the available data, and a model benchmarking study will be carried out between the physical, simplified, and statistical approaches to identify their strengths and weaknesses.

## 5 Acknowledgements

G. Picotti, M.E. Cholette, C.B. Anderson, and T.A. Steinberg acknowledge the support of the Australian Government for this study, through the Australian Renewable Energy Agency (ARENA) within the framework of the Australian Solar Thermal Research Institute (ASTRI Project ID P54).
The same authors also acknowledge the support of the U.S. Department of Energy's Solar Energy Technologies Office via the Soiling Subtask of the Heliostat Consortium (HelioCon). The authors would also like to acknowledge the support of Kurt Drewes and Bruce Leslie of Vast Solar, and of Paul Matuschka from Mars Petcare.
2302.13433
On a Subset Metric
For a bounded metric space X, we define a metric on the set of all finite subsets of X. This generalizes the sequence-subset distance introduced by Wentu Song, Kui Cai and Kees A. Schouhamer Immink to study error correcting codes for DNA based data storage. This work also complements the work of Eiter and Mannila where they study extensions of distance functions to subsets of a space in the context of various applications.
Richard Castro, Zhibin Chang, Ethan Ha, Evan Hall, Hiren Maharaj
2023-02-26T22:59:50Z
http://arxiv.org/abs/2302.13433v1
# On a subset metric

###### Abstract.

For a bounded metric space \(X\), we define a metric on the set of all finite subsets of \(X\). This generalizes the sequence-subset distance introduced by Wentu Song, Kui Cai and Kees A. Schouhamer Immink [7] to study error correcting codes for DNA based data storage. This work also complements the work of Eiter and Mannila [3] where they study extensions of distance functions to subsets of a space in the context of various applications.

## 1. Introduction

To design error correcting codes for DNA storage channels, a new metric, called the sequence-subset distance, was introduced in [7]. This metric generalizes the Hamming distance to a distance function defined between any two sets of unordered vectors. The definition is as follows. Let \(\mathbb{A}\) be a fixed finite alphabet and \(L\geq 1\) an integer. For any \(x_{1},x_{2}\in\mathbb{A}^{L}\), the Hamming distance \(d_{H}(x_{1},x_{2})\) between \(x_{1}\) and \(x_{2}\) is the number of coordinates in which \(x_{1}\) and \(x_{2}\) differ. For two subsets \(X_{1},X_{2}\subset\mathbb{A}^{L}\), with \(|X_{1}|\leq|X_{2}|\), and any injection \(\chi:X_{1}\to X_{2}\), the \(\chi\)-distance between \(X_{1}\) and \(X_{2}\) is defined to be \[d_{\chi}(X_{1},X_{2})=\sum_{x\in X_{1}}d_{H}(x,\chi(x))+L(|X_{2}|-|X_{1}|). \tag{1}\] The sequence-subset distance between \(X_{1}\) and \(X_{2}\) is defined to be \[d_{S}(X_{1},X_{2})=d_{S}(X_{2},X_{1})=\min\{d_{\chi}(X_{1},X_{2})|\chi:X_{1}\to X_{2}\text{ is an injection}\}.\] In [7] it is shown that \(d_{S}\) is in fact a metric on the set of subsets of \(\mathbb{A}^{L}\). In this note we generalize the sequence-subset distance as follows. Let \(X\) be a bounded metric space with metric \(d\), and let \(M:X\to\mathbb{R}\) be a function such that \[d(x,y)\leq M(x)\leq d(x,z)+M(z) \tag{2}\] for all \(x,y,z\in X\). Put \(Y:=\mathcal{F}(X)\), the set of all finite subsets of \(X\). For \(A,B\in Y\), with \(|A|\leq|B|\), and any injection \(\chi:A\to B\), the \(\chi\)-distance between \(A\) and \(B\) is defined to be \[d_{\chi}(A,B):=\sum_{x\in A}d(x,\chi(x))+\sum_{y\in B\setminus\chi(A)}M(y).\] Now the distance between \(A\) and \(B\) is defined to be \[d_{S}(A,B)=d_{S}(B,A):=\min\{d_{\chi}(A,B)|\ \chi:A\to B\text{ is an injection}\}. \tag{3}\] We show in Section 2 that \(d_{S}\) is indeed a metric on \(\mathcal{F}(X)\). We will refer to this distance function simply as a subset metric. There is some flexibility in the choice of the function \(M\). Since \(X\) is a bounded metric space, we can select the function \(M\) to have constant value \(D:=\sup\{d(x,y):x,y\in X\}\). In the case of the Hamming metric \(d=d_{H}\) on \(X=\mathbb{A}^{L}\), this is tantamount to choosing \(M(y)\) to be the constant \(L\) for all \(y\in X\), and the sequence-subset metric of [7] is recovered. In fact \(M\) could be any constant-valued function whose value is an upper bound for the metric \(d\) on \(X\). Alternatively, one could define \(M\) as follows: for each \(x\in X\), let \[M(x)=\sup\{d(x,y):y\in X\}. \tag{4}\] Condition (2) is satisfied: for all \(y\in X\), \(d(x,y)\leq d(x,z)+d(z,y)\leq d(x,z)+M(z)\), whence \(M(x)\leq d(x,z)+M(z)\). As for the sequence-subset distance of [7], the subset distance between \(A\) and \(B\) can be computed from a minimum weight perfect matching of the bipartite graph whose partite sets are \(A\) and \(B\); the edge joining \(a\in A\) with \(b\in B\) is assigned weight \(d(a,b)\). The Kuhn-Munkres algorithm does this in time \(O(|B|^{3})\) [4].
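For concreteness, here is a minimal computational sketch (our illustration, not code from [7] or [4]): the minimization over injections in (3) reduces to a square assignment problem by padding the cost matrix with \(|B|-|A|\) dummy rows whose entries are the weights \(M(b)\), so an element of \(B\) assigned to a dummy row contributes exactly its \(M\)-weight, as in the definition of \(d_{\chi}\).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def subset_distance(A, B, d, M):
    """d_S(A, B) from (3): minimize, over injections chi: A -> B, the sum of
    d(a, chi(a)) over matched pairs plus M(b) over unmatched b in B."""
    if len(A) > len(B):
        A, B = B, A
    n, m = len(A), len(B)
    cost = np.empty((m, m))
    for j, b in enumerate(B):
        cost[:n, j] = [d(a, b) for a in A]  # matching a with b costs d(a, b)
        cost[n:, j] = M(b)                  # leaving b unmatched costs M(b)
    rows, cols = linear_sum_assignment(cost)  # solves the assignment problem optimally
    return cost[rows, cols].sum()

# The Hamming distance with M(y) = L recovers the sequence-subset distance of [7]:
d_H = lambda x, y: sum(a != b for a, b in zip(x, y))
M = lambda y: len(y)
print(subset_distance(["ACGT", "AAAA"], ["ACGA", "AAAA", "TTTT"], d_H, M))  # 1 + 0 + 4 = 5
```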
The generalized metric could potentially have more applications. For example, take \(X\) to be the vertex set of a finite connected graph and \(d(x,y)\) the length of the shortest path between \(x\) and \(y\). Then \(d_{S}\) is a metric on the power set \(2^{X}\) and provides a measure of distance between collections of vertices. Another example is image recognition. In this case take \(X\) to be a bounded subset of the standard Euclidean plane (for example, corresponding to a raster of pixels). For simplicity we take the unit square \(X=[0,1]\times[0,1]\) as an example, and \(d(p,q)=||p-q||\) is the standard Euclidean distance. Each finite subset of \(X\) would correspond to an image. Using (4) to define the function \(M(p)\), we have \(M(p):=\max\{||p-c_{1}||,||p-c_{2}||,||p-c_{3}||,||p-c_{4}||\}\) where \(c_{1},c_{2},c_{3},c_{4}\) are the four corners of \(X\). Alternatively, \(M\) could be replaced by the constant function whose value is \(D=\sqrt{2}\).

Distance functions between subsets of a metric space, and also of measure spaces, have been widely studied; see [1] for a survey of such distances, and see also [2]. One of the most widely used subset metrics is the Hausdorff metric [1]. This metric has many variations, but we state one version for comparison. Let \(X\) be a bounded metric space with metric \(d\). For non-empty compact subsets \(A,B\) of \(X\), define \[h(A,B):=\max\{\max_{a\in A}d(a,B),\max_{b\in B}d(b,A)\}\] where \(d(a,B):=\min_{b\in B}d(a,b)\) and \(d(b,A)\) is defined likewise. The function \(h\) gives a metric on the set of all compact subsets of \(X\) that generalizes \(d\): \(h(\{a\},\{b\})=d(a,b)\) for all \(a,b\in X\). If \(X\) is finite, the Hausdorff metric is computable in polynomial time, and it does have theoretical benefits; for example, the space of compact subsets is complete if \(X\) is complete with respect to \(d\). However, as pointed out in [3], it may not be appropriate for some applications since the metric does not take into account the entire configuration of some finite sets. On the other hand, the sequence-subset metric formulated in [7] for the purpose of comparing DNA sequences provides a finer comparison between two collections of sequences and is thus a more appropriate distance measure in that situation. Each term involving \(L\) on the right side of (1) expresses a natural worst-case weight for a DNA strand that is too far away from the other set. While the authors of this work were primarily motivated by generalizing the work of [7], this work also complements that of [3] where they study extensions of distance measures to subsets more generally. For comparison, we briefly recall some of the main results from [3]. A distance function \(\Delta\) on a non-empty set \(B\) is one that satisfies all of the axioms to be a metric, except possibly the triangle inequality. In [3], the authors consider the problem of extending a distance function to the set of non-empty finite subsets of \(B\). They also discuss algorithms for computing such extensions. To measure a distance between two non-empty subsets \(S_{1},S_{2}\) of \(B\), they discuss four distance functions: the sum of minimum distances [5] \[d_{md}(S_{1},S_{2}):=\frac{1}{2}\left(\sum_{e\in S_{1}}\Delta(e,S_{2})+\sum_{e\in S_{2}}\Delta(e,S_{1})\right),\] the surjective distance \[d_{s}(S_{1},S_{2}):=\min_{\eta}\sum_{(e_{1},e_{2})\in\eta}\Delta(e_{1},e_{2})\] where the minimum is over all surjections \(\eta\) from the larger set to the smaller set (due to G.
Oddie in [6]), the fair surjection distance \[d_{fs}(S_{1},S_{2}):=\min_{\eta}\sum_{(e_{1},e_{2})\in\eta}\Delta(e_{1},e_{2})\] where the minimum is over all _fair_ surjections \(\eta\) from the larger set to the smaller set (a surjection \(\eta:S_{1}\to S_{2}\) is called fair if \(||\eta^{-1}(x)|-|\eta^{-1}(y)||\leq 1\) for all \(x,y\in S_{2}\); this is also due to G. Oddie in [6]), and they introduce the link distance \[d_{l}(S_{1},S_{2}):=\min_{R}\sum_{(e_{1},e_{2})\in R}\Delta(e_{1},e_{2})\] where the minimum is over all linking relations \(R\) between \(S_{1}\) and \(S_{2}\) (a subset \(R\subset S_{1}\times S_{2}\) is called a linking relation if for all \(e_{1}\in S_{1}\) there exists \(e_{2}\in S_{2}\) such that \((e_{1},e_{2})\in R\), and also if for all \(e_{2}\in S_{2}\) there exists \(e_{1}\in S_{1}\) such that \((e_{1},e_{2})\in R\)). While they show that these distance functions fail to be a metric in the case that \(B\) is a finite subset of the integral plane and \(\Delta\) is the Manhattan metric, Eiter and Mannila present an elegant construction, called the metric infimum method, that produces a metric \(\Delta^{\omega}\) from a given distance function \(\Delta\). Interestingly, they demonstrate that \(d_{s}^{\omega}=d_{fs}^{\omega}=d_{l}^{\omega}\). The authors in [3] argue that the link metric is very intuitive in some contexts. It would be interesting to also study this metric in the context of error correcting codes for DNA data storage. The rest of the paper is devoted to proving that (3) is indeed a metric.

## 2. Proofs

Throughout this section \(X\) is a bounded metric space with metric \(d\), the function \(M:X\to\mathbb{R}\) is one that satisfies condition (2), \(d_{S}\) is the function defined by (3) and \(\mathcal{F}(X)\) is the set of all finite subsets of \(X\). In this section we prove that the function \(d_{S}\) is a metric on \(\mathcal{F}(X)\). While the main steps followed here are inspired by [7], there are differences to account for the presence of the function \(M\) in the definition of \(d_{S}\).

**Lemma 1**.: _For any \(X_{1},X_{2}\in\mathcal{F}(X)\) such that \(|X_{1}|\leq|X_{2}|\), there exists an injection \(\chi_{0}:X_{1}\to X_{2}\) such that \(d_{S}(X_{1},X_{2})=d_{\chi_{0}}(X_{1},X_{2})\) and \(\chi_{0}(x)=x\) for all \(x\in X_{1}\cap X_{2}\)._

Proof.: If \(X_{1}\cap X_{2}=\emptyset\), then the statement is vacuously true. Suppose that \(X_{1}\cap X_{2}\neq\emptyset\). Choose \(\chi:X_{1}\to X_{2}\) such that \(d_{S}(X_{1},X_{2})=d_{\chi}(X_{1},X_{2})\). The proof will be in two parts. First we show that, if necessary, \(\chi\) can be redefined on \(X_{1}\cap X_{2}\) so that \(d_{S}(X_{1},X_{2})=d_{\chi}(X_{1},X_{2})\) and \(X_{1}\cap X_{2}\) is contained in the image of \(\chi\). Next we will show that \(\chi\) can be further adjusted to have the desired properties. Suppose that some \(x_{0}\in X_{1}\cap X_{2}\) does not belong to the image of \(\chi\). Then we redefine \(\chi\) at \(x_{0}\) to form a new embedding \(\nu:X_{1}\to X_{2}\) by setting \[\nu(x)=\left\{\begin{array}{ll}\chi(x)&\mbox{ if }x\neq x_{0}\\ x_{0}&\mbox{ if }x=x_{0}.\end{array}\right.\] By definition \(d_{S}(X_{1},X_{2})\leq d_{\nu}(X_{1},X_{2})\). Note that \(\nu(X_{1})=(\chi(X_{1})\setminus\{\chi(x_{0})\})\cup\{x_{0}\}\) and \[\sum_{x\in X_{1}}d(x,\nu(x))=\sum_{x\in X_{1}}d(x,\chi(x))-d(x_{0},\chi(x_{0})). \tag{5}\]
Since \(x_{0}\in X_{2}\setminus\chi(X_{1})\), \(\chi(x_{0})\not\in X_{2}\setminus\chi(X_{1})\) and \(\chi(x_{0})\in X_{2}\setminus\nu(X_{1})\), it follows that \[\sum_{y\in X_{2}\setminus\nu(X_{1})}M(y)=\sum_{y\in X_{2}\setminus\chi(X_{1})}M(y)-M(x_{0})+M(\chi(x_{0})). \tag{6}\] Combining (5) and (6), we get that \[d_{\nu}(X_{1},X_{2})=d_{\chi}(X_{1},X_{2})+M(\chi(x_{0}))-M(x_{0})-d(x_{0},\chi(x_{0})).\] From condition (2), it follows that \(d_{\nu}(X_{1},X_{2})\leq d_{\chi}(X_{1},X_{2})=d_{S}(X_{1},X_{2})\). Thus \(d_{S}(X_{1},X_{2})=d_{\nu}(X_{1},X_{2})\) and \(\nu(x_{0})=x_{0}\). By repeatedly applying the above procedure we will obtain an embedding of \(X_{1}\) into \(X_{2}\), which we also call \(\chi\), with the property that \(X_{1}\cap X_{2}\subseteq Im(\chi)\). Let \(x_{1}\in X_{1}\cap X_{2}\). Next we show that if \(\chi(x_{1})\neq x_{1}\) then we can adjust the embedding \(\chi\) to form a new embedding \(\mu:X_{1}\to X_{2}\) such that \(\mu(x_{1})=x_{1}\) and still \(d_{S}(X_{1},X_{2})=d_{\mu}(X_{1},X_{2})\). From above we know that there exists \(z\in X_{1}\) such that \(\chi(z)=x_{1}\). Put \(y=\chi(x_{1})\) and define \[\mu(x)=\left\{\begin{array}{ll}\chi(x)&\mbox{ if }x\neq x_{1},z\\ x_{1}&\mbox{ if }x=x_{1}\\ y&\mbox{ if }x=z.\end{array}\right.\] Then \(\mu:X_{1}\to X_{2}\) is an injection and, by the definition of the subset distance, \(d_{S}(X_{1},X_{2})\leq d_{\mu}(X_{1},X_{2})\). Also we have that \[d_{\chi}(X_{1},X_{2}) = d(x_{1},y)+d(z,x_{1})+(d_{\mu}(X_{1},X_{2})-d(x_{1},x_{1})-d(z,y))\] \[= d_{\mu}(X_{1},X_{2})+d(x_{1},y)+d(z,x_{1})-d(z,y)\] \[\geq d_{\mu}(X_{1},X_{2})\] where the last inequality follows from the triangle inequality. Thus \(d_{S}(X_{1},X_{2})\geq d_{\mu}(X_{1},X_{2})\), and we see that \(d_{S}(X_{1},X_{2})=d_{\chi}(X_{1},X_{2})=d_{\mu}(X_{1},X_{2})\) and \(\mu(x_{1})=x_{1}\). By repeated application of the above procedure, we obtain an embedding with the desired property.

**Corollary 1**.: _For any \(X_{1},X_{2}\in\mathcal{F}(X)\),_ \[d_{S}(X_{1},X_{2})=d_{S}(X_{1}\setminus X_{2},X_{2}\setminus X_{1}).\]

Proof.: This is a direct consequence of Lemma 1 and the definition of \(d_{\chi}(\cdot,\cdot)\).

**Lemma 2**.: _Suppose that \(X_{1},X_{2}\in\mathcal{F}(X)\) with \(|X_{1}|\leq|X_{2}|\). Then for any \(b\in X\), \(d_{S}(X_{1},X_{2})\leq d_{S}(X_{1},X_{2}\cup\{b\})\)._

Proof.: Let \(\chi:X_{1}\to X_{2}\cup\{b\}\) be such that \(d_{S}(X_{1},X_{2}\cup\{b\})=d_{\chi}(X_{1},X_{2}\cup\{b\})\). If \(\chi(X_{1})\subseteq X_{2}\), then \(d_{\chi}(X_{1},X_{2}\cup\{b\})=d_{\chi}(X_{1},X_{2})+M(b)\geq d_{S}(X_{1},X_{2})+M(b)\geq d_{S}(X_{1},X_{2})\). If \(\chi(X_{1})\not\subseteq X_{2}\), then \(\chi(a)=b\) for some \(a\in X_{1}\); since \(|\chi(X_{1})\cap X_{2}|=|X_{1}|-1<|X_{2}|\), the set \(X_{2}\setminus\chi(X_{1})\) is non-empty.
Fix \(c\in X_{2}\setminus\chi(X_{1})\) and define \(\eta:X_{1}\to X_{2}\) by \[\eta(x)=\left\{\begin{array}{ll}\chi(x)&\text{if }x\neq a\\ c&\text{if }x=a.\end{array}\right.\] Then \(\eta(X_{1})=(\chi(X_{1})\setminus\{b\})\cup\{c\}\), so \(X_{2}\cup\{b\}\setminus\chi(X_{1})\) is the disjoint union \((X_{2}\setminus\eta(X_{1}))\cup\{c\}\) and \[d_{S}(X_{1},X_{2}\cup\{b\})\] \[= d_{\chi}(X_{1},X_{2}\cup\{b\})\] \[= \sum_{x\in X_{1}}d(x,\chi(x))+\sum_{y\in X_{2}\cup\{b\}\setminus\chi(X_{1})}M(y)\] \[= d(a,b)+\sum_{x\in X_{1}}d(x,\eta(x))-d(a,c)+\sum_{y\in X_{2}\setminus\eta(X_{1})}M(y)+M(c)\] \[= d(a,b)+M(c)-d(a,c)+\sum_{x\in X_{1}}d(x,\eta(x))+\sum_{y\in X_{2}\setminus\eta(X_{1})}M(y)\] \[= d(a,b)+M(c)-d(a,c)+d_{\eta}(X_{1},X_{2})\] \[\geq d_{\eta}(X_{1},X_{2})\geq d_{S}(X_{1},X_{2})\] since \(d(a,c)\leq M(c)\) by condition (2).

By repeated application of the above result, we obtain the following corollary.

**Corollary 2**.: _Let \(X_{1},X_{2}\in\mathcal{F}(X)\) be such that \(|X_{1}|\leq|X_{2}|\), and suppose that \(X_{2}^{\prime}\subseteq X_{2}\) with \(|X_{1}|\leq|X_{2}^{\prime}|\). Then_ \[d_{S}(X_{1},X_{2}^{\prime})\leq d_{S}(X_{1},X_{2}).\]

**Theorem 1**.: \(d_{S}(\cdot,\cdot)\) _is a metric on \(\mathcal{F}(X)\)._

Proof.: For two finite sets \(A\) and \(B\) we denote by \(\mathscr{X}(A,B)\) the set of injections \(\chi:A\to B\). Let \(X_{1},X_{2}\in\mathcal{F}(X)\). By definition of \(d_{S}(\cdot,\cdot)\) we have that \(d_{S}(X_{1},X_{2})=d_{S}(X_{2},X_{1})\geq 0\). We show that \(d_{S}(X_{1},X_{2})=0\) iff \(X_{1}=X_{2}\). We may assume that \(|X_{1}|\leq|X_{2}|\), and let \(\nu\in\mathscr{X}(X_{1},X_{2})\) be such that \(d_{S}(X_{1},X_{2})=d_{\nu}(X_{1},X_{2})\). Then \(d_{S}(X_{1},X_{2})=d_{\nu}(X_{1},X_{2})=0\) iff \(\sum_{x\in X_{1}}d(x,\nu(x))+\sum_{y\in X_{2}\setminus\nu(X_{1})}M(y)=0\) iff \(d(x,\nu(x))=0\) for all \(x\in X_{1}\) and \(X_{2}=\nu(X_{1})\) iff \(x=\nu(x)\) for all \(x\in X_{1}\) and \(|X_{2}|=|X_{1}|\) iff \(X_{1}=X_{2}\). Thus, we need only to show that \(d_{S}(\cdot,\cdot)\) satisfies the Triangle Inequality. Let \(X_{1},X_{2},X_{3}\in\mathcal{F}(X)\). We will show that \(d_{S}(X_{1},X_{2})\leq d_{S}(X_{1},X_{3})+d_{S}(X_{3},X_{2})\) by considering various cases. Note that we are still assuming that \(|X_{1}|\leq|X_{2}|\), and that \(\nu\in\mathscr{X}(X_{1},X_{2})\) is such that \(d_{S}(X_{1},X_{2})=d_{\nu}(X_{1},X_{2})\).

**Case 1:** Suppose that \(|X_{1}|\leq|X_{3}|\leq|X_{2}|\). Let \(\mu\in\mathscr{X}(X_{3},X_{2})\) and \(\eta\in\mathscr{X}(X_{1},X_{3})\) be such that \(d_{S}(X_{3},X_{2})=d_{\mu}(X_{3},X_{2})\) and \(d_{S}(X_{1},X_{3})=d_{\eta}(X_{1},X_{3})\). We may assume that \[X_{1}= \{x_{1},\ldots,x_{n}\}\] \[X_{3}= \{y_{1},\ldots,y_{n},y_{n+1},\ldots,y_{n+s}\}\] \[X_{2}= \{z_{1},\ldots,z_{n},z_{n+1},\ldots,z_{n+s},\ldots,z_{n+s+t}\}\] where \(s,t\geq 0\) and \(\mu(y_{i})=z_{i}\) for \(1\leq i\leq n+s\) and \(\eta(x_{i})=y_{i}\) for \(1\leq i\leq n\). Then \[d_{S}(X_{1},X_{3})= \sum_{i=1}^{n}d(x_{i},y_{i})+\sum_{i=n+1}^{n+s}M(y_{i})\text{ and}\] \[d_{S}(X_{2},X_{3})= \sum_{i=1}^{n+s}d(y_{i},z_{i})+\sum_{i=n+s+1}^{n+s+t}M(z_{i}).\] Let \(\chi=\mu\circ\eta\in\mathscr{X}(X_{1},X_{2})\).
Then \[d_{S}(X_{1},X_{2})\] \[\leq d_{\chi}(X_{1},X_{2})\] \[= \sum_{i=1}^{n}d(x_{i},z_{i})+\sum_{i=n+1}^{n+s+t}M(z_{i})\] \[\leq \sum_{i=1}^{n}\left[d(x_{i},y_{i})+d(y_{i},z_{i})\right]+\sum_{i=n+1}^{n+s+t}M(z_{i})\] \[= \sum_{i=1}^{n}d(x_{i},y_{i})+\sum_{i=1}^{n}d(y_{i},z_{i})+\sum_{i=n+1}^{n+s+t}M(z_{i})\] \[= \left(d_{S}(X_{1},X_{3})-\sum_{i=n+1}^{n+s}M(y_{i})\right)+\] \[\left(d_{S}(X_{3},X_{2})-\sum_{i=n+1}^{n+s}d(y_{i},z_{i})-\sum_{i=n+s+1}^{n+s+t}M(z_{i})\right)+\sum_{i=n+1}^{n+s+t}M(z_{i})\] \[= d_{S}(X_{1},X_{3})+d_{S}(X_{2},X_{3})-\sum_{i=n+1}^{n+s}M(y_{i})-\sum_{i=n+1}^{n+s}d(y_{i},z_{i})+\sum_{i=n+1}^{n+s}M(z_{i})\] \[= d_{S}(X_{1},X_{3})+d_{S}(X_{2},X_{3})+\sum_{i=n+1}^{n+s}\left(M(z_{i})-M(y_{i})-d(y_{i},z_{i})\right)\] \[\leq d_{S}(X_{1},X_{3})+d_{S}(X_{3},X_{2}),\] where the last inequality follows from condition (2).

**Case 2:** Suppose \(|X_{3}|\leq|X_{1}|\leq|X_{2}|\). Let \(\mu\in\mathscr{X}(X_{3},X_{2})\) and \(\eta\in\mathscr{X}(X_{3},X_{1})\) be such that \(d_{S}(X_{3},X_{2})=d_{\mu}(X_{3},X_{2})\) and \(d_{S}(X_{3},X_{1})=d_{\eta}(X_{3},X_{1})\). We may assume that \[X_{3} = \{x_{1},\ldots,x_{n}\}\] \[X_{1} = \{y_{1},\ldots,y_{n},y_{n+1},\ldots,y_{n+s}\}\] \[X_{2} = \{z_{1},\ldots,z_{n},z_{n+1},\ldots,z_{n+s},\ldots,z_{n+s+t}\}\] where \(s,t\geq 0\) and \(\mu(x_{i})=z_{i}\) for \(1\leq i\leq n\) and \(\eta(x_{i})=y_{i}\) for \(1\leq i\leq n\). Then \[d_{S}(X_{3},X_{1})= \sum_{i=1}^{n}d(x_{i},y_{i})+\sum_{i=n+1}^{n+s}M(y_{i})\] \[d_{S}(X_{3},X_{2})= \sum_{i=1}^{n}d(x_{i},z_{i})+\sum_{i=n+1}^{n+s+t}M(z_{i}).\] Define \(\chi:X_{1}\to X_{2}\) by \(\chi(y_{i})=z_{i}\) for \(i=1,2,\ldots,n+s\). Then \[d_{S}(X_{1},X_{2})\] \[\leq d_{\chi}(X_{1},X_{2})\] \[= \sum_{i=1}^{n+s}d(y_{i},z_{i})+\sum_{i=n+s+1}^{n+s+t}M(z_{i})\] \[= \sum_{i=1}^{n}d(y_{i},z_{i})+\sum_{i=n+1}^{n+s}d(y_{i},z_{i})+\sum_{i=n+s+1}^{n+s+t}M(z_{i})\] \[\leq \sum_{i=1}^{n}\left[d(y_{i},x_{i})+d(x_{i},z_{i})\right]+\sum_{i=n+1}^{n+s}d(y_{i},z_{i})+\sum_{i=n+s+1}^{n+s+t}M(z_{i})\] \[= \sum_{i=1}^{n}d(y_{i},x_{i})+\left(\sum_{i=1}^{n}d(x_{i},z_{i})+\sum_{i=n+1}^{n+s+t}M(z_{i})\right)-\sum_{i=n+1}^{n+s}M(z_{i})+\sum_{i=n+1}^{n+s}d(y_{i},z_{i})\] \[= \left(d_{S}(X_{3},X_{1})-\sum_{i=n+1}^{n+s}M(y_{i})\right)+d_{S}(X_{3},X_{2})+\sum_{i=n+1}^{n+s}\left(d(y_{i},z_{i})-M(z_{i})\right)\] \[= d_{S}(X_{3},X_{1})+d_{S}(X_{3},X_{2})-\sum_{i=n+1}^{n+s}M(y_{i})+\sum_{i=n+1}^{n+s}\left(d(y_{i},z_{i})-M(z_{i})\right)\] \[\leq d_{S}(X_{1},X_{3})+d_{S}(X_{3},X_{2}),\] where the last inequality follows from condition (2).

**Case 3:** Suppose \(|X_{1}|\leq|X_{2}|\leq|X_{3}|\). Fix a subset \(X_{3}^{\prime}\) of \(X_{3}\) of cardinality \(|X_{2}|\). Then from Case 1, it follows that \(d_{S}(X_{1},X_{2})\leq d_{S}(X_{1},X_{3}^{\prime})+d_{S}(X_{3}^{\prime},X_{2})\). From Corollary 2 we know that \(d_{S}(X_{1},X_{3}^{\prime})\leq d_{S}(X_{1},X_{3})\) and \(d_{S}(X_{3}^{\prime},X_{2})\leq d_{S}(X_{3},X_{2})\). Thus \(d_{S}(X_{1},X_{2})\leq d_{S}(X_{1},X_{3})+d_{S}(X_{3},X_{2})\).

**Remark 1**.: _If \(X\) contains at least two elements, then the function \(M\) never takes on the value 0. In fact, there exists a constant \(C>0\) such that \(M(y)\geq C\) for all \(y\in X\): fix \(x\in X\); from (2), \(d(x,y)\leq M(x)\leq d(x,y)+M(y)\leq 2M(y)\). Thus \(M(y)\geq M(x)/2\) for all \(y\in X\). Put \(C=M(x)/2\). If \(C=0\), then \(M(x)=0\) and the inequality \(d(x,y)\leq M(x)\) implies that \(y=x\) for all \(y\in X\), contradicting that \(X\) contains at least two elements.
Thus \(C=M(x)/2>0\) is the required constant._

**Remark 2**.: _If \(\{A_{n}\}\) is a Cauchy sequence in \(\mathcal{F}(X)\), it can be shown that \(|A_{n}|=|A_{m}|\) for all \(m,n\) sufficiently large: let \(C\) be as in Remark 1. Then there exists \(N\) such that \(d_{S}(A_{m},A_{n})<C\) for all \(m,n\geq N\). Since \(C=\frac{1}{2}M(x)\leq M(y)\) for all \(y\in X\), it follows that \(|A_{m}|=|A_{n}|\) for all \(m,n\geq N\)._

**Remark 3**.: _If the topology induced by the metric \(d\) on \(X\) is the discrete topology, then \(\mathcal{F}(X)\) is complete with respect to the subset metric. However, this is not the case in general. Consider the case where \(X=[0,1]\), \(d\) is the usual Euclidean metric and \(M(y)=\max\{y,1-y\}\). Put \(A_{n}=\{0,\frac{1}{n}\}\) for all \(n\geq 1\). Then \(\{A_{n}\}\) is a Cauchy sequence that does not converge: if \(\{A_{n}\}\) did converge, then using Lemma 1 and Remark 2, it would converge to a set of the form \(A=\{0,a\}\) for some \(a\in X\). But \(d_{S}(A_{n},A)=|a-1/n|\to 0\) as \(n\to\infty\), so \(a\) must equal \(0\). But if \(a=0\), then \(d_{S}(A_{n},A)=M(1/n)=1-1/n\to 1\) as \(n\to\infty\), a contradiction._
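As a quick numerical illustration of Remark 3 (our own sketch under the stated assumptions \(X=[0,1]\), \(d(x,y)=|x-y|\), \(M(y)=\max\{y,1-y\}\); for these tiny sets the minimum over injections is brute-forced rather than computed with the Kuhn-Munkres algorithm):

```python
from itertools import permutations

def d_S(A, B, M):
    """Brute-force subset distance on X = [0, 1] with d(x, y) = |x - y|."""
    if len(A) > len(B):
        A, B = B, A
    best = float("inf")
    for image in permutations(B, len(A)):  # candidate images of an injection A -> B
        matched = sum(abs(a - b) for a, b in zip(A, image))
        unmatched = sum(M(y) for y in B if y not in image)
        best = min(best, matched + unmatched)
    return best

M = lambda y: max(y, 1 - y)
for n in (10, 100, 1000):
    print(d_S([0.0, 1.0 / n], [0.0], M))  # M(1/n) = 1 - 1/n, tending to 1, not 0
```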
2308.06703
Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods
Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and RMSProp, have been widely used in training deep neural networks. We empirically show that while the difference between the standard generalization performance of models trained using these methods is small, those trained using SGD exhibit far greater robustness under input perturbations. Notably, our investigation demonstrates the presence of irrelevant frequencies in natural datasets, where alterations do not affect models' generalization performance. However, models trained with adaptive methods show sensitivity to these changes, suggesting that their use of irrelevant frequencies can lead to solutions sensitive to perturbations. To better understand this difference, we study the learning dynamics of gradient descent (GD) and sign gradient descent (signGD) on a synthetic dataset that mirrors natural signals. With a three-dimensional input space, the models optimized with GD and signGD have standard risks close to zero but vary in their adversarial risks. Our result shows that linear models' robustness to $\ell_2$-norm bounded changes is inversely proportional to the model parameters' weight norm: a smaller weight norm implies better robustness. In the context of deep learning, our experiments show that SGD-trained neural networks have smaller Lipschitz constants, explaining the better robustness to input perturbations than those trained with adaptive gradient methods.
Avery Ma, Yangchen Pan, Amir-massoud Farahmand
2023-08-13T07:03:22Z
http://arxiv.org/abs/2308.06703v2
# Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods

###### Abstract

Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and RMSProp, have been widely used in training deep neural networks. We empirically show that while the difference between the standard generalization performance of models trained using these methods is small, those trained using SGD exhibit far greater robustness under input perturbations. Notably, our investigation demonstrates the presence of irrelevant frequencies in natural datasets, where alterations do not affect models' generalization performance. However, models trained with adaptive methods show sensitivity to these changes, suggesting that their use of irrelevant frequencies can lead to solutions sensitive to perturbations. To better understand this difference, we study the learning dynamics of gradient descent (GD) and sign gradient descent (signGD) on a synthetic dataset that mirrors natural signals. With a three-dimensional input space, the models optimized with GD and signGD have standard risks close to zero but vary in their adversarial risks. Our result shows that linear models' robustness to \(\ell_{2}\)-norm bounded changes is inversely proportional to the model parameters' weight norm: a smaller weight norm implies better robustness. In the context of deep learning, our experiments show that SGD-trained neural networks have smaller Lipschitz constants, explaining the better robustness to input perturbations than those trained with adaptive gradient methods.

## 1 Introduction

Adaptive gradient methods, such as Adam (Kingma & Ba, 2015) and RMSProp (Hinton et al., 2012), are a family of popular techniques to optimize machine learning (ML) algorithms. They are an extension of the traditional gradient descent method, which uses the gradient of a differentiable objective function to update the model's parameters in the direction that improves the objective. To speed up the optimization procedure, the adaptive gradient methods introduce a coordinate-wise learning rate to adjust the update for each parameter based on its individual gradient. Previous empirical work investigates the difference in the standard generalization between models trained using SGD and adaptive gradient methods (Wilson et al., 2017; Agarwal et al., 2020), while recent efforts have focused on understanding the implicit bias of SGD (Gunasekar et al., 2017; Soudry et al., 2018; Lyu & Li, 2020) and adaptive gradient algorithms (Qian & Qian, 2019; Wang et al., 2021). Nevertheless, our result shows that in practice such a gap in the standard generalization is relatively small, in contrast to the difference between the robustness of models trained using those algorithms. As more ML-based systems are deployed in the real world, models' robustness, i.e., their ability to maintain performance when faced with noisy or corrupted inputs, has become an important criterion. There is a large volume of literature on developing specialized methods to improve the robustness of neural networks (Silva and Najafirad, 2020), yet practitioners still simply use standard methods to train their models. In fact, a recent survey shows that only 3 of the 28 organizations have developed their ML-based systems with the improvement in robustness in mind (Kumar et al., 2020). This motivates us to understand the effect of optimizers on the robustness of models obtained in the standard training regime.
In particular, we focus on models trained using SGD and adaptive gradient methods. Note that our primary focus lies in understanding the robustness difference, and robustification falls outside the scope of our work.

### The Robustness Difference between Models Trained by Different Algorithms

As a first step, we compare how models, trained with SGD, Adam, and RMSProp, differ in their standard generalization and robustness on seven benchmark datasets (LeCun, 1998; Xiao et al., 2017; Krizhevsky and Hinton, 2009; Netzer et al., 2011; Howard et al., 2004). In our experiments, we evaluate standard generalization using the accuracy of the trained classifier on the original test dataset. To measure robustness, we consider the classification accuracy on the test dataset perturbed by Gaussian noise, \(\ell_{2}\) and \(\ell_{\infty}\) bounded adversarial perturbations (Croce and Hein, 2020). We follow the default PyTorch configuration to train all the models and sweep through a wide range of learning rates. The final model is selected as the one with the highest validation accuracy. Models are trained in a vanilla setting in which batch normalization (Ioffe and Szegedy, 2015) is disabled and data augmentations are limited to random flipping. Additional discussions on batch normalization, data augmentation, optimization schedule and network designs are included in Appendix B. Appendix C contains the results of the experiments used in the plot of Figure 1, as well as details on how the perturbations were selected for each dataset; visualizations of the perturbations are included in Appendix F.

Figure 1 compares the models trained with SGD and the adaptive gradient methods, pointing to two important observations. First, the relatively small vertical differences among the three models on a given dataset show that the models have similar standard generalization performance despite being trained by different algorithms. On the other hand, we observe, under all three types of perturbations, a large horizontal span with SGD always positioned on the far right side among the three. This indicates that models trained by SGD significantly outperform models trained by Adam and RMSProp in terms of their robustness against perturbations.

Figure 1: **Comparison between models trained using SGD, Adam, and RMSProp on seven benchmark datasets. Each colored triplet denotes the models on the same dataset. Models trained by different algorithms have similar standard generalization performance, but there is a distinct robustness difference as measured by the accuracy of the test dataset under Gaussian noise, \(\ell_{2}\) and \(\ell_{\infty}\) bounded adversarial perturbations (Croce and Hein, 2020). Results are obtained by averaging over three independently initialized and trained models.**

### Contributions

Previous optimization work often studies how the structure of the dataset affects the dynamics of learning. For example, some focus on a dataset with different feature strengths (Amari et al., 2021; Pezeshki et al., 2021), while others assume a linearly separable dataset (Wilson et al., 2017; Gunasekar et al., 2017; Soudry et al., 2018). In our work, we investigate how the frequency characteristics of the dataset impact the robustness of models trained by SGD and adaptive gradient methods. We make the following contributions:

* We demonstrate that natural datasets contain irrelevant frequencies, which, when removed, have negligible effects on standard generalization performance (Sec. 3.1).
* We also observe that neural networks trained by different algorithms can have very different robustness against perturbations in the direction of the irrelevant frequencies (Sec. 3.2).
* Those observations lead to our claim that models only need to learn how to correctly use relevant information in the data to optimize the training objective, and because their use of the irrelevant information is under-constrained, it can lead to solutions sensitive to perturbations (Sec. 3).
* Our analysis of linear models on least square regression shows that linear models' robustness to \(\ell_{2}\)-norm bounded changes is inversely proportional to the model parameters' weight norm: a smaller weight norm implies better robustness (Sec. 4.1).
* We study the learning dynamics of GD and signGD, a memory-free version of Adam and RMSProp, with linear models. With a three-dimensional input space, the analysis shows that models optimized with GD exhibit a smaller weight norm compared to their signGD counterparts (Sec. 4.2).
* To generalize this result in the deep learning setting, we demonstrate that neural networks trained by Adam and RMSProp often have a larger Lipschitz constant and, consequently, are more prone to perturbations (Sec. 5).

Specifically, in the analysis of linear models, we design a least square regression task using a synthetic dataset whose frequency representation mimics the natural datasets. This setting allows us to i) mathematically define the standard and adversarial population risks, ii) design a learning task that has multiple optima for the standard population risk, each with a different adversarial risk, and iii) theoretically analyze the learning dynamics of various algorithms.

## 2 Background

In this section, we briefly review the essential background to help understand our work. Specifically, we discuss formulations of adaptive gradient methods, previous work on the adversarial robustness of the model, and methods of representing signals in the frequency domain.

### Optimizations with Adaptive Gradient Algorithms

Consider the empirical risk minimization problem with an objective of the form \(\mathcal{L}(w)=\frac{1}{N}\sum_{n=1}^{N}\ell(X_{n},Y_{n};w)\), where \(w\in\mathbb{R}^{d}\) is a vector of weights of a model, \(\{(X_{n},Y_{n})\}_{n=1}^{N}\) is the training dataset and \(\ell(x,y;w)\) is the point-wise loss quantifying the performance of the model on data point \((x,y)\). A common approach in training machine learning models is to reduce the loss via SGD, which iteratively updates the model based on a mini-batch of data points drawn uniformly and independently from the training set: \[g(w)=\frac{1}{|\mathcal{B}|}\sum_{n\in\mathcal{B}}\nabla_{w}\ell(X_{n},Y_{n};w), \tag{1}\] where \(\mathcal{B}\subset\{1,...,N\}\) denotes the minibatch and has a size of \(|\mathcal{B}|\ll N\). The update rule of SGD is \(w(t+1)=w(t)-\eta(t)g(w(t))\), where \(\eta(t)\in\mathbb{R}^{+}\) denotes the learning rate.1 A family of adaptive gradient methods has been used to accelerate training by updating the model parameters based on a coordinate-wise scaling of the original gradients. Methods such as Adam and RMSProp have demonstrated significant acceleration in training deep neural networks (Wilson et al., 2017).
Many adaptive gradient methods can be written as \[m(t+1) =\beta_{1}g(w(t))+(1-\beta_{1})m(t)\] \[v(t+1) =\beta_{2}g(w(t))^{2}+(1-\beta_{2})v(t)\] \[w(t+1) =w(t)-\eta(t)\frac{m(t+1)}{\sqrt{v(t+1)}+\epsilon}, \tag{2}\] where \(g(w(t))\) is the stochastic estimate of the gradient used by SGD (1), \(m\) and \(v\) are the first and second-order memory terms with their strength specified by \(\beta_{1}\) and \(\beta_{2}\), and \(\epsilon\) is a small constant used to avoid division-by-zero. Such a general form has been widely used to study the dynamics of adaptive gradient algorithms (Wilson et al., 2017; da Silva and Gazeau, 2020; Ma et al., 2022). For example, Adam corresponds to \(\beta_{1},\beta_{2}\in(0,1)\), and RMSProp is recovered when \(\beta_{1}=1\) and \(\beta_{2}\in(0,1)\). Notice that such updates rely on the history of past gradients, and this makes the precise understanding and analysis of adaptive gradient methods more challenging (Duchi et al., 2011). Recent work analyzes the learning dynamics of adaptive gradient methods by separately considering the direction and the magnitude of the update (Kingma and Ba, 2015; Balles and Hennig, 2018; Ma et al., 2022). As a simple example, to demonstrate how adaptive gradient methods can potentially accelerate learning compared to vanilla SGD, consider a memory-free version of (2) with \(\beta_{1}=\beta_{2}=1\) and \(\epsilon=0\). It is easy to see that the update rule in (2) becomes sign gradient descent: \[w(t+1) =w(t)-\eta(t)\operatorname{sign}(g(w(t)))\] \[=w(t)-\vec{\eta}(t)\odot g(w(t)), \tag{3}\] where \(\odot\) denotes the Hadamard product and \(\vec{\eta}(t)\in\mathbb{R}^{d}\) is a coordinate-wise learning rate based on the absolute value of the gradient, i.e., \(\vec{\eta}(t)=\frac{\eta(t)}{|g(w(t))|}\). Therefore, \(\vec{\eta}(t)\) accounts for the magnitude of the gradient, and a larger learning rate is used for parameters with smaller gradients. In general, gradient-sign-based optimization methods are not successful in training deep learning models (Riedmiller and Braun, 1993; Ma et al., 2022); nevertheless, methods such as signGD can shed light on the learning dynamics of adaptive gradient methods (Karimi et al., 2016; Balles and Hennig, 2018; Moulay et al., 2019). For example, recent work by Ma et al. (2022) studies the behavior of adaptive gradient algorithms in the continuous-time limit. They demonstrate that under a fixed \(\beta_{1}\) and \(\beta_{2}\), the memory effect for both Adam and RMSProp diminishes and the continuous-time limit of the two algorithms follows the dynamics of signGD flow. In this work, the deep learning models on which we observe the robustness difference are trained using Adam and RMSProp, with the exception of Sec. 4, where we focus on signGD, a memory-free version of Adam and RMSProp, and gradient descent to help us understand the robustness gap between models trained using SGD and adaptive gradient methods in a simple setting.

### Robustness of ML Models

An important assumption of most modern ML models is that samples from the training and testing datasets are independent and identically distributed (i.i.d.); however, samples collected in the real world rarely come from the same distribution as the training data, as they are often subject to noise and can even be corrupted. It is known that ML models can achieve impressive success on the original testing dataset, but exhibit a sharp drop in performance under perturbations (Szegedy et al., 2014).
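As a minimal illustration of this kind of robustness measurement (our own sketch, not the paper's evaluation code; the classifier and data are hypothetical stand-ins), one can compare clean test accuracy with accuracy on Gaussian-perturbed inputs, mirroring the protocol used in Figure 1:

```python
import numpy as np

def accuracy(predict, X, y):
    """Fraction of correctly classified examples; `predict` maps inputs to labels."""
    return float(np.mean(predict(X) == y))

def gaussian_robust_accuracy(predict, X, y, sigma, seed=0):
    """Accuracy after perturbing each test input with N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    X_noisy = X + sigma * rng.standard_normal(X.shape)
    return accuracy(predict, X_noisy, y)

# Toy stand-in classifier: the sign of the first feature.
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))
y = (X[:, 0] > 0).astype(int)
print(accuracy(predict, X, y))                       # 1.0 on clean data
print(gaussian_robust_accuracy(predict, X, y, 0.5))  # degrades under noise
```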
Such an observation has posed concerns about the potential vulnerabilities of real-world ML applications such as healthcare (Qayyum et al., 2020), autonomous driving (Deng et al., 2020) and audio systems (Li et al., 2020). Models' robustness performance has become an important secondary metric in the empirical evaluation of new training methods, such as data augmentations (Zhang et al., 2018; Hendrycks et al., 2020; Verma et al., 2019; Ma et al., 2022) and robust optimization techniques (Zhai et al., 2021). Generally, the robustness property of models is assessed by examining the model performance under multiple perturbations (Ding et al., 2020; Shen et al., 2021; Kuang et al., 2018). A wide variety of approaches have been proposed to improve the robustness of the model through regularizations (Goodfellow et al., 2015; Simon-Gabriel et al., 2019; Wen et al., 2020; Ma et al., 2020), data augmentation (Madry et al., 2018; Rebuffi et al., 2021; Gowal et al., 2021; Ma et al., 2022), and novel network architectures (Wu et al., 2021; Ma et al., 2021; Huang et al., 2021). However, most industry practitioners have yet to come to terms with improving security when developing ML systems (Kumar et al., 2020). Since SGD, Adam, and RMSProp have been the go-to optimizers in both academic and industrial settings, this motivates us to understand the robustness of models trained by those algorithms and built on the standard training pipelines, i.e., minimizing some losses on the original training set.

### 2.3 Frequency Representation of Signals

Natural signals are highly structured (Schwartz and Simoncelli, 2001). They often consist of statistically significant (or insignificant) patterns with a large amount of predictability (or redundancy). Such a phenomenon has been observed in both natural images (Ruderman, 1994; Simoncelli, 1997; Huang and Mumford, 1999) and natural audio signals (McAulay and Quatieri, 1986; Attias and Schreiner, 1996; Turner, 2010). To understand the structure of signals and identify patterns in them, one technique is to decompose the signal into a set of "harmonics" or "overtones": a superposition of periodic waves with varying amplitudes and in varying phases. For example, Fourier (1822) first proposed to analyze complicated heat equations using well-understood trigonometric functions, a method now called the Fourier transform. This new representation allows us to precisely study the structure and the magnitude of any repeating patterns present in the original waveform. For the understanding of digital signals, such a process is called discrete-time signal processing (Oppenheim et al., 2001). Many discrete harmonic transformations exist, such as the discrete Fourier transform, the discrete cosine transform (DCT) (Ahmed et al., 1974) and the wavelet transform (Mallat, 1999). The analysis in this work utilizes the type-II DCT, but other techniques can be applied as well and we expect similar results. Concretely, consider a \(d\)-dimensional signal \(x\in\mathbb{R}^{d}\) in the spatial domain.
The same signal can be alternatively represented as a discrete sum of amplitudes multiplied by its cosine harmonics: \(\tilde{x}_{k}=\sum_{j=0}^{d-1}x_{j}\cos\left[\frac{\pi}{d}\left(j+\frac{1}{2}\right)k\right]\) for \(k=0,...,d-1\), where the transformed signal \(\tilde{x}\) has a frequency-domain representation.2 Because the DCT is linear, it can be carried out using a matrix operation, i.e., \(\tilde{x}=Cx\), where \(C\) is a \(d\times d\) DCT transformation matrix with values specified by

\[C_{kj}^{(d)}=\sqrt{\frac{\alpha_{k}}{d}}\cos\left[\frac{\pi}{d}\left(j+\frac{1}{2}\right)k\right], \tag{4}\]

where \(\alpha_{0}=1\) and \(\alpha_{k}=2\) for \(k>0\).

Footnote 2: Indices range from \(0\) to \(d-1\), as zero frequency is commonly used to refer to a signal that is constant everywhere.

In particular, \(\tilde{x}\) can be written as a matrix-vector product between the transformation matrix \(C\) and the column vector \(x\):

\[\begin{bmatrix}\tilde{x}_{0}\\ \tilde{x}_{1}\\ \vdots\\ \tilde{x}_{d-1}\end{bmatrix}=\begin{bmatrix}\sqrt{\frac{1}{d}}&\sqrt{\frac{1}{d}}&\cdots&\sqrt{\frac{1}{d}}\\ \sqrt{\frac{2}{d}}\cos\frac{\pi(2(0)+1)(1)}{2d}&\sqrt{\frac{2}{d}}\cos\frac{\pi(2(1)+1)(1)}{2d}&\cdots&\sqrt{\frac{2}{d}}\cos\frac{\pi(2(d-1)+1)(1)}{2d}\\ \vdots&\vdots&\vdots&\vdots\\ \sqrt{\frac{2}{d}}\cos\frac{\pi(2(0)+1)(d-1)}{2d}&\sqrt{\frac{2}{d}}\cos\frac{\pi(2(1)+1)(d-1)}{2d}&\cdots&\sqrt{\frac{2}{d}}\cos\frac{\pi(2(d-1)+1)(d-1)}{2d}\end{bmatrix}\begin{bmatrix}x_{0}\\ x_{1}\\ \vdots\\ x_{d-1}\end{bmatrix}. \tag{5}\]

Notice that \(C\) is a real orthogonal matrix whose rows consist of periodic cosine bases with increasing frequencies. Therefore, the absolute value of \(\tilde{x}\) at a particular dimension indicates the magnitude of the corresponding basis function, and a higher dimension in \(\tilde{x}\) means the basis function is of higher frequency. Another important property of the DCT is its invertibility. That is, signals in the frequency domain can be converted back to the spatial-temporal domain via the inverse DCT (iDCT): \(x=C^{-1}\tilde{x}=C^{\top}\tilde{x}\). In the example above, we discussed the one-dimensional DCT, which is applied to vectors and is used in the linear analysis in Sec. 4. Transformations on images require the two-dimensional DCT and can be done using \(\tilde{x}=CxC^{\top}\), where \(x,\tilde{x}\in\mathbb{R}^{d\times d}\), and \(C\) is defined in (4); the inverse two-dimensional DCT is \(x=C^{\top}\tilde{x}C\). For more details on two-dimensional DCT, we refer the reader to Pennebaker and Mitchell (1992).

Previous work analyzes the sensitivity of neural network classifiers by examining the frequency characteristics of various types of perturbations, with an emphasis on understanding how data augmentation affects the robustness of the model (Yin et al., 2019). In our work, the frequency interpretation of signals is an integral part of understanding the robustness difference between models trained by SGD and adaptive gradient methods. This perspective allows us to study the structure of complex signals using well-understood periodic basis functions such as cosines and to understand the energy distribution of signals by examining the amplitudes of the basis functions. In particular, the energy of a discrete signal \(x\) is defined as \(E(x)=\sum_{i=0}^{d-1}|x_{i}|^{2}\) and, by Parseval's theorem, is equivalent to the sum of squared amplitudes across all the bases, i.e., \(E(x)=E(\tilde{x})=\sum_{i=0}^{d-1}|\tilde{x}_{i}|^{2}\).
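To make the transformation concrete, here is a minimal numpy sketch of the matrix in (4) that checks the orthogonality, invertibility, and Parseval properties stated above (the function name is ours; `scipy.fft.dct` with `norm='ortho'` should produce the same transform):

```python
import numpy as np

def dct_matrix(d):
    """Type-II DCT matrix from (4): C[k, j] = sqrt(alpha_k / d) cos(pi/d (j + 1/2) k)."""
    C = np.zeros((d, d))
    for k in range(d):
        alpha = 1.0 if k == 0 else 2.0
        for j in range(d):
            C[k, j] = np.sqrt(alpha / d) * np.cos(np.pi / d * (j + 0.5) * k)
    return C

d = 8
C = dct_matrix(d)
x = np.random.default_rng(0).normal(size=d)
x_tilde = C @ x                                       # forward DCT
print(np.allclose(C @ C.T, np.eye(d)))                # True: C is orthogonal
print(np.allclose(x, C.T @ x_tilde))                  # True: the iDCT inverts the DCT
print(np.isclose(np.sum(x**2), np.sum(x_tilde**2)))   # True: Parseval's theorem

# Two-dimensional DCT of a d x d image and its inverse:
x2d = np.random.default_rng(1).normal(size=(d, d))
print(np.allclose(x2d, C.T @ (C @ x2d @ C.T) @ C))    # True
```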
Natural images are primarily made of low-frequency signals3: a high concentration of energy in the low-frequency harmonics renders the amplitude of the higher-frequency harmonics almost negligible (Tolhurst et al., 1992; Schaaf and Hateren, 1996), as shown in Figure 8 in Appendix F. Moreover, we show in Sec. 3.1 that there exist frequencies in natural datasets which, if removed from the training data, do not affect the standard generalization performance of the model. Based on this observation, in Sec. 4, we construct a synthetic dataset that mimics the characteristics of natural signals, and it allows us to study the learning dynamics of various optimization algorithms in a controlled setting.

Footnote 3: We will always use the term "high" or "low" frequency on a relative scale.

## 3 A Claim on How Models Use Irrelevant Frequencies

Why do models trained by different optimization algorithms behave similarly in the standard setting, where the training and the test inputs are i.i.d., while they perform drastically differently when faced with noisy or corrupted data? To answer this question, we first observe that there is irrelevant information in natural datasets (Observation I), and attenuating it in the training input has negligible effects on the standard generalization of the model. This leads to our claim:

**Claim 3.1**.: _To optimize the standard training objective, models only need to learn how to correctly use relevant information in the data. Their use of irrelevant information in the data, however, is under-constrained and can lead to solutions sensitive to perturbations._

Because of this, by targeting the perturbations toward the subset of the signal that contains irrelevant information, we notice that models trained by different algorithms exhibit very different performance changes (Observation II).

### 3.1 Observation I: Irrelevant Frequencies in Natural Signals

Previous work demonstrated that the magnitude of the frequency components in natural images decreases as the frequency increases, and this decrease follows a \(\frac{1}{f^{2}}\) relationship (Ruderman, 1994; Wainwright and Simoncelli, 1999). In Figure 8 of Appendix F, we make the observation on several common vision datasets that the distribution of spectral energy heavily concentrates at the low end of the frequency spectrum and decays quickly towards higher frequencies. The spectral sensitivity of the human eye is limited (Gross, 2005), so patterns with low magnitudes and high frequencies are not important from the perspective of human observers, as they appear to us as nearly invisible and unintuitive information in the scene (Schwartz and Simoncelli, 2001; Schwartz, 2004). For machines, image-processing methods have long exploited the fact that most of the content-defining information in natural images is represented in low frequencies, and the high-frequency signal is redundant, irrelevant, and often associated with noise (Wallace, 1991; Guo et al., 2020; Sharma et al., 2019). Similarly, the notion of irrelevant frequencies also exists when training a neural network classifier. One way to illustrate this is by taking a supervised learning task, removing the irrelevant information from the training input, and then assessing the model's performance using the original test data.
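A hedged sketch of such a filtering step is below. It is our own simplified version via 2-D DCT masking, not the exact pipeline of Appendix D, and it uses the index sum \(k_{1}+k_{2}\) as a crude frequency proxy:

```python
import numpy as np

def filter_image(x, C, mode="low_pass", p=50):
    """Zero out part of the 2-D DCT spectrum of a square grayscale image x.

    mode="low_energy": drop coefficients whose magnitude falls in the
        bottom p-th percentile (low spectral energy).
    mode="low_pass": drop the p% of bases with the highest frequency,
        using the index sum k1 + k2 as a crude frequency proxy.
    """
    d = x.shape[0]
    x_tilde = C @ x @ C.T                       # 2-D DCT
    if mode == "low_energy":
        mask = np.abs(x_tilde) >= np.percentile(np.abs(x_tilde), p)
    else:
        k1, k2 = np.indices((d, d))
        mask = (k1 + k2) <= np.percentile(k1 + k2, 100 - p)
    return C.T @ (x_tilde * mask) @ C           # back to the spatial domain

# Demo on a random stand-in "image"; C is the DCT matrix from (4).
d = 28
C = np.array([[np.sqrt((1 if k == 0 else 2) / d) * np.cos(np.pi / d * (j + 0.5) * k)
               for j in range(d)] for k in range(d)])
x = np.random.default_rng(0).normal(size=(d, d))
x_lp = filter_image(x, C, mode="low_pass", p=50)
print(np.linalg.norm(x - x_lp) / np.linalg.norm(x))  # relative size of the removed part
```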
We observe that when modifying the training dataset by removing subsets of the signal with low spectral energy (Figure 1(a)) or high frequencies (Figure 1(b)), there is a negligible effect on models' classification accuracy on the original test data. In Appendix D, we explain in detail how images are modified, and visualizations of the modified images are included in Appendix F. In both settings, after reducing more than half of the DCT basis vectors to zeroes in the training data, the model's generalization ability remains strong. This observation suggests there is a considerable amount of irrelevant information in naturally occurring data from the perspective of a neural network classifier, and such information is often characterized by low spectral energy or lives at the high end of the frequency spectrum.

This observation leads to the first part of Claim 3.1. That is, models only need to learn how to correctly use the crucial class-defining information from the training data to optimize the training objective. On the other hand, the extent to which they utilize irrelevant information in the data is not well-regulated. This can be problematic and lead to solutions sensitive to perturbations. In Sec. 4, we validate Claim 3.1 using a linear regression analysis with a synthetic dataset that contains irrelevant information. We demonstrate that there exist multiple optima of the training objective, and those solutions can all correctly use the relevant information in the data, but they differ in how they exclude irrelevant information from computing the output. Specifically, a robust model disregards irrelevant information by assigning a weight of zero to it, but a non-robust model has certain non-zero weights which, when combined with the irrelevant information in the input, yield a net-zero effect in the output. In this case, although the two models are indistinguishable under the original training objective, the non-robust model will experience a reduction in performance should this irrelevant information become corrupted at test time.

### 3.2 Observation II: Model Robustness along Irrelevant Frequencies

Let us now focus on the second part of the claim. If models' responses to perturbations along the irrelevant frequencies explain their robustness difference, then we should expect a similar accuracy drop between models when perturbations are along relevant frequencies, but a much larger accuracy drop on less robust models when test inputs are perturbed along irrelevant frequencies. Consider the robustness of the models when the test data are corrupted with Gaussian noise: the perturbation along each spatial dimension is i.i.d. and drawn from a zero-mean Gaussian distribution with finite variance. This type of noise is commonly referred to as additive white Gaussian noise, where white refers to the property that the noise has uniform power across the frequency spectrum (Diebold, 1998). Nevertheless, the previous discussion suggests that noise along different frequencies does not have an equal impact on the models' output. To verify this, we assess the impact on model accuracy by perturbing only specific frequency ranges of the test inputs with band-limited Gaussian noise. To construct the band-limited Gaussian noise, we first follow previous work (Wang et al., 2020) to group DCT basis vectors based on their distance to the 0-frequency DC term and divide the entire DCT spectrum into ten bands, where each band occupies the same number of DCT bases.
This is to ensure an identical \(\ell_{2}\) norm among all the perturbations. Denote the binary mask of the \(i\)-th band by \(M^{(i)}\in\left\{0,1\right\}^{d\times d}\); its corresponding band-limited Gaussian noise is \(\Delta x^{(i)}=C^{\top}(M^{(i)}\odot\delta)C\), where \(\delta\sim\mathcal{N}(0,\sigma^{2}I_{d\times d})\) and \(C\) is the DCT transformation matrix defined in (4). Figure 3 illustrates how the frequency bases are grouped into ten equally sized bands and shows examples of the band-limited Gaussian noise. Denote the perturbations by \(\Delta x^{(i)}\), with \(\Delta x^{(0)}\) and \(\Delta x^{(9)}\) representing the lowest and the highest band, respectively. To investigate the effect of the perturbation \(\Delta x^{(i)}\) on the models, we measure the change in classification accuracy when the test inputs are perturbed by \(\Delta x^{(i)}\):

\[\frac{1}{N}\sum_{n=1}^{N}\mathbb{I}\left\{F(X_{n})=Y_{n}\right\}-\frac{1}{NK}\sum_{n=1}^{N}\sum_{k=1}^{K}\mathbb{I}\left\{F(X_{n}+\Delta x_{k}^{(i)})=Y_{n}\right\}, \tag{6}\]

where \(F\) is a neural network classifier, \(\{(X_{n},Y_{n})\}_{n=1}^{N}\) represents the test dataset, each test input is perturbed by i.i.d. sampled \(\Delta x_{k}^{(i)}\), and the subscript \(k\) is used to differentiate between \(K\) instances of the randomly sampled noise; we use \(K=10\) in our experiments. It is important to realize in (6) that the additive noise \(\Delta x\) is applied to the spatial signal \(X\), although we are limiting the frequency band of the noise.

Figure 2: **Irrelevant frequencies exist in the natural data.** Accuracy on the original test set remains high when the training inputs are modified by removing parts of the signal with a) low spectral energy and b) high frequencies. Stars represent test accuracy on models trained using the original training input. In setting a), training images are filtered based on the magnitude of the DCT basis. Specifically, parts of the image with DCT bases that have a magnitude in the bottom \(p\)-th percentile are removed, so a large \(p\) means more information is discarded. In setting b), training images are low-pass filtered, and \(p\) denotes the percentage of the high-frequency components that are discarded in the training data. We explain the formulation of the two settings in Appendix D. Examples of the modified inputs are included in Appendix F.
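A minimal sketch of this construction follows (our own simplified grouping of DCT bases into equal-sized bands using the index sum as a distance proxy to the DC term; the exact grouping follows Wang et al. (2020)):

```python
import numpy as np

def band_masks(d, n_bands=10):
    """Split the d x d DCT spectrum into n_bands equal-sized binary masks,
    ordered by a simple distance proxy (k1 + k2) to the 0-frequency DC term."""
    k1, k2 = np.indices((d, d))
    order = np.argsort((k1 + k2).ravel(), kind="stable")
    masks = np.zeros((n_bands, d * d), dtype=bool)
    for i, chunk in enumerate(np.array_split(order, n_bands)):
        masks[i, chunk] = True
    return masks.reshape(n_bands, d, d)

def band_limited_noise(C, mask, sigma, rng):
    """Sample delta ~ N(0, sigma^2 I) and keep only one band: C^T (M * delta) C."""
    delta = rng.normal(scale=sigma, size=C.shape)
    return C.T @ (mask * delta) @ C

d = 32
C = np.array([[np.sqrt((1 if k == 0 else 2) / d) * np.cos(np.pi / d * (j + 0.5) * k)
               for j in range(d)] for k in range(d)])
rng = np.random.default_rng(0)
noises = [band_limited_noise(C, m, sigma=0.1, rng=rng) for m in band_masks(d)]
print([round(float(np.linalg.norm(n)), 2) for n in noises])  # similar l2 norms per band
```

Because \(C\) is orthogonal, the spatial \(\ell_{2}\) norm of each perturbation equals the norm of the masked frequency-domain noise, which is why equal-sized bands give perturbations of comparable strength.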
Figure 4 demonstrates how the classification accuracy degrades under different band-limited Gaussian noises on MNIST, CIFAR100, and Imagenette; results on the other datasets are included in Appendix F. First, notice that the perturbation from the lowest band \(\Delta x^{(0)}\) has a similar impact on all the models, regardless of the algorithm they are trained by. There is, however, a noticeable difference in how models trained by SGD and adaptive gradient methods respond to perturbations from higher frequency bands. On models trained by SGD, the flattened curve implies that the effect of high-frequency perturbations on the generalization performance quickly diminishes to zero, suggesting that models are not sensitive to changes along the dimensions of irrelevant frequencies. In contrast, on models trained by the two adaptive gradient methods, we observe a difference in the way models respond to perturbations of higher frequency bands. On CIFAR100, for example, the two models are highly vulnerable to Gaussian perturbations from bands 5 to 7.

Figure 3: **Visualization of the band-limited Gaussian perturbations.** The DCT spectrum is divided into ten equally sized bands to generate band-limited Gaussian perturbations. Denote them by \(\Delta x^{(i)}\), where \(i\in\{0,1,...,9\}\). The frequency represented in the spectrum plot increases from the top-left (lowest frequency) to the bottom-right corner (highest frequency). Therefore, as the band moves towards higher frequencies, perturbations exhibit more high-frequency checkerboard patterns.

Figure 4: **The effect of band-limited Gaussian perturbations on the model.** Perturbations from the lowest band, i.e., \(\Delta x^{(0)}\), have a similar effect on all the models, despite being trained by different algorithms and exhibiting different robustness properties. On the other hand, models' responses vary significantly when the perturbation focuses on higher frequency bands. The results are averaged over three independently initialized and trained models, and the shaded area indicates the standard error among the three models.

This observation shows that when models, during their training phase, do not have mechanisms in place to limit their use of irrelevant frequencies, their performance can be compromised if data along irrelevant frequencies become corrupted at test time. One can also observe that models' responses to high-frequency Gaussian perturbations vary among datasets. This can be attributed to the fact that (ir-)relevant frequencies are likely a unique characteristic of each particular dataset. We do not expect a dataset that solely consists of hand-written digits to share the same (ir-)relevant frequencies as one that consists of real-world objects. Moreover, the dimension (image resolution) of inputs for a given dataset matters, as a higher dimension can potentially allow more irrelevant frequencies. As such, we emphasize that the goal of our work is not to identify the exact (ir-)relevant frequencies among datasets. Rather, the analysis is built on the **presence** of irrelevant frequencies in the dataset, especially towards the higher end of the frequency spectrum, and how models differ in their robustness when trained by different algorithms. In the next section, we investigate the reason for such a robustness difference by studying how the irrelevant frequencies affect the learning dynamics of GD and signGD under a synthetic linear regression task.

## 4 Linear Regression Analysis with an Over-parameterized Model

In this section, we study the learning dynamics of GD and signGD on least squares regression with linear models to understand why models trained using the two algorithms have the same standard generalization performance but exhibit different robustness against perturbations. On a synthetic dataset that emulates the energy distribution of natural datasets in the frequency domain, we design a learning task that has multiple optima for the standard population risk, each with a different adversarial risk. We analyze the weight adaptation under GD and signGD in both the spatial and frequency domains and show that training with signGD can result in larger weights associated with irrelevant frequencies, resulting in models with a higher adversarial risk. Our result verifies Claim 3.1. We report the main results here and defer the full derivations to Appendix E.
### 4.1 Problem Setup

Consider a linear model \(f(x,w)=\left\langle\,w\,,\,x\,\right\rangle\) with \(x,w\in\mathbb{R}^{d}\), where \(w\) and \(x\) are the weight and the signal represented in the spatial domain, respectively. Since the DCT transformation matrix \(C\) is an orthogonal matrix whose rows and columns are unit vectors, an alternative way to represent this model is:

\[f(x,w)=\left\langle\,w\,,\,x\,\right\rangle=w^{\top}x=w^{\top}C^{\top}Cx=\left\langle\,\tilde{w}\,,\,\tilde{x}\,\right\rangle=f(\tilde{x},\tilde{w}),\]

where \(\tilde{w}\) and \(\tilde{x}\) are the exact same weight and signal but are now represented in the frequency domain. This means that for linear models, computing the output of the model can be carried out in either domain, as long as we use the matching representation of the signal and the weight. The goal of the linear analysis is to study the learning dynamics of different algorithms in a synthetic and controlled environment where one can clearly define the frequency-domain signal-target (ir)relevance to help understand the behavior of models in more complex settings. As such, let \(\tilde{w}^{*}\) denote the frequency-domain representation of the true model that is used to interact with the input \(\tilde{x}\) and generate the target: \(y=\tilde{x}^{\top}\tilde{w}^{*}\), where \(\tilde{w}^{*}=(\tilde{w}_{0}^{*},\tilde{w}_{1}^{*},\ldots,\tilde{w}_{d-1}^{*})^{\top}\). We consider the squared error pointwise loss, which can be equally formulated in both domains:

\[\ell(x,y;w)=\frac{1}{2}\left|f(x,w)-y\right|^{2}=\frac{1}{2}\left|\left\langle\,\tilde{x}\,,\,\tilde{w}\,\right\rangle-\left\langle\,\tilde{x}\,,\,\tilde{w}^{*}\,\right\rangle\right|^{2}.\]

Denote the error between the learned weight and the true weight at iteration \(t\) by \(e(t)=w(t)-w^{*}\), and the standard risk by \(\mathcal{R}_{\mathrm{s}}(w)=\mathbb{E}\left[\ell(X,Y;w)\right]\). In a similar way, those terms can be represented in the frequency domain as \(\tilde{e}(t)=\tilde{w}(t)-\tilde{w}^{*}\) and \(\mathcal{R}_{\mathrm{s}}(\tilde{w})=\mathbb{E}\left[\ell(\tilde{X},Y;\tilde{w})\right]\). Now we are ready to explain the design philosophy behind the synthetic dataset, the structure of the true model \(\tilde{w}^{*}\), and particularly, with regard to robustness, the ideal model that minimizes the effect of perturbations.

Suppose that \(\tilde{X}\) follows a Gaussian distribution \(\mathcal{N}(\tilde{\mu},\tilde{\Sigma})\). For analytical tractability, we consider \(\tilde{\mu}=0\) and a diagonal structure of \(\tilde{\Sigma}\), i.e., \(\tilde{\Sigma}=\operatorname{diag}(\tilde{\sigma}_{0}^{2},...,\tilde{\sigma}_{d-1}^{2})\). This implies that in the spatial domain, \(X\) follows a Gaussian distribution \(\mathcal{N}(0,\Sigma)\) where \(\Sigma=C^{\top}\tilde{\Sigma}C\). In Appendix E.1, we provide examples of the spatial-domain structure of the data when we define the distribution directly in the frequency domain. In Sec. 3, we demonstrated that natural datasets exhibit a particular energy profile where signals contain irrelevant information represented by high-frequency and low-amplitude waves. To emulate this setting with a synthetic dataset, we define frequencies that are (ir)relevant in generating the target. Let \(\mathbb{I}_{\text{irrel}}\subseteq\{1,2,...,d-1\}\) and \(\mathbb{I}_{\text{rel}}=\{0,1,2,...,d-1\}-\mathbb{I}_{\text{irrel}}\) denote the sets of irrelevant and relevant frequencies, respectively.
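Before specifying the energy distribution, here is a quick numerical check of the domain equivalence \(f(x,w)=f(\tilde{x},\tilde{w})\) stated above (a toy sketch assuming scipy is available; `scipy.fft.dct` with `norm='ortho'` implements the orthonormal type-II DCT matching \(C\) in (4)):

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
w, x = rng.normal(size=16), rng.normal(size=16)
# Orthonormal type-II DCT; equivalent to multiplying by the matrix C in (4).
w_tilde, x_tilde = dct(w, norm="ortho"), dct(x, norm="ortho")
print(np.isclose(w @ x, w_tilde @ x_tilde))  # True: <w, x> = <w~, x~>
```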
Recall that the goal is to make high-frequency components of the dataset irrelevant, so we exclude the DC term (\(0\notin\mathbb{I}_{\text{irrel}}\)) when considering irrelevant frequencies, as it is the lowest frequency possible. Next, we specify the energy distribution of the synthetic dataset. The expected energy of a random signal following such a distribution is

\[\mathbb{E}\left[E(\tilde{X})\right]=\mathbb{E}\left[\sum_{i=0}^{d-1}|\tilde{X}_{i}|^{2}\right]=\sum_{i=0}^{d-1}\mathbb{E}\left[\tilde{X}_{i}^{2}\right]=\sum_{i=0}^{d-1}\tilde{\sigma}_{i}^{2}. \tag{7}\]

We assume that \(\tilde{\sigma}_{i}^{2}=0\) if \(i\in\mathbb{I}_{\text{irrel}}\), meaning the irrelevant frequencies of the data from the synthetic dataset have zero energy contribution. The purpose of this is to imitate the behavior of real-world datasets, where the high-frequency components have a negligible impact on the overall energy of the signal. To see how having irrelevant frequencies affects the structure of the true model, notice that the definition of the synthetic dataset implies \(\tilde{X}_{i}=0\) for all \(i\in\mathbb{I}_{\text{irrel}}\). This means that the true target value does not depend on those irrelevant frequencies. Clearly, this linear model is over-parameterized, because one only needs to specify \(\tilde{w}_{i}^{*}\) for all \(i\in\mathbb{I}_{\text{rel}}\) to establish the signal-target relationship.

The objective of the standard risk with such a synthetic dataset is not strictly convex, i.e., there are multiple minimizers with zero standard risk, as the value of \(\tilde{w}_{i}^{*}\) for all \(i\in\mathbb{I}_{\text{irrel}}\) has no impact on the model output. For clarity, let us define \(\tilde{\mathcal{W}}^{*}=\{\,\tilde{w}^{*}:\mathcal{R}_{s}(\tilde{w}^{*})=0\,\}\) as the set that includes all standard risk minimizers. Having multiple standard risk minimizers is the result of over-parametrization; however, there is a unique solution that achieves zero standard risk and makes the model immune to any perturbations parallel to the directions of the irrelevant frequencies, and it corresponds to having zero weight at the irrelevant frequencies: \(\tilde{w}_{i}^{*}=0\) for all \(i\in\mathbb{I}_{\text{irrel}}\). Defining such a robust standard risk minimizer as \(\tilde{w}^{\text{R}}\in\tilde{\mathcal{W}}^{*}\), we have

\[\tilde{w}_{i}^{\text{R}}\triangleq\begin{cases}\tilde{w}_{i}^{*}&\text{for all}\quad i\in\mathbb{I}_{\text{rel}}\\ 0&\text{otherwise.}\end{cases} \tag{8}\]

Note that we use \(\tilde{w}^{*}\) to denote an arbitrary standard risk minimizer in \(\tilde{\mathcal{W}}^{*}\). To see why \(\tilde{w}^{\text{R}}\) is the most robust standard risk minimizer, we introduce the adversarial risk to capture the worst-case performance of the model under an \(\ell_{2}\)-constrained perturbation. Similar to the squared error loss, the adversarial risk can also be equally formulated in both domains:

\[\mathcal{R}_{\text{a}}(w)\triangleq\mathbb{E}_{(X,Y)}\bigg{[}\max_{||\Delta x||_{2}\leq\epsilon}\ell(X+\Delta x,Y;w)\bigg{]}\qquad\text{and}\qquad\mathcal{R}_{\text{a}}(\tilde{w})\triangleq\mathbb{E}_{(\tilde{X},Y)}\bigg{[}\max_{||\Delta\tilde{x}||_{2}\leq\epsilon}\ell(\tilde{X}+\Delta\tilde{x},Y;\tilde{w})\bigg{]},\]

where the \(\ell_{2}\)-constraint with a size of \(\epsilon\) has an equivalent effect in both domains.
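The synthetic setup can be sketched directly in the frequency domain. The construction below (with our own arbitrary choices of \(d\), the index sets, and the energy profile) builds two standard risk minimizers that are indistinguishable under the standard risk but, by the closed form derived next in (12), have different adversarial risks:

```python
import numpy as np

d = 8
irrel = np.array([5, 6, 7])                  # high-frequency, zero-variance components
rel = np.setdiff1d(np.arange(d), irrel)

sigma2 = np.zeros(d)
sigma2[rel] = 1.0 / (1.0 + np.arange(len(rel))) ** 2  # energy decaying with frequency
rng = np.random.default_rng(0)
X_tilde = rng.normal(size=(10000, d)) * np.sqrt(sigma2)  # X~ ~ N(0, diag(sigma2))

w_star = np.zeros(d)
w_star[rel] = rng.normal(size=len(rel))
Y = X_tilde @ w_star                         # targets depend only on relevant frequencies

# Two standard risk minimizers: the robust one (zero weight at irrelevant
# frequencies, as in (8)) and a non-robust one with arbitrary weights there.
w_robust = w_star.copy()
w_nonrobust = w_star.copy()
w_nonrobust[irrel] = rng.normal(size=len(irrel))

risk = lambda w: 0.5 * np.mean((X_tilde @ w - Y) ** 2)
print(risk(w_robust), risk(w_nonrobust))     # both are exactly 0
# Yet by (12), R_a(w*) = (eps^2 / 2) ||w*||^2, so the non-robust one is worse:
eps = 1.0
print(0.5 * eps**2 * np.sum(w_robust**2), 0.5 * eps**2 * np.sum(w_nonrobust**2))
```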
To understand the adversarial risk from a frequency-domain perspective, let us focus on \(\mathcal{R}_{\text{a}}(\tilde{w})\):

\[\mathcal{R}_{\text{a}}(\tilde{w})=\mathbb{E}_{\tilde{X}}\bigg{[}\underset{||\Delta\tilde{x}||_{2}\leq\epsilon}{\max}\frac{1}{2}\big{|}\big{\langle}\,\tilde{X}\,,\,\tilde{w}-\tilde{w}^{*}\,\big{\rangle}+\langle\,\Delta\tilde{x}\,,\,\tilde{w}\,\rangle\big{|}^{2}\bigg{]}, \tag{9}\]

where we focus on the expectation over \(\tilde{X}\), as \(Y\) is replaced with \(\big{\langle}\,\tilde{X}\,,\,\tilde{w}^{*}\,\big{\rangle}\). Notice that the maximization is inside the expectation. This means that we are finding a separate perturbation for each input. Therefore, the maximizer, \(\Delta\tilde{x}^{*}\), of a given \(\tilde{X}\) within the expectation in (9) is

\[\Delta\tilde{x}^{*}\triangleq\underset{||\Delta\tilde{x}||_{2}\leq\epsilon}{\arg\max}\frac{1}{2}\big{|}\big{\langle}\,\tilde{X}\,,\,\tilde{w}-\tilde{w}^{*}\,\big{\rangle}+\langle\,\Delta\tilde{x}\,,\,\tilde{w}\,\rangle\big{|}^{2}=\epsilon\,\text{sign}[\big{\langle}\,\tilde{X}\,,\,\tilde{w}-\tilde{w}^{*}\,\big{\rangle}]\frac{\tilde{w}}{||\tilde{w}||_{2}}. \tag{10}\]

Now, knowing the worst-case perturbation to any \(\tilde{X}\), we can continue the derivation in (9) with

\[\mathcal{R}_{\text{a}}(\tilde{w})=\frac{1}{2}\mathbb{E}_{\tilde{X}}\left[\left|\left<\,\tilde{X}\,,\,\tilde{w}-\tilde{w}^{*}\,\right>+\epsilon\,\text{sign}[\left<\,\tilde{X}\,,\,\tilde{w}-\tilde{w}^{*}\,\right>]\left\|\tilde{w}\right\|_{2}\right|^{2}\right]\]
\[=\frac{1}{2}\sum_{i\in\mathbb{I}_{\text{rel}}}\tilde{\sigma}_{i}^{2}(\tilde{w}_{i}-\tilde{w}_{i}^{*})^{2}+\epsilon\sqrt{\frac{2}{\pi}\sum_{i\in\mathbb{I}_{\text{rel}}}\tilde{\sigma}_{i}^{2}(\tilde{w}_{i}-\tilde{w}_{i}^{*})^{2}}\left\|\tilde{w}\right\|_{2}+\frac{\epsilon^{2}}{2}\left\|\tilde{w}\right\|_{2}^{2}. \tag{11}\]

Finding the exact minimizer of (11) is more involved. Without doing that, however, it is obvious that for an arbitrary standard risk minimizer \(\tilde{w}^{*}\) from \(\tilde{\mathcal{W}}^{*}\), we can evaluate (11) at \(\tilde{w}^{*}\) and obtain its adversarial risk as

\[\mathcal{R}_{\text{a}}(\tilde{w}^{*})=\frac{\epsilon^{2}}{2}||\tilde{w}^{*}||_{2}^{2}, \tag{12}\]

where the first two terms in (11) become zero at any fixed standard risk minimizer. This shows that the robustness of the standard risk minimizers against \(\ell_{2}\)-bounded perturbations is inversely proportional to the norm of the linear model. That is, a smaller norm implies better robustness. Recall that when evaluating the standard risk of the model, the weights associated with irrelevant frequencies do not matter, since they are never used in computing the output of the model. On the contrary, the \(||\tilde{w}^{*}||_{2}^{2}\) term in (12) implies that those weights do matter when considering the robustness of the model under perturbations. It is not difficult to see that the minimum adversarial risk is achieved by the unique standard risk minimizer \(\tilde{w}^{\text{R}}\) in (8). Therefore, in the over-parameterized linear regression setting, a standard risk minimizer with a minimum norm is preferred when considering the robustness of the model, and a model with zero weight at irrelevant frequencies is the most robust solution among the standard risk minimizers. With this example, we verify Claim 3.1. While standard risk minimizers can correctly use the relevant information of the data, their use of irrelevant information is under-constrained.
This can result in significant weight assigned to irrelevant frequencies, making models more susceptible to perturbations. Next, we study the learning dynamics of GD and signGD and demonstrate that the solutions found by the two algorithms differ in the weights at the irrelevant frequencies. This causes the solutions found by the two algorithms to have a similar standard population risk, but behave very differently under perturbations.

### 4.2 Analysis on the Learning Dynamics of GD and signGD

We now analyze the weight adaptation of a linear model under GD and signGD, and experimentally verify our results. Our analysis shows that for the over-parameterized linear model, GD finds solutions with a standard risk of exactly zero, and signGD finds solutions with a standard risk close to zero. However, they have different robustness properties. In the presence of irrelevant frequencies, GD is more likely to converge to a solution that is less sensitive to perturbations along the direction of irrelevant frequencies, whereas signGD is more likely to converge to solutions that are more prone to such perturbations.

#### 4.2.1 GD Dynamics

Let us start with GD in the spatial domain. Suppose that we initialize the weights in the spatial domain as \(w(0)=W\sim\mathcal{N}(0,\Sigma_{W})\) where \(\Sigma_{W}\in\mathbb{R}^{d\times d}\). Similar to how both \(\tilde{X}\) and \(X\) follow a Gaussian distribution, the frequency representation of the initialized weight also follows a Gaussian distribution: \(\tilde{w}(0)=\tilde{W}\sim\mathcal{N}(0,\tilde{\Sigma}_{W})\) where \(\tilde{\Sigma}_{W}=C\Sigma_{W}C^{\top}\). To train the model, we use GD on the population risk:

\[w(t+1)\gets w(t)-\eta\nabla_{w}\mathcal{R}_{\text{s}}(w(t)). \tag{13}\]

The gradient computed using the population risk is \(\nabla_{w}\mathcal{R}_{\text{s}}(w(t))=\mathbb{E}\left[XX^{\top}\right]e(t)=\Sigma e(t)\), and the learning dynamics of GD in the spatial domain can be captured using:

\[e(t+1)=w(t+1)-w^{*}=w(t)-w^{*}-\eta\Sigma e(t)=(I-\eta\Sigma)^{t+1}e(0). \tag{14}\]

This shows that the learned weight converges to the optimal weight \(w^{*}\) at a rate depending on \(\Sigma\). To see the GD dynamics in the frequency domain, we can simply perform DCT on both sides of (14):

\[\tilde{e}(t+1)=C(I-\eta\Sigma)^{t+1}e(0)=(I-\eta\tilde{\Sigma})^{t+1}\tilde{e}(0), \tag{15}\]

where \(\tilde{\Sigma}\) is the covariance of \(\tilde{x}\). It is easy to see that no weight adaptation happens for the irrelevant frequencies, because \(\tilde{\sigma}_{i}^{2}=0\) for all \(i\in\mathbb{I}_{\text{irrel}}\). As \(\tilde{\Sigma}\) is diagonal, choosing the learning rate \(\eta\) such that \(\eta\max_{i\in\{0,\ldots,d-1\}}\tilde{\Sigma}_{ii}<1\), we get that the asymptotic solution is

\[\boldsymbol{\tilde{w}}_{i}^{\text{GD}}\triangleq\lim_{t\to\infty}\tilde{w}_{i}(t)=\begin{cases}\tilde{w}_{i}^{*}&\text{for all}\quad i\in\mathbb{I}_{\text{rel}}\\ \tilde{w}_{i}(0)&\text{otherwise.}\end{cases} \tag{16}\]

That is, the initial random weights at the irrelevant frequencies do not change. Using (12), we have

\[\mathcal{R}_{\text{a}}(\boldsymbol{\tilde{w}}^{\text{GD}})=\frac{\epsilon^{2}}{2}||\boldsymbol{\tilde{w}}^{\text{GD}}||_{2}^{2}=\frac{\epsilon^{2}}{2}\left\{\sum_{i\in\mathbb{I}_{\text{rel}}}\tilde{w}_{i}^{*2}+\sum_{j\in\mathbb{I}_{\text{irrel}}}\tilde{w}_{j}(0)^{2}\right\}.
\tag{17}\]

Comparing the standard risk minimizer found by GD with the robust standard risk minimizer in (8), we notice that the GD solution is not the most robust among all standard risk minimizers, as it is sensitive to perturbations along irrelevant frequencies. Suppose that the initialized weight in the frequency domain is randomly sampled from \(\mathcal{N}(0,\sigma^{2}I_{d\times d})\), and the signal-target relationship is determined by a handful of relevant frequencies. Taking the expectation of (17) over the randomly initialized weight, we have \(\mathbb{E}_{\tilde{w}(0)}\left[\mathcal{R}_{\text{a}}(\boldsymbol{\tilde{w}}^{\text{GD}})\right]\approx O(\epsilon^{2}d\sigma^{2})\), so the adversarial risk can be quite significant if there is a large number of irrelevant frequencies, i.e., \(|\mathbb{I}_{\text{rel}}|\ll d\) and \(|\mathbb{I}_{\text{irrel}}|\approx d\). This example shows that the GD solution is sensitive to initialization. Because there is no mechanism in place to actively ensure that the weights associated with these irrelevant frequencies become zero, GD does not force the initial weights at those frequencies to go to zero. One solution is to include the weight norm as a penalty term along with the original optimization objective, but this can result in learning a biased solution. Another simple fix is to initialize the weight at exactly \(0\). This robustifies the GD solution by initializing those irrelevant weights at the most robust state.

#### 4.2.2 SignGD Dynamics

Adaptive gradient algorithms like Adam and RMSProp utilize historical gradient information as a momentum mechanism for updating model parameters, thereby expediting the learning process. However, it is important to note that their acceleration is not solely attributable to this feature, nor is their adaptiveness limited to it. In (3), we have demonstrated how signGD, a memory-free version of Adam and RMSProp, can adaptively adjust the update using a coordinate-wise learning rate. Although signGD is not a suitable choice for training deep neural networks (Riedmiller and Braun, 1993; Ma et al., 2022), examining its behavior can provide insights into the learning dynamics of other adaptive gradient methods (Karimi et al., 2016; Balles and Hennig, 2018; Moulay et al., 2019). Additionally, in Sec. 4.2.3, we empirically justify the use of signGD as a suitable alternative for understanding the learning dynamics of Adam and RMSProp.

Again, let us start with signGD in the spatial domain. The update rule using the population risk takes the sign of the gradient computed from the population risk:

\[w(t+1)\gets w(t)-\eta\,\text{sign}[\nabla_{w}\mathcal{R}_{\text{s}}(w)], \tag{18}\]

and its learning dynamics in the spatial domain is

\[e(t+1)=w(t+1)-w^{*}=e(t)-\eta\,\text{sign}[\Sigma e(t)]. \tag{19}\]

Unlike the GD dynamics in (14), (19) shows that the behavior of signGD depends on the sign of \(\Sigma e(t)\), and this means that when \(|[\Sigma e(t)]_{i}|\ll 1\), training using signGD can accelerate the learning along the \(i\)-th dimension. Although we can obtain \(\Sigma\) from \(\Sigma=C^{\top}\tilde{\Sigma}C\), the structure of \(\Sigma\) is subject to variation based on \(\tilde{\Sigma}\), so it is difficult to find an analytical solution for the dynamics of the model trained under signGD, unlike (14), where we have a closed form for \(e(t)\) as a function of \(e(0)\) for models trained under GD.
This means that analyzing the signGD dynamics is limited to studying its step-by-step update based on the sign of the entries in \(\Sigma e(t)\). The signGD learning dynamics in the frequency domain can be obtained by taking the DCT transformation on both sides of (19):

\[\tilde{e}(t+1)=\tilde{e}(t)-\eta C\operatorname{sign}[\Sigma e(t)]=\tilde{e}(t)-\eta C\operatorname{sign}[C^{\top}\tilde{\Sigma}\tilde{e}(t)], \tag{20}\]

where the error and the covariance terms inside of the sign are also transformed into their frequency-domain representations. Equation (20) shows that analyzing the behavior of signGD in the frequency domain requires knowing the sign of the entries in \(C^{\top}\tilde{\Sigma}\tilde{e}(t)\). This term can be understood as an inverse DCT transformation of \(\tilde{\Sigma}\tilde{e}(t)\), and with a diagonal structure of \(\tilde{\Sigma}\), we know that \(\tilde{\Sigma}\tilde{e}(t)=\left[\tilde{\sigma}_{0}^{2},...,\tilde{\sigma}_{d-1}^{2}\right]^{\top}\odot\tilde{e}(t)\). However, similar to the situation in (19), the sign of the entries in \(C^{\top}\tilde{\Sigma}\tilde{e}(t)\) depends on \(\tilde{e}(t)\) at different steps, so obtaining an analytical solution for the frequency-domain dynamics is also challenging.

In both (19) and (20), we see that understanding the signGD dynamics for any general \(\tilde{\Sigma}\) can be complicated. As such, we focus on a structure of \(\tilde{\Sigma}\) that simplifies the analysis but still allows us to understand why training with signGD results in vulnerable models. In particular, we consider a data distribution where \(\tilde{X}\sim\mathcal{N}(\tilde{\mu}=0,\tilde{\Sigma}=\operatorname{diag}\left\{\tilde{\sigma}_{0}^{2},\tilde{\sigma}_{1}^{2},0\right\})\). This definition implies that the data distribution contains irrelevant information at the highest frequency basis, and we have \(\tilde{X}_{2}=0\) for all datapoints.

Now, we continue with the signGD learning dynamics in the frequency domain from (20). Let us denote \(A(t)=\frac{\sqrt{3}}{3}\tilde{\sigma}_{0}^{2}\tilde{e}_{0}(t)\) and \(B(t)=\frac{\sqrt{2}}{2}\tilde{\sigma}_{1}^{2}\tilde{e}_{1}(t)\), where \(C=C^{(3)}\) is the \(3\times 3\) DCT transformation matrix defined in (4). With some algebraic manipulation, we have

\[\tilde{e}(t+1)=\tilde{e}(t)-\eta\begin{bmatrix}\frac{\sqrt{3}}{3}(\operatorname{sign}[A(t)+B(t)]+\operatorname{sign}[A(t)]+\operatorname{sign}[A(t)-B(t)])\\ \frac{\sqrt{2}}{2}(\operatorname{sign}[A(t)+B(t)]-\operatorname{sign}[A(t)-B(t)])\\ \frac{\sqrt{6}}{6}\operatorname{sign}[A(t)+B(t)]-\frac{\sqrt{6}}{3}\operatorname{sign}[A(t)]+\frac{\sqrt{6}}{6}\operatorname{sign}[A(t)-B(t)]\end{bmatrix}, \tag{21}\]

and we include its complete derivation in Appendix E.7. With this particular choice of \(\tilde{\Sigma}\), (21) shows that the weight adaptation depends on the sign of three terms: \(A(t)\), \(A(t)+B(t)\) and \(A(t)-B(t)\). In Table 5 of Appendix E.8, we study the learning dynamics of signGD by analyzing all 27 sign combinations and their corresponding updates. We report the main results here and defer the detailed analysis to Appendix E.8. With a constant learning rate of \(\eta\), the asymptotic signGD solution converges to an \(O(\eta)\) neighborhood of the standard risk minimizer:

\[\limsup_{t\to\infty}|\tilde{w}_{i}(t)-\tilde{w}_{i}^{*}|=O(\eta), \tag{22}\]

where \(i\in\{0,1\}\). In particular, we demonstrate that \(\tilde{w}_{0}\) oscillates in an \(O(\eta)\) neighborhood of \(\tilde{w}_{0}^{*}\).
Consider \(T\) as the first iteration after which \(\tilde{w}_{0}\) starts oscillating, and define \(\Delta\tilde{w}_{2}\) as the sum of all the updates in \(\tilde{w}_{2}\) up to the \(T\)-th iteration. The limiting behavior of \(\tilde{w}_{2}\) under the signGD update is

\[\limsup_{t\to\infty}|\tilde{w}_{2}(t)|=|\tilde{w}_{2}(T)+O(\eta)|=|\tilde{w}_{2}(0)+\Delta\tilde{w}_{2}+O(\eta)|\,, \tag{23}\]

where \(\tilde{w}_{2}(0)\) is the weight at initialization. This means that after \(T\) iterations, for all \(t^{\prime}>T\), \(\tilde{w}_{2}(t^{\prime})\) stays in an \(O(\eta)\) neighborhood of \(\tilde{w}_{2}(T)\). As such, we have the asymptotic solution found by signGD:

\[\boldsymbol{\tilde{w}}^{\text{signGD}}=[\tilde{w}_{0}^{*},\ \tilde{w}_{1}^{*},\ \tilde{w}_{2}(0)+\Delta\tilde{w}_{2}]^{\top}+O(\eta). \tag{24}\]

From the perspective of training under the standard risk, the signGD solution is close to the optimum. Specifically, its standard risk is

\[\mathcal{R}_{\text{s}}(\boldsymbol{\tilde{w}}^{\text{signGD}})=\mathbb{E}\left[\ell(\tilde{X},Y;\boldsymbol{\tilde{w}}^{\text{signGD}})\right]=\frac{1}{2}\mathbb{E}\left[\left\langle\,\tilde{X}\,,\,\boldsymbol{\tilde{w}}^{\text{signGD}}-\tilde{w}^{*}\,\right\rangle^{2}\right]=O((\tilde{\sigma}_{0}^{2}+\tilde{\sigma}_{1}^{2})\eta^{2}). \tag{25}\]

Note that the standard risk of the GD solution is exactly zero, and by choosing a sufficiently small learning rate \(\eta\), the standard risk of the signGD solution can also be made close to zero. However, their adversarial risks are very different. Specifically, the adversarial risk of the asymptotic signGD solution is

\[\mathcal{R}_{\text{a}}(\boldsymbol{\tilde{w}}^{\text{signGD}})=\frac{\epsilon^{2}}{2}||\boldsymbol{\tilde{w}}^{\text{signGD}}||_{2}^{2}=\frac{\epsilon^{2}}{2}\left\{\tilde{w}_{0}^{*2}+\tilde{w}_{1}^{*2}+(\tilde{w}_{2}(0)+\Delta\tilde{w}_{2})^{2}\right\}. \tag{26}\]

We can compare it with the adversarial risk of the asymptotic solution found by GD under the same setup:

\[\mathcal{R}_{\mathrm{a}}(\boldsymbol{\tilde{w}}^{\mathrm{GD}})=\frac{\epsilon^{2}}{2}\left\{\tilde{w}_{0}^{*2}+\tilde{w}_{1}^{*2}+\tilde{w}_{2}(0)^{2}\right\}. \tag{27}\]

It can be observed that the main difference between the two adversarial risks in (26) and (27) arises from the difference in the weights learned at the irrelevant frequency. Since their use of the irrelevant frequency in the data is under-constrained, neither algorithm can reduce \(\tilde{w}_{2}\) to zero, and thus neither solution is the most robust standard risk minimizer. As discussed in Sec. 4.2.1, the GD solution is sensitive to weight initialization. Before understanding the \(\Delta\tilde{w}_{2}\) term in the signGD solution, we first introduce two assumptions on the synthetic dataset that serve to better emulate the distribution found in natural datasets. Consider a dataset with a strong task-relevant correlation between the relevant frequency components of the data and the target, a realistic scenario as we discussed in Sec. 3.2. In this case, \(|\tilde{w}_{0}^{*}|\) and \(|\tilde{w}_{1}^{*}|\) can be large. Additionally, with a weight initialization around zero, such as in the methods by He et al. (2015) and Glorot and Bengio (2010), the initial errors \(|\tilde{e}_{0}(0)|\) and \(|\tilde{e}_{1}(0)|\) can be large and close to \(|\tilde{w}_{0}^{*}|\) and \(|\tilde{w}_{1}^{*}|\) when \(|\tilde{w}_{0}^{*}|\gg|\tilde{w}_{0}(0)|\) and \(|\tilde{w}_{1}^{*}|\gg|\tilde{w}_{1}(0)|\). Moreover, it is discussed in Sec.
3.1, and later supported empirically in Figure 8 of Appendix F, that the distribution of spectral energy heavily concentrates at the low end of the frequency spectrum and decays quickly towards higher frequencies. Since \(\tilde{\sigma}_{i}^{2}\) is interpreted as the expected energy of a random variable at the \(i\)-th frequency, it is reasonable to expect that \(\frac{\tilde{\sigma}_{1}^{2}}{\tilde{\sigma}_{0}^{2}}<\frac{1}{3}\). With the two assumptions, we demonstrate that \(\Delta\tilde{w}_{2}\) is proportional to \(|\tilde{w}_{0}^{*}|\) or \(|\tilde{w}_{1}^{*}|\), depending on the initialization of \(|A(0)|\) and \(|B(0)|\). In particular, we have

\[|\Delta\tilde{w}_{2}|\approx\begin{cases}\sqrt{3}C\,|\tilde{w}_{0}^{*}|&\text{if}\quad|A(0)|<|B(0)|\\ \frac{3\sqrt{2}\tilde{\sigma}_{1}^{2}}{2\tilde{\sigma}_{0}^{2}}C\,|\tilde{w}_{1}^{*}|&\text{if}\quad|A(0)|>|B(0)|\end{cases} \tag{28}\]

where \(C\in[\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{3}]\). To quantitatively understand the robustness difference between solutions found by the two algorithms, we consider the ratio between the adversarial risks of the standard risk minimizers found by GD (27) and signGD (26) with a three-dimensional input space. We observe that the solution found by signGD is more sensitive to perturbations compared to the GD solution:

\[\frac{\mathcal{R}_{\mathrm{a}}(\boldsymbol{\tilde{w}}^{\mathrm{signGD}})}{\mathcal{R}_{\mathrm{a}}(\boldsymbol{\tilde{w}}^{\mathrm{GD}})}\approx\begin{cases}1+C_{3}\frac{\tilde{w}_{0}^{*2}}{\tilde{w}_{0}^{*2}+\tilde{w}_{1}^{*2}}&\text{if}\quad|A(0)|<|B(0)|\\ 1+C_{4}\frac{\tilde{w}_{1}^{*2}}{\tilde{w}_{0}^{*2}+\tilde{w}_{1}^{*2}}&\text{if}\quad|A(0)|>|B(0)|\,,\end{cases} \tag{29}\]

where \(\frac{1}{2}\leq C_{3}\leq 2\) and \(\frac{3}{4}\frac{\tilde{\sigma}_{1}^{4}}{\tilde{\sigma}_{0}^{4}}\leq C_{4}\leq 3\frac{\tilde{\sigma}_{1}^{4}}{\tilde{\sigma}_{0}^{4}}\). Given that this ratio is always greater than 1, the linear model obtained through GD is always more robust against \(\ell_{2}\)-bounded perturbations in comparison to the model obtained from signGD.

#### 4.2.3 Empirical Validation

To validate our analysis, in Figure 5 we create a three-dimensional dataset using \((\tilde{\sigma}_{0}^{2},\tilde{\sigma}_{1}^{2},\tilde{\sigma}_{2}^{2})=(0.01,0.0025,0)\) and \((\tilde{w}_{0}^{*},\tilde{w}_{1}^{*},\tilde{w}_{2}^{*})=(5,10,0)\), and compare the dynamics of the frequency-domain weight error on models trained by GD, Adam, RMSProp, and signGD. All models are initialized with the same weight and are trained using a fixed learning rate of \(0.01\). At each training iteration, we sample \(1000\) data points and compute the gradient based on the sampled data. We want to clarify that even though we analyze the weight update dynamics in both the frequency and spatial domains, the actual training still takes place in the spatial domain. In (15), we show that the GD solution \(\boldsymbol{\tilde{w}}_{i}^{\mathrm{GD}}(t)\) converges to \(\tilde{w}_{i}^{*}\) at a rate of \(1-\eta\tilde{\sigma}_{i}^{2}\). Therefore, when \(\tilde{\sigma}_{i}^{2}\) is small, learning can be particularly slow for weights associated with the \(i\)-th frequency, as shown in Figure 5a. On the other hand, notice in Table 5 that \(|\tilde{e}_{0}|\) gets reduced by at least \(\frac{\sqrt{3}}{3}\eta\) per iteration, regardless of the magnitude of \(\tilde{\sigma}_{0}^{2}\), for signGD. This means that the magnitude of \(\tilde{\sigma}_{i}^{2}\) does not directly affect the convergence speed.
Instead, the relative magnitude between \(A(t)\) and \(B(t)\) determines the frequency which receives priority during the learning process. As a result, we observe an acceleration for models trained by signGD. Next, we observe that the error trajectory for the model trained by signGD closely resembles the one from the model trained by Adam for \(\tilde{e}_{0}\) and \(\tilde{e}_{1}\). In the analysis of signGD, we show that \(|\tilde{e}_{2}|\) increases until \(|\tilde{e}_{0}|\) starts oscillating within an \(O(\eta)\) neighborhood. Figure 5a shows that this pattern can be observed in models trained by Adam as well. This shows that signGD is a suitable alternative for understanding the learning dynamics of models under the proposed linear regression task. For models trained by GD, since there is no update on the weight associated with the irrelevant frequency, \(\tilde{e}_{2}\) remains at the initialized value throughout training. To demonstrate the weight adaptation under signGD, we divide the training into two phases, as highlighted by the two background colors in Figure 5a. The green area indicates the phase in which \(|\tilde{e}_{0}|\) decreases while \(|\tilde{e}_{2}|\) increases. Once oscillation begins for \(|\tilde{e}_{0}|\), \(|\tilde{e}_{2}|\) can no longer be corrected; this behavior corresponds to the red area in Figure 5a.

In Figure 5b, we compare the standard population risk and the adversarial population risk of the different models. We notice that despite all models reaching near-zero standard population risk, their adversarial population risks are different. In particular, the adversarial population risk of models trained by adaptive gradient methods is higher than that of the model trained by GD, indicating lower robustness. Choosing \(\epsilon=\sqrt{2}\) in (12), the adversarial risk of those standard risk minimizers is exactly the squared \(\ell_{2}\) norm of the weight. With our choice of initialization, the resulting \(|A(0)|\) and \(|B(0)|\) are \(0.0289\) and \(0.0177\), respectively. This means that the ratio between the two adversarial risks is \(\frac{\mathcal{R}_{a}(\boldsymbol{\tilde{w}}^{\text{signGD}})}{\mathcal{R}_{a}(\boldsymbol{\tilde{w}}^{\text{GD}})}\in[1.04,1.15]\) according to (29), and this aligns with the ratio of \(1.146\) obtained empirically from the experiments. This simple problem illustrates how the optimization algorithms and an over-parameterized model might interact, and how learning with signGD can lead to a solution that is more prone to perturbations. In this section, we focused on analyzing the robustness of the solution from a frequency-domain perspective, that is, the behavior of \(\tilde{w}\) under an input perturbation \(\Delta\tilde{x}\).

Figure 5: **Comparing (a) the learning dynamics and (b) the standard and adversarial population risks of linear models trained by GD, Adam, RMSProp, and signGD.** We create a three-dimensional dataset using \((\tilde{\sigma}_{0}^{2},\tilde{\sigma}_{1}^{2},\tilde{\sigma}_{2}^{2})=(0.01,0.0025,0)\) and \((\tilde{w}_{0}^{*},\tilde{w}_{1}^{*},\tilde{w}_{2}^{*})=(5,10,0)\). All models are initialized with the same weight \((\tilde{w}_{0}(0),\tilde{w}_{1}(0),\tilde{w}_{2}(0))=(0.01,-0.01,0.02)\) and trained using a fixed learning rate of \(0.01\). (a) Dynamics of the error term. During the signGD training process, the error along the irrelevant frequency grows until \(\tilde{e}_{0}\) starts to oscillate around \(0\). The green highlighted areas correspond to the iterations before \(\tilde{e}_{0}\) starts to oscillate, and the red areas show that the error along the irrelevant frequency can no longer be corrected. (b) The standard population risk and the adversarial population risk (\(\epsilon=\sqrt{2}\)). Despite all models reaching near-zero standard population risk, their adversarial population risks are different: the adversarial population risk of models trained by adaptive gradient methods is higher than that of the model trained by GD, indicating lower robustness.
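The deterministic core of this experiment can be reproduced in a few lines (a sketch that iterates the population dynamics (15) and (20) directly instead of sampling minibatches, so the numbers differ slightly from the stochastic training above; the parameter values are those quoted in the caption):

```python
import numpy as np

d, eta = 3, 0.01
C = np.array([[np.sqrt((1 if k == 0 else 2) / d) * np.cos(np.pi / d * (j + 0.5) * k)
               for j in range(d)] for k in range(d)])
sigma2 = np.array([0.01, 0.0025, 0.0])   # frequency-domain variances (diagonal Sigma~)
w_star = np.array([5.0, 10.0, 0.0])      # true weights in the frequency domain
w0 = np.array([0.01, -0.01, 0.02])       # shared initialization

# GD admits the closed form (15): e_i(t) = (1 - eta * sigma_i^2)^t e_i(0).
e_gd = (1.0 - eta * sigma2) ** 500_000 * (w0 - w_star)

# signGD follows (20): e(t+1) = e(t) - eta * C sign(C^T Sigma~ e(t)).
e_sign = w0 - w_star
for _ in range(5_000):
    e_sign = e_sign - eta * C @ np.sign(C.T @ (sigma2 * e_sign))

w_gd, w_sign = w_star + e_gd, w_star + e_sign
# By (12) with eps = sqrt(2), the adversarial risk is the squared l2 norm, so the
# ratio below should fall in the [1.04, 1.15] interval predicted by (29).
print(np.sum(w_sign ** 2) / np.sum(w_gd ** 2))
```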
In Appendix E.9, we present a spatial interpretation of the result and demonstrate how signals with irrelevant frequencies contain spatially redundant dimensions.

## 5 Connecting the Norm of Linear Models to the Lipschitzness of Neural Networks

The takeaway from the over-parameterized linear regression analysis is that among all standard risk minimizers, the minimum norm solution is the most robust one. That is, a smaller weight norm implies better robustness. This suggests a connection between the weight norm and model robustness. Nonetheless, the major limitation of the analysis is that it is designed for a linear model. In this section, we generalize such a connection to the deep learning setting and verify it using the robustness of neural networks trained by different algorithms. One major obstacle is that the notion of weight norm as defined for linear models is not generally applicable to neural networks. However, we can still relate the weights of a network to its sensitivity with respect to changes in the input space. Consider a single-layer ReLU-activated feedforward network with \(x\in\mathbb{R}^{d}\) and \(W\in\mathbb{R}^{D\times d}\). With a perturbation of \(\Delta x\in\mathbb{R}^{d}\) constrained by the vector \(\ell_{p}\)-norm, the maximum change in the model output, as measured by the same norm, can be bounded using

\[\left\|\mathrm{ReLU}\left(W(x+\Delta x)\right)-\mathrm{ReLU}\left(Wx\right)\right\|_{p}\leq\left\|W\Delta x\right\|_{p}\leq\left\|W\right\|_{p}\left\|\Delta x\right\|_{p}, \tag{30}\]

where \(\|W\|_{p}\) denotes the vector \(\ell_{p}\)-norm induced matrix norm of the weight \(W\) and is referred to as the Lipschitz constant of this single-layer model. Under the \(\ell_{p}\) vector norm, a function \(f\) is said to be Lipschitz continuous if \(||f(x_{1})-f(x_{2})||_{p}\leq L||x_{1}-x_{2}||_{p}\) for all \(x_{1},x_{2}\) in its domain, for some real-valued Lipschitz constant \(L\geq 0\).4 Indeed, the Lipschitz constant of a function with respect to its inputs captures how sensitive the model is to changes in the input space.

Footnote 4: Any value of \(L\) satisfying the Lipschitz condition is considered a valid Lipschitz constant. For the sake of clarity, we will refer to the smallest (optimal) Lipschitz constant as \(L\).

In the single-layer model example, its Lipschitz constant is exactly the matrix norm of the weight. More generally, consider the feed-forward neural network as a series of function compositions:

\[f(x)=(\phi_{l}\circ\phi_{l-1}\circ...\circ\phi_{1})(x),\]

where each \(\phi_{i}\) is a linear operation, an activation function, or a pooling operation.
A particularly useful property of Lipschitz functions is that the composition of Lipschitz functions with Lipschitz constants \(L_{1}\), \(L_{2}\),..., \(L_{N}\) w.r.t. the same norm is also Lipschitz, with an upper bound on its Lipschitz constant of \(L\leq L_{1}L_{2}...L_{N}\). Denoting the Lipschitz constant of a function \(f\) as \(L(f)\), we can establish an upper bound on the Lipschitz constant of the entire feed-forward neural network using

\[L(f)\leq\prod_{i=1}^{l}L(\phi_{i}). \tag{31}\]

As such, for a multi-layer neural network that comprises repeated layers of linear operations followed by non-linear activations, we can upper bound the change in model output with respect to the change in the input space by multiplying the operator norms of the weights. It is important to realize that (31) is not a tight upper bound, and in fact, computing the exact Lipschitz constant of a neural network is NP-hard (Virmaux & Scaman, 2018). Nonetheless, this approach allows us to draw connections between the weights and the robustness of the model in the context of neural networks.

Results in Sec. 4 indicate that linear models trained by signGD have larger weight norms, indicating less robustness. Therefore, we expect in the deep learning setting that neural networks trained by SGD have a smaller Lipschitz upper bound and are hence more robust, as shown in Figure 1. To verify this, we follow the techniques in Gouk et al. (2021) and compute an upper bound on the Lipschitz constant of the same neural networks trained by SGD, Adam, and RMSProp in Figure 1. Results are shown in Table 1. The result shows that across all datasets and architectures, models trained by SGD have a smaller upper bound on the Lipschitz constant compared to models trained by the two adaptive gradient methods.

\begin{table}
\begin{tabular}{c l l l l l l l l}
 & Dataset & MNIST & Fashion & CIFAR10 & CIFAR100 & SVHN & Caltech101 & Imagenette \\ \hline
\multirow{3}{*}{\(\prod_{i=1}^{l}L(\phi_{i})\)} & SGD & **3.80** & **3.83** & **26.81** & **40.41** & **22.65** & **18.53** & **23.99** \\
 & Adam & 5.75 & 8.12 & 28.70 & 41.87 & 30.45 & 26.20 & 28.55 \\
 & RMSProp & 6.21 & 5.11 & 37.75 & 41.71 & 28.31 & 45.84 & 27.11 \\ \hline
\multirow{3}{*}{\begin{tabular}{c} Averaged \\ Robust Acc. \\ \end{tabular} } & SGD & **77.97\%** & **77.95\%** & **63.21\%** & **55.65\%** & **69.08\%** & **71.42\%** & **67.59\%** \\
 & Adam & 65.64\% & 67.60\% & 57.71\% & 45.25\% & 65.60\% & 55.03\% & 58.86\% \\
 & RMSProp & 63.54\% & 71.34\% & 56.47\% & 47.55\% & 65.37\% & 53.16\% & 57.98\% \\ \hline
\end{tabular}
\end{table}
Table 1: **Comparing the upper bound on the Lipschitz constant and the averaged robust accuracy of neural networks trained by SGD, Adam, and RMSProp.** We follow Gouk et al. (2021) to compute the Lipschitz constants of each layer in isolation and multiply them together to establish an upper bound on the constant of the entire network. Notice that across all selected datasets, models trained by SGD have a considerably smaller upper bound compared to models trained by Adam and RMSProp. In Figure 1, we demonstrate the robustness of the neural networks under Gaussian noise, \(\ell_{2}\) and \(\ell_{\infty}\) bounded adversarial perturbations (Croce and Hein, 2020). Here, we average the accuracy across the perturbations to get a single score quantifying the model's robustness. All results are averaged over three independently initialized and trained models.
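A minimal sketch of how such an upper bound can be computed for a stack of weight matrices (using exact operator norms on toy matrices; Gouk et al. (2021) compute per-layer norms with power iteration and handle convolutional layers specially, which we omit here):

```python
import numpy as np

def lipschitz_upper_bound(weights, p=2):
    """Upper bound (31): the product of per-layer operator norms.
    ReLU and max pooling are 1-Lipschitz, so only linear layers contribute."""
    bound = 1.0
    for W in weights:
        # ord=2 gives the spectral norm (largest singular value);
        # ord=np.inf gives the l_inf-induced norm (max absolute row sum).
        bound *= np.linalg.norm(W, ord=2 if p == 2 else np.inf)
    return bound

# Toy 3-layer stack of weight matrices:
rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 32)), rng.normal(size=(64, 64)), rng.normal(size=(10, 64))]
print(lipschitz_upper_bound(weights))
```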
In Figure 1, we demonstrate the robustness of the neural networks under Gaussian noise, \(\ell_{2}\) and \(\ell_{\infty}\) bounded adversarial perturbations (Croce & Hein, 2020). In Table 1, we average the accuracy across the perturbations and get a single score quantifying the model's robustness. We observe that a smaller upper bound on the Lipschitz constant of a neural network implies better robustness against perturbations. ## 6 Conclusions In this paper, we highlighted the robustness difference between models trained by SGD and adaptive gradient methods, particularly Adam and RMSProp. To understand this phenomenon, we leveraged a frequency-domain analysis, and demonstrated that natural datasets contain frequencies that are irrelevant to minimizing the standard training loss. Empirically, through a band-limited perturbation analysis on neural networks trained on common vision datasets, we showed that models trained by the adaptive gradient methods utilize the statistics in the irrelevant frequencies, and thus they experience a large drop in performance when the same statistics become corrupted. Analytically, on a synthetic linear regression task where the dataset was designed to contain target-irrelevant frequencies, we showed that while both GD and signGD can find solutions with standard risks close to zero, the adversarial risk of the asymptotic solution found by signGD can be larger than that of GD. Such results from the linear analysis explained the observation in Figure 1 and suggested that smaller weight norms indicate greater model robustness. Finally, in the deep learning setting, we showed that models trained by SGD have a noticeably smaller upper bound on the Lipschitz constant than those trained by Adam and RMSProp. Our work has two limitations. First, when conducting a theoretical analysis of various optimizers, we opted for signGD as a simpler alternative to Adam and RMSProp. Additionally, our focus was primarily on a linear model. However, it is crucial to acknowledge that deep neural networks inherently possess non-linear characteristics, which limit the depth of insights derived from linear models. As a promising future direction, incorporating advanced analytical tools such as neural tangent kernels (Jacot et al., 2018) could provide a deeper understanding of network dynamics. #### Acknowledgments We acknowledge the funding from the CIFAR AI Chairs program, as well as the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grant program (2021-03701). Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. We would like to also thank the members of the Adaptive Agents Lab who provided feedback on a draft of this paper.
2302.03620
Recent advances in the Self-Referencing Embedding Strings (SELFIES) library
String-based molecular representations play a crucial role in cheminformatics applications, and with the growing success of deep learning in chemistry, have been readily adopted into machine learning pipelines. However, traditional string-based representations such as SMILES are often prone to syntactic and semantic errors when produced by generative models. To address these problems, a novel representation, SELF-referencing Embedded Strings (SELFIES), was proposed that is inherently 100% robust, alongside an accompanying open-source implementation. Since then, we have generalized SELFIES to support a wider range of molecules and semantic constraints and streamlined its underlying grammar. We have implemented this updated representation in subsequent versions of selfies, where we have also made major advances with respect to design, efficiency, and supported features. Hence, we present the current status of selfies (version 2.1.1) in this manuscript.
Alston Lo, Robert Pollice, AkshatKumar Nigam, Andrew D. White, Mario Krenn, Alán Aspuru-Guzik
2023-02-07T17:24:08Z
http://arxiv.org/abs/2302.03620v1
# Recent advances in the Self-Referencing Embedding Strings (SELFIES) library ###### Abstract String-based molecular representations play a crucial role in cheminformatics applications, and with the growing success of deep learning in chemistry, have been readily adopted into machine learning pipelines. However, traditional string-based representations such as SMILES are often prone to syntactic and semantic errors when produced by generative models. To address these problems, a novel representation, SELF-referencing Embedded Strings (SELFIES), was proposed that is inherently 100% robust, alongside an accompanying open-source implementation selfies. Since then, we have generalized SELFIES to support a wider range of molecules and semantic constraints and streamlined its underlying grammar. We have implemented this updated representation in subsequent versions of selfies, where we have also made major advances with respect to design, efficiency, and supported features. Hence, we present the current status of selfies (version 2.1.1) in this manuscript. Our library, selfies, is available at GitHub ([https://github.com/aspuru-guzik-group/selfies](https://github.com/aspuru-guzik-group/selfies)). ###### Contents * I Introduction * II Timeline and Advances * III SELFIES Specification * III.1 Syntax * III.2 The SELFIES Grammar * III.3 Simple Chain Derivation * III.4 Branch Derivation * III.5 Ring Derivation * IV Library Design * IV.1 Core Functions * IV.2 Explaining Translation * IV.3 Customization Functions * IV.4 Utility Functions * V Results and Discussion * VI Conclusions and Outlook ## I Introduction In recent years, machine learning (ML) has become a powerful tool to tackle challenging problems in chemistry. Machine learning pipelines involve three crucial elements: data, representation, and models. Choosing the proper representation is important, as it defines the space of models available to work with the data and directly impacts model performance. For molecules, one of the more widely-used classes of representations encodes molecules as strings (i.e., the string-based molecular representations). These representations are popular since they can leverage the rich collection of ML tools that have been developed for sequential data [1, 2]. Historically, the most employed string representation is the Simplified Molecular Input Line Entry System (Smiles), which was introduced by Weininger in 1988 [3]. Currently, Smiles has become the _de facto_ standard representation in cheminformatics and has historically been a key component of central applications in the field, such as chemical databases. The main appeal of Smiles is its simple underlying grammar, which allows for the rigorous specification of molecules in a manner that can be parsed efficiently, and which is human-readable, at least for small molecules. However, in an ML setting, this grammar carries two intrinsic weaknesses. First, many strings constructed from Smiles symbols are _syntactically_ invalid under the Smiles grammar, i.e., the strings cannot be interpreted as molecular graphs [4, 5]. For instance, Smiles requires opening and closing parentheses to appear in matching pairs, so the Smiles string C(CC is invalid. This is problematic because ML models that produce Smiles strings, especially generative models, can be prone to these syntactic errors, rendering a significant fraction of their output meaningless. 
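To make this failure mode tangible, here is a minimal sketch using RDKit (a third-party cheminformatics toolkit; the snippet is our illustration, not part of the representations discussed here) that shows a syntactically broken Smiles string failing to parse:

```
from rdkit import Chem

# MolFromSmiles returns None when a string cannot be parsed
# into a molecular graph.
for smi in ["CCCC",   # n-butane: parses fine
            "C(CC"]:  # unmatched parenthesis: syntactically invalid
    mol = Chem.MolFromSmiles(smi)
    print(smi, "->", "valid" if mol is not None else "invalid")
```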
One strategy is to constrain the ML architecture to reduce the number of invalid structures, which has been demonstrated successfully in the literature [6, 7, 8]. This approach, of course, needs significant computational effort and cannot be transferred directly to other systems without model retraining, model architecture adjustments, or domain-specific design considerations. An alternative and more fundamental solution is to define representations that are inherently robust. A first step in this direction was taken by DeepSMILES [9], a string-based representation derived from SMILES that reworked some of its most syntactically susceptible rules. While DeepSMILES solves most of the syntactic errors, it does not address the second weakness of SMILES, namely, that even syntactically valid strings may not necessarily correspond to a physical molecule. Typically, this occurs when a string represents a molecular graph that exceeds normal chemical valences, in which case we call the string _semantically_ invalid. For example, the SMILES string CO=C is semantically invalid because it erroneously specifies a trivalent oxygen atom, which is chemically unstable and reactive. To eliminate both syntactic and semantic invalidities in string-based molecular representations on a fundamental level, an entirely new representation termed SELF-referencing Embedded Strings (Selfies) has been proposed by some of us [10]. By construction, Selfies is 100% _robust_ to both syntactic and semantic errors. That is, any combination of Selfies symbols specifies a molecular graph that obeys chemical valences (a minimal decoding sketch illustrating this property appears at the end of §II). This is achieved through a small Chomsky type-2, context-free grammar [11] that is augmented with self-referencing functions to handle the generation of branches and rings. Since its release, Selfies has enabled or improved numerous applications, ranging from molecular design [12; 13; 14; 15] to interpretability [16] to image-to-string and string-to-string translations [17; 18], and has been extended to incorporate functional groups and other fragments [19]. For an extensive summary of its applications and opportunities, we refer readers to the recent community paper on Selfies [20]. Herein, we introduce selfies 2.1.1, the latest version of the open-source Python implementation of Selfies. In particular, we provide a detailed look into its history, developments, underlying algorithms, design, and performance. Together with the community, we have recently overviewed potential extensions and formulated 16 concrete future projects for Selfies and other robust molecular string representations [20]. We hope that this manuscript will also help in developing some of these extensions and ideas. Our software package selfies can be installed with pip install selfies and is available at GitHub ([https://github.com/aspuru-guzik-group/selfies](https://github.com/aspuru-guzik-group/selfies)) under the Apache 2.0 license, along with comprehensive documentation and tutorials. ## II Timeline and advances The selfies library version that implemented the representation from Krenn _et al._ [10] was first released as selfies 0.2.4 in 2019. This older version provided an API of two translation functions, where a restricted subset of organic, uncharged, nonaromatic SMILES strings could be converted to and from Selfies strings. In addition, the internal algorithms behind selfies relied heavily on direct string manipulations, so they were computationally inefficient and difficult to maintain. 
Since then, selfies has undergone several major redesigns that have significantly advanced the algorithmic handling of both SMILES and Selfies. Most importantly, the underlying grammar of selfies has been streamlined and generalized in subsequent versions. We will now describe the changes up until selfies 2.1.1, the most recent version of selfies at the time of publication of this work. One major modification we made is that selfies now uses directed molecular graphs to internally represent SMILES and Selfies strings. This has afforded selfies greater efficiency and flexibility and enabled a number of additional extensions to be made. For example, we added support for aromatic molecules by kekulizing SMILES strings with aromatic symbols before they are translated into Selfies. Furthermore, we handle species with partial charges, radicals, and molecules with explicit hydrogens, non-standard isotopes, and stereochemical definitions in a fully syntactically and semantically robust way. Besides the standard constraints on the number of valences, users can now specify their own constraints, and we provide built-in relaxed and stricter constraint presets that can be selected conveniently. Most recently, we introduced the ability to trace the connection between input and output tokens when translating between Selfies and SMILES. Table 1 gives a brief changelog of the major releases of selfies and their associated advancements. While the ideas outlined in Krenn _et al._ [10] ensuring the validity of the representation remain at the core of selfies, the manifold implementation improvements and extensions are the novelties that we detail in this paper. Hereafter, unless specified otherwise, we will use selfies to refer to selfies 2.1.1 in particular and Selfies to refer to the representation that selfies 2.1.1 implements. We will provide a complete and formal description of the updated representation in §III and describe the API of selfies in §IV.

\begin{table} \begin{tabular}{c c l} \hline \hline Version & Year(s) & Description \\ \hline 0.1.1 & (Jun) 2019 & Initial release of selfies. \\ 0.2.4 & (Oct) 2019 & Release of selfies that implements the representation from Krenn _et al._ [10]. \\ 1.0.x & 2020-21 & Expanded the support of selfies to a greater subset of SMILES strings, including strings with aromatic atoms, isotopes, charged species, and certain stereochemical specifications. To do so, the underlying grammar used by selfies was both streamlined and generalized. \\ & & Added support for the customization of the semantic constraints used by selfies. \\ & & Significantly improved the efficiency of translation between SELFIES and SMILES. \\ & & Added a variety of utility functions to make the handling of SELFIES strings convenient. \\ 2.0.x & 2021 & Updated the SELFIES alphabet to be more human-readable and standardized. \\ & & Improved handling of stereochemical specifications in SELFIES involving ring bonds. \\ 2.1.x & 2022 & Added support for explaining translations between SELFIES and SMILES through attributions. \\ \hline \hline \end{tabular} \end{table} Table 1: A timeline of the various releases of selfies.
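Before turning to the formal specification, a minimal decoding sketch (assuming selfies is installed and its default constraints are active; exact output strings may vary between versions) illustrates the robustness property claimed in the introduction:

```
import selfies as sf

# The SELFIES counterpart of the semantically invalid SMILES "CO=C":
# [=C] requests a double bond to an oxygen with only one free valence.
# The bond order is clipped, so a valid molecule is still produced
# (we would expect "COC" here under the default constraints).
print(sf.decoder("[C][O][=C]"))

# More generally, any finite symbol sequence decodes to some valid SMILES.
print(sf.decoder("[#N][O][=Ring1][Branch2][C]"))
```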
## III Selfies specification Being 100% robust, every string of Selfies symbols corresponds to a SMILES string that is both syntactically and semantically valid. Recall that we call a SMILES string semantically valid if it is syntactically valid and represents a molecular graph that obeys normal chemical valences. Within Selfies, these chemical valences are encoded as a constraint function \(\nu\colon\mathcal{A}\to\mathbb{N}_{0}\), where \(\mathcal{A}\) is a finite universe of the atom types (e.g., \(\mathcal{A}=\{\mathsf{C},\mathsf{N},\mathsf{O},\mathsf{F},\ldots\}\)) of interest and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). The valences represented by \(\nu\) dictate that an atom \(A\) must assume \(\nu(\mathsf{type}(A))\) incident bonds in total. Note that if a SMILES string obeys the valences \(\nu\), each of its atoms \(A\) makes _at most_ \(\nu(\mathsf{type}(A))\) explicit bonds within the string. There is a possibly-strict inequality in this case due to the way Smiles automatically adds implicit hydrogens until chemical valences are satisfied. In practice, the mapping \(\nu\) is rationally chosen to align with physical considerations and established cheminformatics packages such as RDKit [21]. For example, a plausible setting might map \[\nu(\mathsf{C})=4,\quad\nu(\mathsf{N})=3,\quad\nu(\mathsf{O})=2,\quad\nu( \mathsf{F})=1 \tag{1}\] which is the default behaviour of selfies (see §IV.3). We formulate chemical valences in this manner to emphasize that although Selfies depends on \(\nu\), it is not fixed to any particular setting of \(\nu\). That is to say, Selfies can enforce rule sets induced by any arbitrary mapping \(\nu\colon\mathcal{A}\to\mathbb{N}_{0}\), even if they are not chemically meaningful. To highlight an absurd example, the uniform constraints \(\nu(\cdot)=1000\) can be used in principle, which corresponds to effectively having no semantic constraints at all. In this sense, Selfies can be thought of as a general framework for an adjustable set of constraints \(\nu\). In the ensuing discussion, we will describe Selfies under the assumption that some constraint function \(\nu\) is fixed beforehand. ### Syntax Before explaining the Selfies specification, we make a brief aside and give an overview of the form of Selfies strings. Simply, a valid Selfies string is _any_ finite sequence of Selfies symbols joined together. For ease of visual partitioning, all Selfies symbols are enclosed by square brackets. Hence, a generic Selfies string is of the form \[[\ldots][\ldots][\ldots] \tag{2}\] where the \(\ldots\) is a placeholder for a symbol-specific token. We can further categorize Selfies symbols into four main types, namely, atom, ring, branch, and miscellaneous, and characterize the syntax of each in the following. Throughout, let \(\varepsilon\) be the empty string and, given \(n\) strings \((\sigma_{i})_{i=1}^{n}\), let \(\sigma_{1}\,\sigma_{2}\,\cdots\,\sigma_{n}\) denote their concatenation. **Atom Symbols.** The general Selfies atom symbol has the form \[[\beta\,\alpha\,]\qquad\alpha=\alpha_{\mathrm{iso}}\,\alpha_{\mathrm{elem}}\,\alpha_{\mathrm{chiral}}\,\alpha_{\mathrm{H}}\,\alpha_{\pm} \tag{3}\] where \(\beta\in\{\varepsilon,=,\#,/\,,\backslash\}\) is a Smiles-like bond symbol and \[\alpha_{\mathrm{iso}} \in\{\varepsilon,1,2,3,\ldots\} \tag{4}\] \[\alpha_{\mathrm{elem}} \in\{\text{element symbols}\}\] \[\alpha_{\mathrm{chiral}} \in\{\varepsilon,\mathtt{@},\mathtt{@}\mathtt{@}\}\] \[\alpha_{\mathrm{H}} \in\{\varepsilon,\mathtt{H0},\mathtt{H1},\mathtt{H2},\ldots\}\] \[\alpha_{\pm} \in\{\varepsilon,\mathtt{+1},\mathtt{-1},\mathtt{+2},\ldots\}\] collectively specify an atom type \(\mathsf{type}(\alpha)\) in a Smiles-like fashion (the atom's isotope number, atomic number, chirality, number of attached hydrogens, and charge, respectively, and sometimes optionally). Notably, each Selfies atom symbol is semantically unique, i.e., different atom symbols are not interchangeable. This is not the case in Smiles due to shorthand abbreviations in how attached hydrogens and charge can be represented. For example, the Smiles atom symbol pairs ([Fe++], [Fe+2]) and ([CH], [CH1]) are interchangeable. To create a more standardized alphabet of symbols, we remove this redundancy in Selfies. **Branch Symbols.** The general Selfies branch symbol has the form \[[\beta\,\mathtt{Branch}\,\ell\,] \tag{5}\] where \(\beta\in\{\varepsilon,=,\#\}\) is a Smiles-like bond symbol and \(\ell\in\{1,2,3\}\).
**Ring Symbols.** Selfies ring symbols can be further subdivided into two sub-types. These are of the form \[\begin{split}[\beta\,\texttt{Ring}\,\ell\,]\\ [\beta_{1}\,\beta_{2}\,\texttt{Ring}\,\ell\,]\end{split} \tag{6}\] where \(\beta\in\{\varepsilon,\texttt{=},\texttt{\#}\}\) and \[\beta_{1},\beta_{2}\in\{\texttt{-},\texttt{/},\texttt{\backslash}\},\ \textit{not}\ \text{both}\ \beta_{1}=\beta_{2}=\texttt{-} \tag{7}\] are Smiles-like bond symbols and \(\ell\in\{1,2,3\}\), similar to branch symbols. The second ring symbol type (Eq. 6) is used to handle stereochemical specifications across double ring bonds (see §III.5). **Miscellaneous Symbols.** Selfies has a few auxiliary symbols that are not core to the representation. These symbols still have common use cases and are specially recognized by the functions in selfies that translate between Selfies strings and SMILES strings (see §IV.1): * The dot symbol `.`, which can be used to express multiple disconnected fragments in a single Selfies string, similar to its role in Smiles. The dot symbol is interpreted by treating it as a delimiter and splitting the Selfies string across the symbol. Then, each token is treated as an independent Selfies string. * The [nop] (for "no-operation") symbol, which is a special padding symbol ignored by selfies. Table 2 provides examples of Selfies atom, branch, and ring symbols.

\begin{table} \begin{tabular}{l l} \hline \hline Type & Examples \\ \hline Atom & [=O], [C@H1], [N+1] \\ Branch & [Branch3], [=Branch1], [\#Branch2] \\ Ring & [=Ring1], [/\textbackslash{}Ring3], [Ring2] \\ Misc. & ., [nop] \\ \hline \hline \end{tabular} \end{table} Table 2: Example SELFIES symbols, by symbol type.
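Because every symbol is delimited by square brackets (the dot being the only bare symbol), tokenizing a Selfies string is simple; the toy sketch below (our illustration; the library function split_selfies() described in §IV.4 is the canonical tool) does it with one regular expression:

```
import re

# A bracketed symbol "[...]" or the bare dot symbol.
SYMBOL = re.compile(r"\[[^\]]*\]|\.")

def toy_split(selfies_string):
    return SYMBOL.findall(selfies_string)

print(toy_split("[C][=C][Branch1].[O]"))
# ['[C]', '[=C]', '[Branch1]', '.', '[O]']
```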
### The SELFIES Grammar Now, we return to explaining the practical algorithm used to derive Smiles strings from their correspondent Selfies strings. To do so, we first introduce the notion of a context-free grammar. A context-free grammar \(G\) is a tuple \(G=(V,\Sigma,R,S)\), where \(V\) and \(\Sigma\) are disjoint finite sets of nonterminal and terminal symbols, respectively, \(R\subseteq V\times(V\cup\Sigma)^{*}\) is a finite relation, and \(S\in V\) is a so-called start symbol. Under \(G\), strings of terminal symbols can be derived by performing a finite sequence of replacements starting with the single-symbol string \(\sigma_{0}=S\). At each step \(t\), if the current string \(\sigma_{t}\) contains a nonterminal symbol \(\mathbf{A}\in V\) (i.e., \(\sigma_{t}=\rho_{1}\,\mathbf{A}\,\rho_{2}\) for \(\rho_{1},\rho_{2}\in(V\cup\Sigma)^{*}\)) and there is an \((\mathbf{A},\alpha)\in R\), then we replace \(\mathbf{A}\) with \(\alpha\) to get the next string \(\sigma_{t+1}=\rho_{1}\,\alpha\,\rho_{2}\). For this reason, tuples \((\mathbf{A},\alpha)\in R\) are called production rules, and are suggestively notated \(\mathbf{A}\rightarrow\alpha\). The derivation terminates once only terminal symbols remain. The derivation of Smiles strings under Selfies is similar to the preceding process. In fact, a context-free grammar underlies Selfies, which we call the Selfies grammar. Specifically, the Selfies grammar takes \[V =\{\mathbf{S},\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3}, \ldots,\mathbf{X}_{\max\nu(\mathcal{A})}\}\] \[\Sigma =\{\text{{Smiles} symbols, e.g., }\mathtt{C},\ \mathtt{=},\ \mathtt{(},\ \mathtt{)}\}\] \[S =\mathbf{S} \tag{8}\] where \(\max\nu(\mathcal{A})\) is the maximum valence over all atom types. The production rules \(R\) will be characterized later. Given a Selfies string, its corresponding Smiles string is then derived through a trajectory of replacements starting from \(\mathbf{S}\), as previously described. However, there are two further modifications that provide Selfies its strong robustness. First, the replacements that are performed are not chosen arbitrarily, but are instead dictated by the Selfies string of interest. At each derivation step, the next symbol of the Selfies string is read off and fully specifies which production rule is applied. We systematically design this symbol-to-rule mapping such that the final derived Smiles string will always be valid. Second, Selfies augments the grammar with self-referencing functions. These self-referencing functions manipulate the derivation process in more complicated ways than simple replacements, so they are not production rules. However, as before, the manner in which these self-referencing functions are applied is also dictated by the symbols in the Selfies string. Thus, a Selfies string can be viewed as a recipe of instructions (the symbols) that guides string derivation under the Selfies grammar. ### Simple Chain Derivation Herein, we begin by considering the simplest type of Selfies strings, those that correspond to simple chains of atoms. In Smiles, simple chains of atoms are represented by sequences of alternating atom and bond Smiles symbols, the latter of which can sometimes be left implicit by convention. Examples of such Smiles strings include CCCC (n-butane) and O=C=O (carbon dioxide). Analogously, in Selfies, simple chains are represented by sequences of Selfies atom symbols, which can be understood as playing a similar role as a grouping of a Smiles atom symbol and its preceding Smiles bond symbol. Simple chains are the easiest to derive in Selfies, because the process occurs only through mere replacements, as in regular context-free grammars. The derivation of a simple chain starts with the initial string \(\sigma_{0}=\mathbf{S}\). Recall that the Selfies symbols dictate how production rules are applied. For simple chains, this is achieved by having each pair of Selfies atom symbol and nonterminal symbol \(\mathbf{A}\in V\) determine a production rule of the form \(\mathbf{A}\to\alpha\,\mathbf{A}^{\prime}\), where \(\alpha\in\Sigma^{*}\) is a terminal string and \(\mathbf{A}^{\prime}\in V\cup\{\varepsilon\}\). 
Then, a sequence of replacements is iteratively performed by treating the Selfies string as a queue \(\mathcal{Q}\) of Selfies symbols. At each step, the head of \(\mathcal{Q}\) is popped1 and, with a nonterminal symbol in the current string \(\sigma_{t}\), is used to select and apply a production rule to get the next string \(\sigma_{t+1}\). Note that \(\sigma_{0}=\mathbf{S}\) is itself a single nonterminal symbol, and each rule induced by a Selfies atom symbol replaces one nonterminal symbol by another. Hence, throughout the derivation, the current string \(\sigma_{t}\) will always contain at most one nonterminal symbol and there is never any ambiguity as to how or which production rule is applied. Once the current string has only terminal symbols or \(\mathcal{Q}\) is empty, the process ends (since Selfies strings are finite, termination necessarily occurs). The final derived SMILES string is read off by dropping all nonterminal symbols. Footnote 1: To _pop_ or _dequeue_ the head of a queue \(\mathcal{Q}\) means to fetch and then remove the oldest item in \(\mathcal{Q}\). We now fully enumerate the Selfies atom symbol to production rule mapping. Let \([\beta\,\alpha\,]\) be a generic atom symbol, as described in Eq. 3. Based on this symbol, we first define the terminal string \[\tilde{\alpha}=\begin{cases}\alpha,&\text{if }\alpha\in\mathcal{O}\\ [\,\alpha\,],&\text{otherwise}\end{cases} \tag{9}\] where \(\mathcal{O}=\{\mathtt{B},\mathtt{C},\mathtt{N},\mathtt{O},\mathtt{S},\mathtt{ P},\mathtt{F},\mathtt{Cl},\mathtt{Br},\mathtt{I}\}\) are the symbols of elements in the SMILES organic subset. The string \(\tilde{\alpha}\) can be thought of as transforming \(\alpha\) into the SMILES syntax. Then \([\beta\,\alpha\,]\) together with the nonterminal symbol \(\mathbf{S}\in V\) specifies the production rule: \[\mathbf{S}\to\tilde{\alpha}\,\mathbf{X}_{\ell} \tag{10}\] where \(\ell=\nu(\mathsf{type}(\alpha))\) is the valence of the atom type specified by \(\alpha\), and we hereafter define \(\mathbf{X}_{0}=\varepsilon\) to be the empty string to handle the case where \(\ell=0\). The atom symbol \([\beta\,\alpha\,]\) together with the symbol \(\mathbf{X}_{i}\in V\), where \(1\leq i\leq\max\nu(\mathcal{A})\), specifies a production of the form: \[\mathbf{X}_{i}\to\begin{cases}\beta_{\downarrow}(d_{0})\,\tilde{\alpha}\, \mathbf{X}_{\ell-d_{0}},&\text{if }\ell>0\\ \varepsilon,&\text{if }\ell=0\end{cases} \tag{11}\] where \(d_{0}=\min(\ell,i,d(\beta))\). Here, \(d(\beta)\) is a function that returns the order of the bond type represented by \(\beta\): \[d(\beta)=\begin{cases}1,&\text{if }\beta\in\{\varepsilon,/,\backslash\}\\ 2,&\text{if }\beta=\mathtt{=}\\ 3,&\text{if }\beta=\mathtt{\#}\end{cases} \tag{12}\] ### Branch Derivation Branches are derived from the branch symbols \([\beta\,\mathtt{Branch}\,\ell\,]\) of Eq. 5. When such a symbol is popped from \(\mathcal{Q}\), the symbols following it are consumed to derive a branch substring, which is then spliced into the current string. First, \(\ell\) symbols are popped from \(\mathcal{Q}\) and converted into integer values by the mapping summarized in Table 3. Let \(c_{1},\ldots,c_{\ell}\) be the indices in first-to-last order of retrieval. In the event that \(\mathcal{Q}\) contains fewer than \(\ell\) symbols, the missing indices are set to have a default value of 0. Next, these indices are identified with a natural number \(N\in\mathbb{N}\) by treating them as hexadecimal digits: \[N=1+\sum_{k=1}^{\ell}16^{\ell-k}c_{k} \tag{17}\] Then, \(N\) symbols from \(\mathcal{Q}\) (or all symbols in \(\mathcal{Q}\), if fewer exist) are consumed to form a new Selfies string, and with start symbol \(S=\mathbf{X}_{d_{0}}\) (instead of \(S=\mathbf{S}\) as before), this substring is recursively derived into a Smiles string \(\rho_{0}\). 
We take \(\rho=\varepsilon\) if \(\rho_{0}=\varepsilon\), and \(\rho=\left(\,\rho_{0}\,\right)\) otherwise.2 Footnote 2: A minor technicality occurs if \(\rho_{0}\) starts with a branch parenthesis (, in which case \(\rho\) would be of the form \(\left(\left(\alpha_{1}\right)\cdots\left(\alpha_{m}\right)\alpha_{m+1}\right)\) for strings \(\alpha_{k}\in\Sigma^{*}\) that do not start with (. This would result in an invalid SMILES string because branches cannot start with other branches in SMILES. To amend this, we naturally interpret and replace \(\rho\) with the string \(\left(\alpha_{1}\right)\cdots\left(\alpha_{m}\right)\left(\alpha_{m+1}\right)\). **Example.** To provide an overview of branch derivation, we translate a Selfies string representing acetic acid: \[\mathcal{Q}=\texttt{[O][C][=Branch1][C][=O][=C]} \tag{18}\] Processing the first two Selfies symbols [O][C] results in the string \(\texttt{OC}\,\mathbf{X}_{3}\), after which the symbol [=Branch1] is dequeued. Since \(\ell=1\), we consume the next symbol [C] in \(\mathcal{Q}\) and identify it with \(N=1\). Hence, we create the Selfies substring [=O] from popping the next symbol in \(\mathcal{Q}\) and, with start symbol \(\mathbf{X}_{2}\), recursively derive it into the Smiles substring \(\rho=\left(\texttt{=O}\right)\). Then, performing the replacement in Eq. 16 gives the string \(\texttt{OC}(\texttt{=O})\,\mathbf{X}_{1}\), and processing the last symbol [=C] in \(\mathcal{Q}\) finally produces a Smiles string \(\texttt{OC}(\texttt{=O})\texttt{C}\) for acetic acid. ### Ring Derivation The final feature that is necessary to capture the diverse variety of molecules is the ability to encode ring closures. In Smiles, this is achieved by paired numeric tags that indicate two separate atoms are joined together; for example, \(\texttt{CC1CCC1}\) (methylcyclobutane). By adding bond characters before the numbers, Smiles can also specify ring closures of higher bond orders, such as C=1CCC=1 (cyclobutene). In Selfies, ring closures are specified by ring symbols, which behave similarly to branch symbols. The derivation process extends that in §III.4. Per Eq. 6, there are two forms of Selfies ring symbols. To simplify the ensuing discussion, however, we will begin by only considering the first form. When a ring symbol \([\beta\,\texttt{Ring}\,\ell\,]\) is popped from the queue of Selfies symbols \(\mathcal{Q}\), a nonterminal symbol \(\mathbf{A}\) in the current derived string is used to specify a production rule. If \(\mathbf{A}=\mathbf{S}\), then we apply the rule \(\mathbf{A}\rightarrow\mathbf{A}\), and the ring symbol is effectively skipped. If \(\mathbf{A}=\mathbf{X}_{i}\), then we replace: \[\mathbf{A}\rightarrow\mathbf{X}_{i-\min(i,d(\beta))} \tag{19}\] In addition, we consume the next \(\ell\) symbols of \(\mathcal{Q}\) (or all symbols in \(\mathcal{Q}\), if fewer exist) to specify a number \(N\in\mathbb{N}\) by Eq. 17. Then, the ring symbol indicates that a ring closure should be formed between the _ring-initiating_ atom and the \(N\)-th atom previously derived from it (or simply, the first atom, if fewer than \(N\) such atoms exist). Here, the derivation order is the order in which atoms are realized through the production rules in Eqs. 10 and 11. By ring-initiating atom, we also mean the atom at which bonds would be made if the ring symbol were instead an atom symbol. 
Often, this coincides with the last-derived atom, as is the case in: \[\texttt{NC}(\texttt{C})\texttt{COC}^{*\dagger}\,\mathbf{X}_{4} \tag{20}\] where the ring-initiating and last-derived atoms are marked with an asterisk and dagger, respectively. However, this is not the case when the last-derived atom lies within a fully-derived branch: \[\texttt{NC}(\texttt{C})\texttt{COC}^{*}(\texttt{C})\,(\texttt{C}^{\dagger}) \,\mathbf{X}_{1} \tag{21}\] For brevity, we will refer to the ring-initiating atom as the _right ring_ atom and its counterpart the _left ring_ atom, as the latter precedes the former in a Smiles string under derivation order. Although a ring symbol specifies a closure between the left and right ring atoms, such a bond cannot be naively added, since it may cause valences to be violated for the left ring atom immediately (e.g., consider the case where this atom has already attained its maximum valence) or in the future. Hence, Selfies postpones the creation of ring closures to a final post-processing step. Instead, the ring closure candidates are pushed to a temporary queue \(\mathcal{R}\), and once all the Selfies symbols have been processed, the items in \(\mathcal{R}\) are revisited in first-to-last order. Based on the state of the ring atoms, a candidate may be rejected (and no ring bond is made) or executed. Specifically, given a potential ring closure indicated by symbol \([\beta\,\texttt{Ring}\,\ell\,]\), let \(m_{1}\) and \(m_{2}\) be the number of additional bonds that the left and right ring atoms can make, respectively. If \(m_{1}=0\) or \(m_{2}=0\), we must reject the candidate since adding the ring closure would exceed one of the valences of the ring atoms. The candidate is also rejected if its left and right ring atoms are not distinct, to avoid unphysical self-loops. Otherwise, the candidate is accepted, and, assuming there is no pre-existing bond between its two ring atoms, we form a new bond of order \(d_{0}=\min(d(\beta),m_{1},m_{2})\) between them. If a prior bond does exist (e.g., if a duplicate ring closure is specified earlier in \(\mathcal{R}\)), then we increment the order of this existing bond as necessary. That is, if the existing bond is of order \(d_{1}\), then we promote it to a bond of potentially-higher order \(\min(3,d_{1}+d_{0})\).

\begin{table} \begin{tabular}{c c|c c} \hline \hline Index & Symbol & Index & Symbol \\ \hline 0 & [C] & 8 & [\#Branch2] \\ 1 & [Ring1] & 9 & [O] \\ 2 & [Ring2] & 10 & [N] \\ 3 & [Branch1] & 11 & [=N] \\ 4 & [=Branch1] & 12 & [=C] \\ 5 & [\#Branch1] & 13 & [\#C] \\ 6 & [Branch2] & 14 & [S] \\ 7 & [=Branch2] & 15 & [P] \\ \hline \hline \end{tabular} All other symbols are assigned index 0. \end{table} Table 3: The symbols succeeding a branch or ring SELFIES symbol are sometimes overloaded with a numeric index, which is determined by the following symbol-to-index mapping.

**Example.** We translate a Selfies string representing methylcyclobutane: \[\mathcal{Q}=\texttt{[C][C][C][C][C][Ring1][Ring2]} \tag{22}\] The first five symbols produce the string \(\texttt{CCCCC}\,\mathbf{X}_{4}\), after which the ring symbol [Ring1] is dequeued. Since \(\ell=1\), the next and final symbol [Ring2] specifies a single ring bond between the final C and its \(N=3\)rd preceding atom. This produces the SMILES string CC1CCC1.
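The index overloading of Table 3 together with Eq. (17) is easy to reproduce; the following minimal sketch (our illustration, not library code) recovers the \(N=3\) of the methylcyclobutane example:

```
# Symbol-to-index mapping of Table 3; unlisted symbols map to 0.
INDEX = {"[C]": 0, "[Ring1]": 1, "[Ring2]": 2, "[Branch1]": 3,
         "[=Branch1]": 4, "[#Branch1]": 5, "[Branch2]": 6,
         "[=Branch2]": 7, "[#Branch2]": 8, "[O]": 9, "[N]": 10,
         "[=N]": 11, "[=C]": 12, "[#C]": 13, "[S]": 14, "[P]": 15}

def overloaded_n(symbols):
    # N = 1 + sum_k 16^(l-k) * c_k, reading the symbols as hex digits.
    l = len(symbols)
    return 1 + sum(16 ** (l - 1 - k) * INDEX.get(s, 0)
                   for k, s in enumerate(symbols))

print(overloaded_n(["[Ring2]"]))          # 3, as in the example above
print(overloaded_n(["[Ring1]", "[C]"]))   # 1 + 16*1 + 0 = 17
```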
The second ring symbol form \([\,\beta_{1}\,\beta_{2}\,\texttt{Ring}\,\ell\,]\) in Eq. 6 behaves nearly identically to \([\beta\,\texttt{Ring}\,\ell\,]\), and is used to support the specification of stereochemistry across single ring bonds. The only difference occurs when a ring closure candidate produced by \([\,\beta_{1}\,\beta_{2}\,\texttt{Ring}\,\ell\,]\) is accepted, and a new ring bond is added between the two ring atoms. In this case, if \(\beta_{1}\in\{/,\backslash\}\), then we add the bond character \(\beta_{1}\) before the numeric ring tag on the left ring atom, and similarly with \(\beta_{2}\) and the right ring atom. For example, if the example Eq. 22 used the symbol [/-Ring1] instead of [Ring1], then the derived SMILES string would be CC/1CCC1. ## IV Library Design The selfies library is designed to be fast, lightweight, and user-friendly. A small but nice feature of selfies is that it requires no extra dependencies. At its core, there are two functions that facilitate the interconversion between Selfies strings and SMILES strings. For more advanced usage, we provide functions to customize the underlying semantic constraints that selfies enforces and operates upon. Finally, we also provide a variety of utility functions for manipulating Selfies strings. The following describes each type of function in more detail and provides potential use case examples. All code snippets are written in Python, with selfies being a Python library. ### Core Functions Selfies strings can conveniently be created from and turned into SMILES strings using the functions encoder() and decoder(), respectively. The latter derives a SMILES string from a Selfies string, using the procedure described in §III. The former performs the reverse translation, such that passing a SMILES string through the composition decoder(encoder()) is always guaranteed to recover a SMILES string that represents the same molecule (but not necessarily the original SMILES string itself). The following excerpt defines a toy function roundtrip() that illustrates this:

```
 1 import selfies as sf
 2
 3 def roundtrip(smiles):
 4     try:
 5         selfies = sf.encoder(smiles)
 6         return sf.decoder(selfies)
 7     except sf.EncoderError:
 8         return None
 9
10 benzene = roundtrip("c1ccccc1")
11 # -> [C][=C][C][=C][C][=C][Ring1][=Branch1]
12 # -> C1=CC=CC=C1
```

Line 5 translates the SMILES string for benzene into the Selfies string in Line 11. Notably, Selfies does not support aromatic atom symbols (e.g., c) in the same way as SMILES, so encoder() performs an internal kekulization if it is passed an aromatic SMILES string. Line 7 guards against errors raised by encoder() when it is passed SMILES strings that are syntactically invalid, semantically invalid (i.e., violate the constraints described in the next subsection), or unsupported. An unsupported SMILES string uses features of SMILES that are not implemented in Selfies, such as the wildcard * and quadruple bond $ symbols; the API reference of selfies further details which SMILES strings are currently supported. Line 10 applies the roundtrip() function to the SMILES string c1ccccc1 for benzene. Indeed, this round-trip translation recovers a SMILES string C1=CC=CC=C1 that is different from the original string, but still specifies the (kekulized) benzene molecule. Since every string of Selfies symbols can be derived into a valid SMILES string, we can generate random but valid SMILES strings by passing random Selfies strings through decoder(). 
To sample these Selfies strings, we use the get_semantic_robust_alphabet() utility function, which returns a subset of semantically constrained Selfies symbols:

```
import random

length = 10
alphabet = sf.get_semantic_robust_alphabet()
alphabet = list(alphabet)

symbols = random.choices(alphabet, k=length)
random_selfies = "".join(symbols)
random_smiles = sf.decoder(random_selfies)
```

Note that by changing the pool of Selfies symbols from which we sample, we can change the distribution of produced molecules. ### Explaining Translation To explain translations between Selfies and SMILES, both encoder() and decoder() support an attribute flag that enables attributions of the output string symbol(s) to symbol(s) in the input string:

```
cyclobutane = "[C][C][C][C][Ring1][Ring2]"
smiles, attributions = sf.decoder(
    cyclobutane,
    attribute=True,
)

# smiles = C1CCC1
# attributions is a length-4 list with:
attributions[0] = AttributionMap(
    index=0,
    token="C",
    attribution=[
        Attribution(index=0, token="[C]")
    ],
)
attributions[1] = AttributionMap(
    index=2,
    token="C",
    attribution=[
        Attribution(index=1, token="[C]")
    ],
)
# attributions[2] = AttributionMap(...)
# attributions[3] = AttributionMap(...)
```

The attributions are a list of AttributionMap objects, one for each output symbol. Each AttributionMap contains the output symbol, its index, and a list of Attribution objects, each of which holds an input symbol (and its index) that is responsible for the output symbol. Note that a single output symbol may be attributed to multiple input symbols, because it may be determined by both atom symbols and branch or ring symbols. Tracing the relationship between symbols can enable alignment between SMILES and Selfies, so that per-atom properties can be connected on both sides of the translation. ### Customization Functions selfies dynamically constructs its derivation rules from a set of prespecified constraints, which dictate the maximum number of bonds that each atom type in a molecule may form. The derivation rules then ensure that each Selfies string corresponds to a molecular graph satisfying the set constraints. By choosing a set of constraints in accordance with chemical valences, 100% robustness can be achieved. Specifically, selfies uses the constraints in Table 4 by default. However, a limitation of the default constraints is that Selfies cannot represent existing molecules that violate them, such as perchloric acid (which features a hypervalent Cl making 7 bonds). Moreover, the catch-all constraint may be too relaxed to ensure the validity of Selfies strings containing atom types outside those in Table 4 (e.g., Si, Se). Hence, users may wish to instead use custom constraints that are tailored to the Selfies strings being worked with.

\begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Maximum Bonds} \\ \cline{2-4} Element & Charge 0 & Charge +1 & Charge \(-1\) \\ \hline H, F, Cl, Br, I & 1 & / & / \\ B & 3 & 2 & 4 \\ C & 4 & 5 & 3 \\ N & 3 & 4 & 2 \\ O & 2 & 3 & 1 \\ P & 5 & 6 & 4 \\ S & 6 & 7 & 5 \\ \hline \hline \end{tabular} \end{table} Table 4: The default constraints used by selfies. All atom types other than those explicitly listed are constrained to 8 maximum bonds, which acts as a _catch-all_ constraint.

To this end, selfies provides the key function set_semantic_constraints(). The following provides a minimal example:

```
 1 import selfies as sf
 2
 3 constraints = {
 4     "C": 4, "C+1": 5, "C-1": 3,
 5     "?": 4,  # catch-all
 6 }
 7
 8 sf.set_semantic_constraints(constraints)
```

Here, the constraints dictionary encodes a set of custom constraints; specifically, explicit constraints on the neutral and \(\pm 1\) charged variants of C (as in Table 4) and a catch-all constraint (of 4 maximum bonds). Line 8 then sets constraints as the underlying semantic constraints that selfies will operate under, which changes the subsequent behaviour of encoder() and decoder() appropriately. Note that the pre-existing constraints are fully replaced in Line 8; any constraint that is not explicitly specified in constraints would thus be removed. For convenience, selfies provides a couple of preset constraints to serve as templates that can be easily modified. These can be obtained as follows:

```
c1 = sf.get_preset_constraints("default")
c2 = sf.get_preset_constraints("octet_rule")
c3 = sf.get_preset_constraints("hypervalent")
```

The currently-set constraints can also be viewed by:

```
curr_constraints = sf.get_semantic_constraints()
```

### Utility Functions selfies provides a number of utility and convenience functions. 
Note that the pre-existing constraints are fully replaced in Line 8; any constraint that is not explicitly specified in constraints would be thus removed. For convenience, selfies provides a couple of preset constraints to serve as templates that can be easily modified. These can be obtained as follows: ``` 1c1=sf.get_preset_constraints("default") 2c2=sf.get_preset_constraints("octet_rule") 3c3=sf.get_preset_constraints("hypervalent") ``` The currently-set constraints can also be viewed by: ``` 1curr_constraints=sf.get_semantic_constraints() ``` ### Utility Functions selfies provides a number of utility and convenience functions. Two basic utility functions are len_selfies(), \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Maximum Bonds} \\ \cline{2-4} Element & Charge 0 & Charge +1 & Charge \(-1\) \\ \hline H, F, Cl, Br, I & 1 & / & / \\ B & 3 & 2 & 4 \\ C & 4 & 5 & 3 \\ N & 3 & 4 & 2 \\ O & 2 & 3 & 1 \\ P & 5 & 6 & 4 \\ S & 6 & 7 & 5 \\ \hline \hline \end{tabular} \end{table} Table 4: The default constraints used by selfies. All atom types other than those explicitly listed below are constrained to 8 maximum bonds, which acts as a _catch-all_ constraint. which computes the number of symbols in a SELFIES string, and split_selfies(), which tokenizes a SELFIES string into an iterable of its constituent symbols: ``` 1importselfiesassf 2 3selfies="[F][-C][-C][#N]" 4length=sf.len_selfies(selfies)##4 5symbols=list(sf.split_selfies(selfies)) 6#["[F]","[=C]","[=C]","[#N]"] ``` Furthermore, selfies includes functions to extract a vocabulary of symbols from a dataset of SELFIES strings, and to convert SELFIES strings into label or one-hot encodings. Consider the following example: ``` 1dataset=[ 2"[C][0][C]", 3"[F][C]", 4"[C][C][0][C]", 5] 6 7alphabet=sf.get_alphabet_from_selfies(dataset) 8alphabet.add("[nop]") 9alphabet=list(sorted(alphabet)) 10#["[C]","[F]","[0]","[nop]"] 11 12pad_to=max(sf.len_selfies(s)forsimdataset) 13stoi={s:ifor,simenumerate(alphabet)} 14 15dimethyl_ether=dataset[0]#[C][0][C] 16 17label,one_hot=sf.selfies_to_encoding( 18selfies=dimethyl_ether, 19vocab_stoi=stoi, 20pad_to_len=pad_to,#4 21enc_type="both", 22) 23 24#label=[0,2,0,3] 25#one_hot=[[1,0,0,0],[0,0,1,0], 26#[1,0,0,0],[0,0,0,11]] ``` Here, we are given a list dataset of SELFIES strings. Line 7 uses a utility function of selfies to extract the set alphabet of SELFIES symbols that appear in the dataset, which is used in Line 13 to create a symbol to index mapping set. Next, lines 17-22 use another utility function selfies_to_encoding() to create a label and one-hot encoding of the first SELFIES string in dataset. Under the hood, this function first pads the input string to length pad_to_len by appending to it sufficiently many copies of the symbol [nop] (for "no-operation"), which is a special padding symbol in selfies that is automatically ignored by decoder(). Then, the padded SELFIES string is tokenized, and stoi is used to convert each of its symbols into integer labels and one-hot vectors. Since the padded SELFIES string may now contain [nop], this symbol must be added to stoi, which is done through Line 8. Lastly, the reverse encoding can be performed using the encoding_to_selfies() utility: ``` 1itos={i:sfors,iinstoi.items()} 2 3#recover[C][0][C][nop]fromlabelencoding 4recovered=sf.encoding_to_selfies( 5encoding=label, 6vocab_to_stios, 7enc_type="label", 8} 9 10sf.decoder(recovered)#COC ``` Table 5 summarizes the various utility functions introduced within this section. 
## V Results and Discussion selfies is quick and efficient in its translation, despite being implemented in pure Python. To demonstrate this, we provide some simple benchmarks of its core functions encoder() and decoder(). The following experiments were run on Google Colaboratory, which uses two 2.20 GHz Intel(R) Xeon(R) CPUs. **Roundtrip Translation.** Here, we consider the roundtrip translation task, where a SMILES string is translated to SELFIES and then back to SMILES (see §IV.1). Specifically, we translate the Developmental Therapeutics Program (DTP) open compound collection [22, 23], which contains a little over 300k SMILES strings and is a set of molecules that have been tested experimentally for potential treatment against cancer and the acquired immunodeficiency syndrome (AIDS) [24]. Translating the full dataset into SELFIES strings with encoder() takes 136 s, and recovering the SMILES dataset using decoder() takes 116 s, for a total roundtrip translation time of 252 s. Figure 2 plots how this roundtrip time scales with molecular size. Notably, we obtain all of these times by averaging over 3 replicate trials.

Figure 2: The roundtrip translation time of 1000 randomly-sampled SMILES strings from the DTP open compound collection as a function of size, measured in number of atoms.

**Random Selfies.** First, we sample 1000 fixed-length Selfies strings and translate them to SMILES, per §IV.1. We try this experiment with different symbol lengths and alphabets from which the Selfies strings are built. Figure 1 shows the resulting distribution of SMILES strings and the time it takes to decode each full batch of random Selfies strings. Performing this experiment reaffirms the robustness of Selfies and demonstrates the ease with which we can create random valid molecules without applying any filters, pre- or post-selection. In Figure 1(a), we show how Selfies strings sampled from a basic alphabet translate to random molecules; an important observation is that the generated molecules are rather small, independent of the Selfies length chosen. That is mainly caused by the inclusion of multi-bonds and low-valence atoms in the considered alphabet, which exhaust the available valences of the constituent atoms and then lead to an earlier termination of the derivation. A simple workaround is to instead use an alphabet without multi-bonds and low-valence atom types, as illustrated in Figure 1(b). Here, the molecular size distribution is shifted significantly towards larger molecules, especially when longer Selfies strings are sampled. Hence, this showcases how to create very large, valid random molecules.

Figure 1: For a fixed alphabet \(\mathcal{A}\), 1000 SELFIES strings were generated by uniformly sampling \(L\) symbols from the alphabet. Then, we plot the size distribution of the resulting molecules for varying symbol lengths \(L\). **(a)** We take \(\mathcal{A}\) to be the 69 symbols returned by get_semantic_robust_alphabet() under the default semantic constraints. **(b)** We filter the alphabet in (a) to 19 symbols by removing all atom symbols \([\beta\,\alpha\,]\) where \(\beta\in\{=,\#\}\) or \(\nu(\mathsf{type}(\alpha))=1\), and removing all branch and ring symbols except for [Branch1] and [Ring1]. This decreases the chance that the SELFIES derivation process is terminated early, causing the derived molecules to be larger. **(c)** The time taken to translate each batch of random SELFIES strings to SMILES using decoder(), measured by averaging over 20 replicate trials.
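Readers who wish to reproduce timings of this kind on their own hardware can start from the following minimal sketch (ours, not the benchmark harness used above; the dataset, sizes, and trial counts differ):

```
import time
import selfies as sf

# A small stand-in corpus; substitute your own list of SMILES strings.
smiles_list = ["c1ccccc1", "CC(=O)O", "CC1CCC1"] * 1000

t0 = time.perf_counter()
encoded = [sf.encoder(s) for s in smiles_list]
t1 = time.perf_counter()
decoded = [sf.decoder(s) for s in encoded]
t2 = time.perf_counter()
print(f"encode: {t1 - t0:.2f} s, decode: {t2 - t1:.2f} s")
```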
## VI Conclusions and Outlook Since its first release in 2019, the selfies library has undergone significant changes and experienced a drastic transformation in terms of both capabilities and code design. All of these modifications were executed with two major premises, namely, (1) extending the functionality and capability to support all features of the Smiles representation and (2) retaining or even improving upon its simplicity and user-friendliness. To achieve that, we implemented all necessary functionality in the library itself, so that it does not require any other packages. Additionally, we added several utility functions to the library to support common use cases. Apart from these two prime goals, we also made significant efforts to make the implementation faster, as Selfies has been employed in many performance-critical applications and workflows. Overall, the Selfies community has grown rapidly and we are actively engaging in constructive discussions about the current implementation and future improvements. While selfies 2.1.1 supports almost all important features of Smiles, there are still many new features on our agenda. We outlined many of them in [20], for example, extensions to polymers, crystals, molecules with non-covalent bonds, or reactions. Our vision is that Selfies will become a standard computer representation for molecular matter. We encourage the community to implement it into their workflows, report errors in the current implementation, and propose changes and new features that will help them to succeed in their goals.
2307.06186
One-dimensionally confined ammonia molecules: A theoretical study
We examine a single-file chain of ammonia molecules in a carbon nanotube. To this end, we use i) molecular dynamics simulations (combined with the charges for ammonia nitrogen and hydrogen obtained from quantum chemistry) and ii) lattice-model calculations [M. Druchok et al., J. Chem. Phys. 158, 104304 (2023)]. Our findings demonstrate the occurrence of the orientational quasiorder of the ammonia dipoles, which become parallel to the tube axis, at intermediate temperatures below 100 K.
Maksym Druchok, Volodymyr Krasnov, Taras Krokhmalskii, Oleg Derzhko
2023-07-12T14:25:51Z
http://arxiv.org/abs/2307.06186v1
# One-dimensionally confined ammonia molecules: A theoretical study ###### Abstract We examine a single-file chain of ammonia molecules in a carbon nanotube. To this end, we use i) molecular dynamics simulations (combined with the charges for ammonia nitrogen and hydrogen obtained from quantum chemistry) and ii) lattice-model calculations [M. Druchok _et al._, J. Chem. Phys. **158**, 104304 (2023)]. Our findings demonstrate the occurrence of the orientational quasiorder of the ammonia dipoles, which become parallel to the tube axis, at intermediate temperatures below 100 K. single-walled carbon nanotubes, single-file ammonia molecules, orientational quasiorder, quasiphase transition ## I Introductory remarks Confinement of molecules to pores of nanometer diameters, when they form a single-file chain, results in essentially new properties of such a substance. Single-walled carbon nanotubes (CNTs) of a corresponding diameter provide an excellent experimental setup to study the one-dimensionally confined substance [1]. Recently, X. Ma _et al._ [2] have reported temperature-dependent photoluminescence spectroscopy data for single-walled (6,5) CNTs. While empty CNTs exhibit a linear temperature-dependent photoluminescence spectral shift as expected, the water-filled CNTs show a stepwise photoluminescence spectral shift centered at about 150 K, which is superimposed on the anticipated linear temperature-dependent one. X. Ma _et al._ [2] assumed that the origin of the observed additional spectral shift is related to a significant change in the orientation of the water dipoles. They performed molecular dynamics (MD) simulations and indicated three different regimes: 1) traditional hydrogen-bonded chains (below \(\sim\) 40 K), 2) predominantly bifurcated hydrogen bonds, where the hydrogen bond from a single oxygen atom is distributed over both hydrogen atoms of a single neighboring water molecule (around \(\sim\) 70 K), and 3) disordered chains (for \(T>200\) K). The effective total dipole moment of the structures that dominate as temperature grows agrees with the direction of the measured photoluminescence spectral shift. Several theoretical studies have been inspired by the experiments reported in Ref. [2]. Thus, the ground-state and finite-temperature properties of one-dimensionally confined water molecules were discussed in Refs. [3; 4; 5; 6; 7]. In the present study, we address the question of whether quasiphase transitions, observed for the water molecules H\({}_{2}\)O encapsulated in single-walled (6,5) CNT, can be expected for other molecules. To this end, we consider the ammonia molecules NH\({}_{3}\) and the same, yet fillable, CNT. The ammonia molecule has a dipole moment that makes it polar, and it has the ability to form hydrogen bonds [8]. Similarly to Refs. [2; 7], we carry out i) quantum chemistry calculations to obtain charges of H and N inside the (6,5) CNT, ii) MD simulations to obtain the dipole moment dependence on temperature for illustration of quasiphases as well as to obtain reference values for a lattice model (dipole moment and intermolecular distance), and iii) lattice-model calculations. Our theoretical analysis gives evidence that the ammonia molecules encapsulated in the (6,5) CNT may show a temperature-driven orientational quasiordering at intermediate temperatures below 100 K. The rest of the paper is organized as follows. In Section II, quantum-chemical computations and MD simulations are described. 
In Section III, we report the lattice-model calculations that further illustrate the temperature-driven dipole quasiordering. Then, we conclude with a brief discussion and summary in Section IV.

## II Molecular simulations

### Quantum chemistry and ammonia molecule model

To examine the ammonia molecules encapsulated in the (6,5) CNT by MD simulations, realistic charges for the ammonia nitrogen and hydrogen atoms are required first. According to Ref. [9], for the bulk ammonia case \(q_{\rm N}=-0.9993e\) and \(q_{\rm H}=-q_{\rm N}/3\), where \(e\) is the elementary electric charge. The OPLS (Optimized Potentials for Liquid Simulations) force field [10] gives \(q_{\rm N}=-1.026e\) and \(q_{\rm H}=-q_{\rm N}/3\). However, these charge values do not account for the presence of the CNT and are therefore of limited applicability for MD simulations of ammonia molecules inside a CNT. To obtain the charges for the nitrogen and hydrogen atoms of ammonia in the (6,5) CNT, we used the GAMESS package [11] and performed semi-empirical (AM1, PM6), Hartree-Fock (STO-2G, MINI basis sets), and density-functional-theory (B3LYP) calculations. Within these calculations, we examined ammonia molecules inside a (6,5) CNT (AM1, PM6 - see the top panel of Fig. 1) and ammonia molecules restricted to one dimension but without the CNT (AM1, PM6, STO-2G, MINI, B3LYP - see the bottom panel of Fig. 1). To model the (6,5) CNT, we used a structure with 272 carbon atoms and 22 hydrogen atoms added to saturate the free carbon bonds at the edges of the CNT (see the top panel of Fig. 1). The coordinates of the carbon atoms were frozen, while optimal positions were found for the hydrogen atoms. In our study, we consider 10 ammonia molecules and search for the optimal positions of the nitrogen and hydrogen atoms. To avoid surface effects, i.e., to minimize the influence of the molecules at terminal positions, only the charges of the four inner ammonia molecules were averaged for the subsequent molecular dynamics simulations. The mean charges obtained at the AM1/PM6 level without the CNT are \(q_{\rm N}=-0.3965/-0.6127\) (in units of \(e\)) for the nitrogen atoms and \(q_{\rm H}=0.1322/0.2042\) for the hydrogen atoms. In the presence of the CNT we obtained \(q_{\rm N}=-0.3913/-0.6281\) and \(q_{\rm H}=0.1289/0.2100\) at the AM1/PM6 level, respectively. Since the atomic charges change only slightly, we examined the CNT-free case alone when performing the more demanding density-functional-theory calculations. The B3LYP method yields \(q_{\rm N}=-0.7814\) (Löwdin population analysis) or \(q_{\rm N}=-0.9582\) (Mulliken population analysis) for the nitrogen atom and \(q_{\rm H}=0.2704/0.2540\) (Löwdin) or \(q_{\rm H}=0.3761/0.2918\) (Mulliken) for the hydrogen atoms. Here, the doubled values of \(q_{\rm H}\) correspond to the charge of the hydrogen forming a hydrogen bond (higher value, before the slash) and the charge of the two dangling hydrogens (lower value, after the slash), see the bottom panel in Fig. 1. Semi-empirical AM1/PM6 calculations also provide some hints of hydrogen-bond formation; however, the difference between the two values of \(q_{\rm H}\) is smaller. It should be stressed that the values of \(q_{\rm N}\) (and \(q_{\rm H}\)) are lower than the bulk ones, found in Ref. [9] by fitting to experimental vapor-liquid equilibrium data (\(q_{\rm N}=-0.9993\)) or from OPLS (\(q_{\rm N}=-1.026\)). Lower charges result in a reduced dipole moment of the ammonia molecule in the CNT.
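For readers who want to reproduce this kind of population analysis, the short sketch below illustrates one possible workflow with the open-source PySCF package rather than the GAMESS package used above; the idealized NH\({}_{3}\) geometry, the basis set, and all numerical settings here are illustrative assumptions, not the settings of this study.

```python
# Illustrative sketch (not this paper's GAMESS workflow): Mulliken charges for
# an isolated, idealized NH3 molecule with PySCF at the B3LYP level.
# Geometry (r_NH ~ 1.01 A, H-N-H angle ~ 106.4 deg) is an assumed demo input.
from pyscf import gto, dft

mol = gto.M(
    atom="""N  0.0000  0.0000  0.0000
            H  0.9340  0.0000 -0.3850
            H -0.4670  0.8088 -0.3850
            H -0.4670 -0.8088 -0.3850""",
    basis="6-31g*",
    unit="Angstrom",
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

# Mulliken population analysis; returns (populations, atomic charges)
pops, charges = mf.mulliken_pop()
print("Mulliken charges (e):", charges)  # charges[0] -> N, charges[1:] -> H
```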
Since MD simulations require, besides \(q_{\rm N}\) and \(q_{\rm H}\), other parameters of the atoms constituting the ammonia molecules, we combine the obtained charges with the OPLS force-field geometry. In particular, the valence angles and intramolecular distances were controlled with the harmonic terms \(k_{\alpha}\cdot(\alpha_{\rm H-N-H}-\alpha_{0})^{2}\) and \(k_{r}\cdot(r_{\rm N-H}-r_{0})^{2}\), where \(\alpha_{0}=106.4^{\circ}\) and \(r_{0}=1.01\) Å [12]. Because of this flexibility, the dipole moment of the ammonia molecules was found to vary within the range of \(1.26\ldots 1.36\) D (Löwdin) or \(1.32\ldots 1.43\) D (Mulliken) over the span of simulated temperatures, see the upper panel of Fig. 3 and Fig. 8 below. For the sake of comparison, a higher dipole moment of \(\mu=1.94\) D for bulk ammonia is reported in Ref. [9]. A significantly lower dipole moment of water molecules confined in nanotubes (\(\mu=1.105\) D) in comparison to the bulk value (\(\mu=2.351\) D) is also known in the literature, see, e.g., Refs. [2; 13]. More details about the quantum chemistry calculations are given in Appendix A. As in the case of confined water in the (6,5) CNT [2; 7], the quantum chemistry predictions are rather diverse and depend strongly on the calculation scheme (the choice of the basis set, the quantum-mechanical method, the population analysis method, and the geometry of the ammonia molecule), see, e.g., Ref. [14]. The ambiguity related to the ammonia molecule model, which is required for the further MD studies, could be avoided by performing _ab initio_ MD simulations [15]; however, such calculations are far beyond the focus of the present paper. In our MD simulations we use the B3LYP data, i.e., two sets: \(q_{\rm N}=-0.7814\), \(q_{\rm H}=-q_{\rm N}/3\) (Löwdin charges, see Sec. II.2) and \(q_{\rm N}=-0.9582\), \(q_{\rm H}=-q_{\rm N}/3\) (Mulliken charges, see Appendix B). Both sets yield qualitatively similar results, although the larger Mulliken charge values result in a larger dipole-dipole interaction strength, pushing the temperature-driven orientational quasiordering to higher temperatures.

Figure 1: Quantum chemistry predictions for one-dimensionally confined \(N=10\) ammonia molecules. Top panel: Visualization of AM1 data. Bottom panel: Visualization of B3LYP data (only the 4th, 5th, 6th, and 7th ammonia molecules are shown).

### Molecular dynamics simulations and results

We use the DL_POLY molecular simulation package [16] to perform a series of MD simulations of ammonia molecules encapsulated inside the CNT with chirality (6,5) at different temperatures. The (6,5) CNTs have a diameter of \(\approx 7.4\) Å. This value denotes the diameter of the circle through the centers of the carbon atoms constituting the CNT openings. The actual interior available to the ammonia molecules is smaller due to the van der Waals sizes of the carbons. Such a small-sized nanopore allows only a single-file arrangement of molecules. Furthermore, we consider \(N=35\) ammonia molecules inside a CNT of length \(\approx 170\) Å. We ran a set of simulations over the temperature range of \(10\ldots 240\) K. The results near the lower temperature boundary should be taken with caution because of quantum effects, which are not accounted for in our MD simulations. We generated starting configurations consisting of CNTs and ammonia molecules.
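As a quick consistency check on the numbers quoted above, the following minimal sketch (our own illustration, not part of the original workflow) evaluates the point-charge dipole of a rigid ammonia molecule with the OPLS equilibrium geometry and the Löwdin B3LYP charge set. The rigid-geometry estimate (\(\approx 1.44\) D) lands slightly above the \(1.26\ldots 1.36\) D range found for the flexible molecules in the MD runs, as expected for a molecule whose bonds and angles fluctuate.

```python
# Minimal sketch: point-charge dipole of a rigid NH3 with the OPLS geometry
# (r0 = 1.01 A, H-N-H angle 106.4 deg) and the Lowdin B3LYP charges quoted above.
import numpy as np

q_N = -0.7814            # e (Lowdin); use -0.9582 for the Mulliken set
q_H = -q_N / 3.0
r0, alpha = 1.01, np.deg2rad(106.4)

# Angle between each N-H bond and the C3 axis: cos(alpha) = (3 cos^2 b - 1) / 2
cos_b = np.sqrt((2.0 * np.cos(alpha) + 1.0) / 3.0)
sin_b = np.sqrt(1.0 - cos_b**2)

# N at the origin; three H atoms below it, 120 deg apart around the C3 axis
h_sites = [r0 * np.array([sin_b * np.cos(p), sin_b * np.sin(p), -cos_b])
           for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
mu = q_H * sum(h_sites)               # e*Angstrom; N contributes 0 (at origin)
print(np.linalg.norm(mu) * 4.8032)    # 1 e*Angstrom = 4.8032 D  ->  ~1.44 D
```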
The intramolecular geometry and short-range interactions for ammonia were taken from the OPLS force field, while the charges for the nitrogens and hydrogens were optimized at the B3LYP level of approximation (see Sec. II.1 and Appendix A). The CNT model was taken from Ref. [17], namely, the Lennard-Jones parameters for the carbons of the nanotube sidewalls. CNT simulations usually also employ a set of harmonic bonds and angles maintaining the CNT geometry; however, since the ammonia molecules are the focus of interest, we froze the ideal CNT in vacuum to cut the computational costs. We mention that the CNT sidewall carbons are neutral. The Lennard-Jones parameters for unlike sites were calculated using the Lorentz-Berthelot mixing rules. This combination of interaction parameters was successfully utilized in our recent studies [7; 18; 19; 20] of nanotubes interacting with SPC/E water. Finally, two additional carbon atoms were placed at the centers of the CNT openings to play the role of obstacles, ensuring that the ammonia molecules stay inside the nanotube interior during the whole set of simulations. The simulation conditions were kept the same for all runs except for the temperature variation. The temperature was controlled by means of the NVT Nose-Hoover thermostat. Each simulation utilized the leapfrog integration algorithm with a time step of 0.001 ps, covering 200 ps of equilibration and then 600 ps of production runs. The smooth particle mesh Ewald technique was used to calculate the electrostatic terms, while the short-range interactions were calculated with a cut-off distance of 15 Å. We also performed MD simulations for \(N=35\) water molecules in a (6,5) CNT of length \(\approx 170\) Å (the results are collected in Appendix C) for comparison with those for the ammonia case. Our MD simulation results for the ammonia molecules in the (6,5) CNT are reported in Figs. 2, 3, and 4. These results i) illustrate the emergence of the intermediate ordered quasiphase and ii) provide an input for the lattice model to be considered in Sec. III. Figure 2 illustrates the temperature dependence of the orientations of the confined ammonia molecules. Because the choice of the CNT axis direction is arbitrary, the orientational order within the ammonia chain can be characterized in two ways, an "individual" and a "group" one. In general, the angle \(\theta\) between the dipole moment of an ammonia molecule and the CNT axis can take values in the range \(0\ldots 180^{\circ}\). By adopting the same axis direction for all dipoles, we refer to this as the group orientation. The definition of \(\tilde{\theta}\) as \(\min(\theta,180^{\circ}-\theta)\) has the meaning of an individual angle. Both definitions make sense: \(\tilde{\theta}\) focuses on the independent orientation of a dipole and is useful for our lattice model, while the \(\theta\) distribution carries information about the mutual orientation of the dipoles along the chain. Both definitions, each in its own way, can serve as indicators of the temperature-induced rearrangement of the dipoles. In particular, the top panel of Fig. 2 demonstrates the evolution of the "individual" angles \(\tilde{\theta}\).

Figure 2: Angles between the CNT axis and the dipole moments of the ammonia molecules. Top panel: Temperature dependence of the mean angles \(\tilde{\theta}\) averaged over time and ensemble. Bottom panel: \(p(\theta)/\sin\theta\) for multiple temperatures; \(p(\theta)/\sin\theta\) has a maximum at \(\theta\approx 57^{\circ}\) for \(T=10\) K.
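The two angle conventions just introduced can be computed in a few lines. The sketch below (with synthetic stand-in data in place of an actual trajectory; the array shapes are assumptions) shows how \(\theta\), \(\tilde{\theta}\), and the \(\sin\theta\)-normalized histogram discussed next would be extracted from an array of dipole vectors.

```python
# Sketch of the two angle conventions for an (n_frames, N, 3) array of dipole
# vectors from an MD trajectory (synthetic random data stands in here).
import numpy as np

rng = np.random.default_rng(0)
dipoles = rng.normal(size=(600, 35, 3))
dipoles /= np.linalg.norm(dipoles, axis=-1, keepdims=True)

axis = np.array([0.0, 0.0, 1.0])                 # CNT axis (sign is arbitrary)
cos_t = np.clip(dipoles @ axis, -1.0, 1.0)
theta = np.degrees(np.arccos(cos_t))             # "group" angle, 0..180 deg
theta_ind = np.minimum(theta, 180.0 - theta)     # "individual" angle

print("mean individual angle:", theta_ind.mean())

# Histogram of p(theta)/sin(theta): divide out the solid-angle weight
hist, edges = np.histogram(theta.ravel(), bins=36, range=(0, 180), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p_over_sin = hist / np.sin(np.radians(centers))
```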
Here and below, in the plots with MD results, besides the mean values we also show error bars indicating the 25th and 75th percentiles of the values collected during the production runs of the simulations. The mean value of \(\tilde{\theta}\) at \(T=10\) K is about \(57^{\circ}\); it decreases to \(47^{\circ}\) at \(T=50\) K and then increases as the temperature rises. One can also see that the error bars span larger intervals as the temperature grows. The bottom panel of Fig. 2 shows the probability distribution density of the "group" angle \(\theta\) normalized by \(\sin\theta\). Such a denominator is needed to account for the varying number of orientations corresponding to different \(\theta\). One can see that at low temperature (\(T=10\) K) the distribution \(p(\theta)/\sin\theta\) is concentrated in the vicinity of \(\theta\approx 57^{\circ}\), which corresponds to the value \(\tilde{\theta}\approx 57^{\circ}\) noted above. The "individual" and "group" angles coincide; thus, the chain demonstrates a uniform orientation of the dipoles. With rising temperature, the distributions \(p(\theta)/\sin\theta\) at first spread to smaller angles, indicating the intermediate quasiorder (\(T=30\) and \(60\) K), and then become broader with the appearance of angles around \(90^{\circ}\), indicating that molecules can flip their direction (\(T=120\) and \(180\) K). Such behavior of \(p(\theta)/\sin\theta\) may also yield a rough estimate of the temperature interval \(T_{1}\ldots T_{2}\) in which the intermediate quasiphase with dipole moments parallel to the CNT axis exists. The top panel of Fig. 3 shows the temperature dependence of the ammonia molecule dipole moment. As can be seen there, the higher the temperature, the larger the variation of the dipole values around their means, indicating increasing fluctuations in the system. In the bottom panel of Fig. 3 we show the normal component of the dipole moment of the individual ammonia molecules, \(\mu_{\rm norm}=\overline{|\mu_{\perp}|}/\mu\), and the total dipole moment of the ammonia chain tangential to the CNT axis, \(\mu_{\rm tang}=\overline{\mu_{\parallel}^{\rm tot}}/(N\mu)\); here \(\overline{(\ldots)}\) denotes the mean value of \((\ldots)\). Clearly, \(\mu_{\rm tang}\) and \(\mu_{\rm norm}\) reach their maximum and minimum, respectively, in the vicinity of \(T=40\ldots 50\) K, as expected for the intermediate quasiphase with dipole moments aligned along the CNT axis. X. Ma _et al._ [2] observed similar temperature profiles for confined water and pointed out three types of water arrangement: 1) hydrogen bonding over the whole chain, with the dipole moments of the water molecules tilted by \(31^{\circ}\) with respect to the CNT axis (quasiphase 1), 2) dipole moments tending to align along the CNT axis in one direction (quasiphase 2), and 3) collective arrangement completely destroyed (quasiphase 3). On the basis of the similar results reported in Fig. 3, we may assume that quasiphase 2 is achieved for ammonia in the vicinity of \(T=40\ldots 50\) K, while quasiphases 1 and 3 are located at lower and higher temperatures, respectively. In addition to the MD analysis of the dipole-moment magnitudes and orientations, the mean distances between nearest ammonia molecules, as they follow from the MD study, are also an important reference. We monitored the nitrogen-nitrogen distances during the simulations and then averaged them; this average is reported in Fig. 4.

Figure 4: Temperature dependence of the mean distances between nearest nitrogen atoms within the ammonia chain.
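For completeness, a minimal sketch of how the two order parameters of the bottom panel of Fig. 3 would be extracted from a trajectory (again with synthetic stand-in data; the array shapes and the choice of the \(z\) axis as the tube axis are assumptions):

```python
# Sketch of the order parameters mu_norm and mu_tang from an (n_frames, N, 3)
# array of dipole vectors; synthetic random data stands in for MD output.
import numpy as np

rng = np.random.default_rng(1)
dip = rng.normal(size=(600, 35, 3))
mu = np.linalg.norm(dip, axis=-1).mean()           # mean dipole magnitude

mu_par = dip[..., 2]                               # component along the CNT axis (z)
mu_perp = np.linalg.norm(dip[..., :2], axis=-1)    # component normal to the axis

mu_norm = mu_perp.mean() / mu                      # mean |mu_perp| per molecule
mu_tang = mu_par.sum(axis=1).mean() / (dip.shape[1] * mu)  # total chain, tangential
print(mu_norm, mu_tang)
```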
Figure 3: Top panel: Temperature dependence of the ammonia dipole moment \(\mu\). Bottom panel: Mean magnitudes of the perpendicular component of the dipole moment of an ammonia molecule (red) and of the tangential component of the total dipole moment of the ammonia chain per ammonia molecule (green); both components are additionally normalized by \(\mu\).

It is interesting to note that both the mean N-N distance and its variation increase once the system passes the temperature range of quasiphase 2. We end this section with a remark about the ammonia molecule model based on the Mulliken charges. The Mulliken case demonstrates behavior similar to that in Figs. 2, 3, and 4 (cf. Figs. 7, 8, and 9 in Appendix B), i.e., it predicts three quasiphases and two quasiphase transitions between them; however, the temperature range of the intermediate quasiphase 2 is now \(T=90\ldots 100\) K.

## III Lattice model

### Lattice model formulation

We now turn to statistical-mechanics arguments to explain the behavior of the ammonia molecules forming a single-file chain in the CNT and the emergence of the intermediate quasiphase [7]. We bear in mind that the system at hand is one-dimensional and consists of a finite (not very large) number of molecules \(N\). We have to account for short-range nearest-neighbor interactions leading to hydrogen-bonded chains and long-range dipole-dipole interactions, as well as for rotations with the limitations imposed by the geometry of the CNT, whose very small diameter can only be filled with one ammonia molecule after the other [21]. An interplay of these ingredients then produces a finite range of temperatures in which the states with ammonia dipole moments directed along the CNT axis dominate the state space [7]. More specifically, we consider a lattice model with 3 states at each lattice site, subject to certain restrictions (a hydrogen-bonded chain consists of at least two molecules) which effectively decrease the number of states per site. Thus, we deal with \(N\) rigid bodies, each with moment of inertia \(I\), which carry coplanar dipoles \(\vec{\mu}_{j}\), \(j=1,\ldots,N\); they are arranged on a single straight line so that the distance between the neighboring sites \(j\) and \(j+1\) is \(a_{j,j+1}\). Each site \(j\) may be in one of the following 3 states \(\xi_{j}\):

* The state \(\xi_{j}=1\), when \(\mu_{\parallel,j}=\mu\cos\alpha_{1}\), \(|\mu_{\perp,j}|=\mu\sin\alpha_{1}\), and the extension of the occupied site is \(a_{1}\). A site in this state belongs to a hydrogen-bonded chain and must have at least one neighboring site in the same state \(\xi=1\). Furthermore, the set of \(\mu_{\perp,j}\) for the hydrogen-bonded chain forms a staggered pattern.
* The state \(\xi_{j}=2\), when \(\mu_{\parallel,j}=\mu\cos\alpha_{2}\), \(|\mu_{\perp,j}|=\mu\sin\alpha_{2}\), \(0^{\circ}\leq\alpha_{2}<\alpha_{1}\), and the extension of the occupied site is \(a_{2}=a_{1}(1+\varepsilon_{2})>a_{1}\).
* The state \(\xi_{j}=3\), when \(\mu_{\parallel,j}=\mu\cos\alpha_{3}\), \(|\mu_{\perp,j}|=\mu\sin\alpha_{3}\), \(\alpha_{1}<\alpha_{3}\leq 90^{\circ}\), and the extension of the occupied site is \(a_{3}=a_{1}(1+\varepsilon_{3})>a_{2}\). This state represents a completely independent ammonia molecule with a random orientation of \(\vec{\mu}_{j}\).

The introduced rules reduce the number of states \(W_{N}\) for the lattice of \(N\) sites, which is now \(W_{N}\approx 2.52^{N}<3^{N}\) [7] (i.e., the lattice model has \(\approx 2.52<3\) states per site).
From the mathematical point of view, \(W_{N}\) is equal to the number of words of length \(N\) over the three-letter alphabet \(\{1,2,3\}\) with no isolated 1's [22; 23]. Moreover, \(W_{N}\) is defined recursively according to the recurrence relation \(W_{N}=3W_{N-1}-2W_{N-2}+2W_{N-3}\), \(W_{0}=1\), \(W_{1}=2\), \(W_{2}=5\), and can be obtained from the generating function \(g(z)=\sum_{N=0}^{\infty}z^{N}W_{N}\), which is given by \(g(z)=(1-z+z^{2})/(1-3z+2z^{2}-2z^{3})\), see Ref. [23]. Furthermore [24], inserting \(W_{N}\propto a^{N}\) for \(N\to\infty\) into the recurrence relation, one gets a cubic equation for \(a\), \(a^{3}-3a^{2}+2a-2=0\), whose largest root \((27+3\sqrt{78})^{1/3}/3+(27+3\sqrt{78})^{-1/3}+1\approx 2.521\,379\,706\,8\) settles \(W_{N}\approx 2.52^{N}\) as \(N\to\infty\). Now, the thermodynamic average is defined as follows:

\[\langle(\ldots)\rangle=\frac{1}{Z}{\sum_{\xi_{1}\ldots\xi_{N}}}^{\prime}\sum_{\rm rot}\left(\exp\left[-\frac{E(\xi_{1}\ldots\xi_{N})}{k_{\rm B}T}\right](\ldots)\right),\]
\[Z={\sum_{\xi_{1}\ldots\xi_{N}}}^{\prime}\sum_{\rm rot}\exp\left[-\frac{E(\xi_{1}\ldots\xi_{N})}{k_{\rm B}T}\right]. \tag{1}\]

Here the prime on the first sum indicates the restriction discussed above on the set of values \(\xi_{1}\ldots\xi_{N}\) (no words with isolated 1's), and the second sum denotes the summation over the rotational degrees of freedom for a given (allowed) set \(\xi_{1}\ldots\xi_{N}\). Moreover, \(E(\xi_{1}\ldots\xi_{N})\) stands for the sum of the rotation energy and the interaction energy, which contribute to the rotation part \(K\) and the interaction part \(Q\) of the partition function \(Z=Z(T,N)\). We take into account the short-range nearest-neighbor interactions by treating the molecules at sites \(j\) and \(j+1\) with \(a_{j,j+1}=a_{1}\) as linked (i.e., rigidly connected) through a hydrogen bond. More generally, one has to assume in addition that the energy of the hydrogen-bonded chain also decreases because of the bonding, see below. The long-range interaction energy of all molecules, \(U_{\xi_{1}\ldots\xi_{N}}\), is the sum over all \(N(N-1)/2\) pairs of the dipole-dipole interaction \(u_{ij}\), \(i<j\), \(i=1,\ldots,N-1\), \(j=2,\ldots,N\). Moreover,

\[u_{ij}=k\frac{\mu_{\perp,i}\mu_{\perp,j}-2\mu_{\parallel,i}\mu_{\parallel,j}}{a_{ij}^{3}}, \tag{2}\]

\(k=1/(4\pi\epsilon_{0})\), where \(\epsilon_{0}\) is the vacuum permittivity (SI units), if both sites \(i\) and \(j\) belong to the same hydrogen-bonded chain. However,

\[u_{ij}=k\frac{-2\mu_{\parallel,i}\mu_{\parallel,j}}{a_{ij}^{3}}, \tag{3}\]

if the sites \(i\) and \(j\) belong to different hydrogen-bonded chains or at least one of these sites is in the state 2 or 3. In other words, the on-site components \(\mu_{\perp}\) contribute to the intersite interaction \(u_{ij}\) only if both sites rotate as a whole, but they do not contribute to the intersite interaction if the sites rotate independently. In contrast, the on-site components \(\mu_{\parallel}\) always contribute to the intersite interaction \(u_{ij}\). The rotations, limited because of the CNT geometry, are accounted for as follows. A hydrogen-bonded chain consisting of \(n\) molecules has the moment of inertia \(nI\) and rotates about one axis only, which coincides with the nanotube axis. Its energy is given by \(E_{m}=\hbar^{2}m^{2}/(2nI)\) with \(m=0,\pm 1,\pm 2,\ldots\), and hence each energy level \(E_{m}\) except the one with \(m=0\) is two-fold degenerate [25].
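Returning briefly to the state count introduced above, the recurrence for \(W_{N}\) is easy to verify numerically; the sketch below cross-checks it against a brute-force enumeration of words without isolated 1's and recovers the quoted growth rate \(\approx 2.5214\).

```python
# Cross-check of the state count W_N: brute-force enumeration of words over
# {1,2,3} with no isolated 1's versus the recurrence quoted above.
from itertools import product

def no_isolated_ones(word):
    n = len(word)
    # every letter 1 must have a neighboring letter 1
    return all(not (w == 1
                    and (i == 0 or word[i - 1] != 1)
                    and (i == n - 1 or word[i + 1] != 1))
               for i, w in enumerate(word))

def brute(N):
    return sum(no_isolated_ones(w) for w in product((1, 2, 3), repeat=N))

W = [1, 2, 5]                          # W_0, W_1, W_2
for N in range(3, 16):
    W.append(3 * W[-1] - 2 * W[-2] + 2 * W[-3])

assert all(brute(N) == W[N] for N in range(1, 10))
print(W[15] / W[14])                   # -> 2.5213..., the largest cubic root above
```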
The rotational partition function of the hydrogen-bonded chain reads:

\[K_{n}^{(1)}=\sum_{m=-\infty}^{\infty}\exp\left(-\frac{m^{2}}{n\tau}\right), \tag{4}\]

\(\tau=T/T_{\rm rot}\), \(k_{\rm B}T_{\rm rot}=\hbar^{2}/(2I)\). Furthermore, we assume that a site in the state 2 corresponds to a molecule which rotates similarly to the hydrogen-bonded chain, i.e., contributes \(K_{1}^{(1)}\) to the rotation part of the partition function. Moreover, a site in the state 3 corresponds to a molecule which rotates about three axes; its energy is given by \(E_{J}=\hbar^{2}J(J+1)/(2I)\) with \(J=0,1,2,\ldots\), and the degeneracy of the energy level \(E_{J}\) is \((2J+1)^{2}\). The partition function of such a rotor reads:

\[K_{1}^{(3)}=\sum_{J=0}^{\infty}\left(\left(2J+1\right)^{2}\exp\left[-\frac{J(J+1)}{\tau}\right]\right). \tag{5}\]

Having the partition function \(Z\) (1), we immediately get the Helmholtz free energy \(F=-k_{\rm B}T\ln Z\) and hence the entropy \(S=-\partial F/\partial T\), the internal energy \(E=F+TS\), and the specific heat \(C=T\partial S/\partial T\). According to Eq. (1), we also get the average dipole moment (per site) or, more precisely, the following quantities:

\[\mu_{\parallel}=\frac{1}{N}\sum_{j=1}^{N}\langle\mu_{\parallel,j}\rangle,\hskip 8.535827pt|\mu_{\perp}|=\frac{1}{N}\sum_{j=1}^{N}\langle|\mu_{\perp,j}|\rangle. \tag{6}\]

Obviously, \(\mu_{\rm tang}=\mu_{\parallel}/\mu\) and \(\mu_{\rm norm}=|\mu_{\perp}|/\mu\), see Sec. II.2 and Fig. 3. Finally, we can calculate the average length of the chain \(L\) and the coefficient of linear thermal expansion \(\alpha_{L}=(1/L)(\partial L/\partial T)\):

\[L=\sum_{j=1}^{N-1}\langle a_{j,j+1}\rangle,\hskip 8.535827pt\alpha_{L}=\frac{1}{L}\frac{{\rm d}L}{{\rm d}T}. \tag{7}\]

The length of the chain per site, \(L/N\), can be related to the mean distance between the nearest nitrogen atoms, see Fig. 4. Only a few parameters are used as input for the lattice model described above. We begin with the energy scale, which is determined by \(T_{\rm rot}\). Using the values \(I_{A}=I_{B}=2.8\times 10^{-47}\) or \(I_{C}=4.4\times 10^{-47}\) kg m\({}^{2}\) (SI units) [26; 27], we obtain for \(T_{\rm rot}\) values of about 14 or 9 K. In our calculations we set \(T_{\rm rot}=10\) K. The energy scale determined by \(T_{\rm dip}\propto(k\mu^{2}/a^{3})/k_{\rm B}\) depends on the values of the dipole moment \(\mu\) and the characteristic length \(a\). The quantum-chemical computations and MD simulations illustrated in Sec. II allow us to choose these parameters. Using the data \(\mu=1.35\ldots 1.28\) D and \(a=3.1\ldots 4.6\) Å from Figs. 3 and 4 (Löwdin charges), we may set \(R=0.07\), that is, \(T_{\rm dip}=T_{\rm rot}/R\approx 143\) K. Note that for the Mulliken charges, Figs. 8 and 9, one has to decrease \(R\) slightly. Next, we fix the angles as follows: \(\alpha_{1}=56^{\circ}\), \(\alpha_{2}=20^{\circ}\), \(\alpha_{3}=80^{\circ}\), cf. Fig. 2. Finally, the length scale is determined by \(a_{1}\), \(a_{2}\), and \(a_{3}\): it is reasonable to set \(a_{1}=3.10\) Å, \(\varepsilon=0.25\), \(a_{2}=(1+\varepsilon)a_{1}\approx 3.88\) Å, and \(a_{3}=(1+2\varepsilon)a_{1}=4.65\) Å in view of the MD results reported in Fig. 4. The values \(0^{\circ}\leq\alpha_{2}<\alpha_{1}<\alpha_{3}\leq 90^{\circ}\) and \(a_{1}<a_{2}<a_{3}\) are chosen to be in line with the whole temperature dependencies shown in the bottom panel of Fig. 3 and in Fig. 4.
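The two rotational partition functions, Eqs. (4) and (5), converge quickly and can be evaluated by direct truncated summation; a minimal sketch follows (the truncation cutoffs are our own choices, with \(T_{\rm rot}=10\) K as above).

```python
# Sketch of the rotational partition functions, Eqs. (4) and (5), by truncated
# summation (cutoffs chosen large enough that the dropped terms are negligible).
import numpy as np

def K1_chain(n, tau, m_max=400):
    """Planar rotor of a hydrogen-bonded chain of n molecules, Eq. (4)."""
    m = np.arange(-m_max, m_max + 1)
    return np.exp(-m**2 / (n * tau)).sum()

def K1_free(tau, j_max=400):
    """Free three-axis rotor of an independent molecule, Eq. (5)."""
    J = np.arange(j_max + 1)
    return ((2 * J + 1)**2 * np.exp(-J * (J + 1) / tau)).sum()

T_rot = 10.0
for T in (10.0, 50.0, 100.0):
    tau = T / T_rot
    print(T, K1_chain(1, tau), K1_free(tau))
```

The much faster growth of \(K_{1}^{(3)}\) with \(\tau\) reflects the entropy gain of the freely rotating state 3, which is what eventually destroys the ordered quasiphases at high temperature.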
Importantly, the value of \(\varepsilon\) must exceed a certain threshold in order for the ground state to be the hydrogen-bonded chain \(111\ldots\) rather than the state \(22\ldots 2\). The more realistic description mentioned above would imply including a bonding energy of the hydrogen-bonded chain, which lowers the energy of the state \(111\ldots\). In the present study, we do not take this bonding energy into account; it may modify the lattice-model outcome quantitatively. We emphasize here that our aim is not to reproduce the MD simulations by the lattice-model analysis, but only to demonstrate a reasonable agreement between the conclusions yielded by the two approaches, i.e., MD simulations and statistical-mechanics calculations.

### Lattice model results

We perform all lattice-model calculations described in the previous subsection for \(N=8,9,10,11,12\) using the Maple software package on a personal computer. Our findings are reported in Figs. 5 and 6 and are discussed below. To illustrate how the intermediate phase shows up, we single out in the partition function \(Z\) (1) the contributions of hydrogen-bonded chains of various lengths. Namely, of the length \(N\), \({\cal Z}_{N}=K_{N}^{(1)}\,{\cal Q}_{N}\), of the length \(N-1\), \({\cal Z}_{N-1}=K_{N-1}^{(1)}{\cal Q}_{N-1}\), \(\ldots\), of the length 2, \({\cal Z}_{2}=K_{2}^{(1)}{\cal Q}_{2}\), and the contributions without the hydrogen-bonded chains, \({\cal Z}_{0}\), i.e., \(Z={\cal Z}_{N}+{\cal Z}_{N-1}+\ldots+{\cal Z}_{2}+{\cal Z}_{0}\), where \(K_{n}^{(1)}\) is given in Eq. (4) and \({\cal Q}_{n}\) denotes the remaining part of \({\cal Z}_{n}\), \(n=N,\ldots,2\), see Ref. [7]. Moreover, we introduce the probabilities \(p_{N}={\cal Z}_{N}/Z\), \(p_{N-1}={\cal Z}_{N-1}/Z\), \(\ldots\), \(p_{2}={\cal Z}_{2}/Z\), and \(p_{0}={\cal Z}_{0}/Z\); obviously, \(p_{N}+p_{N-1}+\ldots+p_{2}+p_{0}=1\). The temperature-dependent probabilities \(p_{N}\), \(p_{N-1}\), \(\ldots\), \(p_{2}\), and \(p_{0}\) control the role played in the thermodynamics by the configurations with different lengths of hydrogen-bonded chains. In the zero-temperature limit \(T\to 0\), the lowest-energy ground state dominates and \(p_{N}\to 1\). In the high-temperature limit \(T\to\infty\), the dipole-dipole interactions become irrelevant, \(Z\to\mathcal{Z}_{0}\to(K_{1}^{(3)})^{N}\), and \(p_{0}\to 1\). Temperature dependencies of \(p_{N}\), \(p_{N-1}\), \(\ldots\), \(p_{2}\), and \(p_{0}\) for \(N=10\) and \(N=12\) are shown in Fig. 5. As can be seen from this figure, there is a wide temperature range of \(55\ldots 75\) K where the largest probability \(p_{2}\) exceeds 40% for both values of \(N\). A more detailed analysis of \(\mathcal{Q}_{2}\) shows that the main contribution to \(p_{2}\) comes from the subset of configurations in which the remaining \(N-2\) molecules are in the state 2 [7]. In summary, the considered probabilities \(p_{N},\ldots,p_{0}\) for \(N=8,\,9,\,10,\,11,\,12\) provide evidence that the states which contain ammonia molecules in the on-site states 1 and 2 are the most relevant ones in the temperature range \(50\ldots 75\) K and result in the emergence of an intermediate quasiphase. Since the ammonia molecules in the state 2 and in the short hydrogen-bonded chains contribute to \(\mu_{\parallel}\) but not to \(\mu_{\perp}\), the tangential/normal component of the total dipole moment should increase/decrease in this temperature interval.
Further evidence follows from the analysis of the temperature dependencies of the observable quantities discussed below. The intermediate quasiphase is stable in a rather wide temperature region \(T_{1}\ldots T_{2}\). An estimate of the quasiphase-transition temperatures \(T_{1}<T_{2}\) can be obtained by equating the corresponding probabilities, that is, \(p_{3}(T_{1})=p_{2}(T_{1})\) and \(p_{2}(T_{2})=p_{0}(T_{2})\). For the chosen set of parameters we get \(T_{1}\approx 39,\,41,\,44,\,46,\,49\) K and \(T_{2}\approx 69,\,72,\,75,\,77,\,79\) K for \(N=8,\,9,\,10,\,11,\,12\), respectively. In Fig. 6 we show the temperature dependencies of various quantities of interest for the lattice model of \(N=8,9,10,11,12\) sites. The introduced model predicts an increase of \(\mu_{\parallel}\) (6) and a decrease of \(|\mu_{\perp}|\) (6) in the temperature range \(30\ldots 80\) K, in agreement with the MD simulations, see the blue and orange curves in the top panel of Fig. 6. The specific heat per site has a minimum in the interval \(30\ldots 70\) K, whose minimal value depends noticeably on \(N\). At high temperatures the specific heat approaches \(3k_{\rm B}/2\), as it should for independent three-axis rotators, see Eq. (5). The average length \(L\) increases with temperature, but at different rates in different temperature ranges. This can also be seen from the temperature dependence of \(\alpha_{L}\) from Eq. (7) shown in the inset. We also plot the results of the MD simulations shown previously in Fig. 4, after assuming \(L(0)/(N-1)=3.10\) Å. Again both results, lines and filled circles, agree qualitatively. In the lowest panel of Fig. 6 we report the lattice-model predictions for the dipole correlators \(\langle\mu_{\parallel,1}\mu_{\parallel,i}\rangle\), \(i=1,\ldots,8\) (\(N=8\)), which also indicate a correlated state of the ammonia molecules in the CNT at low temperatures. From this figure we conclude that all correlators practically coincide, i.e., are independent of the distance between the sites, up to 15 K, having a value of about \(\cos^{2}56^{\circ}\) (quasiphase 1, when the molecules form a hydrogen-bonded chain); with further temperature growth, up to 55 K, the correlators with \(i=2,\ldots,6\) remain almost indistinguishable, passing through a maximal value of about 0.5 (quasiphase 2). Only at higher temperatures do the dipole correlations decrease with increasing distance between the sites and gradually fall to \(\cos^{2}80^{\circ}\) (quasiphase 3). At the end of this subsection we remark that the estimates of the temperature interval of the intermediate quasiphase, as they follow from the inspection of various quantities, differ slightly and depend on the quantity under analysis. This is yet another indication that we face a gradual emergence of the intermediate quasiphase rather than a strict phase transition.

## IV Discussion and Summary

Figure 5: Probabilities of various configurations versus temperature for the chains of \(N=10\) (top) and \(N=12\) (bottom) sites. There is a temperature region \(50\ldots 75\) K where the probability \(p_{2}\) (blue curves) dominates.

In conclusion, motivated by the experiments reported in Ref. [2] and their theoretical support [2; 7], which demonstrated a temperature-driven water dipole quasiordering in one dimension, we address the question of whether such a phenomenon can be observed for other molecules and focus on the ammonia molecules NH\({}_{3}\).
Our theoretical study, which includes quantum chemistry calculations for the ammonia molecule model and MD simulations for ammonia molecules encapsulated in the (6,5) CNT, as well as statistical-mechanics arguments on the basis of a lattice model, gives an affirmative answer to this question: ammonia dipoles show quasiorder at intermediate temperatures below 100 K, becoming parallel to the CNT axis. Even though the details of the quasiordering depend on the input characteristics of the ammonia molecule inside the CNT (Löwdin or Mulliken charges), the occurrence of quasiordering itself cannot be questioned. Both the water molecules and the ammonia molecules show similar behavior. Namely, the atomic charges in the CNT are reduced and the rotations within the CNT become restricted; the temperature dependence of the probability distribution density of the angle \(\theta\) between the molecular dipole moment and the CNT axis indicates three quasiphases and two quasiphase transitions between them; and an interplay of the interaction contribution and the entropy contribution (rotations within the restricted one-dimensional geometry) produces highly ordered structures in a temperature range \(T_{1}\ldots T_{2}\). In this temperature range, as follows from the lattice-model analysis, the states with dipole moments oriented along the CNT axis dominate the partition function, which can be seen in the temperature dependencies of \(\mu_{\parallel}\), \(|\mu_{\perp}|\), the specific heat, or the dipole correlators. Clearly, we deal with a one-dimensional lattice model with a finite number of sites, so no true phase transition can be expected; however, a gradual replacement of one quasiphase by another is possible, and several calculated quantities do indicate this. The quasiordering at intermediate temperatures is a classical phenomenon: it occurs at temperatures at which the thermal de Broglie wavelength of the ammonia molecule is much smaller than, e.g., the intermolecular distances. The most evident difference between the water molecules and the ammonia molecules is the angle of the molecular dipoles relative to the nanotube axis in the hydrogen-bonded quasiphase, which is about two times larger in the ammonia case than in the water case (recall, \(\alpha_{1}=31^{\circ}\) for water [7] versus \(\alpha_{1}=56^{\circ}\) for ammonia). The most interesting question which naturally shows up is whether it is possible to encapsulate ammonia molecules inside a narrow CNT and then examine extensively ammonia-filled and empty CNTs for comparison. Unfortunately, we are not aware of such experiments. We hope that the presented theoretical study may be of use for corresponding experiments in the future.

###### Acknowledgements.

The authors thank Danylo Dobushovskyi for bringing Refs. [22; 23] to their attention.

## Author declarations

**Conflict of interest:** The authors have no conflicts to disclose.

Figure 6: Temperature dependencies of (from top to bottom) \(\mu_{\parallel}/\mu\) (blue) and \(|\mu_{\perp}|/\mu\) (orange), see Eq. (6), the specific heat \(c(T)=C/(k_{\rm B}N)\), the average length (the coefficient of linear thermal expansion is shown in the inset), see Eq. (7), and the correlators \(\langle\mu_{\parallel,1}\mu_{\parallel,i}\rangle/\mu^{2}\), \(i=1,\ldots,N\), for the chains of \(N=8,\ldots,12\) sites. MD simulations (\(N=35\) ammonia molecules in the CNT of length \(\approx 170\) Å, filled circles) are shown in the panels with \(\mu_{\parallel}/\mu\), \(|\mu_{\perp}|/\mu\), and the average length [we set \(L(0)/(N-1)=3.10\) Å].

**Author contributions:** O. D. conceived the study; M. D. performed molecular dynamics simulations; V.
K. performed quantum chemical calculations; T. K. performed calculations for the lattice model. All authors discussed the results and commented on the manuscript.

## Data availability

The data that support the findings of this study are available within the article.

## Appendix A Quantum chemistry calculation details

In this appendix, we report further results of the quantum chemistry calculations for ammonia molecules encapsulated in the \((6,5)\) CNT. First of all, we show some characteristics of the ammonia molecules restricted to one dimension but without the CNT (Table 1) and then inside the CNT (Table 2), as they follow from semi-empirical methods [28]. Comparing the numbers, we notice that the presence of the CNT results in only small changes. This allows us to restrict ourselves to the less computationally demanding case of the ammonia molecules without the CNT when performing quantum chemistry calculations at the Hartree-Fock level (Table 3) or the density-functional-theory level (Table 4). Similarly to the previous studies for water [2; 7], we observe rather scattered outcomes for the charges of the nitrogen and hydrogen atoms of NH\({}_{3}\), see the second and third rows in Tables 1-4. Most importantly, and again similarly to the previous studies for water [7], not all sets of charges \(q_{\rm N}\) and \(q_{\rm H}\), when plugged into MD simulations, yield two quasiphase transitions between three quasiphases. For example, \(q_{\rm N}\) and \(q_{\rm H}=-q_{\rm N}/3\) optimized at the AM1 level (Tables 1 and 2) do not reproduce a temperature-driven orientational quasiordering in MD simulations.

\begin{table} \begin{tabular}{|c|c|c|} \hline & AM1 & PM6 \\ \hline \(q_{\rm N}\) (\(e\)) & \(-0.3913\) & \(-0.6281\) \\ \hline \(q_{\rm H}\) (\(e\)) & 0.1289 & 0.2100 \\ \hline \(\alpha_{\rm H-N-H}\) (\({}^{\circ}\)) & 107.6 & 105.1 \\ \hline \(r_{\rm N-N}\) (Å) & 3.05 & 2.94 \\ \hline \(r_{\rm N-H}\) (Å) & 1.00 & 1.01 \\ \hline \end{tabular} \end{table} Table 2: Quantum chemistry calculations for the ammonia molecules inside the CNT, see Sec. II.1. Semi-empirical methods AM1 and PM6.

\begin{table} \begin{tabular}{|c|c|} \hline & B3LYP \\ \hline \(q_{\rm N}\) (\(e\)) & \(-0.7814\) \\ & \(-0.9582\) \\ \hline \(q_{\rm H}\) (\(e\)) & 0.2704/0.2540 \\ & 0.3761/0.2918 \\ \hline \(\alpha_{\rm H-N-H}\) (\({}^{\circ}\)) & 105.5/104.9 \\ \hline \(r_{\rm N-N}\) (Å) & 3.18 \\ \hline \(r_{\rm N-H}\) (Å) & 1.03/1.02 \\ \hline \end{tabular} \end{table} Table 4: Quantum chemistry calculations for the ammonia molecule restricted to one dimension (without CNT), see Sec. II.1. Density-functional-theory (B3LYP) results; see the explanations in the caption of Table 3.

\begin{table} \begin{tabular}{|c|c|c|} \hline & AM1 & PM6 \\ \hline \(q_{\rm N}\) (\(e\)) & \(-0.3965\) & \(-0.6127\) \\ \hline \(q_{\rm H}\) (\(e\)) & 0.1322 & 0.2042 \\ \hline \(\alpha_{\rm H-N-H}\) (\({}^{\circ}\)) & 107.8 & 105.7 \\ \hline \(r_{\rm N-N}\) (Å) & 2.95 & 2.97 \\ \hline \(r_{\rm N-H}\) (Å) & 1.00 & 1.01 \\ \hline \end{tabular} \end{table} Table 1: Quantum chemistry calculations for the ammonia molecules restricted to one dimension (without CNT), see Sec. II.1. Semi-empirical methods AM1 and PM6.
\begin{table} \begin{tabular}{|c|c|c|} \hline & STO-2G & MINI \\ \hline \(q_{\rm N}\) (\(e\)) & \(-0.2352\) & \(-0.4272\) \\ & \(-0.3735\) & \(-0.6150\) \\ \hline \(q_{\rm H}\) (\(e\)) & 0.1249/0.0553 & 0.1662/0.1317 \\ & 0.1821/0.0958 & 0.2350/0.1911 \\ \hline \(\alpha_{\rm H-N-H}\) (\({}^{\circ}\)) & 103.5/101.9 & 108.3/107.8 \\ \hline \(r_{\rm N-N}\) (Å) & 2.83 & 3.29 \\ \hline \(r_{\rm N-H}\) (Å) & 1.06/1.04 & 1.03/1.02 \\ \hline \end{tabular} \end{table} Table 3: Quantum chemistry calculations for ammonia molecules restricted to one dimension (without CNT), see Sec. II.1. Hartree-Fock results with the STO-2G and Huzinaga MINI basis sets. A hydrogen-bonded chain implies, first, one bond-forming hydrogen, with charge \(q_{\rm H}\) given by the first number (before the slash) in the third row, and two dangling hydrogens, with charge \(q_{\rm H}\) given by the second number (after the slash), and, second, a shorter and a longer N-H bond, with \(r_{\rm N-H}\) given by the first and the third numbers in the last row, respectively, as well as the dangling hydrogens, with \(r_{\rm N-H}\) given by the second number in the last row. The partial charges \(q_{\rm N}\) and \(q_{\rm H}\) are determined from the Löwdin population analysis (the upper values in the second and third rows) or from the Mulliken population analysis (the lower values in the second and third rows).

## Appendix B Mulliken population analysis for the MD model of ammonia

In this appendix, we report MD results for the ammonia molecule model with the Mulliken charges. Namely, Figs. 7, 8, and 9 (Mulliken charges) correspond to Figs. 2, 3, and 4 (Löwdin charges). All shown dependencies are qualitatively the same; however, the temperature range of the intermediate phase is shifted to higher temperatures and is \(90\ldots 100\) K. In Figs. 8 and 9 we also show by solid curves the lattice-model predictions (see Sec. III), taking \(N=10\) as an example. To capture the change in the atomic charges, we decreased \(R\) (in comparison with the value \(R=0.07\) used for the Löwdin charges) and set \(R=0.03\) (solid curves). Moreover, to get the average length in angstroms (Fig. 9), we set \(L(0)/(N-1)=2.90\) Å. As can be seen from Figs. 7, 8, and 9, the lattice-model calculations again agree overall with the MD simulations, although the extrema of \(\mu_{\parallel}(T)\) (blue) and \(|\mu_{\perp}|(T)\) (orange) are sharper than their MD counterparts (compare to the green and red symbols, respectively).
2310.15253
Shared randomness allows violation of macroscopic realism using a single measurement
The macrorealistic description of systems is based mainly on two basic intuitions about the classical world, namely, macrorealism per se, that is, the system is always in a distinct state, and non-invasive measurements, that is, measurements do not disturb the system. Given the assumption of no-signalling in time, one utilizes Leggett-Garg inequalities to observe a violation of macroscopic realism, which requires at least three measurements. In this work, we show that if one has access to shared randomness then one can observe a violation of macroscopic realism using a single measurement even if no-signalling in time is satisfied. Interestingly, using the proposed scheme one can also rule out a larger class of models, which we term "macroscopic no-signalling" theories, which cannot violate the no-signalling in time conditions. We further construct a witness to observe the violation of macroscopic no-signalling.
Shubhayan Sarkar
2023-10-23T18:03:46Z
http://arxiv.org/abs/2310.15253v1
# Shared randomness allows violation of macroscopic realism using a single measurement

###### Abstract

The macrorealistic description of systems is based mainly on two basic intuitions about the classical world, namely, macrorealism per se, that is, the system is always in a distinct state, and non-invasive measurements, that is, measurements do not disturb the system. Given the assumption of no-signalling in time, one utilizes Leggett-Garg inequalities to observe a violation of macroscopic realism, which requires at least three measurements. In this work, we show that if one has access to shared randomness then one can observe a violation of macroscopic realism using a single measurement even if no-signalling in time is satisfied. Interestingly, using the proposed scheme one can also rule out a larger class of models, which we term "macroscopic no-signalling" theories, which cannot violate the no-signalling in time conditions. We further construct a witness to observe the violation of macroscopic no-signalling.

## I Introduction

Bell's notion of local realism [1; 2] aims for a classical understanding of the quantum world. As it turns out, such a classical description is falsified by quantum theory. Macroscopic realism, or macrorealism, on the other hand, aims to address the problem of whether macroscopic, or simply classical, systems can behave quantum mechanically or not. For instance, a famous problem in this regard is whether Schrödinger's cat can be alive and dead at the same time. Macrorealism embodies our classical understanding of the world, where macroscopic objects have well-defined states and measurements of their properties do not cause any significant perturbations. These principles form the foundation of classical physics and are deeply ingrained in our scientific and everyday thinking. It is well known that in the realm of quantum physics these classical intuitions are challenged, leading to the exploration of phenomena like quantum entanglement and non-locality. Macrorealism is based on two major assumptions. First, macrorealism per se: this principle suggests that a system, in the macroscopic sense, always exists in a well-defined and distinct state. In other words, for classical objects that we encounter in our everyday lives, there is an intrinsic property or state that fully characterizes the system at any given moment. This state is assumed to be definite, and it remains unchanged until acted upon by an external force or measurement. For instance, for a classical object like a billiard ball on a pool table, macrorealism implies that it has a specific position and velocity, and this information uniquely defines its state. Second, non-invasive measurements (NIM): this intuition posits that measurements made on a system do not interfere with or disturb the system. In classical physics, when one measures a property of an object, such as its position or velocity, one expects that the act of measurement does not alter these properties. The measurements are assumed to be passive observations, and the system remains in its original state after the measurement. To observe a violation of macrorealism one needs to consider time-like correlations, that is, correlations between measurement events on a single system at different times. One can view this scenario as analogous to the standard Bell scenario, but in the time domain. However, there are some crucial differences between the two scenarios [3].
For instance, no-signalling is always satisfied in the Bell scenario but can be violated in the time domain. Consequently, no-signalling in time can serve as a witness of the violation of macrorealism [4; 5; 6; 7; 8; 9]. Even if no-signalling in time is satisfied, one can still observe a violation of macrorealism using Leggett-Garg inequalities [10], which can be considered a temporal version of Bell inequalities. It is important to note here that to observe a violation of Leggett-Garg inequalities, one requires at least three different measurements. Conversely, one can also obtain a violation of no-signalling in time even if the Leggett-Garg inequalities are satisfied [8]. The violation of Leggett-Garg inequalities has been experimentally demonstrated, but only for microscopic systems [11; 12; 13; 14; 15]. It still remains a challenge to demonstrate the violation of Leggett-Garg inequalities in systems that can be considered macroscopic. The maximal violation of Leggett-Garg inequalities has also been utilised for semi-device-independent certification of quantum measurements [16; 17; 18]. Interestingly, it was shown in [18] that one can utilise the maximal violation of Leggett-Garg inequalities for the certification of an unbounded amount of randomness without nonlocality. In this work, we show that even if no-signalling in time and Leggett-Garg inequalities are satisfied, one can still observe a violation of macrorealism. Consequently, none of the above-mentioned conditions is necessary to observe a violation of macrorealism. For our purpose, we consider two space-like separated parties, each of whom has access to shared randomness. Each party needs to perform only a single measurement. We impose that the local correlations of the parties are no-signalling in time. We then observe that, using a single measurement per party, one can obtain a violation of macrorealism. Furthermore, we identify a larger class of classical models that can be falsified using the proposed setup. We term the corresponding notion _macroscopic no-signalling_. We then construct a simple witness to observe the violation of macroscopic no-signalling. Furthermore, we identify some necessary conditions for the violation of macroscopic no-signalling.

## II Scenario

The scenario consists of two parties, namely, Bob and Alice, and a preparation device that sends one subsystem to Bob and one to Alice. Bob and Alice can each perform only a single measurement on their subsystems. However, Alice can measure her subsystem sequentially. Here, we restrict Alice to measuring her system twice, that is, at times \(t_{0},t_{1}\). Now, Bob and Alice repeat the experiment enough times to construct the joint probability distribution (correlations) \(\vec{p}=\{p(a_{0},a_{1},b)\}\), where \(p(a_{0},a_{1},b)\) denotes the probability of Alice obtaining outcomes \(a_{0},a_{1}\) at times \(t_{0},t_{1}\) and Bob obtaining outcome \(b\). Here we consider \(b=0,1\) and \(a_{0},a_{1}=0,1,2\). The scenario is depicted in Fig. 1. Let us now restrict to quantum theory and consider that the source sends the quantum state \(\rho_{AB}\), with Bob and Alice performing the measurements \(\{\mathcal{M}_{i}\}\) and \(\{\mathcal{R}_{i}\}\) respectively, where \(\mathcal{M}_{i}\geq 0\), \(\mathcal{R}_{i}\geq 0\) and \(\sum_{i}\mathcal{M}_{i}=\sum_{i}\mathcal{R}_{i}=\mathbb{1}\).
As the measurements, in general, might not be projective, the probability \(p(a_{0},a_{1},b)\) can be computed as [19]

\[p(a_{0},a_{1},b)=\operatorname{Tr}\left(\sqrt{\mathcal{R}_{a_{0}}}U_{a_{0}}^{\dagger}\mathcal{R}_{a_{1}}U_{a_{0}}\sqrt{\mathcal{R}_{a_{0}}}\otimes\mathcal{M}_{b}\;\rho_{AB}\right) \tag{1}\]

where \(U_{a_{0}}\) is some unitary dependent on the outcome \(a_{0}\). According to Lüders' postulate [20; 21], the unitaries in the above formula can be dropped. We impose that Alice's local correlations obey the well-known "no-signalling in time" constraint [3], which can be expressed as

\[\sum_{a_{i}}p(a_{0},a_{1})=p(a_{j})\quad i\neq j. \tag{2}\]

_Macroscopic no-signalling_-- Let us now specify the assumptions of _macroscopic no-signalling_. Consider again the setup depicted in Fig. 1. In general, one can consider that the source generates some variables \(\lambda\) with probability \(p(\lambda)\), which can be treated as shared randomness between Bob and Alice [see Fig. 2]. In principle, \(\lambda\) could also be some quantum states. Consequently, \(p(a_{0},a_{1},b)\) can be expressed as

\[p(a_{0},a_{1},b)=\sum_{\lambda}p(a_{0},a_{1},b|\lambda)p(\lambda). \tag{3}\]

It is straightforward to see from Fig. 2 that shared randomness can always be described locally, that is,

\[p(a_{0},a_{1},b|\lambda)=p(a_{0},a_{1}|\lambda)p(b|\lambda). \tag{4}\]

Now, macroscopic no-signalling can be simply stated as

**Definition 1** (Macroscopic no-signalling).: _Given any additional information \(\lambda\), Alice's measurement outcome at time \(t_{0}\) does not influence the outcome at a later time \(t_{1}\), that is,_

\[\sum_{a_{i}}p(a_{0},a_{1}|\lambda)=p(a_{j}|\lambda)\quad i\neq j\quad\forall\lambda. \tag{5}\]

Notice that the above definition encompasses a larger class of models than macroscopic realism, which is defined as [10]

\[p(a_{0},a_{1}|\lambda)=p(a_{0}|\lambda)p(a_{1}|\lambda). \tag{6}\]

As \(\sum_{a_{i}}p(a_{i}|\lambda)=1\) for every \(\lambda,i\), it is simple to observe that macroscopic realism, Eq. (6), implies macroscopic no-signalling, Eq. (5); however, the converse is not true. Any correlation satisfying Eq. (5) is termed a macroscopic no-signalling correlation. Let us now observe the violation of macroscopic no-signalling.

## III Violation of macroscopic no-signalling

Consider again the correlations \(p(a_{0},a_{1},b)\), written using Eq. (3) as

\[\sum_{a_{0}}p(a_{0},a_{1},b)=\sum_{\lambda}\sum_{a_{0}}p(a_{0},a_{1},b|\lambda)p(\lambda). \tag{7}\]

Using Eq. (4) and then macroscopic no-signalling (5), one can conclude that for all \(a_{1},b\)

\[\sum_{a_{0}}p(a_{0},a_{1},b)=\sum_{\lambda}p(a_{1}|\lambda)p(b|\lambda)p(\lambda)=p(a_{1},b). \tag{8}\]

Figure 1: The setup involves two players, Bob and Alice, along with a preparation device that dispatches a subsystem to each of them. Bob and Alice are each limited to a single measurement on their respective subsystems. However, Alice is allowed to conduct sequential measurements on her subsystem. In this context, we confine Alice's measurements to two specific time points, denoted as \(t_{0}\) and \(t_{1}\). Bob and Alice obtain the joint probability distribution \(\{p(a_{0},a_{1},b)\}\) at the end of the experiment.
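Before specializing to a concrete example, note that Eq. (1) under Lüders' rule (\(U_{a_{0}}=\mathbb{1}\)) is straightforward to evaluate numerically for arbitrary POVMs and states. A minimal sketch follows (our own illustration; the operator ordering Alice \(\otimes\) Bob in the Kronecker product is an assumed convention).

```python
# Sketch of Eq. (1) under Luders' rule (U = identity): probability of Alice's
# sequential outcomes a0, a1 and Bob's outcome b, for arbitrary POVMs and states.
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(A)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def p_joint(R, M, rho_AB, a0, a1, b):
    """R, M: lists of POVM elements for Alice and Bob; rho_AB acts on H_A (x) H_B."""
    sq = psd_sqrt(R[a0])
    E = sq @ R[a1] @ sq          # effective element for the sequence (a0, a1)
    return np.real(np.trace(np.kron(E, M[b]) @ rho_AB))
```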
Consider now that the source prepares the classically correlated state

\[\rho_{AB}=\frac{1}{2}|0_{A}0_{B}\rangle\!\langle 0_{A}0_{B}|+\frac{1}{2}|1_{A}1_{B}\rangle\!\langle 1_{A}1_{B}| \tag{9}\]

with Bob performing the measurement \(\sigma_{z}\) and Alice performing the measurement

\[\mathcal{R}_{0} = \frac{1}{3}(\mathbb{1}+\sigma_{z})\]
\[\mathcal{R}_{1} = \frac{1}{3}\left(\mathbb{1}-\frac{1}{2}\sigma_{z}+\frac{\sqrt{3}}{2}\sigma_{x}\right)\]
\[\mathcal{R}_{2} = \frac{1}{3}\left(\mathbb{1}-\frac{1}{2}\sigma_{z}-\frac{\sqrt{3}}{2}\sigma_{x}\right) \tag{10}\]

where \(\sigma_{z},\sigma_{x}\) are the Pauli \(z,x\) observables, respectively. For reference, the above measurement is also termed the _trine-POVM_ and has been experimentally implemented [22]. As Alice's measurement above is an extremal POVM [23], that is, each measurement element can be expressed as a projector times a positive number, the formula of Eq. (1) simplifies to

\[p(a_{0},a_{1},b)=\operatorname{Tr}\left(\sqrt{\mathcal{R}_{a_{0}}}\mathcal{R}_{a_{1}}\sqrt{\mathcal{R}_{a_{0}}}\otimes\mathcal{M}_{b}\,\rho_{AB}\right). \tag{11}\]

Notice that the local state of Alice is the maximally mixed state, and thus for any measurement by Alice the no-signalling in time condition (2) will be satisfied. Substituting Bob's and Alice's measurements into the above formula allows us to observe that the condition (8) is not satisfied for any \(a_{1},b\). Consequently, we can conclude that quantum theory violates the notion of macroscopic no-signalling with a single measurement by Alice.

_Witness_-- From an experimental perspective, one can never observe a strict equality condition such as Eq. (8). Thus, we now find a witness that allows one to observe a considerable difference between macroscopic no-signalling correlations and quantum correlations. The witness is given by

\[\mathcal{S}=\sum_{a_{1}=0,1,2}\sum_{b=0,1}\left|\sum_{a_{0}=0,1,2}p(a_{0},a_{1},b)-p(a_{1},b)\right|. \tag{12}\]

From Eq. (8), it is clear that for macroscopic no-signalling correlations \(\mathcal{S}=0\). Using the measurements suggested in Eq. (10) and the classically correlated state of Eq. (9), one can attain the value \(\mathcal{S}=1/3\).
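The value \(\mathcal{S}=1/3\) can be checked directly. The sketch below (a numerical illustration, not part of the original derivation; the labeling of Bob's outcomes \(b=0,1\) by the \(\pm 1\) eigenspaces of \(\sigma_{z}\) is an assumed convention) builds the state (9), the trine POVM (10), and \(\sigma_{z}\) for Bob, verifies that Alice's local statistics satisfy no-signalling in time, and evaluates the witness (12).

```python
# Numerical check of the example above: state (9), trine POVM (10) for Alice,
# sigma_z for Bob; verifies no-signalling in time and evaluates the witness (12).
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

R = [(I2 + sz) / 3,
     (I2 - sz / 2 + np.sqrt(3) / 2 * sx) / 3,
     (I2 - sz / 2 - np.sqrt(3) / 2 * sx) / 3]          # Eq. (10)
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]         # b=0 <-> +1 eigenspace (convention)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho = 0.5 * (np.kron(P0, P0) + np.kron(P1, P1))        # Eq. (9)

def psd_sqrt(A):
    w, v = np.linalg.eigh(A)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def p(a0, a1, b):                                      # Eq. (11), Luders rule
    sq = psd_sqrt(R[a0])
    return np.real(np.trace(np.kron(sq @ R[a1] @ sq, M[b]) @ rho))

# No-signalling in time, Eq. (2): Alice's local marginal for a1 is unchanged
for a1 in range(3):
    lhs = sum(p(a0, a1, b) for a0 in range(3) for b in range(2))
    assert np.isclose(lhs, np.real(np.trace(R[a1] @ (I2 / 2))))

# Witness of Eq. (12): any macroscopic no-signalling model gives S = 0
S = sum(abs(sum(p(a0, a1, b) for a0 in range(3))
            - np.real(np.trace(np.kron(R[a1], M[b]) @ rho)))
        for a1 in range(3) for b in range(2))
print(S)  # -> 0.3333..., i.e., S = 1/3
```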
## IV Necessary conditions for violation

Let us now identify some necessary conditions to violate macroscopic no-signalling. For this purpose, let us consider the correlation \(p(a_{0},a_{1},b)\) as in Eq. (1). Assuming Lüders' postulate, let us find the properties of Alice's measurement \(\{\mathcal{R}_{i}\}\) that satisfy the condition (8). Expanding (8) gives us

\[\sum_{a_{0}}\operatorname{Tr}\left(\sqrt{\mathcal{R}_{a_{0}}}\mathcal{R}_{a_{1}}\sqrt{\mathcal{R}_{a_{0}}}\otimes\mathcal{M}_{b}\,\rho_{AB}\right)=\operatorname{Tr}\left(\mathcal{R}_{a_{1}}\otimes\mathcal{M}_{b}\,\rho_{AB}\right). \tag{13}\]

It is clear from the above expression that if \(\mathcal{R}_{a_{0}}\) and \(\mathcal{R}_{a_{1}}\) commute for any \(a_{0},a_{1}\), then the above relation (13) is always satisfied. Thus, a necessary condition for observing a violation of macroscopic no-signalling is that Alice's measurement elements must be incompatible with each other. This rules out two major classes of measurements:

1. As projectors always commute among each other, macroscopic no-signalling cannot be violated using projective measurements by Alice.
2. The measurement elements of any two-outcome measurement also commute among each other.

Consequently, to observe a violation of macroscopic no-signalling, Alice needs to perform a measurement with at least three outcomes. This also motivates defining a class of measurements whose elements commute among each other.

**Definition 2** (Self-commuting measurements).: _Consider a measurement \(\mathcal{R}=\{\mathcal{R}_{i}\}\) such that every measurement element commutes with every other, that is, \(\mathcal{R}_{i}\mathcal{R}_{j}=\mathcal{R}_{j}\mathcal{R}_{i}\) for any \(i,j\). Then the measurement \(\mathcal{R}\) is a self-commuting measurement._

Self-commuting measurements cannot be used to violate the notion of macroscopic no-signalling. It will be interesting to find further properties of non-projective measurements that can violate macroscopic no-signalling.

## V Conclusions

The above-presented result can also be considered a violation of Leibniz's principle of indiscernibles [24], which can be simply stated as: principles that hold operationally should also hold ontologically. Here, no-signalling in time is satisfied operationally but is violated at the ontological level. One might argue that the notion of macrorealism or macroscopic no-signalling is very strong even for macroscopic systems. However, these notions are based on the very foundations of classical physics, and unless one finds a violation of them in classical theories, we can safely assume that classical systems must satisfy the above-mentioned notions. As a note, a weaker form of macrorealism imposed on the ontological description of quantum systems, termed ontic-distinguishability, was considered in [25]. Several follow-up problems arise from this work. The most interesting among them would be to find a unifying notion of local realism and macrorealism using a setup similar to the one presented in this work. A simpler problem in this regard would be to extend the above scenario to an arbitrary number of outcomes. As the witness suggested in this work is simple and can be violated using a classically correlated state and the trine-POVM (10), which has been experimentally implemented [22], we believe that it will not be difficult to observe the violation of macroscopic no-signalling, at least for microscopic quantum systems.

Figure 2: Macrorealistic description of the setup in Fig. 1. The source sends the variable \(\lambda\) to both Bob and Alice with probability \(p(\lambda)\). The variable does not change after Alice's measurement at time \(t_{0}\).

###### Acknowledgements.

This project was funded within the QuantERA II Programme (VERIqTAS project) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733.
2303.06471
Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review
Cancer has relational information residing at varying scales, modalities, and resolutions of the acquired data, such as radiology, pathology, genomics, proteomics, and clinical records. Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment. There can be disease-related information that is too subtle for humans or existing technological tools to discern visually. Traditional methods typically focus on partial or unimodal information about biological systems at individual scales and fail to encapsulate the complete spectrum of the heterogeneous nature of data. Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches that can extract and integrate relevant information from multiple sources. Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning. This review article provides an in-depth analysis of the state-of-the-art in GNNs and Transformers for multimodal data fusion in oncology settings, highlighting notable research studies and their findings. We also discuss the foundations of multimodal learning, inherent challenges, and opportunities for integrative learning in oncology. By examining the current state and potential future developments of multimodal data integration in oncology, we aim to demonstrate the promising role that multimodal neural networks can play in cancer prevention, early detection, and treatment through informed oncology practices in personalized settings.
Asim Waqas, Aakash Tripathi, Ravi P. Ramachandran, Paul Stewart, Ghulam Rasool
2023-03-11T17:52:03Z
http://arxiv.org/abs/2303.06471v3
# Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review ###### Abstract Cancer has relational information residing at varying scales, modalities, and resolutions of the acquired data, such as radiology, pathology, genomics, proteomics, and clinical records. Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment. There can be disease-related information that is too subtle for humans or existing technological tools to discern visually. Traditional methods typically focus on partial or unimodal information about biological systems at individual scales and fail to encapsulate the complete spectrum of the heterogeneous nature of data. Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches that can extract and integrate relevant information from multiple sources. Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning. This review article provides an in-depth analysis of the state-of-the-art in GNNs and Transformers for multimodal data fusion in oncology settings, highlighting notable research studies and their findings. We also discuss the foundations of multimodal learning, inherent challenges, and opportunities for integrative learning in oncology. By examining the current state and potential future developments of multimodal data integration in oncology, we aim to demonstrate the promising role that multimodal neural networks can play in cancer prevention, early detection, and treatment through informed oncology practices in personalized settings. ## 1 Introduction Cancer is a disease marked by a disordered growth of abnormal cells that may lead to death if not treated. Around 1.9 million people in the US are expected to be diagnosed with cancer in 2023. Cancer is the second most common reason for death in the US, and it is expected to cause 1,670 deaths per day in the US in 2023 [1]. However, with advances in oncology research, it is estimated that nearly 42% of newly diagnosed cases can be potentially avoided. Being a complex disease, the development and growth of cancer involve multiple microscopic and macroscopic changes in the cell morphology, which are not yet fully understood. In recent years, increasing interest has been in using machine learning techniques, such as deep neural networks (DNNs), to assist with cancer diagnosis and treatment. Deep learning (DL), a subset of machine learning (ML), uses neural networks with multiple layers to analyze and process large amounts of data. This technique has been used to develop computer-aided diagnostic (CAD) systems that automatically detect and classify cancerous regions in medical images, such as mammograms and computerized tomography (CT) scans. DL has been used to analyze genomic data to predict a patient's treatment response and identify new biomarkers for cancer diagnosis. With the growing amount of data generated in oncology, DL is becoming an increasingly important tool for understanding cancer, making diagnosis, predicting clinical outcomes, assessing effectiveness of different therapies, and developing new treatments [2]. ### Multimodal Learning (MML) One particularly promising approach is multimodal learning (MML), which combines information from multiple sources (or modalities) to improve the accuracy and reliability of a given task [3]. 
MML has gained notable popularity in recent years, as evidenced by the increasing number of publications related to MML in various fields, including the biomedical and clinical sciences (see Fig. 1). With the increasing availability of multimodal data, MML has become an important tool for advancing our understanding of complex diseases like cancer. Multimodal data fusion has the potential to enable the integration of diverse data types, such as medical images, molecular data, and electronic health records (EHR), to gain a more comprehensive understanding of a given problem. Combining information from multiple sources makes it possible to extract and integrate relevant features that may be missed when each data modality is analyzed in isolation. Recent approaches to multimodal data fusion, facilitated by DNNs, can tackle the challenge of learning from computer vision (CV) and natural language processing (NLP) data simultaneously. One such example is the Contrastive Language-Image Pre-training (CLIP) model introduced by OpenAI in 2021 [5], which uses natural language supervision to learn visual concepts and can perform various classification tasks. CLIP marks a significant improvement over traditional DL approaches in CV, as it reduces the robustness gap while matching the performance of ResNet-50 on ImageNet through zero-shot learning, without using labeled examples. CLIP is shown to have multimodal neurons, similar to those found in the human brain, which respond to clusters of abstract concepts rather than specific visual features. Another example of MML is the Foundational Language And Vision Alignment Model (FLAVA) [6], which combines vision and language representation learning for multimodal reasoning. FLAVA achieved the best average scores on vision, NLP, and multimodal tasks compared to baseline models, validating its effectiveness as a foundational model. MML has also been applied in oncology. The recently proposed RadGenNets model predicts gene mutations in non-small cell lung cancer (NSCLC) patients by integrating clinical and genomics data, Positron Emission Tomography (PET) scans, and gene mutation data using a fusion of Convolutional Neural Networks (CNNs) and Dense Neural Networks [7].

### Prior Work

Several survey papers have been published on the topic of multimodal data learning [8, 9, 10], which provide an overview of the current state of the field and identify key trends and challenges. Traditionally, researchers have pursued multimodal integration through classical ML approaches, especially in the field of oncology. However, with the successes of DL, multimodal integration using DNNs has gained significant traction. In particular, incorporating graph neural networks (GNNs) and Transformers has enabled the creation of complex multimodal data fusion models that can effectively process and analyze large amounts of data [11, 12].

Figure 1: The number of yearly publications related to multimodal learning, overall and in biomedical and clinical sciences, in the period 2014–2022 [4].

GNNs and Transformers have been proposed for various tasks in oncology, including tumor classification [13], prognosis prediction [14], and treatment response assessment [15].

### Contributions

The success of multimodal data fusion models in traditional ML areas, such as CV and NLP, shows their potential for improving the accuracy and efficiency of cancer diagnosis, prognosis, treatment planning, and the prediction of clinical variables of interest [8].
However, implementing multimodal data fusion in oncology also comes with a number of significant challenges. These include the need for large amounts of annotated data, the complexity of integrating diverse data types, and the potential for bias in the data [16]. While many recent surveys cover the topic of MML, there is a lack of surveys specifically focused on using DL models (such as GNNs and Transformers) in oncology settings. As these models become more widely used in the field, it is important to understand the current state of the art, as well as the challenges and limitations of applying these models to multimodal data fusion in oncology. In the following, we list the contributions of this article:

1. **Identifying large-scale multimodal learning approaches in oncology**. We provide an overview of the current state of the art in using DNNs, specifically GNNs and Transformers, for multimodal data fusion in oncology.
2. **Highlighting the challenges and limitations of multimodal oncology data fusion**. We discuss the challenges and limitations of implementing multimodal data fusion models in oncology, including the need for large datasets, the complexity of integrating diverse data types, missing data modalities and samples, and data alignment.
3. **Providing a comprehensive taxonomy for describing multimodal architectures**. We present a comprehensive taxonomy for describing multimodal architectures, including both traditional and DL methods, to facilitate future research in this area.
4. **Identifying key contributions and future directions for multimodal data fusion in oncology**. Finally, we explore the various ways of integrating multimodal oncology data using neural networks. We identify GNNs and Transformers as a potential solution to across-the-spectrum multimodal integration and present the associated challenges.

The article is organized as follows. We begin by providing basic information about MML, including techniques for representing and translating data across modalities and the taxonomy of multimodal learning, in Section 2. We discuss oncology data modalities in Section 2.1, the taxonomy of MML in Section 2.2, data fusion techniques in Section 2.3, and DNNs used on cancer data and their associated learning mechanisms in Sections 2.4 and 2.5. In Section 2.6, we discuss the challenges of learning from multimodal data that lead to the use of modern techniques, including GNNs and Transformers. The use of GNNs in MML is discussed in Section 3, followed by Transformers in Section 4. We present the existing challenges and opportunities in MML with respect to oncology data in Section 5. Finally, we present a non-exhaustive list of publicly available multimodal oncology datasets in Section 6 before concluding the article.

## 2 Fundamentals of Multimodal Learning

The word _modality_ represents the expression of an entity or a particular form of sensory perception, such as the visual actions of the characters, the sounds of the dialogues being spoken, or the background music [17]. A collective notion of these modalities is called _multi-modality_ [10].

### Data Modalities in Oncology

In oncology, data is inherently multimodal, as it includes information from multiple sources or modalities that are related in various ways. Fig. 2 provides a view of multiple modalities of cancer at various scales, from the population level to single-cell analysis. Present-day cancer hospitals gather and store data on each patient throughout diagnosis, prognosis, and disease progression.
The data can be broadly classified into three categories: clinical, molecular, and imaging, where each category provides complementary information about the patient's disease. Fig. 3 highlights different modalities within the clinical, molecular, and imaging categories. Clinical modalities refer to the traditional methods of diagnosis, such as physical examination, medical history, and laboratory tests. Molecular modalities include genetic testing and the analysis of gene mutations, while imaging modalities encompass various imaging techniques such as X-rays, CT scans, tissue biopsy, and histopathology images (whole slide imaging or WSI).

Figure 2: Various modalities of cancer data at different scales, from population to single cell.

Traditional methods to study cancer are targeted toward individual data modalities, e.g., EHR [18, 19], radiology [20], pathology [21], and molecular data (including genomics [22], transcriptomics [23, 24], proteomics [25, 26], etc.). **Genomics** refers to the study of an organism's entire genome, including its genes and their functions. **Transcriptomics** focuses on the analysis of RNA molecules in a cell, which can provide insights into gene expression levels and regulation. **Pathomics** is the study of the pathology of diseases, including the characterization of tissue samples and disease progression. **Radiomics** is the extraction of quantitative features from radiological medical images, which can be used to characterize the phenotype of a disease or predict patient outcomes. Efforts toward integrating molecular data resulted in the **multi-omics** research field, which aims to identify molecular markers associated with biological processes [27, 28]. Radiogenomics combines genetic and radiomic data [29], and proteogenomics is the unified study of proteins and genes [30], whereas the integration of multiple modalities such as pathomics, radiomics, and genomics is still an under-explored task [31]. **Multi-scale** or **multi-resolution** studies aim to analyze the disease across different scales, which may comprise the cellular, tissue, and organism levels [32], or to identify the infection outcome of a disease across tissue and organ scales [33]. **Multimodal** analysis endeavors to gain holistic insights about the disease through multiple measurement methods. Gene expression, copy number alteration, and clinical data can be studied in a unified framework for multimodal analysis [34]. Clinical, imaging, and different high-dimensional -omics data modalities provide an even better picture of the disease [35]. Similarly, other multimodal integration techniques include WSIs with gene expression data [36] and CT with EHR [37]. Within the confines of this review, we will collectively refer to all these multi-categories as _multimodal_ attributes of the data.

Figure 3: A landscape of oncology data (sub)modalities acquired for cancer care.

#### 2.1.1 Molecular Data

Molecular modalities provide information about the underlying genetic changes and alterations in cancer cells [38]. Two main areas of molecular analysis in oncology are proteomics and genomics. **Proteomics** involves the study of proteins and their changes in response to cancer, and it provides information about the biological processes taking place in cancer cells. **Genomics** involves the study of the entire genome of cancer cells, including changes in DNA sequence, gene expression, and structural variations [8].
Many publicly available datasets provide access to molecular data, including the Proteomics Data Commons (PDC) for proteomics data and the Genome Data Commons (GDC) for genetic data [39, 40]. These resources provide a wealth of information for researchers to analyze and gain insights into the cancer disease. #### 2.1.2 Imaging Data Imaging modalities play a crucial role in diagnosing and monitoring cancer. The imaging category can be divided into two main categories: radiological and histopathological. **Radiological** imaging encompasses various techniques such as X-rays, CT scans, and Magnetic resonance imaging (MRI), which provide information about the location and extent of cancer within the body. These images can be used to determine the size and shape of a tumor, monitor its growth, and assess the effectiveness of treatments. **Histopathological** imaging involves the examination of tissue samples obtained through biopsy or surgery [41]. These images (referred to as WSI) provide detailed information about the micro-structural changes in the cancer cells and can be used to diagnose cancer and determine its subtype. Furthermore, histopathological imaging can also provide information about the molecular changes occurring in the cancer cells, allowing for a more comprehensive understanding of the disease [41]. #### 2.1.3 Clinical Data Clinical data provide information about the patient's medical history, physical examination, and laboratory results, which can be used to make an initial diagnosis and monitor progression of the disease [42]. Clinical data can be divided into time-series data, EHR, and clinician notes. Time-series data refers to the collection of clinical data over time, such as repeated blood tests. These data provide information about the changes in the patient's condition and can be used to monitor the disease progression [43]. EHR consists of digital records of a patient's health information stored in a centralized database. They provide a comprehensive view of a patient's medical history, including past diagnoses, treatments, and laboratory test results. This information can be used to inform the diagnosis and treatment of cancer, as well as to monitor the disease progression over time [42]. Clinical notes are part of EHR and refer to the written records of a patient's medical history and examinations, as well as the observations and recommendations of the healthcare provider [43]. ### Taxonomy of Multimodal Learning We follow the taxonomy proposed by William _et al._[17]. The authors proposed a multimodal classification taxonomy with five main stages - preprocessing, feature extraction, data fusion, primary learner, and final classifier. The proposed taxonomy provides a descriptive and high-level set of terms that can be used to describe existing or future model architectures. #### 2.2.1 Pre-processing Pre-processing involves modifying the input data before being fed into the model for training to ensure that the input data is in a suitable format. Pre-processing may include data cleaning, normalization, class balancing, and augmentation. Data cleaning involves removing unwanted noise or bias from the data, including errors or missing data points [44]. Normalization is the process of scaling the input data so that it falls within a specific range, which is useful to ensure that each modality contributes equally to the model learning [45]. 
Class balancing is an important task in cases where the data is imbalanced, i.e., one class may have a significantly larger number of data points than another. In such cases, the model may be biased towards the dominant class, resulting in lower accuracy for the minority class. Data augmentation is a technique used to artificially increase the size of the dataset by generating new samples based on the existing data. This can improve the model's robustness and ability to generalize [44]. #### 2.2.2 Feature Extraction In MML different modalities may have different features or representations, and it is necessary to extract relevant features that can be combined to improve the model's learning. Several techniques can be used to generate representations (also referred to as _embeddings_) for each data modality. Feature extraction can be done using manual feature engineering or automatic feature extraction. Text encoding and visual feature extraction methods using ML models are the subcategories of automated extraction techniques. Feature engineering involves designing features relevant to the task and extracting them from the input data. This can be time-consuming but may allow the model to incorporate prior knowledge about the problem. Text encoding involves transforming textual data into a numerical representation, which can be used as input to an ML model [46]. This can be done using techniques such as bag-of-words, word embeddings, or topic models [47, 48, 49]. ML model-generated filters are a feature extraction technique particularly useful for image and video data [50]. These filters are learned automatically by DNNs during the training process and can capture relevant features such as edges, textures, or shapes [50]. #### 2.2.3 Data Fusion Data fusion combines raw features, extracted features, or class prediction vectors from multiple modalities to create a single data representation. Fusion enables the model to use the complementary information provided by each modality and improve its learning. Data fusion can be done using early, late, or intermediate fusion. Detailed discussion on these fusion stages is presented in Section 2.3. The choice of fusion technique depends on the characteristics of the data and the specific problem being addressed [51]. #### 2.2.4 Primary Learner The primary learner stage involves training the model on the pre-processed, feature-extracted data. Depending on the specific problem and data, the primary learner can be implemented using various techniques, including DNNs, decision trees, or support vector machines (SVMs). DNNs are a popular choice for primary learners in MML because they can automatically learn high-level representations from the input data and have demonstrated state-of-the-art performance in many applications. CNNs are often used for image and video data, while recurrent neural networks (RNNs) and Transformers are commonly used for text and sequential data. Decision trees and SVMs are also used for primary learning in MML, particularly for problems with a smaller amount of data or when interpretability is crucial. The primary learner can be implemented independently for each modality or shared between modalities, depending on the specific problem and data. Sharing the primary learner can help the model to exploit the correlations between different modalities and improve the overall performance of the model. 
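To make these stages concrete, the following PyTorch sketch — a minimal illustration of our own, with placeholder dimensions and module names rather than any surveyed system — wires two modality-specific primary learners to a shallow classifier head through feature concatenation:

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Toy two-modality pipeline: per-modality primary learners, fusion by
    concatenation, and a shallow classifier head (dimensions are placeholders)."""

    def __init__(self, img_dim=512, omics_dim=2000, hidden=128, n_classes=2):
        super().__init__()
        # Primary learner for pre-extracted imaging features (e.g., CNN embeddings)
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        # Primary learner for a molecular profile (e.g., a gene-expression vector)
        self.omics_encoder = nn.Sequential(nn.Linear(omics_dim, hidden), nn.ReLU())
        # Final classifier operating on the fused representation
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_x, omics_x):
        fused = torch.cat([self.img_encoder(img_x),
                           self.omics_encoder(omics_x)], dim=-1)  # fusion step
        return self.head(fused)

model = MultimodalClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 2000))  # a batch of 4 patients
print(logits.shape)  # torch.Size([4, 2])
```

Moving the concatenation before the encoders would correspond to early fusion, while averaging per-modality class scores after separate classifiers would correspond to late fusion, as discussed in Section 2.3.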
#### 2.2.5 Final Classifier

The final stage of MML is the classifier, which produces category labels or class scores and can be trained on the output of the primary learner or on the fused data. This stage can be anything from a simple shallow neural network or decision tree to an ensemble model [17]. Ensemble methods, such as stacking or boosting, are often used to improve the performance of the final classifier. Stacking involves training multiple models and then combining their predictions at the output stage, while boosting involves iteratively training weak learners and adjusting their weights based on the errors made by previous learners [52].

### Data Fusion Stages

Multimodal data fusion can be performed at different levels, including early (feature level), intermediate (model level), and late (decision level). Fig. 4 illustrates the three stages of data fusion in MML. At the feature level, the input data from multiple modalities are combined to form a single feature vector, which is then used as input to an ML model. At the model level, multiple models are trained independently on each modality, and their outputs are combined to make a final decision. Finally, at the decision level, the decisions made by individual models are combined to make a final decision. Each fusion stage has its own advantages and challenges, and the choice of fusion stage depends on the characteristics of the data and the problem at hand. In the following sections, we will explore each fusion stage in more detail and discuss their applications and limitations.

The **early fusion** approach involves merging the features extracted from different modalities before the model is trained. In this method, the feature vectors of the different modalities are combined into a single vector, which is used as the input to the ML model [17]. This approach can be used when the modalities have complementary information and can be easily aligned, such as combining visual and audio features in a video analysis application. The process of early fusion involves extracting relevant features from each modality and combining them into a single feature vector, which can then be used as input to an ML model. The main challenge with early fusion is ensuring that the features extracted from different modalities are compatible and provide complementary information.

The **intermediate fusion** approach involves training separate models on each modality and then combining the outputs of these models to make a final prediction. In this approach, the models are trained independently on the modalities, and the resulting output is fused at the model level [17]. This approach is suitable when the data modalities are independent of each other and cannot be easily combined at the feature level. The main challenge with intermediate fusion is selecting an appropriate method for combining the output of the separate models. This can be done by either averaging the output of the separate models or using a weighted sum.

Figure 4: Taxonomy, stages, and techniques of multimodal data fusion presented in this survey. _Early_, _late_, and _cross-modality_ fusion methods integrate individual modality features _before_, _after_, or _at_ the primary learning step, respectively.

In **late fusion**, the output of each model is used to make a separate decision, which is then combined to make a final decision.
This approach is suitable when the modalities provide complementary information but are not necessarily independent of each other. The main challenge with late fusion is selecting an appropriate method for combining the predictions by individual models. This can be done using majority voting, weighted voting, or employing other ML models. ### Neural Network Architectures on Oncology Data ML and DL techniques are pervasive and effective in oncology research [53, 54]. CNNs have been used to identify mitosis in breast cancer histology images [55]. Cahall et al. [56] used Inception modules on U-Net for brain tumor segmentation. Syed et al. [57] used a Random Forest classifier to fuse radiology image representations learned from singular value decomposition (SVD) method with the textual annotation representation learned from the fastText algorithm for prostate and lung cancer patients. Liu et al. [58] proposed a hybrid DL framework for combining breast cancer patients' genomic and pathology data using fully-connected (FC) network for genomic data, CNN for radiology data, and Simulated Annealing algorithm for late fusion. Multiview multimodal network (MVMM-Net) [59] combined two different modalities (low-energy and dual-energy subtracted) from contrast-enhanced spectral mammography images, each learned through CNN and late-fusion through FC network in breast cancer detection task. Yap et al. [60] used a late-fusion method for a multimodal skin lesion classification task. The ResNet50 networks were used for generating image representations of macroscopic and dermatoscopic images, which were fused with the random forest-based clinical data representation through the FC network. An award-winning work [61] on brain tumor grade classification followed a late-fusion technique where CNNs were used for learning on the radiology and pathology images separately and later concatenated and classified through a logistic regression algorithm. The single-cell unimodal data alignment is one technique in multimodal learning. Self-organizing maps to link scATAC-seq (SOMatic) [62] model combined ATAC-seq regions with RNA-seq genes using self-organizing maps. Single-Cell data Integration via Matching (SCIM) [63] matched cells in multiple datasets in low-dimensional latent space uses autoencoder. Using variational autoencoders, graph-linked unified embedding (GLUE) [64] model learns regulatory interactions across omics layers and aligns the cell. These aforementioned methods are unable to incorporate high-order interactions among cells or different modalities. Single-cell data integration using multiple modalities is mostly based on auto-encoders (scDART [65], Cross-modal Autoencoders [66], Mutual Information Learning for Integration of Single Cell Omics Data (SMILE) [67], scMM [68], single-cell multimodal variational autoencoder (scMVAE) [69], DCCA [70], BABEL [71]). ### Supervised, Weakly-Supervised, and Unsupervised Training ML-based methods are extensively used in the cross-domain setting to achieve better predictive power through multimodal data integration [10]. However, most of the approaches employ ad-hoc mechanisms specific to modalities, data types, learning models, or specific task-at-hand. Multimodal data integration is not a single, clearly defined task because of the variety of modalities and biological problems [72]. Existing training methods can be roughly classified into two categories. One is the integrated analysis of several unimodal datasets, and the other is the analysis of multimodal datasets. 
Both of these data analysis categories involve three learning regimes, namely, supervised, weakly-supervised, and unsupervised learning. _Supervised_ learning involves mapping the input samples to pre-annotated labels or ground truth. The input samples can be hand-engineered features and classical ML models are usually employed, such as decision trees, random forests, support vector machines, multi-layer perceptrons, and the ensemble of these models [73]. The CNN family of DL models uses raw data to learn complex features instead of hand-engineered features. The _weakly supervised_ learning methods make use of the batch or sub-population annotations instead of the sample annotations for predictions, such as patient survival outcome, tumor grades, identification of small tumor regions, etc. Weakly supervised learning involves multiple-instance learning models, transformers and their variants, and GNNs. _Unsupervised_ learning methods do not rely on annotations to discover patterns, groupings, or structures in the input data. These methods may use a subset of embedded features in the input data as labels for predictions (self-supervised learning) or cluster the input data by identifying the commonalities among data points (fully-unsupervised learning). ### Challenges with Multimodal Data While multi-modal learning has shown significant promise, there are many challenges owing to the inductive biases of the ML models [74]. #### 2.6.1 Alignment One major challenge is identifying cross-modal connections between all elements of multiple modalities [75]. This requires explicit alignment of the data and an understanding of how each modality contributes to the overall task. Implicit alignment, where connections between modalities are not immediately clear, can also pose a challenge. Additionally, finding effective ways to represent the data from multiple modalities in a useful way for learning can be difficult. #### 2.6.2 Transference Transference in MML aims to transfer knowledge between modalities and their representations to improve the performance of a model trained on a primary modality [75]. This is particularly important when the primary modality has limited resources, lacks annotated data, noisy input, or unreliable labels. Cross-modal transfer is a popular approach to achieve transference in MML. It involves using knowledge learned from a secondary modality to improve the performance of a model trained on a primary modality. One way to do this is to fine-tune a pre-trained model on a secondary modality, which has been trained on a large dataset [76]. Alternatively, one can use the predictions or representations from a secondary modality as additional inputs to a model trained on a primary modality. Multimodal co-learning is another approach to transference [77]. This approach involves training a model on multiple modalities simultaneously, allowing the model to learn common representations and patterns across modalities. Model induction is a third approach to transference [78]. It involves using a model trained on a secondary modality to guide a model's training on a primary modality. This can be achieved by using the predictions or representations from the secondary modality as a form of supervision for the primary modality. ## 3 GNNs in Multimodal Learning Graph structures are commonly used to represent the relational connectivity of any system that has interacting entities [12, 79, 80, 81]. 
Graphs have been used to study brain networks [82], financial networks [83], driving maps [84], product recommendations [85], and the structure of DNNs themselves [86]. GNNs are a type of neural network specifically designed to process data represented as a graph [87], which makes them well-suited for analyzing complex and connected data, such as multimodal oncology data. The data can include, for example, imaging data, molecular data, and clinical data, which are all interconnected and can be represented as a graph. Researchers have developed various GNN-based methods for analyzing multimodal oncology data for tasks such as disease diagnosis and prognosis, drug discovery, and patient stratification [88]. GNNs are gaining popularity in the ML community, as evident from Fig. 5.

Figure 5: Number of publications involving deep learning, graph neural networks (GNNs), and GNNs in the medical domain in the period 2014–2023 [4].

### The Graph Data

A graph is represented as \(G=(V,E)\), having node set \(V=\{v_{1},v_{2},...,v_{n}\}\), where node \(v\) has feature vector \(\mathbf{x}_{v}\), and edge set \(E=\{(v_{i},v_{j})\mid v_{i},v_{j}\in V\}\). The neighborhood of node \(v\) is defined as \(N(v)=\{u\mid(u,v)\in E\}\). Here, we discuss the various attributes of graph data and the usual tasks undertaken on graph data.

#### 3.1.1 Graph Types

As illustrated in Fig. 6, the common types of graphs include undirected and directed, homogeneous and heterogeneous, static and dynamic, and unattributed and attributed graphs. Undirected graphs comprise undirected edges, that is, the direction of the relation between any pair of nodes is not important. In directed graphs, the nodes have directional relationships. Homogeneous graphs have the same type of nodes, whereas heterogeneous graphs have different types of nodes and related attributes within a single graph [89, 90]. Static graphs do not change over time with respect to the existence of edges and nodes, whereas dynamic graphs change over time, resulting in changes to structure, attributes, and node relationships. Unattributed graphs have unweighted edges, indicating that the weight value for all edges in a graph is the same, i.e., 1 if present, 0 if absent. Attributed graphs have different edge weights that capture the strength of relational importance [87, 91].

Figure 6: The commonly occurring graph types: undirected and directed, homogeneous and heterogeneous, dynamic and static, and attributed (edges) and unattributed.

#### 3.1.2 Tasks on Graphs

In Fig. 7, we present three major types of tasks defined on graphs, including:

* Node-level tasks. These may include node classification, regression, clustering, attribution, and generation.
* Edge-level tasks. Edge classification and edge prediction (presence or absence) are two common edge-level tasks.
* Graph-level tasks. These tasks involve predictions at the graph level, such as graph classification and graph generation.
Graph embedding methods represent a graph (its constituent nodes and edges) with low-dimensional vectors (graph embeddings and node embeddings), preserving the structural properties of the graph. The learning tasks in graph embedding usually involve dimensionality reduction through linear (principal component analysis, linear discriminant analysis, locality preserving projections), kernel (nonlinear mapping), or tensor (higher-order structures) methods [92]. Probabilistic graphical methods use graph data to represent a probability distribution, where nodes are considered random variables and edges depict the probability relations among nodes [92]. Bayesian and Markov networks are the two categories of probabilistic models. Variational inference, sampling inference, variable elimination, and other inference algorithms are used in probabilistic methods [92].

Figure 7: Various tasks defined on graph data, including node-level, link-level, and graph-level tasks.

#### 3.2.2 Deep Methods - GNNs

These include DL techniques defined for graphs, where information aggregated from the neighborhood is fused into a node's representation. Traditional ML and DL methods process data represented in Euclidean space. Models such as CNNs and their variants have shown remarkable success in processing data representations in Euclidean space; however, they fail to perform well when faced with non-Euclidean or relational datasets. Compared to CNNs, where the locality of the nodes in the input is fixed, GNNs assume no canonical ordering of the neighborhood of a node. Representing data as graphs can enable capturing and encoding the relationships among entities of the samples [94]. GNNs often employ a message-passing mechanism in which a node's representation is derived from its neighbors' representations via a recursive computation. The message passing for a GNN is given as: \[\mathbf{h}_{v}^{(l+1)}=\sigma\left(W_{l}\sum_{u\in N(v)}\frac{\mathbf{h}_{u}^{(l)}}{|N(v)|}+B_{l}\mathbf{h}_{v}^{(l)}\right) \tag{1}\] where \(h_{v}^{(l+1)}\) is the updated embedding of node \(v\) after layer \(l+1\), \(\sigma\) is the non-linear activation function (such as the rectified linear unit or ReLU), and \(h_{u}^{(l)}\) and \(h_{v}^{(l)}\) represent the embeddings of nodes \(u\) and \(v\) at layer \(l\). \(W_{l}\) and \(B_{l}\) are the trainable weight matrices for neighborhood aggregation and (self) hidden vector transformation, respectively. The node embedding can encode high-order structural information through multiple aggregation layers. GNNs smooth the features by aggregating neighbors' embeddings and filter the eigenvalues of the graph Laplacian, which provides an extra denoising mechanism [96].

Figure 8: Various categories of representation learning for graphs.

GNNs are excellent at learning the given task for any permutation of the input data, as depicted in Fig. 9. GNNs are composed of multiple permutation-equivariant and permutation-invariant functions. GNNs are an excellent choice for handling heterogeneous input data [97]. Furthermore, as described earlier, traditional ML models deal with Euclidean data [98]. The correlations in oncology data may not be captured in Euclidean space. However, the same oncology data may be highly correlated in non-Euclidean space [88]. Based on the differences in the information fusion and aggregation methodology, GNN-based deep methods are classified into the following:

* _Recurrent GNNs_: RecGNNs are built on top of the standard Recurrent Neural Network (RNN) combined with the GNN.
Recurrent GNNs can operate on graphs having variable sizes and topologies. The recurrent component of the RecGNN captures temporal dependencies and learns latent states over time, whereas the GNN component captures the local structure. The information fusion process is repeated a fixed number of times until an equilibrium or the desired state is achieved [99, 100, 101]. The RecGNN employs the model given by: \[\mathbf{h}_{v}^{(l+1)}=\text{RecNN}\left(\mathbf{h}_{u}^{(l)},\mathbf{Msg}_{N(v)}^{(l)}\right), \tag{2}\] where RecNN is any RNN, and \(Msg_{N(v)}^{(l)}\) is the neighborhood message passing at layer \(l\).

Figure 9: Image vs. graph convolutions. The canonical order of the input is important in CNNs, whereas in GNNs, the order of the input nodes is not important. This essentially makes CNNs a subset of GNNs [95].

* _Convolutional GNNs_: ConvGNNs undertake the convolution operation on graphs by aggregating neighboring nodes' embeddings through a stack of multiple layers. They use the symmetric and normalized summation of the neighborhood and self-loops for updating the node embeddings, given by: \[\mathbf{h}_{v}^{(l+1)}=\sigma\left(W_{l}\sum_{u\in N(v)\cup v}\frac{\mathbf{h}_{u}}{\sqrt{|N(v)||N(u)|}}\right). \tag{3}\] The ConvGNN can be spatial or spectral, depending on the type of convolution it implements. Convolution in spatial ConvGNNs involves taking a weighted average of the neighboring vertices. Examples of spatial ConvGNNs include GraphSAGE [99], the Message Passing Neural Network (MPNN) [102], the Graph Attention Network (GAT) [103], and Edge-Conditioned Convolution (ECC) [104]. Spectral ConvGNNs operate in the spectral domain by using the eigendecomposition of the graph Laplacian matrix. The convolution operation is performed on the eigenvalues, which can be high-dimensional. Popular spectral ConvGNNs are ChebNet [105] and the Graph Convolutional Network (GCN) [106]. An interesting aspect of these approaches is their representational containment, which is defined as: \[\text{convolution}\subseteq\text{attention}\subseteq\text{message passing}.\]
* _Graph Auto-Encoder (GAE) Networks_: GAEs are unsupervised graph learning networks for dimensionality reduction, anomaly detection, and graph generation. They are built on top of standard auto-encoders to work with graph data. The encoder component of the GAE maps the input graph to a low-dimensional latent space, while the decoder component maps the latent space back to the original graph [107, 108, 109].
* _Graph Adversarial Networks (GraphANs)_: These can learn to generate new graphs with properties similar to the input data. Based on Generative Adversarial Networks, GraphANs are designed to work with graph-structured data. The generator component of the GraphAN maps a random noise vector to a new graph, while the discriminator component tries to distinguish between the generated and the actual input. The generator generates graphs to fool the discriminator, while the discriminator tries to accurately classify the given graph as real or generated.
* _Other GNNs_: Other categories of GNNs may include scalable GNNs [110], dynamic GNNs [111], hypergraph GNNs [112], heterogeneous GNNs [113, 114], and many others [115].

### Unimodal Oncology Learning using GNNs

Here, we explore how GNNs have been used to analyze oncology data using a single modality, where single modality refers to the cancer data resolution and type, such as pathology, radiology, and molecular data.
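Before turning to modality-specific applications, the neighborhood aggregation of Eq. (1) can be made concrete in a few lines. The NumPy sketch below — our own toy illustration with random weights, not code from any surveyed framework — applies one such layer to a four-node graph whose nodes could stand for cells, image patches, or patients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph on 4 nodes
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
H = rng.normal(size=(4, 8))                 # node embeddings h_v^{(l)}, d = 8

def message_passing_layer(A, H, W, B):
    """One update per Eq. (1): average neighbor embeddings, transform them
    with W, add the transformed self-embedding, and apply a ReLU."""
    deg = A.sum(axis=1, keepdims=True)               # |N(v)| for every node v
    neigh_mean = (A @ H) / deg                       # sum over N(v), divided by |N(v)|
    return np.maximum(neigh_mean @ W + H @ B, 0.0)   # sigma = ReLU

W = rng.normal(size=(8, 8))   # stand-in for the trainable W_l
B = rng.normal(size=(8, 8))   # stand-in for the trainable B_l
H_next = message_passing_layer(A, H, W, B)
print(H_next.shape)           # (4, 8): updated embeddings h_v^{(l+1)}
```

Stacking several such layers lets each node's embedding absorb information from increasingly distant neighborhoods, which is how the high-order structural information mentioned above is encoded.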
#### 3.3.1 Pathology

CNN-based neural networks have been extensively used to learn features from digital pathology data. However, such networks fail to capture the global contextual information that is important in the phenotypical and structural micro- and macro-environment of the tissue [117]. Graph neural networks can capture larger spatial and contextual information compared to traditional DL methods. To use histology data in GNNs, cells, tissue regions, or image patches are depicted as nodes, and the relations and interactions among these nodes are represented as (un)weighted edges. Usually, a graph of the patient's histology slide is used along with a patient-level label for training a GNN, as illustrated in Fig. 10. Histographs [118] used breast cancer histology data to distinguish cancerous and non-cancerous images. A pre-trained VGG-UNet was used for nuclei detection, micro-features of the nuclei were used as node features, and the Euclidean distance among nuclei was incorporated as edge features. The resulting cell-graphs were used to train the GCN-based robust spatial filtering (RSF) model and showed superior performance compared to CNN-based classification. Sureka et al. [119] studied breast and prostate cancer histology slides using GCN-based RSF with an attention mechanism. Similarly, Wang et al. [120] analyzed grade classification in tissue micro-arrays of prostate cancer using a weakly-supervised technique on a variant of GraphSAGE with self-attention pooling (SAGPool) [121]. CGC-Net [122] generated cell-graphs from histology slides of colorectal cancer tissues, segmented with CIA-Net [123], and the resulting cell-graphs were classified using the adaptive GraphSAGE network. Cell-Graph Signature (\(CG_{signature}\)) [124] predicts patient survival in gastric cancer using cell-graphs of multiplexed immunohistochemistry (mIHC) images processed through two types of GNNs (GCNs and GINs) with two types of pooling (SAGPool, Top-KPool).

Figure 10: Data processing pipeline for histopathology images using GNNs [116].

Besides the above-mentioned cell-graph methods, an elaborate review of GNN-based tissue-graph and patch-graph methods implemented on unimodal pathology cancer data is given in [117]. If considered individually, cell-graphs and tissue-graphs are not ideal resolution levels for studying the pathological structures of the histology slide. A combination of the multilevel information in histology slides can help understand the intrinsic features of the disease, as we will see in the multimodal Sections 3.4, 3.5, and 3.6 below.

#### 3.3.2 Radiology

Recently, GNNs have been used on radiology-based cancer data for segmentation, classification, and prediction tasks, especially on X-rays, mammograms, MRI, PET, and CT scans. Fig. 11 illustrates a general pipeline for using radiology-based data to train GNNs. Here we give a non-exhaustive review of GNN-based works on radiological oncology data as a single-modality input. Mo et al. [126] proposed a framework that improved liver cancer lesion segmentation in MRI-T1WI scans through guided learning of MRI-T2WI modality priors. Learned embeddings from fully convolutional networks on separate MRI modalities are projected into the graph domain for learning by GCNs through a co-attention mechanism and are finally re-projected to obtain the refined segmentation. Radiologists usually review radiology images by zooming into the regions of interest (ROIs) on high-resolution monitors.
Du et al. [127] used a hierarchical GNN framework to automatically zoom into the abnormal lesion region of mammograms and classify breast cancer. A pre-trained CNN model extracts image features, whereas a GAT model is used to classify the nodes, deciding whether or not to zoom in based on whether the lesion is benign or malignant. UNet-GNN [128], a model designed by replacing the deepest-level convolutional layers of UNet with a GCN module, was able to perform superior segmentation on CT images of lung cancer patients. Based on the established knowledge that lymph nodes (LNs) are connected through the lymphatic system and that LN cancer cells spread along certain pathways, Chao et al. [129] proposed a lymph node gross tumor volume learning framework. The framework was able to delineate the LN appearance as well as the inter-LN relationships. The end-to-end learning framework was superior to the state-of-the-art on an esophageal cancer radiotherapy dataset. Tian et al. [130] suggested interactive segmentation of MRI scans of prostate cancer patients through a combination of a CNN and two GCNs. The CNN model outputs a segmentation feature map on the MRI input, and the GCN takes these features to predict the prostate contour. Through user interactions, the interactive GCN corrects the contour points by combining spline curves on the corrected points. Saueressig et al. [131] used GNNs to segment brain tumors in 3D MRI images. The 3D image, formed by stacking different modalities of MRI (T1, T2, T1-CE, FLAIR), is converted into a graph where each voxel represents a node. The supervoxel graph is fed through a GNN for predicting node labels. The authors experimented with many variants of GNNs and reported that GraphSAGE-pool was best for segmenting brain tumors.

Figure 11: Graph processing pipeline on radiology data. Adapted from [125].

Besides radiology, the parallel field of radiomics has recently gained traction. Radiomics is the automated extraction of quantitative features from radiology scans as sub-visual signals or representations. A comprehensive survey of radiomics and radiogenomic analysis of brain tumors is presented in [125]. A detailed survey on the use of GNNs for both oncology and non-oncology data is presented in [132].

#### 3.3.3 Molecular Data

Graphs are a natural choice for representing molecular data, whether omic-centric (DNA, RNA, or proteins) or single-cell centric. In the graph representation learning regime, the individual modalities are processed separately to generate graph representations that are then processed through GNNs, followed by a classifier that predicts the downstream task, as illustrated in Fig. 12. For example, one method of representing proteins as graphs is to depict each amino acid residue in the protein as a node and the relationship between residues as an edge [134].

Figure 12: Graph processing pipeline on non-imagery data. Adapted from [133].
Graph-based Attention Model (GRAM) [137] handles the data inefficiency by supplementing electronic health records with hierarchical knowledge in the medical ontology. A few recent works have applied GNNs to single-cell data. scGCN [138] is a knowledge transfer framework in single-cell omics data such as mRNA or DNA. scGNN [139] processed cell-cell relations through GNNs for the task of missing-data imputation and cell clustering on single-cell RNA sequencing (scRNA-seq) data. Despite its success in single-modality data, there are few efforts on applying GNNs to multimodal single-cell data. ### Multimodal Pre-learning Fusion As illustrated in Fig. 4, the first and most primitive form of multimodal integrative learning is the pre-learning fusion. This approach involves merging the features extracted from individual modalities of data and then training the multimodal primary learner model on the fused representations. In the context of GNNs being the primary learning model, the extraction step of individual modality representations can be hand-engineered, as in dimensionality reduction techniques, or representations learned by DL models such as CNNs, Transformers, etc. Cui et al. [140] proposed a GNN-based early fusion framework to learn latent representations from radiological and clinical modalities for Lymph node metastasis (LNM) prediction in esophageal squamous cell carcinoma (ESCC). The extracted features from the two modalities using UNet and CNN-based encoders were fused together with category-wise attention as node representation. The message passing from conventional GAT and correlation-based GAT learned the neighborhood weights. The attention attributes were used to update the final node features before classification by a 3-layer fully connected (FC) network. For Autism spectrum disorder, Alzheimer's disease, and ocular diseases, a multimodal learning framework called Edge-Variational GCN (EV-GCN) [141] fuses the radiology features extracted from fMRI images with clinical feature vectors for each patient. An MLP-based pairwise association encoder is used to fuse the input feature vectors and to generate the edge weights of the population graph. The partially labeled population graph is then processed through GCN layers to generate the diagnostic graph of patients. Another technique of fusing the radiology and phenotypic features is to generate population graph with node features extracted from radiology images, while the edge weights are based on the similarity function of the phenotypic features among patients and using GCNs to undertake the downstream task [142]. ### Multimodal Cross-learning Fusion Cross-multimodal learning involves intermediate fusion and/or cross-links among the models being trained on individual modalities, as illustrated in Fig. 4. For the case of GNNs, we include the hierarchical learning mechanisms using GNNs as the cross-learning multimodal fusion method. The hierarchical frameworks involve learning representations for one modality and using the learned latent embeddings in tandem with other data modalities sequentially to get the final desired low-dimensional representations. Lian et al. [143] used a sequential learning framework where tumor features learned from CT images using the ViT model were used as node features of the patient population graph for subsequent processing by the GraphSAGE model. 
The hierarchical learning from radiological and clinical data using Transformer-GNN outperformed the ResNet-Graph framework in survival prediction of early-stage NSCLC. scMoGNN [144] is the first method to apply GNNs in the field of multimodal single-cell data integration and build a cross-learning fusion-based GNN framework. scMoGNN framework officially won first place in the overall ranking of the modality prediction task at the NeurIPS 2021 Competition. scMoGNN used paired data to generate cell-feature graphs, reduced the data dimensionality, and performed well on various tasks. Hierarchical cell-to-tissue-graph network (HACT-Net) combines the low-level cell-graph features with the high-level tissue-graph features through two hierarchical GINs on breast cancer multi-class prediction [145]. Data imputation is a method of populating the missing values or false zero counts in single-cell data, mostly done using DL autoencoders (AE) architecture. However, few multimodal cross-learning GNN-based algorithms incorporate data imputation in their processing pipeline. scGNN [139] uses imputation AE and graph AE in an iterative manner for imputation. Another method GraphSCI [146], uses GCN with AE to impute the single-cell RNA-seq data using the cross-learning fusion between the GCN and the AE networks. Clustering is a method of characterizing cell types within a tissue sample. Graph-SCC [147] clustered cells based on scRNA-seq data through self-supervised cross-learning between GCN and a denoising AE network. Jiang et al. [148] proposed a GCN-based model to predict synergistic drug combinations in particular cell lines in six cancer types. The heterogeneous graphs, with features from the drug-drug, drug-protein, and protein-protein combinations, were processed through a 4-layer GCN that showed superior performance in predicting cell line-specific synergistic drug combinations. Recently a multilayer GNN framework, Explainable Multilayer GNN (EMGNN), has been proposed for cancer gene prediction task using multi-omics data from 16 different cancer types [149]. ### Multimodal Post-learning The most sought-after multimodal fusion technique in the existing literature is the post-learning fusion method, where the individual data views are first processed by the primary learning model and later fused for the downstream predictive task, as illustrated in Fig. 4. Post-learning fusion approaches have performed significantly better than the early fusion rules [150]. Within the post-learning fusion paradigm, when input data dimensionality is low, the handcrafted features perform better than the deep features, and vice versa [150]. Many interesting GNNs-based works involving the post-learning fusion mechanism have recently been published. Fout et al. [134] used ConvGNNs for the task of predicting protein interactions. They used graph convolutions to learn latent residue representations for receptor protein and ligand protein and merged the two embeddings by concatenating them and passing through a dense layer for classification. Decagon [151] used a multimodal approach on GCNs using proteins and drugs interactions to predict exact side effects as a multi-relational link prediction task. Drug-target affinity (DTA) [152] experimented with four different flavors of GNNs (GCN, GAT, GIN, GAT-GCN) along with a CNN to fuse together molecular embeddings and proteins sequences for predicting drug-target affinity. 
PathomicFusion [116] combined morphological features extracted from image patches, cell-graph features from cell-graphs of histology images, and genomic features for survival prediction on glioma and clear cell renal cell carcinoma. The image features are extracted using CNNs, the cell-graph features using GraphSAGE-based GCNs, and the genomic features using a feed-forward network. Shi et al. [153] proposed a late-fusion technique for early-stage cervical cancer screening. CNNs were used to extract features from cervical cell histology images, followed by K-means clustering to generate graphs, which are processed through a two-layer GCN. The resultant features are fused through a linear layer to classify the cervical cells. BDR-CNN-GCN (batch-normalized, dropout, rank-based pooling CNN-GCN) [154] used the same mammographic images to extract image-level features with a CNN and relation-aware features with a GCN. The two feature sets are fused using a dot product followed by a trainable linear projection for breast cancer classification. For molecular-level tasks in oncology, especially under the umbrella of multi-omics data, many GNN-based frameworks have been proposed recently. The molecular omics network (MOOMIN) [155] is a multimodal heterogeneous GNN to predict oncology drug combinations. The molecular structure, protein features, and cell lines are processed through feed-forward and GCN-based encoders, and the resultant representations are fused using a bipartite drug-protein interaction graph for the drug combination task. Multi-omics graph convolutional networks (MOGONET) [133] used a late fusion technique for the biomedical classification of four different diseases, including three cancer types: breast, kidney, and glioma. The hand-engineered, pre-processed multi-omics features (mRNA expression, DNA methylation, and miRNA expression data) were processed through separate GCNs. The learned representations were tensor-fused and fed to a GAN-based view correlation discovery network (VCDN) for integrative classification. MOGONET identified important disease-specific biomarkers related to Alzheimer's disease and breast cancer. Leng et al. [156] extended MOGONET to benchmark three multi-omics datasets on two different tasks using sixteen DL networks and concluded that a GAT-based GNN had the best classification performance. The Multi-Omics Graph Contrastive Learner (MOGCL) [157] uses graph structure and contrastive learning information to generate representations for improved downstream classification on a breast cancer multi-omics dataset. A GCN is pre-trained on each omics modality using the GRACE [158] framework. The output representations from all modality-specific GCNs are concatenated and passed through a linear layer for the downstream task of breast invasive carcinoma PAM50 subtype classification. Using a fusion mechanism similar to MOGCL, Park et al. [159] developed a GNN-based multi-omics model that integrated mRNA expression, DNA methylation, and DNA sequencing data for NSCLC diagnosis. Besides identifying new biomarkers, the pathway analysis showed that lung adenocarcinoma and lung squamous cell carcinoma, the two major types of NSCLC, have both specific and common GO biological processes.

## 4 Transformers in Multi-modal Learning

Transformers are attention-based DNN models that originally gained widespread attention in NLP [11].
Transformers implement a scaled dot-product of the input with itself and can process various types of data in parallel, truly benefiting from the massive compute offered by modern GPU-based hardware [11]. Transformers can handle sequential data and learn long-range dependencies, making them well-suited for tasks such as language translation and language modeling. Unlike Recurrent Neural Networks (RNNs) and CNNs, Transformers use self-attention operations to weigh the importance of different input tokens (or embeddings) at each time step. This allows them to handle sequences of arbitrary length and to capture dependencies between input tokens that are far apart in the sequence [11]. Transformer-based models achieve state-of-the-art results on a wide range of NLP tasks, including machine translation, language modeling, and question answering, among many others [160]. More recently, Transformer models have also been applied to other modalities, such as images [161], audio [162], and time-series analysis [163], resulting in a new wave of multi-modal applications. One key advantage of Transformers in MML is their ability to handle input sequences of different modalities in a unified way, using the same self-attention mechanism, which processes the inputs as a fully connected graph [9]. This allows Transformers to capture complex dependencies between different modalities, such as the relationship between visual and textual information in image captioning or visual question-answering (VQA) tasks [164]. Transformers can be easily pre-trained on large amounts of data, using unsupervised or self-supervised learning techniques, and then fine-tuned for specific downstream tasks. This has led to the development of powerful pre-trained models, such as BERT [48], GPT [165], RoBERTa [49], CLIP [5], T5 [166], BART [167], BLOOM [168], ALIGN [169], SimVLM [170], Florence [171], Flamingo [172], and CoCa [173], which have achieved impressive results on a wide range of NLP, CV, and multimodal tasks. Multimodal Transformers are a recent development in the field of MML, which extends the capabilities of traditional Transformers to handle multiple data modalities at once. The inter-modal dependencies of the multimodal inputs are captured by the self-attention mechanism (referred to as cross-attention), allowing the model to jointly reason about multiple modalities and extract rich data representations. There are various types of multimodal Transformers, such as the Unified Transformer (UniT) [174], which uses a shared encoder-decoder architecture, and the Multi-way Multimodal Transformer (MMT), which utilizes a multi-way tensor structure [175]. Other examples include CLIP [5], SimVLM [170], Florence [171], Flamingo [172], CoCa [173], and Perceiver IO [176].

### The Transformer Architecture

The original Transformer model (see Fig. 13) is composed of encoder and decoder blocks, each made up of several layers of self-attention and feed-forward neural networks. The encoder takes the input sequence and generates a sequence of hidden representations, which are then fed to the decoder. The decoder generates the output sequence by attending to the encoder's hidden representations and the previously generated tokens (i.e., auto-regressively). The self-attention operation (or equivalently, the scaled dot-product) is a crucial component of the Transformer model. It allows the model to determine the significance of each element in the input sequence with respect to the whole input.
Self-attention operates by computing a weighted sum of the input sequence's hidden representations, where the weights are determined by the dot product between the _query_ vector and the _key_ vector, followed by a scaling operation to stabilize the gradients. The resulting weighted sum is multiplied by a _value_ vector to obtain the output of the self-attention operation.

Figure 13: The original Transformer architecture as proposed in [11]. A Transformer model can have multiple blocks of encoders and decoders based on the application and data type.

There has been a tremendous amount of work on various facets of the Transformer architecture. The readers are referred to relevant review papers [9, 177, 178, 179].
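To make the preceding description concrete, the following is a minimal NumPy sketch of single-head scaled dot-product self-attention; multi-head projections, masking, and positional encodings are omitted, and all shapes and weights are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # scaling stabilizes gradients
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over the key axis
    return w @ V                                   # weighted sum of value vectors

rng = np.random.default_rng(0)
n_tokens, d_model, d_head = 5, 16, 8
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```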
### Multimodal Transformers

Transformers can be viewed as a type of GNN [9]. Self-attention allows a Transformer model to process each input as a fully connected graph and attend to (or equivalently, learn from) the global patterns present in the input. This makes Transformers compatible with various data modalities by treating each token (or its embedding) as a node in the graph. To use Transformers for a data modality, we need to tokenize the input and select an embedding space for the tokens. Tokenization and the selection of embeddings are flexible and can be done at multiple granularity levels, such as using raw features, ML-extracted features, patches from the input image, or graph nodes. Table 1 summarizes some common practices related to tokenization and the selection of embeddings for various types of data found in oncology.

Table 1: Oncology data modalities and their respective tokenization and embedding selection techniques.

| Modalities | Tokenization | Token Embeddings |
| --- | --- | --- |
| Pathology | Patch | CNNs [180] |
| Radiology | Patch | CNNs [181] |
| Electronic Health Record | ICD Code | GNNs [182], ML models [183] |
| -Omics | Graphs; K-mers | GNNs [184]; ML models [185] |
| Clinical notes | Word | BERT [48], RoBERTa [49], BioBERT [186] |

One of the main challenges in developing multimodal Transformer models is handling cross-modal interactions between data modalities, which involves fusing or aligning information from different data modalities. The proposed solution architectures include _early fusion_ of data modalities, _cross-attention_, _hierarchical attention_, and _late fusion_. A detailed summary of these four architectures is presented in Fig. 14. Early fusion combines the data from all modalities into a single input at the model input. Late fusion merges the data modalities at the end of the model. The cross-attention fusion technique allows the model to attend to different modalities at different stages of data processing. Hierarchical attention involves multiple layers of self-attention, where each layer focuses on different aspects of the data. In the following, we present and compare the data processing steps for these four methods using two data modalities as an example. The same analysis can be extended to multiple modalities.

Figure 14: Four different strategies of fusing information from various data modalities in multimodal Transformers are presented.

#### 4.2.1 Early Fusion

Early fusion is the simplest way to combine data from multiple modalities. The data from different modalities are concatenated into a single input before being fed to the model. The concatenated input is then processed by the Transformer layers as a single entity. Mathematically, the concatenation operation can be represented as \(x_{cat}=[x_{1},x_{2}]\), where \(x_{1}\) and \(x_{2}\) are the inputs from two different data modalities, and \(x_{cat}\) is the concatenated input to the model. The main advantage of early fusion is its simplicity and efficiency. However, it assumes that all modalities have the same importance and are equally relevant for the task at hand [187]. This may not always be the case in practice, as some modalities may provide more informative or discriminative features than others [188].

#### 4.2.2 Cross-Attention Fusion

Cross-attention is a relatively more flexible approach to combining multiple modalities. The Transformer layers attend to different modalities at different stages of data processing. Cross-attention can be used to model the interactions between the data modalities and learn their joint representation. Cross-attention allows the model to selectively attend to different modalities based on their relevance to the task [189]. It can also capture complex interactions between the modalities that cannot be easily represented by simple concatenation [190].

#### 4.2.3 Hierarchical Fusion

Hierarchical attention is a complex approach to combining multiple modalities that has been used in various applications. For instance, DFTR (Depth-supervised Fusion Transformer for Salient Object Detection) employs hierarchical feature extraction to improve salient object detection performance by fusing low-level spatial features and high-level semantic features from different scales [191]. Additionally, Yang et al. [192] introduce a hierarchical approach to fine-grained classification using a fusion transformer. The network leverages coarse class predictions to enhance the fine class predictions in a stage-wise manner, resulting in improved accuracy [192]. Furthermore, the Hierarchical Multimodal Transformer (HMT) for video summarization can capture global dependencies and multi-hop relationships among video frames using a transformer model. The hierarchical structure of the HMT is designed according to the frame-shot-video structure of video, and multimodal features such as visual, audio, and textual features are incorporated to enhance its representation ability [193].

#### 4.2.4 Late Fusion

In late fusion, each data modality is processed independently by its own Transformer model. The outputs of the various branches are concatenated and passed through Transformer layers to learn the joint representation. Late fusion allows the model to capture the unique features of each modality while still learning their joint representation. One example of late fusion with Transformers was presented by Sun et al. [194], who propose a multi-modal adaptive fusion transformer network for estimating the levels of depression. Their proposed model extracts long-term temporal information from uni-modal audio and visual data independently and then fuses the weights at the end to learn the joint representation.
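To ground these strategies, the sketch below implements the cross-attention fusion of Section 4.2.2 in its simplest form: queries are derived from one modality's tokens while keys and values come from the other, so the first modality is re-expressed in terms of the second modality's content. Token counts, dimensions, and weights are illustrative assumptions.

```python
import numpy as np

def cross_attention(Xa, Xb, Wq, Wk, Wv):
    """Tokens of modality A attend to tokens of modality B:
    queries from A, keys and values from B."""
    Q, K, V = Xa @ Wq, Xb @ Wk, Xb @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V          # one fused token per A-token

rng = np.random.default_rng(1)
d = 16
Xa = rng.normal(size=(4, d))   # e.g., 4 image-patch tokens
Xb = rng.normal(size=(7, d))   # e.g., 7 clinical-text tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(cross_attention(Xa, Xb, Wq, Wk, Wv).shape)  # (4, 16)
```

By contrast, early fusion would simply concatenate the two token sequences before any attention is applied, and late fusion would run two independent models and only merge their outputs.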
### Transformers for Oncology Data Processing

Transformers have been successfully applied to various tasks in oncology, including cancer screening, diagnosis, prognosis, treatment selection, and prediction of various clinical variables of interest [195, 196, 143, 8, 180]. For instance, a Transformer-based model was used to predict the presence and grade of breast cancer using a combination of imaging and genomics data [8]. In another study, a Transformer model, called TransMIL, was proposed to process histopathology images [195]. The authors used self-attention to learn the correlation between multiple instances and used the Transformer architecture to classify the instances into different categories [195]. TransConv, a Transformer and convolution parallel network, was proposed recently for automatic brain tumor segmentation using MRI data [196]. The authors used Transformers to capture the global context information and the convolutional network to capture the local details [196]. Recently, a multimodal approach based on Transformers and GNNs was proposed for early-stage NSCLC prognostic prediction [143]. The authors combined Transformers and GNNs to learn representations of the patient's clinical and pathological features and to model the patient's physiological network. A multimodal co-attention Transformer was proposed for survival prediction using WSIs [180]. The method learned representations of both the images and the patient's genomic sequences and used the co-attention mechanism to learn the interactions between the two data modalities [180].

## 5 Challenges in Multimodal Learning (MML)

Learning from multimodal oncology data is a complex and rapidly growing field that presents both challenges and opportunities. With the increasing availability of large amounts of data from diverse sources such as EHRs, radiological and histopathology imaging, clinical notes, and -omics data, there is a need for novel, innovative, and effective multimodal learning strategies. With the help of new methods, we should be able to integrate and analyze these diverse datasets and improve cancer diagnosis, prognosis, treatment planning, and patient outcomes. Multimodal learning using GNNs and Transformers is a promising approach to address these challenges. These models enable the simultaneous analysis of multiple data modalities and can capture complex relationships between different data types that may not be possible with traditional ML models. However, the integration of these diverse, multi-scale, and multi-resolution modalities presents unique challenges. In this context, we present the major challenges and opportunities of multimodal learning in oncology settings that could unlock the full potential of this emerging field.

### Availability of Large Amounts of High-quality Data

The foremost challenge is the availability of large, high-quality datasets that can be used to train and evaluate multimodal models. DL models are traditionally trained on large datasets with enough samples for training, validation, and testing. Data of the scale of JFT-300M [198] and YFCC100M [199] are not available in the cancer domain. The amount of oncology data is significantly limited compared to the datasets that DL models are usually trained on. For example, the largest genomics data repository, the Gene Expression Omnibus (GEO) database, has approximately 1.1 million samples with the keyword 'cancer', compared to roughly 300 million images in JFT-300M [200, 201]. Annotating medical data, and especially oncology data, is a time-consuming and manual process that requires significant expertise in many different areas of medical sciences.
Given the heterogeneity of the disease, noise in the data recording modalities, and the varying backgrounds and training of medical professionals, a high level of inter- and intra-operator variability is not uncommon, leading to a lack of reproducibility and inconsistent clinical outcomes [16]. Getting access to accurately annotated datasets with a diverse range of modalities is still a challenge for ML researchers working in the oncology space.

### Data Registration and Alignment

Data alignment and registration refer to the process of combining and aligning data from different modalities in a way that enables the comparison and integration of information from different sources [202]. In multimodal oncology data, this process involves aligning data from multiple imaging modalities, such as CT, MRI, and PET, as well as data from other sources, such as histopathology (in the form of WSIs), genomics, transcriptomics, and clinical records. Data registration involves aligning the data modalities to a common reference frame or coordinate system and may involve identifying common landmarks or fiducial markers in the images or non-image data. For the effective integration of multiple modalities, the data from each modality must be properly registered and aligned. If the data is not registered correctly, it may be difficult to fuse the information from different modalities accurately.

### Pan-Cancer Generalization

It can be difficult to develop models that generalize across different cancer sites, as each cancer site may have its own unique characteristics and challenges. Furthermore, models trained on a specific modality, such as radiology images, will not perform well on other imaging modalities, such as histopathology slides. Because of the individuality of each cancer type and site, inter-cancer and inter-site variation makes DL models under-perform on the same task. To overcome this challenge, there is a need to devise mechanisms that improve the universality of ML models.

### Missing Data Samples and Modalities

Complete unavailability of one or more modalities, or the absence of samples in a modality, affects model learning, as most existing DL models cannot process the "missing information". This requirement, in turn, constrains the already insufficient size of datasets in oncology. Almost all publicly available oncology datasets have missing data for a large number of samples [201]. Various approaches for handling missing data samples and missing modalities in DL models are gradually gaining the attention of researchers [203]. However, this is still an open challenge in oncology ML [204].

### Imbalanced Data

Class imbalance refers to the situation where one class (e.g., cancer-negative) is significantly more frequent in the data than another class (e.g., cancer-positive). Traditional DL models may struggle to accurately classify underrepresented classes. Techniques such as data augmentation, ensemble learning, active learning, continual learning, and transfer learning are usually used to counter the class imbalance challenge [204]. Class imbalance is a common issue in oncology data [204]. The inherent sparsity and cost of acquisition of cancer data make it difficult to generate balanced datasets.

### Explainability and Trustworthiness of ML and DL Models

Explainability in ML, e.g., understanding how GNNs and Transformers make a specific decision, is still an area of active research. A survey of explainability methods in DL is given in [205].
GNNExplainer [206], GraphMask [207], PGMExplainer [208], SubgraphX [209], and XGNN [210] are among the methods that attempt to explain the decision-making process of GNNs. Explainability methods for Transformers have been analyzed in many recent works, including [211] and [212]. Explaining the decisions of multimodal models and establishing their trustworthiness is particularly challenging, as these models combine information from multiple modalities. GNNs and Transformers have exhibited vulnerability in the presence of adversarial attacks, noisy samples, and class-imbalanced inputs. Existing efforts and a roadmap to improve the trustworthiness of GNNs have been presented in a recent survey [213]. Nevertheless, the explainability and trustworthiness of multimodal GNNs and Transformers remain challenging and will require significant effort from the research community.

### Oversmoothing in GNNs

One particular challenge in dealing with multimodal data using GNNs is over-smoothing, which occurs when too many rounds of message passing (i.e., too many GNN layers) are applied, causing the node representations to become nearly indistinguishable [94]. This leads to a loss of information, a decrease in the model's ability to distinguish among graph entities, and a decrease in the model's generalization [214]. Oversmoothing prevents the use of deeper GNNs because node representations become smoother, even for nodes that are distinct and far from each other. Regularization techniques such as dropout, weight decay, and skip connections have been proposed, but building deep architectures that can scale and adapt to varying structural patterns of graphs is still an open challenge. The incorporation of higher-order structures, such as motifs and graphlets, into the learning mechanism has the potential to improve the expressive power of multimodal models.

### Scalability

One of the primary scalability challenges with GNNs and Transformers is the large number of parameters that these models require to learn effectively. These models often have millions or even billions of parameters, which require massive amounts of computing resources to train and apply to large-scale oncology datasets [48]. Additionally, as the size of the input data increases, the deployment-time memory requirements for these models can become prohibitively large. Existing multimodal architectures of GNNs and Transformers will need significant modification for clinical readiness [204].

### Modality Collapse

Modality collapse is a phenomenon that can occur in multimodal learning, where a model trained on multiple modalities of data becomes over-reliant on a single modality, to the point where it largely ignores or neglects the other modalities [215]. MML techniques may give importance to only the subset of modalities that were more helpful during model training. Such practice ignores the modalities that might actually be more informative for inference [215]. Some recent works explore the underlying phenomenon and the reasons for such behavior to improve our theoretical understanding of modality collapse [216]. Countermeasures to mitigate modality collapse and balance the modality competition are still being actively investigated by the ML research community.

### Curriculum Learning

Curriculum learning aims to mimic the learning process of humans to help improve the learning performance, generalization, and convergence of ML and DL models [217].
Curriculum learning on GNNs (also referred to as Graph CL) is a promising field that combines the strengths of graph representation learning with those of curriculum learning. However, issues such as the generalization, transferability, and evaluation of curriculum learning in GNNs remain open challenges [217].

### Dynamic and Temporal Data

Dynamic and temporal data refer to data that change over time [94]. Tumor surveillance is a well-known technique to study longitudinal cancer growth across multiple data modalities [20]. Spatio-temporal methods, including multiple instance learning, GNNs, and hybrids of multiple models, are popular choices for capturing complex changes in data relationships over time. Learning from multimodal dynamic data is an active area of research in oncology and has the potential to provide a more comprehensive understanding of disease progression and treatment response [218].

### Data Privacy and Federated Learning

With the increased concern for data privacy, especially in medical settings, MML techniques need to adapt to local data processing and remote federation. Federated learning [219] can help train large multimodal models using local data without sharing the data with other sites, while still benefiting from the data at those sites [220].

### Other Challenges

Pre-trained models can be useful for multimodal oncology learning; however, suitable pre-trained models may not be available for a given cancer type or task. Multimodal learning often requires extensive computational resources and time to train multimodal models on a variety of datasets and tasks. Robustness and failure detection [221] are critical aspects of MML, particularly in applications such as oncology. Uncertainty quantification techniques, such as Bayesian neural networks [222, 223], are still under-explored avenues in MML. Overall, MML presents a number of challenges that must be carefully considered in order to effectively combine information from multiple modalities. By addressing these challenges, it is possible to develop MML methods and models that surpass the performance offered by single-modality models.

## 6 Multimodal Oncology Data Sources

Recent efforts to build central archives and unify the different collections of diverse-modality oncology data have given rise to many publicly available, de-identified data portals [224, 200]. We have compiled a non-exhaustive list of datasets from data portals shared by the National Institutes of Health (NIH) and others. The purpose of this compilation is to provide ML researchers working in the cancer community with a unified source of data. The collection is available at [https://lab-rasool.github.io/pan-cancer-dataset-sources/](https://lab-rasool.github.io/pan-cancer-dataset-sources/), and is being updated periodically.

## 7 Conclusion

Research efforts in integrating data across a few modalities have already shown encouraging results. However, there is no unified framework available for scaling across all possible modalities of cancer data. The convergence of individual methodologies and data across varying scales may hold vital clues for creating a unified view of the disease that is more prognostic, predictive, and insightful than any individual view or modality. Efforts to beat cancer require synergistic analysis of heterogeneous data and the instantiation of scalable models. In this survey, we reviewed the multimodal learning task on oncology data.
The future resides in developing a deployment-ready, scalable deep learning framework with inherent uncertainty quantification, interpretability, and generalizability to integrate oncology data across multiple scales, modalities, and resolutions to accelerate cancer diagnosis, prognosis, therapeutic response prediction, and treatment planning.

## 8 Acknowledgments

This work was partly supported by the National Science Foundation awards ECCS-1903466, OAC-2008690, and OAC-2234836.
2305.02599
Transmissive Reconfigurable Intelligent Surface Transmitter Empowered Cognitive RSMA Networks
In this paper, we investigate the downlink transmission problem of a cognitive radio network (CRN) equipped with a novel transmissive reconfigurable intelligent surface (TRIS) transmitter. In order to achieve low power consumption and high-rate multi-stream communication, a time-modulated array (TMA) is implemented and users access the network using rate splitting multiple access (RSMA). With such a network framework, a multi-objective optimization problem with a joint design of the precoding matrix and the common stream rate is constructed to achieve higher energy efficiency (EE) and spectral efficiency (SE). Since the objective function is a non-convex fractional function, we propose a joint optimization algorithm based on difference-of-convex (DC) programming and successive convex approximation (SCA). Numerical results show that, under this framework, the proposed algorithm can considerably improve and balance the EE and SE.
Ziwei Liu, Wen Chen, Zhendong Li, Jinhong Yuan, Qingqing Wu, Kunlun Wang
2023-05-04T07:08:30Z
http://arxiv.org/abs/2305.02599v1
# Transmissive Reconfigurable Intelligent Surface Transmitter Empowered Cognitive RSMA Networks

###### Abstract

In this paper, we investigate the downlink transmission problem of a cognitive radio network (CRN) equipped with a novel transmissive reconfigurable intelligent surface (TRIS) transmitter. In order to achieve low power consumption and high-rate multi-stream communication, a time-modulated array (TMA) is implemented and users access the network using rate splitting multiple access (RSMA). With such a network framework, a multi-objective optimization problem with a joint design of the precoding matrix and the common stream rate is constructed to achieve higher energy efficiency (EE) and spectral efficiency (SE). Since the objective function is a non-convex fractional function, we propose a joint optimization algorithm based on difference-of-convex (DC) programming and successive convex approximation (SCA). Numerical results show that, under this framework, the proposed algorithm can considerably improve and balance the EE and SE.

Reconfigurable intelligent surface (RIS), cognitive radio network (CRN), rate splitting multiple access (RSMA), time-modulated array (TMA).

## I Introduction

Future wireless communications need to satisfy the demands of massive access and fast-growing data traffic, which lead to increasing problems of system energy consumption, spectrum shortage, and inter-device interference [1]. Therefore, there is an urgent need to advance the development of technologies to tackle these problems. As for the spectrum shortage problem, the cognitive radio network (CRN) was introduced to manage spectrum resources [2], where users are divided into primary users (PUs) and cognitive users (CUs). By interacting with the wireless environment, underutilized spectrum can be sensed and dynamically allocated to the CUs without causing interference to the PUs. The spectrum resource is thus fully utilized, greatly improving the spectral efficiency (SE) [3]. In order to achieve low-cost and low-energy-consumption communication deployments, the reflective reconfigurable intelligent surface (RIS) has attracted considerable attention [4]. A RIS consists of a large number of low-cost passive elements and is considered a key technology for next-generation wireless communications on account of its ability to constructively shape wireless propagation. Due to this characteristic, reflective RISs are used for assisted communication [5]. Recently, a transmissive RIS (TRIS) transmitter has been proposed [6]; a TRIS can achieve higher aperture efficiency due to its structural features, which avoid the problems of feed-source occlusion and reflected-wave interference. Meanwhile, since the transmissive architecture is equipped with only one feed antenna, the energy consumption of the system is lower than that of conventional transceiver architectures. In addition, combined with a time-modulated array, it can realize digital signal modulation of arbitrary order [7] and multi-stream communication. Since this novel TRIS transmitter is equipped with only a single feed antenna [6], a non-orthogonal access method is considered. Rate splitting multiple access (RSMA), as a general non-orthogonal access scheme, can achieve a higher SE and has recently attracted considerable attention and research for multi-antenna systems; it generalizes and coordinates two extreme interference management strategies [8, 9].
The first extreme interference management strategy treats all residual multi-user interference as noise; this is the scheme applied in space division multiple access (SDMA), which utilizes linear precoding to distinguish users spatially. The second extreme interference management strategy fully decodes and cancels the multi-user interference; this is the scheme applied in non-orthogonal multiple access (NOMA), which utilizes successive interference cancellation (SIC) techniques to cancel the interference [8]. Combining the above two schemes, RSMA utilizes both linear precoding and SIC techniques to decode part of the interference and treat the remaining part as noise, and can thus reduce the interference seen at the users. In addition, RIS-assisted communication systems using RSMA for access have been investigated [10][11], and the results show that this scheme outperforms conventional schemes in terms of SE and energy efficiency (EE). Conversely, under the multi-stream scheme designed in this paper, optimizing the phases of the RIS elements naturally amounts to optimizing the RSMA precoding matrix. Inspired by the above work, we construct a novel TRIS transmitter architecture to achieve low-power, high-rate multi-stream communication and jointly optimize the parameters of the RIS and RSMA to improve the EE and SE of the system. Meanwhile, we investigate the trade-off between the two. Due to the coupled optimization variables and fractional functions, we propose a joint optimization algorithm based on difference-of-convex (DC) programming and successive convex approximation (SCA) to solve the non-convex problem.

## II System Model and Problem Formulation

As shown in Fig. 1, we consider a downlink transmission CRN with RSMA. The primary base station (PBS) serves \(N\) single-antenna PUs, and the cognitive base station (CBS) serves \(K\) single-antenna CUs.

### _TRIS Transmitter Characteristics and Multi-stream Scheme_

The CBS consists of a feed antenna, a controller, and a TRIS; the TRIS has \(M=M_{r}\times M_{c}\) transmissive elements arranged in a uniform planar array (UPA) with the centers of adjacent elements spaced at \(d_{f}=\lambda_{c}/2\), where \(\lambda_{c}\) is the carrier wavelength. The TRIS differs from reflective or active RIS in that it serves only to load information (including user and beam information) and to provide spatial diversity; it does not otherwise process the signal. Based on the above differences and characteristics, this transmitter structure achieves beamforming and multi-stream communication in the four steps described below. Firstly, the signal \(\mathbf{s}\) is precoded in the controller through the precoding matrix \(\mathbf{P}\) to form a directional complex signal \(\mathbf{Ps}\). Secondly, the complex signal is modulated by the time-modulated array to generate a control signal \(\mathrm{Ctrl}\left(t\right)\) whose duty cycle is determined by the amplitude and phase of the complex signal [7]. Thirdly, the control signal adjusts the phases of the TRIS elements to load the information onto the TRIS. Finally, the feed antenna transmits the carrier wave to carry the loaded TRIS information to the users.
For time-modulated arrays, the amplitude \(A\) and phase \(\varphi_{0}\) of the coded information are mapped by adjusting the on-state start time \(t_{on}\) and the on-state duration \(\tau\) of the control signal, which can be expressed as follows

\[A/A_{\max}=\sin\left(\pi\tau/T_{p}\right), \tag{1}\]

and

\[-\pi\left(2t_{on}+\tau\right)/T_{p}=\varphi_{0}+2k\pi,\forall k, \tag{2}\]

where \(T_{p}\) is the code element time and \(A_{\max}\) is the maximum amplitude of the digitally modulated signal. Notice that, within one code element, the control signal superimposes the information of all users through the precoding matrix, and the information of different code elements of the control signal is loaded onto different TRIS elements at the same time. In this way, each TRIS element contains the same information for all users, but with different directivity information, and each TRIS element serves all users. Therefore, beamforming and multi-stream communication are realized.

### _Channel Model_

In the CBS, the path from the feed antenna to the TRIS is unobstructed and its physical length is less than the Rayleigh distance \(2D^{2}/\lambda_{c}\) (with \(D\) the TRIS aperture), a regime usually called the near-field. Based on the spherical wave assumption [12], the near-field line-of-sight (LoS) channel can be derived in the following form

\[\begin{split}\mathbf{h}=\alpha\left[e^{-j2\pi D_{FR}\left(1,1\right)},\cdots,e^{-j2\pi D_{FR}\left(1,M_{c}\right)},\right.\\ \left.\cdots,e^{-j2\pi D_{FR}\left(M_{r},1\right)},\cdots,e^{-j2\pi D_{FR}\left(M_{r},M_{c}\right)}\right]^{T},\end{split} \tag{3}\]

where \(\alpha\) denotes the channel gain from the feed antenna to the TRIS, and \(D_{FR}\left(m_{r},m_{c}\right)=\sqrt{d_{0}^{2}+d_{m_{r},m_{c}}^{2}}\) denotes the Euclidean distance from the feed antenna to the \(\left(m_{r},m_{c}\right)\)-th element of the TRIS. Here \(d_{0}\) denotes the distance from the feed antenna to the TRIS center, and \(d_{m_{r},m_{c}}\) denotes the distance from the \(\left(m_{r},m_{c}\right)\)-th element of the TRIS to its center, which can be expressed as \(d_{m_{r},m_{c}}=d_{f}\sqrt{\Delta_{r}^{2}+\Delta_{c}^{2}}\), where \(\Delta_{r}=\left(2m_{r}-M_{r}-1\right)/2\) and \(\Delta_{c}=\left(2m_{c}-M_{c}-1\right)/2\) denote the element index increments. For the channel from the TRIS to the CUs (the channel \(\mathbf{g}_{n}\) from the TRIS to the PUs takes the same form), we consider the existence of a line-of-sight (LoS) part and a non-line-of-sight (NLoS) part, modeled as the following Rician fading channel

\[\mathbf{g}_{k}=\xi_{k}\left(\sqrt{\frac{\kappa_{v}}{\kappa_{v}+1}}\mathbf{g}_{\mathrm{LoS}}+\sqrt{\frac{1}{\kappa_{v}+1}}\mathbf{g}_{\mathrm{NLoS}}\right),\forall k, \tag{4}\]

where \(\xi_{k}\) represents the path loss and \(\kappa_{v}\) represents the Rician factor. The LoS channel can be expressed as

\[\mathbf{g}_{\mathrm{LoS}}=\left[e^{-j2\pi\delta_{r}\mathbf{m}_{r}}\right]^{T}\otimes\left[e^{-j2\pi\delta_{c}\mathbf{m}_{c}}\right]^{T}, \tag{5}\]

where \(\mathbf{m}_{r}=\left[0,1,\cdots,M_{r}-1\right]\), \(\mathbf{m}_{c}=\left[0,1,\cdots,M_{c}-1\right]\), \(\delta_{r}=d_{f}\sin\varphi\cos\psi/\lambda_{c}\), and \(\delta_{c}=d_{f}\sin\varphi\sin\psi/\lambda_{c}\). Here \(\varphi\) and \(\psi\) represent the azimuth and pitch angles of the electromagnetic wave at the transmitting TRIS element. The NLoS channel obeys a circularly symmetric complex Gaussian distribution, i.e., \(\mathbf{g}_{\mathrm{NLoS}}\sim\mathcal{CN}\left(0,\mathbf{I}_{M_{r}M_{c}}\right)\).
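A minimal numerical sketch of the channel models in Eqs. (3)-(5) is given below. It assumes all distances are expressed in carrier wavelengths (so the LoS phase is simply \(-2\pi D_{FR}\)) and uses illustrative values for the array size, feed distance, Rician factor, path loss, and angles; it is not the simulation code of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Mr = Mc = 8                      # TRIS is an Mr x Mc uniform planar array
d_f = 0.5                        # element spacing, in carrier wavelengths
d0, alpha = 4.0, 1.0             # feed-to-center distance (wavelengths) and gain

# Near-field LoS feed->TRIS channel h (Eq. (3)).
mr = np.arange(1, Mr + 1); mc = np.arange(1, Mc + 1)
dr = (2 * mr - Mr - 1) / 2; dc = (2 * mc - Mc - 1) / 2
d_mrmc = d_f * np.sqrt(dr[:, None] ** 2 + dc[None, :] ** 2)
D_FR = np.sqrt(d0 ** 2 + d_mrmc ** 2)
h = (alpha * np.exp(-1j * 2 * np.pi * D_FR)).reshape(-1)

# Rician TRIS->user channel g_k (Eqs. (4)-(5)), illustrative parameters.
kappa, xi = 3.0, 1.0             # Rician factor and path loss
phi, psi = 0.3, 0.7              # azimuth and pitch angles (radians)
delta_r = d_f * np.sin(phi) * np.cos(psi)
delta_c = d_f * np.sin(phi) * np.sin(psi)
g_los = np.kron(np.exp(-1j * 2 * np.pi * delta_r * np.arange(Mr)),
                np.exp(-1j * 2 * np.pi * delta_c * np.arange(Mc)))
g_nlos = (rng.normal(size=Mr * Mc) + 1j * rng.normal(size=Mr * Mc)) / np.sqrt(2)
g_k = xi * (np.sqrt(kappa / (kappa + 1)) * g_los
            + np.sqrt(1 / (kappa + 1)) * g_nlos)
print(h.shape, g_k.shape)  # (64,) (64,)
```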
The CSI of both PUs and CUs is assumed to be known at the transmitter; the signaling overhead for CSI acquisition can be found in [13].

Fig. 1: TRIS transmitter empowered cognitive RSMA networks.

### _Signal Model and Transmission Scheme_

In this paper, users access the network via RSMA, where the information is divided into two separate parts, a common stream and private streams. The information streams are precoded in the TRIS controller using the beamformer, and all CUs share the same codebook. The common information stream of the CUs, \(s_{c}\in\mathbb{C}\), is encoded jointly using the precoding vector \(\mathbf{p}_{c}\in\mathbb{C}^{M\times 1}\), and the private information stream of the \(k\)-th CU, \(s_{k}\in\mathbb{C}\), is encoded separately using the precoding vector \(\mathbf{p}_{k}\in\mathbb{C}^{M\times 1}\). Then, the signal to be modulated can be expressed as

\[\mathbf{x}=\mathbf{Ps}, \tag{6}\]

where \(\mathbf{P}=\left[\mathbf{p}_{c},\mathbf{p}_{1},\mathbf{p}_{2},\cdots,\mathbf{p}_{K}\right]\in\mathbb{C}^{M\times\left(K+1\right)}\) and \(\mathbf{s}=\left[s_{c},s_{1},s_{2},\cdots,s_{K}\right]^{T}\in\mathbb{C}^{\left(K+1\right)\times 1}\). When a user decodes the common stream information, the private streams' interference is treated as noise. After the common stream is decoded, it is cancelled from the received signal, and each user then decodes its private stream information from the remaining signal. From the above description, the equivalent common stream and private stream signal-to-interference-plus-noise ratios (SINRs) of the \(k\)-th CU can be expressed as follows

\[\gamma_{c,k}=\frac{\left|\mathbf{g}_{k}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{c}\right|^{2}}{\sum_{i=1}^{K}\left|\mathbf{g}_{k}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{i}\right|^{2}+\sigma^{2}},\forall k, \tag{7}\]

and

\[\gamma_{p,k}=\frac{\left|\mathbf{g}_{k}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{k}\right|^{2}}{\sum_{i\neq k}^{K}\left|\mathbf{g}_{k}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{i}\right|^{2}+\sigma^{2}},\forall k. \tag{8}\]

Then the corresponding achievable rates for the common and private streams can be expressed as

\[R_{i,k}=W\mathrm{log}_{2}\left(1+\gamma_{i,k}\right),i\in\left\{c,p\right\},\forall k. \tag{9}\]

To ensure that all CUs are able to decode the common stream information, the following constraint needs to be satisfied

\[R_{c}=\min\left(R_{c,1},R_{c,2},\cdots,R_{c,K}\right). \tag{10}\]

The common stream rate is shared by all CUs, which jointly participate in the encoding of the common information stream and need to satisfy the following constraint

\[\sum\nolimits_{k=1}^{K}C_{k}=R_{c}, \tag{11}\]

where \(C_{k}\) denotes the equivalent common stream rate of the \(k\)-th CU. Combining the common stream rate and the private stream rates, the total achievable rate of the CUs is

\[R_{tot}=\sum\nolimits_{k=1}^{K}\left(C_{k}+R_{p,k}\right). \tag{12}\]

Let \(l\in\left\{c,1,2,\cdots,K\right\}\). The total energy consumption of the cognitive network can be expressed as

\[P_{tot}=\sum\nolimits_{l}\left\|\mathbf{p}_{l}\right\|^{2}+P_{cir}\leq P_{\max}, \tag{13}\]

where \(P_{cir}\) denotes the circuit power consumption and \(P_{\max}\) denotes the maximum power of the CBS, which is related to the maximum transmit power of the TRIS elements.
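The rate expressions in Eqs. (7)-(10) can be evaluated directly, as in the sketch below; the effective channels and precoders are drawn at random purely for illustration, whereas in this paper the precoders are the optimization variables.

```python
import numpy as np

def rsma_rates(H_eff, P, sigma2, W=1.0):
    """Common/private stream rates for K users (Eqs. (7)-(10)).
    H_eff: K x M effective channels g_k^H diag(h); P: M x (K+1) precoders,
    column 0 for the common stream, columns 1..K for the private streams."""
    K = H_eff.shape[0]
    S = np.abs(H_eff @ P) ** 2            # K x (K+1) received stream powers
    priv_tot = S[:, 1:].sum(axis=1)
    gamma_c = S[:, 0] / (priv_tot + sigma2)                 # Eq. (7)
    gamma_p = np.array([S[k, 1 + k] /
                        (priv_tot[k] - S[k, 1 + k] + sigma2)
                        for k in range(K)])                 # Eq. (8)
    R_c = W * np.log2(1 + gamma_c).min()                    # Eq. (10)
    R_p = W * np.log2(1 + gamma_p)
    return R_c, R_p

rng = np.random.default_rng(0)
K, M, sigma2 = 3, 16, 1e-3
H_eff = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
P = (rng.normal(size=(M, K + 1)) + 1j * rng.normal(size=(M, K + 1))) / np.sqrt(M)
R_c, R_p = rsma_rates(H_eff, P, sigma2)
print(R_c, R_p.sum())  # shared common-rate budget and total private rate
```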
### _Problem Formulation_

In this paper, the SE and EE of the cognitive network are maximized by optimizing the precoding matrix \(\mathbf{P}=\left[\mathbf{p}_{c},\mathbf{p}_{1},\mathbf{p}_{2},\cdots,\mathbf{p}_{K}\right]\) and the common stream rate vector \(\mathbf{c}=\left[C_{1},C_{2},\cdots,C_{K}\right]\), while guaranteeing the quality of service (QoS) of the PUs. The corresponding \(\eta_{SE}\) and \(\eta_{EE}\) can be expressed as \(\eta_{SE}=R_{tot}/W\) and \(\eta_{EE}=R_{tot}/P_{tot}\). This optimization problem can then be formulated as

\[\left(\mathrm{P1}\right):\ \max_{\mathbf{P},\mathbf{c}}\ \eta_{SE}, \tag{14a}\]
\[\max_{\mathbf{P},\mathbf{c}}\ \eta_{EE}, \tag{14b}\]
\[\mathrm{s.t.}\quad P_{tot}\leq P_{\max}, \tag{14c}\]
\[\sum\nolimits_{k=1}^{K}C_{k}\leq R_{c}, \tag{14d}\]
\[C_{k}\geq 0,\forall k, \tag{14e}\]
\[\mathbf{p}_{c}\succeq 0,\ \mathbf{p}_{k}\succeq 0,\forall k, \tag{14f}\]
\[\left|\mathbf{g}_{n}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{c}\right|^{2}\leq I_{c,th},\forall n, \tag{14g}\]
\[\sum\nolimits_{k=1}^{K}\left|\mathbf{g}_{n}^{H}\mathrm{diag}\left(\mathbf{h}\right)\mathbf{p}_{k}\right|^{2}\leq I_{p,th},\forall n, \tag{14h}\]
\[C_{k}+R_{p,k}\geq R_{th},\forall k. \tag{14i}\]

Note that constraints (14g)-(14h) are required to guarantee the QoS of the PUs: the interference power of the CUs' common and private streams at the \(n\)-th PU must be less than the thresholds \(I_{c,th}\) and \(I_{p,th}\), respectively. To guarantee the QoS of each CU, constraint (14i) must be satisfied. Clearly, \(\left(\mathrm{P1}\right)\) is a non-convex multi-objective optimization problem due to the coupling of the optimization variables and the fractional functions, so in the next section a joint optimization algorithm is proposed to solve it.

## III Joint Precoding and Common Stream Rate Optimization

### _Problem Reformulation_

To solve this multi-objective non-convex optimization problem, the \(\varepsilon\)-constraint method is applied in this paper: the spectral efficiency objective is transformed into a constraint, and problem \(\left(\mathrm{P1}\right)\) is reformulated in the following form

\[\left(\mathrm{P2}\right):\ \max_{\mathbf{P},\mathbf{c}}\ \eta_{EE}, \tag{15a}\]
\[\mathrm{s.t.}\quad\left(14c\right)-\left(14i\right), \tag{15b}\]
\[\eta_{SE}\geq\eta_{0}. \tag{15c}\]

**Remark 1**: _The initial selection of \(\eta_{0}\) should be less than \(\eta_{SE,\max}\), which is independently obtained via [14] based on constraints (14c)-(14i). Constraint (15c) then guarantees that \(\eta_{SE}\) is not less than \(\eta_{0}\) over all feasible solutions, which satisfies the condition required for Pareto optimality._

### _Precoding Matrix and Common Stream Rate Joint Design_

Let \(\mathbf{Q}_{l}=\mathbf{p}_{l}\mathbf{p}_{l}^{H}\), and let \(\mathbf{Q}=\left[\mathbf{Q}_{c},\mathbf{Q}_{1},\mathbf{Q}_{2},\cdots,\mathbf{Q}_{K}\right]\) denote the collection of all such matrices.
The precoding matrix optimization problem can be rewritten as

\[\left(\mathrm{P2.1}\right):\ \max_{\mathbf{Q}}\ \frac{\sum\nolimits_{k=1}^{K}\left(C_{k}+R_{p,k}\right)}{\sum\nolimits_{l}\mathrm{Tr}\left(\mathbf{Q}_{l}\right)+P_{cir}}, \tag{16a}\]
\[\mathrm{s.t.}\quad\left(14d\right),\left(14i\right),\left(15c\right), \tag{16b}\]
\[\sum\nolimits_{l}\mathrm{Tr}\left(\mathbf{Q}_{l}\right)+P_{cir}\leq P_{\max}, \tag{16c}\]
\[\mathbf{Q}_{l}\succeq 0,\forall l, \tag{16d}\]
\[\mathrm{rank}\left(\mathbf{Q}_{l}\right)=1,\forall l, \tag{16e}\]
\[\mathrm{Tr}\left(\mathbf{F}_{n}\mathbf{Q}_{c}\right)\leq I_{c,th},\forall n, \tag{16f}\]
\[\sum\nolimits_{k=1}^{K}\mathrm{Tr}\left(\mathbf{F}_{n}\mathbf{Q}_{k}\right)\leq I_{p,th},\forall n, \tag{16g}\]

where \(\mathbf{F}_{n}=\mathbf{f}_{n}\mathbf{f}_{n}^{H}\in\mathbb{H}^{M}\) and \(\mathbf{f}_{n}=\mathrm{diag}\left(\mathbf{h}\right)\mathbf{g}_{n}\). Since the objective function is fractional, the parameter \(\lambda\in\mathbb{R}\), whose optimal value can be obtained by Dinkelbach's algorithm [15], is introduced in this paper. Let \(\tilde{P}_{\mathrm{tot}}=\sum_{l}\mathrm{Tr}\left(\mathbf{Q}_{l}\right)+P_{cir}\). Then the objective function can be transformed into the following tractable form

\[\tilde{\eta}_{EE}=\sum\nolimits_{k=1}^{K}\left[C_{k}+W\left(f_{k}\left(\mathbf{Q}\right)-v_{k}\left(\mathbf{Q}\right)\right)\right]-\lambda\tilde{P}_{\mathrm{tot}}, \tag{17}\]

where \(f_{k}\left(\mathbf{Q}\right)\) and \(v_{k}\left(\mathbf{Q}\right)\) are given by

\[f_{k}\left(\mathbf{Q}\right)=\mathrm{log}_{2}\left(\sum\nolimits_{i=1}^{K}\mathrm{Tr}\left(\mathbf{F}_{k}\mathbf{Q}_{i}\right)+\sigma^{2}\right),\forall k, \tag{18}\]

and

\[v_{k}\left(\mathbf{Q}\right)=\log_{2}\left(\sum\nolimits_{i\neq k}^{K}\mathrm{Tr}\left(\mathbf{F}_{k}\mathbf{Q}_{i}\right)+\sigma^{2}\right),\forall k. \tag{19}\]

Since \(R_{p,k}=W\left(f_{k}\left(\mathbf{Q}\right)-v_{k}(\mathbf{Q})\right)\) is the difference of two concave functions, it is a standard DC function. Therefore, a first-order Taylor expansion of \(v_{k}\left(\mathbf{Q}\right)\) via successive convex approximation (SCA) is applied to transform Eq. (17) into a concave function; the upper bound of \(v_{k}\left(\mathbf{Q}\right)\) is

\[v_{k}(\mathbf{Q})^{ub}\overset{\Delta}{=}v_{k}\left(\mathbf{Q}^{(r)}\right)+\mathrm{vec}\left(\nabla v_{k}\left(\mathbf{Q}^{(r)}\right)\right)^{T}\mathrm{vec}\left(\mathbf{Q}-\mathbf{Q}^{(r)}\right), \tag{20}\]

where \(\nabla v_{k}\left(\mathbf{Q}\right)=\left[\frac{\partial v_{k}(\mathbf{Q})}{\partial\mathbf{Q}_{1}},\cdots,\frac{\partial v_{k}(\mathbf{Q})}{\partial\mathbf{Q}_{K}}\right]\) denotes the gradient of \(v_{k}\left(\mathbf{Q}\right)\), \(\mathrm{vec}\) denotes the vectorization operation, and \(\mathbf{Q}^{(r)}\) represents the value of \(\mathbf{Q}\) at the \(r\)-th iteration. The partial derivative of \(v_{k}\left(\mathbf{Q}\right)\) with respect to \(\mathbf{Q}_{i}\) can be expressed as

\[\frac{\partial v_{k}\left(\mathbf{Q}\right)}{\partial\mathbf{Q}_{i}}=\left\{\begin{aligned} &\frac{\mathbf{F}_{k}}{\ln 2\left(\sum\nolimits_{j\neq k}^{K}\mathrm{Tr}\left(\mathbf{F}_{k}\mathbf{Q}_{j}\right)+\sigma^{2}\right)},\quad i\neq k,\\ &\mathbf{0},\quad i=k.\end{aligned}\right. \tag{21}\]
Thus the approximation of \(R_{p,k}\) can be expressed as

\[\widetilde{R}_{p,k}=W\left(f_{k}\left(\mathbf{Q}\right)-v_{k}(\mathbf{Q})^{ub}\right),\forall k. \tag{22}\]

Based on the above approximation, all expressions containing \(R_{p,k}\) are transformed into concave functions or convex sets. It can be seen that constraint (14d) in problem \(\mathrm{(P2.1)}\) is still non-convex; it can be transformed into the following form

\[\mathrm{Tr}\left(\mathbf{F}_{k}\mathbf{Q}_{c}\right)\geq\gamma_{c0}\left(\sum\nolimits_{i=1}^{K}\mathrm{Tr}\left(\mathbf{F}_{k}\mathbf{Q}_{i}\right)+\sigma^{2}\right),\forall k, \tag{23}\]

where \(\gamma_{c0}=2^{R_{c}/W}-1\) denotes the SINR threshold corresponding to the common stream rate \(\sum_{k=1}^{K}C_{k}\). Using the semidefinite relaxation (SDR) technique, the rank-one constraint (16e) is relaxed. In order to nevertheless enforce \(\mathrm{rank}\left(\mathbf{Q}_{l}\right)=1,\forall l\), and to obtain the solution \(\mathbf{P}^{*}\) of problem \(\mathrm{(P2.1)}\), the sequential rank-one constraint relaxation (SROCR) technique is applied in this paper. This algorithm relaxes the rank-one constraint to an inequality by introducing an auxiliary variable \(\omega\) in the equivalent expression as follows

\[\mathbf{u}_{\max}\left(\mathbf{Q}_{l}^{(r)}\right)^{H}\mathbf{Q}_{l}\mathbf{u}_{\max}\left(\mathbf{Q}_{l}^{(r)}\right)\geq\omega^{(r)}\mathrm{Tr}\left(\mathbf{Q}_{l}\right),\forall l, \tag{24}\]

where \(\mathbf{u}_{\max}\) denotes the eigenvector corresponding to the largest eigenvalue of \(\mathbf{Q}_{l}^{(r)}\) and \(r\) denotes the iteration number. According to [16], the rank-one constraint can be satisfied gradually by updating \(\omega\), which makes it easier to find a feasible solution. Finally, the original problem \(\mathrm{(P2.1)}\) can be written in the following solvable form

\[\left(\mathrm{P2.2}\right):\ \max_{\mathbf{Q},\mathbf{c}}\ \sum\nolimits_{k=1}^{K}\left(C_{k}+\widetilde{R}_{p,k}\right)-\lambda\tilde{P}_{\mathrm{tot}}, \tag{25a}\]
\[\mathrm{s.t.}\quad(14e),(16c),(16d),(16f),(16g),(23),(24), \tag{25b}\]
\[C_{k}+\widetilde{R}_{p,k}\geq R_{th},\forall k, \tag{25c}\]
\[\sum\nolimits_{k=1}^{K}\left(C_{k}+\widetilde{R}_{p,k}\right)\geq W\eta_{0}. \tag{25d}\]

Problem \(\mathrm{(P2.2)}\) is jointly convex in the variables \(\mathbf{Q}\) and \(\mathbf{c}\) and is a semi-definite programming (SDP) problem, which can be solved using the CVX toolbox. Footnote 1: We utilize SCA to deal with the non-convexity and obtain a suboptimal high-precision solution to problem (P2.2) by iteration, with convergence guaranteed by the interior point method [17].

### _Complexity Analysis of the Proposed Algorithm_

The proposed algorithm can be summarized as **Algorithm 1**, and its complexity is mainly determined by step 4. The complexity of this step is \(\mathcal{O}\left(\log\left(1/\varepsilon_{0}\right)M^{3.5}\right)\), where \(\varepsilon_{0}\) is the accuracy for stopping the iteration, set to \(\varepsilon_{0}=10^{-3}\) in this paper.

```
1: Solve the \(\eta_{SE}\) maximization problem to obtain \(\eta_{SE,\max}\).
2: Initialization: \(\mathbf{P}^{0}\), \(\mathbf{c}^{0}\), \(\eta_{0}\), \(\varepsilon_{0}\), \(\lambda_{0}\), \(r=0\).
3: repeat
4:   Solve problem (P2.2) and obtain \(\mathbf{c}^{(r)}\) and \(\mathbf{P}^{(r)}\) based on its solution using the SROCR technique.
5:   Get \(\lambda\) using Dinkelbach's algorithm.
6:   \(r \leftarrow r+1\)
7: until the fractional decrease of the objective value is below the threshold \(\varepsilon_{0}\).
8: return the precoding matrix and the common stream rate vector.
```

**Algorithm 1** The Joint Optimization Algorithm

## IV Numerical Results

In this section, the performance of the proposed algorithm is investigated. The PBS serves \(N=5\) single-antenna PUs and the CBS serves \(K=5\) single-antenna CUs. These users are randomly distributed in circles centered at the CBS (the coordinate origin): the PUs within a radius of 500 m and the CUs within a radius of 350 m. Unless specified otherwise, the simulation parameters are set as in Table I. We compare the performance of the proposed algorithm with the following benchmarks: (1) **benchmark 1** (EE maximization): this scheme aims only to maximize the EE. (2) **benchmark 2** (random precoding): this scheme does not optimize the precoding matrix and employs a randomly generated one. (3) **benchmark 3** (fixed precoding): this scheme uses a fixed precoding matrix. (4) **benchmark 4** (SDMA): users access via SDMA. (5) **benchmark 5** (NOMA): users access via NOMA. (6) **benchmark 6** (No RIS): this scheme does not deploy a TRIS.

We first provide insight into the relationship between the SE of the system and the number of TRIS elements. As shown in Fig. 2(a), the SE of all schemes improves as the number of TRIS elements increases, except for benchmark 6, where no RIS is deployed. Despite employing random and fixed precoding, benchmarks 2 and 3 still achieve higher SE than benchmark 6, demonstrating that the TRIS has a significant effect on the SE of the system. Meanwhile, the SE of the proposed architecture is higher than that of SDMA, NOMA, and the No-RIS scheme, confirming that the combination of RSMA and TRIS can bring higher gains.

Then, we shed light on the variation of the EE with the maximum power constraint of the CBS. As shown in Fig. 2(b), due to the change in the growth rates of \(R_{tot}\) and \(P_{tot}\), the two functions intersect, so the EE first rises and then falls as \(P_{tot}\) varies. It is noteworthy that the architecture in this paper achieves higher EE at lower \(P_{tot}\) than the other schemes, which confirms that the proposed architecture can achieve high-rate multi-stream communication with low power consumption.

Finally, we investigate the trade-off between EE and SE as the maximum power constraint of the CBS increases from 4 dBW to 13 dBW. Fig. 2(c) shows that the EE first increases and then decreases as the SE increases, which indicates that higher SE requires more energy consumption; since the SE grows more slowly than the energy consumption, the EE eventually decreases. As the total power goes from 11 dBW to 13 dBW, the common stream power of RSMA approaches the private stream power, causing RSMA to degenerate into SDMA, so their EEs become nearly identical. Meanwhile, the proposed algorithm achieves higher SE and EE and a better balance between them than the other schemes, confirming the necessity and effectiveness of jointly optimizing the SE and EE.

## V Conclusions

In this paper, we proposed TRIS-empowered cognitive RSMA networks to implement low-power, high-rate multi-stream communication. Based on these networks, a joint precoding matrix and common stream rate design optimization problem has been formulated.
To solve the formulated non-convex problem, we utilized a joint optimization algorithm framework based on DC programming and SCA. Based on the experimental results, we provided design guidelines: the SE of the system can be improved by increasing the number of low-cost TRIS elements, and the transmit power should be maintained at an appropriate level to ensure maximum EE.
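As a closing illustration of step 5 of Algorithm 1, the sketch below applies Dinkelbach's parameter update to a scalar toy fractional program; in the actual algorithm, the inner maximization is the SDP (P2.2) solved with CVX, not this one-dimensional search, and the rate and power functions below are invented stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Dinkelbach's algorithm for max N(x)/D(x) with D(x) > 0: at optimality,
# max_x N(x) - lam * D(x) equals zero for lam equal to the optimal ratio.
N = lambda x: np.log2(1 + 4 * x)    # stand-in "rate"
D = lambda x: x + 0.5               # stand-in "power"

lam, eps = 0.0, 1e-6
for _ in range(50):
    res = minimize_scalar(lambda x: -(N(x) - lam * D(x)),
                          bounds=(0.0, 10.0), method="bounded")
    x_star, F = res.x, -res.fun
    if F < eps:                      # N(x*) - lam * D(x*) ~ 0 => optimal
        break
    lam = N(x_star) / D(x_star)      # Dinkelbach parameter update

print(f"optimal ratio ~ {lam:.4f} at x = {x_star:.4f}")
```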
2307.14582
Resistive Switching Conducting Filament Electroformation with an Electrothermal Phase Field Method
A phase field method self-consistently coupled to continuum heat transport and charge conservation is used to simulate conducting filament dynamical evolution and nanostructure of electroformed resistive switching thin films. Our method does not require a pre-defined idealized conducting filament, as previous methods do, instead treating its dynamical evolution as a stochastic diffuse interface problem subject to a variational principle. Our simulation results agree well with available experimental observations, correctly reproducing electroformed conducting filament nanostructure exhibited by a variety of resistive switching thin films.
John F. Sevic, Nobuhiko P. Kobayashi
2023-07-27T02:05:39Z
http://arxiv.org/abs/2307.14582v1
# Resistive Switching Conducting Filament Electroformation with an Electrothermal Phase Field Method ###### Abstract A phase field method self-consistently coupled to continuum heat transport and charge conservation is used to simulate conducting filament dynamical evolution and nanostructure of electroformed resistive switching thin films. Our method does not require a pre-defined idealized conducting filament, as previous methods do, instead treating its dynamical evolution as a stochastic diffuse interface problem subject to a variational principle. Our simulation results agree well with available experimental observations, correctly reproducing electroformed conducting filament nanostructure exhibited by a variety of resistive switching thin films. Appropriately prepared nanoscale resistive switching thin films exhibit persistent conductivity modulation, central to their operation as next-generation non-volatile memory and neuromorphic computing technologies [1][2]. Persistent conductivity modulation is produced by various physical mechanisms, such as insulator-metal phase transition (IMT), a consequence of intrinsic metastable atomic-scale states of these thin films and controlled introduction of reversible states endowed by specific preparation processes. Various qualitative models have been proposed to study transport phenomena and nanostructure of these thin films, in particular the widely adopted conducting filament (CF) formalism. The CF formalism suggests an initial irreversible forming process producing a nanoscale filamentary thread of locally high electrical conductivity embedded in these thin films. Following CF formation, persistent electrical conductivity modulation obtains by reversible rupture and recovery, usually under the influence of an electric potential and associated Joule heating. While a precise quantitative description of CF dynamical evolution is often not established, the CF formalism nevertheless provides a useful starting point for further quantitative treatment because of experimental evidence suggesting the presence of such conducting filaments. Building on the CF formalism, many computational studies have adopted a continuum drift-diffusion (DD) formulation of various self-consistently coupled transport phenomena, often referred to as multiphysics [3][4][5][6][7][8][9][10]. In this formulation, a nanoscale CF geometry is _a priori_ defined, embedded in an idealized host thin film. The CF formalism then stipulates allied transport phenomena involving coupling between various physical processes and conservation laws. The DD formulation thus requires mobility and diffusion expressions for each specific mass transport mechanism. Additionally, thermal transport and charge conservation are self-consistently coupled. Supporting the CF formalism is a large body of experimental data providing evidence of the existence of nanoscale conducting filaments in thin films [11][12][13][14][15][16][17]. In contrast to the assumption of an _a priori_ defined CF with a specific geometry, these experimentally observed conducting filaments suggest the presence of a stochastic component, due to inherent anisotropic inhomogeneity that naturally appears in thin films. We believe this stochastic character of CF dynamical evolution is central to correctly reproducing experimental nanostructure by computation. Expanding on existing DD formulations, we previously demonstrated an isothermal phase field method requiring no _a priori_ assumptions on CF geometry or its idealized thin film host.
While making an isothermal assumption, our basic phase field method nevertheless produced conducting filaments exhibiting nanostructure consistent with experimental observation [18]. In this paper, we propose a self-consistent electrothermal phase field method for the computational study of resistive switching phenomena exhibited by a variety of as-fabricated thin films. With this method, the requirement of _a priori_ defining a CF and idealized host dielectric is abandoned, and dynamical evolution is instead treated as a stochastic diffuse interface problem subject to a variational principle. A significant feature of the phase field method is that it avoids the mathematically onerous difficulty of expressing dynamic boundary conditions of an unknown diffuse interface, the CF of our method, whose location is part of the solution, while retaining the benefits of the DD formulation. Our method produces spontaneous nucleation and electroformation of multiple conducting filaments, as seen in a range of resistive switches, offering an alternative computational formulation correctly reproducing experimentally observed nanostructure. Phase change, fundamental to the study of resistive switching based on IMT, is naturally treated by our method, from its metastable atomic origins, and applies to both electronic and ionic charge transport. The present study focuses exclusively on CF electroformation based on ionic transport of a thin film exhibiting unipolar resistive switching from IMT. To develop our self-consistent electrothermal phase field method, consider Figure 1 approximating an as-fabricated pristine resistive switching thin film at room temperature with an initial equilibrium charge density \(c(\vec{r},0)\). To model resistive switching based on IMT, we assume spinodal separation between an insulating low temperature phase, the pristine equilibrium thin film, and a metallic high temperature phase, the CF of our method [19]. From a variational argument, an interface forms, possibly several, under Joule heating because it is energetically favorable, creating isolated clusters of metallic-phase thin film self-consistently evolving, some eventually merging to produce conducting filaments. Treating IMT from its atomic origins, we adopt a double-well free-energy density based on charge density, \(c(\vec{r},t)\), and temperature, \(T(\vec{r},t)\), to describe electrothermal CF dynamical evolution producing experimentally consistent nanostructure. This form of free-energy density can approximate a range of resistive switching thin films based on IMT. For the present work we employ an even-order sixth-degree polynomial function of charge density and absolute temperature of the form given by Equation 1 \[f_{b}(c,T)=a_{2}(c,T)+a_{4}(c,T)+a_{6}(c,T) \tag{1}\] where \(a_{2}(c,T)\), \(a_{4}(c,T)\), and \(a_{6}(c,T)\) are second-, fourth-, and sixth-degree polynomial functions of normalized charge density and absolute temperature, \(c(\vec{r},t)\) and \(T(\vec{r},t)\), respectively, and \(f_{b}(c,T)\) is free-energy density in keV/mol. Here \(\vec{r}\) represents a location in the \((x,y)\)-plane of the structure of Figure 1 and \(t\) is time [20]. The constants defining \(a_{2}\), \(a_{4}\) and \(a_{6}\) produce the free-energy density function illustrated by Figure 2, representing IMT of our simulation. Here the magnitude of the free-energy density and absolute temperature range approximately correspond to thin films from our earlier work [9][10][21][22]. 
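To make Equation 1 concrete, the short sketch below evaluates a sixth-degree double-well free-energy density of the stated form; the polynomial coefficients, their temperature dependence, and the transition temperature are illustrative stand-ins, since the actual constants defining \(a_{2}\), \(a_{4}\), and \(a_{6}\) are calibrated to Figure 2 rather than listed in the text.

```python
import numpy as np

def free_energy_density(c, T, T_imt=1300.0):
    """Illustrative double-well f_b(c,T) = a2(T)*s^2 + a4*s^4 + a6*s^6, s = c - 0.5.

    The coefficients are stand-ins chosen only to reproduce the qualitative
    picture of Figure 2: two spinodal minima (insulating / metallic phases)
    below the transition temperature, a single minimum above it.
    """
    s = c - 0.5
    a2 = T / T_imt - 1.0      # negative below T_imt -> double well
    a4, a6 = 2.0, 1.0
    return a2 * s**2 + a4 * s**4 + a6 * s**6

c = np.linspace(0.0, 1.0, 401)
for T in (300.0, 1500.0):
    f = free_energy_density(c, T)
    interior = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])   # local minima on the grid
    print("T = %6.0f K, minima at c =" % T, np.round(c[1:-1][interior], 2))
```

Run as written, the sketch reports two minima (near \(c\approx 0.09\) and \(c\approx 0.91\)) at 300 K and a single minimum at \(c=0.5\) above the assumed transition temperature, the qualitative behavior the calibrated polynomial is designed to capture.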
The free-energy of the thin film and the CF interface energy interact with an externally applied electric potential through an electrostatic energy term \[g_{elec}(c,V)=V(\vec{r},t)\times c(\vec{r},t)\times\frac{q}{\Omega} \tag{2}\] where \(V(\vec{r},t)\) is the electric potential between the top and bottom edges of the structure, \(q\) is the electronic charge, and \(\Omega\) is an energy density factor that produces Joule self-heating consistent with our IMT model illustrated by Figure 2 [23]. Our phase field CF equation of motion obtains by formulating a free-energy functional composed of Equations 1 and 2 with the CF interface energy to yield \[F=\int_{R}\left[f_{b}(c,T)+\frac{\kappa}{2}|\nabla c(\vec{r},t)|^{2}+g_{elec}(c,V)\right]d\vec{r} \tag{3}\] where \(\kappa\) is an interface gradient energy term and the integration is over \(R\), the entire thin film of Figure 1.

Figure 1: Cross-section of our as-fabricated pristine resistive switching thin film model showing dimensions of 50 nm x 10 nm with an initial charge density \(c(\vec{r},0)\). The initial charge density is chosen to represent a thin film uniformly at approximately 300 K, corresponding to the free-energy density function of Figure 2. To initiate electroformation, an external electric potential of 1 V is applied to the top edge, and the bottom edge is held at 0 V. The top and bottom edges are held at a temperature of 300 K, and periodic boundary conditions along the \(x\)-axis are assumed for charge density, electric potential, and temperature. Note that the initial charge density is concentrated around the two spinodal minima of the free-energy density function illustrated by Figure 2.

Figure 2: Free-energy density as a function of normalized charge density and absolute temperature, \(c(\vec{r},t)\) and \(T(\vec{r},t)\), respectively. Our phase field method assumes spinodal phase separation between an insulating low temperature phase, the pristine thin film, and a metallic high temperature phase, the CF of our method.

For the present work, the interface gradient energy term is assumed uniformly constant over \(R\). A substantial feature of the variational argument is that the stationary equilibrium is switching-mechanism agnostic. For example, memristive thin films exploiting mechanical strain would be treated by adding a strain energy density term to the free-energy functional of Equation 3 [24][25]. The equation of motion of the _a priori_ unknown diffuse interface, the CF of our method, is found by extracting an Euler-Lagrange equation from the free-energy functional of Equation 3, yielding \[\frac{\partial c(\vec{r},t)}{\partial t}=M\nabla^{2}\bigg{[}\frac{\partial f_{b}(c,T)}{\partial c}-\kappa\nabla^{2}c(\vec{r},t)-\frac{q}{\Omega}V(\vec{r},t)\bigg{]} \tag{4}\] where \(M\) is the CF interface mobility, assumed uniformly constant over \(R\). This is a Cahn-Hilliard phase field equation in normalized charge density \(c(\vec{r},t)\) for our phase field method [18][26][27]. To complete our electrothermal phase field method, this equation of motion governing CF dynamical evolution must be solved self-consistently with the DD heat equation and the charge conservation equation.
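The structure of Equation 4 can be illustrated with a minimal explicit finite-difference sketch. This is not the MOOSE-based solver used in this work: it freezes \(T\) and \(V\), substitutes a generic double-well derivative for \(\partial f_{b}/\partial c\), and applies periodic boundaries on all edges (the actual simulation uses Dirichlet conditions on the top and bottom edges).

```python
import numpy as np

def laplacian(a, h):
    """Five-point Laplacian with periodic wrap-around on both axes."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / h**2

def cahn_hilliard_step(c, V, M, kappa, q_over_omega, h, dt):
    """One explicit update of Equation 4:
    dc/dt = M * lap( df_b/dc - kappa * lap(c) - (q/Omega) * V ).
    df_b/dc is a stand-in double-well derivative with wells near c = 0 and 1;
    the true f_b is the calibrated sixth-degree polynomial of Equation 1."""
    s = c - 0.5
    mu = (4.0 * s**3 - s) - kappa * laplacian(c, h) - q_over_omega * V
    return c + dt * M * laplacian(mu, h)

rng = np.random.default_rng(0)
h, dt = 1.0, 1e-3                        # 1 nm grid, explicit-stability-limited step
c = 0.5 + 0.05 * rng.standard_normal((10, 50))            # noisy pristine film (Fig. 1)
V = np.tile(np.linspace(1.0, 0.0, 10)[:, None], (1, 50))  # frozen 1 V -> 0 V drop
for _ in range(10_000):
    c = cahn_hilliard_step(c, V, M=1.0, kappa=1.0, q_over_omega=0.01, h=h, dt=dt)
print("c range after evolution: %.2f .. %.2f" % (c.min(), c.max()))
```

The essential design point visible here is that the chemical potential, not the concentration itself, is diffused, which is what makes the dynamics conservative in \(c\) and lets interfaces form wherever they are energetically favorable.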
Equation 4 is self-consistently coupled to the DD heat equation with a Joule heating source term \[\rho c_{p}\frac{\partial T(\vec{r},t)}{\partial t}-\nabla\cdot k_{th}\nabla T(\vec{r},t)=\sigma(\vec{r},c,T)\times|\nabla V(\vec{r},t)|^{2} \tag{5}\] where \(\rho\) is mass density, \(c_{p}\) is specific heat capacity, \(k_{th}\) is thermal conductivity, and \(\sigma(\vec{r},c,T)\) is electrical conductivity at a point in the \((x,y)\)-plane of the thin film of Figure 1. Mass density and specific heat are assumed constant, for the moment, although our electrothermal phase field method naturally treats phase change and anisotropy for these material properties. Thermal conductivity is similarly assumed constant, although its specification is entirely arbitrary with our method. Charge conservation on normalized charge density, \(c(\vec{r},t)\), is imposed as \[\nabla\cdot\bigg{[}\sigma(\vec{r},c,T)\times\nabla V(\vec{r},t)\bigg{]}-c(\vec{r},t)=0 \tag{6}\] where electrical conductivity, \(\sigma(\vec{r},c,T)\), assumes the usual continuum form \[\sigma(\vec{r},c,T)=c(\vec{r},t)\times\mu_{F}(\vec{r},T)\times q \tag{7}\] where \(c(\vec{r},t)\) is normalized charge density and \(q\) is electronic charge. For our electrical conductivity model, we assume the metallic-phase thin film enclosed by the CF interface responds dynamically to the electric field \(\vec{E}(\vec{r},t)=\nabla V(\vec{r},t)\) with Poole-Frenkel mobility \[\mu_{F}(\vec{r},T)=\frac{\mu_{o}}{T(\vec{r},t)}\times\exp\left[\frac{-E^{\mu}_{ac}}{k_{b}\times T(\vec{r},t)}\right] \tag{8}\] where \(\mu_{o}\) is a charge mobility pre-factor, \(k_{b}\) is the Boltzmann constant, and \(E^{\mu}_{ac}\) is a field-assisted activation energy. Our simulation thermal and electrical material properties and phase field model parameters are defined by Table 1. We assume isotropic continuum heat transport with mass density, specific heat capacity, and thermal conductivity, \(\rho\), \(c_{p}\), and \(k_{th}\), respectively. These thin films exhibit charge transport approximated as a field-assisted Poole-Frenkel phenomenon, defined by a charge mobility pre-factor and activation energy, \(\mu_{o}\) and \(E^{\mu}_{ac}\), respectively. Each of these material properties is extracted from our previous simulated and experimental data on memristive thin films based on IMT [9][10][18][21][22]. The phase field interface mobility, \(M\), is approximated to reflect an interface that forms essentially transparently. The interface energy density, \(\kappa\), is established by making the metal-insulator diffuse interface width consistent with experimental data and our isothermal phase field method [18][28]. To simulate CF electroformation using our electrothermal phase field method, we solve self-consistently the CF equation of motion, Equation 4, with the heat equation and charge conservation equation, Equations 5 and 6, using the Multiphysics Object-Oriented Simulation Environment (MOOSE) multiphysics solver [29]. These equations are discretized by an adaptive meshing algorithm and self-consistently solved by a finite element transient Newton method to yield, as a function of time, the dynamical evolution of the unknown diffuse interface, \(c(\vec{r},t)\), the CF of our method, and the state variables \(V(\vec{r},t)\) and \(T(\vec{r},t)\) [30][31][32][33].
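The local electrical response of Equations 7 and 8 reduces to a few lines; the sketch below uses the Table 1 values but treats the unit bookkeeping loosely (the prefactor units are taken at face value, so the absolute scale is indicative only).

```python
import numpy as np

K_B = 8.617e-5     # Boltzmann constant in eV/K
Q = 1.0            # electronic charge in normalized units (assumption)
MU_0 = 100.0       # mobility pre-factor, Table 1 (nm/(V*ns), taken at face value)
E_AC = 0.250       # field-assisted activation energy, Table 1 (250 meV, in eV)

def poole_frenkel_mobility(T):
    """Equation 8: mu_F(T) = (mu_0 / T) * exp(-E_ac / (k_B * T))."""
    return (MU_0 / T) * np.exp(-E_AC / (K_B * T))

def conductivity(c, T):
    """Equation 7: sigma = c * mu_F * q, with c the normalized charge density."""
    return c * poole_frenkel_mobility(T) * Q

# Thermally activated transport: conductivity rises steeply across the IMT range.
for T in (300.0, 800.0, 1300.0):
    print("T = %6.0f K  sigma(c=0.9) = %.3e" % (T, conductivity(0.9, T)))
```

The steep increase of \(\sigma\) with temperature is what closes the feedback loop of Equations 4-8: hotter regions conduct better, dissipate more Joule heat through Equation 5, and so grow.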
The initial condition for normalized charge density, illustrated by Figure 1, is chosen to represent an as-fabricated pristine resistive switching thin film, in thermodynamic equilibrium between the two spinodal normalized charge density minima.

\begin{table} \begin{tabular}{c c c} Parameter & Value & Units \\ \hline \(\rho\) & 1000 & \(\frac{kg}{m^{3}}\) \\ \(c_{p}\) & 5.0 & \(\frac{J}{K\times kg}\) \\ \(k_{th}\) & 0.10 & \(\frac{W}{m\times K}\) \\ \(\mu_{o}\) & 100 & \(\frac{nm}{V\times ns}\) \\ \(E^{\mu}_{ac}\) & 250 & \(meV\) \\ \(M\) & 1000 & \(\frac{nm}{J\times ns}\) \\ \(\kappa\) & 1.0 & \(\frac{eV}{nm^{2}}\) \\ \end{tabular} \end{table} Table 1: Thermal and electrical material properties and phase field model parameters used in the electroformation simulation. These parameters appear in our previous experimental and computational work on memristive thin films based on IMT.

The initial absolute temperature is uniformly distributed between 300 K and 301 K over the entire thin film. We have found that inhomogeneity in these two initial conditions has a profound first-order effect on CF affinity of formation and resultant nanostructure, due to the stochastic nature of CF nucleation and dynamical evolution. The stochastic nature is due to not knowing precisely how locally favorable thermodynamic conditions catalyze CF nucleation and evolution. This is consistent with experimental data and is the subject of future research with our phase field method. To initiate electroformation, an electric potential on the top and bottom edges of our resistive thin film shown in Figure 1 is established at 1 V and 0 V, respectively. An absolute temperature boundary condition of 300 K is similarly established for the top and bottom edges. A Dirichlet boundary condition is reasonable given the transient duration of electroformation, and is consistent with our previous experimental work on IMT-based evaluation structures embedded in thermal vias [21]. Periodic boundary conditions for normalized charge density, absolute temperature, and electric potential are imposed on the left and right edges of our resistive thin film. A transient simulation is run to reach the electroformed steady-state. The steady-state solutions for \(V(\vec{r},t)\) and \(T(\vec{r},t)\) are also produced. The electroformed steady-state is reached in approximately 100 ns real time for the present set of initial conditions, boundary conditions, and material properties, consistent with our isothermal phase field method [10]. To understand stochastic CF nucleation and dynamical evolution with our phase field method, consider the vector electric field immediately following the application of the electric potential, illustrated by Figure 3. Here we show the electric field magnitude, where \(|E(\vec{r},t)|=|\nabla V(\vec{r},t)|\). We assume this initial electric field is established instantaneously, since the charge transport rate of our mobility model, Equation 8, is small compared to this transient event [34][35]. It is evident that inhomogeneity in the initial condition of normalized charge density, illustrated by Figure 1, produces many clusters of increased electric field intensity, and these clusters are uniformly distributed throughout our thin film. It is these clusters that are energetically favorable to CF nucleation due to the increased local self-heating produced by a concentrated electric field intensity.
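Before following the evolution further, note that the stochastic character emphasized above enters entirely through these two initial fields; a minimal generator (the bimodal sampling scheme, well locations, and noise amplitude are our assumptions, since the text does not specify the generator) is:

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # the seed is the only source of "randomness"

# 50 nm x 10 nm film on a 1-nm grid, cf. Figure 1.
ny, nx = 10, 50

# Initial charge density concentrated around the two spinodal minima of Figure 2
# (well locations ~0.1 and ~0.9 are read off qualitatively, not from the paper).
which_well = rng.random((ny, nx)) < 0.5
c0 = np.where(which_well, 0.1, 0.9) + 0.02 * rng.standard_normal((ny, nx))

# Initial temperature uniformly distributed between 300 K and 301 K.
T0 = rng.uniform(300.0, 301.0, size=(ny, nx))

print("c0 in [%.2f, %.2f], T0 in [%.1f, %.1f] K"
      % (c0.min(), c0.max(), T0.min(), T0.max()))
```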
As local temperature subsequently increases, these various electric field concentration clusters undergo insulator-metal transition to the metallic state, now yielding clusters of increased electrical conductivity. Nevertheless, at this particular instant, these clusters remain distinct and disconnected, and there is no bulk electrical conduction between the top and bottom edges. Figure 4 shows the absolute temperature, \(T(\vec{r},t)\), shortly after initiation of electroformation, approximately 10 ns in real time for our chosen material properties. In concurrence with the production of disconnected clusters of concentrated electric field is an associated production of isolated clusters that have reached sufficient absolute temperature for insulator-metal transition to a metallic state, approximately 1300 K for our free-energy density defined by Figure 2. Using a variational argument, we may make the following two assertions. First, since these distinct and disconnected clusters now electrically conduct, it is reasonable to conclude that they may longitudinally align to the electric field, \(E(\vec{r},t)\), between the top edge and bottom edge of the thin film of Figure 1, so that disconnected clusters may eventually merge into a continuous thread and produce a CF. Second, the foundation of our phase field method is that part of the total thermodynamic energy of the system is manifested as a diffuse interface between a conducting, metallic state and an insulating thin film, and energetically favors a minimal interface enclosing these conducting clusters in stationary equilibrium.

Figure 3: Magnitude of the normalized electric field immediately following application of the electroforming electric potential at \(t=0\) s, where \(|E(\vec{r},t)|=|\nabla V(\vec{r},t)|\). We assume this initial electric field is established instantaneously, since the charge transport rate of our mobility model, Equation 8, is small compared to this transient event. Inhomogeneity in the initial condition of normalized charge density, illustrated by Figure 1, produces many concentrated clusters of increased electric field intensity, and these regions are uniformly distributed throughout the structure. These field-enhanced thermally excited random clusters are energetically favorable to CF nucleation.

Figure 4: Absolute temperature, \(T(\vec{r},t)\), shortly after initiation of electroformation, approximately 10 ns in real time for our chosen material properties. It is evident that electrothermal effects are largely inhomogeneous at the outset and that substantial localized Joule heating occurs in correspondence with the initially high local electric field intensity illustrated by Figure 3.

In support of this variational argument, consider Figure 5, illustrating normalized electrical conductivity, \(\sigma(\vec{r},c,T)\), at three different times. Panel (a) shows \(\sigma(\vec{r},c,T)\) at 10 ns, the same time shown by Figures 3 and 4, illustrating that regions of increased electrical conductivity already exhibit characteristic clustering aligned with regions of concentrated Joule heating. Panel (b) shows \(\sigma(\vec{r},c,T)\) at 50 ns, illustrating that, for our model parameters, substantial insulator-metal phase transition has occurred, as shown by spontaneously created clusters in the metallic state. Panel (c) shows \(\sigma(\vec{r},c,T)\) at 100 ns, the thermodynamic steady-state, representing an electroformed thin film.
Consistent with our variational argument is the spontaneous production of many conducting filaments, embedded within the as-fabricated thin film. In a sequence of mutually coupled phenomena, CF nucleation is initiated from appropriate, stochastic, initial conditions, creating isolated clusters in the metallic, conducting, phase. Because it is energetically favorable to reduce the area of interfaces, some of these isolated conducting clusters eventually merge to create continuous conducting filaments. The conducting filaments produced by our electrothermal phase field method exhibit nanostructure consistent with experimental data [15][16][17]. Not all merged clusters create continuous conducting filaments from the top edge to the bottom edge. Instead, some conducting clusters grow in relative size but nevertheless remain disconnected and unable to contribute to bulk conduction. The two solid arrows shown in Figure 5 illustrate that, for this particular set of initial conditions and material properties, two complete conducting filaments have formed between the top and bottom edges, each now appropriately electroformed to contribute to bulk resistive switching behavior. In this letter, we have extended our previous isothermal phase field method studying electroformation by self-consistently coupling it to a drift-diffusion transport model of Joule heating [18]. Our electrothermal phase field method produced spontaneous nucleation and growth of multiple conducting filaments embedded within an as-fabricated thin film comparable to a range of resistive switches, offering an alternative computational formulation based on metastable atomic-scale states. Our simulation results agree well with data from a range of resistive switches, correctly reproducing experimental conducting filament nanostructure [15][16][17].

Figure 5: Electrical conductivity, \(\sigma(\vec{r},c,T)\), at 10 ns, 50 ns and 100 ns in real time. In a sequence of mutually coupled phenomena, CF nucleation is initiated from appropriate initial conditions and material properties, creating isolated clusters in the metallic, conducting, phase. Because it is energetically favorable, some of these isolated clusters eventually merge to create conducting filaments. The two solid arrows on panel (c) point out two continuous conducting filaments traversing the top and bottom edges, each now appropriately electroformed to contribute to bulk resistive switching behavior.
2308.04114
Collective Human Opinions in Semantic Textual Similarity
Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard. Averaging masks the true distribution of human opinions on examples of low agreement, and prevents models from capturing the semantic vagueness that the individual ratings represent. In this work, we introduce USTS, the first Uncertainty-aware STS dataset with ~15,000 Chinese sentence pairs and 150,000 labels, to study collective human opinions in STS. Analysis reveals that neither a scalar nor a single Gaussian fits a set of observed judgements adequately. We further show that current STS models cannot capture the variance caused by human disagreement on individual instances, but rather reflect the predictive confidence over the aggregate dataset.
Yuxia Wang, Shimin Tao, Ning Xie, Hao Yang, Timothy Baldwin, Karin Verspoor
2023-08-08T08:00:52Z
http://arxiv.org/abs/2308.04114v1
# Collective Human Opinions in Semantic Textual Similarity ###### Abstract Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard. Averaging masks the true distribution of human opinions on examples of low agreement, and prevents models from capturing the semantic vagueness that the individual ratings represent. In this work, we introduce USTS, the first Uncertainty-aware **STS** dataset with \(\sim\)15,000 Chinese sentence pairs and 150,000 labels, to study collective human opinions in STS. Analysis reveals that neither a scalar nor a single Gaussian fits a set of observed judgements adequately. We further show that current STS models cannot capture the variance caused by human disagreement on individual instances, but rather reflect the predictive confidence over the aggregate dataset. ## 1 Introduction Semantic textual similarity (STS) is a fundamental natural language understanding (NLU) task, involving the prediction of the degree of semantic equivalence between two pieces of text (S1, S2). STS has been approached in various ways, ranging from early efforts using string- or knowledge-based measures and count-based co-occurrence models (Resnik, 1999; Barron-Cedeno et al., 2010; Matveeva et al., 2005), to modern neural networks. Broadly speaking, the goal of the STS task is to train models to make a similarity assessment that matches what a human would make. Gold-standard scores are typically assigned by asking multiple raters to label a pair of sentences and then taking the average (Agirre et al., 2012, 2013, 2014, 2015, 2016; Marelli et al., 2014; Sogancioglu et al., 2017; Wang et al., 2018). The underlying assumption here is that there is a single "true" similarity score between S1 and S2, and that this label can be approximated by averaging multiple -- possibly noisy -- human ratings. While this assumption might be reasonable in settings such as educational testing with well-defined knowledge or norms (Trask and Trask, 1999), it is not the case for more subjective NLU tasks. Pavlick and Kwiatkowski (2019) show that in natural language inference (NLI), disagreements often persist even if more ratings are collected or when the amount of context provided to raters is increased. High disagreement has been observed in a number of existing NLI datasets (Nie et al., 2020). In STS, concerns about inconsistent judgements have been raised, particularly for difficult boundary cases in complex domains, where even expert annotators can disagree about the "true" label (Wang et al., 2020; Olmin and Lindsten, 2022). Identifying and discarding "noisy" labels during training can reduce generalisation error (Wang et al., 2022, 2020). We reexamine whether the disagreement observed among raters should be attributed to "noise" and resolved by dismissal, or should rather be treated as an inherent property of the STS labels. Specifically, our primary contributions are: 1. We develop USTS, the first Uncertainty-aware **STS** dataset with a total of \(\sim\)15,000 Chinese sentence pairs and 150,000 labels. We study the human assessments and investigate how best to integrate them into a gold label across varying degrees of observed human disagreement. 2.
We show that state-of-the-art STS models cannot capture disagreement when trained using a single averaged rating, and argue that STS evaluation should incentivise models to predict distributions over human judgements, especially for cases of low agreement. 3. We discuss the practicalities of transferring labels across languages in building a multilingual STS corpus, and present evidence to suggest that this may be problematic in the continuous labelling space. ## 2 Background ### Semantic Textual Similarity Task **Data Collection and Annotation:** As STS requires a sentence pair, to construct a dataset, sentence pairs should ideally be sampled to populate the spectrum of differing degrees of semantic equivalence, which is a huge challenge. If pairs of sentences are taken at random, the vast majority would be totally unrelated, and only a very small fraction would have some degree of semantic equivalence (Agirre et al., 2012). Accordingly, previous work has either resorted to string similarity metrics (e.g. edit distance or bag-of-word overlap) (Agirre et al., 2013, 2014, 2015, 2016; Sogancioglu et al., 2017; Wang et al., 2018), or reused existing datasets from tasks related to STS, such as paraphrasing based on news/video descriptions (Agirre et al., 2012) and NLI (Marelli et al., 2014). In terms of annotation, for general text (e.g. news, glosses, or image descriptions), it has mostly been performed using crowdsourcing via platforms such as Amazon Mechanical Turk with five crowd workers (Cer et al., 2017). For knowledge-rich domains such as clinical and biomedical text, on the other hand, a smaller number of expert annotators has been used, such as two clinical experts for Med-STS (Wang et al., 2018). Raters are asked to assess similarity independently on the basis of semantic equivalence using a continuous value in the range \([0,5]\). Then a gold label is computed by averaging these human ratings. **Is averaging appropriate?** Averaging has been the standard approach to generating gold labels since Lee et al. (2005). However, this approach relies on the assumption that _there is a well-defined gold-standard interpretation + score, and that any variance in independent ratings is arbitrary rather than due to systematic differences in interpretation_. An example of this effect can be seen in case No. 1 in Table 1. In practice, however, high levels of disagreement can be observed among annotators in different domains.2 Footnote 2: The individual annotations for STS-B are not available, so we collected new ratings from 15 PhD NLPers. _bert-base_ fine-tuned on the STS-B training data (\(r\)=0.91) is used for prediction, the same as the one used in Section 3.1.1 for selection. In such cases, a simple average fails to capture the latent distribution of human opinions/interpretations, and masks the uncertain nature of subjective assessments. With Nos. 2 and 3 in Table 1, e.g., the average scores \(\mu\) of 1.7 and 2.4 do not convey the fact that the ratings vary substantially (\(\sigma>1.0\)). While the integrated score may reflect the average opinion, it neither captures the majority viewpoint nor exposes the inherent disagreements among raters. Put differently, **not all average scores of a given value convey the same information**. Consider three scenarios that all average to 3.0: (3,3,3,3,3)/5, (1,3.5,3.5,3.5,3.5)/5, and (2,4,2,4,3)/5. The inherent level of human agreement varies greatly in these three cases. Looking at the system predictions, the model prediction of 3.5 for No.
1 in Table 1 is clearly incorrect, as it lies well outside the (tight) range of human annotations in the range \([4.5,5.0]\).

\begin{table} [Table 1: example sentence pairs Nos. 1-3 with individual ratings, their mean \(\mu\) and standard deviation \(\sigma\), and model predictions. Only the header of No. 1 (Low Human Disagreement) and its English sentence S1, _Kenya Supreme Court upholds election result._, are recoverable; S2 (a Chinese sentence) and the remaining rows are garbled in the source.] \end{table}

While the model prediction of 4.3 for No. 2 also lies outside the annotation range of \([0.0,3.5]\), it is closer to an extremum, and there is much lower agreement here, suggesting that the prediction is better than that for No. 1. No. 3 seems to be better again, as the model prediction of 0.6 is both (just) within the annotation range of \([0.5,4.5]\) and closer to the average for a similarly low-agreement instance. Based on the standard evaluation methodology in STS research of calculating the Pearson correlation over the mean rating, however, No. 1 would likely be assessed as being a more accurate prediction than Nos.
2 or 3, based solely on how close the scalar prediction is to the annotator mean. A more nuanced evaluation should take into consideration the relative distribution of annotator scores, and, assuming a model which outputs a score distribution rather than a simple scalar, the relative fit between the two. We return to explore this question in Section 5. Based on these observations, we firstly study _how to aggregate a collection of ratings into a representation which better reflects the ground truth_, and further go on to consider evaluation metrics which _measure the fit between the distribution of annotations and the score distribution of a given model_. ### Human Disagreements in Annotations **Individual Annotation Uncertainty** Past discussions of disagreement on STS have mostly focused on uncertainty stemming from an individual annotator and the noisiness of the data collection process. They tend to attribute an outlier label to "inattentive" raters. This has led to the design of annotation processes to control the reliability of individual ratings and achieve high inter-annotator agreement (Wang et al., 2018). However, disagreements persist. **Inherent Disagreements Among Humans** Studies in NLI have demonstrated that disagreements among annotations are reproducible signals (Pavlick and Kwiatkowski, 2019). It has also been acknowledged that disagreement is an intrinsic property of subjective tasks (Nie et al., 2020; Wang et al., 2022c; Plank, 2022). Despite this, most work in STS has still attributed high levels of disagreement to poor-quality data (Wang et al., 2022a), and has focused on reducing the uncertainty in STS modelling and providing reliable predictions (Wang et al., 2022b). Little attention has been paid to analysing the inherent underlying variation in STS annotations on a continuous rating scale, or how to fit the collective human opinions to a mathematical representation. Does a real value, a Gaussian distribution, a Gaussian mixture model, or a more complicated distribution most effectively approximate the latent truth? The shortage of individual annotator labels in STS has been a critical obstacle to in-depth analysis of disagreements among human judgements, since only the averaged similarity scores are available to the public for almost all STS datasets, apart from two small-scale biomedical benchmarks with 0.1k and 1k examples, respectively. To this end, we first construct a large-scale STS corpus in this work with 4-19 annotators for each of almost 15k sentence pairs. We focus on analysing disagreements among annotators instead of the individual uncertainty, presuming that each individual rater is attentive under a quality-controlled annotation process. ### Chinese STS Corpus Most progress on STS, driven by large-scale investment in datasets and advances in pre-training, has centred around English.3 Efforts to build comparable datasets for other languages have largely focused on (automatically) translating existing English STS datasets (Huertas-Garcia et al., 2021; Yang et al., 2019). However, this approach may come with biases (see Section 6). Our dataset is generated from Chinese rather than English sources, and we employ native Chinese speakers as annotators, producing the first large-scale Chinese STS dataset.4 Footnote 3: English STS models have achieved \(r=0.91\), while for Chinese the best results are markedly lower at \(r=0.82\) for the STS-B test set.
Footnote 4: Apart from translated STS-B, there are only two Chinese corpora related to STS: BQ (Chen et al., 2018) and LCQMC (Liu et al., 2018) for paraphrase detection (binary). ## 3 Data Collection We collected STS judgements from multiple annotators to estimate the distribution, for sentence pairs drawn from three multilingual sources. Sections 3.1 and 3.2 provide details of the collection, along with challenges in the annotation and how we ensure data quality. All data and annotations are available at [https://github.com/yuxiaw/USTS](https://github.com/yuxiaw/USTS). ### Data Sources The first step is to gather sentence pairs. In response to rapid rises in STS performance and insights into the shortcomings of current models and limitations of existing datasets, we create a new corpus that not only incorporates inherent human disagreements in the gold label representation, but also includes more challenging examples, on which state-of-the-art STS models tend to make wrong predictions. **Common errors:** our analysis over general STS-B and clinical N2C2-STS exposes three major error types. More than half of the errors lie in subsets where human agreement is low: high uncertainty in STS labelling leads to pervasive disagreement among human judgements. Another is attributed to the lack of reasoning, as Nos. 1 and 3 in Table 1 reveal: (1) matching an abbreviation with its full name, e.g. _Supreme Court_ to _SC_; and (2) building connections between descriptions that are lexically divergent but semantically related, e.g. _carrot_ and _orange food_. The other is the failure to distinguish pairs with high lexical overlap but opposite meaning, due to word substitution or reordering. However, these types of examples account for only a tiny proportion of existing test sets and have minimal impact on results. Thus, our goal is to gather more cases of high ambiguity, requiring reasoning abilities and more semantic attention in annotation. As our data sources, we use sentences from TED talks, and sentence pairs from NLI and paraphrase corpora, as detailed below. The combined dataset contains 14,951 pairs, over which we perform basic data cleaning to remove repeated punctuation marks (e.g. multiple quotation marks, dashes, or blank spaces). #### 3.1.1 TED-X Compared to written texts such as essays, spoken texts are more spontaneous and typically less formal [13]. Without any contextual cues such as prosody or multi-modality to help interpret utterances, readers may have trouble understanding, especially for single sentences out of context [10], resulting in high uncertainty in labelling. We therefore choose TED speech transcriptions to gather high-ambiguity examples. **Selecting Single Sentences:** TED2020 contains a crawl of nearly 4000 TED and TED-X transcripts, translated into more than 100 languages. Sentences are aligned to create a parallel corpus [11]. We extracted 157,047 sentences for zh-cn with character lengths between 20 and 100, and aligned them with the other 8 languages: en, de, es, fr, it, ja, ko, ru, and traditional zh. **Pairing by Retrieval:** Sentence pairs generated by random sampling are prone to be semantically distant. To avoid pairs with similarity scores overwhelmingly distributed in the range \([0,1]\), we use embedding-based retrieval.
For each sentence, we search for the two most similar sentences based on _faiss_ [12] using the SimCSE sentence embedding of _sup-simcse-bert-base-uncased_ [1], obtaining 155,659 pairs after deduplication.5 That is, we use (approximate) cosine similarity based on contextualised sentence embeddings instead of the surface string-based measures of previous work to sample sentence pairs. This is expected to find pairs with a higher level of semantic overlap, rather than some minimal level of lexical match (a minimal code sketch of this retrieval step is given below). Footnote 5: Note that we base this on the English versions of each sentence, due to the higher availability of pre-trained language models and sentence encoders for English. **Selecting Low-agreement Examples:** To select what we expect to be examples with low agreement, we leverage the observation that high-variance examples tend to be associated with low human agreement [23]. That is, we keep pairs with large predictive variance, and predictions that differ greatly between two agents. We use a _bert-base-uncased_-based STS model fine-tuned on the STS-B training data for prediction. We obtain the mean \(\mu\) and standard deviation \(\sigma\) for each example from sub-networks based on MC-Dropout, where \(\mu\) is re-scaled to the same magnitude \([0,1]\) as the normalised \(L_{2}\) using the SimCSE embedding \(\mathbf{x}\), and \(\mathit{len}_{word}(S_{en})\) is the word-level length of the English sentence. We then select instances which satisfy three criteria: (1) \(|\frac{1}{5}\mu-(1.0-L_{2}(\mathbf{x}_{1},\mathbf{x}_{2}))|\geq 0.25\); (2) \(\sigma\geq 0.16\); and (3) \(\mathit{len}_{word}(S_{en})\geq 12\).6 This results in 9,462 sentence pairs. Footnote 6: We tuned these threshold values empirically, until the majority of sampled instances fell into the range \([1,3]\): the score interval most associated with ambiguous instances. #### 3.1.2 XNLI Though sentence pairs from SICK-R and UNLI [13] are annotated with _entailment_ and _contradiction_ relations and also continuous labels, they do not specifically address semantic equivalence: the scores in SICK-R reflect semantic relatedness rather than similarity, and in UNLI the annotators were asked to estimate how likely the situation described in the hypothesis sentence would be true given the premise. We use sentence pairs from Cross-lingual NLI (XNLI; Conneau et al., 2018) where there is label disagreement (which we hypothesise reflects ambiguity), noting that the dataset was annotated for textual entailment in en, and translated into 14 languages: fr, es, de, el, bg, ru, tr, ar, vi, th, zh, hi, sw and ur. From the development (2,490) and test sets (5,010), we select examples where there is not full annotation agreement among the five annotators, resulting in 3,259 sentence pairs (1,097 dev and 2,162 test). #### 3.1.3 PAWS-X We sample 2,230 sentence pairs from PAWS-X (Yang et al., 2019) which are not paraphrases but have high lexical overlap. Note that this is an extension of PAWS (Zhang et al., 2019) to include six typologically-diverse languages: fr, es, de, zh, ja and ko. ### Annotation We employ four professional human annotators (all Chinese native speakers) to assign labels to the 14,951 Chinese sentence pairs in the first round, and an additional 15 annotators to provide additional annotations for 6,051 examples of low human agreement (as detailed below). **Annotation Guideline:** Table 2 shows the 6-point ordinal similarity scale we use, plus definitions.
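Returning to the pairing step of Section 3.1.1 (the sketch promised there), the retrieval pattern with _faiss_ looks roughly as follows; random unit vectors stand in for the SimCSE embeddings, and the exact index type is our choice rather than one stated in the paper.

```python
import numpy as np
import faiss

rng = np.random.default_rng(0)
n, d = 10_000, 768                     # corpus size and embedding dim (stand-ins)
emb = rng.standard_normal((n, d)).astype("float32")
faiss.normalize_L2(emb)                # unit norm -> inner product = cosine similarity

index = faiss.IndexFlatIP(d)           # exact inner-product index
index.add(emb)

# For each sentence, retrieve itself plus its two most similar neighbours.
sims, ids = index.search(emb, 3)

# Form (i, j) pairs, dropping self-matches and deduplicating unordered pairs.
pairs = {tuple(sorted((i, j))) for i, row in enumerate(ids) for j in row if j != i}
print(len(pairs), "candidate sentence pairs")
```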
**Quality Control:** It is difficult to ensure that any divergences in annotations are due to task subjectivity or language ambiguity rather than inattentiveness. We attempt to achieve this by not using crowdsourced workers, but instead training up in-house professional annotators with expert-level knowledge in Linguistics, and significant experience in data labelling. They were first required to study the annotation guidelines and exemplars, and then asked to annotate up to 15 high-agreement instances pre-selected from the STS-B training set. For each example, the annotation is regarded as correct when the difference between the assigned and gold-standard label is \(<\)0.5. Failing this, the annotator is provided with the correct label and asked to annotate another instance. This procedure was iterated for three rounds to familiarise the annotators with the task. On completion of the training, we only retain annotators who achieve a cumulative accuracy of \(\geq\)75%.

\begin{table} \begin{tabular}{c l} \hline \hline Score & Description \\ \hline 5 & The two sentences are completely equivalent, as they mean the same thing. \\ 4 & The two sentences are mostly equivalent, but some unimportant details differ. \\ 3 & The two sentences are roughly equivalent, but some important information differs or is missing. \\ 2 & The two sentences are not equivalent, but share some details. \\ 1 & The two sentences are not equivalent, but are on the same topic. \\ 0 & The two sentences are completely dissimilar. \\ \hline \hline \end{tabular} \end{table} Table 2: Similarity scores with descriptions (Agirre et al., 2013).

### Analysis of First-round Annotations **Dataset breakdown:** Table 3 shows the breakdown of instances across the three component sets, as well as the combined USTS dataset. In terms of average length (_zh_ character level), XNLI is the shortest on average (esp. for S2, the hypothesis), followed by TED-X and PAWS-X. **Inter-annotator agreement:** The average Pearson (\(r\)) and Spearman (\(\rho\)) correlations between the six pairings of annotators, and the standard deviation (\(\sigma\)) among the four annotators, are \(r=0.74\), \(\rho=0.68\), and \(\sigma=0.47\). These numbers reflect the fact that there is high disagreement for a substantial number of instances in USTS, in line with the sampling criteria used to construct the dataset. As such, aggregating ratings by _averaging_ is not able to capture the true nature of much of the data. Two questions naturally arise: (1) at what level of variance does averaging noticeably bias the gold label? and (2) how should annotations be aggregated to fit the latent truth most closely? **High vs. low agreement:** Figure 1 shows the first-round variance distribution, wherein \(\sigma\) ranges from 0.0 to 1.5, with 8,900 pairs being less than 0.5. It indicates that on \(\sim\)60% of examples, the assessments of the four annotators fluctuate around the average score in a smaller range (0.0-0.5 on average), while the judgements of the remaining 6,051 pairs are spread out over a wider range (0.5-1.5). We sample 100 examples and find that, when \(\sigma\leq\) 0.5, generally more than 10 out of 15 annotators highly agree with each other. This basically satisfies the assumption that makes _averaging_ less biased: _individual ratings do not vary significantly_ (Lee et al., 2005). Meanwhile, fewer than half of the annotators reach consensus when \(\sigma>\) 0.7, and fewer than 5 when \(\sigma\geq\) 1.0 (referring back to our earlier examples in Table 1).
Thus, we heuristically regard \(\sigma\)=0.5 as a tipping point for distinguishing examples of low (\(\sigma>0.5\)) and high agreement (\(\sigma\leq 0.5\)). Accordingly, we split the data into two subsets, reflecting the different levels of disagreement: cases where \(\sigma\leq 0.5\) are _uncontroversial_ (**USTS-U**); and cases where \(\sigma>0.5\) are _contentious_ (**USTS-C**). **Does the model agree with the annotators?** We take _bert-base-chinese_ and fine-tune it on the Chinese STS-B training data7 with a learning rate of 2e-5 for 3 epochs, obtaining \(r{=}0.82\)/\(\rho{=}0.82\) on the validation set, and \(r{=}0.80\)/\(\rho{=}0.79\) on the test set; we refer to this model as "STSb-zh". We compute \(r\) and \(\rho\) between the model prediction and each of the four annotations, and present the average results in Table 3. Footnote 7: Chinese STS-B has 5,231, 1,458 and 1,361 examples for training, validation and test, respectively; see [https://github.com/pluto-junzeng/CNSD](https://github.com/pluto-junzeng/CNSD) Both \(r\) and \(\rho\) across TED-X, XNLI, and PAWS-X are below 0.5, with PAWS-X being particularly bad, with half of the pairs predicted to be in the range \([4,5]\). Predictions over USTS are primarily concentrated in the range \([1,3]\), whereas the majority of annotations are in the range \([0,2]\). This suggests it is non-trivial for current models to perform well without training on USTS, and that models tend to over-assign high scores (Figure 1: predictive \(\sigma\) is \(<0.3\) vs. annotator \(\hat{\sigma}=0.47\)). However, it also leads us to consider whether the distribution estimated based on the four annotators is adequate to generate a gold standard. To this end, we investigate the question _How does the collective distribution vary when increasing the number of annotators, on cases of uncontroversial USTS-U and contentious USTS-C?_ ### Collective Distribution Analysis We measure the distributional variation through (1) fluctuation of \(\mu\) and \(\sigma\); and (2) distributional divergence between first-round and second-round annotators. **Study design:** we sample 100 instances from USTS-U and 100 from USTS-C, with a ratio of 4:3:3 from TED-X, XNLI, and PAWS-X, resp. We then had another 15 qualified Chinese native annotators score the 200 Chinese sentence pairs. Formally, the annotation matrix \(A^{N\times M}\) represents a data set with \(N\) examples annotated by \(M\) annotators. In our setting, \(N{=}100\) and \(M{=}19\) for both USTS-U and USTS-C. We capture the variation of \(\mu\) and \(\sigma\) over the 100 examples by averaging \(\mathbf{\mu}\)=mean(A[:,:i], axis=1) and \(\mathbf{\sigma}\)=std(A[:,:i], axis=1), where \(i\) ranges from 4 to 19, incorporating the new ratings incrementally.

Figure 1: Standard deviation distribution of the four first-stage annotators (left) and model predictions (right).
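A compact sketch of this incremental analysis, with a synthetic annotation matrix standing in for \(A^{N\times M}\) and the closed-form KL divergence between univariate Gaussians used in the next paragraph, is:

```python
import numpy as np

def gaussian_kl(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ), elementwise closed form."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2.0 * s2**2) - 0.5

rng = np.random.default_rng(0)
N, M = 100, 19
# Synthetic stand-in for the annotation matrix A: a latent score per pair plus
# per-rater noise, clipped to the [0, 5] similarity scale.
A = np.clip(rng.normal(2.5, 0.8, (N, 1)) + rng.normal(0.0, 0.6, (N, M)), 0, 5)

eps = 1e-9                                               # guard against zero std
mu1, s1 = A[:, :4].mean(axis=1), A[:, :4].std(axis=1) + eps   # first-round p
for j in (2, 5, 10, 15):
    new = A[:, 4:4 + j]                                  # j second-round raters
    mu2, s2 = new.mean(axis=1), new.std(axis=1) + eps
    print("j = %2d  mean KL(p||q) = %.3f"
          % (j, gaussian_kl(mu1, s1, mu2, s2).mean()))
```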
\begin{table} \begin{tabular}{l c c c c} \hline \hline Source & TED-X & XNLI & PAWS-X & USTS \\ \hline **Amount** & & & & \\ raw & 9462 & 3259 & 2230 & 14951 \\ \(\sigma>0.5\) & 3458 & 1597 & 996 & 6051 \\ ratio & 36.5\% & 49.0\% & 44.7\% & 40.5\% \\ \hline **Length** & & & & \\ S1 & 39.0 & 34.0 & 43.5 & 38.6 \\ S2 & 39.2 & 16.9 & 43.3 & 34.9 \\ pair & 39.1 & 25.4 & 43.4 & 36.8 \\ \hline **Raters** & & & & \\ \(r\) & 0.48 & 0.61 & 0.49 & 0.74 \\ \(\rho\) & 0.50 & 0.58 & 0.41 & 0.68 \\ \(\sigma\) & 0.44 & 0.52 & 0.49 & 0.47 \\ \hline **STSb-zh** & & & & \\ \(r\) & 0.41 & 0.48 & 0.32 & 0.70 \\ \(\rho\) & 0.43 & 0.50 & 0.18 & 0.63 \\ \(\sigma\) & 0.21 & 0.22 & 0.19 & 0.21 \\ \hline \hline \end{tabular} \end{table} Table 3: Details of the USTS dataset. “\(r\)” = Pearson’s correlation; “\(\rho\)” = Spearman’s rank correlation; and “\(\sigma\)” = standard deviation.

The collective distribution for the first-round annotation A[:, :4] is denoted as \(\mathbf{p}{=}\mathcal{N}(\mathbf{\mu_{1}},\mathbf{\sigma_{1}})\), and \(\mathbf{q}{=}\mathcal{N}(\mathbf{\mu_{2}},\mathbf{\sigma_{2}})\) for A[:, 4:4+\(j\)] as we add new annotators. We observe the KL-Divergence\((p\|q)\) as we increase \(j\). **Hypothesis:** We hypothesise that the distribution will remain stable regardless of the number of annotators on the uncontroversial USTS-U, but change substantially on the contentious USTS-C. **Results:** To plot the values of \(\mu\) and \(\sigma\) in the same figure, we re-scale \(\mu\) by subtracting \(0.9\) in Figure 2. We find that with an increased number of annotators, \(\mu\) of USTS-U remains stable with minor perturbations, while \(\mu\) of USTS-C declines and steadily flattens out. On USTS-U, \(\sigma\) ascends slowly and converges to 0.3. This matches our expectation that increasing the number of annotators will result in more variance. Yet it still varies in the range \([0.1,0.3]\) due to the high certainty of the uncontroversial examples. In contrast, \(\sigma\) of USTS-C stays consistently high, indicating that there are still strong disagreements even with more annotators, because of the inherent ambiguity of contentious cases. It fluctuates in a larger range of \([0.6,1.0]\), with a steeper drop. That is, combining more ratings results in large variations in \(\mu\) and \(\sigma\) for USTS-C, but less so for USTS-U. Therefore, the distribution obtained from four annotators is adequate for uncontroversial examples, but insufficient for USTS-C: more annotators are needed to gain a representative distribution. **How many annotators should be employed?** In Figure 2, \(\mu\) and \(\sigma\) of USTS-C vary substantially before \(M{=}15\), then stabilise. The trend of the KL-Divergence in Table 4 demonstrates the same phenomenon: KL declines as the number of annotators increases, with a relatively small and stable divergence when \(j>10\). Combining these two observations, we employ 15 extra annotators to score the 6,051 cases of USTS-C in the second-round annotation. We compare the \(\sigma\) distributions of the first-round (in green) and second-round (in red) annotations in Figure 3 (top). The shape of the \(\sigma\) distributions is very similar, but the green bars (\(\sigma_{1}\)) are shifted to the right by 0.3 or so with respect to the red bars (\(\sigma_{2}\)), leading to the average \(\hat{\sigma_{2}}\)=0.42 \(\ll\hat{\sigma_{1}}\)=0.76. This indicates that the second-round distribution is more stable, with less overall variance. Nonetheless, 87% of pairs exceed the average deviation of 0.27 for USTS-U, reflecting the higher number of disagreements.
Additionally, the distribution of \(\mu_{1}-\mu_{2}\) in Figure 3 (bottom) is close to a normal distribution, within the range of \([-1,2]\). The majority are to the right of zero, indicating that annotators in the first round tend to assign higher scores than in the second, resulting in a larger \(\mu\). ### The Resulting Corpus **USTS-U vs. USTS-C** The number of examples in USTS-U and USTS-C is 8,900 and 6,051, respectively, with largely comparable \(\mu\) ranges of \([0,5]\) and \([0.2,4.4]\) (see Table 5). USTS-U has a much smaller \(\hat{\sigma}\) of 0.27 than USTS-C (\(\hat{\sigma}\)\(=\)\(0.56\)), consistent with their inherent uncertainty levels. Analogously, USTS-U has a higher correlation of \(r\)=\(0.91\) among annotators, compared to \(r\)=\(0.72\) for USTS-C. ## 4 Aggregation of Human Judgements For the high-agreement cases of USTS-U, gold labels can be approximated by aggregating multiple annotations into either a scalar or a single Gaussian distribution. However, for low-agreement examples, how to aggregate the human ratings remains an open question. **Are all distributions unimodal Gaussian?** Though most distributions of human assessments can be assumed to be sampled from an underlying (generative) distribution defined by a single Gaussian, we observed judgements that a unimodal Gaussian struggles to fit. The annotations of examples Nos. 2 and 3 in Figure 4 exhibit clear bi- or tri-modal distributions. How often, then, and to what extent do multimodal distributions fit better? We answer this question by fitting human judgements using a Gaussian Mixture Model (GMM), where the number of components is selected during training. This means the model can still choose to fit the distribution with only one Gaussian component where appropriate. If additional components yield a better fit to the judgements, i.e. a larger log likelihood is observed than when using a unimodal distribution, we consider the human judgements to exhibit a multimodal distribution. **Experiments and Results** We randomly split USTS-C into a training (4,051) and test set (2,000), and use the training data to fit a GMM with: (1) one component; or (2) the optimal number of components \(k\). We compute the log likelihood assigned to each example in the test set in Figure 5 (left), with the unimodal results as the \(x\)-axis and the multimodal Gaussian as the \(y\)-axis. The majority of points fall on or above the diagonal line (\(y=x\)), with a multimodal distribution outperforming a unimodal Gaussian distribution for 83% of instances. However, does this suggest that most examples exhibit multiple peaks? **Effective components:** We count the effective components for each sentence pair based on the weight assigned by the GMM in the form of a probability for each component. We see that, for 11.3% of pairs, there is a nontrivial second component (weight \(\geq 0.2\)), and a nontrivial third component on only 3 pairs. Rarely are there more than three components with significant weights (see Table 6). Moreover, we find that the weight of the dominant component mostly (87%) lies above 0.8, and that the weight of the second effective component scatters across the range 0.25-0.5 (Figure 5, right).

Figure 4: Human judgement distributions of the examples in Table 1, with uni-, tri- and bi-modal Gaussians, respectively. The dotted black line shows the model fit when using a single Gaussian; the shaded curve shows the model learned when allowed to fit \(k\) components of a GMM.

Figure 5: _Left:_ Log likelihood of test data under the single-component Gaussian (\(x\)-axis) vs. the \(k\)-component GMM (\(y\)-axis). The darker the area, the more the examples concentrate. _Right:_ Weights of the top-2 effective components.
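A per-item version of this fit, with the number of components chosen by BIC (the selection criterion is not stated in the text, so BIC is our assumption), can be sketched with scikit-learn as follows.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_judgements(ratings, max_k=3):
    """Fit 1..max_k Gaussian components to one item's ratings; keep the best BIC."""
    X = np.asarray(ratings, dtype=float).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, max_k + 1)]
    best = min(fits, key=lambda g: g.bic(X))
    return best.n_components, np.sort(best.weights_)[::-1]

# One high-agreement item and one clearly bimodal item (illustrative ratings).
for ratings in ([4.5, 4.8, 5.0, 4.7], [0.5, 0.6, 0.5, 3.0, 3.2, 3.1, 0.4, 3.1]):
    k, w = fit_judgements(ratings)
    print("k = %d, weights = %s" % (k, np.round(w, 2)))
```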
Figure 4: Human judgement distributions of examples in Table 1, with uni-, tri- and bi-modal Gaussians respectively. The dotted black line shows the model fit when using a single Gaussian; the shaded curve shows the model learned when allowed to fit \(k\) components of a GMM. Figure 5: _Left:_ Log likelihood of test data under the single-component Gaussian (\(x\)-axis) vs. the \(k\)-component GMM (\(y\)-axis). The darker the area, the more the examples concentrate. _Right:_ Weights of the top-2 effective components. This reveals that the GMM does not frequently use more than one effective component, with much lower weights on the second or third components. The majority of held-out human judgements fit a unimodal distribution well. **Gold Labels:** Given that a minority of instances in USTS-C are bimodally distributed, and that even for these instances the weight on the second component is low, we conservatively use a single Gaussian to aggregate human judgements for all cases in this work. ## 5 Analysis of Model Predictions Most STS models predict a pointwise similarity score rather than a distribution over values. Wang et al. (2022) estimated the uncertainty for continuous labels by MC-Dropout and Gaussian process regression (GPR). However, due to the lack of gold distributions, they only evaluate outputs using expected calibration error (ECE) and negative log-probability density (NLPD), assessing predictive reliability. It is unknown whether these uncertainty-aware models mimic human disagreements, i.e. whether the predicted deviation reflects the variance of human judgements. To explore this, we experiment over USTS and incorporate distributional divergence (i.e. Kullback-Leibler Divergence; “KL”) into the evaluation, to observe the fit between the distribution of collective human judgements and the model predictive probability. We also examine the ability of different models to capture the averaged score for low-agreement cases, and whether a well-calibrated model fits the distribution of annotations better. Evaluation Metrics: For scalar outputs, STS accuracy is generally evaluated with Pearson correlation (\(r\)) and Spearman rank correlation (\(\rho\)), which measure the linear correlation between model outputs and the averaged annotations, and the degree of monotonicity under ranking, respectively. For uncertainty-aware outputs, ECE and NLPD can be used to assess model reliability in the absence of gold distributions. ECE measures whether the estimated predictive confidence is aligned with the empirical correctness likelihoods. A well-calibrated model should be less confident on erroneous predictions and more confident on correct ones. NLPD penalises over-confidence more strongly through logarithmic scaling, favouring under-confident models. ### Models and Setup BERT with Two-layer MLP: The last-layer hidden state \(\mathbf{h}\) of the BERT _CLS_ token (Devlin et al., 2019) is passed through a two-layer MLP with a \(\tanh\) activation function. We refer to this model as _BERT-lr_ when making deterministic predictions, and _BERT-lr-MC_ when using MC-Dropout (Gal and Ghahramani, 2016) for uncertainty estimation. SBERT with GPR: In contrast with end-to-end training, sparse GPR is applied to estimate distributions, taking encoded sentences from Sentence-BERT (SBERT; Reimers and Gurevych, 2019) as input. We also calculate the cosine similarity between S1 and S2 using SBERT, as a non-Bayesian counterpart. **Setup:** bert-base-chinese is used with input format [CLS] S1 [SEP] S2 [SEP] for a text pair \((S1,S2)\), implemented based on the _huggingface Transformers_ framework. We fine-tune SBERT separately over each STS corpus based on \(\texttt{bert-base-chinese-nli}\), using the same configuration as the original paper.
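As a sketch of the MC-Dropout recipe behind _BERT-lr-MC_: a two-layer tanh MLP head over a [CLS]-style vector, with dropout kept active at inference so that repeated stochastic forward passes yield a predictive mean and deviation. The hidden width, dropout rate, sample count, and the random stand-in features are our assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class MLPHead(nn.Module):
    """Two-layer MLP regression head with tanh, as described for BERT-lr."""
    def __init__(self, dim=768, hidden=256, p_drop=0.1):   # hidden/p_drop: assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p_drop), nn.Linear(dim, hidden), nn.Tanh(),
            nn.Dropout(p_drop), nn.Linear(hidden, 1))
    def forward(self, h):
        return self.net(h).squeeze(-1)

def mc_dropout_predict(model, h, n_samples=50):
    """Keep dropout active at test time; return predictive mean and std."""
    model.train()                        # train mode here only to enable dropout
    with torch.no_grad():
        draws = torch.stack([model(h) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)

h = torch.randn(8, 768)                  # stand-in for BERT [CLS] hidden states
mu, sigma = mc_dropout_predict(MLPHead(), h)
```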
We use the concatenation of the embeddings \(u\oplus v\), along with their absolute difference \(|u-v|\) and element-wise multiplication \(u\times v\), to represent a sentence pair, implemented in pyro.8 Footnote 8: [https://pyro.ai/](https://pyro.ai/) We evaluate STS-B, USTS-U, and USTS-C under five training settings, as presented in Table 7: 1. Zero-shot: SBERT with no tuning; 2. GPR trained on \(\texttt{sbert-nli}\); 3. Domain-specific: fine-tuned on each dataset separately; 4. Domain-generalised: fine-tuned using the three datasets combined; 5. Cross-domain: trained with STS-B training data for USTS-U and USTS-C, and with USTS for STS-B. \begin{table} \begin{tabular}{c r r r r r r} \hline \hline & \multicolumn{3}{c}{**Testing**} & \multicolumn{3}{c}{**Train**} \\ K & amount & prop(\%) & \(\hat{\sigma}\) & amount & prop & \(\hat{\sigma}\) \\ \cline{2-7} 1 & 1772 & 88.6 & 0.55 & 3755 & 92.7 & 0.48 \\ 2 & 225 & 11.3 & 0.63 & 294 & 7.3 & 0.50 \\ 3 & 3 & 0.0 & 0.39 & 2 & 0.0 & 0.66 \\ \hline \hline \end{tabular} \end{table} Table 6: The amount and averaged standard deviation \(\hat{\sigma}\) of examples with \(k=\{1,2,3\}\) effective components of human judgement distributions in the training and test splits. ### Results and Analysis **USTS is challenging.** In setting (1) of Table 7, relying purely on the pre-trained semantic representation and cosine similarity, correlations over USTS-U and USTS-C are much lower than over STS-B. This suggests that USTS is a challenging dataset, but one that can be learned. USTS-U in particular achieves large improvements in performance after domain-specific training in experiments (3)-(4). **Critical differences exist between model outputs and human annotations.** The models can capture the average opinion, resulting in reasonable \(r\)/\(\rho\) between the predicted target value and the averaged annotations. However, they cannot capture the variance of human opinions. To quantify how well the predicted variance \(\sigma_{M}\) captures the variance \(\sigma_{H}\) of human judgements, we analyse the outputs of the top-2 settings: BERT-lr-MC from setting (4) and SBERT-GPR from setting (5), for USTS-U and USTS-C. We compute the correlations \(r\) and \(\rho\) between \(\boldsymbol{\sigma}_{M}\) and \(\boldsymbol{\sigma}_{H}\) in Table 8, and visualise \(\sigma_{M}\) with increasing human disagreement in Figure 6. There is no apparent correlation between \(\sigma_{M}\) and \(\sigma_{H}\). A given model displays similar deviation \(\sigma_{M}\) regardless of the relative amount of human disagreement.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**STS-B**} & \multicolumn{4}{c}{**USTS-U**} & \multicolumn{4}{c}{**USTS-C**} \\ \cline{2-13} \multicolumn{1}{c}{**Model**} & \(r\uparrow\) & \(\rho\uparrow\) & ECE \(\downarrow\) & NLPD \(\downarrow\) & \(r\uparrow\) & \(\rho\uparrow\) & ECE \(\downarrow\) & NLPD \(\downarrow\) & KL \(\downarrow\) & \(r\uparrow\) & \(\rho\uparrow\) & ECE \(\downarrow\) & NLPD \(\downarrow\) & KL \(\downarrow\) \\ \hline \multicolumn{13}{l}{_SBERT-NLI_} \\ \multicolumn{1}{l}{**(1)**} & SBERT-cosine & 0.714 & 0.718 & N/A & N/A & 0.597 & 0.383 & N/A & N/A & N/A & 0.572 & 0.442 & N/A & N/A & N/A \\ \multicolumn{1}{l}{**(2)**} & SBERT-GPR & 0.741 & 0.743 & 0.001 & 0.532 & 0.709 & 0.433 & 0.020 & 0.033 & 2.233 & 0.656 & 0.455 & 0.139 & \(-\)0.09 & 0.576 \\ \hline \multicolumn{13}{l}{_Domain-specific_} \\ \multicolumn{1}{l}{} & BERT-lr & 0.808 & 0.804 & N/A & N/A & 0.855 & 0.700 & N/A & N/A & N/A & 0.806 & 0.707 & N/A & N/A & N/A \\ \multicolumn{1}{l}{**(3)**} & BERT-lr-MC & 0.811 & 0.805 & **0.167** & **4.709** & 0.856 & **0.703** & **0.054** & 1.079 & 4.587 & 0.809 & 0.708 & **0.046** & **0.442** & 6.073 \\ \multicolumn{1}{l}{} & SBERT-cosine & 0.779 & 0.781 & N/A & N/A & 0.661 & 0.387 & N/A & N/A & N/A & 0.596 & 0.460 & N/A & N/A & N/A \\ \multicolumn{1}{l}{} & SBERT-GPR & 0.780 & 0.782 & 0.053 & 0.917 & 0.683 & 0.388 & 0.137 & 0.651 & 3.050 & 0.606 & 0.444 & 0.415 & 0.717 & 0.950 \\ \hline \multicolumn{13}{l}{_Domain-generalised_} \\ \multicolumn{1}{l}{} & BERT-lr & **0.815** & **0.813** & N/A & N/A & 0.860 & 0.692 & N/A & N/A & N/A & 0.835 & 0.768 & N/A & N/A & N/A \\ \multicolumn{1}{l}{**(4)**} & BERT-lr-MC & 0.814 & 0.811 & 0.179 & 5.865 & **0.861** & 0.697 & 0.060 & **0.898** & **4.434** & **0.838** & **0.774** & 0.278 & 0.702 & **5.401** \\ \multicolumn{1}{l}{} & SBERT-cosine & 0.772 & 0.772 & N/A & N/A & 0.686 & 0.435 & N/A & N/A & N/A & 0.670 & 0.523 & N/A & N/A & N/A \\ \multicolumn{1}{l}{} & SBERT-GPR & 0.772 & 0.775 & 0.017 & 0.645 & 0.707 & 0.433 & 0.098 & 0.268 & 2.578 & 0.674 & 0.497 & 0.157 & \(-\)0.04 & 0.955 \\ \hline \hline \multicolumn{13}{l}{_Cross-domain_} \\ \multicolumn{1}{l}{} & BERT-lr & 0.675 & 0.667 & N/A & N/A & 0.754 & 0.650 & N/A & N/A & N/A & 0.725 & 0.676 & N/A & N/A & N/A \\ \multicolumn{1}{l}{**(5)**} & BERT-lr-MC & 0.678 & 0.671 & 0.348 & 12.90 & 0.755 & 0.695 & 1.296 & 10.55 & 13.95 & 0.729 & 0.687 & 1.298 & 8.956 & 12.62 \\ \multicolumn{1}{l}{} & SBERT-cosine & 0.695 & 0.692 & N/A & N/A & 0.647 & 0.449 & N/A & N/A & N/A & 0.606 & 0.481 & N/A & N/A & N/A \\ \multicolumn{1}{l}{} & SBERT-GPR & 0.726 & 0.726 & 0.001 & 0.555 & 0.723 & 0.481 & 0.020 & 0.012 & 2.215 & 0.675 & 0.494 & 0.148 & \(-\)0.11 & 0.555 \\ \hline \hline \end{tabular} \end{table} Table 7: Test set correlation (\(r\)/\(\rho\)), ECE, NLPD and KL using end-to-end (BERT) and pipeline (SBERT) models, over STS-B, USTS-U and USTS-C, under five settings. The bold number is the best result for BERT, and the underlined number is that for SBERT. Figure 6: Predicted variance \(\sigma_{M}\) (\(y\)-axis) with increasing human disagreement (\(x\)-axis). Red and blue triangles = USTS-U and USTS-C from experiment setting (4) in Table 7, orange and green circles = USTS-U and USTS-C from experiment setting (5), and the black line is \(y=x\). USTS-U disperses at the left of the \(x\)-axis and low-agreement USTS-C scatters to the right. Different models concentrate on different parts of the spectrum, e.g.
BERT-lr-MC is distributed in the range \([0.1,0.2]\) while SBERT-GPR is distributed in the range \([0.5,0.7]\), and neither follows the line of \(y=x\). This suggests that **the uncertainty captured by current models is not the uncertainty underlying human disagreements**. Rather, it may **reflect the model's predictive confidence** on the data set as a whole. This finding is not surprising, since none of the models are optimised to capture collective human opinions, but it suggests an important direction for future improvement. **Being trustworthy is orthogonal to being accurate.** We see that ECE and NLPD do not mirror the results for _r_/\(\rho\) and the distributional divergence KL. This implies that the ability required to improve model reliability differs from that required to perform accurately, regardless of whether a target value or a target distribution is predicted. **Low human-agreement USTS is detrimental to training sentence embeddings.** Comparing the performance of experiment settings (2) and (5) in Table 7, tuning SBERT on USTS hurts results over STS-B across the board, while training on STS-B benefits both USTS-U and USTS-C. We speculate that the examples in USTS with larger annotator variance are more ambiguous than those in STS-B. Forcing networks to learn from high-ambiguity signals may inhibit generalisation, resulting in worse representations. **Discussion** For instances of high disagreement, neither a scalar nor a single Gaussian fits a set of observed judgements adequately. As a direction for future work, we suggest exploring the direct estimation of individual ratings (e.g. by few-shot prompt-based prediction) and evaluating against the raw collective opinions. This could circumvent the ineffective training and evaluation caused by aggregation. ## 6 Multilingual USTS Before extending USTS into a multilingual benchmark, we question the validity of previous approaches involving the direct transfer of annotations collected for one language to other languages (Liu et al., 2021; Yang et al., 2019). This strategy assumes that the nuanced semantics of the component sentences is not changed under translation, and hence that the label will be identical. To test whether this assumption is reasonable, we analyse the impact of language on the annotations, and discuss whether such ratings are transferable across languages. Specifically, we establish whether the label distribution varies based on language, and how annotator proficiency affects the distribution given the same text. **Collecting Labels** Taking English as a pivot language, we employ native English speakers (“NT”) and bilingual raters whose mother language is Mandarin Chinese, including 5 professional translators (“PT”), 5 overseas students (“OS”), and 5 general users (“GU”). Each annotator assigns labels to 100 examples sampled from each of USTS-U and USTS-C (the same data set used in Section 3.4), which have been manually post-edited by professional translators to ensure content alignment. **Results** We average the KL between the collective distributions drawn from 19 raters given _zh_ text, and from 5 native English speakers (NT) given _en_ text. Table 9 shows there is not a substantial distributional divergence. Differences decline further as annotations of the other three groups of bilingual raters are incorporated. Detailed analysis of distributions across each of these groups (Figure 7) reveals that the language of the text affects the distribution of human opinions.
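A rough sketch of this Table 9-style computation follows, with synthetic rating arrays standing in for the real annotations; the group sizes (19 zh raters; 5 each for NT, PT, OS, GU) follow the text, and the closed-form Gaussian-KL identity is inlined.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                            # examples per subset, as in the text
zh = rng.normal(2.5, 0.6, size=(n, 19))            # synthetic zh ratings (19 raters)
en_groups = [("NT", rng.normal(2.6, 0.9, size=(n, 5))),
             ("PT", rng.normal(2.55, 0.8, size=(n, 5))),
             ("OS", rng.normal(2.55, 0.8, size=(n, 5))),
             ("GU", rng.normal(2.5, 0.7, size=(n, 5)))]

mu_p, s_p = zh.mean(1), zh.std(1) + 1e-3           # collective zh distribution
en = np.empty((n, 0))
for name, g in en_groups:                          # incorporate NT, +PT, +OS, +GU
    en = np.hstack([en, g])
    mu_q, s_q = en.mean(1), en.std(1) + 1e-3
    kl = (np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5).mean()
    print(f"+{name}: mean KL = {kl:.2f}")
```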
On both USTS-U and USTS-C, the distribution differs substantially between native Chinese speakers and native English speakers when given _zh_ and _en_ sentence pairs, respectively. While the _zh_ annotations cluster in the lower \(\sigma\) region, those for _en_ are dispersed across a large \(\sigma\) span. Figure 7 also shows that the distribution of professional translators mirrors that of English natives, while general users differ substantially from both these groups, but are similar to native-speaker Chinese annotators who are given _zh_ text. We suspect that translators make judgements based on the meaning of the _en_ text directly, but general users may use translation tools to translate the _en_ text back to _zh_ to support their understanding, meaning they are in fact rating a Chinese text pair. Intermediate-level overseas students may mix strategies and thus sit somewhere in between these two extremes. \begin{table} \begin{tabular}{l c c c c} \hline \hline _en_-rater & NT & +PT & +OS & +GU \\ \hline USTS-U & 0.69 & 0.67 & 0.53 & 0.38 \\ USTS-C & 0.94 & 0.78 & 0.73 & 0.68 \\ \hline \hline \end{tabular} \end{table} Table 9: KL-divergence of labels as ratings from less proficient language speakers are incorporated. **Discussion** The differences we observe may be attributed to bias introduced during manual translation. Each sentence in a pair is translated separately, so while a source pair may have lexical overlap, this may not carry over under independent translation. We examine this effect by calculating the word overlap similarity of Eq. (1) for _zh_/_en_ pairs, where \(T_{1}\) and \(T_{2}\) are the words obtained by whitespace tokenisation for English and by the _jieba segment tool_ for Chinese. We calculate string similarity as: \[Sim=\frac{len(T_{1}\cap T_{2})+1}{max(len(T_{1}),len(T_{2}))+1} \tag{1}\] As detailed in Table 10, the lexical overlap similarity for _en_ and _zh_ is similar for USTS-U and USTS-C, suggesting that inconsistencies under translation are not a primary cause of the observed discrepancy. **In summary** The language of the text impacts the distribution of human judgements. In our analysis, English results in higher-uncertainty labelling than Chinese, for both uncontroversial and contentious cases. This suggests that the previous assumption that labels remain identical across languages, as long as the meaning of the text is kept the same, is potentially problematic, even though pairwise lexical overlap remains similar. ## 7 Discussion We focus on the STS task in this work. However, the methods we propose can be transferred to other subjective textual regression tasks, such as sentiment analysis (SA) rating and machine translation quality estimation in the format of direct assessment (DA). Similar findings stemming from task subjectivity may be relevant to other types of NLP tasks relying on human annotation. High disagreement among annotators may occur due to ambiguous labelling, where it is challenging to compile guidelines that are widely accepted and consistently interpreted by all individual annotators. In practice, it may be difficult to estimate the distribution of human annotations in instances where multiple annotators are difficult to source, as occurs in clinical and biomedical STS due to the need for highly specialised knowledge. Transfer learning, which relies on patterns learned from general-purpose USTS, provides a means to predict such a distribution, if noisily.
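Eq. (1) above is straightforward to implement; here is a short sketch, with invented example sentences, using whitespace tokens for English and jieba segmentation for Chinese as described (we treat \(T_{1}\) and \(T_{2}\) as sets when taking the intersection, which is our reading of the formula).

```python
import jieba  # Chinese segmentation, as referenced in the paper

def overlap_sim(tokens1, tokens2):
    """Eq. (1): add-one-smoothed lexical overlap between two token sets."""
    t1, t2 = set(tokens1), set(tokens2)
    return (len(t1 & t2) + 1) / (max(len(t1), len(t2)) + 1)

# Invented example pair; en uses whitespace tokens, zh uses jieba segments
en = overlap_sim("a man is playing a guitar".split(),
                 "a man plays the guitar".split())
zh = overlap_sim(jieba.lcut("一个男人在弹吉他"), jieba.lcut("男人弹吉他"))
print(f"en: {en:.2f}  zh: {zh:.2f}")
```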
We propose to explore the direct estimation of individual ratings by in-context learning based on large language models (LLMs), e.g. GPT-3 (Brown et al., 2020) and ChatGPT.9 \begin{table} \begin{tabular}{l c c c} \hline \hline lan & USTS-U & USTS-C & USTS \\ \hline zh & 0.42 & 0.45 & 0.44 \\ en & 0.40 & 0.43 & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 10: Lexical similarity between _en_ and _zh_ pairs sampled from USTS-U, USTS-C, and the combination of the two. Figure 7: Scatter plot of 100 examples sampled from USTS-U (top) and USTS-C (bottom) annotated by Chinese natives, English natives, professional translators (PT), overseas students (OS), and general users (GU). We plot (\(\mu\),\(\sigma\)) as coordinate points. LLMs are able to perform in-context learning, i.e. perform a new task via inference alone, by conditioning on a few labelled pairs as part of the input (Min et al., 2022). Footnote 9: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) ChatGPT appears to be highly effective at style transfer and tailoring of content to specific audiences such as _five-year old children_ or _domain experts_, through learning about language style and tone from interactional data and individual preferences. This allows it to generate more personalised responses (Aljanabi et al., 2023). Deshpande et al. (2023) show that assigning ChatGPT a persona through the parameter _system-role_, such as _a bad/horrible person_, can increase the toxicity of generated outputs up to sixfold. Additionally, Schick and Schütze (2021) show that generative LLMs can be used to automatically generate labelled STS datasets using targeted instructions. This data can be utilised to improve the quality of sentence embeddings. Together, these imply that LLMs may have utility in generating personalised semantic similarity assessments, based on annotator metadata (e.g. age, educational background, or domain expertise). Simulating variation in judgements between individual annotators using synthetic personalised ratings could mitigate the ineffective training and evaluation caused by aggregation, given that neither a scalar nor a single Gaussian fits the set of observed judgements adequately for instances of high disagreement. ## 8 Conclusion We presented the first uncertainty-aware STS corpus, consisting of 15k Chinese examples with more than 150k annotations. The dataset is intended to promote the development of STS systems from the perspective of capturing inherent disagreements in STS labelling, and to establish less biased and more nuanced gold labels when large variances exist among individual ratings. We additionally examined the models' ability to capture the averaged opinion and the distribution of collective human judgements. Results show that the uncertainty captured by current models is not explained by the semantic uncertainty that results in disagreements among humans. Rather, it tends to reflect the predictive confidence over the whole data set. We also found that the text language and the language proficiency of annotators affect labelling consistency. ## Acknowledgements We thank the anonymous reviewers and editors for their helpful comments, Yanqing Zhao, Samuel Luke Winfield D'Arcy, Yimeng Chen, and Minghan Wang in Huawei TSC, and NLP Group colleagues at The University of Melbourne for various discussions. Yuxia Wang is supported by scholarships from The University of Melbourne and the China Scholarship Council (CSC).
2301.08225
Many topological regions on the Bloch sphere of the spin-1/2 double kicked top
Floquet topological systems have been shown to exhibit features not commonly found in conventional topological systems such as topological phases characterized by arbitrarily large winding numbers. This is clearly highlighted in the quantum double kicked rotor coupled to spin-1/2 degrees of freedom [Phys. Rev. A 97, 063603 (2018)] where large winding numbers are achieved by tuning the kick strengths. Here, we extend the results to the spin-1/2 quantum double kicked top and find not only does the system exhibit topological regions with large winding numbers, but a large number of them are needed to fully characterize the topology of the Bloch sphere of the top for general kick strengths. Due to the geometry of the Bloch sphere it is partitioned into regions with different topology and the boundaries separating them are home to 0 and $\pi$ quasienergy bound states. We characterize the regions by comparing local versions of the mean field, quantum and mean chiral displacement winding numbers. We also use a probe state to locate the boundaries by observing localization as the state evolves when it has a large initial overlap with bound states. Finally, we briefly discuss the connections between the spin-1/2 quantum double kicked top and multi-step quantum walks, putting the system in the context of some current experiments in the exploration of topological phases.
J. Mumford
2023-01-19T18:36:46Z
http://arxiv.org/abs/2301.08225v2
# Many topological phases on the Bloch sphere of the spin-1/2 double kicked top ###### Abstract Floquet topological systems have been shown to exhibit features not commonly found in conventional topological systems, such as topological phases characterized by arbitrarily large winding numbers. This is clearly highlighted in the quantum double kicked rotor coupled to a spin-1/2 degree of freedom [L. Zhou and J. Gong, Phys. Rev. A **97**, 063603 (2018)] where large winding numbers are achieved by tuning the kick strengths. Here, we extend the results to the spin-1/2 quantum double kicked top and find that not only does the system exhibit topological phases with large winding numbers, but a large number of them are needed to fully characterize the topology of the Bloch sphere of the top for general kick strengths. The Bloch sphere is partitioned into regions with different topology, and the boundaries separating them are home to 0 and \(\pi\) quasienergy edge states. We characterize the regions by comparing the mean field, quantum and mean chiral displacement versions of the winding numbers. We also use a probe state to locate the boundaries by observing localization as the state evolves when it has a large initial overlap with the edge states located at the boundary. Finally, we briefly discuss the connections between the spin-1/2 quantum double kicked top and multi-step quantum walks, putting the system in the context of some current experiments in the exploration of topological phases. ## I Introduction Periodically driven systems, also called Floquet systems, have been a useful tool in simulating novel phases of matter. Periodic driving allows for control in the time domain, which can lead to exotic phases such as Anderson localization in time [1; 2] and phase space crystals [3; 4]. Of particular interest is the creation of effective magnetic fields and spin-orbit couplings with lasers to simulate topological features found in condensed matter systems [5; 6; 7; 8]. A notable example along these lines is Floquet topological insulators, which are laser-induced topological states in normal materials resulting in the creation and control of chiral edge states [9; 10]. In some cases, the crystal structure of solids is also simulated using optical lattices [11; 12; 13; 14]. In these systems, the lasers creating the lattice are periodically driven to induce effective magnetic field strengths unobtainable in real materials. This has led to the observation of the celebrated Harper-Hofstadter model [15; 16], which displays one of the more striking examples of integer quantum Hall physics. A special class of Floquet systems involves periodic kicks rather than continuous driving. Their appeal comes from the fact that only part of the Hamiltonian is responsible for evolving the system at a given time, so they are generally easier to conceptualize than their non-kicked counterparts. Early applications of this method were used to explore connections between topology and chaos in the kicked Harper model [17]. In recent years, one of the main focuses has been on generating topological phases with large topological invariants such as the winding and Chern numbers. Double kicked systems such as the quantum double kicked rotor (QDKR) and the quantum double kicked top (QDKT) have been shown to be promising in this regard [18; 19; 20], where large invariants were predicted.
We note that the single kicked versions of these models have a long history of illuminating the connections between classical and quantum dynamics relating to chaos [21; 22; 23; 24] and diffusion [25; 26]. Proposals involving the QDKR coupled to a spin-1/2 degree of freedom have shown that arbitrarily large winding numbers can be achieved [27; 28]. The additional spin-1/2 degree of freedom is significant because it puts the QDKR in the perspective of quantum walks, where it plays the role of a coin which is 'tossed' each step. The state of the spin determines the direction the rotor evolves, and is analogous to tossing a coin at regular intervals and choosing a direction based on whether it is heads or tails in classical random walks [29]. Of course, the difference in the quantum case is that the coin can be in a superposition of heads and tails. Quantum walks play an important part in the building of efficient quantum algorithms [30; 31] and provide a foundation for quantum computation [32]. Quantum walks have also been proposed as a method to explore topological phases [33; 34; 35], where experiments involved the measurement of the response of topological invariants to disorder [36; 37] and the identification of topologically protected edge states [38; 39; 40]. In this paper, we perform the natural step of extending the results of the spin-1/2 QDKR to the spin-1/2 QDKT. This is a generalization which amounts to extending the Hilbert space of the system from a ring to a sphere [22], commonly referred to as the Bloch sphere. Our main result is that, due to the change in geometry of the Hilbert space, instead of a single topological region on the ring, an arbitrarily large number of different topological regions can be generated on the Bloch sphere depending on the kick strengths. In general, many winding numbers are needed to characterize the topology of the entire Bloch sphere, and due to the bulk-edge correspondence, the boundaries between the regions are home to protected edge states. The number of edge states at a boundary can also be arbitrarily large, so there can be regions around the boundaries which are especially dynamically stable. We also show that the spin-1/2 QDKT dynamics can be framed in terms of a quantum walk on the Bloch sphere by breaking down the Floquet operator into a series of coin tosses and coin-dependent rotations. Thus, we make proper connections to previous work done with the spin-1/2 QDKR and other quantum walk systems. ## II Model The system we will be investigating is the spin-1/2 QDKT. The Hamiltonian is \[\hat{H}_{T} = \frac{\Lambda}{j}\hat{J}_{z}^{2}+\alpha_{1}\hat{J}_{x}\hat{ \sigma}_{x}\sum_{n}\delta\left[t-nT\right] \tag{1}\] \[+\alpha_{2}\hat{J}_{y}\hat{\sigma}_{y}\sum_{n}\delta\left[t-(n+1/2 )T\right]\] where \(\hat{J}_{a}\), \(a=x,y,z\), are the top operators obeying the usual commutation relation \([\hat{J}_{i},\hat{J}_{j}]=i\epsilon_{ijk}\hat{J}_{k}\), and the Pauli matrices represent the spin-1/2 degree of freedom, whose states we label as \(\uparrow\) and \(\downarrow\). The parameter \(\Lambda\) is the non-linear energy, and \(\alpha_{1}\) and \(\alpha_{2}\) are the first and second kick strengths, respectively. In each period \(T\) there are two kicks, with the second kick taking place \(T/2\) after the first.
The dynamics generated after one period is given by the Floquet operator \[\hat{U}_{T}=e^{-i\Lambda\hat{J}_{z}^{2}T/2j}e^{-i\alpha_{2}\delta t \hat{J}_{y}\hat{\sigma}_{y}}e^{-i\Lambda\hat{J}_{z}^{2}T/2j}e^{-i\alpha_{1} \delta t\hat{J}_{x}\hat{\sigma}_{x}} \tag{2}\] where \(\delta t\ll 1\) is the duration of a kick. Going forward we will analyze a simplified version of \(\hat{U}_{T}\) by setting the period to \(T=4\pi j/\Lambda\). This is similar to the on-resonance condition found in variations of the quantum kicked rotor [41; 42; 43] and results in the unitary operators containing \(\hat{J}_{z}^{2}\) becoming unity, since the eigenvalues of \(\hat{J}_{z}\) are integers. The Floquet operator becomes \[\hat{U}_{T}=e^{-i\frac{\kappa_{2}}{j}\hat{J}_{y}\hat{\sigma}_{y}}e^{-i\frac {\kappa_{1}}{j}\hat{J}_{x}\hat{\sigma}_{x}} \tag{3}\] where \(\kappa_{1}=\alpha_{1}\delta tj\) and \(\kappa_{2}=\alpha_{2}\delta tj\) are the scaled kick strengths. The specific degrees of freedom that the top operators belong to in \(\hat{U}_{T}\) will be kept general; however, we will briefly discuss some possibilities for their physical origin. They can belong to the angular momentum of a single particle, or they can represent a collection of two-mode indistinguishable particles under the Schwinger representation [44]. In the many-body case, the two modes can be external states, like the energy levels of a trapping potential, such as the system of a Bose-Einstein condensate (BEC) occupying the ground states of a double well potential. The two modes can also be internal states, like the hyperfine states of atoms [45; 46] or two polarizations of light [47]. When the top operators belong to the angular momentum of a single particle, the Pauli operators can represent the internal spin states of that particle. In the many-body case, they can represent a two-mode particle that is distinguishable from the others and is either a different species of particle with access to the same two modes [48; 49] or the same species with a different pair of modes. If the interactions between the two subsystems in Eq. (3) are difficult to generate, then a possible solution is to use interactions of the general form \(\hat{H}_{\rm int}=\hat{J}_{z}\hat{S}_{z}\), where in our case \(\hat{S}_{z}=\hat{\sigma}_{z}\). They can arise in squeezing experiments involving interactions between matter [50] or between matter and light [47]. Also, similar interactions are found in a Bose-Fermi mixture in an optical lattice [51]. Using \(\hat{H}_{\rm int}\) as a starting point, we can construct \(\hat{U}_{T}\) from the rotation operators \(\hat{R}_{x}(a)=e^{-ia\hat{J}_{x}}\), \(\hat{R}_{y}(a)=e^{-ia\hat{J}_{y}}\) and \(\hat{R}_{z}(a)=e^{-ia\hat{J}_{z}}\), and the general Pauli rotation operator \[\hat{M}(\alpha,\beta)=\begin{pmatrix}\cos(\alpha/2)&\sin(\alpha/2)e^{-i\beta} \\ -\sin(\alpha/2)e^{i\beta}&\cos(\alpha/2)\end{pmatrix}\,. \tag{4}\] The \(x\) and \(z\) rotation operators can be thought of as a phase accumulation when only tunneling and only an imbalance between the two modes is switched on, respectively, and can be used to create the \(y\) rotations since \(\hat{R}_{y}(a)=\hat{R}_{z}(\pi/2)\hat{R}_{x}(a)\hat{R}_{z}(-\pi/2)\).
The explicit breakdown of the unitary operators in \(\hat{U}_{T}\) is \[e^{-i\kappa_{1}\hat{J}_{x}\hat{\sigma}_{x}/j} = \hat{M}(-\pi/2,0)\hat{R}_{y}(\pi/2)e^{-i\kappa_{1}\hat{J}_{z}\hat {\sigma}_{z}/j}\] \[\qquad\times\hat{R}_{y}(-\pi/2)\hat{M}(\pi/2,0)\] \[e^{-i\kappa_{2}\hat{J}_{y}\hat{\sigma}_{y}/j} = \hat{M}(-\pi/2,\pi/2)\hat{R}_{x}(-\pi/2)e^{-i\kappa_{2}\hat{J}_{z}\hat{\sigma}_{z}/j} \tag{5}\] \[\qquad\times\hat{R}_{x}(\pi/2)\hat{M}(\pi/2,\pi/2)\,.\] A recent proposal of a quantum walk on the Bloch sphere [52] used the Floquet operator \[\hat{U}_{W}=e^{-i2\kappa\hat{J}_{z}\hat{\sigma}_{z}}\hat{M}(\alpha,\beta) \tag{6}\] to evolve the system. In this context, the Pauli operators represent a quantum coin which is tossed at each step via \(\hat{M}(\alpha,\beta)\); then a rotation about the \(J_{z}\) axis is performed whose direction depends on the state of the coin. Therefore, \(\hat{U}_{T}\) can be thought of as the Floquet operator for a quantum walk on the Bloch sphere involving four coin tosses and two rotations, one about the \(J_{y}\) axis and one about the \(J_{x}\) axis, at each step. ## III Results ### Quasienergy spectrum When dealing with time periodic systems, it is convenient to use Floquet theory, which allows one to write the dynamics over one period in terms of a time independent effective Hamiltonian \[\hat{U}_{T}=e^{-i\hat{H}_{\text{eff}}T}\,. \tag{7}\] The set of eigenvalues of the Floquet operator is \(\{\lambda_{i}\}\), and they can be used to calculate the eigenvalues of the effective Hamiltonian, \(\{\varepsilon_{i}\}=\{\frac{i}{T}\log\lambda_{i}\}\); however, these are only unique within a range of \(2\pi\), so they are referred to as quasienergies. Going forward we set \(T=1\) without loss of generality. In static systems, topological phases are characterized by integers such as the winding number. Through the bulk-edge correspondence, they count the number of protected edge states in each phase. In periodically driven systems, it has been shown that in addition to the usual \(\varepsilon=0\) edge states, there are also \(\varepsilon=\pi\) edge states, which come from the fact that the quasienergies are calculated from a unitary operator and not a Hamiltonian. These states are protected [53] and cannot be deformed into each other without an energy gap closing, or the breaking of some symmetry, so two integers are required to characterize each phase. Care must be taken in calculating these numbers for Floquet systems, however, and it has been shown that the winding numbers \(w_{1}\) and \(w_{2}\) of the chiral symmetrized timeframes \[\hat{U}_{T,1} =e^{-i\frac{\kappa_{1}}{2j}\hat{J}_{x}\hat{\sigma}_{x}}e^{-i\frac{ \kappa_{2}}{j}\hat{J}_{y}\hat{\sigma}_{y}}e^{-i\frac{\kappa_{1}}{2j}\hat{J}_{x} \hat{\sigma}_{x}} \tag{8}\] \[\hat{U}_{T,2} =e^{-i\frac{\kappa_{2}}{2j}\hat{J}_{y}\hat{\sigma}_{y}}e^{-i\frac{ \kappa_{1}}{j}\hat{J}_{x}\hat{\sigma}_{x}}e^{-i\frac{\kappa_{2}}{2j}\hat{J}_{y} \hat{\sigma}_{y}}\,, \tag{9}\] can be used to calculate the winding numbers \(w_{0}\) and \(w_{\pi}\), which count the number of \(\varepsilon=0\) and \(\varepsilon=\pi\) edge states in each phase, through the relation [54] \[w_{0}=\frac{w_{1}+w_{2}}{2}\hskip 28.452756ptw_{\pi}=\frac{w_{1}-w_{2}}{2}\,. \tag{10}\] The forms of the effective Hamiltonians from Eqns. (8) and (9) are not obvious.
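As a numerical illustration (not from the paper), a minimal numpy/scipy sketch below builds the spin operators, assembles \(\hat{U}_{T}\) of Eq. (3), and extracts the quasienergies; \(j=20\) is chosen only to keep the demo fast, while the kick strengths follow Fig. 2.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(j):
    """J_x, J_y, J_z as dense matrices in the |j, m> basis, ordered m = j, ..., -j."""
    m = np.arange(j, -j - 1, -1, dtype=float)
    jz = np.diag(m)
    # <m+1|J+|m> = sqrt(j(j+1) - m(m+1)) sits on the superdiagonal in this ordering
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    jx = (jp + jp.T) / 2
    jy = (jp - jp.T) / (2 * 1j)
    return jx, jy, jz

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

j = 20                                   # small top for speed; the paper uses j = 200
jx, jy, jz = spin_ops(j)
k1, k2 = 4.25 * np.pi, 0.5 * np.pi       # kick strengths of Fig. 2
U = expm(-1j * (k2 / j) * np.kron(jy, sy)) @ expm(-1j * (k1 / j) * np.kron(jx, sx))
eps = -np.angle(np.linalg.eigvals(U))    # quasienergies, since U = e^{-i H_eff} with T = 1
```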
However, some intuition can be gained by performing a mean field approximation on the top operators, transforming them into their coherent state expectation values \[\langle\hat{\mathbf{J}}\rangle=\langle(\hat{J}_{x},\hat{J}_{y},\hat{J}_{z})\rangle =j(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\,. \tag{11}\] The angles \(\theta\) and \(\phi\) are the polar and azimuthal angles of the Bloch sphere of the top with radius \(\sqrt{j(j+1)}\). Going forward we will use \(m\) as a variable instead of the polar angle through the relation \(m=j\cos\theta\), where \(m\) is an eigenvalue of \(\hat{J}_{z}\) and takes integer values in the range \(-j\leq m\leq j\). Defining the new parameters \(K_{1}=\kappa_{1}\sqrt{1-(m/j)^{2}}\cos\phi\) and \(K_{2}=\kappa_{2}\sqrt{1-(m/j)^{2}}\sin\phi\), Eqns. (8) and (9) become \[\hat{U}_{T,1}^{\text{MF}} =e^{-i\frac{K_{1}}{2}\hat{\sigma}_{x}}e^{-iK_{2}\hat{\sigma}_{y}} e^{-i\frac{K_{1}}{2}\hat{\sigma}_{x}} \tag{12}\] \[\hat{U}_{T,2}^{\text{MF}} =e^{-i\frac{K_{2}}{2}\hat{\sigma}_{y}}e^{-iK_{1}\hat{\sigma}_{x}} e^{-i\frac{K_{2}}{2}\hat{\sigma}_{y}}\,. \tag{13}\] In this form, the effective Hamiltonians can be calculated exactly, giving \(\hat{H}_{\text{eff},i}=\varepsilon\mathbf{n_{i}}\cdot\hat{\mathbf{\sigma}}\), where \(i=1,2\) labels the two symmetrized timeframes. Both the quasienergy \(\varepsilon\) and the unit vector \(\mathbf{n_{i}}=(n_{ix},n_{iy})\) depend on the state of the top. This means \(\mathbf{n_{i}}\) maps points on the Bloch sphere of the top to points on the equator of the spin-\(1/2\) Bloch sphere. The effective Hamiltonian has chiral symmetry, \(\hat{\Gamma}\hat{H}_{\text{eff},i}\hat{\Gamma}=-\hat{H}_{\text{eff},i}\), where the chiral symmetry operator is \(\hat{\Gamma}=\hat{\sigma}_{z}\). This means that for any state with quasienergy \(\varepsilon\) there is a partner state with quasienergy \(-\varepsilon\). It is chiral symmetry which protects the edge states and ensures they come in pairs, since if there is a transition where an \(\varepsilon>0\) state becomes an \(\varepsilon=0\) edge state, then an \(\varepsilon<0\) state must do so as well. The \(\varepsilon=\pi\) edge states come in pairs for the same reason. The chiral symmetry is displayed in Fig. 1, which shows an example of the full quantum spectrum of \(\hat{U}_{T}\) in Eq. (3) for a fixed value of \(\kappa_{2}=0.5\pi\) and variable \(\kappa_{1}\). New \(\varepsilon=0\) and \(\varepsilon=\pi\) edge states form at \(\kappa_{1}\) values which are even and odd multiples of \(\pi\), respectively, and they form in pairs. The \(\varepsilon=0\) edge state for \(\kappa_{1}<2\pi\) is due to the Hilbert space having natural edges at \(m=\pm j\). We note that the form of the spectrum depends on the value of \(\kappa_{2}\), and for general values the number of edge states does not always increase monotonically as \(\kappa_{1}\) increases. This is highlighted in the phase diagram of the spin-1/2 QDKR [27], which quickly becomes complicated as \(\kappa_{1}\) and \(\kappa_{2}\) increase. With the intent of keeping our analysis simple, we set \(\kappa_{2}=0.5\pi\) for the remainder of the paper. Turning back to the mean field results, the explicit form of \(\mathbf{n_{i}}\) is given in Appendix A, and the quasienergy, which is the same in the two timeframes of Eqns. (12) and (13), is \[\varepsilon=\arccos\left[\cos(K_{1})\cos(K_{2})\right]\,.
\tag{14}\] As previously mentioned, topological transitions occur when a gap in the quasienergy spectrum closes at \(\varepsilon=0\) or \(\varepsilon=\pi\), and this is the only mechanism which will change the topological properties of the system when chiral symmetry is maintained. Looking at Eq. (14), we see that this means \(\cos(K_{1})\cos(K_{2})=\pm 1\), which in turn means \(\kappa_{1}\sqrt{1-(m/j)^{2}}\cos\phi=\mu\pi\) and \(\kappa_{2}\sqrt{1-(m/j)^{2}}\sin\phi=\nu\pi\), where \(\mu\) and \(\nu\) are integers. Therefore, the boundaries between different topological phases will follow the equation \[\frac{\pi^{2}}{1-(m/j)^{2}}\left[\frac{\mu^{2}}{\kappa_{1}^{2}}+\frac{ \nu^{2}}{\kappa_{2}^{2}}\right]=1\,. \tag{15}\] Thus far, we have followed the same analysis done on the spin-1/2 QDKR in Ref. [27] with the Floquet operator \[\hat{U}_{R}=e^{-i\kappa_{2}\sin\hat{x}\hat{\sigma}_{y}}e^{-i\kappa_{1}\cos\hat {x}\hat{\sigma}_{x}}\,. \tag{16}\] Comparing this with the mean field approximation of the top, we find that they agree when \(m=0\) (\(\theta=\pi/2\)). Therefore, the rotor spatial degree of freedom is similar to the azimuthal (equatorial) degree of freedom of the top [22]. What we show in Eq. (15) is that as one moves away from the equator of the Bloch sphere, depending on the values of \(\kappa_{1}\) and \(\kappa_{2}\), any number of topological phase boundaries can be encountered. The boundary locations can be found by solving for \(m\), giving \[m_{\mu,\nu}=\pm j\sqrt{1-\pi^{2}\left(\frac{\mu^{2}}{\kappa_{1}^{2}}+\frac{ \nu^{2}}{\kappa_{2}^{2}}\right)}\,. \tag{17}\] ### Winding numbers The boundaries partition the Bloch sphere into regions of different topology, which are quantified in terms of the winding number. Under the mean field approximation, the winding number has a simple geometric interpretation as the number of times the vector \(\mathbf{n_{i}}\) winds around the origin as \(\phi\) goes from \(-\pi\) to \(\pi\) for a given \(m\). The mean field winding number is calculated with the equation [36] \[w_{\mathrm{MF},i}(m)=\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\,\mathbf{n_{i}}\times\partial _{\phi}\mathbf{n_{i}}\,. \tag{18}\] The calculation of the quantum winding number cannot be performed using Eq. (18) because \(\phi\) is not a good quantum number. Instead, we compute another version of the winding number which is local in the \(\hat{J}_{z}\) state space.
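Before turning to the quantum version, here is a sketch of Eqs. (17) and (18) in code: the boundary positions \(m_{\mu,\nu}\) and the winding of \(\mathbf{n_{1}}(\phi)\) at fixed \(m\), using the components from Appendix A (the \(1/\sin\varepsilon\) normalization does not affect the winding angle, so it is dropped); the grid size is an arbitrary demo choice.

```python
import numpy as np

def boundaries(j, k1, k2, mu_max=10, nu_max=10):
    """Boundary locations m_{mu,nu} from Eq. (17); keeps real solutions only."""
    out = []
    for mu in range(mu_max + 1):
        for nu in range(nu_max + 1):
            arg = 1 - np.pi**2 * (mu**2 / k1**2 + nu**2 / k2**2)
            if arg >= 0:
                out.append((mu, nu, j * np.sqrt(arg)))
    return out

def winding_mf(j, k1, k2, m, timeframe=1, nphi=20001):
    """Mean-field winding number of n_i(phi), Eq. (18), at fixed m."""
    phi = np.linspace(-np.pi, np.pi, nphi)
    s = np.sqrt(1 - (m / j)**2)
    K1, K2 = k1 * s * np.cos(phi), k2 * s * np.sin(phi)
    if timeframe == 1:
        nx, ny = np.sin(K1) * np.cos(K2), np.sin(K2)     # n_1 up to 1/sin(eps)
    else:
        nx, ny = np.sin(K1), np.sin(K2) * np.cos(K1)     # n_2 up to 1/sin(eps)
    ang = np.unwrap(np.arctan2(ny, nx))
    return round((ang[-1] - ang[0]) / (2 * np.pi))

j, k1, k2 = 200, 4.25 * np.pi, 0.5 * np.pi
w1, w2 = (winding_mf(j, k1, k2, m=0, timeframe=t) for t in (1, 2))
print("w0 =", (w1 + w2) // 2, " w_pi =", (w1 - w2) // 2)   # Eq. (10)
print(boundaries(j, k1, k2))
```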
The calculation relies on the flat-band transformation of the effective Floquet Hamiltonian, \(\hat{Q}=\hat{P}_{+}-\hat{P}_{-}\), where \(\hat{P}_{+}\) is the projector onto eigenstates with \(\varepsilon>0\) and \(\hat{P}_{-}\) is the projector onto eigenstates with \(\varepsilon<0\). The chiral symmetry of \(\hat{H}_{\mathrm{eff}}\) allows us to write the flat-band projector as \(\hat{Q}=\hat{Q}_{\uparrow\downarrow}+\hat{Q}_{\downarrow\uparrow}\), where \(\hat{Q}_{\uparrow\downarrow}=\hat{\Gamma}_{\uparrow}\hat{Q}\hat{\Gamma}_{\downarrow}\) and \(\hat{\Gamma}_{\uparrow}\), \(\hat{\Gamma}_{\downarrow}\) are the projectors onto the spin-1/2 \(\uparrow\) and \(\downarrow\) states, respectively. The spin-1/2 projectors form the chiral symmetry operator \(\hat{\Gamma}=\hat{\Gamma}_{\uparrow}-\hat{\Gamma}_{\downarrow}\), and the symmetrized winding number operator is then [55; 56; 57] \[\hat{w}_{i}=\left(\hat{Q}_{\downarrow\uparrow,i}[\hat{J}_{z},\hat{Q}_{\uparrow \downarrow,i}]+\hat{Q}_{\uparrow\downarrow,i}[\hat{J}_{z},\hat{Q}_{\downarrow \uparrow,i}]\right)/2 \tag{19}\] where the index \(i=1,2\) labels the two chiral timeframes from Eqns. (8) and (9). Therefore, the winding number localized to a single \(\hat{J}_{z}\) state is \(w_{i}(m)=\mathrm{Tr}_{\sigma}\langle m|\hat{w}_{i}|m\rangle\), where the trace is over the spin-1/2 degrees of freedom. It is expected that for large system sizes and for states comfortably within the bulk (away from \(m=\pm j\)), the quantum and mean field calculations will agree. Figure 2: Winding numbers. (a) Comparison of the mean field (solid black), quantum (solid green) and mean chiral displacement (dashed red) predictions of the \(\varepsilon=0\) winding number over the space of \(\hat{J}_{z}\) states for \(\kappa_{1}=4.25\pi\), \(\kappa_{2}=0.5\pi\) and \(j=200\). The locations of the steps in the mean field prediction can be calculated from Eq. (17). (b) Same as (a) except the calculations are performed for the \(\varepsilon=\pi\) winding number. In Fig. 2 (a) and (b), we plot the mean field (black) and quantum (green) values of \(w_{0}\) and \(w_{\pi}\), respectively. The parameter values used are \(\kappa_{1}=4.25\pi\) and \(\kappa_{2}=0.5\pi\), so the \(\varepsilon=0\) winding number has an extreme value of \(w_{0}=5\) and takes odd values until it reaches \(w_{0}=1\) at the edge, whereas the \(\varepsilon=\pi\) winding number has an extreme value of \(w_{\pi}=-4\) and takes even values until it reaches \(w_{\pi}=0\) at the edge. This matches the number of edge states shown in Fig. 1. The different topological phases are represented as plateaus, while the phase boundaries are represented as steps in each panel, whose locations are calculated from Eq. (17): \(m_{\mu,\nu}=\pm j\sqrt{1-16\mu^{2}/289-4\nu^{2}}\), where we find five pairs of integers which satisfy the equation: \(0\leq\mu\leq 4\) and \(\nu=0\). We see that the quantum winding number agrees strongly with the mean field result in the center of the bulk, and the agreement falls off moving closer to the edge. This is expected, since Eq. (19) is most accurate within the bulk and finite size effects become pronounced at the edges. Another obvious feature of the quantum winding number is the sudden jumps/dips at the boundaries between different topological phases. These are attributed to the fact that the boundaries are home to exponentially localized edge states, so they are close to being eigenstates of \(\hat{J}_{z}\). The winding number is not always directly measurable in experiments, and it is often easier to extract information from the dynamics of the system. To this end, the chiral displacement [58; 36] \[C_{i}(m,n) =\text{Tr}_{\sigma}\langle m|\hat{U}_{T,i}^{-n}\hat{J}_{z}\hat{ \sigma}_{z}\hat{U}_{T,i}^{n}|m\rangle\] \[=\left(\sum_{a=\uparrow,\downarrow}\sum_{j,k}c_{j,a}^{m*}c_{k,a} ^{m}e^{-i(\varepsilon_{j}-\varepsilon_{k})n}\langle\varepsilon_{k}|\hat{J}_{z }\hat{\sigma}_{z}|\varepsilon_{j}\rangle\right)_{i} \tag{20}\] has been shown to be related to the winding number. In the second line of Eq.
(20), we take the spectral decomposition of the Floquet operators, \(c_{j,a}^{m}=\langle a,m|\varepsilon_{j}\rangle\), and the subscript \(i=1,2\) indicates which Floquet operator we are using from Eqns. (8) and (9). The relation to the winding number comes from the fact that the local winding number at \(m=0\) can be approximated as [55] \[w_{i}(0) =\text{Tr}_{\sigma}\langle 0|\hat{w}_{i}|0\rangle\] \[=\left(\sum_{a=\uparrow,\downarrow}\sum_{j,k}c_{j,a}^{0*}c_{k,a}^{0}\langle\varepsilon_{k}|\hat{J}_{z}\hat{\sigma}_{z}|\varepsilon_{j} \rangle\right)_{i}. \tag{21}\] Comparing this equation with Eq. (20), we see that a long time average of the chiral displacement at \(m=0\), \[\overline{C_{i}(0)}=\lim_{\mathcal{N}\rightarrow\infty}\frac{1}{\mathcal{N}} \sum_{n}^{\mathcal{N}}C_{i}(0,n)\,, \tag{22}\] will be equal to the winding number if the off-diagonal terms \(\langle\varepsilon_{j}|\hat{J}_{z}\hat{\sigma}_{z}|\varepsilon_{k}\rangle\), where \(j\neq k\), can be neglected, since they get 'washed out' in the averaging. This was found to be the case for the spin-1/2 QDKR [27] and in a synthesized 1D topological wire based on the momentum states of a BEC [55]. Although these terms are small for our system, they are non-negligible, so they need to be included in order for the dynamics to display an accurate winding number. Therefore, short time averages are preferable, so that the off-diagonal terms do not vanish completely. The dashed red curves in Fig. 2 (a) and (b) show the chiral displacement averaged over \(\mathcal{N}=20\) steps for all states (not just the \(m=0\) state). We see it maintains a similar shape to the winding number calculated from Eq. (19) (green), including the sudden jumps/dips at the phase boundaries; however, it does fall short of the mean field and quantum winding numbers over the majority of the states. A comparison between short and long time averages of the chiral displacement can be found in Appendix B. ### Using a probe state to locate topological boundaries The boundaries separating different topological regions can also be located using another dynamical method, which takes advantage of the fact that the boundaries are home to the \(\varepsilon=0,\pi\) edge states. If an initial state has a strong overlap with the edge states at a boundary, then it will remain localized there for long periods of time. In contrast, an initial state comfortably away from a boundary will explore more of the Hilbert space as it evolves. We choose the initial probe state to be Gaussian in shape, \(|\psi_{0}\rangle=\sqrt{\frac{1}{\Delta m\sqrt{\pi}}}\sum_{m}e^{-\frac{(m-m_{0 })^{2}}{2\Delta m^{2}}}|m,\uparrow\rangle\) with \(\Delta m=10\). One of the benefits of using this method is that the details of the initial state are not important as long as it can pick out one boundary over another. In Fig. 3 (a), we show the evolution of the probability distribution in the \(\uparrow\) subspace of the initial Gaussian state centered on the boundary located at \(m_{0}=\lfloor m_{4,0}\rfloor=67\). This boundary is shown in Fig. 2 (a) as the first step away from the \(m=0\) state. The strong overlap between the initial state and the edge states keeps the distribution localized over the period shown. In Fig. 3 (b), the initial state is centered at \(m_{0}=104\), which is halfway between two boundaries. Clearly the probability distribution does not remain localized, and it has a checkered pattern which is due to parts of the wave function occupying the \(\downarrow\) subspace, which is not shown.
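A minimal sketch of this probe-state diagnostic is given below, reusing `U`, `j` and the basis ordering from the earlier Floquet snippet (so the boundary positions differ from the paper's \(j=200\) example); the Gaussian width and step count are demo choices.

```python
import numpy as np

def gaussian_probe(j, m0, dm=3.0):
    """Spin-up Gaussian probe centred at m0, normalised numerically."""
    m = np.arange(j, -j - 1, -1)                   # same ordering as spin_ops
    g = np.exp(-(m - m0)**2 / (2 * dm**2))
    psi = np.kron(g, [1.0, 0.0]).astype(complex)   # |m> tensor |up>
    return psi / np.linalg.norm(psi)

psi = gaussian_probe(j=20, m0=7)                   # U from the earlier sketch
dist = []
for _ in range(60):                                # 60 kicks
    psi = U @ psi
    p = np.abs(psi)**2
    dist.append(p[0::2] + p[1::2])                 # J_z distribution, spins summed
dist = np.array(dist)                              # row n = distribution after kick n
```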
The reason why the checkered pattern appears in (b) and not (a) is that the initial state has a larger overlap with \(\varepsilon\neq 0,\pi\) states in (b), and these states have equal support on both subspaces due to chiral symmetry, whereas the \(\varepsilon=0,\pi\) edge states have support on a single subspace. In Fig. 3 (c), the initial Gaussian is centered on another boundary at \(m_{0}=\lfloor m_{3,0}\rfloor=141\), which is the first step from the center in Fig. 2 (b). Again, the probability distribution is fairly localized, with small parts of it propagating away due to the Gaussian having a weaker overlap with the edge states compared to the ones in (a). In order to get a sense of the overall quality of the specific initial state we chose, we take a look at the inverse participation ratio (IPR), which is a good measure of the localization of a state in a given basis, \[\text{IPR}=\sum_{i}|\langle\varepsilon_{i}|\psi_{0}\rangle|^{4}\,. \tag{23}\] Here, \(|\varepsilon_{i}\rangle\) are the eigenstates of \(\hat{U}_{T}\) in Eq. (3) and \(|\psi_{0}\rangle\) is the initial Gaussian state. The two extreme values are \(\text{IPR}=1\) and \(\text{IPR}=\frac{1}{2(2j+1)}\), when \(|\psi_{0}\rangle\) is completely localized and completely spread in the basis, respectively. In Fig. 4 the IPR is plotted as a function of the center of the Gaussian, \(m_{0}\), where the red vertical lines are the locations of the boundaries predicted from Eq. (17). The peaks correspond to the initial state being localized at the boundaries, where it strongly overlaps with the edge states. The relative heights of the peaks, including the peak at \(m=0\), are state dependent, so Gaussians with different widths will produce different results. Nevertheless, the chosen Gaussian does an excellent job at picking out the boundary locations, with the only discrepancy being near the edge of the system at \(m=200\). There the distance between boundaries is similar to \(\Delta m=10\), so the Gaussian lacks the resolution required to locate them completely. ## IV Summary and Discussions We have shown that, like the spin-1/2 QDKR, the spin-1/2 QDKT can exhibit topological phases with large winding numbers. However, the two models differ in the number of topological regions for a given pair of kick strengths. Whereas the spin-1/2 QDKR has a single region, different regions proliferate in the Bloch sphere of the spin-1/2 QDKT as the kick strengths increase. We quantify the topology of each region by comparing the mean field, quantum and mean chiral displacement winding numbers. We find they agree with the number of edge states at the boundaries separating each region, which confirms the bulk-edge correspondence for the system. Figure 3: Evolution of the probability distribution for different initial states. (a) Evolution generated by \(\hat{U}_{T}\) in Eq. (3) of the probability distribution of an initial Gaussian state centered on the boundary between two different topological regions. The boundary is located near \(\lfloor m_{4,0}\rfloor=67\) and corresponds to the right closest step to the center in Fig. 2 (a), so it is the boundary between the \((w_{0},w_{\pi})=(5,-4)\) and the \((w_{0},w_{\pi})=(3,-4)\) topological regions. (b) Same initial Gaussian state as in (a) except it is centered at \(\lfloor(m_{4,0}+m_{3,0})/2\rfloor=104\), which is halfway between the boundary separating the \((w_{0},w_{\pi})=(5,-4)\) and the \((3,-4)\) regions and the boundary separating the \((w_{0},w_{\pi})=(3,-4)\) and the \((3,-2)\) regions.
(c) Same initial Gaussian state as in (a) except it is centered at \(\lfloor m_{3,0}\rfloor=141\), the boundary separating the \((w_{0},w_{\pi})=(3,-4)\) and the \((3,-2)\) regions, which is the right closest step to the center in Fig. 2 (b). The parameters are \(\kappa_{1}=4.25\pi\), \(\kappa_{2}=0.5\pi\) and \(j=200\). Finally, we used a simple dynamical method to locate the boundaries by preparing a Gaussian initial state and evolving it. When the initial state was centered on a boundary, the state remained localized due to the large overlap with the edge states exponentially localized there. The spin-1/2 QDKT is a rich topological system with many possible avenues to explore. As previously mentioned, it is related to quantum walks and can be used to investigate the effects of multiple topological boundaries in the walk space. A notable departure from former quantum walk studies is the source of the boundaries. Usually they are implemented via insertion of an inhomogeneity at a specific site [34; 38; 59], whereas here they come from the geometry of the Bloch sphere in which the walk is taking place, and can therefore be considered as being built in by nature. The positions of the boundaries are controlled by the kick strengths, which leads to some interesting possibilities for using the topology of the spin-1/2 QDKT as a tool to control the state of the system. One example of this is in the creation of cat states. One can imagine initially setting each kick strength close to some multiple of \(\pi\), so that a pair of boundaries just forms at the equator of the Bloch sphere of the top (\(m=0\)), then preparing the initial product state \(|\psi_{0}\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)| \theta_{0}=\pi/2\rangle\), where \(|\theta_{0}=\pi/2\rangle\) is a coherent state centered on the equator. Provided the coherent state has strong overlap with the edge states at the boundary, it will remain clamped there, as we showed in Fig. 3. If one of the kick strengths is made to increase slowly in time, then the coherent state is effectively pulled apart, with parts of it traveling toward opposite poles of the Bloch sphere as the boundaries move away from the equator, resulting in the final state \(|\psi_{f}\rangle=\frac{1}{\sqrt{2}}(|\uparrow,\theta\rangle+|\downarrow,- \theta\rangle)\). The final state is often referred to as a Bell-cat state and is used to test entanglement generation and efficient information extraction beyond the original two-qubit Bell state [60]. The process is possible due to the chiral symmetry of the system, which forces the edge states at the two newly formed boundaries to have support in opposite spin-1/2 subspaces. Another topic to explore is the connection between topology and chaos. Some initial numerical results on the level spacings of the quasienergies indicate that the spin-1/2 QDKT displays chaotic behavior as the kick strengths increase. It has been shown that nonlinear effects can lead to chaotic behavior in the bulk of a system while the boundaries have topological order [61]. However, the spin-1/2 QDKT is unique in that the number of boundaries also increases as the kick strengths increase. This raises the question as to how a bulk can be chaotic when boundaries proliferate in the system. Indeed, the spin-1/2 QDKT raises many questions deserving of further investigation. ## Appendix A Derivation of winding vector We can find \(\mathbf{n_{i}}\) by expanding the unitary operators in Eqns.
(12) and (13) in terms of trigonometric functions using the identity \[e^{-ia\,\mathbf{b}\cdot\hat{\mathbf{\sigma}}}=\cos(a)-i\,\mathbf{b}\cdot\hat{\mathbf{\sigma}}\sin(a)\,, \tag{16}\] valid for a unit vector \(\mathbf{b}\), which gives \[\hat{U}_{T,1}^{\rm MF} = \cos K_{1}\cos K_{2}-i\left(\sin K_{1}\cos K_{2}\hat{\sigma}_{x} +\sin K_{2}\hat{\sigma}_{y}\right) \tag{17}\] \[\hat{U}_{T,2}^{\rm MF} = \cos K_{1}\cos K_{2}-i\left(\sin K_{1}\hat{\sigma}_{x}+\sin K_{2} \cos K_{1}\hat{\sigma}_{y}\right)\,. \tag{18}\] For \(\hat{U}_{T,1}^{\rm MF}\) we define \[\cos\varepsilon = \cos K_{1}\cos K_{2}\] \[\sin\varepsilon = \sqrt{\sin^{2}K_{1}\cos^{2}K_{2}+\sin^{2}K_{2}}\] and similarly for \(\hat{U}_{T,2}^{\rm MF}\) we define \[\cos\varepsilon = \cos K_{1}\cos K_{2}\] \[\sin\varepsilon = \sqrt{\sin^{2}K_{2}\cos^{2}K_{1}+\sin^{2}K_{1}}\] which allows us to write the Floquet operators as \[\hat{U}_{T,i}^{\rm MF}=\cos\varepsilon-i\sin\varepsilon\left(n_{ix}\hat{\sigma }_{x}+n_{iy}\hat{\sigma}_{y}\right) \tag{19}\] where the vector components are \[n_{1x} = \frac{\sin K_{1}\cos K_{2}}{\sin\varepsilon}, n_{1y} = \frac{\sin K_{2}}{\sin\varepsilon} \tag{20}\] \[n_{2x} = \frac{\sin K_{1}}{\sin\varepsilon}, n_{2y} = \frac{\sin K_{2}\cos K_{1}}{\sin\varepsilon}\,. \tag{21}\] It is clear from Eq. (19) that the effective Hamiltonian mentioned in the main text takes the form \(\hat{H}_{\rm eff,i}=\varepsilon\mathbf{n_{i}}\cdot\hat{\mathbf{\sigma}}\). We note that the derivation of the winding vector components is the same as the one for the spin-1/2 QDKR in Ref. [27], with the only difference being that here we have \(K_{1}=\kappa_{1}\sqrt{1-(m/j)^{2}}\cos\phi\) and \(K_{2}=\kappa_{2}\sqrt{1-(m/j)^{2}}\sin\phi\), whereas there \(m=0\). ## Appendix B Short and long time averages of the chiral displacement We find that the short time average, rather than the long time average, of the chiral displacement in Eq. (20) captures the winding number. Figure 5 explicitly shows the difference between short and long time average calculations of \(w_{\pi}\) for \(\kappa_{1}=7.5\pi\) and \(\kappa_{2}=0.5\pi\). The solid orange and dashed blue data are calculated with \(\mathcal{N}=10\) and \(\mathcal{N}=1000\), respectively, and the solid black curve is the mean field prediction. The long time average result has smaller fluctuations than the short time average, but a larger disagreement with the mean field result over the range of states. This is because the long time average does not keep terms like \(\langle\varepsilon_{k}|\hat{J}_{z}\hat{\sigma}_{z}|\varepsilon_{j}\rangle\), where \(j\neq k\), which are necessary to accurately predict the winding number from the chiral displacement. However, the long time average still exhibits the sudden jumps at the boundaries, so it can still be used to locate the boundaries.
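To close the loop numerically, here is a sketch of the IPR scan of Eq. (23) over probe centres, again assuming `U` and `gaussian_probe` from the earlier snippets; the complex Schur decomposition is used only to obtain an orthonormal Floquet eigenbasis, which is valid since a unitary matrix is normal.

```python
import numpy as np
from scipy.linalg import schur

# For unitary U, the complex Schur form is (numerically) diagonal, and the
# columns of Z form an orthonormal set of Floquet eigenstates |eps_i>.
T_schur, Z = schur(U, output='complex')

def ipr(psi0):
    """Inverse participation ratio of psi0 in the Floquet eigenbasis, Eq. (23)."""
    overlaps = Z.conj().T @ psi0
    return np.sum(np.abs(overlaps)**4)

# Scan the probe centre m0 across the upper hemisphere; peaks flag boundaries
for m0 in range(0, 21):
    print(m0, round(ipr(gaussian_probe(20, m0)), 3))
```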
2303.09742
A proof of a conjecture on the distance spectral radius
A cactus is a connected graph in which any two cycles have at most one common vertex. We determine the unique graph that maximizes the distance spectral radius over all cacti with fixed numbers of vertices and cycles, and thus prove a conjecture on the distance spectral radius of cacti in [S.S. Bose, M. Nath, S. Paul, On the distance spectral radius of cacti, Linear Algebra Appl. 437 (2012) 2128--2141]. We prove the result in the context of hypertrees.
Yanna Wang, Bo Zhou
2023-03-17T02:43:28Z
http://arxiv.org/abs/2303.09742v1
# A proof of a conjecture on the distance spectral radius ###### Abstract A cactus is a connected graph in which any two cycles have at most one common vertex. We determine the unique graph that maximizes the distance spectral radius over all cacti with fixed numbers of vertices and cycles, and thus prove a conjecture on the distance spectral radius of cacti in [S.S. Bose, M. Nath, S. Paul, On the distance spectral radius of cacti, Linear Algebra Appl. 437 (2012) 2128-2141]. We prove the result in the context of hypertrees. _Keywords:_ distance spectral radius, cactus, hypertree, distance matrix _AMS Mathematics Subject Classifications:_ 05C50, 05C65 ## 1 Introduction A (simple) hypergraph \(G\) consists of a vertex set \(V(G)\) and an edge set \(E(G)\), where every edge in \(E(G)\) is a subset of \(V(G)\) containing at least two vertices, see [3]. The rank of \(G\) is the maximum cardinality of edges of \(G\). For an integer \(k\geq 2\), we say that \(G\) is \(k\)-uniform if every edge of \(G\) contains exactly \(k\) vertices. An ordinary (simple) graph is just a \(2\)-uniform hypergraph. For \(u,v\in V(G)\), if they are contained in some edge of \(G\), then we say that they are adjacent, or \(v\) is a neighbor of \(u\). For \(u\in V(G)\), let \(N_{G}(u)\) be the set of neighbors of \(u\) in \(G\) and \(E_{G}(u)\) be the set of edges containing \(u\) in \(G\). The degree of a vertex \(u\) in \(G\), denoted by \(\deg_{G}(u)\), is \(|E_{G}(u)|\). For distinct vertices \(v_{0},\ldots,v_{p}\) and distinct edges \(e_{1},\ldots,e_{p}\) of \(G\), the alternating sequence \((v_{0},e_{1},v_{1},\ldots,v_{p-1},e_{p},v_{p})\) such that \(v_{i-1},v_{i}\in e_{i}\) for \(i=1,\ldots,p\) and \(e_{i}\cap e_{j}=\emptyset\) for \(i,j=1,\ldots,p\) with \(j>i+1\) is a loose path of \(G\) from \(v_{0}\) to \(v_{p}\) of length \(p\). If there is a loose path from \(u\) to \(v\) for any \(u,v\in V(G)\), then we say that \(G\) is connected. For distinct vertices \(v_{0},\ldots,v_{p-1}\) and distinct edges \(e_{1},\ldots,e_{p}\), the alternating sequence \((v_{0},e_{1},v_{1},\ldots,v_{p-1},e_{p},v_{0})\) such that \(v_{i-1},v_{i}\in e_{i}\) for \(i=1,\ldots,p\) with \(v_{p}=v_{0}\) and \(e_{i}\cap e_{j}=\emptyset\) for \(i,j=1,\ldots,p\) with \(\mid i-j\mid>1\) and \(\{i,j\}\neq\{1,p\}\) is a loose cycle of \(G\) of length \(p\). A hypertree is a connected hypergraph with no loose cycles. An edge \(e\) of a hypergraph \(G\) is called a pendant edge of \(G\) at \(v\) if \(v\in e\), the degree of all vertices of \(e\) except \(v\) in \(G\) is one, and \(\deg_{G}(v)>1\). A pendant vertex is a vertex of degree one. Any hypergraph \(G\) corresponds naturally to a graph \(O_{G}\) with \(V(O_{G})=V(G)\) such that for \(u,v\in V(O_{G})\), \(\{u,v\}\) is an edge of \(O_{G}\) if and only if \(u\) and \(v\) are in some edge of \(G\). Obviously, an edge of \(G\) with size \(r\) corresponds naturally to a clique (maximal 2-connected subgraph) of \(O_{G}\) with size \(r\). If \(G\) is connected, then the distance between vertices \(u\) and \(v\) in \(G\) (or \(O_{G}\)), denoted by \(d_{G}(u,v)\), is the length of a shortest loose path connecting them in \(G\). For some extremal spectral problems related to distance, we find hypergraph notation is more convenient and effective. Let \(G\) be a connected hypergraph on \(n\) vertices. The distance matrix of \(G\) is defined as \(D(G)=(d_{G}(u,v))_{u,v\in V(G)}\). The distance spectral radius of \(G\), denoted by \(\rho(G)\), is the largest eigenvalue of \(D(G)\). 
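For readers who wish to experiment with these definitions, a minimal Python sketch (ours, not part of the paper; the vertex labeling \(0,\ldots,n-1\) and all helper names are assumptions) computes \(D(G)\) for an ordinary connected graph by breadth-first search and returns \(\rho(G)\); for a hypergraph one can apply it to the graph \(O_{G}\).

```python
import numpy as np
from collections import deque

def distance_matrix(adj):
    """All-pairs distance matrix of a connected graph via BFS from each
    vertex; `adj` maps each vertex 0..n-1 to a list of its neighbors."""
    n = len(adj)
    D = np.zeros((n, n))
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            D[s, v] = d
    return D

def rho(adj):
    """Distance spectral radius: largest eigenvalue of the symmetric D(G)."""
    return np.linalg.eigvalsh(distance_matrix(adj)).max()

# Example: the path P_4.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(rho(P4))
```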
The eigenvalues of distance matrices of graphs, which arose from a data communication problem studied by Graham and Pollak [5] in 1971, have been studied extensively, and particularly, the distance spectral radius has received much attention, see the survey [1]. We mention that the distance spectral radius has also been used as a molecular descriptor, see [2, 6]. Watanabe et al. [17] studied spectral properties of the distance matrix of uniform hypertrees, generalizing some results by Graham and Pollak [5] and Sivasubramanian [11]. Lin and Zhou [7] studied the distance spectral radius of uniform hypergraphs and, particularly, uniform hypertrees. Wang and Zhou [15] studied the distance spectral radius of a hypergraph that is not necessarily uniform, and determined the unique hypertrees with minimum and maximum distance spectral radius, respectively, among hypertrees on \(n\) vertices with \(m\) edges, where \(1\leq m\leq n-1\), and also determined the unique hypertrees with the first three smallest (largest, respectively) distance spectral radii among hypertrees on \(n\geq 6\) vertices. In [16], they made further efforts to identify extremal hypergraphs in some classes of hypergraphs with given parameters.

A cactus is a connected graph in which any two cycles have at most one common vertex. Denote by \(P_{n}\) the (ordinary) path of order \(n\). A saw-graph of order \(n\) with length \(k\) is a cactus of order \(n\) obtained from \(P_{n-k}\) by replacing \(k\) of its edges with \(k\) triangles, where \(0\leq k\leq\lfloor\frac{n-1}{2}\rfloor\). In particular, \(P_{n}\) is a saw-graph of length \(0\). A saw-graph with length \(k\) is a proper saw-graph if its order is \(2k+1\). An end of a saw-graph is a vertex of degree \(2\) that is adjacent to a vertex of degree \(2\). The saw-graph obtained by joining an end of a proper saw-graph of length \(p\) with an end of another proper saw-graph of length \(q\) by a path of length \(\ell\) is denoted by \(S(p,q;\ell)\). Particularly, \(S(p,q;0)\) is just the proper saw-graph of length \(p+q\).

Let \(\mathcal{C}(n,k)\) be the class of all cacti on \(n\) vertices and \(k\) cycles, where \(0\leq k\leq\lfloor\frac{n-1}{2}\rfloor\). Let \(G\) be a graph with maximum distance spectral radius in \(\mathcal{C}(n,k)\). Then, by [4, Lemma 5.2], all cycles of \(G\) are triangles. If \(G\) is not a saw-graph, then there does not necessarily exist a cut vertex \(v\) such that \(G-v\) has three components: consider, for example, a graph obtained from a saw-graph on \(n-2\) vertices by adding a triangle at a degree-two vertex that lies on some triangle (such a vertex is not an end). In this case, [4, Lemma 5.1] does not apply. So, to show that \(G\) is a saw-graph, some different technique is needed. Note that \(S(0,0;n-1)\) and \(S(0,1;n-3)\) are the unique graphs with maximum distance spectral radius in \(\mathcal{C}(n,0)\) and \(\mathcal{C}(n,1)\), respectively [10, 12, 18]. Based on further computer results, Bose et al. [4] posed the following conjecture.

**Conjecture 1.1**.: _[_4_, Conjecture 5.4]__\(S(\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil;n-2k-1)\) uniquely maximizes the distance spectral radius in \(\mathcal{C}(n,k)\)._

If \(T\) is a hypertree, then \(O_{T}\) is a connected graph in which every clique corresponds to an edge of \(T\).
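Conjecture 1.1 is easy to probe numerically for small parameters. The sketch below reuses `rho` from the snippet above; the saw-graph construction and the sample values \(n=13\), \(k=4\) are our illustrative assumptions.

```python
def saw_graph(p, q, ell):
    """S(p, q; ell): path v_0 .. v_{k+ell} (k = p + q) in which the first p
    and the last q path edges each carry a triangle (one apex vertex added)."""
    k = p + q
    m = k + ell + 1                      # number of path vertices
    adj = {i: [] for i in range(m)}
    def link(a, b):
        adj[a].append(b)
        adj[b].append(a)
    for i in range(m - 1):               # the underlying path
        link(i, i + 1)
    apex = m
    for i in list(range(p)) + list(range(k + ell - q, k + ell)):
        adj[apex] = []                   # apex of the triangle on {v_i, v_{i+1}}
        link(apex, i)
        link(apex, i + 1)
        apex += 1
    return adj

n, k = 13, 4
ell = n - 2 * k - 1
# Compare all splits p + q = k; one expects the balanced split of
# Conjecture 1.1 to attain the maximum.
print(max((rho(saw_graph(p, k - p, ell)), p) for p in range(k + 1)))
```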
Let \(T(n,a,b)\) be the hypertree obtained by inserting a pendant vertex \(w_{i}\) in \(e_{i}=\{v_{i},v_{i+1}\}\) of the path \(P_{n-a-b}=v_{1}v_{2}\cdots v_{n-a-b}\) for \(i=1,\ldots,a,n-a-2b,\ldots,n-a-b-1\), where \(0\leq a\leq b\) and \(a+b\leq\lfloor\frac{n-1}{2}\rfloor\). It is evident that \(O_{T(n,a,b)}\cong S(a,b;n-2(a+b)-1)\). In this paper, we prove the following result.

**Theorem 1.1**.: _Let \(T\) be a hypertree of order \(n\) with rank at most three and \(k\) edges of size three, where \(1\leq k\leq\lfloor\frac{n-1}{2}\rfloor\). Then \(\rho(T)\leq\rho(T(n,\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil))\) with equality if and only if \(T\cong T(n,\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil)\)._

In terms of graphs, Theorem 1.1 may be rephrased as:

**Theorem 1.2**.: _Suppose that \(G\in\mathcal{C}(n,k)\) and all cycles of \(G\) are triangles, where \(1\leq k\leq\lfloor\frac{n-1}{2}\rfloor\). Then \(\rho(G)\leq\rho(S(\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil;n-2k-1))\) with equality if and only if \(G\cong S(\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil;n-2k-1)\)._

If \(G\) is a graph with maximum distance spectral radius in \(\mathcal{C}(n,k)\) for \(k\geq 1\), then all cycles of \(G\) are triangles. So, by Theorem 1.2, \(G\cong S(\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil;n-2k-1)\). That is, Conjecture 1.1 is true.

## 2 Preliminaries

Let \(G\) be a connected hypergraph. Since \(D(G)\) is irreducible, by the Perron-Frobenius theorem, \(\rho(G)\) is simple and there is a unique unit positive eigenvector corresponding to \(\rho(G)\), which is called the distance Perron vector of \(G\), denoted by \(x(G)\). Let \(V(G)=\{v_{1},\ldots,v_{n}\}\) and \(x=(x_{v_{1}},\ldots,x_{v_{n}})^{\top}\in\mathbb{R}^{n}\). Then \[x^{\top}D(G)x=2\sum_{\{u,v\}\subseteq V(G)}d_{G}(u,v)x_{u}x_{v}.\] If \(x\) is a unit vector with at least one nonnegative component, then by Rayleigh's principle, we have \(\rho(G)\geq x^{\top}D(G)x\) with equality if and only if \(x=x(G)\). For \(x=x(G)\) and each \(u\in V(G)\), we have \[\rho(G)x_{u}=\sum_{v\in V(G)}d_{G}(u,v)x_{v},\] which is called the distance eigenequation of \(G\) at \(u\). For a connected hypergraph \(G\) with \(V_{1}\subseteq V(G)\), let \(\sigma_{G}(V_{1})\) be the sum of the entries of the distance Perron vector of \(G\) corresponding to the vertices in \(V_{1}\). Furthermore, if all the vertices of \(V_{1}\) induce a connected subhypergraph \(H\) of \(G\), then we write \(\sigma_{G}(H)\) instead of \(\sigma_{G}(V_{1})\). For \(e\in E(G)\), let \(G-e\) be the subhypergraph of \(G\) obtained by deleting \(e\). Here, we give a result that will be used frequently in the next section.

**Lemma 2.1**.: _Let \(T\) be a hypertree and let \(e_{1}\) and \(e_{2}\) be two edges of \(T\). Suppose that \(u_{i},v_{i}\in e_{i}\) for \(i=1,2\), and \(d_{T}(u_{1},u_{2})=d_{T}(v_{1},v_{2})+2\). For \(i=1,2\), let \(T_{i}\) be the component of \(T-e_{i}\) containing \(u_{i}\) and \(A_{i}=\{w\in V(T):d_{T}(w,u_{i})=d_{T}(w,v_{i})\}\). Let \(x=x(T)\)._

1. \[\rho(T)(x_{u_{1}}-x_{u_{2}})-\rho(T)(x_{v_{1}}-x_{v_{2}})\] \[= 2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sigma_{T}(A_{2})-\sigma_{T}(A_{1}).\]
2.
_If_ \(e_{i}\setminus\{u_{i},v_{i}\}=\{w_{i}\}\) _and_ \(\text{deg}_{T}(w_{i})=1\) _for_ \(i=1,2\)_, then_ \[(\rho(T)+1)(x_{w_{1}}-x_{w_{2}})-\rho(T)(x_{v_{1}}-x_{v_{2}})=x_{w_{2}}-x_{w_{1}}+\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\] _and_ \[\rho(T)(x_{u_{1}}-x_{u_{2}})-(\rho(T)+1)(x_{w_{1}}-x_{w_{2}})=\sigma_{T}(T_{2})-\sigma_{T}(T_{1}).\] Proof.: Let \(T_{i}^{\prime}\) be the component of \(T-e_{i}\) containing \(v_{i}\) for \(i=1,2\). Evidently, \(V(T_{1}^{\prime})=(V(T_{1}^{\prime})\cap V(T_{2}^{\prime}))\cup A_{2}\cup V(T_{2})\), and \(V(T_{1}^{\prime})\cap V(T_{2}^{\prime}),A_{2},V(T_{2})\) are disjoint. From the distance eigenequations of \(T\) at \(u_{1}\) and \(v_{1}\), we have \[\rho(T)(x_{u_{1}}-x_{v_{1}})= \sum_{w\in V(T)}(d_{T}(u_{1},w)-d_{T}(v_{1},w))x_{w}\] \[= \sigma_{T}(T_{1}^{\prime})-\sigma_{T}(T_{1})\] \[= \sigma_{T}(V(T_{1}^{\prime})\cap V(T_{2}^{\prime}))+\sigma_{T}(A_{2})+\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\] as \(d_{T}(u_{1},w)-d_{T}(v_{1},w)=1\) if \(w\in V(T_{1}^{\prime})\), \(-1\) if \(w\in V(T_{1})\), and \(0\) otherwise. Similarly, \[\rho(T)(x_{v_{2}}-x_{u_{2}})= \sigma_{T}(T_{2})-\sigma_{T}(T_{2}^{\prime})\] \[= \sigma_{T}(T_{2})-\sigma_{T}(V(T_{1}^{\prime})\cap V(T_{2}^{\prime}))-\sigma_{T}(A_{1})-\sigma_{T}(T_{1}).\] Adding these two equations, we obtain \[\rho(T)(x_{u_{1}}-x_{v_{1}})+\rho(T)(x_{v_{2}}-x_{u_{2}})=2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sigma_{T}(A_{2})-\sigma_{T}(A_{1}),\] from which Item (i) follows. Now suppose that \(\deg_{T}(w_{i})=1\) and \(e_{i}\setminus\{u_{i},v_{i}\}=\{w_{i}\}\) for \(i=1,2\). Then the component of \(T-e_{i}\) containing \(w_{i}\) has exactly one vertex \(w_{i}\) and \(\{w\in V(T):d_{T}(w,w_{i})=d_{T}(w,v_{i})\}=V(T_{i})\) for \(i=1,2\). Item (i) reduces to \[\rho(T)(x_{w_{1}}-x_{w_{2}})-\rho(T)(x_{v_{1}}-x_{v_{2}})=2(x_{w_{2}}-x_{w_{1}})+\sigma_{T}(T_{2})-\sigma_{T}(T_{1}),\] from which the first equation in Item (ii) follows. From the distance eigenequations of \(T\) at \(u_{1}\), \(w_{1}\), \(w_{2}\) and \(u_{2}\), we have \[\rho(T)(x_{u_{1}}-x_{w_{1}})=x_{w_{1}}-\sigma_{T}(T_{1})\] and \[\rho(T)(x_{w_{2}}-x_{u_{2}})=\sigma_{T}(T_{2})-x_{w_{2}}.\] So \(\rho(T)(x_{u_{1}}-x_{u_{2}})-\rho(T)(x_{w_{1}}-x_{w_{2}})=x_{w_{1}}-x_{w_{2}}+\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\), from which the second equation in Item (ii) follows.

## 3 Distance spectral properties of \(T(n,a,b)\)

In this section, we give some properties related to the entries of the Perron vector of \(T(n,a,b)\), which will be used in the subsequent proofs.

**Lemma 3.1**.: _Let \(T=T(n,a,b)\), where \(a\geq 0\), \(b\geq a+2\) and \(2(a+b)<n-1\). Let \(\ell=n-a-b\). Let \(x=x(T)\). Let \(e\) be the edge containing both \(v_{\ell-b}\) and \(v_{\ell-b+1}\). Let \(T_{1}\) and \(T_{2}\) be the components of \(T-e\) containing \(v_{\ell-b}\) and \(v_{\ell-b+1}\), respectively. If \(b\geq\frac{\ell}{2}\), then \(\sigma_{T}(T_{1})<\sigma_{T}(T_{2})\)._

Proof.: Let \(\ell-b=q\). Then \(\sigma_{T}(T_{1})=\sum_{j=1}^{q}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\) and \(\sigma_{T}(T_{2})=\sum_{j=q+1}^{\ell}x_{v_{j}}+\sum_{j=q+1}^{\ell-1}x_{w_{j}}\). As \(b\geq\frac{\ell}{2}\), we have \(2q\leq\ell\). Suppose that \(\sigma_{T}(T_{1})\geq\sigma_{T}(T_{2})\). **Claim 1.**\(x_{v_{q-i}}\leq x_{v_{q+1+i}}\) for \(0\leq i\leq q-a-1\). We show this by induction on \(i\). For \(i=0\), from the distance eigenequations of \(T\) at \(v_{q}\) and \(v_{q+1}\), we have \[\rho(T)(x_{v_{q}}-x_{v_{q+1}})=\sigma_{T}(T_{2})-\sigma_{T}(T_{1}),\] so \(x_{v_{q}}\leq x_{v_{q+1}}\).
Suppose that \(1\leq i\leq q-a-1\) and \(x_{v_{q-j}}\leq x_{v_{q+1+j}}\) for \(0\leq j\leq i-1\). Note that \(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\leq 0\), \(\sum_{j=0}^{i-1}\left(x_{v_{q-j}}-x_{v_{q+1+j}}\right)\leq 0\) and \(2\sum_{j=q+1}^{q+i}x_{w_{j}}-x_{w_{q+i}}>0\). So, by Lemma 2.1(i), we have \[\rho(T)\left(x_{v_{q-i}}-x_{v_{q+1+i}}\right)-\rho(T)\left(x_{v_{q -(i-1)}}-x_{v_{q+1+(i-1)}}\right)\] \[= 2\left(\sum_{j=q+1+i}^{\ell}x_{v_{j}}+\sum_{j=q+1+i}^{\ell-1}x_{ w_{j}}\right)-2\left(\sum_{j=1}^{q-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)+x_{w_{q +i}}\] \[= 2\left(\sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{q+1+j}}-\sum_{j=q+1}^ {q+i}x_{w_{j}}\right)-2\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{q-j}}\right) +x_{w_{q+i}}\] \[= 2\left(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\right)+2\sum_{j=0}^{i- 1}\left(x_{v_{q-j}}-x_{v_{q+1+j}}\right)-\left(2\sum_{j=q+1}^{q+i}x_{w_{j}}-x_ {w_{q+i}}\right)\] \[< 0,\] implying that \(x_{v_{q-i}}-x_{v_{q+1+i}}<x_{v_{q-(i-1)}}-x_{v_{q+1+(i-1)}}\leq 0\). So \(x_{v_{q-i}}<x_{v_{q+1+i}}\). This proves Claim 1. **Claim 2.**\(x_{v_{q-i}}\leq x_{v_{q+1+i}}\) and \(x_{w_{q-i}}\leq x_{w_{q+i}}\) for \(q-a\leq i\leq q-1\) with \(a\geq 1\). We show this by induction on \(i\). By Claim 1, \(\sum_{j=0}^{q-a-1}\left(x_{v_{q-j}}-x_{v_{q+1+j}}\right)\leq 0\). Then, by Lemma 2.1(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{a}}-x_{w_{2q-a}}\right)-\rho(T )\left(x_{v_{a+1}}-x_{v_{2q-a}}\right)\] \[= \sum_{j=2q+1-a}^{\ell}x_{v_{j}}+\sum_{j=2q-a}^{\ell-1}x_{w_{j}}- \left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{q-a-1}x_{v_{q+1+j}}-\sum_{j=1}^{q-a -1}x_{w_{q+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{q-a-1}x_{v_{q-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{q-a-1}\left(x_ {v_{q-j}}-x_{v_{q+1+j}}\right)-\sum_{j=1}^{q-a-1}x_{w_{q+j}}\] \[< 0,\] so \((\rho(T)+1)(x_{w_{a}}-x_{w_{2q-a}})<\rho(T)(x_{v_{a+1}}-x_{v_{2q-a}})<0\), and thus \(x_{w_{a}}<x_{w_{2q-a}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{a}}-x_{v_{2q+1-a}}\right)-(\rho(T)+1)\left(x_{ w_{a}}-x_{w_{2q-a}}\right)\] \[= \sum_{j=2q+1-a}^{\ell}x_{v_{j}}+\sum_{j=2q+1-a}^{\ell-1}x_{w_{j}} -\left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{q-a-1}x_{v_{q+1+j}}-\sum_{j=1}^{q-a }x_{w_{q+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{q-a-1}x_{v_{q-j}}-x_{w_{a}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{q-a-1}\left(x_ {v_{q-j}}-x_{v_{q+1+j}}\right)-\sum_{j=1}^{q-a-1}x_{w_{q+j}}+(x_{w_{a}}-x_{w_{2 q-a}})\] \[< 0,\] so \(\rho(T)(x_{v_{a}}-x_{v_{2q+1-a}})<(\rho(T)+1)(x_{w_{a}}-x_{w_{2q-a}})<0\), and thus \(x_{v_{a}}<x_{v_{2q+1-a}}\). So Claim 2 is true for \(i=q-a\). Suppose that \(q-a+1\leq i\leq q-1\) with \(a\geq 2\), \(x_{v_{q-j}}\leq x_{v_{q+1+j}}\) and \(x_{w_{q-j}}\leq x_{w_{q+j}}\) for \(q-a\leq j\leq i-1\). 
By Lemma 2.1(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{q-i}}-x_{w_{q+i}}\right)-\rho(T)\left(x_{v_{q-(i-1)}}-x_{v_{q+1+(i-1)}}\right)\] \[= \sum_{j=q+1+i}^{\ell}x_{v_{j}}+\sum_{j=q+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{q-i}x_{v_{j}}+\sum_{j=1}^{q-i}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{q+1+j}}-\sum_{j=1}^{i-1}x_{w_{q+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{q-j}}-\sum_{j=q-a}^{i-1}x_{w_{q-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v_{q-j}}-x_{v_{q+1+j}}\right)+\sum_{j=q-a}^{i-1}\left(x_{w_{q-j}}-x_{w_{q+j}}\right)-\sum_{j=1}^{q-a-1}x_{w_{q+j}}\] \[< 0,\] so \((\rho(T)+1)(x_{w_{q-i}}-x_{w_{q+i}})<\rho(T)(x_{v_{q-(i-1)}}-x_{v_{q+1+(i-1)}})\leq 0\). Thus \(x_{w_{q-i}}<x_{w_{q+i}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{q-i}}-x_{v_{q+1+i}}\right)-(\rho(T)+1)\left(x_{w_{q-i}}-x_{w_{q+i}}\right)\] \[= \sum_{j=q+1+i}^{\ell}x_{v_{j}}+\sum_{j=q+1+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{q-i}x_{v_{j}}+\sum_{j=1}^{q-i-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{q+1+j}}-\sum_{j=1}^{i}x_{w_{q+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{q-j}}-\sum_{j=q-a}^{i}x_{w_{q-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v_{q-j}}-x_{v_{q+1+j}}\right)\] \[-\sum_{j=1}^{q-a-1}x_{w_{q+j}}+\sum_{j=q-a}^{i}\left(x_{w_{q-j}}-x_{w_{q+j}}\right)\] \[< 0,\] so \(\rho(T)(x_{v_{q-i}}-x_{v_{q+1+i}})<(\rho(T)+1)(x_{w_{q-i}}-x_{w_{q+i}})<0\). Thus \(x_{v_{q-i}}<x_{v_{q+1+i}}\). This proves Claim 2. Combining Claims 1 and 2, we conclude that \(x_{v_{q-i}}\leq x_{v_{q+1+i}}\) for \(0\leq i\leq q-1\), and \(x_{w_{q-i}}\leq x_{w_{q+i}}\) for \(q-a\leq i\leq q-1\) with \(a\geq 1\). However, by letting \(J=0\) if \(b=\frac{\ell}{2}\) and \(J=\sum_{i=q}^{\ell-q-1}x_{v_{q+1+i}}+\sum_{i=q}^{\ell-q-1}x_{w_{q+i}}\) otherwise, we have \[0\geq \sigma_{T}(T_{2})-\sigma_{T}(T_{1})\] \[= \sum_{i=0}^{\ell-q-1}x_{v_{q+1+i}}+\sum_{i=1}^{q-a-1}x_{w_{q+i}}+\sum_{i=q-a}^{\ell-q-1}x_{w_{q+i}}-\left(\sum_{i=0}^{q-1}x_{v_{q-i}}+\sum_{i=q-a}^{q-1}x_{w_{q-i}}\right)\] \[= \sum_{i=0}^{q-1}(x_{v_{q+1+i}}-x_{v_{q-i}})+\sum_{i=q-a}^{q-1}(x_{w_{q+i}}-x_{w_{q-i}})+\sum_{i=1}^{q-a-1}x_{w_{q+i}}+J\] \[> 0,\] a contradiction. It thus follows that \(\sigma_{T}(T_{1})<\sigma_{T}(T_{2})\).

**Lemma 3.2**.: _Let \(T=T(n,a,b)\), where \(a\geq 0\), \(b\geq a+2\) and \(2(a+b)<n-1\). Let \(\ell=n-a-b\). Let \(p=\lfloor\frac{\ell}{2}\rfloor\) and \(p_{1}=\lceil\frac{\ell}{2}\rceil\). Let \(x=x(T)\). Suppose that \(b<\frac{\ell}{2}\). Then_ _(i) \(x_{v_{p}}>x_{v_{p_{1}+1}}\);_ _(ii) \(x_{v_{i}}>x_{v_{\ell+1-i}}\) and \(x_{w_{i}}>x_{w_{\ell-i}}\) for \(i=1,\ldots,a\) with \(a\geq 1\), and \(x_{v_{a+1}}>x_{v_{\ell-a}}\)._

Proof.: Note that \(b\leq p\). We prove Item (i) by considering two cases. **Case 1.**\(\ell\) is odd and \(b=p\), i.e., \(b=\frac{\ell-1}{2}\). Let \(T_{1}\) be the component of \(T-\{v_{b},v_{b+1}\}\) containing \(v_{b}\), and \(T_{2}\) the component of \(T-\{v_{b+1},w_{b+1},v_{b+2}\}\) containing \(v_{b+2}\). Note that \(\sigma_{T}(T_{1})=\sum_{j=1}^{b}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\) and \(\sigma_{T}(T_{2})=\sum_{j=b+2}^{2b+1}x_{v_{j}}+\sum_{j=b+2}^{2b}x_{w_{j}}\). From the distance eigenequations of \(T\) at \(v_{b}\) and \(v_{b+2}\), we have \[\rho(T)(x_{v_{b}}-x_{v_{b+2}})=2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+x_{w_{b+1}}. \tag{3.1}\] Suppose that \(2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+x_{w_{b+1}}\leq 0\). Then \(\sigma_{T}(T_{2})<\sigma_{T}(T_{1})\).
**Claim i.**\(x_{v_{b-i}}\leq x_{v_{b+2+i}}\) for \(0\leq i\leq b-a-1\). We show this by induction on \(i\). For \(i=0\), from (3.1), we have \(x_{v_{b}}\leq x_{v_{b+2}}\). Suppose that \(1\leq i\leq b-a-1\) and \(x_{v_{b-j}}\leq x_{v_{b+2+j}}\) for \(0\leq j\leq i-1\). Then, we have by Lemma 2.1(i) that \[\rho(T)\left(x_{v_{b-i}}-x_{v_{b+2+i}}\right)-\rho(T)\left(x_{v_{b -(i-1)}}-x_{v_{b+2+(i-1)}}\right)\] \[= 2\left(\sum_{j=b+2+i}^{2b+1}x_{v_{j}}+\sum_{j=b+2+i}^{2b}x_{w_{j }}\right)-2\left(\sum_{j=1}^{b-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)+x_{ w_{b+1+i}}\] \[= 2\left(\sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{b+2+j}}-\sum_{j=b +2}^{b+1+i}x_{w_{j}}\right)-2\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{b- j}}\right)+x_{w_{b+1+i}}\] \[= 2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+2\sum_{j=0}^{i-1}\left(x_ {v_{b-j}}-x_{v_{b+2+j}}\right)-\left(2\sum_{j=b+2}^{b+1+i}x_{w_{j}}-x_{w_{b+1 +i}}\right)\] \[< 0,\] implying that \(x_{v_{b-i}}-x_{v_{b+2+i}}<x_{v_{b-(i-1)}}-x_{v_{b+2+(i-1)}}\leq 0\), so \(x_{v_{b-i}}<x_{v_{b+2+i}}\). Claim i follows. **Claim ii.**\(x_{v_{b-i}}\leq x_{v_{b+2+i}}\) and \(x_{w_{b-i}}\leq x_{w_{b+1+i}}\) for \(b-a\leq i\leq b-1\) with \(a\geq 1\). We show this by induction on \(i\). By Claim i, \(\sum_{j=0}^{b-a-1}\left(x_{v_{b-j}}-x_{v_{b+2+j}}\right)\leq 0\). By Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{a}}-x_{w_{2b+1-a}}\right)-\rho(T)\left(x_{v _{a+1}}-x_{v_{2b-a+1}}\right)\] \[= \sum_{j=b+2-a}^{2b+1}x_{v_{j}}+\sum_{j=2b+1-a}^{2b}x_{w_{j}}- \left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{b-a-1}x_{v_{b+2+j}}-\sum_{j=1}^{b-a- 1}x_{w_{b+1+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{b-a-1}x_{v_{b-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{b-a-1}\left(x_{v _{b-j}}-x_{v_{b+2+j}}\right)-\sum_{j=1}^{b-a-1}x_{w_{b+1+j}}\] \[< 0,\] implying that \((\rho(T)+1)(x_{w_{a}}-x_{w_{2b+1-a}})<\rho(T)(x_{v_{a+1}}-x_{v_{2b-a+1}})<0\), so \(x_{w_{a}}<x_{w_{2b+1-a}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{a}}-x_{v_{2b+2-a}}\right)-(\rho(T)+1)\left(x_{w _{a}}-x_{w_{2b+1-a}}\right)\] \[= \sum_{j=2b+2-a}^{2b+1}x_{v_{j}}+\sum_{j=2b+2-a}^{2b}x_{w_{j}}- \left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{b-a-1}x_{v_{b+2+j}}-\sum_{j=1}^{b-a }x_{w_{b+1+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{b-a-1}x_{v_{b-j}}-x_{w_{a}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{b-a-1}\left(x_{ v_{b-j}}-x_{v_{b+2+j}}\right)\] \[+(x_{w_{a}}-x_{w_{2b+1-a}})-\sum_{j=1}^{b-a-1}x_{w_{b+1+j}}\] \[< 0,\] so \(\rho(T)(x_{v_{a}}-x_{v_{2b+2-a}})<(\rho(T)+1)(x_{w_{a}}-x_{w_{2b+1-a}})<0\), and thus \(x_{v_{a}}<x_{v_{2b+2-a}}\). So Claim ii is true for \(i=b-a\). Suppose that \(b-a+1\leq i\leq b-1\) with \(a\geq 2\), \(x_{v_{b-j}}\leq x_{v_{b+2+j}}\) and \(x_{w_{b-j}}\leq x_{w_{b+1+j}}\) for \(b-a\leq j\leq i-1\). 
By Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{b-i}}-x_{w_{b+1+i}}\right)-\rho(T)\left(x_ {v_{b-(i-1)}}-x_{v_{b+2+(i-1)}}\right)\] \[= \sum_{j=b+2+i}^{2b+1}x_{v_{j}}+\sum_{j=b+1+i}^{2b}x_{w_{j}}- \left(\sum_{j=1}^{b-i}x_{v_{j}}+\sum_{j=1}^{b-i}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{b+2+j}}-\sum_{j=1}^{i-1} x_{w_{b+1+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{b-j}}-\sum_{j=b-a}^{i-1} x_{w_{b-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v _{b-j}}-x_{v_{b+2+j}}\right)\] \[+\sum_{j=b-a}^{i-1}\left(x_{w_{b-j}}-x_{w_{b+1+i}}\right)-\sum_{ j=1}^{b-a-1}x_{w_{b+1+j}}\] \[< 0,\] so \((\rho(T)+1)(x_{w_{b-i}}-x_{w_{b+1+i}})<\rho(T)(x_{v_{b-(i-1)}}-x_{v_{b+2+(i-1 )}})\leq 0\). Thus \(x_{w_{b-i}}<x_{w_{b+1+i}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{b-i}}-x_{v_{b+2+i}}\right)-(\rho(T)+1)\left(x_ {w_{b-i}}-x_{w_{b+1+i}}\right)\] \[= \sum_{j=b+2+i}^{2b+1}x_{v_{j}}+\sum_{j=b+2+i}^{2b}x_{w_{j}}- \left(\sum_{j=1}^{b-i}x_{v_{j}}+\sum_{j=1}^{b-i-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{w_{b+2+j}}-\sum_{j=1}^{i}x_{w_{b+ 1+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{b-j}}-\sum_{j=b-a}^{i}x_{w_{ p-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v_ {b-j}}-x_{v_{b+2+j}}\right)\] \[-\sum_{j=1}^{b-a-1}x_{w_{b+1+j}}+\sum_{j=b-a}^{i}\left(x_{w_{b-j}} -x_{w_{b+1+j}}\right)\] \[< 0,\] so \(\rho(T)(x_{v_{b-i}}-x_{v_{b+2+i}})<(\rho(T)+1)(x_{w_{b-i}}-x_{w_{b+1+i}})<0\), and thus \(x_{v_{b-i}}<x_{v_{b+2+i}}\). This proves Claim ii. Combining Claims i and ii, we have \(x_{v_{b-i}}\leq x_{v_{b+2+i}}\) for \(0\leq i\leq b-1\), and \(x_{w_{b-i}}\leq x_{w_{b+1+i}}\) for \(b-a\leq i\leq b-1\) with \(a\geq 1\). However, \[0\geq \sigma_{T}(T_{2})-\sigma_{T}(T_{1})\] \[= \sum_{i=0}^{b-1}x_{v_{b+2+i}}+\sum_{i=1}^{b-a-1}x_{w_{b+1+i}}+\sum _{i=b-a}^{b-1}x_{w_{b+1+i}}-\left(\sum_{i=0}^{b-1}x_{v_{b-i}}+\sum_{i=b-a}^{b-1 }x_{w_{b-i}}\right)\] \[= \sum_{i=0}^{b-1}(x_{v_{b+2+i}}-x_{v_{b-i}})+\sum_{i=b-a}^{b-1}(x_ {w_{b+1+i}}-x_{w_{b-i}})+\sum_{i=1}^{b-a-1}x_{w_{b+1+i}}\] \[> 0,\] which is a contradiction. Therefore, \(2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+x_{w_{b+1}}>0\), and from (3.1), we have \(x_{v_{b}}>x_{v_{b+2}}\), i.e., \(x_{v_{p}}>x_{v_{p_{1}+1}}\). **Case 2.**\(b<p\). Let \(e=\{v_{p},v_{p_{1}+1}\}\) if \(\ell\) is even, and \(e=\{v_{p},v_{p_{1}}\}\) otherwise. As \(b<p\), we have \(e\in E(T)\). Let \(T_{1}\) be the component of \(T-e\) containing \(v_{p}\), and \(T_{2}\) the component of \(T-v_{p_{1}}v_{p_{1}+1}\) containing \(v_{p_{1}+1}\). Note that \(\sigma_{T}(T_{1})=\sum_{j=1}^{p}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\) and \(\sigma_{T}(T_{2})=\sum_{j=p_{1}+1}^{\ell}x_{v_{j}}+\sum_{j=l-b}^{\ell-1}x_{w_{j}}\). In the following, we show that \(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})>0\). Suppose that this is not true, i.e., \(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\leq 0\). **Claim a.**\(x_{v_{p-i}}\leq x_{v_{p_{1}+1+i}}\) for \(0\leq i\leq p-a-1\). We show this by induction on \(i\). For \(i=0\), from the distance eigenequations of \(T\) at \(v_{p}\) and \(v_{p_{1}+1}\), we get \[\rho(T)(x_{v_{p}}-x_{v_{p_{1}+1}})=(p_{1}+1-p)(\sigma_{T}(T_{2})-\sigma_{T}(T_ {1})). \tag{3.2}\] so \(x_{v_{p}}\leq x_{v_{p_{1}+1}}\). Suppose that \(1\leq i\leq p-a-1\) and \(x_{v_{p-j}}\leq x_{v_{p_{1}+1+j}}\) for \(0\leq j\leq i-1\). We consider \(i\leq p-b-1\) with \(p-b\geq 2\), and \(i\geq p-b\) separately. 
In the former case, we have by Lemma 2.1(i) that \[\rho(T)\left(x_{v_{p-i}}-x_{v_{p_{1}+1+i}}\right)-\rho(T)\left(x_{ v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}}\right)\] \[= 2\left(\sum_{j=p_{1}+1+i}^{\ell}x_{v_{j}}+\sum_{j=\ell-b}^{ \ell-1}x_{w_{j}}\right)-2\left(\sum_{j=1}^{p-i}x_{v_{j}}+\sum_{j=\ell}^{a}x_{ w_{j}}\right)\] \[= 2\left(\sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{p_{1}+1+i}}\right)-2 \left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{p-j}}\right)\] \[= 2\left(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})\right)+2\sum_{j=0}^{i- 1}\left(x_{v_{p-j}}-x_{v_{p_{1}+1+i}}\right)\] \[\leq 0,\] so \(x_{v_{p-i}}-x_{v_{p_{1}+1+i}}\leq x_{v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}}\leq 0\), and thus \(x_{v_{p-i}}\leq x_{v_{p_{1}+1+i}}\). In the latter case, we have by Lemma 2.1(i) that \[\rho(T)\left(x_{v_{p-i}}-x_{v_{p_{1}+1+i}}\right)-\rho(T)\left(x_{ v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}}\right)\] \[= 2\left(\sum_{j=p_{1}+1+i}^{\ell}x_{v_{j}}+\sum_{j=p_{1}+1+i}^{ \ell-1}x_{w_{j}}\right)-2\left(\sum_{j=1}^{p-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j }}\right)+x_{w_{p_{1}+i}}\] \[= 2\left(\sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{p_{1}+1+i}}-\sum_ {j=\ell-b}^{p_{1}+i}x_{w_{j}}\right)-2\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1 }x_{v_{p-j}}\right)+x_{w_{p_{1}+i}}\] \[= 2(\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+2\sum_{j=0}^{i-1}\left(x_ {v_{p-j}}-x_{v_{p_{1}+1+j}}\right)-2\sum_{j=\ell-b}^{p_{1}+i}x_{w_{j}}+x_{w_{p _{1}+i}}\] \[< 0,\] so \(x_{v_{p-i}}-x_{v_{p_{1}+1+i}}<x_{v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}}\leq 0\), and thus \(x_{v_{p-i}}<x_{v_{p_{1}+1+i}}\). This proves Claim a. **Claim b.**\(x_{v_{p-i}}\leq x_{v_{p_{1}+1+i}}\) and \(x_{w_{p-i}}\leq x_{w_{p_{1}+i}}\) for \(p-a\leq i\leq p-1\) with \(a\geq 1\) We show this by induction on \(i\). By Claim a, \(\sum_{j=0}^{p-a-1}\left(x_{v_{p-j}}-x_{v_{p_{1}+1+j}}\right)\leq 0\). By Lemma 2.1(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{a}}-x_{w_{l-a}}\right)-\rho(T) \left(x_{v_{a+1}}-x_{v_{l-a}}\right)\] \[= \sum_{j=\ell+1-a}^{\ell}x_{v_{j}}+\sum_{j=\ell-a}^{\ell-1}x_{w_{j }}-\left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{p-a-1}x_{v_{p_{1}+1+j}}-\sum_{j=p-b }^{p-a-1}x_{w_{p_{1}+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{p-a-1}x_{v_{p-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{p-a-1}\left(x_ {v_{p-j}}-x_{v_{p_{1}+1+j}}\right)-\sum_{j=p-b}^{p-a-1}x_{w_{p_{1}+i}}\] \[< 0,\] so \((\rho(T)+1)(x_{w_{a}}-x_{w_{\ell-a}})<\rho(T)(x_{v_{a+1}}-x_{v_{\ell-a}})<0\), and thus \(x_{w_{a}}<x_{w_{\ell-a}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{a}}-x_{v_{\ell+1-a}}\right)-(\rho(T)+1)\left(x _{w_{a}}-x_{w_{\ell-a}}\right)\] \[= \sum_{j=\ell+1-a}^{\ell}x_{v_{j}}+\sum_{j=\ell+1-a}^{\ell-1}x_{w_ {j}}-\left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{p-a-1}x_{v_{p_{1}+1+j}}-\sum_{j=p-b}^{p- a}x_{w_{p_{1}+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{p-a-1}x_{v_{p-j}}-x_{w_{a}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{p-a-1}\left(x_{v _{p-j}}-x_{v_{p_{1}+1+j}}\right)\] \[-\sum_{j=p-b}^{p-a-1}x_{w_{p_{1}+j}}+(x_{w_{a}}-x_{w_{l-a}})\] \[< 0,\] so \(\rho(T)(x_{v_{a}}-x_{v_{\ell+1-a}})<(\rho(T)+1)(x_{w_{a}}-x_{w_{\ell-a}})<0\), and thus \(x_{v_{a}}<x_{v_{\ell+1-a}}\). So Claim b is true for \(i=p-a\). Suppose that \(p-a+1\leq i\leq p-1\) with \(a\geq 2\), \(x_{v_{p-j}}\leq x_{v_{p_{1}+1+j}}\) and \(x_{w_{p-j}}\leq x_{w_{p_{1}+j}}\) for \(p-a\leq j\leq i-1\). 
By Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{p-i}}-x_{w_{p_{1}+i}}\right)-\rho(T)\left(x_{v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}}\right)\] \[= \sum_{j=p_{1}+1+i}^{\ell}x_{v_{j}}+\sum_{j=p_{1}+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{p-i}x_{v_{j}}+\sum_{j=1}^{p-i}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{p_{1}+1+j}}-\sum_{j=p-b}^{i-1}x_{w_{p_{1}+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{p-j}}-\sum_{j=p-a}^{i-1}x_{w_{p-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v_{p-j}}-x_{v_{p_{1}+1+j}}\right)\] \[-\sum_{j=p-b}^{p-a-1}x_{w_{p_{1}+j}}+\sum_{j=p-a}^{i-1}\left(x_{w_{p-j}}-x_{w_{p_{1}+j}}\right)\] \[< 0,\] so \((\rho(T)+1)(x_{w_{p-i}}-x_{w_{p_{1}+i}})<\rho(T)(x_{v_{p-(i-1)}}-x_{v_{p_{1}+1+(i-1)}})\leq 0\), and thus \(x_{w_{p-i}}<x_{w_{p_{1}+i}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{p-i}}-x_{v_{p_{1}+1+i}}\right)-(\rho(T)+1)\left(x_{w_{p-i}}-x_{w_{p_{1}+i}}\right)\] \[= \sum_{j=p_{1}+1+i}^{\ell}x_{v_{j}}+\sum_{j=p_{1}+1+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{p-i}x_{v_{j}}+\sum_{j=1}^{p-i-1}x_{w_{j}}\right)\] \[= \sigma_{T}(T_{2})-\sum_{j=0}^{i-1}x_{v_{p_{1}+1+j}}-\sum_{j=p-b}^{i}x_{w_{p_{1}+j}}-\left(\sigma_{T}(T_{1})-\sum_{j=0}^{i-1}x_{v_{p-j}}-\sum_{j=p-a}^{i}x_{w_{p-j}}\right)\] \[= (\sigma_{T}(T_{2})-\sigma_{T}(T_{1}))+\sum_{j=0}^{i-1}\left(x_{v_{p-j}}-x_{v_{p_{1}+1+j}}\right)\] \[-\sum_{j=p-b}^{p-a-1}x_{w_{p_{1}+j}}+\sum_{j=p-a}^{i}\left(x_{w_{p-j}}-x_{w_{p_{1}+j}}\right)\] \[< 0,\] so \(\rho(T)(x_{v_{p-i}}-x_{v_{p_{1}+1+i}})<(\rho(T)+1)(x_{w_{p-i}}-x_{w_{p_{1}+i}})<0\), and thus \(x_{v_{p-i}}<x_{v_{p_{1}+1+i}}\). This proves Claim b. Combining Claims a and b, one has \(x_{v_{p-i}}\leq x_{v_{p_{1}+1+i}}\) for \(0\leq i\leq p-1\), and \(x_{w_{p-i}}\leq x_{w_{p_{1}+i}}\) for \(p-a\leq i\leq p-1\) with \(a\geq 1\). However, \[0\geq \sigma_{T}(T_{2})-\sigma_{T}(T_{1})\] \[= \sum_{i=0}^{p-1}x_{v_{p_{1}+1+i}}+\sum_{i=p-b}^{p-a-1}x_{w_{p_{1}+i}}+\sum_{i=p-a}^{p-1}x_{w_{p_{1}+i}}-\left(\sum_{i=0}^{p-1}x_{v_{p-i}}+\sum_{i=p-a}^{p-1}x_{w_{p-i}}\right)\] \[= \sum_{i=0}^{p-1}(x_{v_{p_{1}+1+i}}-x_{v_{p-i}})+\sum_{i=p-a}^{p-1}(x_{w_{p_{1}+i}}-x_{w_{p-i}})+\sum_{i=p-b}^{p-a-1}x_{w_{p_{1}+i}}\] \[> 0,\] a contradiction. Therefore, \(\sigma_{T}(T_{2})-\sigma_{T}(T_{1})>0\), and from (3.2), we have \(x_{v_{p}}>x_{v_{p_{1}+1}}\). By combining the above two cases, we have \(x_{v_{p}}>x_{v_{p_{1}+1}}\), so Item (i) follows. In the following, we prove Item (ii). Recall that \(a\geq 1\). Suppose that \(x_{v_{1}}\leq x_{v_{\ell}}\). First we show that \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\) and \(x_{w_{i}}\leq x_{w_{\ell-i}}\) for \(1\leq i\leq a\) by induction on \(i\). For \(i=1\), by Lemma 2.1(ii), we have \[(\rho(T)+1)(x_{v_{1}}-x_{v_{\ell}})=(\rho(T)+1)(x_{w_{1}}-x_{w_{\ell-1}}).\] As \(x_{v_{1}}\leq x_{v_{\ell}}\), we get \(x_{w_{1}}\leq x_{w_{\ell-1}}\). Suppose that \(2\leq i\leq a\) with \(a\geq 2\), \(x_{v_{j}}\leq x_{v_{\ell+1-j}}\) and \(x_{w_{j}}\leq x_{w_{\ell-j}}\) for \(1\leq j\leq i-1\).
By Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{i-1}}-x_{w_{\ell-(i-1)}}\right)-\rho(T) \left(x_{v_{i}}-x_{v_{l+1-i}}\right)\] \[= \sum_{j=\ell-(i-1)+1}^{\ell}x_{v_{j}}+\sum_{j=\ell-(i-1)}^{\ell-1 }x_{w_{j}}-\left(\sum_{j=1}^{i-1}x_{v_{j}}+\sum_{j=1}^{i-1}x_{w_{j}}\right)\] \[= \sum_{j=1}^{i-1}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+\sum_{j=1 }^{i-1}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)\] \[\geq 0,\] so \(\rho(T)(x_{v_{i}}-x_{v_{\ell+1-i}})\leq(\rho(T)+1)(x_{w_{i-1}}-x_{w_{\ell-(i-1 )}})\leq 0\), and thus \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\). On the other hand, by Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{i}}-x_{v_{\ell+1-i}}\right)-(\rho(T)+1)\left(x_ {w_{i}}-x_{w_{\ell-i}}\right)\] \[= \sum_{j=\ell+1+i}^{\ell}x_{v_{j}}+\sum_{j=\ell+1+i}^{\ell-1}x_{w_ {j}}-\left(\sum_{j=1}^{i}x_{v_{j}}+\sum_{j=1}^{i-1}x_{w_{j}}\right)\] \[= \sum_{j=1}^{i}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+\sum_{j=1}^{ i-1}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)\] \[\geq 0,\] so \((\rho(T)+1)(x_{w_{i}}-x_{w_{\ell-i}})\leq\rho(T)(x_{v_{i}}-x_{v_{\ell+1-i}})\leq 0\), and thus \(x_{w_{i}}\leq x_{w_{\ell-i}}\). Therefore, \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\) and \(x_{w_{i}}\leq x_{w_{\ell-i}}\) if \(1\leq i\leq a\). Next we show that \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\) for \(a+1\leq i\leq p\) by induction on \(i\). By above proof, \(\sum_{j=1}^{a}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)\geq 0\) and \(\sum_{j=1}^{a}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)\geq 0\). For \(i=a+1\), by Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{a}}-x_{w_{\ell-a}}\right)-\rho(T)\left(x_{v _{a+1}}-x_{v_{\ell-a}}\right)\] \[= \sum_{j=\ell-a+1}^{\ell}x_{v_{j}}+\sum_{j=\ell-a}^{\ell-1}x_{w_{j }}-\left(\sum_{j=1}^{a}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)\] \[= \sum_{j=1}^{a}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+\sum_{j=1} ^{a}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)\] \[\geq 0,\] so \(\rho(T)(x_{v_{a+1}}-x_{v_{\ell-a}})\leq(\rho(T)+1)(x_{w_{a}}-x_{w_{\ell-a}})\leq 0\), and thus \(x_{v_{a+1}}\leq x_{v_{\ell-a}}\). Suppose that \(a+2\leq i\leq p\) and \(x_{v_{j}}\leq x_{v_{\ell+1-j}}\) for \(a+1\leq j\leq i-1\). Let \(E=0\) if \(i=a+2\), and \(E=2\sum_{j=\ell+1-(i-1)}^{\ell-a-1}x_{w_{j}}\) otherwise. Let \(F=x_{w_{\ell+1-i}}\) if \(i\leq b+1\) with \(b<p\), or \(i\leq b\) with \(b=p\), and \(F=0\) otherwise. Evidently, \(E\geq 0\) and \(F\geq 0\). By Lemma 2.1(i), we have \[\rho(T)\left(x_{v_{i-1}}-x_{v_{\ell+1-(i-1)}}\right)-\rho(T)\left( x_{v_{i}}-x_{v_{\ell+1-i}}\right)\] \[= 2\left(\sum_{j=\ell+1-(i-1)}^{\ell}x_{v_{j}}+\sum_{j=\ell+1-(i-1) }^{\ell-1}x_{w_{j}}\right)-2\left(\sum_{j=1}^{i-1}x_{v_{j}}+\sum_{j=1}^{a}x_{w _{j}}\right)+F\] \[= 2\sum_{j=1}^{i-1}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+2\sum_ {j=1}^{a}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)+E+F\] \[\geq 0,\] so \(x_{v_{i}}-x_{v_{\ell+1-i}}\leq x_{v_{i-1}}-x_{v_{\ell+1-(i-1)}}\leq 0\), and thus \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\). Now it follows that \(x_{v_{i}}\leq x_{v_{\ell+1-i}}\) for \(1\leq i\leq p\). In particular, we have \(x_{v_{p}}\leq x_{v_{p_{1}+1}}\), contradicting to Item (i). Therefore, we have \(x_{v_{1}}>x_{v_{\ell}}\). So, by similar arguments as above by induction on \(i\), we have \(x_{v_{i}}>x_{v_{\ell+1-i}}\) and \(x_{w_{i}}>x_{w_{\ell-i}}\) for \(i=1,\ldots,a\), and \(x_{v_{a+1}}>x_{v_{\ell-a}}\). This is Item (ii). **Lemma 3.3**.: _Let \(T=T(n,a,b)\), where \(a\geq 0\), \(b\geq a+2\) and \(2(a+b)<n-1\). Let \(\ell=n-a-b\). Let \(x=x(T)\). If \(b<\frac{\ell}{2}\). 
Then_ _(i) \(x_{v_{i}}-x_{v_{\ell+1-i}}<x_{v_{i+1}}-x_{v_{\ell+1-(i+1)}}\) and \(x_{w_{i}}-x_{w_{\ell-i}}<x_{v_{i+1}}-x_{v_{\ell-i}}\) for \(i=1,\ldots,a\) with \(a\geq 1\);_ _(ii) \(x_{v_{a+1+i}}-x_{v_{\ell-b+1-i}}>x_{v_{a+1+(i+1)}}-x_{v_{\ell-b+1-(i+1)}}>0\) for \(i=1,\ldots,\lfloor\frac{\ell-b-a-1}{2}\rfloor-1\);_ _(iii) \(x_{v_{\ell-b+1}}<x_{v_{\ell-a}}\)._ Proof.: By Lemma 2.1(i) and Lemma 3.2(ii), we have \[\rho(T)\left(x_{v_{1}}-x_{v_{\ell}}\right)-\rho(T)\left(x_{v_{2}}-x _{v_{\ell-1}}\right)= 2(x_{v_{\ell}}-x_{v_{1}})+(x_{w_{\ell-1}}-x_{w_{1}})\] \[< 0\] and \[\rho(T)\left(x_{v_{i}}-x_{v_{\ell+1-i}}\right)-\rho(T)\left(x_{v_{i+1 }}-x_{v_{\ell+1-(i+1)}}\right)\] \[= 2\left(\sum_{j=\ell+1-i}^{\ell}x_{v_{j}}+\sum_{j=\ell+1-i}^{\ell -1}x_{w_{j}}\right)-2\left(\sum_{j=1}^{i}x_{v_{j}}+\sum_{j=1}^{i-1}x_{w_{j}} \right)+x_{w_{\ell-i}}-x_{w_{i}}\] \[= 2\sum_{j=1}^{i}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+2\sum_{j =1}^{i-1}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)+x_{w_{\ell-i}}-x_{w_{i}}\] \[< 0\] for \(i=2,\ldots,a\). So \(x_{v_{i}}-x_{v_{\ell+1-i}}<x_{v_{i+1}}-x_{v_{\ell+1-(i+1)}}\) for \(i=1,\ldots,a\). This proves the first part of Item (i). For \(i=1,\ldots,a\), by Lemma 2.1(ii) and Lemma 3.2(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{i}}-x_{w_{\ell-i}}\right)-\rho( T)\left(x_{v_{i+1}}-x_{v_{\ell+1-(i+1)}}\right)\] \[= \sum_{j=\ell-i+1}^{\ell}x_{v_{j}}+\sum_{j=\ell-i}^{\ell-1}x_{w_{ j}}-\left(\sum_{j=1}^{i}x_{v_{j}}+\sum_{j=1}^{i}x_{w_{j}}\right)\] \[= \sum_{j=1}^{i}\left(x_{v_{\ell+1-j}}-x_{v_{j}}\right)+\sum_{j=1}^ {i}\left(x_{w_{\ell-j}}-x_{w_{j}}\right)\] \[< 0,\] so \[0<(\rho(T)+1)(x_{w_{i}}-x_{w_{\ell-i}})<\rho(T)(x_{v_{i+1}}-x_{v_{\ell+1-(i+1) }}),\] and thus \(x_{w_{i}}-x_{w_{\ell-i}}<x_{v_{i+1}}-x_{v_{\ell-i}}\) for \(i=1,\ldots,a\). This proves the second part of Item (i). Next we prove Item (ii). Let \(t=\lfloor\frac{\ell-b+a+1}{2}\rfloor\) and \(t_{1}=\lceil\frac{\ell-b+a+1}{2}\rceil\). It suffices to prove that \(x_{v_{t-i}}-x_{t_{1}+1+i}>x_{v_{t-(i-1)}}-x_{v_{t_{1}+1+(i-1)}}>0\) for \(0\leq i\leq t-(a+2)\). Let \(S_{1}=\sum_{j=1}^{t}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\) and \(S_{2}=\sum_{j=t_{1}+1}^{\ell}x_{v_{j}}+\sum_{j=\ell-b}^{\ell-1}x_{w_{j}}\). First we show that \(S_{1}<S_{2}\). As in the proof of Lemma 3.2, let \(p=\lfloor\frac{\ell}{2}\rfloor\) and \(p_{1}=\lceil\frac{\ell}{2}\rceil\). Recall that \(b\leq p\). If \(\ell\) is odd and \(b=p\) (i.e., \(b=\frac{\ell-1}{2}\)), then, by the arguments in Case 1 of the proof of Item (i) in Lemma 3.2, we have \[\sum_{j=1}^{b}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}<\sum_{j=b+2}^{2b+1}x_{v_{j}}+ \sum_{j=b+2}^{2b}x_{w_{j}}+\frac{1}{2}x_{w_{b+1}}<\sum_{j=b+1}^{2b+1}x_{v_{j}}+ \sum_{j=b+1}^{2b}x_{w_{j}}.\] As \(t_{1}\leq b\), we have \(S_{1}<S_{2}\). Suppose that \(b<p\). Then, by the arguments in Case 2 of the proof of Item (i) in Lemma 3.2, we have \[\sum_{j=1}^{p}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}<\sum_{j=p_{1}+1}^{\ell}x_{v_{j}}+ \sum_{j=\ell-b}^{\ell-1}x_{w_{j}}.\] As \(t\leq p\) and \(t_{1}\leq p_{1}\), we have \(S_{1}<S_{2}\). It follows that \(S_{1}<S_{2}\) in either case. Next we prove that \(x_{v_{t-i}}>x_{v_{t_{1}+1+i}}\) for \(0\leq i\leq t-(a+2)\) by induction on \(i\). For \(i=0\), from the distance eigenequations of \(T\) at \(v_{t}\) and \(v_{t_{1}+1}\), we have \(\rho(T)(x_{v_{t}}-x_{v_{t_{1}+1}})=(t_{1}+1-t)(S_{2}-S_{1})>0\), so \(x_{v_{t}}>x_{v_{t_{1}+1}}\). Suppose that \(1\leq i\leq t-(a+2)\), and \(x_{v_{t-j}}>x_{v_{t_{1}+1+j}}\) for \(0\leq j\leq i-1\). 
By Lemma 2.1(i), we have \[\rho(T)\left(x_{v_{t-i}}-x_{v_{t_{1}+1+i}}\right)-\rho(T)\left(x_{v_{t-(i-1)}}-x_{v_{t_{1}+1+(i-1)}}\right)\] \[= 2\left(\sum_{j=t_{1}+1+i}^{\ell}x_{v_{j}}+\sum_{j=\ell-b}^{\ell-1}x_{w_{j}}\right)-2\left(\sum_{j=1}^{t-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}\right)\] \[= 2\left(S_{2}-\sum_{j=0}^{i-1}x_{v_{t_{1}+1+j}}\right)-2\left(S_{1}-\sum_{j=0}^{i-1}x_{v_{t-j}}\right)\] \[= 2\left(S_{2}-S_{1}\right)+2\sum_{j=0}^{i-1}\left(x_{v_{t-j}}-x_{v_{t_{1}+1+j}}\right)\] \[> 0,\] so \(x_{v_{t-i}}-x_{v_{t_{1}+1+i}}>x_{v_{t-(i-1)}}-x_{v_{t_{1}+1+(i-1)}}>0\) for \(1\leq i\leq t-(a+2)\). This proves Item (ii). In the following, we prove Item (iii). By Lemma 3.2(ii), we have \(x_{v_{i}}>x_{v_{\ell+1-i}}\) and \(x_{w_{i}}>x_{w_{\ell-i}}\) for \(1\leq i\leq a\) with \(a\geq 1\). Thus \[\sum_{i=1}^{a}(x_{v_{i}}+x_{w_{i}})>\sum_{i=\ell-a+1}^{\ell}x_{v_{i}}+\sum_{i=\ell-a}^{\ell-1}x_{w_{i}}.\] Let \(m=\ell-\lfloor\frac{a+b}{2}\rfloor\). We consider two cases. **Case 1.**\(a+b\) is even, i.e., \(b-a\) is even. Let \(R_{1}=\sum_{j=1}^{m}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-1}x_{w_{j}}\) and \(R_{2}=\sum_{j=m+1}^{\ell}x_{v_{j}}+\sum_{j=m+1}^{\ell-1}x_{w_{j}}\). We claim that \(x_{v_{m-i}}-x_{v_{m+1+i}}\) and \(R_{2}-R_{1}\) have common sign for \(0\leq i\leq\frac{b-a}{2}-1\), and \(x_{w_{m-i}}-x_{w_{m+i}}\) and \(R_{2}-R_{1}\) have common sign for \(1\leq i\leq\frac{b-a}{2}-1\). For \(i=0\), from the distance eigenequations of \(T\) at \(v_{m}\) and \(v_{m+1}\), we have \(\rho(T)(x_{v_{m}}-x_{v_{m+1}})=R_{2}-R_{1}\), so \(x_{v_{m}}-x_{v_{m+1}}\) and \(R_{2}-R_{1}\) have common sign. For \(i=1\) with \(b-a\geq 4\), by Lemma 2.1(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{m-1}}-x_{w_{m+1}}\right)-\rho(T)\left(x_{v_{m}}-x_{v_{m+1}}\right)\] \[= \sum_{j=m+2}^{\ell}x_{v_{j}}+\sum_{j=m+1}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{m-1}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-1}x_{w_{j}}\right)\] \[= \left(R_{2}-x_{v_{m+1}}\right)-\left(R_{1}-x_{v_{m}}\right),\] so \(\left(\rho(T)+1\right)\!\left(x_{w_{m-1}}-x_{w_{m+1}}\right)=\left(R_{2}-R_{1}\right)+\left(x_{v_{m}}-x_{v_{m+1}}\right)+\rho(T)\left(x_{v_{m}}-x_{v_{m+1}}\right)\), and thus \(x_{w_{m-1}}-x_{w_{m+1}}\) and \(R_{2}-R_{1}\) have common sign. By Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{m-1}}-x_{v_{m+2}}\right)-\left(\rho(T)+1\right)\left(x_{w_{m-1}}-x_{w_{m+1}}\right)\] \[= \sum_{j=m+2}^{\ell}x_{v_{j}}+\sum_{j=m+2}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{m-1}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-2}x_{w_{j}}\right)\] \[= \left(R_{2}-x_{v_{m+1}}-x_{w_{m+1}}\right)-\left(R_{1}-x_{v_{m}}-x_{w_{m-1}}\right),\] so \[\rho(T)\left(x_{v_{m-1}}-x_{v_{m+2}}\right)= \left(R_{2}-R_{1}\right)+\left(x_{v_{m}}-x_{v_{m+1}}\right)+\left(x_{w_{m-1}}-x_{w_{m+1}}\right)\] \[+\left(\rho(T)+1\right)\left(x_{w_{m-1}}-x_{w_{m+1}}\right),\] and thus \(x_{v_{m-1}}-x_{v_{m+2}}\) and \(R_{2}-R_{1}\) have common sign. Suppose that \(2\leq i\leq\frac{b-a}{2}-1\) with \(b-a\geq 6\), and \(x_{v_{m-j}}-x_{v_{m+1+j}}\) and \(R_{2}-R_{1}\) have common sign for \(0\leq j\leq i-1\), and \(x_{w_{m-j}}-x_{w_{m+j}}\) and \(R_{2}-R_{1}\) have common sign for \(1\leq j\leq i-1\).
By Lemma 2.1(ii), we have \[\left(\rho(T)+1\right)\left(x_{w_{m-i}}-x_{w_{m+i}}\right)-\rho(T)\left(x_{v_{m-(i-1)}}-x_{v_{m+1+(i-1)}}\right)\] \[= \sum_{j=m+1+i}^{\ell}x_{v_{j}}+\sum_{j=m+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{m-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-i}x_{w_{j}}\right)\] \[= \left(R_{2}-\sum_{j=0}^{i-1}x_{v_{m+1+j}}-\sum_{j=1}^{i-1}x_{w_{m+j}}\right)-\left(R_{1}-\sum_{j=0}^{i-1}x_{v_{m-j}}-\sum_{j=1}^{i-1}x_{w_{m-j}}\right),\] so \[\left(\rho(T)+1\right)\left(x_{w_{m-i}}-x_{w_{m+i}}\right)= \left(R_{2}-R_{1}\right)+\sum_{j=0}^{i-1}\left(x_{v_{m-j}}-x_{v_{m+1+j}}\right)\] \[+\sum_{j=1}^{i-1}(x_{w_{m-j}}-x_{w_{m+j}})\] \[+\rho(T)\left(x_{v_{m-(i-1)}}-x_{v_{m+1+(i-1)}}\right),\] and thus \(x_{w_{m-i}}-x_{w_{m+i}}\) and \(R_{2}-R_{1}\) have common sign. By Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{m-i}}-x_{v_{m+1+i}}\right)-\left(\rho(T)+1\right)\left(x_{w_{m-i}}-x_{w_{m+i}}\right)\] \[= \sum_{j=m+1+i}^{\ell}x_{v_{j}}+\sum_{j=m+1+i}^{\ell-1}x_{w_{j}}-\left(\sum_{j=1}^{m-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-i-1}x_{w_{j}}\right)\] \[= \left(R_{2}-\sum_{j=0}^{i-1}x_{v_{m+1+j}}-\sum_{j=1}^{i}x_{w_{m+j}}\right)-\left(R_{1}-\sum_{j=0}^{i-1}x_{v_{m-j}}-\sum_{j=1}^{i}x_{w_{m-j}}\right),\] so \[\rho(T)\left(x_{v_{m-i}}-x_{v_{m+1+i}}\right)= (R_{2}-R_{1})+\sum_{j=0}^{i-1}\left(x_{v_{m-j}}-x_{v_{m+1+j}}\right)\] \[+\sum_{j=1}^{i}(x_{w_{m-j}}-x_{w_{m+j}})+\left(\rho(T)+1\right)\left(x_{w_{m-i}}-x_{w_{m+i}}\right),\] and thus \(x_{v_{m-i}}-x_{v_{m+1+i}}\) and \(R_{2}-R_{1}\) have common sign. Now we have shown that \(x_{v_{m-i}}-x_{v_{m+1+i}}\) and \(R_{2}-R_{1}\) have common sign for \(0\leq i\leq\frac{b-a}{2}-1\), and \(x_{w_{m-i}}-x_{w_{m+i}}\) and \(R_{2}-R_{1}\) have common sign for \(1\leq i\leq\frac{b-a}{2}-1\), as claimed. Note that \[R_{2}-R_{1}< \left(\sum_{i=\ell-a+1}^{\ell}x_{v_{i}}+\sum_{i=\ell-a}^{\ell-1}x_{w_{i}}\right)+\left(\sum_{i=0}^{\frac{b-a}{2}-1}x_{v_{m+1+i}}+\sum_{i=1}^{\frac{b-a}{2}-1}x_{w_{m+i}}\right)\] \[-\sum_{i=1}^{a}\left(x_{v_{i}}+x_{w_{i}}\right)-\left(\sum_{i=0}^{\frac{b-a}{2}-1}x_{v_{m-i}}+\sum_{i=1}^{\frac{b-a}{2}-1}x_{w_{m-i}}\right)\] \[< -\left(\sum_{i=0}^{\frac{b-a}{2}-1}(x_{v_{m-i}}-x_{v_{m+1+i}})+\sum_{i=1}^{\frac{b-a}{2}-1}(x_{w_{m-i}}-x_{w_{m+i}})\right).\] This requires the above common sign to be \(-\), and thus \(x_{v_{m-\left(\frac{b-a}{2}-1\right)}}<x_{v_{m+1+\left(\frac{b-a}{2}-1\right)}}\), i.e., \(x_{v_{\ell-b+1}}<x_{v_{\ell-a}}\), as desired. **Case 2.**\(a+b\) is odd, i.e., \(b-a\) is odd. Let \(B_{1}=\sum_{j=1}^{m-1}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-1}x_{w_{j}}\) and \(B_{2}=\sum_{j=m+1}^{\ell}x_{v_{j}}+\sum_{j=m}^{\ell-1}x_{w_{j}}\). We claim that \(x_{w_{m-i}}-x_{w_{m-1+i}}\) and \(B_{2}-B_{1}\) have common sign, and \(x_{v_{m-i}}-x_{v_{m+i}}\) and \(B_{2}-B_{1}\) have common sign for \(1\leq i\leq\frac{b-a-1}{2}\). For \(i=1\), from the distance eigenequations of \(T\) at \(w_{m-1}\) and \(w_{m}\), we have \((\rho(T)+1)(x_{w_{m-1}}-x_{w_{m}})=B_{2}-B_{1}\), so \(x_{w_{m-1}}-x_{w_{m}}\) and \(B_{2}-B_{1}\) have common sign.
By Lemma 2.1(ii), we have \[\rho(T)\left(x_{v_{m-1}}-x_{v_{m+1}}\right)-\left(\rho(T)+1\right) \left(x_{w_{m-1}}-x_{w_{m}}\right)\] \[= \sum_{j=m+1}^{\ell}x_{v_{j}}+\sum_{j=m+1}^{\ell-1}x_{w_{j}}-\left( \sum_{j=1}^{m-1}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-2}x_{w_{j }}\right)\] \[= \left(B_{2}-x_{w_{m}}\right)-\left(B_{1}-x_{w_{m-1}}\right),\] so \[\rho(T)\left(x_{v_{m-1}}-x_{v_{m+1}}\right)= (B_{2}-B_{1})+\left(x_{w_{m-1}}-x_{w_{m}}\right)\] \[+\left(\rho(T)+1\right)\left(x_{w_{m-1}}-x_{w_{m}}\right),\] and thus \(x_{v_{m-1}}-x_{v_{m+1}}\) and \(B_{2}-B_{1}\) have common sign. Suppose that \(2\leq i\leq\frac{b-a-1}{2}\) with \(b-a\geq 5\), and \(x_{v_{m-j}}-x_{v_{m+j}}\) and \(B_{2}-B_{1}\) have common sign, and \(x_{w_{m-j}}-x_{w_{m-1+j}}\) and \(B_{2}-B_{1}\) have common sign for \(1\leq j\leq i-1\). By Lemma 2.1(ii), we have \[(\rho(T)+1)\left(x_{w_{m-i}}-x_{w_{m-1+i}}\right)-\rho(T)\left(x_ {v_{m-(i-1)}}-x_{v_{m+(i-1)}}\right)\] \[= \sum_{j=m+i}^{\ell}x_{v_{j}}+\sum_{j=m-1+i}^{\ell-1}x_{w_{j}}- \left(\sum_{j=1}^{m-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-i}x _{w_{j}}\right)\] \[= \left(B_{2}-\sum_{j=1}^{i-1}x_{v_{m+j}}-\sum_{j=1}^{i-1}x_{w_{m-1 +j}}\right)-\left(B_{1}-\sum_{j=1}^{i-1}x_{v_{m-j}}-\sum_{j=1}^{i-1}x_{w_{m-j} }\right),\] so \[(\rho(T)+1)\left(x_{w_{m-i}}-x_{w_{m-1+i}}\right)= (B_{2}-B_{1})+\sum_{j=1}^{i-1}\left(x_{v_{m-j}}-x_{v_{m+j}}\right)\] \[+\sum_{j=1}^{i-1}(x_{w_{m-j}}-x_{w_{m-1-j}})\] \[+\rho(T)\left(x_{v_{m-(i-1)}}-x_{v_{m+(i-1)}}\right),\] and thus \(x_{w_{m-i}}-x_{w_{m-1+i}}\) and \(B_{2}-B_{1}\) have common sign. By Lemma 2.1(ii) again, we have \[\rho(T)\left(x_{v_{m-i}}-x_{v_{m+i}}\right)-(\rho(T)+1)\left(x_{w _{m-i}}-x_{w_{m-1+i}}\right)\] \[= \sum_{j=m+i}^{\ell}x_{v_{j}}+\sum_{j=m+i}^{\ell-1}x_{w_{j}}- \left(\sum_{j=1}^{m-i}x_{v_{j}}+\sum_{j=1}^{a}x_{w_{j}}+\sum_{j=\ell-b}^{m-i-1} x_{w_{j}}\right)\] \[= \left(B_{2}-\sum_{j=1}^{i-1}x_{v_{m+j}}-\sum_{j=1}^{i}x_{w_{m-1+ j}}\right)-\left(B_{1}-\sum_{j=1}^{i-1}x_{v_{m-j}}-\sum_{j=1}^{i}x_{w_{m-j}} \right),\] so \[\rho(T)\left(x_{v_{m-i}}-x_{v_{m+i}}\right)= (B_{2}-B_{1})+\sum_{j=1}^{i-1}\left(x_{v_{m-j}}-x_{v_{m+j}}\right)\] \[+\sum_{j=1}^{i}(x_{w_{m-j}}-x_{w_{m-1+j}})\] \[+\left(\rho(T)+1\right)\left(x_{w_{m-i}}-x_{w_{m-1+i}}\right),\] and thus \(x_{v_{m-i}}-x_{v_{m+i}}\) and \(B_{2}-B_{1}\) have common sign. Now we conclude that \(x_{v_{m-i}}-x_{v_{m+i}}\) and \(B_{2}-B_{1}\) have common sign, and \(x_{w_{m-i}}-x_{w_{m-1+i}}\) and \(B_{2}-B_{1}\) have common sign for \(1\leq i\leq\frac{b-a-1}{2}\), as claimed. Note that \[B_{2}-B_{1}<\sum_{i=\ell-a+1}^{\ell}x_{v_{i}}+\sum_{i=\ell-a}^{\ell-1}x_{w_{i} }+\sum_{i=1}^{\frac{b-a-1}{2}}x_{v_{m+i}}+\sum_{i=1}^{\frac{b-a-1}{2}}x_{w_{m- 1+i}}\] \[-\sum_{i=1}^{a}\left(x_{v_{i}}+x_{w_{i}}\right)-\left(\sum_{i=1}^{b-a-1 }x_{v_{m-i}}+\sum_{i=1}^{b-a-1}x_{w_{m-i}}\right)\] \[< -\sum_{i=1}^{b-a-1}\left((x_{v_{m-i}}-x_{v_{m+i}})+(x_{w_{m-i}}-x_ {w_{m-1+i}})\right).\] This requires the above common sign to be \(-\), and thus \(x_{v_{m-\frac{b-a-1}{2}}}<x_{v_{m+\frac{b-a-1}{2}}}\), i.e., \(x_{v_{\ell-b+1}}<x_{v_{\ell-a}}\), as desired. Let \(G\) be a connected hypergraph. For \(u\in V(G)\), the status (or transmission) of \(u\) in \(G\), denoted by \(s_{G}(u)\), is defined to be the sum of distances from \(u\) to all other vertices of \(G\), i.e., the row sum of \(D(G)\) indexed by vertex \(u\), i.e., \(s_{G}(u)=\sum_{v\in V(G)}d_{G}(u,v)\). Let \(s(G)=\min\{s_{G}(u):u\in V(G)\}\). It is known that \(\rho(G)\geq s(G)\), see [8, p. 24, Theorem 1.1]. 
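With the helpers from the earlier sketches (names are our own assumptions), the bound \(\rho(G)\geq s(G)\) can be spot-checked numerically as follows.

```python
def min_status(adj):
    """s(G): the minimum row sum of the distance matrix D(G)."""
    return distance_matrix(adj).sum(axis=1).min()

G = saw_graph(1, 2, 3)          # a small cactus of order 2*3 + 3 + 1 = 10
print(min_status(G) <= rho(G))  # True, illustrating rho(G) >= s(G)
```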
**Lemma 3.4**.: _Let \(T=T(n,a,b)\), where \(a\geq 0\), \(b\geq a+2\) and \(2(a+b)<n-1\). Let \(\ell=n-a-b\) and \(r=\ell-b-a\). Let \(x=x(T)\). If \(b<\frac{\ell}{2}\), then \(\rho(T)>(2a+1)(r-1)+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)\)._ Proof.: As \(b<\frac{\ell}{2}\) and \(b\geq a+2\), we have \(r>2b-b-(b-2)=2\). Note that \[\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)<\sum_{i=1}^{\lfloor \frac{r-1}{2}\rfloor}(r-2i)+\left\lfloor\frac{r-1}{2}\right\rfloor.\] This is because \[\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)-\left( \sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)+\left\lfloor\frac{r-1}{2} \right\rfloor\right)\] \[= \frac{1}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)- \left\lfloor\frac{r-1}{2}\right\rfloor\] \[= \frac{\lfloor\frac{r-1}{2}\rfloor}{r-1}\cdot\frac{(r-2+r-2 \lfloor\frac{r-1}{2}\rfloor)}{2}-\left\lfloor\frac{r-1}{2}\right\rfloor\] \[= \left\lfloor\frac{r-1}{2}\right\rfloor\left(1-\frac{\lfloor\frac{ r-1}{2}\rfloor}{r-1}-1\right)\] \[= -\left\lfloor\frac{r-1}{2}\right\rfloor\frac{\lfloor\frac{r-1}{2} \rfloor}{r-1}\] \[< 0.\] Let \(T^{\prime}=T(\ell-b+3a+2,a,a+1)\). As \(T^{\prime}\) is a proper induced subgraph of \(T\) and the distance between any two vertices in \(T^{\prime}\) remains unchanged, we have \(s(T)>s(T^{\prime})\). Thus it is sufficient to prove that \(s(T^{\prime})\geq(2a+1)r+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)+ \lfloor\frac{r-1}{2}\rfloor\). Obviously, \(s_{T^{\prime}}(w_{i})\geq s_{T^{\prime}}(v_{i})\) for \(1\leq i\leq a\) and \(\ell-b\leq i\leq\ell-b+a\). Let \(t=\lfloor\frac{\ell-b+a+1}{2}\rfloor\) and \(t_{1}=\lceil\frac{\ell-b+a+1}{2}\rceil\). We claim that \(s(T^{\prime})=s_{T^{\prime}}(v_{t+1})\). We consider two cases. **Case 1.**\(r\) is even, i.e., \(\ell-b+a\) is even. Let \(V_{1}=\{v_{1},\ldots,v_{t+1},w_{1},\ldots,w_{a}\}\) and \(V_{2}=V(T^{\prime})\setminus V_{1}\). We have \(|V_{1}|=t+1+a\) and \(|V_{2}|=t+1+a\). As \(s_{T^{\prime}}(v_{t+1})-s_{T^{\prime}}(v_{t+2})=\sum_{u\in V_{1}}(d_{T^{\prime }}(v_{t+1},u)-d_{T^{\prime}}(v_{t+2},u))+\sum_{u\in V_{2}}(d_{T^{\prime}}(v_{t+ 1},u)-d_{T^{\prime}}(v_{t+2},u))=\sum_{u\in V_{1}}(-1)+\sum_{u\in V_{2}}1=-|V_{ 1}|+|V_{2}|=0\), we have \(s_{T^{\prime}}(v_{t+1})=s_{T^{\prime}}(v_{t+2})\). By similar argument as above, we have \(s_{T^{\prime}}(v_{i-1})>s_{T^{\prime}}(v_{i})\) for \(2\leq i\leq t+1\) and \(s_{T^{\prime}}(v_{i})<s_{T^{\prime}}(v_{i+1})\) for \(t+2\leq i\leq\ell-b+a+1\). Thus \(s(T^{\prime})=s_{T^{\prime}}(v_{t+1})\). **Case 2.**\(r\) is odd, i.e., \(\ell-b+a\) is odd. Let \(V_{1}=\{v_{1},\ldots,v_{t},w_{1},\ldots,w_{a}\}\) and \(V_{2}=V(T^{\prime})\setminus V_{1}\). We have \(|V_{1}|=t+a\) and \(|V_{2}|=t+a+1\). As \(s_{T^{\prime}}(v_{t})-s_{T^{\prime}}(v_{t+1})=\sum_{u\in V_{1}}(d_{T^{\prime}} (v_{t},u)-d_{T^{\prime}}(v_{t+1},u))+\sum_{u\in V_{2}}(d_{T^{\prime}}(v_{t},u) -d_{T^{\prime}}(v_{t+1},u))=\sum_{u\in V_{1}}(-1)+\sum_{u\in V_{2}}1=-|V_{1}|+ |V_{2}|=1\), we have \(s_{T^{\prime}}(v_{t})>s_{T^{\prime}}(v_{t+1})\). By similar argument as above, we have \(s_{T^{\prime}}(v_{i-1})>s_{T^{\prime}}(v_{i})\) for \(2\leq i\leq t\) and \(s_{T^{\prime}}(v_{i})<s_{T^{\prime}}(v_{i+1})\) for \(t+1\leq i\leq\ell-b+a+1\). Thus \(s(T^{\prime})=s_{T^{\prime}}(v_{t+1})\). 
Note that \[\sum_{i=1}^{a+1}\left(d_{T^{\prime}}(v_{t+1},v_{i})+d_{T^{\prime}}(v_{t+1},v_{\ell-b+a+2-i})\right)= \sum_{i=1}^{a+1}(\ell-b+a+2-2i)\] \[\geq \sum_{i=1}^{a+1}r\] \[= (a+1)r,\] and when \(a\geq 1\), \[\sum_{i=1}^{a}\left(d_{T^{\prime}}(v_{t+1},w_{i})+d_{T^{\prime}}(v_{t+1},w_{\ell-b+a+1-i})\right)= \sum_{i=1}^{a}(\ell-b+a+2-2i)\] \[\geq \sum_{i=1}^{a}(r+2)\] \[> ar\] and \[\sum_{i=0}^{t-(a+2)}\left(d_{T^{\prime}}(v_{t+1},v_{t-i})+d_{T^{\prime}}(v_{t+1},v_{t+1+i})\right)+d_{T^{\prime}}(v_{t+1},w_{\ell-b})\] \[= \sum_{i=0}^{t-(a+2)}\left(t_{1}-t+1+2i\right)+\left(\ell-b-t\right)\] \[= \sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)+\left\lceil\frac{r-1}{2}\right\rceil\] \[\geq \sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)+\left\lfloor\frac{r-1}{2}\right\rfloor.\] Thus \(s_{T^{\prime}}(v_{t+1})\geq(2a+1)r+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)+\left\lfloor\frac{r-1}{2}\right\rfloor\), as desired.

## 4 Graft transformations that increase the distance spectral radius

Let \(G\) be a hypergraph with \(u,v\in V(G)\) and \(e_{1},\ldots,e_{r}\in E(G)\) such that \(u\notin e_{i}\) and \(v\in e_{i}\) for \(1\leq i\leq r\). Let \(e^{\prime}_{i}=(e_{i}\setminus\{v\})\cup\{u\}\) for \(1\leq i\leq r\). Suppose that \(e^{\prime}_{i}\not\in E(G)\) for \(1\leq i\leq r\). Let \(G^{\prime}\) be the hypergraph with \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})=(E(G)\setminus\{e_{1},\ldots,e_{r}\})\cup\{e^{\prime}_{1},\ldots,e^{\prime}_{r}\}\). Then we say that \(G^{\prime}\) is obtained from \(G\) by moving edges \(e_{1},\ldots,e_{r}\) from \(v\) to \(u\).

**Lemma 4.1**.: _[_15_]_ _For \(t\geq 3\), let \(G\) be a hypergraph consisting of \(t\) connected subhypergraphs \(G_{1},\ldots,G_{t}\) such that \(|V(G_{i})|\geq 2\) for \(1\leq i\leq t\) and \(V(G_{i})\cap V(G_{j})=\{u\}\) for \(1\leq i<j\leq t\). Suppose that \(\emptyset\neq I\subseteq\{3,\ldots,t\}\). Let \(v\in V(G_{2})\setminus\{u\}\) and \(G^{\prime}\) be the hypergraph obtained from \(G\) by moving all the edges containing \(u\) in \(G_{i}\) for all \(i\in I\) from \(u\) to \(v\). If \(\sigma_{G}(G_{1})\geq\sigma_{G}(G_{2})\), then \(\rho(G)<\rho(G^{\prime})\)._

Let \(G\) be a hypergraph with \(e_{1},e_{2}\in E(G)\) and \(u_{1},\ldots,u_{s}\in V(G)\) such that \(u_{1},\ldots,u_{s}\notin e_{1}\) and \(u_{1},\ldots,u_{s}\in e_{2}\), where \(|e_{2}|-s\geq 2\). Let \(e^{\prime}_{1}=e_{1}\cup\{u_{1},\ldots,u_{s}\}\) and \(e^{\prime}_{2}=e_{2}\setminus\{u_{1},\ldots,u_{s}\}\). Suppose that \(e^{\prime}_{1},e^{\prime}_{2}\not\in E(G)\). Let \(G^{\prime}\) be the hypergraph with \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})=(E(G)\setminus\{e_{1},e_{2}\})\cup\{e^{\prime}_{1},e^{\prime}_{2}\}\). Then we say that \(G^{\prime}\) is obtained from \(G\) by moving vertices \(u_{1},\ldots,u_{s}\) from \(e_{2}\) to \(e_{1}\).

**Lemma 4.2**.: _[_15_]_ _For \(t\geq 3\), let \(G\) be a hypergraph with an edge \(e=\{w_{1},\ldots,w_{t}\}\), such that \(G-e\) consists of vertex-disjoint connected subhypergraphs \(H_{1},\ldots,H_{t}\), each containing exactly one vertex of \(e\). Let \(e\cap V(H_{i})=\{w_{i}\}\) for \(i=1,\ldots,t\). Suppose that \(|V(H_{i})|\geq 2\) for \(i=1,2\). Let \(\emptyset\neq I\subseteq\{3,\ldots,t\}\). Let \(e^{\prime}\in E(H_{2})\) and \(G^{\prime}\) be the hypergraph obtained from \(G\) by moving all the vertices in \(\{w_{i}:i\in I\}\) from \(e\) to \(e^{\prime}\).
If \(\sigma_{G}(H_{1})\geq\sigma_{G}(H_{2})\), then \(\rho(G)<\rho(G^{\prime})\)._ **Lemma 4.3**.: _Let \(v\) and \(w\) be two non-adjacent neighbors of a vertex \(u\) in a connected hypergraph \(G\). Let \(x=x(G)\). Then \(x_{w}+x_{u}-x_{v}>0\)._ Proof.: Let \(V_{1}=V(G)\setminus\{u,v,w\}\). For \(z\in V_{1}\), one has \(d_{G}(w,z)\geq 1\) and \(d_{G}(u,z)-d_{G}(v,z)\geq-d_{G}(u,v)=-1\), so \(d_{G}(w,z)+d_{G}(u,z)-d_{G}(v,z)\geq 0\). From the distance eigenequations of \(G\) at \(w\), \(u\) and \(v\), we have \[\rho(G)x_{w}=x_{u}+2x_{v}+\sum_{z\in V_{1}}d_{G}(w,z)x_{z},\] \[\rho(G)x_{u}=x_{w}+x_{v}+\sum_{z\in V_{1}}d_{G}(u,z)x_{z},\] and \[\rho(G)x_{v}=2x_{w}+x_{u}+\sum_{z\in V_{1}}d_{G}(v,z)x_{z}.\] Thus \[\rho(G)(x_{w}+x_{u}-x_{v})\] \[= -x_{w}+3x_{v}+\sum_{z\in V_{1}}(d_{G}(w,z)+d_{G}(u,z)-d_{G}(v,z))x_{z}\] \[\geq -x_{w}+3x_{v},\] which implies \((\rho(G)+1)(x_{w}+x_{u}-x_{v})\geq x_{u}+2x_{v}>0\). So it follows that \(x_{w}+x_{u}-x_{v}>0\). **Lemma 4.4**.: _Suppose that \(b\geq a+2\) and \(2(a+b)<n-1\). Then \(\rho(T(n,a+1,b-1))>\rho(T(n,a,b))\)._ Proof.: Let \(T=T(n,a,b)\) and \(x=x(T)\). Let \(\ell=n-a-b\) and \(r=\ell-b-a\). Let \(T^{\prime}\) be the hypergraph obtained from \(T\) by moving vertex \(w_{\ell-b}\) from \(e_{\ell-b}\) to \(e_{a+1}\). Obviously, \(T^{\prime}\cong T(n,a+1,b-1)\). **Case 1.**\(b\geq\frac{\ell}{2}\). Let \(e\) be the edge containing both \(v_{\ell-b}\) and \(v_{\ell-b+1}\). Let \(T_{1}\) be the component of \(T-e\) containing \(v_{\ell-b}\). Let \(T_{2}\) be the component of \(T-e\) containing \(v_{\ell-b+1}\). By Lemma 3.1, we have \(\sigma_{T}(T_{1})<\sigma_{T}(T_{2})\). Then by Lemma 4.2, we have \(\rho(T^{\prime})>\rho(T)\). **Case 2.**\(b<\frac{\ell}{2}\). Let \(A=\{v_{\ell-b+1},\ldots,v_{\ell}\}\cup\{w_{\ell-b+1},\ldots,w_{\ell-1}\}\), and \(B=\{v_{1},\ldots,v_{a+1}\}\cup\{w_{1},\ldots,w_{a}\}\). As we pass from \(T\) to \(T^{\prime}\), the distance between \(w_{\ell-b}\) and a vertex of \(A\) is increased by \(r-1\), the distance between \(w_{\ell-b}\) and a vertex of \(B\) is decreased by \(r-1\), and the distance between \(w_{\ell-b}\) and \(v_{\ell-b+1-i}\) is increased by \(r-2i\) for \(i=1,\ldots,r-1\), and the distance between any other vertex pair remains unchanged. So \[\frac{1}{2}(\rho(T^{\prime})-\rho(T))\geq\frac{1}{2}x^{\top}(D(T^{\prime})-D(T))x=x_{w_{\ell-b}}W,\] where \(W=(r-1)(\sigma_{T}(A)-\sigma_{T}(B))+C\), \[C=\sum_{i=1}^{r-1}(r-2i)x_{v_{\ell-b+1-i}}=\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)(x_{v_{\ell-b+1-i}}-x_{v_{a+1+i}}),\] and \[\sigma_{T}(A)-\sigma_{T}(B)= \sum_{i=\ell-b+1}^{\ell}x_{v_{i}}+\sum_{i=\ell-b+1}^{\ell-1}x_{w_{i}}-\sum_{i=1}^{a+1}x_{v_{i}}-\sum_{i=1}^{a}x_{w_{i}}\] \[= \sum_{i=1}^{a+1}(x_{v_{\ell+1-i}}-x_{v_{i}})+\sum_{i=1}^{a}(x_{w_{\ell-i}}-x_{w_{i}})+\sum_{i=\ell-b+1}^{\ell-a-1}(x_{v_{i}}+x_{w_{i}}).\] Now we prove that \(x_{v_{a+1}}-x_{v_{\ell-b+1}}>x_{v_{a+2}}-x_{v_{\ell-b}}\). Suppose to the contrary that \(x_{v_{a+1}}-x_{v_{\ell-b+1}}\leq x_{v_{a+2}}-x_{v_{\ell-b}}\). Then, by Lemma 2.1(i), we have \[2(\sigma_{T}(A)-\sigma_{T}(B))+x_{w_{\ell-b}}= \rho(T)(x_{v_{a+1}}-x_{v_{\ell-b+1}})-\rho(T)(x_{v_{a+2}}-x_{v_{\ell-b}})\] \[\leq 0.\] So \(\sigma_{T}(A)-\sigma_{T}(B)<0\), and \(C<0\) by Lemma 3.3(ii). It follows that \(W<0\). By Lemma 4.3, we have \(x_{w_{\ell-b}}\leq x_{v_{\ell-b+1}}+x_{w_{\ell-b+1}}\), so \(x_{w_{\ell-b}}\leq\sum_{i=\ell-b+1}^{\ell-a-1}(x_{v_{i}}+x_{w_{i}})\). 
So, from the distance eigenequations of \(T\) at \(v_{a+2}\) and \(v_{\ell-b}\), we have \[\rho(T)(x_{v_{a+2}}-x_{v_{\ell-b}})\] \[= (r-2)(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-2)x_{w_{\ell-b}}\] \[< (r-2)(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-1)\sum_{i=\ell-b+1}^{\ell -a-1}(x_{v_{i}}+x_{w_{i}})\] \[= (r-2)(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-1)(\sigma_{T}(A)-\sigma_ {T}(B))\] \[-(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{\ell+1-i}}-x_{v_{i}})+\sum_{i= 1}^{a}(x_{w_{\ell-i}}-x_{w_{i}})\right)\] \[= \frac{r-2}{r-1}W-\frac{r-2}{r-1}C+W\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i= 1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right)\] \[= \frac{2r-3}{r-1}W+\frac{r-2}{r-1}(-C)\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i= 1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right)\] \[< \frac{2r-3}{r-1}W+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i) (x_{v_{a+1+i}}-x_{v_{\ell-b+1-i}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i= 1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right).\] Now by Lemma 3.3(i-iii) and the hypothesis that \(x_{v_{a+1}}-x_{v_{\ell-b+1}}\leq x_{v_{a+2}}-x_{v_{\ell-b}}\), we have \[\rho(T)(x_{v_{a+2}}-x_{v_{\ell-b}})\] \[< \frac{2r-3}{r-1}W+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i) (x_{v_{a+2}}-x_{v_{\ell-b}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{a+1}}-x_{v_{\ell-a}})+\sum_{i= 1}^{a}(x_{v_{a+1}}-x_{v_{\ell-a}})\right)\] \[< \frac{2r-3}{r-1}W+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i) (x_{v_{a+2}}-x_{v_{\ell-b}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{a+1}}-x_{v_{\ell-b+1}})+\sum_{ i=1}^{a}(x_{v_{a+1}}-x_{v_{\ell-b+1}})\right)\] \[\leq \frac{2r-3}{r-1}W+\left((2a+1)(r-1)+\sum_{i=1}^{\lfloor\frac{r-1}{2} \rfloor}(r-2i)\right)(x_{v_{a+2}}-x_{v_{\ell-b}}).\] Thus \[\frac{2r-3}{r-1}W>\left(\rho(T)-\left((2a+1)(r-1)+\sum_{i=1}^{\lfloor\frac{r-1} {2}\rfloor}(r-2i)\right)\right)(x_{v_{a+2}}-x_{v_{\ell-b}}).\] By Lemma 3.3(ii), \(x_{v_{a+2}}-x_{v_{\ell-b}}>0\). By Lemma 3.4, \(\rho(T)>(2a+1)(r-1)+\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)\). Thus, \(W>0\), a contradiction. It follows that \(x_{v_{a+1}}-x_{v_{\ell-b+1}}>x_{v_{a+2}}-x_{v_{\ell-b}}>0\). 
By arguments similar to those above, from the distance eigenequations of \(T\) at \(v_{a+1}\) and \(v_{\ell-b+1}\), we have \[\rho(T)(x_{v_{a+1}}-x_{v_{\ell-b+1}})\] \[= r(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-1)x_{w_{\ell-b}}\] \[< r(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-1)\sum_{i=\ell-b+1}^{\ell-a-1}(x_{v_{i}}+x_{w_{i}})\] \[= r(\sigma_{T}(A)-\sigma_{T}(B))+C+(r-1)(\sigma_{T}(A)-\sigma_{T}(B))\] \[-(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{\ell+1-i}}-x_{v_{i}})+\sum_{i=1}^{a}(x_{w_{\ell-i}}-x_{w_{i}})\right)\] \[= \frac{r}{r-1}W-\frac{r}{r-1}C+W\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i=1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right)\] \[= \frac{2r-1}{r-1}W-\frac{r}{r-1}C\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i=1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right)\] \[= \frac{2r-1}{r-1}W+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)(x_{v_{a+1+i}}-x_{v_{\ell-b+1-i}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{i}}-x_{v_{\ell+1-i}})+\sum_{i=1}^{a}(x_{w_{i}}-x_{w_{\ell-i}})\right)\] \[< \frac{2r-1}{r-1}W+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)(x_{v_{a+2}}-x_{v_{\ell-b}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{a+1}}-x_{v_{\ell-a}})+\sum_{i=1}^{a}(x_{w_{i}}-x_{v_{\ell-a}})\right)\] \[< \frac{2r-1}{r-1}W+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)(x_{v_{a+2}}-x_{v_{\ell-b}})\] \[+(r-1)\left(\sum_{i=1}^{a+1}(x_{v_{a+1}}-x_{v_{\ell-b+1}})+\sum_{i=1}^{a}(x_{v_{a+1}}-x_{v_{\ell-b+1}})\right)\] \[\leq \frac{2r-1}{r-1}W+\left((2a+1)(r-1)+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)\right)(x_{v_{a+1}}-x_{v_{\ell-b+1}}).\] Thus \[\frac{2r-1}{r-1}W>\left(\rho(T)-\left((2a+1)(r-1)+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)\right)\right)(x_{v_{a+1}}-x_{v_{\ell-b+1}}).\] By Lemma 3.4, \(\rho(T)>(2a+1)(r-1)+\frac{r}{r-1}\sum_{i=1}^{\lfloor\frac{r-1}{2}\rfloor}(r-2i)\). So \(W>0\) and then \(\rho(T^{\prime})>\rho(T)\). ## 5 Proof of Theorem 1.1 Now we are ready to give a proof of Theorem 1.1. Proof.: Let \(T\) be a hypertree of rank at most three on \(n\) vertices with \(k\) edges of size three that maximizes the distance spectral radius. Let \(x=x(T)\). Let \(\Delta\) be the maximum degree of \(T\). **Claim 1.**\(\Delta=2\). Suppose that \(\Delta\geq 3\). Then there is a vertex \(u\) in \(T\) with \(\deg_{T}(u)=\Delta\geq 3\). Thus \(T\) consists of \(\Delta\) maximal subhypertrees \(T_{1},\ldots,T_{\Delta}\) containing a unique common vertex \(u\) such that the degree of \(u\) in any of \(T_{1},\ldots,T_{\Delta}\) is one. Assume that \(\sigma_{T}(T_{1})\geq\sigma_{T}(T_{2})\). Let \(z\) be a pendant vertex in \(T_{2}\). Denote by \(T^{\prime}\) the hypergraph obtained from \(T\) by moving the edge containing \(u\) in \(T_{3}\) from \(u\) to \(z\). Obviously, \(T^{\prime}\) is a hypertree of rank at most three on \(n\) vertices with \(k\) edges of size three. By Lemma 4.1, \(\rho(T)<\rho(T^{\prime})\), a contradiction. Thus \(\Delta=2\). This proves Claim 1. Let \(e\) be a pendant edge of \(T\) at some vertex, say \(v\). **Claim 2.** There is a vertex of degree one in any edge of size three of \(T\). Suppose that there is an edge of size three of \(T\) in which every vertex has degree larger than one. That is, there is an edge in \(T\) of size three, whose deletion yields three nontrivial components. Choose such an edge \(e_{1}=\{u_{1},u_{2},u_{3}\}\) so that \(\min\{d_{T}(v,u_{i}):i=1,2,3\}\) is as large as possible. Assume that \(d_{T}(v,u_{3})=\min\{d_{T}(v,u_{i}):i=1,2,3\}\). 
Then \(d_{T}(v,u_{3})=d_{T}(v,u_{i})-1\) for \(i=1,2\). For \(i=1,2,3\), let \(Q_{i}\) be the component of \(T-e_{1}\) containing \(u_{i}\). Assume that \(\sigma_{T}(Q_{1})\geq\sigma_{T}(Q_{2})\). By Claim 1, \(\Delta=2\). Let \(e^{\prime}\) be the unique edge in \(Q_{2}\) containing \(u_{2}\). Suppose that \(e^{\prime}\) is of size two. Let \(T^{\prime\prime}\) be the hypertree obtained from \(T\) by moving vertex \(u_{3}\) from \(e_{1}\) to \(e^{\prime}\). Obviously, \(T^{\prime\prime}\) is a hypertree of rank at most three on \(n\) vertices with \(k\) edges of size three. By Lemma 4.2, \(\rho(T)<\rho(T^{\prime\prime})\), a contradiction. Thus \(e^{\prime}\) is of size three. By the choice of \(e_{1}\), \(T-e^{\prime}\) has at most two nontrivial components, i.e., \(T\) has one pendant vertex, say \(w\) in \(e^{\prime}\). By Lemma 4.3, we have \(x_{u_{3}}+x_{u_{2}}-x_{w}>0\). Let \(T^{*}\) be the hypertree obtained from \(T\) by moving the edge containing \(u_{3}\) other than \(e_{1}\) from \(u_{3}\) to \(w\). Obviously, \(T^{*}\) is a hypertree of rank at most three on \(n\) vertices with \(k\) edges of size three. As we pass from \(T\) to \(T^{*}\), the distance between a vertex of \(V(Q_{3})\setminus\{u_{3}\}\) and a vertex of \(V(Q_{1})\) is increased by \(1\), the distance between a vertex of \(V(Q_{3})\setminus\{u_{3}\}\) and a vertex of \(V(Q_{2})\setminus\{u_{2},w\}\) is decreased by \(1\), the distance between a vertex of \(V(Q_{3})\setminus\{u_{3}\}\) and \(u_{3}\) is increased by \(2\), the distance between a vertex of \(V(Q_{3})\setminus\{u_{3}\}\) and \(w\) is decreased by \(2\), and the distance between any other vertex pair remains unchanged. Thus \[\frac{1}{2}(\rho(T^{*})-\rho(T)) \geq\frac{1}{2}x^{\top}(D(T^{*})-D(T))x\] \[=(\sigma_{T}(Q_{3})-x_{u_{3}})(\sigma_{T}(Q_{1})-(\sigma_{T}(Q_{2})-x_{u_{2}}-x_{w})+2x_{u_{3}}-2x_{w})\] \[=(\sigma_{T}(Q_{3})-x_{u_{3}})(\sigma_{T}(Q_{1})-\sigma_{T}(Q_{2})+2x_{u_{3}}+x_{u_{2}}-x_{w})\] \[>0,\] so \(\rho(T)<\rho(T^{*})\), a contradiction. Thus, there is a vertex of degree one in any edge of size three. This proves Claim 2. By Claim 1, the maximum degree is two. By Claim 2, there is a vertex of degree one in any edge of size three of \(T\). So \(T\) is a loose path. Now the result follows if \(k=\lfloor\frac{n-1}{2}\rfloor\). Suppose that \(1\leq k<\lfloor\frac{n-1}{2}\rfloor\). Suppose that there is an edge of size \(3\), say \(e_{2}=\{w_{1},w_{2},w_{3}\}\) with \(\deg_{T}(w_{1})=\deg_{T}(w_{2})=2\). For \(i=1,2\), let \(F_{i}\) be the component of \(T-e_{2}\) containing \(w_{i}\). **Claim 3.** One of \(F_{1}\) or \(F_{2}\) has only edges of size three. Otherwise, both \(F_{1}\) and \(F_{2}\) have at least one edge of size two. Assume that \(\sigma_{T}(F_{1})\geq\sigma_{T}(F_{2})\). Let \(e^{*}\) be an edge in \(F_{2}\) of size two and \(T^{**}\) be the hypertree obtained from \(T\) by moving vertex \(w_{3}\) from \(e_{2}\) to \(e^{*}\). Obviously, \(T^{**}\) is a hypertree of rank at most three on \(n\) vertices with \(k\) edges of size three. By Lemma 4.2, \(\rho(T)<\rho(T^{**})\), a contradiction. So Claim 3 follows. Recall that \(T\) is a loose path. By Claim 3, all vertices in the edges of size two induce an ordinary path, so \(T\cong T(n,a,b)\), where \(0\leq a\leq b\) and \(a+b=k\). If \(b\geq a+2\), then by Lemma 4.4, we have \(\rho(T(n,a+1,b-1))>\rho(T(n,a,b))\), a contradiction. It follows that \(T\cong T(n,\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil)\). 
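The extremal structure can also be probed numerically. The sketch below (an illustrative check added here, not part of the proof) builds \(T(n,a,b)\) with the loose-path labeling inferred from the proofs above—core path \(v_{1},\ldots,v_{\ell}\), edge \(e_{i}\) joining \(v_{i}\) and \(v_{i+1}\), and a pendant vertex \(w_{i}\) attached to the first \(a\) and last \(b\) edges—computes the distance matrix, and verifies the monotonicity of \(\rho\) under the balancing step of Lemma 4.4.

```python
import itertools
import numpy as np

def distance_matrix(n_vertices, edges):
    # Hypergraph distance = graph distance in the 2-section (two vertices are
    # adjacent iff they share an edge); computed by Floyd-Warshall.
    D = np.full((n_vertices, n_vertices), np.inf)
    np.fill_diagonal(D, 0.0)
    for e in edges:
        for u, v in itertools.combinations(e, 2):
            D[u, v] = D[v, u] = 1.0
    for k in range(n_vertices):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def loose_path_T(n, a, b):
    # T(n, a, b): core path on l = n - a - b vertices (0-based labels 0..l-1);
    # edge e_i joins v_i, v_{i+1}; the first a and last b edges each carry one
    # extra pendant vertex (labels l, l+1, ...), as in the labeling above.
    l = n - a - b
    edges, w = [], l
    for i in range(l - 1):
        e = [i, i + 1]
        if i < a or i >= l - 1 - b:
            e.append(w)
            w += 1
        edges.append(e)
    return distance_matrix(n, edges)

def rho(D):
    return max(np.linalg.eigvalsh(D))  # distance spectral radius

n = 20
for a, b in [(0, 6), (1, 5), (2, 4), (3, 3)]:
    print(a, b, rho(loose_path_T(n, a, b)))
# rho(T(n, a+1, b-1)) > rho(T(n, a, b)) whenever b >= a + 2, so the values
# increase toward (floor(k/2), ceil(k/2)), matching Lemma 4.4 and Theorem 1.1.
```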
**Acknowledgement.** This work was supported by the National Natural Science Foundation of China (No. 12071158) and the Youth Innovative Talent Project of Guangdong Province of China (No. 2020KQNCX160).
2307.04791
Unitarity breaking in self-averaging spectral form factors
The complex Fourier transform of the two-point correlator of the energy spectrum of a quantum system is known as the spectral form factor (SFF). It constitutes an essential diagnostic tool for phases of matter and quantum chaos. In black hole physics, it describes the survival probability (fidelity) of a thermofield double state under unitary time evolution. However, detailed properties of the SFF of isolated quantum systems with generic spectra are smeared out by large temporal fluctuations, whose minimization requires disorder or time averages. This requirement holds for any system size, that is, the SFF is non-self averaging. Exploiting the fidelity-based interpretation of this quantity, we prove that using filters, disorder and time averages of the SFF involve unitarity breaking, i.e., open quantum dynamics described by a quantum channel that suppresses quantum noise. Specifically, averaging over Hamiltonian ensembles, time averaging, and frequency filters can be described by the class of mixed-unitary quantum channels in which information loss can be recovered. Frequency filters are associated with a time-continuous master equation generalizing energy dephasing. We also discuss the use of eigenvalue filters. They are linked to non-Hermitian Hamiltonian evolution without quantum jumps, whose long-time behavior is described by a Hamiltonian deformation. We show that frequency and energy filters make the SFF self-averaging at long times.
Apollonas S. Matsoukas-Roubeas, Mathieu Beau, Lea F. Santos, Adolfo del Campo
2023-07-10T18:00:04Z
http://arxiv.org/abs/2307.04791v2
# A self-averaging spectral form factor implies unitarity breaking ###### Abstract The complex Fourier transform of the two-point correlator of the energy spectrum of a quantum system is known as the spectral form factor (SFF). It constitutes an essential diagnostic tool for phases of matter and quantum chaos. In black hole physics, it describes the survival probability (fidelity) of a thermofield double state under unitary time evolution. However, detailed properties of the SFF of isolated quantum systems with generic spectra are smeared out by large temporal fluctuations, whose minimization requires disorder or time averages. This requirement holds for any system size, that is, the SFF is non-self averaging. Exploiting the fidelity-based interpretation of this quantity, we prove that using filters, disorder and time averages of the SFF involve unitarity breaking, i.e., open quantum dynamics described by a quantum channel that suppresses quantum noise. Specifically, averaging over Hamiltonian ensembles, time averaging, and frequency filters can be described by the class of mixed-unitary quantum channels in which information loss can be recovered. Frequency filters are associated with a time-continuous master equation generalizing energy dephasing. We also discuss the use of eigenvalue filters. They are linked to non-Hermitian Hamiltonian evolution without quantum jumps, whose long-time behavior is described by a Hamiltonian deformation. We show that frequency and energy filters make the SFF self-averaging at long times. The spectral form factor (SFF) is an essential diagnostic tool in the characterization of complex quantum systems [1; 2; 3; 4; 5; 6]. Given a Hamiltonian \(H\) of a single system with spectrum \(\mathrm{Sp}(H)=\{E_{n}|n=1,\ldots,d\}\), the SFF is a real-valued function defined as \[\mathrm{SFF}(t) = \left|\frac{Z(\beta+it)}{Z(\beta)}\right|^{2}\] \[= \frac{1}{Z(\beta)^{2}}\sum_{n,m=1}^{d}e^{-\beta(E_{n}+E_{m})-it( E_{n}-E_{m})},\] where the partition function \(Z(\beta)=\mathrm{tr}[\exp(-\beta H)]\) is included as a normalization such that \(\mathrm{SFF}(0)=1\). Finite values of the inverse temperature \(\beta\) exponentially suppress the contribution from the excited states. Thus, the Boltzmann factor \(\exp(-\beta E_{n})\) acts as an (energy) eigenvalue filter, where large values of \(\beta\) preferentially sample the low-energy part of the spectrum, and \(\beta=0\) gives equal weight to the whole spectrum. The SFF admits several information theoretic interpretations. In particular, it can be expressed as the fidelity [7; 8; 9; 10] between a coherent Gibbs state \(|\psi_{\beta}\rangle=\frac{1}{\sqrt{Z(\beta)}}\sum_{n}e^{-\beta E_{n}/2}|n\rangle\) and its unitary time-evolution \[\mathrm{SFF}(t)=\left|\langle\psi_{\beta}|e^{-itH}|\psi_{\beta}\rangle\right| ^{2}, \tag{2}\] or equivalently, as the survival probability of the evolving coherent Gibbs state. Likewise, in bipartite systems, it is convenient to consider the entangled state \[|\mathrm{TFD}\rangle=\frac{1}{\sqrt{Z(\beta)}}\sum_{n}e^{-\beta E_{n}/2}|n \rangle\otimes|n\rangle, \tag{3}\] known as the thermofield double state (TFD). In terms of it, \(\mathrm{SFF}(t)=|\langle\mathrm{TFD}|e^{-itH\otimes 1}|\mathrm{TFD}\rangle|^{2}\). The TFD is the purification of the thermal state of a single copy of the system, obtained by doubling the Hilbert space. The TFD was first introduced as a convenient reference state to extract thermal averages in field theory [11]. 
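For later reference, the defining expression of the SFF is immediate to evaluate numerically from a given spectrum. A minimal sketch (an illustration added here, using an uncorrelated spectrum as a stand-in):

```python
import numpy as np

def sff(energies, times, beta=0.0):
    """SFF(t) = |Z(beta + i t)|^2 / Z(beta)^2 for a given spectrum."""
    z_beta = np.exp(-beta * energies).sum()
    # Analytically continued partition function Z(beta + i t), vectorized in t.
    z_t = np.exp(-(beta + 1j * times[:, None]) * energies[None, :]).sum(axis=1)
    return np.abs(z_t) ** 2 / z_beta**2

# Example: Poissonian spectrum built from uncorrelated level spacings.
rng = np.random.default_rng(0)
E = rng.exponential(size=512).cumsum()
t = np.linspace(0.0, 50.0, 2000)
print(sff(E, t, beta=0.1)[0])  # -> 1.0, since SFF(0) = 1 by normalization
```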
The TFD dynamics was used early on to model the "hot" thermal vacuum observed outside the horizon of a single radiating eternal black hole [12]. In the context of the AdS/CFT correspondence, it describes an eternal two-sided black hole in AdS [13; 14]. The SFF captures the survival probability of the TFD state under unitary time evolution [7; 8; 9; 10]. The conjecture that black holes are maximally chaotic [15] has led to a surge of activity in the study of the dynamical manifestations of quantum chaos in the SFF [8; 16; 17; 18; 19; 20]. In theoretical and numerical studies, it is customary to average the SFF by considering a Hamiltonian ensemble, e.g., in random-matrix theory or in disordered systems. In such a scenario, a property is said to be self-averaging when its estimate using a typical member of the ensemble and the explicit average over the ensemble coincide. Self-averaging largely eases numerical studies in many-body systems, dispensing with the need for Hamiltonian ensemble averages when characterizing the property of interest. However, the SFF is not self-averaging [21]. The structure of the SFF in the time domain is well-characterized [16; 17]: it exhibits a slope-dip-ramp-plateau structure, as shown in Fig. 1, that is manifested under averaging over disorder or a Hamiltonian ensemble. In the absence of averages, erratic time-domain fluctuations appear, making it difficult to appreciate some of its features. An exception is the SFF computed using the gauge-gravity duality in the semiclassical approximation, where the erratic fluctuations are absent [22]. These fluctuations are sometimes referred to as noise [21], or quantum noise [23], terminology to be distinguished from the standard one in the theory of open quantum systems [24]. Erratic wiggles in the time domain are a consequence of the discreteness of the energy spectrum and can be associated with quantum coherence in the energy eigenbasis in the time evolution of the coherent Gibbs state or the TFD. Quantum noise is further responsible for the lack of self-averaging in the SFF. Fluctuations with respect to the signal do not cancel out upon averaging, e.g., over a Hamiltonian ensemble [1; 25; 26; 21; 27]. This can be quantified by the finite value of the relative variance (RV) \[\mathrm{RV}(t)=\frac{\langle\mathrm{SFF}^{2}(t)\rangle-\langle\mathrm{SFF}(t)\rangle^{2}}{\langle\mathrm{SFF}(t)\rangle^{2}}\;, \tag{4}\] that does not vanish as the size of the Hilbert space is increased. The lack of self-averaging of the SFF and the survival probability for random matrices and disordered spin models was shown analytically in [28; 29; 30]. This implies that no matter how large the system size is, an ensemble average is required, adding an extra layer of complexity to numerical studies, which are generally challenging due to the large Hilbert space involved in analyzing many-body quantum systems. As an alternative to averaging over a Hamiltonian ensemble, numerical and analytical studies often resort to running averages that smear \(\mathrm{SFF}(t)\) over intervals of time [1; 27]. A yet different approach resorts to modifying the definition of the SFF by restricting the Fourier transform of the two-point function over an energy window, or more generally, using a filter function over an energy or frequency band [31; 32; 33; 18; 21]. 
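The lack of self-averaging expressed by Eq. (4) can be checked by brute force. A minimal sketch, assuming the GOE sampling convention \(H=(X+X^{\mathrm{T}})/2\) described later in the text:

```python
import numpy as np

def sff(E, t, beta):
    z = np.exp(-(beta + 1j * t[:, None]) * E[None, :]).sum(axis=1)
    return np.abs(z) ** 2 / np.exp(-beta * E).sum() ** 2

def relative_variance(d=64, samples=500, beta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 40.0, 400)
    vals = []
    for _ in range(samples):
        X = rng.normal(size=(d, d))            # sigma = 1
        E = np.linalg.eigvalsh((X + X.T) / 2)  # one GOE(d) draw
        vals.append(sff(E, t, beta))
    vals = np.array(vals)
    # RV(t) = <SFF^2>/<SFF>^2 - 1, Eq. (4).
    return t, (vals**2).mean(axis=0) / vals.mean(axis=0) ** 2 - 1.0

t, rv = relative_variance()
print(rv[-1])  # approaches 1 at long times: the unfiltered SFF is not self-averaging
```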
In what follows, we build on the interpretation of the SFF as a fidelity between quantum states related by time evolution and show that suppressing the erratic wiggles implies the breakdown of unitarity in the dynamics. To this end, we reformulate the different approaches used to reduce the time fluctuations of the SFF, such as ensemble averages, and to enforce self-averaging, such as filters in the energy and frequency domains, as quantum operations described by a (nonunitary) quantum channel. For a particular class of filters, the resulting channels are of the mixed-unitary class, and the information lost due to the unitarity breaking can be recovered. The paper is organized as follows. We review the structure of the unfiltered SFF in Sec. I, and introduce the generalization of the SFF to arbitrary physical processes in Sec. II, paving the way to the description of filtering of the SFF in terms of nonunitary quantum channels in Sec. III. Physical mechanisms associated with energy dephasing and giving rise to different spectral filters are discussed in Sec. IV. Section V discusses the filtered SFF as a function of the system size, while Sec. VI focuses on information recovery under mixed-unitary quantum channels and the frequency filter deconvolution. Fundamental limits to quantum noise associated with the fidelity-based SFF are presented in Sec. VII. The relation between eigenvalue filters and Hermitian Hamiltonian deformations is discussed in Sec. VIII. Time-continuous master equations for frequency filters are derived in Sec. IX before closing with a discussion and conclusions. ## I Features of the spectral form factor in an isolated chaotic quantum system We start by reviewing the well-known structure of the SFF for a chaotic system in isolation. The (unfiltered) SFF averaged over a Hamiltonian ensemble can generally be written down in terms of different contributions. Invoking the annealed approximation, which replaces the average of a quotient by the ratio of the averages at high temperature, and in the absence of degeneracies in the energy spectrum, one finds [8; 16] \[\langle\mathrm{SFF}(t)\rangle=\frac{1}{\langle Z(\beta)\rangle^{2}}\bigg{[}|\langle Z(\beta+it)\rangle|^{2}+g_{c}(\beta,t)+\langle Z(2\beta)\rangle\bigg{]}. \tag{5}\] The first term in brackets is known as the disconnected part as it can be derived from the average density of states \(\langle\rho(E)\rangle=\langle\sum_{n}\delta(E-E_{n})\rangle\) (one-point function), as \[\langle Z(\beta+it)\rangle=\int dE\langle\rho(E)\rangle e^{-(\beta+it)E}.\] The second term captures correlations among eigenvalues and is governed by the Fourier transform \(g_{c}(\beta,t)\) of the connected two-level correlation function of the energy spectrum \(\langle\rho(E)\rho(E^{\prime})\rangle_{c}=\langle\rho(E)\rho(E^{\prime})\rangle-\langle\rho(E)\rangle\langle\rho(E^{\prime})\rangle\). Specifically, \[g_{c}(\beta,t)=\int dEdE^{\prime}\langle\rho(E)\rho(E^{\prime})\rangle_{c}e^{-(\beta+it)E}e^{-(\beta-it)E^{\prime}}.\] The last term is constant and governs the long-time asymptotics. The SFF equals unity at \(t=0\). The short-time evolution gives rise to a parabolic decay \(\langle\mathrm{SFF}(t)\rangle\approx 1-\langle\Delta H^{2}\rangle t^{2}\) on the time scale fixed by the inverse of the energy fluctuations \(\langle\Delta H^{2}\rangle=\int dE(E-\langle E\rangle)^{2}\langle\rho(E)\rangle\), which extends into the slope region. This decay is governed by the disconnected part of the SFF. 
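For a single nondegenerate spectrum, the short-time decay and the long-time average of the SFF can be checked directly; in that case the role of \(\langle\Delta H^{2}\rangle\) is played by the variance of the energy under the Boltzmann weight, and the infinite-time average of the double sum retains only diagonal terms and equals \(Z(2\beta)/Z(\beta)^{2}\) (the plateau value discussed below). A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 64))
E = np.linalg.eigvalsh((X + X.T) / 2)  # a single GOE(64) draw
beta = 0.1

p = np.exp(-beta * E); p /= p.sum()           # Boltzmann weights
varE = (p * E**2).sum() - (p * E).sum() ** 2  # energy variance at this beta

def sff(t):
    z = np.exp(-(beta + 1j * t) * E).sum()
    return abs(z) ** 2 / np.exp(-beta * E).sum() ** 2

t = 1e-3
print(sff(t), 1 - varE * t**2)  # parabolic short-time decay, nearly identical

# Long-time average ~ Z(2*beta)/Z(beta)^2 for a nondegenerate spectrum.
plateau = np.exp(-2 * beta * E).sum() / np.exp(-beta * E).sum() ** 2
ts = np.linspace(200.0, 400.0, 20001)
zs = np.exp(-(beta + 1j * ts[:, None]) * E[None, :]).sum(axis=1)
print((np.abs(zs) ** 2).mean() / np.exp(-beta * E).sum() ** 2, plateau)
```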
In chaotic systems, the decay reaches a dip below the long-time asymptotics. The region where \(\langle\mathrm{SFF}(t)\rangle<\langle Z(2\beta)\rangle/\langle Z(\beta)\rangle^{2}\) is known as the correlation hole or dip [1; 3]. The dip is followed by a ramp governed by the eigenvalue correlations, which is thus a proxy for quantum chaos. The ramp extends from the dip time to the plateau time, at which it takes the constant value \(\langle\mathrm{SFF}\rangle=\langle Z(2\beta)\rangle/\langle Z(\beta)\rangle^{2}\) in the annealed approximation, in the absence of degeneracies, as expected in chaotic systems. ## II Spectral form factor in arbitrary physical processes The fidelity-based interpretation of the SFF can be leveraged to consider more general sorts of time evolution beyond the unitary case. In particular, this makes it possible to generalize the SFF to non-Hermitian and open quantum systems characterized by nonunitary evolution [34; 35; 36; 37; 38; 10]. This section introduces tools used to describe nonunitary evolution that will be employed in the following sections. Several generalizations of the SFF have been put forward when the dynamics is not unitary. At variance with alternative proposals with a restricted domain of applicability [39; 40], the fidelity-based generalization of the SFF has the advantage of involving only the eigenvalue correlations that govern quantum dynamics and applies to arbitrary physical processes. Provided that the evolution is described by a quantum channel \(\Phi_{t}(\cdot)\) (i.e., a completely-positive and trace-preserving map), the fidelity-based SFF is given by [35; 36; 37; 38; 10] \[\mathrm{SFF}(t)=\langle\psi_{\beta}|\Phi_{t}(|\psi_{\beta}\rangle\langle\psi_{\beta}|)|\psi_{\beta}\rangle. \tag{6}\] An arbitrary quantum channel admits a Kraus representation, \(\Phi_{t}(\rho_{0})=\sum_{\alpha=1}^{r}K_{\alpha}\rho_{0}K_{\alpha}^{\dagger}\), where \(r\) is known as the Choi rank [41]. The case of unitary evolution corresponds to a single Kraus operator that equals the time evolution operator, i.e., \(K(t)=K_{1}(t)=U(t)\) and \(K_{\alpha}(t)=0\) for \(\alpha>1\). Given that Kraus operators need only obey the condition of adding up to the identity \(\sum_{\alpha}K_{\alpha}^{\dagger}K_{\alpha}=\mathbb{I}\) for the dynamics to be trace-preserving and that the Kraus decomposition involves \(1\leq r\leq d^{2}\) Kraus operators in a \(d\)-dimensional Hilbert space, it is apparent that the chaotic features of the SFF under unitary dynamics are generally suppressed under nonunitary time evolution. As a result, quantum channels with a simple representation in the energy eigenbasis are singled out to study filtering in quantum chaos and self-averaging of the SFF. An important class of channels that will be of relevance in the following is that of mixed-unitary channels [42]. A channel \(\Phi\) is a mixed-unitary channel if there is an alphabet \(\Sigma\), a probability vector \(p\) and a collection of unitaries \(\{U_{y}:y\in\Sigma\}\) such that \[\Phi(\rho)=\sum_{y\in\Sigma}p(y)U_{y}\rho U_{y}^{\dagger}. \tag{7}\] The channel is thus a convex combination of unitaries. This kind of quantum channel is unital and thus preserves the identity \(\mathbb{I}\), i.e., \(\Phi_{t}(\mathbb{I})=\mathbb{I}\). The fidelity-based interpretation of the SFF extends to its higher moments. 
Indeed, given that the initial state \(\rho_{0}=|\psi_{\beta}\rangle\langle\psi_{\beta}|\) is pure, the \(k\)-th moment reads \[\mathrm{SFF}^{k}=\mathrm{tr}[\underbrace{\rho_{0}\rho_{t}\ldots\rho_{0}\rho_{t}}_{k\text{ times}}]=\langle\psi_{\beta}|\rho_{t}|\psi_{\beta}\rangle^{k}. \tag{8}\] The \(k\)-th moment can be associated with a Zeno sequence in which the time evolution is interrupted by sequential projective measurements onto the initial state. Therefore, the RV in Eq. (4) probes the degree of factorization of the time evolution in a sequence with \(k=2\) in the presence of averaging. ## III Unitarity breaking: spectrum filtering as a nonunitary quantum channel In what follows, we consider three approaches frequently used to reduce the erratic wiggles in the SFF. They involve averaging over a Hamiltonian ensemble and the use of frequency filters and eigenvalue filters. The last two involve different kinds of time-averaging and ensure the self-averaging of the SFF at long times. We show that all three cases involve unitarity breaking described by a nonunitary quantum channel. ### Averaging over Hamiltonian ensembles Averaging over a Hamiltonian ensemble constitutes a popular approach that smooths out the quantum noise wiggles in the SFF. This approach is at the core of random-matrix theory, the study of disordered systems, and matrix models [43; 5; 6]. Given a Hilbert space \(\mathcal{H}\) of dimension \(d\), consider an ensemble of Hamiltonians \(\mathcal{E}_{H}\) with a probability density function \(P(H)\) and integration measure \(dH\) such that \(\int_{\mathcal{E}_{H}}P(H)dH=1\). The average of the SFF over \(\mathcal{E}_{H}\) is given by \[\langle\mathrm{SFF}(t)\rangle_{\mathcal{E}_{H}}=\int_{\mathcal{E}_{H}}dHP(H)\left|\frac{\mathrm{tr}\left(e^{-(\beta+it)H}\right)}{\mathrm{tr}\left(e^{-\beta H}\right)}\right|^{2}. \tag{9}\] The fidelity-based interpretation of the SFF illuminates the underlying physical process involved in such an average. For a specific Hamiltonian \(H=\sum_{n}E_{n}|n\rangle\langle n|\), the initial state is chosen as the coherent Gibbs state (or the TFD in the case of a bipartite system), \[|\psi_{\beta}(H)\rangle=\sum_{n}\frac{e^{-\beta E_{n}/2}}{\sqrt{\mathrm{tr}\left(e^{-\beta H}\right)}}|n\rangle. \tag{10}\] The Hamiltonian ensemble \(\mathcal{E}_{H}\) provides an alphabet, together with the collection of unitaries \(\{U_{H}(t)=e^{-itH}:H\in\mathcal{E}_{H}\}\). The state \(|\psi_{\beta}(H)\rangle\) is chosen with probability measure \(P(H)dH\) and evolved unitarily into \(U_{H}(t)|\psi_{\beta}(H)\rangle\). The SFF is then computed as the averaged survival probability over the Hamiltonian ensemble, \[\langle\mathrm{SFF}(t)\rangle_{\mathcal{E}_{H}}=\int_{\mathcal{E}_{H}}dHP(H)|\langle\psi_{\beta}(H)|U_{H}(t)|\psi_{\beta}(H)\rangle|^{2}. \tag{11}\] As a result, averaging the SFF over a Hamiltonian ensemble involves breaking the unitarity of the dynamics by classically mixing a distribution of states and unitaries. When the initial state \(\rho_{0}\) is fixed and independent of the Hamiltonian \(H\), the process can be associated with a mixed-unitary channel \(\Phi(\rho_{0})=\int_{\mathcal{E}_{H}}dHP(H)U_{H}(t)\rho_{0}U_{H}(t)^{\dagger}.\) As a relevant instance, this is the case when the initial state is the coherent Gibbs state in the infinite temperature limit \(\beta=0\), \(|\psi_{\beta}\rangle=\sum_{n}\frac{1}{\sqrt{d}}|n\rangle\), where the Hilbert space dimension fixes the probability amplitudes. 
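At infinite temperature, where the initial state is fixed, the equality between the averaged survival probability of Eq. (11) and the fidelity computed from the output of the mixed-unitary channel can be verified directly. A minimal sketch with a small GOE sample standing in for \(\mathcal{E}_{H}\):

```python
import numpy as np

d, samples = 32, 200
rng = np.random.default_rng(3)
psi = np.ones(d) / np.sqrt(d)        # coherent Gibbs state at beta = 0
rho0 = np.outer(psi, psi.conj())
t = 5.0

avg_state = np.zeros((d, d), dtype=complex)
survival = []
for _ in range(samples):
    X = rng.normal(size=(d, d))
    H = (X + X.T) / 2                # one GOE(d) draw
    E, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * t * E)) @ V.conj().T    # U_H(t) = exp(-i t H)
    avg_state += U @ rho0 @ U.conj().T / samples  # mixed-unitary channel output
    survival.append(abs(psi.conj() @ U @ psi) ** 2)

# Fidelity of the channel output equals the averaged survival probability.
print(np.real(psi.conj() @ avg_state @ psi), np.mean(survival))
```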
### Frequency filtering and the time-averaged SFF As an alternative to Hamiltonian averaging, in numerical and analytical studies, it is customary to enforce the SFF's self-averaging by using a filter function \(w(E_{n}-E_{m})\) that acts in the frequency domain, suppressing contributions from given eigenvalue differences in the spectrum of a single Hamiltonian. This is equivalent to filtering eigenvalues of the Liouville superoperator \(\mathbb{L}=-i(H\otimes\mathbb{I}-\mathbb{I}\otimes H^{T})\) that governs the unitary evolution in the vectorized density matrix \(|\rho_{t}\rangle\) according to \(\frac{d}{dt}|\rho_{t}\rangle=\mathbb{L}|\rho_{t}\rangle\), i.e., when representing the Liouville-von Neumann equation as a linear matrix equation. We assume the frequency filter to be described by a symmetric function \(w(x):\mathbb{R}\rightarrow[0,1]\) satisfying \(w(x)=w(-x)\). The frequency-filtered SFF is then proportional to \(\sum_{nm}e^{-\beta(E_{n}+E_{m})-it(E_{n}-E_{m})}w(E_{n}-E_{m})\). Making use of the Fourier transform of \(w\), the frequency-filtered SFF reads \[\mathrm{SFF}_{w}(t) = \sum_{n,m=1}^{d}\frac{e^{-\beta(E_{n}+E_{m})-it(E_{n}-E_{m})}}{Z(\beta)^{2}}w(E_{n}-E_{m}) \tag{12}\] \[= \frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau\widetilde{w}(t-\tau)\left|\frac{Z(\beta+i\tau)}{Z(\beta)}\right|^{2},\] with \(\widetilde{w}(y)=\int_{-\infty}^{\infty}dE\exp(-iyE)w(E)\). Filtering in frequency space is equivalent to time-averaging the canonical SFF associated with the unitary time evolution. Without degeneracies in the energy spectrum, the long-time behavior of \(\mathrm{SFF}_{w}\) saturates at the plateau value set by \(w(0)\). Further, in the fidelity-based interpretation of the SFF, frequency filtering can be recast as the result of a nonunitary time evolution. To this end, consider a quantum channel \(\Phi_{t}\) such that the time-evolution of the initial coherent Gibbs state \(|\psi_{\beta}\rangle\langle\psi_{\beta}|=\sum_{nm}e^{-\beta(E_{n}+E_{m})/2}|n\rangle\langle m|/Z(\beta)\) reads \[\rho_{t} = \Phi_{t}(|\psi_{\beta}\rangle\langle\psi_{\beta}|)=\sum_{nm}\frac{e^{-\beta(E_{n}+E_{m})/2-it(E_{n}-E_{m})}}{Z(\beta)}w(E_{n}-E_{m})|n\rangle\langle m|. \tag{13}\] The latter can be rewritten as \[\Phi_{t}(\rho_{0})=\int_{-\infty}^{\infty}dyK(y)\rho_{0}K(y)^{\dagger}, \tag{14}\] with \[K(y)=\left(\frac{\widetilde{w}(y)}{2\pi}\right)^{\frac{1}{2}}e^{-i(t+y)H}. \tag{15}\] For the time evolution to be trace-preserving, it is required that \[\int_{-\infty}^{\infty}dyK(y)^{\dagger}K(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dy\widetilde{w}(y)=1, \tag{16}\] that is, \(w(0)=1\). The above equations provide an analog of the Kraus decomposition with a continuous index [44]. They are associated with energy diffusion processes. Generally, the Fourier transform \(\widetilde{w}(y)\) of the frequency filter can take both negative and positive values. However, given Eq. (16), whenever \(p(y)=\frac{1}{2\pi}\widetilde{w}(y)\geq 0\), it can be thought of as a probability distribution. Frequency filtering is then described by a mixed-unitary channel, i.e., the convex combination of unitary quantum channels, each with a single Kraus operator that equals the time-evolution operator shifted as \(t\to t+y\). 
The collection of unitaries in this case \(\{U_{y}(t)=e^{-i(t+y)H}:y\in\mathbb{R}\}\) is generated by a single Hamiltonian \(H\), leading to a time-average of the quantum state at time \(t\), \(\overline{\rho_{t}}=\int dy\,p(y)e^{-i(t+y)H}\rho_{0}e^{i(t+y)H}\), from which the SFF is obtained as the fidelity \(\mathrm{SFF}_{w}(t)=\langle\psi_{\beta}|\overline{\rho_{t}}|\psi_{\beta}\rangle\).

Figure 1: Spectral form factor for a single realization (solid red line) and upon Hamiltonian average (solid black line), together with the corresponding RV (black dashed line). The averages are taken over a sample of 500 random GOE(64) Hamiltonians \(H\), with \(\sigma=1\). **a** In the unfiltered case, \(\kappa=0\), the RV saturates at the unit value after the dip time. **b** RV using frequency filtering with the Gaussian function (26) and a finite dephasing strength \(\kappa=0.1\). The RV reaches a maximum value at the dip time and then drops to a plateau of value \(\mathrm{RV}_{p}=\langle Z(2\beta)^{2}/Z(\beta)^{4}\rangle/\langle Z(2\beta)/Z(\beta)^{2}\rangle^{2}-1\). **c** RV with eigenvalue filtering using the Gaussian function (30) with \(f(E)=E\). The RV increases to its maximum at the dip time and then drops to a plateau given by \(\mathrm{RV}_{p}=\langle 1/Z(\beta)^{2}\rangle/\langle 1/Z(\beta)\rangle^{2}-1\). In all three panels, the inverse temperature is \(\beta=0.1\).

An important example concerns the time averaging of the SFF over a time window of duration \(T\), \[\overline{\mathrm{SFF}}(t)=\frac{1}{T}\int_{-T/2}^{+T/2}\left|\frac{Z(\beta+it+iy)}{Z(\beta)}\right|^{2}dy, \tag{17}\] for which \(\widetilde{w}(y)=2\pi/T\) for \(y\in[-T/2,T/2]\) and zero otherwise. This is tantamount to considering the averaged time-dependent state \(\overline{\rho_{t}}=\frac{1}{T}\int_{-T/2}^{T/2}dy\,e^{-i(t+y)H}\rho_{0}e^{i(t+y)H}\). ### Eigenvalue filtering An alternative filtering of the SFF involves expressions of the form \(|\sum_{n}e^{-\beta E_{n}-itE_{n}}w(E_{n})|^{2}\) with a filter function \(w(E)\geq 0\) that acts directly on the eigenvalues. This is equivalent to selecting an energy band to study the SFF, while disregarding contributions from other parts of the spectrum [21; 27]. As noted in the introduction, the Boltzmann factor \(\exp(-\beta E_{n})\) can itself be regarded as an exponential eigenvalue filter acting on the infinite-temperature (\(\beta=0\)) SFF. The use of an energy-eigenvalue filter function can be associated with the evolution governed by a single nonunitary Kraus operator \[K(t) = e^{-itH}\sqrt{w(H)}. \tag{18}\] The selection of the energy window corresponds to a post-selection represented by the operation \[|\psi_{\beta}\rangle\langle\psi_{\beta}|\rightarrow\rho_{t}=\frac{K(t)|\psi_{\beta}\rangle\langle\psi_{\beta}|K(t)^{\dagger}}{\mathrm{tr}[K(t)|\psi_{\beta}\rangle\langle\psi_{\beta}|K(t)^{\dagger}]}, \tag{19}\] which is always a pure and normalized state, including the state at \(t=0\). Here, the modified partition function \[Z_{w}(\beta) = \mathrm{tr}[w(H)e^{-\beta H}]=Z(\beta)\,\mathrm{tr}[K(t)|\psi_{\beta}\rangle\langle\psi_{\beta}|K(t)^{\dagger}] \tag{20}\] accounts for the correct normalization, so that the SFF at all times \(t\geq 0\) is still given as the Uhlmann fidelity \(\mathrm{SFF}_{w}(t)=\mathrm{tr}(\rho_{0}\rho_{t})\), i.e., the survival probability of the post-selected coherent Gibbs state \(\rho_{0}\) and its time evolution, \[\mathrm{SFF}_{w}(t)=\sum_{nm}e^{-\beta(E_{n}+E_{m})-it(E_{n}-E_{m})}\frac{w(E_{n})w(E_{m})}{Z_{w}(\beta)^{2}}. \tag{21}\]
The choice of the Kraus operator is nonlinear in the quantum state, as its normalization in Eq. (19) is tailored to the initial coherent Gibbs state, making the dynamics trace-preserving only for this state. While this scenario is not the standard one in the theory of open quantum systems, it admits a natural interpretation in terms of energy dephasing without quantum jumps, as discussed in Sec. IV. For completeness, we note that in terms of the Fourier transform \(\widetilde{w}(y)=\int_{-\infty}^{\infty}dE\exp(-iyE)w(E)\) and the definition \(p(y)=\widetilde{w}(y)/(2\pi)\), the filtered SFF can be found in terms of the analytically continued partition function \[\mathrm{SFF}_{w}(t)=\frac{1}{Z_{w}(\beta)^{2}}\left|\int_{-\infty}^{\infty}dyp(t-y)Z(\beta-iy)\right|^{2}. \tag{22}\] Naturally, for \(w(E)=1\), \(Z_{w}(\beta)=Z(\beta)\), \(p(t-y)=\delta(t-y)\), one recovers the canonical SFF in Eq. (2). Before moving forward, let us characterize the performance of frequency and energy filters in the SFF. We consider random matrix Hamiltonians as a paradigm of quantum chaos. We sample the Hamiltonian matrices \(H\) from the Gaussian orthogonal ensemble, \(\mathrm{GOE}(d)\), calculate the corresponding \(\mathrm{SFF}(t)\) and \(\mathrm{SFF}^{2}(t)\), and then perform the average over the different realizations. Specifically, we consider samples of real matrices \(H=(X+X^{\mathrm{T}})/2\), where all elements \(x\in\mathbb{R}\) of \(X\) are pseudo-randomly generated with probability measure given by the Gaussian, \(\exp(-x^{2}/\big{(}2\sigma^{2}\big{)})/(\sigma\sqrt{2\pi})\), where \(\sigma\) is the standard deviation. Figure 1 shows three panels corresponding to the isolated, unfiltered SFF in panel **a** and its modified versions with frequency and energy filters in panels **b** and **c**, respectively. A single realization of the SFF exhibits quantum noise, manifested in the erratic oscillatory behavior in the time evolution (red line in Fig. **1a**). This is suppressed by performing a Hamiltonian ensemble average (solid black line in Fig. **1a**). Alternatively, the frequency filter can suppress quantum noise in the SFF for a single random-matrix Hamiltonian without relying on ensemble averages, as illustrated in panel **b**. Its effect is to reduce the oscillatory wiggles and the RV. The use of filters acting on energy eigenvalues directly provides a different alternative, shown in panel **c**. Note that for the unfiltered SFF, RV equals 1 from the time of the dip onward, as seen in Fig. **1a**. This result holds for random matrices of any dimension [28] and for chaotic many-body quantum systems of any size [28; 29], which means that the unfiltered SFF is non-self-averaging. RV \(=1\) because the distribution of \(\mathrm{SFF}(t)\) at large times is exponential [30], so the variance equals the square of the mean. In contrast, the asymptotic values of the RV under frequency and energy filters become smaller than 1. Furthermore, as we shall see in Sec. V, the long-time values of the RV of the filtered SFF further decrease as \(d\) increases, indicating that the SFF becomes self-averaging. ## IV Energy dephasing processes and spectral filtering This section explores the relationship between energy-dephasing processes and the effects of different spectral filters. 
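As a concrete reference before relating the filters to dephasing, the following sketch (an added illustration for a single GOE draw) evaluates the frequency-filtered SFF of Eq. (12) with the Gaussian filter of Eq. (26) and the eigenvalue-filtered SFF of Eq. (21) with the time-dependent Gaussian filter of Eq. (30) for \(f(E)=E\):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 64))
E = np.linalg.eigvalsh((X + X.T) / 2)  # a single GOE(64) draw
beta, kappa = 0.1, 0.1
t = np.linspace(0.0, 40.0, 400)[:, None, None]
En, Em = E[None, :, None], E[None, None, :]

boltz = np.exp(-beta * (En + Em))
phase = np.exp(-1j * t * (En - Em))

# Frequency filter, Eq. (26): w(E_n - E_m) = exp(-kappa t (E_n - E_m)^2).
w_freq = np.exp(-kappa * t * (En - Em) ** 2)
sff_freq = np.real((boltz * phase * w_freq).sum(axis=(1, 2)))
sff_freq /= np.exp(-beta * E).sum() ** 2

# Eigenvalue filter, Eq. (30) with f(E) = E: w(E_n) = exp(-kappa t E_n^2),
# normalized by Z_w(beta) = sum_n w(E_n) exp(-beta E_n) as in Eq. (21).
wn, wm = np.exp(-kappa * t * En**2), np.exp(-kappa * t * Em**2)
Zw = (np.exp(-beta * E)[None, :, None] * wn).sum(axis=1)[:, 0]
sff_eig = np.real((boltz * phase * wn * wm).sum(axis=(1, 2))) / Zw**2

print(sff_freq[0], sff_eig[0])  # both equal 1 at t = 0, where w = 1
```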
### Frequency filters from energy dephasing Energy dephasing processes, also known as energy diffusion processes, arise in various scenarios [45, 46]. They are postulated in modifications of quantum mechanics involving wavefunction collapse models [47, 48, 49]. They also arise in the description of unitary time evolution timed by a realistic clock subject to errors [50, 51]. They have been used to study the interplay between quantum chaos and decoherence [52, 10, 35]. Energy dephasing has also been analyzed in the context of AdS/CFT [53, 54, 55, 44] to explore the relation between entanglement and spacetime connectedness [13]. It can be described by the master equation \[d_{t}\rho_{t}=-i[H,\rho_{t}]-\kappa[X,[X,\rho_{t}]], \tag{23}\] with the condition that \([H,X]=0\), so that both Hermitian operators have a common set of eigenvectors, i.e., \(H=\sum_{n}E_{n}|n\rangle\langle n|\) and \(X=\sum_{n}x_{n}|n\rangle\langle n|\). The nested commutator plays the role of the dissipator and induces dephasing, suppressing coherent quantum superpositions in the energy eigenbasis. This is explicitly seen by considering the time evolution of an initial quantum state \(\rho_{0}=\sum_{nm}\rho_{nm}(0)|n\rangle\langle m|\), \[\rho_{t}=\sum_{nm}\rho_{nm}(0)e^{-it(E_{n}-E_{m})-\kappa t(x_{n}-x_{m})^{2}}|n \rangle\langle m|. \tag{24}\] For an initial coherent Gibbs state, the SFF is obtained as the survival probability \[\text{SFF}(t)=\sum_{nm}\frac{e^{-\beta(E_{n}+E_{m})-it(E_{n}-E_{m})}}{Z(\beta )^{2}}e^{-\kappa t(x_{n}-x_{m})^{2}}. \tag{25}\] When the Hermitian Lindblad operator is a deformation of the Hamiltonian, \(X=f(H)\), \(x_{n}=f(E_{n})\), and \(w(E_{n}-E_{m})=\exp\{-\kappa t[f(E_{n})-f(E_{m})]^{2}\}\) in Eq. (12). When they are equal, \(X=H\), one recovers the canonical case of energy dephasing. In this case, one can recast \(\text{SFF}(t)\) in Eq. (25) as the frequency-filtered \(\text{SFF}_{w}\) (12) with the identification of a time-dependent Gaussian filter function \[w(E_{n}-E_{m})=\exp[-\kappa t(E_{n}-E_{m})^{2}]. \tag{26}\] The action of the frequency filter (26) in the SFF is shown in Fig. 2 for fixed \(\beta=0.1\) and varying \(\kappa\); see also Fig. 1. Such filtering delays the onset of the ramp, reduces its span, and decreases the depth of the correlation hole. In short, it decreases the dynamical manifestations of quantum chaos. The corresponding RV is shown in Fig. 2**b** indicating that the long-time plateau of the RV is independent of \(\kappa\) for \(\kappa>0\), as expected from Eq. (25). Figure 2**c** shows the effect of varying \(\beta\) for fixed \(\kappa\), with the corresponding \(\beta\)-dependent long-time plateau being associated with the RV of a canonical thermal equilibrium state, as shown in Fig. 2**d**. ### Eigenvalue filtering from energy dephasing without quantum jumps In what follows, we show that eigenvalue filtering can be described as the non-Hermitian evolution associated with energy-dephasing processes without quantum jumps. To this end, consider the evolution operator \(U(t)=\exp(-itH_{T})\) generated by the time-independent non-Hermitian Hamiltonian \(H_{T}=H-i\Gamma\), with \(H=H^{\dagger}\) and \(\Gamma=\Gamma^{\dagger}\). In this case, the evolution is not trace-preserving, and one can introduce a single nonlinear Kraus operator dependent on the initial state \(\rho_{0}\) \[K=\frac{1}{\sqrt{\text{tr}(e^{-itH_{T}}\rho_{0}e^{itH_{T}^{\dagger}})}}e^{-itH _{T}}. 
\tag{27}\] The latter is associated with a master equation of the form \[d_{t}\rho_{t}=-i(H_{T}\rho_{t}-\rho_{t}H_{T}^{\dagger})+2\mathrm{tr}(\Gamma\rho_{t})\rho_{t}, \tag{28}\] which describes non-Hermitian dynamics subject to balanced norm gain and loss [56, 57].

Figure 2: Frequency filtered SFF and its RV for different dephasing strengths and inverse temperatures. Panels **a** and **b** show the SFF next to the corresponding RV for inverse temperature \(\beta=0.1\) and different dephasing strengths \(\kappa\). Panels **c** and **d** show the SFF next to the RV for a dephasing strength \(\kappa=0.01\) and different values of the inverse temperature \(\beta\). In all panels, the Hamiltonian averages were taken over a sample of 500 random GOE(64) Hamiltonians \(H\) with \(\sigma=1\).

In particular, consider a non-Hermitian Hamiltonian in which the Hermitian and anti-Hermitian parts commute \([H,\Gamma]=0\) and thus have common eigenstates \(\{|E_{n}\rangle\}\). The action of the filter function can be identified by noting that \(w(H)=\exp[-t\Gamma]\), i.e., \(\Gamma|E_{n}\rangle=-\frac{1}{t}\log w(E_{n})|E_{n}\rangle\). As an illustrative example, consider the master equation for energy dephasing in Eq. (23) with the condition \([H,X]=0\). This evolution is of the Lindblad form with a single Hermitian Lindblad operator \(\sqrt{2\kappa}X\) and is thus Markovian [41]. As such, it can alternatively be written in terms of a non-Hermitian Hamiltonian \(H_{T}=H-i\kappa X^{2}\) and a quantum jump term \(\mathcal{J}(\rho)=2\kappa X\rho X\). Disregarding the quantum jump term induces a non-Hermitian evolution exclusively governed by \(H_{T}\). This can be justified at short times or by post-selection of quantum trajectories conditioned on the absence of quantum jumps [58]. The evolution of the subset of quantum trajectories exhibiting no quantum jumps from time \(t=0\) to time \(t\) is governed by Eq. (28), known in this context as the nonlinear Schrodinger equation for null-measurement conditioning [56], which admits a closed-form solution [35]. Explicit computation of the survival probability for the coherent Gibbs state yields the expression of the SFF \[\mathrm{SFF}(t)=\sum_{nm}e^{-\beta(E_{n}+E_{m})-it(E_{n}-E_{m})}\frac{e^{-\kappa t(x_{n}^{2}+x_{m}^{2})}}{Z(\beta)Z_{w}(\beta,t)}, \tag{29}\] where the modified partition function \(Z_{w}(\beta,t)=\mathrm{tr}[w(X)^{2}\exp(-\beta H)]\). The case of the Hamiltonian deformation \(X=f(H)\) corresponds to the choice of the time-dependent filter function \[w(E_{n})=\exp[-\kappa tf(E_{n})^{2}], \tag{30}\] in Eq. (21). The time dependence of the SFF with eigenvalue filtering in Eq. (30) engineered through energy dephasing in the absence of quantum jumps is illustrated in Fig. 3. At fixed \(\beta\), increasing \(\kappa\) reduces the correlation hole; see Fig. 3**a**. For \(\kappa>0\), the long-time plateaus of the SFF and RV differ from the unfiltered case. Increasing \(\beta\) for fixed \(\kappa\) favors contributions to the SFF from the low-energy part of the spectrum and generally reduces the correlation hole and increases the plateau value of the SFF and the RV, as shown in Figs. 3**c** and **d**, respectively. ## V Self-averaging at long times Under chaotic quantum dynamics, quantities that are local in space are expected to be self-averaging at short times [28; 29; 30]. 
It has further been suggested that time-locality implies self-averaging at long times. The SFF can be interpreted as a time auto-correlation function and is thus nonlocal in time. The unfiltered SFF lacks the self-averaging property at all timescales in isolated quantum systems [28]. We have shown that filters ubiquitously used to reduce the erratic wiggles of the SFF can be associated with quantum channels involving nonunitary dynamics. The breaking of unitarity contributes to suppressing quantum noise. In what follows, we numerically investigate the dependence of the RV on the system size to identify when the RV decreases as \(d\) increases, thus rendering the SFF self-averaging. Figure 1 implies that unitarity breaking can suppress the quantum noise of the SFF. Nevertheless, the robustness against sample-to-sample fluctuations is associated with the reduction of the RV as the Hilbert space dimension increases. Figs. 4**a** and **b** show that the frequency- and eigenvalue-filtered SFFs become self-averaging at the small inverse temperature shown and at long times. Figs. 4**c**-**d** confirm that the filtered SFFs become self-averaging at times after the correlation hole, where, according to Fig. 1**b**-**c**, \(\langle\mathrm{SFF}(t)\rangle>\mathrm{RV}(t)\). The effect of the inverse temperature depends on the filter considered, as shown in Fig. 5. The long-time SFF is only self-averaging for moderate to high temperatures in the case of frequency filtering; see panel **a**. By contrast, the long-time eigenvalue-filtered SFF remains self-averaging as the inverse temperature varies, as shown in panel **b**.

Figure 3: Eigenvalue filtered SFF and its RV for different dephasing strengths and inverse temperatures. Hamiltonian averages over a sample of 500 random GOE(64) Hamiltonians \(H\) with \(\sigma=1\). Panels **a** and **b** show the SFF and the corresponding RV with inverse temperature \(\beta=0.1\) and different dephasing strengths \(\kappa\). Panels **c** and **d** show the SFF and the associated RV with fixed dephasing strength \(\kappa=0.01\) and varying inverse temperatures \(\beta\).

## VI Information loss and its recovery We have shown that the different approaches to suppress quantum noise in the SFF can be described as quantum channels involving nonunitary physical processes. In particular, Hamiltonian averaging, frequency filtering, and time averaging of the SFF are all associated with mixed-unitary channels. The latter are unital and thus satisfy the necessary conditions for the purity \(P_{t}=\mathrm{tr}[\Phi_{t}(\rho_{0})^{2}]\) of the time-evolving state to decay monotonically under the action of the channel [52; 59]. Conversely, the linear entropy \(S_{L}=1-P_{t}\) increases monotonically. Thus, these channels lead to monotonic information loss. Yet, the lost information is fully recoverable [60; 42]. To appreciate this, it is convenient to consider the Hilbert space of the system together with the Hilbert space \(\mathcal{H}_{E}\) of the environment with initial density matrix \(\rho_{E}\), such that \[\Phi_{t}(\rho_{0})=\mathrm{tr}_{E}\left(U_{SE}\rho_{0}\otimes\rho_{E}U_{SE}^{\dagger}\right), \tag{31}\] in terms of a global unitary \(U_{SE}\). One can consider a measurement on the environment associated with a family of operators \(M_{y}\), such that \(\sum_{y}M_{y}=\mathbb{I}_{E}\). The expectation value of an operator \(A\) on the system can be described in terms of a family of completely positive maps \(\Phi_{y}\). 
\[\mathrm{tr}[\Phi_{t}(\rho_{0})A] = \sum_{y}\mathrm{tr}\left(U_{SE}\rho_{0}\otimes\rho_{E}U_{SE}^{\dagger}A\otimes M_{y}\right) \tag{32}\] \[= \sum_{y}\mathrm{tr}[\Phi_{y}(\rho_{0})A].\] The decomposition of the channel \(\Phi_{t}=\sum_{y}\Phi_{y}\) is known as an instrument. The measurement of \(M\) on the environment yields outcome \(y\) and the quantum state \(\Phi_{y}(\rho_{0})/\mathrm{tr}[\Phi_{y}(\rho_{0})]\) with probability \(p(y)=\mathrm{tr}[\Phi_{y}(\rho_{0})]\). For a mixed-unitary channel, where \(\Phi_{y}(\rho)=p(y)U_{y}\rho U_{y}^{\dagger}\), it is then possible to select the reverse operation \[R_{y}(\rho)=U_{y}^{\dagger}\rho U_{y}, \tag{33}\] so that the information-recovery channel is \[R=\sum_{y}R_{y}\circ\Phi_{y}. \tag{34}\] In short, the information acquired by performing a measurement on the environment can be used to reverse the action of the quantum channel \(\Phi_{t}\) on the system, thus recovering the initial state. This information-recovery protocol involves access to the degrees of freedom of an environment, which may be physical or an auxiliary construction, depending on the context. Any physical system is embedded in an environment that may give rise to decoherence and filtering through interaction with the system of interest. By contrast, in an effectively isolated system, one may still consider using nonunitary operations for filtering as done in numerical analysis without an explicit physical environment.

Figure 5: Self-averaging of the filtered SFF as a function of the inverse temperature. Panel **a** shows the value of the long-time RV plateau in the frequency-filtered SFF, reflecting a breakdown of self-averaging as the inverse temperature is increased. By contrast, panel **b** indicates that self-averaging remains robust against variations of the inverse temperature in the case of eigenvalue filtering.

Figure 4: Asymptotic self-averaging of the filtered SFF. Hamiltonian averages over a sample of 1000 random GOE(\(d\)) Hamiltonians \(H\) with \(\sigma=1\). Panels **a** and **b** show the plateau value of the frequency-filtered and the energy-filtered RV, which is independent of \(\kappa\), as a function of the Hilbert space dimension \(d\) for different inverse temperatures. Panel **c** shows the frequency-filtered RV for inverse temperature \(\beta=0.5\), dephasing strength \(\kappa=0.2\), and different Hilbert space dimensions \(d\). In panel **d**, the corresponding energy-filtered RV for the same parameters is shown. In both cases, the relative variance plateau decreases as the dimension increases, i.e., the SFF becomes self-averaging.

In what follows, we tackle a complementary problem: the recovery of information masked exclusively by the filter. We focus on frequency filtering and aim at obtaining the unfiltered SFF from the filtered one by undoing the action of the filter. The filtered SFF is the convolution of the Fourier transform of the filter function and the canonical SFF, as shown in Eq. (12), which can be written as \[\mathrm{SFF}_{w}(t)=\frac{1}{2\pi}\widetilde{w}(t)*\mathrm{SFF}(t). \tag{35}\] By the convolution theorem, it is thus possible to retrieve the SFF from knowledge of \(\mathrm{SFF}_{w}\) and \(w\), using \[\widehat{\mathrm{SFF}}(\nu)=\frac{\widehat{\mathrm{SFF}}_{w}(\nu)}{w(\nu)}, \tag{36}\] provided that \(w(\nu)\) is nonzero everywhere in the domain of \(\widehat{\mathrm{SFF}}_{w}(\nu)\). Even when the inverse frequency filter function \(1/w(\nu)\) is nonsingular, the inversion can be unstable for small values of \(w(\nu)\). 
Furthermore, knowledge of \(\widehat{\mathrm{SFF}}_{w}(\nu)\) generally comes with additive noise, whether resulting from limited machine precision in a numerical simulation or statistical errors in measured data. This scenario is common in filter analysis and motivates alternatives to direct deterministic deconvolution, such as the Wiener deconvolution. ## VII Intrinsic quantum noise from eigenvalue statistics In the fidelity-based interpretation, the SFF is the survival probability of the time-evolving quantum state \(\rho_{t}\) in the initial coherent Gibbs (or TFD) state. As such, one can introduce a projector onto the initial state \[P=\rho_{0}=|\psi_{\beta}\rangle\langle\psi_{\beta}|, \tag{37}\] satisfying \(P^{2}=P\), i.e., with eigenvalues \(0\) and \(1\). Such eigenvalues correspond to measurement outcomes in a projective measurement of \(P\). The full counting statistics of the projective measurement of \(P\) is thus that of a discrete random variable, i.e., the Bernoulli distribution. Its characteristic function reads \[\mathrm{tr}\left[\rho_{t}e^{i\theta P}\right]=1+(e^{i\theta}-1)\mathrm{SFF}(t). \tag{38}\] For any nontrivial evolution, an intrinsic quantum noise cannot be suppressed (other than by post-selection), whether the dynamics is unitary or not. The quantum noise associated with the uncertainty in the measurement outcomes of a projective measurement of \(P\) can be quantified by the relative variance of the eigenvalue statistics encoded in the relation \[\frac{\mathrm{var}_{\rho_{t}}(P)}{\mathrm{tr}(P\rho_{t})^{2}}=\frac{\mathrm{tr}(P^{2}\rho_{t})-\mathrm{tr}(P\rho_{t})^{2}}{\mathrm{tr}(P\rho_{t})^{2}}=\frac{1-\mathrm{SFF}}{\mathrm{SFF}}. \tag{39}\] For any \(t>0\), up to recurrences of zero measure [61; 62], \(\mathrm{var}_{\rho_{t}}(P)>0\). ## VIII Eigenvalue filtering as Hamiltonian deformation We first note the following identity for the modified partition function (20) with an eigenvalue filter \(w(E)\), \[Z_{w}(\beta)=\mathrm{tr}\left[e^{-\beta\left(H-\frac{1}{\beta}\log w(H)\right)}\right]. \tag{40}\] As a result, \(Z_{w}(\beta)\) can be understood as the standard partition function of the operator \[F_{\beta}=H-\frac{1}{\beta}\log w(H), \tag{41}\] that describes a one-parameter family of Hermitian Hamiltonian deformations of \(H\)[63; 64]. Formally, this deformation takes the form of a Helmholtz free energy operator analogous to that introduced to bound the charging power of quantum batteries [65]. In particular, the filter gives rise to the entropy term \(S(H)=\log w(H)\). The eigenvalue-filtered SFF in Eq. (21) is then \[\mathrm{SFF}_{w}(t)=\left|\frac{\mathrm{tr}\left(e^{-\beta F_{\beta}-itH}\right)}{\mathrm{tr}\left(e^{-\beta F_{\beta}}\right)}\right|^{2}. \tag{42}\] At long times, in the absence of degeneracies, \(\mathrm{SFF}_{w}(t)\) tends to \[\overline{\mathrm{SFF}_{w}}=\frac{\mathrm{tr}\left(e^{-2\beta F_{\beta}}\right)}{\mathrm{tr}\left(e^{-\beta F_{\beta}}\right)^{2}}. \tag{43}\] This expression is nothing but the purity \(P[\rho_{w}(\beta)]=\mathrm{tr}[\rho_{w}(\beta)^{2}]\) of the canonical Gibbs thermal state \(\rho_{w}(\beta)\) defined with respect to the deformed Hamiltonian, i.e., the free energy operator \(F_{\beta}\), \[\rho_{w}(\beta)=\frac{e^{-\beta F_{\beta}}}{Z_{w}(\beta)}. \tag{44}\]
\tag{44}\] Indeed, the asymptotic value of \(\mathrm{SFF}_{w}(t)\) can be written in terms of the second Renyi entropy \(S_{2}[\rho_{w}(\beta)]=-\log\mathrm{tr}[\rho_{w}(\beta)^{2}]\) as \[\overline{\mathrm{SFF}_{w}}=P[\rho_{w}(\beta)]=e^{-S_{2}[\rho_{w}(\beta)]}. \tag{45}\] For an eigenvalue filter function \(w(E):\mathbb{R}\rightarrow[0,1]\), \(\overline{\mathrm{SFF}_{w}}\leq\overline{\mathrm{SFF}}\) and \(S_{2}[\rho_{w}(\beta)]\geq S_{2}[\rho(\beta)]\), where \(\rho(\beta)=\exp(-\beta H)/Z(\beta)\) is the canonical thermal state of the undeformed Hamiltonian.

## IX Master equations for frequency filters from Liouvillian deformation

We next show that frequency filters are associated with a family of master equations that generalize the dynamics related to energy dephasing. Consider the master equation in which time evolution is generated by a Liouvillian \(\mathbb{L}\), \[\frac{d}{dt}|\rho_{t})=\mathbb{L}|\rho_{t}), \tag{46}\] where \(|\rho_{t})\) denotes the vectorized density matrix at time \(t\). In terms of it, \(\mathrm{SFF}(t)=(\rho_{0}|\rho_{t})\). Formally, equation (46) is solved by \(|\rho_{t})=e^{\mathbb{L}t}|\rho_{0})\). We focus on the case in which the Liouvillian is diagonalizable, so that it admits a spectral decomposition of the form \(\mathbb{L}=\sum_{\mu}\lambda_{\mu}|\mu)(\tilde{\mu}|\) using a bi-orthogonal basis. Here, \(|\mu)\) and \((\tilde{\mu}|\) are the right and left eigenstates, respectively, with complex eigenvalue \(\lambda_{\mu}\) [66; 67]. We next consider the Liouvillian of the form \[\mathbb{L}(\cdot)=-i[H,\cdot], \tag{47}\] associated with an isolated system with Hamiltonian \(H\). Its spectrum is purely imaginary, and left and right eigenvectors coincide. Given a complex function \(W(z):\mathbb{C}\rightarrow\mathbb{C}\), we define the associated Liouvillian deformation \(W(\mathbb{L})=\sum_{\mu}W(\lambda_{\mu})|\mu)(\mu|\) [36]. By specifying the Liouvillian deformation in terms of the frequency filter function \(w(x):\mathbb{R}\rightarrow[0,1]\) as \[W(\mathbb{L})=\log w(i\mathbb{L}), \tag{48}\] we consider a physical process in which the initial, unfiltered coherent Gibbs state \(|\psi_{\beta}\rangle\langle\psi_{\beta}|\) evolves into a generalization of the frequency-filtered time-dependent density matrix in Eq. (13). Specifically, we consider the time evolution for \(t\geq 0\) described by the time-dependent density matrix \[\rho_{t}=\sum_{nm}\frac{e^{-\beta(E_{n}+E_{m})/2-it(E_{n}-E_{m})}}{Z(\beta)}e^{\chi(t)W(E_{n}-E_{m})}|n\rangle\langle m|,\] where \(\chi(t)\) is a real function satisfying \(\chi(0)=0\) and \(W(E_{n}-E_{m})=\langle n|W(\mathbb{L})|m\rangle\) are matrix elements in the Hamiltonian eigenbasis. This evolution fulfills the master equation \[\frac{d}{dt}|\rho_{t})=[\mathbb{L}+\dot{\chi}(t)W(\mathbb{L})]\,|\rho_{t}), \tag{49}\] with the initial condition \(\rho_{0}=|\psi_{\beta}\rangle\langle\psi_{\beta}|\), where \(\dot{\chi}\) denotes the time derivative of \(\chi\). While \(\mathbb{L}\) is anti-Hermitian, \(W(\mathbb{L})\) is Hermitian. Thus, \(W(\mathbb{L})\) breaks unitarity and can be identified as the dissipator in the master equation (49). Given that \(W(z)=W(-z)\), its Taylor series expansion involves only even powers of \(z\), i.e., \(W(z)=\sum_{n=0}^{\infty}W^{(2n)}(0)z^{2n}/(2n)!\).
The master equation can be written as \[\frac{d}{dt}\rho_{t}=-i[H,\rho_{t}]+\dot{\chi}(t)\sum_{n=0}^{\infty}\frac{W^{(2n)}(0)}{(2n)!}\mathrm{ad}_{H}^{2n}\rho_{t}, \tag{50}\] where the nested commutators in each term of the Taylor series have been written compactly in terms of the adjoint map \(\mathrm{ad}_{X}Y=[X,Y]\), \(\mathrm{ad}_{X}^{2}Y=[X,[X,Y]]\), etc. The case of a time-independent frequency filter for \(t>0\) is described by choosing \(\chi(t)\) as the Heaviside step function, \[\chi(t)=\Theta(t),\qquad\dot{\chi}(t)=\delta(t). \tag{51}\] The delta function \(\delta(t)=\frac{d}{dt}\Theta(t)\) in the master equation is thus required for the frequency filter to be time-independent. Implementing this filter relies on a single kick with the dissipator \(W(\mathbb{L})\). Naturally, for the conventional energy-dephasing frequency filter (26), the master equations (49) and (50) truncate at \(\mathrm{ad}_{H}^{2}\rho\) and reduce to (23) for the choice \(\chi(t)=t\), \(\dot{\chi}(t)=1\).

## X Discussion and Conclusions

The lack of self-averaging in the SFF is tied to quantum noise, manifested by erratic wiggles in the time domain. Analytical and numerical studies of the SFF enforce the reduction of the wiggles by resorting to Hamiltonian ensembles, time-averaging, and spectral filters in the energy or frequency domain. Through a scaling analysis of the relative variance of the SFF, we have shown that the frequency and energy filters ensure that the SFF becomes self-averaging at long times. We have established that suppressing the erratic wiggles (quantum noise) in the SFF implies nonunitary dynamics characterized by information loss and decoherence. Hamiltonian averaging, time-averaging, and frequency filters can be described by a mixed-unitary channel representing the application of a random unitary with a given probability distribution. Mixed-unitary channels are unital and induce information loss that can, however, be recovered by environment-assisted channel correction. By contrast, filters acting directly on the energy eigenvalues can be interpreted as a nonlinear quantum channel describing the non-Hermitian evolution of an energy-dephasing process conditioned on the absence of quantum jumps. The identification of the canonical, filtered SFFs for isolated systems in terms of the survival probability of a coherent Gibbs state under nonunitary evolution singles out the fidelity-based generalization of the SFF to open quantum systems put forward in Refs. [35; 36; 37; 38] with respect to alternative proposals [39; 40]. The fidelity-based approach makes it possible to unify SFFs for isolated systems with and without filters and for open quantum systems in a single framework. Our results rely on the combination of tools in quantum information science and quantum chaos, and contribute to the understanding of filters in the characterization of the spectral properties of many-body systems (e.g., in numerical studies) as physical operations breaking unitarity. In particular, our results establish how such filters can be implemented in digital or analog quantum simulation experiments of the nonequilibrium dynamics of many-body systems. This conclusion should be generalizable to quantities other than the SFF, such as correlation functions, that admit an information-theoretic interpretation associated with a quantum evolution.
Our results hold for the dynamics of finite-dimensional systems and thus can be applied to the description of black hole physics in this framework, where self-averaging SFFs appear in semiclassical approaches. In view of our findings, the latter involve unitarity breaking. ###### Acknowledgements. It is a pleasure to acknowledge discussions with Federico Balducci, Aurelia Chenu, Julien Cornelius, Inigo L. Egusquiza, Pablo Martinez-Azcona, Federico Roccati, Avadh Saxena, and Zhenyu Xu. This project was funded within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 16434093. For open access and in fulfillment of the obligations arising from the grant agreement, the authors have applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission. L.F.S. was supported by a grant from the United States National Science Foundation (NSF, Grant No. DMR-1936006).
2307.15844
Shared Information for a Markov Chain on a Tree
Shared information is a measure of mutual dependence among multiple jointly distributed random variables with finite alphabets. For a Markov chain on a tree with a given joint distribution, we give a new proof of an explicit characterization of shared information. The Markov chain on a tree is shown to possess a global Markov property based on graph separation; this property plays a key role in our proofs. When the underlying joint distribution is not known, we exploit the special form of this characterization to provide a multiarmed bandit algorithm for estimating shared information, and analyze its error performance.
Sagnik Bhattacharya, Prakash Narayan
2023-07-29T00:20:23Z
http://arxiv.org/abs/2307.15844v2
# Shared Information for a Markov Chain on a Tree ###### Abstract Shared information is a measure of mutual dependence among multiple jointly distributed random variables with finite alphabets. For a Markov chain on a tree with a given joint distribution, we give a new proof of an explicit characterization of shared information. The Markov chain on a tree is shown to possess a global Markov property based on graph separation; this property plays a key role in our proofs. When the underlying joint distribution is not known, we exploit the special form of this characterization to provide a multiarmed bandit algorithm for estimating shared information, and analyze its error performance. **Keywords:** Global Markov property, Markov chain on a tree, multiarmed bandits, mutual information, mutual information estimation, shared information. ## I Introduction Let \(X_{1},\ldots,X_{m}\), \(m\geq 2\) be random variables (rvs) with finite alphabets \(\mathcal{X}_{1},\ldots,\mathcal{X}_{m}\), respectively, and joint probability mass function (pmf) \(P_{X_{1}\ldots X_{m}}\). The _shared information_\(\mathrm{SI}(X_{1},\ldots,X_{m})\) of the rvs \(X_{1},\ldots,X_{m}\) is a measure of mutual dependence among them, and for \(m=2\), \(\mathrm{SI}(X_{1},X_{2})\) particularizes to mutual information \(\mathrm{I}(X_{1}\wedge X_{2})\). Consider \(m\) terminals, with terminal \(i\) having privileged access to independent and identically distributed (i.i.d.) repetitions of \(X_{i}\), \(i=1,\ldots,m\). Shared information \(\mathrm{SI}(X_{1},\ldots,X_{m})\) has the operational meaning of being the largest rate of _shared common randomness_ that the \(m\) terminals can generate in a distributed manner upon cooperating among themselves by means of interactive, publicly broadcast and noise-free communication1. Shared information measures the maximum rate of common randomness that is (nearly) independent of the open communication used to generate it. Footnote 1: Our preferred nomenclature of shared information is justified by its operational meaning. The (Kullback-Leibler) divergence-based expression for \(\mathrm{SI}(X_{1},\ldots,X_{m})\) was discovered in [20, Example 4], where it was derived as an upper bound for a single-letter formula for the "secret key capacity of a source model" with \(m\) terminals, a concept defined by the operational meaning above. The upper bound was shown to be tight for \(m=2\) and \(3\). Subsequently, in a significant advance [9, 14, 11], tightness of the upper bound was established for arbitrary \(m\), thereby imbuing \(\mathrm{SI}(X_{1},\ldots,X_{m})\) with the operational significance of being the mentioned maximum rate of shared secret common randomness. The potential for shared information to serve as a natural measure of mutual dependence of \(m\geq 2\) rvs, in the manner of mutual information for \(m=2\) rvs, was suggested in [32]; see also [33]. A comprehensive and consequential study of shared information, where it is termed "multivariate mutual information" [12], examines the role of secret key capacity as a measure of mutual dependence among multiple rvs and derives important properties including structural features of an underlying optimization along with connections to the theory of submodular functions. 
In addition to constituting secret key capacity for a multiterminal source model ([20, 9, 14]), shared information also affords operational meaning for: maximal packing of edge-disjoint spanning trees in a multigraph ([35, 34]; see also [10, 18, 12] for variant models); optimum querying exponent for resolving common randomness [40]; strong converse for multiterminal secret key capacity [40, 41]; and also undirected network coding [11], data clustering [13], among others. As argued in [12], shared information also possesses several attributes of measures of dependence among \(m\geq 2\) rvs proposed earlier, including Watanabe's total correlation [43] and Han's dual total correlation [25] (both mentioned in Section II). For \(m=2\) rvs, measures of common information due to Gacs-Korner [22], Wyner [44] and Tyagi [39] have operational meanings; extensions to \(m>2\) rvs merit further study (an exception [30] treats Wyner's common information). For a given joint pmf \(P_{X_{1}\cdots X_{m}}\) of the rvs \(X_{1},\ldots,X_{m}\), an explicit characterization of \(\mathrm{SI}(X_{1},\ldots,X_{m})\) can be challenging (see Definition 1 below); exact formulas are available for special cases (cf. e.g., [20, 35, 12]). An efficient algorithm for calculating \(\mathrm{SI}(X_{1},\ldots,X_{m})\) is given in [12]. Our focus in this paper is on a Markov chain on a tree (MCT) [23]. Tree-structured probabilistic graphical models are appealing owing to desirable statistical properties that enable, for instance, efficient algorithms for exact inference [27, 37]; decoding [31, 27]; sampling [21]; and structure learning [15]. We take the tree structure of our model to be known; algorithms exist already for learning tree structure from data samples [15, 16]. We exploit the special form of \(P_{X_{1}\cdots X_{m}}\) in the setting of an MCT to obtain a simple characterization for shared information. When the joint pmf \(P_{X_{1}\cdots X_{m}}\) is not known but the tree structure is, the said characterization facilitates an estimation of shared information. In the setting of an MCT [23], our contributions are three-fold. First, we derive an explicit characterization of shared information for an MCT with a given joint pmf \(P_{X_{1}\cdots X_{m}}\) by means of a direct approach that exploits tree structure and Markovity of the pmf. A characterization of shared information had been sketched already in [20]; our new proof does not seek ###### Abstract We consider a class of _MCT_, where \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\), and \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\). We show that \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\), and \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\). We show that \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\), and \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\). We show that \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\), and \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\). We also show that \(P_{X_{1}\cdots X_{m}}\) is a \(k\)-partition of \(\mathcal{M}\). Suppose that \(\pi^{*}=(\pi_{1}^{*},\ldots,\pi_{l}^{*})\), \(l\geq 2\), attains \(\mathrm{SI}(X_{\mathcal{M}})>0\) (not necessarily uniquely) in Definition 1, i.e, \[\mathrm{SI}(X_{\mathcal{M}})=\frac{1}{l-1}D(P_{X_{\mathcal{M}}}\parallel\prod_{ u=1}^{l}P_{X_{\pi_{u}^{*}}}). 
\tag{4}\] A simple but useful observation based on Definition 1, (3) and (4) is that upon agglomerating the rvs in each atom of an optimum partition \(\pi^{*}=(\pi_{1}^{*},\ldots,\pi_{l}^{*})\), the resulting shared information \(\mathrm{SI}(X_{\pi_{1}^{*}},\ldots,X_{\pi_{l}^{*}})\) of the teams \(X_{\pi_{1}^{*}},\ldots,X_{\pi_{l}^{*}}\) equals the shared information \(\mathrm{SI}(X_{\mathcal{M}})\) of the (unteamed) rvs \(X_{1},\ldots,X_{m}\), i.e., for these special teams (3) holds with equality. This property has benefited information-clustering applications (cf. e.g., [12, 13]).

Shared information satisfies the data processing inequality [12]. For \(X_{\mathcal{M}}=(X_{1},\ldots,X_{m})\), consider \(X_{\mathcal{M}}^{\prime}=(X_{1}^{\prime},\ldots,X_{m}^{\prime})\) where for a fixed \(1\leq j\leq m\), \(X_{i}^{\prime}=X_{i}\) for \(i\in\mathcal{M}\setminus\{j\}\) and \(X_{j}^{\prime}\) is obtained as the output of a stochastic matrix \(W:\mathcal{X}_{j}\rightarrow\mathcal{X}_{j}\) with input \(X_{j}\). Then, \(\mathrm{SI}(X_{\mathcal{M}}^{\prime})\leq\mathrm{SI}(X_{\mathcal{M}})\).

It is worth comparing \(\mathrm{SI}(X_{\mathcal{M}})\) with two well-known measures of correlation among \(X_{1},\ldots,X_{m}\), \(m\geq 2\), of a similar vein. Watanabe's _total correlation_ [43] is defined by \[\mathcal{C}(X_{\mathcal{M}})=D(P_{X_{\mathcal{M}}}\parallel\prod_{i=1}^{m}P_{X_{i}})=\sum_{i=1}^{m-1}\mathrm{I}(X_{i+1}\wedge X_{1},\ldots,X_{i}) \tag{5}\] and equals \((m-1)\,\mathcal{I}\left(\pi\right)\) for the partition \(\pi=(\left\{1\right\},\ldots,\left\{m\right\})\) of \(\mathcal{M}\) consisting of singleton atoms. By (1) and (5), clearly \[\mathrm{SI}(X_{\mathcal{M}})\leq\frac{1}{m-1}\ \mathcal{C}(X_{\mathcal{M}}). \tag{6}\] Han's _dual total correlation_ [25] is defined (equivalently) by \[\begin{split}\mathcal{D}(X_{\mathcal{M}})&=\sum_{i=1}^{m}\mathrm{H}(X_{\mathcal{M}\setminus\{i\}})-(m-1)\,\mathrm{H}(X_{\mathcal{M}})\\ &=\mathrm{H}(X_{\mathcal{M}})-\sum_{i=1}^{m}\mathrm{H}(X_{i}\,|\,X_{\mathcal{M}\setminus\{i\}})\\ &=\sum_{i=1}^{m-1}\mathrm{I}(X_{i}\wedge X_{i+1},\ldots,X_{m}\,|\,X_{1},\ldots,X_{i-1})\end{split} \tag{7}\] (with conditioning vacuous for \(i=1\)), where the last expression is from [1]. By a straightforward calculation, these measures are seen to enjoy the sandwich \[\frac{\mathcal{C}(X_{\mathcal{M}})}{m-1}\leq\mathcal{D}(X_{\mathcal{M}})\leq(m-1)\ \mathcal{C}(X_{\mathcal{M}}), \tag{8}\] whereby we get from (6) and the first inequality in (8) that \[\mathrm{SI}(X_{\mathcal{M}})\leq\frac{\mathcal{C}(X_{\mathcal{M}})}{m-1}\quad\text{and}\quad\mathrm{SI}(X_{\mathcal{M}})\leq\mathcal{D}(X_{\mathcal{M}}). \tag{9}\] This makes \(\mathrm{SI}(X_{\mathcal{M}})\) a leaner measure of correlation than \(\mathcal{C}(X_{\mathcal{M}})\) (upon setting aside the fixed constant \(1/(m-1)\)) or \(\mathcal{D}(X_{\mathcal{M}})\). Significantly, the notion of an optimal partition in \(\mathrm{SI}(X_{\mathcal{M}})\) in (1) makes shared information an appealing measure for "local" as well as "global" dependencies among the rvs \(X_{1},\ldots,X_{m}\).

_Remark 1_.: When \(\mathcal{M}=\left\{1,2\right\}\), \[\mathrm{SI}(X_{1},X_{2})=\mathcal{C}(X_{1},X_{2})=\mathcal{D}(X_{1},X_{2})=\mathrm{I}(X_{1}\wedge X_{2}).\]

Our focus is on shared information for a Markov chain on a tree.
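Since the quantities in (1)-(9) are entropy functionals of the joint pmf, they can be computed directly for small \(m\). The following minimal Python sketch (an illustration with an assumed toy pmf, not code from this paper) brute-forces \(\mathrm{SI}\), \(\mathcal{C}\), and \(\mathcal{D}\) for three binary rvs and can be used to check the bounds (6) and (9).

```python
import numpy as np

# Minimal sketch: brute-force SI (Definition 1), total correlation C, and
# dual total correlation D for m = 3 binary rvs with a hypothetical pmf.
p = np.array([[[0.20, 0.05], [0.05, 0.10]],
              [[0.10, 0.05], [0.05, 0.40]]])   # P_{X1 X2 X3}, sums to 1

def H(axes_keep):
    """Entropy (bits) of the marginal over the given coordinate axes."""
    drop = tuple(i for i in range(p.ndim) if i not in axes_keep)
    q = p.sum(axis=drop).ravel()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

def partitions(items):
    """All partitions of a list into nonempty atoms."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

m = p.ndim
HM = H(tuple(range(m)))
# D(P || prod of atom marginals) = sum of atom entropies - joint entropy.
SI = min((sum(H(tuple(a)) for a in part) - HM) / (len(part) - 1)
         for part in partitions(list(range(m))) if len(part) >= 2)
C = sum(H((i,)) for i in range(m)) - HM
D = sum(H(tuple(j for j in range(m) if j != i)) for i in range(m)) - (m - 1) * HM
print(f"SI={SI:.4f}  C/(m-1)={C/(m-1):.4f}  D={D:.4f}")  # SI <= C/(m-1), SI <= D
```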
**Definition 2** (Markov Chain on a Tree).: _Let \(\mathcal{G}=(\mathcal{M},\mathcal{E})\) be a tree with vertex set \(\mathcal{M}=\left\{1,\ldots,m\right\}\), \(m\geq 2\), i.e., a connected graph containing no circuits. For \((i,j)\) in the edge set \(\mathcal{E}\), let \(\mathcal{B}(i\gets j)\) denote the set of all vertices connected with \(j\) by a path containing the edge \((i,j)\). Note that \(i\in\mathcal{B}(i\gets j)\) but \(j\notin\mathcal{B}(i\gets j)\). See Figure 1. The rvs \(X_{1},\ldots,X_{m}\) form a Markov Chain on a Tree (MCT) \(\mathcal{G}\) if for every \((i,j)\in\mathcal{E}\), the conditional pmf of \(X_{j}\) given \(X_{\mathcal{B}(i\gets j)}=\left\{X_{l}:l\in\mathcal{B}(i\gets j)\right\}\) depends only on \(X_{i}\). Specifically, \(X_{j}\) is conditionally independent of \(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\) when conditioned on \(X_{i}\). Thus, \(P_{X_{\mathcal{M}}}\) is such that for each \((i,j)\in\mathcal{E}\),_ \[P_{X_{j}\,|\,X_{\mathcal{B}(i\gets j)}}=P_{X_{j}\,|\,X_{i}}, \tag{10}\] _or, equivalently,_ \[X_{j}\multimap X_{i}\multimap X_{\mathcal{B}(i\gets j)\setminus\{i\}}, \tag{11}\] _where \(U\multimap V\multimap W\) denotes that \(U\) and \(W\) are conditionally independent given \(V\)._

**Definition 3** (Agglomerated MCT).: _Let \(\pi=(\pi_{1},\ldots,\pi_{k})\) be a partition of \(\mathcal{M}\) whose atoms are connected subsets of \(\mathcal{G}\). The agglomerated tree \(\mathcal{G}^{\prime}=(\mathcal{M}^{\prime},\mathcal{E}^{\prime})\) has vertex set \(\mathcal{M}^{\prime}=\{\pi_{1},\ldots,\pi_{k}\}\), with \((\pi_{i^{\prime}},\pi_{j^{\prime}})\in\mathcal{E}^{\prime}\) iff some edge in \(\mathcal{E}\) connects a vertex in \(\pi_{i^{\prime}}\) with a vertex in \(\pi_{j^{\prime}}\)._

**Lemma 2**.: _The rvs \(X_{\pi_{1}},\ldots,X_{\pi_{k}}\) form an MCT on the agglomerated tree \(\mathcal{G}^{\prime}\)._

Proof.: By an obvious extension of Definition 2 to the agglomerated tree \(\mathcal{G}^{\prime}=(\mathcal{M}^{\prime},\mathcal{E}^{\prime})\), the lemma would follow upon showing that for every \((\pi_{i^{\prime}},\pi_{j^{\prime}})\in\mathcal{E}^{\prime}\), it holds that \[X_{\pi_{j^{\prime}}}\multimap X_{\pi_{i^{\prime}}}\multimap X_{\mathcal{B}(\pi_{i^{\prime}}\gets\pi_{j^{\prime}})\setminus\{\pi_{i^{\prime}}\}},\] where \(\mathcal{B}(\pi_{i^{\prime}}\gets\pi_{j^{\prime}})\) is defined with respect to \(\mathcal{G}^{\prime}\).

...which has the local Markov property. Since \[\mathrm{I}(Z\wedge U,W\,|\,X)=\mathrm{I}(Z\wedge U\,|\,UZ)=\mathrm{H}(Z\,|\,UZ)=0.75\ \mathrm{H}(Z\,|\,UZ=0)=0.75\,h(1/3),\] the claim in (iii) is true.

**Theorem 3**.: _Let the rvs \(X_{\mathcal{M}}\) form an MCT on \(\mathcal{G}=(\mathcal{M},\mathcal{E})\). Then, for all disjoint subsets \(A\), \(B\), \(S\) of \(\mathcal{M}\) such that \(S\) separates \(A\) from \(B\) in \(\mathcal{G}\),_ \[X_{A}\multimap X_{S}\multimap X_{B}. \tag{16}\] _Conversely, if \(P_{X_{\mathcal{M}}}\) satisfies (16) for all such \(A\), \(B\), \(S\), then \(X_{1},\ldots,X_{m}\) form an MCT on \(\mathcal{G}\). The proof is given in Appendix B._

## IV Shared Information for a Markov Chain on a Tree

We present a new proof of an explicit characterization of \(\mathrm{SI}(X_{\mathcal{M}})\) for an MCT. The expression in Theorem 6 below was obtained first in [20] relying on its secrecy capacity interpretation.
Specifically, it was computed using a linear program for said capacity and seen to equal an upper bound corresponding to shared information ([20, Examples 4, 7]). The new approach below works directly with the definition of shared information in Definition 1. Also, it differs materially from the treatment in [13, Section 4] for a model that appears to differ from ours. While the upper bound for \(\mathrm{SI}(X_{\mathcal{M}})\) below is akin to that involving secret key capacity in [20], the proof of the lower bound uses an altogether new method based on the structure of a "good" partition \(\pi\) in Definition 1.

**Theorem 6**.: _Let \(\mathcal{G}=(\mathcal{M},\mathcal{E})\) be an MCT with pmf \(P_{X_{\mathcal{M}}}\) in (10). Then_ \[\mathrm{SI}(X_{\mathcal{M}})=\min_{(i,j)\in\mathcal{E}}\mathrm{I}(X_{i}\wedge X_{j}). \tag{19}\]

_Remark 6_.: For later use, let \((\bar{i},\bar{j})\) be the (not necessarily unique) minimizer in the right side of (19).

_Example 3_.: For the MCT in Example 2, \(\mathrm{SI}(X_{\mathcal{M}})=1-h(p^{*})\), where \(p^{*}=\max_{1\leq i\leq m-1}p_{i}<0.5\). Thus, the \(2\)-partition obtained by cutting the (not necessarily unique) weakest correlating edge attains the minimum in Definition 1.

Proof.: As shown in [20], \[\mathrm{SI}(X_{\mathcal{M}})\leq\min_{(i,j)\in\mathcal{E}}\mathrm{I}(X_{i}\wedge X_{j}) \tag{20}\] and is seen as follows. For each \((i,j)\in\mathcal{E}\), consider a \(2\)-partition of \(\mathcal{M}\), viz. \(\pi=\pi((i,j))=(\pi_{1},\pi_{2})\) where \(\pi_{1}=\mathcal{B}(i\gets j)\), \(\pi_{2}=\mathcal{B}(j\gets i)\). Then, \[\mathrm{I}(X_{\pi_{1}}\wedge X_{\pi_{2}})=\mathrm{I}(X_{\mathcal{B}(i\gets j)}\wedge X_{\mathcal{B}(j\gets i)})=\mathrm{I}(X_{i}\wedge X_{j}),\quad\text{by Lemma 1}, \tag{21}\] so that, by (1) and (2), \(\mathrm{SI}(X_{\mathcal{M}})\leq\mathcal{I}\left(\pi((i,j))\right)=\mathrm{I}(X_{i}\wedge X_{j})\) for every \((i,j)\in\mathcal{E}\), which proves (20).

For the reverse inequality, it must be shown that every partition \(\pi\) of \(\mathcal{M}\) satisfies \(\mathcal{I}\left(\pi\right)\geq\min_{(i,j)\in\mathcal{E}}\mathrm{I}(X_{i}\wedge X_{j})\). This is done in two steps: Step 1 establishes the claim for partitions whose atoms are connected in \(\mathcal{G}\), and Step 2 reduces the general case to that of connected atoms.

**Step 1:** Let \(\pi=(\pi_{1},\ldots,\pi_{k})\), \(k\geq 2\), be a partition of \(\mathcal{M}\) with connected atoms. By Lemma 2, \(X_{\pi_{1}},\ldots,X_{\pi_{k}}\) form an MCT on the agglomerated tree \(\mathcal{G}^{\prime}=(\mathcal{M}^{\prime},\mathcal{E}^{\prime})\). Moreover, for each \((\pi_{u},\pi_{v})\in\mathcal{E}^{\prime}\) with connecting edge \((i,j)\in\mathcal{E}\), \(i\in\pi_{u}\subseteq\mathcal{B}(i\gets j)\) and \(j\in\pi_{v}\subseteq\mathcal{B}(j\gets i)\), so that by (21), \[\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{v}})=\mathrm{I}(X_{i}\wedge X_{j})\geq\min_{(i,j)\in\mathcal{E}}\mathrm{I}(X_{i}\wedge X_{j}). \tag{23}\]

Now, let \(\pi_{1},\ldots,\pi_{k}\) be an enumeration of the atoms, obtained from a breadth-first search [17, Ch. 22] run on the agglomerated tree with \(\pi_{1}\) as the root vertex. Then, \[\begin{split}\mathcal{I}\left(\pi\right)&=\frac{1}{k-1}D(P_{X_{\mathcal{M}}}\parallel\prod_{u=1}^{k}P_{X_{\pi_{u}}})\\ &=\frac{1}{k-1}\left[\sum_{u=1}^{k}\left(\mathrm{H}(X_{\pi_{u}})-\mathrm{H}(X_{\pi_{u}}\,|\,X_{\pi_{1}},\ldots,X_{\pi_{u-1}})\right)\right]\\ &=\frac{1}{k-1}\sum_{u=2}^{k}\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{1}},\ldots,X_{\pi_{u-1}})\\ &=\frac{1}{k-1}\sum_{u=2}^{k}\mathrm{I}(X_{\pi_{u}}\wedge X_{\mathrm{parent}(\pi_{u})})\\ &\geq\min_{(\pi_{u},\pi_{v})\in\mathcal{E}^{\prime}}\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{v}})\\ &\geq\min_{(i,j)\in\mathcal{E}}\mathrm{I}(X_{i}\wedge X_{j}).\end{split} \tag{24}\] By the breadth-first search algorithm [17, Ch. 22], \(\pi_{1},\ldots,\pi_{u-1}\) are either at the same depth as \(\pi_{u}\) or are above it (and include \(\mathrm{parent}(\pi_{u})\)). This, combined with Theorem 3, gives the fourth equality in (24). The last inequality is by (23).

**Step 2:** Consider first the case \(k=2\). Take any \(2\)-partition \(\pi=(\pi_{1},\pi_{2})\) with possibly disconnected atoms, where \(\pi_{1}=\cup_{\rho=1}^{r}C_{\rho}\) and \(\pi_{2}=\cup_{\sigma=1}^{s}D_{\sigma}\) are unions of disjoint components.
Since \(\pi_{1}\) is connected to \(\pi_{2}\), some \(C_{\rho}\) and \(D_{\sigma}\) must be connected by some edge \((i,j)\) in \(\mathcal{E}\), so that \[\mathcal{I}\left(\pi\right)=\mathrm{I}(X_{\pi_{1}}\wedge X_{\pi_{2}})\geq\mathrm{I}(X_{C_{\rho}}\wedge X_{D_{\sigma}})\geq\mathrm{I}(X_{i}\wedge X_{j})\geq\mathrm{I}(X_{\bar{i}}\wedge X_{\bar{j}}),\] where the final lower bound, with \(\bar{i},\bar{j}\) as in Remark 6, is achieved by the \(2\)-partition with connected atoms \((\mathcal{B}(\bar{i}\leftarrow\bar{j}),\mathcal{B}(\bar{j}\leftarrow\bar{i}))\) as in (21).

Next, consider a \(k\)-partition \(\pi=(\pi_{1},\ldots,\pi_{k})\), \(k\geq 3\), and suppose that the atom \(\pi_{1}\) is not connected. Without loss of generality, assume \(\pi_{1}\) to be the (disjoint) union of maximally connected subsets \(A_{1},\ldots,A_{t}\), \(t\geq 2\), of \(\pi_{1}\) (which, at an extreme, can be the individual vertices constituting \(\pi_{1}\)). Take any one of them, say \(A_{\bar{l}}\), and consider all its boundary edges, namely those edges for which one vertex is in \(A_{\bar{l}}\) and the other outside it. As \(A_{\bar{l}}\) is maximally connected in \(\pi_{1}\), for each boundary edge the outside vertex cannot belong to \(\pi_{1}\) and so must lie in \(\mathcal{M}\setminus\pi_{1}\). Also, every such outside vertex associated with \(A_{\bar{l}}\) must be the root of a subtree and, owing to the connectedness of each \(A_{l}\), every \(A_{l}\), \(l\neq\bar{l}\), must be a subset of one such subtree linked to \(A_{\bar{l}}\). Furthermore, since \(A_{1},\ldots,A_{t}\) are connected, and only through the subtrees rooted in \(\mathcal{M}\setminus\pi_{1}\), there must exist at least one \(A_{l}\) such that all \(A_{l^{\prime}}\)s, \(l^{\prime}\neq l\), are subsets of one subtree linked to \(A_{l}\). In other words, denoting this \(A_{l}\) as \(A\), we note that \(A\) has the property that \[\pi_{1}\setminus A=\bigcup_{\begin{subarray}{c}l\in\{1,\ldots,t\}:\\ A_{l}\neq A\end{subarray}}A_{l}\] is contained entirely in a subtree rooted at an outside vertex associated with \(A\) and lying in \(\mathcal{M}\setminus\pi_{1}\). Let this vertex be \(j\in\mathcal{M}\setminus\pi_{1}\), and let \(\pi_{u}\in\pi\) be the atom that contains \(j\). Since vertex \(j\) separates \(A\) from \(\pi_{1}\setminus A\), so does \(\pi_{u}\).
By Theorem 3, it follows that \(X_{A}\multimap X_{\pi_{u}}\multimap X_{\pi_{1}\setminus A}\), whence \[\mathrm{I}(X_{\pi_{1}\setminus A}\wedge X_{A})\leq\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{1}}). \tag{25}\] Consider the \((k-1)\)-partition \(\pi^{\prime}\) obtained from \(\pi\) by merging the atoms \(\pi_{1}\) and \(\pi_{u}\), \[\pi^{\prime}=\left(\pi_{1}\cup\pi_{u},\,(\pi_{v})_{v\neq 1,u}\right), \tag{26}\] and the \((k+1)\)-partition \(\pi^{\prime\prime}\) obtained from \(\pi\) by splitting the atom \(\pi_{1}\) into \(\pi_{1}\setminus A\) and \(A\), \[\pi^{\prime\prime}=\left(\pi_{1}\setminus A,\,A,\,(\pi_{v})_{v\neq 1}\right). \tag{27}\] Then, \[\mathcal{I}\left(\pi\right)=\frac{1}{k-1}\left[\mathrm{H}(X_{\pi_{1}})+\mathrm{H}(X_{\pi_{u}})+\sum_{v\neq 1,v\neq u}\mathrm{H}(X_{\pi_{v}})-\mathrm{H}(X_{\mathcal{M}})\right],\] \[\mathcal{I}\left(\pi^{\prime}\right)=\frac{1}{k-2}\left[\mathrm{H}(X_{\pi_{1}\cup\pi_{u}})+\sum_{v\neq 1,v\neq u}\mathrm{H}(X_{\pi_{v}})-\mathrm{H}(X_{\mathcal{M}})\right],\] \[\mathcal{I}\left(\pi^{\prime\prime}\right)=\frac{1}{k}\left[\mathrm{H}(X_{\pi_{1}\setminus A})+\mathrm{H}(X_{A})+\mathrm{H}(X_{\pi_{u}})+\sum_{v\neq 1,v\neq u}\mathrm{H}(X_{\pi_{v}})-\mathrm{H}(X_{\mathcal{M}})\right].\] We claim that \[\mathcal{I}\left(\pi\right)\geq\min\left\{\mathcal{I}\left(\pi^{\prime}\right),\mathcal{I}\left(\pi^{\prime\prime}\right)\right\}. \tag{28}\] Referring to (26) and (27), we can infer from the claim (28) that for a given \(k\)-partition \(\pi\) with a disconnected atom \(\pi_{1}\) as above, merging the disconnected atom with another atom (as in (26)) or breaking it to create a connected atom (as in (27)) leads to a partition, \(\pi^{\prime}\) or \(\pi^{\prime\prime}\), whose \(\mathcal{I}\)-value is no larger than that of \(\pi\). This argument is repeated until a final partition with connected atoms is reached that has the following form: considering the set of all maximally connected components of the atoms of \(\pi=(\pi_{1},\ldots,\pi_{k})\), the final partition will consist of _connected_ unions of such components. (A connected \(\pi_{i}\) already constitutes such a component.)

It remains to show (28). Suppose (28) were not true, i.e., \[\mathcal{I}\left(\pi\right)<\min\left\{\mathcal{I}\left(\pi^{\prime}\right),\mathcal{I}\left(\pi^{\prime\prime}\right)\right\}.\] Then, \[\mathcal{I}\left(\pi\right)<\mathcal{I}\left(\pi^{\prime}\right)\Leftrightarrow(k-2)\,\mathcal{I}\left(\pi\right)<(k-2)\,\mathcal{I}\left(\pi^{\prime}\right)\Leftrightarrow\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{1}})<\mathcal{I}\left(\pi\right), \tag{29}\] and similarly, \[\mathcal{I}\left(\pi\right)<\mathcal{I}\left(\pi^{\prime\prime}\right)\Leftrightarrow k\,\mathcal{I}\left(\pi\right)<k\,\mathcal{I}\left(\pi^{\prime\prime}\right)\Leftrightarrow\mathcal{I}\left(\pi\right)<\mathrm{I}(X_{\pi_{1}\setminus A}\wedge X_{A}), \tag{30}\] where the second equivalences in (29) and (30) are obtained by straightforward manipulation. By (29) and (30), \[\mathrm{I}(X_{\pi_{u}}\wedge X_{\pi_{1}})<\mathrm{I}(X_{\pi_{1}\setminus A}\wedge X_{A}),\] which contradicts (25). Hence, (28) is true.

## V Estimating \(\mathrm{SI}(X_{\mathcal{M}})\) for an MCT

We consider the estimation of \(\mathrm{SI}(X_{\mathcal{M}})\) when the pmf \(P_{X_{\mathcal{M}}}\) of \(X_{\mathcal{M}}=(X_{1},\ldots,X_{m})\) is unknown to an "agent" who, however, is assumed to know the tree \(\mathcal{G}=(\mathcal{M},\mathcal{E})\). We assume further in this section that \(\mathcal{X}_{1}=\cdots=\mathcal{X}_{m}=\mathcal{X}\), say, and also that the minimizing edge \((\bar{i},\bar{j})\) on the right side of (19) is unique.
By Theorem 6, \(\mathrm{SI}(X_{\mathcal{M}})\) equals the minimum mutual information across an edge in the tree \(\mathcal{G}\). Treating the determination of this edge as a correlated bandits problem of best arm-pair identification, we provide an algorithm to home in on it, and analyze its error performance and associated sample complexity. _The estimate of shared information is taken to be the mutual information across the best arm-pair thus identified._ Our estimation procedure is motivated by the form of \(\mathrm{SI}(X_{\mathcal{M}})\) in Theorem 6.

### _Preliminaries_

As stated, estimation of \(\mathrm{SI}(X_{\mathcal{M}})\) for an MCT will entail estimating \(\mathrm{I}(X_{i}\wedge X_{j})\), \((i,j)\in\mathcal{E}\). We first present pertinent tools that will be used to this end. Let \((X_{t},Y_{t})_{t=1}^{n}\) be \(n\geq 1\) independent and identically distributed (i.i.d.) repetitions of the rv \((X,Y)\) with (unknown) pmf \(P_{XY}\) of _assumed full support_ on \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are finite sets. For \((\mathbf{x},\mathbf{y})\) in \(\mathcal{X}^{n}\times\mathcal{Y}^{n}\), let \(Q_{\mathbf{xy}}^{(n)}\) represent its joint type on \(\mathcal{X}\times\mathcal{Y}\) (cf. [19, Ch. 2, 3]). Also, let \(Q_{\mathbf{x}}^{(n)}\) (resp. \(Q_{\mathbf{y}}^{(n)}\)) represent the (marginal) type of \(\mathbf{x}\) (resp. \(\mathbf{y}\)). A well-known estimator for \(\mathrm{I}(X\wedge Y)=\mathrm{I}_{P_{XY}}(X\wedge Y)\) on the basis of \((\mathbf{x},\mathbf{y})\) in \(\mathcal{X}^{n}\times\mathcal{Y}^{n}\) is the _empirical mutual information_ (EMI) estimator [19, Ch. 3], [24] \(\mathrm{I}_{\mathsf{EMI}}^{(n)}\) defined by \[\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{x}\wedge\mathbf{y})=\mathrm{H}(Q_{\mathbf{x}}^{(n)})+\mathrm{H}(Q_{\mathbf{y}}^{(n)})-\mathrm{H}(Q_{\mathbf{xy}}^{(n)}). \tag{31}\] Throughout this section, \((\mathbf{X},\mathbf{Y})\) will represent \(n\) i.i.d. repetitions of the rv \((X,Y)\).

**Lemma 7** (Bias of EMI estimator).: _The bias_ \[\mathrm{Bias}(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y}))\triangleq\mathbb{E}_{P_{XY}}\left[\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right]-\mathrm{I}(X\wedge Y)\] _satisfies_ \[-\log\left(1+\frac{\left|\mathcal{X}\right|-1}{n}\right)\left(1+\frac{\left|\mathcal{Y}\right|-1}{n}\right)\leq\mathrm{Bias}(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y}))\leq\log\left(1+\frac{\left|\mathcal{X}\right|\left|\mathcal{Y}\right|-1}{n}\right).\]

Proof.: The proof follows immediately from [36, Proposition 1].
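The estimator (31) is straightforward to implement. The following minimal Python sketch (an illustration with an assumed toy source, not code from this paper) computes \(\mathrm{I}_{\mathsf{EMI}}^{(n)}\) from the joint and marginal types of a paired sample.

```python
import numpy as np

def entropy_bits(q):
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

def emi(x, y, kx, ky):
    """Empirical mutual information (31) in bits, from n paired symbols."""
    joint = np.zeros((kx, ky))
    np.add.at(joint, (x, y), 1.0)      # joint type Q_xy
    joint /= len(x)
    return (entropy_bits(joint.sum(axis=1)) + entropy_bits(joint.sum(axis=0))
            - entropy_bits(joint.ravel()))

# Toy check (assumed source): Y is X through a BSC with crossover 0.1,
# so I(X; Y) = 1 - h(0.1), roughly 0.531 bits; the estimate should be close.
rng = np.random.default_rng(1)
n, p = 5000, 0.1
x = rng.integers(0, 2, size=n)
y = (x ^ (rng.random(n) < p)).astype(int)
print(emi(x, y, 2, 2))
```

For this sample size, the bias guaranteed by Lemma 7 is on the order of \(\log(1+3/n)\), i.e., negligible compared with the estimated value.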
A concentration bound for the estimator \(\mathrm{I}_{\mathsf{EMI}}^{(n)}\) in (31), using techniques from [2], is given by

**Lemma 8**.: _Given \(\epsilon>0\) and for every \(n\geq 1\),_ \[P_{XY}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})-\mathbb{E}_{P_{XY}}\left[\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right]\geq\epsilon\right)\leq\exp\left(-\frac{2n\epsilon^{2}}{36\log^{2}n}\right).\] _The same bound applies upon replacing \(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\) by \(-\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\) above._

Proof.: The empirical mutual information \(\mathrm{I}_{\mathsf{EMI}}^{(n)}:\mathcal{X}^{n}\times\mathcal{Y}^{n}\rightarrow\mathbb{R}^{+}\cup\{0\}\) satisfies the bounded differences property, namely \[\max_{\begin{subarray}{c}(\mathbf{x},\mathbf{y})\in\mathcal{X}^{n}\times\mathcal{Y}^{n}\\ (x_{i}^{\prime},y_{i}^{\prime})\in\mathcal{X}\times\mathcal{Y}\end{subarray}}\left|\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{x}\wedge\mathbf{y})-\mathrm{I}_{\mathsf{EMI}}^{(n)}((x_{1}^{i-1},x_{i}^{\prime},x_{i+1}^{n})\wedge(y_{1}^{i-1},y_{i}^{\prime},y_{i+1}^{n}))\right|\leq\frac{6\log n}{n} \tag{32}\] for \(1\leq i\leq n\), where for \(l<k\), \(x_{l}^{k}=(x_{l},x_{l+1},\ldots,x_{k})\). To see this, we note that changing \[(\mathbf{x},\mathbf{y})=((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n}))\rightarrow((x_{1}^{i-1},x_{i}^{\prime},x_{i+1}^{n}),(y_{1}^{i-1},y_{i}^{\prime},y_{i+1}^{n}))\] amounts to changing at most two components in the joint type \(Q_{\mathbf{xy}}^{(n)}\) and marginal types \(Q_{\mathbf{x}}^{(n)}\) and \(Q_{\mathbf{y}}^{(n)}\); in each of these three cases, the probability of one symbol or one pair of symbols decreases by \(1/n\) and that of another increases by \(1/n\). The difference between the corresponding empirical entropies is given in each case by the sum of two terms. For instance, one such term for the joint empirical entropy is given by \[\left|Q_{\mathbf{xy}}^{(n)}(x_{i},y_{i})\log Q_{\mathbf{xy}}^{(n)}(x_{i},y_{i})-\left(Q_{\mathbf{xy}}^{(n)}(x_{i},y_{i})-\frac{1}{n}\right)\log\left(Q_{\mathbf{xy}}^{(n)}(x_{i},y_{i})-\frac{1}{n}\right)\right|.\] Each of these terms is \(\leq\log n/n\), using the inequality [2] \[\left|\frac{j+1}{n}\log\frac{j+1}{n}-\frac{j}{n}\log\frac{j}{n}\right|\leq\frac{\log n}{n},\qquad 0\leq j<n.\] The bound in (32) is obtained upon applying the triangle inequality twice in each of the three mentioned cases. The claim of the lemma then follows by a standard application of McDiarmid's Bounded Differences Inequality [42, Theorem 2.9.1].

Since we seek to identify the edge with the smallest mutual information across it, we next present a technical lemma that bounds above the probability that the estimates of the mutual information between two pairs of rvs are in the wrong order. Our proof uses Lemma 8. Let \((X,Y)\) and \((X^{\prime},Y^{\prime})\) be two pairs of rvs with pmfs \(P_{XY}\) and \(P_{X^{\prime}Y^{\prime}}\), respectively, on the (common) alphabet \(\mathcal{X}\times\mathcal{X}\), such that \(\mathrm{I}(X\wedge Y)<\mathrm{I}(X^{\prime}\wedge Y^{\prime})\). Let \[\Delta=\mathrm{I}(X^{\prime}\wedge Y^{\prime})-\mathrm{I}(X\wedge Y)>0.
\tag{33}\] By Lemma 7, \(\mathrm{I}_{\mathsf{EMI}}^{(n)}\) is asymptotically unbiased and, in particular, we can make \(\mathrm{Bias}(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})),\mathrm{Bias}(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime}))<\Delta/2\) by choosing \(n\) large enough, for instance, \[n>\max\left\{\frac{\left|\mathcal{X}\right|^{2}-1}{2^{\Delta/2}-1},\frac{\left|\mathcal{X}\right|-1}{2^{\Delta/4}-1}\right\}. \tag{34}\] The upper bound on the probability of ordering error depends on the bias of \(\mathrm{I}_{\mathsf{EMI}}^{(n)}\) and decreases with decreasing bias.

**Lemma 9**.: _With \((X,Y)\), \((X^{\prime},Y^{\prime})\) and \(n\) as in (34),_ \[P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\geq\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})\right)\leq 2\max\left\{\exp\left(-\frac{2n\left(\Delta/2-\mathrm{Bias}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right)\right)^{2}}{36\log^{2}n}\right),\ \exp\left(-\frac{2n\left(\Delta/2-\mathrm{Bias}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})\right)\right)^{2}}{36\log^{2}n}\right)\right\}.\]

Proof.: Recalling (33), we have \[\begin{split}P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\geq\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})\right)&=P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})-\mathrm{I}(X\wedge Y)-\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})+\mathrm{I}(X^{\prime}\wedge Y^{\prime})\geq\Delta\right)\\ &\leq P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})-\mathrm{I}(X\wedge Y)\geq\Delta/2\right)+P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})-\mathrm{I}(X^{\prime}\wedge Y^{\prime})\leq-\Delta/2\right).\end{split} \tag{35}\] Using Lemma 8, and in view of (33), (34), \[P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})-\mathrm{I}(X\wedge Y)\geq\Delta/2\right)=P\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})-\mathbb{E}\left[\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right]\geq\Delta/2-\mathrm{Bias}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right)\right)\leq\exp\left(-\frac{2n\left(\Delta/2-\mathrm{Bias}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}\wedge\mathbf{Y})\right)\right)^{2}}{36\log^{2}n}\right), \tag{36}\] and similarly, \[P\left(\mathrm{I}(X^{\prime}\wedge Y^{\prime})-\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})\geq\Delta/2\right)\leq\exp\left(-\frac{2n\left(\Delta/2-\mathrm{Bias}\left(\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}^{\prime}\wedge\mathbf{Y}^{\prime})\right)\right)^{2}}{36\log^{2}n}\right). \tag{37}\] The claimed bound follows by using (36) and (37) in (35).

### _Bandit algorithm for estimating \(\mathrm{SI}(X_{\mathcal{M}})\)_

The following method identifies the best arm pair corresponding to the edge of the MCT across which mutual information is minimal. In the parlance of banditry, the environment has \(m\) arms, one arm corresponding to each vertex in \(\mathcal{G}=(\mathcal{M},\mathcal{E})\). The agent can pull, in any step, two arms that are connected by an edge in \(\mathcal{E}\).
Each action of the agent is specified by the pair \((i,j)\), \(1\leq i<j\leq m\), \((i,j)\in\mathcal{E}\), with the associated reward being the realizations \((X_{i}=x_{i},X_{j}=x_{j})\). The agent is allowed to pull a total of \(N\) pairs of arms, say, using _uniform sampling_, where \(N\) will be specified below.

**Definition 6** (Uniform sampling).: _In uniform sampling, pairs of rvs corresponding to edges of the tree are sampled equally often. Specifically, each pair of rvs \((X_{i},X_{j})\), \((i,j)\in\mathcal{E}\), is sampled \(n\) times over nonoverlapping time instants. Hence, an agent pulls a total of \(N\) pairs of arms, where \(N=|\mathcal{E}|\,n\)._

By means of these actions, the agent seeks to form estimates of all two-dimensional marginal pmfs \(P_{X_{i}X_{j}}\) and of the corresponding \(\mathrm{I}(X_{i}\wedge X_{j})\) for \((i,j)\) as above, and subsequently identify \((\bar{i},\bar{j})\in\mathcal{E}\) (see Remark 6). Let \(X_{\mathcal{M}}^{N}\) denote \(N\) i.i.d. repetitions of \(X_{\mathcal{M}}=(X_{1},\ldots,X_{m})\). Specifically, the agent must produce an estimate \(\hat{e}_{N}=\hat{e}_{N}(X_{\mathcal{M}}^{N})\in\mathcal{E}\) of \((\bar{i},\bar{j})\in\mathcal{E}\) at the conclusion of \(N\) steps so as to minimize the error probability \(P(\hat{e}_{N}\neq(\bar{i},\bar{j}))\). The following notation is used. Write \[\mathrm{I}(i\wedge j)=\mathrm{I}(X_{i}\wedge X_{j}),\quad(i,j)\in\mathcal{E},\] for simplicity, and let \[\mathrm{I}_{\mathsf{EMI}}^{(n)}(i\wedge j)\triangleq\mathrm{I}_{\mathsf{EMI}}^{(n)}(\mathbf{X}_{i}\wedge\mathbf{X}_{j})\] be the estimate of \(\mathrm{I}(i\wedge j)\). At the end of \(N=|\mathcal{E}|\,n\) steps, set \[\hat{e}_{N}(X_{\mathcal{M}}^{N})=\arg\min_{(i,j)\in\mathcal{E}}\mathrm{I}_{\mathsf{EMI}}^{(n)}(i\wedge j)=(i^{*},j^{*}),\ \text{say}, \tag{38}\] with ties being resolved arbitrarily. Correspondingly, the estimate of shared information is \[\mathrm{SI}_{\mathsf{EMI}}^{(N)}(X_{\mathcal{M}}^{N})\triangleq\mathrm{I}_{\mathsf{EMI}}^{(n)}(i^{*}\wedge j^{*}). \tag{39}\] Denote \[\Delta_{ij}=\mathrm{I}(X_{i}\wedge X_{j})-\mathrm{I}(X_{\bar{i}}\wedge X_{\bar{j}}),\qquad(i,j)\in\mathcal{E},\] and \[\Delta_{1}=\min_{\begin{subarray}{c}(i,j)\in\mathcal{E}\\ (i,j)\neq(\bar{i},\bar{j})\end{subarray}}\mathrm{I}(X_{i}\wedge X_{j})-\mathrm{I}(X_{\bar{i}}\wedge X_{\bar{j}}),\] where the latter is the difference between the second-lowest and lowest mutual information across edges in \(\mathcal{E}\). Note that \(\Delta_{1}>0\) by the assumed uniqueness of the minimizing edge \((\bar{i},\bar{j})\). The shared information estimate \(\mathrm{SI}^{(N)}_{\mathsf{EMI}}(X^{N}_{\mathcal{M}})\) converges almost surely and in the mean. This is shown in Theorem 11 below. To that end, we first provide an upper bound for the probability of arm misidentification with uniform sampling.

**Proposition 10**.: _For uniform sampling, the probability of error in identifying the optimal pair of arms is_ \[P\left(\hat{e}_{N}(X^{N}_{\mathcal{M}})\neq(\bar{i},\bar{j})\right)\leq 2\left|\mathcal{E}\right|\exp\left(\frac{-(N/\left|\mathcal{E}\right|)\Delta_{1}^{2}}{648\log^{2}(N/\left|\mathcal{E}\right|)}\right)\] _for all_ \[N>\left|\mathcal{E}\right|\max\left\{\frac{\left|\mathcal{X}\right|^{2}-1}{2^{\Delta_{1}/3}-1},\frac{\left|\mathcal{X}\right|-1}{2^{\Delta_{1}/6}-1}\right\}. \tag{40}\]

Proof.: With \(N=\left|\mathcal{E}\right|n\), let \(x^{N}_{\mathcal{M}}\) represent a realization of \(X^{N}_{\mathcal{M}}\).
For each \((i,j)\in\mathcal{E}\), the agent computes the empirical mutual information estimate \(\mathrm{I}^{(n)}_{\mathsf{EMI}}(\mathbf{x}_{i}\wedge\mathbf{x}_{j})\) of \(\mathrm{I}(X_{i}\wedge X_{j})\). Note that the sampling of arm pairs occurs over nonoverlapping time instants. By Lemma 7 and (34), \[\left|\mathrm{Bias}(\mathrm{I}^{(n)}_{\mathsf{EMI}}(i\wedge j))\right|\leq\frac{\Delta_{1}}{3}\leq\frac{\Delta_{ij}}{3}<\frac{\Delta_{ij}}{2}\qquad\text{for }(i,j)\neq(\bar{i},\bar{j}),\] for all \(N\) as in (40). Then, we have \[\begin{split}P\left(\hat{e}_{N}(X^{N}_{\mathcal{M}})\neq(\bar{i},\bar{j})\right)&=P\left(\mathrm{I}^{(n)}_{\mathsf{EMI}}(\bar{i}\wedge\bar{j})\geq\mathrm{I}^{(n)}_{\mathsf{EMI}}(i\wedge j)\text{ for some }(i,j)\neq(\bar{i},\bar{j})\right)\\ &\leq\sum_{(i,j)\neq(\bar{i},\bar{j})}P\left(\mathrm{I}^{(n)}_{\mathsf{EMI}}(\bar{i}\wedge\bar{j})\geq\mathrm{I}^{(n)}_{\mathsf{EMI}}(i\wedge j)\right)\\ &\leq\sum_{(i,j)\neq(\bar{i},\bar{j})}2\exp\left(\frac{-n\Delta_{ij}^{2}}{648\log^{2}n}\right),\qquad\text{by Lemma 9},\\ &\leq 2\left|\mathcal{E}\right|\exp\left(\frac{-n\Delta_{1}^{2}}{648\log^{2}n}\right),\end{split}\] since \(\Delta_{ij}\geq\Delta_{1}\) for every \((i,j)\neq(\bar{i},\bar{j})\); with \(n=N/\left|\mathcal{E}\right|\), this is the claimed bound.

**Corollary 12**.: _For \(0<\epsilon<1/2\) and \(\delta<1/e\), we have_ \[P\left(\left|\mathrm{SI}_{\mathsf{EMI}}^{(N)}(X_{\mathcal{M}}^{N})-\mathrm{SI}(X_{\mathcal{M}})\right|>\epsilon\right)\leq\delta\] _for sample complexity \(N=N(\epsilon,\delta)\) that obeys_2

Footnote 2: The approximate form of (42) considers only the significant terms depending on \(|\mathcal{X}|\), \(|\mathcal{E}|\), \(\Delta_{1}\), \(\epsilon\) and \(\delta\).

\[\begin{split}N\gtrsim|\mathcal{E}|&\left[\frac{|\mathcal{X}|}{\epsilon}+\frac{1}{\epsilon^{2}}\ln\left(\frac{1}{\delta}\right)\log^{2}\left(\frac{1}{\epsilon^{2}}\ln\left(\frac{1}{\delta}\right)\right)\right.\\ &\left.+\frac{1}{\Delta_{1}^{2}}\ln\left(\frac{|\mathcal{E}|}{\delta}\right)\log\left(\frac{1}{\Delta_{1}^{2}}\ln\left(\frac{|\mathcal{E}|}{\delta}\right)\right)+\frac{1}{\Delta_{1}^{2}}\ln\left(\frac{|\mathcal{E}|}{\delta}\right)\log^{2}\left(\frac{1}{\Delta_{1}^{2}}\ln\left(\frac{|\mathcal{E}|}{\delta}\right)\right)\right].\end{split} \tag{42}\]

The proof of the corollary relies on the following technical lemma, which is similar in spirit to [38, Lemma A.1].

**Lemma 13**.: _It holds that_ \[x\geq c\ln^{2}x,\quad c\geq 1,\ x\geq\max\left\{1,4c\ln 2c+16c\ln^{2}c\right\}.\]

Proof.: See Appendix C.

Proof of Corollary 12.: From (41), \[P\left(\left|\mathrm{SI}_{\mathsf{EMI}}^{(N)}(X_{\mathcal{M}}^{N})-\mathrm{SI}(X_{\mathcal{M}})\right|>\epsilon\right)\leq\delta,\] for \(n=N/\left|\mathcal{E}\right|\) satisfying \[n\geq\frac{|\mathcal{X}|}{\epsilon},\qquad\frac{n}{\log^{2}n}\geq\frac{1}{\epsilon^{2}}\ln\left(\frac{1}{\delta}\right),\qquad\frac{n}{\log^{2}n}\geq\frac{1}{\Delta_{1}^{2}}\ln\left(\frac{|\mathcal{E}|}{\delta}\right) \tag{43}\] up to numerical constant factors. Each of the inequalities in (43) yields one or more lower bounds for \(n=N/\left|\mathcal{E}\right|\); the first does so directly, and the latter two upon writing them as \(n\geq c\log^{2}n\) (where \(c\) does not depend on \(n\)) and using Lemma 13. The conditions on \(\epsilon\) and \(\delta\) in Corollary 12 allow us to drop one of the bounds since it is always weaker than another. Combining all the lower bounds obtained from (43) and using \(N=\left|\mathcal{E}\right|n\) finally results in (42).
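The uniform-sampling procedure of Definition 6 together with (38) and (39) admits a compact implementation. The sketch below (illustrative choices of tree, crossover probabilities, and sample size; the path-MCT source is in the spirit of Example 3, not taken from this paper) samples each edge over nonoverlapping blocks, computes the empirical mutual information per edge, and returns the minimizing edge and the resulting estimate of shared information.

```python
import numpy as np

# Minimal sketch: uniform-sampling estimator (38)-(39) on a path MCT,
# where X_{i+1} is X_i passed through a BSC(p_i). Parameters are assumed.
rng = np.random.default_rng(7)
ps = [0.05, 0.20, 0.12]   # edge BSC crossover probabilities; edge 1 is weakest
n = 4000                  # samples per edge, so N = |E| * n in total

def h2(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q*np.log2(q) - (1-q)*np.log2(1-q)

def emi(x, y):
    """Empirical mutual information (31) for binary symbols, in bits."""
    joint = np.zeros((2, 2))
    np.add.at(joint, (x, y), 1.0)
    joint /= len(x)
    def H(q):
        q = q[q > 0]
        return -(q * np.log2(q)).sum()
    return H(joint.sum(axis=1)) + H(joint.sum(axis=0)) - H(joint.ravel())

def sample_chain(size):
    xs = [rng.integers(0, 2, size=size)]
    for p in ps:
        xs.append(xs[-1] ^ (rng.random(size) < p).astype(int))
    return xs

est = []
for e in range(len(ps)):           # fresh, nonoverlapping samples per edge
    xs = sample_chain(n)
    est.append(emi(xs[e], xs[e + 1]))
e_star = int(np.argmin(est))       # Eq. (38)
print(f"edge {e_star}: SI_EMI = {est[e_star]:.4f}, "
      f"true SI = {1 - h2(max(ps)):.4f}")   # Eq. (39) vs. 1 - h(p*)
```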
## VI Closing Remarks

While the Hammersley-Clifford Theorem [29, Theorem 3.9] can be used to show the equivalence in Theorem 3, between the MCT definition and the global Markov property, when the joint pmf of \(X_{\mathcal{M}}\) is strictly positive, we show for an MCT that the equivalence holds even for pmfs that are _not_ strictly positive. The tree structure plays a material role in our proof. In particular, agglomerations of connected subsets of an MCT form an MCT as in Definition 3 and Lemma 2. The MCT property only involves verifying the Markov condition (10) or (11) for each edge in the tree, and therefore is easier to check than the global Markov property (16). Joint pmfs that are not strictly positive arise in applications such as function computation, when a subset of rvs is determined by another subset.

Theorem 6 shows that for an MCT, a simple 2-partition achieves the minimum in Definition 1. While the result in Theorem 6 was known [20], our proof uses new techniques and further implies that for _any_ partition \(\pi\) with disconnected atoms, there is a partition with connected atoms that has \(\mathcal{I}\)-value (see (2)) less than or equal to that of \(\pi\). This structural property is stronger than that needed for proving Theorem 6. Our proof technique for Theorem 6 can serve as a stepping stone for analyzing SI for more complicated graphical models in which the underlying graph is not a tree; see [5]. In particular, the tree structure was used in the proof of Theorem 6 only in Step 1 and (25) in Step 2.

In Section V, we have presented an algorithm for best-arm identification with biased (and asymptotically unbiased) estimates. We resorted to a simple uniform sampling strategy and the empirical mutual information estimator for the sake of simplicity. Using bias-corrected estimators like the Miller-Madow or jackknifed estimator for mutual information [36] would improve the bias performance of the algorithm. However, it hurts the constant in the bounded differences inequality that appears in Lemma 8. Polynomial approximation-based estimators [26] could also improve sample complexity. Moreover, a successive rejects algorithm [3, 7] could yield a better sample complexity than uniform sampling for a fixed estimator, as hinted by [3, 7] in different settings. The precise tradeoff afforded by the choice of a better estimator remains to be understood, as does the sample complexity of more refined algorithms for best arm identification in our setting. Both demand a converse result that needs to take into account estimator bias; this remains under study in our current work. A converse would also settle the question of optimality of the \(\exp(-O(N/\log^{2}N))\) decay in the probability of error in Proposition 10.
## Appendix A: Proof of Lemma 1

We have \[\begin{split}&\mathrm{I}(X_{\mathcal{B}(i\gets j)}\wedge X_{\mathcal{B}(j\gets i)})\\ &=\mathrm{I}(X_{i}\wedge X_{j})+\mathrm{I}(X_{i}\wedge X_{\mathcal{B}(j\gets i)\setminus\{j\}}\,|\,X_{j})+\mathrm{I}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\wedge X_{j}\,|\,X_{i})+\mathrm{I}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\wedge X_{\mathcal{B}(j\gets i)\setminus\{j\}}\,|\,X_{i},X_{j})\\ &=\mathrm{I}(X_{i}\wedge X_{j})+\mathrm{I}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\wedge X_{\mathcal{B}(j\gets i)\setminus\{j\}}\,|\,X_{i},X_{j})\\ &=\mathrm{I}(X_{i}\wedge X_{j})+\mathrm{H}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\,|\,X_{i})-\mathrm{H}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\,|\,X_{i},X_{\mathcal{B}(j\gets i)})\end{split} \tag{44}\] where the previous two equalities are by (11). The claim of Lemma 1 would follow from (44) upon showing that \[\mathrm{H}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\,|\,X_{i},X_{\mathcal{B}(j\gets i)})=\mathrm{H}(X_{\mathcal{B}(i\gets j)\setminus\{i\}}\,|\,X_{i}). \tag{45}\] Without loss of generality, set \(j\) to be the root of the tree; this defines a _directed_ tree whose leaves are from among the vertices (in \(\mathcal{M}\)) with no descendants. Denote the parent of \(i^{\prime}\) in the (directed) tree by \(p(i^{\prime})\). Note that \(p(i)=j\) in (45). We shall use induction on the _height_ of \(i^{\prime}\), i.e., the maximum distance of \(i^{\prime}\) from a leaf of the directed tree, to show that \[\mathrm{H}(X_{\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}}\,|\,X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})})=\mathrm{H}(X_{\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}}\,|\,X_{i^{\prime}}), \tag{46}\] which proves (45) upon setting \(i^{\prime}=i\) and \(p(i^{\prime})=p(i)=j\). First, assume that \(i^{\prime}\) is a leaf. Then \(\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}=\varnothing\) and (46) holds trivially. Next, assume the induction hypothesis that (46) is true for all vertices at height \(<h\), and consider a vertex \(i^{\prime}\) at height \(h\). Let \(i^{\prime}\) have children \(1,\ldots,t\); each of these vertices is the root of a subtree \(T_{\tau}=\mathcal{B}(\tau\gets i^{\prime})\), \(1\leq\tau\leq t\). See Figure 2. Further, each vertex \(\tau\), \(1\leq\tau\leq t\), has height \(<h\). Then in (46), \[\begin{split}&\mathrm{H}(X_{\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}}\,|\,X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})})\\ &=\mathrm{H}\left((X_{T_{\tau}\setminus\{\tau\}},X_{\tau})_{1\leq\tau\leq t}\,|\,X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)\\ &=\sum_{\tau=1}^{t}\left[\mathrm{H}\left(X_{\tau}\,|\,\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)+\mathrm{H}\left(X_{T_{\tau}\setminus\{\tau\}}\,|\,X_{\tau},\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)\right].\end{split} \tag{47}\]

Figure 2: Schematic for proof of (46).
In (47), for each \(\tau\), \(1\leq\tau\leq t\), the first term within \(\left[\cdot\right]\) is \[\mathrm{H}\left(X_{\tau}\,|\,\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)=\mathrm{H}(X_{\tau}\,|\,X_{i^{\prime}})=\mathrm{H}\left(X_{\tau}\,|\,\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}}\right) \tag{48}\] by (11), since \[\left(\bigcup_{\sigma=1}^{\tau-1}T_{\sigma}\right)\cup\mathcal{B}(p(i^{\prime})\gets i^{\prime})\subseteq\mathcal{B}(i^{\prime}\gets\tau)\] (see Figure 2). In the second term in \(\left[\cdot\right]\), we apply the induction hypothesis to vertex \(\tau\), which is at height at most \(h-1\). Note that \(p(\tau)=i^{\prime}\). Since \(T_{\tau}\setminus\{\tau\}=\mathcal{B}(\tau\gets p(\tau))\setminus\{\tau\}\) and the indices of \(\left(\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)\) lie in \(\mathcal{B}(p(\tau)\gets\tau)\), by the induction hypothesis at vertex \(\tau\), we get \[\mathrm{H}\left(X_{T_{\tau}\setminus\{\tau\}}\,|\,X_{\tau},\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})}\right)=\mathrm{H}\left(X_{T_{\tau}\setminus\{\tau\}}\,|\,X_{\tau}\right)=\mathrm{H}\left(X_{T_{\tau}\setminus\{\tau\}}\,|\,X_{\tau},\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}}\right) \tag{49}\] with the last equality being due to (11). Substituting (48), (49) in (47), we obtain \[\begin{split}\mathrm{H}(X_{\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}}\,|\,X_{i^{\prime}},X_{\mathcal{B}(p(i^{\prime})\gets i^{\prime})})&=\sum_{\tau=1}^{t}\left[\mathrm{H}\left(X_{\tau}\,|\,\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}}\right)+\mathrm{H}\left(X_{T_{\tau}\setminus\{\tau\}}\,|\,X_{\tau},\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}}\right)\right]\\ &=\sum_{\tau=1}^{t}\mathrm{H}\left(X_{T_{\tau}}\,|\,\left(X_{T_{\sigma}}\right)_{1\leq\sigma\leq\tau-1},X_{i^{\prime}}\right)\\ &=\mathrm{H}(X_{\mathcal{B}(i^{\prime}\gets p(i^{\prime}))\setminus\{i^{\prime}\}}\,|\,X_{i^{\prime}})\end{split}\] (see Figure 2), which is (46).

## Appendix B: Proof of Lemma 4 and Theorem 3

Proof of Lemma 4.: Considering first (17), suppose that vertex \(i\in\mathcal{M}\) has \(k\) neighbors, with \(\mathcal{N}(i)=\{i_{1},\ldots,i_{k}\}\), \(1\leq k\leq m-1\). Then \[\mathcal{M}\setminus\left(\{i\}\cup\mathcal{N}(i)\right)=\bigcup_{l=1}^{k}\mathcal{B}(i_{l}\gets i)\setminus\left\{i_{l}\right\}.\] The claim of the lemma is \[X_{i}\multimap\left(X_{i_{u}}\right)_{1\leq u\leq k}\multimap\left(X_{\mathcal{B}(i_{l}\gets i)\setminus\{i_{l}\}}\right)_{1\leq l\leq k}.
\tag{50}\] We have \[\mathrm{I}\left(X_{i}\wedge\left(X_{\mathcal{B}(i_{l}\gets i) \setminus\left\{i_{l}\right\}}\right)_{1\leq l\leq k}\,\left|\,\left(X_{i_{u}} \right)_{1\leq u\leq k}\right)\] \[=\sum_{l=1}^{k}\mathrm{I}\left(X_{i}\wedge X_{\mathcal{B}(i_{l} \gets i)\setminus\left\{i_{l}\right\}}\,\left|\,\left(X_{\mathcal{B}(i_{j }\gets i)\setminus\left\{i_{j}\right\}}\right)_{1\leq j\leq l-1},\left(X_{i _{u}}\right)_{1\leq u\leq k}\right)\] \[\leq\sum_{l=1}^{k}\mathrm{I}\left(\left[X_{i},\left(X_{\mathcal{B }(i_{j}\gets i)\setminus\left\{i_{j}\right\}}\right)_{1\leq j\leq l-1}, \left(X_{i_{u}}\right)_{1\leq u\neq l\leq k}\right]\wedge X_{\mathcal{B}(i_{l} \gets i)\setminus\left\{i_{l}\right\}}\,\middle|\,X_{i_{l}}\right). \tag{51}\] For each \(l\), \(1\leq l\leq k\), the rvs within \(\left[\cdot\right]\) above have indices that lie in \(\mathcal{B}(i\gets i_{l})\setminus\left\{i_{l}\right\}\). Hence, by Lemma 1 (specifically (13)), each term in the sum in (51) equals zero. This proves (50). See Figure 3. Turning to (18), we have \[\mathrm{I}\left(X_{A}\wedge X_{\mathcal{M}\setminus\left(A\cup \mathcal{N}(A)\right)}\,|\,X_{\mathcal{N}(A)}\right) =\mathrm{I}\left(\left(X_{i},i\in A\right)\wedge X_{\mathcal{M} \setminus\bigcup_{u\in A}\left(\{u\}\cup\mathcal{N}(u)\right)}\,\middle|\,X_{ \bigcup_{v\in A}\mathcal{N}(v)}\right)\] \[\leq\sum_{i\in A}\mathrm{I}\left(X_{i}\wedge X_{\bigcup_{j\in A \setminus\{i\}}\left(\{j\}\cup\mathcal{N}(j)\right)},X_{\mathcal{M}\setminus \bigcup_{u\in A}\left(\{u\}\cup\mathcal{N}(u)\right)}\,\middle|\,X_{\mathcal{N }(i)}\right)\] \[=\sum_{i\in A}\mathrm{I}\left(X_{i}\wedge X_{\left(\left(\bigcup_{j\in A \setminus\{i\}}\left(\{j\}\cup\mathcal{N}(j)\right)\right)\setminus\mathcal{N}(i)\right)\cup\left(\mathcal{M}\setminus\bigcup_{u\in A}\left(\{u\}\cup \mathcal{N}(u)\right)\right)}\,\middle|\,X_{\mathcal{N}(i)}\right)=0\] by (17) since for each \(i\in A\), \[\left(\left(\bigcup_{j\in A\setminus\{i\}}(\{j\}\cup\mathcal{N}(j))\right)\setminus \mathcal{N}(i)\right)\cup\left(\mathcal{M}\setminus\bigcup_{u\in A}\left(\{u\} \cup\mathcal{N}(u)\right)\right)\subseteq\mathcal{M}\setminus(\{i\}\cup \mathcal{N}(i)).\qed\] Proof of Theorem 3.: The converse claim is immediately true upon choosing: for every \((i,j)\in\mathcal{E}\), \(A=\mathcal{B}(i\gets j)\setminus\{i\}\), \(S=\{i\}\), \(B=\{j\}\). Turning to the first claim, let \[A=\bigsqcup_{\alpha=1}^{a}A_{\alpha},\quad B=\bigsqcup_{\beta=1}^{b}B_{\beta },\quad S=\bigsqcup_{\sigma=1}^{s}S_{\sigma}\] be representations in terms of maximally connected subsets of \(A\), \(B\) and \(S\), respectively. With \(N=\mathcal{M}\setminus(A\cup B\cup S)\), let \(N=\bigsqcup_{\nu=1}^{n}N_{\nu}\) be a decomposition into maximally connected subsets of \(N\). Denote \[\mathcal{A}=\left\{A_{\alpha},1\leq\alpha\leq a\right\},\quad\mathcal{B}=\left\{ B_{\beta},1\leq\beta\leq b\right\},\quad\mathcal{S}=\left\{S_{\sigma},1\leq \sigma\leq s\right\},\quad\mathcal{N}=\left\{N_{\nu},1\leq\nu\leq n\right\}.\] Referring to Definition 3 and recalling Lemma 2, the tree \(\mathcal{G}^{\prime}=(\mathcal{M}^{\prime},\mathcal{E}^{\prime})\) with vertex set \(\mathcal{M}^{\prime}=\mathcal{A}\cup\mathcal{B}\cup\mathcal{S}\cup\mathcal{N}\) and edge set in the manner of Definition 3 constitutes an agglomerated MCT. Next, we observe that since each \(N_{\nu}\in\mathcal{N}\), \(1\leq\nu\leq n\), is maximally connected in \(N\), the neighbors of \(N_{\nu}\) in \(\mathcal{G}^{\prime}\) cannot be in \(\mathcal{N}\). 
Therefore, neighbors of a given \(N_{\nu}\) in \(\mathcal{G}^{\prime}\) that are not in \(\mathcal{S}\) must be in \(\mathcal{A}\) or \(\mathcal{B}\). However, \(N_{\nu}\) cannot have a non-\(\mathcal{S}\) neighbor in \(\mathcal{A}\) and also one in \(\mathcal{B}\), for then \(A\) and \(B\) would not be separated by \(S\) in \(\mathcal{G}\). Accordingly, for _each_ \(N_{\nu}\) in \(\mathcal{N}\), if its non-\(\mathcal{S}\) neighbors in \(\mathcal{G}^{\prime}\) are only in \(\mathcal{A}\), add \(N_{\nu}\) to \(\mathcal{A}\); let \(N^{\prime}\) be the union of all such \(N_{\nu}\)s. Consider \(A^{\prime}=A\cup N^{\prime}\) and write \(A^{\prime}=\bigsqcup_{\alpha=1}^{a^{\prime}}A^{\prime}_{\alpha}\), where the \(A^{\prime}_{\alpha}\)s are maximally connected subsets of \(A^{\prime}\). Let \(\mathcal{A}^{\prime}=\{A^{\prime}_{\alpha},1\leq\alpha\leq a^{\prime}\}\). Now note that \(\mathcal{A}^{\prime}\) and \(\mathcal{B}\) are separated in \(\mathcal{G}^{\prime}\) by \(\mathcal{S}\). Thus, to establish (16), it suffices to show the (stronger) assertion that \[X_{\mathcal{A}^{\prime}}\ -\!\!\circ\!\!-\ X_{\mathcal{S}}\ -\!\!\circ\!\!-\ X_{ \mathcal{B}} \tag{52}\] is a Markov chain. By the description of \(\mathcal{A}^{\prime}\), each of its components (maximally connected subsets of \(A^{\prime}\)) has a neighborhood in \(\mathcal{G}^{\prime}\) that is contained _fully_ in \(\mathcal{S}\). Let \(\tilde{\mathcal{S}}\subseteq\mathcal{S}\) denote the union of all such neighborhoods. Then, by Lemma 4 (specifically (18)) applied to the agglomerated tree \(\mathcal{G}^{\prime}\), since there is no edge in \(\mathcal{G}^{\prime}\) that connects any two elements of \(\mathcal{A}^{\prime}\), \[X_{\mathcal{A}^{\prime}}\ -\!\!\circ\!\!-\ X_{\tilde{\mathcal{S}}}\ -\!\!\circ\!\!-\ X_{ \mathcal{M}^{\prime}\setminus(\mathcal{A}^{\prime}\cup\tilde{\mathcal{S}})}\] so that \[0 =\mathrm{I}(X_{\mathcal{A}^{\prime}}\wedge X_{\mathcal{M}^{\prime} \setminus(\mathcal{A}^{\prime}\cup\tilde{\mathcal{S}})}\,|\,X_{\tilde{\mathcal{S}}})\] \[=\mathrm{I}(X_{\mathcal{A}^{\prime}}\wedge X_{\mathcal{M}^{ \prime}\setminus(\mathcal{A}^{\prime}\cup\mathcal{S})},X_{\mathcal{S} \setminus\tilde{\mathcal{S}}}\,|\,X_{\tilde{\mathcal{S}}})\] \[\geq\mathrm{I}(X_{\mathcal{A}^{\prime}}\wedge X_{\mathcal{M}^{ \prime}\setminus(\mathcal{A}^{\prime}\cup\mathcal{S})}\,|\,X_{\mathcal{S}}) \tag{53}\] since \(\tilde{\mathcal{S}}\subseteq\mathcal{S}\). Finally, (53) implies (52) as \(\mathcal{B}\subseteq\mathcal{M}^{\prime}\setminus(\mathcal{A}^{\prime}\cup \mathcal{S})\). Figure 3: Schematic for the proof of Lemma 4. ## Appendix C: Proof of Lemma 13 Proof.: For \(1\leq c\leq 1.2\), \(x\geq c\ln^{2}x\) holds unconditionally; so assume that \(c\geq 1.2\). Consider the function \(f(x)=x-c\ln^{2}x\). Then, using [38, Lemma A.1], \(x\geq 4c\ln 2c\) implies \(x\geq 2c\ln x\) which, in turn, implies \(f^{\prime}(x)\geq 0\). Therefore, for \(x\geq 4c\ln 2c\), \(f(x)\) is increasing in \(x\). It is easy to check numerically that \(f(16c\ln^{2}c)\) is positive for \(c\geq 1.2\). Thus, for all \(x\geq\max\{4c\ln 2c,16c\ln^{2}c\}\), \(f(x)\geq 0\) and so \(x\geq c\ln^{2}x\).
2305.13494
Deep Clustering for Data Cleaning and Integration
Deep Learning (DL) techniques now constitute the state-of-the-art for important problems in areas such as text and image processing, and there have been impactful results that deploy DL in several data management tasks. Deep Clustering (DC) has recently emerged as a sub-discipline of DL, in which data representations are learned in tandem with clustering, with a view to automatically identifying the features of the data that lead to improved clustering results. While DC has been used to good effect in several domains, particularly in image processing, the impact of DC on mainstream data management tasks remains unexplored. In this paper, we address this gap by investigating the impact of DC in data cleaning and integration tasks, specifically schema inference, entity resolution, and domain discovery, tasks that represent clustering from the perspective of tables, rows, and columns, respectively. In this setting, we compare and contrast several DC and non-DC clustering algorithms using standard benchmarks. The results show, among other things, that the most effective DC algorithms consistently outperform non-DC clustering algorithms for data integration tasks. However, we observed a significant correlation between the DC method and the embedding approaches for rows, columns, and tables, highlighting that a suitable combination can enhance the effectiveness of DC methods.
Hafiz Tayyab Rauf, Andre Freitas, Norman W. Paton
2023-05-22T21:25:23Z
http://arxiv.org/abs/2305.13494v2
# Deep Clustering for Data Cleaning and Integration ###### Abstract. Deep Learning (DL) techniques now constitute the state-of-the-art for important problems in areas such as text and image processing, and there have been impactful results that deploy DL in several data management tasks. Deep Clustering (DC) has recently emerged as a sub-discipline of DL, in which data representations are learned in tandem with clustering, with a view to automatically identifying the features of the data that lead to improved clustering results. While DC has been used to good effect in several domains, particularly in image processing, the impact of DC on mainstream data management tasks remains unexplored. In this paper, we address this gap by investigating the impact of DC in canonical data cleaning and integration tasks, including _schema inference_, _entity resolution_, and _domain discovery_, tasks which represent clustering from the perspective of tables, rows, and columns, respectively. In this setting, we compare and contrast several DC and non-DC clustering algorithms using standard benchmarks. The results show, among other things, that the most effective DC algorithms consistently outperform non-DC clustering algorithms for data integration tasks. However, we also observed that the chosen embedding approaches for rows, columns, and tables significantly impacted the clustering performance.
DC enables interaction between (i) clustering and (ii) representation learning, through joint optimization in which the two iteratively improve each other. Several proposals for clustering and representation learning architectures have been developed [(62)]. The representation learning architectures take a raw high dimensional distance matrix as input and map it to a low dimensional latent space. In data management, the distance matrix is represented by distances between embeddings in the latent space. Most of the time, latent space representations are independent of clustering and can be used for other applications. The most widely used representation learning architecture in deep clustering is auto-encoder-based unsupervised learning [(24; 47)]. The encoder function \(f_{e}\) encodes the input representation \(x_{i}\) into a low dimensional representation \(h_{i}=f_{e}(x_{i})=\frac{1}{1+e^{-(W_{e}x_{i}+b_{e})}}\), and the decoder function \(f_{d}\) decodes \(f_{e}(x_{i})\) into the reconstructed input \(\overline{x}_{i}=f_{d}(h_{i})\), where \(W_{e}\) and \(b_{e}\) are the weights and bias of the encoder network. The optimization function of a simple auto-encoder architecture for \(N\) samples can be defined as: \(f_{min}=\min\frac{1}{N}\sum_{i=1}^{N}\|x_{i}-\overline{x}_{i}\|^{2}\). Considering different applications, researchers have proposed enhanced versions of auto-encoders, including Convolutional Auto-encoders [(16)] for image clustering, Variational Auto-encoders [(54)] for text classification, Generative Auto-encoders [(56)] for image reconstruction, and Adversarial Auto-encoders [(41)] to detect generative probabilistic novelty. The distribution of features in the latent space also matters: how efficiently features are learned depends on how they are distributed. In this context, subspace representation learning [(9; 27; 58; 61)] has been used widely for clustering: the latent space is divided into several subspaces to categorize instances, and two instances in the same subspace are related linearly. The clustering architecture in DC takes as input the optimized low-dimensional latent representation from the representation learning architecture and returns the clustering labels. At this stage, the learned representation is evaluated to determine whether it is cluster-friendly, for example, whether two contextually similar instances are close to each other in the latent space. Several clustering techniques have been used in deep clustering [(62)]. The basic structure of the clustering component is to feed the \(d\)-dimensional representation through neural networks in the forward direction and reduce the dimensions to the cluster number \(K\). Then, a softmax layer can be used for the cluster assignment [(62)].
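As a concrete illustration of the encoder/decoder and reconstruction loss just described, the following is a minimal sketch assuming PyTorch; the layer sizes, sigmoid activation, and all names here are illustrative choices of ours, not the configuration used later in the paper.

```python
# Minimal sketch of the auto-encoder component (PyTorch assumed;
# sizes and activation are illustrative).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, d=768, z=100):
        super().__init__()
        # Encoder f_e: maps x_i to the latent representation h_i
        self.encoder = nn.Sequential(nn.Linear(d, 1000), nn.Sigmoid(),
                                     nn.Linear(1000, z))
        # Decoder f_d: reconstructs x_i from h_i
        self.decoder = nn.Sequential(nn.Linear(z, 1000), nn.Sigmoid(),
                                     nn.Linear(1000, d))

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

model = AutoEncoder()
x = torch.randn(32, 768)            # a batch of embedding vectors
x_bar, h = model(x)
loss = ((x - x_bar) ** 2).mean()    # reconstruction loss (1/N) sum ||x_i - x_bar_i||^2
loss.backward()
```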
To bridge the semantic gap between representation learning and clustering, relation-matching deep clustering techniques have been used [(14; 19)]. However, relation-based matching proposals are computationally expensive [(62)]. More advanced proposals include graph-based architectures (e.g., [(6; 35; 60)]) in which multiple distributions are generated and fed to graph neural networks to preserve the hidden relations between the latent and \(K\)-dimensional target distributions. ## 3. Deep Clustering Concepts and Techniques The fundamental difference between SC and DC methods is that SC methods act on a static representation, whereas DC methods adapt and learn the representation in an unsupervised manner and then use it in clustering. Most SC methods follow a hard clustering mechanism: they take a distance matrix as input and return the clustering results as one discrete set of labels corresponding to each data point's cluster [(62)]. A 1-dimensional discrete vector is hard to optimize for a neural network. In contrast, DC algorithms work on a soft clustering mechanism that takes a high dimensional distance matrix as input, learns the representation used for clustering in a low dimensional latent space, and returns a \(K\)-dimensional continuous vector in the label space. The resulting \(K\)-dimensional continuous vector can then be discretized to obtain the final 1-dimensional clustering vector [(62)]. The basic DC framework consists of three main components: (i) the representation learning architecture, (ii) the reconstruction loss, and (iii) the clustering loss. Concerning (i), we adopt DC methods that use auto-encoder-based and subspace representation learning architectures. A DC framework with a basic auto-encoder architecture is presented in Figure 1. In an auto-encoder-based architecture, the clustering is based on the lower dimensional representation produced by the encoder, with a loss function that trades off cluster quality with the ability to reconstruct the original representation. Through this approach, the latent space representation of the original input data should preserve the features that are most suitable for clustering. Consider raw data \(X\in\mathcal{R}^{N\times d}\), a \(d\)-dimensional distance matrix with \(N\) elements, where \(x_{i}\) is the \(i\)th element in \(X\). Representation learning in auto-encoders starts with the encoder part, the purpose of which is to encode \(X\) into a low dimensional latent representation \(H\). Suppose the auto-encoder consists of \(L\) layers, where \(\ell\) is the layer number; the representation learned from \(\mathcal{R}^{N\times d}\) at encoder layer \(\ell\) can be obtained as [(6)]: \[H^{\ell}=\phi\left(w_{e}^{\ell}H^{\ell-1}+b_{e}^{\ell}\right), \tag{1}\] where \(H^{0}=X\), \(\phi\) denotes the activation function, and \(w_{e}^{\ell}\) and \(b_{e}^{\ell}\) represent the weight and bias of the \(\ell\)-th encoder layer. The decoder part decodes \(H\) into the reconstructed input \(\overline{X}\) using the following equation [(6)]: \[H^{\ell}=\phi\left(w_{d}^{\ell}H^{\ell-1}+b_{d}^{\ell}\right), \tag{2}\] where the decoder is initialized with the latent representation \(H\) and the output of its final layer is the reconstruction \(\overline{X}\); \(w_{d}^{\ell}\) and \(b_{d}^{\ell}\) represent the weight and bias of the \(\ell\)-th decoder layer. Figure 1. DC with a basic auto-encoder architecture. 
The objective function used in auto-encoder architectures can be defined as: \[\mathcal{L}=\lambda\mathcal{L}_{r}+\mathcal{L}_{c}, \tag{3}\] where \(\mathcal{L}_{r}\) and \(\mathcal{L}_{c}\) represent the reconstruction and clustering losses, respectively. \(\mathcal{L}_{c}\) is specific to the clustering module, and each deep clustering proposal combines \(\mathcal{L}_{c}\) with several further module-specific losses. The basic version of \(\mathcal{L}_{r}\) can be defined as: \[\mathcal{L}_{r}=\frac{1}{N}\sum_{i=1}^{N}\left\|x_{i}-\overline{x}_{i}\right\|^{2} \tag{4}\] We adopted two DC algorithms, SDCN (Covington et al., 2016) and EDESC (Covington et al., 2016), from recent DC proposals to evaluate data integration tasks. The selection of the DC algorithms is based on their implementation suitability and the flexibility of their proposed distance functions for data integration tasks: most current DC methods are designed purely for image and text clustering applications (Shou et al., 2017; Wang et al., 2018; Wang et al., 2019) and do not lend themselves to comparing rows, columns, and tables in the latent space. Another selection criterion is performance; we describe the top two performers on data integration tasks. The DC algorithms used for the experimental evaluation are described below: * **SDCN** (Covington et al., 2016) is based on two representation learning modules, a Graph Convolutional Network (GCN) and an auto-encoder, that work in parallel to learn structural and auto-encoder specific information. SDCN starts by constructing a K-Nearest Neighbor (KNN) graph from \(X\) and feeding it to the GCN model to learn the low-order and high-order information. To generate the KNN graph, two widely used approaches, Heat Kernel (Covington et al., 2016) and Dot-product (Covington et al., 2016), are used to develop a similarity matrix \(S\), and the top K nearest neighbors are selected to produce an undirected graph. Embedding techniques in data integration applications already compute the similarity matrix \(S\). To learn auto-encoder-specific representations, SDCN uses a DNN module, which consists of a simple auto-encoder architecture. GCN-specific and auto-encoder-specific representations are combined through a delivery operator and a dual self-supervised mechanism to perform soft clustering assignments from multiple representations. SDCN combines three losses (reconstruction, clustering, and GCN) based on three distributions, \(P\), \(Q\) and \(Z\), to form its objective function. \(P\) is the target distribution in the DNN module that supports data points mapping to the cluster centers, \(Q\) is the distribution of assignments of all samples, and \(Z\) is the GCN-specific distribution. The reconstruction loss is similar to that given in Equation 4. The clustering loss minimizes the KL divergence between \(P\) and \(Q\). Lastly, the GCN loss minimizes the KL divergence between \(P\) and \(Z\) (Covington et al., 2016). * **EDESC** (Covington et al., 2016) is a deep subspace clustering method. Subspace representation learning refers to mapping data points into low dimensional subspaces to separate each data point, similar to the early stages of subspace clustering (Shou et al., 2017). Unlike SDCN, EDESC is a non-graphical clustering method. Deep subspace clustering models are usually quadratic in time and space complexity due to their self-expressive nature, leading to poor performance when tackling large-scale data (Covington et al., 2016). 
The self-expressive model expresses each data point as a linear combination of the other data points from the same subspace. The simplest self-expressiveness property can be denoted as \(X=XC\), where \(X\) represents the data matrix and \(C\) represents the self-expression coefficient matrix. The objective function for self-expression-based representation learning can be defined as (Shou et al., 2017): \[\min_{C}\ \left\|C\right\|_{p}+\frac{\lambda}{2}\ \left\|H-HC\right\|_{F}^{2} \ \ \ \ \ s.t.\ \ \ \ \ diag\left(C\right)=0 \tag{5}\] where \(\left\|.\right\|_{p}\) denotes a matrix norm and \(\lambda\) is a weight-controlling factor. \(H\) is the representation learned by the network. EDESC takes a deep representation and learns the subspace bases in an iteratively refining manner. The latent space representation is learned through the refined subspace bases, outside the self-expressive framework. EDESC first initializes the subspace bases \(D\) using K-means clustering (Covington et al., 2016); however, we employed Birch, which is also a standard clustering algorithm and shows better performance. EDESC proposes a novel subspace affinity operator, \(S\), that estimates the relationship between the subspace bases \(D\) and the embedded latent space \(Z\). To emphasize high-confidence assignments in \(S\), EDESC proposes a refined subspace affinity \(\tilde{S}\). EDESC performs a combined optimization of subspace clustering and embedded representation learning. Both SDCN and EDESC build on a pre-trained autoencoder that learns a compressed representation of the input embedding, optimized for reconstruction while ignoring the clustering task. We refer to this autoencoder as AE. This can help reduce the input embedding's dimensionality, remove noise and redundancy, and capture relevant patterns and structure. The learned representation then passes to the original training part, combined with the clustering loss, for further fine-tuning. It is also helpful to evaluate the impact of the learned representation on non-DC algorithms. In this context, we used a different AE version, which uses the Birch algorithm to perform clustering not directly on the embedding but on the representation learned by AE. This can be interpreted as performing Birch on \(H\) in Figure 1; a minimal sketch of this variant follows.
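The sketch below illustrates this AE-plus-Birch variant, assuming PyTorch and scikit-learn; it reuses the illustrative `AutoEncoder` class from the earlier sketch, and the hyper-parameter values are placeholders rather than the settings used in the experiments.

```python
# Minimal sketch: pre-train the auto-encoder on reconstruction only,
# then cluster the learned latent representation H with Birch
# (PyTorch and scikit-learn assumed; AutoEncoder from earlier sketch).
import torch
from sklearn.cluster import Birch

def pretrain_and_cluster(X, n_clusters, epochs=30, lr=1e-3):
    model = AutoEncoder(d=X.shape[1], z=100)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    data = torch.as_tensor(X, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        x_bar, _ = model(data)
        loss = ((data - x_bar) ** 2).mean()  # reconstruction loss only
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, h = model(data)                   # latent representation H
    return Birch(n_clusters=n_clusters).fit_predict(h.numpy())
```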
## 4. Experimental Setup The hypothesis is that DC is expected to outperform SC, as it builds on a latent space representation that can better integrate schema-level and instance-level representations. To evaluate the hypothesis, we included the following SC methods: * **K-means** (Kmeans, 2012) initializes with a set of data points; in the context of the data integration problem, it initializes with distance vectors of either schema- or instance-level data points in the vector space and assigns the data points to the clusters with the nearest centroid. It iterates repeatedly to optimize the cluster centers. K-means minimizes the clustering loss with a squared Euclidean distance function. K-means requires the value of K in advance to predict the clusters. K-means has a time complexity of \(O(n\times I\times K)\), where \(n\) is the number of data points, \(I\) is the number of iterations, and \(K\) is the number of clusters. K-means can be efficient for large data sets with a reasonable number of clusters. * **Birch** (Birch, 2016) is from the hierarchical clustering family and is designed for tackling massive databases, especially those involving data with noise or outliers. Since we have a noisy web tables data set, Birch is a suitable choice to compare with the DC algorithms over different encodings. Birch takes the number of clusters K as an input. Birch has a time complexity of \(O(n\times\log(n))\), which allows it to scale well with large datasets. Birch provides a hierarchical clustering structure, which can help in understanding the data structure and can yield more interpretable results. The actual number of clusters can be chosen after the CF tree is built, which provides flexibility in exploring different numbers of clusters and helps in determining the appropriate level of granularity. Setting the number of clusters K in advance gives the SC methods an (in a sense unfair) advantage, as they are given the (unknown) ground truth value for K. In contrast, the DC methods only take K to initialize the centers of the clusters for pre-training. Subsequently, the DC methods work out K without taking a GT value during training, and in practice it may not be possible to establish the correct K. As such, the DC methods are more flexible in their ability to estimate the number of clusters automatically, without prior knowledge of K. We used the scikit-learn implementations (Krishna et al., 2017) of the SC algorithms. A detailed overview of the experimental framework is presented in Figure 2, which consists of three main phases from left to right. First, the raw data is preprocessed to remove high-level syntactic errors. In the second phase, the preprocessed data is fed to the embedding module to generate dense representations. Lastly, the dense representations are further refined in the clustering module and the final clustering assignments are produced. ### Evaluation Metrics We employ two widely used standard clustering evaluation metrics, Accuracy (ACC) (Zhou et al., 2017) and the Adjusted Rand Index (ARI) (Zhou et al., 2017). ARI can be defined as follows (Zhou et al., 2017). Assume we are given a set \(S\) of \(n\) elements and two clusterings of these elements into \(r\) and \(s\) groups, represented as \(X=\{X_{1},\ X_{2},\ldots,X_{r}\}\) and \(Y=\{Y_{1},\ Y_{2},\ldots,Y_{s}\}\), with a contingency table \([t_{ij}]\) of overlaps between \(X\) and \(Y\); each entry \(t_{ij}\) counts the objects common to \(X_{i}\) and \(Y_{j}\). With row sums \(a_{i}\) and column sums \(b_{j}\) of \([t_{ij}]\), ARI can be defined as: \[ARI=\frac{\sum_{ij}\binom{t_{ij}}{2}-\left[\sum_{i}\binom{a_{i}}{2}\sum_{j} \binom{b_{j}}{2}\right]/\binom{n}{2}}{\frac{1}{2}\left[\sum_{i}\binom{a_{i}}{2 }+\sum_{j}\binom{b_{j}}{2}\right]-\left[\sum_{i}\binom{a_{i}}{2}\sum_{j} \binom{b_{j}}{2}\right]/\binom{n}{2}} \tag{6}\] ARI determines the similarity between two clustering results; usually, one is the ground truth labeling and the other corresponds to the labels returned by the clustering algorithm. 
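As an illustrative cross-check of Eq. (6), the following is a minimal sketch, assuming numpy and scipy, that computes ARI directly from the contingency table of two labelings; the function name is ours.

```python
# Illustrative check of Eq. (6): ARI from the contingency table [t_ij]
# of two labelings (numpy/scipy assumed).
import numpy as np
from scipy.special import comb

def ari_from_labels(y_true, y_pred):
    t = np.zeros((max(y_true) + 1, max(y_pred) + 1), dtype=int)
    for gt, p in zip(y_true, y_pred):
        t[gt, p] += 1                      # contingency table [t_ij]
    sum_ij = comb(t, 2).sum()
    sum_a = comb(t.sum(axis=1), 2).sum()   # row sums a_i
    sum_b = comb(t.sum(axis=0), 2).sum()   # column sums b_j
    expected = sum_a * sum_b / comb(len(y_true), 2)
    return (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)

print(ari_from_labels([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 (label permutation)
```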
Generally, the value of ARI lies between 0 and 1. An ARI value closer to 1 represents a strong match between the predicted and ground truth clusters. The clustering ACC for \(N\) samples, with predicted cluster ids \(c_{i}\in R\) and ground truth ids \(gt_{i}\in T\), can be defined as: \[ACC\left(R,T\right)=\frac{\sum_{i=1}^{N}\delta(gt_{i},\ map(c_{i}))}{N} \tag{7}\] \[\delta\left(gt_{i},\ map(c_{i})\right)=\left\{\begin{array}{cc}1,&if\ gt_{i}= map(c_{i})\\ 0,&otherwise\end{array}\right. \tag{8}\] The function \(map()\) gives the best permutation mapping through the Hungarian Algorithm (Krishna et al., 2017). ACC maps the predicted labels onto the ground truth labels, since cluster ids in the prediction are randomly generated and dissimilar to those assigned to the ground truth labels. To facilitate an in-depth analysis, we also recorded standard sensitivity and specificity measures in the form of clustering pairs: _true positives_: the number of pairs of elements that appear together both in the ground truth and in the predicted clusters; _true negatives_: the number of pairs of elements that appear together neither in the ground truth nor in the predicted clusters; _false positives_: the number of pairs of elements that are not together in the ground truth but are together in the predicted clusters; _false negatives_: the number of pairs of elements that are together in the ground truth but are not together in the predicted clusters.
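A minimal sketch of ACC per Eqs. (7) and (8) follows, assuming numpy and scipy; scipy's `linear_sum_assignment` (the Hungarian algorithm) provides the permutation \(map()\), and the function name is ours.

```python
# Minimal sketch of ACC (Eqs. (7)-(8)): the Hungarian algorithm finds
# the permutation map() of predicted ids onto ground truth ids that
# maximizes agreement (numpy/scipy assumed).
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    w = np.zeros((k, k), dtype=int)
    for gt, p in zip(y_true, y_pred):
        w[p, gt] += 1                                    # co-occurrence counts
    rows, cols = linear_sum_assignment(w.max() - w)      # maximize matches
    return w[rows, cols].sum() / y_true.size

print(clustering_accuracy([0, 0, 1, 1, 2], [2, 2, 0, 0, 1]))  # 1.0
```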
### Hyper-parameter setting Network learning benefits from hyper-parameters optimized for the particular task. Since deep clustering algorithms are heavily used for image processing tasks, optimizing hyper-parameters for data integration tasks is necessary. In this context, we tuned the following parameters of SDCN and EDESC: * **Number of layers**: the number of layers determines the network's ability to learn unsupervised feature representations. Since we have pre-defined embeddings as AE inputs, rows, columns, or tables with similar meanings are positioned closer together, and an autoencoder with fewer layers can efficiently compress and reconstruct this underlying structure. We fixed _number of layers = 2_ in all experiments of SDCN and EDESC after experimenting with different values. * **Layer size** (the number of neurons in each layer) determines the capacity of the AE to learn complex patterns in the data. In order to maintain a high-dimensional hidden representation, we fixed _layer size = 1000_ for both SDCN and EDESC after experimenting with different values. This suggests that the complexity of row, column, or table embeddings requires a larger hidden layer size to retain more semantic information. * **Latent space size \(z\)**: originally, SDCN and EDESC used \(z=10\), but this dimensionality is too small in data integration tasks to capture the complexity of the row, column, or table embeddings, leading to significant information loss. Considering this, after systematically experimenting with different values, we fixed _z = 100_ for SDCN and AE, and \(z=a\) for EDESC, where the shape of \(a=(n\_clusters\times d)\) and the value of \(d\) is used to compute the subspace affinity in EDESC. * **Training epochs**: we have observed that the loss function for SDCN and EDESC exhibits high variance and inconsistency between convergence and divergence during training. For example, SDCN with SBERT (Zhou et al., 2017) converged to its best performance for the schema inference experiments on the 95th epoch and diverged afterward. However, divergence is also observed before the 95th epoch, and SDCN runs with an irregular convergence/divergence pattern. We used the silhouette coefficient (Zhou et al., 2017) on the learned representation with the predicted clusters to choose where to stop training. We pre-train SDCN and EDESC for 30 epochs, except for entity resolution (100 epochs), which requires more pre-training due to the large number of clusters. We decide the number of training epochs based on the best silhouette score. We use AE (described in Section 3) with Birch for the entity resolution experiment instead of SDCN: the features learned without the clustering loss in the AE step were more effective at capturing the underlying structure of the data than features fine-tuned along with the clustering loss. In order to decide whether to use only AE for training or to continue with SDCN, we use the silhouette score: if the silhouette score converges during training with SDCN, we use SDCN; otherwise, we retain AE and cluster using Birch. The hyperparameter settings for the experiments, along with the source code, are given online.1 Footnote 1: [https://github.com/hafirrauf/dc_data-integration](https://github.com/hafirrauf/dc_data-integration) ## 5. Schema Inference Schema inference proposes a schema that makes recurring structural features in data explicit. Schema inference may be applied to extensional data (e.g., inferring a JSON schema from several JSON documents (Brock et al., 2016)) or to intensional data (e.g., inferring a schema that summarises a complex relational database (Sundundhi et al., 2017)). Schema inference has been a topic of ongoing investigation for different data models for some time, and several surveys have been produced (Sundhi et al., 2017; Sundhi et al., 2018; Sundhi et al., 2018). It is common for schema inference to build on clustering to identify candidate types/classes in the data, so clustering is an important enabling technology for schema inference. Consider schema inference as a clustering problem, where the task is: for a given set of data sets \(D=\{d_{1},d_{2},\ d_{3}\ldots d_{n}\}\), cluster every subset \(D_{s}\subseteq D\) that can share a common schema. Pre-trained embeddings have been used widely for data integration tasks (Sundhi et al., 2018). They fall into two categories: sentence-based and word-based. Sentence-based embeddings directly map a sequence of tokens into a single dense vector. In contrast, word-based embeddings encode each token separately, and an aggregation function is then applied to derive a single vector. Pre-trained embeddings are trained on large corpora and tend to have broader vocabulary coverage. Considering this, we chose two pre-trained embeddings (one word-based, FastText (Sundhi et al., 2018), and one sentence-based, SBERT (Zhou et al., 2017)) to perform schema inference with schema-level evidence. The schema-level information includes only the table headers; each table is represented by a string (the combination of its attributes), encoded as sketched below. Nevertheless, pre-trained embeddings have disadvantages when tackling large databases, especially with instance-level data: for example, involving custom vocabulary, semantic relationships between header and body, and numerical data distributions in different columns (Sundhi et al., 2018). 
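For illustration, a minimal sketch of this schema-level encoding step, assuming the sentence-transformers library; the model name and header strings are illustrative, not the exact configuration used here.

```python
# Minimal sketch of schema-level encoding: each table is represented
# by the concatenation of its headers and embedded with SBERT
# (sentence-transformers assumed; model name illustrative).
from sentence_transformers import SentenceTransformer

headers = [
    "country population density household size",    # table 1
    "rank country population date of information",  # table 2
    "film title year director overall rank",        # table 3
]
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(headers)  # one dense vector per table
print(embeddings.shape)             # e.g., (3, 384)
```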
Figure 2. Overview of the experimental framework: the deep learning input is a distance matrix for each data integration problem. To produce embeddings with schema+instance-level data, tabular transformers have been a mainstream option in the deep learning community (Beng et al., 2017). Several tabular transformers have been proposed in the literature to handle noisy and incomplete data (e.g., (Beng et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019)). We used web data for schema inference to match web tables to DBpedia, particularly the T2D Entity-Level Gold standard (T2Dv1) (Zheng et al., 2017). We rejected all tables that included languages other than English. We also excluded all DBpedia _Thing_ tables to avoid significant data imbalance, as that class is mapped to more than 50% of the data tables. Further properties of the web tables data are given in Table 1. \begin{table} \begin{tabular}{c c} \hline \hline & Web Tables \\ \hline Number of Tables & 429 \\ Ground-truth clusters & 26 \\ Mean cluster size & 16.5 \\ Median cluster size & 8.5 \\ Largest cluster size & 125 \\ Number of unary clusters & 0 \\ \hline \hline \end{tabular} \end{table} Table 1. Data set properties for schema inference The criteria for choosing a tabular transformer are based on the nature of the data set. Web tables are noisy; for example, a table with attribute _symbol_ and values 'aa', 'axp' is problematic for pre-trained embeddings, as 'aa' and 'axp' are not present in the pre-trained vocabulary and in most cases will be treated as unknown tokens. However, training on a local vocabulary can overcome this issue. Another significant issue is incomplete columns or rows. To handle these issues, we evaluated six tabular transformers, including Tabnet (Beng et al., 2017), TabTransformer (Li et al., 2018), SAINT (Li et al., 2019), FT-Transformer (Li et al., 2019), TabFastFormer (Li et al., 2019), and TabPerceiver (Li et al., 2019), on the web tables data. We retained the two best performers, Tabnet and TabTransformer, for comparison. TabTransformer (Li et al., 2019) has been found to be robust to missing table values and noisy data. TabTransformer is based on Transformers (Zheng et al., 2019), with several multi-head attention layers to contextually embed categorical columns. The dense features are obtained from the Transformer block (the building block of the Transformer architecture) and concatenated with the continuous column data. Tabnet (Beng et al., 2017) is based on row-wise feature selection and is more suitable for raw data without pre-processing. Tabnet uses sequential attention to choose categorical and numerical features at each decision step. Tabnet quantifies each feature's contribution to the embedding both separately and combined. ### Data dimensionality for tabular transformers To produce a distance matrix (\(X_{i}\)), each tabular encoding method produces a different dimension size \(d\). In our experiments, we use the standard values of \(d\) for FastText and SBERT, 300 and 768, respectively. Tabnet and TabTransformer process each input feature individually, whether row or column, and apply a series of transformations. Each table's categorical and continuous features have different cardinalities, affecting the size of the output embedding \(d\) for each table in Tabnet and TabTransformer. To normalize \(d\) for instance-level data, we selected the maximum feature size occurrence and performed linear interpolation to fill the empty values (sketched below). However, for TabTransformer, the last column of the distance matrix needs the preceding value to interpolate, which makes the size \(d\) equal to \(\max(d)-1\), where \(\max(d)\) denotes the maximum number of dimensions observed for any table. The obtained values of \(d\) for Tabnet and TabTransformer are 693 and 208, respectively.
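This interpolation-based normalization admits more than one reading; the following sketch, assuming numpy, shows one plausible interpretation in which a variable-length table embedding is stretched to a common dimensionality. The function name is ours, and the exact procedure used in the paper may differ.

```python
# One plausible reading of the normalization step (numpy assumed):
# stretch a variable-length table embedding to a fixed size d_max
# by linear interpolation.
import numpy as np

def pad_to_dim(vec, d_max):
    x_old = np.linspace(0.0, 1.0, num=len(vec))
    x_new = np.linspace(0.0, 1.0, num=d_max)
    return np.interp(x_new, x_old, vec)

e = np.random.randn(512)         # a table embedding with d = 512
print(pad_to_dim(e, 693).shape)  # (693,)
```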
### Results and Discussion For all experimental results, the bold values in black and red indicate the best and the second-best results for the corresponding embedding methods, respectively. Table 2 presents the clustering results for schema inference using only schema-level data. The following can be observed: (i) **The representation chosen significantly impacts performance, with SBERT significantly outperforming FastText for all clustering algorithms.** SDCN achieved a 0.38 higher ARI with SBERT than with FastText. Birch and K-means with SBERT outperformed FastText by 0.34 and 0.17 in ARI score. Figures 3(a) and 3(b) confirm that the separability of data points for SBERT is more robust than for FastText, whose data points are compact in the latent space, which is unsuitable for clustering. We can also observe fewer outliers (probable cases of unary clusters) in SBERT than in FastText. This observation can be verified from Table 2 via the number of unary clusters. SDCN produced 6816 true positive pairs with SBERT, as opposed to FastText, with which it predicted 4650 fewer true positive pairs. (ii) **The DC algorithms outperform the SC algorithms in most cases, with the largest differences being for SBERT.** SDCN and EDESC lead with ARI differences of 0.19 and 0.14 compared to K-means, respectively. Regarding the SC-to-SC comparison with schema-level data, the hierarchical clustering algorithm (i.e., Birch, designed to handle database tasks) is more suitable for schema inference with SBERT. Birch is ahead with a 0.06 higher ARI than K-means. The number of false positive pairs with Birch is 8557, far fewer than with K-means (9349). The results in Table 2 suggest that SBERT can better capture polysemy than FastText, where a single word can have different meanings depending on context. In contrast, while FastText is particularly well-suited to handling out-of-vocabulary words, averaging word embeddings does not take their table co-occurrence context into account. Table 3 presents the clustering results for schema inference using both schema and instance level data. The following can be observed: (i) **The tabular transformer chosen significantly impacts performance, with a relatively high performance of Tabnet against TabTransformer for SDCN.** Figures 3(c) and 3(d) show no significant difference in the latent space in terms of the data points' relative positions. This low visual difference depends entirely on the Umap projections, which can lose information about the relationships among the tables. Furthermore, it shows that the web table data does not have a clear cluster structure, making it difficult to discover meaningful patterns. Adding instance-level evidence with tabular embedding failed to show its suitability for clustering compared with schema-level evidence with SBERT for the web tables data. (ii) **The DC algorithms outperform the SC algorithms in all cases when we consider schema+instance-level data.** Notably, SDCN (when used with Tabnet) outperformed all SC algorithms with an ACC difference of 0.28 compared to K-means and Birch. SDCN also produced the second-best DC result (when used 
with TabTransformer, which treats each feature as a token and uses the Transformer architecture to learn contextual embeddings) and obtained ACC differences of 0.13 and 0.14 compared to K-means and Birch, respectively. Incorporating subspace clustering in the latent space, EDESC failed to produce robust clusters with TabTransformer, offering ACC improvements of only 0.02 and 0.03 over K-means and Birch, respectively. (iii) **The provision of K does not significantly impact the clustering algorithms' overall performance.** For example, Birch with TabTransformer might have been expected to outperform EDESC due to being given a fixed number of clusters (26), but EDESC outperformed Birch by 0.07 ARI even though it only produced 14 clusters. Similar behavior can be seen when using Tabnet with EDESC, which produced 12 clusters compared to 26 ground truth clusters and achieved a 0.09 higher ARI than K-means, which generated 26 clusters. (iv) **Tabnet and TabTransformer treat all attributes as being equally important.** As a result, even when two tables share a subject attribute, they may be clustered separately because their other attributes are different. A _subject attribute_ identifies the artifact that the table is about. For example, tables T1 and T2 are clustered separately even though they have the common subject column _Country_, with other columns _(T1.Total population in 2004 (million), T1.Annual population growth rate (\(\%\)), T1.Population density (persons per square km.), T1.Average number of persons per household)_ and _(T2.rank, T2.population, T2.date of information)_. In terms of the relative performance between Tables 2 and 3, the empirical results for the web tables dataset show that **schema-level evidence is more suitable for DC and SC, and adding instance-level evidence leads to poorer performance.** This is because the actual instances tend to have low overlap even when their tables are clustered together in the GT. For example, SDCN with Tabnet failed to cluster together tables T3 and T4, which belong to the class _Film_, because they have the same schema but different instances, e.g., _(T3.fansrank: 101, T3.title: treasure sierra made, T3.year: 1948, T3.director: john huston, T3.overallrank: 92)_ and _(T4.fansrank: 442, T4.title: game, T4.year: 1997, T4.director: david fischer, T4.overallrank: 1491)_. Considering the observations and evidence from Tables 2 and 3, we can conclude that DC outperforms SC, regardless of the selected embedding strategy. ## 6. Entity Resolution Entity resolution is the well-studied process of identifying where two or more records in a data set represent the same real world object (Han et al., 2015; Krizhevsky et al., 2015). Entity resolution thus takes place at the instance level. Most entity resolution proposals focus on pairwise similarity between records. However, the transitive closure of the pairwise similarity relationship may not lead to suitable clusters, and as a result some entity resolution proposals include a clustering step (e.g., (Han et al., 2015; Krizhevsky et al., 2015)). We note that deep learning has been applied with positive results to entity resolution (e.g., (Krizhevsky et al., 2015; Krizhevsky et al., 2015)), but the need for training data is a barrier to adoption (Krizhevsky et al., 2015). Deep clustering can potentially provide some of the benefits of deep representation learning in an unsupervised setting.
To do entity resolution with clustering, the task can be defined as follows: for a given set of records \(R=\{r_{1},r_{2},\ r_{3}\ldots r_{n}\}\), cluster each subset \(R_{s}\subseteq R\) whose records refer to the same real world entity. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{DC} & \multicolumn{4}{c}{SC} \\ & \multicolumn{2}{c}{SDCN} & \multicolumn{2}{c}{EDESC} & \multicolumn{2}{c}{K-means} & \multicolumn{2}{c}{Birch} \\ \hline **Metric** & SBERT & FastText & SBERT & FastText & SBERT & FastText & SBERT & FastText \\ Ground-truth clusters & 26 & 26 & 26 & 26 & 26 & 26 & 26 & 26 \\ Predicted clusters & 16 & 19 & 26 & 26 & 26 & 26 & 26 & 26 \\ Mean cluster size & 26.8 & 22.57 & 16.5 & 16.5 & 16.5 & 16.5 & 16.5 & 16.5 \\ Median cluster size & 17.5 & 10.0 & 9.5 & 11.0 & 18 & 11.5 & 11 & 1.0 \\ Unary clusters & 2 & 2 & 0 & 1 & 0 & 1 & 0 & 15 \\ Run time (S) & 8.9 & 9.5 & 2.41 & 1.66 & 0.14 & 0.20 & 0.11 & 0.05 \\ ARI & **0.46** & 0.08 & **0.41** & **0.14** & 0.27 & **0.10** & 0.33 & -0.01 \\ ACC & **0.58** & 0.27 & **0.55** & **0.35** & 0.45 & **0.31** & 0.49 & 0.28 \\ \hline \hline \end{tabular} \end{table} Table 2. Schema Inference: Schema-level clustering results DC (SDCN and EDESC) vs SC (K-means and Birch) using pre-trained embeddings on web tables data. Figure 3. Umap representation of pre-trained sentence and tabular based encodings. For the entity resolution task, we employed the MusicBrainz database (Wang et al., 2017), which contains continuously updated song data from five sources. The data set includes duplicates for 50% of the original records. The original version, "Music Brainz 20K", contains 10,000 ground truth clusters. We observed (see Figure 4) that the DC algorithms are sensitive to the number of clusters K in terms of time complexity. Considering this behavior, we reduced Music Brainz 20K (Wang et al., 2017) to Music Brainz 2K, to provide more manageable run times. To ensure the data set is balanced, we first discarded all instances associated with a single cluster, sorted them by cluster id in increasing order, and chose the top 2002 instances with 684 clusters. The properties of the data set evaluated for entity resolution are given in Table 4. We only used schema+instance-level data to cluster all records that describe the same real world entity. Schema-level information is not considered because each record in the Music Brainz 2K data set contains the same attributes with different descriptions. Embedding rows in the presence of data heterogeneity can be challenging: for example, coping with missing attributes for a particular record, the size of the description, and data type ambiguity (for example, handling numeric data and multi-word tokens). Consider the scenario of identifying duplicate records with different descriptive patterns: _(year: 2008, language: eng)_, _(year: '08', language: English)_, _(year: , language: eng)_, and _(year: 2008, length: 24sec)_. These records suffer from several issues, including a missing year, year values that are variously numerical and categorical, and the same record appearing with different attribute and value abbreviations. Considering these issues, we used EmbDi (Dong et al., 2016) to embed records into the distance matrix, which can be directly input to the DC algorithms. EmbDi (Dong et al., 2016) is based on a tripartite graph with three types of node: a value node (representing a unique value), a column node (corresponding to a column or attribute representation), and a row node (a unique token for each tuple). 
These nodes are connected in a graph based on the structural information that exists in the data set. EmbDi adopts random walks between neighboring nodes to capture the local and global structure in the graph, where the length of the random walk and the number of walks per node are user-defined. Column nodes with similar neighborhoods are placed together in the embedding space. EmbDi offers optimizations to handle data heterogeneity problems. We selected only those encodings produced by EmbDi with the prefix (see (Dong et al., 2016)) _idx_, as each token with prefix _idx_ represents one tuple. As SBERT has shown competitive performance on schema inference, we have applied the pre-trained SBERT model to entity resolution tasks. We computed the SBERT embeddings of each row of the six attributes in the Music Brainz 2K dataset and summed them to get the final embedding of each row. \begin{table} \begin{tabular}{c c} \hline \hline & Music Brainz 2K \\ \hline Sources & 5 \\ Number of records & 2002 \\ Ground-truth clusters & 684 \\ Mean cluster size & 2.92 \\ Median cluster size & 3.0 \\ Largest cluster size & 5 \\ Number of unary clusters & 0 \\ \hline \hline \end{tabular} \end{table} Table 4. Data set properties for entity resolution Figure 4. Run time analysis for different values of K on Music Brainz 20K data set \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{DC} & \multicolumn{4}{c}{SC} \\ & \multicolumn{2}{c}{SDCN} & \multicolumn{2}{c}{EDESC} & \multicolumn{2}{c}{K-means} & \multicolumn{2}{c}{Birch} \\ \hline **Metric** & TabTransformer & Tabnet & TabTransformer & Tabnet & TabTransformer & Tabnet & TabTransformer & Tabnet \\ Ground-truth clusters & 26 & 26 & 26 & 26 & 26 & 26 & 26 & 26 \\ Predicted clusters & 26 & 26 & 14 & 12 & 26 & 26 & 26 & 26 \\ Mean cluster size & 16.5 & 16.5 & 30.64 & 35.75 & 16.5 & 16.5 & 16.5 & 16.5 \\ Median cluster size & 16.0 & 13.5 & 23.0 & 12.5 & 1.0 & 1.0 & 2.0 & 1.0 \\ Unary clusters & 0 & 0 & 1 & 3 & 15 & 20 & 12 & 20 \\ Run time (S) & 4.9 & 4.93 & 2.93 & 4.6 & 0.10 & 0.10 & 0.10 & 0.17 \\ ARI & **0.26** & **0.45** & **0.09** & **0.08** & 0.02 & -0.013 & 0.02 & -0.013 \\ ACC & **0.42** & **0.55** & **0.31** & **0.31** & 0.29 & 0.27 & 0.28 & 0.27 \\ \hline \hline \end{tabular} \end{table} Table 3. Schema Inference: Schema+Instance-level clustering results DC (SDCN and EDESC) vs SC (K-means and Birch) using tabular embeddings on web tables data. ### Results and Discussion Table 5 presents the clustering results for entity resolution using Schema+Instance-level data. The following can be observed: (i) **Running SDCN did not manage to improve the representation compared to AE.** We observed that SDCN was not further optimizing the representation AE produced during pre-training. The silhouette score was also not improving. Due to this, we used the AE representation from the pre-training module without considering the clustering loss from SDCN. (ii) **Most clustering algorithms produced better results with SBERT than with EmbDi.** AE with SBERT leads, with an ARI 0.26 higher than AE with EmbDi. Regarding _TP_ pairs, AE with SBERT obtained 616 more _TP_ pairs than AE with EmbDi. For example, one pair, which is _TP_ in AE with SBERT and _FN_ in AE with EmbDi, is _(title: 009-Ballade a donner, length: 4m 2sec, artist: Luce Dufault, album: Luce Dufault (1996), year: nan, language: Fre.)_ and _(title: Luce Dufault - Ballade A donner, length: 242, artist: nan, album: Luce Dufault, year: 96, language: French)_. 
EmbDi encoded _(length: 242)_ as a numerical value given in seconds and _(length: 4m 2sec)_ as a string token, whereas SBERT considered both as strings. Similarly, EmbDi did not manage to preserve the contextual information when comparing text with its abbreviations _(language: Fre. vs. language: French)_. Overall, SBERT with AE obtained 626 more _TN_ pairs than AE with EmbDi. One example of the common cases is: _(length: 153, artist: nan, language: English)_ and _(length: 283, artist: nan, language: English)_ are _FP_ in EmbDi but _TN_ in SBERT. Even when the pairs have different attribute values _(title: Sender - When Grading Comes, album: The Singles Ward, year: 2)_ and _(title: Dr. Olive - Zodida 99, album: First DJ on the Moon, year: nan)_, respectively, EmbDi may combine them in the latent space with a high EmbDi similarity score. (iii) **The best overall results are with AE for both representations.** AE outperforms EDESC with 0.08 and 0.34 higher ARI scores with EmbDi and SBERT, respectively, since AE learned features more effectively than EDESC did during training. ACC shows that AE with SBERT assigns 7% more samples to the correct clusters in the prediction compared to EDESC using SBERT. For example, the cosine similarity of two contextually similar SBERT vectors representing _(title: Uriah Heep - Southern Star, length: 266, artist: nan, album: Into the Wild, year: 11, language: English)_ and _(title: 0B1-Southern Star, length: 4m 26sec, artist: Heep Uriah, album: Into the Wild (2011), year: nan, language: Eng.)_ is 0.78, and they should be clustered together given their high contextual similarity. However, EDESC placed the two rows in separate clusters, whereas AE produced the correct clusters. (iv) **EDESC with SBERT failed to distinguish most of the unary clusters (_TN_ in GT), leading it to predict an incorrect number of clusters (668 against 684 GT clusters), unlike AE.** Most EDESC unary clusters have been merged in the prediction, causing a high _FP_ rate (EDESC misassigns rows to the same cluster when they should be in different clusters). For example, two instances sharing lexically similar values _(length: 4m 56sec, year: nan, language: Eng.)_ and _(length: 4m 29sec, year: nan, language: Eg.)_ belong to different clusters in GT but obtain a high cosine similarity (0.99) in the EDESC latent space with SBERT, resulting in misclassification. (v) **The original EmbDi representation does not perform especially well, but is improved on by AE and EDESC.** In EmbDi, high similarity scores may be given even where there are few attributes in common. For example, all rows in the largest cluster contain only the common attribute value _(Language: spa.)_, which occurred frequently. The cosine similarity of the EmbDi vectors of two records with different values of _(title, length, artist, album, year)_ and _(language: spa)_ is 0.75, causing the SC algorithms to cluster them together. The representation learned by AE from EmbDi resolved this issue. One of the properties of the dataset used is that true negative pairs always dominate over true positive pairs due to the small mean cluster size, and there is always an inverse relation between mean cluster size and true negative rate. For all algorithms, true negative pairs account for 99.8% of the contribution to the ARI score. This indicates that both DC and SC algorithms have correctly identified and separated tuples that belong to different clusters. 
Furthermore, the high ACC score of all clustering algorithms with many true negative pairs shows that EmbDi is learning meaningful features from the data. These features allow the DC and SC algorithms to better distinguish between different clusters. ## 7. Domain Discovery Domain discovery is the process of identifying collections of values that instantiate an application concept. Discovering domains tends to involve looking for similar collections of values in different dataset columns. Most prior work has used bespoke algorithms (Srivastava et al., 2014; Srivastava et al., 2015; Srivastava et al., 2016), but in this section we investigate the use of generic clustering techniques for identifying columns that share domains. For domain discovery, the clustering problem can be defined as: for a given set of columns \(C=\{c_{1},c_{2},\ c_{3}\ldots c_{n}\}\), cluster every subset \(C_{S}\subseteq C\) that shares a common domain. Similar to SI, to infer a domain from a set of columns we considered schema-level evidence with pre-trained sentence transformers SBERT and FastText and Schema+Instance-level with SBERT and EmbDi (Dong et al., 2015). We used the Di2KG (Camera) data set2, which consists of camera specifications extracted from multiple e-commerce web pages. The data set is highly heterogeneous, both within and across sources. For example, synonyms, e.g., (Less) from www.cambuy.com.au and (_normalized optical zoom_) from _buynet_, semantically represent the same domain. There are several homonyms, e.g., (_screen type_) is considered in some sources to represent _(screen size)_. The properties of the data set evaluated for domain discovery are presented in Table 6. Footnote 2: [http://di2kg.inf.uniroma3.it/datasets.html](http://di2kg.inf.uniroma3.it/datasets.html) We used three embedding methods for column clustering, considering schema-level and schema+instance-level data. To encode column attributes, we used the pre-trained models SBERT and FastText, as in schema inference. To encode columns at schema+instance-level, we utilized the Schema Matching (SM) version of EmbDi (Algorithm 5 in EmbDi (Dong et al., 2015)) and evaluated skip-gram as a learning method with piece-wise smoothing. We only selected those encodings with prefixes (see (Dong et al., 2015)) \(tt\_\), \(idx\_\) and \(ed\_\) for categorical columns and \(tn\_\) for continuous columns. In domain discovery, we have a set of columns with cell values that can be represented as a _phrase_ in SBERT, which is trained on diverse text corpora and can capture semantic and syntactic information. Considering this, we used SBERT to encode column headers and values jointly. SBERT processes each column and generates embeddings representing the semantic content of the column headers and values. Subsequently, the mean embedding for each column is computed by performing a mean operation on the corresponding column header and value embeddings. ### Results and Discussion Table 7 shows the clustering results for domain discovery using schema-level data. We observe the following: (i) **All the clustering algorithms perform quite similarly when we consider schema-level data**. This suggests that DC is not improving the representation much and indicates that the representations used capture the necessary structure and meaningful differences well enough for SC to group suitable column headers together. (ii) **SBERT and FastText with schema-level data are much more similar than in schema inference**. 
SBERT leads by 0.03 ARI with SDCN and by 0.08 ARI with EDESC, a relatively small difference compared to the performance of FastText in SI. This is because, in schema inference, we have long contextual phrases compared to domain discovery. The attribute phrases in the Camera data set are short, and FastText does not need to consider the order of words to embed, leading to good performance. (iii) **The schema+instance-level approach, specifically pre-trained SBERT, has performed better (empirically, a difference of 0.12 ARI with SDCN) than the schema-level representations (see Table 8)**. The reason could be a feature of the Camera data set, as it has many syntactically different names for equivalent attributes, and schema+instance-level approaches capture the context better. For example, SDCN produced two clusters with the attributes _(sensor type)_ and _(optical sensor)_; however, these attributes belong to one cluster in GT. The cosine similarity of the attribute names is 0.49, which is low, and SBERT and FastText failed to capture the weak semantic relationship between the attribute names, in contrast with the schema+instance-level SBERT. Table 8 presents the clustering results for domain discovery using schema+instance-level data. We observe the following: (i) **Both deep clustering algorithms SDCN and EDESC struggled to integrate Schema+Instance data with EmbDi and showed much better performance with pre-trained SBERT**. EmbDi failed to produce suitable embeddings for column headers and values because EmbDi emphasises relationships between columns in a table, which are not especially relevant to domain discovery. In contrast, SBERT considers the textual context for each column header and value and then combines them, ignoring surrounding columns. Furthermore, the performance of EmbDi is also impacted by the syntactic dissimilarity between column headers. For example, attributes _(image size pixels)_ and _(max resolution)_ are lexically dissimilar with a cosine similarity of 0; however, they can have similar instance values. The largest cluster predicted by EDESC with EmbDi contains 1151 columns that belong to 13 ground truth domains but represent one domain in the prediction, which shows a high false positive rate. Some examples of domains that are clustered together by EDESC with EmbDi but are not in the GT clusters are _(battery type, lens type, battery life, camera type)_. We used heat maps (Figure 5) to analyze how the distance vectors of columns are similar or dissimilar. We investigate how adding instance-level data affects the representation of columns. For heat map visualization, columns are selected randomly from the predicted clusters of SDCN encoded with SBERT (schema-level) and EmbDi (schema+instance-level). Unlike SBERT, EmbDi with SDCN groups columns that are neither syntactically similar nor belong to the same domain. The addition of instance-level data gives rise to a poorer encoding for SDCN with EmbDi. Figure 5(a) confirms that different columns that should be in the same cluster are in different clusters. This indicates that the column headers are lexically different and suitable for models pre-trained on large dictionaries. From Figure 5(b), \begin{table} \begin{tabular}{c c} \hline \hline & Camera \\ \hline Sources & 24 \\ Number of columns & 19036 \\ Ground-truth clusters & 56 \\ Mean cluster size & 339.92 \\ Median cluster size & 183 \\ Largest cluster size & 2463 \\ Number of unary clusters & 1 \\ \hline \hline \end{tabular} \end{table} Table 6. 
Di2KG (Camera) data set properties for domain discovery \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{DC} & \multicolumn{4}{c}{SC} \\ & \multicolumn{2}{c}{AE} & \multicolumn{2}{c}{EDESC} & \multicolumn{2}{c}{K-means} & \multicolumn{2}{c}{Birch} \\ \hline **Metric** & EmbDi & SBERT & EmbDi & SBERT & EmbDi & SBERT & EmbDi & SBERT \\ Ground-truth clusters & 684 & 684 & 684 & 684 & 684 & 684 & 684 & 684 \\ Predicted clusters & 684 & 684 & 684 & 668 & 684 & 684 & 684 & 684 \\ Mean cluster size & 2.92 & 2.92 & 2.92 & 2.99 & 2.92 & 2.92 & 2.92 & 2.92 \\ Median cluster size & 3.0 & 3.0 & 3.0 & 2.0 & 3.0 & 2.0 & 3.0 & 2.0 \\ Unary clusters & 43 & 51 & 29 & 77 & 123 & 135 & 35 & 67 \\ Run time (S) & 1187.40 & 1173.69 & 36.39 & 1023.97 & 12.25 & 12.72 & 9.76 & 7.50 \\ ARI & **0.51** & **0.77** & **0.43** & 0.43 & 0.41 & 0.38 & 0.41 & **0.56** \\ ACC & **0.71** & **0.86** & **0.67** & **0.79** & 0.65 & **0.67** & 0.67 & 0.76 \\ \hline \hline \end{tabular} \end{table} Table 5. Entity Resolution: clustering results DC (AE, EDESC) vs SC (K-means and Birch) using EmbDi and SBERT on Music Brainz 2K data set. we can observe that all the example columns belong to different real-world domains but are still assigned to one cluster. SBERT with SDCN managed to separate those columns, which are lexically different, using schema-level evidence. (ii) **Value-count per column (the count of total values in a column, regardless of whether they are distinct) impacts the clustering performance**. In Birch, the columns with attributes and values (_image resolutions: 480x480 / 512x384..._) and (_digital zoom: without digital zoom..._) have appeared as unary clusters in the prediction and are false negative cases because there are no unary clusters in GT. EDESC and K-means produced 0 unary clusters. However, the median cluster sizes for EDESC and K-means are 216 and 264.5, respectively, which are higher than the ground truth median cluster size (183). We also observe common cases where a false negative in EmbDi (schema+instance-level) appears as a true positive in SBERT at schema-level. SBERT generated (schema+instance-level) encodings for several occurrences of the attribute (_humidity_) with 1.0 cosine similarity, but EmbDi considered the instance values, giving rise to a false negative with SDCN. Another false negative case results from ambiguity in value data. For attribute values (_max focal length: 90mm_) and (_focal length tele: 90_), the cosine similarity of the attributes (_max focal length_) and (_focal length tele_) is 0.66; however, EmbDi embeds (_90mm_) as a string and (_90_) as a numerical token. The combined cosine similarity of the two columns is 0.43, making them hard for SDCN to cluster together. Ambiguity in the units and value ranges caused many of the false negative cases. Columns _(humidity: 85% or less (no condensation))_ and _(humidity: 85%)_ belong to the same domain; however, the extra range information (_or less (no condensation)_) in the value makes the combined vector different enough for SDCN to place them in separate clusters. For example, the cosine similarity of the headers _(humidity)_ and _(humidity)_ is 1.0, while the cosine similarity of the EmbDi vectors of the two columns is 0.44. ## 8. Conclusions We have investigated the application of DC for _schema inference_, _entity resolution_ and _domain discovery_, tasks that cluster tables, rows and columns, respectively. 
Experiments have explored the use of DC algorithms on mainstream data management tasks, using a variety of embeddings for complete tables, columns, and rows. Results have been reported comparing two existing DC algorithms with two non-DC algorithms representing different clustering paradigms. The results show that DC algorithms consistently outperform non-DC clustering algorithms for data integration tasks, thus motivating their adoption to cluster tabular data sets or their components. We identified potential research opportunities by empirical evaluation, \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{DC} & \multicolumn{5}{c}{SC} \\ & \multicolumn{2}{c}{SDCN} & \multicolumn{2}{c}{EDESC} & \multicolumn{2}{c}{K-means} & \multicolumn{2}{c}{Birch} \\ \hline **Metric** & SBERT & FastText & SBERT & FastText & SBERT & FastText & SBERT & FastText \\ Ground-truth clusters & 56 & 56 & 56 & 56 & 56 & 56 & 56 & 56 \\ Predicted clusters & 42 & 56 & 56 & 56 & 56 & 56 & 56 & 44 \\ Mean cluster size & 453.23 & 339.92 & 339.92 & 339.92 & 339.92 & 339.92 & 339.92 & 433.63 \\ Median cluster size & 186.5 & 196.5 & 170.0 & 173.5 & 221.5 & 203.5 & 147.5 & 162 \\ Unary clusters & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Run time (S) & 18.68 & 18.35 & 350.20 & 406.60 & 2.41 & 1.36 & 1.67 & 1.22 \\ ARI & 0.74 & **0.71** & **0.78** & **0.70** & 0.73 & **0.71** & **0.76** & 0.58 \\ ACC & 0.69 & **0.68** & **0.74** & **0.66** & 0.69 & **0.66** & **0.70** & 0.62 \\ \hline \hline \end{tabular} \end{table} Table 7. Domain discovery: Schema-level clustering results DC (SDCN, EDESC) vs SC (K-means and Birch) using SBERT and FastText on Di2KG (Camera) data set. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{DC} & \multicolumn{5}{c}{SC} \\ & \multicolumn{2}{c}{SDCN} & \multicolumn{2}{c}{EDESC} & \multicolumn{2}{c}{K-means} & \multicolumn{2}{c}{Birch} \\ \hline **Metric** & SBERT & EmbDi & SBERT & EmbDi & SBERT & EmbDi & SBERT & EmbDi \\ Ground-truth clusters & 56 & 56 & 56 & 56 & 56 & 56 & 56 & 56 \\ Predicted clusters & 51 & 56 & 56 & 56 & 56 & 56 & 56 & 56 \\ Mean cluster size & 373.25 & 339.92 & 339.92 & 339.92 & 339.92 & 339.92 & 339.92 & 339.92 \\ Median cluster size & 174 & 222.5 & 170.5 & 216 & 283.5 & 264.5 & 195 & 7 \\ Unary clusters & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ Run time (S) & 532.90 & 12.21 & 321.57 & 11.55 & 4.82 & 3.15 & 15.34 & 2.50 \\ ARI & **0.86** & **0.13** & **0.81** & 0.11 & 0.51 & **0.12** & 0.78 & 0.03 \\ ACC & **0.80** & **0.17** & **0.78** & **0.15** & 0.56 & **0.15** & 0.74 & 0.14 \\ \hline \hline \end{tabular} \end{table} Table 8. Domain discovery: Schema+Instance-level clustering results DC (SDCN, EDESC) vs SC (K-means and Birch) using SBERT and EmbDi on Di2KG (Camera) data set. which include: (i) Exploring distance functions to effectively measure row-to-row, column-to-column and table-to-table similarity in the latent space for deep clustering. (ii) Efficient transformation of dense to sparse matrices before learning the representation. (iii) Exploring different techniques to minimize the effect of large numbers of clusters on deep clustering performance. As the number of clusters grows, the model's complexity increases, and it becomes more likely that some clusters will be very similar, leading to more challenging optimization problems. ###### Acknowledgements. 
The authors would like to acknowledge the assistance given by Research IT and the use of the Computational Shared Facility (CSF) at The University of Manchester.
2310.17444
Baryon density dependence of viscosities of the quark-gluon plasma at hadronization
The $\phi$ meson and $\Omega$ baryon provide unique probes of the properties of the quark-gluon plasma (QGP) at hadronization in relativistic heavy-ion collisions. Using the quark recombination model with the quark phase-space information parameterized in a viscous blastwave, we perform Bayesian inference of the shear and bulk viscosities of the QGP at hadronization with a temperature of $T\sim 160$ MeV by analyzing the $\phi$ and $\Omega$ data in Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 19.6-200 GeV and Pb+Pb collisions at $\sqrt{s_{\rm NN}}=$ 2.76 TeV, corresponding to a baryon chemical potential variation from $\mu_B\approx 0$ (at $\sqrt{s_{\rm NN}}= 2.76$ TeV) to $200$ MeV (at $\sqrt{s_{\rm NN}}= 19.6$ GeV). We find that the shear viscosity to enthalpy ratio $\eta T/(\epsilon +P)$ of the QGP at hadronization decreases as $\mu_B$ increases, with $\eta T/(\epsilon +P)\approx 0.18$ at $\mu_B=0$ and $\eta T/(\epsilon +P)\approx 0.08$ at $\mu_B=200$ MeV, while the corresponding specific bulk viscosity is essentially constant with $\zeta T/(\epsilon + P)=0.02\sim 0.04$ for $\mu_B<200$ MeV. Our results suggest that the QGP at hadronization ($T\sim 160$ MeV) with finite baryon density is closer to a perfect fluid than that with zero baryon density.
Zhidong Yang, Yifeng Sun, Lie-Wen Chen
2023-10-26T14:56:05Z
http://arxiv.org/abs/2310.17444v2
# Baryon density dependence of viscosities of the quark-gluon plasma at hadronization ###### Abstract The \(\phi\) meson and \(\Omega\) baryon provide unique probes of the properties of the quark-gluon plasma (QGP) at hadronization in relativistic heavy-ion collisions. Using the quark recombination model with the quark phase-space information parameterized in a viscous blastwave, we perform Bayesian inference of the shear and bulk viscosities of the QGP at hadronization with a temperature of \(T\sim 160\) MeV by analyzing the \(\phi\) and \(\Omega\) data in Au+Au collisions at \(\sqrt{s_{\rm NN}}=19.6\)-\(200\) GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV, corresponding to a baryon chemical potential variation from \(\mu_{B}\approx 0\) (at \(\sqrt{s_{\rm NN}}=2.76\) TeV) to \(200\) MeV (at \(\sqrt{s_{\rm NN}}=19.6\) GeV). We find that the shear viscosity to enthalpy ratio \(\eta T/(\epsilon+P)\) of the QGP at hadronization decreases as \(\mu_{B}\) increases, with \(\eta T/(\epsilon+P)\approx 0.18\) at \(\mu_{B}=0\) and \(\eta T/(\epsilon+P)\approx 0.08\) at \(\mu_{B}=200\) MeV, while the corresponding specific bulk viscosity is essentially constant with \(\zeta T/(\epsilon+P)=0.02\sim 0.04\) for \(\mu_{B}<200\) MeV. Our results suggest that the QGP at hadronization (\(T\sim 160\) MeV) with finite baryon density is closer to a perfect fluid than that with zero baryon density. _Introduction.--_ Lattice Quantum Chromodynamics (QCD) calculations predict a transition from ordinary hadronic matter to a new state of matter that consists of deconfined quarks and gluons, called the quark-gluon plasma (QGP) [1]. The QGP is believed to have existed in the early universe \(10^{-6}\) s after the Big Bang and can be created in relativistic heavy-ion collisions (HICs) at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). Exploring the QCD phase diagram as well as the transport properties of the QGP is one of the most fundamental problems in high-energy nuclear physics (see, e.g., Ref. [2]). It is known that at zero baryon chemical potential (\(\mu_{B}=0\)), the transition between the QGP and hadronic matter is a smooth crossover [3; 4]. Further calculations from lattice QCD suggest that the crossover line extends up to \(\mu_{B}\sim 250\)-\(300\) MeV [5]. However, it is not yet known if there exists a critical point where the crossover transforms into a first-order phase transition at higher baryon densities. In fact, the main goal of the beam energy scan (BES) program at RHIC is to investigate the phase diagram of QCD and locate the critical point [6]. One of the most significant discoveries made at RHIC and the LHC is that the QGP behaves like a near-perfect fluid, characterized by an exceptionally small shear viscosity to entropy density ratio \(\eta/s\), close to the universal lower bound \(1/4\pi\) based on the anti-de Sitter/conformal field theory (AdS/CFT) correspondence [7]. This is surprising since the QGP was initially expected to be a weakly interacting gas of quarks and gluons but turned out to be a strongly coupled fluid. This discovery has attracted intense interest in the transport properties of the QGP fluid, which are closely related to the underlying strong interactions between quarks and gluons. 
In recent years, theoretical calculations of the shear (\(\eta\)) and bulk (\(\zeta\)) viscosities have been extensively explored [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], usually considering \(\mu_{B}=0\) and focusing on the temperature dependence of the viscosities. The viscosities of the QGP have significant effects on the final observables of HICs [20; 21], allowing us to constrain their values with experimental data. Early studies employing viscous hydrodynamics usually assumed a constant \(\eta/s\) over the entire evolution and found \(\eta/s=0.08\sim 0.2\)[22; 23; 24; 25]. Recent studies using Bayesian statistical analysis with multi-stage models that integrate initial conditions, viscous hydrodynamics and hadronic transport have obtained constraints on the temperature dependence of the shear and bulk viscosities of the baryon-free QGP [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. Overall, most recent analyses have yielded consistent results and found \(\eta/s\approx 0.16\) at the pseudo-critical temperature \(T_{\rm pc}\approx 160\) MeV for the baryon-free QGP [33; 36]. Studies have also found that the viscosities of QCD matter depend on the baryon chemical potential [37; 38; 39; 40; 18; 19]. Hybrid models that incorporate hadronic transport and viscous hydrodynamics show that different values of the effective shear viscosity are required to describe the data at different collision energies by assuming a constant value of \(\eta/s\) for each collision energy [41; 42]. The baryon chemical potential dependence of \(\eta/s\) is explored in Refs. [43; 44; 45]. In general, due to large uncertainties in the initial conditions and equation of state, hydrodynamic simulations at finite \(\mu_{B}\) are under development [43; 45; 46; 47; 48] and a quantitative estimate of the QGP's viscosities at finite \(\mu_{B}\) is still challenging. In this work, we present a novel approach to constrain the shear and bulk viscosities of the QGP at finite \(\mu_{B}\) with a temperature of \(T\sim 160\) MeV. In a recent work [49], we proposed using the \(\phi\) meson and \(\Omega\) baryon to explore the viscosities of the baryon-free QGP at hadronization. Here we generalize the approach to collisions at lower energies and study the viscosities of the QGP at finite \(\mu_{B}\). In our approach, hadrons are produced through quark recombination [50; 51; 52; 53; 54; 55] with the phase-space distribution of quarks at hadronization parameterized in a viscous blastwave [56; 57; 58; 20; 59] which includes non-equilibrium deformations of the thermal distributions due to shear and bulk stresses. The viscous effects for the QGP at hadronization are then imported into the \(\phi\) and \(\Omega\) through the recombination process. Since the \(\phi\) and \(\Omega\) have relatively small hadronic interaction cross sections [60], they carry direct information about the QGP at hadronization with negligible hadronic effects [61; 62; 63; 64; 65; 66; 67; 68]. We find that the QGP at \(T\sim 160\) MeV with finite baryon density is closer to a perfect fluid than that with zero baryon density. It should be pointed out that blastwave models obtain their parameters through fits to data and are independent of initial conditions and the equation of state, thus providing a complementary approach to hydrodynamic simulations [57; 58]. _Theoretical model.--_ Quark recombination or coalescence models were initially proposed to explain the baryon-over-meson enhancement and valence quark number scaling observed in RHIC Au+Au collisions [50; 51; 52; 53]. 
In a recent work [49], we introduced viscous corrections into quark recombination; here we give a brief overview of the formalism. In the following, \({\bf r}\) is the 3-space position, \({\bf p}\) is the 3-momentum and \(m\) is the particle mass. The 4-momentum of a hadron is denoted as \(p^{\mu}=(E,\ {\bf p})\) with \(E=\sqrt{m^{2}+p^{2}}\). Following Refs. [52; 53; 54], the momentum distribution of mesons is given by \[E\frac{dN_{M}}{d^{3}{\bf p}} = C_{M}\int_{\Sigma}\frac{p^{\mu}d\sigma_{\mu}}{(2\pi)^{3}} \int_{0}^{1}dx_{1}dx_{2}\Phi_{M}(x_{1},x_{2}) \tag{1}\] \[\times f_{a}({\bf r},x_{1}{\bf p})f_{b}({\bf r},x_{2}{\bf p}).\] where \(C_{M}\) is the spin degeneracy factor of a given meson species, \(\Sigma\) is the hypersurface of hadronization, \(\Phi_{M}\) is the effective wave function squared of mesons, \(x_{1,2}\) are light cone coordinates defined as \({\bf p}_{1,2}=x_{1,2}{\bf p}\), and \(f_{a,b}\) are the parton phase-space distributions. The \(\Phi_{M}\) is parameterized as a Gaussian, \[\Phi_{M}=\frac{2}{\sqrt{2\pi}\sigma_{M}}\exp(-\frac{(x_{1}-x_{a})^{2}+(x_{2}-x_{b})^{2}}{\sigma_{M}^{2}})\delta(x_{1}+x_{2}-1) \tag{2}\] where \(\sigma_{M}\) is the variance, \(x_{a,b}=m_{1,2}/(m_{1}+m_{2})\) are the peak values, and \(m_{1,2}\) are the masses of the constituent partons. A similar expression can be derived for baryons. The quark phase-space distribution is parameterized in a viscous blastwave [57; 58; 59], based on the Retiere and Lisa blastwave [69]. The quark distribution is given by \[f(r,p)=f_{0}(r,p)+\delta f_{\rm shear}(r,p)+\delta f_{\rm bulk}(r,p) \tag{3}\] where \(f_{0}\) is the equilibrium Bose/Fermi distribution depending on the flow field \(u^{\mu}\), particle momentum \(p^{\mu}\), the local temperature \(T\) and chemical potentials \(\mu_{i}=b_{i}\mu_{B}+s_{i}\mu_{S}+q_{i}\mu_{Q}\) with baryon number \(b_{i}\), strangeness \(s_{i}\) and electric charge \(q_{i}\) (we assume \(\mu_{Q}=0\) in this work), \[f_{0}(r,p)=\frac{1}{e^{(u^{\mu}p_{\mu}-\mu_{i})/T}\mp 1}. \tag{4}\] \(\delta f_{\rm shear}\) and \(\delta f_{\rm bulk}\) denote corrections from the shear and bulk viscosities, respectively. For the shear viscous corrections, we use Grad's method [70; 71], \[\delta f_{\rm shear}=\frac{1}{2T^{2}}\frac{p_{\mu}p_{\nu}}{\epsilon+P}\pi^{\mu\nu}f_{0}(1\pm f_{0}) \tag{5}\] where \(\epsilon\) is the energy density, \(P\) is the pressure, \(\pi^{\mu\nu}\) is the shear stress tensor and \(+(-)\) is for bosons (fermions). In the Navier-Stokes approximation \(\pi^{\mu\nu}=2\eta\sigma^{\mu\nu}\), where \(\eta\) is the shear viscosity and \(\sigma^{\mu\nu}\) is the shear gradient tensor defined as \(\sigma^{\mu\nu}=\frac{1}{2}\left(\nabla^{\mu}u^{\nu}+\nabla^{\nu}u^{\mu}\right)-\frac{1}{3}\Delta^{\mu\nu}\nabla_{\lambda}u^{\lambda}\) with flow field \(u^{\mu}\), \(\nabla^{\mu}=\Delta^{\mu\nu}\partial_{\nu}\) and \(\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}\). For the bulk viscous corrections, we use the 14-moment approximation [72; 73], \[\delta f_{\rm bulk}=-f_{0}(1\pm f_{0})\Pi\frac{\tau_{\Pi}}{\zeta}\left[\frac{1}{3}\frac{m^{2}}{T}\frac{1}{p^{\mu}u_{\mu}}+\frac{p^{\mu}u_{\mu}}{T}\left(c_{s}^{2}-\frac{1}{3}\right)\right] \tag{6}\] where \(\zeta\) is the bulk viscosity, \(\Pi\) is the bulk viscous pressure and \(\tau_{\Pi}\) is the bulk relaxation time. In the first-order approximation, one has \(\Pi=-\zeta\partial_{\mu}u^{\mu}\). The expression for \(\frac{\tau_{\Pi}}{\zeta}\) is given in [72]. 
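As a numerical illustration of Eqs. (2), (4) and (5), the following is a minimal sketch (ours, with placeholder values, normalizations and units) of the equilibrium distribution, Grad's shear correction evaluated in the local rest frame where \(u^{\mu}=(1,0,0,0)\), and the Gaussian meson wave function with the delta constraint resolved as \(x_{2}=1-x_{1}\):

```python
# Minimal sketch of Eqs. (2), (4), (5); all numbers below are illustrative.
import numpy as np

def f0(E, T, mu, boson=True):
    # Equilibrium Bose (-) / Fermi (+) distribution, Eq. (4), in the local
    # rest frame where u.p = E.
    return 1.0 / (np.exp((E - mu) / T) - (1.0 if boson else -1.0))

def delta_f_shear(p, pi, T, enthalpy, f_eq, boson=True):
    # Grad's shear correction, Eq. (5): p is the 4-vector p^mu and pi the
    # shear stress tensor pi^{mu nu}, both in the same frame.
    g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{mu nu}
    p_lo = g @ p                            # lower the index: p_mu
    contraction = p_lo @ pi @ p_lo          # p_mu p_nu pi^{mu nu}
    sign = 1.0 if boson else -1.0
    return contraction / (2 * T**2 * enthalpy) * f_eq * (1 + sign * f_eq)

def phi_M(x1, sigma, xa, xb):
    # Eq. (2) with x2 = 1 - x1 enforced by the delta function.
    x2 = 1.0 - x1
    norm = 2.0 / (np.sqrt(2 * np.pi) * sigma)
    return norm * np.exp(-((x1 - xa)**2 + (x2 - xb)**2) / sigma**2)

# Toy strange quark at T = 160 MeV, in GeV units; enthalpy and pi are placeholders.
T, m_s, mu_s = 0.160, 0.500, 0.0
p3 = np.array([0.5, 0.0, 0.0])
E = np.sqrt(m_s**2 + p3 @ p3)
p4 = np.array([E, *p3])
pi = np.zeros((4, 4)); pi[1, 1], pi[2, 2] = 1e-3, -1e-3  # traceless toy pi^{mu nu}
feq = f0(E, T, mu_s, boson=False)
print(feq, delta_f_shear(p4, pi, T, enthalpy=0.5, f_eq=feq, boson=False))
```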
We note that the fluidity of a system at finite chemical potential should be evaluated by the shear viscosity times temperature over the enthalpy density, \(\eta T/(\epsilon+P)\)[74], where \(\epsilon+P=Ts+\mu_{B}n_{B}\). When \(\mu_{B}=0\), this quantity reduces to \(\eta/s\). Let us now describe the blastwave parameterization for the flow field \(u^{\mu}\). Here \(R_{x,y}\) are the semi-axes of the fireball at freeze-out, \(\rho=\sqrt{x^{2}/R_{x}^{2}+y^{2}/R_{y}^{2}}\) is the reduced radius, and \(\eta_{s}=\frac{1}{2}\ln\frac{t+z}{t-z}\) is the space-time rapidity. The hypersurface is assumed to be at constant \(\tau=\sqrt{t^{2}-z^{2}}\). The flow field is parameterized as \[u^{\mu} = (\cosh\eta_{s}\cosh\eta_{T},\sinh\eta_{T}\cos\phi_{b}, \tag{7}\] \[\sinh\eta_{T}\sin\phi_{b},\sinh\eta_{s}\cosh\eta_{T})\] where \(\eta_{T}\) is the transverse flow rapidity and \(\phi_{b}\) is the azimuthal angle of \(u^{\mu}\) in the transverse plane. \(\eta_{T}\) is given by the transverse velocity \(v_{T}=\tanh\eta_{T}\) with \[v_{T}=\rho^{n}(\alpha_{0}+\alpha_{2}\cos 2\phi_{b}) \tag{8}\] where \(\alpha_{0}\) is the average surface velocity, \(\alpha_{2}\) is an elliptic deformation of the flow field and \(n\) is a power term. In this work, we use a linear expression and set \(n=1\). The transverse flow vector is chosen to be perpendicular to the elliptic surface at \(\rho=1\). The ratio \(R_{y}/R_{x}\) significantly influences the elliptic flow, so we choose \(R_{y}/R_{x}\) as a fit parameter and constrain \(R_{x}\), \(R_{y}\) and \(\tau\) by adding a simple geometric estimate \(R_{x}\approx(R_{0}-b/2)+0.65\tau(\alpha_{0}+\alpha_{2})\), where \(R_{0}\) is the radius of the colliding nucleus and \(b\) is the impact parameter. The values of \(b\) used for each centrality bin are based on Glauber Monte Carlo simulations for the related experiments [75, 76]. It is worth mentioning that the viscous blastwave offers a simplified representation of the flow field and freeze-out hypersurface, serving as an approximate snapshot of a viscous hydrodynamic system at a fixed time [56, 57, 58, 59, 20]. The dissipative effects before freeze-out are incorporated into the parameterized flow field \(u^{\mu}\). Consequently, the viscous blastwave carries information on the viscosities of the fluid at a specific time, e.g., the QGP at hadronization in our present work. _Experimental data and fit parameters.--_ We utilize the transverse-momentum (\(p_{T}\)) spectra and elliptic flows \(v_{2}\) of \(\phi\) and \(\Omega\) as our observables. The data we used are from the STAR collaboration, covering Au+Au collisions at 19.6, 39, 54.4, 62.4 and 200 GeV [77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87], and the ALICE collaboration for Pb+Pb collisions at \(\sqrt{s_{NN}}\)=2.76 TeV [88, 89, 90] around mid-rapidity, as listed in Tab. 1. Due to data availability, the centrality bins for spectra are slightly different from those of \(v_{2}\). For Au+Au at 19.6, 39, 54.4 and 62.4 GeV, we use data of \(\Omega^{-}\). For Au+Au at 200 GeV and Pb+Pb at 2.76 TeV, we use data of \(\Omega^{-}+\bar{\Omega}^{+}\). For Au+Au collisions at \(\sqrt{s_{NN}}\) = 7.7, 11.5 and 27 GeV, the current data of \(\phi\) and \(\Omega\) have very few points and large uncertainties, so we will not use them in our analysis. As for the spectra of \(\phi\) and \(\Omega\) at \(\sqrt{s_{NN}}\) = 54.4 GeV, there are currently no available data and we use the spectra from \(\sqrt{s_{NN}}\) = 62.4 GeV instead. 
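As an illustration of the parameterization in Eqs. (7) and (8), the following minimal sketch (with illustrative parameter values of our choosing) builds \(u^{\mu}\) at a point of the freeze-out surface and checks the normalization \(u^{\mu}u_{\mu}=1\):

```python
# Minimal sketch of the blastwave flow field, Eqs. (7)-(8); values illustrative.
import numpy as np

def flow_velocity(x, y, eta_s, Rx, Ry, alpha0, alpha2, n=1):
    rho = np.sqrt((x / Rx)**2 + (y / Ry)**2)      # reduced radius
    # Transverse flow direction perpendicular to the ellipse rho = 1,
    # i.e. along the gradient of x^2/Rx^2 + y^2/Ry^2.
    phi_b = np.arctan2(y / Ry**2, x / Rx**2)
    v_T = rho**n * (alpha0 + alpha2 * np.cos(2 * phi_b))   # Eq. (8)
    eta_T = np.arctanh(v_T)                                 # v_T = tanh(eta_T)
    return np.array([                                       # Eq. (7)
        np.cosh(eta_s) * np.cosh(eta_T),
        np.sinh(eta_T) * np.cos(phi_b),
        np.sinh(eta_T) * np.sin(phi_b),
        np.sinh(eta_s) * np.cosh(eta_T),
    ])

u = flow_velocity(x=3.0, y=2.0, eta_s=0.1, Rx=8.0, Ry=10.0, alpha0=0.5, alpha2=0.03)
print(u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2)  # ~ 1.0 with the (+,-,-,-) metric
```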
For each collision energy, we perform a combined analysis of the available centrality bins. The fitted \(p_{T}\) ranges for \(\phi\) and \(\Omega\) are given in Tab. 2. We have checked that the correction \(\delta f(r,p)\) is small for both \(\phi\) and \(\Omega\) with the above fit ranges, i.e., less than 20% of \(f_{0}\) for the majority of points (very few points going up to 40% of \(f_{0}\)), which is much smaller than the commonly adopted upper bound \(\sim 1\)[20, 30], ensuring the applicability of the viscous corrections. The temperature \(T\) at hadronization is taken from Ref. [91], assuming its value is close to the chemical freeze-out temperature \(T_{ch}\). The baryon chemical potentials \(\mu_{B}\) are energy-dependent and their values are also taken from Ref. [91]. Besides, the strangeness chemical potentials \(\mu_{S}\) are obtained by extrapolating the results from Ref. [81]. The values of \((T,\mu_{B},\mu_{S})\) used in our calculation are listed in Tab. 2. To fit the yields of \(\phi\) and \(\Omega\) and to determine the value of the freeze-out time \(\tau\), we introduce a fugacity factor \(\gamma_{s,\bar{s}}\) and assume \(\gamma_{s}=\gamma_{\bar{s}}\). We set \(\gamma_{s}=0.65\) for 19.6-62.4 GeV and \(\gamma_{s}=0.8\) for 200, 2760 GeV. Regarding other constants, we specify the hadron wave function variances as \(\sigma_{M}=0.3\) and \(\sigma_{B}=0.1\). We use the sound speed squared \(c_{s}^{2}=0.15\) (see Eq. (6)) for the QGP at hadronization [92, 93], quark mass \(m_{s}=500\) MeV, and spin degeneracy factors \(C_{M}=3\) for \(\phi\) and \(C_{B}=4\) for \(\Omega^{-}\). The parameters left in our model are (\(\tau\), \(\alpha_{0}\), \(\alpha_{2}\), \(R_{y}/R_{x}\), \(\eta T/(\epsilon+P)\), \(\zeta T/(\epsilon+P)\)), which can be determined by fitting experimental data. For each centrality bin at each collision energy, the fluid has unique values for (\(\tau\), \(\alpha_{0}\), \(\alpha_{2}\), \(R_{y}/R_{x}\)) and shared values for (\(\eta T/(\epsilon+P)\), \(\zeta T/(\epsilon+P)\)), and thus we have 3\(\times\)4+2=14 parameters for \(\sqrt{s_{NN}}\)= 19.6-62.4 GeV, 2\(\times\)4+2=10 parameters for \(\sqrt{s_{NN}}\)= 200 GeV and 4\(\times\)4+2=18 parameters for \(\sqrt{s_{NN}}\)= 2.76 TeV. _Bayesian method.--_ To determine the above parameters, we employ the Bayesian analysis package from the Models and Data Analysis Initiative (MADAI) project [94, 27]. The MADAI package includes a Gaussian process emulator and a Bayesian analysis tool. Further information is available in Refs. [27, 30]. We adopt a uniform prior distribution for all model parameters. For instance, for the 10-40% centrality bin at \(\sqrt{s_{NN}}\)= 19.6 GeV, we set prior ranges 5.8-7.8 fm/\(c\) for \(\tau\), 0.44-0.6\(c\) for \(\alpha_{0}\), 0.012-0.042\(c\) for \(\alpha_{2}\), 1.12-1.32 for \(R_{y}/R_{x}\) and obtain posterior values (\(\tau,\alpha_{0},\alpha_{2},R_{y}/R_{x}\))=(6.8 fm/\(c\), 0.53\(c\), 0.03\(c\), 1.24). Besides, we set prior ranges 0-0.2 for \(\eta T/(\epsilon+P)\), 0-0.12 for \(\zeta T/(\epsilon+P)\) at 19.6 GeV and obtain \(\eta T/(\epsilon+P)=0.076\) and \(\zeta T/(\epsilon+P)=0.035\). The same procedure is applied to other centralities and energies. After setting prior ranges for each parameter, we generate a set of training points within the parameter space and calculate all fitted observables at each training point. The MADAI package then builds a Gaussian process emulator, which can estimate the observables for random parameter values. 
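The emulator-based inference loop just described (including the MCMC likelihood step discussed next) can be sketched schematically as follows; this is an illustration built from generic libraries with toy stand-ins for the model and data, not the actual MADAI implementation:

```python
# Schematic emulator + MCMC workflow; model, data and all values are toy stand-ins.
import numpy as np
import emcee
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
ndim = 2                                        # e.g. eta*T/(e+P), zeta*T/(e+P)
X_train = rng.uniform(0.0, 0.2, (500, ndim))    # N = 500 design points
def model(theta):                               # stand-in for the full calculation
    return np.array([np.sin(6 * theta[..., 0]), np.cos(6 * theta[..., 1])]).T
Y_train = model(X_train)

# One Gaussian process emulator per observable, trained on the design points.
gps = [GaussianProcessRegressor(ConstantKernel() * RBF(0.1)).fit(X_train, Y_train[:, i])
       for i in range(Y_train.shape[1])]

y_data, y_err = model(np.array([0.12, 0.03])), 0.05   # synthetic "data"

def log_prob(theta):
    if np.any(theta < 0.0) or np.any(theta > 0.2):    # uniform prior box
        return -np.inf
    pred = np.array([gp.predict(theta[None])[0] for gp in gps])
    return -0.5 * np.sum(((pred - y_data) / y_err) ** 2)

nwalkers = 16
p0 = rng.uniform(0.0, 0.2, (nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))  # posterior mean
```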
Finally, a Markov chain Monte Carlo (MCMC) provides a likelihood analysis and gives the maximum-likelihood or best-fit parameters. Here for \begin{table} \begin{tabular}{c|c c c c c c} \hline \(\sqrt{s_{NN}}\)(GeV) & 19.6 & 39 & 54.4 & 62.4 & 200 & 2760 \\ \hline \(T\) (MeV) & 155 & 157.5 & 158.5 & 158.5 & 160 & 160 \\ \hline \(\mu_{B}\) (MeV) & 197 & 107 & 79 & 69 & 22 & 0 \\ \hline \(\mu_{S}\) (MeV) & 46 & 29 & 21 & 18 & 8 & 0 \\ \hline \(p_{T}\)-range of \(\phi\) (GeV/\(c\)) & \(<\) 2.3 & \(<\) 2.4 & \(<\) 2.6 & \(<\) 2.7 & \(<\) 2.6 & \(<\) 2.8 \\ \hline \(p_{T}\)-range of \(\Omega\) (GeV/\(c\)) & \(<\) 3.2 & \(<\) 3.0 & \(<\) 3.9 & \(<\) 3.8 & \(<\) 4.0 & \(<\) 3.8 \\ \hline \end{tabular} \end{table} Table 2: Values of temperature (\(T\)), baryon chemical potential (\(\mu_{B}\)), strangeness chemical potential (\(\mu_{S}\)) and fit ranges for \(\phi\) and \(\Omega\) observables used in quark recombination for different collision energies. \begin{table} \begin{tabular}{c|c|c} \hline \(\sqrt{s_{NN}}\)(GeV) & \multicolumn{2}{c}{Centrality} \\ \hline 19.6,39 & \(v_{2}(p_{T})\) & 0-10\%, 10-40\%, 40-80\% [77, 78, 79] \\ & spectra & 0-10\%, 20-30\%(20-40\%), 40-60\% [80, 81] \\ \hline 54.4,62.4 & \(v_{2}(p_{T})\) & 0-10\%, 10-40\%, 40-80\% [77, 82] \\ & spectra & 0-20\%, 20-40\%, 40-60\% [83, 84] \\ \hline 200 & \(v_{2}(p_{T})\) & 0-30\%, 30-80\% [85] \\ & spectra & 10-20\%(0-5\%), 40-50\%(40-60\%) [86, 87] \\ \hline 2760 & \(v_{2}(p_{T})\) & 10-20\%, 20-30\%, 30-40\%, 40-50\% [88, 89] \\ & spectra & 0-10\%, 20-40\%, 40-60\% [90] \\ \hline \end{tabular} \end{table} Table 1: Experimental data of \(\phi\) and \(\Omega\) used in our analysis. Values in parentheses are centralities of \(\Omega\) when they are different from \(\phi\). each collision energy we use N=500 training points. To validate the proper functioning of the Bayesian analysis, we perform a closure test and confirm that the Bayesian framework correctly reproduces model parameters within reasonable uncertainties. The likelihood analysis has used \(\mathrm{N_{\star}}=2\times 10^{6}\) predicted points to search for the best-fit parameters, which is sufficient for the MCMC to converge. _Results and discussions.--_ Using the data and parameters discussed earlier, we perform a model-to-data comparison with the MADAI package and obtain the best-fit parameters, which are defined as the mean values given by the maximum likelihood analysis. With the best-fit parameters provided by the Bayesian analysis, we can calculate the transverse momentum spectra and elliptic flows of \(\phi\) and \(\Omega\) and compare with experimental data. Fig. 1 shows our results for the \(p_{T}\) spectra and \(v_{2}\) of \(\phi\) and \(\Omega\) in selected centralities at different collision energies. As seen from Fig. 1, our calculations describe the data rather well. Fig. 2 shows our Bayesian inference of the \(\eta T/(\epsilon+P)\) (a) and \(\zeta T/(\epsilon+P)\) (b) at the 68.3% confidence level (C.L.) for the QGP at hadronization, or \(T\sim 160\) MeV, with different \(\mu_{B}\). The range of baryon chemical potentials is between \(\mu_{B}=0\) and \(\mu_{B}=200\) MeV, which corresponds to collision energies varying from 2.76 TeV (far left) to 19.6 GeV (far right) [91]. For the purpose of comparison, in Fig. 
2 we also include results from other approaches, i.e., the Chapman-Enskog theory (Chap-Ensk) [37] for \(\eta\), and the hadron resonance gas model (HRG) [38] and a holographic model (Holo) [19] for both \(\eta\) and \(\zeta\). In these approaches, \(\eta\) and \(\zeta\) are calculated as functions of temperature at different \(\mu_{B}\). Here we take their values at T=160 MeV for \(\mu_{B}=0\) and 300 MeV (T=150 MeV with \(\mu_{B}=0\) and 500 MeV for Chap-Ensk). Note that the results from Chap-Ensk and HRG are for hadronic matter. One sees from Fig. 2(a) that the shear viscosity has a significant dependency on the baryon chemical potential and decreases with \(\mu_{B}\), which suggests that the QGP with finite baryon chemical potential is closer to a perfect fluid than that with zero baryon chemical potential. We note that this trend of the shear viscosity, i.e., that \(\eta T/(\epsilon+P)\) decreases as \(\mu_{B}\) increases, aligns with observations in Chap-Ensk [37], HRG [38; 40] and the holographic model [19], but seems to be different from the findings in hybrid models [41; 42; 45], where a constant \(\eta/s\)[41; 42] or temperature-independent \(\eta T/(\epsilon+P)\)[45] was assumed for each collision energy and thus the effects of varying \(T\) during the dynamical evolution are neglected. As found in Refs. [37; 38], the behavior of \(\eta T/(\epsilon+P)\) is primarily attributed to the rapid increase in the entropy density \(s\) and the very slow increase in \(\eta\) with \(\mu_{B}\). We would like to emphasize that at \(\mu_{B}=0\), the most recent Bayesian statistical analyses obtained \(\eta/s\approx 0.16\) at 160 MeV [33; 36], which is consistent with our result but at variance with [41]. On the other hand, Fig. 2(b) indicates rather small Figure 1: Transverse-momentum spectra and elliptic flows \(v_{2}\) of \(\phi\) mesons and \(\Omega\) baryons (selected centrality bins). For Au+Au at 19.6-62.4 GeV, we use data of \(\Omega^{-}\). For Au+Au at 200 GeV and Pb+Pb at 2.76 TeV, we use data of \(\Omega^{-}+\bar{\Omega}^{+}\). Solid lines are recombination calculations using the best-fit parameters. The data from STAR [77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87] and ALICE [88; 89; 90] are included for comparison. Figure 2: Baryon chemical potential dependence of the shear (a) and bulk (b) viscosities for QGP/hadronic matter at hadronization (see text for details). values for \(\zeta\) at hadronization for \(\mu_{B}<200\) MeV. In particular, we find \(\zeta T/(\epsilon+P)\) is essentially constant within uncertainty, with \(\zeta T/(\epsilon+P)=0.02\sim 0.04\) for \(\mu_{B}<200\) MeV. Interestingly, our results are quantitatively consistent with the HRG and the holographic model. However, due to the limited accuracy of our results, we cannot draw any conclusion about the trend of the \(\mu_{B}\) dependence of \(\zeta\). We note that the HRG gives an increase of \(\zeta T/(\epsilon+P)\) with \(\mu_{B}\), while the holographic model predicts the opposite behavior. _Conclusions._-- We have performed Bayesian inference of the shear and bulk viscosities of the QGP at hadronization (with \(T\sim 160\) MeV) by analyzing the \(\phi\) and \(\Omega\) data in Au+Au collisions at \(\sqrt{s_{\rm NN}}=19.6\)-\(200\) GeV and Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV, based on the quark recombination model with the quark phase-space information parameterized in a viscous blastwave. 
We find that the shear viscosity to enthalpy ratio \(\eta T/(\epsilon+P)\) of the QGP at hadronization decreases as \(\mu_{B}\) increases, while the corresponding specific bulk viscosity is essentially constant for \(\mu_{B}<200\) MeV, suggesting that the QGP at \(T\sim 160\) MeV with finite baryon density is closer to a perfect fluid than that with zero baryon density. Our work provides a valuable reference for future theoretical calculations, as well as for the parameterization of viscosities in hydrodynamic simulations at finite \(\mu_{B}\). Our work is also useful for exploring the dynamics of binary neutron star mergers, given that quark matter is expected to exist in the cores of massive neutron stars and a transition from nuclear to quark matter may happen [95]. Furthermore, dramatic changes in transport properties may be considered as a signal of a phase transition, and a thorough understanding of the baryon density dependence of the viscosities of QCD matter is of great significance for the exploration of the phase transition [40]. Our study represents a step forward in achieving this objective, especially if high-quality \(\phi\) and \(\Omega\) data in Au+Au collisions at \(\sqrt{s_{NN}}=7.7\) and \(11.5\) GeV or even lower energies were provided at RHIC in the future. _Acknowledgements._-- The authors would like to thank Rainer J. Fries, Defu Hou, Jie Pu and Kaijia Sun for useful discussions. This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 12205182, 12235010, 11625521 and 12375124, and the National SKA Program of China No. 2020SKA0120300. Y.S. acknowledges the sponsorship of the Yangyang Development Fund.
2301.01567
Smooth Calabi-Yau structures and the noncommutative Legendre transform
We elucidate the relation between smooth Calabi-Yau structures and pre-Calabi-Yau structures. We show that, from a smooth Calabi-Yau structure on an $A_\infty$-category $A$, one can produce a pre-Calabi-Yau structure on $A$; as defined in our previous work, this is a shifted noncommutative version of a polyvector field satisfying a Maurer-Cartan equation. We explain how this relation is an analogue of the Legendre transform, and how it admits a lift to a weak equivalence of two natural simplicial sets, one whose points are non-degenerate negative cyclic chains, and another whose points are nondegenerate pre-CY structures.
Maxim Kontsevich, Alex Takeda, Yiannis Vlassopoulos
2023-01-04T12:22:20Z
http://arxiv.org/abs/2301.01567v2
# Smooth Calabi-Yau structures and the noncommutative Legendre transform ###### Abstract. We elucidate the relation between smooth Calabi-Yau structures and pre-Calabi-Yau structures. We show that, from a smooth Calabi-Yau structure on an \(A_{\infty}\)-category \(A\), one can produce a pre-Calabi-Yau structure on \(A\); as defined in our previous work, this is a shifted noncommutative version of an integrable polyvector field. We explain how this relation is an analogue of the Legendre transform, and how it defines a one-to-one mapping, in a certain homological sense. For concreteness, we apply this formalism to chains on based loop spaces of (possibly non-simply connected) Poincare duality spaces, and fully calculate the case of the circle. ###### Contents * 1 Introduction * 1.1 Notation and conventions * 2 Smooth Calabi-Yau structures * 2.1 Chain-level nondegeneracy * 2.2 Examples * 2.2.1 Chains on the based loop space of orientable manifolds * 2.2.2 Path dg category * 2.2.3 Inclusion of constant loops * 2.2.4 The circle * 3 Noncommutative Legendre transform * 3.1 Commutative and odd Legendre transform * 3.1.1 Legendre transform on vector bundles * 3.1.2 Odd Legendre transform * 3.1.3 Inverting the Legendre transform * 3.1.4 Roadmap to the noncommutative version * 3.2 Tube quivers * 3.3 Differentials on tube quivers * 3.3.1 The chain boundary differential * 3.3.2 The rotation differential * 3.4 Cyclicity and negative cyclic homology * 3.4.1 Cyclic complex of tube quivers * 3.4.2 Action on negative cyclic homology * 3.4.3 Rotation invariant tube quivers * 3.5 Defining the Legendre transform * 3.5.1 The fiberwise derivative * 3.5.2 The energy function and the Legendre transform * 4 ## 1. Introduction This paper is a continuation of our previous work [10]. There, we described a type of algebraic structure that we called a _pre-Calabi-Yau_ structure on an \(A_{\infty}\)-algebra/category \(A\); this is a generalization of both proper and smooth Calabi-Yau structures. In that paper we described how, using the formalism of ribbon quivers (that is, ribbon graphs with acyclic orientation), one can use the pre-CY structure maps to describe the action of a certain PROP on the morphism spaces of \(A\), and on its Hochschild chains \(C_{*}(A)\). The relevant dg PROP has spaces given by chains on moduli spaces of open-closed surfaces with framed boundaries, with at least one input and one output. We can paraphrase this result in the language of the cobordism description: a pre-CY structure on \(A\) gives a partially defined fully extended 2d oriented TQFT with values in (an \(\infty\)-categorical version) of 2-category of algebras and bimodules, assigning \(A\) to the point and \(HH_{*}(A)\) to the framed circle. This theory is partially defined in the sense that it does not assign a value to every cobordism, but rather only to those cobordisms that can be generated by handles of index one only; such a cobordism has at least one input and one output. If instead one admits cobordisms generated by handles of indices one and two, one can cap the outputs of the cobordism, and obtain all cobordisms with at least one input. That type of TQFT is known to be described by proper Calabi-Yau structures; see Lurie's description [12] of Costello's results in [14, 15]. In other words, requiring the finiteness condition of \(A\) being proper (that is, \(H^{*}(A)\) being finite-rank) allows one to evaluate caps in the TQFT, which get sent to the trace \(HH_{*}(A)\to\Bbbk\) defined by the proper CY structure. 
There is another finiteness condition that is dual to properness, which is homological smoothness: \(A\) is homologically smooth if the diagonal bimodule \(A_{\Delta}\) is a perfect object in the category of \((A,A)\)-bimodules. In the work cited above, Lurie mentions in passing that, for abstract reasons, there should be a dual story to Costello's description of this TQFT: smooth Calabi-Yau structures on \(A\) should give a dual type of partially-defined TQFT, which now has a _cup_; this gets sent to a cotrace \(\Bbbk\to HH_{*}(A)\). These TQFTs are also, in practice, described by algebra structures over certain PROPs, given by chains on spaces of surfaces with non-empty incoming/outgoing boundary. The homotopy theory of these objects has been studied in detail elsewhere; see [14] for a recent description of these PROPs and their relation to Deligne-Mumford compactifications. As a corollary to these statements about cobordisms and TQFTs, it should be the case that a smooth Calabi-Yau structure defines a pre-CY structure. The main purpose of this work is to make this result as explicit as possible: we demonstrate that there is an algorithmic procedure, using the formalism of ribbon quivers we defined previously, which starts from a smooth Calabi-Yau structure on an \(A_{\infty}\)-category \(A\) and produces the structure maps of a pre-CY structure on the same \(A\). The point of having such an explicit description is that many categories/algebras of interest in topology and geometry have such smooth CY structures [1, 1, 2, 3, 4]. Using the description in this paper allows one to apply the results of [10] to these categories, and compute the TQFT operations that the resulting pre-CY structure gives. Let us recall the definitions of these objects. A smooth CY structure of dimension \(d\) on \(A\) is a negative cyclic chain \(\omega\in CC_{*}^{-}(A)\) which satisfies a nondegeneracy condition: its image in \(HH_{*}(A)\) induces a quasi-isomorphism \(A^{!}[d]\to A\) between the inverse dualizing bimodule \(A^{!}\) and a shift of the diagonal bimodule \(A_{\Delta}\). A variant of this definition first appeared in the work of Ginzburg [14], without requiring the lift to negative cyclic homology; often these are referred to as 'Ginzburg CY structures' or 'weak smooth CY structures' in the literature. Requiring the negative cyclic lift was first proposed, by the first and third named authors of this article, back in 2013, motivated exactly by this TQFT perspective: in order to 'close up inputs' with a cup, the cotrace \(\Bbbk\to HH_{*}(A)\) associated to that cup should factor through the (homotopy) fixed points of the \(S^{1}\)-action. For more recent precise definitions of smooth CY structures in the dg and \(A_{\infty}\)-case, see [1, 1, 13]. We will need an even more explicit description; we explain a chain-level version of the smooth CY nondegeneracy condition in Section 2. On the other side, the definition of pre-CY structure is already given 'at cochain-level': a pre-CY structure of dimension \(d\) on an \(A_{\infty}\)-category \((A,\mu)\) is the choice of an element \[m=\mu+m_{(2)}+m_{(3)}+\cdots\in C_{[d]}^{*}(A)\] extending the \(A_{\infty}\)-structure maps, and satisfying a Maurer-Cartan equation \([m,m]=0\). 
We refer the reader to [10, Sec.3] for the definition of the space \(C_{[d]}^{*}(A)\); let us just mention its noncommutative geometry interpretation: if the space of Hochschild cochains \(C^{*}(A)\) is seen as the space of vector fields on some noncommutative space associated to \(A\), then the space \(C_{[d]}^{*}(A)\) is the space of polyvector fields (up to some shifts depending on \(d\)), carrying an analogue of the Schouten-Nijenhuis bracket; a pre-CY structure can be seen as a non-strict version of a Poisson structure; the 'bivector field' \(m_{(2)}\) does not satisfy the involutivity condition \([m_{(2)},m_{(2)}]=0\) on the nose, but up to a correction given by \(m_{(3)}\), which itself satisfies an involutivity condition up to a higher correction, and so on. In Section 3 we describe maps \[(CC_{d}^{-}(A))_{\rm nondeg}\leftrightarrow(\mathcal{M}_{d-{\rm pre-CY}})_{\rm nondeg} \tag{1}\] going between smooth CY structures of dimension \(d\) and pre-CY structures of the same dimension, whose 'bivector field' \(m_{(2)}\) is nondegenerate. It turns out that these maps are noncommutative analogues of the Legendre transform and the inverse Legendre transform (between e.g. functions on the total space of a real vector bundle and of its dual). In order to make this analogy more understandable we start with the (mildly noncommutative) case of an odd vector bundle. The fully noncommutative case is described in terms of certain combinations of ribbon quivers; we evaluate these by inserting the correct structure maps into the vertices, and following the prescriptions in [11, Sec.6]. By solving an iterative lifting problem, it is possible to construct linear combinations of ribbon quivers \(\Gamma_{(2)},\Gamma_{(3)},\dots\), which evaluated on some smooth CY structure \(\lambda\) give the pre-CY structure maps. In general, this procedure is very complicated to implement in practice. However for some simple cases it is possible to use it and calculate pre-CY structures, even by hand. We do this for the case of a particularly simple dg category, the path category \(A\) of the triangle. This is equivalent to the dg algebra \(k[x^{\pm 1}]\) of chains on the based loop space of the circle. We discuss these path dg categories for general simplicial sets in Section 2.2.2; an orientation on the geometric realization of such a simplicial set gives a smooth CY structure on \(A\). We then specialize to the circle, showing how to understand the chain-level nondegeneracy condition in the case of \(A\). Later, in Section 4.1.1, we carry out the computation and calculate the full corresponding pre-CY structure in that case. Finally, we discuss in what sense the maps in Eq. (1) are inverses. The usual Legendre transform defines a one-to-one correspondence between fiberwise convex functions on \(E\) and on \(E^{\vee}\); this is also true for the maps in the noncommutative case, but in a more subtle sense. The most natural statement to be made is that these maps lift to weak homotopy equivalences of simplicial sets; on the left-hand side of Eq. (1) we have then a simplicial set corresponding (under the Dold-Kan equivalence) to the nondegenerate locus of a truncated negative cyclic complex, and on the right-hand side, the simplicial set of solutions to the Maurer-Cartan equation as described by [14, 15]. 
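For the reader's orientation, we record the classical formula that the noncommutative construction deforms (a standard fact, stated under the usual fiberwise strict convexity assumption): for a smooth function \(f\) on a real vector bundle \(E\) that is fiberwise strictly convex, its Legendre transform \(f^{\vee}\) on the dual bundle \(E^{\vee}\) is \[f^{\vee}(p)=\langle p,v\rangle-f(v),\qquad p=\frac{\partial f}{\partial v},\] and the transform is involutive, \((f^{\vee})^{\vee}=f\), with \(v=\partial f^{\vee}/\partial p\) realizing the inverse transform.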
_Acknowledgments:_ We would like to thank Damien Calaque, Sheel Ganatra, Ludmil Katzarkov, Bernhard Keller, Joost Nuiten, Manuel Rivera, Vivek Shende, Bertrand Toen, Bruno Vallette and Zhengfang Wang for helpful conversations. AT and YV would like to thank IHES for the wonderful working conditions provided. This work was supported by the Simons Collaboration on Homological Mirror Symmetry.

### Notation and conventions

Throughout this paper, we fix a field \(\Bbbk\) of characteristic zero, and will denote simply by \(\otimes,\operatorname{Hom}\) the tensor product/\(\operatorname{Hom}\) (of vector spaces, complexes, modules, etc.) over \(\Bbbk\). We will assume everything is \(\mathbb{Z}\)-graded, but all of the results also follow for \(\mathbb{Z}/2\mathbb{Z}\)-grading. Given an \(A_{\infty}\)-category \(A\), and an element \(a\in A\) of homogeneous degree, we denote by \(\deg(a)\) its degree in \(A\) and by \(\bar{a}\) its degree in \(A[1]\). As for the degrees in other complexes, we will often switch between homological degree and cohomological degree; we made an effort to explicitly specify which degree we mean when there is room for confusion, and as is conventional, denote by upper indices cohomological degree and by lower indices homological degree, which are related by \((-)^{i}=(-)_{-i}\).

We will always assume our \(A_{\infty}\)-algebras/categories are strictly unital. For ease of notation, by \(C^{*}(A,M)\) and \(C_{*}(A,M)\) we will always denote the _reduced_ Hochschild cochain/chain complexes. That is, if \(A\) is an \(A_{\infty}\)-algebra, we have

\[C^{*}(A,M)=\prod_{n\geq 0}\operatorname{Hom}(\overline{A}[1]^{\otimes n},M),\quad C_{*}(A,M)=\prod_{n\geq 0}M\otimes\overline{A}[1]^{\otimes n}\]

where \(\overline{A}=A/\Bbbk\cdot 1_{A}\). These complexes can be obtained by taking the quotient of the usual complexes by the elements that have \(1_{A}\in A[1]\) somewhere. Analogously, for a category we take the quotient by the strict units, see [10]. We will denote by \(b\) the chain differential and by \(d\) the cochain differential. Finally, we denote by \(A_{\Delta}\) and \(A^{!}\) the diagonal bimodule of \(A\) and the inverse dualizing bimodule, respectively; for the definition of the bimodule structure maps for these objects, together with all the signs, see [11]. Our conventions agree with the signs there, with the single difference that we write the arguments in the structure maps \(\mu(\dots)\) in the opposite order.

## 2. Smooth Calabi-Yau structures

An \(A_{\infty}\)-category is _(homologically) smooth_ if its diagonal bimodule \(A_{\Delta}\) is a compact object; equivalently, if that object is quasi-isomorphic to a retract of a finite complex of 'Yoneda bimodules' (see [11]). For example, if \(A\) is a dg algebra such that \(A_{\Delta}\) has a finite resolution by direct sums of the free bimodule \(A\otimes A\), then \(A\) is smooth. When \(A\) is smooth, there is a quasi-isomorphism of complexes

\[C_{*}(A)\simeq\operatorname{Hom}_{A-A}(A^{!},A_{\Delta})\]

between Hochschild chains and morphisms of \(A\)-bimodules from the inverse dualizing bimodule to the diagonal bimodule. Recall also that \(C_{*}(A)\) carries the action of the homology of the circle, and the homotopy fixed points of this action are calculated by the negative cyclic complex \(CC_{*}^{-}(A)=(C_{*}(A)[[u]],b+uB)\), where \(B\) is the Connes differential, of homological degree \(+1\). The following definition is a refinement of the notion of a Ginzburg CY algebra [11].
**Definition 1**.: A smooth Calabi-Yau structure of dimension \(d\) on \(A\) is a negative cyclic chain \(\lambda=\lambda_{0}+\lambda_{1}u+\lambda_{2}u^{2}+\dots\in CC_{d}^{-}(A)\) whose image \(\lambda_{0}\in C_{d}(A)\cong\operatorname{Hom}_{A-A}(A^{!},A_{\Delta}[-d])\) is a quasi-isomorphism.

Requiring this lift to negative cyclic homology was suggested by two of us some years ago. This notion has since appeared in many places; for a dg discussion see [1, 2] and for an \(A_{\infty}\) discussion see [11, 12].

### Chain-level nondegeneracy

We now use the graphical calculus described in [12] in order to formulate the nondegeneracy condition of smooth Calabi-Yau structures. Given a negative cyclic \(d\)-chain \(\lambda=\lambda_{0}+\lambda_{1}u+\lambda_{2}u^{2}+\dots\in CC_{d}^{-}(A)\), let us phrase the nondegeneracy condition on \(\lambda_{0}\) at the chain level: under the quasi-isomorphism \(C_{*}(A)\simeq\operatorname{Hom}_{A-A}(A^{!},A_{\Delta})\), it must map to a quasi-isomorphism of \(A\)-bimodules, that is, it must have a quasi-inverse

\[\alpha=\text{``}(\lambda_{0})^{-1}\text{''}\in\operatorname{Hom}_{A-A}(A_{\Delta},A^{!})\simeq C_{(2)}^{*}(A)\]

where \(C_{(2)}^{*}(A)\) is the complex of higher Hochschild cochains (with two outputs), with the quasi-isomorphism we explained in [12, Sec.4.2]. In terms of morphisms of bimodules, there is evidently a composition map

\[\operatorname{Hom}_{A-A}(A_{\Delta},A^{!})\otimes\operatorname{Hom}_{A-A}(A^{!},A_{\Delta})\to\operatorname{Hom}_{A-A}(A_{\Delta},A_{\Delta})\]

which, using these quasi-isomorphisms, can be represented by a map of complexes

\[C_{*}(A)\otimes C_{(2)}^{*}(A)\to C^{*}(A),\]

for which we give the following explicit representative.

**Lemma 1**.: _The composition map can be represented by the map of complexes \((\alpha,\lambda_{0})\mapsto\lambda_{0}\circ\alpha\in C^{*}(A)\) given by a certain ribbon quiver (figure omitted: a quiver with a \(2\)-valent vertex at the top and a \(\times\)-source at the center)._

That is, we produce a map \(C_{*}(A)\otimes C_{(2)}^{*}(A)\to C^{*}(A)\) by plugging \(\alpha\) into the 2-valent vertex at the top and \(\lambda_{0}\) into the source at the center, and evaluating this diagram using the prescription in [13, Sec.6.1.4]. The fact that this represents the desired map follows from using the explicit descriptions of the quasi-isomorphisms, together with the calculations in [1, Sec.2].

Since we assume \(A\) is strictly unital, there is a distinguished element \(1\in C^{0}(A)\), the _unit cochain_, which only has a nonzero component of length zero, giving the unit element.1 Moreover, under the quasi-isomorphism \(C^{*}(A)\simeq\operatorname{Hom}_{A-A}(A_{\Delta},A_{\Delta})\) the unit cochain maps to the identity.

Footnote 1: In the category case, that is, when \(A\) has multiple objects, recall that the regions around our diagrams get labeled with objects, so the cochain 1 just returns the identity morphism \(1_{X}\in\operatorname{End}_{A}(X)\) when the region around the vertex is labeled by any object \(X\).

**Proposition 2**.: _Let \(A\) be smooth.
The elements \(\lambda_{0}\in C_{d}(A)\) and \(\alpha\in C_{(2)}^{d}(A)\) represent inverse classes if and only if they are closed under the relevant differentials, and there is an element \(\beta\in C^{-1}(A)\) such that \(\lambda_{0}\circ\alpha-1=d\beta\) in \(C^{0}(A)\) (written as an equation of diagrams in the original; figure omitted). In other words, \(\lambda=\lambda_{0}+\lambda_{1}u+\dots\) represents a smooth Calabi-Yau structure if and only if there are \(\alpha\) and \(\beta\) such that the first term \(\lambda_{0}\) satisfies the equation above._

One of those directions obviously follows from the Lemma above; this is exactly the condition \([\lambda_{0}\circ\alpha]=[\operatorname{id}]\) in \(\operatorname{Hom}_{A-A}(A,A)\), so the equation says that \(\alpha\) represents a right-inverse to \(\lambda_{0}\). We will now argue that it is also a left-inverse, once we assume that \(A\) is smooth.

Before that, let us present an analogy using a finite-dimensional vector space \(V\) and its linear dual \(V^{\vee}\): let \(f:V\to V^{\vee}\) and \(g:V^{\vee}\to V\) be linear maps such that \(g\circ f=\operatorname{id}_{V}\). Then obviously both \(f\) and \(g\) have full rank, are bijective, and so \(f\circ g=\operatorname{id}_{V^{\vee}}\). At the risk of boring the reader, let us give another proof of this easy fact: since \(V\) is finite-dimensional, the canonical map \(V\to(V^{\vee})^{\vee}\) is an isomorphism, so we dualize the composition \(g\circ f\) and use this identification

\[(V\overset{f}{\to}V^{\vee}\overset{g}{\to}V)\mapsto(V^{\vee}\overset{f^{\vee}}{\leftarrow}V\overset{g^{\vee}}{\leftarrow}V^{\vee})\]

to conclude that the composition \(f^{\vee}\circ g^{\vee}\) is the identity on \(V^{\vee}\). Note now that if we pick any symmetric bilinear form on \(V\), we can identify the maps \(f,g\) with matrices; the maps \(f^{\vee},g^{\vee}\) are then the transposes of those matrices. Now, in finite dimensions, every matrix is conjugate to its transpose. So \(g\) also has a left-inverse, which is constrained to be \(f\) by the equation \(f\circ g\circ f=f\).

We now explain the analog of this reasoning for our smooth category \(A\), first by identifying the role of the transposed map.

**Lemma 3**.: _Let \(A\) be a smooth category, and \(\alpha\in C^{*}_{(2)}(A)\simeq\operatorname{Hom}_{A-A}(A_{\Delta},A^{!})\). Let \(\alpha^{!}\in\operatorname{Hom}_{A-A}((A^{!})^{!},A^{!})\) be the morphism obtained by taking bimodule duality. Upon identifying \((A^{!})^{!}\cong A\), the class of \(\alpha^{!}\) is represented by the \(\mathbb{Z}/2\) rotation of the vertex of \(\alpha\)._

Proof.: This follows from some diagrammatic calculus as we developed in [10]. Recall that for any \(A_{\infty}\)-category \(A\) we have a quasi-isomorphism \(A_{\Delta}\otimes_{A-A}A_{\Delta}\overset{\sim}{\to}A_{\Delta}\); we regard an element of the former as a set of \(A[1]\)-arrows, traveling down a strip with an \(A_{\Delta}\) arrow on each side (strip diagram omitted). More precisely, an element of \(A_{\Delta}\otimes_{A-A}A_{\Delta}\) is something we can input into the top of this diagram. Analogously, we represent an element of \(A^{!}\) as a strip where the inner arrows travel _up the strip_ (strip diagram omitted); again, more precisely, an element of \(A^{!}\) is something we can input into the top of this diagram. This can be seen from the explicit representative for the left dual \(A^{!}\) given in [1, Def.2.40]. More generally, for any perfect \((A,A)\)-bimodule \(M\), its left dual is represented by the strip with an \(M\) arrow going up in the middle.
Under these identifications, given an element \(\alpha\in C^{*}_{(2)}(A)\), the map \(A^{!}\to A_{\Delta}\) it gives by dualizing is represented by a diagram (figure omitted) where, as usual, the white arrow marks the first output of \(\alpha\). The map induced on the left duals \(\alpha^{!}:(A^{!})^{!}\to A^{!}\) is then given by reversing this diagram and inserting it in the place of the \(M\) arrow. But since \(A\) is smooth, \(A^{!}\) is perfect and the map \(A\to(A^{!})^{!}\) is a quasi-isomorphism. So we can simplify the diagram obtained and conclude that the resulting diagram (figure omitted) is also a representative for \(\alpha^{!}\); comparing it to the previous diagram we have the desired statement.

We are now ready to prove Proposition 2.

Proof.: It remains to prove that if \(\alpha\) and \(\lambda_{0}\) satisfy the given equation, then \([\alpha]\circ[\lambda_{0}]=[\operatorname{id}_{A^{!}}]\). We use the analogy with the discussion about finite-dimensional vector spaces above: we already know that the composition

\[A_{\Delta}\overset{\alpha}{\to}A^{!}\overset{\lambda_{0}}{\to}A_{\Delta}\]

is quasi-isomorphic to the identity on \(A_{\Delta}\), so taking left duals we know that the composition

\[A^{!}\overset{\lambda_{0}^{!}}{\to}(A^{!})^{!}\overset{\alpha^{!}}{\to}A^{!}\]

is quasi-isomorphic to the identity on \(A^{!}\), and therefore, by the assumption that \(\alpha\) and \(\lambda_{0}\) satisfy the equation in the statement of the proposition, this element is also cohomologous to the diagram which we showed above represents the left dual map \(\alpha^{!}\). A similar calculation shows that it is also cohomologous to \(\alpha\) itself. Therefore \([\alpha]\) also has a right-inverse, which is then constrained to be the same class as \([\lambda_{0}]\).

### Examples

So far, we have explained that given a smooth CY structure \(\lambda\) on an \(A_{\infty}\)-category \(A\), one can in principle find a chain-level representative, that is, a solution \(\alpha\) to the equation in Proposition 2, which we will later use to construct the desired pre-CY structure on \(A\). In general, finding an explicit solution for \(\alpha\) may be difficult, since it involves solving an inverse function problem for a morphism between bimodules. However, in some specific cases of interest, it is possible to solve this problem explicitly.

#### 2.2.1. Chains on the based loop space of orientable manifolds

Consider a pointed path-connected topological space \((X,x)\); concatenation of loops gives a morphism

\[\Omega_{x}X\times\Omega_{x}X\to\Omega_{x}X\]

inducing a product on the complex of chains \(C_{*}(\Omega_{x}X,\Bbbk)\), making it into a dg algebra. It has long been understood that structures on this algebra are intimately related to operations of string topology.
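As a first example to keep in mind (anticipating Section 2.2.4, and using only the standard fact that \(\Omega_{x}S^{1}\) is homotopy equivalent to the discrete space \(\mathbb{Z}\)): for the circle \(X=S^{1}\) we get

\[C_{*}(\Omega_{x}S^{1},\Bbbk)\simeq\Bbbk[\mathbb{Z}]=\Bbbk[x^{\pm 1}],\]

a dg algebra concentrated in degree zero with vanishing differential; the orientation of \(S^{1}\) should then give a smooth CY structure of dimension \(1\) on it, which we make explicit below.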
In the 80s, work of Goodwillie [10] and Burghelea and Fiedorowicz [1] gave an equivalence

\[H_{*}(LX,\Bbbk)\cong HH_{*}(C_{*}(\Omega_{x}X),\Bbbk)\]

between the homology of the free loop space and the Hochschild homology of the dg algebra \(A=C_{*}(\Omega_{x}X,\Bbbk)\). This equivalence relates certain BV algebra structures on each side, constructed algebraically on the Hochschild complex and topologically on loop space homology. Much has been written about this relation between the Hochschild theory of \(A\) and loop space operations, but here we would like to focus on the effect of Poincaré duality.

Let \(X\) be an \(n\)-dimensional (\(\Bbbk\)-)Poincaré duality space with a choice of fundamental class, that is, endowed with a class \(c_{X}\in H_{n}(X,\Bbbk)\) such that the cap product \(c_{X}\cap-:H^{*}(X,\Bbbk)\to H_{n-*}(X,\Bbbk)\) is an isomorphism. By now, it is a well-known fact that in that case \(A=C_{*}(\Omega_{x}X,\Bbbk)\) is a smooth CY algebra of dimension \(n\). At the level of weak CY structures (that is, without the lift to negative cyclic homology), this is explicitly proven in [1] following an argument of [13], and the lift to the negative cyclic complex is discussed in a draft [CG] of Cohen-Ganatra. We summarize these results:

**Proposition 4**.: _There is a map \(\iota:C_{*}(X,\Bbbk)\to CC_{*}^{-}(A)\) such that if \(c_{X}\in C_{d}(X,\Bbbk)\) is a cycle whose class \([c_{X}]\in H_{d}(X,\Bbbk)\) is the fundamental class of \(X\), then the image \(\iota(c_{X})\) is a smooth CY structure of dimension \(d\) on \(A=C_{*}(\Omega_{x}X,\Bbbk)\)._

The map \(\iota\) (or rather, its composition with the canonical map \(CC_{*}^{-}(A)\to C_{*}(A)\) to Hochschild chains) should be seen as an algebraic incarnation of the map \(X\hookrightarrow LX\) given by the inclusion of \(X\) as constant loops. We would like to have a chain-level inverse for the image of \(c_{X}\), that is, an appropriate value for the \(\alpha\) vertex in the statement of Proposition 2; by the result above it is always possible to find one, but doing so algorithmically for a general Poincaré duality space turns out to be quite involved. We will leave the full description for future work [14], but we can at least present the general lines and a simple example.

#### 2.2.2. Path dg category

We start by replacing the algebra \(A\) by an equivalent dg category, with a more local, combinatorial description. Let \(\Lambda\) be a simplicial set, that is, a simplicial complex endowed with a total ordering on its set \(\Lambda_{0}\) of vertices. We will denote an \(n\)-simplex \(\sigma\) in \(\Lambda\) with vertices \(v_{0},\ldots,v_{n}\) (in order) by the notation \((v_{0}\ldots v_{n})_{\sigma}\), or more simply \((v_{0}\ldots v_{n})\) when there is no ambiguity.

**Definition 2**.: The path dg category \(P_{\Lambda}\) of the simplicial set \(\Lambda\) has as object set \(\Lambda_{0}\) and as morphism space between vertices \(s\) and \(t\), the graded \(\Bbbk\)-vector space spanned by symbols of the form

\[(v_{0}^{1}v_{1}^{1}\ldots v_{n_{1}}^{1})_{\sigma_{1}}*(v_{0}^{2}\ldots v_{n_{2}}^{2})_{\sigma_{2}}*\cdots*(v_{1}^{j}v_{0}^{j})_{\sigma_{j}}^{-1}*\cdots*(v_{0}^{N}\ldots v_{n_{N}}^{N})_{\sigma_{N}},\]

where \(v_{n_{k}}^{k}=v_{0}^{k+1}\), \(v_{0}^{1}=s\) and \(v_{n_{N}}^{N}=t\), modulo the relation generated by

\[x*(u_{0}u_{1})_{\sigma}*(u_{0}u_{1})_{\sigma}^{-1}*y=x*y\]

where \(x\) and \(y\) are any sequences as above. A generator of the form above is placed in degree \(\sum_{k}(1-n_{k})\).
If \(s=t\) we also add the identity morphism \(e_{s}\). That is, generators are composable sequences whose elements are either simplices of \(\Lambda\) (of any nonzero dimension), or formal inverses of \(1\)-simplices. We place \(n\)-simplices in homological degree \(n-1\), and endow this vector space with the differential given by

\[d(v_{0}\ldots v_{n})=\sum_{i=1}^{n-1}(-1)^{i}\left((v_{0}\ldots\check{v_{i}}\ldots v_{n})-(v_{0}\ldots v_{i})*(v_{i}\ldots v_{n})\right)\]

and by the Leibniz rule with respect to \(*\). These compositions of simplices are sometimes referred to as 'necklaces' in the literature (not to be confused with the 'necklace bracket' we defined in [10]), as one can imagine the simplices as beads in an unfastened necklace. It was suggested by one of us in [11] that these categories of necklaces (after formally inverting \(1\)-simplices as we did above) give models for based loop spaces. This relation was studied in [13, 14, 15, 16, 17, 18]; in particular, [19] describes in detail the localization at \(1\)-simplices to give a functor \(\hat{\mathfrak{C}}\) from simplicial sets into some category of 'necklical sets', making this assignment functorial. Our dg category \(P_{\Lambda}\) is just the image on objects of this functor, composed with taking \(C_{*}(-,\Bbbk)\). We will not need the full simplicial description developed in the references above, so we summarize:

**Proposition 5**.: _If \((X,x)\) is a pointed topological space which is homotopy equivalent to the geometric realization of \(\Lambda\), the dg category \(P_{\Lambda}\) is quasi-isomorphic to the dg algebra \(C_{*}(\Omega_{x}X)\), with product given by concatenation._

Concretely, each morphism \(x\to y\) of degree \(-n\) describes an \(n\)-dimensional _cube_ in the space of paths between \(x\) and \(y\); for example, the morphism given by \((v_{0}v_{1})*(v_{1}v_{2}v_{3})\), which has boundary \(-(v_{0}v_{1})*(v_{1}v_{3})+(v_{0}v_{1})*(v_{1}v_{2})*(v_{2}v_{3})\), describes the family of paths over the \(1\)-cube (that is, the interval) which sweeps from the path \(v_{0}\to v_{1}\to v_{3}\) to the path \(v_{0}\to v_{1}\to v_{2}\to v_{3}\), deforming it across the \(2\)-simplex \((v_{1}v_{2}v_{3})\). The comparison result with the algebra \(C_{*}(\Omega_{x}X)\) above relies on the fact that this cubical complex computes the homology of the based loop space.

#### 2.2.3. Inclusion of constant loops

One can use this model to describe the map \(\iota\), which corresponds to the inclusion of constant loops into the free loop space. An explicit representative for a map

\[C_{*}(X)\to CC_{*}^{-}(C_{*}(\Omega_{x}X))\]

from simplicial chains on \(X\) is described in [1, App.B], following constructions in the classical work of Adams [1]. We can rephrase this in terms of the dg category \(P_{\Lambda}\):

**Proposition 6**.: _There is a chain map \(\iota:C_{*}(\Lambda)\to CC_{*}^{-}(P_{\Lambda})\) whose composition with the canonical map \(CC_{*}^{-}(P_{\Lambda})\to C_{*}(P_{\Lambda})\) agrees with the map induced by the inclusion of constant loops, under the identification \(HH_{*}(P_{\Lambda})\cong H_{*}(L|\Lambda|)\)._

Proof.: One can construct this map locally on each simplex, and inductively in dimension. We start by sending each \(0\)-simplex \((v)\mapsto e_{v}\) (that is, the length \(1\) zero-chain in \(CC_{*}^{-}(P_{\Lambda})\) consisting solely of the identity morphism \(e_{v}\in P_{\Lambda}(v,v)\)).
Suppose now that we have the map for all simplices of dimension up to \(n-1\), and moreover, that for each such simplex \(\tau\), the image lies in the subcomplex \(CC_{*}^{-}(P_{\overline{\tau}})\subseteq CC_{*}^{-}(P_{\Lambda})\) on its closure \(\overline{\tau}\). Let \(\sigma\) be some \(n\)-simplex; in order to extend the map to \(\sigma\) it is sufficient to find some degree \(n\) element \(x\) of \(CC_{*}^{-}(P_{\overline{\sigma}})\) such that

\[(b+uB)x=\iota(\partial\sigma)\]

But by assumption \((b+uB)\iota(\partial\sigma)=0\), so \([\iota(\partial\sigma)]\) is a class in \(HC_{n-1}^{-}(P_{\overline{\sigma}})\), which is zero for \(n\geq 1\) since \(\overline{\sigma}\) is contractible.

The argument above, however, does not give us a way to explicitly construct a representative for \(\iota(\sigma)\), which may involve quite complicated expressions in higher dimensions. For a \(1\)-simplex \((v_{0}v_{1})\), we can choose its corresponding Hochschild chain to be \((v_{0}v_{1})[(v_{0}v_{1})^{-1}]\), which for simplicity of notation we denote \(01[10]\). This lifts to the negative cyclic chain

\[01[10]-01[10|01|10]u+01[10|01|10|01|10]u^{2}\dots\]

To a \(2\)-simplex \((v_{0}v_{1}v_{2})\) we then need to assign a Hochschild \(2\)-chain whose boundary is \(01[10]+12[21]-02[20]\); one possible choice is

\[01*12[21|10]-01*12[20|02*21*10]+01*12*20[012*21*10]-012*20*012*21*10.\]

As for the chain we associated to the \(1\)-simplex, the Hochschild chain above has some lift to a negative cyclic chain, whose expression is not particularly enlightening, and so on. Now, given a simplicial triangulation \(\Lambda\) of a Poincaré duality space \(X\), we can look at the image \(\lambda=\iota(c_{X})\) of its fundamental chain and find the inverse \(\alpha\) of its \(u=0\) component \(\lambda_{0}\), that is, the element \(\alpha\) in \(C_{(2)}^{*}(P_{\Lambda})\) which satisfies the equation in the statement of Proposition 2.

#### 2.2.4. The circle

Let us illustrate how to do this for the simplest non-trivial case, that is, for the circle. We pick a triangulation that exhibits it as the boundary of the \(2\)-simplex \((v_{0}v_{1}v_{2})\). The fundamental chain \((v_{0}v_{1})+(v_{1}v_{2})-(v_{0}v_{2})\) maps to the Hochschild chain

\[\lambda_{0}=01[10]+12[21]-02[20],\]

again using our shorthand notation. We are looking for an inverse \(\alpha\) to \(\lambda_{0}\). Recall that this is a vertex that receives any number of \(A[1]\) arrows above and below, and outputs two arrows in \(A\); if \(A\) were an \(A_{\infty}\)-algebra this would be the space \(\operatorname{Hom}(T(A[1])\otimes T(A[1]),A\otimes A)\); as our chosen \(A\) has multiple objects, one replaces the \(A\) factors by morphism spaces and sums over objects. We proceed inductively on the length of the inputs. Since \(A\) is concentrated in non-negative degrees and we want \(\alpha\) of (cohomological) degree \(+1\), its component \(\alpha^{0,0}\) with zero entries on both sides is necessarily zero. The first non-trivial degree to be specified is the component \(\alpha^{0,1}\), that is, with zero inputs on top and one input on the bottom. Note that since \(A\) is a category, and not an algebra, the regions in a diagram are labeled by objects, which we will denote by writing \(\underline{0},\underline{1}\) and \(\underline{2}\) for each of the objects of \(A\) (vertices of the triangle).
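Before constructing \(\alpha\), note as a consistency check (with signs depending only on the conventions for the differential \(b\) fixed above) that \(\lambda_{0}\) is indeed a Hochschild cycle: since \(01*10=e_{v_{0}}\), \(10*01=e_{v_{1}}\), and similarly for the other edges, we have

\[b(01[10])=e_{v_{0}}-e_{v_{1}},\qquad b(12[21])=e_{v_{1}}-e_{v_{2}},\qquad b(02[20])=e_{v_{0}}-e_{v_{2}},\]

so that \(b(\lambda_{0})=(e_{v_{0}}-e_{v_{1}})+(e_{v_{1}}-e_{v_{2}})-(e_{v_{0}}-e_{v_{2}})=0\).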
Recall that our convention is to use the _reduced_ Hochschild complex; therefore the element \(\alpha\) evaluates to zero whenever one of the inputs is an identity morphism. Now, every non-identity morphism in \(A\) is either 'counter-clockwise' (for example, the morphisms \((01),(01*12),(20)\), etc.) or 'clockwise' (for example, \((10),(10*02*21)\), etc.). Let \(P:i\to j\) be one of these morphisms, that is, a path from some vertex \(i\) to a vertex \(j\). We define \(\alpha^{0,1}\) on \(P\) by an explicit local formula (displayed as a diagram in the original; figure omitted), where \(\delta_{\dots}\) is the usual delta function on the set of pairs of vertices.

**Proposition 7**.: _The prescription above extends uniquely to a closed element \(\alpha\in C^{1}_{(2)}(A)\), symmetric under the \(\mathbb{Z}/2\) action, which moreover satisfies the equation of Proposition 2 together with \(\lambda_{0}\)._

Proof.: For the element \(\alpha\) to be closed, it needs to satisfy compatibility equations (diagrams omitted), together with a similar equation with one input on the top and one input on the bottom. Since \(A\) is concentrated in degree zero, any component of \(\alpha\) with two or more inputs vanishes, so the left-hand side of these equations is always zero; we must check that the right-hand side is zero, which we can explicitly calculate.

Now, it remains to check that this element is an inverse to the Hochschild chain \(\lambda_{0}=01[10]+12[21]-02[20]\). Let us calculate the Hochschild cochain given by the corresponding diagram (figure omitted). Since \(A\) is a dg category, all the higher structure maps \(\mu_{\bar{A}}^{\geq 3}\) are zero, so the only nontrivial terms occur when the \(A[1]\)-arrow coming from \(\lambda_{0}\) lands in \(\alpha\). We calculate the values of this cochain on zero inputs; for example, when we label the region around the diagram with the object \(\underline{0}\), we obtain the identity morphism \(e_{v_{0}}\) (diagram omitted), and similarly for the diagrams with the outside region labeled with \(\underline{1}\) or \(\underline{2}\). Moreover, this diagram evaluates to zero on any input of length \(\geq 1\); it is exactly the unit cochain in \(C^{0}(A)\).

We note that the guess for the element \(\alpha\) above can be derived from a smaller set of data by using the closedness condition: we can just specify that \(\alpha\) gives some sort of 'local' pairing near the object \(\underline{0}\) (display omitted), and analogously for \(\underline{1},\underline{2}\); that is, assigning zero when the simplex above and the simplex below do not intersect, and some appropriately signed local pair of paths when they do. In this sense, our expression for the element \(\alpha\) is 'localized' to a neighborhood of the diagonal in the product space \(S^{1}\times S^{1}\).

This smooth Calabi-Yau structure on chains on \(\Omega S^{1}\) is well-known in the literature; for an explicit description of this structure from another angle, see the recent works [1, 1]. Under the equivalence between our dg category \(A\) and the dg algebra \(\Bbbk[x^{\pm 1}]\), the element \(\lambda_{0}\) above maps to a Hochschild chain cohomologous to \(x^{-1}[x]\), corresponding to the noncommutative form \(x^{-1}d_{dR}x\) in the description of [1, Sec.3].

## 3. Noncommutative Legendre transform

We recall from [10, 11] that in the language of noncommutative geometry, an \(A_{\infty}\)-algebra \((A,\mu)\) can be thought of as a noncommutative pointed dg manifold \(X_{A}\) with an integrable vector field \(Q_{\mu}\); the space \(C^{*}_{[d]}(A)\) where pre-CY structures of dimension \(d\) live is then the space of shifted polyvector fields on \(X_{A}\), with the necklace bracket playing the role of the Schouten-Nijenhuis bracket.
A pre-CY structure \(m\) then satisfies the (quadratic) Maurer-Cartan equation \([m,m]=0\). We denote by \(\mathcal{M}_{\mathrm{pre-CY}}\) the space of such solutions; at a given point \(m\) this space has tangent complex given by

\[T_{m}\mathcal{M}_{\mathrm{pre-CY}}=(C^{*}_{[d]}(A),[m,-]_{\mathrm{nec}}),\]

that is, polyvector fields with differential given by the necklace bracket with \(m\). As mentioned in the introduction, we will eventually construct a map that takes a smooth CY structure, represented by some negative cyclic chain \(\lambda\in CC^{-}_{*}(A)\), and produces a point of \(\mathcal{M}_{\mathrm{pre-CY}}\); in the process we will construct another map in the inverse direction, and in Section 4.2 we argue that this map is 'one-to-one', in a certain homotopical sense. In this section we will explain why such a relation should be seen as a noncommutative analog of the Legendre transform, and then we will build this transform using the formalism of ribbon quivers developed in [11]. But first let us take a digression through the theory of usual Legendre transforms.

### Commutative and odd Legendre transform

#### 3.1.1. Legendre transform on vector bundles

Recall that the classical Legendre transform can also be defined (fiberwise) on a vector bundle. Let \(M\) be a manifold and \(E\to M\) a vector bundle of rank \(N\), with a real-valued smooth function \(L:E\to\mathbb{R}\), fiberwise convex. For simplicity we assume \(L\) is bounded below by some positive-definite quadratic function. We describe the Legendre transform in two steps: we first take the _fiberwise derivative_

\[FL:E\to E^{\vee},\]

defined in local coordinates by

\[FL(x_{1},\dots,x_{N},v_{1},\dots,v_{N})=(x_{1},\dots,x_{N},\partial L/\partial v_{1},\dots,\partial L/\partial v_{N})\]

which gives a diffeomorphism, given our assumptions on \(L\). We then construct the _energy function_ associated to \(L\), given by

\[e_{L}=\sum_{i}v_{i}\frac{\partial L}{\partial v_{i}}-L\]

and then we can define the Legendre transform \(H:E^{\vee}\to\mathbb{R}\) by

\[H=\mathcal{L}(L):=e_{L}\circ(FL)^{-1}\]

which, using the pairing between \(E\) and \(E^{\vee}\), can also be expressed on each fiber by the classical Legendre transform

\[H_{x}(p)=\mathrm{crit.value}_{v}(vp-L_{x}(v)).\]

This function has the property that \(FH\) is the inverse diffeomorphism to \(FL\), and \(\mathcal{L}(H)=L\). Moreover, we have the following fact.

**Proposition 8**.: _The pullback under the fiberwise derivative corresponding to \(H\) is minus the variational derivative of the Legendre transform at \(L\), that is:_

\[(FH)^{*}=-\frac{\delta\mathcal{L}(f)}{\delta f}\Big|_{f=L}.\]

Proof.: Note that \((FH)^{*}\) is a ring isomorphism on functions \(\mathcal{O}(E)\to\mathcal{O}(E^{\vee})\). We vary \(L\to L+\delta L\) and calculate the variation \(H\to H+\delta H\) by expanding the relation \(e_{H+\delta H}=(L+\delta L)\circ F(H+\delta H)\) to first order.

#### 3.1.2. Odd Legendre transform

We now discuss an analogue of the classical Legendre transform, but one that goes between odd tangent and cotangent bundles. A description of this odd Legendre transform appears in the early work [1], and its idea appears as a motivation for the discussion in [10], relating shifted symplectic structures to nondegenerate shifted Poisson structures. Let us denote by \(\Pi TM\) the odd tangent bundle of \(M\), and by its dual \(\Pi T^{*}M\) the odd cotangent bundle.
In precise terms, we will consider maps between the spaces of (formal) functions on those bundles, namely the space of polyvector fields

\[\mathcal{O}(\Pi T^{*}M)=\wedge^{*}TM=\mathcal{O}(M)[\alpha_{i}]\]

where the anticommuting variables \(\alpha_{i}\) represent the vector fields \(\partial/\partial x^{i}\), and the space of differential forms

\[\mathcal{O}(\Pi TM)=\wedge^{*}\Omega^{1}(M)=\mathcal{O}(M)[\beta^{i}]\]

where the anticommuting variables \(\beta^{i}\) represent the one-forms \(dx^{i}\). We make the convention that the degree of \(\alpha_{i}\) is \(+1\) and that of \(\beta^{i}\) is \(-1\). We will write a \(p\)-vector field in coordinates as

\[P=\frac{1}{p!}P^{i_{1},\ldots,i_{p}}\alpha_{i_{1}}\ldots\alpha_{i_{p}}\]

using the summation convention for repeated indices. We can also regard this as an element of the tensor algebra of \(TM\) by using the antisymmetric embedding:

\[P=\frac{1}{p!}P^{j_{1},\ldots,j_{p}}\delta_{j_{1},\ldots,j_{p}}^{i_{1},\ldots,i_{p}}\alpha_{i_{1}}\ldots\alpha_{i_{p}}\]

using the Kronecker symbol giving the sign of the permutation. This allows us to calculate derivatives: the derivative \(\partial/\partial\alpha_{i}\) acts as

\[\frac{\partial}{\partial\alpha_{i_{n}}}\alpha_{i_{1}}\ldots\alpha_{i_{p}}=p(-1)^{n}\alpha_{i_{1}}\ldots\hat{\alpha}_{i_{n}}\ldots\alpha_{i_{p}}\]

where the hat denotes omission. Let \(\gamma\in\mathcal{O}(\Pi T^{*}M)\) be a polyvector field _without degree one term_, which we write in coordinates as

\[\gamma(x)=\frac{\gamma_{2}^{ij}(x)}{2!}\alpha_{i}\alpha_{j}+\frac{\gamma_{3}^{ijk}(x)}{3!}\alpha_{i}\alpha_{j}\alpha_{k}+\ldots\]

where \(\gamma_{p}(x)\) is some \(p\)-tensor depending on \(x\). Let us assume that the degree two term \(\gamma_{2}\) is given by a positive definite matrix. In terms of functions on the odd space \(\Pi T^{*}M\), \(\gamma\) should be seen as a convex function with a fiberwise critical point at the 'locus \(\alpha=0\)'. The odd fiberwise derivative should then send a neighborhood of this locus to a neighborhood of the 'locus \(\beta=0\)', and the odd Legendre transform should produce a formal series in the variables \(\beta^{i}\), that is, a differential form.

Let us calculate the odd fiberwise derivative \(F\gamma\): first we write

\[\beta^{i}=\frac{\partial\gamma}{\partial\alpha_{i}}=\gamma_{2}^{ij}\alpha_{j}+\frac{\gamma_{3}^{ijk}}{2!}\alpha_{j}\alpha_{k}+\ldots\]

and then invert this equation, expressing each \(\alpha_{i}\) in terms of the \(\beta\) variables, as a function \(f_{i}(\gamma,\beta)\), depending on the matrices \(\gamma_{p}\) for all \(p\). This is possible if and only if the matrix \(\gamma_{2}\) is invertible: denoting by \(\{M_{ij}\}\) its inverse, we can calculate \(f_{i}\) by an iterative procedure. To second order this gives

\[\alpha_{i}=f_{i}(\gamma,\beta)=M_{ij}\beta^{j}+M_{ij}M_{ka}M_{lb}\frac{\gamma_{3}^{jkl}}{2!}\beta^{a}\beta^{b}+\ldots\]

In other words, the functions \(f_{i}\) are the data of the inverse of the induced map on functions \(((F\gamma)^{*})^{-1}\). We calculate now that the energy function associated to \(\gamma\) is given by

\[e_{\gamma}=\alpha_{i}\frac{\partial\gamma}{\partial\alpha_{i}}-\gamma=\frac{\gamma_{2}^{ij}}{2!}\alpha_{i}\alpha_{j}+2\times\frac{\gamma_{3}^{ijk}}{3!}\alpha_{i}\alpha_{j}\alpha_{k}+\cdots+(p-1)\times\frac{\gamma_{p}^{ij\ldots}}{p!}\alpha_{i}\alpha_{j}\cdots+\ldots\]

that is, we just multiply each degree \(p\) term by \(p-1\).
The odd Legendre transform \(\lambda=\mathcal{L}(\gamma)\) is then calculated by substituting, in the expression above, \(\alpha_{i}\mapsto f_{i}(\gamma,\beta)\).

**Proposition 9**.: _The polyvector field \(\gamma\) satisfies \([\gamma,\gamma]=0\), where \([\,,]\) is the Schouten-Nijenhuis bracket, if and only if \(d\lambda=0\)._

Proof.: We calculate

\[d\lambda=\beta^{i}\frac{\partial\lambda}{\partial x^{i}}=\beta^{i}\frac{\partial}{\partial x^{i}}\left(f_{j}(\gamma,\beta)\beta^{j}-\gamma(\alpha_{j}\mapsto f_{j}(\gamma,\beta))\right)=\beta^{i}\frac{\partial f_{j}}{\partial x^{i}}\beta^{j}-\beta^{i}\frac{\partial\gamma}{\partial x^{i}}-\beta^{i}\frac{\partial\gamma}{\partial\alpha_{j}}\frac{\partial f_{j}}{\partial x^{i}}=-\beta^{i}\frac{\partial\gamma}{\partial x^{i}}\]

along the locus \(\alpha=f(\gamma,\beta)\); this is equal to

\[-\frac{\partial\gamma}{\partial\alpha_{i}}\frac{\partial\gamma}{\partial x^{i}}=-[\gamma,\gamma]\]

proving the proposition.

Performing the opposite operations, we see that the odd Legendre transform gives a bijection between closed forms and polyvector fields satisfying the Maurer-Cartan-type equation \([\gamma,\gamma]=0\).

Let us now consider a slightly more general situation, where \(\gamma\) may have a _small_ nonvanishing first order term, that is:

\[\gamma=\gamma_{1}^{i}\alpha_{i}+\frac{\gamma_{2}^{ij}(x)}{2!}\alpha_{i}\alpha_{j}+\frac{\gamma_{3}^{ijk}(x)}{3!}\alpha_{i}\alpha_{j}\alpha_{k}+\ldots\]

In this case, the locus \(\alpha=0\) is not critical, so we should not expect it to be sent to a neighborhood of \(\beta=0\) by the odd Legendre transform. Instead it will be sent to a neighborhood of the 'point' of \(\Pi TM\) with fiber coordinates given by \(\partial\gamma/\partial\alpha_{i}|_{\alpha=0}=\gamma_{1}^{i}\). More precisely: given a form \(\lambda\) in terms of the \(\beta^{i}\), we shift the coordinates to \(\tilde{\beta}^{i}=\beta^{i}-\gamma_{1}^{i}\); the shifted form will then satisfy Proposition 9 with respect to the differential \(\tilde{d}\) in the variables \(\tilde{\beta}^{i}\). We calculate this in terms of the original variables, expanding to linear order in \(\gamma_{1}\):

\[\begin{split}\tilde{d}\tilde{\lambda}&=\tilde{\beta}^{i}\frac{\partial\tilde{\lambda}^{0}}{\partial x^{i}}+\tilde{\beta}^{i}\frac{\partial\tilde{\lambda}_{j}^{1}}{\partial x^{i}}\tilde{\beta}^{j}+\dots\\&=\beta^{i}\frac{\partial}{\partial x^{i}}\left(\lambda^{0}+\lambda_{j}^{1}\beta^{j}+\dots\right)-\gamma_{1}^{i}\frac{\partial}{\partial x^{i}}\left(\lambda^{0}+\lambda_{j}^{1}\beta^{j}+\dots\right)-\beta^{i}\frac{\partial}{\partial x^{i}}\left(\lambda_{j}^{1}\gamma_{1}^{j}+\dots\right)+O((\gamma_{1})^{2})\\&=(d-\mathrm{Lie}_{\gamma_{1}})\lambda+O((\gamma_{1})^{2})\end{split}\]

Therefore we have the following result:

**Corollary 10**.: _If the polyvector field \(\gamma\) satisfies the equation \([\gamma,\gamma]=0\), then its odd Legendre transform \(\lambda\) satisfies \((d-\mathrm{Lie}_{\gamma_{1}})\lambda=0\) up to higher corrections in \(\gamma_{1}\)._

Note that comparing the result above with the noncommutative analogies in [10] motivates the statement of the correspondence we mentioned in the introduction (Eq. (1)).

#### 3.1.3. Inverting the Legendre transform

Before we move on to the noncommutative world, let us discuss one last thing about the odd Legendre transform. For simplicity we return to the case where \(\gamma_{1}=0\).
Given only the degree two matrix \(\gamma_{2}^{ij}\) of \(\gamma\), and the expression for the Legendre transform \(\lambda=\mathcal{L}(\gamma)\) (of all of \(\gamma\)), how can one compute \(\gamma_{3},\gamma_{4},\dots\)_without_ explicitly writing down the inverse Legendre transform? This question may seem silly, since in this case we could just directly write down the inverse Legendre transform. But in the noncommutative case we will not have an explicit inverse; instead we will use a version of the implicit function theorem. That is, we know by definition that the pair \((\gamma,\lambda)\) solves the equation

\[e_{\gamma}=(F\gamma)^{*}(\lambda)\]

By calculating the expressions for the 'energy function' of \(\gamma\) and the fiberwise derivative we have:

\[\begin{split}\frac{\gamma_{2}^{ij}}{2!}\alpha_{i}\alpha_{j}+2\frac{\gamma_{3}^{ijk}}{3!}\alpha_{i}\alpha_{j}\alpha_{k}+\dots&=\lambda_{i_{1}i_{2}}^{2}\left(\gamma_{2}^{i_{1}j_{1}}\alpha_{j_{1}}+\frac{\gamma_{3}^{i_{1}j_{1}k_{1}}}{2!}\alpha_{j_{1}}\alpha_{k_{1}}+\dots\right)\left(\gamma_{2}^{i_{2}j_{2}}\alpha_{j_{2}}+\frac{\gamma_{3}^{i_{2}j_{2}k_{2}}}{2!}\alpha_{j_{2}}\alpha_{k_{2}}+\dots\right)\\&+\lambda_{i_{1}i_{2}i_{3}}^{3}\left(\dots\right)\dots\end{split}\]

When the matrix \(\gamma_{2}\) is nondegenerate, the quadratic term in \(\alpha\) gives exactly the matrix equation \(\lambda_{2}=(\gamma_{2})^{-1}\), as expected. In third order the equation is:2

\[\frac{\lambda_{ab}^{2}}{2!}(\gamma_{2}^{ai}\gamma_{3}^{bjk}+\gamma_{3}^{aij}\gamma_{2}^{bk})+\lambda_{abc}^{3}\gamma_{2}^{ai}\gamma_{2}^{bj}\gamma_{2}^{ck}=2\gamma_{3}^{ijk} \tag{2}\]

Footnote 2: Note that to get the right numerical factors, it is easier to first embed the exterior algebra into the tensor algebra, antisymmetrically.

which, using the solution for \(\lambda_{2}\) and the skew-symmetry of \(\gamma_{3}\), gives our desired solution \(\gamma_{3}^{ijk}=\lambda_{abc}^{3}\gamma_{2}^{ai}\gamma_{2}^{bj}\gamma_{2}^{ck}\).

#### 3.1.4. Roadmap to the noncommutative version

Above we explained that the odd Legendre transform relates polyvector fields

\[\gamma=\gamma_{1}+\gamma_{2}+\dots\]

that are solutions of the Maurer-Cartan equation \([\gamma,\gamma]=0\) to differential forms that are closed under \(d-\mathrm{Lie}_{\gamma_{1}}\). On one of the sides related by this noncommutative Legendre transform, we have an \(A_{\infty}\)-algebra \((A,\mu)\); in other words, a noncommutative pointed dg manifold \(X_{A}\) with an integrable vector field \(Q_{\mu}\); the space \(C^{*}_{[d]}(A)\) where pre-CY structures of dimension \(d\) live is then the space of shifted polyvector fields on \(X_{A}\), with the necklace bracket playing the role of the Schouten-Nijenhuis bracket. On the other side, we have the space of noncommutative forms on \(X_{A}\); this is the negative cyclic homology complex \(CC^{-}_{*}(A)\) with differential \(b_{\mu}+uB\). For a discussion of this relation in our context, see e.g. [11, Sec.3.3]. The noncommutative Legendre transform then relates these two sides.

Let us sketch a roadmap for what follows. We start with a pre-CY structure \(m\), a solution of the Maurer-Cartan equation \([m,m]=0\). From this, we will proceed in a completely parallel fashion to what we described for the odd Legendre transform.
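Schematically, the dictionary between the two sides (summarizing the identifications just described) reads:

\[\begin{array}{ccc}\text{odd bundle side}&&\text{noncommutative side}\\\mathcal{O}(\Pi T^{*}M)\ \text{(polyvector fields)}&\leftrightarrow&C^{*}_{[d]}(A)\\\text{Schouten-Nijenhuis bracket}&\leftrightarrow&\text{necklace bracket}\\\mathcal{O}(\Pi TM)\ \text{(forms)}&\leftrightarrow&(CC^{-}_{*}(A),\,b_{\mu}+uB)\\{}[\gamma,\gamma]=0&\leftrightarrow&[m,m]=0\\\text{fiberwise derivative }F\gamma&\leftrightarrow&Fm\end{array}\]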
We first define the noncommutative fiberwise derivative, which is a map of complexes

\[Fm:(CC^{-}_{*}(A),b_{\mu}+uB)\to(C^{*}_{[d]}(A),[m,-]_{\rm nec})\]

In analogy with the (super)commutative case, we prove that if \(m_{(2)}\) is nondegenerate, the map above is a quasi-isomorphism; we then write the 'energy function' \(e_{m}\) corresponding to \(m\) and define the Legendre transform of \(m\) to be \((Fm)^{-1}(e_{m})\), which defines a nondegenerate element in the negative cyclic complex. In order to compute the inverse Legendre transform, going from a nondegenerate cyclic homology class to a pre-CY structure, we use the same 'implicit function theorem' strategy of Section 3.1.3 to produce a map

\[\Phi:(CC^{-}_{d}(A))_{\rm nondeg}\to C^{*}_{[d]}(A)\]

which lands inside the space \(\mathcal{M}_{\rm pre-CY}\) of solutions to the Maurer-Cartan equation, that is, pre-CY structures. Just like the ordinary Legendre transform, the map \(\Phi\) is also 'one-to-one', but in a more sophisticated sense; note that one of the sides has a natural 'linear' notion of equivalence (\(b_{\mu}+uB\)-cohomology) but the other one does not. We will later explain in Section 4.2 what the correct notion of equivalence is.

### Tube quivers

Our constructions of the maps \(Fm\) and \(\Phi\) are described using a specific type of ribbon quivers, which we now explain. We refer to [11] for the general definition of ribbon quivers and for the formalism that calculates their action on Hochschild chains.

**Definition 3**.: For any integer \(\ell\geq 1\), the space \(\mathcal{T}_{(\ell)}\) of _tube quivers with \(\ell\) outgoing edges_ is the vector space spanned by ribbon quivers corresponding to genus zero surfaces with two boundary components: one boundary component with a closed input (a source in \(V_{\times}\)) and another boundary component with \(\ell\) open outputs (sinks in \(V_{\rm open-out}\)).

For each integer \(d\), a \(d\)-orientation on a ribbon graph can be specified by a total ordering of all its edges and vertices, with permutations assigning a \(\pm\) weight depending on the parity of \(d\). We will fix this ordering to be of the following form: if the \(\times\)-source is denoted by \(s\) (with incident edge \(e\)) and the open outputs are labeled \(o_{1},\ldots,o_{\ell}\), going _clockwise starting from the right_, we will always put this orientation in the form

\[\pm(o_{1}\ o_{2}\ \ldots\ o_{\ell}\ \ldots\ e\ s)\]

Recall also that each ribbon quiver \(\vec{\Gamma}\) has a homological degree which depends on \(d\), given by the formula

\[\deg_{d}(\Gamma)=\sum_{v\neq s,o_{1},\dots,o_{\ell}}((2-d)out(v)+d+in(v)-4)\]

_Example_.: Here are some examples of ribbon quivers appearing in \(\mathcal{T}_{(2)}\) and \(\mathcal{T}_{(3)}\), respectively (figures omitted). When \(d=0\), the quivers of degree zero only have the input \(\times\), the outputs, and vertices either with two inputs and one output, or with zero inputs and one output. Also, the degree of some quiver is equal to how many edges were contracted to obtain it from any degree zero quiver; for example, for the examples above, \(\deg_{0}(\Gamma)=0\) and \(\deg_{0}(\Gamma^{\prime})=2\), since two vertices have degree one (marked \(v\) and \(w\)). For any \(d\), these degrees get shifted to \(\deg_{d}(\Gamma)=-2d\) and \(\deg_{d}(\Gamma^{\prime})=-3d+2\). Thus, picking an integer \(d\), we can define the space \(\mathcal{T}_{(\ell)}^{d}\) of \(d\)-oriented tube quivers, which as a vector space is equal to \(\mathcal{T}_{(\ell)}\) but has a grading depending on \(d\).
We then consider inserting a Hochschild chain into the \(\times\)-vertex, and also any number of incoming \(A[1]\) arrows in between the \(\ell\) outgoing legs. Now, given _any_ element \(m=m_{(1)}+m_{(2)}+\dots\in C_{[d]}^{*}(A)\), evaluating this oriented ribbon quiver we then get \(\ell\) outgoing \(A\) factors. This evaluation gives a linear morphism of graded vector spaces

\[E:\mathcal{T}_{(\ell)}^{d}\otimes C_{*}(A)\longrightarrow C_{(\ell)}^{*}(A)\]

For general \(m\), \(E\) has no reason to commute with any differentials whatsoever. We now analyze some natural differentials on the space \(\mathcal{T}_{(\ell)}^{d}\).

### Differentials on tube quivers

#### 3.3.1. The chain boundary differential

We first have the differential \(\partial\) defined by summing over vertex separations. This is the boundary operator on chains, and has homological degree \(-1\), as usual. The evaluation of a ribbon quiver is compatible with this differential, that is,

\[E:(\mathcal{T}_{(\ell)}^{d},\partial)\otimes(C_{*}(A),b)\longrightarrow(C_{(\ell)}^{*}(A),[\mu,-]_{\mathrm{nec}})\]

is a map of cochain complexes (where we regard everything with _cohomological_ grading), and so descends to a map in cohomology.

**Proposition 11**.: _When \(d=0\), the complex \((\mathcal{T}_{(\ell)}^{d=0},\partial)\) calculates the homology of the circle:_

\[H_{*}(\mathcal{T}_{(\ell)}^{d=0},\partial)=H_{*}(S^{1})\]

_that is, \(\Bbbk\) in homological degrees zero and one. The same holds for other values of \(d\), but with a shift:_

\[H_{*+\ell d}(\mathcal{T}_{(\ell)}^{d},\partial)=H_{*}(S^{1})\]

Proof.: The first statement is a special case of [11, Thm.60]. The ribbon quivers which appear in \(\mathcal{T}_{(\ell)}\) give a cell decomposition of the space

\[\text{``}\mathcal{M}_{0,2}\text{''}\times S^{1}\times(\mathbb{R}_{>0})^{2\ell-1}=S^{1}\times(\mathbb{R}_{>0})^{2\ell-1}\]

where "\(\mathcal{M}_{0,2}\)" denotes just the point, since the genus zero curve with two punctures has no moduli. As explained in the proof of that theorem, \((\mathcal{T}^{d=0},\partial)\) is the complex of chains on a cell complex that is dual to the cell decomposition above. The ribbon quivers with \(d\)-degree zero correspond to top-dimensional cells. To see that the identification above is correct, we note that for such a quiver we can independently choose \(\ell\) positive numbers to be the lengths of the outgoing legs, and \(\ell-1\) positive numbers to be the distances between the vertices that are not \(s,o_{1},\dots,o_{\ell}\); the last length is fixed by the others. To that we add a circle's worth of directions where the edge coming out of the \(\times\)-vertex \(s\) can land. The latter statement is a variation on the \(d=0\) case; conceptually, for different \(d\), the result is twisted by powers of a line bundle \(\mathcal{L}\) on this moduli space which is trivial up to a shift of \(\ell\).

#### 3.3.2. The rotation differential

We now introduce another differential, corresponding to rotation around the \(S^{1}\)-factor. For that, let us first consider the following decomposition of vector spaces

\[\mathcal{T}_{(\ell)}=(\mathcal{T}_{(\ell)})^{\text{edge}}\oplus(\mathcal{T}_{(\ell)})^{\text{vertex}}\]

where we decompose by what is at the end of the edge \(e\) (the edge incident to the \(\times\)-vertex).
The subspace \((\mathcal{T}_{(\ell)})^{\text{edge}}\) is spanned by ribbon quivers where \(e\) ends on a trivalent vertex with two incoming and one outgoing edge, and the subspace \((\mathcal{T}_{(\ell)})^{\text{vertex}}\) is spanned by the other ribbon quivers. The names come from thinking of each tube ribbon graph without the source vertex; each is a circle with trees attached to the outside. To the interior of the circle we add the source \(s\) with its outgoing edge \(e\); this arrow can either land on some edge, giving a tube quiver in \((\mathcal{T}_{(\ell)})^{\text{edge}}\), or on an already-existing vertex, giving a tube quiver in \((\mathcal{T}_{(\ell)})^{\text{vertex}}\).

We now define a map \(R:(\mathcal{T}^{d}_{(\ell)})^{\text{edge}}\to(\mathcal{T}^{d}_{(\ell)})^{\text{vertex}}\) of homological degree \(+1\) by the following prescription. Let \(\vec{\Gamma}\) be some tube quiver in \((\mathcal{T}_{(\ell)})^{\text{edge}}\); its edge \(e\) lands at a vertex \(v\) of one of two possible forms, which we refer to as type (1) and type (2) (figures omitted).

**Definition 4**.: The _rotation differential_ \(R\) is defined on the subspace \((\mathcal{T}^{d}_{(\ell)})^{\mathrm{edge}}\) by

\[R\left(\vec{\Gamma},(b\ v\ \dots\ o_{1}\ o_{2}\ \dots\ o_{\ell}\ \dots\ a\ \dots\ e\ s)\right)=\sum_{v^{\prime}\neq v\ \mathrm{in\ circle}}\left(\vec{\Gamma}_{v^{\prime}},(-1)^{\ell+\#}(o_{1}\ o_{2}\ \dots\ o_{\ell}\ \dots\ a\ \dots\ e\ s)\right)\]

where in the exponent of the sign, \(\#=0\) if the vertex \(v\) is of type (1) above, or \(\#=1\) if it is of type (2) above. We extend by zero from the subspace \((\mathcal{T}^{d}_{(\ell)})^{\mathrm{edge}}\) to a map \(R:\mathcal{T}^{d}_{(\ell)}\to\mathcal{T}^{d}_{(\ell)}\), of homological degree \(+1\). In other words, under the direct sum decomposition, \(R\) is given by a strictly triangular matrix \(\begin{pmatrix}0&0\\R&0\end{pmatrix}\); it is evident that \(R^{2}=0\). We check that the assignment of the orientation above is coherent with respect to the change of ordering, and does define a map from oriented ribbon quivers.

_Example_.: Let us compute the differential for a tube quiver \((\Gamma,\mathcal{O})\) (figure omitted). To calculate the signs in \(R(\Gamma,\mathcal{O})\), we first bring the pair \(e_{7}\ v_{5}\) to the beginning of the string, getting a sign \((-1)^{6\times(d-1)+6\times d}=+1\) from the permutation (recall that swapping two edges gives \((-1)^{d-1}\) and two vertices, \((-1)^{d}\)). For the orientation of the terms in \(R(\Gamma,\mathcal{O})\) we just take this sign into consideration and delete the pair \(e_{7}\ v_{5}\), getting terms (figures omitted) all with orientation \(+(o_{1}\ o_{2}\ v_{1}\ v_{2}\ v_{3}\ v_{4}\ e_{1}\ e_{2}\ e_{3}\ e_{4}\ e_{5}\ e_{6}\ e_{7}\ e\ s)\).

A direct calculation using the sign conventions for \(\partial\) and \(R\) shows that they graded-commute with each other, that is,

\[\partial R+R\partial=0\]

Therefore \((\mathcal{T}^{d}_{(\ell)},\partial,R)\) has the structure of a _mixed complex_, or equivalently, a complex with a chain-level action of the circle.

### Cyclicity and negative cyclic homology

Let us describe how the tube quivers with differentials \(\partial\) and \(R\) interact with the Hochschild and Connes differentials on Hochschild chains. We write just \(\Gamma\) for some oriented tube quiver \((\Gamma,\mathcal{O})\), and will only specify the orientation when actually necessary.

#### 3.4.1. Cyclic complex of tube quivers

Let \(u\) be a variable of homological degree \(-2\).
**Definition 5**.: For fixed \(\ell,d\), the _cyclic complex of tube quivers_ is the \(\Bbbk[u]\)-module

\[\mathcal{C}\mathcal{T}^{d}_{(\ell)}:=\mathcal{T}^{d}_{(\ell)}\otimes_{\Bbbk}\Bbbk[u,u^{-1}]/u\Bbbk[u]\]

together with the differential \(\partial-uR\). That is, an element of homological degree \(n\) in \(\mathcal{C}\mathcal{T}^{d}_{(\ell)}\) is given by an expression

\[\vec{\Gamma}=\vec{\Gamma}^{0}+\vec{\Gamma}^{1}u^{-1}+\vec{\Gamma}^{2}u^{-2}+\cdots\]

where \(\vec{\Gamma}^{i}\) is an element of \(\mathcal{T}^{d}_{(\ell)}\) of homological degree \(n-2i\).

**Proposition 12**.: _Up to a shift, the complex \(\mathcal{C}\mathcal{T}^{d}_{(\ell)}\) calculates the homology of the point. That is,_

\[H_{\ell d}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)=\Bbbk\]

_and \(H_{i}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)=0\) for \(i\neq\ell d\)._

Proof.: Let us show the case \(d=0\); as in Proposition 11, the general case follows from that one by twisting with a line bundle that is trivial up to an overall shift. When \(d=0\), \(\mathcal{C}\mathcal{T}^{d}_{(\ell)}\) is a non-negatively graded mixed complex. So the Connes long exact sequence in low degrees splits as

\[0\to H_{2}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)\to H_{0}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)\to H_{1}(\mathcal{T}^{d=0}_{(\ell)},\partial)\to H_{1}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)\to 0\]

and

\[0\to H_{0}(\mathcal{T}^{d=0}_{(\ell)},\partial)\to H_{0}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)\to 0\]

Thus \(H_{0}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)=\Bbbk\), and since \(H_{1}(\mathcal{T}^{d=0}_{(\ell)},\partial)=\Bbbk\), it is enough to calculate that

\[H_{1}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)=0\]

This follows from the fact that the nontrivial class in \(H_{1}(\mathcal{T}^{d=0}_{(\ell)},\partial)\) can be represented by a chain in the image of \(R\).

#### 3.4.2. Action on negative cyclic homology

Recall that the negative cyclic homology of \((A,\mu)\) can be computed by the complex

\[CC_{*}^{-}(A)=(C_{*}(A)[[u]],b+uB)\]

where \(b\) is the Hochschild differential of homological degree \(-1\) (which depends on \(\mu\)) and \(B\) is the Connes differential of homological degree \(+1\) (which does not depend on \(\mu\)). Suppose now that we are given a pre-CY structure \(m\). We extend the evaluation map \(E\) to a map of \(\Bbbk[u]\)-modules

\[E|_{u^{-1}=0}:\mathcal{C}\mathcal{T}^{d}_{(\ell)}\otimes CC_{*}^{-}(A)\to C_{(\ell)}^{*}(A)\]

by taking the part of degree zero in \(u\), that is, by adding all the evaluations \(\vec{\Gamma}^{i}(\lambda_{i})\). For ease of notation let us also denote this map by \(E\).

**Proposition 13**.: _The map \(E|_{u^{-1}=0}\) is compatible with the differentials, and descends to a map in co/homology_

\[H_{*}(\mathcal{C}\mathcal{T}^{d}_{(\ell)},\partial-uR)\otimes HC_{*}^{-}(A)\to H_{(\ell)}^{*}(A).\]

Proof.: It is enough to prove that for any \(\vec{\Gamma}\in\mathcal{T}^{d}_{(\ell)}\) and \(\lambda\in C_{*}(A)\), we have

\[[\mu,E(\vec{\Gamma},\lambda)]_{\mathrm{nec}}=E((\partial-uR)\vec{\Gamma},\lambda)+(-1)^{\deg(\vec{\Gamma})}E(\vec{\Gamma},(b+uB)\lambda).\]

But we already know that

\[[\mu,E(\vec{\Gamma},\lambda)]_{\mathrm{nec}}=E(\partial\vec{\Gamma},\lambda)+(-1)^{\deg(\vec{\Gamma})}E(\vec{\Gamma},b\lambda),\]

so it remains to prove that \(E(R\vec{\Gamma},\lambda)=(-1)^{\deg(\vec{\Gamma})}E(\vec{\Gamma},B\lambda)\).
This follows from the fact that the unit of \(A\) satisfies

\[\mu^{2}(1_{A},a)=(-1)^{\bar{a}+1}\mu^{2}(a,1_{A})=a\]

for all \(a\), so inputting the result of \(B\) into a ribbon quiver in \((\mathcal{T}^{d}_{(\ell)})^{\mathrm{edge}}\) has the same result as redistributing that arrow around the other vertices of the cycle. We check that the signs in the differentials give the correct relation.

#### 3.4.3. Rotation invariant tube quivers

For any \(d\), we have the dimension \(d\) action of \(\mathbb{Z}/\ell\) on the higher Hochschild cochains \(C_{(\ell)}^{*}(A)\), as defined in [10, Sec.3.1]: in this definition, the rotation by an angle \(2\pi/\ell\) of the vertex comes with the Koszul sign together with an extra sign \((-1)^{(d-1)(\ell-1)}\). The invariants under this action, when properly shifted, define what we called the dimension \(d\) higher cyclic cochains:

\[C_{(\ell,d)}^{*}(A):=(C_{(\ell)}^{*}(A))^{(\mathbb{Z}/\ell,d)}[(d-2)(\ell-1)]\]

We now define this action analogously on the other side:

**Definition 6**.: The dimension \(d\) action of \(\mathbb{Z}/\ell\) on the complex \(\mathcal{T}^{d}_{(\ell)}\) is given by rotating the quiver by an angle of \(+2\pi/\ell\), together with cyclically permuting the output vertices in the orientation, sending

\[(o_{1}\ o_{2}\ \dots\ o_{\ell}\ \dots\ e\ s)\mapsto(o_{\ell}\ o_{1}\ o_{2}\ \dots\ o_{\ell-1}\ \dots\ e\ s).\]

This action extends \(\Bbbk[u]\)-linearly to \(\mathcal{C}\mathcal{T}^{d}_{(\ell)}\), and we denote

\[\mathcal{C}\mathcal{T}_{(\ell,d)}:=(\mathcal{C}\mathcal{T}^{d}_{(\ell)})^{(\mathbb{Z}/\ell,d)}[(d-2)(\ell-1)]\]

for its shifted invariants. We also put all of these complexes for \(\ell\geq 2\) together, into a complex

\[\mathcal{C}\mathcal{T}_{[d]}:=\prod_{\ell\geq 2}\mathcal{C}\mathcal{T}_{(\ell,d)}\]

Note that since vertices enter in the orientation with weight \((d-1)\), the sign of this permutation in the orientations is \((-1)^{(d-1)(\ell-1)}\), already accounting for the extra sign of the dimension \(d\) action. We also calculate that this action commutes with the differentials \(\partial\) and \(R\), so the map \(E|_{u^{-1}=0}\) restricts to \(\mathcal{C}\mathcal{T}_{(\ell,d)}\), and gives a map of graded vector spaces

\[\mathcal{C}\mathcal{T}_{[d]}\otimes CC_{*}^{-}(A)\to C_{[d]}^{*}(A).\]

Recall that given a pre-CY structure \(m\) on \(A\), the necklace bracket \([m,-]_{\mathrm{nec}}\) defines a differential on \(C_{[d]}^{*}(A)\); by definition this differential is a sum

\[[m,-]_{\mathrm{nec}}=d_{\mu}+[m_{(2)},-]_{\mathrm{nec}}+[m_{(3)},-]_{\mathrm{nec}}+\dots\]

where \(d_{\mu}\) is just the differential on (usual) Hochschild cochains. Each term \([m_{(k)},-]_{\mathrm{nec}}\) maps \(C_{(\ell,d)}^{*}(A)\to C_{(\ell+k-1,d)}^{*}(A)[1]\). We mimic this differential on the tube quiver side, by defining differentials

\[\partial_{k}:\mathcal{C}\mathcal{T}_{(\ell,d)}\to\mathcal{C}\mathcal{T}_{(\ell+k-1,d)}\]

by taking the necklace bracket of the tube quiver with a vertex with \(k\) outgoing legs. After checking all the signs and degrees, we conclude that:

**Proposition 14**.: _The map \(E|_{u^{-1}=0}\) is compatible with the differential \((\partial-uR+\partial_{2}+\partial_{3}+\dots)\) on \(\mathcal{C}\mathcal{T}_{[d]}\), and gives a map in co/homology_

\[H_{*}(\mathcal{C}\mathcal{T}_{[d]},\partial-uR+\partial_{2}+\partial_{3}+\dots)\otimes HC_{*}^{-}(A)\to H_{[d]}^{*}(A).\]

### Defining the Legendre transform

#### 3.5.1. The fiberwise derivative
The fiberwise derivative From the proposition above, given any closed class in \(\mathcal{C}\mathcal{T}_{[d]}\), any negative cyclic homology class, and a pre-CY structure \(m\), we produce another element \(n\) of \(C_{[d]}^{*}(A)\) satisfying the equation \([m,n]_{\mathrm{nec}}=0\). This will play the role of the _fiberwise derivative_ in the usual Legendre transform. Recall from Proposition 8 that the fiberwise derivative can be understood as the variational derivative of the Legendre transform; thus in the noncommutative case it is natural that it should land in the tangent complex \((C_{[d]}^{*}(A),[m,-]_{\mathrm{nec}})\), which calculates the tangent space of \(\mathcal{M}_{\mathrm{pre-CY}}\) at the point \(m\). **Proposition 15**.: _Any \(\partial\)-closed element of cohomological degree \(2d\) in \(\mathcal{C}\mathcal{T}_{(2)}^{d}\), invariant under the \((\mathbb{Z}_{2},d)\) action, extends to a \((\partial-uR+\partial_{2}+\partial_{3}+\dots)\)-closed element of cohomological degree \(d+2\) in \(\mathcal{C}\mathcal{T}_{[d]}\)._ Proof.: We prove this by induction in the number \(\ell\) of outgoing legs. Let us say that we have an extension \[\Gamma_{(2)}+\Gamma_{(3)}+\dots+\Gamma_{(\ell)}\] where \(\Gamma_{(\ell)}\in\mathcal{C}\mathcal{T}_{(\ell,d)}\). Unraveling the definitions and degree shifts, it means we have \[\Gamma_{(2)} =\Gamma_{(2)}^{0}\] \[\Gamma_{(3)} =\Gamma_{(3)}^{0}+\Gamma_{(3)}^{1}u^{-1}\] \[\ldots\] \[\Gamma_{(\ell)} =\Gamma_{(\ell)}^{0}+\Gamma_{(\ell)}^{1}u^{-1}+\cdots+\Gamma_{(\ell)}^{\ell-2}u^{-\ell+2}\] where \(\Gamma_{(k)}^{i}\) is of homological degree \(-dk+2k-4-2i\) in \(\mathcal{T}_{(k)}^{d}\); the terms above are the only ones that can be nonzero for degree reasons. Suppose that we have \[(\partial-uR+\partial_{2}+\ldots)(\Gamma_{(2)}+\Gamma_{(3)}+\cdots+\Gamma_{(\ell)})=0\] up to terms with more than \(\ell\) outgoing legs. The next equation to be solved, with exactly \(\ell+1\) outgoing legs, is to find \[\Gamma_{(\ell+1)}=\Gamma_{(\ell+1)}^{0}+\Gamma_{(\ell+1)}^{1}u^{-1}+\cdots+\Gamma_{(\ell+1)}^{\ell-1}u^{-\ell+1}\] solving \[(\partial-uR)\Gamma_{(\ell+1)}=\partial_{2}\Gamma_{(\ell)}+\cdots+\partial_{\ell}\Gamma_{(2)}.\] From the fact that \((\partial-uR)^{2}=0\), and using the induction hypothesis, we find that \((\partial-uR)\) applied to the right-hand side gives zero. But since this equation is in homological degree \(-d(\ell+1)+2\ell-2>-d(\ell+1)\), due to Proposition 12 it must have a solution for \(\Gamma_{(\ell+1)}\); using symmetrization under the \((\mathbb{Z}_{\ell+1},d)\) action we get the desired \(\Gamma_{(\ell+1)}\). Therefore, all we need to do is to specify a \(\partial\)-closed element \(\Gamma_{(2)}^{0}\) of \(\mathcal{T}_{(2)}^{d}\) which is \(\mathbb{Z}/2\)-invariant if \(d\) is odd and anti-invariant if \(d\) is even. **Definition 7**.: We define \(\Gamma_{(2)}\) to be the following element of homological degree \(-2d\) in \(\mathcal{T}_{(2)}^{d}\), given as a signed sum of two tube quivers (figure omitted), both with orientation \((o_{1}\ o_{2}\ v_{1}\ v_{2}\ v_{3}\ v_{4}\ v\ e_{1}\ e_{2}\ e_{3}\ e_{4}\ e_{5}\ e_{6}\ b\ e\ s)\). Note that \(\Gamma_{(2)}\) is of top cohomological degree \(+2d\), so \(\partial\Gamma_{(2)}=0\). 
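As a quick consistency check of the grading conventions (a computation spelled out here for convenience), the formula above for the degrees of the components gives, for \(k=2\) and \(i=0\), \[-dk+2k-4-2i=-2d+4-4-0=-2d,\] in agreement with the degree stated in Definition 7; equivalently, this is the top cohomological degree \(+2d\) required in Proposition 15.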
To know that it defines an element of \(\mathcal{C}\mathcal{T}_{[d]}\) (of cohomological degree \(+2d-(d-2)(2-1)=d+2\)), we must check that the generator of \(\mathbb{Z}/2\) acts on it with a sign \((-1)^{(d-1)(2-1)}=(-1)^{d-1}\); to see that, we calculate the sign of the permutation of the sequence \((v_{1}\ v_{2}\ v_{3}\ v_{4}\ v\ e_{1}\ e_{2}\ e_{3}\ e_{4}\ e_{5}\ e_{6}\ b\ e\ s)\) induced by the 180-degree rotation; note that the labels \(o_{1}\) and \(o_{2}\) stay in place, because we still want to read the outputs of these quivers starting from the right. **Definition 8**.: We define \(\Gamma=\Gamma_{(2)}+\Gamma_{(3)}+\dots\) to be the element of cohomological degree \(d+2\) of \(\mathcal{C}\mathcal{T}_{[d]}\) given by some fixed extension of \(\Gamma_{(2)}\). Note that \(\Gamma\) is only defined up to some \((\partial-uR+\partial_{2}+\partial_{3}+\dots)\)-exact term. **Lemma 16**.: _For any \(\ell\geq 2\), the class of \(\Gamma_{(\ell)}^{\ell-2}\) in \(H_{-\ell d}(\mathcal{T}_{(\ell)}^{d},\partial)\) is nonzero._ Proof.: Note that for degree reasons, the term \(\Gamma_{(\ell)}^{\ell-2}\) is the lowest homological degree term that appears in \(\Gamma_{(\ell)}\). For \(\ell=2\) it is the only one, and the claim follows from the fact that we can use the \(\partial\)-differential to move the edge \(e\) around the circle, giving a representation of \(\Gamma_{(2)}\) by either of the two quivers (figure omitted; the remainder of this proof is not recoverable from this extraction). 
**Proposition 17**.: _[Statement garbled in this extraction; from the proof below it asserts that, for \(A\) smooth and \(m\) a pre-CY structure with nondegenerate \(m_{(2)}\), the map \(Fm=\Gamma(m,-):CC_{*}^{-}(A)\to(C_{[d]}^{*}(A),[m,-]_{\mathrm{nec}})\) is a quasi-isomorphism.]_ Proof.: As in the proof of the theorem above, for degree reasons, the term of \(\Gamma\) with \(\ell\) outgoing legs is of the form \[\Gamma_{(\ell)}=\Gamma_{(\ell)}^{0}+\Gamma_{(\ell)}^{1}u^{-1}+\dots+\Gamma_{(\ell)}^{\ell-2}u^{-\ell+2}\] so for some negative cyclic class \(\lambda=\lambda_{0}+\lambda_{1}u+\lambda_{2}u^{2}+\dots\) we have \[\Gamma_{(\ell)}(\lambda)=\sum_{i=0}^{\ell-2}\Gamma_{(\ell)}^{i}(\lambda_{i})\] that is, as we increase \(\ell\), each new term of \(Fm(\lambda)\) depends only on one new term \(\lambda_{\ell-2}\). It suffices then to show that the map \[\Gamma_{(\ell)}^{\ell-2}(-):(C_{*}(A),b_{\mu})\to(C_{(\ell)}^{*}(A),[\mu,-]_{\mathrm{nec}})\] is a quasi-isomorphism. But recall that when \(A\) is smooth and \(m_{(2)}\) is nondegenerate (that is, gives a quasi-isomorphism \(A^{!}\simeq A_{\Delta}\)), we have the following quasi-isomorphisms \[(C_{*}(A),b_{\mu}) \simeq\mathrm{Hom}_{A\text{-}A}(A^{!},A_{\Delta})\simeq\mathrm{Hom}_{A\text{-}A}(A_{\Delta},A_{\Delta})\] \[(C_{(\ell)}^{*}(A),[\mu,-]_{\mathrm{nec}}) \simeq\mathrm{Hom}_{A\text{-}A}((A^{!})^{\otimes_{A}(\ell-1)},A_{\Delta})\simeq\mathrm{Hom}_{A\text{-}A}(A_{\Delta},A_{\Delta})\] That is, up to quasi-isomorphisms all these invariants are just Hochschild cohomology. We see that all tube quivers of degree \(-\ell d\) give cohomologous maps \((C_{*}(A),b_{\mu})\to(C_{(\ell)}^{*}(A),[\mu,-]_{\mathrm{nec}})\): they all involve applying the quasi-isomorphism coming from \(m_{(2)}\) a total of \(\ell\) times in sequence, which is the same operation given by the composition of the quasi-isomorphisms above. So the map \(\Gamma_{(\ell)}^{\ell-2}(-)\) is cohomologous to (a scalar multiple of) this composition. #### 3.5.2. The energy function and the Legendre transform In our discussion of the odd Legendre transform between polyvector fields and forms, we calculated that the correct analog of sending a Lagrangian function \(L\) to its energy function \(e_{L}=v_{i}\partial L/\partial v_{i}-L\) was given by sending a polyvector field \(\gamma=\gamma_{2}+\gamma_{3}+\dots\) to the polyvector field \[e_{\gamma}=\gamma_{2}+2\gamma_{3}+3\gamma_{4}+\dots\] We make the same definition in the noncommutative case: **Definition 10**.: The energy function associated to an element \(m=\mu+\sum_{\ell\geq 2}m_{(\ell)}\in C_{[d]}^{*}(A)\) is the element \[e_{m}=\sum_{\ell\geq 2}(\ell-1)m_{(\ell)}\in C_{[d]}^{*}(A).\] We calculate that \([m,e_{m}]_{\mathrm{nec}}=0\), that is, this element \(e_{m}\) gives a closed element under the differential on \(C_{[d]}^{*}(A)\). We then define the Legendre transform: **Definition 11**.: The noncommutative Legendre transform is the map \[\mathcal{L}:(\mathcal{M}_{d-\mathrm{preCY}}(A))_{\mathrm{nondeg}} \to HC_{d}^{-}(A)\] \[m \mapsto[(Fm)^{-1}(e_{m})]\] where \((\mathcal{M}_{d-\mathrm{preCY}}(A))_{\mathrm{nondeg}}\subset C_{[d]}^{2}(A)\) is the set of \(d\)-dimensional pre-CY structures \(m=\mu+m_{(2)}+m_{(3)}+\dots\in C_{[d]}^{2}(A)\) such that \(m_{(2)}\) is nondegenerate. 
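To keep the commutative model in mind, here is the simplest classical instance (an added illustration; it is not part of the noncommutative construction): for the Lagrangian \(L(v)=\frac{1}{2}v^{2}\) on a line, \[e_{L}=v\,\frac{\partial L}{\partial v}-L=v^{2}-\tfrac{1}{2}v^{2}=\tfrac{1}{2}v^{2},\qquad p=\frac{\partial L}{\partial v}=v,\qquad H(p)=e_{L}\big{|}_{v=p}=\tfrac{1}{2}p^{2}.\] The Legendre transform is thus 'the energy function, rewritten through the inverse of the fiberwise derivative', which is exactly the shape of Definition 11, with \(Fm\) playing the role of \(v\mapsto\partial L/\partial v\).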
Note that the map \(\mathcal{L}\) is _not_ linear, and strictly speaking, as a map of sets, depends on our choice of quasi-inverse \((Fm)^{-1}\). Nevertheless, in some sense, also like the ordinary Legendre transform on convex functions, it is 'one-to-one'; in the next section we explain what this means. ## 4. The nondegenerate locus We now focus on the case where \(A\) is smooth, and continue the study of the nondegenerate locus \(\left(\mathcal{M}_{d-\text{preCY}}(A)\right)_{\text{nondeg}}\) of the space of pre-CY structures on \(A\). The main result of this section is that the noncommutative Legendre transform \(\mathcal{L}\) we defined above is invertible, and gives an equivalence between nondegenerate pre-CY structures and smooth CY structures. However, this equivalence does not hold in a strict sense; its correct form is as a weak homotopy equivalence of simplicial sets, see Section 4.2. Under the assumption that the Hochschild cohomology of \(A\) is concentrated in non-negative degree, this equivalence can be more simply phrased in terms of a groupoid of solutions to the Maurer-Cartan equation, which we describe in Section 4.2.4. ### Inverting the noncommutative Legendre transform We now characterize the image of the noncommutative Legendre transform \(\mathcal{L}\). **Proposition 18**.: _Every element of \(CC_{d}^{-}(A)\) in the image of \(\mathcal{L}\) is a smooth CY structure on \(A\)._ Proof.: Let \(\lambda=\lambda_{0}+\lambda_{1}u+\cdots=\mathcal{L}(m)\) be the image under the Legendre transform of a nondegenerate pre-CY structure \(m\). The relation between \(\lambda_{0}\in C_{d}(A)\) and \(m_{(2)}\in C_{(2)}^{d}(A)\) is given by a ribbon-quiver identity (figure omitted), where as usual we evaluate by inserting \(m_{(1)}=\mu\) and \(m_{(2)}\) into the \(\bullet\) vertices, accordingly, with some sign that we omit. By using the \(\partial\)-differentials we find that \(m_{(2)}\) is itself given by a tube-quiver expression in \(\lambda_{0}\) (figure omitted). We now use the canonical coevaluation map \(\text{coev}:\Bbbk\to A_{\Delta}\otimes_{A\text{-}A}A^{!}\) on both sides, getting an equality in \(C^{*}(A,A^{!})\). Because \(m_{(2)}\) is nondegenerate, this implies an equality in \(C^{*}(A)\) between two ribbon-quiver expressions, up to a \([\mu,-]\)-exact term (figure omitted), which is exactly the chain-level description of the nondegeneracy condition on \(\lambda_{0}\). So in other words, a nondegenerate pre-CY structure on a smooth \(A_{\infty}\)-category gives a smooth CY structure on it. We would like to invert this map. Recall that in Section 3.1.2 we described the (odd) commutative version of this inverse, and showed how to calculate the inverse odd Legendre transform by describing it implicitly by the equation it solves. We now describe the noncommutative analog of this procedure, using the same tube quivers \[\Gamma_{(\ell)}=\Gamma_{(\ell)}^{0}+\Gamma_{(\ell)}^{1}u^{-1}+\cdots+\Gamma_{(\ell)}^{\ell-2}u^{-\ell+2}\] we defined in Definition 8. Suppose that we have a smooth CY structure of dimension \(d\) given by an element \[\lambda=\lambda_{0}+\lambda_{1}u+\lambda_{2}u^{2}+\cdots\in CC_{*}^{-}(A)\] As in the proof of Proposition 18 above, from our chain-level description of nondegeneracy, we know that we can find \(\alpha\in C_{(2)}^{d}(A)\) and \(\beta_{(2)}\in C_{(2)}^{d-1}(A)\) satisfying an equation (3) (figure omitted), where the vertices get assigned \(\alpha\). 
From this, we will construct a pre-CY structure \(m\) with \(m_{(1)}=\mu\) and \(m_{(2)}=\alpha\); each higher term \(m_{(\ell)}\) for \(\ell\geq 3\) can be calculated iteratively from the previous terms. To illustrate this, let us first discuss the case of \(m_{(3)}\). By definition this element must solve \[[\mu,m_{(3)}]_{\mathrm{nec}}=m_{(2)}\circ m_{(2)}\] which is an equation in \(C^{2d-1}_{(3)}(A)\). We multiply by two to express everything in terms of brackets \[2[\mu,m_{(3)}]_{\mathrm{nec}}=[m_{(2)},m_{(2)}]_{\mathrm{nec}}\] and then substitute the second \(m_{(2)}=\alpha\) factor using Eq. (3), to get \[2[\mu,m_{(3)}]_{\mathrm{nec}}=\pm\frac{1}{2}\,(\text{tube-quiver expression; figure omitted})\] In other words, \(m_{(3)}\) satisfies the equation \[[\mu,m_{(3)}]_{\mathrm{nec}}=\frac{1}{2}\left([m_{(2)},[\mu,\beta]_{\mathrm{nec}}]_{\mathrm{nec}}+(\partial_{2}\Gamma_{(2)})(\lambda)\right)\] where, as in Proposition 14, \(\partial_{2}\) is the differential on \(\mathcal{C}\mathcal{T}_{[d]}\) that increases the number of outgoing legs by one, given by taking the necklace bracket of a tube quiver with the valence \(2\) vertex. Using the graded Jacobi relation, the equation satisfied by \(m_{(2)}\) and the equation defining \(\Gamma_{(3)}\), we then have that \(m_{(3)}\) satisfies \[[\mu,m_{(3)}]_{\rm nec}=\frac{1}{2}\left([\mu,[m_{(2)},\beta]_{\rm nec}]_{\rm nec}+[\mu,\Gamma^{0}_{(3)}(\lambda_{0})+\Gamma^{1}_{(3)}(\lambda_{1})]_{\rm nec}\right) \tag{4}\] This equation is slightly misleading; it looks like we could write \[m_{(3)}=\frac{1}{2}\left([m_{(2)},\beta]_{\rm nec}+\Gamma^{0}_{(3)}(\lambda_{0})+\Gamma^{1}_{(3)}(\lambda_{1})\right)\] and compute \(m_{(3)}\) directly from \(m_{(1)}=\mu,m_{(2)}=\alpha\) and \(\lambda\), but this is not true. By counting degrees, we see that the homological degree of \(\Gamma^{0}_{(3)}\) in \(\mathcal{T}^{d}_{(3)}\) is \(-3d+2\); so among the tube quivers of that degree, there could be some that have a single vertex with \(3\) outgoing arrows, such as, for example, the vertex \(p\) of a quiver of the kind shown (figure omitted). So in order to evaluate the right-hand side we would need to already know the value of \(m_{(3)}\). Nevertheless, we prove that one can solve Eq. (4) up to cohomology; this is also true for each higher term \(m_{(\ell)}\). More precisely, we have: **Proposition 19**.: _For each \(\ell\geq 2\), the component of the Maurer-Cartan equation_ \[2[\mu,m_{(\ell)}]_{\rm nec}=\sum_{i=2}^{\ell-1}[m_{(i)},m_{(\ell-i+1)}]_{\rm nec}\] _has a solution given by_ \[m_{(\ell)}=\frac{1}{\ell-1}\left([\mu,\beta_{(\ell)}]_{\rm nec}+[m_{(2)},\beta_{(\ell-1)}]_{\rm nec}+\cdots+[m_{(\ell-1)},\beta_{(2)}]_{\rm nec}+\Gamma^{0}_{(\ell)}(\lambda_{0})+\cdots+\Gamma^{\ell-2}_{(\ell)}(\lambda_{\ell-2})\right)\] _for some element_ \[\beta_{(2)}+\beta_{(3)}+\cdots+\beta_{(\ell)}\in C^{1}_{[d]}(A)\] _where \(\beta_{(i)}\in C^{1}_{(i,d)}(A)\) depends on all previous \(\beta_{(j)}\) and \(m_{(j)}\) with \(j<i\)._ This proposition ultimately follows from a combinatorial fact about tube quivers of a specific degree, which we now explain. Note that \(\Gamma^{0}_{(\ell)}\) is the only 'problematic' term; all the other \(\Gamma^{i}_{(\ell)}\) only have vertices with \(\ell-1\) or fewer outgoing legs, so by induction we know how to evaluate them. Recall that the space \(\mathcal{C}\mathcal{T}_{(\ell,d)}\) where \(\Gamma^{0}_{(\ell)}\) lives is defined as the cyclically graded-symmetric elements of \(\mathcal{T}^{d}_{(\ell)}\) in homological degree \(-d\ell+2\ell-4\). 
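For orientation, the stated degree can be evaluated directly (small cases recorded here, anticipating the \(d=1\) circle example below): \[-d\ell+2\ell-4=-(d-2)\ell-4,\qquad\text{so for }d=1:\quad\ell=2\mapsto-2,\quad\ell=3\mapsto-1,\quad\ell=4\mapsto 0,\] consistent with the membership \(\Gamma^{0}_{(3)}\in(\mathcal{T}^{1}_{(3)})_{-1}\) used in the example of Section 4.1.1.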
We define \(\mathcal{T}^{d,<\ell}_{(\ell)}\) to be the subcomplex of \(\mathcal{T}^{d}_{(\ell)}\) spanned by the tube quivers that _do not_ have any vertex with \(\ell\) outgoing legs. We now look at the quotient complex \(\mathcal{T}^{d}_{(\ell)}/\mathcal{T}^{d,<\ell}_{(\ell)}\). In homological degree \(-d\ell+2\ell-4\), every element of \(\mathcal{T}^{d}_{(\ell)}/\mathcal{T}^{d,<\ell}_{(\ell)}\) can be represented by a linear combination of tube quivers with a single vertex with \(\ell\) outgoing legs; all other non-output vertices are 'generic', that is, either have two inputs and one output, or are sources with two outputs. **Lemma 20**.: _Any two such elements in \(\mathcal{T}^{d}_{(\ell)}/\mathcal{T}^{d,<\ell}_{(\ell)}\) are homologous, up to a sign. In other words, given any two tube quivers \(\Gamma,\Gamma^{\prime}\) as above, there is some linear combination of tube quivers \(\Gamma^{\prime\prime}\) such that_ \[\partial\Gamma^{\prime\prime}=\Gamma\pm\Gamma^{\prime}+(\text{ term in }\mathcal{T}^{d,<\ell}_{(\ell)}).\] Proof.: Let \(Q\) be any such tube quiver as above, that is, with exactly one vertex \(p\) with \(\ell\) outgoing legs and all other vertices 'generic'; for instance (figure omitted) the tube quiver for \(\ell=5\), where \(p\) is the only non-generic vertex, of homological degree \(d\times 5-d-2\times 5+4=4d-6\). We pick any edge \(e:v_{1}\to v_{2}\) (not connecting to the outputs) of \(Q\) and contract it to a vertex \(w\), giving some other tube quiver \(P\). Calculating the differential gives \[\partial P=\pm Q\pm Q^{\prime}\pm(\text{term in }\mathcal{T}^{d,<\ell}_{(\ell)})\] where \(Q^{\prime}\) is obtained from \(P\) by expanding \(w\) in another direction. We deduce this from checking separately the three possibilities: \(v_{1}=p\), \(v_{2}=p\), or \(e\) not incident to \(p\) (example figure omitted). But we can go from any of these tube quivers \(Q\) in \(\mathcal{T}^{d}_{(\ell)}/\mathcal{T}^{d,<\ell}_{(\ell)}\) to any other one by a sequence of edge contractions and expansions. So we can find a sequence of \(P\)s going from \(\Gamma\) to \(\Gamma^{\prime}\) whose sum \(\Gamma^{\prime\prime}\) solves the desired equation. Proof.: (of Proposition 19) We prove this by induction. The case \(\ell=2\) is just the chain-level description of nondegeneracy, so it follows from the assumption that \(\lambda_{0}\) is nondegenerate. For any fixed \(\ell\) we write the component of the Maurer-Cartan equation \[2[\mu,m_{(\ell)}]_{\text{nec}}=\sum_{i=2}^{\ell-1}[m_{(i)},m_{(\ell-i+1)}]_{\text{nec}}\] and using the result for \(m_{(j)}\) with \(j<\ell\) we get that \(m_{(\ell)}\) satisfies the equation \[(\ell-1)[\mu,m_{(\ell)}]_{\text{nec}}=[\mu,\Big{(}[m_{(2)},\beta_{(\ell-1)}]_{\text{nec}}+\cdots+[m_{(\ell-1)},\beta_{(2)}]_{\text{nec}}+\Gamma^{0}_{(\ell)}(\lambda_{0})+\cdots+\Gamma^{\ell-2}_{(\ell)}(\lambda_{\ell-2})\Big{)}]_{\text{nec}}\] But by Lemma 20, we can find a solution to \[\Gamma^{0}_{(\ell)}=\Theta+\partial\Gamma^{\prime}+(\text{term in }\mathcal{T}^{d,<\ell}_{(\ell)})\] where \(\Theta\) is the specific 'problematic quiver' of the form shown (figure omitted), with one vertex \(p\) with \(\ell\) outgoing arrows (in the omitted drawing, \(\ell=6\)). 
We now rearrange the equation as \[[\mu,(\ell-1)m_{(\ell)}-\Theta(\lambda_{0})]_{\rm nec}=[\mu,\Big{(}[m_{(2)},\beta_{(\ell-1)}]_{\rm nec}+\cdots+[m_{(\ell-1)},\beta_{(2)}]_{\rm nec}+(\Gamma^{0}_{(\ell)}-\Theta)(\lambda_{0})+\cdots+\Gamma^{\ell-2}_{(\ell)}(\lambda_{\ell-2})\Big{)}]_{\rm nec}\] where now the right-hand side does not have any vertices with \(\ell\) or more outgoing legs, so we can evaluate it using the \(m_{(j)}\) that we already know. As for the left-hand side, by the nondegeneracy condition we know that the 'bubble' to the right of \(\Theta\) evaluates to a cochain that is cohomologous to the unit cochain \(1\in C^{0}(A)\). Thus, if we replace \(\Theta(\lambda_{0})\) by \(m_{(\ell)}\) in the equation above and solve for \(m_{(\ell)}\), we will find an element that solves \[(\ell-1)m_{(\ell)}=[m_{(2)},\beta_{(\ell-1)}]_{\rm nec}+\cdots+[m_{(\ell-1)},\beta_{(2)}]_{\rm nec}+\Gamma^{0}_{(\ell)}(\lambda_{0})+\cdots+\Gamma^{\ell-2}_{(\ell)}(\lambda_{\ell-2})+([\mu,-]_{\rm nec}\text{-exact terms})\] so we pick \(\beta_{(\ell)}\) to be a primitive of the \([\mu,-]_{\rm nec}\)-exact terms. Repackaging the statement of Proposition 19, by picking an appropriate element \(\beta=\sum_{i\geq 2}\beta_{(i)}\in C^{1}_{[d]}(A)\), we get a map to pre-CY structures: **Definition 12**.: The map \(\Phi:(CC^{-}_{d}(A))_{\rm nondeg}\to(\mathcal{M}_{d-{\rm preCY}}(A))_{\rm nondeg}\subset C^{2}_{[d]}(A)\) is defined by sending \(\lambda\mapsto m=\mu+m_{(2)}+\dots\), where the \(m_{(i)}\) are defined using the tube quivers \(\Gamma_{(i)}\) and an appropriately chosen element \(\beta\). By definition, the image \(m\) satisfies the equations \(m\circ m=0\) and \(e_{m}=\Gamma(m,\lambda)+[m,\beta]_{\rm nec}\), with the 'energy function' \(e_{m}=\sum_{\ell\geq 2}(\ell-1)m_{(\ell)}\). In other words, \(\Phi\) maps smooth CY structures of dimension \(d\) to pre-CY structures of dimension \(d\) with nondegenerate \(m_{(2)}\). #### 4.1.1. Example: the circle, continued In order to illustrate the statement of Proposition 19 in action, we return to the simple example discussed in Section 2.2.4, that is, the dg category \(A\) corresponding to the circle (more precisely, to its realization as the boundary of the 2-simplex). Recall that the negative cyclic chain representing the smooth CY structure is given by \[\lambda=01[10]+12[21]-02[20]+(-01[10|01|10]-12[21|12|21]+02[20|02|20])u+\cdots\in C_{*}(A)[[u]]\] and the element \(\alpha\in C^{*}_{(2)}(A)\) inverse to the \(u^{0}\) component \(\lambda_{0}\) is given by an explicit formula (figure omitted). Together with its \(\mathbb{Z}/2\)-rotation, this specifies all the nontrivial values of \(\alpha\). By the previous results of this section, the element \(\alpha\) is the first component \(m_{(2)}\) of a pre-CY structure. Each higher component \(m_{(k)}\) has cohomological degree \(dk-d-2k+4=3-k\) since \(d=1\). Since \(A\) is concentrated in degree zero, we have \(C^{3-k}_{(k)}(A)=0\) for \(k\geq 4\), so the terms \(m_{(\geq 4)}\) are all zero. Moreover, the only component of \(m_{(3)}\) that could be nontrivial is the term \(m_{(3)}^{0,0,0}\) with zero inputs. By Proposition 19, we can find linear combinations of tube quivers \(\Gamma^{0}_{(3)},\Gamma^{1}_{(3)}\), both with three outputs, such that a solution for the equation \(2[\mu,m_{(3)}]=[m_{(2)},m_{(2)}]\) is given by \[m_{(3)}=\frac{1}{2}(\Gamma^{0}_{(3)}(\lambda_{0})+\Gamma^{1}_{(3)}(\lambda_{1}))\] Let us start from the second term. 
We know that the terms in \(\Gamma^{1}_{(3)}\) are of maximum (cohomological) degree among the tube quivers with three outgoing legs; all of those are cohomologous and we can pick any of those representatives; so \(\Gamma^{1}_{(3)}(\lambda_{1})\) is given by a single diagram (figure omitted) where we input \(\lambda_{1}=01[10|01|10]+12[21|12|21]-02[20|02|20]\). Since \(\alpha\) is only nonzero when there is exactly one arrow as input, the only nonzero terms we get upon evaluating the diagram are when sending one arrow to each \(\alpha\). We evaluate this diagram separately for each choice of labeling of the three regions around the circle with the objects \(0,1,2\); we see that if the three labels are the same, we get cancelling contributions, and that the only nonzero terms happen when exactly two labels are the same; we obtain the values shown (figure omitted). The first term \(\Gamma^{0}_{(3)}(\lambda_{0})\) is more difficult to compute, since the combination of tube quivers \(\Gamma^{0}_{(3)}\) has a complicated expression. However, in this case we can calculate it without expressing this entire combination. Recall from Section 3.3.2 that we have a decomposition \[\mathcal{T}_{(\ell)}=(\mathcal{T}_{(\ell)})^{\text{edge}}\oplus(\mathcal{T}_{(\ell)})^{\text{vertex}}\] between tubes whose central arrow lands onto an edge or a vertex of the circle. For any choice of dimension \(d\) (which puts a grading on this space, and defines the signs of the differentials), the differential \(\partial\) preserves \((\mathcal{T}_{(\ell)})^{\text{edge}}\), and decomposes as \(\partial=\partial^{\text{edge}}+\partial^{\text{vertex}}\) on \((\mathcal{T}_{(\ell)})^{\text{vertex}}\), where \(\partial^{\text{vertex}}\) preserves this space. In our case, \(d=1\) and the homological degrees of \(\mathcal{T}^{1}_{(3)}\) lie in \(-3,-2,-1,0,1,2\), and this decomposition becomes: \[(\mathcal{T}^{1}_{(3)})_{-3} =(\mathcal{T}^{1}_{(3)})_{-3}^{\text{edge}}\] \[(\mathcal{T}^{1}_{(3)})_{-2} =(\mathcal{T}^{1}_{(3)})_{-2}^{\text{edge}}\oplus(\mathcal{T}^{1}_{(3)})_{-2}^{\text{vertex}}\] \[\ldots\] \[(\mathcal{T}^{1}_{(3)})_{2} =(\mathcal{T}^{1}_{(3)})_{2}^{\text{vertex}}\] The element \(\Gamma^{0}_{(3)}\in(\mathcal{T}^{1}_{(3)})_{-1}\) decomposes under the direct sum above as \((\Gamma^{0}_{(3)})^{\text{edge}}+(\Gamma^{0}_{(3)})^{\text{vertex}}\). We observe that inputting the choice of \(\alpha\) above into the 2-valent vertices of any diagram in \((\mathcal{T}^{1}_{(3)})_{-1}^{\text{edge}}\) gives zero. Thus the value we want is determined by \((\Gamma^{0}_{(3)})^{\text{vertex}}\). The equation satisfied by \(\Gamma_{(3)}\) says that \(\partial\Gamma^{0}_{(3)}=-R(\Gamma^{1}_{(3)})\); together with the long exact sequence in cohomology associated to the short exact sequence \((\mathcal{T}^{1}_{(3)})^{\text{edge}}\to(\mathcal{T}^{1}_{(3)})\to(\mathcal{T}^{1}_{(3)})^{\text{vertex}}\), these facts imply that we can pick any solution \(X\) to the equation \[\partial^{\text{vertex}}(X)=R(\Gamma^{1}_{(3)})\] and the term \(\Gamma^{0}_{(3)}(\lambda_{0})\) will be equal to \(X(\lambda_{0})\). 
We now provide such a solution, given by \(X=R(Y)\), where \(Y\) is a linear combination, with overall coefficient \(\frac{1}{6}\), of six tube quivers (figure omitted).⁴ Footnote 4: We omit the orientations in the expression for \(Y\), but they must be chosen coherently. Evaluating \(X(\lambda_{0})\) with the given value of \(\alpha\), and labels \(i,i,j\neq i\) on the three regions, gives us 18 non-zero terms, all equal; taking into account the \(1/6\) factor gives \[\begin{cases}-\frac{3}{8}(ij)\otimes e_{i}\otimes ji,&\text{ if }(ij)\text{ is counter-clockwise}\\ \ \ \frac{3}{8}(ij)\otimes e_{i}\otimes ji,&\text{ if }(ij)\text{ is clockwise}\end{cases}\] with all other cases zero. Thus we get that the element \(m_{(3)}=\frac{1}{2}(\Gamma^{0}_{(3)}(\lambda_{0})+\Gamma^{1}_{(3)}(\lambda_{1}))\) we wanted to calculate is given by an explicit combination of such terms (figure omitted). One can check that this element satisfies \([\mu,m_{(3)}]+\alpha\circ\alpha=0\) and \([\alpha,m_{(3)}]=0\). In other words, we have a fully explicit description of the pre-CY structure on this dg category: **Corollary 21**.: _Taking \(m_{(2)}=\alpha\) and \(m_{(3)}\) as above, and \(m_{(\geq 4)}=0\), defines a pre-CY structure of dimension 1 on \(A\)._ ### The simplicial lift It is clear that the map \(\Phi\) we constructed above should be some sort of inverse to the noncommutative Legendre transform \(\mathcal{L}\) of Definition 11. But this is not true strictly; for one, the definitions of both \(\mathcal{L}\) and \(\Phi\) involve making choices. Also, the two sides related by these maps look different: the locus of nondegenerate elements in negative cyclic homology is a conical locus inside of a linear space, while the set of Maurer-Cartan solutions is the solution set of a quadratic equation. However, looking at the iterative way in which we constructed these maps, we see that in each step we had to solve an equation relating _linearly_ one new component \(\lambda_{k-2}\) of the negative cyclic chain \(\lambda\) to one new component \(m_{(k)}\) of the pre-CY structure \(m\). Moreover, this relation came from a quasi-isomorphism between these linear spaces of choices; the space \(\left(\mathcal{M}_{d-\mathrm{preCY}}(A)\right)_{\mathrm{nondeg}}\) should have the structure of an iterated fibration of linear spaces, with the fiber at each step related by a quasi-isomorphism to a graded piece of \(CC^{-}_{*}(A)\). It turns out that this can be made precise by using the theory of simplicial sets of solutions to Maurer-Cartan equations, as developed in [11, 10], among others. We will show that the map \(\Phi\) we constructed in the previous section admits a simplicial lift to a weak equivalence of simplicial sets. #### 4.2.1. The simplicial Maurer-Cartan set Let \((\mathfrak{g}^{*},\delta)\) be a nilpotent dg Lie algebra over \(\Bbbk\). One can look at its naive set of solutions to the Maurer-Cartan equation \[MC(\mathfrak{g}^{*})=\{y\in\mathfrak{g}^{1}\ |\ \delta y+[y,y]/2=0\}\] which in principle has only the structure of a set. Following the exposition in [10], we recall how to upgrade this to a simplicial set. 
For any \(n\geq 0\), denote by \[\Omega^{*}(\Delta^{n})=\Bbbk[t_{0},\ldots,t_{n},dt_{0},\ldots,dt_{n}]/\langle t_{0}+\cdots+t_{n}-1,\ dt_{0}+\cdots+dt_{n}\rangle\] the graded commutative dg algebra of polynomial differential forms on the \(n\)-simplex. Here, the \(t_{i}\) are in degree zero and the \(dt_{i}\) are in degree one; the differential is \(d(t_{i})=dt_{i}\) and \(d(dt_{i})=0\). The following proposition/definition is due to Hinich [11]. **Proposition 22**.: _There is a simplicial set \(MC_{*}(\mathfrak{g}^{*})\) whose \(n\)-simplices are given by_ \[MC_{n}(\mathfrak{g}^{*})=MC(\mathfrak{g}^{*}\otimes\Omega^{*}(\Delta^{n})),\] _that is, by the solution set of the Maurer-Cartan equation \((\delta+d)y+[y,y]/2=0\) on the dg Lie algebra of \(\mathfrak{g}\)-valued forms on the simplex._ _Remark_.: In the literature on this topic, the name 'Maurer-Cartan' is applied to two different formulations of the equation; in order to avoid confusion let us be clear about their relation. On any _dg Lie algebra_\((\mathfrak{g}^{*},d)\), one can look at the equation \(dx+[x,x]/2=0\), and on any _graded Lie algebra_\(\mathfrak{h}^{*}\) one can look at the equation \([y,y]=0\), as we did earlier in this paper. Both are often called the Maurer-Cartan equation; the relation is that if \(\mathfrak{g}^{*}\subset\mathfrak{h}^{*}\) as graded Lie algebras and there is an element \(\mu\in\mathfrak{h}^{1}\) such that \([\mu,\mu]=0\) and \(dx=[\mu,x]\) for all \(x\in\mathfrak{g}^{*}\), then the two equations are equivalent if we look for a solution of the form \(y=\mu+x\). In our case, we have \[\mathfrak{g}^{*}=\prod_{\ell\geq 2}C^{*}_{(\ell,d)}(A),\qquad\mathfrak{h}^{*}=C^{*}_{[d]}(A)=\prod_{\ell\geq 1}C^{*}_{(\ell,d)}(A)\] with \(\mu\) given by our fixed \(A_{\infty}\)-structure; the solution \(x\) is then the sum of all the \(m_{(\ell)}\) with \(\ell\geq 2\), and \(y\) is the full pre-CY structure \(m\), also including \(m_{(1)}=\mu\). The set of zero-simplices is exactly our naive set \(MC(\mathfrak{g}^{*})\). We would like to apply this to the dg Lie algebra \(\mathfrak{g}^{*}=\prod_{\ell\geq 2}C^{*}_{(\ell,d)}(A)\). This does not quite make sense as stated, since this algebra is not nilpotent. Nevertheless, note that we can truncate it at any finite \(\ell\) and obtain a nilpotent algebra, and that the Maurer-Cartan solutions we want are a limit over solutions on these truncated algebras. Let us be more precise. Suppose that we have a graded Lie algebra \(\mathfrak{a}^{*}\), endowed with a descending filtration \[\mathfrak{a}^{*}=F^{0}\mathfrak{a}^{*}\supset F^{1}\mathfrak{a}^{*}\supset F^{2}\mathfrak{a}^{*}\supset\ldots\] with the property that for \(x\in F^{i}\mathfrak{a}^{*},y\in F^{j}\mathfrak{a}^{*}\), we have \([x,y]\in F^{i+j}\mathfrak{a}^{*}\). We then consider the completions under the filtration \(F\) \[\mathfrak{g}^{*}=\varprojlim_{i\geq 1}F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*},\qquad\mathfrak{h}^{*}:=\widehat{\mathfrak{a}^{*}}=\varprojlim_{i\geq 0}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\] Note that each truncated piece \(F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\) is a nilpotent graded Lie algebra, and we have a natural injection \(\mathfrak{g}^{*}\subset\mathfrak{h}^{*}\). 
If we have an element \(\mu\in\mathfrak{h}^{1}\) such that \([\mu,\mu]=0\) and such that \([\mu,-]\) preserves the filtration, this defines a differential on each \(F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\), and each map \(F^{1}\mathfrak{a}^{*}/F^{i+1}\mathfrak{a}^{*}\to F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\) is a surjection of nilpotent dg Lie algebras. By [11], this induces a Kan fibration \[MC_{*}(F^{1}\mathfrak{a}^{*}/F^{i+1}\mathfrak{a}^{*})\to MC_{*}(F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*})\] between the Maurer-Cartan simplicial sets. **Definition 13**.: The Maurer-Cartan simplicial set of the dg Lie algebra \(\mathfrak{g}^{*}\) is the limit of simplicial sets \[MC_{*}(\mathfrak{g}^{*}):=\varprojlim_{i\geq 0}MC_{*}(F^{1}\mathfrak{a}^{*}/F^{i+1}\mathfrak{a}^{*}).\] The case we are interested in is when \[\mathfrak{a}^{*}=\bigoplus_{\ell\geq 1}C^{*}_{(\ell,d)}(A)[1]\] endowed with the descending filtration \[F^{i}\mathfrak{a}^{*}=\bigoplus_{\ell\geq i+1}C^{*}_{(\ell,d)}(A)[1].\] Then we have that the graded Lie algebra \(\mathfrak{h}^{*}\) is exactly \(C^{*}_{[d]}(A)[1]\), which is the graded Lie algebra where pre-CY structures of dimension \(d\) live. The condition on the element \(\mu\) is exactly the condition for an \(A_{\infty}\)-structure on \(A\); taking \(\mathfrak{g}^{*}\) to be the dg Lie algebra where the rest of the pre-CY structure lives, with differential \([\mu,-]\), we get an identification between the set of zero-simplices \(MC_{0}(\mathfrak{g}^{*})\) and the naive set of \(d\)-pre-CY structures \(\mathcal{M}_{d-\mathrm{preCY}}(A)\) of the previous section. #### 4.2.2. The Deligne groupoid We now recall another type of structure on the solutions to the Maurer-Cartan equation, the 'Deligne groupoid'. Let us describe it in the graded Lie algebra picture. If \(\mathfrak{n}^{*}\) is a nilpotent graded Lie algebra, there is an exponential action of its degree zero part \[e^{\mathrm{ad}(-)}:\mathfrak{n}^{0}\times\mathfrak{n}^{*}\to\mathfrak{n}^{*}\] given by \[e^{\mathrm{ad}(x)}(y)=y+[x,y]+\frac{1}{2!}[x,[x,y]]+\frac{1}{3!}[x,[x,[x,y]]]+\ldots\] This exponential action extends to an action of the group-like elements of the (completed) universal enveloping algebra \(\widehat{U}(\mathfrak{n}^{0})\). Note now that \(e^{\mathrm{ad}(x)}\) preserves the solution set of the (graded) Maurer-Cartan equation \[MC(\mathfrak{n}^{*})=\{y\in\mathfrak{n}^{1}\ |\ [y,y]=0\}\] since the adjoint action preserves the equation; therefore we can regard \(MC(\mathfrak{n}^{*})//\mathfrak{n}^{0}\) as a groupoid. Again we must be a little careful, because we want to apply this formalism to the graded Lie algebra \(C^{*}_{[d]}(A)[1]\), which is not nilpotent. Considering again the case of the dg Lie algebra \[\mathfrak{g}^{*}=\varprojlim_{i\geq 1}F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\] with differential \([\mu,-]\) sitting inside of the graded algebra \[\mathfrak{h}^{*}=\varprojlim_{i\geq 1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\] we see that the exponential action of \(\mathfrak{g}^{0}\) is well-defined on each truncated piece \(F^{1}\mathfrak{a}^{*}/F^{i}\mathfrak{a}^{*}\) and therefore can be lifted to an action on the graded Lie algebra \(\mathfrak{h}^{*}\), preserving the Maurer-Cartan equation. Therefore we also have a groupoid \(MC(\mathfrak{h}^{*})//\mathfrak{g}^{0}\). 
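To see concretely what the simplicial enrichment adds, consider 1-simplices (a standard computation; the signs depend on conventions and are left unspecified here). Since \(\Omega^{*}(\Delta^{1})\cong\Bbbk[t,dt]\), a 1-simplex of \(MC_{*}(\mathfrak{n}^{*})\) is an element \(y(t)+z(t)\,dt\) with \(y(t)\in\mathfrak{n}^{1}\otimes\Bbbk[t]\) and \(z(t)\in\mathfrak{n}^{0}\otimes\Bbbk[t]\), and the Maurer-Cartan equation splits into \[\delta y(t)+\tfrac{1}{2}[y(t),y(t)]=0,\qquad\frac{dy}{dt}=\pm\big{(}\delta z(t)+[y(t),z(t)]\big{)},\] that is, a polynomial path of Maurer-Cartan elements together with the infinitesimal gauge transformation \(z(t)\) generating it. For constant \(z(t)=x\), the time-one flow recovers (up to these sign conventions) the exponential action \(e^{\mathrm{ad}(x)}\) above; this is the mechanism behind Proposition 23 below.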
We choose this notation (with both \(\mathfrak{h}^{*}\) and \(\mathfrak{g}^{*}\)) to remind us that the action of \(\mathfrak{g}^{*}\) also involves the element \(\mu\in\mathfrak{h}^{*}\), producing higher order terms \([x,\mu]\), \(\frac{1}{2}[x,[x,\mu]]\), etc., but does not change \(\mu\) itself. So we can look for solutions of the Maurer-Cartan equation on \(\mathfrak{h}^{*}\) of the form \(\mu+x\) where \(x\in F^{1}\mathfrak{h}^{*}\); this gives a subgroupoid \(MC(\mathfrak{h}^{*},\mu)//\mathfrak{g}^{0}\). From comparing this groupoid to the simplicial set we defined before, in the case where the algebra in question is supported in non-negative degrees, we have the following fact [1]: **Proposition 23**.: _If \(\mathfrak{g}^{*}\) vanishes in negative degrees, there is a natural bijection of sets_ \[\pi_{0}(MC_{*}(\mathfrak{g}^{*}))\cong\pi_{0}(MC(\mathfrak{h}^{*},\mu)//\mathfrak{g}^{0})\] _between the connected components of the Maurer-Cartan simplicial set and the set of orbits of the Deligne groupoid._ #### 4.2.3. The simplicial equivalence Let us return to the Maurer-Cartan simplicial set and focus on the case of interest \(\mathfrak{g}^{*}=\prod_{\ell\geq 2}C^{*}_{(\ell,d)}(A)\) for smooth \(A\). We now prove the main result of this section, lifting the map \(\Phi:CC^{-}_{d}(A)\to(\mathcal{M}_{d-\mathrm{preCY}}(A))_{\mathrm{nondeg}}\) to a weak simplicial equivalence. The target for this lift is evidently the nondegenerate locus in the Maurer-Cartan simplicial set corresponding to the dg Lie algebra above: \[\mathcal{M}^{\Delta}_{d-\mathrm{preCY}}(A):=MC_{*}(\prod_{\ell\geq 2}C^{*}_{(\ell,d)}(A))\] The source is given by replacing the negative cyclic chain complex by its corresponding simplicial set under the Dold-Kan correspondence. Recall the Kan functor \[K_{*}:\mathrm{Ch}_{\geq 0}(\mathrm{Ab})\to\mathrm{sAb}\] which gives an equivalence between chain complexes of abelian groups supported in non-negative degree and simplicial abelian groups. We can further forget the abelian group structure and get a simplicial set. The functor \(K_{*}\) assigns to the chain complex \((V,\delta)\) the \(n\)-simplices \[K_{n}(V)=Z^{0}(C^{*}(\Delta^{n})\otimes V,d+\delta)\] where \((C^{*}(\Delta^{n}),d)\) is the normalized simplicial cochain complex on the \(n\)-simplex. One possible representation for this complex is in terms of _linear differential forms_ \[\omega_{i_{0},\dots,i_{k}}=k!\sum_{0\leq j\leq k}(-1)^{j}t_{i_{j}}dt_{i_{0}}\dots\widehat{dt_{i_{j}}}\dots dt_{i_{k}}\] for any \(0\leq k\leq n\). We now consider the chain complex \(\tau_{\geq 0}(CC^{-}_{*+d}(A))\), i.e. the object of \(\mathrm{Ch}_{\geq 0}(\mathrm{Ab})\) given by shifting the negative cyclic complex down by \(d\) and truncating it to lie in non-negative degrees. A degree zero cycle in this complex is represented by a negative cyclic chain of degree \(d\) \[\lambda=\lambda_{0}+\lambda_{1}u+\lambda_{2}u^{2}+\dots\] closed under \(b+uB\), where \(\lambda_{i}\in C_{d+2i}(A)\). Let us fix a choice of a nondegenerate 'first component' \(\lambda_{0}=\nu\) and its inverse \(\alpha\in C^{d}_{(2)}(A)\), representing inverse morphisms of bimodules in \(\mathrm{Hom}_{A-A}(A^{!},A[d])\) and \(\mathrm{Hom}_{A-A}(A[d],A^{!})\), respectively. We now consider simplicial subsets on each side by requiring the 'first components' to be constant simplices at \(\lambda_{0}=\nu\) and \(m_{(2)}=\alpha\). Let us describe this more precisely. 
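In low degrees, the formula for these linear forms reads (a direct substitution, recorded for convenience): \[\omega_{i_{0}}=t_{i_{0}},\qquad\omega_{i_{0},i_{1}}=t_{i_{0}}\,dt_{i_{1}}-t_{i_{1}}\,dt_{i_{0}},\] so the \(\omega\)'s are the classical Whitney forms; the embedding of normalized simplicial cochains into \(\Omega^{*}(\Delta^{n})\) used in the proof of Theorem 24 below sends the cochain dual to a face \(\{i_{0},\dots,i_{k}\}\) to \(\omega_{i_{0},\dots,i_{k}}\).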
On the source side \(K_{*}(\tau_{\geq 0}(CC^{-}_{*+d}(A)))\), we decompose each \(n\)-simplex \(\sigma\) as a sum over powers of \(u\) and over the basis of forms \(\{\omega_{i_{0},\dots,i_{k}}\}\) \[\sigma(\underline{t})=\sum_{p\geq 0}\ \sum_{k,\{i_{0},\dots,i_{k}\}}\lambda_{p,k,\{i_{0},\dots,i_{k}\}}\,u^{p}\otimes\omega_{i_{0},\dots,i_{k}}\] Given any such \(n\)-simplex, we can require that its \(p=0\) component (that is, its 'value' at \(u=0\)) be constant along the simplicial coordinates \(t_{i}\), and equal to \(\nu\). That is, we require \(\lambda_{0,k,\{i_{0},\dots,i_{k}\}}=0\) for all \(k>0\) and \(\lambda_{0,0,\{i_{0}\}}=\nu\) for every vertex \(i_{0}\). This condition defines a simplicial subset of \(K_{*}(\tau_{\geq 0}(CC^{-}_{*+d}(A)))\), which we denote by \(K_{*}(\tau_{\geq 0}(CC^{-}_{*+d}(A)))_{\lambda_{0}=\nu}\). On the other side, we can do the same thing and fix the 'first component' \(m_{(2)}\) to be constant and equal to our chosen quasi-inverse \(\alpha\). Each \(n\)-simplex of \(\mathcal{M}^{\Delta}_{d-\mathrm{preCY}}(A)\) is a solution to the Maurer-Cartan equation on the dg Lie algebra \(\prod_{\ell\geq 2}C^{*}_{(\ell,d)}(A)\otimes\Omega^{*}(\Delta^{n})\). Here \(\Omega^{*}(\Delta^{n})\) is spanned by polynomial differential forms in the simplicial coordinates \(t_{0},\dots,t_{n}\); we take the \(\ell=2\) part of the solution and demand that it be of degree zero and constant as a form, equal to \(\alpha\). This again defines a simplicial subset which we denote \(\mathcal{M}^{\Delta}_{d-\mathrm{preCY}}(A)_{m_{(2)}=\alpha}\). We are now ready to phrase the main result of this section. Recall that the map of sets \(\Phi\) we defined in Definition 12 takes a nondegenerate negative cyclic chain \(\lambda=\lambda_{0}+\dots\) whose first term \(\lambda_{0}=\nu\) has a quasi-inverse \(\alpha\) and gives a pre-CY structure \(m=\mu+m_{(2)}+\dots\) with \(m_{(2)}=\alpha\). **Theorem 24**.: _The map \(\Phi\) lifts to a weak equivalence of simplicial sets_ \[\Phi^{\Delta}:K_{*}(\tau_{\geq 0}(CC^{-}_{*+d}(A)))_{\lambda_{0}=\nu}\xrightarrow{\simeq}\mathcal{M}^{\Delta}_{d-\mathrm{preCY}}(A)_{m_{(2)}=\alpha}\] _Taking connected components and putting the resulting bijections together for each pair of inverse classes \([\lambda]\) and \([\alpha]\), we get a bijection of sets_ \[HC^{-}_{d}(A)_{\mathrm{nondeg}}\simeq\pi_{0}(\mathcal{M}^{\Delta}_{d-\mathrm{preCY}}(A)_{\mathrm{nondeg}})\] _between (classes of) smooth CY structures and connected components of the space of nondegenerate pre-CY structures, both of dimension \(d\)._ Proof.: Note that on the left hand side we have the normalized cochains on the simplex, while on the right-hand side we have differential forms; using the representatives \(\omega_{i_{0},\dots,i_{k}}\) above, we embed the former into the latter. Given that embedding, to construct the simplicial lift \(\Phi^{\Delta}\), we simply extend the evaluation map of ribbon quivers linearly over \(\Omega^{*}(\Delta^{n})\), and use the same formulas we did for defining \(\Phi\). For example, given \(m_{(2)}\), in the proof of Proposition 19 we showed that we can find a solution for \(m_{(3)}\) of the form \[m_{(3)}=\tilde{\Gamma}^{0}_{(3)}(\lambda_{0})+\Gamma^{1}_{(3)}(\lambda_{1})+[\mu,\beta_{(3)}]+[m_{(2)},\beta_{(2)}]\] where the ribbon quivers in \(\tilde{\Gamma}^{0}_{(3)}\) and \(\Gamma^{1}_{(3)}\) only have vertices with \(1\) and \(2\) outgoing arrows. 
Given an \(n\)-simplex on the left hand side given by a linear form \(\lambda(\underline{t})\), we can input this instead of \(\lambda\) and get a form \(m_{(3)}(\underline{t})\); by the same argument as we used to prove Proposition 19, but now extended linearly over differential forms, this form will satisfy the equation \[[\mu+d,m_{(3)}(\underline{t})]=[m_{(2)},m_{(2)}]\] which is the new component of the Maurer-Cartan equation on the truncated piece \(\left(C^{*}_{[d]}(A)[1]/\prod_{i>3}C^{*}_{(i,d)}(A)[1]\right)\otimes\Omega^{*}(\Delta^{n})\). We continue this iteratively for \(\ell=3,4,\dots\), and in each step we get some polynomial differential forms \(m_{(\ell)}(\underline{t})\) solving a new component of that equation, with \(m_{(2)}\) fixed to be constant with value \(\alpha\). It follows from the compatibility of evaluation with all the differentials involved that at each new step this defines a map of simplicial sets. Recall that from the definition of \(\Phi\), at the step \(\ell\) this map depends only on \(\lambda\) up to the term with \(u\)-exponent \(\ell-2\). These maps intertwine the maps induced by truncation, so we get a map between towers of simplicial sets, and the desired map \(\Phi^{\Delta}\) is the map induced between the limits of these towers. We now prove that the map \(\Phi^{\Delta}\) so defined is a weak equivalence of simplicial sets, by comparing the two towers (diagram omitted). Each horizontal map is a Kan fibration, so it is enough to prove that each vertical map is a weak equivalence. We do this by induction; the last column is actually just the identity map of the point (seen as the totally degenerate \(n\)-simplices); this is because we fixed by hand the \(\lambda_{0}\) and \(m_{(2)}\) components to be exactly \(\nu\) and \(\alpha\). For the induction step we focus on a single square (diagram omitted). If the right column is a weak equivalence, by [10] it is enough to show that the left vertical map induces weak equivalences for each pair of fibers of the horizontal maps over _points_ (i.e. \(0\)-simplices). In down-to-earth terms, we have an 'actual' solution of the Maurer-Cartan equation on \(C^{*}_{[d]}(A)[1]\) up to the term \(m_{(i)}\), corresponding to a truncated negative cyclic chain \(\lambda=\lambda_{0}+\lambda_{1}u+\dots+\lambda_{i-2}u^{i-2}\). We then look at the map \(\Phi^{\Delta}\) applied to a differential form \[\lambda=\lambda_{0}+\lambda_{1}u+\dots+\lambda_{i-2}u^{i-2}+\lambda_{i-1}(\underline{t})u^{i-1}\] where \(\lambda_{i-1}(\underline{t})\) is linear in the \(t_{i}\) coordinates on the \(n\)-simplex. The only dependence on this form is in the evaluation of the last ribbon quiver in \(\Gamma_{(i+1)}\), that is, the term \[\Gamma_{(i+1)}^{i-1}(\lambda_{i-1}(\underline{t}))\] so we are reduced to proving that the operation \(\Gamma_{(i+1)}^{i-1}(-)\), seen as a map \[C_{*}(A)\otimes C^{*}(\Delta^{n})\to C^{*}_{(i+1)}(A)\otimes\Omega^{*}(\Delta^{n})\] defines a weak equivalence between closed forms and forms satisfying the \((i+1)\)-st component of the Maurer-Cartan equation. This follows from the proof of Proposition 17 together with the fact that the inclusion of normalized simplicial cochains into differential forms is a homotopy retract (a simplicial version of the de Rham theorem). #### 4.2.4. Special case: the groupoid Theorem 24 holds for any smooth \(A_{\infty}\) algebra/category \(A\), without any assumptions on degrees. 
Now recall that if a dg Lie algebra \(\mathfrak{g}\) vanishes in negative degrees, its set of Maurer-Cartan solutions admits an equivalent, simpler description than the full simplicial set, given by the Deligne groupoid \(MC(\mathfrak{g})//\mathfrak{g}^{0}\). The following result can be seen as a slight refinement of Theorem 24 in the case where \(A\) has vanishing Hochschild cohomology in negative degrees. **Theorem 25**.: _Assume that \(HH^{i}(A)=0\) for all \(i<0\). Then there is a bijection_ \[HC_{d}^{-}(A)_{\operatorname{nondeg}}\simeq\pi_{0}(MC(\mathfrak{g})_{\operatorname{nondeg}}//\mathfrak{g}^{0})\] _where \(\mathfrak{g}=\prod_{\ell\geq 2}C_{(\ell,d)}^{*}(A)\), between nondegenerate negative cyclic homology classes and orbits in the groupoid of nondegenerate pre-CY structures._ This is _almost_ a direct corollary of Theorem 24 and Proposition 23; the only reason why it does not follow directly is that we are not assuming that \(\mathfrak{g}\) vanishes at chain level in negative degrees. Nevertheless, we can prove this fact explicitly, by an iterative calculation that we now sketch. Proof.: We first note that even though the dg Lie algebra \(\mathfrak{g}\) is not nilpotent, the action of \(\mathfrak{g}\) is still well-defined; each element \(x\in\mathfrak{g}\) is a sum of vertices with at least two outgoing arrows, so \([x,-]\) increases the number of outgoing arrows. So the sum defining \(\exp(x)y\) is finite at each truncated level \(\prod_{i\geq 2}C_{(i,d)}^{*}(A)[1]/\prod_{i>\ell}C_{(i,d)}^{*}(A)[1]\), and the action lifts to the limit. Recall that we have maps \[\Phi:CC_{d}^{-}(A)_{\operatorname{nondeg}}\longleftrightarrow(\mathcal{M}_{d-\operatorname{preCY}}(A))_{\operatorname{nondeg}}:\mathcal{L}\] One of the directions is easier: let us pick \(\lambda\in CC_{d}^{-}(A)_{\operatorname{nondeg}}\) and take \(m=\Phi(\lambda)\); by definition this satisfies \[e_{m}=\Gamma(m,\lambda)+[m,p]\] where \(e_{m}\) is the 'energy function' associated to \(m\), \(\Gamma\) is the sum of tube quivers we defined previously and \(p\in\prod_{i\geq 2}C_{(i,d)}^{1}(A)\). Defining now \(\lambda^{\prime}=\mathcal{L}(m)\), by definition we have \[e_{m}=\Gamma(m,\lambda^{\prime})+[m,q]\] for some other element \(q\in\prod_{i\geq 2}C_{(i,d)}^{1}(A)\). Therefore \(\Gamma(m,\lambda^{\prime}-\lambda)=[m,q-p]\), and since \(\Gamma(m,-)\) is a quasi-isomorphism in cohomology we have \([\lambda^{\prime}]=[\lambda]\). The remaining direction is harder and has to be done iteratively in \(\ell\). We now start with a pre-CY structure \(m\), take \(\lambda=\mathcal{L}(m)\) and \(n=\Phi(\lambda)\); reusing the symbols \(p,q\), these satisfy the equations \[e_{m}=\Gamma(m,\lambda)+[m,p],\quad e_{n}=\Gamma(n,\lambda)+[n,q]\] We have to prove that there is \[x=x_{(2)}+x_{(3)}+\dots\in\prod_{\ell\geq 2}C_{(\ell,d)}^{1}(A)\] such that \(n=m+[x,m]+\frac{1}{2}[x,[x,m]]+\dots\). At level \(\ell=2\), this is simply \(n_{(2)}=m_{(2)}+[x_{(2)},\mu]\). Let us then write \(s_{(2)}=m_{(2)}-n_{(2)}\). Subtracting the equations satisfied by \(n_{(2)},m_{(2)}\) we get \[-s_{(2)}=\Gamma(n,\lambda)-\Gamma(m,\lambda)+[\mu,q_{(2)}-p_{(2)}]\] where the \(\ell=2\) component of the difference \(\Gamma(n,\lambda)-\Gamma(m,\lambda)\) is given by an explicit combination of tube quivers with coefficient \(-\frac{1}{2}\) evaluated on \(\lambda_{0}\) (figure omitted). 
Rewriting \(s_{(2)}\) through the desired equation \(s_{(2)}=[\mu,x_{(2)}]\), and again using the closedness of \(\Gamma_{(2)},\lambda_{0}\), we have \[[\mu,x_{(2)}]=\frac{1}{2}\left[\mu,\ (\text{tube-quiver terms; figure omitted})\right]+[\mu,q_{(2)}-p_{(2)}]\] Note that \(x_{(2)}\) is sought as an element of degree \(d-1\) in \(C^{*}_{(2)}(A)\). Now, since the element \(m_{(2)}\) was nondegenerate, we have a quasi-isomorphism of bimodules \(A\cong A^{!}[d]\), so we have a quasi-isomorphism of complexes \[C^{*}_{(2)}(A)\simeq\operatorname{Hom}_{A\text{-}A}(A,A^{!})\simeq\operatorname{Hom}_{A\text{-}A}(A,A[-d])\simeq C^{*-d}(A)\] so since we assumed \(HH^{-1}(A)=0\) we can find \(x_{(2)}\) solving the equation \[x_{(2)}=\frac{1}{2}\,(\text{tube-quiver terms; figure omitted})+q_{(2)}-p_{(2)}+([\mu,-]\text{-exact term})\] For the next step \(\ell=3\) we write \(n_{(3)}=m_{(3)}+[x_{(2)},m_{(2)}]+\frac{1}{2}[x_{(2)},[x_{(2)},\mu]]-s_{(3)}\) and write the analogous equation, substituting in the above solution for \(x_{(2)}\) and solving for \(x_{(3)}\) in \(s_{(3)}=[\mu,x_{(3)}]\). Proceeding like that for each \(\ell\) gives a solution \(x=x_{(2)}+x_{(3)}+\dots\) for the 'gauge transformation' taking \(m\) to \(n\).
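For bookkeeping, it may help to record how the exponential action organizes by number of outgoing legs (a direct expansion, up to the sign conventions above): since the bracket with \(x_{(k)}\) raises the number of outgoing legs by \(k-1\), collecting the terms of \(e^{\mathrm{ad}(x)}(m)\) with \(\ell=1,2,3\) legs gives \[e^{\mathrm{ad}(x)}(m)_{(1)}=\mu,\qquad e^{\mathrm{ad}(x)}(m)_{(2)}=m_{(2)}+[x_{(2)},\mu],\] \[e^{\mathrm{ad}(x)}(m)_{(3)}=m_{(3)}+[x_{(2)},m_{(2)}]+[x_{(3)},\mu]+\tfrac{1}{2}[x_{(2)},[x_{(2)},\mu]],\] which are exactly the expressions matched against \(n_{(2)}\) and \(n_{(3)}\) in the proof; only finitely many terms contribute in each \(\ell\), which is the finiteness used at the start of the proof.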
2302.00329
Effective divisors on projectivized Hodge bundles and modular forms
We construct vector-valued modular forms on moduli spaces of curves and abelian varieties using effective divisors in projectivized Hodge bundles over moduli of curves. Cycle relations tell us the weight of these modular forms. In particular we construct basic modular forms for genus $2$ and $3$. We also discuss modular forms on the moduli of hyperelliptic curves. In that case the relative canonical bundle is a pull back of a line bundle on a ${\mathbb P}^1$-bundle over the moduli of hyperelliptic curves and we extend that line bundle to a compactification so that its push down is (close to) the Hodge bundle and use this to construct modular forms. In an appendix we use our method to calculate divisor classes in the dual projectivized $k$-Hodge bundle determined by Gheorghita-Tarasca and by Korotkin-Sauvaget-Zograf.
Gerard van der Geer, Alexis Kouvidakis
2023-02-01T09:17:45Z
http://arxiv.org/abs/2302.00329v2
# Effective divisors on projectivized Hodge bundles and modular forms ###### Abstract. We construct vector-valued modular forms on moduli spaces of curves and abelian varieties using effective divisors in projectivized Hodge bundles over moduli of curves. Cycle relations tell us the weight of these modular forms. In particular we construct basic modular forms for genus \(2\) and \(3\). We also discuss modular forms on the moduli of hyperelliptic curves. In that case the relative canonical bundle is a pull back of a line bundle on a \(\mathbb{P}^{1}\)-bundle over the moduli of hyperelliptic curves and we extend that line bundle to a compactification so that its push down is (close to) the Hodge bundle and use this to construct modular forms. ## 1. Introduction Moduli spaces of curves and of abelian varieties come with a natural vector bundle, the Hodge bundle \(\mathbb{E}\). Starting from this vector bundle one can construct other natural vector bundles by applying Schur functors, like \(\operatorname{Sym}^{n}(\mathbb{E})\) or \(\det(\mathbb{E})^{\otimes m}\). Sections of such bundles are called modular forms. For example, for the moduli space \(\mathcal{A}_{g}\) of principally polarized abelian varieties of dimension \(g\) these are Siegel modular forms, and for the moduli space \(\mathcal{M}_{g}\) of curves of genus \(g\) these are Teichmuller modular forms. The Hodge bundle extends to appropriate compactifications of such moduli spaces and in many cases the sections also extend automatically to the compactifications, e.g. for \(\mathcal{A}_{g}\) with \(g\geq 2\) by the so-called Koecher principle. In this paper we try to construct modular forms in a geometric way. It is well-known that an effective divisor on \(\mathcal{A}_{g}\) or on \(\overline{\mathcal{M}}_{g}\) with \(g\geq 2\) representing the cycle class \(m\lambda\) with \(\lambda=c_{1}(\det(\mathbb{E}))\) and \(m\in\mathbb{Z}_{>0}\) yields a scalar-valued modular form of weight \(m\). We will exploit explicit effective divisors on projectivized vector bundles to construct vector-valued modular forms. In particular, we will construct in this way certain modular forms that play a pivotal role in low genera. For example, in the case of \(g=2\) there is the modular form \(\chi_{6,8}\), a section of \(\operatorname{Sym}^{6}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\), that appeared in [4] as follows. Recall that the Torelli morphism \(\mathcal{M}_{2}\hookrightarrow\mathcal{A}_{2}\) has dense image and we have an equality of standard compactifications \(\overline{\mathcal{M}}_{2}=\tilde{\mathcal{A}}_{2}\). The moduli space \(\mathcal{M}_{2}\) has another description as a stack quotient. This derives from the fact that a smooth complete curve of genus \(2\) over a field \(k\) of characteristic not \(2\) is a double cover of \(\mathbb{P}^{1}\) ramified at six points, so can be given as \(y^{2}=f\) with \(f\) a polynomial of degree \(6\) with non-vanishing discriminant. Writing \(f\) as a homogeneous polynomial in two variables, say \(f\in\operatorname{Sym}^{6}(W)\) with \(W\) the \(k\)-vector space generated by \(x_{1},x_{2}\), and observing that we may change the base of \(W\), we find a presentation of \(\mathcal{M}_{2}\) as a stack quotient. 
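As a standard consistency check on the six branch points just mentioned (not specific to this paper), the count follows from the Riemann-Hurwitz formula for a degree \(2\) cover \(C\to\mathbb{P}^{1}\) with \(b\) simple branch points: \[2g(C)-2=2\left(2g(\mathbb{P}^{1})-2\right)+b\quad\Longrightarrow\quad 2\cdot 2-2=2\cdot(-2)+b\quad\Longrightarrow\quad b=6=2g+2,\] matching the degree of the binary form \(f\).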
A similar stack quotient description exists for the moduli stack \(\mathcal{H}_{g}\) of hyperelliptic curves of any genus \(g\geq 2\); we give it in Section 4. Besides the stack description, \(\mathcal{H}_{g}\) can be described by the Hurwitz space \(\mathcal{H}_{g,2}\) of degree \(2\) covers of \(\mathbb{P}^{1}\) with ordered branch points. \(\mathcal{H}_{g,2}\) has as compactification the space \(\overline{\mathcal{H}}_{g,2}\) of admissible degree \(2\) covers of genus \(g\). In the stack description modular forms pull back to covariants for the action of \(\operatorname{GL}(2)\) on the space of binary forms of degree \(2g+2\). In the Hurwitz space description the relative canonical bundle of the universal curve over \(\mathcal{H}_{g,2}\) can be viewed as the pull back of \(O(g-1)\) from the trivial \(\mathbb{P}^{1}\)-bundle \(P\) over \(\mathcal{H}_{g,2}\) equipped with \(2g+2\) non-intersecting sections. Using the theory of admissible covers, \(P\) is compactified to a space \(\overline{P}\), a fibration of rational stable curves with \(2g+2\) marked points over \(\overline{\mathcal{H}}_{g,2}\), and we show that the line bundle \(O(g-1)\) on \(P\) extends to a line bundle on \(\overline{P}\) with the property that its push down to \(\overline{\mathcal{H}}_{g,2}\) is close to the Hodge bundle. This allows us to construct modular forms on \(\overline{\mathcal{H}}_{g,2}\).

## Acknowledgements

We thank Fabien Clery, Carel Faber and Gavril Farkas for useful remarks.

## 2. The Case of Genus Two

Let \(k\) be a field of characteristic not \(2\). We consider the moduli space \(\mathcal{M}_{2}\) of curves of genus \(2\) over \(k\). This is a Deligne-Mumford stack and it carries a universal curve \(\pi:\mathcal{C}\to\mathcal{M}_{2}\) of genus \(2\). The relative dualizing sheaf \(\omega_{\pi}\) is base point free and thus defines a morphism \(\varphi:\mathcal{C}\to\mathbb{P}(\mathbb{E})\). Projectivization is in the Grothendieck sense, so that for a vector space \(V\) the projective space \(\mathbb{P}(V)\) parametrizes hyperplanes in \(V\). For a curve \(C\) the map \(\varphi:C\to\mathbb{P}(\mathbb{E}_{C})\) associates to a point the space of differentials vanishing in that point. We have a commutative diagram with \(u\) the natural morphism. We let \(\overline{\mathcal{M}}_{2}\) be the Deligne-Mumford compactification and \(\overline{\pi}:\overline{\mathcal{C}}\to\overline{\mathcal{M}}_{2}\) the corresponding universal curve. However, the extension \(\omega_{\overline{\pi}}\) of \(\omega_{\pi}\) does not define an extension of the morphism \(\varphi\) to \(\mathbb{P}(\mathbb{E})\) over the boundary component \(\Delta_{1}\) that parametrizes reducible curves. We consider the branch divisor \(D\subset\mathbb{P}(\mathbb{E})\) of the morphism \(\varphi\). The divisor \(D\) is of relative degree \(6\) in the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}(\mathbb{E})\) over the base \(\mathcal{M}_{2}\). We define \(\overline{D}\) to be the closure of \(D\) in \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{2}\). In the rational Picard group of \(\mathbb{P}(\mathbb{E})\) we can write

\[[\overline{D}]=[O(6)]+u^{*}(A)\]

with \(A\) a class in the rational Picard group of \(\overline{\mathcal{M}}_{2}\) and \(u\) the canonical projection \(\mathbb{P}(\mathbb{E})\to\overline{\mathcal{M}}_{2}\). We want to determine \(A\) in terms of the generators \(\lambda\), \(\delta_{0}\), \(\delta_{1}\) of the rational Picard group of \(\overline{\mathcal{M}}_{2}\).
We write \(\lambda\) for the first Chern class of \(\mathbb{E}\) and \(\delta_{1}\) (resp. \(\delta_{0}\)) for the class of \(\Delta_{1}\) (resp. \(\Delta_{0}\)) in the Picard group of the stack \(\overline{\mathcal{M}}_{2}\). In order to do this we extend the morphism \(\varphi\). First we extend it over a Zariski open part of \(\Delta_{0}\). Let \(f:C\to B\) be a \(1\)-dimensional family of stable curves of genus \(2\) over a discrete valuation ring (or disk) with smooth generic fibre and special fibre of type \(\Delta_{0}\). A curve \(C\) of genus \(2\) of type \(\Delta_{0}\) is obtained by identifying two distinct points \(p\) and \(q\) on a curve \(E\) of genus \(1\). The fibre \(\mathbb{E}_{C}\) can be identified with \(H^{0}(C,\omega)=H^{0}(E,O(p+q))\). The corresponding linear system is base point free, hence we get a surjective morphism \(f^{*}(\mathbb{E}_{B})\to\omega_{f}\) which extends our map \(\varphi\). For a similar family \(f:C\to B\) over a discrete valuation ring with smooth generic fibre and special fibre of type \(\Delta_{1}\), we cannot argue like that, since the corresponding linear system does not define such a map. In this case the special fibre is a union \(C^{\prime}\cup C^{\prime\prime}\) with \(C^{\prime}\) and \(C^{\prime\prime}\) of genus \(1\) intersecting in one nodal point \(p\). We blow up this nodal point obtaining as special fibre a chain \(C^{\prime}+2T+C^{\prime\prime}\) with \(T\) an exceptional curve and do a quadratic base change after which we have a family \(\tilde{f}:\tilde{C}\to\tilde{B}\) with special fibre a chain of curves \(C^{\prime}+R+C^{\prime\prime}\), where now \(R\) is a \((-2)\)-curve lying over \(T\) with degree \(2\). Instead of \(\omega_{\tilde{f}}\) we now consider \(\omega_{\tilde{f}}(-R)\) on our family. We claim that \(\tilde{f}_{*}\omega_{\tilde{f}}(-R)=\mathbb{E}_{\tilde{B}}\). Indeed, the restriction of \(\omega_{\tilde{f}}\) to \(C^{\prime}+R+C^{\prime\prime}\) is \((O_{C^{\prime}}(p^{\prime}),O_{R},O_{C^{\prime\prime}}(p^{\prime\prime}))\), but since \(H^{0}(C^{\prime},O_{C^{\prime}}(p^{\prime}))=H^{0}(C^{\prime},O_{C^{\prime}})\) and \(\omega_{\tilde{f}}|R\cong O_{R}\), a section of the restriction of \(\omega_{\tilde{f}}\) to the central fibre necessarily vanishes on \(R\). By the exact sequence \(0\to\omega_{\tilde{f}}(-R)\to\omega_{\tilde{f}}\to\omega_{\tilde{f}}|R\to 0\) and \(\omega_{\tilde{f}}|R=O_{R}\) we see that \(\tilde{f}_{*}\omega_{\tilde{f}}(-R)\cong\tilde{f}_{*}(\omega_{\tilde{f}})= \mathbb{E}_{\tilde{B}}\). The special fibre of \(\tilde{f}_{*}(\omega_{\tilde{f}}(-R))\) is of codimension \(1\) in \(H^{0}(C,\omega_{\tilde{f}}(-R))\) and defines a base point free linear system. For this we refer to Proposition 17.1 in the appendix Section 17. In this way we get a map \(\varphi^{\prime}:\tilde{C}\to\mathbb{P}(\mathbb{E}_{\tilde{B}})\) that is of degree \(2\) on \(R\) and contracts \(C^{\prime}\) and \(C^{\prime\prime}\). It has the property that \[{\varphi^{\prime}}^{*}(O(1))=\omega_{\tilde{f}}(-R)\,,\] with \(O(1)\) the hyperplane bundle on \(\mathbb{P}(\mathbb{E}_{\tilde{B}})\). **Proposition 2.1**.: _We have \([\overline{D}]=6\,[O(1)]+u^{*}(8\,\lambda-\delta_{0}-\delta_{1})\,\,\,.\)_ Proof.: We write \([\overline{D}]=6\,[O(1)]+u^{*}(A)\). We work with the above two types of \(1\)-dimensional families \(f:C\to B\). The morphism \(\varphi\) is ramified over \(D\), and thus \(\varphi^{\prime}\) is ramified over \(\overline{D}\) and contracts \(C^{\prime}\) and \(C^{\prime\prime}\). 
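As a check on these claims (a degree count we include for convenience): the degrees of \({\varphi^{\prime}}^{*}(O(1))=\omega_{\tilde{f}}(-R)\) on the components \((C^{\prime},R,C^{\prime\prime})\) of the special fibre are

\[\bigl(1-R\cdot C^{\prime},\,0-R\cdot R,\,1-R\cdot C^{\prime\prime}\bigr)=(0,2,0)\,,\]

confirming that \(\varphi^{\prime}\) contracts \(C^{\prime}\) and \(C^{\prime\prime}\) and has degree \(2\) on \(R\).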
We denote the ramification divisor by \(S\). We thus get (writing abusively line bundles and divisors for the corresponding divisor classes)

\[\omega_{\tilde{f}}={\varphi^{\prime}}^{*}\omega_{u}+S+2(C^{\prime}+C^{\prime\prime}),\quad{\varphi^{\prime}}^{*}\overline{D}/2=S+3\,(C^{\prime}+C^{\prime\prime})\,,\]

where the first equation comes from adjunction \((\omega_{\tilde{f}}+C^{\prime})_{|C^{\prime}}=O_{C^{\prime}}\) for \(C^{\prime}\) and similarly for \(C^{\prime\prime}\), and the second one from \(C^{\prime}\cdot{\varphi^{\prime}}^{*}\overline{D}=0=C^{\prime\prime}\cdot{\varphi^{\prime}}^{*}\overline{D}\). This gives

\[\omega_{\tilde{f}} ={\varphi^{\prime}}^{*}(\omega_{u}+\overline{D}/2)-(C^{\prime}+C^{\prime\prime})\]
\[={\varphi^{\prime}}^{*}(O(-2)+u^{*}(\lambda)+O(3)+u^{*}(A/2))-(C^{\prime}+C^{\prime\prime})\]
\[={\varphi^{\prime}}^{*}(O(1)+u^{*}(\lambda+A/2))-(C^{\prime}+C^{\prime\prime})\]
\[=\omega_{\tilde{f}}-R+\tilde{f}^{*}(\lambda+A/2)-(C^{\prime}+C^{\prime\prime})\]
\[=\omega_{\tilde{f}}+\tilde{f}^{*}(\lambda+A/2-b_{1})\]

with \(b_{1}\) the special point of \(\tilde{B}\). This shows that \(A=-2\,\lambda+2\,b_{1}\). Because of the base change that we executed, we have \(2b_{1}=\delta_{1}\) and we obtain \(A=-2\,\lambda+\delta_{1}\). Now we use the well-known relation \(10\,\lambda=\delta_{0}+2\,\delta_{1}\) (see [21]) and thus get \(A=-2\lambda+\delta_{1}=8\lambda-\delta_{0}-\delta_{1}\).

_Remark 2.2_.: We give an alternative proof of this result in Remark 14.2.

We remark that \(u_{*}(O(1))=\mathbb{E}\) and \(u_{*}(O(m))=\operatorname{Sym}^{m}(\mathbb{E})\) for \(m\geq 1\). The divisor \(\overline{D}\) with

\[[\overline{D}]=[O(6)]+u^{*}(8\lambda-\delta_{0}-\delta_{1})\]

is an effective divisor on \(\mathbb{P}(\mathbb{E})\). We apply \(u_{*}\) to the corresponding section \(1\) of \(O(\overline{D})\). By Proposition 2.1 we see that we get a regular section \(\chi_{6,8}\) of the vector bundle \(\operatorname{Sym}^{6}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) over \(\overline{\mathcal{M}}_{2}\). Moreover, this section vanishes on the divisors \(\Delta_{0}\) and \(\Delta_{1}\). Note that the Torelli map extends to an isomorphism \(\overline{\mathcal{M}}_{2}\cong\tilde{\mathcal{A}}_{2}\) with \(\tilde{\mathcal{A}}_{2}\) the standard smooth compactification of \(\mathcal{A}_{2}\). Therefore our section defines a Siegel modular form \(\chi_{6,8}\) of weight \((6,8)\) that is a cusp form. The relation \(10\,\lambda=\delta_{0}+2\,\delta_{1}\) quoted above implies that there exists a Siegel modular cusp form \(\chi_{10}\) of degree \(2\) and of weight \(10\) with divisor \(\delta_{0}+2\,\delta_{1}\). The quotient \(\chi_{6,-2}:=\chi_{6,8}/\chi_{10}\) defines a meromorphic section of \(\operatorname{Sym}^{6}(\mathbb{E})\otimes\det(\mathbb{E})^{-2}\) that is regular outside \(\Delta_{1}\).

**Corollary 2.3**.: _Let \(\overline{D}\) be the closure in \(\mathbb{P}(\mathbb{E})\) of the branch divisor of the canonical map for the universal curve over \(\mathcal{M}_{2}\). The push forward \(u_{*}(s)\), with \(s\) the natural section \(1\) of \(O(\overline{D})\) on \(\mathbb{P}(\mathbb{E})\), defines a Siegel modular cusp form \(\chi_{6,8}\) of degree \(2\) and weight \((6,8)\)._

We now analyze the orders of vanishing along \(\delta_{1}\) of \(\chi_{6,8}\) and \(\chi_{6,-2}\). When identifying \(\overline{\mathcal{M}}_{2}\) with \(\tilde{\mathcal{A}}_{2}\) we also write \(\mathcal{A}_{1,1}\) for \(\delta_{1}\); it is the locus of products of elliptic curves.
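Note the bookkeeping behind this: since \(\chi_{6,8}\) vanishes on \(\Delta_{0}\), the simple zero of \(\chi_{10}\) along \(\delta_{0}\) is cancelled in the quotient \(\chi_{6,-2}=\chi_{6,8}/\chi_{10}\), whereas along \(\delta_{1}\), where \(\chi_{10}\) vanishes to order \(2\), poles of order at most \(1\) can occur; the local computation that follows makes these orders precise.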
We analyze the orders by working locally on a family over a local base \(B\) with central fibre a general point \(b_{1}\) of the boundary divisor \(\Delta_{1}\). As we mentioned before, the map \(\varphi:\mathcal{C}\to\mathbb{P}(\mathbb{E})\) defined over \(\mathcal{M}_{2}\) does not extend to the whole \(\overline{\mathcal{C}}\) over \(\overline{\mathcal{M}}_{2}\) due to the fact that the canonical system has base points at the nodes of the curves over the boundary divisor \(\Delta_{1}\). On the other hand, by the theory of admissible covers, the ramification divisor of the above map \(\varphi\) extends to a divisor \(S\) on \(\overline{\mathcal{C}}\) in a way that avoids the above nodal locus. Namely, over \(b_{1}\in\Delta_{1}\) the fibre is a nodal curve \(C\) which is the union of two elliptic curves \(C_{1}\) and \(C_{2}\) meeting at a point \(p\). The restriction of the ramification divisor to each component is the union of the three ramification points, additional to \(p\), of the system \(|O(2p)|\). Therefore the extension of the map \(\varphi\) is defined on the ramification divisor \(S\). The map \(\varphi\) maps \(C_{1}\backslash\{p\}\) and \(C_{2}\backslash\{p\}\) to two distinct points \(p_{1}\) and \(p_{2}\) respectively, which are defined as follows. The fibre of \(\mathbb{P}(\mathbb{E})\) over \(b_{1}\) can be identified with \(\mathbb{P}(H^{0}(C,\omega_{C}))\). The elements of \(H^{0}(C,\omega_{C})\) have the form \((s_{1},s_{2})\), with \(s_{i}\) an element of \(H^{0}(C_{i},O(p))\). Then the point \(p_{1}\) corresponds to the hyperplane \(\{(0,s_{2}),s_{2}\in H^{0}(C_{2},O(p))\}\) and the point \(p_{2}\) corresponds to the hyperplane \(\{(s_{1},0),s_{1}\in H^{0}(C_{1},O(p))\}\). The divisor \(\overline{D}\), the image of the ramification divisor under the extended map \(\varphi\), then splits into six irreducible components denoted by \(D_{1},\ldots,D_{6}\). Over our local base \(B\) we thus have the six local sections \(D_{i}\) (\(i=1,\ldots,6\)) of the family \(\mathbb{P}(\mathbb{E})\to B\). By the above description of the extension of the map \(\varphi\), we may conclude that \(D_{1},D_{2},D_{3}\) pass through \(p_{1}\) and \(D_{4},D_{5},D_{6}\) through \(p_{2}\). Lifting the sections \(D_{i}\) locally to sections \(\sigma_{i}\) of \(\mathbb{E}\) and choosing a basis \(e_{1}\), \(e_{2}\) of \(\mathbb{E}\) over \(B\) such that \(e_{1}\) and \(e_{2}\) determine \(p_{1}\) and \(p_{2}\) in the fibre of \(\mathbb{P}(\mathbb{E})\) over \(z=0\) (with \(z\) a local coordinate on \(B\) at \(b_{1}\)), we can write \(\sigma_{i}=a_{i}e_{1}+b_{i}e_{2}\) for \(i=1,\ldots,6\). Then at \(z=0\) the functions \(b_{1},b_{2},b_{3}\) and \(a_{4},a_{5},a_{6}\) vanish, while \(a_{1},a_{2},a_{3},b_{4},b_{5},b_{6}\) do not vanish. Since we can separate these sections by blowing up once, we may assume that these functions vanish with order \(1\) at \(z=0\).
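The combinatorial minimum appearing in the next paragraph can be checked mechanically; the following small script (our addition, not part of the paper) reproduces the orders computed below:

```python
from itertools import combinations

A_VANISH = {4, 5, 6}   # a_j vanishes to order 1 at z = 0 exactly for j in {4,5,6}
B_VANISH = {1, 2, 3}   # b_j vanishes to order 1 at z = 0 exactly for j in {1,2,3}

# The coefficient of e1^i e2^(6-i) in sigma_1 ... sigma_6 is a sum over
# subsets L of {1,...,6} of size i of the monomials
# prod_{j in L} a_j * prod_{j not in L} b_j; its order at z = 0 is the
# minimal number of vanishing factors over all such L.
orders = [min(len(set(L) & A_VANISH) + len(B_VANISH - set(L))
              for L in combinations(range(1, 7), i))
          for i in range(7)]
print(orders)                    # [3, 2, 1, 0, 1, 2, 3]
print([o - 1 for o in orders])   # after dividing by z: [2, 1, 0, -1, 0, 1, 2]
```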
By construction the section \(\chi\) of \(\operatorname{Sym}^{6}(\mathbb{E})\otimes\det(\mathbb{E})^{-2}\) is locally given by

\[\frac{\sigma_{1}\cdots\sigma_{6}}{z}\,.\]

We may write \(\sigma_{1}\cdots\sigma_{6}\) as

\[a_{1}\cdots a_{6}\,e_{1}^{6}+\left(a_{1}a_{2}a_{3}a_{4}a_{5}b_{6}+\ldots+b_{1}a_{2}a_{3}a_{4}a_{5}a_{6}\right)e_{1}^{5}e_{2}+\]
\[\left(a_{1}a_{2}a_{3}a_{4}b_{5}b_{6}+\ldots+b_{1}b_{2}a_{3}a_{4}a_{5}a_{6}\right)e_{1}^{4}e_{2}^{2}+\]
\[\left(a_{1}a_{2}a_{3}b_{4}b_{5}b_{6}+\ldots+b_{1}b_{2}b_{3}a_{4}a_{5}a_{6}\right)e_{1}^{3}e_{2}^{3}+\ldots+b_{1}\cdots b_{6}\,e_{2}^{6}\,.\]

The order at \(z=0\) of the coefficient of \(e_{1}^{i}e_{2}^{6-i}\) equals

\[\min_{\#\Lambda=i}(\#\Lambda^{c}\cap\{1,2,3\}+\#\Lambda\cap\{4,5,6\})\]

with \(\Lambda\) running through the subsets of \(\{1,\ldots,6\}\) of cardinality \(i\) and \(\Lambda^{c}\) denoting the complement. We find for these orders \((3,2,1,0,1,2,3)\) for \(i=0,\ldots,6\), hence for the section \(\chi\) given by \(\sigma_{1}\cdots\sigma_{6}/z\) we find the orders \((2,1,0,-1,0,1,2)\).

**Corollary 2.4**.: _The section \(1\) of the line bundle \(O(\overline{D})\) on \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{2}\) pushes down via \(\mathbb{P}(\mathbb{E})\to\overline{\mathcal{M}}_{2}\) to the meromorphic modular form \(\chi_{6,-2}\) on \(\overline{\mathcal{M}}_{2}=\tilde{\mathcal{A}}_{2}\). The orders of the seven coordinates of \(\chi_{6,-2}\) along \(\mathcal{A}_{1,1}\) in \(\mathcal{A}_{2}\) are \((2,1,0,-1,0,1,2)\)._

These orders are in agreement with the result of [4] where \(\chi_{6,-2}\) was constructed by invariant theory and properties of modular forms were used to determine these orders. A different way to construct the form \(\chi_{6,8}\) uses the so-called Weierstrass divisor \(W\) in the dual bundle:

\[W:=\{(C,\eta)\in\mathbb{P}(\mathbb{E}^{\vee}):\operatorname{div}(\eta)\text{ contains a Weierstrass point}\}\]

over \(\mathcal{M}_{2}\). Here \(C\) denotes a curve of genus \(2\) and \(\eta\) a differential form on \(C\). We let \(\overline{W}\) be the closure of \(W\) over \(\overline{\mathcal{M}}_{2}\). We then have an identity due to Gheorghita [11, Thm 1]

\[[\overline{W}]=6\,[O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)]+34\,\lambda-3\,\delta_{0}-5\,\delta_{1}\,,\]

where we write \(\lambda\) and \(\delta_{i}\) for the pullback of \(\lambda\) and \(\delta_{i}\) to \(\mathbb{P}(\mathbb{E}^{\vee})\). Now \(\overline{W}\) is an effective divisor and the push forward of the section \(1\) of \(O(\overline{W})\) is a section of \(\operatorname{Sym}^{6}(\mathbb{E}^{\vee})\otimes\det(\mathbb{E})^{34}\otimes O(-3\delta_{0}-5\delta_{1})\). For \(g=2\) we have \(\mathbb{E}^{\vee}\cong\mathbb{E}\otimes\det(\mathbb{E})^{-1}\), hence \(\operatorname{Sym}^{6}(\mathbb{E}^{\vee})\cong\operatorname{Sym}^{6}(\mathbb{E})\otimes\det(\mathbb{E})^{-6}\). This implies that under the isomorphism of \(\mathbb{P}^{1}\)-bundles \(\mathbb{P}(\mathbb{E})\cong\mathbb{P}(\mathbb{E}^{\vee})\) the class \([\overline{W}]\) is identified with \([\overline{D}]\), and we get in the dual bundle

\[[\overline{W}]=6\,[O(1)]+28\,\lambda-3\,\delta_{0}-5\,\delta_{1}=6\,[O(1)]+8\,\lambda-\delta_{0}-\delta_{1}\,.\]

Using push forward we find again a form of weight \((6,8)\) vanishing on \(\delta_{1}\) and \(\delta_{0}\). Up to a multiplicative non-zero constant this is \(\chi_{6,8}\).
_Remark 2.5_.: The identity \([\overline{W}]=6\,[O(1)]+28\,\lambda-3\,\delta_{0}-5\,\delta_{1}\) implies that there exists a modular form of weight \((6,28)\) vanishing with multiplicity \(3\) on \(\Delta_{0}\) and multiplicity \(5\) on \(\Delta_{1}\), but this is (up to a multiplicative constant) the form \(\chi_{10}^{2}\,\chi_{6,8}\) with \(\chi_{10}\) the form of weight \(10\) with divisor \(\delta_{0}+2\delta_{1}\) that displays the relation \(10\,\lambda=\delta_{0}+2\delta_{1}\).

## 3. The Case of Genus Three

Here there is no restriction on the characteristic. We consider the moduli stack \(\mathcal{M}_{3}\) of curves of genus \(3\) over our field \(k\) and the universal curve \(\pi:\mathcal{C}\to\mathcal{M}_{3}\). The canonical map defines a morphism \(\varphi:\mathcal{C}\to\mathbb{P}(\mathbb{E})\) and we thus obtain the image divisor \(D\) in \(\mathbb{P}(\mathbb{E})\) over \(\mathcal{M}_{3}\). We have a diagram. We consider the closure \(\overline{D}\) of \(D\) in \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{3}\), the Deligne-Mumford compactification of \(\mathcal{M}_{3}\). The canonical image of the generic curve is a quartic curve. Thus we have a relation in the rational Picard group of \(\mathbb{P}(\mathbb{E})\)

\[[\overline{D}]=[O(4)]+u^{*}(A)\,,\]

with \(A\) a divisor class on \(\overline{\mathcal{M}}_{3}\). We now determine \(A\) by using test families.

**Proposition 3.1**.: _We have \(A=8\lambda-\delta_{0}+c\,\delta_{1}\) for some integer \(c\)._

_Remark 3.2_.: The constant \(c\) equals \(-2\) as we shall see later, see Lemma 14.1.

Proof.: We write the class \(A\) as \(A=a\,\lambda+b\,\delta_{0}+c\,\delta_{1}\). The map \(\varphi\) can be extended over \(\overline{\mathcal{M}}_{3}-\Delta_{1}\), but the relative dualizing sheaf \(\omega_{\overline{\pi}}\) restricted to a fibre of type \(\Delta_{1}\) has base points. Thus we consider the morphism \(\varphi^{\prime}:\overline{\mathcal{C}}\to\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{3}-\Delta_{1}\). We use test families without fibres of type \(\Delta_{1}\). The first type of test family is a pencil of quartic curves defined by \(s\,F+t\,G\) with \(F\) and \(G\) ternary quartics defining generic smooth quartic curves in \(\mathbb{P}^{2}\). This is a family \(\mathcal{X}\to B\) with \(B=\mathbb{P}^{1}\). By standard theory (see [8, 14.5.1] or [13, Table 3.167, p. 189]) the degrees of the classes \(\lambda,\delta_{0}\) and \(\delta_{1}\) on \(B\) are given by

\[\deg(\lambda,\delta_{0},\delta_{1})_{|B}=(3,27,0).\]

In particular this family does not have hyperelliptic fibres since the class \(\mathfrak{h}\) of the hyperelliptic locus \(\overline{\mathcal{H}}_{3}\) in \(\overline{\mathcal{M}}_{3}\) satisfies (cf. [12, p. 140])

\[\mathfrak{h}=9\,\lambda-\delta_{0}-3\,\delta_{1}\,. \tag{1}\]

In view of the relation (where again we abusively write line bundles for their corresponding divisor classes)

\[\omega_{\pi} ={\varphi^{\prime}}^{*}(\overline{D}+\omega_{u})\]
\[={\varphi^{\prime}}^{*}(O(4)+u^{*}(A)+O(-3)+u^{*}(\lambda))\]
\[={\varphi^{\prime}}^{*}(O(1)+u^{*}(A+\lambda))\]

and the fact that \(\omega_{\pi}={\varphi^{\prime}}^{*}(O(1))\) we find \(A_{|B}=-\lambda_{|B}\). This implies \(3\,a+27\,b=-3\), so

\[a+9\,b=-1\,. \tag{2}\]

The second test family is a family \(s\,F+t\,Q^{2}\) with \(F\) a generic smooth quartic and \(Q\) a smooth conic. For such a family we have

\[\deg(\lambda,\delta_{0},\delta_{1})_{|B}=(3/2,13,0)\,, \tag{3}\]

see [13, Table 3.167, p. 189].
We now have

\[\omega_{\pi} ={\varphi^{\prime}}^{*}(\overline{D}+\omega_{u}-u^{*}(\mathfrak{h}))\]
\[={\varphi^{\prime}}^{*}(O(4)+u^{*}(A)+O(-3)+u^{*}(\lambda)-u^{*}(\mathfrak{h}))\]
\[={\varphi^{\prime}}^{*}(O(1)+u^{*}(A+\lambda-\mathfrak{h}))\,,\]

where we used the fact that \({\varphi^{\prime}}\) restricted to a generic fibre is of degree \(1\), but of degree \(2\) on a hyperelliptic fibre, leading to the correction term \(-u^{*}(\mathfrak{h})\). On the other hand \(\omega_{\pi}={\varphi^{\prime}}^{*}(O(1))\), hence we find \((A+\lambda-\mathfrak{h})_{|B}=0\). Using (1) and (3) this gives

\[(a+1-9)\,\frac{3}{2}+(b+1)\,13+(c+3)\,0=0\,,\quad\text{that is,}\quad 3\,a+26\,b=-2\,. \tag{4}\]

The relations (2) and (4) determine \(a\) and \(b\): we find \(a=8\) and \(b=-1\).

The divisor \(\overline{D}\) is effective over \(\overline{\mathcal{M}}_{3}-\Delta_{1}\). Because of the relation

\[[\overline{D}]=[O(4)]+u^{*}(8\,\lambda-\delta_{0}+c\,\delta_{1})\]

the corresponding section \(1\) of \(O(\overline{D})\) maps under \(u_{*}\) to a section \(\psi\) of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) that is regular outside \(\Delta_{1}\) and vanishes on \(\Delta_{0}\). This section \(\psi\) is invariant under the action of \(-1\) on the fibres of \(\mathbb{E}\). Therefore it descends to a section \(\chi_{4,0,8}\) of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) on the image of \(\overline{\mathcal{M}}_{3}-\Delta_{1}\) under the Torelli morphism \(\overline{\mathcal{M}}_{3}\to\tilde{\mathcal{A}}_{3}\), with \(\tilde{\mathcal{A}}_{3}\) the standard second Voronoi compactification of \(\mathcal{A}_{3}\). Since the image of \(\Delta_{1}\) in \(\tilde{\mathcal{A}}_{3}\) is of codimension \(2\), the section \(\chi_{4,0,8}\) extends to a regular section of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) on all of \(\mathcal{A}_{3}\), and then by the Koecher Principle it extends to \(\tilde{\mathcal{A}}_{3}\). Thus it defines a regular Siegel modular cusp form \(\chi_{4,0,8}\) of degree \(3\) and weight \((4,0,8)\).

**Corollary 3.3**.: _Let \(\overline{D}\) be the closure of the canonical image of the universal curve over \(\mathcal{M}_{3}\) in \(\mathbb{P}(\mathbb{E})\) and \(s\) the natural section \(1\) of \(O(\overline{D})\). Then \(\chi_{4,0,8}=u_{*}(s)\) is a Teichmuller modular form and it descends to a Siegel modular cusp form of degree \(3\) and weight \((4,0,8)\)._

The relation (1) shows that there exists a scalar-valued Teichmuller modular form \(\chi_{9}\) of weight \(9\) on \(\overline{\mathcal{M}}_{3}\). Its square is invariant under the action of \(-1\) on the fibres of \(\mathbb{E}\) hence descends to a Siegel modular form of weight \(18\). Up to a multiplicative scalar this is Igusa's modular form \(\chi_{18}\). If we divide \(\chi_{4,0,8}\) by \(\chi_{9}\) we obtain a meromorphic section of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{-1}\) on \(\overline{\mathcal{M}}_{3}\) that is regular on \(\mathcal{M}_{3}\) outside the hyperelliptic locus. This form was used in [5] to construct Teichmuller modular forms and Siegel modular forms by invariant theory.

## 4. Moduli of hyperelliptic curves as a stack quotient

In this section we discuss the stack quotient description of the moduli of hyperelliptic curves. We consider hyperelliptic curves in characteristic not \(2\). A hyperelliptic curve of genus \(g\) is a morphism \(\alpha:C\to\mathbb{P}^{1}\) of degree \(2\) where \(C\) is a smooth curve of genus \(g\).
A morphism \(a:\alpha\to\alpha^{\prime}\) between two hyperelliptic curves is a commutative diagram. A hyperelliptic curve \(C\) of genus \(g\) can be written as \(y^{2}=f(x)\) with \(f\in k[x]\) of degree \(2g+2\). In fact, choosing a basis \((x_{1},x_{2})\) of the \(g_{2}^{1}\) defines the morphism \(\alpha\). Let \(W=\langle x_{1},x_{2}\rangle\), a vector space (over our algebraically closed base field) of dimension \(2\), and \(L=\alpha^{*}(O_{\mathbb{P}^{1}}(1))\). By Riemann-Roch we have \(\dim H^{0}(C,L^{g+1})=g+3\), while \(\dim\operatorname{Sym}^{g+1}(W)=g+2\), so we have a non-zero element \(y\in H^{0}(C,L^{g+1})\) which is anti-invariant under the involution corresponding to \(\alpha\). The anti-invariant subspace of \(H^{0}(C,L^{g+1})\) has dimension \(1\). Then \(y^{2}\) is invariant and lies in \(\operatorname{Sym}^{2g+2}(W)\). Thus we find the equation \(y^{2}=f(x_{1},x_{2})\) with \(f\) homogeneous of degree \(2g+2\) and with non-zero discriminant. We have made two choices here: a generator \(y\) of \(H^{0}(C,L^{g+1})^{(-1)}\), a space of dimension \(1\), and a basis of \(W\). We can change the choice of \(y\) (by a non-zero scalar) and the choice of a basis of \(W\) by \(\gamma=(a,b;c,d)\in\operatorname{GL}(W)\). The action of \(\operatorname{GL}(W)\) is on the right via

\[f(x_{1},x_{2})\mapsto f(ax_{1}+bx_{2},cx_{1}+dx_{2})\,.\]

If we let \(\operatorname{GL}(W)\) act on \(y\) by a power of the determinant, then this action preserves the type of equation. In inhomogeneous form the action by \(\operatorname{GL}(W)\) is by

\[f\mapsto f(\frac{ax+b}{cx+d}),\quad y\mapsto y/(cx+d)^{g+1}\,,\]

with the following effect on the equation:

\[y^{2}=f(x)\mapsto y^{2}=(cx+d)^{2g+2}f(\frac{ax+b}{cx+d})\,.\]

The last expression on the right-hand side can be written as a binary form of degree \(2g+2\). The stabilizer of a generic \(f\in\operatorname{Sym}^{2g+2}(W)\) is \(\mu_{2g+2}\), the roots of unity of order dividing \(2g+2\). Since we want a stabilizer of order \(2\) for the generic element, we consider a twisted action: define the \(\operatorname{GL}(W)\)-representation

\[W_{a,b}=\operatorname{Sym}^{a}(W)\otimes\det(W)^{\otimes b}\,.\]

This can be identified with \(\operatorname{Sym}^{a}(W)\) as a vector space, but the action by \(\operatorname{GL}(W)\) is different. Inside this space \(W_{a,b}\) we have the open subspace \(W^{0}_{a,b}\) of homogeneous polynomials of degree \(a\) with non-zero discriminant. We now distinguish two cases.

**Case 1.**\(g\) even. Here we consider the stack quotient

\[[W^{0}_{2g+2,-g}/\operatorname{GL}(W)]\,.\]

This stack quotient can be identified with the moduli stack \(\mathcal{H}_{g}\) of hyperelliptic curves of genus \(g\) for \(g\) even. Indeed, the action of \(t\cdot\operatorname{Id}_{W}\) is (on inhomogeneous equations) by

\[f\mapsto t^{-2g}f,\quad y\mapsto y/t^{g+1}\,,\]

hence \(y^{2}=f\) maps to \(y^{2}=t^{2}\,f\), so that the stabilizer is \(\mu_{2}\), as required. Note also that the action of \(-1\in\operatorname{GL}(W)\) is by \(y\mapsto-y\), so \(y\) is an odd element. A basis of \(H^{0}(C,\Omega^{1}_{C})\) is given by

\[x^{i}dx/y,\quad(i=0,\dots,g-1)\,.\]

The action on \(dx\) is by \((ad-bc)dx/(cx+d)^{2}\) resulting in the action on the space of differentials by

\[x^{i}dx/y\mapsto(ad-bc)(cx+d)^{g-1-i}(ax+b)^{i}\,dx/y\,.\]

If we forget the twisted action on \(y\), we can identify \(H^{0}(C,\Omega^{1}_{C})\) with \(W_{g-1,1}\).
But \(y^{2}\) must be viewed as an element of \(W_{2g+2,-g}\), so the action of \(t\,1_{W}\) on \(y\) should be twisted by \(t^{-g}=\det^{-g/2}\). We get

\[H^{0}(C,\Omega^{1}_{C})\cong W_{g-1,(2-g)/2}\quad\text{for $g$ even}.\]

We see that under the identification \(h:[W^{0}_{2g+2,-g}/\operatorname{GL}(W)]\stackrel{{\sim}}{{\longrightarrow}}\mathcal{H}_{g}\) the pullback \(h^{*}(\mathbb{E})\) of the Hodge bundle \(\mathbb{E}\) is the equivariant bundle \(W_{g-1,(2-g)/2}\). The action of \(-1_{W}\) is by \(-1\) on \(W_{g-1,(2-g)/2}\). We also observe \(h^{*}(\det(\mathbb{E}))=\det(W)^{g/2}\).

**Case 2.**\(g\) odd. Here we take \(W_{2g+2,-g+1}\).

_Remark 4.1_.: If we consider \(W_{2g+2,r}\) then \(r\) has to be even, since as above we later view \(y^{2}\) as an element of \(W_{2g+2,r}\) and we need an action by \(\det^{r/2}\) on \(y\).

Here the stabilizer of a generic element is \(\mu_{4}\). Now on inhomogeneous equations the action is by

\[f\mapsto t^{-2g+2}\,f,\quad y\mapsto y/t^{g+1}\,,\]

hence \(y^{2}=f\) maps to \(y^{2}=t^{4}\,f\). Note that here \(-1_{W}\) acts by \(f\mapsto f\) and \(y\mapsto y\). But \(\sqrt{-1}_{W}\) acts by \(f\mapsto f\) and \(y\mapsto-y\). To get the right stack quotient with stabilizer of the generic element of order \(2\), we take

\[[W^{0}_{2g+2,1-g}/(\operatorname{GL}(W)/(\pm 1_{W}))]\,.\]

The action on the differentials \(x^{i}dx/y\) with \(i=0,\dots,g-1\) is by

\[x^{i}dx/y\mapsto(ad-bc)^{(1-g)/2}(cx+d)^{g-1-i}(ax+b)^{i}\,dx/y\,,\]

hence without twisting we get \(H^{0}(C,\Omega^{1}_{C})=W_{g-1,1}\). Since we view \(y^{2}\) as element of \(W_{2g+2,1-g}\) we find under \(h:[W^{0}_{2g+2,1-g}/(\operatorname{GL}(W)/(\pm 1_{W}))]\stackrel{{\sim}}{{\longrightarrow}}\mathcal{H}_{g}\) that \(h^{*}(\mathbb{E})=W_{g-1,(3-g)/2}\). The element \(\sqrt{-1}\,1_{W}\) acts on the differentials as \((-1)^{(3-g)/2}(\sqrt{-1})^{g-1}=-1\). We summarize.

**Proposition 4.2**.: _Writing \(W_{a,b}=\operatorname{Sym}^{a}(W)\otimes\det(W)^{b}\) we have the identification of stacks_

\[h^{-1}:\mathcal{H}_{g}\stackrel{{\sim}}{{\longrightarrow}}\begin{cases}[W^{0}_{2g+2,-g}/\mathrm{GL}(W)]&g\quad\mathrm{even}\\ [W^{0}_{2g+2,1-g}/(\mathrm{GL}(W)/(\pm 1_{W}))]&g\quad\mathrm{odd},\end{cases}\]

_and_

\[h^{*}(\mathbb{E})\cong\begin{cases}W_{g-1,(2-g)/2}&g\quad\mathrm{even}\\ W_{g-1,(3-g)/2}&g\quad\mathrm{odd},\end{cases}\qquad h^{*}(\det(\mathbb{E}))=\begin{cases}\det(W)^{g/2}&g\quad\mathrm{even}\\ \det(W)^{g}&g\quad\mathrm{odd}.\end{cases}\]

For a somewhat different description see [2, Cor. 4.7, p. 654]. Recall that the moduli stack \(\mathcal{H}_{g}\) has as compactification the closure \(\overline{\mathcal{H}}_{g}\) of \(\mathcal{H}_{g}\) inside the moduli stack \(\overline{\mathcal{M}}_{g}\). The Picard group of \(\mathcal{H}_{g}\) is known by [2] to be finite cyclic for \(g\geq 2\), of order \(4g+2\) if \(g\) is even and of order \(8g+4\) otherwise. The rational Picard group of \(\overline{\mathcal{H}}_{g}\) is known (see [7]) to be free abelian of rank \(g\), generated by classes \(\delta_{i}\) and \(\xi_{j}\) for \(i=0,\ldots,\lfloor g/2\rfloor\) and \(j=1,\ldots,\lfloor(g-1)/2\rfloor\).
Cornalba gives also the first Chern class \(\lambda\) of the Hodge bundle \(\mathbb{E}\) on \(\overline{\mathcal{H}}_{g}\)

\[(8g+4)\,\lambda=g\,\delta_{0}+4\sum_{i=1}^{\lfloor g/2\rfloor}i(g-i)\,\delta_{i}+2\sum_{i=1}^{\lfloor(g-1)/2\rfloor}(i+1)(g-i)\,\xi_{i}\,,\]

where the generic point of the divisor \(\xi_{i}\) has an admissible model \(C^{\prime}\cup C^{\prime\prime}\) with two nodes \(C^{\prime}\cap C^{\prime\prime}=\{p,q\}\) mapping to a union of two \(\mathbb{P}^{1}\), with \(2i+2\) marked points on \(C^{\prime}\), see Figure 1 in Section 6.

## 5. Modular forms on the hyperelliptic locus of genus three

Let \(\mathbb{E}\) be the Hodge bundle on \(\overline{\mathcal{H}}_{3}\). By a modular form of weight \(k\) on \(\overline{\mathcal{H}}_{3}\) we mean a section of \(\det(\mathbb{E})^{\otimes k}\). The construction in the preceding section shows that a modular form of weight \(k\) on \(\overline{\mathcal{H}}_{3}\) when pulled back to the stack \([W^{0}_{8,-2}/(\operatorname{GL}(W)/\pm\operatorname{id}_{W})]\) gives rise to an invariant of degree \(3k/2\). Indeed, it defines a section of the equivariant bundle \(\det(W)^{3k}\) invariant under \(\operatorname{SL}(W)\), but in view of the fact that we divide by the action of \(\operatorname{GL}(W)/(\pm\operatorname{id}_{W})\) this yields an invariant of degree \(3k/2\). Let \(M_{k}(\Gamma_{3})=H^{0}(\mathcal{A}_{3},\det(\mathbb{E})^{k})\) be the space of Siegel modular forms of degree \(3\) on \(\Gamma_{3}=\operatorname{Sp}(6,\mathbb{Z})\). In [17] Igusa considered an exact sequence

\[0\to M_{k-18}(\Gamma_{3})\stackrel{{\cdot\chi_{18}}}{{\longrightarrow}}M_{k}(\Gamma_{3})\to I_{3k/2}(2,8)\]

with \(I_{d}(2,8)\) the vector space of invariants of degree \(d\) of binary octics. We can interpret Igusa's sequence in the following way. A Siegel modular form of weight \(k\) defines by restriction to the hyperelliptic locus a modular form of weight \(k\) on \(\overline{\mathcal{H}}_{3}\) and it thus defines an invariant of degree \(3k/2\). For each irreducible representation \(\rho\) of \(\operatorname{GL}(3)\) we have a vector bundle \(\mathbb{E}_{\rho}\) made from \(\mathbb{E}\) by a Schur functor. By a modular form of weight \(\rho\) on \(\overline{\mathcal{H}}_{3}\) we mean a section of the vector bundle \(\mathbb{E}_{\rho}\). We can pull back to the stack \([W^{0}_{8,-2}/(\operatorname{GL}(W)/\pm\operatorname{id}_{W})]\), but the situation is more involved as \(\operatorname{Sym}^{n}(\operatorname{Sym}^{2}(W))\) decomposes as a representation of \(\operatorname{GL}(W)\). For example, we have

\[h^{*}(\operatorname{Sym}^{4}(\mathbb{E}))=\operatorname{Sym}^{4}(\operatorname{Sym}^{2}(W))=W_{8,0}\oplus W_{4,2}\oplus W_{0,4}\,.\]

Here and in the rest of this section we assume that the characteristic is \(0\), or different from \(2\) and high enough for the representation theory (plethysm) to work; alternatively one could use divided powers as in [1, 3.1]. In this case we can consider the restriction of the Siegel modular form \(\chi_{4,0,8}\) to the hyperelliptic locus and we know that it does not vanish identically by [5, Lemma 7.7]. On the other hand we have the basic covariant \(f_{8,-2}\), the diagonal section of \(W_{8,-2}\) over the stack \([W_{8,-2}^{0}/(\operatorname{GL}(W)/\pm\operatorname{id}_{W})]\). The discriminant form \(\mathfrak{d}\) of binary octics, an invariant of degree \(14\), does not define a modular form, but its third power \(\mathfrak{d}^{3}\) does. It defines a modular form of weight \(28\), see [24, p. 811].
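As a small consistency check on the degree-weight relation above: an invariant of degree \(d\) corresponds to a scalar-valued modular form of weight \(k\) with \(d=3k/2\); for \(\mathfrak{d}\) itself \(d=14\) would require the non-integral weight \(28/3\), while for \(\mathfrak{d}^{3}\) we get \(d=42=3\cdot 28/2\), that is, weight \(28\), as stated.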
Via the projection \(p_{8,0}:\operatorname{Sym}^{4}(\operatorname{Sym}^{2}(W))\to W_{8,0}\) a section of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) defines a covariant of bi-degree \((8,24/2)=(8,12)\) for the action of \(\operatorname{GL}(W)\).

**Proposition 5.1**.: _The restriction to the hyperelliptic locus of the section \(\chi_{4,0,8}\) corresponds via the projection \(p_{8,0}\) to the covariant \(f_{8,-2}\cdot\mathfrak{d}\) with \(\mathfrak{d}\) the discriminant of binary octics._

Proof.: By restricting and projecting we obtain a covariant of bi-degree \((8,12)\). This covariant is divisible by the discriminant and does not vanish on the locus of smooth hyperelliptic curves. Therefore, division by \(\mathfrak{d}\) gives a non-vanishing covariant of bi-degree \((8,-2)\). Taking into account the 'twisting' by \(\det(W)^{-2}\) this must be a multiple of the universal binary octic.

We will discuss the other two projections later. Note that the divisor \(D\), the canonical image of the universal curve in \(\mathbb{P}(\mathbb{E})\) that defines \(\chi_{4,0,8}\), has a restriction to the locus of smooth hyperelliptic curves which is divisible by \(2\). Indeed, the canonical image of a hyperelliptic curve is a double conic. This suggests that we can take the 'square root' of the restriction of \(\chi_{4,0,8}\) to the hyperelliptic locus. However, the boundary divisors prevent this. If we take a level cover of the moduli space we can construct a modular form of weight \((2,0,4)\). We will carry this out later (in Corollary 12.4), working on a Hurwitz space that we shall introduce in the next section.

## 6. The Hurwitz space of admissible covers of degree two

In this and the following sections we will use the other description of the moduli of hyperelliptic curves, namely the moduli space \(\overline{\mathcal{H}}_{g,2}\) of admissible covers of degree \(2\) and genus \(g\) in the sense of [14], see [13]. Thus we are looking at covers \(f:C\to P\) of degree \(2\) with \(C\) nodal of genus \(g\) and \(P\) a stable \(b\)-pointed curve of genus \(0\). Here the \(b=2g+2\) branch points are ordered and \(\mathcal{H}_{g,2}\to\mathcal{H}_{g}\) is a Galois cover with Galois group the symmetric group \(\mathfrak{S}_{2g+2}\). The boundary \(\overline{\mathcal{H}}_{g,2}-\mathcal{H}_{g,2}\) consists of finitely many divisors that we shall denote by \(\Delta^{\Lambda}_{b}=\Delta^{\Lambda}\), where we omit the index \(b\) if \(g\) is clear. Here the index \(\Lambda\) defines a partition

\[\{1,2,\dots,b\}=\Lambda\sqcup\Lambda^{c}\,,\]

and the generic point of \(\Delta^{\Lambda}\) corresponds to an admissible cover that maps to a stable curve of genus \(0\) that is the union of two copies of \(\mathbb{P}^{1}\), one containing the points with mark in \(\Lambda\), the other one those with mark in \(\Lambda^{c}\). Here we will assume that \(\#\Lambda=j\) with \(2\leq j\leq g+1\). The parity of \(\#\Lambda\) plays an important role here. If \(\#\Lambda=2i+2\) is even, then the generic admissible cover corresponding to a point of \(\Delta^{\Lambda}\) is a union \(C_{i}\cup C_{g-i-1}\) that is a double cover of a union of two rational curves \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) with \(C_{i}\) lying over \(\mathbb{P}_{1}\) and \(C_{g-i-1}\) over \(\mathbb{P}_{2}\). Here \(C_{i}\) (resp. \(C_{g-i-1}\)) has genus \(i\) (resp. \(g-i-1\)) with \(0\leq i\leq(g-1)/2\) and is ramified over the points of \(\Lambda\) (resp. \(\Lambda^{c}\)).
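To fix ideas: for \(g=3\), the case taken up in Section 12 below, we have \(b=8\) and \(2\leq\#\Lambda\leq 4\), so there are \(\binom{8}{2}=28\) boundary divisors \(\Delta^{\Lambda}\) with \(\#\Lambda=2\), \(\binom{8}{3}=56\) with \(\#\Lambda=3\), and \(\frac{1}{2}\binom{8}{4}=35\) with \(\#\Lambda=4\), the factor \(1/2\) occurring because \(\Delta^{\Lambda}=\Delta^{\Lambda^{c}}\) when \(\#\Lambda=g+1\).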
_Figure 1. An admissible cover over a generic boundary point: a degree \(2\) cover of the union of the two rational curves \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\)._

## 7. Divisors on the moduli of stable curves of genus zero

For later use we recall some notation and facts concerning divisors on the moduli spaces \(\overline{\mathcal{M}}_{0,n}\). We refer to [18]. The boundary divisors on \(\overline{\mathcal{M}}_{0,n}\) are denoted by \(S^{\Lambda}_{n}\) and are indexed by partitions \(\{1,\ldots,n\}=\Lambda\sqcup\Lambda^{c}\) into two disjoint sets with \(2\leq\#\Lambda\leq n-2\) and we have \(S^{\Lambda}_{n}=S^{\Lambda^{c}}_{n}\). Via the natural map \(\pi_{n+1}:\overline{\mathcal{M}}_{0,n+1}\to\overline{\mathcal{M}}_{0,n}\) we may view \(\overline{\mathcal{M}}_{0,n+1}\) as the universal curve and \(\pi_{n+1}\) has \(n\) sections. The generic point of \(S^{\Lambda}_{n}\) corresponds to a stable curve with two rational components, one of which contains the points marked by \(\Lambda\). For pullback by \(\pi_{n+1}\) we have the relation

\[\pi_{n+1}^{*}(S^{\Lambda}_{n})=S^{\{\Lambda,n+1\}}_{n+1}\cup S^{\{\Lambda^{c},n+1\}}_{n+1}\,.\]

The \(n\) sections of \(\pi_{n+1}\) have images \(S^{\{i,n+1\}}_{n+1}\) with \(i=1,\ldots,n\). We can collect these boundary divisors on \(\overline{\mathcal{M}}_{0,n+1}\) via

\[T_{n+1,j}=\sum_{\#\Lambda=j}S^{\{\Lambda,n+1\}}_{n+1},\quad T^{c}_{n+1,j}=\sum_{\#\Lambda=j}S^{\{\Lambda^{c},n+1\}}_{n+1}\,,\]

with the convention that in view of the symmetry we add a factor \(1/2\) for even \(n\) and \(j=n/2\)

\[T_{n+1,n/2}=\frac{1}{2}\sum_{\#\Lambda=n/2}S^{\{\Lambda,n+1\}}_{n+1},\quad T^{c}_{n+1,n/2}=\frac{1}{2}\sum_{\#\Lambda=n/2}S^{\{\Lambda^{c},n+1\}}_{n+1}\,.\]

Later, when a fixed index \(k\) is given we will split these divisors as \(T=T(k^{+})+T(k^{-})\) where \((k^{+})\) (resp. \((k^{-})\)) indicates that the sum is taken over \(\Lambda\) containing \(k\) (resp. not containing \(k\)). So \(T_{n+1,j}(k^{+})=\sum_{\#\Lambda=j,k\in\Lambda}S^{\{\Lambda,n+1\}}_{n+1}\) (and with a factor \(1/2\) if \(j=n/2\)).

## 8. A good model

We now will work with a 'good model' of the universal admissible cover over \(\overline{\mathcal{H}}_{g,2}\). Such a model was constructed in [10, Section 4]. We start with the observation that the space \(\overline{\mathcal{H}}_{g,2}\) is not normal, and we therefore normalize it. The result \(\widetilde{\mathcal{H}}_{g,2}\) is now a smooth stack over which we have a universal curve \(\tilde{\mathcal{C}}\to\widetilde{\mathcal{H}}_{g,2}\).
We have a natural map \(h:\widetilde{\mathcal{H}}_{g,2}\to\overline{\mathcal{M}}_{0,b}\) with \(b=2g+2\) and the universal curve now fits into a commutative diagram. We can construct a proper flat map that extends the relative canonical morphism \(\mathcal{C}\to\mathbb{P}^{1}_{\mathcal{H}_{g,2}}\) by taking the fibre product \(\mathbb{P}\) of \(\overline{\mathcal{M}}_{0,b+1}\) and \(\widetilde{\mathcal{H}}_{g,2}\) over \(\overline{\mathcal{M}}_{0,b}\) and thus obtain a commutative diagram. The resulting space \(\mathbb{P}\) is not smooth, but has rational singularities. Resolving these in a minimal way gives a model \(\widetilde{\mathbb{P}}\); taking the resolution \(\widetilde{Y}\) of the normalization \(Y\) of the fibre product of \(\widetilde{\mathbb{P}}\) and \(\widetilde{\mathcal{C}}\) over \(\mathbb{P}\) gives us finally a commutative diagram, where \(B\) is our base \(\widetilde{\mathcal{H}}_{g,2}\) or any other base mapping to it. We write \(\pi\) for the resulting morphism \(\widetilde{\mathbb{P}}\to B\), \(h\) for the natural map \(B\to\overline{\mathcal{M}}_{0,b}\) and \(\nu\) for \(B\to\overline{\mathcal{H}}_{g,2}\). We refer to [10, Section 4] for additional details. In the following we will assume that we have a physical family over a base \(B\). We will abuse the notation \(\Delta^{\Lambda}\) for the pull back of the divisor \(\Delta^{\Lambda}\) under \(\nu:B\to\overline{\mathcal{H}}_{g,2}\). In the case that \(\#\Lambda\) is even, say \(\#\Lambda=2i+2\) with \(0\leq i\leq(g-1)/2\), the pull back of \(\Delta^{\Lambda}\) decomposes as

\[\pi^{*}(\Delta^{\Lambda})=\Pi^{\Lambda}+\Pi^{\Lambda^{c}}\,,\]

corresponding to the two components of a general fibre of \(\pi\), with \(\Pi^{\Lambda}\) mapping to \(S^{\{\Lambda,b+1\}}_{b+1}\) under \(\widetilde{\mathbb{P}}\to\overline{\mathcal{M}}_{0,b+1}\), and similarly \(\Pi^{\Lambda^{c}}\) mapping to \(S^{\{\Lambda^{c},b+1\}}_{b+1}\). Note that we restrict \(\#\Lambda\) by \(\leq g+1\), hence the notation \(\Pi^{\Lambda}\) should not lead to confusion. In the case \(\#\Lambda\) is odd we find a similar decomposition

\[\pi^{*}(\Delta^{\Lambda})=\Pi^{\Lambda}+R^{\Lambda}+\Pi^{\Lambda^{c}}\,,\]

corresponding now to the fact that the general fibre of \(\pi\) has three components, one coming from the blowing up. We notice

\[h^{*}(S^{\Lambda}_{b})=\begin{cases}\Delta^{\Lambda}&\#\Lambda\equiv 0\,(\operatorname{mod}2)\\ 2\,\Delta^{\Lambda}&\#\Lambda\equiv 1\,(\operatorname{mod}2)\end{cases}\]

If we use the notation \(\Delta_{j}=\sum_{\#\Lambda=j}\Delta^{\Lambda}\), we find for the tautological classes \(\lambda=c_{1}(\mathbb{E})\) and \(h^{*}(\psi_{k})\), simply denoted by \(\psi_{k}\), \((k=1,\dots,b)\) on our base \(B\), the following formulas (see [9])

\[\lambda=\sum_{i=0}^{(g-1)/2}\frac{(i+1)(g-i)}{2(2g+1)}\Delta_{2i+2}+\sum_{i=1}^{g/2}\frac{i(g-i)}{2g+1}\Delta_{2i+1} \tag{5}\]

and

\[\begin{split}\psi_{k}=\sum_{i=0}^{(g-1)/2}\left(\frac{(g-i)(2g-2i-1)}{g(2g+1)}\Delta_{2i+2}(k^{+})+\frac{(i+1)(2i+1)}{g(2g+1)}\Delta_{2i+2}(k^{-})\right)\\ +2\,\sum_{i=1}^{g/2}\left(\frac{(g-i)(2g-2i+1)}{g(2g+1)}\Delta_{2i+1}(k^{+})+\frac{i(2i+1)}{g(2g+1)}\Delta_{2i+1}(k^{-})\right)\end{split} \tag{6}\]

where we use the notation \((k^{+})\) (resp. \((k^{-})\)) to denote the condition \(k\in\Lambda\) (resp. \(k\not\in\Lambda\)) as above. The relation (5) implies the following.

**Corollary 8.1**.: _There exists a scalar-valued modular form of weight \(2(2g+1)\) on the moduli space \(\widetilde{\mathcal{H}}_{g,2}\) whose divisor is a union of boundary divisors.
It descends to the hyperelliptic locus \(\overline{\mathcal{H}}_{g}\) and corresponds to a power of the discriminant of the binary form of degree \(2g+2\)._

## 9. Extending the linear system

The canonical system on a hyperelliptic curve is defined by the pull back of the sections of the line bundle \(O(g-1)\) of degree \(g-1\) on the projective line. We now try to extend this line bundle over our compactification. A first attempt would be to consider the divisor \((g-1)\,\tilde{S}_{k}\) with \(\tilde{S}_{k}\) the pullback to \(\widetilde{\mathbb{P}}\) of the section \(S_{k}\) of \(\pi_{b+1}:\overline{\mathcal{M}}_{0,b+1}\to\overline{\mathcal{M}}_{0,b}\). We can add a boundary divisor \(\Xi_{k}\) to it such that \(f^{*}O_{\widetilde{\mathbb{P}}}(D_{k})\) with \(D_{k}=(g-1)\tilde{S}_{k}+\Xi_{k}\) coincides with \(\omega_{t}\) on the fibres of \(t\), namely take \(\Xi_{k}\) equal to

\[\begin{split}&\sum_{i=0}^{(g-1)/2}\left((g-1-i)\,\Pi_{2i+2}(k^{+})+i\,\Pi_{2i+2}^{c}(k^{-})\right)+\\ &\sum_{i=1}^{g/2}\left((g-i-1)\,\Pi_{2i+1}(k^{+})-(g-i)\,\Pi_{2i+1}^{c}(k^{+})\right)+\sum_{i=1}^{g/2}\left((i-1)\,\Pi_{2i+1}^{c}(k^{-})-i\,\Pi_{2i+1}(k^{-})\right)\end{split}\]

Here \(\Pi_{j}=\sum_{\#\Lambda=j}\Pi^{\Lambda}\) and \(\Pi_{j}^{c}=\sum_{\#\Lambda=j}\Pi^{\Lambda^{c}}\) and \((k^{+})\) (resp. \((k^{-})\)) indicates the condition that \(k\in\Lambda\) (resp. \(k\not\in\Lambda\)); moreover, we add a factor \(1/2\) in case \(j=b/2\). Now \(f^{*}O(D_{k})\) and \(\omega_{t}\) agree on the fibres of \(t\), so they differ by a pull back under \(t=\pi\circ f\), see diagram (1). We therefore will change \(D_{k}\) by a pull back under \(\pi\). Define a divisor class on \(B\) by

\[\begin{split}E_{k}=\frac{2g-1}{2}\psi_{k}&-\sum_{i=0}^{(g-1)/2}\left((g-i-1)\,\Delta_{2i+2}(k^{+})+i\,\Delta_{2i+2}(k^{-})\right)\\ &-\sum_{i=1}^{g/2}\left((g-i-1)\,\Delta_{2i+1}(k^{+})+(i-1)\,\Delta_{2i+1}(k^{-})\right)\end{split}\]

and define a line bundle on \(\tilde{\mathbb{P}}\) by

\[M=O(D_{k}+\pi^{*}E_{k})\,. \tag{7}\]

**Lemma 9.1**.: _The line bundle \(M\) does not depend on \(k\), satisfies \(f^{*}(M)=\omega_{t}\) and restricts to the general fibre \(\mathbb{P}^{1}\) of \(\pi\) as \(O(g-1)\). For \(\#\Lambda=2i+2\) its restriction to the general fibre \(\mathbb{P}_{1}\cup\mathbb{P}_{2}\) over \(\Delta_{b}^{\Lambda}\) is of degree \((i,g-i-1)\), while for \(\#\Lambda=2i+1\) its restriction to the general fibre \(\mathbb{P}_{1}\cup R\cup\mathbb{P}_{2}\) is of degrees \((i,-1,g-i)\)._

Proof.: We use the section \(\tau_{k}\) of \(t:\widetilde{Y}\to B\) with \(f\tau_{k}=\tilde{s}_{k}\), where \(\tilde{s}_{k}\) is the natural section of the map \(\pi\) with image \(\tilde{S}_{k}\). Then we have \(\tau_{k}^{*}\omega_{t}=\psi_{k}/2\) and \(\tau_{k}^{*}f^{*}D_{k}=\tilde{s}_{k}^{*}D_{k}\) for which we have

\[\tilde{s}_{k}^{*}D_{k} =-(g-1)\psi_{k}+\sum_{i=0}^{(g-1)/2}\left((g-i-1)\,\Delta_{2i+2}(k^{+})+i\,\Delta_{2i+2}(k^{-})\right)\]
\[\qquad+\sum_{i=1}^{g/2}\left((g-1-i)\,\Delta_{2i+1}(k^{+})+(i-1)\,\Delta_{2i+1}(k^{-})\right).\]

From this we obtain \(\tau_{k}^{*}(\omega_{t})-\tau_{k}^{*}f^{*}D_{k}=E_{k}\), so that \(\omega_{t}=f^{*}(O(D_{k}+\pi^{*}E_{k}))=f^{*}(M)\). We also see that the restriction of \(M\) on the fibres of \(\pi\) does not depend on \(k\).
Moreover, we have

\[\tilde{s}_{j}^{*}(D_{k}+\pi^{*}E_{k})=\tau_{j}^{*}f^{*}(D_{k}+\pi^{*}E_{k})=\tau_{j}^{*}(\omega_{t})=\psi_{j}/2=\tilde{s}_{j}^{*}(D_{j}+\pi^{*}E_{j})\,,\]

showing that the restrictions of \(O(D_{k}+\pi^{*}E_{k})\) and \(O(D_{j}+\pi^{*}E_{j})\) agree on \(\tilde{S}_{j}\). The restrictions to the fibres of \(\pi\) over the general points of \(\Delta_{b}^{\Lambda}\) are easily checked.

We now want to compare \(\pi_{*}(M)\) with the Hodge bundle \(\mathbb{E}=t_{*}(\omega_{t})\) on \(B\). The next proposition shows that these agree up to codimension 2.

**Proposition 9.2**.: _We have an exact sequence \(0\to\pi_{*}(M)\to\mathbb{E}\to\mathcal{T}\to 0\), where \(\mathcal{T}\) is a torsion sheaf supported on the boundary. Moreover, we have \(c_{1}(\pi_{*}(M))=\lambda\)._

Proof.: By Lemma 9.1 we have \(\omega_{t}=f^{*}(M)\). But \(R^{1}\pi_{*}(M)=0\), so we have

\[\pi_{*}(M\otimes f_{*}O_{\tilde{Y}})=\pi_{*}f_{*}(f^{*}(M))=\pi_{*}f_{*}(\omega_{t})=t_{*}(\omega_{t})\,.\]

We have an exact sequence \(0\to O_{\tilde{\mathbb{P}}}\to f_{*}O_{\tilde{Y}}\to\mathcal{F}\to 0\) with \(\mathcal{F}\) a coherent sheaf of rank 1 that restricted to the smooth fibres of \(\pi\) has degree \(-(g+1)\), as one sees by applying Riemann-Roch to \(f\) and \(O_{\tilde{Y}}\). Tensoring the sequence with \(M\) and applying \(\pi_{*}\) gives the exact sequence

\[0\to\pi_{*}(M)\to\pi_{*}(M\otimes f_{*}O_{\tilde{Y}})\to\pi_{*}(M\otimes\mathcal{F})\to 0\,.\]

On the smooth fibres of \(\pi\) the sheaf \(M\otimes\mathcal{F}\) restricts to a line bundle of degree \((g-1)-(g+1)=-2\), hence \(\pi_{*}(M\otimes\mathcal{F})\) is a torsion sheaf. We now calculate \(c_{1}(\pi_{*}(M))\). We apply Grothendieck-Riemann-Roch to \(\pi\) and \(O(D_{k})\). It says

\[\operatorname{ch}(\pi_{!}(O(D_{k})))=\pi_{*}\bigl(\operatorname{ch}(O(D_{k}))\operatorname{Td}^{\vee}(\omega_{\pi})\bigr)\,,\]

which by \(\pi_{*}(\operatorname{Td}_{2}^{\vee}(\omega_{\pi}))=0\) gives

\[c_{1}(\pi_{*}(O(D_{k})))=\frac{1}{2}\pi_{*}(-D_{k}\,\omega_{\pi}+D_{k}^{2})\,.\]

We calculate

\[\pi_{*}(D_{k}\omega_{\pi})=(g-1)\psi_{k}-\sum_{i=0}^{(g-1)/2}\bigl((g-i-1)\,\Delta_{2i+2}(k^{+})+i\,\Delta_{2i+2}(k^{-})\bigr)+\sum_{i=1}^{g/2}\Delta_{2i+1}\,, \tag{8}\]

and

\[\begin{split}\pi_{*}(D_{k}^{2})=&-(g-1)^{2}\psi_{k}+\sum_{i=0}^{(g-1)/2}(g-i-1)(g+i-1)\,\Delta_{2i+2}(k^{+})\\ &+\sum_{i=0}^{(g-1)/2}(2g-2-i)i\,\Delta_{2i+2}(k^{-})+\sum_{i=1}^{g/2}((2g-i-1)(i-1)-i^{2})\,\Delta_{2i+1}\,.\end{split} \tag{9}\]

Adding \(\pi_{*}(\pi^{*}E_{k})\) gives

\[\begin{split}c_{1}(\pi_{*}(M))=\frac{g^{2}}{2}\,\psi_{k}-\frac{1}{2}\sum_{i=0}^{(g-1)/2}\bigl((g-i-1)(g-i)\,\Delta_{2i+2}(k^{+})+i(i+1)\,\Delta_{2i+2}(k^{-})\bigr)\\ -\sum_{i=1}^{g/2}\bigl((g-i)^{2}\,\Delta_{2i+1}(k^{+})+i^{2}\,\Delta_{2i+1}(k^{-})\bigr)\,.\end{split}\]

Substituting the formula for \(\psi_{k}\) we find

\[c_{1}(\pi_{*}(M))=\sum_{i=0}^{(g-1)/2}\frac{(g-i)(i+1)}{2(2g+1)}\Delta_{2i+2}+\sum_{i=1}^{g/2}\frac{i(g-i)}{2g+1}\Delta_{2i+1}=\lambda\,.\]

The line bundle \(M\) on \(\tilde{\mathbb{P}}\) is not base point free, as Lemma 9.1 shows: the restriction to the \(R\)-part has negative degree. We can make it base point free by defining

\[N=M(-R)=O(D_{k}+\pi^{*}E_{k}-R)\,. \tag{10}\]

Now the restriction of \(N\) to a general fibre over \(\Delta_{2i+1}\), which is a chain of three rational curves \(\mathbb{P}_{1},R,\mathbb{P}_{2}\), has degrees \((i-1,1,g-i-1)\) and one checks that \(N\) is base point free.
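For example, for \(g=3\) and the divisors \(\Delta_{3}\) (so \(i=1\)) the restriction of \(M\) to the chain \(\mathbb{P}_{1},R,\mathbb{P}_{2}\) has degrees \((1,-1,2)\) by Lemma 9.1, and twisting by \(-R\) changes these by \((-R\cdot\mathbb{P}_{1},-R\cdot R,-R\cdot\mathbb{P}_{2})=(-1,2,-1)\), giving the degrees \((0,1,1)\) for \(N\); the degree \(0\) on \(\mathbb{P}_{1}\) means that this component is contracted by the morphism \(\varphi\) of the next section.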
**Lemma 9.3**.: _Up to codimension \(2\) we have on \(B\) that \(\pi_{*}(N)=\mathbb{E}\)._

Proof.: We have \(R^{1}\pi_{*}(N)=0\). Therefore the exact sequence \(0\to N\to M\to M_{|R}\to 0\) yields the exact sequence

\[0\to\pi_{*}(N)\to\pi_{*}(M)\to\pi_{*}(M_{|R})\to 0\,.\]

We now show that \(c_{1}(\pi_{*}(N))=c_{1}(\pi_{*}(M))\). Since \(R^{1}\pi_{*}(M)=0=R^{1}\pi_{*}(N)\) we find by Grothendieck-Riemann-Roch that

\[c_{1}(\pi_{*}(M))=\frac{1}{2}\pi_{*}(c_{1}(M)^{2}-c_{1}(M)\omega_{\pi}),\quad c_{1}(\pi_{*}(N))=\frac{1}{2}\pi_{*}(c_{1}(N)^{2}-c_{1}(N)\omega_{\pi})\,.\]

By the definition of \(N\) and the fact that \(R\) is a \((-2)\)-curve if we take a base \(B\) of dimension \(1\) (and thus has intersection number \(0\) with a fibre), we have

\[\pi_{*}(c_{1}(N)^{2})=\pi_{*}(c_{1}(M)^{2})\]

and \(c_{1}(N)\,\omega_{\pi}=c_{1}(M)\,\omega_{\pi}\) since the restriction of \(\omega_{\pi}\) to \(R\) is trivial.

## 10. The rational normal curve

The image of a hyperelliptic curve by the canonical map is a rational normal curve, that is, \(\mathbb{P}^{1}\) embedded in \(\mathbb{P}^{g-1}\) via the linear system of degree \(g-1\). In our setting we can see the rational normal curve and its degenerations using the extension \(N\) of the line bundle of degree \(g-1\), as defined in (10), to the compactification as constructed in the preceding section. We let \(u:\mathbb{P}(\mathbb{E})\to B\) be the natural projection. Now \(N\) is base point free and up to codimension \(2\) we have \(\pi_{*}(N)=\mathbb{E}\), so the global-to-local map \(\pi^{*}\pi_{*}(N)\to N\) induces a surjective map \(\nu:\pi^{*}(\mathbb{E})\to N\) over \(\tilde{\mathbb{P}}\). This induces a morphism \(\varphi:\tilde{\mathbb{P}}\to\mathbb{P}(\mathbb{E})\) by associating to a point of \(\tilde{\mathbb{P}}\) the kernel of \(\nu\); it fits into a diagram.

**Proposition 10.1**.: _For a point of \(B\) with smooth fibre under \(\pi\) the image of \(\varphi\) is a rational normal curve of degree \(g-1\). For a general point \(\beta\in\Delta_{2i+2}\) with fibre \(\mathbb{P}_{1}\cup\mathbb{P}_{2}\) the image is a union of two rational normal curves of degree \(i\) and \(g-i-1\). For a general point \(\beta\in\Delta_{2i+1}\) with fibre \(\mathbb{P}_{1}\cup R\cup\mathbb{P}_{2}\) the image is a union of three rational normal curves of degree \(i-1\), \(1\) and \(g-i-1\). Here we interpret the case of degree \(0\) as a contracted curve._

Proof.: The proposition follows almost immediately from Lemma 9.1.

_Remark 10.2_.: If \(i=1\) then \(\mathbb{P}_{1}\) is contracted. If also \(g=2\) then both \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) are contracted and the image of \(R\) coincides with the fibre of \(\mathbb{P}(\mathbb{E})\).

_Remark 10.3_.: The sections \(\tilde{s}_{i}:B\to\tilde{\mathbb{P}}\) for \(i=1,\dots,b\) induce sections \(\sigma_{i}=\varphi\circ\tilde{s}_{i}:B\to\mathbb{P}(\mathbb{E})\) by sending \(\beta\) to the kernel of \(\mathbb{E}=\tilde{s}_{i}^{*}\pi^{*}(\mathbb{E})\to\tilde{s}_{i}^{*}(N)\).

_Remark 10.4_.: In the case \(g=2\) the map \(\varphi\) is a birational map \(\tilde{\mathbb{P}}\to\mathbb{P}(\mathbb{E})\) that blows down boundary components. More precisely, over \(\Delta_{2}\) it blows down \(\Pi_{2}\) and over \(\Delta_{3}\) the components supported at \(\Pi_{3}=\Pi_{3}^{c}\).

## 11. Symmetrization

We have been working with the moduli spaces \(\mathcal{H}_{g,2}\) and \(\mathcal{M}_{0,b}\) and their compactifications. Here the symmetric group \(\mathfrak{S}_{b}\) acts. We therefore make our construction symmetric.
We put \(D=\sum_{k=1}^{b}D_{k}\) and \(E=\sum_{k=1}^{b}E_{k}\) and set

\[\tilde{M}=O(D+\pi^{*}E)\,,\quad\psi=\sum_{k=1}^{b}\psi_{k}\,,\quad\text{and}\quad\tilde{S}=\sum_{k=1}^{b}\tilde{S}_{k}\,.\]

We find

\[\psi=4\,\sum_{i=0}^{(g-1)/2}\frac{(g-i)(i+1)}{2g+1}\,\Delta_{2i+2}+2\,\sum_{i=1}^{g/2}\frac{(2g-2i+1)(2i+1)}{2g+1}\,\Delta_{2i+1}\]

and

\[D=(g-1)\tilde{S}+2\sum_{i=0}^{(g-1)/2}\left((g-i-1)(i+1)\Pi_{2i+2}+i\,(g-i)\,\Pi_{2i+2}^{c}\right)+\]
\[\sum_{i=1}^{g/2}\left((g-4i-1)\Pi_{2i+1}-(3g-4i+1)\,\Pi_{2i+1}^{c}\right)\]

and

\[E=\frac{2g-1}{2}\psi-2\sum_{i=0}^{(g-1)/2}((g-i)(2i+1)-(i+1))\,\Delta_{2i+2}-\sum_{i=1}^{g/2}\left(4i\,(g-i)-(g+2)\right)\Delta_{2i+1}\]

## 12. The Case of Hyperelliptic Genus Three

The Hurwitz space \(\mathcal{H}_{3,2}\) admits a compactification \(\overline{\mathcal{H}}_{3,2}\) with boundary components \(\Delta^{\Lambda}\) with \(\#\Lambda\in\{2,3,4\}\). Taking the components with \(\Lambda\) of fixed cardinality together gives boundary components \(\Delta_{2},\Delta_{3}\) and \(\Delta_{4}\). Under the morphism \(\overline{\mathcal{H}}_{3,2}\to\overline{\mathcal{M}}_{3}\) the components \(\Delta_{2}\) and \(\Delta_{4}\) are mapped to \(\delta_{0}\), while \(\Delta_{3}\) goes to \(\delta_{1}\). The formulas (5) and (6) specialize to

\[\lambda=\frac{3}{14}\Delta_{2}+\frac{2}{7}\Delta_{3}+\frac{2}{7}\Delta_{4}\,. \tag{11}\]

and

\[\psi_{k}=\frac{5}{7}\,\Delta_{2}(k^{+})+\frac{1}{21}\,\Delta_{2}(k^{-})+\frac{20}{21}\,\Delta_{3}(k^{+})+\frac{2}{7}\,\Delta_{3}(k^{-})+\frac{2}{7}\,\Delta_{4}\,.\]

_Remark 12.1_.: The equation (11) shows that on \(\overline{\mathcal{H}}_{3,2}\) there exists a scalar-valued modular form of weight \(14\) whose square equals \(\chi_{28}\), a form mentioned in Section 5. Since on \(\overline{\mathcal{H}}_{3}\) we have \(28\,\lambda=3\,\delta_{0}+8\,\delta_{1}+8\,\xi_{1}\), an integral class not divisible by \(2\), there is no modular form of weight \(14\) on \(\overline{\mathcal{H}}_{3}\) with square \(\chi_{28}\).

We have the line bundle \(M\) on \(\tilde{\mathbb{P}}\) defined in (7) corresponding to the divisor class \(D_{k}+\pi^{*}E_{k}\) given by

\[D_{k}=2\,\tilde{S}_{k}+2\,\Pi_{2}(k^{+})+2\,\Pi_{4}(k^{+})+\Pi_{3}(k^{+})-2\,\Pi_{3}^{c}(k^{+})-\Pi_{3}(k^{-})\]

and

\[E_{k}=\frac{5}{2}\psi_{k}-(2\,\Delta_{2}(k^{+})+\Delta_{3}(k^{+})+\Delta_{4})\,,\]

where \(\psi_{k}\) is given in (6). Define the rational divisor class

\[U:=\frac{1}{14}\Delta_{2}+\frac{3}{7}(\Delta_{3}+\Delta_{4})=\frac{3}{2}\psi_{k}-(\Delta_{2}(k^{+})+\Delta_{3}(k^{+}))\,.\]

The divisor class of \(D_{k}+\pi^{*}E_{k}\) is independent of \(k\) as observed in Lemma 9.1, but this can be seen also directly from the next lemma.

**Lemma 12.2**.: _We have the linear equivalence \(D_{k}+\pi^{*}E_{k}-R\sim-\omega_{\pi}+\Pi_{2}+\Pi_{3}+\pi^{*}(U)\), that is, \(N=\omega_{\pi}^{-1}\otimes O(\Pi_{2}+\Pi_{3})\otimes O(\pi^{*}U)\)._

Proof.: One checks that \(-\omega_{\pi}+\Pi_{2}+\Pi_{3}\) and \(D_{k}+\pi^{*}E_{k}-R\) have the same restriction to fibres of \(\pi\). We have \(\tilde{s}_{k}^{*}(-\omega_{\pi}+\Pi_{2}+\Pi_{3})=-\psi_{k}+\Delta_{2}(k^{+})+\Delta_{3}(k^{+})\) and \(\tilde{s}_{k}^{*}(D_{k}+\pi^{*}E_{k}-R)=\psi_{k}/2\).

Let \(Q\) be the image of \(\varphi:\tilde{\mathbb{P}}\to\mathbb{P}(\mathbb{E})\), see Proposition 10.1. The map \(\varphi\) is the composition of a map \(\varphi^{\prime}:\tilde{\mathbb{P}}\to Q\) with the inclusion map \(\iota:Q\hookrightarrow\mathbb{P}(\mathbb{E})\). The generic fibre of \(Q\to B\) is a conic, hence \(O(Q)=O(2)\otimes O(u^{*}A)\) for some divisor \(A\) on \(B\). We determine \(A\).
**Lemma 12.3**.: _On \(\mathbb{P}(\mathbb{E})\) we have the linear equivalence_
\[[Q]\sim[O(2)]+u^{*}(4\,\lambda-(\Delta_{2}+\Delta_{3}+\Delta_{4}))\,.\]

Proof.: We have \(\omega_{\mathbb{P}(\mathbb{E})}\otimes u^{*}(\omega_{B}^{-1})=O(-3)\otimes u^{*}(\det\mathbb{E})\) and by adjunction \(\omega_{Q}=\iota^{*}(O(Q)\otimes\omega_{\mathbb{P}(\mathbb{E})})\). Since \(\varphi^{\prime}\) is a blow down we have \(\omega_{\tilde{\mathbb{P}}}=(\varphi^{\prime})^{*}\omega_{Q}\otimes O(\Pi_{2}+\Pi_{3})\). We get
\[\begin{aligned}\varphi^{*}(O(Q))&=\varphi^{\prime}{}^{*}\omega_{Q}\otimes\varphi^{*}\omega_{\mathbb{P}(\mathbb{E})}^{-1}\\ &=\omega_{\tilde{\mathbb{P}}}\otimes O(-\Pi_{2}-\Pi_{3})\otimes\varphi^{*}O(3)\otimes\pi^{*}\det(\mathbb{E})^{-1}\otimes\pi^{*}\omega_{B}^{-1}\\ &=\omega_{\pi}\otimes O(-\Pi_{2}-\Pi_{3})\otimes\varphi^{*}O(3)\otimes\pi^{*}\det(\mathbb{E})^{-1}\,.\end{aligned}\]
On the other hand we have \(\varphi^{*}O(Q)=\varphi^{*}O(2)\otimes O(\pi^{*}A)\) and \(\varphi^{*}O(1)=N\), hence we get
\[O(\pi^{*}A)=N\otimes\omega_{\pi}\otimes O(-\Pi_{2}-\Pi_{3})\otimes\pi^{*}\det(\mathbb{E})^{-1}\,. \tag{12}\]
By Lemma 12.2 we have \(N=\omega_{\pi}^{-1}\otimes O(\Pi_{2}+\Pi_{3})\otimes O(\pi^{*}(U))\). Substituting this in (12) we get the desired result.

The effective divisor \(Q\) yields a modular form and Lemma 12.3 gives its weight.

**Corollary 12.4**.: _The effective divisor \(Q\) on \(\mathbb{P}(\mathbb{E})\) defines a modular form \(\chi_{2,0,4}\) on \(\tilde{\mathcal{H}}_{3,2}\) of weight \((2,0,4)\), that is, a non-zero section of \(\operatorname{Sym}^{2}(\mathbb{E})\otimes\det(\mathbb{E})^{4}\)._

Since the divisor \(\Delta_{2}+\Delta_{3}+\Delta_{4}\) is not a pull back from the moduli space \(\overline{\mathcal{H}}_{3}\), the modular form does not descend to \(\overline{\mathcal{H}}_{3}\). Recall that the modular form \(\chi_{4,0,8}\) restricted to the hyperelliptic locus was associated to a divisor \(D\) that equals \(2Q\).

## 13. Comparison with the Hodge Bundle

We know by Lemma 9.3 that the line bundle \(N=O_{\tilde{\mathbb{P}}}(D_{k}+E_{k}-R)\) on \(\tilde{\mathbb{P}}\) over \(\widetilde{\mathcal{H}}_{3,2}\) has the property that \(\pi_{*}(N)\cong\mathbb{E}\) up to codimension \(2\). We now deal with the push forward of the tensor powers of \(N\).

**Lemma 13.1**.: _We have for \(m\in\mathbb{Z}_{\geq 1}\)_
\[c_{1}(\pi_{*}(N^{\otimes m}))=\frac{2\,m^{2}+m}{14}\,\Delta_{2}+\frac{5\,m^{2}-m}{14}(\Delta_{3}+\Delta_{4})\,.\]

Proof.: We apply Grothendieck-Riemann-Roch to \(\pi\) and \(N^{\otimes m}\) as in (9) in the proof of Proposition 9.2. Recall that \(N\) corresponds to the divisor (class) \(D_{k}+E_{k}-R\). We use that \(R^{1}\pi_{*}N^{\otimes m}=0\) for all \(m\) and find
\[c_{1}(\pi_{*}(N^{\otimes m}))=\frac{1}{2}\pi_{*}\left(m^{2}(D_{k}+E_{k}-R)^{2}-m\,\omega_{\pi}\cdot(D_{k}+E_{k}-R)\right)\]
and using the relations (8) and (9) of the proof of Proposition 9.2 we get
\[c_{1}(\pi_{*}(N^{\otimes m}))=\frac{2\,m^{2}+m}{14}\,\Delta_{2}+\frac{5\,m^{2}-m}{14}(\Delta_{3}+\Delta_{4})\]
as required.

**Proposition 13.2**.: _On \(B\) we have the exact sequence_
\[0\to\operatorname{Sym}^{m-2}(\mathbb{E})\otimes O(-A)\to\operatorname{Sym}^{m}(\mathbb{E})\to\pi_{*}(N^{\otimes m})\to 0\,,\]
_with \(A=4\,\lambda-(\Delta_{2}+\Delta_{3}+\Delta_{4})\)._

Proof.: By Lemma 12.3 we have on \(\mathbb{P}(\mathbb{E})\) the exact sequence
\[0\to O(m-2)\otimes u^{*}O(-A)\to O(m)\to O(m)_{|Q}\to 0\,.\]
Applying \(u_{*}\) and observing that \(R^{1}u_{*}O(m-2)\) vanishes gives the result.
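As a consistency check (ours, not part of the original argument), Lemma 13.1 and Proposition 13.2 can be played off against each other: taking first Chern classes in the exact sequence, under the assumptions \(\operatorname{rank}(\mathbb{E})=3\) and \(14\,\lambda=3\,\Delta_{2}+4\,\Delta_{3}+4\,\Delta_{4}\) from Section 12, reproduces the class of Lemma 13.1. A short sympy sketch:

```python
# Sanity check: c1(pi_*(N^m)) implied by the sequence of Proposition 13.2
# matches the class stated in Lemma 13.1. Assumes rank(E) = 3 and the
# relation 14*lambda = 3*Delta2 + 4*Delta3 + 4*Delta4.
import sympy as sp

m, D2, D3, D4 = sp.symbols('m Delta2 Delta3 Delta4')
lam = (3*D2 + 4*D3 + 4*D4) / 14
A = 4*lam - (D2 + D3 + D4)

rank_sym = lambda k: (k + 1) * (k + 2) / 2           # rank of Sym^k(E), rank(E)=3
c1_sym = lambda k: k * (k + 1) * (k + 2) / 6 * lam   # c1(Sym^k(E)) = binom(k+2,3)*c1(E)

from_sequence = c1_sym(m) - (c1_sym(m - 2) - rank_sym(m - 2) * A)
from_lemma = (2*m**2 + m)/14 * D2 + (5*m**2 - m)/14 * (D3 + D4)

assert sp.simplify(from_sequence - from_lemma) == 0  # the two classes agree
```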
A section of \(\operatorname{Sym}^{j}(\mathbb{E})\otimes\det(\mathbb{E})^{k}\) over \(\mathcal{H}_{3}\) pulls back to the stack \([W^{0}_{8,-2}/(\operatorname{GL}(W)/(\pm 1_{W}))]\) as a section of \(\operatorname{Sym}^{j}(\operatorname{Sym}^{2}(W))\otimes\det(W)^{k/2}\) for even \(k\). We have an isotypical decomposition
\[\operatorname{Sym}^{j}(\operatorname{Sym}^{2}(W))=\oplus_{n=0}^{\lfloor j/2\rfloor}\operatorname{Sym}^{2j-4n}(W)\otimes\det(W)^{2n}\,,\]
where we assume here and in the rest of this section that the characteristic is \(0\), or is different from \(2\) and large enough for this identity to hold (or use divided powers as in [1, 3.1]).

A section of \(\operatorname{Sym}^{j}(\mathbb{E})\otimes\det(\mathbb{E})^{k}\) over \(\mathcal{M}_{3}^{nh}\) pulls back to \([V_{4,0,-1}/\operatorname{GL}(V)]\), where we now write \(V\) for the standard space of dimension \(3\). An identification \(V\cong\operatorname{Sym}^{2}(W)\) corresponds to an embedding \(\mathbb{P}^{1}\hookrightarrow\mathbb{P}^{2}\) with image a smooth conic. If we view \(V\) with basis \(x,y,z\), the kernel of the projection
\[\operatorname{Sym}^{j}(V)=\operatorname{Sym}^{j}(\operatorname{Sym}^{2}(W))\to\operatorname{Sym}^{2j}(W)\]
consists of the polynomials of degree \(j\) in \(x,y,z\) that vanish on the conic. Thus in view of the isotypical decomposition above the exact sequence
\[0\to\operatorname{Sym}^{m-2}(\mathbb{E})\otimes O(-A)\to\operatorname{Sym}^{m}(\mathbb{E})\to\pi_{*}(N^{\otimes m})\to 0\]
corresponds to (the pullback to \(W^{0}_{8,-2}\) of) an exact sequence
\[0\to\left(\operatorname{Sym}^{m-2}(\operatorname{Sym}^{2}W)\right)\otimes\det(W)^{2}\to\operatorname{Sym}^{m}(\operatorname{Sym}^{2}W)\to\operatorname{Sym}^{2m}(W)\to 0\,.\]
The section \(\chi_{4,0,8}\) of \(\operatorname{Sym}^{4}(\mathbb{E})\otimes\det(\mathbb{E})^{8}\) restricted to the hyperelliptic locus admits three projections according to the decomposition
\[\operatorname{Sym}^{4}(\operatorname{Sym}^{2}W)\otimes\det(W)^{24}=W_{8,24}\oplus W_{4,26}\oplus W_{0,28}\,. \tag{13}\]

**Lemma 13.3**.: _The projections to the three summands in (13) of the pull back of \(\chi_{4,0,8}\) to \(\mathcal{H}_{3,2}\) define modular forms on \(\overline{\mathcal{H}}_{3,2}\) of weights \((4,0,8)\), \((2,0,4)\) and \((0,0,14)\) and these are given by the covariants \(f_{8,-2}\,\mathfrak{d}\), \(f_{4,-1}\,\mathfrak{d}\) and the discriminant \(\mathfrak{d}\)._

Proof.: The identification of \(\mathbb{E}\) with \(\operatorname{Sym}^{2}(W)\) corresponds to the embedding of \(\mathbb{P}^{1}\) as a conic \(C\) in \(\mathbb{P}^{2}\). A ternary quartic \(Q\) contains \(C\) as a component either \(0\), \(1\) or \(2\) times, say \(Q=mC+R\) with \(0\leq m\leq 2\). The three projections correspond to \(R\cap C\) and give the universal binary octic, the universal binary quartic and \(1\) up to twisting. The first projection was identified in Proposition 5.1. The argument for the second is similar, while the third descends to \(\overline{\mathcal{H}}_{3}\) and does not vanish on \(\mathcal{H}_{3}\). Therefore it must be a multiple of the discriminant. Taking into account the action of \(\operatorname{GL}_{2}/\pm 1_{W}\) we get the indicated weights (namely \(2(14+\epsilon)\) with \(\epsilon=-2,-1,0\)).

## 14. More Modular Forms for Genus Three

We will use more effective divisors on projectivized Hodge bundles to produce more modular forms. Here pull back to the Hurwitz space can be useful.
We start by calculating the coefficient of \(\delta_{1}\) in the cycle relation for the canonical curve of genus \(3\) given in Proposition 3.1. Let \(\overline{D}\) be the closure in \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{3}\) of the canonical image \(D\) of the universal curve of genus \(3\). **Lemma 14.1**.: _We have \([\overline{D}]=[O(4)]+u^{*}(8\,\lambda-\delta_{0}-2\,\delta_{1})\)._ Proof.: We will prove this by pulling back the divisor \(\overline{D}\) to the Hurwitz space \(\mathcal{H}_{3,2}\) and take its closure there in \(\mathbb{P}(\mathbb{E})\) over the space of admissible covers \(\overline{\mathcal{H}}_{3,2}\). Since \(\overline{D}\) is defined as the closure of the canonical image \(D\) of the universal curve over \(\mathcal{M}_{3}\), it does not contain the whole fibre of \(\mathbb{P}(\mathbb{E})\) over a boundary component of \(\overline{\mathcal{M}}_{3}\). But in order to be sure that the pull back of \(\overline{D}\) on \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{H}}_{3,2}\) coincides with that of the closure of the pull back of \(D\) over \(\mathcal{H}_{3,2}\), we have to check that \(\overline{D}\) does not contain the whole fibre of \(\mathbb{P}(\mathbb{E})\) over boundary components of \(\overline{\mathcal{H}}_{3,2}\). For boundary components of type \(\Delta_{0}\) this follows from the fact that in a \(1\)-dimensional family \(X\to B\) with central fibre \(C\) of type \(\Delta_{0}\) the canonical image in the fibre of \(\mathbb{P}(\mathbb{E})\) is well-defined and depends only on \(C\). For a \(1\)-dimensional family with central fibre of type \(\Delta_{1}\) we blow up the node of the central fibre, do a quadratic base change and obtain a family \(\tilde{X}\to\tilde{B}\) with central fibre of type \(C=C_{1}\cup R\cup C_{2}\) with \(R\) a \((-2)\)-curve. We show in the appendix Section 17 that also in this case the image of \(C\) in the central fibre of \(\mathbb{P}(\mathbb{E})\) is well-defined and depends only on \(C\). In fact, the \(\mathbb{P}^{2}\) that is the projectivized central fibre of \(\mathbb{E}\), contains a line \(L\) and a point \(P\) corresponding to the cotangent spaces of \(\operatorname{Jac}(C_{i})\) in that of \(\operatorname{Jac}(C)\). The image of the central fibre is \(2L+2L^{\prime}\) with \(L^{\prime}\) the line connecting \(P\) with the image of a Weierstrass point on \(L\). Thus we may pull back to \(\overline{\mathcal{H}}_{3,2}\) and do the calculation of the class there. We have \([\overline{D}]=O(4)+u^{*}(a\,\lambda+b\,\delta_{0}+c\,\delta_{1})\). The pull back of the class to the space of admissible covers \(\overline{\mathcal{H}}_{3,2}\) is \(O(4)+a\,\lambda+2\,b(\Delta_{2}+\Delta_{4})+c\,\Delta_{3}\). Using the class of \(Q\) given in Lemma 12.3 we see that this equals \(O(4)+8\,\lambda-2(\Delta_{2}+\Delta_{3}+\Delta_{4})\). By the relation \(14\,\lambda=3\,\Delta_{2}+4\Delta_{3}+4\,\Delta_{4}\) this gives \[(3a+28b+4)\Delta_{2}+(4a+14c-4)\Delta_{3}+(4a+28b-4)\Delta_{4}=0\] and by the linear independence of the boundary classes this implies \(a=8\), \(b=-1\) and \(c=-2\). _Remark 14.2_.: Since the case \(g=2\) is hyperelliptic the method just used in the proof of Lemma 14.1 can be applied as well to determine in an alternative way the class of the closure \(\overline{D}\) of the ramification divisor \(D\) of the universal genus \(2\) curve in Proposition 2.1. 
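The linear system solved at the end of the proof of Lemma 14.1 is easy to verify mechanically; a minimal sympy check (ours, for illustration):

```python
# Verify the boundary-coefficient system from the proof of Lemma 14.1:
# (3a+28b+4) Delta_2 + (4a+14c-4) Delta_3 + (4a+28b-4) Delta_4 = 0.
import sympy as sp

a, b, c = sp.symbols('a b c')
solution = sp.solve([3*a + 28*b + 4,    # coefficient of Delta_2
                     4*a + 14*c - 4,    # coefficient of Delta_3
                     4*a + 28*b - 4],   # coefficient of Delta_4
                    [a, b, c])
print(solution)  # {a: 8, b: -1, c: -2}
```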
By the theory of admissible covers there is a natural map \(\overline{\mathcal{H}}_{2,2}\to\overline{\mathcal{M}}_{2}\) with the property that the pull back of the Hodge bundle on \(\overline{\mathcal{M}}_{2}\) is the Hodge bundle on \(\overline{\mathcal{H}}_{2,2}\) associated to the corresponding family of admissible covers. Hence the pull back of the \(O(1)\) of the bundle \(\mathbb{P}(\mathbb{E})\to\overline{\mathcal{M}}_{2}\) equals the \(O(1)\) of the bundle \(\mathbb{P}(\mathbb{E})\to\overline{\mathcal{H}}_{2,2}\). Let \(\Sigma=\varphi_{*}(\sum_{k=1}^{6}\tilde{S}_{k})\), with \(\varphi:\tilde{\mathbb{P}}\to\mathbb{P}(\mathbb{E})\) the map defined in Section 10. By geometry, the pull back of \(\overline{D}\) to the bundle \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{H}}_{2,2}\) equals \(\Sigma\). By Remark 10.4 we have \(\varphi^{*}\Sigma=\tilde{S}+2\,\Pi_{2}+6\,\Pi_{3}\). By using the formulas of Section 11, we have for \(g=2\):
\[\varphi^{*}O(6)=\tilde{M}-6\,R=\tilde{S}+\frac{12}{5}\Pi_{2}+\frac{2}{5}\Pi_{2}^{c}+\frac{24}{5}\Pi_{3}-\frac{3}{5}R\,.\]
We now write \([\overline{D}]=O(6)+u^{*}(a\,\delta_{0}+b\,\delta_{1})\). By pulling back to \(\tilde{\mathbb{P}}\) and using the above formulas, we get (we refer to the diagram in Section 10 for notation)
\[\tilde{S}+\frac{12}{5}\Pi_{2}+\frac{2}{5}\Pi_{2}^{c}+\frac{24}{5}\Pi_{3}-\frac{3}{5}R+\pi^{*}(2a\,\Delta_{2}+b\,\Delta_{3})=\tilde{S}+2\,\Pi_{2}+6\,\Pi_{3}\,.\]
This implies
\[\pi^{*}(2a\,\Delta_{2}+b\,\Delta_{3})=-\frac{2}{5}(\Pi_{2}+\Pi_{2}^{c})+\frac{3}{5}(2\,\Pi_{3}+R)=\pi^{*}(-\frac{2}{5}\Delta_{2}+\frac{3}{5}\Delta_{3})\,,\]
hence \(a=-1/5\) and \(b=3/5\) and the result follows by using the formula \(10\,\lambda=\delta_{0}+2\,\delta_{1}\).

The connection between divisors on projectivized Hodge bundles and modular forms can also be used in the other direction: obtaining results on cycle classes using modular forms. We give a few examples.

To a canonical quartic plane curve \(C\) we can associate a curve \(\check{S}\) in the dual plane of lines intersecting \(C\) equianharmonically. It corresponds to a contravariant (concomitant) \(\sigma\) of the ternary quartic given by Salmon in [22, p. 264] and it is defined by an equivariant \(\operatorname{GL}(3)\) embedding \(W[4,4,0]\hookrightarrow\operatorname{Sym}^{2}(\operatorname{Sym}^{4}(W))\). It gives rise to a divisor in \(\mathbb{P}(\mathbb{E}^{\vee})\) and a modular form \(\chi_{0,4,16}\) of weight \((0,4,16)\). We refer to [5, p. 54] for the relation between invariant theory of ternary quartics and modular forms. The Siegel modular form \(\chi_{0,4,16}\) vanishes with order \(2\) at infinity and order \(4\) along the locus \(\mathcal{A}_{2,1}\) of decomposable abelian threefolds. With \(\check{u}:\mathbb{P}(\mathbb{E}^{\vee})\to\overline{\mathcal{M}}_{3}\) the projection we have \(\check{u}_{*}(O_{\mathbb{P}(\mathbb{E}^{\vee})}(1))=\mathbb{E}^{\vee}\cong\wedge^{2}\mathbb{E}\otimes\det(\mathbb{E})^{-1}\) and we thus find an effective divisor on \(\mathbb{P}(\mathbb{E}^{\vee})\) over \(\tilde{\mathcal{A}}_{3}\) with class \([\check{S}]=[O_{\mathbb{P}(\mathbb{E}^{\vee})}(4)]+20\,\lambda-2\,\delta\), and it vanishes with multiplicity \(4\) along \(\mathcal{A}_{2,1}\). We thus find on \(\mathbb{P}(\mathbb{E}^{\vee})\) over \(\overline{\mathcal{M}}_{3}\) a relation
\[[\check{S}]=[O_{\mathbb{P}(\mathbb{E}^{\vee})}(4)]+20\,\lambda-2\,\delta_{0}-4\,\delta_{1}\,,\]
where we identify \(\lambda\) and \(\delta_{i}\) with their pullbacks to \(\mathbb{P}(\mathbb{E}^{\vee})\).
Similarly, in the dual plane we have the sextic \(\check{T}\) of lines intersecting the quartic curve in a quadruple of points with \(j\)-invariant \(1728\). The corresponding concomitant \(\tau\) corresponds to \(W[6,6,0]\hookrightarrow\operatorname{Sym}^{3}(\operatorname{Sym}^{4}(W))\) and defines a modular form of weight \((0,6,24)\) vanishing with multiplicity \(3\) at infinity and multiplicity \(6\) along \(\mathcal{A}_{2,1}\). We thus get a cycle relation \[[\check{T}]=[O_{\mathbb{P}(\mathbb{E}^{\vee})}(6)]+30\,\lambda-3\,\delta_{0}- 6\,\delta_{1}\,.\] The concomitant \(\sigma^{3}-27\,\tau^{2}\) vanishes on the locus of double conics and the corresponding modular form of weight \((0,12,48)\) is divisible by \(\chi^{2}_{18}\) as can be checked using the methods of [5]. Dividing by \(\chi^{2}_{18}\) gives a cusp form of weight \((0,12,12)\) vanishing with multiplicity \(2\) at infinity and multiplicity \(3\) along \(\mathcal{A}_{2,1}\). It is classically known (see e.g. [3, p. 43]) that this concomitant defines the dual curve \(\check{C}\) to the canonical image \(C\) in \(\mathbb{P}(\mathbb{E})\). We thus find an effective divisor in \(\mathbb{P}(\mathbb{E}^{\vee})\) containing the closure of the dual curve with class \[12\,[O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)]+24\,\lambda-2\,\delta_{0}-3\, \delta_{1}\,.\] This effective divisor class can also be given by the cycle \[B=\{(C,\eta)\in\mathbb{P}(\mathbb{E}^{\vee}):\,\text{div}(\eta)\text{ has a point of multiplicity }2\}\] over \(\mathcal{M}_{3}\) and Korotkin and Zograf in [19] determined the class of its closure \(\overline{B}\) \[[\overline{B}]=12\,[O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)]+24\,\lambda-2\, \delta_{0}-3\,\delta_{1}\,.\] Another example of an effective divisor for genus \(3\) is provided by the Weierstrass divisor \(W\) with class \[[\overline{W}]=24\,[O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)]+68\,\lambda-6\, \delta_{0}-12\,\delta_{1}\] as given by Gheorghita in [11]. Here we get a section of \[\operatorname{Sym}^{24}(\wedge^{2}\mathbb{E})\otimes\det(\mathbb{E})^{44}(-6 \,\delta_{0}-12\,\delta_{1})\] This gives a Teichmuller modular form of weight \((0,24,44)\) vanishing with multiplicity \(6\) at the cusp. It descends to a Siegel modular form. **Corollary 14.3**.: _The dual of the canonical curve defines a Siegel modular cusp form of degree \(3\) of weight \((0,12,12)\) vanishing with multiplicity \(2\) at infinity. The Weierstrass divisor defines a cusp form of weight \((0,24,44)\) vanishing with multiplicity \(6\) at infinity._ ## 15. The hypertangent divisor A generic canonically embedded curve \(C\) of genus \(3\) has \(24\) (Weierstrass) points where the tangent line intersects \(C\) with multiplicity \(3\). The union of these \(24\) lines forms a divisor in \(\mathbb{P}^{2}\). Taking the closure of this divisor for the universal curve over \(\mathcal{M}_{3}\) defines a divisor \(H\) in \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{M}}_{3}\) which we call the hypertangent line divisor. We calculate the class of this divisor over \(\overline{\mathcal{M}}_{3}\) and also calculate the class of a corresponding divisor over \(\overline{\mathcal{H}}_{3,2}\). The calculation over \(\overline{\mathcal{M}}_{3}\) uses the divisors \(\check{S}\) and \(\check{T}\) in \(\mathbb{P}(\mathbb{E}^{\vee})\) over \(\overline{\mathcal{M}}_{3}\) as defined in the preceding section. 
It is a classical result that the intersection \(\check{S}\cdot\check{T}\) in the generic fibre is the \(0\)-cycle consisting of the \(24\) points defining the \(24\) hyperflexes of the generic curve \(C\), see [22]. We consider the incidence variety
\[I=\{(p,\ell)\in\mathbb{P}(\mathbb{E})\times_{\overline{\mathcal{M}}_{3}}\mathbb{P}(\mathbb{E}^{\vee}):p\in\ell\}\,.\]
Let \(\pi:I\to\mathbb{P}(\mathbb{E})\) and \(\check{\pi}:I\to\mathbb{P}(\mathbb{E}^{\vee})\) be the two projections; they fit into a commutative diagram. We have the tautological sequence on \(\mathbb{P}(\mathbb{E})\)
\[0\to F\to u^{*}(\mathbb{E})\to O_{\mathbb{P}(\mathbb{E})}(1)\to 0\]
and a similar one on \(\mathbb{P}(\mathbb{E}^{\vee})\)
\[0\to\check{F}\to\check{u}^{*}(\mathbb{E}^{\vee})\to O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)\to 0\,.\]
Now note that \(I\) can be identified with the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}(F^{\vee})\) on \(\mathbb{P}(\mathbb{E})\), but also with the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}(\check{F}^{\vee})\) on \(\mathbb{P}(\mathbb{E}^{\vee})\). The tautological inclusion \(F\to u^{*}\mathbb{E}\) induces a surjection \(u^{*}\mathbb{E}^{\vee}\to F^{\vee}\) and this gives an inclusion \(\mathbb{P}(F^{\vee})\to\mathbb{P}(u^{*}\mathbb{E}^{\vee})\) of projective bundles over \(\mathbb{P}(\mathbb{E})\) which, composed with the natural map \(\mathbb{P}(u^{*}\mathbb{E}^{\vee})\to\mathbb{P}(\mathbb{E}^{\vee})\), gives the map \(\check{\pi}:I=\mathbb{P}(F^{\vee})\to\mathbb{P}(\mathbb{E}^{\vee})\). This implies
\[O_{\mathbb{P}(F^{\vee})}(1)=\check{\pi}^{*}O_{\mathbb{P}(\mathbb{E}^{\vee})}(1)\quad\text{and similarly}\quad O_{\mathbb{P}(\check{F}^{\vee})}(1)=\pi^{*}O_{\mathbb{P}(\mathbb{E})}(1)\,. \tag{14}\]
With
\[f=c_{1}(F),\quad\check{f}=c_{1}(\check{F}),\quad h=c_{1}(O_{\mathbb{P}(\mathbb{E})}(1)),\quad\check{h}=c_{1}(O_{\mathbb{P}(\mathbb{E}^{\vee})}(1))\,,\]
this gives, using \(c_{1}(\mathbb{E})=-c_{1}(\mathbb{E}^{\vee})=\lambda\), the identity of pullbacks of first Chern classes
\[\pi^{*}(f)+\pi^{*}(h)=\pi^{*}u^{*}(\lambda)=\check{\pi}^{*}\check{u}^{*}(\lambda)=-\check{\pi}^{*}(\check{f})-\check{\pi}^{*}(\check{h})\,.\]
Since \(I=\mathbb{P}(F^{\vee})\) over \(\mathbb{P}(\mathbb{E})\) and \(\check{\pi}^{*}\check{h}=c_{1}(O_{\mathbb{P}(F^{\vee})}(1))\), the Chern classes of \(F^{\vee}\) and the first Chern class of the tautological line bundle satisfy the relation
\[\check{\pi}^{*}\check{h}^{2}+\pi^{*}(f)\,\check{\pi}^{*}(\check{h})+\pi^{*}(c_{2}(F))=0\,. \tag{15}\]

**Corollary 15.1**.: _Under the map \(\pi_{*}\check{\pi}^{*}\) we have_
\[\check{h}^{2}\mapsto h-u^{*}(\lambda),\quad\check{h}\,\check{u}^{*}(\xi)\mapsto u^{*}(\xi),\quad\check{u}^{*}(\eta)\mapsto 0\]
_for \(\xi\in CH^{1}(\overline{\mathcal{M}}_{3})\) and \(\eta\in CH^{2}(\overline{\mathcal{M}}_{3})\)._

Proof.: Using relation (15) gives
\[\pi_{*}(\check{\pi}^{*}(\check{h})^{2})=-\pi_{*}\left(\pi^{*}(f)\,\check{\pi}^{*}(\check{h})+\pi^{*}(c_{2}(F))\right)=-f=h-u^{*}(\lambda)\,.\]
The other properties follow from general intersection theory.

Let now \(\psi\) be the class of the codimension \(2\) cycle \(\check{S}\cdot\check{T}\).

**Lemma 15.2**.: _We have \(\pi_{*}\check{\pi}^{*}\psi=24\,h+216\,\lambda-24\,\delta_{0}-48\,\delta_{1}\)._

Proof.: By the results of the preceding section we have
\[\psi=24\,\check{h}^{2}+240\,\check{h}\lambda-24\,\check{h}\delta_{0}-48\,\check{h}\delta_{1}+r\]
with \(r\in\check{u}^{*}CH^{2}(\overline{\mathcal{M}}_{3})\). Corollary 15.1 implies the result.
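The bookkeeping in this proof is mechanical; a small sympy sketch (ours) that multiplies out \([\check{S}]\cdot[\check{T}]\) and applies the three rules of Corollary 15.1:

```python
# Check of Lemma 15.2: expand psi = [S-check].[T-check] in powers of h-check
# and push forward using the rules of Corollary 15.1.
import sympy as sp

h, hc, lam, d0, d1 = sp.symbols('h hcheck lambda delta0 delta1')

S = 4*hc + 20*lam - 2*d0 - 4*d1            # class of S-check (Section 14)
T = 6*hc + 30*lam - 3*d0 - 6*d1            # class of T-check (Section 14)
psi = sp.Poly(sp.expand(S * T), hc)

# Corollary 15.1: hc^2 -> h - lam, hc * xi -> xi, pure base classes -> 0.
image = psi.coeff_monomial(hc**2) * (h - lam) + psi.coeff_monomial(hc)
print(sp.expand(image))   # -> 24*h + 216*lambda - 24*delta0 - 48*delta1
```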
We now claim that the codimension \(2\) cycle \(\check{S}\cdot\check{T}\), when restricted to the hyperelliptic locus, is of the form \(12\,\check{h}\); in other words, by (1) it contains an effective codimension \(2\) cycle with class
\[12\,(9\,\lambda-\delta_{0}-3\,\delta_{1})\,\check{h}+\check{u}^{*}(\xi)\]
with \(\xi\) a codimension \(2\) class on \(\overline{\mathcal{M}}_{3}\). We check this using the explicit form of the two concomitants \(\sigma\) and \(\tau\) defining \(\check{S}\) and \(\check{T}\). Here \(\sigma\) is a polynomial of degree \(4\) in \(a_{0},\ldots,a_{14}\) and degree \(4\) in the coordinates \(u_{0},u_{1},u_{2}\), where \(a_{0},\ldots,a_{14}\) are the coefficients of the general ternary quartic. A calculation shows that \(\sigma\) restricted to the locus of double conics becomes a square \(q^{2}\) with \(q\) of degree \(2\) in the \(u_{i}\), while \(\tau\) becomes a cube \(q^{3}\). Hence the cycle \(\check{S}\cdot\check{T}\) restricted to the hyperelliptic locus is represented by an effective cycle with class \(6\,q\sim 12\,\check{h}\). By Corollary 15.1 under \(\pi_{*}\check{\pi}^{*}\) this is sent to an effective cycle with class \(12(9\,\lambda-\delta_{0}-3\,\delta_{1})\).

Since \(H\) is defined as the closure of the hypertangent divisor in the generic fibre, the class of \(H\) equals \(\pi_{*}\check{\pi}^{*}\psi\) minus \(12\) times the class of the hyperelliptic locus; by Lemma 15.2 we get
\[24\,h+216\,\lambda-24\,\delta_{0}-48\,\delta_{1}-12(9\lambda-\delta_{0}-3\delta_{1})=24\,h+108\,\lambda-12\,\delta_{0}-12\,\delta_{1}\,.\]
We summarize.

**Proposition 15.3**.: _The class \([H]\) of the hypertangent divisor \(H\) in \(\mathbb{P}_{\overline{\mathcal{M}}_{3}}(\mathbb{E})\) equals \([O_{\mathbb{P}(\mathbb{E})}(24)]+108\,\lambda-12\,\delta_{0}-12\,\delta_{1}\). It gives rise to a Siegel modular form of degree \(3\) and weight \((24,0,108)\) vanishing with multiplicity \(12\) along the boundary._

We now work on the Hurwitz space and define and calculate the class of a hypertangent divisor \(H_{h}\) there. It is defined by taking the eight tangent lines at the ramification points of the canonical image. More precisely, on \(\tilde{\mathbb{P}}\) we have the line bundle \(N\) defined in (10). Recall that \(\tilde{S}_{k}\) for \(1\leq k\leq 8\) is the pullback of the section \(S_{k}\) of \(\pi_{9}:\overline{\mathcal{M}}_{0,9}\to\overline{\mathcal{M}}_{0,8}\). Under restriction to the hyperelliptic locus the Weierstrass points degenerate to the ramification points. We define the corresponding hypertangent divisor \(H_{h}\) in \(\mathbb{P}(\mathbb{E})\) over \(\overline{\mathcal{H}}_{3,2}\) by taking the tangents to the canonical image of the generic curve at the points of the sections \(\tilde{S}_{k}\), \(k=1,\ldots,8\) over \(\mathcal{H}_{3,2}\) and then taking the closure over \(\overline{\mathcal{H}}_{3,2}\).

We now consider the bundle \(N(-2\tilde{S}_{k})\) on \(\tilde{\mathbb{P}}\). This line bundle is trivial on the generic fibre of \(\pi:\tilde{\mathbb{P}}\to B\), so \(\pi_{*}(N(-2\tilde{S}_{k}))\) is a line bundle on \(B\).

**Lemma 15.4**.: _We have_
\[c_{1}(R^{1}\pi_{*}N(-2\tilde{S}_{k}))=\Delta_{2}(k^{+})+\Delta_{3}(k^{+}),\quad\text{and}\quad c_{1}(\pi_{*}N(-2\tilde{S}_{k}))=-\Delta_{3}(k^{+})-\Delta_{3}+E_{k}\,.\]

Proof.: Recall that \(N=O(D_{k}+\pi^{*}(E_{k})-R)\). The first statement follows by analyzing the restrictions over the boundary components. For the second we apply Grothendieck-Riemann-Roch as in the proof of Proposition 9.2.
By (8) and (9) we have
\[\begin{aligned}c_{1}(\pi_{*}N(-2\tilde{S}_{k}))&=-\Delta_{2}(k^{+})-2\,\Delta_{3}(k^{+})-\Delta_{3}+E_{k}+c_{1}(R^{1}\pi_{*}N(-2\tilde{S}_{k}))\\&=-\Delta_{3}(k^{+})-\Delta_{3}+E_{k}\,.\end{aligned}\]

Put \(\mathcal{F}_{k}=\pi_{*}(N(-2\tilde{S}_{k}))\). The injection \(N(-2\tilde{S}_{k})\hookrightarrow N\) induces an injection \(\mathcal{F}_{k}\to\mathbb{E}\). Pulling back to \(\mathbb{P}(\mathbb{E})\) via \(u^{*}\) and composing with the canonical surjection \(u^{*}(\mathbb{E})\to O_{\mathbb{P}(\mathbb{E})}(1)\) we get an induced map
\[q:u^{*}\mathcal{F}_{k}\to O_{\mathbb{P}(\mathbb{E})}(1)\,.\]
The degeneracy locus of \(q\) is an effective divisor \(F_{k}\) that is the vanishing divisor of a section of \(O_{\mathbb{P}(\mathbb{E})}(1)\otimes u^{*}\mathcal{F}_{k}^{-1}\). The interpretation is as follows. The map \(\varphi\) defines an embedding of the generic fibre of \(\tilde{\mathbb{P}}\) into the generic fibre of \(\mathbb{P}(\mathbb{E})\). If we identify \(H^{0}(\mathbb{P}^{1},O(2))\) with the fibre of \(\mathbb{E}\) and projectivize, the divisor \(p_{1}+p_{2}\in|O(2)|\) is mapped to the line through the points \(\varphi(p_{1})\), \(\varphi(p_{2})\). We now sum these divisors \(F_{k}\) and get an effective divisor \(H_{h}\) with class
\[[H_{h}]=8\,[O(1)]-\sum_{k=1}^{8}[u^{*}\mathcal{F}_{k}]=[O(8)]+u^{*}\Bigl(3\,\Delta_{3}+8\,\Delta_{3}-\sum_{k=1}^{8}E_{k}\Bigr)=[O(8)]+u^{*}(8\,\lambda-2\,\Delta_{2}+\Delta_{3})\,,\]
where we use the formulas of Section 9 and Section 11.

We can now compare the class of the hyperelliptic hypertangent divisor \(H_{h}\) with that of the pull back of the hypertangent divisor \(H\) to the Hurwitz space. By Proposition 15.3 the pull back of \(H\) has class \([O(24)]+108\,\lambda-24\,(\Delta_{2}+\Delta_{4})-12\,\Delta_{3}\). Since the 24 Weierstrass points collapse with multiplicity 3 to the 8 ramification points, we compare the class (of the pull back of) \([H]\) with that of \(3\,[H_{h}]\). Substituting the formula for \(\lambda\) we get
\[[H]-3\,[H_{h}]=9\,\Delta_{3}\,,\]
which means that the pull back of \(H\) vanishes with multiplicity \(9\) along the boundary component \(\Delta_{3}\) lying over \(\delta_{1}\).

## 16. Genus Four

For a smooth curve \(C\) of genus \(4\) the natural map \(\operatorname{Sym}^{2}(H^{0}(C,\omega_{C}))\to H^{0}(C,\omega_{C}^{\otimes 2})\) is surjective and the kernel has dimension \(1\). It determines a quadric in \(\mathbb{P}^{3}=\mathbb{P}(H^{0}(C,\omega_{C}))\) containing the canonical curve. Over \(\overline{\mathcal{M}}_{4}\) we find a corresponding exact sequence
\[0\to U\to\operatorname{Sym}^{2}(\mathbb{E})\to\pi_{*}\omega_{\pi}^{\otimes 2}\to 0\,.\]
The line bundle \(U\) has first Chern class \(5\,\lambda-(13\,\lambda-\delta)=-8\,\lambda+\delta\) by Mumford's calculation of \(c_{1}(\pi_{*}\omega_{\pi}^{\otimes 2})\) [20, Thm. 5.10]. In the bundle \(\mathbb{P}(\mathbb{E})\) the quadric containing the canonical curve determines a divisor \(Q\). Let \(u:\mathbb{P}(\mathbb{E})\to\overline{\mathcal{M}}_{4}\) be the projection.

**Lemma 16.1**.: _The divisor class of \(Q\) satisfies: \([Q]=[O(2)]+u^{*}(8\,\lambda-\delta)\)._

Proof.: Observe that \(u^{*}u_{*}O(2)=u^{*}(\operatorname{Sym}^{2}(\mathbb{E}))\). The natural morphism \(u^{*}u_{*}O(2)\to O(2)\) induces \(u^{*}U\to O(2)\). The divisor \(Q\) is the vanishing locus of this morphism, hence has class \([O(2)]+u^{*}(8\,\lambda-\delta)\).
**Corollary 16.2**.: _The effective divisor \(Q\) defines a Teichmuller modular cusp form \(\chi\) of genus \(4\) and weight \((2,0,0,8)\)._

If we view a section of \(\operatorname{Sym}^{2}(\mathbb{E})\) as a quadratic form on \(\mathbb{E}^{\vee}\) we can take the discriminant, cf. [6]. Doing this with the form \(\chi\) of weight \((2,0,0,8)\) just constructed we get a scalar-valued modular form \(D(\chi)\) of weight \(34\). This modular form vanishes on the closure of the locus of curves whose canonical model lies on a quadric cone. This locus has class \(34\lambda-4\,\delta_{0}-14\,\delta_{1}-18\,\delta_{2}\) by Teixidor i Bigas [23, Prop. 3.1] and equals the divisor of curves with a vanishing thetanull. The modular form \(D(\chi)\) is the square root of the restriction to \(\mathcal{M}_{4}\) of the product of the even theta constants on \(\mathcal{A}_{4}\).

## 17. Appendix on base-point freeness

We consider a family \(\pi:X\to B\) of semistable curves of genus \(g\) with \(B\) a disc or the spectrum of a discrete valuation ring. We assume that the central fibre \(C\) is a nodal curve which is a chain of three curves \(C=C_{1}\cup R\cup C_{2}\) with \(C_{1}\), \(C_{2}\) smooth curves of genus \(g_{1}\) and \(g_{2}\) and \(R\) a rational \((-2)\)-curve. We let \(q=C_{1}\cap R\) and \(p=C_{2}\cap R\).

**Proposition 17.1**.: _Let \(\omega\) be the relative dualizing sheaf of \(\pi:X\to B\). Then \(\pi_{*}(\omega(-R))\cong\pi_{*}(\omega)\) and the central fibre of \(\pi_{*}(\omega(-R))\) is of codimension \(1\) in \(H^{0}(C,\omega(-R))\) and defines a base point free linear system on \(C\)._

Proof.: The exact sequence \(0\to\omega(-R)\to\omega\to\omega_{|R}\to 0\) induces a sequence
\[0\to\pi_{*}(\omega(-R))\to\pi_{*}(\omega)\xrightarrow{r}\pi_{*}(\omega)_{|R}\]
and the map \(r\) is zero because \(\omega_{|C}=(\omega_{C_{1}}(q),O_{R},\omega_{C_{2}}(p))\): the restrictions to \(C_{1}\) (resp. \(C_{2}\)) must vanish at \(q\) (resp. \(p\)), hence extend by \(0\) on \(R\). We thus see by the exactness that \(\pi_{*}(\omega(-R))\cong\pi_{*}(\omega)\).

Next we observe that \(\dim H^{0}(C,\omega(-R))=g_{1}+g_{2}+1\) with \(g_{i}\) the genus of \(C_{i}\). This follows directly from \(\omega(-R)_{|C}=(\omega_{C_{1}},O_{R}(2),\omega_{C_{2}})\). We have the exact sequence
\[0\to\omega(-R-C_{1})\to\omega(-R)\to\omega(-R)_{|C_{1}}\to 0\,, \tag{22}\]
where \(\omega(-R)_{|C_{1}}\cong\omega_{C_{1}}\) and \(\omega(-R-C_{1})_{|C}=(\omega_{C_{1}}(q),O_{R}(1),\omega_{C_{2}})\). For a section \((s_{1},s,s_{2})\in H^{0}(C,\omega(-R-C_{1}))\) the section \(s\) is the unique section of \(O_{R}(1)\) that vanishes at \(q\) and with \(s(p)=s_{2}(p)\). We thus see \(\dim H^{0}(C,\omega(-R-C_{1}))=g_{1}+g_{2}\). Therefore \(h^{0}(\omega(-R-C_{1}))\) is constant, equal to \(g_{1}+g_{2}\), on the fibres of \(\pi\), hence \(R^{1}\pi_{*}(\omega(-R-C_{1}))\) is a line bundle. We conclude that the special fibre of \(\pi_{*}(\omega(-R-C_{1}))\) equals \(H^{0}(C,\omega(-R-C_{1}))\). But \(\pi_{*}(\omega(-R)_{|C_{1}})\) is a torsion sheaf, hence the connecting homomorphism \(\pi_{*}(\omega(-R)_{|C_{1}})\to R^{1}\pi_{*}(\omega(-R-C_{1}))\) of (22) must be zero and we get an induced exact sequence
\[0\to\pi_{*}(\omega(-R-C_{1}))\xrightarrow{\iota}\pi_{*}(\omega(-R))\xrightarrow{j}\pi_{*}(\omega(-R)_{|C_{1}})\to 0\,.\]
Consider now a section \(\sigma\) of \(\pi_{*}(\omega(-R-C_{1}))\) with restriction \((s_{1},s,s_{2})\) to \(C\). Suppose that \(s\neq 0\).
If we multiply \(\sigma\) with a local section \(\tau\) of \(O(C_{1})\) on \(X\) with divisor \(C_{1}\), then \(\iota(\sigma)=\sigma\cdot\tau_{|C}\) has as restriction to \(R\) a section of \(O_{R}(2)\) vanishing with multiplicity \(2\) at \(q\), and therefore it does not vanish anywhere else. Hence the subspace of the special fibre \(V\) of \(\pi_{*}(\omega(-R))\) of sections vanishing on \(C_{1}\) has \(q\) as its only base point on \(R\). Furthermore, the map \(j\) is surjective, and choosing a section \(s_{1}\in H^{0}(C_{1},\omega_{C_{1}})\) with \(s_{1}(q)\neq 0\) we see that \(q\) is not a base point. Therefore there are no base points on \(R\). By the surjectivity of \(j\) the restriction of \(V\) to \(C_{1}\) is \(H^{0}(C_{1},\omega_{C_{1}})\) and therefore there are no base points on \(C_{1}\). By symmetry the same holds for \(C_{2}\).

Similarly to (22) we have the exact sequence
\[0\to\omega(-R-C_{1}-C_{2})\to\omega(-R)\to\omega(-R)_{|C_{1}+C_{2}}\to 0\]
and by a similar reasoning we see that we get an exact sequence
\[0\to\pi_{*}(\omega(-R-C_{1}-C_{2}))\xrightarrow{\iota}\pi_{*}(\omega(-R))\xrightarrow{j}\pi_{*}(\omega(-R)_{|C_{1}+C_{2}})\to 0\,.\]
This implies that given \(s_{1}\in H^{0}(C_{1},\omega_{C_{1}})\) and \(s_{2}\in H^{0}(C_{2},\omega_{C_{2}})\) there is a unique element \((s_{1},s,s_{2})\) in the special fibre \(V\) of \(\pi_{*}(\omega(-R))\) mapping to \((s_{1},s_{2})\) under \(j\).

The morphism \(X\to\mathbb{P}(\pi_{*}(\omega(-R)))\) is given by the surjection \(\pi^{*}\pi_{*}(\omega(-R))\to\omega(-R)\). The image of the curve \(C\) in the special fibre of \(\mathbb{P}(\mathbb{E})\) consists of the canonical images of \(C_{1}\) and \(C_{2}\), together with the images of \(p\) and \(q\), and the image of \(R\), that is, the line connecting the images of \(p\) and \(q\). If \(g(C_{i})=1\) then the image of \(C_{i}\) is a point.
2310.18777
Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings
Compositional generalization, the ability of an agent to generalize to unseen combinations of latent factors, is easy for humans but hard for deep neural networks. A line of research in cognitive science has hypothesized a process, ``iterated learning,'' to help explain how human language developed this ability; the theory rests on simultaneous pressures towards compressibility (when an ignorant agent learns from an informed one) and expressivity (when it uses the representation for downstream tasks). Inspired by this process, we propose to improve the compositional generalization of deep networks by using iterated learning on models with simplicial embeddings, which can approximately discretize representations. This approach is further motivated by an analysis of compositionality based on Kolmogorov complexity. We show that this combination of changes improves compositional generalization over other approaches, demonstrating these improvements both on vision tasks with well-understood latent factors and on real molecular graph prediction tasks where the latent structure is unknown.
Yi Ren, Samuel Lavoie, Mikhail Galkin, Danica J. Sutherland, Aaron Courville
2023-10-28T18:30:30Z
http://arxiv.org/abs/2310.18777v1
# Improving Compositional Generalization using Iterated Learning and Simplicial Embeddings ###### Abstract Compositional generalization, the ability of an agent to generalize to unseen combinations of latent factors, is easy for humans but hard for deep neural networks. A line of research in cognitive science has hypothesized a process, "iterated learning," to help explain how human language developed this ability; the theory rests on simultaneous pressures towards compressibility (when an ignorant agent learns from an informed one) and expressivity (when it uses the representation for downstream tasks). Inspired by this process, we propose to improve the compositional generalization of deep networks by using iterated learning on models with simplicial embeddings, which can approximately discretize representations. This approach is further motivated by an analysis of compositionality based on Kolmogorov complexity. We show that this combination of changes improves compositional generalization over other approaches, demonstrating these improvements both on vision tasks with well-understood latent factors and on real molecular graph prediction tasks where the latent structure is unknown. ## 1 Introduction Deep neural networks have shown an amazing ability to generalize to new samples on domains where they have been extensively trained, approaching or surpassing human performance on tasks including image classification [62], Go [70], reading comprehension [13], and more. A growing body of literature, however, demonstrates that some tasks that can be easily solved by a human can be hard for deep models. One important such problem is compositional generalization ([18], _comp-gen_ for short). For example, Schott et al. [65] study manually-created vision datasets where the true generating factors are known, and demonstrate that a wide variety of current representation learning methods struggle to learn the underlying mechanism. To achieve true "artificially intelligent" methods that can succeed at a variety of difficult tasks, it seems necessary to demonstrate compositional generalization. One contribution of this paper is to lay out a framework towards understanding and improving compositional generalization, and argue that most currently-common training methods fall short. In wondering how deep networks can learn to compositionally generalize, we might naturally ask: how did humans achieve such generalization? Or, as a particular case, how did human languages evolve components (typically, words) that can systematically combine to form new concepts? This has been a long-standing question in cognitive science and evolutionary linguistics. One promising hypothesis is known as _iterated learning_ (IL), a procedure simulating cultural language evolution [41]. Aspects of this proposal are supported by lab experiments [42], a Bayesian model [7], the behavior of neural networks in a simple emergent communication task [60], and real tasks like machine translation [50] and visual question answering [76]. To link the study in cognitive science and deep learning, we first analyze the necessary properties of representations in order to generalize well compositionally. By linking the compositionality and the Kolmogorov complexity, we find iteratively resetting and relearning the representations can introduce compressibility pressure to the representations, which is also the key to the success of iterated learning. 
To apply iterated learning in a general representation learning problem, we propose to split the network into a backbone and a task head, and discretize the representation at the end of the backbone using _simplicial embeddings_ (SEM, [45]). This scheme is more practical than the LSTM [34] encoders previously used for neural iterated learning [60]. We observe in various controlled vision domains that SEM-IL can enhance compositional generalization by aligning learned representations to ground-truth generating factors. The proposed method also enhances downstream performance on molecular graph property prediction tasks, where the generating process is less clear-cut.

## 2 Compositional Generalization

Generalization is a long-standing topic in machine learning. The traditional notion of (in-distribution) generalization assumes that training and test samples come from the same distribution, but this is insufficient for many tasks: we expect a well-trained model to generalize to some novel scenarios that are unseen during training. One version of this is _compositional generalization_ (comp-gen) [17], which requires the model to perform well on novel combinations of semantic concepts.

### 2.1 Data-generating assumption and problem definition

Any type of generalization requires some "shared rules" between training and test distributions. We hence assume a simple data-generating process that both training and test data samples obey. In Figure 1, the semantic generating factors, also known as latent variables, are divided into two groups: the task-relevant factors (or semantic generating factors) \(\textbf{G}=[G_{1},...,G_{m}]\), and task-irrelevant (or noise) factors **O**. This division depends on our understanding of the task; for example, if we only want to predict the digit identity of an image in the color-MNIST dataset [3], then \(m=1\) and \(G_{1}\) represents the digit identity. All the other generating factors such as color, stroke, angle, and possible noise are merged into **O**. If we want to predict a function that depends on both identity and color, e.g. identifying blue even numbers, we could have \(\textbf{G}=[G_{1},G_{2}]\) with \(G_{1}\) the identity and \(G_{2}\) the color.

Each input sample \(\textbf{x}\in\mathcal{X}\) is determined by a deterministic function \(\textsf{GenX}(\textbf{G},\textbf{O})\). The task label(s) \(\textbf{y}\in\mathcal{Y}\) only depend on the factors **G** and possible independent noise \(\epsilon\), according to the deterministic function \(\textsf{GenY}(\textbf{G},\epsilon)\). Note that \((\textbf{x},\textbf{O})\perp(\textbf{y},\epsilon)\mid\textbf{G}\), and that **O**, **G**, and \(\epsilon\) are independent. The data-generating distribution \(P(\textbf{x},\textbf{y})\) is determined by the latent distributions \(P(\textbf{G})\) and \(P(\textbf{O})\), along with \(\textsf{GenX}\) and \(\textsf{GenY}\). We assume \(\textsf{GenX}\) and \(\textsf{GenY}\) are fixed across environments (the "rules of production" are consistent), while \(P(\textbf{G})\) and \(P(\textbf{O})\) might change between training and test.2

Footnote 2: This differs from the classical setting of covariate shift: \(P(\textbf{y}\mid\textbf{x})\) might change due to the shift in \(P(\textbf{G})\).

For compositional generalization, we wish to model the problem of generalizing to new combinations of previously seen attributes: understanding "red circle" based on having seen "red square" and "blue circle."
Thus, we may assume that the supports of \(P(\textbf{G})\) are non-overlapping between train and test. (If this assumption is not true, it only makes the problem easier.) In summary, our goal is to find an algorithm \(\mathcal{A}\) such that, when trained on a dataset \(\mathcal{D}_{train}\sim P_{train}^{n}\), \(\mathcal{A}\) achieves small test risk \(\mathcal{R}_{P_{test}}(\mathcal{A}(\mathcal{D}_{train}))\). Here \(P_{train}\) and \(P_{test}\) should satisfy these conditions:

* \(P_{train}\) and \(P_{test}\) have **G**, **O**, \(\epsilon\) jointly independent, and \(\textbf{x}=\textsf{GenX}(\textbf{G},\textbf{O})\), \(\textbf{y}=\textsf{GenY}(\textbf{G},\epsilon)\).
* \(\textsf{GenX}\) and \(\textsf{GenY}\) are the same deterministic functions for \(P_{train}\) and \(P_{test}\).
* In challenging cases, we may have \(\textsf{supp}[P_{train}(\textbf{G})]\cap\textsf{supp}[P_{test}(\textbf{G})]=\emptyset\).

### 2.2 Representation Learning and Ladder of Compositionality

For compositional generalization, we expect that the model must extract atomic semantic features from the training data, and systematically re-combine them in a procedure akin to how the data is generated [41]. We thus consider a typical representation learning framework, which resembles the inverse of the data generation process (Figure 1(a), bottom). We use a backbone \(h:\mathcal{X}\to\mathcal{Z}\) to convert the input signal \(\mathbf{x}\) into a representation \(\mathbf{z}\), and a task head \(g:\mathcal{Z}\to\mathcal{Y}\) to solve the given task based on that representation \(\mathbf{z}\). The prediction of the model is \(\hat{\mathbf{y}}=(g\circ h)(\mathbf{x})\). Intuitively, we would like our learned \(\mathbf{z}\) to uncover the hidden \(\mathbf{G}\), and \(g(\mathbf{z})\) to recover \(\mathsf{GenY}(\mathbf{G},\epsilon)\). We thus analyze how the relationship between \(\mathbf{z}\) and \(\mathbf{G}\) influences the model's generalization capability, building on principles such as the information bottleneck [74]. Inspired by the "ladder of causation" [55], we propose a "ladder of compositionality" in Figure 1(b), which outlines a series of conditions on \(\mathbf{z}\) and \(\mathbf{G}\). We hypothesize that comp-gen roughly requires reaching the highest rung of that ladder:

**Hypothesis 1**.: _To generalize compositionally, the learned \(\mathbf{z}\) should capture exactly the information in \(\mathbf{G}\) and nothing more (\(\mathbf{G}\) to \(\mathbf{z}\) should be a bijection), and moreover it should preserve the "structure" of \(\mathbf{G}\) (i.e. the mapping from \(\mathbf{G}\) to \(\mathbf{z}\) should be an isomorphism)._

More on this hypothesis, the ladder, and the relationship to models of disentanglement [32] are discussed in Appendix A. In short, we find that a model trained using common learning methods relying on mutual information between input \(\mathbf{x}\) and supervision \(\mathbf{y}\) cannot reliably reach the final stage of the ladder; it is necessary to seek other inductive biases in order to generalize compositionally.

## 3 Compressibility pressure and Compositional mapping

From the analysis above, we need to find other inductive biases to obtain compositional mappings. Inspired by how compositionality emerges in human language,3 we speculate that the _compressibility_ pressure is the key. Note that this pressure does not refer to compressing information from \(\mathbf{x}\) to \(\mathbf{z}\) (as in Stage III), but to whether a mapping can be expressed in a compact way by reusing common rules.
In this section, we will first link compressibility pressure to Kolmogorov complexity by defining different mappings using group theory. As the Kolmogorov complexity is hard to compute, making explicit regularization difficult, we propose to implicitly regularize via _iterated learning_, a procedure from cognitive science proposed to increase compositionality in human-like language.

Footnote 3: Human languages are examples of compositional mapping [35]: words are composed of combinations of reusable mappings, and those words in turn are combined to form complex sentences following specific _stable rules_. These properties make our language unique among natural communication systems and enable humans to convey an open-ended set of messages in a compositional way [42]. Researchers in cognitive science and evolutionary linguistics have proposed many explanations for the origin of this property; one persuasive method for simulating it is iterated learning [41].

### 3.1 Compositional mappings have lower Kolmogorov complexity

From Occam's razor, we know efficient and effective mappings are more likely to capture the ground truth generating mechanism of the data, and hence generalize better. The efficiency is determined by how compressed the mapping is, which can also be measured by Kolmogorov complexity [71, 47].

Figure 1: Left: the data-generating assumption and a typical representation learning method (in red). We use the model \((g\circ h)(\mathbf{x})\) for downstream predictions. Right: the ladder of compositionality stating the requirements on \(\mathbf{z}\) using the entropy-related measurements; see Appendix A for more.

To build a link between compositionality and Kolmogorov complexity, we can first describe different bijections between **z** and **G** using group theory, and then use the description length to compare the complexity of a typical element. Specifically, assuming \(\mathbf{z}\in\mathcal{Z}\), \(\textbf{G}\in\mathcal{G}\) and \(|\mathcal{Z}|=|\mathcal{G}|\), the space of all bijections between **z** and **G** is isomorphic to the symmetric group \(S_{|\mathcal{G}|}\). If \(\textbf{G}=[G_{1},...,G_{m}]\) and each \(G_{j}\) has \(v\) different possible values, then \(|\mathcal{G}|=v^{m}\). For clarity in the analysis, we assume **z** also has the same shape. Then, any bijection between **z** and **G** can be represented by an element in \(S_{v^{m}}\).

The space of compositional mappings, which is a subset of all bijections, has more constraints. Recall how a compositional mapping is generated (see Appendix A.4 for more details): we first select \(z_{i}\) for each \(G_{j}\) in a non-overlapping way. Such a process can be represented by an element in \(S_{m}\). After that, we will assign different "words" for each \(z_{i}\), which can be represented by an element in \(S_{v}\). As we have \(m\) different \(z_{i}\), this procedure will be repeated \(m\) times. In summary, any compositional mapping can be represented by an element in the group \(S_{v}^{m}\rtimes S_{m}\leq S_{v^{m}}\), where \(\rtimes\) is the semidirect product in group theory. The cardinality of \(S_{v^{m}}\) is significantly larger than that of \(S_{v}^{m}\rtimes S_{m}\), and so a randomly selected bijection is unlikely to be compositional. Thus:

**Proposition 1** (Informal).: _For \(m,v\geq 2\), among all bijections, any compositional mapping has much lower Kolmogorov complexity than a typical non-compositional mapping._

We prove this by constructing descriptive protocols for each bijection.
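The cardinality gap driving this counting argument is easy to see numerically; the following small computation (ours, for illustration) compares \(|S_{v}^{m}\rtimes S_{m}|=(v!)^{m}\,m!\) with \(|S_{v^{m}}|=(v^{m})!\):

```python
from math import factorial

def n_compositional(m: int, v: int) -> int:
    # |S_v^m semidirect S_m| = (v!)^m * m!: one relabeling of the v values
    # per slot, plus a permutation matching slots z_i to factors G_j.
    return factorial(v) ** m * factorial(m)

def n_bijections(m: int, v: int) -> int:
    # |S_{v^m}| = (v^m)!: all bijections between z and G.
    return factorial(v ** m)

for m, v in [(2, 2), (2, 3), (3, 3)]:
    print(m, v, n_compositional(m, v), n_bijections(m, v))
# For m = v = 3: only 1296 compositional mappings among 27! ~ 1.1e28 bijections.
```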
As a compositional mapping has more _reused rules_, its description length can be smaller (see Appendix B.1 for more details).

### 3.2 Compressibility pressure is amplified in iterated learning

Now, our target is finding bijections with higher compositionality and lower Kolmogorov complexity, which are both non-trivial: the ground truth **G** is usually inaccessible, and the Kolmogorov complexity is hard to calculate. Fortunately, researchers find that human language also evolved to become more compositional without knowing **G**. The authors of [42] hypothesize that the _compressibility pressure_, which exists when an innocent agent (e.g., a child) learns from an informed agent (e.g., an adult), plays an important role. Such pressure is reinforced and amplified when the human community repeats this mode of learning over multiple generations.

However, the aforementioned hypothesis assumes that simplicity bias is inborn in the human cognition system. Will deep neural agents also have similar preferences during training? The answer is yes. By analyzing an overparameterized model on a simple supervised learning problem, we can rigorously prove that repeatedly introducing new agents to learn from the old agent (this informed agent then becomes the old agent for the next generation) can exert a non-trivial regularizing effect on the number of "active bases" of the learned mapping. Restricting the number of active bases encourages the model to reuse the learned rules. In other words, this regularization effect favors mappings with lower Kolmogorov complexity, which is exactly what we expect for compositional generalization. Due to space limits, we leave the formulation and proof of this result to Appendix B.2.

### 3.3 Complete the proposed solution

We thus expect that iteratively resetting and relearning can amplify the compressibility pressure, which helps us to reach the final rung of the ladder from the third. Before that, we need another pressure to reach the third rung (i.e., to ensure a bijection between **z** and **G**). _Expressivity pressure_, constraining the learned mapping to be capable enough to accomplish the downstream tasks, is what we need. The complete iterated learning hypothesis of Kirby et al. [42] claims that the compositional mapping emerges under the interaction between the compressibility pressure (i.e., efficiency) and the expressivity pressure (i.e., effectiveness).

Inspired by this, we propose to train a model in generations consisting of two phases. At the \(t\)-th generation, we first train the backbone \(h\) in an _imitation phase_, where a student \(h_{t}^{S}\) learns to imitate **z** sampled from a teacher \(h_{t}^{T}\). As analyzed above, iteratively doing so will amplify the compressibility pressure. Then, in the following _interaction phase_, the model \(g_{t}\circ h_{t}^{S}\) follows standard downstream training to predict **y**. The task head \(g_{t}\) is randomly initialized and fine-tuned together with the backbone in this phase. By accomplishing this phase, the expressivity pressure is introduced. The fine-tuned backbone \(h_{t}^{S}\) then becomes the teacher \(h_{t+1}^{T}\) for the next generation, and we repeat, as illustrated in Figure 2 and Algorithm 1.

Another problem with applying iterated learning to deep neural networks is how to create the discrete message, i.e., **z**. Discretization is not _necessary_: for example, the imitation phase could use \(L_{2}\) loss to match a student's continuous representations to the teacher's.
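The generational scheme just described can be sketched as follows (our paraphrase, with `make_backbone`, `make_head`, `imitation_step`, and `interaction_step` as hypothetical helpers; the paper's Algorithm 1 in the appendix is the authoritative version):

```python
import copy

def iterated_learning(data, n_generations, imit_steps, inter_steps):
    teacher = make_backbone()                  # h_1^T, randomly initialized
    for t in range(n_generations):
        student = make_backbone()              # fresh student h_t^S
        for _ in range(imit_steps):            # imitation phase: student
            x = data.sample_inputs()           #   reconstructs z sampled
            imitation_step(student, x, teacher(x))   # from the teacher
        head = make_head()                     # g_t, randomly initialized
        for _ in range(inter_steps):           # interaction phase: standard
            x, y = data.sample_batch()         #   downstream training of
            interaction_step(student, head, x, y)    # g_t o h_t^S
        teacher = copy.deepcopy(student)       # h_{t+1}^T <- fine-tuned h_t^S
    return student, head
```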
We find greatly improved performance with our discretization scheme, however, due to much-increased compressibility pressure. It is also possible [60] to use e.g. an LSTM encoder at the end of \(h(\mathbf{x})\) to produce discrete \(\mathbf{z}\), and an LSTM decoder at the start of \(g(\mathbf{z})\). The interaction phase is then not directly differentiable; though many estimator options exist [6; 39; 78], training tends to be difficult due to high bias and/or variance. Instead, we consider a simplicial embedding layer (SEM, [45]), which has proven effective on many self-supervised learning tasks. As illustrated in Figure 2(c), a dense representation \(\mathbf{h}\) (the output of the original backbone) is linearly transformed into \(m\) vectors \(z_{i}\in\mathbb{R}^{v}\). Then we apply a separate softmax with temperature \(\tau\) to each \(z_{i}\), yielding \(\bar{z}_{i}\) which are, if the temperature is not too high, approximately sparse; the \(\bar{z}_{i}\) are then concatenated into a long vector \(\mathbf{z}\). The overall process is
\[\bar{z}_{i}=\mathsf{Softmax}_{\tau}(z_{i})=\left[\frac{e^{z_{ij}/\tau}}{\sum_{k=1}^{v}e^{z_{ik}/\tau}}\right]_{j}\in\mathbb{R}^{v}\qquad\mathbf{z}=\begin{bmatrix}\bar{z}_{1}^{\top}&\ldots&\bar{z}_{m}^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{mv}. \tag{1}\]
By using an encoder with a final SEM layer, we obtain an approximately-sparse \(\mathbf{z}\) (a minimal code sketch is given below, after the list of compared algorithms). In the imitation phase, we generate discrete pseudo-labels by sampling from the categorical distribution defined by each \(\bar{z}_{i}\), then use cross-entropy loss so that the student is effectively doing multi-label classification to reconstruct the teacher's representations. In the interaction phase, the task head \(g\) operates directly on the long vector \(\mathbf{z}\). The full model \(g\circ h\) is differentiable, so we can use any standard task loss. Pseudocode for the proposed method, SEM-IL, is in the appendix (Algorithm 1).

## 4 Analysis on Controlled Vision Datasets

We will first verify the effectiveness of the proposed SEM-IL method on controlled vision datasets, where the ground truth \(\mathbf{G}\) is accessible. Thus, we can directly observe how \(\mathbf{z}\) gradually becomes more similar to \(\mathbf{G}\), and how the compressibility and expressivity pressures affect the training process. In this section, we consider a regression task on 3dShapes [9], where recovering and recombining the generating factors is necessary for systematic generalization. The detailed experimental settings and results on additional similar datasets, dSprites [52] and MPI3D-real [23], are given in Appendix C.

### 4.1 The Effectiveness of SEM-IL

Better comp-gen performance. We first show the effectiveness of the proposed method using results on 3dShapes, containing images of objects with various colors, sizes, and orientations against various backgrounds. Here \(\mathbf{G}\) numerically encodes floor hue, wall hue, object hue, and object scale into discrete values, and the goal is to recover a particular linear function of that \(\mathbf{G}\). (Results for a simple nonlinear function were comparable.) We compare five algorithms:

* Baseline: directly train a ResNet18 [31] on the downstream task.
* SEM-only: insert an SEM layer into the baseline model.
* IL-only: train a baseline model with Algorithm 1, using MSE loss during imitation.
* SEM-IL: train an SEM model with Algorithm 1.
* Given-G: train an SEM model to reproduce the true \(\mathbf{G}\) (which would not be known in practice), then fine-tune on the downstream task.
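As referenced above, a minimal PyTorch sketch of the SEM layer in eq. (1) (our illustration; the layer sizes and temperature are arbitrary choices, not the paper's):

```python
import torch
import torch.nn as nn

class SimplicialEmbedding(nn.Module):
    """Map a dense feature h to m approximately-sparse simplices, as in eq. (1)."""

    def __init__(self, d_in: int, m: int, v: int, tau: float = 0.5):
        super().__init__()
        self.m, self.v, self.tau = m, v, tau
        self.proj = nn.Linear(d_in, m * v)           # h -> m vectors z_i in R^v

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = self.proj(h).view(-1, self.m, self.v)
        z_bar = torch.softmax(z / self.tau, dim=-1)  # softmax per group of size v
        return z_bar.flatten(start_dim=1)            # concatenate to R^{m*v}

sem = SimplicialEmbedding(d_in=512, m=16, v=10)
z = sem(torch.randn(8, 512))   # shape (8, 160): 16 concatenated simplices
```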
Figure 2: An illustration of iterated learning and SEM layer design.

In the first panel of Figure 3, we see that the baseline and SEM-only models perform similarly on the training set; IL-based methods periodically increase in error at the beginning of each generation, but are eventually only slightly worse than the baselines on training data. On the test set, however, evaluating compositional generalization by using values of **G** which did not appear in training, SEM-IL brings significant improvement compared with other methods. Using only SEM or only IL gives no improvement over the baseline, however; it is only their combination which helps, as we will discuss further shortly. The (unrealistic) oracle method Given-G is unsurprisingly the best, since having **z** similar to **G** is indeed helpful for this task.

How **z** evolves during learning. To see if better generalization ability is indeed achieved by finding **z** that resembles the structure of **G**, we check their topological similarity4
\[\rho(\textbf{z},\textbf{G})\triangleq\mathsf{Corr}\left(d_{z}(\textbf{z}^{(i)},\textbf{z}^{(j)}),d_{G}(\textbf{G}^{(i)},\textbf{G}^{(j)})\right) \tag{2}\]
where \(d_{z}\) and \(d_{G}\) are distance metrics, \(\textbf{z}^{(i)}\) is the predicted representation of \(\textbf{x}^{(i)}\), and \(\textbf{G}^{(i)}\) are the corresponding ground-truth generating factors.

Footnote 4: This measure is also known as the distance correlation [72]; it is a special case of the Hilbert-Schmidt Independence Criterion (HSIC, [25]) for a particular choice of kernel based on \(d_{z}\) and \(d_{G}\) [66].

This measurement is widely applied to evaluate the compositionality of the mappings in cognitive science [8] and emergent communication [60]. Following existing works, we use the Hamming distance for **G** and discretized **z** in SEM-based methods, and cosine distance for continuous **z** in non-SEM methods. We expect \(h(\textbf{x})\) to map **x** with similar **G** to close **z**, and dissimilar **x** to distant **z**, so that \(\rho(\textbf{z},\textbf{G})\) will be high.

The third panel of Figure 3 shows that the SEM-only model quickly reaches a plateau after 200 epochs and then slowly decreases, while SEM-IL, after briefly stalling at the same point, continues to increase to a notably higher topological similarity. In the last panel, however, the IL-only method doesn't improve \(\rho\) over the baseline: it seems both parts are needed.

### 4.2 Discretized Representation is Beneficial for the Imitation Phase of IL

To explain why SEM and IL cooperate well, we need to look deeper into how the compressibility pressure influences the learning of representations. This pressure induced by iterated learning, which helps us to find mappings with lower Kolmogorov complexity, leads to representations that are more compositional and systematic [42]. However, in prior works, these mappings were only considered in conjunction with some discretized representation [54; 60]. While IL could be used with continuous representations during the imitation phase, similar to born-again networks [19], we found that our algorithm benefits a lot from the discretized representations.

To get a clear picture of why discretized representations are so important, we divide \(h(\textbf{x})\) into \(m\) sub-mappings \(h_{i}(\textbf{x})\), which map \(\textbf{x}\) to \(\bar{z}_{i}\in[0,1]^{v}\). We can understand each \(\bar{z}_{i}\) as a categorical distribution over \(v\) different possible values.
### Discretized Representation is Beneficial for the Imitation Phase of IL To explain why SEM and IL cooperate well, we need to look deeper into how the compressibility pressure influences the learning of representations. This pressure, induced by iterated learning, helps us find mappings with lower Kolmogorov complexity and leads to representations that are more compositional and systematic [42]. However, in prior works, these mappings were only considered in conjunction with some discretized representation [54; 60]. While IL could be used with continuous representations during the imitation phase, similar to born-again networks [19], we found that our algorithm benefits a lot from the discretized representations. To get a clear picture of why discretized representations are so important, we divide \(h(\textbf{x})\) into \(m\) sub-mappings \(h_{i}(\textbf{x})\), which map \(\textbf{x}\) to \(\bar{z}_{i}\in[0,1]^{v}\). We can understand each \(\bar{z}_{i}\) as a categorical distribution over \(v\) different possible values. As such, during training, the model learns discrete features of the dataset and assigns a confidence to each feature for every sample. The neural network will tend to learn simpler mappings more quickly [5; 24], and will assign higher confidence according to the mapping it has learned. In other words, if a mapping does not align well with **G**, it is more likely to yield idiosyncratic learned \(\bar{z}_{i}\), leading to low confidence for most samples. On the contrary, the \(\bar{z}_{i}\) belonging to compositional mappings will be more general, and on average tend towards higher confidence.

Figure 3: Left: compositional generalization performance on a regression task. Right: topological similarity for IL and non-IL methods. Note that the values of \(\rho\) in the two panels are not comparable, as the structure of **z** in the two settings (with or without SEM) is different.

The imitation phase reinforces this bias when the new student learns from the _sampled_ pseudo-labels \(\textbf{g}_{i}\) drawn from the teacher's prediction \(\bar{z}_{i}\). As such, confident predictions, which are more likely to belong to compositional mappings, will be learned faster (and will be harder to forget) by the student. On the contrary, for less confident features, where \(P(\bar{z}_{i}\mid\textbf{x})\) is flat, \(\textbf{g}_{i}\) could change across epochs. This makes it hard for the student to remember any related \((\textbf{x},\textbf{g}_{i})\). For example, a student will be reluctant to build a stable mapping between "red" and \(z_{1}\) if the teacher communicates \((\text{``red square''},\textbf{g}_{1}=0)\), \((\text{``red square''},\textbf{g}_{1}=1)\), and \((\text{``red square''},\textbf{g}_{1}=2)\) in three consecutive epochs. Furthermore, using the sampled pseudo-labels can help the student align the learned \((\textbf{x},\textbf{g}_{i})\) better. Assume that during training the student already remembers some pairs like \((\text{``blue circle''},\textbf{g}_{1}=0)\), \((\text{``blue square''},\textbf{g}_{1}=0)\), and \((\text{``blue star''},\textbf{g}_{1}=0)\), but the teacher is not confident in \((\text{``blue apple''},\textbf{g}_{1})\), perhaps because apples are rarely blue. Following the analysis above, as \(P(\bar{z}_{1}\mid\text{blue apple})\) is flat, the teacher may generate \(\textbf{g}_{1}\neq 0\) a significant portion of the time. However, if the teacher happens to generate \(\textbf{g}_{1}=0\) at some point, the student will learn \((\text{``blue apple''},\textbf{g}_{1}=0)\) faster than the alternatives with \(\textbf{g}_{1}\neq 0\), because it aligns well with the other information stored in the student network. The parameter updates caused by the learning of the other \((\text{``blue [shape]''},\textbf{g}_{1}=0)\) pairs will also promote the learning of \((\text{``blue apple''},\textbf{g}_{1}=0)\), similar to how "noisy" labels are fixed as described by [61]. To support the explanations above, we first observe the correlation between the teacher's confidence and the student's learning speed for each \((\textbf{x},\textbf{g}_{i})\). Specifically, for each **x**, the teacher makes \(m\) predictions with the corresponding categorical distributions \(\bar{z}_{i}\), \(i\in[m]\). For each \((\textbf{x},\bar{z}_{i})\), the confidence is measured by the negative logarithm of the teacher's predicted probability, \(-\log[\bar{z}_{i}]_{\hat{j}}\) where \(\hat{j}=\operatorname*{argmax}_{j}[\bar{z}_{i}]_{j}\) (so lower values mean higher confidence). 
The learning speed of \((\textbf{x},\textbf{g}_{i})\) is measured by the integral of the student's prediction over training time \(t\), i.e., \(\sum_{t\geq 0}[\hat{z}_{i}(t)]_{j}\), where \(j\) is the value provided by the teacher and \(\hat{z}_{i}(t)\) is the student's prediction at time \(t\). As illustrated in the first panel of Figure 4, the \((\textbf{x},\textbf{g}_{i})\) with higher confidence are usually learned faster by the student. We also provide the learning curves of \((\textbf{x},\textbf{g}_{i})\) with high/intermediate/low confidence (each with 10 samples) in the other three panels of the figure. The curves for high-confidence samples all converge to \([\hat{z}_{i}(t)]_{j}=1\), while those for low-confidence predictions can converge to a value less than 0.3, meaning that the student may end up making predictions that differ from the teacher's supervision. Highlighting in the scatter plot the \((\textbf{x},\textbf{g}_{i})\) pairs on which the student's final prediction is inconsistent with the teacher's, we find that they are all low-confidence samples. Another interesting observation from the high-confidence curves is that some \((\textbf{x},\textbf{g}_{i})\) pairs are not remembered by the student in the first generation: they emerge at some point and gradually dominate as training goes on. This phenomenon matches our analysis of how the sampled pseudo-labels help the student align \((\textbf{x},\textbf{g}_{i})\) with its existing knowledge. To further support this explanation, Appendix C shows that performance is substantially harmed by taking pseudo-labels from the \(\operatorname*{argmax}\) rather than sampling from \(\bar{z}_{i}\).

Figure 4: First panel: correlation between the teacher's confidence and the student's learning speed for each \((\textbf{x},\bar{z}_{i})\); \(\bar{z}_{i}\) is the prediction of the \(i\)-th attribute in the imitation phase. "Consistent" means the student makes the same prediction as the teacher. Other panels: learning curves of the student's predictions.

To recap, this subsection provided an explanation (along with some supporting evidence) for why the combination of SEM and IL is so important, based on the perspective of sample difficulty, which we believe to be a significant factor in the success of this algorithm. 
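Both quantities plotted in Figure 4 are cheap to instrument during training; a sketch under the definitions above (variable names are ours):

```python
import numpy as np

def teacher_confidence(z_bar_i):
    """Confidence of one teacher group z_bar_i (shape (v,)):
    negative log of its top probability; lower means more confident."""
    j_hat = int(np.argmax(z_bar_i))
    return -np.log(z_bar_i[j_hat])

def learning_speed(student_probs, j):
    """Integral over training time of the student's probability on the
    teacher-provided label j. student_probs: (T, v), one row per logged step."""
    return float(student_probs[:, j].sum())
```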
## 5 Application: Molecular Property Prediction Given the success in controlled vision examples, we now turn to a real problem where the true generative process is unknown. We focus on predicting the properties of molecular graphs, for several reasons. First, molecular graphs and their labels might follow a (chemical) procedure akin to that in Figure 1: for instance, one \(G_{i}\) might be the existence of a specific functional group, or the number of specific atoms. Different molecular properties could then be determined by different subsets of the \(G_{i}\), as desired in the compositional generalization problem. Furthermore, the generating mechanisms (GenX and GenY) should be consistent, being determined by nature. Second, benchmark datasets in this community contain various types of tasks (e.g., binary classification, multi-label classification, and regression) with similar input signals: performing well on different tasks will broaden the scope of our algorithm. Furthermore, the scaffold split used by most molecular datasets corresponds well to the compositional generalization setup we consider here. (We also try some more challenging splits, using structural information.) Last, learning meaningful representations that uncover the generating mechanisms of molecules is important, of practical significance, and difficult: it can potentially help predict the properties of unknown compounds, or accelerate the discovery of new compounds with specific properties, but scaling based on massive datasets, as in recent work on vision or language, seems more difficult. We hope our analysis can provide a new perspective on this problem. ### Improvement on the Downstream Performance We conduct experiments on three common molecular graph property datasets: ogbg-molhiv (1 binary classification task), ogbg-molpcba (128 binary classification tasks), and PCQM4Mv2 (1 regression task); all three come from the Open Graph Benchmark [37]. We choose two types of backbones, standard GCN [40] and GIN [80]. For the baseline experiments, we use the default hyperparameters from [37]. As the linear transform added in SEM-based methods gives the model more parameters, we consider "baseline+" to make a fair comparison: this model has an additional embedding layer, but no softmax operation. Detailed information on these datasets, backbone models, and hyperparameters is provided in Appendix D. From Table 1, we see that the SEM-IL method almost always gives the best performance. Unlike in the controlled vision experiments (Figure 3), however, SEM alone can bring significant improvements in this setting. We speculate that the compressibility pressure might be more significant in the interaction phase (i.e., standard training) when the generating mechanism is complex. This suggests it may be possible to develop a more efficient algorithm that better imposes the compressibility and expressivity pressures at the same time.

| Model | Algorithm | molhiv AUROC valid-full | test-full | valid-half | test-half | molpcba AP valid-full | test-full | valid-half | test-half | PCQM4Mv2 valid |
|---|---|---|---|---|---|---|---|---|---|---|
| GCN | Baseline | 82.41±1.14 | 76.25±0.38 | 75.65±0.91 | 72.11±1.36 | 21.44±0.25 | 22.13±0.46 | 20.13±0.88 | 20.78±0.62 | 0.125±0.002 |
| GCN | Baseline+ | 81.61±0.63 | 75.85±0.70 | 73.23±0.47 | 27.11±0.22 | 23.31±0.34 | 22.68±0.30 | 20.11±0.45 | 20.06±0.37 | 0.118±0.004 |
| GCN | SEM-only | 84.00±0.10 | 78.40±0.67 | 78.44±1.57 | 78.21±2.32 | 22.39±0.46 | 25.90±0.71 | 27.91±0.29 | 21.20±0.12 | 0.06±0.002 |
| GCN | SEM-IL | 84.59±0.65 | 79.79±0.67 | 78.48±0.67 | 74.20±0.78 | 28.51±0.72 | 27.15±0.74 | 22.59±0.84 | 21.50±0.81 | 0.102±0.005 |
| GIN | Baseline | 81.76±0.79 | 76.91±0.42 | 76.55±0.41 | 76.33±2.21 | 22.19±0.32 | 22.64±0.29 | 22.52±0.95 | 20.52±0.42 | 0.109±0.003 |
| GIN | Baseline+ | 81.55±0.72 | 77.01±0.94 | 77.47±1.62 | 69.75±0.31 | 23.83±0.29 | 22.91±0.40 | 22.14±0.12 | 20.19±0.27 | 0.108±0.003 |
| GIN | SEM-only | 83.05±0.90 | 78.21±0.78 | 76.29±0.26 | 72.70±0.49 | 26.01±0.52 | 25.66±0.47 | 22.26±0.39 | 21.50±0.48 | 0.106±0.004 |
| GIN | SEM-IL | 83.31±1.51 | 78.61±0.73 | 78.06±1.24 | 72.89±0.48 | 29.30±0.48 | 28.02±0.61 | 24.41±0.47 | 23.39±0.77 | 0.098±0.005 |

Table 1: Downstream performance on different tasks. The AUROC (molhiv) and average precision (molpcba) numbers are in percent form. For PCQM, we report the validation performance, as the test set is private and inaccessible. Means and standard deviations over 5 seeds are given. Valid/test-full means the standard train/val/test split provided by the dataset. Valid/test-half means we train the model on the half of the training data which is _less similar_ to the validation and test sets. See Appendix D for more.

### Probing Learned \(\mathbf{z}\) by Meaningful Structures In the controlled vision examples, we saw that SEM-IL not only enhances the downstream performance, but also provides \(\mathbf{z}\) more similar to the ground-truth generating factors, as seen in the improvement in topological similarity. However, as the generating mechanism is usually inaccessible in real problems, we indirectly measure the quality of \(\mathbf{z}\) using graph probing [2]. Specifically, we first extract some meaningful substructures of a molecule using domain knowledge. For example, we can conclude whether a benzene ring exists in \(\mathbf{x}\) by directly observing its 2D structure. With the help of the RDKit tool [44], we can generate a sequence of such labels for each \(\mathbf{x}\), usually known as the "fingerprint" of the molecule (denoted \(\mathsf{FP}(\mathbf{x})\in\{0,1\}^{k}\), indicating whether each specific structure exists in \(\mathbf{x}\)). 
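A fingerprint of this kind can be produced with a few lines of RDKit. The substructure list below is a hypothetical example for illustration; the paper derives its own set from domain knowledge.

```python
from rdkit import Chem

# Hypothetical substructure set; the paper's actual list is domain-specific.
SMARTS = {
    "benzene_ring": "c1ccccc1",
    "carboxylic_acid": "C(=O)O",
    "amine": "[NX3;H2,H1;!$(NC=O)]",   # primary/secondary amine, not amide
}
PATTERNS = {name: Chem.MolFromSmarts(s) for name, s in SMARTS.items()}

def fingerprint(smiles: str):
    """FP(x) in {0,1}^k: does each named substructure occur in the molecule?"""
    mol = Chem.MolFromSmiles(smiles)
    return [int(mol.HasSubstructMatch(p)) for p in PATTERNS.values()]

print(fingerprint("OC(=O)c1ccccc1"))  # benzoic acid -> [1, 1, 0]
```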
Then, we add a linear head on top of the fixed \(\mathbf{z}\) and train it using a generated training set \((\mathbf{x},\mathsf{FP}(\mathbf{x})),\mathbf{x}\sim\mathcal{D}_{train}\), and compare the generalization performance on the generated test set \((\mathbf{x},\mathsf{FP}(\mathbf{x})),\mathbf{x}\sim\mathcal{D}_{test}\). For a fair comparison, we set \(m=30\) and \(v=10\) to make \(\mathbf{z}\) and \(\mathbf{h}\) the same width, excluding the influence of the linear head's capacity. In the experiments, we use the validation split of molhiv as \(\mathcal{D}_{train}\) and the test split as \(\mathcal{D}_{test}\), each of which contains 4,113 distinct molecules unseen during the training of \(\mathbf{z}\). 
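The probing step itself is a standard linear probe on the frozen \(\mathbf{z}\). A sketch with scikit-learn, fitting one independent logistic-regression head per fingerprint bit (a simplification of the single linear head described above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def probe_auroc(z_train, fp_train, z_test, fp_test):
    """Fit a linear probe per fingerprint bit on frozen z; return per-bit AUROC."""
    scores = []
    for k in range(fp_train.shape[1]):
        clf = LogisticRegression(max_iter=1000).fit(z_train, fp_train[:, k])
        prob = clf.predict_proba(z_test)[:, 1]
        scores.append(roc_auc_score(fp_test[:, k], prob))
    return np.array(scores)
```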
The generalization performance on ten different substructures is reported in Table 2. The first block (first two rows) of the table demonstrates the performance of the two types of models before training. They behave similarly across all tasks and give a higher AUROC than a random guess. Then, comparing the three algorithms in each block, we see that SEM-based methods consistently outperform the baseline, which supports our hypothesis well. SEM-IL outperforms SEM-only on average, but not for every task; this may be because some structures are more important to the downstream task than others. Comparing the results across the four blocks, we find that the task in the interaction phase also influences the quality of \(\mathbf{z}\): the \(\mathbf{z}\) trained on molpcba is much better than that trained on molhiv. To figure out where this improvement comes from, we first use only 10% of the training samples in molpcba to make the training-set sizes similar, then make the supervisory signal more similar by using only one task from molpcba. As illustrated in the last two blocks of the table, we can conclude that the complexity of the task in the interaction phase, which introduces the expressivity pressure, plays the more important role in finding better \(\mathbf{z}\). Based on this observation, we could improve SEM-IL by applying more complex interaction tasks. For example, existing works on iterated learning use a referential game or a reconstruction task in the interaction phase, which could introduce stronger expressivity pressure from a different perspective. Furthermore, [45] demonstrates that SEM works well with most contrastive learning tasks. We hope the fundamental analysis provided in this paper can shed light on why SEM and IL collaborate so well, and also inspire more efficient and effective algorithms in the future. ## 6 Related Works **Iterated Learning and its Applications.** Iterated learning (IL) is a procedure that simulates cultural language evolution to explain how the compositionality of human language emerges [41]. In IL, the knowledge (i.e., the mapping between an input sample and its representation) is transferred between generations, during which compositional mappings gradually emerge and dominate under the interaction between compressibility and expressivity pressures. Inspired by this principle, there are successful applications in symbolic games [60], visual question answering [76], machine translation [50], multi-label learning [58], reinforcement learning [54], etc. There are also many algorithms that train a neural network for multiple generations, which could support the principles proposed in iterated learning. For example, [19] proposes to iteratively distill the downstream logits from the model in the previous generation, finally ensembling all the models to achieve better performance on an image classification task; this can be considered an IL algorithm that merges the imitation and interaction phases together. [82] proposes to re-initialize the later layers of a network and re-train the model for multiple generations, which is similar to an IL algorithm that only re-initializes the task head. [54] extends such reset-and-relearn training to reinforcement learning and shows that resetting brings benefits that cannot be achieved by other regularization methods such as dropout or weight decay. 
In the era of large language models, self-refinement in-context learning [51] and self-training-based reinforcement learning [26] can also benefit from iteratively learning from the signals generated by agents in the previous generation. We leave the discussion and analysis of these more complex real systems to future work.

Table 2: AUROC for graph probing based on different \(\mathbf{z}\); random guessing would be \(\approx 0.5\).

**Knowledge Distillation and Discrete Bottleneck.** Broadly speaking, the imitation phase of SEM-IL, which requires the student network to learn from the teacher, can be considered a knowledge distillation method [33]. Different from the usual setting, where the student learns from the teacher's prediction on a downstream task, we assume a data-generating mechanism and create a simplex space for the generating factors. By learning from the teacher in this space, we believe the compressibility pressure is stronger and more beneficial for compositional generalization ability. For the discretization, there are also other possible approaches: e.g., [28] uses an LSTM to create a discrete message space, and [48] proposes a method using a vector-quantized bottleneck [75]. 
We choose SEM [45] for its simplicity and universality: it is easy to insert into a model for different tasks. Besides, SEM has proven effective on self-supervised learning tasks; we extend it to classification, regression, and multi-label tasks. **Compressibility, Learning Dynamics, and Kolmogorov Complexity.** Recently, with the success of large language models, the relationship between compressibility and generalization ability has gradually attracted more attention [12]. The authors of [57] propose that how well a model compresses corresponds to the integral of its training loss curve when the negative log-likelihood loss is used. Although this claim assumes the model sees each training sample only once, which might not be consistent with the multi-epoch training discussed in this paper, the principle behind this claim and our analysis are quite consistent: the mappings that generalize better are usually learned faster by the model. Furthermore, the authors of [71] link generalization ability to Kolmogorov complexity. Our analysis in Appendix B also supports this claim well. Hence we believe the evolution of the human cognition system can provide valuable insights into deep learning systems. **Graph Representation Learning.** Chemistry and molecular modeling have been among the main drivers of neural graph representation learning since its emergence [21], and of graph neural networks in particular. The first theoretical and practical advancements [27; 40; 80] in the GNN literature were mostly motivated by molecular use cases. Furthermore, many standard graph benchmarks [15; 16; 37] include molecular tasks at the node, edge, and graph levels, e.g., graph regression in ZINC and PCQM4Mv2, or molecular property prediction in the ogbg-molhiv and ogbg-molpcba datasets. Graph Transformers [14; 43; 59] exhibit significant gains over GNNs in molecular prediction tasks. Self-supervised learning (SSL) on graphs is particularly prominent in the molecular domain, highlighted by the works of GNN PreTrain [38], BGRL [73], and Noisy Nodes [22]. We will extend the proposed method to different models and different pretraining strategies in future work. ## 7 Conclusion In this paper, we first define the compositional generalization problem by assuming that the samples in the training and test sets share the same generating mechanism while the generating factors of the two sets can have different distributions. Then, by proposing the compositionality ladder, we analyze the desired properties of the representations. By linking compositionality, compressibility, and Kolmogorov complexity together, we find that iterated learning, which is well studied in cognitive science, is beneficial for our problem. To apply iterated learning appropriately, we attach an SEM layer to the backbone model to discretize the representations. On datasets where the true generating factors are accessible, we show that the representations learned by SEM-IL better portray the generating factors and hence lead to better test performance. We then extend the proposed algorithm to molecular property prediction tasks and find that it improves generalization ability. The main drawback of the current solution is the time-consuming training: we must run multiple generations, and some common features might be re-learned multiple times, which is inefficient. Hence a more efficient way of imposing compressibility is desired. 
Overall, though, our analysis and experiments show the potential of the SEM-IL framework on compositional generalization problems. We believe a better understanding of where the compressibility bias comes from in the context of deep learning can inspire more efficient and non-trivial IL framework designs. Clearly defining the compositional generalization problem and finding more related practical applications can also promote the development of IL-related algorithms.
2304.02868
Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions
Large language models (LLMs) such as ChatGPT and GPT-4 have recently demonstrated their remarkable ability to communicate with human users. In this technical report, we take the initiative to investigate their capacities for playing text games, in which a player has to understand the environment and respond to situations by having dialogues with the game world. Our experiments show that ChatGPT performs competitively compared to all the existing systems but still exhibits a low level of intelligence. Specifically, ChatGPT cannot construct the world model by playing the game or even by reading the game manual; it may fail to leverage the world knowledge that it already has; and it cannot infer the goal of each step as the game progresses. Our results open up new research questions at the intersection of artificial intelligence, machine learning, and natural language processing.
Chen Feng Tsai, Xiaochen Zhou, Sierra S. Liu, Jing Li, Mo Yu, Hongyuan Mei
2023-04-06T05:01:28Z
http://arxiv.org/abs/2304.02868v1
# Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions ###### Abstract Large language models (LLMs) such as ChatGPT and GPT-4 have recently demonstrated their remarkable ability to communicate with human users. In this technical report, we take the initiative to investigate their capacities for playing text games, in which a player has to understand the environment and respond to situations by having dialogues with the game world. Our experiments show that ChatGPT performs competitively compared to all the existing systems but still exhibits a low level of intelligence. Specifically, ChatGPT cannot construct the world model by playing the game or even by reading the game manual; it may fail to leverage the world knowledge that it already has; and it cannot infer the goal of each step as the game progresses. Our results open up new research questions at the intersection of artificial intelligence, machine learning, and natural language processing. ## 1 Motivation: The Role of Games in AI and The Rise of LLMs Games are a microcosm of human life. Both involve setting goals, making decisions, overcoming challenges, and interacting with the world. This makes games an ideal touchstone for research progress in artificial intelligence (AI): by comparing AI systems with human players in games, researchers are able to evaluate the capabilities of these systems in a meaningful way. Throughout the history of AI, many significant moments have been tied to games. In 1997, IBM's Deep Blue beat the world chess champion Garry Kasparov, marking the first time that a human world champion had lost a match to a computer under standard time controls (Campbell et al., 2002). In 2016, AlphaGo of Google DeepMind defeated Lee Sedol, marking the first time that a computer had beaten a 9-dan professional player without handicap (Silver et al., 2016). In 2017, DeepStack and Libratus defeated professional players in heads-up, no-limit Texas Hold'em (Moravcik et al., 2017; Brown and Sandholm, 2018). In 2019, OpenAI Five became the first AI to beat the world champions in Dota 2, and DeepMind's AlphaStar beat world-class players in StarCraft II; both Dota 2 and StarCraft II are extremely complex real-time strategy games (Berner et al., 2019; Arulkumaran et al., 2019). Recently, large language models (such as ChatGPT and GPT-4 developed by OpenAI) have demonstrated their impressive abilities to understand and respond to complex human language queries. They have also sparked a debate in the research community: some people regard the rise of LLMs as "a significant step towards artificial general intelligence (AGI)"; some argue that they are just "stochastic parrots" (Bender and Koller, 2020; Bender et al., 2021); some criticize that LLMs are an "off-ramp on the highway towards human-level intelligence". This has inspired us to join the effort of evaluating LLMs and exploring their limitations (OpenAI, 2023; Bubeck et al., 2023). Particularly, we test LLMs in playing text games. A text game is a computer-simulated environment in which players use text commands to control characters and interact with the game world. It is also known as interactive fiction or text adventure. By situating an LLM in a text game, we are able to investigate its level of intelligence in a controlled environment. This follows the classical practice of the AI community in testing groundbreaking AI systems, but has not been done by any other work assessing LLMs. 
## 2 A Case Study: ChatGPT Plays Zork Now we present our case study on ChatGPT playing Zork I. ChatGPT is a language model developed by OpenAI that can generate contextually appropriate responses to a wide range of natural language queries. It is competent in a variety of tasks and has been integrated into a diversity of applications. Zork I was released in the 1970s (followed by Zork II and Zork III). In Zork I, the character follows natural language commands to search through the Great Underground Empire for treasure: it moves between locations and interacts with objects; the game program acts as a narrator, describing the player's location and the consequences of the actions. Throughout the report, Zork refers to Zork I. ChatGPT plays Zork with the help of a human player. The human player acts as a communication channel between ChatGPT and Zork2: at each step, the human player passes to ChatGPT the current game state described in natural language by the game world, asks ChatGPT to take a legal action, and then feeds ChatGPT's action back to the game. Below we show the opening of this process: what's given by the game has a light purple background; what ChatGPT says is in a light green background; anything else has no background; our comments (i.e., not read or said by ChatGPT) are in gray. Footnote 2: We use the Zork implementation in the Jericho game suite developed by Hausknecht et al. (2020). ``` Imagine you are a player in Zork and trying to win the game. You receive this message: West of House This is an open field west of a white house, with a boarded front door. There is a small mailbox here. A rubber mat saying 'Welcome to Zork!' lies by the door. You have four action options ["open mailbox", "north", "south", "west"]. Select "one" action from the above options. ``` // At each step, the valid actions are provided by Jericho. ChatGPT often refuses to suggest an action if not given valid actions. // ChatGPT often gives more than one action even if it is asked to give only one. We found it helpful to put quotation marks around "one" to emphasize the quantity. // When ChatGPT gives more than one action, we select the first action mentioned in its response, assuming that ChatGPT favors it over the others. I suggest selecting the "open mailbox" action. This is because exploring the environment and interacting with objects is often necessary to progress in adventure games like Zork. By opening the mailbox, you may find important clues or items that could be helpful later in the game. Additionally, it is a low-risk action that may not have any negative consequences. Once you have opened the mailbox, you can then consider your next move based on what you find. Chat 1: Opening of ChatGPT playing Zork. ChatGPT's response in Chat 1 is perfect: it selects the correct action and gives a convincing justification. ChatGPT seems to be an expert at text games. Did it already know how to play Zork? After all, Zork is perhaps the most famous text game, and ChatGPT may have read a lot of text about it during its pretraining. By checking with ChatGPT, we found that it indeed already knows Zork: Please describe the text game "Zork I". "Zork" is a classic text-based adventure game that was released in 1980 by Infocom. It is set in a fantasy world and is played entirely through text commands entered by the player. The player takes on the role of an adventurer exploring the ruins of an ancient empire. The goal is to collect treasures and solve puzzles to ultimately defeat the evil wizard who has taken control of the empire. The game is played by typing in text commands such as "go north," "pick up key," or "open door." Chat 2: ChatGPT's description of Zork. 
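The relay protocol above is mechanical, so in principle it can be scripted. A rough skeleton using the Jericho game suite follows; the `ask_chatgpt` stub stands in for the human who copies the prompt to ChatGPT and relays the chosen action back. This is our illustration, not the authors' actual harness.

```python
from jericho import FrotzEnv

def ask_chatgpt(prompt: str) -> str:
    """Placeholder: in the experiments, a human relays the prompt to ChatGPT
    and returns the first action mentioned in its response."""
    raise NotImplementedError

env = FrotzEnv("zork1.z5")  # path to the Zork I story file
obs, info = env.reset()
done = False
while not done:
    actions = env.get_valid_actions()  # Jericho enumerates legal actions
    prompt = (f"{obs}\nYou have these valid action options {actions}. "
              'Select "one" action from the above options.')
    action = ask_chatgpt(prompt)
    obs, reward, done, info = env.step(action)
    print(f"score={info['score']} moves={info['moves']}")
```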
up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up," "pick up," "pick up,"pick up," "pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up," "pick up," "pick up,"pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"pick up," "pick up," "pick up," "pick up,"pick up," "pick up," "pick up," "pick up,","pick up," "pick up," "pick up," "pick up,"pick up," "pick up,"pick up," "pick up,"pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,","pick up," "pick up," "pick up,", "pick up,"pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,", "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,", "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up,", "pick up," "pick up,"," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," up," "pick up," "pick up," "pick up," up," "pick up," "pick up," "pick up," up," "pick up," "pick up," up," "pick up," "pick up," up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up," up," "pick up," "pick up," "pick up,"," "pick up,"," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up,"," "pick up," "pick up,", "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up,"," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up," "pick up," up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up,"," "pick up," "pick up,"," "pick up,"," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up," "pick up Zork game map containing the different locations and the orientation information between the locations. I'm sorry, but as an AI language model, I don't have the capability to create a visual map. However, I can certainly assist you with your gameplay and keep track of the information you provide. 
Please go ahead and start playing Zork, and feel free to let me know what actions you take and what responses you receive from the game. I will do my best to help you navigate through the game and keep track of the information you provide. Game start: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. Great, you have started the game in the classic location, west of the house. Here is what you can do: I did open mailbox. Zork replied: Opening the small mailbox reveals a leaflet. Great, here's the updated information: You have opened the mailbox and found a leaflet inside. What would you like to do next? \(\vdots\) // a few steps later... I did go east. Zork replied: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar. Great, here's the updated information: \(\vdots\) Based on the information you provided, entering the house would take you to the Kitchen. Chat 5: ChatGPT correctly answers an easy question about the map. This question involves only one step, and ChatGPT correctly answers it. Let's try something harder: Based on the information I gave you, the current location is "Behind House"; when you enter the house and then you go west, where will it go? // This answer is wrong. The correct answer is the Living Room. // Please check Figure 1. // ChatGPT hallucinates a living room before reaching there. If you enter the house and go west from the living room, you will enter the trophy room. Chat 6: ChatGPT fails to give the correct answer to a harder question. The performance of ChatGPT is in Table 1. Overall, ChatGPT is far from perfect, and it does worse on the questions involving more steps. For the positions and moving directions that it has seen in the walkthrough, ChatGPT seems to perform decently, achieving a 75% accuracy on one-step questions. For the unseen combinations of positions and directions, it performs poorly. Interestingly, the unseen cases were constructed by simply reversing the paths of the seen cases. This clearly demonstrates that ChatGPT hasn't learned a world model and merely relies on superficial memorization. Our second set of questions is concerned with ChatGPT's navigation abilities, and further verifies that ChatGPT hasn't learned a world model. Each question brings up two locations that are connected and asks ChatGPT to find out how to reach the destination from the start. Here is an example: Based on the information I gave you, what one step do I have to take to reach "The Troll Room" from "Cellar"? // This answer is correct. Please check Figure 1. To reach "The Troll Room" from "Cellar", you need to go north.

Figure 1: We drew this map after reading the first 70 steps of the correct walkthrough.

A goal can be the ultimate objective of the game; it can also be something intermediate like "we should find the entrance to the next level of the game". We humans are capable of inferring goals in games. In particular, as we played the game of Zork, we quickly figured out that we are supposed to collect as many treasures as we can; when we were stuck, we quickly realized that we should check unexplored areas; when we reached a chasm, we started to look for items that could help us cross it. It is the goals that guide us to combine fragmented information into reasons and to compose low-level actions into a plan. We believe that our abilities to infer goals are acquired through real-life experience. 
LLMs have a lot of "reading" experience, so it seems natural to investigate whether they have already acquired similar abilities. Recall that ChatGPT was able to say that the goal of Zork is to "collect treasures and solve puzzles" in Chat 2. That may be merely a memorization from its reading experience. So we need to perform a dynamic evaluation: as the game progresses,4 we repeatedly ask ChatGPT what it thinks the current goal is and see if it can say anything meaningful at any point. Here is an example: Footnote 4: Similar to what we did in section 3, we feed the walkthrough to ChatGPT one step after another. ``` I am playing a text game. Given the following game-play process: \(\vdots\) I did: enter house Kitchen You are in the kitchen of the white house. A table seems to have been used recently for the preparation of food. A passage leads to the west and a dark staircase can be seen leading upward. A dark chimney leads down and to the east is a small window which is open. On the table is an elongated brown sack, smelling of hot peppers. A bottle is sitting on the table. The glass bottle contains: A quantity of water I did: go west Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. A battery-powered brass lantern is on the trophy case. ``` Based on the current game situation, what is the next goal you want to achieve? Only give me one goal in your answer. The next goal would be to examine the elvish sword and possibly obtain the lantern on the trophy case. Chat 9: ChatGPT tends to guess low-level actions but not high-level goals. Apparently, ChatGPT thinks that a "goal" is just the "next action": of course, it should "examine the elvish sword", but the higher-level reason for this action is to explore the house and examine as many objects as possible; that reason is the goal. Unfortunately, meaningful inference is rare: among the 70 steps that we tried, there were only 17 times that ChatGPT could give a good inference. E.g., ``` I am playing a text game. Given the following game-play process: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. I did: open mailbox Opening the small mailbox reveals a leaflet. I did: read leaflet (Taken) "WELCOME TO ZORK! ZORK is a game of adventure, danger, and low cunning. In it you will explore some of the most amazing territory ever seen by mortals. No computer should be without one!" I did: drop leaflet Dropped. I did: go south South of House You are facing the south side of a white house. There is no door here, and all the windows are boarded. ``` 
Following the Canyon upstream to the north, Aragain Falls may be seen, complete with rainbow. The mighty Frigid River flows out from a great dark cavern. To the west and south can be seen an immense forest, stretching for miles around. A path leads northwest. It is possible to climb down into the canyon from here. I did: go down Rocky Ledge You are on a ledge about halfway up the wall of the river canyon. You can see from here that the main flow from Aragain Falls twists along a passage which it is impossible for you to enter. Below you is the canyon bottom. Above you is more cliff, which appears climbable. Based on the current game situation, what would be your goal of continuing playing the game? Only give me one goal in your answer. The next goal would be to climb up the cliff to reach the top of the river canyon. That 11: ChatGPT wants to explore a location that it has already visited It seems hilariously silly that ChatGPT wants to climb up without even looking around immediately after it comes down the cliff. It is perhaps because ChatGPT gives an overly large attention to "more cliff, which appears climbable" and experiences a strong tendency to say "climb up"; after all, it is a language model that stochastically says things based on its context. We would like to note that ChatGPT does have some memorization (including the ultimate goal) about Zork (see Chat 1). So we still believe that ChatGPT would be able to correctly infer goals if it could appropriately align what it has experienced in the game with what it memorizes about the game. That being said, in the games that ChatGPT has not known, such inference will be significantly harder; we leave this investigation to the future. ## 5 Question III: Is Zork a Good Testbed? Now our case study on Zork has demonstrated that ChatGPT is not very capable of learning world models and developing goals. Then it seems natural to use Zork--or more broadly, text games--as a testbed for evaluating such abilities of LLMs and tracking their advancement (as their sizes and capacities increase over time). There could be multiple possible evaluation methods. For example, one may follow our procedures in sections 3-4, feeding the ground-truth walkthrough to an LLM and asking it questions about mapping and navigation. Another natural way is to evaluate the LLM's performance in playing the game; in that case, we would like to see the current best LLMs performing poorly, leaving a large room for the future models to improve. Therefore, we evaluate ChatGPT on Zork and compare it with existing state-of-the-art systems in this section. ### ChatGPT vs. SOTA Systems Now we let ChatGPT play Zork following the simple communication protocol in Chat 1. This protocol provides a standardized and consistent prompt format, reducing the ambiguity in communication and improving the precision of ChatGPT's responses. It also helps automate this process in the future. The full chat history can be found at Appendix A. ChatGPT performs poorly: as shown in Table 3, it could only collect a score of 10, on par with NAIL (Hausknecht et al., 2019). NAIL has not been trained on any interactions with the Zork game world; it merely employs a set of manually-crafted generic heuristics for playing text games. Both ChatGPT and NAIL fall far behind the state-of-the-art systems that have been trained on Zork by (stochastically) playing it thousands of times. Those systems are DRRN He et al. (2016), KG-A2C Ammanabrolu & Hausknecht (2020), and RC-DQN Guo et al. (2020). 
Interestingly, we found that ChatGPT often forgot its previous actions and their consequences. It seems that ChatGPT perceives itself as an assistant but not a real player, so it becomes passive. Therefore, we made a minor modification to our protocol: we reminded ChatGPT of its previous actions and current achievements. We hope that ChatGPT could make more informed decisions after this modification. Below is a step following the new protocol: ``` //Note:thispartisnew. TotalScore5Moves14 Youselected"north"pleaserememberwhatyouselected. Zorksaid: ``` \begin{table} \begin{tabular}{l||c} \hline \hline \multicolumn{1}{c}{No in-game training} \\ ChatGPT & 10.0 \\ + prey action & 15.0 \\ ChatGPT with intervention & 35.0 \\ + prey action & **40.0** \\ NAIL & 10.3 \\ \hline \multicolumn{3}{c}{extensive in-game training} \\ DRRN & 32.6 \\ RC-DQN & 34.0 \\ KG-A2C & **38.8** \\ \hline \hline \end{tabular} \end{table} Table 3: ChatGPT vs. SOTA systems in playing Zork. Clearing You are in a clearing, with a forest surrounding you on all sides.A path leads south. On the ground is a pile of leaves. Now you have these valid action options ["take leaves:, "pull leaves", " south", "west", "east"] Select "one" action from the above options. ``` Chat12: A step following the new protocol This indeed improved the performance of ChatGPT: as shown in Table 3, ChatGPT achieved a score of 15, surpassing NAIL. But it still performs significantly worse the other systems. ### ChatGPT Under Human Guidance We also found that ChatGPT occasionally refuses to pick a valid action but starts to say nonsense. Here is an example: ``` You selected "up" please remember what you selected. Zork said: You have moved into a dark place. It is pitch black. You are likely to be eaten by a grue. Now you have these valid action options: ["down", "north", "open bottle", "open sack", "put down bottle", "put down leaves", "put down sack", "put down all"]. Select "one" action from the above options. ``` It seems like you have entered a dangerous area, so I recommend selecting the "down" action to return to the kitchen before proceeding. Once you are back in the kitchen, you can explore other parts of the house or use the items you have collected to solve puzzles or overcome obstacles. You selected "down" please remember what you selected. Zork said: Kitchen Now you have these valid action options: ["jump", "east", "open bottle", "open sack", "up", "close window", "put down bottle", "put down leaves", "put down sack", "put down all", "put bottle on table", "put leaves on table", "put sack on table", "put water on table", "put all on table", " west"]. Select "one" action from the above option. ``` The ChatGPT starts to refuse. It seems there is no valid move option to select after the previous message. Could you please provide a new action option? The event and insist. You selected "down" please remember what you selected. Zork said: Kitchen Now you have these valid action options: ["jump", "east", "open bottle", "open sack", "up", "close window", "put down bottle", "put down leaves", "put down sack", "put down all", "put bottle on table", "put leaves on table", "put sack on table", "put water on table", "put all on table", " west"]. Select "one" action from the above option. ``` The event and insist are the same as the event and insist. The event and insist are the same as the event and insist. I'm sorry, but it seems like the last action you selected was already "down" and you are already in the Kitchen location. 
Please choose another valid action option from the list provided. // We try again to push ChatGPT to choose. There are other directions besides "up" in the options ["jump", "east", "open bottle", "open sack", "up", "close window", "put down bottle", "put down leaves", "put down sack", "put down all", "put bottle on table", "put leaves on table", "put sack on table", "put water on table", "put all on table", "west"]. // ChatGPT seems to continue mimicking the human player. // Small LMs repeat themselves when they have no clues for progressing. // Maybe LLMs like ChatGPT also have the same issue? It looks like there is a mistake, and the "down" action is missing from the list of valid options. I believe you intended to select "down" in the previous prompt. To continue the game, please select a valid action from the following list: ["jump", "east", "open bottle", "open sack", "up", "close window", "put down bottle", "put down leaves", "put down sack", "put down all", "put bottle on table", "put leaves on table", "put sack on table", "put water on table", "put all on table", "west"]. // We gave a stronger hint / enforcement. ["jump", "east", "open bottle", "open sack", "up", "close window", "put down bottle", "put down leaves", "put down sack", "put down all", "put bottle on table", "put leaves on table", "put sack on table", "put water on table", "put all on table", "west"] What directions are in here? Chat 13: ChatGPT refuses to pick a valid action and says nonsense. After our stronger hints, ChatGPT could often take correct actions and move the game forward. When not reminded of previous actions, ChatGPT could score 35 points within 80 steps. When reminded of previous actions, ChatGPT could achieve a score of 40 within only 45 steps. This significant improvement lifted ChatGPT over the existing state-of-the-art systems. ### Analysis The overall performance of ChatGPT is promising but far from strong: it is on par with SOTA systems, but those systems fall far behind human players. ChatGPT seems to share some issues with small LMs, such as repetition, although ChatGPT's repetitions are more sophisticated and thus more human-like: while small LMs tend to repeat trivial tokens and utterances (e.g., "I don't know"), ChatGPT likes to repeat previous responses (or requests) with moderate (content or format) revisions. Maybe all LLMs will live with this problem: after all, they are pretrained to predict future tokens given contexts; when they have no clues, copying and mimicking is perhaps their safest decision. It seems reasonable to use Zork--or more broadly, text games--as a testbed to evaluate LLMs. Text games are challenging for LLMs, and they require a fundamental improvement in intelligence to solve. ## 6 Discussion and Future Work Throughout this report, we have demonstrated that ChatGPT, a state-of-the-art LLM, currently lacks certain fundamental properties of intelligence. It is an open question whether some of these properties will emerge from future LLMs as they grow even larger. We do not know the answer, but we are conservatively optimistic: after all, "quantitative change leads to qualitative change" is not a new story in the AI community. Therefore, we believe that it is desirable to establish a benchmark to evaluate such properties and track their potential advancement. We are diligently working on this.
2302.08775
Triemaps that match
The trie data structure is a good choice for finite maps whose keys are data structures (trees) rather than atomic values. But what if we want the keys to be patterns, each of which matches many lookup keys? Efficient matching of this kind is well studied in the theorem prover community, but much less so in the context of statically typed functional programming. Doing so yields an interesting new viewpoint -- and a practically useful design pattern, with good runtime performance.
Simon Peyton Jones, Sebastian Graf
2023-02-17T09:24:41Z
http://arxiv.org/abs/2302.08775v1
# Triemaps that match ###### Abstract. The _trie_ data structure is a good choice for finite maps whose keys are data structures (trees) rather than atomic values. But what if we want the keys to be _patterns_, each of which matches many lookup keys? Efficient matching of this kind is well studied in the theorem prover community, but much less so in the context of statically typed functional programming. Doing so yields an interesting new viewpoint -- and a practically useful design pattern, with good runtime performance. ## 1. Introduction Many functional languages provide _finite maps_ either as a built-in data type, or as a mature, well-optimised library. Generally the keys of such a map will be small: an integer, a string, or perhaps a pair of integers. But in some applications the key is large: an entire tree structure. For example, consider the Haskell expression \[\text{let }x=a+b\text{ in }...\text{ (let }y=a+b\text{ in }x+y\text{) }...\] We might hope that the compiler will recognise the repeated sub-expression \((a+b)\) and transform to \[\text{let }x=a+b\text{ in }...\text{ (}x+x\text{) }...\] An easy way to do so is to build a finite map that maps the expression \((a+b)\) to \(x\). Then, when encountering the inner let, we can look up the right hand side in the map, and replace \(y\) by \(x\). All we need is a finite map keyed by syntax trees. Traditional finite-map implementations tend to do badly in such applications, because they are often based on balanced trees, and make the assumption that comparing two keys is a fast, constant-time operation. That assumption is false for large, tree-structured keys. Another time that a compiler may want to look up a tree-structured key is when rewriting expressions: it wants to see if any rewrite rule matches the sub-expression in hand, and if so rewrite with the instantiated right-hand side of the rule. To do this we need a fast way to see if a target expression matches one of the patterns in a set of (_pattern_, _rhs_) pairs. If there is a large number of such (_pattern_, _rhs_) entries to check, we would like to do so faster than checking them one by one. Several parts of GHC, a Haskell compiler, need matching lookup, and currently use an inefficient linear algorithm to do so. In principle it is well known how to build a finite map for a deeply-structured key: use a _trie_. The matching task is also well studied but, surprisingly, only in the automated reasoning community (Section 7.1): they use so-called _discrimination trees_. In this paper we apply these ideas in the context of a statically-typed functional programming language, Haskell. This shift of context is surprisingly fruitful, and we make the following contributions: * Following Hinze (2000a), we develop a standard pattern for a _statically-typed triemap_ for an arbitrary algebraic data type (Section 3.2). In contrast, most of the literature describes untyped tries for a fixed, generic tree type. In particular: * Supported by type classes, we can make good use of polymorphism to build triemaps for polymorphic data types, such as lists (Section 3.6). * We cover the full range of operations expected for finite maps: not only _insertion_ and _lookup_, but _alter_, _union_, _fold_, _map_ and _filter_ (Section 3.2). * We develop a generic optimisation for singleton maps that compresses leaf paths. Intriguingly, the resulting triemap _transformer_ can be easily mixed into arbitrary triemap definitions (Section 3.7).
* We show how to make our triemaps insensitive to \(\alpha\)_-renamings_ in keys that include binding forms (Section 4). Accounting for \(\alpha\)-equivalence is not hard, but it is crucial for the applications in compilers. * We extend our triemaps to support _matching_ lookups (Section 5). This is an important step, because the only readily-available alternative is linear lookup. Our main contribution is to extend the established idea of tries keyed by arbitrary data types, so that it can handle matching too. * We present measurements that compare the performance of our triemaps (ignoring their matching capability) with traditional finite-map implementations in Haskell (Section 6). We discuss related work in Section 7. Our contribution is not so much a clever new idea as an exposition of some old ideas in a new context. Nevertheless, we found it surprisingly tricky to develop the "right" abstractions, such as the _TrieMap_ and _Matchable_ classes, the singleton-and-empty map data type, and the combinators we use in our instances. These abstractions have been through _many_ iterations, and we hope that by laying them out here, as a functional pearl, we may shorten the path for others. ## 2. The problem we address Our general task is as follows: _implement an efficient finite mapping from keys to values_, _in which the key is a tree_. Semantically, such a finite map is just a set of _(key,value)_ pairs; we query the map by looking up a _target_. For example, the key might be a data type of syntax trees, defined like this: \[\begin{array}{l}\text{type}\;\;\mathit{Var}=\mathit{String}\\ \text{data}\;\;\mathit{Expr}=\mathit{App}\;\;\mathit{Expr}\;\;\mathit{Expr}\;\mid\;\mathit{Lam}\;\mathit{Var}\;\;\mathit{Expr}\;\mid\;\mathit{Var}\;\;\mathit{Var}\end{array}\] Here \(\mathit{Var}\) is the type of variables; these can be compared for equality and used as the key of a finite map. Its definition is not important for this paper, but for the sake of concreteness, you may wish to imagine it is simply a string, as in the definition above. The data type \(\mathit{Expr}\) is capable of representing expressions like \((\mathit{add}\;x\;y)\) and \((\lambda x.\;\mathit{add}\;x\;y)\). We will use this data type throughout the paper, because it has all the features that occur in real expression data types: free variables like _add_, represented by a \(\mathit{Var}\) node; lambdas which can bind variables \((\mathit{Lam})\), and occurrences of those bound variables \((\mathit{Var})\); and nodes with multiple children \((\mathit{App})\). A real-world expression type would have many more constructors, including literals, let-expressions and suchlike. ### Alpha-renaming In the context of a compiler, where the keys are expressions or types, the keys may contain internal _binders_, such as the binder \(x\) in \((\lambda x.x)\). If so, we would expect insertion and lookup to be insensitive to \(\alpha\)-renaming, so we could, for example, insert with key \((\lambda x.x)\) and look up with key \((\lambda y.y)\), to find the inserted value. ### Lookup modulo matching Beyond just the basic finite maps we have described, our practical setting in GHC demands more: we want to do a lookup that does _matching_.
GHC supports so-called _rewrite rules_ (Peyton Jones et al., 2001), which the user can specify in their source program, like this:

    {-# RULES "map/map" forall f g xs. map f (map g xs)
                      = map (f . g) xs #-}

This rule asks the compiler to rewrite any target expression that matches the shape of the left-hand side (LHS) of the rule into the right-hand side (RHS). We use the term _pattern_ to describe the LHS, and _target_ to describe the expression we are looking up in the map. The pattern is explicitly quantified over the _pattern variables_ (here \(f\), \(g\), and \(\mathit{xs}\)) that can be bound during the matching process. In other words, we seek a substitution for the pattern variables that makes the pattern equal to the target expression. For example, if the program we are compiling contains the expression _map double_ (_map square nums_), we would like to produce a substitution \([\mathit{f}\mapsto\mathit{double},\mathit{g}\mapsto\mathit{square},\mathit{xs}\mapsto\mathit{nums}]\) so that the substituted RHS becomes _map_ (_double_\(\circ\)_square_) _nums_; we would replace the former expression with the latter in the code under consideration. Of course, the pattern might itself have bound variables, and we would like to be insensitive to \(\alpha\)-conversion for those. For example:

    {-# RULES "map/id" map (\x -> x) = \y -> y #-}

We want to find a successful match if we see a call _map_ (\(\lambda y\to y\)), even though the bound variable has a different name. Now imagine that we have thousands of such rules. Given a target expression, we want to consult the rule database to see if any rule matches. One approach would be to look at the rules one at a time, checking for a match, but that would be slow if there are many rules. Similarly, GHC's lookup for type-class instances and for type-family instances can have thousands of candidates. We would like to find a matching candidate more efficiently than by linear search.

Figure 1. API for library functions

### Non-solutions At first sight, our task can be done easily: define a total order on _Expr_ and use a standard finite map library. Indeed that works, but it is terribly slow. A finite map is implemented as a binary search tree; at every node of this tree, we compare the key (an _Expr_, remember) with the key stored at the node; if it is smaller, go left; if larger, go right. Each lookup thus must perform a (logarithmic) number of potentially-full-depth comparisons of two expressions. Another possibility might be to hash the _Expr_ and use the hash-code as the lookup key. That would make lookup much faster, but it requires at least two full traversals of the key for every lookup: one to compute its hash code, and a full equality comparison on a "hit" because hash-codes can collide. But the killer is this: _neither binary search trees nor hashing is compatible with matching lookup_. For our purposes they are non-starters. What other standard solutions to matching lookup are there, apart from linear search? The theorem proving and automated reasoning community has been working with huge sets of rewrite rules, just as we describe, for many years.
They have developed term indexing techniques for the job (Sekar et al., 2001, Chapter 26), which attack the same problem from a rather different angle, as we discuss in Section 7.1. ## 3. Tries A standard approach to a finite map in which the key has internal structure is to use a _trie_1. Generalising tries to handle an arbitrary algebraic data type as the key is a well established, albeit under-used, idea (Connelly and Morris, 1995; Hinze, 2000a). We review these ideas in this section. Let us consider a simplified form of expression: Footnote 1: [https://en.wikipedia.org/wiki/Trie](https://en.wikipedia.org/wiki/Trie)

    data Expr = Var Var | App Expr Expr

We omit lambdas for now, so that all _Var_ nodes represent free variables, which are treated as constants. We will return to lambdas in Section 4. ### The interface of a finite map Building on the design of widely used functions in Haskell (see Fig. 1), we seek these basic operations:

    empty :: ExprMap v
    lkEM  :: Expr -> ExprMap v -> Maybe v
    atEM  :: Expr -> TF v -> ExprMap v -> ExprMap v

The lookup function _lkEM_2 has a type that is familiar from every finite map. The update function _atEM_, typically called _alter_ in Haskell libraries, changes the value stored at a particular key. The caller provides a _value transformation function_ TF v, an abbreviation for Maybe v -> Maybe v (see Fig. 1). This function transforms the existing value associated with the key, if any (hence the input _Maybe_), to a new value, if any (hence the output _Maybe_). We can easily define _insertEM_ and _deleteEM_ from _atEM_: Footnote 2: We use short names _lkEM_ and _atEM_ consistently in this paper to reflect the single-column format.

    insertEM :: Expr -> v -> ExprMap v -> ExprMap v
    insertEM e v = atEM e (\_ -> Just v)

    deleteEM :: Expr -> ExprMap v -> ExprMap v
    deleteEM e = atEM e (\_ -> Nothing)

You might wonder whether, for the purposes of this paper, we could just define _insert_, leaving _atEM_ for the Appendix3, but as we will see in Section 3.3, our approach using tries requires the generality of _atEM_. Footnote 3: In the supplemental file TrieMap.hs These fundamental operations on a finite map must obey the following properties:

    lookup e empty                 ≡ Nothing
    lookup e (alter e xt m)        ≡ xt (lookup e m)
    e1 ≠ e2  ⟹  lookup e1 (alter e2 xt m) ≡ lookup e1 m

We also support other standard operations on finite maps, with types analogous to those in Fig. 1, including _unionEM_, _mapEM_, and _foldEM_.
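To make the laws and the _TF_-based derivations concrete, here is a tiny executable model (my illustration, not the paper's code) in which _Data.Map_ with string keys stands in for the trie; _Map.alter_ plays the role of _atEM_:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type TF v = Maybe v -> Maybe v

-- A stand-in for the trie: pretend keys are rendered Exprs.
type ExprMapModel v = Map String v

atModel :: String -> TF v -> ExprMapModel v -> ExprMapModel v
atModel = flip Map.alter  -- Map.alter already has exactly the TF shape

insertModel :: String -> v -> ExprMapModel v -> ExprMapModel v
insertModel e v = atModel e (\_ -> Just v)

deleteModel :: String -> ExprMapModel v -> ExprMapModel v
deleteModel e = atModel e (\_ -> Nothing)

-- The three laws hold for this model, e.g.:
--   Map.lookup "f x" (insertModel "f x" 1 Map.empty)  ==  Just 1
--   Map.lookup "f x" (deleteModel "f x" m)            ==  Nothing
```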
### Tries: the basic idea Here is a trie-based implementation for _Expr_:

    data ExprMap v = EM { em_var :: Map Var v
                        , em_app :: ExprMap (ExprMap v) }

Here _Map Var v_ is any standard finite map (e.g. from the _containers_ library) keyed by _Var_, with values _v_. One way to understand this slightly odd data type is to study its lookup function:

    lkEM :: Expr -> ExprMap v -> Maybe v
    lkEM e (EM { em_var = var, em_app = app }) = case e of
      Var x     -> Map.lookup x var
      App e1 e2 -> case lkEM e1 app of
                     Nothing -> Nothing
                     Just m1 -> lkEM e2 m1

To look up a _Var_ we consult the _em_var_ field; to look up an application _App e1 e2_ we first look up _e1_ in _em_app_, which gives back a nested _ExprMap_, in which we then look up _e2_. We can write the same function more compactly, using combinators:

    lkEM e = case e of
      Var x     -> em_var >>> Map.lookup x
      App e1 e2 -> em_app >>> lkEM e1 >=> lkEM e2

Here the field selector _em_var_ extracts the variable map from the _EM_ record, and similarly _em_app_. The functions (>=>) and (>>>) are right-associative forward composition operators, respectively monadic and non-monadic, that chain the individual operations together (see Fig. 1). Finally, we have η-reduced the definition, by omitting the _m_ parameter. These abbreviations become quite worthwhile when we add more constructors, each with more fields, to the key data type. Notice that in contrast to the approach of Section 2.3, we _never compare two expressions for equality or ordering_. We simply walk down the _ExprMap_ structure, guided at each step by the next node in the target. This definition is extremely short and natural. But it embodies a hidden complexity: _it requires polymorphic recursion_. The recursive call to _lkEM e1_ instantiates _v_ to a different type than the parent function definition. Haskell supports polymorphic recursion readily, provided you give a type signature to _lkEM_, but not all languages do. ### Modifying tries It is not enough to look up in a trie - we need to _build_ them too. First, we need an empty trie. Here is one way to define it:

    emptyEM :: ExprMap v
    emptyEM = EM { em_var = Map.empty, em_app = emptyEM }

It is interesting to note that _emptyEM_ is an infinite, recursive structure: the _em_app_ field refers back to _emptyEM_. We will change this definition in Section 3.5, but it works perfectly well for now.
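For readers who want to experiment, the definitions so far fit in a few lines that load directly into GHCi; only the small demo at the end is my addition:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Var = String
data Expr = Var Var | App Expr Expr

-- The trie keyed by Expr; note the nested map in em_app.
data ExprMap v = EM { em_var :: Map Var v
                    , em_app :: ExprMap (ExprMap v) }

-- The signature is mandatory: the recursive call on e1 uses lkEM at
-- type ExprMap (ExprMap v), i.e. polymorphic recursion.
lkEM :: Expr -> ExprMap v -> Maybe v
lkEM (Var x)     (EM var _) = Map.lookup x var
lkEM (App e1 e2) (EM _ app) = lkEM e1 app >>= lkEM e2

-- The infinite, lazily unfolded empty trie from the text.
emptyEM :: ExprMap v
emptyEM = EM Map.empty emptyEM

main :: IO ()
main = print (lkEM (App (Var "f") (Var "x")) (emptyEM :: ExprMap Int))
-- prints Nothing: the empty trie has no entries, and laziness keeps
-- the infinite structure from being forced
```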
Next, we need to _alter_ a triemap:

    atEM :: Expr -> TF v -> ExprMap v -> ExprMap v
    atEM e tf m@(EM { em_var = var, em_app = app }) = case e of
      Var x     -> m { em_var = Map.alter tf x var }
      App e1 e2 -> m { em_app = atEM e1 (liftTF (atEM e2 tf)) app }

    liftTF :: (ExprMap v -> ExprMap v) -> TF (ExprMap v)
    liftTF f Nothing  = Just (f emptyEM)
    liftTF f (Just m) = Just (f m)

For a _Var_ key we can simply delegate to _Map.alter_; for an application _App e1 e2_ we must alter the _inner_ map stored at _e1_, and the auxiliary function _liftTF_ turns that map-to-map transformation into a value transformation, manufacturing an empty inner map if none was present. Folding is more interesting. A natural first attempt folds the _em_var_ field and then recurses into _em_app_:

    foldrEM :: forall v r. (v -> r -> r) -> r -> ExprMap v -> r
    foldrEM k z (EM { em_var = var, em_app = app })
      = Map.foldr k z1 var
      where
        z1 = foldrEM kapp z app
        kapp :: ExprMap v -> r -> r
        kapp m1 r = foldrEM k r m1

But alas, _foldrEM_ will never terminate! It always invokes itself immediately (in _z1_) on _app_; but that invocation will again recursively invoke _foldrEM_; and so on forever. The solution is simple: we just need an explicit representation of the empty map. Here is one way to do it (we will see another in Section 3.7):

    data ExprMap v = EmptyEM | EM { em_var :: ..., em_app :: ... }

    emptyEM :: ExprMap v
    emptyEM = EmptyEM

    foldEM :: (v -> r -> r) -> r -> ExprMap v -> r
    foldEM k z EmptyEM = z
    foldEM k z (EM { em_var = var, em_app = app })
      = Map.foldr k z1 var
      where
        z1 = foldEM kapp z app
        kapp m1 r = foldEM k r m1

A similar optimisation handles maps that contain exactly one entry: we can add a constructor _SingleEM Expr v_ that stores the key and value directly, compressing what would otherwise be a long chain of _EM_ nodes. But in the triemap for each new data type X, we will have to tiresomely repeat these extra data constructors, EmptyX and SingleX. For example we would have to add EmptyList and SingleList to the ListMap data type of Section 3.6. It is better instead to abstract over the enclosed triemap, as follows5:

    data SEMap tm v = EmptySEM
                    | SingleSEM (Key tm) v
                    | MultiSEM  (tm v)

    instance TrieMap tm => TrieMap (SEMap tm) where
      type Key (SEMap tm) = Key tm
      emptyTM = EmptySEM
      lkTM    = lkSEM
      atTM    = atSEM

Footnote 5: _SEMap_ stands for "singleton or empty map".
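The instance above delegates to _lkSEM_ and _atSEM_; the following is my sketch of what they plausibly look like, given the intended meaning of the three constructors, together with a minimal _TrieMap_ class inferred from the instance header (the paper's actual class may differ in detail):

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts #-}

type TF v = Maybe v -> Maybe v

-- Minimal class, reconstructed from the instance header above.
class TrieMap tm where
  type Key tm
  emptyTM :: tm v
  lkTM    :: Key tm -> tm v -> Maybe v
  atTM    :: Key tm -> TF v -> tm v -> tm v

data SEMap tm v = EmptySEM
                | SingleSEM (Key tm) v
                | MultiSEM  (tm v)

lkSEM :: (TrieMap tm, Eq (Key tm)) => Key tm -> SEMap tm v -> Maybe v
lkSEM _ EmptySEM = Nothing
lkSEM k (SingleSEM pk v)           -- one entry: just compare keys
  | k == pk   = Just v
  | otherwise = Nothing
lkSEM k (MultiSEM tm) = lkTM k tm  -- delegate to the inner triemap

atSEM :: (TrieMap tm, Eq (Key tm))
      => Key tm -> TF v -> SEMap tm v -> SEMap tm v
atSEM k tf EmptySEM = case tf Nothing of
  Nothing -> EmptySEM
  Just v  -> SingleSEM k v
atSEM k tf m@(SingleSEM pk v)
  | k == pk   = case tf (Just v) of
      Nothing -> EmptySEM
      Just v' -> SingleSEM pk v'
  | otherwise = case tf Nothing of
      Nothing -> m
      -- grow to a real triemap holding both entries
      Just v2 -> MultiSEM (atTM k (\_ -> Just v2)
                             (atTM pk (\_ -> Just v) emptyTM))
atSEM k tf (MultiSEM tm) = MultiSEM (atTM k tf tm)
```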
## 4. Keys with binders

The next challenge is our data type _Expr_ from Section 2.1, which brings back binding semantics through the _Lam_ constructor:

```
data Expr = App Expr Expr | Lam Var Expr | Var Var
```

The key idea is simple: we perform De Bruijn numbering on the fly, renaming each binder to a natural number, from outside in. So, when inserting or looking up a key (λx. foo (λy. x+y)) we behave as if the key was (λ. foo (λ. #1 + #2)), where each #i stands for an occurrence of the variable bound by the i'th lambda, counting from the root of the expression. In effect, then, we behave as if the data type was like this:

```
data Expr' = App Expr' Expr' | Lam Expr' | FVar Var | BVar BoundKey
```

Notice (a) the _Lam_ node no longer has a binder and (b) there are two sorts of _Var_ nodes, one for free variables and one for bound variables, carrying a _BoundKey_ (see below). We will not actually build a value of type _Expr'_ and look that up in a trie keyed by _Expr'_; rather, we are going to _behave as if we did_. Here is the code (which uses Fig. 2):

```
data ModAlpha a = A DBEnv a
type AlphaExpr = ModAlpha Expr
instance Eq AlphaExpr where ...

type BoundKey = DBNum
type ExprMap  = SEMap ExprMap'

data ExprMap' v
  = EM { em_fvar :: Map Var v       -- Free variables
       , em_bvar :: Map BoundKey v  -- Lambda-bound variables
       , em_app  :: ExprMap (ExprMap v)
       , em_lam  :: ExprMap v }

instance TrieMap ExprMap' where
  type Key ExprMap' = AlphaExpr
  lkTM = lkEM
  ...

lkEM :: AlphaExpr -> ExprMap' v -> Maybe v
lkEM (A bve e) = case e of
  Var x -> case lookupDBE x bve of
    Nothing -> em_fvar >>> Map.lookup x
    Just bv -> em_bvar >>> Map.lookup bv
  App e1 e2 -> em_app >>> lkTM (A bve e1) >=> lkTM (A bve e2)
  Lam x e'  -> em_lam >>> lkTM (A (extendDBE x bve) e')
```

A key of type _AlphaExpr_ pairs the expression with a _DBEnv_, an environment mapping each lambda-bound variable in scope to its De Bruijn number (Fig. 2 provides _emptyDBE_, _extendDBE_, and _lookupDBE_). At a _Var_ node we consult this environment to decide whether the variable is free or bound, indexing into _em_fvar_ or _em_bvar_ accordingly; at a _Lam_ node we simply extend the environment, so the binder's name itself is never stored in the trie, and lookup is insensitive to α-renaming.

## 5. Matching triemaps

We now extend our triemaps to support _matching_ lookup (Section 2.2), in which a single stored pattern may be returned for many different target expressions. Matching needs a pattern type and a monad in which to accumulate the substitution; both are tied to the key type by a class _Matchable_. To make this more concrete, here is a possible _Matchable_ instance for _AlphaExpr_:

```
instance Matchable AlphaExpr where
  type Pat   AlphaExpr = PatExpr
  type Match AlphaExpr = MatchExpr
  match = matchE
```

Let's look at the pieces, one at a time.
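For reference, a minimal form of the _Matchable_ class consistent with this instance might look as follows; this is my sketch, not the paper's exact definition:

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.Kind (Type)

-- The class ties a key type to its pattern type and to the monad
-- in which matching accumulates a substitution.
class Matchable k where
  type Pat   k :: Type          -- patterns over keys of type k
  type Match k :: Type -> Type  -- the matching monad
  match :: Pat k -> k -> Match k ()
```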
#### 5.1.1. Patterns

A pattern _PatExpr_ over _AlphaExpr_ can be defined like this:

```
data PatExpr = P PatKeys AlphaExpr
type PatKeys = Map PatVar PatKey
type PatVar  = Var
type PatKey  = DBNum
```

A pattern _PatExpr_ is a pair of an _AlphaExpr_ and a _PatKeys_ that maps each of the quantified pattern variables to a canonical De Bruijn _PatKey_. Just as in Section 4, _PatKeys_ make the pattern insensitive to the particular names, and order of quantification, of the pattern variables. We canonicalise the quantified pattern variables before starting a lookup, numbering pattern variables in the order they appear in a left-to-right scan. For example, the pattern ([a, b], f a b a) canonicalises to the _PatKeys_ [a ↦ 1, b ↦ 2]; the renamed and reordered pattern ([q, p], f p q p) canonicalises to [p ↦ 1, q ↦ 2], so both give rise to the same canonical pattern, f #1 #2 #1.
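Section 5.1.2 introduces the matching monad _MatchExpr_ together with _liftMaybe_, _refineMatch_, and _runMatchExpr_, which are used heavily below. A minimal sketch consistent with those uses -- state (the substitution) layered over the list monad for non-deterministic matching -- might be:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import qualified Data.Map as Map
import Data.Map (Map)
import Control.Applicative (Alternative)
import Control.Monad (MonadPlus)
import Control.Monad.Trans.State.Strict (StateT (..), runStateT)

type Var = String
data Expr = App Expr Expr | Lam Var Expr | Var Var  -- as in Section 2
  deriving (Eq, Show)

type PatKey = Int               -- stands in for the paper's DBNum
type Subst  = Map PatKey Expr   -- pattern key |-> matched expression

-- Matching threads a substitution and may succeed in several ways;
-- MonadPlus gives us the mplus/msum used in Section 5.2.
newtype MatchExpr v = MR (StateT Subst [] v)
  deriving (Functor, Applicative, Monad, Alternative, MonadPlus)

liftMaybe :: Maybe v -> MatchExpr v
liftMaybe Nothing  = MR (StateT (const []))
liftMaybe (Just v) = MR (pure v)

-- Refine (or reject) the current substitution.
refineMatch :: (Subst -> Maybe Subst) -> MatchExpr ()
refineMatch f = MR $ StateT $ \ms -> case f ms of
  Nothing  -> []
  Just ms' -> [((), ms')]

runMatchExpr :: MatchExpr v -> [(Subst, v)]
runMatchExpr (MR m) = [ (s, v) | (v, s) <- runStateT m Map.empty ]
```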
### Matching tries for Expr

Next, we show how to implement a matching triemap for our running example, _AlphaExpr_. The data type follows closely the pattern we developed for _ExprMap_ (Section 4):

```
type MExprMap = MSEMap MExprMap'

data MExprMap' v
  = MM { mm_fvar :: Map Var v       -- Free var
       , mm_bvar :: Map BoundKey v  -- Bound var
       , mm_pvar :: Map PatKey v    -- Pattern var
       , mm_app  :: MExprMap (MExprMap v)
       , mm_lam  :: MExprMap v }

instance MTrieMap MExprMap' where
  type MKey MExprMap' = AlphaExpr
  emptyMTM = ...  -- boring
  lkMTM    = lookupPatMM
  atMTM    = alterPatMM
```

The main difference is that we add an extra field _mm_pvar_ to _MExprMap'_, for occurrences of a pattern variable.
You can see how this field is used in the lookup code:

```
lookupPatMM :: forall v. AlphaExpr -> MExprMap' v -> MatchExpr v
lookupPatMM ae@(A bve e) (MM { .. })
  = rigid `mplus` flexi
  where
    rigid = case e of
      Var x -> case lookupDBE x bve of
        Just bv -> mm_bvar |> liftMaybe . Map.lookup bv
        Nothing -> mm_fvar |> liftMaybe . Map.lookup x
      App e1 e2 -> mm_app |> lkMTM (A bve e1) >=> lkMTM (A bve e2)
      Lam x e'  -> mm_lam |> lkMTM (A (extendDBE x bve) e')

    flexi = mm_pvar |> Map.toList |> map match_one |> msum

    match_one :: (PatKey, v) -> MatchExpr v
    match_one (pv, v) = matchPatVarE pv ae >> pure v
```

Matching lookup on a trie matches the target expression against _all patterns the trie represents_. The _rigid_ case is no different from exact lookup; compare the code for _lkEM_ in Section 4. The only difference is that we need _liftMaybe_ (from Section 5.1.2) to take the _Maybe_ returned by _Map.lookup_ and lift it into the _MatchExpr_ monad. The _flexi_ case handles the triemap entries whose pattern is simply one of the quantified pattern variables; these entries are stored in the new _mm_pvar_ field. We enumerate these entries with _toList_, to get a list of (_PatKey_, v) pairs, match each such pair against the target with _match_one_, and finally accumulate all the results with _msum_. In turn _match_one_ uses _matchPatVarE_ to match the pattern variable with the target and, if successful, returns the corresponding value v. The _matchPatVarE_ function does the heavy lifting, using some simple auxiliary functions whose types are given below:

```
matchPatVarE :: PatKey -> AlphaExpr -> MatchExpr ()
matchPatVarE pv (A bve e) = refineMatch $ \ms ->
  case Map.lookup pv ms of
    Nothing                          -- pv is not yet bound
      | noCaptured bve e -> Just (Map.insert pv e ms)
      | otherwise        -> Nothing
    Just sol                         -- pv is already bound
      | noCaptured bve e, eqExpr e sol -> Just ms
      | otherwise                      -> Nothing

eqExpr     :: Expr -> Expr -> Bool
noCaptured :: DBEnv -> Expr -> Bool
```

To match a pattern variable _pv_ against an expression (A bve e), we first look up _pv_ in the current substitution (obtained from the _MatchExpr_ state monad). If _pv_ is not bound we simply extend the substitution. But wait! Consider matching the pattern ([p], λx → p) against the target (λy → 3). That's fine: we should succeed, binding _p_ to 3. But suppose we match that same pattern against target (λy → y). It would be nonsense to "succeed" binding _p_ to _y_, because _y_ is locally bound within the target. Hence the _noCaptured_ test, which returns _True_ iff the expression does not mention any of the locally-bound variables. If _pv_ is already bound in the substitution, we have a repeated pattern variable (see Section 5.1), and we must check that the target expression is equal (using _eqExpr_) to the one already bound to _pv_.
Once again, however, we must check that the target does not contain any locally-bound variables, hence the _noCaptured_ check. _lookupPatMM_ is the trickiest case. The code for _alterPatMM_, and the other operations of the class, is very straightforward, and is given in the Appendix.

### The external API

The matching tries we have described so far use canonical pattern keys, a matching monad, and other machinery that should be hidden from the client. We seek an external API more like this:

```
type PatMap :: Type -> Type
alterPM  :: ([Var], Expr) -> TF v -> PatMap v -> PatMap v
lookupPM :: Expr -> PatMap v -> [(PatSubst, v)]
type PatSubst = [(Var, Expr)]
```

When altering a _PatMap_ we supply a client-side pattern, which is just a pair ([Var], Expr) of the quantified pattern variables and the pattern. When looking up in a _PatMap_ we supply a target expression, and get back a list of matches, each of which is a pair of the value and the substitution for those original pattern variables that made the pattern equal to the target. So _alterPM_ must canonicalise the client-side pattern variables before altering the trie; that is easy enough. But how can _lookupPM_ recover the client-side _PatSubst_? Somehow we must remember the canonicalisation used when _inserting_ so that we can invert it when _matching_. For example, suppose we insert the two (pattern, value) pairs

(([p], f p True), v1)  and  (([q], f q False), v2)

Both patterns will canonicalise their (sole) pattern variable to the De Bruijn index 1. So if we look up the target (f e True) the _MatchExpr_ monad will produce a final _Subst_ that maps [1 ↦ e], paired with the value v1. But we want to return ([("p", e)], v1) to the client, a _PatSubst_ that uses the client variable "p", not the internal index 1. The solution is simple enough: _we store the mapping in the triemap's domain_, along with the values, thus:

```
type PatMap v = MExprMap (PatKeys, v)
```

Now the code writes itself. Here is _alterPM_:

```
alterPM :: forall v. ([Var], Expr) -> TF v -> PatMap v -> PatMap v
alterPM (pvars, e) tf pm = atMTM pat ptf pm
  where
    pks :: PatKeys = canonPatKeys pvars e
    pat :: PatExpr = P pks (A emptyDBE e)

    ptf :: TF (PatKeys, v)
    ptf Nothing       = fmap (\v -> (pks, v)) (tf Nothing)
    ptf (Just (_, v)) = fmap (\v -> (pks, v)) (tf (Just v))

canonPatKeys :: [Var] -> Expr -> PatKeys
```

The auxiliary function _canonPatKeys_ takes the client-side pattern (pvars, e), and returns a _PatKeys_ (Section 5.1.1) that maps each pattern variable to its canonical De Bruijn index. _canonPatKeys_ is entirely straightforward: it simply walks the expression, numbering off the pattern variables in left-to-right order.
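A sketch of the left-to-right numbering walk that _canonPatKeys_ performs (my code, using Int for DBNum and ignoring shadowing of pattern variables by lambda binders for simplicity):

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Var     = String
type PatKeys = Map Var Int   -- PatVar |-> PatKey
data Expr = App Expr Expr | Lam Var Expr | Var Var

-- Assign 1, 2, ... to each quantified variable at its first occurrence,
-- scanning the pattern left to right.
canonPatKeys :: [Var] -> Expr -> PatKeys
canonPatKeys pvars e0 = snd (go e0 (1, Map.empty))
  where
    go (Var x) acc@(next, pks)
      | x `elem` pvars, not (Map.member x pks)
                  = (next + 1, Map.insert x next pks)
      | otherwise = acc
    go (App e1 e2) acc = go e2 (go e1 acc)
    go (Lam _ e)   acc = go e acc

-- e.g. canonPatKeys ["a","b"] (App (App (Var "f") (Var "a")) (Var "b"))
--      == Map.fromList [("a",1),("b",2)]
```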
Then we can simply call the internal _atMTM_ function, passing it: a canonical _pat :: PatExpr_; and a transformer _ptf :: TF (PatKeys, v)_ that will pair the _PatKeys_ with the value supplied by the user via _tf :: TF v_. Lookup is equally easy:

    lookupPM :: Expr -> PatMap v -> [(PatSubst, v)]
    lookupPM e pm
      = [ (Map.toList (subst `Map.compose` pks), x)
        | (subst, (pks, x)) <- runMatchExpr $
                               lkMTM (A emptyDBE e) pm ]

For each successful match returned by _runMatchExpr_, we convert the internal substitution _subst_ (keyed by De Bruijn _PatKeys_) back into a client-side _PatSubst_ (keyed by the client's variable names) by composing it with the _pks_ stored alongside the value.

## 6. Evaluation

**Setup.** All benchmarks except fromList are handed a pre-built map containing 10000 expressions, each consisting of roughly 100 _Expr_ data constructors drawn from a pseudo-random source with a fixed (and thus deterministic) seed. We compare three different non-matching map implementations, simply because we were not aware of other map data structures with matching lookup modulo \(\alpha\)-equivalence and we wanted to compare apples to apples. The _ExprMap_ forms the baseline. Asymptotics are given with respect to map size \(n\) and key expression size \(k\): * _ExprMap_ (designated "TM" in Fig. 3) is the trie map implementation from this paper. Insertion and lookup perform at most one full traversal of the key, so performance should scale with \(\mathcal{O}(k)\). * _Map_ _Expr_ (designated "OM") is the ordered map implementation from the mature, well-optimised _containers_11 library. It uses size balanced trees under the hood [1]. Thus, lookup and insert operations incur an additional log factor in the map size \(n\), for a total of \(\mathcal{O}(k\log n)\), compared to both other maps. Footnote 11: [https://hackage.haskell.org/package/containers](https://hackage.haskell.org/package/containers) * _HashMap_ _Expr_ (designated "HM") is an implementation of hash array mapped tries [10] from the _unordered-containers_12 library.
Like _ExprMap_, map access incurs a full traversal of the key to compute a hash and then a \(\mathcal{O}(\log_{32}n)\) lookup in the array mapped trie, as well as an expected constant number of key comparisons to resolve collisions. The log factor can be treated like a constant for all intents and purposes, so lookup and insert is effectively in \(\mathcal{O}(k)\). Footnote 12: [https://hackage.haskell.org/package/unordered-containers](https://hackage.haskell.org/package/unordered-containers) Some clarification as to what our benchmarks measure: * The lookup benchmark looks up every expression that is part of the map. So for a map of size 10000, we perform 10000 lookups of expressions each of which have approximately size 100. * lookup_lam is like lookup, but wraps a shared prefix of 100 layers of (_Lam_ "s") around each expression. * fromList benchmarks a naive _fromList_ implementation on _ExprMap_ against the tuned _fromList_ implementations of the other maps, measuring map creation performance from batches. **Querying.** The results suggest that _ExprMap_ is about as fast as _Map_ _Expr_ for completely random expressions in lookup. But in a more realistic scenario, at least some expressions share a common prefix, which is what lookup_lam embodies. There we can see that _ExprMap_ wins against _Map_ _Expr_ by a huge margin: _ExprMap_ looks at the shared prefix exactly once on lookup, while _Map_ has to traverse the shared prefix of length \(\mathcal{O}(k)\) on each of its \(\mathcal{O}(\log n)\) comparisons. Although _HashMap_ loses on most benchmarks compared to _ExprMap_ and _Map_, most measurements were consistently at most a factor of two slower than _ExprMap_. We believe that is because it is enough to traverse the _Expr_ twice during lookup barring any collisions (once to hash, and once to compare on a hit), so it is expected to scale similarly to _ExprMap_. Thus, both _ExprMap_ and _HashMap_ perform much more consistently than _Map_. **Modification.** While _ExprMap_ consistently wins in query performance, its edge melts into insignificance for fromList and union. One reason is the uniform distribution of expressions in these benchmarks, which favors _Map_. Still, it is a surprise that the naive _fromList_ implementations of _ExprMap_ and _Map_ as list folds beat that of _HashMap_, although the latter has a tricky, performance-optimised implementation using transient mutability. What would a non-naive version of _fromList_ for _ExprMap_ look like? Perhaps the process could be sped up considerably by partitioning the input list according to the different fields of _ExprMap_ and then calling the _fromList_ implementations of the individual fields in turn. The process would be very similar to discrimination sort [11], which is a generalisation of radix sort to tree-like data and very close to tries. Indeed, the _discrimination_13 library provides such an optimised \(\mathcal{O}(n)\) _toMap_ implementation for _Map_. Footnote 13: [https://hackage.haskell.org/package/discrimination](https://hackage.haskell.org/package/discrimination) ## 7. Related work ### Matching triemaps in automated reasoning Matching triemaps, a kind of _term index_, have been used in the automated reasoning community for decades. An automated reasoning system has hundreds or thousands of axioms, each of which is quantified over some variables (just like the RULEs described in Section 2.2).
Each of these axioms might apply at any sub-tree of the term under consideration, so efficient matching of many axioms is absolutely central to the performance of these systems. This led to a great deal of work on so-called _discrimination trees_, starting in the late 1980's, which is beautifully surveyed in the Handbook of Automated Reasoning [23, Chapter 26]. All of this work typically assumes a single, fixed, data type of "first order terms" like this14 Footnote 14: Binders in terms do not seem to be important in these works, although they could be handled fairly easily by a De Bruijn pre-pass.

    data MKey = Node Fun [MKey]

where _Fun_ is a function symbol, and each such function symbol has a fixed arity. Discrimination trees are described by imagining a pre-order traversal that (uniquely, since function symbols have fixed arity) converts the _MKey_ to a list of type [Fun], and treating that as the key. The map is implemented like this:

    data DTree v = DVal v | DNode (Map Fun (DTree v))

    lookupDT :: [Fun] -> DTree v -> Maybe v
    lookupDT []     (DVal v)  = Just v
    lookupDT (f:fs) (DNode m) = case Map.lookup f m of
                                  Just dt -> lookupDT fs dt
                                  Nothing -> Nothing
    lookupDT _ _              = Nothing

Each layer of the tree branches on the first _Fun_, and looks up the rest of the [Fun] in the appropriate child. Extending this basic setup with matching is done by some kind of backtracking. Discrimination trees are heavily used by theorem provers, such as Coq, Isabelle, and Lean. Moreover, discrimination trees have been further developed in a number of ways. Vampire uses _code trees_ which are a compiled form of discrimination tree that stores abstract machine instructions, rather than a data structure at each node of the tree [21]. Spass [22] uses _substitution trees_ [15], a refinement of discrimination trees that can share common _sub-trees_ not just common _prefixes_. (It is not clear whether the extra complexity of substitution trees pays its way.) Z3 uses _E-matching code trees_, which solve for matching modulo an ever-growing equality relation, useful in saturation-based theorem provers. All of these techniques except E-matching are surveyed in Sekar et al. [23]. If we applied our ideas to _MKey_ we would get a single-field triemap which (just like _lookupDT_) would initially branch on _Fun_, and then go through a chain of _ListMap_ constructors (which correspond to the _DNode_ above). You have to squint pretty hard -- for example, we do the pre-order traversal on the fly -- but the net result is very similar, although it is arrived at by an entirely different thought process. Many of the insights of the term indexing world re-appear, in different guise, in our triemaps. For example, when a variable is repeated in a pattern we can eagerly check for equality during the match, or instead gather an equality constraint and check those constraints at the end [24, Section 26.14]. ### Haskell triemaps Trie data structures have found their way into numerous Haskell packages over time. There are trie data structures that are specific to _String_, like the _StringMap_15 package, or polymorphic ones, requiring just a type class for trie key extraction, like the _TrieMap_16 package. None of these libraries describe how to index on expression data structures modulo \(\alpha\)-equivalence or how to perform matching lookup.
Footnote 15: [https://hackage.haskell.org/package/StringMap](https://hackage.haskell.org/package/StringMap) Memoisation has been a prominent application of tries in Haskell [13, 14, 15]. Given a function \(f\), the idea is to build an _infinite_, lazily-evaluated trie, that maps every possible argument \(x\) to (a thunk for) (\(f\) _x_). Now, a function call becomes a lookup in the trie. The ideas are implemented in the _MemoTrie_17 library. For memo tries, operations like alter, insert, union, and fold are all irrelevant: the infinite trie is built once, and then used only for lookup. Footnote 17: [https://hackage.haskell.org/package/MemoTrie](https://hackage.haskell.org/package/MemoTrie) Footnote 18: [https://hackage.haskell.org/package/functor-combo](https://hackage.haskell.org/package/functor-combo) Footnote 19: [https://hackage.haskell.org/package/representable-tries](https://hackage.haskell.org/package/representable-tries) A second strand of work concerns data type generic, or polytypic, approaches to generating tries, which nicely complements the design-pattern approach of this paper (Section 3.8). Hinze [23] describes the polytypic approach, for possibly parameterised and nested data types in some detail, including the realisation that we need _alter_ and _unionWith_ in order to define _insert_ and _union_. A generalisation of those ideas then led to _functor-combo_18. The _representable-tries_19 library observes that trie maps are representable functors and then, vice versa, tries to characterise the sub-class of representable functors for which there exists a trie map implementation. Footnote 20: [https://hackage.haskell.org/package/twee-lib](https://hackage.haskell.org/package/twee-lib) The _twee-lib_21 library defines a simple term index data structure based on discrimination trees for the _twee_ equation theorem prover. We would arrive at a similar data structure in this paper had we started from an expression data type Footnote 21: [https://hackage.haskell.org/package/twee-lib](https://hackage.haskell.org/package/twee-lib)

    data Expr = App Con [Expr] | Var Var

In contrast to our _ExprMap_, _twee_'s _Index_ does path compression not only for paths ending in leaves (as we do) but also for internal paths, as is common for radix trees. It is however unclear how to extend _twee_'s _Index_ to support \(\alpha\)-equivalence, hence we did not consider it for our benchmarks in Section 6. ## Acknowledgments We warmly thank Leonardo de Moura and Edward Yang for their very helpful feedback. ## 8. Conclusion We presented trie maps as an efficient data structure for representing a set of expressions modulo \(\alpha\)-equivalence, re-discovering polytypic deriving mechanisms described by Hinze [23]. Subsequently, we showed how to extend this data structure to make it aware of pattern variables in order to interpret stored expressions as patterns. The key innovation is that the resulting trie map allows efficient matching lookup of a target expression against stored patterns. This pattern store is quite close to discrimination trees [24], drawing a nice connection to term indexing problems in the automated theorem proving community.
2310.08564
The geometry of maximal development and shock formation for the Euler equations in multiple space dimensions
We construct a fundamental piece of the boundary of the maximal globally hyperbolic development (MGHD) of Cauchy data for the multi-dimensional compressible Euler equations, which is necessary for the local shock development problem. For an open set of compressive and generic $H^7$ initial data, we construct unique $H^7$ solutions to the Euler equations in the maximal spacetime region below a given time-slice, beyond the time of the first singularity; at any point in this spacetime, the solution can be smoothly and uniquely computed by tracing both the fast and slow acoustic characteristic surfaces backward-in-time, until reaching the Cauchy data prescribed along the initial time-slice. The future temporal boundary of this spacetime region is a singular hypersurface, containing the union of three sets: first, a co-dimension-$2$ surface of ``first singularities'' called the pre-shock; second, a downstream hypersurface called the singular set emanating from the pre-shock, on which the Euler solution experiences a continuum of gradient catastrophes; third, an upstream hypersurface consisting of a Cauchy horizon emanating from the pre-shock, which the Euler solution cannot reach. We develop a new geometric framework for the description of the acoustic characteristic surfaces which is based on the Arbitrary Lagrangian Eulerian (ALE) framework, and combine this with a new type of differentiated Riemann variables which are linear combinations of gradients of velocity, sound speed, and the curvature of the fast acoustic characteristic surfaces. With these new variables, we establish uniform $H^7$ Sobolev bounds for solutions to the Euler equations without derivative loss and with optimal regularity.
Steve Shkoller, Vlad Vicol
2023-10-12T17:54:10Z
http://arxiv.org/abs/2310.08564v3
# The geometry of maximal development for the Euler equations ###### Abstract. We establish the maximal hyperbolic development of Cauchy data for the multi-dimensional compressible Euler equations. For an open set of compressive and generic \(H^{7}\) initial data, we construct unique \(H^{7}\) solutions to the Euler equations in the maximal spacetime region such that at any point in this spacetime, the solution can be smoothly and uniquely computed by tracing both the fast and slow acoustic characteristic surfaces backward-in-time, until reaching the Cauchy data prescribed along the initial time-slice. The future temporal boundary of this spacetime region is a singular hypersurface, consisting of the union of three sets: first, a co-dimension-2 surface of "first singularities" called the _pre-shock_; second, a downstream hypersurface emanating from the pre-shock, on which the Euler solution experiences a continuum of gradient catastrophes; third, an upstream hypersurface consisting of a Cauchy horizon emanating from the pre-shock, which the Euler solution cannot reach. We develop a new geometric framework for the description of the acoustic characteristic surfaces which is based on the _Arbitrary Lagrangian Eulerian (ALE)_ framework, and combine this with a new type of _differentiated Riemann-type variables_ which are linear combinations of gradients of velocity and sound speed and the curvature of the fast acoustic characteristic surfaces. With these new variables, we establish uniform \(H^{7}\) Sobolev bounds for solutions to the Euler equations without derivative loss and with optimal regularity. This is the first result on the maximal hyperbolic development of compressive Cauchy data in all regions of spacetime. ###### Contents * 1 Introduction * 2 Acoustic characteristic surfaces associated to shock formation * 3 A new set of variables for Euler shock formation * 4 Initial data and main results * 5 Shock formation: spacetime, energy norms, and bootstrap assumptions * 6 First consequences of the bootstrap assumptions * 7 Bounds for the geometry, sound speed, and the tangential reparameterization velocity * 8 Vorticity energy estimates and the resulting improved estimates * 9 Closing the pointwise bootstrap inequalities * 10 The sixth order energy estimates for the tangential components * 11 Improved normal-component estimates for six pure time derivatives * 12 The sixth order energy estimates for the normal components * 13 Downstream maximal development * 14 Upstream maximal development * 15 Optimal regularity for velocity, sound speed, and ALE map * A Maximal development of Cauchy data for the 1D Euler equations * B Functional analysis in the remapped domain * C Transport bounds ## 1. Introduction We establish the maximal hyperbolic development of Cauchy data, during the shock formation process, for solutions of the Euler equations \[\partial_{t}(\rho u)+\operatorname{div}(\rho u\otimes u)+\nabla p =0\,, \tag{1.1a}\] \[\partial_{t}\rho+\operatorname{div}(\rho u) =0\,,\] (1.1b) \[\partial_{t}E+\operatorname{div}(u(E+p)) =0\,, \tag{1.1c}\] where \(p=(\gamma-1)\left(E-\frac{1}{2}\rho|u|^{2}\right)\) is the scalar pressure function, \(\gamma>1\) is the adiabatic exponent, \(u:\mathbb{T}^{d}\times\mathbb{R}\to\mathbb{R}^{d}\) denotes the \(d\)-component velocity vector field, \(\rho:\mathbb{T}^{d}\times\mathbb{R}\to\mathbb{R}_{+}\) denotes the strictly positive density function, and \(E:\mathbb{T}^{d}\times\mathbb{R}\to\mathbb{R}\) is the total energy. 
In particular, we develop a new geometric and analytic framework that allows us to obtain uniform Sobolev estimates for the Euler solution, evolving past the time of the first gradient singularity, and in fact uniformly evolving through a spacetime hypersurface of gradient catastrophes. In turn, we are able to give a complete description of the largest possible spacetime region on which (an open set of) compressive Cauchy data can be smoothly and uniquely evolved. This resolves the first step in a two-tier program to establish the existence of unique shock wave solutions for the Euler equations in multiple space dimensions. An abbreviated form of our main result can be found in Theorem 1.2 below, while the detailed statements can be found in Section 4.3, Theorems 4.6, 4.7, and 4.8.

### Shock formation and shock development

The system (1.1) is the quintessential system of nonlinear hyperbolic conservation laws. Such systems exhibit shock waves; these are spacetime hypersurfaces of discontinuity which emerge in finite time from smooth initial data, and dynamically evolve according to the Rankine-Hugoniot (RH) jump conditions (see, for example, [21]). In addition to the physical variable unknowns in (1.1) - velocity, density, and energy - the location of the shock surface is also an unknown. A weak solution to the Euler equations requires that the physical variables satisfy the Euler equations pointwise on either side of the shock surface, and that the shock surface propagates with the correct normal speed. Moreover, certain physical "entropy" conditions must be satisfied to ensure that the solution is physically meaningful. While the theory of shock waves and weak solutions to the compressible Euler equations (and more general systems of conservation laws) is fairly complete in one space dimension (see [47], [29]-[31], [24], [25], [33], [23], [40], [32], [9], [53], [2], as well as the fairly complete bibliography in [21]), the problem of obtaining and propagating unique shock solutions in more than one space dimension without symmetry assumptions remains open. Detailed shock formation under azimuthal symmetry, with a functional description of the solution at the first shock singularity, has been extensively studied in [4], [45], and [44]. The methods of one-dimensional conservation laws have not proven to be easily extendable to multiple space dimensions (see, for example, [46]). Moreover, the convex-integration based results originating in [11] and refined in [10] have shown that entropy inequalities cannot be used as a uniqueness selection criterion. As a result, there is as yet no general existence theorem describing the evolution of smooth data towards the creation and unique propagation of discontinuous shock surfaces for the Euler equations in multiple space dimensions.

Christodoulou [13, 14] introduced a novel two-stage program for the construction of unique shock wave solutions to the Euler equations. Starting from smooth initial data, the first step is called _shock formation_, in which smooth compressive initial data is evolved up to a cusp-like Eulerian spacetime co-dimension-\(2\) hypersurface of "first singularities"; these first singularities are where the gradient of velocity, density, and energy first become infinite. We term this co-dimension-\(2\) hypersurface of "first singularities" the _pre-shock_ set, because along this set the solution remains continuous but forms a \(C^{\frac{1}{3}}\) cusp.
The term pre-shock is used because, on this set, the gradient of the solution has become infinite, but the actual shock _discontinuity_ is yet to develop. The second step of the program is called _shock development_. Here, one uses the analytical description of the \(C^{\frac{1}{3}}\) solution along the pre-shock as Cauchy data, from which the shock surface of discontinuity instantaneously _develops_. To date, this program remains unresolved in the absence of symmetry assumptions; for the Euler equations, see Christodoulou [14] for the so-called restricted shock development problem, Yin [53] and Christodoulou & Lisibach [15] for shock development in spherical symmetry, and [4] for shock development together with the emergence of the _weak characteristic discontinuities_ conjectured by Landau & Lifshitz [28]. It is important to note that Majda's shock stability result [38, 39] is neither a _shock formation_ result nor a _shock development_ result, but rather a short-time existence theorem on the propagation of shock front initial data by the shock speed imposed by the Rankine-Hugoniot jump conditions. Specifically, Majda assumes the existence of a surface of discontinuity in the data, while the objective of shock development is to dynamically create this surface of discontinuity from the \(C^{\frac{1}{3}}\) cusp-solution at the pre-shock. To summarize, the first step of this two-tiered program necessitates the analysis of the maximal development of smooth compressive Cauchy data. The second step of the program uses the Euler solution along the pre-shock as Cauchy data, together with the existence of the unique Euler solution _downstream of the pre-shock_, to produce a unique shock-wave solution to the Euler equations as the discontinuous shock front dynamically and instantaneously emerges from the pre-shock. The main objective of this paper is the resolution of the first step of this program, giving the complete maximal hyperbolic development for solutions of the Euler equations throughout the entire _shock formation_ process.

### The evolution of the Euler solution past the time of first blowup

With the exception of the recent result of Abbrescia & Speck [1], which we shall describe below, the analysis of the multi-dimensional shock formation process for the Euler equations considered solutions only up to the _time of the very first singularity_, the earliest time \(t_{*}\) at which the solution gradient becomes infinite. Prior results on shock formation have analyzed the solution only up to this time \(t_{*}\) (see [48], [13], [16], [34], [35], [5], [6], [7]). Such an analysis is insufficient to proceed with the problem of shock development. It is essential to describe the full shock formation process, past the time of first singularity, and to capture the entire set of "first singularities" which successively emerge. Indeed, it is the description of the solution about this full spacetime set of first singularities (see the black curve on both the left and right images in Figure 1) that is used as Cauchy data for the development of discontinuous shock waves. The objective is therefore to create a novel geometric and analytical framework that can provide uniform energy estimates for solutions that are experiencing successive gradient catastrophes along a hypersurface of spacetime.
Thus, we are not simply trying to extend the solution past a single first singularity; rather, we are evolving the solution through a continuum of gradient blow-ups, in appropriately chosen coordinates that allow for uniform bounds to be maintained. We note that while our focus in this work is on the shock-type gradient singularity, solutions to the Euler equations can also form finite-time implosions from smooth initial conditions. Such unstable imploding solutions, which form a finite-time amplitude blow-up, have been proven to exist in [3, 8, 41, 42].

### Maximal hyperbolic development

In the traditional Cauchy problem in fluid dynamics, initial data is prescribed on the initial data manifold \(\{t=0\}\) and the evolution of this data is considered up to some time slice, given by the manifold \(\{t=t_{*}\}\). Given the localized nature of hyperbolic PDE with finite speed of propagation, localized sets of initial data can be propagated up to space-like submanifolds of spacetime which do not necessarily coincide with time-slices. For example, on the left side of Figure 1, which displays an example of an Eulerian spacetime, the fast acoustic characteristic surfaces (that emanate from the initial time-slice) are shown impinging upon each other; the black curve represents the location in spacetime where the first gradient catastrophes occur, while the union of the red and green surfaces indicates the future temporal boundary of the collection of points that can be smoothly and uniquely reached by the Cauchy data. The right panel in Figure 1 displays the analogous spacetime set, but in _Arbitrary Lagrangian Eulerian_ (ALE) coordinates adapted to the fast acoustic characteristic surfaces. These ALE coordinates, which will be defined in Section 2.1, provide a smooth geometric framework for our analysis.

Figure 1. Left: spacetime in traditional Eulerian coordinates. The characteristic surfaces are shown to impinge on the cuspoidal surface of first singularities, which consists of the pre-shock set (shown as the black curve) together with the singular hypersurface which emanates from the pre-shock in the downstream direction, and which consists of a continuum of gradient catastrophes (shown as the red surface). The slow acoustic characteristic which emanates from the pre-shock in the upstream direction (shown in green) is a Cauchy horizon which the Euler solution can never reach. Right: the spacetime in Arbitrary Lagrangian Eulerian (ALE) coordinates. The spacetime of maximal hyperbolic development denotes the region below the union of the downstream ALE paraboloid of "gradient catastrophes" (in red), the set of pre-shocks (in black), and the slow acoustic characteristic surface which emanates from the pre-shock (in green). In ALE coordinates, each fast acoustic characteristic surface has been flattened. The hypersurface shown in the magenta color denotes the fast acoustic characteristic surface that passes through the pre-shock.

There is a notion of the _maximal hyperbolic development_ of a data set, which originated in the study of general relativity and can be traced back to the fundamental paper of Choquet-Bruhat & Geroch [12]. For a hyperbolic PDE, an initial data set is defined as a spacelike manifold \(\mathcal{S}_{0}\) on which initial data \(U_{0}\) is prescribed.
A _development_ of such an initial data set consists of a spacetime \(\mathcal{M}\), a solution of the hyperbolic PDE on \(\mathcal{M}\), together with a diffeomorphism of \(\mathcal{S}_{0}\) onto a spacelike submanifold \(\mathcal{S}\) of the spacetime \(\mathcal{M}\), such that the solution restricted to \(\mathcal{S}\) coincides with the data on \(\mathcal{S}_{0}\). In other words, each point on the manifold \(\mathcal{S}\) can be reached by a unique and smooth characteristic emanating from \(\mathcal{S}_{0}\). There exists a precise notion of _maximal development_: it is known that every initial data set has a development, and that any two developments of \(\mathcal{S}_{0}\) are extensions of a common development. Moreover, for any initial data set \(\mathcal{S}_{0}\), there exists a development \(\mathcal{M}\) of \(\mathcal{S}_{0}\) which is an extension of every other development of \(\mathcal{S}_{0}\), and this development is unique. In the context of shock formation for Euler in the traditional Eulerian spacetime, there exists a cuspidal surface (see the union of the red surface, the black curve, and the green surface on the left side of Figure 1) which acts as a temporal boundary of the spacetime set on which the Cauchy data can be evolved in a smooth and unique fashion. As noted above, the black curve, which we will refer to as the _pre-shock_, denotes the set of "first singularities" where characteristic surfaces first impinge. This _pre-shock_ set is a collection of spacetime points where the gradients of the solution (velocity, density, pressure, etc.) experience the first blowup. The red surface shown on the left of Figure 1 denotes the spacetime hypersurface on which the fast characteristic surfaces impinge, and the green surface on the left of Figure 1 denotes the slow acoustic characteristic surface emanating from the pre-shock set. The maximal development of \(\mathcal{S}_{0}=\{t=\mathsf{t_{in}}\}\) consists of the solution in the spacetime set lying "below" this cuspidal surface (consisting of the union of the red, black, and green sets in Figure 1). For a self-contained introduction to the maximal hyperbolic development of Cauchy data in gas dynamics, we refer the reader to Appendix A for a brief yet detailed description of maximal development for the Euler equations in 1D.

### Preparing the Euler equations for shock formation analysis

#### 1.4.1. A symmetric form of the Euler equations

While the Euler solution stays smooth, the energy equation \(\partial_{t}E+\operatorname{div}(u(E+p))=0\) can be replaced by the transport equation for the specific entropy \(S\),
\[\partial_{t}S+u\cdot\nabla S=0\,,\]
in which case the pressure law can be equivalently written as
\[p=\tfrac{1}{\gamma}e^{S}\rho^{\gamma}\,,\qquad\gamma>1\,.\]
When the _initial entropy function_ is a constant, the entropy remains a constant during the shock formation process and the dynamics are termed _isentropic_; for isentropic dynamics, the pressure law is given by \(p=\tfrac{1}{\gamma}\rho^{\gamma}\) for \(\gamma>1\). It is convenient to rewrite the Euler equations in the symmetric form
\[\partial_{t}u+u\cdot\nabla u+\alpha\sigma\nabla\sigma=0\,,\tag{1.2a}\]
\[\partial_{t}\sigma+u\cdot\nabla\sigma+\alpha\sigma\operatorname{div}u=0\,,\tag{1.2b}\]
where \(\alpha=\tfrac{\gamma-1}{2}>0\), \(c=\rho^{\alpha}\) is the (isentropic) sound speed, and \(\sigma=\tfrac{1}{\alpha}c\) is the rescaled sound speed.
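To illustrate how (1.2) arises from (1.1) in the isentropic case \(p=\tfrac{1}{\gamma}\rho^{\gamma}\), note that a direct computation with \(\sigma=\tfrac{1}{\alpha}\rho^{\alpha}\) and \(2\alpha=\gamma-1\) gives
\[\tfrac{1}{\rho}\nabla p=\rho^{\gamma-2}\nabla\rho=\alpha\sigma\nabla\sigma\,,\qquad \partial_{t}\sigma+u\cdot\nabla\sigma=\rho^{\alpha-1}\big(\partial_{t}\rho+u\cdot\nabla\rho\big)=-\rho^{\alpha}\operatorname{div}u=-\alpha\sigma\operatorname{div}u\,,\]
so that (1.1a)-(1.1b), written in the non-conservative form \(\partial_{t}u+u\cdot\nabla u+\tfrac{1}{\rho}\nabla p=0\), reduce exactly to the symmetric system (1.2).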
#### 1.4.2. Acoustic form of the Euler equations

A key feature in the analysis of the Euler equations, and of general systems of nonlinear hyperbolic equations, is the concept of _characteristic surfaces_ and the geometry describing their evolution (see Courant & Hilbert [18]). Characteristic surfaces in spacetime can be viewed as propagating wave fronts, and for the Euler equations these represent either sound waves or vorticity waves. We shall focus our presentation on isentropic dynamics in space dimension \(d=2\).1

Footnote 1: For both simplicity and concision, we present our method of analysis of the maximal development of Cauchy data for the case \(d=2\), in which there are three independent variables \((x_{1},x_{2},t)\); each characteristic surface is a \(2\)-dimensional hypersurface of spacetime that possesses one unit normal vector \(n\) and one unit tangent vector \(\tau\) at each point. The modifications of our theory to the case \(d=3\) merely require the use of two linearly independent tangent vectors \(\tau_{1}\) and \(\tau_{2}\) to each fast acoustic characteristic hypersurface of spacetime, which turns out to be a fairly trivial generalization.

Let \((n(\cdot,t),\tau(\cdot,t))\) denote an orthonormal basis for \(\mathbb{T}^{2}\) for each time \(t\). The Euler equations (1.2) have three _distinct_ wave speeds
\[\lambda_{1}=u\cdot n-\alpha\sigma\,,\qquad\lambda_{2}=u\cdot n\,,\qquad\lambda_{3}=u\cdot n+\alpha\sigma\,.\tag{1.3}\]
Here \(\lambda_{3}\) is the fast acoustic wave speed, and it is along the transport velocity \(u+\alpha\sigma n\) that sound waves steepen to form shocks. The normal vector \(n\) will be made explicit below as the normal to the dynamically evolving and steepening _fast acoustic characteristic surfaces_. The wave speed \(\lambda_{1}\) denotes the _slow acoustic wave speed_, and plays a prominent role in defining the future temporal boundary for maximal development. When spacetime is foliated by the fast acoustic characteristic surfaces, and if \(n\) and \(\tau\) denote the normal and tangent vectors, respectively, to the intersection of these surfaces with each time-slice, then the propagation of acoustic waves (and shock formation) can be studied by rewriting (1.2) as
\[\partial_{t}u+(u+\alpha\sigma n)\cdot\nabla u+\alpha\sigma(\nabla\sigma-n\cdot\nabla u)=0\,,\tag{1.4a}\]
\[\partial_{t}\sigma+(u+\alpha\sigma n)\cdot\nabla\sigma+\alpha\sigma(\operatorname{div}u-n\cdot\nabla\sigma)=0\,.\tag{1.4b}\]
The system (1.4) can be thought of as the _acoustic Euler equations_, with fast acoustic transport velocity \(u+\alpha\sigma n\), along which the fast characteristic surfaces propagate.

#### 1.4.3. Classical Riemann variables

The description and dynamics of these characteristic surfaces greatly simplify when there is only one space dimension present. In the 1d case, the characteristic surfaces are curves in two-dimensional spacetime that can propagate in only two directions, namely the positive or the negative spatial direction. Along these characteristic directions, solutions of the 1d Euler equations possess certain invariant functions, called the Riemann invariants [47], which (in the case of constant entropy) are exactly transported along the fast and slow acoustic wave speeds.
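For example, in one space dimension (where \(n=1\)), adding and subtracting the two equations in (1.2) gives
\[\big(\partial_{t}+(u+\alpha\sigma)\partial_{x}\big)(u+\sigma)=0\,,\qquad\big(\partial_{t}+(u-\alpha\sigma)\partial_{x}\big)(u-\sigma)=0\,,\]
so that \(u+\sigma\) and \(u-\sigma\) are constant along the fast and slow acoustic characteristics with speeds \(\lambda_{3}=u+\alpha\sigma\) and \(\lambda_{1}=u-\alpha\sigma\), respectively.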
In multiple space dimensions, complete invariance is generally not preserved, but Riemann variables can nevertheless be defined so as to both capture the dominant sound wave motion and maintain a small deviation of the dominant variable when transported along the fast characteristic surfaces. The classical Riemann variables are defined as2

Footnote 2: For the case \(d=3\), \(w\) and \(z\) are defined in the identical fashion, while the tangential velocity is given by the vector \(a=(a_{1},a_{2}):=(u\cdot\tau_{1},u\cdot\tau_{2})\), where the triad \((n,\tau_{1},\tau_{2})\) forms a basis of each tangent space of the fast acoustic characteristic hypersurface.

\[w=u\cdot n+\sigma\,,\qquad z=u\cdot n-\sigma\,,\qquad a=u\cdot\tau\,.\tag{1.5}\]

#### 1.4.4. Generic and compressive initial data for shock formation

We consider an open set of data \((u_{0},\sigma_{0})\in H^{7}\) which satisfy certain generic and compressive properties that are made precise in Section 4.2 below. We define the dominant Riemann variable at the initial time \(t=\mathsf{t_{in}}\) by \(w_{0}(x):=w(x,\mathsf{t_{in}})=u^{1}(x,\mathsf{t_{in}})+\sigma(x,\mathsf{t_{in}})\), where \(n(x,\mathsf{t_{in}})=e_{1}\). We set the maximal negative slope of \(w_{0}\) to occur in the \(x_{1}\) direction. We suppose that, for \(0<\varepsilon\ll 1\), the derivative \(\partial_{1}w_{0}\) takes its minimum value at the origin and is given by \(\partial_{1}w_{0}(0,0)=-\frac{1}{\varepsilon}\). We further suppose that the initial conditions \(z_{0}(x)=z(x,\mathsf{t_{in}})\) and \(a_{0}(x)=a(x,\mathsf{t_{in}})\) are small, and have \(\mathcal{O}(1)\) derivatives. Such data is called non-degenerate or generic if \(\nabla^{2}\partial_{1}w_{0}(0,0)\) is positive definite.

**Remark 1.1** (Compressive data for which \(\partial_{n}w\) blows up).: _For generic and compressive initial data (described above) which yield sound waves that steepen in the primary direction of propagation \(x_{1}\), and which evolve with relatively small changes in the transverse coordinate \(x_{2}\), \(w\) is the dominant Riemann variable and \(z\) is the subdominant Riemann variable. During the shock formation process, it is the normal derivative \(\partial_{n}w\) that blows up, while \(\partial_{n}z\), \(\partial_{n}a\), as well as \(\partial_{\tau}w\), \(\partial_{\tau}z\), and \(\partial_{\tau}a\), all remain bounded._
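For the sake of illustration only (this is a local model near the origin, suitably cut off so as to be periodic, and not the general open set of data constructed in Section 4.2), one may keep in mind a datum of the form
\[w_{0}(x)=\kappa_{0}-\tfrac{1}{\varepsilon}\int_{0}^{x_{1}}\Big(1+\tfrac{s^{2}}{\varepsilon^{2}}+x_{2}^{2}\Big)^{-1}\,\mathrm{d}s\,,\qquad z_{0}=a_{0}=0\,.\]
Then \(\partial_{1}w_{0}(0,0)=-\tfrac{1}{\varepsilon}\) is the strict minimum of \(\partial_{1}w_{0}\), and \(\nabla^{2}\partial_{1}w_{0}(0,0)=\operatorname{diag}\big(\tfrac{2}{\varepsilon^{3}},\tfrac{2}{\varepsilon}\big)\) is positive definite, so such a datum is both compressive and generic in the sense just described.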
### The geometry and regularity of the fast acoustic characteristic surfaces

The explicit presence of the unit normal vector \(n\) (to the steepening fast characteristic surfaces) in the system of equations (1.4) shows the geometric nature of these equations, when written in the form suitable for shock formation. Together with \(u\) and \(\sigma\), the normal vector \(n\) is one of the fundamental _unknowns_ in the dynamics of shock formation. The physical unknowns \(u\) and \(\sigma\) are directly coupled to the evolution of the geometry of the fast acoustic characteristic surfaces, and as such, the dynamics of shock formation can be thought of as a highly nonlinear example of a _free boundary problem in fluid dynamics_.3 Traditional free boundary problems have the location (or shape) of the fluid boundary as one of the basic unknowns, and the dynamics of this free boundary must be coupled to the evolution of the physical flow fields. For both incompressible [19] and compressible [20] free boundary problems, the dynamics of the geometry are governed by the fluid velocity \(u\), corresponding to the wave speed \(\lambda_{2}\) in (1.3), while for the shock formation problem it is the location and shape of the sound wave which is the geometric unknown, and this wave pattern is carried by the fast wave speed \(\lambda_{3}\). This creates a new level of difficulty for the analysis.

Footnote 3: We shall review the classical approach to the study of characteristic surfaces in Section 1.10.1. Our analysis of characteristic surfaces and their underlying geometry is quite different from the traditional viewpoint of solving the Eikonal equation to determine the bicharacteristic cone. Instead, we place the geometry of characteristic surfaces in the context of free boundary problems in fluid dynamics.

To place this difficulty in perspective, we shall consider the Lagrangian parameterization of the geometry. For traditional free boundary problems in fluid dynamics, one considers the Lagrangian flow \(\eta(x,t)\) associated to the fluid velocity; namely,
\[\partial_{t}\eta(x,t)=u(\eta(x,t),t)\ \ \text{for}\ \ t>\mathsf{t_{in}}\,,\qquad\eta(x,\mathsf{t_{in}})=x\,,\tag{1.6}\]
where \(\mathsf{t_{in}}\) denotes the initial time for the flow. A key observation is that (1.6) defines an ordinary differential equation, and Picard iteration shows that for (at least Lipschitz) velocity fields \(u\), the flow map \(\eta\) inherits (at least) the regularity of the velocity field, and can often be shown to gain regularity, so that the associated normal vector \(n\) possesses the same regularity as the velocity field. This is indeed the case for the classical incompressible and compressible Euler free boundary problems. We now contrast this scenario with the geometric dynamics of shock formation. Observe from (1.4) that the transport velocity associated with the fast wave speed \(\lambda_{3}\) is given by
\[\mathcal{V}_{3}=u+\alpha\sigma n=\lambda_{3}n+(u\cdot\tau)\tau\,.\tag{1.7}\]
We can foliate spacetime with acoustic characteristic surfaces associated to the "fast" wave speed \(\lambda_{3}\). One way of doing so is by studying the Lagrangian flow map of the transport velocity \(\mathcal{V}_{3}\):
\[\partial_{t}\eta(x_{1},x_{2},t)=\mathcal{V}_{3}(\eta(x_{1},x_{2},t),t)\,,\qquad\eta(x,\mathsf{t_{in}})=x\,.\tag{1.8}\]
In terms of the standard Cartesian basis, we have that4
\[\eta(x_{1},x_{2},t)=(\eta^{1}(x_{1},x_{2},t),\eta^{2}(x_{1},x_{2},t))\,,\]
and that
\[\eta^{1}(x_{1},x_{2},\mathsf{t_{in}})=x_{1}\ \ \text{and}\ \ \eta^{2}(x_{1},x_{2},\mathsf{t_{in}})=x_{2}\,.\]

Footnote 4: We identify vector fields and \(1\)-forms in Euclidean space; in particular, raised indices for components \(F^{i}\) of a vector field are obtained as \(F^{i}=\delta^{ij}\widetilde{F}_{j}\), where \(\widetilde{F}_{j}\) denotes the components of the corresponding \(1\)-form, and \(\delta^{ij}\) denotes the Kronecker-\(\delta\).

Using the flow map \(\eta\), we can give a geometric description of the fast acoustic characteristic surfaces. At the initial time \(t=\mathsf{t_{in}}\), we foliate \(\mathbb{T}^{2}\) by the lines \(\gamma_{x_{1}}(\mathsf{t_{in}})=\{x_{1}\}\times\mathbb{T}\), parallel to \(e_{2}=(0,1)\).
For each \(x_{1}\in\mathbb{T}\) and \(t\in[\mathsf{t_{in}},T]\), we define the characteristic curve (at a fixed time-slice) by
\[\gamma_{x_{1}}(t)=\eta(\gamma_{x_{1}}(\mathsf{t_{in}}),t)\,,\tag{1.9}\]
and the characteristic surfaces up to time \(T\geq\mathsf{t_{in}}\) (which are parameterized by \(x_{1}\)) by
\[\Gamma_{x_{1}}(T)=\bigcup\nolimits_{t\in[\mathsf{t_{in}},T]}\gamma_{x_{1}}(t)\,.\tag{1.10}\]
Figure 2 displays a few such characteristic surfaces \(\Gamma_{x_{1}}\) for five different values of \(x_{1}\in\mathbb{T}\). The unit tangent vector to \(\gamma_{x_{1}}(t)\) is given by
\[\tau(\eta(x,t),t):=|\partial_{2}\eta|^{-1}\partial_{2}\eta=|\partial_{2}\eta|^{-1}\big(\partial_{2}\eta^{1},\partial_{2}\eta^{2}\big)\,,\tag{1.11a}\]
and, by a rotation, the unit normal vector to \(\gamma_{x_{1}}(t)\) is given by
\[n(\eta(x,t),t):=|\partial_{2}\eta|^{-1}\partial_{2}\eta^{\perp}=|\partial_{2}\eta|^{-1}\big(\partial_{2}\eta^{2},-\partial_{2}\eta^{1}\big)\,.\tag{1.11b}\]
Substituting this identity into the flow equation (1.8) shows that \(\eta(x,t)\) is a solution to
\[\partial_{t}\eta(x_{1},x_{2},t)=u(\eta(x_{1},x_{2},t),t)+\alpha\sigma(\eta(x_{1},x_{2},t),t)\tfrac{\partial_{2}\eta^{\perp}(x,t)}{|\partial_{2}\eta(x,t)|}\,,\qquad\eta(x,\mathsf{t_{in}})=x\,.\tag{1.12}\]

Figure 2. For \(T>\mathsf{t_{in}}\) which is strictly less than the very first blowup time, we display the characteristic surfaces \(\Gamma_{x_{1}}(T)\) defined in (1.10), emanating from five different values of \(x_{1}\in\mathbb{T}\). At \(t=\mathsf{t_{in}}\), the curves \(\{\gamma_{x_{1}}(\mathsf{t_{in}})\}_{x_{1}\in\mathbb{T}}\) are lines which foliate \(\mathbb{T}^{2}\). The distance between the characteristic surfaces \(\Gamma_{x_{1}}(T)\) decreases as \(T\) increases, leading to shock formation when this distance vanishes.

Observe that (1.12) is a PDE for \(\eta(x,t)\), while the traditional Lagrangian flow map (1.6) solves an ODE. As a consequence of the structure of the PDE (1.12), it is not _a priori_ clear if \(\eta\) maintains the Sobolev regularity of the velocity \(u\) and sound speed \(\sigma\). What is clear, however, is that the normal vector \(n\) loses one derivative of smoothness relative to the physical variables \(u\) and \(\sigma\). This mismatch in regularity between \(n\) and \((u,\sigma)\) necessitates studying a differentiated form of the Euler equations, in which the fundamental unknowns have the same regularity as the normal vector \(n\). Rather than analyzing the system (1.4), we return to the system (1.2). We first compute the evolution of the partial (space) derivatives of \(u\) and \(\sigma\): we differentiate (1.2) with \(\partial_{k}\), and then re-express the resulting equations using the fast acoustic transport velocity \(\mathcal{V}_{3}\), by splitting \(u^{j}\partial_{j}=(u\cdot n+\alpha\sigma)n^{j}\partial_{j}+(u\cdot\tau)\tau^{j}\partial_{j}-\alpha\sigma\partial_{n}\). We find that
\[\partial_{t}u^{i},_{k}+(u\cdot n+\alpha\sigma)u^{i},_{kj}n^{j}+(u\cdot\tau)u^{i},_{kj}\tau^{j}+\alpha\sigma(\sigma,_{ki}-\partial_{n}u^{i},_{k})+\alpha\sigma,_{k}\sigma,_{i}+u^{j},_{k}u^{i},_{j}=0\,,\tag{1.13a}\]
\[\partial_{t}\sigma,_{k}+(u\cdot n+\alpha\sigma)\sigma,_{kj}n^{j}+(u\cdot\tau)\sigma,_{kj}\tau^{j}+\alpha\sigma(u^{i},_{ki}-\partial_{n}\sigma,_{k})+\alpha\sigma,_{k}u^{i},_{i}+u^{j},_{k}\sigma,_{j}=0\,.\tag{1.13b}\]
The system (1.13) constitutes the differentiated (fast) acoustic Euler equations. It is imperative to study this differentiated form of the Euler equations in order to avoid derivative loss in the geometry.
We are using the following derivative notation: for a differentiable function \(f\), we write
\[f,_{k}:=\partial_{k}f\ \ \text{for}\ \ k\in\{1,2\}\,.\]
We are also employing the Einstein summation convention, in which repeated indices are summed from \(1\) to \(2\); e.g.,
\[u^{j},_{k}u^{i},_{j}:=\sum\nolimits_{j=1}^{2}\partial_{k}u^{j}\partial_{j}u^{i}\,.\]
Therefore, equation (1.13a) is a matrix equation with indices \((i,k)\), where \(i,k\in\{1,2\}\), and equation (1.13b) is a vector equation with index \(k\), where \(k\in\{1,2\}\).

### An Arbitrary Lagrangian Eulerian (ALE) parameterization of the fast characteristic surfaces

The differentiated equation set (1.13) will indeed be the foundation for our analysis, but we will not use the Lagrangian flow \(\eta\) of \(\mathcal{V}_{3}\) to parameterize the fast acoustic characteristic surfaces. Instead, we shall use a novel parameterization based on the Arbitrary Lagrangian Eulerian description of fluid flow, which we shall detail in Section 2.1. The use of the natural map \(\eta\) provides control of the second fundamental form along fast characteristic surfaces, but due to a mild degeneracy created by the tangential re-parameterization symmetry, control of the first fundamental form necessitates a cumbersome analysis. We avoid this problem: a tangential re-parameterization of \(\eta\) is introduced in the form of the so-called Arbitrary-Lagrangian-Eulerian (ALE) coordinates. This tangential re-parameterization, via the ALE maps, provides a simple identity with which to control both the curvature and the metric tensors associated to the fast characteristic surfaces, and is reminiscent of DeTurck's simplification [22] of Hamilton's [26] local existence theorem for the Ricci flow, in which the infinite-dimensional kernel of the linearized Ricci flow operator (caused by the tangential re-parameterization symmetry) led to derivative loss and to Hamilton's application of Nash-Moser iteration.

Our ALE re-parameterization works in the following manner. Because each curve \(\gamma_{x_{1}}(t)\) is a graph over the set \(\{x_{2}\in\mathbb{T}\}\), we introduce the _height_ function \(h(x_{1},x_{2},t)\) such that \(\gamma_{x_{1}}(t)=(h(x_{1},x_{2},t),x_{2})\). The induced metric on \(\gamma_{x_{1}}(t)\) is given by \(g(x_{1},x_{2},t)=1+|h,_{2}(x_{1},x_{2},t)|^{2}\), and the unit tangent vectors \(\mathcal{T}\) and unit normal vectors \(\mathcal{N}\) to the curves \(\gamma_{x_{1}}(t)\) are
\[\mathcal{T}(x_{1},x_{2},t)=g^{-\frac{1}{2}}(h,_{2},1)\qquad\text{and}\qquad\mathcal{N}(x_{1},x_{2},t)=g^{-\frac{1}{2}}(1,-h,_{2})\,,\tag{1.14}\]
respectively. We define the ALE family of maps \(\psi\) by \(\psi(x_{1},x_{2},t)=h(x_{1},x_{2},t)e_{1}+x_{2}e_{2}\), with initial condition \(h(x_{1},x_{2},\mathsf{t_{in}})=x_{1}\). Note that \(\det(\nabla\psi)=h,_{1}\). In order to preserve the shape of the characteristic surfaces \(\Gamma_{x_{1}}\), the family of diffeomorphisms \(\psi(\cdot,t)\) must satisfy the constraint \(\partial_{t}\psi\cdot\mathcal{N}=(\mathcal{V}_{3}\circ\psi)\cdot\mathcal{N}\). As we shall explain in Section 2.1, the dynamics of the ALE family of diffeomorphisms \(\psi(\cdot,t)\) are governed by
\[\partial_{t}\psi=\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)\,\mathcal{N}+\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)\,h,_{2}\,\mathcal{T}\,.\]
With this definition, the fast acoustic characteristics (see Figure 2) are parameterized by \((x_{2},t)\mapsto\psi(x_{1},x_{2},t)\).
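To see why this is the correct evolution, note that \(\partial_{t}\psi=\partial_{t}h\,e_{1}\) and, from (1.14), \(e_{1}=g^{-\frac{1}{2}}(\mathcal{N}+h,_{2}\,\mathcal{T})\). The constraint \(\partial_{t}\psi\cdot\mathcal{N}=(\mathcal{V}_{3}\circ\psi)\cdot\mathcal{N}\) therefore reduces to the scalar equation
\[\partial_{t}h=g^{\frac{1}{2}}\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)\]
for the height function, and substituting this back into \(\partial_{t}\psi=\partial_{t}h\,e_{1}\) yields precisely the displayed dynamics for \(\psi\).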
Shock formation is measured by the _metric-rescaled Jacobian determinant_ of the deformation tensor \(\nabla\psi\). In particular, we define
\[J_{g}(x,t)=g(x,t)^{-\frac{1}{2}}\det\nabla\psi(x,t)=g(x,t)^{-\frac{1}{2}}\,h,_{1}(x,t)\,.\tag{1.15}\]
As can be seen in Figure 1, the fast acoustic characteristic surfaces impinge on one another due to compression, and gradient blow-up occurs exactly when
\[J_{g}(x,t)=0\,.\]
The pre-shock, denoted by the black curve in Figure 1, is the spacetime co-dimension-\(2\) surface of _first singularities_, and is defined to be the set of points \((x,t)\) at which the following two conditions simultaneously hold:
\[J_{g}(x,t)=0\qquad\text{and}\qquad\partial_{1}J_{g}(x,t)=0\,.\]
This set of pre-shocks indicates the location and time of the first gradient blow-up in the primary direction of wave steepening (the \(x_{1}\)-direction), parameterized by the transverse coordinates \((x_{2},...,x_{d})\). The condition \(\partial_{1}J_{g}(x,t)=0\), together with the _non-degeneracy condition_
\[\operatorname{Hess}(J_{g})>0\,,\]
indicates that points in the pre-shock set are _local minima_ of \(J_{g}(x,t)\). The red surface in Figure 1 corresponds to the level set \(\{J_{g}(x,t)=0\}\)_downstream_ of the pre-shock set. The red surface indicates the location in spacetime of subsequent gradient catastrophes, and again designates the location of characteristic-surface impingement caused by the steepening sound waves during compression. This red surface \(\{J_{g}(x,t)=0\}\), together with the pre-shock set \(\{J_{g}(x,t)=0\}\cap\{\partial_{1}J_{g}(x,t)=0\}\), forms a portion of the future temporal boundary of the maximal development of the Cauchy data. The green surface shown in Figure 1, _upstream_ of the pre-shock, displays the distinguished slow acoustic characteristic surface which passes through the pre-shock set. This green surface forms the remaining portion of the future temporal boundary of the maximal development of the Cauchy data. In effect, this distinguished slow acoustic characteristic surface (passing through the pre-shock) plays the role of an event horizon. These notions are discussed in further detail in Appendix A, in the simplified setting of the Euler equations in one space dimension.

### A new set of Riemann-type variables that prevent derivative loss

We can now map the physical variables \((u,\sigma)\), as well as the classical Riemann variables \((w,z,a)\) defined in (1.5), into our ALE coordinate system. We define
\[U^{i}=u^{i}\circ\psi\,,\qquad\Sigma=\sigma\circ\psi\,,\]
\[W=U\!\cdot\!\mathcal{N}+\Sigma\,,\qquad Z=U\!\cdot\!\mathcal{N}-\Sigma\,,\qquad A=U\!\cdot\!\mathcal{T}\,.\]
Since it is the differentiated form of the Euler equations (1.13) that will be our starting point, the non-differentiated variables \((U,\Sigma)\) and \((W,Z,A)\) will play only a secondary role in our analysis. Our main idea is the introduction of specially constructed _differentiated Riemann-type variables_ that remove any derivative loss from the resulting analysis. First, for \(i,k=1,2\), we define
\[\dot{\mathsf{U}}^{i}_{k}=u^{i},_{k}\circ\psi\,,\qquad\dot{\Sigma}_{k}=\sigma,_{k}\circ\psi\,.\]
The variable \(\dot{\mathsf{U}}^{i}_{k}(x,t)\) denotes the \((i,k)\)-component of the matrix \(\nabla u(\psi(x,t),t)\), and the variable \(\dot{\Sigma}_{k}(x,t)\) denotes the \(k\)-component of the vector \(\nabla\sigma(\psi(x,t),t)\).
Second, we introduce the differentiated Riemann variables as the vector fields (with components)
\[\dot{\mathsf{W}}_{k}=\mathcal{N}^{i}\dot{\mathsf{U}}^{i}_{k}+\dot{\Sigma}_{k}\,,\qquad\dot{\mathsf{Z}}_{k}=\mathcal{N}^{i}\dot{\mathsf{U}}^{i}_{k}-\dot{\Sigma}_{k}\,,\qquad\dot{\mathsf{A}}_{k}=\mathcal{T}^{i}\dot{\mathsf{U}}^{i}_{k}\,,\qquad k=1,2\,.\tag{1.16}\]
Again, \(\dot{\mathsf{W}}_{k}\), \(\dot{\mathsf{Z}}_{k}\), \(\dot{\mathsf{A}}_{k}\) denote the \(k\)-components of the vectors \(\dot{\mathsf{W}}:=\mathcal{N}^{T}\dot{\mathsf{U}}+\dot{\Sigma}\), \(\dot{\mathsf{Z}}:=\mathcal{N}^{T}\dot{\mathsf{U}}-\dot{\Sigma}\), and \(\dot{\mathsf{A}}:=\mathcal{T}^{T}\dot{\mathsf{U}}\), respectively. The vector field \(\dot{\mathsf{W}}\) is the _dominant_ differentiated Riemann variable, the vector field \(\dot{\mathsf{Z}}\) is the _subdominant_ differentiated Riemann variable, and the vector field \(\dot{\mathsf{A}}\) is the _tangential_ gradient of the fluid velocity vector. These three vector fields are then further projected onto their normal and tangential components. We define
\[\dot{\mathsf{W}}_{\mathcal{N}}=\dot{\mathsf{W}}\!\cdot\!\mathcal{N}\,,\qquad\dot{\mathsf{Z}}_{\mathcal{N}}=\dot{\mathsf{Z}}\!\cdot\!\mathcal{N}\,,\qquad\dot{\mathsf{A}}_{\mathcal{N}}=\dot{\mathsf{A}}\!\cdot\!\mathcal{N}\,,\qquad\dot{\mathsf{W}}_{\mathcal{T}}=\dot{\mathsf{W}}\!\cdot\!\mathcal{T}\,,\qquad\dot{\mathsf{Z}}_{\mathcal{T}}=\dot{\mathsf{Z}}\!\cdot\!\mathcal{T}\,,\qquad\dot{\mathsf{A}}_{\mathcal{T}}=\dot{\mathsf{A}}\!\cdot\!\mathcal{T}\,.\]
The generic and compressive initial data that we employ are designed to create steepening sound waves whose dominant direction of propagation is along the \(x_{1}\) coordinate. By design, it is the function \(\dot{\mathsf{W}}_{\mathcal{N}}\) that encodes the steepening of the sound wave, and which blows up when the slope of this steepening sound wave becomes infinite. Meanwhile, the other five functions \(\dot{\mathsf{Z}}_{\mathcal{N}},\dot{\mathsf{A}}_{\mathcal{N}},\dot{\mathsf{W}}_{\mathcal{T}},\dot{\mathsf{Z}}_{\mathcal{T}},\dot{\mathsf{A}}_{\mathcal{T}}\) remain uniformly bounded throughout the entire shock formation process. As noted in Remark 1.1, it is the Eulerian quantity \(\partial_{n}w\) that blows up during the shock formation process, and it is therefore tempting to believe that the normal component of the dominant differentiated Riemann variable, \(\dot{\mathsf{W}}_{\mathcal{N}}\), should be defined to be \(\partial_{n}w\circ\psi\). This is, in actuality, not the case. In fact, our new Riemann variables are chosen to satisfy the following important identities:
\[\dot{\mathsf{W}}_{\mathcal{N}}=(\partial_{n}w-a\,\partial_{n}n\cdot\tau)\circ\psi\,,\qquad\dot{\mathsf{Z}}_{\mathcal{N}}=(\partial_{n}z-a\,\partial_{n}n\cdot\tau)\circ\psi\,,\qquad\dot{\mathsf{A}}_{\mathcal{N}}=\big(\partial_{n}a-\tfrac{1}{2}(w+z)\,\partial_{n}\tau\cdot n\big)\circ\psi\,,\tag{1.17a}\]
\[\dot{\mathsf{W}}_{\mathcal{T}}=(\partial_{\tau}w-a\,\partial_{\tau}n\cdot\tau)\circ\psi\,,\qquad\dot{\mathsf{Z}}_{\mathcal{T}}=(\partial_{\tau}z-a\,\partial_{\tau}n\cdot\tau)\circ\psi\,,\qquad\dot{\mathsf{A}}_{\mathcal{T}}=\big(\partial_{\tau}a-\tfrac{1}{2}(w+z)\,\partial_{\tau}\tau\cdot n\big)\circ\psi\,.\tag{1.17b}\]
This _unique linear combination_ of differentiated classical Riemann variables together with the _curvature of the fast acoustic characteristic surfaces_ creates the _good variables_ that prevent derivative loss.
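To illustrate the first identity in (1.17a): since \(w=u\cdot n+\sigma\) and \(|n|=1\) forces \(n\cdot\partial_{n}n=0\), decomposing \(u=(u\cdot n)n+a\tau\) gives
\[\partial_{n}w=n^{i}n^{k}u^{i},_{k}+u\cdot\partial_{n}n+\partial_{n}\sigma=n^{i}n^{k}u^{i},_{k}+a\,\partial_{n}n\cdot\tau+\partial_{n}\sigma\,,\]
so that \((\partial_{n}w-a\,\partial_{n}n\cdot\tau)\circ\psi=\mathcal{N}^{i}\dot{\mathsf{U}}^{i}_{k}\mathcal{N}^{k}+\dot{\Sigma}_{k}\mathcal{N}^{k}=\dot{\mathsf{W}}\cdot\mathcal{N}=\dot{\mathsf{W}}_{\mathcal{N}}\). It is exactly the curvature correction \(a\,\partial_{n}n\cdot\tau\) that trades the differentiated classical Riemann variable for the good variable.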
Let us note the importance of using these differentiated Riemann variables for our analysis. In addition to preventing derivative loss in energy estimates, these differentiated Riemann variables are constructed so that, modulo very small errors,
\[J_{g}\dot{\mathsf{W}}_{\mathcal{N}}(x,t)\sim\partial_{1}w_{0}(x)\,,\tag{1.18}\]
where \(w_{0}(x)\) is the initial condition for the classical dominant Riemann variable defined in (3.9), and in particular,
\[w_{0}(x)=w(x,\mathsf{t_{in}})=u^{1}(x,\mathsf{t_{in}})+\sigma(x,\mathsf{t_{in}})\,.\]
The approximate identity (1.18) indicates that \(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}\) is almost _frozen into the flow of the fast acoustic characteristics_; i.e., it is almost exactly transported by the fast characteristics, and this fact is fundamental to all of the pointwise bounds that are used in our work. We shall describe these variables in greater detail in Section 3, and we shall explain how (1.18) is used in our energy method next.

### The leading order dynamics

Our objective is to establish uniform Sobolev-type bounds for the solution of the Euler equations in a spacetime that contains the maximal hyperbolic development of the Cauchy data. We work with the following collection of fundamental variables:
\[(J_{g}\dot{\mathsf{W}}_{\mathcal{N}},J_{g}\dot{\mathsf{Z}}_{\mathcal{N}},J_{g}\dot{\mathsf{A}}_{\mathcal{N}},\dot{\mathsf{W}}_{\mathcal{T}},\dot{\mathsf{Z}}_{\mathcal{T}},\dot{\mathsf{A}}_{\mathcal{T}})\,,\]
whose evolution is coupled to that of the basic geometric variables
\[(J_{g},h,_{2})\,,\]
as well as the undifferentiated sound speed \(\Sigma\). While the exact evolution equations are given in Section 3 below, it is instructive at this point to write down the approximate dynamical systems in which only the leading order terms are displayed. It is convenient to define the directional derivative operators
\[\partial_{\mathcal{T}}=g^{-\frac{1}{2}}\partial_{2}\,,\tag{1.19a}\]
\[\partial_{\mathcal{N}}=\partial_{1}-J_{g}g^{-\frac{1}{2}}h,_{2}\,\partial_{2}\,.\tag{1.19b}\]
We note that since \(h,_{2}\) is proven to maintain \(\mathcal{O}(\varepsilon)\) size for the duration of the maximal development of the Cauchy data, to leading order \(\partial_{\mathcal{N}}\sim\partial_{1}\). For the same reason, \(g\sim 1\), so that \(\partial_{\mathcal{T}}\sim\partial_{2}\). Thus, with no danger of arriving at an erroneous conclusion, the reader is safe to make these derivative replacements in the discussion that now follows.
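As a consistency check on (1.19), let \(f\) be an Eulerian function and set \(F=f\circ\psi\). Since \(\psi=(h,x_{2})\), we have \(\partial_{1}F=(f,_{1}\circ\psi)\,h,_{1}\) and \(\partial_{2}F=(f,_{1}\circ\psi)\,h,_{2}+f,_{2}\circ\psi\), and therefore, using \(g-|h,_{2}|^{2}=1\), \(J_{g}=g^{-\frac{1}{2}}h,_{1}\), and the fact that \(n\circ\psi=\mathcal{N}\) and \(\tau\circ\psi=\mathcal{T}\), a direct computation gives
\[\partial_{\mathcal{T}}F=(\partial_{\tau}f)\circ\psi\,,\qquad\partial_{\mathcal{N}}F=J_{g}\,(\partial_{n}f)\circ\psi\,.\]
In particular, \(J_{g}^{-1}\partial_{\mathcal{N}}\) is the ALE incarnation of the Eulerian normal derivative \(\partial_{n}\), which explains the appearance of the operators \(J_{g}^{-1}\partial_{\mathcal{N}}\) in the equations below, and shows that \(\partial_{\mathcal{N}}\) degenerates as \(J_{g}\to 0\).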
To leading order, we have the following system of equations for the normal-component variables:
\[\tfrac{1}{2}\partial_{t}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})+\alpha\partial_{\mathcal{T}}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})-\alpha\dot{\mathsf{A}}_{\mathcal{N}}\partial_{\mathcal{T}}J_{g}-\mathsf{P}_{1}\,\partial_{\mathcal{T}}\mathcal{T}\cdot\mathcal{N}=\mathrm{l.o.t.}\,,\tag{1.20a}\]
\[\tfrac{1}{2}\partial_{t}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})-\alpha\partial_{\mathcal{T}}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})-2\alpha J_{g}^{-1}\partial_{\mathcal{N}}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})+\alpha\dot{\mathsf{A}}_{\mathcal{N}}\partial_{\mathcal{T}}J_{g}+\mathsf{P}_{1}\,\partial_{\mathcal{T}}\mathcal{T}\cdot\mathcal{N}-\mathsf{P}_{2}\,J_{g}^{-1}\partial_{\mathcal{N}}\mathcal{T}\cdot\mathcal{N}+2\alpha\dot{\mathsf{Z}}_{\mathcal{N}}J_{g}^{-1}\partial_{\mathcal{N}}J_{g}=\mathrm{l.o.t.}\,,\tag{1.20b}\]
\[\tfrac{1}{2}\partial_{t}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})+\alpha\partial_{\mathcal{T}}(J_{g}\dot{\Sigma}_{\mathcal{N}})-\alpha J_{g}^{-1}\partial_{\mathcal{N}}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})-\alpha\dot{\Sigma}_{\mathcal{N}}\partial_{\mathcal{T}}J_{g}+\alpha\,\partial_{\mathcal{T}}\mathcal{T}\cdot\mathcal{N}\,\dot{\Sigma}_{\mathcal{T}}+\mathsf{P}_{1}\,J_{g}^{-1}\partial_{\mathcal{N}}\mathcal{T}\cdot\mathcal{N}+\alpha J_{g}^{-1}\partial_{\mathcal{N}}J_{g}\,\dot{\mathsf{A}}_{\mathcal{N}}=\mathrm{l.o.t.}\,,\tag{1.20c}\]
where \(\mathsf{P}_{1}=\frac{\alpha}{2}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}+J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}-2J_{g}\dot{\mathsf{A}}_{\mathcal{T}})\) and \(\mathsf{P}_{2}=2\alpha(J_{g}\dot{\mathsf{A}}_{\mathcal{N}}+J_{g}\dot{\mathsf{Z}}_{\mathcal{T}})\); here \(\dot{\Sigma}_{\mathcal{N}}:=\dot{\Sigma}\cdot\mathcal{N}=\tfrac{1}{2}(\dot{\mathsf{W}}_{\mathcal{N}}-\dot{\mathsf{Z}}_{\mathcal{N}})\) and \(\dot{\Sigma}_{\mathcal{T}}:=\dot{\Sigma}\cdot\mathcal{T}=\tfrac{1}{2}(\dot{\mathsf{W}}_{\mathcal{T}}-\dot{\mathsf{Z}}_{\mathcal{T}})\) (cf. (1.16)), and \(\mathrm{l.o.t.}\) denotes a polynomial in the fundamental variables. We additionally have the following system of evolution equations for the tangential-component variables:
\[\tfrac{1}{2}\partial_{t}\dot{\mathsf{W}}_{\mathcal{T}}+\alpha\partial_{\mathcal{T}}\dot{\mathsf{A}}_{\mathcal{T}}-\mathsf{P}_{3}\,\partial_{\mathcal{T}}\mathcal{T}\cdot\mathcal{N}=\mathrm{l.o.t.}\,,\tag{1.21a}\]
\[\tfrac{1}{2}\partial_{t}\dot{\mathsf{Z}}_{\mathcal{T}}-\alpha\partial_{\mathcal{T}}\dot{\mathsf{A}}_{\mathcal{T}}-2\alpha J_{g}^{-1}\partial_{\mathcal{N}}\dot{\mathsf{Z}}_{\mathcal{T}}+\mathsf{P}_{3}\,\partial_{\mathcal{T}}\mathcal{T}\cdot\mathcal{N}-\mathsf{P}_{4}\,J_{g}^{-1}\partial_{\mathcal{N}}\mathcal{T}\cdot\mathcal{N}=\mathrm{l.o.t.}\,,\tag{1.21b}\]
\[\tfrac{1}{2}\partial_{t}\dot{\mathsf{A}}_{\mathcal{T}}+\alpha\partial_{\mathcal{T}}\dot{\Sigma}_{\mathcal{T}}-\alpha J_{g}^{-1}\partial_{\mathcal{N}}\dot{\mathsf{A}}_{\mathcal{T}}-\alpha\partial_{\mathcal{T}}\dot{\Sigma}_{\mathcal{N}}\,\mathcal{T}\cdot\mathcal{N}+\mathsf{P}_{3}\,J_{g}^{-1}\partial_{\mathcal{N}}\mathcal{T}\cdot\mathcal{N}=\mathrm{l.o.t.}\,,\tag{1.21c}\]
where \(\mathsf{P}_{3}=\alpha(\Omega+\dot{\mathsf{W}}_{\mathcal{T}}+\dot{\mathsf{Z}}_{\mathcal{T}})\) and \(\mathsf{P}_{4}=2\alpha(\dot{\mathsf{A}}_{\mathcal{T}}-\dot{\mathsf{Z}}_{\mathcal{N}})\).
We couple the above systems of equations with the leading-order evolution equations for \(J_{g}\), \(h,_{2}\), and \(\Sigma\):
\[\partial_{t}J_{g}=\tfrac{1+\alpha}{2}J_{g}\dot{\mathsf{W}}_{\mathcal{N}}+\mathrm{l.o.t.}\,,\tag{1.22a}\]
\[\partial_{t}h,_{2}=\tfrac{1+\alpha}{2}\dot{\mathsf{W}}_{\mathcal{T}}+\mathrm{l.o.t.}\,,\tag{1.22b}\]
\[\partial_{t}\Sigma=-\alpha\Sigma\dot{\mathsf{Z}}_{\mathcal{N}}+\mathrm{l.o.t.}\,,\tag{1.22c}\]
together with the identity
\[\Sigma,_{1}=\tfrac{1}{2}J_{g}\dot{\mathsf{W}}_{\mathcal{N}}+\mathrm{l.o.t.}\tag{1.22d}\]
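These leading-order equations already encode the heuristic picture of the pre-shock. Since \(h(x,\mathsf{t_{in}})=x_{1}\) gives \(J_{g}(x,\mathsf{t_{in}})=1\), inserting the approximate identity (1.18) into (1.22a), dropping the lower-order terms, and integrating in time yields
\[J_{g}(x,t)\approx 1+\tfrac{1+\alpha}{2}\,(t-\mathsf{t_{in}})\,\partial_{1}w_{0}(x)\,,\]
so that \(J_{g}\) first vanishes where \(\partial_{1}w_{0}\) attains its minimum value \(-\tfrac{1}{\varepsilon}\), at time \(t\approx\mathsf{t_{in}}+\tfrac{2\varepsilon}{1+\alpha}\). The non-degeneracy condition on \(\nabla^{2}\partial_{1}w_{0}\) is precisely what renders this first vanishing point a local minimum of \(J_{g}\), i.e., a pre-shock in the sense described above.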
### An overview of our energy method

Energy estimates are performed simultaneously for the normal-component variables \((J_{g}\dot{\mathsf{W}}_{\mathcal{N}},J_{g}\dot{\mathsf{Z}}_{\mathcal{N}},J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\) in (1.20) and for the tangential-component variables \((\dot{\mathsf{W}}_{\mathcal{T}},\dot{\mathsf{Z}}_{\mathcal{T}},\dot{\mathsf{A}}_{\mathcal{T}})\) in (1.21), and then separately for \(J_{g}\) and \(h,_{2}\) in (1.22). The actual energy estimates will be done in three different coordinate systems, in which the time coordinate \(t\) is transformed in three different ways; but in order to describe the main difficulties that must be overcome, for pedagogical reasons our discussion will be in terms of the independent variables \((x_{1},x_{2},t)\), and we shall use the derivative notation \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\). Our energy method will be performed at the level of the sixth-order differentiated system. We begin with the normal-component system (1.20). The rough idea is that we first multiply the equations (1.20b) and (1.20c) by \(J_{g}\) (thereby eliminating the presence of the inverse power \(J_{g}^{-1}\)), we then let \(\mathsf{D}^{6}\) act on each equation in (1.20), and we then test this differentiated equation set with \(\Sigma^{-2\beta+1}\big(J_{g}\varphi^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}),\varphi^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}),2\varphi^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big)\), where \(\varphi\) is a weight function that degenerates to zero at the _future temporal boundary_. The presence of the additional \(J_{g}\) weight for \(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})\) is needed to match the natural weight that will appear for \(\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\) and \(\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\) (since we multiplied those equations by \(J_{g}\) prior to differentiation). The weight function \(\varphi\) will be chosen in three different ways, corresponding to the three different spacetime regions that we shall employ for the analysis. The values of \(\beta\) and \(r\) will be chosen below. At this stage, we simply wish to explain why a weight function is necessary for the energy estimates, and why its choice is determined by the region of spacetime that is being analyzed. We shall consider the energy estimates term-by-term, and we shall begin with the first term in (1.20), containing the time-derivative \(\partial_{t}\). For demonstration purposes only, let us drastically simplify the domain of integration so that we can explain a few of the fundamental ideas (in actuality, our spacetimes require certain changes of coordinates and are somewhat more complicated). Let us suppose that our energy method employs the spacetime \([\mathsf{t_{in}},\mathsf{t_{top}}]\times\mathbb{T}^{2}\), with spacetime integral \(\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\) for \(\mathsf{t_{in}}\leq t\leq\mathsf{t_{top}}\).

#### 1.9.1. Energy estimates for the first term in (1.20)

The first term in all three equations in (1.20) yields both the standard _energy norm_ and a compression-induced _damping norm_. The standard energy norm is an \(L^{\infty}\)-in-time and \(L^{2}\)-in-space norm, while the so-called damping norm is an \(L^{2}\)-in-time and \(L^{2}\)-in-space norm, but with a smaller power of the weight function \(\varphi\) (and hence better regularity). In order to explain this, let us set \(\varphi=J_{g}\), and let us suppose (again for demonstration purposes only) that \(J_{g}(x,\mathsf{t_{top}})=0\). With this assumption, the weight function \(J_{g}\) degenerates to zero at \(t=\mathsf{t_{top}}\) and indicates a gradient catastrophe. While this spacetime is artificially constructed for demonstration purposes only, it allows us to use the function \(J_{g}\) as our example-weight; \(J_{g}\) possesses the compression-property that all of the actual weight functions \(\varphi\) must possess. To explain this, we will make crucial use of the following three properties:

(a) The initial condition \(w_{0}\) for the dominant classical Riemann variable \(w\) satisfies \(-\tfrac{1}{\varepsilon}\leq\partial_{1}w_{0}(x)\leq-\tfrac{1}{2\varepsilon}\) in a local open neighborhood of \(x_{1}=0\).

(b) Observe that the evolution equation for \(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}\) in (1.20a) does not have any normal-derivative terms (whereas the \(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}\) equation and the \(J_{g}\dot{\mathsf{A}}_{\mathcal{N}}\) equation both do). By applying the fundamental theorem of calculus (in time) to equation (1.20a), and using that the time-integrals of all of the tangential-derivative terms are small relative to \(\partial_{1}w_{0}\), we prove that
\[J_{g}\dot{\mathsf{W}}_{\mathcal{N}}(x,t)=\partial_{1}w_{0}(x)+\mathcal{O}(\varepsilon)\ \text{ as }\ \varepsilon\to 0\,.\]
(c) Using (a), (b), and the approximate identity (1.22a), we see that
\[-J_{g}\dot{\mathsf{W}}_{\mathcal{N}}\geq\tfrac{1}{4\varepsilon}\ \text{ in a local open neighborhood of }x_{1}=0\,,\tag{1.23a}\]
\[-\partial_{t}J_{g}(x,t)\geq\tfrac{1}{4\varepsilon}\cdot\tfrac{1+\alpha}{2}\ \text{ in a local open neighborhood of }x_{1}=0\,.\tag{1.23b}\]

The inequality (1.23b) indicates that the fluid is in _compression_, and it is responsible for the emergence of the damping norm, via the following argument (with the weight \(\varphi\) set to be \(J_{g}\)) for our energy method applied to the first term in all three equations of (1.20):
\[\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r+1}\,\partial_{t}\big(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}),\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}),\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big)\cdot\big(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}),\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}),2\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\tag{1.24}\]
\[=\tfrac{1}{2}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r+1}\big(\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big|^{2}+2\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\Big|_{\mathsf{t_{in}}}^{t}\]
\[\quad+(r+\tfrac{1}{2})\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r}\big(-\partial_{t}J_{g}\big)\big(\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big|^{2}+2\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\]
\[\quad+\beta\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta-1}\partial_{t}\Sigma\,J_{g}^{2r+1}\big(\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big|^{2}+2\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\tag{1.25}\]
Now, the ALE sound speed \(\Sigma\) satisfies \(\frac{\kappa_{0}}{4}\leq\Sigma\leq\kappa_{0}\) for a fixed constant \(\kappa_{0}\geq 1\), and hence the first integral on the right side of the equality, evaluated at time \(t\), is sign-definite and positive, and therefore produces the \(L^{\infty}\)-in-time and \(L^{2}\)-in-space _energy norm_. The second integral on the right side of the equality produces the damping norm; in particular, to avoid obfuscation, we shall assume that the compression inequality (1.23b) holds globally in our spacetime set (this is, of course, not the case, but it is a technical, rather than fundamental, matter to contend with). In this case, the second integral is again sign-definite and positive, and is bounded from below by
\[\tfrac{(2r+1)(1+\alpha)}{16\varepsilon}\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r}\big(\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big|^{2}+2\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,,\tag{1.26}\]
and this is the \(L^{2}\)-in-time and \(L^{2}\)-in-space _damping norm_ that arises from compression. Notice that the pre-factor in front of this integral is proportional to \(\frac{1}{\varepsilon}\), and that \(\varepsilon>0\) is a parameter which is taken to be very small. This means that other integrals arising from our energy method that have the same type of integrand, _but whose pre-factor is \(\mathcal{O}(1)\) as \(\varepsilon\to 0\)_, can be viewed as small errors that our damping integral can easily absorb.
In particular, the third integral on the right side of (1.25) is an example of such a small error integral. From (1.22c), we have that \(\partial_{t}\Sigma\) is approximately equal to \(-\alpha\dot{\mathsf{Z}}_{\mathcal{N}}\Sigma\) and, as we will show in Section 9, \(\big|\alpha\dot{\mathsf{Z}}_{\mathcal{N}}\big|\leq C_{\dot{\mathsf{Z}}_{\mathcal{N}}}=\mathcal{O}(1)\) and \(J_{g}\leq\frac{6}{5}\); therefore, one simply requires that \(\tfrac{(2r+1)(1+\alpha)}{16}\) be larger than \(\varepsilon\alpha\beta\tfrac{12C_{\dot{\mathsf{Z}}_{\mathcal{N}}}}{5}\), and by choosing \(\varepsilon<\tfrac{5(2r+1)(1+\alpha)}{192\alpha\beta C_{\dot{\mathsf{Z}}_{\mathcal{N}}}}\) this is indeed achieved. To summarize, the first term in all three equations in (1.20) has produced two types of regularity, via an _energy_ norm and a _damping_ norm. Reverting to the notation \(\varphi\) for the weight function, we record these norms here as follows:
\[\mathcal{E}_{6,\mathcal{N}}^{2}(t)=\big\|\varphi^{\frac{3}{2}}J_{g}^{\frac{1}{2}}\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}},J_{g}\dot{\mathsf{Z}}_{\mathcal{N}},J_{g}\dot{\mathsf{A}}_{\mathcal{N}})(\cdot,t)\big\|_{L^{2}_{x}}^{2}\,,\tag{1.27a}\]
\[\mathcal{D}_{6,\mathcal{N}}^{2}(t)=\int_{\mathsf{t_{in}}}^{t}\big\|\varphi^{\frac{1}{2}}J_{g}^{\frac{1}{2}}\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}},J_{g}\dot{\mathsf{Z}}_{\mathcal{N}},J_{g}\dot{\mathsf{A}}_{\mathcal{N}})(\cdot,t^{\prime})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}t^{\prime}\,.\tag{1.27b}\]
The purpose of our energy method is to obtain uniform bounds on the norms \(\mathcal{E}_{6,\mathcal{N}}^{2}(t)\) and \(\mathcal{D}_{6,\mathcal{N}}^{2}(t)\) for \(\mathsf{t_{in}}\leq t\leq\mathsf{t_{top}}\). The energy estimates for the tangential-component equations (1.21) produce analogous energy and damping norms \(\mathcal{E}_{6,\mathcal{T}}^{2}(t)\) and \(\mathcal{D}_{6,\mathcal{T}}^{2}(t)\), which we will define below, and we also obtain uniform bounds for these tangential norms for \(\mathsf{t_{in}}\leq t\leq\mathsf{t_{top}}\).

#### 1.9.2. Energy estimates for the second term in (1.20)

Recall that, for the purposes of this pedagogical overview, we are setting the weight function \(\varphi\) equal to \(J_{g}\): we multiply (1.20b) and (1.20c) by \(J_{g}\), we then let \(\mathsf{D}^{6}\) act on each equation in (1.20), and we test the resulting equations with \(\big(J_{g}J_{g}^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}),J_{g}^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}),2J_{g}^{2r}\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big)\). We now focus on the second term in (1.20). The second terms in all three equations must be grouped together to form an exact derivative, which can then be moved off of the highest-derivative term by use of integration-by-parts. Note that, for us to be able to group all three terms together to form an exact derivative, it is essential that the weights be identical in all three equations.
Our energy method yields the following combination of highest-order integrals:
\[\alpha\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}J_{g}^{2r+1}\Big(\partial_{\mathcal{T}}\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})-\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big)+2\partial_{\mathcal{T}}\mathsf{D}^{6}(J_{g}\dot{\Sigma}_{\mathcal{N}})\,\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\Big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\]
By definition, we have that \(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})-\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})=2\mathsf{D}^{6}(J_{g}\dot{\Sigma}_{\mathcal{N}})\), and hence the above integrand contains the exact derivative \(2\partial_{\mathcal{T}}\big(\mathsf{D}^{6}(J_{g}\dot{\Sigma}_{\mathcal{N}})\,\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big)\); upon integrating-by-parts with respect to \(\partial_{\mathcal{T}}\), we obtain at highest order the resulting integral
\[-(2r+1)\alpha\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\partial_{\mathcal{T}}J_{g}\,J_{g}^{2r}\big(\mathsf{D}^{6}(J_{g}\dot{\mathsf{W}}_{\mathcal{N}})-\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big)\,\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\]
We prove in Section 9 that \(\partial_{\mathcal{T}}J_{g}=\mathcal{O}(1)\), and hence the above integral is an _error integral_ which is easily controlled by our damping norm \(\mathcal{D}_{6,\mathcal{N}}(t)\) via an application of the Cauchy-Young inequality.

#### 1.9.3. Energy estimates for the third term in (1.20b) and (1.20c)

We now focus on the energy estimates for the third term in (1.20b) and (1.20c). As can be seen in (1.20a), this type of normal-derivative term does not exist in the equation for \(J_{g}\dot{\mathsf{W}}_{\mathcal{N}}\), but the presence of such terms in the evolution equations for \(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}}\) and \(J_{g}\dot{\mathsf{A}}_{\mathcal{N}}\) creates the fundamental difficulty in the analysis of the maximal development of Cauchy data; these are the terms which are primarily responsible for our use of three different spacetime regions with three different weight functions \(\varphi\). Our energy method applied to the third term in (1.20b) and (1.20c) produces the following integral:
\[-\alpha\int_{\mathsf{t_{in}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta+1}\varphi^{2r}\,\partial_{\mathcal{N}}\big(\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{Z}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\dot{\mathsf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\tag{1.28}\]
Observe that we have written the weight function in this integral as \(\varphi^{2r}\). While for the purposes of this simplified overview we have set \(\varphi\) equal to \(J_{g}\), for the integral arising from this third term we use the more general \(\varphi\) for the weight function, and we will explain the reason for this notational choice below.
Upon integrating-by-parts with respect to \(\partial_{\mathcal{N}}\) in (1.28), using (1.19b) and the fact that \(h_{,2}=\mathcal{O}(\varepsilon)\), we have that (1.28), to leading order, is given by
\[-\alpha(2\beta-1)\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}\varphi^{2r}\partial_{1}\Sigma\,\big(\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\]
\[\qquad\qquad+2\alpha r\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta+1}\varphi^{2r-1}\partial_{1}\varphi\,\big(\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\tag{1.29}\]
The first integral in (1.29) produces another (arbitrarily large) damping term, thanks to the use of the function \(\Sigma^{-2\beta+1}\) as part of the weighting. We use the approximate identity (1.22d), and we again assume (only for this demonstration) that the compression lower bound (1.23a) holds globally (rather than locally, as stated). In this case, we see that the first integral in (1.29) has the positive lower bound
\[\tfrac{\alpha(2\beta-1)}{8\varepsilon}\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta-1}\varphi^{2r}\,\big(\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big|^{2}+\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big|^{2}\big)\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,,\]
whose size can be adjusted with the choice of \(\beta>\tfrac{1}{2}\).

Figure 3. Four fundamental hypersurfaces are displayed. The "nearly vertical" surface \(x_{1}=x_{1}^{*}(x_{2},t):=\operatorname{argmin}_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)\) is shown in magenta. This surface passes through the set of pre-shocks, displayed as the black curve. In red, the downstream surface \(\{J_{g}(x,t)=0\}\) is displayed, and in green, the upstream slow acoustic characteristic surface that passes through the pre-shock set is shown. In orange, the cylindrical surface \(t=t^{*}(x_{2})\) is displayed, where \(t^{*}(x_{2})\) denotes the time coordinate along the set of pre-shocks.

#### 1.9.4. Analysis is split into three regions of spacetime

The second integral in (1.29) is a highly problematic _error integral_. The difficulty in bounding this error integral is twofold: first, the damping integral has \(\varphi^{2r}\) as its weight, while the problematic error integral has only \(\varphi^{2r-1}\), and hence this integral cannot be naively absorbed by our damping norm; second, the function \(\partial_{1}\varphi\) is not a signed function, and therein lies a fundamental difficulty for our analysis. Specifically, we consider the hypersurface, shown as the (almost vertical) magenta surface in Figure 3, defined by
\[x_{1}^{*}(x_{2},t):=\operatorname{argmin}_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)\,.\]
We can now define the _upstream_ and the _downstream_ regions of spacetime. The _upstream region_ consists of all triples \((x_{1},x_{2},t)\) such that \(x_{1}<x_{1}^{*}(x_{2},t)\) and time \(t\) is less than the times corresponding to the green slow characteristic surface in the right panel of Figure 4.
Similarly, the _downstream region_ consists of all triples \((x_{1},x_{2},t)\) such that \(x_{1}>x_{1}^{*}(x_{2},t)\) and time \(t\) is less than the times corresponding to the red surface in the center panel of Figure 4, where \(J_{g}\) vanishes. The magenta surface \(x_{1}=x_{1}^{*}(x_{2},t)\) passes through the pre-shock, denoted by the black curve in Figure 3 (see also Figure 4). Clearly, the sign of \(\partial_{1}\varphi\) is of basic importance in estimating the second integral in (1.29). Since we have set \(\varphi\) to equal \(J_{g}\) for this overview, we have that \(\partial_{1}\varphi=\partial_{1}J_{g}\). As \(\partial_{1}J_{g}=0\) on the surface \(x_{1}=x_{1}^{*}(x_{2},t)\), it must change sign from the upstream region to the downstream region. In fact, we have that \(\partial_{1}J_{g}<0\) in the upstream region, and \(\partial_{1}J_{g}>0\) in the downstream region.6 Returning to the second integral in (1.29), we see that this error integral acquires a "good" sign in the downstream region and can be viewed as an additional damping integral with a reduced power in the weight function \(J_{g}^{2r-1}\). Meanwhile, in the upstream region, in which \(\partial_{1}J_{g}<0\), this error integral has the "bad" sign and cannot be properly bounded with this choice of weight function. To be precise, _we cannot set the weight function \(\varphi\) equal to \(J_{g}\) in the upstream region_; instead, we must devise a weight function \(\varphi\) that obeys good properties with respect to \(\partial_{1}\varphi\) in the upstream region; we will use a weight function which is essentially _transported_ by the slow acoustic characteristics. This type of transport structure produces a cancellation that entirely eliminates the problematic error integral in (1.29) in the upstream region.

Footnote 6: This is only true in a local open neighborhood of the surface \(x_{1}=x_{1}^{*}(x_{2},t)\). For \(x_{1}>x_{1}^{*}(x_{2},t)\) sufficiently large, \(\partial_{1}J_{g}\) becomes negative, but this technical difficulty can be ignored at this stage of the presentation.

Clearly, different weights must be used in the upstream and downstream regions of spacetime so as to bound the error integral in (1.29). To that end, we consider a third region of spacetime, whose closure contains the pre-shock set as well as large subsets of both the upstream and downstream regions. See the left panel of Figure 4, which displays this third region, consisting of the triples \((x_{1},x_{2},t)\) such that time \(t\) is less than the times corresponding to the set of pre-shocks. Because the set of pre-shocks depends only upon the transverse coordinate \(x_{2}\) and time \(t\), in order to bound the error integral in (1.29) in this third spacetime region, we can set the weight function \(\varphi\) to be a function that degenerates to zero along this cylindrical surface and which is independent of the \(x_{1}\) coordinate. This can be done by setting \(\varphi(x_{2},t)=J_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\), in which case the problematic error integral in (1.29) vanishes identically, since \(\partial_{1}\varphi\equiv 0\).7

Footnote 7: The actual weight function for this region of spacetime uses a modification of \(J_{g}\) which is defined in (5.4).

#### 1.9.5. Energy estimates for the remaining terms in (1.20)

It remains for us to explain how our energy method bounds the fourth term in (1.20a) and the fourth, fifth, sixth, and seventh terms in (1.20b) and (1.20c).
The common feature which these terms share is _over-differentiated geometry_; however, instead of producing derivative loss, all of these terms (by design) have exact-derivative structure, and some of the over-differentiated terms produce new types of damping norms, as well as _anti-damping_ norms, the latter requiring a sufficiently large exponent \(r\). Perhaps the most interesting of the _over-differentiated_ terms is the fourth term in (1.20c), namely \(-\alpha\mathring{\boldsymbol{\Sigma}}_{\mathcal{N}}\partial_{\mathcal{T}}J_{g}\). Recalling that the procedure for our energy method requires first multiplying by \(J_{g}\), then applying \(\mathsf{D}^{6}\), and then testing with \(2\Sigma^{-2\beta+1}\varphi^{2r}\mathsf{D}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\), we find that the leading-order integral obtained from this term is given by
\[-2\alpha\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta+1}\varphi^{2r}(J_{g}\mathring{\boldsymbol{\Sigma}}_{\mathcal{N}})\,\partial_{\mathcal{T}}\mathsf{D}^{6}J_{g}\,\mathsf{D}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\tag{1.30}\]
This integral explains the _over-differentiated geometry_ nomenclature: there appears to be one derivative too many in \(\partial_{\mathcal{T}}\mathsf{D}^{6}J_{g}\). We can have six derivatives on \(J_{g}\) but not seven, and as we shall see, there is indeed an exact-derivative structure here. Let us first explain why six derivatives on \(J_{g}\) is consistent with the norms (1.27) of our energy method. We return to the approximate dynamics of \(J_{g}\) given by (1.22a) and perform a sixth-order energy estimate on this relation. The resulting differential inequality yields, via the Grönwall inequality, the bound
\[\sup_{t^{\prime}\in[\mathsf{t}_{\mathsf{m}},t]}\varepsilon\big\|\varphi^{\frac{3}{4}}\mathsf{D}^{6}J_{g}(\cdot,t^{\prime})\big\|_{L^{2}_{x}}^{2}+\int_{\mathsf{t}_{\mathsf{m}}}^{t}\big\|\varphi^{-\frac{1}{4}}\mathsf{D}^{6}J_{g}(\cdot,t^{\prime})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}t^{\prime}\lesssim\varepsilon^{2}\mathcal{D}_{6,\mathcal{N}}^{2}(t)\,.\tag{1.31}\]
Because \(\varphi\) is equal to \(J_{g}\) and because \(J_{g}^{-1}\geq\frac{5}{6}\), which is proven in Section 9, we see from (1.31) that the unweighted function \(\mathsf{D}^{6}J_{g}\) is bounded in the spacetime \(L^{2}\)-norm by \(\varepsilon\mathcal{D}_{6,\mathcal{N}}(t)\), showing not only that six derivatives of \(J_{g}\) are bounded, but that this spacetime \(L^{2}\)-norm is extremely small. To estimate the integral (1.30), we integrate-by-parts with respect to \(\partial_{\mathcal{T}}\).
Using that \(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\) is controlled by our energy and damping norms, and that \(\mathsf{D}^{6}J_{g}\) is controlled via (1.31), the integration-by-parts with respect to \(\partial_{\mathcal{T}}\) reveals the exact-derivative structure of (1.30) and shows that this integral is bounded by our damping norms. The remaining over-differentiated terms in (1.20) are treated in a similar fashion, with one notable exception: the energy estimate for the \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\) equation (1.20a) produces an integral, denoted by \(I_{1}^{\rm error}\) in (1.33), which is sign-definite with the wrong sign and therefore requires special care.
We now explain how to estimate the integral \(I_{1}^{\rm error}\) in (1.33). We first integrate-by-parts with respect to \(\partial_{t}\), use (1.22a), and find that to leading order, with \(\varphi\) set equal to \(J_{g}\), we have that
\[I_{1}^{\rm error}=\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r}(\partial_{t}J_{g})\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big|^{2}\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,.\]
Comparing this integral with the second integral on the right side of (1.25), we see that the sign on \(\partial_{t}J_{g}\) is now positive, while it is negative in (1.25). The second integral on the right side of (1.25) is a damping integral, while the integral \(I_{1}^{\rm error}\) produces an _anti-damping_ integral, meaning a sign-definite integral with the wrong sign. Consequently, this anti-damping integral \(I_{1}^{\rm error}\) must be combined with the damping integral on the right side of (1.25). The sum of these two integrals yields the same type of integral as in (1.26) for the function \(\mathsf{D}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\); more precisely, this sum of integrals has the lower bound
\[\tfrac{(r-\frac{1}{2})(1+\alpha)}{8\varepsilon}\int_{\mathsf{t}_{\mathsf{m}}}^{t}\iint_{\mathbb{T}^{2}}\Sigma^{-2\beta}J_{g}^{2r}\big|\mathsf{D}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big|^{2}\,\mathrm{d}x\,\mathrm{d}t^{\prime}\,;\tag{1.35}\]
it is therefore clear that in order to obtain our damping norm, we must choose \(r>\frac{1}{2}\). For our analysis, we set \(r=\frac{3}{4}\), in which case the coefficient in (1.35) equals \(\tfrac{1+\alpha}{32\varepsilon}\).

We have now given an overview of our energy method for the normal-component system (1.20). Similar energy estimates are performed for the tangential-component system, leading us to bound the following norms:
\[\mathcal{E}_{6,\mathcal{T}}^{2}(t)=\big\|\varphi^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\mathsf{D}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,t)\big\|_{L_{x}^{2}}^{2}\,,\tag{1.36a}\]
\[\mathcal{D}_{6,\mathcal{T}}^{2}(t)=\int_{0}^{t}\big\|\varphi^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\mathsf{D}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,t^{\prime})\big\|_{L_{x}^{2}}^{2}\,\mathrm{d}t^{\prime}\,.\tag{1.36b}\]
Defining the _total norms_ by
\[\mathcal{E}_{6}^{2}(t)=\mathcal{E}_{6,\mathcal{N}}^{2}(t)+(\mathsf{K}\varepsilon)^{-2}\mathcal{E}_{6,\mathcal{T}}^{2}(t)\,,\qquad\mathcal{D}_{6}^{2}(t)=\mathcal{D}_{6,\mathcal{N}}^{2}(t)+(\mathsf{K}\varepsilon)^{-2}\mathcal{D}_{6,\mathcal{T}}^{2}(t)\,,\]
our combined energy estimates prove that \(\mathcal{E}_{6}(t)\) and \(\mathcal{D}_{6}(t)\) remain uniformly bounded for \(\mathsf{t}_{\mathsf{m}}\leq t\leq\mathsf{t}_{\mathsf{top}}\). As we stated above, this overview used a highly simplified spacetime to explain some of the key ideas behind our energy method.
The actual scheme uses three different spacetime regions: (1) the spacetime region bounded from above by the pre-shock cylindrical surface shown in the left panel of Figure 4; (2) the downstream region bounded from above by the level set \(\{J_{g}(x,t)=0\}\), shown in red in the center panel of Figure 4; and (3) the upstream region bounded from above by the distinguished slow acoustic characteristic surface passing through the pre-shock, shown in green in the right panel of Figure 4.

### Prior results on the maximal development of Cauchy data for the Euler equations

#### 1.10.1. The classical approach to characteristic surfaces

In multiple space dimensions, characteristic surfaces are co-dimension-\(1\) submanifolds of spacetime. In the classical framework, which can be traced back to Courant & Friedrichs [17], such manifolds are locally generated by special _vector fields_, whose directions are termed _bicharacteristic directions_. These vector fields are generators of a cone, the so-called _bicharacteristic cone_, which is tangent to the characteristic surfaces.8

Footnote 8: This traditional characterization of characteristic surfaces is very different from the approach that we have taken and described above in Section 1.5.

#### 1.10.2. Christodoulou's method [13]

The velocity potential \(\phi\) of an irrotational compressible fluid satisfies the wave equation \(\Box_{g}\phi=0\) (see, for example, [51] and [52]); that is, the irrotational Euler equations reduce to the wave equation for the d'Alembertian associated to the Lorentzian (acoustic) metric \(g\), where the components of \(g^{-1}\) are given by \(g^{00}=-1\), \(g^{0j}=-u^{j}\), \(g^{i0}=-u^{i}\), \(g^{ij}=c^{2}\delta^{ij}-u^{i}u^{j}\), where \(i,j=1,\dots,d\), and the index \(0\) denotes the temporal component. It is quite remarkable that even though the underlying fluid dynamics are Newtonian, non-relativistic, and occur in flat spacetime, the fluctuations (sound waves) are governed by a curved Lorentzian (pseudo-Riemannian) spacetime geometry. With such a construct, sound rays play the role of photons and follow the _null geodesics_ of the acoustic metric. As a result, the analysis of the irrotational compressible Euler equations can be entirely relegated to the study of the second-order nonlinear wave equation \(\Box_{g}\phi=0\), and thereby placed in the framework of analysis in General Relativity. This is indeed the starting point for Christodoulou's method [13] for the analysis of shock formation in fluids. The null geodesics, and hence the bicharacteristic directions, are obtained as solutions to the _eikonal equation_
\[g^{\nu\mu}\partial_{\mu}q\,\partial_{\nu}q=0\,,\qquad\nu,\mu=0,1,\dots,d\,.\]
By definition, the gradient of \(q\) then determines the tangent and normal directions to the acoustic characteristic surfaces. With the geometric variables defined, certain Riemann-type variables are used in geometric coordinates, and the resulting closure of weighted Sobolev-class energy estimates provides the uniform bounds necessary to describe shock formation. The use of the gradient of the eikonal function \(q\) to define the normal and tangential directions to the fast characteristic surfaces, in conjunction with the Riemann-type variables employed, leads to a type of derivative loss in the energy estimates of this scheme. To overcome this derivative loss, time-integration of derivatives can be successively implemented, with each time integral gaining back a portion of the original loss; this is referred to as a _descent scheme_.
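As an aside, the bicharacteristics of the eikonal equation above can be traced numerically as solutions of Hamilton's equations for \(H=\frac{1}{2}g^{\nu\mu}\xi_{\mu}\xi_{\nu}\). The sketch below is purely illustrative: the velocity and sound-speed fields, the finite-difference gradients, and the integrator are toy choices of our own, not taken from [13] or from the present paper.

```python
import numpy as np

# Toy steady fields (illustrative assumptions, not data from the paper).
def u(x):  return np.array([0.1 * np.sin(x[1]), 0.05 * np.cos(x[0])])
def c(x):  return 1.0 + 0.1 * np.sin(x[0] + x[1])

def H(v):
    """H = (1/2) g^{mu nu} xi_mu xi_nu for the acoustic metric,
    with v = (t, x1, x2, xi0, xi1, xi2); null rays satisfy H = 0."""
    x, xi0, xi = v[1:3], v[3], v[4:6]
    udotxi, cc = u(x) @ xi, c(x)
    return 0.5 * (-xi0**2 - 2.0 * udotxi * xi0 + cc**2 * (xi @ xi) - udotxi**2)

def rhs(v, eps=1e-6):
    """Hamilton's equations: d(t,x)/ds = dH/d(xi), d(xi)/ds = -dH/d(t,x)."""
    grad = np.zeros(6)
    for i in range(6):
        e = np.zeros(6); e[i] = eps
        grad[i] = (H(v + e) - H(v - e)) / (2.0 * eps)
    return np.concatenate([grad[3:], -grad[:3]])

# Launch a forward-in-time fast ray: xi0 = -u.xi - c|xi| makes the covector
# null and yields ray speed u.n + c in the direction of xi.
x0, xi = np.array([0.0, 0.0]), np.array([1.0, 0.0])
xi0 = -(u(x0) @ xi) - c(x0) * np.linalg.norm(xi)
v = np.array([0.0, x0[0], x0[1], xi0, xi[0], xi[1]])

ds = 1e-2
for _ in range(500):                  # explicit midpoint (RK2) ray integration
    v_mid = v + 0.5 * ds * rhs(v)
    v = v + ds * rhs(v_mid)

print("H along the ray (should stay near 0):", H(v))
print("ray endpoint (t, x1, x2):", v[:3])
```

In Christodoulou's framework, the level sets of the resulting eikonal function \(q\) are the characteristic surfaces, and the crossing of nearby rays is the geometric-optics picture of the shock formation analyzed in this paper.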
Christodoulou's method has been used as the framework for the study of shock formation (and singularity formation) for a large class of Lorentzian wave equations; see, for example, [49, 50], [27], [43], [34, 35], [36, 37].

#### 1.10.3. The recent result of Abbrescia & Speck [1]

Employing the geometric framework of Christodoulou's method, Abbrescia & Speck [1] have established the maximal development of Cauchy data for the Euler equations in a _localized downstream_ region of spacetime. Their result is easiest to explain using the left panel in Figure 1. The black curve, which we have termed the _pre-shock_, is referred to as the _crease_ in [1]. As we have explained above, the surface \(x_{1}=x_{1}^{*}(x_{2},t)\) (shown as the magenta surface in Figure 1) separates spacetime into the _upstream_ region (bounded from above in time by the green surface in Figure 1) and the _downstream_ region (bounded from above in time by the red surface in Figure 1). In [1], the authors establish the maximal development of Cauchy data in a localized region, just downstream of the surface \(x_{1}=x_{1}^{*}(x_{2},t)\), with \(x_{1}^{*}(x_{2},t)\leq x_{1}\leq M\) and \(M\) taken sufficiently small so that \(\partial_{1}J_{g}(x,t)\geq 0\) for \(x_{1}\) in this region (this is a local region to the right of the magenta surface and under the red surface in Figure 1). Upstream maximal development was not considered. The authors of [1] employed an optimized descent scheme, requiring a minimum of \(25\) such "descents", which, in turn, determines the minimum number of \(25\) derivatives that must be used for their energy method.

### A rough statement of the main theorem

The main results of this paper are Theorem 4.6, Theorem 4.7, and Theorem 4.8, corresponding to the three panels in Figure 4. For convenience, we summarize in Theorem 1.2 a significantly abbreviated version of our main results.

**Theorem 1.2** (**Maximal development**). _Consider the 2D Euler equations (1.2) for arbitrary \(\gamma>1\), with \(H^{7}(\mathbb{T}^{2})\)-smooth isentropic initial data \((u_{0},\sigma_{0})\), bounded away from vacuum. Assume that the data is compressive in the \(x_{1}\) direction and generic.9 For \(0<\varepsilon\ll 1\), the initial dominant Riemann variable \(w_{0}:=u_{0}^{1}+\sigma_{0}=\mathcal{O}(1)\) is assumed to have a point at which \(\partial_{1}w_{0}\) attains its global (non-degenerate) minimum; this maximally negative slope in the \(x_{1}\) direction is \(\mathcal{O}(-\frac{1}{\varepsilon})\), while in the \(x_{2}\) direction the slope of \(w_{0}\) is \(\mathcal{O}(1)\). Assume that the initial subdominant Riemann variable \(z_{0}:=u_{0}^{1}-\sigma_{0}\) and the initial tangential velocity \(a_{0}:=u_{0}^{2}\) are \(\mathcal{O}(\varepsilon)\) in amplitude, and that their derivatives satisfy \((\partial_{1}z_{0},\partial_{1}a_{0})=\mathcal{O}(1)\) and \((\partial_{2}z_{0},\partial_{2}a_{0})=\mathcal{O}(\varepsilon)\). The initial condition for the geometry is \((\mathcal{N},\mathcal{T},J_{g})=(e_{1},e_{2},1)\). Then, assuming that \(\varepsilon\) is sufficiently small, the following hold:_

Footnote 9: The precise assumptions on the Cauchy data are given in Section 4.2, items (i)–(viii). The set of such initial conditions forms an open set in the \(H^{7}\) topology, as discussed in Remark 4.4.
1. _There exists a spacetime \(\mathcal{M}_{\text{Eulerian}}\), the maximal hyperbolic development of the Cauchy data \((u_{0},\sigma_{0})\) prescribed at the initial time slice, and a unique solution \((u,\sigma)\) of (1.2) in this spacetime, which propagates the regularity of the initial data; in particular, \((u,\sigma)\in C_{t}^{0}H_{y}^{7}\cap C_{t}^{7}L_{y}^{2}\)._
2. _There exists a family of Arbitrary-Lagrangian-Eulerian (ALE) diffeomorphisms \(\psi(\cdot,t)\), indexed by time and defined in Section 1.6, which flattens all fast acoustic characteristic surfaces. Under the action of \(\psi\), the spacetime of maximal hyperbolic development of the Cauchy data gets mapped into its ALE counterpart, the spacetime \(\mathcal{M}_{\mathsf{ALE}}\)._
3. _The Euler evolution (1.2) is equivalent to the evolution of the differentiated ALE Riemann variables \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\), of the geometry \((\mathcal{N},\mathcal{T},J_{g})\), and of the (rescaled) sound speed \(\Sigma\), cf. (1.20), (1.21), and (1.22). These new geometric unknowns propagate the regularity of their initial data throughout the spacetime \(\mathcal{M}_{\mathsf{ALE}}\); for instance, \(J_{g}\), \(h_{,2}\), \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\), \(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), \(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\), \(\hat{\mathbf{W}}_{\mathcal{T}}\), \(\hat{\mathbf{Z}}_{\mathcal{T}}\), \(\hat{\mathbf{A}}_{\mathcal{T}}\) are bounded in \(C_{t}^{0}H_{x}^{6}\), whereas \(\psi\) and \(\Sigma\) remain bounded in \(C_{t}^{0}H_{x}^{7}\)._
4. _The spacetime \(\mathcal{M}_{\mathsf{ALE}}\) is the maximal hyperbolic development of the Cauchy data for the ALE dynamics (1.20), (1.21), and (1.22). The future temporal boundary ("top boundary") of \(\mathcal{M}_{\mathsf{ALE}}\) consists of: a co-dimension-\(2\) set of "first gradient singularities", which is the set of pre-shocks \(\Xi^{*}=\{J_{g}=0\}\cap\{J_{g,1}=0\}\); a co-dimension-\(1\) singular surface which emerges from the pre-shock set in the downstream region, the set \(\{J_{g}=0\}\), which parametrizes a continuum of gradient catastrophes for the density and the normal velocity; and the distinguished slow acoustic characteristic co-dimension-\(1\) surface emanating from the pre-shock set in the upstream direction, which serves as a Cauchy horizon for the ALE Euler evolution. See Figures 3 and 4 for a pictorial representation of the future temporal boundary of \(\mathcal{M}_{\mathsf{ALE}}\)._

### Organization of the paper

In Section 2, we introduce the Arbitrary-Lagrangian-Eulerian (ALE) coordinate system, adapted to the fast acoustic characteristic surfaces. In Section 3, we introduce a new type of differentiated Riemann-type variable, which is a linear combination of the gradients of the velocity and sound speed and the curvature of the fast acoustic characteristic surfaces (see the identities (1.17)). Our analysis will make use of these new differentiated Riemann variables in the ALE coordinate system. In Section 4, we give a detailed description of the open set of compressive and generic initial conditions used for our analysis. We then state the three main theorems of this work. Theorem 4.6 produces the unique Euler solution up to the pre-shock. Theorem 4.7 establishes downstream maximal development of the Cauchy data, and Theorem 4.8 establishes upstream maximal development of the Cauchy data.
In Section 5, we explain the geometry of the spacetime region lying below the cylindrical-type surface corresponding to the set of pre-shocks. Specifically, we smooth the corner formed at the intersection of the set of pre-shocks with our final time-slice, and create a new smooth cylindrical-type surface using a modification of the metric-scaled determinant \(J_{g}\). We then remap time, \(t\mapsto\mathsf{s}\), so as to flatten this surface, and redefine all of our variables to be functions of \((x,\mathsf{s})\). We define the Sobolev norms used for the energy estimates and state the pointwise bootstrap assumptions on low-order derivatives and the Sobolev bootstrap assumptions on high-order derivatives. Section 6 provides some of the basic consequences of the pointwise bootstrap assumptions, including the fact that \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}(x,t)\sim\partial_{1}w_{0}(x)\), which is used on numerous occasions throughout our proof. Section 7 establishes the sixth-order Sobolev bounds for the geometric quantities \(J_{g}\), \(\partial_{2}h\), \(\mathcal{N}\), \(\mathcal{T}\), the sound speed \(\Sigma\), and the tangential reparameterization velocity \(V\). In Section 8, we prove sixth-order energy estimates for the vorticity. The resulting vorticity bound allows us to produce improved \(L^{2}\) bounds for both the fifth-order and sixth-order derivatives of \(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\), \(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), and \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\). In Section 9, we close all of the bootstrap bounds. Sections 10–12 contain the complete set of energy estimates used to obtain uniform Sobolev bounds for Euler solutions in the spacetime bounded from above by the pre-shock cylindrical surface. Using the \((x,\mathsf{s})\) coordinates defined in (5.18) and the spacetime gradient operator \(\widetilde{\mathsf{D}}\) defined in (5.23), sixth-order energy estimates on the tangential-component equation set (10.1) are performed in Section 10, providing uniform bounds for the tangential norms \(\widetilde{\mathcal{E}}_{5,\mathcal{T}}(\mathsf{s})\), \(\widetilde{\mathcal{E}}_{6,\mathcal{T}}(\mathsf{s})\), \(\widetilde{\mathcal{D}}_{5,\mathcal{T}}(\mathsf{s})\), and \(\widetilde{\mathcal{D}}_{6,\mathcal{T}}(\mathsf{s})\) defined in (5.36).

Figure 4. Left. The spacetime region with the future temporal boundary consisting of the orange pre-shock cylindrical surface passing through the co-dimension-\(2\) pre-shock set, shown as the black curve. Center. The spacetime region used for the analysis of the downstream maximal development of Cauchy data. The red surface displays the singular hypersurface consisting of the level set \(\{J_{g}=0\}\), which emanates from the pre-shock (the black curve) in the downstream direction. In orange we represent the pre-shock cylindrical surface, which also emanates from the pre-shock, but in the upstream direction. Right. The spacetime region used for the analysis of the upstream maximal development of Cauchy data. The green surface denotes the distinguished slow acoustic characteristic surface emanating from the pre-shock set (in black), which connects to the initial time slice in the downstream direction, and which characterizes the maximal hyperbolic development in the upstream direction.
In Section 12, energy estimates for the normal-component equation set (12.1) are given, with uniform bounds for the norms \(\widetilde{\mathcal{E}}_{5,\mathcal{N}}(\mathsf{s})\), \(\widetilde{\mathcal{E}}_{6,\mathcal{N}}(\mathsf{s})\), \(\widetilde{\mathcal{D}}_{5,\mathcal{N}}(\mathsf{s})\), and \(\widetilde{\mathcal{D}}_{6,\mathcal{N}}(\mathsf{s})\) defined in (5.36). The closure of these normal-component energy estimates relies upon improved bounds for the case of six pure time-derivatives, and these improved bounds are proven in Section 11. In Section 13, we prove downstream maximal development of the Cauchy data. With the modified determinant function \(\overline{J}_{g}\) defined in (5.7), we consider the spacetime with future temporal boundary given by the level set \(\{\overline{J}_{g}(x,t)=0\}\) in the downstream region, and by the pre-shock cylinder in the upstream region. A new coordinate transformation (13.8) is introduced which maps \((x,t)\mapsto(x,\mathsf{s})\) and flattens this future temporal boundary. Again, sixth-order energy estimates are closed for the normal-component equations, improved normal-component estimates are obtained for the case of six pure time derivatives, and tangential-component estimates are closed. The resulting sixth-derivative uniform bounds establish the existence of the unique Euler solution for all times up to the singular boundary where the metric-scaled determinant \(J_{g}\) vanishes, and thus where gradient blow-up occurs. Upstream maximal development of the Cauchy data is established in Section 14. A large portion of the upstream spacetime region is foliated by (what are essentially) slow acoustic characteristic surfaces. The natural _time evolution_ of the slow characteristic surfaces is somewhat singular when written in our ALE coordinates, which are adapted to the fast acoustic characteristic surfaces. In particular, the temporal rate of change of each slow characteristic surface is proportional to \(J_{g}^{-1}\), which blows up at the pre-shock. As such, we introduce a reparameterization of these slow characteristic surfaces by employing the \(x_{1}\) coordinate as the evolutionary independent variable. The resulting description of the geometry of the slow characteristic surfaces becomes smooth. We define the weight function \(\tilde{\mathcal{J}}\) used in the upstream energy method via transport, along these slow surfaces, of the value of \(J_{g}\) along the fast characteristic surface passing through the pre-shock. With this weight function, we once again close sixth-order energy estimates for the normal-component equations, improved normal-component estimates are once again obtained for the case of six pure time derivatives, and tangential-component estimates are again closed. Uniform bounds are therefore established in the entire upstream spacetime region, lying below the slow characteristic surface that emanates from the pre-shock. Section 15 is devoted to establishing the optimal regularity of the velocity, the sound speed, and the ALE family of diffeomorphisms \(\psi\). We prove that \(H^{7}\) Sobolev regularity is maintained for the entire maximal development of the Cauchy data. In Appendix A, we provide the reader with a self-contained introduction to the notion of maximal development of Cauchy data for the Euler equations in the simplified setting of one space dimension. A complete description of the maximal spacetime set is provided in both the traditional Eulerian setting, as well as the more geometric Lagrangian framework.
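The one-dimensional picture from Appendix A can be previewed with the textbook Burgers model \(\partial_{t}w+w\,\partial_{x}w=0\), whose characteristics \(x(t)=x_{0}+t\,w_{0}(x_{0})\) first cross where the characteristic Jacobian \(1+t\,w_{0}^{\prime}(x_{0})\) vanishes. The following sketch is a toy computation under these standard assumptions; the steep datum mimics the \(\mathcal{O}(-\frac{1}{\varepsilon})\) slope of Theorem 1.2 but is our own choice:

```python
import numpy as np

# Characteristics of Burgers: x(t) = x0 + t*w0(x0); the Jacobian of the
# characteristic map is J(x0, t) = 1 + t*w0'(x0), the 1D analogue of J_g.
eps = 0.05
x0  = np.linspace(-np.pi, np.pi, 4001)
w0  = -np.tanh(x0 / eps)              # O(1) amplitude, O(1/eps) negative slope
dw0 = np.gradient(w0, x0)

tstar = -1.0 / dw0.min()              # first blow-up time, here O(eps)
i     = dw0.argmin()                  # label of the pre-shock characteristic
xstar = x0[i] + tstar * w0[i]         # Eulerian location of the pre-shock

print(f"t* = {tstar:.4f} (compare eps = {eps}), pre-shock at x = {xstar:.4f}")

# At t = t*, the Jacobian attains its minimum value 0 exactly at x0[i]: the
# simultaneous vanishing of J and its x0-derivative is the 1D analogue of
# the pre-shock set {J_g = 0} intersected with {J_g,1 = 0}.
J = 1.0 + tstar * dw0
print("min J(., t*) =", J.min(), " attained at x0 =", x0[J.argmin()])
```

For the 2D Euler system the same mechanism operates along the fast acoustic characteristics, with the metric-scaled determinant \(J_{g}\) playing the role of the Jacobian.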
Appendix B is devoted to the basic functional-analysis lemmas in the three different spacetime regions that we employ for our analysis. We prove a number of technical lemmas which are spacetime variants of the classical Sobolev and Poincaré inequalities, the Gagliardo–Nirenberg inequalities, Moser inequalities, and a number of commutator lemmas. Finally, in Appendix C, we establish \(L^{\infty}\) bounds for solutions to certain transport equations, by obtaining \(p\)-independent bounds for \(L^{p}\) energy estimates and passing to the limit as \(p\to\infty\). Keeping in mind that our ALE coordinate system is adapted to the fast characteristics associated to the \(\lambda_{3}\) wave speed, the lemmas in this section allow us to obtain pointwise bounds for quantities which are naturally associated with either the \(\lambda_{1}\) or \(\lambda_{2}\) wave speeds.

## 2. Acoustic characteristic surfaces associated to shock formation

In this section, we develop the geometry for shock formation. We shall study the problem on the spacetime \(\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\), where the initial time \(\mathsf{t}_{\mathsf{in}}\) shall be made precise below, and \(T\in(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\) is arbitrary. The times \(\mathsf{t}_{\mathsf{in}}\) and \(\mathsf{t}_{\mathsf{fin}}\) are defined in (4.1) and (4.4), respectively.

### Characteristic surfaces

We denote the \(3\)-transport velocity by
\[\mathcal{V}_{3}=u+\alpha\sigma n=\lambda_{3}n+(u\cdot\tau)\tau\,.\tag{2.1}\]
We shall foliate spacetime with acoustic characteristic surfaces associated to the "fast" wave speed \(\lambda_{3}\). One way of doing so is by studying the Lagrangian flow map of the \(3\)-transport velocity \(\mathcal{V}_{3}\):
\[\partial_{t}\eta(x_{1},x_{2},t)=\mathcal{V}_{3}(\eta(x_{1},x_{2},t),t)\,,\qquad\eta(x,\mathsf{t}_{\mathsf{in}})=x\,.\tag{2.2}\]
In terms of the standard Cartesian basis, we have that
\[\eta(x_{1},x_{2},t)=(\eta^{1}(x_{1},x_{2},t),\eta^{2}(x_{1},x_{2},t))\,,\]
and that
\[\eta^{1}(x_{1},x_{2},\mathsf{t}_{\mathsf{in}})=x_{1}\quad\text{and}\quad\eta^{2}(x_{1},x_{2},\mathsf{t}_{\mathsf{in}})=x_{2}\,.\]
Using the flow map \(\eta\), we can give a geometric description of the fast acoustic characteristic surfaces. At the initial time \(t=\mathsf{t}_{\mathsf{in}}\), we foliate \(\mathbb{T}^{2}\) by lines parallel to \(e_{2}=(0,1)\), and denote these lines by \(\gamma_{x_{1}}(\mathsf{t}_{\mathsf{in}})=\{x_{1}\}\times\mathbb{T}\). For each \(x_{1}\in\mathbb{T}\) and \(t\in[\mathsf{t}_{\mathsf{in}},T]\), we define the characteristic curve (at a fixed time-slice) by
\[\gamma_{x_{1}}(t)=\eta(\gamma_{x_{1}}(\mathsf{t}_{\mathsf{in}}),t)\,,\tag{2.3}\]
and the characteristic surfaces up to time \(T\geq\mathsf{t}_{\mathsf{in}}\) (which are parameterized by \(x_{1}\)) by
\[\Gamma_{x_{1}}(T)=\bigcup\nolimits_{t\in[\mathsf{t}_{\mathsf{in}},T]}\gamma_{x_{1}}(t)\,.\tag{2.4}\]
Figure 5 below displays a few such characteristic surfaces \(\Gamma_{x_{1}}\) for five different values of \(x_{1}\in\mathbb{T}\).

### An Arbitrary-Lagrangian-Eulerian (ALE) description of the geometry

While the Lagrangian flow \(\eta\) is a natural parameterization for the fast acoustic characteristic surfaces \(\Gamma_{x_{1}}\), it is convenient to introduce a tangential re-parameterization in the form of the so-called Arbitrary-Lagrangian-Eulerian (ALE) coordinates.
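Before turning to the ALE parameterization, we note that the Lagrangian construction (2.2)–(2.4) is straightforward to realize numerically. In the sketch below the fields \(u\) and \(\sigma\) are frozen, illustrative stand-ins (in the actual problem they evolve with the solution); each curve \(\gamma_{x_{1}}(t)\) is tracked by a set of markers, and its unit normal is computed by finite differences along the curve:

```python
import numpy as np

# Transport initial vertical lines by V3 = u + alpha*sigma*n, with n the
# evolving unit normal of each curve (toy fields, not data from this paper).
alpha = 0.5
def u_fn(p):
    return np.stack([0.2 + 0.1*np.cos(p[..., 1]), 0.05*np.sin(p[..., 0])], axis=-1)
def sigma_fn(p):
    return 1.0 + 0.1*np.cos(p[..., 0])

x2 = np.linspace(0.0, 2.0*np.pi, 256, endpoint=False)
curves = [np.stack([np.full_like(x2, x1), x2], axis=-1)   # gamma_{x1}(t_in)
          for x1 in np.linspace(0.0, 2.0*np.pi, 5, endpoint=False)]

def normal(curve):
    dp = np.gradient(curve, axis=0)                  # tangent along the curve
    tau = dp / np.linalg.norm(dp, axis=1, keepdims=True)
    return np.stack([tau[:, 1], -tau[:, 0]], axis=-1)  # rotate tangent by -90 deg

dt, nsteps = 1e-2, 100
for _ in range(nsteps):                               # forward-Euler flow of (2.2)
    for k, c in enumerate(curves):
        V3 = u_fn(c) + alpha * sigma_fn(c)[:, None] * normal(c)
        curves[k] = c + dt * V3

# Shock formation corresponds to two curves touching; report the minimal gap.
gaps = [np.abs(curves[k+1][:, 0] - curves[k][:, 0]).min() for k in range(len(curves)-1)]
print("min gap between consecutive characteristic curves:", min(gaps))
```

The collapse of the gap monitored above is precisely the impingement of the surfaces \(\Gamma_{x_{1}}(T)\) on one another; the ALE coordinates introduced next are designed to follow this geometry smoothly.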
Because each curve \(\gamma_{x_{1}}(t)\) is a graph over the set \(\mathbb{T}\ni x_{2}\), we introduce a _height_ function \(h(x_{1},x_{2},t)\) such that
\[\gamma_{x_{1}}(t)=\{(h(x_{1},x_{2},t),x_{2})\colon x_{2}\in\mathbb{T}\}\,,\qquad t\in[\mathsf{t}_{\mathsf{in}},T]\,.\]
The induced metric on \(\gamma_{x_{1}}(t)\) is given by
\[g(x_{1},x_{2},t)=1+|h_{,2}(x_{1},x_{2},t)|^{2}\,,\tag{2.5}\]
and the unit tangent vectors \(\mathcal{T}\) and normal vectors \(\mathcal{N}\) to the curves \(\gamma_{x_{1}}(t)\) are then given by
\[\mathcal{T}(x_{1},x_{2},t)=g^{-\frac{1}{2}}(h_{,2},1)\,,\qquad\text{and}\qquad\mathcal{N}(x_{1},x_{2},t)=g^{-\frac{1}{2}}(1,-h_{,2})\,.\tag{2.6}\]

Figure 5. For \(T>\mathsf{t}_{\mathsf{in}}\) which is strictly less than the very first blowup time, we display the characteristic surfaces \(\Gamma_{x_{1}}(T)\) defined in (2.4) emanating from five different values of \(x_{1}\in\mathbb{T}\). At \(t=\mathsf{t}_{\mathsf{in}}\), the curves \(\{\gamma_{x_{1}}(\mathsf{t}_{\mathsf{in}})\}_{x_{1}\in\mathbb{T}}\) are lines which foliate \(\mathbb{T}^{2}\). The distance between the characteristic surfaces \(\Gamma_{x_{1}}(T)\) decreases as \(T\) increases, leading to shock formation when this distance vanishes.

Figure 6. For \(T>\mathsf{t}_{\mathsf{in}}\) which is strictly less than the very first blowup time, and for five different values of \(x_{1}\in\mathbb{T}\), we display both the fast acoustic characteristic surfaces \(\Gamma_{x_{1}}(T)\) (in orange), and the corresponding slow acoustic characteristic surfaces emanating from the same values of \(x_{1}\) (in olive-green). While the orange fast acoustic characteristic surfaces are close to impinging on each other, the olive-green slow acoustic surfaces smoothly foliate spacetime.

We define the ALE family of maps \(\psi(x_{1},x_{2},t)=\big(\psi^{1}(x_{1},x_{2},t),\psi^{2}(x_{1},x_{2},t)\big)\) by10
\[\psi(x_{1},x_{2},t)=h(x_{1},x_{2},t)e_{1}+x_{2}e_{2}\,,\tag{2.7}\]
where
\[h(x_{1},x_{2},\mathsf{t}_{\mathsf{in}})=x_{1}\,.\tag{2.8}\]

Footnote 10: The tangent and normal vectors to \(\gamma_{x_{1}}(t)\) can be equivalently defined via the map \(\psi\). In particular, we have that \(\mathcal{T}=g^{-\frac{1}{2}}\psi_{,2}=g^{-\frac{1}{2}}(h_{,2},1)\), \(\mathcal{N}=g^{-\frac{1}{2}}\psi_{,2}^{\perp}=g^{-\frac{1}{2}}(1,-h_{,2})\), and the induced metric is \(g=\psi_{,2}\cdot\psi_{,2}=1+|h_{,2}|^{2}\).

In order to preserve the shape of the characteristic surfaces \(\Gamma_{x_{1}}(T)\), the family of diffeomorphisms \(\psi(\cdot,t)\) must satisfy the constraint
\[\partial_{t}\psi\cdot\mathcal{N}=\big(\mathcal{V}_{3}\circ\psi\big)\cdot\mathcal{N}=\big(u+\alpha\sigma n\big)\circ\psi\cdot\mathcal{N}\,.\tag{2.9}\]
Time-differentiating (2.7), we have that
\[\partial_{t}\psi\cdot\mathcal{N}=\partial_{t}h\,(e_{1}\cdot\mathcal{N})=g^{-\frac{1}{2}}\partial_{t}h\,,\]
and from (2.9), we have that
\[\partial_{t}h=g^{\frac{1}{2}}\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)\,.\tag{2.10}\]
Similarly,
\[\partial_{t}\psi\cdot\mathcal{T}=\partial_{t}h\,(e_{1}\cdot\mathcal{T})=g^{-\frac{1}{2}}h_{,2}\,\partial_{t}h\,.\]
It follows that
\[\partial_{t}\psi=(\partial_{t}\psi\cdot\mathcal{N})\mathcal{N}+(\partial_{t}\psi\cdot\mathcal{T})\mathcal{T}=\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)\mathcal{N}+\big((u\circ\psi)\cdot\mathcal{N}+\alpha\sigma\circ\psi\big)h_{,2}\,\mathcal{T}\,.\tag{2.11}\]
### The deformation matrix \(\nabla\psi\) and its determinant and inverse

The diffeomorphisms \(\psi\) are fundamental to our analysis, since the definition of the paraboloid of first singularities, which describes the downstream maximal development, is determined by the vanishing of the Jacobian determinant of \(\psi\) (see Figure 7). From (2.7) we have that
\[\nabla\psi=\begin{bmatrix}h_{,1}&h_{,2}\\ 0&1\end{bmatrix}\,,\]
so that the Jacobian determinant is given by
\[J=\det\nabla\psi=h_{,1}\,.\]
We introduce the metric-normalized Jacobian determinant
\[J_{g}=g^{-\frac{1}{2}}J=g^{-\frac{1}{2}}h_{,1}\,.\tag{2.12}\]
The paraboloid of first singularities on the right side of Figure 7 will be shown to be the level set \(\{J_{g}=0\}\). The cofactor matrix of \(\nabla\psi\) is denoted by
\[b=\begin{bmatrix}1&-h_{,2}\\ 0&h_{,1}\end{bmatrix}\,,\]
and the inverse matrix \(B(x,t)=[\nabla\psi(x,t)]^{-1}\) is defined by
\[B=J^{-1}b\,.\]
The components of \(b\) are denoted by \(b^{i}_{j}\), the upper index for the row, and the lower index for the column. It is important to observe that
\[b^{1}_{j}=\psi_{,2}^{j\perp}=g^{\frac{1}{2}}\mathcal{N}^{j}\,,\qquad\text{and}\qquad b^{2}_{j}=(0,h_{,1})=:Je_{2}^{j}\,,\tag{2.13}\]
so that
\[B^{1}_{j}=J_{g}^{-1}\mathcal{N}^{j}\,,\qquad\text{and}\qquad B^{2}_{j}=\delta^{2}_{j}\,.\tag{2.14}\]
As usual, the columns of \(b\) are divergence-free.

### The relationship between ALE and Eulerian derivatives

The associated Eulerian unit tangent and normal vectors are given by
\[\tau=\mathcal{T}\circ\psi^{-1}\,,\qquad n=\mathcal{N}\circ\psi^{-1}\,.\tag{2.15}\]
Suppose that \(f\) denotes a differentiable Eulerian function, and let \(F=f\circ\psi\) be the associated ALE function. Using (2.14) together with the chain rule, we have that
\[f_{,i}\circ\psi=F_{,k}\,B^{k}_{i}=J_{g}^{-1}F_{,1}\,\mathcal{N}^{i}+F_{,2}\,e_{2}^{i}\,.\]
Since \(e_{2}\cdot\mathcal{N}=-g^{-\frac{1}{2}}h_{,2}\) and \(e_{2}\cdot\mathcal{T}=g^{-\frac{1}{2}}\), it follows that
\[\partial_{n}f\circ\psi=J_{g}^{-1}F_{,1}-g^{-\frac{1}{2}}h_{,2}\,F_{,2}\,,\tag{2.16a}\]
\[\partial_{\tau}f\circ\psi=g^{-\frac{1}{2}}F_{,2}\,.\tag{2.16b}\]
The identities in (2.16) show that the ALE coordinate system renders the "tame" tangential derivatives of \(f\) simply as derivatives with respect to \(x_{2}\) of \(f\circ\psi\). The "singular" nature of the normal derivatives of \(f\) is characterized not so much by the fact that these are derivatives with respect to the \(x_{1}\) or \(x_{2}\) direction; rather, it is the presence of the factor \(J_{g}^{-1}\) (which blows up as \(J_{g}\to 0\)) in front of \(F_{,1}\) which fully characterizes the singular nature of normal derivatives.

Figure 7. Left: in Eulerian coordinates, we represent the 2D analogue of the 1D image from Figure 18 (below). For \(T\in[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\) we display in orange the fast acoustic characteristic surfaces \(\Gamma_{x_{1}}(T)\) for various values of \(x_{1}\). The black curve, parametrized by \(x_{2}\), represents the set of pre-shocks in Eulerian coordinates (see Definition 6.6 below). The green surface is the slow acoustic characteristic surface emanating from the pre-shock set in the upstream direction. The cuspoidal red surface, which also emanates from the curve of pre-shocks, is the surface of "first singularities": the envelope (or "top" boundary) of the spacetime region in which the fast acoustic characteristic surfaces remain in one-to-one correspondence with the initial foliation of spacetime. The Eulerian Cauchy maximal hyperbolic development of the initial data is the spacetime which lies in the "temporal past" of the union of the green and red surfaces (and the time slice \(\{t=\mathsf{t}_{\mathsf{fin}}\}\)).
This spacetime has a boundary which is not smooth: it is not differentiable, having only limited Hölder regularity. Right: in ALE coordinates, we represent the 2D analogue of the 1D image from Figure 19 (below). We represent the same surfaces as in the left figure, except that we compose with the ALE diffeomorphism \(\psi\). In these ALE coordinates, the fast acoustic characteristics are flat planes which foliate spacetime, parametrized by \(x_{1}\). The spacetime of Cauchy maximal hyperbolic development of the initial data in ALE coordinates is the spacetime which lies "below" the union of the green and red surfaces (and the time slice \(\{t=\mathsf{t}_{\mathsf{fin}}\}\)). In turn, the red surface is the portion of the paraboloid of "first singularities" \(\{J_{g}=0\}\) which lies downstream of the curve of pre-shocks, whereas the green surface represents the upstream part of the slow acoustic characteristic surface emanating from the pre-shock, composed with the flow \(\psi\). The boundary of the ALE spacetime is \(W^{2,\infty}\) smooth.

## 3. A new set of variables for Euler shock formation

Having introduced a new system of ALE coordinates, we now introduce a new set of variables that are essential for the analysis of the shock formation process and of the hyperbolic maximal development of the Cauchy data.

### Euler equations in geometric ALE coordinates

#### 3.1.1. The differentiated acoustic Euler equations

We first compute the evolution of partial (space) derivatives of \(u\) and \(\sigma\). We differentiate (1.4) and find that
\[\partial_{t}u^{i}_{,k}+(u\cdot n+\alpha\sigma)u^{i}_{,kj}\,n^{j}+(u\cdot\tau)u^{i}_{,kj}\,\tau^{j}+\alpha\sigma(\sigma_{,ki}-\partial_{n}u^{i}_{,k})+\alpha\sigma_{,k}\,\sigma_{,i}+u^{j}_{,k}\,u^{i}_{,j}=0\,,\tag{3.1a}\]
\[\partial_{t}\sigma_{,k}+(u\cdot n+\alpha\sigma)\sigma_{,kj}\,n^{j}+(u\cdot\tau)\sigma_{,kj}\,\tau^{j}+\alpha\sigma(u^{i}_{,ki}-\partial_{n}\sigma_{,k})+\alpha\sigma_{,k}\,u^{i}_{,i}+u^{j}_{,k}\,\sigma_{,j}=0\,.\tag{3.1b}\]
The system (3.1) constitutes the differentiated acoustic Euler equations. It is imperative to study this differentiated form in order to avoid derivative loss in the geometry.

#### 3.1.2. ALE variables

Using our family of ALE mappings \(\psi\) defined in (2.7), we define a new set of ALE variables representing the velocity, the sound speed, and their gradients along the acoustic characteristic surfaces:
\[U^{i}=u^{i}\circ\psi\,,\qquad\Sigma=\sigma\circ\psi\,,\tag{3.2a}\]
\[\mathring{\mathsf{U}}^{i}_{k}=u^{i}_{,k}\circ\psi\,,\qquad\mathring{\Sigma}_{k}=\sigma_{,k}\circ\psi\,.\tag{3.2b}\]
Additionally, we define the ALE wave speed
\[\Lambda_{3}=\lambda_{3}\circ\psi=U\cdot\mathcal{N}+\alpha\Sigma\,.\tag{3.2c}\]
With respect to the variables (3.2), the system (3.1) takes the form of the transport system recorded in (3.3): all advection occurs via the operator \(\partial_{t}+g^{-\frac{1}{2}}(U\cdot\mathcal{T}-\Lambda_{3}h_{,2})\partial_{2}\), every normal derivative carries a factor of \(J_{g}^{-1}\) (as in the term \(\alpha\Sigma J_{g}^{-1}\big(\mathring{\Sigma}_{k,1}\,\mathcal{N}^{i}-\mathring{\mathsf{U}}^{i}_{k,1}\big)\) arising in (3.3) from \(\alpha\sigma(\sigma_{,ki}-\partial_{n}u^{i}_{,k})\)), and the remaining terms involve only tangential derivatives and quadratic expressions in \((\mathring{\mathsf{U}},\mathring{\Sigma})\).

#### 3.1.3. ALE Riemann variables

In analogy with the classical Eulerian Riemann variables, we define the ALE Riemann variables
\[W=U\cdot\mathcal{N}+\Sigma\,,\qquad Z=U\cdot\mathcal{N}-\Sigma\,,\qquad A=U\cdot\mathcal{T}\,,\tag{3.4}\]
their differentiated counterparts
\[\hat{\mathbf{W}}_{k}=\mathcal{N}^{i}\mathring{\mathsf{U}}^{i}_{k}+\mathring{\Sigma}_{k}\,,\qquad\hat{\mathbf{Z}}_{k}=\mathcal{N}^{i}\mathring{\mathsf{U}}^{i}_{k}-\mathring{\Sigma}_{k}\,,\qquad\hat{\mathbf{A}}_{k}=\mathcal{T}^{i}\mathring{\mathsf{U}}^{i}_{k}\,,\tag{3.5}\]
and the ALE transport velocity
\[V=g^{-\frac{1}{2}}\big(U\cdot\mathcal{T}-\Lambda_{3}h_{,2}\big)\,.\tag{3.6}\]
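The form of the transport velocity \(V\) in (3.6) can be double-checked against the constraint (2.9) and the identities (2.16); the following short computation is recorded here for the reader's convenience. For \(F=f\circ\psi\), the \(\lambda_{3}\)-material derivative satisfies
\[\big(\partial_{t}f+\mathcal{V}_{3}\cdot\nabla f\big)\circ\psi=\partial_{t}F+\big((\mathcal{V}_{3}\circ\psi-\partial_{t}\psi)\cdot\mathcal{T}\big)\,g^{-\frac{1}{2}}F_{,2}=\partial_{t}F+V\,F_{,2}\,,\]
since \(\mathcal{V}_{3}\circ\psi-\partial_{t}\psi\) is purely tangential by (2.9), \((\mathcal{V}_{3}\circ\psi)\cdot\mathcal{T}=U\cdot\mathcal{T}\) because \(n\circ\psi=\mathcal{N}\perp\mathcal{T}\), and \(\partial_{t}\psi\cdot\mathcal{T}=g^{-\frac{1}{2}}h_{,2}\,\partial_{t}h=\Lambda_{3}h_{,2}\) by (2.10).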
In these variables, the differentiated system (3.1) is equivalent to the system (3.7) for \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\); for instance, the \(\hat{\mathbf{A}}\)-equation (3.7c) closes with the quadratic terms
\[\cdots+J_{g}\Big(\tfrac{1}{2}(\hat{\mathbf{W}}_{k}+\hat{\mathbf{Z}}_{k})\hat{\mathbf{A}}_{i}\mathcal{N}^{i}+\hat{\mathbf{A}}_{k}\hat{\mathbf{A}}_{i}\mathcal{T}^{i}+\tfrac{\alpha}{4}(\hat{\mathbf{W}}_{k}-\hat{\mathbf{Z}}_{k})(\hat{\mathbf{W}}_{i}-\hat{\mathbf{Z}}_{i})\mathcal{T}^{i}\Big)=0\,.\tag{3.7c}\]
The system (3.7) is coupled with the transport-type equation for \(h\) given by
\[\partial_{t}h=g^{\frac{1}{2}}(U\cdot\mathcal{N}+\alpha\Sigma)=g^{\frac{1}{2}}\big(\tfrac{1+\alpha}{2}W+\tfrac{1-\alpha}{2}Z\big)\,,\tag{3.8}\]
and with the evolution equations for \((W,Z,A)\), which are given in (3.21) below.

#### 3.1.4. Tensor notation

We use bold notation for tensors, and write (3.5) as
\[\mathring{\mathbf{U}}=\nabla u\circ\psi\,,\quad\mathring{\boldsymbol{\Sigma}}=\nabla\sigma\circ\psi\,,\quad\hat{\mathbf{W}}=\mathcal{N}\mathring{\mathbf{U}}+\mathring{\boldsymbol{\Sigma}}\,,\quad\hat{\mathbf{Z}}=\mathcal{N}\mathring{\mathbf{U}}-\mathring{\boldsymbol{\Sigma}}\,,\quad\hat{\mathbf{A}}=\mathcal{T}\mathring{\mathbf{U}}\,.\]
By \(\mathcal{N}\mathring{\mathbf{U}}\) we mean the contraction of the vector \(\mathcal{N}\) with the tensor \(\mathring{\mathbf{U}}\), so that \(\mathcal{N}\mathring{\mathbf{U}}\) is a \(1\)-form, which we identify with a vector field, with components
\[(\mathcal{N}\mathring{\mathbf{U}})_{k}=\mathcal{N}^{i}\mathring{\mathsf{U}}^{i}_{k}\,.\]
Similarly, the vector \(\mathcal{T}\mathring{\mathbf{U}}\) has components \((\mathcal{T}\mathring{\mathbf{U}})_{k}=\mathcal{T}^{i}\mathring{\mathsf{U}}^{i}_{k}\). We next introduce the component notation
\[\hat{\mathbf{W}}_{\mathcal{N}}=\hat{\mathbf{W}}\cdot\mathcal{N}\,,\quad\hat{\mathbf{W}}_{\mathcal{T}}=\hat{\mathbf{W}}\cdot\mathcal{T}\,,\quad\hat{\mathbf{Z}}_{\mathcal{N}}=\hat{\mathbf{Z}}\cdot\mathcal{N}\,,\quad\hat{\mathbf{Z}}_{\mathcal{T}}=\hat{\mathbf{Z}}\cdot\mathcal{T}\,,\quad\hat{\mathbf{A}}_{\mathcal{N}}=\hat{\mathbf{A}}\cdot\mathcal{N}\,,\quad\hat{\mathbf{A}}_{\mathcal{T}}=\hat{\mathbf{A}}\cdot\mathcal{T}\,,\]
which shall henceforth be used.

#### 3.1.5. Geometric significance of the variables \(\hat{\mathbf{W}}\), \(\hat{\mathbf{Z}}\), and \(\hat{\mathbf{A}}\)

The variables \(\hat{\mathbf{W}}\), \(\hat{\mathbf{Z}}\), and \(\hat{\mathbf{A}}\) have been designed both to encode certain components of the characteristic-surface curvature tensor and to avoid geometric derivative loss. The significance of these variables is demonstrated using the classical Eulerian Riemann variables. Following our definition of the ALE Riemann variables in (3.4), we define the Eulerian Riemann variables by
\[w=u\cdot n+\sigma\,,\qquad z=u\cdot n-\sigma\,,\qquad a=u\cdot\tau\,.\tag{3.9}\]
Then, the identities (2.16), (3.4), (3.5), and (3.9) provide the following:
\[\hat{\mathbf{W}}_{\mathcal{N}}=J_{g}^{-1}W_{,1}-h_{,2}\,g^{-\frac{1}{2}}W_{,2}+g^{-1}A\big(J_{g}^{-1}h_{,12}-h_{,2}\,g^{-\frac{1}{2}}h_{,22}\big)=(\partial_{n}w-a\,\partial_{n}n\cdot\tau)\circ\psi\,,\tag{3.10a}\]
\[\hat{\mathbf{Z}}_{\mathcal{N}}=J_{g}^{-1}Z_{,1}-h_{,2}\,g^{-\frac{1}{2}}Z_{,2}+g^{-1}A\big(J_{g}^{-1}h_{,12}-h_{,2}\,g^{-\frac{1}{2}}h_{,22}\big)=(\partial_{n}z-a\,\partial_{n}n\cdot\tau)\circ\psi\,,\tag{3.10b}\]
\[\hat{\mathbf{A}}_{\mathcal{N}}=J_{g}^{-1}A_{,1}-h_{,2}\,g^{-\frac{1}{2}}A_{,2}-\tfrac{1}{2}g^{-1}(W+Z)\big(J_{g}^{-1}h_{,12}-h_{,2}\,g^{-\frac{1}{2}}h_{,22}\big)=\big(\partial_{n}a-\tfrac{1}{2}(w+z)\partial_{n}\tau\cdot n\big)\circ\psi\,,\tag{3.10c}\]
\[\hat{\mathbf{W}}_{\mathcal{T}}=g^{-\frac{1}{2}}W_{,2}+Ag^{-\frac{3}{2}}h_{,22}=(\partial_{\tau}w-a\,\partial_{\tau}n\cdot\tau)\circ\psi\,,\tag{3.10d}\]
\[\hat{\mathbf{Z}}_{\mathcal{T}}=g^{-\frac{1}{2}}Z_{,2}+Ag^{-\frac{3}{2}}h_{,22}=(\partial_{\tau}z-a\,\partial_{\tau}n\cdot\tau)\circ\psi\,,\tag{3.10e}\]
\[\hat{\mathbf{A}}_{\mathcal{T}}=g^{-\frac{1}{2}}A_{,2}-\tfrac{1}{2}(W+Z)g^{-\frac{3}{2}}h_{,22}=\big(\partial_{\tau}a-\tfrac{1}{2}(w+z)\partial_{\tau}\tau\cdot n\big)\circ\psi\,.\tag{3.10f}\]

#### 3.1.6. Dynamics of the geometry

We will use the identities
\[D\mathcal{N}=-g^{-1}Dh_{,2}\,\mathcal{T}\,,\qquad D\mathcal{T}=g^{-1}Dh_{,2}\,\mathcal{N}\,,\qquad Dg=2h_{,2}\,Dh_{,2}\,,\tag{3.11}\]
which hold for any differential operator \(D\). From (3.5), we see that
\[\mathcal{N}\mathring{\mathbf{U}}+\alpha\mathring{\boldsymbol{\Sigma}}=\tfrac{1+\alpha}{2}\hat{\mathbf{W}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}\,.\tag{3.12}\]
Differentiating (3.8) and using the identity (3.12), we obtain that
\[(\partial_{t}+V\partial_{2})h_{,1}=h_{,1}\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}\big)\cdot(\mathcal{N}+h_{,2}\,\mathcal{T})\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.13a}\]
\[(\partial_{t}+V\partial_{2})h_{,2}=g\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.13b}\]
\[h_{,1}=1\quad\text{and}\quad h_{,2}=0\,,\quad\text{on }\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\,.\tag{3.13c}\]
From (2.6) and (3.13), we then have that
\[(\partial_{t}+V\partial_{2})\mathcal{N}+\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)\mathcal{T}=0\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.14a}\]
\[(\partial_{t}+V\partial_{2})\mathcal{T}-\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)\mathcal{N}=0\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.14b}\]
\[\mathcal{N}=e_{1}\quad\text{and}\quad\mathcal{T}=e_{2}\,,\quad\text{on }\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\,.\tag{3.14c}\]
The definition of \(J_{g}\) in (2.12), together with the dynamics (3.13), shows that
\[(\partial_{t}+V\partial_{2})J_{g}=J_{g}\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}}\big)\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.15a}\]
\[J_{g}=1\,,\quad\text{on }\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\,.\tag{3.15b}\]
Similarly, with (2.5), we compute that
\[(\partial_{t}+V\partial_{2})g^{\frac{1}{2}}=g^{\frac{1}{2}}\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)h_{,2}\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.16a}\]
\[g^{\frac{1}{2}}=1\,,\quad\text{on }\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\,.\tag{3.16b}\]
Thanks to (2.12) and (3.11), we have that
\[J_{g,2}=g^{-\frac{1}{2}}h_{,12}-g^{-\frac{3}{2}}h_{,1}\,h_{,2}\,h_{,22}=g^{-\frac{1}{2}}\big(h_{,12}-J_{g}\,g^{-\frac{1}{2}}h_{,2}\,h_{,22}\big)\,,\tag{3.17}\]
and therefore
\[\mathcal{T}_{,1}\cdot\mathcal{N}=g^{-1}h_{,12}=g^{-\frac{1}{2}}J_{g,2}+g^{-\frac{3}{2}}J_{g}\,h_{,2}\,h_{,22}=-\mathcal{N}_{,1}\cdot\mathcal{T}\,.\tag{3.18}\]

#### 3.1.7. Dynamics of the Riemann variables

With (3.2) and (3.5), the un-differentiated Euler system (1.4) is written in ALE variables as
\[(\partial_{t}+V\partial_{2})U^{i}-\alpha\Sigma\mathcal{T}^{i}\hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{\alpha}{2}\Sigma\mathcal{N}^{i}(\hat{\mathbf{W}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{N}})+\tfrac{\alpha}{2}\Sigma(\hat{\mathbf{W}}_{i}-\hat{\mathbf{Z}}_{i})=0\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.19a}\]
\[(\partial_{t}+V\partial_{2})\Sigma+\alpha\Sigma(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})=0\,,\quad\text{in }\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\,,\tag{3.19b}\]
\[U^{i}=u_{0}^{i}\quad\text{and}\quad\Sigma=\sigma_{0}\,,\quad\text{on }\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\,.\tag{3.19c}\]
Using the chain rule and (2.14), we also record that
\[\Sigma_{,1}=J_{g}\big(\mathring{\boldsymbol{\Sigma}}_{\mathcal{N}}+h_{,2}\,\mathring{\boldsymbol{\Sigma}}_{\mathcal{T}}\big)=\tfrac{1}{2}J_{g}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z}}_{\mathcal{N}})+\tfrac{1}{2}J_{g}h_{,2}\,(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\tag{3.20a}\]
\[\Sigma_{,2}=g^{\frac{1}{2}}\mathring{\boldsymbol{\Sigma}}_{\mathcal{T}}=\tfrac{1}{2}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,.\tag{3.20b}\]
From (3.4), (3.19), and (3.14), we have that
\[(\partial_{t}+V\partial_{2})W+A\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)+\alpha\Sigma\hat{\mathbf{A}}_{\mathcal{T}}=0\,,\tag{3.21a}\]
\[(\partial_{t}+V\partial_{2})Z+A\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)-\alpha\Sigma(\hat{\mathbf{A}}_{\mathcal{T}}+2\hat{\mathbf{Z}}_{\mathcal{N}})=0\,,\tag{3.21b}\]
\[(\partial_{t}+V\partial_{2})A+\tfrac{\alpha}{2}\Sigma(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}}-2\hat{\mathbf{A}}_{\mathcal{N}})-\tfrac{1}{2}(W+Z)\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big)=0\,,\tag{3.21c}\]
in \(\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},T]\), with initial datum
\[W=w_{0}:=u_{0}^{1}+\sigma_{0}\,,\qquad Z=z_{0}:=u_{0}^{1}-\sigma_{0}\,,\qquad A=a_{0}:=u_{0}^{2}\,,\tag{3.21d}\]
on \(\mathbb{T}^{2}\times\{t=\mathsf{t}_{\mathsf{in}}\}\).
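As an elementary consistency check connecting (3.2c), (3.4), and (3.8), recorded here for convenience: the fast ALE wave speed is an affine combination of the ALE Riemann variables,
\[\tfrac{1+\alpha}{2}W+\tfrac{1-\alpha}{2}Z=\tfrac{1+\alpha}{2}(U\cdot\mathcal{N}+\Sigma)+\tfrac{1-\alpha}{2}(U\cdot\mathcal{N}-\Sigma)=U\cdot\mathcal{N}+\alpha\Sigma=\Lambda_{3}\,,\]
which is precisely the normal speed driving the evolution (3.8) of the height function \(h\).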
Similarly, from (3.5), (3.7), and (3.14), we deduce \[J_{g}(\partial_{t}+V\partial_{2})\hat{\mathbf{W}}+\alpha J_{g} \Sigma g^{-\frac{1}{2}}\tau\hat{\mathbf{U}}_{,2}\] \[\qquad+J_{g}\hat{\mathbf{W}}_{\mathcal{N}}(\tfrac{1+\alpha}{2} \hat{\mathbf{W}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}})+J_{g}\hat{\mathbf{A}}( \tfrac{3+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{ \mathbf{Z}}_{\mathcal{T}})+\tfrac{\alpha}{2}J_{g}(\hat{\mathbf{W}}-\hat{ \mathbf{Z}})\hat{\mathbf{A}}_{\mathcal{T}}=0\,, \tag{3.22a}\] \[J_{g}(\partial_{t}+V\partial_{2})\hat{\mathbf{Z}}-\alpha J_{g} \Sigma g^{-\frac{1}{2}}\tau\hat{\mathbf{U}}_{,2}-2\alpha\Sigma\big{(}\mathcal{N} \hat{\mathbf{U}}_{,1}-\hat{\mathbf{S}}_{,1}\big{)}+2\alpha\Sigma J_{g}g^{- \frac{1}{2}}h_{,2}\,(\mathcal{N}\hat{\mathbf{U}}_{,2}-\hat{\mathbf{S}}_{,2}\, \big{)}\] \[\qquad+J_{g}(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}+\tfrac{1-\alpha}{2} \hat{\mathbf{Z}})\hat{\mathbf{Z}}_{\mathcal{N}}+J_{g}\hat{\mathbf{A}}( \tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\hat{ \mathbf{Z}}_{\mathcal{T}})-\tfrac{\alpha}{2}J_{g}(\hat{\mathbf{W}}-\hat{ \mathbf{Z}})\hat{\mathbf{A}}_{\mathcal{T}}=0\,,\] (3.22b) \[J_{g}(\partial_{t}+V\partial_{2})\hat{\mathbf{A}}+\alpha\Sigma g^{- \frac{1}{2}}J_{g}\hat{\Sigma}_{,2}-\alpha\Sigma\tau\hat{\mathbf{U}}_{,1}+\alpha \Sigma g^{-\frac{1}{2}}J_{g}h_{,2}\,\tau\hat{\mathbf{U}}_{,2}\] \[\qquad+J_{g}\big{(}\tfrac{1}{2}(\hat{\mathbf{W}}+\hat{\mathbf{Z}})( \hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}} -\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}})+\hat{\mathbf{A}}\hat{ \mathbf{A}}_{\mathcal{T}}+\tfrac{\alpha}{4}(\hat{\mathbf{W}}-\hat{\mathbf{Z}})( \hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}=0\,, \tag{3.22c}\] in \(\mathbb{T}^{2}\times[\mathfrak{t}_{\text{in}},T]\), with initial datum \[\hat{\mathbf{W}}=\nabla w_{0}:=\nabla u_{0}^{1}+\alpha\nabla\sigma_{0}\,, \qquad\hat{\mathbf{Z}}=\nabla z_{0}:=\nabla u_{0}^{1}-\alpha\nabla\sigma_{0}\,, \qquad\hat{\mathbf{A}}=\nabla a_{0}:=\nabla u_{0}^{2}\,, \tag{3.22d}\] on \(\mathbb{T}^{2}\times\{t=\mathfrak{t}_{\text{in}}\}\). ### Dynamics of \(V\) From (3.6), (3.13b), (3.16a), and (3.21), we have that \[(\partial_{t}+V\partial_{2})V=-\alpha\Sigma g^{-\frac{1}{2}}\Big{(}\tfrac{2+ \alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}-\tfrac{\alpha}{2}\hat{\mathbf{Z}}_{ \mathcal{T}}-\hat{\mathbf{A}}_{\mathcal{N}}-h_{,2}\,\big{ Dynamics of normal components \(\hat{\mathbf{W}}_{\mathcal{N}}\), \(\hat{\mathbf{Z}}_{\mathcal{N}}\), and \(\hat{\mathbf{A}}_{\mathcal{N}}\) From (3.5), (3.11), (3.14a) and (3.22a) we deduce that \[(\partial_{t}+V\partial_{2})\hat{\mathbf{W}}_{\mathcal{N}}=- \hat{\mathbf{W}}_{\mathcal{N}}\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{ \mathcal{N}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}}+\tfrac{\alpha} {2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{ \mathrm{,}22}\,\big{)}-\alpha\Sigma g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{ N};2}+\tfrac{\alpha}{2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}+\Sigma g^{-\frac{3}{2}}h_{ \mathrm{,}22}\,\big{)}\hat{\mathbf{Z}}_{\mathcal{N}}\\ -\big{(}\tfrac{3+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{ 1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\hat{\mathbf{A}}_{\mathcal{ N}}-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2} \hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\hat{\mathbf{W}}_{\mathcal{T}}-\alpha \Sigma g^{-\frac{3}{2}}h_{\mathrm{,}22}\,\hat{\mathbf{A}}_{\mathcal{T}}\,. 
\tag{3.24}\] Furthermore, using (3.15a) we obtain that \[(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=-(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{(}\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}-\alpha\Sigma g^{-\frac{1}{2}}J_{g}\hat{\mathbf{A}}_{\mathcal{N};2}+\tfrac{\alpha}{2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}+\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\\ -\big{(}\tfrac{3+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{A}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{W}}_{\mathcal{T}}-\alpha\Sigma g^{-\frac{3}{2}}h_{,22}\,J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\,. \tag{3.25}\] From (3.5), (3.11), (3.14a) and (3.22b) we have that \[(\partial_{t}+V\partial_{2})\hat{\mathbf{Z}}_{\mathcal{N}}=-\hat{\mathbf{Z}}_{\mathcal{N}}\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}}+\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}-2\alpha\Sigma g^{-\frac{1}{2}}h_{,2}\,\hat{\mathbf{Z}}_{\mathcal{N};2}+2\alpha\Sigma J_{g}^{-1}\hat{\mathbf{Z}}_{\mathcal{N};1}\\ +\alpha\Sigma g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{N};2}+2\alpha\Sigma(g^{-\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{T}}+\hat{\mathbf{A}}_{\mathcal{N}})J_{g}^{-1}J_{g,2}+\tfrac{\alpha}{2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}-\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}\hat{\mathbf{W}}_{\mathcal{N}}\\ -\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\hat{\mathbf{A}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\hat{\mathbf{Z}}_{\mathcal{T}}+\alpha\Sigma g^{-\frac{3}{2}}h_{,22}\,\hat{\mathbf{A}}_{\mathcal{T}}\,. \tag{3.26}\] Furthermore, using (3.15a) we obtain that \[(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})=-(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{(}\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}-2\alpha\Sigma g^{-\frac{1}{2}}h_{,2}\,(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})_{,2}+2\alpha\Sigma\hat{\mathbf{Z}}_{\mathcal{N};1}+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\hat{\mathbf{A}}_{\mathcal{N};2}\\ +2\alpha\Sigma(g^{-\frac{1}{2}}h_{,2}\,\hat{\mathbf{Z}}_{\mathcal{N}}+g^{-\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{T}}+\hat{\mathbf{A}}_{\mathcal{N}})J_{g,2}+\tfrac{\alpha}{2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}-\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\\ -\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{A}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{Z}}_{\mathcal{T}}+\alpha\Sigma g^{-\frac{3}{2}}h_{,22}\,J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\,. \tag{3.27}\] Figure 8. We revisit Figure 7, emphasizing the top boundary of the spacetime of maximal hyperbolic development of the data, in which the evolution equations (3.14), (3.15), (3.19), and (3.22) will be studied. 
On the left, we use traditional Eulerian coordinates, while on the right we use ALE coordinates. In addition to the curve of pre-shocks (in black), the slow acoustic characteristic surface emanating from the pre-shock (in green), and the downstream part of the cuspoidal surface/paraboloid of “first singularities” (in red), we display two more surfaces (in both figures). In magenta, we display the fast acoustic characteristic surface which contains the curve of pre-shocks. In orange, we display the cylinder obtained by translating the curve of pre-shocks with respect to \(x_{1}\). The Euler evolution in the spacetime which lies below the orange cylinder is analyzed in Sections 5–12. The Euler evolution on the downstream side of the pre-shock, i.e., in between the orange and the red surfaces, is analyzed in Section 13. The Euler evolution on the upstream side of the pre-shock, i.e., in between the orange and the green surfaces, is analyzed in Section 14. We will later make use of the fact that we can write the \(\mathbf{\hat{Z}}_{\mathcal{N}}\) evolution in terms of a transport velocity based on the \(\lambda_{1}\) wave speed as follows: \[\big{(}\partial_{t}+(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\,\partial_{2}-2\alpha\Sigma J_{g}^{-1}\partial_{1}\big{)}\mathbf{\hat{Z}}_{\mathcal{N}}\\ =-\mathbf{\hat{Z}}_{\mathcal{N}}\big{(}\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{N}}+\tfrac{\alpha}{2}\mathbf{\hat{A}}_{\mathcal{T}}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}+2\alpha\Sigma(g^{-\frac{1}{2}}\mathbf{\hat{Z}}_{\mathcal{T}}+\mathbf{\hat{A}}_{\mathcal{N}})J_{g}^{-1}J_{g,2}+\alpha\Sigma g^{-\frac{1}{2}}\mathbf{\hat{A}}_{\mathcal{N};2}\\ +\tfrac{\alpha}{2}\big{(}\mathbf{\hat{A}}_{\mathcal{T}}-\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{)}\mathbf{\hat{W}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}}\big{)}\mathbf{\hat{A}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}}\big{)}\mathbf{\hat{Z}}_{\mathcal{T}}+\alpha\Sigma g^{-\frac{3}{2}}h_{,22}\,\mathbf{\hat{A}}_{\mathcal{T}}\,. 
\tag{3.28}\] From (3.5), (3.11), (3.14a) and (3.22c) we have that \[(\partial_{t}+V\partial_{2})\mathbf{\hat{A}}_{\mathcal{N}}=- \mathbf{\hat{A}}_{\mathcal{N}}(\tfrac{1}{2}\mathbf{\hat{W}}_{\mathcal{N}}+ \tfrac{1}{2}\mathbf{\hat{Z}}_{\mathcal{N}}+\mathbf{\hat{A}}_{\mathcal{T}})+ \alpha\Sigma J_{g}^{-1}\mathbf{\hat{A}}_{\mathcal{N};1}-\alpha\Sigma g^{- \frac{1}{2}}h_{,2}\,\mathbf{\hat{A}}_{\mathcal{N};2}\\ -\tfrac{\alpha}{2}\Sigma g^{-\frac{1}{2}}(\mathbf{\hat{W}}_{ \mathcal{N}}-\mathbf{\hat{Z}}_{\mathcal{N}}),_{2}+\tfrac{1}{2}(\mathbf{\hat{W }}_{\mathcal{N}}+\mathbf{\hat{Z}}_{\mathcal{N}}-2\mathbf{\hat{A}}_{\mathcal{T }})(\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2} \mathbf{\hat{Z}}_{\mathcal{T}})\\ -\tfrac{\alpha}{4}(\mathbf{\hat{W}}_{\mathcal{N}}-\mathbf{\hat{Z} }_{\mathcal{N}}+2\Sigma g^{-\frac{3}{2}}h_{,22}\,)(\mathbf{\hat{W}}_{\mathcal{ T}}-\mathbf{\hat{Z}}_{\mathcal{T}})-\tfrac{\alpha}{2}\Sigma g^{-\frac{1}{2}}J_{g}^{-1}J_{g},2 \,(\mathbf{\hat{W}}_{\mathcal{N}}+\mathbf{\hat{Z}}_{\mathcal{N}}-2\mathbf{ \hat{A}}_{\mathcal{T}})\,, \tag{3.29}\] and by using (3.15a) we find that \[(\partial_{t}+V\partial_{2})(J_{g}\mathbf{\hat{A}}_{\mathcal{N}})=- (J_{g}\mathbf{\hat{A}}_{\mathcal{N}})(-\tfrac{\alpha}{2}\mathbf{\hat{W}}_{ \mathcal{N}}+\tfrac{\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{N}}+\mathbf{\hat{A}} _{\mathcal{T}})+\alpha\Sigma\mathbf{\hat{A}}_{\mathcal{N};1}-\alpha\Sigma g^{- \frac{1}{2}}h_{,2}\,(J_{g}\mathbf{\hat{A}}_{\mathcal{N}}),_{2}\\ -\tfrac{\alpha}{2}\Sigma g^{-\frac{1}{2}}(J_{g}\mathbf{\hat{W}}_{ \mathcal{N}}-J_{g}\mathbf{\hat{Z}}_{\mathcal{N}}),_{2}+\tfrac{1}{2}(J_{g} \mathbf{\hat{W}}_{\mathcal{N}}+J_{g}\mathbf{\hat{Z}}_{\mathcal{N}}-2J_{g} \mathbf{\hat{A}}_{\mathcal{T}})(\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{ T}}+\tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}})\\ -\tfrac{\alpha}{4}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}}-J_{g} \mathbf{\hat{Z}}_{\mathcal{N}}+2\Sigma g^{-\frac{3}{2}}J_{g}h_{,22}\,)(\mathbf{ \hat{W}}_{\mathcal{T}}-\mathbf{\hat{Z}}_{\mathcal{T}})+\alpha\Sigma g^{-\frac{ 1}{2}}J_{g},2\,(h_{,2}\mathbf{\hat{A}}_{\mathcal{N}}+\mathbf{\hat{A}}_{ \mathcal{T}})\,. \tag{3.30}\] We shall also make use of the \(\mathbf{\hat{A}}_{\mathcal{N}}\) evolution with a transport velocity that encodes the \(\lambda_{2}\) wave speed as \[\big{(}\partial_{t}+(V+\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\, \partial_{2}-\alpha\Sigma J_{g}^{-1}\partial_{1}\big{)}\mathbf{\hat{A}}_{ \mathcal{N}}\\ =-\mathbf{\hat{A}}_{\mathcal{N}}(\tfrac{1}{2}\mathbf{\hat{W}}_{ \mathcal{N}}+\tfrac{1}{2}\mathbf{\hat{Z}}_{\mathcal{N}}+\mathbf{\hat{A}}_{ \mathcal{T}})-\tfrac{\alpha}{2}\Sigma g^{-\frac{1}{2}}J_{g}^{-1}J_{g},2\,( \mathbf{\hat{W}}_{\mathcal{N}}+\mathbf{\hat{Z}}_{\mathcal{N}}-2\mathbf{\hat{A}}_{ \mathcal{T}})-\tfrac{\alpha}{2}\Sigma g^{-\frac{1}{2}}(\mathbf{\hat{W}}_{ \mathcal{N}}-\mathbf{\hat{Z}}_{\mathcal{N}}),_{2}\\ +\tfrac{1}{2}(\mathbf{\hat{W}}_{\mathcal{N}}+\mathbf{\hat{Z}}_{ \mathcal{N}}-2\mathbf{\hat{A}}_{\mathcal{T}})(\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{ \mathcal{T}}+\tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}})-\tfrac{\alpha}{4} (\mathbf{\hat{W}}_{\mathcal{N}}-\mathbf{\hat{Z}}_{\mathcal{N}}+2\Sigma g^{- \frac{3}{2}}h_{,22}\,)(\mathbf{\hat{W}}_{\mathcal{T}}-\mathbf{\hat{Z}}_{ \mathcal{T}})\,. 
\tag{3.31}\] Dynamics of tangential components \(\mathbf{\hat{W}}_{\mathcal{T}},\mathbf{\hat{Z}}_{\mathcal{T}}\), and \(\mathbf{\hat{A}}_{\mathcal{T}}\) Using (3.5), (3.11), (3.14b) and (3.22a), we have that \[(\partial_{t}+V\partial_{2})\mathbf{\hat{W}}_{\mathcal{T}}=-\big{(} \tfrac{3+2\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+\tfrac{1-2\alpha}{2} \mathbf{\hat{Z}}_{\mathcal{T}}\big{)}\mathbf{\hat{A}}_{\mathcal{T}}+\tfrac{ \alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,\big{(}\mathbf{\hat{W}}_{\mathcal{T} }+\mathbf{\hat{Z}}_{\mathcal{T}}\big{)}-\alpha\Sigma g^{-\frac{1}{2}}\mathbf{\hat{A}} _{\mathcal{T}}\cdot\] \[=-\alpha\Sigma g^{-\frac{1}{2}}\mathbf{\hat{A}}_{\mathcal{T},2}+ \tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,(\mathbf{\hat{W}}_{\mathcal{T}}+ \mathbf{\hat{Z}}_{\mathcal{T}}+2\mathbf{\hat{A}}_{\mathcal{N}})-\big{(} \tfrac{3+2\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+\tfrac{1-2\alpha}{2} \mathbf{\hat{Z}}_{\mathcal{T}}\big{)}\mathbf{\hat{A}}_{\mathcal{T}}\,. \tag{3.32}\] In a similar way, from (3.5), (3.11), (3.14b) and (3.22b), we deduce that \[(\partial_{t}+V\partial_{2})\mathbf{\hat{Z}}_{\mathcal{T}}=-2 \alpha\Sigma g^{-\frac{1}{2}}h_{,2}\,\mathbf{\hat{Z}}_{\mathcal{T},2}+2\alpha \Sigma J_{g}^{-1}\mathbf{\hat{Z}}_{\mathcal{T},1}-2\alpha\Sigma\big{(}g^{- \frac{1}{2}}\mathbf{\hat{Z}}_{\mathcal{N}}-\mathbf{\hat{A}}_{\mathcal{T}}\big{)}J_{g }^{-1} ### Dynamics of vorticity The Eulerian vorticity \(\omega:=\nabla^{\perp}\cdot u=\partial_{\tau}u\cdot n-\partial_{n}u\cdot\tau\) is a solution of \[\partial_{t}\omega+(u+\alpha\sigma n)\cdot\nabla\omega+\omega(\operatorname{div }u-n\cdot\nabla\omega)=0\,.\] We define the ALE vorticity \[\Omega=\omega\circ\psi=\hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{1}{2}(\hat{ \mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\,. \tag{3.37}\] We next define the Eulerian specific vorticity by \(\boldsymbol{\omega}=(\alpha\sigma)^{-\frac{1}{\alpha}}\omega\) and a simple computation verifies that \[\partial_{t}\boldsymbol{\omega}+u\cdot\nabla\boldsymbol{\omega}=0\,.\] Defining the ALE specific vorticity \[\Omega=\boldsymbol{\omega}\circ\psi=(\alpha\Sigma)^{-\frac{1}{\alpha}}\Omega\,, \tag{3.38}\] an application of the chain-rule, (2.14), and (3.5) shows that \[\tfrac{J_{g}}{\Sigma}(\partial_{t}+V\partial_{2})\Omega-\alpha\Omega_{,1}+ \alpha g^{-\frac{1}{2}}h_{,2}\,J_{g}\Omega_{,2}=0\,. \tag{3.39}\] ### Identity for the divergence The Eulerian divergence of the fluid velocity is given as \(\operatorname{div}u:=\partial_{n}u\cdot n+\partial_{\tau}u\cdot\tau\). We show here that the ALE version of the divergence of the fluid velocity, namely \((\operatorname{div}u)\circ\psi\), may be written as a linear combination of the differentiated ALE Riemann variables \(\hat{\mathbf{W}},\hat{\mathbf{Z}}\), and \(\hat{\mathbf{A}}\). More precisely, by appealing to (3.4) and (3.10) we have \[(\operatorname{div}u)\circ\psi =(\partial_{n}u\cdot n+\partial_{\tau}u\cdot\tau)\circ\psi\] \[=(\partial_{n}(u\cdot n)-u\cdot\tau\partial_{n}n\cdot\tau+ \partial_{\tau}(u\cdot\tau)-u\cdot n\partial_{\tau}\tau\cdot n)\circ\psi\] \[=(\tfrac{1}{2}\partial_{n}(w+z)-a\partial_{n}n\cdot\tau+\partial_ {\tau}a-\tfrac{1}{2}(w+z)\partial_{\tau}\tau\cdot n)\circ\psi\] \[=\tfrac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{ \mathcal{N}})+\hat{\mathbf{A}}_{\mathcal{T}}\,. \tag{3.40}\] The above identity is in direct analogy to how the ALE vorticity was written in (3.37) as a linear combination of components of \(\hat{\mathbf{W}},\hat{\mathbf{Z}}\), and \(\hat{\mathbf{A}}\). 
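Both (3.37) and (3.40) can be sanity-checked in the flat configuration (\(\mathcal{N}=e_{1}\), \(\tau=e_{2}\), \(h\equiv 0\), \(g\equiv 1\)), where the differentiated Riemann variables reduce to the plain gradients of \(w=u^{1}+\sigma\), \(z=u^{1}-\sigma\), and \(a=u^{2}\). A minimal sympy sketch (ours, not part of the paper):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1 = sp.Function('u1')(x1, x2)      # normal velocity component (N = e1)
u2 = sp.Function('u2')(x1, x2)      # tangential velocity component (tau = e2)
s  = sp.Function('sigma')(x1, x2)   # rescaled sound speed

w, z, a = u1 + s, u1 - s, u2        # Riemann variables, cf. (4.6)
W_N, W_T = w.diff(x1), w.diff(x2)   # flat geometry: normal/tangential gradients
Z_N, Z_T = z.diff(x1), z.diff(x2)
A_N, A_T = a.diff(x1), a.diff(x2)

div_u  = u1.diff(x1) + u2.diff(x2)
curl_u = u2.diff(x1) - u1.diff(x2)  # omega = grad^perp . u

print(sp.simplify(sp.Rational(1, 2)*(W_N + Z_N) + A_T - div_u))    # 0, cf. (3.40)
print(sp.simplify(A_N - sp.Rational(1, 2)*(W_T + Z_T) - curl_u))   # 0, cf. (3.37)
```

In the general geometry the same linear combinations appear, with \(\partial_{1}\) and \(\partial_{2}\) replaced by the \(\mathcal{N}\)- and \(\tau\)-components of the ALE gradients.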
### Identities for \(J_{g}\hat{\mathbf{W}},J_{g}\hat{\mathbf{Z}},J_{g}\hat{\mathbf{A}}\) It is important to first rewrite the system of equations (3.22) by commuting \(J_{g}\) with the operator \(\partial_{t}+V\partial_{2}\) as follows \[(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{W}})+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\tau\hat{\mathbf{U}}_{,2}=\mathsf{F}_{\mathsf{W}}\,, \tag{3.41a}\] \[(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{Z}})-\alpha\Sigma g^{-\frac{1}{2}}J_{g}\tau\hat{\mathbf{U}}_{,2}-2\alpha\Sigma\big{(}\mathcal{N}\hat{\mathbf{U}}_{,1}-\hat{\mathbf{\Sigma}}_{,1}\big{)}+2\alpha\Sigma J_{g}g^{-\frac{1}{2}}h_{,2}\,\big{(}\mathcal{N}\hat{\mathbf{U}}_{,2}-\hat{\mathbf{\Sigma}}_{,2}\big{)}=\mathsf{F}_{\mathsf{Z}}\,,\] (3.41b) \[(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{A}})+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\hat{\mathbf{\Sigma}}_{,2}-\alpha\Sigma\tau\hat{\mathbf{U}}_{,1}+\alpha\Sigma J_{g}g^{-\frac{1}{2}}h_{,2}\,\tau\hat{\mathbf{U}}_{,2}=\mathsf{F}_{\mathsf{A}}\,,\] (3.41c) \[(\partial_{t}+V\partial_{2})h_{,2}=g\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\,,\] (3.41d) \[(\partial_{t}+V\partial_{2})J_{g}=\big{(}\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\,, \tag{3.41e}\] where \[\mathsf{F}_{\mathsf{W}} =-\tfrac{1-\alpha}{2}J_{g}(\hat{\mathbf{W}}_{\mathcal{N}}\hat{\mathbf{Z}}_{\mathcal{T}}-\hat{\mathbf{W}}_{\mathcal{T}}\hat{\mathbf{Z}}_{\mathcal{N}})\mathcal{T}-J_{g}(\tfrac{3+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}})\hat{\mathbf{A}}-\tfrac{\alpha}{2}J_{g}(\hat{\mathbf{W}}-\hat{\mathbf{Z}})\hat{\mathbf{A}}_{\mathcal{T}}\,,\] \[\mathsf{F}_{\mathsf{Z}} =J_{g}\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}\hat{\mathbf{Z}}_{\mathcal{T}}-\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\mathcal{T}-J_{g}(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}})\hat{\mathbf{A}}+\tfrac{\alpha}{2}J_{g}(\hat{\mathbf{W}}-\hat{\mathbf{Z}})\hat{\mathbf{A}}_{\mathcal{T}}\,,\] \[\mathsf{F}_{\mathsf{A}} =-J_{g}(-\tfrac{\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}\mathcal{N}+\tfrac{1}{2}\hat{\mathbf{W}}_{\mathcal{T}}\mathcal{T}+\tfrac{\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}}\mathcal{N}+\tfrac{1}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\mathcal{T})\hat{\mathbf{A}}_{\mathcal{N}}+J_{g}(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}})\hat{\mathbf{A}}_{\mathcal{T}}\mathcal{T}\] \[\qquad-J_{g}\big{(}\hat{\mathbf{A}}\hat{\mathbf{A}}_{\mathcal{T}}+\tfrac{\alpha}{4}(\hat{\mathbf{W}}-\hat{\mathbf{Z}})(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}+\tfrac{1}{2}J_{g}(\hat{\mathbf{W}}+\hat{\mathbf{Z}})\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\,.\] In order to close highest-order energy estimates, it is essential to re-weight the equations (3.41b) and (3.41c) in a manner specific to the normal and tangential components. 
By computing the normal components of (3.41) and the tangential components of (3.22), we arrive at the following system of equations: \[\tfrac{1}{\Sigma}(\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{ \mathcal{N}})+\alpha g^{-\frac{1}{2}}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})_{,2}- \alpha g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{N}}J_{g},-\tfrac{\alpha}{2}g^ {-\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{T}},_{2}\cdot\mathcal{N}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{ \mathbf{A}}_{\mathcal{T}})=\mathsf{F}_{\mathsf{w}}^{\mathcal{N}}\,,\] (3.42a) \[\tfrac{J_{g}}{\Sigma}(\partial_{t}+V\partial_{2})(J_{g}\hat{ \mathbf{Z}}_{\mathcal{N}})-\tfrac{1}{\Sigma}(\tfrac{1+\alpha}{2}J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(J_{ g}\hat{\mathbf{Z}}_{\mathcal{N}})-\alpha g^{-\frac{1}{2}}J_{g}(J_{g}\hat{ \mathbf{A}}_{\mathcal{N}})_{,2}+\alpha g^{-\frac{1}{2}}\hat{\mathbf{A}}_{ \mathcal{N}}J_{g}J_{g}J_{g},\] \[\qquad+\tfrac{\alpha}{2}g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T }},_{2}\cdot\mathcal{N}J_{g}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{ \mathbf{Z \[+\alpha g^{-\frac{1}{2}}J_{g}^{2}\tau_{,2}\cdot\!\!\mathcal{N}\hat{ \boldsymbol{\Sigma}}_{\tau}-\alpha(J_{g}\hat{\boldsymbol{\Lambda}}_{\mathcal{N}}) _{,1}+\tfrac{\alpha}{2}\tau_{,1}\cdot\!\!\mathcal{N}(J_{g}\hat{\boldsymbol{ \mathbf{W}}}_{\mathcal{N}}+J_{g}\hat{\boldsymbol{\mathbf{Z}}}_{\mathcal{N}}-2J_ {g}\hat{\boldsymbol{\mathbf{A}}}_{\mathcal{T}})+\alpha J_{g}{}_{,1}\hat{ \boldsymbol{\mathbf{A}}}_{\mathcal{N}}\] \[+\alpha J_{g}g^{-\frac{1}{2}}h_{,2}\,(J_{g}\hat{\boldsymbol{ \mathbf{A}}}_{\mathcal{N}})_{,2}-\tfrac{\alpha}{2}\tau_{,2}\cdot\!\!\mathcal{N} J_{g}g^{-\frac{1}{2}}h_{,2}\,(J_{g}\hat{\boldsymbol{\mathbf{W}}}_{\mathcal{N}}+J_{g} \hat{\boldsymbol{\mathbf{Z}}}_{\mathcal{N}}-2J_{g}\hat{\boldsymbol{\mathbf{A}} }_{\mathcal{T}})-\alpha J_{g}g^{-\frac{1}{2}}h_{,2}\,J_{g}{}_{,2}\,\hat{ \boldsymbol{\mathbf{A}}}_{\mathcal{N}}=\mathsf{F}_{\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! _When \(\mathsf{D}\) acts on a function of space alone, we implicitly identify \(\mathsf{D}\) with \((\mathsf{D}_{1},\mathsf{D}_{2})=(\varepsilon\partial_{1},\partial_{2})\). We will use the following notation when discussing these derivatives:_ * _The symbol_ \(\mathsf{D}^{m}\) _is used to denote any partial derivative_ \(\mathsf{D}^{\gamma}\) _with_ \(\gamma\in\mathbb{N}_{0}^{3}\)_, where_ \(\mathsf{D}^{\gamma}=(\varepsilon\partial_{t})^{\gamma_{0}}(\varepsilon \partial_{1})^{\gamma_{1}}\partial_{2}^{\gamma_{2}}\) _with_ \(|\gamma|=m\)_. 
In particular, throughout this section there is no need to keep track of the specific multi-index_ \(\gamma\)_, just of the total order_ \(|\gamma|=m\) _of the tangential derivative._ * _Naturally, the symbol_ \(\|\mathsf{D}^{m}f\|_{L^{2}}^{2}\) _denotes_ \(\sum_{|\gamma|=m}\|(\varepsilon\partial_{t})^{\gamma_{0}}(\varepsilon \partial_{1})^{\gamma_{1}}\partial_{2}^{\gamma_{2}}f\|_{L^{2}}^{2}\)_. Whenever the aforementioned sum over all pairs_ \(\gamma\in\mathbb{N}_{0}^{3}\) _with_ \(|\gamma|=m\) _is implicit, it will be dropped, so that we do not further clutter the notation._ * _For any scalar function_ \(f\)_, with the notation for_ \(\mathsf{D}\) _introduced in (_4.5_), we shall denote commutators as_ \[[\![f,\mathsf{D}^{\gamma}]\!]g:=f\mathsf{D}^{\gamma}g-\mathsf{D}^{\gamma}(fg) =-\sum\nolimits_{\delta\leq\gamma,|\delta|\leq|\gamma|-1}\binom{\gamma}{\delta} \mathsf{D}^{\gamma-\delta}f\;\mathsf{D}^{\delta}g\,.\] _Lastly, we shall use the notation_ \[\big{(}\mathsf{D}^{\gamma},f,g\big{)}=\mathsf{D}^{\gamma}(f\,g)-f\mathsf{D}^{ \gamma}g-g\mathsf{D}^{\gamma}f=\sum\nolimits_{\delta\leq\gamma,1\leq|\delta| \leq|\gamma|-1}\binom{\gamma}{\delta}\mathsf{D}^{\gamma-\delta}f\;\mathsf{D}^ {\delta}g\,,\] _to denote "double-commutators"._ ### The initial data It is convenient to state the initial data assumptions in terms of \((w_{0},z_{0},a_{0})\) defined cf. (2.6), (2.8), and (3.4) via \[w_{0}=u_{0}\cdot e_{1}+\sigma_{0},\qquad z_{0}=u_{0}\cdot e_{1}-\sigma_{0}, \qquad a_{0}=u_{0}\cdot e_{2}, \tag{4.6}\] rather the velocity and rescaled sound speed \((u_{0},\sigma_{0})\). The initial data is taken to satisfy the following properties: 1. There exists a constant11\(\kappa_{0}\geq 20\), **which is independent of \(\varepsilon\)**, such that Footnote 11: The purpose of \(\kappa_{0}\) is to ensure that the initial data does not have vacuum, see (4.8). The assumption \(\kappa_{0}\geq 20\) is made here only for convenience. \[\operatorname{supp}\,(w_{0}-\kappa_{0})\cup\operatorname{supp}\,(z_{0})\cup \operatorname{supp}\,(a_{0})\subseteq\mathcal{X}_{\mathsf{in}}:=[-13\pi \varepsilon,13\pi\varepsilon]\times\mathbb{T}\,.\] (4.7) Naturally, \(\varepsilon\) is assumed small enough to ensure that \(13\varepsilon\leq 1\), so that \(\mathcal{X}_{\mathsf{in}}\subset\mathbb{T}^{2}\). 2. At the level of no derivatives, we assume that \[\|w_{0}-\kappa_{0}\|_{L^{\infty}_{x}}\leq\tfrac{5}{3}\,,\qquad\|z_{0}\|_{L^{ \infty}_{x}}\leq\varepsilon\kappa_{0}\,,\qquad\|a_{0}\|_{L^{\infty}_{x}}\leq \varepsilon\kappa_{0}\,.\] Therefore, the initial rescaled sound speed \(\sigma_{0}\) satisfies \[\tfrac{1}{3}\kappa_{0}\leq\sigma_{0}(x)\leq\tfrac{2}{3}\kappa_{0}\,,\qquad \text{for all}\qquad x\in\mathbb{T}^{2}\,.\] (4.8) 3. Assume that \((w_{0},z_{0},a_{0})\in H^{7}(\mathbb{T}^{2})\), and that there exists a constant \(\overline{\mathsf{C}}\geq 1\), **which is independent of \(\varepsilon\)**, with12 Footnote 12: Intuitively, estimate (4.9) says that (at least up to the seventh derivative) we should think of \(w_{0}(x)\sim\kappa_{0}+\mathcal{W}(\tfrac{x_{1}}{\varepsilon},x_{2}),z_{0}(x) \sim\varepsilon\mathcal{Z}(\tfrac{x_{1}}{\varepsilon},x_{2})\), and \(a_{0}(x)\sim\varepsilon\mathcal{A}(\tfrac{x_{1}}{\varepsilon},x_{2})\), for some smooth functions \((\mathcal{W},\mathcal{Z},\mathcal{A})\) which obey \(\mathcal{O}(1)\) bounds (w.r.t \(\varepsilon\)) for all their derivatives. 
\[\sum_{1\leq|\gamma|\leq 5}\bigl{(}\bigl{\|}\mathsf{D}^{\gamma}w_{0}\bigr{\|}_{L^{\infty}_{x}}+\varepsilon^{-1}\bigl{\|}\mathsf{D}^{\gamma}(z_{0},a_{0})\bigr{\|}_{L^{\infty}_{x}}\bigr{)}+\sum_{6\leq|\gamma|\leq 7}\bigl{(}\varepsilon^{-\frac{1}{2}}\bigl{\|}\mathsf{D}^{\gamma}w_{0}\bigr{\|}_{L^{2}_{x}}+\varepsilon^{-\frac{3}{2}}\bigl{\|}\mathsf{D}^{\gamma}(z_{0},a_{0})\bigr{\|}_{L^{2}_{x}}\bigr{)}\leq\overline{\mathsf{C}}\,.\] (4.9) Here we have used the notation \(\mathsf{D}_{1}=\varepsilon\partial_{1}\), \(\mathsf{D}_{2}=\partial_{2}\), and \(\mathsf{D}^{\gamma}=\mathsf{D}_{1}^{\gamma_{1}}\mathsf{D}_{2}^{\gamma_{2}}\) for \(\gamma=(\gamma_{1},\gamma_{2})\in\mathbb{N}_{0}^{2}\). 4. Recalling the notation \(\mathsf{D}_{1}=\varepsilon\partial_{1}\) and \(\mathsf{D}_{2}=\partial_{2}\), we assume that for all \(x\in\mathbb{T}^{2}\) we have \[-1\leq\mathsf{D}_{1}w_{0}\leq\tfrac{1}{10}\,,\qquad|\mathsf{D}_{2}w_{0}|\leq 1\,,\qquad\text{and}\qquad|\mathsf{D}\mathsf{D}_{1}w_{0}|\leq 2\,.\] (4.10) 5. We assume that the global minimum of \(\mathsf{D}_{1}w_{0}\) is non-degenerate and occurs at \(x=0\). By this we mean that \(\mathsf{D}_{1}w_{0}(0)=-1\), \(\mathsf{D}_{2}w_{0}(0)=0\), \(\mathsf{D}\mathsf{D}_{1}w_{0}(0)=0\), and \((1-\varepsilon)\mathrm{Id}\leq\mathsf{D}^{2}\mathsf{D}_{1}w_{0}(0)\leq(1+\varepsilon)\mathrm{Id}\). 6. We assume that for each \(x_{2}\in\mathbb{T}\), the function \(x_{1}\mapsto\mathsf{D}_{1}w_{0}(x_{1},x_{2})\) attains its global minimum at a unique point \(x_{1}^{\vee}=x_{1}^{\vee}(x_{2})\), such that the bounds \(\mathsf{D}_{1}w_{0}(x_{1}^{\vee}(x_{2}),x_{2})\leq-\tfrac{9}{10}\), and \(\mathsf{D}_{1}^{3}w_{0}(x_{1}^{\vee}(x_{2}),x_{2})\geq\tfrac{9}{10}\) hold. 7. We assume 13 that there exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) and all \(x_{1}\) such that \(|x_{1}-x_{1}^{\vee}(x_{2})|\geq\varepsilon^{\frac{5}{4}}\), we have \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})\geq\mathsf{D}_{1}w_{0}(x_{1}^{\vee}(x_{2}),x_{2})+\varepsilon^{\frac{5}{4}}\). Footnote 13: Assumption ((viii)) is only used once: in the proof of Lemma 6.3; more precisely, in the proof of estimate (6.53). 8. We assume 14 that there exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) the following holds. If \(x=(x_{1},x_{2})\) is such that \(x_{1}-x_{1}^{\vee}(x_{2})\geq\varepsilon^{\frac{7}{4}}\) and \(\mathsf{D}_{1}^{2}w_{0}(x_{1},x_{2})\leq\varepsilon^{\frac{7}{8}}\), then \(\mathsf{D}_{1}w_{0}(x)\geq-\tfrac{1}{3}\) and \(\mathsf{D}_{1}^{2}w_{0}(x)\geq-1\). Symmetrically, if \((x_{1},x_{2})\) is such that \(x_{1}-x_{1}^{\vee}(x_{2})\leq-\varepsilon^{\frac{7}{4}}\) and \(\mathsf{D}_{1}^{2}w_{0}(x_{1},x_{2})\geq-\varepsilon^{\frac{7}{8}}\), then we assume that \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})\geq-\tfrac{1}{3}\) and \(\mathsf{D}_{1}^{2}w_{0}(x_{1},x_{2})\leq 1\). Finally, if \(x\in\mathbb{T}^{2}\) is such that \(\mathsf{D}_{1}w_{0}(x)\leq-\tfrac{1}{3}\), then we assume that \(\mathsf{D}_{1}^{3}w_{0}(x)\geq\tfrac{1}{3}\). 
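The two-scale intuition of footnote 12 can be checked mechanically: for data of the form \(f(\tfrac{x_{1}}{\varepsilon})g(x_{2})\), every \(\mathsf{D}\)-derivative is \(\mathcal{O}(1)\) in \(\varepsilon\), since each factor of \(\varepsilon\) in \(\mathsf{D}_{1}=\varepsilon\partial_{1}\) is absorbed by the fast variable. A small sympy sketch (ours, not part of the paper; the profile names f and g are placeholders for generic smooth functions):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
eps = sp.symbols('epsilon', positive=True)
f = sp.Function('f')             # profile in the fast variable r = x1/epsilon
g = sp.Function('g')             # profile in x2
w0 = f(x1/eps) * g(x2)           # model for w0 - kappa0, cf. footnote 12

D1 = lambda F: eps * F.diff(x1)  # D_1 = epsilon * d/dx1
D2 = lambda F: F.diff(x2)        # D_2 = d/dx2

# a mixed fifth-order D-derivative carries no powers of epsilon:
expr = D1(D1(D1(D2(D2(w0)))))
print(sp.simplify(expr))         # f'''(x1/epsilon) * g''(x2), epsilon-free
```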
Recall that \((W,Z,A,U,\Sigma)|_{t=\mathfrak{t}_{\mathfrak{n}}}=(w_{0},z_{0},a_{0},u_{0},\sigma _{0})\), \(J_{g}|_{t=\mathfrak{t}_{\mathfrak{n}}}=1\), \(h_{1,1}|_{t=\mathfrak{t}_{\mathfrak{n}}}=1\), \(h_{2,2}|_{t=\mathfrak{t}_{\mathfrak{n}}}=0\), \(g|_{t=\mathfrak{t}_{\mathfrak{n}}}=1\), \(\mathcal{N}|_{t=\mathfrak{t}_{\mathfrak{n}}}=e_{1}\), \(\mathcal{I}|_{t=\mathfrak{t}_{\mathfrak{n}}}=e_{2}\), that \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})|_{t=\mathfrak{t}_{ \mathfrak{n}}}\) may be computed from the identities (3.10), while \(V|_{t=\mathfrak{t}_{\mathfrak{n}}}\) may be computed from (3.6). Then, using the identities in Section 3 (at \(t=\mathfrak{t}_{\mathfrak{n}}\)), we may show that (4.9) and assumptions ((i)), ((ii)) imply that there exists a constant \(\mathsf{C}_{\mathsf{data}}=\mathsf{C}_{\mathsf{data}}(\alpha,\kappa_{0}, \overline{\mathsf{C}})\geq 1\), which is independent of \(\varepsilon\), such that_ \[\sum_{1\leq|\gamma|\leq 7}\varepsilon^{-\frac{5}{2}}\big{\|} \mathsf{D}^{\gamma}W(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{2}_{x}}+ \varepsilon^{-\frac{3}{2}}\big{\|}\mathsf{D}^{\gamma}(Z,A)(\cdot,\mathfrak{t}_ {\mathfrak{n}})\big{\|}_{L^{2}_{x}}+\sum_{|\gamma|\leq 5}\big{\|}\mathsf{D}^{ \gamma}W(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{\infty}_{x}}+ \varepsilon^{-1}\big{\|}\mathsf{D}^{\gamma}(Z,A)(\cdot,\mathfrak{t}_{\mathfrak{ n}})\big{\|}_{L^{\infty}_{x}}\] \[\quad+\sum_{|\gamma|\leq 6}\varepsilon^{\frac{1}{2}}\big{\|} \mathsf{D}^{\gamma}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathfrak{t}_ {\mathfrak{n}})\big{\|}_{L^{2}_{x}}+\varepsilon^{-\frac{1}{2}}\big{\|}\mathsf{D }^{\gamma}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{ \mathcal{N}})(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{2}_{x}}+ \varepsilon^{-\frac{1}{2}}\big{\|}\mathsf{D}^{\gamma}(\hat{\mathbf{Z}}_{ \mathcal{N}},\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathfrak{t}_{\mathfrak{n} })\big{\|}_{L^{\infty}_{x}}\] \[\quad+\sum_{|\gamma|\leq 4}\varepsilon^{-\frac{1}{2}}\big{\|} \mathsf{D}^{\gamma}\hat{\mathbf{W}}_{\mathcal{T}}(\cdot,\mathfrak{t}_{ \mathfrak{n}})\big{\|}_{L^{\infty}_{x}}+\varepsilon^{-\frac{3}{2}}\big{\|} \mathsf{D}^{\gamma}(\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{ \mathcal{T}})(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{2}_{x}}\] \[\quad+\sum_{|\gamma|\leq 4}\big{\|}\mathsf{D}^{\gamma}\hat{ \mathbf{W}}_{\mathcal{T}}(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{ \infty}_{x}}+\varepsilon^{-1}\big{\|}\mathsf{D}^{\gamma}(\hat{\mathbf{Z}}_{ \mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathfrak{t}_{\mathfrak{ n}})\big{\|}_{L^{\infty}_{x}}\] \[\quad+\sum_{|\gamma|\leq 6}\varepsilon^{-\frac{3}{2}}\big{\|} \mathsf{D}^{\gamma}(h,h_{2}\,,V)(\cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{ 2}_{x}}+\sum_{|\gamma|\leq 4}\varepsilon^{-1}\big{\|}\mathsf{D}^{\gamma}(h,h_{2,2}\,,V)( \cdot,\mathfrak{t}_{\mathfrak{n}})\big{\|}_{L^{\infty}_{x}}\leq\mathsf{C}_{ \mathsf{data}}\,. \tag{4.11}\] _Here we use the notation from (4.5), with \(\mathsf{D}^{\gamma}=\mathsf{D}_{t}^{\gamma_{0}}\mathsf{D}_{1}^{\gamma_{1}} \mathsf{D}_{2}^{\gamma_{2}}\), and \(\gamma\in\mathbb{N}_{0}^{3}\). We note that the gain of \(\varepsilon^{\frac{1}{2}}\) that the \(L^{2}_{x}\) bounds experience over the \(L^{\infty}_{x}\) bounds are due to the support of size \(\mathcal{O}(\varepsilon)\) in the \(x_{1}\) direction, see assumption ((i)). 
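Concretely, the support assumption (4.7) gives, for any function \(f\) with \(\operatorname{supp}f\subseteq\mathcal{X}_{\mathsf{in}}=[-13\pi\varepsilon,13\pi\varepsilon]\times\mathbb{T}\), \[\|f\|_{L^{2}_{x}}^{2}\leq|\mathcal{X}_{\mathsf{in}}|\,\|f\|_{L^{\infty}_{x}}^{2}=52\pi^{2}\varepsilon\,\|f\|_{L^{\infty}_{x}}^{2}\,,\qquad\text{so that}\qquad\|f\|_{L^{2}_{x}}\lesssim\varepsilon^{\frac{1}{2}}\|f\|_{L^{\infty}_{x}}\,,\] which is precisely this gain.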
Verifying that the bounds in (4.11) have a scaling with respect to \(\varepsilon\) that is consistent with (4.9), and also with assumptions ((i)), ((ii)), is an exercise whose details are omitted here. Throughout the paper we shall refer to (4.11) instead of (4.9), although the former follows from the latter._ **Example 4.3** (**The prototypical initial data)**.: _The prototypical example for \(w_{0}\) is of the type_ \[w_{0,\mathrm{ex}}(x_{1},x_{2})=\kappa_{0}+\varphi(\tfrac{x_{1}}{\varepsilon}) \phi(x_{2}), \tag{4.12}\] _where \(\kappa_{0}\geq 20\), and the functions \(\varphi\in C_{0}^{\infty}(\mathbb{R})\) and \(\phi\in C^{\infty}(\mathbb{T})\) have the following properties:_ * \(\varphi(r)=-r+\frac{1}{\sharp}r^{3}\) _for all_ \(|r|\leq\sqrt{2}\) _and_ \(\varphi(r)=0\) _for_ \(|r|\geq 13\pi\)_. For_ \(\sqrt{2}<|r|<13\pi\) _we take_ \(\varphi\) _such that_ \(-1\leq\mathrm{sgn}(r)\varphi(r)\leq\frac{1}{2}\)_,_ \(-\frac{1}{4}\leq\varphi^{\prime}(r)\leq\frac{1}{11}\)_, and_ \(-\frac{1}{2}\leq\mathrm{sgn}\left(r\right)\varphi^{\prime\prime}(r)\leq\frac{3} {2}\)_. For_ \(\varepsilon\leq\frac{1}{13}\)_, we may view_ \(\varphi(\tfrac{z}{\varepsilon})\) _as_ \([-\pi,\pi]\)_-periodic. It is straightforward to construct a function_ \(\varphi\) _which satisfies all the above conditions._ * \(\phi(\overline{r})=1-\frac{1}{2}\overline{r}^{2}\) _for all_ \(|\overline{r}|\leq\frac{1}{\sqrt{20}}\) _and_ \(\phi(\overline{r})=\frac{19}{20}\) _for_ \(\frac{\pi}{2}\leq|\overline{r}|\leq\pi\)_. For_ \(\frac{1}{\sqrt{20}}<|\overline{r}|<\frac{\pi}{2}\) _we take_ \(\phi\) _such that_ \(-\frac{1}{2}\leq\mathrm{sgn}(\overline{r})\phi^{\prime}(\overline{r})\leq 0\)_, and_ \(|\phi^{\prime\prime}(\overline{r})|\leq\frac{3}{2}\)_. It is straightforward to construct a function_ \(\phi\) _which satisfies all the above conditions._ _A plot of a prototypical function \(w_{0,\mathrm{ex}}\) and its derivative \(\mathsf{D}_{1}w_{0,\mathrm{ex}}\) are given in Figures 9 and 10 below. 
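The pointwise requirements of assumption ((v)) can also be verified symbolically from the local forms in (4.12); the following minimal sympy sketch (ours, not part of the paper) uses only the expressions for \(\varphi\) and \(\phi\) near the origin:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
eps, kappa0 = sp.symbols('epsilon kappa0', positive=True)
r = x1/eps
varphi = -r + r**3/6        # local form of varphi, valid for |r| <= sqrt(2)
phi    = 1 - x2**2/2        # local form of phi, valid for |x2| <= 1/sqrt(20)
w0 = kappa0 + varphi*phi    # w0_ex near x = 0, cf. (4.12)

D1 = lambda F: eps*F.diff(x1)
D2 = lambda F: F.diff(x2)
at0 = lambda F: F.subs({x1: 0, x2: 0})

print(at0(D1(w0)), at0(D2(w0)))           # -1, 0 : minimum value -1 at x = 0
print(at0(D1(D1(w0))), at0(D2(D1(w0))))   # 0, 0  : D D1 w0(0) = 0
# Hessian of D1 w0 at the origin:
print(at0(D1(D1(D1(w0)))), at0(D1(D2(D1(w0)))), at0(D2(D2(D1(w0)))))  # 1, 0, 1
```

For these exact local polynomials the Hessian of \(\mathsf{D}_{1}w_{0,\mathrm{ex}}\) at the origin is exactly the identity, consistent with the two-sided bound \((1-\varepsilon)\mathrm{Id}\leq\mathsf{D}^{2}\mathsf{D}_{1}w_{0}(0)\leq(1+\varepsilon)\mathrm{Id}\) in ((v)).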
We then verify that the function defined in (4.12) satisfies the assumptions we imposed on \(w_{0}\):_ * ((i)) _holds because_ \(\varphi(\tfrac{x_{1}}{\varepsilon})=0\) _when_ \(|x_{1}|\geq 13\pi\varepsilon\)_._ * ((ii)) _holds since_ \(\kappa_{0}\geq 20\) _and because_ \(|\varphi|\leq 1\) _and_ \(0\leq\phi\leq 1\)_._ * ((iii)) _holds for some_ \(\overline{\mathsf{C}}>0\) _because_ \(\varphi\) _and_ \(\phi\) _are_ \(C^{\infty}\) _smooth and have the correct scaling in_ \(x_{1}\) _and_ \(x_{2}\)_._ * ((iv)) _holds because_ \(-1\leq\varphi^{\prime}\leq\frac{1}{11}\)_,_ \(|\phi^{\prime}|\leq\frac{1}{2}\)_, and_ \(|\varphi^{\prime\prime}|\leq\frac{3}{2}\)_._ * ((v)) _holds because_ \(w_{0,\mathrm{ex}}(x_{1},x_{2})=\kappa_{0}-\frac{x_{1}}{\varepsilon}+\frac{1}{ 2}\frac{x_{1}}{\varepsilon}x_{2}^{2}+\frac{1}{6}\frac{x_{1}^{3}}{\varepsilon^{3}}- \frac{1}{12}\frac{x_{1}^{3}}{\varepsilon^{3}}x_{2}^{2}\)_, for all_ * ((viii)) _holds because if \(x_{1}-x_{1}^{\vee}(x_{2})=x_{1}\geq\varepsilon^{\frac{7}{4}}\), it means that \(\frac{x_{1}}{\varepsilon}\geq\varepsilon^{\frac{3}{4}}\), and since \(\phi\geq\frac{19}{20}\) we have \(\mathsf{D}_{1}^{2}w_{0,\text{ex}}(x)\leq\varepsilon^{\frac{7}{8}}\Leftrightarrow \varphi^{\prime\prime}(\frac{x_{1}}{\varepsilon})\leq\frac{20}{19}\varepsilon^{ \frac{7}{8}}\Rightarrow\frac{x_{1}}{\varepsilon}>\sqrt{2}\), since if \(\frac{x_{1}}{\varepsilon}\leq\sqrt{2}\), then \(\varphi^{\prime\prime}(\frac{x_{1}}{\varepsilon})=\frac{x_{1}}{\varepsilon} \geq\varepsilon^{\frac{3}{4}}\) which is strictly larger than \(\frac{20}{19}\varepsilon^{\frac{7}{8}}\) if \(\varepsilon\leq\varepsilon_{0}\leq(\frac{19}{20})^{8}\). But if \(\frac{x_{1}}{\varepsilon}\geq\sqrt{2}\), then \(\varphi^{\prime}(\frac{x_{1}}{\varepsilon})\geq-\frac{1}{4}\) and so \(\mathsf{D}_{1}w_{0,\text{ex}}(x)=\varphi^{\prime}(\frac{x_{1}}{\varepsilon} )\phi(x_{2})\geq-\frac{1}{4}>-\frac{1}{3}\). Moreover, \(\varphi^{\prime\prime}(\frac{x_{1}}{\varepsilon})\geq-\frac{1}{2}\) and so \(\mathsf{D}_{1}^{2}w_{0,\text{ex}}(x)=\varphi^{\prime\prime}(\frac{x_{1}}{ \varepsilon})\phi(x_{2})\geq-\frac{1}{2}\). The symmetric statement for \(x_{1}-x_{1}^{\vee}(x_{2})=x_{1}\leq-\varepsilon^{\frac{3}{4}}\) holds for the same reason. The last condition holds true because if \(-\frac{1}{3}\geq\mathsf{D}_{1}w_{0,\text{ex}}(x)=\varphi^{\prime}(\frac{x_{1}} {\varepsilon})\phi(x_{2})\), then \(\varphi^{\prime}(\frac{x_{1}}{\varepsilon})\leq-\frac{1}{3}<-\frac{1}{4}\), and hence \(|\frac{x_{1}}{\varepsilon}|\leq\sqrt{2}\). But in this region we have that \(\varphi^{\prime\prime\prime}(\frac{x_{1}}{\varepsilon})=1\), and hence \(\mathsf{D}_{1}^{3}w_{0,\text{ex}}(x)=\varphi^{\prime\prime\prime}(\frac{x_{1}} {\varepsilon})\phi(x_{2})\geq\phi(x_{2})\geq\frac{19}{20}>\frac{1}{3}\)._ _Next, we identify prototypical initial data for \(a_{0}\) and \(z_{0}\). Note that these fields only need to satisfy conditions ((ii)) and ((iii)). As such, we may for instance take_ \[z_{0,\text{ex}}(x_{1},x_{2})=0\,. 
\tag{4.13}\] _We may also take_ \[a_{0,\text{ex}}(x_{1},x_{2})=0\,,\] (4.14a) _but maybe a more interesting prototypical example of initial data for \[a_{0}\] is one for which the initial velocity \[u_{0}\] is irrotational, which is equivalent to \[\frac{1}{2}\partial_{2}w_{0}=\partial_{1}a_{0,\text{ex}}\], and this is given by_ \[a_{0,\text{ex}}(x_{1},x_{2})=\tfrac{\varepsilon}{2}\Phi(\tfrac{x_{1}}{ \varepsilon})\phi^{\prime}(x_{2})\,, \tag{4.14b}\] _where \(\phi\) is as in (4.12), and \(\Phi\) is the compactly supported primitive of the function \(\varphi\) from (4.12). That is, \(\Phi(r)=\int_{-\infty}^{r}\varphi(r^{\prime})\mathrm{d}r^{\prime}\). By choosing \(\varphi\) to be odd, we ensure that \(\Phi\) is supported in \(|r|\leq 13\pi\), ensuring condition ((i)). Moreover, condition ((ii)) holds as long as \(\kappa_{0}\geq 20\), so that (4.14b) is a permissible choice of initial data for \(a_{0}\)._ **Remark 4.4** (**An open set of initial data)**.: _We note that the initial data \((w_{0},z_{0},a_{0})\) may be taken in an open set with respect to a suitable topology. The most direct way to see this is to consider a ball of radius \(\varepsilon^{N}\) with respect to the \(H_{0}^{7}(\mathcal{X}_{\mathrm{in}})\) topology15, with \(N\) sufficiently large, centered at the prototypical functions \((w_{0,\mathrm{ex}},z_{0,\mathrm{ex}},a_{0,\mathrm{ex}})\), defined earlier in Example 4.3--see (4.12), (4.13), (4.14)--with \(\kappa_{0}>20\), and \(\varepsilon\in(0,\varepsilon_{0})\), where \(\varepsilon_{0}=\varepsilon_{0}(\alpha,\kappa_{0})\) is a sufficiently small constant. Indeed, for any function in this ball, conditions ((i))-((viii)) are satisfied, if \(N\) is taken to be sufficiently large. To see this, start with conditions ((i)) and ((iii)); these hold automatically for all functions in this ball by the definition of the \(H_{0}^{7}(\mathcal{X}_{\mathrm{in}})\) norm, upon possibly enlarging the value of \(\overline{\mathbb{C}}\). Condition ((ii)) holds because for \(\kappa_{0}>20\) the functions \((w_{0,\mathrm{ex}},z_{0,\mathrm{ex}},a_{0,\mathrm{ex}})\) satisfy these bounds with strict inequalities, and we have the Sobolev embedding \(H_{0}^{7}(\mathcal{X}_{\mathrm{in}})\subset L^{\infty}(\mathcal{X}_{\mathrm{ in}})\). Similarly, the bounds \(\mathsf{D}_{1}w_{0}\leq\frac{1}{10}\), \(|\mathsf{D}_{2}w_{0}|\leq 1\), and \(|\mathsf{D}_{1}w_{0}|\leq 2\) appearing in ((iv)) hold if \(\varepsilon\) is taken to be sufficiently small, because by construction, the function \(w_{0,\mathrm{ex}}\) satisfies these bounds with strict inequalities. The bound \(\mathsf{D}_{1}w_{0}(x)\geq-1\) holds in a vicinity of \(x=0\) due to assumption ((v)), while for \(x\) away from \(0\) it holds because there \(w_{0,\mathrm{ex}}\) satisfies this bound with a strict inequality. Similarly, the conditions on the initial data in ((vi)), ((vii)), and ((viii)) are all bounds satisfied by \(w_{0,\mathrm{ex}}\) with strict inequalities, and hence small enough smooth perturbations will also satisfy these bounds. It thus remains to discuss assumption ((v)) on the initial data. It is clear that arbitrary small perturbations of \(w_{0,\mathrm{ex}}\) may not anymore attain their global minimum exactly at \(x=0\), or this minimum may not anymore equal exactly \(-1\), or we may not anymore have the exact equality \(\mathsf{D}_{2}w_{0}=0\) or \(\mathsf{D}^{2}\mathsf{D}_{1}w_{0}=\mathrm{Id}\) at this global minimum. 
Nonetheless, as we have previously discussed in [7, Section 13.7], we may use the Galilean symmetry group and the scaling invariance of the Euler system to relax the pointwise constraints in ((v)). For instance, small and smooth perturbations of \(w_{0,\mathrm{ex}}\) will attain their global minimum at a point near \(0\), which may then be shifted to be exactly at \(0\) using translational invariance. An affine transformation of space may then be used to ensure that \(\mathsf{D}_{2}w_{0}\) and \(\mathsf{D}_{1}w_{0}\) vanish at this point, and scaling may be used to enforce that \(\mathsf{D}_{1}w_{0}\) equals \(-1\). Our condition on the Hessian of \(\mathsf{D}_{1}w_{0}\) is already an open condition, so it will be automatically satisfied for small perturbations. This concludes the proof of the fact that all smooth and sufficiently small perturbations of the prototypical functions constructed in Example 4.3 satisfy all the assumptions on the initial data: ((i))-((viii)). Footnote 15: Alternatively, we may take perturbations in the standard \(H^{7}(\mathbb{T}^{2})\) topology which are sufficiently small. Such perturbations can potentially destroy the support assumption (4.7). Nonetheless, because these perturbations are infinitesimal, and because the Euler system has finite speed of propagation, a suitable cutting procedure, and the classical local well-posedness theorem for the Euler system reduces the problem to studying a “cut” or “localized” initial data, which now does satisfy the support assumption (4.7). We refer to [7, Section 13.7] for the detailed argument. **Remark 4.5** (**Notation: usage of \(\lesssim\) and the dependence of of generic constants)**.: _Throughout the paper we shall write \(A\lesssim B\) to mean that there exists a constant \(\hat{C}\geq 1\) such that \(A\leq\hat{C}B\), where \(\hat{C}\) is allowed to depend only on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), but be independent of \(\varepsilon\). Throughout the paper we use \(\hat{C}\) to denote a sufficiently large constant which depends only on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and which may change (increase) from line to line. We emphasize that \(\hat{C}\) is never allowed to depend on \(\varepsilon\). Since \(\varepsilon\) will be chosen to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we frequently write inequalities of the type \(\varepsilon\hat{C}\leq 1\)._ ### Main results The following three theorems are the main results of this paper. Theorem 4.6 concerns the process of shock formation, and is proven in Sections 5-12. Theorem 4.7 concerns the spacetime of downstream maximal Cauchy hyperbolic development of the initial data, and is proven in Section 13. Theorem 4.8 concerns the spacetime of upstream maximal Cauchy hyperbolic development of the initial data, and is proven in Section 14. See Figure 4 above. Additional optimal bounds for velocity, sound speed, and ALE map are reported in Section 15 in all cases. **Theorem 4.6** (**Shock formation and the set of pre-shocks)**.: _Fix \(\alpha=\frac{\gamma-1}{2}>0\), where \(\gamma>1\) is the adiabatic exponent. Let \(\kappa_{0}\geq 20\) and \(\overline{\mathbb{C}}\geq 1\) be two arbitrary constants. Then, there exists a sufficiently small \(\varepsilon_{0}=\varepsilon_{0}(\alpha,\kappa_{0},\overline{\mathbb{C}})\in(0,1]\) such that for every \(\varepsilon\in(0,\varepsilon_{0}]\) the following holds. 
If the initial data \((u_{0},\sigma_{0})\)-or equivalently, \((w_{0},z_{0},a_{0})\) cf. (4.6)-of the Euler equations at time \(t=\mathsf{t}_{\mathsf{in}}\) (cf. (4.1)) satisfies assumptions ((i))-((vii)) with parameters \((\alpha,\kappa_{0},\overline{\mathbb{C}},\varepsilon)\), then there exists a spacetime \(\mathcal{P}\) and a time-dependent family of diffeomorphisms \(\psi(\cdot,t)\colon\mathcal{P}\cap\{t\}\to\mathbb{R}^{2}\) such that the following hold:_ 1. _There exists a unique classical solution_ \((u,\sigma)\) _of the Cauchy problem for the Euler equations (_1.2_) in the spacetime_ \(\mathcal{P}_{\mathsf{Eulerian}}:=\{(\psi(x,t),t)\colon(x,t)\in\mathcal{P}\}\)_, with data_ \((u_{0},\sigma_{0})\)_. The solution_ \((u,\sigma)\) _is as smooth as the initial data, i.e., it does not lose derivatives._ 2. _Each diffeomorphism_ \(\psi(\cdot,t)\) _is invertible with_ \(\det(\nabla\psi)>0\) _on_ \(\mathcal{P}\)_, for every_ \(t\) _the map_ \(x\mapsto\psi(x,t)-x\) _is_ \(\mathbb{T}^{2}\)_-periodic, and_ \(\psi\) _is as smooth as the initial data, i.e., it does not lose derivatives._ 3. _The map_ \(\psi\) _defines a smooth ALE coordinate system (_2.7_) on_ \(\mathcal{P}\)_, with associated smooth normal & tangent vectors_ \(\mathcal{N}\) & \(\tau\) _defined via (_2.6_), and smooth metric-normalized Jacobian determinant_ \(J_{g}\approx\det(\nabla\psi)\) _defined via (_2.12_). This ALE coordinate system flattens every fast acoustic characteristic surface and allows us to characterize_ \(\mathcal{P}=\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},\mathsf{t}_ {\mathsf{med}}]\colon\min_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)>0\}\)_, cf. (_5.11_), where_ \(\mathsf{t}_{\mathsf{med}}\) _is given by (_4.3_)._ _The spacetime_ \(\mathcal{P}\) _describes the Euler evolution for an_ \(\mathcal{O}(\varepsilon)\) _amount of time past the "very first" singularity and satisfies_ \(\mathcal{P}\subset\mathbb{T}^{2}\times[\mathfrak{t}_{\mathfrak{t}},\mathfrak{t} _{\mathsf{med}}]\)_._ 4. _The "top" boundary (future temporal boundary) of_ \(\mathcal{P}\)_, i.e.,_ \(\partial_{\mathsf{top}}\mathcal{P}=\{(x,t)\in\mathbb{T}^{2}\times[\mathfrak{t} _{\mathfrak{t}},\mathfrak{t}_{\mathsf{med}}]\colon\,\min_{x_{1}\in\mathbb{T}}J _{\sigma}=0\}\) _contains the set of pre-shocks_ \(\Xi^{*}\)_, which parametrizes a cascade of first gradient catastrophes, resulting from the distance between fast acoustic characteristic surfaces collapsing to zero. The set of pre-shocks is a smooth co-dimension-_\(2\) _subset of spacetime (see Definition_ 6.6_) characterized as the intersection of two co-dimension-_\(1\) _surfaces:_ \(\Xi^{*}=\{(x,t)\in\mathbb{T}^{2}\times[\mathfrak{t}_{\mathfrak{t}},\mathfrak{t} _{\mathsf{med}}]\colon\,J_{\sigma}(x,t)=0\}\cap\{(x,t)\in\mathbb{T}^{2}\times[ \mathfrak{t}_{\mathfrak{t}},\mathfrak{t}_{\mathsf{med}}]\colon\partial_{1}J _{\sigma}(x,t)=0\}\subset\partial_{\mathsf{top}}\mathcal{P}\)_._ 5. _The ALE coordinate system allows us to define, via (_3.2b_) and (_3.5_), a new set of smooth multi-dimensional differentiated geometric Riemann variables_ \((\mathbf{\hat{W}},\mathbf{\hat{Z}},\mathbf{\hat{A}})\) _whose time evolution is given by (_3.22_). On the spacetime_ \(\mathcal{P}\) _the Euler equations (_1.2_) for_ \((u,\sigma)\) _are equivalent to the evolution of the differentiated geometric Riemann variables, sound speed, and of the geometry itself, via (_3.22_), (_3.19b_), (_3.14_), and (_3.15_)._ 6. 
_The unique solution_ \((\mathbf{\hat{W}},\mathbf{\hat{Z}},\mathbf{\hat{A}},\Sigma,\mathcal{N}, \mathcal{\tau},J_{\sigma})\) _of this ALE-Euler system of equations-(_3.22_), (_3.19b_), (_3.14_), (_3.15_)-on the spacetime_ \(\mathcal{P}\) _maintains uniform_ \(H^{6}\) _Sobolev bounds throughout the cascade of gradient catastrophes emerging on_ \(\Xi^{*}\subset\overline{\partial_{\mathsf{top}}\mathcal{P}}\)_. These Sobolev estimates propagate the regularity of the initial data, there is no derivative loss. The precise pointwise and energy estimates for_ \((\mathbf{\hat{W}},\mathbf{\hat{Z}},\mathbf{\hat{A}},\Sigma,\mathcal{N}, \mathcal{\tau},J_{\sigma})\) _are found in the bootstraps (_5.37_), the geometry bounds (_7.1_), the improved estimates (_8.21_), (_8.22_), and (_11.2_), and in the optimal_ \(H^{7}\) _regularity bounds for_ \(u\circ\psi\)_,_ \(\sigma\circ\psi\)_, and_ \(\psi\) _reported in (_15.1_)._ 7. _No gradient singularity occurs at points in the closure of_ \(\mathcal{P}\) _which are away from the curve of pre-shocks. That is, for_ \((x_{*},t_{*})\in\partial_{\mathsf{top}}\mathcal{P}\setminus\Xi^{*}\) _we have_ \(\lim_{\mathcal{P}\ni(x,t)\to(x_{*},t_{*})}(|\nabla u|,|\nabla\sigma|)\circ\psi( x,t)<+\infty\)_. On the other hand, for_ \((x_{*},t_{*})\in\Xi^{*}\) _exactly one component of_ \((\nabla u)\circ\psi\) _and one component of_ \((\nabla\sigma)\circ\psi\) _blows up at_ \((x_{*},t_{*})\)_. With_ \(n=\mathcal{N}\circ\psi^{-1}\) _and_ \(\tau=\tau\circ\psi^{-1}\)_, we have that_ \(\lim_{\mathcal{P}\ni(x,t)\to(x_{*},t_{*})}(|\tau\cdot\partial_{n}u|,|\partial _{\tau}u|,|\mathrm{curl}\,u|,|\partial_{\tau}\sigma|)\circ\psi(x,t)<+\infty\) _and also_ \(\lim_{\mathcal{P}\ni(x,t)\to(x_{*},t_{*})}(n\cdot\partial_{n}u,\mathrm{div}\,u, \partial_{n}\sigma)\circ\psi(x,t)=-\lim_{\mathcal{P}\ni(x,t)\to(x_{*},t_{*})}J _{\sigma}^{-1}(x,t)=-\infty\)_. That is, the singularities emerging on_ \(\Xi^{*}\) _are pre-shocks, and there are no other singularities on the closure of_ \(\mathcal{P}\)_._ 8. _With respect to the usual Eulerian variables_ \((y,t)\)_, the solution_ \((u,\sigma)\) _inherits the_ \(H^{7}\) _regularity from_ \(U=u\circ\psi\)_,_ \(\Sigma=\sigma\circ\psi\)_, and the_ \(H^{7}\) _invertible map_ \(\psi\)_, in the interior of the spacetime_ \(\mathcal{P}_{\mathsf{Eulerian}}=\{(y,t)\colon y=\psi(x,t),(x,t)\in\mathcal{P}\}\)_. In particular,_ \((u,\sigma)\in C_{t}^{0}C_{y}^{5}\cap C_{t}^{5}C_{y}^{0}\) _is a classical solution of the Cauchy problem for the Euler equations in the interior of_ \(\mathcal{P}_{\mathsf{Eulerian}}\)_. The "top" boundary of the spacetime_ \(\mathcal{P}_{\mathsf{Eulerian}}\) _contains the Eulerian curve of pre-shocks defined as_ \(\Xi^{*}_{\mathsf{Eulerian}}:=\{(y,t)\colon y=\psi(x,t),(x,t)\in\Xi^{*}\}\)_. We have that_ \(|\nabla u|\) _and_ \(|\nabla\sigma|\) _remain bounded as we approach boundary points away from the curve of pre-shocks. 
As we approach points on the co-dimension-_\(2\) _set of pre-shocks,_ \(n\cdot\partial_{n}u\)_,_ \(\mathrm{div}\,u\)_, and_ \(\partial_{n}\sigma\) _diverge towards_ \(-\infty\) _at a rate proportional to the spacetime distance to_ \(\Xi^{*}_{\mathsf{Eulerian}}\)_, while_ \(\tau\cdot\partial_{n}u\)_,_ \(\partial_{\tau}u\)_,_ \(\mathrm{curl}\,u\)_, and_ \(\partial_{\tau}\sigma\) _remain uniformly bounded._ **Theorem 4.7** (**Downstream maximal development)**.: _Let \(0<\varepsilon\leq\varepsilon_{0}(\alpha,\kappa_{0},\overline{\mathsf{C}})\) be as in Theorem 4.6, and assume that the initial data \((w_{0},z_{0},a_{0})\) satisfies the same assumptions as in Theorem 4.6. If \(w_{0}\) furthermore satisfies (_(viii)_), then there exists a spacetime \(\mathcal{P}^{\sharp}\) and a family of diffeomorphisms \(\psi(\cdot,t)\colon\mathcal{P}^{\sharp}\cap\{t\}\to\mathbb{R}^{2}\) such that \(\mathcal{P}^{\sharp}\supset\mathcal{P}\), \(\psi\big{|}_{\mathcal{P}}\) is the same as the diffeomorphism \(\psi\) from Theorem 4.6, and such that the following hold:_ 1. _There exists a unique classical solution_ \((u,\sigma)\) _of the Cauchy problem for the Euler equations (_1.2_) in the spacetime_ \(\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}:=\{(\psi(x,t),t)\colon(x,t)\in\mathcal{P}^ {\sharp}\}\)_, with data_ \((u_{0},\sigma_{0})\)_. The solution_ \((u,\sigma)\) _is as smooth as the initial data, i.e., it does not lose derivatives, and_ \((u,\sigma)\big{|}_{\mathcal{P}}\) _is the same as the solution_ \((u,\sigma)\) _of Theorem_ 4.6_._ 2. _Each diffeomorphism_ \(\psi(\cdot,)\) _is invertible with_ \(\det(\nabla\psi)>0\) _on_ \(\mathcal{P}^{\sharp}\)_, for every_ \(t\) _the map_ \(x\mapsto\psi(x,t)-x\) _is_ \(\mathbb{T}^{2}\)_-periodic, and_ \(\psi\) _is as smooth as the initial data, i.e., it does not lose derivatives. As in Theorem_ 4.6_, the diffeomorphism_ \(\psi\) _defines a smooth ALE coordinate system on_ \(\mathcal{P}^{\sharp}\)_, with associated smooth normal & tangent vectors_ \(\mathcal{N}\) _&_ \(\tau\)_, and smooth metric-normalized Jacobian determinant_ \(J_{\sigma}\approx\det(\nabla\psi)\)_, which flattens every fast acoustic characteristic surface._ 3. _There exists a co-dimension-_\(1\) _surface_ \(\Pi\) _parametrized as_ \(\Pi=\{(x_{1}^{*}(x_{2},t),x_{2},t)\colon(\cdot,x_{2},t)\in\mathcal{P}\}\) _such that the co-dimension-_\(2\) _surface of pre-shocks defined in Theorem_ 4.6 _is given by_ \(\Xi^{*}=\Pi\cap\partial_{\mathsf{top}}\mathcal{P}\)_, and such that_ \(\Pi\subset\{J_{\sigma,1}=0\}\) _(cf. (_5.12_) and (_5.13_)). We say that a point_ \((x,t)\) _lies upstream of the surface_ \(\Pi\) _if_ \(x_{1}<x_{1}^{*}(x_{2},t)\)_, and write this as_ \((x,t)\in\Pi_{-}\)_. We say that a point_ \((x,t)\) _lies downstream of_ \(\Pi\) _if_ \(x_{1}>x_{1}^{*}(x_{2},t)\)_, and we write this as_ \((x,t)\in\Pi_{+}\)_._ 4. _The spacetime_ \(\mathcal{P}^{\sharp _The new results in the present theorem (when compared to Theorem_ 4.6_) concern the downstream part of_ \(\mathcal{P}^{\sharp}\)_. The Euler evolution within the spacetime_ \(\mathcal{P}^{\sharp}\cap\Pi_{+}\) _is the maximal hyperbolic development of the Cauchy data_ \((u_{0},\sigma_{0})\)_, within_ \(\Pi_{+}\)_. 
In the closure of the upstream region,_ \(\psi\) _is the same as in Theorem_ 4.6 _and moreover all results from Theorem_ 4.6 _apply as is._ * _The "top" boundary (future temporal boundary) of_ \(\mathcal{P}^{\sharp}\) _has global_ \(W^{2,\infty}\) _regularity, and is smooth (the level set of an_ \(H^{6}\) _function) on either side of the set of pre-shocks_ \(\Xi^{*}\)_, which lies at the intersection of_ \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\) _with the surface_ \(\Pi\)_. A surface of fast acoustic characteristic singularities smoothly connects to the set of pre-shocks in the downstream part of the "top" boundary of_ \(\mathcal{P}^{\sharp}\)_. This is the co-dimension-_\(1\) _surface given explicitly as_ \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}=\{(x,t)\in\mathbb{T}^{2}\times[\mathfrak{t}_{\mathfrak{in}},\mathfrak{t}_{\mathsf{med}}]\colon J_{g}(x,t)=0,x_{1}>x_{1}^{*}(x_{2},t)\}\)_. Since_ \(J_{g}\approx\det(\nabla\psi)\)_, this surface parametrizes gradient catastrophes resulting from impinging fast acoustic characteristic surfaces in_ \(\Pi_{+}\)_, i.e., it is the "envelope" of the spacetime in which the fast acoustic characteristic surfaces remain in one-to-one correspondence with the initial foliation of spacetime._ * _In the spacetime_ \(\mathcal{P}^{\sharp}\)_, the smooth evolution (_3.22_) of the differentiated geometric Riemann variables_ \((\mathbf{\hat{W}},\mathbf{\hat{Z}},\mathbf{\hat{A}})\)_, together with the evolution of the sound speed and the geometry in (_3.19_), (_3.14_), and (_3.15_), is equivalent to the Euler equations (_1.2_) for_ \((u,\sigma)\)_. The unique solution_ \((\mathbf{\hat{W}},\mathbf{\hat{Z}},\mathbf{\hat{A}},\Sigma,\mathcal{N},\mathcal{T},J_{g})\) _of this ALE-Euler system of equations maintains uniform_ \(H^{6}\) _Sobolev bounds on the spacetime_ \(\mathcal{P}^{\sharp}\)_. These Sobolev estimates propagate the regularity of the initial data, i.e., there is no derivative loss. The precise pointwise and energy estimates are found in the bootstrap bounds (_13.37_), the geometry bounds in Proposition_ 13.9_, the improved estimates (_13.48_) and (_13.57_), and in the optimal_ \(H^{7}\) _regularity bounds for_ \(u\circ\psi\)_,_ \(\sigma\circ\psi\)_, and_ \(\psi\) _reported in (_15.1_)._ * _Gradient singularities occur at every point which lies in the downstream part of the "top" boundary of_ \(\mathcal{P}^{\sharp}\)_. That is, for all_ \((x_{*},t_{*})\in\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}\) _we have that_ \(\lim_{\mathcal{P}^{\sharp}\ni(x,t)\to(x_{*},t_{*})}(|\tau\cdot\partial_{n}u|,|\partial_{\tau}u|,|\mathrm{curl}\,u|,|\partial_{\tau}\sigma|)\circ\psi(x,t)<+\infty\) _and_ \(\lim_{\mathcal{P}^{\sharp}\ni(x,t)\to(x_{*},t_{*})}(n\cdot\partial_{n}u,\mathrm{div}\,u,\partial_{n}\sigma)\circ\psi(x,t)=-\lim_{\mathcal{P}^{\sharp}\ni(x,t)\to(x_{*},t_{*})}J_{g}^{-1}(x,t)=-\infty\)_. The same type of gradient singularities occur on the set of pre-shocks_ \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi\)_. 
_There are no gradient singularities on the upstream part of the "top" boundary of \(\mathcal{P}^{\sharp}\), i.e., on \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{-}\)._

(h) _With respect to the Eulerian variables \((y,t)\), the solution \((u,\sigma)\) inherits the \(H^{7}\) regularity from \(U=u\circ\psi\), \(\Sigma=\sigma\circ\psi\), and the \(H^{7}\) invertible map \(\psi\), in the interior of the spacetime \(\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}=\{(y,t)\colon y=\psi(x,t),(x,t)\in\mathcal{P}^{\sharp}\}\). In particular, \((u,\sigma)\in C^{0}_{t}C^{5}_{x}\cap C^{5}_{t}C^{0}_{x}\) is a classical solution of the Cauchy problem for the Euler equations in the interior of \(\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}\). The "top" boundary of the spacetime \(\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}\) has global \(W^{2,\infty}\) regularity and is smooth on either side of the Eulerian curve of pre-shocks \(\Xi^{*}_{\mathsf{Eulerian}}\). Gradient singularities occur at all points which lie on the downstream part of \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}\). Here, \(n\cdot\partial_{n}u\), \(\mathrm{div}\,u\), and \(\partial_{n}\sigma\) diverge towards \(-\infty\) at a rate inversely proportional to the spacetime distance to \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}_{\mathsf{Eulerian}}\), while \(\tau\cdot\partial_{n}u\), \(\partial_{\tau}u\), \(\mathrm{curl}\,u\), and \(\partial_{\tau}\sigma\) remain bounded._

**Theorem 4.8** (Upstream maximal development).: _Fix \(\alpha=\frac{\gamma-1}{2}>0\) for \(\gamma>1\). Let \(\kappa_{0}\geq 20\) be large enough with respect to \(\alpha\) to ensure that (14.38) holds. Let \(\overline{\mathsf{C}}\geq 1\) and \(\delta\in(0,\frac{1}{2}]\) be arbitrary. Then, there exists a sufficiently small \(\varepsilon_{0}=\varepsilon_{0}(\alpha,\kappa_{0},\overline{\mathsf{C}},\delta)\in(0,1]\) such that for every \(\varepsilon\in(0,\varepsilon_{0}]\) the following holds. If the initial data \((u_{0},\sigma_{0})\), or equivalently \((w_{0},z_{0},a_{0})\), of the Euler equations at time \(t=\mathsf{t_{in}}\) satisfies assumptions (i)-(viii) with parameters \((\alpha,\kappa_{0},\overline{\mathsf{C}},\varepsilon)\), then there exists a spacetime \(\hat{\mathcal{H}}^{\delta}\) and a time-dependent family of diffeomorphisms \(\psi(\cdot,t)\colon\hat{\mathcal{H}}^{\delta}\cap\{t\}\to\mathbb{R}^{2}\), such that \(\psi\big|_{\hat{\mathcal{H}}^{\delta}\cap\mathcal{P}}\) is the same as the diffeomorphism \(\psi\) from Theorem 4.6, and such that the following hold:_

(a) _There exists a unique classical solution \((u,\sigma)\) of the Cauchy problem for the Euler equations (1.2) in the spacetime \(\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}:=\{(\psi(x,t),t)\colon(x,t)\in\hat{\mathcal{H}}^{\delta}\}\), with data \((u_{0},\sigma_{0})\). The solution \((u,\sigma)\) is as smooth as the initial data, i.e., it does not lose derivatives, and \((u,\sigma)\big|_{\hat{\mathcal{H}}^{\delta}\cap\mathcal{P}}\) is the same as the solution \((u,\sigma)\) of Theorem 4.6._

(b) _Each diffeomorphism \(\psi(\cdot,t)\) is invertible with \(\det(\nabla\psi)>0\) on \(\hat{\mathcal{H}}^{\delta}\), for every \(t\) the map \(x\mapsto\psi(x,t)-x\) is \(\mathbb{T}^{2}\)-periodic, and \(\psi\) is as smooth as the initial data, i.e., it does not lose derivatives._
_As in Theorem 4.6, the diffeomorphism \(\psi\) defines a smooth ALE coordinate system on \(\hat{\mathcal{H}}^{\delta}\), with associated smooth normal and tangent vectors \(\mathcal{N}\) and \(\tau\), and smooth metric-normalized Jacobian determinant \(J_{g}\approx\det(\nabla\psi)\), which flattens every fast acoustic characteristic surface._

(c) _Upstream of the surface \(\Pi\) defined in Theorem 4.7, item (c), the spacetime \(\Pi_{-}\) is foliated by slow acoustic characteristic surfaces which emanate from \(\Pi=\{x_{1}=x_{1}^{*}(x_{2},t)\}\), at least locally for \(x_{1}<x_{1}^{*}(x_{2},t)\). The portion of the spacetime of maximal Cauchy development of the initial data \((u_{0},\sigma_{0})\) which lies within \(\Pi_{-}\) has as "top" boundary (future temporal boundary) the unique slow acoustic characteristic surface emanating from the set of pre-shocks \(\Xi^{*}\)._

(d) _The spacetime \(\hat{\mathcal{H}}^{\delta}\) is characterized as follows. For \(\delta\in(0,\frac{1}{2}]\) arbitrary, we consider (cf. (14.3)) \(\delta\)-approximate slow acoustic characteristic surfaces \(\{(x,\Theta^{\delta}(x,t))\}\). Among these surfaces there exists a unique and smooth distinguished \(\delta\)-approximate slow acoustic characteristic surface \(\{(x,\overline{\Theta^{\delta}}(x))\}\), which emanates from the set of pre-shocks \(\Xi^{*}\). The spacetime \(\hat{\mathcal{H}}^{\delta}\) is then characterized as \(\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\colon t<\overline{\Theta^{\delta}}(x)\}\). The new results in this theorem concern the upstream part of \(\hat{\mathcal{H}}^{\delta}\), i.e. \(\hat{\mathcal{H}}^{\delta}\cap\Pi_{-}\). In the downstream region \(\hat{\mathcal{H}}^{\delta}\cap\Pi_{+}\), the spacetime considered in Section 14 is a strict subset of the spacetime \(\mathcal{P}\) from Theorem 4.6, and all the results in Theorem 4.6 still apply, as is._

(e) _The "top" boundary of \(\hat{\mathcal{H}}^{\delta}\) is the surface \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}=\{(x,\min\{\overline{\Theta^{\delta}}(x),\mathsf{t_{fin}}\})\}\), and the set of pre-shocks is embedded in this future temporal boundary, as \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\cap\Pi=\Xi^{*}\). As \(\delta\to 0^{+}\), the surface \(\{(x,\overline{\Theta^{\delta}}(x))\}\) converges precisely to the slow acoustic characteristic surface emanating from the curve of pre-shocks, so that in the limit \(\delta\to 0^{+}\) we recover the entire upstream part of the spacetime of maximal Cauchy development._

(f) _In the spacetime \(\hat{\mathcal{H}}^{\delta}\) the smooth evolution (3.22) of the differentiated geometric Riemann variables \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\), together with the evolution of the sound speed and the geometry in (3.19), (3.14), and (3.15), is equivalent to the Euler equations (1.2) for \((u,\sigma)\). The unique solution \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\Sigma,\mathcal{N},\tau,J_{g})\) of this ALE-Euler system of equations maintains uniform \(H^{6}\) Sobolev bounds on the spacetime \(\hat{\mathcal{H}}^{\delta}\). These Sobolev estimates propagate the regularity level of the initial data, i.e., there is no derivative loss._
_The precise pointwise and energy estimates are found in the bootstrap bounds (14.132), the geometry bounds in Proposition 14.8, the improved estimates in (14.190), (14.193), and (14.194), and in the optimal \(H^{7}\) regularity bounds for \(u\circ\psi\), \(\sigma\circ\psi\), and \(\psi\) from (15.1)._

(g) _No gradient singularity occurs at points in the closure of \(\hat{\mathcal{H}}^{\delta}\) which are away from the curve of pre-shocks. That is, for \((x_{*},t_{*})\in\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\setminus\Xi^{*}\) we have \(\lim_{\hat{\mathcal{H}}^{\delta}\ni(x,t)\to(x_{*},t_{*})}(|\nabla u|,|\nabla\sigma|)\circ\psi(x,t)<+\infty\). A different kind of singular phenomenon occurs in the upstream part of \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\), in the limit as \(\delta\to 0\): the ALE diffeomorphism \(\psi\) cannot be extended beyond this Cauchy horizon in a unique and smooth fashion._

(h) _With respect to the Eulerian variables \((y,t)\), the solution \((u,\sigma)\) inherits the \(H^{7}\) regularity from \(U=u\circ\psi\), \(\Sigma=\sigma\circ\psi\), and the \(H^{7}\) invertible map \(\psi\), in the interior of the spacetime \(\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}=\{(y,t)\colon y=\psi(x,t),(x,t)\in\hat{\mathcal{H}}^{\delta}\}\). In particular, \((u,\sigma)\in C^{0}_{t}C^{5}_{x}\cap C^{5}_{t}C^{0}_{x}\) is a classical solution of the Cauchy problem for the Euler equations in the interior of \(\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}\). The "top" boundary of \(\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}\) is smooth and the only gradient singularities occur on the Eulerian curve of pre-shocks \(\Xi^{*}_{\mathsf{Eulerian}}\), which is embedded in \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}\)._

### The proofs of the main theorems

The remainder of the paper contains the proofs of Theorems 4.6, 4.7, and 4.8. The proofs of these theorems paint a much more detailed picture than what is summarized in the statements above, which only mention the highlights. Here we provide a roadmap for the structure of these proofs (including the necessary forward references). The precise details are given in subsequent sections.

#### 4.4.1. The proof of Theorem 4.6

Assume that the initial data \((w_{0},z_{0},a_{0})=(u_{0}\cdot e_{1}+\sigma_{0},u_{0}\cdot e_{1}-\sigma_{0},u_{0}\cdot e_{2})\) satisfies conditions (i)-(vii) from Section 4.2, for some parameters \(\alpha\), \(\kappa_{0}\), \(\overline{\mathsf{C}}\) (independent of \(\varepsilon\)), and \(\varepsilon>0\). As discussed in Remark 4.4, this constitutes an open set of initial data. These assumptions in particular give \((u_{0},\sigma_{0})\in H^{7}(\mathbb{T}^{2})\), and the initial density is bounded away from vacuum. By the classical local well-posedness theory for the isentropic Euler system in Sobolev spaces, we know that there exists a sufficiently small time \(T>\mathsf{t_{in}}\) and a unique classical solution \((u,\sigma)\in C^{0}([\mathsf{t_{in}},T];H^{7}(\mathbb{T}^{2}))\) of the Euler equations (1.2), such that all bounds on this solution are inherited from the initial data, up to a deterioration/magnification factor of \(1+\varepsilon\) for all norms.
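For later reference, note that the map between the Riemann variables and the physical variables is affine and explicitly invertible; writing out the inversion (an elementary restatement of the definition above):
\[
u_{0}\cdot e_{1}=\tfrac{1}{2}(w_{0}+z_{0})\,,\qquad\sigma_{0}=\tfrac{1}{2}(w_{0}-z_{0})\,,\qquad u_{0}\cdot e_{2}=a_{0}\,,
\]
so that assumptions on \((w_{0},z_{0},a_{0})\) translate directly into assumptions on \((u_{0},\sigma_{0})\), and vice versa.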
On \(\mathbb{T}^{2}\times[\mathsf{t_{in}},T]\) we may define as in Section 2 the Arbitrary-Lagrangian-Eulerian (ALE) coordinates \((\psi(x,t),t)=(h(x_{1},x_{2},t),x_{2},t)\) adapted to the geometry of the fast acoustic characteristics. The family of diffeomorphisms \(\psi(\cdot,t)\), which evolves according to (2.11), induces a normal (\(\mathcal{N}\)) and tangent (\(\tau\)) vector according to (2.6), and a metric-normalized Jacobian determinant \(J_{g}\) according to (2.12). In terms of this ALE geometry, we may then define as in Section 3 a new set of differentiated multidimensional Riemann variables \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\), according to (3.2b) and (3.5). Since on \(\mathbb{T}^{2}\times[\mathsf{t_{in}},T]\) we are dealing with \(C^{2}_{x,t}\) functions, and since the map \(\psi\) is invertible (\(\det(\nabla\psi)\approx J_{g}\) is bounded from below by a strictly positive number), by the construction of these differentiated Riemann variables we have that their evolution in (3.22), together with the evolution of the rescaled sound speed (3.19b) and that of the geometry (3.14)-(3.15), is in fact equivalent to the original Euler evolution in (1.2). The above described short-time analysis may then be extended to a larger spacetime via a classical continuation argument (which relies on local well-posedness of the Euler system and on finite speed of propagation) if we are able to guarantee that in this extended spacetime all the unknowns in the problem retain the regularity of the initial data (in this case, \(H^{7}\) regularity for \(U=u\circ\psi\), \(\Sigma=\sigma\circ\psi\), and \(\psi\) itself, and \(H^{6}\) regularity for the geometric quantities \(\mathcal{N},\tau,J_{g}\) and for the differentiated Riemann variables \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\)), and if we are able to show that in this larger spacetime the family of diffeomorphisms \(\psi\) remains invertible (that is, \(J_{g}>0\)). A rigorous implementation of this continuation argument requires quantitative bounds on all unknowns in the problem, which we establish via a series of "bootstrap inequalities" for the solutions \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\Sigma,\mathcal{N},\tau,J_{g})\) of (3.22), (3.19b), (3.14)-(3.15). These bootstrap inequalities (see the inequalities in (5.37) below) consist of pointwise bounds for the fields \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\Sigma,\mathcal{N},\tau,J_{g})\) and their derivatives with respect to space and time, and of \(L^{2}\)-based energy bounds for derivatives up to order six of \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\mathcal{N},\tau,J_{g})\) (the same as the regularity of these fields at the initial time). These bootstrap inequalities are stated either in the ALE spacetime coordinates \((x,t)\), or equivalently using a set of "flattened" \((x,\mathsf{s})\) spacetime coordinates (given by (5.18b) and (5.20)), which are more convenient to use for energy estimates (see also Remark 5.3 below). The spacetime \(\mathcal{P}\) mentioned in Theorem 4.6 is then defined as a cylinder (meaning, it is invariant under translations in the \(x_{1}\)-variable) in which the map \(\psi(\cdot,t)\) remains invertible, and which is quantified as \(\mathcal{J}(x_{2},t)=\min_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)>0\) (see (5.11)).
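In schematic terms (a caricature only; the actual scheme uses the specific norms and constants of (5.37)), the bootstrap/continuation argument has the following structure: if \(t\mapsto\mathcal{B}(t)\) denotes a continuous collection of norms, and
\[
\mathcal{B}(\mathsf{t_{in}})\leq\tfrac{1}{2}B\,,\qquad\text{while}\qquad\Big(\mathcal{B}\leq B\ \text{on}\ [\mathsf{t_{in}},t]\ \Longrightarrow\ \mathcal{B}\leq\tfrac{3}{4}B\ \text{on}\ [\mathsf{t_{in}},t]\Big)\,,
\]
then the set of times \(t\) for which \(\mathcal{B}\leq B\) holds on \([\mathsf{t_{in}},t]\) is nonempty, closed (by continuity of \(\mathcal{B}\)), and open (since the improved bound \(\mathcal{B}\leq\tfrac{3}{4}B\) leaves room to extend), and is therefore the entire time interval under consideration.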
We note that the "top" boundary of the spacetime \(\{\mathcal{J}>0\}\) intersects the final time slice mentioned in Theorem 4.6, namely \(\{t=\mathsf{t_{med}}\}\), in a Lipschitz (as opposed to \(H^{6}\)-smooth) fashion. In order to work with a spacetime which is as smooth as possible, in our case the zero level set of a \(H^{6}\) function, in Section 4.1 we have included one more time-slice denoted by \(\{t=\mathsf{t_{fin}}\}\). On \(\mathbb{T}^{2}\times[\mathsf{t_{med}},\mathsf{t_{fin}}]\), which is a spacetime beyond the scope of Theorem 4.6, we smoothly extend the Euler evolution in an artificial way (by working with the function \(\overline{J}_{g}\) defined in (5.4a), instead of the natural Jacobian determinant \(J_{\sigma}\)), in order to ensure a smooth termination of our spacetime before the slice \(\{t=\mathsf{t_{fin}}\}\). While this extension is not seen in the statement of Theorem 4.6, its use is very convenient for the proof, as it for instance ensures the flattening map \((x,t)\mapsto(x,\mathsf{s})\) given by (5.18b) retains maximal regularity, instead of being merely Lipschitz continuous. It is important to emphasize that this technically useful extension does not alter the Euler dynamics in any way on \(\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{med}}]\), and at first reading one should ignore the modifications due to \(J_{\sigma}\mapsto\overline{J}_{\sigma}\). The proof of Theorem 4.6 then consists of showing that in this dynamically defined spacetime the bootstrap inequalities may be "closed" if \(\varepsilon\) is taken to be sufficiently small. By "closing the bootstrap assumptions" we mean the standard continuity argument: assuming the inequalities in (5.37) hold true with a specific constant on \(\mathcal{P}\), we use the evolution equations (3.22), (3.19b), (3.14)-(3.15) to prove that the bounds in fact hold true on \(\mathcal{P}\) with a strictly smaller constant than what was postulated. This requires a careful fine-tuning of the constants in the bootstrap assumptions, which is detailed in Remark 5.4 below. It is important here to notice that \(\varepsilon\) is the last parameter chosen in the proof, sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\overline{\mathsf{C}}\). The closure of the bootstrap inequalities (5.37) is achieved in Sections 5-12 below. This necessitates a careful blend of pointwise bounds and energy estimates, which appeal not just the bootstraps themselves, but also to a number of bounds that are direct consequences of the bootstrap assumptions when combined with the ALE Euler evolution and the functional analytic framework from Appendix B and Appendix C. In particular, the closure of the energy bootstraps requires that we carefully keep track of the behavior of all unknowns in the problem as we reach the "top" boundary of the spacetime \(\mathcal{P}\). It is here that we encounter the co-dimension-\(2\) set of pre-shocks \(\Xi^{*}\) (see Defintion 6.6), on which \(J_{\sigma}\) vanishes (\(\psi\) becomes not invertible). We keep track of the behavior of all unknowns in the vicinity of \(\partial_{\mathsf{top}}\mathcal{P}\) using carefully chosen weights for the energy norms, in terms of fractional powers of \(\mathcal{J}\) and \(J_{\sigma}\); see the definitions of the energy and damping norms in Subsection 5.4. 
Once the bootstraps are closed on \(\mathcal{P}\), we have established optimal \(H^{6}\) regularity estimates for \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\mathcal{N},\tau,J_{g})\), and also the invertibility of the map \(\psi\) (guaranteed by \(J_{g}>0\)). Optimal \(H^{7}\) bounds for the velocity \(U\), the sound speed \(\Sigma\), and the ALE map \(\psi\) are reported in Section 15. By the Sobolev embedding and the (Sobolev) inverse function theorem, this implies the claimed \(C^{5}_{x,t}\) regularity of \(u,\sigma,\psi\), and \(\psi^{-1}\) in the interior of \(\mathcal{P}\), and also the equivalence of the system (3.22), (3.19b), (3.14)-(3.15) with the original Euler evolution in (1.2). The claimed properties of the set of pre-shocks are established in Section 6.6. Lastly, the behavior of gradients of the solution as we approach the "top" boundary of the spacetime, as claimed in (g) (or equivalently, (h) in Eulerian variables), is now a direct consequence of the identities (3.10), the pointwise bootstrap bounds (5.37), the properties of \(\mathcal{J}\) and \(J_{g}\) established in Sections 6.4 and 6.7, and the characterization of \(\Xi^{*}\) in Proposition 6.7. For example, (3.10a) and (3.10b) imply that \((n\cdot\partial_{n}u)\circ\psi=\frac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{N}})\) and \((\partial_{n}\sigma)\circ\psi=\frac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z}}_{\mathcal{N}})\). The bootstrap (5.37g) gives that \(\hat{\mathbf{Z}}_{\mathcal{N}}\) remains uniformly bounded in \(\mathcal{P}\). The bootstrap (5.37b), together with the characterization of the pre-shock in Proposition 6.7, shows that as \(\mathcal{P}\ni(x,t)\to\Xi^{*}\) we must have \(J_{g}(x,t)\to 0^{+}\) and hence \(\hat{\mathbf{W}}_{\mathcal{N}}(x,t)\leq-\frac{9}{10}\varepsilon^{-1}J_{g}(x,t)^{-1}\to-\infty\). This shows that \((n\cdot\partial_{n}u,\partial_{n}\sigma)\circ\psi(x,t)\to-\infty\) as \(\mathcal{P}\ni(x,t)\to\Xi^{*}\). On the other hand, (3.10) shows that the gradients \((\tau\cdot\partial_{n}u)\circ\psi\), \((\partial_{\tau}u)\circ\psi\), and \((\partial_{\tau}\sigma)\circ\psi\) may be computed solely in terms of \(\hat{\mathbf{A}}_{\mathcal{N}},\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}}\), whereas the bootstraps (5.37h), (5.37e), (5.37i), and (5.37j) show that these terms remain uniformly bounded in \(\mathcal{P}\). This shows that \((\tau\cdot\partial_{n}u,\partial_{\tau}u,\partial_{\tau}\sigma)\circ\psi(x,t)\) remain bounded as \(\mathcal{P}\ni(x,t)\to\Xi^{*}\). The fact that all gradients remain bounded as \(\mathcal{P}\ni(x,t)\to\partial_{\mathsf{top}}\mathcal{P}\setminus\Xi^{*}\) is a consequence of the fact that \(J_{g}\) _does not vanish on_ \(\partial_{\mathsf{top}}\mathcal{P}\setminus\Xi^{*}\), which is in turn a consequence of the proof of Lemma 6.4. The statements concerning \(\operatorname{div}u\) and \(\operatorname{curl}u\) follow from (3.37) and (3.40). The asymptotic behavior of gradients in Eulerian variables, as claimed in point (h), follows identically; we omit these redundant details.

#### 4.4.2. The proof of Theorem 4.7

The proof is very similar in both spirit and implementation to the proof of Theorem 4.6, outlined above. It is based on the equivalent formulation of the Euler equations in ALE variables from Sections 2 and 3, and a continuation argument which is made quantitative via the propagation of bootstrap inequalities.
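In schematic form (a restatement of the bounds just quoted, with inessential constants suppressed), the blowup mechanism reads
\[
(n\cdot\partial_{n}u)\circ\psi=\tfrac{1}{2}\big(\hat{\mathbf{W}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{N}}\big)\,,\qquad|\hat{\mathbf{Z}}_{\mathcal{N}}|\lesssim 1\,,\qquad\hat{\mathbf{W}}_{\mathcal{N}}\lesssim-\varepsilon^{-1}J_{g}^{-1}\,,
\]
so that \((n\cdot\partial_{n}u)\circ\psi\) diverges to \(-\infty\) exactly as fast as \(-J_{g}^{-1}\), wherever \(J_{g}\to 0\); the novelty of the present proof is that this same mechanism is shown to be active on the entire downstream part of the "top" boundary, where \(J_{g}\) vanishes identically.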
This close resemblance allows us to confine the entire analysis to one section, namely Section 13, in which we highlight the details in the analysis which are different from the analysis in Sections 5-12. The heart of the proof is to close bootstrap inequalities and to ensure that the map \(\psi\) is smooth and invertible in the spacetime considered. The bootstrap inequalities themselves are the same as in Sections 5-12 and have been restated for convenience in (13.37). The principal difference with respect to the analysis in Sections 5-12 is that in the downstream region, i.e., for \(x_{1}>x_{1}^{*}(x_{2},t)\) (written as \(\Pi_{+}\) in the statement of the theorem), we wish to extend the spacetime \(\mathcal{P}\) to a strictly larger spacetime \(\mathcal{P}^{\sharp}\), whose "top" boundary should be characterized by \(\{J_{g}(x_{1},x_{2},t)=0\}\), as opposed to \(\{\mathcal{J}(x_{2},t)=\min_{x_{1}}J_{g}(x_{1},x_{2},t)=0\}\), for times prior to \(\mathsf{t_{med}}\). In particular, this means that any parametrization of the downstream part of the "top" boundary of the spacetime necessitates \(x_{1}\)-dependence. In turn, this \(x_{1}\)-dependence enters the weight function \(\mathcal{J}(x,t)\) which replaces \(\mathcal{J}(x_{2},t)\) in the downstream region (see definition (13.6)), and the flattening map \(\mathsf{s}=\mathsf{q}(x,t)\) which replaces \(\mathsf{q}(x_{2},t)\) in the downstream region (see definitions (13.8) and (13.10)). The closure of the energy bootstraps is then complicated by the appearance of a \(\mathcal{J}_{,1}\) term in the energy estimates, and of the coefficient \(\overline{\mathsf{Q}}_{1}=\varepsilon\mathcal{J}_{,1}\) in the definition of the \(\widetilde{\mathsf{D}}_{1}\) operator (see (13.12) and (13.13)). This difficulty is overcome by noting that for points in the downstream region \(\mathcal{P}^{\sharp}\cap\Pi_{+}\) which are "close" to the co-dimension-\(1\) set \(\Pi=\{(x_{1}^{*}(x_{2},t),x_{2},t)\}\), we have that \(J_{g,1}>0\), while for points in \(\mathcal{P}^{\sharp}\cap\Pi_{+}\) which are far from \(\Pi\), we have that \(J_{g}\) is bounded from below by a positive constant. A careful design of the weight function \(\mathcal{J}\) and of the flattening map \(\mathsf{q}\) in the downstream region (see Section 13.1) then ensures that \(\mathcal{J}_{,1}\) is related to \(J_{g,1}\) and thus has a favorable sign. This information is encoded through the fact that the coefficients \(\overline{\mathsf{Q}}_{1}\) and \(\mathring{\mathsf{Q}}_{1}\) are non-negative (see (13.38c) and (13.38d)), and hence certain dangerous terms in our energy estimates have a favorable sign. Physically speaking, this desired favorable sign in our energy estimates is a manifestation of the phenomenon of "compression", which is natural in the downstream region. The weight function \(\mathcal{J}\) and the flattening map \(\mathsf{q}\) are also carefully designed so that the spacetime \(\mathcal{P}^{\sharp}\cap\Pi_{+}\) captures the full downstream maximal hyperbolic development for Euler, for times prior to \(\mathsf{t_{med}}\) (see Remark 13.5). As in Sections 5-12, we artificially extend their definitions for times \(t\in(\mathsf{t_{med}},\mathsf{t_{fin}})\) in order to ensure the smoothness of the "top" boundary of the downstream part of the extended spacetime \(\mathcal{P}^{\sharp}\).
As before, this extension of the ALE Euler dynamics to \(\mathbb{T}^{2}\times(\mathsf{t_{med}},\mathsf{t_{fin}})\) (past the scope of Theorem 4.7) is done for technical convenience only, and it does not affect the statement of Theorem 4.7. The closure of the bootstraps corresponding to the spacetime \(\mathcal{P}^{\sharp}\) is established in Sections 13.6-13.13. Modifications to the argument in Sections 5-12 (which already covers the spacetime \(\mathcal{P}^{\sharp}\cap\Pi_{-}\)) are only required for the downstream part \(\mathcal{P}^{\sharp}\cap\Pi_{+}\). The closure of these bootstraps then implies optimal \(H^{6}\) regularity estimates for \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\mathcal{N},\tau,J_{g})\), the invertibility of the map \(\psi\) (guaranteed by \(J_{g}>0\) in the interior of \(\mathcal{P}^{\sharp}\)), and optimal \(H^{7}\) bounds for \((U,\Sigma,\psi)\) (reported in Section 15). The claimed \(C^{5}_{x,t}\) regularity of \(u,\sigma,\psi\), and \(\psi^{-1}\) in the interior of \(\mathcal{P}^{\sharp}\), and the equivalence of the system (3.22), (3.19b), (3.14)-(3.15) with the original Euler evolution in (1.2), directly follow. The fact that \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}\) and \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{-}\) are smooth (the zero level sets of \(H^{6}\) functions) follows by construction. The fact that \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\) only retains \(W^{2,\infty}\) regularity across its intersection with \(\Pi\), i.e., at the pre-shock \(\Xi^{*}\), is due to the fact that the second derivative with respect to \(x_{1}\) of the weight function \(\mathcal{J}\) tends to \(0\) as \(x_{1}\to x_{1}^{*}(x_{2},t)^{-}\) (from the left, the upstream part), while it remains strictly positive (due to (6.54)) as \(x_{1}\to x_{1}^{*}(x_{2},t)^{+}\) (from the right, the downstream part). The remaining issue to discuss is the behavior of gradients of the solutions \((u,\sigma)\) discussed in item (g) (in ALE variables) and item (h) (in Eulerian variables). The novelty here lies in the statement that the gradient components \((n\cdot\partial_{n}u,\partial_{n}\sigma)\circ\psi(x,t)\) blow up as \((x,t)\in\mathcal{P}^{\sharp}\) approaches _any point_ on the downstream part of the "top" boundary, \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}\cap\{t\leq\mathsf{t_{med}}\}\), not just at points on the pre-shock \(\Xi^{*}\) (as was shown in Theorem 4.6). This fact is in a sense the very definition of downstream maximal development: components of \((\nabla u,\nabla\sigma)\) blow up _everywhere_ on this future temporal boundary of the spacetime. In turn, this blowup follows from our construction, which implies that this future temporal boundary \(\{(x,t)\colon\mathcal{J}(x,t)=0,\,x_{1}>x_{1}^{*}(x_{2},t),\,t\leq\mathsf{t_{med}}\}\) in fact equals the set \(\{(x,t)\colon J_{g}(x,t)=0,\,x_{1}>x_{1}^{*}(x_{2},t),\,t\leq\mathsf{t_{med}}\}\) (see Remark 13.5), and thus \(J_{g}\) vanishes identically on this set.
As discussed in the last paragraph of Section 4.4.1, the asymptotic vanishing of \(J_{g}\) is equivalent to the divergence towards \(-\infty\) of \(\hat{\mathbf{W}}_{\mathcal{N}}\), and thus also of \((n\cdot\partial_{n}u,\partial_{n}\sigma)\circ\psi=(\frac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{N}}),\frac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z}}_{\mathcal{N}}))\). The bootstraps (13.37) also imply uniform bounds for \((\hat{\mathbf{Z}}_{\mathcal{N}},\hat{\mathbf{A}}_{\mathcal{N}},\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})\) on \(\mathcal{P}^{\sharp}\), showing that \((\tau\cdot\partial_{n}u,\partial_{\tau}u,\partial_{\tau}\sigma)\circ\psi\) remain uniformly bounded on \(\mathcal{P}^{\sharp}\). The statements concerning \(\operatorname{div}u\) and \(\operatorname{curl}u\) follow from (3.37) and (3.40). The asymptotic behavior of gradients in Eulerian variables, as claimed in point (h), follows identically.

#### 4.4.3. The proof of Theorem 4.8

The proof follows the same strategy that was utilized in the proofs of Theorems 4.6 and 4.7 above: we use the equivalent formulation of the Euler equations in ALE variables from Sections 2 and 3, and a continuation argument which is made quantitative via the propagation of bootstrap inequalities. We confine the entire proof to Section 14, where we highlight the details in the analysis which are different from the analysis in Sections 5-12. Nearly all differences arise due to the fact that we need to carefully analyze all slow acoustic characteristic surfaces emanating from the pre-shock and its vicinity. The heart of the proof is to close bootstrap inequalities and to ensure that the map \(\psi\) is smooth and invertible in the spacetime \(\hat{\mathcal{H}}^{\delta}\). Both of these require a careful design and analysis of the upstream part of the spacetime, denoted as \(\hat{\mathcal{H}}^{\delta}\cap\Pi_{-}\) in the statement of the theorem. While the bootstrap inequalities themselves are the same as in Sections 5-12, and have been re-stated for convenience in (14.132), the meaning of the \(L_{x}^{2}\)-based norms present in (14.132b) has been adapted to the upstream geometry (cf. (14.122)), and the weight function \(\mathcal{J}\) present in the definition of the energy (cf. (14.130)) and damping (cf. (14.131)) norms has undergone a significant transformation (see (14.58) and (14.62)) in order to account for the degeneracy in the problem which occurs along the slow acoustic characteristics emanating from the pre-shock. The intuition behind the construction of the weight function \(\mathcal{J}\) in (14.58) and (14.62) is as follows. Based on intuition gained from Sections 5-12 and Section 13, we need to design the weight function \(\mathcal{J}\) such that:

* \(\mathcal{J}\) is \(H^{6}\) smooth, the same regularity as \(J_{g}\);
* the level set \(\{\mathcal{J}=0\}\) perfectly describes the future temporal boundary of the upstream part of the spacetime \(\hat{\mathcal{H}}^{\delta}\), at least in the vicinity of the set of pre-shocks \(\Xi^{*}\), where the gradient singularities are lurking;
* the action of the \(\lambda_{i}\)-transport operators \((\partial_{t}+V\partial_{2})-(3-i)\alpha\Sigma(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2})\) gives a sign-definite term when acting on \(\mathcal{J}\), for all \(i\in\{1,2,3\}\) (these three operators are written out explicitly below).
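For orientation, specializing the formula in the last bullet to \(i=3\), \(i=2\), and \(i=1\) yields, respectively,
\[
\lambda_{3}\colon\ \partial_{t}+V\partial_{2}\,,\qquad
\lambda_{2}\colon\ (\partial_{t}+V\partial_{2})-\alpha\Sigma\big(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2}\big)\,,\qquad
\lambda_{1}\colon\ (\partial_{t}+V\partial_{2})-2\alpha\Sigma\big(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2}\big)\,.
\]
(Recall that the ALE coordinates flatten the fast acoustic characteristic surfaces, which is why the \(i=3\) operator contains no \(x_{1}\)-transport term.)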
The immediate issue is that, as opposed to our earlier analysis, we cannot simply let \(\mathcal{J}\) equal \(\min_{x_{1}}J_{g}\) (cf. Sections 5-12), or even \(J_{g}\) itself (cf. Section 13). This is because upstream of the pre-shock, the level set \(\{J_{g}=0\}\) describes the future temporal boundary of a spacetime which can be accessed by neither \(\lambda_{1}\)-characteristic surfaces (suitable for the propagation of slow sound waves via \(\hat{\mathbf{Z}}\)) nor \(\lambda_{2}\)-characteristic surfaces (suitable for the propagation of density waves via \(\Sigma\), vorticity waves via \(\Omega\), and tangential velocity waves via \(\hat{\mathbf{A}}\)), which emanate from the initial data \((u_{0},\sigma_{0})\) at \(t=\mathsf{t_{in}}\). This begs the question: what is the maximal spacetime upstream of the pre-shock which is accessible by all characteristic surfaces (corresponding to the \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) transport operators) emanating from the initial data? As discussed in Appendix A, this is the spacetime whose future temporal boundary is given by the slow acoustic characteristic surface (corresponding to the slowest wave-speed, \(\lambda_{1}\)) which emanates from the set of pre-shocks, in the upstream direction. This matches item (c) in the statement of Theorem 4.8. As discussed in (14.1), (14.3), and Figure 15 below, this surface would normally be characterized as a graph over \((x_{2},t)\), by letting \(x_{1}=\theta(x_{2},t)\) for a suitable function \(\theta\). The fact that this surface emanates from the pre-shock \(\Xi^{*}=\{(\hat{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))\}\) is then the statement \(\theta(x_{2},t^{*}(x_{2}))=\hat{x}_{1}(x_{2})\) for all \(x_{2}\in\mathbb{T}\) (cf. (14.4)). The immediate issue which emerges is that the evolution equation for the function \(\theta\) contains factors of \(J_{g}^{-1}\), which degenerate as one approaches \(\Xi^{*}\). Our observation is that we may instead re-parametrize the slow acoustic characteristic surface emanating from \(\Xi^{*}\) as a graph over \((x_{1},x_{2})\), by letting \(t=\Theta(x_{1},x_{2})\) for the function \(\Theta\) such that \(\Theta(\theta(x_{2},t),x_{2})=t\). A consequence of this re-parametrization is that the "evolution equation" for \(\Theta(x)\) (we view \(x_{1}\) as the evolution direction) now contains only factors of \(J_{g}\) (cf. (14.8)), which merely vanish as one approaches \(\Xi^{*}\). This makes a smooth analysis of \(\Theta\) accessible, and with that, a smooth description of the spacetime of upstream maximal development. For technical reasons, related to the third bullet in the above-described requirements for \(\mathcal{J}\), it is convenient to retain a damping term in our energy estimates (see the discussion in Remark 14.14). As such, for \(\delta>0\), arbitrarily small, we define a \(\delta\)-approximate slow acoustic characteristic surface passing through the curve of pre-shocks, and replace the \(\Theta(x)\) described above by \(\overline{\Theta^{\delta}}(x)\), as defined in (14.13). Then, the \(\delta\)-adjusted upstream spacetime of maximal development of the Cauchy data, \(\hat{\mathcal{H}}^{\delta}\), is characterized as the set of \((x,t)\) such that \(t<\overline{\Theta^{\delta}}(x)\), matching item (d) in the statement of Theorem 4.8. For convenience, we also require \(t<\mathsf{t_{fin}}\) in order to cap the time evolution at an \(\mathcal{O}(\varepsilon)\) time past the pre-shock.
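The gain from this re-parametrization can be seen by a one-line computation (schematic, assuming the tangential terms are lower order): differentiating the defining relation \(\Theta(\theta(x_{2},t),x_{2})=t\) in \(t\) gives
\[
\Theta_{,1}\big(\theta(x_{2},t),x_{2}\big)\,\partial_{t}\theta(x_{2},t)=1\,,
\]
and since the graph evolution for \(\theta\) transports with the \(\lambda_{1}\)-speed in the \(x_{1}\)-direction, \(\partial_{t}\theta\) carries the factor \(-2\alpha\Sigma J_{g}^{-1}\) (up to bounded tangential corrections), whence \(\Theta_{,1}\approx-\tfrac{1}{2\alpha\Sigma}J_{g}\): the factor \(J_{g}^{-1}\), which blows up at \(\Xi^{*}\), is traded for a factor \(J_{g}\), which merely vanishes there.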
Then, the second bullet described above dictates that \(\mathcal{J}\) needs to be designed such that \(\mathcal{J}(x,\overline{\Theta^{\delta}}(x))=0\) for all \(x\) in the vicinity of the pre-shock \(\Xi^{*}\), located at \(x_{1}=\hat{x}_{1}(x_{2})\) and \(t=t^{*}(x_{2})\). In order to ensure that \(\mathcal{J}\) vanishes on \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\) for times \(t<\mathsf{t_{fin}}\), we thus design \(\mathcal{J}\) as an \(H^{6}\)-smooth function in \(\hat{\mathcal{H}}^{\delta}\), whose _zero level-set_ is given by \(\{(x,\overline{\Theta^{\delta}}(x))\}\). This matches the first two bullets in the above list of requirements for \(\mathcal{J}\). The extension of \(\mathcal{J}\) from \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\) down into \(\hat{\mathcal{H}}^{\delta}\) is then made to also take into account the third bullet, by ensuring that the \(\delta\)-modified \(\lambda_{1}\) transport operator \((1-\delta)(\partial_{t}+V\partial_{2})-2\alpha\Sigma(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2})\) has \(\mathcal{J}\) in its kernel (see (14.60a)). Additionally, we require that on the plane \(\{(\hat{x}_{1}(x_{2}),x_{2},t)\}\) which emerges from the pre-shock at earlier times \(t<t^{*}(x_{2})\), the weight \(\mathcal{J}\) precisely matches the function \(\overline{J}_{g}\) (cf. (14.60b)). This ensures that \(\mathcal{J}\) vanishes not just on the pre-shock, but on the entire surface \(\{(x,\overline{\Theta^{\delta}}(x))\}\), which is a characteristic surface for \((1-\delta)(\partial_{t}+V\partial_{2})-2\alpha\Sigma(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2})\). This strategy is implemented by letting \(\mathcal{J}(x,\Theta^{\delta}(x,t))=\overline{J}_{g}(\hat{x}_{1}(x_{2}),x_{2},t)\) for \(t<t^{*}(x_{2})\) and all \(x_{2}\in\mathbb{T}\) (cf. (14.58)), where \(\Theta^{\delta}(x,t)\) represents a family of characteristic surfaces for the \(\delta\)-approximate \(\lambda_{1}\) transport operator \((1-\delta)(\partial_{t}+V\partial_{2})-2\alpha\Sigma(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}h_{,2}\,\partial_{2})\), which emanate from the plane \(\{(\hat{x}_{1}(x_{2}),x_{2},t)\}\) (see (14.10)). In fact, the surfaces \((x,\Theta^{\delta}(x,t))\), for \((x,t)\) as described in (14.11a), smoothly foliate a portion of the spacetime \(\hat{\mathcal{H}}^{\delta}\) which has \(\Xi^{*}\) on its future temporal boundary, labeled as \(\hat{\mathcal{H}}^{\delta}_{+}\) in the analysis, and defined in (14.15a). The fact that \((x,\Theta^{\delta}(x,t))\) smoothly foliates \(\hat{\mathcal{H}}^{\delta}_{+}\) allows us to perform a smooth and sharp analysis upstream of the set of pre-shocks. The spacetime \(\hat{\mathcal{H}}^{\delta}\setminus\hat{\mathcal{H}}^{\delta}_{+}\) is denoted by \(\hat{\mathcal{H}}^{\delta}_{-}\) in (14.15b). Here the analysis is simpler because \(J_{g}\approx 1\) (see (14.71c)), we are "far away" from \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\), and so we just need to ensure that \(\mathcal{J}\) satisfies the third bullet from the above list of requirements. This is implemented in (14.62). With the weight \(\mathcal{J}\) and our spacetime \(\hat{\mathcal{H}}^{\delta}\) defined precisely, the proof turns to the closure of the bootstrap assumptions, establishing the properties stated in item (f) of the Theorem.
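In displayed form (a schematic transcription of the kernel condition (14.60a) and the matching condition (14.60b) just described; the precise statements are in Section 14), the construction of \(\mathcal{J}\) in \(\hat{\mathcal{H}}^{\delta}_{+}\) amounts to solving
\[
(1-\delta)(\partial_{t}+V\partial_{2})\mathcal{J}-2\alpha\Sigma\big(J_{g}^{-1}\mathcal{J}_{,1}-g^{-\frac{1}{2}}h_{,2}\,\mathcal{J}_{,2}\big)=0\,,\qquad
\mathcal{J}\big|_{\{(\hat{x}_{1}(x_{2}),x_{2},t)\colon t<t^{*}(x_{2})\}}=\overline{J}_{g}\,,
\]
and since \(\overline{J}_{g}\) vanishes at the pre-shock itself, transporting this boundary data along the \(\delta\)-approximate \(\lambda_{1}\) characteristics forces \(\mathcal{J}\) to vanish along the entire surface \(\{(x,\overline{\Theta^{\delta}}(x))\}\).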
As before, pointwise bootstraps are closed in \((x,t)\in\hat{\mathcal{H}}^{\delta}\) coordinates, while energy norms are estimated in the flattened \((x,\mathsf{s})\in\mathcal{H}^{\delta}\) coordinates defined in (14.99)-(14.106) below. Note that while the parameter \(\delta\) does not enter the bootstrap assumptions explicitly, this parameter does affect the dependencies of the bootstrap constants themselves, cf. Remark 5.4. Our analysis shows that the constants appearing in items (iv) (corresponding to \(\mathsf{K}\)) through (xiv) (corresponding to \(\varepsilon\)) of Remark 5.4 need to be chosen to depend on \(\delta\). At the level of pointwise estimates, we highlight the lower bounds for \(\mathcal{J}\) in (14.67), the fact that \(\mathcal{J}\) gives a (good) signed contribution when acted upon by the \(\{\lambda_{i}\}_{i=1}^{3}\) transport operators (this follows from (14.93)), and the fact that \(J_{g}\geq\frac{1}{3}\mathcal{J}\) (see (14.71)). In particular, this last fact and the fact that \(\mathcal{J}>0\) in the interior of \(\hat{\mathcal{H}}^{\delta}\) ensure that \(J_{g}>0\) in the interior of \(\hat{\mathcal{H}}^{\delta}\), implying the invertibility of \(\psi\) claimed in item (b) of the theorem. At the level of energy estimates, several complications arise when compared to the analysis in Sections 5-12 and Section 13. The principal new difficulty stems from the fact that \(\mathcal{H}^{\delta}\) has a "right lateral" boundary located at \(x_{1}=\theta^{\delta}(x_{2},\mathsf{s})\) (see the definition in (14.118)). As such, the adjoint operator \(\widetilde{\mathsf{D}}^{*}\) corresponding to the \(L^{2}\)-norms defined in (14.118) contains a number of boundary terms (14.129) at \(x_{1}=\theta^{\delta}(x_{2},\mathsf{s})\). At the top level of the energy estimates, these boundary terms seem to be out of control; a more careful inspection, which uses fine properties of the spacetime \(\mathcal{H}^{\delta}\) and the design of the weight function \(\mathcal{J}\), shows however that these boundary terms (or suitable combinations of them) either have a good sign (see e.g. (B.23)), or vanish altogether since \(\mathcal{J}\) vanishes (see for instance the proof of Proposition 14.10). Another difficulty in closing the energy estimates stems from the \(\lambda_{i}\)-transport operators, written as \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})-(3-i)\alpha\Sigma(J_{g}^{-1}\partial_{1}-g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2})\) in \((x,\mathsf{s})\) coordinates, acting on \(\mathcal{J}\). Here, by design, we obtain helpful signed damping terms; see e.g. Remark 14.14 and the lower bound corresponding to \(\hat{\mathbf{Z}}_{\mathcal{N}}\) in (14.211) (which is due to the fact that \(\delta>0\)). We also mention here that the functional analytic framework from Appendix B can be adapted to the \((x,\mathsf{s})\) coordinates considered in Section 14, as discussed in Section B.5 below. Similarly, the spacetime \(L^{\infty}\) estimates from Appendix C also hold in the flattened upstream geometry, with the changes to the proofs of these estimates described in Section C.2.
Concerning items (e) and (g) in the statement of the Theorem, we remark that at points \((x,t)\) such that \(x_{1}<x_{1}^{*}(x_{2},t)\) and such that \(t>\sup_{\delta\in(0,\frac{1}{2}]}\overline{\Theta^{\delta}}(x)\), that is, at points which lie upstream of the pre-shock and above the envelope of the \(\delta\)-approximate slow acoustic characteristic surfaces \((\cdot,\Theta^{\delta}(\cdot,\cdot))\), an Euler solution cannot be computed in a smooth and unique fashion from the initial data \((u_{0},\sigma_{0})\) at time \(\mathsf{t_{in}}\). This is because a slow acoustic characteristic surface passing through \((x,t)\) would necessarily have to intersect (backwards in time) the surface \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}\), the downstream part of the top boundary of the spacetime \(\mathcal{P}^{\sharp}\) constructed in Theorem 4.7. But according to Theorem 4.7, item (g), at every point on \(\partial_{\mathsf{top}}\mathcal{P}^{\sharp}\cap\Pi_{+}\) a gradient singularity occurs both in the density and in the normal derivative of the normal velocity, precluding a smooth continuation back to the initial data. In closing, we mention that the only gradient singularities for the fundamental variables \(u\) or \(\sigma\) which may be encountered on the closure of the spacetime \(\hat{\mathcal{H}}^{\delta}\) (the closure of \(\hat{\mathcal{H}}^{\delta}_{\mathsf{Eulerian}}\) in Eulerian variables) occur on the set of pre-shocks \(\Xi^{*}\) (denoted as \(\Xi^{*}_{\mathsf{Eulerian}}\) in Eulerian variables), which is embedded in the future temporal boundary of our spacetime, \(\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\). This fact was claimed in items (g) and (h) of the statement of the Theorem. Indeed, as was already discussed in the proofs of Theorem 4.6 and Theorem 4.7, the only potential singularities permitted by the pointwise bootstrap assumptions are in \((n\cdot\partial_{n}u,\partial_{n}\sigma)\circ\psi(x,t)\), because these terms are computed in terms of \(\hat{\mathbf{W}}_{\mathcal{N}}\), while the bootstraps only control \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\approx(w_{0})_{,1}\). As before, for \((x_{*},t_{*})\in\Xi^{*}\) we have that \(\lim_{\hat{\mathcal{H}}^{\delta}\ni(x,t)\to(x_{*},t_{*})}J_{g}(x,t)=0\), and thus \(\hat{\mathbf{W}}_{\mathcal{N}}\to-\infty\). However, for any \((x_{*},t_{*})\in\partial_{\mathsf{top}}\hat{\mathcal{H}}^{\delta}\setminus\Xi^{*}\), we either have \(\lim_{\hat{\mathcal{H}}^{\delta}\ni(x,t)\to(x_{*},t_{*})}J_{g}(x,t)\geq\frac{1}{9}\) when \(\operatorname{dist}((x_{*},t_{*}),\Xi^{*})\gtrsim\varepsilon\) (due to (14.80a)), or we have \(\lim_{\hat{\mathcal{H}}^{\delta}\ni(x,t)\to(x_{*},t_{*})}J_{g}(x,t)\geq\frac{(x_{*,1}-\hat{x}_{1}(x_{*,2}))^{2}}{14\varepsilon^{2}}\approx\big(\frac{1}{\varepsilon}\operatorname{dist}((x_{*},t_{*}),\Xi^{*})\big)^{2}\) when \(0<\operatorname{dist}((x_{*},t_{*}),\Xi^{*})\ll\varepsilon\) (due to (14.86) and (14.92)). As such, \(\lim_{\hat{\mathcal{H}}^{\delta}\ni(x,t)\to(x_{*},t_{*})}J_{g}(x,t)>0\), and so the bounds for \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\) do imply a (finite) upper bound for \((|n\cdot\partial_{n}u|,|\partial_{n}\sigma|)\circ\psi(x,t)\), thereby concluding the proof.

## 5. Shock formation: spacetime, energy norms, and bootstrap assumptions
### Local well-posedness

With the Cauchy data defined in Section 4.2, the classical local well-posedness of the Euler system gives the existence of a time \(T\in(\mathsf{t_{in}},\mathsf{t_{fin}})\), such that uniform sixth-order energy estimates for solutions \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},J_{g},h_{,2})\) to (3.41) are obtained on the time interval \([\mathsf{t_{in}},T]\), and the support of each solution at all times \(t\in[\mathsf{t_{in}},T]\) is contained in the set
\[
\mathcal{X}_{\mathsf{fin}}:=\left\{x\in\mathbb{T}^{2}\colon\operatorname{dist}(x,\mathcal{X}_{\mathsf{in}})\leq\mathsf{C}_{\mathsf{supp}}\varepsilon\right\}, \tag{5.1}
\]
where the constant \(\mathsf{C}_{\mathsf{supp}}>0\) depends only on \(\alpha\) and \(\kappa_{0}\) (see (6.5) below). That is, solutions to the compressible Euler system have finite speed of propagation, and we are bounding solutions for an amount of time which is at most \(\mathsf{t_{fin}}-\mathsf{t_{in}}=\frac{2\varepsilon}{1+\alpha}\cdot\frac{51}{50}\); moreover, the local existence theory implies that
\[
\inf_{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t_{in}},T]}J_{g}(x,t)>0\,, \tag{5.2}
\]
which is to say that no collision of characteristics occurs on \([\mathsf{t_{in}},T]\).

### A smooth remapping of spacetime

Our initial goal is to extend the ALE Euler solution \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},J_{g},h_{,2})\) of (3.41) from the set \(\mathcal{X}_{\mathsf{fin}}\times[\mathsf{t_{in}},T]\) described in Section 5.1 to a certain spacetime \(\mathcal{P}\subset\mathcal{X}_{\mathsf{fin}}\times[\mathsf{t_{in}},\mathsf{t_{fin}}]\), such that

* in this spacetime \(\mathcal{P}\), the ALE Euler solution maintains uniform sixth-order Sobolev bounds;
* \(J_{g}>0\) in the interior of \(\mathcal{P}\); and
* the boundary of \(\mathcal{P}\) contains a co-dimension-\(2\) surface on which \(J_{g}\) and \(J_{g,1}\) vanish, the so-called _curve of pre-shocks_.

A priori, it is natural to consider the spacetime set
\[
\hat{\mathcal{P}}:=\left\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\colon\min_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)>0\right\}. \tag{5.3}
\]
We note that the _future_ temporal boundary of the spacetime \(\hat{\mathcal{P}}\) in (5.3) is not smooth along the intersection of the parabolic cylinder \(\{(x_{2},t)\colon\min_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)=0\}\) and \(\{t=\mathsf{t_{fin}}\}\) (see the green surface in Figure 11). The lack of smoothness of this _future_ temporal boundary along this intersection is an artifact of our choice of the final time \(t=\mathsf{t_{fin}}\); in particular, any "final time" which is \(\mathcal{O}(\varepsilon)\) can be used in place of \(\mathsf{t_{fin}}\). As such, we introduce a new spacetime, which coincides with \(\hat{\mathcal{P}}\) for \(t\in[\mathsf{t_{in}},\mathsf{t_{med}}]\), but whose future temporal boundary is both smooth and properly contained in the set \(\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}}]\). For this purpose, we introduce a specially constructed modification of \(J_{g}\), which we denote by \(\overline{J}_{g}\), and which has the following three properties:
1. \(\overline{J}_{g}\equiv J_{g}\) for all \(t\in[\mathsf{t_{in}},\mathsf{t_{med}}]\);
2. \(\overline{J}_{g}\) has the same regularity as \(J_{g}\); and
3. for any \(x\in\mathbb{T}^{2}\), we have that \(\overline{J}_{g}(x,\cdot)\) vanishes at a time \(t_{\star}(x)\leq\mathsf{t_{fin}}\).

More precisely, we define \(\overline{J}_{g}\) by modifying (3.15) as follows:
\[
(\partial_{t}+V\partial_{2})\overline{J}_{g}=J_{g}\big(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{N}}\big)-\mathfrak{J}\,,\qquad\text{in }\ \mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}}]\,, \tag{5.4a}
\]
\[
\overline{J}_{g}=1\,,\qquad\text{on }\ \mathbb{T}^{2}\times\{t=\mathsf{t_{in}}\}\,, \tag{5.4b}
\]
where \(\mathfrak{J}=\mathfrak{J}(t)\geq 0\) is a smooth time-dependent function (independent of \(x\)), given by
\[
\mathfrak{J}(t)=\tfrac{2}{\mathsf{t_{fin}}-\mathsf{t_{med}}}\,\mathfrak{C}\big(\tfrac{t-\mathsf{t_{med}}}{\mathsf{t_{fin}}-\mathsf{t_{med}}}\big)\,, \tag{5.5}
\]
where \(\mathfrak{C}\) is a \(C^{5}\)-smooth cut-off function, with \(\mathfrak{C}(r)=0\) for \(r\leq 0\), with \(0<\mathfrak{C}(r)\leq 2\) and \(0<\mathfrak{C}^{\prime}(r)\leq 4\) for \(r\in(0,1]\), with \(\int_{0}^{1}\mathfrak{C}(r)\,\mathrm{d}r=1\), and with \(\|\tfrac{\mathrm{d}^{k}}{\mathrm{d}r^{k}}\mathfrak{C}\|_{L^{\infty}(0,1)}\lesssim 1\), where the implicit constant depends only on \(k\in\{1,\dots,5\}\). We note that in view of (3.15a) and (5.4a) we have \((\partial_{t}+V\partial_{2})(\overline{J}_{g}-J_{g})=-\mathfrak{J}\), and thus, since \(\mathfrak{J}\) is independent of \(x\), we arrive at the identity
\[
\overline{J}_{g}(x,t)=J_{g}(x,t)-\mathbf{1}_{t>\mathsf{t_{med}}}\int_{\mathsf{t_{med}}}^{t}\mathfrak{J}(t^{\prime})\,\mathrm{d}t^{\prime}\,. \tag{5.6}
\]
With the choice of \(\mathfrak{J}\) in (5.5), we arrive at the explicit formula
\[
\overline{J}_{g}(x,t)=J_{g}(x,t)-2\,\mathbf{1}_{t>\mathsf{t_{med}}}\int_{0}^{\frac{t-\mathsf{t_{med}}}{\mathsf{t_{fin}}-\mathsf{t_{med}}}}\mathfrak{C}(r)\,\mathrm{d}r\,. \tag{5.7}
\]
This identity, the bootstrap (5.37k), and continuity show that for every \(x\in\mathbb{T}^{2}\) there exists a time \(t_{\star}(x)\in[\mathsf{t_{in}},\mathsf{t_{fin}}]\) such that \(\overline{J}_{g}(x,t_{\star}(x))=0\). Additionally, (5.7) shows that uniformly for \((x,t)\in\mathcal{P}\) we have
\[
\partial_{1}(\overline{J}_{g}-J_{g})\equiv 0\,,\qquad\partial_{2}(\overline{J}_{g}-J_{g})\equiv 0\,,\qquad-\tfrac{200(1+\alpha)}{\varepsilon}\mathbf{1}_{t\in[\mathsf{t_{med}},\mathsf{t_{fin}}]}\leq\partial_{t}(\overline{J}_{g}-J_{g})\leq 0\,, \tag{5.8}
\]
and also
\[
|(\varepsilon\partial_{t})^{k}(\overline{J}_{g}-J_{g})|\lesssim\mathbf{1}_{t\in[\mathsf{t_{med}},\mathsf{t_{fin}}]} \tag{5.9}
\]
for all \(k\in\{1,\dots,6\}\), where the implicit constant only depends on \(\alpha\) and \(k\). Note also that
\[
\overline{J}_{g}\leq J_{g}\,. \tag{5.10}
\]
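To illustrate how property (3) follows from (5.7) (a schematic version of the continuity argument just referenced), note that at \(t=\mathsf{t_{fin}}\) the integral in (5.7) equals \(\int_{0}^{1}\mathfrak{C}(r)\,\mathrm{d}r=1\), so that
\[
\overline{J}_{g}(x,\mathsf{t_{fin}})=J_{g}(x,\mathsf{t_{fin}})-2<0\,,
\]
provided the bootstrap (5.37k) keeps \(J_{g}<2\) on \([\mathsf{t_{in}},\mathsf{t_{fin}}]\); since \(\overline{J}_{g}(x,\mathsf{t_{in}})=1>0\), the intermediate value theorem applied to \(t\mapsto\overline{J}_{g}(x,t)\) then produces a time \(t_{\star}(x)\in[\mathsf{t_{in}},\mathsf{t_{fin}}]\) at which \(\overline{J}_{g}\) vanishes.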
Next, we modify the spacetime \(\hat{\mathcal{P}}\) of (5.3), and define the spacetime
\[
\mathcal{P}:=\left\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\colon\min_{x_{1}\in\mathbb{T}}\overline{J}_{g}(x_{1},x_{2},t)>0\right\}. \tag{5.11}
\]
We shall prove in Lemma 6.4 below that for \((x,t)\in\mathcal{P}\) with \(t>\mathsf{t_{in}}\) (see Footnote 16) the minimum with respect to \(x_{1}\) of \(\overline{J}_{g}\) is attained at a unique point \(x_{1}^{*}(x_{2},t)\), so that we have

Footnote 16: Note that since \(\overline{J}_{g}(x,\mathsf{t_{in}})=1\), the minimum of \(\overline{J}_{g}\) is not attained at a unique point when \(t=\mathsf{t_{in}}\).

\[
\overline{J}_{g}(x_{1}^{*}(x_{2},t),x_{2},t)=\min_{x_{1}\in\mathbb{T}}\overline{J}_{g}(x_{1},x_{2},t)\,. \tag{5.12}
\]
In particular, for \(t>\mathsf{t_{in}}\), \(x_{1}^{*}(x_{2},t)\) is a critical point of the function \(\overline{J}_{g}(\cdot,x_{2},t)\), i.e.,
\[
\overline{J}_{g,1}(x_{1}^{*}(x_{2},t),x_{2},t)=0\,. \tag{5.13}
\]
For brevity of notation, throughout Sections 5-12 we shall denote
\[
\mathcal{J}(x_{2},t):=\overline{J}_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\,. \tag{5.14}
\]
We note at this stage that due to (5.10), we may show that
\[
\mathcal{J}(x_{2},t)\leq 1\,. \tag{5.15}
\]
In order to prove (5.15), we refer to (6.39)-(6.40) below, which imply the bound \(J_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\leq 1+(t-\mathsf{t_{in}})\frac{1+\alpha}{2}(-\frac{9}{10\varepsilon}+2\mathsf{C}_{\mathsf{J}})\leq 1-(t-\mathsf{t_{in}})\frac{2(1+\alpha)}{5\varepsilon}\leq 1\), once \(\varepsilon\) is chosen sufficiently small. We note that since \(\overline{J}_{g}-J_{g}\) is independent of \(x\), _the global minimum of \(\overline{J}_{g}(\cdot,x_{2},t)\) is attained at the same point where the global minimum of \(J_{g}(\cdot,x_{2},t)\) is attained_, and hence (5.12) may be rewritten as
\[
J_{g}(x_{1}^{*}(x_{2},t),x_{2},t)=\min_{x_{1}\in\mathbb{T}}J_{g}(x_{1},x_{2},t)\,. \tag{5.16}
\]
Note that if \((x_{1},x_{2},t)\in\mathcal{P}\), then \((x_{1}^{\prime},x_{2},t)\in\mathcal{P}\) for all \(x_{1}^{\prime}\), and so this spacetime is invariant under shifts in the \(x_{1}\) direction. It is thus convenient to define its projection onto the \((x_{2},t)\) coordinates by
\[
\widehat{\mathcal{P}}=\big\{(x_{2},t)\in\mathbb{T}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\colon\mathcal{J}(x_{2},t)>0\big\}\,. \tag{5.17}
\]
In order to perform energy estimates on \(\mathcal{P}\), it is convenient to introduce the transformation
\[
\mathsf{q}\colon\widehat{\mathcal{P}}\to[0,\varepsilon)\,, \tag{5.18a}
\]
\[
\mathsf{s}=\mathsf{q}(x_{2},t):=\varepsilon\big(1-\mathcal{J}(x_{2},t)\big)\,. \tag{5.18b}
\]
We have that \(\mathsf{q}(x_{2},\mathsf{t_{in}})=0\), so that the set \(\{\mathsf{s}=0\}\) corresponds to the initial time slice \(\{t=\mathsf{t_{in}}\}\), which is the _past_ temporal boundary of the projected spacetime \(\widehat{\mathcal{P}}\). We also note that the _future_ temporal boundary of \(\widehat{\mathcal{P}}\), namely the set
\[
\partial_{\mathsf{top}}\widehat{\mathcal{P}}=\{(x_{2},t)\in\mathbb{T}\times[\mathsf{t_{in}},\mathsf{t_{fin}}]\colon\mathcal{J}(x_{2},t)=0\}\,,
\]
is mapped under \(\mathsf{q}\) to the set \(\{\mathsf{s}=\varepsilon\}\).
Next, we define a suitable inverse of \(\mathsf{q}\) by
\[
\mathsf{q}^{-1}\colon\mathbb{T}\times[0,\varepsilon)\to[\mathsf{t_{in}},\mathsf{t_{fin}})\,, \tag{5.19a}
\]
\[
t=\mathsf{q}^{-1}(x_{2},\mathsf{s})\,, \tag{5.19b}
\]
such that \(t=\mathsf{q}^{-1}(x_{2},\mathsf{q}(x_{2},t))\) for all \((x_{2},t)\in\widehat{\mathcal{P}}\), or equivalently, \(\mathsf{s}=\mathsf{q}(x_{2},\mathsf{q}^{-1}(x_{2},\mathsf{s}))\) for all \((x_{2},\mathsf{s})\in\mathbb{T}\times[0,\varepsilon)\). In (5.19) we are abusing notation: it is the map \((x_{2},\mathsf{s})\mapsto(x_{2},t)\) defined from \(\mathbb{T}\times[0,\varepsilon)\to\mathbb{T}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\) which is the inverse of the map \((x_{2},t)\mapsto(x_{2},\mathsf{s})=(x_{2},\mathsf{q}(x_{2},t))\). The fact that such a map is well-defined is established in Lemma 6.5 below.

### Change of coordinates for the remapped spacetime

Given any function \(f\colon\mathcal{P}\to\mathbb{R}\), where we recall cf. (5.11) that \(\mathcal{P}\subset\mathbb{T}^{2}\times[\mathsf{t_{in}},\mathsf{t_{fin}})\), we define the function \(\widetilde{f}\colon\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{R}\) by
\[
\widetilde{f}(x,\mathsf{s}):=f(x,t)\,,\qquad\text{where}\qquad\mathsf{s}=\mathsf{q}(x_{2},t)\,. \tag{5.20}
\]
Then, by the chain-rule, (5.18b), and (5.13), we obtain
\[
\partial_{t}f(x,t)=\widehat{\mathsf{Q}}(x_{2},\mathsf{s})\partial_{\mathsf{s}}\widetilde{f}(x,\mathsf{s})\,, \tag{5.21a}
\]
\[
\partial_{2}f(x,t)=\big(\partial_{2}-\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})\partial_{\mathsf{s}}\big)\widetilde{f}(x,\mathsf{s})\,, \tag{5.21b}
\]
\[
\partial_{1}f(x,t)=\partial_{1}\widetilde{f}(x,\mathsf{s})\,, \tag{5.21c}
\]
where for compactness of notation we have introduced the functions
\[
\widehat{\mathsf{Q}}(x_{2},\mathsf{s})=\partial_{t}\mathsf{q}(x_{2},t)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}=-\varepsilon(\partial_{t}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}\,, \tag{5.22a}
\]
\[
\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})=-\partial_{2}\mathsf{q}(x_{2},t)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}=\varepsilon(\partial_{2}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}\,. \tag{5.22b}
\]
For later use, it is also convenient to define
\[
\mathsf{Q}(x,\mathsf{s}):=\widehat{\mathsf{Q}}(x_{2},\mathsf{s})-\widetilde{V}(x,\mathsf{s})\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})=-\varepsilon(\partial_{t}\overline{J}_{g}+V\partial_{2}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}
\]
\[
\qquad\qquad\qquad-\big(V(x_{1},x_{2},t)-V(x_{1}^{*}(x_{2},t),x_{2},t)\big)\Big|_{t=\mathsf{q}^{-1}(x_{2},\mathsf{s})}\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})\,, \tag{5.22c}
\]
and
\[
\dot{\mathsf{Q}}=\partial_{\mathsf{s}}\mathsf{Q}\,,\qquad\dot{\mathsf{Q}}_{\mathsf{s}}=\partial_{\mathsf{s}}\widehat{\mathsf{Q}}\,,\qquad\dot{\mathsf{Q}}_{2}=\partial_{\mathsf{s}}\overline{\mathsf{Q}}_{2}\,. \tag{5.22d}
\]
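The role of (5.13) in this computation is worth spelling out: it is what allows one to differentiate \(\mathcal{J}(x_{2},t)=\overline{J}_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\) without ever differentiating the minimizer \(x_{1}^{*}\) itself. Indeed,
\[
\partial_{2}\mathcal{J}(x_{2},t)=\underbrace{\overline{J}_{g,1}(x_{1}^{*}(x_{2},t),x_{2},t)}_{=0\text{ by (5.13)}}\,\partial_{2}x_{1}^{*}(x_{2},t)+(\partial_{2}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\,,
\]
so that \(\partial_{2}\mathsf{q}=-\varepsilon\partial_{2}\mathcal{J}=-\varepsilon(\partial_{2}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\), which is precisely (5.22b); the identity (5.22a) follows in the same way by differentiating in \(t\) instead of \(x_{2}\).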
\tag{5.22d}\] With the above notation, it follows from (5.21) that the spacetime gradient operator in \((x,t)\) variables, namely \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\), becomes the gradient operator \(\widetilde{\mathsf{D}}\) associated with the \((x,\mathsf{s})\) coordinates, which is defined by \[\widetilde{\mathsf{D}}=(\widetilde{\mathsf{D}}_{\mathsf{s}},\widetilde{\mathsf{ D}}_{1},\widetilde{\mathsf{D}}_{2}):=\big{(}\varepsilon\widehat{\mathsf{Q}}\partial_{ \mathsf{s}},\varepsilon\partial_{1},\partial_{2}-\overline{\mathsf{Q}}_{2} \partial_{\mathsf{s}}\big{)}\,. \tag{5.23}\] That is, we have that \[\mathsf{D}f(x,t)=\widetilde{\mathsf{D}}\widetilde{f}(x,\mathsf{s}).\] Next, we notice that the components of \(\widetilde{\mathsf{D}}\) commute, that is \[[\widetilde{\mathsf{D}}_{\mathsf{s}},\widetilde{\mathsf{D}}_{2}]=[\widetilde{ \mathsf{D}}_{\mathsf{s}},\widetilde{\mathsf{D}}_{1}]=[\widetilde{\mathsf{D}}_{2 },\widetilde{\mathsf{D}}_{1}]=0\,, \tag{5.24}\] so that for any \(\gamma\in\mathbb{N}_{0}^{3}\) we may write unambiguously \(\widetilde{\mathsf{D}}^{\gamma}=\widetilde{\mathsf{D}}_{\mathsf{s}}^{\gamma_{ 0}}\widetilde{\mathsf{D}}_{1}^{\gamma_{1}}\widetilde{\mathsf{D}}_{2}^{\gamma_ {2}}\), and notice that \[(\mathsf{D}^{\gamma}f)(x,t)=(\widetilde{\mathsf{D}}^{\gamma}\widetilde{f})(x, s)\,. \tag{5.25}\] Via the identity \(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}=\frac{1}{\varepsilon}\widetilde{ \mathsf{D}}_{\mathsf{s}}+\widetilde{V}\widetilde{\mathsf{D}}_{2}\), we note that material derivatives are mapped into \((x,\mathsf{s})\) coordinates as \[(\partial_{t}+V\partial_{2})f(x,t)=(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde {V}\partial_{2})\widetilde{f}(x,\mathsf{s})=(\tfrac{1}{\varepsilon}\widetilde{ \mathsf{D}}_{\mathsf{s}}+\widetilde{V}\widetilde{\mathsf{D}}_{2})\widetilde{f}( x,\mathsf{s})\,. \tag{5.26}\] It also follows from (5.24) and the second equality in (5.26) that \[[(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2}),\widetilde{ \mathsf{D}}^{k}]\widetilde{f}=[\widetilde{V},\widetilde{\mathsf{D}}^{k}] \widetilde{\mathsf{D}}_{2}\widetilde{f}=-\widetilde{\mathsf{D}}^{k}\widetilde {V}\,\widetilde{\mathsf{D}}_{2}\widetilde{f}-(\widetilde{\mathsf{D}}^{k}, \widetilde{V},\widetilde{\mathsf{D}}_{2}\widetilde{f})\,. \tag{5.27}\] Lastly, we may identify the adjoint of \(\widetilde{\mathsf{D}}\) with respect to the \(L^{2}\) inner product on \(\mathbb{T}^{2}\times[0,\mathsf{s}]\) by \[\widetilde{\mathsf{D}}^{\mathsf{s}}_{1} =-\widetilde{\mathsf{D}}_{\mathsf{s}}-\varepsilon\hat{\mathsf{Q }}_{\mathsf{s}}+\varepsilon\widehat{\mathsf{Q}}(\delta_{\mathsf{s}}-\delta_{0 })\,, \tag{5.28a}\] \[\widetilde{\mathsf{D}}^{\mathsf{s}}_{1} =-\widetilde{\mathsf{D}}_{1}\,,\] (5.28b) \[\widetilde{\mathsf{D}}^{\mathsf{s}}_{2} =-\widetilde{\mathsf{D}}_{2}+\hat{\mathsf{Q}}_{2}-\overline{ \mathsf{Q}}_{2}\delta_{\mathsf{s}}\,,\] (5.28c) \[(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2})^{*} =-(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2})- \hat{\mathsf{Q}}_{\mathsf{s}}+\mathsf{Q}(\delta_{\mathsf{s}}-\delta_{0})+ \widetilde{V}\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2}\widetilde{V}\,. \tag{5.28d}\] Here we have used that \(\overline{J}_{g}(x,\mathsf{t}_{\mathsf{in}})=1\), so that \(\overline{\mathsf{Q}}_{2}(x,0)=0\). 
**Remark 5.1** (Lower bound for \(\widetilde{J}_{g}\) and definition of "fake \(J_{g}\)").: _By using (5.20) with \(f=\overline{J}_{g}\), and the definition of the map \(\mathfrak{q}\), we deduce that_ \[\widetilde{\mathcal{J}}_{g}(x,\mathsf{s})=\overline{J}_{g}(x,t)\geq\min_{x_{1 }}\overline{J}_{g}(x,t)=\mathcal{J}(x_{2},t)=\left(1-\tfrac{\mathsf{s}}{ \varepsilon}\right)=\widetilde{\mathcal{J}}(x_{2},\mathsf{s})\,. \tag{5.29}\] _Throughout the paper, we shall refer to \(\mathcal{J}(x_{2},t)=\widetilde{\mathcal{J}}(x_{2},\mathsf{s})\) as "fake \(J_{g}\)". We shall discuss in Section 6 several useful properties of \(\widetilde{\mathcal{J}}\). Moreover, note that \(\widetilde{\mathcal{J}}\) does not in fact depend on \(x_{2}\) at all._ **Remark 5.2** (Dropping the tildes).: _Rather than working with a new family of variables that depend on the spacetime coordinates \((x,\mathsf{s})\), namely \(\widetilde{\widetilde{\mathsf{W}}}(x,\mathsf{s})=\hat{\mathsf{W}}(x,t)\), \(\widetilde{\widetilde{\mathsf{Z}}}(x,\mathsf{s})=\hat{\mathsf{Z}}(x,t)\), \(\widetilde{\widetilde{\mathsf{A}}}(x,\mathsf{s})=\hat{\mathsf{A}}(x,t)\), \(\widetilde{J}_{g}(x,\mathsf{s})=J_{g}(x,t)\), \(\widetilde{\widetilde{J}}_{g}(x,\mathsf{s})=\overline{J}_{g}(x,t)\), \(\widetilde{\mathcal{J}}(x,\mathsf{s})=\mathcal{J}(x,\mathsf{s})\), \(\widetilde{h}(x,\mathsf{s})=h(x,t)\), \(\widetilde{g}(x,\mathsf{s})=g(x,t)\), \(\widetilde{\mathcal{N}}(x,\mathsf{s})=N(x,t)\), and \(\widetilde{\tau}(x,\mathsf{s})=\tau(x,t)\), for notational simplicity we drop the tildes and abuse notation to continue using the variables \(\hat{\mathsf{W}},\hat{\mathsf{Z}},\hat{\mathsf{A}},\hat{J}_{g},\widetilde{J}_{g },\mathcal{J},h,g,\mathcal{N},\tau\), but now depending on \((x,\mathsf{s})\) rather than \((x,t)\). This identification is made throughout the rest of the paper and no ambiguity may arise because we shall still use the notation \(\widetilde{\mathsf{D}}\) for the spacetime derivative operator in \((x,\mathsf{s})\) coordinates. As such, \(\widetilde{\mathsf{D}}f\) means that \(f\) is viewed as a function of \((x,\mathsf{s})\), while \(\mathsf{D}f\) means that \(f\) is viewed as a function of \((x,t)\), where \(t=\mathfrak{q}^{-1}(x_{2},\mathsf{s})\)._ At this stage it is convenient to record a few of the evolution equations transformed into \((x,\mathsf{s})\) coordinates. 
For instance, (3.13b), (3.15a), and (5.4a) imply that (as mentioned in Remark 5.2, we drop the tildes) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g} =\tfrac{1+\alpha}{2}J_{g}\hat{\mathsf{W}}_{\mathcal{N}}+\tfrac{1- \alpha}{2}J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}\,, \tag{5.30}\] \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\overline{J}_{g} =\tfrac{1+\alpha}{2}J_{g}\hat{\mathsf{W}}_{\mathcal{N}}+\tfrac{1- \alpha}{2}J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}-\mathfrak{J}\,,\] (5.31) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D }}_{2}h =g\big{(}\tfrac{1+\alpha}{2}\hat{\mathsf{W}}_{\mathcal{N}}+\tfrac{1- \alpha}{2}\hat{\mathsf{Z}}_{\mathcal{T}}\big{)}\,, \tag{5.32}\] from (3.19c) and (3.20) we deduce \[\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}\Sigma =\tfrac{1}{2}J_{g}(\hat{\mathsf{W}}_{\mathcal{N}}-\hat{\mathsf{ Z}}_{\mathcal{N}})+\tfrac{1}{2}J_{g}\widetilde{\mathsf{D}}_{2}h(\hat{\mathsf{W}}_{ \mathcal{T}}-\hat{\mathsf{Z}}_{\mathcal{T}})\,, \tag{5.33a}\] \[\widetilde{\mathsf{D}}_{2}\Sigma =\tfrac{1}{2}g^{\sharp}(\hat{\mathsf{W}}_{\mathcal{T}}-\hat{ \mathsf{Z}}_{\mathcal{T}})\,,\] (5.33b) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma =-\alpha\Sigma(\hat{\mathsf{Z}}_{\mathcal{N}}+\hat{\mathsf{A}}_{ \mathcal{T}})\,,\] (5.33c) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma^{-2\beta} =2\alpha\beta\Sigma^{-2\beta}(\hat{\mathsf{Z}}_{\mathcal{N}}+\hat{ \mathsf{A}}_{\mathcal{T}})\,, \tag{5.33d}\] while transforming (3.14) yields \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{N}=-\big{(}\tfrac{1+ \alpha}{2}\hat{\mathsf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathsf{Z}}_{ \mathcal{T}}\big{)}\tau\,, \tag{5.34a}\] \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{T}=\big{(}\tfrac{1+\alpha} {2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{ \mathcal{T}}\big{)}\mathcal{N}\,. \tag{5.34b}\] The specific vorticity evolution is transformed to the equation \[\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Omega- \alpha\partial_{1}\Omega+\alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2 }h\;\widetilde{\mathsf{D}}_{2}\Omega=0\,. \tag{5.35}\] Here we have appealed to the abuse of notation mentioned in Remark 5.2. ### The \(L^{2}\)-based energy norms In Sections 5-12, we will make use of the "energy" and "damping" norms defined as follows. We use the convention in Remark 5.2, dropping all tildes for functions that depend on \((x,\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\). We keep the \(\widetilde{\mathsf{D}}\) notation from (5.23) to emphasize this \((x,\mathsf{s})\) dependence. See also Remark 5.3 for equivalent norms in term of \((x,t)\) coordinates. 
The energy norms at the sixth derivative level are given by \[\widetilde{\mathcal{E}}_{6}^{2}(\mathsf{s}) =\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s})+( \mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}( \mathsf{s}) \tag{5.36a}\] \[\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s}) =\big{\|}\mathcal{J}^{\mathsf{A}}J_{g}^{\frac{1}{2}}\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{ \mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|} _{L^{2}_{x}}^{2}\] (5.36b) \[\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s}) =\big{\|}\mathcal{J}^{\mathsf{A}}J_{g}^{\frac{1}{2}}\widetilde{ \mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T} },\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\,, \tag{5.36c}\] and are defined at the fifth derivative level by \[\widetilde{\mathcal{E}}_{5}^{2}(\mathsf{s}) =\widetilde{\mathcal{E}}_{5,\mathcal{N}}^{2}(\mathsf{s})+( \mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{E}}_{5,\mathcal{T}}^{2}( \mathsf{s}) \tag{5.36d}\] \[\widetilde{\mathcal{E}}_{5,\mathcal{N}}^{2}(\mathsf{s}) =\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{ \mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\] (5.36e) \[\widetilde{\mathcal{E}}_{5,\mathcal{T}}^{2}(\mathsf{s}) =\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(\hat{ \mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{ \mathcal{T}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\,, \tag{5.36f}\] where \(\mathsf{K}=\mathsf{K}(\alpha)\geq 1\) is a sufficiently large constant chosen solely in terms of \(\alpha\), see (10.73). In particular, \(\mathsf{K}\) is independent of \(\varepsilon\). 
The sixth-order damping norms are given by \[\widetilde{\mathcal{D}}_{6}^{2}(\mathsf{s}) =\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\mathsf{s})+( \mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}( \mathsf{s}) \tag{5.36g}\] \[\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\big{\|}\mathcal{J}^{\mathsf{A}}J_{g}^{ \frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g} \hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot, \mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] (5.36h) \[\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{ \frac{1}{2}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{ \mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathsf{s}^{ \prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{5.36i}\] and the fifth-order damping norms are \[\widetilde{\mathcal{D}}_{5}^{2}(\mathsf{s}) =\widetilde{\mathcal{D}}_{5,\mathcal{N}}^{2}(\mathsf{s})+( \mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{D}}_{5,\mathcal{T}}^{2}( \mathsf{s}) \tag{5.36j}\] \[\widetilde{\mathcal{D}}_{5,\mathcal{N}}^{2}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\big{\|}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{ A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d} \mathsf{s}^{\prime}\] (5.36k) \[\widetilde{\mathcal{D}}_{5,\mathcal{T}}^{2}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\big{\|}\widetilde{\mathsf{D}}^{5}(\hat{ \mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{ \mathcal{T}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d} \mathsf{s}^{\prime}\,, \tag{5.36l}\] where \(\mathsf{K}\geq 1\) is the same constant in (5.36a), (5.36d), (5.36g), and in (5.36j). ### Bootstrap assumptions The existence of solutions in the spacetime \(\mathcal{P}\) (equivalently, on \(\mathbb{T}^{2}\times[0,\varepsilon)\) in \((x,\mathsf{s})\) variables) relies on quantitative bounds on all unknowns in the problem. We establish these quantitative bounds via a series of "bootstrap inequalities". Assuming these inequalities hold true with a specific constant on \(\mathcal{P}\) (intuitively, this constant is related to the size of various norms of the initial data multiplied by a constant which only depends on \(\alpha\) and \(\kappa_{0}\)), we use the equations to prove that the bounds in fact hold true on \(\mathcal{P}\) (equivalently, on \(\mathbb{T}^{2}\times[0,\varepsilon)\) in \((x,\mathsf{s})\) variables) with a constant which is strictly smaller than what was assumed. A standard continuity argument is then used to justify that the bootstrap inequalities indeed hold true globally on \(\mathcal{P}\). The bootstrap assumptions are as follows. There exists a constant \(\mathsf{C}_{\mathsf{supp}}>0\), which depends only on \(\alpha\) and \(\kappa_{0}\) (see (6.5) below), such that for all \(\mathsf{s}\in[0,\varepsilon)\), we have \[\mathrm{supp}\,(\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\widetilde{ \mathsf{D}}J_{g},\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h)(\cdot, \mathsf{s})\subset\mathcal{X}_{\mathrm{fin}}:=\big{\{}x\in\mathbb{T}^{2}\colon \mathrm{dist}(x,\mathcal{X}_{\mathrm{in}})\leq\mathsf{C}_{\mathsf{supp}} \varepsilon\big{\}}\,. 
\tag{5.37a}\] Regarding \(\hat{\mathbf{W}}\), we assume that pointwise for \((x,t)\in\mathcal{P}\), or equivalently, \((x,\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\), we have \[J_{g}\hat{\mathbf{W}}_{\mathcal{N}} \geq-\tfrac{9}{10}\varepsilon^{-1}\ \ \text{ implies that }\ \ J_{g}\geq\tfrac{2}{25}\,, \tag{5.37b}\] \[\big{|}J_{g}\hat{\mathbf{W}}_{\mathcal{N}} \big{|} \leq(1+\varepsilon)\varepsilon^{-1}\,, \tag{5.37c}\] \[\big{|}\widetilde{\mathsf{D}}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}}) \big{|} \leq 3\varepsilon^{-1}\,, \tag{5.37d}\] \[\big{|}\mathbf{\hat{W}}_{\mathcal{T}}\big{|} \leq 1+\varepsilon\,,\] (5.37e) \[\big{|}\widetilde{\mathsf{D}}\mathbf{\hat{W}}_{\mathcal{T}}\big{|} \leq 2\mathsf{C}_{\mathsf{data}}\,. \tag{5.37f}\] Regarding \(\mathbf{\hat{Z}}\) and \(\mathbf{\hat{A}}\), we assume that pointwise for \((x,t)\in\mathcal{P}\), or equivalently, \((x,\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\), it holds that \[\big{|}\mathbf{\hat{Z}}_{\mathcal{N}}\big{|}+\big{|}\widetilde{ \mathsf{D}}\mathbf{\hat{Z}}_{\mathcal{N}}\big{|} \leq\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}}\,, \tag{5.37g}\] \[\big{|}\mathbf{\hat{A}}_{\mathcal{N}}\big{|}+\big{|}\widetilde{ \mathsf{D}}\mathbf{\hat{A}}_{\mathcal{N}}\big{|} \leq\mathsf{C}_{\mathbf{\hat{A}}_{\mathcal{N}}}\,,\] (5.37h) \[\big{|}\mathbf{\hat{Z}}_{\mathcal{T}}\big{|}+\big{|}\widetilde{ \mathsf{D}}\mathbf{\hat{Z}}_{\mathcal{T}}\big{|} \leq\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{T}}}\varepsilon\,,\] (5.37i) \[\big{|}\mathbf{\hat{A}}_{\mathcal{T}}\big{|}+\big{|}\widetilde{ \mathsf{D}}\mathbf{\hat{A}}_{\mathcal{T}}\big{|} \leq\mathsf{C}_{\mathbf{\hat{A}}_{\mathcal{T}}}\varepsilon\,, \tag{5.37j}\] where \(\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}},\mathsf{C}_{\mathbf{\hat{A}}_{ \mathcal{N}}},\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{T}}}\), and \(\mathsf{C}_{\mathbf{\hat{A}}_{\mathcal{T}}}\) are sufficiently large constants which depend only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and which are fixed throughout the proof (see conditions (9.17), (9.20), (9.26), (9.33), (9.40), (9.47), (9.54), and (9.61) below). Similar pointwise in spacetime bootstraps are assumed on the geometry, ALE-drift, and sound-speed: \[0 \leq J_{g}\leq\tfrac{6}{5} \tag{5.37k}\] \[\big{|}\mathsf{D}J_{g}\big{|} \leq 4(1+\alpha)\] (5.37l) \[\max\bigl{\{}\tfrac{1}{2}|\mathsf{D}_{1}h|,\tfrac{1}{3}|\mathsf{ D}_{2}h|,\tfrac{1}{(1+\alpha)\kappa_{0}}|\mathsf{D}_{t}h|\bigr{\}} \leq\varepsilon\,,\] (5.37m) \[\max\bigl{\{}\tfrac{1}{5(1+\alpha)}\big{|}\mathsf{D}\mathsf{D}_{1 }h\big{|},\tfrac{1}{5\mathsf{C}_{\mathsf{data}}}\big{|}\mathsf{D}\mathsf{D}_{2 }h\big{|}\bigr{\}} \leq\varepsilon\,,\] (5.37n) \[\big{|}V\big{|}+\big{|}\mathsf{D}V\big{|} \leq\mathsf{C}_{\mathsf{V}}\varepsilon\,,\] (5.37o) \[\tfrac{\kappa_{0}}{4} \leq\Sigma\leq\kappa_{0}\,,\] (5.37p) \[\big{|}\mathsf{D}\Sigma\big{|} \leq 2\kappa_{0}\,, \tag{5.37q}\] where \(\mathsf{C}_{\mathsf{V}}\) is a sufficiently large constant which depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), which which is fixed throughout the proof (see (9.15)). 
Lastly, for the energy bootstrap we assume that there exist constants \(\mathsf{B}_{6},\mathsf{B}_{5},\mathsf{B}_{J},\mathsf{B}_{\mathsf{h}}\geq 1\), which only depend on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), such that \[\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]} \widetilde{\mathcal{E}}_{6}(\mathsf{s})+\widetilde{\mathcal{D}}_{6}(\varepsilon) \leq\mathsf{B}_{6}\,, \tag{5.37r}\] \[\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]} \widetilde{\mathcal{E}}_{5}(\mathsf{s})+\widetilde{\mathcal{D}}_{5}(\varepsilon) \leq\mathsf{B}_{5}\,,\] (5.37s) \[\big{\|}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{1}h \big{\|}_{L^{2}_{x,\mathsf{s}}([0,\varepsilon)\times\mathbb{T}^{2})}+\big{\|} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{x, \mathsf{s}}([0,\varepsilon)\times\mathbb{T}^{2})} \leq\mathsf{B}_{\mathsf{h}}\varepsilon^{2}\,,\] (5.37t) \[\big{\|}\widetilde{\mathsf{D}}^{6}J_{g}\big{\|}_{L^{2}_{x, \mathsf{s}}([0,\varepsilon)\times\mathbb{T}^{2})} \leq\mathsf{B}_{J}\varepsilon. \tag{5.37u}\] Without loss of generality, we will henceforth assume the ordering \(\mathsf{B}_{6}\leq\mathsf{B}_{5}\leq\mathsf{B}_{J},\mathsf{B}_{\mathsf{h}}\). **Remark 5.3** (**Norms with respect to \((x,\mathsf{s})\) versus \((x,t)\) variables)**.: _Using definition (5.20), in the bootstrap bounds (5.37) we have identified functions \(F=F(x,t)\colon\mathcal{P}\to\mathbb{R}\) and their counterparts \(\widetilde{F}=\widetilde{F}(x,\mathsf{s})\colon\mathbb{T}^{2}\times[0, \varepsilon)\to\mathbb{R}\). Additionally, according to Remark 5.2 we have dropped tildes, writing \(F\) instead of \(\widetilde{F}\), but have kept \(\widetilde{\mathsf{D}}\) instead of \(\mathsf{D}\) to emphasize \((x,\mathsf{s})\) versus \((x,t)\) dependence. The perspective taken in our proof is that some of the bootstrap assumptions (e.g. for \(\mathbf{\hat{W}}_{\mathcal{N}}\) and \(J_{g}\)) are more convenient to close in \((x,t)\) variables, while some others (e.g. the energy bounds) are more convenient to close in \((x,\mathsf{s})\) variables. It is important however to emphasize that with the exception of the bootstraps for \(\sup_{\mathsf{s}}\widetilde{\mathcal{E}}_{5}(\mathsf{s})\) and \(\sup_{\mathsf{s}}\widetilde{\mathcal{E}}_{6}(\mathsf{s})\), no ambiguity arises from using \((x,\mathsf{s})\) versus \((x,t)\) variables. Indeed, it is clear that at the level of pointwise bounds, (5.20) and (5.25) imply that for any function \(F(x,t)=\widetilde{F}(x,\mathsf{s})\) and \(k\geq 0\), we have_ \[\|\mathsf{D}^{k}F\|_{L^{\infty}_{x,t}(\mathcal{P})}=\|\widetilde{ \mathsf{D}}^{k}\widetilde{F}\|_{L^{\infty}_{x,s}([0,\varepsilon)\times\mathbb{T}^ {2})}\,.\] (5.38a) _This addresses (5.37a)-(5.37q). Next, we note that the Jacobian of the map \[(x,t)\mapsto(x,\mathsf{s})\] present in ( 5.20 ) is easily seen to equal \[|\partial_{t}\mathsf{q}|=\widehat{\mathsf{Q}}\], and the bound ( 6.38a ) below gives global upper and lower bounds for \[\widehat{\mathsf{Q}}\] (which are strictly positive, and depend only on \[\alpha\] ). 
As such, with the spacetime \[\mathcal{P}\] defined in ( 5.11 ), the change of variable formula gives that for any function \[F(x,t)=\widetilde{F}(x,\mathsf{s})\], any \[k\geq 0\], and any weight \[\varphi(x,t)=\widetilde{\varphi}(x,\mathsf{s})\geq 0\], we have \[C_{\alpha}^{-1}\|\widetilde{\varphi}\widetilde{\mathsf{D}}^{k}\widetilde{F}\|_{L^{2}_{x, \mathsf{s}}([0,\varepsilon)\times\mathbb{T}^{2})}\leq\|\varphi\mathsf{D}^{k}F \|_{L^{2}_{x,\mathsf{s}}([0,\varepsilon)\times\mathbb{T}^{2})}\leq C_{\alpha}\| \widetilde{\varphi}\widetilde{\mathsf{D}}^{k}\widetilde{F}\|_{L^{2}_{x,\mathsf{s}}([0, \varepsilon)\times\mathbb{T}^{2})}\,, \tag{5.38b}\] _for a constant \(C_{\alpha}\geq 1\) that only depends on \(\alpha\). This addresses the \(L^{2}_{x,\mathsf{s}}=L^{2}([0,\varepsilon)\times\mathbb{T}^{2})\) norms present in (5.37r)-(5.37u). It remains to discuss the \(L^{\infty}_{x}L^{2}_{x}=L^{\infty}([0,\varepsilon);L^{2}(\mathbb{T}^{2}))\) norms norms encoded by the bootstraps for \(\sup_{\mathsf{s}}\widetilde{\mathcal{E}}_{5}(\mathsf{s})\) and \(\sup_{\mathsf{s}}\widetilde{\mathcal{E}}_{6}(\mathsf{s})\) in (5.37r)-(5.37s). Here, bounds which correspond to the "time-slice foliation" of \(\mathbb{T}^{2}\times[0,\varepsilon)\), namely \((\mathbb{T}^{2}\times\{\mathsf{s}\})_{\mathsf{s}\in[0,\varepsilon]}\), do not translate to bounds on a "time-slice foliation" of \(\mathcal{P}\). Instead, a foliation via level sets of \(\mathcal{J}\), namely \(\{(x_{1},x_{2},t)\colon\mathcal{J}(x_{2},t)=1-\frac{\mathsf{s}}{\varepsilon} \}=\{(x_{1},x_{2},\mathfrak{q}^{-1}(x_{2},\mathsf{s}))\}\) for \(\mathsf{s}\in[0,\varepsilon]\), must be used. That is, for any function \(F(x,t)=\widetilde{F}(x,\mathsf{s})\), any \(k\geq 0\), and any weight \(\varphi(x,t)=\widetilde{\varphi}(x,\mathsf{s})\geq 0\), we have_ \[\|(\widetilde{\mathcal{P}}\widetilde{\mathcal{D}}^{k}\widetilde{F})(x, \mathsf{s})\|_{L^{2}_{x}(\mathbb{T}^{2})}=\|(\varphi\mathsf{D}^{k}F)(x, \mathfrak{q}^{-1}(x_{2},\mathsf{s}))\|_{L^{2}_{x}(\mathbb{T}^{2})}\,, \tag{5.38c}\] _for any \(\mathsf{s}\in[0,\varepsilon)\). The equivalences in (5.38a) and (5.38b) will be used throughout the paper. In contrast, (5.38c) is never used._ **Remark 5.4** (**The order in which the bootstrap constants are chosen)**.: _We will show that the bootstrap assumptions (5.37) close if the various constants appearing therein, namely_ \[\mathsf{C}_{\mathsf{supp}}\,,\mathsf{C}_{\mathbf{\hat{2}}_{N}}\,,\mathsf{C}_{ \mathbf{\hat{A}}_{N}}\,,\mathsf{C}_{\mathbf{\hat{2}}_{T}}\,,\mathsf{C}_{ \mathbf{\hat{A}}_{T}}\,,\mathsf{C}_{\mathsf{V}}\,,\mathsf{K}\,,\mathsf{B}_{6} \,,\mathsf{B}_{5}\,,\mathsf{B}_{J}\,,\mathsf{B}_{\mathsf{h}}\,, \tag{5.39}\] _are chosen suitably. In order to make sure that there is no circular argument, we discuss the precise interdependence of these constants. We first note that:_ 1. _The Euler equation fixes the parameter_ \(\alpha=\frac{\gamma-1}{2}>0\)_, where_ \(\gamma>1\) _is the adiabatic exponent._ 2. _The initial data, through assumptions (_i_)-(_iii_), fixes two parameters_ \(\kappa_{0}\geq 1\) _and_ \(\overline{\mathsf{C}}\geq 1\)_. Moreover, as explained in Remark_ 4.2_, the parameters_ \(\alpha,\kappa_{0}\)_, and_ \(\overline{\mathsf{C}}\) _determine a sufficiently large parameter_ \(\mathsf{C}_{\mathsf{data}}\geq\kappa_{0}\)_._ _It is important to emphasize that \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\) are independent of \(\varepsilon>0\), which will be chosen to be sufficiently small at the end of the proof. 
Next, the precise order in which the bootstrap constants are chosen is as follows:_ 1. \(\mathsf{C}_{\mathsf{supp}}\) _is chosen in (_6.5_) to depend only on_ \(\alpha\) _and_ \(\kappa_{0}\)_. Subsequent dependence on_ \(\mathsf{C}_{\mathsf{supp}}\) _is encoded as dependence on_ \(\alpha\) _and_ \(\kappa_{0}\)_._ 2. \(\mathsf{K}\geq 1\) _is chosen in (_10.73_) to depend only on_ \(\alpha\)_. For the downstream maximal development in Section_ 13_,_ \(\mathsf{K}\) _also needs to depend on_ \(\kappa_{0}\)_, cf. (_13.55_)._ 3. \(\mathsf{B}_{6}\geq 1\) _is determined by (_10.74_) and (_12.92_) to be sufficiently large with respect to only_ \(\alpha\)_, and_ \(\mathsf{C}_{\mathsf{data}}\)_. For the downstream maximal development in Section_ 13_,_ \(\mathsf{B}_{6}\) _also needs to depend on_ \(\kappa_{0}\)_, cf. (_13.54_) and (_13.84_)._ 4. \(\mathsf{B}_{5}\) _is chosen in (_6.79_) to depend only on_ \(\alpha\)_,_ \(\mathsf{C}_{\mathsf{data}}\)_, and_ \(\mathsf{B}_{6}\)_. As shown in (_6.10_), the quotient_ \(\mathsf{B}_{5}\mathsf{B}_{6}^{-1}\) _is bounded from above by a universal constant, and from below by a constant that only depends on_ \(\alpha\)_. As such, subsequent bounds of the type_ \(A\leq\mathsf{B}_{5}\) _will be written as_ \(A\leq\tilde{C}\mathsf{B}_{6}\) _or_ \(A\lesssim\mathsf{B}_{6}\) _(see Remark_ 4.5_)._ 5. \(\mathsf{B}_{J}\) _is chosen in (_7.2_) to depend only on_ \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)_, and in a linear fashion on_ \(\langle\mathsf{B}_{6}\rangle\)_. Since_ \(\mathsf{B}_{6}\geq 1\)_, subsequent bounds of the type_ \(A\leq\mathsf{B}_{J}\) _will be written as_ \(A\leq\tilde{C}\mathsf{B}_{6}\) _or_ \(A\lesssim\mathsf{B}_{6}\) _(see Remark_ 4.5_)._ 6. \(\mathsf{B}_{h}\) _is chosen in (_7.2_) to depend only on_ \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)_, and in a linear fashion on_ \(\langle\mathsf{K}\rangle\langle\mathsf{B}_{6}\rangle\)_. Since_ \(\mathsf{B}_{6}\geq 1\) _and_ \(\mathsf{K}\geq 1\)_, subsequent bounds of the type_ \(A\leq\mathsf{B}_{h}\) _will be written as_ \(A\leq\tilde{C}\mathsf{KB}_{6}\) _or_ \(A\lesssim\mathsf{KB}_{6}\) _(see Remark_ 4.5_)._ 7. \(\mathsf{C}_{\mathbf{\hat{A}}_{N}}\) _is determined by (_9.17_) and (_9.20_), and depends only on_ \(\alpha\) _and_ \(\mathsf{C}_{\mathsf{data}}\)_._ 8. \(\mathsf{C}_{\mathbf{\hat{A}}_{T}}\) _is determined by (_9.26_) and (_9.33_), and depends on_ \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}},\mathsf{C}_{\mathbf{\hat{A}}_{N}}\)_, and_ \(\mathsf{B}_{6}\)_. In view of points (_(_v_)) and (_(_iv_)) above, dependence on_ \(\mathsf{C}_{\mathbf{\hat{A}}_{T}}\) _is subsequently encoded as dependence only on_ \(\alpha,\kappa_{0}\)_, and_ \(\mathsf{C}_{\mathsf{data}}\)_._ 9. \(\mathsf{C}_{\mathsf{V}}\) _is determined by (_9.15_) and depends only on_ \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}},\mathsf{C}_{\mathbf{\hat{A}}_{N}}\)_, and_ \(\mathsf{C}_{\mathbf{\hat{A}}_{T}}\)_. In view of points (_(_iv_)) and (_(_iv_)) above, dependence on_ \(\mathsf{C}_{\mathsf{V}}\) _is subsequently encoded as dependence only on_ \(\alpha,\kappa_{0}\)_, and_ \(\mathsf{C}_{\mathsf{data}}\)_._ 10. \(\mathsf{C}_{\mathbf{\hat{2}}_{N}}\) _is determined by (_9.40_) and (_9.47_) and depends only on_ \(\alpha,\kappa_{0}\) _and_ \(\mathsf{C}_{\mathsf{data}}\)_._ 11. \(\mathsf{C}_{\mathbf{\hat{2}}_{T}}\) _is determined by (_9.54_) and (_9.61_) and depends only on_ \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)_, and_ \(\mathsf{C}_{\mathbf{\hat{2}}_{N}}\)_. 
In view of point (_(_iii_)) above, dependence on_ \(\mathsf{C}_{\mathbf{\hat{2}}_{T}}\) _is subsequently encoded as dependence only on_ \(\alpha,\kappa_{0}\)_, and_ \(\mathsf{C}_{\mathsf{data}}\)_._ _The last parameter chosen in the proof is:_ 1. \(\varepsilon>0\)_, which is taken to be sufficiently small with respect to_ \(\alpha\)_,_ \(\kappa_{0}\)_, and_ \(\mathsf{C}_{\mathsf{data}}\)_. That is,_ \(\varepsilon\) _is taken to be small enough with respect to the parameters induced by the initial data, cf. points (_(_i_))-(_(_ii_)) above._ _We emphasize that in view of points (_(_iii_))-(_(_viii_)) above, \(\varepsilon\) is sufficiently small enough with respect to any of the constants appearing in the bootstrap assumptions, cf. (5.39). This fact is used implicitly throughout the paper._ ## 6. First consequences of the bootstrap assumptions In this section we collect a few direct consequences of the bootstrap bounds (5.37), which are then subsequently used throughout the paper. We emphasize that the order in which these consequences are proven is irrelevant, they are all consequences of (5.37). As such, we sometimes make forward references to other bounds which are direct consequences of the bootstrap assumptions. ### Spatial support The goal of this subsection is to prove the bootstrap (5.37a). Recall that at the initial time \(t=\mathfrak{t}_{\mathfrak{n}}\) we have that \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\) and hence \((\hat{\mathbf{U}},\hat{\mathbf{\Sigma}})\) are compactly supported in \([-13\pi\varepsilon,13\pi\varepsilon]\times\mathbb{T}\); see the set \(\mathcal{X}_{\mathfrak{in}}\) defined in (4.7). Then, for a speed \(\mathsf{v}\) which is to be determined later in the proof, one may define an expanding set \[\mathcal{X}(\mathsf{s}):=\{x\in\mathbb{T}^{2}\colon|x_{1}|\leq 26\pi \varepsilon+\mathsf{v}\mathsf{s}\}.\] Note that at time \(\mathsf{s}=0\) we have that \(\mathcal{X}_{\mathfrak{in}}\subset\mathcal{X}(0)\), giving us a bit of room to operate, at least for some infinitesimally small time. Then, we may use the system (3.3), (3.15a), and (3.19b) to show that there exists a sufficiently large parameter \(\mathsf{v}\), depending only on \(\alpha\), \(\kappa\), such that \[\int_{(\mathcal{X}(\mathsf{s}))^{\mathsf{E}}}\tfrac{\mathsf{Q}J_{g}}{2\Sigma} \big{(}|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2}\big{)}\mathrm{d}x=0\,, \tag{6.1}\] for all \(\mathsf{s}\in[0,\varepsilon)\), where we have denoted \(|\hat{\mathbf{U}}|^{2}=\hat{\mathsf{U}}_{k}^{i}\hat{\mathsf{U}}_{k}^{i}\) and \(|\hat{\mathbf{\Sigma}}|^{2}=\hat{\Sigma}_{k}\hat{\Sigma}_{k}\), with the usual convention of summation over repeated indices. As \(\tfrac{\mathsf{Q}J_{g}}{2\Sigma}>0\) on \(\mathbb{T}^{2}\times[0,\varepsilon)\) (cf. (5.11), (6.38a), (6.38b), (9.10)) if we establish the above identity, it means that the solution \((\hat{\mathbf{U}},\hat{\mathbf{\Sigma}})\) is compactly supported in \(\mathcal{X}(\mathsf{s})\), as claimed. For the sake of a contradiction, assume that there is a minimal \(\overline{\mathsf{s}}\in(0,\varepsilon)\) such that (6.1) holds on \([0,\overline{\mathsf{s}}]\), but that \((\tfrac{d}{\mathrm{d}s}\int_{(\mathcal{X}(\mathsf{s}))^{\mathsf{E}}}\tfrac{ \mathsf{Q}J_{g}}{2\Sigma}(|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2}) \mathrm{d}x)|_{\mathsf{s}=\overline{\mathsf{s}}}>0\). 
Then, from the chain rule we obtain \[\Big{(}\tfrac{d}{\mathrm{d}s}\int_{(\mathcal{X}(\mathsf{s}))^{ \mathsf{E}}}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}\big{(}|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2}\big{)}\mathrm{d}x\Big{)}\Big{|}_{\mathsf{s}= \overline{\mathsf{s}}}\leq\int_{(\mathcal{X}(\mathsf{s}))^{\mathsf{E}}}\partial _{\mathsf{s}}\Big{(}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}\big{(}|\hat{\mathbf{U}}| ^{2}+|\hat{\mathbf{\Sigma}}|^{2}\big{)}\Big{)}\mathrm{d}x-\tfrac{\mathsf{v}}{2 }\int_{\partial\mathcal{X}(\overline{\mathsf{s}})}\tfrac{\mathsf{Q}J_{g}}{2 \Sigma}\big{(}|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2}\big{)} \mathrm{d}x\,. \tag{6.2}\] Next, from (3.3), (3.15a), (3.19b), (5.21) and (5.26), we obtain \[\partial_{\mathsf{s}}\big{(}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}(| \hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2})\big{)} =\tfrac{1}{\Sigma}(|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^ {2})\big{(}\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1- \alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}+\tfrac{\alpha}{2}J_{g}\hat{ \mathbf{A}}_{\tau}+\tfrac{\hat{\mathsf{Q}}+V_{2}}{2}J_{g}\big{)}\] \[\quad-\tfrac{J_{g}}{2\Sigma}\Big{(}(1+\alpha)\hat{\Sigma}_{k}\hat {\Sigma}_{i}\hat{\mathsf{U}}_{k}^{i}+\alpha|\hat{\mathbf{\Sigma}}|^{2}\hat{ \mathsf{U}}_{k}^{i}+\hat{\mathsf{U}}_{j}^{i}\hat{\mathsf{U}}_{k}^{j}\hat{ \mathsf{U}}_{k}^{i}\Big{)}+\tfrac{\alpha}{2}\Big{(}|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2}-2\mathcal{N}^{i}\hat{\Sigma}_{k}\hat{\mathsf{U}}_{k}^ {i}\Big{)}_{,1}\] \[\quad-J_{g}\Big{(}\tfrac{V}{2\Sigma}(|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2})-\alpha\hat{\Sigma}_{k}e_{2}^{i}\hat{\mathsf{U}}_{k}^ {i}-\tfrac{\alpha}{2}g^{-\tfrac{\alpha}{2}}h_{,2}\,(|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2})\Big{)}_{,2}\] \[\quad-\Big{(}\alpha\hat{\mathsf{U}}_{k}^{i}\hat{\Sigma}_{k}\tau^{ i}g^{-\frac{1}{2}}+\alpha\hat{\Sigma}_{k}e_{2}^{i}\hat{\mathsf{U}}_{k}^{i}+\tfrac{ \alpha}{2}(|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2})g^{-1}h_{,2}+ \tfrac{V}{2\Sigma}(|\hat{\mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2})\Big{)} J_{g,2}\] \[\quad-J_{g}\Big{(}\alpha\tau^{i}g^{-1}h_{,2}\,\hat{\mathsf{U}}_{k} ^{i}\hat{\Sigma}_{k}-\tfrac{\alpha}{2}g^{-\frac{3}{2}(|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2})\Big{)}h_{,22}}. \tag{6.3}\] A remarks is in order. Since for all \(s\in[0,\overline{\mathsf{s}}]\) we have that \((\hat{\mathbf{U}},\hat{\mathbf{\Sigma}})\), and hence also \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\), are supported in \(\mathcal{X}(\mathsf{s})\), from identity (6.25) below, and the bounds (6.8), (5.37o), and (6.38), we obtain that if \(\mathsf{v}\geq C\varepsilon\), then \(J_{g}(x,\mathsf{s})=1\) and \(h_{,2}\,(x,\mathsf{s})=0\) for all \(x\in(\mathcal{X}(\overline{\mathsf{s}}))^{\mathsf{E}}\). Thus, in (6.3) we have that the \(J_{g};2\) and \(h_{,22}\) terms vanish identically, at all points in the interior of \((\mathcal{X}(\overline{\mathsf{s}}))^{\mathsf{E}}\). Moreover, we may freely multiply (or divide) the right side of (6.3) by powers of \(J_{g}\), since we are then multiplying by \(1\). 
Using these observations, we integrate (6.3) over \((\mathcal{X}(\mathsf{s}))^{\mathsf{E}}\), integrate by parts the pure derivative terms, and appeal to the bootstrap bounds (5.37) to conclude \[\int_{(\mathcal{X}(\overline{\mathsf{s}})^{\mathsf{E}}}\partial _{\mathsf{s}}\Big{(}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}\big{(}|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2}\big{)}\Big{)}\mathrm{d}x\] \[\qquad\leq\tfrac{\hat{C}}{\varepsilon}\int_{(\mathcal{X}( \overline{\mathsf{s}}))^{\mathsf{E}}}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}\big{(}|\hat{ \mathbf{U}}|^{2}+|\hat{\mathbf{\Sigma}}|^{2}\big{)}\mathrm{d}x+\Big{(}32\alpha(1+ \alpha)\kappa_{0}+\hat{C}\varepsilon\Big{)}\int_{\partial\mathcal{X}( \overline{\mathsf{s}})}\tfrac{\mathsf{Q}J_{g}}{2\Sigma}\big{(}|\hat{\mathbf{U}}|^{2}+| \hat{\mathbf{\Sigma}}|^{2}\big{)}\mathrm{d}x\,. \tag{6.4}\] Note that by assumption on the minimality of \(\overline{\mathsf{s}}\), the first term on the right side of (6.4), vanishes. As such, if \(\mathsf{v}\) is taken to as \(\mathsf{v}:=65\alpha(1+\alpha)\kappa_{0}\), and \(\varepsilon\) is chosen sufficiently small, (6.2) and (6.4) yield \(\tfrac{\mathrm{d}}{ds}\int_{(\mathcal{X}(\mathsf{s}))^{\mathsf{E}}}\tfrac{ \mathsf{Q}J_{g}}{2\Sigma}\big{(} ### Flow of the ALE velocity \(V\) Due to the presence of the ALE transport operator \(\partial_{t}+V\partial_{2}\), when working in \((x,t)\) coordinates it convenient to consider the flow map: \[\partial_{t}\xi(x_{1},x_{2},t) =V(x_{1},\xi(x_{1},x_{2},t),t)\qquad\text{for}\qquad t\in(\mathfrak{ t}_{\text{in}},\mathfrak{t}_{\text{fin}})\,, \tag{6.6a}\] \[\xi(x_{1},x_{2},\mathfrak{t}_{\text{in}}) =x_{2}\,, \tag{6.6b}\] where we recall that \(V\) is defined in (3.6). Given a label \(x\), the flow \(\xi(x,\cdot)\) is then defined up to the stopping time \[\mathsf{T}_{\xi}(x)=\sup\{t\in[\mathfrak{t}_{\text{in}},\mathfrak{t}_{\text{ fin}}]\colon(\xi(x,t),t)\in\mathcal{P}\}\,.\] Then, for any function \(F\colon\mathcal{P}\to\mathbb{R}\), by the composition \(F\circ\xi\) we mean \[(F\circ\xi)(x,t)=F(x_{1},\xi(x_{1},x_{2},t),t)\,,\] which is well-defined for \(t\leq\mathsf{T}_{\xi}(x)\). Similarly, the composition with the inverse of \(\xi\), which is denoted as usual by \(\xi^{-1}\) only affects the second space coordinate. Next, we note that the bootstrap assumptions (5.37) and the identities \[\partial_{t}(\log\xi_{,2}) =V_{;2}\circ\xi \tag{6.7a}\] \[\xi_{;2}\,\partial_{t}\big{(}\tfrac{\xi_{;1}}{\xi_{;2}}\big{)} =V_{;1}\circ\xi\] (6.7b) \[V_{,1} =-\alpha\Sigma g^{-\frac{3}{2}}h_{;12}\,+g^{-\frac{1}{2}}\big{(} J_{s}\mathbf{\hat{A}}_{\mathcal{N}}-h_{,2}\,(\tfrac{1+\alpha}{2}J_{s}\mathbf{ \hat{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{s}\mathbf{\hat{Z}}_{\mathcal{N}} )\big{)}\] \[\qquad\qquad\qquad+g^{-\frac{1}{2}}h_{;2}\,J_{s}\big{(}\mathbf{ \hat{A}}_{\mathcal{T}}-h_{,2}\,(\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{ \mathcal{T}}+\tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}})\big{)}\,,\] (6.7c) \[V_{;2} =-\alpha\Sigma g^{-\frac{3}{2}}h_{;22}\,+\mathbf{\hat{A}}_{ \mathcal{T}}-h_{,2}\,(\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+ \tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}})\,, \tag{6.7d}\] imply that pointwise in \((x,t)\in\mathcal{P}\) we have \[|\xi(x,t)-x_{2}|\lesssim\varepsilon^{2}\,,\qquad|\xi_{,2}-1|\lesssim \varepsilon^{2}\,,\qquad|\partial_{t}\xi|\lesssim\varepsilon\,,\qquad|\xi_{, 1}|\lesssim\varepsilon\,. 
\tag{6.8}\] Pointwise estimates for \(\nabla^{2}\xi\) may also be obtained upon by noting that (6.6), (6.8), (7.1j), and (B.2d), imply \[|\xi_{,11}| \lesssim\varepsilon^{-1}\big{\|}\widetilde{\mathsf{D}}_{1}^{2}V \big{\|}_{L^{\infty}_{x,n}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}_{1} \widetilde{\mathsf{D}}_{2}V\big{\|}_{L^{\infty}_{x,n}}+\varepsilon^{3}\big{\|} \widetilde{\mathsf{D}}_{2}^{2}V\big{\|}_{L^{\infty}_{x,n}}\] \[\lesssim\varepsilon^{-\frac{3}{2}}\big{\|}\widetilde{\mathsf{D}}_ {1}\widetilde{\mathsf{D}}^{3}V(\cdot,0)\,,\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \[\leq\alpha\Sigma\big{(}(U\cdot\mathcal{N})^{2}+A^{2}\big{)}^{\frac{1}{2 }}\big{(}|\mathbf{\hat{Z}}_{\mathcal{N}}|+\tfrac{1}{2}(|\mathbf{\hat{W}}_{ \mathcal{T}}|+|\mathbf{\hat{Z}}_{\mathcal{T}}|+2|\mathbf{\hat{A}}_{\mathcal{N}}| )\big{)}\] \[\leq\mathring{C}\big{(}(U\cdot\mathcal{N})^{2}+A^{2}\big{)}^{\frac{ 1}{2}}\,. \tag{6.14}\] Upon integrating the above estimate we further obtain \[\big{|}\big{(}(U\cdot\mathcal{N})^{2}+A^{2}\big{)}^{\frac{1}{2}}\circ\xi(x,t)- \left|u_{0}(x)\right|\big{|}\leq\mathring{C}\varepsilon\,. \tag{6.15}\] On the other hand, appealing to ((ii)) we have that \[\left|u_{0}(x)\right|=\big{(}\tfrac{1}{4}(w_{0}(x)+z_{0}(x))^{2}+a_{0}(x)^{2} \big{)}^{\frac{1}{2}}\in[\tfrac{11}{24}-\varepsilon,\tfrac{13}{24}+\varepsilon] \kappa_{0}\,. \tag{6.16}\] We deduce in particular the preliminary estimate \(\|U\cdot\mathcal{N}\|_{L^{\infty}_{x,t}}\leq\tfrac{13}{24}\kappa_{0}+\mathring {C}\varepsilon\leq\tfrac{7}{12}\kappa_{0}\). The estimate for \(A\) now follows by integrating (6.13b), and using the previously established bound for \(U\cdot\mathcal{N}\) along with the bootstrap inequalities (5.37), to obtain \[\big{|}A\circ\xi(x,t)\big{|} \leq\left|a_{0}(x)\right|+\tfrac{3\varepsilon}{1+\alpha}\Big{(} \tfrac{\alpha}{2}\kappa_{0}\big{(}1+\varepsilon+\varepsilon\mathsf{C}_{ \mathbf{\hat{Z}}_{\mathcal{T}}}+2\mathsf{C}_{\mathbf{\hat{A}}_{\mathcal{N}}} \big{)}+\tfrac{7}{12}\kappa_{0}\big{(}\tfrac{1+\alpha}{2}(1+\varepsilon)+ \tfrac{1-\alpha}{2}\varepsilon\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{T}}} \big{)}\Big{)}\] \[\leq\kappa_{0}\varepsilon+\tfrac{2\varepsilon}{1+\alpha}\tfrac{5 1}{50}\Big{(}\tfrac{7+19\alpha}{24}\kappa_{0}+\alpha\kappa_{0}\mathsf{C}_{ \mathbf{\hat{A}}_{\mathcal{N}}}+\mathring{C}\varepsilon\Big{)}\leq\kappa_{0} \varepsilon\Big{(}1+3\kappa_{0}+\tfrac{3\alpha}{1+\alpha}\mathsf{C}_{\mathbf{ \hat{A}}_{\mathcal{N}}}\Big{)}\,.\] The \(A\) estimate in (6.12) follows by composing with \(\xi^{-1}\). The \(Z\) estimate is obtained in a similar way by integrating (3.21b), and appealing to the existing pointwise bounds \[\big{|}Z\circ\xi(x,t)\big{|}\leq\left|z_{0}(x)\right|+\tfrac{2 \varepsilon}{1+\alpha}\tfrac{51}{50}\Big{(}\varepsilon\mathring{C}+2\alpha \kappa_{0}\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}}\Big{)}\leq\kappa_{0} \varepsilon\Big{(}1+\tfrac{5\alpha}{1+\alpha}\mathsf{C}_{\mathbf{\hat{Z}}_{ \mathcal{N}}}\Big{)}\,.\] The \(Z\) estimate in (6.12) now follows. 
Lastly, the \(W\) estimate is obtained by integrating (3.21a), together with assumption (5.37p), and appealing to the existing pointwise bounds \[\big{|}W\circ\xi(x,t)-w_{0}(x)\big{|}\leq\mathring{C}\varepsilon^{2}\Rightarrow \big{|}W\circ\xi(x,t)-\kappa_{0}\big{|}\leq\big{|}W\circ\xi(x,t)-w_{0}(x) \big{|}+\big{|}w_{0}(x)-\kappa_{0}\big{|}\leq\mathring{C}\varepsilon^{2}+ \tfrac{\kappa_{0}}{12}\,.\] Taking \(\varepsilon\) to be sufficiently small and composing with \(\xi^{-1}\) yields the \(W\) estimate in (6.12). Pointwise bounds for \(\mathsf{D}^{k}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\) and \(\mathsf{D}^{k}J_{g}\) when \(0\leq k\leq 2\) For future use, it is convenient to record the following pointwise bounds for the first few derivatives of \(J_{g}\mathbf{\hat{W}}_{\mathcal{N}}\) and \(J_{g}\mathbf{\hat{Z}}_{\mathcal{N}}\). **Lemma 6.1**.: _Assume that the bootstrap bounds (5.37) hold on \(\mathbb{T}^{2}\times[0,\varepsilon)\). If \(\varepsilon\) is taken to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), then_ \[\big{|}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})(x,t)-(w_{0})_{,1}\,(x )\big{|} \lesssim\varepsilon \tag{6.17a}\] \[\big{|}\mathsf{D}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})(x,t)- \mathsf{D}(w_{0})_{,1}\,(x)\big{|} \lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\] (6.17b) \[\big{|}\mathsf{D}^{2}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})(x,t)- \mathsf{D}^{2}(w_{0})_{,1}\,(x)\big{|} \lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{6.17c}\] _holds for all \((x,t)\in\mathcal{P}\)._ Proof of Lemma 6.1.: From (3.25), upon composing with the flow \(\xi\) associated to the vector field \(\partial_{t}+V\partial_{2}\), we deduce that for each frozen \(x\), and for \(t\leq\mathsf{T}_{\xi}(x)\), we have \[(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\circ\xi(x,t)=(w_{0})_{,1}(x)\,\mathsf{I}_ {\mathsf{W}}^{\mathcal{N},1}(x,t)+\mathsf{I}_{\mathsf{W}}^{\mathcal{N},2}(x,t)\,,\] (6.18a) where we have denoted \[\mathsf{I}_{\mathsf{W}}^{\mathcal{N},1}(x,t) =e^{-\tfrac{\alpha}{2}\int_{\mathsf{in}}^{t}(\mathbf{\hat{A}}_{ \mathcal{T}}-g^{-\frac{3}{2}}\Sigma h,2z)\circ\xi(x,r)\mathrm{d}r}\,, \tag{6.18b}\] \[\mathsf{I}_{\mathsf{W}}^{\mathcal{N},2}(x,t) =\int_{\mathsf{in}}^{t}\mathsf{E}_{\mathsf{W}}^{\mathcal{N}}\circ \xi(x,r)e^{-\tfrac{\alpha}{2}\int_{r}^{t}(\mathbf{\hat{A}}_{\mathcal{T}}-g^{- \frac{3}{2}}\Sigma h,2z)\circ\xi(x,r^{\prime})\mathrm{d}r^{\prime}}\mathrm{d}r\,,\] (6.18c) \[\mathsf{E}_{\mathsf{W}}^{\mathcal{N}}(x,t) =-\alpha\Sigma g^{-\frac{1}{2}}J_{g}\mathbf{\hat{A}}_{\mathcal{N} \mathcal{N},2}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h,2z\,(J_{g}\mathbf{\hat{Z}} _{\mathcal{N}}-2J_{g}\mathbf{\hat{A}}_{\mathcal{T}})\] \[\qquad+\tfrac{\alpha}{2}\mathbf{\hat{A}}_{\mathcal{T}}J_{g}\mathbf{ \hat{Z}}_{\mathcal{N}}-\big{(}\tfrac{3+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+ \tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}}\big{)}J_{g}\mathbf{\hat{A}}_{ \mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\mathbf{\hat{W}}_{\mathcal{T}}+ \tfrac{1-\alpha}{2}\mathbf{\hat{Z}}_{\mathcal{T}}\big{)}J_{g}\mathbf{\hat{W}}_{ \mathcal{T}}\,. 
\tag{6.18d}\] Using the bootstrap assumptions (5.37), we see that the terms defined in (6.18b)-(6.18c) satisfy the pointwise estimates \[\big{|}1-\mathsf{I}_{\mathsf{W}}^{\mathcal{N},1}\big{|}\lesssim\varepsilon^{2}\,, \qquad\big{|}\mathsf{I}_{\mathsf{W}}^{\mathcal{N},2}\big{|}\lesssim\varepsilon\,, \tag{6.19}\] from where we deduce that \[\big{|}(J_{g}\tilde{\textbf{W}}_{\mathcal{N}})\circ\xi(x,t)-(w_{0})_{,1}(x)\big{|} \lesssim\varepsilon^{2}|(w_{0})_{,1}(x)|+\varepsilon\lesssim\varepsilon\,. \tag{6.20}\] Additionally, the implicit function theorem combined with the bounds (4.11) and (6.8) give \[|\mathsf{D}^{k}(w_{0})_{,1}(x)-\mathsf{D}^{k}(w_{0})_{,1}\circ\xi^{-1}(x,t)| \leq\|\mathsf{D}^{k}(w_{0})_{,12}\,\|_{L^{\infty}_{x}}|x_{2}-\xi^{-1}(x,t)| \lesssim\varepsilon\,, \tag{6.21}\] for \(k\in\{0,1,2\}\), from which (6.17a) follows. Establishing (6.17b) and (6.17c) requires bounds for the derivatives of \(\mathsf{I}_{\mathsf{W}}^{\mathsf{V},1}\) and \(\mathsf{I}_{\mathsf{W}}^{\mathsf{V},2}\), which we obtain as follows. Using (6.8), we have that \(|\mathsf{D}_{1}(f\circ\xi)|\lesssim|\mathsf{D}_{1}f|\circ\xi+\varepsilon^{2}| \mathsf{D}_{2}f|\circ\xi\lesssim|\mathsf{D}f|\circ\xi\), \(|\mathsf{D}_{2}(f\circ\xi)|\lesssim|\mathsf{D}_{2}f|\circ\xi\), and \(|\mathsf{D}_{t}(f\circ\xi)|\lesssim|\mathsf{D}_{t}f|\circ\xi+\varepsilon^{2} |\mathsf{D}_{2}f|\circ\xi\lesssim|\mathsf{D}f|\circ\xi\). With this in hand, we return to the terms defined in (6.18b)-(6.18d) and use the bootstrap inequalities (5.37), the product and chain rules to estimate \[\big{\|}\mathsf{D}\,\mathsf{I}_{\mathsf{W}}^{\mathsf{V},1}\big{\|} _{L^{\infty}_{x,t}} \lesssim\varepsilon^{2}+\varepsilon\big{\|}\mathsf{D}h_{,22} \big{\|}_{L^{\infty}_{x,t}}\] \[\big{\|}\mathsf{D}^{2}\,\mathsf{I}_{\mathsf{W}}^{\mathsf{V},1} \big{\|}_{L^{\infty}_{x,t}} \lesssim\varepsilon^{3}+\varepsilon^{\frac{1}{2}}\big{\|} \mathsf{D}^{2}\tilde{\textbf{A}}_{\mathcal{T}}\big{\|}_{L^{2}_{t}L^{\infty}_{ x}}+\varepsilon\big{\|}\mathsf{D}^{2}h_{,2}\,\big{\|}_{L^{\infty}_{x,t}}+ \varepsilon^{\frac{1}{2}}\big{\|}\mathsf{D}^{3}h_{,2}\,\big{\|}_{L^{2}_{t}L^{ \infty}_{x}}+\varepsilon\big{\|}\mathsf{D}^{2}h_{,2}\,\big{\|}_{L^{\infty}_{ x,t}}^{2}\] \[\big{\|}\mathsf{D}\,\mathsf{E}_{\mathsf{W}}^{\mathsf{V}}\big{\|} _{L^{\infty}_{x,t}} \lesssim 1+\big{\|}\mathsf{D}^{2}h_{,2}\,\big{\|}_{L^{\infty}_{x,t}}+ \big{\|}\mathsf{D}^{2}(J_{g}\tilde{\textbf{A}}_{\mathcal{N}})\big{\|}_{L^{ \infty}_{x,t}}+\big{\|}\mathsf{D}^{2}J_{g}\big{\|}_{L^{\infty}_{x,t}}\] \[\big{\|}\mathsf{D}^{2}\,\mathsf{E}_{\mathsf{W}}^{\mathsf{V}}\big{\|} _{L^{2}_{t}L^{\infty}_{x}} \lesssim\varepsilon^{\frac{5}{2}}+\big{\|}\mathsf{D}^{2}(J_{g} \tilde{\textbf{A}}_{\mathcal{N}};2)\,\big{\|}_{L^{2}_{t}L^{\infty}_{x}}+ \big{\|}\mathsf{D}\tilde{\textbf{A}}_{\mathcal{N}};2\,\big{\|}_{L^{2}_{t}L^{ \infty}_{x}}+\varepsilon^{\frac{1}{2}}\big{\|}\mathsf{D}^{2}(\Sigma,J_{g}) \big{\|}_{L^{\infty}_{x,t}}+\big{\|}\mathsf{D}^{3}J_{g}\big{\|}_{L^{2}_{t}L^{ \infty}_{x}}\] \[\quad+\varepsilon^{\frac{3}{2}}\big{\|}\mathsf{D}^{2}h_{,2}\, \big{\|}_{L^{\infty}_{x,t}}+\big{\|}\mathsf{D}^{2}(\tilde{\textbf{W}}_{\mathcal{ T}},\tilde{\textbf{Z}}_{\mathcal{T}},J_{g}\tilde{\textbf{A}}_{\mathcal{N}}) \big{\|}_{L^{2}_{t}L^{\infty}_{x}}+\varepsilon\big{\|}\mathsf{D}^{2}\tilde{ \textbf{A}}_{\mathcal{T}}\big{\|}_{L^{2}_{t}L^{\infty}_{x}}+\varepsilon\big{\|} \mathsf{D}^{2}(J_{g}\tilde{\textbf{Z}}_{\mathcal{N}})\big{\|}_{L^{2}_{t}L^{ \infty}_{x}}\,.\] The \(L^{\infty}_{x,t}\) norms present in the above estimate, which are the same as 
\(L^{\infty}_{x,\text{s}}\) norms cf. (5.38a), are estimated using (B.2d). On the other hand, the \(L^{2}_{t}L^{\infty}_{x}\) norms are not equivalent to \(L^{2}_{x}L^{\infty}_{x}\) norms, akin to the discussion above (5.38c). Nonetheless, the proof of estimate (B.2e) (designed for \(L^{2}_{x}L^{\infty}_{x}\) norms) can still be applied in this case,17 leading to obtain upper bounds in terms of \(L^{2}_{x,\text{s}}\) norms for \(5^{th}\) order derivatives. In turn, to bound these we appeal to the bootstraps (5.37r)-(5.37u), the bounds for the geometry and sound speed in (7.1), and to the vorticity estimate (8.2). For instance, (B.2d), and the fact that \(\mathcal{J}\leq 1\), imply that \(\|\mathsf{D}^{2}h_{,2}\,\big{\|}_{L^{\infty}_{x,t}}\lesssim\varepsilon^{-\frac{ 1}{2}}\big{\|}\mathsf{D}^{4}h_{,2}\,(\cdot,\mathsf{t}_{\mathsf{m}})\|_{L^{2}_{x }}+\varepsilon^{-1}\|\mathsf{D}^{5}h_{,2}\,\big{\|}_{L^{\infty}_{x,t}} \lesssim\mathsf{K}\varepsilon\langle\mathsf{B}_{6}\rangle\). Similar arguments show that \(\|\mathsf{D}^{2}J_{g}\|_{L^{\infty}_{x,t}}\leq\langle\mathsf{B}_{6}\rangle\), \(\|\mathsf{D}^{2}\Sigma\|_{L^{\infty}_{x,t}}\leq\langle\mathsf{B}_{6}\rangle\), \(\|\mathsf{D}^{3}h_{,2}\,\big{\|}_{L^{2}_{t}L^{\infty}_{x}}\lesssim\mathsf{K} \varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle\), and \(\|\mathsf{D}^{3}J_{g}\|_{L^{2}_{t}L^{\infty}_{x}}\lesssim\varepsilon^{\frac{1} {2}}\langle\mathsf{B}_{6}\rangle\). Similar arguments imply for \((\tilde{\textbf{W}}_{\mathcal{T}},\tilde{\textbf{Z}}_{\mathcal{T}},\tilde{ \textbf{A}}_{\mathcal{T}},J_{g}\tilde{\textbf{Z}}_{\mathcal{N}},J_{g}\tilde{ \textbf{A}}_{\mathcal{N}})\). For instance, (B.2e) and (5.36j) imply that \(\|\mathsf{D}^{2}(\tilde{\textbf{W}}_{\mathcal{T}},\tilde{\textbf{Z}}_{\mathcal{T}}, \tilde{\textbf{A}}_{\mathcal{T}})\|_{L^{2}_{t}L^{\infty}_{x}}\lesssim\varepsilon^{- \frac{1}{2}}\|\mathsf{D}^{4}(\tilde{\textbf{W}}_{\mathcal{T}},\tilde{\textbf{Z}}_{ \mathcal{T}},\tilde{\textbf{A}}_{\mathcal{T}})\|_{L^{2}_{t}L^{\infty}_{x}} \lesssim\varepsilon^{-\frac{1}{2}}\|\mathsf{D}^{4}(\tilde{\textbf{W}}_{ \mathcal{T}},\tilde{\textbf{Z}}_{\mathcal{T}},\tilde{\textbf{A}}_{\mathcal{T}})\|_{L^{ 2}_{x,t}}\lesssim\varepsilon^{-\frac{1}{2}}\mathcal{D}_{5,\tau}\lesssim\mathsf{K} \varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\). The same argument shows \(\|\mathsf{D}^{2}(J_{g}\tilde{\textbf{Z}}_{\mathcal{N}})\|_{L^{2}_{t}L^{\infty}_{x }}\lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\). Some of the bounds for the terms involving \(\tilde{\textbf{A}}_{\mathcal{N}}\) are more delicate and require that we relate \(\tilde{\textbf{A}}_{\mathcal{N}}\) to the vorticity via \(\tilde{\textbf{A}}_{\mathcal{N}}=\Omega+\frac{1}{2}\tilde{\textbf{W}}_{\mathcal{ T}}+\frac{1}{2}\tilde{\textbf{Z}}_{\mathcal{T}}\). These vorticity-improved bounds are obtained in Section 8 (see (8.21a) and (8.21c)). For instance, (8.21c) implies \(\|\mathsf{D}^{2}\tilde{\textbf{A}}_{\mathcal{N With (6.19) and (6.22), we return to (6.18a) and obtain the pointwise estimates \[\big{|}\mathsf{D}\big{(}(J_{g}^{\,\circ}\!\hat{\mathbf{W}}_{\!{ \mathcal{N}}})\circ\xi(x,t)-(w_{0})_{,1}\,(x)\big{)}\big{|} \lesssim\mathsf{K}\varepsilon\langle\mathsf{B}_{6}\rangle \tag{6.23a}\] \[\big{|}\mathsf{D}^{2}\big{(}(J_{g}^{\,\circ}\!\hat{\mathbf{W}}_{ \!{\mathcal{N}}})\circ\xi(x,t)-(w_{0})_{,1}\,(x)\big{)}\big{|} \lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{6.23b}\] for all \((x,t)\in\mathcal{P}\). 
We note that (6.23) and (6.10), together with (6.17a) and (6.21) imply (6.17b)-(6.17c). Integrating the bounds obtained in Lemma 6.1, we obtain pointwise bounds for \(J_{g}\) as follows: **Corollary 6.2**.: _Assume that the bootstrap bounds (5.37) hold on \(\mathbb{T}^{2}\times[0,\varepsilon)\). If \(\varepsilon\) is taken to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), then the bounds_ \[\big{|}J_{g}(x,t)-1-(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{ 2}(w_{0})_{,1}\,(x)\big{|} \leq\tfrac{1+\alpha}{2}\mathsf{C}_{\!\lambda}(t-\mathsf{t}_{ \mathsf{in}}), \tag{6.24a}\] \[\big{|}\mathsf{D}_{i}J_{g}(x,t)-(t-\mathsf{t}_{\mathsf{in}}) \tfrac{1+\alpha}{2}\mathsf{D}_{i}(w_{0})_{,1}\,(x)\big{|} \lesssim(t-\mathsf{t}_{\mathsf{in}})\varepsilon\mathsf{K}\langle \mathsf{B}_{6}\rangle,\qquad\text{for}\qquad i\in\{1,2\},\] (6.24b) \[\big{|}(\partial_{t}+V\partial_{2})J_{g}(x,t)-\tfrac{1+\alpha}{ 2}(w_{0})_{,1}\,(x)\big{|} \leq\tfrac{5|1-\alpha|}{6}\mathsf{C}_{\!\hat{\mathbf{Z}}_{ \!{\mathcal{N}}}},\] (6.24c) \[\big{|}\partial_{t}J_{g}(x,t)-\tfrac{1+\alpha}{2}(w_{0})_{,1}\,(x )\big{|} \leq\tfrac{1+\alpha}{2}\mathsf{C}_{\!\lambda},\] (6.24d) \[\big{|}\mathsf{D}_{i}\mathsf{D}_{j}J_{g}(x,t)-(t-\mathsf{t}_{ \mathsf{in}})\tfrac{1+\alpha}{2}\mathsf{D}_{i}\mathsf{D}_{j}(w_{0})_{,1}\,(x )\big{|} \lesssim(t-\mathsf{t}_{\mathsf{in}})\mathsf{K}\langle\mathsf{B}_{6 }\rangle,\qquad\text{for}\qquad i,j\in\{1,2\},\] (6.24e) \[\big{|}\mathsf{D}_{i}\partial_{t}J_{g}(x,t)-\tfrac{1+\alpha}{2} \mathsf{D}_{i}(w_{0})_{,1}\,(x)\big{|} \lesssim 8(1+\alpha)\mathsf{C}_{\!\hat{\mathbf{Z}}_{\!{ \mathcal{N}}}},\qquad\text{for}\qquad i\in\{1,2\},\] (6.24f) \[\big{|}\partial_{t}\partial_{t}J_{g}(x,t) \leq 4(1+\alpha)|1-\alpha|\mathsf{C}_{\!\hat{\mathbf{Z}}_{\!{ \mathcal{N}}}}\varepsilon^{-1}, \tag{6.24g}\] _hold for all \((x,t)\in\mathcal{P}\). Here we have introduced \(\mathsf{C}_{\!\lambda}=\mathsf{C}_{\!\lambda}(\alpha,\kappa_{0},\mathsf{C}_{ \mathsf{data}})>0\), defined by \(|1-\alpha|\mathsf{C}_{\!\hat{\mathbf{Z}}_{\!{\mathcal{N}}}}=\tfrac{1+\alpha}{ 2}\mathsf{C}_{\!\lambda}\)._ Proof of Corollary 6.2.: Upon composing (3.15a) with the flow \(\xi\) and integrating in time, we obtain that for \(t\leq\mathsf{T}_{\xi}(x)\), \[J_{g}\circ\xi(x,t)=1+\int_{\mathsf{t}_{\mathsf{in}}}^{t}\left(\tfrac{1+\alpha}{ 2}(J_{g}\!\hat{\mathbf{W}}_{\!{\mathcal{N}}})\circ\xi+\tfrac{1-\alpha}{2}(J_{g }\!\hat{\mathbf{Z}}_{\!{\mathcal{N}}})\circ\xi\right)(x,t^{\prime})\mathrm{d}t^ {\prime}\,. \tag{6.25}\] Using the bootstrap assumptions (5.37), and the previously established bound (6.17a), we deduce from the above identity that \[\big{|}J_{g}\circ\xi(x,t)-1-(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2}(w_ {0})_{,1}\,(x)\big{|}\leq(t-\mathsf{t}_{\mathsf{in}})\big{(}\varepsilon\dot{ \hat{C}}+\tfrac{3|1-\alpha|}{4}\mathsf{C}_{\!\hat{\mathbf{Z}}_{\!{ \mathcal{N}}}}\big{)}\,. \tag{6.26}\] Upon composing with \(\xi^{-1}(x,t)\) and appealing to (6.21) with \(k=0\), we deduce (6.24a). 
In a similar fashion, we may differentiate (6.8) with respect to space, appeal to (6.18a), and deduce that for \(i\in\{1,2\}\) we have \[\mathsf{D}_{i}(J_{g}\circ\xi)(x,t)-(t-\mathsf{t}_{\mathsf{in}}) \tfrac{1+\alpha}{2}\mathsf{D}_{i}(w_{0})_{,1}\,(x)\] \[=\int_{\mathsf{t}_{\mathsf{in}}}^{t}\Bigl{(}\tfrac{1+\alpha}{2}(w_ {0})_{,1}\,(x)\mathsf{D}_{i}\mathsf{l}_{W}^{\!\!\times\!1}(x,t^{\prime})+\tfrac {1+\alpha}{2}\mathsf{D}_{i}(w_{0})_{,1}\,(x)(\mathsf{l}_{W}^{\!\!\times\!1}(x,t^{ \prime})-1)\] \[\qquad\qquad\qquad+\tfrac{1+\alpha}{2}\mathsf{D}_{i}\mathsf{l}_{ W}^{\!\!\times\!2}(x,t^{\prime})+\tfrac{1-\alpha}{2}\mathsf{D}_{i}((J_{g}\!\hat{ \mathbf{Z}}_{\!{\mathcal{N}}})\circ\xi)\Bigr{)}(x,t^{\prime})\mathrm{d}t^{ \prime}\,. \tag{6.27}\] Using the initial data assumptions, the bounds (6.19), (6.22), (6.10), (5.37g), (5.37k), and (5.37l), we deduce \[\big{|}\mathsf{D}_{i}(J_{g}\circ\xi)(x,t)-(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+ \alpha}{2}\mathsf{D}_{i}(w_{0})_{,1}\,(x)\big{|}\lesssim(t-\mathsf{t}_{\mathsf{ in}})\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{6.28}\] Next, instead of merely appealing to (6.8), we use that similarly to (6.8) we may show \(|\xi_{,2}\,(x,t)-1|\lesssim\varepsilon(t-\mathsf{t}_{\mathsf{in}})\) and \(|\xi_{,1}\,(x,t)|\lesssim(t-\mathsf{t}_{\mathsf{in}})\). The resulting bounds are \[\big{|}(\mathsf{D}_{i}J_{g})\circ\xi(x,t)-(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+ \alpha}{2}\mathsf{D}_{i}(w_{0})_{,1}\,(x)\big{|}\lesssim(t-\mathsf{t}_{\mathsf{ in}})\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,\qquad\text{for}\qquad i \in\{1,2\}\,. \tag{6.29}\] Combined with (6.21) with \(k=1\), this bound implies (6.24b). Differentiating (6.25) with respect to \(t\), recalling the definition of \(\xi\) in (6.6), the identity (6.18a), and the bounds (6.19), (5.37g), and (5.37k), we obtain \[\big{|}((\partial_{t}+V\partial_{2})J_{g})\circ\xi(x,t)-\tfrac{1+\alpha}{2}(w_ {0})_{,1}\,(x)\big{|}\leq\hat{C}\varepsilon+\tfrac{3|1-\alpha|}{4}\mathsf{C}_{ \!\hat{\mathbf{Z}}_{\!{\mathcal{N}}}}\,. \tag{6.30}\] The bound (6.24c) now follows upon composing with \(\xi^{-1}\) and using (6.21) with \(k=0\). The bound (6.24d) follows from (6.24b) with \(i=2\), (6.24c), and (5.37o). Next, we establish (6.24e). We apply \(\mathsf{D}_{i}\) to (6.27), appeal to the bounds (6.19), (6.22), and (6.10), to obtain \[\big{|}\mathsf{D}_{j}\mathsf{D}_{i}(J_{g}\circ\xi)(x,t)-(t-\mathsf{t}_{\mathsf{in}}) \tfrac{1+\alpha}{2}\ To bound the last term in the above estimate, we use (B.2d) and the improved \(J_{g}\underline{\hat{\mathbf{Z}}}_{{\mathcal{N}}}\) bound available from (8.22a). We obtain \[\|\mathsf{D}^{2}(J_{g}\underline{\hat{\mathbf{Z}}}_{{\mathcal{N}}})\|_{L^{\infty }_{x,t}}\lesssim\varepsilon^{-\frac{1}{2}}\|\mathsf{D}^{4}(J_{g}\underline{ \hat{\mathbf{Z}}}_{{\mathcal{N}}})(\cdot,\mathfrak{t}_{\mathfrak{n}})\|_{L^{2 }_{x}}+\varepsilon^{-1}\|\mathsf{D}^{5}(J_{g}\underline{\hat{\mathbf{Z}}}_{{ \mathcal{N}}})\|_{L^{2}_{x,s}}\lesssim\mathsf{K}(\mathsf{B}_{6})\,. \tag{6.32}\] The bounds (6.31) and (6.32) thus imply \[\big{|}\mathsf{D}_{j}\mathsf{D}_{i}(J_{g}\circ\xi)(x,t)-(t-\mathfrak{t}_{ \mathfrak{n}})\tfrac{1+\alpha}{2}\mathsf{D}_{j}\mathsf{D}_{i}(w_{0})_{,1}\,(x )\big{|}\lesssim(t-\mathfrak{t}_{\mathfrak{n}})\mathsf{K}(\mathsf{B}_{6})\,. 
\tag{6.33}\] Appealing to (6.24b), (6.9), to the bounds \(|\xi_{,2}\,(x,t)-1|\lesssim\varepsilon(t-\mathfrak{t}_{\mathfrak{n}})\) and \(|\xi_{,1}\,(x,t)|\lesssim(t-\mathfrak{t}_{\mathfrak{n}})\), and to the previously derived estimate \(\|\mathsf{D}^{2}J_{g}\|_{L^{\infty}_{x,t}}\leq\langle\mathsf{B}_{6}\rangle\), we also obtain \[\big{|}\mathsf{D}_{j}\mathsf{D}_{i}(J_{g}\circ\xi)(x,t)-(\mathsf{D}_{j}\mathsf{ D}_{i}J_{g})\circ\xi(x,t)\big{|}\lesssim(t-\mathfrak{t}_{\mathfrak{n}})\mathsf{K} (\mathsf{B}_{6})\,. \tag{6.34}\] Upon combining (6.33) and (6.34), composing with \(\xi^{-1}\), and using (6.21) with \(k=2\), we arrive at the proof of (6.24e). In order to prove (6.24f) we apply \(\mathsf{D}_{i}\partial_{t}\) to (6.25), use (6.23a) and the bootstrap assumptions (5.37), and obtain that \[\big{|}\xi_{,2}\,(\mathsf{D}_{2}(\partial_{t}+V\partial_{2})J_{g}) \circ\xi-\tfrac{1+\alpha}{2}\mathsf{D}_{2}(w_{0})_{,1}\,\big{|}\leq\mathring{C }\varepsilon\mathsf{K}(\mathsf{B}_{6})+7(1+\alpha)\mathsf{C}_{\underline{ \hat{\mathbf{Z}}}_{{\mathcal{N}}}}\,, \tag{6.35a}\] \[\big{|}\xi_{,2}\,(\mathsf{D}_{1}(\partial_{t}+V\partial_{2})J_{g}) \circ\xi-\tfrac{1+\alpha}{2}\mathsf{D}_{1}(w_{0})_{,1}-\varepsilon\xi_{,1}\,( \mathsf{D}_{2}(\partial_{t}+V\partial_{2})J_{g})\circ\xi\big{|}\leq\mathring{ C}\varepsilon\mathsf{K}(\mathsf{B}_{6})+7(1+\alpha)\mathsf{C}_{\underline{\hat{ \mathbf{Z}}}_{{\mathcal{N}}}}\,. \tag{6.35b}\] By again appealing to (6.8) and also to (5.37o) and to \(\|\mathsf{D}^{2}J_{g}\|_{L^{\infty}_{x,t}}\leq\langle\mathsf{B}_{6}\rangle\), we derive from the above that \[\big{|}(\mathsf{D}_{i}\partial_{t}J_{g})\circ\xi-\tfrac{1+\alpha}{2}\mathsf{D }_{i}(w_{0})_{,1}\,\big{|}\leq\mathring{C}\varepsilon\mathsf{K}(\mathsf{B}_{6 })+7(1+\alpha)\mathsf{C}_{\underline{\hat{\mathbf{Z}}}_{{\mathcal{N}}}}\,, \tag{6.36}\] for \(i\in\{1,2\}\). The bound (6.24f) now follows from the above estimate, upon composing with \(\xi^{-1}\), and from (6.21) with \(k=1\). The last estimate in (6.24), namely (6.24g), follows by differentiating (3.15a) with respect to time, which yields \[(\partial_{t}\partial_{t}+\tfrac{1}{\varepsilon}\mathsf{D}_{t}V\partial_{2}+V \partial_{t}\partial_{2})J_{g}=\tfrac{1+\alpha}{2}(\partial_{t}+V\partial_{2} )(J_{g}\hat{\mathbf{W}}_{{\mathcal{N}}})-\tfrac{1+\alpha}{2}V\partial_{2}(J_ {g}\hat{\mathbf{W}}_{{\mathcal{N}}})+\tfrac{1-\alpha}{2\varepsilon}\mathsf{D} _{t}(J_{g}\underline{\hat{\mathbf{Z}}}_{{\mathcal{N}}})\,. \tag{6.37}\] Using the previously established bound (6.24f), the bootstrap assumptions (5.37), the time differentiated version of (6.18a) which gives \((\partial_{t}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{{\mathcal{N}}})\circ\xi(x,t )=(w_{0})_{,1}\,(x)\,\partial_{t}\mathsf{h}_{\mathsf{W}}^{{\mathcal{N}},1}(x,t )+\partial_{t}\mathsf{h}_{\mathsf{W}}^{{\mathcal{N}},2}(x,t)\), and the bounds (6.22), we deduce that \[\big{|}\partial_{t}\partial_{t}J_{g}\big{|}\leq\mathring{C}+\tfrac{[1-\alpha]}{2 \varepsilon}\mathsf{C}_{\underline{\hat{\mathbf{Z}}}_{{\mathcal{N}}}}7(1+ \alpha)\,.\] The bound (6.24) now follows, concluding the proof of the Corollary. ### Properties of the remapping coefficients We recall the coefficients \(\mathsf{Q},\overline{\mathsf{Q}}_{2},\widehat{\mathsf{Q}},\hat{\mathsf{Q}}, \hat{\mathsf{Q}}_{s}\), and \(\hat{\mathsf{Q}}_{2}\) introduced in (5.22). These coefficients are bounded as follows: **Lemma 6.3**.: _Assume that the bootstrap bounds (5.37) hold on \(\mathbb{T}^{2}\times[0,\varepsilon)\). 
If \(\varepsilon\) is taken to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), then the various functions appearing in in (5.22) bounds are bounded as_ \[\tfrac{17(1+\alpha)}{40} \leq\widehat{\mathsf{Q}}(x_{2},\mathsf{s})\leq 401(1+\alpha)\,, \tag{6.38a}\] \[\big{|}\mathsf{Q}-\widehat{\mathsf{Q}}\big{|} \leq 3\mathsf{C}_{\mathsf{V}}\varepsilon\mathrm{s}\leq 3 \mathsf{C}_{\mathsf{V}}\varepsilon^{2}\,,\] (6.38b) \[\big{|}\overline{\mathsf{Q}}_{2}\big{|} \leq 3\mathsf{s}\leq 3\varepsilon\,,\] (6.38c) \[\big{|}\hat{\mathsf{Q}}_{\mathsf{s}}\big{|} \leq\tfrac{2\cdot 250^{2}\mathsf{Q}}{\varepsilon}\,,\] (6.38d) \[\big{|}\hat{\mathsf{Q}}_{2}\big{|} \leq\mathring{C}\,,\] (6.38e) \[\big{|}\hat{\mathsf{Q}}-\hat{\mathsf{Q}}_{\mathsf{s}}\big{|} \leq\mathring{C}\varepsilon\,, \tag{6.38f}\] _hold uniformly for all \((x,\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\). In particular, (6.38a) and (6.38b) imply that_ \[\tfrac{2(1+\alpha)}{5}\leq\mathsf{Q}\leq 402(1+\alpha)\,. \tag{6.38g}\] Proof of Lemma 6.3.: From (5.22a), (6.24d), and (5.7) it follows that in order to obtain a bound on \(\widehat{\mathsf{Q}}\), we first need a bound on \((w_{0})_{,1}\,(x_{1}^{*}(x_{2},t),x_{2})\). To obtain such an estimate, we recall that by (5.16) \(x_{1}^{*}(x_{2},t)\) is the point at which the global minimum of \(J_{g}(\cdot,x_{2},t)\) is attained, and hence by (6.24a) we have \[1+(t-\mathfrak{t}_{\mathfrak{n}})\tfrac{1+\alpha}{2}\big{(}(w_{0})_{,1}\,(x_{1}^{*}(x_{2},t),x_{2})-\mathsf{C}_{\underline{\lambda}}\big{)} \leq J_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\] \[\leq J_{g}(x_{1}^{\vee}(x_{2},x_{2},t))\] \[\leq 1+(t-\mathfrak{t}_{\mathfrak{n}})\tfrac{1+\alpha}{2}\big{(}(w_{0 })_{,1}\,(x_{1}^{\vee}(x_{2}),x_{2})+\mathsf{C}_{\underline{\lambda}}\big{)}\,, \tag{6.39}\] where we recall cf. assumption ((vi)) that \(x_{1}^{\vee}(x_{2})\) is the point at which \((w_{0})_{,1}\left(\cdot,x_{2}\right)\) attains a minimum. Therefore, \[-\tfrac{1}{\varepsilon}\leq(w_{0})_{,1}\left(x_{1}^{\vee}(x_{2}),x_{2}\right) \leq(w_{0})_{,1}\left(x_{1}^{*}(x_{2},t),x_{2}\right)\leq(w_{0})_{,1}\left(x_{ 1}^{\vee}(x_{2}),x_{2}\right)+2\mathsf{C}_{\mathsf{J}_{k}}\leq-\tfrac{9}{10 \varepsilon}+2\mathsf{C}_{\mathsf{J}_{k}}\,. \tag{6.40}\] Inserting the above obtained bounds for \((w_{0})_{,1}\left(x_{1}^{*}(x_{2},t),x_{2}\right)\) into (6.24d) yields \[\tfrac{1+\alpha}{2}\bigl{(}-\tfrac{1}{\varepsilon}-\mathsf{C}_{ \mathsf{J}_{k}}\bigr{)} \leq\tfrac{1+\alpha}{2}\bigl{(}(w_{0})_{,1}\left(x_{1}^{*}(x_{2}, t),x_{2}\right)-\mathsf{C}_{\mathsf{J}_{k}}\bigr{)}\] \[\leq(\partial_{t}J_{g})(x_{1}^{*}(x_{2},t),x_{2},t)\] \[\leq\tfrac{1+\alpha}{2}\bigl{(}(w_{0})_{,1}\left(x_{1}^{*}(x_{2}, t),x_{2}\right)+\mathsf{C}_{\mathsf{J}_{k}}\bigr{)}\] \[\leq\tfrac{1+\alpha}{2}\bigl{(}-\tfrac{9}{10\varepsilon}+\mathsf{ C}_{\mathsf{J}_{k}}\bigr{)}\,. \tag{6.41}\] Next, we use the bounds (6.40) and (6.41) in order to prove (6.38a). Differentiating (5.18b) with respect to \(t\) and appealing to (5.7), we have \[\widehat{\mathsf{Q}}(x_{2},\mathsf{q}(x_{2},t))=(\partial_{t} \mathsf{q})(x_{2},t) =-\varepsilon(\partial_{t}\overline{J}_{g})(x_{1}^{*}(x_{2},t),x _{2},t)\] \[=-\varepsilon(\partial_{t}J_{g})(x_{1}^{*}(x_{2},t),x_{2},t)+200 (1+\alpha)\mathfrak{C}\bigl{(}\tfrac{t-\mathsf{t}_{\mathsf{m}\mathsf{m}}}{ \mathsf{t}_{\mathsf{m}}-\mathsf{t}_{\mathsf{m}\mathsf{m}}}\bigr{)}\,. 
\tag{6.42}\] We observe that (6.41) and the definition of \(\mathfrak{C}\) implies \[\tfrac{1+\alpha}{2}(\tfrac{9}{10}-\varepsilon\mathsf{C}_{\mathsf{J}_{k}}) \leq\widehat{\mathsf{Q}}(x_{2},\mathsf{s})\leq\tfrac{1+\alpha}{2}(1+ \varepsilon\mathsf{C}_{\mathsf{J}_{k}})+400(1+\alpha)\mathbf{1}_{t\in(\mathsf{ t}_{\mathsf{m}\mathsf{m}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{ s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} 
\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s}\mathsf{s} \mathsf{s}\mathsf{s} It thus becomes apparent that we require a positive lower bound for \(J_{{}_{\!g},11}\left(x_{1}^{*}(x_{2},t),x_{2},t\right)\); we note that this is the only place in the argument where assumption ((viii)) on the initial data enters. We revisit the second and third inequalities in (6.40), which show that \(0\leq\mathsf{D}_{1}w_{0}(x_{1}^{*}(x_{2},t),x_{2})-\mathsf{D}_{1}w_{0}(x_{1}^{ \vee}(x_{2},t),x_{2})\leq 2\varepsilon\mathsf{C}_{\mathsf{J}_{\!t}}\). Since \(2\varepsilon\mathsf{C}_{\mathsf{J}_{\!t}}<\varepsilon^{\frac{3}{4}}\) for \(\varepsilon\) sufficiently small, by assumption ((vii)) this implies that \(|x_{1}^{*}(x_{2},t)-x_{1}^{\vee}(x_{2})|<\varepsilon^{\frac{3}{4}}\). We can however obtain an improved bound for this difference. We recall that \(J_{{}_{\!g},1}\left(x_{1}^{*}(x_{2},t),x_{2},t\right)=0\) (see (5.13) and (5.8)), while (6.24b), (5.37r), and (5.37s) imply that \(|J_{{}_{\!g},1}\left(x_{1}^{*}(x_{2},t),x_{2},t\right)-(t-\mathsf{t}_{\mathsf{ in}})(w_{0})_{11}\left(x_{1}^{*}(x_{2},t),x_{2}\right)|\leq\hat{C}(t-\mathsf{t}_{ \mathsf{in}})\mathsf{K}\langle\mathsf{B}_{6}\rangle\). Together, these bounds yield \[\big{|}(w_{0})_{,11}\left(x_{1}^{*}(x_{2},t),x_{2}\right)\big{|}\leq\hat{C} \mathsf{K}\langle\mathsf{B}_{6}\rangle. \tag{6.51}\] On the other hand, using assumption ((vi)) and (4.11) we may perform a second order Taylor expansion in \(x_{1}\) (at fixed \(x_{2}\)) around \(x_{1}^{\vee}=x_{1}^{\vee}(x_{2})\), to obtain \[\underbrace{(w_{0})_{,11}(x_{1}^{*},x_{2})}_{|\cdot|\leq\hat{C}\langle\mathsf{ B}_{6}\rangle}=\underbrace{(w_{0})_{,11}(x_{1}^{\vee},x_{2})}_{=0}+(x_{1}^{*}-x_{1}^{ \vee})\underbrace{(w_{0})_{,111}(x_{1}^{\vee},x_{2})}_{\geq\frac{9}{10\varepsilon ^{3}}}+\tfrac{1}{2}(x_{1}^{*}-x_{1}^{\vee})^{2}\underbrace{(w_{0})_{,1111} \left(x_{1}^{\sharp},x_{2}\right)}_{|\cdot|\leq\frac{\mathsf{C}_{\mathsf{data} }}{\varepsilon^{4}}} \tag{6.52}\] for some \(x_{1}^{\sharp}\) that lies in between \(x_{1}^{*}\) and \(x_{1}^{\vee}\). Moreover, using that \(|x_{1}^{*}(x_{2},t)-x_{1}^{\vee}(x_{2})|<\varepsilon^{\frac{9}{4}}\) we have that \(|(x_{1}^{*}-x_{1}^{\vee})(w_{0})_{,1111}\left(x_{1}^{\sharp},x_{2}\right)|\leq \mathsf{C}_{\mathsf{data}}\varepsilon^{-3+\frac{1}{4}}\) and thus \[|x_{1}^{*}(x_{2},t)-x_{1}^{\vee}(x_{2})|\leq\hat{C}\mathsf{K}\langle\mathsf{B }_{6}\rangle\big{(}\tfrac{9}{10\varepsilon^{3}}-\tfrac{\mathsf{C}_{\mathsf{ data}}\varepsilon^{\frac{5}{4}}}{2\varepsilon^{3}}\big{)}^{-1}\leq 2\hat{C}\mathsf{K} \langle\mathsf{B}_{6}\rangle\varepsilon^{3}\,. \tag{6.53}\] Notice the improvement of \(\mathcal{O}(\varepsilon^{\frac{5}{4}})\mapsto\mathcal{O}(\varepsilon^{3})\) that the above bound gives over assumption ((vii)). 
Using (6.53) we return to (6.24e) with \(i=j=1\), use the mean value theorem in \(x_{1}\) and assumption ((vi)), and deduce that \[\mathsf{D}_{1}\mathsf{D}_{1}J_{{}_{\!g}}(x_{1}^{*},x_{2},t) \geq(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2\varepsilon} \mathsf{D}_{1}^{3}(w_{0})(x_{1}^{*},x_{2})-\hat{C}(t-\mathsf{t}_{\mathsf{in}}) \mathsf{K}\langle\mathsf{B}_{6}\rangle\] \[\geq(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2\varepsilon} \mathsf{D}_{1}^{3}(w_{0})(x_{1}^{\vee},x_{2})-(t-\mathsf{t}_{\mathsf{in}}) \tfrac{1+\alpha}{2\varepsilon}\tfrac{|x_{1}^{*}-x_{1}^{\vee}|}{\varepsilon} \|\mathsf{D}_{1}^{4}w_{0}\|_{L^{\infty}}-\mathring{C}(t-\mathsf{t}_{\mathsf{in}} )\mathsf{K}\langle\mathsf{B}_{6}\rangle\] \[\geq(t-\mathsf{t}_{\mathsf{in}})\tfrac{9(1+\alpha)}{20\varepsilon} -(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2\varepsilon}\tfrac{2C\mathsf{K} \langle\mathsf{B}_{6}\rangle\varepsilon^{3}}{\varepsilon}\mathsf{C}_{\mathsf{ data}}-\hat{C}(t-\mathsf{t}_{\mathsf{in}})\mathsf{K}\langle\mathsf{B}_{6}\rangle\] \[\geq(t-\mathsf{t}_{\mathsf{in}})\tfrac{2(1+\alpha)}{5\varepsilon}\,, \tag{6.54}\] upon taking \(\varepsilon\) sufficiently small. We note that the above lower bound vanishes as \(t\to\mathsf{t}_{\mathsf{in}}\). We may obtain a matching bound for \(\mathsf{D}_{1}\partial_{t}J_{{}_{\!g}}(x_{1}^{*},x_{2},t)\) as follows. By Taylor's theorem in time, and using that \(J_{{}_{\!g}}(\cdot,\mathsf{t}_{\mathsf{in}})\equiv 1\), we obtain that \[0=\mathsf{D}_{1}J_{{}_{\!g}}(x,\mathsf{t}_{\mathsf{in}})=\mathsf{D}_{1}J_{{}_{ \!g}}(x,t)+(\mathsf{t}_{\mathsf{in}}-t)\mathsf{D}_{1}\partial_{t}J_{{}_{\!g}}(x,t)+\int_{\mathsf{t}_{\mathsf{in}}}^{t}\mathsf{D}_{1}\partial_{t}^{2}J_{{}_{ \!g}}(x,t^{\prime})(t^{\prime}-\mathsf{t}_{\mathsf{in}})\mathrm{d}t^{\prime}\,. \tag{6.55}\] Evaluating the above expression at \(x=(x_{1}^{*}(x_{2},t),x_{2})\), and using that \(J_{{}_{\!g},1}\left(x_{1}^{*}(x_{2},t),x_{2},t\right)=0\), we obtain that \[\mathsf{D}_{1}\partial_{t}J_{{}_{\!g}}(x_{1}^{*}(x_{2},t),x_{2},t)=\tfrac{1}{t- \mathsf{t}_{\mathsf{in}}}\int_{\mathsf{t}_{\mathsf{in}}}^{t}\mathsf{D}_{1} \partial_{t}^{2}J_{{}_{\!g}}(x_{1}^{*}(x_{2},t),x_{2},t^{\prime})(t^{\prime}- \mathsf{t}_{\mathsf{in}})\mathrm{d}t^{\prime} \tag{6.56}\] and therefore \[\big{|}\mathsf{D}_{1}\partial_{t}J_{{}_{\!g}}(x_{1}^{*}(x_{2},t),x_{2},t)\big{|} \leq\tfrac{t-\mathsf{t}_{\mathsf{in}}}{2}\|\mathsf{D}_{1}\partial_{t}^{2}J_{{}_ {\!g}}\|_{L^{\infty}_{x,t}}\,. \tag{6.57}\] In order to bound the right side of the above identity, we appeal to Lemma B.1 and the available bounds at the \(6^{th}\) derivative level which are given by the bootstrap (5.37u) and the initial data assumption. More precisely, we have \(\|\mathsf{D}^{3}J_{{}_{\!g}}\|_{L^{\infty}_{x_{\!g}}}\lesssim\varepsilon^{- \frac{1}{2}}\|\mathsf{D}^{5}J_{{}_{\!g}}(\cdot,\mathsf{t}_{\mathsf{in}})\|_{L^{ 2}_{x}}+\varepsilon^{-1}\|\mathsf{D}^{6}J_{{}_{\!g}}\|_{L^{2}_{x_{\!g}}}\lesssim (\mathsf{B}_{{}_{\!g}})\). This estimate gives a sub-optimal bound since upon noting that \(\mathsf{D}_{1}\partial_{t}^{2}=\varepsilon^{-2}\mathsf{D}_{1}\mathsf{D}_{t}^{2}\), we infer \(\|\mathsf{D}_{1}\partial_{t}^{2}J_{{}_{\!g}}\|_{L^{\infty}_{x_{\!g}}}\lesssim \varepsilon^{-2}\langle\mathsf{B}_{{}_{\!g}}\rangle\). 
To obtain the correct bound, which is sharper by a full power of \(\varepsilon\), we note that (6.37) implies \[\mathsf{D}_{1}\partial_{t}^{2}J_{{} Using the above estimate, we return to (6.58), and bound the remaining term by brute force using the bootstraps (5.37) and the improved \(\hat{\mathbf{Z}}_{\mathcal{N}}\) bounds from (8.22a), to obtain that \[\|\mathsf{D}_{1}\partial_{t}^{2}J_{s}\|_{L^{\infty}_{x,t}}\lesssim\varepsilon^ {-1}\langle\mathsf{B}_{6}\rangle\;. \tag{6.60}\] Then, the above estimate and (6.57) imply that \[\big{|}\mathsf{D}_{1}\partial_{t}J_{g}(x_{1}^{*}(x_{2},t),x_{2},t)\big{|} \lesssim\tfrac{t-\mathsf{t}_{\mathsf{in}}}{\varepsilon}\langle\mathsf{B}_{6}\rangle \tag{6.61}\] holds pointwise for \((x_{2},t)\in\widehat{\mathcal{P}}\). Using (6.54) and (6.61), we return to bound three terms in (6.50). By also appealing to (5.7) we rewrite \(\partial_{t}\partial_{t}\overline{J}_{g}=\partial_{t}\partial_{t}(\overline{ J}_{g}-J_{g})+\partial_{t}\partial_{t}J_{g}\), the fact that \(|\mathfrak{C}^{\prime}|\leq 4\) and \(\mathsf{t}_{\mathsf{fin}}-\mathsf{t}_{\mathsf{med}}=\tfrac{\varepsilon}{50(1 +\alpha)}\), and to the bounds (6.24), (6.38a), (6.38b), (6.38c), (6.46), and (6.51), we deduce \[\big{|}\hat{\mathsf{Q}}_{\mathsf{s}}(x_{2},\mathsf{s})\big{|} \lesssim\tfrac{\varepsilon}{\mathsf{Q}(x_{2},\mathsf{s})}\big{(} \big{(}\tfrac{8}{(\mathsf{t}_{\mathsf{fin}}-\mathsf{t}_{\mathsf{med}})^{2}}+ \tfrac{\hat{C}}{\varepsilon}\big{)}\!+\!\tfrac{\hat{C}\mathsf{s}}{\varepsilon }\langle\mathsf{B}_{6}\rangle^{2}\big{)}\leq\tfrac{2\cdot 5^{2}\cdot 50^{2} \mathsf{Q}}{\varepsilon} \tag{6.62a}\] \[\big{|}\hat{\mathsf{Q}}_{2}(x_{2},\mathsf{s})\big{|} \lesssim\tfrac{\varepsilon}{\mathsf{Q}(x_{2},\mathsf{s})}\big{(} \tfrac{1}{\varepsilon}+\tfrac{\mathsf{s}}{\varepsilon}\langle\mathsf{B}_{6} \rangle\big{)}\lesssim 1\] (6.62b) \[\big{|}\hat{\mathsf{Q}}(x_{2},\mathsf{s})-\hat{\mathsf{Q}}_{ \mathsf{s}}(x_{2},\mathsf{s})\big{|} \lesssim\varepsilon+\mathsf{s}\lesssim\varepsilon\,. \tag{6.62c}\] The above bounds establish (6.38d)-(6.38f), thereby concluding the proof of the lemma. ### Properties of \(\overline{J}_{g}\), \(\mathfrak{q}\), and the definition of the curve of pre-shocks First, we show that the minimum of \(J_{g}(\cdot,x_{2},t)\) is attained at a unique point as soon as \(t>\mathsf{t}_{\mathsf{in}}\), justifying the definition (5.12). **Lemma 6.4** (The point \(x_{1}^{*}(x_{2},t)\) is uniquely defined).: _Assume that the bootstraps (5.37) hold, and assume that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Then, for all \((x_{2},t)\in\widehat{\mathcal{P}}\) with \(t>\mathsf{t}_{\mathsf{in}}\), there exists a unique \(x_{1}^{*}(x_{2},t)\) at which the minimum of \(J_{g}(\cdot,x_{2},t)\) is attained._ Proof of Lemma 6.4.: Recall cf. (5.16) and the discussion in the paragraph above that equation that \(x_{1}^{*}\) can be equivalently defined as the location of the global minimum of \(\overline{J}_{g}(\cdot,x_{2},t)\) or \(J_{g}(\cdot,x_{2},t)\). Fix \((x_{2},t)\in\widehat{\mathcal{P}}\), with \(t>\mathsf{t}_{\mathsf{in}}\). By (4.7), (5.37a), (6.8), and (6.25), we have that the continuous map \(x_{1}\mapsto J_{g}(x_{1},x_{2},t)-1\) is supported in \(\{|x_{1}|\leq(9+\mathsf{C}_{\mathsf{supp}})\varepsilon\}\subset\mathbb{T}\). As such the minimum of this function is attained at at least one point. 
For any such point \(x_{1}^{*}\), we have that (6.40) holds, and thus the argument which as lead to the bound (6.53) holds true, yielding the estimate \(|x_{1}^{*}-x_{1}^{\vee}|\lesssim\varepsilon^{3}\). Now assume that \(x_{1}^{*}\) was not unique, rendering the existence of two such points \(x_{1,a}^{*}\) and \(x_{1,b}^{*}\). Then we must have \(J_{g}(x_{1,a}^{*},x_{2},t)=J_{g}\), \((x_{1,b}^{*},x_{2},t)=0\), and by the mean value theorem there must exist \(x_{1}^{\sharp}\) which lies in between \(x_{1,a}^{*}\) and \(x_{1,b}^{*}\), such that \(J_{g,11}\left(x_{1}^{\sharp},x_{2},t\right)=0\). But note that (6.53) implies \(|x_{1}^{\sharp}-x_{1}^{\vee}|\lesssim\varepsilon^{3}\). Therefore, we may repeat the bounds in (6.54), with \(x_{1}^{*}\) replaced by \(x_{1}^{\sharp}\), and deduce that \(J_{g}\),\(11\left(x_{1}^{\sharp},x_{2},t\right)\geq(t-\mathsf{t}_{\mathsf{in}})^{2(1+ \alpha)}\tfrac{2(1+\alpha)}{5\varepsilon^{3}}>0\). This is a contradiction, concluding the proof. **Lemma 6.5** (The map \(\mathfrak{q}\) is invertible).: _Assume that the bootstraps (5.37) hold, and assume that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Then, the map \(\mathfrak{q}\) defined by (5.18) is invertible, with inverse \(\mathfrak{q}^{-1}\) defined by (5.19)._ Proof of Lemma 6.5.: Fix \(x_{2}\in\mathbb{T}\) and \(\mathsf{s}\in[0,\varepsilon)\). We need to show that the equation \(\mathsf{s}-\mathfrak{q}(x_{2},t)=0\) has a unique solution \(t\in[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\). The uniqueness part is easy: if two solutions \(\mathsf{t}_{\mathsf{in}}\leq t_{a}<t_{b}<\mathsf{t}_{\mathsf{fin}}\) would exist, then we'd have \(\mathfrak{q}(x_{2},t_{a})=\mathfrak{q}(x_{2},t_{b})\), and so by the mean value theorem \(\partial_{t}\mathfrak{q}(x_{2},t^{\sharp})=0\) for some \(t_{a}<t^{\sharp}<t_{b}\). But we have already shown earlier, see (6.43), that \(\partial_{t}\mathfrak{q}(x_{2},t^{\sharp})\geq\tfrac{2(1+\alpha)}{5}>0\), a contradiction. When \(\mathsf{s}=0\), then \(t=\mathsf{t}_{\mathsf{in}}\) clearly solves the desired equation since \(\mathfrak{q}(x_{2},\mathsf{t}_{\mathsf{in}})=0\). When \(\mathsf{s}\in(0,\varepsilon)\), the existence of \(t\) follows by the intermediate value theorem, the continuity of \(\mathfrak{q}\) with respect to time, and the bounds (5.37k) and (6.46). Indeed, (6.46) shows that \(\mathfrak{q}(x_{2},t)\leq 401(1+\alpha)(t-\mathsf{t}_{\mathsf{in}})\), so we can find \(t_{a}>\mathsf{t}_{\mathsf{in}}\) such that \(\mathsf{s}-\mathfrak{q}(x_{2},t_{a})>0\). For the other bound, we recall cf. (5.7) and the bootstrap (5.37k) that for every \(x\in\mathbb{T}^{2}\) there exists \(t_{*}(x)<\mathsf{t}_{\mathsf{fin}}\) such that \(\overline{J}_{g}(x,t_{*}(x))=0\). This implies that for every \(x_{2}\) there exists \(t_{b}\in[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\) with \(\mathcal{J}(x_{2},t_{b})=0\). Hence, \(\mathsf{s}-\mathfrak{q}(x_{2},t_{b})=\mathsf{s}-\varepsilon<0\). Thus, by the intermediate value theorem there must exist a root \(t\in(t_{a},t_{b})\) of \(\mathsf{s}-\mathfrak{q}(x_{2},t)=0\). **Definition 6.6** (The curve of pre-shocks).: _For all \(x_{2}\in\mathbb{T}\) we define, extending the definition of \(\mathfrak{q}^{-1}\) by continuity,_ \[\hat{x}_{1}(x_{2}):=\lim_{\mathsf{s}\to\varepsilon^{-1}}x_{1}^{*}\big{(}x_{2}, \mathfrak{q}^{-1}(x_{2},\mathsf{s})\big{)}=x_{1}^{*}\big{(}x_{2},\mathfrak{q}^{-1}( x_{2},\varepsilon)\big{)}. 
\tag{6.63}\] _he parametrized curve_ \[\Xi^{*}:=\Big{\{}\big{(}\hat{x}_{1}(x_{2}),x_{2},\mathfrak{q}^{-1}(x_ _is called the curve of pre-shocks. We define \(t^{*}(x_{2}):=\mathfrak{q}^{-1}(x_{2},\varepsilon)\) so that \(\Xi^{*}:=\Big{\{}\big{(}\mathring{x}_{1}(x_{2}),x_{2},t^{*}(x_{2})\big{)}\colon x _{2}\in\mathbb{T}\Big{\}}\)._ **Proposition 6.7** (**Equivalent characterization of the curve of pre-shocks)**.: _The curve of pre-shocks \(\Xi^{*}\) is precisely the intersection of the two-dimensional surfaces \(\{\overline{J}_{{}_{g}}=0\}\) and \(\{J_{{}_{g},1}=0\}\)._ Proof of Proposition 6.7.: We first establish the inclusion \(\Xi^{*}\subseteq\{\overline{J}_{{}_{g}}=0\}\cap\{J_{{}_{g},1}=0\}\). The fact that \(J_{{}_{g},1}\) vanishes on \(\Xi^{*}\) follows from the definition of \(x_{1}^{*}\) (see (5.13)), which gives that \(J_{{}_{g},1}(x_{1}^{*}(x_{2},t),x_{2},t))|_{t=\mathfrak{q}^{-1}(x_{2},\mathfrak{ s})}=0\), and thus this equality also holds as \(\mathfrak{s}\to\varepsilon\), by continuity. The fact that \(\overline{J}_{{}_{g}}\) vanishes on \(\Xi^{*}\) is a consequence of the definition of the map \(\mathfrak{q}^{-1}\) (see (5.18b) and (5.19)). Indeed, as \(\mathfrak{s}\to\varepsilon\), we have that \(\mathcal{J}(x_{2},\mathfrak{q}^{-1}(x_{2},\mathfrak{s}))\to 0\), which means via (5.14) that \(\overline{J}_{{}_{g}}(x_{1}^{*}(x_{2},\mathfrak{q}^{-1}(x_{2},\mathfrak{s})), x_{2},\mathfrak{q}^{-1}(x_{2},\mathfrak{s}))\to 0\) as \(\mathfrak{s}\to\varepsilon\). By continuity of \(\overline{J}_{{}_{g}}\) in the \(x_{1}\) and \(t\) entries, it follows that \(\overline{J}_{{}_{g}}\equiv 0\) on \(\Xi^{*}\). The proof is completed once we establish the reverse inclusion, namely \(\{\overline{J}_{{}_{g}}=0\}\cap\{J_{{}_{g},1}=0\}\subseteq\Xi^{*}\). Let \((x_{1},x_{2},t)\in\{\overline{J}_{{}_{g}}=0\}\cap\{J_{{}_{g},1}=0\}\). We need to show two things: \(t=\mathfrak{q}^{-1}(x_{2},\varepsilon)\) and \(x_{1}=x_{1}^{*}(x_{2},\mathfrak{q}^{-1}(x_{2},\varepsilon))\). Since \(\overline{J}_{{}_{g}}\),\(1=J_{{}_{g},1}\), we have that \(\overline{J}_{{}_{g}}(x_{1},x_{2},t)=\overline{J}_{{}_{g},1}\,(x_{1},x_{2},t)=0\), and therefore the map \(x_{1}\mapsto\overline{J}_{{}_{g}}(x_{1},x_{2},t)\), with \((x_{2},t)\) frozen, has a global minimum at \(x_{1}\) (indeed, \(\overline{J}_{{}_{g}}\) cannot attain strictly negative values in the closure of the spacetime considered here). By (5.12) and the uniqueness statement established in Lemma 6.4, it follows that \(x_{1}=x_{1}^{*}(x_{2},t)\). Moreover, by the definition (5.14) it follows that \(0=\overline{J}_{{}_{g}}(x_{1},x_{2},t)=\mathcal{J}(x_{2},t)\), which gives in light of (5.18b) that \(t=\mathfrak{q}^{-1}(x_{2},\varepsilon)\). This concludes the proof. ### Damping properties of \(J_{{}_{g}}\) and \(\mathcal{J}\) In this section we record the properties of \(J_{{}_{g}}\) and \(\mathcal{J}\) that are most important to the analysis, especially to the energy estimates in subsequent sections. **Lemma 6.8** (**Damping**).: _Assume that the bootstraps (5.37) hold, and assume that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). Then, for all \((x,t)\in\mathcal{P}\), we have that_ \[(J_{{}_{g}}\hat{\textbf{W}}_{\mathcal{N}})(x,t)\leq-\tfrac{9}{10\varepsilon}+ \tfrac{13}{\varepsilon}J_{{}_{g}}(x,t)\,. \tag{6.64}\] Proof of Lemma 6.8.: If \((x,t)\in\mathcal{P}\) is such that \(J_{{}_{g}}\hat{\textbf{W}}_{\mathcal{N}}(x,t)\leq-\tfrac{9}{10\varepsilon}\), then (6.64) holds automatically due to (5.37k). 
If on the other hand \(J_{{}_{g}}\hat{\textbf{W}}_{\mathcal{N}}(x,t)\geq-\tfrac{9}{10\varepsilon}\), then by (5.37b) we have \(J_{{}_{g}}(x,t)\geq\tfrac{2}{25}\). In this case, (6.17a) and (4.10) give \[\tfrac{9}{10\varepsilon}+J_{{}_{g}}\hat{\textbf{W}}_{\mathcal{N}}(x,t)\leq \tfrac{9}{10\varepsilon}+(w_{0})_{1}\,(x)+\hat{C}\varepsilon\leq\tfrac{9}{10 \varepsilon}+\tfrac{1}{10\varepsilon}+\hat{C}\varepsilon\leq\tfrac{1+\hat{C} \varepsilon^{2}}{\varepsilon}J_{{}_{g}}{J_{g}}^{-1}\leq\tfrac{1+\hat{C} \varepsilon^{2}}{\varepsilon}J_{{}_{g}}\tfrac{25}{2}\,,\] thereby proving (6.64). For the purpose of closing energy estimates, we will make use of the following crucial lemma: **Lemma 6.9** (**Damping and anti-damping**).: _Assume that the bootstraps (5.37) hold, and assume that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). For all \((x,\mathfrak{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\), we have_ \[-J_{{}_{g}}(\mathbb{Q}\delta_{\mathfrak{s}}+V\partial_{2})\mathcal{J}^{\frac{3} {2}}+\mathcal{J}^{\frac{3}{2}}(\mathbb{Q}\delta_{\mathfrak{s}}+V\partial_{2})J _{{}_{g}}\geq\tfrac{1+\alpha}{8\varepsilon}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\,. \tag{6.65}\] Proof of Lemma 6.9.: We recall that by (5.29) we have \[(\mathbb{Q}\partial_{\mathfrak{s}}+V\partial_{2})\mathcal{J}=-\tfrac{\mathbb{Q}} {\varepsilon}\,. \tag{6.66}\] As such, using (5.30) we rewrite the left side of (6.65) as \[-J_{{}_{g}}(\mathbb{Q}\partial_{\mathfrak{s}}+V\partial_{2})\mathcal{J}^{\frac{3} {2}}+\mathcal{J}^{\frac{3}{2}}(\mathbb{Q}\partial_{\mathfrak{s}}+V\partial_{2})J _{{}_{g}}=\tfrac{3}{2}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\tfrac{\mathbb{Q}}{ \varepsilon}+\mathcal{J}^{\frac{3}{2}}\big{(}\tfrac{1+\alpha}{2}J_{{}_{g}}\hat{ \textbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{{}_{g}}\hat{\textbf{Z}}_{{}_{ \mathcal{N}}}\big{)}\,. \tag{6.67}\] Using the bootstraps (5.37), the coefficient bounds (6.38), and the fact that \(0\leq\mathcal{J}\leq J_{{}_{g}}\), we then bound from below the right side of (6.67) as \[\tfrac{3}{2}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\tfrac{\mathbb{Q}}{ \varepsilon}+\mathcal{J}^{\frac{3}{2}}\big{(}\tfrac{1+\alpha}{2}J_{{}_{g}}\hat{ \textbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{{}_{g}}\hat{\textbf{Z}}_{{}_{ \mathcal{N}}}\big{)} =\tfrac{3}{2}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\tfrac{\mathbb{Q}}{ \varepsilon}+\frac{3}{2}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\tfrac{\mathbb{Q}- \mathbb{Q}}{\varepsilon}+\mathcal{J}^{\frac{3}{2}}\big{(}\tfrac{1+\alpha}{2}J_{{}_{g }}\hat{\textbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{{}_{g}}\hat{\textbf{Z}}_{{}_{ \mathcal{N}}}\big{)}\] \[\geq\mathcal{J}^{\frac{3}{2}}J_{{}_{g}}\tfrac{17(1+\alpha)}{ 40\varepsilon}-\tfrac{9\mathbb{C}\varepsilon}{2}\varepsilon\mathcal{J}^{\frac{1} {2}}J_{{}_{g}}-\mathcal{J}^{\frac{3}{2}}\big{(}\tfrac{1+\alpha}{2\varepsilon}+ \hat{C}\big{)}\] \[\geq\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\tfrac{5(11+\alpha)}{ 80\varepsilon}-\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\big{(}\tfrac{ ### Closure of the (5.37s) bootstrap In this section we show that (5.37s) follows from (5.37r). We first consider a general function \(F\) such that \(\mathcal{J}^{r}\partial_{\mathsf{s}}F\in L^{2}_{x,\mathsf{s}}\) for some \(r\in\mathbb{R}\). 
Then, by the fundamental theorem of calculus in time we have that for \(r^{\prime}\in\mathbb{R}\) to be determined, pointwise in \(\mathsf{s}\) it holds that \[\mathcal{J}^{r^{\prime}}(\mathsf{s})\|F(\cdot,\mathsf{s})\|_{L^{2}_{x}}\leq \mathcal{J}^{r^{\prime}}(\mathsf{s})\|F(\cdot,0)\|_{L^{2}_{x}}+\mathcal{J}^{r^ {\prime}}(\mathsf{s})\int_{0}^{\mathsf{s}}\mathcal{J}^{r}(\mathsf{s}^{\prime} )\|\partial_{\mathsf{s}}F(\cdot,\mathsf{s}^{\prime})\|_{L^{2}_{x}}\mathcal{J}^ {-r}(\mathsf{s}^{\prime})\mathrm{d}\mathsf{s}^{\prime}\,, \tag{6.69}\] and therefore we have \[\|\mathcal{J}^{r^{\prime}}F\|_{L^{2}_{x,\mathsf{s}}}\leq\|F(\cdot,0)\|_{L^{2}_ {x}}\|\mathcal{J}^{r^{\prime}}\|_{L^{2}_{x}}+\|\mathcal{J}^{r}\partial_{ \mathsf{s}}F\|_{L^{2}_{x,\mathsf{s}}}\left(\int_{0}^{\varepsilon}\mathcal{J}^ {2r^{\prime}}(\mathsf{s})\int_{0}^{\mathsf{s}}\mathcal{J}^{-2r}(\mathsf{s}^{ \prime})\mathrm{d}\mathsf{s}^{\prime}\mathrm{d}\mathsf{s}\right)^{\frac{1}{2}}\,. \tag{6.70}\] At this point, we recall from the definition (5.18b) that we have the identity \(\mathcal{J}(\mathsf{s})=1-\frac{\mathsf{s}}{\varepsilon}\). Therefore, an explicit computation shows that (6.70) yields \[\text{if }r>\tfrac{1}{2}\text{ and }r^{\prime}>r-1,\text{ then: }\qquad\|\mathcal{J}^{r^{\prime}}F\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{ \varepsilon^{\frac{1}{2}}}{\sqrt{2r^{\prime}+1}}\|F(\cdot,0)\|_{L^{2}_{x}}+ \tfrac{\varepsilon}{\sqrt{2(2r-1)(1+r^{\prime}-r)}}\|\mathcal{J}^{r}\partial_ {\mathsf{s}}F\|_{L^{2}_{x,\mathsf{s}}}\,, \tag{6.71}\] and \[\text{if }r<\tfrac{1}{2}\text{ and }r^{\prime}>-\tfrac{1}{2},\text{ then: }\qquad\|\mathcal{J}^{r^{\prime}}F\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{ \varepsilon^{\frac{1}{2}}}{\sqrt{2r^{\prime}+1}}\|F(\cdot,0)\|_{L^{2}_{x}}+ \tfrac{\varepsilon}{\sqrt{(2r^{\prime}+1)(1-2r)}}\|\mathcal{J}^{r}\partial_{ \mathsf{s}}F\|_{L^{2}_{x,\mathsf{s}}}\,. \tag{6.72}\] Returning to (6.69), we also note that upon taking \(r^{\prime}=0\), we have \[\text{if }r<\tfrac{1}{2},\text{ then: }\qquad\|F\|_{L^{\infty}_{x}L^{2}_{x}}\leq\|F( \cdot,0)\|_{L^{2}_{x}}+\tfrac{\varepsilon^{\frac{1}{2}}}{\sqrt{1-2r}}\| \mathcal{J}^{r}\partial_{\mathsf{s}}F\|_{L^{2}_{x,\mathsf{s}}}\,. \tag{6.73}\] Lastly we note that if only a bound on \(\mathcal{J}^{r}\partial_{\mathsf{s}}F\in L^{2}_{x,\mathsf{s}}\) is available with \(r>\frac{1}{2}\), then (6.73) is not available. In this situation, we require knowledge of \(\mathcal{J}^{r-\frac{1}{2}}J^{\frac{1}{2}}_{\theta}\partial_{\mathsf{s}}F\in L ^{2}_{x,\mathsf{s}}\) and conclude \(J^{\frac{1}{2}}_{\theta}F\in L^{\infty}_{\mathsf{s}}L^{2}_{x}\). The argument is as follows. 
Using (5.30), (5.37), (6.38), and (6.64), we conclude \[\tfrac{d}{ds}\|J^{\frac{1}{2}}_{\theta}F\|^{2}_{L^{2}_{x}} =\int\tfrac{1}{\mathbb{Q}}|F|^{2}\mathsf{Q}\partial_{\mathsf{s}} J_{g}+2\int J_{g}F\partial_{\mathsf{s}}F\] \[=\int\tfrac{1}{\mathbb{Q}}|F|^{2}\big{(}-V\partial_{2}J_{g}+ \tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g }\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}+2\int J_{g}F\partial_{\mathsf{s}}F\] \[\leq\int\tfrac{1}{\mathbb{Q}}|F|^{2}\big{(}5\varepsilon(1+\alpha )\mathsf{C}_{\mathsf{V}}-\tfrac{9(1+\alpha)}{20\varepsilon}+\tfrac{13(1+ \alpha)}{2\varepsilon}J_{g}+\tfrac{3(1-\alpha)}{4}\mathsf{C}_{\underline{ \mathcal{Z}}_{\mathcal{N}}}\big{)}+2\|J^{\frac{1}{2}}_{g}F\|_{L^{2}_{x}}\|J^{ \frac{1}{2}}_{\theta}\partial_{\mathsf{s}}F\|_{L^{2}_{x}}\] \[\leq\tfrac{18}{\varepsilon}\|J^{\frac{1}{2}}_{\theta}F\|^{2}_{L^{ 2}_{x}}+2\|J^{\frac{1}{2}}_{\theta}F\|_{L^{2}_{x}}\|J^{\frac{1}{2}}_{\theta} \partial_{\mathsf{s}}F\|_{L^{2}_{x}}\,, \tag{6.74}\] upon taking \(\varepsilon\) to be sufficiently small. Integrating the above inequality in time, and recalling that \(\mathcal{J}(\mathsf{s})=1-\frac{\mathsf{s}}{\varepsilon}\), we are lead to conclude that \[\text{if }\tfrac{1}{2}<r<1,\text{ then: }\qquad\sup_{\mathsf{s}\in[0,\varepsilon]}\|J^{\frac{1}{2}}_{ \theta}F(\cdot,\mathsf{s})\|_{L^{2}_{x}} \leq e^{9}\|J^{\frac{1}{2}}_{\theta}F(\cdot,0)\|_{L^{2}_{x}}+e^{9} \int_{0}^{\varepsilon}\|J^{\frac{1}{2}}_{\theta}\partial_{\mathsf{s}}F(\cdot, \mathsf{s})\|_{L^{2}_{x}}\mathrm{d}\mathsf{s}\] \[=e^{9}\|J^{\frac{1}{2}}_{\theta}F(\cdot,0)\|_{L^{2}_{x}}+e^{9} \int_{0}^{\varepsilon}\|\mathcal{J}^{r-\frac{1}{2}}J^{\frac{1}{2}}_{\theta} \partial_{\mathsf{s}}F(\cdot,\mathsf{s})\|_{L^{2}_{x}}\mathcal{J}^{-r+\frac{1}{2} }(\mathsf{s})\mathrm{d}\mathsf{s}\] \[\leq e^{9}\|J^{\frac{1}{2}}_{\theta}F(\cdot,0)\|_{L^{2}_{x}}+ \tfrac{\varepsilon^{\frac{1}{2}}e^{9}}{\sqrt{2(1-r)}}\|\mathcal{J}^{r-\frac{1}{2} }J^{\frac{1}{2}}_{\theta}\partial_{\mathsf{s}}F(\cdot,\mathsf{s})\|_{L^{2}_{x, \mathsf{s}}}\,. \tag{6.75}\] Having established the bounds (6.71), (6.73), and (6.75), we show that (5.37s) follows from (5.37r), assuming \(\mathsf{B}_{5}\) is sufficiently large with respect to \(\mathsf{B}_{6}\). 
Indeed, from (6.71) with \(r^{\prime}=0\) and \(r=\frac{3}{4}\), using (4.11), recalling that \(\partial_{\mathsf{s}}=\frac{1}{\varepsilon\mathsf{Q}}\widetilde{\mathsf{D}}_{ \mathsf{s}}\), and that \(\widehat{\mathsf{Q}}^{-1}\) is bounded according to (6.38a), we deduce \[\widetilde{\mathcal{D}}_{5,\mathcal{N}}(\varepsilon) \leq\mathsf{C}_{\mathsf{data}}+\tfrac{5}{1+\alpha}\widetilde{ \mathcal{D}}_{6,\mathcal{N}}(\varepsilon)\,,\] \[\widetilde{\mathcal{D}}_{5,\mathcal{T}}(\varepsilon) \leq\varepsilon\mathsf{C}_{\mathsf{data}}+\tfrac{5}{1+\alpha} \widetilde{\mathcal{D}}_{6,\mathcal{T}}(\varepsilon)\,.\] From the above bound and the definitions (5.36g) and (5.36j), it follows that \[\widetilde{\mathcal{D}}_{5}^{2}(\varepsilon)\leq 2\mathsf{C}_{\mathsf{data}}^{2}+ \tfrac{50}{(1+\alpha)^{2}}\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}( \varepsilon)+(\mathsf{K}\varepsilon)^{-2}\big{(}2\varepsilon^{2}\mathsf{C}_{ \mathsf{data}}^{2}+\tfrac{50}{(1+\alpha)^{2}}\widetilde{\mathcal{D}}_{6, \mathcal{T}}^{2}(\varepsilon)\big{)}\leq 4\mathsf{C}_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{5,\tau}( \mathsf{s})\leq e^{9}\mathsf{C}_{\mathsf{data}}\varepsilon^{\frac{1}{2}}+( \tfrac{2}{\varepsilon})^{\frac{1}{2}}e^{9}\tfrac{5}{1+\alpha}\widetilde{ \mathcal{D}}_{6,\tau}(\varepsilon)\] and therefore, upon recalling (5.36a) and (5.36d), we obtain \[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{ \mathcal{E}}_{5}^{2}(\mathsf{s}) \leq 2e^{18}\mathsf{C}_{\mathsf{data}}^{2}+4e^{18}\tfrac{5^{2}}{(1+ \alpha)^{2}}\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\varepsilon)+(\mathsf{ K}\varepsilon)^{-2}\big{(}2e^{18}\mathsf{C}_{\mathsf{data}}^{2}\varepsilon^{2}+4e^{18} \tfrac{5^{2}}{(1+\alpha)^{2}}\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}( \varepsilon)\big{)}\] \[\leq 4e^{18}\mathsf{C}_{\mathsf{data}}^{2}+4e^{18}\tfrac{5^{2}}{(1 +\alpha)^{2}}\widetilde{\mathcal{D}}_{6}^{2}(\varepsilon)\,. \tag{6.77}\] From (6.76) and (6.77) we thus obtain \[\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{ \mathcal{E}}_{5}(\mathsf{s})+\widetilde{\mathcal{D}}_{5}(\varepsilon)\leq 2(1+e^{9}) \mathsf{C}_{\mathsf{data}}+\tfrac{10}{1+\alpha}(1+e^{9})\widetilde{\mathcal{D }}_{6}(\varepsilon) \tag{6.78}\] and so the bootstrap (5.37r) implies that (5.37s) holds (with a strict inequality) as soon as \[2(1+e^{9})\mathsf{C}_{\mathsf{data}}+\tfrac{10}{1+\alpha}(1+e^{9})\mathsf{B} _{6}=:\mathsf{B}_{5}\,. \tag{6.79}\] **Remark 6.10** (\(\mathsf{B}_{5}\) **and \(\mathsf{B}_{6}\) are proportional).**_Note that since we will choose \(\mathsf{B}_{6}\geq\mathsf{C}_{\mathsf{data}}\) (see (10.74)), the relation (6.79) implies_ \[\tfrac{10}{1+\alpha}(1+e^{9})\mathsf{B}_{6}\leq\mathsf{B}_{5}\leq 12(1+e^{9}) \mathsf{B}_{6}\,. \tag{6.80}\] _As such, any upper bound of the type \(A\lesssim\mathsf{B}_{5}\) may be written as \(A\lesssim\mathsf{B}_{6}\), upon changing the implicit constant._ ## 7. Bounds for the geometry, sound speed, and the tangential reparameterization velocity The purpose of this section is to establish the following bounds, which are then subsequently used throughout the paper. Additionally, we close the bootstrap assumptions for the \(\widetilde{\mathsf{D}}^{6}\) level bounds on the geometry, (5.37u)-(5.37t). 
**Proposition 7.1** (Bounds for the geometry, sound speed, and ALE velocity).: _Assume the bootstrap assumptions (5.37) hold, and that \(\varepsilon\) is taken to be sufficiently small to ensure \(\varepsilon^{\frac{1}{2}}\big{(}\langle\mathsf{B}_{\mathsf{J}}\rangle+\langle \mathsf{B}_{\mathsf{h}}\rangle+\langle\mathsf{B}_{6}\rangle\big{)}\leq 1\). Then, assuming \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0},\) and \(\mathsf{C}_{\mathsf{data}}\), we have_ \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}J_{s}\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}+\big{\|}\mathcal{J }^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{s}\big{\|}_{L_{x}^{2},\lesssim} \varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{7.1a}\] \[\big{\|}\widetilde{\mathsf{D}}^{5}J_{s}\big{\|}_{L_{\infty}^{ \infty}L_{x}^{2}} \lesssim\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,\] (7.1b) \[\varepsilon^{\frac{1}{2}}\|\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\|_{L_{\infty}^{\infty}L_{x}^{2}}^{2}+ \|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{ 2}h\|_{L_{x}^{2},\lesssim} \mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (7.1c) \[\varepsilon^{\frac{1}{2}}\|\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}\widetilde{\mathsf{D}}_{1}h\|_{L_{\infty}^{\infty}L_{x}^{2}}+ \|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{ 1}h\|_{L_{x}^{2},\lesssim} \varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (7.1d) \[\big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h( \cdot,\mathsf{s})\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}} \lesssim\mathsf{K}\varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle\,,\] (7.1e) \[\varepsilon^{\frac{1}{2}}\|\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}g\|_{L_{\infty}^{\infty}L_{x}^{2}}+\|\mathcal{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{6}g\|_{L_{x}^{2},\lesssim} \mathsf{K}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^{2}\,,\] (7.1f) \[\sum_{3\leq|\tau|\leq 6}\|\mathcal{J}^{-\frac{1}{4}}\big{(} \widetilde{\mathsf{D}}^{|\gamma|}\mathcal{N}+g^{-1}\tau\widetilde{\mathsf{D}}^{| \gamma|}\widetilde{\mathsf{D}}_{2}h\big{)}\|_{L_{x}^{2},+} \|\mathcal{J}^{-\frac{1}{4}}\big{(}\widetilde{\mathsf{D}}^{|\gamma|} \mathcal{T}-g^{-1}\mathcal{N}\widetilde{\mathsf{D}}^{|\gamma|}\widetilde{\mathsf{ D}}_{2}h\big{)}\|_{L_{x}^{2},\lesssim} \mathsf{K}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle\,,\] (7.1g) \[\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\mathcal{N}\|_{L_ {x}^{\infty}L_{x}^{2}}+\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6} \tau\|_{L_{x}^{2},\lesssim} \mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (7.1h) \[\big{\|}\widetilde{\mathsf{D}}^{6}\Sigma^{\perp 1}\big{\|}_{L_{x}^{\infty}L_{x}^{2}} \lesssim\varepsilon\langle\mathsf{B}_{6}\rangle\,,\] (7.1j) \[\big{\|}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\Sigma^{ \perp 1}\big{\|}_{L_{x}^{\infty}L_{x}^{2}} \lesssim\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,\] (7.1k) \[\big{\|}\widetilde{\mathsf{D}}^{6}V\big{\|}_{L_{x}^{2},\lesssim} \mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (7.1l) \[\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}V\big{\|}_{L_{ \infty}^{\infty}L_{x}^{2}} \lesssim\mathsf{K}\varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{7.1m}\] _where the implicit constants in all the above inequalities depend only on \(\alpha,\)\(\kappa_{0},\) and \(\mathsf{C}_{\mathsf{data}}\)._ Proof of Proposition 7.1.: 
The proof consists in combining the bounds contained in Lemmas 7.3, 7.4, 7.7, 7.14, in Remarks 6.10, 7.6, 7.9, 7.10, 7.11, 7.12, and Corollary 7.13, which are all proven below. One immediate consequence of the above proposition (we recall that the implicit constants therein only depend on \(\alpha,\kappa_{0},\) and \(\mathsf{C}_{\mathsf{data}}\)), and of the bound (5.15) is that the bootstraps (5.37u)-(5.37u) are closed. **Corollary 7.2**.: _Assume that \(\mathsf{B}_{\mathsf{J}}\) and \(\mathsf{B}_{\mathsf{h}}\) are sufficiently large with respect to \(\mathsf{B}_{\mathsf{6}}\) and \(\mathsf{K}\). More precisely, define_ \[6C_{(7.1\mathrm{a})}^{\frac{1}{2}}\langle\mathsf{B}_{\mathsf{6}}\rangle =:\mathsf{B}_{\mathsf{J}} \tag{7.2a}\] \[6\big{(}C_{(7.1\mathrm{c})}^{\frac{1}{2}}\mathsf{K}+C_{(7.1 \mathrm{d})}^{\frac{1}{2}}\langle\mathsf{B}_{\mathsf{6}}\rangle =:\mathsf{B}_{\mathsf{h}}\,. \tag{7.2b}\] _Then the bounds (5.37)-(5.37) hold with constants \(\frac{1}{2}\mathsf{B}_{\mathsf{h}}\) and \(\frac{1}{2}\mathsf{B}_{\mathsf{J}}\) respectively, thereby closing these bootstraps._ The following inequalities will be used several times throughout the proof, and provide bounds for the norms of \(V\) and \(\Sigma^{\pm 1}\) in terms of norms of \(\widetilde{\mathsf{D}}_{2}h\) and \(J_{\mathsf{s}}\), and of \(\langle\mathsf{B}_{\mathsf{6}}\rangle\). **Lemma 7.3**.: _Under the same assumptions as Proposition 7.1, the rescaled sound speed \(\Sigma\) and the tangential transport velocity \(V\) satisfy_ \[\big{\|}\widetilde{\mathsf{D}}^{6}\Sigma\big{\|}_{L_{x,\mathsf{s }}^{2}} \lesssim\varepsilon\langle\widetilde{\mathcal{D}}_{5}\rangle+ \varepsilon\big{(}\varepsilon+\big{\|}\widetilde{\mathsf{D}}^{6}\widetilde{ \mathsf{D}}_{2}h\big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon\big{\|} \widetilde{\mathsf{D}}^{5}J_{\mathsf{s}}\big{\|}_{L_{x,\mathsf{s}}^{2}}\big{)} \big{(}\langle\mathsf{B}_{\mathsf{6}}\rangle+\big{\|}\widetilde{\mathsf{D}}^ {6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon \big{\|}\widetilde{\mathsf{D}}^{5}J_{\mathsf{s}}\big{\|}_{L_{x,\mathsf{s}}^{2} }\big{)}\,, \tag{7.3a}\] \[\big{\|}\widetilde{\mathsf{D}}^{6}V\big{\|}_{L_{x,\mathsf{s}}^{2}} \lesssim\varepsilon^{2}\langle\mathsf{B}_{\mathsf{6}}\rangle+ \big{\|}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L_{x, \mathsf{s}}^{2}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}J_{\mathsf{s}} \big{\|}_{L_{x,\mathsf{s}}^{2}}\,. \tag{7.3b}\] Proof of Lemma 7.3.: We prove (7.3a) only for \(\Sigma\). Comparing (5.33) with \(\beta=\frac{1}{2}\) and (5.33c), and appealing to the bootstrap (5.37), it is clear that the bounds for \(\Sigma^{-1}\) are proven in exactly the same way. In the case that \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^ {5}\), we apply \(\widetilde{\mathsf{D}}^{5}\) to (5.33b), and apply (4.11), (5.37), and (B.13), to obtain \[\big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2} \Sigma\big{\|}_{L_{x,\mathsf{s}}^{2}} \lesssim\frac{1}{2}\big{\|}\widetilde{\mathsf{D}}^{5}(g^{\frac{1}{2}}( \hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}}))\big{\|}_{L_{x, \mathsf{s}}^{2}}\lesssim\widetilde{\mathcal{D}}_{5,\tau}+\varepsilon\big{\|} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L_{x,\mathsf{s }}^{2}}+\varepsilon\lesssim\varepsilon\widetilde{\mathcal{D}}_{5}+ \varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h\big{\|} _{L_{x,\mathsf{s}}^{2}}\,. 
\tag{7.4}\] Similarly, in the case that \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^ {5}\), we let \(\widetilde{\mathsf{D}}^{5}\) act upon to (5.33a) and and apply the inequalities (4.11), (5.37), and (B.13), to obtain that \[\big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1} \Sigma\big{\|}_{L_{x,\mathsf{s}}^{2}} \lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}(J_{\mathsf{s}} \hat{\mathbf{W}}_{\mathcal{N}},J_{\mathsf{s}}\tilde{\mathbf{Z}}_{\mathcal{N}}) \big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}( J_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}h(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{ \mathcal{T}}))\big{\|}_{L_{x,\mathsf{s}}^{2}}\] \[\lesssim\varepsilon\widetilde{\mathcal{D}}_{5,\mathcal{N}}^{2}+ \varepsilon^{2}\widetilde{\mathcal{D}}_{5,\tau}^{2}+\varepsilon\big{\|} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L_{x,\mathsf{s}}^{ 2}}+\varepsilon^{2}\big{\|}\widetilde{\mathsf{D}}^{5}J_{\mathsf{s}}\big{\|}_{L_{ x,\mathsf{s}}^{2}}+\varepsilon^{2}\] \[\lesssim\varepsilon\widetilde{\mathcal{D}}_{5}+\varepsilon\big{\|} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L_{x,\mathsf{s}}^{2 }}+\varepsilon^{2}\big{\|}\widetilde{\mathsf{D}}^{5}J_{\mathsf{s}}\big{\|}_{L_{x, \mathsf{s}}^{2}} \tag{7.5}\] Finally, in the case that \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\), from (5.26), (5.33c), we have the identity \[\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\Sigma=-\varepsilon \widetilde{\mathsf{D}}_{2}\Sigma\widetilde{\mathsf{D}}_{\mathsf{s}}^{5}V- \varepsilon V\widetilde{\mathsf{D}}_{\mathsf{s}}^{5}\widetilde{\mathsf{D}}_{2} \Sigma-\alpha\varepsilon(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{ \mathcal{T}})\widetilde{\mathsf{D}}^{5}\Sigma-\alpha\varepsilon\Sigma(\widetilde{ \mathsf{D}}_{\mathsf{s}}^{5}\hat{\mathbf{Z}}_{\mathcal{N}}+\widetilde{\mathsf{D}}_{ \mathsf{s}}^{5}\hat{\mathbf{A}}_{\mathcal{T}})\] \[\qquad-\varepsilon(\widetilde{\mathsf{D}}_{\mathsf{s}}^{5},V, \widetilde{\mathsf{D}}_{2}\Sigma)-\alpha\varepsilon(\widetilde{\mathsf{D}}_{ \mathsf{s}}^{5},\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}}, \Sigma)\,.\] Using (4.11), (5.36j), (5.37), (7.4), (7.5), (8.22a), (B.2a), (B.2d), and (B.16) we find that \[\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\Sigma\big{\|}_{L_{ x,\mathsf{s}}^{2}} \lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}^{5}V\big{\|}_{L_{x, \mathsf{s}}^{2}}+\varepsilon^{2}\big{\|}\widetilde{\mathsf{D}}^{5} \widetilde{\mathsf{D}}_{2}\Sigma\big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon \big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1}\Sigma\big{\|}_{L_{x, \mathsf{s}}^{2}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}^{5}( \hat{\mathbf{Z}}_{\mathcal{N}},\hat{\mathbf{A}}_{\mathcal{T}})\big{\|}_{L_{x, \mathsf{s}}^{2}}\] \[\lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5} \widetilde{\mathsf{D}}_{1}V\big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon^{2}\big{\|} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\Sigma\big{\|}_{L_{x, \mathsf{s}}^{2}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}\widetilde{ \mathsf{D}}_{1}\Sigma\big{\|}_{L_{x,\mathsf{s}}^{2}}+\|\widetilde{\mathsf{D}}_{ \mathsf{s}}^{4}V\|_{L_{x,\mathsf{s}}^{2}}\big{\|}\widetilde{\mathsf{D}}^{4} \widetilde{\mathsf{D}}_{1}\Sigma\big{\|}_{L_{x,\mathsf{s}}^{2}}+\varepsilon^{2} \langle\mathsf{B}_{\mathsf{6}}\rangle+\varepsilon\] \[\lesssim\varepsilon+\varepsilon\big \[\lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}\Sigma\big{\|}_ 
{L^{2}_{x,s}}+\big{\|}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h \big{\|}_{L^{2}_{x,s}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}J_{g}\big{\|} _{L^{2}_{x,s}}+\varepsilon^{2}\] \[\lesssim\varepsilon^{2}\big{\langle}\mathsf{B}_{6}\big{\rangle}+ \big{\|}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{ x,s}}+\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}J_{g}\big{\|}_{L^{2}_{x,s}} \tag{7.8}\] Lastly, if \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}\), we transform (3.23) into \((x,\mathsf{s})\) coordinates, apply \(\widetilde{\mathsf{D}}^{5}_{\mathsf{s}}\) and find that \[\big{\|}\widetilde{\mathsf{D}}^{6}_{s}V\big{\|}_{L^{2}_{x,s}} \lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}_{\mathsf{s}}(V \widetilde{\mathsf{D}}_{2}V)\big{\|}_{L^{2}_{x,s}}+\varepsilon\big{\|} \widetilde{\mathsf{D}}^{5}_{\mathsf{s}}\big{(}\Sigma g^{-\frac{1}{2}}\Big{(} \tfrac{2+\alpha}{2}\hat{\boldsymbol{\mathsf{W}}}_{\tau}-\tfrac{\alpha}{2}\hat {\boldsymbol{\mathsf{Z}}}_{\tau}-\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}} -\widetilde{\mathsf{D}}_{2}h\big{(}\alpha\hat{\boldsymbol{\mathsf{A}}}_{\tau} -(1-\alpha)\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big{)}\Big{)}\big{)} \big{\|}_{L^{2}_{x,s}}.\] By appealing to (4.11), (5.37), (7.5), (7.7), (8.21c), (8.22), (B.2a), and (B.13), we obtain \[\big{\|}\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}V\big{\|}_{L^{2}_{ x,s}} \lesssim\varepsilon^{2}\big{\|}\widetilde{\mathsf{D}}^{5}_{\mathsf{s}}(\widetilde{ \mathsf{D}}_{1}V,\widetilde{\mathsf{D}}_{2}V)\big{\|}_{L^{2}_{x,s}}+ \varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}_{\mathsf{s}}\widetilde{ \mathsf{D}}_{1}\Sigma\big{\|}_{L^{2}_{x,s}}+\varepsilon^{2}\big{\|}\widetilde{ \mathsf{D}}^{5}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{x,s}} +\varepsilon^{2}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad \[=\tfrac{1+\alpha}{2}\int\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{y} 
\dot{\mathbf{W}}_{{}_{\mathcal{N}}})\widetilde{\mathsf{D}}^{6}J_{y}+\tfrac{1- \alpha}{2}\int\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{y}\dot{ \mathbf{Z}}_{{}_{\mathcal{N}}})\widetilde{\mathsf{D}}^{6}J_{y}+\int\mathcal{J}^ {\frac{1}{2}}\mathsf{R}_{{}_{y}}\ \widetilde{\mathsf{D}}^{6}J_{y}\,.\] Then, using the bootstraps (5.37), and the bounds (6.38) we obtain that \[\tfrac{d}{2\mathrm{d}s}\int\mathsf{Q}\mathcal{J}^{\frac{1}{2}}| \widetilde{\mathsf{D}}^{6}J_{y}|^{2}+\tfrac{1+\alpha}{10\varepsilon}(1-\dot{ \mathcal{C}}\varepsilon)\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6} J_{y}\|_{L^{2}}^{2}-\tfrac{2\cdot 250^{2}}{\varepsilon}\int\mathsf{Q}\mathcal{J}^{ \frac{1}{2}}|\widetilde{\mathsf{D}}^{6}J_{y}|^{2}\] \[\leq\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{y} \|_{L^{2}}\Big{(}\tfrac{1+\alpha}{2}\|\mathcal{J}^{\frac{3}{4}}\widetilde{ \mathsf{D}}^{6}(J_{y}\dot{\mathbf{W}}_{{}_{\mathcal{N}}})\|_{L^{2}}+\tfrac{|1- \alpha|}{2}\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{y}\dot{ \mathbf{Z}}_{{}_{\mathcal{N}}})\|_{L^{2}}+\|\mathcal{J}^{\frac{3}{4}}\mathsf{ R}_{{}_{y}}\|_{L^{2}}\Big{)}\] \[\leq\tfrac{1+\alpha}{20\varepsilon}\|\mathcal{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{6}J_{y}\|_{L^{2}}^{2}+4(1+\alpha)\varepsilon\| \mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{y}\dot{\mathbf{W}}_{{} _{\mathcal{N}}},J_{y}\dot{\mathbf{Z}}_{{}_{\mathcal{N}}})\|_{L^{2}}^{2}+ \tfrac{16\varepsilon}{1+\alpha}\|\mathcal{J}^{\frac{3}{4}}\mathsf{R}_{{}_{y}}\| _{L^{2}}^{2}\,. \tag{7.14}\] In order to bound the commutator term appearing on the right side of the above estimate, we appeal to the bootstraps (5.37), the bound \(\mathcal{J}\lesssim 1\), and to Lemmas 7.3, B.1 and B.5, to conclude that \[\big{\|}\mathcal{J}^{\frac{3}{4}}\mathsf{R}_{{}_{y}}\big{\|}_{L^{ 2}_{x,s}} \leq\|\widetilde{\mathsf{D}}^{6}V\|_{L^{2}_{x,s}}\|\widetilde{ \mathsf{D}}_{2}J_{y}\|_{L^{\infty}_{x,s}}+\|(\widetilde{\mathsf{D}}^{6},V, \widetilde{\mathsf{D}}_{2}J_{y})\|_{L^{2}_{x,s}}\] \[\lesssim\big{(}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle+\big{\|} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{x,s}}^{2} +\varepsilon\big{\|}\widetilde{\mathsf{D}}^{5}J_{y}\big{\|}_{L^{2}_{x,s}}\big{)} \big{(}1+\|\widetilde{\mathsf{D}}^{2}J_{y}\|_{L^{\infty}_{x,s}}\big{)}+ \varepsilon\|\widetilde{\mathsf{D}}^{6}J_{y}\|_{L^{2}_{x,s}}+\varepsilon^{2} \big{(}1+\|\widetilde{\mathsf{D}}^{2}J_{y}\|_{L^{\infty}_{x,s}}\big{)}\] \[\lesssim\varepsilon^{2}\langle\mathsf{B}_{3}\rangle\big{(}\langle \mathsf{B}_{6}\rangle+\langle\mathsf{B}_{3}\rangle+\langle\mathsf{B}_{3} \rangle\big{)}\,. \tag{7.15}\] Using Gronwall's inequality in time for \(\mathsf{s}\in[0,\varepsilon]\) in the bound (7.14), inserting the commutator bound (7.15) with the assumption that \(\varepsilon^{\frac{1}{2}}(\langle\mathsf{B}_{3}\rangle+\langle\mathsf{B}_{3} \rangle+\langle\mathsf{B}_{6}\rangle)\leq 1\), appealing to the upper and lower bound on \(\mathsf{Q}\) which arises from (6.38), we deduce \[\sup_{\mathsf{s}\in[0,\varepsilon]}\|\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}J_{y}(\cdot,\mathsf{s})\|_{L^{2}}^{2}+\tfrac{1}{\varepsilon}\!\! \int_{0}^{\varepsilon}\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_ {y}(\cdot,\mathsf{s})\|_{L^{2}}^{2}\mathrm{d}\mathsf{s}\lesssim\|\widetilde{ \mathsf{D}}^{6}J_{y}(\cdot,0)\|_{L^{2}_{x}}^{2}+\varepsilon\langle\mathsf{B}_{6} \rangle^{2}\,, \tag{7.16}\] where the implicit constant depends only on \(\alpha\). 
The bound (4.11), which gives \(\|\widetilde{\mathsf{D}}^{6}J_{y}(\cdot,0)\|_{L^{2}_{x}}^{2}\lesssim\varepsilon\), and the bootstraps (5.37)-(5.37s) conclude the proof of (7.10). **Remark 7.5**.: _We shall frequently make use of the fact that (B.2d), (4.11), (7.10), and the bound \(1\leq\mathcal{J}^{-\frac{1}{4}}\) imply_ \[\big{\|}\widetilde{\mathsf{D}}^{3}J_{y}\big{\|}_{L^{\infty}_{x,s}}\lesssim \langle\mathsf{B}_{6}\rangle\,. \tag{7.17}\] **Remark 7.6**.: _In the proof of Lemma 7.4, we have tested the equation (7.11) with \(\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}J_{y}\). The presence of the \(\mathcal{J}^{\frac{1}{2}}\) is what allowed us to obtain the damping term on the left side of (7.14), by commuting \(\mathsf{Q}\mathsf{Q}_{\mathsf{s}}+V\partial_{2}\) past \(\mathcal{J}^{\frac{1}{2}}\). In addition, this factor was necessary in order to bound \(\widetilde{\mathsf{D}}^{6}(J_{y}\dot{\mathbf{W}}_{{}_{\mathcal{N}}},J_{y}\dot{ \mathbf{Z}}_{{}_{\mathcal{N}}})\). Other than this, the \(\mathcal{J}^{\frac{1}{2}}\) factor did not play any role in the bounds; indeed, already in the first line of (7.15) we have discarded the extra factor of \(\mathcal{J}^{\frac{3}{4}}\). With this in mind, we may return to (5.30), act on it with \(\widetilde{\mathsf{D}}^{5}\), and this time test it with \(\widetilde{\mathsf{D}}^{5}J_{y}\). By repeating the same bounds as in the proof of Lemma 7.4, in analogy to (7.14) and (7.15), we may thus establish the bound_ \[\tfrac{d}{2\mathrm{d}s}\int\mathsf{Q}|\widetilde{\mathsf{D}}^{5}J_{y}|^{2}\leq \dot{C}\varepsilon^{-1}\int\mathsf{Q}|\widetilde{\mathsf{D}}^{5}J_{y}|^{2}+\dot{C} \varepsilon\|\widetilde{\mathsf{D}}^{5}(J_{y}\dot{\mathbf{W}}_{{}_{\mathcal{N}}},J_{y} \dot{\mathbf{Z}}_{{}_{\mathcal{N}}})\|_{L^{2}}^{2}+\dot{C}\varepsilon\big{\|} \mathsf{R}_{{}_{y}}\big{\|}_{L^{2}}^{2}\,,\] _and therefore_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}e^{-\frac{C_{\mathsf{s}}}{ \varepsilon}}\|\widetilde{\mathsf{D}}^{5}J_{y}(\cdot,\mathsf{s})\|_{L^{2}}^{2} \lesssim\varepsilon\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{7.18}\] _This concludes the proof of (7.1b)._ **Lemma 7.7**.: _Under the same assumptions as Proposition 7.1, we have that_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}\|\mathcal{J}^{\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h(\cdot,\mathsf{s})\|_{L^{2}}^{2}+ \tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\|\mathcal{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\|_{L^{2}}^{2}\lesssim \mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{7.19}\] _where the implicit constant depends only on \(\alpha\) and \(\mathsf{C}_{\mathsf{data}}\)._ Proof of Lemma 7.7.: The proof is similar to that of Lemma 7.4. 
We let \(\widetilde{\mathsf{D}}^{6}\) act on (5.32) and write this as \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{ We compute the \(L^{2}\)-inner product of (7.20) with \(\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\), and use (7.13) with \(r=\frac{1}{2}\) and \(f=\frac{1}{2}|\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h|^{2}\) to obtain \[\tfrac{d}{2d\mathsf{s}}\int\mathsf{Q}\mathcal{J}^{\frac{1}{2}}| \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h|^{2}+\tfrac{1}{4}\int \mathcal{J}^{-\frac{1}{2}}\tfrac{\mathsf{Q}}{\varepsilon}|\widetilde{\mathsf{ D}}^{6}\widetilde{\mathsf{D}}_{2}h|^{2}\] \[\qquad=\int\mathcal{J}^{\frac{1}{2}}\big{(}\tfrac{1}{2}(\mathsf{Q }+\partial_{2}V)+((1+\alpha)\mathbf{\hat{W}}_{\tau}+(1-\alpha)\mathbf{\hat{Z}}_ {\tau})\widetilde{\mathsf{D}}_{2}h\big{)}|\widetilde{\mathsf{D}}^{6}\widetilde{ \mathsf{D}}_{2}h|^{2}\] \[\qquad\qquad+\int\mathcal{J}^{\frac{1}{2}}g\big{(}\tfrac{1+ \alpha}{2}\widetilde{\mathsf{D}}^{6}\mathbf{\hat{W}}_{\tau}+\tfrac{1-\alpha}{ 2}\widetilde{\mathsf{D}}^{6}\mathbf{\hat{Z}}_{\tau}\big{)}\widetilde{\mathsf{ D}}^{6}\widetilde{\mathsf{D}}_{2}h+\sum_{k=1}^{3}\int\mathcal{J}^{\frac{1}{2}} \mathsf{R}_{\mathsf{o}_{2}h}^{k}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{ D}}_{2}h\,.\] Thanks to (5.37), (6.38), and the Cauchy-Young inequality, it follows that \[\tfrac{d}{2d\mathsf{s}}\int\mathsf{Q}\mathcal{J}^{\frac{1}{2}}| \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h|^{2}+\tfrac{1+\alpha}{1 0\varepsilon}(1-\dot{C}\varepsilon)\big{\|}\mathcal{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}}^{2}- \tfrac{2\cdot 250^{2}}{\varepsilon}\int\mathsf{Q}\mathcal{J}^{\frac{1}{2}}| \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h|^{2}\] \[\qquad\leq\tfrac{1+\alpha}{2}(1+\dot{C}\varepsilon^{2})\big{\|} \mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2 }h\big{\|}_{L^{2}}\big{\|}\mathcal{J}^{\frac{1}{4}}(\widetilde{\mathsf{D}}^{6 }\mathbf{\hat{W}}_{\tau},\widetilde{\mathsf{D}}^{6}\mathbf{\hat{Z}}_{\tau}) \big{\|}_{L^{2}}^{2}+\big{\|}\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}} ^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}}\sum_{k=1}^{3}\big{\|}\mathcal{J }^{\frac{3}{4}}\mathsf{R}_{\mathsf{o}_{2}h}^{k}\big{\|}_{L^{2}}\] \[\qquad\leq\tfrac{1+\alpha}{20\varepsilon}\big{\|}\mathcal{J}^{- \frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{ 2}}^{2}+4(1+\alpha)\varepsilon\big{\|}\mathcal{J}^{\frac{1}{4}}(\widetilde{ \mathsf{D}}^{6}\mathbf{\hat{W}}_{\tau},\widetilde{\mathsf{D}}^{6}\mathbf{\hat {Z}}_{\tau})\big{\|}_{L^{2}}^{2}+\tfrac{15\varepsilon}{1+\alpha}{\sum_{k=1}^{3 }}\big{\|}\mathsf{R}_{\mathsf{o}_{2}h}^{k}\big{\|}_{L^{2}}^{2}\,. \tag{7.21}\] In anticipation of integrating (7.21) in time (via Gronwall), we bound the three commutators appearing on the right side as follows. 
First, using (4.11), (5.37), (7.3b), (B.2d), and Lemma B.3, Lemma B.5, we have that \[\big{\|}\mathsf{R}_{\mathsf{o}_{2}h}^{1}\big{\|}_{L^{2}_{s,s}}\leq\big{\|} \widetilde{\mathsf{D}}_{2}^{2}h\big{\|}_{L^{2}_{s,s}}\big{\|}\widetilde{ \mathsf{D}}^{6}V\big{\|}_{L^{2}_{s,s}}+\big{\|}\big{(}\widetilde{\mathsf{D}}^{6 },V,\widetilde{\mathsf{D}}_{2}^{2}h\big{)}\big{\|}_{L^{2}_{s,s}}\lesssim \varepsilon^{2}\langle\mathsf{B}_{\mathsf{h}}\rangle\langle\mathsf{B}_{6}\rangle\,.\] Similarly, since \(\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{\mathsf{h}}\rangle\leq 1\), we have and also \[\big{\|}\mathsf{R}_{\mathsf{o}_{2}h}^{3}\big{\|}_{L^{2}_{s,s}} \leq\big{(}(1+\alpha)\big{\|}\mathbf{\hat{W}}_{\tau}\big{\|}_{L^ {\infty}_{s,s}}+(1-\alpha)\big{\|}\mathbf{\hat{Z}}_{\tau}\big{\|}_{L^{\infty}_{s,s}}\big{)}\big{(}\big{\|}(\widetilde{\mathsf{D}}^{5},\widetilde{\mathsf{D}}_{2 }h,\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h)\big{\|}_{L^{2}_{s,s}}+ \big{\|}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{\infty}_{s,s}}\big{\|}\widetilde{ \mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{s,s}}\big{)}\] \[\lesssim\varepsilon^{3}\langle\mathsf{B}_{\mathsf{h}}\rangle\lesssim \varepsilon^{2}\,.\] Inserting the bounds obtained in the previous three displays into the time-integrated form of (7.21), and appealing to the upper and lower bound on \(\mathsf{Q}\) which arises from (6.38), we obtain \[\sup_{\mathsf{s}\in[0,\varepsilon]}\|\mathcal{J}^{\frac{1}{4}} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h(\cdot,\mathsf{s})\|_{L^{2} }^{2}+\tfrac{1}{\varepsilon}\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6} \widetilde{\mathsf{D}}_{2}h\|_{L^{2}_{s,s}}^{2}\lesssim\|\mathcal{J}^{\frac{1}{4}} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h(\cdot,0)\|_{L^{2}}^{2}+ \mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{\mathsf{6}}\rangle^{2}+ \varepsilon^{5}\langle\mathsf{B}_{\mathsf{h}}\rangle^{2}\langle\mathsf{B}_{ \mathsf{6}}\rangle^{2}\,, \tag{7.22}\] where the implicit constant depends only on \(\alpha\). The fact that \(\varepsilon\langle\mathsf{B}_{\mathsf{h}}\rangle^{2}+\varepsilon\langle\mathsf{B}_{ \mathsf{6}}\rangle^{2}\leq 1\), and the bound on the initial data which arises from (4.11), concludes the proof of (7.19). **Remark 7.8**.: _We shall also make use of the fact that from (B.2d), combined with (4.11), the bound \(1\leq\mathcal{J}^{-\frac{1}{4}}\), and (7.19), we have_ \[\big{\|}\widetilde{\mathsf{D}}^{3}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{ \infty}_{s,s}}\lesssim\mathsf{K}\varepsilon\langle\mathsf{B}_{\mathsf{6}}\rangle\,. \tag{7.23}\] **Remark 7.9**.: _In the proof of Lemma 7.7, we have tested the equation (7.20) with \(\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\). The presence of the \(\mathcal{J}^{\frac{1}{2}}\) is what allowed us to obtain the damping term on the left side of (7.19), by commuting \(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\) past \(\mathcal{J}^{\frac{1}{2}}\). In addition, this factor was necessary in order to bound \(\widetilde{\mathsf{D}}^{6}(\mathbf{\hat{W}}_{\tau},\mathbf{\hat{Z}}_{\tau})\). Other than this, the \(\mathcal{J}^{\frac{1}{2}}\) factor did not play any role in the bounds; indeed, already in (7.21) we have discarded the extra factor of \(\mathcal{J}^{\frac{1}{4}}\) from the commutator terms on the right side. Keeping this in mind, we return to (5.32), act on it with \(\widetilde{\mathsf{D}}^{5}\), and test the resulting equation with \(\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}h\). 
By repeating the same arguments as in the proof of Lemma 7.7, in analogy to (7.21)-(7.22) we may establish_ \[\tfrac{d}{2d\mathsf{s}}\int\mathsf{Q}|\widetilde{\mathsf{D}}^{5} \widetilde{\mathsf{D}}_{2}h|^{2}\leq\hat{C}\varepsilon^{-1}\int\mathsf{Q **Remark 7.10** (**Estimates for \(g\))**.: _The \(\widetilde{\mathsf{D}}_{2}h\) estimates established in Lemma 7.7 have as a direct consequence estimates for \(g=1+(\widetilde{\mathsf{D}}_{2}h)^{2}\). More precisely, the identity \(\widetilde{\mathsf{D}}^{6}g=2\widetilde{\mathsf{D}}^{5}(\widetilde{\mathsf{D}} _{2}h\widetilde{\mathsf{D}}_{2}^{2}h)=2\widetilde{\mathsf{D}}_{2}h\widetilde{ \mathsf{D}}_{2}^{7}h+2[\widetilde{\mathsf{D}}^{5},\widetilde{\mathsf{D}}_{2}h ]\widetilde{\mathsf{D}}_{2}^{2}h\) combined with the bounds (5.37m), (5.37n), (7.19), (7.23), (7.24), (B.16), (B.17), and (B.21) (with \(a=a^{\prime}=b=0\)) imply that_ \[\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\|\mathcal{J}^{ \frac{1}{4}}\widetilde{\mathsf{D}}^{6}g(\cdot,\mathsf{s})\|_{L^{2}}+\|\mathcal{ J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}g\|_{L^{2}_{\mathsf{s},\mathsf{s}}} \lesssim\mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^{2}\,,\] _which proves (7.1f). Since \(1\leq g\leq 1+\mathring{C}\varepsilon^{2}\), by appealing to the chain rule it is clear that any rational power of \(g\) appearing in the proof (e.g. \(g^{\frac{3}{2}},g^{\frac{1}{2}},g^{-\frac{1}{2}},g^{-\frac{3}{2}}\)) satisfies the same bound as \(g\)._ **Remark 7.11** (**Estimates for \(\widetilde{\mathsf{D}}_{1}h\))**.: _We note that the bound (7.1d) is a direct consequence of the identity \(\widetilde{\mathsf{D}}_{1}h=\varepsilon g^{\frac{1}{2}}J_{g}\), of the bounds (5.37k)-(5.37n), (7.1a)-(7.1f) and of the product Moser-type Lemmas B.4 and B.6._ **Remark 7.12**.: _We note that with the bound \(1\lesssim\mathcal{J}^{-\frac{1}{4}}\), inserting estimates (7.10) and (7.19) into (7.3), gives the proof of (7.1j) and (7.1l). Indeed, (7.3a) becomes_ \[\big{\|}\widetilde{\mathsf{D}}^{6}\Sigma\big{\|}_{L^{2}_{\mathsf{s},\mathsf{s }}}\lesssim\varepsilon\langle\widetilde{\mathsf{D}}_{5}\rangle+\varepsilon \big{(}\varepsilon+\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle+ \varepsilon^{2}\langle\mathsf{B}_{6}\rangle\big{)}\big{(}\langle\mathsf{B}_{6 }\rangle+\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle+\varepsilon^ {2}\langle\mathsf{B}_{6}\rangle\big{)}\lesssim\varepsilon\langle\widetilde{ \mathsf{D}}_{5}\rangle+\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\lesssim \varepsilon\langle\mathsf{B}_{6}\rangle\,,\] _since \(\varepsilon\langle\mathsf{B}_{6}\rangle\leq 1\), and recalling Remark 6.10. 
Similarly, (7.3b) becomes_ \[\big{\|}\widetilde{\mathsf{D}}^{6}V\big{\|}_{L^{2}_{\mathsf{s},\mathsf{s}}} \lesssim\varepsilon^{2}\langle\mathsf{B}_{6}\rangle+\mathsf{K}\varepsilon^{2 }\langle\mathsf{B}_{6}\rangle+\varepsilon^{2}\langle\mathsf{B}_{6}\rangle \lesssim\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,.\] **Corollary 7.13**.: _Under the assumptions of Proposition 7.1, we have that for \(3\leq|\gamma|\leq 6\),_ \[\|\mathcal{J}^{-\frac{1}{4}}(\widetilde{\mathsf{D}}^{|\gamma|} \mathcal{N}+g^{-1}\tau\widetilde{\mathsf{D}}^{|\gamma|}\widetilde{\mathsf{D}}_{2 }h)\|_{L^{2}_{\mathsf{s},\mathsf{s}}}+\|\mathcal{J}^{-\frac{1}{4}}\big{(} \widetilde{\mathsf{D}}^{|\gamma|}\mathcal{T}-g^{-1}\kappa\widetilde{\mathsf{D} }^{|\gamma|}\widetilde{\mathsf{D}}_{2}h\big{)}\|_{L^{2}_{\mathsf{s},\mathsf{s}}} \lesssim\mathsf{K}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle \tag{7.25a}\] \[\|\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{|\gamma|} \mathcal{N}\|_{L^{2}_{\mathsf{s},\mathsf{s}}}+\|\mathcal{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{|\gamma|}\mathcal{T}\|_{L^{2}_{\mathsf{s},\mathsf{s}}} \lesssim\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,. \tag{7.25b}\] Proof of Corollary 7.13.: The bounds for \(\mathcal{N}\) and \(\tau\) are symmetric, so we only prove the estimates for \(\mathcal{N}\). From (3.11), we have that \(\widetilde{\mathsf{D}}_{\mathcal{N}}=-g^{-1}\widetilde{\mathsf{D}}\widetilde{ \mathsf{D}}_{2}h\,\tau\). Therefore, we have \[\widetilde{\mathsf{D}}^{|\gamma|}\mathcal{N}+g^{-1}\tau\widetilde{\mathsf{D}}^{| \gamma|}\widetilde{\mathsf{D}}_{2}h=-[\widetilde{\mathsf{D}}^{|\gamma|-1},g^{- 1}\tau]\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h. \tag{7.26}\] In order to bound the commutator term on the right side of (7.26), we appeal to inequality (6.72) with \(r^{\prime}=-\frac{1}{4}>-\frac{1}{2}\), \(r=0<\frac{1}{4}\), and \(F=[\widetilde{\mathsf{D}}^{|\gamma|-1},g^{-1}\tau]\widetilde{\mathsf{D}} \widetilde{\mathsf{D}}_{2}h\), together with the chain and product rules, and the \(\widetilde{\mathsf{D}}^{k}\) bounds for \(\widetilde{\mathsf{D}}_{2}h(\cdot,0)\) contained in (4.11), to deduce \[\big{\|}\mathcal{J}^{-\frac{1}{4}}\big{[}\widetilde{\mathsf{D}}^{| \gamma|-1},g^{-1}\tau\big{]}\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h \big{\|}_{L^{2}_{\mathsf{s},\mathsf{s}}} \lesssim\varepsilon^{\frac{1}{2}}\big{\|}[\widetilde{\mathsf{D}}^{| \gamma|-1},g^{-1}\tau]\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h(\cdot,0 )\big{\|}_{L^{2}_{\mathsf{s}}}+\varepsilon\big{\|}\partial_{\mathsf{s}}[ \widetilde{\mathsf{D}}^{|\gamma|-1},g^{-1}\tau]\widetilde{\mathsf{D}} \widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{\mathsf{s},\mathsf{s}}}\] \[\lesssim\varepsilon^{3}+\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}[ \widetilde{\mathsf{D}}^{|\gamma|-1},g^{-1}\tau]\widetilde{\mathsf{D}}\widetilde{ \mathsf{D}}_{2}h\big{\|}_{L^{2}_{\mathsf{s},\mathsf{s}}}. 
\tag{7.27}\] Using the product rule and (3.11), we may further rewrite \[\widetilde{\mathsf{D}}_{\mathsf{s}}[\widetilde{\mathsf{D}}^{|\gamma|-1},g^{-1} \tau]\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h=[\widetilde{\mathsf{D}}_{ \mathsf{s}}\widetilde{\mathsf{D}}^{|\gamma|-1},g^{-1}\tau]\widetilde{\mathsf{D}} \widetilde{\mathsf{D}}_{2}h+\big{(}2\widetilde{\mathsf{D}}_{2}h\tau-\mathcal{N} \big{)}g^{-2}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}h \widetilde{\mathsf{D}}^{|\gamma|}\widetilde{\mathsf{D}}_{2}h\] Due to (5.15), (5.37m), (5.37n), and (7.1c), the second term in the above estimate may be bounded as \[\big{\|}\big{(}2\widetilde{\mathsf{D}}_{2}h\tau-\mathcal{N}\big{)}g^{-2} \widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}h\widetilde{ \mathsf{D}}^{|\gamma|}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{\mathsf{s}, \mathsf{s}}}\lesssim\varepsilon\big{\|}\widetilde{\mathsf{D}}^{|\gamma|} \widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{\mathsf{s},\mathsf{s}}}\lesssim \varepsilon^{3}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.\] Similarly, by by appealing to the chain rule, the bootstrap inequalities (5.37), the bounds (7.19) and (7.23), and Lemma B.5, that \[\|[\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{|\gamma|-1},g^{- 1}\tau]\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h\|_{L^{2}_{\mathsf{s}, \mathsf{s}}}\leq\big{\|}\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{ \mathsf{s},\mathsf{s}}_{\mathsf{s}}}\big Proof of Lemma 7.14.: In view of Remark B.7, the proof is very similar to that of Lemma 7.3, and so we only present here the differences. As such, by repeating the argument used to establish (7.4), we deduce from (B.22) that Similarly to (7.5), with (B.22) we have \[\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\widetilde{ \mathsf{D}}_{1}\Sigma\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}\lesssim\varepsilon \big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{ \gamma},\hat{\tilde{\mathbf{Z}}}_{\gamma})\big{\|}_{L_{\infty}^{\infty}L_{x}^ {2}}+\varepsilon^{2}\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}J_{g} \big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}\] \[\qquad\qquad\qquad\qquad\qquad+\varepsilon\big{\|}J_{g}^{\frac{1} {2}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\mathbb{\|}_{L_{ \infty}^{\infty}L_{x}^{2}}+\varepsilon^{2}\big{\|}J_{g}^{\frac{1}{2}}(\hat{ \mathbf{W}}_{\gamma},\hat{\tilde{\mathbf{Z}}}_{\gamma})\big{\|}_{L_{\infty}^ {\infty}L_{x}^{2}}+\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle \lesssim\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{7.29}\] while in analogy to (7.6) from (B.22) and (B.2c) we have \[\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}_{s}\Sigma \big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}\lesssim\varepsilon\big{\|}\widetilde {\mathsf{D}}^{5}_{s}\mathbf{V}\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}+ \varepsilon\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}_{s}\widetilde {\mathsf{D}}_{2}\Sigma\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}+\varepsilon \big{\|}\widetilde{\mathsf{D}}^{5}_{s}\Sigma\big{\|}_{L_{\infty}^{\infty}L_{x }^{2}}\] \[\qquad\qquad\qquad\qquad+\varepsilon\big{\|}J_{g}^{\frac{1}{2}} \widetilde{\mathsf{D}}^{5}_{s}\tilde{\mathbf{Z}}_{\gamma}\big{\|}_{L_{\infty} ^{\infty}L_{x}^{2}}+\varepsilon\big{\|}J_{g}^{\frac{1}{2}}\widetilde{ \mathsf{D}}^{5}_{s}\tilde{\mathbf{A}}_{\gamma}\big{\|}_{L_{\infty}^{\infty}L _{x}^{2}}+\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\] 
\[\qquad\qquad\qquad\qquad\lesssim\varepsilon^{\frac{1}{2}}\big{\|} \widetilde{\mathsf{D}}^{6}_{s}V\big{\|}_{L_{\infty}^{2},+}\varepsilon^{2} \big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}_{s}\widetilde{\mathsf{ D}}_{2}\Sigma\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}+\varepsilon^{\frac{1}{2}} \big{\|}\widetilde{\mathsf{D}}^{5}_{s}\Sigma\big{\|}_{L_{\infty}^{\infty}L _{x}^{2}}\] \[\qquad\qquad\qquad\qquad\qquad+\varepsilon\big{\|}J_{g}^{\frac{1} {2}}\widetilde{\mathsf{D}}^{5}_{s}\tilde{\mathbf{Z}}_{\gamma}\big{\|}_{L_{ \infty}^{\infty}L_{x}^{2}}+\varepsilon\big{\|}J_{g}^{\frac{1}{2}}\widetilde{ \mathsf{D}}^{5}_{s}\tilde{\mathbf{A}}_{\gamma}\big{\|}_{L_{\infty}^{\infty}L_{ x}^{2}}+\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\lesssim\varepsilon^{\frac{1}{2}} \langle\mathsf{B}_{6}\rangle\,. \tag{7.30}\] In the last inequality we have additionally appealed to (7.1j), (7.11), the improved \(\tilde{\mathbf{Z}}_{\gamma}\) estimate in (8.22a), and to the previously established bound for \(\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}_{s}\widetilde{\mathsf{D} }_{2}\Sigma\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}\). The bound for \(\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}V\big{\|}_{L_{\infty}^{ \infty}L_{x}^{2}}\) is obtained similarly to (7.7) when \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\), in analogy to (7.8) when \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1}\), and in parallel to (7.9) when \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{6}_{s}\). To avoid redundancy we omit these details. The bound for \(\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\mathcal{N}\big{\|}_ {L_{\infty}^{\infty}L_{x}^{2}}\) follows from the identity (7.26), the previously established bounds (7.1c) and (7.1e), the bound (7.23), and the commutator bound (B.21) with \(m=5\) and \(a=b=0\): \[\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6} \mathcal{N}\big{\|}_{L_{\infty}^{\infty}L_{x}^{2}}\leq\] \[\leq\mathring{C}\varepsilon^{\frac{3}{2}}\mathsf{K}\langle\mathsf{ B}_{6}\rangle\,. \tag{7.31}\] This concludes the proof of the lemma. ## 8. Vorticity energy estimates and the resulting improved estimates ### Bounds for the vorticity The goal of this subsection is to establish the following bound. **Proposition 8.1** (\(H^{6}\) estimates for the vorticity).: _Let \(\Omega\) be the ALE vorticity, defined in (3.37), and \(\Omega\) be the ALE specific vorticity given by (3.38). Assume that the bootstrap assumptions (5.37) hold, and that \(\varepsilon\) is taken to be sufficiently small to ensure \(\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{\mathsf{j}}\rangle+\varepsilon^{ \frac{1}{2}}\langle\mathsf{B}_{\mathsf{h}}\rangle+\varepsilon^{\frac{1}{2}} \langle\mathsf{B}_{6}\rangle\leq 1\). Then, assuming \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we have the bound_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}J_{g}^{\frac{1}{2}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_{L_{x}^{\infty}L_{x}^ {2}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\big{\|}\widetilde{ \mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{ s}\lesssim\varepsilon\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.1}\] _where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). 
Additionally, we have that_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}J_{g}^{\frac{1}{2}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}+ \tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\big{\|}\widetilde{\mathsf{D}}^{6} \Omega(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}\lesssim \varepsilon\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.2}\] _where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Moreover, we have that_ \[\big{\|}\Omega\big{\|}_{L_{x,\kappa}^{\infty}}\leq 2^{3+\frac{2}{\alpha}}e^{18} \mathsf{C}_{\mathsf{data}}\,, \tag{8.3a}\] \[\big{\|}\widetilde{\mathsf{D}}\Omega\big{\|}_{L_{x,\kappa}^{ \infty}}\leq 2(4e^{18})^{\frac{20\cdot 23(1+\alpha)}{\alpha}}\mathsf{C}_{\mathsf{data}}\,. \tag{8.3b}\] Proof of Proposition 8.1.: Recall from (3.38) that the vorticity is obtained from the specific vorticity by \(\Omega=(\alpha\Sigma)^{\frac{1}{\alpha}}\Omega\). In light of the already established bound (7.1j) and the product and chain rules, the bound (8.2) follows from (8.1). As such, we shall only establish the later. Applying the operator \(\widetilde{\mathsf{D}}^{6}\) to (5.35), we have that \[\tfrac{J_{g}}{\widetilde{\mathsf{D}}^{6}}(\mathsf{Q}_{\mathsf{s}}+V \partial_{2})\widetilde{\mathsf{D}}^{6}\Omega-\alpha\partial_{1}\widetilde{ \mathsf{D} where the remainder term \(\mathsf{R}_{\Omega}\) is given by appealing to (5.26) and (5.27) as \[\mathsf{R}_{\Omega} =-\big{[}\widetilde{\mathsf{D}}^{6},\tfrac{J_{g}}{\Sigma^{2}} \big{]}\big{(}\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}+V \widetilde{\mathsf{D}}_{2}\big{)}\Omega-\tfrac{J_{g}}{\Sigma}\big{[}\widetilde{ \mathsf{D}}^{6},V\big{]}\widetilde{\mathsf{D}}_{2}\Omega-\alpha\big{[} \widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \big{]}\widetilde{\mathsf{D}}_{2}\Omega\] \[=-\tfrac{1}{\varepsilon}\big{[}\widetilde{\mathsf{D}}^{6},\tfrac{J }{\Sigma^{2}}\big{]}\widetilde{\mathsf{D}}_{\mathsf{s}}\Omega-\big{[} \widetilde{\mathsf{D}}^{6},\tfrac{J_{g}}{\Sigma^{2}}\big{]}(V\widetilde{ \mathsf{D}}_{2}\Omega)-\tfrac{J_{g}}{\Sigma}\big{[}\widetilde{\mathsf{D}}^{6}, V\big{]}\widetilde{\mathsf{D}}_{2}\Omega-\alpha\big{[}\widetilde{\mathsf{D}}^{6},J_{g}g^{- \frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big{]}\widetilde{\mathsf{D}}_{2}\Omega\] \[=:\mathsf{R}_{\Omega}^{(1)}+\mathsf{R}_{\Omega}^{(2)}+\mathsf{R}_ {\Omega}^{(3)}+\mathsf{R}_{\Omega}^{(4)}\,. 
\tag{8.5}\] For \(\beta>0\) to be chosen below, we compute the spacetime \(L^{2}\) inner-product of (8.4) with \(\Sigma^{-2\beta+1}\widetilde{\mathsf{D}}^{6}\Omega\), use the identities (5.28b), (5.28c), and (5.28d), to obtain that \[\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}- \big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\Omega(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}-\int_{0}^{\mathsf{s}} \!\!\!\int\!\!\!\int\!\widetilde{\mathsf{D}}^{6}\Omega\big{|}^{2}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\tfrac{J_{g}}{\Sigma^{2\beta}}-\alpha(2 \beta-1)\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\left|\widetilde{\mathsf{D}}^{6} \Omega\right|^{2}\tfrac{\Sigma_{1,1}}{\Sigma^{2\beta}}\] \[=\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\left|\widetilde{\mathsf{D}}^ {6}\Omega\right|^{2}\tfrac{J_{g}}{\Sigma^{2\beta}}\big{(}\mathsf{\hat{Q}}_{ \mathsf{s}}-V\hat{\mathsf{Q}}_{2}+\widetilde{\mathsf{D}}_{2}V\big{)}+\alpha \int_{0}^{\mathsf{s}}\!\!\!\int\!\!\left|\widetilde{\mathsf{D}}^{6}\Omega \right|^{2}\!\big{(}\widetilde{\mathsf{D}}_{2}-\hat{\mathsf{Q}}_{2}\big{)} \tfrac{J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{2\beta-1}}\] \[\qquad+\alpha\int\!\!\left|\widetilde{\mathsf{D}}^{6}\Omega \right|^{2}\!\overline{\mathsf{Q}}_{2}\tfrac{J_{g}g^{-\frac{1}{2}}\widetilde{ \mathsf{D}}_{2}h}{\Sigma^{2\beta-1}}\mathsf{\hat{I}}_{\mathsf{S}}+\int_{0}^{ \mathsf{s}}\!\!\!\int\!\!\frac{2}{\Sigma^{2\beta-1}}\mathsf{R}_{\Omega} \widetilde{\mathsf{D}}^{6}\Omega\,. \tag{8.6}\] By appealing to (5.30), (5.33a), (5.33c), the above becomes \[\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}- \big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\Omega(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}+\int_{0}^{\mathsf{s}}\!\!\! \int\!\!\left|\widetilde{\mathsf{D}}^{6}\Omega\right|^{2}\tfrac{1}{\Sigma^{2 \beta}}\mathsf{G}_{\Omega}\] \[=\alpha\int\tfrac{J_{g}}{\Sigma^{2\beta}}\big{|}\widetilde{ \mathsf{D}}^{6}\Omega\big{|}^{2}\overline{\mathsf{Q}}_{2}\Sigma g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\big{|}_{\mathsf{s}}+\int_{0}^{\mathsf{s}}\!\!\!\int \!\!\frac{2}{\Sigma^{2\beta-1}}\mathsf{R}_{\Omega}\widetilde{\mathsf{D}}^{6} \Omega\,. 
\tag{8.7}\] where \[\mathsf{G}_{\Omega} :=-\Big{(}\big{(}\tfrac{1+\alpha}{2}J_{g}\hat{\boldsymbol{\mathsf{ W}}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{ \mathcal{N}}\big{)}+2\alpha\beta J_{g}\big{(}\hat{\boldsymbol{\mathsf{Z}}}_{ \mathcal{N}}+\hat{\boldsymbol{\mathsf{A}}}_{\tau}\big{)}\Big{)}-\alpha(\beta- \tfrac{1}{2})\big{(}J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}}-J_{g}\hat{ \boldsymbol{\mathsf{Z}}}_{\mathcal{N}}+J_{g}\widetilde{\mathsf{D}}_{2}h(\hat{ \boldsymbol{\mathsf{W}}}_{\tau}-\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{T}}) \big{)}\] \[\qquad-J_{g}\big{(}\hat{\mathsf{Q}}_{\mathsf{s}}-V\hat{\mathsf{Q}}_ {2}+\widetilde{\mathsf{D}}_{2}V\big{)}+(2\beta-1)\alpha J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\Sigma+\alpha\hat{ \mathsf{Q}}_{2}\Sigma J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \tag{8.8}\] Using the bounds (6.38), the damping inequality (6.64), and the bootstrap assumptions (5.37), we conclude that \[\mathsf{G}_{\Omega} \geq-(\alpha\beta+\tfrac{1}{2})J_{g}\hat{\boldsymbol{\mathsf{W}}}_{ \mathcal{N}}-\tfrac{2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}-\hat{C}\langle\beta\rangle \geq(\alpha\beta+\tfrac{1}{2})\big{(}\tfrac{9}{10\varepsilon}-\tfrac{13}{ \varepsilon}J_{g}\big{)}-\tfrac{2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}-\hat{C}\langle\beta\rangle\,,\] (8.9a) and that \[\big{|}\overline{\mathsf{Q}}_{2}\Sigma g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \big{|}\leq\hat{C}\varepsilon^{2}\,. \tag{8.9b}\] From (8.6)-(8.9) we conclude that \[(1-\hat{C}\varepsilon^{2})\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1} {2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big{\|}_ {L^{2}_{x}}^{2}-\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}+\tfrac{9}{1 0\varepsilon}\big{(}\alpha\beta+\tfrac{1}{2}-\hat{C}\varepsilon\langle\beta \rangle\big{)}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{1}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{33(\alpha\beta+\frac{1}{2})+2\cdot 250^{2}(1+\alpha)}{ \varepsilon(1+\alpha)}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{(J_{g}\mathsf{Q})^{ \frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s}^{ \prime})\big{\|}_{L^{2}_{x}}^{2}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! assumptions (5.37), the Poincare inequality (B.2a), the bounds in Lemmas B.3 and B.5, and the definition of \(\Omega\), we may thus estimate that \[\sum_{i=2}^{4}\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\mathsf{R}_{\Omega}^ {(i)}\bigr{\|}_{L^{2}_{x,*}}\leq(4\kappa_{0}^{-1})^{\beta}\!\sum_{i=2}^{4}\bigl{\|} \mathsf{R}_{\Omega}^{(i)}\bigr{\|}_{L^{2}_{x,*}}\] \[\leq\mathring{C}(4\kappa_{0}^{-1})^{\beta}\Bigl{(}\varepsilon \kappa_{0}^{\beta}\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6 }\Omega\bigr{\|}_{L^{2}_{x,*}}+\varepsilon\bigl{\|}\widetilde{\mathsf{D}}^{6} \tfrac{J_{g}}{\Sigma}\bigr{\|}_{L^{2}_{x,*}}+\bigl{\|}\widetilde{\mathsf{D}}^{ 6}V\bigr{\|}_{L^{2}_{x,*}}+\varepsilon\bigl{\|}\widetilde{\mathsf{D}}^{6}J_{y }\bigr{\|}_{L^{2}_{x,*}}+\bigl{\|}\widetilde{\mathsf{D}}^{6}\widetilde{ \mathsf{D}}_{2}h\bigr{\|}_{L^{2}_{x,*}}+\varepsilon\Bigr{)}\,. 
\tag{8.12}\] Note that Lemmas B.4, and the Poincare inequality (B.2a) and the bootstrap bounds give that \[\bigl{\|}\widetilde{\mathsf{D}}^{6}\tfrac{J_{g}}{\Sigma}\bigr{\|}_{L^{2}_{x,* }}\lesssim\bigl{\|}\widetilde{\mathsf{D}}^{6}J_{y}\bigr{\|}_{L^{2}_{x,*}}+ \bigl{\|}\widetilde{\mathsf{D}}^{6}\Sigma^{-1}\bigr{\|}_{L^{2}_{x,*}}+ \varepsilon\,. \tag{8.13}\] By further appealing to Proposition 7.1 and the bound \(1\leq\mathcal{J}^{-\frac{1}{4}}\), we deduce from (8.11)-(8.13) that \[\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\mathsf{R}_{\Omega}\bigr{\|}_{L^{2}_{x,*}} \leq\frac{\mathring{C}}{\varepsilon}(1+\varepsilon^{2}4^{\beta})\bigr{\|} \tfrac{1}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega\bigr{\|}_{L^{2}_{x,*}}+\mathring{C}(4^{5}\kappa_{0}^{-1})^{\beta}\langle\mathsf{B}_{6}\rangle\,, \tag{8.14}\] where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\geq 1\) is a constant independent of \(\beta\). Next, by combining (8.10) and (8.14), with the Cauchy-Young inequality we arrive at \[(1-\mathring{C}\varepsilon^{2})\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^ {\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{ s})\bigr{\|}_{L^{2}_{x}}^{2}-\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,0)\bigr{\|}_{L^{2}_{x}}^ {2}\] \[\qquad+\tfrac{9}{10\varepsilon}\bigl{(}\alpha\beta+\tfrac{1}{2}- \mathring{C}\varepsilon\langle\beta\rangle-\mathring{C}(1+\varepsilon^{2}4^{ \beta})\bigr{)}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{1}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L^{2}_{ x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{33(\alpha\beta+\frac{1}{2})+2\cdot 250^{2}(1+\alpha)}{ \varepsilon(1+\alpha)}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^ {\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{ s}^{\prime})\bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\mathring{C} \varepsilon(4^{5}\kappa_{0}^{-1})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.15}\] where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\geq 1\) is a constant independent of \(\beta\). Since \(\mathring{C}_{\eqref{eq:C_data}}\) is independent of \(\beta\), we may first choose \(\beta\) to be sufficiently large (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), and then \(\varepsilon\) to be sufficiently small (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), to ensure that \[\alpha\beta-\mathring{\mathring{C}}_{\eqref{eq:C_data}}\geq 0,\qquad\text{and} \qquad\tfrac{1}{4}-\varepsilon\mathring{C}_{\eqref{eq:C_data}}\langle\beta \rangle-\varepsilon^{2}\mathring{C}_{\eqref{eq:C_data}}4^{\beta}\geq 0\,. \tag{8.16}\] This makes \(\beta=\beta(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\). 
With this choice of \(\beta\), (8.15) implies \[\tfrac{1}{2}\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L ^{2}_{x}}^{2}+\tfrac{1}{8\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{1}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s}^{\prime}) \bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{ \beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,0)\bigr{\|}_{L^{2}_{x}}^{2}+ \tfrac{\mathring{C}}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{(J_{g} \mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega( \cdot,\mathsf{s}^{\prime})\bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+ \mathring{C}\varepsilon(4^{5}\kappa_{0}^{-1})^{2\beta}\langle\mathsf{B}_{6} \rangle^{2}\,, \tag{8.17}\] Using a standard Gronwall argument, and using the initial data bound provided by (4.11), we obtain from (8.17) that \[\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2} }}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L ^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\bigl{\|}\tfrac{1}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L^{2} _{x}}^{2}\mathrm{d}\mathsf{s}\leq\mathring{C}\varepsilon\langle\mathsf{B}_{6} \rangle^{2}\,.\] where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})>0\) is a constant (the \(\beta\) dependence is included in the dependence on \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)). The proof of (8.1) is concluded upon multiplying the above estimate by \(\kappa_{0}^{2\beta}\) and appealing to (5.37p) and (6.38a)-(6.38b). The first inequality in (8.3) follows from (4.11) since we may apply Proposition C.1 to the evolution (3.39) of \(\Omega\), to deduce that \(\|\Omega\|_{L^{\infty}_{x,*}}\leq 4e^{18}\mathsf{C}_{\mathsf{data}}\). The bootstrap (5.37p) allows us to convert this into \(\|\Omega\|_{L^{\infty}_{x,*}}\leq 4\cdot 4^{\frac{1}{\alpha}}\cdot e^{16}\mathsf{C}_{ \mathsf{data}}\). The second inequality in (8.3) follows in a similar manner, but we need to differentiate (5.35) before applying Proposition C.1. We have that \[J_{g}\widetilde{\mathsf{Q}}\partial_{\mathsf{s}}\bigl{(}\widetilde{\mathsf{D}} \Omega\bigr{)}+J_{g}\bigl{(}V+\alpha\Sigma g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \bigr{)}\widetilde{\mathsf{D}}_{2}(\widetilde{\mathsf{D}}\Omega)-\tfrac{ \alpha}{\varepsilon}\Sigma\widetilde{\mathsf{D}}_{1}(\widetilde{\mathsf{D}} \Omega)=\mathrm{Error}\,, \tag{8.18}\] where \[\mathrm{Error}=-\Sigma\widetilde{\mathsf{D}}(\tfrac{J_{g}}{\Sigma})(\tfrac{1}{ \varepsilon}\widetilde{\mathsf{D}}_{8}\Omega+V\widetilde{\mathsf{D}}_{2} \Omega)-\Bigl{(}J_{g}\widetilde{\mathsf{D}}V+\alpha\Sigma\widetilde{\mathsf{D}}(J_{ g}g^{-\frac{1}{2}}\widetilde{\mathsf{D} In turn, this means that the parameter \(\beta\) appearing in (C.13) may be chosen to be \(\beta=\max\{\frac{20\cdot 23(1+\alpha)}{\alpha},1\}=\frac{20\cdot 23(1+\alpha)}{\alpha}\), which is a constant that depends only on \(\alpha\). 
Then, (C.13) leads to the bound \[\big{\|}\mathsf{D}\Omega\big{\|}_{L^{\infty}_{x,\mathsf{s}}}\leq(4e^{18})^{ \beta}\big{\|}\mathsf{D}\Omega(\cdot,0)\big{\|}_{L^{\infty}_{x}}+\mathring{C} \varepsilon\tfrac{20\varepsilon}{\alpha\beta}(4e^{18})^{\beta}\|\mathsf{D}_{2 }\Omega\|_{L^{\infty}_{x,t}}\,.\] Taking \(\varepsilon\) to be sufficiently small to absorb the second term on the right side into the left side, concludes the proof. ### Improved estimates for \(\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\) The boundedness of the damping norms \(\widetilde{\mathcal{D}}_{6}\) and \(\widetilde{\mathcal{D}}_{5}\) assumed in the bootstraps (5.37r) and (5.37s) implies that the \(L^{2}_{x,\mathsf{s}}\) norm of \(\mathcal{J}^{\frac{1}{4}}J^{\frac{1}{2}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{ 6}(J_{\mathbf{\hat{A}}_{\mathsf{N}^{\star}}})\) and \(\widetilde{\mathsf{D}}^{5}(J_{g}\mathbf{\hat{A}}_{\mathsf{N}^{\star}})\) are bounded by \(\mathsf{B}_{6}\) and respectively \(\mathsf{B}_{5}\). The goal of this subsection is to show that these bounds for \(\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\) are greatly improved (one fewer powers of \(J_{g}\), and one gain of \(\varepsilon\)) as a consequence of the vorticity estimate obtained earlier in Proposition 8.1. **Corollary 8.2**.: _Under the standing bootstrap assumptions, we have that_ \[\big{\|}J^{\frac{1}{2}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5} \mathbf{\hat{A}}_{\mathsf{N}^{\star}}\big{\|}_{L^{\infty}_{x}L^{2}_{x}} \lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6} \rangle\,, \tag{8.21a}\] \[\big{\|}\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{ \star}}\big{\|}_{L^{\infty}_{x}L^{2}_{x}} \lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6} \rangle\,,\] (8.21b) \[\big{\|}\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{ \star}}\big{\|}_{L^{2}_{x,\mathsf{s}}} \lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,\] (8.21c) \[\big{\|}\widetilde{\mathsf{J}}^{\frac{1}{4}}J^{\frac{1}{2}}_{ \mathsf{s}}\widetilde{\mathsf{D}}^{6}\mathbf{\hat{A}}_{\mathsf{N}^{\star}} \big{\|}_{L^{2}_{x,\mathsf{s}}} \lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{8.21d}\] _where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._ Proof of Corollary 8.2.: We first prove (8.21b) and (8.21d). 
From (3.37), (5.36a), (5.36g), (5.37r), (5.37s), (8.2) and the bounds \(\mathcal{J}\leq 1\) and \(J_{g}\leq\frac{3}{2}\), we deduce that \[\big{\|}\mathcal{J}^{\frac{3}{4}}J^{\frac{1}{2}}_{\mathsf{s}} \widetilde{\mathsf{D}}^{6}\mathring{\mathsf{A}}_{\mathsf{N}^{\star}}\big{\|}_{ L^{\infty}_{x}L^{2}_{x}} \lesssim\big{\|}J^{\frac{1}{2}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{6}\Omega\big{\|} _{L^{\infty}_{x}L^{2}_{x}}+\big{\|}\mathcal{J}^{\frac{3}{4}}J^{\frac{1}{2}}_{ \mathsf{s}}\widetilde{\mathsf{D}}^{6}(\mathring{\mathbf{W}}_{\tau},\mathring{ \mathbf{\hat{Z}}}_{\tau})\big{\|}_{L^{\infty}_{x}L^{2}_{x}}\lesssim\varepsilon ^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle+\mathsf{K}\varepsilon\langle\mathsf{ B}_{6}\rangle\varepsilon^{-\frac{1}{2}}\,,\] \[\big{\|}\mathcal{J}^{\frac{1}{4}}J^{\frac{1}{2}}_{\mathsf{s}} \widetilde{\mathsf{D}}^{6}\mathring{\mathsf{A}}_{\mathsf{N}^{\star}}\big{\|}_{ L^{2}_{x,\mathsf{s}}} \lesssim\big{\|}\widetilde{\mathsf{D}}^{6}\Omega\big{\|}_{L^{2}_{x,\mathsf{s}}}+ \big{\|}\mathcal{J}^{\frac{1}{2}}J^{\frac{1}{2}}_{\mathsf{s}}\widetilde{ \mathsf{D}}^{6}(\mathring{\mathbf{W}}_{\tau},\mathring{\mathbf{\hat{Z}}}_{ \tau})\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\langle\mathsf{B}_{6 }\rangle+\mathsf{K}\varepsilon\langle\mathsf{B}_{6}\rangle\,,\] thereby proving (8.21b) and (8.21d), since \(\mathcal{J}\leq\overline{J}_{g}\leq J_{g}\). Next, we note that (8.21c) immediately follows from (8.21d) and (6.71). Indeed, we may apply (6.71) with \(r^{\prime}=0\) and \(r=\frac{3}{4}\), and recall that \(\partial_{\mathsf{s}}=\frac{1}{\varepsilon\widetilde{\mathsf{Q}}}\widetilde{ \mathsf{D}}_{\mathsf{s}}\) so that with (4.11), (6.38a), (8.21d), and \(F=\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\), we obtain \[\|\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\|_{L^{2}_{x, \mathsf{s}}}\leq\varepsilon^{\frac{1}{2}}\|\widetilde{\mathsf{D}}^{5}\mathbf{ \hat{A}}_{\mathsf{N}^{\star}}(\cdot,0)\|_{L^{2}_{x}}+2\big{\|}\widehat{ \mathsf{Q}}^{-1}\big{\|}_{L^{\infty}_{x,\mathsf{s}}}\|\mathcal{J}^{\frac{3}{4}} \widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{ \mathsf{N}^{\star}}\|_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\mathsf{K} \langle\mathsf{B}_{6}\rangle\,,\] thereby establishing (8.21c). Similarly, (8.21a) follows from (8.21d) and (6.75). Applying (6.75) with \(r=\frac{3}{4}\) and \(F=\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\) we arrive at \[\sup_{\mathsf{s}\in[0,\varepsilon]}\|J^{\frac{1}{2}}_{\mathsf{s}} \widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{\star}}(\cdot,\mathsf{s} )\|_{L^{2}_{x}}\lesssim\|\widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N} ^{\star}}(\cdot,0)\|_{L^{2}_{x}}+\varepsilon^{-\frac{1}{2}}\|\mathcal{J}^{ \frac{1}{4}}J^{\frac{1}{4}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{\mathsf{s}} \widetilde{\mathsf{D}}^{5}\mathbf{\hat{A}}_{\mathsf{N}^{\star}}\|_{L^{2}_{x, \mathsf{s}}}\lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6} \rangle\,,\] concluding the proof of the lemma. ### Improved estimate for \(\mathbf{\hat{Z}}_{\mathsf{N}^{\star}}\) Since \(\mathbf{\hat{Z}}_{\mathsf{N}^{\star}}\) does not appear in the definition of the vorticity, we take a different approach to improve the bounds for the derivatives of \(\mathbf{\hat{Z}}_{\mathsf{N}^{\star}}\) in \(L^{2}_{x,\mathsf{s}}\). 
By using an argument similar to that in the proof of Proposition 8.1, and by appealing to a key step in the bound (8.21), we are able to show that when \(\varepsilon\) is sufficiently small (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), we have the following: **Lemma 8.3**.: _Under the assumptions of Proposition 8.1, we have that for any \(\overline{\beta}>0\), and \(\overline{a}\in[0,\frac{1}{2}]\),_ \[\big{\|}J^{\frac{1}{2}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5} \mathbf{\hat{Z}}_{\mathsf{N}^{\star}}\big{\|}_{L^{\infty}_{x}L^{2}_{x}}^{2}+ \tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\big{\|}\widetilde{\mathsf{D}}^{5} \mathbf{\hat{Z}}_{\mathsf{N}^{\star}}(\mathsf{s})\big{\|}_{L _where the implicit constant and the constant \(\hat{C}\) depend only on \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). It is also convenient to record the estimates_ \[\big{\|}\mathcal{J}^{\frac{3}{2}}J_{s}^{\frac{1}{2}}\widetilde{ \mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{\|} _{L_{x}^{\infty}L_{x}^{2}} \lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{8.22e}\] \[\big{\|}\mathcal{J}^{\frac{3}{2}}J_{s}^{\frac{1}{2}}\widetilde{ \mathsf{D}}_{2}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{\|} _{L_{x}^{\infty}L_{x}^{2}} \lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{8.22f}\] _where the implicit constant and the constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\)._ Proof of Lemma 8.3.: We first prove (8.22a). Letting \(\widetilde{\mathsf{D}}^{5}\) act on the \((x,\mathsf{s})\)-variable form of (3.27), we find that \[\tfrac{J_{\mathsf{d}}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V \partial_{2})(\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}})-2 \alpha(\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}),_{1}+2 \alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}} _{2}(\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}})=-\tfrac{1}{ \Sigma}(\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}})(\mathsf{ Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}+\mathsf{Err}_{\mathsf{Z}}\,, \tag{8.23}\] where the error term \(\mathsf{Err}_{\mathsf{Z}}\) is given by \[\mathsf{Err}_{\mathsf{Z}} =-[\widetilde{\mathsf{D}}^{5},\tfrac{J_{\mathsf{d}}}{\Sigma^{ \prime}}](\tfrac{1}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\widetilde{ \mathsf{D}}_{2})\tilde{\mathsf{Z}}_{\mathcal{N}}-\tfrac{J_{g}}{\Sigma}[ \widetilde{\mathsf{D}}^{5},V]\widetilde{\mathsf{D}}_{2}\tilde{\mathsf{Z}}_{ \mathcal{N}}-2\alpha[\widetilde{\mathsf{D}}^{5},J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h]\widetilde{\mathsf{D}}_{2}\tilde{\mathsf{Z}}_{ \mathcal{N}}-\big{[}\widetilde{\mathsf{D}}^{5},\tfrac{1}{\Sigma}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})J_{g}\big{]}\tilde{\mathsf{Z}}_{\mathcal{ N}}\] \[\quad-\tfrac{\alpha}{2}\widetilde{\mathsf{D}}^{5}\big{(}(J_{g} \tilde{\mathsf{Z}}_{\mathcal{N}})(\tilde{\mathsf{A}}_{\mathcal{T}}+\Sigma g ^{-\frac{3}{2}}\widetilde{\mathsf{D}}_{2}^{2}h)\big{)}+\tfrac{\alpha}{2} \widetilde{\mathsf{D}}^{5}\big{(}(J_{g}\tilde{\mathsf{W}}_{\mathcal{N}})( \tilde{\mathsf{A}}_{\mathcal{T}}-\Sigma g^{-\frac{3}{2}}\widetilde{\mathsf{D }}_{2}^{2}h)\big{)}\] \[\quad+\alpha\widetilde{\mathsf{D}}^{5}\big{(}g^{-\frac{1}{2}} (J_{g}\tilde{\mathsf{A}}_{\mathcal{N}})_{,2}\big{)}+2\alpha\widetilde{\mathsf{D }}^{5}\big{(}(g^{-\frac{1}{2}}\tilde{\mathsf{Z}}_{\mathcal{T}}+\tilde{\mathsf{ A}}_{\mathcal{N}}(1-\tfrac{1}{2}g^{-\frac{1}{2}}))\widetilde{\mathsf{D}}_{2}J_{g} 
\big{)}-\widetilde{\mathsf{D}}^{5}\big{(}(J_{g}\tilde{\mathsf{A}}_{\mathcal{ N}})(\tfrac{1+\alpha}{2}\tilde{\mathsf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\tilde{\mathsf{Z}}_{ \mathcal{T}})\big{)}\] \[\quad-\widetilde{\mathsf{D}}^{5}\big{(}(J_{g}\tilde{\mathsf{Z}}_{ \mathcal{T}})(\tfrac{1+\alpha}{2}\tilde{\mathsf{W}}_{\mathcal{T}}+\tfrac{1- \alpha}{2}\tilde{\mathsf{Z}}_{\mathcal{T}})\big{)}+\alpha\widetilde{\mathsf{D }}^{5}\big{(}(J_{g}\tilde{\mathsf{Z}}_{\mathcal{T}})g^{-\frac{3}{2}}\widetilde{ \mathsf{D}}_{2}^{2}h\big{)}\] \[\quad=:\sum_{i=1}^{11}\mathsf{Err}_{\mathsf{Z}}{}^{(i)}\,. \tag{8.24}\] The first three terms appearing in (8.2) should be compared to the remainder terms in (8.5). Next, we test (8.23) with \(\Sigma^{-2\beta+1}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\), for \(\beta>0\) to be determined, and similarly to (8.6) we arrive at \[\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}) \big{\|}_{L_{x}^{2}}^{2}-\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}( \cdot,0)\big{\|}_{L_{x}^{2}}^{2}-\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\big{|} \widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{|}^{2}(\mathsf{ Q}\partial_{s}+V\partial_{2})\tfrac{J_{g}}{\Sigma^{2\beta}}-2\alpha(2\beta-1)\int_{0}^{ \mathsf{s}}\!\!\!\int\!\!\!\big{|}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{ \mathcal{N}}\big{|}^{2}\tfrac{\Sigma_{,1}}{\Sigma^{2\beta}}\] \[\quad+2\alpha\int\!\big{|}\widetilde{\mathsf{D}}^{5}\tilde{ \mathsf{Z}}_{\mathcal{N}}\big{|}^{2}\overline{\mathsf{Q}}_{2}\tfrac{J_{g}g^{- \frac{1}{2}}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{2\beta-1}}\Big{|}_{\mathsf{s}} -\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\tfrac{2}{\Sigma^{2\beta}}\widetilde{ \mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{|}^{2}(\widetilde{\mathsf{Q}} \partial_{s}+V\partial_{2})J_{g}+\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\tfrac{2 }{\Sigma^{2\beta-1}}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}} \mathsf{Err}_{\mathsf{Z}}\,. 
\tag{8.25}\] By appealing to (5.30), (5.33a), (5.33c), the above becomes \[\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}) \big{\|}_{L_{x}^{2}}^{2}-\big{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta+1}}\widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}( \cdot,0)\big{\|}_{L_{x}^{2}}^{2}+\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\big{|} \widetilde{\mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{|}^{2}\tfrac{1}{ \Sigma^{2\beta}}\mathsf{G}\] \[\quad=2\alpha\int\!\big{|}\widetilde{\mathsf{D}}^{5}\tilde{ \mathsf{Z}}_{\mathcal{N}}\big{|}^{2}\overline{\mathsf{Q}}_{2}\tfrac{J_{g}g^{- \frac{1}{2}}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{2\beta-1}}\big{|}_{\mathsf{s}} +\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\tfrac{2}{\Sigma^{2\beta-1}}\widetilde{ \mathsf{D}}^{5}\tilde{\mathsf{Z}}_{\mathcal{N}}\mathsf{Err}_{\mathsf{Z}}\,, \tag{8.26}\] where \[\mathsf{G}:= \Big{(}\big{(}\tfrac{1+\alpha}{2}J_{g}\tilde{\mathsf{W}}_{ \mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\tilde{\mathsf{Z}}_{\mathcal{N}}\big{)}-2 \alpha\beta J_{g}\big{(}\tilde{\mathsf{Z}}_{\mathcal{N}}+\tilde{\ with the three terms in the remainder \(\mathsf{R}_{\Omega}\) from (8.5); as such, these terms are estimated in a nearly identical fashion, leading to a bound analogous to (8.14), namely \[\sum_{i=1}^{3}\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\mathsf{Err}_{2} ^{(i)}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{\hat{C}}{\varepsilon}(1+ \varepsilon^{2}4^{\beta})\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\widetilde{\mathsf{ D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}+\hat{C}(4^{ 5}\kappa_{0}^{-1})^{\beta}\langle\mathsf{B}_{6}\rangle\,. \tag{8.31}\] The remaining terms on the right side of (8.24), namely \(\{\mathsf{Err}_{2}(^{i})\}_{i=4}^{1}\), are bounded similarly, using the Moser-type inequality in Lemma B.4, the Gagliardo-Nirenberg inequality in Lemma B.3, the bootstrap inequalities (5.37), the bounds in Proposition 7.1, and the above established estimates (8.21). We shall detail the estimates for the two most difficult terms: \(\mathsf{Err}_{2}^{(4)}\) and \(\mathsf{Err}_{2}^{(7)}\). 
For the fourth error term, we have that \[\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\mathsf{Err}_{2}^{(4)}\bigr{\|} _{L^{2}_{x,\mathsf{s}}} \leq(4\kappa_{0}^{-1})^{\beta}\bigl{\|}\bigl{[}\widetilde{\mathsf{ D}}^{5},\tfrac{1}{\Sigma}(\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+ \tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\bigr{]}\hat{\mathbf{Z }}_{\mathcal{N}}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\] \[\leq\hat{C}(4\kappa_{0}^{-1})^{\beta}\sum_{i=0}^{4}\bigl{(}\bigl{\|} \widetilde{\mathsf{D}}^{5}(\tfrac{1}{\Sigma}(J_{g}\hat{\mathbf{W}}_{\mathcal{ N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}))\bigr{\|}_{L^{5-\hat{z}}_{2,\mathsf{s}}} \varepsilon^{-\frac{\hat{z}}{\varepsilon}}+\varepsilon^{-\frac{\hat{z}}{\varepsilon }}\bigr{)}\bigl{(}(\kappa_{0}^{\beta}\bigl{\|}\tfrac{1}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}\bigr{\|}_{L^{2 }_{x,\mathsf{s}}}\bigr{)}^{\frac{\hat{z}}{\varepsilon}}+\varepsilon^{\frac{ \hat{z}}{\varepsilon}}\bigr{)}\] \[\leq\tfrac{\hat{C}}{\varepsilon}\bigl{\|}\tfrac{1}{\Sigma^{\beta} }\widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}\bigr{\|}_{L^{2}_ {x,\mathsf{s}}}+(\tfrac{4^{5}}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6} \rangle\,, \tag{8.32}\] and for the seventh error term, we obtain that \[\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\mathsf{Err}_{2}^{(7)}\bigr{\|}_{L^{2}_{x, \mathsf{s}}}\leq\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}(g ^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}) )\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}\varepsilon(4\kappa_{0}^{-1})^{ \beta}\langle\mathsf{B}_{6}\rangle\,. \tag{8.33}\] A straightforward application of (5.37) and (B.16) shows that all of the remaining terms on the right side of (8.24) satisfy even better bounds, which combined with (8.31)-(8.33) leads to \[\bigl{\|}\Sigma^{-\beta}\mathsf{Err}_{2}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\leq \tfrac{\hat{C}}{\varepsilon}(1+\varepsilon^{2}4^{\beta})\bigl{\|}\tfrac{1}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}} \bigr{\|}_{L^{2}_{x,\mathsf{s}}}+\hat{C}(4^{5}\kappa_{0}^{-1})^{\beta}\langle \mathsf{B}_{6}\rangle \tag{8.34}\] where \(\hat{C}=\hat{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\) is a suitably large constant, which is independent of \(\beta\). 
Inserting the bound (8.34) into (8.30), similarly to (8.15) we are led to \[(1-\hat{C}\varepsilon^{2})\bigl{\|}\tfrac{(J_{q}\mathsf{Q})^{ \frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{ \mathcal{N}}(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}-\bigl{\|}\tfrac{(J_{q} \mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{ \mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L^{2}_{x}}^{2}\] \[\qquad+\tfrac{9}{10\varepsilon}\bigl{(}\alpha\beta+\tfrac{1}{2}- \hat{C}\varepsilon\langle\beta\rangle-\hat{C}(1+\varepsilon^{2}4^{\beta}) \bigr{)}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}^{\prime}) \bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{33(\alpha\beta+\tfrac{1}{2})+2\cdot 250^{2}(1+ \alpha)}{\varepsilon(1+\alpha)}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{(J_{q} \mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{ \mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}+\hat{C}\varepsilon(4^{5}\kappa_{0}^{-1})^{2\beta} \langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.35}\] where \(\hat{C}=\hat{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\geq 1\) is a constant independent of \(\beta\). Since \(\hat{C}_{(\ref{eq:C33})}\) is independent of \(\beta\), we may choose first \(\beta\) to be sufficiently large (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), and then \(\varepsilon\) to be sufficiently small (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), to ensure that \[\alpha\beta-\hat{C}_{(\ref{eq:C33})}\geq 0,\qquad\text{and}\qquad\tfrac{1}{4}- \varepsilon\hat{C}_{(\ref{eq:C33})}\langle\beta\rangle-\varepsilon^{2}\hat{C}_{( \ref{eq:C33})}4^{\beta}\geq 0\,. \tag{8.36}\] The choice (8.28) and (8.36) makes \(\beta=\beta(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\). With this choice of \(\beta\), (8.35) implies \[\tfrac{1}{2}\bigl{\|}\tfrac{(J_{q}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}( \cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{8\varepsilon}\int_{0}^{ \varepsilon}\bigl{\|}\tfrac{1}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\widehat{ \mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d} \mathsf{s}^{\prime}\] \[\leq\bigl{\|}\tfrac{(J_{q}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{ \beta}}\widetilde{\mathsf{D}}^{5}\widehat{\mathsf{Z}}_{\mathcal{N}}(\cdot,0) \bigr{\|}_{L^{2}_{x}}^{2}+\tfrac{\hat{C}}{\varepsilon}\int_{0}\bigl{\|} \tfrac{(J_{q}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{5} \widehat{\mathsf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}+\hat{C}\varepsilon(4^{5}\kappa_{0}^{-1})^{2\beta} \langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.37}\] Using a standard Gronwall argument, and using the initial data bound provided by (4.11), we obtain from (8.37) that \[\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}\tfrac{(J_{q}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{5}\ which proves (8.22b). Next, we turn to the improved estimates for \(\widetilde{\mathsf{D}}^{6}\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}}\) stated in (8.22c)-(8.22d). We emphasize that the parameter \(\overline{\beta}\geq 0\) appearing in these estimates is arbitrary, it is not the same as the \(\beta\) appearing in (8.37). 
Note that we are only able at this point to obtain such an improved estimate if either \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1}\), or \(\widetilde{\mathsf{D}}^{6}=\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\), i.e., if at least one space derivative is present. To see this, we return to (3.19b), which we first differentiate in space, and then convert to \((x,\mathsf{s})\)-variables, to obtain \[\widetilde{\mathsf{D}}_{i}\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{ \mathcal{N}}}=-\widetilde{\mathsf{D}}_{i}\tilde{\boldsymbol{\mathsf{A}}}_{ \mathcal{T}}-\tfrac{1}{\alpha\Sigma}(\tfrac{1}{\varepsilon}\widetilde{\mathsf{D }}_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2})\widetilde{\mathsf{D}}_{i}\Sigma- \tfrac{1}{\Sigma}(\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}}+\tilde{ \boldsymbol{\mathsf{A}}}_{\mathcal{T}})\widetilde{\mathsf{D}}_{i}\Sigma- \tfrac{1}{\alpha\Sigma}\widetilde{\mathsf{D}}_{i}V\widetilde{\mathsf{D}}_{2} \Sigma\,,\qquad\text{for}\qquad i\in\{1,2\}\,. \tag{8.40}\] Upon applying \(\widetilde{\mathsf{D}}^{5}\) to the above identity, and recalling from (3.20) that \[\widetilde{\mathsf{D}}_{1}\Sigma=\tfrac{\varepsilon}{2}J_{g}(\tilde{ \boldsymbol{\mathsf{W}}}_{{}_{\mathcal{N}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{ {}_{\mathcal{N}}})+\tfrac{\varepsilon}{2}J_{g}\widetilde{\mathsf{D}}_{2}h( \tilde{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{\boldsymbol{ \mathsf{Z}}}_{{}_{\mathcal{T}}})\,,\qquad\text{and}\qquad\widetilde{\mathsf{D }}_{2}\Sigma=\tfrac{1}{2}g^{\frac{1}{2}}(\tilde{\boldsymbol{\mathsf{W}}}_{{}_{ \mathcal{T}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{T}}})\,, \tag{8.41}\] we deduce \[\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1}\tilde{ \boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}} =-\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{1}\tilde{ \boldsymbol{\mathsf{A}}}_{{}_{\mathcal{T}}}-\tfrac{1}{2\alpha}\Sigma^{-1} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}\big{(}J_{g} \tilde{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{N}}}-J_{g}\tilde{\boldsymbol{ \mathsf{Z}}}_{{}_{\mathcal{N}}}+J_{g}\widetilde{\mathsf{D}}_{2}h(\tilde{ \boldsymbol{\mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{{ }_{\mathcal{T}}})\big{)}-\tfrac{1}{2\alpha}V\Sigma^{-1}\widetilde{\mathsf{D}}^ {5}\widetilde{\mathsf{D}}_{1}\big{(}g^{\frac{1}{2}}(\tilde{\boldsymbol{ \mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{ \mathcal{T}}})\big{)}\] \[\qquad-\tfrac{1}{2\alpha}[\widetilde{\mathsf{D}}^{5},\Sigma^{-1} ]\widetilde{\mathsf{D}}_{\mathsf{s}}\big{(}J_{g}\hat{\boldsymbol{\mathsf{W}}}_ {{}_{\mathcal{N}}}-J_{g}\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}}+J_{ g}\widetilde{\mathsf{D}}_{2}h(\hat{\boldsymbol{\mathsf{W}}}_{{}_{ \mathcal{T}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{T}}})\big{)}- \tfrac{1}{\alpha}[\widetilde{\mathsf{D}}^{5},V\Sigma^{-1}]\widetilde{\mathsf{D }}_{1}\widetilde{\mathsf{D}}_{2}\Sigma\] \[\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\tilde{ \boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}} =-\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\tilde{ \boldsymbol{\mathsf{A}}}_{{}_{\mathcal{T}}}-\tfrac{1}{2\alpha\Sigma^{-1}} \widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}\big{(}g^{ \frac{1}{2}}(\tilde{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{ \boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{T}}})\big{)}-\tfrac{1}{2\alpha}V\Sigma^{ -1}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\big{(}g^{\frac{1}{2} 
}(\hat{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{\boldsymbol{ \mathsf{Z}}}_{{}_{\mathcal{T}}})\big{)}\] \[\qquad-\tfrac{1}{2\alpha\varepsilon}[\widetilde{\mathsf{D}}^{5}, \Sigma^{-1}]\widetilde{\mathsf{D}}_{\mathsf{s}}\big{(}g^{\frac{1}{2}}(\hat{ \boldsymbol{\mathsf{W}}}_{{}_{\mathcal{T}}}-\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{ \mathcal{T}}})\big{)}-\tfrac{1}{\alpha}[\widetilde{\mathsf{D}}^{5},V\Sigma^{-1 }]\widetilde{\mathsf{D}}_{2}^{2}\Sigma\] \[\qquad-\widetilde{\mathsf{D}}^{5}\big{(}\tfrac{1}{\Sigma}( \tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}}+\tilde{\boldsymbol{ \mathsf{A}}}_{{}_{\mathcal{T}}}+\tfrac{1}{\alpha}\widetilde{\mathsf{D}}_{2}V) \widetilde{\mathsf{D}}_{2}\Sigma\big{)} \tag{8.42b}\] Taking into account the bootstraps (5.37), the previously established bounds (7.1), (8.21), (8.22a), and the Moser-type bound (B.13), we deduce from (8.42a) that \[\big{\|}\Sigma^{-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{ \beta}}J_{g}^{\overline{\mathsf{D}}^{5}}\widetilde{\mathsf{D}}_{1}\tilde{ \boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}}\big{\|}_{L^{2}_{x,\mathsf{s}}} \leq\tfrac{1}{2\alpha}\|\Sigma^{-1-\overline{\beta}}\mathcal{J}^{\frac{3}{4}- \overline{\alpha}}J_{g}^{\overline{\mathsf{D}}^{5}}\widetilde{\mathsf{D}}_{ \mathsf{s}}(J_{g}\tilde{\boldsymbol{\mathsf{Z}}}_{{}_{\mathcal{N}}})\|_{L^{2}_{x, \mathsf{s}}}\] \[\qquad+(4\kappa_{0}^{-1})^{\overline{\beta}}(1+\hat{C} \varepsilon)\varepsilon\mathsf{K}\mathsf{B}_{6}+\hat{C}(4\kappa_{0}^{-1})^{ \overline{\beta}}\varepsilon^{2}\mathsf{K}\mathsf{B}_{6}+\hat{C}(4\kappa_{0}^{-1})^{ \overline{\beta}}\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\hat{C}(4 \kappa_{0}^{-1})^{\overline{\beta}}\varepsilon\langle\mathsf{B}_{6}\rangle\] \[\qquad+\hat{C}(4\kappa_{0}^{-1})^{\overline{\beta}}\varepsilon \|\mathcal{J}^{\frac{3}{4}-\overline{\alpha}}J_{g}^{\overline{\mathsf{D}}^{5}}( \mathsf{Q}\mathsf{\mathsf{\mathsf{\mathsf{Q}}}}_{{}_{\mathcal{N}}}+V\partial_{2})(J_ {g}\hat{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{N}}})\|_{L^{2}_{x,\mathsf{s}}}\] \[\qquad+\hat{C}(4\kappa_{0}^{-1})^{\overline{\beta}}\varepsilon^{2} \langle\mathsf{B}_{6}\rangle\|(\mathsf{Q}\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{Q}}}}}_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\boldsymbol{ \mathsf{W}}}_{{}_{\mathcal{N}}})\|_{L^{2}_{x,\mathsf{s}}}\] \[\qquad+\hat{C}(4\kappa_{0}^{-1})^{\overline{\beta}}\varepsilon \|\widetilde{\mathsf{D}}^{4}(\mathsf{Q}\mathsf{\mathsf{\mathsf{\mathsf{Q}}}}_{ \mathsf{s}}+V\partial_{2})(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{{}_{\mathcal{N}}})\|_{L^{2 }_{x,\mathsf{s}}}\,. 
\tag{8.43}\] In the above estimate we have chosen not to leave the terms involving \(\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}(J_{g}\hat{ \boldsymbol{\mathsf{W}}}_{{}_{\mathcal{N}}})\) as is, since they would give sub-optimal bounds, and instead to write them in terms of \((\mathsf{Q}\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{Q}}}}_{\mathsf{s}}+V\partial_{2})(J_ {g}\ (8.21), (8.22a), and (B.13), we may show that \[\|\Sigma^{-1-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{ \alpha}}J_{g}^{\overline{\alpha}}\widetilde{\mathsf{D}}^{5}(\mathsf{Q}\partial_ {\mathsf{s}}+V\partial_{2})\big{(}g^{\frac{1}{2}}(\tilde{\mathbf{W}}_{\tau}- \tilde{\mathbf{Z}}_{\tau})\big{)}\|_{L^{2}_{x,x}}\] \[\qquad\leq\mathring{C}(\varepsilon(4\kappa_{0}^{-1})^{\overline{ \beta}}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\tfrac{1}{\varepsilon}\|\Sigma^{ -1-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{\alpha}}J_{g}^{ \overline{\alpha}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s }}\tilde{\mathbf{Z}}_{\tau}\|_{L^{2}_{x,x}}\] (8.47a) and \[\big{\|}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\big{(}g^{\frac{1}{2}} (\tilde{\mathbf{W}}_{\tau}-\tilde{\mathbf{Z}}_{\tau})\big{)}\|_{L^{\infty}_{x,\infty}}\lesssim\varepsilon+\|(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}) \tilde{\mathbf{Z}}_{\tau}\|_{L^{\infty}_{x,\infty}}\lesssim 1\,. \tag{8.47b}\] The bound (8.22d) now follows by combining the above two estimates and (8.46). It remains to prove the bounds (8.22e) and (8.22f). Notice that here we do not aim for a sharp pre-factor in front of the leading order term, as done previously for (8.22c) and (8.22d). First, we revisit (8.42a). Taking into account the bootstraps (5.37), the previously established bounds (7.1), (8.21), (8.22a), and the bounds in Lemma B.6, we deduce \[\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{4}}\widetilde{ \mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\tilde{\mathbf{Z}}_{\mathcal{N}}\big{\|} _{L^{\infty}_{x}L^{2}_{x}}\lesssim \varepsilon^{\frac{1}{2}}\mathsf{K}\mathsf{B}_{6}+\varepsilon^{- \frac{1}{2}}\mathsf{B}_{6}+\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{4 }}\widetilde{\mathsf{D}}^{6}(J_{g}\widetilde{\mathsf{D}}_{2}h(\tilde{\mathbf{ W}}_{\tau}-\tilde{\mathbf{Z}}_{\tau}))\big{\|}_{L^{\infty}_{x}L^{2}_{x}}\] \[+\varepsilon\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}} \widetilde{\mathsf{D}}^{6}(g^{\frac{1}{2}}(\tilde{\mathbf{W}}_{\tau}-\tilde{ \mathbf{Z}}_{\tau}))\big{\|}_{L^{\infty}_{x}L^{2}_{x}}+\varepsilon^{-\frac{1}{ 2}}\langle\mathsf{B}_{6}\rangle\] \[+\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{4}}\big{[} \widetilde{\mathsf{D}}^{5},V\Sigma^{-1}\big{]}\widetilde{\mathsf{D}}_{1} \widetilde{\mathsf{D}}_{2}\Sigma\big{\|}_{L^{\infty}_{x}L^{2}_{x}}+ \varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle+\varepsilon^{\frac{1}{ 2}}\langle\mathsf{B}_{6}\rangle\,. \tag{8.48}\] For the three terms on the right side of the (8.48) which still involve norms of products and commutators, the results in Lemma B.6 do not apply directly; instead, the desired bound follows from the argument which was used to prove Lemma B.6. 
For example, the available bounds, Sobolev interpolation, and the fundamental theorem of calculus in time, imply \[\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(g^{\frac{1}{2}}(\tilde{\mathbf{W}}_{\mathcal{T}}-\tilde{\mathbf{Z}}_{\mathcal{T}}))\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\mathsf{K}\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle+\sum\nolimits_{k=2}^{4}\big{\|}\widetilde{\mathsf{D}}^{6-k}(g^{\frac{1}{2}})\widetilde{\mathsf{D}}^{k}(\tilde{\mathbf{W}}_{\mathcal{T}},\tilde{\mathbf{Z}}_{\mathcal{T}})\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\] \[\qquad\lesssim\mathsf{K}\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle+\varepsilon^{\frac{3}{2}}+\varepsilon^{-\frac{1}{2}}\sum\nolimits_{k=2}^{4}\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{6-k}(g^{\frac{1}{2}})\widetilde{\mathsf{D}}^{k}(\tilde{\mathbf{W}}_{\mathcal{T}},\tilde{\mathbf{Z}}_{\mathcal{T}})\big{\|}_{L^{2}_{x,\mathsf{s}}}+\varepsilon^{-\frac{1}{2}}\sum\nolimits_{k=2}^{4}\big{\|}\widetilde{\mathsf{D}}^{6-k}(g^{\frac{1}{2}})\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{k}(\tilde{\mathbf{W}}_{\mathcal{T}},\tilde{\mathbf{Z}}_{\mathcal{T}})\big{\|}_{L^{2}_{x,\mathsf{s}}}\] \[\qquad\lesssim\mathsf{K}\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,. \tag{8.49a}\] Similarly to (8.49a) one may show that \[\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\widetilde{\mathsf{D}}_{2}h(\tilde{\mathbf{W}}_{\mathcal{T}}-\tilde{\mathbf{Z}}_{\mathcal{T}}))\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\mathsf{K}\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{8.49b}\] \[\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\big{[}\widetilde{\mathsf{D}}^{5},V\Sigma^{-1}\big{]}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}_{2}\Sigma\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\mathsf{K}\varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle\,. \tag{8.49c}\] Inserting (8.49a)-(8.49c) into the bound (8.48) leads to \[\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\tilde{\mathbf{Z}}_{\mathcal{N}}\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,\] thereby proving (8.22e). Estimate (8.22f) is established in a nearly identical manner, by bounding the right side of (8.42b) instead of (8.42a). We omit these redundant details. ### Improved estimates for \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\) We next obtain a few improved estimates for \(\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\). **Lemma 8.4**.: _Under the assumptions of Proposition 8.1, in addition to the improved bounds for \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\) from (8.44), we have the estimates_ \[\big{\|}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\leq 2\mathsf{C}_{\mathsf{data}}\varepsilon^{-\frac{1}{2}}\,, \tag{8.50a}\] \[\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.
\tag{8.50b}\] Proof of Lemma 8.4.: In order to prove (8.50a), we apply \(\mathsf{D}^{5}\) to (3.25) and then transform to \((x,\mathsf{s})\) variables, to obtain \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\] \[\qquad=-[\widetilde{\mathsf{D}}^{5},V]\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})-\widetilde{\mathsf{D}}^{5}\big{(}\big{(}\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\widetilde{\mathsf{D}}_{2}^{2}h\big{)}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{)}+\mathsf{Rem}\,, \tag{8.51}\] where \(\mathsf{Rem}\) denotes the remainder collecting the lower-order terms, as recorded in (8.52). Thus, establishing (8.50a) amounts to bounding the forcing terms in (8.51) and the remainder from (8.52) in \(L^{2}_{x,\mathsf{s}}\). For this purpose, we note that (5.37), (6.17c), (7.1), (8.21), (B.2a), (B.13), and (B.17), imply \[\big{\|}[\widetilde{\mathsf{D}}^{5},V]\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}+\big{\|}\widetilde{\mathsf{D}}^{5}\big{(}\big{(}\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\widetilde{\mathsf{D}}_{2}^{2}h\big{)}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{)}\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{8.53}\] and \[\big{\|}\mathsf{Rem}\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{8.54}\] Next we return to (8.51), which we test with \(\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\). By appealing to (5.28c), (6.38), (8.53), and (8.54), we obtain \[\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\leq\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}-2\alpha\int_{0}^{\mathsf{s}}\!\!\!\int\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\,\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\] \[\qquad+\hat{C}\int_{0}^{\mathsf{s}}\!\!\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\hat{C}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{8.55}\] The only tricky term is the second term on the right side of (8.55). For this term, we recall the bootstrap (5.37s), Remark 6.10, and the improved estimate (8.21d), to bound \[2\alpha\Big{|}\int_{0}^{\mathsf{s}}\!\!\!\int\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\,\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\Big{|}\leq 2\alpha\|\Sigma g^{-\frac{1}{2}}\|_{L^{\infty}_{x,\mathsf{s}}}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\|_{L^{2}_{x,\mathsf{s}}}\big{\|}\mathcal{J}^{-\frac{1}{4}}\big{\|}_{L^{2}_{\mathsf{s}}}\leq\hat{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{8.56}\] where in the last inequality we have used the fact that \(\mathsf{s}\leq\varepsilon\) and that \(\mathcal{J}(\mathsf{s})=1-\frac{\mathsf{s}}{\varepsilon}\) is a function independent of \(x\).
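For completeness, the elementary computation behind the factor \(\big{\|}\mathcal{J}^{-\frac{1}{4}}\big{\|}_{L^{2}_{\mathsf{s}}}\) (a one-line sketch using only the explicit formula for \(\mathcal{J}\) recalled above, over the time interval \([0,\varepsilon]\)) is \[\big{\|}\mathcal{J}^{-\frac{1}{4}}\big{\|}_{L^{2}_{\mathsf{s}}}^{2}=\int_{0}^{\varepsilon}\big{(}1-\tfrac{\mathsf{s}^{\prime}}{\varepsilon}\big{)}^{-\frac{1}{2}}\,\mathrm{d}\mathsf{s}^{\prime}=\Big{[}-2\varepsilon\big{(}1-\tfrac{\mathsf{s}^{\prime}}{\varepsilon}\big{)}^{\frac{1}{2}}\Big{]}_{0}^{\varepsilon}=2\varepsilon\,,\] which contributes the factor \(\varepsilon^{\frac{1}{2}}\) to the product estimated in (8.56).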
Using this bound and taking a supremum in time in (8.55) for \(\mathsf{s}\in[0,\varepsilon]\), we deduce that \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\leq\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}+\hat{C}\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}\mathsf{Q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}+\hat{C}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.\] The proof of (8.50a) now follows upon absorbing the second term on the right side into the left side, by appealing to (4.11), to the upper and lower bounds for \(\mathsf{Q}\) at time \(\mathsf{s}=0\) which follow from (6.43)-(6.44), and by taking \(\varepsilon\) to be sufficiently small with respect to \(\mathsf{K}\) and \(\mathsf{B}_{6}\). In order to prove (8.50b), we appeal to (8.44c), to the identity \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})=\frac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2}\), and to the bound \(J_{g}\leq\frac{3}{2}\), to conclude \[\tfrac{1}{\varepsilon}\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}+\big{\|}[\widetilde{\mathsf{D}}^{5},V]\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}+\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{8.57}\] Note that (5.37r) gives \(\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}\leq\mathsf{B}_{6}\). Moreover, from (B.17), (5.37), (7.11), and (8.50a) we obtain \(\|[\widetilde{\mathsf{D}}^{5},V]\widetilde{\mathsf{D}}_{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\). With these bounds, multiplying both sides of (8.57) by \(\varepsilon\), gives (8.50b). ## 9. Closing the pointwise bootstrap inequalities The goal of this section is to close the pointwise bootstrap inequalities (5.37b)-(5.37q). We first claim that: **Proposition 9.1**.: _Assume that the bootstraps (5.37) hold, and assume that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Then, for all \((x,t)\in\mathcal{P}\) we have that_ \[J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\geq-\tfrac{9}{10}\varepsilon^{-1}\ \ \text{implies that}\ \ J_{g}\geq\tfrac{81}{1000}\,, \tag{9.1a}\] \[\big{|}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\big{|}\leq(1+\tfrac{\varepsilon}{2})\varepsilon^{-1}\,,\] (9.1b) \[\big{|}\mathsf{D}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{|}\leq\tfrac{5}{2}\varepsilon^{-1}\,,\] (9.1c) \[J_{g}\leq\tfrac{21}{20}\,,\] (9.1d) \[\big{|}\mathsf{D}J_{g}\big{|}\leq 4+\alpha\,.
\tag{9.1e}\] _In particular, the bootstraps (5.37b), (5.37c), (5.37d), (5.37k), and (5.37l) are closed._ Proof of Proposition 9.1.: The bound (6.17a) implies that \[|J_{g}\hat{\mathbf{W}}_{\mathcal{N}}(x,t)|\leq|(w_{0})_{,1}(x)|+\hat{C}\varepsilon\leq\varepsilon^{-1}(1+\hat{C}\varepsilon^{2})\leq\varepsilon^{-1}(1+\tfrac{\varepsilon}{2})\,.\] The same estimate also shows that if \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}(x,t)\geq-\tfrac{9}{10}\varepsilon^{-1}\), then we must have \((w_{0})_{,1}(x)\geq-\tfrac{9}{10}\varepsilon^{-1}-\hat{C}\varepsilon\), and therefore (6.24a) and (4.4) imply \[J_{g}(x,t)\geq 1+(t-\mathfrak{t}_{\text{in}})\tfrac{1+\alpha}{2}\big{(}(w_{0})_{,1}(x)-\mathsf{C}_{\mathsf{J}}\big{)}\geq 1+(t-\mathfrak{t}_{\text{in}})\tfrac{1+\alpha}{2}\big{(}-\tfrac{9}{10\varepsilon}-2\mathsf{C}_{\mathsf{J}}\big{)}\] \[\qquad\geq 1-\tfrac{2\varepsilon}{1+\alpha}\tfrac{51}{50}\tfrac{1+\alpha}{2\varepsilon}(\tfrac{9}{10}+2\varepsilon\mathsf{C}_{\mathsf{J}})\geq 1-\tfrac{459}{500}-3\varepsilon\mathsf{C}_{\mathsf{J}}=\tfrac{41}{500}-3\varepsilon\mathsf{C}_{\mathsf{J}}\geq\tfrac{81}{1000}\,,\] assuming that \(\varepsilon\) is sufficiently small. In a similar fashion, the upper bound for \(J_{g}\) is also obtained from (6.24a), combined with the upper bound in (4.10), assuming that \(\varepsilon\) is sufficiently small. The bound for \(\mathsf{D}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\) follows from (6.17b) and (4.10), which together give \[|\mathsf{D}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(x,t)|\leq\tfrac{1}{\varepsilon}|\mathsf{D}\mathsf{D}_{1}w_{0}(x)|+\hat{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq\tfrac{2}{\varepsilon}+\hat{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq\tfrac{5}{2\varepsilon}\,,\] assuming that \(\varepsilon\) is sufficiently small. It remains to estimate \(\mathsf{D}J_{g}\). When \(\mathsf{D}=\mathsf{D}_{1}\) or \(\mathsf{D}=\mathsf{D}_{2}\), from (6.24b) and (4.10) we deduce \[|\mathsf{D}_{i}J_{g}(x,t)|\leq(t-\mathfrak{t}_{\text{in}})\tfrac{1+\alpha}{2\varepsilon}|\mathsf{D}\mathsf{D}_{1}w_{0}(x)|+\hat{C}(t-\mathfrak{t}_{\text{in}})\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq\tfrac{2\varepsilon}{1+\alpha}\tfrac{51}{50}\tfrac{1+\alpha}{\varepsilon}+\hat{C}\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq 3\,, \tag{9.2}\] assuming that \(\varepsilon\) is sufficiently small. When \(\mathsf{D}=\varepsilon\partial_{t}\), from (6.24d) we deduce \[|\varepsilon\partial_{t}J_{g}(x,t)|\leq\tfrac{1+\alpha}{2}|\mathsf{D}_{1}w_{0}(x)|+\hat{C}\varepsilon\leq\tfrac{1+\alpha}{2}(1+\tfrac{\varepsilon}{2})+\hat{C}\varepsilon\leq 1+\alpha\,.\] The claimed bound for \(\mathsf{D}J_{g}\) thus holds in both cases, concluding the proof of the Proposition. ### The \(\hat{\mathbf{W}}_{\mathcal{T}}\) bootstraps Next we turn to the bounds for \(\hat{\mathbf{W}}_{\mathcal{T}}\).
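The bounds in this and the following subsections all rest on the same integrating-factor (Duhamel) representation for damped and forced transport equations. As a schematic sketch (with \(\xi\) the flow of \(\partial_{t}+V\partial_{2}\), and \(m\), \(q\) standing generically for the damping and forcing coefficients, e.g. those read off from (3.32)): \[\tfrac{\mathrm{d}}{\mathrm{d}t}\big{(}f\circ\xi\big{)}=(mf+q)\circ\xi\quad\Longrightarrow\quad f\circ\xi(x,t)=f(x,\mathsf{t}_{\mathsf{in}})\,e^{\int_{\mathsf{t}_{\mathsf{in}}}^{t}m\circ\xi(x,r)\mathrm{d}r}+\int_{\mathsf{t}_{\mathsf{in}}}^{t}q\circ\xi(x,r)\,e^{\int_{r}^{t}m\circ\xi(x,r^{\prime})\mathrm{d}r^{\prime}}\mathrm{d}r\,.\] The identity (9.3) below is precisely this formula, applied with \(f=\hat{\mathbf{W}}_{\mathcal{T}}\).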
From (3.32) we obtain that \[\hat{\mathbf{W}}_{\mathcal{T}}\circ\xi(x,t)=(w_{0})_{,2}(x)\,e^{\int_{\mathfrak{t}_{\text{in}}}^{t}(\frac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}-\frac{3+2\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}})\circ\xi(x,r)\mathrm{d}r}\] \[\qquad+\int_{\mathfrak{t}_{\text{in}}}^{t}\!\!\Big{(}-\alpha\Sigma g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T},2}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,(\hat{\mathbf{Z}}_{\mathcal{T}}+2\hat{\mathbf{A}}_{\mathcal{N}})-\tfrac{1-2\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\hat{\mathbf{A}}_{\mathcal{T}}\Big{)}\circ\xi(x,r)\,e^{\int_{r}^{t}(\frac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}-\frac{3+2\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}})\circ\xi(x,r^{\prime})\mathrm{d}r^{\prime}}\mathrm{d}r\,. \tag{9.3}\] By appealing to the bootstrap inequalities (5.37) we deduce from the above formula that \[\big{|}\hat{\mathbf{W}}_{\mathcal{T}}\circ\xi(x,t)-(w_{0})_{,2}(x)\big{|}\lesssim\varepsilon^{2}\big{|}(w_{0})_{,2}(x)\big{|}+\varepsilon^{2}\,. \tag{9.4}\] Upon composing with \(\xi^{-1}(x,t)\) and using that (6.8) implies \(|(w_{0})_{,2}(\xi^{-1}(x,t))-(w_{0})_{,2}(x)|\lesssim\varepsilon^{2}\|(w_{0})_{,22}\|_{L_{x}^{\infty}}\lesssim\varepsilon^{2}\), we arrive at \[\big{|}\hat{\mathbf{W}}_{\mathcal{T}}(x,t)-(w_{0})_{,2}(x)\big{|}\leq\hat{C}\varepsilon^{2}\,, \tag{9.5}\] where the constant \(\hat{C}\) depends solely on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). In view of assumption (iv) on the initial data, which gives \(\|(w_{0})_{,2}\|_{L^{\infty}_{x}}\leq 1\), the bound (9.5) closes the bootstrap (5.37e), upon taking \(\varepsilon\) to be sufficiently small. In order to obtain a bound for \(\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}}\), we differentiate (3.32) to obtain \[(\partial_{t}+V\partial_{2})\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}}=-\mathsf{D}V\mathsf{D}_{2}\hat{\mathbf{W}}_{\mathcal{T}}+\big{(}\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}-\tfrac{3+2\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}}\] \[\quad-\alpha\mathsf{D}(\Sigma g^{-\frac{1}{2}})\mathsf{D}_{2}\hat{\mathbf{A}}_{\mathcal{T}}-\alpha\Sigma g^{-\frac{1}{2}}\mathsf{D}\mathsf{D}_{2}\hat{\mathbf{A}}_{\mathcal{T}}+\big{(}\tfrac{\alpha}{2}\mathsf{D}(\Sigma g^{-\frac{3}{2}})h_{,22}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\mathsf{D}h_{,22}-\tfrac{3+2\alpha}{2}\mathsf{D}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}\hat{\mathbf{W}}_{\mathcal{T}}\] \[\quad+\tfrac{\alpha}{2}\mathsf{D}\big{(}\Sigma g^{-\frac{3}{2}}(\hat{\mathbf{Z}}_{\mathcal{T}}+2\hat{\mathbf{A}}_{\mathcal{N}})\big{)}h_{,22}+\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\mathsf{D}h_{,22}\,(\hat{\mathbf{Z}}_{\mathcal{T}}+2\hat{\mathbf{A}}_{\mathcal{N}})-\tfrac{1-2\alpha}{2}\mathsf{D}(\hat{\mathbf{Z}}_{\mathcal{T}}\hat{\mathbf{A}}_{\mathcal{T}})\,. \tag{9.6}\] In order to derive an estimate in the spirit of (9.4), we need to estimate in \(L_{x,t}^{\infty}\) the last two lines of the above identity. The most difficult terms are \(\mathsf{D}\mathsf{D}_{2}\hat{\mathbf{A}}_{\mathcal{T}}\) and \(\mathsf{D}h_{,22}\), for which we do not have pointwise bootstrap inequalities readily available.
Instead, we use (7.23) to bound \(\|\mathsf{D}h_{,22}\|_{L_{x,t}^{\infty}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\), and we appeal to the Sobolev embedding (B.2d), which gives \(\|\mathsf{D}\mathsf{D}_{2}\hat{\mathbf{A}}_{\mathcal{T}}\|_{L_{x,t}^{\infty}}\lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle\). The remaining terms on the second and third lines on the right side of (9.6) are then bounded using the bootstrap inequalities (5.37), in \(L_{x,t}^{\infty}\), by \(\hat{C}\varepsilon\langle\mathsf{B}_{6}\rangle\). Similarly to (9.4), we then obtain \[\big{|}(\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}})\circ\xi(x,t)-\mathsf{D}(w_{0})_{,2}(x)\big{|}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{9.7}\] which in turn implies upon composing with \(\xi^{-1}\) that \[\big{|}(\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}})(x,t)-\mathsf{D}(w_{0})_{,2}(x)\big{|}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{9.8}\] By combining (9.8) with (4.11), we thus obtain \[\big{|}\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}}(x,t)\big{|}\leq\big{\|}\mathsf{D}(w_{0})_{,2}\big{\|}_{L^{\infty}_{x}}+\hat{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq\mathsf{C}_{\mathsf{data}}+\hat{C}\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{9.9}\] where as usual we have \(\hat{C}=\hat{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\). Taking \(\varepsilon\) to be sufficiently small with respect to \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\), and \(\mathsf{B}_{6}\), closes the bootstrap inequality (5.37f). ### The \(\Sigma\) bootstraps We seek sharp estimates for the quantities \[\Sigma-\sigma_{0}\,,\qquad\text{and}\qquad\tfrac{\mathsf{D}\Sigma}{\Sigma}-\tfrac{\mathsf{D}\sigma_{0}}{\sigma_{0}}\,.\] For this purpose, we appeal to (6.8), (6.9), and the \(\Sigma\) evolution in (3.19b). First, by using the bootstraps (5.37g) and (5.37j), we deduce that \[\big{|}\Sigma\circ\xi(x,t)-\sigma_{0}(x)\big{|}=\sigma_{0}(x)\Big{|}e^{-\alpha\int_{\mathsf{t}_{\mathsf{in}}}^{t}(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})\circ\xi(x,r)\mathrm{d}r}-1\Big{|}\leq\hat{C}\varepsilon\sigma_{0}(x)e^{\hat{C}\varepsilon}\,.\] Since the mean value theorem gives \(|\Sigma\circ\xi(x,t)-\Sigma(x,t)|\leq|\xi(x,t)-x_{2}|\|\Sigma_{,2}\|_{L^{\infty}_{x,t}}\), by (4.8), (6.8), and (5.37q), we deduce that \[\big{|}\tfrac{\Sigma(x,t)}{\sigma_{0}(x)}-1\big{|}\leq\hat{C}\varepsilon e^{\hat{C}\varepsilon}+\tfrac{\hat{C}\varepsilon^{2}}{\sigma_{0}(x)}\leq\hat{C}\varepsilon \tag{9.10}\] upon taking \(\varepsilon\) sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). The initial data assumption (4.8) closes the bootstrap (5.37p) if we choose \(\varepsilon\) to be small enough to ensure \(\hat{C}\varepsilon\leq\tfrac{1}{12}\kappa_{0}\).
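The exponential comparisons used here (and again for \(\tfrac{\mathsf{D}\Sigma}{\Sigma}\) below) follow from the elementary inequality (a sketch; \(y\) stands for the time integral in the exponent, whose \(O(\varepsilon)\) size is supplied by the bootstraps, since the time interval has length \(O(\varepsilon)\)): \[\big{|}e^{y}-1\big{|}=\Big{|}\int_{0}^{1}y\,e^{\theta y}\,\mathrm{d}\theta\Big{|}\leq|y|\,e^{|y|}\,,\qquad\text{so that}\qquad|y|\leq\hat{C}\varepsilon\ \Longrightarrow\ \big{|}e^{y}-1\big{|}\leq\hat{C}\varepsilon\,e^{\hat{C}\varepsilon}\,.\]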
Next, differentiating (3.19b) we deduce \[(\partial_{t}+V\partial_{2})\tfrac{\mathsf{D}\Sigma}{\Sigma}=-\mathsf{D}V\tfrac{\Sigma_{,2}}{\Sigma}-\alpha\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}-\alpha\mathsf{D}\hat{\mathbf{A}}_{\mathcal{T}}\,,\] and thus by appealing to (5.37g), (5.37h), and (5.37o), we obtain that \[\big{|}\tfrac{\mathsf{D}\Sigma}{\Sigma}\circ\xi(x,t)-\tfrac{\mathsf{D}\sigma_{0}}{\sigma_{0}}(x)\big{|}\leq\tfrac{|\mathsf{D}\sigma_{0}(x)|}{\sigma_{0}(x)}\Big{|}e^{\int_{\mathsf{t}_{\mathsf{in}}}^{t}|\mathsf{D}V|\circ\xi\mathrm{d}r}-1\Big{|}+\alpha e^{\int_{\mathsf{t}_{\mathsf{in}}}^{t}|\mathsf{D}V|\circ\xi\mathrm{d}r}\int_{\mathsf{t}_{\mathsf{in}}}^{t}|\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}\circ\xi|+|\mathsf{D}\hat{\mathbf{A}}_{\mathcal{T}}\circ\xi|\mathrm{d}r\leq\hat{C}\varepsilon^{2}\tfrac{|\mathsf{D}\sigma_{0}(x)|}{\sigma_{0}(x)}+\hat{C}\varepsilon\,.\] Using the bound \(\|\mathsf{D}_{2}\tfrac{\mathsf{D}\Sigma}{\Sigma}\|_{L^{\infty}_{x,t}}\lesssim 1+\|\mathsf{D}^{2}\Sigma\|_{L^{\infty}_{x,t}}\lesssim 1+\varepsilon^{-1}\|\mathsf{D}^{4}\mathsf{D}_{1}\Sigma\|_{L^{2}_{x,t}}\lesssim\langle\mathsf{B}_{6}\rangle\), and the estimate (6.8), we deduce from the above bound that \[\big{|}\tfrac{\mathsf{D}\Sigma}{\Sigma}(x,t)-\tfrac{\mathsf{D}\sigma_{0}}{\sigma_{0}}(x)\big{|}\leq\hat{C}\varepsilon^{2}\tfrac{|\mathsf{D}\sigma_{0}(x)|}{\sigma_{0}(x)}+\hat{C}\varepsilon+\hat{C}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\leq\hat{C}\varepsilon^{2}\tfrac{|\mathsf{D}\sigma_{0}(x)|}{\sigma_{0}(x)}+\hat{C}\varepsilon\,.\] Note that from assumptions (ii) and (iv), together with (4.10) and (4.11), upon taking \(\varepsilon\) to be sufficiently small with respect to \(\kappa_{0}\) and \(\mathsf{C}_{\mathsf{data}}\), we have that \[\big{|}\mathsf{D}\sigma_{0}(x)\big{|}\leq\tfrac{2}{3}\,,\qquad\text{and}\qquad\big{|}\tfrac{\mathsf{D}\sigma_{0}}{\sigma_{0}}(x)\big{|}\leq\tfrac{1}{\kappa_{0}}\leq 1\,, \tag{9.11}\] where we have used that \(\kappa_{0}\geq 1\). By also appealing to (9.10), and by taking \(\varepsilon\) to be sufficiently small with respect to \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\), and \(\langle\mathsf{B}_{6}\rangle\), we deduce \[\big{|}\mathsf{D}\Sigma(x,t)\big{|}\leq|\mathsf{D}\sigma_{0}(x)|(1+\varepsilon\hat{C})+\varepsilon\hat{C}\leq\tfrac{2}{3}+\hat{C}\varepsilon\leq 1\leq\kappa_{0}\,. \tag{9.12}\] This closes the bootstrap (5.37q). ### The \(h\) bootstraps We first note that the bootstrap (5.37m) implies \[\big{|}g(x,t)-1\big{|}\leq(1+\alpha)^{2}\kappa_{0}^{2}\varepsilon^{2}\,.\] In order to close the bootstrap (5.37m) we first recall from (3.8) that \(\mathsf{D}_{t}h=\varepsilon\partial_{t}h=\varepsilon g^{-\frac{1}{2}}(\tfrac{1+\alpha}{2}W+\tfrac{1-\alpha}{2}Z)\). Therefore, with (6.12) we arrive at \[|\mathsf{D}_{t}h|\leq\varepsilon(1-\hat{C}\varepsilon^{2})^{-\frac{1}{2}}\big{(}\tfrac{3(1+\alpha)}{4}\kappa_{0}+\hat{C}\varepsilon\big{)}<\tfrac{5(1+\alpha)}{6}\varepsilon\kappa_{0}\,,\] which closes the \(\mathsf{D}_{t}h\) part of the bootstrap (5.37m). Moreover, since \(h_{,1}=g^{\frac{1}{2}}J_{g}\), from (5.37k) we deduce \[|\mathsf{D}_{1}h|\leq\tfrac{3}{2}\varepsilon(1+\hat{C}\varepsilon^{2})^{\frac{1}{2}}\,,\] which closes the \(\mathsf{D}_{1}h\) part of the bootstrap (5.37m) upon taking \(\varepsilon\) to be sufficiently small.
On the other hand, from (3.13b), (5.37e), and (5.37i), we obtain that \[\big{|}h_{,2}\circ\xi(x,t)\big{|}\leq\tfrac{4\varepsilon}{1+\alpha}(1+\hat{C}\varepsilon^{2})^{\frac{1}{2}}\Big{(}\tfrac{1+\alpha}{2}\cdot(1+\varepsilon)+\tfrac{1-\alpha}{2}\cdot\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\varepsilon\Big{)}\leq 2\varepsilon+\hat{C}\varepsilon^{2}\,,\] which improves the \(\mathsf{D}_{2}h\) part of (5.37m). Next, we estimate \(\mathsf{D}h_{,2}\). Upon applying \(\mathsf{D}\) to (3.13b), we derive \[(\partial_{t}+V\partial_{2})\mathsf{D}h_{,2}+\mathsf{D}Vh_{,22}=\tfrac{1+\alpha}{2}g\mathsf{D}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}g\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}+2(\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}})h_{,2}\,\mathsf{D}h_{,2}\,.\] By appealing to the bootstraps (5.37e), (5.37f), (5.37g), (5.37o), we deduce from the above identity that \[\big{|}\big{(}\mathsf{D}h_{,2}\big{)}\circ\xi(x,t)\big{|}\leq\tfrac{4\varepsilon}{1+\alpha}(1+\hat{C}\varepsilon^{2})\big{(}\tfrac{1+\alpha}{2}\cdot 2\mathsf{C}_{\mathsf{data}}+\hat{C}\varepsilon\big{)}e^{\hat{C}\varepsilon}\leq 4\mathsf{C}_{\mathsf{data}}\varepsilon+\hat{C}\varepsilon^{2}\,.\] Taking \(\varepsilon\) to be sufficiently small, and composing with \(\xi^{-1}\), then improves the \(\mathsf{D}\mathsf{D}_{2}h\) part of the bootstrap (5.37n). In order to improve the \(\mathsf{D}\mathsf{D}_{1}h\) part of (5.37n), we just note that \[|\mathsf{D}\mathsf{D}_{1}h|\leq\varepsilon g^{\frac{1}{2}}|\mathsf{D}J_{g}|+\varepsilon J_{g}g^{-\frac{1}{2}}|h_{,2}\,\mathsf{D}h_{,2}|\leq\varepsilon(1+\mathring{C}\varepsilon^{2})4(1+\alpha)+\mathring{C}\varepsilon^{3}\leq 4(1+\alpha)\varepsilon+\mathring{C}\varepsilon^{3}\,.\] ### The \(V\) bootstraps From (3.6), (5.37m), and (6.12) we obtain that \[\big{|}V\big{|}\leq(1+\mathring{C}\varepsilon^{2})^{-\frac{1}{2}}\Big{(}\kappa_{0}\varepsilon\big{(}1+6\kappa_{0}+\tfrac{4\alpha}{1+\alpha}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\big{)}+3\varepsilon\big{(}\tfrac{1+\alpha}{2}\tfrac{3}{2}\kappa_{0}+\mathring{C}\varepsilon\big{)}\Big{)}\leq\kappa_{0}\varepsilon\big{(}1+10(1+\alpha)\kappa_{0}+\tfrac{4\alpha}{1+\alpha}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\big{)}\,, \tag{9.13}\] upon taking \(\varepsilon\) to be sufficiently small. Next, the bootstrap inequalities (5.37) and the identities (6.7c)-(6.7d), yield \[|\mathsf{D}_{1}V|\leq 5\mathsf{C}_{\mathsf{data}}\alpha\varepsilon\kappa_{0}(1-\mathring{C}\varepsilon^{2})^{-\frac{3}{2}}+\varepsilon(1-\mathring{C}\varepsilon^{2})^{-\frac{1}{2}}\big{(}\tfrac{3}{2}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+5\kappa_{0}\varepsilon\tfrac{1+\alpha}{\varepsilon}\big{)}+\mathring{C}\varepsilon^{2}\leq\varepsilon\kappa_{0}\big{(}6\mathsf{C}_{\mathsf{data}}+5(1+\alpha)+\tfrac{3}{2}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\big{)}\,, \tag{9.14a}\] \[|\mathsf{D}_{2}V|\leq 5\mathsf{C}_{\mathsf{data}}\alpha\varepsilon\kappa_{0}(1-\mathring{C}\varepsilon^{2})^{-\frac{3}{2}}+\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\varepsilon+10\kappa_{0}^{2}\varepsilon\leq\varepsilon\kappa_{0}\big{(}6\mathsf{C}_{\mathsf{data}}\alpha+10\kappa_{0}+\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\big{)}\,, \tag{9.14b}\] where we have used that \(\kappa_{0}\geq 1\). It remains to estimate \(\mathsf{D}_{t}V=\varepsilon\partial_{t}V\).
For this purpose, we appeal to (3.23), which combined with the bootstrap inequalities (5.37) yields (for a constant \(C_{1}=C_{1}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})>0\)) \[\big{|}\mathsf{D}_{t}V\big{|}\leq\mathring{C}\varepsilon^{3}+\alpha\kappa_{0}\varepsilon(1-\mathring{C}\varepsilon^{2})^{-\frac{1}{2}}\Big{(}(2+\alpha)\kappa_{0}+\mathring{C}\varepsilon+\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\Big{)}\leq\alpha\kappa_{0}\varepsilon\big{(}2(1+\alpha)\kappa_{0}+\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\big{)} \tag{9.14c}\] by choosing \(\varepsilon\) to be sufficiently small, in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). By combining (9.13) and (9.14), and choosing the constant \(\mathsf{C}_{\mathsf{V}}\) to satisfy \[\mathsf{C}_{\mathsf{V}}\geq 2\Big{(}1+27(1+\alpha)\kappa_{0}+6(1+\alpha)\mathsf{C}_{\mathsf{data}}+\tfrac{13}{2}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\Big{)}\,, \tag{9.15}\] we have thus improved the bootstrap (5.37o). We note that \(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\) and \(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\) only depend on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and therefore so does \(\mathsf{C}_{\mathsf{V}}\). ### The \(\hat{\mathbf{A}}\) bootstraps The estimate for \(\hat{\mathbf{A}}_{\mathcal{N}}\) is rather direct to obtain, since we have already estimated the vorticity \(\Omega=\hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{1}{2}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\) in (8.3). Indeed, by using (8.3), (5.37e), and (5.37i), we obtain that \[\big{|}\hat{\mathbf{A}}_{\mathcal{N}}\big{|}\leq\|\Omega\|_{L^{\infty}_{x,t}}+\tfrac{1}{2}\big{|}\hat{\mathbf{W}}_{\mathcal{T}}\big{|}+\tfrac{1}{2}\big{|}\hat{\mathbf{Z}}_{\mathcal{T}}\big{|}\leq C_{(8.3)}+\tfrac{1}{2}(1+\varepsilon)+\tfrac{1}{2}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\varepsilon\,,\] and an analogous argument bounds \(\mathsf{D}\hat{\mathbf{A}}_{\mathcal{N}}\). These estimates thus close the bootstrap assumption (5.37h). Note that the constants appearing in these bounds depend only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). The bounds for \(\hat{\mathbf{A}}_{\mathcal{T}}\) and \(\mathsf{D}\hat{\mathbf{A}}_{\mathcal{T}}\) are obtained in a similar fashion, producing the estimate (9.27) together with the conditions (9.26) and (9.32) on \(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\). Thus, if we ensure that \[\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\geq 4C_{(9.32)}\langle\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\rangle\langle\mathsf{B}_{6}\rangle\,, \tag{9.33}\] then \[\big{|}\mathsf{D}\hat{\mathbf{A}}_{\mathcal{T}}\big{|}\leq\tfrac{1}{4}\varepsilon\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\,. \tag{9.34}\] We note that (9.27) and (9.34) close the bootstrap (5.37j).
Moreover, we emphasize that the conditions (9.26) and (9.33) on \(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}\) mandate that this constant is chosen to be sufficiently large with respect to \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\), but also with respect to \(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}\) (which has been chosen already in terms of \(\alpha,\mathsf{C}_{\mathsf{data}}\); cf. (9.17) and (9.20)) and \(\mathsf{B}_{6}\) (which will be chosen to be dependent only on \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\); cf. (10.74) and (12.92)). ### The \(\hat{\mathbf{Z}}\) bootstraps From (3.28) we deduce that \(\hat{\mathbf{Z}}_{\mathcal{N}}\) solves the forced transport equation \[\big{(}J_{g}\partial_{t}+J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\partial_{2}-2\alpha\Sigma\partial_{1}\big{)}\hat{\mathbf{Z}}_{\mathcal{N}}=m_{\hat{\mathbf{Z}}_{\mathcal{N}}}\hat{\mathbf{Z}}_{\mathcal{N}}+q_{\hat{\mathbf{Z}}_{\mathcal{N}}}\,, \tag{9.35}\] where we have denoted \[m_{\hat{\mathbf{Z}}_{\mathcal{N}}}:=-\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22} \tag{9.36}\] and \[q_{\hat{\mathbf{Z}}_{\mathcal{N}}}:=2\alpha\Sigma(g^{-\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{T}}+\hat{\mathbf{A}}_{\mathcal{N}})J_{g,2}+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\hat{\mathbf{A}}_{\mathcal{N},2}+\tfrac{\alpha}{2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}-\Sigma g^{-\frac{3}{2}}h_{,22}\big{)}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\] \[\qquad-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{3-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{A}}_{\mathcal{N}}-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}J_{g}\hat{\mathbf{Z}}_{\mathcal{T}}+\alpha\Sigma g^{-\frac{3}{2}}h_{,22}\,J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\,. \tag{9.37}\] Since (9.35) takes the form of (C.11), with \(f=\hat{\mathbf{Z}}_{\mathcal{N}}\) and \(\alpha\) replaced by \(2\alpha\), the desired upper bound for \(\hat{\mathbf{Z}}_{\mathcal{N}}\) will follow from (C.13). We note that since (5.37) implies \(\|m_{\hat{\mathbf{Z}}_{\mathcal{N}}}\|_{L^{\infty}_{x,t}}\leq(1+\alpha)\varepsilon^{-1}\) (upon letting \(\varepsilon\) be sufficiently small), we may choose the exponent \(\beta\) appearing in (C.13) to equal \(\beta=\beta(\alpha)=\tfrac{20(1+\alpha)}{\alpha}\). Moreover, (4.11) implies that \(\|\hat{\mathbf{Z}}_{\mathcal{N}}(\cdot,\mathsf{t}_{\mathsf{in}})\|_{L^{\infty}_{x}}\leq\mathsf{C}_{\mathsf{data}}\). Thus, it remains to estimate the space-time \(L^{\infty}\) norm of \(q_{\hat{\mathbf{Z}}_{\mathcal{N}}}\).
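Before doing so, we record for orientation an unweighted caricature of the estimate (C.13) (a heuristic sketch only: it ignores the \(J_{g}\) weight in (9.35), which is responsible for the \((4e^{18})^{\beta}\) prefactor in the actual bound). If \(f\) solves a forced transport equation with damping \(m\) and forcing \(q\), then along the flow, \[\sup_{\mathsf{t}_{\mathsf{in}}\leq r\leq t}\big{|}f\circ\xi(x,r)\big{|}\leq e^{(t-\mathsf{t}_{\mathsf{in}})\|m\|_{L^{\infty}_{x,t}}}\Big{(}\big{|}f(x,\mathsf{t}_{\mathsf{in}})\big{|}+(t-\mathsf{t}_{\mathsf{in}})\|q\|_{L^{\infty}_{x,t}}\Big{)}\,,\] so a damping of size \((1+\alpha)\varepsilon^{-1}\) acting on a time window of length \(O(\varepsilon)\) costs only an \(\alpha\)-dependent constant, consistent with the \(\tfrac{\varepsilon}{1+\alpha}\) factor multiplying \(C_{(9.38)}\) in (9.39) below.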
From the bootstrap inequalities (5.37) we deduce \[\|q_{\hat{\mathbf{Z}}_{\mathcal{N}}}\|_{L^{\infty}_{x,t}}\leq 8\alpha(1+\alpha)\kappa_{0}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+\alpha\kappa_{0}\tfrac{6}{5}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+\tfrac{\alpha}{2}(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}+5\kappa_{0}\mathsf{C}_{\mathsf{data}})+\tfrac{1+\alpha}{2}\tfrac{6}{5}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+\hat{C}\varepsilon\leq(1+\alpha)(1+9\alpha)\kappa_{0}\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{N}}}+\tfrac{\alpha}{2}(\mathsf{C}_{\hat{\mathbf{A}}_{\mathcal{T}}}+5\kappa_{0}\mathsf{C}_{\mathsf{data}})=:C_{(9.38)}\,, \tag{9.38}\] where \(C_{(9.38)}>0\) is explicitly computable solely in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). With the above estimate, we deduce from (C.13) the pointwise estimate \[\big{|}\hat{\mathbf{Z}}_{\mathcal{N}}\big{|}\leq(4e^{18})^{\frac{20(1+\alpha)}{\alpha}}\Big{(}\big{\|}\hat{\mathbf{Z}}_{\mathcal{N}}(\cdot,\mathsf{t}_{\mathsf{in}})\big{\|}_{L^{\infty}_{x}}+\tfrac{\varepsilon}{1+\alpha}C_{(9.38)}\Big{)}\leq 2\mathsf{C}_{\mathsf{data}}(4e^{18})^{\frac{20(1+\alpha)}{\alpha}} \tag{9.39}\] upon taking \(\varepsilon\) to be sufficiently small. This justifies the first condition we need to impose on \(\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\), namely \[\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\geq 8\mathsf{C}_{\mathsf{data}}(4e^{18})^{\frac{20(1+\alpha)}{\alpha}}\,, \tag{9.40}\] which in turn implies the pointwise estimate \[\big{|}\hat{\mathbf{Z}}_{\mathcal{N}}\big{|}\leq\tfrac{1}{4}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\,. \tag{9.41}\] The estimate for \(\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}\) is obtained by differentiating (9.35), which leads to \[\big{(}J_{g}\partial_{t}+J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\partial_{2}-2\alpha\Sigma\partial_{1}\big{)}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}=\underbrace{m_{\hat{\mathbf{Z}}_{\mathcal{N}}}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}-\mathsf{D}J_{g}\partial_{t}\hat{\mathbf{Z}}_{\mathcal{N}}-\mathsf{D}\big{(}J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\big{)}\partial_{2}\hat{\mathbf{Z}}_{\mathcal{N}}+2\alpha\mathsf{D}\Sigma\partial_{1}\hat{\mathbf{Z}}_{\mathcal{N}}}_{=:m_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}+q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\,, \tag{9.42}\] where \[q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}=\mathsf{D}m_{\hat{\mathbf{Z}}_{\mathcal{N}}}\hat{\mathbf{Z}}_{\mathcal{N}}+\mathsf{D}q_{\hat{\mathbf{Z}}_{\mathcal{N}}}\,. \tag{9.43}\] As before, it follows that \(\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}\) solves a forced transport equation of the type (C.11). We start by using (5.37) to estimate the stretching factor \[\|m_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\|_{L^{\infty}_{x,t}}\leq\tfrac{1+\alpha}{\varepsilon}+\tfrac{5(1+\alpha)}{\varepsilon}+\hat{C}\varepsilon+\tfrac{6\alpha\kappa_{0}}{\varepsilon}\leq\tfrac{6(1+\alpha)(1+\kappa_{0})}{\varepsilon} \tag{9.44}\] by taking \(\varepsilon\) to be sufficiently small. As such, we may take the constant \(\beta\) appearing in (C.13) to equal \(\beta=\tfrac{120}{\alpha}(1+\alpha)(1+\kappa_{0})\).
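For the record, this value of \(\beta\) is just the \(\varepsilon\)-normalized damping bound from (9.44) multiplied by \(\tfrac{20}{\alpha}\), exactly as in the choice \(\beta=\tfrac{20(1+\alpha)}{\alpha}\) made after (9.36) (a sketch of the arithmetic, under the assumption that (C.13) is applied with this normalization): \[\beta=\tfrac{20}{\alpha}\cdot\varepsilon\,\|m_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\|_{L^{\infty}_{x,t}}\leq\tfrac{20}{\alpha}\cdot 6(1+\alpha)(1+\kappa_{0})=\tfrac{120}{\alpha}(1+\alpha)(1+\kappa_{0})\,.\]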
In order to estimate \(q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\), as defined in (9.43), we use the bootstraps (5.37), the previously derived bound (9.39), the Sobolev estimate (B.2d), the initial data bounds (4.11), the bounds for the geometry (7.1), and the improved estimate (8.21), to deduce \[\|q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}}\|_{L^{\infty}_{x,t}}\leq 2\mathsf{C}_{\mathsf{data}}(4e^{18})^{\frac{20(1+\alpha)}{\alpha}}\big{(}\tfrac{1+\alpha}{2}(1+\alpha)\mathsf{C}_{\mathsf{data}}\varepsilon^{-1}+\mathring{C}\big{)}+\mathring{C}\langle\mathsf{B}_{6}\rangle\,. \tag{9.45}\] By taking \(\varepsilon\) to be sufficiently small with respect to \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\) and \(\mathsf{B}_{6}\), we deduce from the above two estimates and (C.13) that \[\big{|}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}\big{|}\leq(4e^{18})^{\beta}\mathsf{C}_{\mathsf{data}}+\tfrac{1+\alpha}{6(1+\kappa_{0})}(4e^{18})^{\frac{20(1+\alpha)}{\alpha}+\beta}\mathsf{C}_{\mathsf{data}}^{2}\,, \tag{9.46}\] where we recall that \(\beta=\beta(\alpha,\kappa_{0})=\tfrac{120}{\alpha}(1+\alpha)(1+\kappa_{0})\). Thus, by further imposing that \[\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\geq 4\Big{(}(4e^{18})^{\beta}\mathsf{C}_{\mathsf{data}}+\tfrac{1+\alpha}{6(1+\kappa_{0})}(4e^{18})^{\frac{20(1+\alpha)}{\alpha}+\beta}\mathsf{C}_{\mathsf{data}}^{2}\Big{)}\,, \tag{9.47}\] we obtain \[\big{|}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{N}}\big{|}\leq\tfrac{1}{4}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\,, \tag{9.48}\] which together with (9.41) closes the bootstrap inequality (5.37g). We note that the constant \(\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\) depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), through (9.40) and (9.47). It thus remains to estimate \(\hat{\mathbf{Z}}_{\mathcal{T}}\) and \(\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}\). For this purpose, (3.34) yields \[\big{(}J_{g}\partial_{t}+J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\partial_{2}-2\alpha\Sigma\partial_{1}\big{)}\hat{\mathbf{Z}}_{\mathcal{T}}=-\big{(}\underbrace{\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,J_{g}+\tfrac{3}{2}J_{g}\hat{\mathbf{A}}_{\mathcal{T}}}_{=:m_{\hat{\mathbf{Z}}_{\mathcal{T}}}}\big{)}\hat{\mathbf{Z}}_{\mathcal{T}}+q_{\hat{\mathbf{Z}}_{\mathcal{T}}}\,, \tag{9.49}\] where we have denoted \[q_{\hat{\mathbf{Z}}_{\mathcal{T}}}=2\alpha\Sigma J_{g,2}\big{(}\hat{\mathbf{A}}_{\mathcal{T}}-g^{-\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}+\alpha\Sigma g^{-\frac{1}{2}}J_{g}\hat{\mathbf{A}}_{\mathcal{T},2}-\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}h_{,22}\,J_{g}\big{(}\hat{\mathbf{W}}_{\mathcal{T}}+2\hat{\mathbf{A}}_{\mathcal{N}}\big{)}-\tfrac{1}{2}J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\hat{\mathbf{W}}_{\mathcal{T}}\,. \tag{9.50}\] Thus, \(\hat{\mathbf{Z}}_{\mathcal{T}}\) solves a forced transport equation of the type (C.11), with \(\alpha\) replaced by \(2\alpha\). From (5.37) we note that \[\|m_{\hat{\mathbf{Z}}_{\mathcal{T}}}\|_{L^{\infty}_{x,t}}\leq\mathring{C}\varepsilon\,, \tag{9.51}\] and thus by taking \(\varepsilon\) to be sufficiently small we may ensure that \(\beta=1\) in (C.13). Next, we estimate \[\|q_{\hat{\mathbf{Z}}_{\mathcal{T}}}\|_{L^{\infty}_{x,t}}\leq 10\alpha(1+\alpha)\kappa_{0}(\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}+\mathring{C}\varepsilon)+\mathring{C}\varepsilon\leq 10(1+\alpha)^{2}\kappa_{0}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}} \tag{9.52}\] upon taking \(\varepsilon\) to be sufficiently small in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\).
From the above bound, (4.11), and (C.13), we deduce \[\big{|}\hat{\mathbf{Z}}_{\mathcal{T}}\big{|}\leq 4e^{18}\varepsilon\mathsf{C}_{\mathsf{data}}+\varepsilon\tfrac{20}{\alpha}4e^{18}\cdot 10(1+\alpha)^{2}\kappa_{0}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\,. \tag{9.53}\] As such, if we ensure \[\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\geq 16e^{18}\big{(}\mathsf{C}_{\mathsf{data}}+\tfrac{200(1+\alpha)^{2}\kappa_{0}}{\alpha}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\big{)}\,, \tag{9.54}\] then we have the uniform estimate \[\big{|}\hat{\mathbf{Z}}_{\mathcal{T}}\big{|}\leq\tfrac{1}{4}\varepsilon\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\,. \tag{9.55}\] The estimate for \(\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}\) is obtained by differentiating (9.49), which yields \[\big{(}J_{g}\partial_{t}+J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\partial_{2}-2\alpha\Sigma\partial_{1}\big{)}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}=\underbrace{-m_{\hat{\mathbf{Z}}_{\mathcal{T}}}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}-\mathsf{D}J_{g}\partial_{t}\hat{\mathbf{Z}}_{\mathcal{T}}-\mathsf{D}\big{(}J_{g}(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\big{)}\mathsf{D}_{2}\hat{\mathbf{Z}}_{\mathcal{T}}+2\alpha\mathsf{D}\Sigma\mathsf{D}_{1}\hat{\mathbf{Z}}_{\mathcal{T}}}_{=:m_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}+q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}\,, \tag{9.56}\] where we have denoted \[q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}=\hat{\mathbf{Z}}_{\mathcal{T}}\mathsf{D}m_{\hat{\mathbf{Z}}_{\mathcal{T}}}+\mathsf{D}q_{\hat{\mathbf{Z}}_{\mathcal{T}}}\,. \tag{9.57}\] Using (5.37) we first bound \[\|m_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}\|_{L^{\infty}_{x,t}}\leq\tfrac{5(1+\alpha)}{\varepsilon}+\tfrac{6\alpha\kappa_{0}}{\varepsilon}+\mathring{C}\varepsilon\leq\tfrac{6(1+\alpha+\alpha\kappa_{0})}{\varepsilon}\,, \tag{9.58}\] so that the constant \(\beta\) in (C.13) may be taken to equal \(\beta=\tfrac{120}{\alpha}(1+\alpha+\alpha\kappa_{0})\). Next, by using the bootstraps (5.37), the Sobolev estimate (B.2d), the initial data bounds (4.11), and the bounds for the geometry (7.1), we obtain \[\|q_{\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}}\|_{L^{\infty}_{x,t}}\leq\mathring{C}\langle\mathsf{B}_{6}\rangle\varepsilon+\mathring{C}\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\leq C_{(9.59)}\langle\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\rangle \tag{9.59}\] for some computable constant \(C_{(9.59)}\), which depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). From the above bound, (4.11), and (C.13), we deduce the pointwise estimate \[\big{|}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}\big{|}\leq(4e^{18})^{\beta}\varepsilon\mathsf{C}_{\mathsf{data}}+\varepsilon\tfrac{1}{6(1+\alpha+\alpha\kappa_{0})}(4e^{18})^{\beta}C_{(9.59)}\langle\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\rangle\,, \tag{9.60}\] where we recall that \(\beta=\frac{120}{\alpha}(1+\alpha+\alpha\kappa_{0})\).
Thus, choosing \(\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\) large enough to ensure \[\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\geq 4(4e^{18})^{\beta}\big{(}\mathsf{C}_{\mathsf{data}}+C_{(9.59)}\langle\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\rangle\big{)}\,, \tag{9.61}\] we obtain the uniform bound \[\big{|}\mathsf{D}\hat{\mathbf{Z}}_{\mathcal{T}}\big{|}\leq\tfrac{1}{4}\varepsilon\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\,, \tag{9.62}\] which together with (9.55) closes the bootstrap (5.37i). We note that since \(C_{(9.59)}\) and \(\langle\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{N}}}\rangle\) only depend on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), so does \(\mathsf{C}_{\hat{\mathbf{Z}}_{\mathcal{T}}}\); cf. (9.54) and (9.61). ## 10. The sixth order energy estimates for the tangential components Applying \(\widetilde{\mathsf{D}}^{6}\) to the evolution equations for the tangential components and passing to \((x,\mathsf{s})\) variables, we obtain the sixth-order differentiated \((\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})\) equations. Testing these equations with suitable \(\Sigma^{-2\beta}\)-weighted multiples of \(\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})\) and integrating over spacetime produces energy identities whose constituent integrals \(I^{\hat{\mathbf{W}}_{\mathcal{T}}}\), \(I^{\hat{\mathbf{Z}}_{\mathcal{T}}}\), and \(I^{\hat{\mathbf{A}}_{\mathcal{T}}}\) are recorded in (10.6)-(10.8) below; here the weights \(\mathcal{J}^{\frac{3}{2}}\) and \(J_{g}\) enter through the energy functional,
The goal of this section is to show that (10.1), a good choice of \(\beta=\beta(\alpha)\), and choosing \(\varepsilon\) to be sufficiently small with respect to \(\alpha,\kappa_{0}\) and \(\mathsf{C}_{\mathsf{data}}\), implies a differential inequality of the type \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{0}\mathbf{s})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\tau},\hat{\mathbf{ Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot,\mathbf{s})\big{\|}_{L^{2}_{x}}^{2}+ \tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{ 1}{4}}J_{y}\frac{1}{2}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(\hat{ \mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot, \mathbf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathbf{s}^{\prime}\] \[\qquad\leq C(\alpha)\varepsilon(\tfrac{4}{\kappa_{0}})^{2\beta} \mathsf{B}_{6}^{2}+\tfrac{C(\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\| \tfrac{\mathcal{J}^{\frac{3}{4}}(J_{0}\mathbf{s})^{\frac{1}{2}}}{\Sigma^{\beta }}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot,\mathbf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathbf{s}^{\prime}\,, \tag{10.5}\] where \(C(\alpha)>0\) is a constant that only depends on \(\alpha\). The true inequality we establish, see (10.69) below, is a bit more complicated because it turns out we need to augment the tangential energy estimate with the energy term \(\tfrac{1}{\varepsilon^{2}}\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}}{\Sigma^{ \beta_{\alpha}}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot, \mathbf{s})\big{\|}_{L^{2}_{x}}^{2}\). Leaving this complication aside, if we were to establish (10.5), then a standard Gronwall inequality in \(\mathbf{s}\in[0,\varepsilon]\) shows that \[\sup_{s\in[0,\varepsilon]}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{0} \mathbf{s})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(\hat{ \mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot, \mathbf{s})\big{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{ s}}\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}J_{y}\frac{1}{2}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau}, \hat{\mathbf{A}}_{\tau})(\cdot,\mathbf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathbf{s}^{\prime}\leq C^{\prime}(\alpha)\varepsilon(\tfrac{4}{ \kappa_{0}})^{2\beta}\mathsf{B}_{6}^{2}\,,\] for another constant \(C^{\prime}(\alpha)>0\) which only depends on \(\alpha\). Upon multiplying the above estimate by \(\kappa^{2\beta}\), noting that cf. (5.37p) we have \(1\leq\kappa_{0}^{2\beta}\Sigma^{-2\beta}\), and recalling that \(\beta=\beta(\alpha)\) we deduce that the tangential part of the bootstrap (5.37r) may be closed as soon as \(K\) is taken to be sufficiently large with respect to \(\alpha\). This argument is made precise in (10.8) below. 
### The integral \(I^{\hat{\mathbf{W}}_{\mathcal{T}}}\) We additively decompose the integral \(I^{\hat{\mathbf{W}}_{\mathcal{T}}}\) as \[I^{\hat{\mathbf{W}}_{\mathcal{T}}}=I_{1}^{\hat{\mathbf{W}}_{\mathcal{T}}}+I_{2}^{\hat{\mathbf{W}}_{\mathcal{T}}}+I_{3}^{\hat{\mathbf{W}}_{\mathcal{T}}}+I_{4}^{\hat{\mathbf{W}}_{\mathcal{T}}}\,,\] \[I_{1}^{\hat{\mathbf{W}}_{\mathcal{T}}}=\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\mathcal{T}}\,\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\mathcal{T}}\,, \tag{10.6a}\] \[I_{2}^{\hat{\mathbf{W}}_{\mathcal{T}}}=\alpha\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{A}}_{\mathcal{T}}\,\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\mathcal{T}}\,, \tag{10.6b}\] \[I_{3}^{\hat{\mathbf{W}}_{\mathcal{T}}}=-\alpha\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}g^{-\frac{1}{2}}J_{g}(\Omega+\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\mathcal{T}\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\mathcal{T}}\,, \tag{10.6c}\] while \(I_{4}^{\hat{\mathbf{W}}_{\mathcal{T}}}\) collects the correspondingly weighted contributions of the remaining forcing and commutator terms in the sixth-order \(\hat{\mathbf{W}}_{\mathcal{T}}\) equation. The integrals \(I^{\hat{\mathbf{Z}}_{\mathcal{T}}}=I_{1}^{\hat{\mathbf{Z}}_{\mathcal{T}}}+\cdots+I_{8}^{\hat{\mathbf{Z}}_{\mathcal{T}}}\) and \(I^{\hat{\mathbf{A}}_{\mathcal{T}}}=I_{1}^{\hat{\mathbf{A}}_{\mathcal{T}}}+\cdots\) arising from the \(\hat{\mathbf{Z}}_{\mathcal{T}}\) and \(\hat{\mathbf{A}}_{\mathcal{T}}\) equations are decomposed analogously in (10.7) and (10.8); the additional terms there account for the \(2\alpha\Sigma\partial_{1}\) part of the transport operator and for the over-differentiated geometry. For the terms involving a derivative of the fundamental variables, we add (10.7f) and (10.8f), and integrate by parts using (5.28c) to arrive at the identity (10.13), with error terms recorded in (10.14), where we have appealed to (5.37), (6.38), (6.64), and the previously established bounds. At this stage we also note that the bootstrap inequalities imply the bound (10.15). Lastly, we note that there are three terms with seven derivatives landing on the fundamental variables; these terms combine to yield an exact derivative, which we then integrate by parts. Adding (10.6b), (10.7b), and (10.8b), and integrating by parts using (5.28c), we obtain the identity (10.16). Note that the available pointwise bootstrap bounds imply (10.17). Summarizing the identities (10.9), (10.11), (10.13), (10.16) and the bounds (10.10), (10.12), (10.14), (10.17), upon taking \(\varepsilon\) to be sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and taking \(\beta\) sufficiently large with respect to \(\alpha\), gives the estimate (10.18), where as usual \(\hat{C}\) denotes a positive computable constant independent of \(\beta\). ### The terms with over-differentiated geometry Next, we consider the terms in (10.6)-(10.8) which contain seven derivatives on the tangent vector \(\mathcal{T}\) (or equivalently, the normal vector \(\mathcal{N}\)). #### 10.6.1.
The sum \(I_{5}^{\hat{\mathsf{Z}}_{\tau}}+I_{7}^{\hat{\mathsf{Z}}_{\tau}}\) Integrating-by-parts in (10.7e) and (10.7g) we find that \[I_{5}^{\hat{\mathsf{Z}}_{\tau}}+I_{7}^{\hat{\mathsf{Z}}_{\tau}} =2\alpha\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\!\!\int_{\mathcal{ J}^{\frac{1}{2}}}\!(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{ \mathsf{D}}^{6}\mathcal{T}\cdot\mathcal{N}\left(\widetilde{\mathsf{D}}^{6} \hat{\mathsf{Z}}_{\tau,1}-J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\right)\] \[\qquad+2\alpha\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\Big{(} \partial_{1}\big{(}J_{\mathrm{s}}\mathcal{J}^{\frac{2}{2}}(\hat{\mathbf{A}}_{ \tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\mathcal{N}_{k}\big{)}-\widetilde{ \mathsf{D}}_{2}\big{(}J_{\mathrm{s}}J^{\frac{2}{2}}J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\,(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{ \mathcal{N}})\mathcal{N}_{k}\big{)}\Big{)}\widetilde{\mathsf{D}}^{6}\mathcal{ T}^{k}\,\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\] \[\qquad+2\alpha\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\!\!\hat{ \mathsf{Q}}_{2k}J^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2} h\,(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6} \mathcal{T}\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\] \[\qquad-2\alpha\int\overline{\mathsf{Q}}_{2\mathcal{J}}J^{\frac{ 3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,(\hat{\mathbf{A}}_{ \tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}\mathcal{T} \cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\Big{|}_{ \mathsf{s}}\] \[=:I_{5+7,a}^{\hat{\mathsf{Z}}_{\tau}}+I_{5+7,b}^{\hat{\mathsf{Z}} _{\tau}}+I_{5+7,c}^{\hat{\mathsf{Z}}_{\tau}}+I_{5+7,d}^{\hat{\mathsf{Z}}_{\tau }}\,. \tag{10.19}\] At this stage we note that the first term in (10.19), namely \(I_{5+7,a}^{\hat{\mathsf{Z}}_{\tau}}\), contains over-differentiated terms, i.e. seven derivatives on \(\hat{\mathsf{Z}}_{\tau}\), but that the remaining three terms are under control. 
Indeed, from (5.37), (6.38), (7.1h), and the fact that \(\mathcal{J}\leq J_{g}\), we have \[\big{|}I_{5+7,b}^{\hat{\mathsf{Z}}_{\tau}}\big{|} \tag{10.20a}\] \[\big{|}I_{5+7,c}^{\hat{\mathsf{Z}}_{\tau}}\big{|}\] (10.20b) \[\big{|}I_{5+7,d}^{\hat{\mathsf{Z}}_{\tau}}\big{|} \tag{10.20c}\] In order to handle the first term on the right side of (10.19), we use equation (10.1b) and (10.1a) to rewrite \[2\alpha\big{(}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau,1} -J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2} \widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\big{)}\] \[\qquad=-\alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2} \widetilde{\mathsf{D}}^{6}\hat{\mathbf{A}}_{\tau}+\alpha J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\mathcal{T}\cdot \mathcal{N}(\Omega+\hat{\mathbf{W}}_{\tau}+\hat{\mathsf{Z}}_{\tau})+\tfrac{J_{g }}{\Sigma}(\Omega\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6} \hat{\mathsf{Z}}_{\tau}\] \[\qquad\qquad-2\alpha(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{ \mathcal{N}})\widetilde{\mathsf{D}}^{6}\mathcal{T}_{\tau,1}\cdot\mathcal{N}+2 \alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,(\hat{\mathbf{A}}_{ \tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}_{2}\widetilde{ \mathsf{D}}^{6}\mathcal{T}\cdot\mathcal{N}-\big{(}\widetilde{\mathsf{D}}^{6} \mathsf{F}_{\mathsf{Z}}^{\tau}+\mathcal{R}_{\hat{\mathsf{Z}}}^{\mathcal{T}}+ \mathcal{C}_{\hat{\mathsf{Z}}}^{\mathcal{T}}\big{)}\] \[\qquad=\tfrac{J_{g}}{\Sigma}(\Omega\partial_{\mathsf{s}}+V \partial_{2})\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}+\tfrac{J_{g}}{ \Sigma}(\Omega\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6} \hat{\mathsf{Z}}_{\tau}\] \[\qquad\qquad-2\alpha(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{ \mathcal{N}})\widetilde{\mathsf{D}}^{6}\mathcal{T}_{\tau,1}\cdot\mathcal{N}+2 \alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,(\hat{\mathbf{A}}_{\tau }-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}} ^{6}\mathcal{T}\cdot\mathcal{N}\] \[\qquad\qquad-\big{(}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\mathsf{ W}}^{\tau}+\mathcal{R}_{\hat{\mathsf{W}}}^{\mathcal{T}}+\mathcal{C}_{\hat{\mathsf{W}}}^{ \mathcal{T}}\big{)}-\big{(}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\mathsf{Z}}^{ \tau}+\mathcal{R}_{\hat{\mathsf{Z}}}^{\mathcal{T}}+\mathcal{C}_{\hat{\mathsf{Z}}}^{ \mathcal{T}}\big{)}\,.\] Substitution of this identity into the integral \(I_{5+7,a}^{\hat{\mathsf{Z}}_{\tau}}\), gives us the further decomposition \[I_{5+7,a}^{\hat{\mathsf{Z}}_{\tau}} =I_{5+7,a,i}^{\hat{\mathsf{Z}}_{\tau}}+I_{5+7,a,ii}^{\hat{\mathsf{Z} }_{\tau}}+I_{5+7,a,iii}^{\hat{\mathsf{Z}}_{\tau}}\,,\] \[I_{5+7,a,i}^{\hat{\mathsf{Z}}_{\tau}} =\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\!\!\!\frac{1}{\Sigma^{2 \beta}}J^{\frac{1}{2}}J_{g}(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{ \mathcal{N}})\widetilde{\mathsf{D}}^{6}\mathcal{T}\cdot\mathcal{N}\left( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\right)\big{(}\widetilde{\mathsf{D}}^{6} \hat{\mathbf{W}}_{\tau}+\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}\big{)}\,,\] (10.21a) \[I_{5+7,a,ii}^{\hat{\mathsf{Z}}_{\tau}} =-2\alpha\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\!\!\int_{\mathcal{ J}}J_{\mathcal{N}}J^{\frac{3}{2}}(\hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{\mathcal{N}})^{2} \widetilde{\mathsf{D}}^{6}\mathcal{T}\cdot\mathcal{N}\left(\widetilde{\mathsf{D}}^{6} \mathcal{F}_{\tau,1}\cdot\mathcal{N}-J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h 
\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\mathcal{T}\cdot \mathcal{N}\right),\] (10.21b) \[I_{5+7,a,iii}^{\hat{\mathsf{Z}}_{\tau}} =-\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int_{\mathcal{J}}J^{\frac{3}{2}}( \hat{\mathbf{A}}_{\tau}-\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6} \mathcal{T}\cdot \[+\alpha\int\overline{\mathbb{Q}}_{2\mathcal{B}}\mathcal{J}^{\frac{1}{2}}( \hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N}})^{2}J_{g}g^{-\frac{1}{2} }\widetilde{\mathbb{D}}_{2}h\,(\widetilde{\mathbb{D}}^{6}\tau\cdot\mathcal{N}) ^{2}\Big{|}_{\mathbf{s}}\] \[+2\alpha\int_{0}^{\mathbf{s}}\!\!\!\int_{2\mathcal{N}}\!\!\! \mathcal{J}^{\frac{3}{2}}(\hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N }})^{2}\widetilde{\mathbb{D}}^{6}\mathcal{T}\cdot\mathcal{N}\,\big{(}\widetilde {\mathbb{D}}^{6}\tau\cdot\mathcal{T}_{\mathcal{N},1}\cdot\mathcal{T}-J_{g}g^{- \frac{1}{2}}\widetilde{\mathbb{D}}_{2}h\,\widetilde{\mathbb{D}}^{6}\mathcal{T} \cdot\tau\widetilde{\mathbb{D}}_{2\mathcal{N}}\cdot\mathcal{T}\big{)}\,. \tag{10.22}\] By appealing to (5.37), (6.38), (7.1h), and the bound \(\mathcal{J}\leq 1\), we conclude \[\big{|}I_{5+7,a,ii}^{\hat{\mathbf{Z}}_{\tau}}\big{|}\lesssim\varepsilon^{3} \beta(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6} \rangle^{2}\,. \tag{10.23}\] The term \(I_{5+7,a,i}^{\hat{\mathbf{Z}}_{\tau}}\) defined in (10.21a) requires that we integrate by parts the \((\mathbb{Q}\partial_{\mathsf{s}}+V\partial_{2})\) operator, and then appeal to the \(\widetilde{\mathbb{D}}^{6}\)-differentiated variant of (5.34b). Indeed, from (5.28d), and (5.34b), we may rewrite \[I_{5+7,a,i}^{\hat{\mathbf{Z}}_{\tau}}=-\int_{0}^{\mathbf{s}}\!\! \!\int\!\!\!\int\!\!\!\!\int\!\!\!\!\left(\widetilde{\mathbb{D}}^{6}\hat{ \mathbf{W}}_{\tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z}}_{\tau}\right) \tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{ \tau}-\hat{\mathbf{Z}}_{\mathcal{N}})\big{(}\tfrac{1+\alpha}{2}\widetilde{ \mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}+\tfrac{1-\alpha}{2}\widetilde{\mathbb{ D}}^{6}\hat{\mathbf{Z}}_{\tau}\big{)}\] \[\qquad-\int_{0}^{\mathbf{s}}\!\!\!\int\!\!\!\!\int\!\!\!\left( \widetilde{\mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}+\widetilde{\mathbb{D}}^{6} \hat{\mathbf{Z}}_{\tau}\right)\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3} {2}}J_{g}(\hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N}})\big{(} \tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\tau}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_ {\tau}\big{)}\mathcal{N}\cdot\widetilde{\mathbb{D}}^{6}\mathcal{N}\] \[\qquad-\int_{0}^{\mathbf{s}}\!\!\!\int\!\!\!\left(\widetilde{ \mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z }}_{\tau}\right)\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}( \hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N}})\mathcal{N}_{k}\big{(} \widetilde{\mathbb{D}}^{6},V\big{]}\widetilde{\mathbb{D}}_{2}\tau_{k}\] \[\qquad+\int_{0}^{\mathbf{s}}\!\!\!\int\!\!\!\left(\widetilde{ \mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z }}_{\tau}\right)\!\big{(}V\hat{\mathbb{Q}}_{2}-\widetilde{\mathbb{D}}_{2}V- \hat{\mathbb{Q}}_{\mathsf{s}}\big{)}\Big{(}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J }^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N}}) \mathcal{N}\cdot\widetilde{\mathbb{D}}^{6}\mathcal{T}\Big{)}\] \[\qquad-\int_{0}^{\mathbf{s}}\!\!\!\int\!\!\left(\widetilde{ \mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z 
}}_{\tau}\right)\!\widetilde{\mathbb{D}}^{6}\tau_{k}(\mathbb{Q}\partial_{ \mathsf{s}}+V\partial_{2})\Big{(}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3 }{2}}J_{g}(\hat{\mathbf{A}}_{\tau}-\hat{\mathbf{Z}}_{\mathcal{N}})\mathcal{N} _{k}\Big{)}\] \[\qquad+\int\!\!\left(\widetilde{\mathbb{D}}^{6}\hat{\mathbf{W}}_{ \tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z}}_{\tau}\right)\!\mathsf{Q}\Big{(} \tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\tau}- \hat{\mathbf{Z}}_{\mathcal{N}})\mathcal{N}\cdot\widetilde{\mathbb{D}}^{6} \mathcal{T}\Big{)}\Big{|}_{\mathbf{s}}\] \[\qquad-\int\!\left(\widetilde{\mathbb{D}}^{6}\hat{\mathbf{W}}_{ \tau}+\widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z}}_{\tau}\right)\!\mathsf{Q}\Big{(} \tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\tau}- \hat{\mathbf{Z}}_{\mathcal{N}})\mathcal{N}\cdot\widetilde{\mathbb{D}}^{6} \mathcal{T}\Big{)}\Big{|}_{\mathbf{0}}\,. \tag{10.24}\] We have isolated on the right side of (10.24) the first term as the most dangerous term, with the remaining seven terms being lower order. Using the available bootstrap bounds and improved estimates, we deduce from (10.24) that \[\big{|}I_{5+7,a,i}^{\hat{\mathbf{Z}}_{\tau}}\big{|}\lesssim\int_{0}^{ \mathbf{s}}\!\Big{(}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}}(J_{g}\mathbb{Q})^{ \frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}( \cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{2}}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{ \mathbb{D}}^{6}\hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{ 2}\Big{)}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2 }\langle\mathsf{B}_{6}\rangle^{2}+\varepsilon\mathsf{K}\langle\mathsf{B}_{6} \rangle(\tfrac{4}{\kappa_{0}})^{\beta}\Big{(}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}} J_{g}\mathbb{Q}}{\Sigma^{\beta}}\widetilde{\mathbb{D}}^{6}\hat{\mathbf{W}}_{\tau}\big{\|}_{L_{x}^{2}} +\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}}J_{g}\mathbb{Q}}{\Sigma^{\beta}} \widetilde{\mathbb{D}}^{6}\hat{\mathbf{Z}}_{\tau}\big{\|}_{L_{x}^{2}}\Big{)}\] \[\qquad+\varepsilon^{\frac{3}{2}}(\tfrac{4}{\kappa_{0}})^{\beta} \mathsf{K}\langle\mathsf{B}_{6}\rangle\Big{(}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2} }(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathbb{D}}^{6}\hat{ \mathbf{W}}_{\tau}(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{2}}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathbb{D}}^{6} \hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}\Big{)}\] \[\qquad+\varepsilon^{\frac{3}{2}}(\tfrac{4}{\kappa_{0}})^{\beta} \mathsf{K}\langle\mathsf{B}_{6}\rangle\Big{(}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2} }(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde 6.2. The sum \(I_{3}^{\hat{\mathbf{W}}_{\tau}}+I_{3}^{\hat{\mathbf{Z}}_{\tau}}+I_{5}^{\hat{ \mathbf{A}}_{\tau}}+I_{7}^{\hat{\mathbf{A}}_{\tau}}\) Integrating-by-parts in (10.8e) and (10.8g), in a similar fashion to (10.19), we find that \[I_{5}^{\hat{\mathbf{A}}_{\tau}}+I_{7}^{\hat{\mathbf{A}}_{\tau}} =-2\alpha\int_{0}^{\ast}\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[\big{|}I_{5+7,a,ii}^{\hat{\mathbb{A}}_{\tau}}\big{|}\lesssim\int_{0}^{ \mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{0}\mathsf{Q})^{\frac{1}{2} }}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\tilde{\mathbb{A}}_{\tau}(\cdot, \mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+ \varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle(\tfrac{4}{\kappa_{0}})^{ \beta}\big{\|}\tfrac{\mathcal{J}^{\frac{4}{4}}J_{\mu}^{\frac{1}{2}}}{\Sigma^{ \beta}}\widetilde{\mathsf{D}}^{6}\tilde{\mathbb{A}}_{\tau}\big{\|}_{L^{2}_{x}}\] \[\qquad\qquad\qquad\qquad\qquad+\varepsilon^{\frac{3}{2}}(\tfrac{ 4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\big{\|}\tfrac{ \mathcal{J}^{\frac{3}{4}}(J_{0}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\tilde{\mathbb{A}}_{\tau}(\cdot,\mathsf{s})\big{\|} _{L^{2}_{x}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\varepsilon^{\frac{3}{2}}( \tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\big{\|} \tfrac{\mathcal{J}^{\frac{3}{4}}(J_{0}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta }}\widetilde{\mathsf{D}}^{6}\tilde{\mathbb{A}}_{\tau}(\cdot,0)\big{\|}_{L^{2}_ {x}}+\varepsilon^{4}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle \mathsf{B}_{6}\rangle^{2}. \tag{10.30b}\] The forcing and commutator terms in \(I_{5+7,a,v}^{\hat{\mathbb{A}}_{\tau}}\) will be shown in Subsection 10.7 below to satisfy similar bounds. This leaves us with the most delicate term, namely \(I_{5+7,a,i}^{\hat{\mathbb{A}}_{\tau}}\) (see (10.29a)). It turns out that while a good bound for this term is not available, this term completely cancels with a piece of the sum \(I_{3}^{\hat{\mathbb{W}}_{\tau}}+I_{3}^{\hat{\mathbb{Z}}_{\tau}}\). To see this, we next consider the sum \(I_{3}^{\hat{\mathbb{W}}_{\tau}}+I_{3}^{\hat{\mathbb{Z}}_{\tau}}\), given by (10.6c) and (10.7c). Using (5.28c), we have that \[I_{3}^{\hat{\mathbb{W}}_{\tau}}+I_{3}^{\hat{\mathbb{Z}}_{\tau}} =-2\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{2\mathsf{s}}\!\!\! 
\int_{2\mathsf{s}}\!\!\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(\Omega+ \hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau})\widetilde{\mathsf{D}}_{2} \widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6} \tilde{\mathbb{S}}_{\tau}\] \[=:I_{3,a}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}}+I_{3, \hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}}^{\hat{\mathbb{W}}_{\tau}+ \hat{\mathbb{Z}}_{\tau}}+I_{3,\hat{\mathbb{Z}}_{\tau}}^{\hat{\mathbb{W}}_{\tau} +\hat{\mathbb{Z}}_{\tau}}+I_{3,\hat{\mathbb{Z}}_{\tau}}^{\hat{\mathbb{W}}_{\tau }+\hat{\mathbb{Z}}_{\tau}},\] (10.31a) where the four terms arising from integrating by parts the \[\widetilde{\mathsf{D}}_{2}\] operator are defined as \[I_{3,a}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} =2\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{2\mathsf{s}}\!\!\!\int_{ 2\mathsf{s}}\!\!\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(\Omega+\hat{ \mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau})\widetilde{\mathsf{D}}^{6}\tau \cdot\mathcal{N}\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tilde{ \mathbb{S}}_{\tau} \tag{10.31b}\] \[I_{3,b}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} =2\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{2\mathsf{s}}\!\!\!\widetilde {\mathsf{D}}_{2}\big{(}J_{3}\sigma^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(\Omega+\hat{ \mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau})N_{k}\big{)}\widetilde{\mathsf{D}}^{6 }\tau^{k}\,\widetilde{\mathsf{D}}^{6}\tilde{\mathbb{S}}_{\tau}\] (10.31c) \[I_{3,c}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} =-2\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{2\mathsf{s}}\!\!\! \mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(\Omega+\hat{\mathbb{W}}_{\tau}+ \hat{\mathbb{Z}}_{\tau})\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\, \widetilde{\mathsf{D}}^{6}\tilde{\mathbb{S}}_{\tau}\] (10.31d) \[I_{3,d}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} =2\alpha\int\overline{\mathsf{Q}}_{2,b}\mathcal{J}^{\frac{3}{2}}J_{g }g^{-\frac{1}{2}}(\Omega+\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}) \widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6} \tilde{\mathbb{S}}_{\tau}\Big{|}_{\mathsf{s}}. 
\tag{10.31e}\] We notice that the term \(I_{3,a}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}}\) precisely cancels the term \(I_{5+7,a,i}^{\hat{\mathbb{A}}_{\tau}}\), as expected: \[I_{5+7,a,i}^{\hat{\mathbb{A}}_{\tau}}+I_{3,a}^{\hat{\mathbb{W}}_{\tau}+\hat{ \mathbb{Z}}_{\tau}}=0\,.\] The remaining terms in (10.31) are bounded using (5.37), (6.38), (7.1h), (7.1i), (8.3), and the bound \(\mathcal{J}\leq J_{s}\), as \[\big{|}I_{3,b}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} \big{|} \lesssim\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle \mathsf{B}_{6}\rangle\Big{(}\|\tfrac{\mathcal{J}^{\frac{4}{4}}J_{\mu}\tfrac{1}{ \Sigma^{\beta}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbb{W}}_{ \tau}\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathcal{J}^{\frac{4}{4}}J_{\mu}\tfrac{1}{ \Sigma^{\beta}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\tilde{\mathbb{Z}}_{ \tau}\big{\|}_{L^{2}_{x},\star}\Big{)} \tag{10.32a}\] \[\big{|}I_{3,d}^{\hat{\mathbb{W}}_{\tau}+\hat{\mathbb{Z}}_{\tau}} \big{|} \lesssim\varepsilon^{\frac{5}{2}}(\tfrac{4}{\kappa_{0}})^{\beta} \mathsf{K}\langle\mathsf{B}_{6}\rangle\Big{(}\|\tfrac{\mathcal{J}^{\frac{4}{4}}(J_{0} \mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{ \mathbb{W}}_{\tau}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{4}{4}}(J_{0}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \tilde{\mathbb{Z}}_{\tau}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}\Big{)}\,. \tag{10.32b}\] Summarizing the decompositions (10.27), (10.29), (10.31), appealing to the bounds (10.28), (10.30), (10.32), the Cauchy-Schwartz inequality, and taking \(\varepsilon\) to be sufficiently small (only in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), for \(\beta\geq 1\) we arrive at \[\big{|}I_{3}^{\hat{\mathbb{W}}_ \[=\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{\mathcal{B}}\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
\[+\int_{0}^{s}\!\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{ \tau}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\boldsymbol{ W}}_{\mathcal{N}}-J_{g}\hat{\boldsymbol{Z}}_{\mathcal{N}})\big{(}\tfrac{1+\alpha}{2} \hat{\boldsymbol{W}}_{\tau}+\tfrac{1-\alpha}{2}\hat{\boldsymbol{Z}}_{\tau} \big{)}\mathcal{N}\cdot(\widetilde{\mathsf{D}}^{6}\mathcal{N}+g^{-1}\tau \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h)\] \[+\int_{0}^{s}\!\!\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{ \tau}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\boldsymbol{ W}}_{\mathcal{N}}-J_{g}\hat{\boldsymbol{Z}}_{\mathcal{N}})\mathcal{N}_{k} \big{(}\widetilde{\mathsf{D}}^{6},V\big{)}\widetilde{\mathsf{D}}_{2}\mathcal{T }_{k}\] \[+\int_{0}^{s}\!\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{ \tau}\big{(}V\dot{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2}V-\dot{\mathsf{Q} }_{\mathsf{s}}\big{)}\Big{(}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{ 2}}(J_{g}\hat{\boldsymbol{W}}_{\mathcal{N}}-J_{g}\hat{\boldsymbol{Z}}_{ \mathcal{N}})\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\Big{)}\] \[+\int\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{\tau} \mathsf{Q}\Big{(}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g} \hat{\boldsymbol{W}}_{\mathcal{N}}-J_{g}\hat{\boldsymbol{Z}}_{\mathcal{N}}) \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\Big{)}\Big{|}_{0}\,. \tag{10.38}\] At this stage, we denote the first three lines on the right side of (10.38) by \(\mathsf{M}_{1}\), \(\mathsf{M}_{2}\), and \(\mathsf{M}_{3}\), and note the existing bounds and the initial data assumption (4.11) imply that \[\big{|}I_{3,a,ii}^{\hat{\boldsymbol{A}}_{\text{\tiny$-$}}}- \mathsf{M}_{1}-\mathsf{M}_{2}-\mathsf{M}_{3}\big{|} \leq\hat{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K} \langle\mathsf{B}_{6}\rangle\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}J_{g}^{ \frac{1}{2}}}{\Sigma^{2\beta}}\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_ {\tau}\big{\|}_{L^{2}_{x,a}}+6(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2 \beta}\mathsf{C}^{2}_{\mathsf{data}}\] \[\qquad+\tfrac{500^{2}}{\varepsilon^{2}\sqrt{1+\alpha}}\big{\|} \tfrac{\mathcal{J}^{\frac{3}{4}}(QJ_{g})^{\frac{1}{2}}}{\Sigma^{2\beta}} \widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{\tau}\big{\|}_{L^{2}_{x,a}} \big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}\mathcal{D}_{\mathsf{s}}}{\Sigma^{2 \beta}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\big{\|}_{L^{2}_{x,a}}\,. \tag{10.39}\] It thus remains to estimate the three bad terms in (10.38), namely \(\mathsf{M}_{1},\mathsf{M}_{2},\mathsf{M}_{3}\). 
Concerning the second and third one, we have \[\bigg{|}\mathsf{M}_{2}-\int_{0}^{s}\!\!\widetilde{\mathsf{D}}^{6} \hat{\boldsymbol{W}}_{\tau}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau \tfrac{1}{\Sigma^{2\beta}}J_{g}\hat{\boldsymbol{W}}_{\mathcal{N}}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}^{\frac{3}{2}}\bigg{|}\lesssim \varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6} \rangle\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{4}}}{\Sigma^{ \beta}}\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{\tau}\big{\|}_{L^{2}_{ x,a}} \tag{10.40}\] \[\bigg{|}\mathsf{M}_{3}+\int\widetilde{\mathsf{D}}^{6}\hat{ \boldsymbol{W}}_{\tau}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\tfrac{1}{ \Sigma^{2\beta}}J_{g}\hat{\boldsymbol{W}}_{\mathcal{N}}\mathsf{Q}\mathcal{J}^{ \frac{3}{2}}\Big{|}_{\mathsf{s}}\bigg{|}\lesssim\varepsilon^{\frac{3}{2}}( \tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\big{\|} \tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}Q)^{\frac{1}{2}}}{\Sigma^{2\beta}} \widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{\tau}(\cdot,\mathsf{s})\big{\|} _{L^{2}_{x}}\,, \tag{10.41}\] and the terms on the RHS of the above have an acceptable size. At this point we note that by the definition of \(\mathcal{J}\), \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}^{\frac{3}{2}}=- \tfrac{3}{2\varepsilon}\mathsf{Q}\mathcal{J}^{\frac{1}{2}}\,.\] Thus, we can write the left side of (10.40) as \(|\mathsf{M}_{2}-\widetilde{\mathsf{M}}_{2}|\) and the left side of (10.41) as \(|\mathsf{M}_{3}-\widetilde{\mathsf{M}}_{3}|\), where we define \[\widetilde{\mathsf{M}}_{2} =-\tfrac{3}{2\varepsilon}\int_{0}^{s}\!\!\widetilde{\mathsf{D}}^{6 }\hat{\boldsymbol{W}}_{\tau}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau \frac{1}{\Sigma^{2\beta}}J_{g}\hat{\boldsymbol{W}}_{\mathcal{N}}\mathsf{Q} \mathcal{J}^{\frac{3}{2}}\,, \tag{10.42}\] \[\widetilde{\mathsf{M}}_{3} =-\int\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_{\tau} \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\tfrac{1}{\Sigma^{2\beta}}J_{g} \hat{\boldsymbol{W}}_{\mathcal{N}}\mathsf{Q}\mathcal{J}^{\frac{3}{2}}\Big{|}_{ \mathsf{s}}\,. \tag{10.43}\] Next, we try to manipulate the \(\widetilde{\mathsf{M}}_{2}\) term. We rewrite \[\widetilde{\mathsf{M}}_{2} =-\tfrac{3}{2\varepsilon}\int_{0}^{s}\!\!\widetilde{\mathsf{D}}^{6 }\hat{\boldsymbol{W}}_{\tau}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau \tfrac{1}{\Sigma^{2\beta}}\big{(}J_{g}\hat{\boldsymbol{W}}_{\mathcal{N}}-\tfrac{1 3}{\varepsilon}J_{g}\big{)}\mathsf{Q}\mathcal{J}^{\frac{1}{2}}-\tfrac{39}{2 \varepsilon^{2}}\int_{0}^{s}\!\!\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_ {\tau}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\tfrac{1}{\Sigma^{2\beta}} \mathsf{Q}\mathcal{J}^{\frac{1}{2}}J_{g}\] \[=:\widetilde{\mathsf{M}}_{2}^{\prime}+\widetilde{\mathsf{M}}_{2}^ {\prime\prime}. \tag{10.44}\] The second term in (10.44) is estimated using Cauchy-Schwartz, (6.38g), and (5.37k) as \[\big{|}\widetilde{\mathsf{M}}_{2}^{\prime\prime}\big{|}\leq\tfrac{34}{ \varepsilon^{2}\sqrt{1+\alpha}}\int_{0}^{s}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g }Q)^{\frac{1}{2}}}{\Sigma^{2\beta}}\widetilde{\mathsf{D}}^{6}\hat{\boldsymbol{W}}_ {\tau}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\big{\|}\tfrac{\mathcal{J}^{ \frac{-1}{4}}\mathsf{Q}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6} \tau(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{10.45}\] Next, we consider the term \(\widetilde{\mathsf{M}}_{2}^{\prime}\) in (10.44). 
We apply \(\widetilde{\mathsf{D}}^{6}\) to (5.34b), and obtain \[\mathcal{N}\cdot(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}) \widetilde{\mathsf{D}}^{6}\tau=\mathcal{N}^{k}\widetilde{\mathsf{D} and hence, \[\left|\widetilde{\mathbf{M}}^{\prime}_{2}+\tfrac{3}{(1+\alpha)\varepsilon}\int_{0} ^{s}\!\!\!\int((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\big{(}\mathcal{N }\cdot\widetilde{\mathsf{D}}^{6}\tau\big{)}-\tfrac{1-\alpha}{2}\widetilde{ \mathsf{D}}^{6}\boldsymbol{\hat{\mathsf{Z}}}_{\tau})_{\mathcal{N}}\cdot \widetilde{\mathsf{D}}^{6}\tau_{\frac{1}{\Sigma^{2\beta}}}\big{(}J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g}\big{)}\mathsf{Q} \mathcal{J}^{\frac{1}{2}}\right|\lesssim\mathsf{K}^{2}\varepsilon^{3}\langle \mathsf{B}_{6}\rangle^{2}. \tag{10.48}\] Note that we also have the estimate \[\left|\tfrac{3}{(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int\!\!\int\!\!\frac{1 -\alpha}{2}\widetilde{\mathsf{D}}^{6}\boldsymbol{\hat{\mathsf{Z}}}_{\tau} \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau_{\frac{1}{\Sigma^{2\beta}}} \big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g}\big{)} \mathsf{Q}\mathcal{J}^{\frac{1}{2}}\right|\leq\tfrac{25}{\varepsilon^{2}} \big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}}\mathsf{Q}}{\widetilde{\mathsf{D}}}^{6 }\boldsymbol{\hat{\mathsf{Z}}}_{\tau}\big{\|}_{L^{2}_{x,s}}. \tag{10.49}\] From (10.48) and (10.49), we deduce that \[\left|\widetilde{\mathbf{M}}^{\prime}_{2}+\tfrac{3}{(1+\alpha) \varepsilon}\int_{0}^{s}\!\!\!\int\!\!\!\left(\mathsf{Q}\partial_{\mathsf{s}}+ V\partial_{2}\right)(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2} \tfrac{1}{\Sigma^{2\beta}}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{1 3}{\varepsilon}J_{g}\big{)}\mathsf{Q}\mathcal{J}^{\frac{1}{2}}\right|\] \[\qquad\qquad\leq\tfrac{25}{\varepsilon^{2}}\big{\|}\tfrac{ \mathcal{J}^{-\frac{1}{2}}\mathsf{Q}}{\Sigma^{2\beta}}\widetilde{\mathsf{D}}^{6 }\tau\big{\|}_{L^{2}_{x,s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}}\mathsf{Q }}{\widetilde{\mathsf{D}}}^{6}\boldsymbol{\hat{\mathsf{Z}}}_{\tau}\big{\|}_{L ^{2}_{x,s}}+\mathring{C}\mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{6} \rangle^{2}\,. 
\tag{10.50}\] Thus, we are left to consider the following term (which we rewrite using (5.28d)) \[\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2} :=-\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(\mathcal{N}\cdot\widetilde{ \mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta}}\big{(}J_{g}\hat{\mathbf{W}} _{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g}\big{)}\mathsf{Q}\mathcal{J}^{ \frac{1}{2}}\] \[\quad+\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int( \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta} }\mathsf{Q}\mathcal{J}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial _{2})\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g} \big{)}\] \[\quad+\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int( \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\big{(}J_{g}\hat{\mathbf{W} }_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g}\big{)}\mathcal{J}^{\frac{1}{2}} (\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Big{(}\tfrac{1}{\Sigma^{2 \beta}}\mathsf{Q}\Big{)}\] \[\quad-\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int(V \mathring{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2}V-\mathring{\mathsf{Q}}_{ \mathsf{s}})(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{ \Sigma^{2\beta}}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{ \varepsilon}J_{g}\big{)}\mathsf{Q}\mathcal{J}^{\frac{1}{2}}\] \[\quad-\tfrac{3}{2(1+\alpha)\varepsilon}\int\mathsf{Q}(\mathcal{N }\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta}}\big{(}J_{g }\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g}\big{)}\mathsf{Q} \mathcal{J}^{\frac{1}{2}}\Big{|}_{\mathsf{s}}\] \[\quad+\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int\!\! \mathsf{Q}(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2 \beta}}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{g }\big{)}\mathsf{Q}\mathcal{J}^{\frac{1}{2}}\Big{|}_{\mathsf{0}}\] \[=:\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2,a}+\widetilde{ \mathbf{M}}^{\prime\prime\prime}_{2,b}+\widetilde{\mathbf{M}}^{\prime\prime \prime}_{2,c}+\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2,d}+\widetilde{ \mathbf{M}}^{\prime\prime\prime}_{2,e}+\widetilde{\mathbf{M}}^{\prime\prime \prime}_{2,f}\,.\] At this stage, we note that since \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}=-\tfrac{\mathsf{Q}}{ \varepsilon}\), and by also appealing to (5.30) we have \[\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2,a} =\tfrac{3}{4(1+\alpha)\varepsilon^{2}}\int_{0}^{s}\!\!\!\int( \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta}} \big{(}-J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{13}{\varepsilon}J_{g}\big{)} \mathsf{Q}^{2}\mathcal{J}^{-\frac{1}{2}} \tag{10.51a}\] \[\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2,b} =\tfrac{3}{2(1+\alpha)\varepsilon}\int_{0}^{s}\!\!\!\int(\mathcal{N }\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta}}\mathsf{Q} \mathcal{J}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat {\mathbf{W}}_{\mathcal{N}})\] \[\qquad-\tfrac{39}{2(1+\alpha)\varepsilon^{2}}\int_{0}^{s}\!\!\!\int( \mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\tfrac{1}{\Sigma^{2\beta}} \mathsf{Q}\mathcal{J}^{\frac{1}{2}}\big{(}\tfrac{1+\alpha}{2}J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}} \big{)}\,. 
\tag{10.51b}\] Appealing to (4.10), (5.33d), (6.17a), the identity \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathsf{Q}=\mathsf{Q}\mathring{ \mathsf{Q}}+V\partial_{2}\mathsf{Q}\), the bootstrap bounds (5.37), the initial data bounds (4.11), the bounds (6.38) for the Q-related coefficients, the bounds (7.1) on the geometry, and the improved estimate for \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\) in (8.44a), we have \[\widetilde{\mathbf{M}}^{\prime\prime\prime}_{2,b} \geq-\mathring{C}\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{2\beta} \mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}-\tfrac{25}{(1+\alpha) \varepsilon^{3}}\int_{0}^{s}\big{\|}\tfrac{\mathsf{Q}\mathcal{J}^{\frac{1}{2}} \mathcal{N}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau( \cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\ \[\widetilde{\mathsf{M}}^{\prime\prime\prime}_{2,e}\geq\tfrac{27}{20(1+\alpha)e^{2}} \big{\|}\tfrac{Q\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\,. \tag{10.52e}\] We note here that the second term on the right side of (10.52a) is the correct "Gronwall term" that corresponds to the energy given by (10.52e). Combining the estimates in (10.44)-(10.52), we deduce that \[\widetilde{\mathsf{M}}_{2} \geq-\tfrac{34^{2}}{e}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{ \mathcal{J}^{\frac{3}{4}}(J_{q}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}^{\prime })\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}-\tfrac{25^{2}(1+ \alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{ 3}{4}}}{\Sigma^{\beta}}\widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{Z}}_{\tau}( \cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad-25(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2\beta} \mathsf{C}^{2}_{\mathsf{data}}-\mathring{C}\varepsilon^{2}(\tfrac{4}{\kappa_{ 0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\] \[\qquad+\tfrac{7}{40(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{ s}}\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^ {2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+\tfrac{27}{20(1+\alpha)\varepsilon^{2}}\big{\|}\tfrac{ \mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}- \tfrac{25+100\cdot 250^{2}}{(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|} \tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^ {2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{10.53}\] The first and last term in (10.53) are Gronwall terms, while the second term is a damping term, which will be handled by taking \(\beta\) to be sufficiently large in terms of \(\alpha\). Next, we return to the \(\widetilde{\mathsf{M}}_{3}\) term defined in (10.43). 
We estimate this term simply using (6.38g), the Cauchy-Schwartz inequality, and the bound \(\mathcal{J}\leq J_{s}\), as to obtain \[\widetilde{\mathsf{M}}_{3} \geq-\tfrac{\sqrt{5}}{\varepsilon\sqrt{2(1+\alpha)}}\big{\|} \tfrac{\mathcal{J}^{\frac{3}{4}}(J_{q}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta }}\widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}) \big{\|}_{L^{2}_{x}}\big{\|}\tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{ \Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s} )\big{\|}_{L^{2}_{x}}^{2}\] \[\geq-\tfrac{25}{52}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{q} \mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathrm{D}}^{6} \boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}-\tfrac{13 }{10(1+\alpha)\varepsilon^{2}}\big{\|}\tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{ 4}}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathrm{D}}^{6}\tau(\cdot, \mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\,. \tag{10.54}\] We emphasize at this stage that \(\tfrac{25}{52}<\tfrac{1}{2}\), so that the above bound is compatible with (10.18) and (10.53). Combining (10.40), (10.41), (10.42), (10.43), (10.53), and (10.54), we obtain \[\mathsf{M}_{2}+\mathsf{M}_{3} \geq-(\tfrac{25}{52}+\mathring{C}\varepsilon)\big{\|}\tfrac{ \mathcal{J}^{\frac{3}{4}}(J_{q}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}) \big{\|}_{L^{2}_{x}}^{2}-\mathring{C}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{ \mathcal{J}^{\frac{1}{4}}J_{y}\tfrac{2}{2}}{\Sigma^{\beta}}\widetilde{ \mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L ^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad-25(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2\beta} \mathsf{C}^{2}_{\mathsf{data}}-\mathring{C}\varepsilon^{2}(\tfrac{4}{\kappa_{ 0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\] \[\qquad+\tfrac{7}{40(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}} \big{\|}\tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N} \cdot\widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{ x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+\tfrac{1}{20(1+\alpha)\varepsilon^{2}}\big{\|}\tfrac{ \mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}- \tfrac{25+100\cdot 250^{2}}{(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|} \tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathrm{D}}^{6}\tau(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}\,. \tag{10.55}\] In view of (10.39), it is left to obtain a good lower bound for the term \(\mathsf{M}_{1}\) defined in (10.38). 
We have that \[\mathsf{M}_{1} \geq\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int\tfrac{1}{\Sigma^{ \beta}}\mathsf{G}_{\mathsf{bad}}\big{(}\widetilde{\mathrm{D}}^{6} \boldsymbol{\hat{W}}_{\tau}\big{)}^{2}-\tfrac{1+\alpha}{\varepsilon}\int_{0}^{ \mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}J_{y}\tfrac{2}{2}}{\Sigma^{ \beta}}\widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}^{ \prime})\big{\|}_{L^{2}_{x}}^{2}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{ \Sigma^{\beta}}\widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{Z}}_{\tau}(\cdot, \mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad-\mathring{C}\int_{0}^{\mathsf{s}}\Big{(}\big{\|}\tfrac{ \mathcal{J}^{\frac{1}{4}}J_{y}\tfrac{2}{2}}{\Sigma^{\beta}}\widetilde{ \mathrm{D}}^{6}\boldsymbol{\hat{W}}_{\tau}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{ x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\big{\|}\tfrac{\mathcal{J}^{\frac{1}{4}}J_{y}\tfrac{2}{2}}{\Sigma^{ \beta}}\widetilde{\mathrm{D}}^{6}\boldsymbol{\hat{Z}}_{\tau}(\cdot,\mathsf{s}^{ \prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\Big{)}\mathrm{d} \mathsf{s}^{\prime} \tag{10.56}\] \[-\tfrac{1+\alpha}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{ \jmath^{\tfrac{4}{4}\,J_{y}\,\tfrac{4}{5}}}{\Sigma^{\beta}}\widetilde{\mathsf{D }}^{6}\hat{\mathbf{W}}_{\tau}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}} \bigl{\|}\tfrac{\jmath^{\tfrac{4}{4}\,J_{y}\,\tfrac{4}{5}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime}) \bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[-\tfrac{34^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|} \tfrac{\jmath^{\tfrac{4}{4}\,J_{y}\,\tfrac{4}{5}}}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{ x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}-\tfrac{25^{2}(1+\alpha)}{ \varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{\jmath^{\tfrac{4}{4}\,J_{y} \,\tfrac{4}{5}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{ \tau}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{ \prime}\] \[-31(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2\beta} \mathsf{C}^{2}_{\mathsf{data}}-\hat{C}\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{ 2\beta}\mathsf{K}^{2}(\mathsf{B}_{6})^{2}\] \[+\tfrac{7}{40(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}} \bigl{\|}\tfrac{\mathsf{Q}\cdot\jmath^{-\tfrac{4}{4}}}{\Sigma^{\beta}}\mathcal{ N}\cdot\widetilde{\mathsf{D}}^{6}\tau(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}\] \[+\tfrac{1}{20(1+\alpha)\varepsilon^{2}}\bigl{\|}\tfrac{\mathsf{Q} \cdot\jmath^{\tfrac{4}{4}\,}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{ \mathsf{D}}^{6}\tau(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2}-\tfrac{25+100 \cdot 250^{2}+500^{2}}{(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}} \bigl{\|}\tfrac{\mathsf{Q}\cdot\jmath^{\tfrac{4}{4}\,}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime}) \bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{10.59}\] The forcing and commutator terms appearing in \(I^{\hat{A}_{\tau}}_{3,a,iii}\) will be shown in Subsection 10.7 to satisfy similar bounds. #### 10.6.4. Combining all the terms with over-differentiated geometry Here we summarize the identities for the terms which contain over-differentiated geometry. 
By combining (10.26), (10.33), and (10.59), we arrive at the identity \[I^{\hat{W}_{\tau}}_{3}+I^{\hat{A}_{\tau}}_{3}+I^{\hat{A}_{\tau}}_ {3}+I^{\hat{A}_{\tau}}_{5}+I^{\hat{A}_{\tau}}_{7}+I^{\hat{A}_{\tau}}_{5}+I^{ \hat{A}_{\tau}}_{7}+I^{\hat{A}_{\tau}}_{7}\] \[\geq-\bigl{|}I^{\hat{Z}_{\tau}}_{5+7,a,iii}\bigr{|}-\bigl{|}I^{ \hat{A}_{\tau}}_{5+7,a,v}\bigr{|}-\bigl{|}I^{\hat{A}_{\tau}}_{3,a,iii}\bigr{|} -31(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}^{2}_{ \mathsf{data}}-\hat{C}\varepsilon^{2}\beta^{2}(\tfrac{4}{\kappa_{0}})^{2\beta} \mathsf{K}^{2}(\mathsf{B}_{6})^{2}\] \[\geq-\bigl{|}I^{\hat{Z}_{\tau}}_{5+7,a,iii}\bigr{|}-\bigl{|}I^{ \hat{A}_{\tau}}_{3,a,iii}\bigr{|}-31(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0} })^{2\beta}\mathsf{C}^{2}_{\mathsf{data}}-\hat{C}\varepsilon^{2}\beta^{2}( \tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}(\mathsf{B}_{6})^{2}\] \[\geq-\bigl{|}I^{\hat{Z}_{\tau}}_{5+7,a,iii}\bigr{|}-\bigl{|}I^{ \hat{A}_{\tau}}_{5+7,a,v}\bigr{|}-\bigl{|}I^{\hat{A}_{\tau}}_{3,a,iii}\bigr{|} -31(1+\alpha)\varepsilon(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}^{2}_{\mathsf{ data}}-\hat{C}\varepsilon^{2}\beta^{2}(\tfrac{4}{\kappa_{0}})^{2\beta} \mathsf{K}^{2}(\mathsf{B}_{6})^{2}\] \[\geq-\bigl{|}\tfrac{25}{52}+\hat{C}\varepsilon\bigr{\|}\tfrac{ \jmath^{\tfrac{4}{4}\,J_{y}\,\mathsf{Q}\,\tfrac{4}{5}}}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2} -\tfrac{25}{(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{\jmath^ {\tfrac{4}{4}\,}(J_{y}\,\mathsf{Q}\,\tfrac{4}{5})}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}\] \[\geq-\tfrac{34^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}} \bigl{\|}\tfrac{\jmath^{\tfrac{4}{4}\,}(J_{y}\,\mathsf{Q}\,\tfrac{4}{5})}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}(\cdot,\mathsf{s} ^{\prime})\bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}-\tfrac{25^{2}(1 +\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{\jmath^{\tfrac{4}{4 }\,}(J_{x}\,\mathsf{Q}\,\tfrac{4}{5})}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}\] \[\geq-\tfrac{34^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}} \bigl{\|}\tfrac{\mathsf{Q}\cdot\jmath^{\tfrac{4}{4}\,}(J_{y}\,\mathsf{Q}\,\tfrac{4}{5})}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}(\cdot,\mathsf{s} ^{\prime})\bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}-\tfrac{25^{2}(1 +\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{\jmath^{\tfrac{4}{4} \,}(J_{x}\,\mathsf{Q}\,\tfrac{4}{5})}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}\] \[\geq-\tfrac{34^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}} \bigl{\|}\tfrac{\mathsf{Q}\cdot\jmath^{\tfrac{4}{4}\,}(J_{y}\,\mathsf{Q}\, \tfrac{4}{5})}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{W}}_{\tau}( \cdot,\mathsf{s}^{\prime})\bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}- \tfrac{25^{2}(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\bigl{\|}\tfrac{ \jmath^{\tfrac{4}{4}\,}(J_{x}\,\mathsf{Q}\,\tfrac{4}{5})}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\tau}(\cdot,\mathsf{s}^{\prime}) \bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\geq-\tfrac{34^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}} \bigl{\|}\ Similarly, from the 
definition of \(\mathcal{R}^{\tau}_{\hat{\mathbf{W}}}\) in (10.2), and by additionally appealing to (8.2) and (3.32), which gives the estimate \(\|(\mathbb{Q}\partial_{\mathbf{s}}+V\partial_{2})\hat{\mathbf{W}}_{\tau}\|_{L^{ \infty}_{x,\star}}\lesssim\varepsilon\), we deduce \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\mathcal{R}^{\tau}_{\hat{\mathbf{W}}}\big{\|}_{L^{2}_{x,\star}} \lesssim\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{ B}_{6}\rangle+\varepsilon\Big{(}\|\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g} \mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{ \mathbf{W}}_{\tau}\|_{L^{2}_{x,\star}}+\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4 }}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \hat{\mathbf{Z}}_{\tau}\big{\|}_{L^{2}_{x,\star}}\Big{)}\lesssim\varepsilon^{2 }(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{10.61c}\] Lastly, using the definition of \(\mathcal{C}^{\tau}_{\hat{\mathbf{W}}}\) in (10.2), identity (3.32), the aforementioned bounds, Lemma B.1, the Leibniz rule and Lemma B.3, we may also obtain \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\mathcal{C}^{\tau}_{\hat{\mathbf{W}}}\big{\|}_{L^{2}_{x,\star}} \lesssim\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}^{2}\langle \mathsf{B}_{6}\rangle^{2}+\varepsilon\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}( J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \hat{\mathbf{W}}_{\tau}\big{\|}_{L^{2}_{x,\star}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}(J_{g}\mathbb{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{ D}}^{6}\hat{\mathbf{A}}_{\tau}\big{\|}_{L^{2}_{x,\star}}\lesssim\varepsilon(\tfrac{4}{ \kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{10.61d}\] Combining the bounds in (10.61), we thus deduce \[\big{|}I^{\hat{\mathsf{W}}_{\tau}}_{4}\big{|}\leq\dot{C}\varepsilon^{2}( \tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{10.62}\] Next, we consider the bounds for \(I^{\hat{\mathsf{Z}}_{\tau}}_{8}\) and \(I^{\hat{\mathsf{A}}_{\tau}}_{8}\), which are nearly identical. 
From (5.37p), (10.7h), and (10.8h), we obtain \[\big{|}I^{\hat{\mathsf{Z}}_{\tau}}_{8}\big{|}+\big{|}I^{\hat{ \mathsf{A}}_{\tau}}_{8}\big{|}\] \[\leq\int_{0}^{8}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{ \beta}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{Z}}_{\tau}(\cdot,\mathsf{ s}^{\prime})\big{\|}_{L^{2}_{x}}\Big{(}\kappa_{0}\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\mathsf{F}^{\tau}_{2}( \cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}^{\tau}_{\hat{\mathbf{Z}}}(\cdot, \mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4 }}}{\Sigma^{\beta-1}}\mathcal{C}^{\tau}_{\hat{\mathbf{Z}}}(\cdot,\mathsf{s}^{ \prime})\big{\|}_{L^{2}_{x}}\Big{)}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+\int_{0}^{8}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{A}}_{\tau}(\cdot,\mathsf{ s}^{\prime})\big{\|}_{L^{2}_{x}}\Big{(}\kappa_{0}\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\mathsf{F}^{\tau}_{\hat{ \mathbf{A}}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{ \mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}^{\tau}_{\hat{\mathbf{A} }}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C}^{\tau}_{\hat{\mathbf{A}}}(\cdot, \mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\Big{)}\mathrm{d}\mathsf{s}^{\prime}\,.\] (10.63a) From the definitions of \[\mathsf{F}^{\tau}_{2}\] and \[\mathsf{F}^{\tau}_{\hat{\mathbf{A}}}\] in ( 3.43 ), the bootstrap inequalities ( 5.37 ), the estimate \[\mathcal{J}\leq J_{g}\], the bounds for the geometry ( 7.1 ), the vorticity estimates ( 8.2 ) - ( 8.3 ), and the double-commutator in ( B.16 ), we deduce \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}} ^{6}\mathsf{F}^{\tau}_{2}\big{\|}_{L^{2}_{x,\star}}+\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\mathsf{F}^{\tau}_{\hat{ \mathbf{A}}}\big{\|}_{L^{2}_{x,\star}}\lesssim\varepsilon(\tfrac{4}{\kappa_{0}} )^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.\] (10.63b) Similarly, the definitions of \[\mathcal{R}^{\tau}_{\hat{\mathbf{Z}}}\] and \[\mathcal{R}^{\tau}_{\hat{\mathbf{A}}}\] in ( 10.3 ) and ( 10.4 ) furthermore allow us to estimate \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}^{ \tau}_{\hat{\mathbf{Z}}}\big{\|}_{L^{2}_{x,\star}} \leq\dot{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle \mathsf{B}_{6}\rangle+2\alpha\|\mathcal{N}\cdot\mathcal{T}_{,1}-J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\mathsf{N}\cdot\widetilde{\mathsf{D}}_{2}\tau\|_{L^{ \infty}_{x,\star}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}} \widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\mathsf{N}}\big{\|}_{L^{2}_{x,\star}}\] (10.63c) \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}^{ \tau}_{\hat{\mathbf{A}}}\big{\|}_{L^{2}_{x,\star}} \leq\dot{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K} \langle\mathsf{B}_{6}\rangle\,.\] (10.63d) The term highlighted on the second line of the right side of ( 10.63c ) is not necessarily small, and so it must be treated with care. 
First, according to ( 3.18 ) and ( 5.37 ) we have that \[\|\mathsf{N}\cdot\mathcal{T}_{,1}-J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h \mathsf{N}\cdot\widetilde{\mathsf{D}}_{2}\tau\|_{L^{\infty}_{x,\star}}\leq\|g^{- \frac{1}{2}}\widetilde{\mathsf{D}}_{2}J_{g}\|_{L^{\infty}_{x,\star}}+\tilde{C} \varepsilon\leq 4(1+\alpha)+\tilde{C}\varepsilon\leq 5(1+\alpha)\,.\] Then, we note that if \[\widetilde{\mathsf{D}}^{6}\] contains a single copy of \[\widetilde{\mathsf{D}}_{1}\], or a single copy of \[\widetilde{\mathsf{D}}_{2}\], then by ( 8.22 ), ( 8.22 ), factor of \(\mathsf{K}\). Lastly, using the definitions of \(\mathcal{C}^{\mathcal{T}}_{\underline{\mathsf{Z}}}\) and \(\mathcal{C}^{\mathcal{T}}_{\underline{\mathsf{A}}}\) in (10.3) and (10.4), and by appealing to all available bounds, we obtain \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C }^{\mathcal{T}}_{\underline{\mathsf{Z}}}\big{\|}_{L^{2}_{x,s}} \leq\mathring{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{ K}\langle\mathsf{B}_{6}\rangle+\tfrac{1}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta-1}}\big{(}\widetilde{\mathsf{D}}^{6},\tfrac{J_{ \mathsf{Z}}}{\Sigma}\widetilde{\mathsf{D}}_{8}\tilde{\mathsf{Z}}_{\mathcal{T} })\big{\|}_{L^{2}_{x,s}}\] \[\leq\tfrac{32(1+\alpha)}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^ {\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\tilde{\mathsf{Z}}_{ \mathcal{T}}\big{\|}_{L^{2}_{x,s}}+\mathring{C}\varepsilon^{\frac{4}{4}}( \tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{10.63i}\] \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}} \mathcal{C}^{\mathcal{T}}_{\underline{\mathsf{A}}}\big{\|}_{L^{2}_{x,s}} \leq\mathring{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{ K}\langle\mathsf{B}_{6}\rangle+\tfrac{1}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^{ \frac{3}{4}}}{\Sigma^{\beta-1}}\big{(}\widetilde{\mathsf{D}}^{6},\tfrac{J_{ \mathsf{Z}}}{\Sigma}^{\mathcal{T}}\kappa,\widetilde{\mathsf{D}}_{8}\hat{ \mathsf{A}}_{\mathsf{K}})\big{\|}_{L^{2}_{x,s}}\] \[\leq\tfrac{32(1+\alpha)}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^ {\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\tilde{\mathsf{A}}_{ \mathcal{T}}\big{\|}_{L^{2}_{x,s}}+\mathring{C}\varepsilon^{\frac{4}{4}}( \tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{10.63j}\] In the last inequality in both (10.63i) and (10.63j) we have appealed to (B.19) with \(m=6\), and note that the most dangerous term comes from \(i=2\); this is the cause of the \(\varepsilon^{\frac{4}{4}}\) term, instead of the usual \(\varepsilon\). Upon combining all the bounds we have obtained in (10.63), we deduce that \[\big{|}I_{8}^{\hat{\mathsf{Z}}_{\tau}}\big{|}+\big{|}I_{8}^{\hat {\mathsf{A}}_{\tau}}\big{|} \leq\tfrac{34(1+\alpha)^{2}}{\alpha\varepsilon}\int_{0}^{8}\!\big{\|} \tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6} \tilde{\mathsf{Z}}_{\mathcal{T}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x }}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{34(1+\alpha)}{\varepsilon}\int_{0} ^{8}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{ \mathsf{D}}^{6}\tilde{\mathsf{A}}_{\mathcal{T}}(\cdot,\mathsf{s}^{\prime}) \big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\quad+9\alpha\varepsilon(\tfrac{4}{\kappa_{0}})^{2}\beta\mathsf{ B}_{6}^{2}+\mathring{C}\varepsilon^{\frac{3}{2}}(\tfrac{4}{\kappa_{0}})^{2} \beta\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,. 
\tag{10.64}\] At last, the terms \(I_{5+7,\alpha,ii}^{\hat{\mathsf{Z}}_{\tau}}\) (defined in (10.21c)), \(I_{5+7,\alpha,v}^{\hat{\mathsf{A}}_{\tau}}\) (defined in (10.29e)), and \(I_{3,\alpha,iii}^{\hat{\mathsf{A}}_{\tau}}\) (defined in (10.36c)), may be estimated using the forcing and commutator estimates that we have just obtained in (10.61b), (10.61c), and (10.61d) for \(\hat{\mathsf{W}}_{\tau}\), (10.63b), (10.63h), and (10.63i) for \(\hat{\mathsf{Z}}_{\tau}\), and (10.63b), (10.63d), and (10.63j) for \(\hat{\mathsf{A}}_{\tau}\). We record the bounds \[\big{|}I_{5+7,\alpha,iii}^{\hat{\mathsf{Z}}_{\tau}}\big{|} \leq\tfrac{1}{(1+\alpha)\varepsilon^{3}}\int_{0}^{8}\big{\|} \tfrac{\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{ \mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}+\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0}} )^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2} \tag{10.65a}\] \[\big{|}I_{5+7,\alpha,v}^{\hat{\mathsf{A}}_{\tau}}\big{|} \leq\tfrac{\hat{C}}{\int_{0}^{8}\big{\|}\tfrac{\mathcal{J}^{\frac {1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}( \cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\big{\|}\tfrac{\mathcal{J}^{ \frac{5}{4}}}{\Sigma^{\beta}}\big{(}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\!u}^ {\mathcal{T}}+\mathcal{R}_{\hat{\mathsf{A}}}^{\mathcal{T}}+\mathcal{C}_{\hat{ \mathsf{A}}}^{\mathcal{T}}\big{)}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}} \mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{1}{(1+\alpha)\varepsilon^{3}}\int_{0}^{8}\big{\|} \tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x }}^{2}\mathrm{d}\mathsf{s}^{\prime}+\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0} })^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\] (10.65b) \[\big{|}I_{3,\alpha,iii}^{\hat{\mathsf{A}}_{\tau}}\big{|} \leq\tfrac{8}{\varepsilon\kappa_{0}}\int_{0}^{8}\!\big{\|} \tfrac{\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{ \mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\big{\|} \tfrac{\mathcal{J}^{\frac{5}{4}}}{\Sigma^{\beta}}\big{(}\widetilde{\mathsf{D}}^{6} \mathsf{F}_{\!u}^{\mathcal{T}}+\mathcal{R}_{\hat{\mathsf{W}}}^{\mathcal{T}}+ \mathcal{C}_{\hat{\mathsf{W}}}^{\mathcal{T}}\big{)}(\cdot,\mathsf{s}^{\prime}) \big{\|}_{L^{2}_{x}}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{1}{(1+\alpha)\varepsilon^{3}}\int_{0}^{8}\big{\|} \tfrac{\mathcal{Q}\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot \widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x }}^{2}\mathrm{d}\mathsf{s}^{\prime}+\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0} })^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,. 
\tag{10.65c}\] By combining the estimates (10.62), (10.64), and (10.65), we may summarize the forcing and commutator bounds as \[\big|I_{4}^{\hat{\mathsf{W}}_{\tau}}\big|+\big|I_{8}^{\hat{\mathsf{Z}}_{\tau}}\big|+\big|I_{8}^{\hat{\mathsf{A}}_{\tau}}\big|+\big|I_{5+7,\alpha,iii}^{\hat{\mathsf{Z}}_{\tau}}\big|+\big|I_{5+7,\alpha,v}^{\hat{\mathsf{A}}_{\tau}}\big|+\big|I_{3,\alpha,iii}^{\hat{\mathsf{A}}_{\tau}}\big|\leq\tfrac{34(1+\alpha)^{2}}{\alpha\varepsilon}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\big(\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\mathcal{T}},\widetilde{\mathsf{D}}^{6}\hat{\mathsf{A}}_{\mathcal{T}}\big)(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'+\tfrac{3}{(1+\alpha)\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'+9\alpha\varepsilon(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{B}_{6}^{2}+\mathring{C}\varepsilon^{\frac{3}{2}}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.\]
Gathering the energy identities and damping lower bounds established earlier in this section together with the above forcing and commutator bounds, we arrive at an inequality of the form
\[0\geq\cdots+\big(\tfrac{1+\alpha}{16\varepsilon}-\hat{C}\varepsilon\beta\big)\int_{0}^{\mathsf{s}}\Big(\big\|\tfrac{\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}+\big\|\tfrac{\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{A}}_{\tau}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\Big)\mathrm{d}\mathsf{s}'+\Big(\tfrac{4\alpha(\beta-\frac{1}{2})}{5\varepsilon}-\tfrac{4(1+\alpha)}{\varepsilon}-\tfrac{25^{2}(1+\alpha)}{\varepsilon}-\tfrac{34(1+\alpha)^{2}}{\alpha\varepsilon}\Big)\int_{0}^{\mathsf{s}}\Big(\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{Z}}_{\tau}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}+\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\hat{\mathsf{A}}_{\tau}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\Big)\mathrm{d}\mathsf{s}'\,.\]
At last, we multiply the above estimate by \(\kappa_{0}^{2\beta_{\alpha}}\), appeal to (5.37p), drop the energy and damping terms for \(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}\) (since these were bounded already in Proposition 7.1), use the inequality \(\mathcal{J}\leq J_{g}\), and recall the definitions of \(\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s})\) and \(\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\mathsf{s})\) to deduce that
\[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s})+\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\varepsilon)\leq\widehat{\mathsf{c}}_{\alpha}\varepsilon^{2}4^{2\beta_{\alpha}}\Big(\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}+\mathring{C}\varepsilon^{\frac{1}{2}}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\Big)\leq(\varepsilon\mathsf{K})^{2}\mathsf{B}_{6}^{2}\cdot\widehat{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}}\Big(\tfrac{\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}}{\mathsf{K}^{2}\mathsf{B}_{6}^{2}}+\mathring{C}\varepsilon^{\frac{1}{2}}\tfrac{\langle\mathsf{B}_{6}\rangle^{2}}{\mathsf{B}_{6}^{2}}\Big)\,.\tag{10.72}\]
Upon defining \[\mathsf{K}:=8\max\{1,\widehat{\mathsf{c}}_{\alpha}^{\frac{1}{2}}4^{\beta_{\alpha}}\}\,,\tag{10.73}\] where \(\beta_{\alpha}\) is as defined in (10.68), and ensuring that \[\mathsf{B}_{6}\geq\max\{1,\mathsf{C}_{\mathsf{data}}\}\,,\tag{10.74}\] and \(\varepsilon\) sufficiently small in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\), we deduce from (10.72) that \[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s})+\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\varepsilon)\leq\tfrac{1}{8}(\varepsilon\mathsf{K})^{2}\mathsf{B}_{6}^{2}\,,\tag{10.75}\] which closes the "tangential part" of the remaining bootstrap (5.37r).
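To see why the choice (10.73) closes the estimate, it may help to record the elementary arithmetic behind the last step (a sketch; every constant appearing here is exactly one of those in (10.72)–(10.74)). Since \(\mathsf{K}^{2}=64\max\{1,\widehat{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}}\}\geq 64\,\widehat{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}}\), and since (10.74) gives \(\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}\leq 2\mathsf{B}_{6}^{2}\) and \(\langle\mathsf{B}_{6}\rangle^{2}\leq 4\mathsf{B}_{6}^{2}\) (using \(\mathsf{B}_{6}\geq 1\)), we find
\[\widehat{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}}\Big(\tfrac{\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}}{\mathsf{K}^{2}\mathsf{B}_{6}^{2}}+\mathring{C}\varepsilon^{\frac{1}{2}}\tfrac{\langle\mathsf{B}_{6}\rangle^{2}}{\mathsf{B}_{6}^{2}}\Big)\leq\tfrac{\mathsf{K}^{2}}{64}\Big(\tfrac{2}{\mathsf{K}^{2}}+4\mathring{C}\varepsilon^{\frac{1}{2}}\Big)=\tfrac{1}{32}+\tfrac{\mathring{C}\mathsf{K}^{2}}{16}\varepsilon^{\frac{1}{2}}\leq\tfrac{1}{8}\,,\]
provided \(\varepsilon^{\frac{1}{2}}\leq\tfrac{3}{2\mathring{C}\mathsf{K}^{2}}\), which is a smallness condition on \(\varepsilon\) depending only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\).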
## 11 Improved normal-component estimates for six pure time derivatives

We now consider a new set of sixth-order energy estimates for \(\hat{\mathbf{Z}}_{\mathcal{N}}\) and \(\hat{\mathbf{A}}_{\mathcal{N}}\), which involve only time, or "material", derivatives. The bounds in this section, estimate (11.2b) to be more precise, were used in Section 10 to bound the \(2\alpha(\mathcal{N}\cdot\tau_{,1}-J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\mathcal{N}\cdot\widetilde{\mathsf{D}}_{2}\mathcal{T})\widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\) contribution to \(\mathcal{R}_{\hat{\mathbf{Z}}}^{\tau}\) (see (10.63g)), and are used in Section 12 to bound the \(\tfrac{2\alpha}{\varepsilon}\widetilde{\mathsf{D}}_{1}J_{g}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\) contribution to \(\mathcal{R}_{\hat{\mathbf{Z}}}^{\mathcal{N}}\) (see (12.37)). Except for these remainder terms, the bounds in this section are not used anywhere else in the argument.

We define the \(\varepsilon\)-rescaled ALE transport operator in \((x,\mathsf{s})\) coordinates by \[\widetilde{\mathfrak{D}}:=\varepsilon(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})=\widetilde{\mathsf{D}}_{\mathsf{s}}+\varepsilon V\widetilde{\mathsf{D}}_{2}\,.\] With the above notation, the goal of this section is to establish:

**Proposition 11.1**.: _Under the standing bootstrap assumptions (5.37), assuming that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we have_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|J_{g}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}\widetilde{\mathfrak{D}}^{6}(\hat{\mathbf{Z}}_{\mathcal{N}},\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L_{x}^{2}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\big\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathfrak{D}}^{6}(\hat{\mathbf{Z}}_{\mathcal{N}},\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}\lesssim\varepsilon\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,,\tag{11.1}\] _where the implicit constant in (11.1) only depends on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._

Before turning to the proof of the above estimate, we record a corollary which is used throughout our proof.
**Corollary 11.2**.: _Under the standing bootstrap assumptions (5.37), assuming that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we have that_
\[\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x}^{2}}\lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,\tag{11.2a}\]
\[\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}\leq\big\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,\tag{11.2b}\]
_where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._

Proof of Corollary 11.2.: We note that for \(k\geq 2\) and sufficiently smooth functions \(f\), we have
\[\widetilde{\mathfrak{D}}^{k}f-\widetilde{\mathsf{D}}^{k}_{\mathsf{s}}f=\sum_{i=0}^{k-1}\binom{k}{i}(\varepsilon V)^{k-i}\widetilde{\mathsf{D}}^{i}_{\mathsf{s}}\widetilde{\mathsf{D}}^{k-i}_{2}f+\varepsilon\sum_{i=0}^{k-2}\sum_{n=0}^{k-i-1}\sum_{j=0}^{i}\widetilde{\mathsf{D}}^{i-j}_{\mathsf{s}}\widetilde{\mathsf{D}}^{j+1}_{2}f\cdot\varepsilon^{j+n}\sum_{|\alpha|=k-i-1-n,|\beta|=n}c_{k,i,n,j,\alpha,\beta}\,\prod_{\ell=1}^{j+1+n}\widetilde{\mathsf{D}}^{\alpha_{\ell}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{\beta_{\ell}}_{2}V\,,\tag{11.3}\]
for suitable combinatorial coefficients \(c_{k,i,n,j,\alpha,\beta}\geq 0\). Identity (11.3) with \(k=6\) shows that \(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}-\widetilde{\mathsf{D}}^{6}_{\mathsf{s}}\hat{\mathbf{Z}}_{\mathcal{N}}\) consists of a sum of terms with at most five derivatives on \(\widetilde{\mathsf{D}}_{2}\hat{\mathbf{Z}}_{\mathcal{N}}\), times a power of \(\varepsilon\) which is at least equal to one. These terms are already bounded by (8.22a), by (8.22d), and by (8.22f). Using the triangle inequality, the bounds established in (7.1l)–(7.1m), and the interpolation bounds in Lemma B.3 to treat the non-endpoint cases, it is then clear that (11.1) implies both (11.2a) and (11.2b).

**Corollary 11.3**.: _Expansion (11.3) implies that we have the bounds_ (11.4a), (11.4b), (11.4c), _where we use the notation in (B.12)._

Proof of Corollary 11.3.: For simplicity, we only give the proof of (11.4a) in the most difficult case. Due to (5.37o), the first line of (11.3) contributes terms consistent with the stated upper bound. In order to deal with the second line in (11.3), note that the estimates in (5.37o) and (7.1l) give the required pointwise bounds on the factors of \(V\). As such, by also appealing to the Poincaré-type inequality (B.2a), we see that the second line of (11.3) only has a nontrivial contribution in two borderline cases. For these special cases we use the interpolation inequality (B.9); the bound (11.4a) then follows from the \(\varepsilon\)-Young inequality. The bounds (11.4b) and (11.4c) follow from the bound just established, by appealing to (B.2d), (7.1l), and to (6.73).

The remainder of this section is dedicated to the proof of Proposition 11.1. The proof of (11.1) is based on an energy estimate for the differentiated system (11.7), cf. (11.9), in which the \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\) evolution (11.5a) is used passively as a "constitutive relation". The proof consists of several steps, which are then finalized in Section 11.6.
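For orientation, it may help to record the simplest instance of (11.3). The following is a direct computation for \(k=2\), treating \(\widetilde{\mathsf{D}}_{\mathsf{s}}\) and \(\widetilde{\mathsf{D}}_{2}\) as commuting on the functions considered, consistent with the form of (11.3):
\[\widetilde{\mathfrak{D}}^{2}f-\widetilde{\mathsf{D}}_{\mathsf{s}}^{2}f=2\varepsilon V\,\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}f+(\varepsilon V)^{2}\,\widetilde{\mathsf{D}}_{2}^{2}f+\varepsilon\big(\widetilde{\mathfrak{D}}V\big)\,\widetilde{\mathsf{D}}_{2}f\,,\qquad\widetilde{\mathfrak{D}}V=\widetilde{\mathsf{D}}_{\mathsf{s}}V+\varepsilon V\widetilde{\mathsf{D}}_{2}V\,.\]
Every term on the right side carries at least one factor of \(\varepsilon\) and at least one \(\widetilde{\mathsf{D}}_{2}\) acting on \(f\); this is precisely the structure exploited above for \(k=6\) in the proofs of Corollaries 11.2 and 11.3.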
In preparation, we write equations (3.25), (3.27), and (3.30) in \((x,\mathsf{s})\) coordinates as (11.5a)–(11.5c), where the associated forcing terms are recorded in (11.5d) and (11.6). Applying \(\widetilde{\mathfrak{D}}^{6}\) to the equations for \(\hat{\mathbf{Z}}_{\mathcal{N}}\) and \(\hat{\mathbf{A}}_{\mathcal{N}}\) yields (11.7a) and
\[\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{\alpha}{2}g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}-\alpha\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N},1}+\alpha g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,J_{g}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}+\tfrac{\alpha}{2}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=\widetilde{\mathfrak{D}}^{6}\mathcal{G}_{\hat{\mathbf{A}}}^{\mathcal{N}}+\mathfrak{C}_{\hat{\mathbf{A}}}^{\mathcal{N}}\,,\tag{11.7b}\]
where the commutator terms are given by
\[\mathfrak{C}_{\hat{\mathbf{Z}}}^{\mathcal{N}}:=-\tfrac{1}{\varepsilon}\big[\widetilde{\mathfrak{D}}^{6},J_{g}\Sigma^{-1}\big]\widetilde{\mathfrak{D}}\hat{\mathbf{Z}}_{\mathcal{N}}+\alpha\big[\widetilde{\mathfrak{D}}^{6},g^{-\frac{1}{2}}J_{g}\big]\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}-2\alpha\big[\widetilde{\mathfrak{D}}^{6},g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,J_{g}\big]\widetilde{\mathsf{D}}_{2}\hat{\mathbf{Z}}_{\mathcal{N}}\,,\tag{11.8a}\]
\[\mathfrak{C}_{\hat{\mathbf{A}}}^{\mathcal{N}}:=-\tfrac{1}{\varepsilon}\big[\widetilde{\mathfrak{D}}^{6},J_{g}\Sigma^{-1}\big]\widetilde{\mathfrak{D}}\hat{\mathbf{A}}_{\mathcal{N}}+\tfrac{\alpha}{2}\big[\widetilde{\mathfrak{D}}^{6},g^{-\frac{1}{2}}J_{g}\big]\widetilde{\mathsf{D}}_{2}\hat{\mathbf{Z}}_{\mathcal{N}}-\alpha\big[\widetilde{\mathfrak{D}}^{6},g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,J_{g}\big]\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\,.\tag{11.8b}\]
Our goal is to compute the spacetime \(L^{2}\) inner-product
\[\int_{0}^{\mathsf{s}}\!\!\int\Sigma^{-2\beta+1}\mathcal{J}^{\frac{3}{2}}\Big(\text{(11.7a)}\cdot\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}+\text{(11.7b)}\cdot\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\Big)\,.\tag{11.9}\]
The resulting integrals are grouped as \(\mathfrak{J}_{1}^{\hat{\mathsf{Z}}_{n}},\dots,\mathfrak{J}_{5}^{\hat{\mathsf{Z}}_{n}}\) and \(\mathfrak{J}_{1}^{\hat{\mathsf{A}}_{n}},\dots,\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}\), cf. (11.10a)–(11.10e) and (11.11a)–(11.11f); the principal terms combine into the exact-derivative identity
\[\cdots-2\alpha\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}\big(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\,[\widetilde{\mathfrak{D}}^{6},\partial_{1}]\hat{\mathbf{Z}}_{\mathcal{N}}+\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\,[\widetilde{\mathfrak{D}}^{6},\partial_{1}]\hat{\mathbf{A}}_{\mathcal{N}}\big)\]
\[=\tfrac{1}{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\tfrac{1}{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}(\cdot,0)\big\|_{L^{2}_{x}}^{2}-\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,0)\big\|_{L^{2}_{x}}^{2}\]
\[\quad+\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big(\mathsf{G}^{\hat{Z}_{n}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}})^{2}+\mathsf{G}^{\hat{A}_{n}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})^{2}\big)-\alpha\int_{0}^{\mathsf{s}}\!\!\int\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\,\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\,(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\big)\]
\[\quad+\alpha\int\overline{\mathsf{Q}}_{2}\,\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\Big|_{\mathsf{s}}-\alpha\int\overline{\mathsf{Q}}_{2}\,\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big((\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}})^{2}+(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})^{2}\big)\Big|_{\mathsf{s}}\]
\[\quad-2\alpha\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}\big(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\,[\widetilde{\mathfrak{D}}^{6},\partial_{1}]\hat{\mathbf{Z}}_{\mathcal{N}}+\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\,[\widetilde{\mathfrak{D}}^{6},\partial_{1}]\hat{\mathbf{A}}_{\mathcal{N}}\big)\tag{11.12}\]
where we introduced the coefficients
\[\mathsf{G}^{\hat{Z}_{n}}=-\alpha(2\beta-1)\mathcal{J}^{\frac{3}{2}}\Sigma_{,1}-(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}\big)+\alpha\Sigma^{2\beta}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big)+\tfrac{1}{2}\big(V\hat{\mathsf{Q}}_{2}-\hat{\mathsf{Q}}_{\mathsf{s}}-\widetilde{\mathsf{D}}_{2}V-2\alpha\beta(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})\big)\mathcal{J}^{\frac{3}{2}}J_{g}\tag{11.13}\]
\[\mathsf{G}^{\hat{A}_{n}}=-\alpha(2\beta-1)\mathcal{J}^{\frac{3}{2}}\Sigma_{,1}-(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}\big)+\alpha\Sigma^{2\beta}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big)+\big(V\hat{\mathsf{Q}}_{2}-\hat{\mathsf{Q}}_{\mathsf{s}}-\widetilde{\mathsf{D}}_{2}V-2\alpha\beta(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})\big)\mathcal{J}^{\frac{3}{2}}J_{g}\,.\tag{11.14}\]
We note at this stage that for \(\beta\geq 1\), analogously to (10.9), (10.12), and (10.14), using the lower bound in (6.38g) and choosing \(\varepsilon\) to be sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), the following lower bounds hold:
\[\mathsf{G}^{\hat{Z}_{n}}\geq\big(\alpha(\beta-\tfrac{1}{2})+\tfrac{1+\alpha}{4}\big)\big(\tfrac{4}{5\varepsilon}-\tfrac{33}{(1+\alpha)\varepsilon}J_{g}\mathsf{Q}\big)\mathcal{J}^{\frac{3}{2}}-\tfrac{25^{2}}{\varepsilon}\mathsf{Q}J_{g}\mathcal{J}^{\frac{3}{2}}+\tfrac{3}{4\varepsilon}\tfrac{2(1+\alpha)}{5}\mathcal{J}^{\frac{1}{2}}J_{g}\,,\tag{11.15a}\]
\[\mathsf{G}^{\hat{A}_{n}}\geq\big(\alpha(\beta-\tfrac{1}{2})+\tfrac{1+\alpha}{2}\big)\big(\tfrac{4}{5\varepsilon}-\tfrac{33}{(1+\alpha)\varepsilon}J_{g}\mathsf{Q}\big)\mathcal{J}^{\frac{3}{2}}-\tfrac{25^{2}}{\varepsilon}\mathsf{Q}J_{g}\mathcal{J}^{\frac{3}{2}}+\tfrac{3}{2\varepsilon}\tfrac{2(1+\alpha)}{5}\mathcal{J}^{\frac{1}{2}}J_{g}\,.\tag{11.15b}\]
The sixth, seventh, and eighth terms on the right side of (11.12) are bounded by appealing to (10.17). In order to estimate the ninth (and last) term on the right side of (11.12), we observe that expansion (11.3) and the commutator identity (5.24) imply
\[\big[\widetilde{\mathfrak{D}}^{6},\widetilde{\mathsf{D}}_{1}\big]f=-\varepsilon\sum_{i=0}^{5}\binom{6}{i}(6-i)\,\widetilde{\mathsf{D}}_{1}V\,(\varepsilon V)^{5-i}\widetilde{\mathsf{D}}_{2}\big(\widetilde{\mathsf{D}}_{\mathsf{s}}^{i}\widetilde{\mathsf{D}}_{2}^{5-i}f\big)-\varepsilon\sum_{i=0}^{4}\sum_{n=0}^{5-i}\sum_{j=0}^{i}\widetilde{\mathsf{D}}_{\mathsf{s}}^{i-j}\widetilde{\mathsf{D}}_{2}^{j+1}f\cdot\varepsilon^{j+n}\sum_{|\alpha|=5-i-n,|\beta|=n}c_{k,i,n,j,\alpha,\beta}\,\widetilde{\mathsf{D}}_{1}\prod_{\ell=1}^{j+1+n}\widetilde{\mathsf{D}}_{\mathsf{s}}^{\alpha_{\ell}}\widetilde{\mathsf{D}}_{2}^{\beta_{\ell}}V\,.\]
Note in particular that for all terms in the above commutator, we have at least one \(\widetilde{\mathsf{D}}_{2}\) present on \(f\). Since in this last term \(f\) is either \(\hat{\mathbf{A}}_{\mathcal{N}}\) or \(\hat{\mathbf{Z}}_{\mathcal{N}}\), we may appeal to the bounds already available for the \(\widetilde{\mathsf{D}}_{2}\)-differentiated quantities; combining these observations with (11.15), we deduce the lower bound
\[\cdots\geq-\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}+\big(\tfrac{3(1+\alpha)}{5\varepsilon}-\mathring{C}\beta\big)\int_{0}^{\mathsf{s}}\!\!\int\tfrac{\mathcal{J}^{\frac{3}{2}}J_{g}}{\Sigma^{2\beta}}\big(\tfrac{1}{2}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}})^{2}+(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})^{2}\big)\,,\tag{11.17}\]
where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})>0\) is a constant.
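Before estimating the individual integrals, we record the model computation underlying the damping structure in (11.13)–(11.15). This is a sketch only: the \(\hat{\mathsf{Q}}\) and \(\overline{\mathsf{Q}}\) corrections from (5.28), which are kept explicitly in (11.12), are ignored here, and the \(x_{2}\)-boundary contributions are assumed to vanish. For a weight \(w\) and a smooth function \(\varphi\),
\[\int_{0}^{\mathsf{s}}\!\!\int w\,(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\varphi\;\varphi=\tfrac{1}{2}\int w\mathsf{Q}\,\varphi^{2}\Big|_{\mathsf{s}'=\mathsf{s}}-\tfrac{1}{2}\int w\mathsf{Q}\,\varphi^{2}\Big|_{\mathsf{s}'=0}-\tfrac{1}{2}\int_{0}^{\mathsf{s}}\!\!\int\big(\partial_{\mathsf{s}}(w\mathsf{Q})+\partial_{2}(wV)\big)\varphi^{2}\,.\]
With \(w=\mathcal{J}^{\frac{3}{2}}J_{g}\Sigma^{-2\beta}\), the last integrand contains, up to lower-order terms, the factor \(-(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(\mathcal{J}^{\frac{3}{2}}J_{g})\); since \(\mathcal{J}\) decreases at rate \(O(\varepsilon^{-1})\), this produces the positive \(O(\varepsilon^{-1})\) damping terms quantified in (11.15).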
### The integral \(\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}\)

In order to estimate the \(\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}\) term defined in (11.11f), which we note contains the seventh-order derivative \(\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=\varepsilon\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\), we apply \(\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\) to (11.5a) and find that
\[\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=-\alpha\varepsilon\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}^{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}-\alpha\varepsilon\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}[\widetilde{\mathfrak{D}}^{5},\widetilde{\mathsf{D}}_{2}]\hat{\mathbf{A}}_{\mathcal{N}}-\alpha\varepsilon\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}(\Sigma g^{-\frac{1}{2}}J_{g})\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}-\alpha\varepsilon\big(\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5},\Sigma g^{-\frac{1}{2}}J_{g},\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\big)+\varepsilon\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\mathcal{G}_{\hat{\mathbf{W}}}^{\mathcal{N}}\,.\]
Substitution of this identity into the integral \(\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}\) in (11.11f) shows that
\[\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}=-\alpha^{2}\varepsilon\int_{0}^{\mathsf{s}}\!\!\int\Sigma g^{-1}\mathcal{J}^{\frac{3}{2}}J_{g}\,\widetilde{\mathsf{D}}_{2}^{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}\,\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}+\cdots=:\mathfrak{J}_{6,a}^{\hat{\mathsf{A}}_{n}}+\mathfrak{J}_{6,b}^{\hat{\mathsf{A}}_{n}}+\mathfrak{J}_{6,c}^{\hat{\mathsf{A}}_{n}}+\mathfrak{J}_{6,d}^{\hat{\mathsf{A}}_{n}}+\mathfrak{J}_{6,e}^{\hat{\mathsf{A}}_{n}}\,,\tag{11.18}\]
with the five integrals corresponding to the five terms in the preceding identity. The leading term \(\mathfrak{J}_{6,a}^{\hat{\mathsf{A}}_{n}}\) is handled by an integration by parts (cf. (11.19)–(11.21)); the resulting lower-order contributions satisfy
\[\cdots\leq\mathring{C}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'+\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.\tag{11.22}\]
Similarly, to bound the fifth term on the right side of (11.19), we use (11.4c) with \(f=\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\) and \(a=\tfrac{3}{4},b=\tfrac{1}{2}\), in conjunction with (8.21b)–(8.21c), to deduce
\[\alpha^{2}\varepsilon^{2}\Big|\int\overline{\mathsf{Q}}_{2}\,\Sigma J_{g}g^{-1}\mathcal{J}^{\frac{3}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}\,\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\Big|_{\mathsf{s}}\Big|\leq\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0}})^{\beta}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}\big(\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\varepsilon^{\frac{5}{2}}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\big)\leq\mathring{C}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\mathring{C}\varepsilon^{5}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.
\tag{11.23}\] Thus, by combining (11.19), (11.21), (11.22), and (11.23), we deduce
\[\mathfrak{J}_{6,a}^{\hat{\mathsf{A}}_{n}}\geq\tfrac{\alpha^{2}}{2}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta-\frac{1}{2}}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\tfrac{\alpha^{2}}{2}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta-\frac{1}{2}}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,0)\big\|_{L^{2}_{x}}^{2}-\mathring{C}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\mathring{C}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'\]
\[\geq-\mathring{C}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\mathring{C}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'-\mathring{C}\varepsilon^{3}(1+\mathring{C}\varepsilon\langle\beta\rangle)(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.\tag{11.24}\]
In the second inequality in (11.24) we have appealed to (4.11), the bootstraps (5.37), and to (11.4c) with \(f=\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\), \(a=b=0\). Next, we turn to the other delicate term in (11.18), namely \(\mathfrak{J}_{6,e}^{\hat{\mathsf{A}}_{n}}\).
Recalling (11.6a), (3.25), and (5.32), we may write \[\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\mathcal{G} \mathcal{G}^{\mathcal{N}}_{\tilde{\mathbf{W}}}\] \[=\varepsilon\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\bigl{(}J_{s} \hat{\mathbf{W}}_{\mathcal{N}}+J_{s}\tilde{\mathbf{Z}}_{\mathcal{N}}-2J_{s} \hat{\mathbf{A}}_{\mathcal{T}}\bigr{)}\widetilde{\mathsf{D}}_{2}\widetilde{ \mathfrak{D}}^{4}\widetilde{\mathsf{D}}_{2}\bigl{(}g(\tfrac{1+\alpha}{2}\hat{ \mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\tilde{\mathbf{Z}}_{\mathcal{T}}) \bigr{)}\] \[-\varepsilon\tfrac{\alpha}{2}\bigl{(}\widetilde{\mathsf{D}}_{2}^{2}h \Sigma g^{-\frac{3}{2}}-\hat{\mathbf{A}}_{\mathcal{T}}\bigr{)}\widetilde{ \mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{4}\Bigl{(}(J_{s}\hat{\mathbf{W}}_{ \mathcal{N}})\bigl{(}\tfrac{\alpha}{2}\hat{\mathbf{A}}_{\mathcal{T}}-\tfrac{\alpha }{2}\Sigma g^{-\frac{3}{2}}\widetilde{\mathsf{D}}_{2}^{2}h\bigr{)}+\alpha \Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}\tilde{\mathfrak{D}}_{ \mathcal{N}}\tilde{\mathsf{A}}_{\mathcal{N}}-\tfrac{\alpha}{2}\bigl{(}\hat{ \mathbf{A}}_{\mathcal{T}}+\Sigma g^{-\frac{3}{2}}\widetilde{\mathsf{D}}_{2}^{2}h \bigr{)}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\bigl{(}\tfrac{3+\alpha}{2} \hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\tilde{\mathbf{Z}}_{\mathcal{T}} \bigr{)}J_{g}\hat{\mathbf{A}}_{\mathcal{N}}+\bigl{(}\tfrac{1+\alpha}{2}\hat{ \mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\tilde{\mathbf{Z}}_{\mathcal{T}} \bigr{)}J_{g}\hat{\mathbf{W}}_{\mathcal{T}}+\alpha\Sigma g^{-\frac{3}{2}} \widetilde{\mathsf{D}}_{2}^{2}hJ_{g}\hat{\mathbf{A}}_{\mathcal{T}}\Bigr{)}\] \[-\varepsilon\tfrac{\alpha}{2}\Sigma g^{-\frac{3}{2}}\bigl{(}J_{s} \hat{\mathbf{W}}_{\mathcal{N}}+J_{s}\tilde{\mathbf{Z}}_{\mathcal{N}}-2J_{s}\hat{ \mathbf{A}}_{\mathcal{T}}\bigr{)}\widetilde{\mathsf{D}}_{2}\widetilde{ \mathfrak{D}}^{4}\widetilde{\mathsf{D}}_{2}V\widetilde{\mathfrak{D}}_{2}h\bigr{)}- \tfrac{\alpha}{2}\bigl{(}\widetilde{\mathsf{D}}_{2}^{2}h\Sigma g^{-\frac{3}{2}} \bigl{(}J_{s}\hat{\mathbf{Z}}_{\mathcal{N}}\bigr{)}\] \[+\alpha\widetilde{\mathsf{D}}_{2}^{2}h\,\widetilde{\mathsf{D}}_{2} \widetilde{\mathfrak{D}}^{5}\Bigl{(}\Sigma g^{-\frac{3}{2}}\bigl{(}J_{s}\hat{ \mathbf{Z}}_{\mathcal{N}}-J_{s}\hat{\mathbf{A}}_{\mathcal{T}}\bigr{)}\Bigr{)}+ \tfrac{\alpha}{2}\bigl{(}\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}} Here we single out the term \(\frac{\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\mathcal{J}^{\frac{3}{2}} \widetilde{\mathbb{D}}_{2}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{ \mathcal{T}}\) arising in \(\frac{\alpha}{2}\mathcal{J}^{\frac{3}{4}}\big{[}\widetilde{\mathbb{D}}_{2} \widetilde{\mathfrak{D}}^{5},\hat{\mathbf{A}}_{\mathcal{T}}\big{]}\big{(}J_{g }\hat{\mathbf{W}}_{\mathcal{N}}-J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\), as the only term responsible for the factor of \(\mathsf{K}\langle\mathsf{B}_{6}\rangle\) in the bound for \(\widetilde{\mathbb{D}}_{2}\widetilde{\mathfrak{D}}^{5}\mathcal{G}_{\mathbf{W}}^ {\mathcal{N}}\); one may verify that all other terms in (11.25) contribute at most a factor of \(\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\). 
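The bookkeeping behind singling out this term is a plain Leibniz count. Schematically, for the sixth-order operator \(\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}\) and factors \(a,b\), with \(c_{m}\geq 0\) combinatorial constants and \(\mathsf{D}^{m}\) standing for a generic string of \(m\) derivatives drawn from \(\{\widetilde{\mathsf{D}}_{2},\widetilde{\mathfrak{D}}\}\),
\[\big[\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5},a\big]b=\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}(ab)-a\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}b=\big(\widetilde{\mathsf{D}}_{2}\widetilde{\mathfrak{D}}^{5}a\big)\,b+\sum_{1\leq m\leq 5}c_{m}\,(\mathsf{D}^{m}a)(\mathsf{D}^{6-m}b)\,.\]
With \(a=\hat{\mathbf{A}}_{\mathcal{T}}\) and \(b=J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), only the first term on the right side places all six derivatives on \(\hat{\mathbf{A}}_{\mathcal{T}}\), and it is exactly the term isolated above; every mixed term carries at most five derivatives on each factor, which is why those contributions gain the extra factor of \(\varepsilon\) recorded in the bound \(\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\).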
In a similar fashion, we may bound the remaining (easy) terms \(\mathfrak{J}_{6,b}^{\hat{\mathsf{A}}_{n}},\mathfrak{J}_{6,c}^{\hat{\mathsf{A}}_{n}},\mathfrak{J}_{6,d}^{\hat{\mathsf{A}}_{n}}\) in (11.18) as follows:
\[\big|\mathfrak{J}_{6,b}^{\hat{\mathsf{A}}_{n}}\big|+\big|\mathfrak{J}_{6,c}^{\hat{\mathsf{A}}_{n}}\big|+\big|\mathfrak{J}_{6,d}^{\hat{\mathsf{A}}_{n}}\big|\leq\mathring{C}\varepsilon^{2}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L_{x}^{2}}\mathrm{d}\mathsf{s}'\leq\mathring{C}\varepsilon^{3}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}+\mathring{C}\varepsilon\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{5}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}'\,,\tag{11.26}\]
where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). In summary, from (11.18) and the bounds (11.24) and (11.26) we deduce the lower bound
\[\mathfrak{J}_{6}^{\hat{\mathsf{A}}_{n}}\geq-\mathring{C}\varepsilon^{2}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s})\big\|_{L_{x}^{2}}^{2}-\tfrac{2\alpha}{5\varepsilon}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}(\cdot,\mathsf{s}')\big\|_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}'-\mathring{C}\varepsilon^{3}(1+\varepsilon\langle\beta\rangle)(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\,.\tag{11.27}\]

### The forcing and commutator terms

Returning to (11.9) and the decompositions (11.10)–(11.11), it remains to estimate the forcing and commutator terms \(\mathfrak{J}_{5}^{\hat{\mathsf{Z}}_{n}}\) and \(\mathfrak{J}_{5}^{\hat{\mathsf{A}}_{n}}\), defined in (11.10e) and (11.11e), respectively.
For the contributions from the commutator terms defined in (11.8), by appealing to Lemmas B.1, B.3, and B.5, to the bootstraps (5.37), to the bounds on the geometry in Proposition 7.1, to the improved \(\hat{\mathbf{A}}_{\mathcal{N}}\) estimates (8.21), to the improved \(\hat{\mathbf{Z}}_{\mathcal{N}}\) estimates (8.22), and to the bounds (11.4), we obtain
\[\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathfrak{C}_{\hat{\mathbf{Z}}}^{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}\leq\tfrac{6}{\varepsilon}\|\Sigma\widetilde{\mathfrak{D}}(J_{g}\Sigma^{-1})\|_{L_{x,\mathsf{s}}^{\infty}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\mathring{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\sum_{k=0}^{4}\big\|\widetilde{\mathfrak{D}}^{6-k}(J_{g}\Sigma^{-1})\,\widetilde{\mathfrak{D}}^{k+1}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\mathring{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\leq\tfrac{6(1+\alpha)}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\mathring{C}\mathsf{K}(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,,\tag{11.28a}\]
and in a similar fashion
\[\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathfrak{C}_{\hat{\mathbf{A}}}^{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}\leq\tfrac{6(1+\alpha)}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\mathring{C}\mathsf{K}(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,.\tag{11.28b}\]
For the contributions from \(\widetilde{\mathfrak{D}}^{6}\) acting on the forcing terms defined in (11.6b)–(11.6c), we need to be quite careful, especially with the contributions from \(\widetilde{\mathfrak{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\), \(\widetilde{\mathfrak{D}}^{6}\widetilde{\mathsf{D}}_{2}^{2}h\), and \(\widetilde{\mathfrak{D}}^{6}\widetilde{\mathsf{D}}_{2}J_{g}\).
For these terms, we use the identities (11.5a) and (11.6a), along with the bootstraps (5.37), the bounds on the geometry in Proposition 7.1, the improved \(\hat{\mathbf{A}}_{\mathcal{N}}\) estimates (8.21), the improved \(\hat{\mathbf{Z}}_{\mathcal{N}}\) estimates (8.22), the bounds (11.4), and the Moser inequality (B.13), which imply
\[\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathfrak{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big\|_{L_{x,\mathsf{s}}^{2}}\leq\varepsilon\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathfrak{D}}^{5}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big\|_{L_{x,\mathsf{s}}^{2}}\leq\alpha\varepsilon\|\Sigma g^{-\frac{1}{2}}J_{g}\|_{L_{x,\mathsf{s}}^{\infty}}\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathfrak{D}}^{5}\widetilde{\mathsf{D}}_{2}\hat{\mathbf{A}}_{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\varepsilon\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathfrak{D}}^{5}\mathcal{G}_{\hat{\mathbf{W}}}^{\mathcal{N}}\big\|_{L_{x,\mathsf{s}}^{2}}+\cdots\,.\tag{11.29}\]
Using the above estimates in (11.29), the bound (11.4), and the product and commutator estimates (B.13) and (B.16), we may derive from (11.6b)–(11.6c) the bounds
\[\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathfrak{D}}^{6}\mathcal{G}_{\hat{\mathbf{Z}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{1+\alpha}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}+\mathring{C}\mathsf{K}(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,,\tag{11.30a}\]
\[\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathfrak{D}}^{6}\mathcal{G}_{\hat{\mathbf{A}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{1}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}+\mathring{C}\mathsf{K}(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,.\tag{11.30b}\]
Inserting the bounds (11.28) and (11.30) into the definitions (11.10e) and (11.11e), we arrive at
\[\big|\mathfrak{J}_{5}^{\hat{\mathsf{Z}}_{n}}\big|+\big|\mathfrak{J}_{5}^{\hat{\mathsf{A}}_{n}}\big|\leq\tfrac{100(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}},\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'+\mathring{C}\varepsilon\mathsf{K}^{2}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,,\tag{11.31}\]
with \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})>0\).
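The passage from the bounds (11.28) and (11.30) to the time-integrated estimate (11.31) repeatedly uses the \(\varepsilon\)-weighted Young inequality. A representative instance is the following sketch, in which \(M\) stands for a constant of the size \(\mathring{C}\mathsf{K}(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\):
\[\int_{0}^{\mathsf{s}}M\,\big\|F(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}\,\mathrm{d}\mathsf{s}'\leq\tfrac{1}{2\varepsilon}\int_{0}^{\mathsf{s}}\big\|F(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'+\tfrac{\varepsilon\,\mathsf{s}}{2}M^{2}\leq\tfrac{1}{2\varepsilon}\int_{0}^{\mathsf{s}}\big\|F(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'+\tfrac{\varepsilon^{2}}{2}M^{2}\,,\]
since \(\mathsf{s}\leq\varepsilon\). This is the mechanism producing both the \(\varepsilon^{-1}\)-multiplied Gronwall terms and the \(\mathring{C}\varepsilon\mathsf{K}^{2}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\) error in (11.31).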
### Conclusion of the proof of Proposition 11.1

At this stage we gather the identity (11.9) and the lower bounds (11.17), (11.27), and (11.31), to derive that
\[0\geq\big(\tfrac{1}{2}-\mathring{C}\varepsilon\big)\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}},\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(\mathsf{Q}J_{g})^{\frac{1}{2}}}{\Sigma^{\beta}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}},\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,0)\big\|_{L^{2}_{x}}^{2}+\Big(\tfrac{4\alpha(\beta-1)}{5\varepsilon}-\tfrac{100(1+\alpha)}{\varepsilon}\Big)\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}},\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}'-\mathring{C}\varepsilon(1+\varepsilon^{3}\langle\beta\rangle)(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}+\big(\tfrac{3(1+\alpha)}{5\varepsilon}-\mathring{C}\beta\big)\int_{0}^{\mathsf{s}}\!\!\int\tfrac{\mathcal{J}^{\frac{3}{2}}J_{g}}{\Sigma^{2\beta}}\big(\tfrac{1}{2}(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}})^{2}+(\widetilde{\mathfrak{D}}^{6}\hat{\mathbf{A}}_{\mathcal{N}})^{2}\big)\,.\]
Choosing \(\beta\) sufficiently large with respect to \(\alpha\), and then \(\varepsilon\) sufficiently small, the damping prefactors above are nonnegative; rearranging, taking a supremum over \(\mathsf{s}\in[0,\varepsilon]\), and appealing to the initial-data bounds then yields (11.1), which concludes the proof of Proposition 11.1.

## 12 Energy estimates for the normal components

Applying \(\widetilde{\mathsf{D}}^{6}\) to the evolution equations for \(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}\), \(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), and \(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\) yields the differentiated equations (12.1a) and (12.1b) for the first two quantities, while for \(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\) we find
\[-\tfrac{\alpha}{\varepsilon}\widetilde{\mathsf{D}}_{1}\widetilde{ \mathsf{D}}^{6}(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})+\tfrac{ \alpha}{\varepsilon}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}J_{g} \boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}}+\alpha J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6 }(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})-\alpha J_{g}g^{-\frac{1}{2} }\widetilde{\mathsf{D}}_{2}h\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}} \widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}\] \[-\tfrac{\alpha}{2}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6 }\tilde{\tau}\cdot\mathcal{N}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h( J_{g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}+J_{g}\boldsymbol{\hat{ \mathsf{Z}}}_{\mathcal{N}}-2J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{T}})= \widetilde{\mathsf{D}}^{6}\mathsf{F}_{\mathsf{A}}^{\mathcal{N}}+\mathcal{F}_{ \mathsf{A}}^{\mathcal{N}}+\mathcal{C}_{\mathsf{A}}^{\mathcal{N}}\,, \tag{12.1c}\] where the forcing terms \(\mathsf{F}_{\mathsf{W}}^{\mathcal{N}},\mathsf{F}_{\mathsf{A}}^{\mathcal{N}}\), \(\mathsf{F}_{\mathsf{A}}^{\mathcal{N}}\) are defined by (3.43), while the remainder and commutator terms are given by \[\mathcal{R}_{\boldsymbol{\hat{\mathsf{W}}}}^{\mathcal{N}} =\Sigma^{-1}\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}( J_{g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}})+\widetilde{\mathsf{D}}^{6} \Sigma^{-1}(\mathsf{Q}\partial_{\mathsf{A}}+V\partial_{2})(J_{g}\boldsymbol{ \hat{\mathsf{W}}}_{\mathcal{N}})+\alpha\widetilde{\mathsf{D}}^{6}g^{-\frac{1}{2 }}\widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})\] \[\qquad-\alpha\widetilde{\mathsf{D}}^{6}(g^{-\frac{1}{2}} \boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})\widetilde{\mathsf{D}}_{2}J_{g}- \tfrac{\alpha}{2}\widetilde{\mathsf{D}}^{6}\big{(}g^{-\frac{1}{2}}(J_{g} \boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}+J_{g}\boldsymbol{\hat{\mathsf{Z} }}_{\mathcal{N}}-2J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{T}})\mathcal{N }_{i}\big{)}\widetilde{\mathsf{D}}_{2}\tau^{i}\,, \tag{12.2a}\] \[\mathcal{C}_{\boldsymbol{\hat{\mathsf{W}}}}^{\mathcal{N}} =\big{(}\widetilde{\mathsf{D}}^{6},\Sigma^{-1},(\mathsf{Q} \partial_{\mathsf{A}}+V\partial_{2})(J_{g}\boldsymbol{\hat{\mathsf{W}}}_{ \mathcal{N}})\big{)}+\Sigma^{-1}\big{(}\widetilde{\mathsf{D}}^{6},V, \widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}) \big{)}+\alpha\big{(}\widetilde{\mathsf{D}}^{6},g^{-\frac{1}{2}},\widetilde{ \mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})\big{)}\] \[\qquad-\alpha\big{(}\widetilde{\mathsf{D}}^{6},g^{-\frac{1}{2}} \boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}},\widetilde{\mathsf{D}}_{2}J_{g} \big{)}-\tfrac{\alpha}{2}\big{(}\widetilde{\mathsf{D}}^{6},g^{-\frac{1}{2}}(J_{ g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}+J_{g}\boldsymbol{\hat{\mathsf{Z} }}_{\mathcal{N}}-2J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{T}})\mathcal{N }_{i}\big{)}\widetilde{\mathsf{D}}_{2}\tau^{i}\,, \tag{12.2b}\] for (12.1a), and \[\mathcal{C}_{\boldsymbol{\hat{\mathsf{Z}}}}^{\mathcal{N}} =\big{(}\widetilde{\mathsf{D}}^{6},\Sigma^{-1}J_{g},(\mathsf{Q} \partial_{\mathsf{A}}+V\partial_{2})(J_{g}\boldsymbol{\hat{\mathsf{Z}}}_{ \mathcal{N}})\big{)}+\Sigma^{-1}J_{g}\big{(}\widetilde{\mathsf{D}}^{6},V, \widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}}) \big{)}-\big{(}\widetilde{\mathsf{D}}^{6},\tfrac{1}{\Sigma}(\tfrac{1+\alpha}{2} 
J_{g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g} \boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}}),J_{g}\boldsymbol{\hat{\mathsf{Z}}}_{ \mathcal{N}}\big{)}\] \[\qquad+\alpha\big{(}\widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}},\widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}}) \big{)}-\alpha\big{(}\widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}} \boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}},\widetilde{\mathsf{D}}_{2}J_{g}\big{)}\] \[\qquad-\tfrac{\alpha}{2}\big{(}\widetilde{\mathsf{D}}^{6},J_{g}g^{- \frac{1}{2}}(J_{g}\boldsymbol{\hat{\mathsf{W}}}_{\mathcal{N}}+J_{g}\boldsymbol{ \hat{\mathsf{Z}}}_{\mathcal{N}}-2J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{T}}) \mathcal{N}_{i},\widetilde{\mathsf{D}}_{2}\tau^{i}\big{)}\] \[\qquad-\tfrac{2\alpha}{\varepsilon}\big{(}\widetilde{\mathsf{D}}^{6},J_{g}(\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}}+\boldsymbol{\hat{\mathsf{Z}}}_{ \mathcal{T}})\mathcal{N}_{i},\widetilde{\mathsf{D}}_{1}\tau^{i}\big{)}+\tfrac{2 \alpha}{\varepsilon}\big{(}\widetilde{\mathsf{D}}^{6},\boldsymbol{\hat{\mathsf{Z} }}_{\mathcal{N}},\widetilde{\mathsf{D}}_{1}J_{g}\big{)}+2\alpha\big{(} \widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h,\widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}}) \big{)}\] \[\qquad+2\alpha\big{(}\widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h(\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}}+ \boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{T}})\mathcal{N}_{i},\widetilde{\mathsf{D}}_{2} \tau^{i}\big{)}-2\alpha\big{(}\widetilde{\mathsf{D}}^{6},J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}h\boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}},\widetilde{ \mathsf{D}}_{2}J_{g}\big{)}\,, \tag{12.3b}\] for (12.1b), and \[\mathcal{R}_{\boldsymbol{\hat{\mathsf{A}}}}^{\mathcal{N}} =\Sigma^{-1}J_{g}\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D} }_{2}(J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})+\widetilde{\mathsf{D}}^{6}( \Sigma^{-1}J_{g})(\mathsf{Q}\partial_{\mathsf{A}}+V\partial_{2})(J_{g} \boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}})-\widetilde{\mathsf{D}}^{6}\big{(} \tfrac{1}{\Sigma}(\tfrac{1+\alpha}{2}J_{g}\boldsymbol{\hat{\mathsf{W}}}_{ \mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}}) \big{)}J_{g}\boldsymbol{\hat{\mathsf{A}}}_{\mathcal{N}}\] \[\qquad+\alpha\widetilde{\mathsf{D}}^{6}\big{(}J_{g}g^{-\frac{1}{2}} \widetilde{\mathsf{D}}_{2}(J_{g}\boldsymbol{\hat{\mathsf{S}}}_{\mathcal{N}}) -\alpha\widetilde{\mathsf{D}}^{6}\big{(}g^{-\frac{1}{2}}(J_{g}\boldsymbol{ \hat{\mathsf{S}}}_{\mathcal{N}})\big{)}\widetilde{\mathsf{D}}_{2}J_{g}+ \alpha\widetilde{\mathsf{D}}^{6}\big{(}J_{g}^{-g^{-\frac{1}{2}}}\boldsymbol{\hat{ \mathsf{S}}}_{\mathcal{T}}\mathcal{N} ### The integral \(I^{\hat{\mathcal{W}}_{n}}\) We additively decompose the integral \(I^{\hat{\mathcal{W}}_{n}}\) as \[I^{\hat{\mathcal{W}}_{n}} =I_{1}^{\hat{\mathcal{W}}_{n}}+I_{1}^{\hat{\mathcal{W}}_{n}}+I_{4}^ {\hat{\mathcal{W}}_{n}}+I_{5}^{\hat{\mathcal{W}}_{n}}+I_{6}^{\hat{\mathcal{W}}_ {n}}\,,\] \[I_{1}^{\hat{\mathcal{W}}_{n}} =\int_{0}^{\infty}\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\cdots\,,\] with the remaining integrals \(I_{2}^{\hat{\mathsf{W}}_{n}},\dots,I_{6}^{\hat{\mathsf{W}}_{n}}\) defined and estimated analogously to their tangential counterparts in Section 10. Proceeding in the same manner for the normal components, we record in particular the boundary contribution
\[I_{5}^{\hat{\mathbf{A}}_{n}}=2\alpha\int_{0}^{\mathsf{s}}\!\!\int\cdots\,,\]
where
\[\mathsf{G}_{2}=\alpha\Sigma^{2\beta}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\big(\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big)\geq-\mathring{C}\langle\beta\rangle\varepsilon\mathcal{J}^{\frac{1}{2}}J_{g}\tag{12.14}\]
and we have appealed to (5.37), (6.38), (6.64), and the bound \(\mathcal{J}\leq J_{g}\). At this stage we also note that the bootstrap inequalities imply
\[-\alpha\int\overline{\mathsf{Q}}_{2}\,\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\big(\big|\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big|^{2}+\big|\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big|^{2}\big)\Big|_{\mathsf{s}}\geq-\mathring{C}\varepsilon^{2}\Big(\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}\Big)\,.\tag{12.15}\]
Finally, we have the pure damping terms \(I_{2}^{\hat{\mathsf{Z}}_{n}}\) and \(I_{2}^{\hat{\mathsf{A}}_{n}}\).
Summing (12.7b) and (12.8b) yields
\[I_{2}^{\hat{\mathbf{Z}}_{n}}+I_{2}^{\hat{\mathbf{A}}_{n}}=\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathsf{G}_{3}\big{(}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{|}^{2}+2\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{|}^{2}\big{)}\,, \tag{12.16}\]
and by using (5.37), (6.38g), and (6.64), upon choosing \(\varepsilon\) sufficiently small, we have that
\[\mathsf{G}_{3}:=-\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\geq\tfrac{9(1+\alpha)}{2\sigma}\mathcal{J}^{\frac{3}{2}}-\tfrac{17}{\varepsilon}\mathsf{Q}\mathcal{J}^{\frac{3}{2}}J_{g}\,. \tag{12.17}\]
There are three terms with seven derivatives acting on the fundamental variables, and once again these terms combine to produce an exact derivative, which we then integrate by parts. Adding (12.6b), (12.7c), and (12.8c), employing the identity \(\hat{\boldsymbol{\Sigma}}_{\mathcal{N}}=\tfrac{1}{2}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{1}{2}\hat{\mathbf{Z}}_{\mathcal{N}}\), and using (5.28c), we have that
\[I_{3}^{\hat{\mathbf{W}}_{n}}+I_{3}^{\hat{\mathbf{Z}}_{n}}+I_{3}^{\hat{\mathbf{A}}_{n}}=2\alpha\int_{0}^{\mathsf{s}}\!\!\int\cdots\]
#### 12.7.1. Important geometric identities
**Lemma 12.1**.: _For a function \(f(x,\mathsf{s})\), we have that_
\[\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}\right|+\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}^{6}J_{g}\right|\lesssim\varepsilon^{3}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\Big{(}\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}f\big{\|}_{L^{\infty}_{x,\mathsf{s}}}+\varepsilon\big{\|}\mathcal{J}^{-\frac{1}{2}}f\big{\|}_{L^{\infty}_{x,\mathsf{s}}}\Big{)}\,.
\tag{12.20}\] Proof of Lemma 12.1.: Using the identities (3.11) and (3.18), we have that \[\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}J_{g} =\tfrac{1}{\varepsilon}g^{\frac{1}{2}}\widetilde{\mathsf{D}}_{1} (\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau)-J_{g}\widetilde{\mathsf{ D}}_{2}h\widetilde{\mathsf{D}}_{2}(\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau)\] \[\quad-\tfrac{1}{\varepsilon}g^{\frac{1}{2}}\tau\!\cdot\!\widetilde {\mathsf{D}}_{1}\mathcal{N}\!\cdot\!\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}^{6 }\tau+\tfrac{1}{\varepsilon}\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}(g^{ \frac{1}{2}}\mathcal{N})\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{1}\tau+ \tfrac{1}{\varepsilon}\big{(}\widetilde{\mathsf{D}}^{6},g^{\frac{1}{2}} \mathcal{N}_{i},\widetilde{\mathsf{D}}_{1}\tau_{i}\big{)}\] \[\quad+J_{g}\widetilde{\mathsf{D}}_{2}h\tau\!\cdot\!\widetilde{ \mathsf{D}}_{2}\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau-\mathcal{N} \!\cdot\!\widetilde{\mathsf{D}}^{6}(J_{g}\widetilde{\mathsf{D}}_{2}h\mathcal{ N}_{i})\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}\tau-\big{(}\widetilde{ \mathsf{D}}^{6},J_{g}\widetilde{\mathsf{D}}_{2}h\mathcal{N}_{i},\widetilde{ \mathsf{D}}_{2}\tau_{i}\big{)} \tag{12.21}\] Hence, integrating-by-parts using (5.28) and (12.21), we find that \[\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{ \mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g} =-\tfrac{1}{2\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\widetilde{ \mathsf{D}}_{1}(f\,g^{\frac{1}{2}})\,(\mathcal{N}\!\cdot\!\widetilde{\mathsf{D }}^{6}\tau)^{2}-\tfrac{1}{2}\int_{0}^{\mathsf{s}}\!\!\int(\widetilde{\mathsf{Q }}_{2}-\widetilde{\mathsf{D}}_{2})(f\,J_{g}\widetilde{\mathsf{D}}_{2}h)\,( \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau)^{2}\] \[\quad+\tfrac{1}{2}\int\overline{\mathsf{Q}}_{2}f\,J_{g}\widetilde {\mathsf{D}}_{2}h(\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau)^{2}\Big{|} _{\mathsf{s}}-\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int f\, \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\,g^{\frac{1}{2}}\mathcal{ N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\] \[\quad+\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\, \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}(g^{\frac{1}{2}}\mathcal{N})\, \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{1}\tau+\tfrac{1}{\varepsilon}\int_ {0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6} \tau\,\big{(}\widetilde{\mathsf{D}}^{6},g^{\frac{1}{2}}\mathcal{N}_{i}, \widetilde{\mathsf{D}}_{1}\tau_{i}\big{)}\] \[\quad+\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\! \widetilde{\mathsf{D}}^{6}\tau\,J_{g}\widetilde{\mathsf{D}}_{2}h\mathcal{T}\! \cdot\!\widetilde{\mathsf{D}}_{2}\mathcal{N}\,\tau\!\cdot\!\widetilde{\mathsf{ D}}^{6}\tau-\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\, \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}(J_{g}\widetilde{\mathsf{D}}_{2}h \mathcal{N})\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}\tau\] \[\quad-\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{ \mathsf{D}}^{6}\tau\,\big{(}\widetilde{\mathsf{D}}^{6},J_{g}\widetilde{\mathsf{D}}_{2 }h\mathcal{N}_{i},\widetilde{\mathsf{D}}_{2}\tau_{i}\big{)}\,. 
\tag{12.22}\] To conclude the bound for the first term in (12.20), we appeal to the bootstraps (5.37), the estimates (6.38), to the bounds (7.1g)-(7.1i), and to Lemmas B.3-B.5, to obtain \[\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{ \mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g} \right|\lesssim\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{1}f\big{\|} _{L^{\infty}_{x,\mathsf{s}}}\!\!K^{2}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^ {2}+\big{(}\big{\|}\mathcal{J}^{\frac{1}{2}}f\big{\|}_{L^{\infty}_{x,\mathsf{s}}} +\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{2}f\big{\|}_{L^{ \infty}_{x,\mathsf{s}}}\big{)}\mathsf{K}^{2}\varepsilon^{5}\langle\mathsf{B}_{6} \rangle^{2}\] \[\quad+\big{\|}\mathcal{J}^{-\frac{1}{2}}f\big{\|}_{L^{\infty}_{x, \mathsf{s}}}\mathsf{K}^{2}\varepsilon^{5}\langle\mathsf{B}_{6}\rangle^{2}+\big{\|} f\big{\|}_{L^{\infty}_{x,\mathsf{s}}}\mathsf{K}^{2}\varepsilon^{4}\langle \mathsf{B}_{6}\rangle^{2}\,.\] Since \(\mathcal{J}\leq 1\), this concludes the estimate for the first term in (12.20). The bound for the second term follows similarly since (5.28c) gives \[\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{ \mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}^{6}J_{g} =-\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6} \tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}-\int_{0}^{ \mathsf{s}}\!\!\int_{0}^{\mathsf{s}}\!\!\int\!\!\left(\widetilde{\mathsf{D}}_{2}f \,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau+f\tau\!\cdot\widetilde{\mathsf{ D}}_{2}\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\right)\widetilde{\mathsf{D}}^{6}J_{g}\] \[\quad+\int_{0}^{\mathsf{s}}\!\!\int\!\!\widetilde{\mathsf{Q}}_{2}f\, \mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}^{6}J_{g}- \int\overline{\mathsf{Q}}_{2}f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}^{6} \tau\,\widetilde{\mathsf{D}}^{6}J_{g}\Big{|}_{\mathsf{s}}\,. 
\tag{12.23}\]
Using the bootstrap inequalities (5.37), the estimates (6.38), and the bounds for the geometry (7.1), we deduce
\[\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}^{6}J_{g}\right|\lesssim\cdots\]
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\mathsf{W}}^{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}+(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\lesssim(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,. \tag{12.29a}\]
Similarly, from the definition of \(\mathcal{R}_{\mathsf{W}}^{\mathcal{N}}\) in (12.2), and by additionally appealing to (8.44), we find that
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\mathcal{R}_{\mathsf{W}}^{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}+(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\lesssim(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,. \tag{12.29b}\]
Lastly, using the definition of \(\mathcal{C}_{\mathsf{W}}^{\mathcal{N}}\) in (12.2), identity (3.25), the aforementioned bounds, Lemma B.1, the Leibniz rule, and Lemma B.3, we may also obtain
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\mathcal{C}_{\mathsf{W}}^{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\,. \tag{12.29c}\]
To give a more detailed explanation of how we arrived at (12.29c), we examine a typical commutator term, namely
\[\big{(}\widetilde{\mathsf{D}}^{6},\Sigma^{-1},(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{)}\,.\]
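Before expanding it, we recall the shape of the three-slot round-bracket notation (a sketch of the convention, assuming the double-commutator notation fixed earlier in the paper; only this general shape is used below): for smooth \(f\) and \(g\),
\[\big{(}\widetilde{\mathsf{D}}^{6},f,g\big{)}:=\widetilde{\mathsf{D}}^{6}(fg)-f\,\widetilde{\mathsf{D}}^{6}g-\big{(}\widetilde{\mathsf{D}}^{6}f\big{)}\,g\,,\]
so that every summand in the expansion of \(\big{(}\widetilde{\mathsf{D}}^{6},f,g\big{)}\) carries at most five derivatives on each factor; this is the mechanism that renders commutator terms lower order.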
Using ( 3.42a ), for \[k=1,2,3,4,5\] we have \[\widetilde{\mathrm{D}}^{k}(\mathsf{Q}\partial_{\mathsf{S}}+V \partial_{2})(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\] \[\qquad=\widetilde{\mathrm{D}}^{k}\big{(}-\alpha g^{-\frac{1}{2}} \Sigma\widetilde{\mathrm{D}}_{2}(J_{g}\mathbf{\hat{A}}_{\mathcal{N}})+\alpha g ^{-\frac{1}{2}}\Sigma\mathbf{\hat{A}}_{\mathcal{N}}\widetilde{\mathrm{D}}_{2 }J_{g}+\tfrac{\alpha}{2}g^{-\frac{1}{2}}\Sigma(J_{g}\mathbf{\hat{W}}_{\mathcal{ N}}+J_{g}\mathbf{\hat{Z}}_{\mathcal{N}}-2J_{g}\mathbf{\hat{A}}_{\mathcal{T}}) \widetilde{\mathrm{D}}_{2}\mathcal{T}\cdot\mathcal{N}+\Sigma\mathsf{F}^{ \mathcal{N}}_{\mathrm{W}}\big{)}\,.\] Using ( 5.37 ), ( 7.1 ), ( 8.21 ), ( 8.50a ), and ( 8.13 ), we obtain the bounds \[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathbf{Q})^{\frac{1}{2}}}{\Sigma ^{\beta}}\big{(}\widetilde{\mathrm{D}}^{6},\Sigma^{-1},(\mathsf{Q}\partial_{ \mathsf{S}}+V\partial_{2})(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\big{)}\big{\|}_{L ^{2}_{x,*}}\] \[\qquad\leq(\tfrac{4}{\kappa_{0}})^{\beta}\|\Sigma\widetilde{ \mathrm{D}}(J_{g}\Sigma^{-1})\|_{L^{2}_{x,*}}\big{\|}\widetilde{\mathrm{D}}^{5}( J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,*}}+\mathring{C}(\tfrac{4}{ \kappa_{0}})^{\beta}\!\sum\nolimits_{k=1}^{3}\!\big{\|}\widetilde{\mathrm{D}}^{4 -k}\widetilde{\mathrm{D}}(J_{g}\Sigma^{-1})\widetilde{\mathrm{D}}^{k}\widetilde{ \mathrm{D}}(J_{g}\mathbf{\hat{W}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,*}}+\mathring{ C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\langle\mathsf{B}_{6}\rangle\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ \[\big{\|}\tfrac{1}{\Sigma^{\beta-1}}\mathcal{J}^{\frac{3}{4}} \widetilde{\mathsf{D}}_{1}J_{g}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}} ^{5}\underline{\tilde{\mathsf{Z}}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,s}} \leq\tfrac{1}{\varepsilon}\big{\|}\widetilde{\mathsf{D}}_{1}J_{g} \big{\|}_{L^{\infty}_{x,s}}\big{\|}\tfrac{1}{\Sigma^{\beta-1}}\mathcal{J}^{ \frac{3}{4}}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\underline{ \tilde{\mathsf{Z}}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,s}}\] \[\leq\tfrac{4(1+\alpha)}{\varepsilon}\big{\|}\tfrac{1}{\Sigma^{ \beta}}\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{g}\underline{ \tilde{\mathsf{Z}}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,s}}+\mathring{C} \varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6} \rangle\,. 
\tag{12.34}\]
For case (b), it follows from (8.22d) that
\[\big{\|}\tfrac{1}{\Sigma^{\beta-1}}\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{5}\hat{\mathbf{Z}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{1}{\varepsilon}\big{\|}\tfrac{1}{\Sigma^{\beta}}\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}+\mathring{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{12.35}\]
and hence, using (12.35), we find that
\[\tfrac{2\alpha}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}_{1}J_{g}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{5}\hat{\mathbf{Z}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{12.36}\]
Finally, for case (c), from (11.2b) we also have that
\[\tfrac{2\alpha}{\varepsilon}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}_{1}J_{g}\,\widetilde{\mathsf{D}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{12.37}\]
Combining (12.32), (12.34), (12.36), and (12.37) proves the inequality (12.31b). A similar decomposition of \(\mathcal{C}^{\mathcal{N}}_{\hat{\mathbf{Z}}}\) yields (12.31c).
The bounds for the forcing, remainder, and commutator functions for the \(\hat{\mathbf{A}}_{\mathcal{N}}\)-equation (12.1c) are obtained in the identical fashion as for the \(\hat{\mathbf{Z}}_{\mathcal{N}}\)-equation, and we have that
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\hat{\mathbf{A}}}^{\mathcal{N}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{12.38a}\]
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}^{\mathcal{N}}_{\hat{\mathbf{A}}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\tfrac{4(1+\alpha)}{\varepsilon}\big{\|}\tfrac{1}{\Sigma^{\beta}}\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{\|}_{L^{2}_{x,\mathsf{s}}}\,, \tag{12.38b}\]
\[\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C}^{\mathcal{N}}_{\hat{\mathbf{A}}}\big{\|}_{L^{2}_{x,\mathsf{s}}}\leq\mathring{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{12.38c}\]
#### 12.7.3. The sum \(I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}}+I_{7}^{\hat{\mathbf{A}}_{n}}\)
We first note that by using \(\hat{\boldsymbol{\Sigma}}_{\mathcal{N}}=\frac{1}{2}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z}}_{\mathcal{N}})\), we have that
\[I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}}=-\alpha\int_{0}^{\mathsf{s}}\!\!\int\cdots \tag{12.39}\]
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[I_{7,a,ii_{1}}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int_{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}J_{g}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{ \mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\,\widetilde{\mathrm{D }}^{6}\!\left((\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\tau}+\tfrac{1-\alpha}{2} \hat{\mathbf{Z}}_{\tau})\mathcal{N}\right)\!\cdot\!\mathcal{N}\,\widetilde{ \mathrm{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,,\] \[I_{7,a,ii_{2}}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int\tfrac{1}{\Sigma^{2 \beta}}\mathcal{J}^{\frac{3}{2}}\left(\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_ {\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\right) \left(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}} -2J_{g}\hat{\mathbf{A}}_{\tau}\right)\widetilde{\mathrm{D}}^{6}\!\!\cdot\! \mathcal{N}\,\widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,,\] \[I_{7,a,ii_{3}}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\!\int_{J_{g}}(Q\hat{\mathbf{Q}}_{ \mathsf{s}}+V\partial_{2})\big{(}\tfrac{1}{\Sigma^{2\beta}}\gamma^{\frac{3}{2}} \left(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}} -2J_{g}\hat{\mathbf{A}}_{\tau}\right)\!\widetilde{\mathrm{D}}^{6}\!\!\cdot\! 
\mathcal{N}\,\widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,,\] \[I_{7,a,ii_{4}}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int J_{g}(\widetilde{ \mathrm{D}}_{2}V+\hat{\mathbf{Q}}_{\mathsf{s}}-V\hat{\mathrm{Q}}_{2})\tfrac{1} {\Sigma^{2\beta}}\gamma^{\frac{3}{2}}\left(J_{g}\hat{\mathbf{W}}_{\mathcal{N}} +J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau}\right) \widetilde{\mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\,\widetilde{\mathrm{D}}^{6}(J _{g}\hat{\mathbf{A}}_{\mathcal{N}})\,,\] \[I_{7,a,ii_{5}}^{\hat{\mathbf{A}}_{n}} =-\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\int\tfrac{1}{\Sigma^{2 \beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g }\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\,\widetilde{ \mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\,\widetilde{\mathrm{D}}^{6}(J_{g}\hat{ \mathbf{A}}_{\mathcal{N}})\Big{|}_{\mathsf{s}}\] \[I_{7,a,ii_{7}}^{\hat{\mathbf{A}}_{n}} =\int_{\Sigma^{2\beta}}J_{g}\mathcal{J}^{\frac{3}{2}}\left(J_{g} \hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{ \mathbf{A}}_{\tau}\right)\widetilde{\mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\, \widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\Big{|}_{\mathsf{ s}}\] By using (4.11), (5.37), (6.38), and (7.1) together with (B.13) and (B.16), all of the seven integrals above are directly estimated and we obtain that \[\big{|}I_{7,a,ii}^{\hat{\mathbf{A}}_{n}}\big{|}\lesssim(\tfrac{4}{\kappa_{0}})^ {2\beta}\mathsf{K}^{2}(\mathsf{B}_{6})^{2}\,. \tag{12.47}\] As we have shown in (12.43)-(12.47), the integrals \(I_{7,a,ii}^{\hat{\mathbf{A}}_{n}}\) through \(I_{7,a,viii}^{\hat{\mathbf{A}}_{n}}\), defined in (12.42b)-(12.42h), are bounded rather directly. In contrast, the integral \(I_{7,a,i}^{\hat{\mathbf{A}}_{n}}\) defined in (12.42a) cannot be bounded, but rather requires a cancellation to occur. In particular, employing integration-by-parts for this integral, we will arrive at a cancellation with the sum \(I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}}\) given in (12.39). 
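The mechanism is worth isolating. As a schematic sketch (with \(G\) standing for the smooth coefficient, \(F_{1},F_{2}\) for the top-order unknowns, and suppressing the \(\hat{\mathsf{Q}}_{2}\) and \(\overline{\mathsf{Q}}_{2}\) adjustments coming from (5.28c)), integration by parts in \(x_{2}\) trades the over-differentiated term for its mirror image plus lower-order contributions:
\[\int_{0}^{\mathsf{s}}\!\!\int G\,\widetilde{\mathsf{D}}_{2}F_{1}\,F_{2}=-\int_{0}^{\mathsf{s}}\!\!\int G\,F_{1}\,\widetilde{\mathsf{D}}_{2}F_{2}-\int_{0}^{\mathsf{s}}\!\!\int\widetilde{\mathsf{D}}_{2}G\,F_{1}\,F_{2}+\{\text{boundary terms}\}\,,\]
and it is a term of this first type which, in the decomposition (12.48) below, exactly reproduces \(-(I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}})\).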
By once again using (5.28c), we have that \[I_{7,a,i}^{\hat{\mathbf{A}}_{n}} =I_{7,a,i_{1}}^{\hat{\mathbf{A}}_{n}}+I_{7,a,i_{2}}^{\hat{\mathbf{ A}}_{n}}+I_{7,a,i_{3}}^{\hat{\mathbf{A}}_{n}}+I_{7,a,i_{4}}^{\hat{\mathbf{A}}_{n}}\,,\] \[I_{7,a,i_{1}}^{\hat{\mathbf{A}}_{n}} =\alpha\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int_{\mathrm{b}} \mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}} +J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\widetilde{ \mathrm{D}}_{2}\widetilde{\mathrm{D}}^{6}\!\cdot\!\mathcal{N}\,\widetilde{ \mathrm{D}}^{6}(J_{g}\hat{\mathbf{S}}_{\mathcal{N}})\,,\] \[I_{7,a,i_{2}}^{\hat{\mathbf{A}}_{n}} =\alpha\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\widetilde{\mathrm{D}}_{2 }\Big{(}\!\jmath_{g}\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{ \mathbf{A}}_{\tau})\widetilde{\mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\, \widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{S}}_{\mathcal{N}})\,,\] \[I_{7,a,i_{3}}^{\hat{\mathbf{A}}_{n}} =-\alpha\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int\!\!\!\!\!\int \hat{\mathrm{Q}}_{2\,\mathrm{b}}\mathcal{J}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g} \hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{ \mathbf{A}}_{\tau})\widetilde{\mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\, \widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{S}}_{\mathcal{N}})\,,\] \[I_{7,a,i_{4}}^{\hat{\mathbf{A}}_{n}} =\alpha\int\overline{\mathrm{Q}}_{2\,\mathrm{b}}\mathcal{J}^{ \frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g} \hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\widetilde{ \mathrm{D}}^{6}\!\!\cdot\!\mathcal{N}\,\widetilde{\mathrm{D}}^{6}(J_{g}\hat{ \mathbf{S}}_{\mathcal{N}})\Big{|}_{\mathsf{s}}\,. \tag{12.48}\] The first term in the above decomposition precisely equals \(-(I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}})\). Therefore, we sum (12.39) with (12.48) and find that \[I_{5}^{\hat{\mathbf{W}}_{n}}+I_{5}^{\hat{\mathbf{Z}}_{n}}+I_{7,a,i}^{\hat{ \mathbf{A}}_{n}}=I_{7,a,i_{2}}^{\hat{\mathbf{A}}_{n}}+I_{7,a,i_{3}}^{\hat{ \mathbf{A}}_{n}}+I_{7,a,i_{4}}^{\hat{\mathbf{A}}_{n}}\,. 
\tag{12.49}\]
Using that notation, the integral \(I_{4,a}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\) splits into the eight integrals \(I_{4,a,i}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}},\ldots,I_{4,a,viii}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\), recorded in (12.53a)--(12.53h); among them are
\[2\alpha\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}\hat{\mathbf{A}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\left(\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})-\varepsilon J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\right), \tag{12.53a}\]
\[-\tfrac{\alpha}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}\hat{\mathbf{A}}_{\mathcal{N}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\mathcal{T}})\widetilde{\mathsf{D}}^{6}J_{g}\left(\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}-\varepsilon J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\right), \tag{12.53b}\]
\[\cdots\]
\[2\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}\hat{\mathbf{A}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\left(\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\hat{\mathbf{A}}}^{\mathcal{N}}+\mathcal{R}_{\hat{\mathbf{A}}}^{\mathcal{N}}+\mathcal{C}_{\hat{\mathbf{A}}}^{\mathcal{N}}\right). \tag{12.53h}\]
The eight terms present in (12.53) may all be bounded directly, except for \(I_{4,a,v}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\), which must be cancelled with a contribution from \(I_{8}^{\hat{\mathbf{A}}_{n}}\), see (12.60) below. Indeed, from (5.37), (6.38), (7.1), and (12.38) we deduce
\[\big{|}I_{4,a,ii}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\big{|}+\big{|}I_{4,a,viii}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\big{|}\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.54}\]
By also appealing to Lemmas 12.1 and 12.2 we have
\[\big{|}I_{4,a,iv}^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}\big{|}+\cdots\]
For the last three terms above, by appealing to (5.37), (6.38), and (7.1), we deduce
\[\big{|}I^{\hat{\mathbf{A}}_{n}}_{8,b}\big{|}+\big{|}I^{\hat{\mathbf{A}}_{n}}_{8,c}\big{|}+\big{|}I^{\hat{\mathbf{A}}_{n}}_{8,d}\big{|}\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{12.59}\]
while for the first term we note the cancellation structure obtained by adding (12.53e) and (12.58a):
\[I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,v}+I^{\hat{\mathbf{A}}_{n}}_{8,a}=0\,.
\tag{12.60}\]
From (12.53), (12.58), and (12.60) we thus obtain
\[I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a}+I^{\hat{\mathbf{A}}_{n}}_{8}=I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,i}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,ii}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,iii}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,iv}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,vi}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,vii}+I^{\hat{\mathbf{W}}_{n}+\hat{\mathbf{Z}}_{n}}_{4,a,viii}+I^{\hat{\mathbf{A}}_{n}}_{8,b}+I^{\hat{\mathbf{A}}_{n}}_{8,c}+I^{\hat{\mathbf{A}}_{n}}_{8,d}\,,\]
which may be combined with the bounds (12.52), (12.54)--(12.57), and (12.59) to deduce
\[\big{|}I^{\hat{\mathbf{W}}_{n}}_{4}+I^{\hat{\mathbf{Z}}_{n}}_{4}+I^{\hat{\mathbf{A}}_{n}}_{8}\big{|}\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.61}\]
#### 12.7.5. The integral \(I_{4}^{\hat{\mathbf{A}}_{n}}\)
For the integral \(I_{4}^{\hat{\mathbf{A}}_{n}}\) defined in (12.8d) we first integrate-by-parts the \(\widetilde{\mathsf{D}}_{2}\) derivative. Using (5.28c), we have that
\[I^{\hat{\mathbf{A}}_{n}}_{4}=I^{\hat{\mathbf{A}}_{n}}_{4,a}+I^{\hat{\mathbf{A}}_{n}}_{4,b}+I^{\hat{\mathbf{A}}_{n}}_{4,c}\,,\]
\[I^{\hat{\mathbf{A}}_{n}}_{4,a}=2\alpha\int_{0}^{\mathsf{s}}\!\!\int\cdots\]
\[J_{1}^{\hat{\mathbf{A}}_{n}} =2\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}}\mathcal{J} ^{\frac{3}{2}}(J_{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}})(\mathsf{Q}\partial_{ \mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{ D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,, \tag{12.65a}\] \[J_{2}^{\hat{\mathbf{A}}_{n}} =2\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g} \hat{\mathbf{\Sigma}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}\; \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,,\] (12.65b) \[J_{3}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}}( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}^{\frac{3}{2}}\;(J_{ g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,,\] (12.65c) \[J_{4}^{\hat{\mathbf{A}}_{n}} =2\int_{0}^{\mathsf{s}}\!\!\int\!(\mathsf{Q}\partial_{\mathsf{s} }+V\partial_{2})\big{(}\tfrac{1}{\Sigma^{2\beta}}\big{)}\;\mathcal{J}^{\frac{3} {2}}(J_{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}})\;\widetilde{\mathsf{D}}^{6}J_{g }\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\] (12.65d) \[J_{5}^{\hat{\mathbf{A}}_{n}} =2\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}})(\hat{ \mathsf{Q}}_{\mathsf{s}}-V\hat{\mathsf{Q}}_{2}+\widetilde{\mathsf{D}}_{2}V) \widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}})\,,\] (12.65e) \[J_{6}^{\hat{\mathbf{A}}_{n}} =-\int\frac{\mathsf{Q}}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J _{g}\hat{\mathbf{W}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\Big{|}_{\mathsf{s}}\,,\] (12.65f) \[J_{7}^{\hat{\mathbf{A}}_{n}} =\int\frac{\mathsf{Q}}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J _{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\; \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\Big{|}_{ \mathsf{s}}\] (12.65g) \[J_{8}^{\hat{\mathbf{A}}_{n}} =2\int\frac{\mathsf{Q}}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}} J_{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\; \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\Big{|}_{0}\,,\] (12.65h) \[J_{9}^{\hat{\mathbf{A}}_{n}} =-\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}}( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}^{\frac{3}{2}}\;(J_{g} \hat{\mathbf{\Sigma}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}\; \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,. 
\tag{12.65i}\] Most terms in the decomposition (12.65) are estimated handily using (5.37), (6.38), (7.1), (8.44a) as \[\big{|}J_{2}^{\hat{\mathbf{A}}_{n}}\big{|}+\big{|}J_{4}^{\hat{ \mathbf{A}}_{n}}\big{|}+\big{|}J_{5}^{\hat{\mathbf{A}}_{n}}\big{|}+\big{|}J_{7}^ {\hat{\mathbf{A}}_{n}}\big{|}+\big{|}J_{9}^{\hat{\mathbf{A}}_{n}}\big{|}\] \[\qquad\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{ 6}\rangle^{2}+\tfrac{500^{2}}{\varepsilon^{2}\sqrt{1+\alpha}}\int_{0}^{\mathsf{ s}}\!\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{2}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{ \Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})( \cdot,\mathsf{s}^{\prime})\big{\|}_{L_{\mathbb{Z}}^{2}}\big{\|}\tfrac{\mathcal{ Q}\mathcal{J}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}( \cdot,\mathsf{s}^{\prime})\big{\|}_{L_{\mathbb{Z}}^{2}}\mathrm{d}\mathsf{s}^{ \prime}\,, \tag{12.66}\] and by also appealing to (4.8), (4.11) and (6.43) with \(\mathsf{s}=0\), we have \[\big{|}J_{8}^{\hat{\mathbf{A}}_{n}}\big{|}\leq\big{(}\tfrac{1}{ \varepsilon}+\hat{C}\big{)}\tfrac{1+\alpha}{2}(1+\hat{C}\varepsilon)(\tfrac{3}{ \kappa_{0}})^{2\beta}\mathsf{C}^{2}_{\mathsf{data}}\leq\tfrac{1+\alpha}{ \varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}^{2}_{\mathsf{data}}\,. \tag{12.67}\] The remaining terms, namely \(J_{1}^{\hat{\mathbf{A}}_{n}}\), \(J_{3}^{\hat{\mathbf{A}}_{n}}\), and \(J_{6}^{\hat{\mathbf{A}}_{n}}\), need to be handled carefully. The integral \(J_{1}^{\hat{\mathbf{A}}_{n}}\) produces an anti-damping term that must be combined with the last integral on the right side of (12.9). Using (5.27), (5.30), a short computation shows that \[J_{1}^{\hat{\mathbf{A}}_{n}} =J_{1,a}^{\hat{\mathbf{A}}_{n}}+J_{1,b}^{\hat{\mathbf{A}}_{n}}+J_{ 1,c}^{\hat{\mathbf{A}}_{n}}\,,\] (12.68) \[J_{1,a}^{\hat{\mathbf{A}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}(Q\partial_{\mathsf{s}}+V\partial_{2})J_{g}\;\big{|} \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{|}^{2}\] \[J_{1,b}^{\hat{\mathbf{A}}_{n}} =\tfrac{1-\alpha}{2}\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{ \Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}) \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,,\] \[J_{1,c}^{\hat{\mathbf{A}}_{n}} =-\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{(}\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{)}^{2}\,,\] \[J_{1,d}^{\hat{\mathbf{A}}_{n}} =-2\int_{0}^{\mathsf{s}}\!\!\int\!\frac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathbf{\Sigma}}_{\mathcal{N}})\big{(} \widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}J_{g}+\big{(}\widetilde{ \mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}\big{)}\;\widetilde{ \mathsf{ where the anti-damping term \(\mathsf{G}_{\text{bad}}=\mathcal{J}^{\frac{3}{2}}(\mathsf{Q}\partial_{\mathsf{s}} +V\partial_{2})J_{g}\) is precisely the same as in the case of the tangential derivatives (see (10.57)). This last term will be combined with the damping term \(\mathsf{G}_{\text{good}}\) present on the right side of (12.19). 
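To see why \(\mathsf{G}_{\text{bad}}\) carries the dangerous sign, note that (as a sketch, using the evolution identity for \(J_{g}\) that underlies (5.30) and the definition of \(\mathsf{G}_{3}\) in (12.17))
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}=\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}=-\mathsf{G}_{3}\,,\]
so that \(\mathsf{G}_{\text{bad}}=-\mathcal{J}^{\frac{3}{2}}\mathsf{G}_{3}\) carries the sign opposite to that of the damping terms in (12.16).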
For this purpose, precisely in the same way as for the tangential derivatives (see (10.58)), we record the inequality
\[\mathsf{G}_{\text{good}}+\mathsf{G}_{\text{bad}}\geq\tfrac{1+\alpha}{16\varepsilon}\mathcal{J}^{\frac{1}{2}}J_{g}\,, \tag{12.69d}\]
which follows directly from (6.65). Among the nine terms in (12.65) it thus remains to consider the integral \(J_{3}^{\hat{\mathbf{A}}_{n}}\) (which produces a new type of damping and energy norm, displayed, respectively, in the integrals \(J_{3,a}^{\hat{\mathbf{A}}_{n}}\) and \(J_{3,b}^{\hat{\mathbf{A}}_{n}}\) below), and the boundary term \(J_{6}^{\hat{\mathbf{A}}_{n}}\) (which is to be dealt with by a careful application of the \(\varepsilon\)-Young inequality). It is important to make an analogy between \(J_{3}^{\hat{\mathbf{A}}_{n}}\) and \(J_{6}^{\hat{\mathbf{A}}_{n}}\), and the terms \(\tilde{\mathsf{M}}_{2}\) and \(\tilde{\mathsf{M}}_{3}\), respectively, which have previously appeared in the tangential energy estimates, cf. (10.42) and (10.43). Our treatment of the terms \(J_{3}^{\hat{\mathbf{A}}_{n}}\) and \(J_{6}^{\hat{\mathbf{A}}_{n}}\) closely resembles the analysis in Subsection 10.6.3.
Concerning \(J_{3,a}^{\hat{\mathbf{A}}_{n}}\), we note that by the definition of \(\mathcal{J}\) in (5.18b), and upon using (5.30) and (5.27) to rewrite
\[\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=\tfrac{2}{1+\alpha}\big{(}\widetilde{\mathsf{D}}^{6}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}-\tfrac{1-\alpha}{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{)}\]
\[\qquad=\tfrac{2}{1+\alpha}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{2}{1+\alpha}\big{(}\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}J_{g}+\big{(}\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}\big{)}-\tfrac{1-\alpha}{1+\alpha}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\]
and finally, integrating-by-parts the material derivative using (5.28d), we arrive at
\[J_{3}^{\hat{\mathbf{A}}_{n}}=-\tfrac{3}{2\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\cdots\]
\[\big{|}J_{3,h}^{\hat{\mathbf{A}}_{n}}\big{|}\lesssim\varepsilon\mathsf{K}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{12.71f}\]
\[\big{|}J_{3,i}^{\hat{\mathbf{A}}_{n}}\big{|}\leq\tfrac{25}{\varepsilon^{2}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}\]
\[\qquad\leq\tfrac{1}{4(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{25^{2}(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{12.71h}\]
The bounds in (12.71) complete our estimates for the nine terms in the decomposition of \(J_{3}^{\hat{\mathbf{A}}_{n}}\). It remains to bound the term \(J_{6}^{\hat{\mathbf{A}}_{n}}\) in (12.65). This term requires a special application of the \(\varepsilon\)-Young inequality, akin to (10.54) for the tangential estimates. More precisely, by (5.37c), the lower bound in (6.38a), (6.38b), and the bound \(\mathcal{J}\leq J_{g}\), we have
\[\big{|}J_{6}^{\hat{\mathbf{A}}_{n}}\big{|}\leq\tfrac{\sqrt{5}}{\varepsilon\sqrt{2(1+\alpha)}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}\]
\[\qquad\leq\tfrac{25}{52}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}^{2}_{L_{x}^{2}}+\tfrac{13}{10(1+\alpha)\varepsilon^{2}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}^{2}_{L_{x}^{2}}\,. \tag{12.72}\]
Again, we recall that the fact \(\tfrac{25}{52}<\tfrac{1}{2}\) allows us to close the energy estimate.
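For the reader's convenience, here is the elementary computation behind the constants in (12.72) (a sketch, writing \(a\) and \(b\) for the two norms appearing there and \(c=\tfrac{\sqrt{5}}{\varepsilon\sqrt{2(1+\alpha)}}\) for the prefactor): by the \(\varepsilon\)-Young inequality \(ab\leq\tfrac{\lambda}{2}a^{2}+\tfrac{1}{2\lambda}b^{2}\) with \(\lambda=\tfrac{25}{26}\),
\[c\,ab\leq\tfrac{25}{52}\,a^{2}+\tfrac{13}{25}\,c^{2}\,b^{2}\,,\qquad\tfrac{13}{25}\,c^{2}=\tfrac{13}{25}\cdot\tfrac{5}{2(1+\alpha)\varepsilon^{2}}=\tfrac{13}{10(1+\alpha)\varepsilon^{2}}\,,\]
which produces exactly the two coefficients in (12.72), with the coefficient of \(a^{2}\) kept strictly below \(\tfrac{1}{2}\).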
We summarize the bounds in this subsection, namely (12.62), (12.64), (12.66), (12.67), (12.69), (12.71), and (12.72), together with \(\mathcal{J}\leq J_{g}\) and the \(\varepsilon\)-Young inequality, to obtain
\[I_{4}^{\hat{\mathbf{A}}_{n}}\geq-\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}-\tfrac{8(1+\alpha)}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}^{2}_{\mathsf{data}}\]
\[\qquad+\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big{(}\tfrac{1+\alpha}{24\varepsilon}\mathcal{J}^{\frac{3}{2}}J_{g}-\mathsf{G}_{\mathsf{good}}\big{)}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{|}^{2}-\tfrac{(12+25)(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}^{2}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}\]
\[\qquad+\tfrac{1}{20(1+\alpha)}\tfrac{1}{\varepsilon^{2}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}^{2}_{L_{x}^{2}}-\tfrac{20^{2}+50^{2}+100\cdot 250^{2}+\varepsilon\hat{C}(\beta)}{(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}^{2}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}\]
\[\qquad+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathsf{Q}\,\mathcal{J}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}^{2}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}+\big{(}8-\hat{C}\langle\beta\rangle\varepsilon\big{)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}^{2}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}\]
\[\qquad-\tfrac{25}{52}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}^{2}_{L_{x}^{2}}-\tfrac{39^{2}+500^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}^{2}_{L_{x}^{2}}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{12.73}\]
#### 12.7.6. The integral \(I_{7}^{\hat{\mathbf{Z}}_{n}}\)
We next study the integral \(I_{7}^{\hat{\mathbf{Z}}_{n}}\) in (12.7g).
Using (5.28), we have that
\[I_{7}^{\hat{\mathbf{Z}}_{n}}=I_{7,a}^{\hat{\mathbf{Z}}_{n}}+I_{7,b}^{\hat{\mathbf{Z}}_{n}}+I_{7,c}^{\hat{\mathbf{Z}}_{n}}+I_{7,d}^{\hat{\mathbf{Z}}_{n}}\,, \tag{12.74}\]
\[I_{7,a}^{\hat{\mathbf{Z}}_{n}}=\tfrac{2\alpha}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{T}})\cdots\]
\[I_{7,d}^{\hat{\mathbf{Z}}_{n}}=-2\alpha\int{}_{\mathcal{B}}\overline{\mathsf{Q}}_{2}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{T}})J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\;\widetilde{\mathsf{D}}^{6}\mathcal{T}\!\cdot\!\mathcal{N}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\Big{|}_{\mathsf{s}}\,.\]
With this additive decomposition, only the integral \(I_{7,a}^{\hat{\mathbf{Z}}_{n}}\) remains over-differentiated, while the integrals \(I_{7,b}^{\hat{\mathbf{Z}}_{n}}\), \(I_{7,c}^{\hat{\mathbf{Z}}_{n}}\), and \(I_{7,d}^{\hat{\mathbf{Z}}_{n}}\) can be directly estimated using (5.37), (6.38), and (7.1), as
\[\big{|}I_{7,b}^{\hat{\mathbf{Z}}_{n}}\big{|}+\big{|}I_{7,c}^{\hat{\mathbf{Z}}_{n}}\big{|}+\big{|}I_{7,d}^{\hat{\mathbf{Z}}_{n}}\big{|}\lesssim\mathsf{K}\varepsilon(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.75}\]
For the interesting integral, namely \(I_{7,a}^{\hat{\mathbf{Z}}_{n}}\), we use equation (12.1b) to replace \(\tfrac{2\alpha}{\varepsilon}\big{(}\widetilde{\mathsf{D}}_{1}-\varepsilon J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\;\widetilde{\mathsf{D}}_{2}\big{)}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\). This replacement yields integrals possessing either an "exact-derivative" structure or a derivative reduction which arises from terms containing the operator \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\). Using (12.1b), we proceed with this substitution, and find that
\[I_{7,a}^{\hat{\mathbf{Z}}_{n}}=\sum_{i=1}^{8}L_{i}^{\hat{\mathbf{Z}}_{n}}\,, \tag{12.76a}\]
\[L_{1}^{\hat{\mathbf{Z}}_{n}}=\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{T}})\widetilde{\mathsf{D}}^{6}\mathcal{T}\!\cdot\!\mathcal{N}\;J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,, \tag{12.76b}\]
\[L_{2}^{\hat{\mathbf{Z}}_{n}}=-\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\mathcal{T}})\widetilde{\mathsf{D}}^{6}\mathcal{T}\!\cdot\!\mathcal{N}\;(\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,, \tag{12.76c}\]
\[L_{3}^{\hat{\mathbf{Z}}_{n}}=-\alpha\int_{0}^{\mathsf{s}}\!\!\int\cdots\]
\[L_{1,c}^{\hat{\mathsf{Z}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\int(V\hat{\mathsf{Q}}_{2}-\widetilde{ \mathsf{D}}_{2}V-\hat{\mathsf{Q}}_{\mathsf{s}})\tfrac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}J_{g}^{2}(\hat{\mathbf{A}}_{{\mathcal{N}}}+\hat{ \mathsf{Z}}_{\mathcal{T}})\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\, \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{Z}}_{{\mathcal{N}}}),\] \[L_{1,d}^{\hat{\mathsf{Z}}_{n}} =\int\tfrac{\mathsf{Q}}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J _{g}^{2}(\hat{\mathbf{A}}_{{\mathcal{N}}}+\hat{\mathsf{Z}}_{\mathcal{T}}) \widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}(J_{ g}\hat{\mathsf{Z}}_{{\mathcal{N}}})\Big{|}_{0}^{\mathsf{s}}\,.\] From the above decomposition it is clear that no over-differentiated terms are present in the \(L_{1}^{\hat{\mathsf{Z}}_{n}}\) integral, and as such we may use (5.37), (6.38), and (7.1), to deduce \[\big{|}L_{1}^{\hat{\mathsf{Z}}_{n}}\big{|}\lesssim\mathsf{K}\varepsilon(\tfrac {4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.78}\] The last integral we need to consider from (12.76a) is \(L_{3}^{\hat{\mathsf{Z}}_{n}}\); this term requires further refinement in order to contend with the over-differentiation. To that end, we now use equation (12.1a) to substitute for the term \(\alpha g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}( J_{g}\hat{\mathbf{A}}_{{\mathcal{N}}})\), and we obtain the further decomposition of this integral as follows: \[L_{3}^{\hat{\mathsf{Z}}_{n}} =L_{3,a}^{\hat{\mathsf{Z}}_{n}}+L_{3,b}^{\hat{\mathsf{Z}}_{n}}+L_ {3,c}^{\hat{\mathsf{Z}}_{n}}+L_{3,d}^{\hat{\mathsf{Z}}_{n}}\,,\] \[L_{3,a}^{\hat{\mathsf{Z}}_{n}} =\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}} \mathcal{J}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{{\mathcal{N}}}+\hat{\mathsf{ Z}}_{\mathcal{T}})\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}J_{g}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{ \mathbf{W}}_{{\mathcal{N}}})\,,\] (12.79a) \[L_{3,b}^{\hat{\mathsf{Z}}_{n}} =-\alpha\int_{0}^{\mathsf{s}}\!\!\int\!\!\!\int\!\!\!\!\!\!\!\! \int\!\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\!\int\!\!\!\!\! \!\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! derivative level, in view of (5.37). As such, we may bound \(I_{5,b}^{\hat{\mathbf{A}}_{n}}\) in the identical fashion as the integral \(L_{3}^{\hat{\mathbf{Z}}_{n}}\) (cf. 
(12.80)) and obtain \[\big{|}I_{5,a}^{\hat{\mathbf{A}}_{n}}\big{|}\lesssim\mathsf{K}_{\mathcal{E}}( \tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,.\] Putting together the previous two estimates we thus obtain \[\big{|}I_{5}^{\hat{\mathbf{A}}_{n}}\big{|}\lesssim\mathsf{K}_{\mathcal{E}}( \tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.82}\] #### 12.7.8. The integral \(I_{8}^{\hat{\mathbf{Z}}_{n}}\) We next study the integral \(I_{8}^{\hat{\mathbf{Z}}_{n}}\), defined in (12.7h). By comparing the integral \(I_{8}^{\hat{\mathbf{Z}}_{n}}\) with \(I_{7}^{\hat{\mathbf{Z}}_{n}}\) (as defined in (12.7g)), we notice two differences. First, the lower order term \(-(J_{g}\hat{\mathbf{A}}_{N}+J_{g}\hat{\mathbf{Z}}_{7})\) in \(I_{7}^{\hat{\mathbf{Z}}_{n}}\), is replaced by the term \(\hat{\mathbf{Z}}_{N}\) in \(I_{8}^{\hat{\mathbf{Z}}_{n}}\). These terms, and their \(\widetilde{\mathsf{D}}\) derivatives, satisfy the same upper bounds in \(L_{\kappa_{0}}^{\infty}\), in light of (5.37). Second, the differential operator \(\widetilde{\mathsf{D}}_{1}-\varepsilon J_{g}g^{-\frac{1}{2}}\widetilde{ \mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\) acting on \(\widetilde{\mathsf{D}}^{6}{\mathcal{T}}\cdot\mathsf{N}\) in \(I_{7}^{\hat{\mathbf{Z}}_{n}}\), is now acting on \(\widetilde{\mathsf{D}}^{6}J_{g}\) in \(I_{8}^{\hat{\mathbf{Z}}_{n}}\). These terms satisfy nearly the same bounds in view of (7.1a), (7.1h), and (7.1i), with only one difference: the bounds for the sixth order derivatives of \(J_{g}\) are worse by a power of \((\mathsf{K}\varepsilon)^{-1}\) than the sixth order derivatives of \(\tau\). As such, it is clear that by repeating the decompositions and the bounds in Subsection 12.7.6, we arrive at a bound for \(I_{8}^{\hat{\mathbf{Z}}_{n}}\) which is worse than that for \(I_{7}^{\hat{\mathbf{Z}}_{n}}\) (see (12.81)) by a power of \((\mathsf{K}\varepsilon)^{-1}\), namely \[\big{|}I_{8}^{\hat{\mathbf{Z}}_{n}}\big{|}\leq(\tfrac{4}{\kappa_{0}})^{2\beta }\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.83}\] To avoid redundancy, we do not repeat the argument which proves (12.83). #### The forcing and commutator terms We now turn to the bounds for the remaining integrals \(I_{6}^{\hat{\mathbf{W}}_{n}}\), \(I_{10}^{\hat{\mathbf{Z}}_{n}}\), \(I_{10}^{\hat{\mathbf{A}}_{n}}\) in (12.6e), (12.7j), and (12.8j), respectively. In order to estimate these integrals, we use the definitions for the forcing functions in (3.43) together with the definitions of the so-called remainder and commutator functions in (12.2), (12.3), and (12.4). Bounds for these quantities were obtained earlier in (12.29), (12.31), and (12.38). 
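Schematically (a sketch of the common pattern, with \(\varphi\) standing for any one of the differentiated unknowns and \(\mathsf{F}\) for the associated forcing, remainder, or commutator function), each of these estimates proceeds via Cauchy--Schwarz in \(x\) followed by integration in time:
\[\Big{|}\int_{0}^{\mathsf{s}}\!\!\int\tfrac{\mathcal{J}^{\frac{3}{2}}}{\Sigma^{2\beta-2}}\,\varphi\,\mathsf{F}\Big{|}\leq\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\varphi(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathsf{F}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\,\mathrm{d}\mathsf{s}^{\prime}\,,\]
after which the bounds recorded in (12.29), (12.31), and (12.38) control the second factor.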
Using Cauchy--Schwarz, the bound \(\mathcal{J}\leq J_{g}\), and (5.37r), we deduce from the aforementioned bounds that
\[\big{|}I_{6}^{\hat{\mathbf{W}}_{n}}\big{|}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.84a}\]
Similarly, using (12.31) we also have that
\[\big{|}I_{10}^{\hat{\mathbf{Z}}_{n}}\big{|}\leq\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\Big{(}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\hat{\mathbf{Z}}}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}+\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}_{\hat{\mathbf{Z}}}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}+\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C}_{\hat{\mathbf{Z}}}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}\Big{)}\mathrm{d}\mathsf{s}^{\prime}\]
\[\qquad\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.84b}\]
In the same way, using (12.38) we obtain the bound
\[\big{|}I_{10}^{\hat{\mathbf{A}}_{n}}\big{|}\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.84c}\]
Summarizing the bounds (12.84a), (12.84b), and (12.84c), we thus deduce that
\[\big{|}I_{6}^{\hat{\mathbf{W}}_{n}}\big{|}+\big{|}I_{10}^{\hat{\mathbf{Z}}_{n}}\big{|}+\big{|}I_{10}^{\hat{\mathbf{A}}_{n}}\big{|}\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{12.85}\]
### Conclusion of the six derivative normal energy bounds
We return to the energy identity (12.5), with the decompositions (12.6), (12.7), and (12.8).
We collect the lower bounds (12.19), (12.73), the estimate \(\mathsf{G}_{\mathsf{good}}\geq-\tfrac{1+\alpha}{3\varepsilon}\mathcal{J}^{ \frac{3}{2}}\), the upper bounds (12.51), (12.59), (12.81), (12.82), (12.83), (12.85), and the initial data assumption (4.11), to obtain \[0\geq\big{(}\tfrac{1}{52}-\hat{C}\varepsilon\big{)}\big{\|}\tfrac{ \tau^{\frac{3}{4}}(J_{0}\mathsf{0})^{\frac{1}{2}}}{\Sigma^{\beta}} \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{N},J_{g}\hat{\mathbf{Z}}_{N},J_ {g}\hat{\mathbf{A}}_{N})(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}-\tfrac{9(1+ \alpha)}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{ data}}^{2}-\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\] \[\qquad\qquad+\Big{(}\tfrac{1+\alpha}{24}-\hat{C}\varepsilon\beta \Big{)}\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\tau^{\frac{3}{4}} \tfrac{1}{\tau^{\frac{3}{4}}}\tfrac{1}{\Sigma^{\beta}}}\widetilde{\mathsf{D}}^{6}(J_{g} \hat{\mathbf{W}}_{N},J_{g}\hat{\mathbf{Z}}_{N},J_{g}\hat{\mathbf{A}}_{N}) \[+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{ \mathsf{s}}\!\big{\|}\tfrac{\mathsf{Q}\,\mathcal{T}^{-\frac{1}{4}}}{\Sigma^{2 \beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{ x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\big{(}8-\mathring{C}\langle\beta \rangle\varepsilon\big{)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\! \!\int_{\overline{\Sigma}^{2\beta}}\mathcal{J}^{\frac{1}{2}}\big{|}\widetilde{ \mathsf{D}}^{6}J_{g}\big{|}^{2}\,, \tag{12.86}\] where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\) is independent of \(\beta\) (and \(\varepsilon\), as always). At this stage, we choose \(\beta=\beta(\alpha)\) to be sufficiently large to ensure that the damping for \(\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}},J _{g}\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}})\) is strong enough, i.e., so that \[\tfrac{\alpha(\beta-\frac{1}{2})}{8}+\tfrac{9(1+\alpha)}{20}-(16+25^{2})(1+ \alpha)\geq 0\,.\] More precisely, we choose \(\beta\) to ensure equality in the above inequality; namely, we have that \[\beta_{\alpha}:=\tfrac{1}{2}+\tfrac{8(1+\alpha)}{\alpha}\big{(}16+25^{2}- \tfrac{9}{20}\big{)}\,. \tag{12.87}\] With this choice of \(\beta=\beta_{\alpha}\), we return to (12.86), and choose \(\varepsilon\) to be sufficiently small in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\). After re-arranging we deduce that \[\tfrac{1}{53}\big{\|}\,\tfrac{\varphi^{\frac{7}{4}}(J_{g}\mathsf{Q })^{\frac{1}{2}}}{\Sigma^{2\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g} \hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{Z} }}_{\mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}})(\cdot, \mathsf{s})\big{\|}_{L_{x}^{2}}^{2}+\tfrac{1}{20(1+\alpha)}\tfrac{1}{ \varepsilon^{2}}\big{\|}\tfrac{\mathsf{Q}\,\mathcal{T}^{\frac{1}{4}}}{\Sigma^ {2\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_ {L_{x}^{2}}^{2}\] \[\qquad+\tfrac{1+\alpha}{48\varepsilon}\int_{0}^{\mathsf{s}}\! 
\big{\|}\tfrac{\varphi^{\frac{7}{4}}J_{g}\tfrac{1}{2}}{\Sigma^{2\beta_{\alpha }}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N }},J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}},J_{g}\hat{\boldsymbol{ \mathsf{A}}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3} }\int_{0}^{\mathsf{s}}\!\big{\|}\tfrac{\mathsf{Q}\,\mathcal{T}^{-\frac{1}{4} }}{\Sigma^{2\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^ {\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{9(1+\alpha)}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2 \beta_{\alpha}}\mathsf{C}_{\mathsf{data}}^{2}+\mathring{C}(\tfrac{4}{\kappa_{ 0}})^{2\beta_{\alpha}}\mathsf{K}^{2}(\mathsf{B}_{6})^{2}\] \[\qquad+\tfrac{C}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big{\|} \tfrac{\varphi^{\frac{7}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{2\beta_{ \alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{ \mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}},J_{g}\hat{ \boldsymbol{\mathsf{A}}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{ x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{C}{\varepsilon^{3}}\int_{0}^{ \mathsf{s}}\!\big{\|}\tfrac{\mathsf{Q}\,\mathcal{T}^{\frac{1}{4}}}{\Sigma^{2 \beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime}) \big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{12.88}\] where \(C\) is a universal constant (in particular, independent of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), and \(\mathring{C}\) is as usual. By inspecting the first line on the left side and the last line on the right side of (12.88), we observe that we may apply Gronwall's inequality for \(\mathsf{s}\in[0,\varepsilon]\). More precisely, there exists a constant \[\bar{\mathsf{c}}_{\alpha}>0 \tag{12.89}\] which only depends on \(\alpha\), and may be computed explicitly from (6.38g), (12.87), and (12.88), such that \[\sup_{\mathsf{s}\in[0,\varepsilon]}\!\big{\|}\tfrac{\mathcal{T}^{ \frac{3}{4}}J_{g}\tfrac{1}{2}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}( J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{ \mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}})(\cdot,\mathsf{s}) \big{\|}_{L_{x}^{2}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\!\big{\|} \tfrac{\varphi^{\frac{7}{4}}J_{g}\tfrac{1}{2}}{\Sigma^{\beta_{\alpha}}} \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}},J_{g} \hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}},J_{g}\hat{\boldsymbol{\mathsf{A}}}_{ \mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}\] \[\qquad+\tfrac{1}{\varepsilon^{2}}\sup_{\mathsf{s}\in[0, \varepsilon]}\big{\|}\tfrac{\mathcal{T}^{\frac{1}{4}}}{\Sigma^{\beta_{\alpha}}} \widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}+ \tfrac{1}{\varepsilon^{3}}\int_{0}^{\varepsilon}\!\big{\|}\tfrac{\varphi^{ \frac{7}{4}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot, \mathsf{s})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}\] \[\leq\bar{\mathsf{c}}_{\alpha}\tfrac{1}{\varepsilon}(\tfrac{4}{ \kappa_{0}})^{2\beta_{\alpha}}\Big{(}\mathsf{C}_{\mathsf{data}}^{2}+ \mathring{C}\varepsilon\mathsf{K}^{2}(\mathsf{B}_{6})^{2}\Big{)}\,. 
\tag{12.90}\] At last, we multiply the above estimate by \(\kappa_{0}^{2\beta_{\alpha}}\), appeal to (5.37p), drop the energy and damping terms for \(\widetilde{\mathsf{D}}^{6}J_{g}\) (since these were bounded already in Proposition 7.1), and recall the definitions of \(\widetilde{\mathcal{E}}_{\delta,\mathcal{N}}^{2}(\mathsf{s})\) and \(\widetilde{\mathcal{D}}_{\delta,\mathcal{N}}^{2}(\mathsf{s})\) to deduce that \[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E} }_{6,\mathcal{N}}^{2}(\mathsf{s})+\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\varepsilon) \leq\bar{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}}\Big{(}\mathsf{C}_{ \mathsf{data}}^{2}+\mathring{C}\varepsilon\mathsf{K}^{2}(\mathsf{B}_{6})^{2} \Big{)}\] \[\leq\mathsf{B}_{6}^{2}\bar{\mathsf{c}}_{\alpha}4^{2\beta_{\alpha}} \Big{(}\tfrac{\mathsf{C}_{\mathsf{data}}^{2}}{\mathsf{B}_{6}^{2}}+\mathring{C} \varepsilon\mathsf{K}^{2}\tfrac{(\mathsf{B}_{6})^{2}}{\mathsf{B}_{6}^{2}}\Big{)}\,. \tag{12.91}\] Since \(\mathsf{B}_{6}\geq 1\) (cf. (10.74) ## 13. Downstream maximal development In this section we give the proof of Theorem 4.7. We continue to use the notation in Section 5.2. Consider the level-set \(\{\overline{J}_{s,1}=0\}\). This is a hypersurface parameterized by the graph \((x_{2},t)\mapsto(x_{1}^{*}(x_{2},t),x_{2},t)\); see the definition of \(x_{1}^{*}(x_{2},t)\) given in (5.12). In this section we consider the Euler evolution in the spacetime geometry which is on the _downstream side_ of the hypersurface \(\{\overline{J}_{s,1}=0\}\) (namely, for \(x_{1}>x_{1}^{*}(x_{2},t)\)), and which is bounded from above by (a small modification of) the parabolic hypersurface \(\{\overline{J}_{s}=0\}\). On the _upstream side_ (namely, for \(x_{1}<x_{1}^{*}(x_{2},t)\)), the spacetime we consider is bounded from above by the cylindrical hypersurface \(\{\mathcal{J}=0\}=\{\overline{J}_{s}(x_{1}^{*}(x_{2},t),x_{2},t)=0\}\), which is the same as the top boundary considered in Sections 5-12 (see Figures 13 and 14 below). ### Flattening the top of the spacetime For \((x,t)\in\mathbb{T}^{2}\times(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{ fin}}]\), we define \[\hat{J}_{s}(x_{1},x_{2},t):=\left\{\begin{array}{ll}\overline{J}_{s}(x_{1},x _{2},t),&x_{1}^{*}(x_{2},t)\leq x_{1}\leq x_{1}^{\sharp}(x_{2})\,,\\ \overline{J}_{s}(x_{1}^{\sharp}(x_{2}),x_{2},t),&x_{1}\geq x_{1}^{\sharp}(x_{ 2})\,,\end{array}\right.\] where \(x_{1}^{\sharp}(x_{2})\) is defined in terms of the function \(w_{0}(x)\) as \[x_{1}^{\sharp}(x_{2})=\left\{x_{1}\colon x_{1}>x_{1}^{\vee}(x_{2}),\mathsf{D }_{1}w_{0}(x_{1},x_{2})=-\tfrac{17}{20}\right\}. \tag{13.1}\] We also define \(\hat{J}_{s}(x,\mathsf{t}_{\mathsf{in}}):=1\) for all \(x\in\mathbb{T}^{2}\). **Proposition 13.1** (\(x_{1}^{\sharp}\) **is well-defined and \(\hat{J}_{s}\leq\overline{J}_{s}\))**.: _The function \(\mathbb{T}\ni x_{2}\mapsto x_{1}^{\sharp}(x_{2})\) given by (13.1) is well-defined and differentiable, and satisfies_ \[x_{1}^{\sharp}(x_{2})-x_{1}^{*}(x_{2},t)\geq\tfrac{\varepsilon}{41}\,,\qquad \left|\partial_{2}x_{1}^{\sharp}(x_{2})\right|\leq 240\varepsilon\,, \tag{13.2}\] _pointwise in \((x_{2},t)\). Moreover we have that_ \[\hat{J}_{s}\leq\overline{J}_{s}\,. \tag{13.3}\] Proof of Proposition 13.1.: In order to check that \(\hat{J}_{s}\) is well-defined, we need to verify \(x_{1}^{\sharp}(x_{2})>x_{1}^{*}(x_{2},t)\), pointwise in \((x_{2},t)\). We recall the assumptions on \(w_{0}(x)\) given in Section 4.2. 
To show that \(x_{1}^{\sharp}(x_{2})-x_{1}^{*}(x_{2},t)\geq\tfrac{\varepsilon}{41}\), uniformly in \(t\), we first recall from (6.53) that \(|x_{1}^{*}(x_{2},t)-x_{1}^{\vee}(x_{2})|\leq\mathring{C}\mathsf{K}(\mathsf{ B}_{6})\varepsilon^{3}\), so that it is sufficient to show \(x_{1}^{\sharp}>x_{1}^{\vee}+\tfrac{\varepsilon}{40}\). Using (4.10), ((vi)), and (13.1), we have that \(x_{1}^{\vee}(x_{2})|\|\partial_{1}\mathsf{D}_{1}w_{0}\|_{L^{\infty}_{x}}\leq\frac {2}{\varepsilon}|x_{1}^{\sharp}(x_{2})-x_{1}^{\vee}(x_{2})|\). Since by assumption \(x_{1}^{\sharp}(x_{2})>x_{1}^{\vee}(x_{2})\) (see (13.1)), we deduce \(x_{1}^{\sharp}>x_{1}^{\vee}+\frac{\varepsilon}{40}\). Next, we show that \(x_{1}^{\sharp}(x_{2})\) is well-defined. Note that from ((i)), ((vi)), the continuity of \(\mathsf{D}_{1}w_{0}\), and the intermediate value theorem, for every \(x_{2}\in\mathbb{T}\) there exists at least one \(x_{1}^{\vee}(x_{2})<x_{1}^{\sharp}(x_{2})<13\pi\varepsilon\) satisfying \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})=-\frac{17}{20}\). To establish uniqueness, we first define \(x_{1}^{\sharp}(x_{2})\) as the smallest value \(x_{1}>x_{1}^{\vee}(x_{2})\) such that \(\mathsf{D}_{1}w_{0}(x)=-\frac{17}{40}\), and then show that for \(x_{1}\in(x_{1}^{\sharp}(x_{2}),13\pi\varepsilon]\), we must have \(\mathsf{D}_{1}w_{0}(x)>-\frac{17}{40}\). Since \(x_{1}^{\sharp}(x_{2})-x_{1}^{\vee}(x_{2})\geq\frac{\varepsilon}{40}\geq \varepsilon^{\frac{7}{4}}\), from assumption ((viii)) we know that for all \(x_{1}\in(x_{1}^{\sharp}(x_{2}),13\pi\varepsilon]\) with \(\mathsf{D}_{1}w_{0}(x)<-\frac{1}{3}\), we must have \(\mathsf{D}_{1}^{2}w_{0}(x)\geq\varepsilon^{\frac{7}{8}}\), showing that as \(x_{1}\) increases, \(\mathsf{D}_{1}w_{0}(x)\) strictly increases from the value \(-\frac{17}{20}\) when \(x_{1}=x_{1}^{\sharp}(x_{2})\), until it reaches the value \(-\frac{1}{3}\) at some point \(x_{1}=x_{1}^{\top}(x_{2})\). Additionally, ((viii)) implies that for \(x_{1}>x_{1}^{\top}(x_{2})\) we have that \(\mathsf{D}_{1}w_{0}(x)\geq-\frac{1}{3}\): this is because if \(\mathsf{D}_{1}w_{0}(x)\) wanted to dip below the value \(-\frac{1}{3}\), then it would need to decrease as a function of \(x_{1}\), but ((viii)) implies that \(\mathsf{D}_{1}w_{0}(x)\) can only decrease in \(x_{1}\) if \(\mathsf{D}_{1}w_{0}(x)\geq-\frac{1}{3}\). Therefore, \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})>-\frac{17}{20}\) for all \(x_{1}>x_{1}^{\sharp}(x_{2})\), giving uniqueness. Next, we establish the second bound in (13.2). By implicitly differentiating the relation \(\mathsf{D}_{1}w_{0}(x_{1}^{\sharp}(x_{2}),x_{2})=-\frac{17}{20}\), we obtain that \(\partial_{2}x_{1}^{\sharp}(x_{2})=-\varepsilon\frac{\mathsf{D}_{1}\mathsf{D} _{2}w_{0}}{\mathsf{D}_{1}\mathsf{D}_{2}w_{0}}(x_{1}^{\sharp}(x_{2}),x_{2})\). The numerator of the above fraction is bounded from above by \(2\), in absolute value (due to (4.10)). The denominator is estimated by noting that via the mean value theorem, \(\mathsf{D}_{1}^{2}w_{0}(x_{1}^{\sharp}(x_{2}),x_{2})=\mathsf{D}_{1}^{2}w_{0}( x_{1}^{\sharp}(x_{2}),x_{2})+\frac{x_{1}^{\sharp}(x_{2})-x_{1}^{\vee}(x_{2})}{ \varepsilon}\mathsf{D}_{1}^{3}w_{0}(x_{1}^{\prime},x_{2})=\frac{x_{1}^{\sharp }(x_{2})-x_{1}^{\vee}(x_{2})}{\varepsilon}\mathsf{D}_{1}^{3}w_{0}(x_{1}^{\prime },x_{2})\geq\frac{1}{40}\cdot\frac{1}{3}\), for some \(x_{1}^{\prime}\in(x_{1}^{\vee}(x_{2}),x_{1}^{\sharp}(x_{2}))\). 
Here we have used the previously established estimate \(x_{1}^{\sharp}(x_{2})-x_{1}^{\vee}\geq\frac{\varepsilon}{40}\), the fact that by definition \(\mathsf{D}_{1}^{2}(x_{1}^{\vee}(x_{2}),x_{2})=0\), and the fact that at all points \(x_{1}^{\prime}\) in between \(x_{1}^{\vee}\) and \(x_{1}^{\sharp}\) we have that \(\mathsf{D}_{1}w_{0}(x_{1}^{\prime},x_{2})\leq-\frac{1}{3}\) (in fact \(\leq-\frac{17}{20}\)), and thus by assumption ((viii)) we know that \(\mathsf{D}_{1}^{3}w_{0}(x_{1}^{\prime},x_{2})\geq\frac{1}{3}\). This shows that \(\mathsf{D}_{1}^{2}w_{0}(x_{1}^{\sharp}(x_{2}),x_{2})\geq\frac{1}{120}\), and therefore \(|\partial_{2}x_{1}^{\sharp}(x_{2})|\leq 240\varepsilon\), as claimed. Lastly, we establish (13.3). That is, we need to show that \(0\leq\overline{J}_{g}(x_{1},x_{2},t)-\overline{J}_{g}(x_{1}^{\sharp}(x_{2}),x_ {2},t)=J_{g}(x_{1},x_{2},t)-J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)\) whenever \(x_{1}\geq x_{1}^{\sharp}(x_{2})\); here, in the second equality we have used (5.7). In turn, this fact follows from assumption ((viii)) and Corollary 6.2, as follows. From the mean value theorem in \(x_{1}\), the bound (6.24b) with \(i=1\), and the considerations in the second paragraph of this proof, we have that \(J_{g}(x_{1},x_{2},t)-J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)\) cannot vanish for \(x_{1}\in(x_{1}^{\sharp}(x_{2}),x_{1}^{\top}(x_{2})]\). But if \(x_{1}>x_{1}^{\top}(x_{2})\) then necessarily \((w_{0})_{1}\,(x_{1},x_{2})\geq-\frac{1}{3\varepsilon}\) and since \((w_{0})_{,1}\,(x_{1}^{\sharp}(x_{2}),x_{2})=-\frac{17}{20\varepsilon}\), we have that \(J_{g}(x_{1},x_{2},t)-J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)\) cannot vanish due to (6.24a). The only issue is that while the map \(x_{1}\mapsto\hat{J}_{g}(x_{1},x_{2},t)\) is Lipschitz continuous, it is not \(C^{1}\) smooth. As such, we need to consider a variant of \(\hat{J}_{g}\), denoted by \(\vec{J}_{g}\), which is \(H^{6}\) smooth in space and time in the set \(\{(x,t)\colon t\in(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}),x_{2}\in \mathbb{T},x_{1}>x_{1}^{\star}(x_{2},t)\}\). It is convenient to introduce \[x_{1,+}^{\sharp}(x_{2})=x_{1}^{\sharp}(x_{2})+\tfrac{\varepsilon}{1000}\,, \qquad\text{and}\qquad x_{1,-}^{\sharp}(x_{2})=x_{1}^{\sharp}(x_{2})-\tfrac{ \varepsilon}{1000}\,, \tag{13.4}\] where \(x_{1}^{\sharp}(x_{2})\) is as in (13.1). Then, we let \(\vec{J}_{g}(x,\mathsf{t}_{\mathsf{in}})=1\) and for \((x,t)\in\mathbb{T}^{2}\times(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\) with \(x_{1}\geq x_{1}^{\star}(x_{2},t)\) we define \[\vec{J}_{g}(x_{1},x_{2},t):=\begin{cases}\dot{\hat{J}}_{g}(x_{1},x_{2},t)= \overline{J}_{g}(x_{1},x_{2},t)&x_{1}^{\sharp}(x_{2},t)\leq x_{1}\leq x_{1,-}^{ \sharp}(x_{2})\,,\\ \dot{\hat{J}}_{g}(x_{1},x_{2},t)=\overline{J}_{g}(x_{1}^{\sharp}(x_{2}),x_ {2},t),&x_{1,+}^{\sharp}(x_{2})\leq x_{1}\,,\end{cases}\] (13.5a) where the middle branch in the definition of \[\vec{J} \(\frac{\varepsilon}{1000}\cdot\frac{2}{\varepsilon^{2}}=\frac{1}{500\varepsilon}\), and hence \((w_{0})_{,1}\,(x_{1},x_{2})\leq-\frac{17}{20\varepsilon}+\frac{1}{500\varepsilon} \leq-\frac{33}{40\varepsilon}<-\frac{1}{3\varepsilon}\). Hence, assumption ((viii)) implies that \((w_{0})_{,11}\,(x_{1},x_{2})>\varepsilon^{-\frac{8}{8}}\). 
Combining this information with (6.24f), we deduce \(\partial_{1}\partial_{t}\vec{J}_{g}(x_{1,-}^{\sharp}(x_{2})^{-},x_{2},t)\geq \frac{1+\alpha}{2}(w_{0})_{,11}\,(x_{1,-}^{\sharp}(x_{2}),x_{2})-\hat{C} \varepsilon^{-1}\geq\frac{1+\alpha}{2}\varepsilon^{-\frac{8}{8}}-\hat{C} \varepsilon^{-1}\geq\frac{1+\alpha}{4}\varepsilon^{-\frac{8}{8}}\). Third, we verify that \(\partial_{t}\vec{J}_{g}(x_{1,-}^{\sharp}(x_{2}),x_{2},t)<\partial_{t}\vec{J}_ {g}(x_{1,+}^{\sharp}(x_{2}),x_{2},t)\). In light of (5.7) and of (6.24d), this inequality would follow from \((w_{0})_{,1}\,(x_{1,-}^{\sharp}(x_{2}),x_{2})<(w_{0})_{,1}\,(x_{1}^{\sharp}(x_ {2}),x_{2})-2\mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\). But by the mean value theorem and the previously obtained \((w_{0})_{,11}\,(x_{1},x_{2})>\varepsilon^{-\frac{8}{8}}\) we have \((w_{0})_{,1}\,(x_{1}^{\sharp}(x_{2}),x_{2})-(w_{0})_{,1}\,(x_{1,-}^{\sharp}(x _{2}),x_{2})>\frac{\varepsilon}{1000}\cdot\varepsilon^{-\frac{8}{8}}=\frac{1} {1000}\varepsilon^{-\frac{8}{8}}>2\mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\), assuming \(\varepsilon\) is sufficiently small. These three points show that the first condition in (13.5b) (the map \(x_{1}\mapsto\partial_{t}\vec{J}_{g}(x_{1},x_{2},t)\) is monotone increasing) is permissible. Next, we discuss the second condition in (13.5b), namely \(\partial_{1}\partial_{t}\vec{J}_{g}(x_{1},x_{2},t)\leq 2(1+\alpha) \varepsilon^{-2}\). This condition is easy to ensure because \(\partial_{1}\partial_{t}\vec{J}_{g}(x_{1,+}^{\sharp}(x_{2})^{+},x_{2},t)=0\) and by (6.24f) and (4.10) we have \(|\partial_{1}\partial_{t}\vec{J}_{g}(x_{1,-}^{\sharp}(x_{2})^{-},x_{2},t)| \leq\frac{1+\alpha}{2\varepsilon^{2}}\|\mathsf{D}_{1}^{2}w_{0}\|_{L^{\infty} }+\hat{C}\varepsilon^{-1}\leq(1+\alpha)\varepsilon^{-2}+\hat{C}\varepsilon^{- 1}\leq(1+\alpha)\varepsilon^{-2}\). Next, we discuss the third condition in (13.5b), namely \(|\partial_{2}\partial_{t}\vec{J}_{g}(x_{1},x_{2},t)|\leq 2(1+\alpha) \varepsilon^{-2}\). Again, using (6.24f) and (4.10) we have \(|\partial_{2}\partial_{t}\vec{J}_{g}(x_{1,-}^{\sharp}(x_{2})^{-},x_{2},t)| \leq\frac{1+\alpha}{2\varepsilon}\|\mathsf{D}_{1}\mathsf{D}_{2}w_{0}\|_{L^{ \infty}}+\hat{C}\leq(1+\alpha)\varepsilon^{-1}+\hat{C}<2(1+\alpha) \varepsilon^{-1}\). On the other hand, by the chain rule and (13.2) we have that \(|\partial_{2}\partial_{t}\vec{J}_{g}(x_{1,+}^{\sharp}(x_{2})^{+},x_{2},t)| \leq|\partial_{2}\partial_{t}J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)|+|\partial _{1}\partial_{t}J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)|\cdot|\partial_{2}x_{1}^{ \sharp}(x_{2})|\leq((1+\alpha)\varepsilon^{-1}+\hat{C})(1+\varepsilon^{-1}| \partial_{2}x_{1}^{\sharp}(x_{2})|)\leq 241((1+\alpha)\varepsilon^{-1}+\hat{C})\leq 242(1+ \alpha)\varepsilon^{-1}\). Lastly, we discuss the fourth requirement in (13.5b), namely the fact that \(\vec{J}_{g}\leq\hat{J}_{g}\). This requirement is already satisfied for \(x_{1}^{\sharp}(x_{2})\leq x_{1}\leq x_{1,+}^{\sharp}(x_{2})\) since \(\vec{J}_{g}(x_{1,+}^{\sharp}(x_{2}),x_{2})=\hat{J}_{g}(x_{1,+}^{\sharp}(x_{2} ),x_{2})\) and \(\partial_{t}\partial_{1}(\vec{J}_{g}-\hat{J}_{g})\geq 0\). Using this fact, and that by construction we have \(\vec{J}_{g}(x_{1,-}^{\sharp}(x_{2}),x_{2})=\hat{J}_{g}(x_{1,-}^{\sharp}(x_{2} ),x_{2})\) it is straightforward to verify that the bound \(\vec{J}_{g}\leq\hat{J}_{g}\) may also be ensured to hold for \(x_{1,-}^{\sharp}(x_{2})\leq x_{1}\leq x_{1}^{\sharp}(x_{2})\). 
Then, with \(\vec{J}_{g}\) defined as in (13.5a), for \((x,t)\in\mathbb{T}^{2}\times(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{ fin}}]\), we define \[\mathcal{J}(x_{1},x_{2},t):=\left\{\begin{array}{ll}\overline{J}_{g}(x_{1}^{*} (x_{2},t),x_{2},t),&x_{1}\leq x_{1}^{*}(x_{2},t)\,,\\ \vec{J}_{g}(x_{1},x_{2},t),&x_{1}>x_{1}^{*}(x_{2},t)\,,\end{array}\right. \tag{13.6}\] and we let \(\mathcal{J}(x,\mathsf{t}_{\mathsf{in}})=1\) for all \(x\in\mathbb{T}^{2}\). It is important to observe that \(\mathcal{J}\) has limited Lipschitz regularity across the hypersurface \(\{x_{1}=x_{1}^{*}(x_{2},t)\}\), but that it is \(H^{6}\) smooth on either side of this hypersurface. We also note that in the upstream region \(x_{1}\leq x_{1}^{*}(x_{2},t)\), we have that \(\mathcal{J}(x,t)=\mathcal{J}(x_{2},t)\), as defined in (5.14). **Remark 13.3** (**Properties of \(\mathcal{J}\) on the upstream side are the same as those of \(\mathcal{J}\))**.: _We note that (5.14) implies the equality \(\mathcal{J}(x,t)=\overline{J}_{g}(x_{1}^{*}(x_{2},t),x_{2},t)=\mathcal{J}(x_{2},t)\), for all \(x_{1}<x_{1}^{*}(x_{2},t)\). As such, all the properties of \(\mathcal{J}\) which were established in Sections 5 and 6 directly carry over to properties of \(\mathcal{J}\) in the spacetime \(\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{ fin}}]\colon\,\mathcal{J}(x,t)>0,x_{1}<x_{1}^{*}(x_{2},t)\}\). To avoid redundancy, these estimates will not be re-proven in this section._ **Remark 13.4** (**The spacetime \(\{\mathcal{J}>0\}\) terminates before \(\mathsf{t}_{\mathsf{fin}}\))**.: _We note that for consistency with the rest of the paper, the spacetime \(\{\mathcal{J}>0\}\) is designed to terminate before \(\mathsf{t}_{\mathsf{fin}}\). To see this, note that in the upstream region this fact was already established (see Remark 13.3), while in the downstream region we have \(\mathcal{J}\leq\overline{J}_{g}\), since \(\vec{J}_{g}\leq\overline{J}_{g}\). By the proof of Lemma 6.5, for all \(x\in\mathbb{T}^{2}\) there exists \(t_{*}(x)<\mathsf{t}_{\mathsf{fin}}\) with \(\overline{J}_{g}(x,t_{*}(x))=0\). Hence, for all \(x\in\mathbb{T}^{2}\) there exists a time \(t^{\sharp}(x)\leq t_{*}(x)<\mathsf{t}_{\mathsf{fin}}\) such that \(\mathcal{J}(x,t^{\sharp}(x))=0\)._ **Remark 13.5** (**The spacetime \(\{\mathcal{J}>0\}\) captures the downstream maximal development prior to \(\mathsf{t}_{\mathsf{med}}\))**.: _The spacetime \(\{\mathcal{J}>0,x_{1}>x_{1}^{*}(x_{2},t),t\leq\mathsf{t}_{\mathsf{med}}\}\) coincides with the For downstream maximal development, we use the spacetime set given by \[\mathcal{P}^{\sharp}:=\left\{(x,t)\in\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\colon\mathcal{J}(x_{1},x_{2},t)>0\right\}, \tag{13.7}\] We define our spacetime "flattening" transformation by \[\mathsf{q}\colon\mathcal{P}^{\sharp}\mapsto[0,\varepsilon) \tag{13.8a}\] \[\mathsf{s}=\mathsf{q}(x_{1},x_{2},t):=\varepsilon\big{(}1- \mathcal{J}(x_{1},x_{2},t)\big{)}\,. \tag{13.8b}\] We have that \(\mathsf{q}(x_{1},x_{2},\mathsf{t}_{\mathsf{in}})=0\), so that the set \(\{\mathsf{s}=0\}\) corresponds to the initial time slice \(\{t=\mathsf{t}_{\mathsf{in}}\}\), or, the _past_ temporal boundary of \(\mathcal{P}^{\sharp}\); meanwhile, the _future_ temporal boundary of \(\mathcal{P}^{\sharp}\) is flattened to the set \(\{\mathsf{s}=\varepsilon\}\). 
We note that in the upstream region \(x_{1}\leq x_{1}^{*}(x_{2},t)\), the map \(\mathsf{q}=\mathsf{q}(x,t)\) defined in (13.8b) is precisely equal to the map \(\mathsf{q}=\mathsf{q}(x_{2},t)\) defined in (5.18b). Only in the downstream region \(x_{1}>x_{1}^{*}(x_{2},t)\) do \(\mathsf{q}(x,t)\) and \(\mathsf{q}(x_{2},t)\) differ. The inverse of \(\mathsf{q}\) is defined by \[\mathsf{q}^{-1} :\mathbb{T}^{2}\times[0,\varepsilon)\to[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\,, \tag{13.9a}\] \[t =\mathsf{q}^{-1}(x_{1},x_{2},\mathsf{s})\,, \tag{13.9b}\] such that \(t=\mathsf{q}^{-1}(x_{1},x_{2},\mathsf{q}(x_{1},x_{2},t))\) for all \((x_{1},x_{2},t)\in\mathcal{P}^{\sharp}\), or equivalently, that \(\mathsf{s}=\mathsf{q}(x_{1},x_{2},\mathsf{q}^{-1}(x_{1},x_{2},\mathsf{s}))\) for all \((x_{1},x_{2},\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\). In (13.9), we are once again abusing convention: it is the map \((x_{1},x_{2},\mathsf{s})\mapsto(x_{1},x_{2},t)\) defined from \(\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{T}^{2}\times[\mathsf{t}_{\mathsf{ in}},\mathsf{t}_{\mathsf{fin}})\) which is the inverse of the map \((x_{1},x_{2},t)\mapsto(x_{1},x_{2},\mathsf{s})=(x_{1},x_{2},\mathsf{q}(x_{1},x _{2},t))\). The fact that such a map is well-defined is established in Lemma 13.8 below. ### Change of coordinates for the remapped spacetime Given any function \(f\colon\mathcal{P}^{\sharp}\to\mathbb{R}\), we define the function \(\widetilde{f}\colon\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{R}\) by \[\widetilde{f}(x,\mathsf{s}):=f(x,t),\qquad\text{where}\qquad\mathsf{s}= \mathsf{q}(x,t)\,. \tag{13.10}\] Then, by the chain-rule and (13.8b), we obtain that \[\partial_{t}f(x,t)=\widehat{\mathsf{Q}}(x,\mathsf{s})\partial_{\mathsf{s}} \widetilde{f}(x,\mathsf{s})\,, \tag{13.11a}\] \[\partial_{i}f(x,t)=\big{(}\partial_{i}-\overline{\mathbb{Q}}_{i}(x,\mathsf{s}) \partial_{\mathsf{s}}\big{)}\widetilde{f}(x,\mathsf{s})\,,\qquad i=1,2\,, \tag{13.11b}\] where for compactness of notation we have introduced the functions \[\widehat{\mathsf{Q}}(x,\mathsf{s})=-\varepsilon(\partial_{t}\mathcal{J})(x,t) \Big{|}_{t=\mathsf{q}^{-1}(x,\mathsf{s})} \tag{13.12a}\] \[\overline{\mathsf{Q}}_{i}(x,\mathsf{s})=\varepsilon(\partial_{i} \mathcal{J})(x,t)\Big{|}_{t=\mathsf{q}^{-1}(x,\mathsf{s})}\,\,\,\text{for}\,\, \,i=1,2\,. \tag{13.12b}\] Note that \(\overline{\mathsf{Q}}_{1}=0\) for \(x_{1}\leq x_{1}^{*}(x_{2},t)|_{t=\mathsf{q}^{-1}(x,\mathsf{s})}\). We also define \[\mathsf{Q}(x,\mathsf{s}):=\widehat{\mathsf{Q}}(x,\mathsf{s})-\widetilde{V}(x,\mathsf{s})\overline{\mathsf{Q}}_{2}(x,\mathsf{s})=-\varepsilon(\partial_{t }\mathcal{J}+V\partial_{2}\mathcal{J})(x,t)\Big{|}_{t=\mathsf{q}^{-1}(x, \mathsf{s})}\,, \tag{13.12c}\] and \[\hat{\mathsf{Q}}=\partial_{\mathsf{s}}\mathsf{Q}\,,\qquad\hat{\mathsf{Q}}_{ \mathsf{s}}=\partial_{\mathsf{s}}\widehat{\mathsf{Q}}\,,\qquad\hat{\mathsf{Q }}_{1}=\partial_{\mathsf{s}}\overline{\mathsf{Q}}_{1}\,,\qquad\hat{\mathsf{Q }}_{2}=\partial_{\mathsf{s}}\overline{\mathsf{Q}}_{2}\,. 
\tag{13.12d}\] With the above notation, it follows from (5.21) that the spacetime gradient operator in \((x,t)\) variables, namely \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\), becomes the gradient operator \(\widehat{\mathsf{D}}\) associated with the \((x,\mathsf{s})\) coordinates, which is defined by \[\widetilde{\mathsf{D}}=(\widetilde{\mathsf{D}}_{\mathsf{s}},\widetilde{ \mathsf{D}}_{1},\widetilde{\mathsf{D}}_{2}):=\big{(}\varepsilon\widehat{ \mathsf{Q}}\partial_{\mathsf{s}},\varepsilon(\partial_{1}-\overline{\mathsf{ Q}}_{1}\partial_{\mathsf{s}}),\partial_{2}-\overline{\mathsf{Q}}_{2}\partial_{ \mathsf{s}}\big{)}\,. \tag{13.13}\] Therefore, \(\mathsf{D}f(x,t)=\widetilde{\mathsf{D}}\widetilde{f}(x,\mathsf{s})\), and the components of \(\widetilde{\mathsf{D}}\) commute: \[[\widetilde{\mathsf{D}}_{\mathsf{s}},\widetilde{\mathsf{D}}_{2}]=[\widetilde {\mathsf{D}}_{\mathsf{s}},\widetilde{\mathsf{D}}_{1}]=[\widetilde{\mathsf{D} }_{2},\widetilde{\mathsf{D}}_{1}]=0\,. \tag{13.14}\] For any \(\gamma\in\mathbb{N}_{0}^{3}\), \(\widetilde{\mathsf{D}}^{\gamma}=\widetilde{\mathsf{D}}_{\mathsf{s}}^{\gamma_{ 0}}\widetilde{\mathsf{D}}_{1}^{\gamma_{1}}\widetilde{\mathsf{D}}_{2}^{\gamma_ {2}}\), and \[(\mathsf{D}^{\gamma}f)(x,t)=(\widetilde{\mathsf{D}}^{\gamma}\widetilde{f})(x, s)\,. \tag{13.15}\] From the identity \(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}=\frac{1}{\varepsilon}\widetilde{ \mathsf{D}}_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2}\), we note that material derivatives are mapped into \((x,\mathsf{s})\) coordinates as \[(\partial_{t}+V\partial_{2})f(x,t)=(\mathsf{Q}\partial_{\mathsf{s}}+ \widetilde{V}\partial_{2})\widetilde{f}(x,\mathsf{s})=(\tfrac{1}{\varepsilon} \widetilde{\mathsf{D}}_{\mathsf{s}}+\widetilde{V}\widetilde{\mathsf{D}}_{2}) \widetilde{f}(x,\mathsf{s})\,. \tag{13.16}\] It also follows from (13.14) and the second equality in (13.16) that \[[(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2}),\widetilde{ \mathsf{D}}^{k}]\widetilde{f}=[\widetilde{V},\widetilde{\mathsf{D}}^{k}] \widetilde{\mathsf{D}}_{2}\widetilde{f}=-\widetilde{\mathsf{D}}^{k}\widetilde{V }\,\widetilde{\mathsf{D}}_{2}\widetilde{f}-(\widetilde{\mathsf{D}}^{k}, \widetilde{V},\widetilde{\mathsf{D}}_{2}\widetilde{f})\,. \tag{13.17}\] With the notation in (13.10) and (13.13), the definition (13.8b) implies the following identities for \(\widetilde{\mathcal{J}}\) and \(\widetilde{\mathsf{D}}\widetilde{\mathcal{J}}\): \[\widetilde{\mathcal{J}}(x,\mathsf{s})=1-\tfrac{\mathsf{s}}{\varepsilon}\,, \qquad\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathcal{J}}=-\widehat{ \mathsf{Q}}\,,\qquad\widetilde{\mathsf{D}}_{1}\widetilde{\mathcal{J}}= \overline{\mathsf{Q}}_{1}\,,\qquad\widetilde{\mathsf{D}}_{2}\widetilde{\mathcal{J }}=\overline{\mathsf{Q}}_{2}\,. \tag{13.18}\] The identities in (13.18) will be used implicitly throughout the section. ### Adjoint formulas We define \[\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})=x_{1}^{*}(x_{2},t)|_{t=\mathsf{q}^{-1}(x,\mathsf{s})}\,. \tag{13.19}\] Note that by construction we have \(\overline{\mathsf{Q}}_{1}(x,\mathsf{s})=0\) for \(x_{1}\leq x_{1}^{*}(x_{2},t)|_{t=\mathsf{q}^{-1}(x,\mathsf{s})}=\widetilde{x}_ {1}^{*}(x_{2},\mathsf{s})\), and so it follows via (13.18) that \[\widetilde{\mathsf{D}}_{1}\widetilde{\mathcal{J}}(\widetilde{x}_{1}^{*}(x_{2}, \mathsf{s}),x_{2},\mathsf{s})=\overline{\mathsf{Q}}_{1}(\widetilde{x}_{1}^{*}(x _{2},\mathsf{s}),x_{2},\mathsf{s})=0\,. 
\tag{13.20}\] Using the notation in (13.19), we define \[\Gamma(\mathsf{s})=\bigcup_{s^{\prime}\in[0,\mathsf{s}]}\big{(}\widetilde{x}_ {1}^{*}(x_{2},\mathsf{s}^{\prime}),x_{2},\mathsf{s}^{\prime}\big{)} \tag{13.21}\] and hence (13.20) implies \(\widetilde{\mathsf{D}}_{1}\widetilde{\mathcal{J}}=0\) on \(\Gamma(\varepsilon)\). For energy estimates we shall work on the union of the following two sets: \[\mathcal{P}_{+}^{\sharp}(\mathsf{s}):=\{(x,\mathsf{s}^{\prime})\in \mathbb{T}^{2}\times[0,\mathsf{s}]\colon x_{1}>\widetilde{x}_{1}^{*}(x_{2}, \mathsf{s}^{\prime})\} \tag{13.22a}\] \[\mathcal{P}_{\pm}^{\sharp}(\mathsf{s})=\{(x,\mathsf{s}^{\prime})\in \mathbb{T}^{2}\times[0,\mathsf{s}]\colon x_{1}<\widetilde{x}_{1}^{*}(x_{2}, \mathsf{s}^{\prime})\}\,,\] (13.22b) \[\mathcal{P}_{\pm}^{\sharp}(\mathsf{s})=\mathcal{P}_{+}^{\sharp}( \mathsf{s})\cup\mathcal{P}_{-}^{\sharp}(\mathsf{s})\,. \tag{13.22c}\] With (13.21) and (13.22), we see that \(\mathcal{P}_{\pm}^{\sharp}(\varepsilon)\cup\Gamma(\varepsilon)=\mathbb{T}^{2} \times[0,\varepsilon]\) is the entire spacetime set, but energy estimates will be set on \(\mathcal{P}_{-}^{\sharp}(\mathsf{s})\cup\mathcal{P}_{+}^{\sharp}(\mathsf{s})\), thus avoiding differentiation across \(\Gamma(\mathsf{s})\). We similarly consider the associated time-slices \[\mathcal{X}_{+}^{\sharp}(\mathsf{s}):=\{x\in\mathbb{T}^{2}\colon x_{1}> \widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\}\,, \tag{13.23a}\] \[\mathcal{X}_{-}^{\sharp}(\mathsf{s}):=\{x\in\mathbb{T}^{2}\colon x_{1}< \widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\}\,, \tag{13.23b}\] \[\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})=\mathcal{X}_{+}^{\sharp}( \mathsf{s})\cup\mathcal{X}_{-}^{\sharp}(\mathsf{s})\,. \tag{13.23c}\] Using the notation in (13.22) and (13.23), throughout this section we shall use the following notation for norms: for \(0\leq\mathsf{s}\leq\varepsilon\), we denote \[\|F(\cdot,\mathsf{s})\|_{L^{2}} =\|F(\cdot,\mathsf{s})\|_{L^{2}} \tag{13.24a}\] \[\|F\|_{L^{\infty}_{x}L^{2}} :=\sup_{\mathsf{s}\in[0,\varepsilon]}\|F(\cdot,\mathsf{s})\|_{L^ {2}(\mathcal{X}_{\pm}^{\sharp}(\mathsf{s}))}\,,\] (13.24b) \[\|F\|_{L^{2}_{x,\mathsf{s}}} :=\|F\|_{L^{2}(\mathcal{P}^{\sharp}_{\pm}(\mathsf{s}))}\,,\] (13.24c) \[\|F\|_{L^{\infty}_{x,\mathsf{s}}} :=\|F\|_{L^{\infty}(\mathcal{P}^{\sharp}_{\pm}(\mathsf{s}))}\,. \tag{13.24d}\] In view of (13.38a), the same comments as in Remark 5.3 (regarding equivalence of \(L^{2}_{x,\mathsf{s}}\) and \(L^{2}_{x,t}\), respectively \(L^{\infty}_{x,\mathsf{s}}\) and \(L^{\infty}_{x,t}\)) also apply to the norms in (13.24). Next, we compute adjoints of the differential operator \(\widetilde{\mathsf{D}}\) (as defined by (13.13)), with respect to the \(L^{2}\) inner product on \(\mathcal{P}^{\sharp}_{\pm}(\mathsf{s})\). 
We claim that \[\widetilde{\mathsf{D}}^{*}_{\mathsf{s}} =-\widetilde{\mathsf{D}}_{\mathsf{s}}-\varepsilon\hat{\mathsf{Q} }_{\mathsf{s}}+\varepsilon\widetilde{\mathsf{Q}}(\delta_{\mathsf{s}}-\delta_ {0})\,, \tag{13.25a}\] \[\widetilde{\mathsf{D}}^{*}_{1} =-\widetilde{\mathsf{D}}_{1}+\varepsilon\hat{\mathsf{Q}}_{1}- \varepsilon\overline{\mathsf{Q}}_{1}\delta_{\mathsf{s}}\,,\] (13.25b) \[\widetilde{\mathsf{D}}^{*}_{2} =-\widetilde{\mathsf{D}}_{2}+\hat{\mathsf{Q}}_{2}-\overline{ \mathsf{Q}}_{2}\delta_{\mathsf{s}}\,,\] (13.25c) \[(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2})^{*} =-(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2})- \hat{\mathsf{Q}}_{\mathsf{s}}+\mathsf{Q}(\delta_{\mathsf{s}}-\delta_{0})+ \widetilde{V}\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2}\widetilde{V}\,. \tag{13.25d}\] We note that from the definition (13.12), we have \(\overline{\mathsf{Q}}_{1}(x,\mathsf{s})=\hat{\mathsf{Q}}_{1}(x,\mathsf{s})=0\) for \(x_{1}<\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\) and so the identities (13.25) precisely match the previously established formulas (5.28) in the upstream region. The identities in (13.25) follow from (13.13), and from (13.27), (13.31), and (13.33), which we establish below. We first consider \(\widetilde{\mathsf{D}}^{*}_{1}\). From (13.13), we have that \[\widetilde{\mathsf{D}}_{1}:=\left\{\begin{aligned} \varepsilon\partial_{1}\,,& x_{1}< \widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\,,\\ \varepsilon(\partial_{1}-\overline{\mathsf{Q}}_{1}\partial_{ \mathsf{s}})& x_{1}>\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\,. \end{aligned}\right. \tag{13.26}\] By also appealing to (13.20), it follows that18 Footnote 18: Our computation in (13.27) makes use of the equality \((FG)(\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}^{\prime})^{+},x_{2},\mathsf{s}^{ \prime})=(FG)(\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}^{\prime})^{-},x_{2}, \mathsf{s}^{\prime})\), which holds whenever the trace of \(FG\) on \(\Gamma(\mathsf{s})\) from the domains \(\mathcal{P}^{\sharp}_{-}(\mathsf{s})\) and \(\mathcal{P}^{\sharp}_{+}(\mathsf{s})\) exist and agree. Note that \(\Gamma(s)\) is contained in the spacetime \(\widehat{\mathcal{P}}\) defined in (5.17), and that we have established the existence of a unique \(H^{7}\)-class Euler solution in \(\widehat{\mathcal{P}}\) with uniform bounds in the norms (5.36) for the \(H^{7}\)-class initial data specified in Section 4.2. We observe that by increasing the regularity of our initial data to \(H^{k}\) for \(k\geq 8\), then our argument would yield a unique \(H^{h}\)-class Euler solution in \(\widehat{\mathcal{P}}\) with uniform bounds in the \((k-1)\)-order version of the sixth-order norms in (5.36). In particular, in applying the trace to the functions \(F\) and \(G\) in (13.27), we can assume that we have sufficiently regularity for the cancellation of the two-sided trace to hold. 
\[\int_{\mathcal{P}^{\sharp}_{\pm}(\mathsf{s})}\widetilde{\mathsf{D}}_{1}F\,G\, \mathrm{d}x\mathrm{d}\mathsf{s}^{\prime} =\varepsilon\int_{0}^{\mathsf{s}}\int_{-\pi}^{\pi}\int_{\widetilde{x }_{1}^{*}(x_{2},\mathsf{s}^{\prime})}^{\pi}(\partial_{1}-\overline{\mathsf{Q}}_{1} \partial_{\mathsf{s}})F\,G\,\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{ \prime}+\varepsilon\int_{0}^{\mathsf{s}}\int_{-\pi}^{\pi}\int_{-\pi}^{\widetilde{x }_{1}^{*}(x_{2},\mathsf{s}^{\prime})}\partial_{1}F\,G\,\mathrm{d}x_{1} \mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}\] \[=\varepsilon\int_{0}^{\mathsf{s}}\int_{-\pi}^{\pi}(F\,G)(y_{1},x _{2},\mathsf{s}^{\prime})\Big{|}_{y_{1}=\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}^{ \prime})^{-}}^{y_{1}=\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}^{\prime})^{-}}\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[-\int_{-\pi}^{\pi}\partial_{\mathsf{s}}\widetilde{x}_{1}^{*}(x_{2}, \mathsf{s})F(\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\mathsf{G} (\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\mathrm{d}x_{2}\,, \tag{13.28}\] and \[\frac{d}{d\mathsf{s}}\int_{-\pi}^{\pi}\int_{-\pi}^{\widetilde{x}_{1 }^{*}(x_{2},\mathsf{s}^{\prime})}F(x,\mathsf{s})\mathsf{G}(x,\mathsf{s}) \mathrm{d}x_{1}\mathrm{d}x_{2} =\int_{-\pi}^{\pi}\int_{-\pi}^{\widetilde{x}_{1}^{*}(x_{2},\mathsf{ s})}\partial_{\mathsf{s}}\big{(}F(x,\mathsf{s})\mathsf{G}(x,\mathsf{s})\big{)} \mathrm{d}x_{1}\mathrm{d}x_{2}\] \[+\int_{-\pi}^{\pi}\partial_{\mathsf{s}}\widetilde{x}_{1}^{*}(x_{2 },\mathsf{s})F(\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}),x_{2},\mathsf{s}) \mathsf{G}(\widetilde{x}_{1}^{*}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\mathrm{ d}x_{2}\,. \tag{13.29}\] The \(\partial_{\mathsf{s}}\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})\) present in (13.28)-(13.29) may be computed from (13.11a), (13.12a), and (6.49), as \[\partial_{\mathsf{s}}\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})=\tfrac{1}{ \mathsf{Q}(x,\mathsf{s})}\partial_{t}x_{1}^{*}(x_{2},t)|_{t=\mathsf{q}^{-1}( x,\mathsf{s})}=-\tfrac{1}{\mathsf{Q}(x,\mathsf{s})}(\tfrac{\partial_{t} \partial_{1}J_{g}}{\partial_{1}J_{g}})(x_{1}^{*}(x_{2},t),t)\big{|}_{t=\mathsf{ q}^{-1}(x,\mathsf{s})}\,. \tag{13.30}\] From (6.54), (6.61), and (13.38a) below, we may deduce that \(|\partial_{\mathsf{s}}\widetilde{x}_{1}^{*}|\lesssim\varepsilon\), so that this term is finite. 
This allows us to add together (13.28) and (13.29), which shows that the two integrals evaluated along \(\Gamma(\mathsf{s})\) cancel each other, and upon integrating in \(\mathsf{s}\) deduce that \[\int_{\mathcal{X}_{\pm}^{\mathsf{s}}(\mathsf{s})}F(x,\mathsf{s}^{\prime}) \mathsf{G}(x,\mathsf{s}^{\prime})\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{|}_{ \mathsf{s}^{\prime}=0}^{\mathsf{s}^{\prime}=\mathsf{s}}=\int_{\mathcal{P}_{ \pm}^{\mathsf{s}}(\mathsf{s})}\partial_{\mathsf{s}}\big{(}F(x,\mathsf{s}^{ \prime})\mathsf{G}(x,\mathsf{s}^{\prime})\big{)}\mathrm{d}x_{1}\mathrm{d}x_{2} \mathrm{d}\mathsf{s}^{\prime}\] By setting \(\mathsf{G}=\varepsilon\widehat{\mathsf{Q}}G\), we have that \[\int_{\mathcal{P}_{\pm}^{\mathsf{s}}(\mathsf{s})}\widetilde{\mathsf{D}}_{ \mathsf{s}}F\,G\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}= \varepsilon\int_{\mathcal{X}_{\pm}^{\mathsf{s}}(\mathsf{s})}\widehat{\mathsf{Q }}\,F\,G\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{|}_{0}^{\mathsf{s}}-\int_{\mathcal{P }_{\pm}^{\mathsf{s}}(\mathsf{s})}F\,\widetilde{\mathsf{D}}_{\mathsf{s}}G \mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}-\varepsilon\int_{ \mathcal{P}_{\pm}^{\mathsf{s}}(\mathsf{s})}\mathring{\mathsf{Q}}_{\mathsf{s}} FG\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{13.31}\] which proves (13.25a). Third, we consider \(\widetilde{\mathsf{D}}_{2}^{*}\). By repeating the computations leading to (13.28) and (13.29) but with \(\partial_{\mathsf{s}}\) replaced by \(\partial_{2}\), we find that \[\int_{\mathcal{X}_{\pm}^{\mathsf{s}}(\mathsf{s})}\partial_{2}\big{(}F(x, \mathsf{s}^{\prime})\mathsf{G}(x,\mathsf{s}^{\prime})\big{)}\mathrm{d}x_{1} \mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}=0\,. \tag{13.32}\] Letting \(\mathsf{G}=\overline{\mathsf{Q}}_{2}G\), we find that \[\int_{\mathcal{P}_{\pm}^{\mathsf{s}}(\mathsf{s})}\widetilde{\mathsf{D}}_{2}F\,G= -\int_{\mathcal{P}_{\pm}^{\mathsf{s}}(\mathsf{s})}F\,\widetilde{\mathsf{D}}_{2 }G+\int_{\mathcal{P}_{+}^{\mathsf{s}}(\mathsf{s})}\mathring{\mathsf{Q}}_{2}G \,F-\int_{-\pi}^{\pi}\int_{\widetilde{x}_{1}^{*}(x_{2},\mathsf{s})}^{\pi}( \overline{\mathsf{Q}}_{2}G\,F)(x,\mathsf{s})\mathrm{d}x_{1}\mathrm{d}x_{2}\,. \tag{13.33}\] and thus (13.25c) follows. Identity (13.25d) is a consequence of (13.25a)-(13.25c), since \(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2}=\tfrac{1}{ \varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2}\). Finally, we note that by virtue of the cancellation of the "boundary" integrals evaluated along \(\Gamma(\mathsf{s})\), the sum of the identities (13.28) and (13.29) shows that \[\frac{d}{d\mathsf{s}}\int_{\mathcal{X}_{\pm}^{\mathsf{s}}(\mathsf{s})}F(x, \mathsf{s})\mathrm{d}x=\int_{\mathcal{X}_{\pm}^{\mathsf{s}}(\mathsf{s})} \partial_{\mathsf{s}}F(x,\mathsf{s})\mathrm{d}x\,. \tag{13.34}\] **Remark 13.6** (Dropping the tildes).: _For notational convenience, we shall once again drop the tildes from all the variables in \((x,\mathsf{s})\) coordinates. Dropping tildes on the fundamental Euler variables and on the geometric variables is done in direct analogy with Remark 5.2. Notably, we shall denote \(\widetilde{J}_{g},\widetilde{\partial},\widetilde{\widetilde{J}}_{g}\) simply as \(J_{g},\mathfrak{J},\overline{J}_{g}\). This identification is made throughout the rest of this section and no ambiguity may arise because we shall still use the notation \(\widetilde{\mathsf{D}}\) for the spacetime derivative operator in \((x,\mathsf{s})\) coordinates. 
As such, \(\widetilde{\mathsf{D}}f\) means that \(f\) is viewed as a function of \((x,\mathsf{s})\), while \(\mathsf{D}f\) means that \(f\) is viewed as a function of \((x,t)\), where \(t=\mathsf{q}^{-1}(x,\mathsf{s})\)._ ### The \(L^{2}\)-based energy norms For downstream maximal development, with the notation in (13.24) we define the energy norms by \[\widetilde{\mathcal{E}}_{6}^{2}(\mathsf{s}) =\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s})+(\mathsf{K }\varepsilon)^{-2}\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s})\,, \widetilde{\mathcal{E}}_{5}^{2}(\mathsf{s})=\widetilde{\mathcal{E}}_{5, \mathcal{N}}^{2}(\mathsf{s})+(\mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{E}}_{5, \mathcal{T}}^{2}(\mathsf{s})\] (13.35a) \[\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s}) =\big{\|}\|^{3\mathsf{d}}\,J_{g}^{1}\widetilde{\mathsf{D}}^{6}(J_{g} \mathring{\mathbf{W}}_{\mathcal{N}},J_{g}\mathring{\mathbf{Z}}_{\mathcal{N}},J_{g} \mathring{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}}^{2}\,, \widetilde{\mathcal{E}}_{5,\mathcal{N}}^{2}(\mathsf{s})=\big{\|}J_{g}^{1} \widetilde{\mathsf{D}}^{5}(J_{g}\mathring{\mathbf{W}}_{\mathcal{N}},J_{g} \mathring{\mathbf{Z}}_{\mathcal{N}},J_{g}\mathring{\mathbf{A}}_{\mathcal{N}})( \cdot,\mathsf{s})\big{\|}_{L^{2}}^{2}\] (13.35b) \[\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s}) =\big{\|}\|^{3\mathsf{d}}\,J_{g}^{1}\widetilde{\mathsf{D}}^{6}( \mathring{\mathbf{W}}_{\mathcal{T}},\mathring{ \[\widetilde{\mathcal{D}}^{2}_{6,\mathcal{N}}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\bigl{\|}\partial^{\mathsf{s}}J_{y}^{\mathsf{ s}}\widetilde{\mathsf{D}}^{6}(J_{y}\hat{\mathbf{W}}_{\mathcal{N}},J_{y}\tfrac{ \mathsf{Z}}{\mathcal{Z}}_{\mathcal{N}},J_{y}\hat{\mathbf{A}}_{\mathcal{N}}) \bigr{\|}_{L^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \widetilde{\mathcal{D}}^{2}_{5,\mathcal{N}}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\bigl{\|}\widetilde{\mathsf{D}}^{5}(J_{y} \hat{\mathbf{W}}_{\mathcal{N}},J_{y}\tfrac{\mathsf{Z}}{\mathcal{Z}}_{ \mathcal{N}},J_{y}\hat{\mathbf{A}}_{\mathcal{N}})\bigr{\|}_{L^{2}}^{2}\mathrm{d }\mathsf{s}^{\prime}\,, \tag{13.36b}\] \[\widetilde{\mathcal{D}}^{2}_{6,\mathcal{T}}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\bigl{\|}\partial^{\mathsf{s}}J_{y}^{ \mathsf{s}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{ \mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})\bigr{\|}_{L^{2}}^{2 }\mathrm{d}\mathsf{s}^{\prime}\,, \widetilde{\mathcal{D}}^{2}_{5,\mathcal{T}}(\mathsf{s}) =\int_{0}^{\mathsf{s}}\bigl{\|}\widetilde{\mathsf{D}}^{5}(\hat{ \mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{ \mathcal{T}})\bigr{\|}_{L^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{13.36c}\] where once again \(\mathsf{K}\geq 1\) is a sufficiently large constant, independent of \(\varepsilon\), chosen at the end of the proof, solely in terms of \(\alpha\) and \(\kappa_{0}\) (see (13.55) below). ### Bootstrap assumptions We continue to use the same bootstrap assumptions as in (5.37), but instead of assuming that these bootstraps hold in the spacetime \(\mathcal{P}\) (cf. (5.11)), we now assume that these bootstraps hold in the \(\mathcal{P}^{\sharp}\) spacetime (cf. (13.7)). As such, in this section all pointwise bootstraps are assumed to hold for \((x,t)\in\mathcal{P}^{\sharp}\), or equivalently, for all \((x,\mathsf{s})\in\mathbb{T}^{2}\times[0,\varepsilon)\) via the flattening map \(\mathsf{q}\) (cf. (13.8)), and for the energy and damping norms defined earlier in (13.35) and (13.36). 
We continue to follow the convention in Remark 13.6 and drop the tildes from all fundamental Euler variables, all the geometric variables, and on the flattening coefficients. To be more precise, the working bootstrap assumptions in this section are that \[(\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},J_{y},h,V, \Sigma)\] satisfy the pointwise bootstraps ( 5.37a) \[-(\ref{eq:bootstrap})\] in \[\mathcal{P}^{\sharp}\Leftrightarrow\mathbb{T}^{2}\times[0,\varepsilon)\,, \tag{13.37a}\] \[\widetilde{\mathcal{E}}_{6},\widetilde{\mathcal{D}}_{6},\widetilde{ \mathcal{E}}_{5},\widetilde{\mathcal{D}}_{5},\bigl{\|}\widetilde{\mathsf{D}}^{6 }\widetilde{\mathsf{D}}_{1}\widetilde{h}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}, \bigl{\|}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}\widetilde{h} \bigr{\|}_{L^{2}_{x,\mathsf{s}}},\bigl{\|}\widetilde{\mathsf{D}}^{6} \widetilde{J}_{y}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\] satisfy the energy bootstraps ( 5.37b ) \[-(\ref{eq:bootstrap}). \tag{13.37b}\] Here \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},J_{y},h,V,\Sigma)\) are defined according to the flattening (13.10), and the energy and damping norms are defined in (13.35) and (13.36), respectively. Since the bootstraps (13.37) in this section are the same as the bootstraps (5.37) used in Sections 5-12, save for the different weights in the \(L^{2}\) norms (see (13.35) and (13.36)), we shall sometimes (more frequently for the pointwise bootstraps) make reference to (5.37) instead of (13.37). As in Sections 5-12, the burden of the proof in the current section is to close the bootstrap assumptions (13.37), i.e., to show that these bounds hold with \(<\) symbols instead of \(\leq\) symbols. To avoid redundancy, we do not repeat the arguments of how bootstraps are closed when the proof is either identical to that in given earlier in Sections 5-12, or if it requires infinitesimal and straightforward adjustments. Instead, we focus on the proofs of those bootstraps which are different because of either the \(x_{1}\)-dependence of \(\mathsf{q}\) manifested through the fact that \(\widetilde{\mathsf{D}}^{*}_{1}\neq-\widetilde{\mathsf{D}}_{1}\) (see (13.25b)), or because of the fact that the weight \(\mathcal{J}\) used in the energy estimates satisfies \(\widetilde{\mathsf{D}}_{1}\mathcal{J}\neq 0\) (see (13.18)). The remainder of this section is dedicated to closing the bootstrap assumptions (13.37). In the process of closing the bootstrap estimates (13.37) we make use of the functional analytic framework in the flattened domain developed in Appendix B, with the modifications described in Section B.4. We shall also utilize the pointwise estimates for objects that naturally flow along the \(1\)-and \(2\)-characteristics, as developed in Appendix C, keeping in mind Section C.1, where the modifications due to the \(\mathsf{q}\) flattening map are discussed. ### Consequences of the bootstraps updated estimates for downstream maximal development Many of the bounds established in Sections 6-9 are direct consequences of the bootstrap assumptions, the functional analytic setup in Appendix B, and of the \(L^{\infty}\) estimates from Appendix C. We emphasize that many of these arguments apply **as is** in the geometry of the downstream maximal development, except that we refer to the bootstraps (13.37), to Section B.4, and to Section C.1. 
Examples include: the bounds for the diameter of the support in Section 6.1, the bounds for the ALE flow in Section 6.2, the pointwise bounds for \((W,Z,A)\) in Section 6.3, the pointwise bounds for \(\mathsf{D}^{k}(J_{y}\hat{\mathbf{W}}_{\mathcal{N}})\) and \(\mathsf{D}^{k}J_{y}\) when \(0\leq k\leq 2\) from Section 6.4, the properties of \(x_{1}^{*}(x_{2},t)\) and of the curve of pre-shocks in Section 6.6, the damping properties of \(J_{y}\) and \(\mathcal{J}\) from Section 6.7, and the closure of the bootstrap for the fifth order derivatives (cf. (5.37s)) discussed in Section 6.8 (here we only use that \(\mathcal{J}=1-\frac{\mathsf{s}}{\mathsf{s}}\)). These arguments are not repeated here. In fact, because these bounds are the same, throughout this section we abuse notation and make reference to equation numbers from Sections 5-6. The only estimates from Section 6 which require a substantial modification specific to downstream maximal development are the bounds for the remapping coefficients \((\widehat{\mathsf{Q}},\mathsf{Q},\overline{\mathsf{Q}}_{i},\hat{\mathsf{Q}}, \hat{\mathsf{Q}}_{i})\) defined in (13.12). The analysis in Section 6.5, or more precisely, the bounds from Lemma 6.3, are to be replaced with the estimates obtained in Lemma 13.7 below. **Lemma 13.7**.: _Assume that the bootstrap bounds (13.37) hold on \(\mathcal{P}^{\sharp}\). If \(\varepsilon\) is taken to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), then the functions \((\widehat{\mathsf{Q}},\mathsf{Q},\overline{\mathsf{Q}}_{i},\widehat{\mathsf{Q}} _{i})\) defined in (13.12) satisfy the bounds_ \[\tfrac{17(1+\alpha)}{40}-\hat{C}\varepsilon\mathsf{C}_{\mathsf{J} _{i}} \leq\widehat{\mathsf{Q}}\leq 401(1+\alpha)\,, \tag{13.38a}\] \[\tfrac{2(1+\alpha)}{5}\leq\mathsf{Q}\leq 402(1+\alpha)\,, \tag{13.38b}\] \[0 \leq\overline{\mathbb{Q}}_{1}\leq 5\mathbf{1}_{\mathcal{P}^{\sharp}_{+}( \mathsf{s})}\,, \tag{13.38c}\] \[0 \leq\mathring{\mathbb{Q}}_{1}\leq 5\varepsilon^{-1}\mathbf{1}_{ \mathcal{P}^{\sharp}_{+}(\mathsf{s})}\,,\] (13.38d) \[|\overline{\mathbb{Q}}_{2}| \leq 1500\varepsilon\,,\] (13.38e) \[|\mathring{\mathbb{Q}}_{2}| \leq 1500\,,\] (13.38f) \[|\mathring{\mathbb{Q}}_{\mathsf{s}}|,|\mathring{\mathbb{Q}}| \leq 2\cdot 250^{2}\mathsf{Q}\varepsilon^{-1}+\mathring{C}\,. \tag{13.38g}\] _hold uniformly for all \((x,\mathsf{s})\in\mathcal{P}^{\sharp}_{\pm}(\varepsilon)\)._ We note that the lower bound in (13.38a) matches that in (6.38a), up to an inconsequential \(\mathcal{O}(\varepsilon)\) term. All other bounds precisely match those obtained previously in (6.38), including the all-important lower bound for \(\mathsf{Q}\) in (13.38b). The bounds (13.38c) and (13.38d) are new; their key feature is the \(\geq 0\) lower bounds for both \(\overline{\mathbb{Q}}_{1}\) and \(\mathring{\mathbb{Q}}_{1}\). Proof of Lemma 13.7.: For \(x_{1}\leq x_{1}^{*}(x_{2},\mathsf{s})\), the functions \((\widehat{\mathbb{Q}},\overline{\mathbb{Q}}_{1},\overline{\mathbb{Q}}_{2}, \mathsf{Q},\mathring{\mathbb{Q}},\mathring{\mathbb{Q}}_{\mathsf{s}},\mathring{ \mathbb{Q}}_{1},\mathring{\mathbb{Q}}_{2})\) defined in (13.12) exactly equal the functions \((\widehat{\mathbb{Q}},0,\overline{\mathbb{Q}}_{2},\mathsf{Q},\mathring{ \mathbb{Q}},\mathring{\mathbb{Q}}_{\mathsf{s}},0,\mathring{\mathbb{Q}}_{2})\) defined in (5.22), because \(\mathscr{J}=\mathcal{J}\) (recall the paragraph below (13.6)). As such, for these values of \(x_{1}\), all the bounds established earlier in (6.38) continue to hold. 
Thus for \(x_{1}\leq x_{1}^{*}(x_{2},\mathsf{s})\) no proof is required. Next, we consider the region \(x_{1}>x_{1}^{*}(x_{2},\mathsf{s})\). We start with the bound for \(\widehat{\mathbb{Q}}\) which is defined in (13.12a). In light of (13.5a) and (13.6), we first consider values of \(x_{1}\) such that \(x_{1}^{*}(x_{2},t)\leq x_{1}\leq x_{1,-}^{\sharp}(x_{2})\). Here, via (5.6), the fact that \(\mathfrak{C}\geq 0\), using (6.24d), and the fact that the map \(x_{1}\mapsto\mathsf{D}_{1}w_{0}(x_{1},x_{2})\) is monotone increasing (at fixed \(x_{2}\)) at least until it reaches the value of \(-\frac{1}{3}\) (see the paragraph below (13.1)), we have that \[\widehat{\mathbb{Q}}(x,\mathsf{s})=-\varepsilon(\partial_{t} \mathscr{J})(x,t)\big{|}_{t=\mathsf{q}^{-1}(x,\mathsf{s})} \geq-\varepsilon(\partial_{t}J_{s})(x,t)\big{|}_{t=\mathsf{q}^{-1}(x, \mathsf{s})}\] \[\geq-\tfrac{1+\alpha}{2}\big{(}\mathsf{D}_{1}w_{0}(x)+ \varepsilon\mathsf{C}_{\mathsf{J}_{t}}\big{)}\] \[\geq\tfrac{1+\alpha}{2}\big{(}\tfrac{17}{20}-\varepsilon\mathsf{ C}_{\mathsf{J}_{t}}\big{)}\geq\tfrac{17(1+\alpha)}{40}-\varepsilon\tfrac{1+ \alpha}{2}\mathsf{C}_{\mathsf{J}_{t}}\,. \tag{13.39}\] In identical fashion, for \(x_{1}\geq x_{1,+}^{\sharp}(x_{2})\), by using the third branch in (13.5a) we have that \[\widehat{\mathbb{Q}}(x,\mathsf{s})=-\varepsilon(\partial_{t} \mathscr{J})(x,t)\big{|}_{t=\mathsf{q}^{-1}(x,\mathsf{s})}\geq-\tfrac{1+ \alpha}{2}\big{(}\mathsf{D}_{1}w_{0}(x_{1}^{\sharp}(x_{2}),x_{2})+\varepsilon \mathsf{C}_{\mathsf{J}_{t}}\big{)}\geq\tfrac{17(1+\alpha)}{40}-\varepsilon \tfrac{1+\alpha}{2}\mathsf{C}_{\mathsf{J}_{t}}\,. \tag{13.40}\] It thus remains to consider values of \(x_{1}\) such that \(x_{1,-}^{\sharp}(x_{2})<x_{1}<x_{1,+}^{\sharp}(x_{2})\), which represents the middle branch of (13.5a). In this region, by construction we have that \(\partial_{t}\mathscr{J}=\partial_{t}\mathscr{J}_{s}\) is monotone increasing in \(x_{1}\) (see (13.5b)), and hence \(\widehat{\mathbb{Q}}\) is monotone decreasing in \(x_{1}\). Its minimum value is thus attained when \(x_{1}=x_{1,+}^{\sharp}(x_{2})\), a point at which the bound (13.40) was previously established. This concludes the proof of the lower bound in (13.38a). The upper bound in (13.38a) is obtained by computing \(-\varepsilon\partial_{t}\vec{\mathscr{J}}_{s}\), which by its definition in (13.5a) and by (13.5b) attains its maximum in the region \(x_{1}^{*}(x_{2},t)\leq x_{1}\leq x_{1,-}^{\sharp}(x_{2})\). In this region, we conclude via the last bound in (5.8) and the fact that \((-\varepsilon\partial_{t}J_{s})\leq\tfrac{1+\alpha}{2}(1+\varepsilon\mathsf{C}_ {\mathsf{J}_{t}})\), via (6.24d). The bound (13.38b) follows from (13.38a) and (13.38e), since \(|\mathsf{Q}-\widehat{\mathsf{Q}}|\leq|V||\overline{\mathbb{Q}}_{2}|\leq \mathring{C}\varepsilon^{2}\). Next, we turn to the bounds (13.38c) and (13.38d) which concern \(1\)-derivatives of the geometric flattening coefficients, \(\overline{\mathbb{Q}}_{1}\) and \(\mathring{\mathbb{Q}}_{1}\). The definitions (13.12b) and (13.6) yield \[\overline{\mathbb{Q}}_{1}(x,\mathsf{s})=\varepsilon(\partial_{1} \vec{\mathscr{J}})(x,t)=\varepsilon(\partial_{1}\vec{\mathscr{J}}_{s})(x,t)\,, \qquad\text{where}\qquad t=\mathsf{q}^{-1}(x,\mathsf{s})\,,\] (13.41a) in the region of interest, \[\{x_{1}>x_{1}^{*}(x_{2},t)\}\]. We first note that the definition of \[\vec{\mathscr{J}}_{s}\] in ( 13.5a ) yields \[(\partial_{1}\vec{\mathscr{J}}_{s})(x,t)=0\,,\qquad\text{for}\qquad x_{1}>x_{1,+} ^{\sharp}(x_{2})\,. 
\tag{13.41b}\] For the middle branch in (13.5a), namely \(|x_{1}-x_{1}^{\sharp}(x_{2})|<\tfrac{\varepsilon}{1000}\), we use the first two inequalities in (13.5b), which may be integrated in time (since \(x_{1}^{\sharp}\) does not depend on time) to show that \[\partial_{1}\vec{\mathscr{J}}_{s}(x,t)=\underbrace{\partial_{1} \vec{\mathscr{J}}_{s}(x,\mathsf{t}_{\mathsf{m}})}_{=0}+\int_{\mathsf{t}_{ \mathsf{m}}}^{t}(\partial_{t}\partial_{1}\vec{\mathscr{J}}_{s})(x,t^{\prime}) \mathrm{d}t^{\prime}\] \[=\int_{\mathfrak{t}_{\mathfrak{in}}}^{t}(\partial_{t}\partial_{1} \vec{\mathscr{J}}_{g})(x,t^{\prime})\mathrm{d}t^{\prime}\in[0,(\mathfrak{t}_{ \mathfrak{fin}}-\mathfrak{t}_{\mathfrak{in}})\cdot 2(1+\alpha)\varepsilon^{-2}] \subseteq[0,5\varepsilon^{-1}]. \tag{13.41c}\] It remains to consider the \(\partial_{1}\)-derivative of the first branch in (13.5a), which concerns \(x_{1}^{\ast}<x_{1}<x_{1,-}^{\sharp}(x_{2})\). For these values of \(x_{1}\) we have that \(\partial_{1}\vec{\mathscr{J}}_{g}=\partial_{1}J_{g}\), while (4.10) and (6.24b), imply \[\|\partial_{1}J_{g}\|_{L^{\infty}_{x,\mathfrak{t}}}\leq(\mathfrak{t}_{ \mathfrak{fin}}-\mathfrak{t}_{\mathfrak{in}})\tfrac{1+\alpha}{2\varepsilon^{ 2}}(\|\mathsf{D}_{1}^{2}w_{0}\|_{L^{\infty}_{x}}+\mathring{C}\varepsilon^{2} )\leq\tfrac{51}{25\varepsilon}+\mathring{C}\varepsilon\leq\tfrac{3}{ \varepsilon}\,. \tag{13.41d}\] This gives the desired upper bound. For the lower bound, as already noted in Proposition 13.1, for \(x_{1}\in[x_{1}^{\ast}(x_{2},t),x_{1}^{\sharp}(x_{2})]\) we have that \(\mathsf{D}_{1}w_{0}(x)\leq-\tfrac{17}{20}<-\tfrac{1}{3}\), and therefore \(\mathsf{D}_{1}^{2}w_{0}(x)\geq\varepsilon^{\frac{7}{8}}\) (due to assumption ((viii))). With this information, (4.10) and (6.24b), now imply \[\partial_{1}J_{g}(x,t)\geq(t-\mathfrak{t}_{\mathfrak{in}})\tfrac{1+\alpha}{2 \varepsilon^{2}}(\mathsf{D}_{1}^{2}w_{0}(x)-\mathring{C}\varepsilon^{2})\geq (t-\mathfrak{t}_{\mathfrak{in}})\tfrac{1+\alpha}{2\varepsilon^{2}}( \varepsilon^{\frac{7}{8}}-\mathring{C}\varepsilon^{2})\geq(t-\mathfrak{t}_{ \mathfrak{in}})\tfrac{1+\alpha}{4}\varepsilon^{-\frac{9}{8}}\geq 0\,. \tag{13.41e}\] Collecting the bounds in (13.41), we have thus established (13.38c). Next, we consider the bound (13.38d). From (13.12) and the chain rule we have that \[\mathring{\mathsf{Q}}_{1}(x,\mathsf{s})=\tfrac{\varepsilon}{\mathring{\mathsf{ Q}}(x,\mathsf{s})}(\partial_{t}\partial_{1}\widehat{\mathscr{J}})(x,\mathsf{q}^{-1}(x, \mathsf{s}))\,.\] Inspecting the proofs of the bounds in (13.41), and upon referring to (6.24f) instead of (6.24b), we may deduce that \(0\leq\partial_{t}\partial_{1}\widehat{\mathscr{J}}(x,t)\leq 2(1+\alpha) \varepsilon^{-2}\), for all \(x_{1}>x_{1}^{\ast}(x_{2},t)\). Using the lower bound for \(\widehat{\mathsf{Q}}\) obtained in (13.38a), the proof of (13.38d) now follows. Next, we turn to the proof of (13.38e) for \[\overline{\mathsf{Q}}_{2}(x,\mathsf{s})=\tfrac{\varepsilon}{\mathring{\mathsf{ Q}}(x,\mathsf{s})}(\partial_{2}\widehat{\mathscr{J}})(x,\mathsf{q}^{-1}(x, \mathsf{s}))=\tfrac{\varepsilon}{\mathring{\mathsf{Q}}(x,\mathsf{s})}( \partial_{2}\vec{\mathscr{J}}_{g})(x,\mathsf{q}^{-1}(x,\mathsf{s}))\,,\] for \(x_{1}>x_{1}^{\ast}(x_{2},t)\). 
For the first branch in (13.5a), we use (4.10) and (6.24b) to deduce \(|(\partial_{2}\vec{\mathscr{J}}_{g})(x,t)|=|(\mathsf{D}_{2}J_{g})(x,t)|\leq(\mathfrak{t}_{\mathfrak{fin}}-\mathfrak{t}_{\mathfrak{in}})\tfrac{1+\alpha}{2\varepsilon}(\|\mathsf{D}_{1}\mathsf{D}_{2}w_{0}\|_{L^{\infty}_{x}}+\mathring{C}\varepsilon^{2})\leq\tfrac{51}{25}+\mathring{C}\varepsilon^{2}\leq 3\). For the middle branch in (13.5a) we appeal to the second bound in (13.5b), which may be integrated in time to yield \(|(\partial_{2}\vec{\mathscr{J}}_{g})(x,t)|\leq 250(1+\alpha)\varepsilon^{-1}(\mathfrak{t}_{\mathfrak{fin}}-\mathfrak{t}_{\mathfrak{in}})\leq 510\). For the last branch in (13.5a), we note that for all \(x_{1}\geq x_{1,+}^{\sharp}(x_{2})\), by the argument in Remark 13.2 we have \(|\partial_{2}\partial_{t}\vec{\mathscr{J}}_{g}(x_{1},x_{2},t)|=|\partial_{2}\partial_{t}\vec{\mathscr{J}}_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)|\leq|\partial_{t}\partial_{2}J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)|+|\partial_{2}x_{1}^{\sharp}(x_{2})||\partial_{t}\partial_{1}J_{g}(x_{1}^{\sharp}(x_{2}),x_{2},t)|\leq 242(1+\alpha)\varepsilon^{-1}\). Integrating this bound in time similarly implies \(|(\partial_{2}\vec{\mathscr{J}}_{g})(x,t)|\leq 242(1+\alpha)\varepsilon^{-1}(\mathfrak{t}_{\mathfrak{fin}}-\mathfrak{t}_{\mathfrak{in}})\leq 510\). Using the lower bound for \(\widehat{\mathsf{Q}}\) obtained in (13.38a), the proof of (13.38e) now follows.

Inspecting the arguments in the previous paragraph, and referring to the bound (6.24f) instead of (6.24b), we may deduce that \(|\partial_{2}\partial_{t}\vec{\mathscr{J}}_{g}(x,t)|\leq 250(1+\alpha)\varepsilon^{-1}\) for all \(x_{1}>x_{1}^{\ast}(x_{2},t)\). Since \(\mathring{\mathsf{Q}}_{2}(x,\mathsf{s})=\tfrac{\varepsilon}{\widehat{\mathsf{Q}}(x,\mathsf{s})}(\partial_{t}\partial_{2}\widehat{\mathscr{J}})(x,\mathsf{q}^{-1}(x,\mathsf{s}))\), using the lower bound for \(\widehat{\mathsf{Q}}\) obtained in (13.38a), the proof of (13.38f) now follows.

Lastly, the estimates in (13.38g) are obtained by repeating the proofs of (6.38d) and (6.38f), except that these proofs simplify in the downstream region. For instance, when compared to the \(\mathring{\mathsf{Q}}_{\mathsf{s}}\) expression from (6.50), in the downstream region we have from (13.12) and the chain rule that
\[\mathring{\mathsf{Q}}_{\mathsf{s}}(x,\mathsf{s})=\tfrac{-\varepsilon}{\widehat{\mathsf{Q}}(x,\mathsf{s})}(\partial_{t}^{2}\widehat{\mathscr{J}})(x,\mathsf{q}^{-1}(x,\mathsf{s}))\,.\]
When compared to (6.50), the terms which involve a \(\mathsf{D}_{1}^{2}J_{g}\) denominator are absent. Given that we have already bounded \(\widehat{\mathsf{Q}}\) from below in (13.38a), repeating the same arguments as in the proofs of (6.38d), we obtain the bound for \(\mathring{\mathsf{Q}}_{\mathsf{s}}\) claimed in (13.38g). The bound for \(\mathring{\mathsf{Q}}\) claimed in (13.38g) follows from the \(\mathring{\mathsf{Q}}_{\mathsf{s}}\), \(\mathring{\mathsf{Q}}_{2}\), and \(\overline{\mathsf{Q}}_{2}\) estimates, since \(|\mathring{\mathsf{Q}}-\mathring{\mathsf{Q}}_{\mathsf{s}}|\leq|V||\mathring{\mathsf{Q}}_{2}|+|\partial_{\mathsf{s}}V||\overline{\mathsf{Q}}_{2}|\lesssim\varepsilon\).

For the sake of completeness, we also record here an analogue of Lemma 6.5, showing that the map \(\mathsf{q}\) is invertible.
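For later use in the proof of Lemma 13.8 below, we record the elementary arithmetic which converts the lower bound in (13.38a) into a strictly positive lower bound for \(\widehat{\mathsf{Q}}\); the only assumption is the smallness condition \(\varepsilon\mathsf{C}_{\mathsf{J}_{t}}\leq\tfrac{1}{20}\), which holds once \(\varepsilon\) is sufficiently small with respect to \(\mathsf{C}_{\mathsf{data}}\):
\[\widehat{\mathsf{Q}}\geq\tfrac{17(1+\alpha)}{40}-\varepsilon\tfrac{1+\alpha}{2}\mathsf{C}_{\mathsf{J}_{t}}=\tfrac{1+\alpha}{40}\big{(}17-20\,\varepsilon\mathsf{C}_{\mathsf{J}_{t}}\big{)}\geq\tfrac{16(1+\alpha)}{40}=\tfrac{2(1+\alpha)}{5}\,.\]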
**Lemma 13.8** (The map \(\mathsf{q}\) is invertible).: _Assume that the bootstraps (13.37) hold and that \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). Then, the map \(\mathsf{q}\) defined by (13.8) is invertible, with inverse \(\mathsf{q}^{-1}\) defined by (13.9)._

Proof of Lemma 13.8.: The proof is nearly identical to that of Lemma 6.5. The fact that the equation \(\mathsf{s}=\mathsf{q}(x,t)=\varepsilon(1-\widehat{\mathscr{J}}(x,t))\) has at most one solution \(t\in[\mathfrak{t}_{\mathfrak{in}},\mathfrak{t}_{\mathfrak{fin}})\), for every fixed \(\mathsf{s}\in[0,\varepsilon)\) and \(x\in\mathbb{T}^{2}\), follows from the fact that \(\partial_{t}\mathsf{q}=-\varepsilon\partial_{t}\widehat{\mathscr{J}}=\widehat{\mathsf{Q}}\geq\tfrac{2(1+\alpha)}{5}>0\). The fact that the equation \(\mathsf{s}=\mathsf{q}(x,t)\) has at least one such solution follows, exactly as in the proof of Lemma 6.5, from the continuity of \(t\mapsto\mathsf{q}(x,t)\) and the intermediate value theorem.

### Identities in the downstream coordinate system

With respect to the coordinates \((x,\mathsf{s})\) given by (13.8), with the transformation (13.10), and upon dropping the tildes (see Remark 13.6), we have the following fundamental identities, which are translations of the identities in Section 3 into \((x,\mathsf{s})\) coordinates (see also (5.30)-(5.35)):
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g} =\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\,, \tag{13.42a}\]
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\tilde{\mathcal{J}} =-\tfrac{\mathsf{Q}}{\varepsilon}\,,\] (13.42b)
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}_{2}h =g\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\,,\] (13.42c)
\[\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}\Sigma =\tfrac{1}{2}J_{g}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z}}_{\mathcal{N}})+\tfrac{1}{2}J_{g}\widetilde{\mathsf{D}}_{2}h(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] (13.42d)
\[\widetilde{\mathsf{D}}_{2}\Sigma =\tfrac{1}{2}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] (13.42e)
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma =-\alpha\Sigma(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})\,,\] (13.42f)
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma^{-2\beta} =2\alpha\beta\Sigma^{-2\beta}(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})\,,\] (13.42g)
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{N} =-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\tau\,,\] (13.42h)
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\tau =\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1-\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\mathcal{N}\,,\] (13.42i)
\[\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Omega =\tfrac{\alpha}{\varepsilon}\widetilde{\mathsf{D}}_{1}\Omega-\alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2}\Omega\,.
\tag{13.42j}\] By using (5.8), we also note that \(\widetilde{\mathsf{D}}_{1}(J_{g}-\overline{J}_{g})=\widetilde{\mathsf{D}}_{2}(J_{g}-\overline{J}_{g})=0\).

### Bounds for the geometry, sound speed, and ALE velocity for downstream maximal development

We record here the modifications to the bounds obtained earlier in Section 7, due to the downstream maximal geometry. It turns out that the only modification is due to the change of the weight in the energy estimates, namely \(\mathcal{J}\mapsto\tilde{\mathcal{J}}\): all the bounds which do not involve this weight function remain identical to those in Section 7 (and for those bounds we abuse notation and make reference to equation numbers from Section 7), while all the bounds which do involve this weight function need to be modified by exchanging \(\mathcal{J}\) with \(\tilde{\mathcal{J}}\). For instance, the bounds in Proposition 7.1 now become the bounds given in Proposition 13.9 below. The corollaries and remarks which follow this proposition (in particular, the closure of the (5.37t) and (5.37u) bootstraps in Corollary 7.2) remain the same as in Section 7, and to avoid redundancy we do not repeat those arguments here.

**Proposition 13.9** (Bounds for the geometry, sound speed, and ALE velocity).: _Assume that the bootstrap assumptions (13.37) hold, and that \(\varepsilon\) is taken to be sufficiently small to ensure \(\varepsilon^{\frac{1}{2}}(\langle\mathsf{B}_{\mathsf{J}}\rangle+\langle\mathsf{B}_{\mathsf{h}}\rangle+\langle\mathsf{B}_{\mathsf{6}}\rangle)\leq 1\). Then, assuming \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), the functions \((J_{g},\widetilde{\mathsf{D}}_{1}h,\widetilde{\mathsf{D}}_{2}h,\Sigma,V)\) satisfy the bounds (7.1b), (7.1d), (7.1e), (7.1j)-(7.1k), and (7.1l)-(7.1m), respectively.
_Additionally,_
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}\tilde{\mathcal{J}}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\big{\|}\tilde{\mathcal{J}}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\big{\|}_{L^{2}_{x,\mathsf{s}}}^{2} \lesssim\varepsilon(\mathsf{B}_{6})^{2}\,, \tag{13.43a}\]
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\big{\|}\tilde{\mathcal{J}}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\|\tilde{\mathcal{J}}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\|_{L^{2}_{x,\mathsf{s}}}^{2} \lesssim\mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^{2}\,,\] (13.43b)
\[\sum_{|\gamma|=3}^{6}\|\tilde{\mathcal{J}}^{-\frac{1}{4}}\big{(}\widetilde{\mathsf{D}}^{|\gamma|}\mathcal{N}+\tfrac{1}{g}\tau\widetilde{\mathsf{D}}^{|\gamma|}\widetilde{\mathsf{D}}_{2}h\big{)}\|_{L^{2}_{x,\mathsf{s}}} \lesssim\mathsf{K}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle\,,\] (13.43c)
\[\|\tilde{\mathcal{J}}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\mathcal{N}\|_{L^{2}_{x,\mathsf{s}}}+\|\tilde{\mathcal{J}}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\tau\|_{L^{2}_{x,\mathsf{s}}} \lesssim\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (13.43d)
\[\|\tilde{\mathcal{J}}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\mathcal{N}\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\|\tilde{\mathcal{J}}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}\tau\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}} \lesssim\mathsf{K}\varepsilon^{\frac{3}{2}}\langle\mathsf{B}_{6}\rangle\,, \tag{13.43e}\]
_where the implicit constants in all the above inequalities depend only on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._

Proof of Proposition 13.9.: We explain the downstream modifications required for the proof of the inequality (13.43a). Just as the equation (7.11) was obtained, letting \(\widetilde{\mathsf{D}}^{6}\) act on (13.42a) in the set \(\mathcal{P}^{\sharp}_{\pm}(\varepsilon)\), we have that
\[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(\widetilde{\mathsf{D}}^{6}J_{g})=\tfrac{1+\alpha}{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})+\tfrac{1-\alpha}{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})+\mathsf{R}_{J_{g}}\,, \tag{13.44}\]
where \(\mathsf{R}_{J_{g}}=-\widetilde{\mathsf{D}}^{6}V\,\widetilde{\mathsf{D}}_{2}J_{g}-\big{(}\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}\).
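Here, and assuming the double-commutator convention of the earlier sections is kept (which we take as given), the bracket \(\big{(}\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}\) collects exactly the intermediate Leibniz terms in which between one and five copies of \(\widetilde{\mathsf{D}}\) fall on \(V\):
\[\big{(}\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}:=\widetilde{\mathsf{D}}^{6}\big{(}V\,\widetilde{\mathsf{D}}_{2}J_{g}\big{)}-V\,\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}J_{g}-\widetilde{\mathsf{D}}^{6}V\,\widetilde{\mathsf{D}}_{2}J_{g}\,,\]
which is consistent with the form of \(\mathsf{R}_{J_{g}}\) displayed above.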
For each \(\mathsf{s}\in(0,\varepsilon)\), we compute the \(L^{2}(\mathcal{X}_{\pm}^{\sharp}(\mathsf{s}))\)-inner product of (13.44) with \(\tilde{\mathcal{J}}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}J_{g}\) to obtain that
\[\tfrac{1}{2}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\big{(}\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\big{)}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}=\tfrac{1+\alpha}{2}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{1-\alpha}{2}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\mathsf{R}_{J_{g}}\,\widetilde{\mathsf{D}}^{6}J_{g}\,. \tag{13.45}\]
Next, we commute the operator \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\) past \(\tilde{\mathcal{J}}^{\frac{1}{2}}\) separately in the regions \(\mathcal{X}_{-}^{\sharp}(\mathsf{s})\) and \(\mathcal{X}_{+}^{\sharp}(\mathsf{s})\); here, recall the definition of \(\tilde{\mathcal{J}}\) in (13.6). In analogy to (7.13), in both \(\mathcal{X}_{-}^{\sharp}(\mathsf{s})\) and \(\mathcal{X}_{+}^{\sharp}(\mathsf{s})\) we have that for any function \(f=f(x,\mathsf{s})\geq 0\) and any \(r\in\mathbb{R}\) the following identity and subsequent estimate hold (by appealing to (13.42b)):
\[\tilde{\mathcal{J}}^{r}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})f=\partial_{\mathsf{s}}\big{(}\tilde{\mathcal{J}}^{r}\mathsf{Q}f\big{)}+\partial_{2}\big{(}\tilde{\mathcal{J}}^{r}Vf\big{)}-rf\tilde{\mathcal{J}}^{r-1}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\tilde{\mathcal{J}}-f\tilde{\mathcal{J}}^{r}\big{(}\partial_{\mathsf{s}}\mathsf{Q}+\partial_{2}V\big{)}=\partial_{\mathsf{s}}\big{(}\tilde{\mathcal{J}}^{r}\mathsf{Q}f\big{)}+\partial_{2}\big{(}\tilde{\mathcal{J}}^{r}Vf\big{)}+r\tfrac{\mathsf{Q}}{\varepsilon}f\tilde{\mathcal{J}}^{r-1}-f\tilde{\mathcal{J}}^{r}\big{(}\partial_{\mathsf{s}}\mathsf{Q}+\partial_{2}V\big{)}\,. \tag{13.46}\]
Integrating the above expression over \(\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})\), appealing to (13.34) for the \(\partial_{\mathsf{s}}\) term, to (13.32) for the \(\partial_{2}\) term, appealing to the \(\mathsf{Q}\) and \(\mathring{\mathsf{Q}}\) bounds in (13.38), to the \(V_{,2}=\mathcal{O}(\varepsilon)\) bootstrap in (13.37), and upon taking \(\varepsilon\) to be sufficiently small, we arrive at the bound
\[\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{r}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})f\geq\tfrac{d}{d\mathsf{s}}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{r}\mathsf{Q}f+\tfrac{2r(1+\alpha)}{5\varepsilon}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{r-1}f-\tfrac{2\cdot 250^{2}+\mathring{C}\varepsilon}{\varepsilon}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{r}\mathsf{Q}f\,. \tag{13.47}\]
With \(r=\tfrac{1}{2}\) and \(f=\tfrac{1}{2}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}\), we use (13.47) to lower bound the left side of (13.45), resulting in
\[\tfrac{1}{2}\tfrac{d}{d\mathsf{s}}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\mathsf{Q}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}+\tfrac{1+\alpha}{10\varepsilon}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{-\frac{1}{2}}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}-\tfrac{1+2\cdot 250^{2}}{\varepsilon}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\mathsf{Q}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}\leq\tfrac{1+\alpha}{2}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{1-\alpha}{2}\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\int_{\mathcal{X}_{\pm}^{\sharp}(\mathsf{s})}\tilde{\mathcal{J}}^{\frac{1}{2}}\mathsf{R}_{J_{g}}\,\widetilde{\mathsf{D}}^{6}J_{g}\,.\]
The integrals on the right side of the above inequality are analyzed in the identical manner as in the proof of Lemma 7.4 (see (7.14)-(7.16)). By using Gronwall's inequality on the interval \([0,\varepsilon]\) to handle the third term on the left side of the above inequality, and using the lower bound in (13.38b), we arrive at
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\|\tilde{\mathcal{J}}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\|\tilde{\mathcal{J}}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\|_{L^{2}_{x,\mathsf{s}}}^{2}\lesssim\|\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,0)\|_{L^{2}_{x}}^{2}+\varepsilon\langle\mathsf{B}_{6}\rangle^{2}\,.\]
Using (4.11), we arrive at (13.43a). The downstream modifications (when compared to the proof of Proposition 7.1) required for the proof of the inequalities (13.43b)-(13.43e) are identical (going through (13.47) instead of (7.13)), and these details will be omitted. To avoid redundancy, we also omit the proofs of the unweighted bounds for \((J_{g},\widetilde{\mathsf{D}}_{1}h,\widetilde{\mathsf{D}}_{2}h,\Sigma,V)\), as these are established exactly as in the proof of Proposition 7.1.

### Estimates in the downstream geometry which are improved due to vorticity bounds

In the downstream geometry given by the transformation (13.8) we still obtain improved estimates for the vorticity, and as a consequence we have the following.

**Proposition 13.10** (Improved vorticity-based estimates).: _\((J_{g}\hat{\mathbf{W}}_{\mathcal{N}},\hat{\mathbf{Z}}_{\mathcal{N}},\hat{\mathbf{A}}_{\mathcal{N}})\) satisfy the improved bounds of Proposition 8.1, now on the downstream spacetime \(\mathcal{P}^{\sharp}_{\pm}(\varepsilon)\)._

Proof of Proposition 13.10.: The claimed vorticity estimates are established identically to the proof of Proposition 8.1, and are a consequence of the following bound for the specific vorticity
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\bigl{\|}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}\lesssim\varepsilon(\mathsf{B}_{6})^{2}\,.
\tag{13.49}\] To avoid redundancy we only sketch here the modifications (when compared to the proof of (8.1)) needed to establish (13.49), which arise due to the \(\widetilde{\mathsf{D}}_{1}\)-derivative term. Applying \(\widetilde{\mathsf{D}}^{6}\) to (13.42j), as in (8.4) we arrive at
\[\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}\Omega-\tfrac{\alpha}{\varepsilon}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}\Omega+\alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\Omega=\mathsf{R}_{\Omega}\,, \tag{13.50}\]
where the remainder term \(\mathsf{R}_{\Omega}\) is defined exactly as in (8.5). We test (13.50) with \(\Sigma^{-2\beta+1}\widetilde{\mathsf{D}}^{6}\Omega\), with \(\beta>0\) to be chosen appropriately (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), and integrate over \(\mathcal{P}_{\pm}^{\sharp}(\mathsf{s})\). Note that here we do not multiply by powers of \(\tilde{\mathcal{J}}\) (as opposed to (13.45) earlier), and so appealing to (13.47) is not necessary. Additionally, note that the term containing a \(\partial_{1}\) derivative, namely \(-\tfrac{\alpha}{2}\Sigma^{-2\beta+1}\widetilde{\mathsf{D}}_{1}|\widetilde{\mathsf{D}}^{6}\Omega|^{2}\), does not contain a \(J_{g}\) or \(\tilde{\mathcal{J}}\) factor. Instead, we appeal directly to the adjoint formulas for \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})^{*}\), \(\widetilde{\mathsf{D}}_{1}^{*}\), and \(\widetilde{\mathsf{D}}_{2}^{*}\) from (13.25), and to the bounds (13.38), to deduce, in analogy with (8.6)-(8.8), that the quantity
\[\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\bigr{\|}_{L_{x}^{2}}^{2}-\bigl{\|}\tfrac{(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,0)\bigr{\|}_{L_{x}^{2}}^{2}+\int_{0}^{\mathsf{s}}\!\!\int\big{|}\widetilde{\mathsf{D}}^{6}\Omega\big{|}^{2}\tfrac{1}{\Sigma^{2\beta}}\mathsf{G}_{\Omega}+\alpha\int_{0}^{\mathsf{s}}\!\!\int\underbrace{\tfrac{1}{\Sigma^{2\beta}}\bigl{|}\widetilde{\mathsf{D}}^{6}\Omega\bigr{|}^{2}\Sigma\overline{\mathsf{Q}}_{1}}_{\geq 0\text{ due to (13.38c)}}\]
may be bounded exactly as in (8.6)-(8.8), which yields (13.49) and hence Proposition 13.10.

#### 13.11. Downstream energy estimates

At this stage in the proof, it only remains to close the bootstrap (13.37b) (see (5.37f)) for the sixth order energy \(\widetilde{\mathcal{E}}_{6}\) and damping \(\widetilde{\mathcal{D}}_{6}\) norms, defined earlier in (13.35a) and (13.36a). Previously, this was achieved by separately establishing a bound for the tangential parts of the energy \(\widetilde{\mathcal{E}}_{6,\tau}\) and damping \(\widetilde{\mathcal{D}}_{6,\tau}\) in Section 10, and the normal parts of the energy \(\widetilde{\mathcal{E}}_{6,\mathcal{N}}\) and damping \(\widetilde{\mathcal{D}}_{6,\mathcal{N}}\) in Section 12. In turn, these estimates required that we established improved energy bounds for six "pure time derivatives" in Section 11. For the downstream maximal development, we follow the same exact strategy. As before, the tangential bounds from Section 10 and normal energy estimates from Section 12 run in parallel, the only difference being that the fundamental variables are un-weighted for the tangential part (i.e.
\((\hat{\mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})\)) and are \(J_{g}\)-weighted for the normal part (i.e. \((J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\)). The special estimates for six pure time derivatives from Section 11 are used in the same way, to treat the remainders \(\mathcal{R}^{\mathcal{T}}_{\hat{\mathsf{Z}}}\) and \(\mathcal{R}^{\mathcal{N}}_{\hat{\mathsf{Z}}}\), in the \(\hat{\mathbf{Z}}_{\mathcal{T}}\), and respectively \(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), equations. Since the tangential and normal energy estimates run in parallel (similarities and differences may be seen by comparing Sections 10 and 12), we do not repeat both of these two sets of energy estimates for the downstream geometry. Indeed, the downstream modifications to the tangential component energy estimates are identical to the modifications made to the normal component energy estimates. For the sake of brevity, we choose to only give details for the downstream modifications to the normal energy estimates (see Section 13.12 below).

#### 13.11.1. Sixth order tangential energy estimates

For the tangential energy estimates, at this point we simply record that by repeating the arguments from Section 10, with the modifications outlined in Section 13.12 below (see the argument leading to (13.78)-(13.82)), similarly to (10.70)-(10.71), we obtain that there exists a constant \[\widehat{\mathfrak{c}}_{\alpha,\kappa_{0}}>0\,,\] which depends only on \(\alpha\) and also on \(\kappa_{0}\), and may be computed explicitly, such that
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}\mathfrak{J}^{\frac{1}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\bigl{\|}\mathfrak{J}^{\frac{1}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\tau},\hat{\mathbf{Z}}_{\tau},\hat{\mathbf{A}}_{\tau})(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}+\tfrac{1}{\varepsilon^{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}\mathfrak{J}^{\frac{1}{4}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon^{2}}\int_{0}^{\varepsilon}\bigl{\|}\mathfrak{J}^{-\frac{1}{4}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}\leq\widehat{\mathfrak{c}}_{\alpha,\kappa_{0}}\varepsilon\Bigl{(}\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}+\mathring{C}\varepsilon^{\frac{1}{2}}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\Bigr{)}\,. \tag{13.53}\]
Then, as in (10.72)-(10.75), upon ensuring that
\[\mathsf{B}_{6}\geq\max\{1,\mathsf{C}_{\mathsf{data}}\}\,, \tag{13.54}\]
and upon defining
\[\mathsf{K}:=8\max\{1,\widehat{\mathfrak{c}}_{\alpha,\kappa_{0}}^{\frac{1}{2}}\}\,, \tag{13.55}\]
by letting \(\varepsilon\) be sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we deduce from (13.53) that
\[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{6,\tau}^{2}(\mathsf{s})+\widetilde{\mathcal{D}}_{6,\tau}^{2}(\varepsilon)\leq\tfrac{1}{8}(\varepsilon\mathsf{K})^{2}\mathsf{B}_{6}^{2}\,. \tag{13.56}\]
This bound is the same as (10.75). It closes the "tangential part" of the remaining bootstrap (13.37b) for \(\widetilde{\mathcal{E}}_{6}\) and \(\widetilde{\mathcal{D}}_{6}\).

#### 13.11.2. Sixth order pure-time energy estimates

For the energy estimates concerning pure time derivatives, we record that by repeating the arguments from Section 11, with the modifications outlined in Section 13.12 below, the same bound as given in (11.2b) holds, namely
\[\varepsilon^{\frac{1}{2}}\bigl{\|}\mathfrak{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\bigr{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\bigl{\|}\mathfrak{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\bigr{\|}_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{13.57}\]

#### 13.12. Downstream energy estimates for normal components

It thus remains to outline the modifications to the normal energy estimates in Section 12, required for the downstream geometry. We continue to use the equation set (12.1) for \((J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\) in \((x,\mathsf{s})\) coordinates, with the operator \(\widetilde{\mathsf{D}}^{6}\) used only in the set \(\mathcal{P}_{\pm}^{\sharp}(\varepsilon)\), so that differentiation does not take place across the surface \(\Gamma(\varepsilon)\). According to definitions (13.35) and (13.36), the energy identity (12.5) is replaced with the downstream energy identity
\[0=I^{\hat{\mathsf{W}}_{n}}+I^{\hat{\mathsf{Z}}_{n}}+I^{\hat{\mathsf{A}}_{n}}\,, \tag{13.58}\]
obtained by testing the six-times differentiated equations (12.1) with \(\underline{\jmath}_{3}\)-weighted multiples of \(\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\) and integrating over \(\mathcal{P}_{\pm}^{\sharp}(\mathsf{s})\),
where once again \(\underline{\jmath}_{3}=\Sigma^{-2\beta+1}\) and \(\beta=\beta(\alpha,\kappa_{0})>0\) is a sufficiently large constant chosen in the proof (see (13.79)). Here and throughout the remainder of the section we use the integral notation \[\int\text{ to denote }\int_{\mathcal{X}^{\sharp}_{\pm}(\mathsf{s})}\mathrm{d}x\,, \qquad\text{and}\qquad\int_{0}^{\mathsf{s}}\!\!\!\int\text{ to denote }\int_{0}^{\mathsf{s}}\!\!\!\int_{\mathcal{X}^{\sharp}_{\pm}(\mathsf{s}^{ \prime})}\mathrm{d}x\mathrm{d}\mathsf{s}^{\prime}=\int_{\mathcal{P}^{\sharp}_ {\pm}(\mathsf{s})}\mathrm{d}x\mathrm{d}\mathsf{s}^{\prime}\,. \tag{13.59}\] With this notation, we frequently appeal to (13.32), (13.34), and to the adjoint formulae (13.25). 12.1. The additive decompositions of integrals \(I^{\hat{W}_{n}}\), \(I^{\hat{Z}_{n}}\), and \(I^{\hat{A}_{n}}\) In analogy to (12.6), we additively decompose the integral \(I^{\hat{W}_{n}}\) as \[I^{\hat{W}_{n}} =I^{\hat{W}_{n}}_{1}+I^{\hat{W}_{n}}_{3}+I^{\hat{W}_{n}}_{4}+I^{ \hat{W}_{n}}_{5}+I^{\hat{W}_{n}}_{6}\,,\] \[I^{\hat{W}_{n}}_{1} =\int_{0}^{\mathsf{s}}\!\!\int\!\!\frac{1}{\Sigma^{2\beta}}g^{ \frac{1}{2}}J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{ \mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,\widetilde{\mathsf{D}}^{ 6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,, \tag{13.60a}\] \[I^{\hat{W}_{n}}_{3} =\alpha\int_{0}^{\mathsf{s}}\!\!\!\int\!\!\frac{1}{\lambda}g^{ \frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{ D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,\widetilde{\mathsf{D}}^{6}(J_{g} \hat{\mathbf{W}}_{\mathcal{N}})\,,\] (13.60b) \[I^{\hat{W}_{n}}_{4} =-\alpha\int_{0}^{\mathsf{s}}\!\!\!\int_{0}\!\!\!\int_{3}g^{ \frac{3}{2}}J_{g}g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{N}}\widetilde{ \mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}\,\widetilde{\mathsf{D}}^{6}(J_{g }\hat{\mathbf{W}}_{\mathcal{N}})\,,\] (13.60c) \[I^{\hat{W}_{n}}_{5} =-\tfrac{\alpha}{2}\int_{0}^{\mathsf{s}}\!\!\!\int_{3}g^{ \frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g} \hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\mathcal{T}}) \widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau_{\mathcal{N}}\, \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,,\] (13.60d) \[I^{\hat{W}_{n}}_{6} =-\tfrac{\alpha}{2}\int_{0}^{\mathsf{s}}\!\!\!\int_{3}g^{\frac{3} {2}}J_{g}\big{(}\widetilde{\mathsf{D}}^{6}\mathsf{F}^{\mathsf{N}}_{\mathsf{W}} +\mathcal{R}^{\mathcal{N}}_{\hat{\mathbf{W}}}+\mathcal{C}^{\mathcal{N}}_{ \hat{\mathbf{W}}}\big{)}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{ \mathcal{N}})\,, \tag{13.60e}\] where we have used the notation in (13.59). 
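Schematically, and under the same conventions as in (13.45)-(13.47) (the weight \(w\) below is a placeholder for the actual combinations appearing in (13.60), which is an assumption made for readability), each of the \(I_{1}\)-type integrals above is processed by passing the transport operator onto the square of the top-order unknown,
\[\int_{0}^{\mathsf{s}}\!\!\int w\,(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})F\cdot F=\tfrac{1}{2}\int_{0}^{\mathsf{s}}\!\!\int w\,(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})|F|^{2}\,,\]
and then, for \(w=\tilde{\mathcal{J}}^{r}\), applying (13.47) with \(f=\tfrac{1}{2}|F|^{2}\): this produces the temporal boundary term \(\tfrac{1}{2}\int\tilde{\mathcal{J}}^{r}\mathsf{Q}|F|^{2}\), the favorable damping term \(\tfrac{r(1+\alpha)}{5\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\tilde{\mathcal{J}}^{r-1}|F|^{2}\), and a lower-order term that is absorbed via Gronwall's inequality.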
Next, in analogy to (12.7), we additively decompose the integral \(I^{\hat{\mathsf{Z}}_{n}}\) as
\[I^{\hat{\mathsf{Z}}_{n}}=I_{1}^{\hat{\mathsf{Z}}_{n}}+I_{2}^{\hat{\mathsf{Z}}_{n}}+I_{3}^{\hat{\mathsf{Z}}_{n}}+I_{4}^{\hat{\mathsf{Z}}_{n}}+I_{5}^{\hat{\mathsf{Z}}_{n}}+I_{6}^{\hat{\mathsf{Z}}_{n}}+I_{7}^{\hat{\mathsf{Z}}_{n}}+I_{8}^{\hat{\mathsf{Z}}_{n}}+I_{9}^{\hat{\mathsf{Z}}_{n}}+I_{10}^{\hat{\mathsf{Z}}_{n}}\,,\]
\[I_{1}^{\hat{\mathsf{Z}}_{n}}=\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}g^{\frac{1}{2}}J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61a)
\[I_{2}^{\hat{\mathsf{Z}}_{n}}=-\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big{(}\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{|}^{2}\,,\] (13.61b)
\[I_{3}^{\hat{\mathsf{Z}}_{n}}=-\alpha\int_{0}^{\mathsf{s}}\!\!\int\underline{\jmath}_{3}\,\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61c)
\[I_{4}^{\hat{\mathsf{Z}}_{n}}=\alpha\int_{0}^{\mathsf{s}}\!\!\int\underline{\jmath}_{3}\,\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{N}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61d)
\[I_{5}^{\hat{\mathsf{Z}}_{n}}=\tfrac{\alpha}{2}\int_{0}^{\mathsf{s}}\!\!\int\underline{\jmath}_{3}\,\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g}g^{-\frac{1}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61e)
\[I_{6}^{\hat{\mathsf{Z}}_{n}}=-\tfrac{2\alpha}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\underline{\jmath}_{3}\,\tilde{\mathcal{J}}^{\frac{3}{2}}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61f)
\[I_{7}^{\hat{\mathsf{Z}}_{n}}=-\tfrac{2\alpha}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\underline{\jmath}_{3}\,\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\tau})\big{(}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}+\cdots\big{)}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,,\] (13.61g)
with the integrals \(I_{8}^{\hat{\mathsf{Z}}_{n}}\), \(I_{9}^{\hat{\mathsf{Z}}_{n}}\), and \(I_{10}^{\hat{\mathsf{Z}}_{n}}\) (labeled (13.61h)-(13.61j)) given by the corresponding expressions in (12.7), written in the notation (13.59). Lastly, in analogy to (12.8), we additively decompose \(I^{\hat{\mathsf{A}}_{n}}=I_{1}^{\hat{\mathsf{A}}_{n}}+\cdots+I_{10}^{\hat{\mathsf{A}}_{n}}\), where the individual terms (labeled (13.62a)-(13.62j)) are given by the corresponding expressions in (12.8), written in the notation (13.59) and with the weight \(\mathcal{J}\) replaced by \(\tilde{\mathcal{J}}\); for instance,
\[I_{2}^{\hat{\mathsf{A}}_{n}}=-\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big{(}(1+\alpha)J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+(1-\alpha)J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\tilde{\mathcal{J}}^{\frac{3}{2}}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{|}^{2}\,. \tag{13.62b}\]

#### 13.12.2. Downstream modifications to the exact-derivative terms

The identity (12.13) for \(I_{9}^{\hat{\mathsf{Z}}_{n}}+I_{9}^{\hat{\mathsf{A}}_{n}}\) remains the same. The lower bound (12.14) for \(\mathsf{G}_{2}\) and (12.15) for the temporal boundary term remain unchanged (except for \(\mathcal{J}^{\frac{1}{2}}\mapsto\tilde{\mathcal{J}}^{\frac{1}{2}}\)). The identity (12.16) for \(I_{2}^{\hat{\mathsf{Z}}_{n}}+I_{2}^{\hat{\mathsf{A}}_{n}}\) remains the same, and the damping coefficient \(\mathsf{G}_{3}\) satisfies the same lower bound as in (12.17), except for the usual modification \(\mathcal{J}^{\frac{3}{2}}\mapsto\tilde{\mathcal{J}}^{\frac{3}{2}}\). At last, the identity (12.18) for \(I_{3}^{\hat{\mathsf{W}}_{n}}+I_{3}^{\hat{\mathsf{Z}}_{n}}+I_{3}^{\hat{\mathsf{A}}_{n}}\) remains the same, except that the weight function is \(\tilde{\mathcal{J}}^{\frac{3}{2}}\) instead of \(\mathcal{J}^{\frac{3}{2}}\). With the definition of \(\mathsf{G}_{\mathsf{good}}\) in (13.63), and with the updated lower bound (13.66), the estimate (12.19) summarizing the contribution of all exact derivative terms becomes
\[I_{1}^{\hat{\mathsf{W}}_{n}}+I_{1}^{\hat{\mathsf{Z}}_{n}}+I_{1}^{\hat{\mathsf{A}}_{n}}+I_{2}^{\hat{\mathsf{Z}}_{n}}+I_{2}^{\hat{\mathsf{A}}_{n}}+I_{3}^{\hat{\mathsf{W}}_{n}}+I_{3}^{\hat{\mathsf{Z}}_{n}}+I_{6}^{\hat{\mathsf{A}}_{n}}+I_{6}^{\hat{\mathsf{Z}}_{n}}+I_{9}^{\hat{\mathsf{A}}_{n}}\]
\[\geq\big{(}\tfrac{1}{2}-\hat{C}\varepsilon\big{)}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}-\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,0)\big{\|}_{L^{2}_{x}}^{2}\]
\[\quad+\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big{(}\mathsf{G}_{\mathsf{good}}-\hat{C}\beta\tilde{\mathcal{J}}^{\frac{1}{2}}J_{g}\big{)}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{|}^{2}+\Big{(}\tfrac{\alpha(\beta-\frac{1}{2})-40\alpha\kappa_{0}}{8\varepsilon}+\tfrac{9(1+\alpha)}{20\varepsilon}\Big{)}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad-\Big{(}\tfrac{16\alpha(\beta-\frac{1}{2})}{(1+\alpha)\varepsilon}+\tfrac{17+2\cdot 250^{2}}{\varepsilon}\Big{)}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}-\tfrac{25\varepsilon^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{13.67}\]
where \(\hat{C}=\hat{C}(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\) is a positive constant independent of \(\beta\).

#### 13.12.3. Downstream update to geometric lemmas

In order to deal with terms that contain over-differentiated geometry, we next generalize Lemmas 12.1 and 12.2 to the geometry of \(\mathcal{P}^{\sharp}_{\pm}(\mathsf{s})\).

**Lemma 13.11**.: _For a function \(f(x,\mathsf{s})\), we have that_
\[\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}\right|+\left|\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\cdot\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}^{6}J_{g}\right|\lesssim\varepsilon^{3}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\Big{(}\big{\|}\mathcal{J}^{\frac{1}{2}}\widetilde{\mathsf{D}}f\big{\|}_{L^{\infty}_{x,\mathsf{s}}}+\big{\|}\mathcal{J}^{-\frac{1}{2}}f\big{\|}_{L^{\infty}_{x,\mathsf{s}}}\Big{)}\,. \tag{13.68}\]

We note that when compared to (12.20), the bound (13.68) is missing a factor of \(\varepsilon\) next to \(\|\mathcal{J}^{-\frac{1}{2}}f\|_{L^{\infty}_{x,\mathsf{s}}}\). This helpful factor of \(\varepsilon\) was however never used in any application of (12.20), so that we may use (13.68) as is in all energy estimates.

Proof of Lemma 13.11.: We explain the modification of the proof of Lemma 12.1 that leads to the inequality (13.68). In particular, we explain the modifications to (12.22) which occur when replacing the formula (5.28b) with the new identity (13.25b).
The identity (12.22) is replaced with
\[\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\,\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}J_{g}=-\tfrac{1}{2\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\widetilde{\mathsf{D}}_{1}(f\,g^{\frac{1}{2}})\,(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}-\tfrac{1}{2}\int_{0}^{\mathsf{s}}\!\!\int\Big{(}(\mathring{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})(f\,J_{g}\widetilde{\mathsf{D}}_{2}h)-\mathring{\mathsf{Q}}_{1}f\,g^{\frac{1}{2}}\Big{)}\,(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\]
\[\quad+\tfrac{1}{2}\int\Big{(}\overline{\mathsf{Q}}_{2}f\,J_{g}\widetilde{\mathsf{D}}_{2}h-\overline{\mathsf{Q}}_{1}f\,g^{\frac{1}{2}}\Big{)}(\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau)^{2}\Big{|}_{\mathsf{s}}-\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\,g^{\frac{1}{2}}\,\tau\cdot\widetilde{\mathsf{D}}_{1}\mathcal{N}\,\tau\cdot\widetilde{\mathsf{D}}^{6}\tau+\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int f\,\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\tau\,\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}(g^{\frac{1}{2}}\mathcal{N})\,\mathcal{N}\cdot\widetilde{\mathsf{D}}_{1}\tau+\cdots\,,\]
where the omitted terms are treated exactly as their counterparts in (12.22); the stated bound (13.68) then follows as in the proof of Lemma 12.1, now using the coefficient bounds (13.38).

**Lemma 13.12**.: _For a function \(f(x,\mathsf{s})\), the analogue of the bound of Lemma 12.2 holds, with \(\mathcal{J}\) replaced by \(\mathfrak{J}\) and with the spacetime replaced by \(\mathcal{P}^{\sharp}_{\pm}(\mathsf{s})\)._

Repeating the arguments of Section 12.7 with Lemmas 13.11 and 13.12 in place of Lemmas 12.1 and 12.2, we obtain a bound which precisely matches (12.41). The analysis of \(I^{\hat{\mathsf{A}}_{n}}_{\tau,a}\) requires the decomposition in (12.42) (with \(\mathcal{J}\) replaced by \(\mathfrak{J}\)). Among the eight terms appearing in the decomposition (12.42), the last five terms require no modifications beyond those already given by Lemmas 13.11 and 13.12, so that the bounds (12.43)-(12.45) remain the same. To see this, consider for instance a term which involves the \(\widetilde{\mathsf{D}}_{1}\) operator, such as \(I^{\hat{\mathsf{A}}_{n}}_{\tau,a,vi}\). For this term we form an exact derivative and integrate-by-parts with respect to \(\widetilde{\mathsf{D}}_{1}\) using (13.25b) to obtain that
\[-\tfrac{\alpha}{2\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int f\,(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})\,\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\;J_{g}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}=\tfrac{\alpha}{4\varepsilon}\int_{0}^{\mathsf{s}}\!\!\int\big{(}\widetilde{\mathsf{D}}_{1}-\varepsilon\mathring{\mathsf{Q}}_{1}\big{)}\Big{(}f\,(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}-2J_{g}\hat{\mathbf{A}}_{\tau})J_{g}\Big{)}(\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N})^{2}+\cdots\,,\]
with the omitted terms estimated exactly as in (12.43)-(12.45). This procedure, applied to the decomposition (13.71) of \(I_{4}^{\hat{\mathsf{A}}_{n}}\) (the downstream analogue of (12.69)), produces the terms \(J_{i}^{\hat{\mathsf{A}}_{n}}\); in particular, \(J_{1}^{\hat{\mathsf{A}}_{n}}\) is decomposed as in (13.72). The last two terms on the right side of (13.72) are standard, while the first one, containing the "anti-damping term" \(\mathsf{G}_{\mathsf{bad}}\), needs to be combined with the "damping term" containing \(\mathsf{G}_{\mathsf{good}}\), which is already present in (13.67). Recalling the definition of \(\mathsf{G}_{\mathsf{good}}\) from (13.63), and using the definition of \(\mathsf{G}_{\mathsf{bad}}\) from (13.72), we next claim that
\[\mathsf{G}_{\mathsf{good}}+\mathsf{G}_{\mathsf{bad}}=-\tfrac{1}{2}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g})+\tilde{\mathcal{J}}^{\frac{3}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}=\tfrac{1}{2}\Big{(}\tilde{\mathcal{J}}^{\frac{3}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}-J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\tilde{\mathcal{J}}^{\frac{3}{2}}\Big{)}\geq\tfrac{1+\alpha}{16\varepsilon}\tilde{\mathcal{J}}^{\frac{1}{2}}J_{g}\,, \tag{13.73}\]
which serves as the replacement of the lower bound (12.69d) in our downstream analysis. To see that (13.73) holds, we consider separately the regions \(\mathcal{P}^{\sharp}_{-}(\varepsilon)\) and \(\mathcal{P}^{\sharp}_{+}(\varepsilon)\). For \((x,\mathsf{s})\in\mathcal{P}^{\sharp}_{-}(\varepsilon)\), by construction (see (13.6)) we have that \(\tilde{\mathcal{J}}=\mathcal{J}\), and so we can apply (6.65) to obtain the lower bound \(\tfrac{1+\alpha}{16\varepsilon}\mathcal{J}^{\frac{1}{2}}J_{g}=\tfrac{1+\alpha}{16\varepsilon}\tilde{\mathcal{J}}^{\frac{1}{2}}J_{g}\), which is identical to (12.69d), and is consistent with (13.73). For \((x,\mathsf{s})\in\mathcal{P}^{\sharp}_{+}(\varepsilon)\), we cannot directly appeal to (6.65), and instead need to revisit the proof of this bound, which we detail as follows.
Using the bounds \(0<\mathcal{J}\leq J_{g}\), the identities (13.12), (13.18), (13.42a), the bootstrap assumptions (13.37a), and the estimates (6.17a) and (13.38a), we deduce \[\tfrac{1}{2}\mathcal{J}^{\tfrac{1}{2}} \Big{(}\mathcal{J}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2} )J_{g}-\tfrac{3}{2}J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}) \mathcal{J}\Big{)}\] \[=\tfrac{1}{2}\mathcal{J}^{\tfrac{3}{2}}\Big{(}\mathcal{J}\big{(} \tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{ g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}+\tfrac{3}{2}J_{g}\tfrac{\mathsf{Q}}{ \varepsilon}\Big{)}\] \[\geq\tfrac{1}{2}\mathcal{J}^{\tfrac{1}{2}}\Big{(}-\mathcal{J} \big{(}\tfrac{1+\alpha}{2\varepsilon}+\mathring{C}\big{)}+\tfrac{3}{2}J_{g} \big{(}\tfrac{\mathring{\mathsf{Q}}}{\varepsilon}-\mathring{C}\big{)}\Big{)}\] \[\geq\tfrac{1}{2}\mathcal{J}^{\tfrac{1}{2}}\Big{(}-J_{g}\big{(} \tfrac{1+\alpha}{2\varepsilon}+\mathring{C}\big{)}+\tfrac{3}{2}J_{g}\big{(} \tfrac{17(1+\alpha)}{40\varepsilon}-\mathring{C}\big{)}\Big{)}=\tfrac{1}{2} \mathcal{J}^{\tfrac{1}{2}}\Big{(}J_{g}\tfrac{11(1+\alpha)}{80\varepsilon}- \mathring{C}J_{g}\Big{)}\geq\mathcal{J}^{\tfrac{1}{2}}J_{g}\tfrac{1+\alpha}{16 \varepsilon}\,. \tag{13.74}\] The above bound is identical to (6.65), and concludes the proof of (13.73). Indeed, combining (13.72)-(13.74), we arrive at \[J_{1}^{\hat{\mathsf{A}}_{n}}\geq\int_{0}^{\mathsf{s}}\!\!\!\int \!\!\!\int\!\!\!\int\frac{1}{\Sigma^{2\beta}}\big{(}\tfrac{1+\alpha}{24 \varepsilon}\mathcal{J}_{g}^{\tfrac{1}{2}}J_{g}-\mathsf{G}_{\mathsf{good}} \big{)}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}} )\big{|}^{2}-\mathring{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{ 6}\rangle^{2}-\tfrac{12(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\!\!\!\big{\|} \tfrac{\mathcal{J}^{\tfrac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}( J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2} \mathrm{d}\mathsf{s}^{\prime}\,. \tag{13.75}\] The above bound is identical to the one obtained earlier in Section 12.7.5. We return now to the terms \(J_{3}^{\hat{\mathsf{A}}_{n}}\) and \(J_{6}^{\hat{\mathsf{A}}_{n}}\) given in (13.71). For \(J_{3}^{\hat{\mathsf{A}}_{n}}\) the trick is to again rewrite \(\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=\tfrac{2}{1+ \alpha}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6 }J_{g}+\tfrac{2}{1+\alpha}\big{(}\widetilde{\mathsf{D}}^{6}V\widetilde{ \mathsf{D}}_{2}J_{g}+\big{(}\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D} }_{2}J_{g}\big{)}\big{)}-\tfrac{1-\alpha}{1+\alpha}\widetilde{\mathsf{D}}^{6}( J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\), leading to a decomposition identical (except for changing \(\mathcal{J}\) to \(\mathcal{J}\)) to (12.70), because the operators \(\partial_{1}\) or \(\widetilde{\mathsf{D}}_{1}\) are not involved here. Since in the downstream development we still have \((\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}=-\tfrac{\mathsf{Q}}{ \varepsilon}\), and since \(\mathsf{Q}\) satisfies the bound (13.38b), which is identical to (6.38g), the bounds (12.71) for the nine terms appearing in the decomposition (12.70) of \(J_{3}^{\hat{\mathsf{A}}_{n}}\) remain **as is**. This means that we may use precisely the same Cauchy-Young inequality for the \(J_{6}^{\hat{\mathsf{A}}_{n}}\) term as in (12.72). 
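The margin in the last step of (13.74) is worth making explicit; the computation below uses nothing beyond the constants displayed there:
\[-\tfrac{1+\alpha}{2}+\tfrac{3}{2}\cdot\tfrac{17(1+\alpha)}{40}=\tfrac{(1+\alpha)(-40+51)}{80}=\tfrac{11(1+\alpha)}{80}\,,\]
and since \(\tfrac{1}{2}\cdot\tfrac{11}{80}=\tfrac{11}{160}>\tfrac{1}{16}=\tfrac{10}{160}\), the final inequality in (13.74) holds with a margin of \(\tfrac{1+\alpha}{160\varepsilon}\,\tilde{\mathcal{J}}^{\frac{1}{2}}J_{g}\), which absorbs the \(\mathring{C}J_{g}\) error terms once \(\varepsilon\) is taken sufficiently small.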
Together with (13.75), we may summarize the lower bound for \(I_{4}^{\hat{\mathsf{A}}_{n}}\) as
\[I_{4}^{\hat{\mathsf{A}}_{n}}\geq-\mathring{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}-\tfrac{1+\alpha}{\varepsilon}\big{(}1+\tfrac{132}{(1+\alpha)^{4}}\big{)}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{data}}^{2}+\int_{0}^{\mathsf{s}}\!\!\int\tfrac{1}{\Sigma^{2\beta}}\big{(}\tfrac{1+\alpha}{24\varepsilon}\tilde{\mathcal{J}}^{\frac{1}{2}}J_{g}-\mathsf{G}_{\mathsf{good}}\big{)}\big{|}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\big{|}^{2}-\tfrac{(12+25^{2})(1+\alpha)}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad+\tfrac{1}{20(1+\alpha)}\tfrac{1}{\varepsilon}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L_{x}^{2}}^{2}-\tfrac{20^{2}+500^{2}+100\cdot 250^{2}+\varepsilon\mathring{C}}{1+\alpha}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\big{(}8-\mathring{C}\langle\beta\rangle\varepsilon\big{)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L_{x}^{2}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{13.76}\]
Next, we consider the downstream modifications to the analysis of \(I_{7}^{\hat{\mathsf{Z}}_{n}}\), which is decomposed as \(I_{7,a}^{\hat{\mathsf{Z}}_{n}}+I_{7,b}^{\hat{\mathsf{Z}}_{n}}+I_{7,c}^{\hat{\mathsf{Z}}_{n}}+I_{7,d}^{\hat{\mathsf{Z}}_{n}}\) in analogy with (12.74); in particular,
\[I_{7,d}^{\hat{\mathsf{Z}}_{n}}=\text{the old }I_{7,d}^{\hat{\mathsf{Z}}_{n}}\text{ from (12.74)}+2\alpha\int\overline{\mathsf{Q}}_{1,b}\,\tilde{\mathcal{J}}^{\frac{3}{2}}J_{g}(\hat{\mathbf{A}}_{\mathcal{N}}+\hat{\mathbf{Z}}_{\tau})\,\widetilde{\mathsf{D}}^{6}\tau\cdot\mathcal{N}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big{|}_{\mathsf{s}}\,.\]
By appealing to (13.18), (13.37), (13.38c), (13.38d), and (13.43), we see that the bound (12.75) for \(|I_{7,b}^{\hat{\mathsf{Z}}_{n}}+I_{7,c}^{\hat{\mathsf{Z}}_{n}}+I_{7,d}^{\hat{\mathsf{Z}}_{n}}|\) remains the same. It thus remains to consider the term \(I_{7,a}^{\hat{\mathsf{Z}}_{n}}\), which is decomposed in eight parts according to (12.76a). By once again employing the updated inequalities (13.68) and (13.70), using the exact-derivative structure, the updated adjoint identities (13.25), and the updated coefficient (13.38) and geometry (13.43) bounds, we find that following step by step the procedure outlined in (12.77)-(12.80), we arrive at the bound \(|I_{7,a}^{\hat{\mathsf{Z}}_{n}}|\lesssim\mathsf{K}\varepsilon(\frac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\). Putting this together with the bound for \(|I_{7,b}^{\hat{\mathsf{Z}}_{n}}+I_{7,c}^{\hat{\mathsf{Z}}_{n}}+I_{7,d}^{\hat{\mathsf{Z}}_{n}}|\) discussed earlier, we arrive at the same conclusion as (12.81), namely that
\[\big{|}I_{7}^{\hat{\mathsf{Z}}_{n}}\big{|}\lesssim\mathsf{K}\varepsilon(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{13.77}\]
The only terms with over-differentiated geometry which remain to be discussed are \(I_{8}^{\hat{\mathsf{Z}}_{n}}\) (see (13.61h)) and \(I_{5}^{\hat{\mathsf{A}}_{n}}\) (see (13.62e)). A close inspection of the analysis of these terms in Sections 12.7.8 and 12.7.7, respectively, reveals that the bounds (12.83) and (12.82) remain unchanged.

#### 13.12.6. Downstream modifications to the forcing and commutator terms

A close inspection of the analysis of the forcing, remainder, and commutator terms from Section 12.8 (leading to the bounds for the integrals \(I_{6}^{\hat{\mathsf{W}}_{n}}\), \(I_{10}^{\hat{\mathsf{Z}}_{n}}\), and \(I_{10}^{\hat{\mathsf{A}}_{n}}\)) shows that no modification is required, and that the bounds (12.84a), (12.84b), and (12.84c) hold as is (with the weight \(\mathcal{J}\) being replaced by \(\tilde{\mathcal{J}}\)).

#### 13.12.7. Conclusion of the downstream normal component estimates

We collect the bounds for the integrals that required downstream modifications. Combining (13.58) with the downstream modified bound (13.67), and with the unmodified bounds discussed in Sections 13.12.4, 13.12.5, and 13.12.6, exactly as in (12.86) we conclude that
\[0\geq\big{(}\tfrac{1}{52}-\hat{C}\varepsilon\big{)}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}-\tfrac{1+\alpha}{\varepsilon}\big{(}2+\tfrac{132}{(1+\alpha)^{4}}\big{)}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{data}}^{2}-\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\]
\[\quad+\Big{(}\tfrac{1+\alpha}{24}-\hat{C}\varepsilon\beta\Big{)}\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\Big{(}\tfrac{\alpha(\beta-\frac{1}{2})-40\alpha\kappa_{0}}{8}+\tfrac{9(1+\alpha)}{20}-(16+25^{2})(1+\alpha)\Big{)}\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad-\Big{(}\tfrac{16\alpha(\beta-\frac{1}{2})}{1+\alpha}+17+2\cdot 250^{2}+39^{2}+500^{2}\Big{)}\tfrac{1}{\varepsilon}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{3}{4}}(J_{g}\mathsf{Q})^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad+\tfrac{1}{20(1+\alpha)}\tfrac{1}{\varepsilon^{2}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}-\tfrac{20^{2}+500^{2}+100\cdot 250^{2}}{1+\alpha}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\]
\[\quad+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\big{(}8-\hat{C}\langle\beta\rangle\varepsilon\big{)}\tfrac{1}{\varepsilon^{3}}\int_{0}^{\mathsf{s}}\big{\|}\tfrac{\tilde{\mathcal{J}}^{-\frac{1}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,. \tag{13.78}\]
Choosing \(\beta=\beta_{\alpha,\kappa_{0}}\) sufficiently large, in terms of \(\alpha\) and \(\kappa_{0}\), so that the damping terms above carry favorable signs (this is the choice recorded in (13.79)), and rearranging, we arrive at the Gronwall-ready inequality recorded in (13.80),
\tag{13.82}\] Dropping the energy and damping terms for \(\widetilde{\mathsf{D}}^{6}J_{g}\) (since these were bounded already in Proposition 13.9), and recalling the definitions of \(\widetilde{\mathcal{E}}^{2}_{\widetilde{6},\mathcal{N}}(\mathsf{s})\) and \(\widetilde{\mathcal{D}}^{2}_{\widetilde{6},\mathcal{N}}(\mathsf{s})\) (in (13.35)-(13.36)), as in (12.91)-(12.93) we deduce that \[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}^{2}_{ \widetilde{6},\mathcal{N}}(\mathsf{s})+\widetilde{\mathcal{D}}^{2}_{\widetilde {6},\mathcal{N}}(\varepsilon)\leq\hat{\mathsf{c}}_{\alpha,\kappa_{0}}\Big{(} \mathsf{C}^{2}_{\mathsf{data}}+\hat{C}\varepsilon\langle\mathsf{B}_{6} \rangle^{2}\Big{)}\leq 2\hat{\mathsf{c}}_{\alpha,\kappa_{0}}\mathsf{C}^{2}_{ \mathsf{data}}\leq\tfrac{1}{8}\mathsf{B}^{2}_{6}\,, \tag{13.83}\] once \(\varepsilon\) is taken sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and \(\mathsf{B}_{6}\) is chosen sufficiently large in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\) to ensure that \(\mathsf{B}_{6}\geq\max\{1,\mathsf{C}_{\mathsf{data}}\}\) (see (13.54)) and \[\mathsf{B}_{6}\geq 4\hat{\mathsf{c}}_{\alpha,\kappa_{0}}^{\frac{1}{2}}\mathsf{C}_{ \mathsf{data}}\,. \tag{13.84}\] The choice (13.84) is the downstream-modified version of (12.92), and this closes the proof of "normal part" of the remaining bootstrap (13.37b) for \(\mathcal{E}_{6}\) and \(\mathcal{D}_{6}\). #### Closing the bootstrap for the sixth order energy Combining (13.83) with (13.56) we arrive at the same inequality as obtained in (12.94) \[\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{6}( \mathsf{s})+\widetilde{\mathcal{D}}_{6}(\varepsilon)\leq\tfrac{1}{2}\mathsf{B }_{6}\,, \tag{13.85}\] which closes the bootstrap (13.37b) (cf. (5.37r)) in the downstream coordinate system (13.8). ### 14. Upstream maximal development #### The slow acoustic characteristic surface Upstream of the pre-shock, the maximal development of the Cauchy data is limited by the unique slow acoustic characteristic surface passing through the co-dimension \(2\) surface of pre-shocks \(\Xi^{*}\). With respect to our fast-characteristic-geometry, the slow acoustic characteristic flow map \(\Upsilon=(\Upsilon_{1},\Upsilon_{2})\) evolves according to \[\partial_{t}\Upsilon_{1}(x,t)=-2\alpha(\Sigma J_{g}^{-1})(\Upsilon(x,t),t))\,, \quad\partial_{t}\Upsilon_{2}(x,t)=(V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})( \Upsilon(x,t),t)\,. \tag{14.1}\] As can be seen from equations (3.28) and (3.34), the vector \((-2\alpha\Sigma J_{g}^{-1}\,,V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2})\) is the slow acoustic transport velocity associated to the wave speed \(\lambda_{1}\) in (1.3), but written in the frame of the fast acoustic geometry. Recalling Definition 6.6, the set of pre-shocks is given by \[\Xi^{*}:=\Big{\{}\big{(}\hat{x}_{1}(x_{2}),x_{2},t^{*}(x_{2})\big{)}\colon x_{2} \in\mathbb{T}\Big{\}}\,.\] We shall at first be concerned with the specific slow characteristic surface that _passes though the pre-shock_\(\Xi^{*}\). This surface consists of the union of trajectories of \(\Upsilon\) with starting position along \(\Xi^{*}\). The variable \(t\) in (14.1) denotes the flow time of the dynamical system, with initial time \(t=t^{*}(x_{2})\) corresponding to the time of intersection with \(\Xi^{*}\). 
As such, the image of the pre-shock \(\Xi^{*}\) by the flow of the slow acoustic characteristic is given by the green surface in Figure 15, namely \(\Upsilon(\Xi^{*},t)\) for \(t^{*}(x_{2})\leq t\leq\mathsf{t}_{\mathsf{fin}}\). Here, as in the previous sections, we use the notation \[\mathsf{t}_{\mathsf{in}}:=-\tfrac{2}{1+\alpha}\varepsilon\,, \tag{14.2a}\] for the initial time, and \[\mathsf{t}_{\mathsf{fin}}:=\tfrac{2}{1+\alpha}\varepsilon\cdot\tfrac{1}{50}\,, \tag{14.2b}\] for the final time. ### Parameterizing the slow acoustic characteristic through the pre-shock as a graph It is important in our analysis to parameterize this _distinguished_ slow-acoustic characteristic surface passing through \(\Xi^{*}\) as the graph \(x_{1}=\theta(x_{2},t)\) over the \((x_{2},t)\)-plane. The dynamics of \(\theta(x_{2},t)\) are determined from the dynamics of the flow map \(\Upsilon\) as \[\partial_{t}\theta=\big{(}\partial_{t}\Upsilon\circ\Upsilon^{-1}\circ\theta \big{)}\cdot\widetilde{N}\,,\] where the normal vector \(\widetilde{N}\) to the surface \((\theta(x_{2},t),x_{2})\) is given by \(\widetilde{N}=(1,-\theta_{,2}\,)\). As such, we compute that \[\partial_{t}\theta=-2\alpha\big{(}\Sigma J_{s}^{-1}\big{)}\circ\theta-\big{(} V+2\alpha\Sigma g^{-\frac{1}{2}}h_{,2}\big{)}\circ\theta\ \theta_{,2}. \tag{14.3}\] The surface \(x_{1}=\theta(x_{2},t)\) is a graph-type reparamateration of the distinguished slow acoustic characteristic surface passing through the pre-shock, and hence must verify the constraint \[\theta(x_{2},t^{*}(x_{2}))=\hat{x}_{1}(x_{2})\,. \tag{14.4}\] The graph \(x_{1}=\theta(x_{2},t)\) will play the role of the "right spatial boundary" for our spacetime. The characteristic surface given by the graph \(x_{1}=\theta(x_{2},t)\) can be alternatively reparameterized as the graph \(t=\Theta(x_{1},x_{2})\), where for each \(x_{2}\) fixed, \(\Theta\) is the inverse of \(\theta\), i.e. \[\Theta(\theta(x_{2},t),x_{2})=t\,. \tag{14.5}\] By differentiating the identity (14.5) with \(\partial_{t}\), applying the chain-rule, and using that \(x_{1}=\theta(x_{2},t)\), we have that \[\partial_{1}\Theta(x_{1},x_{2})\partial_{t}\theta(x_{2},t)=1\,.\] Substituting the dynamics (14.3) into this relation, we obtain that \[\Big{(}2\alpha\big{(}\Sigma J_{s}^{-1}\big{)}(\theta(x_{2},t),x_{2},t)+\big{(} V+2\alpha\Sigma g^{-\frac{1}{2}}\partial_{2}h\big{)}(\theta(x_{2},t),x_{2},t) \partial_{2}\theta(x_{2},t)\Big{)}\partial_{1}\Theta(x_{1},x_{2})=-1\,. \tag{14.6}\] Next, we differentiating the identity (14.5) with \(\partial_{2}\) and again use that \(x_{1}=\theta(x_{2},t)\); this yields the identity \[\partial_{1}\Theta(x_{1},x_{2})\partial_{2}\theta(x_{2},\mathsf{s})=-\partial _{2}\Theta(x_{1},x_{2})\,. \tag{14.7}\] Substitution of (14.7) into (14.6) together with the fact that \(t=\Theta(x_{1},x_{2})\) then shows that \[\partial_{1}\Theta(x)=-\tfrac{J_{g}}{2\alpha\Sigma}(x_{1},x_{2},\Theta(x))+\big{(} \tfrac{J_{g}V}{2\alpha\Sigma}+g^{-\frac{1}{2}}\partial_{2}hJ_{g}\big{)}(x, \Theta(x))\partial_{2}\Theta(x)\,. \tag{14.8}\] From (14.4), the reparameterization \(\Theta\) verifies the boundary condition \[\Theta(\hat{x}_{1}(x_{2}),x_{2})=t^{*}(x_{2})\,.\] There is an important observation to make about the "evolution equation" (14.8) for the slow acoustic characteristic surface \(\Theta\) passing through the pre-shock. 
Note that with the inversion of \(\theta\), the parameterization \(\Theta\) defines the evolution of the slow acoustic characteristic surface via \(J_{g}\) rather than \(J_{g}^{-1}\) (as was the case in (14.3)). Avoiding the degeneracy of \(J_{g}^{-1}\) at the pre-shock maintains our smooth analysis. This distinguished slow characteristic surface denotes the _future temporal boundary_ of the spacetime for the upstream maximal development of the Cauchy data. For technical reasons, it is convenient to use an arbitrarily small perturbation of this characteristic surface, and we shall explain this approximation in what follows. ### A foliation of spacetime by a family of approximate \(1\)-characteristic surfaces Fix an arbitrary \[\delta\in(0,\tfrac{1}{2})\,. \tag{14.9}\] We define a family of approximate \(1\)-characteristic surfaces \(\Theta^{\delta}(x_{1},x_{2},t)\) (they would not be "approximate" if \(\delta=0\)) as follows. For \(\mathfrak{t}_{\text{in}}\leq t\leq t^{*}(x_{2})\), we define \(\Theta^{\delta}\) as the solution of the Cauchy problem \[\partial_{1}\Theta^{\delta}(x,t)=-\tfrac{(1-\delta)J_{g}}{2 \alpha\Sigma}(x,\Theta^{\delta}(x,t))\] \[\qquad\qquad\qquad\qquad\qquad\times\left(1-\big{(}V+\tfrac{2 \alpha}{(1-\delta)}\Sigma g^{-\frac{1}{2}}h_{,2}\big{)}(x,\Theta^{\delta}(x,t ))\big{(}\partial_{2}\Theta^{\delta}(x,t)-\tfrac{\partial_{2}\mathcal{B}}{ \partial_{t}\mathcal{B}}(x_{2},t)\partial_{t}\Theta^{\delta}(x,t)\big{)} \right), \tag{14.10a}\] \[\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t)=t\,,\qquad\text{for each $t\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2})]$}\,, \tag{14.10b}\] where \[\mathcal{B}(x_{2},t)=\overline{J}_{g}(\hat{x}_{1}(x_{2}),x_{2},t)\,. \tag{14.10c}\] The solution \(\Theta^{\delta}\) of (14.10) is defined (see Section 14.5 below) on the domain \[\hat{\Omega}_{\text{US},+}:=\big{\{}(x,t)\colon x_{2}\in\mathbb{T}\,, \mathfrak{t}_{\text{in}}\leq t<t^{*}(x_{2})\,,\mathfrak{X}_{1}^{-}(x_{2},t) \leq x_{1}\leq\mathfrak{X}_{1}^{+}(x_{2},t)\big{\}}\,\] (14.11a) where the "stopping-times" \[\mathfrak{X}_{1}^{-}\] and \[\mathfrak{X}_{1}^{+}\] are defined by \[\mathfrak{X}_{1}^{+} =\mathfrak{X}_{1}^{+}(x_{2},t)=\max\big{\{}x_{1}\in\mathbb{T} \colon\Theta^{\delta}(x_{1},x_{2},t)\geq\mathfrak{t}_{\text{in}}\big{\}}\, \tag{14.11b}\] \[\mathfrak{X}_{1}^{-} =\mathfrak{X}_{1}^{-}(x_{2},t)=\min\big{\{}x_{1}\in\mathbb{T} \colon\Theta^{\delta}(x_{1},x_{2},t)\leq\mathfrak{t}_{\text{fin}}\big{\}}. \tag{14.11c}\] These stopping times are well defined by continuity of the function \(\Theta^{\delta}\) and the compactness of the constraints. Note that \(\mathfrak{X}_{1}^{-}(x_{2},t)\leq\hat{x}_{1}(x_{2})\leq\mathfrak{X}_{1}^{+}(x_ {2},t)\) in light of (14.10b). **Remark 14.1** (**Spatial support)**.: _We note that due to the bootstrap (5.37a) present in (14.132a), throughout this section are only interested in points \(x\in\mathcal{X}_{\text{fin}}=\{x\in\mathbb{T}^{2}\colon\operatorname{dist}(x, \mathcal{X}_{\text{in}})\leq\mathsf{C}_{\text{supp}}\varepsilon\}\), which in view of (4.7) and (6.5), amounts to_ \[|x_{1}-\hat{x}_{1}(x_{2})|\leq 2(13\pi+65\alpha(1+\alpha)\kappa_{0}) \varepsilon\,. \tag{14.12}\] _This is because for \(x\not\in\mathcal{X}_{\text{fin}}\), by (4.7) we have \(J_{g}=1\), \(\Sigma=\tfrac{1}{2}\kappa_{0}\), and \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},\mathsf{D}J_{g},h_{,2})\equiv 0\), and so there is no analysis required here (all functions are in fact constants there). 
Throughout this section we shall implicitly assume that (14.12) holds._ The distinguished surface passing through the pre-shock (which corresponds to \(t=t^{*}(x_{2})\)) is parametrized as \(\{(x_{1},x_{2},\overline{\Theta^{\delta}}(x_{1},x_{2}))\}\), where \[\overline{\Theta^{\delta}}(x_{1},x_{2}):=\Theta^{\delta}(x_{1},x_{2},t^{*}(x_{2 }))\,. \tag{14.13}\] That is, for this distinguished surface, the explicit dependence on time is dropped. Here \(x_{2}\in\mathbb{T}\) and \(\mathfrak{X}_{1}^{-}(x_{2},t^{*}(x_{2}))\leq x_{1}\leq\mathfrak{X}_{1}^{+}(x_ {2},t^{*}(x_{2}))\). Throughout this section we work on the \(\delta\)-adjusted upstream spacetime \[\hat{\mathcal{H}}^{\delta}:=\big{\{}(x,t)\in\mathcal{X}_{\text{fin}}\times[ \mathfrak{t}_{\text{in}},\mathfrak{t}_{\text{fin}})\colon\mathfrak{t}_{\text{in}} \leq t<\overline{\Theta^{\delta}}(x)\big{\}}\,. \tag{14.14}\] The surface \(\{t=\min\{\overline{\Theta^{\delta}}(x),\mathfrak{t}_{\text{fin}}\}\}\) defines the "top" temporal boundary of \(\hat{\mathcal{H}}^{\delta}\). For times \(\mathfrak{t}_{\text{in}}\leq t\leq t^{*}(x_{2})\), we have a well-defined foliation (see Section 14.5 for details) of the spacetime subset of \(\mathcal{H}^{\delta}\) given by \[\hat{\mathcal{H}}^{\delta}_{+}:=\{(x,t)\in\mathcal{H}^{\delta}\colon\Theta^{ \delta}(x,\mathfrak{t}_{\text{in}})<t<\overline{\Theta^{\delta}}(x)\}=\{(x, \Theta^{\delta}(x,t))\colon(x,t)\in\hat{\Omega}_{\text{US},+}\}\,, \tag{14.15a}\] where \(\hat{\Omega}_{\mathsf{US},+}\) is defined in (14.11a). We define the complimentary set by \[\hat{\mathcal{H}}_{-}^{\hat{\mathcal{S}}}:=\{(x,t)\in\mathcal{H}^{\hat{\mathcal{ S}}}\colon\mathsf{t}_{\mathsf{in}}\leq t<\Theta^{\hat{\mathcal{S}}}(x,\mathsf{t}_{ \mathsf{in}})\}\,. \tag{14.15b}\] so that \[\hat{\mathcal{H}}^{\hat{\mathcal{S}}}=\hat{\mathcal{H}}_{+}^{\hat{\mathcal{S}} }\cup\Theta^{\hat{\mathcal{S}}}(x,\mathsf{t}_{\mathsf{in}})\cup\hat{\mathcal{ H}}_{-}^{\hat{\mathcal{S}}}\,. \tag{14.15c}\] The decomposition (14.15c) is represented in Figure 16 below. ### Bounds for derivatives of \(t^{*}(x_{2})\), \(\dot{x}_{1}(x_{2})\), and \(\mathfrak{B}\) Before analyzing the properties of the solution \(\Theta^{\hat{\mathcal{S}}}\) of (14.10), we need to estimate various derivatives of the functions \(t^{*}(x_{2})\) and \(\dot{x}_{1}(x_{2})\) appearing in the boundary condition (14.10b), and of the function \(\mathcal{B}\) defined in (14.10c). We recall from Definition 6.6 that \(\overline{J}_{g}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))=0\). By employing the chain-rule, the fact that also \(J_{g,1}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))=0\), and the identity (5.8), we deduce that \[\partial_{2}t^{*}(x_{2})=\tfrac{J_{g,1}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}) )\partial_{2}\dot{x}_{1}(x_{2})+J_{g,2}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}) )}{\partial_{t}J_{g}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))}=\tfrac{J_{g,2}( \dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))}{\partial_{t}J_{g}(\dot{x}_{1}(x_{2}),x_ {2},t^{*}(x_{2}))}\,. \tag{14.16}\] While \(|J_{g,2}|=|\mathsf{D}_{2}J_{g}|\leq 4(1+\alpha)\) follows from the bootstrap assumptions (see (5.37)), the denominator appearing in the above identity was previously estimated (6.43) (specialized at \(t=t^{*}(x_{2})\) so that \(\hat{x}_{1}(x_{2})=x_{1}^{*}(x_{2},t^{*}(x_{2}))\)), resulting in \(\tfrac{2(1+\alpha)}{5}\leq-\varepsilon\partial_{t}\overline{J}_{g}(\dot{x}_{1 }(x_{2}),x_{2},t^{*}(x_{2}))\leq(1+\alpha)(1+400\mathbf{1}_{t^{*}(x_{2})\in( \mathsf{t_{mod}},\mathsf{t_{in}}]})\). 
From these bounds and (14.16) we deduce that \[\big{|}\partial_{2}t^{*}(x_{2})\big{|}\leq 10\varepsilon\,. \tag{14.17}\] Similarly, from (6.49) evaluated at \(t=t^{*}(x_{2})\), we have that \[\partial_{2}\dot{x}_{1}(x_{2})=-\big{(}\tfrac{\partial_{1}\partial_{2}J_{g}}{ \partial_{1}\partial_{2}J_{g}}\big{)}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))\,, \tag{14.18}\] and using (6.24), (6.54), and the fact that \(-\hat{C}\varepsilon\leq t^{*}(x_{2})\leq\mathsf{t}_{\mathsf{fin}}\), we find that \[\big{|}\partial_{2}\dot{x}_{1}(x_{2})\big{|}\leq\tfrac{5}{4}\varepsilon\big{(} |\mathsf{D}_{1}^{2}\mathsf{D}_{2}w_{0}(\dot{x}_{1}(x_{2}),x_{2})|+\varepsilon \hat{C}\mathsf{K}(\mathsf{B}_{6})\big{)}\leq 2\varepsilon\|\mathsf{D}^{3}w_{0}\|_{L^{ \infty}}\leq 2\varepsilon\mathsf{C}_{\mathsf{data}}\,. \tag{14.19}\] Next, we turn to the second order derivatives of \(t^{*}(x_{2})\) and \(\dot{x}_{1}(x_{2})\). Differentiating (14.16) once more, we find that \[\partial_{2}^{2}t^{*}(x_{2})=\Big{(}\tfrac{(\partial_{2}\dot{x}_{1}\partial_{1 }+\partial_{2}\partial_{t^{*}}\partial_{t})J_{g,2}}{\partial_{t}\overline{J}_{ g}}-\tfrac{J_{g,2}(\partial_{2}\dot{x}_{1}\partial_{1}+\partial_{2} +\partial_{2}t^{*}\partial_{t})\partial_{t}\overline{J}_{g}}{(\partial_{t} \overline{J}_{g})^{2}}\Big{)}(\dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))\,.\] Using the assumptions on \(w_{0}\) given in Section 4.2 together with the bounds (6.24), (7.17), (14.17), and (14.19), in a similar manner to the proof of Lemma 6.3 we deduce \[\big{|}\partial_{2}^{2}t^{*}(x_{2})\big{|}\lesssim\varepsilon\,. \tag{14.20}\] Analogously, differentiating (14.18), we obtain that \[\partial_{2}^{2}\dot{\hat{x}}_{1}(x_{2})=-\Big{(}\tfrac{(\partial_{2}\dot{\hat{ x}}_{1}\partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})J_{g,12}}{J_{g,11}}-\tfrac{J_{g,12}(\partial_{2}\dot{\hat{x}}_{1}\partial_{1}+\partial_{2}+ \partial_{2}t^{*}\partial_{1})J_{g,11}}{(J_{g,11})^{2}}\Big{)}(\dot{\hat{x}}_{ 1}(x_{2}),x_{2},t^{*}(x_{2}))\,.\] With the assumptions on \(w_{0}\) given in Section 4.2, the bounds (6.24), (7.17), (14.17), and (14.19), we find that \[\big{|}\partial_{2}^{2}\dot{\hat{x}}_{1}(x_{2})\big{|}\lesssim\varepsilon( \mathsf{B}_{6})\,. \tag{14.21}\] Lastly, we turn to the third order derivatives of \(t^{*}(x_{2})\) and \(\dot{x}_{1}(x_{2})\). Differentiating (14.16) two times with respect to \(x_{2}\), we have that \[\partial_{2}^{3}t^{*}(x_{2})=\Big{(}\tfrac{(\partial_{2}\dot{\hat {x}}_{1}\partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})^{2}J_{g,12}} {\partial_{1}J_{g,11}}-\tfrac{2(\partial_{2}\dot{\hat{x}}_{1}\partial_{1}+ \partial_{2}+\partial_{2}t^{*}\partial_{1})J_{g,12}(\partial_{2}\dot{\hat{x}}_ {1}\partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})J_{g,11}}{(J_{g,11 })^{2}}\] \[\qquad\qquad\qquad-\tfrac{J_{g,2}(\partial_{2}\dot{\hat{x}}_{1} \partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})^{2}\partial_{2} \overline{J}_{g}}{(\partial_{1}\overline{J}_{g})^{2}}+\tfrac{2J_{g,2}(( \partial_{2}\dot{\hat{x}}_{1}\partial_{1}+\partial_{2}+\partial_{2}t^{*} \partial_{1})\partial_{\overline{J}_{g}})^{2}}{(\partial_{1}J_{g})^{3}}\Big{)}( \dot{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))\,.\] With the assumptions on \(w_{0}\) given in Section 4.2, the bounds (6.24), (7.17), and (14.19)-(14.21) yield \[\big{|}\partial_{2}^{3}t^{*}(x_{2})\big{|}\lesssim\varepsilon(\mathsf{B}_{6})\,. 
\tag{14.22}\] Finally, differentiating (14.18) two times with respect to \(x_{2}\), we arrive at \[\partial_{2}^{3}\dot{x}_{1}(x_{2})=-\Big{(}\tfrac{(\partial_{2} \dot{\hat{x}}_{1}\partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})^{2 }J_{g,11}}{J_{g,11}}-\tfrac{2(\partial_{2}\dot{\hat{x}}_{1}\partial_{1}+ \partial_{2}+\partial_{2}t^{*}\partial_{1})J_{g,12}(\partial_{2}\dot{\hat{x}}_ {1}\partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})J_{g,11}}{(J_{g,11 })^{2}}\] \[\qquad\qquad\qquad-\tfrac{J_{g,12}(\partial_{2}\dot{\hat{x}}_{1} \partial_{1}+\partial_{2}+\partial_{2}t^{*}\partial_{1})^{2}J_{g,11}}{(J_{g,11 })^{2}}+\tfrac{2J_{g,12}((\partial_{2}\dot{\hat{x}}_{1}\partial_{1}+\partial_{ 2}+\partial_{2}t^{*}\partial_{1})J_{g,11})^{2}}{(J_{g,11})^{3}}\Big{(}\hat{x}_{ 1}(x_{2}),x_{2},t^{*}(x_{2}))\,.\] The noticeable difference we encounter the above identity is the appearance of third order derivatives of \(J_{g,1}\) along the pre-shock. Pointwise bounds for \(\mathsf{D}^{4}J_{g}\) are not covered by the \(L^{\infty}\) bound in (7.17). Instead, we note that the bootstrap (14.132b) postulates \(J_{g}\in H^{6}(\hat{\mathcal{H}}^{6})\), and thus the Sobolev embedding in \(2+1\) (space\(+\)time) dimensions gives that \(J_{g}\in L^{\infty}(\hat{\mathcal{H}}^{6})\). The Sobolev embedding is however not necessarily sharp in terms of the scaling with respect to \(\varepsilon\). Indeed, the classical Gagliardo-Nirenberg inequality a function \(f\) which is \(H^{2}\) smooth on a space-time domain \(\Omega\subset\mathcal{X}_{\mathsf{fin}}\times[\mathsf{t}_{\mathsf{in}}, \mathsf{t}_{\mathsf{fin}}]\subset\mathbb{R}^{3}\): is \(\|f\|_{L^{\infty}(\Omega)}\lesssim\|f\|_{L^{2}(\Omega)}^{\frac{1}{2}}\| \nabla^{2}f\|_{L^{2}(\Omega)}^{\frac{3}{2}}+|\Omega|^{-\frac{1}{2}}\|f\|_{L^{2} (\Omega)}\), where the implicit constant is universal. In terms of the differential operators \(\mathsf{D}\), this implies via the Poincare inequality (B.2a) that \(\|f\|_{L^{\infty}_{x,t}}\lesssim\varepsilon^{-\frac{3}{2}}\|\mathsf{D}^{2}f\|_ {L^{2}_{x,t}}\). Applying this bound with \(f=\mathsf{D}^{4}J_{g}\), and appealing to the \(\mathsf{D}^{6}J_{g}\) bootstrap (14.132b), we deduce \(\|\mathsf{D}^{4}J_{g}\|_{L^{\infty}_{x,t}}\lesssim\mathsf{B}_{j}\varepsilon^{- \frac{1}{2}}\). Since this term is \(\mathcal{O}(\varepsilon^{-\frac{1}{2}})\) instead of \(\mathcal{O}(1)\), the bound for \(\partial_{2}\dot{x}_{1}(x_{2})\) loses a factor of \(\varepsilon^{\frac{1}{2}}\), resulting in \[\big{|}\partial_{2}^{3}\dot{\hat{x}}_{1}(x_{2})\big{|}\lesssim\varepsilon^{\frac{ 1}{2}}\langle\mathsf{B}_{6}\rangle\,. \tag{14.23}\] ### Solvability of (14.10) and properties of \(\mathsf{O}^{6}\) and its derivatives Solving for \(\mathsf{O}^{6}\) amounts to a standard application of the method of characteristics. Letting \[\mathfrak{M}(x,t) :=-\tfrac{(1-8)J_{g}(x,t)}{2\alpha\Sigma(x,t)}\big{(}V+\tfrac{2 \alpha}{1-8}\Sigma g^{-\frac{1}{2}}h_{,2}\,\big{)}(x,t)\,, \tag{14.24a}\] \[\mathfrak{M}(x,t) :=\tfrac{(1-8)J_{g}(x,t)}{2\alpha\Sigma(x,t)}\big{(}V+\tfrac{2 \alpha}{1-8}\Sigma g^{-\frac{1}{2}}h_{,2}\,\big{)}(x,t)\tfrac{\partial_{2} \mathcal{B}}{\partial_{t}\mathcal{B}}(x_{2},t)\,,\] (14.24b) \[\mathfrak{F}(x,t) :=-\tfrac{(1-8)J_{g}(x,t)}{2\alpha\Sigma(x,t)}\,, \tag{14.24c}\] we may write (14.10a) as \[\partial_{1}\mathsf{O}^{6}(x,t)+\mathfrak{M}(x,\mathsf{O}^{6}(x,t))\partial_{2 }\mathsf{O}^{6}(x,t)+\mathfrak{N}(x,\mathsf{O}^{6}(x,t))\partial_{t}\mathsf{O }^{6}(x,t)=\mathfrak{F}(x,\mathsf{O}^{6}(x,t))\,. 
\tag{14.25}\] This is a semilinear first order PDE with smooth coefficients. The regularity of \(\mathfrak{M},\mathfrak{N}\), and \(\mathfrak{F}\) may be seen as follows. First, in analogy to the bound (14.17) we deduce from (6.24), (6.43), (6.53), and (14.19) that the term \(\mathcal{B}\) defined in (14.10c) satisfies \[\big{|}\tfrac{\partial_{2}\mathcal{B}}{\partial_{t}\mathcal{B}}(x_{2},t)\big{|} \leq\tfrac{|J_{g,1}(\hat{x}_{1}(x_{2}),x_{2},t)|\cdot|J_{g,2}(\hat{x}_{1}(x_{ 2}),x_{2},t)|+|J_{g,2}(\hat{x}_{1}(x_{2}),x_{2},t)|}{-\partial_{t}J_{g}(\hat{x} _{1}(x_{2 which become uniform bounds since \(J_{g}(x,t)\leq\frac{6}{5}\). In order to bound derivatives of \(\mathfrak{M},\mathfrak{N}\), and \(\mathfrak{F}\), we use that from (6.24), (6.43), (6.53), (14.19), and (14.21) we have \[\big{|}(\mathsf{D}_{t},\mathsf{D}_{2})\tfrac{\partial_{2}\mathsf{ B}}{\partial_{t}\mathsf{B}}(x_{2},t)\big{|}\lesssim\varepsilon(\mathsf{B}_{6})\,, \tag{14.26c}\] where we recall that \((\mathsf{D}_{t},\mathsf{D}_{2})=(\varepsilon\partial_{t},\partial_{2})\). Combining this estimate with the pointwise bootstrap assumptions (5.37k)-(5.37q) present in (14.132a), we deduce \[\big{|}\mathsf{D}\mathfrak{F}(x,t)\big{|}\lesssim 1\,,\quad\big{|}\mathsf{D} \mathfrak{M}(x,t)\big{|}\lesssim\varepsilon\,,\quad\big{|}\mathsf{D}\mathfrak{ M}(x,t)\big{|}\lesssim\varepsilon^{2}(\mathsf{B}_{6})\,, \tag{14.26d}\] where we recall that \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\). Lastly, by appealing to (6.24), (14.19), (14.21), (14.23), and the estimate, \(|J_{g},_{1}(\hat{x}_{1}(x_{2}),x_{2},t)|\leq|x_{1}^{*}(x_{2},t)-\hat{x}_{1}(x _{2})|\cdot\|J_{g},_{11}\|_{L^{\infty}_{x,t}}\lesssim\varepsilon\kappa( \mathsf{B}_{6})\), similarly to (14.26a) and (14.26c) we deduce \[\big{|}(\mathsf{D}_{t}^{2},\mathsf{D}_{t}\mathsf{D}_{2},\mathsf{D }_{2}^{2})\tfrac{\partial_{2}\mathsf{B}}{\partial_{t}\mathsf{B}}(x_{2},t) \big{|}\lesssim\varepsilon(\mathsf{B}_{6})\,. \tag{14.26e}\] Combined with the bootstraps (14.132a)-(14.132b), and the the anisotropic Sobolev estimate in (B.2d), similarly to (14.26d) we obtain \[\big{|}\mathsf{D}^{2}\mathfrak{F}(x,t)\big{|}\lesssim\langle\mathsf{B}_{6} \rangle\,,\quad\big{|}\mathsf{D}^{2}\mathfrak{M}(x,t)\big{|}\lesssim \varepsilon\langle\mathsf{B}_{6}\rangle\,,\quad\big{|}\mathsf{D}^{2} \mathfrak{N}(x,t)\big{|}\lesssim\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,. \tag{14.26f}\] With the bounds in (14.26), we turn to solving (14.25). Treating \(x_{1}\) as time and \((x_{2},t)\) as parameters, we introduce the flows \((\zeta_{2}(x_{1},x_{2},t),\zeta_{\ell}(x_{1},x_{2},t))\) and the function \(\Theta^{\delta}\circ\zeta(x_{1},x_{2},t):=\Theta^{\delta}(x_{1},\zeta_{2}(x_{1},x_{2},t),\zeta_{\ell}(x_{1},x_{2},t))\). 
The flows \((\zeta_{2},\zeta_{t})\) are the solutions of the characteristic ODEs \[\partial_{1}\zeta_{2}(x_{1},x_{2},t) =\mathfrak{M}\big{(}x_{1},\zeta_{2}(x_{1},x_{2},t),\Theta^{\delta }\circ\zeta(x_{1},x_{2},t)\big{)},\qquad\zeta_{2}(\hat{x}_{1}(x_{2}),x_{2},t) =x_{2}\,, \tag{14.27a}\] \[\partial_{1}\zeta_{\ell}(x_{1},x_{2},t) =\mathfrak{N}\big{(}x_{1},\zeta_{2}(x_{1},x_{2},t),\Theta^{\delta }\circ\zeta(x_{1},x_{2},t)\big{)},\qquad\zeta_{\ell}(\hat{x}_{1}(x_{2}),x_{2},t)=t\,, \tag{14.27b}\] while (14.25) and (14.10b) may be rewritten as \[\partial_{1}\big{(}\Theta^{\delta}\circ\zeta(x_{1},x_{2},t)\big{)} =\mathfrak{F}\big{(}x_{1},\zeta_{2}(x_{1},x_{2},t),\Theta^{\delta}\circ\zeta( x_{1},x_{2},t)\big{)}\,, \tag{14.27c}\] \[\Theta^{\delta}\circ\zeta(\hat{x}_{1}(x_{2}),x_{2},t) =\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t)=t\,. \tag{14.27d}\] Note that the boundary condition at \(\{\hat{x}_{1}(x_{2}),x_{2},t\}\) is non-characteristic. Moreover, the fields \((\mathfrak{F},\mathfrak{M},\mathfrak{N})\) defined in (14.24) are uniformly \(C^{1}\) in both space and time in our spacetime (see (14.26)). This ensures unique and smooth solvability of the system (14.27): first by solving the two-dimensional system of coupled ODEs for \(\zeta_{2}(x_{1},x_{2},t)\) and \(\Theta^{\delta}(x_{1},\zeta_{2}(x_{1},x_{2},t),\eta_{t}(x_{1},x_{2},t))\) obtained from (14.27a) and (14.27c)-(14.27d), with \((x_{2},t)\) as parameters, and then afterwards integrating the ODE for \(\zeta^{t}\) in (14.27b). The global solvability of the characteristic ODEs in the interval \(x_{1}\in[\mathfrak{X}_{1}^{-}(x_{2},t),\mathfrak{X}_{1}^{+}(x_{2},t)]\) is a consequence of the \(C^{1}_{x,t}\) regularity of \((\mathfrak{F},\mathfrak{M},\mathfrak{N})\) and the fact that the boundary data at \(x_{1}=\hat{x}_{1}(x_{2})\) is smooth and non-characteristic. Moreover, using (14.26b) and (14.26d), we have that the map \((x_{2},t)\mapsto(\zeta_{2}(\cdot,x_{2},t),\zeta_{\ell}(\cdot,x_{2},t))\) is invertible and the bounds \[|\zeta_{2}(x_{1},x_{2},t)-x_{2}|\leq\mathring{C}\varepsilon^{2} \,,\qquad|\zeta_{\ell}(x_{1},x_{2},t)-t|\leq\mathring{C}\varepsilon^{3}\,, \tag{14.28}\] hold for each \(x_{1}\in[\mathfrak{X}_{1}^{-}(x_{2},t),\mathfrak{X}_{1}^{+}(x_{2},t)]\). Next, we turn to bounding the derivatives of \(\Theta^{\delta}\). We claim that: **Lemma 14.2** (**Bounds for the derivatives of \(\Theta^{\delta}\))**.: _Let \(\kappa_{0}\) be sufficiently large with respect to \(\alpha\) to ensure that (14.38) holds. Assume that the bootstrap assumptions (14.132) hold in \(\mathring{\mathcal{H}}^{\delta}\) and that \(\varepsilon\) is taken sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). Then, for all \((x,t)\in\mathring{\Omega}_{\mathsf{US},+}\), the spacetime defined in (14.11a), we have_ \[-\tfrac{4J_{g}(x,\Theta^{\delta}(x,t))}{\alpha\kappa_{0}} \leq\partial_{1}\Theta^{\delta}(x,t)\leq-\tfrac{J_{g}(x,\Theta^{ \delta}(x,t))}{4\alpha\kappa_{0}}<0\,, \tag{14.29a}\] \[\big{|}\partial_{2}\Theta^{\delta}(x,t)\big{|} \leq 5\cdot 10^{3}(1+\alpha)^{2}\varepsilon\,,\] (14.29b) \[|\partial_{t}\Theta^{\delta}(x,t)-1| \leq 3\cdot 10^{-5}\,,\] (14.29c) \[\big{|}\partial_{22}\Theta^{\delta}(x,t)\big{|} \leq\mathsf{b}_{22}\,\varepsilon\langle\mathsf{B}_{6}\rangle\,,\] (14.29d) \[\big{|}\partial_{2t}\Theta^{\delta}(x,t)\big{|} \leq\mathsf{b}_{28}\,,\] (14.29e) \[\big{|}\partial_{tt}\Theta^{\delta}(x,t)\big{|} \leq\mathsf{b}_{\mathsf{ss}}\,. 
\tag{14.29f}\] _The constants \(\mathsf{b}_{22}\), \(\mathsf{b}_{28}\), \(\mathsf{b}_{\mathsf{ss}}\) appearing in (14.29d)-(14.29f) only depend on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C_{data}}\), and are defined in (14.57), (14.53), and (14.48). Before giving the proof of Lemma 14.2, we record a few immediate consequences. First, we note that from (14.29a), (14.38), and (5.37k) we may deduce \[-\tfrac{1}{10^{5}(1+\alpha)}\leq-\tfrac{5}{\alpha\kappa_{0}}\leq \partial_{1}\Theta^{\delta}(x,t)<0\,,\] (14.30a) for all \[(x,t)\in\mathring{\Omega}_{\mathsf{US},+}\]. Second, we note that by differentiating ( 14.25 ) with respect to \[x_{2}\] or \[t\], and appealing to the bootstrap inequalities in ( 14.132a ) bounds in ( 14.26b ), ( 14.26d ), and also to identity ( 14.33 ) and bound ( 14.38 ) below, we deduce \[\big{|}\partial_{12}\Theta^{\delta}(x,t)\big{|} \leq|(\partial_{2}\mathfrak{F})\circ\Theta^{\delta}(x,t)|+| \partial_{2}\Theta^{\delta}\cdot(\partial_{t}\mathfrak{F})\circ\Theta^{\delta} (x,t)|+\mathring{C}\varepsilon(\mathsf{B}_{6})\] \[\leq\tfrac{32(1+\alpha)}{\alpha\kappa_{0}}+\tfrac{5\cdot 10^{3}(1+ \alpha)^{3}}{\alpha\kappa_{0}}+\mathring{C}\varepsilon(\mathsf{B}_{6})\leq \tfrac{(1+\alpha)^{2}}{50}\,, \tag{14.30b}\] \[\big{|}\partial_{1t}\Theta^{\delta}(x,t)\big{|} \leq|\partial_{t}\Theta^{\delta}\cdot(\partial_{t}\mathfrak{F}) \circ\Theta^{\delta}(x,t)|+\mathring{C}\varepsilon(\mathsf{B}_{6})\] \[\leq(1+3\cdot 10^{-5})\tfrac{(1+\alpha)}{\alpha\kappa_{0}}+ \mathring{C}\varepsilon(\mathsf{B}_{6})\leq\tfrac{1}{10^{5}\varepsilon}\,. \tag{14.30c}\] By using these estimates, we may also differentiate (14.25) with respect to \(x_{1}\), and similarly deduce \[\big{|}\partial_{11}\Theta^{\delta}(x,t)\big{|} \leq|(\partial_{1}\mathfrak{F})\circ\Theta^{\delta}(x,t)|+| \partial_{1}\Theta^{\delta}\cdot(\partial_{t}\mathfrak{F})\circ\Theta^{ \delta}(x,t)|+\mathring{C}\varepsilon(\mathsf{B}_{6})\] \[\leq\tfrac{30}{\alpha\varepsilon\kappa_{0}}+\tfrac{5(1+\alpha)} {\varepsilon(\alpha\kappa_{0})^{2}}+\mathring{C}\varepsilon(\mathsf{B}_{6}) \leq\tfrac{1}{10^{5}\varepsilon(1+\alpha)}\,. \tag{14.30d}\] for all \((x,t)\in\mathring{\Omega}_{\mathsf{US},+}\). Proof of Lemma 14.2.: First, we prove (14.29a). Recall that \(\partial_{1}\Theta^{\delta}\) is computed from (14.25). By appealing to the bounds in (14.26b), along with the bounds (14.29b)-(14.29c), we obtain \[\big{|}\partial_{1}\Theta^{\delta}(x,t)+\tfrac{(1-\delta)J_{g}}{2\alpha\Sigma }\big{(}x,\Theta^{\delta}(x,t)\big{)}\big{|}\leq\mathring{C}\varepsilon^{2}J _{g}(x,\Theta^{\delta}(x,t))\,.\] Combining this bound with the \(\Sigma\) bootstrap in (5.37p) and taking \(\varepsilon\) to be sufficiently small, proves (14.29a). Next, we prove (14.29b)-(14.29c). We establish these bounds via a bootstrap/continuity argument with respect to \(x_{1}\), starting at \(\mathring{x}_{1}(x_{2})\). At \(x_{1}=\mathring{x}_{1}(x_{2})\) we have we have \(|\partial_{2}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)|=|J_{g,1}\,( \mathring{x}_{1}(x_{2}),x_{2},t)|\cdot|\partial_{2}\mathring{x}_{1}(x_{2})| \leq\mathring{C}\varepsilon^{2}\mathsf{K}(\mathsf{B}_{6})\) in light of (14.19), and also \(\partial_{t}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)=1\). Thus, at \(x_{1}=\mathring{x}_{1}(x_{2})\), the bounds (14.29b)-(14.29c) hold with a strict inequality. 
We then continue to propagate these bounds for \(x_{1}\) away from \(\mathring{x}_{1}(x_{2})\), and prove that they still hold with a strict inequality, yielding the global bound in \(\mathring{\Omega}_{\mathsf{US},+}\). We differentiate (14.25) with respect to the \(t\) and \(x_{2}\), and deduce that \[\big{(}\partial_{1}+\mathfrak{M}\circ\Theta^{\delta}\partial_{2 }+\mathfrak{M}\circ\Theta^{\delta}\partial_{t}\big{)}\!\left(\!\frac{\partial _{t}\Theta^{\delta}}{\partial_{2}\Theta^{\delta}}\!\right)-(\partial_{t} \mathfrak{F})\circ\Theta^{\delta}\left(\!\frac{\partial_{t}\Theta^{\delta}}{ \partial_{2}\Theta^{\delta}}\!\right)\] \[=\begin{pmatrix}-(\partial_{t}\mathfrak{M})\circ\Theta^{\delta} \partial_{2}\Theta^{\delta}\partial_{t}\Theta^{\delta}-(\partial_{t} \mathfrak{M})\circ\Theta^{\delta}(\partial_{t}\Theta^{\delta})^{2}\\ -(\partial_{t}\mathfrak{M})\circ\Theta^{\delta}(\partial_{2}\Theta^{\delta})^{2 }-(\partial_{2}\mathfrak{M})\circ\Theta^{\delta}\partial_{2}\Theta^{\delta}-( \partial_{t}\mathfrak{M})\circ\Theta^{\delta}\partial_{2}\Theta^{\delta}\partial_ {t}\Theta^{\delta}-(\partial_{2}\mathfrak{M})\circ\Theta^{\delta}\partial_{t} \Theta^{\delta}+(\partial_{2}\mathfrak{F})\circ\Theta^{\delta}\end{pmatrix}. \tag{14.31}\] Time differentiating (14.24c), and by appealing to (15a), (19b), we obtain (3.20a), \[\partial_{t}\mathfrak{F}= -\tfrac{(1-\delta)(1+\alpha)}{2\alpha}\partial_{1}(\log\Sigma)\] \[-\tfrac{(1-\delta)}{2\alpha\Sigma}\big{(}(1-\alpha)J_{g} \tfrac{\mathring{\boldsymbol{\mathcal{Z}}}}{\mathcal{Z}}_{\mathcal{N}}-\alpha J _{g}\mathring{\boldsymbol{\mathcal{Z}}}_{\mathcal{N}}-\alpha J_{g}h_{,2}\,( \mathring{\boldsymbol{\mathcal{W}}}_{\mathcal{T}}-\mathring{\boldsymbol{ \mathcal{Z}}}_{\mathcal{T}})-VJ_{g},{}_{2}-VJ_{g}\Sigma^{-1}\Sigma_{,2}\, \big{)}\,, \tag{14.32}\] so that the bootstrap assumptions imply \[\big{|}\partial_{t}\mathfrak{F}+\tfrac{(1-\delta)(1+\alpha)}{2\alpha}\partial_{ 1}(\log\Sigma)\big{|}\leq\mathring{C}\,. \tag{14.33}\] Using the characteristic flow \((\zeta_{2},\zeta_{t})\) introduced in (14.27), we additionally note the identity \[\big{(}\partial_{1}(\log\Sigma)\big{)}(x_{1},\zeta_{2}(x_{1},x_{2 },t),\Theta^{\delta}\circ\zeta(x_{1},x_{2},t))= \partial_{1}\big{(}\log\Sigma(x_{1},\zeta_{2}(x_{1},x_{2},t),\Theta^{ \delta}\circ\zeta(x_{1},x_{2},t))\big{)}\] \[-\big{(}\partial_{2}(\log\Sigma)\cdot\mathfrak{M}\big{)}(x_{1}, \zeta_{2}(x_{1},x_{2},t),\Theta^{\delta}\circ\zeta(x_{1},x_{2},t))\] \[-\big{(}\partial_{t}(\log\Sigma)\cdot\mathfrak{F}\big{)}(x_{1}, \zeta_{2}(x_{1},x_{2},t),\Theta^{\delta}\circ\zeta(x_{1},x_{2},t))\,. \tag{14.34}\] Thus, composing (14.31) with \((\zeta_{2},\zeta_{t})\), using the bound (14.33), identity (14.34), the bounds (14.26b), (14.26d), the bootstrap assumptions relating to \(\Sigma\) in (5.37p)-(5.37q), and the bounds (14.29b)-(14.29c) in a bootstrap fashion, we deduce that \[\big{|}\partial_{1}\big{(}(\partial_{t}\Theta^{\delta})\circ\zeta(x_{1},x_{2},t )\big{)}\] \[\big{|}(\partial_{t}\Theta^{\delta})\circ\zeta(x_{1},x_{2},t)-1 \cdot\mathcal{I}(x_{1},x_{2},t)\big{|}\lesssim|x_{1}-\hat{x}_{1}(x_{2})|\lesssim \varepsilon\,,\] which gives via (14.39) and upon composing with \(\zeta^{-1}\) that \[\big{|}\partial_{t}\Theta^{\delta}-1\big{|}\leq 10^{-5}+\hat{C} \varepsilon\leq 2\cdot 10^{-5}\,,\] upon taking \(\varepsilon\) to be sufficiently small. This proves (14.29c). 
Integrating (14.35b) with respect to \(x_{1}\), using that the boundary condition satisfies \(|(\partial_{2}\Theta^{\delta})(\hat{x}_{1}(x_{2}),x_{2},t)|\leq\hat{C}\varepsilon ^{2}\mathsf{K}(\mathsf{B}_{6})\), appealing to the bound (14.39) for the integrating factor, and also using the estimate \(\|\partial_{2}\mathfrak{F}\|_{L^{\infty}_{x,t}}\leq\frac{2}{\alpha\kappa_{0}} \|J_{g,2}\|_{L^{\infty}_{x,t}}+\frac{8}{\alpha\kappa_{0}^{2}}\|J_{g}\Sigma,_{ 2}\|_{L^{\infty}_{x,t}}\leq\frac{30(1+\alpha)}{\alpha\kappa_{0}}\) (which is a consequence of (5.37k), (5.37l), (5.37p), (5.37q)), we deduce \[\big{|}(\partial_{2}\Theta^{\delta})\circ\zeta(x_{1},x_{2},t)\big{|} \leq\mathcal{I}(x_{1},x_{2},t)|(\partial_{2}\Theta^{\delta})(\hat{ x}_{1}(x_{2}),x_{2},t)|+\big{(}\frac{30(1+\alpha)}{\alpha\kappa_{0}}+\hat{C} \varepsilon\big{)}\cdot\tfrac{1+10^{-5}}{1-10^{-5}}\cdot|x_{1}-\hat{x}_{1}(x_ {2})|\] \[\leq\hat{C}\varepsilon^{2}\mathsf{K}(\mathsf{B}_{6})+\tfrac{31(1 +\alpha)}{\alpha\kappa_{0}}\cdot|x_{1}-\hat{x}_{1}(x_{2})|\,.\] Appealing to the support assumption (14.12), to the fact that \(\kappa_{0}\) is taken sufficiently large with respect to \(\alpha\) cf. (14.38), taking \(\varepsilon\) to be sufficiently small, and composing with \(\zeta^{-1}\) we obtain \[\big{|}\partial_{2}\Theta^{\delta}\big{|}\leq 4050(1+\alpha)^{2} \varepsilon\,.\] This proves (14.29b). It thus remains to establish (14.29d)-(14.29f). As with (14.29b)-(14.29c), we prove these estimates via a bootstrap/continuity argument originating at \(x_{1}=\hat{x}_{1}(x_{2})\). First, we need to obtain good bounds for the boundary conditions at \(x_{1}=\hat{x}_{1}(x_{2})\), upon differentiating (14.10b) twice, we deduce \[\partial_{tt}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t) =0\,, \tag{14.40a}\] \[\partial_{2t}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t) =-\partial_{1t}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t)\cdot \partial_{2}\hat{x}_{1}(x_{2})\,,\] (14.40b) \[\partial_{22}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t) =-\partial_{11}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2},t)( \partial_{2}\hat{x}_{1}(x_{2}))^{2}-2\partial_{12}\Theta^{\delta}(\hat{x}_{1} (x_{2}),x_{2},t)\partial_{2}\hat{x}_{1}(x_{2})\] \[\qquad\qquad-\partial_{1}\Theta^{\delta}(\hat{x}_{1}(x_{2}),x_{2}, t)\cdot\partial_{22}\hat{x}_{1}(x_{2})\,. \tag{14.40c}\] In order to compute \(\partial_{12}\Theta^{\delta}\) and \(\partial_{11}\Theta^{\delta}\) at \((\mathring{x}_{1}(x_{2}),x_{2},t)\), we differentiate (14.25) with respect to \(x_{2},t\), and \(x_{1}\), appeal to the bootstraps, the bounds (14.19), (14.21), (14.26b), (14.26d), (14.33), and the fact that \(\partial_{t}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)=1\) and \(|\partial_{2}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)|\leq\mathring{ C}\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle\), to deduce \[|\partial_{1t}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)| \leq\mathring{C}\varepsilon^{2}|\partial_{12}\Theta^{\delta}( \mathring{x}_{1}(x_{2}),x_{2},t)|+\tfrac{2(1+\alpha)}{\alpha\varepsilon \kappa_{0}}\,, \tag{14.40d}\] \[|\partial_{11}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)| \leq\mathring{C}\varepsilon|\partial_{12}\Theta^{\delta}( \mathring{x}_{1}(x_{2}),x_{2},t)|+\tfrac{10(1+\alpha)}{\varepsilon(\alpha \kappa_{0})}+\tfrac{8(1+\alpha)}{\varepsilon(\alpha\kappa_{0})^{2}}\,,\] (14.40e) \[|\partial_{12}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)| \leq\mathring{C}\varepsilon|\partial_{22}\Theta^{\delta}( \mathring{x}_{1}(x_{2}),x_{2},t)|+\tfrac{28(1+\alpha)}{\alpha\kappa_{0}}\,. 
\tag{14.40f}\] Combining the bounds in (14.40) with (14.19), and (14.21), we obtain that \[\partial_{tt}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t) =0\,, \tag{14.41a}\] \[|\partial_{2t}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)| \leq\tfrac{5(1+\alpha)}{\alpha\kappa_{0}}\mathsf{C}_{\mathsf{ data}}\,,\] (14.41b) \[|\partial_{22}\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},t)| \leq\tfrac{200(1+\alpha)}{\alpha\kappa_{0}}\mathsf{C}_{\mathsf{ data}}{}^{2}\varepsilon+\tfrac{5}{\alpha\kappa_{0}}\cdot\mathring{C}_{(14.41c)} \varepsilon\langle\mathsf{B}_{6}\rangle\,. \tag{14.41c}\] Here the constant \(\mathring{C}_{(14.41c)}\) only depends on the implicit constant from (14.21), and thus only depends on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). In particular, recalling that \(\mathsf{C}_{\mathsf{data}}\leq\mathsf{B}_{6}\), if we let the constant \(\mathsf{b}_{22}\) appearing in (14.29d) satisfy \[\tfrac{200(1+\alpha)}{\alpha\kappa_{0}}\mathsf{C}_{\mathsf{data}}+\tfrac{5}{ \alpha\kappa_{0}}\cdot\mathring{C}_{(14.41c)}\leq\tfrac{1}{2}\mathsf{b}_{22}\,, \tag{14.42}\] and we let the constant \(\mathsf{b}_{2\mathsf{s}}\) appearing in (14.29e) satisfy \[\tfrac{5(1+\alpha)}{\alpha\kappa_{0}}\mathsf{C}_{\mathsf{data}}\leq\tfrac{1} {2}\mathsf{b}_{2\mathsf{s}}\,, \tag{14.43}\] we are ensured that (14.29d)-(14.29f) hold at \(x_{1}=\mathring{x}_{1}(x_{2})\), with strict inequalities. Note that no constraint on \(\mathsf{b}_{\mathsf{s}}\) is imposed at this stage. We next show via a bootstrap / continuity argument that these bounds still hold, with strict inequalities, globally in \(\mathring{\Omega}_{\mathsf{US},+}\). We first establish (14.29f). Differentiating the first component of (14.31) with respect to the \(t\) variable, we obtain \[\big{(}\partial_{1}+\mathfrak{M}\circ\Theta^{\delta}\partial_{2} +\mathfrak{M}\circ\Theta^{\delta}\partial_{t}\big{)}(\partial_{tt}\Theta^{ \delta})-(\partial_{t}\mathfrak{N})\circ\Theta^{\delta}\partial_{tt}\Theta^{ \delta}\] \[=-(\partial_{t}\mathfrak{M})\circ\Theta^{\delta}\partial_{t} \Theta^{\delta}\partial_{2t}\Theta^{\delta}-(\partial_{t}\mathfrak{M})\circ \Theta^{\delta}\partial_{t}\Theta^{\delta}\partial_{tt}\Theta^{\delta}+( \partial_{tt}\mathfrak{F})\circ\Theta^{\delta}(\partial_{t}\Theta^{\delta})^{2}\] \[\qquad-(\partial_{t}\mathfrak{M})\circ\Theta^{\delta}\partial_{t} \Theta^{\delta}\partial_{tt}\Theta^{\delta}+\partial_{2}\Theta^{\delta}\partial _{tt}\Theta^{\delta})-(\partial_{tt}\mathfrak{M})\circ\Theta^{\delta}\partial_{2 }\Theta^{\delta}(\partial_{t}\Theta^{\delta})^{2}\] \[\qquad-2(\partial_{t}\mathfrak{M})\circ\Theta^{\delta}\partial_{t} \Theta^{\delta}\partial_{tt}\Theta^{\delta}-(\partial_{tt}\mathfrak{M})\circ \Theta^{\delta}(\partial_{t}\Theta^{\delta})^{3}\,. \tag{14.44}\] By appealing to (14.26d), (14.26f), and (14.29) we deduce that the right side of (14.44) may be bounded as \[\big{|}\mathsf{RHS}_{(14.44)}\big{|}\leq(1+3\cdot 10^{-5})^{2}|(\partial_{tt} \mathfrak{F})\circ\Theta^{\delta}|+\mathring{C}\langle\mathsf{B}_{6}\rangle\,, \tag{14.45}\] where we have used that the constants \(\mathsf{b}_{22},\mathsf{b}_{2\mathsf{s}},\mathsf{b}_{\mathsf{s}}\) only depend on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and thus may be absorbed into \(\mathring{C}\). 
In analogy to (14.33), we may use the bootstraps, (14.32) and (3.19b) to show that \[\big{|}\partial_{tt}\mathfrak{F}-\tfrac{(1-\delta)(1+\alpha)}{2}\partial_{1} \tfrac{\mathfrak{Z}_{\mathcal{N}}}{\Sigma}+\tfrac{(1-\delta)(1-\alpha)}{2 \alpha}\partial_{t}\tfrac{J_{\boldsymbol{\hat{\mathsf{Z}}}_{\mathcal{N}}}}{ \Sigma}\big{|}\leq\mathring{C}\,,\] and therefore \[\big{|}\partial_{tt}\mathfrak{F}\big{|}\leq\tfrac{64(1+\alpha)}{\varepsilon \kappa_{0}}\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}}\,. \tag{14.46}\] From the above estimate and (14.45), upon taking \(\varepsilon\) to be sufficiently small, we obtain \[\big{|}\mathsf{RHS}_{(14.44)}\big{|}\leq\tfrac{65(1+\alpha)}{\varepsilon\kappa_{0 }}\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}}\,. \tag{14.47}\] With the above estimate available, we compose (14.44) with the flow \((\zeta_{2},\zeta_{t})\), integrate in \(x_{1}\) starting at \(\mathring{x}_{1}(x_{2})\), use the boundary condition (14.41a), the integrating factor \(\mathcal{I}\) from (14.36) which satisfies (14.39), and the bound (14.12), to deduce \[\big{|}(\partial_{tt}\Theta^{\delta})\circ\zeta(x_{1},x_{2},t)\big{|} \leq\mathcal{I}(x_{1},x_{2},t)\cdot|(\partial_{tt}\Theta^{\delta})( \mathring{x}_{1}(x_{2}),x_{2},t)|+\big{|}\mathsf{RHS}_{(14.44)}\circ\zeta(x_{1},x_ {2},t)\big{|}\cdot\tfrac{1+10^{-5}}{1-10^{-5}}\cdot|x_{1}-\mathring{x}_{1}(x_{2})|\] \[\leq\tfrac{65(1+\alpha)}{\varepsilon\kappa_{0}}\mathsf{C}_{ \mathbf{\hat{Z}}_{\mathcal{N}}}\cdot\tfrac{1+10^{-5}}{1-10^{-5}}\cdot 2(13\pi+65\alpha(1+\alpha)\kappa_{0})\varepsilon\] \[\leq\tfrac{66(1+\alpha)}{\kappa_{0}}\mathsf{C}_{\mathbf{\hat{Z}}_{ \mathcal{N}}}\cdot(26\pi+130\alpha(1+\alpha)\kappa_{0})\,.\] Since the right side in the above bound depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), upon composing with the inverse flow of \(\zeta\), and upon defining \[\tfrac{66(1+\alpha)}{\kappa_{0}}\mathsf{C}_{\mathbf{\hat{Z}}_{\mathcal{N}}}\cdot(2 6\pi+130\alpha(1+\alpha)\kappa_{0})=:\tfrac{1}{2}\mathsf{b}_{\mathsf{s}}\,, \tag{14.48}\] we have completed the proof of (14.29f). Next, we establish (14.29e). The proof is similar to (14.29f), except that the boundary condition at \(x_{1}=\mathring{x}_{1}(x_{2})\) is now satisfying (14.41b) instead of (14.41a), and the evolution equation (14.44) is now replaced by \[\big{(}\partial_{1}+\mathfrak{M}\!\circ\!\Theta^{\delta}\partial_{ 2}+\mathfrak{M}\!\circ\!\Theta^{\delta}\partial_{t}\big{)}(\partial_{2t} \Theta^{\delta})-(\partial_{t}\mathfrak{F})\!\circ\!\Theta^{\delta}\partial_{ 2t}\Theta^{\delta}\] \[=-\big{(}(\partial_{2}\mathfrak{M})\!\circ\!\Theta^{\delta}+( \partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{2}\Theta^{\delta} \big{)}\partial_{2t}\Theta^{\delta}-\big{(}(\partial_{2}\mathfrak{M}\!\circ \!\Theta^{\delta}+(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_ {2}\Theta^{\delta}\big{)}\partial_{tt}\Theta^{\delta}\] \[\qquad+\big{(}(\partial_{2t}\mathfrak{F})\!\circ\!\Theta^{\delta }+(\partial_{tt}\mathfrak{F})\!\circ\!\Theta^{\delta}\partial_{2}\Theta^{ \delta}\big{)}\partial_{t}\Theta^{\delta}\] \[\qquad-(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\big{(} \partial_{22}\Theta^{\delta}\partial_{t}\Theta^{\delta}+\partial_{2}\Theta^{ \delta}\partial_{2t}\Theta^{\delta}\big{)}-\big{(}(\partial_{2t}\mathfrak{M})\! 
\circ\!\Theta^{\delta}+(\partial_{tt}\mathfrak{M})\!\circ\!\Theta^{\delta} \partial_{2}\Theta^{\delta}\big{)}\partial_{2}\Theta^{\delta}(\partial_{t} \Theta^{\delta})^{2}\] \[\qquad-2(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial _{t}\Theta^{\delta}\partial_{2t}\Theta^{\delta}-\big{(}(\partial_{2t}\mathfrak{ M})\!\circ\!\Theta^{\delta}+(\partial_{tt}\mathfrak{M})\!\circ\!\Theta^{\delta} \partial_{2}\Theta^{\delta}\big{)}(\partial_{t}\Theta^{\delta})^{2}\,, \tag{14.49}\] which is obtained by differentiating the first component of (14.31) with respect to \(x_{2}\). By appealing to (14.26d), (14.26f), (14.29) and (14.46) we deduce that the right side of (14.49) may be bounded as \[\big{|}\mathsf{HHS}_{(14.49)}\big{|}\leq(1+3\cdot 10^{-5})\big{(}\big{|}( \partial_{2t}\mathfrak{F})\!\circ\!\Theta^{\delta}\big{|}+\tfrac{64(1+\alpha )}{\varepsilon\kappa_{0}}C_{\underline{2}\mathcal{N}}\cdot 5\cdot 10^{3}(1+ \alpha)^{2}\varepsilon\big{)}+\mathring{C}\varepsilon\langle\mathsf{B}_{6} \rangle\,. \tag{14.50}\] On the other hand, by differentiating (14.32) with respect to \(x_{2}\) and appealing to the bootstrap assumptions, we may show that \[\big{|}\partial_{2t}\mathfrak{F}+\tfrac{(1-\delta)(1+\alpha)}{4\alpha}\partial _{2}\tfrac{J_{\mathsf{J}}\mathsf{W}_{\mathsf{N}^{\prime}}\!-\!J_{\mathsf{J}} \mathsf{Z}_{\mathsf{N}}}{\Sigma}+\tfrac{(1-\delta)(1-\alpha)}{2\alpha} \partial_{2}\tfrac{J_{\mathsf{J}}\mathsf{Z}_{\mathsf{N}}}{\Sigma}\big{|}\leq \mathring{C}\varepsilon\,,\] and therefore \[\big{|}\partial_{2t}\mathfrak{F}\big{|}\leq\tfrac{11(1+\alpha)}{\alpha\varepsilon \kappa_{0}}+\mathring{C}\leq\tfrac{12(1+\alpha)}{\alpha\varepsilon\kappa_{0}}\,. \tag{14.51}\] From the above estimate and (14.50), upon taking \(\varepsilon\) to be sufficiently small, we obtain \[\big{|}\mathsf{HHS}_{(14.49)}\big{|}\leq\tfrac{13(1+\alpha)}{\alpha \varepsilon\kappa_{0}}\,. \tag{14.52}\] With the above estimate available, we compose (14.49) with the flow \((\zeta_{2},\zeta_{t})\), integrate in \(x_{1}\) starting at \(\mathring{x}_{1}(x_{2})\), use the boundary condition (14.41b), the integrating factor \(\mathcal{I}\) from (14.36) which satisfies (14.39), and the bound (14.12), to deduce \[\big{|}(\partial_{2t}\Theta^{\delta})\!\circ\!\zeta(x_{1},x_{2},t) \big{|} \leq\mathcal{I}(x_{1},x_{2},t)\cdot|(\partial_{2t}\Theta^{\delta})( \mathring{x}_{1}(x_{2}),x_{2},t)|+\big{|}\mathsf{HHS}_{(14.49)}\!\circ\! \zeta(x_{1},x_{2},t)\big{|}\cdot\tfrac{1+10^{-5}}{1-10^{-5}}\cdot|x_{1}- \mathring{x}_{1}(x_{2})|\] \[\leq\tfrac{(1+10^{-5})5(1+\alpha)}{\alpha\kappa_{0}}C_{\mathsf{ data}}+\tfrac{13(1+\alpha)}{\alpha\varepsilon\kappa_{0}}\cdot\tfrac{1+10^{-5}}{1-10^{-5}} \cdot 2(13\pi+65\alpha(1+\alpha)\kappa_{0})\varepsilon\] \[\leq\tfrac{6(1+\alpha)}{\alpha\kappa_{0}}C_{\mathsf{data}}+\tfrac{ 14(1+\alpha)}{\alpha\kappa_{0}}\cdot(26\pi+130\alpha(1+\alpha)\kappa_{0})\,.\] Since the right side in the above bound depends only on \(\alpha,\kappa_{0}\), and \(C_{\mathsf{data}}\), upon composing with the inverse flow of \(\zeta\), and upon defining \[\tfrac{6(1+\alpha)}{\alpha\kappa_{0}}C_{\mathsf{data}}+\tfrac{14(1+\alpha)}{ \alpha\kappa_{0}}\cdot(26\pi+130\alpha(1+\alpha)\kappa_{0})=:\tfrac{1}{2} \mathsf{b}_{2\mathsf{s}}\,, \tag{14.53}\] which automatically implies (14.43), we have thus completed the proof of (14.29e). At last, we establish (14.29d). The proof is nearly identical to that of (14.29e) and (14.29f). 
In analogy to (14.49), by differentiating with respect to \(x_{2}\) the second component of (14.31), we deduce \[\big{(}\partial_{1}+\mathfrak{M}\!\circ\!\Theta^{\delta}\partial_{ 2}+\mathfrak{M}\!\circ\!\Theta^{\delta}\partial_{t}\big{)}(\partial_{22}\Theta^{ \delta})-(\partial_{t}\mathfrak{F})\!\circ\!\Theta^{\delta}\partial_{22}\Theta^{ \delta}\] \[=-\big{(}(\partial_{2}\mathfrak{M})\!\circ\!\Theta^{\delta}+( \partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{2}\Theta^{\delta} \big{)}\partial_{22}\Theta^{\delta}-\big{(}(\partial_{2}\mathfrak{M}\!\circ\! \Theta^{\delta}+(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{2} \Theta^{\delta}\big{)}\partial_{2t}\Theta^{\delta}\] \[\qquad+\big{(}(\partial_{2t}\mathfrak{F})\!\circ\!\Theta^{\delta} +(\partial_{tt}\mathfrak{F})\!\circ\!\Theta^{\delta}\partial_{2}\Theta^{\delta} \big{)}\partial_{2}\Theta^{\delta}+\big{(}(\partial_{22}\mathfrak{F})\!\circ\! \Theta^{\delta}+(\partial_{2t}\mathfrak{F})\!\circ\!\Theta^{\delta}\partial_{2} \Theta^{\delta}\big{)}\] \[\qquad-2(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{ 22}\Theta^{\delta}\partial_{2}\Theta^{\delta}-\big{(}(\partial_{2t}\mathfrak{M})\! \circ\!\Theta^{\delta}+(\partial_{tt}\mathfrak{M})\!\circ\!\Theta^{\delta} \partial_{2}\Theta^{\delta}\big{)}(\partial_{2}\Theta^{\delta})^{2}\] \[\qquad-(\partial_{2}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{ 22}\Theta^{\delta}-\big{(}(\partial_{22}\mathfrak{M})\!\circ\!\Theta^{\delta}+( \partial_{2t}\mathfrak{M})\!\circ\!\Theta^{\delta}\partial_{2}\Theta^{\delta} \big{)}\partial_{2}\Theta^{\delta}\big{)}\partial_{2}\Theta^{\delta}\] \[\qquad-(\partial_{t}\mathfrak{M})\!\circ\!\Theta^{\delta}\big{(} \partial_{22}\Theta^{\delta}\partial_{t}\Theta^{\delta}+\partial_{2} \Theta^{\delta}\partial_{2t}\Theta^{\delta}\big{)}-\big{(}(\partial_{2t} \mathfrak{M})\!\circ\!\Theta^{\delta}+(\partial_{tt}\mathfrak{M})\!\circ\! \Theta^{\delta}\partial_{2}\Theta^{\delta}\big{)}\partial_{2}\Theta^{\delta} \big{)}\partial_{t}\Theta^{\delta}\] \[\ The above estimate and (14.55) imply that \[\big{|}\mathsf{RHS}_{(\ref{eq:14.54})}\big{|}\leq\tfrac{32}{\alpha\kappa_{0}} \big{(}2\mathsf{C}_{\mathsf{data}}+28(1+\alpha)\big{)}+\tfrac{12}{\alpha\kappa_{ 0}}\cdot 10^{4}(1+\alpha)^{3}\,. 
\tag{14.56}\] With the above estimate available, we compose (14.54) with the flow \((\zeta_{2},\zeta_{t})\), integrate in \(x_{1}\) starting at \(\mathring{x}_{1}(x_{2})\), use the boundary condition (14.41c), the integrating factor \(\mathcal{I}\) from (14.36) which satisfies (14.39), and the bound (14.12), to deduce \[\big{|}(\partial_{22}\Theta^{\delta})\circ\zeta(x_{1},x_{2},t)\big{|} \leq\mathcal{I}(x_{1},x_{2},t)\cdot|(\partial_{22}\Theta^{\delta})( \mathring{x}_{1}(x_{2}),x_{2},t)|+\big{|}\mathsf{RHS}_{(\ref{eq:14.49})}\circ \zeta(x_{1},x_{2},t)\big{|}\cdot\tfrac{1+10^{-5}}{1-10^{-5}}\cdot|x_{1}- \mathring{x}_{1}(x_{2})|\] \[\leq\varepsilon\big{(}\tfrac{200(1+\alpha)}{\alpha\kappa_{0}} \mathsf{C}_{\mathsf{data}}{}^{2}+\tfrac{5}{\alpha\kappa_{0}}\cdot\mathring{C}_ {(\ref{eq:14.41c})}\langle\mathsf{B}_{6}\rangle\] \[\qquad\qquad+\big{(}\tfrac{32}{\alpha\kappa_{0}}\big{(}2\mathsf{ C}_{\mathsf{data}}+28(1+\alpha)\big{)}+\tfrac{12}{\alpha\kappa_{0}}\cdot 10^{4}(1+ \alpha)^{3}\big{)}(26\pi+130\alpha(1+\alpha)\kappa_{0})\Big{)}\] \[\leq\varepsilon\langle\mathsf{B}_{6}\rangle\Big{(}\tfrac{200(1+ \alpha)}{\alpha\kappa_{0}}\mathsf{C}_{\mathsf{data}}+\tfrac{5}{\alpha\kappa_{ 0}}\cdot\mathring{C}_{(\ref{eq:14.41c})}\] \[\qquad\qquad+\big{(}\tfrac{32}{\alpha\kappa_{0}}\big{(}2+28(1+ \alpha)\big{)}+\tfrac{12}{\alpha\kappa_{0}}\cdot 10^{4}(1+\alpha)^{3}\big{)}(26\pi+13 0\alpha(1+\alpha)\kappa_{0})\Big{)}\] \[=:\varepsilon\langle\mathsf{B}_{6}\rangle\cdot\tfrac{1}{2}\mathsf{ b}_{22}\,. \tag{14.57}\] The above defined \(\mathsf{b}_{22}\) depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), and that this choice automatically implies (14.42). Upon composing with the inverse flow of \(\zeta\), (14.57) completes the proof of (14.29d), and thus of the Lemma. ### The upstream weight function \(\mathcal{J}\) in \(\mathring{\mathcal{H}}_{+}^{\delta}\) In analogy with Sections 5-13, we introduce a weight function (denoted by \(\mathcal{J}\)) that will be used in the upstream energy estimates, and which is a suitable extension of \(\overline{J}_{g}\) away from the pre-shock. According to the decomposition (14.15c) of the upstream spacetime \(\mathring{\mathcal{H}}^{\delta}\), we separately define the weight \(\mathcal{J}\) in \(\mathring{\mathcal{H}}_{+}^{\delta}\) (see (14.58) below) and \(\mathring{\mathcal{H}}_{-}^{\delta}\) (see (14.62) below), ensuring the continuity of certain derivatives across the surface \(\Theta^{\delta}(x,\mathsf{t}_{\mathrm{in}})\). In this subsection we define the weight function \(\mathcal{J}\) on \(\mathring{\mathcal{H}}_{+}^{\delta}\), which we recall is foliated by the surfaces \((x,\Theta^{\delta}(x,t))\), according to (14.15a). In light of this foliation, we may define the upstream weight function \(\mathcal{J}\) as \[\mathcal{J}\big{(}x_{1},x_{2},\Theta^{\delta}(x_{1},x_{2},t)\big{)}=\mathcal{ B}(x_{2},t)=\overline{J}_{g}(\mathring{x}_{1}(x_{2}),x_{2},t)\,, \tag{14.58}\] for all \((x,t)\in\mathring{\Omega}_{\mathsf{US},+}\), where we have used the notation in (14.10c) for \(\mathcal{B}\). In order to simplify our exposition, we shall sometimes use the notation \(\mathcal{J}(x,\Theta^{\delta})\) to mean \(\mathcal{J}(x_{1},x_{2},\Theta^{\delta}(x_{1},x_{2},t))\). The weight \(\mathcal{J}\) was defined in (14.58) in order to ensure that is satisfies a PDE, which is a \(\delta\)-modification of the \(1\)-characteristic transport PDE. 
To see this, we differentiate (14.58) and obtain that in \(\mathring{\mathcal{H}}_{+}^{\delta}\) we have \[\partial_{1}\big{(}\mathcal{J}(x,\Theta^{\delta})\big{)} =\partial_{1}\mathcal{J}(x,\Theta^{\delta})+\partial_{t}\mathcal{J }(x,\Theta^{\delta})\partial_{1}\Theta^{\delta}=0\,, \tag{14.59a}\] \[\partial_{2}\big{(}\mathcal{J}(x,\Theta^{\delta})\big{)} =\partial_{2}\mathcal{J}(x,\Theta^{\delta})+\partial_{t}\mathcal{J }(x,\Theta^{\delta})\partial_{2}\Theta^{\delta}=\partial_{2}\mathcal{B}(x_{2},t )\,,\] (14.59b) \[\partial_{t}\big{(}\mathcal{J}(x,\Theta^{\delta})\big{)} =\partial_{t}\mathcal{J}(x,\Theta^{\delta})\partial_{t}\Theta^{ \delta}=\partial_{t}\mathcal{B}(x_{2},t)\,. \tag{14.59c}\] The identities in (14.59) are substituted into the definition of \(\Theta^{\delta}\) in (14.10a) to yield \[(1\!-\!\delta)(\partial_{t}+V\partial_{2})\mathcal{J}-(2\alpha\Sigma J_{g}^{-1} )\partial_{1}\mathcal{J}+\big{(}2\alpha\Sigma g^{-\frac{1}{2}}h_{,2}\big{)} \partial_{2}\mathcal{J}=0\,.\] (14.60a) The boundary condition associated to the \[\mathcal{J}\] evolution ( 14.60a ) is deduced from ( 14.58 ) and ( 14.10b ), leading to \[\mathcal{J}(\mathring{x}_{1}(x_{2}),x_{2},t)=\mathcal{B}(x_{2},t)=\overline{J} _{g}(\mathring{x}_{1}(x_{2}),x_{2},t)\] (14.60b) for \[t\in[\mathsf{t}_{\mathrm{in}},t^{*}(x_{2}))\]. In the energy estimates, the form of the \(\mathcal{J}\) evolution (14.60a) which is most frequently used is \[2\alpha\Sigma\partial_{1}\mathcal{J}-2\alpha\Sigma J_{g}g^{-\frac{1}{2}}h_{,2} \,\partial_{2}\mathcal{J}-J_{g}(\partial_{t}+V\partial_{2})\mathcal{J}=-8J_{g}( \partial_{t}+V\partial_{2})\mathcal{J}\,. \tag{14.61}\] We will show (cf. (14.93a) below) that \((\partial_{t}+V\partial_{2})\mathcal{J}<0\), and so the condition \(\delta>0\) makes the term on the right side of (14.61) strictly positive. In turn, this induces a strictly positive damping term in certain energy norms, see Remark 14.14 below. ### The upstream weight function \(\mathfrak{J}\) in \(\hat{\mathcal{H}}^{\delta}_{-}\) Let us now define the upstream weight function \(\mathfrak{J}\) in the spacetime region \(\mathcal{H}^{\delta}_{-}\). We set \[(\partial_{t}+V\partial_{2})\mathfrak{J}(x,t)=((\partial_{t}+V\partial_{2}) \mathfrak{J})(x,\Theta^{\delta}(x,\mathfrak{t}_{\text{in}}))\qquad\text{ for all}\qquad(x,t)\in\check{\mathcal{H}}^{\delta}_{-}\,,\] (14.62a) with the boundary condition set at the "top" boundary of the spacetime \[\hat{\mathcal{H}}^{\delta}_{-}\] \[\mathfrak{J}(x,t)=1\qquad\text{ for all}\qquad t=\Theta^{\delta}(x, \mathfrak{t}_{\text{in}})\,. \tag{14.62b}\] We note from the start that the definition of \(\mathfrak{J}\) in (14.62) ensures that \(\mathfrak{J}\) is continuous across \(\Theta^{\delta}(x,\mathfrak{t}_{\text{in}})\) (because \(\mathcal{B}(x_{2},\mathfrak{t}_{\text{in}})=\overline{\mathcal{J}}_{\mathfrak{ g}}(\dot{x}_{1}(x_{2}),x_{2},\mathfrak{t}_{\text{in}})\)), and moreover \((\partial_{t}+V\partial_{2})\mathfrak{J}\) is continuous across \(\Theta^{\delta}(x,\mathfrak{t}_{\text{in}})\) (because of (14.62a)). It is convenient to solve the boundary value problem (14.62). 
First, we compute using (14.59b), (14.59c), the fact that \(\overline{\mathcal{J}}_{\mathfrak{s}}(x,\mathfrak{t}_{\text{in}})=1\), and the fact that \((\partial_{t}\overline{\mathcal{J}}_{\mathfrak{s}})(x,\mathfrak{t}_{\text{in }})=\frac{1+\alpha}{2}(w_{0})_{,1}(x)+\frac{1-\alpha}{2}(\alpha_{0})_{,1}(x)\), that for \(t\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2})]\), we \[\big{(}(\partial_{t}+V\partial_{2})\mathfrak{J}\big{)}(x,\Theta ^{\delta}(x,\mathfrak{t}_{\text{in}})) =\partial_{t}\mathcal{B}(x_{2},\mathfrak{t}_{\text{in}})\frac{1-V (x,\Theta^{\delta}(x,\mathfrak{t}_{\text{in}}))\partial_{2}\Theta^{\delta}(x, \mathfrak{t}_{\text{in}})}{\partial_{t}\Theta^{\delta}(x,\mathfrak{t}_{\text{ in}})}+V(x,\Theta^{\delta}(x,\mathfrak{t}_{\text{in}}))\partial_{2} \mathcal{B}(x_{2},\mathfrak{t}_{\text{in}})\] \[=\big{(}\tfrac{1+\alpha}{2}(w_{0})_{,1}+\tfrac{1-\alpha}{2}(z_{0} )_{,1}\big{)}(\ddot{x}_{1}(x_{2}),x_{2})\tfrac{1-V(x,\Theta^{\delta}(x, \mathfrak{t}_{\text{in}}))\partial_{2}\Theta^{\delta}(x,\mathfrak{t}_{\text{ in}})}{\partial_{t}\Theta^{\delta}(x,\mathfrak{t}_{\text{in}})}\] \[=:\mathfrak{f}(x_{1},x_{2})\,. \tag{14.63}\] Note importantly that the \(x_{1}\) dependence of \(\mathfrak{f}\) enters only through the argument of \(\Theta^{\delta}\) and its derivatives. Returning to the evolution equation for \(\mathfrak{J}\) in (14.62), we introduce a slight modification of the flow \(\xi\) defined in (6.6) (the modification is that the new flow is the identity at a given time \(t\) instead of \(\mathfrak{t}_{\text{in}}\)), as follows: for \(\mathfrak{t}_{\text{in}}\leq t\leq t^{\prime}\), and for \(x\in\mathbb{T}^{2}\) such that \((x,t)\in\hat{\mathcal{H}}^{\delta}_{-}\), we let \[\partial_{t^{\prime}}\xi_{t}(x_{1},x_{2},t^{\prime})=V(x_{1},\xi_{t}(x_{1},x_{ 2},t^{\prime}),t^{\prime})\,,\qquad\xi_{t}(x_{1},x_{2},t)=x_{2}\,. \tag{14.64}\] It is clear that \(\xi_{t}\) satisfies the bounds (6.8) and (6.9). This flow is defined for times \(t^{\prime}\) less than the stopping time \[\mathsf{T}_{\xi}(x,t)=\sup\bigl{\{}t^{\prime}\in[t,\mathfrak{t}_{\text{fin}}) \colon(x_{1},\xi_{t}(x_{1},x_{2},t^{\prime}),t^{\prime})\in\hat{\mathcal{H}}^{ \delta}_{-}\bigr{\}}\,.\] (14.65a) Since the "top" temporal boundary of \[\hat{\mathcal{H}}^{\delta}_{-}\] is the surface \[\Theta^{\delta}(x,\mathfrak{t}_{\text{in}})\], we may alternatively characterize the stopping time \[\mathsf{T}_{\xi}(x,t)\] as the implicit solution of \[\mathsf{T}_{\xi}(x_{1},x_{2},t)=\Theta^{\delta}\Bigl{(}x_{1},\xi_{t}\big{(}x_{1 },x_{2},\mathsf{T}_{\xi}(x_{1},x_{2},t)\big{)},\mathfrak{t}_{\text{in}}\Bigr{)} \,,\qquad\mathsf{T}_{\xi}(x_{1},x_{2},t)\in[t,\mathfrak{t}_{\text{fin}})\,. \tag{14.65b}\] Note in particular that \(\mathsf{T}_{\xi}(x,\Theta^{\delta}(x,\mathfrak{t}_{\text{in}}))=\Theta^{\delta} (x,\mathfrak{t}_{\text{in}})\). Using this notation, we solve for \(\mathfrak{J}\) in (14.62). For \((x_{1},x_{2},t)\in\hat{\mathcal{H}}^{\delta}_{-}\) fixed, we compose (14.62a) with the flow \(\xi_{t}\), and using the notation in (14.63) deduce that \[\partial_{t^{\prime}}\bigl{(}\mathfrak{J}(x_{1},\xi_{t}(x_{1},x_{2},t^{\prime}), t^{\prime})\bigr{)}=\mathfrak{f}(x_{1},\xi_{t}(x_{1},x_{2},t^{\prime}))\,.\] Integrating the above equation from \(t^{\prime}=t\) and until \(t^{\prime}=\mathsf{T}_{\xi}(x,t)\), and using the boundary condition (14.62b), we get \[\mathfrak{J}(x_{1},x_{2},t)=1-\int_{t}^{\mathsf{T}_{\xi}(x_{1},x_{2},t)} \mathfrak{f}(x_{1},\xi_{t}(x_{1},x_{2},t^{\prime}))\mathrm{d}t^{\prime}\,. 
\tag{14.66}\] Identity (14.66) gives the definition of the upstream weight function \(\mathfrak{J}\) in the spacetime \(\hat{\mathcal{H}}^{\delta}_{-}\), where \(\xi_{t}\) is defined as in (14.64), \(\mathfrak{f}\) is given by (14.63), and \(T_{\xi}\) is given by (14.65). ### Properties of the weight function \(\mathfrak{J}\) The upstream weight function \(\mathfrak{J}\) is now fully defined, according to (14.58) in \(\hat{\mathcal{H}}^{\delta}_{+}\), and (14.66) in \(\hat{\mathcal{H}}^{\delta}_{-}\). We collect a number of useful properties of this weight, which will be used throughout the remainder of this section. #### 14.8.1. Lower bounds for \(\mathfrak{J}\) We claim that for all \((x,t)\in\hat{\mathcal{H}}^{\delta}\), we have \[\mathfrak{J}(x,t)\geq\left\{\begin{aligned} &\big{(} \overline{\Theta^{\delta}}(x)-t\big{)}\tfrac{1+\alpha}{2\varepsilon}\cdot \tfrac{89}{100}\,,&&(x,t)\in\hat{\mathcal{H}}^{\delta}_{+}\,,\\ & 1\,,&&(x,t)\in\hat{\mathcal{H}}^{\delta}_{-}\,.\end{aligned}\right. \tag{14.67}\] We prove (14.67) separately in the spacetime \(\hat{\mathcal{H}}^{\delta}_{+}\) and \(\hat{\mathcal{H}}^{\delta}_{-}\). In \(\hat{\mathcal{H}}^{\delta}_{+}\) it is sometimes convenient to use (14.68). According to (14.15a), for any \((x,t)\in\hat{\mathcal{H}}^{\delta}_{+}\) there exists \(t^{\prime}=t^{\prime}(x,t)\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2}))\) such that \(t=\Theta^{\delta}(x,t^{\prime})\). The definition (14.58) then gives \[\mathfrak{J}(x,t)=\mathfrak{J}(x,\Theta^{\delta}(x,t^{\prime}))=\mathcal{B}(x_{2 },t^{\prime})\,.\] On the other hand, from assumptions ((iv)) and ((vi)), and bounds (5.8), (6.24d), (6.53), and Definition 6.6, we have \[\mathcal{J}(x,\Theta^{\delta}(x,t^{\prime})) =\mathcal{B}(x_{2},t^{\prime})=\overline{J}_{g}(\mathring{x}_{1}( x_{2}),x_{2},t^{\prime})=\overline{J}_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))- \int_{t^{\prime}}^{t^{*}(x_{2})}\partial_{t}\overline{J}_{g}(\mathring{x}_{1 }(x_{2}),x_{2},t^{\prime\prime})\mathrm{d}t^{\prime\prime}\] \[\geq-\int_{t^{\prime}}^{t^{*}(x_{2})}\partial_{t}J_{g}(\mathring{ x}_{1}(x_{2}),x_{2},t^{\prime\prime})\mathrm{d}t^{\prime\prime}\geq-(t^{*}(x_{2})-t^{ \prime})\tfrac{1+\alpha}{2}\big{(}(w_{0})_{1}\left(\mathring{x}_{1}(x_{2}),x_{ 2}\right)+\mathsf{C}_{\mathsf{J}}\big{)}\] \[\geq(t^{*}(x_{2})-t^{\prime})\tfrac{1+\alpha}{2}\big{(}\tfrac{9 }{10\varepsilon}-2\mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\big{)}\geq(t^{*}(x_{2} )-t^{\prime})\tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{895}{1000}\;, \tag{14.68}\] whenever \(\mathsf{t}_{\mathsf{in}}\leq t^{\prime}<t^{*}(x_{2})\). It thus remains to appeal to the intermediate value theorem in time, along with the bound (14.29c), and obtain that \[\overline{\Theta^{\delta}}(x)-t=\Theta^{\delta}(x,t^{*}(x_{2}))-\Theta^{\delta }(x,t^{\prime})=(t^{*}(x_{2})-t^{\prime})\qquad\partial_{t}\Theta^{\delta}(x,t^{\prime\prime})\qquad.\] From the two estimates above, we obtain the bound (14.67), in the spacetime \(\mathring{\mathcal{H}}^{\delta}_{+}\). For \((x,t)\in\mathring{\mathcal{H}}^{\delta}_{-}\), we appeal to the formula (14.66). 
We note that by the definition (14.63), the bounds (14.29b)-(14.29c), the fact that \(V=\mathcal{O}(\varepsilon)\), the assumptions on \(w_{0}\) and \(z_{0}\) in Section 4.2, and the bound (6.53) the function \(\mathfrak{f}\) satisfies the bound \[\tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{101}{100}\geq\big{(}\tfrac{1+\alpha }{2}\cdot\tfrac{1}{\varepsilon}+\mathring{C}\big{)}\cdot\tfrac{1+\mathring{C }\varepsilon^{2}}{1-3\cdot 10^{-5}}\geq-\mathfrak{f}(x_{1},x_{2})\geq\big{(} \tfrac{1+\alpha}{2}\cdot\tfrac{9}{10\varepsilon}-\mathring{C}\big{)}\cdot \tfrac{1-\mathring{C}\varepsilon^{2}}{1+3\cdot 10^{-5}}\geq\tfrac{1+\alpha}{2 \varepsilon}\cdot\tfrac{89}{100}\,. \tag{14.69}\] Inserting the lower bound (14.69) into (14.66), results in \[1+(\mathsf{T}_{\xi}(x,t)-t)\tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{101}{100 }\geq\mathcal{J}(x,t)\geq 1+(\mathsf{T}_{\xi}(x,t)-t)\tfrac{1+\alpha}{2 \varepsilon}\cdot\tfrac{89}{100}\,. \tag{14.70}\] Since by definition (recall (14.65a)) we always have \(\mathsf{T}_{\xi}(x,t)\geq t\), the above estimate proves (14.67) in \(\mathring{\mathcal{H}}^{\delta}_{-}\). #### 14.8.2. Comparison of \(\mathcal{J}\) and \(J_{g}\) Next, we compare the weight function \(\mathcal{J}\) defined by (14.58) and (14.66) to \(J_{g}\) itself. **Lemma 14.3** (\(\mathcal{J}\) and \(J_{g}\)).: _Assume that \(\kappa_{0}\) is taken sufficiently large with respect to \(\alpha>0\) to ensure that (14.38) holds. Assume that the pointwise bootstraps (14.132a) hold in \(\mathring{\mathcal{H}}^{\delta}\), and that \(\varepsilon\) is taken to be sufficiently small, with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Then, for \((x,t)\in\mathring{\mathcal{H}}^{\delta}\) we have_ \[\mathcal{J}(x,t) \leq\tfrac{101}{100}J_{g}(x,t)\,,\qquad\text{whenever}\qquad|x_{ 1}|\leq 13\pi\varepsilon\,, \tag{14.71a}\] \[\mathcal{J}(x,t) \leq\tfrac{21}{10}J_{g}(x,t)\quad\text{and}\quad|J_{g}\mathring{ \mathbf{W}}_{\mathcal{N}}(x,t)|\leq\mathring{C}\varepsilon\,,\qquad\text{ whenever}\qquad|x_{1}|>13\pi\varepsilon\,. \tag{14.71b}\] _Moreover, we have_ \[0\leq\mathcal{J}(x,t)-1\leq\tfrac{1}{10^{3}}\mathbf{1}_{|x_{1}|<13\pi \varepsilon}+\tfrac{53}{50}\mathbf{1}_{|x_{1}|>13\pi\varepsilon}\quad\text{and }\quad\big{|}J_{g}(x,t)-1\big{|}\leq\tfrac{5}{10^{4}}\,, \tag{14.71c}\] _for \((x,t)\in\mathring{\mathcal{H}}^{\delta}_{-}\). Since \(\mathcal{J}>0\) in \(\mathring{\mathcal{H}}^{\delta}\), the bounds in (14.71) also show \(J_{g}>0\) in \(\mathring{\mathcal{H}}^{\delta}\)._ Proof of Lemma 14.3.: Let us first note that the condition \(|x_{1}|>13\pi\varepsilon\) implies, via (4.7), that \((w_{0})_{,1}(x)=0\). As such, the bound (6.17a) immediately implies that for such values of \(x\) we have \(|J_{g}\mathring{\mathbf{W}}_{\mathcal{N}}|\leq\mathring{C}\varepsilon\). This proves the second bound in (14.71b). It thus only remains to prove the estimates that involve \(\mathcal{J}\) and \(J_{g}\). _The proof of (14.71c)._ We recall that by its definition in (14.15b), we have \(\mathring{\mathcal{H}}^{\delta}_{-}=\{(x,t)\in\mathring{\mathcal{H}}^{\delta} \colon\mathsf{t}_{\mathsf{in}}\leq t<\Theta^{\delta}(x,\mathsf{t}_{\mathsf{in}})\}\). We first prove the \(J_{g}\) estimate in (14.71c). 
Using (6.24a), (14.30a), and the fact that \(\mathsf{t}_{\mathsf{in}}=\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},\mathsf{ t}_{\mathsf{in}})\), we have that for all \((x,t)\in\mathring{\mathcal{H}}^{\delta}_{-}\), \[|J_{g}(x,t)-1| \leq(t-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2\varepsilon} \big{(}|\mathsf{D}_{1}w_{0}(x)|+\varepsilon\mathsf{C}_{\mathsf{J}_{\mathsf{J}}} \big{)}\] \[\leq(\Theta^{\delta}(x_{1},x_{2},\mathsf{t}_{\mathsf{in}})-\Theta^{ \delta}(\mathring{x}_{1}(x_{2}),x_{2},\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha} {2\varepsilon}\big{(}|\mathsf{D}_{1}w_{0}(x)|+\varepsilon\mathsf{C}_{\mathsf{J}_ {\mathsf{J}}}\big{)}\] \[\leq|x_{1}-\mathring{x}_{1}(x_{2})|\cdot\tfrac{1}{10^{5}}(1+ \alpha)\cdot\tfrac{1+\alpha}{2\varepsilon}\big{(}\mathbf{1}_{|x_{1}|\leq 13\pi \varepsilon}+\varepsilon\mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\big{)}\] \[\leq\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}\tfrac{13\pi}{10^{5}}+ \mathring{C}\varepsilon\mathbf{1}_{|x_{1}|>13\pi\varepsilon}\leq 5\cdot 10^{-4}\,,\] upon taking \(\varepsilon\) sufficiently small. This establishes the bounds for \(J_{g}\) claimed in (14.71c). In order to estimate \(\mathcal{J}(x,t)\) in the spacetime \(\mathring{\mathcal{H}}^{\delta}_{-}\) we appeal to (14.70), which thus necessitates an upper bound for the non-negative quantity \(\mathsf{T}_{\xi}(x,t)-t\), where \(\mathsf{T}_{\xi}\) is as defined by (14.65a). For this purpose, we note that by construction (see the line below (14.65b)), we have that \[\big{(}\mathsf{T}_{\xi}(x,t)-t\big{)}\big{|}_{t=\Theta^{\delta}(x,\mathsf{t}_ {\mathsf{in}})} =0\,. \tag{14.72}\] Moreover, implicitly differentiating (14.65b) with respect to \(t\) shows that \[\partial_{t}\mathsf{T}_{\xi}(x,t)=\tfrac{\partial_{\xi}\Theta^{\delta}(x_{1}, \xi_{t}(x,\mathsf{T}_{\xi}(x,t)),\mathsf{t_{n}})(\partial_{t}\xi_{t})(x,\mathsf{ T}_{\xi}(x,t))}{1-V(x_{1},\xi_{t}(x,\mathsf{T}_{\xi}(x,t)),\mathsf{T}_{\xi}(x,t)) \partial_{\xi}\Theta^{\delta}(x_{1},\xi_{t}(x,\mathsf{T}_{\xi}(x,t)),\mathsf{ t_{n}})}\] while differentiation of (14.64) with respect to \(t\) shows that \[(\partial_{t}\xi)(x,t^{\prime})=-V(x,t)\exp\!\left(\int_{t}^{t^{\prime}} \partial_{2}V(x_{1},\xi_{t}(x,t^{\prime\prime}),t^{\prime\prime})\mathrm{d}t^ {\prime\prime}\right),\] Combining the two identities above with (14.29b) and the fact that \(V,\mathsf{D}V=\mathcal{O}(\varepsilon)\) by the bootstraps, we deduce that \[\big{|}1+\partial_{t}\big{(}\mathsf{T}_{\xi}(x,t)-t\big{)}\big{|}=\big{|} \partial_{t}\mathsf{T}_{\xi}(x,t)\big{|}\lesssim\varepsilon^{2}\,. \tag{14.73}\] From (14.72) and (14.73) we deduce in turn that \[(1-\mathring{C}\varepsilon^{2})(\Theta^{\delta}(x,\mathsf{t_{in}})-t)\leq \mathsf{T}_{\xi}(x,t)-t\leq(1+\mathring{C}\varepsilon^{2})(\Theta^{\delta}(x, \mathsf{t_{in}})-t)\,, \tag{14.74}\] for all \((x,t)\in\mathring{\mathcal{H}}^{\delta}\). At this stage we consider two cases, \(|x_{1}|\leq 13\pi\varepsilon\), and \(|x_{1}|>13\pi\varepsilon\). 
In the first case, when \(|x_{1}|\leq 13\pi\varepsilon\), we use that (14.30a) implies \[\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}(\mathsf{T}_{\xi}(x,t)-t) \leq\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}|\Theta^{\delta}(x, \mathsf{t_{in}})-\mathsf{t_{in}}|(1+\mathring{C}\varepsilon^{2})\] \[\leq\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}|\Theta^{\delta}(x _{1},x_{2},\mathsf{t_{in}})-\Theta^{\delta}(\mathring{x}_{1}(x_{2}),x_{2}, \mathsf{t_{in}})|(1+\mathring{C}\varepsilon^{2})\] \[\leq\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}|x_{1}-\mathring{x }_{1}(x_{2})|\cdot\tfrac{1}{10^{5}(1+\alpha)}(1+\mathring{C}\varepsilon^{2}) \leq\tfrac{27\pi\varepsilon}{10^{5}(1+\alpha)}\leq\tfrac{\varepsilon}{10^{3}( 1+\alpha)}\,. \tag{14.75}\] In the second case, when \(|x_{1}|>13\pi\varepsilon\), we recall the definition of the domain \(\mathring{\Omega}_{\mathsf{US},+}\) on which \(\Theta^{\delta}\) is defined, cf. (14.11a), to note that \(\Theta^{\delta}(x,\mathsf{t_{in}})\leq\mathsf{t_{fin}}\), and therefore \[\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}(\mathsf{T}_{\xi}(x,t)-t)\leq(1+ \mathring{C}\varepsilon^{2})(\Theta^{\delta}(x,\mathsf{t_{in}})-\mathsf{t_{ in}})\leq(1+\mathring{C}\varepsilon^{2})(\mathsf{t_{fin}}-\mathsf{t_{in}}) \leq\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{52}{50}\,. \tag{14.76}\] Combining the estimates (14.75)-(14.76) with the upper bound in (14.70), we deduce \[\mathcal{J}(x,t)\leq 1+\big{(}\mathsf{T}_{\xi}(x,t)-t\big{)} \tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{101}{100} \leq 1+\tfrac{2\varepsilon}{1+\alpha}\big{(}\mathbf{1}_{|x_{1}|<13 \pi\varepsilon}\tfrac{1}{2\cdot 10^{5}}+\mathbf{1}_{|x_{1}|>13\pi \varepsilon}\tfrac{52}{50}\big{)}\tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{101 }{100}\] \[\leq 1+\mathbf{1}_{|x_{1}|<13\pi\varepsilon}\tfrac{1}{10^{3}}+ \mathbf{1}_{|x_{1}|>13\pi\varepsilon}\tfrac{53}{50}\,.\] This proves the upper bound for \(\mathcal{J}\) in (14.71c). _The proof of (14.71a)-(14.71b)._ We note that due to (14.71c), for \((x,t)\in\mathring{\mathcal{H}}^{\delta}\) (by continuity, on the closure of this spacetime), the bounds (14.71a)-(14.71b) are already known to hold. This is because for \(|x_{1}|\leq 13\pi\varepsilon\) we have \((1+10^{-3})\cdot(1+5\cdot 10^{-4})\leq 1.01\) while for \(|x_{1}|>13\pi\varepsilon\) we have \((1+\tfrac{53}{50})\cdot(1+5\cdot 10^{-4})\leq 2.1\). Moreover, since we have already established the \(J_{g}\mathring{\mathbf{W}}_{\mathcal{N}}\) bound stated in (14.71b), it only remains to prove the pointwise upper bound for \(\mathcal{J}_{g}^{-1}\) stated in (14.71a)-(14.71b), on the spacetime \(\mathring{\mathcal{H}}^{\delta}_{+}\). The proof consists of two parts. The first one is to obtain rough lower bounds of \(J_{g}(x,t)\) away from \(t=t^{*}(x_{2})\) and \(x_{1}=\mathring{x}_{1}(x_{2})\). The second consists in comparing \(\mathcal{J}\) with \(J_{g}\) using various decompositions of the spacetime \(\mathring{\mathcal{H}}^{\delta}_{+}\). The arguments in the first part of the proof, are reminiscent of the argument in the proof of Proposition 13.1, except that instead of the downstream geometry, we consider the upstream one. In analogy to (13.1) we define \[x_{1}^{\flat}(x_{2})=\big{\{}x_{1}\colon x_{1}<x_{1}^{\vee}(x_{2}),\mathsf{D}_{ 1}w_{0}(x_{1},x_{2})=-\tfrac{17}{20}\big{\}}\,. \tag{14.77}\] The fact that the function \(\mathbb{T}\ni x_{2}\mapsto x_{1}^{\flat}(x_{2})\) given by (14.77) is well-defined and differentiable requires a proof. 
This proof is nearly identical to the one given in the proof of Proposition 13.1 with signs changed; the changes are as follows. The existence of at least one value \(x_{1}^{\flat}(x_{2})\in(-13\pi\varepsilon,x_{1}^{\vee}(x_{2}))\) satisfying \(\mathsf{D}_{1}w_{0}(x_{1}^{\flat}(x_{2}),x_{2})=-\tfrac{17}{20}\) follows from the intermediate value theorem, because \(-\tfrac{17}{20}\in(-\tfrac{9}{10},0)\). Then, any such possible value \(x_{1}^{\flat}(x_{2})\) must satisfy \(x_{1}^{\flat}(x_{2})\leq x_{1}^{\vee}(x_{2})-\tfrac{1}{40}\varepsilon\). This is because by the intermediate value theorem, (4.10), and ((vi)), we have \(\tfrac{1}{20}\leq|\mathsf{D}_{1}w_{0}(x_{1}^{\flat}(x_{2}),x_{2})-\mathsf{D}_{ 1}w_{0}(x_{1}^{\vee}(x_{2}),x_{2})|=|x_{1}^{\flat}(x_{2})-x_{1}^{\vee}(x_{2})| \|\partial_{1}\mathsf{D}_{1}w_{0}\|_{L_{\infty}^{\infty}}\leq\tfrac{2}{ \varepsilon}|x_{1}^{\flat}(x_{2})-x_{1}^{\vee}(x_{2})|\). In particular, with (6.53) we also have \(x_{1}^{\flat}(x_{2})\leq x_{1}^{*}(x_{2},t)-\tfrac{\varepsilon}{41}\). We may then define \(x_{1}^{\flat}(x_{2})\) as the largest value of \(x_{1}\) for which \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})=-\tfrac{17}{20}\) and prove that for all \(x_{1}<x_{1}^{\flat}(x_{2})\) we must have \(\mathsf{D}_{1}w_{0}(x_{1},x_{2})>-\tfrac{17}{20}\), yielding also the uniqueness of \(x_{1}^{\flat}(x_{2})\). To see this, note that when \(x_{1}<x_{1}^{\flat}(x_{2})\), then \(x_{1}-x_{1}^{\vee}(x_{2})<-\tfrac{17}{40}\varepsilon<-\varepsilon^{\frac{7}{4}}\). Hence, by assumption ((viii)) on the initial data we know that for all \(x_{1}\in[-13\pi\varepsilon,x_{1}^{\flat}(x_{2}))\) with \(\mathsf{D}_{1}w_{0}(x)<-\tfrac would need to increase as a function of \(x_{1}\), but ((viii)) implies that \(\mathsf{D}_{1}w_{0}(x)\) can only increase in \(x_{1}\) if \(\mathsf{D}_{1}w_{0}(x)\geq-\frac{1}{3}\). Thus, we have shown that (14.77) gives a well-defined object, and that \[x_{1}^{\flat}(x_{2})\leq x_{1}^{\vee}(x_{2})-\tfrac{1}{40}\varepsilon\,,\qquad x _{1}^{\flat}(x_{2})\leq\hat{x}_{1}(x_{2})-\tfrac{1}{41}\varepsilon\,, \tag{14.78a}\] \[\mathsf{D}_{1}w_{0}(x)>-\tfrac{17}{20}\text{ for all }x_{1}<x_{1}^{ \flat}(x_{2})\,,\qquad-1<\mathsf{D}_{1}w_{0}(x)<-\tfrac{17}{20}\text{ for all }x_{1}^{\flat}(x_{2})<x_{1}<x_{1}^{\vee}(x_{2})\,. \tag{14.78b}\] In particular, (14.78b), the inequality \(-\tfrac{17}{20}<-\tfrac{1}{3}\), and assumption (viii) imply that \[\mathsf{D}_{1}^{3}w_{0}(x)\geq\tfrac{1}{3}\text{ for all }x_{1}^{\flat}(x_{2}) \leq x_{1}\leq x_{1}^{\vee}(x_{2})\,. \tag{14.78c}\] Using (14.78c) we may also obtain a lower bound for the value of \(x_{1}^{\flat}(x_{2})\). Since \(\mathsf{D}_{1}^{2}w_{0}(x_{1}^{\vee}(x_{2}),x_{2})=0\), we may write \(\mathsf{D}_{1}w_{0}(x_{1}^{\flat}(x_{2}),x_{2})-\mathsf{D}_{1}w_{0}(x_{1}^{ \vee}(x_{2}),x_{2})=\tfrac{1}{2}(\tfrac{x_{1}^{\flat}(x_{2})-x_{1}^{\vee}(x_ {2})}{\varepsilon})^{2}\mathsf{D}_{1}^{3}w_{0}(x_{1}^{\prime},x_{2})\) for some \(x_{1}^{\prime}\in(x_{1}^{\flat}(x_{2}),x_{1}^{\vee}(x_{2}))\). Therefore, we deduce \(|x_{1}^{\flat}(x_{2})-x_{1}^{\vee}(x_{2})|\leq\sqrt{6}\varepsilon|\mathsf{D}_ {1}w_{0}(x_{1}^{\flat}(x_{2}),x_{2})-\mathsf{D}_{1}w_{0}(x_{1}^{\vee}(x_{2}),x _{2})|^{\frac{1}{2}}\leq\sqrt{6}\varepsilon\sqrt{3/20}\leq\tfrac{19}{20}\varepsilon\), so \[x_{1}^{\flat}(x_{2})\geq x_{1}^{\vee}(x_{2})-\tfrac{19}{20}\varepsilon\,, \qquad x_{1}^{\flat}(x_{2})\geq\mathring{x}_{1}(x_{2})-\varepsilon\,. 
\tag{14.78d}\] We also recall from Section 13.1 that there exists a unique \(x_{1}^{\sharp}(x_{2})>x_{1}^{\vee}(x_{2})\) such that \(\mathsf{D}_{1}w_{0}(x_{1}^{\sharp}(x_{2}),x_{2})=-\tfrac{17}{20}\); cf. (13.1). Moreover, in analogy to (14.78) it is shown in Section 13.1 (just like in the above argument), that \[x_{1}^{\sharp}(x_{2})\geq x_{1}^{\vee}(x_{2})+\tfrac{1}{40} \varepsilon\,,\qquad x_{1}^{\sharp}(x_{2})\geq\mathring{x}_{1}(x_{2})+\tfrac{ 1}{41}\varepsilon\,, \tag{14.79a}\] \[\mathsf{D}_{1}w_{0}(x)>-\tfrac{17}{20}\text{ for all }x_{1}>x_{1}^{ \sharp}(x_{2})\,,\qquad-1<\mathsf{D}_{1}w_{0}(x)<-\tfrac{17}{20}\text{ for all }x_{1}^{\vee}(x_{2})<x_{1}<x_{1}^{\sharp}(x_{2})\,,\] (14.79b) \[\mathsf{D}_{1}^{3}w_{0}(x)\geq\tfrac{1}{3}\text{ for all }x_{1}^{ \vee}(x_{2})\leq x_{1}\leq x_{1}^{\sharp}(x_{2})\,,\] (14.79c) \[x_{1}^{\sharp}(x_{2})\leq x_{1}^{\vee}(x_{2})+\tfrac{19}{20} \varepsilon\,,\qquad x_{1}^{\sharp}(x_{2})\leq\mathring{x}_{1}(x_{2})+ \varepsilon\,. \tag{14.79d}\] The bounds obtained in (14.78) for \(w_{0}\) have two immediate consequences regarding the behavior of \(J_{g}\), when combined with Corollary 6.2. The first bound in (14.78b) together with the support assumption (4.7) for \((w_{0})_{,1}\), and the bound (6.24a), show that whenever \((x,t)\in\hat{\mathcal{H}}^{\delta}\) is such that \(x_{1}\not\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\), \[J_{g}(x,t) \geq 1+(t-\mathfrak{t}_{\mathsf{in}})\tfrac{1+\alpha}{2 \varepsilon}\big{(}\mathsf{D}_{1}w_{0}(x)-\varepsilon\mathsf{C}_{\mathsf{J}_{ \mathsf{J}}}\big{)}\] \[\geq 1-\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{51}{50}\cdot \tfrac{1+\alpha}{2\varepsilon}\big{(}\tfrac{17}{20}|\mathbf{x}_{|x_{1}|\leq 13 \pi\varepsilon}+\varepsilon\mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\big{)}\] \[\geq\tfrac{1}{9}\big{(}\mathbf{1}_{-13\pi\varepsilon\leq x_{1}\leq x _{1}^{\flat}(x_{2})}+\mathbf{1}_{x_{1}^{\sharp}(x_{2})\leq x_{1}\leq 13\pi \varepsilon}\big{)}+(1-\mathring{C}\varepsilon)\mathbf{1}_{13\pi\varepsilon<|x_{1}| \leq\pi}\,.\] (14.80a) Similarly, assumption ( 4.10 ), and the fact that \[t^{*}(x_{2})\leq\mathfrak{t}_{\mathsf{in}}\] (which holds by the construction of \[\overline{J}_{g}\], see the proof of Lemma 6.5 ), gives that whenever \[(x,t)\in\hat{\mathcal{H}}^{\delta}\] is such that \[\mathfrak{t}_{\mathsf{in}}\leq t\leq\tfrac{1}{2}(\mathfrak{t}_{\mathsf{in}}+ \mathfrak{t}_{\mathsf{in}})\], we have \[J_{g}(x,t)\geq 1+(t-\mathfrak{t}_{\mathsf{in}})\tfrac{1+\alpha}{2 \varepsilon}\big{(}\mathsf{D}_{1}w_{0}(x)-\varepsilon\mathsf{C}_{\mathsf{J}_{ \mathsf{J}}}\big{)}\geq 1-\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{51}{100}\cdot \tfrac{1+\alpha}{2\varepsilon}\big{(}1+\varepsilon\mathsf{C}_{\mathsf{J}_{ \mathsf{J}}}\big{)}\geq\tfrac{4}{9}\,. \tag{14.80b}\] It thus remains to consider points \((x,t)\in\hat{\mathcal{H}}^{\delta}\) such that \(x_{1}\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\) and \(\tfrac{1}{2}(\mathfrak{t}_{\mathsf{in}}+\mathfrak{t}_{\mathsf{in}})<t\leq t^{*}(x _{2})\). In this region, the bounds (14.78c), (6.24e) with \(i=j=1\), (6.53), (6.63), and (4.9), give \[\mathsf{D}_{1}^{2}J_{g}(x,t) \geq(t-\mathfrak{t}_{\mathsf{in}})\tfrac{1+\alpha}{2\varepsilon} \big{(}\mathsf{D}_{1}^{3}w_{0}(x)-\varepsilon\mathring{C}\mathsf{K}(\mathsf{B}_{ \mathsf{6}})\big{)}\] \[\geq\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{51}{100}\cdot \tfrac{1+\alpha}{2\varepsilon}\big{(}\tfrac{1}{3}-\varepsilon\mathring{C} \mathsf{K}(\mathsf{B}_{\mathsf{6}})-\varepsilon^{2}\mathring{C}\mathsf{K}( \mathsf{B}_{\mathsf{6}})\big{)}\geq\tfrac{1}{6}\,. 
\tag{14.80c}\] With (14.80) in hand, we turn to the second part of the proof and establish (14.71a)-(14.71b) for \((x,t)\in\hat{\mathcal{H}}_{+}^{\delta}\). We note that by definition, this is the set \(\{(x,t)\in\hat{\mathcal{H}}^{\delta}\colon\Theta^{\delta}(x,\mathfrak{t}_{ \mathsf{in}})<t<\Theta^{\delta}(x,t^{*}(x_{2}))\}\), which may also be written as \(\{(x,t)\in\mathcal{H}^{\delta}\colon\exists t^{\prime}\in[\mathfrak{t}_{ \mathsf{in}},t^{*}(x_{2})),\text{ with }t=\Theta^{\delta}(x,t^{\prime})\}\). We split this spacetime into three different regions, as follows: _(i) The case \(\mathfrak{t}_{\ Combining the two bounds above results in \[\mathcal{J}(x,t)\leq J_{\text{\tiny g}}(x,t^{\prime})+\mathring{\mathcal{C}}\mathsf{ K}(\mathsf{B}_{6})\varepsilon^{2}\,,\qquad\text{where}\qquad t=\Theta^{\delta}(x,t^{ \prime}), \tag{14.81}\] for all \(t^{\prime}\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2}))\) and all \(x\in\mathbb{T}^{2}\) such that \((x,\Theta^{\delta}(x,t^{\prime}))\in\mathring{\mathcal{H}}^{\delta}\). Next, we bound \(J_{\text{\tiny g}}(x,t^{\prime})\) from above in terms of \(J_{\text{\tiny g}}(x,t)=J_{\text{\tiny g}}(x,\Theta^{\delta}(x,t^{\prime}))\), via the fundamental theorem of calculus in time and (6.24d) as \[J_{\text{\tiny g}}(x,t^{\prime})=J_{\text{\tiny g}}(x,t)-\int_{t^{\prime}}^{t} \partial_{t}J_{\text{\tiny g}}(x,t^{\prime\prime})\text{d}t^{\prime\prime}\leq J _{\text{\tiny g}}(x,t)+\tfrac{1+\alpha}{2}\big{(}|(w_{0})_{,1}\,(x)|+\mathsf{C }_{\mathsf{J}_{\mathsf{J}}}\big{)}|t-t^{\prime}|\,.\] It is thus clear that we need a bound for \(t-t^{\prime}=\Theta^{\delta}(x_{1},x_{2},t^{\prime})-t^{\prime}\). Recall cf. (14.10b) that \(\Theta^{\delta}(\dot{x}_{1}(x_{2}),x_{2},t^{\prime})=t^{\prime}\). The mean value theorem in \(x_{1}\), the bound (14.30a), and the bootstrap (5.37k) present in (14.132a) give \[|t-t^{\prime}|=|\Theta^{\delta}(x_{1},x_{2},t^{\prime})-t^{\prime}| =|\Theta^{\delta}(x_{1},x_{2},t^{\prime})-\Theta^{\delta}(\dot{x} _{1}(x_{2}),x_{2},t^{\prime})|\] \[=|x_{1}-\dot{x}_{1}(x_{2})|\,|\partial_{1}\Theta^{\delta}(x_{1}^ {\prime},x_{2},t^{\prime})|\leq|x_{1}-\dot{x}_{1}(x_{2})|\tfrac{5}{\alpha \kappa_{0}}\,.\] Combining the above two estimates we thus arrive at the bound \[J_{\text{\tiny g}}(x,t^{\prime})\leq J_{\text{\tiny g}}(x,t)+\tfrac{5(1+\alpha)}{2 \alpha\kappa_{0}}|x_{1}-\dot{x}_{1}(x_{2})|\big{(}|(w_{0})_{,1}\,(x)|+\mathsf{ C}_{\mathsf{J}_{\mathsf{J}}}\big{)}\,, \tag{14.82}\] valid for all \(t^{\prime}\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2}))\) and all \(x\in\mathbb{T}^{2}\) such that \((x,t)=(x,\Theta^{\delta}(x,t^{\prime}))\in\mathring{\mathcal{H}}^{\delta}\). 
At this stage, combining (4.7), (4.10), (14.81), (14.82), and (14.12), we obtain \[\mathcal{J}(x,t) \leq J_{\text{\tiny g}}(x,t)+\tfrac{5(1+\alpha)}{2\alpha\kappa_{0 }}|x_{1}-\dot{x}_{1}(x_{2})|\big{(}|(w_{0})_{,1}\,(x)|+\mathsf{C}_{\mathsf{J}_ {\mathsf{J}}}\big{)}+\mathring{\mathcal{C}}\mathsf{K}(\mathsf{B}_{6}) \varepsilon^{2}\] \[\leq J_{\text{\tiny g}}(x,t)+\mathbf{1}_{|x_{1}|<13\pi\varepsilon} \tfrac{5(1+\alpha)}{2\alpha\kappa_{0}}\cdot 26\pi\varepsilon\cdot\big{(}\varepsilon^{-1}+ \mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\big{)}\] \[\qquad+\mathbf{1}_{|x_{1}|>13\pi\varepsilon}\tfrac{5(1+\alpha)}{2 \alpha\kappa_{0}}\cdot 2(13\pi+65\alpha(1+\alpha)\kappa_{0})\varepsilon\cdot\big{(}0+ \mathsf{C}_{\mathsf{J}_{\mathsf{J}}}\big{)}+\mathring{\mathcal{C}}\mathsf{K}( \mathsf{B}_{6})\varepsilon^{2}\] \[\leq J_{\text{\tiny g}}(x,t)+\mathbf{1}_{|x_{1}|<13\pi\varepsilon} \tfrac{210(1+\alpha)}{\alpha\kappa_{0}}+\mathring{\mathcal{C}}\varepsilon \mathbf{1}_{|x_{1}|>13\pi\varepsilon}\,, \tag{14.83}\] upon taking \(\varepsilon\) sufficiently small, and for all \(t^{\prime}\in[\mathfrak{t}_{\text{in}},t^{*}(x_{2}))\) and all \(x\in\mathcal{X}_{\text{fin}}\) such that \((x,\Theta^{\delta}(x,t^{\prime}))\in\mathring{\mathcal{H}}^{\delta}\). Note that up to this point, the condition \(\mathfrak{t}_{\text{in}}\leq t\leq\tfrac{1}{2}(\mathfrak{t}_{\text{in}}+ \mathfrak{t}_{\text{fin}})\) was not used. We use this condition now, in order to bound \(J_{\text{\tiny g}}(x,t)\) from below via (14.80b) by \(\tfrac{4}{9}\). Combined with (14.83), this results in the bound \[\mathcal{J}(x,t)\leq J_{\text{\tiny g}}(x,t)\big{(}1+\mathbf{1}_{|x_{1}|\leq 13 \pi\varepsilon}\tfrac{9}{4}\cdot\tfrac{210(1+\alpha)}{\alpha\kappa_{0}}+ \mathring{\mathcal{C}}\varepsilon\mathbf{1}_{|x_{1}|>13\pi\varepsilon}\big{)}\,. \tag{14.84}\] Thus, if we ensure that \(\kappa_{0}\) is taken sufficiently large with respect to \(\alpha\) (only), in order to ensure that (14.38) holds, then \(\tfrac{9}{4}\cdot\tfrac{210(1+\alpha)}{\alpha\kappa_{0}}\leq\tfrac{1}{500}\) and upon taking \(\varepsilon\) sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we deduce \(\mathcal{J}(x,t)\leq\tfrac{101}{100}J_{\text{\tiny g}}(x,t)\), which is consistent with both (14.71a) and (14.71b). _(ii)_ _The case \(\tfrac{1}{2}(\mathfrak{t}_{\text{in}}+\mathfrak{t}_{\text{fin}})\leq t\leq\min \{\Theta^{\delta}(x,t^{*}(x_{2})),\mathfrak{t}_{\text{fin}}\}\), \(x_{1}\not\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\). We still refer to the previously obtained bound (14.83), except that in this region we use a different lower bound for \(J_{\text{\tiny g}}\). This lower bound is coming from (14.80a), which gives \(J_{\text{\tiny g}}(x,t)\geq\tfrac{1}{9}\) whenever \(x_{1}\not\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\). Similarly to (14.84), we thus obtain_ \[\mathcal{J}(x,t)\leq J_{\text{\tiny g}}(x,t)\big{(}1+\mathbf{1}_{|x_{1}|\leq 13 \pi\varepsilon}9\cdot\tfrac{210(1+\alpha)}{\alpha\kappa_{0}}+\mathring{\mathcal{ C}}\varepsilon\mathbf{1}_{|x_{1}|>13\pi\varepsilon}\big{)}\,. 
\tag{14.85}\] _Thus, if \(\kappa_{0}\) is taken sufficiently large with respect to \(\alpha\) to ensure that (14.38) holds, then \(9\cdot\tfrac{210(1+\alpha)}{\alpha\kappa_{0}}\leq\tfrac{1}{200}\) and upon taking \(\varepsilon\) sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), we deduce \(\mathcal{J}(x,t)\leq\tfrac{101}{100}J_{\text{\tiny g}}(x,t)\), which is consistent with both (14.71a) and (14.71b)._ _(iii)_ _The case \(\tfrac{1}{2}(\mathfrak{t}_{\text{in}}+\mathfrak{t}_{\text{fin}})\leq t\leq\min \{\Theta^{\delta}(x,t^{*}(x_{2})),\mathfrak{t}_{\text{fin}}\}\), \(x_{1}\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\). This region requires a different analysis, which is not based on (14.83); this is because we cannot obtain a uniform strictly positive lower bound for \(J_{\text{\tiny g}}\) in this spacetime. Instead, both \(\mathcal{J}\) and \(J_{\text{\tiny g}}\) may degenerate towards \(0\), and we need to carefully track this degeneracy. To track this degeneracy, we employ a Taylor series expansion in \(x_{1}\), which in light of (14.58) and the fact that by construction we have \(\Theta^{\delta}(\dot{x}_{1}(x_{2}),x_{2},t^{\prime})=t^{\prime}\), gives_ \[\tfrac{100}{101}\mathcal{J}(x_{1},x_{2},t)=\tfrac{100}{101}\mathcal{J}(x _{1} \[-\tfrac{1}{101}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})-(x_{1}- \mathring{x}_{1}(x_{2}))\big{(}J_{g},_{1}+\partial_{t}J_{g}\partial_{1}\Theta^{ \delta}\big{)}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\] \[-\tfrac{1}{2}(x_{1}-\mathring{x}_{1}(x_{2}))^{2}\Big{(}J_{g},_{11} (x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1}^{\prime},x_{2},t^{\prime}))+2 \partial_{t}J_{g},_{1}(x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1}^{\prime},x_{2 },t^{\prime}))\partial_{1}\Theta^{\delta}(x_{1}^{\prime},x_{2},t^{\prime})\] \[+\partial_{t}^{2}J_{g}(x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1} ^{\prime},x_{2},t^{\prime}))(\partial_{1}\Theta^{\delta})^{2}(x_{1}^{\prime},x _{2},t^{\prime})\] \[+\partial_{t}J_{g}(x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1}^{ \prime},x_{2},t^{\prime}))\partial_{1}^{2}\Theta^{\delta}(x_{1}^{\prime},x_{2 },t^{\prime})\Big{)}\] \[=:J_{g}(x_{1},x_{2},t)-\mathsf{Err} \tag{14.86}\] for some \(x_{1}^{\prime}\) which lies in between \(x_{1}\) and \(\mathring{x}_{1}(x_{2})\), and where the error term \(\mathsf{Err}\) is defined by the last three terms (containing the minus signs) in the earlier equality. Our claim is that for the range of \(x_{1}\) and \(t=\Theta^{\delta}(x,t^{\prime})\) considered here, we have that \(\mathsf{Err}\geq 0\), which then would result in the estimate \(\partial(x,t)\leq\frac{101}{100}J_{g}(x,t)\), thereby completing the proof of (14.71a) and (14.71b) in the spacetime \(\hat{\mathcal{H}}_{+}^{\delta}\). It thus remains to show that \(\mathsf{Err}\geq 0\). To see this, we first analyze the coefficient of the term which is linear in \(x_{1}-\mathring{x}_{1}(x_{2})\), namely \((J_{g},_{1}+\partial_{t}J_{g}\partial_{1}\Theta^{\delta})(\mathring{x}_{1}(x _{2}),x_{2},t^{\prime})\). Due to Proposition 6.7 we have that \(J_{g},_{1}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})=-\int_{t^{\prime}}^{t^{ \prime}(x_{2})}\partial_{t}J_{g},_{1}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime \prime})\mathrm{d}t^{\prime\prime}\). 
By also appealing to (6.24f) and (6.51) (at time \(t^{*}(x_{2})\)) we deduce that for \(t^{\prime}\leq t^{*}(x_{2})\), we have \[|J_{g},_{1}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})|\leq(t^{*}(x_{2})-t^{ \prime})\tfrac{9(1+\alpha)}{\varepsilon}\mathsf{C}_{\underline{2}_{N}}\,.\] (14.87a) On the other hand, \[J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})=-\int_{t^{\prime}}^{t^{*}(x_{ 2})}\partial_{t}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime\prime})\mathrm{ d}t^{\prime\prime}\], and by also appealing to ( 6.24d) and ( 6.53 ) (at time \[t^{*}(x_{2})\] ) we deduce that for \[t^{\prime}\leq t^{*}(x_{2})\], we have \[J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime}) \geq(t^{*}(x_{2})-t^{\prime})\tfrac{1+\alpha}{2}(-(w_{0}),_{1}( \mathring{x}_{1}(x_{2}),x_{2})-\mathsf{C}_{\underline{1}_{h}})\] \[\geq(t^{*}(x_{2})-t^{\prime})\tfrac{1+\alpha}{2}(-(w_{0}),_{1}(x _{1}^{\prime}(x_{2}),x_{2})-2\mathsf{C}_{\underline{1}_{h}})\geq(t^{*}(x_{2})-t ^{\prime})\tfrac{2(1+\alpha)}{5\varepsilon}. \tag{14.87b}\] The inequalities (14.87a) and (14.87b) combined give \[|J_{g},_{1}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})|\leq 23\mathsf{C}_{ \underline{2}_{N}}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\,, \tag{14.87c}\] for \(t^{\prime}\leq t^{*}(x_{2})\). In order to estimate the \(\partial_{t}J_{g}\partial_{1}\Theta^{\delta}\) term, we use the bootstrap (5.37l) contained in (14.132a), together with (14.29a), to deduce \[|\partial_{t}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\partial_{1}\Theta^ {\delta}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})|\leq\tfrac{4(1+\alpha)}{ \varepsilon}\tfrac{4}{\alpha\kappa_{0}}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^ {\prime})=\tfrac{16(1+\alpha)}{\alpha\kappa_{0}\varepsilon}J_{g}(\mathring{x}_ {1}(x_{2}),x_{2},t^{\prime})\,. \tag{14.88}\] Combining (14.87c) and (14.88) with \(x_{1}\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\), (14.78d), and (14.79d), we obtain \[|x_{1}-\mathring{x}_{1}(x_{2})|\big{|}(J_{g},_{1}+\partial_{t}J_ {g}\partial_{1}\Theta^{\delta})(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\big{|} \leq|x_{1}-\mathring{x}_{1}(x_{2})|\tfrac{17(1+\alpha)}{\alpha \kappa_{0}\varepsilon}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\] \[\leq\tfrac{34(1+\alpha)}{\alpha\kappa_{0}}J_{g}(\mathring{x}_{1}(x _{2}),x_{2},t^{\prime})\leq 10^{-4}J_{g}(\mathring{x}_{1}(x_{2}),x_{2},t^{\prime})\,, \tag{14.89}\] where in the last inequality we have used that \(\kappa_{0}\) satisfies (14.38). Since \(10^{-4}<\frac{1}{101}\), the above estimate turns out to be sufficiently strong. Second, we analyze the coefficient of the term in (14.100) which is quadratic in \(x_{1}-\mathring{x}_{1}(x_{2})\). For this purpose, we note that since \(x_{1},\mathring{x}_{1}(x_{2})\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\), we also have \(x_{1}^{\prime}\in(x_{1}^{\flat}(x_{2}),x_{1}^{\sharp}(x_{2}))\). 
Moreover, since we care about \(\tfrac{1}{2}(\mathtt{t}_{\mathsf{in}}+\mathtt{t}_{\mathsf{fin}})\leq t\leq \min\{\Theta^{\delta}(x,t^{*}(x_{2})),\mathtt{t}_{\mathsf{fin}}\}\), we may apply (14.80c) to deduce \[J_{g},_{11}(x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1}^{\prime},x_{2},t^{\prime})) \geq\tfrac{1}{6\varepsilon^{2}}\,.\] (14.90a) Next, we appeal to ( 6.24f), ( 14.10), ( 14.30a), and ( 14.38), to deduce \[2\big{|}\partial_{t}J_{g},_{1}(x_{1}^{\prime},x_{2},\Theta^{\delta}(x_{1}^{\prime},x_ {2},t^{\prime}))\partial_{1}\Theta^{\delta}(x_{1}^{\prime},x_{2},t^{\prime}) \big{|}\leq 2\cdot\tfrac{1+\alpha}{2\varepsilon^{2}}\big{(}|\mathsf{D}_{1}^{2}w_{0}(x_{1}^{ \prime},x_{2})|+\mathring{C}\varepsilon\big{)}\cdot\tfrac{5}{\alpha\kappa_{0}} \leq\tfrac{11(1+\alpha)}{\alpha\kappa_{0}\varepsilon^{2}}\leq\tfrac{10^{-4}}{ \varepsilon^{2}}\,.\] (14.90b) In a similar fashion, by appealing Thus, putting together the bounds established in (14.90), we obtain that the coefficient of appearing appearing in the definition of in (14.86), satisfies (14.91) To summarize, we deduce from (14.89) and (14.91) that the term appearing in (14.86) satisfies (14.92) and hence this term is non-negative. In light of (14.86), this completes the proof of (14.71a) and (14.71b) in the spacetime. #### 14.8.3. Damping and anti-damping properties of One of the crucial properties of the weight is that its time derivative is strictly negative. This fact is recorded next. **Lemma 14.4**.: _Assume that satisfies (14.38), that the bootstraps (14.132) hold, and that is then taken sufficiently small with respect to, and. Then for all, we have_ (14.93a) (14.93c) (14.93d) (14.93e) _for all. Here is the constant from (14.29e)._ Proof of Lemma 14.4.: In order to prove (14.93a), we recall that is continuous across, and that from the definitions (14.62a), (14.63) and the lower bound in (14.69), we have for all. This bound is consistent with (14.93a) upon taking to be sufficiently small, since. For the, we write for some, and use identities (14.59) to write Appealing to the bounds established earlier in (6.43), (14.26a), and (14.29), for each we obtain This bound is consistent with (14.93a) upon taking to be sufficiently small, since. This completes the proof of (14.93a). We next turn to the bounds (14.93b)-(14.93d). In the spacetime upon writing and and (14.93), and (14.93), we have (14.94b) (14.94c) On the other hand, for, we may directly differentiate (14.66), which in turn requires derivative estimates for the stopping time, the flow, and the function. Estimates for the flow and its first order derivatives were previously obtained in (6.8). Concerning the space derivatives of \(\mathsf{T}_{\xi}\), implicitly differentiating (14.65b) and appealing to (14.29)-(14.30a), we deduce \[\big{|}\partial_{1}\mathsf{T}_{\xi}(x,t)\big{|} \leq\tfrac{1}{1-C\varepsilon^{2}}\big{(}\tfrac{1}{10^{5}(1+\alpha )}+\mathring{C}\varepsilon^{2}\big{)}\leq\tfrac{2}{10^{7}(1+\alpha)}\,, \tag{14.95a}\] \[\big{|}\partial_{2}\mathsf{T}_{\xi}(x,t)\big{|} \leq\tfrac{1}{1-C\varepsilon^{2}}\cdot 5\cdot 10^{3}(1+\alpha)^{2} \varepsilon\cdot(1+\mathring{C}\varepsilon^{2})\leq 6\cdot 10^{3}(1+\alpha)^{2} \varepsilon\,, \tag{14.95b}\] for all \((x,t)\in\mathring{\mathcal{H}}_{-}^{\xi}\). 
For the function \(\mathfrak{f}\), the pointwise estimate (14.69) is supplemented with the uniform derivative bounds on \(\mathbb{T}^{2}\) \[\big{|}\partial_{1}\mathfrak{f}(x)\big{|} \leq\big{(}\tfrac{1+\alpha}{2\varepsilon}+\mathring{C}\big{)} \cdot\tfrac{\cdot\lfloor\partial_{1}\mathsf{e}^{\delta}(x,\mathsf{t}_{\mathsf{ h}})+\mathring{C}\varepsilon}{(1-3\cdot 10^{-5})^{2}}\leq\tfrac{1+\alpha}{10^{5} \varepsilon^{2}}\,, \tag{14.96a}\] \[\big{|}\partial_{2}\mathfrak{f}(x)\big{|} \leq\big{(}\tfrac{1+\alpha}{2\varepsilon}(4\mathsf{C}_{\mathsf{ data}}+2)+\mathring{C}\big{)}\cdot\tfrac{1+\mathring{C}\varepsilon^{2}}{1-3 \cdot 10^{-5}}+\big{(}\tfrac{1+\alpha}{2\varepsilon}+\mathring{C}\big{)}\cdot \tfrac{\mathsf{b}_{\mathsf{2}}+\mathring{C}\langle\mathsf{B}_{\mathsf{b}} \rangle\varepsilon^{2}}{(1-3\cdot 10^{-5})^{2}}\leq\tfrac{(1+\alpha)}{ \varepsilon}(4\mathsf{C}_{\mathsf{data}}+2+\mathsf{b}_{\mathsf{2s}})\,, \tag{14.96b}\] which in turn follow from the pointwise bootstrap assumptions (14.132a), the assumptions on the initial data in Section 4.2, and the bounds (14.19), (14.29)-(14.30). Finally, upon differentiating (14.66), and appealing to the bounds (14.69), (14.75)-(14.76), (14.95)-(14.96), and (6.8), we obtain \[|\partial_{1}\updelta(x,t)| \leq\big{|}\partial_{1}\mathsf{T}_{\xi}(x,t)\mathfrak{f}(x_{1}, \mathsf{f}_{\xi}(x,\mathsf{T}_{\xi}(x,t)))\big{|}+\big{|}\mathsf{T}_{\xi}(x,t )-t\big{|}\big{(}\|\partial_{1}\mathfrak{f}\|_{L^{\infty}_{x}}+\|\partial_{1} \xi_{t}\|_{L^{\infty}_{x,t}}\|\partial_{2}\mathfrak{f}\|_{L^{\infty}_{x}}\big{)}\] \[\leq\tfrac{2}{10^{5}\varepsilon}+\tfrac{12\varepsilon}{1+\alpha} \cdot\tfrac{52}{50}\cdot\big{(}\tfrac{(1+\alpha)}{10^{5}\varepsilon^{2}}+ \mathring{C}\big{)}\leq\tfrac{1}{10^{3}\varepsilon}\,, \tag{14.97a}\] \[|\partial_{2}\updelta(x,t)| \leq\big{|}\partial_{2}\mathsf{T}_{\xi}(x,t)\mathfrak{f}(x_{1}, \mathsf{f}_{\xi}(x,\mathsf{T}_{\xi}(x,t)))\big{|}+\big{|}\mathsf{T}_{\xi}(x,t )-t\big{|}\|\partial_{2}\xi_{t}\|_{L^{\infty}_{x,t}}\|\partial_{2}\mathfrak{f} \|_{L^{\infty}_{x}}\] \[\leq 6\cdot 10^{3}(1+\alpha)^{2}\varepsilon\cdot\tfrac{1+ \alpha}{2\varepsilon}\cdot\tfrac{101}{100}+\tfrac{2\varepsilon}{1+\alpha} \cdot\tfrac{52}{50}\cdot\big{(}1+\mathring{C}\varepsilon\big{)}\cdot\tfrac{( 1+\alpha)}{\varepsilon}(4\mathsf{C}_{\mathsf{data}}+2+\mathsf{b}_{\mathsf{2s}})\] \[\leq 4\cdot 10^{3}(1+\alpha)^{3}+\tfrac{21}{10}(4\mathsf{C}_{ \mathsf{data}}+2+\mathsf{b}_{\mathsf{2s}})\,, \tag{14.97b}\] for all \((x,t)\in\mathring{\mathcal{H}}_{-}^{\delta}\). The \(\partial_{2}\updelta\) estimate above, together with (14.62a) and (14.69) shows that \[-\tfrac{1+\alpha}{\varepsilon}\leq-\tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{1 01}{100}-\mathring{C}\varepsilon\leq\partial_{t}\updelta(x,t)\leq-\tfrac{1+ \alpha}{2\varepsilon}\cdot\tfrac{89}{100}+\mathring{C}\varepsilon\leq-\tfrac{2( 1+\alpha)}{5\varepsilon} \tag{14.97c}\] for all \((x,t)\in\mathring{\mathcal{H}}_{-}^{\delta}\). Together, the estimates (14.94) and (14.97) complete the proof of the bounds (14.93b)-(14.93d), for all \((x,t)\in\mathring{\mathcal{H}}_{+}^{\delta}\cup\mathring{\mathcal{H}}_{-}^{\delta}\). In order to complete the proof of the Lemma, we need to establish (14.93e). 
For \((x,t)\in\mathring{\mathcal{H}}_{-}^{\delta}\), we apply \((\partial_{t}+V\partial_{2})\) to (14.62a), appeal to the fact that \(\mathfrak{f}\) is independent of \(t\), that \(|V|\leq\mathring{C}\varepsilon\), and that \(\partial_{t}\mathfrak{f}\) satisfies (14.96), to deduce that \[\big{|}(\partial_{t}+V\partial_{2})^{2}\updelta(x,t)\big{|}\leq\mathring{C}\,,\] for all \((x,t)\in\mathring{\mathcal{H}}_{-}^{\delta}\). This estimate is better than what is required by (14.93e). Lastly, for \((x,t)=(x,\Theta^{\delta}(x,t^{\prime}))\in\mathring{\mathcal{H}}_{+}^{\delta}\) we simply decompose \[(\partial_{t}+V\partial_{2})^{2}\updelta=\partial_{tt}\updelta+2V\partial_{2t} \updelta+V^{2}\partial_{22}\updelta+\partial_{t}V\partial_{2}\updelta\,.\] Then, upon differentiating (14.59b) and (14.59c), we derive \[\partial_{tt}\updelta(x,\Theta^{\delta}(x,t^{\prime})) =\tfrac{\partial_{tt}\updelta(x,t^{\prime})-\partial_{tt}\updelta^ {\delta}(x,t^{\prime})\partial_{t}\updelta(x,\Theta^{\delta}(x,t^{\prime}))}{( \partial_{t}\Theta^{\delta}(x,t^{\prime}))^{2}}\,,\] \[\partial_{2t}\updelta(x,\Theta^{\delta}(x,t^{\prime})) =\tfrac{\partial_{2t}\updelta(x,t^{\prime})-\partial_{2t}\updelta^ {\delta}(x,t^{\prime})\partial_{t}\updelta(x,\Theta^{\delta}(x,t^{\prime}))- \partial_{2}\updelta^{\delta}(x,t^{\prime})\partial_{tt}\updelta(x,\Theta^{ \delta}(x,t^{\prime}))}{\partial_{t}\updelta^{\delta}(x,t^{\prime})}\,,\] \[\partial_{22}\updelta(x,\Theta^{\delta}(x,t^{\prime})) =\partial_{22}\updelta(x_{2},t^{\prime})-\partial_{22}\updelta^{ \delta}(x,t^{\prime})\partial_{t}\updelta(x,\Theta^{\delta}(x,t^{\prime}))\] \[\qquad\qquad-2\partial_{2}\updelta^{\delta}(x,t^{\prime}) \partial_{2t}\updelta(x,\Theta^{\delta}(x,t^{\prime}))-(\partial_{2}\updelta^{ \delta}(x,t^{\prime}))^{2}\partial_{tt}\updelta(x,\Theta^{\delta}(x,t^{\prime}))\,.\] Using the estimates (5.8), (5.9), (6.43), (6.24g), (14.26a), (14.26c), and (14.29), we deduce \[\big{|}\partial_{tt}\updelta(x,\Theta^{\delta}(x,t^{\prime})) \big{|} =\big{(}\tfrac{200(1+\alpha)}{\varepsilon(1-3\cdot 10^{-5})}\big{)}^{2}+\tfrac{ \mathring{C}}{\varepsilon}\leq\tfrac{201^{2}(1+\alpha)^{2}}{\varepsilon^{2}}\,,\] \[\big{|}\partial_{2t}\updelta(x,\Theta^{\delta}(x,t^{\prime})) \big{|} \leq\tfrac{\mathring{C}}{\varepsilon}\,,\] \[\big{|}\partial_{22}\updelta(x,\Theta^{\delta}(x,t^{\prime})) \big{|} \leq\mathring{C}\,.\] Combining the above estimates, we obtain that \[\big{|}(\partial_{t}+V\partial_{2})^{2}\updelta(x,t)\big{|}\leq\tfrac{201^{2}(1+ \alpha)^ **Lemma 14.5** (**Upstream damping and anti-damping**).: _Assume that \(\kappa_{0}\) satisfies (14.38), that the bootstraps (14.132) hold, and that \(\varepsilon\) is then taken sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\). 
Then, we have that_ \[-\tfrac{3}{2}\tfrac{\sigma^{1}}{2}J_{s}(\partial_{t}+V\partial_{2})\mathcal{J} +\tfrac{\sigma^{3}}{2}(\partial_{t}+V\partial_{2})J_{s}\geq\tfrac{1+\alpha}{6 \varepsilon}\tfrac{\sigma^{1}}{2}J_{s}\;, \tag{14.98}\] _pointwise for all \((x,t)\in\mathring{\mathcal{H}}^{\delta}\)._ Proof of Lemma 14.5.: Using (14.93a), (14.71), and (6.24c), we have that \[-\tfrac{3}{2}\tfrac{\sigma^{1}}{2}J_{s}(\partial_{t}+V\partial_{2 })\mathcal{J}+\tfrac{\sigma^{3}}{2}(\partial_{t}+V\partial_{2})J_{s} \geq\tfrac{3}{2}\tfrac{\sigma^{1}}{2}J_{s}\cdot\tfrac{1+\alpha}{2 \varepsilon}\cdot\tfrac{899}{1000}-\tfrac{\sigma^{3}}{2}\big{(}\tfrac{1+ \alpha}{2\varepsilon}\mathbf{1}_{|x_{1}|\leq 13\pi\varepsilon}+\mathring{C}\big{)}\] \[\geq\mathcal{J}^{\frac{1}{2}}J_{s}\cdot\tfrac{1+\alpha}{2 \varepsilon}\big{(}\tfrac{3}{2}\cdot\tfrac{899}{1000}-\tfrac{101}{100}- \mathring{C}\varepsilon\big{)}\] \[\geq\mathcal{J}^{\frac{1}{2}}J_{s}\cdot\tfrac{1+\alpha}{2 \varepsilon}\cdot\tfrac{1}{3}\,,\] where \(\varepsilon\) was taken sufficiently small to obtain the last inequality. This proves the claimed lower-bound (14.98). ### Pre-shock flattening and the upstream spacetime set It is convenient to flatten the "set of times" \(t^{*}(x_{2})\) at which the pre-shock occur. We define the transformation \[\mathsf{s}=\mathsf{q}(x_{2},t):=\frac{2\varepsilon}{1+\alpha}\left(\frac{t-t^ {*}(x_{2})}{t^{*}(x_{2})-\mathsf{t}_{\text{in}}}\right)=\mathsf{t}_{\text{in}} \left(\frac{t^{*}(x_{2})-t}{t^{*}(x_{2})-\mathsf{t}_{\text{in}}}\right)\;, \tag{14.99}\] for all \((x,t)\in\mathring{\mathcal{H}}^{\delta}\). Sometimes it is more convenient to express (14.99) as \[t=\mathsf{q}^{-1}(x_{2},\mathsf{s})=t^{*}(x_{2})-(t^{*}(x_{2})-\mathsf{t}_{ \text{in}})\tfrac{\mathsf{s}}{\mathsf{t}_{\text{in}}}\,. \tag{14.100}\] From (14.99), we see that the initial \(\mathsf{s}\)-time is given by \[\mathsf{s}_{\text{in}}:=\mathsf{q}(x_{2},\mathsf{t}_{\text{in}})=\mathsf{t}_{ \text{in}}=-\tfrac{2\varepsilon}{1+\alpha}\,. \tag{14.101}\] By design, the set of pre-shock times \(\{t=t^{*}(x_{2})\}\) gets mapped to the time slice \(\{\mathsf{s}=0\}\), which is to say that in \((x,\mathsf{s})\) coordinates, the pre-shock set is given by \[\Xi^{*}:=\left\{\big{(}\hat{\hat{x}}_{1}(x_{2}),x_{2},0\big{)}\colon x_{2}\in \mathbb{T}\right\}. \tag{14.102}\] Next, we note that for all \((x,t)\in\mathring{\mathcal{H}}^{\delta}\) we have \[\mathsf{q}(x_{2},t)\leq\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{\mathsf{t}_{ \text{in}}-t^{*}(x_{2})}{t^{*}(x_{2})-\mathsf{t}_{\text{in}}}\leq\tfrac{2 \varepsilon}{1+\alpha}\cdot\frac{\tfrac{1}{50}\cdot\tfrac{2\varepsilon}{1+ \alpha}+\mathring{C}\varepsilon^{2}}{\tfrac{2\varepsilon}{1+\alpha}- \mathring{C}\varepsilon^{2}}\leq\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{1+ \mathring{C}\varepsilon}{50}\,.\] As such, the \(\mathsf{s}\)-time variable can never exceed \[\mathsf{s}_{\text{fin}}:=\tfrac{2\varepsilon}{1+\alpha}\cdot\max_{x_{2}\in \mathbb{T}}\bigl{(}\tfrac{\mathsf{t}_{\text{in}}-t^{*}(x_{2})}{t^{*}(x_{2})- \mathsf{t}_{\text{in}}}\bigr{)}\leq\tfrac{2\varepsilon}{1+\alpha}\cdot\tfrac{ 1+\mathring{C}\varepsilon}{50}\,. \tag{14.103}\] Taking into account (14.14), for upstream maximal development in \((x,\mathsf{s})\) variables, we use the spacetime set \(\mathcal{H}^{\delta}\) defined by \[\mathcal{H}^{\delta}:=\left\{(x,\mathsf{s})\in\mathcal{X}_{\text{fin}}\times[ \mathsf{s}_{\text{in}},\mathsf{s}_{\text{fin}})\colon\mathsf{s}=\mathsf{q}(x_{2 },t)\,,(x,t)\in\mathring{\mathcal{H}}^{\delta}\right\}. 
\tag{14.104}\] Moreover, we similarly define \[\mathcal{H}^{\delta}_{+}:=\left\{(x,\mathsf{s})\in\mathcal{X}_{ \text{fin}}\times[\mathsf{s}_{\text{in}},\mathsf{s}_{\text{fin}})\colon \mathsf{s}=\mathsf{q}(x_{2},t)\,,(x,t)\in\mathring{\mathcal{H}}^{\delta}_{+} \right\}, \tag{14.105a}\] \[\mathcal{H}^{\delta}_{-}:=\left\{(x,\mathsf{s})\in\mathcal{X}_{ \text{fin}}\times[\mathsf{s}_{\text{in}},\mathsf{s}_{\text{fin}})\colon\mathsf{s}= \mathsf{q}(x_{2},t)\,,(x,t)\in\mathring{\mathcal{H}}^{\delta}_{-}\right\}. \tag{14.105b}\] Given any function \(f\colon\mathring{\mathcal{H}}^{\delta}\to\mathbb{R}\), we define the corresponding function \(\widetilde{f}\colon\mathcal{H}^{\delta}\to\mathbb{R}\) by \[\widetilde{f}(x,\mathsf{s}):=f(x,t),\qquad\text{where}\qquad\mathsf{s}= \mathsf{q}(x_{2},t)\,. \tag{14.106}\] From (14.106), and the chain-rule, (14.99), and (5.13) we obtain \[\partial_{t}f(x,t) =\widehat{\mathsf{Q}}(x_{2})\partial_{\mathsf{s}}\widetilde{f}(x,\mathsf{s})=:\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}} \widetilde{f}(x,\mathsf{s})\,, \tag{14.107a}\] \[\partial_{2}f(x,t) =\bigl{(}\partial_{2}-\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s}) \partial_{\mathsf{s}}\bigr{)}\widetilde{f}(x,\mathsf{s})=:\widetilde{\mathsf{D}}_ {2}\widetilde{f}(x,\mathsf{s})\,,\] (14.107b) \[\partial_{1}f(x,t) =\partial_{1}\widetilde{f}(x,\mathsf{s})=:\tfrac{1}{\varepsilon} \widetilde{\mathsf{D}}_{1}\widetilde{f}(x,\mathsf{s})\,, \tag{14.107c}\] where for compactness of notation we have introduced the functions \[\widehat{\mathsf{Q}}(x_{2})=\partial_{t}\mathsf{q}(x_{2},t) =\tfrac{2\varepsilon}{1+\alpha}\tfrac{1}{t^{*}(x_{2})-\mathsf{t}_{ \text{in}}}=\tfrac{-\mathsf{t}_{\text{in}}}{t^{*}(x_{2})-\mathsf{t}_{\text{in}}}\,, \tag{14.108a}\] \[\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})=-\partial_{2}\mathsf{q}(x_{2},t)=\tfrac {2\varepsilon}{1+\alpha}\tfrac{\partial_{2}t^{*}(x_{2})}{t^{*}(x_{2})-t_{\mathsf{ h}}}\big{(}1-\tfrac{t^{*}(x_{2})-t}{t^{*}(x_{2})-t_{\mathsf{h}}}\big{)}= \partial_{2}t^{*}(x_{2})\widehat{\mathsf{Q}}(x_{2})\big{(}1-\tfrac{\mathsf{s} }{\mathsf{s}_{\mathsf{h}}}\big{)}\,. \tag{14.108b}\] For later use, it is also convenient to define \[\mathsf{Q}(x,\mathsf{s}):=\widehat{\mathsf{Q}}(x_{2})-\widetilde{V}(x,\mathsf{ s})\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})=\tfrac{-t_{\mathsf{h}}}{t^{*}(x_{2})-t_{ \mathsf{h}}}\Big{(}1-\widetilde{V}(x,\mathsf{s})\partial_{2}t^{*}(x_{2}) \big{(}1-\tfrac{\mathsf{s}}{\mathsf{s}_{\mathsf{h}}}\big{)}\Big{)}\,, \tag{14.108c}\] and \[\tilde{\mathsf{Q}}=\partial_{\mathsf{s}}\mathsf{Q}=-\partial_{\mathsf{s}} \widetilde{V}\overline{\mathsf{Q}}_{2}+\widetilde{V}\widehat{\mathsf{Q}} \tfrac{\partial_{2}t^{*}(x_{2})}{\mathsf{s}_{\mathsf{h}}}\,,\qquad\hat{ \mathsf{Q}}_{\mathsf{s}}=\partial_{\mathsf{s}}\widehat{\mathsf{Q}}=0\,, \qquad\hat{\mathsf{Q}}_{2}=\partial_{\mathsf{s}}\overline{\mathsf{Q}}_{2}=- \widehat{\mathsf{Q}}\tfrac{\partial_{2}t^{*}(x_{2})}{\mathsf{s}_{\mathsf{h}}}\,. 
\tag{14.108d}\] With the above notation, it follows from (14.107) that the spacetime gradient operator in \((x,t)\) variables, namely \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\), becomes the gradient operator \(\widetilde{\mathsf{D}}\) associated with the \((x,\mathsf{s})\) coordinates, which is defined by \[\widetilde{\mathsf{D}}=(\widetilde{\mathsf{D}}_{\mathsf{s}},\widetilde{ \mathsf{D}}_{1},\widetilde{\mathsf{D}}_{2}):=\big{(}\varepsilon\widehat{ \mathsf{Q}}\partial_{\mathsf{s}},\varepsilon\partial_{1},\partial_{2}- \overline{\mathsf{Q}}_{2}\partial_{\mathsf{s}}\big{)}\,.\] (14.109a) Additionally, the ALE transport operator \[(\partial_{t}+V\partial_{2})\] is written in \[(x,\mathsf{s})\] coordinates as \[\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2}=\widehat{\mathsf{Q }}\partial_{\mathsf{s}}+\widetilde{V}\widetilde{\mathsf{D}}_{2}=\tfrac{1}{ \varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}+\widetilde{V}\widetilde{ \mathsf{D}}_{2}\,. \tag{14.109b}\] The approximate \(1\)-characteristic surfaces and the upstream weight function in \((x,\mathsf{s})\) coordinates Using the transformation (14.99), since \(\mathscr{J}\colon\hat{\mathcal{H}}^{\delta}\to\mathbb{R}\) we may define upstream weight function in \((x,\mathsf{s})\) coordinates as \[\widetilde{\partial}(x,\mathsf{s})=\mathscr{J}(x,t)\,, \tag{14.110}\] for all \((x,\mathsf{s})\in\mathcal{H}^{\delta}\). Moreover, according to (14.61), (14.62a), and (14.107), we have that \(\widetilde{\mathscr{J}}\) satisfies the equations \[2\alpha\Sigma\partial_{1}\widetilde{\mathscr{J}}-2\alpha\widetilde{\Sigma} \widetilde{J}_{\delta}\widetilde{\mathscr{J}}_{\delta}-\tfrac{1}{2} \widetilde{\mathsf{D}}_{2}\widetilde{h}\widetilde{\mathsf{D}}_{2}\widetilde{ \mathscr{J}}-\widetilde{J}_{\delta}(\mathsf{Q}\partial_{\mathsf{s}}+ \widetilde{V}\partial_{2})\widetilde{\mathscr{J}}\,,\quad\text{in}\quad \mathcal{H}^{\delta}_{+}\,, \tag{14.111a}\] \[(\mathsf{Q}\partial_{\mathsf{s}}+\widetilde{V}\partial_{2}) \widetilde{\mathscr{J}}=\mathfrak{f}\,,\quad\text{in}\quad\mathcal{H}^{ \delta}_{-}\,, \tag{14.111b}\] where \(\mathfrak{f}\) is as defined in (14.63) is the right side of (14.62a). It is also convenient to obtain a pointwise expression for \(\mathscr{J}\) in \((x,\mathsf{s})\in\mathcal{H}^{\delta}_{+}\) coordinates that mimics (14.58). For this purpose, we need to introduce an \((x,\mathsf{s})\) variant of the approximate \(1\)-characteristic surfaces \(\Theta^{\delta}(x,t)\) which were defined earlier in (14.10). It is important to note however that the domain of \(\Theta^{\delta}\) is not \(\hat{\mathcal{H}}^{\delta}_{+}\), and instead the spacetime \(\hat{\Omega}_{\mathsf{US},+}\) defined in (14.11a). This is because \(\Theta^{\delta}(x,t)\) acts like a "time" variable itself, and so it should not be transformed according to (14.106). Instead, the correct way to define an \((x,\mathsf{s})\)-variable analogue of \(\Theta^{\delta}(x,t)\) is similar to (14.99), and so is given by \[\widehat{\Theta}^{\delta}(x,\mathsf{s})=\mathsf{q}(x_{2},\Theta^{\delta}(x,t))= \mathsf{t}_{\mathsf{in}}\cdot\tfrac{t^{*}(x_{2})-\Theta^{\delta}(x,t)}{t^{*}(x _{2})-t_{\mathsf{h}}}\,,\qquad\text{where}\qquad\mathsf{s}=\mathsf{q}(x_{2},t)\,. 
\tag{14.112}\] In particular, the boundary condition (14.10b) and definition (14.99) shows that \[\widetilde{\Theta}^{\delta}(\mathring{x}_{1}(x_{2}),x_{2},\mathsf{s})= \mathsf{t}_{\mathsf{in}}\cdot\tfrac{t^{*}(x_{2})-\Theta^{\delta}(\mathring{x }_{1}(x_{2}),x_{2},t)}{t^{*}(x_{2})-t_{\mathsf{h}}}=\mathsf{t}_{\mathsf{in}} \cdot\tfrac{t^{*}(x_{2})-t}{t^{*}(x_{2})-t_{\mathsf{h}}}=\mathsf{s}\,, \tag{14.113}\] for all \(\mathsf{s}_{\mathsf{in}}\leq\mathsf{s}<0\). According to (14.11a) and (14.112), the natural domain of \(\widetilde{\Theta}^{\delta}\) is the spacetime \[\Omega_{\mathsf{US},+} :=\Big{\{}(x,\mathsf{s})\colon(x,\mathsf{q}^{-1}(x_{2},\mathsf{ s}))\in\hat{\Omega}_{\mathsf{US},+}\Big{\}}\] \[=\big{\{}(x,\mathsf{s})\colon x_{2}\in\mathbb{T}\,,\mathsf{s}_{ \mathsf{in}}\leq\mathsf{s}<0\,,\widetilde{\mathcal{K}}_{1}^{-}(x_{2},\mathsf{q}^ {-1}(x_{2},\mathsf{s}))\leq x_{1}\leq\mathfrak{X}_{1}^{+}(x_{2},\mathsf{q}^{-1} (x_{2},\mathsf{s}))\big{\}}\, \tag{14.114}\] Note importantly that \(\widehat{\Theta}^{\delta}(x,\mathsf{s})\) is only defined for \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},0)\), and that the derivatives of \(\widetilde{\Theta}^{\delta}\) do not transform according to (14.107), and instead according to \[\partial_{\mathsf{s}}\widetilde{\Theta}^{\delta}(x,\mathsf{s}) =\partial_{t}\Theta^{\delta}(x,t)\,, \tag{14.115a}\] \[\partial_{2}\widetilde{\Theta}^{\delta}(x,\mathsf{s}) =\widehat{\mathsf{Q}}(x_{2})\Big{(}\partial_{2}\Theta^{\delta}(x,t)+ \partial_{2}t^{*}(x_{2})\tfrac{t-t_{\mathsf{h}}}{t^{*}(x_{2})-t_{\mathsf{ h}}}\big{(}\partial_{t}\Theta^{\delta}(x,t)-1\big{)}-\partial_{2}t^{*}(x_{2}) \tfrac{\Theta^{\delta}(x,t)-t}{t^{*}(x_{2})-t_{\mathsf{h}}}\Big{)}\,,\] (14.115b) \[\partial_{1}\widetilde{\Theta}^{\delta}(x,\mathsf{s}) =\widehat{\mathsf{Q}}(x_{2})\partial_{1}\Theta^{\delta}(x,t)\,, \tag{14.115c}\] Above, identity (14.115a) is a consequence of the fact that \(\widehat{\mathsf{Q}}=\widehat{\mathsf{Q}}(x_{2})\) is independent of time (cf. (14.108a)), while in (14.115b)-(14.115c) we have appealed also to the computation in (14.108b). One of the consequences of definition (14.112), when combined with (14.58) and (14.110), is that for all \((x,\mathsf{s})\in\Omega_{\mathsf{US},+}\) we have \[\widetilde{\mathscr{J}}(x_{1},x_{2},\widetilde{\Theta}^{\delta}(x_{1},x_{2}, \mathsf{s})) =\widetilde{ In particular, for every \((x,\mathsf{s})\in\mathcal{H}_{+}^{\delta}\), we may find \(\mathsf{s}^{\prime}\in[\mathsf{s}_{\mathsf{in}},0)\) such that \(\mathsf{s}=\widetilde{\Theta}^{\delta}(x,\mathsf{s}^{\prime})\), and thus (14.116a) yields \[\widetilde{\partial}(x,\mathsf{s})=\widetilde{\partial}(x,\widetilde{\Theta}^{ \delta}(x,\mathsf{s}^{\prime}))=\widetilde{\mathcal{B}}(x_{2},\mathsf{s}^{ \prime})\,.\] With the above notation, the distinguished surface passing through the pre-shock is parametrized as the graph \[\big{\{}(x_{1},x_{2},\overline{\widetilde{\Theta}^{\delta}}(x_{1},x_{2}))\colon x _{2}\in\mathbb{T},x_{1}\in[\mathfrak{X}_{1}^{-}(x_{2},t^{*}(x_{2})),\mathfrak{X }_{1}^{+}(x_{2},t^{*}(x_{2}))]\big{\}}\,,\] where \[\overline{\widetilde{\Theta}^{\delta}}(x_{1},x_{2}):=\widetilde{\Theta}^{ \delta}(x_{1},x_{2},0)=\mathfrak{q}(x_{2},\overline{\Theta}^{\delta}(x_{1},x_ {2}))\,,\quad x_{1}\in[\mathfrak{X}_{1}^{-}(x_{2},t^{*}(x_{2})),\mathfrak{X}_{ 1}^{+}(x_{2},t^{*}(x_{2}))]\,,\] (14.117a) and \[\mathfrak{X}_{1}^{\pm}\] were defined in ( 14.11b ) - ( 14.11c ). 
Abusing notation, we extend \[\overline{\widetilde{\Theta}^{\delta}}\] to be continuous and constant for \[x_{1}\not\in[\mathfrak{X}_{1}^{-}(x_{2},t^{*}(x_{2})),\mathfrak{X}_{1}^{+}(x _{2},t^{*}(x_{2}))]\] ; that is, we define \[\overline{\widetilde{\Theta}^{\delta}}(x_{1},x_{2}) :=\mathfrak{q}(x_{2},\mathsf{t}_{\mathsf{fin}})\leq\mathsf{s}_{ \mathsf{fin}}\,,\quad x_{1}<\mathfrak{X}_{1}^{-}(x_{2},t^{*}(x_{2}))\,, \tag{14.117b}\] \[\overline{\widetilde{\Theta}^{\delta}}(x_{1},x_{2}) :=\mathsf{s}_{\mathsf{in}}\,,\quad x_{1}>\mathfrak{X}_{1}^{+}(x _{2},t^{*}(x_{2}))\,. \tag{14.117c}\] As before, we shall need to represent the surface \(\{(x,\overline{\widetilde{\Theta}^{\delta}}(x))\}\) defined via (14.117) as a graph over the \((x_{2},\mathsf{s})\) plane, i.e. as \[\big{\{}(\widetilde{\theta}^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s}) \colon x_{2}\in\mathbb{T},\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{ \mathsf{fin}})\big{\}}\,,\] where for each \(x_{2}\) fixed, \(x_{1}=\widetilde{\theta}^{\delta}(x_{2},\mathsf{s})\) denotes the inverse of \(\mathsf{s}=\overline{\widetilde{\Theta}^{\delta}}(\cdot,x_{2})=\mathfrak{q}(x_ {2},\overline{\Theta}^{\delta}(\cdot,x_{2}))\). In particular, \[\overline{\widetilde{\Theta}^{\delta}}(\widetilde{\theta}^{\delta}(x_{2}, \mathsf{s}),x_{2})=\mathsf{s}\qquad\text{and}\qquad\widetilde{\theta}^{\delta} (x_{2},\overline{\widetilde{\Theta}^{\delta}}(x_{1},x_{2}))=x_{1}\,. \tag{14.118}\] It follows from (14.116a) and the fact that \(\widetilde{\mathcal{B}}(x_{2},0)=\mathcal{B}(x_{2},t^{*}(x_{2}))=\overline{ \mathcal{J}}_{\!{}_{\!g}}(\check{x}_{1}(x_{2}),x_{2},t^{*}(x_{2}))=0\), that \[\widetilde{\beta}(\widetilde{\theta}^{\delta}(x_{2},\mathsf{s}),x_{2}, \mathsf{s})=\widetilde{\beta}\big{(}x_{1},x_{2},\overline{\widetilde{\Theta}^{ \delta}}(x_{1},x_{2})\big{)}=0\,. \tag{14.119}\] **Remark 14.6** (**Dropping the tildes**).: _Just as before, see Remarks 5.2 and 13.6, we shall henceforth drop the use of the tilde-notation for all variables defined as functions of the flattened coordinates \((x,\mathsf{s})\). Notably, besides dropping tildes on the fundamental Euler variables and the geometric variables, we shall denote \(\widetilde{J}_{\!{}_{\!g}}\), \(\widetilde{\beta}\), and \(\widetilde{\Theta}^{\delta}\) simply as \(J_{\!{}_{\!g}}\), \(\widetilde{\beta}\), and \(\Theta^{\delta}\), keeping the arguments as \((x,\mathsf{s})\). We shall keep referring to the \(\widetilde{\mathsf{D}}\) operator defined in (14.109a) as the rescaled spacetime differential operator in \((x,\mathsf{s})\) coordinates. This identification is made throughout the rest of this section._ #### The decomposition of the upstream spacetime in \((x,\mathsf{s})\) coordinates Recall that in \((x,t)\) coordinates, the upstream spacetime \(\hat{\mathcal{H}}^{\delta}\) was decomposed according to (14.15c). In \((x,\mathsf{s})\) coordinates, this decomposition carries over as follows. 
Using the notation introduced in (14.117), the upstream spacetime \(\mathcal{H}^{\delta}\) defined in (14.104) can be equivalently described as \[\mathcal{H}^{\delta}=\Big{\{}(x,\mathsf{s})\in\mathcal{X}_{\mathrm{fin}}\times [\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}})\colon\mathsf{s}< \overline{\Theta^{\delta}}(x_{1},x_{2})\Big{\}}\.\] (14.120a) In analogy to ( 14.15a ), we have a well-defined foliation of the spacetime subset \[\mathcal{H}^{\delta}_{+}\subset\mathcal{H}^{\delta}\], as defined in ( 14.105a ), which is given by \[\mathcal{H}^{\delta}_{+}=\big{\{}(x,\mathsf{s})\in\mathcal{H}^{\delta}\colon \Theta^{\delta}(x,\mathsf{s}_{\mathsf{in}})<\mathsf{s}<\Theta^{\delta}(x,0) \big{\}}=\big{\{}(x,\Theta^{\delta}(x,\mathsf{s}))\colon(x,\mathsf{s})\in \Omega_{\mathsf{US},\,+}\big{\}}\.\] (14.120b) The complimentary spacetime \[\mathcal{H}^{\delta}_{-}\] defined in ( 14.105b ) may be characterized similarly to ( 14.15b ) as \[\mathcal{H}^{\delta}_{-}=\big{\{}(x,\mathsf{s})\in\mathcal{H}^{\delta}\colon \mathsf{s}_{\mathsf{in}}\leq\mathsf{s}<\Theta^{\delta}(x,\mathsf{s}_{\mathsf{in}}) \big{\}}\.\] (14.120c) Moreover, in analogy to ( 14.15c ) we have that \[\mathcal{H}^{\delta}=\mathcal{H}^{\delta}_{+}\cup\Theta^{\delta}(x,\mathsf{s}_{ \mathsf{in}})\cup\mathcal{H}^{\delta}_{-}\,. \tag{14.120d}\] While the characterization (14.120a) of \(\mathcal{H}^{\delta}\) is most useful for pointwise estimates in \((x,\mathsf{s})\) coordinates, for energy estimates it is convenient to foliate \(\mathcal{H}^{\delta}\) by \(\mathsf{s}\)-time-slices. This \(\mathsf{s}\)-slice foliation was precisely the purpose for introducing the function \(\theta^{\delta}(x_{2},\mathsf{s})\) (recall, we have dropped the tilde notation) in (14.118). By construction, we have \[\mathcal{H}^{\delta}=\bigcup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s }_{\mathsf{in}}]}\big{\{}(x_{1},x_{2},\mathsf{s})\colon x_{2}\in\mathbb{T},- \pi\leq x_{1}<\theta^{\delta}(x_{2},\mathsf{s})\big{\}}. \tag{14.121}\] In fact, the constraint that \(x\in\mathcal{X}_{\mathrm{fin}}\) present in (14.104) gives a more precise lower bound for \(x_{1}\). In particular, according to (14.12) we have \(x_{1}\geq\hat{x}_{1}(x_{2})-2(13\pi+65\alpha(1+\alpha)\kappa_{0})\varepsilon\). Since in all our energy estimates the integrands vanish identically for \(x\not\in\mathcal{X}_{\mathrm{fin}}\), there is no harm done by considering the \(\mathsf{s}\)-time-slices from (14.121). ### Notation for integrals and norms Recalling the definition of \(\theta^{\delta}\) given in (14.118), For \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]\) fixed, we use the following double-integral notation for integrals over the \(\mathsf{s}\)-time-slices of spacetime \(\mathcal{H}^{\delta}\), defined in (14.121): \[\iint^{\theta^{\delta}}\!F:=\iint^{\theta^{\delta}}\!F(x_{1},x_{2},\mathsf{s}) \mathrm{d}x_{1}\mathrm{d}x_{2}:=\int_{x_{2}=-\pi}^{\pi}\int_{x_{1}=-\pi}^{ \theta^{\delta}(x_{2},\mathsf{s})}F(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1} \mathrm{d}x_{2}\,,\] and the triple integral notation to denote \[\int_{\mathcal{H}^{\delta}}F:=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\! \iint^{\theta^{\delta}}\!F:=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\! 
\iint^{\theta^{\delta}}\!F(x_{1},x_{2},\mathsf{s}^{\prime})\mathrm{d}x_{1} \mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}:=\int_{\mathsf{s}^{\prime}= \mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\iint_{x_{2}=-\pi}^{\pi}\int_{x_{1}=- \pi}^{\theta^{\delta}(x_{2},\mathsf{s}^{\prime})}F(x_{1},x_{2},\mathsf{s}^{ \prime})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}\,.\] We then have the associated space and spacetime \(L^{2}\)-norms, respectively, defined as \[\|F(\cdot,\mathsf{s})\|_{L^{2}}^{2}=\|F(\cdot,\mathsf{s})\|_{L^{ 2}_{x}}^{2} :=\iint^{\theta^{\delta}}\!F^{2}\,, \tag{14.122a}\] \[\|F\|_{L^{2}_{x,\mathsf{s}}}^{2}:=\|F\|_{L^{2}_{x,\mathsf{s}}( \mathcal{H}^{\delta})}^{2} :=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{in}}}\! \iint^{\theta^{\delta}}\!F^{2}\,. \tag{14.122b}\] Let us note that for integrable functions \(F(x_{1},x_{2},\mathsf{s})\) over \(\mathcal{H}^{\delta}\), the order of integration can be exchanged using the notation in (14.117), so that \[\int_{\mathsf{s}=\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{in}}}\int_{x _{2}=-\pi}^{\pi}\int_{x_{1}=-\pi}^{\theta^{\delta}(x_{2},\mathsf{s})}F(x_{1}, x_{2},\mathsf{s})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}=\int_{x_{1}=- \pi}^{\pi}\int_{x_{2}=-\pi}^{\pi}\int_{\mathsf{s}=\mathsf{s}_{\mathsf{in}}}^{ \overline{\Theta}^{\mathsf{s}}(x_{1},x_{2})}F(x_{1},x_{2},\mathsf{s})\mathrm{ d}\mathrm{s}\mathrm{d}x_{2}\mathrm{d}x_{1}\,. \tag{14.123}\] Let us also consider the surface \(\Theta^{\delta}(x,\mathsf{s}_{\mathsf{in}})\) (cf. definition (14.112)) passing through \((\dot{x}_{1}(x_{2}),x_{2},\mathsf{s}_{\mathsf{in}})\). While \(\mathsf{s}=\Theta^{\delta}(x,\mathsf{s}_{\mathsf{in}})\) is a graph over \((x_{1},x_{2})\), it is convenient to view this same surface as a graph over the \((x_{2},\mathsf{s})\)-plane. In analogy to (14.118), we let \(x_{1}=\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{s})\) denote this surface, where \(\theta^{\delta}_{\mathsf{in}}\) is defined as the inverse map \[\Theta^{\delta}(\theta^{\delta}_{\mathsf{s}_{\mathsf{in}}}(x_{2},\mathsf{s}), x_{2},\mathsf{s}_{\mathsf{in}})=\mathsf{s}\,,\qquad\text{and}\qquad\theta^{ \delta}_{\mathsf{s}_{\mathsf{in}}}(x_{2},\Theta^{\delta}(x_{1},x_{2},\mathsf{s }_{\mathsf{in}}))=x_{1}\,. \tag{14.124}\] Then, similarly to (14.123), we have that \[\int_{\mathsf{s}=\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{in}}}\int_{x _{2}=-\pi}^{\pi}\int_{x_{1}=-\pi}^{\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{ s})}F(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}=\int_{x_{1}=- \pi}^{\pi}\int_{x_{2}=-\pi}^{\pi}\int_{\mathsf{s}=\mathsf{s}_{\mathsf{in}}}^{ \Theta^{\mathsf{s}}(x_{1},x_{2},\mathsf{s}_{\mathsf{in}})}F(x_{1},x_{2},\mathsf{ s})\mathrm{d}\mathrm{s}\mathrm{d}x_{2}\mathrm{d}x_{1}\,, \tag{14.125}\] where in analogy to (14.117) we have abused notation and by continuity defined \[\Theta^{\mathsf{s}}(x_{1},x_{2},\mathsf{s}_{\mathsf{in}}) :=\mathsf{q}(x_{2},\mathsf{t}_{\mathsf{in}})\leq\mathsf{s}_{ \mathsf{fin}}\,,\qquad x_{1}<\mathfrak{X}_{1}^{-}(x_{2},\mathsf{t}_{\mathsf{ in}})\,,\] \[\Theta^{\mathsf{s}}(x_{1},x_{2},\mathsf{s}_{\mathsf{in}}) :=\mathsf{s}_{\mathsf{in}}\,,\qquad x_{1}>\mathfrak{X}_{1}^{+}(x _{2},\mathsf{t}_{\mathsf{in}})\,,\] where the stopping times \(\mathfrak{X}_{1}^{\pm}\) are defined in (14.11b)-(14.11c). 
For \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]\) fixed, we will also make use of the following integral and norm notation: \[\int_{\mathcal{H}^{\delta}_{+}}F :=\int_{\mathsf{s}^{\prime}=\mathsf{s}_{\mathsf{in}}}^{\mathsf{ s}}\int_{x_{2}=-\pi}^{\pi}\int_{x_{1}=\theta^{\delta}_{\mathsf{in}}(x_{2}, \mathsf{s}^{\prime})}^{\theta^{\delta}(x_{2},\mathsf{s}^{\prime})}F(x_{1},x_{2}, \mathsf{s}^{\prime})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{14.126a}\] \[\int_{\mathcal{H}^{\delta}_{-}}F :=\int_{\mathsf{s}^{\prime}=\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}} \int_{x_{2}=-\pi}^{\pi}\int_{x_{1}=-\pi}^{\theta^{\delta}_{\mathsf{in}}(x_{2}, \mathsf{s}^{\prime})}F(x_{1},x_{2},\mathsf{s}^{\prime})\mathrm{d}x_{1}\mathrm{d}x_{2 }\mathrm{d}\mathsf{s}^{\prime}\,, \tag{14.126b}\] and in analogy to (14.122b), \[\|F\|_{L^{2}_{x,\mathsf{s},+}}^{2}:=\|F\|_{L^{2}_{x,\mathsf{s}}( \mathcal{H}^{\delta}_{+})}^{2}:=\int_{\mathcal{H}^{\delta}_{+}}F^{2}\,, \tag{14.127a}\] \[\|F\|_{L^{2}_{x,\mathsf{s},-}}^{2}:=\|F\|_{L^{2}_{x,\mathsf{s}}( \mathcal{H}^{\delta}_{-})}^{2}:=\int_{\mathcal{H}^{\delta}_{-}}F^{2}\,. \tag{14.127b}\] ### \(L^{2}\) adjoint of \(\widetilde{\mathrm{D}}\) in the upstream spacetime \(\mathcal{H}^{\delta}\) In computing adjoints (with respect to the \(L^{2}\) inner product on \(\mathcal{H}^{\delta}\)), we make use of the following calculus identities: \[\int_{-\pi}^{\theta^{\delta}(x_{2},\mathsf{s})}\partial_{\mathsf{s}}f(x_{1},x_{ 2},\mathsf{s})\mathrm{d}x_{1}=\frac{\partial}{\partial\mathsf{s}}\int_{-\pi}^ {\theta^{\delta}(x_{2},\mathsf{s})}f(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1}- \partial_{\mathsf{s}}\theta^{\delta}(x_{2},\mathsf{s})\,f(\theta^{\delta}(x_{ 2},\mathsf{s}),x_{2},\mathsf{s})\,.\] (14.128a) Similarly, we have that \[\int_{-\pi}^{\theta^{\delta}(x_{2},\mathsf{s})}\partial_{2}f(x_{1},x_{2}, \mathsf{s})\mathrm{d}x_{1}=\frac{\partial}{\partial x_{2}}\int_{-\pi}^{\theta ^{\delta}(x_{2},\mathsf{s})}f(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1}-\partial _{2}\theta^{\delta}(x_{2},\mathsf{s})\,f(\theta^{\delta}(x_{2},\mathsf{s}),x_ {2},\mathsf{s})\,,\] (14.128b) where \[\theta^{\delta}(x_{2},\mathsf{s})\] is as defined in ( 14.118 ). 
With ( 14.128 ), the adjoint of \[\widetilde{\mathrm{D}}\] with respect to the \[L^{2}\] inner product on \[\mathcal{H}^{\delta}\cap(\mathcal{X}_{\mathsf{fin}}\times[\mathsf{s}_{\mathsf{ in}},\mathsf{s}_{\mathsf{fin}}])\], with \[\mathsf{s}_{\mathsf{in}}\prec\mathsf{s}<\mathsf{s}_{\mathsf{fin}}\], is given by \[\widetilde{\mathrm{D}}_{\mathsf{s}}^{*} =-\widetilde{\mathrm{D}}_{\mathsf{s}}+\varepsilon\widehat{ \mathrm{Q}}(\delta_{\mathsf{s}^{\prime}=\mathsf{s}}-\delta_{\mathsf{s}^{ \prime}=\mathsf{s}_{\mathsf{in}}})-\widetilde{\mathrm{D}}_{\mathsf{s}}\theta^ {\delta}\delta_{x_{1}=\theta^{\delta}(x_{2},\mathsf{s}^{\prime})}\,,\] (14.129a) \[\widetilde{\mathrm{D}}_{\mathsf{s}}^{*} =-\widetilde{\mathrm{D}}_{1}+\varepsilon\delta_{x_{1}=\theta^{ \delta}(x_{2},\mathsf{s}^{\prime})}\,,\] (14.129b) \[\widetilde{\mathrm{D}}_{\mathsf{s}}^{*} =-\widetilde{\mathrm{D}}_{2}+\hat{\mathrm{Q}}_{2}-\overline{ \mathrm{Q}}_{2}\delta_{\mathsf{s}^{\prime}=\mathsf{s}}-\widetilde{\mathrm{D}}_ {2}\theta^{\delta}\delta_{x_{1}=\theta^{\delta}(x_{2},\mathsf{s}^{\prime})}\,,\] (14.129c) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})^{*} =-(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})-(\tilde{ \mathsf{Q}}+V_{,2})+\mathsf{Q}(\delta_{\mathsf{s}^{\prime}=\mathsf{s}}-\delta _{\mathsf{s}^{\prime}=\mathsf{s}_{\mathsf{in}}})-(\mathsf{Q}\partial_{ \mathsf{s}}\theta^{\delta}+V\partial_{2}\theta^{\delta})\delta_{x_{1}=\theta^{ \delta}(x_{2},\mathsf{s}^{\prime})}\,.\] (14.129d) Here we have used that \[\mathcal{H}^{\delta}\subset\mathcal{X}_{\mathsf{fin}}\times[\mathsf{s}_{ \mathsf{in}},\mathsf{s}_{\mathsf{fin}})\], so that only one boundary term emerges when integrating by parts with respect to \[x_{1}\]19, at \[x_{1}=\theta^{\delta}(x_{2},\mathsf{s})\]. We have also used the fact that ( 14.108b ) implies \[\overline{\mathrm{Q}}_{2}(x_{2},\mathsf{s}_{\mathsf{in}})=0\]. Footnote 19: See the footnote 18. 
### The \(L^{2}\)-based energy norms Using the weight function \(\mathcal{J}\) defined by (14.58), (14.66), and (14.110), and with the notation introduced in (14.122a) for the \(L^{2}\) norm, for upstream maximal development we work with the energy norms defined by \[\widetilde{\mathcal{E}}_{6}^{2}(\mathsf{s}) =\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s})+(\mathsf{ K}\varepsilon)^{-2}\widetilde{\mathcal{E}}_{6,\tau}^{2}(\mathsf{s})\,, \widetilde{\mathcal{E}}_{5}^{2}(\mathsf{s})=\widetilde{\mathcal{E}}_{5, \mathcal{N}}^{2}(\mathsf{s})+(\mathsf{K}\varepsilon)^{-2}\widetilde{\mathcal{E }}_{5,\tau}^{2}(\mathsf{s}) \tag{14.130a}\] \[\widetilde{\mathcal{E}}_{6,\mathcal{N}}^{2}(\mathsf{s}) =\big{\|}\mathcal{J}_{g}^{\mathcal{J}}\widetilde{\mathrm{D}}^{6}(J _{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat {\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}}^{2}\,, \widetilde{\mathcal{E}}_{5,\mathcal{N}}^{2}(\mathsf{s})=\big{\|}J_{g}^{ \mathcal{J}}\widetilde{\mathrm{D}}^{5}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g} \hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot, \mathsf{s})\big{\|}_{L^{2}}^{2}\] (14.130b) \[\widetilde{\mathcal{E}}_{6,\tau}^{2}(\mathsf{s}) =\big{\|}\mathcal{J}_{g}^{\mathcal{J}}\widetilde{\mathrm{D}}^{6}( \hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{ \mathcal{T}})(\cdot,\mathsf{s})\big{\|}_{L^{2}}^{2}\,, \widetilde{\mathcal{E}}_{5,\tau}^{2}(\mathsf{s})=\big{\|}J_{g}^{\mathcal{J}} \widetilde{\mathrm{D}}^{5}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{ \mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathsf{s})\big{\|}_{L^{2}}^{2}\,,\] (14.130c) and the damping norms by \[\widetilde{\mathcal{D}}_{6}^{2}(\mathsf{s}) =\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\mathsf{s})+(\mathsf{ K}\varepsilon)^{-2}\widetilde{\mathcal{D}}_{6,\tau}^{2}(\mathsf{s})\,, \widetilde{\mathcal{D}}_{5}^{2}(\mathsf{s})=\widetilde{\mathcal{D}}_{5, \mathcal{N}}^{2}(\mathsf{s})+(\mathsf{K}\varepsilon)^{-2}\widetilde{ \mathcal{D}}_{5,\tau}^{2}(\mathsf{s})\,,\] (14.131a) \[\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\mathsf{s}) =\int_{\mathsf{s}_{\mathsf{in}}}\!\!\big{\|}\mathcal{J}_{g}^{ \mathcal{J}}\widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g} \hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{\|}_{L^{2 }}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+\int_{\mathsf{s}_{\mathsf{in}}}\!\big{\|}\mathcal{J}_{g}^{ \mathcal{J}}\widetilde{\mathrm{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{T}},J_{g} \hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\big{\|}_{L^{2 }}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{14.131c}\] Once again \(\mathsf{K}\geq 1\) is a sufficiently large constant, independent of \(\varepsilon\), chosen at the end of the proof, solely in terms of \(\alpha\) and \(\kappa_{0}\) (see (14.203) below). ### Upstream bootstrap assumptions We continue to use the same bootstrap assumptions as in (5.37), but instead of assuming that these bootstraps hold for \((x,t)\) in the spacetime \(\mathcal{P}\) (cf. (5.11)), we now assume that these bootstraps hold for \((x,t)\in\hat{\mathcal{H}}^{\delta}\) (cf. definition (14.14)). As such, in this section all pointwise bootstraps are assumed to hold for \((x,t)\in\hat{\mathcal{H}}^{\delta}\), or equivalently, for all \((x,\mathsf{s})\in\mathcal{H}^{\delta}\) via the flattening map (14.99), and for the energy and damping norms defined earlier in (14.130) and (14.131). 
To be more precise, the working bootstrap assumptions in this Here \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}},J_{g},h,V,\Sigma)\) are defined according to the flattening (14.106) (dropping the tildes as discussed in Remark 14.6), while the energy and damping norms are defined in (14.130) and (14.131), respectively. Since the bootstraps (14.132) in this section are the same as the bootstraps (5.37) used in Sections 5-12, save for the different weights in the \(L^{2}\) norms (with \(\hat{\mathcal{J}}\) replacing \(\mathcal{J}\) in (14.130) and (14.131)), we shall sometimes (more frequently for the pointwise bootstraps) make reference to (5.37) instead of (14.132). As in Sections 5-12, the burden of the proof in the current section is to close the bootstrap assumptions (14.132), i.e., to show that these bounds hold with \(<\) symbols instead of \(\leq\) symbols. To avoid redundancy, we do not repeat the arguments of how bootstraps are closed when the proof is either identical to that in given earlier in Sections 5-12, or if it requires infinitesimal and straightforward adjustments. Instead, we focus on the proofs of those bootstraps which are different due to the upstream spacetime and weights. In particular, the upstream weight function \(\hat{\mathcal{J}}(x_{1},x_{2},\mathsf{s})\) that appears in our energy estimates is defined using a family of surfaces \(\Theta^{\delta}(x,\mathsf{s})\) which closely approximate the slow acoustic characteristic surfaces. A key feature of this weight function is the identity (14.111a) which is fundamental to the upstream geometry. The remainder of this section is dedicated to closing the bootstrap assumptions (14.132). #### Identities in the downstream coordinate system With respect to the coordinates \((x,\mathsf{s})\) given by (14.99), with the transformation (14.106), and upon dropping the tildes (see Remark 14.6), we have the following fundamental identities, which are translations of the identities in Section 3 into \((x,\mathsf{s})\) coordinates (see also (5.30)-(5.35)): \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g} =\tfrac{1+\alpha}{2}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{1- \alpha}{2}J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\,, \tag{14.133a}\] \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{ \mathsf{D}}_{2}h =g\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1- \alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\,,\] (14.133b) \[\tfrac{1}{e}\widetilde{\mathsf{D}}_{1}\Sigma =\tfrac{1}{2}J_{g}(\hat{\mathbf{W}}_{\mathcal{N}}-\hat{\mathbf{Z} }_{\mathcal{N}})+\tfrac{1}{2}J_{g}\widetilde{\mathsf{D}}_{2}h(\hat{\mathbf{W }}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] (14.133c) \[\widetilde{\mathsf{D}}_{2}\Sigma =\tfrac{1}{2}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}- \hat{\mathbf{Z}}_{\mathcal{T}})\,,\] (14.133d) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma =-\alpha\Sigma(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{ \mathcal{T}})\,,\] (14.133e) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Sigma^{-2\beta} =2\alpha\beta\Sigma^{-2\beta}(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{ \mathbf{A}}_{\mathcal{T}})\,,\] (14.133f) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{N} =-\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1- \alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\mathcal{T}\,,\] (14.133g) \[(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{T} =\big{(}\tfrac{1+\alpha}{2}\hat{\mathbf{W}}_{\mathcal{T}}+\tfrac{1- 
\alpha}{2}\hat{\mathbf{Z}}_{\mathcal{T}}\big{)}\mathcal{N}\,,\] (14.133h) \[\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{ 2})\Omega =\tfrac{\alpha}{e}\widetilde{\mathsf{D}}_{1}\Omega-\alpha J_{g}g^{- \frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,\widetilde{\mathsf{D}}_{2}\Omega\,. \tag{14.133i}\] By using (5.8), we also note that \(\widetilde{\mathsf{D}}_{1}(J_{g}-\overline{J}_{g})=\widetilde{\mathsf{D}}_{2}(J _{g}-\overline{J}_{g})=0\). Bounds for \(\mathsf{Q},\widehat{\mathsf{Q}},\widehat{\mathsf{Q}},\widetilde{\mathsf{Q}}_{2}, \hat{\mathsf{Q}}_{2}\), \(\hat{\mathsf{Q}}_{2}\), and \(\hat{\mathsf{Q}}\) We list a few properties of the coefficients defined in (14.108) in conjunction with the flattening map \(\mathsf{s}=\mathsf{q}(x_{2},t)\). **Lemma 14.7**.: _Assume that the bootstrap bounds (14.132) hold on \(\mathcal{H}^{\delta}\). If \(\varepsilon\) is taken to be sufficiently small with respect to \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\), then_ \[\tfrac{50}{51} \leq\widehat{\mathsf{Q}}(x_{2})\leq\tfrac{1001}{1000}\,, \tag{14.134a}\] \[\big{|}\mathsf{Q}(x,\mathsf{s})-\widehat{\mathsf{Q}}(x_{2})\big{|} \leq\mathring{C}\varepsilon^{2}\,,\] (14.134b) \[\big{|}\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})\big{|} \leq 11\varepsilon\,,\] (14.134c) \[\big{|}\hat{\mathsf{Q}}_{2}(x_{2})\big{|} \leq 6(1+\alpha)\,,\] (14.134d) \[\big{|}\hat{\mathsf{Q}}(x,\mathsf{s})\big{|} \leq\mathring{C}\varepsilon\,,\] (14.134e) \[\big{|}\partial_{2}\widehat{\mathsf{Q}}(x_{2})\big{|} \leq 7(1+\alpha)\,,\] (14.134f) \[\big{|}\partial_{1}\mathsf{Q}(x,\mathsf{s})\big{|} \leq\mathring{C}\varepsilon\,,\] (14.134g) \[\big{|}\partial_{2}\mathsf{Q}(x,\mathsf{s})\big{|} \leq\mathring{C}\,, \tag{14.134h}\] _hold uniformly for all \((x,\mathsf{s})\in\mathcal{H}^{\delta}\)._ Proof of Lemma 14.7.: Using the definition (14.108a) together with the fact that \(-\mathring{C}\varepsilon^{2}-\mathsf{t}_{\mathsf{in}}<t^{*}(x_{2})-\mathsf{t}_{ \mathsf{in}}\leq\mathsf{t}_{\mathsf{fin}}-\mathsf{t}_{\mathsf{in}}\), which follows from the analysis in Section 6.6, we have that \[\tfrac{50}{51}=\tfrac{-\mathsf{t}_{\mathsf{in}}}{\mathsf{t}_{\mathsf{fin}}- \mathsf{t}_{\mathsf{in}}}\leq\widehat{\mathsf{Q}}\leq\tfrac{-\mathsf{t}_{ \mathsf{in}}}{-\bar{C}\varepsilon^{2}-\mathsf{t}_{\mathsf{in}}}\leq 1+\mathring{C}\varepsilon^{2}\,, \tag{14.135}\] so by choosing \(\varepsilon\) sufficiently small, (14.134a) holds. From the definition (14.108b) and the bounds (14.17) and (14.135), we deduce \[|\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})|\leq 10\varepsilon\cdot(1+\tilde{C} \varepsilon^{2})\cdot\left|1-\tfrac{\mathsf{s}}{\mathsf{s}_{\mathsf{m}}} \right|\leq 10\varepsilon\cdot(1+\tilde{C}\varepsilon^{2})\cdot(\tfrac{51}{50 }+\tilde{C}\varepsilon)\,,\] and so the inequality (14.134c) immediately follows. Now, since \(\mathsf{Q}=\widehat{\mathsf{Q}}-V\overline{\mathsf{Q}}_{2}\), from (14.134c), we obtain the bound (14.134b). Using the definition (14.108d), we see that \(\widehat{\mathsf{Q}}=-\partial_{\mathsf{s}}V\,\overline{\mathsf{Q}}_{2}+V \tfrac{\partial_{2}\varepsilon^{\prime}(x_{2})}{\mathsf{s}_{\mathsf{m}}}\). The bounds (5.370), (14.17), (14.135), and (14.134c) show that (14.134e) holds. Next, we have that \(\widehat{\mathsf{Q}}_{2}=-\widehat{\mathsf{Q}}\tfrac{\partial_{2}\varepsilon^ {\prime}(x_{2})}{\mathsf{s}_{\mathsf{m}}}\) so that with (14.17) and (14.135), we see that (14.134d) holds. Since \(\partial_{2}\widehat{\mathsf{Q}}=-\widehat{\mathsf{Q}}\widehat{\mathsf{Q}}_{2}\), we also have that (14.134f) holds. 
Finally, from (14.108c), \(\partial_{1}\mathsf{Q}=-\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}V\, \overline{\mathsf{Q}}_{2}\). Hence, (5.370) and (14.134c) give (14.134g). Similarly, from (14.108c), \(\partial_{2}\mathsf{Q}=\partial_{2}\widehat{\mathsf{Q}}-\partial_{2}V \overline{\mathsf{Q}}_{2}-V\partial_{2}\overline{\mathsf{Q}}_{2}\). From (5.370), (14.17), (14.20), (14.134c), (14.134d), and (14.134f), (14.134h) follows. ### Bounds for \(\mathsf{O}^{\delta}\) and \(\mathcal{J}\) in \((x,\mathsf{s})\) coordinates In Sections 14.5 and 14.8 we have obtained very precise bounds for \(\Theta^{\delta}(x,t)\) (and its derivatives), and for \(\mathcal{J}(x,t)\) (and its derivatives), in the spacetime \(\mathring{\mathcal{H}}^{\delta}\). In this subsection we revisit a few of these estimates and re-state them (using (14.112) and (14.110)), for \(\Theta^{\delta}(x,\mathsf{s})\) and \(\mathcal{J}(x,\mathsf{s})\) in the spacetime \(\mathcal{H}^{\delta}\). #### 14.18.1. Bounds for the derivatives of \(\Theta^{\delta}\) From Lemma 14.2 and the identities in (14.115), we immediately derive the first derivative bounds \[\tfrac{999}{1001} \leq\partial_{\mathsf{s}}\Theta^{\delta}(x,\mathsf{s})\leq\tfrac {1001}{999}\,, \tag{14.136a}\] \[-\tfrac{2}{10^{8}(1+\alpha)} \leq\partial_{1}\Theta^{\delta}(x,\mathsf{s})<0\,,\] (14.136b) \[|\partial_{2}\Theta^{\delta}(x,\mathsf{s})| \leq 6\cdot 10^{3}(1+\alpha)^{2}\varepsilon\,, \tag{14.136c}\] for all \((x,\mathsf{s})\in\Omega_{\mathsf{US},+}\), the spacetime defined in (14.114). Indeed, the bound (14.136a) follows from (14.115a), the estimate (14.136b) is a consequence of (14.30a), (14.115c), and (14.134a), while (14.136c) follows from (14.29b), (14.29c), (14.115b), and (14.17). Bounds for the second order derivatives of \(\Theta^{\delta}(x,\mathsf{s})\) also follow directly from (14.29d)-(14.29f) and (14.30b)-(14.30d) upon differentiating (14.115) one more time, and appealing to (14.17), (14.20), (14.134a), and (14.134f). Since these bounds will not be crucially used in the subsequent analysis, we choose not to re-state these bounds. #### 14.18.2. The function \(\theta^{\delta}\) is strictly decreasing in \(\mathsf{s}\) We recall the definition of \(\mathring{\mathcal{G}}(x_{2},\mathsf{s})\) from (14.118). Differentiating the first identity in (14.118) with respect to \(\mathsf{s}\), and recalling that \(\overline{\Theta^{\delta}}(x)=\Theta^{\delta}(x,0)\), we deduce that \[\partial_{1}\Theta^{\delta}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},0)\cdot \partial_{\mathsf{s}}\theta^{\delta}(x_{2},\mathsf{s})=1\,.\] This identity, combined with the bound (14.29a), (14.38), (14.115c), and (14.134a), yields \[-\tfrac{4\alpha\kappa_{0}}{J_{g}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2}, \mathsf{s})}\leq\partial_{\mathsf{s}}\theta^{\delta}(x_{2},\mathsf{s})\leq- \tfrac{10^{5}(1+\alpha)}{2}<0\,. \tag{14.137}\] Similarly, for the function \(\theta^{\delta}_{\mathsf{s}_{\mathsf{m}}}\) defined in (14.124), and appealing to the bound (14.71c), we also have \[-5\alpha\kappa_{0}\leq-\tfrac{4\alpha\kappa_{0}}{J_{g}(\theta^{\delta}_{\mathsf{ m}}(x_{2},\mathsf{s}),x_{2},\mathsf{s})}\leq\partial_{\mathsf{s}}\theta^{ \delta}_{\mathsf{s}_{\mathsf{m}}}(x_{2},\mathsf{s})=\tfrac{1}{\partial_{1} \Theta^{\delta}(\theta^{\delta}_{\mathsf{m}}(x_{2},\mathsf{s}),x_{2},\mathsf{ s}_{\mathsf{m}})}\leq-\tfrac{10^{5}(1+\alpha)}{2}<0\,. 
\tag{14.138}\] In view of the definition of \(\widetilde{\mathsf{O}}^{*}\) in (14.129), it is also convenient to record the estimate \[\big{|}\partial_{2}\mathring{\mathcal{G}}(x_{2},\mathsf{s})\big{|}\leq\tfrac{24 \cdot 10^{3}(1+\alpha)^{2}\alpha\kappa_{0}}{J_{g}(\theta^{\delta}(x_{2},\mathsf{s}),x_ {2},\mathsf{s})}\varepsilon\,, \tag{14.139}\] which follows upon differentiating the first identity in (14.118) with respect to \(x_{2}\), and appealing to (14.136c). #### 14.18.3. Lower bounds for \(\mathcal{J}\) We recall that the function \(\mathcal{J}(x,t)\) satisfies the pointwise lower bounds (14.67) and (14.68) in \(\mathring{\mathcal{H}}^{\delta}\), and respectively \(\mathring{\mathcal{H}}^{\delta}_{+}\). Via the definition of \(\mathcal{J}(x,\mathsf{s})\) in (14.110), appealing to the definitions (14.100) and (14.117), and to the lower bound for \(\widehat{\mathsf{Q}}\) in (14.134a), these pointwise lower bounds in (14.67) become \[\mathcal{J}(x,\mathsf{s})\geq\left\{\begin{aligned} &\left(\overline{\Theta^{\delta}}(x)-\mathsf{s}\right)\tfrac{1+\alpha} {2\varepsilon}\cdot\tfrac{22}{25}\,,&(x,\mathsf{s})\in\mathcal{H }^{\delta}_{+}\,,\\ & 1\,,&(x,\mathsf{s})\in\mathcal{H}^{\delta}_{-}\,.\end{aligned}\right. \tag{14.140}\] By additionally appealing to (14.116a), (14.99), (14.108a), and (14.134a), the lower bound (14.68) becomes \[\mathcal{J}(x,\Theta^{\delta}(x,\mathsf{s}))\geq\tfrac{895}{1000}\widehat{ \mathsf{Q}}(x_{2})^{-1}\cdot\tfrac{\mathsf{s}}{\mathsf{s}_{\mathsf{m}}} \geq\tfrac{89}{100}\cdot\tfrac{\mathsf{s}}{\mathsf{s}_{\mathsf{m}}}\,, \tag{14.141}\] for all \((x,\mathsf{s})\in\Omega_{\mathsf{US},+}\), or equivalently, for all \((x,\Theta^{\delta}(x,\mathsf{s}))\in\mathcal{H}^{\delta}_{+}\). #### 14.18.4. Comparison of \(\mathcal{J}\) and \(J_{{}_{g}}\) We only record the fact that the results of Lemma 14.3 carry over directly to \((x,\mathsf{s})\) variables. In particular, (14.71) implies that \[\mathcal{J}(x,\mathsf{s})\leq\tfrac{101}{100}J_{{}_{g}}(x,\mathsf{s})\mathbf{1} _{|x_{1}|\leq 13\pi\varepsilon}+\tfrac{21}{10}J_{{}_{g}}(x,\mathsf{s})\mathbf{1} _{|x_{1}|>13\pi\varepsilon} \tag{14.142}\] for all \((x,\mathsf{s})\in\mathcal{H}^{\delta}\). #### 14.18.5. Damping properties of \(\mathcal{J}\) Using (14.110), (14.107), (14.109b), and (14.134), it is direct to transfer any derivative information for \(\mathcal{J}\) from \((x,t)\) variables, into \((x,\mathsf{s})\) variables. In particular, from (14.93) we directly deduce \[-(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}(x,\mathsf{s})\geq \tfrac{1+\alpha}{2\varepsilon}\cdot\tfrac{899}{1000}\geq\tfrac{11(1+\alpha)}{ 25\varepsilon}\] (14.143a) for all \[(x,\mathsf{s})\in\mathcal{H}^{\delta}\], and \[-\tfrac{412(1+\alpha)}{\varepsilon}\leq\partial_{\mathsf{s}} \mathcal{J}(x,\mathsf{s})\leq-\tfrac{9(1+\alpha)}{25\varepsilon}\,, \tag{14.143b}\] \[\big{|}\widetilde{\mathsf{D}}_{2}\mathcal{J}(x,\mathsf{s})\big{|} \leq\mathring{C}\,,\] (14.143c) \[\big{|}\widetilde{\mathsf{D}}_{1}\mathcal{J}(x,\mathsf{s})\big{|} \leq 10^{-3}\,,\] (14.143d) \[\big{|}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})^{2} \mathcal{J}(x,\mathsf{s})\big{|} \leq\tfrac{202^{2}(1+\alpha)^{2}}{\varepsilon^{2}}\,,\] (14.143e) for all \[(x,\mathsf{s})\in\mathcal{H}^{\delta}_{+}\cup\mathcal{H}^{\delta}_{-}\]. 
Lastly, from Lemma 14.5 we obtain that \[-\tfrac{3}{2}\mathcal{J}^{\frac{1}{2}}J_{{}_{g}}(\mathsf{Q}\partial_{\mathsf{s }}+V\partial_{2})\mathcal{J}+\mathcal{J}^{\frac{3}{2}}(\mathsf{Q}\partial_{ \mathsf{s}}+V\partial_{2})J_{{}_{g}}\geq\tfrac{1+\alpha}{6\varepsilon} \mathcal{J}^{\frac{1}{2}}J_{{}_{g}}\,, \tag{14.144}\] pointwise for all \((x,\mathsf{s})\in\mathcal{H}^{\delta}\). #### 14.19. Bounds for the geometry, sound speed, and ALE velocity for upstream maximal development Just as we did in Section 13, we again record all necessary upstream modifications to the bounds obtained earlier in Section 7. We specifically highlight all the modification caused by the change in the weight function \(\mathcal{J}\mapsto\mathcal{J}\). Bounds without the presence of such a weight function remain identical to those in Section 7 (and we continue to make reference to equation numbers from Section 7), and bounds with such a weight function are modified with \(\mathcal{J}\) replaced with \(\mathcal{J}\). For instance, the bounds in Proposition 7.1 now become the bounds given in Proposition 14.8 below. The corollaries and remarks which follow this proposition (in particular, the closure of the (5.37t) and (5.37u) bootstraps in Corollary 7.2) remain the same as in Section 7, and to avoid redundancy we do not repeat those arguments here. **Proposition 14.8** (Bounds for the geometry, sound speed, and ALE velocity).: _Assume that the bootstrap assumptions (14.132) hold, and that \(\varepsilon\) is taken to be sufficiently small to ensure \(\varepsilon^{\frac{1}{2}}(\langle\mathsf{B}_{\mathsf{J}}\rangle+\langle\mathsf{ B}_{\mathsf{h}}\rangle+\langle\mathsf{B}_{\mathsf{6}}\rangle)\leq 1\). Then, assuming \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0},\) and \(\mathsf{C}_{\mathsf{data}}\), we have that_ \[(J_{{}_{g}},\widetilde{\mathsf{D}}_{1}h,\widetilde{\mathsf{D}}_{2}h,\Sigma,V)\] _satisfy the bounds (6.64), (7.1b), (7.1d), (7.1e), (7.1j) \[-\eqref{eq:1.1}-\eqref{eq:1.1},\] (14.145a) \[\big{\|}\mathcal{J}^{\frac{4}{4}}\widetilde{\mathsf{D}}^{6}J_{{}_ {g}}(\cdot,\mathsf{s})\big{\|}_{L^{\infty}_{\kappa}L^{2}_{\kappa}}^{2}+\tfrac{1} {\varepsilon}\big{\|}\mathcal{J}^{\frac{4}{4}}\widetilde{\mathsf{D}}^{6}J_{{}_ {g}}\big{\|}_{L^{2}_{\kappa,\kappa}}^{2} \lesssim\varepsilon\langle\mathsf{B}_{6}\rangle^{2}\,,\] (14.145b) \[\big{\|}\mathcal{J}^{\frac{4}{4}}\widetilde{\mathsf{D}}^{6} \widetilde{\mathsf{D}}_{2}h(\cdot,\mathsf{s})\big{\|}_{L^{\infty}_{\kappa}L^{2 }_{\kappa}}^{2}+\tfrac{1}{\varepsilon}\big{\|}\mathcal{J}^{-\frac{4}{4}} \widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\big{\|}_{L^{2}_{\kappa, \kappa}}^{2} \lesssim\mathsf{K}^{2}\varepsilon^{3}\langle\mathsf{B}_{6}\rangle^{2}\,,\] (14.145c) \[\sum_{|\gamma|=3}^{6}\|\mathcal{J}^{-\frac{1}{4}}\big{(}\widetilde{ \mathsf{D}}^{|\gamma|}\mathcal{N}+g^{-1}\tau\widetilde{\mathsf{D}}^{|\gamma|} \widetilde{\mathsf{D}}_{2}h\big{)}\|_{L^{2}_{\kappa,\kappa}}^{2}+\|\mathcal{J }^{-\frac{1}{4}}\big{(}\widetilde{\mathsf{D}}^{|\gamma|}\tau-g^{-1}{}_{\mathcal{ N}}\widetilde{\mathsf{D}}^{|\gamma|}\widetilde{\mathsf{D}}_{2}h\big{)}\|_{L^{2}_{\kappa, \kappa}} \lesssim\mathsf{K}^{3}\langle\mathsf{B}_{6}\rangle\,,\] (14.145d) \[\big{\|}\mathcal{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6} \mathcal{N}\|_{L^{2}_{\kappa,\kappa}}+\|\mathcal{J}^{-\frac{1}{4}}\widetilde{ \mathsf{D}}^{6}\tau\|_{L^{2}_{\kappa,\kappa}} \lesssim\mathsf{K}\varepsilon^{2}\langle\mathsf{B}_{6}\rangle\,,\] (14.145e) \[\big{(}\|\mathcal{J}^{\frac{4}{4}}\widetilde{\mathsf{D}}^{6} 
\mathcal{N}\|_{L^{\infty}_{\kappa}L^{2}_{\kappa}}+\|\mathcal{J}^{\frac{4}{4}} \widetilde{\mathsf{D}}^{6}\tau\|_{L^{\infty}_{\kappa}L^{2}_{\kappa}}\big{)} \lesssim\mathsf{K}\varepsilon^{\frac{2}{2}}\langle\mathsf{B}_{6}\rangle\,,\] (14.145f) _where the implicit constants in all the above inequalities depend only on \(\alpha\), \(\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._ Proof of Proposition 14.8.: We explain the upstream modifications required for the proof of the inequality (14.145b). We compute the \(L^{2}\)-inner product of (7.11) with \(\mathcal{J}^{\frac{4}{2}}\widetilde{\mathsf{D}}^{6}J_{{}_{g}}\) to obtain that for any \(\mathsf{s}\in(\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}})\), \[\tfrac{1}{2}\int_{\mathsf{s}_{\mathsf{in}}} \!\!\!\iint\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Using (14.129), we have that \[\tfrac{1}{2}\iint^{\theta^{\phi}}\!\!Q\!\tilde{q}^{\frac{1}{2}} \big{|}\widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\big{|}_{\mathsf{s}_{\mathsf{ in}}}^{2}-\tfrac{1}{4}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\! \iint^{\theta^{\phi}}\!\!\!\!\tilde{q}^{\frac{1}{2}}(\mathsf{Q}\partial_{ \mathsf{s}}+V\partial_{2})\mathscr{J}\,\big{|}\widetilde{\mathsf{D}}^{6}J_{g} \big{|}^{2}-\tfrac{1}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{ \theta^{\phi}}\!\!\!\!\tilde{q}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V \partial_{2})\mathscr{J}\,\big{|}\widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\] \[\qquad-\tfrac{1}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\! \int_{\mathbb{T}}\Big{(}\big{(}\mathsf{Q}\partial_{\mathsf{s}}\theta+V\partial _{2}\theta\big{)}\mathscr{J}^{\frac{1}{2}}\big{|}\widetilde{\mathsf{D}}^{6}J_{ g}\big{|}^{2}\Big{)}\Big{|}_{\theta(x_{2},\mathsf{s})}\mathrm{d}x_{2}\mathsf{d}\mathsf{ s}\] \[=\tfrac{1+\alpha}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}} \!\!\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6 }(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{1 -\alpha}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{ \phi}}\!\!\!\tilde{q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{ \mathbf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\int_{\mathsf{s}_{ \mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1}{2 }}\mathsf{R}_{J_{g}}\,\widetilde{\mathsf{D}}^{6}J_{g}\,.\] Notice that the second line in this equality vanishes due to the fact that \(\mathscr{J}=0\) along the surface \(x_{1}=\theta^{\phi}(x_{2},\mathsf{s})\) from (14.119). It follows that \[\tfrac{1}{2}\iint^{\theta^{\phi}}\!\!Q\!\tilde{q}^{\frac{1}{2}} \big{|}\widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\big{|}_{\mathsf{s}_{\mathsf{ in}}}^{\mathsf{s}}-\tfrac{1}{4}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\! 
\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1}{2}}(\mathsf{Q}\partial_{ \mathsf{s}}+V\partial_{2})\mathscr{J}\,\big{|}\widetilde{\mathsf{D}}^{6}J_{g} \big{|}^{2}-\tfrac{1}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{ \theta^{\phi}}\!\!\!(\tilde{\mathsf{Q}}+V_{,2})\mathscr{J}^{\frac{1}{2}}\big{|} \widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\] \[=\tfrac{1+\alpha}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}} \!\!\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6 }(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{1 -\alpha}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{ \phi}}\!\!\!\tilde{q}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{ \mathbf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}+\int_{\mathsf{s}_{ \mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1 }{2}}\mathsf{R}_{J_{g}}\,\widetilde{\mathsf{D}}^{6}J_{g}\,.\] Then, using the bootstraps (5.37), and the bounds (14.134) and (14.143a), we obtain that for \(\varepsilon>0\) sufficiently small, \[\tfrac{1-\varepsilon^{\frac{7}{2}}}{2}\iint^{\theta^{\phi}}\!\!\! \tilde{q}^{\frac{1}{2}}|\widetilde{\mathsf{D}}^{6}J_{g}|^{2}\big{|}_{ \mathsf{s}_{+}}^{\mathsf{s}}+\tfrac{(1+\alpha)}{10\varepsilon}\|\mathscr{J}^{ -\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\|_{L^{2}_{x,\mathsf{s}}}^{2}\] \[\leq\|\mathscr{J}^{-\frac{1}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\| _{L^{2}_{x,\mathsf{s}}}\!\!\left(\tfrac{1+\alpha}{2}\|\mathscr{J}^{\frac{1}{4}} \widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\|_{L^{2}_{x, \mathsf{s}}}+\tfrac{|1-\alpha|}{2}\|\mathscr{J}^{\frac{1}{2}}\|\mathscr{J}^{ \frac{1}{2}}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}^{2}+ \|\mathscr{J}^{\frac{1}{4}}\mathsf{R}_{J_{g}}\|_{L^{2}_{x,\mathsf{s}}}^{2}\right)\] \[\leq\tfrac{1+\alpha}{20\varepsilon}\|\mathscr{J}^{-\frac{4}{4}} \widetilde{\mathsf{D}}^{6}J_{g}\|_{L^{2}_{x,\mathsf{s}}}^{2}+4(1+\alpha) \varepsilon\|\mathscr{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{ \mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\|_{L^{2}_{x, \mathsf{s}}}^{2}+\tfrac{16\varepsilon}{1+\alpha}\|\mathscr{J}^{\frac{3}{4}} \mathsf{R}_{J_{g}}\|_{L^{2}_{x,\mathsf{s}}}^{2}\,. \tag{14.146}\] The bound for \(\|J_{g}^{\frac{3}{4}}\mathsf{R}_{J_{g}}\|_{L^{2}_{x,\mathsf{s}}}^{2}\) is obtained identically as in (7.15), and hence, it follows that \[\iint^{\theta^{\phi}}\!\!\tilde{q}^{\frac{1}{2}}|\widetilde{\mathsf{D}}^{6}J_{g} (\cdot,\mathsf{s})|^{2}+\tfrac{1}{\varepsilon}\|\mathscr{J}^{-\frac{1}{4}} \widetilde{\mathsf{D}}^{6}J_{g}\|_{L^{2}_{x,\mathsf{s}}}^{2}\leq 20\iint^{\theta^{\phi}}\!\!\!\tilde{q}^{\frac{1}{2}}| \widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}_{\mathsf{in}})|^{2}+ \varepsilon C\langle\mathsf{B}_{6}\rangle^{2}\,, \tag{14.147}\] where \(C\) depends only on \(\alpha\). The bound for \(\iint^{\theta^{\phi}}\!\!\tilde{q}^{\frac{1}{2}}|\widetilde{\mathsf{D}}^{6}J_{g} (\cdot,\mathsf{s}_{\mathsf{in}})|^{2}\), given in (7.10), has an implicit constant that depends on \(\alpha\) and \(\mathsf{C}_{\mathsf{data}}\). This concludes the proof of (14.145b). The upstream modifications required for proof of the inequalities (14.145c)-(14.145f) are identical and these details will be omitted. To avoid redundancy, we also omit the proofs of the unweighted bounds for \((J_{g},\widetilde{\mathsf{D}}_{1}h,\widetilde{\mathsf{D}}_{2}h,\Sigma,V)\), as these are established exactly as in the proof of Proposition 7.1. 
### \(L^{2}\)-norms of functions evaluated along \(\Theta^{\mathsf{s}}\) We first define the set of points at the intersection of each surface \(\Theta^{\mathsf{s}}(x,\mathsf{s})\) and the initial time-slice \(\{\mathsf{s}=\mathsf{s}_{\mathsf{in}}\}\). Recall the definitions (14.112) and (14.11b), for each \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},0]\) and \(x_{2}\in\mathbb{T}\), the intersection of the initial time-slice \(\{\mathsf{s}=\mathsf{s}_{\mathsf{in}}\}\) with the slow characteristic surface \(\Theta^{\mathsf{s}}(x,\mathsf{s})\) occurs at \(x_{1}=\mathfrak{X}_{1}^{+}(x_{2},\mathfrak{q}^{-1}(x_{2},\mathsf{s}))\) _where we recall the definitions in (14.148). The spacetime \(L^{2}\) norm of \(F(x,\Theta^{\delta}(x,\mathsf{s}))\) is written as_ \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x, \mathsf{s}}(\Omega_{\mathsf{US},+})}^{2} =\int_{\mathsf{s}=\mathsf{s}_{\mathsf{m}}}^{0}\big{\|}F(\cdot, \Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}\] \[=\int_{\mathsf{s}=\mathsf{s}_{\mathsf{m}}}^{0}\int_{x_{2}=-\pi}^{ \pi}\int_{x_{1}=\widetilde{\mathbf{x}}_{1}^{-}(x_{2},\mathsf{s})}^{\widetilde{ \mathbf{x}}_{1}^{+}(x_{2},\mathsf{s})}\big{|}F(x,\Theta^{\delta}(x,\mathsf{s}) )\big{|}^{2}\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}\] \[=\int_{\Omega_{\mathsf{US},+}}\big{|}F(x,\Theta^{\delta}(x, \mathsf{s}))\big{|}^{2}\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}\,,\] (14.151a) _upon recalling the definition (_14.114_). By a slight abuse of notation, we shall write_ \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x, \mathsf{s}}}^{2}:=\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2} _{x,\mathsf{s}}(\Omega_{\mathsf{US},+})}^{2}\] (14.151b) _as \[F(\cdot,\Theta^{\delta}(\cdot,\cdot))\] is only defined on \[\Omega_{\mathsf{US},+}\] and hence there is no other possible domain for this spacetime norm._ #### 14.20.1. A spacetime change of variables formula It will be necessary for us to make the spacetime change of variables \(\mathsf{s}\mapsto\Theta^{\delta}(x,\mathsf{s})\) in order to relate the norm \(\|F(\cdot,\Theta^{\delta}(\cdot,\cdot))\|_{L^{2}_{x,\mathsf{s}}}\) (as defined by (14.151)), to the norm \(\|F\|_{L^{2}_{x,\mathsf{s}+}}\) (as defined by (14.126a) and (14.127)). This is possible because according to (14.120b) the spacetime \(\mathcal{H}^{\delta}_{+}\) is indeed foliated by the surfaces \((x,\Theta^{\delta}(x,\mathsf{s}^{\prime}))\), with \((x,\mathsf{s}^{\prime})\in\Omega_{\mathsf{US},+}\). That is, for every \((x,\mathsf{s})\in\mathcal{H}^{\delta}_{+}\), there exists a unique \(\mathsf{s}^{\prime}\in[\mathsf{s}_{\mathsf{m}},0)\) such that \(\mathsf{s}=\Theta^{\delta}(x,\mathsf{s}^{\prime})\), and the map \((x,\mathsf{s}^{\prime})\mapsto(x,\Theta^{\delta}(x,\mathsf{s}^{\prime}))=(x, \mathsf{s})\) is a bijection \(\Omega_{\mathsf{US},+}\to\mathcal{H}^{\delta}_{+}\). The Jacobian determinant of this map is \(\partial_{\mathsf{s}^{\prime}}\Theta^{\delta}(x,\mathsf{s}^{\prime})\), which according to (14.136a) is uniformly close to \(1\). 
Applying the change-of-variables theorem to this transformation, we find that \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x, \mathsf{s}}}^{2} =\int_{\Omega_{\mathsf{US},+}}\big{|}F(x,\Theta^{\delta}(x, \mathsf{s}^{\prime}))\big{|}^{2}\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d} \mathsf{s}^{\prime}\] \[=\int_{\mathcal{H}^{\delta}_{+}}\big{|}F(x,\mathsf{s})\big{|}^{2} \tfrac{1}{\partial_{\mathsf{s}^{\prime}}\Theta^{\delta}(x,\mathsf{s}^{ \prime})}\Big{|}_{\mathsf{s}=\Theta^{\delta}(x,\mathsf{s}^{\prime})}\mathrm{d}x _{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}=\big{\|}(\partial_{\mathsf{s}}\Theta^{ \delta})^{-\frac{1}{2}}\,\,F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{ \delta}_{+})}^{2}\,, \tag{14.152}\] the last equality coming from the definition of the \(L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{+})\)-norm given by (14.126a) and (14.127). #### 14.20.2. Useful inequalities for fifth-derivative bounds We next develop some inequalities that will be used for fifth-derivative bounds. We follow the methodology of Section 6.8. The main estimates established below are (14.156), (14.161), and (14.164). We first seek to obtain a bound for the \(L^{2}_{x}\)-norm of the composite function \(F(\cdot,\Theta^{\delta}(\cdot,s))\), as defined in (14.150). In order to bound from above \(\tfrac{d}{ds}\|F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\|_{L^{2}_{x}}^{2}\), it is clear from (14.150) that we need to be able to differentiate the stopping times \(\widetilde{\mathbf{x}}_{1}^{\pm}(x_{2},\mathsf{s})\) with respect to \(\mathsf{s}\). This is achieved by differentiating the two identities in (14.149) with respect to \(\mathsf{s}\), and appealing to (14.29a), (14.29c), (14.115a), (14.115c), (14.134a), and to the fact that \(J_{g}(\cdot,\mathsf{s}_{\mathsf{in}})=1\); this leads to \[\tfrac{d}{ds}\big{(}\widetilde{\mathbf{x}}_{1}^{+}(x_{2},\mathsf{ s})\big{)} =-\tfrac{(\partial_{\mathsf{s}}\Theta^{\delta})(\widetilde{\mathbf{x}}_{1}^{+}(x_{2},\mathsf{s}),x_{2},\mathsf{s})}{(\partial_{1}\Theta^{\delta})(\widetilde{ \mathbf{x}}_{1}^{+}(x_{2},\mathsf{s}),x_{2},\mathsf{s})}\in[\tfrac{\alpha\kappa _{0}}{5},5\alpha\kappa_{0}\big{]}\,, \tag{14.153a}\] \[\tfrac{d}{ds}\big{(}\widetilde{\mathbf{x}}_{1}^{-}(x_{2},\mathsf{ s})\big{)} =-\tfrac{(\partial_{\mathsf{s}}\Theta^{\delta})(\widetilde{\mathbf{x}}_{1}^{-}(x_{2},\mathsf{s}),x_{2},\mathsf{s})}{(\partial_{1}\Theta^{\delta})(\widetilde{ \mathbf{x}}_{1}^{-}(x_{2},\mathsf{s}),x_{2},\mathsf{s})}\in[\tfrac{\alpha\kappa _{0}}{5},\infty)\,. \tag{14.153b}\] The rough upper bound in (14.153b) is due to the fact that the \(J_{g}\circ\Theta^{\delta}\) factor implicitly present in \(\partial_{1}\Theta^{\delta}\) is evaluated at time \(\mathsf{q}(x_{2},\mathsf{t}_{\mathsf{fin}})\), and so \(J_{g}\) here could in principle be as close as possible to \(0\), though still strictly positive. 
With (14.153), we simply differentiate (14.150) and deduce that \[\tfrac{d}{ds}\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s})) \big{\|}_{L^{2}_{x}}^{2}\] \[\leq 5\alpha\kappa_{0}\int_{\mathbb{T}}\big{|}F(\widetilde{ \mathbf{x}}_{1}^{+}(x_{2},\mathsf{s}),x_{2},\mathsf{s}_{\mathsf{in}})\big{|}^{2} \mathrm{d}x_{2}\] \[\quad+\int_{x_{2}=-\pi}^{\pi}\int_{x_{1}=\widetilde{\mathbf{x}}_{1 }^{-}(x_{2},\mathsf{s})}^{\widetilde{\mathbf{x}}_{1}^{+}(x_{2},\mathsf{s})}2F(x_{1},x_{2},\Theta^{\delta}(x_{1},x_{2},\mathsf{s}))\partial_{\mathsf{s}}F(x_{1},x_{2 },\mathsf{s}^{\prime}(x_{1},x_{2},\mathsf{s}))\partial_{\mathsf{s}}\Theta^{ \delta}(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1}\mathrm{d}x_{2}\] \[\leq 5\alpha\kappa_{0}\|F(\widetilde{\mathbf{x}}_{1}^{+}(\cdot, \mathsf{s}),\cdot,\mathsf{s}_{\mathsf{in}})\|_{L^{2}_{x_{2}}}^{2}+2\cdot\tfrac{1 001}{999}\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}} \big{\|}\partial_{\mathsf{s}}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{ x}}\,. \tag{14.154}\] In the last inequality above we have appealed to (14.136a) to bound \(\partial_{\mathsf{s}}\Theta^{\delta}\). The estimate (14.154) then implies upon integration in time, and by appealing to (14.153a) and the change of variable formula for the map \((x_{2},\mathsf{s}^{\prime})\mapsto(x_{2},\widetilde{\mathfrak{X}}_{1}^{+}(x_{2},\mathsf{s}^{\prime}))\), whose determinant Jacobian is \(\frac{d}{d}\widetilde{\mathfrak{X}}_{1}^{+}\), that \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_ {x}}^{2} \leq\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}_{\mathsf{in}})) \big{\|}_{L^{2}_{x}}^{2}+5\alpha\kappa_{0}\int_{\mathfrak{s}_{\mathsf{in}}}^{ \mathsf{s}}\!\int_{\mathbb{T}}\!\!\big{|}F(\widetilde{\mathfrak{X}}_{1}^{+}(x_ {2},\mathsf{s}^{\prime}),x_{2},\mathsf{s}_{\mathsf{in}})\big{|}^{2}\mathrm{d} x_{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\qquad+2\cdot\tfrac{1001}{999}\int_{\mathfrak{s}_{\mathsf{in}}}^{ \mathsf{s}}\!\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|} _{L^{2}_{x}}\big{\|}\partial_{\mathsf{s}}F(\cdot,\Theta^{\delta}(\cdot, \mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}_{\mathsf{in} }))\big{\|}_{L^{2}_{x}}^{2}+25\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|} _{L^{2}_{x}}^{2}\] \[\qquad+2\cdot\tfrac{1001}{999}\int_{\mathfrak{s}_{\mathsf{in}}}^{ \mathsf{s}}\!\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|} _{L^{2}_{x}}\big{\|}\partial_{\mathsf{s}}F(\cdot,\Theta^{\delta}(\cdot, \mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\mathrm{d}\mathsf{s}^{\prime} \tag{14.155}\] We now consider a function \(F(x,\mathsf{s})\) such that \(\mathcal{J}^{r}\partial_{\mathsf{s}}F\in L^{2}_{x,\mathsf{s}}(\mathcal{H}^{ \delta})\), for some \(0\leq r\leq\frac{3}{4}\). Our goal is to establish the bound \[\big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta})}^{2}\leq 224 \varepsilon\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+6 64\varepsilon^{2}\big{\|}\mathcal{J}^{r}\,\partial_{\mathsf{s}}F\big{\|}_{L^{ 2}_{x,\mathsf{s}}(\mathcal{H}^{\delta})}^{2}\,,\qquad 0\leq r\leq\tfrac{3}{4}\,. \tag{14.156}\] In order to prove (14.156), we first decompose the left side of this inequality as integrals over \(\mathcal{H}^{\delta}_{+}\) and \(\mathcal{H}^{\delta}_{-}\). 
From (14.122b), (14.127), (14.136a), (14.151), and (14.152), we deduce that \[\big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta})}^{2}\leq\big{\|} F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{+})}^{2}+\big{\|}F\big{\|}_{L^{2} _{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\leq\tfrac{1001}{999}\big{\|}F( \cdot,\mathsf{s}^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x,\mathsf{s}}}^{2}+ \big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\,. \tag{14.157}\] For the first term on the right side of (14.157), we use definition (14.151) the lower bound (14.141) and the previously obtained upper bound (14.155), to deduce \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x, \mathsf{s}}}^{2} =\int_{\mathfrak{s}_{\mathsf{in}}}^{0}\!\big{\|}F(\cdot,\Theta^{ \delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}\] \[\leq|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}( \cdot,\mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+25|\mathsf{s}_{\mathsf{ in}}|\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}\] \[\qquad+2\cdot\tfrac{1001}{999}\int_{\mathfrak{s}_{\mathsf{in}}}^{ 0}\!\int_{\mathfrak{s}_{\mathsf{in}}}^{\mathsf{s}}\!\big{\|}F(\cdot,\Theta^{ \delta}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\big{\|}\partial_{ \mathsf{s}}F(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{ x}}\mathrm{d}\mathsf{s}^{\prime}\mathrm{d}\mathsf{s}\] \[\leq|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}( \cdot,\mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+25|\mathsf{s}_{\mathsf{ in}}|\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}\] \[\qquad+2\cdot\tfrac{1001}{999}\int_{\mathfrak{s}_{\mathsf{in}}}^{ 0}\!\int_{\mathfrak{s}_{\mathsf{in}}}^{\mathsf{s}}\!\big{\|}F(\cdot,\Theta^{ \delta}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\big{\|}(\mathcal{J}^{r }\partial_{\mathsf{s}}F)(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|} _{L^{2}_{x}}\big{(}\tfrac{89}{100}\cdot\tfrac{\mathsf{s}^{\prime}}{\mathsf{s}_{ \mathsf{in}}}\big{)}^{-r}\mathrm{d}\mathsf{s}^{\prime}\mathrm{d}\mathsf{s}\] \[\leq|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}( \cdot,\mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+25|\mathsf{s}_{ \mathsf{in}}|\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}\] \[\qquad+9|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}( \cdot,\cdot))\big{\|}_{L^{2}_{x,\mathsf{s}}}\big{\|}(\mathcal{J}^{r}\partial_{ \mathsf{s}}F)(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x,\mathsf{s}}}\,.\] In the last inequality we have used the fact that for \(r\in[0,3/4]\) we have \(\int_{\mathfrak{s}_{\mathsf{in}}}^{0}(\frac{\mathsf{s}}{\mathsf{s}_{\mathsf{in}}})^{-r} \mathrm{d}\mathsf{s}\leq 4|\mathsf{s}_{\mathsf{in}}|\). 
Using an Cauchy-Young argument, and then by appealing to (14.152) and (14.136a), we deduce from the above estimate that \[\big{\|}F(\cdot,\Theta^{\delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x, \mathsf{s}}}^{2} \leq 2|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}(\cdot, \mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+50|\mathsf{s}_{\mathsf{in}}| \big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+81|\mathsf{s}_{ \mathsf{in}}|^{2}\big{\|}(\mathcal{J}^{r}\partial_{\mathsf{s}}F)(\cdot,\Theta^{ \delta}(\cdot,\cdot))\big{\|}_{L^{2}_{x,\mathsf{s}}}^{2}\] \[\leq 2|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\Theta^{\delta}( \cdot,\mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+50|\mathsf{s}_{\mathsf{in \[\leq\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+2 \big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}\big{\|} \mathcal{J}^{\sigma}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}( \mathcal{H}^{\delta}_{-})}^{2}\,.\] The last inequality following from (14.138) and the fact that \(\mathcal{J}\geq 1\) in \(\mathcal{H}^{\delta}_{-}\) due to (14.140). Integrating the above inequality for \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]\) and then using Cauchy-Young, we obtain \[\big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\leq 2( \mathsf{s}_{\mathsf{fin}}-\mathsf{s}_{\mathsf{in}})\big{\|}F(\cdot,\mathsf{s}_ {\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+4(\mathsf{s}_{\mathsf{fin}}-\mathsf{s} _{\mathsf{in}})^{2}\big{\|}\mathcal{J}^{\sigma}\partial_{\mathsf{s}}F\big{\|} _{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\,. \tag{14.160}\] Combining the bounds (14.157), (14.158), (14.159), (14.160), using that \(|\mathsf{s}_{\mathsf{fin}}-\mathsf{s}_{\mathsf{in}}|\leq\frac{26}{25}|\mathsf{ s}_{\mathsf{in}}|\), and applying the Cauchy-Young inequality, we deduce \[\big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\leq 1 12|\mathsf{s}_{\mathsf{in}}|\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|} _{L^{2}_{x}}^{2}+166|\mathsf{s}_{\mathsf{in}}|^{2}\big{\|}\mathcal{J}^{\sigma }\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{ +})}^{2}+35|\mathsf{s}_{\mathsf{in}}|^{2}\big{\|}\mathcal{J}^{\sigma}\partial _{\mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\,.\] This bound clearly implies (14.156), as claimed. Our next goal is to establish the bound \[\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},0]}\big{\|}(J^{\frac{1}{2}}_{ \sigma}F)(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}^{2} \leq 40e^{15}\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+2 0e^{30}\varepsilon\big{\|}\mathcal{J}^{\frac{1}{4}}_{\sigma}J^{\frac{1}{2}}_{ \sigma}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{ \delta})}^{2}\,, \tag{14.161}\] in direct analogy to (6.75). 
For this purpose, we revisit the computation in (14.154)-(14.155), appeal to the fact that via (14.145) (more precisely, (6.64)) and the bootstrap assumptions in (14.132a) we have that \(\partial_{\mathsf{s}}J_{g}\leq-\frac{4}{5}\cdot\frac{1+\alpha}{2\varepsilon}+ 14\cdot\frac{1+\alpha}{2\varepsilon}J_{g}\leq 14|\mathsf{s}_{\mathsf{in}}|^{-1}J_{g}\), and thus \[\big{\|}(J^{\frac{1}{2}}_{\sigma}F)(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}^{2} \leq\big{\|}(J^{\frac{1}{2}}_{\sigma}F)(\cdot,\Theta^{\delta}( \cdot,\mathsf{s}_{\mathsf{in}}))\big{\|}_{L^{2}_{x}}^{2}+25\big{\|}(J^{\frac{1 }{2}}_{\sigma}F)(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}\] \[\quad+\frac{2002}{999}\big{\|}(J^{\frac{1}{2}}_{\sigma}F)(\cdot, \Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\big{\|}(J^{ \frac{1}{2}}_{\sigma}\partial_{\mathsf{s}}F)(\cdot,\Theta^{\delta}(\cdot, \mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}\mathrm{d}\mathsf{s}^{\prime}\] \[\quad+\frac{1001}{999}\cdot\frac{14}{|\mathsf{s}_{\mathsf{in}}|} \int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}(J^{\frac{1}{2}}_{\sigma}F)( \cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}^{2} \mathrm{d}\mathsf{s}^{\prime}\,. \tag{14.162}\] Using that \(J_{g}(\cdot,\mathsf{s}_{\mathsf{in}})\equiv 1\), that \(|J_{g}-1|\leq 5\cdot 10^{-4}\) and \(1\leq\mathcal{J}\leq 3\) in the closure of \(\mathcal{H}^{\delta}_{-}\) via (14.71c), appealing to the upper bounds (14.159), (14.160), to the lower bound (14.141), and using the norm equivalence stated in (14.152), we deduce from (14.162) that \[\big{\|}(J^{\frac{1}{2}}_{\sigma}F)(\cdot,\Theta^{\delta}(\cdot, \mathsf{s}))\big{\|}_{L^{2}_{x}}^{2} \leq 26\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+ 3\big{\|}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}\big{\|} \mathcal{J}^{\frac{1}{2}}_{\sigma}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x, \mathsf{s}}(\mathcal{H}^{\delta}_{-})}\] \[\quad+\frac{21}{10}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}} \big{\|}(J^{\frac{1}{2}}_{\sigma}F)(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{ \prime}))\big{\|}_{L^{2}_{x}}\big{\|}(J^{\frac{1}{2}}_{\sigma}J^{\frac{1}{2}}_{ \sigma}\partial_{\mathsf{s}}F)(\cdot,\Theta^{\delta}(\cdot,\mathsf{s}^{\prime})) \big{\|}_{L^{2}_{x}}\big{(}\frac{\mathsf{s}^{\prime}}{\mathsf{s}_{\mathsf{in}}})^{- \frac{1}{4}}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq 29\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2}_{x}}^{2}+ 10|\mathsf{s}_{\mathsf{in}}|\big{\|}\mathcal{J}^{\frac{1}{2}}_{\sigma}\partial_{ \mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta}_{-})}^{2}\] \[+3|\mathsf{s}_{\mathsf{in}}|^{\frac{1}{2}}\Big{(}\sup_{\mathsf{s}\in [\mathsf{s}_{\mathsf{in}},0]}\big{\|}(J_{\vartheta}^{\frac{1}{2}}F)(\cdot, \mathsf{\Theta}^{\mathsf{S}}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}\Big{)}\big{\|} \mathring{q}^{\frac{1}{4}}J_{\vartheta}^{\frac{1}{2}}\partial_{\mathsf{s}}F \big{\|}_{L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\mathsf{S}}_{+})}\] \[+\frac{15}{|\mathsf{s}_{\mathsf{in}}|}\int_{\mathsf{s}_{\mathsf{in }}}^{\mathsf{s}}\big{\|}(J_{\vartheta}^{\frac{1}{2}}F)(\cdot,\mathsf{\Theta}^{ \mathsf{S}}(\cdot,\mathsf{s}^{\prime}))\big{\|}_{L^{2}_{x}}^{2}\mathrm{d} \mathsf{s}^{\prime}\,. 
\tag{14.163}\] Using Gronwall's inequality for \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},0]\) and the Cauchy-Schwartz inequality, we deduce from (14.163) that \[\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},0]}\big{\|}(J_{ \vartheta}^{\frac{1}{2}}F)(\cdot,\mathsf{\Theta}^{\mathsf{S}}(\cdot,\mathsf{ s}))\big{\|}_{L^{2}_{x}}^{2} \leq 29e^{15}\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{ 2}_{x}}^{2}+14e^{15}|\mathsf{s}_{\mathsf{in}}|\big{\|}\mathring{q}^{\frac{1}{4 }}J_{\vartheta}^{\frac{1}{2}}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x, \mathsf{s}}(\mathcal{H}^{\mathsf{S}}_{-})}^{2}\] \[+9e^{30}|\mathsf{s}_{\mathsf{in}}|\big{\|}\mathring{q}^{\frac{1} {4}}J_{\vartheta}^{\frac{1}{2}}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x, \mathsf{s}}(\mathcal{H}^{\mathsf{S}}_{+})}^{2}+\frac{1}{4}\sup_{\mathsf{s}\in[ \mathsf{s}_{\mathsf{in}},0]}\big{\|}(J_{\vartheta}^{\frac{1}{2}}F)(\cdot, \mathsf{\Theta}^{\mathsf{S}}(\cdot,\mathsf{s}))\big{\|}_{L^{2}_{x}}^{2}\,.\] Absorbing the last term on the right side into the left side of the above bound, then establishes (14.161). The remaining bound that is established in this section is that for the norm defined in (14.122a), we have \[\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{in}}]}\|(J_ {\vartheta}^{\frac{1}{2}}F)(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2} \leq 14e^{28}\big{\|}F(\cdot,\mathsf{s}_{\mathsf{in}})\big{\|}_{L^{2} _{x}}^{2}+92\varepsilon e^{56}\big{\|}\mathring{q}^{\frac{1}{4}}J_{ \vartheta}^{\frac{1}{2}}\partial_{\mathsf{s}}F\big{\|}_{L^{2}_{x,\mathsf{s}}( \mathcal{H}^{\mathsf{S}}_{-})}^{2}\,. \tag{14.164}\] In order to prove (14.164), we split the integral defined in (14.122a) similarly to (14.126) as \[\|(J_{\vartheta}^{\frac{1}{2}}F)(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2} =\int_{\mathbb{T}}\!\int_{-\pi}^{\theta_{\mathsf{in}}(x_{2},\mathsf{ s})}(J_{\vartheta}F^{2})(x_{1},x_{2},\mathsf{s})\mathrm{d}x_{1}\mathrm{d}x_{2}+ \int_{\mathbb{T}}\!\int_{\theta_{\mathsf{in}}^{\mathsf{S}}(x_{2},\mathsf{s})}^ {\theta\,(x_{2},\mathsf{s})}(J_{\vartheta}F^{2})(x_{1},x_{2},\mathsf{s}) \mathrm{d}x_{1}\mathrm{d}x_{2}\] \[=:\mathcal{I}(\mathsf{s},-)+\mathcal{I}(\mathsf{s},+)\,. \tag{14.165}\] For the first integral, a fundamental theorem of calculus in time is applied between time \(\mathsf{s}\) and time \(\mathsf{s}_{\mathsf{in}}\). The monotonicity with respect to \(x_{1}\) of \(\mathsf{\Theta}^{\mathsf{S}}(x_{1},x_{2},\mathsf{s}_{\mathsf{in}})\) ensured by (14.136b) implies that for every \(x_{1}\leq\theta_{\mathsf{in}}^{\mathsf{S}}(x_{2},\mathsf{s})\), and for all \(\mathsf{s}^{\prime}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s})\), we have \((x_{1},x_{2},\mathsf{s}^{\prime})\in\mathcal{H}^{\mathsf{S}}_{-}\). 
Then, with the bounds for \(\mathcal{J}\) and \(J_{g}\) provided by (14.71c) in \(\mathcal{H}^{\delta}_{-}\), and with the estimate \(\partial_{\mathsf{s}}J_{g}\leq 14|\mathsf{s}_{\mathsf{in}}|^{-1}J_{g}\), we deduce
\[
\mathcal{I}(\mathsf{s},-)
=\int_{\mathbb{T}}\!\int_{-\pi}^{\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{s})}(J_{g}F^{2})(x_{1},x_{2},\mathsf{s}_{\mathsf{in}})\,\mathrm{d}x_{1}\mathrm{d}x_{2}
+\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\int_{\mathbb{T}}\!\int_{-\pi}^{\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{s})}\partial_{\mathsf{s}}(J_{g}F^{2})(x_{1},x_{2},\mathsf{s}')\,\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}\mathsf{s}'
\]
\[
\leq\big\|F(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{2}_{x}}^{2}
+2\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\|(J_{g}^{\frac{1}{2}}F)(\cdot,\mathsf{s}')\|_{L^{2}_{x}}\|(\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\partial_{\mathsf{s}}F)(\cdot,\mathsf{s}')\|_{L^{2}_{x}}\,\mathrm{d}\mathsf{s}'
+\tfrac{14}{|\mathsf{s}_{\mathsf{in}}|}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\|(J_{g}^{\frac{1}{2}}F)(\cdot,\mathsf{s}')\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'\,.
\tag{14.166}
\]
In order to bound the \(\mathcal{I}(\mathsf{s},+)\) integral appearing in (14.165), we let \(\mathsf{s}'\in(\mathsf{s}_{\mathsf{in}},0)\) be arbitrary (we will eventually pass \(\mathsf{s}'\to 0\)). Then, for each \(x_{2}\in\mathbb{T}\) fixed, there exists a unique
\[
\nu(x_{2},\mathsf{s},\mathsf{s}'):=x_{1}\in[\widetilde{\mathfrak{X}}_{1}^{-}(x_{2},\mathsf{s}'),\widetilde{\mathfrak{X}}_{1}^{+}(x_{2},\mathsf{s}')]\cap[\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{s}),\theta^{\delta}(x_{2},\mathsf{s}))\quad\text{such that}\quad\mathsf{s}=\Theta^{\delta}(x_{1},x_{2},\mathsf{s}')\,.
\]
Note that \(\lim_{\mathsf{s}'\to\mathsf{s}_{\mathsf{in}}}\nu(x_{2},\mathsf{s},\mathsf{s}')=\theta^{\delta}_{\mathsf{in}}(x_{2},\mathsf{s})\), and more importantly, \(\lim_{\mathsf{s}'\to 0}\nu(x_{2},\mathsf{s},\mathsf{s}')=\theta^{\delta}(x_{2},\mathsf{s})\). Because of this, by the monotone convergence theorem, we may write
\[
\mathcal{I}(\mathsf{s},+)=\lim_{\mathsf{s}'\to 0}\int_{\mathbb{T}}\!\int_{\widetilde{\mathfrak{X}}_{1}^{-}(x_{2},\mathsf{s}')}^{\nu(x_{2},\mathsf{s},\mathsf{s}')}(J_{g}F^{2})(x_{1},x_{2},\mathsf{s})\,\mathrm{d}x_{1}\mathrm{d}x_{2}\,.
\tag{14.167}
\]
Our goal is to obtain bounds for (14.167) which are uniform in \(\mathsf{s}'\). The first term on the right side of (14.168) is bounded directly, and the resulting estimate is clearly independent of \(\mathsf{s}'\). The second term on the right side of (14.168) is bounded similarly, using that \(\partial_{\mathsf{s}}J_{g}\leq 14|\mathsf{s}_{\mathsf{in}}|^{-1}J_{g}\), Fubini, the change of variables formula, and the fact that \(\mathcal{J}\) satisfies the lower bound (14.141); an upper bound is provided by
\[
\tfrac{14}{|\mathsf{s}_{\mathsf{in}}|}\int_{\mathbb{T}}\!\int\!\int_{\max\{\Theta^{\delta}(x,\mathsf{s}_{\mathsf{in}}),\mathsf{s}_{\mathsf{in}}\}}^{\mathsf{s}}\cdots
\]
which is again independent of \(\mathsf{s}'\). Combining (14.166) with the above bounds for \(\mathcal{I}(\mathsf{s},+)\) and passing \(\mathsf{s}'\to 0\) in (14.167) establishes (14.164). As a consequence, we obtain
\[
\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\widetilde{\mathcal{E}}_{5,\mathcal{T}}(\mathsf{s})^{2}\leq 14e^{36}\mathsf{C}_{\mathsf{data}}^{2}\varepsilon+\tfrac{94}{\varepsilon}e^{56}\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\mathsf{s}_{\mathsf{fin}})\,,
\]
and therefore, with (14.130a), we obtain that
\[
\varepsilon\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\widetilde{\mathcal{E}}_{5}^{2}(\mathsf{s})
\leq 14e^{28}\mathsf{C}_{\mathsf{data}}^{2}+94e^{56}\widetilde{\mathcal{D}}_{6,\mathcal{N}}^{2}(\mathsf{s}_{\mathsf{fin}})+(\mathsf{K}\varepsilon)^{-2}\big(14\varepsilon^{2}e^{28}\mathsf{C}_{\mathsf{data}}^{2}+94e^{56}\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\mathsf{s}_{\mathsf{fin}})\big)
\leq 28e^{28}\mathsf{C}_{\mathsf{data}}^{2}+94e^{56}\widetilde{\mathcal{D}}_{6}^{2}(\mathsf{s}_{\mathsf{fin}})\,.
\tag{14.175}
\]
From (14.174) and (14.175) we thus obtain
\[
\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\widetilde{\mathcal{E}}_{5}(\mathsf{s})+\widetilde{\mathcal{D}}_{5}(\mathsf{s}_{\mathsf{fin}})\leq 6e^{14}\mathsf{C}_{\mathsf{data}}+10e^{28}\widetilde{\mathcal{D}}_{6}(\mathsf{s}_{\mathsf{fin}})\,,
\]
and so the bootstrap (5.37r) holds (with a strict inequality) as soon as we set
\[
6e^{14}\mathsf{C}_{\mathsf{data}}+10e^{28}\mathsf{B}_{6}=:\mathsf{B}_{5}\,.
\tag{14.176}
\]

### 14.21. Improved estimates

#### 14.21.1. The \(H^{6}\) vorticity energy estimate

**Proposition 14.10** (\(H^{6}\) estimates for the vorticity).: _Let the ALE vorticity be defined as in (3.37), and let \(\Omega\) be the ALE specific vorticity given by (3.38). Assume that the bootstrap assumptions (14.132) hold, and that \(\varepsilon\) is taken to be sufficiently small to ensure \(\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{J}\rangle+\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{h}\rangle+\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\leq 1\)._
_Then, assuming \(\varepsilon\) is sufficiently small with respect to \(\alpha,\kappa_{0}\), \(\mathsf{C}_{\mathsf{data}}\), and \(\delta\), we have the bound_
\[
\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\big\|\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}\leq\mathring{C}\varepsilon(\mathsf{B}_{6})^{2}\,,
\tag{14.177}
\]
_where the implicit constant depends only on \(\alpha,\kappa_{0}\), \(\mathsf{C}_{\mathsf{data}}\), and \(\delta\). Additionally, we have that_
\[
\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\big\|\widetilde{\mathsf{D}}^{6}\Omega(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}\leq\mathring{C}\varepsilon(\mathsf{B}_{6})^{2}\,,
\tag{14.178}
\]
_where the implicit constant depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Moreover, we have that_
\[
\big\|Q\big\|_{L^{\infty}_{x,\mathsf{s}}}\leq 2^{3+\frac{2}{\alpha}}e^{18}\mathsf{C}_{\mathsf{data}}\,,
\tag{14.179a}
\]
\[
\big\|\widetilde{\mathsf{D}}\Omega\big\|_{L^{\infty}_{x,\mathsf{s}}}\leq 2(4e^{18})^{\frac{20.23(1+\alpha)}{\alpha}}\mathsf{C}_{\mathsf{data}}\,.
\tag{14.179b}
\]

Proof of Proposition 14.10.: First, we note that the proof of the inequalities (14.179a) and (14.179b) is identical to that given in the proof of Proposition 8.1, so that these pointwise bounds hold; they will be used for our energy estimate via the Gagliardo-Nirenberg-type inequality of Lemma B.3. Second, we will use the weight
\[
\mathcal{J}^{2r}\,,\qquad 0<r\leq\tfrac{3}{4}\,,
\]
for our energy method, with the intent of ultimately passing to the limit as \(r\to 0\). Third, by using (14.156) with \(F=\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}\Omega\), we obtain the bound for the unweighted fifth derivatives of the specific vorticity:
\[
\big\|\widetilde{\mathsf{D}}^{5}\Omega\big\|_{L^{2}_{x,\mathsf{s}}}\leq 15\varepsilon^{\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{5}\Omega(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{2}_{x}}+28\big\|\mathcal{J}^{r}\,\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}\Omega\big\|_{L^{2}_{x,\mathsf{s}}}\,.
\tag{14.180}
\]
Fourth, we can now turn to the \(H^{6}\) energy estimate. From (8.4), we have that
\[
\tfrac{J_{g}}{\Sigma}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}\Omega-\alpha\partial_{1}\widetilde{\mathsf{D}}^{6}\Omega+\alpha J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{6}\Omega=\mathsf{R}_{\Omega}\,,
\tag{14.181}
\]
where the remainder term \(\mathsf{R}_{\Omega}\) is defined in (8.5). For \(\beta>0\) to be chosen below, we compute the spacetime \(L^{2}\) inner-product of (14.181) with \(\Sigma^{-2\beta+1}\mathcal{J}^{2r}\widetilde{\mathsf{D}}^{6}\Omega\), and use the identities (14.108d), (14.129b), (14.129c), and (14.129d) to obtain the energy identity
\[
\iint^{\theta^{\delta}}\cdots
\tag{14.183}
\]
whose error terms are collected and bounded in (14.187). The idea is to choose \(\beta\) sufficiently large so as to absorb the first two norms on the right side of (14.187) by the second integral on the left side of (14.183). Since \(\mathring{C}_{(14.187)}\) is independent of \(\beta\), we may first choose \(\beta\) to be sufficiently large (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), and then \(\varepsilon\) to be sufficiently small (in terms of \(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}}\)), to ensure that
\[
\tfrac{4\alpha\beta}{5}\geq\mathring{C}_{(14.187)}(1+\varepsilon^{2}4^{\beta})+1\,;
\tag{14.188}
\]
one admissible concrete choice is spelled out below. With \(\beta=\beta(\alpha,\kappa_{0},\mathsf{C}_{\mathsf{data}})\) chosen so that (14.188) holds, from (14.183) we have that
\[
\iint^{\theta^{\delta}}\cdots
\]
thereby establishing (14.190c).
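As an aside, we record one admissible concrete way to satisfy (14.188); the specific numbers below are our own illustration (any choice meeting the stated largeness and smallness requirements works) and are not values fixed by the original argument:
\[
\beta:=\tfrac{5}{4\alpha}\big(2\mathring{C}_{(14.187)}+1\big)\qquad\text{and then}\qquad\varepsilon\leq 2^{-\beta}\,,
\]
since with these choices \(\varepsilon^{2}4^{\beta}\leq 2^{-2\beta}4^{\beta}=1\), and hence \(\mathring{C}_{(14.187)}(1+\varepsilon^{2}4^{\beta})+1\leq 2\mathring{C}_{(14.187)}+1=\tfrac{4\alpha\beta}{5}\), as required.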
To establish (14.190a), we first note that the first inequality in (6.75) holds in \(\mathcal{H}^{\delta}\), giving us that, for an \(\mathsf{s}\)-time-slice that intersects \(\mathcal{H}^{\delta}\),
\[
\|J_{g}^{\frac{1}{2}}F(\cdot,\mathsf{s})\|_{L^{2}_{x}}\leq e^{9}\|J_{g}^{\frac{1}{2}}F(\cdot,\mathsf{s}_{\mathsf{in}})\|_{L^{2}_{x}}+e^{9}\int_{0}^{\varepsilon}\|J_{g}^{\frac{1}{2}}\partial_{\mathsf{s}}F(\cdot,\mathsf{s})\|_{L^{2}_{x}}\,\mathrm{d}\mathsf{s}\,.
\tag{14.191}
\]
Letting \(F=\widetilde{\mathsf{D}}^{5}\Omega\) in (14.191), we then have that
\[
\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\Omega(\cdot,\mathsf{s})\|_{L^{2}_{x}}\leq e^{9}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\Omega(\cdot,\mathsf{s}_{\mathsf{in}})\|_{L^{2}_{x}}+e^{9}\varepsilon^{-\frac{1}{2}}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5}\Omega\|_{L^{2}_{x,\mathsf{s}}}\,.
\tag{14.192}
\]
We now once again employ the identity (3.37): by (4.11), (14.132), and (14.192),
\[
\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}}(\cdot,\mathsf{s})\|_{L^{2}_{x}}\lesssim\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\Big(\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\Omega(\cdot,\mathsf{s})\|_{L^{2}_{x}}+\tfrac{1}{2}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{T}}\|_{L^{2}_{x,\mathsf{s}}}+\tfrac{1}{2}\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{T}}\|_{L^{2}_{x,\mathsf{s}}}\Big)\lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\]
concluding the proof of the lemma.

#### 14.21.3. Improved estimates for \(\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\) in the upstream spacetime
**Lemma 14.12**.: _Under the assumptions of Proposition 14.10, we have that for any \(\overline{\beta}>0\) and \(\overline{a}\in[0,\frac{1}{2}]\),_
\[
\big\|J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\int_{0}^{\varepsilon}\big\|\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}(\mathsf{s})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}\leq\hat{C}\varepsilon(\mathsf{B}_{6})^{2}\,,
\tag{14.193a}
\]
\[
\big\|\widetilde{\mathsf{D}}^{4}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\leq\hat{C}\varepsilon^{\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.193b}
\]
\[
\big\|\Sigma^{-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{\eta}}J_{g}^{\overline{a}}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{1}{2\alpha}\|\Sigma^{-1-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{\eta}}J_{g}^{\overline{a}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}(J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}+\hat{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\overline{\beta}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.193c}
\]
\[
\big\|\Sigma^{-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{\eta}}J_{g}^{\overline{a}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\tfrac{1}{\alpha\varepsilon}\|\Sigma^{-1-\overline{\beta}}\mathcal{J}^{\frac{3}{4}-\overline{\eta}}J_{g}^{\overline{a}}\widetilde{\mathsf{D}}^{5}\widetilde{\mathsf{D}}_{\mathsf{s}}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{T}}\|_{L^{2}_{x,\mathsf{s}}}+\hat{C}\varepsilon(\tfrac{4}{\kappa_{0}})^{\overline{\beta}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.193d}
\]
_where the constant \(\hat{C}\) depends only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). It is also convenient to record the estimates_
\[
\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.193e}
\]
\[
\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{-\frac{1}{2}}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.193f}
\]
_where the implicit constants depend only on \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\)._

Proof of Lemma 14.12.: The proof of (14.193b)-(14.193d) is identical to that given in the proof of Lemma 8.3. The proof of (14.193a) requires the following modification: we test the equation (8.23) with \(\Sigma^{-2\beta+1}\mathcal{J}^{2r}\widetilde{\mathsf{D}}^{5}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}}\) for \(0<r\leq\frac{3}{4}\). The energy estimate is performed in the same manner as our weighted energy estimate for the specific vorticity in the proof of Proposition 14.10; namely, we obtain uniform-in-\(r\) estimates for norms weighted by \(\mathcal{J}^{r}\) and then pass to the limit as \(r\to 0\). The inequality (14.193a) is an immediate consequence.

#### 14.21.4. Improved estimates for \(\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}}\) in the upstream spacetime
**Lemma 14.13**.: _Under the assumptions of Proposition 14.10,_
\[
\|(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\|_{L^{\infty}_{x,\mathsf{s}}}\lesssim 1\,,
\tag{14.194a}
\]
\[
\|\widetilde{\mathsf{D}}^{4}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}\lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.194b}
\]
\[
\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{5}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\|_{L^{2}_{x,\mathsf{s}}}\lesssim\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.194c}
\]
_and we also have that_
\[
\big\|\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\leq 2\mathsf{C}_{\mathsf{data}}\varepsilon^{-\frac{1}{2}}\,,
\tag{14.195a}
\]
\[
\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\big\|_{L^{2}_{x,\mathsf{s}}}\leq\cdots\,.
\tag{14.195b}
\]
Following the bounds (8.53) and (8.54), we have that
\[
\big\|\mathsf{F}_{5}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.
\tag{14.197}
\]
We test (14.196) with \(\widetilde{\mathsf{D}}^{5}(J_{g}\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}})\), which results in
\[
\iint^{\theta^{\delta}}\cdots
\]

### 14.23. Upstream energy estimates

It remains for us to close the bootstrap (14.132b) (see (5.37r)) for the sixth order energy \(\widetilde{\mathcal{E}}_{6}\) and damping \(\widetilde{\mathcal{D}}_{6}\) norms, defined earlier in (14.130a) and (14.131a). Previously, this was achieved by separately establishing a bound for the tangential parts of the energy \(\widetilde{\mathcal{E}}_{6,\mathcal{T}}\) and damping \(\widetilde{\mathcal{D}}_{6,\mathcal{T}}\) in Section 10, and for the normal parts of the energy \(\widetilde{\mathcal{E}}_{6,\mathcal{N}}\) and damping \(\widetilde{\mathcal{D}}_{6,\mathcal{N}}\) in Section 12. In turn, these estimates required that we establish improved energy bounds for six "pure time derivatives" in Section 11. Just as for the downstream maximal development, for the upstream maximal development we follow exactly the same methodology. As before, the tangential bounds from Section 10 and the normal energy estimates from Section 12 run in parallel, the only difference being that the fundamental variables are un-weighted for the tangential part (i.e.
\((\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})\)) and are \(J_{g}\)-weighted for the normal part (i.e. \((J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\)). The special estimates for six pure time derivatives from Section 11 are used in the same way, to treat the remainders \(\mathcal{R}_{\hat{\mathbf{Z}}}^{\mathcal{T}}\) and \(\mathcal{R}_{\hat{\mathbf{Z}}}^{\mathcal{N}}\) in the \(\hat{\mathbf{Z}}_{\mathcal{T}}\), and respectively \(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\), equations. Since the tangential and normal energy estimates run in parallel (similarities and differences may be seen by comparing Sections 10 and 12), we do not repeat both of these two sets of energy estimates for the upstream geometry. Just as in our downstream analysis, the upstream modifications to the tangential-component energy estimates are identical to the modifications made to the normal-component energy estimates. For conciseness, we shall therefore only provide details for the upstream normal-component energy estimates (see Section 14.24 below).

#### 14.23.1. Sixth order tangential energy estimates

For the tangential energy estimates, at this point we simply record that by repeating the arguments from Section 10, with the modifications outlined in Section 14.24 below (see the argument leading to (14.245)-(14.248)), similarly to (10.70)-(10.71), there exists a constant
\[
\widehat{\mathfrak{c}}_{\alpha,\kappa_{0},\delta}>0\,,
\]
which depends only on \(\alpha\), \(\delta\), and \(\kappa_{0}\), and may be computed explicitly, such that
\[
\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\frac{1}{\varepsilon}\int_{0}^{\varepsilon}\big\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(\hat{\mathbf{W}}_{\mathcal{T}},\hat{\mathbf{Z}}_{\mathcal{T}},\hat{\mathbf{A}}_{\mathcal{T}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}
\]
\[
\qquad+\frac{1}{\varepsilon^{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|\mathcal{J}^{\frac{1}{4}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}+\frac{1}{\varepsilon^{2}}\int_{0}^{\varepsilon}\big\|\mathcal{J}^{-\frac{1}{4}}\mathcal{N}\cdot\widetilde{\mathsf{D}}^{6}\mathcal{T}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}
\]
\[
\leq\widehat{\mathfrak{c}}_{\alpha,\kappa_{0},\delta}\cdot\varepsilon\Big(\mathsf{C}_{\mathsf{data}}^{2}+\mathsf{B}_{6}^{2}+\hat{C}\varepsilon^{\frac{1}{2}}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\Big)\,.
\tag{14.201}
\]
Then, as in (10.72)-(10.75), upon ensuring that
\[
\mathsf{B}_{6}\geq\max\{1,\mathsf{C}_{\mathsf{data}}\}\,,
\tag{14.202}
\]
and upon defining
\[
\mathsf{K}:=8\max\{1,\widehat{\mathfrak{c}}_{\alpha,\kappa_{0},\delta}^{\frac{1}{2}}\}\,,
\tag{14.203}
\]
by letting \(\varepsilon\) be sufficiently small in terms of \(\alpha,\kappa_{0}\), \(\mathsf{C}_{\mathsf{data}}\), and \(\delta\), we deduce from (13.53) that
\[
\varepsilon\sup_{\mathsf{s}\in[0,\varepsilon]}\widetilde{\mathcal{E}}_{6,\mathcal{T}}^{2}(\mathsf{s})+\widetilde{\mathcal{D}}_{6,\mathcal{T}}^{2}(\varepsilon)\leq\tfrac{1}{8}(\varepsilon\mathsf{K})^{2}\mathsf{B}_{6}^{2}\,.
\tag{14.204}
\]
This bound is the same as (10.75).
It closes the "tangential part" of the remaining bootstrap (14.132b) for \(\widetilde{\mathcal{E}}_{6}\) and \(\widetilde{\mathcal{D}}_{6}\).

#### 14.23.2. Sixth order pure-time energy estimates

For the energy estimates in the case of pure-time derivatives, by repeating the arguments from Section 11 with the modifications outlined in Section 14.24 below, the same bound as given in (11.2b) holds, and therefore
\[
\varepsilon^{\frac{1}{2}}\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big\|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\varepsilon^{\frac{1}{2}}\big\|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}^{6}\hat{\mathbf{Z}}_{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.
\]

### 14.24. Upstream energy estimates for normal components

We continue to use the equation set (12.1), to which the operator \(\widetilde{\mathsf{D}}^{6}\) is applied. The energy identity (12.5) is then replaced with the upstream energy identity
\[
\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\cdots
\]

#### 14.24.1. The integral \(I^{\hat{\mathsf{W}}_{n}}\)

We additively decompose the integral \(I^{\hat{\mathsf{W}}_{n}}\). For \(\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]\), we have that
\[
I^{\hat{\mathsf{W}}_{n}}=I_{1}^{\hat{\mathsf{W}}_{n}}+I_{3}^{\hat{\mathsf{W}}_{n}}+I_{4}^{\hat{\mathsf{W}}_{n}}+I_{5}^{\hat{\mathsf{W}}_{n}}+I_{6}^{\hat{\mathsf{W}}_{n}}\,,
\]
\[
I_{1}^{\hat{\mathsf{W}}_{n}}=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\,,
\tag{14.206a}
\]
\[
I_{3}^{\hat{\mathsf{W}}_{n}}=\alpha\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\cdots
\]
\[I_{3}^{\hat{\mathbb{A}}_{n}} =2\alpha\int_{\mathbb{s}_{\text{\tiny{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{}}}}}}}}}}}}}}}}}\!\! \!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[\big{(}I_{9,c}^{\hat{\mathcal{Z}}_{n}},I_{9,d}^{\hat{\mathcal{A}}_{n}}\big{)}=- \alpha\int_{\mathbb{R}_{n}}^{\mathsf{s}}\!\!\!\int\!\!\!\!\int\!\!\!\!\!\int \!\!\!\!\!\!\int\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\!\! \int\!\!\!\!\!\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \(x_{1}\)_-independence of the weight \(\mathcal{J}\) in Section 12 and replaces the crucial fact that \(\widetilde{\mathsf{D}}_{1}J_{g}\geq 0\) in a large region of the downstream spacetime in Section 13._ #### 14.2.4. Upstream analysis of \(I_{3}^{\hat{W}_{n}}+I_{3}^{\hat{Z}_{n}}+I_{3}^{\hat{A}_{n}}\) Integrating-by-parts with respect to \(\widetilde{\mathsf{D}}_{2}\) and using (14.129c), we find \[I_{3}^{\hat{W}_{n}}+I_{3}^{\hat{Z}_{n}}+I_{3}^{\hat{A}_{n}} =2\alpha\int_{\mathsf{s}_{\mathsf{m}}}^{\mathsf{s}}\!\!\!\int\!\! 
\!\!\!\int\!\!\!\!\!\!\!\!\partial\mathscr{Y}_{\mathscr{B}}\mathscr{Y}_{ \mathscr{B}}-\tfrac{1}{2}\,\,\partial^{\frac{3}{2}}J_{g}\,\widetilde{\mathsf{D }}_{2}\big{(}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}}) \,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{S}}_{\mathcal{N}})\big{)}\] \[=:I_{3,a}^{\hat{W}_{n}+\hat{Z}_{n}+\hat{A}_{n}}+I_{3,b}^{\hat{W}_ {n}+\hat{Z}_{n}+\hat{A}_{n}}+I_{3,c}^{\hat{W}_{n}+\hat{Z}_{n}+\hat{A}_{n}}+I_ {3,d}^{\hat{W}_{n}+\hat{Z}_{n}+\hat{A}_{n}}+I_{3,d}^{\hat{W}_{n}+\hat{Z}_{n}+ \hat{A}_{n}}\,,\] \[I_{3,a}^{\hat{W}_{n}+\hat{Z}_{n}+\hat{A}_{n}} =\tfrac{3\alpha}{2}\int_{\mathsf{s}_{\mathsf{m}}}^{\mathsf{s}}\! \!\!\int\!\!\!\!\!\!\int\!\!\!\!\!\!\partial\mathscr{Y}_{\mathscr{B}}g^{-\frac {1}{2}}\widetilde{\mathsf{D}}_{2}\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, 
\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\,\mathscr{I}\, \mathscr{I}\ #### 14.24.8. Geometric identities in the upstream spacetime We will need the upstream variants of Lemmas 12.1 and 12.2 to analyze the terms with over-differentiated geometry. **Lemma 14.15**.: _We have that_ (14.218) _for all differentiable functions such that the right side of (14.218)._ Proof of Lemma 14.15.: We begin with the modifications to the first integral on the left side of (14.218). Substituting the identity (12.21) into the integral, and integrating-by-parts using (14.129), we find that (14.219) Here, we have used the fact that must contain at least a power of in order for the right side of (14.218) to be bounded, which in turn implied that on the surface. Upon comparison, every integral on the right side of (14.219) is directly analogous to every integral on the right side of (12.22) (with replacing and with the upstream modification of the domains of integration). Hence, the identical bounds hold for (14.219) as for (12.22). To bound the second integral on the left side of (14.218), we use the adjoint formula for given in (14.129c), and obtain (14.219) Using the bootstrap inequalities (5.37), the estimates (6.38), and the bounds for the geometry (7.1), we deduce This concludes the proof of the lemma. 
**Lemma 14.16**.: _We have that_ (14.220) _holds for all differentiable functions for which the right side of (14.220) is finite._

The proof of Lemma 14.16 is identical to the proof of Lemma 12.2.

#### 14.24.9. Upstream bounds for the forcing, remainder, and commutator functions

Following the analysis in Section 12.7.2, we have the upstream bounds (14.221a), (14.221b), and (14.221c) for the forcing, remainder, and commutator functions, together with
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\hat{\mathsf{Z}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.222a}
\]
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}_{\hat{\mathsf{Z}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\tfrac{4(1+\alpha)}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}})\big\|_{L^{2}_{x,\mathsf{s}}}\,,
\tag{14.222b}
\]
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C}_{\hat{\mathsf{Z}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.222c}
\]
and
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{\hat{\mathsf{A}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,,
\tag{14.223a}
\]
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{R}_{\hat{\mathsf{A}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\tfrac{4(1+\alpha)}{\varepsilon}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\boldsymbol{\mathsf{A}}}_{\mathcal{N}})\big\|_{L^{2}_{x,\mathsf{s}}}\,,
\tag{14.223b}
\]
\[
\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta-1}}\mathcal{C}_{\hat{\mathsf{A}}}^{\mathcal{N}}\big\|_{L^{2}_{x,\mathsf{s}}}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,.
\tag{14.223c}
\]
The proof of these inequalities relies on the bootstrap bounds (14.132), the bounds on the geometry (14.145), the improved estimates (14.190), (14.193), (14.195a), as well as (7.17), (B.9), and (B.13); with these bounds in hand, the inequalities (14.221)-(14.223) follow in the identical manner as proven in Section 12.7.2.

#### 14.24.10. The upstream analysis of \(I_{5}^{\hat{\mathsf{W}}_{n}}+I_{5}^{\hat{\mathsf{Z}}_{n}}+I_{7}^{\hat{\mathsf{A}}_{n}}\)

We first note that since \(\hat{\boldsymbol{\Sigma}}_{\mathcal{N}}=\tfrac{1}{2}(\hat{\boldsymbol{\mathsf{W}}}_{\mathcal{N}}-\hat{\boldsymbol{\mathsf{Z}}}_{\mathcal{N}})\), we have that
\[
I_{5}^{\hat{\mathsf{W}}_{n}}+I_{5}^{\hat{\mathsf{Z}}_{n}}=-\alpha\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\cdots
\]

#### 14.24.12. The upstream analysis of \(I_{4}^{\hat{\mathsf{A}}_{n}}\)

For the integral \(I_{4}^{\hat{\mathsf{A}}_{n}}\) defined in (14.208d), we first integrate by parts with respect to the \(\widetilde{\mathsf{D}}_{2}\) derivative. Using (14.129c), we have that
\[
I_{4}^{\hat{\mathsf{A}}_{n}}=I_{4,a}^{\hat{\mathsf{A}}_{n}}+I_{4,b}^{\hat{\mathsf{A}}_{n}}+I_{4,c}^{\hat{\mathsf{A}}_{n}}\,,\qquad I_{4,a}^{\hat{\mathsf{A}}_{n}}=2\alpha\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\cdots
\]
Similarly to (12.65), this leads to an additive decomposition whose terms are labeled (14.230a)-(14.230i); in particular,
\[
J_{5}^{\hat{\mathsf{A}}_{n}}=2\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathsf{S}}_{\mathcal{N}})(-V\hat{\mathsf{Q}}_{2}+\widetilde{\mathsf{D}}_{2}V)\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\,,
\tag{14.230e}
\]
\[
J_{6}^{\hat{\mathsf{A}}_{n}}=-\iint^{\theta^{\delta}}\tfrac{\mathsf{Q}}{\Sigma^{2\beta}}J_{g}\hat{\mathsf{W}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\Big|_{\mathsf{s}}\,,
\tag{14.230f}
\]
\[
J_{7}^{\hat{\mathsf{A}}_{n}}=\iint^{\theta^{\delta}}\tfrac{\mathsf{Q}}{\Sigma^{2\beta}}J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}\,\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\;\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\Big|_{\mathsf{s}}\,,
\tag{14.230g}
\]
\[
J_{8}^{\hat{\mathsf{A}}_{n}}=2\iint^{\theta^{\delta}}\tfrac{\mathsf{Q}}{\Sigma^{2\beta}}J_{g}\hat{\mathsf{S}}_{\mathcal{N}}\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\Big|_{\mathsf{s}_{\mathsf{in}}}\,,
\tag{14.230h}
\]
\[
J_{9}^{\hat{\mathsf{A}}_{n}}=-\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}^{\frac{3}{2}}\,(J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}J_{g}\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\,.
\tag{14.230i}
\]
In the same way that we bounded the analogous terms in (12.65), most of the terms in (14.230) are estimated using (5.33d), (14.132), (14.134), (14.143), (14.145), and (14.194a) as
\[
\big|J_{2}^{\hat{\mathsf{A}}_{n}}\big|+\big|J_{4}^{\hat{\mathsf{A}}_{n}}\big|+\big|J_{5}^{\hat{\mathsf{A}}_{n}}\big|+\big|J_{9}^{\hat{\mathsf{A}}_{n}}\big|\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,.
\tag{14.231}
\]
By additionally employing (14.71), we have that
\[
\big|J_{7}^{\hat{\mathsf{A}}_{n}}\big|\leq\big(\tfrac{101}{100}\big)^{\frac{1}{2}}\iint^{\theta^{\delta}}\tfrac{\mathsf{Q}}{\Sigma^{2\beta}}\big|J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}\big|\,\big|\mathcal{J}^{\frac{3}{4}}\widetilde{\mathsf{D}}^{6}J_{g}\big|\,\big|\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\big|\Big|_{\mathsf{s}}\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,,
\tag{14.232}
\]
and by also appealing to (4.8), (4.11), and (14.134b), we have that for \(\varepsilon\) sufficiently small,
\[
\big|J_{8}^{\hat{\mathsf{A}}_{n}}\big|\leq 2\left(\tfrac{1}{\varepsilon}+\hat{C}\right)(1+\varepsilon)(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{data}}^{2}\leq\tfrac{4}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{data}}^{2}\,.
\tag{14.233}
\]
Once again, the remaining integrals \(J_{1}^{\hat{\mathsf{A}}_{n}}\), \(J_{3}^{\hat{\mathsf{A}}_{n}}\), and \(J_{6}^{\hat{\mathsf{A}}_{n}}\) require some care in their estimation. The integral \(J_{1}^{\hat{\mathsf{A}}_{n}}\) produces an anti-damping term that must be combined with the last integral on the right side of (12.9). Using (5.27) and (5.30), a short computation shows that
\[
J_{1}^{\hat{\mathsf{A}}_{n}}=J_{1,a}^{\hat{\mathsf{A}}_{n}}+J_{1,b}^{\hat{\mathsf{A}}_{n}}+J_{1,c}^{\hat{\mathsf{A}}_{n}}+J_{1,d}^{\hat{\mathsf{A}}_{n}}\,,
\]
\[
J_{1,a}^{\hat{\mathsf{A}}_{n}}=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}\,\big|\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\big|^{2}\,,
\]
\[
J_{1,b}^{\hat{\mathsf{A}}_{n}}=\tfrac{1-\alpha}{2}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\,,
\]
\[
J_{1,c}^{\hat{\mathsf{A}}_{n}}=-\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})\big(\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\big)^{2}\,,
\]
\[
J_{1,d}^{\hat{\mathsf{A}}_{n}}=-2\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}\mathcal{J}^{\frac{3}{2}}(J_{g}\hat{\mathsf{S}}_{\mathcal{N}})\big(\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}J_{g}+(\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g})\big)\;\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})\,.
\]
Using (5.37), (6.38), (7.1), and Lemma B.5, we deduce
\[
\big|J_{1,c}^{\hat{\mathsf{A}}_{n}}\big|+\big|J_{1,d}^{\hat{\mathsf{A}}_{n}}\big|\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}+\mathsf{K}\varepsilon(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\lesssim(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,.
\tag{14.234a}
\]
Moreover, by the Cauchy-Schwarz and Cauchy-Young inequalities, (5.37c), and (14.71), we obtain that
\[
\big|J_{1,b}^{\hat{\mathsf{A}}_{n}}\big|\leq\tfrac{1+\alpha}{\varepsilon}\sqrt{\tfrac{101}{100}}\int_{0}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}\,\mathrm{d}\mathsf{s}'\leq\cdots
\]
The term containing \(\mathsf{G}_{\mathsf{bad}}=\mathcal{J}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\) produces an anti-damping integral that must be combined with the associated damping integral on the right side of (14.216). For this purpose, with \(\mathsf{G}_{\mathsf{good}}\) defined in (14.217), we see that for \(\varepsilon\) taken small enough,
\[
\mathsf{G}_{\mathsf{good}}+\mathsf{G}_{\mathsf{bad}}=-\tfrac{3}{4}\mathcal{J}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}+\tfrac{1}{2}\mathcal{J}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}-\mathring{C}\geq\tfrac{1+\alpha}{7\varepsilon}\mathcal{J}^{\frac{1}{2}}J_{g}\,,
\tag{14.234d}
\]
where we have used (14.144) to obtain this lower bound. Having estimated the integrals \(J_{1}^{\hat{\mathsf{A}}_{n}}\), \(J_{2}^{\hat{\mathsf{A}}_{n}}\), \(J_{4}^{\hat{\mathsf{A}}_{n}}\), \(J_{5}^{\hat{\mathsf{A}}_{n}}\), \(J_{7}^{\hat{\mathsf{A}}_{n}}\), and \(J_{8}^{\hat{\mathsf{A}}_{n}}\) in (14.230), it remains for us to estimate the integrals \(J_{3}^{\hat{\mathsf{A}}_{n}}\) and \(J_{6}^{\hat{\mathsf{A}}_{n}}\). We will first treat the integral \(J_{3}^{\hat{\mathsf{A}}_{n}}\), which produces new energy and damping norms that we will crucially rely upon. As before, the following identity is the key to the construction of these new energy and damping norms: from (5.30) and (5.27), we find that
\[
\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})=\tfrac{2}{1+\alpha}\big(\widetilde{\mathsf{D}}^{6}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})J_{g}-\tfrac{1-\alpha}{2}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\big)
=\tfrac{2}{1+\alpha}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{6}J_{g}+\tfrac{2}{1+\alpha}\big(\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}J_{g}+(\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g})\big)-\tfrac{1-\alpha}{1+\alpha}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})\,.
\tag{14.235}\] We substitute (14.235) into the integral \(J^{\hat{\mathsf{A}}_{n}}_{3}\) in (14.230c), employ (5.33d) and the adjoint formula (14.129d), use that by (14.119) \(\mathcal{J}=0\) on \(\theta^{\hat{\mathsf{F}}}(x_{2},\mathsf{s})\), and arrive at the additive decomposition \[J^{\hat{\mathsf{A}}_{n}}_{3} =\tfrac{3}{2}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}_{\mathsf{ n}}}\!\!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{ \mathsf{A}}_{n}}_{2}\,(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\widetilde{\mathsf{ D}}^{6}J_{g}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\] \[=J^{\hat{\mathsf{A}}_{n}}_{3,a}+J^{\hat{\mathsf{A}}_{n}}_{3,b}+J^{ \hat{\mathsf{A}}_{n}}_{3,c}+J^{\hat{\mathsf{A}}_{n}}_{3,d}+J^{\hat{\mathsf{A}}_ {n}}_{3,e}+J^{\hat{\mathsf{A}}_{n}}_{3,f}+J^{\hat{\mathsf{A}}_{n}}_{3,g}+J^{ \hat{\mathsf{A}}_{n}}_{3,h}+J^{\hat{\mathsf{A}}_{n}}_{3,i}\,,\] (14.236) \[J^{\hat{\mathsf{A}}_{n}}_{3,a} =\tfrac{3}{2(1+\alpha)}\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{ \Sigma^{2\beta}}\Big{(}\!-\!(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}) \mathcal{J}\Big{)}\mathcal{J}^{\hat{\mathsf{A}}^{\hat{\mathsf{I}}}}(-J_{g}\hat{ \mathbf{W}}_{\mathcal{N}}+\tfrac{13}{\varepsilon}J_{g})\big{|}\widetilde{ \mathsf{D}}^{6}J_{g}\big{|}^{2}\Big{|}_{\mathsf{s}}\,,\] \[J^{\hat{\mathsf{A}}_{n}}_{3,b} =\tfrac{3}{4(1+\alpha)}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}} \!\!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}\Big{(}( \mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{ \mathsf{A}}^{\hat{\mathsf{I}}}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(-J _{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{13}{\varepsilon}J_{g})\,\big{|} \widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\,,\] \[J^{\hat{\mathsf{A}}_{n}}_{3,d} =\tfrac{3}{2(1+\alpha)}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}} \!\!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})^{2}\mathcal{J}\,\mathcal{J}^{\hat{ \mathsf{A}}^{\hat{\mathsf{I}}}}(-J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{13}{ \varepsilon}J_{g})\big{|}\widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\,,\] \[J^{\hat{\mathsf{A}}_{n}}_{3,e} =\tfrac{3}{2(1+\alpha)}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}} \!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{\mathsf{A}}^{ \hat{\mathsf{I}}}}(-J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+\tfrac{13}{\varepsilon}J_{ g})\big{|}\widetilde{\mathsf{D}}^{6}J_{g}\big{|}^{2}\,,\] \[J^{\hat{\mathsf{A}}_{n}}_{3,g} =\tfrac{39}{2\varepsilon}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}} \!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{\mathsf{I}} ^{\hat{\mathsf{I}}}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q}\partial_{\mathsf{s}}+V \partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{\mathsf{I}}^{\hat{\mathsf{I}}}}J_{g} \widetilde{\mathsf{D}}^{6}J_{g}\,\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{ \mathcal{N}})\,,\] \[J^{\hat{\mathsf{A}}_{n}}_{3,h} =\tfrac{3}{(1+\alpha)}\int_{\mathsf{s}_{\mathsf{n}}}^{\mathsf{s}} \!\!\iint^{\theta^{\hat{\mathsf{F}}}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q} \partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\hat{\mathsf{I}}^{ \hat{\mathsf{I}}}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-\tfrac{13}{\varepsilon}J_{ 
g})\widetilde{\mathsf{D}}^{6}J_{g}\,\big(\widetilde{\mathsf{D}}^{6}V\widetilde{\mathsf{D}}_{2}J_{g}+(\widetilde{\mathsf{D}}^{6},V,\widetilde{\mathsf{D}}_{2}J_{g})\big)\,,
\]
\[
J_{3,i}^{\hat{\mathsf{A}}_{n}}=-\tfrac{3(1-\alpha)}{2(1+\alpha)}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\tfrac{1}{\Sigma^{2\beta}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}\,\mathcal{J}^{\frac{1}{2}}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}})\cdots
\]
For the remaining terms in the additive decomposition of \(J_{3}^{\hat{\mathsf{A}}_{n}}\), using the bounds (4.11), (6.64), (14.132), (14.143), (14.145), (14.146), together with the identity (5.30), and choosing \(\varepsilon\) sufficiently small, we obtain the bounds
\[
J_{3,c}^{\hat{\mathsf{A}}_{n}}\geq\tfrac{3}{2(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\iint^{\theta^{\delta}}\cdots
\]
To summarize, with (14.227), (14.229), (14.231), (14.233), (14.234), (14.238), (14.239), and (14.240), the inequality \(\mathcal{J}\leq\frac{101}{100}J_{g}\) for \((x,\mathsf{s})\in\mathcal{H}^{\delta}\) such that \(|x_{1}|\leq 13\pi\varepsilon\), and a specific Cauchy-Young inequality, we have found that
\[
I_{4}^{\hat{\mathsf{A}}_{n}}\geq-\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}-\tfrac{4}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C}_{\mathsf{data}}^{2}
+\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\!\!\iint^{\theta^{\delta}}\cdots-\tfrac{4(1+\alpha)}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big\|\tfrac{\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'
\]
\[
\quad+\tfrac{1}{20(1+\alpha)}\tfrac{1}{\varepsilon^{2}}\big\|\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}-\tfrac{144+1600^{2}}{\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big\|\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'
\]
\[
\quad+\tfrac{7}{40(1+\alpha)}\tfrac{1}{\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big\|\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'-\tfrac{25}{52}\big\|\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}^{2}
\]
\[
\quad-\tfrac{44(1+\alpha)}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big\|\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{4}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}})(\cdot,\mathsf{s}')\big\|_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}'\,.
\tag{14.241}
\]

#### 14.24.13. The integrals \(I_{7}^{\hat{\mathsf{Z}}_{n}}\), \(I_{5}^{\hat{\mathsf{A}}_{n}}\), and \(I_{8}^{\hat{\mathsf{Z}}_{n}}\)

Following the same analysis as in Sections 12.7.6-12.7.8, we obtain
\[
\big|I_{7}^{\hat{\mathsf{Z}}_{n}}\big|+\big|I_{5}^{\hat{\mathsf{A}}_{n}}\big|+\big|I_{8}^{\hat{\mathsf{Z}}_{n}}\big|\lesssim(1+\hat{C}\mathsf{K}\varepsilon)(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,.
\tag{14.242}
\]

#### 14.25. The forcing and commutator terms

It remains for us to bound the integrals \(I_{6}^{\hat{\mathsf{W}}_{n}}\), \(I_{10}^{\hat{\mathsf{Z}}_{n}}\), and \(I_{10}^{\hat{\mathsf{A}}_{n}}\) in (14.206e), (14.207j), and (14.208j), respectively. In order to estimate these integrals, we use the definitions of the forcing functions in (3.43), together with the definitions of the so-called remainder and commutator functions in (12.2), (12.3), and (12.4), whose bounds were given in (14.221), (14.222), and (14.223).
Using the Cauchy-Schwarz inequality, the bound \(\mathcal{J}\leq\tfrac{21}{10}J_{g}\) from (14.142), and (5.37r), we deduce from the aforementioned bounds that \[\big{|}I_{6}^{\hat{\mathsf{W}}_{n}}\big{|}\leq\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{14.243a}\] Similarly, using (14.222) we also have that \[\big{|}I_{10}^{\hat{\mathsf{Z}}_{n}}\big{|}\leq\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\Big{(}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta-1}}\widetilde{\mathsf{D}}^{6}\mathsf{F}_{2}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta-1}}\mathcal{R}_{\hat{\mathsf{Z}}}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}+\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta-1}}\mathcal{C}_{\hat{\mathsf{Z}}}^{\mathcal{N}}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}\Big{)}\,\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{14.243b}\] In the same way, using (14.223) we obtain the bound \[\big{|}I_{10}^{\hat{\mathsf{A}}_{n}}\big{|}\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,. \tag{14.243c}\] Collecting the bounds (14.243a), (14.243b), and (14.243c), we have that \[\big{|}I_{6}^{\hat{\mathsf{W}}_{n}}\big{|}+\big{|}I_{10}^{\hat{\mathsf{Z}}_{n}}\big{|}+\big{|}I_{10}^{\hat{\mathsf{A}}_{n}}\big{|}\leq\tfrac{4(1+\alpha)}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\,\mathrm{d}\mathsf{s}^{\prime}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\mathsf{K}\langle\mathsf{B}_{6}\rangle^{2}\,,\] where \(\hat{C}=\hat{C}(\alpha,\kappa_{0},\mathsf{C_{data}})\) is independent of \(\beta\) (and \(\varepsilon\), as always). We choose \(\beta=\beta(\alpha)\) to be sufficiently large to ensure that the damping for \(\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})\) is strong enough, i.e., \[\tfrac{2\alpha(2\beta-1)+2(1+\alpha)}{5}-580(1+\alpha)\geq 1\,.\] More precisely, we choose \(\beta\) to ensure equality in the above inequality; namely, we take \[\beta_{\alpha}:=\tfrac{2903+2900\alpha}{4\alpha}\,. \tag{14.246}\] With this choice of \(\beta=\beta_{\alpha}\), we return to (14.245), and choose \(\varepsilon\) to be sufficiently small in terms of \(\alpha,\kappa_{0},\mathsf{C_{data}}\).
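For the reader's convenience, the elementary algebra behind (14.246) is the following: imposing equality above and multiplying by \(5\) gives \[2\alpha(2\beta-1)+2(1+\alpha)=5+2900(1+\alpha)\quad\Longleftrightarrow\quad 4\alpha\beta+2=2905+2900\alpha\quad\Longleftrightarrow\quad\beta=\tfrac{2903+2900\alpha}{4\alpha}\,,\] which is precisely the value \(\beta_{\alpha}\) recorded in (14.246).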
After re-arranging, we deduce that \[\tfrac{1}{53}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{20\varepsilon^{2}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\] \[+\tfrac{1+\alpha}{7\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{178(1+\alpha)}{100\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[+\tfrac{1}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{7}{40(1+\alpha)\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\tfrac{10000}{\varepsilon}(\tfrac{3}{\kappa_{0}})^{2\beta}\mathsf{C_{data}^{2}}+\hat{C}(\tfrac{4}{\kappa_{0}})^{2\beta}\langle\mathsf{B}_{6}\rangle^{2}\] \[+\tfrac{C_{\alpha}}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{C}{\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big{\|}\tfrac{\mathsf{Q}^{\frac{1}{2}}\mathcal{J}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\,, \tag{14.247}\] where \(C_{\alpha}\) is a constant that depends only on \(\alpha\) (and is independent of \(\kappa_{0}\) and \(\mathsf{C_{data}}\)), \(C\) is a universal constant which is independent of \(\alpha,\kappa_{0},\mathsf{C_{data}}\), and \(\hat{C}\) is as defined above.
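Schematically (suppressing the precise weights and norms), the inequality (14.247) is of the form \[X(\mathsf{s})+D(\mathsf{s})\leq A+\tfrac{C_{\alpha}}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}X(\mathsf{s}^{\prime})\,\mathrm{d}\mathsf{s}^{\prime}\,,\] with \(X\) the energy terms, \(D\geq 0\) the damping terms, and \(A\) collecting the data and bootstrap contributions, together with the \(\widetilde{\mathsf{D}}^{6}J_{g}\) terms that were bounded previously. Since the \(\mathsf{s}\)-interval has length \(\mathcal{O}(\varepsilon)\), Grönwall's inequality yields \(X(\mathsf{s})\leq Ae^{C_{\alpha}(\mathsf{s}-\mathsf{s}_{\mathsf{in}})/\varepsilon}\lesssim_{\alpha}A\), which explains why the factor \(\tfrac{1}{\varepsilon}\) in the Grönwall term is harmless; this is the computation carried out next.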
An application of Grönwall's inequality to (14.247) shows that there exists a constant \(\tilde{\mathsf{c}}_{\alpha,\kappa_{0},\delta}>0\), which only depends on \(\alpha\), \(\kappa_{0}\), and \(\delta\), such that \[\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}+\tfrac{1}{\varepsilon}\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s})\big{\|}_{L^{2}_{x}}^{2}\] \[+\tfrac{1}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}J_{g}^{\frac{1}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}},J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}+\tfrac{1}{\varepsilon}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta_{\alpha}}}\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{Z}}_{\mathcal{N}},J_{g}\hat{\mathbf{A}}_{\mathcal{N}})(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\] \[+\tfrac{1}{\varepsilon^{3}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\big{\|}\tfrac{\mathsf{Q}^{\frac{3}{2}}}{\Sigma^{\beta}}\widetilde{\mathsf{D}}^{6}J_{g}(\cdot,\mathsf{s}^{\prime})\big{\|}_{L^{2}_{x}}^{2}\mathrm{d}\mathsf{s}^{\prime}\leq\tilde{\mathsf{c}}_{\alpha,\kappa_{0},\delta}\tfrac{1}{\varepsilon}(\tfrac{4}{\kappa_{0}})^{2\beta_{\alpha}}\big{(}\mathsf{C_{data}^{2}}+\varepsilon\hat{C}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\big{)}\,. \tag{14.248}\] Just as in the conclusion of our previously established energy estimates, at this stage we multiply the above estimate by \(\kappa_{0}^{2\beta_{\alpha}}\), appeal to (5.37p), drop the energy and damping terms for \(\widetilde{\mathsf{D}}^{6}J_{g}\) (since these were bounded already in Proposition 7.1), and recall the definitions of \(\widetilde{\mathcal{E}}^{2}_{6,\mathcal{N}}(\mathsf{s})\) and \(\widetilde{\mathcal{D}}^{2}_{6,\mathcal{N}}(\mathsf{s})\) to deduce that \[\varepsilon\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\widetilde{\mathcal{E}}^{2}_{6,\mathcal{N}}(\mathsf{s})+\widetilde{\mathcal{D}}^{2}_{6,\mathcal{N}}(\mathsf{s}_{\mathsf{fin}})\leq\tilde{\mathsf{c}}_{\alpha,\kappa_{0},\delta}\big{(}\mathsf{C_{data}^{2}}+\varepsilon\hat{C}\mathsf{K}^{2}\langle\mathsf{B}_{6}\rangle^{2}\big{)}\,. \tag{14.251}\]

#### Closing the bootstrap for the sixth order energy

Combining (14.251) with (14.204), we arrive at the same inequality as obtained in (12.94), \[\varepsilon\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\widetilde{\mathcal{E}}_{\mathsf{6}}(\mathsf{s})+\widetilde{\mathcal{D}}_{\mathsf{6}}(\mathsf{s}_{\mathsf{fin}})\leq\tfrac{1}{2}\mathsf{B}_{\mathsf{6}}\,, \tag{14.252}\] which closes the bootstrap (14.132b) (cf. (5.377)) in the upstream coordinate system (14.99).

### 15. Optimal regularity for velocity, sound speed, and ALE map

We discuss the optimal regularity of the fields \(U=u\circ\psi\) and \(\Sigma=\sigma\circ\psi\) (as defined in (3.2a)), and of the ALE map \(\psi\) (as defined in (2.7)).
That is, we show how the bootstrap bounds (5.37), obtained for the differentiated Riemann variables \((\hat{\mathbf{W}},\hat{\mathbf{Z}},\hat{\mathbf{A}})\) and the geometric quantities \((J_{g},h_{,2})\) at the sixth derivative level, imply corresponding estimates for the un-differentiated unknowns \((U,\Sigma,h)\) at the seventh derivative level. Our main result is:

**Proposition 15.1** (**Optimal regularity for \((U,\Sigma,h)\)**).: _Let \(\varphi\) be a weight function, defined as follows:_

* _For the "shock formation" in Sections_ 5_-_12_, let_ \(\varphi=\mathcal{J}\) _as defined in (_5.14_)._
* _For the "downstream maximal development" in Section_ 13_, let_ \(\varphi=\mathcal{J}\)_, as defined in (_13.6_)._
* _For the "upstream maximal development" of Section_ 14_, let_ \(\varphi=\mathcal{J}\)_, as defined by (_14.58_) and (_14.66_)._

_Then, for each of these weights \(\varphi\) individually, we have the bounds_ \[\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\bigl{\|}\varphi^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}\Sigma(\cdot,\mathsf{s})\bigr{\|}_{L^{2}_{x}}+\bigl{\|}\varphi^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}\Sigma\bigr{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{15.1a}\] \[\varepsilon^{\frac{1}{2}}\bigl{\|}\varphi^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}U\bigr{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\bigl{\|}\varphi^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}U\bigr{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle \tag{15.1b}\] \[\varepsilon^{\frac{1}{2}}\bigl{\|}\varphi^{\frac{1}{4}}\widetilde{\mathsf{D}}^{7}h\bigr{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\bigl{\|}\widetilde{\mathsf{D}}^{7}h\bigr{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.1c}\]

_In (15.1) it is understood that the \((x,\mathsf{s})\) coordinates are given by (5.18b) for Sections_ 5_-_12_, by (_13.8b_) for Section_ 13_, and by (_14.99_) for Section_ 14_._

**Remark 15.2** (**Bounds in terms of the \((x,t)\) variables**).: _The estimates provided by (15.1) only concern the \((x,\mathsf{s})\) variables, and we have dropped the tildes, as described in Remark 5.2, Remark 13.6, and Remark 14.6. It is clear however that the \(L^{2}_{x,\mathsf{s}}\) estimates in (15.1) may be converted into estimates in \((x,t)\) coordinates by using the change of variables formula, and that the \(L^{\infty}_{\mathsf{s}}L^{2}_{x}\) bounds imply suitable bounds in \((x,t)\) coordinates. For example, for the analysis in Sections_ 5_-_12_, the Jacobian of the map \((x,t)\mapsto(x,\mathsf{s})\) present in (5.20) is easily seen to equal \(|\partial_{t}\mathsf{q}|=\widehat{\mathsf{Q}}\), and (6.38a) gives global upper and lower bounds for \(\widehat{\mathsf{Q}}\) (which are strictly positive and depend only on \(\alpha\)). This matter was previously addressed in Remark 5.3.
As such, with the spacetime \(\mathcal{P}\) defined in (5.11), we deduce from the \(L^{2}_{\mathsf{s}}\) bounds in (15.1) that_ \[\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\mathsf{D}^{7}\Sigma\|_{L^{2}_{x,t}(\mathcal{P})}+\|\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\mathsf{D}^{7}U\|_{L^{2}_{x,t}(\mathcal{P})}+\tfrac{1}{\varepsilon}\|J_{g}^{\frac{1}{2}}\mathsf{D}^{7}h\|_{L^{2}_{x,t}(\mathcal{P})}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.2a}\] _On the other hand, the uniform-in-\(\mathsf{s}\) bounds (15.1) imply uniform bounds along the foliation of \(\mathcal{P}\) with the (cylindrical) level sets \(\{(x_{1},x_{2},t)\colon\mathcal{J}(x_{2},t)=1-\tfrac{\mathsf{s}}{\varepsilon}\}=\{(x_{1},x_{2},\mathsf{q}^{-1}(x_{2},\mathsf{s}))\}\) for \(\mathsf{s}\in[0,\varepsilon]\). That is, we have_ \[\sup_{\mathsf{s}\in[0,\varepsilon]}(1-\tfrac{\mathsf{s}}{\varepsilon})^{\frac{1}{4}}\|J_{g}^{\frac{1}{2}}\mathsf{D}^{7}\Sigma(x_{1},x_{2},\mathsf{q}^{-1}(x_{2},\mathsf{s}))\|_{L^{2}_{x}}+\sup_{\mathsf{s}\in[0,\varepsilon]}(1-\tfrac{\mathsf{s}}{\varepsilon})^{\frac{1}{4}}\|J_{g}^{\frac{1}{2}}\mathsf{D}^{7}U(x_{1},x_{2},\mathsf{q}^{-1}(x_{2},\mathsf{s}))\|_{L^{2}_{x}}\] \[\qquad+\tfrac{1}{\varepsilon}\sup_{\mathsf{s}\in[0,\varepsilon]}\|J_{g}^{\frac{1}{2}}\mathsf{D}^{7}h(x_{1},x_{2},\mathsf{q}^{-1}(x_{2},\mathsf{s}))\|_{L^{2}_{x}}\lesssim\varepsilon^{\frac{1}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{15.2b}\] _where as usual \(\mathsf{D}=(\varepsilon\partial_{t},\varepsilon\partial_{1},\partial_{2})\)._

_Bounds similar to (15.2) hold also for the downstream maximal development considered in Section_ 13_. The_ \(L^{2}_{x,t}\) _bounds analogous to (_15.2a_) (with_ \(\mathcal{J}\) _replaced by the weight defined in (_13.6_)) hold over the spacetime_ \(\mathcal{P}^{\sharp}\) _defined in (_13.7_), while the bound analogous to (_15.2b_) holds with_ \(\mathsf{q}^{-1}\) _replaced by its analogue from (_13.8b_). Similarly,_ \(L^{2}_{x,t}\) _bounds for the upstream maximal development considered in Section_ 14 _hold over the spacetime_ \(\mathcal{H}^{\delta}\) _defined in (_14.120a_)._

Proof of Proposition 15.1.: For simplicity, we only state here the bounds involving the weight function \(\varphi=\mathcal{J}\), as defined in (5.14), which is used for shock formation in Sections 5-12. These bounds are stated in (15.4), (15.14), and (15.16) below. The same bounds hold for the downstream maximal development discussed in Section 13, upon replacing the weight \(\mathcal{J}\) appearing in (15.4), (15.14), and (15.16) with the weight defined in (13.6), and the same bounds hold for the upstream maximal development discussed in Section 14, upon replacing the weight \(\mathcal{J}\) appearing in (15.4), (15.14), and (15.16) with the weight defined by (14.58) and (14.66).

### Bounds for \(\widetilde{\mathsf{D}}^{7}\Sigma\)

We note that the bounds (7.1j) and (7.1k) provide estimates for \(\widetilde{\mathsf{D}}^{6}\Sigma\): \[\varepsilon^{\frac{1}{2}}\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}\Sigma\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\widetilde{\mathsf{D}}^{6}\Sigma\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\langle\mathsf{B}_{6}\rangle\,.
\tag{15.3}\] In order to estimate \(\widetilde{\mathsf{D}}^{7}\Sigma\), we recall from (3.19b) and (3.20) that \[\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}\Sigma=\tfrac{1}{2}(J_{g}\hat{\mathbf{W}}_{\mathcal{N}}-J_{g}\hat{\mathbf{Z}}_{\mathcal{N}})+\tfrac{1}{2}J_{g}\widetilde{\mathsf{D}}_{2}h\,(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] \[\widetilde{\mathsf{D}}_{2}\Sigma=\tfrac{1}{2}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] \[\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}\Sigma=-\alpha\Sigma(\hat{\mathbf{Z}}_{\mathcal{N}}+\hat{\mathbf{A}}_{\mathcal{T}})-\tfrac{1}{2}Vg^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,.\] The above identities may be combined with the definitions (5.36a) and (5.36g), the bootstraps (5.37r), the bounds (7.1) for the geometry, the improved estimates in (8.22) and (11.2), the product bounds in (B.13) and (B.22), and the \(L^{\infty}_{x,\mathsf{s}}\) bounds in (5.37), to yield \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}\Sigma\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}\Sigma\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.4}\] When compared to the \(\widetilde{\mathsf{D}}^{6}\Sigma\) bounds in (15.3), we note that the \(\widetilde{\mathsf{D}}^{7}\Sigma\) estimates obtained in (15.4) come at the penalty of the additional weights \(\mathcal{J}^{\frac{3}{4}}\) and, respectively, \(J_{g}^{\frac{1}{2}}\mathcal{J}^{\frac{1}{4}}\), as is natural given our bootstrap assumptions.

### Bounds for \(\widetilde{\mathsf{D}}^{7}U\)

First we note that bounds for \(\widetilde{\mathsf{D}}^{6}U\) are available through estimates for \(\widetilde{\mathsf{D}}^{6}(W,Z,A,J_{g},\widetilde{\mathsf{D}}_{2}h)\).
More precisely, from (3.10) and (3.18) we deduce \[J_{g}\hat{\mathbf{W}}_{\mathcal{N}}=\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}W-\widetilde{\mathsf{D}}_{2}h\,g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}W+g^{-\frac{1}{2}}A\widetilde{\mathsf{D}}_{2}J_{g}\,,\] \[J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}=\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}Z-\widetilde{\mathsf{D}}_{2}h\,g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}Z+g^{-\frac{1}{2}}A\widetilde{\mathsf{D}}_{2}J_{g}\,,\] \[J_{g}\hat{\mathbf{A}}_{\mathcal{N}}=\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}A-\widetilde{\mathsf{D}}_{2}h\,g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}A-\tfrac{1}{2}g^{-\frac{1}{2}}(W+Z)\widetilde{\mathsf{D}}_{2}J_{g}\,,\] \[g^{\frac{1}{2}}\hat{\mathbf{W}}_{\mathcal{T}}=\widetilde{\mathsf{D}}_{2}W+Ag^{-1}\widetilde{\mathsf{D}}_{2}^{2}h\,,\] \[g^{\frac{1}{2}}\hat{\mathbf{Z}}_{\mathcal{T}}=\widetilde{\mathsf{D}}_{2}Z+Ag^{-1}\widetilde{\mathsf{D}}_{2}^{2}h\,,\] \[g^{\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T}}=\widetilde{\mathsf{D}}_{2}A-\tfrac{1}{2}(W+Z)g^{-1}\widetilde{\mathsf{D}}_{2}^{2}h\,.\] In the six identities above, we add the equations for \(\hat{\mathbf{W}}\) and \(\hat{\mathbf{Z}}\), resulting in the four identities \[\mathcal{N}\!\cdot\!\partial_{1}U-g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,J_{g}\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}U=\tfrac{1}{2}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}\,,\] \[\mathcal{T}\!\cdot\!\partial_{1}U-g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\,J_{g}\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}_{2}U=J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\,,\] \[\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}U=\tfrac{1}{2}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\,,\] \[\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}_{2}U=g^{\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T}}\,.\] By substituting the last two equations into the first two, and by applying \(\widetilde{\mathsf{D}}^{k}\) to both sides for \(0\leq k\leq 6\), we obtain \[\tfrac{1}{\varepsilon}\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{k}U=\tfrac{1}{2}\widetilde{\mathsf{D}}^{k}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}+\tfrac{1}{2}\widetilde{\mathsf{D}}^{k}\big{(}\widetilde{\mathsf{D}}_{2}h\,J_{g}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}-\tfrac{1}{\varepsilon}\big{[}\widetilde{\mathsf{D}}^{k},\mathcal{N}\big{]}\!\cdot\!\widetilde{\mathsf{D}}_{1}U\,, \tag{15.5a}\] \[\tfrac{1}{\varepsilon}\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}^{k}U=\widetilde{\mathsf{D}}^{k}\big{(}J_{g}\hat{\mathbf{A}}_{\mathcal{N}}\big{)}+\widetilde{\mathsf{D}}^{k}\big{(}\widetilde{\mathsf{D}}_{2}h\,J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}-\tfrac{1}{\varepsilon}\big{[}\widetilde{\mathsf{D}}^{k},\mathcal{T}\big{]}\!\cdot\!\widetilde{\mathsf{D}}_{1}U\,, \tag{15.5b}\] \[\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{k}U=\tfrac{1}{2}\widetilde{\mathsf{D}}^{k}\big{(}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}-\big{[}\widetilde{\mathsf{D}}^{k},\mathcal{N}\big{]}\!\cdot\!\widetilde{\mathsf{D}}_{2}U\,, \tag{15.5c}\] \[\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}^{k}U=\widetilde{\mathsf{D}}^{k}\big{(}g^{\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}-\big{[}\widetilde{\mathsf{D}}^{k},\mathcal{T}\big{]}\!\cdot\!\widetilde{\mathsf{D}}_{2}U\,. \tag{15.5d}\] The above four identities need to be supplemented with identities for computing \(\mathcal{N}\!\cdot\!\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{k}U\) and \(\mathcal{T}\!\cdot\!\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{k}U\).
We obtain the latter by first noting that \(\widetilde{\mathsf{D}}_{\mathsf{s}}=\varepsilon\big{(}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})-V\widetilde{\mathsf{D}}_{2}\big{)}\), and then recalling that (3.19a) gives \[\mathcal{N}\!\cdot\!(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})U=\alpha\Sigma\hat{\mathbf{Z}}_{\mathcal{N}}\,,\] \[\mathcal{T}\!\cdot\!(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})U=\alpha\Sigma\hat{\mathbf{A}}_{\mathcal{N}}-\tfrac{\alpha}{2}\Sigma(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\,.\] The two identities above may be differentiated by applying \(\widetilde{\mathsf{D}}^{k}\), resulting in the identities (15.5e) and (15.5f) for \(\mathcal{N}\!\cdot\!(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{k}U\) and \(\mathcal{T}\!\cdot\!(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\widetilde{\mathsf{D}}^{k}U\), whose forcing terms (for \(k=6\)) are recorded in (15.9e) and (15.9f) below. Considering the six identities in (15.5) with \(k=5\) together with the formula \(\widetilde{\mathsf{D}}_{\mathsf{s}}=\varepsilon\big{(}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})-V\widetilde{\mathsf{D}}_{2}\big{)}\), and using the definitions (5.36a) and (5.36g), the bootstraps (5.37s), the bounds (7.1) for the geometry, the improved estimates in (8.21), (8.22) and (11.2), the product bounds in (B.13) and (B.22), the \(L^{\infty}_{x,\mathsf{s}}\) bounds in (5.37), the commutator bounds in (B.16), (B.17), and (B.21), and with Remark 6.10, we deduce that \(\widetilde{\mathsf{D}}^{6}U\) satisfies the estimate \[\varepsilon^{\frac{1}{2}}\big{\|}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{6}U\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\widetilde{\mathsf{D}}^{6}U\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.6}\] The above estimate is in direct analogy with (15.3). Next, we aim to bound \(\widetilde{\mathsf{D}}^{7}U\) in a suitably weighted space. To achieve this, we consider the six identities in (15.5) with \(k=6\). Here, it is convenient to replace \(\widetilde{\mathsf{D}}_{1}\) by \(\varepsilon\partial_{1}\), and \(\widetilde{\mathsf{D}}_{2}\) by \(\partial_{2}\) plus a material derivative term, via the identity \[\partial_{2}=\widetilde{\mathsf{D}}_{2}+\tfrac{1}{\varepsilon}\overline{\mathsf{Q}}_{2}\widehat{\mathsf{Q}}^{-1}\widetilde{\mathsf{D}}_{\mathsf{s}}=\tfrac{1}{1+\overline{\mathsf{Q}}_{2}V\mathsf{Q}^{-1}}\big{(}\widetilde{\mathsf{D}}_{2}+\overline{\mathsf{Q}}_{2}\mathsf{Q}^{-1}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\big{)}\,.
\tag{15.7}\] Using (15.5) with \(k=6\) and (15.7), we may thus derive \[\mathcal{N}\!\cdot\!\partial_{1}\widetilde{\mathsf{D}}^{6}U=\mathcal{M}_{\mathcal{N},1}\,, \tag{15.8a}\] \[\mathcal{T}\!\cdot\!\partial_{1}\widetilde{\mathsf{D}}^{6}U=\mathcal{M}_{\mathcal{T},1}\,, \tag{15.8b}\] \[\mathcal{N}\!\cdot\!\partial_{2}\widetilde{\mathsf{D}}^{6}U=\tfrac{1}{1+\overline{\mathsf{Q}}_{2}V\mathsf{Q}^{-1}}\big{(}\mathcal{M}_{\mathcal{N},2}+\overline{\mathsf{Q}}_{2}\mathsf{Q}^{-1}\mathcal{M}_{\mathcal{N},\mathsf{s}}\big{)}\,, \tag{15.8c}\] \[\mathcal{T}\!\cdot\!\partial_{2}\widetilde{\mathsf{D}}^{6}U=\tfrac{1}{1+\overline{\mathsf{Q}}_{2}V\mathsf{Q}^{-1}}\big{(}\mathcal{M}_{\mathcal{T},2}+\overline{\mathsf{Q}}_{2}\mathsf{Q}^{-1}\mathcal{M}_{\mathcal{T},\mathsf{s}}\big{)}\,, \tag{15.8d}\] where we have denoted the emerging six forcing terms by \[\mathcal{M}_{\mathcal{N},1}:=\tfrac{1}{2}\widetilde{\mathsf{D}}^{6}\big{(}J_{g}\hat{\mathbf{W}}_{\mathcal{N}}+J_{g}\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}+\tfrac{1}{2}\widetilde{\mathsf{D}}^{6}\big{(}\widetilde{\mathsf{D}}_{2}h\,J_{g}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}-\tfrac{1}{\varepsilon}\big{[}\widetilde{\mathsf{D}}^{6},\mathcal{N}\big{]}\cdot\widetilde{\mathsf{D}}_{1}U\,, \tag{15.9a}\] \[\mathcal{M}_{\mathcal{T},1}:=\widetilde{\mathsf{D}}^{6}(J_{g}\hat{\mathbf{A}}_{\mathcal{N}})+\widetilde{\mathsf{D}}^{6}\big{(}\widetilde{\mathsf{D}}_{2}h\,J_{g}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}-\tfrac{1}{\varepsilon}\big{[}\widetilde{\mathsf{D}}^{6},\mathcal{T}\big{]}\cdot\widetilde{\mathsf{D}}_{1}U\,, \tag{15.9b}\] \[\mathcal{M}_{\mathcal{N},2}:=\tfrac{1}{2}\widetilde{\mathsf{D}}^{6}\big{(}g^{\frac{1}{2}}(\hat{\mathbf{W}}_{\mathcal{T}}+\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}-\big{[}\widetilde{\mathsf{D}}^{6},\mathcal{N}\big{]}\cdot\widetilde{\mathsf{D}}_{2}U\,, \tag{15.9c}\] \[\mathcal{M}_{\mathcal{T},2}:=\widetilde{\mathsf{D}}^{6}\big{(}g^{\frac{1}{2}}\hat{\mathbf{A}}_{\mathcal{T}}\big{)}-[\widetilde{\mathsf{D}}^{6},\mathcal{T}]\cdot\widetilde{\mathsf{D}}_{2}U\,, \tag{15.9d}\] \[\mathcal{M}_{\mathcal{N},\mathsf{s}}:=\alpha\widetilde{\mathsf{D}}^{6}\big{(}\Sigma\hat{\mathbf{Z}}_{\mathcal{N}}\big{)}-\mathcal{N}\cdot[\widetilde{\mathsf{D}}^{6},V]\widetilde{\mathsf{D}}_{2}U-[\widetilde{\mathsf{D}}^{6},\mathcal{N}]\cdot\big{(}\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}U+V\widetilde{\mathsf{D}}_{2}U\big{)}\,, \tag{15.9e}\] \[\mathcal{M}_{\mathcal{T},\mathsf{s}}:=\alpha\widetilde{\mathsf{D}}^{6}\big{(}\Sigma\hat{\mathbf{A}}_{\mathcal{N}}\big{)}-\tfrac{\alpha}{2}\widetilde{\mathsf{D}}^{6}\big{(}\Sigma(\hat{\mathbf{W}}_{\mathcal{T}}-\hat{\mathbf{Z}}_{\mathcal{T}})\big{)}-\mathcal{T}\cdot[\widetilde{\mathsf{D}}^{6},V]\widetilde{\mathsf{D}}_{2}U-[\widetilde{\mathsf{D}}^{6},\mathcal{T}]\cdot\big{(}\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{\mathsf{s}}U+V\widetilde{\mathsf{D}}_{2}U\big{)}\,.
\tag{15.9f}\] Next, using that \(e_{1}=g^{-\frac{1}{2}}(\mathcal{N}+\widetilde{\mathsf{D}}_{2}h\,\mathcal{T})\) and \(e_{2}=g^{-\frac{1}{2}}(\mathcal{T}-\widetilde{\mathsf{D}}_{2}h\,\mathcal{N})\), we may perform linear combinations of the terms in (15.8), finally arriving at \[\partial_{1}\widetilde{\mathsf{D}}^{6}U^{1}=g^{-\frac{1}{2}}\big{(}\mathcal{M}_{\mathcal{N},1}+\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{T},1}\big{)}\,, \tag{15.10a}\] \[\partial_{1}\widetilde{\mathsf{D}}^{6}U^{2}=g^{-\frac{1}{2}}\big{(}\mathcal{M}_{\mathcal{T},1}-\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{N},1}\big{)}\,, \tag{15.10b}\] \[\partial_{2}\widetilde{\mathsf{D}}^{6}U^{1}=\tfrac{g^{-\frac{1}{2}}}{1+\overline{\mathsf{Q}}_{2}V\mathsf{Q}^{-1}}\big{(}(\mathcal{M}_{\mathcal{N},2}+\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{T},2})+\overline{\mathsf{Q}}_{2}\mathsf{Q}^{-1}(\mathcal{M}_{\mathcal{N},\mathsf{s}}+\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{T},\mathsf{s}})\big{)}\,, \tag{15.10c}\] \[\partial_{2}\widetilde{\mathsf{D}}^{6}U^{2}=\tfrac{g^{-\frac{1}{2}}}{1+\overline{\mathsf{Q}}_{2}V\mathsf{Q}^{-1}}\big{(}(\mathcal{M}_{\mathcal{T},2}-\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{N},2})+\overline{\mathsf{Q}}_{2}\mathsf{Q}^{-1}(\mathcal{M}_{\mathcal{T},\mathsf{s}}-\widetilde{\mathsf{D}}_{2}h\,\mathcal{M}_{\mathcal{N},\mathsf{s}})\big{)}\,. \tag{15.10d}\] In order to use (15.10), we must bound the right sides in suitably-weighted \(L^{2}_{x}\) spaces, both in \(L^{\infty}_{\mathsf{s}}\) (with weight \(\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\)) and in \(L^{2}_{\mathsf{s}}\) (with weight \(\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\)). For this purpose, we record the bounds \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\mathcal{M}_{\mathcal{N},1}\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\mathcal{M}_{\mathcal{N},1}\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\langle\mathsf{B}_{6}\rangle\,, \tag{15.11a}\] \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}}\mathcal{M}_{\mathcal{N},2}\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\mathcal{M}_{\mathcal{N},2}\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{15.11b}\] with analogous estimates holding for the remaining forcing terms \(\mathcal{M}_{\mathcal{T},1}\), \(\mathcal{M}_{\mathcal{T},2}\), \(\mathcal{M}_{\mathcal{N},\mathsf{s}}\), and \(\mathcal{M}_{\mathcal{T},\mathsf{s}}\).
Together with (15.11), these identities yield \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}} \big{(}\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\big{)}\widetilde{ \mathsf{D}}^{6}U\big{\|}_{L_{x}^{\infty}L_{x}^{2}}+\big{\|}\mathcal{J}^{\frac{ 1}{4}}J_{g}^{\frac{1}{2}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}) \widetilde{\mathsf{D}}^{6}U\big{\|}_{L_{x}^{2}L_{x}^{2}}\lesssim\mathsf{K} \langle\mathsf{B}_{6}\rangle\,. \tag{15.13}\] The last ingredient is (5.23), which may be rewritten as \[\widetilde{\mathsf{D}}_{1}=\varepsilon\partial_{1}\,,\quad\widetilde{\mathsf{ D}}_{2}=\big{(}1+\tfrac{V\overline{\mathsf{Q}}}{\mathsf{Q}}\big{)}\partial_{2}- \tfrac{\overline{\mathsf{Q}}}{\mathsf{Q}}(\mathsf{Q}\partial_{\mathsf{s}}+V \partial_{2})\,,\quad\widetilde{\mathsf{D}}_{\mathsf{s}}=\varepsilon\tfrac{ \widehat{\mathsf{D}}}{\mathsf{Q}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial _{2})-\varepsilon\tfrac{V\widehat{\mathsf{Q}}}{\mathsf{Q}}\partial_{2}\,.\] The above three identities, together with (15.12), (15.13), and Lemma 6.3, give \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{3}{4}}J_{g}^{\frac{1}{2}} \widetilde{\mathsf{D}}^{7}U\big{\|}_{L_{x}^{\infty}L_{x}^{2}}+\big{\|} \mathcal{J}^{\frac{1}{4}}J_{g}^{\frac{1}{2}}\widetilde{\mathsf{D}}^{7}U\big{\|} _{L_{x}^{2}L_{x}^{2}}\lesssim\varepsilon\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.14}\] The above estimate is in direct analogy with (15.4). ### Bounds for \(\widetilde{\mathsf{D}}^{7}h\) We recall from (2.7) that the ALE map \(\psi\) is given by \(h(x_{1},x_{2},t)e_{1}+x_{2}e_{2}\). As such, the regularity of \(\psi\) is equivalent to the regularity of the map \(h\). So far, the best available estimates for \(h\) are (7.1c) for \(\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{2}h\), and (7.1d) for \(\widetilde{\mathsf{D}}^{6}\widetilde{\mathsf{D}}_{1}h\). To obtain a full control of \(\widetilde{\mathsf{D}}^{7}h\), it thus remains to estimate \(\widetilde{\mathsf{D}}_{\mathsf{s}}^{7}h\). For this purpose, we recall from (2.10), written in \((x,\mathsf{s})\) variables, that \(\frac{1}{\varepsilon}\partial_{\mathsf{s}}h=g^{\frac{1}{2}}(U\cdot\mathcal{N} +\Sigma)\). 
Upon applying \(\widetilde{\mathsf{D}}^{6}\) to this expression, we determine that \[\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{6}h=\varepsilon\widetilde{\mathsf{D}}^{6}\big{(}g^{\frac{1}{2}}(U\cdot\mathcal{N}+\Sigma)\big{)}\,.\] Using the bootstraps (5.37), the inequality \(1\leq\mathcal{J}^{-\frac{1}{4}}\), the geometric bounds in (7.1), the \(\widetilde{\mathsf{D}}^{6}\Sigma\) bounds in (15.3), the \(\widetilde{\mathsf{D}}^{6}U\) bounds in (15.6), and the Moser-type bounds in Lemma B.4, we deduce from the above identity that \[\big{\|}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{6}h\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.15a}\] By additionally using the inequalities \(\mathcal{J}^{\frac{1}{4}}\leq J_{g}^{\frac{1}{4}}\leq J_{g}^{\frac{1}{2}}\lesssim 1\) from (4.11), we have that \[\big{\|}\mathcal{J}^{\frac{1}{4}}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{6}h\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{\frac{3}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle+\varepsilon^{\frac{1}{2}}\sum_{k=1}^{6}\Big{(}\big{\|}\widetilde{\mathsf{D}}^{7-k}(g^{\frac{1}{2}}\mathcal{N})\cdot\widetilde{\mathsf{D}}^{k}U\big{\|}_{L^{2}_{x,\mathsf{s}}}+\big{\|}\widetilde{\mathsf{D}}^{7-k}g^{\frac{1}{2}}\,\widetilde{\mathsf{D}}^{k}\Sigma\big{\|}_{L^{2}_{x,\mathsf{s}}}\Big{)}\lesssim\varepsilon^{\frac{3}{2}}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,. \tag{15.15b}\] To summarize, we combine (7.1c), (7.1d), (15.15a), and (15.15b), and deduce that \[\varepsilon^{\frac{1}{2}}\big{\|}\mathcal{J}^{\frac{1}{4}}\widetilde{\mathsf{D}}^{7}h\big{\|}_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}+\big{\|}\widetilde{\mathsf{D}}^{7}h\big{\|}_{L^{2}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{2}\mathsf{K}\langle\mathsf{B}_{6}\rangle\,, \tag{15.16}\] which gives the optimal regularity bounds for \(h\), and thus also for the ALE map \(\psi\).

## Appendix A Maximal development of Cauchy data for the 1D Euler equations

The purpose of this appendix is to showcase the process of _shock formation from smooth initial data_, the spacetime of _maximal hyperbolic development of Cauchy data_, and to discuss the physical _shock development problem_ in the context of the 1D Euler equations (cf. (1.1) for \(d=1\)). While these results are surely known in one space dimension, we were not able to find a suitable exposition of these concepts in the literature. Additionally, the discussion in this appendix highlights some of the main concepts discussed in the multi-dimensional setting of this paper, but without having to deal with a number of geometric difficulties. Lastly, it is easier to draw accurate images in \(1+1\) dimensions (space & time), so that the reader can build part of the necessary intuition required for the multi-dimensional setting.

### The setup: equations, variables, initial data

In one space dimension, the system (1.1) becomes \[\partial_{t}(\rho u)+\partial_{y}(\rho u^{2})+\partial_{y}p=0\,, \tag{A.1a}\] \[\partial_{t}\rho+\partial_{y}(\rho u)=0\,, \tag{A.1b}\] \[\partial_{t}E+\partial_{y}(u(E+p))=0\,, \tag{A.1c}\] where \(E=\frac{1}{\gamma-1}p+\frac{1}{2}\rho u^{2}\) is the energy, and \(\gamma>1\) is the adiabatic exponent. Here the space variable is \(y\in\mathbb{T}\), and (A.1) is supplemented with smooth Cauchy data \((\rho_{0},u_{0},E_{0})\) at the initial time \(t=\mathsf{t}_{\mathsf{in}}\).
While the formulation (A.1), as a system of conservation laws, is necessary in order to read off the correct Rankine-Hugoniot jump conditions in the presence of a shock singularity, when the Euler system (A.1) is initiated with smooth initial data, an equivalent, more symmetric formulation is useful. For this purpose we remark that, at least until the emergence of shocks, the evolution of the energy \(E\) in (A.1c) may be replaced by the transport equation \[\partial_{t}S+u\partial_{y}S=0\,, \tag{A.2}\] where \(S\) is the specific entropy, defined as \(S=\log(\tfrac{\gamma p}{\rho^{\gamma}})\). As such, if the initial datum \(S_{0}\) is a constant (which for simplicity we may take as zero), then \(S\) remains a constant under the above transport equation, as long as \(u\in L_{t}^{1}W_{x}^{1,\infty}\) (prior to the formation of singularities). Since the spacetime of maximal hyperbolic development of smooth Cauchy data does not contain any singular behavior in its interior, it is convenient for the sake of presentation to confine the discussion to the _isentropic_ case, in which \(S_{0}\equiv 0\), and thus \(S\equiv 0\) prior to the formation of shocks.20

Footnote 20: See the analysis in [44] for a parallel discussion in the case that \(S_{0}\not\equiv 0\).

As discussed earlier in (1.2), in this isentropic case the pressure law becomes \(p=\frac{1}{\gamma}\rho^{\gamma}\), and upon letting \(\alpha=\frac{\gamma-1}{2}\) and denoting the rescaled sound speed by \(\sigma=\frac{1}{\alpha}c=\frac{1}{\alpha}\rho^{\alpha}\), the conservation laws in (A.1a)-(A.1b) may be written in a more symmetric way as \[\partial_{t}u+u\partial_{y}u+\alpha\sigma\partial_{y}\sigma=0\,, \tag{A.3a}\] \[\partial_{t}\sigma+u\partial_{y}\sigma+\alpha\sigma\partial_{y}u=0\,. \tag{A.3b}\] The isentropic dynamics in (A.3) consists in fact of two coupled transport equations for the Riemann variables \[w=u+\sigma\,,\qquad z=u-\sigma\,.\] The dominant Riemann variable \(w\) is transported along the fast characteristic speed \(\lambda_{3}\), while the subdominant Riemann variable is transported along the slow characteristic speed \(\lambda_{1}\). That is, \[(\partial_{t}+\lambda_{3}\partial_{y})w=0\,, \tag{A.4a}\] \[(\partial_{t}+\lambda_{1}\partial_{y})z=0\,, \tag{A.4b}\] where, rewriting (1.3) in one space dimension, we have denoted for \(i\in\{1,2,3\}\) the wave speeds \[\lambda_{i}=u+(i-2)\alpha\sigma=u+(i-2)c=\tfrac{1+(i-2)\alpha}{2}w+\tfrac{1-(i-2)\alpha}{2}z\,. \tag{A.4c}\] The system (A.4) is equivalent to (A.3). For consistency with the rest of the paper, we supplement (A.4) with initial data \((w_{0},z_{0})=(u_{0}+\sigma_{0},u_{0}-\sigma_{0})\) that has many of the same properties as the multi-dimensional data discussed in Section 4.2 (compressive and generic), while at the same time aiming for simplicity of the presentation. For definiteness, we consider initial data \(w_{0}\) which is \(\mathcal{O}(1)\) and has an \(\mathcal{O}(\frac{1}{\varepsilon})\) most negative slope, and \(z_{0}\) which is \(\mathcal{O}(\varepsilon)\) and has an \(\mathcal{O}(1)\) slope. For simplicity, let \[w_{0}(x)=1-\tfrac{1}{2}\sin(\tfrac{2x}{\varepsilon})\,,\qquad z_{0}(x)=\tfrac{\varepsilon}{2}\cos(\tfrac{2x}{\varepsilon})\,, \tag{A.5}\] where \(\varepsilon\ll 1\) is a small parameter. In particular, \(\tfrac{1}{8}\leq\sigma_{0}(x)\leq 1\), \(w_{0}^{\prime}\) attains a global minimum of \(-\tfrac{1}{\varepsilon}\) at \(x=0\), and this minimum is non-degenerate, as \(w_{0}^{\prime\prime\prime}(0)=\tfrac{4}{\varepsilon^{3}}>0\).
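These properties of the data (A.5) are elementary to verify by hand; the following minimal Python sketch (purely illustrative, and not part of any argument; the grid resolution and the sample value \(\varepsilon=\tfrac{1}{16}\) borrowed from Figure 17 are our own choices) confirms them numerically.

```python
import numpy as np

eps = 1.0 / 16.0  # the small parameter in (A.5); value borrowed from Figure 17

# sample the data on the interval |x| <= 3*pi*eps/4 used throughout this appendix
x = np.linspace(-3 * np.pi * eps / 4, 3 * np.pi * eps / 4, 200001)
w0 = 1 - 0.5 * np.sin(2 * x / eps)
z0 = 0.5 * eps * np.cos(2 * x / eps)
sigma0 = 0.5 * (w0 - z0)                 # sigma = (w - z)/2

# slope of the dominant Riemann variable, via centered differences
w0p = np.gradient(w0, x)
i = np.argmin(w0p)
print(f"min w0' ~ {w0p[i]:.3f} at x ~ {x[i]:.5f} (expected -1/eps = {-1/eps:.3f} at x = 0)")
print(f"sigma0 in [{sigma0.min():.3f}, {sigma0.max():.3f}] (inside [1/8, 1])")
print(f"genericity: w0'''(0) = 4/eps^3 = {4 / eps**3:.1f} > 0")
```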
For consistency with (4.1), let us assume that the initial time is given by \(\mathsf{t}_{\mathsf{in}}=-\tfrac{2\varepsilon}{1+\alpha}\). We also define \(\mathsf{t}_{\mathsf{fin}}=\tfrac{\varepsilon}{1+\alpha}\). Our goal is to study the system (A.4), with initial datum (A.5), for \(t\in[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\). To avoid any discussion of localization at "large" values of \(|y|\), let us only discuss the problem for \(|y|\leq\tfrac{3\pi\varepsilon}{4}\), appealing to the finite speed of propagation.

### The classical perspective on shock formation

Shock formation is classically described as the process through which an initial negatively sloped disturbance in the graph of the density progressively steepens, until eventually \(\partial_{y}\sigma\) diverges towards \(-\infty\). For initial data such as that in (A.5), at the same time the slope of the dominant Riemann variable \(w\) also diverges towards \(-\infty\). A classical way to see this, following Lax [30], is to differentiate (A.4a) and to use that (A.4b) may be rewritten as \((\partial_{t}+\lambda_{3}\partial_{y})z=(\lambda_{3}-\lambda_{1})\partial_{y}z=2\alpha\sigma\partial_{y}z\), so that \((\partial_{t}+\lambda_{3}\partial_{y})\sigma=-\alpha\sigma\partial_{y}z\), leading to \[(\partial_{t}+\lambda_{3}\partial_{y})(\partial_{y}w)=-\partial_{y}\lambda_{3}\,\partial_{y}w=-\tfrac{1+\alpha}{2}(\partial_{y}w)^{2}-\tfrac{1-\alpha}{2}(\partial_{y}w)(\partial_{y}z)=-\tfrac{1+\alpha}{2}(\partial_{y}w)^{2}+\tfrac{1-\alpha}{2\alpha}(\partial_{y}w)(\partial_{t}+\lambda_{3}\partial_{y})\log\sigma\,.\] Since vacuum cannot form on \([\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\),21 one may integrate the above PDE along the characteristics \(\eta=\eta(x,t)\) associated to the fast wave-speed \(\lambda_{3}\), i.e., the solution of

Footnote 21: To see this, note that for initial data as in (A.5), the maximum principle applied to (A.4a) yields \(|w(y,t)-1|\leq\tfrac{1}{2}\), and the maximum principle applied to (A.4b) yields \(|z(y,t)|\leq\tfrac{\varepsilon}{2}\). Hence \(\sigma=\tfrac{1}{2}(w-z)\in[\tfrac{1}{8},\tfrac{7}{8}]\) when \(\varepsilon\leq\tfrac{1}{2}\).

\[\partial_{t}\eta(x,t)=\lambda_{3}(\eta(x,t),t)\,,\qquad\eta(x,\mathsf{t}_{\mathsf{in}})=x\,, \tag{A.6}\] leading to \[\partial_{t}(\partial_{y}w\circ\eta)=-\tfrac{1+\alpha}{2}(\partial_{y}w\circ\eta)^{2}+\tfrac{1-\alpha}{2\alpha}(\partial_{y}w\circ\eta)\,\partial_{t}(\log\sigma\circ\eta)\,. \tag{A.7}\] With \(\log\sigma=\mathcal{O}(1)\) given, the above ODE may now be directly integrated, leading to a proof that \(\partial_{y}w\circ\eta\) blows up towards \(-\infty\) in finite time, for the label at which \(w_{0}^{\prime}\) was the most negative. Since by assumption this label is \(x=0\), we deduce that there exists a minimal time \(t_{*}>\mathsf{t}_{\mathsf{in}}\) such that \(\partial_{y}w(\eta(0,t),t)\to-\infty\) as \(t\to t_{*}\). In order to estimate \(t_{*}\), we may compare the above ODE to the Riccati ODE \(\dot{a}=-\tfrac{1+\alpha}{2}a^{2}\), with datum \(a(\mathsf{t}_{\mathsf{in}})=-\tfrac{1}{\varepsilon}\), and deduce that \(t_{*}=\mathsf{t}_{\mathsf{in}}+\tfrac{2\varepsilon}{1+\alpha}\pm\mathcal{O}(\varepsilon^{2})=0\pm\mathcal{O}(\varepsilon^{2})\in(\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}})\). This is Lax's proof of shock formation for the system (A.4), which may be found in [30]. A more geometric perspective on shock formation is provided by the method of characteristics (see Figure 17 below).
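Returning briefly to the Riccati comparison above, we note that it is explicit: the ODE \(\dot{a}=-\tfrac{1+\alpha}{2}a^{2}\) with datum \(a(\mathsf{t}_{\mathsf{in}})=-\tfrac{1}{\varepsilon}\) has the closed-form solution \[a(t)=\frac{-\frac{1}{\varepsilon}}{1-\frac{1+\alpha}{2\varepsilon}(t-\mathsf{t}_{\mathsf{in}})}\,,\] which diverges to \(-\infty\) precisely as \(t\to\mathsf{t}_{\mathsf{in}}+\tfrac{2\varepsilon}{1+\alpha}\); the \(\pm\mathcal{O}(\varepsilon^{2})\) correction to \(t_{*}\) accounts for the bounded \(\log\sigma\) term in (A.7).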
Since in 1D we can rule out the formation of vacuum, it may be shown (see [44, Proposition 5.4]) that the Eulerian blowup criterion \(\int_{\mathsf{t}_{\mathsf{in}}}^{t_{*}}\|\partial_{y}w(\cdot,t)\|_{L^{\infty}}+\|\partial_{y}\sigma(\cdot,t)\|_{L^{\infty}}\,\mathrm{d}t=+\infty\) is equivalent to the Lagrangian criterion \(\lim_{t\to t_{*}}\min_{x}\partial_{x}\eta(x,t)=0\). That is, shock formation occurs at the first time \(t_{*}\) at which the map \(x\mapsto\eta(x,t_{*})\) loses strict monotonicity, because the fast characteristic curves \((\eta(x,t),t)\) impinge on each other (for \(t>t_{*}\) the map \(x\mapsto\eta(x,t)\) would fail to be injective). The time \(t_{*}\), and the label \(x_{*}\) at which this strict monotonicity is lost, may be naturally computed by solving \[\partial_{x}\eta(x_{*},t_{*})=0\,,\qquad\partial_{xx}\eta(x_{*},t_{*})=0\,. \tag{A.8}\] We note that for the initial data (A.5), at \((x_{*},t_{*})\) the field \(w\) develops a Hölder-\(\frac{1}{3}\) cusp singularity,22 not an actual shock-discontinuity, which is why the point \((x_{*},t_{*})\) is sometimes referred to as a _pre-shock_.

Footnote 22: To see this, let \(y=\eta(x,t_{*})\). Due to (A.4a) we have \(w(y,t_{*})=w_{0}(x)\). Thus we need to compose \(w_{0}\) with the inverse flow \(x=\eta^{-1}(y,t_{*})\). Due to (A.8) we have the power series expansion \(\eta(x,t_{*})=\eta(x_{*},t_{*})+\tfrac{1}{6}(x-x_{*})^{3}\partial_{x}^{3}\eta(x_{*},t_{*})+\mathcal{O}((x-x_{*})^{4})\). The fact that \(\partial_{x}^{3}w_{0}(x_{*})>0\), which is the genericity condition for the data, implies that \(\partial_{x}^{3}\eta(x,t_{*})\approx(t_{*}-\mathsf{t}_{\mathsf{in}})\tfrac{1+\alpha}{2}\partial_{x}^{3}w_{0}(x_{*})\approx 4\varepsilon^{-2}>0\). Thus when we are solving for \(x\) the equation \(y=\eta(x,t_{*})\), we are in essence solving the cubic equation \(y\approx y_{*}+\tfrac{2}{3}\varepsilon^{-2}(x-x_{*})^{3}\). This results in \(x-x_{*}\approx\varepsilon^{\frac{2}{3}}(y-y_{*})^{\frac{1}{3}}\), and thus \(w(y,t_{*})\approx w_{0}(\varepsilon^{\frac{2}{3}}(y-y_{*})^{\frac{1}{3}})\). Since \(w_{0}\) is smooth, this explains the Hölder-\(\frac{1}{3}\) cusp.

Also, we note that for initial datum of the type (A.5) the characteristics \(\phi=\phi(x,t)\) associated to the slow wave-speed \(\lambda_{1}\), i.e., the solution of \[\partial_{t}\phi(x,t)=\lambda_{1}(\phi(x,t),t)\,,\qquad\phi(x,\mathsf{t}_{\mathsf{in}})=x\,, \tag{A.9}\] remain injective, and the map \(x\mapsto\phi(x,t)\) remains strictly monotone uniformly for \((x,t)\) with \(t\in[\mathsf{t}_{\mathsf{in}},t_{*}]\). Equivalently, \(\|\partial_{y}z(\cdot,t)\|_{L^{\infty}}\) remains uniformly bounded as \(t\to t_{*}\).

Figure 17. **Shock formation for 1D Euler via classical characteristics.** Consider the 1D Euler system (A.4) with initial data (A.5) for \(\alpha=\tfrac{1}{2}\) and \(\varepsilon=\tfrac{1}{16}\). The bounding box represents \((x,t)\in[-\tfrac{3\pi\varepsilon}{4},\tfrac{3\pi\varepsilon}{4}]\times[\mathsf{t}_{\mathsf{in}},t_{*}]\). In orange we represent the "fast" characteristics \(\eta(x,t)\) which solve (A.6), and in olive-green we represent the "slow" characteristics \(\phi(x,t)\) that solve (A.9). The first blowup occurs at the Eulerian spacetime location \((y_{*},t_{*})\), where \(y_{*}=\eta(x_{*},t_{*})\), and \((x_{*},t_{*})\) solve (A.8). In this figure \(x_{*}=0\). In magenta we have represented the distinguished \(\lambda_{3}\)-characteristic curve leading to the very first singularity, \((\eta(x_{*},t),t)_{t\in[\mathsf{t}_{\mathsf{in}},t_{*}]}\).
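The following minimal Python sketch is purely illustrative and is not part of any argument; it adopts the simple-wave simplification \(z\equiv 0\) (reasonable to leading order, since \(z_{0}=\mathcal{O}(\varepsilon)\)) together with the parameters of Figure 17, both of which are our own choices. Under this simplification \(\eta\) is explicit, and the sketch computes the first time at which \(\partial_{x}\eta(x,\cdot)\) vanishes for each compressing label (this is the map \(\mathsf{T}_{*}\) appearing in (A.11) below), recovering the Riccati prediction for \((x_{*},t_{*})\).

```python
import numpy as np

# Characteristic picture for (A.4) in the simple-wave approximation z = 0.
# With z = 0 one has lambda_3 = (1+alpha)/2 * w, and w is constant along its
# characteristics, so that
#   eta(x,t)      = x + (1+alpha)/2 * w0(x) * (t - t_in),
#   dx_eta(x,t)   = 1 + (1+alpha)/2 * w0'(x) * (t - t_in),
# and dx_eta first vanishes at the time T_*(x) solving (A.11).
alpha, eps = 0.5, 1.0 / 16.0
t_in = -2 * eps / (1 + alpha)

x = np.linspace(-3 * np.pi * eps / 4, 3 * np.pi * eps / 4, 400001)
w0p = -np.cos(2 * x / eps) / eps              # w0'(x) for the data (A.5)

compressing = w0p < 0                         # only these labels focus characteristics
T_star = t_in + 2.0 / ((1 + alpha) * (-w0p[compressing]))

i = np.argmin(T_star)
print(f"first blowup: t_* ~ {T_star[i]:.6f} at label x_* ~ {x[compressing][i]:.6f}")
print(f"Riccati prediction: t_in + 2*eps/(1+alpha) = {t_in + 2 * eps / (1 + alpha):.6f}")
```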
### A bi-characteristic description of the maximal hyperbolic development

The aforementioned analysis only describes the solution up to the time slice of the very first singularity, \(\{t=t_{*}\}\). However, due to finite speed of propagation, even for times \(t>t_{*}\) there exist regions of spacetime where a smooth extension of the solution may be uniquely computed. In hyperbolic PDEs it is a natural question to describe the largest such spacetime region, which is free of singularities. The spacetime \(\mathcal{M}\) of maximal hyperbolic development of the Cauchy data for (A.4) consists of all points \((y,t)\) for which the fields \(w(y,t)\) and \(z(y,t)\) can be computed in a _unique_ and _smooth_ way in terms of the initial data \((w_{0},z_{0})\), prescribed on the time slice \(\{t=\mathsf{t}_{\mathsf{in}}\}\). In the particular case of (A.4), which consists of two coupled transport equations, we have that \[w(\eta(x,t),t)=w_{0}(x)\,,\qquad z(\phi(x,t),t)=z_{0}(x)\,,\] assuming that \(\eta\) and \(\phi\) are well-defined and sufficiently regular. Therefore, the requirement that \(w(y,t)\) and \(z(y,t)\) may be computed in a unique and smooth fashion in terms of the Cauchy data is in fact the requirement that the backwards flows23 \(\eta^{-1}(y,t)\) and \(\phi^{-1}(y,t)\) are smooth, and that

Footnote 23: These may be defined either as inverse maps via \(\eta(\eta^{-1}(y,t),t)=y\) and \(\phi(\phi^{-1}(y,t),t)=y\), or as solutions of the transport PDEs \((\partial_{t}+\lambda_{3}(y,t)\partial_{y})\eta^{-1}=0\) and \((\partial_{t}+\lambda_{1}(y,t)\partial_{y})\phi^{-1}=0\), with initial data that is the identity map at time \(\mathsf{t}_{\mathsf{in}}\).

\[\eta(\eta^{-1}(y,t),t^{\prime}),\phi(\phi^{-1}(y,t),t^{\prime})\in\mathcal{M}\,,\qquad\text{for all}\qquad(y,t)\in\mathcal{M}\,,t^{\prime}\in[\mathsf{t}_{\mathsf{in}},t]\,. \tag{A.10}\] Since \(w(y,t)=w_{0}(\eta^{-1}(y,t))\) and \(z(y,t)=z_{0}(\phi^{-1}(y,t))\), the condition in (A.10) is indeed a full characterization of the spacetime domain \(\mathcal{M}\) of maximal hyperbolic development of the Cauchy data, for 1D isentropic Euler (A.4). For the compressive and generic initial condition considered here (cf. (A.5)), one may give a precise definition of the spacetime \(\mathcal{M}\) characterized by (A.10). We refer to Figure 18 below, where the part of \(\mathcal{M}\) which lies in the bounding box \((x,t)\in[-\frac{3\pi\varepsilon}{4},\frac{3\pi\varepsilon}{4}]\times[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\) is represented:
* It is clear that all time slices with \(t<t_{*}\) lie in \(\mathcal{M}\), since prior to the very first singularity all functions remain smooth; as such, Figure 18 extends the image in Figure 17.
* Consider labels \(x\) such that \(x>x_{*}\). In our setup, where the waves move to the right, we refer to these labels as being _downstream_ of the pre-shock label. For every \(x>x_{*}\), we may smoothly and uniquely extend the fast acoustic characteristic \(\eta\) emanating from \(x\) for all times \(t\in[\mathsf{t}_{\mathsf{in}},\mathsf{T}_{*}(x)]\), where the time \(\mathsf{T}_{*}(x)\) is characterized by the condition \[\partial_{x}\eta(x,\mathsf{T}_{*}(x))=0\,. \tag{A.11}\] This is the same condition that we used in (A.8) to characterize the very first blowup time, and indeed \(t_{*}=\mathsf{T}_{*}(x_{*})\). For \(t>\mathsf{T}_{*}(x)\) injectivity is lost and hence we cannot uniquely extend \(\eta(x,\cdot)\) after this time.
The curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}:=(\eta(x,\mathsf{T}_{*}(x)),\mathsf{T}_{*}(x))_{x\geq x_{*}}\), which emanates from the very first singularity at \((x_{*},t_{*})\) (represented in red in Figure 18), is smooth, as it parametrizes the zero level set of the smooth function \(\partial_{x}\eta\), and it serves as the future temporal boundary of the "downstream part" of the spacetime \(\mathcal{M}\). Indeed, it is clear from Figure 18 that for every \((y,t)\) which lies either "below" or "to the right" of the curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}\), the backwards trajectories corresponding to the labels \(\eta^{-1}(y,t)\) and \(\phi^{-1}(y,t)\) remain in \(\mathcal{M}\), and therefore they satisfy (A.10). Lastly, we mention that for any point \((\overline{y},\overline{t}):=(\eta(\overline{x},\mathsf{T}_{*}(\overline{x})),\mathsf{T}_{*}(\overline{x}))\in\partial_{\mathsf{top}}^{+}\mathcal{M}\) with \(\overline{x}\geq x_{*}\), we have that \(\lim_{\mathcal{M}\ni(y,t)\to(\overline{y},\overline{t})}\partial_{y}w(y,t)=-\infty\). In particular, the curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) parametrizes a family of successive gradient catastrophes in the dominant Riemann variable, downstream of the very first singularity. To see this, simply note that \(\partial_{x}w_{0}(x)=(\partial_{y}w)(\eta(x,t),t)\,\partial_{x}\eta(x,t)\), and hence \(\partial_{y}w(y,t)=w_{0}^{\prime}(\eta^{-1}(y,t))\big{(}\partial_{x}\eta(\eta^{-1}(y,t),t)\big{)}^{-1}\). Since \((\eta^{-1}(y,t),t)\to(\overline{x},\mathsf{T}_{*}(\overline{x}))\) as \((y,t)\to(\overline{y},\overline{t})\), we have that \(\partial_{x}\eta(\eta^{-1}(y,t),t)\to\partial_{x}\eta(\overline{x},\mathsf{T}_{*}(\overline{x}))=0\), and since for \(x\) in the vicinity of \(x_{*}\) we have \(w_{0}^{\prime}(x)\approx-\frac{1}{\varepsilon}<0\), the claimed divergence towards \(-\infty\) follows. We note that the gradient catastrophes lurking on \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) are not jump discontinuities; instead, the field \(w\) has Hölder cusps on this curve. Using the same argument as in Footnote 22, one may verify that for \(x>x_{*}\) the regularity of these cusps is Hölder-\(\frac{1}{2}\).
* Consider labels \(x\) such that \(x<x_{*}\). In our setup, where the waves move to the right, we refer to these labels as being _upstream_ of the pre-shock label. Here, the main observation is that there exists a distinguished slow acoustic characteristic which emanates from the very first singularity \((y_{*},t_{*})\), the green curve in Figure 18. Letting \(x^{*}=\phi^{-1}(y_{*},t_{*})\), the slow acoustic characteristic emanating from the very first singularity is the curve \(\partial_{\mathsf{top}}^{-}\mathcal{M}:=(\phi(x^{*},t),t)_{t\geq t_{*}}\). The notation we have chosen indicates that this curve serves as the future temporal boundary of the "upstream part" of the spacetime \(\mathcal{M}\). Indeed, for any \((y,t)\) which lies either "below" or "to the left" of \(\partial_{\mathsf{top}}^{-}\mathcal{M}\), the backwards trajectories corresponding to the labels \(\eta^{-1}(y,t)\) and \(\phi^{-1}(y,t)\) remain in \(\mathcal{M}\), and therefore they satisfy (A.10). On the other hand, any point \((y,t)\) which lies "to the right" of \(\partial_{\mathsf{top}}^{-}\mathcal{M}\) and "to the left" of \(\partial_{\mathsf{top}}^{+}\mathcal{M}\), the white region in Figure 18, is inaccessible from the perspective of the Cauchy data at time \(t=\mathsf{t}_{\mathsf{in}}\).
This is because following a \(\phi\) characteristic backwards in time from any such point \((y,t)\), we are bound to intersect \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) (the red curve in Figure 18), and we have just discussed that there the field \(w\) experiences a gradient singularity. As such, a smooth continuation backwards in time "through" \(\partial_{\mathsf{top}}^{+}\mathcal{M}\), and all the way back to the initial data, is not possible. In the physical shock development problem, this issue is overcome by the introduction of a shock curve; see the discussion in Section A.5 below. In summary, the maximal hyperbolic development of Cauchy data for the 1D isentropic Euler system is the spacetime \(\mathcal{M}\) consisting of points \((y,t)\) which either lie to the left of the curve \((\eta(x_{*},t),t)_{t\in[\mathsf{t}_{\mathsf{in}},t_{*}]}\cup\partial_{\mathsf{top}}^{-}\mathcal{M}\) (the upstream part), or lie to the right of the curve \((\eta(x_{*},t),t)_{t\in[\mathsf{t}_{\mathsf{in}},t_{*}]}\cup\partial_{\mathsf{top}}^{+}\mathcal{M}\) (the downstream part).

**Remark A.1** (Sound evolution, not the physical shock development).: _It is important to note that Figure 18 shows the dynamics of characteristics propagating as sound waves (and not shocks), and that the "real" Euler solution transitions from sound evolution to shock evolution (with the associated Rankine-Hugoniot jump conditions) instantaneously after the first singular time \(t_{*}\). This issue is discussed in Section A.5 below._

**Remark A.2** (The boundary of \(\mathcal{M}\) is not smooth).: _We emphasize that in this bi-characteristics language, the future temporal boundary of \(\mathcal{M}\), i.e., the set \(\partial_{\mathsf{top}}^{-}\mathcal{M}\cup\partial_{\mathsf{top}}^{+}\mathcal{M}\), is merely a Lipschitz surface. This lack of smoothness makes the analysis in this classical coordinate system rather cumbersome, as we shall describe next._

### A fast-acoustic characteristic perspective on shock formation and the maximal hyperbolic development

The new perspective taken in this paper is to consider the entire analysis of shock formation and of maximal hyperbolic development of the Cauchy data in the Lagrangian coordinates24 \((\eta(x,t),t)\) of the fast acoustic characteristics.

Footnote 24: In multiple space dimensions, we in fact need to consider the Arbitrary Lagrangian-Eulerian (ALE) coordinates discussed in Section 2.

That this is a natural perspective may be seen from at least two points of view: first, the quantity which develops a singularity on the downstream part of the future temporal boundary of the spacetime \(\mathcal{M}\) is the gradient of the dominant Riemann variable, namely \(\partial_{y}w\), and this quantity naturally evolves along the flow \(\eta\) (see (A.7)); and second, the very first singularity and the subsequent gradient catastrophes in the downstream region are characterized in terms of properties of \(\eta\) and its derivatives (see (A.8) and (A.11)). In order to build better intuition for the multi-D problem,25 even in the 1D setting of this Appendix it is convenient to work with the spatially differentiated version of the isentropic Euler system (A.4). That is, we define

Footnote 25: As explained in Section 3, in multi-D, working with differentiated Riemann-type variables (which are then composed with the ALE map) is necessary in order to avoid derivative loss.
\[\hat{W}(x,t):=(\partial_{y}w)(\eta(x,t),t)\,,\qquad\hat{Z}(x,t):=(\partial_{y}z)(\eta(x,t),t)\,,\qquad\Sigma(x,t):=\sigma(\eta(x,t),t)\,.\]
With the above notation, the chain rule, and using the identity \(\partial_{x}\Sigma=\frac{1}{2}w_{0}^{\prime}-\frac{1}{2}\partial_{x}\eta\,\hat{Z}\), the system (A.4) may be equivalently rewritten as Footnote 26: See also [44] for the similar variables in the non-isentropic setting.
\[\partial_{x}\eta\,\hat{W} =w_{0}^{\prime}\,,\] (A.12a)
\[\partial_{t}(\partial_{x}\eta) =\tfrac{1+\alpha}{2}w_{0}^{\prime}+\tfrac{1-\alpha}{2}\partial_{x}\eta\,\hat{Z}\,,\] (A.12b)
\[\partial_{x}\eta\,\partial_{t}\hat{Z}-2\alpha\Sigma\partial_{x}\hat{Z} =-\hat{Z}\big(\tfrac{1-\alpha}{2}w_{0}^{\prime}+\tfrac{1+\alpha}{2}\partial_{x}\eta\,\hat{Z}\big)\,,\] (A.12c)
\[\partial_{t}\Sigma =-\alpha\Sigma\,\hat{Z}\,.\] (A.12d)
The remarkable features of the system (A.12) are as follows: (A.12a) shows that the quantity \(\partial_{x}\eta\,\hat{W}\) is frozen into the flow, (A.12b) and (A.12d) are ODEs, while the only PDE left, (A.12c), is the one for \(\hat{Z}\), which is a benign transport equation with a polynomial nonlinearity in the unknowns. It is thus natural that the analysis of this system for \((\hat{W},\hat{Z},\partial_{x}\eta,\Sigma)\) should encounter no derivative loss (this turns out to be true even in multi-D). By working directly in the Lagrangian coordinate system associated to the fast acoustic characteristics \(\eta\), the orange \(\lambda_{3}\)-characteristics from Figure 17 and Figure 18 are now vertical straight lines (this holds "by definition", it is a tautology). What is more interesting is that in the Lagrangian coordinates \(\eta\), the characteristics associated to the slow wave speed \(\lambda_{1}\) (which were the integral curves of the operator \(\partial_{t}+\lambda_{1}(y,t)\partial_{y}\) in original coordinates, in olive-green in Figure 17 and Figure 18) are now the integral curves of the transport operator \(\partial_{x}\eta(x,t)\partial_{t}-2\alpha\Sigma(x,t)\partial_{x}\). Finally, let us discuss the spacetime \(\mathcal{M}\) of maximal hyperbolic development of the Cauchy data, in Lagrangian coordinates. This is represented in Figure 19 below. Just as before, the location of the very first singularity at \((x_{*},t_{*})\) is determined by solving the system (A.8). In the downstream region, i.e., for \(x>x_{*}\), one still solves (A.11) in order to determine the time \(\mathsf{T}_{*}(x)\) at which the curve \(\{(x,t)\}_{t\geq t_{*}}\) intersects the downstream part of the future temporal boundary of \(\mathcal{M}\). In Lagrangian coordinates, this boundary is characterized as \(\partial_{\mathsf{top}}^{+}\mathcal{M}=\{(x,\mathsf{T}_{*}(x))\}_{x>x_{*}}\). In light of (A.12a) it is immediately clear that the vanishing of \(\partial_{x}\eta(x,t)\) as \((x,t)\to(\overline{x},\mathsf{T}_{*}(\overline{x}))\), for any \(\overline{x}\geq x_{*}\), implies the blowup (towards \(-\infty\)) of \(\hat{W}(x,t)\); which is to say that \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) parametrizes a succession of gradient catastrophes past the time of the first singularity. In the upstream region, i.e., for \(x<x_{*}\), we still solve for the distinguished slow acoustic characteristic (which is now an integral curve of the transport operator \(\partial_{x}\eta(x,t)\partial_{t}-2\alpha\Sigma(x,t)\partial_{x}\)), which passes through \((x_{*},t_{*})\).
We denote the upstream part of this curve by \(\partial_{\mathsf{top}}^{-}\mathcal{M}\), and this is the future temporal boundary of the upstream part of \(\mathcal{M}\). The main observation is that in these fast acoustic Lagrangian coordinates the spacetime of maximal hyperbolic development becomes globally \(W^{2,\infty}\) smooth (in essence, a cubic to the left, joining a parabola to the right, with matching first derivatives). This fact allows a PDE analysis in this spacetime which avoids derivative loss. The reader may compare Figure 19 (in Lagrangian coordinates) and its direct analogue, Figure 18 (in Eulerian coordinates).

### The physical shock development problem

As already alluded to in Remark A.1, when constructing the maximal hyperbolic development of the Cauchy data past the time-slice of the very first singularity \(\{t=t_{*}\}\), the physical Euler evolution is replaced by the local hyperbolic propagation of sound waves via (A.4) (in the non-isentropic model, one would add the local hyperbolic propagation of entropy waves). While the extension of the solution to (A.4) past the time-slice \(\{t=t_{*}\}\) (see Figure 18) agrees with the physical, entropy-producing, shock solution in certain regions of spacetime, it certainly _does not_ agree with the "real" Euler solution globally, not even globally in \(\mathcal{M}\)! Footnote 27: With Rankine-Hugoniot jump conditions to ensure that we are working with a weak solution of the full Euler system (A.1).

In order to describe the physical, entropy-producing, shock solution to (A.1) past the time slice \(\{t=t_{*}\}\), we need to consider the shock curve, which may be parametrized as \((\mathfrak{s}(t),t)_{t>t_{*}}\), with \(\mathfrak{s}(t_{*})=y_{*}=\eta(x_{*},t_{*})\). While at the point of the very first singularity, \((\mathfrak{s}(t_{*}),t_{*})\), the fields \((w,z,S)\) (and hence \((u,\rho,E)\)) have in the worst case a Hölder \(\frac{1}{3}\) cusp, for all \(t>t_{*}\) the fields \((u,\rho,E)\) (and hence \((w,z,S)\)) have a jump discontinuity across \((\mathfrak{s}(t),t)\). For convenience of notation, for \(t>t_{*}\) let us denote the jump of a function \(f(y,t)\) as \(y\) crosses the shock location at \(\mathfrak{s}(t)\) by \([[f(t)]]=f(\mathfrak{s}(t)^{-},t)-f(\mathfrak{s}(t)^{+},t)\). Here we have chosen the normal vector to the shock curve to point in the same direction as the propagation of the shock itself. With this convention, the physical entropy condition is that \([[S(t)]]>0\) for all \(t>t_{*}\). The shock speed (which uniquely determines the location of the shock) is denoted by \(\dot{\mathfrak{s}}(t)\), and is computed via the Rankine-Hugoniot conditions as follows. The weak formulation of the conservation-law form of the 1D Euler equations (A.1) yields a system of three equations in seven unknowns: \(u(\mathfrak{s}(t)^{\pm},t)\), \(\rho(\mathfrak{s}(t)^{\pm},t)\), \(E(\mathfrak{s}(t)^{\pm},t)\), and \(\dot{\mathfrak{s}}(t)\). Equivalently, we may use the seven unknowns \(w(\mathfrak{s}(t)^{\pm},t)\), \(z(\mathfrak{s}(t)^{\pm},t)\), \(S(\mathfrak{s}(t)^{\pm},t)\), and \(\dot{\mathfrak{s}}(t)\). The first observation is that the values of the fields downstream of the shock, i.e., \(w(\mathfrak{s}(t)^{+},t)\), \(z(\mathfrak{s}(t)^{+},t)\), \(S(\mathfrak{s}(t)^{+},t)\), may be computed uniquely and smoothly in terms of the initial data.
With the isentropic data considered in this Appendix, this leads to \(S(\mathfrak{s}(t)^{+},t)=0\), \(w(\mathfrak{s}(t)^{+},t)=w_{0}(\eta^{-1}(\mathfrak{s}(t)^{+},t))\), and \(z(\mathfrak{s}(t)^{+},t)=z_{0}(\phi^{-1}(\mathfrak{s}(t)^{+},t))\). This is because downstream of the shock all characteristic families (corresponding to \(\lambda_{3}\) for \(w\), \(\lambda_{2}\) for \(S\), and \(\lambda_{1}\) for \(z\)) are subsonic relative to the shock itself. The second observation (which together with the previous one is referred to as the _Lax geometric entropy conditions_) is that we may in fact also compute \(w(\mathfrak{s}(t)^{-},t)\) uniquely and smoothly in terms of the initial data as \(w_{0}(\eta^{-1}(\mathfrak{s}(t)^{-},t))\), because the fast acoustic characteristic upstream of the shock is supersonic relative to the shock itself. The third observation is that the Rankine-Hugoniot system reduces to a system of three equations in three unknowns, namely \(z(\mathfrak{s}(t)^{-},t)\), \(S(\mathfrak{s}(t)^{-},t)\), and \(\dot{\mathfrak{s}}(t)\). For compressive and generic initial data \((w_{0},z_{0})\) as in (A.5) and with \(S_{0}=0\), this \(3\times 3\) system is uniquely solvable under the requirement that \(S(\mathfrak{s}(t)^{-},t)=[[S(t)]]>0\), thus leading to a well-defined shock front evolution. The fourth and last observation is that the values of the fields \((w,z,S)\) on the left side (upstream) of the shock curve (that is, at \((\mathfrak{s}(t)^{-},t)\)) may now be used as Cauchy data for the system of equations (A.2) and (A.4), which allows one to compute the physical shock solution to 1D Euler globally in spacetime, even past the domain of maximal hyperbolic Cauchy development. Footnote 28: This statement is a bit more subtle than it seems because the very introduction of the shock curve implies that the backwards map \(\eta^{-1}(\cdot,t)\) is not continuous across \(y=\mathfrak{s}(t)\), resulting in a nontrivial value for \([[w(t)]]\). The reader may for instance refer to Figure 20 and trace backwards-in-time the orange characteristic curves emanating from a point which is just to the left of \((\mathfrak{s}(t),t)\), and a point which is just to the right of \((\mathfrak{s}(t),t)\). These backwards-in-time characteristics intersect \(\{t=\mathsf{t}_{\mathsf{in}}\}\) at very different locations.

Figure 19. **Maximal hyperbolic development for 1D Euler – the Lagrangian perspective.** Consider the 1D Euler system (A.4) with initial data as in (A.5). The bounding box represents \((x,t)\in[-\frac{3\pi\varepsilon}{4},\frac{3\pi\varepsilon}{4}]\times[\mathsf{t}_{\mathsf{in}},\mathsf{t}_{\mathsf{fin}}]\), for the specific values \(\alpha=\frac{1}{2}\) and \(\varepsilon=\frac{1}{16}\). We revisit Figure 18, except that we work in the Lagrangian coordinates of the fast acoustic characteristic corresponding to \(\lambda_{3}\). That is, a point \((y,t)\) in Figure 18 is replaced by the point \((x,t)=(\eta^{-1}(y,t),t)\) in the above figure. Then, the \(\lambda_{3}\)-characteristics become straight lines in orange, while the \(\lambda_{1}\)-characteristics become the curves in olive-green. The red curve represents the curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}\), while the green curve represents the curve \(\partial_{\mathsf{top}}^{-}\mathcal{M}\) (which is extended backwards in time all the way up to \(t=\mathsf{t}_{\mathsf{in}}\)).
In this Figure, the spacetime of maximal hyperbolic development of the Cauchy data \(\mathcal{M}\) is the region which lies “below” (in the causal past of) the union of \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) and \(\partial_{\mathsf{top}}^{-}\mathcal{M}\). This spacetime has a \(W^{2,\infty}\)-regular boundary.

The shock development problem, as described in the above paragraph, is illustrated in Figure 20 below. Mathematically, this picture has been analyzed in full detail in [4], in the context of the 2D Euler equations with azimuthal symmetry. See also [53] and [15] in the context of 3D Euler with radial symmetry. Footnote 29: In [4] we have additionally described the weak characteristic singularities (weak-contact and weak-rarefaction) which emerge from the very first singularity at \((y_{*},t_{*})\), simultaneously with the shock curve in the upstream region.

**Remark A.3** (**The shock development problem and the maximal hyperbolic development of the data**).: _We close this appendix by discussing the location of the shock curve relative to the spacetime \(\mathcal{M}\) of maximal hyperbolic Cauchy development. Figure 20 is our point of reference._

* _We emphasize that the shock curve is located in the interior of the downstream part of \(\mathcal{M}\), i.e., the curve \((\mathfrak{s}(t),t)_{t>t_{*}}\) (in black in Figure 20) lies strictly below the curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) (in red in Figure 20), and they intersect only at \((y_{*},t_{*})\). This is also why we may compute the values of all fields \((u,\rho,E)\) downstream of the shock: because this spacetime is fully contained in \(\mathcal{M}\), and the interior of this spacetime contains no singularities whatsoever._
* _This same fact, namely that the curve \((\mathfrak{s}(t),t)_{t>t_{*}}\) (in black in Figure 20) lies strictly below the curve \(\partial_{\mathsf{top}}^{+}\mathcal{M}\) (in red in Figure 20), also shows that the maximal hyperbolic Cauchy development of the data computes a solution to Euler in a spacetime which is not physical (in the spacetime which lies between the red and black curves in Figure 20). To see this, note merely that the physical shock solution is entropy producing, and thus \(S\neq 0\) at points upstream of the shock. In contrast, maximal hyperbolic development is computed via pure propagation of sound waves and thus the solution remains isentropic (\(S=0\))._
* _We note that in the upstream portion of \(\mathcal{M}\), namely "to the left" or "below" the curve \(\partial_{\mathsf{top}}^{-}\mathcal{M}\) (in green in Figure 20), the physical shock solution in fact precisely matches the solution obtained as the maximal hyperbolic Cauchy development of the data. Again, the reason is that in this portion of spacetime there are no singularities, and each point therein is directly accessible from the initial data via \(\lambda_{i}\)-characteristics, for all \(i\in\{1,2,3\}\)._
* _The last observation is the obvious one: solving the shock development problem allows one to compute the correct solution of 1D Euler (in particular, a weak solution, with shocks) in the portion of spacetime that lies beyond the spacetime of maximal hyperbolic development of the data (see the white region in Figure 18). This is because the shock curve itself acts as a Cauchy hypersurface, on which the data is prescribed via a combination of the Lax geometric entropy conditions and the Rankine-Hugoniot jump conditions.
The 1D Euler propagation away from the shock curve is via three transport equations: (A.2), (A.4a), and (A.4b). Two of the corresponding characteristics (the slowest and the fastest) are represented in Figure 20._

_In summary, beyond the time slice of the very first singularity, \(\{t=t_{*}\}\), the 1D Euler solution computed as the maximal hyperbolic development of the data is useful (meaning, it describes the correct solution) in the spacetime which is either downstream of the shock (in black in Figure 20), or which is upstream of the slow acoustic characteristic emanating from the very first singularity (in green in Figure 20). Note however that since the shock curve is a priori unknown (it must be nonlinearly solved for using the Rankine-Hugoniot jump conditions), a description of the Euler solution in the entire downstream part of \(\mathcal{M}\) could prove to be useful in an iteration scheme aimed at constructing the shock curve._

## Appendix B Functional analysis in the remapped domain

In this appendix we record a number of functional analytic bounds which are used throughout the paper. These bounds concern functions with arguments in the remapped spacetime \(\mathbb{T}^{2}\times[0,\varepsilon)\) (cf. (5.20)), which in turn is defined via the flattening map \(\mathfrak{q}=\mathfrak{q}(x_{2},t)\) from (5.18b). The associated differential operators are given by the vector \(\widetilde{\mathsf{D}}\) (cf. (5.23)). This analytical framework is used in Sections 5-12. In Sections B.4 and B.5 below, we show that the same analytical framework developed here applies _as is_ to the remapped domain from Section 13 and respectively Section 14.

In view of definition (4.7) and the bootstrap (5.37a), all the functions we consider in this Appendix are defined on \(\mathcal{X}_{\mathrm{fin}}\times[0,\varepsilon)\), where \(\mathcal{X}_{\mathrm{fin}}\subset\mathbb{T}^{2}\) has Lebesgue measure bounded by \(4\pi(13\pi+\mathsf{C}_{\mathsf{supp}})\varepsilon\), where the constant \(\mathsf{C}_{\mathsf{supp}}=\mathsf{C}_{\mathsf{supp}}(\alpha,\kappa_{0})>0\) was defined in (6.5). As such, throughout this section we will consider functions \(f\colon\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{R}\) with the property that the support of \(f\) in the \(x_{1}\) direction has diameter \(\lesssim\varepsilon\); more precisely,
\[\left|\bigcup_{(x_{2},\mathsf{s})\in\mathbb{T}\times[0,\varepsilon)}\mathrm{supp}\left(f(\cdot,x_{2},\mathsf{s})\right)\right|\leq 4\pi(13\pi+\mathsf{C}_{\mathsf{supp}})\varepsilon\,.\] (B.1)
Additionally, throughout this section the implicit constants in the \(\lesssim\) symbols depend on the constant \(\mathsf{C}_{\mathsf{supp}}\), and hence they depend only on \(\alpha\) and \(\kappa_{0}\).

### Sobolev and Poincaré

Instead of the "usual" Sobolev embedding \(H^{s}\subset L^{\infty}\), for \(s>1\), we shall use the bounds below, which avoid placing two copies of the derivative \(\partial_{1}=\varepsilon^{-1}\widetilde{\mathsf{D}}_{1}\) on any term.
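To quantify the cost at stake here (a remark we add for orientation, derived only from the identity \(\widetilde{\mathsf{D}}_{1}=\varepsilon\partial_{1}\)): every copy of \(\partial_{1}\) converts into a factor of \(\varepsilon^{-1}\). A naive use of the embedding \(H^{2}(\mathbb{T}^{2})\subset L^{\infty}(\mathbb{T}^{2})\) would thus force the term
\[\big\|\partial_{1}^{2}f\big\|_{L^{2}_{x}}=\varepsilon^{-2}\big\|\widetilde{\mathsf{D}}_{1}^{2}f\big\|_{L^{2}_{x}}\,,\]
incurring a loss of \(\varepsilon^{-2}\), whereas the anisotropic bounds (B.2d) and (B.2e) below place only a single copy of \(\widetilde{\mathsf{D}}_{1}\) on \(f\), and therefore lose at most a factor of \(\varepsilon^{-1}\).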
**Lemma B.1**.: _For sufficiently smooth \(f\colon\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{R}\) satisfying (B.1), we have_
\[\|f\|_{L^{2}_{x,\mathsf{s}}} \lesssim\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.2a)
\[\|\partial_{2}f\|_{L^{2}_{x,\mathsf{s}}} \lesssim\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{2}_{x,\mathsf{s}}}+8\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.2b)
\[\|f\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}} \lesssim\|f(\cdot,0)\|_{L^{2}_{x}}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.2c)
\[\|f\|_{L^{\infty}_{x,\mathsf{s}}} \lesssim\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{1}f(\cdot,0)\big\|_{L^{2}_{x}}+\varepsilon^{-1}\big\|\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{1}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.2d)
\[\|f\|_{L^{2}_{\mathsf{s}}L^{\infty}_{x}} \lesssim\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{1}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.2e)
_where the implicit constant depends only on \(\mathsf{C}_{\mathsf{supp}}\), and hence on \(\alpha\) and \(\kappa_{0}\)._

Proof of Lemma B.1.: The usual Poincaré inequality and the support assumption for \(f(\cdot,\mathsf{s})\) in (B.1) imply that
\[\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\lesssim\varepsilon\|\partial_{1}f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\,,\] (B.3)
which yields (B.2a) since \(\widetilde{\mathsf{D}}_{1}=\varepsilon\partial_{1}\). The bound (B.2b) follows since
\[\partial_{2}=\widetilde{\mathsf{D}}_{2}+\overline{\mathsf{Q}}_{2}\partial_{\mathsf{s}}=\widetilde{\mathsf{D}}_{2}+\big(\tfrac{1}{\varepsilon}\overline{\mathsf{Q}}_{2}\widehat{\mathsf{Q}}^{-1}\big)\widetilde{\mathsf{D}}_{\mathsf{s}}\,,\] (B.4)
and \(|\tfrac{1}{\varepsilon}\overline{\mathsf{Q}}_{2}\widehat{\mathsf{Q}}^{-1}|\leq\tfrac{3\cdot 40}{17}\leq 8\). Next, for any function \(f\), the fundamental theorem of calculus in time implies that
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2}\leq\|f(\cdot,0)\|_{L^{2}_{x}}^{2}+2\|f\|_{L^{2}_{x,\mathsf{s}}}\|\partial_{\mathsf{s}}f\|_{L^{2}_{x,\mathsf{s}}}\,.\]
Combined with the fact that \(\|\partial_{\mathsf{s}}f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\lesssim\varepsilon^{-1}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}\), which in turn is a consequence of \(\big\|\widehat{\mathsf{Q}}^{-1}\big\|_{L^{\infty}_{x}}\leq\tfrac{40}{17}\) (see (6.38a)), it follows that
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\lesssim\|f(\cdot,0)\|_{L^{2}_{x}}+\varepsilon^{-\frac{1}{2}}\|f\|_{L^{2}_{x,\mathsf{s}}}^{\frac{1}{2}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{2}_{x,\mathsf{s}}}^{\frac{1}{2}}\,,\] (B.5)
which proves (B.2c), upon noting that \(\|f\|_{L^{2}_{x,\mathsf{s}}}\leq\varepsilon^{\frac{1}{2}}\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\) and absorbing via Young's inequality. Next, a one-dimensional Agmon-type argument in the \(x_{1}\) and \(x_{2}\) variables, combined with the Poincaré inequality (B.3), yields
\[\|f(\cdot,\mathsf{s})\|_{L^{\infty}_{x}}\lesssim\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}f(\cdot,\mathsf{s})\big\|_{L^{2}_{x}}\,.\] (B.6)
Note that taking the \(L^{2}_{\mathsf{s}}\) norm of both sides of the above estimate trivially yields (B.2e).
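For the reader's orientation, we record the elementary one-dimensional inequality underlying estimates of the above type: if \(g\colon\mathbb{T}\to\mathbb{R}\) is smooth and vanishes at some point \(x_{0}\) (as \(f(\cdot,x_{2},\mathsf{s})\) does in the \(x_{1}\) variable, by the support assumption (B.1)), then
\[g(x)^{2}=2\int_{x_{0}}^{x}g\,g'\,\mathrm{d}x'\leq 2\|g\|_{L^{2}(\mathbb{T})}\|g'\|_{L^{2}(\mathbb{T})}\,.\]
Applied in \(x_{1}\), together with \(\partial_{1}=\varepsilon^{-1}\widetilde{\mathsf{D}}_{1}\) and the Poincaré inequality (B.3), this produces exactly one factor of \(\varepsilon^{-\frac{1}{2}}\), which is the source of the \(\varepsilon^{-\frac{1}{2}}\) prefactors in (B.2d)-(B.2e).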
Then we apply (B.2c) with \(f\) replaced by \(\widetilde{\mathsf{D}}_{2}\widetilde{\mathsf{D}}_{1}f\) and respectively \(\widetilde{\mathsf{D}}_{1}\widetilde{\mathsf{D}}f\) to deduce
\[\|f\|_{L^{\infty}_{x,\mathsf{s}}}\lesssim\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{1}f(\cdot,0)\big\|_{L^{2}_{x}}+\varepsilon^{-1}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}\widetilde{\mathsf{D}}_{1}f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.7)
thereby establishing (B.2d).

**Remark B.2**.: _Sometimes it is convenient to use a slightly modified variant of (B.2c), which conforms to the transport operator \((\widetilde{\mathsf{Q}}\partial_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2})\), rather than the directional derivative \(\widetilde{\mathsf{D}}_{\mathsf{s}}\). We claim that_
\[\big\|f\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\big\|f(\cdot,0)\big\|_{L^{2}_{x}}+\varepsilon^{\frac{1}{2}}\big\|(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})f\big\|_{L^{2}_{x,\mathsf{s}}}\,,\] (B.8)
_where the implicit constant is universal. In order to prove (B.8), we use (5.28c) and then (5.26) to write_
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2} =\|f(\cdot,0)\|_{L^{2}_{x}}^{2}+2\int_{0}^{\mathsf{s}}\!\!\!\int f\partial_{\mathsf{s}}f =\|f(\cdot,0)\|_{L^{2}_{x}}^{2}+2\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{1}{\overline{\mathsf{Q}}}f\big(\widetilde{\mathsf{Q}}\partial_{\mathsf{s}}+V\widetilde{\mathsf{D}}_{2}\big)f-\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{V}{\overline{\mathsf{Q}}}\widetilde{\mathsf{D}}_{2}(f^{2})\]
\[=\|f(\cdot,0)\|_{L^{2}_{x}}^{2}+2\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{1}{\overline{\mathsf{Q}}}f\big(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\big)f+\int_{0}^{\mathsf{s}}\!\!\!\int f^{2}\big(-\mathring{\mathsf{Q}}_{2}+\widetilde{\mathsf{D}}_{2}\big)\big(\tfrac{V}{\overline{\mathsf{Q}}}\big)+\int\big(f^{2}\tfrac{V\overline{\mathsf{Q}}_{2}}{\overline{\mathsf{Q}}}\big)(x,\mathsf{s})\,\mathrm{d}x\,.\]
_Then, using (5.37o) and (6.38) we arrive at_
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2}\leq\|f(\cdot,0)\|_{L^{2}_{x}}^{2}+5\varepsilon^{\frac{1}{2}}\big\|\big(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2}\big)f\big\|_{L^{2}_{x,\mathsf{s}}}\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}+\mathring{C}\varepsilon^{2}\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}^{2}\,.\]
_The bound (B.8) now follows by absorbing the terms involving \(\sup_{\mathsf{s}\in[0,\varepsilon]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\) into the left side, upon taking \(\varepsilon\) to be sufficiently small._

### Gagliardo-Nirenberg

**Lemma B.3**.: _Let \(f\colon\mathbb{T}^{2}\times[0,\varepsilon)\to\mathbb{R}\) be a smooth function satisfying (B.1).
Then, for every \(0\leq i\leq m\) we have_
\[\big\|\widetilde{\mathsf{D}}^{i}f\big\|_{L^{\frac{2m}{i}}_{x,\mathsf{s}}}\lesssim\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}^{\frac{i}{m}}\Big(\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq i}\big\|\widetilde{\mathsf{D}}^{j}f(\cdot,0)\big\|_{L^{\infty}_{x}}\Big)^{1-\frac{i}{m}}+\mathbf{1}_{0<i<m}\,\varepsilon^{\frac{i}{m}}\Big(\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq i}\big\|\widetilde{\mathsf{D}}^{j}f(\cdot,0)\big\|_{L^{\infty}_{x}}\Big)\,,\] (B.9)
_with the usual convention \(L^{\frac{2m}{0}}_{x,\mathsf{s}}=L^{\infty}_{x,\mathsf{s}}\), and where the implicit constant depends only on \(i\), \(m\), and \(\mathsf{C}_{\mathsf{supp}}\) (and hence on \(\alpha\) and \(\kappa_{0}\))._

Proof of Lemma B.3.: The bound (B.9) is trivial when \(i=0\) or \(i=m\), so we restrict the proof to \(1\leq i\leq m-1\). The first nontrivial case is \(i=1\) and \(m=2\). For this purpose we note that space integration by parts, the definition of the adjoint \(\widetilde{\mathsf{D}}^{*}\) in (5.28), and the bounds (6.38), yield
\[\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4} \lesssim\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\,,\] (B.10a)
\[\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4} \lesssim\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}+\varepsilon^{\frac{1}{2}}\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{3}+\varepsilon\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{3}_{x}}^{3}\,,\] (B.10b)
\[\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4} \lesssim\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}+\varepsilon\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{3}_{x}}^{3}\,,\] (B.10c)
where the last terms on the second and third lines arise from the temporal boundary term in the formula for \(\widetilde{\mathsf{D}}_{2}^{*}\) and \(\widetilde{\mathsf{D}}_{\mathsf{s}}^{*}\). For these terms, similarly to (B.5) we may prove
\[\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|\widetilde{\mathsf{D}}f\big\|_{L^{3}_{x}}^{3}\lesssim\varepsilon\big\|\widetilde{\mathsf{D}}f(\cdot,0)\big\|_{L^{\infty}_{x}}^{3}+\varepsilon^{-1}\big\|\widetilde{\mathsf{D}}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}f\big\|_{L^{2}_{x,\mathsf{s}}}\,.\] (B.10d)
When combined, the above displays yield
\[\big\|\widetilde{\mathsf{D}}f\big\|_{L^{4}_{x,\mathsf{s}}}\lesssim\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}^{\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}^{\frac{1}{2}}+\varepsilon^{\frac{1}{2}}\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\varepsilon^{\frac{1}{2}}\big\|\widetilde{\mathsf{D}}f(\cdot,0)\big\|_{L^{\infty}_{x}}\,,\] (B.11)
thereby proving (B.9) for \(i=1\) and \(m=2\).
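For completeness, we record the elementary absorption step implicit in the passage from (B.10) to (B.11): if \(X,a,b,c\geq 0\) satisfy
\[X^{4}\leq aX^{2}+bX^{3}+c\,,\qquad\text{then}\qquad X\lesssim a^{\frac{1}{2}}+b+c^{\frac{1}{4}}\,,\]
since by Young's inequality \(aX^{2}\leq\frac{1}{4}X^{4}+a^{2}\) and \(bX^{3}\leq\frac{1}{4}X^{4}+Cb^{4}\), so that the \(X^{4}\) terms may be absorbed into the left side. This is applied with \(X=\|\widetilde{\mathsf{D}}f\|_{L^{4}_{x,\mathsf{s}}}\), after the \(\sup_{\mathsf{s}}\) terms in (B.10b)-(B.10c) have been replaced via (B.10d).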
The general case \(m\geq 3\) and \(1\leq i\leq m-1\) in (B.9) may be proven in a similar manner, by induction. We omit these redundant details.

### Product and commutator estimates

**Lemma B.4** (Moser inequality).: _For \(m\geq 1\), assume that \(f,g\colon\mathbb{T}^{2}\times[0,\varepsilon]\to\mathbb{R}\) are smooth functions satisfying the standing support assumption (B.1). Inspired by (B.9), define the quantities_
\[\mathfrak{B}_{f} :=\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq m-1}\big\|\widetilde{\mathsf{D}}^{j}f(\cdot,0)\big\|_{L^{\infty}_{x}}\,,\] (B.12a)
\[\mathfrak{B}_{g} :=\big\|g\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq m-1}\big\|\widetilde{\mathsf{D}}^{j}g(\cdot,0)\big\|_{L^{\infty}_{x}}\,.\] (B.12b)
_Then, we have that_
\[\big\|\widetilde{\mathsf{D}}^{m}(fg)\big\|_{L^{2}_{x,\mathsf{s}}}\lesssim\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{g}+\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{f}+\varepsilon\,\mathfrak{B}_{f}\mathfrak{B}_{g}\,,\] (B.13)
_where the implicit constant depends only on \(m\) and \(\mathsf{C}_{\mathsf{supp}}\) (hence on \(\alpha\) and \(\kappa_{0}\))._

Proof of Lemma B.4.: Let \(\gamma\in\mathbb{N}_{0}^{3}\) be such that \(|\gamma|=m\). From the Leibniz rule and the Hölder inequality we have
\[\big\|\widetilde{\mathsf{D}}^{\gamma}(fg)\big\|_{L^{2}_{x,\mathsf{s}}}\lesssim\sum_{\beta\leq\gamma}\binom{\gamma}{\beta}\big\|\widetilde{\mathsf{D}}^{\beta}f\big\|_{L^{\frac{2m}{|\beta|}}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}^{\gamma-\beta}g\big\|_{L^{\frac{2m}{m-|\beta|}}_{x,\mathsf{s}}}\,,\] (B.14)
where we recall that \(\widetilde{\mathsf{D}}^{\gamma}=\widetilde{\mathsf{D}}_{\mathsf{s}}^{\gamma_{0}}\widetilde{\mathsf{D}}_{1}^{\gamma_{1}}\widetilde{\mathsf{D}}_{2}^{\gamma_{2}}\). For each of the above terms we apply (B.9) to deduce
\[\big\|\widetilde{\mathsf{D}}^{m}(fg)\big\|_{L^{2}_{x,\mathsf{s}}} \lesssim\sum_{i=0}^{m}\big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}^{\frac{i}{m}}\mathfrak{B}_{f}^{1-\frac{i}{m}}+\mathbf{1}_{0<i<m}\varepsilon^{\frac{i}{m}}\mathfrak{B}_{f}\big)\big(\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}^{1-\frac{i}{m}}\mathfrak{B}_{g}^{\frac{i}{m}}+\mathbf{1}_{0<i<m}\varepsilon^{1-\frac{i}{m}}\mathfrak{B}_{g}\big)\]
\[\lesssim\varepsilon\mathfrak{B}_{f}\mathfrak{B}_{g}+\sum_{i=0}^{m}\big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{g}\big)^{\frac{i}{m}}\big(\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{f}\big)^{1-\frac{i}{m}}\]
\[\qquad+\sum_{i=1}^{m-1}\big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{g}\big)^{\frac{i}{m}}\big(\varepsilon\mathfrak{B}_{f}\mathfrak{B}_{g}\big)^{1-\frac{i}{m}}+\sum_{i=1}^{m-1}\big(\varepsilon\mathfrak{B}_{f}\mathfrak{B}_{g}\big)^{\frac{i}{m}}\big(\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{f}\big)^{1-\frac{i}{m}}\,.\]
The proof concludes by appealing to Young's inequality. The proof of Lemma B.4 also yields bounds for commutators, which are defined as in Remark 4.1.
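Before turning to the general definitions, it may help to keep in mind the simplest instance \(|\gamma|=1\): by the Leibniz rule,
\[[\widetilde{\mathsf{D}}_{2},f]g=\widetilde{\mathsf{D}}_{2}(fg)-f\widetilde{\mathsf{D}}_{2}g=(\widetilde{\mathsf{D}}_{2}f)\,g\,,\qquad\text{so that}\qquad\big\|[\widetilde{\mathsf{D}}_{2},f]g\big\|_{L^{2}_{x,\mathsf{s}}}\leq\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|g\big\|_{L^{\infty}_{x,\mathsf{s}}}\,.\]
That is, the commutator always places at least one derivative on \(f\); the estimates below quantify this gain for the higher-order operators \(\widetilde{\mathsf{D}}^{\gamma}\).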
For \(\gamma=(\gamma_{0},\gamma_{1},\gamma_{2})\in\mathbb{N}_{0}^{3}\) with \(|\gamma|=m\) and any scalar function \(f\), we shall denote
\[[\widetilde{\mathsf{D}}^{\gamma},f]g=\widetilde{\mathsf{D}}^{\gamma}(fg)-f\widetilde{\mathsf{D}}^{\gamma}g=\sum_{\beta\leq\gamma,\,1\leq|\beta|}\binom{\gamma}{\beta}\widetilde{\mathsf{D}}^{\beta}f\,\widetilde{\mathsf{D}}^{\gamma-\beta}g\,,\]
and we shall use the notation
\[\big(\widetilde{\mathsf{D}}^{\gamma},f,g\big)=\widetilde{\mathsf{D}}^{\gamma}(f\,g)-f\widetilde{\mathsf{D}}^{\gamma}g-g\widetilde{\mathsf{D}}^{\gamma}f=\sum_{\beta\leq\gamma,\,1\leq|\beta|\leq|\gamma|-1}\binom{\gamma}{\beta}\widetilde{\mathsf{D}}^{\beta}f\,\widetilde{\mathsf{D}}^{\gamma-\beta}g\,,\]
so that
\[[\widetilde{\mathsf{D}}^{\gamma},f]g=\big(\widetilde{\mathsf{D}}^{\gamma},f,g\big)+g\widetilde{\mathsf{D}}^{\gamma}f\,.\]
The above identity implies that bounds for \([\widetilde{\mathsf{D}}^{\gamma},f]g\) are direct consequences of bounds for \(\big(\widetilde{\mathsf{D}}^{\gamma},f,g\big)\), which are more symmetric in nature.

**Lemma B.5** (Double commutator estimate).: _For \(m\geq 2\), assume that \(f,g\colon\mathbb{T}^{2}\times[0,\varepsilon]\to\mathbb{R}\) are smooth functions satisfying the standing support assumption (B.1). In analogy to (B.12), define_
\[\mathfrak{B}_{\widetilde{\mathsf{D}}f} :=\big\|\widetilde{\mathsf{D}}f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq m-2}\big\|\widetilde{\mathsf{D}}^{j}\widetilde{\mathsf{D}}f(\cdot,0)\big\|_{L^{\infty}_{x}}\,,\] (B.15a)
\[\mathfrak{B}_{\widetilde{\mathsf{D}}g} :=\big\|\widetilde{\mathsf{D}}g\big\|_{L^{\infty}_{x,\mathsf{s}}}+\max_{1\leq j\leq m-2}\big\|\widetilde{\mathsf{D}}^{j}\widetilde{\mathsf{D}}g(\cdot,0)\big\|_{L^{\infty}_{x}}\,.\] (B.15b)
_Then, we have that_
\[\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{2}_{x,\mathsf{s}}}\lesssim\big\|\widetilde{\mathsf{D}}^{m-1}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\big\|\widetilde{\mathsf{D}}^{m-1}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}+\varepsilon\,\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\,,\] (B.16)
_where the implicit constant depends only on \(m\) and \(\mathsf{C}_{\mathsf{supp}}\) (hence on \(\alpha\) and \(\kappa_{0}\)).
In particular, the usual commutator \([\widetilde{\mathsf{D}}^{m},f]g\) is bounded in \(L^{2}_{x,\mathsf{s}}\) via (B.16) and the triangle inequality_
\[\big\|[\widetilde{\mathsf{D}}^{m},f]g\big\|_{L^{2}_{x,\mathsf{s}}}\leq\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{2}_{x,\mathsf{s}}}+\big\|g\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\,.\] (B.17)

Proof of Lemma B.5.: From the Leibniz rule and Hölder inequalities, we have that
\[\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{2}_{x,\mathsf{s}}}\lesssim\sum_{i=1}^{m-1}\big\|\widetilde{\mathsf{D}}^{i-1}\widetilde{\mathsf{D}}f\big\|_{L^{\frac{2(m-2)}{i-1}}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}^{m-i-1}\widetilde{\mathsf{D}}g\big\|_{L^{\frac{2(m-2)}{m-i-1}}_{x,\mathsf{s}}}\,.\] (B.18)
Applying (B.9), we obtain
\[\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{2}_{x,\mathsf{s}}} \lesssim\sum_{i=1}^{m-1}\big(\big\|\widetilde{\mathsf{D}}^{m-1}f\big\|_{L^{2}_{x,\mathsf{s}}}^{\frac{i-1}{m-2}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}^{1-\frac{i-1}{m-2}}+\varepsilon^{\frac{i-1}{m-2}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}\big)\big(\big\|\widetilde{\mathsf{D}}^{m-1}g\big\|_{L^{2}_{x,\mathsf{s}}}^{1-\frac{i-1}{m-2}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}^{\frac{i-1}{m-2}}+\varepsilon^{1-\frac{i-1}{m-2}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\big)\]
\[\lesssim\varepsilon\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\sum_{i=1}^{m-1}\big(\big\|\widetilde{\mathsf{D}}^{m-1}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\big)^{\frac{i-1}{m-2}}\big(\big\|\widetilde{\mathsf{D}}^{m-1}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}\big)^{1-\frac{i-1}{m-2}}\]
\[\qquad+\sum_{i=1}^{m-1}\big(\varepsilon\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\big)^{\frac{i-1}{m-2}}\big(\mathfrak{B}_{\widetilde{\mathsf{D}}f}\big\|\widetilde{\mathsf{D}}^{m-1}g\big\|_{L^{2}_{x,\mathsf{s}}}\big)^{1-\frac{i-1}{m-2}}+\sum_{i=1}^{m-1}\big(\big\|\widetilde{\mathsf{D}}^{m-1}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\big)^{\frac{i-1}{m-2}}\big(\varepsilon\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\big)^{1-\frac{i-1}{m-2}}\,.\] (B.19)
The bound (B.16) now follows from Young's inequality.

Next, we establish a version of the double-commutator bound from Lemma B.5, which is however uniform in \(\mathsf{s}\).

**Lemma B.6** (Double commutator estimate, uniform in time).: _For \(m\geq 2\), assume that \(f,g\colon\mathbb{T}^{2}\times[0,\varepsilon]\to\mathbb{R}\) are smooth functions satisfying the support assumption (B.1). Recall the definitions of \(\mathfrak{B}_{f}\), \(\mathfrak{B}_{g}\) in (B.12), and the definitions of \(\mathfrak{B}_{\widetilde{\mathsf{D}}f}\), \(\mathfrak{B}_{\widetilde{\mathsf{D}}g}\) in (B.15). Then,_
\[\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\varepsilon^{-\frac{1}{2}}\Big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}+\varepsilon\,\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\Big)\,,\] (B.20)
_where the implicit constant depends only on \(m\) and \(\mathsf{C}_{\mathsf{supp}}\). For any function \(\varphi\colon\mathbb{T}^{2}\times[0,\varepsilon]\to[0,\infty)\) which is bounded from above, we have the commutator bound_
\[\big\|\varphi\big[\widetilde{\mathsf{D}}^{m},f\big]g\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\big\|\varphi\widetilde{\mathsf{D}}^{m}f\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\mathfrak{B}_{g}+\varepsilon^{-\frac{1}{2}}\Big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}+\varepsilon\,\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\Big)\,,\] (B.21)
_and the Moser-type bound_
\[\big\|\varphi\widetilde{\mathsf{D}}^{m}(f\,g)\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\big\|\varphi\widetilde{\mathsf{D}}^{m}f\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\mathfrak{B}_{g}+\big\|\varphi\widetilde{\mathsf{D}}^{m}g\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\mathfrak{B}_{f}+\varepsilon^{-\frac{1}{2}}\big\|\varphi\big\|_{L^{\infty}_{x,\mathsf{s}}}\Big(\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}+\varepsilon\,\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}\Big)\,,\] (B.22)
_where the implicit constant depends only on \(m\), and on \(\mathsf{C}_{\mathsf{supp}}\) (hence on \(\alpha\) and \(\kappa_{0}\))._

**Remark B.7**.: _The important fact to notice about (B.22) is that the terms on the second line do not retain the weight \(\varphi\) inside the norms, and therefore the upper bound for \(\|\varphi\widetilde{\mathsf{D}}^{m}(f\,g)\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\) will always be given by \(\varepsilon^{-\frac{1}{2}}\|\varphi\|_{L^{\infty}_{x,\mathsf{s}}}\) times the upper bound for \(\|\widetilde{\mathsf{D}}^{m}(f\,g)\|_{L^{2}_{x,\mathsf{s}}}\), plus the first two terms on the right side of (B.22), which retain the weight function \(\varphi\) inside the norm._

Proof of Lemma B.6.: From the triangle inequality we have that
\[\big\|\big(\widetilde{\mathsf{D}}^{m},f,g\big)\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\leq 2^{m}\sum_{j=1}^{m-1}\big\|\widetilde{\mathsf{D}}^{j}f\,\widetilde{\mathsf{D}}^{m-j}g\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\,.\]
In turn, for each fixed \(1\leq j\leq m-1\), using (B.2c) we have that
\[\|F\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}}\lesssim\|F(0)\|_{L^{2}_{x}}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}F\big\|_{L^{2}_{x,\mathsf{s}}}\,,\]
where the implicit constant is universal.
By appealing to the above bound with \(F=\widetilde{\mathsf{D}}^{j}f\,\widetilde{\mathsf{D}}^{m-j}g\), we deduce that
\[\sum_{j=1}^{m-1}\big\|\widetilde{\mathsf{D}}^{j}f\,\widetilde{\mathsf{D}}^{m-j}g\big\|_{L^{\infty}_{\mathsf{s}}L^{2}_{x}} \lesssim\sum_{j=1}^{m-1}\Big(\varepsilon^{\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{j}f(0)\big\|_{L^{\infty}_{x}}\big\|\widetilde{\mathsf{D}}^{m-j}g(0)\big\|_{L^{\infty}_{x}}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{j}f\,\widetilde{\mathsf{D}}^{m-j}g\big\|_{L^{2}_{x,\mathsf{s}}}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{j}f\,\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}^{m-j}g\big\|_{L^{2}_{x,\mathsf{s}}}\Big)\]
\[\lesssim\varepsilon^{\frac{1}{2}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}g\big\|_{L^{\infty}_{x,\mathsf{s}}}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}f\big\|_{L^{\infty}_{x,\mathsf{s}}}+\varepsilon^{-\frac{1}{2}}\sum_{j=2}^{m-1}\big\|\widetilde{\mathsf{D}}^{j-1}\widetilde{\mathsf{D}}f\,\widetilde{\mathsf{D}}^{m-j}\widetilde{\mathsf{D}}g\big\|_{L^{2}_{x,\mathsf{s}}}\]
\[\lesssim\varepsilon^{\frac{1}{2}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{m}f\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}g}+\varepsilon^{-\frac{1}{2}}\big\|\widetilde{\mathsf{D}}^{m}g\big\|_{L^{2}_{x,\mathsf{s}}}\mathfrak{B}_{\widetilde{\mathsf{D}}f}+\varepsilon^{-\frac{1}{2}}\sum_{j=2}^{m-1}\big\|\widetilde{\mathsf{D}}^{j-1}\widetilde{\mathsf{D}}f\big\|_{L^{\frac{2(m-1)}{j-1}}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}^{m-j}\widetilde{\mathsf{D}}g\big\|_{L^{\frac{2(m-1)}{m-j}}_{x,\mathsf{s}}}\,,\]
where the remaining sum is estimated via (B.9) and Young's inequality, exactly as in the proof of Lemma B.5, and the implicit constant only depends on \(m\). This proves (B.20). The bounds (B.21) and (B.22) now follow from the definitions of \(\big(\widetilde{\mathsf{D}}^{m},f,g\big)\) and \([\widetilde{\mathsf{D}}^{m},f]g\), the triangle inequality, and the bound (B.20).

### The flattening map from Section 13

The map \(\mathsf{q}=\mathsf{q}(x_{1},x_{2},t)\) defined in (13.8b) is used in Section 13 to remap the spacetime which is downstream of the pre-shock onto \(\mathbb{T}^{2}\times[0,\varepsilon)\), via (13.10). The associated differential operator \(\widetilde{\mathsf{D}}\) is given in (13.13). Save for the \(x_{1}\)-dependence of \(\mathsf{q}\), the mappings \(\mathsf{q}\) and \(\mathfrak{q}\) have nearly identical properties and bounds (see Lemmas 6.3 and 13.7), which is why we have chosen to use nearly the same notation for these maps. The only difference between the analysis of the differential operators \(\widetilde{\mathsf{D}}\) from (5.23) and the \(\widetilde{\mathsf{D}}\) from (13.13) lies in the definition of \(\widetilde{\mathsf{D}}_{1}\). This difference results in the following modification to the upper bounds given in this section: every bound which contains a term of the type \(\|\widetilde{\mathsf{D}}_{1}f\|\), with \(\widetilde{\mathsf{D}}_{1}\) as defined by (5.23), needs to be replaced by an upper bound of the type \(\|\widetilde{\mathsf{D}}_{1}f\|+\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\|\), with \(\widetilde{\mathsf{D}}_{1}\) and \(\widetilde{\mathsf{D}}_{\mathsf{s}}\) as defined by (13.13).
This change only affects the bounds (B.2a), (B.2d), and (B.2e); note however that when these estimates are used in the bulk of the paper, we in fact bound \(\|\widetilde{\mathsf{D}}_{1}f\|\) by the full norm \(\|\widetilde{\mathsf{D}}f\|\), and therefore no single bound in the bulk of the paper is altered by this change. The estimates provided by the Remarks B.2, B.7, and Lemmas B.3, B.4, B.5, B.6 remain unchanged. In view of this fact, in Section 13 we still refer to the bounds in this Appendix, the above mentioned slight modification being implicit.

### The flattening map from Section 14

In Section 14 we work in the spacetime \(\mathring{\mathcal{H}}^{\delta}\) defined in (14.14). While pointwise estimates are performed in \((x,t)\in\mathring{\mathcal{H}}^{\delta}\) variables, the energy estimates at the center of our analysis are performed in the \((x,\mathsf{s})\) variables that correspond to the remapped domain \(\mathcal{H}^{\delta}\subset\mathcal{X}_{\mathsf{fin}}\times[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}})\), defined in (14.120a). This remapping (cf. (14.106)) is achieved by setting \(\mathsf{s}=\mathsf{q}(x_{2},t)\), with the map \(\mathsf{q}\) being defined in (14.99). In particular, the map \(\mathsf{q}\) is independent of \(x_{1}\), and all functions \(f\colon\mathcal{H}^{\delta}\to\mathbb{R}\) satisfy (B.1). The differences between the \((x,\mathsf{s})\) coordinates in Section 14 and the \((x,\mathsf{s})\) coordinates used in Sections 5-12 are as follows. First, the domain of the new time variable is not \([0,\varepsilon)\), but instead \([\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}})\), as defined by (14.101) and (14.103). Since \(\mathsf{s}_{\mathsf{fin}}-\mathsf{s}_{\mathsf{in}}\in[(\frac{51}{50}-\hat{C}\varepsilon)\frac{2\varepsilon}{1+\alpha},(\frac{51}{50}+\hat{C}\varepsilon)\frac{2\varepsilon}{1+\alpha}]\), the length of the time interval is still \(\mathcal{O}(\varepsilon)\), this time with an implicit constant depending only on \(\alpha\) (on which the bounds in this Appendix are allowed to depend anyway). Second, the \(L^{2}_{x}\) and \(L^{2}_{x,\mathsf{s}}\) norms are no longer defined on all of \(\mathbb{T}^{2}\), respectively \(\mathbb{T}^{2}\times[0,\varepsilon)\) (as was done in Sections 5-12); instead, for \(f\colon\mathcal{H}^{\delta}\to\mathbb{R}\) we use the \(L^{2}_{x,\mathsf{s}}=L^{2}_{x,\mathsf{s}}(\mathcal{H}^{\delta})\) norm defined in (14.122b), while the \(L^{\infty}_{\mathsf{s}}L^{2}_{x}=L^{\infty}_{\mathsf{s}}L^{2}_{x}(\mathcal{H}^{\delta})\) norm is given by \(\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\), with \(\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\) as defined by (14.122a). Third, the \(\mathsf{s}\)-time-slices of \(\mathcal{H}^{\delta}\) now have a "right" boundary, at \(x_{1}=\theta^{\delta}(x_{2},\mathsf{s})\). Nonetheless, due to (B.1) it still holds that the norm in (14.122a) satisfies a Poincaré inequality of the type \(\|f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\lesssim\varepsilon\|\partial_{1}f(\cdot,\mathsf{s})\|_{L^{2}_{x}}=\|\widetilde{\mathsf{D}}_{1}f(\cdot,\mathsf{s})\|_{L^{2}_{x}}\), as claimed in (B.2a). The new spacetime differential operator \(\widetilde{\mathsf{D}}\) is defined in (14.109a), with the adjoint \(\widetilde{\mathsf{D}}^{*}\) being given by (14.129).
Since \(\mathsf{q}\) is independent of \(x_{1}\), the estimates for the \(\mathsf{Q}\) coefficients appearing in the definition of the differential operator \(\widetilde{\mathsf{D}}\) are nearly identical to those in Sections 5-12. Indeed, comparing the estimates in Lemma 14.7 to those in Lemma 6.3, we see that the only differences involve \(\alpha\)-dependent factors. Another potential difference concerns the usage of the fundamental theorem of calculus in time (which is used to prove (B.2c), (B.8)), since the location of the right boundary of each \(\mathsf{s}\)-time-slice depends on \(\mathsf{s}\). What saves the day is that the \(x_{1}\) boundary term at \(\theta^{\delta}(x_{2},\mathsf{s})\) (see (14.128a)) always has a sign, since \(\partial_{\mathsf{s}}\theta^{\delta}\leq 0\) (see (14.137)). For example,
\[\big\|F(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p} =\big\|F(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{p}_{x}}^{p}+\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\tfrac{\mathrm{d}}{\mathrm{d}\mathsf{s}'}\big\|F(\cdot,\mathsf{s}')\big\|_{L^{p}_{x}}^{p}\,\mathrm{d}\mathsf{s}'\]
\[=\big\|F(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{p}_{x}}^{p}+\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\Big(\iint^{\theta^{\delta}}p|F(x,\mathsf{s}')|^{p-2}F(x,\mathsf{s}')\,\partial_{\mathsf{s}'}F(x,\mathsf{s}')\,\mathrm{d}x+\int_{x_{2}=-\pi}^{\pi}\underbrace{\partial_{\mathsf{s}'}\theta^{\delta}(x_{2},\mathsf{s}')}_{\leq 0}\big|F(\theta^{\delta}(x_{2},\mathsf{s}'),x_{2},\mathsf{s}')\big|^{p}\,\mathrm{d}x_{2}\Big)\mathrm{d}\mathsf{s}'\]
\[\leq\big\|F(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{p}_{x}}^{p}+p\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\big\|F(\cdot,\mathsf{s}')\big\|_{L^{\frac{(p-1)q}{q-1}}_{x}}^{p-1}\big\|\partial_{\mathsf{s}'}F(\cdot,\mathsf{s}')\big\|_{L^{q}_{x}}\,\mathrm{d}\mathsf{s}'\,,\] (B.23)
where in the last inequality we have used Hölder and (14.137). As such, no actual change arises when applying the fundamental theorem of calculus in \(\mathsf{s}\); we are simply not using a helpful signed term in our upper bounds. The last potential difference concerns the integration by parts in \(\mathsf{s}\) that is used to prove (B.9). To see that no difference arises in the final estimate, let us for instance consider the proof of (B.11), which consists of establishing the bounds (B.10a)-(B.10d). First, we note that (B.10d) remains unchanged, by using an argument similar to that in (B.23), for \(p=3\) and \(q=2\).
Second, using the definition of \(\widetilde{\mathsf{D}}_{1}^{*}\) in (14.129b), we see that the bound (B.10a) becomes
\[\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4}\leq 3\,\|f\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}+\varepsilon\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}\big|f(\widetilde{\mathsf{D}}_{1}f)^{3}\big|(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s}\,.\]
For the second term in the above estimate, using the support property (B.1), the fundamental theorem of calculus in the \(x_{1}\) variable, and the fact that \(\widetilde{\mathsf{D}}_{1}=\varepsilon\partial_{1}\), we deduce that
\[\varepsilon\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}\big|f(\widetilde{\mathsf{D}}_{1}f)^{3}\big|(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s} \leq\varepsilon\|f\|_{L^{\infty}_{x,\mathsf{s}}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}|\widetilde{\mathsf{D}}_{1}f|^{3}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s} \leq 3\,\|f\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\big\|\widetilde{\mathsf{D}}_{1}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\,.\]
Combining the two estimates above shows that (B.10a) remains unchanged, i.e.,
\[\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4}\lesssim\|f\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{1}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\,.\]
A similar argument, using the definition of \(\widetilde{\mathsf{D}}_{2}^{*}\) in (14.129c), and the bounds for \(\widetilde{\mathsf{Q}}_{2}\) and \(\overline{\mathsf{Q}}\) from (14.134), shows that
\[\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4} \leq 3\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}+6(1+\alpha)\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{2}_{x,\mathsf{s}}}^{3}+11\varepsilon\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big\|\widetilde{\mathsf{D}}_{2}f(\cdot,\mathsf{s})\big\|_{L^{3}_{x}}^{3}\]
\[\qquad+\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}\big|\widetilde{\mathsf{D}}_{2}\theta^{\delta}(x_{2},\mathsf{s})\big|\big|\widetilde{\mathsf{D}}_{2}f(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\big|^{3}\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s}\,.\]
The first three terms on the right side of the above estimate were already contained in (B.10b), and they are treated in the same way, by appealing to (B.23) with \(p=3\) and \(q=2\). In order to estimate the last term in the above expression, we make two observations.
First, from (14.118) and (14.109a) we deduce
\[\widetilde{\mathsf{D}}_{2}\theta^{\delta}(x_{2},\mathsf{s})=\tfrac{\overline{\mathsf{Q}}_{2}(x_{2},\mathsf{s})+\partial_{2}\overline{\Theta}^{\delta}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2})}{-\partial_{1}\overline{\Theta}^{\delta}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2})}\,.\]
Second, upon making the change of variables \(\mathsf{s}\mapsto x_{1}\) via \(\mathsf{s}=\overline{\Theta}^{\delta}(x_{1},x_{2})\), we have from (14.118) that
\[\mathrm{d}\mathsf{s}=\partial_{1}\overline{\Theta}^{\delta}(x_{1},x_{2})\,\mathrm{d}x_{1}=\partial_{1}\overline{\Theta}^{\delta}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2})\,\mathrm{d}x_{1}\,.\]
Hence, changing variables, recalling the definition (14.150), and appealing to the bounds (14.134c) and (14.136c), we deduce that
\[\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}\big|\widetilde{\mathsf{D}}_{2}\theta^{\delta}(x_{2},\mathsf{s})\big|\big|\widetilde{\mathsf{D}}_{2}f(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\big|^{3}\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s}\]
\[\qquad=\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\int_{\mathbb{T}}\int_{\widetilde{\mathfrak{X}}^{-}_{1}(x_{2},0)}^{\widetilde{\mathfrak{X}}^{+}_{1}(x_{2},0)}\big|\overline{\mathsf{Q}}_{2}(x_{2},\overline{\Theta}^{\delta}(x_{1},x_{2}))+\partial_{2}\overline{\Theta}^{\delta}(x_{1},x_{2})\big|\big|\widetilde{\mathsf{D}}_{2}f(x_{1},x_{2},\overline{\Theta}^{\delta}(x_{1},x_{2}))\big|^{3}\,\mathrm{d}x_{2}\,\mathrm{d}x_{1}\]
\[\qquad\lesssim\varepsilon\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f(\cdot,\overline{\Theta}^{\delta}(\cdot))\big\|_{L^{3}_{x}}^{3}\,.\]
Then, re-doing the computation leading to (14.155) (in \(L^{3}\) instead of \(L^{2}\)), using (14.152) (both with \(L^{2}\) and with \(L^{4}\), instead of just \(L^{2}\)), and re-doing the computation leading up to (14.159) for \(r=0\) (in \(L^{3}\) instead of \(L^{2}\)), we deduce that
\[\big\|\widetilde{\mathsf{D}}_{2}f(\cdot,\overline{\Theta}^{\delta}(\cdot))\big\|_{L^{3}_{x}}^{3}\lesssim\big\|\widetilde{\mathsf{D}}_{2}f(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{3}_{x}}^{3}+\tfrac{1}{\varepsilon}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}f\big\|_{L^{2}_{x,\mathsf{s}}}\,.\]
Combining the bounds obtained in the five estimates above leads to
\[\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4}\lesssim\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\big\|\widetilde{\mathsf{D}}_{2}^{2}f\big\|_{L^{2}_{x,\mathsf{s}}}+\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f\big\|_{L^{4}_{x,\mathsf{s}}}^{2}\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}\widetilde{\mathsf{D}}_{2}f\big\|_{L^{2}_{x,\mathsf{s}}}+\varepsilon^{2}\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\big\|\widetilde{\mathsf{D}}_{2}f(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{\infty}_{x}}^{3}\,,\]
which precisely matches (B.11). It remains to consider the modifications required to the bound (B.10c). Recalling the definition of \(\widetilde{\mathsf{D}}_{\mathsf{s}}^{*}\) in (14.129a), it is clear that only one new term emerges on the right side of (B.10c), which is
\[\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}_{\mathsf{fin}}}\int_{\mathbb{T}}\big|\widetilde{\mathsf{D}}_{\mathsf{s}}\theta^{\delta}(x_{2},\mathsf{s})\big|\big|\widetilde{\mathsf{D}}_{\mathsf{s}}f(\theta^{\delta}(x_{2},\mathsf{s}),x_{2},\mathsf{s})\big|^{3}\,\mathrm{d}x_{2}\,\mathrm{d}\mathsf{s}\,.\]
In a similar fashion to the \(\widetilde{\mathsf{D}}_{2}\) analysis above, we use that from (14.118) and (14.109a) we deduce
\[\widetilde{\mathsf{D}}_{\mathsf{s}}\theta^{\delta}(x_{2},\mathsf{s})=\tfrac{\varepsilon\widetilde{\mathsf{Q}}(x_{2})}{-\partial_{1}\overline{\Theta}^{\delta}(\theta^{\delta}(x_{2},\mathsf{s}),x_{2})}\,.\]
The same change of variables as the one described above, followed by an application of (14.150), (14.155), (14.152), (14.159), all adapted to the \(L^{3}\) instead of the \(L^{2}\) setting, shows that \(\big\|\widetilde{\mathsf{D}}_{\mathsf{s}}f\big\|_{L^{4}_{x,\mathsf{s}}}^{4}\) obeys a bound which is the same as (B.10c). Thus, we have shown that for the upstream geometry of Section 14 the bound (B.11) still holds _as is_, albeit with a slightly more elaborate proof. This results in an unchanged statement of Lemma B.3. In summary, _all the main bounds_ obtained in this Appendix, namely Lemmas B.1, B.3, B.4, B.5, B.6, and Remarks B.2, B.7, hold _as is_ for the \((x,\mathsf{s})\) coordinates from Section 14.

## Appendix C Transport bounds

The purpose of this appendix is to provide space-time \(L^{\infty}\) bounds for objects which are transported and stretched along the \(\lambda_{1}\) and \(\lambda_{2}\) characteristics. As with Appendix B, the estimates in this Appendix are written for the flattening map \(\mathfrak{q}\) defined in Section 5.2, and utilized in Sections 5-12. In Sections C.1 and C.2 below, we show that the same analytical framework developed here applies _as is_ to the remapped domains from Sections 13 and 14. Recall from (3.31), (3.36), and (3.39) that in the ALE coordinates corresponding to the fast acoustic speed \(\lambda_{3}\), the transport operator associated with the wave-speed \(\lambda_{2}\) takes the form \((\partial_{t}+V\partial_{2})+\alpha g^{-\frac{1}{2}}h_{,2}\,\partial_{2}-\alpha\Sigma J_{g}^{-1}\partial_{1}\). Also, from (3.28) and (3.34) we see that the \(\lambda_{1}\) transport operator takes the form \((\partial_{t}+V\partial_{2})+2\alpha g^{-\frac{1}{2}}h_{,2}\,\partial_{2}-2\alpha\Sigma J_{g}^{-1}\partial_{1}\). For simplicity of the presentation, we only present the details of the \(\lambda_{2}\) analysis (the change of an \(\alpha\) to a \(2\alpha\) does not affect our final estimate).
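For orientation, note that both operators belong to the one-parameter family
\[(\partial_{t}+V\partial_{2})+c\,\alpha g^{-\frac{1}{2}}h_{,2}\,\partial_{2}-c\,\alpha\Sigma J_{g}^{-1}\partial_{1}\,,\qquad c\in\{1,2\}\,,\]
with \(c=1\) giving the \(\lambda_{2}\) transport operator and \(c=2\) giving the \(\lambda_{1}\) transport operator; every estimate below goes through verbatim for either value of \(c\).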
We recall from Section 5.2 that the remapping of the space-time \(\mathcal{P}\to\mathbb{T}^{2}\times[0,\varepsilon]\), and the associated change of unknown \(f(x,t)=\widetilde{f}(x,\mathsf{s})\), gives \(\|f\|_{L^{\infty}_{x,t}}=\|\widetilde{f}\|_{L^{\infty}_{x,\mathsf{s}}}\). Under this change of coordinates, if a function \(f\) solves \[\big(J_{g}(\partial_{t}+V\partial_{2})+\alpha\Sigma g^{-\frac{1}{2}}J_{g}h_{,2}\,\partial_{2}-\alpha\Sigma\partial_{1}\big)f=q\,,\] (C.1) then according to (5.21) and (5.26) we have that the function \(\widetilde{f}\) solves \[\big(J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})+\alpha\Sigma g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}hJ_{g}\widetilde{\mathsf{D}}_{2}-\alpha\Sigma\partial_{1}\big)\widetilde{f}=\widetilde{q}\,.\] (C.2) As discussed in Remark 5.2, we drop the tildes on all the remapped functions of the variables \((x,\mathsf{s})\) present in (C.2). Our goal is to prove the \(L^{\infty}\) bounds directly in \((x,\mathsf{s})\) coordinates, working with (C.2). Clearly, the bounds established will also hold if \(\alpha\) is replaced by \(2\alpha\), i.e. the \(\lambda_{2}\) transport operator is replaced by the \(\lambda_{1}\) transport operator. The main result is Proposition C.1 below, whose proof has an equally useful consequence, see Corollary C.3. **Proposition C.1**.: _Assume that \(f\) is a smooth solution of (C.2), and that \(q\) is bounded. Then, assuming the bootstrap inequalities (5.37) hold, and that \(\varepsilon\) is sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\), we have_ \[\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\leq 4e^{18}\big\|f(\cdot,0)\big\|_{L^{\infty}_{x}}+\tfrac{20\varepsilon}{\alpha}e^{18}\big\|q\big\|_{L^{\infty}_{x,\mathsf{s}}}\,.\] (C.3) **Remark C.2**.: _In many applications of estimate (C.3) it may be convenient to appeal to the fundamental theorem of calculus in time and (B.2e) to bound the \(L^{\infty}\) norm of \(q\), resulting in the estimate_ \[\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\lesssim\big\|f(\cdot,0)\big\|_{L^{\infty}_{x}}+\tfrac{\varepsilon}{\alpha}\big\|q(\cdot,0)\big\|_{L^{\infty}_{x}}+\tfrac{1}{\alpha}\big\|\widetilde{\mathsf{D}}^{2}\widetilde{\mathsf{D}}_{1}q\big\|_{L^{2}_{x,\mathsf{s}}}\,.\] (C.4) Proof of Proposition C.1.: First, we note that (C.2) (in which as usual we drop the tildes) implies \[\tfrac{J_{g}}{\Sigma^{p}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(f^{p})+\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\widetilde{\mathsf{D}}_{2}(f^{p})-\tfrac{\alpha}{\Sigma^{p-1}}\partial_{1}(f^{p})=\tfrac{p}{\Sigma^{p}}f^{p-1}q\,.\] Integrating the above expression in space-time, and appealing to (5.28) we deduce \[\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}-\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}=\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{1}{\Sigma^{p}}f^{p}\mathsf{G}+\int\overline{\mathsf{Q}}_{2}\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}f^{p}\big|_{\mathsf{s}}+\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{p}{\Sigma^{p}}f^{p-1}q,\] (C.5) where, we have denoted \[\mathsf{G}:=\Sigma^{p}\Big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}+(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Big)\tfrac{J_{g}}{\Sigma^{p}}-\Sigma^{p}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\Big(\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\Big)-\Sigma^{p}\partial_{1}\Big(\tfrac{\alpha}{\Sigma^{p-1}}\Big)\,.\] (C.6) Using (5.30), (5.33c), and (3.20), we may rewrite \[\mathsf{G}=J_{g}\big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}\big)+\tfrac{1+\alpha}{2}J_{g}\hat{\mathsf{W}}_{\mathcal{N}}+\tfrac{1-\alpha}{2}J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}+p\alpha J_{g}(\hat{\mathsf{Z}}_{\mathcal{N}}+\hat{\mathsf{A}}_{\mathcal{T}})-\alpha\hat{\mathsf{Q}}_{2}\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h+\alpha\Sigma\widetilde{\mathsf{D}}_{2}\big(g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h\big)-(p-1)\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\Sigma+(p-1)\alpha\big(\tfrac{1}{2}(J_{g}\hat{\mathsf{W}}_{\mathcal{N}}-J_{g}\hat{\mathsf{Z}}_{\mathcal{N}})+\tfrac{1}{2}J_{g}\widetilde{\mathsf{D}}_{2}h(\hat{\mathsf{W}}_{\mathcal{T}}-\hat{\mathsf{Z}}_{\mathcal{T}})\big)\] \[\phantom{\mathsf{G}}=\big(\tfrac{1}{2}+p\tfrac{\alpha}{2}\big)J_{g}\hat{\mathsf{W}}_{\mathcal{N}}+J_{g}\big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}\big)+\big(\tfrac{1}{2}+p\tfrac{\alpha}{2}\big)J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}-\alpha\hat{\mathsf{Q}}_{2}\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h+\alpha\Sigma\widetilde{\mathsf{D}}_{2}\big(g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h\big)+p\alpha J_{g}\hat{\mathsf{A}}_{\mathcal{T}}\,.\] Using (6.38), (6.64), and the bootstrap inequalities (5.37), we deduce for \(p\geq 1\) the pointwise bound \[\mathsf{G}\leq-p\tfrac{9\alpha}{20\varepsilon}+\tfrac{17p+2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}+\mathring{C}p\] (C.7) where \(\mathring{C}=\mathring{C}(\alpha,\kappa_{0},\mathsf{C_{data}})\) is independent of \(\varepsilon\) and \(p\).
Moreover, by appealing to the same bounds we have that \[\big|\overline{\mathsf{Q}}_{2}\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\big|\leq\mathring{C}\varepsilon^{2}\tfrac{J_{g}\mathsf{Q}}{\Sigma^{p}}\,.\] (C.8) Therefore, upon taking \(\varepsilon\) to be sufficiently small, solely in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\), we deduce from (C.5), (C.7), and (C.8) that \[\tfrac{17}{18}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}+\tfrac{2\alpha p}{5\varepsilon}\int_{0}^{\mathsf{s}}\!\big\|\Sigma^{-1}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}+\tfrac{17p+2\cdot 250^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}+p\!\int_{0}^{\mathsf{s}}\!\big\|\Sigma^{-1}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p-1}\big\|\Sigma^{-1}q(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}\mathrm{d}\mathsf{s}^{\prime}.\] Next, we apply the \(\varepsilon\)-Young inequality in the form \[p\big\|\Sigma^{-1}f\big\|_{L^{p}_{x}}^{p-1}\big\|\Sigma^{-1}q\big\|_{L^{p}_{x}}\leq\tfrac{(p-1)\alpha}{5\varepsilon}\big\|\Sigma^{-1}f\big\|_{L^{p}_{x}}^{p}+\big(\tfrac{5\varepsilon}{\alpha}\big)^{p-1}\big\|\Sigma^{-1}q\big\|_{L^{p}_{x}}^{p}\,,\] absorb the resulting \(f\) term into the damping term on the left side, and deduce \[\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}+\tfrac{\alpha p}{5\varepsilon}\int_{0}^{\mathsf{s}}\!\big\|\Sigma^{-1}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}\leq 2\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}+\tfrac{18p+2\cdot 250^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\!\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}+2\big(\tfrac{5\varepsilon}{\alpha}\big)^{p-1}\int_{0}^{\mathsf{s}}\!\big\|\Sigma^{-1}q\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}\,.\] (C.9) Next, we ignore the damping term on the left side of (C.9) and apply Gronwall on the time interval \([0,\varepsilon]\) to obtain \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}\leq 2e^{18p+2\cdot 250^{2}}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}+2e^{18p+2\cdot 250^{2}}(\tfrac{5\varepsilon}{\alpha})^{p-1}\big\|\Sigma^{-1}q\big\|_{L^{p}_{x,\mathsf{s}}}^{p}\,.\] The second to last step is to take \(p^{th}\) roots, use that \((a^{p}+b^{p})^{\frac{1}{p}}\leq a+b\), and deduce \[\sup_{\mathsf{s}\in[0,\varepsilon]}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}\leq 2^{\frac{1}{p}}e^{18+\frac{2\cdot 250^{2}}{p}}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}+2^{\frac{1}{p}}e^{18+\frac{2\cdot 250^{2}}{p}}(\tfrac{5\varepsilon}{\alpha})^{\frac{p-1}{p}}\big\|\Sigma^{-1}q\big\|_{L^{p}_{x,\mathsf{s}}}\,.\] (C.10) The last step is to pass \(p\to\infty\). Via the dominated convergence theorem \[\big\|\Sigma^{-1}f\big\|_{L^{\infty}_{x,\mathsf{s}}}\leq e^{18}\big\|\Sigma^{-1}f(\cdot,0)\big\|_{L^{\infty}_{x}}+\tfrac{5\varepsilon}{\alpha}e^{18}\big\|\Sigma^{-1}q\big\|_{L^{\infty}_{x,\mathsf{s}}}\,.\] The proof of (C.3) now follows from the above estimate and (5.37p). In a number of instances in the paper we need to bound the gradient of the function \(f\) which solves (C.2). In these cases the new forcing term is not merely the gradient of the function \(q\): there is also a factor containing the gradient of \(f\) itself, but without the necessary power of \(J_{g}\) in front of it.
That is, we need to consider equations like \[\big(J_{g}(\partial_{t}+V\partial_{2})+\alpha\Sigma g^{-\frac{1}{2}}J_{g}h_{,2}\,\partial_{2}-\alpha\Sigma\partial_{1}\big)f=mf+q\,,\] (C.11) where as before the function \(q\) is bounded on \(\mathcal{P}\), and we also assume that the function \(m\) is bounded on \(\mathcal{P}\). As before, in \((x,\mathsf{s})\) variables the above equation transforms to \[\big(J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})+\alpha\Sigma g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}hJ_{g}\widetilde{\mathsf{D}}_{2}-\alpha\Sigma\partial_{1}\big)\widetilde{f}=\widetilde{m}\widetilde{f}+\widetilde{q}\,.\] (C.12) As usual, we drop the tilde from the unknowns, so that \((\widetilde{f},\widetilde{m},\widetilde{q})\) become \((f,m,q)\). By a slight modification of the proof of Proposition C.1, we obtain: **Corollary C.3**.: _Assume that \(f\) is a smooth solution of (C.12), and that \(q\) and \(m\) are bounded. Then, assuming the bootstrap inequalities (5.37) hold, and that \(\varepsilon\) is sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C_{data}}\), we have_ \[\big\|f\big\|_{L^{\infty}_{x,\mathsf{s}}}\leq(4e^{18})^{\beta}\big\|f(\cdot,0)\big\|_{L^{\infty}_{x}}+\varepsilon\tfrac{20}{\alpha\beta}(4e^{18})^{\beta}\big\|q\big\|_{L^{\infty}_{x,\mathsf{s}}}\,,\] (C.13) _where the parameter \(\beta\) is given cf. (C.15) as \(\beta=\max\{\tfrac{20}{\alpha}\varepsilon\|m^{+}\|_{L^{\infty}_{x,\mathsf{s}}},1\}\), and \(m^{+}=\max\{m,0\}\)._ Proof of Corollary C.3.: The main difference is that in the energy formed on the left side of (C.5), we need to replace \(\Sigma^{-1}\) by \(\Sigma^{-\beta}\) for a \(\beta>0\) that is to be chosen in terms of \(\|m^{+}\|_{L^{\infty}_{x,\mathsf{s}}}\). With this change, (C.5) becomes \[\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-\beta}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}-\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-\beta}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}=\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{1}{\Sigma^{\beta p}}f^{p}\mathsf{G}+\int\overline{\mathsf{Q}}_{2}\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{\beta p-1}}f^{p}\big|_{\mathsf{s}}+\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{p}{\Sigma^{\beta p}}f^{p-1}q\,,\] (C.14) where, in analogy with (C.6), \[\mathsf{G}=pm+\Sigma^{\beta p}\Big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}+(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Big)\tfrac{J_{g}}{\Sigma^{\beta p}}-\Sigma^{\beta p}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\Big(\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{\beta p-1}}\Big)-\Sigma^{\beta p}\partial_{1}\Big(\tfrac{\alpha}{\Sigma^{\beta p-1}}\Big)\] \[\phantom{\mathsf{G}}=pm+\big(\tfrac{1}{2}+\beta p\tfrac{\alpha}{2}\big)J_{g}\hat{\mathsf{W}}_{\mathcal{N}}+J_{g}\big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}\big)+\big(\tfrac{1}{2}+\beta p\tfrac{\alpha}{2}\big)J_{g}\hat{\mathsf{Z}}_{\mathcal{N}}-\alpha\hat{\mathsf{Q}}_{2}\Sigma g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h+\alpha\Sigma\widetilde{\mathsf{D}}_{2}\big(g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h\big)+p\alpha\beta J_{g}\hat{\mathsf{A}}_{\mathcal{T}}\,.\] The only genuinely new term is \(pm\). Choosing \[\beta=\max\big\{\tfrac{20}{\alpha}\varepsilon\|m^{+}\|_{L^{\infty}_{x,\mathsf{s}}},1\big\}\,,\] (C.15) we have \(pm\leq p\|m^{+}\|_{L^{\infty}_{x,\mathsf{s}}}\leq\tfrac{\alpha\beta p}{20\varepsilon}\), and so the argument which led to (C.7) now gives, for \(\varepsilon\) sufficiently small, \[\mathsf{G}\leq-p\tfrac{2\alpha\beta}{5\varepsilon}+\tfrac{17\beta p+2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}\,.\] (C.16) From (C.8), (C.14), and (C.16), we deduce \[\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-\beta}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}+\tfrac{\alpha\beta p}{5\varepsilon}\int_{0}^{\mathsf{s}}\big\|\Sigma^{-\beta}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}\] \[\leq 2\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-\beta}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}+\tfrac{18\beta p+2\cdot 250^{2}}{\varepsilon}\int_{0}^{\mathsf{s}}\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-\beta}f(\cdot,\mathsf{s}^{\prime})\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}+2\big(\tfrac{5\varepsilon}{\alpha\beta}\big)^{p-1}\int_{0}^{\mathsf{s}}\!\big\|\Sigma^{-\beta}q\big\|_{L^{p}_{x}}^{p}\mathrm{d}\mathsf{s}^{\prime}\,.\] (C.17) We then use Gronwall's inequality, take \(p^{th}\) roots and pass \(p\to\infty\) to deduce \[\big\|\Sigma^{-\beta}f(\cdot,\mathsf{s})\big\|_{L^{\infty}_{x}}\leq e^{18\beta}\big\|\Sigma^{-\beta}f(\cdot,0)\big\|_{L^{\infty}_{x}}+\tfrac{5\varepsilon}{\alpha\beta}e^{18\beta}\big\|\Sigma^{-\beta}q\big\|_{L^{\infty}_{x,\mathsf{s}}}\,.\] By further appealing to (5.37p), the proof is completed. ### The flattening map from Section 13 In the flattened geometry utilized in Section 13, the statements of Proposition C.1, Remark C.2, and Corollary C.3 remain unchanged. The changes to the proofs of these statements are entirely due to the fact that the \(-\alpha\Sigma\partial_{1}\widetilde{f}\) term in (C.2) and (C.12) needs to be rewritten as \(-\tfrac{\alpha}{\varepsilon}\Sigma\widetilde{\mathsf{D}}_{1}\widetilde{f}\), and instead of using that \(\partial_{1}^{\star}=-\partial_{1}\), we need to use the adjoint formula \(\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}^{\star}=-\tfrac{1}{\varepsilon}\widetilde{\mathsf{D}}_{1}+\mathring{\mathsf{Q}}_{1}-\overline{\mathsf{Q}}_{1}\delta_{\mathsf{s}}\) (cf. (13.25b)). Let us discuss in detail the changes this causes to the proof of Proposition C.1. Identities (C.5)-(C.6) become \[\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}-\big\|(\mathsf{Q}J_{g})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,0)\big\|_{L^{p}_{x}}^{p}+\int\overline{\mathsf{Q}}_{1}\tfrac{\alpha}{\Sigma^{p-1}}f^{p}\big|_{\mathsf{s}}\] \[\qquad=\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{1}{\Sigma^{p}}f^{p}\mathsf{G}+\int\overline{\mathsf{Q}}_{2}\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}f^{p}\big|_{\mathsf{s}}+\int_{0}^{\mathsf{s}}\!\!\!\int\tfrac{p}{\Sigma^{p}}f^{p-1}q,\] with \[\mathsf{G}:=\Sigma^{p}\Big(\mathring{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\mathring{\mathsf{Q}}_{2}+(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Big)\tfrac{J_{g}}{\Sigma^{p}}-\Sigma^{p}\big(\mathring{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2}\big)\Big(\tfrac{\alpha g^{-\frac{1}{2}}J_{g}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\Big)-\tfrac{1}{\varepsilon}\Sigma^{p}\widetilde{\mathsf{D}}_{1}\Big(\tfrac{\alpha}{\Sigma^{p-1}}\Big)+\alpha\mathring{\mathsf{Q}}_{1}\Sigma\,.\] The new term arising due to \(\overline{\mathsf{Q}}_{1}\) is a helpful term, because (13.38c) gives \(\overline{\mathsf{Q}}_{1}\geq 0\); we however ignore this helpful term. The only new contribution to \(\mathsf{G}\) arising from the \(\mathring{\mathsf{Q}}_{1}\) term may be bounded from above using (13.38d) and (13.37a) by \(5\alpha\kappa_{0}\varepsilon^{-1}\); a bound which is \(p\)-independent.
As such, (C.7) is replaced by \[\mathsf{G}\leq-p\tfrac{9\alpha}{20\varepsilon}+\tfrac{5\alpha\kappa_{0}}{\varepsilon}+\tfrac{17p+2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}+\mathring{C}p\leq-p\tfrac{2\alpha}{5\varepsilon}+\tfrac{17p+2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}\,,\] (C.18) whenever \(p\geq 200\alpha\kappa_{0}\), and \(\varepsilon\) is taken to be sufficiently small in terms of \(\alpha,\kappa_{0}\), and \(\mathsf{C}_{\mathsf{data}}\). Since we pass \(p\to\infty\), this restriction on a lower bound for \(p\) that depends on \(\alpha\) and \(\kappa_{0}\) is irrelevant and may be assumed without loss of generality. Since none of the subsequent bounds are altered, we proceed as in the proof of Proposition C.1, and upon passing \(p\to\infty\) deduce that the bound (C.3) remains unchanged. The changes to the proof of Corollary C.3 are identical, and we omit these details. ### The flattening map from Section 14 In the flattened geometry utilized in Section 14, the statements of Proposition C.1, Remark C.2, and Corollary C.3 remain unchanged. The changes to the proofs are due to the fact that every \(\mathsf{s}\)-time-slice of the spacetime \(\mathcal{H}^{\delta}\) has a right boundary at \(x_{1}=\theta^{\delta}(x_{2},\mathsf{s})\). These modifications were already discussed in the proof of Proposition 14.10, but for convenience we repeat here the main parts of the argument. For simplicity, we focus on the changes required by the proof of Proposition C.1. Moreover, since the weight function \(\mathcal{J}\) solves a PDE (see (14.111a)) which is a \(\delta\)-modification of the \(\lambda_{1}\) transport operator, we focus on the (potentially) more difficult case when the function \(f\) solves a forced \(\lambda_{1}\) transport, namely \[\big(J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})+2\alpha\Sigma g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}hJ_{g}\widetilde{\mathsf{D}}_{2}-2\alpha\Sigma\partial_{1}\big)f=q\,.\] (C.19) Here we recall that \(\widetilde{\mathsf{D}}\) is defined by (14.109a). The above PDE is considered instead of (C.2). When \(2\alpha\) is replaced by \(\alpha\) in (C.19) the same bounds will hold. Let \(r>0\) be arbitrary. We may multiply (C.19) by \(p\Sigma^{-p}\mathcal{J}^{r}f^{p-1}\) and deduce that \[\tfrac{J_{g}\mathcal{J}^{r}}{\Sigma^{p}}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})(f^{p})+\tfrac{2\alpha g^{-\frac{1}{2}}J_{g}\mathcal{J}^{r}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\widetilde{\mathsf{D}}_{2}(f^{p})-\tfrac{2\alpha\mathcal{J}^{r}}{\Sigma^{p-1}}\partial_{1}(f^{p})=\tfrac{p\mathcal{J}^{r}}{\Sigma^{p}}f^{p-1}q\,.\] In analogy to (C.5)-(C.6), by integrating the above identity over the spacetime \(\{(x,\mathsf{s}^{\prime})\colon\mathsf{s}^{\prime}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}],x_{2}\in\mathbb{T},x_{1}\leq\theta^{\delta}(x_{2},\mathsf{s}^{\prime})\}\subset\mathcal{H}^{\delta}\), using the adjoint formulas in (14.129) and the notation for norms from (14.122), it follows that \[\big\|(\mathsf{Q}J_{g}\mathcal{J}^{r})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}^{p}-\big\|(\mathsf{Q}J_{g}\mathcal{J}^{r})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{p}_{x}}^{p}\] \[\qquad=\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\!\int\tfrac{\mathcal{J}^{r}}{\Sigma^{p}}f^{p}\mathsf{G}+\int\overline{\mathsf{Q}}_{2}\tfrac{2\alpha g^{-\frac{1}{2}}J_{g}\mathcal{J}^{r}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}f^{p}\big|_{\mathsf{s}}+\int_{\mathsf{s}_{\mathsf{in}}}^{\mathsf{s}}\!\!\!\int\tfrac{p\mathcal{J}^{r}}{\Sigma^{p}}f^{p-1}q,\] (C.20a) with \[\mathsf{G}:=\tfrac{\Sigma^{p}}{\mathcal{J}^{r}}\Big(\hat{\mathsf{Q}}_{\mathsf{s}}+\widetilde{\mathsf{D}}_{2}V-V\hat{\mathsf{Q}}_{2}+(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\Big)\tfrac{J_{g}\mathcal{J}^{r}}{\Sigma^{p}}-\tfrac{\Sigma^{p}}{\mathcal{J}^{r}}(\hat{\mathsf{Q}}_{2}-\widetilde{\mathsf{D}}_{2})\Big(\tfrac{2\alpha g^{-\frac{1}{2}}J_{g}\mathcal{J}^{r}\widetilde{\mathsf{D}}_{2}h}{\Sigma^{p-1}}\Big)-\tfrac{\Sigma^{p}}{\mathcal{J}^{r}}\partial_{1}\Big(\tfrac{2\alpha\mathcal{J}^{r}}{\Sigma^{p-1}}\Big)\,.\] (C.20b) In deriving (C.20) we have crucially used the following fact. By (14.119) we have that \(\mathcal{J}(\theta^{\delta}(x_{2},\mathsf{s}^{\prime}),x_{2},\mathsf{s}^{\prime})=0\), and hence, since \(r>0\), the boundary terms at \(x_{1}=\theta^{\delta}(x_{2},\mathsf{s}^{\prime})\) that should arise when integrating by parts via (14.129) are in fact all equal to \(0\). We note that (C.20a) is identical to (C.5), while the term \(\mathsf{G}\) defined in (C.20b) is nearly identical to the term \(\mathsf{G}\) defined in (C.6); the only difference arises from the differential operators landing on \(\mathcal{J}^{r}\). That is, \[\mathsf{G}_{\text{(C.20b)}}=\mathsf{G}_{\text{(C.6)}}+\tfrac{r}{\mathcal{J}}\Big(J_{g}(\mathsf{Q}\partial_{\mathsf{s}}+V\partial_{2})\mathcal{J}+2\alpha\Sigma J_{g}g^{-\frac{1}{2}}\widetilde{\mathsf{D}}_{2}h\widetilde{\mathsf{D}}_{2}\mathcal{J}-2\alpha\Sigma\partial_{1}\mathcal{J}\Big)\,.\] The key observation is that by combining the PDE solved by \(\mathcal{J}\) in \(\mathcal{H}_{+}^{\delta}\) (recall (14.111a)), with the derivative bounds satisfied by \(\mathcal{J}\) in \(\mathcal{H}^{\delta}\) (recall (14.143a)), the pointwise bounds for \(\mathcal{J}\) in \(\mathcal{H}_{-}^{\delta}\) from (14.144) and (14.142), and the previously obtained estimate (C.7) (which still holds because we work under the same pointwise bootstraps), we obtain \[\mathsf{G}_{\text{(C.20b)}}\leq\mathsf{G}_{\text{(C.6)}}+\tfrac{r}{\mathcal{J}}\Big(-8J_{g}\tfrac{11(1+\alpha)}{25\varepsilon}\mathbf{1}_{\mathcal{H}_{+}^{\delta}}+\big(-J_{g}\tfrac{10(1+\alpha)}{25\varepsilon}+\tfrac{2\alpha\kappa_{0}}{10^{3}\varepsilon}\big)\mathbf{1}_{\mathcal{H}_{-}^{\delta}}\Big)\] \[\leq\Big(-p\tfrac{9\alpha}{20\varepsilon}+\tfrac{17p+2\cdot 250^{2}}{\varepsilon}\mathsf{Q}J_{g}+\mathring{C}p\Big)-\tfrac{2r\delta J_{g}}{5\mathcal{J}\varepsilon}+\tfrac{3\alpha\kappa_{0}}{10^{3}\varepsilon}\mathsf{Q}J_{g}\] \[\leq-p\tfrac{2\alpha}{5\varepsilon}+\tfrac{17p+2\cdot 250^{2}+r\alpha\kappa_{0}}{\varepsilon}\mathsf{Q}J_{g}\,.\] We see that the modification to the upper bound for \(\mathsf{G}\) is nearly identical to the one obtained in the previous Section in (C.18). In fact, as \(r\to 0^{+}\) (a limit which we will take in a moment), the above obtained upper bound is identical to the one from (C.18), and to the one from the proof of Proposition C.1. The bounds for the last two terms on the right side of (C.20a) remain identical to those obtained in Proposition C.1.
Proceeding as in the proof of Proposition C.1, we arrive at the bound corresponding to (C.10), namely \[\sup_{\mathsf{s}\in[\mathsf{s}_{\mathsf{in}},\mathsf{s}_{\mathsf{fin}}]}\big\|(\mathsf{Q}J_{g}\mathcal{J}^{r})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s})\big\|_{L^{p}_{x}}\leq 2^{\frac{1}{p}}e^{18+\frac{2\cdot 250^{2}+r\alpha\kappa_{0}}{p}}\big\|(\mathsf{Q}J_{g}\mathcal{J}^{r})^{\frac{1}{p}}\Sigma^{-1}f(\cdot,\mathsf{s}_{\mathsf{in}})\big\|_{L^{p}_{x}}+2^{\frac{1}{p}}e^{18+\frac{2\cdot 250^{2}+r\alpha\kappa_{0}}{p}}\big(\tfrac{5\varepsilon}{\alpha}\big)^{\frac{p-1}{p}}\big\|\mathcal{J}^{\frac{r}{p}}\Sigma^{-1}q\big\|_{L^{p}_{x,\mathsf{s}}}\,.\] Upon passing \(r\to 0^{+}\) (which we can do since \(0\leq\mathcal{J}\lesssim 1\) uniformly in \(\mathcal{H}^{\delta}\)) and letting \(p\to\infty\), we deduce that the bound (C.3) remains unchanged. The changes to the proof of Corollary C.3 are identical, and we omit these details. ## Acknowledgments The work of S.S. was in part supported by the NSF grant DMS-2007606 and the Collaborative NSF grant DMS-2307680. The work of V.V. was in part supported by the Collaborative NSF grant DMS-2307681.
2301.06208
On thermodynamically consistent quasiparticle model at finite chemical potential
We explore the quasiparticle model at finite chemical potential related to Ru-Keng Su's distinguished contributions to the topic. Besides, we discuss recent developments in the model, and in particular, one argues that the effective mass of the quasiparticle might attain a specific form as a function of momentum, in addition to its dependence on temperature and chemical potential. Unlike the approaches based on the properties of underlying symmetry or renormalization group, the momentum dependence emerges as a special solution to an integro-differential equation resulting from the underlying thermodynamic consistency. Moreover, this special solution to the problem is shown to be more general than previously explored in the literature. Instead of fitting to the lattice QCD data at vanishing chemical potential, in this work, we adopt a ``bottom-up'' approach by assuming some analytic ansatzes that are manifestly thermodynamically consistent. The remaining physical quantities are subsequently derived, and possible implications are also addressed.
Wei-Liang Qian, Hong-Hao Ma, Shao-Yu Yin, Ping Wang
2023-01-15T23:14:43Z
http://arxiv.org/abs/2301.06208v1
# On thermodynamically consistent quasiparticle model at finite chemical potential ###### Abstract We explore the quasiparticle model at finite chemical potential related to Ru-Keng Su's distinguished contributions to the topic. Besides, we discuss recent developments in the model, and in particular, one argues that the effective mass of the quasiparticle might attain a specific form as a function of momentum, in addition to its dependence on temperature and chemical potential. Unlike the approaches based on the properties of underlying symmetry or renormalization group, the momentum dependence emerges as a special solution to an integro-differential equation resulting from the underlying thermodynamic consistency. Moreover, this special solution to the problem is shown to be more general than previously explored in the literature. Instead of fitting to the lattice QCD data at vanishing chemical potential, in this work, we adopt a "bottom-up" approach by assuming some analytic ansatzes that are manifestly thermodynamically consistent. The remaining physical quantities are subsequently derived, and possible implications are also addressed. ## I Introduction The transition between the quark-gluon plasma (QGP) and hadronic phases constitutes one of the most prominent problems in high-energy nuclear physics. In the vicinity of such a region, the underlying dynamics are essentially non-perturbative, through which the system undergoes a dramatic change in the number of degrees of freedom. Moreover, even in the QGP phase, the system's thermodynamic properties deviate significantly from those of a non-interacting ideal gas of quarks and gluons. For instance, the lattice quantum chromodynamics (QCD) calculations showed [1] that the system's pressure and energy density undershoot the Stefan-Boltzmann limit by about 15-20% even at temperatures \(T\gtrsim 3T_{c}\). Also, the speed of sound extracted from lattice QCD is found to be smaller than that of a massless ideal gas. In particular, as the system approaches the transition region, it is observed that the speed of sound varies non-monotonically [2]. The above properties are crucial for adequately establishing the equation of state (EoS), which plays a central role in providing an appropriate description of the hydrodynamic evolution of the hot and dense system that emerged in the relativistic heavy ion collisions [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. In the literature, the density-dependent quark mass was suggested by Fowler, Raha, and Weiner [16] to address the transition between nuclear and quark matter, and the thermal partonic quasiparticle was initially proposed by Peshier, Kampfer, Pavlenko, and Soff [17] to accommodate the numerical experiments from lattice QCD. Primarily inspired by its counterparts in other fields of physics, the notion of the quasiparticle is a phenomenological approach aimed at capturing the bulk thermodynamic properties of QGP. The model can be viewed as an effective and simplified imitation of the essence of many existing theoretical efforts, namely, lattice QCD [1], dimensional reduction [18], hard thermal loop [19], Polyakov-loop model [20], as well as other hadronic-degree based approaches [21]. It has also been speculated that the success of the notion of quasiparticle degree of freedom will further give rise to novel effective theories from a more fundamental perspective while properly incorporating the nonperturbative aspects of QCD.
The quasiparticle model interprets the system as a composition of non-interacting quanta which carry the same quantum numbers of quarks and gluons. The medium-dependent quasiparticle mass implements the strong interactions between the elementary degrees of freedom. For the description of gluon plasma, the quasiparticle mass was initially assumed to be merely temperature dependent [17]. As the concept flourished in the literature, a crucial feature of the model was elaborated in a seminal paper by Gorenstein and Yang [22] with respect to its thermodynamic consistency. The authors solved the issue elegantly via the requirement of an exact cancelation between the additional contributions from the temperature-dependent quasiparticle mass and those from the bag constant. Subsequently, various relevant aspects of the topic were discussed and further developed by, among others [23; 24; 25; 26; 27], Ru-Keng Su in collaboration with his students and collaborators [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. In [28], the role of an additional contribution to the thermopotential and its consequential effect on the strange quark matter were explored. A series of studies regarding the quark mass density- and temperature-dependent (QMDTD) model was performed in [29; 30; 31; 32]. The temperature dependence of the stable radius of a strangelet was discussed in [29]. The temperature dependence of the bag constant \(B\) was explored and shown to cure the divergence that occurred at vanishing baryon density in the phase diagram for the bulk strange quark matter of the original QMDTD model [30]. A systematic analysis regarding the stability of strangelets was performed in [31] in the framework of the QMDTD model. It was observed that stable strangelets are more likely to be encountered in the region with a sizeable negative electric charge and significant strangeness. The analysis was then extended to the dibaryon systems [32] regarding different decay channels, and the results were found in good agreement with those obtained by the chiral SU(3) quark model. The QMDTD setup was then applied to the context of the Friedberg-Lee soliton bag [45; 46; 47] nonlinearly coupled to the sigma [34] as well as omega [35] mesons. The model was further extended to investigate the properties of deconfinement [36; 38] and nuclear matter [37]. As an alternative approach to address the thermodynamic consistency, an additional fictitious degree of freedom was introduced [41; 42] to elaborate a generalized version of the first law of thermodynamics. From the field theory perspective, the mass of a particle can be defined either by the pole of the effective propagator or via the Debye screening mass extracted at small momentum, provided the question of gauge invariance is adequately dealt with. In particular, the calculations within the hard thermal loop approximation show that the gluon screening mass extracted from the above pictures is consistent [19; 48]. The derived quasiparticle mass, in turn, is a function of temperature and chemical potential. As a result, the above dependence calls for a generalization scheme for thermodynamic consistency. Further developments by Peshier _et al._ give rise to a flow equation [23; 24; 25; 49]. The latter is a partial differential equation, and its boundary condition is chosen at vanishing baryon density, adapted to the lattice QCD data.
It was shown that the thermodynamic properties obtained from such a framework agree well with the lattice calculations at finite baryon chemical potential. Following [22], one takes the grand partition function of the system \(\mathcal{Z}\) as the starting point, which reads \[\mathcal{Z}\left(V,T,\mu\right)=\mathrm{Tr}\left[e^{-\beta\left(H-\mu N\right) }\right], \tag{1}\] and \(\beta\) is the reciprocal of the temperature \(T\), \(\mu\) represents the chemical potential, \(V\) is the volume of the system, \(H\) and \(N\) are the Hamiltonian and conserved number operators. In order to derive the remaining thermodynamic quantities (such as pressure, energy density, and conserved number density) in a consistent fashion, the two following identities need to be valid, \[\frac{\partial\mathcal{Z}\left(V,T,\mu\right)}{\partial\beta}=-\mathrm{Tr} \left[\left(H-\mu N\right)e^{-\beta\left(H-\mu N\right)}\right], \tag{2}\] and \[\frac{\partial\mathcal{Z}\left(V,T,\mu\right)}{\partial\mu}=\beta\mathrm{Tr} \left[Ne^{-\beta\left(H-\mu N\right)}\right]. \tag{3}\] The conditions Eqs. (2) and (3) are manifestly satisfied when the Hamiltonian is not medium dependent. As an example, for the quasiparticle model proposed in [17], \[H=\sum_{\mathbf{k}}\omega(\mathbf{k},T,\mu)a_{\mathbf{k}}^{\dagger}a_{ \mathbf{k}}+VB, \tag{4}\] where \(B\), the _bag constant_, is attributed to the vacuum energy, mostly viewed as a constant, and \[\omega(\mathbf{k},T,\mu)=\omega(\mathbf{k},m)=\sqrt{k^{2}+m^{2}}, \tag{5}\] where \(k=|\mathbf{k}|\) and \(m=m(T)\) is an explicit function of the temperature. The latter adds an additional contribution to the partial derivative in Eq. (2), associated with \(H\). The recipe by Gorenstein and Yang is derived from the proposal that \(B\) should also be medium dependent, namely, \(B=B(T)\), whose entire purpose is to identically cancel out the undesirable contribution coming from the temperature-dependent quasiparticle mass. To be explicit, it is not difficult to show [22; 23; 44] that the above requirement dictates the relation \[\frac{dB}{dT}=\left.\frac{\partial p_{\mathrm{id}}\left(T,\mu,m\right)}{ \partial m}\right|_{T,\mu}\frac{dm}{dT}, \tag{6}\] where the pressure of ideal gas is an intensive property given by the standard statistical mechanics, \[p_{\rm id}=\frac{T}{V}\ln\mathcal{Z}\left(V,T,\mu\right)\Bigr{|}_{B=0}\,, \tag{7}\] whose specific form is given below in Eq. (13). Since \(m=m(T)\) and \(B=B(T)\), we have \(B=B(m)\). In other words, Eq. (6) implies \[\frac{dB}{dm}=\left.\frac{\partial p_{\rm id}(T,\mu,m)}{\partial m}\right|_{T,\mu}, \tag{8}\] where the bag constant \(B\) is understood to be a function of the particle mass \(m\) only. Similarly, if the quasiparticle mass is chemical potential dependent, it seems that the above scheme can be readily applied. Specifically, one replaces the temperature derivative in Eq. (6) with the chemical-potential derivative, while Eq. (8) remains unchanged. Moreover, if the mass function depends on both the temperature and chemical potential, namely, \(m=m(T,\mu)\), Eq. (8) seemly serves the purpose. However, though it might not be apparent at first glimpse, one can argue [44] that Gorenstein and Yang's scheme cannot be applied straightforwardly to such a case. This can be understood as follows. To be precise, one needs to solve for \(B=B(m)\) for an arbitrarily given form \(m=m(T,\mu)\), using Eq. (8). Observing the l.h.s. of Eq. 
(8), one concludes that the dependence of temperature and chemical potential can be entirely "packed" into the quasiparticle mass \(m\). On the other hand, since the form \(m=m(T,\mu)\) is arbitrary, one can always redefine this function so that the r.h.s. of Eq. (8) cannot be written as a function of \(m\). We note that the above considerations do not necessarily invalidate Eq. (8). Instead, it indicates the existence of some additional constraint when finite chemical potential is involved. In [23], Peshier _et al._ derived a flow equation giving a further constraint for the mass function \(m=m(T,\mu)\). In [44], some of us derived an integro-differential equation, which is shown to fall back to the flow equation under certain circumstances. Moreover, it was demonstrated that there are also other possibilities, and in particular, the quasiparticle mass can be a function of the momentum. In the present study, we proceed further to explore the topic by adopting a "bottom-up" approach. Specifically, instead of numerically adjusting the model parameters to the lattice QCD data, we choose a straightforward but analytical form for the mass function at vanishing chemical potential. By adopting the analytic function, one can scrutinize the different branches of the mass function in the temperature-chemical potential parameter plane. The remainder of the present paper is organized as follows. In the next section, we review the relevant elements regarding the thermodynamic consistency in the quasiparticle model. The resulting integro-differential equation is presented and discussed. Sec. III focuses on the novel type of solution. In particular, we explore a mathematically simple form of the mass function at vanishing chemical potential. It is shown that such a choice will not entirely determine the mass function in the temperature-chemical potential parameter plane. Different possibilities are then investigated numerically. The last section is devoted to further discussions and concluding remarks. ## II The generalized condition for thermodynamical consistency This section discusses the formal constraints for the thermodynamic consistency in the quasiparticle model. For the present study, the term _consistency_ implies the following three essential aspects. First, all the thermodynamic quantities can be derived using the standard formulae once the grand partition function \(\mathcal{Z}\) is given. Second, these thermodynamic quantities possess an interpretation in accordance with the ensemble average in statistics. Last but not least, most thermodynamic identities, for instance, those based on the first law of thermodynamics (c.f. Eq. (19)) and extensive properties (c.f. Eq. (17)), remain unchanged. To our knowledge, the scheme proposed by Gorenstein and Yang is the only one that meets all three above requirements. As discussed in [22], once Eqs. (2) and (3) are satisfied, and the energy density and particle number density derived either from the ensemble average or from the partial derivative of the grand partition function possess identical forms. These lead to the following forms of the energy density \[\varepsilon=\frac{\langle E\rangle}{V}=-\frac{1}{V}\frac{\partial\ln\mathcal{Z }}{\partial\beta}=\epsilon_{\rm id}+B, \tag{9}\] with \[\epsilon_{\rm id}=\frac{g}{2\pi^{2}}\int_{0}^{\infty}\frac{k^{2}dk\omega(k,T, \mu)}{\exp[(\omega(k,T,\mu)-\mu)/T]\mp 1}+\mbox{c.t.}\, \tag{10}\] where \(g\) indicates possible degeneracy, "\(\mp\)" corresponds to boson and fermion, and the counter term "c.t." 
indicates contributions from anti-particles obtained by the substitution \(\mu\rightarrow-\mu\) in the foregoing term. We have also considered the isotropic case \(m({\bf k},T,\mu)=m(k,T,\mu)\). To derive the above equation, we have assumed the validity of Eq. (2), namely, the contribution from the temperature dependence of quasiparticle mass has precisely been canceled out with the temperature dependence of \(B\). By writing it out explicitly, one finds \[\frac{\partial B}{\partial T}=-\frac{g}{2\pi^{2}}\int_{0}^{\infty}\frac{k^{2} dk}{\omega(k,T,\mu)}\frac{1}{\exp[(\omega(k,T,\mu)-\mu)/T]\mp 1}m\frac{\partial m }{\partial T}+\mathrm{c.t.} \tag{11}\] In statistical mechanics, the pressure is interpreted as a "general force", which reads \[p=\frac{1}{\beta}\frac{\partial\ln{\cal Z}}{\partial V}=\frac{1}{\beta}\frac{ \ln{\cal Z}}{V}=p_{\mathrm{id}}-B, \tag{12}\] where \[p_{\mathrm{id}} = \frac{\mp g}{2\pi^{2}}\int_{0}^{\infty}k^{2}dk\ln\left\{1\mp\exp \left[\left(\mu-\omega(k,T,\mu)\right)/T\right]\right\}+\mathrm{c.t.} \tag{13}\] \[= \frac{g}{12\pi^{2}}\int_{0}^{\infty}\frac{k^{3}dk}{\exp[\left( \omega(k,T,\mu)-\mu)/T\right]\mp 1}\left.\frac{\partial\omega(k,T,\mu)}{ \partial k}\right|_{T,\mu}+\mathrm{c.t.}\] Also, as an ensemble average, the number density is found to be \[n=\frac{\langle N\rangle}{V}=-\frac{1}{V}\frac{\partial\ln{\cal Z}}{\partial \alpha}=n_{\mathrm{id}}, \tag{14}\] where \[n_{\mathrm{id}}=\frac{g}{2\pi^{2}}\int_{0}^{\infty}\frac{k^{2}dk}{\exp[\left( \omega(k,T,\mu)-\mu)/T\right]\mp 1}-\mathrm{c.t.} \tag{15}\] Again, we have assumed the condition Eq. (3), which states that the contribution from the chemical-potential dependence of quasiparticle mass in the ideal gas term and that from the bag constant \(B\) cancel out each other. The above condition can be specified to give \[\frac{\partial B}{\partial\mu}=-\frac{g}{2\pi^{2}}\int_{0}^{\infty}\frac{k^{2 }dk}{\omega(k,T,\mu)}\frac{1}{\exp[\left(\omega(k,T,\mu)-\mu)/T\right]\mp 1}m \frac{\partial m}{\partial\mu}+\mathrm{c.t.} \tag{16}\] The well-known thermodynamic identity \[\epsilon=T\frac{\partial p}{\partial T}-p+\mu n, \tag{17}\] essentially comes from the first law of thermodynamics and its extensive properties. As a matter of fact, following the procedure elaborated in the standard textbook [50], it is not difficult to verify that the total derivative of \(q=\ln{\cal Z}\) gives \[dq=-\langle N\rangle d\alpha-\langle E\rangle d\beta-\beta pdV. \tag{18}\] By comparing the above expression with the first law of thermodynamics, namely, \[d(E)=TdS-pdV+\mu d\langle N\rangle, \tag{19}\] we have the mapping \[\beta = \frac{1}{k_{B}T},\] \[\alpha = -\frac{\mu}{k_{B}T},\] \[q + \alpha N+\beta E=\frac{S}{k_{B}}. \tag{20}\] The validity of Eq. (17) is readily verified. Now, we proceed to discuss the implications of the conditions Eqs. (11) and (16). By taking partial derivative of Eq. (11) in \(\mu\) and compare with the partial derivative of Eqs. (16) in \(T\), one arrives at the following integro-differential equation [44] \[\langle\!\langle m\frac{\partial m}{\partial T}\rangle\!\rangle_{-}=\langle\! 
\langle m\frac{\partial m}{\partial\mu}\rangle\!\rangle_{+}, \tag{21}\] where \[\langle\!\langle O\rangle\!\rangle_{-} \equiv\int_{0}^{\infty}k^{2}dk\left\{\frac{\exp[(\omega-\mu)/T]}{ (\exp[(\omega-\mu)/T]\mp 1)^{2}\omega T}-\text{c.t.}\right\}O(k), \tag{22}\] \[\langle\!\langle O\rangle\!\rangle_{+} \equiv\int_{0}^{\infty}k^{2}dk\left\{\frac{\exp[(\omega-\mu)/T]( \omega-\mu)}{(\exp[(\omega-\mu)/T]\mp 1)^{2}\omega T^{2}}+\text{c.t.}\right\}O(k).\] The solution of Eq.(21), \(m=m(k,T,\mu)\), is in general a function also of the momentum \(k\). In turn, the bag constant \(B\) is obtained by integrating Eqs. (11) and (16) on the parameter plane. It can be viewed as a functional of \(m(k,T,\mu)\) besides being a function of \(T\) and \(\mu\). It is noted that the above discussions can be straightforwardly generalized to the case where the system is not isotropic, where \(m=m(\mathbf{k},T,\mu)\). As pointed out in [44], if one simplifies and considers the momentum-independent case, namely, \(m(\mathbf{k},T,\mu)=m(T,\mu)\), one readily falls back to the flow equation derived in Ref. [23]. In this case, \(B\) also simplifies to a function of \(T\) and \(\mu\). We are, however, more interested in exploring the momentum-dependent case, which will be elaborated further in the following section. ## III Bottom up toy-model approaches An apparent momentum-dependent solution to Eq. (21) can be obtained by "factoring out" the momentum integration \(\int k^{2}dk\) and assuming the integrand vanishes. In other words, \[\left\{\frac{\exp[(\omega-\mu)/T]T}{(\exp[(\omega-\mu)/T]\mp 1)^{2}}-\text{c.t. }\right\}\frac{\partial m}{\partial T}=\left\{\frac{\exp[(\omega-\mu)/T]( \omega-\mu)}{(\exp[(\omega-\mu)/T]\mp 1)^{2}}+\text{c.t.}\right\}\frac{\partial m}{ \partial\mu}. \tag{23}\] The above equation can be solved by using the method of characteristics [51]. To be specific, for a given \(k\), the solution is the surface tangent to the vector field \[\left(a(T,\mu,m),b(T,\mu,m),0\right),\] where \[a(T,\mu,m) =\frac{\exp[(\omega-\mu)/T]T}{(\exp[(\omega-\mu)/T]\mp 1)^{2}}- \text{c.t.}, \tag{24}\] \[b(T,\mu,m) =-\frac{\exp[(\omega-\mu)/T](\omega-\mu)}{(\exp[(\omega-\mu)/T] \mp 1)^{2}}-\text{c.t.}.\] Its formal solution is the characteristic curves obtained by the integration \[\frac{dT}{d\lambda} =a(T,\mu,m), \tag{25}\] \[\frac{d\mu}{d\lambda} =b(T,\mu,m),\] where \(\lambda\) is an intermediate variable, for given \(k\), \(m\), and thus \(\omega\). An interesting scenario that gives rise to an analytic solution occurs when one ignores anti-particles' contributions. By taking \(\omega,T,\mu\) as the three independent variable, the method of characteristics gives [44] the following formal solution \[m=f\left(\frac{T\omega}{\omega-\mu},k\right), \tag{26}\] where an arbitrary function \(f\) furnishes the boundary condition at vanishing chemical potential, namely, \(f(T)\equiv f(T,0)=m(T,\mu=0,k=0)\), where we assume that the mass is independent of the momentum at \(\mu=0\). We note that the resultant mass function is a function of \(k,T\), and \(m\), and therefore the solution of the form Eq. (26) serves as a simple but non-trivial example. In [44], the freedom in \(f\) was employed to perform a numerical fit to the lattice QCD results for \(N_{f}=2+1\) flavor QCD system [52; 53; 54; 55; 56] at vanishing chemical potential. Then the relevant physical quantities, such as the trace anomaly, sound velocity, and particle number susceptibility, were evaluated and compared to the lattice data. 
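Before specializing the ansatz, it is instructive to verify Eq. (26) numerically. Once the antiparticle ("c.t.") contributions are dropped, the common factor \(e^{(\omega-\mu)/T}/(e^{(\omega-\mu)/T}\mp 1)^{2}\) cancels from both sides of Eq. (23), which thereby reduces to \(T\,\partial m/\partial T=(\omega-\mu)\,\partial m/\partial\mu\). The sketch below is our own illustration, not code from the paper; the tanh ansatz and all numerical values are arbitrary assumptions rather than fits to lattice data. It solves the implicit relation \(m=f(T\omega/(\omega-\mu))\) by bracketed root-finding and checks the reduced identity with finite differences.

```python
# Numerical check (illustrative) that m = f(T*omega/(omega - mu), k), Eq. (26),
# satisfies Eq. (23) without antiparticle terms, i.e. T*dm/dT = (omega-mu)*dm/dmu.
import numpy as np
from scipy.optimize import brentq

f = lambda u: 0.10 + 0.30 * np.tanh(u)          # GeV; arbitrary smooth test ansatz

def mass(k, T, mu):
    """Solve m = f(T*omega/(omega-mu)) with omega = sqrt(k^2 + m^2) and m > mu."""
    g = lambda m: m - f(T * np.hypot(k, m) / (np.hypot(k, m) - mu))
    return brentq(g, mu + 1e-9, 5.0)

k, T, mu, h = 1.0, 0.25, 0.05, 1e-5             # GeV; sample point and step size
m = mass(k, T, mu)
omega = np.hypot(k, m)
dm_dT = (mass(k, T + h, mu) - mass(k, T - h, mu)) / (2 * h)
dm_dmu = (mass(k, T, mu + h) - mass(k, T, mu - h)) / (2 * h)
print("T * dm/dT          =", T * dm_dT)
print("(omega-mu) * dm/dmu =", (omega - mu) * dm_dmu)  # should agree to O(h^2)
```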
Instead, we adopt a "bottom-up" approach for the present study. Specifically, we consider two cases where one assumes a simple ansatz for \(f\) posteriorly adapted to the lattice results and proceeds analytically to a large extent. _Case 1:_ Our first choice is a simple linear fit. Based on the lattice data [52] shown in Fig. 1, there are two regions where the mass of the quasiparticle is primarily a linear function in temperature. In other words, \[f|_{\mu=0}=c_{1}T+c_{2}, \tag{27}\] which gives \[f\left(\frac{T\omega}{\omega-\mu},k\right)=f\left(\frac{T\omega}{\omega-\mu}\right)=\frac{c_{1}T\omega}{\omega-\mu}+c_{2}. \tag{28}\] Despite its simple form, Eq. (28) might be plagued by the pole in its denominator. To avoid the pole at \(\omega=\mu\) for an arbitrary momentum \(k\) indicates the condition \[\omega>\mu, \tag{29}\] that is, \(\omega\geq m>\mu\), by considering the definition Eq. (5). Otherwise, if one requires \(\omega<\mu\), it is always possible to find a momentum \(k\) large enough to violate the condition. Substituting Eq. (28) into Eq. (26) gives \[\omega(m-c_{1}T-c_{2})=\mu(m-c_{2}), \tag{30}\] for which Eq. (29) dictates \[c_{1}>0, \tag{31}\] while given \(T>0\) and \(\mu>0\). By substituting Eq. (5) into Eq. (30) and squaring both sides, one finds a fourth-degree polynomial equation for \(m\). This equation possesses four roots, where complex roots always appear in pairs. The physically relevant solution must sit on the positive real axis. From this point on, we proceed numerically. One extracts the values \(c_{1}\) and \(c_{2}\) from the region shown in Fig. 1 satisfying \(c_{1}>0\), and finds \(c_{1}=0.44\) and \(c_{2}=0.13\). The fourth-degree polynomial contains a pair of complex roots, which are subsequently discarded. One of the remaining two real roots is extraneous, owing to the fact that we have squared both sides of Eq. (30). The resultant mass function \(m(k,T,\mu)\) and the bag constant \(B(T,\mu)\) are shown in Fig. 2.

Figure 1: The mass of \(u\) and \(d\) quarks at vanishing chemical potential derived from the lattice data [52]. It can be readily extracted for vanishing chemical potential using Gorenstein and Yang's scheme [22]. The curve is then fit to the analytic form Eq. (27) discussed in the text.

The left and middle plots give the mass as a function of \(T\) and \(\mu\) at given \(k=1\) GeV and that of \(k\) and \(\mu\) at given \(T=0.25\) GeV. The resulting bag constant is obtained by numerical integration of Eqs. (11) and (16). The dependence of the bag constant on the temperature \(T\) or chemical potential \(\mu\) is presented in the right plot of Fig. 2. The mass function and the bag constant are found to be moderate in \(T\) and \(\mu\). As \(k\to 0\), according to the middle plot of Fig. 2, the quasiparticle mass increases significantly. It is noted that the obtained bag constant \(B\) is manifestly path independent. For instance, one evaluates \(B(T,\mu)\) by using the two following integration paths on the \(T-\mu\) plane. The integration for \(B\) is carried out from \((T_{0}=0.25\) GeV, \(\mu_{0}=0)\) to \((T_{1}=0.45,\mu_{1}=0.3)\), where path 1 is defined by \((T_{0},\mu_{0})\rightarrow(T_{1},\mu_{0})\rightarrow(T_{1},\mu_{1})\), while path 2 is through \((T_{0},\mu_{0})\rightarrow(T_{0},\mu_{1})\rightarrow(T_{1},\mu_{1})\).
One finds \[\left[B(T_{1},\mu_{1})-B(T_{0},\mu_{0})\right]\big|_{\rm path~1}=-0.606049=\left[B(T_{1},\mu_{1})-B(T_{0},\mu_{0})\right]\big|_{\rm path~2}.\] We also note that based on the above discussions, the fit to the region \(c_{1}<0\), where the mass of the quasiparticle decreases with increasing temperature in Fig. 1, is doomed to fail. A numerical attempt reveals path-dependent values, which signals that those obtained by straightforward integration do not yield mathematically well-defined results. This is due to the undesirable pole at \(\omega=\mu\) in the denominator of the first term on the r.h.s. of Eq. (28). In order to handle the region where \(c_{1}<0\), we proceed to consider the second case. _Case 2:_ The second choice involves a linear function in the reciprocal of the argument of Eq. (26). To be specific, we consider the ansatz, \[\left.f\right|_{\mu=0}=\frac{1}{c_{3}+c_{4}T}, \tag{32}\] which gives \[f\left(\frac{T\omega}{\omega-\mu},k\right)=\frac{\omega-\mu}{c_{3}(\omega-\mu)+c_{4}T\omega}. \tag{33}\] In this case, to avoid the pole in the denominator, one considers the following constraint \[\omega(c_{4}T+c_{3})>\mu c_{3}. \tag{34}\] Substituting Eq. (33) into Eq. (26) gives \[\omega(c_{4}Tm+mc_{3}-1)=\mu(mc_{3}-1). \tag{35}\] Given \(T>0\) and \(m>0\), Eq. (34) implies \[\omega>\mu,~{}~{}c_{4}>0, \tag{36}\] and moreover, Eq. (35) further indicates \[c_{3}<\frac{1}{m},~{}~{}c_{4}Tm+mc_{3}<1. \tag{37}\] Otherwise, if \(c_{3}\geq 1/m\), Eq. (35) can no longer hold. To proceed, one substitutes Eq. (5) into Eq. (35) and squares both sides, and one again finds a fourth-degree polynomial equation for \(m\). Similarly, this equation possesses four roots, where complex roots appear in pairs.

Figure 2: The derived quasiparticle mass in the parameter space according to the form given by Eq. (28) and the fit shown in Fig. 1. (a) The quasiparticle mass \(m\) as a function of \(T\) and \(\mu\) at \(k=1\) GeV. (b) The quasiparticle mass \(m\) as a function of \(k\) and \(\mu\) at \(T=0.25\) GeV. (c) The bag constant \(B\) as a function of \(T\) and \(\mu\).

We proceed numerically from this point on. The values \(c_{3}\) and \(c_{4}\) are extracted from a fit to the lattice QCD data shown in Fig. 3. One finds \(c_{3}=-1.84\) and \(c_{4}=28.55\), which affirms the second choice above. After discarding a pair of complex roots and an extraneous root, the physically relevant solution is eventually singled out from the two that sit on the positive real axis. The resultant mass function \(m(k,T,\mu)\) and the bag constant \(B(T,\mu)\) are shown in Fig. 4. The left and middle plots give the mass as a function of \(T\) and \(\mu\) at given \(k=1\) and as a function of \(k\) and \(\mu\) at given \(T=0.12\). Again, the bag constant can be obtained by numerical integration of Eqs. (11) and (16). The dependence of the bag constant on the temperature \(T\) or chemical potential \(\mu\) is presented in the right plot of Fig. 4. The mass function and the bag constant are found to be moderate in \(T\) and \(\mu\), mainly in accordance with the existing results [44]. Different from Fig. 2, as \(k\to 0\), the mass of the quasiparticle does not modify significantly. Again, the obtained bag constant \(B\) is manifestly path independent.
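To make the Case-1 branch structure reproducible, the sketch below (our own, assuming the quoted fit values \(c_{1}=0.44\) and \(c_{2}=0.13\), with energies in GeV) root-finds directly on the un-squared relation Eq. (30) instead of on the quartic. Squaring is what generates the extraneous root, so working with Eq. (30) itself, together with the bracket \(m>\mu\) supplied by Eq. (29), isolates the physical branch without any root bookkeeping.

```python
# Extract the physical quasiparticle mass of Case 1 from the un-squared
# relation Eq. (30):  omega*(m - c1*T - c2) = mu*(m - c2),
# with omega = sqrt(k^2 + m^2) and the physical constraint m > mu, Eq. (29).
import numpy as np
from scipy.optimize import brentq

c1, c2 = 0.44, 0.13   # fit values quoted in the text; c2 in GeV

def mass_case1(k, T, mu, m_max=10.0):
    g = lambda m: np.hypot(k, m) * (m - c1 * T - c2) - mu * (m - c2)
    return brentq(g, mu + 1e-9, m_max)   # bracket enforces m > mu

for k in (0.1, 1.0, 2.0):
    m = mass_case1(k, T=0.25, mu=0.10)
    print(f"k = {k:4.1f} GeV  ->  m = {m:.4f} GeV")
```

Consistent with the discussion above, the root grows as \(k\to 0\) and stays above \(\mu\) throughout; the same values follow from the quartic once its complex pair and extraneous real root are discarded.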
Before closing this section, we present in Fig. 5 a few resulting thermodynamic quantities evaluated using the toy model proposed in case 2. In the left plot, we show the pressure, energy density, and entropy density as a function of temperature at vanishing chemical potential. The right plot gives the difference in pressure between the states with finite and vanishing chemical potential. It is noted that in the calculations, in accordance with the simplified scenario, one only takes into account the \(u\) and \(d\) quarks but does not include the \(s\) quarks, gluons, or the anti-particles. By comparing the results with those obtained using more sophisticated approaches [26; 44; 57], one is led to the following observations. The tendency of the temperature dependence is mainly correct, while the magnitudes of the calculated thermodynamic quantities consistently underestimate the existing results. This is because the simplified models do not consider the contributions from the remaining degrees of freedom, including those from the anti-particles. Moreover, the order of magnitude for these quantities can be roughly recuperated by multiplying by a factor of two. The latter effectively compensates for the contributions missing from the anti-particles. We note, nonetheless, that the main objective of the present approach is to explore the analytic properties of the mass function from a bottom-up perspective rather than reproduce the lattice data numerically by employing some sophisticated approximate function.

Figure 3: The mass of \(u\) and \(d\) quarks at vanishing chemical potential derived from the lattice data [52], which is fit to the analytic form Eq. (32) discussed in the text.

## IV Concluding remarks To summarize, in this work, we reviewed the topic of the quasiparticle model closely related to Ru-Keng Su's distinguished contributions in the past years. Moreover, we explore the approach applied to scenarios with finite chemical potential. Different from the standard recipe in the literature, we explored the possibility that the effective mass of the quasiparticle might be a function of the momentum, in addition to the dependence on temperature and chemical potential. It was shown that such a scenario emerges as a special solution to an integro-differential equation derived from the thermodynamic consistency. We pointed out that the special solution in question is essentially a generalization of those previously explored in the literature. Instead of fitting to the lattice QCD data at vanishing chemical potential, we performed a "bottom-up" approach by assuming two analytic ansatzes. The remaining physical quantities were subsequently derived and discussed. We note that the momentum-dependent quanta mass has also been addressed by some authors from the QCD perspective, where the analyses were closely related to the symmetry of the underlying system. In terms of the Gribov-Zwanziger framework, results on the gluon [58; 59; 60; 61] and quark propagator [62] indicated that the pole masses are functions of the momentum. Besides, calculations using the Schwinger-Dyson equation showed momentum-dependence for both gluon [63] and quark [64; 65] dynamic masses. The current approach's main objective is to explore the analytic properties of the mass function. It is primarily motivated as one might distinguish the various roots deriving from the thermodynamical consistency condition. As observed and discussed in the main text, these different roots are somehow separated by the pole of the relevant equation, which is not apparent if a numerical scheme were utilized in the first place.

Figure 4: The derived quasiparticle mass in the parameter space according to the form given by Eq. (33) and the fit shown in Fig. 3. (a) The quasiparticle mass \(m\) as a function of \(T\) and \(\mu\) at \(k=1\) GeV. (b) The quasiparticle mass \(m\) as a function of \(k\) and \(\mu\) at \(T=0.12\) GeV. (c) The bag constant \(B\) as a function of \(T\) and \(\mu\).

Figure 5: The derived thermodynamic quantities by considering the \(u\) and \(d\) quarks, where the \(s\) quarks, gluons, and anti-particles are not explicitly taken into account. (a) The pressure \(3p/T^{4}\), energy density \(\epsilon/T^{4}\), and entropy density \(3s/(4T^{3})\) as functions of the temperature \(T/T_{c}\) at vanishing chemical potential, where \(T_{c}=0.15\) GeV in accordance with lattice QCD data. (b) The difference in pressure between the states with finite and vanishing chemical potential is shown as a function of temperature \(T/T_{c}\).

The calculations primarily employ Eq. (26). It is a simplified approach as it ignores anti-particles' contributions and is only utilized to accommodate the \(u\) and \(d\) quarks. On the other hand, a numerical approach directly based on Eq. (21) was carried out in a previous study [44], where the cancelations warranted by Eqs. (11) and (16) take place for individual particles, as well as their anti-particles. Nonetheless, the present study gives rise to the following speculations. First, we have attempted to avoid the singularity of the mass function by entirely evading its poles by imposing the conditions, Eqs. (29) and (34). The resultant physical quantities are, in turn, manifestly _analytic_ on the \(T\) and \(\mu\) parameter space. Curiously, from a theoretical perspective, one expects a curve of first-order phase transition on the parameter plane, which entails some discontinuity. In other words, the discontinuity avoided in the present study might be utilized in our favor. Specifically, a pole in the mass function indicates an infinite mass, which can be viewed as a natural and benign outcome when a degree of freedom can hardly be excited. We plan to address these aspects in further studies. ## Acknowledgements This work is supported by the National Natural Science Foundation of China. This work is partially supported by the Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024). We also gratefully acknowledge the financial support from Brazilian agencies Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP), Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), and Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES).
2303.10213
Late-time universal distribution functions of observables in one-dimensional many-body quantum systems
We study the probability distribution function of the long-time values of observables being time-evolved by Hamiltonians modeling clean and disordered one-dimensional chains of many spin-1/2 particles. In particular, we analyze the return probability and its version for a completely extended initial state, the so-called spectral form factor. We complement our analysis with the spin autocorrelation and connected spin-spin correlation functions, both of interest in experiments with quantum simulators. We show that the distribution function has a universal shape provided the central limit theorem holds. Explicitly, the shape is exponential for the return probability and spectral form factor, while it is Gaussian for the few-body observables. We also discuss implications for the so-called many-body localization. Remarkably, our approach requires only a single sample of the dynamics and small system sizes, which could be quite advantageous, especially when dealing with disordered systems.
I. Vallejo-Fabila, E. Jonathan Torres-Herrera
2023-03-17T19:00:38Z
http://arxiv.org/abs/2303.10213v2
Late-time universal distribution functions of observables in one-dimensional many-body quantum systems

###### Abstract

We study the probability distribution function of the long-time values of observables being time-evolved by Hamiltonians modeling clean and disordered one-dimensional chains of many spin-1/2 particles. In particular, we analyze the return probability and its version for a completely extended initial state, the so-called spectral form factor. We complement our analysis with the spin autocorrelation and connected spin-spin correlation functions, both of interest in experiments with quantum simulators. We show that the distribution function has a universal shape provided the Central Limit Theorem holds. Explicitly, the shape is exponential for the return probability and spectral form factor, while it is Gaussian for the few-body observables. We also discuss implications for the so-called many-body localization. Remarkably, our approach requires only a single sample of the dynamics, which is quite advantageous for experiments and theory.

## I Introduction

Probability distribution functions (pdfs) are of fundamental interest in different branches of science. In physics, in particular, pdfs are important not only for the purpose of computing mean values or expectation values of relevant physical quantities, but also by themselves, because they can provide fine information about fluctuations around the mean values of properties of a given system. For a comprehensive review on fluctuations in physical systems see, for instance, Ref. [1]. In the realm of quantum mechanics, the characterization of fluctuations in the time domain has been useful to understand reversibility in generic chaotic and integrable systems [2], and to establish a Gaussian scenario for equilibration in non-interacting models [3; 4], where fluctuations can decay as the square root of the system size. It was later shown in [5], however, that for many-body systems the fluctuations of generic observables decay exponentially with system size in both integrable and chaotic systems. Late-time pdfs are also important in the context of thermalization and many-body localization (MBL). A few examples: Gaussian momentum distribution functions for noninteracting spinless fermions and hard-core bosons in one-dimensional (1D) lattices were observed in [6; 7]. Gaussian distributions were experimentally obtained for the equilibrium value of a temporal autocorrelation function, the number entropy, and the Hamming distance in [8]. In numerical experiments employing the time-dependent variational principle for matrix product states and machine learning, late-time pdfs were analyzed for the spin imbalance, the entanglement entropy, and the so-called Schmidt gap [9]. pdfs were useful for the determination of the universality class of MBL under speckle disorder [10]. Long tails and bimodal probability distribution functions of the entanglement entropy near and at the MBL transition, respectively, were previously analyzed in [11; 12]. From our point of view, a clear determination of the conditions under which the pdfs of observables in many-body systems, found both experimentally and theoretically, acquire their shapes is missing. Our aim in this work is to set, in a systematic way, the basic conditions that lead to universal late-time probability distribution functions of physical quantities and observables in 1D interacting quantum systems.
We will show that the Central Limit Theorem (CLT) is behind the universal late-time pdfs, but even deeper are the conditions under which this almost ubiquitous and undoubtedly celebrated theorem holds in the time evolutions generated by paradigmatic models of spin-1/2 particles, with or without interactions, clean or disordered, chaotic or integrable. The projections of the initial state onto the energy eigenstates, as well as the eigenvalues, are the main characters playing a relevant role in the fulfillment of the CLT. Of course, when dealing with observables, the matrix elements in the energy representation can also be relevant. Our analysis will employ different probes of the dynamics, namely, the return probability, the spectral form factor, the spin autocorrelation function, and the connected spin-spin correlation function. We will put the main emphasis on the first one, the return probability, because it has been prominently studied since the early years of the first quantum revolution. Let us give another non-exhaustive but larger list of examples: in the context of the time-energy uncertainty relation [13; 14], non-exponential late-time decays of a quasi-stationary state [15], quantum Zeno's paradox [16], quantum speed limits [17; 18; 19; 20], quantum energy flow [21], non-periodic substitution potentials [22], the connection with the time operator through a generalized Weyl relation [23], cosmology [24], quantum walks and complex networks [25; 26; 27; 28], the dynamics of a thermofield double state [29; 30], and matter-radiation interaction models like Dicke [31] and Bose-Hubbard [32]. The return probability has also been measured in experiments to observe time-resolved level repulsion in chaotic systems [33], in fluorescence experiments with molecules [34; 35], with ultra-cold atoms in magneto-optic traps [36], on an atom chip [37], and in studies of prethermalization in Floquet systems [38]. The rest of the manuscript is organized as follows. Models and quantities are introduced in Secs. II and III, respectively. In Sec. IV we provide an overview of the dynamics from the time it is initiated up to equilibrium. In Sec. V we present a justification of the universal distributions based on theoretical grounds. Results for the addressed systems are shown and discussed in Sec. VI. Conclusions are finally presented in Sec. VII.

## II Models

We consider a system of spin-\(1/2\) particles that interact with nearest neighbors in a 1D lattice. The spins can also be subjected to an on-site random potential. The general Hamiltonian describing such a system is \[\mathcal{H}=\sum_{k}(S_{k}^{x}S_{k+1}^{x}+S_{k}^{y}S_{k+1}^{y}+\Delta S_{k}^{z }S_{k+1}^{z})+\sum_{k=1}^{L}h_{k}S_{k}^{z}, \tag{1}\] with spin-\(1/2\) operators \(S_{k}^{x,y,z}\) acting on the spin located at the \(k\)-th site. We set \(\hbar=1\). The anisotropy parameter \(\Delta\) and the on-site potential amplitudes \(h_{k}\) are tuned to obtain different instances of model (1). Setting \(\Delta=0\) together with \(h_{k}=0\) leads to a non-interacting model, known as the XX model, which is solved by applying a Jordan-Wigner transformation [39; 40]. With \(\Delta=1\) and \(h_{k}=0\) we obtain the isotropic XXZ model with Ising-like interactions in the \(z\)-direction; this model is also solvable, but via the celebrated Bethe ansatz [41; 42; 43; 44; 45]. We arrive at a paradigmatic model for the MBL transition by setting again \(\Delta=1\) but with \(h_{k}\) as uncorrelated random numbers uniformly distributed in the interval \((-h,h)\), with \(h\) the disorder strength.
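For concreteness, the following is a minimal NumPy sketch (ours, not the authors' code) that builds the dense Hamiltonian of Eq. (1) for a small chain with open boundaries; the restriction to the \(\mathcal{S}^{z}=0\) sector used in the paper is omitted for brevity.

```python
import numpy as np

# Single-site spin-1/2 operators (hbar = 1).
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def site_op(op, k, L):
    """Embed a single-site operator at site k of an L-site chain."""
    out = np.array([[1.0]])
    for site in range(L):
        out = np.kron(out, op if site == k else np.eye(2))
    return out

def hamiltonian(L, delta, h_fields):
    """Dense H of Eq. (1) with open boundary conditions."""
    dim = 2 ** L
    H = np.zeros((dim, dim), dtype=complex)
    for k in range(L - 1):  # nearest-neighbor couplings
        H += site_op(sx, k, L) @ site_op(sx, k + 1, L)
        H += site_op(sy, k, L) @ site_op(sy, k + 1, L)
        H += delta * site_op(sz, k, L) @ site_op(sz, k + 1, L)
    for k in range(L):      # on-site random potential h_k * Sz_k
        H += h_fields[k] * site_op(sz, k, L)
    return H

L, h = 8, 0.5
rng = np.random.default_rng(seed=0)
H = hamiltonian(L, delta=1.0, h_fields=rng.uniform(-h, h, size=L))
```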
A critical point \(h_{c}\) has been identified, \(3.75\lesssim h_{c}\), but still without consensus about its precise location on the \(h\)-axis; see, for instance, Refs. [46; 47; 48; 49; 50; 51; 52]. For the XX and XXZ models we consider open boundary conditions, while for the disordered model we use periodic boundary conditions, \(\hat{S}_{L+1}=\hat{S}_{1}\), implying that in the first summation of Hamiltonian (1) the index \(k\) runs from \(1\) to \(L-1\) for the two former models and from \(1\) to \(L\) for the latter. Hamiltonian (1) conserves the total spin in the \(z\)-direction, \(\mathcal{S}^{z}=\sum_{k}S_{k}^{z}\); it only moves an excitation (spin up) through the chain due to the XX term, which is also known as the flip-flop term. This allows us to focus our study on the largest subspace, with \(\mathcal{S}^{z}=0\) and dimension \(\mathcal{N}=L!/[(L/2)!]^{2}\).

## III Quantities and observables

In this section we describe the time-dependent quantities and observables employed in our study. We also present the level spacing distribution, traditionally used as a probe of repulsion between adjacent energy levels, a fingerprint of quantum chaos.

### Return probability and spectral form factor

The return probability, \(RP\), is a dynamical quantity defined as the probability of finding an initial state \(\ket{\Psi(0)}\) at a future time \(t\). It is given by \[RP(t)=|S(t)|^{2}=\left|\sum_{\alpha=1}^{\mathcal{N}}|c_{\alpha}^{0}|^{2}e^{- iE_{\alpha}t}\right|^{2}, \tag{2}\] where \(S(t)=\bra{\Psi(0)}\mathcal{U}(t)\ket{\Psi(0)}\) is the return amplitude and \(\mathcal{U}(t)=\exp(-i\mathcal{H}t)\) the unitary time evolution operator. For the Hamiltonian generating the dynamics we have the eigenvalue equation \(\mathcal{H}\ket{\psi_{\alpha}}=E_{\alpha}\ket{\psi_{\alpha}}\), \(\alpha=1,\,2,\,\ldots,\,\mathcal{N}\), while \(c_{\alpha}^{0}=\bra{\psi_{\alpha}}\ket{\Psi(0)}\) are the projections of the initial state onto the energy eigenstates. When the initial state \(\ket{\Psi(0)}\) is completely extended in the energy eigenbasis, its components go as \(1/\sqrt{\mathcal{N}}\), and from Eq. (2) we recover the younger relative of \(RP(t)\), the so-called spectral form factor, \[K(t)=\frac{1}{\mathcal{N}^{2}}\left|\sum_{\alpha=1}^{\mathcal{N}}e^{-iE_{ \alpha}t}\right|^{2}, \tag{3}\] where only the energy eigenvalues are involved.

### Spin autocorrelation function

How close the spin configuration at time \(t\) is to the one at \(t=0\) can be measured through the so-called spin autocorrelation function, given by \[I(t)=\frac{4}{L}\sum_{i=1}^{L}\bra{\Psi(0)}\hat{S}_{i}^{z}e^{i\mathcal{H}t}\hat {S}_{i}^{z}e^{-i\mathcal{H}t}\ket{\Psi(0)}. \tag{4}\] For particular initial states, like a quantum Néel-like state, \(I(t)\) is equivalent to the density imbalance measured in experimental platforms studying ultra-cold atoms [53; 54].

### Connected spin-spin correlation function

Another quantity also of interest in quantum simulators is the connected spin-spin correlation function, defined through \[\begin{split} C(t)&=\frac{4}{L}\sum_{k=1}^{L-1}\big{[} \bra{\Psi(t)}S_{k}^{z}S_{k+1}^{z}\ket{\Psi(t)}-\\ &\bra{\Psi(t)}S_{k}^{z}\ket{\Psi(t)}\bra{\Psi(t)}S_{k+1}^{z} \ket{\Psi(t)}\big{]}.\end{split} \tag{5}\] It quantifies time-dependent correlations between neighboring spins in the chain. It has been measured, for instance, in experiments with trapped ions studying systems with long-range interactions [55].
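Both Eq. (2) and Eq. (3) follow directly from exact diagonalization. A minimal sketch (ours), continuing from the Hamiltonian built above:

```python
import numpy as np

def return_probability(E, c0, times):
    """RP(t) of Eq. (2) from eigenvalues E and overlaps c0 = <psi_a|Psi(0)>."""
    weights = np.abs(c0) ** 2                   # |c_alpha^0|^2
    phases = np.exp(-1j * np.outer(times, E))   # shape (n_times, N)
    return np.abs(phases @ weights) ** 2

def spectral_form_factor(E, times):
    """K(t) of Eq. (3); only the spectrum enters."""
    phases = np.exp(-1j * np.outer(times, E))
    return np.abs(phases.sum(axis=1)) ** 2 / len(E) ** 2

# Usage with the Hamiltonian H from the previous sketch:
E, V = np.linalg.eigh(H)        # exact diagonalization
psi0 = np.zeros(len(E))
psi0[0] = 1.0                   # a single site-basis configuration
c0 = V.conj().T @ psi0          # projections onto the energy eigenbasis
times = np.linspace(0.0, 500.0, 4000)
rp = return_probability(E, c0, times)
sff = spectral_form_factor(E, times)
```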
### Level spacings

We present in this subsection the theoretical predictions for the distribution \(P(s)\) of spacings between adjacent energy levels, \(s_{\alpha}=(E_{\alpha+1}-E_{\alpha})/\delta E\), with \(\delta E\) the mean level spacing of the system. For integrable systems the level spacings typically follow an exponential distribution; they obey Poisson-like statistics [56], \[P_{\rm P}(s)=\exp(-s). \tag{6}\] No level repulsion is apparent from Eq. (6), since \(P(s\to 0)\to 1\). Meanwhile, the spacings between energy levels of time-reversal-invariant systems that are chaotic, just like the ones we deal with, follow statistics from the Gaussian orthogonal ensemble (GOE) of random matrix theory (RMT) [57], \[P_{\rm GOE}(s)=\frac{\pi}{2}s\exp\left(-\frac{\pi}{4}s^{2}\right), \tag{7}\] and level repulsion is manifest, as indicated by the limiting behavior \(P(s)\propto s\) for \(s\to 0\). We stress that \(P(s)\) will be used just to signal the regime of the Hamiltonian, integrable or chaotic (ergodic, thermalizing), and also to expose possible degeneracies of the energy spectrum, like in the XX model (see the sketch below).

## IV Dynamics overview

We focus on the late-time behavior of the dynamical quantities and observables introduced above, that is, beyond the so-called Heisenberg time, \(t_{\rm H}=2\pi/\delta E\). However, it is instructive to illustrate their time evolution from \(t=0\) up to saturation. Figure 1 depicts the full-time evolution generated by the disordered model with a fixed number of sites \(L=16\) for \(RP(t)\) [Fig. 1(a)], \(K(t)\) [Fig. 1(b)], \(I(t)\) [Fig. 1(c)], and \(C(t)\) [Fig. 1(d)]. Three different disorder strengths are considered, \(h=0.5\) (red), \(h=3.75\) (yellow), and \(h=6.0\) (blue). Dark-colored curves are averages over samples obtained from 130 disorder realizations and 78 initial states with energy closest to zero, \(E_{0}=\left\langle\Psi(0)\right|\mathcal{H}\left|\Psi(0)\right\rangle\approx 0\); a total of \(\sim 10^{4}\) samples were thus considered for the average. Light-colored curves are results from five individual samples, i.e., each one representing the dynamics for a single initial state with energy closest to zero and only one disorder realization. Note that individual samples for \(h=3.75\) are omitted; this is done only to avoid overcrowding the figure. For \(K(t)\) only disorder realizations were considered. The individual samples present huge sample-to-sample fluctuations [58] that could hide or obscure some relevant aspects of the dynamics, such as the ones shown by the averaged quantities [59; 60; 61]. In fact, from the averages shown in Fig. 1 a clear power-law decay is seen for the return probability and apparently also for the imbalance [62; 63], although for this last quantity the dissipative dynamics includes a stretched exponential decay [64; 65]. The power-law decay is followed by the correlation hole [66; 67; 68], which is visible for both quantities but also for the spectral form factor. For \(C(t)\) the correlation hole is not apparent, but it is present; a close zoom-in is needed to observe it [69]. We note that also in [69] it was shown how these features depend on both the system size and the kind of observable.
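As a complement to Eqs. (6) and (7), the sketch below (ours) estimates \(P(s)\) from the spectrum obtained above; the crude global unfolding is our simplifying assumption, not the procedure of the paper.

```python
import numpy as np

def spacing_histogram(E, bins=30):
    """Nearest-neighbor spacings of the central half of the spectrum,
    normalized by their mean (a crude global unfolding)."""
    E = np.sort(np.real(E))
    bulk = E[len(E) // 4 : 3 * len(E) // 4]
    s = np.diff(bulk)
    s = s / s.mean()
    density, edges = np.histogram(s, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), density

s_grid = np.linspace(0.0, 4.0, 200)
p_poisson = np.exp(-s_grid)                                        # Eq. (6)
p_goe = 0.5 * np.pi * s_grid * np.exp(-0.25 * np.pi * s_grid**2)   # Eq. (7)
centers, density = spacing_histogram(E)   # E from the sketch above
```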
Figure 2 shows the whole dynamics for the clean XXZ and XX models, but considering only the return probability [Fig. 2(a,c)] and the spectral form factor [Fig. 2(b,d)]. In Fig. 2(a,c) the dark-blue curves are averages over 200 initial states, together with an additional moving average to reduce fluctuations. Light-blue curves are the dynamics for five individual initial states with energies closest to zero. In Figs. 2(b,d) the dark curves are moving averages over a single sample, while the light-colored curves are the raw data. Although interesting transient behavior is observed in Figs. 1 and 2, it is worth emphasizing that our main analysis of the distribution functions will be carried out for times beyond the Heisenberg time, \(t_{\rm H}\). However, we will eventually provide examples of distribution functions for different time scales. The vertical dashed line in all panels of both figures marks the point where \(t/t_{\rm H}=1\).

Figure 1: Full-time dynamics of the quantities and observables treated in our study. Time evolution is generated by the disordered model in the ergodic phase with \(h=0.5\) (red), \(h=3.75\) (orange) and \(h=6.0\) (blue). Return probability (a), spectral form factor (b), spin autocorrelation function (c) and connected spin-spin correlation function (d). Light-colored curves are results from five samples of the dynamics, while dark-colored curves correspond to an average over \(\sim 10^{4}\) samples originated from different initial states and random realizations. System size is \(L=16\).

## V Central limit theorem and distribution functions

Equation (2) for the return probability can be rewritten as \[RP(t)=\left|\sum_{\alpha=1}^{\mathcal{N}}X_{\alpha}(t)-i\sum_{\alpha=1}^{ \mathcal{N}}Y_{\alpha}(t)\right|^{2}, \tag{8}\] where \(X_{\alpha}(t)=|c_{\alpha}^{0}|^{2}\cos(E_{\alpha}t)\) and \(Y_{\alpha}(t)=|c_{\alpha}^{0}|^{2}\sin(E_{\alpha}t)\). At this point we recall that the classical Central Limit Theorem roughly states that if \(X_{\alpha}(t)\) or \(Y_{\alpha}(t)\) are independent and identically distributed (i.i.d.) random variables with finite second moment, then the distribution of their sum is well approximated by a Gaussian shape. Thus, if we assume that \(X_{\alpha}(t)\) and \(Y_{\alpha}(t)\) behave as i.i.d. in a time window \([t_{i},t_{f}]\) with initial time \(t_{i}\) and final time \(t_{f}\), then Eq. (8) can be considered as the squared norm of a complex random variable with Gaussian real and imaginary parts. In such a case the distribution of \(RP(t)\) should be exponential [70], \[P(RP)=\lambda e^{-\lambda RP},\hskip 28.452756pt\lambda\geq 0. \tag{9}\] Considering the terms appearing in Eq. (8), it is clear that if both the initial state components \(c_{\alpha}^{0}\) and the energy eigenvalues \(E_{\alpha}\) behave simultaneously as i.i.d., then the CLT should hold. Assuming that \(X_{\alpha}(t)\) and \(Y_{\alpha}(t)\) each remain identically distributed, three possible scenarios where the CLT does not hold are when the initial state components are auto-correlated, when the energy eigenvalues are auto-correlated, or when both of them are correlated; each of these cases means that \(X_{\alpha}(t)\) and \(Y_{\alpha}(t)\) are not independent. Note, however, that even for sufficiently weakly correlated variables the CLT can remain true [71]. In the case of \(K(t)\), the exponential distribution should be achieved when correlations between energy eigenvalues are absent, as seen directly from the expression defining \(K(t)\) in Eq. (3).
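The CLT argument above can be checked numerically in a few lines; the sketch below (ours; the weights and spectrum are mock i.i.d. variables, not data from the paper) samples Eq. (2) at late, well-separated times and should reproduce the exponential of Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N, n_samples = 500, 5000
weights = rng.exponential(size=N)
weights /= weights.sum()                        # mock |c_alpha^0|^2, sum = 1
E = rng.normal(size=N)                          # uncorrelated mock spectrum
times = rng.uniform(1e3, 2e3, size=n_samples)   # late, well-separated times
rp = np.abs(np.exp(-1j * np.outer(times, E)) @ weights) ** 2
rp /= rp.mean()                                 # rescale to unit mean
density, edges = np.histogram(rp, bins=50, density=True)
# density should match P(RP) = exp(-RP), i.e., Eq. (9) with lambda = 1
```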
The scenario for the expectation value of generic observables is more involved. From its time evolution \[\langle\mathcal{O}(t)\rangle=\sum_{\alpha,\beta}c_{\alpha}^{0}c_{\beta}^{0} \mathcal{O}_{\alpha\beta}e^{-i(E_{\alpha}-E_{\beta})t}, \tag{10}\] with \(\mathcal{O}_{\alpha\beta}=\langle\psi_{\alpha}|\mathcal{O}|\psi_{\beta}\rangle\), we see that the matrix elements of the observable in the energy representation are also included. The simplest case arises with the assumption that all terms inside the summation in Eq. (10) behave collectively as i.i.d. random variables; then, by the CLT, we expect the distribution to be Gaussian. We note that obtaining a Gaussian distribution for the late-time values of a generic observable is an indication that the matrix elements of the observable in the energy representation are i.i.d. by themselves. However, if the distribution is not Gaussian, we cannot conclude that the matrix elements are correlated, because the energy eigenvalues and eigenstates inside the summation could themselves be correlated. Although interesting, we leave the analysis of those fine details for a future work. At this point it is fair to mention that Aurich and Steiner conjectured in [72] that, given an extended initial state, the distribution function of a normalized version of the return amplitude \(S(t)\) for a chaotic quantum system should be universally described by Rayleigh's law, \[P(S)=\frac{\pi}{2}S\exp\left(-\frac{\pi}{4}S^{2}\right),\hskip 28.452756ptS \geq 0. \tag{11}\] This conjecture was established in the context of 2D and 3D quantum billiards. It is straightforward to show that the square root of an exponentially distributed variable, like the return probability, follows a Rayleigh distribution just as in Eq. (11); this is consistent with Eq. (9), which was derived using only basics from probability theory. Also fair is to note that, in the context of RMT, Kunz arrived at the rigorous result that the pdf of \(RP(t)\) is exponential, independently of the initial state [73].

## VI Results

Now we present and describe our results on the pdfs, starting with the return probability \(RP(t)\), to which we devote the majority of this paper. Results about level statistics are also presented, but of course they are not new in the existing literature. As explained before, we use them only as a reference.

Figure 2: Full-time dynamics of \(RP(t)\) and \(K(t)\) generated by the XXZ (a, b) and XX (c, d) models. The light-colored curves in (a) and (c) are for individual samples of \(RP(t)\) obtained for five different initial states, while the dark-colored ones correspond to an average over 200 initial states, together with an additional moving average performed to further smooth the curves. For \(K(t)\) in (b) and (d) the light-colored curves represent the dynamics from a single initial state and the dark-colored curves correspond to moving averages over the same sample. System size is \(L=16\).

### Return probability and spectral form factor

Figure 3 depicts \(P(s)\) and \(P(RP)\) for the disordered model and a fixed number of spins \(L=18\). The upper panels, showing \(P(s)\), display the behavior already known in the vicinity of the MBL transition [74]. For small disorder strength, \(h=0.5\) [Fig. 3(a)], \(P(s)\) has the shape described by Eq. (7). At intermediate disorder strength, \(h=3.75\), the shape of \(P(s)\) is neither GOE-like nor Poisson-like, although closer to the Poisson shape than to GOE. Once in the localized phase, \(h=6.0\), \(P(s)\) takes the Poisson-like shape of Eq. (6) expected past the MBL transition; see for instance [75; 62] and more recently [76].
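Consistency between Eqs. (9) and (11) can be verified numerically in a few lines (our check, not the authors'): if \(RP\) is exponential with mean \(4/\pi\), then \(S=\sqrt{RP}\) is Rayleigh-distributed exactly as in Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(seed=3)
rp = rng.exponential(scale=4.0 / np.pi, size=100000)  # P(RP), Eq. (9)
s = np.sqrt(rp)                                       # |S(t)| = sqrt(RP)
density, edges = np.histogram(s, bins=60, density=True)
grid = 0.5 * (edges[:-1] + edges[1:])
rayleigh = 0.5 * np.pi * grid * np.exp(-0.25 * np.pi * grid**2)  # Eq. (11)
# density and rayleigh agree up to sampling noise
```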
In fact, the slow dynamics produced by those correlations have been observed experimentally [79]. However, to our knowledge, information about auto-correlations of energy levels does not exist. To fill this gap, we resort to the spectral form factor and analyze its distribution for \(t/t_{\rm H}>1\). Figure 4 is conclusive: it shows that for any disorder strength in the disordered model the distribution of \(K(t)\) has an exponential shape, meaning that the energy eigenvalues behave as i.i.d. random variables. Then the sole cause of the nonconforming shapes of \(P(RP)\) in Figs. 3(e) and 3(f) should be the auto-correlations between the initial state components \(c_{\alpha}^{0}\). To reinforce our statement about the initial state, we present in Fig. 5 \(P(RP)\) departing from a random initial state with a tuned degree of correlations and evolved by the disordered model in the chaotic region, with fixed \(h=0.5\). There the components \(c_{\alpha}^{0}\) of the initial state are random numbers from a uniform distribution with correlation degree \(q\), generated following the recipe given in [78] and recently used to show the effects of correlated disorder in the region around the MBL transition [79]. Figure 5 shows that as the degree of correlations moves from a regime of strong correlations (a), passing through intermediate (b), and finally arriving at weak correlations (c), the exponential distribution of \(P(RP)\) is eventually recovered.

Figure 4: Distribution of the late-time values of \(K(t)\) for the same time window as in Fig. 3 for the disordered model with \(h=0.5\) (a), \(h=3.75\) (b) and \(h=6.0\) (c). Solid curves are given by \(P(K)=\exp(-K)\). System size is \(L=18\).

Figure 5: Distribution of the late-time values of \(RP(t)\) under the disordered model in the ergodic phase with \(h=0.5\). The considered time window is \(t/t_{\rm H}\in(11.38,14.93)\). The initial state is a random one with correlated components \(c_{\alpha}^{0}\). Distributions for different degrees of correlation are depicted: strong \(q=0.5\) (a), intermediate \(q=0.33\) (b) and weak \(q=0.2\) (c). The red solid curve in (c) corresponds to \(P(RP)=\exp(-RP)\). System size is \(L=18\).

Figure 3: Level spacing distribution \(P(s)\) (upper panels) and distribution of the late-time values of \(RP(t)\) (lower panels) for the disordered model. The time intervals are \(t/t_{\rm H}\in(11.38,14.93)\), \(t/t_{\rm H}\in(29.25,38.38)\), and \(t/t_{\rm H}\in(33.34,43.74)\) for the disorder strengths \(h=0.5\) (a), \(h=3.75\) (b) and \(h=6.0\) (c) respectively. Solid lines in the upper panels depict the theoretical values for \(P(s)\), Eq. (6) [red in (b) and (c)] and Eq. (7) [blue in (a) and (b)]. Solid curves in the lower panels correspond to Eq. (9) with mean 1. System size is \(L=18\).

### XX and XXZ models

As already discussed, auto-correlations between energy levels can block the fulfillment of the CLT. Those correlations can be caused, for instance, by degeneracies in the energy spectrum. A suitable model to show this is the XX model [\(\Delta=0\) and \(h=0\) in Eq. (1)], whose energy spectrum contains a large number of degeneracies [5]. Figure 6(a), showing \(P(s)\) for the XX model, confirms the degeneracies, with the Shnirelman peak as a witness [80]. Consequently, the distributions not only of \(RP(t)\) but also of \(K(t)\) should not be exponential, which is confirmed by Figs. 6(c) and (e) for \(P(RP)\) and \(P(K)\), respectively.
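To see how degeneracies can obstruct the CLT, consider a toy spectrum (ours, purely illustrative, not the XX spectrum itself) in which only a handful of distinct levels survive: the sum in Eq. (3) then contains too few independent phases and \(P(K)\) departs from the exponential.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 512
E_generic = rng.normal(size=N)                   # N distinct levels
E_degen = np.repeat(rng.normal(size=4), N // 4)  # only 4 distinct levels

def k_samples(E, times):
    """Late-time samples of K(t), Eq. (3)."""
    phases = np.exp(-1j * np.outer(times, E))
    return np.abs(phases.sum(axis=1)) ** 2 / len(E) ** 2

times = rng.uniform(1e3, 2e3, size=5000)
k_gen = k_samples(E_generic, times)  # histogram close to an exponential
k_deg = k_samples(E_degen, times)    # bounded support, non-exponential shape
```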
We complement our analysis with the study of the time evolution generated by the XXZ model [\(\Delta=1\) and \(h=0\) in Eq. (1)]. The Ising interactions present in the integrable XXZ model break some symmetries of the XX model and thus remove degeneracies in the energy spectrum. This is reflected in \(P(s)\), Fig. 6(b), which displays Poisson-like statistics. The distribution of the return probability, \(P(RP)\), is exponential, meaning that the energy eigenvalues and initial state components behave as uncorrelated random variables. Figure 6 for \(P(K)\) confirms the absence of auto-correlations between eigenvalues. To confirm the same for the initial state components, we employed the so-called inverse participation ratio, \(IPR=\sum_{\alpha=1}^{\mathcal{N}}|\langle\psi_{\alpha}|\Psi(0)\rangle|^{4}\propto\mathcal{N}^{-D_{2}}\), with \(0\leq D_{2}\leq 1\) known as the correlation dimension, which measures the degree of correlations between eigenstate components [81]. The two extreme cases \(D_{2}=0\) and \(D_{2}=1\) mean correlations and absence of correlations, respectively. Our scaling analysis of the \(IPR\) with the dimension \(\mathcal{N}\) (not shown) for the initial state \(\ket{\Psi(0)}\) used in Fig. 6(d) leads us to \(D_{2}\approx 0.75\); this value suggests weak correlations between the initial state components \(c_{\alpha}^{0}\), at least weak enough for the CLT to remain valid. Our results for the integrable XXZ model reaffirm the idea that the universal late-time exponential distribution is achieved once the conditions for the fulfillment of the CLT are provided, that is, simultaneous absence (or weakness) of auto-correlations in the initial state components and in the energy spectrum of the Hamiltonian generating the time evolution.

### Timescales

Up to now, the analysis has been carried out using timescales greater than the Heisenberg time \(t_{\rm H}\); an interesting question is whether the universal distribution can appear for smaller timescales. Figure 7 displays the distribution of \(RP\) for the disordered model in the ergodic phase with \(h=0.5\) and for different time intervals: at very short times (a), during the power-law decay (b), around the correlation hole (c), and at saturation (d). The reader is referred to Fig. 3(a) and the discussion in Sec. IV for a reminder of all these regimes. At very short times the return probability takes very similar values and a peaked distribution is obtained, Fig. 7(a). Once the initial state begins the transition to other basis states, as time increases, the distribution \(P(RP)\) moves gradually toward an exponential shape. The exponential shape is completely achieved when the time is large enough, as in Fig. 7(d); however, note that for times when the correlation hole appears [Fig. 7(c)] the distribution of \(RP\) also looks like an exponential, but not quite as good as in Fig. 7(d).

Figure 6: Distribution of spacings \(P(s)\) (upper panels), distribution of the late-time values of \(RP(t)\) (middle panels) and \(K(t)\) (lower panels). The left column contains results for the XX model, while in the right column the results are for the XXZ model. The considered time windows are \(t/t_{\rm H}\in(9.092,11.928)\) for the XX model and \(t/t_{\rm H}\in(9.859,12.935)\) for the XXZ model. \(L=18\).

Figure 7: Distribution \(P(RP)\) of the return probability for the disordered model in the ergodic phase with \(h=0.5\) in different time windows. Initial decay, \(t/t_{\rm H}\in(0.000034,0.000089)\) (a), power-law decay, \(t/t_{\rm H}\in(0.00034,0.00089)\) (b), around the time where the minimum of the correlation hole is located, \(t/t_{\rm H}\in(0.0091,0.0228)\) (c) and saturation, \(t/t_{\rm H}\in(11.38,14.93)\) (d). Solid curves are given by \(P(RP)=\exp(-RP)\). System size is \(L=18\).
We then infer that, in order to observe the universal exponential distribution, one should wait for times beyond the Heisenberg time, \(t_{\rm H}\). We now have the opportunity to mention that the exponential distribution of \(RP(t)\) over a time interval coincides with the exponential observed in [61], also at large times, but there a single time \(t_{0}\) was fixed and the pdf was computed over different disorder realizations and initial states. This is consistent with the fact that, in order to study ergodicity, one should do it once at equilibrium.

### Spin autocorrelation function and connected spin-spin correlation function

In this section we focus on the analysis of the pdfs of two dynamical observables of great interest in quantum simulations with cold atoms and ion traps, the spin autocorrelation function \(I(t)\) and the connected spin-spin correlation function \(C(t)\), given by Eqs. (4) and (5), respectively. We see in the upper panels of Fig. 8 the pdf of the spin autocorrelation function, \(P[I(t/t_{\rm H}>1)]\), while in the lower panels of Fig. 8 we have the pdf of the connected spin-spin correlation function, \(P[C(t/t_{\rm H}>1)]\). Both quantities are evolved by the disordered model with different disorder strengths. As predicted in Sec. V, when \(h=0.5\) the late-time pdfs are Gaussian for both observables [Figs. 8(a) and (d)]. Once the system is outside the ergodic phase, the pdfs of both observables do not conform to the Gaussian shape. Comparing Figs. 8(b) and (e), for \(h=3.75\), we see that \(P(I)\) is more sensitive to the change in disorder than \(P(C)\). Finally, when the disorder strength is \(h=6.0\), the lack of correspondence between the pdfs of both observables and the Gaussian is even more evident. Again, the shape of the pdf can be used as a probe of the MBL transition. Interestingly, our results on the ergodic side of the MBL transition are in line with results for non-interacting systems in [3; 4].

## VII Conclusions

We analyzed the late-time probability distribution functions of dynamical quantities of interest for experimental platforms. We argued that the pdfs are universal in the sense that the only requirement is for the CLT to hold. This requires uncorrelated initial state components and an uncorrelated energy spectrum. For chaotic quantum systems these conditions are fulfilled, but also for integrable systems like the XXZ model. Strong degeneracies in the energy spectrum, like in the XX model, prevent the CLT from holding, as do correlated initial state components, like in the disordered model at intermediate and strong disorder strengths. For generic observables, an additional condition should be fulfilled: uncorrelated matrix elements in the energy representation. This happens for the two quantities studied in this work, the spin autocorrelation function and the connected spin-spin correlation function. No averages are needed; our analysis was based on a single sample of the dynamics. Details about auto-correlations between initial state components, energy levels, and matrix elements of observables deserve a deeper analysis, but this is left for a future work. We expect our results to motivate further studies of late-time pdfs in several quantum systems.

###### Acknowledgements.

I.V.-F. and E.J.T.-H. are grateful to LNS-BUAP for their supercomputing facility. E.J.T.-H.
is also grateful for financial support from VIEP-BUAP, project No. 00270-2022.
2307.14596
HUTFormer: Hierarchical U-Net Transformer for Long-Term Traffic Forecasting
Traffic forecasting, which aims to predict traffic conditions based on historical observations, has been an enduring research topic and is widely recognized as an essential component of intelligent transportation. Recent proposals on Spatial-Temporal Graph Neural Networks (STGNNs) have made significant progress by combining sequential models with graph convolution networks. However, due to high complexity issues, STGNNs only focus on short-term traffic forecasting, e.g., 1-hour forecasting, while ignoring more practical long-term forecasting. In this paper, we make the first attempt to explore long-term traffic forecasting, e.g., 1-day forecasting. To this end, we first reveal its unique challenges in exploiting multi-scale representations. Then, we propose a novel Hierarchical U-net TransFormer (HUTFormer) to address the issues of long-term traffic forecasting. HUTFormer consists of a hierarchical encoder and decoder to jointly generate and utilize multi-scale representations of traffic data. Specifically, for the encoder, we propose window self-attention and segment merging to extract multi-scale representations from long-term traffic data. For the decoder, we design a cross-scale attention mechanism to effectively incorporate multi-scale representations. In addition, HUTFormer employs an efficient input embedding strategy to address the complexity issues. Extensive experiments on four traffic datasets show that the proposed HUTFormer significantly outperforms state-of-the-art traffic forecasting and long time series forecasting baselines.
Zezhi Shao, Fei Wang, Zhao Zhang, Yuchen Fang, Guangyin Jin, Yongjun Xu
2023-07-27T02:43:21Z
http://arxiv.org/abs/2307.14596v1
# HUTFormer: Hierarchical U-Net Transformer for Long-Term Traffic Forecasting

###### Abstract

Traffic forecasting, which aims to predict traffic conditions based on historical observations, has been an enduring research topic and is widely recognized as an essential component of intelligent transportation. Recent proposals on Spatial-Temporal Graph Neural Networks (STGNNs) have made significant progress by combining sequential models with graph convolution networks. However, due to high complexity issues, STGNNs only focus on short-term traffic forecasting, _e.g._, 1-hour forecasting, while ignoring more practical long-term forecasting. In this paper, we make the first attempt to explore long-term traffic forecasting, _e.g._, 1-day forecasting. To this end, we first reveal its unique challenges in exploiting multi-scale representations. Then, we propose a novel _Hierarchical U-net Transformer_ (HUTFormer) to address the issues of long-term traffic forecasting. HUTFormer consists of a hierarchical encoder and decoder to jointly _generate_ and _utilize_ multi-scale representations of traffic data. Specifically, for the encoder, we propose window self-attention and segment merging to extract multi-scale representations from long-term traffic data. For the decoder, we design a cross-scale attention mechanism to effectively incorporate multi-scale representations. In addition, HUTFormer employs an efficient input embedding strategy to address the complexity issues. Extensive experiments on four traffic datasets show that the proposed HUTFormer significantly outperforms state-of-the-art traffic forecasting and long time series forecasting baselines.

Traffic forecasting, long-term time series forecasting, multivariate time series forecasting.

## 1 Introduction

Traffic forecasting aims at predicting future traffic conditions (_e.g._, traffic speed or flow) based on historical traffic conditions observed by sensors. With the development of Intelligent Transportation Systems (ITS), traffic forecasting fuels a wide range of services related to traffic scheduling, public safety, _etc._[1, 2]. For example, predicting long-term traffic changes (_e.g._, 1 day) is valuable for people to plan their routes in advance to avoid possible traffic congestion. In general, traffic data consists of multiple time series, where each time series records traffic conditions observed by sensors deployed on a road network. A critical property of traffic data is that there exist strong correlations between time series owing to the connectivity of road networks [3]. To make accurate traffic forecasts, state-of-the-art proposals [1, 3, 4, 5] usually adopt Spatial-Temporal Graph Neural Networks (STGNNs), which model the correlations between time series based on Graph Convolution Networks (GCNs) [3, 6, 7]. However, there is no free lunch: graph convolution brings a significant improvement in performance but, at the same time, a significant increase in complexity. The computational complexity usually increases linearly or quadratically with the length and number of time series [8]. Therefore, it is difficult for STGNNs to scale to long-term historical traffic data, let alone predict long-term future traffic conditions. In fact, most existing works focus on short-term traffic prediction, _e.g._, predicting the next 12 time steps (1 hour in commonly used datasets). Such an inability to make long-term traffic forecasts limits the practicality of these models. In this paper, we focus on long-term traffic forecasting, _e.g._, predicting a future day.
Beyond the correlations between time series, we argue that the long-term traffic forecasting task has its own uniqueness. In the following, we discuss it in detail to motivate the model design. An example of traffic data is shown in Figure 1(a). On the one hand, when observed from a global perspective, traffic data usually exhibit regular changes, _e.g._, daily periodicity.

Fig. 1: Examples of long-term traffic forecasting. Note that the plot below is the future part of the plot above.

On the other hand, local details are also extremely important for traffic forecasting. For example, we must capture the rapidly decreasing traffic when daily congestion occurs. To capture these different patterns, we argue that exploiting multi-scale representations of traffic data is the key challenge of accurate long-term traffic forecasting. Specifically, smaller-scale and larger-scale representations are extracted based on smaller and larger receptive fields, respectively. The former are usually semantically weak but fine-grained, which facilitates the prediction of local details, _e.g._, rapid changes during traffic congestion. In contrast, the latter are coarse-grained but semantically strong, which is helpful in predicting global changes, _e.g._, daily periodicity. An illustration is shown in Figure 1(b). The prediction based on large-scale features captures the daily periodicity but misses local details, which can be fixed by further incorporating small-scale features. However, it is a challenging task to exploit multi-scale representations of traffic data. We discuss it from two aspects: _generating_ and _utilizing_ multi-scale representations. _On the one hand_, most existing models cannot generate multi-scale representations of traffic data. State-of-the-art models for long-sequence time series forecasting [9] mainly adopt the Transformer architecture to capture long-term dependencies based on self-attention mechanisms [10]. However, standard self-attention naturally has a global receptive field and thus can only generate representations at a fixed scale. _On the other hand_, utilizing multi-scale representations for traffic forecasting is also a challenging task, as it usually requires a specific decoder. For example, in computer vision tasks like object detection and semantic segmentation, researchers designed decoders such as FPN [11] and U-Net [12] to utilize the multi-scale representations extracted by a pre-trained CNN [13] encoder. These architectures usually require pixel alignment of the input and output images. However, the historical and future sequences in traffic _forecasting_ problems are not the same sequence, _i.e._, they are not aligned, making existing approaches [11, 12] inapplicable. Based on the above discussion, we summarize three challenges that the desired long-term traffic forecasting model should address. First, it must _efficiently_ model the correlations between multiple long-term time series. Second, it should _generate_ multi-scale representations of traffic data with an encoder. Third, it should include a decoder suited to traffic forecasting tasks to effectively _utilize_ the multi-scale representations generated by the encoder. To address the above challenges, we propose a novel _Hierarchical U-net Transformer_ (named HUTFormer). As shown in Figure 2, HUTFormer is a two-stage model consisting of a hierarchical encoder and a hierarchical decoder, forming an inverted U-shaped structure.
_To address the efficiency problem_, HUTFormer designs an efficient input embedding strategy, which employs segment embedding and spatial-temporal positional encoding to significantly reduce the complexity of modeling multiple long-term time series in both the temporal and spatial dimensions. _To generate multi-scale representations_, the HUTFormer encoder proposes a window Transformer layer to limit the receptive field, and then designs segment merging as a pooling layer to extract larger-scale features. Thus, lower layers of the encoder focus on smaller-scale features, while higher layers generate larger-scale features. Then, HUTFormer makes an intermediate prediction based on the top-level representations. _To utilize multi-scale representations_, the HUTFormer decoder proposes a cross-scale attention mechanism to address the misalignment issue; it retrieves information for each segment of the intermediate prediction from the multi-scale representations, thus enabling the fine-tuning of the intermediate prediction. By exploiting the multi-scale representations of traffic data, HUTFormer is capable of making accurate long-term traffic forecasts. The main contributions of this paper are summarized as follows:

* To the best of our knowledge, this is the first attempt to study long-term traffic forecasting. We reveal its unique challenges in exploiting multi-scale representations of traffic data, and propose a novel _Hierarchical U-net Transformer_ (HUTFormer) to address them.
* We propose window self-attention and cross-scale attention mechanisms to generate and utilize multi-scale representations effectively. In addition, to address complexity issues, we design an input embedding strategy that includes segment embedding and spatial-temporal positional encoding.
* Extensive experiments on four traffic datasets show that the proposed HUTFormer significantly outperforms state-of-the-art traffic forecasting and long-sequence time series forecasting baselines, and effectively exploits the multi-scale representations of traffic data.

## 2 Related Work

### _Traffic Forecasting_

Previous traffic forecasting studies usually fall into two categories, _i.e._, knowledge-driven (_e.g._, queuing theory [14]) and early data-driven (_e.g._, sequential models [15, 16, 17, 18]). However, these methods usually ignore the correlations between time series and the high non-linearity of time series [1], which severely limits their effectiveness. To this end, Spatial-Temporal Graph Neural Networks (STGNNs) were proposed recently [3] to model the complex spatial-temporal correlations in traffic data. Specifically, STGNNs combine Graph Neural Networks (GNNs) [6, 7] and sequential models (_e.g._, CNN [18] or RNN [16]) to model the complex spatial-temporal correlations in traffic data. For example, DCRNN [3], STMetaNet [19], AGCRN [20], _etc._ [21, 22, 23], are RNN-based methods, which combine GNNs with recurrent neural networks. Graph WaveNet [4], MTGNN [24], STGCN [22], and StemGNN [25] are CNN-based methods [26], which usually combine GNNs with a Temporal Convolution Network (TCN [18]). Moreover, the attention mechanism is also a widely used technique in STGNNs [27, 28, 29, 30, 31]. Although STGNNs have achieved significant progress, their complexity is high. Specifically, their complexity usually increases linearly or quadratically with the length and the number of time series [8], since they need to deal with both temporal and spatial dependencies at every step.
Thus, most of them focus on short-term traffic forecasting based on short-term history data, _e.g._, predicting future 1-hour traffic conditions based on 1-hour historical data. A recent work, STEP [8], attempts to address this issue based on a time series pre-training model. It significantly enhances STGNNs' ability to exploit long-term historical data. However, STEP requires a downstream STGNN as the backbone, which still focuses on short-term traffic forecasting. Although STGNN-based traffic forecasting has made significant progress, these models only focus on short-term traffic forecasting and cannot handle long-term traffic forecasting. On the one hand, due to their high complexity, most of them cannot handle long-term history data, let alone predict long-term future traffic conditions. On the other hand, apart from efficiency issues, long-term traffic forecasting also has its unique challenges, which require exploiting multi-scale representations of traffic data to capture the complex long-term traffic dynamics.

### _Long-Sequence Time Series Forecasting_

Recently, long-sequence time series forecasting [9] has received much attention [9, 32, 33, 34, 35, 36, 37]. These works aim to make long-term future predictions by modeling long-term historical sequences. For example, Informer [9] proposes a ProbSparse self-attention mechanism to replace standard self-attention, which enhances the predictive ability of the standard Transformer in the long-sequence forecasting problem. Autoformer [32] designs an efficient Auto-Correlation mechanism to conduct dependency discovery and information aggregation at the series level. DLinear [35] rethinks Transformer-based techniques and proposes a simple linear model based on decomposition to achieve better accuracy. Although these models have made considerable progress in long-term time series forecasting, they are not designed for traffic data, which significantly affects their effectiveness in traffic forecasting problems. We discuss this from two aspects. First of all, there are strong correlations between the multiple time series in traffic data, which is an important bottleneck in traffic forecasting [3]. However, long-sequence time series forecasting models usually do not pay attention to such spatial correlations. Second, as discussed in Section 1, long-term traffic forecasting requires exploiting multi-scale representations of traffic data to capture the complex long-term traffic dynamics. However, long-sequence forecasting models usually rely on the self-attention mechanism and its variants, which cannot explicitly generate multi-scale features.

## 3 Preliminaries

In this section, we define the notions of traffic data and the traffic forecasting task. Frequently used notations are summarized in Table I.

**Definition 1**.: **Traffic Data**__\(\mathcal{X}\in\mathbb{R}^{T\times N\times C}\) denotes the observations from all sensors on the traffic network, where \(T\) is the number of time steps, \(N\) is the number of traffic sensors, and \(C\) is the number of features collected by the sensors. We additionally denote the data from sensor \(i\) as \(\mathbf{X}^{i}\in\mathbb{R}^{T\times C}\).

**Definition 2**.: **Traffic Forecasting** aims to predict the traffic values \(\mathcal{Y}\in\mathbb{R}^{T_{f}\times N\times C}\) of the \(T_{f}\) nearest future time steps based on the given historical traffic data \(\mathcal{X}\in\mathbb{R}^{T_{h}\times N\times C}\) from the past \(T_{h}\) time steps.
In this paper, we focus on long-term traffic forecasting, _e.g._, forecasting for a day in the future.

## 5 Model Architecture

### _Overview_

As illustrated in Figure 2, HUTFormer is based on a hierarchical U-net structure to _generate_ and _utilize_ multi-scale representations of traffic data. In this subsection, we intuitively discuss each component of HUTFormer and its two-stage training strategy. First, we discuss the hierarchical encoder. The window Transformer layer is the basis for generating multi-scale representations, which calculates self-attention within a small window to limit the receptive field. Then, segment merging acts as a pooling layer, reducing the sequence length to produce larger-scale representations. By combining them, lower layers can focus on smaller-scale features while higher layers focus on larger-scale features, thus successfully generating multi-scale features. Then, an intermediate prediction is made based on the top-layer representations. However, considering that the top-layer features are semantically strong but coarse-grained, the intermediate prediction may fail to capture rapidly changing local details, _e.g._, the red line in Figure 1(b). To address the above problem, the hierarchical decoder aims to fine-tune the intermediate prediction by incorporating multi-scale representations. U-net [12, 38] is a popular structure for utilizing multi-scale representations, especially in computer vision tasks (_e.g._, semantic segmentation). In these tasks, the pixels of the input and target images are aligned, _i.e._, models operate on the same image. However, for traffic _forecasting_ tasks, the input and output sequences are not the same sequence, _i.e._, not aligned. Thus, the representations generated by the encoder and the decoder cannot be directly superimposed as regular U-net structures [12, 38] do for computer vision tasks. To this end, we design a cross-scale Transformer layer, which uses the representations from the decoder as _queries_ and the multi-scale features from the encoder as _keys_ and _values_ to retrieve information.

TABLE I: Frequently used notation.

\begin{table} \begin{tabular}{c|l} \hline \hline **Notations** & **Definitions** \\ \hline \(T\) & The length of history traffic data. \\ \(T_{f}\) & The length of future traffic data. \\ \(N\) & The number of time series. \\ \(C\) & The number of feature channels of a traffic sensor. \\ \(\mathcal{X}\) & History data of shape \(\mathbb{R}^{T\times N\times C}\) \\ \(\mathcal{Y}\) & Future data of shape \(\mathbb{R}^{T_{f}\times N\times C}\) \\ \(\mathbf{X}^{i}\) & History data of sensor \(i\). \\ \(\mathbf{Y}^{i}\) & Future data of sensor \(i\). \\ \(\tilde{\mathbf{Y}}^{i}\) & Prediction data of sensor \(i\). \\ \(L\) & The segment size. \\ \(P\) & The number of segments. \(T=P\times L\). \\ \(d\) & The hidden dimension. \\ \(\mathbf{W}\) & Parameter matrix of the fully connected layer. \\ \(\mathbf{b}\) & Parameter of the bias of the fully connected layer. \\ \(\mathbf{E}\) & Spatial embeddings of shape \(\mathbb{R}^{N\times d_{1}}\). \\ \(\mathbf{T}\) & Temporal embeddings. \\ \(N_{D}\) & The number of time slots of a day. \\ \(N_{W}\) & The number of days in a week (7). \\ \(\mathbf{S}\) & Embeddings of each segment after segment embedding. \\ \(\mathbf{U}\) & Embeddings of each segment after spatial-temporal positional encoding. \\ \(\mathbf{H}\) & Hidden states. \\ \hline \hline \end{tabular} \end{table}
Such a top-down pathway and lateral connections help to combine the multi-scale representations, thus enhancing the prediction accuracy. In addition, HUTFormer addresses complexity issues based on an efficient input embedding strategy, which consists of segment embedding and spatial-temporal positional encoding. On the one hand, segment embedding reduces complexity along the temporal dimension by using time series segments as basic input tokens. This simple operation has significant benefits in both reducing the length of the input sequence and providing more robust semantics [8]. On the other hand, spatial-temporal positional encoding is designed to replace the standard positional encoding [39, 10] in Transformers. More importantly, it efficiently models the correlations among time series from the perspective of solving the indistinguishability of samples [40, 41], avoiding the high complexity of conducting graph convolution [3, 4] in the spatial dimension. Finally, we describe the two-stage training strategy. The first stage trains the hierarchical encoder based on the Mean Absolute Error (MAE) between the intermediate prediction and the ground truth. In the second stage, we only train the decoder, while the parameters of the encoder are fixed so that it acts as the multi-scale feature extractor. The reason for adopting the two-stage strategy is that traffic forecasting tasks are different from tasks that employ an end-to-end strategy (_e.g._, semantic segmentation [12, 38] and object detection [11] in computer vision). Specifically, in computer vision tasks, pre-trained vision models (_e.g._, a pre-trained ResNet [13]) usually serve as the backbone to extract multi-scale features [11]. However, there is no pre-trained model for time series that can extract multi-scale features. Therefore, optimizing the feature extractor (_i.e._, the encoder) and the downstream network (_i.e._, the decoder) in an end-to-end fashion may be insufficient. The experimental results in Section 6.5 also verify this hypothesis. Next, we introduce each component in detail.

### _Input Embedding_

**Segment embedding.** Most previous works use single data points as the basic input units. However, isolated points of a time series usually carry weak semantics [8] and are more easily affected by noise. Therefore, HUTFormer adopts segment embedding, _i.e._, dividing the input sequence into several segments to obtain the input tokens. Specifically, given the time series \(\mathbf{X}^{i}\in\mathbb{R}^{T\times C}\) from sensor \(i\), HUTFormer divides it into \(P\) non-overlapping segments of length \(L\), _i.e._, \(T=P*L\). We denote the \(j\)th segment as \(\mathbf{X}^{i}_{j}\in\mathbb{R}^{LC}\). Then, we conduct the input embedding layer based on these segments: \[\mathbf{S}^{i}_{j}=\mathbf{W}\cdot\mathbf{X}^{i}_{j}+\mathbf{b}, \tag{1}\] where \(\mathbf{S}^{i}_{j}\in\mathbb{R}^{d}\) is the embedding of segment \(j\) of the time series from sensor \(i\), and \(d\) is the hidden dimension. \(\mathbf{W}\in\mathbb{R}^{d\times(LC)}\) and \(\mathbf{b}\in\mathbb{R}^{d}\) are learnable parameters shared by all segments. In summary, applying segment embedding brings two benefits. First, it provides more robust semantics. Second, it significantly reduces the sequence length, lowering the computational complexity.

Fig. 2: The overview of the proposed HUTFormer. Left: the hierarchical encoder. It generates multi-scale features for traffic data based on the window Transformer layer and segment merging, and makes an intermediate prediction. Right: the hierarchical decoder. It fine-tunes the intermediate prediction by incorporating multi-scale features based on the cross-scale Transformer layer. In addition, segment embedding and spatial-temporal positional encoding are proposed to address complexity issues.
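To make Eq. (1) concrete, here is a minimal PyTorch sketch (ours, not the released implementation; the tensor layout and the 207-sensor example shape are our assumptions):

```python
import torch
import torch.nn as nn

class SegmentEmbedding(nn.Module):
    """Eq. (1): project each non-overlapping length-L segment to a d-dim token."""
    def __init__(self, L: int, C: int, d: int):
        super().__init__()
        self.L = L
        self.proj = nn.Linear(L * C, d)   # shared W and b of Eq. (1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, T, C) history with T divisible by L
        B, N, T, C = x.shape
        segments = x.reshape(B, N, T // self.L, self.L * C)
        return self.proj(segments)        # (B, N, P, d) with P = T / L

embed = SegmentEmbedding(L=12, C=1, d=96)
tokens = embed(torch.randn(4, 207, 288, 1))   # e.g., one day of 5-minute steps
```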
**Spatial-temporal positional encoding.** In this paper, we propose to replace the standard positional encoding in Transformer-based networks [10, 39] with Spatial-Temporal Positional Encoding (ST-PE). Specifically, given the segment embedding \(\mathbf{S}_{j}^{i}\in\mathbb{R}^{d}\) of segment \(j\) from time series \(i\), ST-PE conducts positional encoding on the spatial and temporal dimensions simultaneously: \[\mathbf{U}_{j}^{i}=\text{Linear}(\mathbf{S}_{j}^{i}\parallel\mathbf{E}^{i} \parallel\mathbf{T}_{j}^{TiD}\parallel\mathbf{T}_{j}^{DiW}). \tag{2}\] On the spatial dimension, we define the spatial positional embeddings \(\mathbf{E}\in\mathbb{R}^{N\times d_{1}}\), where \(N\) is the number of time series (_i.e._, sensors), and \(d_{1}\) is the hidden dimension. On the temporal dimension, we define two semantic positional embeddings, \(\mathbf{T}^{TiD}\in\mathbb{R}^{N_{D}\times d_{2}}\) and \(\mathbf{T}^{DiW}\in\mathbb{R}^{N_{W}\times d_{3}}\), where \(N_{D}\) is the number of time slots of a day (determined by the sensors' sampling frequency) and \(N_{W}=7\) is the number of days in a week. The temporal embeddings are thus shared among slots for the same time of the day and the same day of the week. Semantic temporal positional embeddings are helpful since traffic systems usually reflect the periodicity of human society. In addition, kindly note that all other baseline models [33, 34, 32, 1, 4] also use such temporal features, so there is no unfairness. Linear(\(\cdot\)) is a linear layer used to reduce the hidden dimension. \(\mathbf{E}\), \(\mathbf{T}^{TiD}\), and \(\mathbf{T}^{DiW}\) are trainable parameters. The embedding \(\mathbf{E}\) is vital for reducing the complexity of modeling the spatial correlations between time series. This is because attaching spatial embeddings plays a similar role to GCNs in terms of solving the indistinguishability of samples [40], but with two primary advantages. On the one hand, it is more efficient than GCNs, which usually have \(\mathcal{O}(N^{2})\) complexity. On the other hand, it does not introduce as many additional network parameters as approaches based on variable-specific modeling [7, 20].

### _Hierarchical Encoder_

**Window Transformer Layer.** Standard Transformer layers [10] are designed based on the multi-head self-attention mechanism. As shown in Figure 3(a), it computes the attention among all input tokens. Therefore, each standard Transformer layer has a global receptive field, and many works [34, 32, 9, 33] try to capture long-term dependencies based on this feature. However, the global receptive field makes standard Transformer layers unable to generate multi-scale features [42]. Inspired by recent developments in computer vision [42], we apply window self-attention in HUTFormer to extract hierarchical multi-scale features. An example of window self-attention with window size 2 is shown in Figure 3(b). Window self-attention restricts the attention computation to non-overlapping windows, thereby limiting the size of the receptive field.
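As an illustration, a compact PyTorch sketch (ours; reusing `nn.MultiheadAttention` as the attention primitive is an assumption about implementation details) of window self-attention:

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Self-attention restricted to non-overlapping windows, Figure 3(b)."""
    def __init__(self, d: int, n_heads: int, window: int):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (B, P, d) token sequence, P divisible by the window size
        B, P, d = h.shape
        w = self.window
        windows = h.reshape(B * (P // w), w, d)   # fold windows into the batch
        out, _ = self.attn(windows, windows, windows)  # attention per window
        return out.reshape(B, P, d)

wmsa = WindowSelfAttention(d=96, n_heads=4, window=2)
out = wmsa(torch.randn(4, 24, 96))   # attention only within each pair of tokens
```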
By replacing multi-head self-attention in standard Transformer layers [10] with the Window Multi-head Self-Attention (W-MSA), we present the window Transformer layer: \[\begin{split}\mathbf{H}^{in^{\prime}}&=\text{W-MSA}(\text{LN}(\mathbf{H}^{in}))+\mathbf{H}^{in},\\ \mathbf{H}^{out}&=\text{MLP}(\text{LN}(\mathbf{H}^{in^{\prime}}))+\mathbf{H}^{in^{\prime}},\end{split} \tag{3}\] where \(\text{LN}(\cdot)\) is the layer normalization, and \(\text{MLP}(\cdot)\) is the multi-layer perceptron. \(\mathbf{H}^{in}\in\mathbb{R}^{P\times d}\) and \(\mathbf{H}^{out}\in\mathbb{R}^{P\times d}\) are the input and output sequences. \(P\) is the sequence length, and \(d\) is the hidden dimension. By limiting the receptive field size, the window Transformer layer is the basis for extracting multi-scale features. **Segment Merging.** To generate hierarchical multi-scale representations, we adopt segment merging, which reduces the number of tokens and increases the number of hidden dimensions as the network gets deeper. As illustrated in Figure 4, segment merging divides the token series into non-overlapping groups of size 2, and concatenates the features within each group. By combining the segment merging and window Transformer layer, we get the basic block of the hierarchical encoder (_i.e._, the blue block in Figure 2). Assuming \((\mathbf{H}^{i})_{enc}^{l}\in\mathbb{R}^{P^{l}\times d^{l}}\) is the representation of time series \(i\) after block \(l\) (\(l\geq 1\)) of the encoder, the \((l+1)\)th block is computed as: \[\begin{split}(\mathbf{H}^{i})_{enc}^{l^{\prime}}&=\text{SegmentMerging}((\mathbf{H}^{i})_{enc}^{l}),\\ (\mathbf{H}^{i})_{enc}^{l+1}&=\text{Window Transformer}((\mathbf{H}^{i})_{enc}^{l^{\prime}}),\end{split} \tag{4}\] where \((\mathbf{H}^{i})_{enc}^{l+1}\in\mathbb{R}^{P^{l+1}\times d^{l+1}}\) is the representation of time series \(i\) after block \(l+1\) of the encoder. \(P^{l+1}=\frac{P^{l}}{2}\) is the number of tokens after the \((l+1)\)th block, and \(d^{l+1}=2d^{l}\) is the hidden dimension. **Prediction Layer.** Assuming there are \(S\) blocks in the encoder, HUTFormer makes an intermediate prediction with a linear layer: \[\hat{\mathbf{Y}}_{enc}^{i}=\text{Linear}(\mathop{\parallel}_{j=1}^{P^{S}}(\mathbf{H}_{j}^{i})_{enc}^{S}), \tag{5}\] where \(P^{S}\) is the number of tokens after the \(S\)th block. \(\hat{\mathbf{Y}}_{enc}^{i}\in\mathbb{R}^{T_{f}\times C}\) is the intermediate prediction for time series \(i\). Considering the prediction from all \(N\) time series, \(\hat{\mathcal{Y}}^{enc}\in\mathbb{R}^{T_{f}\times N\times C}\), we compute the Mean Absolute Error (MAE) as the regression loss to train the hierarchical encoder: \[\mathcal{L}_{enc}=\frac{1}{T_{f}NC}\sum_{j=1}^{T_{f}}\sum_{i=1}^{N}\sum_{k=1}^{C}|\hat{\mathcal{Y}}_{ijk}^{enc}-\mathcal{Y}_{ijk}|. \tag{6}\] Fig. 4: An illustration of segment merging. Fig. 3: Standard self-attention _vs._ window self-attention. ### _Hierarchical Decoder_ **Cross-Scale Transformer Layer.** The hierarchical decoder aims to effectively utilize the multi-scale features to fine-tune each segment of the intermediate prediction. However, as discussed in Section 5.1, the history and future sequences in traffic forecasting tasks are not aligned, so the feature sequences extracted by the encoder and the decoder cannot be directly superimposed. Therefore, we design a cross-scale attention mechanism to select and incorporate multi-scale features.
Different from self-attention, cross-scale attention utilizes the representations of the decoder as _queries_ to retrieve the multi-scale features from the encoder. For brevity, we denote \(\mathbf{H}_{enc}\in\mathbb{R}^{P_{enc}\times d_{enc}}\) as the representation from the encoder and \(\mathbf{H}_{dec}\in\mathbb{R}^{P_{dec}\times d_{dec}}\) as the corresponding representation from the decoder. Then, the Cross-scale Attention (CA) is computed as: \[\begin{split}\text{CA}(\mathbf{H}_{enc},\mathbf{H}_{dec})&=\text{Softmax}(\frac{\mathbf{H}_{dec}(\mathbf{H}_{enc}^{{}^{\prime}})^{T}}{\sqrt{d_{dec}}})\mathbf{H}_{enc}^{{}^{\prime}},\\ \mathbf{H}_{enc}^{{}^{\prime}}&=\text{Linear}(\mathbf{H}_{enc}).\end{split} \tag{7}\] The \(\text{Linear}(\cdot)\) layer is used to transform the hidden dimension from \(d_{enc}\) to \(d_{dec}\). By replacing the multi-head self-attention with Multi-head Cross-scale Attention (MCA), we present the cross-scale Transformer layer as: \[\begin{split}\mathbf{H}_{dec}^{in^{\prime}}=\text{MCA}(\text{LN}(\mathbf{H}_{enc}^{in}),\text{LN}(\mathbf{H}_{dec}^{in}))+\mathbf{H}_{dec}^{in},\\ \mathbf{H}_{dec}^{out}=\text{MLP}(\text{LN}(\mathbf{H}_{dec}^{in^{\prime}}))+\mathbf{H}_{dec}^{in^{\prime}},\end{split} \tag{8}\] where \(\mathbf{H}_{enc}^{in}\) is the multi-scale feature from the encoder, and \(\mathbf{H}_{dec}^{in}\) is the input feature from the decoder. \(\mathbf{H}_{dec}^{out}\) is the output of the cross-scale Transformer layer. **Prediction Layer.** Assuming \((\mathbf{H}_{j}^{i})_{dec}^{S}\in\mathbb{R}^{d_{dec}}\) is the representation of the decoder's last block (_i.e._, the \(S\)th block) for the \(j\)th segment of the \(i\)th time series, HUTFormer makes the final prediction for each segment with a shared linear layer: \[(\hat{\mathbf{Y}}_{j}^{i})_{dec}=\text{Linear}((\mathbf{H}_{j}^{i})_{dec}^{S}), \tag{9}\] where \((\hat{\mathbf{Y}}_{j}^{i})_{dec}\in\mathbb{R}^{LC}\) is the final prediction of segment \(j\) of time series \(i\). Similar to the encoder, we consider the prediction from all \(P_{dec}\) segments (\(P_{dec}\times L=T_{f}\)) of all \(N\) time series, \(\hat{\mathcal{Y}}^{dec}\in\mathbb{R}^{T_{f}\times N\times C}\), and compute the MAE loss to train the hierarchical decoder: \[\mathcal{L}_{dec}=\frac{1}{T_{f}NC}\sum_{j=1}^{T_{f}}\sum_{i=1}^{N}\sum_{k=1}^{C}|\hat{\mathcal{Y}}_{ijk}^{dec}-\mathcal{Y}_{ijk}|. \tag{10}\] Kindly note that the parameters of the encoder are fixed during this stage, so that it serves as a pre-trained model for extracting robust hierarchical multi-scale representations of traffic data. ## 6 Experiments In this section, we conduct extensive experiments on four real-world traffic datasets to validate the effectiveness of HUTFormer for long-term traffic forecasting. First, we introduce the experimental settings, including datasets, baselines, and implementation details. Then, we compare HUTFormer with other state-of-the-art traffic forecasting baselines and long-sequence time series forecasting baselines. Furthermore, we conduct more experiments to evaluate the impact of important components and strategies, including the effectiveness of the hierarchical U-net structure, the input embedding strategy, and the two-stage training strategy. ### _Experimental Setting_ **Datasets.** We conduct experiments on four commonly used traffic datasets, including two traffic speed datasets (METR-LA and PEMS-BAY) and two traffic flow datasets (PEMS04 and PEMS08). The statistical information is summarized in Table II.
* **METR-LA** is a traffic speed dataset collected from loop detectors located on the LA County road network [43]. It contains data of 207 selected sensors over a period of 4 months from Mar to Jun in 2012 [3]. The traffic information is recorded at the rate of every 5 minutes, and the total number of time slices is 34,272. * **PEMS-BAY** is a traffic speed dataset collected from the California Transportation Agencies (CalTrans) Performance Measurement System (PeMS) [44]. It contains data of 325 sensors in the Bay Area over a period of 6 months from Jan 1st 2017 to May 31st 2017 [3]. The traffic information is recorded at the rate of every 5 minutes, and the total number of time slices is 52,116. * **PEMS04** is a traffic flow dataset also collected from CalTrans PeMS. It contains data of 307 sensors over a period of 2 months from Jan 1st 2018 to Feb 28th 2018 [30]. The traffic information is recorded at the rate of every 5 minutes, and the total number of time slices is 16,992. * **PEMS08** is a public traffic flow dataset collected from CalTrans PeMS. Specifically, PEMS08 contains data of 170 sensors in San Bernardino over a period of 2 months from July 1st 2018 to Aug 31st 2018 [30]. The traffic information is recorded at the rate of every 5 minutes, and the total number of time slices is 17,856. **Baselines.** On the one hand, we select six traffic forecasting baselines, including * **DCRNN**[3] (Diffusion Convolutional Recurrent Neural Network) is one of the earliest works for STGNN-based traffic forecasting, which replaces the fully connected layer in GRU [16] with a diffusion convolutional layer to form a Diffusion Convolutional Gated Recurrent Unit (DCGRU). * **Graph WaveNet**[4] is a traffic forecasting model, which stacks gated temporal convolution layers and GCN layers to jointly capture the spatial and temporal dependencies. * **MTGNN**[24] is a traffic forecasting model, which extends Graph WaveNet through the mix-hop propagation layer in the spatial module, the dilated inception \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Type** & **Dataset** & **\# Samples** & **\# Sensors** & **Sample Rate** \\ \hline \multirow{2}{*}{**Speed**} & **METR-LA** & 34272 & 207 & 5 minutes \\ & **PEMS-BAY** & 52116 & 325 & 5 minutes \\ \hline \multirow{2}{*}{**Flow**} & **PEMS04** & 16992 & 307 & 5 minutes \\ & **PEMS08** & 17856 & 170 & 5 minutes \\ \hline \hline \end{tabular} \end{table} TABLE II: Statistics of datasets. layer in the temporal module, and a delicate graph learning layer. * **STID**[40] is a simple but effective baseline for traffic forecasting, which identifies the indistinguishability of samples in both spatial and temporal dimensions as a key bottleneck, and addresses the indistinguishability by attaching spatial and temporal identities. * **STEP**[8] is a traffic forecasting model, which enhances existing STGNNs with the help of a time series pre-training model. It significantly extends the length of usable historical data. * **D\({}^{2}\)STGNN**[1]: D\({}^{2}\)STGNN is a state-of-the-art traffic forecasting model, which identifies the diffusion process and the inherent process in traffic data, and further decouples them for better modeling. On the other hand, we also select six long-sequence forecasting baselines, including * **HI**[45]: HI (Historical Inertia) is a basic baseline for long-sequence time series forecasting problems, which directly takes the most recent time steps in the input as the output.
* **DLinear**[35]: DLinear is a simple but effective long-sequence time series forecasting model, which decomposes the time series into a trend and a remainder series and employs two one-layer linear networks to model these two series. * **Informer**[9]: Informer is a model for long-sequence time series forecasting, which designs a ProbSparse self-attention mechanism and a distilling operation to handle the challenge of the quadratic complexity in the standard Transformer. Also, it carefully designs a generative decoder to alleviate the limitation of the standard encoder-decoder architecture. * **Autoformer**[32]: Autoformer is a model for long-sequence time series forecasting, which is proposed as a decomposition architecture by embedding the series decomposition block as an inner operator. Besides, it designs an efficient Auto-Correlation mechanism to conduct dependency discovery and information aggregation at the series level. * **FEDformer**[33]: FEDformer is a frequency-enhanced Transformer for long-sequence time series forecasting problems. It proposes an attention mechanism with low-rank approximation in frequency and a mixture-of-experts decomposition to control the distribution shifting. * **Pyraformer**[34]: Pyraformer is a pyramidal attention-based model for long-sequence time series forecasting. Pyramidal attention can effectively describe short and long temporal dependencies with low time and space complexity. **Metrics.** In this paper, we evaluate the performance of all baselines by the Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) metrics. First, the MAE metric reflects the absolute prediction error, but is affected by the units of the dataset. For example, traffic speed datasets usually take values between 0 km/h and 70 km/h, while traffic flow datasets usually take values between 0 and hundreds. Thus, we also adopt MAPE, which can eliminate the impact of data units and reflects the relative error, helping to understand the accuracy more intuitively. **Implementation.** For all datasets, we use historical \(T_{p}=288\) time steps (_i.e._, one day) to predict future \(T_{f}=288\) time steps. For HUTFormer, we set the segment length \(L\) to 12, and the number of segments \(P=24\) (\(L\times P=288\)). We set the window size to \(3\). We set the hidden dimension of the temporal embedding \(\mathbf{T}^{TiD}\) to 8, and all other hidden dimensions \(d\) to 32. The depth of HUTFormer is set to \(4\). For baselines, we adopt the default settings in their papers. Moreover, as discussed before, STGNNs cannot directly handle the long-term traffic forecasting task due to their high complexity. Therefore, we first apply the segment embedding to reduce the length of input tokens for them1. Footnote 1: Methods implemented with segment embeddings are marked with \(*\). **Optimization Settings.** For both the encoding and decoding stages, we apply the optimization settings in Table III. Specifically, we adopt Adam as our optimizer, and set the learning rate and weight decay to \(0.0005\) and \(0.0001\), respectively. The batch size is set to 64. In addition, we use a learning rate scheduler, MultiStepLR, which adjusts the learning rate at epochs 1, 40, 80, and 120 with gamma 0.5. Moreover, the gradient clip is set to 5. All the experiments in Section 6 are run on an Intel(R) Xeon(R) Gold 5217 CPU @ 3.00GHz, 128G RAM computing server, equipped with RTX 3090 graphics cards. ### _Main Results_ **Settings.** We follow the dataset division in previous works.
Specifically, for traffic speed datasets (METR-LA and PEMS-BAY), we use 70%, 10%, and 20% of the data for training, validating, and testing, respectively. For traffic flow datasets (PEMS04 and PEMS08), we use 60%, 20%, and 20% of the data for training, validating, and testing, respectively. We compare the performance at 1, 4, 8, 12, 16, and 24 hours (horizons 12, 48, 96, 144, 192, and 288) of forecasting on the MAE and MAPE metrics. **Results.** The results of traffic speed and traffic flow forecasting are shown in Tables IV and V, respectively. In general, HUTFormer consistently outperforms all baselines, indicating its effectiveness. Long-sequence forecasting models do not perform well on traffic forecasting tasks. We conjecture that the main reason is that these models do not fit the characteristics of traffic data. _First_, there exist strong correlations between the time series of traffic data. For example, due to the constraints of road networks, time series from adjacent sensors or \begin{table} \begin{tabular}{l|l} \hline \hline config & value \\ \hline optimizer & Adam \\ learning rate & 0.0005 \\ batch size & 64 \\ weight decay & 0.0001 \\ learning rate schedule & MultiStepLR \\ milestones & [1, 40, 80, 120] \\ gamma & 0.5 \\ gradient clip & 5 \\ \hline \hline \end{tabular} \end{table} TABLE III: Optimization settings. from similar geographical functional areas may be more similar [19]. Understanding and exploiting the correlations between time series is essential for traffic forecasting. However, long-sequence forecasting models are usually not concerned with such spatial dependencies. _Second_, as discussed in Section 1, the long-term traffic forecasting task requires exploiting multi-scale representations to capture the complex dynamics of traffic data. However, most long-sequence forecasting models mainly focus on capturing global dependencies based on self-attention mechanisms. For example, Informer [9] optimizes the efficiency of the original self-attention mechanism through the ProbSparse mechanism. Autoformer [32] conducts dependency discovery at the series level. They cannot generate and utilize multi-scale representations of traffic data. In summary, the above-mentioned uniqueness of long-term traffic forecasting tasks significantly affects the effectiveness of long-sequence forecasting models. Compared to long-sequence forecasting models, traffic forecasting models achieve better performance. This is mainly because they model correlations between time series with the help of graph convolution. Most of them [1, 3, 4, 8, 24] utilize diffusion convolution, a variant of graph convolution, to model the diffusion process at each time step. However, there is no free lunch: graph convolution brings high computational complexity [8]. As mentioned earlier, we had to implement these models with the segment embedding in HUTFormer to reduce the length of input tokens to make them runnable. Kindly note that although the latest baseline STEP [8] can handle long-term historical data, it still requires a downstream STGNN as the backend, which can only make short-term future predictions. In summary, these models only focus on short-term traffic forecasting and do not consider the uniqueness of long-term traffic forecasting, _i.e.,_ exploiting multi-scale representations. Compared to all baselines, HUTFormer achieves state-of-the-art performance by sufficiently addressing the issues of long-term traffic forecasting tasks.
Specifically, on the one hand, HUTFormer efficiently handles the correlations between long-term time series with spatial-temporal positional encoding and segment embedding. On the other hand, HUTFormer effectively generates and utilizes multi-scale representations based on the hierarchical U-net. ### _Efficiency_ In this section, we conduct more experiments to evaluate the efficiency of the HUTFormer variants introduced in Section 6.5. We conduct experiments with a single NVIDIA V100 graphics card with 32GB memory, and report the GPU memory usage and running time. Specifically, for the two-stage training variants, we report the largest GPU memory usage of the two stages and report the sum of the running time in the encoding and decoding stages. We conduct experiments on the METR-LA dataset. The results are shown in Figure 5. First, we can see that removing the segment embedding (_i.e., w/o SE_) will significantly increase the computational complexity and require more GPU memory. Second, compared with applying GCN, \begin{table} \begin{tabular}{c c c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Data**} & \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**@Horizon 12**} & \multicolumn{2}{c}{**@Horizon 48**} & \multicolumn{2}{c}{**@Horizon 96**} & \multicolumn{2}{c}{**@Horizon 144**} & \multicolumn{2}{c}{**@Horizon 192**} & \multicolumn{2}{c}{**@Horizon 288**} \\ \cline{3-13} & & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE \\ \hline \hline \multirow{11}{*}{**METR-LA**} & HI & 10.44 & 23.21\% & 10.42 & 23.19\% & 10.43 & 23.23\% & 10.43 & 23.32\% & 10.40 & 23.34\% & 10.22 & 22.81\% \\ & DLinear & 7.61 & 16.19\% & 12.86 & 23.79\% & 12.99 & 23.11\% & 12.90 & 23.48\% & 12.89 & 23.15\% & 13.07 & 23.33\% \\ & Informer & 4.65 & 15.52\% & 4.86 & 16.54\% & 4.98 & 17.16\% & 5.07 & 17.41\% & 5.07 & 17.30\% & 5.06 & 17.14\% \\ & Autoformer & 7.23 & 19.25\% & 7.27 & 19.73\% & 7.45 & 20.23\% & 7.83 & 21.49\% & 7.74 & 20.98\% & 8.41 & 22.43\% \\ & FEDformer & 8.78 & 22.29\% & 9.11 & 22.69\% & 9.12 & 22.75\% & 9.54 & 24.18\% & 9.81 & 24.76\% & 10.13 & 25.56\% \\ & Pyraformer & 4.22 & 12.84\% & 4.55 & 14.93\% & 4.75 & 15.81\% & 4.80 & 15.89\% & 4.81 & 15.68\% & 4.62 & 14.79\% \\ \cline{2-13} & DCRNN\({}^{*}\) & 4.07 & 12.74\% & 4.39 & 14.08\% & 4.44 & 14.02\% & 4.46 & 14.16\% & 4.51 & 14.41\% & 4.71 & 15.59\% \\ & GWNet\({}^{*}\) & 3.87 & 12.18\% & 4.19 & 13.60\% & 4.25 & 13.62\% & 4.42 & 15.46\% & 4.58 & 15.40\% & 4.51 & 15.09\% \\ & MTGNN\({}^{*}\) & 4.01 & 12.31\% & 4.31 & 13.84\% & 4.53 & 14.85\% & 4.59 & 14.77\% & 4.57 & 15.18\% & 4.75 & 15.93\% \\ & STID & 3.84 & 12.17\% & 4.13 & 14.11\% & 4.04 & 13.05\% & 4.11 & 13.65\% & 4.15 & 14.07\% & 4.17 & 13.83\% \\ & STEP\({}^{*}\) & 3.74 & 11.60\% & 4.14 & 13.24\% & 4.22 & 13.52\% & 4.38 & 14.07\% & 4.34 & 13.96\% & 4.43 & 14.42\% \\ & D\({}^{2}\)STGNN\({}^{*}\) & 3.71 & 11.24\% & 3.96 & 12.84\% & 3.99 & 13.26\% & 4.05 & 13.17\% & 4.05 & 13.36\% & 4.09 & 12.78\% \\ \cline{2-13} & HUTFormer & **3.59** & **10.93\%** & **3.77** & **11.88\%** & **3.79** & **11.86\%** & **3.80** & **12.08\%** & **3.82** & **12.18\%** & **3.84** & **12.28\%** \\ \hline \hline \multirow{11}{*}{**PEMS-BAY**} & HI & 3.37 & 7.84\% & 3.36 & 7.80\% & 3.36 & 7.77\% & 3.36 & 7.76\% & 3.36 & 7.74\% & 3.38 & 7.79\% \\ & DLinear & 2.70 & 6.28\% & 3.14 & 7.75\% & 3.13 & 7.77\% & 3.15 & 7.76\% & 3.15 & 7.78\% & 3.23 & 7.90\% \\ & Informer & 2.77 & 6.65\% & 2.80 & 6.88\% & 2.84 & 7.06\% & 2.83 & 7.07\% & 2.82 & 6.98\% & 2.92 & 7.16\% \\ & Autoformer & 3.15 & 7.48\% & 
3.24 & 7.85\% & 3.30 & 8.00\% & 3.37 & 8.10\% & 3.39 & 8.15\% & 4.35 & 11.25\% \\ & FEDformer & 3.04 & 7.55\% & 3.14 & 7.61\% & 3.13 & 7.58\% & 3.32 & 8.00\% & 3.42 & 8.45\% & 3.67 & 9.33\% \\ & Pyraformer & 2.53 & 6.21\% & 2.71 & 6.72\% & 2.64 & 6.39\% & 2.74 & 6.65\% & 2.75 & 6.68\% & 2.77 & 6.81\% \\ \cline{2-13} & DCRNN\({}^{*}\) & 2.18 & 5.49\% & 2.52 & 6.49\% & 2.54 & 6.43\% & 2.66 & 6.79\% & 2.67 & 6.80\% & 2.66 & 6.62\% \\ & GWNet\({}^{*}\) & 2.01 & 5.11\% & 2.35 & 5.91\% & 2.40 & 5.98\% & 2.47 & 6.35\% & 2.46 & 6.24\% & 2.46 & 6.09\% \\ & MTGNN\({}^{*}\) & 2.17 & 5.40\% & 2.45 & 6.11\% & 2.51 & 6.04\% & 2.52 & 6.13\% & 2.57 & 6.19\% & 2.70 & 6.40\% \\ & STID & 2.02 & 5.02\% & 2.29 & 5.66\% & 2.32 & 5.69\% & 2.33 & 5.72\% & 2.32 & 5.67\% & 2.38 & 5.81\% \\ \hline \hline \end{tabular} \end{table} TABLE IV: Performance comparison on the traffic speed datasets (METR-LA and PEMS-BAY). HUTFormer is more efficient and effective by leveraging the spatial-temporal positional encoding, which does not increase much complexity. ### _Generalization_ The ability of HUTFormer to generate and utilize multi-scale features should also be effective on non-traffic data, since multi-scale features widely exist in many domains. In order to verify the generalization of HUTFormer, in this part, we compare HUTFormer with the latest Transformer-based long time series forecasting models [46, 47] on three commonly used long-sequence prediction datasets, ETTh1, ETTm1, and Weather. The details of Crossformer [46] and Triformer [47] as well as the three datasets are omitted for brevity. Interested readers can refer to the original papers [46, 47]. We use the same setting as for the other datasets in our paper. As shown in Table VI, HUTFormer still outperforms these models on these datasets, which further verifies the effectiveness and generalization of HUTFormer. ### _Ablation Study_ In this subsection, we conduct more experiments to evaluate the impact of some important components and strategies. Specifically, we evaluate from three aspects, including the effectiveness of the hierarchical U-net structure, the input embedding strategy, and the two-stage training strategy. Due to space limitations, we only present the results on the METR-LA dataset in Table VII. The hierarchical U-net structure is designed to generate and exploit multi-scale features. Specifically, the encoder combines window self-attention and segment merging to generate multi-scale features, while the decoder primarily utilizes the extracted features based on cross-scale attention. Therefore, to evaluate their effectiveness, we set up three variants. First, we replace the decoder with a simple concatenation, named \(HUTFormer\ concat\). The concatenation of features from different scales naturally preserves all information. Second, we set \(HUTFormer\ w/o\ decoder\) to remove the decoder and use the intermediate prediction as the final prediction. The above two variants are used to demonstrate that exploiting multi-scale features is a non-trivial challenge and that our hierarchical decoder is effective. Third, we set \(HUTFormer\ w/o\ hierarchy\) to further remove Fig. 5: Efficiency Study.
\begin{table} \begin{tabular}{c c c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Data**} & \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**@Horizon 12**} & \multicolumn{2}{c}{**@Horizon 48**} & \multicolumn{2}{c}{**@Horizon 96**} & \multicolumn{2}{c}{**@Horizon 144**} & \multicolumn{2}{c}{**@Horizon 192**} & \multicolumn{2}{c}{**@Horizon 288**} \\ \cline{3-14} & & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE \\ \hline \hline \multirow{11}{*}{**PEMS04**} & HI & 41.73 & 28.46\% & 41.16 & 28.61\% & 41.38 & 28.62\% & 41.28 & 28.42\% & 30.99 & 27.34\% & 39.58 & 26.49\% \\ & DLinear & 27.29 & 19.83\% & 37.20 & 26.51\% & 37.50 & 26.78\% & 37.57 & 26.87\% & 37.17 & 25.27\% & 36.87 & 25.21\% \\ & Informer & 25.94 & 17.56\% & 25.72 & 18.05\% & 25.60 & 18.27\% & 25.98 & 17.81\% & 26.42 & 17.67\% & 27.42 & 18.57\% \\ & Autoformer & 29.94 & 28.00\% & 31.30 & 27.41\% & 31.47 & 27.73\% & 31.95 & 27.89\% & 32.03 & 28.03\% & 33.34 & 29.82\% \\ & FEDformer & 34.94 & 34.33\% & 32.24 & 37.23\% & 33.90 & 34.33\% & 35.12 & 41.26\% & 35.16 & 34.08\% & 41.83 & 51.01\% \\ & Pyraformer & 23.40 & 17.18\% & 25.40 & 18.80\% & 26.45 & 19.89\% & 26.22 & 19.01\% & 26.51 & 19.18\% & 26.58 & 20.57\% \\ \cline{2-14} & DCRNN\({}^{*}\) & 22.25 & 16.59\% & 24.42 & 18.89\% & 25.20 & 19.17\% & 26.31 & 19.61\% & 27.32 & 19.74\% & 28.04 & 21.02\% \\ & GWNet\({}^{*}\) & 22.24 & 16.51\% & 23.50 & 18.29\% & 24.08 & 18.07\% & 24.85 & 18.21\% & 25.83 & 18.98\% & 31.17 & 21.00\% \\ & MTGNN\({}^{*}\) & 21.75 & 15.93\% & 23.04 & 17.81\% & 24.33 & 17.80\% & 25.56 & 17.68\% & 25.80 & 17.85\% & 26.78 & 20.64\% \\ & STID & 21.01 & 15.24\% & 22.77 & 16.61\% & 23.39 & 16.87\% & 24.06 & 17.08\% & 24.43 & 17.22\% & 25.19 & 17.49\% \\ & STEP\({}^{*}\) & 20.82 & 15.56\% & 22.23 & 17.11\% & 22.87 & 17.21\% & 24.46 & 17.97\% & 24.89 & 17.40\% & 26.18 & 18.47\% \\ & D\({}^{2}\)STGNN\({}^{*}\) & 21.55 & 16.03\% & 22.98 & 17.04\% & 24.16 & 17.57\% & 24.50 & 17.93\% & 24.59 & 17.19\% & 24.79 & 17.97\% \\ \cline{2-14} & HUTFormer & **19.61** & **13.59\%** & **21.54** & **14.95\%** & **21.96** & **15.22\%** & **22.66** & **15.30\%** & **23.10** & **15.35\%** & **23.43** & **15.71\%** \\ \hline \hline \multirow{11}{*}{**PEMS08**} & HI & 37.33 & 25.01\% & 37.31 & 25.07\% & 37.23 & 25.05\% & 37.09 & 25.02\% & 36.94 & 24.98\% & 36.40 & 24.76\% \\ & DLinear & 22.91 & 17.23\% & 34.13 & 24.15\% & 34.34 & 25.54\% & 34.44 & 23.80\% & 34.52 & 23.91\% & 35.11 & 23.71\% \\ & Informer & 24.55 & 14.76\% & 24.80 & 15.03\% & 24.72 & 15.03\% & 25.07 & 15.11\% & 24.82 & 14.91\% & 25.09 & 15.61\% \\ & Autoformer & 31.36 & 25.44\% & 32.29 & 27.13\% & 33.19 & 27.45\% & 32.98 & 26.15\% & 33.57 & 25.78\% & 36.75 & 28.82\% \\ & FEDformer & 24.62 & 20.01\% & 26.76 & 21.85\% & 28.56 & 23.02\% & 30.33 & 24.47\% & 29.11 & 23.14\% & 29.91 & 24.47\% \\ & Pyraformer & 21.92 & 14.43\% & 23.00 & 14.70\% & 23.80 & 15.46\% & 24.45 & 16.88\% & 24.34 & 16.17\% & 22.71 & 14.79\% \\ \cline{2-14} & DCRNN\({}^{*}\) & 18.64 & 13.47\% & 20.42 & 14.92\% & 20.97 & 15.11\% & 21.63 & 15.51\% & 22.45 & 16.23\% & 22.95 & 16.72\% \\ & GWNet\({}^{*}\) & 17.07 & 11.57\% & 19.55 & 11.93\% & 20.38 & 14.33\% & 20.49 & 14.82\% & 20.00 & 14.68\% & 20.29 & 15.20\% \\ & MTGNN\({}^{*}\) & 17.75 & 12.61\% & 19.27 & 13.35\% & 19.99 & 13.85\% & 20.68 & 15.00\% & 20.95 & 14.65\% & 22.16 & 15.68\% \\ & STID & 16.40 & 11.42\% & 18.53 & 13.26\% & 19.17 \\ \hline \hline \end{tabular} \end{table} TABLE V: Performance comparison on the traffic flow datasets (PEMS04 and PEMS08). segment merging and replace the window Transformer layer with a standard 
Transformer layer, to evaluate the effectiveness of hierarchical multi-scale representations. As shown in Table VII, \(HUTFormer\) significantly outperforms \(HUTFormer\ concat\) and \(HUTFormer\ w/o\ decoder\), which shows that it is not an easy task to utilize the multi-scale features, and also validates the effectiveness of our decoder. In addition, the results of \(HUTFormer\ w/o\ hierarchy\) show that hierarchical multi-scale features are crucial for accurate long-term traffic forecasting. The above results show that generating and utilizing hierarchical multi-scale features is important, and the designed hierarchical U-net structure is effective. The input embedding strategy aims to address the complexity issue from both spatial and temporal dimensions. Specifically, it consists of a Segment Embedding (SE) and a Spatial-Temporal Positional Encoding (ST-PE). To verify their effectiveness, we set up three variants. First, we set \(HUTFormer\ w/o\ ST-PE\), which replaces the ST-PE with standard learnable positional encoding. Second, we set \(HUTFormer\ GCN\), which replaces the spatial embeddings in ST-PE with graph convolution [4]. Third, we remove the segment embedding to get \(HUTFormer\ w/o\ SE\). As shown in Table VII, without ST-PE, the performance of HUTFormer decreases significantly. This is because modeling the correlations between time series is the basis of traffic forecasting. In addition, we can see that the ST-PE strategy is significantly better than performing graph convolution, indicating the superiority of ST-PE. Moreover, removing segment embedding not only leads to a significant decrease in performance but also increases the complexity due to the increased sequence length. These results indicate the effectiveness of the spatial-temporal positional encoding and segment embedding.
\begin{table} \begin{tabular}{c c c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Data**} & \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**@Horizon 12**} & \multicolumn{2}{c}{**@Horizon 48**} & \multicolumn{2}{c}{**@Horizon 96**} & \multicolumn{2}{c}{**@Horizon 144**} & \multicolumn{2}{c}{**@Horizon 192**} & \multicolumn{2}{c}{**@Horizon 288**} \\ \cline{3-14} & & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE \\ \hline \hline \multirow{11}{*}{**ETTh1**} & Informer & 0.72 & 1.01 & 0.79 & 1.16 & 0.87 & 1.34 & 0.96 & 1.61 & 0.97 & 1.66 & 0.97 & 1.68 \\ & Autoformer & 0.54 & 0.56 & 0.56 & 0.61 & 0.57 & 0.64 & 0.58 & 0.66 & 0.60 & 0.69 & 0.68 & 0.85 \\ & FEDformer & 0.52 & 0.50 & 0.54 & 0.55 & 0.56 & 0.57 & 0.57 & 0.60 & 0.58 & 0.62 & 0.64 & 0.73 \\ & Pyraformer & 0.56 & 0.63 & 0.57 & 0.64 & 0.65 & 0.79 & 0.74 & 0.92 & 0.76 & 1.03 & 0.79 & 1.09 \\ & Triformer & 0.46 & 0.42 & 0.53 & 0.54 & 0.56 & 0.59 & 0.57 & 0.60 & 0.58 & 0.61 & 0.60 & 0.68 \\ & Crossformer & 0.45 & 0.40 & 0.50 & 0.49 & 0.54 & 0.57 & 0.55 & 0.60 & 0.56 & 0.61 & 0.58 & 0.63 \\ \cline{2-14} & HUTFormer & **0.41** & **0.36** & **0.46** & **0.43** & **0.49** & **0.47** & **0.51** & **0.50** & **0.52** & **0.53** & **0.54** & **0.56** \\ \hline \hline \multirow{11}{*}{**ETTm1**} & Informer & 0.53 & 0.59 & 0.60 & 0.67 & 0.63 & 0.74 & 0.68 & 0.85 & 0.72 & 0.91 & 0.74 & 0.95 \\ & Autoformer & 0.49 & 0.56 & 0.55 & 0.58 & 0.57 & 0.62 & 0.59 & 0.67 & 0.59 & 0.69 & 0.63 & 0.84 \\ & FEDformer & 0.46 & 0.42 & 0.49 & 0.52 & 0.53 & 0.54 & 0.58 & 0.55 & 0.60 & 0.59 & 0.66 \\ & Pyraformer & 0.50 & 0.51 & 0.54 & 0.57 & 0.57 & 0.68 & 0.59 & 0.70 & 0.61 & 0.77 & 0.66 & 0.84 \\ & Triformer & 0.33 & 0.25 & 0.39 & **0.35** & **0.40** & **0.38** & **0.45** & 0.45 & **0.46** & **0.45** & 0.48 & 0.48 \\ & Crossformer & 0.35 & 0.27 & 0.47 & 0.45 & 0.49 & 0.46 & 0.54 & 0.55 & 0.56 & 0.58 & 0.57 & 0.59 \\ \cline{2-14} & HUTFormer & **0.31** & **0.23** & **0.40** & 0.36 & **0.40** & **0.38** & 0.46 & **0.44** & **0.46** & **0.45** & **0.47** & **0.46** \\ \hline \hline \multirow{11}{*}{**Weather**} & Informer & 0.34 & 0.27 & 0.38 & 0.36 & 0.40 & 0.38 & 0.43 & 0.42 & 0.45 & 0.46 & 0.45 & 0.48 \\ & Autoformer & 0.32 & 0.24 & 0.34 & 0.28 & 0.36 & 0.31 & 0.38 & 0.34 & 0.40 & 0.37 & 0.43 & 0.42 \\ & FEDformer & 0.29 & 0.21 & 0.30 & 0.24 & 0.32 & 0.27 & 0.34 & 0.30 & 0.35 & 0.32 & 0.39 & 0.39 \\ & Pyraformer & 0.28 & 0.23 & 0.31 & 0.29 & 0.34 & 0.32 & 0.35 & 0.33 & 0.45 & 0.48 & 0.52 & 0.60 \\ & Triformer & 0.30 & 0.23 & 0.43 & 0.45 & 0.47 & 0.54 & 0.47 & 0.56 & 0.51 & 0.59 & 0.52 & 0.64 \\ & Crossformer & **0.18** & **0.15** & 0.26 & 0.20 & 0.30 & 0.24 & 0.32 & 0.27 & 0.35 & 0.32 & 0.37 & 0.37 \\ \cline{2-14} & HUTFormer & **0.18** & **0.15** & **0.23** & **0.19** & **0.26** & **0.23** & **0.28** & **0.24** & **0.31** & **0.30** & **0.33** & **0.32** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Experiments on the ETTh1, ETTm1, and Weather datasets.
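As a reference for the comparison that follows, here is a minimal PyTorch sketch of the two-stage training strategy, with the MAE losses of Eqs. (6) and (10) and the optimizer settings of Table III; `encoder` and `decoder` are hypothetical stand-ins for the hierarchical encoder and decoder (we assume the encoder returns both its intermediate prediction and its multi-scale features), not the authors' actual interfaces.

```python
import torch

def train_two_stage(encoder, decoder, loader, epochs, lr=5e-4, wd=1e-4):
    mae = torch.nn.L1Loss()  # MAE regression loss, Eqs. (6) and (10)

    # Stage 1: train the hierarchical encoder on the intermediate prediction.
    opt = torch.optim.Adam(encoder.parameters(), lr=lr, weight_decay=wd)
    for _ in range(epochs):
        for x, y in loader:
            y_enc, _ = encoder(x)      # intermediate prediction + multi-scale features
            loss = mae(y_enc, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: freeze the encoder; it now acts only as the feature extractor.
    encoder.requires_grad_(False)
    opt = torch.optim.Adam(decoder.parameters(), lr=lr, weight_decay=wd)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                y_enc, feats = encoder(x)
            y_dec = decoder(feats, y_enc)  # fine-tune each predicted segment
            loss = mae(y_dec, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The two variants evaluated below map directly onto this sketch: the _end2end_ variant removes the stage boundary (one joint loop over both losses), and the _w/o fix_ variant omits the `requires_grad_(False)` call in stage 2.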
\begin{table} \begin{tabular}{c c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Variants**} & \multicolumn{2}{c|}{**@Horizon 12**} & \multicolumn{2}{c}{**@Horizon 48**} & \multicolumn{2}{c}{**@Horizon 96**} & \multicolumn{2}{c}{**@Horizon 144**} & \multicolumn{2}{c}{**@Horizon 192**} & \multicolumn{2}{c}{**@Horizon 288**} \\ \cline{2-13} & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE & MAE & MAPE \\ \hline HUTFormer & **3.59** & **10.93\%** & **3.77** & **11.88\%** & **3.79** & **11.86\%** & **3.80** & **12.08\%** & **3.82** & **12.18\%** & **3.84** & **12.28\%** \\ \hline _concat_ & 3.86 & 12.16\% & 3.98 & 13.23\% & 4.01 & 13.36\% & 4.01 & 13.36\% & 4.05 & 13.41\% & 4.08 & 13.65\% \\ _w/o decoder_ & 3.80 & 11.94\% & 3.85 & 12.33\% & 3.90 & 12.64\% & 3.88 & 12.91\% & 3.96 & 12.91\% & 3.97 & 12.93\% \\ _w/o hierarchy_ & 3.90 & 12.56\% & 3.97 & 12.85\% & 3.96 & 12.86\% & 3.98 & 12.88\% \\ \hline \hline \end{tabular} \end{table} TABLE VII: Ablation study on the METR-LA dataset. Finally, we evaluate the two-stage training strategy of HUTFormer. To this end, we set two variants. First, we set \(HUTFormer\ end2end\), which trains the HUTFormer in an end-to-end fashion. Second, we set \(HUTFormer\ w/o\ fix\), which does not fix the parameters of the encoder when training the decoder. The results in Table VII show that either the end-to-end strategy or the strategy without fixing the encoder leads to insufficient optimization and significant performance degradation. In addition, both strategies require more memory. In contrast, our two-stage strategy achieves the best performance and efficiency simultaneously. ### _Hyper-parameter Study_ In this subsection, we conduct experiments to study the impact of two key hyper-parameters: the segment size and the window size. We conduct experiments on the METR-LA dataset and report the MAE at horizon 288. Moreover, we report the training speed of the encoder, since these hyper-parameters mainly affect the encoder. As shown in Figure 7(a), the segment size \(L=12\) achieves the best performance. Smaller segments cannot provide robust semantics, while larger segments ignore more local details. In addition, we can see that as the segment size increases, the encoder runs faster (seconds/epoch). Kindly note that changing the segment size may change the depth of HUTFormer to ensure that the receptive field covers the entire sequence. The impact of the window size is shown in Figure 7(b), where larger window sizes perform worse. This is because the ability to extract multi-scale features is weakened as the window size increases. Moreover, the efficiency of HUTFormer also decreases [42] with larger window sizes. ### _Visualization_ #### 6.7.1 Spatial-Temporal Positional Encoding To further understand HUTFormer in modeling the correlations between multiple time series in traffic data, we analyze the spatial-temporal positional encoding layer. Modeling correlations between multiple time series has been widely discussed in multivariate time series forecasting [4, 8, 24]. Previous works usually utilize Graph Convolution Networks (GCN), which conduct message passing on a pre-defined graph. GCN is a powerful model, but it has a high complexity of \(\mathcal{O}(N^{2})\). Very recent works, STID [40] and STNorm [41], identify that graph convolution in multivariate time series forecasting is essentially used for addressing the indistinguishability of samples on the spatial dimension.
Based on such an observation, STID proposes a simple but effective baseline of attaching spatial and temporal identities, achieving performance similar to GCN but with high efficiency. The Spatial-Temporal Positional Encoding (ST-PE) is designed based on such an idea [40]. The ST-PE contains three learnable positional embeddings, \(\mathbf{E}\in\mathbb{R}^{N\times d}\), \(\mathbf{T}^{TiD}\in\mathbb{R}^{N_{D}\times d}\) and \(\mathbf{T}^{DiW}\in\mathbb{R}^{N_{W}\times d}\), where \(N\) is the number of time series, \(N_{D}\) is the number of time slots of a day (determined by the sensor's sampling frequency), and \(N_{W}=7\) is the number of days in a week. We utilize t-SNE [14] to visualize these three embedding matrices. Kindly note that \(\mathbf{T}^{DiW}\) only has \(7\) embeddings, which is significantly fewer than the hidden dimension \(32\), making it hard to get correct visualizations. Therefore, we additionally train a HUTFormer with the embedding size of \(\mathbf{T}^{DiW}\) set to \(2\) to get a more accurate visualization. The results are shown in Figure 6. First, as shown in Figure 6(a), the spatial embeddings are likely to cluster. For example, traffic conditions observed by sensors that are connected or have similar geographical functionality are more likely to be similar. However, it is not as apparent as in the results in STID [40]. We conjecture this is because the impact of the indistinguishability of the samples becomes weaker as the length of the historical data increases. Second, Figure 6(b) shows the embeddings of 288 time slots, where the daily periodicity is very obvious. Third, Figure 6(c) visualizes the embeddings of each day in a week, where the weekday embeddings are close to one another while the weekend embeddings are distinct. #### 6.7.2 Prediction Visualization In order to further intuitively evaluate HUTFormer, in this subsection, we visualize the predictions of HUTFormer and Fig. 6: Visualization of the spatial and temporal embeddings. Fig. 7: Hyper-parameter study. other baselines on the METR-LA dataset. Specifically, we select sensor 12 and display its data from June 05th, 2012 to June 06th, 2012 (located in the test dataset). The forecasting results of Autoformer [32], Graph WaveNet [4], HUTFormer w/o hierarchy, and HUTFormer are shown in Figure 8. First, Autoformer performs the worst, since it does not fit the characteristics of traffic data as discussed in Section 6.2. Graph WaveNet and HUTFormer w/o hierarchy perform well on smooth periods but fail to accurately capture the sudden change when traffic congestion occurs. In contrast, benefiting from exploiting multi-scale representations of traffic data, HUTFormer performs better in both smooth periods and sudden changes. ## 7 Conclusions In this paper, we make the first attempt to explore the long-term traffic forecasting problem. To this end, we reveal its unique challenges in exploiting the multi-scale representations of traffic data, and propose a novel _Hierarchical U-net TransFormer_ (HUTFormer) to efficiently and effectively address them. The HUTFormer mainly consists of a hierarchical encoder and decoder. On the one hand, the hierarchical encoder generates multi-scale representations based on the window self-attention mechanism and segment merging. On the other hand, the hierarchical decoder effectively utilizes the extracted multi-scale features based on the cross-scale attention mechanism.
In addition, HUTFormer adopts segment embedding and spatial-temporal positional encoding as the input embedding strategy to address the complexity issue. Extensive experiments on four commonly used traffic datasets show that the proposed HUTFormer significantly outperforms state-of-the-art traffic forecasting and long-sequence time series forecasting baselines. ## Acknowledgments This work is also supported by the Youth Innovation Promotion Association CAS No.2023112. In addition, Zhao Zhang is supported by the China Postdoctoral Science Foundation under Grant No. 2021M703273.
2303.06465
The S-Web Origin of Composition Enhancement in the Slow-to-Moderate Speed Solar Wind
Connecting the solar wind observed throughout the heliosphere to its origins in the solar corona is one of the central aims of heliophysics. The variability in the magnetic field, bulk plasma, and heavy ion composition properties of the slow wind is thought to result from magnetic reconnection processes in the solar corona. We identify regions of enhanced variability and composition in the solar wind from 2003 April 15 to May 13 (Carrington Rotation 2002), observed by the Wind and Advanced Composition Explorer spacecraft, and demonstrate their relationship to the Separatrix-Web (S-Web) structures describing the corona's large-scale magnetic topology. There are four pseudostreamer (PS) wind intervals and two helmet streamer (HS) heliospheric current sheet/plasma sheet crossings (and an ICME) which all exhibit enhanced alpha-to-proton ratios and/or elevated ionic charge states of carbon, oxygen, and iron. We apply the magnetic helicity-partial variance of increments ($H_m$-PVI) procedure to identify coherent magnetic structures and quantify their properties during each interval. The mean duration of these structures is $\sim$1 hr in both the HS and PS wind. We find a modest enhancement above the power-law fit to the PVI waiting time distribution in the HS-associated wind at the 1.5-2 hr timescales that is absent from the PS intervals. We discuss our results in the context of previous observations of the $\sim$90 min periodic density structures in the slow solar wind, further development of the dynamic S-Web model, and future Parker Solar Probe and Solar Orbiter joint observational campaigns.
B. J. Lynch, N. M. Viall, A. K. Higginson, L. Zhao, S. T. Lepri, X. Sun
2023-03-11T17:30:43Z
http://arxiv.org/abs/2303.06465v1
# The S-Web Origin of Composition Enhancement in the Slow-to-Moderate Speed Solar Wind ###### Abstract Connecting the solar wind observed throughout the heliosphere to its origins in the solar corona is one of the central aims of heliophysics. The variability in the magnetic field, bulk plasma, and heavy ion composition properties of the slow wind is thought to result from magnetic reconnection processes in the solar corona. We identify regions of enhanced variability and composition in the solar wind from 2003 April 15 to May 13 (Carrington Rotation 2002), observed by the _Wind_ and _Advanced Composition Explorer_ spacecraft, and demonstrate their relationship to the Separatrix-Web (S-Web) structures describing the corona's large-scale magnetic topology. There are four pseudostreamer (PS) wind intervals and two helmet streamer (HS) heliospheric current sheet/plasma sheet crossings (and an ICME) which all exhibit enhanced alpha-to-proton ratios and/or elevated ionic charge states of carbon, oxygen, and iron. We apply the magnetic helicity-partial variance of increments (\(H_{m}\)-PVI) procedure to identify coherent magnetic structures and quantify their properties during each interval. The mean duration of these structures is \(\sim\)1 hr in both the HS and PS wind. We find a modest enhancement above the power-law fit to the PVI waiting time distribution in the HS-associated wind at the 1.5-2 hr timescales that is absent from the PS intervals. We discuss our results in the context of previous observations of the \(\sim\)90 min periodic density structures in the slow solar wind, further development of the dynamic S-Web model, and future _Parker Solar Probe_ and _Solar Orbiter_ joint observational campaigns. solar wind -- Sun: heliosphere -- Sun: corona -- Sun: magnetic fields -- Sun: solar-terrestrial relations B. J. Lynch ## 1 Introduction The global magnetic geometry of the solar corona directly determines the structure of the solar wind outflow (e.g. Zirker, 1977; Axford et al., 1999; Antiochos et al., 2007, 2011; Cranmer, 2012). Decades of in-situ observations have shown that the heliospheric structure and solar wind properties reflect the coronal magnetic structure of their origin (Zurbuchen, 2007; Zhao et al., 2014). During solar minimum, polar coronal holes are correlated with fast, tenuous solar wind (Geiss et al., 1995; McComas et al., 2002), while the helmet streamer (HS) belt and the heliospheric current sheet (HCS) are associated with slower, denser, and more variable solar wind (Gosling, 1997; McComas et al., 1998; Zurbuchen et al., 2002; Zhao et al., 2009). During solar maximum, the helmet streamer belt is highly warped and pseudostreamer (PS) coronal structures often make a significant contribution to the solar wind in the ecliptic plane (Riley and Luhmann, 2012). Whereas the large-scale closed flux system of the HS belt separates open fields of opposite polarity, thus giving rise to the HCS, coronal PS's (sometimes called unipolar streamers) are closed-flux regions surrounded by open fields of a single polarity (e.g. Wang et al., 2007; Titov et al., 2012; Rachmeler et al., 2014; Wang et al., 2012; Wang and Panasenco, 2019; Mason et al., 2021).
Solar wind originating from coronal PS's tends to be more similar to the dense, variable HS slow wind than to the fast wind from coronal holes (Crooker et al., 2012), but observations have established the existence of a continuum of states between the nominal fast and slow wind rather than a well-separated bimodal distribution (e.g. Stakhiv et al., 2015, 2016). Connecting the solar wind to its source region of origin has become one of the central aims of heliophysics in order to test and constrain different theories of solar wind formation (Viall and Borovsky, 2020). Additionally, accurate space weather prediction requires an understanding of the different solar wind streams in the heliosphere and where they were formed, e.g. mesoscale structures are known to drive magnetospheric dynamics (Viall et al., 2021). Therefore, establishing this solar-heliospheric connection is one of the fundamental science objectives of the _Parker Solar Probe_ (PSP; Fox et al., 2016) and _Solar Orbiter_ (Muller et al., 2020) missions. White-light coronagraph and heliospheric imaging data have shown that the solar wind originating from the helmet streamer stalks includes a continual, intermittent outflow of intensity enhancements, called "streamer blobs," that trace the bulk outflow of the slow solar wind (Sheeley et al., 1997, 1999, 2009; Rouillard et al., 2010a,b; Sanchez-Diaz et al., 2017). While the basic theory of steady-state, slow solar wind from the vicinity of coronal streamers and pseudostreamers is well-established (e.g. Arge and Pizzo, 2000; Lepri et al., 2008; Riley and Luhmann, 2012, and references therein), this steady-state picture is difficult to reconcile with the observed slow wind variability in both remote-sensing and in-situ observations, which likely requires a time-varying magnetic reconnection component. Demonstrating another example of solar wind variability, Kepko et al. (2020) analyzed 25 years of solar wind data, expanding on the initial study of Viall et al. (2008), and found that intermittent periodic density structures that range in size from 70-900 Mm are a ubiquitous feature of the slow solar wind, occurring a majority (\(\gtrsim 60\%\)) of the time. Furthermore, Viall et al. (2010) and Viall and Vourlidas (2015) examined the _Solar Terrestrial Relations Observatory_ (STEREO; Kaiser et al., 2008) SECCHI (Howard et al., 2008) HI1 and COR2 white-light imaging data and showed there were clear signatures of \(\sim\)90 min variability in the intensity variations of coronal streamer outflow, confirming that many of the periodic density structures are the result of solar wind formation processes. In the in-situ slow solar wind, especially near the HCS, magnetic structures with timescales of several hours have been identified and linked to magnetic reconnection (Crooker et al., 1996, 2004; Suess et al., 2009). High-cadence composition data have revealed the presence of cyclic 0.5-3 hour solar wind structures with signatures in helium, oxygen and carbon densities, and heavy ion charge states (Viall et al., 2009; Kepko et al., 2016). In-situ elemental and ionic composition measurements are routinely used as proxies for solar wind formation processes and the "freeze-in" coronal electron temperatures in the low-to-middle corona; when the ionization and recombination timescales of various ion species exceed the characteristic bulk solar wind expansion timescale, the ionic charge states can be considered frozen-in to the solar wind outflow (e.g. 
Hundhausen et al., 1968; Owocki et al., 1983; Ko et al., 1997; Landi et al., 2012a,b; Landi and Lepri, 2015). In fact, Kepko et al. (2016) showed that these density and compositional variations also often correspond to regions of coherent magnetic field signatures and periods of bidirectional electron streaming, suggestive of a succession of small magnetic flux ropes or flux rope-like periods. There is some preliminary indication that in-situ small flux ropes can be coincident with periods of enhanced ionic composition (Foullon et al., 2011; Feng and Wang, 2015; Yu et al., 2016; Kepko et al., 2016). The Separatrix-Web (S-Web) model for the origin of slow solar wind (Antiochos et al., 2011) is based on the magnetic geometry of the solar corona and predicts that the topological separatrix surfaces of the magnetic field are regions where interchange reconnection--the mechanism for releasing closed-flux coronal plasma onto adjacent open field lines--is most likely to occur. The dynamic S-Web model extends previous observational and theoretical considerations of reconnection at coronal hole boundaries (Madjarska et al., 2004; Edmondson et al., 2009, 2010; Linker et al., 2011; Rappazzo et al., 2012; Brooks et al., 2015; Pontin and Wyper, 2015; Scott et al., 2021) and solar wind outflows at the periphery of active regions (e.g. Sakao et al., 2007; Harra et al., 2008; Baker et al., 2009; Brooks and Warren, 2011; Edwards et al., 2016), and aims to address a number of outstanding issues related to the slow solar wind, including its larger-than-expected latitudinal extent (Crooker et al., 2012) and the reconnection component seemingly required by the variability of the in-situ measurements of slow wind plasma, field, and composition (e.g. Viall et al., 2009; Zhao et al., 2009, 2014, 2017; Lepri et al., 2013, 2014; Kepko et al., 2016; Sanchez-Diaz et al., 2017, 2019; Di Matteo et al., 2019; Reville et al., 2022). Higginson et al. (2017a,b) presented simulation results showing that interchange magnetic reconnection is ubiquitous and most likely responsible for releasing much of the slow solar wind, in particular along S-Web topological features. Since that work, there have been a number of significant developments in modeling the reconnection-generated slow solar wind structure and the interchange reconnection processes associated with dynamic S-Web outflows, summarized in Figure 1. Figure 1(a) presents the 3D structure of the pinch-off reconnection that forms streamer blob flux rope/plasmoids in the simulation by Lynch (2020). These simulation results showed qualitative agreement with both the morphology and the kinematics of coronal inflows and streamer blob outflows in synthetic white light coronagraph observations, as have other recent modeling efforts (e.g. Reville et al., 2020). Figure 1(b) presents simulation results from Higginson and Lynch (2018), who showed that the continual formation of flux rope/plasmoid structures essentially filled the entire heliospheric current sheet. Figure 1(c) shows the simulation results by Aslanyan et al. (2022) in which they examined interchange reconnection occurring in a 3D pseudostreamer configuration and developed a synthetic suprathermal electron pitch angle proxy based on the simulation's instantaneous magnetic connectivity. Previously, Zhao et al. 
(2017) have used solar wind data from the _Advanced Composition Explorer_ (ACE; Stone et al., 1998) during CR 2002 to develop a source region classification scheme based on heliospheric back-mapping and PFSS modeling of observer-connected magnetic field lines and the pixel brightness in synoptic maps of EUV 195 Å emission in the vicinity of the field line foot point. Applying their EUV brightness-based source region classifications ('Coronal Hole', 'Coronal Hole Boundary', 'Quiet Sun', 'Active Region Boundary', 'Active Region', and 'Helmet Streamer') to in-situ data from 1998-2011 resulted in a statistical ordering of the distributions of O\({}^{7+}\)/O\({}^{6+}\) by distance from coronal holes, representing a relatively smooth increase in some combination of coronal electron temperature, mass density, and/or outflow velocities. A number of other solar wind classification schemes have been developed to identify specific solar wind "types" for the purpose of trying to uncover the physical relationships between different plasma, field, and composition signatures _within_ and _between_ different solar wind types (which are generally a proxy for coronal source region classifications). For example, Xu & Borovsky (2015) constructed a "four-plasma" classification scheme based, in part, on the proton specific entropy, \(S_{p}=T_{p}/n_{p}^{2/3}\), and showed this had a significant correlation with O\({}^{7+}\)/O\({}^{6+}\), C\({}^{6+}\)/C\({}^{5+}\) signatures and a relatively clear separation in the Alfvén speed-specific entropy (\(v_{A}\)-\(S_{p}\)) space between their 'Ejecta', 'Coronal Hole', 'Streamer Belt', and 'Sector Reversal' (HCS/HPS crossing) types. Ko et al. (2018) examined the perpendicular velocity fluctuations (\(\delta v_{T}\), \(\delta v_{N}\) in RTN coordinates) and presented superposed epoch trends in HS and PS intervals (low-\(\delta v\)) for a variety of solar wind properties including magnetic field fluctuations, Alfvénicity, width of the suprathermal electron strahl, proton specific entropy \(S_{p}\), helium abundance, the C, O, and Fe charge states, and Fe/O composition. Bloch et al. (2020) have investigated a couple of machine learning techniques to identify 'Streamer Belt' and 'Coronal Hole' solar wind type clusters in the \(S_{p}\)-O\({}^{7+}\)/O\({}^{6+}\) parameter space from _Ulysses_ and ACE data. Roberts et al. (2020) have used \(k\)-means clustering based on a number of solar wind variables including O and Fe charge states and the Fe/O ratio which resulted in a mixture of some clearly separated solar wind types and some significantly overlapping solar wind types when visualized in the cross helicity (\(\sigma_{c}\)) and residual energy (\(\sigma_{r}\)) parameter space commonly used in turbulence studies. In this paper, we extend the CR 2002 analysis of Zhao et al. (2017) to the magnetic complexity of the source region and examine the relationship between measures of solar wind variability in plasma, field, and composition with the large-scale geometric S-Web configurations of the associated source regions. In Section 2, we present in-situ solar wind observations from the _Wind_ and ACE spacecraft during CR 2002 and define several slow-to-moderate speed intervals of enhanced variability in proton and alpha densities. We then show that each of these intervals corresponds to enhancements in the ionic composition signatures of C, O, and Fe. 
In Section 3, we perform the heliospheric back-mapping procedure to map the in-situ time series at 1 au to Carrington longitude at the potential field source surface (PFSS) at Figure 1: Reconnection mechanisms for generating intermittent outflow of dense, closed-field plasma in the slow-to-moderate speed solar wind from helmet streamers and pseudostreamers. (a) ARMS simulation of HS blob pinch-off reconnection (adapted from Lynch, 2020) and (b) the small flux rope/reconnection plasmoid structures of the heliospheric current sheet (adapted from Higginson & Lynch, 2018). (c) ARMS simulation of interchange reconnection outflow from a pseudostreamer and a synthetic proxy for suprathermal electron pitch angle based on magnetic connectivity (adapted from Aslanyan et al., 2022). \(2.5\,R_{\odot}\) (3.1) and show that these intervals of enhanced variability and composition map back to the S-Web topological structures associated with the helmet streamer belt and coronal pseudostreamers (3.2). In Section 4, we present the magnetic helicity-partial variance of increments (\(H_{m}\)-PVI) analysis during the enhanced variability intervals and quantify the similarities and differences between the helmet streamer (4.2) and pseudostreamer (4.3) slow wind, and perform some statistical analyses on these time series (4.4). Finally, in Section 5, we discuss the implications of our results for theory and modeling of the origin of the slow solar wind and avenues for future progress with complementary PSP and _Solar Orbiter_ observations. ## 2 Intervals of Enhanced Variability ### Proton Density and the Alpha-to-Proton Ratio The slow solar wind shows considerably more variation in proton and helium densities (and their relative abundance ratio) than the fast wind. The mean alpha particle (He\({}^{2+}\)) to proton (H\({}^{+}\)) ratio \(A_{\rm He}\equiv n_{\alpha}/n_{p}\times 100\) (or \(\alpha/p\), interchangeably) in both the fast and slow solar wind is on the order of 3-5%, but the relative variation in the fast solar wind is \(\sim\)10% while in the slow solar wind it can be as high as \(\sim\)40% (Gosling, 1997; Schwenn, 2006). Helium enhancements have long been associated with in-situ observations of CME material (e.g. Borrini et al., 1982; Richardson & Cane, 2004; Zurbuchen et al., 2016; Lepri & Rivera, 2021), but recent analyses have also made significant progress quantifying the helium variability during ambient solar wind intervals (Kasper et al., 2007; Suess et al., 2009; Wang, 2016; Sanchez-Diaz et al., 2019). For example, Kasper et al. (2007, 2012) have shown the solar wind \(\alpha/p\) ratio exhibits a dependence on both the solar wind speed and the phase of the solar activity cycle, with the \(A_{\rm He}\) in the slowest speed solar wind intervals showing the most variation with sunspot number, in support of multiple sources and/or mechanisms for the solar wind's helium component (Schwenn et al., 2006). Viall et al. (2009) and Kepko et al. (2016) and others have shown that the solar wind helium abundance (and the associated increase in the variance of the helium abundance) are often coincident with periodic proton density structures (and their increased variance), as well as periods of increased ionic and elemental composition (see also Kasper et al., 2012). Figure 2 shows a plot of the _Wind_/3DP (Lin et al., 1995) and _Wind_/SWE (Ogilvie et al., 1995) data at 1 au for Carrington Rotation 2002 (from 2003 Apr 15 21:35 UT through 2003 May 13 03:24 UT).
From top-to-bottom, we plot the bulk radial velocity \(V_{r}\), proton number density \(n_{p}\), alpha number density \(n_{\alpha}\), the \(A_{\rm He}\) ratio, and its variance, \(\rm Var[\,A_{\rm He}\,]\equiv\sigma_{\alpha/p}^{2}\), calculated over 6-hour bins. The 3DP data are shown in black and the SWE data are shown in red. Based on visual inspection of the Figure 2 time series, we have identified eight distinct intervals during CR 2002 that can be considered slow-to-moderate speed solar wind (\(V_{r}\lesssim 550\) km s\({}^{-1}\)) with one or more of the following: enhanced proton density (\(n_{p}\geq 5\) cm\({}^{-3}\)); enhanced alpha density (\(n_{\alpha}\geq 0.25\) cm\({}^{-3}\)); enhanced \(A_{\rm He}\) (\(\geq 5\%\)); or enhanced \(\sigma_{\alpha/p}^{2}\) (\(\geq 0.80\)). Each interval is labeled above the top \(x\)-axis as #1-8 and shaded as yellow, green, teal, or purple. The colors were selected to represent different large-scale coronal source region configurations, as will be discussed in Section 3.2. The one exception to our slow-to-moderate speed criterion is interval #2 (shaded purple), which is clearly identified as a fast ICME and cataloged as such by Richardson & Cane (2010). The start and end times of each Figure 2 interval are listed in Table 1 along with a synopsis of the relevant interval-averaged quantities.

### Ionic and Elemental Composition Enhancement

Figure 3 shows ACE measurements for the CR 2002 solar wind. From top-to-bottom, we plot the SWEPAM (McComas et al., 1998) measurements of the bulk solar wind speed \(V_{r}\), the normalized 272 eV suprathermal electron pitch angle distribution (PAD), the MAG (Smith et al., 1998) measurements of \(B\) in RTN coordinates and the magnetic field orientation angles (\(\delta\) is the elevation angle above/below the RT plane; \(\lambda\) is the azimuth angle within the RT plane), and the SWICS (Gloeckler et al., 1998) composition measurements of select ion charge states of carbon (\(Q_{\rm C}\): 4-6+), oxygen (\(Q_{\rm O}\): 5-8+), and iron (\(Q_{\rm Fe}\): 6-20+), as well as the Fe/O abundance ratio. Here, the solar wind speed and magnetic field values are 1-hr averages whereas the SWICS composition measurements are 2-hr averages. Figure 3 also shows each of the slow-to-moderate speed solar wind intervals associated with enhanced \(n_{p}\), \(n_{\alpha}\), or \(A_{\rm He}\) variability that were identified in the _Wind_ data of Figure 2. With the inclusion of the magnetic field and suprathermal electron PAD, the intervals corresponding to sector boundaries and heliospheric current sheet/plasma sheet (HCS/HPS) crossings are immediately apparent as #8 and #3, both shaded light yellow. Another particularly noteworthy feature of Figure 3 is that each of the remaining slow-to-moderate speed intervals is coincident with broader suprathermal electron PADs and/or elevated charge states in C, O, and Fe. While recent analyses by Borovsky (2020, 2021) have shown that changes in the suprathermal electron strahl intensities often occur with simultaneous changes in other plasma and/or composition properties, here we note that the broader PADs of intervals #7, #6, and #4 exhibit remarkable, qualitative agreement with the synthetic PAD distribution constructed by Aslanyan et al. (2022) from their MHD simulation of interchange reconnection pseudostreamer outflow (lower panels of Figure 1(c)). We will show in the next section these intervals do, in fact, map to coronal pseudostreamer source regions.
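Before examining the charge states in detail, we note that the interval-selection quantities of §2.1 are straightforward to reproduce. The sketch below (function names are ours; a minimal illustration) computes \(A_{\rm He}\) and the 6-hour binned variance \(\sigma_{\alpha/p}^{2}\) shown in the bottom panel of Figure 2.

```python
import numpy as np

def binned_variance(t_hours, a_he, bin_hours=6.0):
    """Variance of A_He = n_alpha/n_p * 100 in non-overlapping time bins.

    t_hours : sample times in hours; a_he : A_He at each sample.
    Returns one variance per bin (NaN where a bin contains no samples).
    """
    edges = np.arange(t_hours.min(), t_hours.max() + bin_hours, bin_hours)
    idx = np.digitize(t_hours, edges) - 1
    out = np.full(len(edges) - 1, np.nan)
    for k in range(len(out)):
        vals = a_he[idx == k]
        if vals.size:
            out[k] = np.var(vals)
    return out

# A_He itself is just the density ratio in percent:
# a_he = n_alpha / n_p * 100.0
```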
The charge state and elemental composition enhancements during each of the identified intervals have the following properties. The presence of increased C\({}^{6+}\) and decreased C\({}^{5+}\) will result in a substantial increase in the C\({}^{6+}\)/C\({}^{5+}\) ratio, which has similar properties to the O\({}^{7+}\)/O\({}^{6+}\) ratio commonly used to identify periods of increased coronal electron temperatures (e.g. Landi et al., 2012; Kepko et al., 2016). Additionally, every interval except #6 and #7 also shows a significant increase in O\({}^{7+}\) along with a corresponding decrease in O\({}^{6+}\), providing local maxima of the well-known O\({}^{7+}\)/O\({}^{6+}\) ratio (e.g. Zhao et al., 2009; Wang, 2016). During intervals #3, #4-5, and #8, there are also enhanced levels of the higher iron charge states, Fe\({}^{\geq 12+}\), including some traditionally "hot" signatures of Fe\({}^{\geq 16+}\) (Lepri & Zurbuchen, 2004). Finally, the elemental composition ratio Fe/O shows clear enhancements during intervals #1-5 but less obvious enhancement during intervals #6-7. From Table 1, only interval #8 does not exceed the non-interval Fe/O average.

\begin{table} \begin{tabular}{|c|c|c|c|c c c c c c c|} \hline & Start time & End time & Source & \(\langle V_{r}\rangle\) & \(\langle n_{p}\rangle\) & \(\langle A_{\rm He}\rangle\) & \(\langle Q_{\rm C}\rangle\) & \(\langle Q_{\rm O}\rangle\) & \(\langle Q_{\rm Fe}\rangle\) & \(\langle\)Fe/O\(\rangle\) \\ \# & \multicolumn{2}{c|}{MM/DD HH:MM [UT]} & region & [km s\({}^{-1}\)] & [cm\({}^{-3}\)] & [\%] & [4–6+] & [5–8+] & [6–20+] & \\ \hline 8 & 04/19 13:58 & 04/21 05:24 & HS (Y) & \(\mathbf{556\pm 26}\) & \(\mathbf{6.1\pm 4.1}\) & \(\mathbf{5.1\pm 1.6}\) & \(\mathbf{5.07\pm 0.15}\) & \(\mathbf{6.14\pm 0.08}\) & \(\mathbf{10.82\pm 0.96}\) & \(0.11\pm 0.02\) \\ 7 & 04/23 00:00 & 04/24 14:12 & PS (G) & \(\mathbf{498\pm 32}\) & \(\mathbf{5.2\pm 1.3}\) & \(4.4\pm 1.6\) & \(\mathbf{5.19\pm 0.08}\) & \(\mathbf{6.08\pm 0.02}\) & \(10.12\pm 0.52\) & \(\mathbf{0.13\pm 0.02}\) \\ 6 & 04/25 14:24 & 04/27 19:12 & PS (G) & \(\mathbf{478\pm 34}\) & \(\mathbf{5.2\pm 1.1}\) & \(\mathbf{5.0\pm 1.1}\) & \(\mathbf{5.24\pm 0.16}\) & \(\mathbf{6.09\pm 0.05}\) & \(9.85\pm 0.32\) & \(\mathbf{0.16\pm 0.03}\) \\ 5 & 04/28 04:48 & 04/29 04:47 & PS (T) & \(\mathbf{432\pm 45}\) & \(2.2\pm 1.0\) & \(3.9\pm 1.8\) & \(\mathbf{5.52\pm 0.18}\) & \(\mathbf{6.32\pm 0.16}\) & \(\mathbf{11.83\pm 1.29}\) & \(\mathbf{0.19\pm 0.09}\) \\ 4 & 04/29 04:48 & 04/30 09:36 & PS (T) & \(\mathbf{534\pm 32}\) & \(\mathbf{4.5\pm 2.3}\) & \(\mathbf{7.6\pm 2.4}\) & \(\mathbf{5.35\pm 0.15}\) & \(\mathbf{6.29\pm 0.14}\) & \(\mathbf{11.07\pm 0.51}\) & \(\mathbf{0.31\pm 0.28}\) \\ 3 & 05/03 15:27 & 05/06 03:56 & HS (Y) & \(\mathbf{496\pm 95}\) & \(\mathbf{7.9\pm 2.8}\) & \(\mathbf{4.6\pm 1.7}\) & \(\mathbf{5.24\pm 0.24}\) & \(\mathbf{6.20\pm 0.15}\) & \(\mathbf{11.01\pm 0.79}\) & \(\mathbf{0.14\pm 0.07}\) \\ 2 & 05/09 04:48 & 05/10 16:48 & ICME (P) & \(738\pm 89\) & \(3.5\pm 2.3\) & \(3.2\pm 2.1\) & \(\mathbf{5.06\pm 0.15}\) & \(\mathbf{6.12\pm 0.05}\) & \(\mathbf{11.10\pm 1.14}\) & \(\mathbf{0.21\pm 0.11}\) \\ 1 & 05/10 16:48 & 05/11 12:00 & PS (G) & \(\mathbf{601\pm 20}\) & \(3.6\pm 1.8\) & \(\mathbf{9.4\pm 5.3}\) & \(\mathbf{5.19\pm 0.08}\) & \(\mathbf{6.22\pm 0.08}\) & \(10.09\pm 0.25\) & \(\mathbf{0.20\pm 0.07}\) \\ \hline \multicolumn{4}{|c|}{Non-interval CR 2002 averages} & \(637\pm 93\) & \(4.0\pm 1.9\) & \(4.5\pm 1.2\) & \(5.01\pm 0.17\) & \(6.04\pm 0.09\) & \(10.37\pm 0.55\) & \(0.11\pm 0.03\) \\ \hline \end{tabular} \end{table}

Table 1: The start and end times of each slow-to-moderate speed, composition-enhanced solar wind interval during CR 2002 along with the interval-averaged solar wind plasma quantities: \(V_{r}\) and \(n_{p}\) (from _Wind_/3DP), \(A_{\rm He}\) (from _Wind_/SWE), and \(Q_{\rm C}\), \(Q_{\rm O}\), \(Q_{\rm Fe}\), and Fe/O (from ACE/SWICS). The interval shading is also indicated (Y—yellow, G—green, T—teal, and P—purple). Boldface values are slower/more enhanced than the non-interval averages over the remainder of CR 2002.
Figure 2: In-situ solar wind data from the _Wind_ spacecraft during Carrington Rotation (CR) 2002. Plotted, from top-to-bottom, are the bulk radial velocity \(V_{r}\), proton number density \(n_{p}\), alpha number density \(n_{\alpha}\), the He\({}^{2+}\)/H\({}^{+}\) ratio, \(A_{\rm He}\equiv n_{\alpha}/n_{p}\times 100\), and its variance \(\sigma_{\alpha/p}^{2}\) in 6-hour bins. The black curves are _Wind_/3DP 1 min data and the red curves are _Wind_/SWE 97 s data. The eight intervals, labeled #1–8 along the top axis, represent the different large-scale coronal source region classifications (HS—yellow, PS—green, teal, ICME—purple). The interval properties are summarized in Table 1.

## 3 Solar-Heliospheric Connectivity to the Coronal S-Web

### Heliospheric Ballistic Back-Mapping

Here we follow the standard procedure for heliospheric back-mapping described by Parenti et al. (2021) and references therein. The in-situ observations of solar wind at 1 AU are ballistically mapped from the spacecraft back to the Sun along the Parker (1958) spiral streamlines, assuming constant \(V_{r}\) values equal to the 1 hr averages measured by ACE. Figure 4(a) plots the heliospheric representation of Parker spiral streamlines colored by radial velocity.

Figure 3: Solar wind and ionic and elemental composition properties from ACE/SWEPAM, ACE/MAG, and ACE/SWICS for CR 2002. From top-to-bottom we plot: proton \(V_{r}\), the 272 eV suprathermal electron pitch angle distribution (PAD), the magnetic field components of \(\mathbf{B}_{\rm RTN}\), the magnetic field elevation and azimuthal angles (\(\delta\), \(\lambda\)), the distribution of C\({}^{4-6+}\), O\({}^{5-8+}\), and Fe\({}^{6-20+}\), and the Fe/O ratio. The slow-to-moderate speed intervals from Figure 2 and Table 1 are also shown.

In order to map the in-situ solar wind observations back to their coronal source regions on the solar surface, we use the standard PFSS model (e.g. Altschuler and Newkirk, 1969; Wang and Sheeley, 1992) to approximate the large-scale geometry of the solar corona. We calculate the PFSS extrapolation from the line-of-sight observations of the photospheric magnetic field taken by MDI (Scherrer et al., 1995) aboard the _Solar and Heliospheric Observatory_ (SOHO; Domingo et al., 1995). The line-of-sight fields are transformed into radial fields via the \(B_{r}=B_{\rm los}/\sin\theta\) relation. Starting with the original high-resolution (\(3600\times 1080\)) MDI synoptic map for CR 2002 with the Sun et al. (2011) interpolation for the polar field values, we rebin it to \(720\times 360\) and calculate the PFSS spherical harmonics through order \(l_{\rm max}=16\) with a source surface height of \(R_{ss}=2.5\,R_{\odot}\). Figure 4(b) plots the magnetic field line mapping from \(R_{ss}\) to the lower boundary at \(r=1\,R_{\odot}\). Here the large-scale, closed-field coronal HS and PS structures are labeled with their corresponding intervals.
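Both preparatory steps are simple enough to sketch. The Python fragment below (function names and grid conventions are ours; a minimal illustration, not the actual pipeline) implements the constant-speed ballistic longitude mapping and the line-of-sight-to-radial conversion with rebinning. We assume the sidereal Carrington rotation rate, and a map grid uniform in colatitude; MDI synoptic maps are uniform in sine-latitude, which would require replacing the colatitude array accordingly.

```python
import numpy as np

R_SUN_KM = 6.957e5
OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)  # sidereal Carrington rate [rad/s]

def backmap_longitude(lon_sc_deg, v_r_kms, r_sc_km=1.496e8,
                      r_target_km=2.5 * R_SUN_KM):
    """Carrington longitude of the Parker-spiral foot point at r_target_km,
    assuming the measured radial speed is constant along the streamline."""
    dt = (r_sc_km - r_target_km) / v_r_kms       # solar wind travel time [s]
    return (lon_sc_deg + np.degrees(OMEGA_SUN * dt)) % 360.0

def los_to_radial(b_los):
    """B_r = B_los / sin(theta) for map rows uniform in colatitude theta."""
    n_lat = b_los.shape[0]
    theta = np.pi * (np.arange(n_lat) + 0.5) / n_lat  # cell-centered colatitude
    return b_los / np.sin(theta)[:, None]

def rebin(bmap, ny=360, nx=720):
    """Block-average, e.g. a (1080, 3600) map down to (360, 720)."""
    fy, fx = bmap.shape[0] // ny, bmap.shape[1] // nx
    return bmap.reshape(ny, fy, nx, fx).mean(axis=(1, 3))
```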
The top panel of Figure 4(c) plots the radial velocity as a function of Carrington longitude at 1 AU (note time now runs from right-to-left as indicated by the upper \(x\)-axis label). We have also drawn the corresponding intervals of enhanced variability identified in §2.1. The middle panel of Figure 4(c) plots the 1 hr ACE velocity measurements as a function of Carrington longitude at \(R_{ss}\) while the bottom panel of Figure 4(c) shows the Carrington longitude of the streamline foot points at \(1\,R_{\odot}\). This ballistic mapping method has been widely used to estimate the coronal source regions of in-situ solar wind measurements (e.g. Neugebauer et al., 2002, 2004; Gibson et al., 2011; Zhao et al., 2013, 2017), including with the new PSP (e.g. Badman et al., 2020; Panasenco et al., 2020; Griton et al., 2021) and _Solar Orbiter_ data (e.g. Telloni et al., 2021). We note that, while the numerical errors associated with integrating velocity streamlines or magnetic field lines, e.g. from a PFSS extrapolation, are quite small (Stansby and Verscharen, 2022), the overall "uncertainty" in the position of the foot point of the magnetic field lines as mapped by these techniques is typically within approximately \(10^{\circ}\) (Neugebauer et al., 2002; Leamon and McIntosh, 2009), largely due to the assumptions and simplifications inherent in the models themselves, such as the current-free approximation in the corona and the unperturbed Parker spiral structure that does not account for the interaction between fast and slow solar wind streams, etc.

Figure 4: Heliospheric back-mapping for CR 2002. (a) Ecliptic plane streamlines color-coded by 1 AU radial velocity value to the \(r=2.5\,R_{\odot}\) source surface. (b) Continuation of the back-mapping from \(R_{ss}\) to \(1\,R_{\odot}\) with the PFSS magnetic field. The view of the ecliptic plane is from the solar north pole. (c) The mapping of the time series of 1-hr ACE/SWEPAM radial velocity in Carrington longitude at 1 AU (top panel) to the source surface (middle panel) and then to the solar surface (bottom panel). The intervals of high \(\alpha/p\) from Section 2 are also shown in each location.

### S-Web Source Region Configurations

The static representation of the Separatrix Web (S-Web) topological structures is based on the \(Q\)-map, which is defined as the logarithmic "squashing factor," \(\log Q\). The \(Q\)-map quantifies the magnetic field's geometric connectivity (Titov, 2007), i.e., separatrix and quasi-separatrix surfaces are regions of high \(Q\) (e.g. Titov et al., 2011; Antiochos et al., 2012; Scott et al., 2018). We have calculated the \(Q\) value from the CR 2002 PFSS magnetic field extrapolation via the formulation in Titov (2007) where \(Q=N^{2}/|\Delta|\),

\[N^{2}=\left(\frac{\partial Y}{\partial y}\right)^{2}+\left(\frac{\partial Y}{\partial z}\right)^{2}+\left(\frac{\partial Z}{\partial y}\right)^{2}+\left(\frac{\partial Z}{\partial z}\right)^{2}\, \tag{1}\]

and \(|\Delta|=|B_{x}/B_{x}^{*}|\).
While the full derivation (in arbitrary coordinates) is described in Titov (2007), the expression in spherical coordinates is straightforward to obtain: \(B_{x}/B_{x}^{*}\to B_{r}/B_{r}^{*}\), the starting and ending field line positions become \((x_{0},y_{0},z_{0})\to(r_{0},\theta_{0},\phi_{0})\) and \((X,Y,Z)\to(R,\Theta,\Phi)\), and the differentials become changes in arc length: \(\partial y\to r_{0}\,\partial\theta\), \(\partial Y\to R\,\partial\Theta\), \(\partial z\to r_{0}\sin\theta_{0}\,\partial\phi\), and \(\partial Z\to R\sin\Theta\,\partial\Phi\). We note that when the starting and ending radial surfaces are set to \(r_{0}=R_{\odot}\) and \(R=R_{*}\), one arrives at the exact spherical definition of \(N^{2}\) given as Equation (22) in Titov (2007). We calculate the field connectivity from a grid of \(1536\times 768\) field lines starting at the desired radial distance \(r_{0}\), uniformly spaced in (\(\theta\), \(\phi\)). As in the \(Q\)-map calculation of Wyper et al. (2016), we use a fourth-order central difference scheme for the derivatives.
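As a hedged illustration of this calculation, the sketch below evaluates \(N^{2}\) and \(Q\) on a regular start grid given the field line connectivity maps. The function name and argument layout are ours, and we substitute numpy's second-order `np.gradient` for the fourth-order stencil used in the text.

```python
import numpy as np

def squashing_factor(Theta, Phi, theta0, phi0, br0, br1, R0, R1):
    """Q = N^2 / |B_r(start)/B_r(end)| on a regular (theta0, phi0) start grid.

    Theta, Phi : field line ending angles as functions of the start grid;
    br0, br1   : radial field at the start and end of each field line.
    """
    # d(ending angle)/d(starting angle) on the regular grid
    dT_dth, dT_dph = np.gradient(Theta, theta0, phi0)
    dP_dth, dP_dph = np.gradient(Phi, theta0, phi0)
    sin0 = np.sin(theta0)[:, None]
    sinT = np.sin(Theta)
    # arc-length derivatives per the spherical substitutions above
    N2 = ((R1 * dT_dth / R0)**2 + (R1 * dT_dph / (R0 * sin0))**2
          + (R1 * sinT * dP_dth / R0)**2 + (R1 * sinT * dP_dph / (R0 * sin0))**2)
    return N2 / np.abs(br0 / br1)
```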
Figure 5 summarizes the coronal portion of our heliospheric back-mapping procedure to illustrate the connectivity of our composition-enhanced intervals to their coronal S-Web structures of origin. Figure 5(a) repeats the 1-hr average \(V_{r}\) points mapped to \(R_{ss}\) (from Figure 4(c)) and plots the longitudinal extent of our back-mapped intervals of interest with their boundaries indicated in every subsequent panel. Figure 5(b) shows the MDI magnetogram for CR 2002, the positive (negative) open field regions calculated from the PFSS solution shaded in blue (red), the structure of the helmet streamer belt with representative green field lines, and the HCS location (\(B_{r}=0\) at \(R_{ss}\)) as the black contour. The green field lines are traced along the HCS location at a radial distance just below \(R_{ss}\), and therefore represent the largest closed flux tubes belonging to the helmet streamer belt and illustrate the boundary between the large-scale open and closed coronal flux systems. The back-mapped intervals are labeled along the top axis of the plot. Figure 5(c) plots the \(Q\)-map at \(R_{ss}\). The values of \(\log Q\) are also shaded blue and red to indicate \(B_{r}\) polarity. The position of the HCS is immediately identified as where the polarities change sign. The darker arcs contained within each polarity correspond to S-Web arcs. These S-Web arcs indicate the PFSS field line mapping of the outer spine and fan structures of PS flux systems (Scott et al., 2018) and/or the presence of narrow channels of open field (Antiochos et al., 2007). The purple diamonds indicate the S-Web features associated with their corresponding back-mapped, composition-enhanced intervals.

Figure 5: Magnetic structure of the PFSS extrapolation for CR 2002 with back-mapped in-situ intervals of slow-to-moderate speed solar wind. (a) The back-mapped \(V_{r}\) time series and intervals \(\#1\)–\(8\) at \(R_{ss}=2.5\,R_{\odot}\) from Figure 4(c). (b) Synoptic map of the open field regions (blue positive polarity, red negative polarity). The configuration of the helmet streamer belt is shown as green field lines traced from the \(B_{r}=0\) contour at \(R_{ss}\) representing the location of the HCS. (c) \(Q\)-map at \(R_{ss}\) showing the characteristic arcs of the S-Web structure. The \(\log Q\) values are shaded blue (red) for positive (negative) polarity. (d) \(Q\)-map at \(1\,R_{\odot}\) showing the equatorial field line foot point locations and the low-latitude open field regions between PSs for intervals \(\#4\)–\(7\).

The in-situ intervals that contain the HCS crossing (\(\#3\), \(\#8\)) are clearly associated with the HS belt and the intersection of the HCS with the ecliptic plane, despite the spatial extent of interval #8 at \(R_{ss}\) (\(329^{\circ}\)–\(353^{\circ}\)) missing the PFSS location of the HCS (\(3.7^{\circ}\)) by \(\sim\)\(10^{\circ}\). This discrepancy is typical of the uncertainties associated with our simplified back-mapping (as mentioned above), but given that the PFSS helmet streamer width beneath the HCS-ecliptic plane intersection spans \(\sim\)\(67^{\circ}\) in Carrington longitude (\(330^{\circ}\)–\(37^{\circ}\)) at \(R_{\odot}\), the association between the solar wind during interval #8 and its origin from this portion of the helmet streamer is evident. Intervals #1, #4, and #7 each include a well-defined, PS S-Web arc in their longitudinal range. Interval #5 is directly adjacent to the S-Web arc of interval #4 and interval #6 appears to straddle the midpoint between the #4 and #7 S-Web arcs. Figure 5(d) plots the \(Q\)-map at \(r=1\,R_{\odot}\) and shows the foot points of the PFSS magnetic field lines traced from \(R_{ss}\). The positive polarity (blue) open field foot points map to the southern boundary of the HS belt/northern boundary of the polar coronal hole extensions. The negative polarity (red) open field foot points map to a series of low-latitude coronal holes sandwiched between the northern boundary of the HS belt and the southern boundaries of a series of large PSs above the AR magnetic fields between Carrington longitudes \(180^{\circ}\)–\(315^{\circ}\). While intervals #5 and #6 were not associated with a distinct S-Web arc at \(R_{ss}\), their field line foot points map to the vicinity of the open-closed flux boundaries between the low-latitude coronal holes and the large equatorial PSs.

## 4 Coherent Magnetic Structure in Composition-Enhanced Solar Wind

### \(H_{m}\)-PVI Analysis Procedure

We have implemented the Pecora et al. (2021) magnetic helicity-partial variance of increments (\(H_{m}\)-PVI) procedure to identify coherent magnetic structures within our intervals of composition-enhanced solar wind originating from coronal HS and PS source regions. Here, we briefly review the methodology for the identification of small-scale flux ropes and/or coherent flux tubes, while in Sections 4.2 and 4.3, we present the results from applying this technique to the HS intervals (#3, #8) and PS intervals (#1, #4-#7), respectively. In Section 4.4, we compare and contrast properties of the \(H_{m}\) and PVI time series in each interval.

Quite generally, the magnetic helicity can be written as

\[H_{m}=H_{m}^{+}(\ell)+H_{m}^{-}(\ell) \tag{2}\]

where the temporal or spatial scale, \(\ell\), is used to define the magnetic helicity contained in scales greater than \(\ell\) as \(H_{m}^{+}(\ell)\) and less than \(\ell\) as \(H_{m}^{-}(\ell)\). Since we are interested in the local coherence, we will ignore the \(H_{m}^{+}\) term and follow the Pecora et al. (2021) prescription for the local estimate of \(H_{m}^{-}\) using the two-point correlation function \(C_{jk}=\left\langle\,B_{j}(\mathbf{r})B_{k}(\mathbf{r}+\mathbf{s})\,\right\rangle\) (e.g. Matthaeus and Goldstein, 1982). We take the spatial lag \(\mathbf{s}=s\,\hat{\mathbf{e}}_{i}\) to be in the \(\hat{\mathbf{r}}\) direction so indices \(j\), \(k\) are the tangent and normal directions of the spacecraft's RTN coordinates.
We calculate the spatial average of the two-point correlation function over an interval of width \(W=2\ell\) centered at position \(x\) via

\[C_{jk}(x,s)=\frac{1}{W}\int_{x-\frac{W}{2}}^{x+\frac{W}{2}}d\xi\left[B_{j}(\xi+s)B_{k}(\xi)-B_{j}(\xi)B_{k}(\xi+s)\right]. \tag{3}\]

Following Pecora et al. (2021), we apply a smooth windowing function to \(C_{jk}(x,s)\) of the form \(f(s)=\frac{1}{2}+\frac{1}{2}\cos\left(\,2\pi s/W\,\right)\) to obtain the local helicity estimate,

\[H_{m}(x,\ell)=\int_{0}^{\ell}ds\;C_{jk}(x,s)\;f(s)\;. \tag{4}\]

The spatial domain quantities (\(x\), \(s\)) can be converted to the temporal domain (\(t\), \(\tau\)) with the usual Taylor approximation of \(x(t)=\int d\tau\;V_{r}(\tau)\). In our implementation of \(H_{m}(t,\ell)\), we use a spatial scale of \(\ell_{H}=4.3\times 10^{6}\) km (4300 Mm, \(\sim\)6.2 \(R_{\odot}\)); for a solar wind speed of 500 km s\({}^{-1}\), this corresponds to a temporal scale of 2.4 hr (i.e. \(\sim\)135 data points at 64 s cadence), which is the mean correlation timescale of the vector magnetic field over our eight intervals (\(2.37\pm 1.83\) hr). However, we note that the correlation timescales during the HCS/HPS intervals were significantly larger (\(4.58\pm 1.0\) hr) than the PS intervals (\(0.98\pm 0.52\) hr), which agrees with previous estimates (e.g. Matthaeus et al., 2005). Typically, one decides that a given peak in \(H_{m}(t,\ell)\) is significant if it exceeds a \(\pm 1\)-\(\sigma\) threshold. In the following sections, this standard deviation is calculated from the \(H_{m}\) curves over the entire interval of interest, i.e. those defined in Section 2 (and illustrated in Figures 2-5).

The PVI measure (e.g. Pecora et al., 2019, 2021) is defined as

\[\text{PVI}(t,\ell)=\frac{|\Delta\mathbf{B}(t,\ell)|}{\sqrt{\langle\,|\Delta\mathbf{B}(t,\ell)|^{2}\,\rangle}}\;, \tag{5}\]

in which \(|\Delta\mathbf{B}(t,\ell)|\equiv|\mathbf{B}(t+\ell)-\mathbf{B}(t)|\), the (temporal or spatial) averaging is over an appropriate interval, and \(\ell\) represents the scale size of the increments. The PVI technique has been widely used to identify discontinuities, reconnecting current sheets, and as a measure of turbulence structures (e.g. Greco et al., 2009, 2018; Osman et al., 2014; Pecora et al., 2019). Since we aim to use PVI to identify the sharp magnetic boundaries of coherent flux tubes and/or small flux rope plasmoids, we choose a temporal scale of \(\ell_{\text{PVI}}=2.13\) min and an averaging window of 24 hr (10 times the magnetic field's mean correlation timescale above). Again, one makes a determination of the significance of any given PVI peak via thresholding, where some authors have used \(\mathrm{PVI}>2\) (Pecora et al., 2021), \(>2.4\) (Greco et al., 2008), \(>3\) (Kilpua et al., 2022), or even larger thresholds of \(>4\)-\(6\) (e.g. Servidio et al., 2011; Zhou et al., 2019). Here we use \(\mathrm{PVI}>3.0\) during each of our CR 2002 solar wind intervals for ease of comparison between the HS and PS PVI statistics. The magnitude of the PVI peaks has been shown to be related to different types of boundaries or discontinuities in the solar wind. For example, the \(\mathrm{PVI}\gtrsim 3\) threshold has been interpreted as representing discontinuities that are actual physical boundaries of coherent magnetic structures rather than random statistical fluctuations, whereas PVI values \(\gtrsim 5\) have been associated with reconnection events (Servidio et al., 2011).
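A minimal Python transcription of Equations (3)-(5) might look like the following. The function names are ours, the calculation works in sample units (the physical normalization and Taylor-hypothesis conversion are omitted), and the default parameters reflect the 64 s cadence choices quoted above.

```python
import numpy as np

def pvi(B, lag=2, norm_window=1350):
    """PVI(t, ell), Eq. (5): |B(t+ell) - B(t)| over a running rms of the same.

    B : (N, 3) field vectors at 64 s cadence; lag=2 samples ~ 2.13 min;
    norm_window=1350 samples ~ 24 hr averaging of <|dB|^2>.
    """
    dB = np.linalg.norm(B[lag:] - B[:-lag], axis=1)
    mean_sq = np.convolve(dB**2, np.ones(norm_window) / norm_window, mode="same")
    return dB / np.sqrt(mean_sq)

def hm_minus(BT, BN, x, ell):
    """H_m^-(x, ell), Eqs. (3)-(4), in sample units (x and ell in samples).

    Requires ell <= x and x + 2*ell <= len(BT).
    """
    W = 2 * ell
    win = slice(x - ell, x + ell)                    # averaging window, width W
    total = 0.0
    for s in range(1, ell):
        lagged = slice(x - ell + s, x + ell + s)
        # two-point correlation C_jk(x, s), Eq. (3)
        c = np.mean(BT[lagged] * BN[win] - BT[win] * BN[lagged])
        total += c * (0.5 + 0.5 * np.cos(2.0 * np.pi * s / W))  # window f(s)
    return total
```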
The strength of the \(H_{m}\)-PVI procedure is that for a magnetic island or a coherent flux rope-like structure there is a local \(H_{m}(t)\) maximum somewhere within the flux rope and \(\mathrm{PVI}(t)\) yields local maxima at the flux rope boundaries. For a given time series, local peaks in \(H_{m}\) or \(\mathrm{PVI}\) can each occur for a variety of independent features, but the combination of two PVI peaks bracketing a local \(H_{m}\) maximum appears to be a fairly robust identification criterion (Pecora et al., 2021). For completeness, we note there are a number of complementary methods to identify coherent intervals and/or solar wind flux tube boundaries based on either statistical plasma properties or turbulence measures. For example, rapid changes in the magnetic field orientation (i.e. tangential discontinuities) can be characterized with \(\Delta\theta_{B}\) (e.g. Borovsky, 2008, and references therein), and these have recently been shown to coincide with abrupt changes in the suprathermal electron strahl width and/or intensity (Borovsky, 2020, 2021; Borovsky et al., 2021).

### Intervals of Helmet Streamer (HS) Wind

Figure 6 shows our two composition-enhanced intervals associated with HS wind and the in-situ HCS/HPS and IMF sector boundary crossings. Figure 6(a) shows interval #8, which is from DOY 109.582 to 111.267 (40.44 hr total duration), and Figure 6(b) shows interval #3, from DOY 123.644 to 126.164 (60.48 hr total duration). In each plot, the top panel shows the (normalized) 272 eV suprathermal electron PAD. The next three panels show the 64 s vector magnetic field in RTN components (\(B_{R}\) blue; \(B_{T}\) green; \(B_{N}\) red) along with its local orientation angles: latitude \(\delta\in[-90^{\circ},\,90^{\circ}]\) and longitude \(\lambda\in[0^{\circ},\,360^{\circ}]\). The following two panels show the magnetic helicity measure \(H_{m}^{-}\) and the PVI profiles, and the remaining three panels show the 2-hr ionic composition measurements from ACE/SWICS of C\({}^{4-6+}\), O\({}^{5-8+}\), and Fe\({}^{6-20+}\). The PVI panels have the \(\mathrm{PVI}\geq 3.0\) threshold shown as a solid red line and the local maxima are indicated with red diamond symbols. The vertical lines associated with the location of the PVI peaks are drawn over each panel. The \(H_{m}^{-}\) profile in 6(a) is normalized by a value of \(1.0\times 10^{8}\) and has a standard deviation of \(\sigma=0.582\). The \(H_{m}^{-}\) profile in 6(b) has two separate normalizations, as indicated by the additional \(y\)-axis at DOY 124.764, due to the magnitude of \(\mathbf{B}\) increasing after the HCS/HPS crossing. For \(t<124.764\) the normalization value is \(1.0\times 10^{8}\) whereas for \(t>124.764\) we normalize by \(7.68\times 10^{8}\) so that \(\sigma=1.124\) for both sides. In each of the \(H_{m}^{-}\) panels the \(\pm 1\sigma\) range is shaded light gray. The local \(H_{m}^{-}\) magnitude maxima larger than 1-\(\sigma\) within each PVI interval are indicated with blue diamond plot symbols.

The \(H_{m}\)-PVI procedure identifies a number of coherent magnetic structures throughout each interval. The occurrence of significant PVI peaks tends to be clustered in local patches, and each interval's HCS/HPS crossing (the \(\sim\)180\({}^{\circ}\) transition in \(\lambda\) coincident with the bidirectional/broadening of the suprathermal PADs) is bracketed by a cluster of PVI peaks. In HS interval #8, there are four main clusters of PVI peaks: DOY 109.7-109.9, 110.0-110.15, 110.2-110.6, and 110.9-111.3.
The two largest clusters of PVI peaks occur on either side of the HCS/HPS crossing and contain the greatest number of significant \(H_{m}\) peaks. Once the suprathermal electron PAD transitions from a unidirectional (\(0^{\circ}\)) strahl to a broader, more isotropic distribution (DOY 110.5 through 111.1) there is a train of three coherent north-to-south magnetic field rotations (positive-to-negative profile in \(\delta\)) at the beginning of the PAD transition and a number of larger \(H_{m}\) structures as the PAD transitions to oppositely-directed (180\({}^{\circ}\)) strahl on the other side. Notably, the structure centered at DOY 111.0 corresponds to a 1.5 hr-wide, relatively flat profile in both \(\delta\) and \(\lambda\). Finally, there is a slight increase in O\({}^{7+}\) (and decrease in O\({}^{6+}\)) for the duration of the HCS/HPS crossing during the broad electron PAD region which coincides with a slight shift to higher Fe charge states during this same period. Likewise, there is a significant increase in C\({}^{6+}\) at the beginning and end of the PAD transitions at the same time as a number of the HCS/HPS interval-related \(H_{m}\)-PVI structures. HS interval #3 exhibits many similar features to those of interval #8. For example, the significant \(H_{m}^{-}\) peaks occur on either side of the IMF sector boundary at DOY 124.6, including three consecutive structures between DOY 124.5-124.8, followed by three more, centered on DOY 125.0, 125.1, and 125.2. These \(H_{m}\)-PVI structures are also associated with coherent rotations in (\(\delta,\lambda\)), as well as a sharp local maximum in C\({}^{6+}\) at the HCS superimposed on top of a broader region of enhanced C\({}^{6+}\) from DOY 124.0-125.5. The O\({}^{7+}\) signal shows a similar, but less pronounced, trend over a slightly narrower range (DOY 124.3-125.2). However, the enhanced high Fe charge states (up to Fe\({}^{16+}\)) tend to be shifted earlier (DOY 123.9-124.6) and return to being strongly peaked at Fe\({}^{9-11+}\) for \(t>125.0\). The suprathermal electron PADs leading up to the HCS crossing are more patchy, alternating between the 180\({}^{\circ}\) strahl and broader, more isotropic (and even bidirectional) PADs before transitioning to more steady 0\({}^{\circ}\) strahl after DOY 125.0. Throughout both intervals, the PVI peaks occur almost exclusively at discontinuities in the magnetic field angles, and these are often also coincident with changes in the electron PADs. Thus, the conjecture that the PVI peaks select boundaries of distinct plasma intervals (either magnetic flux tubes, discrete solar wind flows, or magnetic island plasmoid/small flux ropes) appears supported by our results. Another feature of the \(H_{m}\)-PVI analysis in these intervals is that, even when the \(H_{m}\) profiles do not exceed the 1-\(\sigma\) significance threshold, there are often still local peaks and coherent magnetic field signatures within the bracketing PVI peaks. ### Intervals of Pseudostreamer (PS) Wind Figure 7 shows the remaining composition-enhanced intervals associated with non-HS wind, i.e. from PS or PS-adjacent source regions, in the same format as Figure 6. 
Figure 7(a) shows interval #7, which is from DOY 113.0 to 114.6 (38.4 hr duration), Figure 7(b) shows interval #6, from DOY 115.6 to 117.8 (52.8 hr), Figure 7(c) shows the combined intervals of #4 and #5 from DOY 118.2 to 120.4 (52.8 hr), and lastly Figure 7(d) shows the ICME interval #2 from DOY 129.2 to 130.7 (36 hr) and the subsequent, brief PS interval #1, from DOY 130.7 to 131.5 (19.2 hr).

The qualitative features of the HS intervals described above are also present in each of the PS intervals. Specifically, the \(H_{m}\)-PVI analysis continues to identify magnetic field discontinuities and/or sudden changes in the electron PAD via the significant PVI peaks, the PVI peaks clearly show clustering, and these peaks often bracket significant local maxima in the \(H_{m}^{-}\) magnitude. The \(H_{m}\) normalization for intervals #7 and #6 is \(1.0\times 10^{8}\), which results in a standard deviation of \(\sigma=0.940\) for #7 and \(\sigma=0.768\) for #6. In interval #7 (Figure 7(a)), there are a series of significant \(H_{m}\) peaks from DOY 113.1-113.55 that begin before the cluster of \(\mathrm{PVI}>3\) events ranging from DOY 113.35-113.75 and another succession of \(H_{m}\) peaks coincident with the next large PVI cluster at \(t\gtrsim 114.2\). During interval #6 (Figure 7(b)), the PVI clusters are more frequent and of shorter duration, whereas the significant \(H_{m}\) peaks are more spread out over the entire interval. The overlap between the two occurs primarily for DOY 116.3-116.7 and for \(t>116.9\). Essentially the entire #7 interval has a moderate enhancement of C\({}^{6+}\), but almost no corresponding enhancement in the hotter charge states of O or Fe. Interval #6 is similar, with perhaps a very slight enhancement in Fe\({}^{10-12+}\) between DOY 116-117, and a more prominent C\({}^{6+}\) region for \(t>117\). Additionally, there is one 2-hr data point centered at 117.5 that includes a slight increase in O\({}^{7+}\) (and decrease in O\({}^{6+}\)), coincident with a coherent magnetic structure interval.

Figure 6: Intervals of the helmet streamer (HS) wind that include heliospheric current sheet/plasma sheet crossings. (a) HS (#8) from DOY 109.582 to 111.267. (b) HS (#3) from DOY 123.644 to 126.164.

Figure 7: Non-helmet streamer, composition-enhanced intervals (primarily from pseudostreamers) in the same format as Figure 6. (a) PS (#7) from DOY 113.0 to 114.60. (b) PS (#6) from DOY 115.60 to 117.80. (c) PS (#5, #4) from DOY 118.20 to 120.40. (d) ICME (#2) and PS (#1) from DOY 130.70 to 131.50.

In general, PS intervals #7 and #6 can be considered to have fewer composition enhancements than our previous HS intervals. And while there are still some discrete regions of broader electron PAD signatures, especially in #7, for the most part these PS intervals have less variation in the PAD profiles--as may be expected for unipolar PS solar wind. PS intervals #5+4 and #2+1, shown in Figures 7(c) and 7(d), respectively, also show PVI clusters and sequential trains of \(H_{m}\) coherent structures bracketed by PVI peaks. The \(H_{m}\) normalization in interval #5+4 is \(1.0\times 10^{8}\), resulting in a standard deviation of \(\sigma=1.608\). We use the same normalization (\(10^{8}\)) for the ICME interval (#2), which yields \(\sigma=2.902\), while for the trailing PS interval (#1) we use a normalization of \(3.877\times 10^{7}\) to obtain the matching \(\sigma\) value.
PS interval #5+4 has the most enhanced heavy ion charge states of our PS intervals, including a significant increase in C\({}^{6+}\) and O\({}^{7+}\) from DOY 118.7-119.6, coincident with \(H_{m}\) peaks at DOY 118.9, 119.1, 119.2, and 119.3. Interval #5+4 also contains highly variable and enhanced hot iron charge states, Fe\({}^{\geq 12+}\), including a Fe\({}^{16+}\) component present throughout almost the entire time range, DOY 118.5-120.2. Additionally, there are (small) flux rope-like rotations in the DOY 118.4-118.5 and 119.7-119.8 structures. Again, the PVI peaks representing coherent structure boundaries are seen to line up with discontinuities in the magnetic field (\(\delta,\lambda\)) angles.

While there is interesting, composition-enhanced internal magnetic structuring present within the ICME interval of Figure 7(d)--including large ICME boundary enhancements in the Fe distribution (e.g. DOY 129.3-129.7 and 130.0-130.4)--in this work we will concentrate on the PS interval #1. The Fe\({}^{16+}\) component is also present for a large percentage of this interval, through DOY 131.3. There is an intriguing sequence of short, intermittent bursts of bidirectional electrons from DOY 130.6-131.0 which have corresponding \(H_{m}\) structures that do not exceed the 1-\(\sigma\) threshold but occur toward the latter portion of an extended PVI peak cluster. The \(H_{m}\) peaks that do exceed the threshold occur towards the end of interval #2 and into the beginning of interval #1 (DOY 130.2-130.7), and the coherent magnetic structures at \(t\gtrsim 131.3\) also show flux rope-like rotation signatures.

### Statistical Properties

Given the variation and "randomness" of the magnetic field structure(s) and fluctuations within our slow-to-moderate speed, composition-enhanced HS and PS solar wind intervals, statistical methods are required to characterize various properties of the time series (e.g. Zurbuchen et al., 2000). A summary of these analyses is presented in Figure 8.

Figure 8(a) plots the autocorrelation functions \(A_{Hm}(\tau)\) of the \(H_{m}^{-}(t)\) time series of the HS intervals (top row) and the PS intervals (bottom row). The average \(e\)-folding time, \(\langle\tau_{1/e}\rangle\), for each set of curves is given in their respective panels. If one defines a characteristic width (duration) of the magnetic helicity-carrying structures as \(w=2\langle\tau_{1/e}\rangle\), then the mean HS interval width is \(w_{\rm HS}=0.94\pm 0.02\) hr and the mean PS interval width is \(w_{\rm PS}=0.96\pm 0.22\) hr. These values are consistent with, i.e. on the order of, the \(\sim\)90 min periodicity found in solar wind proton density structures (e.g. Viall et al., 2010; Viall and Vourlidas, 2015; Kepko et al., 2016; Di Matteo et al., 2019).

Figure 8(b) plots the temporal waiting time histogram, \(f_{\rm PVI}(\Delta t)\), during the HS (top row) and PS (bottom row) intervals. We have fit a line to each of the distributions in log-log space using the IDL linfit.pro least-squares minimization procedure, representing an \(f(x)=Ax^{b}\) power-law form. The best-fit lines are also plotted in red in each panel and the fit parameters (and their 1-\(\sigma\) uncertainties) are given in the plot. The HS and PS distributions have very similar slopes: \(b=-0.83\pm 0.08\) in the HS case and \(b=-1.02\pm 0.06\) for the PS case. If the PVI peaks represent boundaries of coherent magnetic structures, i.e.
plasmoid flux ropes or individual flux tubes, then the \(\Delta t\) "waiting time" between PVI peaks should be roughly the flux structure's diameter (with some variation due to the spacecraft's relative impact parameter). The mean waiting times are \(\langle\Delta t\rangle=1.10\) hr and 1.01 hr for the HS and PS distributions, respectively. The vertical yellow bar in the HS waiting time distribution highlights the bins centered at \(\Delta t=1.625\), 1.875, and 2.125 hr. Each of these bins has counts \(\gtrsim\) 1-\(\sigma\) above the best-fit line, which may indicate the presence of additional coherent structure at these timescales; this is, again, remarkably consistent with the Viall et al. (2010) \(\sim\)90 min timescales for periodic density structures. Interestingly, the PS waiting time distribution does not appear to have a similar enhancement in the 1.5-2 hr scale range, although the counts in the PS bins at \(\Delta t=2.875\) and 5.62 hr are also on the order of 1-\(\sigma\) above the best-fit line.

Figure 8(c) plots the spatial waiting time histogram, \(f_{\rm PVI}(\Delta s)\), in the same format as column 8(b). Here we note the mean spatial lengths for the HS and PS intervals are, again, essentially identical at \(\langle\Delta s\rangle=2.44\,R_{\odot}\) (1698 Mm) and \(2.41\,R_{\odot}\) (1677 Mm), respectively. An interesting feature is the "disappearance" of the small enhancement at the 1.5-2 hr scale range in the HS PVI waiting time distribution when plotted as spatial scales. Since we used the radial velocity time series to integrate the distance between PVI peaks, rather than a constant \(V_{r}\) value, the PVI \(\Delta s\) distribution is not merely a re-scaled version of the \(\Delta t\) distribution. This means, at least in the case of HS slow-to-moderate speed solar wind, that it may be possible to miss a periodic or quasi-periodic signal associated with solar wind formation/source region properties during its subsequent heliospheric evolution if one is focusing on the spatial domain. Conversely, the counts in the PS \(\Delta s=1.25\,R_{\odot}\) bin are significantly above the power-law fit without an obvious corresponding enhancement in the PS \(\Delta t\) distribution.

The average solar wind speeds obtained from the first moments of the temporal and spatial waiting times are \(\langle V_{r}\rangle=\langle\Delta s\rangle/\langle\Delta t\rangle=429\) km s\({}^{-1}\) for the HS intervals and \(\langle V_{r}\rangle=461\) km s\({}^{-1}\) for the PS intervals. These values appear to be slightly lower than the averages obtained directly from the \(V_{r}(t)\) profiles during our composition-enhanced intervals (Table 1, Figures 2-4). Our PVI waiting time statistics seem compatible and consistent with previous applications of these analyses; at scales below the magnetic correlation scale, the PVI waiting time distribution is well approximated by a power law, and at scales greater than the correlation scale, the distribution takes on more of the classic Poisson waiting time exponential form (Greco et al., 2009). The temporal/spatial plots in Figure 8(b),(c) show a consistent departure/roll-over from the best-fit line for \(\Delta t\gtrsim 2.4\) hr (\(\Delta s\gtrsim 6\,R_{\odot}\)) and the first moments of the waiting times/length scales (\(\langle\Delta t\rangle\), \(\langle\Delta s\rangle\)) are on the order of the associated correlation scales (cf. §4.1).
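The log-log fitting step described above (performed in IDL with linfit.pro) has a direct numpy analogue. The sketch below is an illustration only: the peak-finding criterion, bin edges, and threshold are representative choices of ours, not the exact values behind Figure 8.

```python
import numpy as np

def waiting_time_powerlaw(pvi_series, threshold=3.0, dt_hours=64.0 / 3600.0):
    """Fit f(x) = A x^b to the PVI waiting-time histogram in log-log space."""
    # local maxima of the PVI series above the threshold
    above = pvi_series > threshold
    interior = (pvi_series[1:-1] >= pvi_series[:-2]) & \
               (pvi_series[1:-1] >= pvi_series[2:])
    peaks = np.flatnonzero(above[1:-1] & interior) + 1
    waits = np.diff(peaks) * dt_hours                  # hours between PVI peaks
    counts, edges = np.histogram(waits, bins=np.logspace(-1.0, 1.0, 20))
    centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
    ok = counts > 0
    b, logA = np.polyfit(np.log10(centers[ok]), np.log10(counts[ok]), 1)
    return b, 10.0**logA, waits
```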
In fact, the range of values we obtain for the power-law fit exponents (\(-1.02\) to \(-0.78\)) is entirely consistent with those found by Greco et al. (2009) in MHD turbulence simulation data (\(-0.92\)) and in the solar wind at 1 au (\(-1.29\)), and in PSP observations of the \(\mathrm{PVI}>3\) magnetic field fluctuations at \(\sim\)0.25 au (\(-1.29\) to \(-0.83\); Chhiber et al., 2020).

## 5 Summary and Discussion

It is well established that in-situ solar wind composition, its variation, and its associated plasma structures are all remnant signatures of the physical processes of solar wind formation and the coronal conditions of its origin. We have presented a comprehensive analysis of a set of slow-to-moderate speed, composition-enhanced solar wind intervals at 1 AU during CR 2002. Our intervals were selected on the basis of solar wind speed and observed enhancements in some combination of \(n_{p}\), \(n_{\alpha}\), \(A_{\rm He}\), or their variability. We have shown that each of these intervals corresponds to solar wind flows with complex, broadened or bidirectional suprathermal electron strahl, elevated (hot) ionic charge states of C, O, and Fe, and an enhanced Fe/O ratio.

Pseudostreamers are a prime location for interchange reconnection and they are thought to be responsible for a component of intermittent, slow solar wind outflow (e.g. Masson et al., 2012; Wang et al., 2012; Higginson et al., 2017; Wang and Panasenco, 2019). In general, energizing surface flows (e.g. translational or rotational shearing flows, flux emergence, and/or flux cancellation/tether-cutting) will build up volumetric currents, stress magnetic null points, and develop strong current sheets at topological boundaries, thereby creating favorable conditions for magnetic reconnection (e.g. Antiochos et al., 2012; Rappazzo et al., 2012; Burkholder et al., 2019; Mason et al., 2021).

Figure 8: Statistical properties of the intermittency and coherent magnetic structures defined via the \(H_{m}\)–PVI analysis during our slow-to-moderate solar wind intervals with enhanced \(\alpha/p\) and heavy ion charge states. (a) Autocorrelations of the \(H_{m}^{-}(t)\) profiles from Figures 6 (top) and 7 (bottom). (b) Temporal waiting time distribution of \(\Delta t\) between PVI peaks for the HS (#8, #3) and PS (#7, #6, #5+4, #1) intervals. (c) Spatial waiting time distribution of \(\Delta s\) between PVI peaks for the HS and PS intervals. In columns (b), (c), the red curves show power-law fits to the respective waiting time distributions.

Lynch and Edmondson (2013) showed that 2.5D pseudostreamer interchange reconnection (in the form of pre-eruption breakout reconnection) could result in bursty, quasi-steady signatures in density along the external spine and coronal dimming signatures near the stressed null point and current sheet (see also Kumar et al., 2021), while recent simulations from Aslanyan et al. (2021, 2022) have illustrated that the complex 3D interchange reconnection dynamics seen by Higginson et al. (2017, 2018) can also be produced at the open-closed boundaries of pseudostreamer flux systems. There is an implicit relationship between the Zhao et al. (2017) source region categories and the large-scale coronal magnetic topology in the neighborhood of the PFSS field line foot points. For example, their 'Quiet Sun,' 'Active Region,' and 'Active Region Boundary' classifications--typically thought of as closed flux regions--are likely to be associated with structures giving rise to the S-Web, i.e.
pseudostreamers and small/narrow open field regions such as low-latitude coronal holes. With the application of standard back-mapping techniques, we showed that the slow-to-moderate speed, composition-enhanced solar wind intervals at 1 AU map to large-scale coronal features such as the HS belt and S-Web arcs. These are precisely the locations predicted by the \(Q\)-map topology analysis to be sites favorable for interchange reconnection during the dynamic evolution of the solar corona's open-closed flux boundaries. Lastly, we note that the presence of relatively slow, highly structured, and composition-enhanced solar wind that originates from S-Web arcs far from the HCS is a crucial test of the S-Web theory (e.g. Higginson et al., 2017; Di Matteo et al., 2019).

We have analyzed properties of the in-situ coherent magnetic structures within each composition-enhanced interval as determined by the Pecora et al. (2021) \(H_{m}\)-PVI procedure for the identification of helicity-carrying flux tubes and/or magnetic island plasmoids. The characteristic widths of these coherent magnetic structures (\(\sim\)1 hr from \(H_{m}\); \(\sim\)2 hr from PVI) are consistent with the \(\sim\)90 min periodicities determined from either in-situ proton density time series (Viall et al., 2010) or the Thomson-scattered white-light coronagraph brightness fluctuations that are proportional to the line-of-sight integrated electron density \(n_{e}\) (Viall and Vourlidas, 2015). There appears to be a 1.5-2 hr timescale signature above the expected power-law distribution of PVI waiting times in HS-associated solar wind that is either significantly less obvious or non-existent in our PS intervals. There also appears to be an enhancement of the PS-associated waiting time length scale at \(\Delta s\sim 1.25\,R_{\odot}\) without a corresponding enhancement in the temporal distribution. One may expect different types of reconnection-generated magnetic structures at the boundaries of HS and PS regions due to the topological differences, e.g. as discussed by Edmondson and Lynch (2017) and Higginson and Lynch (2018), but further numerical modeling of their origin and heliospheric evolution will be needed. This work complements previous statistical studies characterizing magnetic field and plasma properties within coherent intervals or by solar wind type (e.g. Ko et al., 2018; D'Amicis et al., 2019; Borovsky et al., 2019, 2021), as well as those studies of specific, small-scale structures (Khabarova et al., 2021; Gershkovich et al., 2022), such as small magnetic flux ropes (e.g. Feng et al., 2008; Yu et al., 2016; Murphy et al., 2020; Choi et al., 2021). Importantly, our attempt to relate various in-situ properties of the structured variability in slow-to-moderate speed solar wind through an application of the \(H_{m}\)-PVI methodology represents a significant extension of previous work where coherent magnetic structures identified "by eye" were shown to be coincident with structure in the proton density and \(A_{\rm He}\) observations (e.g. Kepko et al., 2016; Di Matteo et al., 2019).
Given recent interest in the further refinement and development of sophisticated automated methods such as machine learning/artificial intelligence neural networks, the \(H_{m}\)-PVI procedure appears to be a promising candidate for inclusion in the suite of tools being constructed to classify solar wind types and properties (e.g., as discussed in Section 1) and to identify and characterize coherent flux rope intervals, ranging in spatiotemporal scales from ICMEs (e.g. Nguyen et al., 2019; dos Santos et al., 2020; Roberts et al., 2020; Narock et al., 2022) to small-scale flux ropes embedded in the slow solar wind and HCS/HPS crossings (e.g. Hu et al., 2018; Zhao et al., 2020). The results presented herein open up a number of avenues for future research efforts: (1) extending the current analysis to in-situ solar wind plasma, field, and composition measurements to many more CRs over different phases of the activity cycle; (2) performing forward modeling of heavy ion charge states and elemental abundances associated with the spatial distribution of discrete, observer-connected solar wind flux tubes with varying solar wind outflow properties based on coronal conditions of their foot point locations/source region topologies; and (3) further analysis of existing and future numerical MHD simulations of dynamic S-Web outflow and their derived observational signatures. Since there has been recent progress integrating aspects of heavy ion composition forward modeling into steady-state MHD solar wind calculations (e.g. Oran et al., 2015; Shen et al., 2017; Lionello et al., 2019; Szente et al., 2022), it would be extremely interesting to perform these calculations on dynamic, time-dependent MHD modeling of the formation and evolution of coherent magnetic structures generated under different reconnection scenarios. For example, the Aslanyan et al. (2022) calculation of the synthetic suprathermal electron PAD "time series" associated with PS interchange reconnection outflows shows excellent qualitative agreement with the observed broadening of the strahl for some of our PS intervals (#s 4, 6, and 7, in particular). Lynch et al. (2014) showed the largest (i.e. low frequency) \(\delta B/\langle B\rangle\) signatures resulting from PS reconnection had characteristic length-scales of 100-350 Mm (0.14-0.50 \(R_{\odot}\)) in the corona which reflected the spatial scale of their PS flux system of origin, and Higginson & Lynch (2018) demonstrated that the MHD simulation-derived, synthetic in-situ magnetic field signatures of a similarly-sized, non-linear torsional Alfven wave could resemble the coherent magnetic structure of small-scale magnetic flux ropes/streamer blob plasmoids typically associated with HS slow wind in the vicinity of the HCS/HPS. On the largest scales (the 10s of hours of our interval durations), there is a remarkably clear association between our HS and PS S-Web arc intervals and in-situ composition enhancements. On the scales of coherent magnetic structures depicted in Figures 6-7, there are _some_ indications that the PVI boundaries are also associated with discrete changes in coronal freeze-in temperatures as inferred from the heavy ion charge states. The 2-hr cadence of the ACE/SWICS data used herein obviously limits our ability to resolve charge-state structure below the averaging window duration. Smaller scale features have been observed and reported in Kepko et al. (2016) and Gershkovich et al. 
(2022) using periods of high-cadence (12-min native instrument resolution) ACE/SWICS data to argue that some of the discrete magnetic flux tube intervals of interest did line up with sudden changes in various composition measures (i.e. He, C, and O abundances, the C\({}^{6+}\)/C\({}^{5+}\) charge state ratio, etc.). Measurements from the Heavy Ion Sensor (HIS, with a native resolution of 30 seconds for heavy ions), part of the _Solar Orbiter_ Solar Wind Analyser (SWA) instrument suite (Owen et al., 2020), should enable the identification and characterization of smaller scale associations of coherent magnetic structures with in-situ composition enhancements. Additionally, the scientific importance of multispacecraft measurements and remote-sensing and in-situ quadrature observational geometries, to establish the solar-heliospheric connection for specific plasma features of well-observed interchange reconnection events, has been recently demonstrated by Telloni et al. (2022) with the first direct imaging of a "switchback" with _Solar Orbiter_'s Metis coronagraph (Antonucci et al., 2020). This switchback event's likely origin from the complex S-Web configuration of a small PS S-Web arc coming off the main HS belt/HCS and a null-point spine-fan curtain topology where the PS and HS flux systems intersect strongly motivates continued theoretical development, data analysis, and numerical simulations of the dynamic S-Web model for the slow solar wind.

The authors acknowledge support from NASA HGI 80NSSC18K0645 and HSR 80NSSC18K1553. Additionally, BJL acknowledges support from NASA grants 80NSSC21K0731, 80NSSC20K1448, and 80NSSC21K1325; NMV and AKH acknowledge support from the competed Internal Scientist Funding Model (ISFM) at NASA GSFC; LZ acknowledges support from NASA grant 80NSSC21K0579; and STL acknowledges support from NASA grants 80NSSC19K0853 and 80NSSC20K0192. The authors thank the _Wind_ and ACE mission teams for making the in-situ magnetic field, plasma, and composition data available at [http://cdaweb.gsfc.nasa.gov](http://cdaweb.gsfc.nasa.gov) and [https://izw1.caltech.edu/ACE/ASC/](https://izw1.caltech.edu/ACE/ASC/), as well as the MDI team for making the photospheric magnetogram data available at [http://hmi.stanford.edu/data/synoptic.html](http://hmi.stanford.edu/data/synoptic.html).
2304.13971
Data-driven time-scale separation of ODE right-hand sides using dynamic mode decomposition and time delay embedding
Multi-physics simulations often involve multiple different scales. The ARKODE ODE solver package in the SUNDIALS library addresses multi-scale problems with a multi-rate time-integrator that can work with a right-hand side that has fast scale and slow scale components. In this report, we use dynamic mode decomposition and time delay embedding to extract the fast and slow components of the right-hand side of a simple ODE from data. We then use the extracted components to solve the ODE with ARKODE. Finally, to move towards a real-world use case, we attempt to extract fast and slow scale dynamics from synthetic seismic modeling data.
Cody J. Balos
2023-04-27T06:23:34Z
http://arxiv.org/abs/2304.13971v1
Data-driven time-scale separation of ODE right-hand sides using dynamic mode decomposition and time delay embedding

###### Abstract

Multi-physics simulations often involve multiple different scales. The ARKODE ODE solver package in the SUNDIALS library addresses multi-scale problems with a multi-rate time-integrator that can work with a right-hand side that has fast scale and slow scale components. In this report, we use dynamic mode decomposition and time delay embedding to extract the fast and slow components of the right-hand side of a simple ODE from data. We then use the extracted components to solve the ODE with ARKODE. Finally, to move towards a real-world use case, we attempt to extract fast and slow scale dynamics from synthetic seismic modeling data.

## 1 Introduction and Overview

As computing power increases, multi-physics simulations, which involve the coupling of different physical phenomena, are becoming more common and more ambitious. The different physics in these simulations often occur at different scales. As a result, there are inefficiencies when solving the ODE systems that arise because the time-step of the integrator is restricted by the fast scale. The ARKODE [6] ODE solver package in the SUNDIALS library [1] addresses these problems with a multirate time-integrator that is defined by a right-hand side composed of fast-scale and slow-scale components as shown in Equation (1). Naturally, this leads to the question of how to split the right-hand side appropriately. In this report, we will take a data-driven approach based on the dynamic mode decomposition (DMD), a powerful tool for approximating dynamics from data [4, 7].

\[\dot{y}=\underbrace{f_{s}(t,y)}_{\text{slow scale}}+\underbrace{f_{f}(t,y)}_{\text{fast scale}},\quad y(t_{0})=y_{0} \tag{1}\]

In [2], Grosek _et al._ introduced a method for extracting the low-rank and sparse components of a DMD approximation in the context of background and foreground separation in videos; here, however, we will apply the concept to other dynamical systems where there exists a slow scale (background) that is much slower than a fast scale (foreground). In order to successfully reconstruct the dynamics from systems with only partial state information, we will use time delay embeddings to augment the state. This method was presented by Kamb _et al._ as a Koopman observable in [3]. After extracting the fast and slow components of a simple ODE right-hand side, we will solve a multi-rate ODE by feeding the approximated components into ARKODE. Finally, we will attempt to extract fast and slow-scale dynamics from synthetic tsunami modeling data produced with a code developed by Vogl and Leveque based on the Clawpack solver package [8]. The idea is that with some additional work, the extracted components could be used as the fast and slow right-hand sides in ARKODE to solve ODE system(s) that stem from the wave propagation equations discussed.

The rest of this report is organized as follows. Section 2 discusses the theory behind the methods used to form our separation algorithm. Section 3 explains the algorithm and its implementation in MATLAB. Section 4 discusses the computational results for the toy problem and the synthetic tsunami data. Finally, we provide our conclusions in Section 5.

## 2 Theoretical Background

In this section we will discuss the theory behind the different methods used in our separation algorithm.
Consider a dynamical system \[\frac{d\mathbf{x}}{dt}=\mathbf{f}(\mathbf{x},t;\mu), \tag{2}\] where \(\mathbf{x}(t)\in\mathbb{R}^{n}\) is a vector representing the state of our dynamical system at time \(t\), \(\mu\) contains the parameters of the system, and \(\mathbf{f}(\cdot)\) represents the dynamics. By sampling (2) with a uniform sampling rate we get the discrete-time flow-map \[\mathbf{x}_{k+1}=\mathbf{F}(\mathbf{x}_{k}). \tag{3}\] ### Koopman Theory The Koopman operator \(\mathcal{K}_{t}\) provides an operator perspective on dynamical systems. The insight behind the operator is that the finite-dimensional, nonlinear dynamics of (2) can be transformed into an infinite-dimensional, linear dynamical system by considering a new space of scalar observable functions \(g:\mathbb{R}^{n}\rightarrow\mathbb{C}\) on the state instead of the state directly [3]. The Koopman operator acts on the observable: \[\mathcal{K}_{t}g(\mathbf{x}_{0})=g(\mathbf{F}_{t}(\mathbf{x}_{0}))=g(\mathbf{x}(t))\] where \(\mathcal{K}_{t}\) maps the measurement function \(g\) to the next set of values it will take after time \(t\). ### Dynamic Mode Decomposition By sampling (2) with a uniform sampling rate, the DMD approximates the low-dimensional modes of the linear, time-independent Koopman operator in order to estimate the dynamics of the system. Consider \(n\) data points with a total of \(m\) samplings in time; then we can form the two matrices: \[\mathbf{X_{1}}=\begin{bmatrix}|&|&|\\ \mathbf{x_{1}}&\mathbf{x_{2}}&\cdots&\mathbf{x}_{m-1}\\ |&|&|\\ \end{bmatrix} \tag{4}\] \[\mathbf{X_{2}}=\begin{bmatrix}|&|&|\\ \mathbf{x_{2}}&\mathbf{x_{3}}&\cdots&\mathbf{x}_{m}\\ |&|&|\\ \end{bmatrix}. \tag{5}\] The Koopman operator \(\mathbf{A}\) provides a mapping that takes the system, modeled by the data, from time \(j\) to \(j+1\), i.e. \(\mathbf{x}_{j+1}=\mathbf{A}\mathbf{x}_{j}\). We can use the singular value decomposition (SVD) of the matrix \(\mathbf{X_{1}}\) to truncate to rank \(\ell\) as well as to obtain the benefits of unitary matrices. Accordingly, \(\mathbf{X_{1}}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*}\), where \(\mathbf{U}\in\mathbb{C}^{n\times\ell}\) is unitary, \(\mathbf{\Sigma}\in\mathbb{C}^{\ell\times\ell}\) is diagonal, and \(\mathbf{V}\in\mathbb{C}^{(m-1)\times\ell}\) is unitary. Thus, the low-rank approximation of the Koopman operator is: \[\tilde{\mathbf{A}}=\mathbf{U}^{*}\mathbf{X_{2}}\mathbf{V}\mathbf{\Sigma}^{-1},\] its eigenpairs are given by \(\tilde{\mathbf{A}}\mathbf{w}_{j}=\lambda_{j}\mathbf{w}_{j}\), and the DMD mode (eigenvector of the Koopman operator) corresponding to eigenvalue \(\lambda_{j}\) is \(\varphi_{j}=\mathbf{U}\mathbf{w}_{j}\). Using this approximate eigendecomposition of the Koopman operator, and defining the continuous-time eigenvalues \(\omega_{j}=\ln(\lambda_{j})/\Delta t\), we obtain the DMD reconstruction of the dynamics: \[\mathbf{x}_{\mathrm{DMD}}(t)=\sum_{j=1}^{\ell}b_{j}\varphi_{j}e^{\omega_{j}t}=\mathbf{\Phi\Omega}^{t}\mathbf{b}, \tag{6}\] where \[\mathbf{\Omega}=\begin{bmatrix}e^{\omega_{1}}&0&\cdots&0\\ 0&e^{\omega_{2}}&\ddots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&e^{\omega_{\ell}}\end{bmatrix},\] the vector \(\mathbf{b}\approx\mathbf{\Phi}^{\dagger}\mathbf{x_{1}}\), and \(\mathbf{\Phi}=\mathbf{U}\mathbf{W}\) with \(\mathbf{W}\) the matrix of eigenvectors \(\mathbf{w}_{j}\) [2, 7]. If we consider components of a dynamical system that change very slowly with time with respect to the fast components, it becomes clear that they will have an associated mode \(\omega_{j}\) such that \[\|\omega_{j}\|/\max_{k}(\|\omega_{k}\|)\approx 0. \tag{7}\] For this reason the DMD can be used to separate the dynamics into fast and slow components.
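Since the report's implementation is in MATLAB and is not listed here, the following minimal NumPy sketch condenses the DMD steps above, i.e. the rank-\(\ell\) SVD, the low-rank operator \(\tilde{\mathbf{A}}\), its eigendecomposition, and the amplitudes \(\mathbf{b}\). The function name, argument list, and the use of a least-squares solve for \(\mathbf{b}\) are our own illustrative choices, not the report's code:

```python
import numpy as np

def dmd(X1, X2, r, dt):
    """Exact DMD of the snapshot pair (X1, X2), truncated to rank r.
    Returns the modes Phi, continuous-time eigenvalues omega, and
    amplitudes b of the reconstruction in Equation (6)."""
    # Rank-r SVD of the first snapshot matrix: X1 ~ U S V*
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]

    # Low-rank Koopman approximation: A_tilde = U* X2 V S^{-1}
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)

    # Eigenpairs of A_tilde give the DMD modes Phi = U W
    lam, W = np.linalg.eig(A_tilde)
    Phi = U @ W

    # Continuous-time eigenvalues and amplitudes b ~ pinv(Phi) x_1
    omega = np.log(lam) / dt
    b = np.linalg.lstsq(Phi, X1[:, 0].astype(complex), rcond=None)[0]
    return Phi, omega, b
```

With `Phi`, `omega`, and `b` in hand, modes satisfying the slow-scale criterion (7) can be separated out, which is formalized next.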
Assume that \(\omega_{p}\), where \(p\in\{1,2,\ldots,\ell\}\), satisfies Equation (7), and that \(\|\omega_{j}\|/\max_{k}(\|\omega_{k}\|)\;\forall\;j\neq p\) is bounded away from zero. Then we can redefine Equation (6) for a multi-scale dynamical system as: \[\mathbf{X}_{\mathrm{DMD}}=\underbrace{b_{p}\varphi_{p}e^{\omega_{p}\mathbf{t}}}_{\text{slow scale}}+\underbrace{\sum_{j\neq p}b_{j}\varphi_{j}e^{\omega_{j}\mathbf{t}}}_{\text{fast scale}}. \tag{8}\] Given the low-rank reconstruction (9) of the DMD and the fact that (10) should hold, the sparse reconstruction (11) can be calculated as in (12), where the modulus of the low-rank part is used so that the result contains real values only. \[\mathbf{X}_{\mathrm{DMD}}^{\text{Low-Rank}}=b_{p}\varphi_{p}e^{\omega_{p}\mathbf{t}}, \tag{9}\] \[\mathbf{X}=\mathbf{X}_{\mathrm{DMD}}^{\text{Low-Rank}}+\mathbf{X}_{\mathrm{DMD}}^{\text{Sparse}}, \tag{10}\] \[\mathbf{X}_{\mathrm{DMD}}^{\text{Sparse}}=\sum_{j\neq p}b_{j}\varphi_{j}e^{\omega_{j}\mathbf{t}}, \tag{11}\] \[\mathbf{X}_{\mathrm{DMD}}^{\text{Sparse}}=\mathbf{X}-\Big{|}\mathbf{X}_{\mathrm{DMD}}^{\text{Low-Rank}}\Big{|}. \tag{12}\] ### Time Delay Embedding _Delay embedding_ is a classic technique for overcoming partial state information. In the context of the DMD and Koopman operator theory, we can use time delay embedding to construct a new observable: \(\tilde{\mathbf{g}}(\mathbf{x}(t)):=(g(\mathbf{x}(t)),g(\mathbf{x}(t-\Delta t)),g(\mathbf{x}(t-2\Delta t)),\ldots,g(\mathbf{x}(t-(n-1)\Delta t)))\in\mathbb{R}^{n}\) with lag time \(\Delta t\). This is the delay embedding of the trajectory \(g(\mathbf{x}(t))\). We can use the embedding to form the Hankel matrix: \[\mathbf{H}=\begin{bmatrix}g(\mathbf{x}_{1})&g(\mathbf{x}_{2})&\cdots&g(\mathbf{x}_{M})\\ g(\mathbf{x}_{2})&g(\mathbf{x}_{3})&\cdots&g(\mathbf{x}_{M+1})\\ \vdots&\vdots&\ddots&\vdots\\ g(\mathbf{x}_{N})&g(\mathbf{x}_{N+1})&\cdots&g(\mathbf{x}_{N+M-1})\end{bmatrix}=\begin{bmatrix}|&&|&&|\\ \tilde{\mathbf{g}}(\mathbf{x}_{1})&\tilde{\mathbf{g}}(\mathbf{x}_{2})&\cdots&\tilde{\mathbf{g}}(\mathbf{x}_{M})\\ |&&|&&|\end{bmatrix} \tag{13}\] By examining the singular values of the Hankel matrix we can find the best rank-\(r\) approximation of the trajectory space. Then, we can form the matrices \(\mathbf{X_{1}}\) and \(\mathbf{X_{2}}\) for the DMD from \(\mathbf{H}\) and set \(\ell=r\) for the best DMD approximation using the embedding [3]. ## 3 Algorithm Implementation and Development We develop the algorithm with the assumption that we are given \(m\) measurements and \(n\) snapshots of a dynamical system with fast and slow scales, but only partial state information. This data forms the matrix \(\mathbf{X}\in\mathbb{R}^{m\times n}\). Since we only have partial state information, the first step in the algorithm is to employ time delay embeddings to form a Hankel matrix \(\mathbf{H}\) with \(N\) embeddings. We choose an \(N\) that minimizes the approximation error in the second stage of the algorithm. The second step in the algorithm is to form the matrices \(\mathbf{X_{1}}\) and \(\mathbf{X_{2}}\) from \(\mathbf{H}\), and then apply the DMD method with a rank reduction that is chosen based on the singular values of \(\mathbf{H}\). This produces the DMD approximation to the dynamical system as given in Equation (6). The third step in the algorithm is to extract the fast and slow components by applying Equations (8)-(12). These fast and slow components can then be used to form the fast and slow right-hand side functions provided to the ARKODE multirate integrator; a compact sketch of the full pipeline follows.
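The three stages can be sketched end-to-end as follows, again in illustrative Python rather than the report's MATLAB. Here `dmd` is the helper from the previous sketch; the report isolates a single slow mode \(\omega_{p}\), while this sketch keeps every mode whose relative magnitude falls below a tolerance, which reduces to the same thing when only one such mode exists:

```python
import numpy as np

def hankel_embed(g, N):
    """Stack N time-delay copies of a partial-state trajectory g
    into the Hankel matrix H of Equation (13)."""
    g = np.asarray(g)
    M = len(g) - N + 1                    # number of embedded columns
    return np.column_stack([g[i:i + N] for i in range(M)])

def split_slow_fast(Phi, omega, b, t, tol=1e-2):
    """Split the DMD reconstruction (6) into the slow and fast parts
    of Equation (8), using |omega_j| / max_k |omega_k| < tol as the
    slow-mode criterion (the tolerance value is our own choice)."""
    slow = np.abs(omega) / np.abs(omega).max() < tol
    dyn = b[:, None] * np.exp(np.outer(omega, t))   # rows: b_j e^{omega_j t}
    return (Phi[:, slow] @ dyn[slow]).real, (Phi[:, ~slow] @ dyn[~slow]).real

def separate_scales(g, N, r, dt, tol=1e-2):
    """The three stages of Section 3: embed, DMD, split. The first
    row of the embedding recovers the (slightly truncated) signal."""
    H = hankel_embed(g, N)
    Phi, omega, b = dmd(H[:, :-1], H[:, 1:], r, dt)
    t = dt * np.arange(H.shape[1])
    slow, fast = split_slow_fast(Phi, omega, b, t, tol)
    return slow[0], fast[0]
```

For example, on the toy problem of Section 4, with the embedding depth and rank read off Figure 2:

```python
# Toy problem: dy/dt = cos(10 t) + sin(t), y(0) = 1, step 0.01;
# N = 300 embeddings, truncation rank ~ 20 (Figure 2).
dt = 0.01
t = np.arange(0.0, 30.0, dt)
y = 2.0 - np.cos(t) + 0.1 * np.sin(10.0 * t)   # exact solution
slow, fast = separate_scales(y, N=300, r=20, dt=dt)
# (the report then differentiates the extracted components to form
#  the fast/slow right-hand sides handed to ARKODE)
```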
## 4 Computational Results We applied the time-scale separation algorithm to two different problems. The first problem is a simple, contrived toy problem. The second scenario uses synthetic seismic data for tsunami modeling that is closer to real-world data. Figure 1: The time-scale separation algorithm ### Toy Problem The toy problem is defined as the simple IVP: \[\frac{dy}{dt}=\cos(10t)+\sin(t),\quad y(0)=1.\] It is clear that the right-hand side has a fast component, \(\cos(10t)\), and a slow component, \(\sin(t)\). Using MATLAB we solve the ODE with a uniform time step size of \(0.01\). The solution is used as the dynamics data with which we construct the Hankel matrix and then perform the DMD. The extracted derivatives of the fast and slow components are shown in Figure 4. These components were used in the ARKODE solver by reading the different components in from a data file instead of computing the values of the functions at some time instance. ### Synthetic Seismic Data The second problem we apply our scale separation algorithm to is a seismic problem where P waves, or primary waves, are fast and S waves, or secondary waves, are slow. In [8], Vogl _et al._ present a method for a high-resolution seismic model of a fault slippage under the ocean. Figure 2: The singular values of the toy problem \(\mathbf{H}\) matrix with 300 embeddings. The right image is a zoomed in version of the left. The left image shows the point at which we can truncate when performing the DMD (\(\approx 20\)). Figure 3: The image on the left is the DMD reconstruction of the toy problem dynamics. The black dotted line is the numerical solution and the red line is the DMD approximation. The right image shows the error, given by \(|\mathbf{X}-\mathbf{X}_{\mathrm{DMD}}|\), in the approximation. In that work they present a linear hyperbolic system of equations that models the 2D plane-strain case of isotropic linear elasticity: \[q_{t}+A(x,y)q_{x}+B(x,y)q_{y}=0 \tag{14}\] A general Riemann solution produces the eigenvectors of \(n_{x}A+n_{y}B\), where \(n=[0\ \ 1]^{T}\) for the specific data we will examine. The eigenvectors correspond to P and S waves traveling from the source. As part of their work, Vogl _et al._ developed a simulation code for the problem based on the Clawpack solver package. The simulation code takes measurements at varying locations from the fault with "gauges". The data recorded corresponds to the vector \(q\), which contains key parameters of the plane-strain equations such as the stress, density and velocity. We build a data matrix \(\mathbf{X}\in\mathbb{R}^{n\times 10}\) from 10 sequential gauges and the vertical velocity. This data matrix is then fed into our time-scale separation algorithm. Figure 4: The DMD recreations of the fast and slow components of the toy problem right-hand side. These recreations are fed into ARKODE as the fast and slow functions. Figure 5: The singular values of the \(\mathbf{H}\) matrix for the seismic data with 150 embeddings. The right image is a zoomed in version of the left. The left image shows the point at which we can truncate when performing the DMD (\(\approx 75\)). ## 5 Summary and Conclusions In summary, in this report, we used dynamic mode decomposition and time delay embedding to extract the fast and slow components of the right-hand side of a simple ODE from data. We then used the extracted components to solve the ODE with ARKODE.
Finally, to move towards a real-world use case, we attempted to extract fast- and slow-scale dynamics from synthetic seismic modeling data. We found our algorithm to be adequate for the simple dynamics of the toy problem. In the more difficult case of the synthetic seismic data the algorithm showed promise: there seemed to be a split into fast- and slow-scale components, but more work needs to be done to verify the accuracy of the components. In the future, we would like to explore using the multi-resolution DMD presented in [5] to compare its performance and fit for this use case. Figure 6: The image on the left is the DMD reconstruction of the seismic problem dynamics. The black dotted line is the numerical solution and the red line is the DMD approximation. The right image shows the error, given by \(|\mathbf{X}-\mathbf{X}_{\mathrm{DMD}}|\), in the approximation. Figure 7: The extracted fast and slow components of the vertical velocity in the seismic data. Acknowledgements We would like to thank Christopher Vogl for valuable insight into the seismic model used for the synthetic data experiment and for assistance in collecting the data from the simulation code. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-TR-848209.
2307.15850
Comprehensive Algorithm Portfolio Evaluation using Item Response Theory
Item Response Theory (IRT) has been proposed within the field of Educational Psychometrics to assess student ability as well as test question difficulty and discrimination power. More recently, IRT has been applied to evaluate machine learning algorithm performance on a single classification dataset, where the student is now an algorithm, and the test question is an observation to be classified by the algorithm. In this paper we present a modified IRT-based framework for evaluating a portfolio of algorithms across a repository of datasets, while simultaneously eliciting a richer suite of characteristics - such as algorithm consistency and anomalousness - that describe important aspects of algorithm performance. These characteristics arise from a novel inversion and reinterpretation of the traditional IRT model without requiring additional dataset feature computations. We test this framework on algorithm portfolios for a wide range of applications, demonstrating the broad applicability of this method as an insightful algorithm evaluation tool. Furthermore, the explainable nature of IRT parameters yields an increased understanding of algorithm portfolios.
Sevvandi Kandanaarachchi, Kate Smith-Miles
2023-07-29T00:48:29Z
http://arxiv.org/abs/2307.15850v1
# Comprehensive Algorithm Portfolio Evaluation using Item Response Theory ###### Abstract Item Response Theory (IRT) has been proposed within the field of Educational Psychometrics to assess student ability as well as test question difficulty and discrimination power. More recently, IRT has been applied to evaluate machine learning algorithm performance on a single classification dataset, where the student is now an algorithm, and the test question is an observation to be classified by the algorithm. In this paper we present a modified IRT-based framework for evaluating a portfolio of algorithms across a repository of datasets, while simultaneously eliciting a richer suite of characteristics - such as algorithm consistency and anomalousness - that describe important aspects of algorithm performance. These characteristics arise from a novel inversion and reinterpretation of the traditional IRT model without requiring additional dataset feature computations. We test this framework on algorithm portfolios for a wide range of applications, demonstrating the broad applicability of this method as an insightful algorithm evaluation tool. Furthermore, the explainable nature of IRT parameters yields an increased understanding of algorithm portfolios. _Key words--_ Item Response Theory, algorithm evaluation, algorithm portfolios, classification, machine learning, algorithm selection, instance space analysis, explainable algorithm evaluation. ## 1 Introduction Evaluating a diverse set of algorithms across a comprehensive set of test problems contributes to an increased understanding of the interplay between test problem characteristics, algorithm mechanisms and algorithm performance. Such an evaluation helps determine an algorithm's strengths and weaknesses, and provides a broad overview of the collective capabilities of an algorithm portfolio. The drawback of many studies that evaluate only a small number of algorithms on a limited set of test problems is that they fail to reveal where any algorithm belongs within a state-of-the-art algorithm portfolio's capabilities, or where the unique strengths and weaknesses of algorithms lie considering a diverse range of test problem difficulties and challenges. After several decades of calls for a more "empirical science" of algorithm testing (Hooker, 1994, 1995), research communities in many fields are now pulling together the components needed for rigorous evaluation of algorithms - open source algorithms and shared test problem repositories - that provide the foundation for new methodologies for empirical evaluations (McGeoch, 1996; Hall & Posner, 2010; Smith-Miles et al., 2014; Casalicchio et al., 2019; Bischl et al., 2016). In this paper we present a framework that evaluates a portfolio of algorithms based on a novel adaptation of Item Response Theory (IRT). The general premise of IRT is that there is a hidden "quality" or trait, such as verbal or mathematical ability, that cannot be directly measured (Hambleton & Swaminathan, 2013) but can be inferred from responses to well-designed test questions that are suitably difficult and discriminating. A test instrument such as a questionnaire or an exam containing test items is used to capture participant responses. Using the participant responses to the test items, an IRT model is fitted to estimate the discrimination and difficulty of test items and the ability of participants.
In an educational setting, the ability relates to the knowledge of the subject matter tested on the exam; the discrimination of test items inform us which items are better at discriminating between strong and weak students; and the difficulty parameters indicate the difficulty of each test item given the response profile from the participants. IRT's ability to evaluate performance data and obtain useful insights has made it a natural fit for adaptation to the machine learning domain. Martinez-Plumed et al. (2019) used IRT to evaluate the performance of machine learning algorithms (students in the educational analogy) on a single classification dataset (exam), with the individual observations in a classification dataset (exam questions) used to assess algorithm performance. They train and test many classifiers on a single dataset, and obtain insights about the individual observations and about the portfolio of classifiers on that dataset. As a result they obtain a set of classifier characteristic curves for the dataset. Another IRT based evaluation of algorithm portfolios was carried out by Chen et al. (2019). They proposed a model called \(\beta^{3}\)-IRT, which extends the Beta IRT model for continuous responses discussed by Yvonnick Noel & Bruno Dauvier (2007). Chen et al. (2019) consider new parametrizations so that the resulting item characteristic curves are not limited to logistic curves and use their model to assess machine learning classifiers. They too evaluate an algorithm portfolio on an individual dataset and draw their conclusions about which algorithm is best for a given observation within a dataset. Both Martinez-Plumed et al. (2019) and Chen et al. (2019) investigate IRT on an individual dataset, which we call a test instance. These exciting directions have motivated us to expand the use of IRT for understanding the strengths and weaknesses of a portfolio of algorithms when applied to _any_ dataset, not just a single dataset. In this case, the test instance is an entire dataset comprising observations, and the 'exam' is comprised of many datasets to evaluate the ability of an algorithm. Extending in this direction is important because the limited amount of diversity contained within a single dataset can shed only a limited amount of light on a portfolio of classifiers, and the classifier characteristic curves heavily depend on the dataset. To obtain a better understanding of the strengths and weaknesses of a portfolio of classifiers, indeed any type of algorithm, we need to evaluate the portfolio on a broader range of datasets from diverse repositories. The excellent foundational work showing how IRT models - with both discrete (Martinez-Plumed et al. 2019) and continuous (Chen et al. 2019) performance metrics - can be used to study performance of machine learning algorithms is ripe for extension to see how the insights that can be generated from an IRT perspective compare with recent advances in algorithm portfolio evaluation and construction. In recent decades the call for a more empirical approach to algorithm testing (Hooker 1994) has seen efforts to move beyond a standard statistical analysis to evaluate algorithm portfolios, where strong algorithms have best "on-average" performance across a chosen set of test instances. Machine learning approaches such as meta-learning ("learning to learn") have been used to learn how algorithm portfolios perform based on characteristics of the test instances (Vialta et al. 
2009), with efforts encompassing a large body of research on topics such as algorithm selection, rankings, recommendation, and ensembles to name a few (Lemke et al. 2015). Frechette et al. (2016) use Shapley values - a concept from coalition game theory measuring a component's marginal contribution to the portfolio - to gain insights into the value of an algorithm in a portfolio. In a related but orthogonal direction, emphasis in the literature on dataset repository design to facilitate unbiased algorithm evaluation is also a growing research area (Macià & Bernadó-Mansilla 2014, Bischl et al. 2016), motivated by the fact that algorithms are frequently claimed to be superior without testing them on a demonstrably broad range of test instances. Demonstrating that a selected set of test instances or datasets is unbiased and sufficiently diverse is one of the major contributions of the Instance Space Analysis methodology (Smith-Miles & Tan 2012, Smith-Miles et al. 2014, Smith-Miles & Bowly 2015, Munoz et al. 2018), developed by Smith-Miles and co-authors by extending Rice's algorithm selection framework (Rice 1976). A \(2D\) instance space is constructed by projecting all test instances into the instance space in a manner that maximizes visual interpretation of the relationships between instance features and algorithm performance. The mathematical boundary defining the instance space can be determined, and the diversity of the test instances within the instance space can be scrutinized. Furthermore, the instance space analysis methodology can be used to answer the question posed by the Algorithm Selection Problem (Rice 1976), "Which algorithm is best suited for my problem?". This aspect is missed by the standard statistical analysis, which focuses on average performances, and leaves hidden the unique strengths and weaknesses of algorithms and their relationships to test instance characteristics. Our main contribution in this paper is proposing a novel framework for evaluating algorithm portfolios across diverse suites of test instances based on IRT concepts. We call this framework AIRT - Algorithmic IRT. The word _airt_ is an old Scottish word which means "to guide". By re-mapping the educational analogies of students, exams and test questions in a manner that is essentially flipped from the original approach on a single dataset of Martinez-Plumed et al. (2019), we propose an inverted model that yields a richer set of evaluation metrics for algorithm portfolios. Adapting continuous IRT models, we introduce measures for quantifying algorithm _consistency_, an algorithm's _difficulty limit_ in terms of the instances it can handle, and the degree of _anomalousness_ of an algorithm's behavior compared to others in the portfolio. We also explore the problem space and find regions of good and bad performance, which are effectively algorithm strengths and weaknesses. These other measures are not computed by the standard statistical methodology used for ranking algorithms, nor are they available from the standard IRT mapping (Martinez-Plumed et al. 2019, Chen et al. 2019). For example, the algorithm with the best overall performance on a suite of test problems may not be stable or consistent in the sense that a small change in a test instance may result in large changes in performance. Or there may be an anomalous algorithm that performs well on test instances for which other algorithms perform poorly, and such insights may be lost in standard 'on-average' statistical analysis.
Indeed, it is AIRT's focus on revealing insights into algorithm strengths and weaknesses, based on new methods for visual exploratory data analysis from empirical performance data results, that adds significant value beyond standard statistical analysis or algorithm selection studies. It is worthwhile noting that methodologies in social sciences focus on explanations as opposed to accurate predictions (Shmueli 2010). As such, quantitative models in social sciences only have a handful of parameters which have meaningful interpretations. Explanations are often linked with causality. Lewis (1986) states "Here is my main thesis: _to explain an event is to provide some information about its causal history._" Miller (2019) presents an argument for linkages with social sciences stating that "the field of explainable artificial intelligence can build on existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics." Indeed, AIRT is such a linkage. In educational psychometrics IRT is used to explain the student performance in terms of student ability and test item discrimination and difficulty. For example, difficult test items generally yield lower scores than easy test items. Similarly, students with high ability obtain higher scores compared to students with low ability. Thus, IRT model parameters are used to explain the student and test item characteristics and have causal interpretations. These explainable interpretations get translated to the algorithm evaluation setting as follows: problems with high difficulty generally result in low performance values. Algorithms with high difficulty limits can handle harder problems. Algorithms that are consistent give similar results irrespective of the problem difficulty. Anomalous algorithms behave in an unusual fashion by giving better results to harder problems compared to easier problems. We realise these statements are simple and obvious. But that is an attribute of an explanation; Oxford English Dictionary (June 2016) defines it as _a thing which explains, makes clear, or accounts for something_. Therefore, AIRT metrics come from an explainable model in educational psychometrics and contribute to increasing the explainability of algorithm performance. Beyond insights and explanations however, AIRT can also be used for algorithm selection to construct a strong portfolio. In this paper we compare the predictive power of the AIRT portfolio to others generated by Shapley values (Frechette et al. 2016) and best on average performance. The AIRT portfolio showcases algorithm strengths in different parts of the problem space. In addition to introducing these measures that capture different aspects of algorithm performance and constructing algorithm portfolios, we also assess the goodness of the IRT model by comparing the IRT predicted performance with the actual performance. As a further contribution, we make this work available in the R package airt(Kandanaarachchi, 2020). Another point of interest is that, unlike in instance space analysis, we do not need to compute test instance features for AIRT, avoiding the additional computational expense, as well as the somewhat arbitrariness of certain feature choices. AIRT computes a 1-dimensional problem space based on dataset difficulty, which is calculated from the performance results of the algorithm portfolio. 
Characteristics such as algorithm consistency and anomalousness can be calculated as overall characteristics based only on an algorithm's performance metric, while the region of the problem space for which an algorithm shows superiority can be revealed without the need for features. The fact that similar insights can be obtained from the case studies presented in this paper without the feature calculation required by instance space analysis is one of the main advantages of AIRT, focused on the broader goal of generating insights into algorithm performance in addition to constructing strong algorithm portfolios, i.e. addressing both the question of which algorithm should be used for a particular instance, and why. The remainder of the paper is organized as follows: In Section 2 we provide an introduction to polytomous and continuous IRT models and discuss the contextual differences between traditional applications that use IRT for evaluating educational outcomes and adaptations to evaluate algorithms. We then discuss our alternative adaptation, essentially an inverted model, which creates a rich new set of algorithm evaluation metrics defined by reframing the interpretation of the IRT parameters in Section 3. Using these new metrics, we can visualize the strengths and weaknesses of algorithms in the problem space and construct algorithm portfolios using AIRT. Furthermore, to assess the goodness of the models built within our AIRT framework, we define additional measures based on model-predicted performance and actual performance on test instances. AIRT expands on the IRT framework to include such enhancements to enable its application to the broader challenge of understanding algorithm strengths and weaknesses. In Section 5 we illustrate the complete functionality of AIRT - including the algorithm metrics, problem space analysis, strengths and weaknesses of algorithms, algorithm portfolio evaluation and model goodness results - using the detailed case study of OpenML-Weka classification algorithms and test instances available in the ASlib repository (Bischl et al., 2016). We refer the reader to Appendix A, where further results are summarized on nine more case studies using a variety of ASlib scenarios, including the satisfiability (SAT) and constraint satisfaction problem domains. These case studies demonstrate the functionality of AIRT as an exploratory data analysis tool for algorithm portfolio evaluation and how the user can construct a competitive algorithm portfolio using AIRT with the objective of minimizing the performance gap. Finally, we discuss future work and present our conclusions in Section 6. ## 2 IRT: Traditional setting and new mapping Item Response Theory (IRT) (Lord 1980, Embretson & Reise 2013, van der Linden & Hambleton 2013) refers to a family of latent trait models that is used to explain the relationship between unobservable characteristics such as intelligence or political preference and their observed outcomes such as responses to questionnaires. Attributes such as verbal or mathematical ability, racial prejudice and stress proneness, which cannot be measured directly, can be modeled as latent variables. The observed outcomes, such as test items and questionnaire responses, can be explained using latent trait models. IRT builds a connection between the items of a bigger unit, such as a test, and the participants' latent traits, thus placing each participant in a latent trait continuum.
IRT is commonly used in psychometrics (Cooper & Petrides 2010) and educational testing (Yen 1986). ### Dichotomous and polytomous IRT models We introduce some IRT concepts for dichotomous and polytomous models using the notation of Chalmers (2012) and Rizopoulos (2006). Let \(i=1,\ldots,N\) represent participants or testees, \(j=1,\ldots,n\) represent the test items with \(N>n\), and let \(\theta\) denote the latent variable such as intelligence or ability. An example includes a test with \(n\) questions, which is administered to a class of \(N\) students with the aim of measuring their ability \(\theta\) to perform certain tasks. The response of the \(i^{\text{th}}\) participant for the \(j^{\text{th}}\) item is denoted by \(x_{ij}\). The discrimination parameter for test item \(j\) is denoted by \(\alpha_{j}\) and the difficulty parameter by \(d_{j}\). These two parameters are used to build the 2-Parameter Logistic (2PL) model, while an additional guessing parameter \(\gamma_{j}\) is incorporated in the 3-Parameter Logistic (3PL) model. For dichotomous data researchers are interested in modeling the probability of a correct response for each item given the ability level \(\theta_{i}\). The 3PL model defines the probability of a correct response for participant \(i\) for item \(j\) as \[\Phi\left(x_{ij}=1|\theta_{i},\alpha_{j},d_{j},\gamma_{j}\right)=\gamma_{j}+\frac{1-\gamma_{j}}{1+\exp\left(-D\alpha_{j}\left(\theta_{i}-d_{j}\right)\right)} \tag{1}\] where \(D\) is the scaling adjustment traditionally set at \(1.702\). The role of \(D\) is to make the logistic curve similar to the cumulative distribution function of the normal distribution (Reckase 2009). Figure 1 shows the resulting probability for a given item \(j\) with fixed \(\alpha\), \(d\) and \(\gamma\), disregarding the scaling constant \(D\). The greater the ability \(\theta_{i}\) of the participant, the higher the probability of a correct response. For polytomous data, we briefly present the multi-response ordinal models described in Samejima (1969). For example, self-esteem surveys have questions such as _I feel that I am a person of worth, at least on an equal plane with others_ with responses {_strongly disagree, disagree, neutral, agree, strongly agree_}. In this case the original responses, which are the participants' answers, are used to fit the IRT model (Gray-Little et al. 1997). By definition ordinal responses are ordered, i.e., _strongly disagree \(<\) disagree \(<\) neutral \(<\) agree \(<\) strongly agree_. The responses need to be ordinal because the resulting latent trait continuum is ordered from low ability to high ability. In educational testing an accuracy measure, such as marks, derived from the original responses is used to fit the IRT model. For example, for each question in a test, the participants write their answers and marks are assigned by the person who grades them. For simplicity, suppose the marks for each question can take the values {0, 1, 2, 3, 4, 5}. The marks, which are a derived accuracy measure, are the responses in this case and are used to fit the IRT model. Similarly, for multiple choice questions with marks taking the values {0, 1} a dichotomous IRT model is fitted. Whether a derived accuracy measure or the original responses are used, these are called _responses_ in IRT literature. We note that the word _response_ is confusing to non-IRT researchers when it refers to grades or other types of measures derived from the original responses.
However, as this is the standard term used in IRT literature, we will use the same for easier cross-referencing. If there are \(C_{j}\) unique response categories for item \(j\) with \(0<1<\cdots<C_{j}-1\), difficulty parameters \(\boldsymbol{d}_{j}=\left(d_{1},\ldots,d_{C_{j}-1}\right)\) and discrimination parameter \(\alpha_{j}\), the cumulative probabilities are defined as \[\Phi\left(x_{ij}\geq 0|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right) =1\;,\] \[\Phi\left(x_{ij}\geq 1|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right) =\frac{1}{1+\exp\left(-D\alpha_{j}\left(\theta_{i}-d_{1}\right) \right)}\;,\] \[\Phi\left(x_{ij}\geq 2|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right) =\frac{1}{1+\exp\left(-D\alpha_{j}\left(\theta_{i}-d_{2}\right) \right)}\;,\] \[\vdots\] \[\Phi\left(x_{ij}\geq C_{j}-1|\theta_{i},\alpha_{j},\boldsymbol{d}_ {j}\right) =\frac{1}{1+\exp\left(-D\alpha_{j}\left(\theta_{i}-d_{C_{j}-1} \right)\right)}\;,\] \[\Phi\left(x_{ij}\geq C_{j}|\theta_{i},\alpha_{j},\boldsymbol{d}_ {j}\right) =0\;,\] where \(x_{ij}\) is the response of participant \(i\) for question \(j\). This gives the probability of the response \(x_{ij}=k\) as \[\Phi\left(x_{ij}=k|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right)=\Phi\left(x _{ij}\geq k|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right)-\Phi\left(x_{ij} \geq(k+1)|\theta_{i},\alpha_{j},\boldsymbol{d}_{j}\right)\;.\] Figure 1: _Probability of a correct response for a given item using a 3PL model. Difficulty corresponds to \(d_{j}\), discrimination to \(\alpha_{j}\) and guessing to \(\gamma_{j}\) in equation (1)._ Figure 2 shows the probability density functions for different responses \(x_{ij}=k\) for \(k\in\{0,\ldots,(C_{j}-1)\}\). In the educational testing scenario discussed above, each curve denotes the probability that marks are equal to \(k\) for \(k\in\{0,1,2,3\}\). From Figure 2 we see that the green curve, which gives the probability density function for marks = 0, has high probability when the participant ability \(\theta\) is low. Similarly, the dark red curve corresponding to marks = 3 has a higher probability for high participant ability. We see that a participant with a lower ability/latent trait is more likely to obtain a response corresponding to a low value of \(k\) compared to a participant with a higher ability. ### Continuous IRT models In addition to the polytomous IRT models, Samejima (1973, 1974) introduced Continuous Response Models (CRM) to extend polytomous models to continuous responses. Wang & Zeng (1998) introduced an expectation-maximization (EM) algorithm for Samejima's continuous item response model. This EM algorithm was further optimized by Shojima (2005) by proposing a non-iterative solution for each EM cycle. In this section we use the notation used by Wang & Zeng (1998) and Shojima (2005). They consider \(N\) examinees with trait variables \(\theta_{i}\) where \(i\in\{1,\ldots,N\}\) and \(n\) test instances with parameters \(\lambda_{j}=(\alpha_{j},\beta_{j},\gamma_{j})^{T}\) for \(j\in\{1,\ldots,n\}\). The item parameters \(\alpha_{j}\) represents discrimination, \(\beta_{j}\) difficulty and \(\gamma_{j}\) a scaling coefficient that defines a scaling transformation from the original rating scale to the \(\theta\) scale. 
Using the normal density type CRM, Wang & Zeng (1998) considered the probability of an examinee with an ability \(\theta\) obtaining a score of \(y_{j}\) or higher on a given item \(j\) as \[P\left(Y\geq y_{j}|\theta\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{v}e^{-\frac{t^{2}}{2}}dt\,, \tag{2}\] where \[v=\alpha_{j}\left(\theta-\beta_{j}-\gamma_{j}\ln\frac{y_{j}}{k_{j}-y_{j}}\right)\,,\] and the continuous score range of \(y_{j}\) is \((0,k_{j})\). Figure 2: The probability of the response \(x_{ij}=k\) for different \(k\in\{0,1,2,3\}\) with \(\theta\) on the horizontal axis and \(\Phi\) on the vertical axis. The most likely outcome is different depending on the ability levels \(\theta\). The continuous score range of \((0,k_{j})\) is opened up to \((-\infty,\infty)\) with the reparametrization \[z_{j}=\ln\frac{y_{j}}{k_{j}-y_{j}}\,.\] Using this reparametrization they obtain the probability density function \(f\left(z_{j}|\theta\right)\) by differentiating the cumulative density function obtained using equation (2) as \[f\left(z_{j}|\theta\right)=\frac{d}{dz_{j}}\left(1-P(Z\geq z_{j}|\theta)\right)=\frac{\alpha_{j}\gamma_{j}}{\sqrt{2\pi}}\exp\left(-\frac{\alpha_{j}^{2}}{2}\left(\theta-\beta_{j}-\gamma_{j}z_{j}\right)^{2}\right)\,. \tag{3}\] Comparing the parameters \(\alpha_{j}\), \(\beta_{j}\) and \(\gamma_{j}\) with those of Section 2.1, we note that the parameter \(\alpha_{j}\) denotes discrimination as in Section 2.1 and the parameter \(\beta_{j}\) denotes the difficulty level of item \(j\), which was denoted by \(d_{j}\) in Section 2.1. However, the parameter \(\gamma_{j}\) is quite different from the guessing parameter used in Section 2.1, in that it denotes a scaling factor which we will inspect soon. For every \(z\in\mathbb{R}\), there is an associated probability density function given by \(f(z|\theta)\). Figure 3 shows the item response functions obtained for \(z\in\{-2,0,1\}\) for different items, which have different CRM parameters. Figure 4 shows the heatmap of \(f(z|\theta)\) for the same items for continuous \(z\) and \(\theta\) values. The first pane in both figures shows the curves/heatmap for the first item, HaifaCSP-free, with \(\alpha=1.73\), \(\beta=1.16\) and \(\gamma=2.72\). The second item, iZplus-free, has CRM parameters \(\alpha=0.65\), \(\beta=2.6\) and \(\gamma=1.65\). The third item, MZN/Gurobi-free, has CRM parameters \(\alpha=1.14\), \(\beta=1.15\) and \(\gamma=2.49\). We will give more context on these items later. The second item has a higher difficulty level compared to the first and the third; we see that \(\beta=2.6\) shifts the curves to the right in Figure 3 and the high density regions have moved to the right in Figure 4. The first item has a higher \(\alpha\) value, making the curves steeper in Figure 3 compared with items 2 and 3. Similarly, the high density regions are narrower and sharper in Figure 4 due to higher discrimination. Figure 3: Probability density curves for \(z=-2\), \(z=0\) and \(z=1\) for three items with different CRM parameters. The items are from the CSP-Minizinc-2016 algorithm portfolio. Shojima (2005) refined this estimation procedure with a non-iterative step for the expectation cycle, which made the item parameter computation much faster. However, in this estimation Shojima (2005) only considers \(\alpha,\gamma>0\). As such, their algorithm does not accommodate negative discrimination items. This reflects the current practice regarding negative discrimination items in educational and psychometric testing.
Negative discrimination items are generally considered as non-value adding and as such revised or removed in traditional educational testing (Hambleton & Swaminathan 2013). However, in algorithm performance negative discrimination plays an important role and we do not remove such items from the pool. We accommodate negative discrimination items by modifying the existing algorithm discussed by Shojima (2005). Before discussing these modifications we give a brief overview of their method. First they rescale \(y_{ij}\), such that \(x_{ij}=y_{ij}/k_{j}\) lies in \((0,1)\) and consider \(z_{ij}=\ln x_{ij}/(1-x_{ij})\). They denote the item response vector of examinee \(i\) by \(z_{i}\). Then they perform a marginal maximum likelihood estimation with the expectation maximization algorithm (MML-EM). Using a normal prior for \(\theta_{i}\), i.e. \(\mathcal{N}\left(\theta_{i}|\mu,\sigma\right)\) they obtain an estimate for the posterior distribution of \(\theta_{i}\), given \(\mathbf{z}_{i}\) and the current estimates of item parameters as \[p\left(\theta_{i}|\mathbf{\Lambda}^{(t)},\mathbf{z}_{i}\right)=\mathcal{N}\left( \theta_{i}|\mu_{i}^{(t)},\sigma^{(t)2}\right)\,,\] where \(\mathbf{\Lambda}^{(t)}=\left(\mathbf{\lambda}_{\mathbf{1}}^{(t)},\ldots,\mathbf{\lambda}_{\bm {n}}^{(t)}\right)\), \(\mathbf{\lambda}_{\mathbf{j}}^{(t)}=\left(\alpha_{j}^{(t)},\beta_{j}^{(t)},\gamma_{j} ^{(t)}\right)^{T}\) and \((t)\) denotes the iteration. The parameters \(\mu_{i}^{(t)}\) and \(\sigma^{(t)}\) are given by \[\sigma^{(t)2} =\left(\sum_{j}\alpha_{j}^{(t)2}+\sigma^{-2}\right)^{-1}\,,\] \[\mu_{i}^{(t)} =\sigma^{(t)2}\left(\sum_{j}\alpha_{j}^{(t)2}\left(\beta_{j}^{(t )}+\gamma_{j}^{(t)}z_{ij}\right)+\mu\right)\,,\] Figure 4: The heatmap of probability density functions for the items in Figure 3 where \(\mu\) and \(\sigma\) denote the initial prior parameters of \(\theta_{i}\). Then they obtain the expectation of the log-likelihood \[E_{\mathbf{\theta}|\Lambda^{(t)},\mathbf{Z}}\left[\ln p\left(\mathbf{\Lambda}|\mathbf{\theta},\bm {Z}\right)\right]=N\sum_{j=1}^{n}\left(\ln\alpha_{j}+\ln\gamma_{j}\right)-\frac {1}{2}\sum_{i=1}^{N}\sum_{j=1}^{n}\alpha_{j}^{2}\left(\left(\beta_{j}+\gamma_{j }z_{ij}-\mu_{i}^{(t)}\right)^{2}+\sigma^{(t)2}\right)+\ln p\left(\mathbf{\Lambda} \right)+\text{const} \tag{4}\] where \(\mathbf{Z}\) is the item response matrix of all examinees for \(n\) items and \(p\) denotes the probability. They optimize this expectation with flat priors for item parameters and obtain \[\gamma_{j}^{(t+1)} =\frac{V\left(\mu_{i}^{(t)}\right)+\sigma^{(t)2}}{C_{j}\left(z_{ ij},\mu_{i}^{(t)}\right)}\,, \tag{5}\] \[\beta_{j}^{(t+1)} =M\left(\mu_{i}^{(t)}\right)-\gamma_{j}^{(t+1)}M_{j}\left(z_{ij} \right)\,,\] (6) \[\alpha_{j}^{(t+1)} =\left(\gamma_{j}^{(t+1)2}V_{j}(z_{ij})-V\left(\mu_{i}^{(t)} \right)-\sigma^{(t)2}\right)^{-1/2}\,, \tag{7}\] where \(M\), \(V\) and \(C\) denote the mean, variance and covariance terms defined by \[M_{j}\left(z_{ij}\right) =\frac{\sum_{i}z_{ij}}{N}\,,\] \[M\left(\mu_{i}^{(t)}\right) =\frac{\sum_{i}\mu_{i}^{(t)}}{N}\,,\] \[V\left(z_{ij}\right) =\frac{\sum_{i}z_{ij}^{2}}{N}-M_{j}\left(z_{ij}\right)^{2}\,,\] \[V\left(\mu_{i}^{(t)}\right) =\frac{\sum_{i}\mu_{i}^{(t)2}}{N}-M\left(\mu_{i}^{(t)}\right)^{2}\,,\] \[\text{and}\qquad C_{j}\left(z_{ij},\,\mu_{i}^{(t)}\right) =\frac{\sum_{i}z_{ij}\mu_{i}^{(t)}}{N}-M_{j}\left(z_{ij}\right)M \left(\mu_{i}^{(t)}\right)\,. 
\tag{8}\] For each iteration, using \(\mu_{i}^{(t)}\), \(\sigma^{(t)}\), and the \(M\), \(V\) and \(C\) quantities listed above, the parameter \(\gamma_{j}^{(t+1)}\) is computed as in equation (5). This value of \(\gamma_{j}^{(t+1)}\) is used to compute \(\beta_{j}^{(t+1)}\) and \(\alpha_{j}^{(t+1)}\) in equations (6) and (7). Using the parameter values \(\alpha_{j}^{(t+1)}\), \(\beta_{j}^{(t+1)}\) and \(\gamma_{j}^{(t+1)}\), the log-likelihood given in equation (4) is computed. This whole process is repeated until the difference in log-likelihoods for successive iterations becomes smaller than a predefined level of convergence. We note that this is a brief overview of this method and refer to Shojima (2005) for more details. With the current formulation we see that if \(\gamma_{j}^{(t+1)}\) in equation (5) is negative due to a negative covariance term \(C_{j}\left(z_{ij},\mu_{i}^{(t)}\right)\) computed as in equation (8), this results in the log-likelihood in equation (4) being incalculable as it requires \(\ln\gamma_{j}\). This forces the MML-EM algorithm to stop, preventing convergence. As a result, this formulation only works when all test items have \(\alpha_{j}>0\) and \(\gamma_{j}>0\), as permitted by the assumption. However, we see that the probability density function \(f(z_{j}|\theta)\) in equation (3) contains the product \(\alpha_{j}\gamma_{j}\) and is valid when both \(\alpha_{j}\) and \(\gamma_{j}\) have the same sign. Similarly, equation (4) can be rewritten with the product \(\ln\left(\alpha_{j}\gamma_{j}\right)\) instead of the sum of log terms and is valid when both \(\alpha_{j}\) and \(\gamma_{j}\) have the same sign. Therefore, if we remove the assumption used by Shojima (2005), that \(\alpha_{j}>0\) and \(\gamma_{j}>0\), and update it with \(\alpha_{j}\gamma_{j}>0\), we incorporate test items with \(\alpha_{j},\gamma_{j}<0\) as well as test items with \(\alpha_{j},\gamma_{j}>0\). That is, effectively we are adding the assumption \(\text{sign}(\alpha_{j})=\text{sign}(\gamma_{j})\), instead of \(\alpha_{j}>0\) and \(\gamma_{j}>0\). More importantly, we are opening the IRT model to negative discrimination items. With the updated assumption we can rewrite the log-likelihood as \[E_{\boldsymbol{\theta}|\Lambda^{(t)},\boldsymbol{Z}}\left[\ln p\left(\boldsymbol{\Lambda}|\boldsymbol{\theta},\boldsymbol{Z}\right)\right]=N\sum_{j=1}^{n}\left(\ln|\alpha_{j}|+\ln|\gamma_{j}|\right)-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{n}\alpha_{j}^{2}\left(\left(\beta_{j}+\gamma_{j}z_{ij}-\mu_{i}^{(t)}\right)^{2}+\sigma^{(t)2}\right)+\ln p\left(\boldsymbol{\Lambda}\right)+\text{const} \tag{9}\] making the log-likelihood tractable for any \(\alpha_{j}\) and \(\gamma_{j}\). Then following through the computation we obtain \[\alpha_{j}^{(t+1)}=\text{sign}\left(\gamma_{j}^{(t+1)}\right)\left(\gamma_{j}^{(t+1)2}V_{j}(z_{ij})-V\left(\mu_{i}^{(t)}\right)-\sigma^{(t)2}\right)^{-1/2}\,.\] The parameters \(\gamma_{j}^{(t+1)}\) and \(\beta_{j}^{(t+1)}\) stay the same as given by equations (5) and (6) with the updated assumption. These modifications allow us to fit both negative and positive discrimination items in our continuous IRT model. The causal interpretation of traditional IRT presumes that the attributes of participant \(i\) and test question \(j\) give rise to the marks \(x_{ij}\). The attributes are the discrimination and difficulty parameters of question \(j\) and the ability of participant \(i\). This is shown in the Directed Acyclic Graph (DAG) in Figure 5.
While traditional IRT texts do not include DAGs, more recent work (Kelly et al. 2023) makes these causal interpretations explicit. Figure 5: Left: A DAG showing participant \(i\) and question \(j\) giving rise to marks \(x_{ij}\). Right: The DAG composed of participant and question attributes. ### Applications to machine learning and algorithm evaluation In the traditional IRT setting \(N\) participants' responses for \(n\) test instances are used to fit an IRT model and obtain the discrimination and difficulty of test instances as well as the ability of the participants. A natural way to use the IRT framework on algorithms and test instances is to consider an algorithm as a participant and test instances as test questions/items. If we formulate our problem this way, then we can obtain the test instance characteristics difficulty and discrimination using the IRT framework. In addition, IRT will also give us the latent scores or the ability of the algorithms. Martinez-Plumed et al. (2019) and Chen et al. (2019) formulated their problem this way and used the IRT framework to evaluate observations in a dataset and obtain the ability of the classifiers for that dataset. Instead of using observations of a given dataset as test items, we can also use datasets as test items. Then the parameters fitted by the IRT model would be dataset difficulty and discrimination. This is illustrated in Figure 6. Recent investigations (Kandanaarachchi 2022) showed the benefits of a flipped approach in constructing an unsupervised anomaly detection ensemble for a single dataset, where observations were used as participants and algorithms as test items. In the current paper, we explore this idea further for evaluating algorithms on many datasets, developing a full theory and framework for comprehensive algorithm evaluation. ## 3 Algorithmic IRT (AIRT) As a novel adaptation in this paper we now invert the intuitive IRT mapping discussed in the previous paragraph and consider algorithms as items and test instances as participants. This is shown in Figure 7. This inversion results in a loss of intuition momentarily. However, by persisting with this less intuitive mapping we gain an elegant reinterpretation of the theory that enables us to analyze the strengths and weaknesses of algorithms with far more nuanced detail. Firstly, we note that this inversion produces two parameters describing algorithm properties compared to a single parameter in the standard setting. As we will see shortly, we will derive three algorithm characteristics from these two algorithm parameters. Thus, the inversion serves to offer a richer set of metrics with which to evaluate algorithms, compared to the standard approach, which focuses more on dataset/observation evaluation. Figure 6: Standard IRT setting extended to algorithms working on datasets. The IRT model provides the dataset characteristics of difficulty and discrimination, and algorithm ability, as outputs.
It shows the heatmap of a test question, the set of trace lines with \(\mathrm{P1}<\mathrm{P2}<\mathrm{P3}<\mathrm{P4}\) and a histogram of latent scores. The \(y\)-axis in the heatmap labeled \(z\) denotes the normalized score and the examinee's ability is denoted by \(\theta\). Then, as the examinee's ability increases, the probability of getting a better grade for this particular question also increases, as seen from the heatmap and the trace lines. For algorithm evaluation, let us also consider the performance levels \(\mathrm{P1}<\mathrm{P2}<\mathrm{P3}<\mathrm{P4}\) with higher levels and larger \(z\) values indicating better performance. If we consider the standard IRT approach discussed in Table 1, then the heatmap and the trace lines give the performance of a specific dataset and the histogram of latent scores gives algorithm abilities. If we consider the inverted IRT approach, the heatmap and the trace lines show the performance of an algorithm and the histogram gives the latent scores of the datasets. \begin{table} \begin{tabular}{l l l l l} \hline\hline & Classic IRT & Standard Approach for Algorithm Evaluation & Inverted Approach for Algorithm Evaluation & Inverted Characteristics \\ \hline Setting & Examinees doing test items & Algorithms working on datasets & Datasets acting on algorithms & \\ \hline Parameters & Test item difficulty & Dataset difficulty & Difficulty parameter for algorithms & Algorithm difficulty limit \\ & Test item discrimination & Dataset discrimination & Discrimination parameter for algorithms & Algorithm anomalousness and consistency \\ & Examinee ability & Algorithm ability & Ability trait of datasets & Dataset difficulty \\ \hline\hline \end{tabular} \end{table} Table 1: A comparison between the classic IRT approach and the standard and inverted IRT approaches for algorithm evaluation. Figure 7: Inverted IRT setting with datasets acting on algorithms. The IRT model provides two algorithm characteristics in place of difficulty and discrimination, and one dataset characteristic in place of ability as outputs. What do these latent scores represent? We know that algorithms give better performance on easy test instances. For example, a classification algorithm such as logistic regression will give better classification accuracy on a linearly separable dataset compared to a complex dataset. As such, in the inverted algorithm evaluation setting the latent score \(\theta\) represents the easiness of the test instance. Furthermore, this inverted setting gives rise to important algorithm characteristics that can now be measured using the IRT parameters, as described in the following sections. ### Framework Our algorithmic IRT (AIRT) framework consists of three main stages: 1. Stage 1: Fitting an IRT model with inverted mapping. We input the performance results of \(n\) algorithms on \(N\) test instances to a continuous or a polytomous IRT model, mapping test instances to participants and algorithms to items. The R package airt fits the continuous IRT models described in Section 2.2 using the updated log-likelihood function and assumption. To fit polytomous models airt uses the functionality of the existing R package mirt (Chalmers 2012). 2.
Stage 2: Calculation of algorithm and dataset metrics. The second stage consists of reinterpreting the results of the IRT model, due to the inverted mapping and inherent contextual differences, so that a richer set of metrics for algorithm performance and dataset difficulty can be calculated. 3. Stage 3: Compute strengths and weaknesses and construct algorithm portfolios. Construct latent trait curves to enable algorithm ranking, and strengths and weaknesses of algorithm portfolios to be observed across test suites of varying difficulty. Figure 8: Mapping IRT to the algorithm evaluation domain, a participant is mapped to a dataset and a question is mapped to an algorithm. Left: The resulting DAG from this mapping. Right: The DAG composed of dataset and algorithm attributes. A range of indicators are computed as additional measures that characterize algorithms and assess the goodness of the IRT model, as presented in the following sections. AIRT is applicable to both continuous and polytomous IRT models. Our results on various algorithm portfolios used to validate the approach in Section 5 and Appendix A focus on continuous IRT models; however, we note that AIRT can also be used to construct polytomous models. We present results for continuous scenarios because they have higher variation and as such are more interesting. We note that the R package airt has the functionality to handle polytomous data as well as continuous data, and details of the generalization to polytomous data are provided in the Supplementary Materials. We will use the CSP-Minizinc-2016 algorithm portfolio from the ASlib repository (Bischl et al. 2016) to illustrate the algorithm and dataset metrics. For all algorithms in the ASlib repository certain hyperparameters and parameters were used which we do not vary. Any conclusions we draw about algorithm performance are therefore dependent on the actual algorithm implementation they use. Further conclusions about the strengths and weaknesses of any algorithm would need to thoroughly explore the impact of its parameter values. CSP-Minizinc-2016 contains the results of constraint satisfaction and optimization problems. The original dataset contains the runtimes of each problem instance. As the IRT framework denotes good performance by increasing values, we have taken the reciprocal of the runtimes to fit the AIRT model. Figure 10 shows the heatmaps of the probability density functions for all algorithms in the portfolio. The items discussed in Figures 3 and 4 were algorithms taken from this portfolio. Figure 9: The heatmap of LCG-Glucose-free on the top left and the trace lines for \(z\in\{-1,0,1\}\) for that item on the top right. The histogram of the latent scores estimated by the model is shown at the bottom. In the inverted IRT algorithm evaluation setting, the latent scores represent test instance easiness. Figure 10: The heatmap of probability density functions for all algorithms in the CSP-Minizinc-2016 portfolio. ### Dataset metric: Difficulty score As discussed previously, the latent trait denoted by \(\theta\) corresponds to dataset easiness and is given by \[\theta_{i}=\frac{\sum_{j}\hat{\alpha}_{j}^{2}\left(\hat{\beta}_{j}+\hat{\gamma}_{j}z_{ij}\right)}{\sum_{j}\hat{\alpha}_{j}^{2}}\,, \tag{10}\] where \(\hat{\alpha}_{j}\), \(\hat{\beta}_{j}\) and \(\hat{\gamma}_{j}\) are the estimated discrimination, difficulty and scaling parameters for algorithm \(j\), which are obtained by fitting the IRT model. Using \(\theta_{i}\) we define dataset difficulty
as \[\delta_{i}=-\theta_{i}\;, \tag{11}\] where \(\delta_{i}\) denotes the difficulty of the \(i^{\text{th}}\) dataset. We see that dataset difficulty is a function of the discrimination, difficulty, and scaling parameters of the algorithms, as well as the accuracy scores of the datasets. Shojima (2005) uses the normal density type CRM with normal priors, making the posterior distribution of the trait parameter \(\theta\) normal. Thus, we can expect dataset difficulty \(\delta\) to be normally distributed. We refer to datasets/problems as easy if they have low difficulty values. Similarly, we say datasets/problems are difficult if they have high difficulty values. The semi-difficult or semi-easy instances are in the middle of the spectrum.

### Algorithm metric: Anomalous indicator

Consider the heatmap and the trace lines shown in Figure 11. The left column represents the algorithm LCG-Glucose-free and the right column represents a different type of algorithm.

Figure 11: The left column shows the heatmap and the trace lines for LCG-Glucose-free, a typical algorithm with increasing \(\theta\) corresponding to increased performance. The right column shows the heatmap and the trace lines for an anomalous algorithm, which obtains high accuracy scores for difficult test instances and low accuracy scores for easy test instances. The second algorithm is constructed using an algorithm in the Minizinc portfolio for illustrative purposes.

Suppose these figures were generated from an item in educational testing. Then the left column shows the heatmap and the trace lines of a test item for which higher examinee ability corresponds to higher grades. On the other hand, the right column shows a test item for which examinees with lower ability obtain higher grades than examinees with higher ability. Such a test item is said to have negative discrimination. The standard premise in educational testing is that high grades correspond to high ability. As such, a test item with negative discrimination is commonly revised to obtain a positive discrimination or removed from the pool of questions (Hambleton & Swaminathan 2013). However, in algorithm evaluation, such a heatmap or set of trace lines represents an algorithm or a dataset with an interesting quirk. For the standard IRT approach for algorithm evaluation, the heatmap and the trace lines represent a dataset which gives poor performances for high-ability algorithms and good performances for low-ability algorithms. For the inverted IRT approach, the heatmap and trace lines represent an algorithm that performs well on difficult test instances and poorly on easy test instances. We describe such algorithms as "anomalous". Indeed, the _no free lunch_ concept emphasizes that no single algorithm performs better than all other algorithms for all problems. This is confirmed by the _instance space_ analyses conducted by Smith-Miles and co-authors (Smith-Miles & Tan 2012, Kang et al. 2017, Munoz & Smith-Miles 2017). Furthermore, the instance space analyses for different problems show that even though some algorithms perform poorly on average, they often hold a niche in the instance space where they outperform other algorithms (Kandanaarachchi et al. 2019). As this is a unique strength of the algorithm, such an algorithm should not be removed from the portfolio, as is practiced with negatively discriminating items in educational testing.
For continuous and polytomous IRT models, the standard parameters for item \(j\) comprise the discrimination parameter \(\alpha_{j}\) and the difficulty parameter \(\beta_{j}\) for continuous models, and the intercepts \(\boldsymbol{d}_{j}=\left(d_{1},\ldots,d_{C_{j}-1}\right)\) for polytomous models. The discrimination parameter, which is present in both continuous and polytomous models, highlights two aspects of algorithm performance. The sign of the discrimination parameter tells us if the algorithm is typical or anomalous. If \(\alpha_{j}<0\), then algorithm \(j\) gives better performance values for difficult test instances and low performance values for easy test instances, and is considered anomalous. So we define the anomalous indicator as \[\text{anomalous}(j)=\left\{\begin{array}{ll}\text{TRUE}&\alpha_{j}<0\,,\\ \text{FALSE}&\text{otherwise}\,.\end{array}\right.\]

### Algorithm metric: Algorithm consistency score

Consider the heatmap and the trace lines in Figure 12. Suppose these trace lines relate to a test item in an educational testing scenario. Then this item does a poor job of discriminating examinees with different abilities, because all examinees are most likely to obtain a similar score regardless of their ability. In algorithm evaluation using the standard IRT approach, such a heatmap and trace lines indicate that the dataset in question does not discriminate between the algorithms. That is, the dataset might be too difficult for all algorithms or too easy for all algorithms. Similarly, in algorithm evaluation using the inverted IRT approach, such a heatmap and trace lines indicate that the algorithm does not discriminate. That is, regardless of the easiness/difficulty of the test instance, this algorithm is most likely to give a similar score, i.e., its sensitivity to test instances is quite low. Thus, the algorithm is consistent and non-discriminative.

The consistency or robustness of an algorithm is an important characteristic that is sometimes overlooked in the quest for peak performance. Stability or robustness can be defined in different ways. For example, Eiben & Smit (2011) discuss three types of robustness indicators: robustness with respect to parameters, problem specification, and random seeds. Often, robustness or stability is defined as a measure of the change of the output with respect to a small perturbation of the input. In our case, we do not perturb the input; however, datasets positioned close to each other in the latent trait continuum are considered to have a similar easiness/difficulty level. As such, a measure of the change of performance values across the latent trait continuum is an indication of stability or robustness. However, stability and robustness are positive attributes. The algorithm quality we want to encapsulate is slightly different, in the sense that some algorithms can consistently perform poorly irrespective of the problem while others can consistently perform well. We capture this notion by defining algorithm consistency. The absolute value of the discrimination parameter \(|\alpha_{j}|\) gives the discrimination power of the algorithm, which is linked to the consistency of the algorithm. If \(|\alpha_{j}|\) is small, then the algorithm will produce trace curves with slower transitions, similar to those in Figure 12, signifying a more consistent algorithm than one with a larger \(|\alpha_{j}|\).
As such, we define consistency as \[\text{consistency}(j)=\frac{1}{|\alpha_{j}|}\,.\] Tying this back to the heatmaps, the discrimination power of the algorithm is connected with the sharpness of the lines/bands on the heatmap. In Figure 10 we see that some algorithms have sharp lines while others have blurry lines. Algorithms with sharp lines are more discriminating than algorithms with blurry lines, i.e., algorithms with blurry lines or no lines are more consistent than algorithms with sharp lines.

Figure 12: The heatmap and the trace lines for Choco-free, a relatively consistent algorithm in this portfolio.

### Algorithm metric: Difficulty limit

Both consistency and anomalousness relate to the IRT discrimination parameter. Next, we discuss the role of the item difficulty parameter in the inverted IRT algorithm evaluation approach. Suppose Figure 13 represents two items in educational testing. The first and the second columns in Figure 13 show the trace lines and the heatmaps of two items, with the item in the left column having higher difficulty. We see that for any given ability \(\theta\), the most probable score in the heatmap in the right column is higher than that of the left column. In the inverted IRT approach, the heatmaps and the trace lines represent algorithms, with the algorithm in the left column, Mistral-free, giving lower performance for similar datasets compared to the algorithm in the right column, MZN/SCIP-free. When we consider dataset difficulty (\(-\theta\)), we see that as datasets get more difficult the algorithm performance goes down.

Figure 13: Two algorithms with different difficulty limits. Mistral-free has a lower difficulty limit than MZN/SCIP-free.

Thus, each algorithm has an upper limit in terms of dataset difficulty. If the difficulty of a dataset is lower than this limit, we expect the algorithm to give good results, but if it is higher than the limit, the algorithm would perform poorly. Therefore, we define the algorithm difficulty limit as \[\text{difficulty}(j)=-\beta_{j}\,,\] where \(\beta_{j}\) is the traditional IRT difficulty parameter. Higher values of difficulty(\(j\)) indicate better algorithms that can handle more difficult datasets. For polytomous IRT, as there are multiple difficulty parameters \((d_{1},d_{2},\ldots,d_{C_{j}-1})\), we use \(-d_{C_{j}-1}\) as the difficulty limit, because this denotes the threshold for the highest performance level.

## 4 Evaluating algorithm portfolios using AIRT

### Modelling algorithm performance based on dataset difficulty

The dataset difficulty spectrum gives a way of ordering the performance values \(y_{ij}\). For each algorithm \(j\), we can consider the set of points \((\delta_{i},y_{ij})\) for \(i\in\{1,\ldots,N\}\). When ordered by \(\delta_{i}\), \(y_{ij}\) exhibits algorithm \(j\)'s performance as datasets get progressively more difficult. Thus, for each algorithm \(j\), we can fit a model explaining the performance by the dataset difficulty values. These models can be denoted by functions \(\{h_{j}(\delta)\}_{j=1}^{n}\), where \(j\) denotes the algorithm and \(\delta\) the dataset difficulty. For simplicity, our \(h_{j}\)'s are smoothing splines. The smoothing spline \(h_{j}\) minimizes the function \[\sum_{i=1}^{N}\left(y_{ij}-h_{j}(\delta_{i})\right)^{2}+\lambda\int h_{j}^{\prime\prime}(t)^{2}\,dt\,,\] where the first term denotes the sum of squared errors and the second term is a penalty for wiggliness. It is the second term, the integral of the squared second derivative, that gives the spline its smoothness.
The parameter \(\lambda\) is a tuning parameter and is computed by using a closed-form expression that minimizes the leave-one-out cross-validation squared error (James et al., 2013). An advantage of using smoothing splines is that we do not need to specify any parameters to fit the splines. Furthermore, by graphing the splines we can visualize regions of the latent trait where algorithms give good or weak performance. We note that this is a feature-less way of exploring algorithm performance. For example, in instance space analysis we compute features of datasets and explain algorithm performance using these features. AIRT explains algorithm performance using dataset difficulty, which is computed by fitting an IRT model without using external features. The CSP-Minizinc-2016 algorithm portfolio ordered by dataset difficulty and the fitted smoothing splines are shown in Figure 14. From this diagram we see that different algorithms perform better for different values of dataset difficulty.

### Strengths and weaknesses of algorithms

We can compute the strengths and weaknesses of algorithms using the dataset/problem difficulty spectrum. To find the algorithm strengths, we first find the best algorithm performance for each value \(\delta\) in the problem difficulty spectrum. That is, \[h_{j_{*}}(\delta)=\max_{j}\,h_{j}(\delta)\,.\] Next, for a given \(\epsilon>0\) we define the strengths of algorithm \(j\) as \[\text{strengths}(j,\epsilon)=\left\{\delta:|h_{j}(\delta)-h_{j_{*}}(\delta)|\leq\epsilon\right\}\,.\]

Figure 14: The dataset difficulty spectrum of CSP-Minizinc-2016 explored. Left: Algorithm performance against dataset difficulty for all algorithms. Right: Smoothing splines fitted to algorithm performance values.

That is, the strengths of algorithm \(j\) denote the regions in the problem difficulty spectrum where algorithm \(j\) gives good performance. Here, good is defined as close to best, specifically within \(\epsilon\) of the best. As such, we can get multiple contiguous regions of strength for some algorithms, while others may not have any strengths in the spectrum for a given \(\epsilon\). Algorithm weaknesses are found similarly. To compute the weaknesses, we first find the poorest algorithm performance for every point in the problem difficulty spectrum: \[h_{j_{\#}}(\delta)=\min_{j}h_{j}(\delta)\;.\] Then, we define the weaknesses of algorithm \(j\) as \[\text{weaknesses}(j,\epsilon)=\left\{\delta:|h_{j}(\delta)-h_{j_{\#}}(\delta)|\leq\epsilon\right\}\;.\] Weaknesses represent regions in the problem difficulty spectrum where algorithms give poor performance.

Figure 15 shows the strengths and weaknesses of the CSP-Minizinc-2016 algorithm portfolio. The top row shows the strengths and weaknesses for \(\epsilon=0\) and the bottom row for \(\epsilon=0.01\). The difference between the two values of \(\epsilon\) is that when \(\epsilon=0\), for each value \(\delta\) in the dataset difficulty spectrum there is only one algorithm that is strong. When \(\epsilon\neq 0\), multiple algorithms can display strengths for the same \(\delta\). In Figure 15 we see that when \(\epsilon=0\), LCG-Glucose-UC-free is strong for a large part of the problem space, including difficult and medium-difficult problems. OR-Tools-free is better for more difficult problems, and LCG-Glucose-free and Chuffed-free for easy problems. For \(\epsilon=0\), only 5 algorithms have strengths. When \(\epsilon=0.01\), we see a little overlap.
However, when \(\epsilon=0.01\), only 7 of the 22 algorithms exhibit strengths. In contrast, 16 algorithms have weaknesses when \(\epsilon=0.01\). Both LCG-Glucose-UC-free and LCG-Glucose-free have strengths for easier problems, but LCG-Glucose-UC-free remains the more powerful algorithm. In the weaknesses space, we see Picat-CP-fd, OscaR/CBLS-free and Yuck-free displaying weaknesses for most of the problem space. A large number of algorithms are weak for difficult problems, as seen for \(\epsilon=0.01\).

Using the strengths we compute the _latent trait occupancy_ (LTO) for each algorithm. LTO gives the proportion of datasets supported by each algorithm in the region of its strength. We define it as \[\text{LTO}(j,\epsilon)=\frac{|\{i:\delta_{i}\in\text{strengths}(j,\epsilon)\}|}{N}\;,\] where \(i\) and \(j\) denote the datasets and algorithms respectively, and the total number of datasets/problems is denoted by \(N\). For the strengths shown in Figure 15 for \(\epsilon=0\), LCG-Glucose-UC-free occupies the largest portion of the latent trait, followed by Chuffed-free. When \(\epsilon>0\), the quantity \(\sum\text{LTO}>1\) if the strengths of algorithms overlap, as shown in Figure 15. The LTO values for both \(\epsilon=0\) and \(\epsilon=0.01\) are listed in Table 2.

Combining Figure 15 and Table 2, we see that for very easy problems (\(\delta\approx-2\)) the algorithms MZN/Cbc-free and MZN/SCIP-free display strengths. However, we see that their latent trait occupancy is LTO = 0.02, which is very small. Therefore, even if these two algorithms have strengths for very easy problems, it is risky to use them because of their small LTO. For easy problems (\(\delta\leq 0\)) we have 3 candidates: LCG-Glucose-UC-free, Chuffed-free and LCG-Glucose-free. The LTOs of these algorithms are 0.828, 0.141 and 0.111, respectively. Basically, this reiterates that LCG-Glucose-UC-free is the most powerful algorithm. For very hard datasets (\(\delta>1\)), we have 3 candidates: Choco-free, OR-Tools-free and LCG-Glucose-UC-free. Of these, Choco-free has an LTO of 0.04, and thus can be disregarded. OR-Tools-free occupies the same position in the strengths diagram for both \(\epsilon=0\) and \(\epsilon=0.01\) and thus has a unique strength for very difficult problems.

### Algorithm portfolio selection

The analysis in the previous section can be used to understand the strengths and weaknesses of algorithms, contributing to the exploratory data analysis of algorithm portfolios. We can also use AIRT for algorithm portfolio selection.

Figure 15: Strengths and weaknesses of CSP-Minizinc-2016 algorithms for \(\epsilon=0\) and \(\epsilon=0.01\).

We construct the airt portfolio by selecting the set of strong algorithms for a given \(\epsilon\). Formally, the airt portfolio is defined as \[\mathcal{A}(\epsilon)=\left\{j:|h_{j}(\delta)-h_{j_{*}}(\delta)|\leq\epsilon\ \text{for some}\ \delta\right\}=\left\{j:\text{strengths}(j,\epsilon)\neq\emptyset\right\}.\] When \(\epsilon=0\) we obtain \(\mathcal{A}(0)=\bigcup_{\delta}j_{*}(\delta)\), i.e., the strongest set of algorithms in the latent space. We use the lowercase 'airt' when describing portfolio-specific results and the uppercase AIRT when describing more general aspects. The number of algorithms in the airt portfolio depends on \(\epsilon\). However, we do not directly specify the number of algorithms. It is a result of the smoothing splines \(\{h_{j}(\delta)\}_{j=1}^{n}\), which use the dataset difficulty spectrum \(\delta\) as the input.
But \(\delta_{i}=-\theta_{i}\), which is computed using \(\alpha_{j}\), \(\beta_{j}\), \(\gamma_{j}\) and \(z_{ij}\), as dictated by equation (10). Therefore, the AIRT model has a direct influence on the portfolio. Of course, the airt portfolio, the strengths and weaknesses, and other indicators of algorithm performance are only reliable if the IRT model providing the parameters has a good fit. In the following section we provide some measures of goodness of the IRT model to support interpretation of the results.

### IRT Model goodness measures

We are using IRT to model algorithm performance; that is, the IRT model is effectively a meta-model. Checking the accuracy or the goodness of the IRT model is important because it determines the confidence we can place in the IRT model parameters, which describe the algorithms. If the IRT model is accurate, then we can trust the relationships it has modeled between instances and algorithm performances.

After fitting a continuous (polytomous) IRT model, we define the predicted result (category) for a test instance \(i\) with latent score \(\theta_{i}\) as the result (category) with the highest probability at that latent score. We denote the predicted result (category) for test instance \(i\) and algorithm \(j\) by \(\hat{x}_{ij}\). Then the residuals \(e_{ij}=x_{ij}-\hat{x}_{ij}\) are of interest to us. For a fixed \(j\), let \(e_{j}=\{e_{ij}\}_{i=1}^{N}\) denote the residuals of the \(j^{\text{th}}\) algorithm. We consider the scaled absolute residuals \(\rho_{ij}=c|e_{ij}|\), with the constant \(c\) chosen such that \(\rho_{ij}\in[0,1]\). As we are interested in the algorithms, we define \(\rho_{j}=\{\rho_{ij}\}_{i=1}^{N}\) and consider the empirical cumulative distribution function (CDF) of \(\rho_{j}\) for each \(j\), which we denote by \(F(\rho_{j})\): \[F(\rho_{j})=P(\rho_{j}\leq\rho)\quad\text{for}\quad\rho\in[0,1]\;. \tag{12}\]

\begin{table} \begin{tabular}{l r r} \hline \hline Algorithm & LTO (\(\epsilon=0\)) & LTO (\(\epsilon=0.01\)) \\ \hline LCG-Glucose-UC-free & 0.717 & 0.828 \\ Chuffed-free & 0.121 & 0.141 \\ LCG-Glucose-free & 0.071 & 0.111 \\ OR-Tools-free & 0.071 & 0.071 \\ MZN/Cbc-free & 0.020 & 0.020 \\ Choco-free & 0 & 0.040 \\ MZN/SCIP-free & 0 & 0.020 \\ \hline \hline \end{tabular} \end{table} Table 2: AIRT Latent Trait Occupancy (LTO) for CSP-Minizinc-2016 algorithms.

Figure 16 shows a histogram of the absolute residuals \(|e_{ij}|\) and the empirical cumulative distribution functions of both \(|e_{ij}|\) and the scaled absolute residuals \(\rho_{ij}\) for the iZplus-free algorithm in the CSP-Minizinc-2016 portfolio. The only difference between the two CDFs is the \(x\) values, which are in the interval \([0,1]\) for the scaled absolute residuals. By rescaling the absolute residuals to \([0,1]\), we make sure that the area under the CDF \(F(\rho_{j})\), denoted by AUCDF(\(\rho_{j}\)), is bounded by \(1\). AUCDF(\(\rho_{j}\)) provides a measure of goodness of the IRT model for algorithm \(j\). A higher AUCDF signifies a better IRT model fit. We compute the mean square error (MSE) of the residuals and AUCDF(\(\rho_{j}\)) for each algorithm \(j\). IRT may fit some algorithms better than others. We note that the log-likelihood obtained from fitting the IRT model is an aggregate and therefore does not show how well each algorithm is fitted. By computing residual metrics such as the mean square error and AUCDF(\(\rho_{j}\)), we gain a better understanding of the IRT model in relation to each algorithm.
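To make these goodness measures concrete, the following minimal R sketch computes the MSE and AUCDF(\(\rho_{j}\)) for a single algorithm from its residual vector. It is an illustrative sketch only: the helper name `aucdf`, the per-algorithm scaling constant, and the evaluation grid are our assumptions, not the airt package's API.

```r
# Minimal sketch of the residual-based goodness measures for one algorithm.
# `e_j` holds the residuals e_ij = x_ij - xhat_ij; `aucdf` is an
# illustrative helper, not the airt package API.
set.seed(1)
e_j <- rnorm(100, sd = 0.1)                      # stand-in residuals

mse <- mean(e_j^2)                               # mean square error

aucdf <- function(e) {
  rho  <- abs(e) / max(abs(e))                   # scaled absolute residuals in [0, 1]
  Fj   <- ecdf(rho)                              # empirical CDF F(rho_j)
  grid <- seq(0, 1, length.out = 1001)
  y    <- Fj(grid)
  sum((y[-1] + y[-length(y)]) / 2 * diff(grid))  # trapezoidal area under the CDF
}
aucdf(e_j)                                       # closer to 1 for better IRT fits
```

Here the scaling constant \(c\) is taken as \(1/\max|e_{ij}|\) within the algorithm; any constant mapping \(\rho_{ij}\) into \([0,1]\) satisfies the definition above.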
### Predicted and actual effectiveness

We are interested in how well algorithms perform on test instances, especially the high performance results. If an algorithm gives good performance results for most test instances, then that algorithm is effective. As such, we focus on the performance results in decreasing order and study effectiveness via the cumulative distribution function (CDF) for each algorithm.

Figure 16: The histogram of the absolute residuals \(|e_{ij}|\) of iZplus-free is shown on the left. The CDF of the absolute residuals is shown on the top right and the CDF of the scaled absolute residuals is shown on the bottom right. Notice the difference in the domain for the two CDFs.

First, we denote the algorithm performance results for algorithm \(j\) by \(x_{j}=\{x_{ij}\}_{i=1}^{N}\). By defining \(t_{j}=\max(x_{j})-x_{j}\) we reverse the performance results so that small values of \(t_{j}\) denote high performance results. The variable \(t_{j}\) can be thought of as a tolerance parameter, i.e., small tolerances give better performance. Then we compute the effectiveness of the algorithm by \[\bar{F}_{j}(\ell)=P(t_{j}\leq\ell)\,,\] where \(P\) denotes the probability. The function \(\bar{F}_{j}(\ell)\) is also related to the complementary cumulative distribution function (CCDF), which is defined as \[\bar{F}_{x}(\ell)=P(x\geq\ell)\,,\] since \[P(t_{j}\leq\ell)=P\left(\max(x_{j})-x_{j}\leq\ell\right)=P\left(x_{j}\geq\max(x_{j})-\ell\right)\,.\] As such, \(\bar{F}_{j}(\ell)\) denotes the CCDF of \(x_{j}\) with the \(x\) axis reversed. We call the curve \(y=\bar{F}_{j}(\ell)\) the effectiveness curve. By scaling \(t_{j}\) to lie in \([0,1]\), we make sure that the area under the effectiveness curve is bounded by \(1\). For polytomous IRT with categories \(\{0,1,\ldots,C_{j}-1\}\), we consider a step size of \(\Delta=\frac{1}{C_{j}-1}\) for the \(x\) axis with \(\ell\in\{0,1,\ldots,C_{j}-1\}\), so that the curve \(y=\bar{F}_{j}(\ell)\) is defined by the points \(\left(0,\bar{F}_{j}(C_{j}-1)\right),\left(\Delta,\bar{F}_{j}(C_{j}-2)\right),\ldots,\left(1,\bar{F}_{j}(0)\right)\).

Figure 17 shows the histogram (left), the CDF of the performance values (top right), and the effectiveness curve (bottom right) of the Chuffed-free algorithm in the CSP-Minizinc-2016 portfolio.

Figure 17: The histogram of the performance values of the Chuffed-free algorithm is shown on the left. The graph on the top right shows the CDF of the performance values. The graph on the bottom right shows the effectiveness curve \(y=\bar{F}_{j}(\ell)\).

Similarly, we can compute the effectiveness for the IRT-predicted algorithm performance values by defining \(\hat{x}_{j}=\{\hat{x}_{ij}\}_{i=1}^{N}\) and \(\hat{t}_{j}=\max(\hat{x}_{j})-\hat{x}_{j}\), where \(\hat{x}_{ij}\) denotes the predicted result for algorithm \(j\) and test instance \(i\). This gives the predicted effectiveness \[\bar{F}_{j}(\hat{\ell})=P(\hat{t}_{j}\leq\hat{\ell})\;,\] where we have indicated that it is a predicted quantity by using \(\hat{\ell}\). We have denoted the effectiveness by \(\bar{F}\) for both predicted and actual values, while changing from \(\ell\) to \(\hat{\ell}\) for the predicted effectiveness. We compute the area under the actual and predicted effectiveness curves, as this is a measure of an algorithm's ability to produce high performance results. We denote the area under the actual effectiveness curve \(y=\bar{F}_{j}(\ell)\) by AUAEC\((j)\), and the area under the predicted effectiveness curve \(y=\bar{F}_{j}(\hat{\ell})\) by AUPEC\((j)\).
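The effectiveness curve and its area can be computed in a few lines. The sketch below is illustrative: the helper name `effectiveness_area` and the synthetic performance vectors are our assumptions, not the airt package's API.

```r
# Minimal sketch of the effectiveness curve and its area for one algorithm.
effectiveness_area <- function(x) {
  t <- max(x) - x                       # reversed performances (tolerances)
  if (max(t) > 0) t <- t / max(t)       # scale tolerances to [0, 1]
  Fbar <- ecdf(t)                       # y = Fbar_j(l) = P(t_j <= l)
  grid <- seq(0, 1, length.out = 1001)
  y    <- Fbar(grid)
  sum((y[-1] + y[-length(y)]) / 2 * diff(grid))  # trapezoidal area
}

set.seed(1)
x_j    <- runif(100)                                   # actual performances
xhat_j <- pmin(pmax(x_j + rnorm(100, 0, 0.05), 0), 1)  # stand-in IRT predictions
auaec  <- effectiveness_area(x_j)                      # AUAEC(j)
aupec  <- effectiveness_area(xhat_j)                   # AUPEC(j)
abs(auaec - aupec)     # a large gap flags low trustworthiness for algorithm j
```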
A high AUAEC\((j)\) indicates that algorithm \(j\) has a large proportion of high performance results, and a high AUPEC\((j)\) indicates that the IRT model predicts algorithm \(j\) to have a large proportion of high performance results. For a single algorithm, the pair of values (AUAEC, AUPEC) gives an indication of the algorithm's actual and perceived ability to produce high performance results. If the absolute difference between the predicted and actual effectiveness, \(|\)AUAEC - AUPEC\(|\), is large, then the trustworthiness of the IRT model is low for that algorithm. It may be the case that AUAEC \(\approx\) AUPEC for most algorithms in a portfolio, but for one algorithm the absolute difference between AUAEC and AUPEC is higher. A larger absolute difference between AUAEC and AUPEC will coincide with a lower AUCDF for that algorithm. For example, if the IRT model overestimates the performance of an algorithm, AUPEC will be higher than AUAEC. This will also result in lower agreement between the predicted and the actual results, giving rise to a lower AUCDF.

Figure 18: The model goodness graphs for the CSP-Minizinc-2016 portfolio. The top row shows actual and predicted effectiveness curves. The graph on the bottom left shows the CDF of the absolute residuals and the graph on the bottom right shows the actual and predicted effectiveness of the algorithms.

Table 3 gives the model goodness measures for each algorithm. We see that the MSE is low for most algorithms apart from HaifaCSP-free, which also has the highest \(|\)AUAEC - AUPEC\(|\). In terms of goodness of fit, we can say that the IRT model is a good fit for almost all algorithms, apart from HaifaCSP-free. The measures we have proposed for the algorithm consistency score, anomalous indicator, and difficulty limit are algorithm evaluation metrics, while the absolute residuals curve and the actual and predicted effectiveness curves, along with AUCDF and \(|\)AUAEC - AUPEC\(|\), are AIRT's model goodness metrics.

\begin{table} \begin{tabular}{l r r r r r} \hline \hline Algorithm & MSE & AUCDF & AUAEC & AUPEC & \(|\)AUAEC - AUPEC\(|\) \\ \hline iZplus-free & 0.046 & 0.823 & 0.133 & 0.179 & 0.046 \\ MZN/SCIP-free & 0.072 & 0.746 & 0.102 & 0.287 & 0.185 \\ Chuffed-free & 0.085 & 0.733 & 0.286 & 0.383 & 0.097 \\ LCG-Glucose-UC-free & 0.088 & 0.725 & 0.372 & 0.450 & 0.078 \\ Concrete-free & 0.003 & 0.974 & 0.022 & 0.000 & 0.022 \\ JaCoP-fd & 0.051 & 0.894 & 0.092 & 0.032 & 0.059 \\ Mistral-free & 0.027 & 0.892 & 0.083 & 0.098 & 0.016 \\ OscaR/CBLS-free & 0.000 & 0.995 & 0.002 & 0.000 & 0.002 \\ HaifaCSP-free & 0.100 & 0.690 & 0.168 & 0.392 & 0.224 \\ Gecode-free & 0.000 & 0.989 & 0.005 & 0.000 & 0.005 \\ OR-Tools-free & 0.037 & 0.951 & 0.043 & 0.000 & 0.043 \\ SICStus-Prolog-fd & 0.013 & 0.972 & 0.023 & 0.000 & 0.023 \\ Picat-CP-fd & 0.000 & 0.993 & 0.002 & 0.000 & 0.002 \\ Picat-SAT-free & 0.017 & 0.894 & 0.096 & 0.169 & 0.073 \\ MZN/Gurobi-free & 0.089 & 0.720 & 0.208 & 0.384 & 0.176 \\ MZN/CPLEX-free & 0.093 & 0.713 & 0.205 & 0.385 & 0.180 \\ LCG-Glucose-free & 0.089 & 0.722 & 0.336 & 0.450 & 0.114 \\ MZN/Cbc-free & 0.058 & 0.786 & 0.092 & 0.223 & 0.132 \\ Yuck-free & 0.000 & 0.995 & 0.002 & 0.000 & 0.002 \\ Choco-free & 0.021 & 0.944 & 0.050 & 0.000 & 0.050 \\ MinisatID-free & 0.022 & 0.898 & 0.081 & 0.115 & 0.034 \\ G12FD-free & 0.014 & 0.961 & 0.035 & 0.001 & 0.034 \\ \hline \hline \end{tabular} \end{table} Table 3: MSE, AUCDF, Area Under Actual Effectiveness Curve (AUAEC), Area Under Predicted Effectiveness Curve (AUPEC), and \(|\)AUAEC - AUPEC\(|\) for CSP-Minizinc algorithms.
In the discussion that follows, we refer to both AIRT and the underlying IRT model. AIRT refers to the reinterpreted IRT model with the additional evaluation metrics discussed above. When we discuss standard IRT concepts such as trace lines, we refer to the IRT model. This concludes the discussion on the different aspects of the AIRT framework. The pseudocode given in Algorithm 1 summarizes the steps and functionality of AIRT.

### Computational complexity of AIRT

To fit the IRT model, we use the non-iterative item parameter solution proposed by Shojima (2005). This approach uses expectation maximization (EM), and in each EM cycle a non-iterative solution is found by optimizing the expectation in equation (9) item by item. By computing partial derivatives and solving a set of simultaneous equations, exact solutions for the item parameters \(\alpha_{j}\), \(\beta_{j}\) and \(\gamma_{j}\) are found in each cycle. The optimization stops when the solutions of successive cycles converge or when the maximum number of cycles is reached. Let \(c\) denote the number of cycles; hence, the computation is repeated \(c\) times. For an \(N\times n\) matrix \(Z\), there are \(n\) items and \(N\) participants. The non-iterative solution is found for each item \(j\in\{1,\ldots,n\}\). For a fixed \(j\), solving for \(\alpha_{j}\), \(\beta_{j}\) and \(\gamma_{j}\) involves computing quantities such as the mean, variance and covariance. The computational complexity of these operations is \(\mathcal{O}(N)\). When they are computed for each item \(j\) over \(c\) cycles, the overall complexity of fitting the IRT model becomes \(\mathcal{O}(Nnc)\). Of the three variables, \(c\) has an upper bound of \(200\) and \(n\) is much smaller than \(N\); as such, the most influential variable is \(N\).

After fitting the IRT model, we compute the anomalous indicator, algorithm consistency score and difficulty limit for each algorithm. These computations take a fixed amount of time for each algorithm \(j\). Therefore, computing the indicators has \(\mathcal{O}(n)\) complexity. Computing the dataset difficulty values \(\delta_{i}\) for \(N\) datasets using equations (10) and (11) takes \(\mathcal{O}(N)\) time. Therefore, computing the AIRT indicators and dataset difficulty values has \(\mathcal{O}(N+n)\approx\mathcal{O}(N)\) complexity, as \(n\) is much smaller than \(N\).

Smoothing splines can be fitted in \(\mathcal{O}(N)\) computational time. In statistical software packages, they are fitted using a much smaller number of points, approximately \(\log(N)\) when \(N>50\) (Hastie et al., 2009). Strengths and weaknesses of algorithms are computed mainly for visualization purposes. As such, the strengths and weaknesses horizontal bar graph has a smaller number of points than \(N\); let us say it has \(M\) points. For each of these points we compute the strengths and weaknesses of \(n\) algorithms. This computation involves \(\mathcal{O}(Mn)\) complexity; however, vectorized computations make it much faster in practice. The airt algorithm portfolio can then be computed in fixed time, as it takes the union of the strong algorithms. Model goodness measures involve \(n\) algorithms with \(N\) data points for each algorithm. Computing the MSE and the CDF of \(\rho_{j}\) as in equation (12) has \(\mathcal{O}(nN)\) complexity. Computing the area under the curve using trapezoidal integration takes \(\mathcal{O}(N)\) time. Similarly, the actual and predicted effectiveness have \(\mathcal{O}(nN)\) complexity.
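As a complement to Algorithm 1 below, the following R sketch runs Stages 2 and 3 on synthetic data, assuming Stage 1 has already produced the parameter vectors \(\alpha\), \(\beta\), \(\gamma\) and the transformed score matrix \(Z\) (here simulated). `smooth.spline` from the base stats package stands in for the LOOCV-tuned splines of Section 4.1 (its default smoothing parameter is chosen by generalized cross-validation), and the latent trait occupancy is approximated over the evaluation grid rather than over datasets. All names are ours, not the airt package's API.

```r
# Illustrative end-to-end sketch of Stages 2-3 of AIRT on synthetic data.
set.seed(1)
N <- 200; n <- 5
theta_true <- rnorm(N)                       # latent easiness
alpha <- runif(n, 0.5, 2)                    # discrimination (Stage-1 output)
beta  <- rnorm(n)                            # difficulty (Stage-1 output)
gamma <- runif(n, 0.5, 1.5)                  # scaling (Stage-1 output)
z <- sapply(1:n, function(j)
  (theta_true - beta[j]) / gamma[j] + rnorm(N, sd = 0.3))
y <- plogis(z)                               # performances in (0, 1)

# Stage 2: algorithm metrics and dataset difficulty.
anomalous        <- alpha < 0                # anomalous indicator
consistency      <- 1 / abs(alpha)           # algorithm consistency score
difficulty_limit <- -beta                    # algorithm difficulty limit
theta <- as.vector(z %*% (alpha^2 * gamma) + sum(alpha^2 * beta)) /
         sum(alpha^2)                        # equation (10)
delta <- -theta                              # equation (11)

# Stage 3: splines over the difficulty spectrum, strengths, LTO, portfolio.
grid <- seq(min(delta), max(delta), length.out = 200)
H <- sapply(1:n, function(j)
  predict(smooth.spline(delta, y[, j]), grid)$y)   # h_j(delta) on the grid
best <- apply(H, 1, max)                           # h_{j*}(delta)
eps  <- 0.01
strengths <- H >= best - eps                 # TRUE where algorithm j is strong
lto <- colMeans(strengths)                   # grid-based occupancy approximation
airt_portfolio <- which(colSums(strengths) > 0)    # A(eps)
```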
**Algorithm 1** The AIRT framework.

input: The matrix \(Y_{N\times n}\), containing accuracy measures of \(n\) algorithms for \(N\) datasets/problem instances.
output:
1. AIRT indicators of algorithms and dataset/problem difficulty
2. The strengths and weaknesses of algorithms
3. The airt algorithm portfolio
4. Model goodness measures

Stage 1 - Fitting the IRT model with inverted mapping
1. Transform the accuracy measures \(y_{ij}\) by defining \(z_{ij}=\ln\frac{y_{ij}}{k-y_{ij}}\).
2. Let \(Z=\{z_{ij}\}\in\mathbb{R}^{N\times n}\), where \(N\) denotes the number of problems/datasets and \(n\) denotes the number of algorithms.
3. Fit a continuous IRT model to \(Z\) by maximizing the log-likelihood function \[E_{\theta|\Lambda^{(t)},Z}\left[\ln p\left(\Lambda|\theta,Z\right)\right]=N\sum_{j=1}^{n}\left(\ln|\alpha_{j}|+\ln|\gamma_{j}|\right)-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{n}\alpha_{j}^{2}\left(\left(\beta_{j}+\gamma_{j}z_{ij}-\mu_{i}^{(t)}\right)^{2}+\sigma^{(t)2}\right)+\ln p\left(\Lambda\right)+\text{const}\,.\]
4. From this model, obtain (after \(t\) iterations) the IRT discrimination and difficulty parameters \(\alpha_{j}\) and \(\beta_{j}\) and the scaling parameter \(\gamma_{j}\) for algorithms \(j\in\{1,\ldots,n\}\) as follows: \[\gamma_{j}^{(t+1)}=\frac{V\left(\mu_{i}^{(t)}\right)+\sigma^{(t)2}}{C_{j}\left(z_{ij},\mu_{i}^{(t)}\right)}\,,\qquad\beta_{j}^{(t+1)}=M\left(\mu_{i}^{(t)}\right)-\gamma_{j}^{(t+1)}M_{j}\left(z_{ij}\right)\,,\qquad\alpha_{j}^{(t+1)}=\text{sign}\left(\gamma_{j}^{(t+1)}\right)\left(\gamma_{j}^{(t+1)2}V_{j}(z_{ij})-V\left(\mu_{i}^{(t)}\right)-\sigma^{(t)2}\right)^{-1/2}\,.\]
5. Using these IRT parameters, compute the latent trait \(\theta_{N\times 1}\) as \[\theta_{i}=\frac{\sum_{j}\hat{\alpha}_{j}^{2}\left(\hat{\beta}_{j}+\hat{\gamma}_{j}z_{ij}\right)}{\sum_{j}\hat{\alpha}_{j}^{2}}\,.\]

Stage 2 - Calculation of algorithm and dataset metrics
6. For each algorithm \(j\), compute the anomalous indicator, algorithm consistency score and difficulty limit using \[\text{anomalous}(j)=\left\{\begin{array}{ll}\text{TRUE}&\alpha_{j}<0\,,\\ \text{FALSE}&\text{otherwise}\,,\end{array}\right.\qquad\text{consistency}(j)=\frac{1}{|\alpha_{j}|}\,,\qquad\text{difficulty}(j)=-\beta_{j}\,.\]
7. For each dataset \(i\), compute the dataset difficulty using \(\delta_{i}=-\theta_{i}\).

Stage 3 - Computing strengths and weaknesses and constructing the airt portfolio
8. Using the dataset difficulty spectrum \(\delta\), fit smoothing splines \(h_{j}(\delta)\) to the performance values \(y_{ij}\) for each algorithm \(j\) by minimizing \(\sum_{i=1}^{N}\left(y_{ij}-h_{j}(\delta_{i})\right)^{2}+\lambda\int h_{j}^{\prime\prime}(t)^{2}\,dt\).
9. Compute the strengths and weaknesses of algorithms using \[\text{strengths}(j,\epsilon)=\left\{\delta:|h_{j}(\delta)-h_{j_{*}}(\delta)|\leq\epsilon\right\}\,,\qquad\text{weaknesses}(j,\epsilon)=\left\{\delta:|h_{j}(\delta)-h_{j_{\#}}(\delta)|\leq\epsilon\right\}\,,\] and use the strengths and weaknesses for exploratory data analysis purposes.
10. Construct the airt portfolio using \(\mathcal{A}(\epsilon)=\left\{j:\text{strengths}(j,\epsilon)\neq\emptyset\right\}\).
11. Check the fit of the IRT model by computing the model goodness measures MSE, AUCDF and \(|\)AUAEC - AUPEC\(|\).

## 5 Results

We now test AIRT on 10 algorithm portfolios hosted on the ASlib data repository (Bischl et al. 2016). ASlib hosts performance data and test instance features for a large number of algorithm portfolios. Section 5.1 contains a detailed analysis of classification algorithms using AIRT.
We explore the AIRT metrics, model goodness measures and the strengths and weaknesses of algorithms using the dataset difficulty spectrum. In addition, we compare different algorithm portfolios. The analysis of classification algorithms encompasses the full functionality of AIRT. We carry out more concise analyses for the other ASlib scenarios in Appendix A. We include the latent trait curves, the strengths and weaknesses, and algorithm portfolio comparisons for each ASlib scenario.

### Detailed case study: Classification

This scenario was introduced by van Rijn (2016) and uses a selection of WEKA algorithms (Hall et al. 2009). It was later used in the 2017 algorithm selection challenge by Lindauer et al. (2017). The dataset contains predictive accuracy results from 30 classification algorithms on 105 test instances. The default parameters and hyperparameters used by the classification algorithms were not varied. For ease of plotting graphs, we have shortened the names of many algorithms. For example, there are 3 multilayer perceptron algorithms; algorithm 8990_MultilayerPerceptron is renamed to 8990_MLP.

#### 5.1.1 AIRT algorithm metrics

Figure 19 shows the heatmaps of the AIRT-fitted probability distribution functions for the classification algorithms. We see that OLM and ConjunctiveRule are comparatively more stable. AIRT did not find any algorithm to be anomalous. Table 4 gives the AIRT metrics for the classification algorithms. Even though OLM has the highest algorithm consistency, it has the lowest difficulty limit. Therefore, OLM gives poor performances consistently. Thus, algorithm consistency by itself is not an indicator of a good algorithm. The RandomForest has the highest difficulty limit. Hence, the RandomForest can handle very difficult instances. The algorithms LMT, NaiveBayes, SMO_PolyKernel, AdaBoostM1_J48 and BayesNet also have high difficulty limits, meaning that these algorithms can handle hard instances.

The RandomForest occupies the largest proportion of the latent trait (LTO) for \(\epsilon=0\) and the second largest for \(\epsilon=0.01\). Therefore, it is an excellent algorithm suited for a large number of diverse instances. Notably, LMT, the second-best algorithm in terms of LTO for \(\epsilon=0\), surpasses the RandomForest and becomes the best algorithm for \(\epsilon=0.01\). This means that even though it is not the topmost curve for most of the latent trait, it is mostly \(\epsilon\)-close to the top curve, and coupled with its own strengths on the latent trait, it surpasses the RandomForest. The algorithm AdaBoostM1_J48 has a similar latent trait occupancy (LTO) to LMT when \(\epsilon=0\). Even though AdaBoostM1_J48's LTO increases when \(\epsilon=0.01\), it does not increase as much as LMT's LTO does. Curiously, REPTree and 8990_MLP have a similar proportion of the latent trait for both \(\epsilon\) values. In contrast, algorithms such as J48, JRip and Bagging_REPTree increase their LTO from 0 to values greater than 0.1 when \(\epsilon\) increases from 0 to 0.01, a bigger increase than REPTree and 8990_MLP undergo with the increase in \(\epsilon\). This observation suggests that REPTree and 8990_MLP have unique strengths in parts of the latent trait, rather than in regions where many algorithms perform well.

#### 5.1.2 Strengths and weaknesses of algorithms via AIRT

Figures 20 and 21 show the latent trait analysis for the OpenML Weka classification algorithms. Figure 20 shows the performance of the algorithms with respect to problem difficulty and the resulting smoothing splines.
The strengths and weaknesses of the different algorithms are shown in Figure 21. The strengths and weaknesses are calculated for two values of \(\epsilon\), \(\epsilon=0\) and \(\epsilon=0.01\), as discussed in Section 4.2. Of the 30 algorithms, 6 have strengths on the dataset difficulty spectrum when \(\epsilon=0\). These are the 8990_MLP, AdaBoostM1_J48, LMT, RandomForest, REPTree and SimpleCart algorithms. In contrast, 14 algorithms exhibit strengths when \(\epsilon=0.01\), showing how competitive the algorithms are. The RandomForest displays strengths on a large region of the problem space, followed by LMT, when \(\epsilon=0.01\). We see that many algorithms have strengths for easy problems while not so many are strong for difficult problems. For the region where dataset difficulty is between 0.5 and 1, only LMT displays a strength. Similarly, when dataset difficulty is between 1.5 and 2, the RandomForest is the only algorithm that displays an advantage. In terms of weaknesses, OLM is weak for most of the problem space for both \(\epsilon\) values. HyperPipes is weak for more difficult problems for both \(\epsilon\) values. The latent trait curves lying relatively low are shown as dashed lines so that they can be identified more easily. These are AdaB_DSt, ConjunctiveRule, HyperPipes, OLM and OneR.

We can make some observations from Figure 21 and Table 4. The first is that the RandomForest and LMT cover almost all of the latent trait in the strengths diagram for \(\epsilon=0.01\). For \(\epsilon=0\), these two algorithms coupled with AdaB_J48 cover most of the strengths spectrum. Thus, these three algorithms, or even just RandomForest and LMT, make a good combination for tackling diverse datasets. The second observation is that when increasing \(\epsilon\) from 0 to 0.01, even though the number of algorithms increased from 6 to 14, most of them have strengths for very easy problems. Of the additional 8 algorithms, DecisionTable, 8994_MLP, 8995_MLP and LADTree have \(\text{LTO}<0.1\). Thus, we can disregard some of the algorithms with small LTO when \(\epsilon=0.01\). Considering the key algorithms, the main change from \(\epsilon=0\) to \(\epsilon=0.01\) is the increase in LTO for the algorithm LMT.

#### 5.1.3 AIRT model goodness metrics

Table 5 gives the model goodness results for the classification algorithms. The MSE is less than 0.1 for all algorithms apart from OLM. Furthermore, the difference between predicted and actual effectiveness, \(|\text{AUPEC}-\text{AUAEC}|\), is less than 0.1 for all algorithms apart from OLM, NaiveBayes and ConjunctiveRule. Figure 22 shows the effectiveness curves and the CDFs for this portfolio of algorithms. We see that most points on the AUAEC-AUPEC plane are close to the AUAEC = AUPEC line, which is shown by a dotted line. OLM is the exception. In general, the model has fitted the algorithm performances well.

Figure 19: The heatmap of probability density functions for the classification (OpenML-weka-2017) algorithms obtained by fitting a continuous IRT model.

Figure 20: Algorithm performance with dataset/problem difficulty for classification algorithms. Top: Algorithm performance against dataset difficulty. Bottom: Latent trait curves for each algorithm, with AdaB_DSt, ConjunctiveRule, HyperPipes, OLM and OneR in dashed lines.

#### 5.1.4 Algorithm portfolio selection

We compare the airt portfolio with two additional algorithm portfolios:

1. _Shapley-portfolio_: a subset of algorithms selected using Shapley values (Frechette et al. 2016).
Shapley values measure an algorithm's marginal contribution to the portfolio using concepts from coalitional game theory. For the Shapley-portfolio, we select the algorithms with the top-\(n\) Shapley values.
2. _topset-portfolio_: a subset of algorithms having the best on-average performance at a per-instance level. The highest-ranked algorithm in the topset-portfolio gives the best performance for the largest number of instances. For the topset-portfolio, we select the top-\(n\) best on-average algorithms.

Figure 21: Strengths and weaknesses of the OpenML Weka classification algorithms. The top bar shows the strengths and weaknesses for \(\epsilon=0\) and the bottom graph for \(\epsilon=0.01\).

We construct Shapley, topset and airt portfolios with \(n\) algorithms and compare their performance for different values of \(n\). As the evaluation metric, we use the performance gap. The performance gap is computed using the best per-instance performance for each portfolio and the best per-instance performance using all the algorithms; we define the difference as the performance gap at a per-instance level.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Algorithm & MSE & AUCDF & AUAEC & AUPEC & \(|\)AUAEC - AUPEC\(|\) \\ \hline 8990\_MLP & 0.032 & 0.863 & 0.758 & 0.730 & 0.028 \\ 8994\_MLP & 0.036 & 0.844 & 0.747 & 0.700 & 0.047 \\ 8995\_MLP & 0.029 & 0.899 & 0.776 & 0.809 & 0.033 \\ SMO\_PolyKernel & 0.007 & 0.940 & 0.814 & 0.820 & 0.006 \\ OneR & 0.051 & 0.840 & 0.637 & 0.712 & 0.075 \\ J48 & 0.004 & 0.944 & 0.810 & 0.782 & 0.028 \\ 2364\_IBk & 0.022 & 0.910 & 0.785 & 0.795 & 0.010 \\ REPTree & 0.010 & 0.932 & 0.794 & 0.769 & 0.025 \\ RandomTree & 0.007 & 0.939 & 0.754 & 0.754 & 0.000 \\ RandomForest & 0.006 & 0.938 & 0.845 & 0.816 & 0.029 \\ LMT & 0.007 & 0.928 & 0.848 & 0.800 & 0.048 \\ HoeffdingTree & 0.011 & 0.921 & 0.768 & 0.767 & 0.001 \\ SMO\_RBFKernel & 0.017 & 0.899 & 0.750 & 0.759 & 0.009 \\ JRip & 0.003 & 0.950 & 0.805 & 0.787 & 0.018 \\ 2889\_IBk & 0.020 & 0.920 & 0.789 & 0.800 & 0.011 \\ HyperPipes & 0.040 & 0.846 & 0.629 & 0.683 & 0.054 \\ NaiveBayes & 0.029 & 0.868 & 0.751 & 0.881 & 0.130 \\ OLM & 0.164 & 0.681 & 0.411 & 0.171 & 0.240 \\ FURIA & 0.006 & 0.929 & 0.824 & 0.780 & 0.044 \\ BayesNet & 0.011 & 0.920 & 0.782 & 0.855 & 0.073 \\ ConjunctiveRule & 0.093 & 0.784 & 0.588 & 0.719 & 0.131 \\ SimpleCart & 0.009 & 0.943 & 0.811 & 0.794 & 0.017 \\ AdaBoostM1\_NaiveBayes & 0.013 & 0.928 & 0.771 & 0.813 & 0.042 \\ LADTree & 0.012 & 0.938 & 0.774 & 0.818 & 0.044 \\ Logistic & 0.005 & 0.942 & 0.805 & 0.802 & 0.003 \\ AdaBoostM1\_DSt & 0.082 & 0.794 & 0.611 & 0.698 & 0.087 \\ AdaBoostM1\_J48 & 0.006 & 0.934 & 0.837 & 0.798 & 0.039 \\ Bagging\_REPTree & 0.011 & 0.929 & 0.820 & 0.787 & 0.033 \\ DecisionTable & 0.009 & 0.931 & 0.761 & 0.764 & 0.003 \\ LogitBoost\_DSt & 0.004 & 0.956 & 0.812 & 0.824 & 0.012 \\ \hline \hline \end{tabular} \end{table} Table 5: MSE, AUCDF, Area Under Actual Effectiveness Curve (AUAEC), Area Under Predicted Effectiveness Curve (AUPEC), and \(|\)AUAEC - AUPEC\(|\) for classification algorithms.

Let the best performance for instance \(i\) using the full set of algorithms be denoted by \(b_{i}\). Let \(\mathcal{A}_{n},\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) denote airt, Shapley and topset portfolios having \(n\) algorithms. Let \(b_{\mathcal{A},i,n}\) denote the best performance for instance \(i\) using the airt portfolio with \(n\) algorithms.
Similarly, let \(b_{\mathcal{S},i,n}\) and \(b_{\mathcal{T},i,n}\) denote the best performance for instance \(i\) using the Shapley and topset portfolios with \(n\) algorithms. Then we define the performance gap for instance \(i\) for each portfolio as \[\text{Perf. gap}_{\mathcal{A},i,n}=b_{i}-b_{\mathcal{A},i,n}\,,\quad\text{Perf. gap}_{\mathcal{S},i,n}=b_{i}-b_{\mathcal{S},i,n}\quad\text{and}\quad\text{Perf. gap}_{\mathcal{T},i,n}=b_{i}-b_{\mathcal{T},i,n}\,.\] For each algorithm portfolio and \(n\) we get an \(N\times 1\) vector of performance gap values, and we compute the mean performance gap for each \(n\). For each algorithm scenario we use 10-fold cross-validation and report the average cross-validated performance gap for the Shapley, topset and airt portfolios. Additionally, we compute the standard errors using the different folds. We note that _Perf. gap_ is the same as the _misclassification penalty_ discussed in Bischl et al. (2016). However, we have used the term _Perf. gap_ because we think it is more intuitive and applicable to non-classification scenarios.

Figure 22: Model goodness metrics for classification algorithms: the actual and predicted effectiveness curves on the top row and the CDFs \(F(\rho_{j})\) and (AUAEC, AUPEC) on the bottom row.

Figure 23 shows the mean performance gap of the three portfolios using 10-fold cross-validation for the OpenML Weka algorithms for different values of \(\epsilon\). A lower gap is preferred, as it indicates the portfolio has better algorithms. The vertical lines at each point show the standard errors. We see that airt generally has lower performance gaps. The number of algorithms in the airt portfolio changes with \(\epsilon\). For each \(\epsilon\), we select the minimum number of algorithms from the airt, Shapley and topset portfolios as the limiting number of algorithms (the maximum \(x\) value). For \(\epsilon=0\), airt selects 6 algorithms, which decides the limiting number of algorithms. For other \(\epsilon\) values, Shapley decides the limiting number of algorithms in this example. For each fold, different algorithms may get selected by different portfolio selection methods. Thus, for \(n=14\) standard errors are not computed because 14 algorithms are selected in only 1 fold.

Figure 23: Performance analysis of Shapley, topset and airt portfolios for different \(\epsilon\) values. The mean cross-validated performance gap is shown with standard errors denoted by vertical lines.

### Additional case studies

We conduct shorter analyses for 9 additional ASlib scenarios, which are given in Appendix A. We explore the latent trait curves and the strengths and weaknesses of algorithms for \(\epsilon\in\{0,0.05\}\), and compare different algorithm portfolios. As each scenario other than OPENML-Weka has runtimes or par10 values as the evaluation metric, we transform these values by multiplying by \(-1\) and scaling to the interval \([0,1]\), with 1 denoting good performance and 0 denoting poor performance. Some summary statistics of these analyses are given in Table 6. As it is difficult to encapsulate the strengths and weaknesses or the latent trait curves by a single numeric value, we give the mean performance gap of the portfolios with 5 algorithms in Table 6. Figures of the mean performance gap for different numbers of algorithms with standard errors, and other details, are given in the Appendix. For most scenarios airt performs well.
Even though airt does not perform well for SAT11_INDU, from the MPG curves in the Appendix we see that the standard errors of the different portfolios overlap. We also see that the latent trait curves for SAT11_INDU are all bundled up together. This tells us that the SAT11_INDU algorithms are similar in performance. From the other scenarios, we notice that airt is better at identifying a good portfolio of algorithms when the algorithms are diverse, i.e., when the latent trait curves display high variability.

\begin{table} \begin{tabular}{l l r r r r r} \hline \hline Scenario & Measurement & Num. Obs. & Num. Algorithms & airt MPG & Shapley MPG & topset MPG \\ \hline OPENML\_WEKA & accuracy & 105 & 31 & **0.0553** & 0.0631 & 0.0556 \\ ASP\_POTASSCO & runtime & 1294 & 11 & 78.0 & 92.7 & **77.8** \\ CSP\_MINIZINC\_2016 & par10 & 100 & 21 & **1962** & 2371 & 2026 \\ GRAPHS\_2015 & runtime & 5725 & 8 & **1689346** & 6127229 & 6763210 \\ MAXSAT\_PMS\_2016 & par10 & 601 & 20 & **1019** & 1469 & 1305 \\ PROTEUS\_2014 & runtime & 4021 & 23 & **293** & 648 & 1125 \\ SAT11\_INDU & runtime & 300 & 19 & 882 & **826** & 855 \\ SAT12\_ALL & runtime & 1614 & 32 & **456** & 523 & 683 \\ SAT18\_EXP\_ALGO & runtime & 353 & 38 & **1677** & 1823 & 1822 \\ BNSL\_2016 & runtime & 1179 & 9 & **1210** & 1448 & 2030 \\ \hline \hline \end{tabular} \end{table} Table 6: Additional ASlib case studies. The Mean Performance Gap (MPG) of a portfolio of 5 algorithms is reported using 10-fold CV, with the best in bold.

The construction of the latent trait \(\theta\) involves the IRT discrimination and difficulty parameters as well as the actual performance. Therefore, selecting algorithms based on fitting splines to \(\theta\) takes into account this underlying hidden quantity uncovered by IRT that denotes the dataset difficulty spectrum. This allows us to select a good portfolio of algorithms when a diverse set of algorithms is present. This is another use of AIRT in addition to its exploratory aspect.

## 6 Conclusions

Beyond standard statistical analysis, which often hides useful insights, there are not many techniques that can be used to rigorously evaluate a portfolio of algorithms and identify their strengths and weaknesses. One such technique is the instance space analysis methodology, which can be used to visualize the strengths and weaknesses of algorithms. As the instance space incorporates both the algorithms and the test instances, computing features of test instances is an essential step in constructing an instance space. Devising suitable features of test instances that capture their intrinsic difficulties for algorithms is a significant challenge that can limit the applicability of the method. In this paper, we have taken a different approach to achieving the same goal, one that avoids the need to devise instance features. We have presented AIRT, an IRT-based algorithm evaluation method, which evaluates algorithms using only performance results. We demonstrated its usefulness on a diverse set of algorithm portfolios arising from a wide variety of problem domains. The scenarios used are taken from the ASlib repository, which contains algorithm implementations with given parameter and hyperparameter settings. We have not explored different parameter settings in this study, and this is a limitation.
Each parameter setting would give rise to a different algorithm implementation that would result in a different algorithm curve. Thus, by considering a single algorithm with different parameter settings, AIRT has the potential to select parameter settings that are advantageous for easy or difficult problems.

Recasting the IRT framework as an inverted model, AIRT focuses on evaluating algorithm attributes such as consistency, anomalousness and difficulty limit, thereby helping to broaden the understanding of algorithm behaviors and their dependence on test instances. AIRT can be used to visualize the strengths and weaknesses of algorithms in different parts of the problem space. Using the algorithms with strengths, we construct an algorithm portfolio and show that it has a low performance gap compared to other portfolios. In addition, IRT model goodness measures can be derived, showing the level of trustworthiness of the underlying IRT model. Because AIRT extends the IRT framework, it also inherits the desirable mathematical and optimality properties of the embedded maximum likelihood estimation techniques. Furthermore, the explainable nature of IRT parameters carries over to the algorithm evaluation domain.

As future research avenues, we plan to consider the role of AIRT in parameter selection, as well as alternative remappings of the IRT framework to increase understanding of the strengths and weaknesses of dataset repositories. This would provide a means to select an unbiased yet diverse collection of datasets and draw deeper insights into their ability to support meaningful conclusions about algorithm strengths and weaknesses.

### Acknowledgments

Funding was provided by the Australian Research Council through the Australian Laureate Fellowship FL140100012, and the ARC Training Centre in Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA) under grant IC200100009. The authors would like to thank Prof Rob J. Hyndman for his suggestion of the name AIRT for our method.

### Supplementary Material

The algorithm performance datasets used in this paper are found at [https://github.com/coseal/aslib_data](https://github.com/coseal/aslib_data) and the programming scripts using AIRT are available at [https://github.com/sevvandi/airt-scripts](https://github.com/sevvandi/airt-scripts).
2307.04785
Empirically Constraining the Spectra of Stellar Surface Features Using Time-Resolved Spectroscopy
Transmission spectroscopy is currently the technique best suited to study a wide range of planetary atmospheres, leveraging the filtering of a star's light by a planet's atmosphere rather than its own emission. However, as both a planet and its star contribute to the information encoded in a transmission spectrum, an accurate accounting of the stellar contribution is pivotal to enabling robust atmospheric studies. As current stellar models lack the required fidelity for such accounting, we investigate here the capability of time-resolved spectroscopy to yield high-fidelity, empirical constraints on the emission spectra of stellar surface heterogeneities (i.e., spots and faculae). Using TRAPPIST-1 as a test case, we simulate time-resolved JWST/NIRISS spectra and demonstrate that with a blind approach incorporating no physical priors, it is possible to constrain the photospheric spectrum to less than 0.5% and the spectra of stellar heterogeneities to within 10%, a precision that enables photon-limited (rather than model-limited) science. Now confident that time-resolved spectroscopy can propel the field in an era of robust high-precision transmission spectroscopy, we introduce a list of areas for future exploration to harness its full potential, including wavelength dependency of limb darkening and hybrid priors from stellar models as a means to further break the degeneracy between the position, size, and spectra of heterogeneities.
David Berardo, Julien de Wit, Benjamin V. Rackham
2023-07-10T18:00:00Z
http://arxiv.org/abs/2307.04785v2
# Empirically Constraining the Spectra of a Star's Heterogeneities From Its Rotation Lightcurve

###### Abstract

Transmission spectroscopy is currently the most powerful technique to study a wide range of planetary atmospheres, leveraging the filtering of a star's light by a planet's atmosphere rather than its own emission. However, both a planet and its star contribute to the information encoded in a transmission spectrum, and a particular challenge relates to disentangling their contributions. As measurements improve, the lack of fidelity of stellar spectral models presents a bottleneck for accurate disentanglement. Considering JWST and future high-precision spectroscopy missions, we investigate the ability to derive empirical constraints on the emission spectra of stellar surface heterogeneities (i.e., spots and faculae) using the same facility as used to acquire the transmission spectra intended to characterize a given atmosphere. Using TRAPPIST-1 as a test case, we demonstrate that it is possible to constrain the photospheric spectrum to within 0.2% and the spectra of stellar heterogeneities to within 1-5%, which will be valuable benchmarks to inform the new generation of theoretical stellar models. Long baselines of observations (\(\geq\)90% of the stellar rotation period) are necessary to ensure the photon-limited (i.e., instrument-limited) exploration of exoplanetary atmospheres via transmission spectroscopy.

## 1 Introduction

Transmission spectroscopy was the first technique introduced to study the atmospheres of worlds beyond the solar system (Seager and Sasselov, 2000; Brown et al., 2001). Today, it is still one of the most powerful techniques in this context, as it leverages the light coming from a host star rather than the light directly emitted by the planet itself, which is orders of magnitude fainter. As the field of exoplanetary science transitions in the coming decade towards the spectroscopic characterization of directly imaged exoplanets, emission spectroscopy will become the primary avenue to study planetary atmospheres. Until then, perfecting the art of transmission spectroscopy studies is a must.

Currently, the dominant bottlenecks for transmission spectroscopy are associated with imperfections in our opacity models (Niraula et al., 2022) and stellar models (Iyer and Line, 2020; Rackham et al., 2023; Rackham and de Wit, 2023). The current limitations in opacity models have been shown to result in an accuracy wall preventing constraints on most atmospheric properties beyond \(\sim\)0.5 dex for all planets but large, hot, and highly metallic ones (Niraula et al., 2023). Future efforts supporting the standardization of existing databases, and the improvement of treatments of broadening and far-wing behaviors, should mitigate the current bottleneck. Regarding stellar models, Iyer and Line (2020) showed that not accounting for stellar contamination will yield biased inferences of atmospheric properties. However, correcting for stellar contamination is challenging, as the model limitations (i.e., lack of fidelity) can yield a biased correction of the contamination via an inadequate fit of the out-of-transit spectrum. The lack of fidelity can also result in challenges in inferring the number of components present on the stellar disk (Wakeford et al., 2019; Garcia et al., 2022).
Fortunately, when stellar models with a sufficient fidelity are accessible, the degeneracy between the number of components and their covering fractions can be lifted, leading to an optimal correction of the stellar contamination (Rackham and de Wit, 2023). Sufficient fidelity is defined here as follows: a precision at least equal to the expected uncertainty associated with the out-of-transit spectra obtained for transit observations in the targeted system. This definition therefore supports returning to a regime of photon-limited studies, where instruments are used at their maximum potential. While a new generation of stellar models is being computed following the guidance of the report from NASA's Exoplanet Exploration Program Study Analysis Group 21 (Rackham et al., 2023), we investigate a possible avenue to empirically derive the emission spectra of a star's heterogeneities. Doing so would provide the community with a data-driven solution to the stellar-model challenge, i.e., benchmarks for ongoing theoretical simulations. In this paper, we present a framework leveraging a multi-wavelength stellar spectroscopic rotation curve to empirically constrain the emission spectra of its different heterogeneities. We focus our injection-retrieval test on M-dwarf stars with properties similar to those of TRAPPIST-1 (\(T_{\rm eff}=2566\) K), for which stellar contamination is expected to be the most pronounced (e.g., Rackham et al., 2018, 2019) and the most challenging to correct (e.g., Zhang et al., 2018; Wakeford et al., 2019; Garcia et al., 2022). We present in Section 2 the forward model developed to generate the synthetic, multi-wavelength observations of a heterogeneous stellar surface. In Section 3, we present the retrieval framework used to assess the extent to which the properties of individual heterogeneities (size, positions, and emission spectra) can be constrained based on a synthetic rotation light curve. In Section 4, we present the injection-retrieval tests performed and their results, including testing the effect of varying the duration and sampling of an observation relative to the stellar rotation period. In Section 5, we discuss the results of these preliminary tests and highlight future steps to improve and expand upon this initial framework. ## 2 Forward model for generating synthetic data In this section we present the forward model used to generate synthetic time- and wavelength-dependent observations of a heterogeneous stellar surface. These synthetic observations are generated using a grid-based stellar surface model, which consists of a star (described by its rotation period and rotation axis orientation) as well as a list of heterogeneities, which are each described by a latitude, longitude, radius, and temperature. ### Spectral Model For this analysis, we use the PHOENIX stellar spectral model grid1 to simulate the emission of an individual surface feature (Husser et al., 2013). These grids provide adequate coverage to describe the photospheric background of an M dwarf, as well as heterogeneities which vary by several hundred kelvin in either direction relative to the photosphere. For the stellar photosphere we use a spectral model with a temperature of 2500 K, a \(\log g\) of 5.0, and an [Fe/H] metallicity of 0 (similar to TRAPPIST-1, which has a surface temperature of \(2566\pm 26\) K, a \(\log g\) of \(5.2396\pm 0.006\) (Agol et al., 2021) and an [Fe/H] metallicity of \(0.05\pm 0.08\) (Ducrot et al., 2020)). 
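To make this step concrete, the following minimal sketch stands in simple blackbody curves for the tabulated PHOENIX spectra; the 2300/2500/2700 K temperatures match the text, while the wavelength grid and all other values are illustrative assumptions rather than the paper's actual inputs.

```python
import numpy as np

def planck_flux(wavelength_um, temperature_k):
    """Blackbody spectral radiance (arbitrary units), standing in for a
    tabulated PHOENIX model spectrum at the given effective temperature."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    lam = wavelength_um * 1e-6  # micron -> meter
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kb * temperature_k))

# NIRISS/SOSS-like wavelength coverage, 0.6-2.8 micron
wavelengths = np.linspace(0.6, 2.8, 500)

# Photosphere at 2500 K; heterogeneity components at +/- 200 K, as in the text
spectra = {T: planck_flux(wavelengths, T) for T in (2300.0, 2500.0, 2700.0)}

# Spectral contrast of a heterogeneity relative to the photosphere: this is
# the wavelength-dependent signal a rotating feature imprints on the data
contrast_hot = spectra[2700.0] / spectra[2500.0] - 1.0
contrast_cold = spectra[2300.0] / spectra[2500.0] - 1.0
```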
For heterogeneities, we alter only the temperature of the model spectrum used, since the surface gravity and metallicity are typically expected to remain constant across a stellar surface (Freeman and Bland-Hawthorn, 2002). In this way, we make the common assumption that the emission from heterogeneities resembles that of a stellar photosphere with a different effective temperature. For our analysis, we used spectral models corresponding to 2300 K and 2700 K (varying \(\pm 200\) K relative to the photosphere). Footnote 1: [https://phoenix.astro.physik.uni-goettingen.de/](https://phoenix.astro.physik.uni-goettingen.de/) For this analysis we use the PHOENIX grids of specific intensity spectra, which provide spectral information as a function of viewing angle \(\mu\), as opposed to disk-averaged intensities. When sampling from these specific intensity spectra, we take the value corresponding to \(\mu=1\) (i.e., the center of the star, normal to the observer's line of sight). We then calculate a quadratic limb-darkening profile for the stellar surface, and scale this intensity across the stellar surface, allowing us to have control over the limb darkening of the signal. We emphasize that although we use simulated models to generate the synthetic data, this does not invalidate the premise of this study to empirically retrieve stellar spectra. This is because when fitting for these spectra later on, we use no information about the input spectra whatsoever, and thus the retrieval is not biased by prior knowledge. ### Instrumental Model We consider observations made with the NIRISS Single Object Slitless Spectroscopy (SOSS) instrument (Albert et al., 2023) on JWST (Gardner et al., 2023), which has a spectral resolution of \(R\approx 700\) at 0.6-2.8 \(\mu\)m, providing an adequate compromise between resolving power and spectral coverage for such work considering the spectral energy distribution (SED) of stars, including M dwarfs (Allard et al., 2003). The spectral resolution of the PHOENIX spectra is much higher than can be observed with JWST, and so they must first be downsampled to a resolution of \(R=700\) using a Gaussian convolution filter to match the expected signal from NIRISS. After adjusting the resolution, we also bin the spectra down to a wavelength spacing of 8.8 \(\mu\)m. These are appropriate transformations in this case given that the forward model is linear, and thus high resolution is not needed (see Niraula et al. (2022) for further discussion on binning and down-sampling spectra). ### Spatial Model The stellar surface is treated as a grid in the longitudinal and latitudinal directions. Once the stellar spectra are calculated, we must then determine where on the surface each heterogeneity lies. This is done using a flood-fill technique, where we begin at the cell of the stellar surface corresponding to the heterogeneity center, and spread out from this point until we reach a cell which is too far from the central cell to be a part of the given heterogeneity. As this is done, each cell is marked as being a part of the heterogeneity and assigned the flux corresponding to its temperature at each wavelength. While the model has been optimized for a circular feature, in principle any shape can be 'painted' on the stellar surface grid, accounting for projection effects. This model is based on a similar one used in Gunther et al. (2022), which was used to model the interactions of a heterogeneous star with a debris disk. 
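As a concrete illustration of this surface model, here is a minimal sketch that paints a circular spot using a great-circle angular-distance test (a simpler criterion than the flood fill described above) and already folds in the projection and limb-darkening weights discussed next; all grid sizes, spot parameters, and limb-darkening coefficients are illustrative assumptions.

```python
import numpy as np

n_lat, n_lon = 90, 180
lat = np.linspace(-np.pi / 2, np.pi / 2, n_lat)          # cell latitudes [rad]
lon = np.linspace(-np.pi, np.pi, n_lon, endpoint=False)  # cell longitudes [rad]
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

def paint_spot(flux_map, lat0, lon0, radius, spot_flux):
    """Mark every cell within an angular `radius` of (lat0, lon0) as spot,
    using the great-circle separation (a stand-in for the flood fill)."""
    cos_sep = (np.sin(LAT) * np.sin(lat0)
               + np.cos(LAT) * np.cos(lat0) * np.cos(LON - lon0))
    flux_map[np.arccos(np.clip(cos_sep, -1.0, 1.0)) < radius] = spot_flux

flux_map = np.ones((n_lat, n_lon))       # photospheric flux normalized to 1
paint_spot(flux_map, lat0=0.3, lon0=1.0, radius=0.2, spot_flux=0.7)

# Time-independent weights for the visible hemisphere: projected area times
# a quadratic limb-darkening factor; mu = cosine of the angle to disk center
mu = np.cos(LAT) * np.cos(LON)
u1, u2 = 0.4, 0.2                        # illustrative limb-darkening terms
limb = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2
weights = np.where(mu > 0.0, mu * limb, 0.0) * np.cos(LAT)  # area ~ cos(lat)

# Rotation lightcurve: roll the flux map in longitude; the weight map is fixed
lightcurve = np.array([np.sum(np.roll(flux_map, s, axis=1) * weights)
                       for s in range(n_lon)])
lightcurve /= lightcurve.max()           # spot transits appear as dips
```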
In addition to this flux map, we also calculate maps which correspond to the projected area of a given cell, taking into account the shape of the cell as well as its normal vector relative to the observer. We also calculate a limb-darkening map. These three maps can then be multiplied together to produce a final observation map, which can be rapidly summed to measure the observed flux at a given time. In order to calculate the flux at a different time, the flux map is simply 'rolled' along the longitudinal axis, since the projected area and limb-darkening effects are constant in time. ## 3 Retrieval Framework The goal of this initial study is to demonstrate the capability to characterize arbitrary heterogeneities of a stellar surface and their contribution to the overall stellar spectrum without relying on physical models, which currently cannot provide a sufficient level of accuracy. In this work we focus in particular on heterogeneities which can be described by their size, location, and temperature. The effects of the position and size of a heterogeneity are highly non-linear, due to both their projection onto the observing plane as well as limb-darkening effects. Thus when retrieving these parameters we will employ standard Markov chain Monte Carlo (MCMC) methods in order to sample the full range of parameter space. For a given distribution of heterogeneities, however, the total spectral signal can be described as a linear combination of the stellar photosphere and the heterogeneity spectra (scaled by their relative surface areas), and thus can be solved for as a linear matrix problem, which we outline in this section. Once we have re-formulated the spectral retrieval as a linear algebra problem, we utilize singular value decomposition (SVD)3 in order to estimate the spectral signal of each component (including the photosphere). Thus the problem can be separated into a non-linear MCMC retrieval (the geometric properties of the heterogeneities) and a linear retrieval (the spectral signal of the photosphere and individual heterogeneities). Footnote 3: [https://en.wikipedia.org/wiki/Singular_value_decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) ### Linear component of retrieval model Given a set of synthetic observations, we now describe the framework used to constrain the properties of individual components (size, positions, and spectra). The total flux observed, Flux(\(\lambda\),t), at a given wavelength \(\lambda\) and time \(t\) is a linear combination of the geometric signals of all the components modulated by the spectral signal of each component and can thus be written as: \[\text{Flux}(\lambda,t)=\Lambda_{\text{phot}}(\lambda)+\sum_{i}\left[\Lambda_{ i}(\lambda)-\Lambda_{\text{phot}}(\lambda)\right]\times S_{i}(t) \tag{1}\] where \(\Lambda_{\text{phot}}(\lambda)\) is the (constant in time) spectral signal of the photosphere, \(\Lambda_{i}(\lambda)\) is the spectrum of the \(i^{th}\) heterogeneity, and \(S_{i}(t)\) is the time-varying geometric projection of a heterogeneity, which is a function of its size and position on the stellar surface, as well as any limb-darkening effects. The sum runs over the number of individual heterogeneity features. 
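Because Equation 1 is linear in the unknown spectra, all wavelength channels can be solved at once with an SVD-based least-squares routine. The following is a minimal sketch of that step under illustrative assumptions (random stand-ins for the geometric signals \(S_i(t)\) and for the true spectra):

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, n_wavelengths, n_spots = 300, 500, 2

# Stand-ins for the geometric signals S_i(t); in practice these come from
# the grid model (projected, limb-darkened spot visibilities)
geo_signals = rng.uniform(0.0, 0.05, size=(n_times, n_spots))

# Eq. (1) rearranged as Flux = (1 - sum_i S_i) * L_phot + sum_i S_i * L_i,
# so the design matrix has one column for the photosphere and one per spot
design = np.hstack([1.0 - geo_signals.sum(axis=1, keepdims=True), geo_signals])

# Synthetic observations from known spectra (an injection test)
true_spectra = np.abs(rng.normal(1.0, 0.1, size=(1 + n_spots, n_wavelengths)))
flux = design @ true_spectra + rng.normal(0.0, 1e-3, (n_times, n_wavelengths))

# SVD-based least squares recovers every component spectrum in one call:
# row 0 is the photosphere, rows 1..n_spots are the heterogeneities
recovered, *_ = np.linalg.lstsq(design, flux, rcond=None)
print("max fractional error:", np.abs(recovered / true_spectra - 1.0).max())
```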
A graphical depiction of this decomposition is shown in Figure 1. Within an MCMC framework, the linear component of the model can be estimated using SVD, allowing us to leverage rapid and robust libraries available in Python to retrieve the spectral signal of each feature in just a few milliseconds on a modern laptop computer. The benefit of this separation is that the geometric signal of any surface feature can often be estimated from a white light curve, as well as with more sophisticated techniques that analyze the frequency components of the light curves. Thus strong priors can be placed on the positions and sizes of heterogeneities, which reduces the overall time needed to run such a retrieval. ### A Note on Limb Darkening The geometric signal of a heterogeneity in the previous equations (i.e., the quantity \(S_{i}(t)\)) requires a choice of limb-darkening coefficients for the stellar surface, since it is calculated as the combination of the size of a cell and its projected area, multiplied by a limb-darkening factor. However, in general, limb darkening is an effect which depends on the temperature of the stellar surface, which is the quantity we are attempting to fit. Thus we find ourselves in a loop where the stellar spectrum is required in order to know the appropriate values of the limb-darkening coefficients, which are in turn required in order to fit for the stellar spectrum. As a result, the current fitting routine assumes that limb darkening is independent of temperature, at least within the range considered in this work (\(\pm 200\,\mathrm{K}\)). In general, limb darkening is expected to vary with temperature (Claret & Bloemen, 2011). However, since the models are generated under the same assumption, we may still assess the ability of our framework to recover injected signals. In Section 5 we briefly highlight how this may be addressed in the future and the additional characterization prospects it will allow for. ## 4 Injection-retrieval tests Given the forward model used to simulate observations described in Section 2, and the retrieval mechanism described in Section 3, we now describe a series of injection-retrieval tests used to assess the ability of the model to recover stellar surface heterogeneities. ### Fitting for Spectral Components In order to test the effectiveness of the model in retrieving spectral features of a star, we first perform a series of injection-retrieval tests in an idealized scenario in which we assume to know the number of heterogeneities, as well as their positions and sizes. Thus in this first stage we are attempting to retrieve only the spectral features of the heterogeneities and photosphere (the linear part of the retrieval), which represents a best-case scenario and effectively acts as an upper limit on the strength of the current framework. In this idealized scenario, we have removed the complex, non-linear component of fitting for the feature positions, and the problem is reduced to a linear one of disentangling the spectral contribution of each component. By employing SVD, this can be solved in just milliseconds (including the full range of time and wavelength observations), allowing rapid testing of a variety of scenarios. This can similarly represent a scenario where strong priors have been obtained for the geometric components, based on an analysis of a white lightcurve or a pre-fitting routine which places constraints on the possible heterogeneity configurations. 
We tested the model on a suite of stellar surfaces, including ones with heterogeneities hotter than the photosphere, colder than the photosphere, both, as well as anywhere from one to four individual heterogeneities. Additionally, we tested a series of single-heterogeneity models with all but one parameter being held constant, varying either the size of a heterogeneity or its latitudinal position. The full sample of surfaces considered is described in Table 1, along with the deviation from the true spectra used to simulate the observations.

Figure 1: A schematic illustrating the way in which heterogeneities on the surface of a star are modulated in both time and wavelength, and then combined to produce a final observed signal. The aim of this work is to demonstrate the ability to follow this diagram in reverse, going from multi-wavelength observations to individual component spectra based on physical models.

The results of these tests reveal that the model is able to recover the spectra of heterogeneities to sufficient precisions (i.e., better than the out-of-transit spectrum; see Section 2). For example, the precision achieved on the photospheric spectrum is \(\leq\) 0.1% vs \(\sim\)0.5% for the out-of-transit spectrum associated with transit observations in the TRAPPIST-1 system, typically based on a \(\sim\)2 hr integration. The spectra of heterogeneities are constrained at the level of 1 to 5%, depending notably on their sizes and latitudinal positions. The spectra of heterogeneities are less constrained due to their smaller covering fractions, which result in fewer photons coming from them. Their small covering fractions also mean that, while the uncertainties associated with their spectra are larger, they contribute to the total uncertainty budget for the stellar model at a level similar to that of the photosphere. For this reason, we will assess sufficient model fidelity based on the ratio of the uncertainty associated with the retrieved photospheric spectrum and the one associated with the out-of-transit spectrum. ### Retrieving Full Heterogeneities In order to fully test the ability of the model to characterize a heterogeneous stellar surface, we also run a set of retrievals where we attempt to estimate not only the spectral signature of each component, but also their sizes and positions on the stellar surface. For a fit with \(N\) heterogeneities, we thus have \(3N+2\) parameters: a size, latitude, and longitude for each heterogeneity, as well as two limb-darkening parameters for a quadratic limb-darkening law. As described in Section 3, we run an MCMC retrieval within which we linearly retrieve the spectral signals of each component using SVD. The results of this fitting process highlight the inherent difficulty in constraining the position and size of a heterogeneity, which outlines clear areas for future improvement. The longitude of a spot is typically reliably constrained to within a few degrees of the true value, due to the high time-sampling resolution. The latitude, however, is often much less constrained, with the model being able to differentiate only between equatorial and polar spots. Additionally, the size of a spot is typically only constrained to within 50% of its true value, although the model is capable of excluding extremely large or small/non-existent spots. 
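To illustrate how the non-linear geometric retrieval can wrap the linear spectral solve, here is a minimal sketch using the emcee ensemble sampler; the toy forward_signals helper (a crude small-spot visibility model), the parameter values, and the walker settings are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import emcee  # ensemble MCMC sampler

rng = np.random.default_rng(1)
n_spots, sigma = 1, 1e-3
times = np.linspace(0.0, 1.0, 200)       # one rotation, in phase units

def forward_signals(geom, u1, u2, t):
    """Toy stand-in for the grid model: limb-darkened, projected visibility
    of small circular spots parameterized by (latitude, longitude, radius)."""
    S = []
    for lat0, lon0, rad in geom:
        mu = np.clip(np.cos(lat0) * np.cos(lon0 + 2.0 * np.pi * t), 0.0, None)
        limb = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2
        S.append(0.5 * rad**2 * mu * limb)   # small-spot approximation
    return np.stack(S, axis=1)

# Synthetic data from one known spot (an injection test)
S_true = forward_signals([[0.3, 1.0, 0.2]], 0.4, 0.2, times)
spectra_true = np.vstack([np.ones(50), 0.7 * np.ones(50)])  # photosphere, spot
design_true = np.hstack([1.0 - S_true.sum(1, keepdims=True), S_true])
flux = design_true @ spectra_true + rng.normal(0.0, sigma, (times.size, 50))

def log_probability(theta):
    """3N + 2 parameters: (lat, lon, radius) per spot plus (u1, u2)."""
    geom, (u1, u2) = theta[:-2].reshape(n_spots, 3), theta[-2:]
    if not (0.0 <= u1 <= 1.0 and 0.0 <= u2 <= 1.0 and np.all(np.abs(theta) < 4)):
        return -np.inf                                       # crude box priors
    S = forward_signals(geom, u1, u2, times)
    design = np.hstack([1.0 - S.sum(1, keepdims=True), S])
    spectra, *_ = np.linalg.lstsq(design, flux, rcond=None)  # inner linear step
    return -0.5 * np.sum((flux - design @ spectra) ** 2) / sigma**2

ndim, nwalkers = 3 * n_spots + 2, 16
p0 = np.array([0.2, 0.8, 0.15, 0.3, 0.3]) + 0.01 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
sampler.run_mcmc(p0, 2000)
```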
In Section 5 we outline how additional prior information may be used to help further constrain the size of a feature, based on global physical constraints on the overall scaling of its spectrum (leveraging the trade-off between feature size and spectral amplitude). A subset of the models from the previous section were tested, where we fixed the number of heterogeneities to the true value. As an aside, we ran fits on the white lightcurve for each model, where we sequentially added in additional features. In most cases, the true number of components was found to best describe the data, while adding additional components did not improve the fit and resulted in a worse BIC (Bayesian Information Criterion) value. In this first run, heterogeneities were allowed to occur anywhere on the stellar surface, and in some cases this led to degeneracies where two heterogeneities would overlap and contribute to the overall spectrum jointly. Additionally, we found that without additional information, the latitudinal position of a heterogeneity was difficult to constrain. These issues highlight clear areas for improvement for future work, which we discuss further in Section 5. Despite issues with constraining the geometric properties of spot features, in most cases the model was still able to recover the photospheric signal to within 1%. We show the results of an example fit in Figure 2, comparing the individual retrieved component spectra to the spectra used to generate the synthetic observations. ### Varying Observation Baseline In the previous sections, retrieval was performed using simulated observations covering an entire rotation period of the host star. However, in most cases a strong argument must be made to justify the use of high-demand facilities to continuously stare at a single target. In this section we investigate the effect of observing only a portion of the full rotation lightcurve on the ability of the framework to accurately measure the photospheric spectrum of a star. Given the time-variability of a heterogeneity signal, there exists a strong correlation between the duration of an observation, the phase offset relative to a heterogeneity's longitude, and the retrieved uncertainty on the stellar photosphere. To this end, we first simulate a heterogeneous stellar surface as in the previous section, with anywhere from 1-4 heterogeneities which may be colder or hotter than the background photosphere. From this model, we then generate a set of synthetic observations again as described in the previous sections. For each observation, we choose two parameters: (1) an offset for the longitudinal rotation of the star relative to the observer, and (2) a viewing window, defined as a fraction from 0-1 of the stellar rotation period. Selecting a value of one represents the analysis done in the previous section, for which the entire stellar rotation was supplied to the fitting routine. These two values define a time series, for which we generate the base-vector signals attributed to each heterogeneity on the stellar surface. We then use SVD to rapidly fit the linear component of the model. As in the previous section, we can then compare the retrieved spectrum to the injected spectrum for each component, the results of which are shown in Figure 3. The various curves represent different observation durations. For a given observation duration, the residual signal can vary strongly as a function of stellar rotation phase. This is more pronounced for the shorter durations. 
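A minimal sketch of this duration/offset scan, reusing design, flux, and true_spectra from the linear-retrieval sketch above; since those toy signals are random, the printed numbers only demonstrate the mechanics, and the phase dependence described here emerges only with realistic \(S_i(t)\):

```python
import numpy as np

def photosphere_residual(design, flux, true_phot, start, frac):
    """Refit using only the samples inside a viewing window covering `frac`
    of the rotation, starting at index `start` (wrapping around), and return
    the median fractional error on the recovered photospheric spectrum."""
    n_times = design.shape[0]
    inside = ((np.arange(n_times) - start) % n_times) < frac * n_times
    recovered, *_ = np.linalg.lstsq(design[inside], flux[inside], rcond=None)
    return np.median(np.abs(recovered[0] / true_phot - 1.0))

for frac in (0.1, 0.4, 0.9, 1.0):    # viewing window, fraction of the rotation
    res = [photosphere_residual(design, flux, true_spectra[0], start, frac)
           for start in range(0, design.shape[0], 10)]   # scan phase offsets
    print(f"window {frac:.0%}: best {min(res):.1e}, worst {max(res):.1e}")
```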
For example, the residual for an observation covering 0.1 of the stellar rotation can vary from approximately 1% to over 100%. We attribute this variation to the unequal ability of each phase to contribute a set of component spectra descriptive of the entire photosphere. In other words, when fewer or no heterogeneities are present, one cannot extract the necessary information to model the photosphere at a phase showing many heterogeneities. Thus, the shorter-duration observations show both larger residuals overall and larger variability in residuals with rotation phase. For this reason, we find that only a phase coverage of \(\geq 90\%\) can reliably constrain the stellar spectra to within the OOT uncertainty (0.5%). Indeed, while the targeted precision of 0.5% may be achieved for some configurations with only a 40% phase coverage, it is not achieved for all (average precision \(\sim\)1%). ## 5 Discussion & Future Steps This work represents the first steps towards building a library of empirical emission spectra for stellar surface heterogeneities. While similar in scope to the work of Kesseli et al. (2017), which compiled a library of empirical spectra for various stellar types, an important distinction resides in that the spectra being measured are not for disk-integrated features, but rather for 'pure' basis components which may be combined with rotational geometry in order to produce accurate spectra for stars with arbitrarily complex surface features. Such a library will not only enable the robust correction of the TLS effect based on out-of-transit measurements, it will also provide important benchmarks for the next generation of theoretical stellar models (e.g., Witzke et al., 2021; Rackham et al., 2023), and further inform key relationships between the properties of stars and those of heterogeneities, such as between heterogeneity temperature and size, photospheric temperature, and atomic line-depth ratios. Indeed, we are able to constrain photospheric spectra at the 0.1% level, and the spectra of heterogeneities typically at 1-5%, while spectra with precisions of \(\sim 1\%\) (S/N \(\sim 100\)) are commonly used to constrain the fundamental physical parameters of exoplanet host stars (e.g., Wells et al., 2021; Delrez et al., 2022; Barkaoui et al., 2023; Dransfield et al., 2023; Pozuelos et al., 2023). In terms of absolute flux calibrations, for example, the goal for the X-SHOOTER instrument is \(\leq 10\%\) (Schonebeck et al., 2014), while the eventual goal of the JWST calibration program is 1% accuracy for each observing mode (Gordon et al., 2022). Thus, constraints on component spectra from this technique are on par with current precisions available for integrated disk spectra and will be limited ultimately by the overall precision and accuracy limitations of JWST observations themselves, providing valuable data-driven benchmarks to inform the next generation of models.

Figure 2: Empirical constraints on the emission spectra (blue) of four surface components when performing a full retrieval on the size, position, and spectrum. Synthetic spectra are in orange. Residuals are on the bottom panels. The first panel represents the photosphere, for which a 2500 K effective temperature was used. Our framework enables retrieving both the geometric features of heterogeneities as well as their individual spectral contributions, without relying on any prior information from spectra generated by physical models. 
In the rest of this discussion, we highlight a series of possible improvements to the framework introduced here. ### Series of Snapshots for Slow Rotators Covering 90% of a stellar rotation of TRAPPIST-1 would correspond to a \(\sim\)72-hr stare at the system, which is both feasible and reasonable for such a high-priority target. Doing so for slow rotators that may have periods up to 30 times that of TRAPPIST-1, however, would be impractical. For such hosts, we show that a continuous series of small stares ("snapshots") could be used instead (see Figure 3). In order to reach the targeted precision, we find that snapshots need a minimum duration equal to the intended OOT integration and sufficient occurrences to sample the time-varying contribution of the heterogeneities. As seen in the bottom panels of Figure 3, the duration and number of snapshots required to achieve a given SNR are related, offering multiple observational options. For a 30-day rotation period, a sufficient precision is achieved for, e.g., 40 2-hr snapshots, 20 4-hr snapshots, 10 8-hr snapshots, or 5 16-hr snapshots. These options correspond to a 10\(\times\) lower observation requirement than when considering a long continuous stare. Of the four options highlighted above, we expect that the latter will be favored when accounting for practical considerations (e.g., overheads and slew time).

Figure 3: **Top Left**: Median stellar photospheric spectral precision retrieved vs. observation duration for durations ranging from a tenth of the stellar rotation period to a full rotational cycle (colored curves). The model analysed here contains four heterogeneities. Additionally, we show the effect of varying the offset during the rotational cycle, indicated by the x-axis. The dashed line represents the uncertainty on the out-of-transit spectrum of a previous NIRISS observation of the TRAPPIST-1 system ("OOT uncertainty"). **Top Right**: White lightcurve showing different sampling windows for two cases highlighted in the bottom panel. **Bottom**: Heat-map showing the median residual on the photosphere spectrum when observing in snapshots (relative to the OOT uncertainty) for three different rotation periods. The percentage in each cell represents the total observation time relative to the rotation period of the star. The snapshot duration is relative to a 2-hr duration that would be typically observed OOT for a transit of planets around TRAPPIST-1.

### Wavelength-dependent Limb Darkening The models described in this work used limb-darkening laws which did not change as a function of temperature or wavelength. While this represents an important first step in estimating the capability of this framework, future developments should account for such dependencies, which could notably be used to break the currently observed degeneracies between the latitude and size of a heterogeneity and thus better constrain the latitudinal distribution of heterogeneities. ### Including Prior Knowledge From Model Spectra The present proof-of-concept is performed without any prior knowledge regarding stellar physics. Future works could explore how relevant priors could be added to the framework without introducing biases from complete stellar models. An example of such priors would be a parametrization of the relative flux expected between wavelength bins associated with the features of a given molecule. While absolute flux values may be biased, relationships between wavelengths may be robust enough to provide additional constraints. 
This information could be extracted using Gaussian processes in order to measure correlations between different wavelengths (Perger et al., 2023). Constraining the spectra in this way would enable tighter constraints on the size and latitude of a given feature, which are currently degenerate with the overall amplitude of its spectrum. Additionally, activity indicators provided by high-precision spectroscopy could be included to help solve the inverse problem of reconstructing active regions on the stellar surface (Mallonn et al., 2018; Rosich et al., 2020). ### Correcting for Stellar Contamination at Different Epochs The ultimate goal of this work is to generate a library of empirically retrieved spectra for the heterogeneities of a given star in order to support the robust correction of in-transit stellar contamination at any past and future epochs. The feasibility of this approach is supported by the following. First, heterogeneities of a given star have been shown to have consistent properties. For example, molecular-band modeling of echelle spectra of DM UMa suggests a spot temperature of \(3570\pm 100\) K during an observing campaign in 1995, with filling factors ranging from \(0.25\pm 0.08\) to \(0.30\pm 0.10\) (O'Neal et al., 1998). Returning to the same star during six nights in 1998, a later analysis found a spot temperature of \(3450\pm 120\) K and filling factors ranging from \(0.28\pm 0.06\) to \(0.42\pm 0.05\) (O'Neal et al., 2004). Second, the properties of heterogeneities appear to be correlated, making them easier to pin down. Starspot temperatures show a clear dependence on photospheric temperature, based on Doppler imaging, modeling of molecular bands, and atomic line-depth ratios (Berdyugina, 2005). Therefore, while a heterogeneity's filling factor surely evolves over a stellar activity cycle, heterogeneity temperatures, and thus spectra, are a static characteristic of a given star, supporting our proposition of their relevance across epochs. In other words, while a series of improvements to this framework can (and should) be made in the future, the present theoretical proof-of-concept suffices to move towards a practical application with JWST data as a next step. Such data would also usefully inform the aforementioned series of improvements (e.g., empirical wavelength- and temperature-dependencies of the limb darkening). We thus look forward to an on-sky validation and further development of this framework in the near future to enable the robust atmospheric characterization of planets whose spectra would otherwise stay contaminated. ## 6 Acknowledgements We thank Elsa Ducrot and the Pandora Team for helpful discussions regarding this project. B.V.R. thanks the Heising-Simons Foundation for support. This material is based upon work supported by the National Aeronautics and Space Administration under Agreement No. 80NSSC21K0593 for the program "Alien Earths". The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate.
2308.14313
Three-dimensional flat Landau levels in an inhomogeneous acoustic crystal
When electrons moving in two dimensions (2D) are subjected to a strong uniform magnetic field, they form flat bands called Landau levels, which are the basis for the quantum Hall effect. Landau levels can also arise from pseudomagnetic fields (PMFs) induced by lattice distortions; for example, mechanically straining graphene causes its Dirac quasiparticles to form a characteristic set of unequally-spaced Landau levels, including a zeroth Landau level. In three-dimensional (3D) systems, there has thus far been no experimental demonstration of Landau levels or any other type of flat band. For instance, applying a uniform magnetic field to materials hosting Weyl quasiparticles, the 3D generalizations of Dirac quasiparticles, yields bands that are non-flat in the direction of the field. Here, we report on the experimental realization of a flat 3D Landau level in an acoustic crystal. Starting from a lattice whose bandstructure exhibits a nodal ring, we design an inhomogeneous distortion corresponding to a specific pseudomagnetic vector potential (PVP) that causes the nodal ring states to break up into Landau levels, with a zeroth Landau level that is flat along all three directions. These findings point to the possibility of using nodal ring materials to generate 3D flat bands, to access strong interactions and other interesting physical regimes in 3D.
Zheyu Cheng, Yi-jun Guan, Haoran Xue, Yong Ge, Ding Jia, Yang Long, Shou-qi Yuan, Hong-xiang Sun, Yidong Chong, Baile Zhang
2023-08-28T05:34:44Z
http://arxiv.org/abs/2308.14313v1
# Three-dimensional flat Landau levels in an inhomogeneous acoustic crystal ###### Abstract **When electrons moving in two dimensions (2D) are subjected to a strong uniform magnetic field, they form flat bands called Landau levels, which are the basis for the quantum Hall effect. Landau levels can also arise from pseudomagnetic fields (PMFs) induced by lattice distortions; for example, mechanically straining graphene causes its Dirac quasiparticles to form a characteristic set of unequally-spaced Landau levels, including a zeroth Landau level. In three-dimensional (3D) systems, there has thus far been no experimental demonstration of Landau levels or any other type of flat band. For instance, applying a uniform magnetic field to materials hosting Weyl quasiparticles, the 3D generalizations of Dirac quasiparticles, yields bands that are non-flat in the direction of the field. Here, we report on the experimental realization of a flat 3D Landau level in an acoustic crystal. Starting from a lattice whose bandstructure exhibits a nodal ring, we design an inhomogeneous distortion corresponding to a specific pseudomagnetic vector potential (PVP) that causes the nodal ring states to break up into Landau levels, with a zeroth Landau level that is flat along all three directions. These findings point to the possibility of using nodal ring materials to generate 3D flat bands, to access strong interactions and other interesting physical regimes in 3D.** Landau levels (LLs) first arose in Landau's 1930 derivation of the magnetic susceptibility of metals, based on a quantum mechanical model of nonrelativistic electrons in a uniform magnetic field [1]. If the electrons are constrained to the 2D plane perpendicular to the magnetic field, the LLs form a set of equally-spaced flat energy bands, independent of the in-plane momentum. Such 2D LLs were subsequently found to exhibit quantized Hall conductance (the integer quantum Hall effect) due to their nontrivial band topology [2; 3]. Other 2D models host different types of LLs; for example, particles governed by a 2D Dirac equation (such as electrons near the Dirac points of graphene), when subjected to a uniform magnetic field, produce 2D LLs that are flat but unequally spaced in energy, with a zeroth Landau level (0LL) at zero energy [4; 5]. Flat bands such as 2D LLs are of broad interest in multiple fields of physics since their high density of states is conducive to accessing strong-interaction regimes [6; 7; 8; 9; 10; 11], such as strong inter-electron interactions in condensed matter systems, which give rise to phenomena like the fractional quantum Hall effect [6; 7; 12; 13], and strong light-matter coupling in optoelectronic systems [14; 15]. Although LLs are not the only way to achieve flat bands, they are attractive because of their rich physics and relative accessibility. Aside from using real magnetic fields, LLs can also arise from PMFs induced by lattice distortions, without breaking time-reversal invariance [16]. This is achievable in electronic materials through strain engineering [16; 17; 18; 19] or inter-layer twisting [13], and in synthetic metamaterials like photonic or acoustic crystals through structural engineering [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. PMFs are highly tunable and can reach much higher effective field strengths than real magnetic fields. 
In 3D, flat bands are challenging to realize, whether via LLs or some other mechanism; to our knowledge, there is no prior experimental demonstration in any truly 3D structure. For example, stacking 2D quantum Hall systems turns the LLs into 3D bands that are non-flat along the stacking direction, so long as there is nonzero coupling between layers (similar to the original Landau model) [33]. Likewise, if we generalize 2D Dirac particles to Weyl particles in 3D, applying a uniform magnetic field produces chiral LLs that can propagate freely along the field direction [34; 35; 36]. In this work, we design and experimentally implement a 3D lattice exhibiting LLs that are flat in all three directions. This is accomplished with an acoustic crystal (a synthetic metamaterial through which classical sound waves propagate) that incorporates an inhomogeneous structural distortion. In the absence of the distortion, the band structure contains a circular nodal ring (i.e., a ring in momentum space along which two bands touch) [37, 38].

Figure 1: **Pseudomagnetic field induced Landau levels in 3D nodal ring systems.** **a**, Illustration of the pseudomagnetic vector potential (yellow arrows) at different positions of the nodal ring (blue circle). **b**, Under the pseudomagnetic vector potential shown in **a**, the nodal ring splits into 3D flat Landau levels.

The introduction of the distortion generates a PVP pointing radially from the nodal ring's center in momentum space, as well as varying in real space (Fig. 1a). The nodal ring spectrum hence splits into 3D flat bands (Fig. 1b), in a manner analogous to the formation of 2D LLs from Dirac points [16]. This 3D flat band spectrum has potential uses for enhancing nonlinearities and accessing interesting 3D phenomena such as inverse Anderson localization [39]. The acoustic crystal design might also help guide the development of solid-state materials hosting 3D LLs, which could access novel correlated electron phases not found in lower-dimensional flat band systems [40]. In PMF engineering, as performed in strained graphene [16] and related materials [18, 19, 24, 25], a lattice distortion shifts a bandstructure's nodal points (e.g., Dirac points) in momentum space, which is analogous to applying a vector potential \(\mathbf{A}\). For instance, a slowly-varying (compared to the lattice constant) distortion can implement \(\mathbf{A}=Bx_{3}\hat{\mathbf{e}}_{1}\) (where \(x_{1,2,3}\) denotes position coordinates and \(\hat{\mathbf{e}}_{1,2,3}\) the unit vectors), corresponding to a uniform PMF \(\nabla\times\mathbf{A}=B\hat{\mathbf{e}}_{2}\). A similar manipulation can be applied to nodal lines, rather than nodal points [40, 41, 42]. Consider the continuum Hamiltonian [43] \[H\left(\mathbf{k}\right)=\frac{1}{2m_{p}}\left(k_{p}^{2}-k_{0}^{2}\right) \sigma_{1}+v_{3}k_{3}\sigma_{2}, \tag{1}\] where \(\mathbf{k}=(k_{1},k_{2},k_{3})\) is the 3D momentum vector, \(\sigma_{1,2}\) are Pauli matrices, \(k_{p}^{2}=k_{1}^{2}+k_{2}^{2}\), and \(m_{p},k_{0},v_{3}\) are positive real parameters. This hosts a nodal ring at \(k_{p}=k_{0}\), \(k_{3}=0\), with energy \(E=0\) [43]. Now suppose we impose a PVP \[\mathbf{A}\left(k_{1},k_{2},x_{3}\right)=B_{0}x_{3}\hat{\mathbf{e}}_{p}, \tag{2}\] where \(\hat{\mathbf{e}}_{p}=k_{p}^{-1}(k_{1},k_{2},0)\) is the radial unit vector in the plane of the nodal ring. If we treat \(x_{3}\) as a constant, the Peierls substitution \(\mathbf{k}\rightarrow\mathbf{k}+\mathbf{A}\) shifts the nodal ring's radius to \(k_{0}^{\prime}=k_{0}-B_{0}x_{3}\). 
For a slow variation, \(x_{3}=-i\partial/\partial k_{3}\), we can expand \(H\) close to the nodal ring (i.e., \(\left|k_{p}-k_{0}^{\prime}\right|\ll k_{0}^{\prime}\)) to obtain \[H\left(\mathbf{k}\right)\approx\frac{k_{0}}{m_{p}}\left(k_{p}+B_{0}x_{3}-k_{0 }\right)\sigma_{1}+v_{3}k_{3}\sigma_{2}. \tag{3}\] This is a 2D Dirac equation, based on coordinates \(\rho\) and \(x_{3}\), with a uniform PMF. Its spectrum is \(E_{n}=\text{sgn}(n)\omega_{c}\sqrt{|n|}\), where \(\omega_{c}=\sqrt{2v_{3}B_{0}k_{0}/m_{p}}\) and \(n=0,\pm 1,\pm 2,\dots\) (see Supplementary Information). Each LL is flat along \(k_{p}\) and \(k_{3}\), as well as along the nodal ring plane's azimuthal coordinate (on which \(H\) does not depend). Following our scheme, the key to realizing 3D LLs is to have a bandstructure with a circular nodal ring whose radius can be parametrically varied without losing its circularity. Such a situation arises in a tight-binding model of an anisotropic diamond lattice [44].

Figure 2: **3D Landau levels in an inhomogeneous anisotropic diamond lattice.** **a**, Cubic cell of the anisotropic diamond lattice. The cubic unit cell has side length \(a\). Red and blue bonds denote nearest-neighbor couplings \(t\) and \(t^{\prime}=1\), respectively, and nearest-neighbor sites are separated by the vectors \(\mathbf{\delta}_{i}(i=1,2,3,4)\). The Cartesian coordinate axes are \(x_{i}\) (\(i=1,2,3\)), such that \(x_{3}\) is parallel to \(-\mathbf{\delta}_{1}\), and \(x_{1}\) is parallel to \(-\mathbf{\delta}_{3}+\mathbf{\delta}_{4}\). **b**, Schematic of the first Brillouin zone. The reciprocal lattice vectors \(k_{i}\) (\(i=1,2,3\)) are oriented along \(LK\), \(LW\) and \(\Gamma L\) respectively. **c**, Shapes of the nodal ring for various \(t\). **d**, Projections of the nodal ring onto the \(k_{1}\)-\(k_{2}\) plane, for the values of \(t\) used in **c**. **e-f**, Local density of states in the \(k_{2}\) direction for \(B=0\) (**e**) and \(B=0.0073a^{-2}\) (**f**), calculated using a 600-site chain. Solid white lines in **f** represent the analytically predicted Landau levels. **g**, Wavefunction amplitude of the zeroth Landau level at \((k_{1},k_{2})=(0,0.70\pi/a)\).

As shown in Fig. 2a, the cubic unit cell has period \(a\), there are two sublattices with one \(s\) orbital per site, and the nearest-neighbor couplings are \(t\) (red bonds) and \(t^{\prime}\) (blue bonds). The momentum-space lattice Hamiltonian is \[H(\mathbf{k})=\left(\begin{array}{cc}0&te^{i\mathbf{k}\cdot\mathbf{\delta_{ 1}}}+t^{\prime}\sum_{i=2}^{4}e^{i\mathbf{k}\cdot\mathbf{\delta_{i}}}\\ te^{-i\mathbf{k}\cdot\mathbf{\delta_{1}}}+t^{\prime}\sum_{i=2}^{4}e^{-i \mathbf{k}\cdot\mathbf{\delta_{i}}}&0\end{array}\right), \tag{4}\] where \(\mathbf{\delta_{1}},\dots,\mathbf{\delta_{4}}\) are the nearest-neighbor displacements shown in Fig. 2a (see Supplementary Information). The first Brillouin zone is depicted in Fig. 2b. Hereafter, we set \(t^{\prime}=1\) for convenience. This lattice is known to host a nodal ring when \(1<t<3\) [44], whereas for \(t>3\) it is a higher-order topological insulator [45, 46]. In the former regime, the nodal ring occurs at \(E=0\) and \[K_{1}^{2}+K_{2}^{2} =\left(\frac{2\sqrt{2}}{a}\sqrt{3-t}\right)^{2}, \tag{5}\] \[K_{3} =\sqrt{3}\frac{\pi}{a}, \tag{6}\] where \((K_{1},K_{2},K_{3})\) is the position on the nodal ring. The nodal ring forms a circle in momentum space (see Supplementary Information Fig. S1, and Figs. 2c-2d). Crucially, its radius is determined solely by \(t\). 
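As a quick numerical check of this low-energy picture, the sketch below discretizes Eq. (3) (with \(k_{3}\rightarrow-i\partial/\partial x_{3}\)) on a finite chain and compares the lowest eigenvalues to \(E_{n}=\text{sgn}(n)\sqrt{|n|}\omega_{c}\); all parameter values are illustrative, a small Wilson-type term is added purely as a standard numerical device to suppress the lattice doubler, and the numerical levels are expected to match the analytic ones only to within a few percent.

```python
import numpy as np

# Illustrative parameters for Eq. (3); omega_c = sqrt(2*v3*B0*k0/m_p) = 0.4
k0, m_p, v3, B0 = 1.0, 1.0, 1.0, 0.08
n, dx = 280, 0.5                           # x3 chain wide enough to confine the LLs
x3 = (np.arange(n) - n / 2) * dx

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

shift = np.diag(np.ones(n - 1), 1)
K3 = -1j * (shift - shift.T) / (2 * dx)    # k3 -> -i d/dx3 (central difference)
wilson = (2 * np.eye(n) - shift - shift.T) / (2 * dx)  # suppresses the doubler

def levels_near_zero(k_p, n_levels=8):
    mass = (k0 / m_p) * (k_p + B0 * x3 - k0)           # position-dependent mass
    H = np.kron(np.diag(mass) + v3 * wilson, s1) + v3 * np.kron(K3, s2)
    return np.sort(np.abs(np.linalg.eigvalsh(H)))[:n_levels]

omega_c = np.sqrt(2.0 * v3 * B0 * k0 / m_p)
analytic = [omega_c * np.sqrt(m) for m in (0, 1, 1, 2, 2, 3, 3, 4)]
print("analytic :", np.round(analytic, 3))  # nonzero levels come in +/- pairs
print("numerical:", np.round(levels_near_zero(k_p=k0), 3))
# Repeating the diagonalization for other k_p near k0 leaves the zeroth level
# pinned at E = 0, i.e., the flat band discussed in the main text.
```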
Although nodal lines of any shape can be written down in continuum models [47], circular nodal rings are difficult to realize in practice: in most cases, nodal-line semimetals host either open lines [38] or closed rings of other shapes [38, 48, 49, 50, 51]. To generate the 3D LLs, we modulate \(t\) along the spatial axis \(x_{3}\) perpendicular to the plane of the ring, so that the nodal ring's radius increases linearly with \(x_{3}\). This leads to the gauge field (see Supplementary Information): \[\mathbf{A}\left(x_{3},\phi\right)=Bx_{3}\left(\begin{array}{c}\cos\phi\\ \sin\phi\\ 0\end{array}\right), \tag{7}\] with parameter \(B=0.4\sqrt{3}\pi/Na^{2}\), which controls the magnitude of the PVP. Here, \(N\) is the number of unit cells along the \(x_{3}\) direction, and \(\phi\) is the azimuthal angle in the \(k_{1}\)-\(k_{2}\) plane. Such a PVP splits the original nodal degeneracy into discrete LLs described by \(E_{n}=\mathrm{sgn}\left(n\right)\sqrt{|n|}\omega_{c}\), where \(\omega_{c}=v\sqrt{2B}\) is the cyclotron frequency (see Figs. 2e-2f). While the nonzero LLs are not ideally flat due to the \(\mathbf{k}\)-dependence of the group velocity \(v\), the zeroth LL is exactly dispersionless, which points to a novel mechanism for generating flat bands in 3D systems. A numerically calculated profile of the zeroth LL is plotted in Fig. 2g; its localization in the bulk is well captured by the low-energy theory (see Supplementary Information).

Next, we design an acoustic diamond lattice to realize the above phenomenon. Figure 3a shows the acoustic unit cell, which consists of two sphere cavities connected by cylindrical tubes with radii \(r_{1}\) or \(r_{0}\). The whole structure is filled with air and covered by hard walls. Here, the two sphere cavities act as two sublattices (denoted "A" and "B"; see Fig. 3c) and the cylindrical tubes control the coupling strengths. We define a dimensionless parameter \(\xi=r_{1}/r_{0}\), which describes the anisotropy of the couplings. Similar to the tight-binding model, this acoustic lattice hosts a nearly circular nodal ring in its band structure. We numerically find that the square of the radius of the nodal ring is controlled by the single parameter \(\xi\), with which it scales linearly (see Fig. 3b). Moreover, for different values of \(\xi\), the frequency of the nodal ring remains almost unchanged (see the inset to Fig. 3b). With these convenient properties, we can straightforwardly engineer PMFs in this acoustic lattice. The supercell of the designed structure is schematically illustrated in Fig. 3c, where \(\xi\) gradually varies along the \(x_{3}\) direction. This variation makes the radius of the nodal ring linearly dependent on the spatial coordinate, similar to the case in the tight-binding model. We cut part of the top and bottom cavities to tune the on-site potential. After precisely controlling \(h_{\mathrm{top}}\) and \(h_{\mathrm{bottom}}\), the drumhead surface state is almost dispersionless, which is advantageous for the drumhead-state measurement. We numerically test our design using a lattice with 300 layers along the \(x_{3}\) direction. Figure 3d plots the frequencies of the \(n=\{-1,0,1\}\) LLs under different PMFs. As can be seen, the spacings between the LLs follow the theoretical curve \(E_{n}=\mathrm{sgn}\left(n\right)\sqrt{|n|}\omega_{c}\) well. Figure 3e shows the dispersion along the \(k_{2}\) direction under \(B=0.0073a^{-2}\), where the acoustic LLs are clearly visible. Notably, the zeroth LL is exponentially localized in the bulk and is smoothly connected to the surface modes (see Fig. 3f). All these numerical results are consistent with the low-energy theory and the tight-binding calculations.

Figure 3: **Pseudomagnetic fields and Landau levels in an acoustic nodal-ring crystal.** **a**, Unit cell of the acoustic lattice, consisting of two sphere cavities connected by cylindrical tubes. The radii of the tubes are \(r_{1}=\xi r_{0}\) and \(r_{0}\), respectively. The radius of the sphere cavity is \(R\). **b**, Plot of the square of the nodal ring's radius \(K_{p}=\sqrt{K_{1}^{2}+K_{2}^{2}}\) against the geometry parameter \(\xi\) for different polar angles \(\phi\). Blue markers and the red line represent the data and the linear fit, respectively. The lower inset displays the nodal ring's frequency variation when \(\xi\) changes. **c**, Schematic of a 12-layer inhomogeneous acoustic lattice. The value of \(\xi\) gradually changes along the \(x_{3}\) direction, which leads to the ring shrinking and induces a pseudomagnetic field. To realize flat drumhead surface states, the top and bottom resonators are cut by \(h_{\text{top}}\) and \(h_{\text{bottom}}\), respectively. **d**, Eigenfrequencies of the first three Landau levels at \((k_{1},k_{2})=(0,0.70\pi/a)\) for acoustic lattices under different pseudomagnetic fields. The simulation results (blue dots) are well predicted by the theoretical model (red curves). **e**, Dispersion along \(k_{2}\) for an inhomogeneous acoustic lattice with 300 layers and \(B=0.0073a^{-2}\). Black lines represent analytically predicted Landau levels. **f**, Pressure amplitude distributions for the four eigenmodes labelled in **e**. Both \(s_{1}\) and \(s_{2}\) are doubly degenerate. One eigenmode localizes at the top surface, whereas the other moves from the bottom to the bulk as \(k_{2}\) increases. The plots only display the chain's top, middle, and bottom parts and omit other regions where the sound pressure is negligible.

To observe the LLs experimentally, we fabricate three 12-layer samples, with stronger PMFs \(B=0.27a^{-2}\) and \(B=0.18a^{-2}\) for samples 1 and 2 and \(B=0\) for sample 3, as shown in Figs. 4a-4c.

Figure 4: **Experimental detection of the acoustic Landau levels.** **a**, The top view of the sample, with \(11\times 11\) sites in the \(x_{1}\)-\(x_{2}\) plane and 12 layers along the \(x_{3}\) direction, with \(B=0.27a^{-2}\), \(B=0.18a^{-2}\), and \(B=0\) for samples 1, 2, and 3, respectively. **b**, A photo of the sample. Sphere cavities are connected by tubes. **c**, A photo of the experimental setup. The probe is mounted on the mechanical arm, which is controlled by the stepping motor. **d**, Measured acoustic pressure spectra at the same bulk site for samples 1 (blue), 2 (red), and 3 (cyan). The black arrow indicates the frequency of the \(n=0\) Landau level for samples 1 and 2. The blue (red) arrows indicate the frequencies of the \(n=\{-1,1\}\) Landau levels for sample 1 (2). **e**, Measured acoustic pressure distributions for the \(n=\{-1,0,1\}\) Landau levels and two gap frequencies in sample 1. The radii of the blue spheres are proportional to the acoustic pressure. **f**, Measured acoustic pressure distributions for the \(n=\{-1,0,1\}\) Landau levels and two gap frequencies in sample 2. The radii of the red spheres are proportional to the acoustic pressure. **g**, Measured acoustic pressure distributions at five frequencies in sample 3. The radii of the cyan spheres are proportional to the acoustic pressure. The green star denotes the position of the sound source in **e,f,g**.
Figure 4a shows the top layer of the sample. Figure 4c shows the experimental setup. The mechanical arm is controlled by the stepping motor, which ensures that the pressure field is measured at the correct position. The strong PMF leads to LLs with spacings wide enough to be measured in a simple pump-and-probe experiment (see Methods for more experimental details). Figure 4d plots the measured spectrum when the source and the probe are placed at two bulk sites (see Supplementary Information Fig. S8). As can be seen, there are pronounced peaks at the corresponding frequencies of the predicted LLs. To visualize the effect of the PMF, we plot the acoustic field distributions at several representative frequencies that correspond to the frequencies of the \(n=\{-1,0,1\}\) LLs or to the middle frequencies between these LLs. As shown in Figs. 4e-4f, the excited fields spread over a noticeable area when the source operates at LL frequencies. In contrast, the excited fields are highly confined to the source position at midgap frequencies. Furthermore, we compare the acoustic field distributions at the Dirac frequency and at gap frequencies for the three samples. As shown in the 2nd to 4th columns of Figs. 4e-4g, the stronger the PMF, the more localized the field. Such a sharp contrast is a direct consequence of the Landau quantization of the acoustic bands. Conventional nodal-line crystals are accompanied by drumhead surface states. In the presence of the PMF, we find that the drumhead surface states are also modified. As illustrated in Fig. 5a, due to the spatial variation of the nodal ring's radius, the momentum-space areas of the drumhead surface states at the top and bottom surfaces are different (see Supplementary Information). To see this effect, we measure the acoustic fields at the top and bottom surfaces, and the corresponding Fourier spectra are given in Figs. 5b-5d. Due to the size effect, the drumhead surface state frequency is located at 6.12 kHz, shifted slightly from the zeroth LL. As shown in Figs. 5b-5c, the surface states at the top surface indeed occupy a larger area in momentum space compared to those at the bottom.

Figure 5: **PMF-modified drumhead surface states.** **a**, Illustration of the drumhead states at the top and bottom surfaces. **b,c,d**, Measured Fourier-transformed intensity at the bottom (top) surface at 6.12 kHz for the three samples. The gray circles denote the projections of the nodal ring near the surfaces.

To sum up, we have theoretically proposed and experimentally demonstrated the generation of PMFs in 3D nodal-ring systems. Our results open a new route to studying the physics of artificial gauge fields in gapless systems beyond Dirac and Weyl semimetals. From the perspective of wave manipulation, the PMF-induced LLs provide a novel method to generate flat bands in 3D systems [10], which could be useful in sound trapping, energy harvesting, and slow-wave devices. In future studies, it would be interesting to investigate the effects of other forms of PMF beyond the constant one [31] and the interactions between PMFs and other types of band degeneracies, such as nodal links, nodal knots, and nodal surfaces. Extending the idea to photonic and electronic systems is also highly desirable, where nonlinear and correlated physics can be studied.
2310.17450
Rapid Generation of Kilonova Light Curves Using Conditional Variational Autoencoder
The discovery of the optical counterpart, along with the gravitational waves from GW170817, of the first binary neutron star merger opened up a new era for multi-messenger astrophysics. Combining the GW data with the optical counterpart, known as AT2017gfo and classified as a kilonova, has revealed the nature of compact binary merging systems by extracting enriched information about the total binary mass, the mass ratio, the system geometry, and the equation of state. Even though the detection of the kilonova brought about a revolution in the domain of multi-messenger astronomy, there has been only one kilonova from a gravitational-wave-detected binary neutron star merger event so far, which limits the exact understanding of the origin and propagation of kilonovae. Here, we use a conditional variational autoencoder trained on light curve data from two kilonova models having different temporal lengths, and consequently generate kilonova light curves rapidly based on physical parameters of our choice with good accuracy. Once trained, the time scale for light curve generation is of the order of a few milliseconds, thus speeding up light curve generation by a factor of 1000 compared to the simulation. The mean squared error between the generated and original light curves is typically 0.015 with a maximum of 0.08 for each set of considered physical parameters, while having a maximum error of \(\approx 0.6\) across the whole parameter space. Hence, implementing this technique provides fast and reliably accurate results.
Surojit Saha, Michael J. Williams, Laurence Datrier, Fergus Hayes, Matt Nicholl, Albert K. H. Kong, Martin Hendry, IK Siong Heng, Gavin P. Lamb, En-Tzu Lin, Daniel Williams
2023-10-26T15:00:00Z
http://arxiv.org/abs/2310.17450v1
# Rapid Generation of Kilonova Light Curves Using Conditional Variational Autoencoder ###### Abstract The discovery of the optical counterpart, along with the gravitational waves from GW170817, of the first binary neutron star merger opened up a new era for multi-messenger astrophysics. Combining the GW data with the optical counterpart, known as AT2017gfo and classified as a kilonova, has revealed the nature of compact binary merging systems by extracting enriched information about the total binary mass, the mass ratio, the system geometry, and the equation of state. Even though the detection of the kilonova brought about a revolution in the domain of multi-messenger astronomy, there has been only one kilonova from a gravitational-wave-detected binary neutron star merger event so far, which limits the exact understanding of the origin and propagation of kilonovae. Here, we use a conditional variational autoencoder trained on light curve data from two kilonova models having different temporal lengths, and consequently generate kilonova light curves rapidly based on physical parameters of our choice with good accuracy. Once trained, the time scale for light curve generation is of the order of a few milliseconds, thus speeding up light curve generation by a factor of 1000 compared to the simulation. The mean squared error between the generated and original light curves is typically 0.015 with a maximum of 0.08 for each set of considered physical parameters, while having a maximum error of \(\approx 0.6\) across the whole parameter space. Hence, implementing this technique provides fast and reliably accurate results. Kilonova, light curves, conditional variational autoencoder ## 1 Introduction The concomitant discovery of gravitational waves (GW) and the optical electromagnetic (EM) counterpart (Soares-Santos et al., 2017; Lipunov et al., 2017; Tanvir et al., 2017; Arcavi et al., 2017; Valenti et al., 2017; Coulter et al., 2017) from the merger of the binary neutron star GW170817 (Abbott et al., 2017), also accompanied by a short gamma-ray burst (GRB) (Goldstein et al., 2017; Savchenko et al., 2017), advanced the domain of multi-messenger astronomy. This optical counterpart, designated a kilonova (KNe) and known as AT2017gfo, bolstered previous predictions of the existence of such electromagnetic transients (Li & Paczynski, 1998; Rosswog et al., 1999). Prior to the discovery of GW170817, it was predicted that such an event would be accompanied by an EM counterpart, including short-duration GRBs (Eichler et al., 1989; Nakar, 2007), emission ranging from radio to X-rays pertaining to the on- or off-axis afterglow (van Eerten & MacFadyen, 2012; Coward et al., 2014; Fong et al., 2015; Lamb & Kobayashi, 2016), and optical to near-IR emission resulting from the decay of r-process nuclei. A kilonova is an isotropic, quasi-thermal transient powered by the radioactive decay of r-process nuclei in the merger ejecta, with luminosities of \(10^{40}-10^{42}\) erg s\({}^{-1}\) (Li & Paczynski, 1998; Rosswog et al., 1999; Metzger et al., 2010; Barnes & Kasen, 2013). This EM counterpart can provide a deeper understanding of the merger environment and its products. When EM information is further combined with GW data, it provides a unique platform for an extensive understanding of such binary events. 
KNe models consist of one or more radioactive ejecta components that produce light curves peaking at different timescales and temperatures, depending on the atomic mass number of the ejecta and the luminosity. The two-component model consists of the blue KNe (Metzger et al., 2010; Roberts et al., 2011; Metzger & Fernandez, 2014) emission, with a lanthanide-poor (\(10^{-5}\)) merger ejecta peaking at a relatively earlier time scale, and the red KNe (Barnes & Kasen, 2013; Kasen et al., 2013; Tanaka & Hotokezaka, 2013), comprising lanthanide-rich (\(10^{-2}\)) merger ejecta with peak values at later days. The three-component model (Perego et al., 2017) further includes a purple KNe component with an intermediate lanthanide fraction of \(10^{-3}\). Various papers (Cowperthwaite et al., 2017; Villar et al., 2018) have provided the best-fit parameters for AT2017gfo related to the blue, purple and red kilonova in terms of ejecta mass, ejecta velocity and lanthanide fraction. However, since there has been only one GW-confirmed detection of a KNe from a merger of binary neutron stars, along with some possible candidates from other sources, it is difficult to understand and verify the properties of binary merging systems that emit electromagnetic radiation. An overview of the physical parameters that govern the KNe has been provided by Metzger (2019). In recent years, machine learning has been used for various data analysis and modelling techniques required in astronomy (VanderPlas et al., 2014; Ball & Brunner, 2010; Ntampaka et al., 2021; Jones et al., 2022). There have been certain areas where the application of machine learning techniques has produced remarkable results, especially in providing relatively faster alternatives to existing methods (Ball, 2011; Sipocz et al., 2020). There is plenty of excellent literature available on different topics of machine learning applications in astronomy and astrophysics, spanning a wide range of subjects (Li et al., 2022; Gheller & Vazza, 2022; Garcia-Jara et al., 2022; Sheng et al., 2022). In this paper, we incorporate a method from the ML domain known as the autoencoder (Rumelhart et al., 1986), which is based on a feed-forward mechanism, to generate light curves. The primary task of an autoencoder is to encode the input into a lower-dimensional latent representation and then decode it back to data. In a feed-forward mechanism, the neural network processes information unidirectionally. The striking feature of an autoencoder is that the encoder compresses the high-dimensional input to a low-dimensional latent space and the decoder decompresses it to produce a result matching the input. In this work, we regenerate the KNe light curves by implementing a conditional variational autoencoder (CVAE) (Kingma & Welling, 2019), which is developed based on the variational autoencoder (VAE) (Kingma & Welling, 2013). We use this CVAE to generate light curves based on our choice of physical parameter values of the binary merging system. Once the training of the CVAE has been completed, we have the flexibility to rapidly generate light curves for physical parameters chosen from the range of the data set. Although KNe data from only two models are used in this work for training and producing the results, it is also possible to extend the same idea to data from other models. 
Although the data have been produced from models, this data analysis technique can not only generate light curves for parameter values that are not explicitly present in the model grid but can also replace time-consuming and resource-draining simulations required for predicting the light curves. The novelty of this technique lies in the fact that, once the CVAE is trained on a KNe model, we can generate numerous light curves for different combinations of physical parameters in a very short time. An alternative conditional variational autoencoder approach to KNe has been presented in Lukosiute et al. (2022), where a different method has been studied for obtaining results. One of the fundamental differences between Lukosiute et al. (2022) and our work is the use of spectra rather than light curves during training. In our method, we look into the light curve evolution with respect to different sets of physical parameters, aiming to complete the process on a relatively shorter time scale compared to existing simulation methods. This kind of rapid generation technique for KNe light curves can be useful as a template for rapid parameter estimation of KNe. In the following sections, we show the gradual implementation of our idea, and the results are discussed thereafter. In Section 3, we discuss the CVAE architecture implemented in our work. Section 4 puts forward the differences in the data and their physical parameters used for training and generation of the KNe light curves. Section 5 provides a detailed discussion of the results obtained after the implementation of the CVAE. In Section 6 we summarize our approach and present some important features of this technique. Additional results are included in the appendix for reference. ## 2 Kilonova models Kilonova emission results from the mass ejection in NS mergers (Freiburghaus et al., 1999; Ruffert & Janka, 2001; Hotokezaka et al., 2013). Properties of the ejecta such as the ejecta mass, ejecta velocity and opacity determine the peak luminosity and the time of the peak luminosity (Li and Paczynski, 1998; Kasen et al., 2013, 2015; Barnes and Kasen, 2013; Tanaka et al., 2017). This luminosity is sourced from the radioactive decay of the r-process elements synthesized in the merger. This work primarily focuses on two KNe models having different physical parameters that determine the light curves. Below we provide an outline of the two models used in this work. In Kasen et al. (2017), the dependence of the peak magnitude and decay time of the light curves on the ejecta mass, velocity and lanthanide fraction is provided. Quantitatively, the peak luminosity to first order is \(L\propto M_{\rm ej}\) and the width of the light curve is \(\tau\propto(\kappa M_{\rm ej}/v_{\rm ej})^{1/2}\), where \(M_{\rm ej}\), \(v_{\rm ej}\) and \(\kappa\) are the mass, velocity and opacity of the ejecta, with lanthanide-rich ejecta having higher \(\kappa\). Hence, heavier ejecta from the merger leads to a higher peak luminosity with a relatively longer-duration KNe, whereas ejecta with higher velocities give a short-duration, bright KNe. We see that the lanthanide fraction plays a major role in the light curves, where ejecta with a lower lanthanide fraction decays on a relatively shorter timescale compared to lanthanide-rich ejecta decaying over weeks. From the literature in Nicholl et al. 
(2021), light curves are predicted from the chirp mass, mass ratio and orbital inclination, also including properties related to the nuclear equation of state; there we see the nature of the light curves based on these physical parameters. It is important to note that Kasen et al. (2017) is based on a radiative transfer model, whereas Nicholl et al. (2021) is a semi-analytic model. While KNe light curves from any model can be generated with a CVAE, we choose these two models due to their different parametrizations and the availability of the data. However, this does not put any limitation on the use of the CVAE, since data from any other available KNe models can equally be used. ## 3 Autoencoder The advancement of new machine learning (ML) methods and their application in data analysis has opened a new era in which ML techniques can be implemented to obtain relatively faster results. Time-consuming and resource-draining simulations can be completed on a reasonable time scale (Carleo et al., 2019). There are many available ML techniques that can be used for data analysis, with specific techniques built to obtain certain results. We use an autoencoder, an ML technique based on a feed-forward mechanism whose function is to reproduce the input. As is the case with any standard autoencoder, the CVAE consists of three sections, an encoder, a latent space, and a decoder, as shown in Fig. 1. The encoder (\(Q(\phi)\)) compresses the high-dimensional input data into a lower-dimensional latent space while capturing the features of the input data. The latent space (\(Z\)), which is also referred to as the output of the encoder, is a lower-dimensional representation. Generally, a well-trained CVAE has the entire high-dimensional input data smoothly distributed over the latent space. To generate light curves of our choice, we draw samples from this latent space and pass them through the decoder (\(P(\theta)\)), which takes the compressed representation and generates the required light curves. This facilitates using the CVAE as a tool, not only as a generative model but also to quickly look into the parameter dependency of KNe light curves. In the CVAE, the training is regularised in order to avoid over-fitting; hence, the latent space is well distributed to enable the generative process. There are other generative models like Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), but since in our work we want to sample new data from the probability distribution of the input data, in order to look at the model parameters crucial for the light curves, we prefer to utilize the strength of an autoencoder because of its ability to model a probability distribution of the input. Even though GAN and CVAE are both generative models and belong to the category of unsupervised learning, one of the primary differences lies in their architecture, where a GAN has a generator and a discriminator while a CVAE has an encoder, a latent space and a decoder. Along with this, the loss functions used in GAN and CVAE are different from each other. It is important to note that for generating new data, sampling a hidden state in a GAN takes place from a predefined distribution which is then fed into the generator, in contrast to the CVAE, where the hidden state to be fed into the decoder is sampled from a prior distribution related to the actual data. A minimal sketch of such a conditional encoder-decoder pair is given below. 
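To make the preceding description concrete, the following is a minimal sketch of a conditional encoder-decoder pair in TensorFlow/Keras (the libraries listed in our software acknowledgements). Only the overall structure follows the text: the encoder \(Q(\phi)\), the latent space \(Z\), the decoder \(P(\theta)\), the conditioning on physical parameters, and the combined reconstruction plus Kullback-Leibler loss. The curve length `T`, the layer widths, and the latent dimension `LATENT` are illustrative assumptions rather than our exact configuration.

```python
# Minimal CVAE sketch (TensorFlow/Keras). T, N_COND and LATENT are
# illustrative assumptions, not the exact values used in the paper.
import tensorflow as tf
from tensorflow.keras import layers

T, N_COND, LATENT = 100, 3, 8  # curve length, #physical parameters, latent dim

class CVAE(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Encoder Q(phi): compresses [light curve, parameters] into (mu, log_var).
        self.enc = tf.keras.Sequential(
            [layers.Dense(128, activation="relu"),
             layers.Dense(64, activation="relu")])
        self.mu = layers.Dense(LATENT)
        self.log_var = layers.Dense(LATENT)
        # Decoder P(theta): maps [z, parameters] back to a light curve.
        self.dec = tf.keras.Sequential(
            [layers.Dense(64, activation="relu"),
             layers.Dense(128, activation="relu"),
             layers.Dense(T, activation="sigmoid")])  # curves scaled to [0, 1]

    def call(self, inputs):
        curve, cond = inputs
        h = self.enc(tf.concat([curve, cond], axis=-1))
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + tf.exp(0.5 * log_var) * tf.random.normal(tf.shape(mu))
        recon = self.dec(tf.concat([z, cond], axis=-1))
        # Loss = reconstruction error + KL divergence to the prior N(0, I).
        rec = tf.reduce_mean(tf.reduce_sum(tf.square(curve - recon), axis=-1))
        kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
            1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1))
        self.add_loss(rec + kl)
        return recon

model = CVAE()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4))  # learning rate from the text
# model.fit([scaled_curves, scaled_params], epochs=1000, batch_size=50)
```

The sigmoid output matches the [0, 1] scaling applied to the light curves in Section 4, and the learning rate, batch size, and epoch count in the commented lines follow the training settings quoted in the text.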
Here, to achieve our goal, we take advantage of the conditional variational autoencoder: we train it on the light curves by conditioning them on physical parameters and generate new light curves based on the physical parameters of our choice. In this case, we have complete control over the generated data set, and in this work we take advantage of this feature of the CVAE to generate new data. In our CVAE architecture, the loss function is the combination of the reconstruction loss and the Kullback-Leibler divergence (Asperti and Trentin, 2020). Training is carried out with \(batch\_size=50\) for \(epochs=1000\) and a learning rate of 0.0001. From the start of training to the generation of the light curves for the different physical parameter values, it takes \(\simeq 20\) minutes for the process to complete using a 3.9 GHz 8-core Intel Core i9 processor with 32 GB memory. In our training and regeneration of the light curves, we have not used the GPU of the system and have relied entirely on the CPU. Once the model is trained and saved, it takes only a few milliseconds to generate the light curves. In this section, we outline the CVAE without delving into the details; for more details on variational autoencoders, readers can refer to (Kingma and Welling, 2013) and (Kingma and Welling, 2019). In Section 5, more insight into the results obtained from the CVAE is given. ## 4 Data In the ML domain, one needs to be very careful about the data that are fed into the algorithm. The data have to be pre-processed and prepared accordingly. The data used for the training, validation and test sets are categorized into two types, as mentioned in Section 2. We adopt the data ([https://github.com/dnkasen/Kasen_Kilonova_Models_2017](https://github.com/dnkasen/Kasen_Kilonova_Models_2017)), hereafter \(D_{1}\) (Kasen et al., 2017), to prepare our data set to be fed into the CVAE. The physical parameters in this data set are the ejecta mass (\(0.001-0.1M_{\odot}\)), ejecta velocity (\(0.03-0.3c\)) and lanthanide fraction (\(10^{-1}-10^{-9}\)). There are 329 light curves with a duration of \(\approx 25\) days. Each light curve has different values of the physical parameters within the ranges mentioned above. The light curves are available for the different filter bands (u, g, r, i, z, y) within the same physical parameter range. Using these data, 241 light curves are used as the training set, and the rest are equally divided into the test set and the validation set. In our analysis, we have trained the CVAE on each filter band separately, hence there is one trained CVAE per band, and carried out light curve generation; however, in the main sections, only the g-band results are shown, and the rest are added in the appendix. The second type of data, hereafter \(D_{2}\), used here consists of simulated light curves in the same filter bands but having different physical parameters. The physical parameters of these data are the chirp mass of the binary system, the mass ratio, the fraction of the remnant disk that is ejected, the viewing angle in degrees from the pole, and the opening angle of the cocoon shock (Nicholl et al., 2021). The ranges of values are (\(0.7M_{\odot}-2.0M_{\odot}\)) for the chirp mass, (\(0.5-1.0\)) for the mass ratio, and (\(0.15-0.45\)) for the fraction of the remnant disk, respectively. For the viewing angle, the data correspond to the values of \(0^{\circ}\), \(60^{\circ}\) and \(90^{\circ}\). 
There are 529 light curves having a temporal length of 30 days, out of which 401 light curves are included in the training data and the rest are equally divided into the test and validation sets. The difference between the two data sets appears in the physical parameters used in the respective KNe models. These data are used to verify the pliability of the CVAE. Since the physical parameters in the two data sets are entirely different, this gives an extra edge in verifying our approach, and the results are detailed in Section 5. Since both data sets contain light curves that have different absolute magnitude peak values, while feeding them in for training we have scaled the light curve data and the physical parameters between 0 and 1, as this is a more effective approach. Later, after training and generating light curves, we rescale the light curves and plot them accordingly. Throughout the paper, we will follow a particular format to represent the physical parameters in the text and in the legends of the plots. For the text and plots relevant to \(D_{1}\), we will use the format \([a,b,c]\), where \(a\), \(b\) and \(c\) are the values of the ejecta mass in \(M_{\odot}\), the velocity of the ejecta in units of the speed of light, and the lanthanide fraction, respectively. Similarly, for \(D_{2}\), we will use the format \([w,x,y,z]\), where \(w\) is the chirp mass in units of \(M_{\odot}\), \(x\) is the mass ratio, \(y\) gives the fraction of the remnant disk that is ejected and \(z\) is the viewing angle in degrees from the pole. We will use the above format throughout the paper wherever required, without further mentioning the respective units. Fig. 2(a) shows 50 out of the 329 original light curves from \(D_{1}\) in the g-band for physical parameters corresponding to the ejecta mass, ejecta velocity and lanthanide fraction with values ranging between \(0.001-0.1M_{\odot}\), \(0.03-0.05c\) and \(10^{-4}-10^{-9}\) respectively, whereas in Fig. 2(b) we show 50 out of the 529 light curves corresponding to \(D_{2}\), having physical parameters of chirp mass (\(0.7-1.10M_{\odot}\)), mass ratio (\(0.5-1.0\)) and fraction of the remnant disk (\(0.15-0.45\)), with a viewing angle of \(0^{\circ}\) from the pole, as mentioned in Section 4. In Fig. 2, we show the input light curves corresponding to \(D_{1}\) and \(D_{2}\) which are fed into the CVAE. During training and validation, scaled values of the KNe light curves and the conditional physical parameters are used for both \(D_{1}\) and \(D_{2}\), and the obtained results are shown in absolute magnitude values. The input light curves for training consist of 241 light curves from \(D_{1}\) and 401 light curves from \(D_{2}\) in the g-band. As previously mentioned, each light curve has different physical parameters corresponding to the ejecta mass, ejecta velocity and lanthanide fraction for \(D_{1}\), and the chirp mass, mass ratio, fraction of the remnant disk and the viewing angle from the pole for \(D_{2}\). Throughout the text, \(D_{1}\) and \(D_{2}\) will be referred to accordingly, keeping the above training, test, and validation sets unchanged. In Table 1, we tabulate the physical parameters and the data ranges used in this work. In this study, while defining the training, test, and validation sets, we do not restrict the physical parameters to different regions of the parameter space. Instead, the physical parameters present in the training, test, and validation sets cover the entire parameter space. 
However, at the same time, we ensure that none of the parameter combinations, i.e., ejecta mass, ejecta velocity, and lanthanide fraction for \(D_{1}\) and chirp mass, mass ratio, fraction of the remnant disk, and viewing angle for \(D_{2}\), are repeated across the training, test, and validation sets. Specifically, for \(D_{2}\), apart from the training, test, and validation sets, we have kept aside a separate set of simulated light curves from MOSFiT ([https://github.com/guillochon/MOSFiT](https://github.com/guillochon/MOSFiT)), which are utilized to evaluate the performance of the CVAE. We augmented the existing test set with these separately simulated light curves, hereafter \(D_{2}^{\dagger}\), to provide a robust performance check. The combinations of the physical parameters present in \(D_{2}^{\dagger}\), as shown in Table 2, cover the parameter space but are previously unseen by the CVAE. The CVAE is employed with these sets of physical parameters to generate light curves, which are then compared with the true light curves, thus providing a robust approach to appraise the CVAE performance. Hence, using the CVAE, we generate light curves over a grid on the parameter space. ## 5 Results In this section, we present the results after the implementation of the CVAE. After training, samples are drawn from the latent space to reconstruct light curves based on the required physical parameters. This section points out the comparison between the light curves generated from the latent space and the original light curves from the simulation. This technique allows us to generate as many light curves as we require over a wide range of physical parameters within the range provided by the model. In Fig. 3(a), we show the generated g-band light curves after implementing the CVAE for the physical parameters \([0.001M_{\odot},0.03c,10^{-5}]\), which are the ejecta mass, ejecta velocity, and lanthanide fraction, respectively, as mentioned in Section 4. The loss plot for the CVAE is shown in Fig. 3(b). We find that the validation loss and the training loss have decreased to a point of stability and there is a small gap between the two curves. Figure 1: Schematic diagram of the conditional variational autoencoder, consisting of the encoder, latent space and the decoder, from input to data generation. In the figure, we have the probabilistic encoder \(Q(\phi)\) and the probabilistic decoder \(P(\theta)\). \(Z\) represents the compressed input or encoded representation. Here, the encoder transforms the high-dimensional input data into the low-dimensional latent space, which represents the compressed form of the high-dimensional input data, while the decoder maps this compressed representation back, leading to the reconstruction of the original high-dimensional data. We take a set of data \(x\), which is the KNe light curves, condition it on a given parameter \(y\), which in our case are the physical parameters from the model, and then the encoder compresses this representation in the latent space. The decoder decompresses the representation and provides the relevant output data, the KNe light curves, based on the physical parameter values of our choice. This is an overview of the CVAE architecture that has been used in this work to train and generate the KNe light curves. 
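The generation step just described can be sketched as follows: draw latent samples \(z\sim\mathcal{N}(0,I)\) and decode them together with a fixed condition vector. Here `model` and `LATENT` refer to the illustrative CVAE sketch given in Section 3, and `scale_params` is a hypothetical helper applying the same [0, 1] min-max scaling used for training; neither name comes from our actual code.

```python
# Generate many light curves for one chosen parameter set by decoding
# different latent samples; the spread across draws mirrors the
# latent-space probability distribution discussed in the text.
import numpy as np
import tensorflow as tf

def generate_curves(model, cond_scaled, n_curves=100):
    cond = np.tile(np.asarray(cond_scaled, dtype="float32"), (n_curves, 1))
    z = tf.random.normal((n_curves, LATENT))  # LATENT as in the sketch above
    return model.dec(tf.concat([z, cond], axis=-1)).numpy()

# e.g. ejecta mass 0.001 Msun, velocity 0.03c, lanthanide fraction 1e-5:
# curves = generate_curves(model, scale_params([0.001, 0.03, 1e-5]), 100)
```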
Although the CVAE has the ability to generate as many light curves as we want for one or many physical parameters, in order to avoid congestion in the plots we limit ourselves to 100 generated light curves for the above physical parameters. The 100 light curves are closely spaced, and the apparent variations at later days come from the variations of the probability distribution in the latent space. These variations are expected since, to regenerate light curves, we randomly draw samples from the latent space. This randomness allows the CVAE to produce variations in the light curves even for the same input. Thus, we see these variations in the CVAE-generated light curves for the physical parameters. Besides, the light curves are more sensitive to changes in the physical parameters at later days. However, since we are more interested in the earlier KNe evolution, we restrict the plots to 14 days. \begin{table} \begin{tabular}{|c|c|c|} \hline **Data** & **Physical Parameters** & **Data Features** \\ \hline \multirow{5}{*}{\(D_{1}\)} & **Ejecta Mass (\(M_{\odot}\))** & 0.001, 0.0025, 0.005, 0.01, 0.02, 0.025, 0.03, 0.04, 0.05, 0.075 \\ \cline{2-3} & **Ejecta Velocity (c)** & 0.03, 0.05, 0.1, 0.2, 0.3 \\ \cline{2-3} & **Lanthanide Fraction** & \(10^{-9}\), \(10^{-5}\), \(10^{-4}\), \(10^{-3}\), \(10^{-2}\), \(10^{-1}\) \\ \hline \multirow{5}{*}{\(D_{2}\)} & **Chirp Mass (\(M_{\odot}\))** & 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8 \\ \cline{2-3} & **Mass Ratio** & 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 \\ \cline{2-3} & **Fraction of the Remnant Disk** & 0.15, 0.30, 0.45 \\ \cline{2-3} & **Viewing Angle** & \(0^{\circ}\), \(60^{\circ}\), \(90^{\circ}\) \\ \hline \end{tabular} \end{table} Table 1: This table gives an overview of the physical parameters and the ranges of their values corresponding to \(D_{1}\) and \(D_{2}\) used in this work for training and generation of the KNe light curves. \(D_{1}\) is taken from [https://github.com/dnkasen/Kasen_Kilonova_Models_2017](https://github.com/dnkasen/Kasen_Kilonova_Models_2017) and \(D_{2}\) is adopted from [https://github.com/mnicholl/kn-models-nicholl2021](https://github.com/mnicholl/kn-models-nicholl2021). The training, test and validation data for the corresponding data sets consist of these individual physical parameters. Figure 2: (a) In this plot, we represent only 50 KNe light curves of the g-band of \(D_{1}\) which are a part of the data set for training. These light curves have the mass of the ejecta, the velocity of the ejecta, and the lanthanide fraction as physical parameters, with values ranging between \((0.001-0.1M_{\odot})\), \((0.03-0.05c)\) and \((10^{-9}-10^{-4})\) respectively. (b) In this plot, we show 50 light curves for the g-band of \(D_{2}\) which are a part of the training data set. These light curves have the chirp mass, mass ratio and fraction of the remnant disk, with values ranging between \((0.7-1.10M_{\odot})\), \((0.50-1.0)\) and \((0.15-0.45)\) respectively, and a viewing angle of \(0^{\circ}\), as their physical parameters. This figure gives an outline of the KNe light curves and their decay time, which are used for training the CVAE. For both data sets, the KNe light curves are in absolute magnitude. These light curves are scaled between [0-1] while feeding them into the CVAE for training but, when representing the generated results, they are scaled back to absolute magnitude. 
This parameter set is chosen from the simulated data in order to verify the robustness of the CVAE. It is clear that the input light curve, shown by diamond markers and taken from the validation data of the CVAE, is well within the spread of the regenerated light curves. This assures that the CVAE is well trained. In Fig. 3(a), we see some overlap in the light curves, since the 100 generated light curves have relatively similar values. While generating these light curves, samples were drawn from different points in the latent space. It is possible to generate as many light curves as desired and observe the possible variations from the latent representation, since the encoder maps the input light curves to a probability distribution over the latent space. Hence, even though in the respective KNe models there is one light curve associated with a single set of physical parameter values, implementing the CVAE provides a comparatively wide distribution of light curves for the same parameter values, arising from the latent representation. This variation of the generated KNe light curves is particularly interesting when the light curves are generated for an entirely new physical parameter set which the CVAE has not come across during training and validation. Kilonova light curves are extremely sensitive to the ejecta mass (Kasen et al., 2017). With an increase in the ejecta mass, the peak value of the light curves increases and shifts to later days, which is explicitly shown in the top left, top right and bottom left panels of Fig. 4. This result is expected in accordance with the KNe evolution. In this figure, the comparative analysis for the different values of the ejecta mass (\(0.02M_{\odot}\), \(0.1M_{\odot}\) and \(0.03M_{\odot}\)) is shown, keeping the ejecta velocity (\(0.03c\)) and lanthanide fraction (\(10^{-9}\)) unchanged. 100 light curves are generated from the CVAE and compared with the original light curve for the above physical parameters. \begin{table} \begin{tabular}{|c|c|c|} \hline **Data** & **Physical Parameters** & **Data Features** \\ \hline \multirow{4}{*}{\(D_{2}^{\dagger}\)} & **Chirp Mass (\(M_{\odot}\))** & 1.0, 1.2, 1.4, 1.6, 1.8 \\ \cline{2-3} & **Mass Ratio** & 0.7, 0.75, 0.8, 0.85, 0.9 \\ \cline{2-3} & **Fraction of the Remnant Disk** & 0.15, 0.20, 0.25, 0.30, 0.35, 0.40 \\ \cline{2-3} & **Viewing Angle** & \(45^{\circ}\), \(60^{\circ}\), \(75^{\circ}\), \(90^{\circ}\) \\ \hline \end{tabular} \end{table} Table 2: This table contains the interpolated physical parameters that were used to simulate light curves using MOSFiT, which are compared with the CVAE-generated light curves. Light curves from these physical parameters, alongside those from the test set, are employed to evaluate the performance of the CVAE over the parameter space. Figure 3: (a) This figure shows the light curve comparison corresponding to the g-band after training and generation are implemented on \(D_{1}\). We display 100 light curves for the physical parameters of ejecta mass, ejecta velocity and lanthanide fraction having values \([0.001M_{\odot},0.03c,10^{-5}]\) and compare them alongside the light curve from the test data for the same physical parameters. The original light curve is shown with diamond markers and is within the distribution of the generated light curves shown by dashed lines. The deviations seen in the generated light curves arise due to the variations in the latent space. (b) This plot corresponds to the learning curve after implementing the CVAE. The gap between the two curves is small, indicating a good fit. 
In the bottom right panel, the light curve for an arbitrary parameter set (\(0.025M_{\odot}\), \(0.035c\), \(10^{-5}\)) is shown. This physical parameter configuration is not present in \(D_{1}\); therefore, we do not have any benchmark to validate it; however, with more data from the respective KNe model, this can be verified. A similar comparative analysis for all other filter bands, showing the evolution of the light curves with changes in the ejecta mass, ejecta velocity and lanthanide fraction, is shown in the appendix in Fig. A1. We next take the second data set, \(D_{2}\), with the physical parameters of chirp mass, mass ratio, fraction of the remnant disk and viewing angle from the pole, and run the same analysis as above. As mentioned in Section 4, this data set has a total of 529 light curves with the above physical parameters. The training data consist of 401 light curves, and the remaining are equally divided into a test set and a validation set. In Fig. 5, we demonstrate the application of the CVAE on \(D_{2}\), where we compare 100 light curves generated from the CVAE with the original light curve for the physical parameters \([1.8M_{\odot},0.9,0.15,90^{\circ}]\), corresponding to the chirp mass, the mass ratio, the fraction of the remnant disk and the viewing angle from the pole, respectively. This set of physical parameters is taken from the test data set. The input light curve is within the distribution of the generated light curves, which indicates that the CVAE is performing well and the training and regeneration of the light curves are successful. Fig. 6 shows the confidence plot for a set of physical parameters; it is important to mention that the neural network has not seen this set of physical parameters, and hence from the similarity between the generated and true light curves we can substantiate the performance of the CVAE. Other relevant plots for different combinations of physical parameters, and the corresponding changes in the light curves for the other filter bands, are shown in Fig. B1 in the appendix. To measure the predictive accuracy of the trained CVAE, we use the mean absolute error (MAE), where a lower value corresponds to a more accurate prediction, calculated using \[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}|, \tag{1}\] where \(y_{i}\) and \(\hat{y}_{i}\) are the true and predicted values of the light curves of the test data and \(n\) is the total number of points drawn for each light curve from the latent space. Alongside the above mean absolute error, we also calculate and plot the mean squared error (MSE) between the true and predicted light curve values using \[MSE=\frac{1}{n}\sum_{i=1}^{n}(z_{i}-\hat{z}_{i})^{2}, \tag{2}\] where \(n\) is the total number of data points and \(z_{i}\) and \(\hat{z}_{i}\) are the respective values of the true and CVAE-predicted light curves; a minimal computation of both metrics is sketched below. ## 6 Discussion Since the CVAE depends on the input data, light curves from different KNe models can be used for training, and the corresponding trained CVAE is saved for generating light curves. The input data contain both the training light curves and the physical parameters. This versatility of the technique provides a unique opportunity to test it on other data sets. Using Eq. (1), we find the mean absolute error to be 0.0995 and 0.0339 for \(D_{1}\) and \(D_{2}\), respectively. The mean squared errors for \(D_{1}\) and \(D_{2}\) calculated on the test data using Eq. (2) are found to be 0.00241 and 0.00153, respectively. 
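For reference, Eqs. (1) and (2) reduce to one-line NumPy computations over a pair of aligned true and predicted light curves; this is a sketch, and the array names are assumptions:

```python
import numpy as np

def mae(y_true, y_pred):
    # Eq. (1): mean absolute error over the n points of a light curve.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def mse(z_true, z_pred):
    # Eq. (2): mean squared error over the n points of a light curve.
    return np.mean((np.asarray(z_true) - np.asarray(z_pred)) ** 2)
```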
Compared to \(D_{2}\), the MAE for \(D_{1}\) is higher, as there is relatively less training data for \(D_{1}\). We not only successfully reconstructed the light curves for known values of the physical parameters but also generated light curves for physical parameters that are not present in the data set but are well within the range of the physical parameter values provided in the respective models. In Fig. 3, we presented the reconstructed light curves and the corresponding loss values for the CVAE. Consequently, based on these results, we proceeded to generate light curves for different combinations of physical parameters and verify them, as presented in Fig. 4. The obtained results are consistent with the KNe evolution presented in the model. Comparing the plots in Fig. 3(a) and Fig. 4, as mentioned in Section 5, we can see more variations in the latter; for the latter figure the light curves have been truncated to 14 days, while in the former the CVAE-generated light curves are extended till the end. For the plots generated from \(D_{2}\), since there is no significant change of astrophysical importance at later times, we have truncated the light curves to 14 days. Model evaluation is usually performed with the test data; in the current work, we move a step ahead to evaluate the performance via the confidence interval (Fig. 6) and the mean squared error (Fig. 7) between the CVAE-generated and true light curves by taking physical parameters from \(D_{2}^{\dagger}\), which contains the physical parameters from the test set as well as entirely new physical parameters that were absent in the training, test and validation sets. Therefore, as previously mentioned in Section 4, we use the physical parameter combinations from Table 2 to produce Fig. 7. Evaluating the performance on these sets of previously unseen parameters simulates a real scenario akin to observing a new kilonova. In Fig. 6, we represent the 90% confidence interval between the generated and true light curves in the different filter bands _u, g, r, i, z_ and \(y\) for a chirp mass of \(1.2M_{\odot}\), a mass ratio of 0.7, a 0.15 fraction of the remnant disk and a viewing angle of \(60^{\circ}\). The true light curves for this set of parameters were not included in the training, test, or validation data set; thus these data are unseen by the network. To plot the confidence interval, we take the mean light curve from the 2000 CVAE-generated light curves for the above physical parameter values \([1.2M_{\odot},0.7,0.15,60^{\circ}]\) in each filter band; a sketch of this step follows the figure caption below. Each result is highlighted with a color-shaded region for each filter band, including the true light curve. Here we see a good agreement between the true and generated light curves, which provides supporting evidence in favour of the performance of the CVAE. Figure 4: In this figure, we compare 100 light curves (solid lines) generated from the CVAE with the original light curves (diamond markers) for physical parameter values having ejecta masses of \(0.02M_{\odot}\) (_upper left_), \(0.1M_{\odot}\) (_upper right_) and \(0.03M_{\odot}\) (_bottom left_), keeping the ejecta velocity (\(0.03c\)) and lanthanide fraction (\(10^{-9}\)) constant for all the generated light curves. These values of the physical parameters were chosen from the test set. In the bottom right panel, 10 light curves for an arbitrary physical parameter set [\(0.025M_{\odot}\), \(0.035c\), \(10^{-5}\)] are shown. All the generated and original light curves correspond to the g-band. 
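The 90% confidence bands of Fig. 6 can be reproduced from repeated CVAE draws as sketched below, using the 2000 samples quoted above. The choice of the 5th and 95th percentiles for a symmetric 90% band is our assumption, and `generate_curves` is the illustrative helper sketched earlier in this section.

```python
import numpy as np

def confidence_band(curves):
    """curves: array of shape (n_draws, n_times) from repeated CVAE sampling."""
    lo, hi = np.percentile(curves, [5.0, 95.0], axis=0)  # symmetric 90% band
    return curves.mean(axis=0), lo, hi

# curves = generate_curves(model, scale_params([1.2, 0.7, 0.15, 60.0]), 2000)
# mean_curve, lower, upper = confidence_band(curves)
```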
In addition to the above, we have calculated the mean squared error between the generated and true light curves, where we obtain an error always between (\(0.015-0.08\)) for these sets of physical parameters. To provide further evidence of the CVAE performance, we illustrate the mean squared error calculated over the entire parameter space while considering the following values of the physical parameters: \(1.0M_{\odot}\), \(1.2M_{\odot}\), \(1.4M_{\odot}\), \(1.6M_{\odot}\), and \(1.8M_{\odot}\) for the chirp mass; \(45^{\circ}\), \(60^{\circ}\), \(75^{\circ}\) and \(90^{\circ}\) for the viewing angle; 0.15, 0.20, 0.25, 0.30, 0.35 and 0.40 for the fraction of the remnant disk; and 0.70, 0.75, 0.80, 0.85 and 0.90 for the mass ratio, in the different filter bands, as shown in Fig. 7. To calculate the mean squared error, we have used the mean of 2000 CVAE-generated light curves and the true light curves for the respective sets of physical parameters. Fig. 7 consists of four frames, each having an inset, which show the error calculated for the respective value of the physical parameter (along the _x-axis_) across the parameter space of the other remaining physical parameters. For the chirp mass frame (_upper left_), the error is calculated for the different chirp masses (\(1.0M_{\odot}\), \(1.2M_{\odot}\), \(1.4M_{\odot}\), \(1.6M_{\odot}\), \(1.8M_{\odot}\)) across the parameter space of viewing angle, mass ratio and fraction of the remnant disk, grouped together for each value of the chirp mass for the filter bands. For the viewing angle frame (_upper right_), for the values of \(45^{\circ}\), \(60^{\circ}\), \(75^{\circ}\) and \(90^{\circ}\), the error is calculated across the chirp mass, mass ratio and fraction of the remnant disk, grouped together for each value of the viewing angle for the filter bands. For calculating the mean squared error for the fraction of the remnant disk (_lower left_), we have considered 0.15, 0.20, 0.25, 0.30, 0.35 and 0.40 across the parameter space of chirp mass, viewing angle and mass ratio, grouped by parameter value in the different filter bands. In the mass ratio frame (_lower right_), the error is calculated for the mass ratio values of 0.70, 0.75, 0.80, 0.85 and 0.90, across the parameter space of chirp mass, viewing angle and fraction of the remnant disk, grouped accordingly. Certain sets of physical parameters in certain filter bands tend to have comparatively high mean squared error values, thus adding more to the overall error. Figure 6: This figure corresponds to the 90% confidence plot for the CVAE-generated and true light curves in the _g, r, z, y, i_ and \(u\) bands for the physical parameters of \(1.2M_{\odot}\) chirp mass, 0.7 mass ratio, 0.15 fraction of the remnant disk and a viewing angle of \(60^{\circ}\). True values of the light curves for the above parameters are represented by the solid lines in each color-filled region. We find quite satisfactory agreement between the true and CVAE-generated light curves. The light curves corresponding to the above parameters were taken from the \(D_{2}^{\dagger}\) data set and thus are entirely new to the CVAE for prediction and generation. 
For instance, the above case can be seen in the viewing angle frame of Fig. 7 for the \(45^{\circ}\) viewing angle in the _i-band_. However, the overall errors are considerably low. In the insets of Fig. 7, the _g-band_ results are shown, where each dot refers to the mean squared error of a single light curve for the respective set of physical parameters across the parameter space. Here also, we find that for certain sets of physical parameters the error value is comparatively higher, as evident from the outliers at \(1.8M_{\odot}\) chirp mass, \(60^{\circ}\) viewing angle, 0.15 fraction of the remnant disk and 0.80 mass ratio in the insets. In comparison to the one-to-one mean squared error, the overall error is relatively higher, as it includes all the parameter sets across the individual parameter considered. From the above results, we find the regions in parameter space, for the different filter bands, where the CVAE has comparatively unsatisfactory results. In addition to the above, the network summary (Fig. C1) has been added in the appendix for reference. While comparing Fig. 3 and Fig. 4 for \(D_{1}\) with the results from \(D_{2}\) in Fig. 5, one will find similar deviations in the tail of the light curves after 10 days for the different sets of physical parameters shown in the plots presented in the appendix (Fig. B1). In the main text, only one such plot (Fig. 5) has been shown for a single set of physical parameters, where the deviations are comparatively smaller. One of the main reasons for such different deviations is the distribution of the latent space, owing to the difference in the amount of training data, since \(D_{1}\) has comparatively less training data than \(D_{2}\). ## 7 Conclusion In this paper, we look into the rapid generation of KNe light curves based on different physical parameter values by implementing a CVAE, and we present a methodological approach to our idea. In the initial stages, the performance check was carried out by inspecting the loss curves during training and validation. We also evaluated the CVAE performance on different physical parameters from the test data, hence performing rapid interpolation of light curves, and subsequently provided evidence that the results are in good agreement with the true light curves. However, in certain cases, while generating the light curves, we find that for \(D_{1}\) the generated light curves tend to deviate from the original light curves after 8 days, whereas for \(D_{2}\) we do not see such deviations at later days. Here, we have used publicly available KNe data to provide a proof of our concept, but this does not put any limitation on the technique, which can also be used for similar kinds of data analysis in other domains of astronomy and astrophysics. The striking point of such an approach is that it removes the need to rerun simulations to reproduce similar results for different physical parameters every time. We train on the available data and produce the desired results rapidly rather than adjusting the simulation code each time, and the saved CVAE model can be used to generate the required KNe light curves. In this work, by speeding up the light curve generation by a factor of 1000, we have achieved rapid results. Besides the current application to KNe, the CVAE technique demonstrated above has prospective applications in any similar procedure where data are available for training, testing and validation. 
In the current work, since the detailed calculations of the simulation are not incorporated into the CVAE, we do not expect any new results; at the same time, the trained CVAE produces results without looking into the particulars of the simulation. An alternative approach for generating simulation results with the help of machine learning tools, without actually re-executing the simulations, is presented here. Additionally, this method can accommodate other KNe models where data are available to be fed into the network. This kind of CVAE approach also has the potential to be utilized for rapid parameter estimation, not only limited to KNe but also for other astrophysical sources where rapid data analysis is required. ## 8 Acknowledgements This work is supported by the National Science and Technology Council of Taiwan under the grants 110-2628-M-007-005 and 111-2112-M-007-020, and a joint grant of the National Science and Technology Council and the Royal Society of Edinburgh through 110-2927-I-007-513. MJW is supported by the Science and Technology Facilities Council [2285031, ST/V005634/1, ST/V005715/1]. ISH is supported by the Science and Technology Facilities Council [ST/L000946/1]. ISH and MJW are also supported by the European Cooperation in Science and Technology (COST) action [CA17137]. MN is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381) and by funding from the UK Space Agency. Figure 7: In this figure, we present the performance of the CVAE spanning the whole parameter space of chirp mass (\(1.0M_{\odot}-1.80M_{\odot}\)), viewing angle (\(45^{\circ}-90^{\circ}\)), fraction of the remnant disk (\(0.15-0.40\)) and mass ratio (\(0.70-0.90\)) in the _u, g, i, r, z_ and \(y\) filter bands, represented by the different color bars grouped accordingly. The _y-axis_ represents the mean squared error values, whereas on the _x-axis_ we have the respective physical parameter values for which we have calculated the overall mean squared error in the different filter bands, shown with different color bars. For the chirp mass frame (_upper left_), the mean squared errors for the different filter bands are grouped by the chirp mass values \(1.0M_{\odot},1.2M_{\odot},1.4M_{\odot},1.6M_{\odot},1.8M_{\odot}\), while covering the parameter space of viewing angle, mass ratio and fraction of the remnant disk. For the viewing angle frame (_upper right_), we show the error for \(45^{\circ}\), \(60^{\circ}\), \(75^{\circ}\) and \(90^{\circ}\) viewing angles calculated over the entire parameter values of chirp mass, mass ratio and fraction of the remnant disk. In the remnant disk frame (_lower left_), for the values of 0.15, 0.20, 0.25, 0.30, 0.35 and 0.40 of the fraction of the remnant disk, the errors are grouped in the different filter bands, encompassing the chirp mass, viewing angle and mass ratio. For the mass ratio frame (_lower right_), the errors corresponding to mass ratio values of 0.70, 0.75, 0.80, 0.85 and 0.90 are shown as grouped bar plots across the parameter space of chirp mass, viewing angle and fraction of the remnant disk for the different filter bands. In the insets of all the frames, we show the CVAE performance over the entire parameter space, as discussed above, obtained from the CVAE-generated _g-band_ light curves. Each dot in an inset corresponds to the calculated value of the mean squared error of a light curve for the relevant set of physical parameters. For each of the histograms corresponding to the different parameter spaces, error bars are shown. 
The histograms without error bars correspond to cases where a single light curve is available for calculating the mean squared error between the true and CVAE-generated light curves. The authors are grateful for the valuable suggestions from He-Feng Hsieh, John Veitch, and Nicola De Lillo. _Software_: matplotlib (Hunter 2007), pandas (pandas development team 2020), tensorflow (Abadi et al. 2015), keras (Chollet et al. 2015), MOSFiT ([https://github.com/guillochon/MOSFiT](https://github.com/guillochon/MOSFiT), [https://github.com/mnicholl/MOSFiT](https://github.com/mnicholl/MOSFiT))
2303.17568
CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X
Large pre-trained code generation models, such as OpenAI Codex, can generate syntax- and function-correct code, making the coding of programmers more productive and our pursuit of artificial general intelligence closer. In this paper, we introduce CodeGeeX, a multilingual model with 13 billion parameters for code generation. CodeGeeX is pre-trained on 850 billion tokens of 23 programming languages as of June 2022. Our extensive experiments suggest that CodeGeeX outperforms multilingual code models of similar scale for both the tasks of code generation and translation on HumanEval-X. Building upon HumanEval (Python only), we develop the HumanEval-X benchmark for evaluating multilingual models by hand-writing the solutions in C++, Java, JavaScript, and Go. In addition, we build CodeGeeX-based extensions on Visual Studio Code, JetBrains, and Cloud Studio, generating 4.7 billion tokens for tens of thousands of active users per week. Our user study demonstrates that CodeGeeX can help to increase coding efficiency for 83.4% of its users. Finally, CodeGeeX is publicly accessible and in Sep. 2022, we open-sourced its code, model weights (the version of 850B tokens), API, extensions, and HumanEval-X at https://github.com/THUDM/CodeGeeX.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, Jie Tang
2023-03-30T17:34:01Z
http://arxiv.org/abs/2303.17568v2
# CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X ###### Abstract Large pre-trained code generation models, such as OpenAI Codex, can generate syntax- and function-correct code, making the coding of programmers more productive and our pursuit of artificial general intelligence closer. In this paper, we introduce CodeGeeX, a multilingual model with 13 billion parameters for code generation. CodeGeeX is pre-trained on 850 billion tokens of 23 programming languages as of June 2022. Our extensive experiments suggest that CodeGeeX outperforms multilingual code models of similar scale for both the tasks of code generation and translation on HumanEval-X. Building upon HumanEval (Python only), we develop the HumanEval-X benchmark for evaluating multilingual models by hand-writing the solutions in C++, Java, JavaScript, and Go. In addition, we build CodeGeeX-based extensions on Visual Studio Code, JetBrains, and Cloud Studio, generating 4.7 billion tokens for tens of thousands of active users per week. Our user study demonstrates that CodeGeeX can help to increase coding efficiency for 83.4% of its users. Finally, CodeGeeX is publicly accessible and in Sep. 2022, we open-sourced its code, model weights (the version of 850B tokens), API, extensions, and HumanEval-X at [https://github.com/THUDM/CodeGeeX](https://github.com/THUDM/CodeGeeX). ## 1 Introduction Given the description of a human intent, such as "write a factorial function", can the machine automatically generate an executable program that addresses this need? This is the problem of _automatic program writing_ that has been explored since the early days of computer science in the 1960s (Waldinger and Lee, 1969; Summers, 1977). From LISP-based pioneering deductive synthesis approaches (Waldinger and Lee, 1969; Summers, 1977) to modern program synthesis systems (Solar-Lezama, 2008; Polozov and Gulwani, 2015), to end-to-end code generation via deep neural networks (Mou et al., 2015; Svyatkovskiy et al., 2020; Sun et al., 2020), tremendous efforts have been made to enable machines to automatically write correct programs as part of the quest for artificial general intelligence. By treating programs as language sequences, neural sequential architectures, such as recurrent neural networks and the transformer (Vaswani et al., 2017), can be naturally applied to code generation. In fact, by 2020, transformer-based techniques (Svyatkovskiy et al., 2020; Sun et al., 2020) had begun to generate code that is both syntactically correct and consistent, showing the potential of _automatic program writing_. This progress was significantly furthered when large language models (transformers with billions of parameters) met massive open-sourced code data. Notably, the OpenAI Codex (Chen et al., 2021) model (Python only) with 12 billion (12B) parameters pioneered and demonstrated the potential of large code generation models pre-trained on billions of lines of public code. By using the generative pre-training (GPT) strategy, Codex can solve introductory-level programming problems in Python with a high probability. Research studies (Ziegler et al., 2022) also show that 88% of users of GitHub Copilot--a paid service powered by Codex--feel more productive when coding with it. 
Since then, large pre-trained code models have been extensively developed, including DeepMind AlphaCode (Li et al., 2022), Salesforce CodeGen (Nijkamp et al., 2022), Meta InCoder (Fried et al., 2022), and Google PaLM-Coder-540B (Chowdhery et al., 2022). In this work, we present CodeGeeX, a multilingual code generation model with 13 billion parameters, pre-trained on a large code corpus of 23 programming languages. It was trained on more than 850 billion tokens on a cluster of 1,536 Ascend 910 AI Processors between April and June 2022, and was publicly released in Sep. 2022 (cf. the GitHub repo). CodeGeeX has the following properties. First, different from Codex in Chen et al. (2021), both CodeGeeX--the model itself--and how such a scale of code model can be pre-trained are open-sourced, facilitating the understanding of and advances in pre-trained code generation models. CodeGeeX also supports cross-platform inference on both Ascend and NVIDIA GPUs. Second, in addition to code generation and code completion as in Codex and others, CodeGeeX supports the tasks of code explanation and code translation between language pairs (cf. Figure 1 (a)). Third, it offers consistent performance advantages over well-known _multilingual_ code generation models of a similar scale, including CodeGen-16B, GPT-NeoX-20B, InCoder-6.7B, and GPT-J-6B (cf. Figure 1 (b) and (c)). We also build the free CodeGeeX extensions in several IDEs, currently including Visual Studio Code, JetBrains, and Tencent Cloud Studio (a Web IDE). They support several different modes--code completion, function-level generation, code translation, code explanation, and customizable prompting--to help users' programming tasks in real time. Since its release, there have been tens of thousands of daily active users, each of whom on average makes 250+ API calls per weekday. As of this writing, the CodeGeeX model generates 4.7 billion tokens per week. Our user survey suggests that 83.4% of users feel the CodeGeeX extensions improve their programming efficiency. Finally, we develop the HumanEval-X benchmark for evaluating multilingual code models, as 1) HumanEval (Chen et al., 2021)--developed by OpenAI for evaluating Codex--and other benchmarks (Austin et al., 2021; Hendrycks et al., 2021; Nijkamp et al., 2022) only consist of programming problems in a single language, and 2) existing multilingual datasets (Ren et al., 2020; Lu et al., 2021; Zhu et al., 2022) use string similarity metrics like BLEU (Papineni et al., 2002) for evaluation rather than really verifying the functional correctness of generated code (a minimal sketch of such execution-based checking is given after the figure caption below). Specifically, for each problem--defined only for Python--in HumanEval, we manually rewrite its prompt, canonical solution, and test cases in C++, Java, JavaScript, and Go. In total, HumanEval-X covers 820 hand-written problem-solution pairs (164 problems, each having solutions in 5 languages). Importantly, HumanEval-X supports the evaluation of both code generation and code translation between different languages. Figure 1: Summary of CodeGeeX. (a): In supported IDEs, users can interact with CodeGeeX by providing prompts. Different models are used to support three tasks: code generation, code translation and code explanation. (b) and (c): In HumanEval and our newly-proposed HumanEval-X, CodeGeeX shows promising multilingual abilities and consistently outperforms other multilingual code generation models. 
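To illustrate what verifying functional correctness means in practice, the sketch below executes a generated candidate solution against hand-written test assertions. This is a minimal, unsandboxed illustration of the idea rather than the actual HumanEval-X harness (which must isolate execution for safety), and the `add` example is invented for demonstration.

```python
def passes_tests(candidate_src: str, test_src: str) -> bool:
    """Run hand-written test code against a generated candidate solution."""
    env: dict = {}
    try:
        exec(candidate_src, env)  # define the candidate function(s)
        exec(test_src, env)       # raises AssertionError on any failed check
        return True
    except Exception:
        return False

candidate = "def add(a, b):\n    return a + b\n"
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0\n"
print(passes_tests(candidate, tests))  # True
```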
The contributions of this work can be summarized as follows: * We develop and release CodeGeeX, a 13B pre-trained 23-language code generation model that demonstrates consistent outperformance on code generation and translation over its multilingual baselines of the same scale. * We build the CodeGeeX extensions on VS Code4, JetBrains5, and Tencent Cloud Studio. Compared to Copilot, they support more diverse functions, including code completion, generation, translation, and explanation. According to the user survey, CodeGeeX can improve the coding efficiency for 83.4% of its users. Footnote 4: [https://marketplace.visualstudio.com/items?itemName=aminer.codgeex](https://marketplace.visualstudio.com/items?itemName=aminer.codgeex) Footnote 5: [https://plugins.jetbrains.com/plugin/20587-codgegex](https://plugins.jetbrains.com/plugin/20587-codgegex) * We hand-craft the HumanEval-X benchmark to evaluate multilingual code models for the tasks of code generation and translation in terms of functional correctness, facilitating the understanding and development of pre-trained (multilingual) code models. ## 2 The CodeGeeX Model CodeGeeX is a multilingual code generation model with 13 billion (13B) parameters, pre-trained on a large code corpus of 23 programming languages. As of June 22, 2022, CodeGeeX has been trained on more than 850 billion tokens on a cluster of 1,536 Ascend 910 AI Processors for over two months. We introduce the CodeGeeX model and its design choices. The consensus is that it is computationally unaffordable to test different architectural designs for large pre-trained models (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Zeng et al., 2022), though they define the inductive bias of models. \begin{table} \begin{tabular}{|l|c|c|l|l|} \hline **Model** & Open-source & \# Params & Languages & Benchmark \\ \hline Codex (Chen et al., 2021) & ✗ & 12B & Python & HumanEval, APPS \\ \hline AlphaCode (Li et al., 2022) & ✗ & 41B & 12 languages & HumanEval, APPS \\ \hline PaLM-Coder (Chowdhery et al., 2022) & ✗ & 8B, 62B, 540B & Multiple & HumanEval, MBPP \\ \hline PolyCoder (Xu et al., 2022) & ✓ & 2.7B & 12 languages & HumanEval \\ \hline GPT-Neo (Black et al., 2021) & ✓ & 1.3B, 2.7B & Multiple & HumanEval \\ \hline GPT-NeoX (Black et al., 2022) & ✓ & 20B & Multiple & HumanEval \\ \hline GPT-J (Wang and Komatsuzaki, 2021) & ✓ & 6B & Multiple & HumanEval \\ \hline \end{tabular} \end{table} Table 1: Large pre-trained language models related to programming languages in the literature. 
### CodeGeeX's Architecture

**The Transformer Backbone.** Similar to recent pre-trained models such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), and Codex (Chen et al., 2021), CodeGeeX follows the generative pre-training (GPT) architecture (Radford et al., 2018) with a decoder-only style for autoregressive (programming) language modeling. The core architecture of CodeGeeX is a 39-layer transformer decoder. In each transformer layer (Figure 2), we apply a multi-head self-attention mechanism (Vaswani et al., 2017) followed by MLP layers, together with layer normalization (Ba et al., 2016) and residual connections (He et al., 2016). We use an approximation of the GELU (Gaussian Error Linear Unit) operation (Hendrycks and Gimpel, 2016), namely FastGELU, which is more efficient on the Ascend 910 AI Processor; it is a numerically stable form of \(X_{i}\cdot\text{sigmoid}(1.702\,X_{i})\):

\[\text{FastGELU}(X_{i})=\frac{X_{i}\,\exp\big{(}0.851\,(X_{i}-|X_{i}|)\big{)}}{1+\exp(-1.702\,|X_{i}|)} \tag{1}\]

**Generative Pre-Training Objective.** Adopting the GPT paradigm (Radford et al., 2019; Chen et al., 2021), we train the model on a large amount of unlabeled code data. The principle is to iteratively take code tokens as input, predict the next token, and compare it with the ground truth. Specifically, for any input sequence \(\{x_{1},x_{2},...,x_{n}\}\) of length \(n\), the output of CodeGeeX is a probability distribution of the next token \(\mathbb{P}(x_{n+1}|x_{1},x_{2},...,x_{n},\Theta)=p_{n+1}\in[0,1]^{1\times v}\), where \(\Theta\) represents all parameters of the model and \(v\) is the vocabulary size. By comparing it with the real distribution, _i.e._, a one-hot vector \(y_{n+1}\in\{0,1\}^{1\times v}\) of the ground-truth token, we can optimize the cumulative cross-entropy loss:

\[\mathcal{L}=-\sum_{n=1}^{N-1}y_{n+1}\log\mathbb{P}(x_{n+1}|x_{1},x_{2},...,x_{n},\Theta) \tag{2}\]

Figure 2: CodeGeeX's model architecture. CodeGeeX is a code generation model with 13B parameters, consisting of 39-layer left-to-right transformer decoders and a top query layer. It takes text/code tokens as input and outputs the probability of the next token autoregressively.

**The Top Query Layer and Decoding.** The original GPT model uses a pooler function to obtain the final output. We instead use an extra query layer (Zeng et al., 2021) on top of all other transformer layers to obtain the final embedding through attention. As shown in Figure 2, the input of the top query layer replaces the query input \(X_{in}\) by the query embedding of position \(n+1\). The final output is multiplied by the transpose of the word embedding matrix to get the output probability. For decoding strategies, CodeGeeX supports greedy decoding, temperature sampling, top-k sampling, top-p sampling, and beam search. Finally, detokenization turns the selected token IDs back into text.
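As a concrete reference for Eq. (1), the following is a minimal NumPy sketch (the function names and test values are our own, not from the released code); it also checks the identity with the plain sigmoid-approximate GELU \(x\cdot\sigma(1.702x)\), which Eq. (1) rewrites in an overflow-safe way.

```python
import numpy as np

def fast_gelu(x: np.ndarray) -> np.ndarray:
    # Eq. (1): both exponents are non-positive, so exp() cannot
    # overflow even for very large |x|.
    return x * np.exp(0.851 * (x - np.abs(x))) / (1.0 + np.exp(-1.702 * np.abs(x)))

def gelu_sigmoid(x: np.ndarray) -> np.ndarray:
    # Naive sigmoid-approximate GELU: x * sigmoid(1.702 * x).
    return x / (1.0 + np.exp(-1.702 * x))

x = np.linspace(-8.0, 8.0, 1001)
assert np.allclose(fast_gelu(x), gelu_sigmoid(x))  # identical on overflow-safe inputs
```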
### Pre-Training Setup

**Code Corpus.** The training corpus contains two parts. The first part comes from open-source code datasets: the Pile (Gao et al., 2020) and CodeParrot6. The Pile contains a subset of public GitHub repositories with more than 100 stars, from which we select files in 23 popular programming languages, including C++, Python, Java, JavaScript, C, Go, and so on. We identify the programming language of each file based on its suffix and the major language of the repository it belongs to. CodeParrot is another public Python dataset from BigQuery. The second part is supplementary data in Python, Java, and C++ directly scraped from GitHub public repositories that do not appear in the first part. We choose repositories that have at least one star and a total size within 10MB, and then filter out files that: 1) have more than 100 characters per line on average, 2) are automatically generated, 3) have a ratio of alphabetic characters of less than 40%, or 4) are bigger than 100KB or smaller than 1KB. We format Python code according to the PEP8 standards.

Footnote 6: [https://huggingface.co/datasets/transformersbook/codeparrot](https://huggingface.co/datasets/transformersbook/codeparrot)

Figure 3 shows the composition of the 158B-token training data, which contains 23 programming languages. We divide the training data into segments of equal length. To help the model distinguish between multiple languages, we add a language-specific tag before each segment in the form of [Comment sign]language: [LANG], _e.g._, # language: Python.

Figure 3: Language distribution and tags of CodeGeeX's data.

**Tokenization.** The first step is to convert code snippets into numerical vectors. Considering that 1) there is a large number of natural-language comments in code data and 2) the names of variables, functions, and classes are often meaningful words, we treat code data the same as text data and apply the GPT-2 tokenizer (Radford et al., 2019). It is a BPE (Byte Pair Encoding) (Sennrich et al., 2015) tokenizer that deals with the open-vocabulary problem using a fixed-size vocabulary with variable-length characters. The initial vocabulary size is 50,000; following Chen et al. (2021), we encode runs of whitespace as extra tokens to increase the encoding efficiency. Specifically, L whitespaces are represented by <|extratoken_X|>, where X=8+L. Since the vocabulary contains tokens from various natural languages, it allows CodeGeeX to process tokens in languages other than English, like Chinese, French, Russian, Japanese, and more. The final vocabulary size is \(v=52,224\). After tokenization, any code snippet or text description can be transformed into a vector of integers. More details can be found in Appendix A.2.

**The Input Word and Positional Embeddings.** Given the tokens, the next step is to associate each token with a word embedding. By looking up the token ID in a word embedding matrix \(W_{word}\in\mathbb{R}^{v\times h}\), where \(h=5120\) is the hidden size, a learnable embedding \(x_{word}\in\mathbb{R}^{h}\) is obtained for each token. To capture positional information, we also adopt learnable positional embeddings that map the current position ID to a learnable embedding \(x_{pos}\in\mathbb{R}^{h}\), taken from \(W_{pos}\in\mathbb{R}^{n_{max}\times h}\), where \(n_{max}=2048\) is the maximum sequence length. The two embeddings are then added to obtain the input embedding \(x_{in}=x_{word}+x_{pos}\) for each token. Finally, the entire sequence can be turned into input embeddings \(X_{in}\in\mathbb{R}^{n\times h}\), where \(n\) is the input sequence length.
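To illustrate the two preprocessing rules above (language tags and the <|extratoken_X|> whitespace encoding), here is a toy sketch; it is our own illustration of the stated rules, not the released tokenizer code, and the cap on run length is an assumption.

```python
import re

COMMENT_SIGN = {"Python": "#", "C++": "//", "Java": "//"}  # illustrative subset

def add_language_tag(segment: str, lang: str) -> str:
    # Prepend "[Comment sign]language: [LANG]" to a training segment.
    return f"{COMMENT_SIGN[lang]} language: {lang}\n{segment}"

def encode_whitespace(code: str, max_run: int = 30) -> str:
    # Map a run of L spaces (L >= 2, capped at max_run, our assumption)
    # to <|extratoken_X|> with X = 8 + L, as described above.
    def repl(match: re.Match) -> str:
        return f"<|extratoken_{8 + len(match.group(0))}|>"
    return re.sub(" {2,%d}" % max_run, repl, code)

tagged = add_language_tag("def f():\n    return 1", "Python")
print(encode_whitespace(tagged))
# -> "# language: Python\ndef f():\n<|extratoken_12|>return 1"
```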
### CodeGeeX Training

**Parallel Training on Ascend 910.** CodeGeeX was trained on a cluster of Ascend 910 AI Processors (32GB) with Mindspore (v1.7.0). We faced and addressed numerous unforeseen technical and engineering challenges during pre-training, as Ascend and Mindspore are relatively new compared to NVIDIA GPUs and PyTorch/TensorFlow. The entire pre-training process took two months on 192 nodes with 1,536 AI processors, during which the model consumed 850B tokens, equivalent to 5+ epochs (213,000 steps). Detailed configurations can be found in Table 2.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
**Category** & **Parameter** & **Value** \\
\hline
\multirow{6}{*}{**Environment**} & Framework & Mindspore v1.7.0 \\
 & Hardware & 1,536x Ascend 910 AI processors \\
 & Mem per GPU & 32GB \\
 & GPUs per node & 8 \\
 & CPUs per node & 192 \\
 & RAM per node & 2048GB \\
\hline
\multirow{12}{*}{**Model**} & Model parameters & 13B \\
 & Vocabulary size & 52224 \\
 & Position embedding & Learnable \\
 & Maximum sequence length & 2048 \\
 & Hidden size \(h\) & 5120 \\
 & Feed-forward size \(4h\) & 20480 \\
 & Feed-forward activation & FastGELU \\
 & Layernorm epsilon & 1e-5 \\
 & Layernorm precision & FP32 \\
 & Number of attention heads \(h_{n}\) & 40 \\
 & Attention softmax precision & FP32 \\
 & Dropout rate & 0.1 \\
\hline
\multirow{3}{*}{**Parallelism**} & Model parallel size & 8 \\
 & Data parallel size & 192 \\
 & Global batch size & 3072 \\
\hline
\multirow{10}{*}{**Optimization**} & Optimizer & Adam \\
 & Optimizer parameters & \(\beta_{1}=0.9,\beta_{2}=0.999\) \\
 & Initial/final learning rate & 1e-4/1e-6 \\
 & Warm-up step & 2000 \\
 & Decay step & 200000 \\
 & Learning rate scheduler & cosine decay \\
 & Loss function \(\mathcal{L}\) & Cross entropy \\
 & Loss scaling & Dynamic \\
 & Loss scaling window & 1000 \\
 & Trained steps & 213000 \\
\hline \hline
\end{tabular}
\end{table} Table 2: Training configuration of CodeGeeX.

To increase training efficiency, we adopt 8-way model parallelism together with 192-way data parallelism, with the ZeRO-2 (Rajbhandari et al., 2020) optimizer enabled to further reduce the memory consumption of optimizer states. Finally, the micro-batch size is 16 per node and the global batch size reaches 3,072. Specifically, we use the Adam optimizer (Kingma and Ba, 2014) to optimize the loss in Equation 2. The model weights are in FP16 format, except that we use FP32 for layer normalization and softmax for higher precision and stability. The model takes about 27GB of GPU memory. We start from an initial learning rate of 1e-4 and apply a cosine learning rate decay:

\[lr_{current}=lr_{min}+0.5\,(lr_{max}-lr_{min})\left(1+\cos\left(\frac{n_{current}}{n_{decay}}\pi\right)\right) \tag{3}\]

During the two-month training, the training loss of CodeGeeX continued to decrease. We evaluated intermediate checkpoints on the HumanEval-X code generation task and observed that the performance increases continuously; see Figures 13 and 14 in Appendix A.3.
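For reference, a minimal sketch of the schedule in Eq. (3), using the values from Table 2 (initial/final learning rates 1e-4/1e-6, 2,000 warm-up steps, 200,000 decay steps); the linear warm-up ramp and the step-counting convention are our assumptions.

```python
import math

LR_MAX, LR_MIN = 1e-4, 1e-6
WARMUP_STEPS, DECAY_STEPS = 2_000, 200_000

def learning_rate(step: int) -> float:
    if step < WARMUP_STEPS:
        return LR_MAX * step / WARMUP_STEPS              # linear warm-up (assumed)
    n_current = min(step - WARMUP_STEPS, DECAY_STEPS)    # clamp once decay ends
    cos_term = 1.0 + math.cos(math.pi * n_current / DECAY_STEPS)
    return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * cos_term   # Eq. (3)

for s in (0, 2_000, 102_000, 202_000, 213_000):
    print(f"step {s:>7}: lr = {learning_rate(s):.3e}")
```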
**Training Efficiency Optimization.** Over the course of the training, we actively optimized the Mindspore framework to unlock the full performance of the Ascend 910. Notably, we adopt the following techniques, which significantly improve training efficiency:

* Kernel fusion: We fuse several element-wise operators to improve calculation efficiency on the Ascend 910, including Bias+LayerNorm, BatchMatmul+Add, FastGELU+Matmul, Softmax, etc. We also optimize the LayerNorm operator to support multi-core calculation.
* Auto Tune optimization: When loading models, Mindspore first compiles them to static computational graphs. It uses the Auto Tune tool to optimize the choice of operators (_e.g._, matrix multiplication in different dimensions), and it applies graph optimization techniques for operator fusion and constant folding.

Table 3 compares the training efficiency before and after our optimization, where overall efficiency is measured in trained tokens per day. We observe that the efficiency per processor was improved 3\(\times\) compared to the non-optimized implementation, and the overall token throughput of the 1,536-processor cluster improved to 224% of the non-optimized value (from 24.2B to 54.3B tokens per day).

\begin{table}
\begin{tabular}{c|c c}
\hline \hline
 & **Before** & **After** \\
\hline
**Device** & Ascend 910 & Ascend 910 \\
**\#GPUs** & 1536 & 1536 \\
**Parallelism** & Data parallel + Model parallel & Data parallel + Model parallel \\
**Sequence length** & 2048 & 2048 \\
**Global batch size** & 2048 & 3072 \\
**Step time (s)** & 15 & 10 \\
**Overall efficiency** & 24.2B tokens/day & 54.3B tokens/day \\
\hline \hline
\end{tabular}
\end{table} Table 3: Training efficiency (before and after optimization).

### Fast Inference

To serve the pre-trained CodeGeeX, we implement a pure PyTorch version of CodeGeeX that supports inference on NVIDIA GPUs. To achieve fast and memory-efficient inference, we apply both quantization and acceleration techniques to the pre-trained CodeGeeX.

**Quantization.** We apply post-training quantization to decrease the memory consumption of CodeGeeX during inference. We transform the weights \(W\) of all linear transformations from FP16 to INT8 using common absolute-maximum quantization:

\[W_{q}=\text{Round}\left(\frac{W}{\lambda}\right),\quad\lambda=\frac{\text{Max}(|W|)}{2^{b-1}-1} \tag{4}\]

where \(b=8\) is the bitwidth and \(\lambda\) is the scaling factor. This quantization transforms FP16 values in \([-\text{Max}(|W|),\text{Max}(|W|)]\) to integers in \([-127,127]\). As shown in Table 4, the memory consumption of CodeGeeX decreases from \(\sim\)26.9GB to \(\sim\)14.7GB (down by 45.4%), allowing CodeGeeX inference on a single RTX 3090 GPU. Importantly, Figure 4 shows that the quantization only slightly affects performance on the code generation task (Cf. Section 3.2 for details about HumanEval-X).

**Acceleration.** After quantization, we further implement a faster version of CodeGeeX using NVIDIA FasterTransformer (FastTrans). It supports highly-optimized operations via layer fusion, GEMM autotuning, and hardware-accelerated functions. For the INT8-quantized version, we also implement a custom kernel that accelerates mixed-precision matrix multiplication between INT8 weights and FP16 activation vectors. According to Table 4, the INT8 quantization plus FastTrans implementation achieves the fastest inference speed and the lowest GPU memory consumption on a single GPU; the inference time per token is within 13ms (1.61 seconds / 128 tokens). We also compare the inference speed with implementations in LLM.int8() (Dettmers et al., 2022) and Oneflow (Yuan et al., 2021).

[Table 4: Inference memory consumption (GB) and generation time (s) of different CodeGeeX implementations (PyTorch, Megatron, Oneflow, LLM.int8(), FasterTransformer) in FP16 and INT8, for generation lengths L=128 to L=2048.]
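A small sketch of the absolute-maximum INT8 scheme in Eq. (4) follows (our illustration, not the actual inference kernel; in practice the scheme is applied to the model's linear-layer weights and dequantization happens at matmul time).

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    # Eq. (4): lambda = max(|W|) / (2^(b-1) - 1) with b = 8,
    # mapping FP16 values in [-max|W|, max|W|] to integers in [-127, 127].
    scale = float(np.max(np.abs(w))) / 127.0
    w_q = np.round(w / scale).astype(np.int8)
    return w_q, scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    return w_q.astype(np.float16) * np.float16(scale)

w = np.random.randn(5120, 5120).astype(np.float16)  # one hidden-size weight matrix
w_q, scale = quantize_int8(w)
print(w.nbytes / 2**20, "MB ->", w_q.nbytes / 2**20, "MB")   # 50 MB -> 25 MB
print(np.abs(dequantize(w_q, scale) - w).max())              # small quantization error
```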
## 3 The HumanEval-X Benchmark

We develop the HumanEval-X benchmark7 for evaluating multilingual code models. It contains 164 code problems defined for five major languages: C++, Java, JavaScript, Go, and Python, resulting in 164\(\times\)5=820 problem-solution pairs. Each problem supports both code generation and code translation. Examples of the problems can be found in Appendix A.5.

Footnote 7: The HumanEval-X dataset and docker image are at [https://hub.docker.com/r/codegeex/codegeex](https://hub.docker.com/r/codegeex/codegeex).

### HumanEval-X: A Multilingual Benchmark

HumanEval (Chen et al., 2021) was developed by OpenAI to evaluate Codex. However, similar to MBPP (Austin et al., 2021) and APPS (Hendrycks et al., 2021), it only consists of handcrafted programming problems in Python and thus cannot be directly applied to systematically evaluate the performance of multilingual code generation. To this end, we propose to develop a multilingual variant of HumanEval, referred to as HumanEval-X. This is not trivial: for each problem, defined only for Python, in HumanEval, we manually rewrite its prompt, canonical solution, and test cases in the other four languages--C++, Java, JavaScript, and Go. Altogether, HumanEval-X contains 820 problem-solution pairs, each comprising the following parts:

* **task_id**: programming language and numerical problem id, _e.g._, Java/0 represents the 0-th problem in Java;
* **declaration**: function declaration including necessary libraries or packages;
* **docstring**: description that specifies the functionality and example input/output;
* **prompt**: function declaration plus docstring;
* **canonical_solution**: a verified solution to the problem;
* **test**: test program including test cases.

Each problem-solution pair in HumanEval-X supports both code generation and code translation. An illustrative example is shown in Figure 5. We take the following steps to make sure that the rewritten code conforms to the programming style of the corresponding language. First, we use the customary naming styles, like CamelCase in Java, Go, and JavaScript, and snake_case in C++. Second, we put the docstrings before the function declaration in Java, JavaScript, C++, and Go. Symbols in docstrings are modified accordingly: single quotes are replaced by double quotes in some languages, and keywords like True/False and None are also replaced. Third, we refine test cases according to language-specific behaviors, rather than forcing the programs to return the same result in every language. For example, when converting an integer to a binary string, the Python method bin adds a prefix "0b" before the string while the Java method Integer.toBinaryString does not, so we remove such prefixes in Java test cases. Last, we also take care of the rounding function: in Python, round converts half to the closest even number, unlike in other languages, so we change the test cases to match the rounding implementation in each language.

Figure 5: An illustration of code _generation_ and _translation_ tasks in HumanEval-X. Declarations, docstrings, solutions, and test cases are marked with red, green, blue, and purple respectively. _Generation_ uses declaration and docstring as input to generate the solution. _Translation_ uses declaration in both languages and the solution in the source language as input to generate a solution in the target language (the docstring is not used, to prevent models from directly solving the problem).

### HumanEval-X: Tasks

In HumanEval-X, we evaluate two tasks: code generation and code translation.
**Code Generation.** The task of code generation takes a problem description (_e.g._, "write a factorial function") as input and generates the solution in the selected language (Cf. Figure 1 (a)). Specifically, the model takes in the prompt, consisting of the declaration and docstring, and generates the implementation of the function. Note that HumanEval-X uses the same problem set for all five languages; thus, each problem can be solved in one single language or in multiple languages simultaneously.

**Code Translation.** The task of code translation takes the implementation of a problem in the source language and generates a counterpart implementation in the target language. Precisely, its input includes the function declaration and a canonical solution in the source language (_e.g._, Python). The model should translate the solution to the target language. Adding the declaration in the target language restricts function names and variable types, making the evaluation easier, especially under the zero-shot setting. To prevent the models from directly solving the problem rather than translating, we do not include the docstrings. HumanEval-X supports translation between all pairs of the 5 languages, _i.e._, in total 20 source-target language pairs.

**Metric.** For both tasks, we use test cases to evaluate the exact functional correctness of the generated code, measuring performance with pass@\(k\) (Kulal et al., 2019). This makes the evaluation practically meaningful, and completely different from string-similarity metrics like BLEU (Papineni et al., 2002) and CodeBLEU (Ren et al., 2020; Lu et al., 2021; Zhu et al., 2022). Specifically, we use the unbiased method of Chen et al. (2021) to estimate pass@\(k\):

\[\text{pass@}k:=\mathbb{E}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right],\quad n=200,\;k\in\{1,10,100\} \tag{5}\]

where \(n\) is the total number of generations per problem (\(n=200\) in this work), \(k\) is the sampling budget (typically \(k\in\{1,10,100\}\)), and \(c\) is the number of samples that pass all test cases. The term \(1-\binom{n-c}{k}/\binom{n}{k}\) is the estimated pass@\(k\) for a single problem; in practice, we average it over all test-set problems to obtain the expectation \(\mathbb{E}\).

**Multilingual Metric with Budget Allocation.** Unlike monolingual models, multilingual code models can solve problems by allocating the generation budget across languages to increase sampling diversity and improve the solve rate. Given a budget \(k\), we can distribute a part \(n_{i}\) of it to each language with the assignment

\[\pi=(n_{1},n_{2},...,n_{m}),\quad\sum_{i=1}^{m}n_{i}=k, \tag{6}\]

where \(n_{i}\) is the generation budget assigned to language \(i\) and \(m\) is the number of candidate languages. Under an assignment \(\pi=(n_{1},...,n_{m})\), for a problem \(p\), pass@\(k_{\pi}\) can be estimated by:

\[\text{pass@}k_{\pi}=\mathbb{E}\left[1-\prod_{i=1}^{m}\frac{\binom{n-c_{i}}{n_{i}}}{\binom{n}{n_{i}}}\right], \tag{7}\]

where \(n\) is the total number of generations per language and \(c_{i}\) is the number of samples that pass all test cases in language \(i\). We show in Section 4.3 that multilingual models benefit from budget allocation strategies and achieve higher solve rates than with any single language.
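For reference, a short sketch of the unbiased estimator in Eq. (5); the product form below equals \(1-\binom{n-c}{k}/\binom{n}{k}\) but avoids large binomial coefficients, following the reference implementation described by Chen et al. (2021). The sample counts at the bottom are made-up numbers for illustration.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased single-problem pass@k from Eq. (5):
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product.
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must pass
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Average over the problem set: (n, c) pairs with n = 200 samples each.
results = [(200, 17), (200, 0), (200, 64)]  # illustrative counts only
print(np.mean([pass_at_k(n, c, k=10) for n, c in results]))
```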
## 4 Evaluating CodeGeeX on HumanEval-X

We evaluate CodeGeeX for the code generation and translation tasks on the multilingual benchmark HumanEval-X. Note that, by inheriting from HumanEval, the HumanEval-X results on Python are equivalent to an evaluation on HumanEval.

### Evaluation Settings

**Baselines.** We compare CodeGeeX with five competitive open-source baselines: GPT-J-6B (Wang and Komatsuzaki, 2021), GPT-NeoX-20B (Black et al., 2022), InCoder-6.7B (Fried et al., 2022), and CodeGen-Multi-6B/16B (Nijkamp et al., 2022). These models are all trained on multilingual code data but were previously evaluated only on HumanEval (Python). They are also close to the scale of CodeGeeX or even larger; smaller models in the literature are not considered. For all baselines, we use the versions available on HuggingFace (Wolf et al., 2019). We follow the experimental settings of HumanEval-X in Section 3.2. Further details can be found in Appendix A.3.

**Environment.** Experiments are conducted on NVIDIA A100-SXM-40GB GPUs under Linux. We design a distributed generation framework based on ZeroMQ to balance GPU loads. All generated code is tested in language-specific environments with the necessary packages installed.

**Decoding Strategy.** We use temperature sampling (\(t\in[0,1]\)) and nucleus sampling (\(p\in[0,1]\)) for generation. For CodeGeeX in code generation, we use \(t=0.2,p=0.95\) for pass@1 and \(t=0.8,p=0.95\) for pass@10 and pass@100 (except for Go and JavaScript, where \(p=0.9\)). For CodeGeeX in code translation, we use \(t=0.2,p=0.95\) for pass@1 and \(t=0.8,p=0.95\) for pass@10 and pass@100 for all language pairs. For the fine-tuned CodeGeeX-13B-FT used for code translation, we use \(p=0.95\). For all baselines in both tasks, we use \(t=0.2,p=0.95\) for pass@1 and \(t=0.8,p=0.95\) for pass@10 and pass@100. All pass@\(k\), \(k\in\{1,10,100\}\), results are estimated with \(n=200\). The maximum number of generated tokens is set to 1024 for all models.

### Results of Code Generation and Translation

**Multilingual Code Generation.** Table 5 and Figure 6 report the code generation results in terms of pass@\(k\), \(k\in\{1,10,100\}\), for CodeGeeX and the five baseline models on five programming languages. CodeGeeX significantly outperforms models trained with mixed corpora (GPT-J-6B and GPT-NeoX-20B), even though GPT-NeoX-20B has many more parameters. Among models trained on code, CodeGeeX outperforms those of smaller scale (InCoder-6.7B, CodeGen-Multi-6B) by a large margin and is competitive with the larger CodeGen-Multi-16B. CodeGeeX achieves the best average performance among all models, even slightly better than the larger CodeGen-Multi-16B on all three metrics (0.37%\(\sim\)1.67% improvements). When considering individual languages, models show preferences highly related to the training set distribution.
For example, the best language for CodeGeeX is Python, while the best language for CodeGen-Multi-16B is Java. Examples of CodeGeeX generations can be found in Appendix A.5.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
\multirow{2}{*}{**Language**} & \multirow{2}{*}{**Metric**} & **GPT-J** & **GPT-NeoX** & **InCoder** & **CodeGen** & **CodeGen** & **CodeGeeX** \\
 & & **-6B** & **-20B** & **-6.7B** & **-Multi-6B** & **-Multi-16B** & **-13B (ours)** \\
\hline
\multirow{3}{*}{**Python**} & pass@1 & 11.10\% & 13.83\% & 16.41\% & 19.41\% & 19.22\% & **22.89\%** \\
 & pass@10 & 18.67\% & 22.72\% & 26.55\% & 30.29\% & 34.64\% & **39.57\%** \\
 & pass@100 & 30.98\% & 39.56\% & 43.95\% & 49.63\% & 55.17\% & **60.92\%** \\
\hline
\multirow{3}{*}{**C++**} & pass@1 & 7.54\% & 9.90\% & 9.50\% & 11.44\% & **18.05\%** & 17.06\% \\
 & pass@10 & 13.67\% & 18.99\% & 19.30\% & 26.23\% & 30.84\% & **32.21\%** \\
 & pass@100 & 30.16\% & 38.75\% & 36.10\% & 42.82\% & 50.90\% & **51.00\%** \\
\hline
\multirow{3}{*}{**Java**} & pass@1 & 7.86\% & 8.87\% & 9.05\% & 15.17\% & 14.95\% & **20.04\%** \\
 & pass@10 & 14.37\% & 19.55\% & 18.64\% & 31.74\% & **36.73\%** & 36.70\% \\
 & pass@100 & 32.96\% & 42.23\% & 40.70\% & 53.91\% & **60.62\%** & 58.42\% \\
\hline
\multirow{3}{*}{**JavaScript**} & pass@1 & 8.99\% & 11.28\% & 12.98\% & 15.41\% & **18.40\%** & 17.59\% \\
 & pass@10 & 16.32\% & 20.78\% & 22.98\% & 27.92\% & **32.80\%** & 32.28\% \\
 & pass@100 & 33.77\% & 42.67\% & 43.34\% & 48.81\% & **56.48\%** & 56.33\% \\
\hline
\multirow{3}{*}{**Go**} & pass@1 & 4.01\% & 5.00\% & 8.68\% & 9.98\% & 13.03\% & **14.43\%** \\
 & pass@10 & 10.81\% & 15.70\% & 13.80\% & 23.26\% & 25.46\% & **25.68\%** \\
 & pass@100 & 23.70\% & 32.08\% & 28.31\% & 41.01\% & **48.77\%** & 47.14\% \\
\hline
\multirow{3}{*}{**Average**} & pass@1 & 7.90\% & 9.78\% & 11.33\% & 14.28\% & 16.73\% & **18.40\%** \\
 & pass@10 & 14.77\% & 19.55\% & 20.25\% & 27.89\% & 32.09\% & **33.29\%** \\
 & pass@100 & 30.32\% & 39.06\% & 38.48\% & 47.24\% & 54.39\% & **54.76\%** \\
\hline \hline
\end{tabular}
\end{table} Table 5: Results of the **code generation** task in HumanEval-X.

**Cross-Lingual Code Translation.** Table 6 shows the results on code translation. For CodeGeeX, we evaluate both the original version, CodeGeeX-13B, and the fine-tuned CodeGeeX-13B-FT. CodeGeeX-13B-FT is first fine-tuned using the training set of the code translation task in XLCoST (Zhu et al., 2022) and then further fine-tuned on a small amount of Go data (since Go is missing in XLCoST). Among all translation pairs, CodeGeeX-13B-FT performs the best on pass@100 in 11 out of 20, while CodeGen-Multi-16B is the best on 7 of them. We also observe a clear preference of languages by different models: CodeGeeX performs best when translating other languages into Python and C++, while CodeGen-Multi-16B performs better when translating into JavaScript and Go.

**Test Result Analysis.** We group the samples' test results into five categories--passing, wrong answer, runtime error, syntax/semantic error, and unfinished generation--and calculate the proportion of each for every model. Runtime errors include out-of-bound indices, wrong string formats, etc.; syntax/semantic errors are those detected by syntax or semantic checks, such as compilation errors in compiled languages and syntax, undefined-name, or type errors in interpreted languages; unfinished generation means failing to complete one function within the maximum length.
[Table 6: Results of the **code translation** task in HumanEval-X — pass@1/@10/@100 for InCoder-6.7B, CodeGen-Multi-16B, CodeGeeX-13B, and CodeGeeX-13B-FT over all 20 source-target language pairs.]

Figure 6: Results of the **code generation** task in HumanEval-X. Left: detailed pass@\(k\) performance in five languages. Right: CodeGeeX achieves the highest average performance compared with other open-sourced multilingual baselines; it also gains performance when the sampling budget is properly distributed across multiple languages.
Figure 7 shows the proportions of running results of the four models. For all languages, the most common error type is wrong answer, with ratios ranging from 0.44 to 0.75 except for Go, showing that code generation models at the current stage mainly suffer from incorrect code logic rather than from syntax or semantic errors. Go samples have a high syntax error rate, which may be because Go places strict restrictions on syntax, forbidding unused variables and imports, so that many logically correct programs fail to compile. CodeGeeX is less likely to generate code that produces runtime, syntax, or semantic errors.

### The Multilingual Pre-Training Helps Problem Solving

We perform studies to understand whether and how multilingual pre-training can benefit the problem-solving ability of CodeGeeX.

**Exploration vs. Exploitation under Fixed Budgets.** Given a fixed budget \(k\), pass@\(k\) evaluates the ability of models to generate at least one correct solution within \(k\) generations. Previous works (Chen et al., 2021; Li et al., 2022) have discovered a trade-off between exploration and exploitation: when the budget is small, it is better to use a low temperature to ensure accuracy on easy problems; when the budget is large, a higher temperature is vital, as it makes the model more likely to find at least one solution for difficult problems.

Figure 8: In HumanEval-X, each problem's pass rate varies when generating in different programming languages with CodeGeeX. **Left:**\(t=0.2,p=0.95\); **Right:**\(t=0.8,p=0.95\).

Figure 7: **Left**: the proportions of running results of four models for each language. **Right**: the average result ratios across four models, with lines representing minimum and maximum values. For each model and each language, we study 200 samples generated under \(t=0.8\) and \(p=0.95\).

**Pass Rate Distribution vs. Languages.** Unlike monolingual models, multilingual models can solve problems using various programming languages. In Figure 8, we observe that the pass rate distributions of problems across languages are diverse. This inspires us to use budget allocation methods to improve the diversity of the generated solutions.

**Budget Allocation Strategies.** We compare three basic strategies: _Best Single_ chooses the single language with the best performance; _Uniform_ allocates the budget uniformly; _Weighted_ allocates the budget to multiple languages based on their proportions in the training corpus (detailed weights can be found in Appendix Table 9). Table 7 illustrates how budget allocation improves multilingual generation. Both _Uniform_ and _Weighted_ outperform _Best Single_ by promoting more diverse generation, which gives a higher chance of solving problems; _Weighted_ is slightly better due to prior knowledge of the model. For a model-wise comparison, CodeGeeX shows a decent advantage over the other baselines under both strategies, which suggests that it has a more diverse solution set across languages. Programming languages are created with specific purposes and unique designs; in real-world scenarios, multilingual models might exploit this advantage for certain tasks.
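To make the comparison above concrete, here is a sketch of the allocated estimator in Eq. (7): a problem counts as unsolved only if the draws in every language all miss, so the single-language failure probabilities multiply. The per-language pass counts and the uniform assignment below are illustrative only.

```python
import numpy as np

def fail_prob(n: int, c: int, k: int) -> float:
    # C(n-c, k) / C(n, k): probability that k draws contain no passing sample.
    if n - c < k:
        return 0.0
    return float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

def pass_at_k_pi(n: int, c_per_lang: list[int], pi: list[int]) -> float:
    # Eq. (7) for one problem, under the assignment pi = (n_1, ..., n_m).
    return 1.0 - float(np.prod([fail_prob(n, c, k) for c, k in zip(c_per_lang, pi)]))

# One problem, n = 200 samples per language; illustrative pass counts for
# (Python, C++, Java, JavaScript, Go) and a Uniform assignment with k = 100.
c_langs = [40, 12, 25, 18, 3]
print(pass_at_k_pi(200, c_langs, [20] * 5))       # Uniform allocation
print(1.0 - fail_prob(200, max(c_langs), 100))    # Best Single, for comparison
```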
**Negative Correlations in Pair-Language Translation.** When evaluating translation ability on HumanEval-X, an interesting observation is that the performance of A-to-B and B-to-A translation is usually negatively correlated, as shown in Figure 9. Such asymmetry suggests that multilingual code generation models may have an imbalanced focus on source and target languages during code translation. We provide two possible explanations. First, language distributions in the training corpus differ a lot, resulting in different levels of generation ability. For example, the ratio of Python is 26.6% (vs. Go 4.7%) in the CodeGeeX training corpus, and the average pass@100 of _Others-to-Python_ reaches ~90% (vs. _Others-to-Go_ only ~50%). Second, some languages are intrinsically harder to write with syntactic and semantic accuracy due to language-dependent features, which affects their translation performance as target languages. For instance, Go, which models translate poorly into, imposes more syntax-level constraints, forbidding unused variables or imports.

## 5 The CodeGeeX Tools and Users

Based on CodeGeeX, we build open-source extensions for IDEs including VS Code, JetBrains, and Cloud Studio. The extensions support code generation, completion, translation, and explanation, aiming at improving the development efficiency of programmers. As of this writing, CodeGeeX has served tens of thousands of users, with an average of 250+ API calls per active user per weekday. It currently generates 4.7+ billion tokens per week, a number that has been steadily growing since its release.

\begin{table}
\begin{tabular}{c|c|c c c c c c}
\hline \hline
**Metric** & **Method** & **GPT-J-6B** & **GPT-NeoX-20B** & **InCoder-6.7B** & **CodeGen-Multi-6B** & **CodeGen-Multi-16B** & **CodeGeeX-13B** \\
\hline
\multirow{3}{*}{pass@\(k_{\pi}\) (\(k=100\))} & Best Single & 33.77\% & 42.67\% & 43.95\% & 53.19\% & 60.62\% & **60.92\%** \\
 & Uniform & 36.40\% & 44.75\% & 43.89\% & 53.47\% & 61.01\% & **62.41\%** \\
 & Weighted & _36.76\%_ & _44.97\%_ & _45.60\%_ & _53.94\%_ & _61.34\%_ & **62.95\%** \\
\hline \hline
\end{tabular}
\end{table} Table 7: Results for fixed-budget multilingual generation on HumanEval-X.

Figure 9: The performance of translating A-to-B is negatively correlated with that of B-to-A. Such asymmetry indicates that multilingual models still lack a high-level understanding across languages.

We perform a survey of CodeGeeX's user experience with 168 users covering _front-end developers_, _back-end developers_, _full-stack engineers_, _algorithm engineers_, _students_, _researchers_, and _other programmers_. Figure 10 illustrates the users' profession distribution and satisfaction scores. We evaluate satisfaction along five dimensions--"Ease of Use", "Reliability", "Feature", "Visual", "Speed"--each scored from 0 to 5. Figure 10 shows that the majority of users have positive experiences with CodeGeeX, especially researchers and students, while there is still room for improvement for professional developers. This can be explained by our training corpus: open-source repositories contain many introductory or research projects, while production code is often closed-source. To increase CodeGeeX's capability in professional domains, such code will be needed in the future. We further investigate how the multilinguality of CodeGeeX helps coding.
Figure 11 illustrates how users evaluate the helpfulness of CodeGeeX during development. On average, 83.4% of users think CodeGeeX can improve or slightly increase their coding efficiency, especially for mainstream programming languages like Go, C++, Python, C, C#, etc. Note that these well-performing programming languages also appear more frequently in the training data (Figure 3), which encourages us to train CodeGeeX on more language-specific data to enhance its capability.

## 6 Conclusion

We introduce CodeGeeX, a 13B pre-trained 23-language code generation model, and build HumanEval-X to fill the gap in multilingual code generation evaluation. CodeGeeX consistently outperforms open-sourced multilingual baselines of the same scale on code generation and translation tasks. The extensions built on CodeGeeX bring significant benefits in increasing coding efficiency. We open-source CodeGeeX aiming to help researchers and developers widely benefit from large pre-trained code generation models. The multilingual ability of CodeGeeX shows the potential of solving problems with a ubiquitous set of formalized languages.

Here, we share three of our observations as future directions. First, we find that model capacity is essential for multilingual programming ability. It is not trivial for a model to benefit from learning multiple languages: human programmers can abstract the high-level concepts of programming, so learning one language can help them master others; the model, on the contrary, seems to require a large capacity to concurrently store the knowledge of each language. How to help the model extract the most essential knowledge of programming remains a research challenge. Second, similar to other models, CodeGeeX shows reasoning potential, though it lacks strong generality. We demonstrate that CodeGeeX can solve problems in different languages, but the pass rate distribution varies a lot among languages, _i.e._, on occasion it is unable to solve the same problem in different languages. This could be related to language-specific features (_e.g._, some problems are easier to solve in Python), or it could simply be due to the appearance of a similar language-specific implementation in the training data. In either case, there is a long way to go for the model to acquire a reliable reasoning ability. Third, the few-shot ability of CodeGeeX is worth exploring. Instead of using costly fine-tuning approaches, we may prime the model with a few examples and achieve comparable performance. Recent works like chain-of-thought (CoT) prompting (Wei et al., 2022) have shown impressive results with such an approach, inspiring us to examine CoT in code models.

## Acknowledgement

This research was supported by Natural Science Foundation of China (NSFC) for Distinguished Young Scholars No. 61825602, NSFC No. 62276148, and a research fund from Zhipu.AI. We give our special thanks to Wenguang Chen from Tsinghua, the Peng Cheng Laboratory, and Zhipu.AI for sponsoring the training and inference GPU resources.
We thank all our collaborators and partners from Tsinghua KEG, IIIS, Peng Cheng Laboratory, and Zhipu.AI, including Aohan Zeng, Wendi Zheng, Lilong Xue, Yifeng Liu, Yanru Chen, Yichen Xu, Qingyu Chen, Zhongqi Li, Gaojun Fan, Yifan Yao, Qihui Deng, Bin Zhou, Ruijie Cheng, Peinan Yu, Jingyao Zhang, Bowen Huang, Zhaoyu Wang, Jiecai Shan, Xuyang Ding, Xuan Xue, and Peng Zhang.
2302.06994
Time varying Na I D absorption in ILRTs as a probe of circumstellar material
Intermediate-Luminosity Red Transients (ILRTs) are a class of observed transient posited to arise from the production of an electron-capture supernova from a super-asymptotic giant branch star within a dusty cocoon. In this paper, we present a systematic analysis of narrow Na I D absorption as a means of probing the circumstellar environment of these events. We find a wide diversity of evolution in ILRTs in terms of line strength, time-scale, and shape. We present a simple toy model designed to predict this evolution as arising from ejecta from a central supernova passing through a circumstellar environment wherein Na II is recombining to Na I over time. We find that while our toy model can qualitatively explain the evolution of a number of ILRTs, the majority of our sample undergoes evolution more complex than predicted. The success of using the Na I D doublet as a diagnostic tool for studying circumstellar material will rely on the availability of regular high-resolution spectral observations of multiple ILRTs, and more detailed spectral modelling will be required to produce models capable of explaining the diverse range of behaviours exhibited by ILRTs. In addition, the strength of the Na I D absorption feature has been used as a means of estimating the extinction of sources, and we suggest that the variability visible in ILRTs would prevent such methods from being used for this class of transient, and any others showing evidence of variability
Robert Byrne, Morgan Fraser, Yongzhi Cai, Andrea Reguitti, Giorgio Valerin
2023-02-14T12:02:29Z
http://arxiv.org/abs/2302.06994v1
# Time varying Na i D absorption in ILRTs as a probe of circumstellar material

###### Abstract

Intermediate-Luminosity Red Transients (ILRTs) are a class of observed transient posited to arise from the production of an electron-capture supernova from a super-asymptotic giant branch star within a dusty cocoon. In this paper, we present a systematic analysis of narrow Na i D absorption as a means of probing the circumstellar environment of these events. We find a wide diversity of evolution in ILRTs in terms of line strength, time-scale, and shape. We present a simple toy model designed to predict this evolution as arising from ejecta from a central supernova passing through a circumstellar environment wherein Na ii is recombining to Na i over time. We find that while our toy model can qualitatively explain the evolution of a number of ILRTs, the majority of our sample undergoes evolution more complex than predicted. The success of using the Na i D doublet as a diagnostic tool for studying circumstellar material will rely on the availability of regular high-resolution spectral observations of multiple ILRTs, and more detailed spectral modelling will be required to produce models capable of explaining the diverse range of behaviours exhibited by ILRTs. In addition, the strength of the Na i D absorption feature has been used as a means of estimating the extinction of sources, and we suggest that the variability visible in ILRTs would prevent such methods from being used for this class of transient, and any others showing evidence of variability.

keywords: supernovae: general - stars: massive - stars: evolution - circumstellar matter

## 1 Introduction

With the advent of modern astronomical surveys, the 'luminosity gap' (e.g. Kulkarni & Kasliwal, 2009; Pastorello & Fraser, 2019; Cai et al., 2022), spanning from the brightest classical novae to the dimmest core-collapse supernovae, has begun to be populated with a number of new classes of intermediate-luminosity transients. Among these are the intermediate-luminosity red transients (ILRTs). In terms of photometry, ILRTs typically show a slow rise of \(\sim 2\) weeks to a peak absolute magnitude between \(-11.5\) and \(-14.5\) mag in the \(V\)-band. A linear decline or pseudo-plateau phase follows the peak, leading the shape of their light curves to resemble those of Type IIL or Type IIP supernovae respectively. In cases where late-time photometry has been available, their decline has been seen to match the decay of \({}^{56}\)Co. With total radiated energies on the order of \(10^{47}\) erg, they are fainter than the majority of Type II supernovae.

Spectra from ILRTs tend to be relatively featureless, with narrow H \(\alpha\) and H \(\beta\) emission lines being the most prominent features, alongside the [Ca ii] doublet (\(\lambda\lambda\) 7291, 7324) and the Ca infrared triplet (\(\lambda\lambda\) 8498, 8542, 8662). Lines such as Na i D and Ca H&K can also be seen in absorption. The [Ca ii] emission doublet (\(\lambda\lambda\) 7291, 7324) is strongly visible throughout the duration of the transient and is considered a characteristic feature of ILRTs. The spectra evolve slowly, becoming slightly redder over time, and exhibiting an IR excess at both early and late times, suggestive of a dusty local environment.
In addition to the prototypical event for this class, SN 2008S (Botticella et al., 2009), a handful of these transients have been discovered and studied, such as NGC 300-2008OT1 (Bond et al., 2009; Berger et al., 2009; Adams et al., 2016), AT 2017be (Cai et al., 2018), and AT 2019abn (Jencson et al., 2019). A spectroscopic and photometric study of five ILRTs (Cai et al., 2021) shows strong homogeneity between members of this class of transient.

A number of mechanisms have been suggested for how the transient itself is produced. Some possibilities include outbursts similar to those of Luminous Blue Variables (LBVs) (Smith et al., 2009; Humphreys et al., 2011), a stellar merger similar to a Luminous Red Nova (LRN) (Kasliwal et al., 2011), or a faint core-collapse supernova produced by the core of a super-asymptotic giant branch star undergoing electron-capture (Botticella et al., 2009; Doherty et al., 2017).

Progenitor candidates have been detected for a number of ILRTs through pre-explosion imaging. MIR imaging from _Spitzer_ revealed a source coincident with the position of SN 2008S consistent with a \(\sim 10\) M\({}_{\odot}\) star enshrouded in a cloud of dust at a temperature of \(\sim 440\) K (Prieto et al., 2008). Similar archival observations at the position of NGC 300-2008OT1 prior to explosion show a possible progenitor with a luminosity of \(10^{4.9}\) L\({}_{\odot}\) and a dust temperature of \(\sim 300\) K (Adams et al., 2016). Jencson et al. (2019) identify a similar progenitor for AT 2019abn, which exhibits variability in the 4.5 micron band in the years prior to explosion. Follow-up campaigns for SN 2008S and NGC 300-2008OT1 have confirmed that both objects are still fading, and are > 15 times fainter than their progenitors in the MIR, and undetected in optical and NIR (Adams et al., 2016). This lends credence to those models which imply these events arise from terminal explosions rather than non-terminal eruptions.

In a number of ILRTs, the strength of the Na i D (\(\lambda\lambda\) 5890, 5896) absorption doublet has been seen to vary with time (Botticella et al., 2009; Cai et al., 2021). Such variability has also been measured in other classes of gap transient, such as the luminous red nova AT 2021biy (Cai et al., 2022). This variability may suggest that these lines are being produced in clouds of circumstellar material around the progenitor, and that by tracking their evolution we may be able to probe these dusty environments. In doing so, we can use insights from the circumstellar environment to infer the type of progenitors which may produce such environments.

The Na i D line has been used to probe a number of classes of SNe. These include the Type Ia supernova SN 2006X, where the evolution of this feature was interpreted as occurring due to changes in the ionisation conditions of the circumstellar material (CSM) caused by radiation from the supernova (Patat et al., 2007). Na i D absorption has also been studied for large statistical samples of Type Ia SNe, where a preponderance of blue-shifted absorption has been taken as evidence for outflowing gas from their progenitor systems (Sternberg et al., 2011; Maguire et al., 2013). Detections of Na i D absorption from the Type IIn supernova SN 1998S have been used to study both the interstellar medium of its host galaxy (Bowen et al., 2000) and as diagnostics of the wind density (Chugai & Utrobin, 2008).
For the recurrent nova T Pyx, analysis of the discrete components of this doublet suggested an evolution caused by the progression of a recombination front through the ejecta as the optical depth of the ejecta decreases over time (Shore et al., 2011). Measurements of the width of the Na i D line in the supernova impostor SN 2011A were used to determine the origin of the absorption as arising from the CSM (de Jaeger et al., 2015).

Besides being used to study the properties of individual objects, the Na i D line has been used in a statistical sense to show a correlation between a source's Milky Way extinction and the equivalent width of this doublet (Poznanski et al., 2012). This is often used as a method of estimating a source's extinction from its spectrum. The doublet has also been used, in the case of SNe Ia, to probe host galaxy reddening. An analysis comparing the equivalent width of the Na i D line to the colour excess \(E(B-V)\) in a sample of SNe Ia has shown that sources cluster around two lines of significantly different slopes (Turatto et al., 2003).

Despite the previous work done regarding this absorption line, there has not yet, to our knowledge, been a systematic investigation into the diversity of Na i D evolution for a sample of ILRTs. We present a method by which the equivalent width (EW) of the Na i D doublet can be consistently and accurately measured across a time series of spectra for a particular transient. We apply this to a number of transients in the ILRT class in order to study and compare their behaviour. Additionally, we present a toy model used to predict the evolution of the Na i D doublet as ejecta from a central supernova pass through a given density profile of CSM.

In Section 2 we describe our method of fitting spectral lines using MCMC methods, along with the checks we perform to ensure its accuracy. In Section 3 we introduce our toy model used to predict the evolution of the Na i D doublet and show some of its predictions. In Section 4 we present our results from analysing the spectra of our sample of ILRTs. Finally, in Section 5 we summarise our conclusions from this study.

## 2 Methods

### Line fitting

The majority of the ILRTs we choose to analyse have spectra publicly available from the WISeREP database1 (Yaron & Gal-Yam, 2012). In addition to these, we analyse spectra of four ILRTs from Cai et al. (2021).

Footnote 1: [https://wiserep.weizmann.ac.il](https://wiserep.weizmann.ac.il)

We also checked the ESO archive for any high-resolution spectra taken for any of these ILRTs and found a single high signal-to-noise, high-resolution UVES spectrum for NGC 300-2008OT1. We further augment our analysis of NGC 300-2008OT1 with a number of lower-resolution spectra from Valerin et al. (in preparation).

A number of our spectra for AT 2013la displayed a broad emission feature at the wavelength of the Na doublet. These spectra were each taken using the same instrument (OSIRIS at the Gran Telescopio Canarias). As we had spectra of the same source from other instruments which did not show this emission feature, we suspect that this effect was instrumental, caused by inaccurate subtraction of the background sky spectrum, leaving residual detections of this emission line from the atmosphere. Ideally, we would confirm this through the sky spectra, but these were not available in this case. As such, we discard spectra that show this emission feature, as it impacts our ability to fit the absorption from the ILRT accurately.
Five low-resolution spectra were available for the ILRT SN 2002bu (Smith et al., 2011) on WISeREP. However, we decided not to include this object in our sample due to a combination of low signal-to-noise and inconsistencies between the spectra, which made the unambiguous identification of the Na doublet difficult. In total, our sample consists of ten ILRTs with at least two spectra each.

We measure the equivalent width of the Na doublet in each of these spectra using a custom-built Markov chain Monte Carlo (MCMC) pipeline based on the emcee Python package (Foreman-Mackey et al., 2013). See MacKay (2003) for a full review of MCMC methods. A detailed description of the code which we use to perform these calculations is available in Appendix A.

### Synthetic spectrum testing

To ensure that this method of fitting the absorption doublet produces reliable results, we test its ability to estimate the equivalent width of the Na doublet from a synthetic line of known strength. We begin by creating a flat, high-resolution continuum with a constant flux of 1. We then inject absorption features into this continuum at the expected wavelengths of each component of the doublet. Each component is implemented as a Gaussian with a width of 0.15 Å. This allows the two components to be separated and distinguished individually, which would require a spectral resolution greater than that available from the majority of our real spectra. The amplitude of the D1 component is set to half that of the D2 component. We then convolve this spectrum with a broad Gaussian whose width is 3 Å, similar to the width of the Na absorption line present in many of our real spectra. This blends the two components of the Na doublet together. Next, we rebin the spectrum such that its resolution matches that of our low-resolution spectra. We do this using a method from the PySynphot package (STScI Development Team, 2013) which conserves flux in each bin as we degrade the spectral resolution. Finally, we add Gaussian noise proportional to the flux at each wavelength to reduce the signal-to-noise ratio to a level more representative of our real spectra. At this point, we have a synthetic Na absorption spectrum which resembles the majority of our real spectra, with an equivalent width known from our original high-quality spectrum.

We generate 1500 synthetic spectra in the manner described, each spectrum differing by the Gaussian noise injected into it. We then use these synthetic spectra to test the effectiveness of our MCMC code by fitting Na doublets to each of them and comparing the 1500 calculated equivalent widths to the known value from the initial synthetic spectrum. Each run of the code returns an equivalent width and an associated uncertainty. We record each case where the calculated equivalent width differs from the true value by less than its uncertainty as a successful determination of the line's true strength. We find that the code recovers the expected equivalent width in 68.5 per cent of cases. This value is close to the proportion of a normal distribution which lies within 1 standard deviation of the mean, giving us confidence that our method recovers equivalent widths at a satisfactory level. When examining the equivalent widths that our code predicts, we find that our code does have a slight tendency to overpredict rather than underpredict: overall, our method calculated a value for the equivalent width lower (higher) than the true value 43.3 (56.7) per cent of the time. However, as we can still calculate the true equivalent width within our uncertainty the correct proportion of the time, we do not consider this an important issue. In particular, as we want to track the evolution of the strength of the Na doublet across multiple spectra, a slight systematic bias towards overprediction may manifest in individual spectra but should not negatively affect the overall evolution.
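To make the procedure concrete, here is a minimal sketch in the spirit of the pipeline described above (our simplified illustration, not the code of Appendix A): it builds a noisy synthetic doublet as in this section, models the blended feature as two Gaussians with the D1 depth fixed to half of D2 on a flat normalized continuum, samples with emcee, and converts the posterior into an EW with an uncertainty. The priors and parameter values are illustrative assumptions.

```python
import numpy as np
import emcee

D2, D1 = 5889.95, 5895.92  # Na I D rest wavelengths [Angstrom]

def model(theta, wave):
    depth, sigma = theta
    gauss = lambda mu: np.exp(-0.5 * ((wave - mu) / sigma) ** 2)
    return 1.0 - depth * (gauss(D2) + 0.5 * gauss(D1))  # D1 depth = D2 / 2

def log_prob(theta, wave, flux, err):
    depth, sigma = theta
    if not (0.0 < depth < 1.0 and 0.1 < sigma < 10.0):  # flat priors (assumed)
        return -np.inf
    return -0.5 * np.sum(((flux - model(theta, wave)) / err) ** 2)

# Synthetic test spectrum: true depth 0.3, width 3 Angstrom, flat continuum.
rng = np.random.default_rng(0)
wave = np.arange(5870.0, 5915.0, 0.5)
err = np.full_like(wave, 0.02)
flux = model((0.3, 3.0), wave) + rng.normal(0.0, 0.02, wave.size)

nwalkers, ndim = 32, 2
p0 = np.array([0.3, 3.0]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(wave, flux, err))
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)

# EW = integral of (1 - flux) = 1.5 * depth * sigma * sqrt(2 pi).
ew = 1.5 * chain[:, 0] * chain[:, 1] * np.sqrt(2.0 * np.pi)
print(f"EW = {np.median(ew):.2f} +/- {np.std(ew):.2f} Angstrom")
```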
### IRAF comparison

To test the accuracy of our MCMC code, we compare its predictions to those made using the IRAF code (Tody, 1986). We measure equivalent widths using IRAF for a number of spectra of one ILRT, PTF 10fqs, as well as of a type Ia supernova which we discuss further in Section 2.4, SN 2017erp. These fits are plotted alongside our MCMC calculations in Figure 1. Within our calculated uncertainties, our results are largely consistent with those from IRAF, with only a single point for PTF 10fqs lying outside of our uncertainties. Our error bars indicate 1\(\sigma\) uncertainties, and as measurements from IRAF do not include uncertainties, this discrepancy is small enough to be unimportant. Our measurements of Na in the host galaxy of SN 2017erp all line up with the IRAF measurements, although our uncertainties are much larger in this case, due to the line being intrinsically weaker as well as this source having a more complicated continuum than most ILRTs. These comparisons show that our code predicts the strength of a line with an accuracy similar to IRAF, with the added benefit of generating uncertainties for each measurement of equivalent width.

Figure 1: Comparison between the measurements of the Na doublet in PTF 10fqs and the host galaxy of SN 2017erp using our MCMC code and IRAF.

### SN 2017erp

In addition to our sample of ILRTs, we examine spectra for a control case. For this, we choose the type Ia supernova SN 2017erp, a normal type Ia supernova with some reddening in the near-ultraviolet (Brown et al., 2019). We choose this object for a number of reasons. First, it has a number of spectra available on WISeREP, and these spectra show measurable Na absorption lines at the redshift of the host galaxy as well as from the Milky Way (MW). The object is also at a redshift where host and MW absorption can be distinguished, but is not too distant. Additionally, variable Na i D lines are expected to be uncommon in type Ia supernovae: in a sample of 31 objects where variability could possibly have been detected, only a single instance of a variable EW was found (Blondin et al., 2009). For these reasons, we consider SN 2017erp a good candidate for testing our code, as we expect the absorption from both the host and the MW to remain constant in time.

We track the strength of both Na doublets across the spectra of SN 2017erp using our MCMC methods. These measurements are shown in Figure 2. As interstellar material is not associated with a variable star or transient, its column density is expected to remain static over the time-scales of the observations. Therefore, the equivalent width of the absorption produced by interstellar material along the line of sight to the source should remain constant in time, regardless of its location in either the Milky Way or the host galaxy. In SN 2017erp's host galaxy, our calculated uncertainties are large enough that each measurement is consistent with a non-evolving Na doublet. Measurements from the Milky Way show a larger discrepancy.
The best agreement is given by an equivalent width of \(\sim 0.55\) Å, for which 9 of 16 measurements (56 per cent) agree within 1\(\sigma\). This is slightly lower than the 68 per cent of measurements expected to lie within 1\(\sigma\) for normally distributed measurements of a constant value. We note that the evolution displayed by the Milky Way doublet seems to correlate with the evolution of the host galaxy doublet. In order to examine this potential correlation in closer detail, we fit a number of these spectra between +9 and +23 days using IRAF. These measurements are displayed in the inset of Figure 2, and show that a correlation is present between the strength of the Na doublet in the Milky Way and the strength in the host galaxy. No physical process can explain this correlation, and thus we propose that this apparent variability is not physical in origin but arises from the spectra themselves. In SN 2017erp, the region surrounding the two instances of the Na doublet is dominated by a number of other broad emission and absorption lines, making the fitting of a sensible continuum much more difficult. We propose that the inability to fit a confident continuum to this object affects the measured equivalent widths and causes this variability. This can result in catastrophic outliers in the calculation of the equivalent width, as seen in the spectrum from +11 days. It also manifests as much larger uncertainties in the equivalent widths. Overall, we do not expect this effect to have a large impact on our measurements for ILRTs. The variability seen in the measurements of SN 2017erp is far smaller than that we measure in our sample of ILRTs. Additionally, ILRT continua tend to be much flatter and less complex around the Na absorption line, with very few additional lines impacting on our measurements. This allows us to calculate a continuum and corresponding equivalent width with more certainty. Ideally, we would measure the equivalent width of Na at zero redshift in each ILRT spectrum. This would allow us to make measurements of a quantity which should remain constant, using spectra with simple continua, and could serve as a check that the evolution measured at the host redshift is physical in origin. Unfortunately, for every ILRT in our sample the redshift is either small enough that the Milky Way absorption is blended with that of the host, leaving us unable to distinguish the two, or, where there is separation between the two lines, the Milky Way absorption is too faint to be measured.

## 3 Toy Model

In order to guide our expectations for the evolution of the Na i doublet, we create a simple toy model. A number of models have been developed to explain the variability of Na absorption, particularly in SNe Ia. Patat et al. (2007) suggest a model for SN 2006X where variability arises from the ionisation and subsequent recombination of Na in the CSM. Borkowski et al. (2009) extend this by considering the level of photoionisation as a function of the location of CSM shells around the central supernova. Soker (2014) offers a model where the Na responsible for absorption is released from dust grains through photon-stimulated desorption. Our toy model is based on an ejecta front from a central supernova travelling outwards at constant velocity through a spherically symmetric region of circumstellar material. We track the amount of Na i present in the CSM in front of the ejecta and use this as a proxy for the strength of the Na i D absorption feature.
Using the column density of CSM as a proxy for the equivalent width of the Na i D line relies on the assumption that no saturation of this line occurs; if the line were saturated, the two quantities would no longer be simply related. In our analysis of real ILRT spectra, we found that the absorption doublet was never strong enough to come close to saturation, and so we proceed with this assumption. This model is conceptually similar to the Borkowski et al. (2009) model, although in a different regime in terms of scale and density. Their model focuses on bubbles on parsec scales blown into the interstellar medium by strong accretion-driven winds from the progenitor of a Type Ia supernova, in which case the CSM will not be overrun by ejecta on the timescales of interest to us. In contrast, our model relates to the denser circumstellar material much closer to the star. We first select a density profile for the circumstellar material. The main density profiles of interest are that of a typical stellar wind, proportional to \(r^{-2}\), and a constant density profile, though any profile can be used. We assume that at time \(t=0\), directly after the explosion, all Na within the CSM has been ionised to Na ii by emission from the central supernova. Na has a relatively low ionisation potential of 5.139 eV, corresponding to a photon wavelength of 241 nm, so the strong radiation field of the supernova should be sufficient to produce such a scenario. In addition to ionising the Na, it is likely that the supernova will ionise a significant fraction of hydrogen in the CSM. As the relative fraction of H to Na in the CSM is high, this will result in a large number of free electrons being present. These free electrons will allow for the recombination of Na ii to Na i. Due to the high density of free electrons compared to that of Na, we assume that Na ii recombines at a rate proportional to its density, i.e. at each time step some fixed percentage of the remaining Na ii recombines to Na i. Starting at \(r=0\), we move the position of the ejecta front outwards through the CSM at a constant velocity with each timestep. At each step, we then calculate the total column density of Na remaining in front of the shock by integrating the density function past that point. We then let a fixed proportion of the Na ii recombine to Na i, and multiply this fraction by the total column density in order to estimate the column density of Na i in front of the ejecta. We repeat this process and track the evolution of the column density of Na i as a proxy for the strength of the Na i D doublet. In Figure 3, we display the predictions from our toy model for the evolution of the column density of Na i for ejecta sweeping through two circumstellar environments. The first has a density structure proportional to \(r^{-2}\), typical of a stellar wind, while the second is a constant density CSM. As we expect the strength of the Na absorption doublet to track this quantity, these curves can be interpreted as predictions for the evolution of the equivalent width of the Na doublet over time.

Figure 2: Evolution of the strength of the Na doublet in spectra of SN 2017erp at redshifts corresponding to the supernova host galaxy and the Milky Way. The main figure shows EW measurements from our MCMC code; Milky Way measurements are offset by 0.5 days for visual clarity. The inset shows a number of EW measurements produced using IRAF, where the correlation between Milky Way and host galaxy measurements is apparent.
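The whole procedure fits in a short script. The sketch below is a minimal implementation in arbitrary units; the recombination fraction per step, the grid resolution and the inner cut-off at \(r=1\) (which avoids the divergence of the \(r^{-2}\) profile at the origin) are illustrative choices rather than values used in the paper.

```python
import numpy as np

def toy_model(profile="wind", v_ejecta=1.0, recomb_frac=0.02,
              r_max=100.0, n_steps=2000):
    """Toy model from the text: constant-velocity ejecta sweep through
    spherically symmetric CSM in which Na, fully ionised to Na II at
    t = 0, recombines at a fixed fractional rate per time step. The
    returned Na I column density in front of the shock is the proxy
    for the Na I D equivalent width."""
    r = np.linspace(1.0, r_max, 20000)            # radial grid
    if profile == "wind":
        rho = r ** -2.0                           # stellar-wind profile
    else:
        rho = np.ones_like(r)                     # constant-density CSM

    times = np.linspace(0.0, r_max / v_ejecta, n_steps)
    f_na1 = 0.0                                   # Na I fraction at t = 0
    column = np.empty(n_steps)
    for i, t in enumerate(times):
        ahead = r >= v_ejecta * t                 # CSM still unshocked
        # Total Na column density remaining in front of the shock.
        n_total = np.trapz(rho[ahead], r[ahead]) if ahead.any() else 0.0
        # A fixed proportion of the remaining Na II recombines each step.
        f_na1 += recomb_frac * (1.0 - f_na1)
        column[i] = f_na1 * n_total
    return times, column
```

Running this for both profiles reproduces the qualitative rise-then-decline shape: the Na i fraction grows while the total column ahead of the shock shrinks, and their product peaks at an intermediate time.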
The central panel of Figure 3 shows the integrated column density of Na i remaining in front of the shock as time progresses for either density profile. The panels on either side display the structure of the CSM in the windy regime at two representative times, one early and one late. In these panels, the black vertical line represents the position of the shock travelling outwards through the CSM, with the grey shaded region representing the CSM left behind it. Blue and orange shaded regions in front of the shock represent the relative proportions of Na i and Na ii in the unshocked CSM. At early times, a large amount of CSM remains in front of the shock, split relatively evenly between both states of Na, resulting in the production of a strong absorption feature. At late times, although most of the Na ii has recombined to form Na i, there is comparatively little left in front of the shock, resulting in a weak absorption feature. As we are largely concerned with the shape of the evolution, we display these plots without units and normalise the curves to the same peak. It can be seen that both CSM profiles produce a similar overall shape in their evolution, where the column density rises to a peak before dropping down to a low level. The windy CSM shows a faster rise followed by a concave decrease over time, levelling out at a low strength. The constant density CSM rises slightly more slowly, and decreases from peak at a more linear rate.

## 4 Results

### ILRT evolution

Figure 4 shows the evolution of the strength of the Na i D absorption line as measured using our MCMC code for our sample of ILRTs. Below we provide brief descriptions of the evolution displayed by each of the ILRTs. Phases are given in terms of days from explosion where one is available in the literature, or days since first detection where no explosion epoch is available. **a)**: **SN 2001ac** Only four spectra are available for SN 2001ac (Smith et al., 2011; Shivvers et al., 2017), and Na is only detected in two of these. Our first detection, at +1.6 days, shows a strong absorption line but with large uncertainties. This is followed by two non-detections, the first of which constrains the equivalent width to the same strength as the previous detection or lower. The second non-detection is around the level of our second detection at +16.6 days, which shows the absorption becoming weaker with time. **b)**: **SN 2008S** Detections of the Na doublet in SN 2008S (Botticella et al., 2009) show evidence of long-term evolution. Our first detections show the strength of the doublet decreasing over time, from +18.5 days until +46.5 days, where it begins to increase again towards a peak at +67.5 days. One further detection at +71.5 days shows the line decreasing in strength rapidly. One non-detection at +101.5 days is not shown; its upper limit of 5.32 Å is not strongly constraining. **c)**: **NGC 300-2008OT1** We have no early observations for NGC 300-2008OT1 (Valerin et al., in prep.), with our first limiting constraint on the strength of the absorption at +22.3 days. After this the absorption feature appears and grows in strength until roughly 70 days, before declining again. The amplitude of this variation is smaller than that seen in many of the ILRTs in our sample. Interestingly, the shape of this evolution, showing a rise to a single peak followed by a decline, is similar to the behaviour predicted by our toy model.
In our models, the strength of the Na line begins to increase immediately post-explosion, and begins to decrease once the shock has passed through a large enough amount of CSM that the decrease in overall column density outmatches the increase in the fraction of Na i in the CSM. Observations sooner after explosion would allow us to determine whether the Na line increases in strength uniformly from the time of explosion; however, these observations would require a high signal-to-noise ratio in order to detect such a weak line (\(<1\) Å at +22.3 days). The late time-scale of this evolution may indicate that it is being driven by interaction with CSM far from the central supernova. We fit a simple quadratic curve to our detections and find that this curve peaks and begins to decline at \(\sim\) +66 days. We make a rough estimate of the ejecta velocity of an ILRT. Ejecta from typical core-collapse supernovae travels at velocities on the order of \(10^{4}\) km s\({}^{-1}\). As ILRTs are expected to be produced by weaker explosions, we expect a lower ejecta velocity than this. In the case of the underluminous SN IIP SN 2005cs, Pastorello et al. (2009) measure an ejecta velocity of \(\sim 1000\) km s\({}^{-1}\). We consider this velocity to be similar to that of an ILRT, and as such we take our ejecta velocity to be 2000 km s\({}^{-1}\) as a conservative estimate for a weak explosion. Assuming that this velocity is maintained for 66 days, this would place the CSM region \(\sim 1.14\times 10^{15}\) cm from the central supernova. **d)**: **PTF 10fqs** PTF 10fqs (Kasliwal et al., 2011) shows a fast rise to a strong peak in absorption, followed by a similarly fast decline in strength, with no observations at late times. This evolution looks quite similar to the behaviour expected from interaction with CSM in a windy regime. Unlike NGC 300-2008OT1 above, it appears to begin its rise in strength directly after explosion, and it reaches a much larger equivalent width. This may indicate interaction with a denser CSM much closer to the supernova. **e)**: **AT 2010dn** AT 2010dn (Cai et al., 2021) shows a uniformly declining strength of its Na absorption line over time between +7.4 days and +14.4 days, with a single non-detection coming later at +42.4 days. As we have no detections prior to +7.4 days, it is possible that there is a rise to this peak beforehand which would look similar to our models, though this would require CSM very close to the supernova. **f)**: **AT 2012jc** AT 2012jc (Stritzinger et al., 2020; Cai et al., 2021) begins with behaviour similar to that of AT 2010dn, though starting at a much earlier time, with our first detection at +2.1 days. After a very sharp decline in strength, the Na line appears to rebrighten. It should be noted that our first detection comes with a relatively large uncertainty, and in the absence of this point it is possible that no such drop in equivalent width would be inferred. Our toy model assumes that the Na equivalent width is proportional only to the amount of Na i remaining in front of the ejecta, with the entirety being ionised to Na ii at \(t=0\). Within this model, it is not possible to explain a rebrightening after the strength of absorption begins to decline, as this would require a process by which new Na i is somehow injected into the system in front of the ejecta. One possibility is that the first detection is inaccurate, and there is no drop in equivalent width at early times.
This would allow the remainder of the evolution to be interpreted in line with our toy model. However, as a potential rebrightening can also be seen in SN 2008S, we suggest a mechanism by which this can occur. These cases may be explained by a different CSM density profile. Instead of being wind-like, we propose that the CSM has a low density closest to the star, increasing to a maximum some distance from the star and then falling off in a manner similar to a wind-like CSM profile. In addition, we suggest that the sodium present in the CSM is not fully ionised by radiation from the initial supernova. This leads to a situation where, at \(t=0\), the majority of Na is un-ionised Na i. As the ejecta passes through this nearby low-density region of CSM, the column density of Na i decreases, causing the equivalent width to decrease. Eventually the ejecta reaches a region of high density CSM some distance from the star. At this point, the ejecta interacts strongly with the CSM, producing X-ray photons which help to photoionise the outer region of CSM, causing a further drop in the Na i equivalent width. As the ejecta moves past this region of high density CSM, the strength of interaction decreases, producing fewer ionising photons and allowing recombination to increase the relative level of Na i in the remaining CSM. If the rate of recombination is high enough, this process increases the column density of Na i enough to produce a peak in the equivalent width. As the ejecta moves further out into the low-density material, recombination continues until most of the Na is un-ionised; however, at this point the ejecta has passed through enough material that the total column density continues to decrease, as seen particularly clearly in SN 2008S. **g)**: **AT 2013lb** Observations of AT 2013lb (Cai et al., 2021) begin with four non-detections of Na spanning a short period of time from +10.8 to +12.9 days. Three of these non-detections do not provide strong constraints on equivalent width, but the spectrum at +11 days constrains the strength of the doublet to below 0.72 Å. After this, we see two detections where the strength of the Na line increases over time. Without further observations, we are unable to determine at what time the strength eventually peaks, or whether any other behaviour is displayed. **h)**: **AT 2013la** AT 2013la (Cai et al., 2021) is unique among our sample of ILRTs in that we do not find any spectra where the Na absorption feature is detected. The observations we do have are sparsely sampled, with no observations before +18.3 days and none between +45.3 and +92.2 days. Additionally, the limits on the equivalent width obtained from these non-detections are not very constraining; many of our detections in other sources have equivalent widths weaker than any of the limits for AT 2013la. This means that it is possible that Na was present in this source but simply undetected, or that the Na doublet displayed fast early evolution, similar to that of AT 2010dn, or late-time evolution similar to NGC 300-2008OT1. Either of these may have been missed due to the paucity of observations. However, from these observations we can only conclude that there is currently no evidence for the Na i doublet appearing in AT 2013la. **i)**: **AT 2017be** Observations of AT 2017be (Cai et al., 2018) begin with two non-detections which constrain the strength of the Na doublet to a very weak level, below \(\sim\) 0.5 Å.
Soon after these we see three weak detections which are consistent with one another within uncertainties. These uncertainties are relatively large compared to the strength of the line. After these detections we have one further non-detection which places a relatively loose constraint on the equivalent width. Without further detections we cannot determine whether the doublet remains at the same level as seen in the initial three detections, or begins to increase or decrease in strength. **j)**: **AT 2019abn** AT 2019abn (Jencson et al., 2019) shows a large amount of variation in its Na line strength over time. It begins by increasing in strength, before dropping down to a lower level by \(\sim\) +40 days, after which it increases once more, remaining at a strong level by the final observation. As mentioned in our description of AT 2012jc, our models can explain an increase in strength followed by a decline, but are not able to explain a late rebrightening of the Na line. As can be seen, our sample of ILRTs shows a diverse range of behaviours in terms of the strength, time-scale, and shape of the evolution. While we do not see the same type of evolution displayed across the entire class, we do see a number of similar patterns in a few of these objects. Both PTF 10fqs and NGC 300-2008OT1 show an evolution similar to that predicted by our toy model, with a single peak, although they differ widely from one another in terms of the strength of the line and the time-scale. This evolution is also, qualitatively speaking, the most similar to that predicted by our toy model of ejecta passing through a recombining region of CSM. A number of ILRTs seem to show the strength of the Na line increasing after some time, some of them after an initial period where it decreased in strength. These include SN 2008S, AT 2012jc, AT 2013lb, and AT 2019abn. Further ILRTs show their own individual evolution, including the non-detection in AT 2013la. AT 2017be in particular shows only three weak detections with consistent equivalent widths. As previous observations showed non-detections for this ILRT we can say that the Na line is varying in time, but it remains to be seen what type of evolution it follows after these detections. One key insight is the diversity of time-scales on display in these evolutions. Observations of these ILRTs are sparse and unevenly sampled, meaning that some of the outlying ILRTs which seem to show their own unique evolution may in actuality appear more similar to others if further observations were available. This highlights the need for further spectroscopic coverage of future events in order to better constrain the evolution of this line.

Figure 3: Evolution of the column density of Na i in front of constant velocity ejecta passing through circumstellar material with wind-like (\(\propto r^{-2}\)) and constant density profiles. This quantity is used as a proxy for the strength of the Na i D absorption doublet. Axes are given in arbitrary units, and the wind-like and constant CSM evolutions are normalised to the same peak.

Figure 4: Evolution of the equivalent width of the Na i D absorption doublet in our sample of ILRTs. Each plot displays the evolution for the first 100 days post-explosion, though the scales for equivalent width differ for each plot. Individual descriptions of the evolution of each ILRT are given in the main body of the text. Triangles represent upper limits to the equivalent width in non-detections.
We find that all but one of our ILRTs display clear evidence of variability in the Na i D line over time. In the one exception, AT 2013la, we find no detection of the Na line. This does not necessarily rule out the possibility of Na evolution in this ILRT: the variability may simply have been missed due to sparse observations, or the amplitude of the line may have been low. Indeed, our limiting equivalent widths for this ILRT are higher than many of our detections for other sources. However, our observations cannot prove that this evolution exists. Our prior analysis of SN 2017erp was intended to show the ability of our code to distinguish between sources displaying true variability in the Na line and those not undergoing evolution. Although this attempt was complicated by the systematic variability found in the measurements of SN 2017erp, we note that the variability found for SN 2017erp is smaller than that displayed in our sample of ILRTs. Hence, we believe that the evolution displayed by these ILRTs is physical, rather than systematic, in origin.

### NGC 300-2008OT1

One ILRT in our sample, NGC 300-2008OT1, has an additional high-resolution UVES spectrum. The reduced spectrum is available from the ESO Science Archive2. In this spectrum, there is clear separation between the D2 and D1 components of the Na doublet, and individual substructure can be analysed for each. The spectrum shows an emission line from He i at \(\sim\) 5875 Å, followed directly by a broad absorption feature at the wavelengths of the Na doublet. Within this broad absorption feature, further individual narrow absorption features can be seen. We show the original high-resolution spectrum in Figure 5. Footnote 2: [http://archive.eso.org/cms.html](http://archive.eso.org/cms.html) We begin our analysis of this spectrum by fitting the narrow absorption components of Na. Figure 6 shows cutouts of the narrow line structure visible at the wavelengths of the D2 and D1 components. At each position we find two narrow absorption features, which we fit using a slightly modified version of our MCMC models described above, specifying narrower ranges for the central wavelengths and widths of each line. We fit the substructures of the D2 and D1 components separately. We attempted fitting three absorption features to these cutouts, and found that this did not improve the accuracy of our fits. At the D2 position, we measured narrow absorption features at wavelengths of 5892.944 \(\pm\) 0.011 Å and 5893.354 \(\pm\) 0.004 Å, with full widths at half maximum corresponding to velocities of 16.44 \(\pm\) 2.19 km s\({}^{-1}\) and 14.14 \(\pm\) 0.49 km s\({}^{-1}\) respectively. At the D1 position, we measured these features at wavelengths of 5897.913 \(\pm\) 0.008 Å and 5898.321 \(\pm\) 0.003 Å, with velocities of 12.21 \(\pm\) 0.99 km s\({}^{-1}\) and 13.52 \(\pm\) 0.40 km s\({}^{-1}\) respectively. Within uncertainties, the velocities of the second components measured at both the D2 and D1 positions are consistent, as would be expected if these features were produced in the same environments. The velocities of the first components differ at a significance of 1.3\(\sigma\). This small discrepancy may arise from difficulty in normalising the spectrum, as this specific component is located close to the red edge of the broad Na absorption feature.
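For reference, the quoted line-width velocities follow from the fitted wavelength widths via the standard Doppler relation; the worked number below is our own illustration of the conversion:

\[v_{\mathrm{FWHM}} = c\,\frac{\Delta\lambda_{\mathrm{FWHM}}}{\lambda_0}, \qquad \text{e.g.}\quad \Delta\lambda \approx \frac{16.44\ \mathrm{km\,s^{-1}}}{2.998\times 10^{5}\ \mathrm{km\,s^{-1}}}\times 5892.944\ \text{Å} \approx 0.32\ \text{Å}.\]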
While these low velocities may indicate production in two separate regions of slow-moving material in the interstellar medium along the line of sight to the source, they are also similar to the velocity of a red supergiant wind. These stars have winds typically on the order of a few tens of km s\({}^{-1}\). We expect similar wind speeds from a super-asymptotic giant branch (SAGB) star, and as such we cannot distinguish between these two possible regions as the site of origin of these narrow lines using a single high-resolution spectrum. A time series of high-resolution spectra may be required to confirm either of these options. Next, we fit the broad components of Na absorption visible in the spectrum. We do this by first manually removing the narrow components described above, which would interfere with the accuracy of fitting the broad components. To do so, we isolate small regions surrounding these narrow lines in our spectrum and remove them. We then refill these regions using linear fits interpolated between the edges of the removed portions of the spectrum. We add Gaussian noise scaled to the level of the surrounding spectrum to these linear infills to mimic a continuous spectrum. We then model and fit the broad Na doublet, as well as the nearby He i emission line, using the same methods as for our lower resolution spectra. The spectrum with the narrow line emission removed is shown in Figure 7. We measure the broad components at wavelengths of 5890.013 \(\pm\) 0.043 Å and 5895.987 \(\pm\) 0.043 Å, with velocities of \(268.31\pm 3.38\) km s\({}^{-1}\) and \(268.04\pm 3.38\) km s\({}^{-1}\) respectively. We calculate an equivalent width of \(1.87\pm 0.01\) Å for the broad components of the Na doublet. In order to demonstrate the effect of using low-resolution spectra compared to high-resolution ones, we degrade this spectrum to a resolution similar to the majority of our spectra using the same processes described in Section 2.2. After degrading the original UVES spectrum such that it matches the resolution of our other spectra, we again fit for the equivalent width. In this case, we find a value of \(2.44\pm 0.67\) Å. This value is consistent with that obtained from the high-resolution spectrum, though the uncertainty is much larger, illustrating the usefulness of high-resolution spectroscopy for the accurate determination of equivalent widths. As a means of comparison, we fit a number of other observed emission lines found in this spectrum which we expect to arise from interaction with fast-moving ejecta. These include the H\(\alpha\) emission line and the Ca ii infrared triplet. While the H\(\alpha\) line shows some evidence of substructure, we fit it as a single emission line to get an estimate of its velocity. The Ca triplet is unfortunately partially cut off due to the wavelength coverage of the UVES instrument, but the entirety of the 8500 Å line is visible, and so we fit this. This emission line is accompanied by a prominent narrow absorption feature, which we include in our fit. Our fit to H\(\alpha\) returns a velocity of \(658.00\pm 1.45\) km s\({}^{-1}\), while our fit to the Ca line returns a velocity of \(436.27\pm 2.76\) km s\({}^{-1}\). Within uncertainties, these velocities each exceed those determined for the Na absorption.

Figure 5: UVES spectrum of NGC 300-2008OT1 at +69.1 days. Broad components of the Na absorption line can be seen in the centre, along with a number of narrow components. The He i emission line lies at the blue end of the Na absorption feature.
If these emission lines are produced by ejecta moving faster than the circumstellar material, this would provide evidence that the absorption we see from Na is produced in the CSM. However, it is also possible that the different velocities of these lines arise not from production in different media (CSM versus ejecta), but rather as an effect of different optical depths. In SNe IIP, line velocities can differ by thousands of km s\({}^{-1}\) due to lines forming in different ejecta regions. Similarly, it may be that the difference in our velocities arises from lines being formed and observed in different parts of a dense CSM with different velocities.

### Velocity evolution

To attempt to distinguish between Na lines arising from CSM as opposed to ejecta, we measure the evolution of the velocity of the Na doublet in our sample of ILRTs. The velocity in this section refers to the offset of the central wavelength of the D2 component of the Na line from its rest wavelength, as opposed to the width of the line. To do this, we first find the observed wavelength of H\(\alpha\) emission in our spectra. As this line is usually strong compared to the continuum, we simply take the wavelength of H\(\alpha\) to be the wavelength at the point where the flux is highest within the region of the emission. We then compare this observed wavelength to the rest wavelength of H\(\alpha\) and calculate a pseudo-redshift. This step is necessary due to the heterogeneity of the spectra in our sample: data from WISeREP have not necessarily been corrected for the redshift of the host galaxy, and differences between observations using different instruments may introduce instrumental effects. We therefore use this pseudo-redshift as a catch-all for measuring the correction between the observed and rest spectra. Using this pseudo-redshift, we calculate the rest wavelength of the Na doublet from our fits of the observed spectra. We then compare this wavelength to the expected rest wavelength to determine the velocity of the material in which the Na doublet is being produced. If the absorption doublet is produced in a layer of CSM, we would expect the velocity to remain relatively constant over time. In contrast, if the absorption were taking place within the ejecta, we would expect to see evolution: as the ejecta travels further from the star, its optical depth drops, allowing us to see further into the ejecta to regions where the ejecta velocity is lower. Thus, we would expect to see the velocity of the doublet drop over time. Figure 8 shows the evolution of the velocity of the D2 component of the Na doublet over the first 100 days of observation for our sample of ILRTs.

Figure 6: Subsections of the UVES spectrum of NGC 300-2008OT1 centred on the narrow D2 and D1 components of the Na absorption line. In each case, two absorption components can be seen. The black line indicates the best fit from the MCMC model, while orange represents a sample of fits from the Markov chain.

Figure 7: UVES spectrum of NGC 300-2008OT1 at +69.1 days. Narrow components of the Na doublet have been removed in order to fit the broad features. Broad absorption from the Na doublet can be seen, as well as the nearby He i emission line. The black line indicates the best fit from the MCMC model, while orange represents a sample of fits from the Markov chain.
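A minimal sketch of this velocity measurement, assuming the standard rest wavelengths; the H\(\alpha\) window bounds and the fitted D2 centre are inputs that the real pipeline derives from its own fits:

```python
import numpy as np

H_ALPHA = 6562.8          # rest wavelength of H-alpha (Angstroms)
NA_D2 = 5889.95           # rest wavelength of Na I D2 (Angstroms)
C_KMS = 299792.458        # speed of light (km/s)

def na_velocity(wave, flux, halpha_window, na_d2_obs):
    """Estimate the Na D2 velocity following the procedure in the text:
    take the peak-flux pixel inside a window around H-alpha as its
    observed wavelength, form a pseudo-redshift, de-shift the fitted
    Na D2 centre and convert the residual offset to a velocity."""
    lo, hi = halpha_window
    sel = (wave >= lo) & (wave <= hi)
    halpha_obs = wave[sel][np.argmax(flux[sel])]   # peak-flux pixel
    z_pseudo = halpha_obs / H_ALPHA - 1.0          # catch-all correction
    na_rest = na_d2_obs / (1.0 + z_pseudo)         # de-shifted D2 centre
    return C_KMS * (na_rest - NA_D2) / NA_D2       # velocity in km/s
```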
The error bars plotted in Figure 8 are underestimates, as they include only the uncertainties incurred from fitting the position of the Na doublet; in reality, the uncertainty in the determination of the central wavelength of H\(\alpha\) would increase our uncertainties in the final calculation of velocity. Regardless, the majority of our ILRTs show little obvious evidence of velocity evolution. Some additional caveats must be considered when interpreting these results. One is that a number of spectra from WISeREP were not usable. Several spectra, particularly for SN 2008S, showed evidence of having been shifted prior to being uploaded. In these cases, lines such as H\(\alpha\) did not appear at the same observed wavelength as in other spectra, instead being hundreds of Å lower. Attempting to fit this discrepancy with an instrumental redshift as usual led to the position of the Na doublet being poorly estimated, resulting in velocities far higher than expected, on the order of thousands of km s\({}^{-1}\). As we had no way of correcting these spectra back to their intended wavelength ranges, they had to be discarded. A number of spectra for other ILRTs also showed similar behaviour. In the other ILRTs which seem to show some evidence of evolution, such as PTF 10fqs, AT 2012jc, and AT 2017be, it is worth noting that the purported evolution is only backed up by a small number of points in each case, and the uncertainties are underestimated. Additionally, as issues similar to those for SN 2008S were present in other ILRTs, it is difficult to disentangle the true velocity of the Na line from that which may be introduced by inconsistencies between the spectra. It may be tentatively suggested that some correlation exists between the evolution of the velocity of the Na doublet and that of its equivalent width. This is perhaps most visible in the evolution of AT 2012jc and AT 2019abn. These purported correlations may simply arise from random chance; however, we consider the implications of such a correlation, if one does indeed exist. It would imply that the strength of the absorption line is related to the velocity of material in the medium in which it is being produced. In a regime of CSM produced by a homologous wind over a long timescale, such a correlation would be impossible. In this case, material at both high and low velocities would be present in regions both close to and far from the star. On a long enough timescale, the distribution of velocities would be the same independent of distance to the star. Thus, no matter which region of CSM was being probed, the measured velocity would be the same. If instead the CSM is produced by episodic eruptions on shorter timescales, it would be possible for a velocity gradient to exist in the CSM. In this case, higher velocity material would reside further from the star, while low velocity material would remain close. It may then be possible that, as the equivalent width of the Na line changes due to the sweeping up of material and changes in the ionisation conditions, the velocity of the remaining absorbing material changes with the strength of the line.

## 5 Discussion and Conclusions

We have carried out the first systematic study of the evolution of the Na i D doublet in ILRTs. The low ionisation potential of Na, combined with its relative ubiquity in the spectra of ILRTs, makes it a particularly useful species for probing the nature of the CSM around these sources and thus gaining insight into the type of systems from which they arise. We present our code for fitting these spectral lines using MCMC methods and apply it to a sample of ten ILRTs.
We find that Na i D evolution is present across our sample of ILRTs, with a single exception where the line is not detected in any observation. This evolution displays diversity in the strength of the line, as well as in its shape and time-scale, and we note a number of common behaviours shared by ILRTs. As a means of predicting the evolution displayed in ILRTs, we create a simple toy model wherein the strength of the Na i D absorption feature is directly proportional to the column density of Na i remaining in front of the ejecta from a central supernova. Testing this model using density profiles corresponding to a windy (\(\propto r^{-2}\)) environment and a constant density environment, we predict the behaviour expected in these situations and note that two of our sample, PTF 10fqs and NGC 300-2008OT1, display evolution qualitatively similar to these predictions. We find that the largest obstacle to accurately tracking the evolution of the Na line in our sample of ILRTs is the scarcity of high quality observations available for many sources. For the majority of ILRTs in our sample our analysis relies on fewer than 10 spectra, and we only had access to a single high signal-to-noise, high-resolution spectrum for one object in our sample. Further observations of any of these ILRTs at both early and late times would allow us to better constrain their overall evolution and may allow us to draw parallels between ILRTs whose similarity is only speculative with current observations. Our toy model was successful in describing the overall shape of the Na i D evolution for two of our ILRTs; however, it is clear from the diversity of evolution displayed across the whole sample that our model is too simplistic to act as a robust predictor of ILRT behaviour in general. In order to reproduce the observed evolution in greater detail, further detailed modelling will be required. Implementation of a spectral synthesis code such as Cloudy (Ferland et al., 2017) would allow for modelling with greater accuracy and the ability to reproduce some of the more complex behaviour displayed in our sample of ILRTs. This would lead to further constraints on the types of environments present in these events, and hence on their associated progenitor systems. A correlation has been reported in the literature between the strength of the Na i D line towards a source and the extinction along the line of sight (Munari & Zwitter, 1997; Poznanski et al., 2012). While this relation has often been used to estimate the extinction level for individual sources, there are a number of caveats which must be considered before using such a relation. The first is that this relation does not hold for low resolution spectra, where essentially no correlation is seen (Poznanski et al., 2011). The second is that this correlation is an empirical trend based on a large number of spectra. While it is true that extinction increases with Na i D strength for a large population of sources, it is not necessarily true that this represents a reliable predictor of the extinction of any individual source. Finally, above a certain equivalent width, the Na i D lines begin to saturate and the relation flattens. Munari & Zwitter (1997) find that this saturation occurs at equivalent widths > 0.5 Å for the Na i D2 line, and while Poznanski et al. (2012) extend this closer to 1 Å by considering multiple dust clouds in the line of sight, a saturation point still remains past which the correlation becomes less useful. For our sample of ILRTs in particular, this relation proves unhelpful.
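For reference, the relation applied below is Equation 9 of Poznanski et al. (2012); the coefficients quoted here are reconstructed from the two numerical estimates for AT 2010dn that follow, with the equivalent width of the combined doublet in Å:

\[\log_{10}E(B-V) = 1.17\times\mathrm{EW(D1+D2)} - 1.85,\]

so that \(\mathrm{EW}=1.30\) Å gives \(\log_{10}E(B-V)\simeq-0.33\), while \(\mathrm{EW}=2.88\) Å gives \(\log_{10}E(B-V)\simeq 1.52\).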
We take Equation 9 from Poznanski et al. (2012) and naively apply it to AT 2010dn. AT 2010dn displays a maximum equivalent width of \(2.88\pm 0.15\) Å and a minimum of \(1.30\pm 0.13\) Å. These lead to estimates for the extinction of either \(\log_{10}(E(B-V))=-0.33\pm 0.17\) or \(\log_{10}(E(B-V))=1.52\pm 0.19\), a difference of nearly two orders of magnitude. It can be seen that for a source where the strength of this line is variable in time, this relation cannot produce an accurate estimate of the extinction. As an aside, we also note that extinction due to scattering in a spherical CSM will differ from that caused by a foreground screen. Scattering occurring in a cloud located along the line of sight to a source will remove photons which would otherwise have reached us, increasing the strength of the absorption feature. Scattering from a spherical CSM introduces a second factor: while photons can still be scattered out of our line of sight, some fraction of photons which were originally travelling away from our line of sight will instead be scattered towards us through interaction with the CSM. Kochanek et al. (2012) investigate the consequences of modelling circumstellar dust around the type IIP SN 2012aw using Galactic extinction laws, finding that ignoring the spherical geometry results in an overestimation of both luminosity and absorption. The Poznanski et al. (2012) relation is a powerful statistical tool, but we advise caution in its use as a means of predicting the extinction of individual sources. Care must be taken to ensure that the Na absorption being measured is not varying in time, arises from the interstellar medium rather than the CSM, is measured in high-resolution spectra, and does not exceed the saturation limits before it can be used as an estimate of Milky Way extinction. The Na i D doublet represents a promising new diagnostic for studying the circumstellar environment around explosive transients such as ILRTs. Further series of spectral observations will allow us to better understand these events and eventually use them as quantitative diagnostics of wind profiles.

Figure 8: Evolution of the velocity of the Na absorption doublet in our sample of ILRTs. AT 2013la is not included due to the lack of detections making velocity determination impossible. Uncertainties come from the fitting of the central wavelengths of the Na doublet, and do not factor in the uncertainty in the location of the H\(\alpha\) line used for calibration.

## Acknowledgements

The research conducted in this publication was funded by the Irish Research Council under grant number GOIPG/2020/542. MF acknowledges support from a Royal Society - Science Foundation Ireland University Research Fellowship. AR acknowledges financial support from ANID BECAS/DOCTORADO NACIONAL 21202412. GV acknowledges INAF for funding his PhD fellowship within the PhD School in Astronomy at the University of Padova. We thank Peter Duffy, Andrea Pastorello and Elena Mason for their discussions regarding this project, and for the comments provided on the paper. This work made use of Astropy:3 a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022) Footnote 3: [http://www.astropy.org](http://www.astropy.org) Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 281.D-5016.
## Data Availability The spectra analysed in this study are available from the WISeREP repository at [https://wiserep.weizmann.ac.il](https://wiserep.weizmann.ac.il).
2301.11760
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology.
Nikita Stroev, Natalia G. Berloff
2023-01-27T14:58:22Z
http://arxiv.org/abs/2301.11760v2
# Renaissance of Analogue Optical Computing

###### Abstract

This review paper examines the physics and mathematics of optical computing, which utilizes photons and optics-related technologies for effective and efficient computational purposes. We discuss the history and development of optical computing, as well as modern analogue computing platforms and architectures, focusing on neural network implementations. Furthermore, we cover special-purpose optimisers and mathematical descriptions of optical optimisers, as well as their various applications and interconnections. We also explore the main directions of technological development in optical computing and estimates of its efficiency. Finally, we discuss future perspectives and the domain of optical quantum computing. This review provides a comprehensive overview of the current state-of-the-art in optical computing and its potential applications.

Introduction

In 1965 Intel co-founder Gordon Moore formulated an empirical observation that the number of transistors in a microprocessor would double nearly every two years, a statement which is known as Moore's law [1; 2]. This prediction was accompanied by the forecast of reaching a saturation point by 2015. The progress of conventional computer architectures stayed very close to Moore's vision; however, reaching the saturation point was just a matter of time. The miniaturization of silicon transistors recently managed to break the 7-nanometre barrier, which was believed to be the limit. Moreover, Moore's law usually comes with several associated indicators, such as the processor's single-thread performance and clock frequency, which reached saturation much faster than the density of transistors. All of these factors limit the scaling performance of modern computers. However, there are other reasons for the saturation of conventional computing power growth, which are consequences of Moore's law. For example, increasing the number of transistors allows one to obtain more powerful systems, but processing speed is ultimately limited by the accompanying heat production, while energy consumption grows together with performance. Another critical issue is the so-called von Neumann bottleneck [3], arising from the architecture design: it refers to the limitation of a computer system's throughput by the finite bandwidth available for incoming and outgoing data [4; 5]. All these issues pose severe problems for the future of conventional computer development. As a result, alternatives to von Neumann systems have started to emerge [6; 7]. One turns to alternative hardware architectures and purpose-built devices to keep up with the scaling of performance. As such, universal quantum computing promises to decrease the algorithmic complexity of solving challenging tasks by exploiting entangled states. However, in contrast to this high-risk, high-reward strategy (also discussed in the current review in the optical setting), there is the option of replacing electrons with photons while remaining within the classical regime of optical computing, or a classical regime with transient quantum coherence. The motivation for such a transition is clear, since photons move at the speed of light, have low heat production, offer high information density and can be efficiently coupled to matter to exploit nonlinear behaviours.
Moreover, optical technologies have matured and entered our everyday lives, for instance in the fibre optic channels that carry the global traffic of information or in the optical readers of compact disks. However, conversion of photons into electrons is required for compatibility with CMOS architectures. Such conversion takes a significant portion of the energy, slows down the overall process of information processing, and presents a severe technological bottleneck in this type of hybrid technology. Despite these difficulties, optical hardware is exploited in computing devices. For example, different application-specific photonic hardware can operate at a reasonable scale in data centres for heavy machine learning (ML) applications and large-scale optimization. Moreover, neural network (NN) architectures are nearly ideally suited for optical hardware, with the potential to achieve high efficiency, fast computing times, and low energy consumption thanks to the favourable physical properties of photonic systems. Nevertheless, at this point, optical computing cannot be called a mainstream technology, and it is unlikely that optics will ever replace electronics as the universal platform in the foreseeable future. An additional reason is the technological inertia accumulated through the years of significant investment in CMOS technologies. In part, the rapid development of what we call conventional computers in the early years led to an ever-increasing gap with computing using photonics, which will instead occupy its own place in the domain of application-specific hardware. There are many excellent reviews on the topic of optical computing. The challenges of modern computing and new opportunities for optics are discussed in [8]. This work presents the latest research progress in analogue optical computing, focusing on three main directions: vector/matrix manipulation, reservoir computing (RC) and the photonic Ising machine. Moreover, it covers the topic of computing efficiencies, such as the ratio of performance to power dissipation and the error/precision interplay of such hardware. Another excellent review considers analogue optical computing in the context of artificial intelligence (AI) applications [9]. This work provides an overview of the latest accomplishments of optical computing, considering the realization of different AI models and NN paradigms. One can find additional information in other reviews [10; 11; 12; 13; 14], which appeared due to the recent interest in deep learning methods and their success in many domains. What differentiates our review from those listed above is that we treat analogue optical computing using the concept of universality of the underlying dynamical systems description. The advantage of optical computing comes from the ultrafast emulation of the dynamics [15]. We focus on physical optimisers that exploit bifurcation dynamics and threshold operation and aim at solving nonlinear problems, therefore going beyond the speed-up of the linear operations that optics is so efficient at. We organise our review as follows. Section II provides a short history of optical computing together with modern analogue computing platforms, focusing on NN implementations and other neuromorphic systems. Section III discusses special-purpose optimisers and several examples of such devices. This section connects the operational regimes of such machines with complexity classes and addresses the scalability of this approach.
Section IV focuses on the physics of optical computing devices based on laser networks, optical parametric oscillators (OPOs) in fibre, and photon or polariton networks, as well as their mathematical models. The second part of this review investigates the mathematical structures of different assignments and their emulation by physical systems. Section V then lists a wide range of possible applications across different applied domains. The final part consists of our perspective on the future technological development of the optical computing field in Section VI and passing remarks about quantum optical devices in Section VII. Finally, Section VIII summarises the results.

## II Analog optical computing

Modern technologies demand vast data flows, creating various challenges for the development of the semiconductor industry and pushing classic electrical circuits to their physical limits. The developments range from the more mainstream, such as optical components that can be integrated into traditional computers or play the role of specific hardware dealing with computationally heavy tasks or supplementing such calculations, to ambitious ones, such as an all-optical digital computer architecture.

### Brief prehistory of optical computing

Although optical computing is an emerging technology that has gained more momentum over time (especially considering the popularity and efficiency of the latest data-driven approaches), many significant advances were made in previous decades. Therefore, before describing the particular systems, their advantages and their applications, we briefly discuss the progress that enabled them. More information and additional details can be found in [16]. The generic optical processor architecture comprises three planes: the input, the processing and the output planes. Early on, the input plane was a fixed image slide, later replaced by a spatial light modulator (SLM), introduced to perform the input signal conversion. The processing plane can be composed of lenses, nonlinear components, or holograms, while the final output part is composed of photodetectors or a camera. The first promising applications for optical processors were pattern recognition tasks, which influenced the prototypes of optical correlators. The simple architecture called 4-f was based on the work on spatial filtering; see [17]. The Fourier transform property of a lens is the standard function of many optical computing schemes, taking advantage of the speed and parallelism of light. The second type of correlator architecture was presented in 1966 by Weaver and Goodman [18] and is called the joint transform correlator (JTC); see Fig. 1. Even before 1950 there were significant steps in the development of optical technologies, such as the theory of image formation in the microscope developed by Abbe [19] and the phase contrast filter of Zernike [20]; the appearance of information optics followed with Elias Snitzer in 1952 [21; 22]. Other major inventions of that time were holography (by Gabor, 1948) [23] and the development of the laser in 1960 [24; 25]. The consequent introduction of the off-axis hologram allowed the separation of the different terms of the reconstruction, giving remarkable 3D reconstructions [26; 27] in 1962 by Leith and Upatnieks, which led to practical holography and was further enhanced by Lohmann, who created the first computer-generated hologram [28; 29] in 1966.

Figure 1: The optical setup of the joint transform correlator (JTC). The figure is taken from [16].
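To illustrate why the Fourier-transforming property of a lens makes pattern recognition natural in a 4-f or JTC geometry, here is a minimal digital emulation of a matched-filter correlator; the scene, pattern and noise level are arbitrary toy choices, not taken from any of the cited experiments.

```python
import numpy as np

def optical_correlate(scene, reference):
    """Digital emulation of a VanderLugt-type 4-f correlator: the first
    'lens' Fourier-transforms the scene, the filter plane multiplies by
    the conjugate spectrum of the reference (a matched filter), and the
    second 'lens' transforms back. Bright peaks in the output mark where
    the reference pattern occurs in the scene."""
    s = np.fft.fft2(scene)
    r = np.fft.fft2(reference, s=scene.shape)   # zero-padded reference
    return np.abs(np.fft.ifft2(s * np.conj(r)))

# Toy usage: embed a small pattern in a noisy scene and locate it.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, (64, 64))
pattern = np.ones((5, 5))
scene[20:25, 40:45] += pattern                  # hide the pattern
corr = optical_correlate(scene, pattern)
print(np.unravel_index(np.argmax(corr), corr.shape))  # ~ (20, 40)
```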
Early SLMs were based on the Pockels effect, with a few prospective devices [30; 31; 32]. Liquid crystal technology is the most commonly used technology for SLMs today. Another significant step was the invention of the first optical transistor [33], raising hopes for small integrated optical circuits. The period from 1980 to 2004 was vibrant and productive. Active progress was made in the field of holography; in particular, new encoding methods and point-oriented methods were developed to achieve high quality and high diffraction efficiency in the optical reconstructions of computer-generated holograms (CGHs) [34]. More than 50 types of SLM were introduced in the eighties and nineties [35]. Optical transistors presented another active area of research with the appearance of micro-electromechanical systems (MEMS) technology [36]. In the 1990s, vertical-cavity surface-emitting lasers (VCSELs) and self-electro-optic effect devices (SEEDs) became available [37]. In general, many aspects of modern optical interconnections and their components were introduced and studied during this period. The development of optical technologies provided the necessary experience with the capabilities of optical devices and brought the experimental component base to maturity. Optical computing received a second chance after the success of so-called deep NNs, which share many similarities with earlier neural-like optical architectures.

### Modern optical computing

Today, numerous research topics benefit from the progress in optical computing; therefore, the field is no longer so well defined. For example, some of the algorithms initially developed for pattern recognition using optical processors are now used successfully in digital computers. Other fields, such as biophotonics, largely benefit from past optical processing research. The fundamental building block of modern electronic computers is the transistor. Therefore, one must find an equivalent optical transistor to replace electronic components with optical ones. To assemble such transistors into higher-level components and create an all-optical computer's central processing unit (CPU), one has to design the optical processor and optical storage and organise the optical data transfer. However, such an approach faces many challenges, while the potential of optics in a large architecture consisting of higher-level components can be seen as somewhat speculative [38]. Among the persisting problems are the scalability of optical logic devices due to poor logic-level restoration, cascadability, fan-out and input-output isolation, energy consumption issues and a non-miniature device footprint. Moreover, coupling these potential devices with electrical components will require data format conversion from photons to electrons, which is relatively slow and energy-consuming. Nevertheless, the development of integrated photonics continued [39]. It led to attempts to create linear logic elements, such as all-optical logic gates [40; 41], to improve the existing optical transistors and to develop new ones in the context of all-optical processing [42; 43]. One can use SLMs, micro-lens arrays and holographic elements in free space to realize optical linear interconnections. Such linear elements are essential components in various optical computing devices. Nonlinearity is another essential ingredient in optical schemes; however, its realisation meets specific difficulties, as light beams pass through each other unperturbed in a pure vacuum.
To force such beams to interact directly, one would have to set up a high-energy experiment, which is challenging to realise in practice. There are two more practical ways to realise the nonlinearity: introduce a digital readout mechanism, implemented by a charge-coupled device (CCD), that sends the signal to a computer for nonlinear processing before feeding it back to the SLM, or develop all-optical nonlinear activation materials operated at sufficiently high beam intensities (utilising absorption, refraction or scattering processes). Nonlinearities can be divided into local ones (as needed in neural architectures) and global ones (as in reservoir computing systems, see below).

Combining linear and nonlinear elements led to the development of specialised isolated devices. As a result, optical computing research has seen a resurgence of activity, centred around new developments in photonic hardware accelerators and neuromorphic computing. Neuromorphic computing usually denotes the use of integrated systems to mimic neurobiological architectures. Although it is very close to the domain of AI, with the stress on the word "artificial", which deals with intelligently designed machines or agents, we will use neuromorphic computing in the general sense to describe any neural system, be it brain- or nature-inspired or artificially designed. Modern key focus areas are concerned with emulating the neural structure and operation of the human brain, including probabilistic computing, which creates algorithmic approaches to dealing with uncertainty, ambiguity, and contradiction in the natural world. Optics has the required ingredients to emulate NNs [13; 44].

### Optical neural networks

Optics has long been a promising medium for matrix multiplication and interconnects. Artificial neural networks (ANNs) have been widely used for industrial and fundamental applications, and this new technological demand created a renewed case for photonic NNs. Although most ANN hardware systems are electronic-based, their optical implementation is particularly attractive because of the intrinsic parallelism and low energy consumption of optics. Disparate ANNs vary by the types of constituent elements, the mathematical operations and the architecture used. In photonic approaches to ANNs, the mathematical operations are mapped to the dynamics of optical propagation set by programmable linear optics and nonlinearity. A scalar synaptic weight describes each pairwise connection between artificial neurons, and the layout of interconnections can be represented as a matrix-vector operation, where the input to each neuron is the dot product of the outputs of the connected neurons with the assigned weights.

Photonic realizations of ANNs fall into three categories. First, free-space systems rely on diffraction, Fourier transforms, etc. [45; 46]. They have high scalability and can simultaneously process large numbers of neurons but suffer from limited connectivity. One example is a reconfigurable, scalable two-layer NN for classifying the phases of a statistical Ising model [47]. Second, SLM-based systems program the linear operations with SLMs, while Fourier lenses implement the summation by collecting the power-encoded optical signal; however, in such free-optics schemes, the nonlinear optical activation functions are realized in a complicated manner, e.g. with laser-cooled atoms exhibiting electromagnetically induced transparency [47]. Finally, on-chip approaches based on wavelength multiplexing [48] or beamsplitter meshes [49] can achieve programmable all-to-all coupling but remain harder to scale.
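To make the mapping between optical propagation and ANN mathematics concrete, the following minimal NumPy sketch treats one layer of an idealised optical NN: a complex transmission matrix stands in for the programmable linear optics, and intensity (photodetector) readout provides the nonlinearity. The sizes and random values are purely illustrative and are not tied to any particular hardware discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def onn_layer(x, W):
    """One layer of an idealised optical NN.

    x : field amplitudes in the input modes/waveguides
    W : complex transmission matrix realised by programmable linear optics
    The linear step is a matrix-vector product; the nonlinearity is taken to
    be intensity detection, |.|^2, one of the simplest optical choices.
    """
    return np.abs(W @ x) ** 2

# Toy forward pass through two layers.
x = rng.random(8)                                           # input encoded in field amplitude
W1 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
W2 = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
print(onn_layer(onn_layer(x, W1), W2))
```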
One on-chip design was proposed in [50], where the optical platform takes advantage of encoding information in both phase and magnitude, thus making it possible to execute complex arithmetic by optical interference, which suits handwriting recognition tasks. Mach-Zehnder interferometers (MZIs) can perform many functions, such as dividing and modulating the light signals, separating the reference and the main light beams, and implementing a complex-valued weight matrix. Tunable waveguides can multiply optical signals, while wavelength-division multiplexing can add signals. Such tuning can be achieved by the accumulation of carriers in semiconductors [52; 53], by electronic currents [54; 55], or by photon-induced changes in the material [56]. To achieve the full potential of on-chip architectures, one requires long-range connections between neurons, assisted by photonic waveguides that outperform the metal-wire connections of conventional electronics but fall behind free-optics solutions. In particular, silicon photonic platforms have demonstrated efficient neuromorphic architectures [48; 49; 54].

An array of beam splitters and phase shifters can implement unitary matrix transformations using interference between different paths of coherent input light, where the inputs are assigned to different waveguides and power modulated [57]. Modulating the effective refractive index of signal-carrying waveguides is another optical mode-based approach to weight configuration. Non-volatile synapse implementations have been referred to as all-optical because they do not need electrical inputs for tuning; these may use optically induced changes in chalcogenide materials to control the light propagation in waveguides [58]. Weight configurations based on non-volatile optical materials could lead to improved heat dissipation. A scheme based on homodyne detection has several scaling advantages over on-chip approaches, including linear (rather than quadratic) chip-area scaling and constant circuit depth [51].

Figure 2: (a) One layer of an optical NN with \(k\) layers consists of matrix-vector multiplication (grey) and non-linearities (red). (b) One-level implementation. Matrix multiplication is performed by combining the input and weight signals and performing balanced homodyne detection. The final signals are sent through a non-linear function (red), serialized, and sent to the following layer’s input. Figure from [51].

Figure 3: In a fully functional 2-layer all-optical NN, the first layer comprises a linear operation by the first SLM (SLM1), which encodes a certain pattern, and a nonlinear activation function based on electromagnetically induced transparency in a magneto-optical trap (MOT). The second layer contains the second SLM (SLM2), converting four beams into two output beams at camera C3. The collimated coupling laser beam passing lens L1 is incident on SLM1, which generates four beams at the focal plane of L3, monitored by a flip mirror (FM) and camera C1. Four beams are imaged on the MOT through a 4-f system comprising L4 and L5. A probe laser propagates opposite to the coupling beam and is imaged on camera C2 through L5 and L6. L7 and L8 achieve further amplification. Four beams are incident on SLM2, generating two beams that are then focused on camera C3. Figure from [47].
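As a numerical illustration of the beamsplitter-mesh idea, the sketch below composes ideal \(2\times 2\) MZI transfer matrices into an \(N\)-mode transformation and checks that the result is unitary. The rectangular layout and the random phase settings are illustrative assumptions, not the specific meshes of [49] or [57].

```python
import numpy as np

def mzi(theta, phi):
    """Ideal 2x2 MZI: two 50:50 couplers around an internal phase shifter
    theta, preceded by an external phase shifter phi. Unitary by construction."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)          # 50:50 coupler
    return bs @ np.diag([np.exp(1j * theta), 1.0]) @ bs @ np.diag([np.exp(1j * phi), 1.0])

def embed(n, m, t):
    """Embed the 2x2 block t, acting on modes (m, m+1), into an n-mode identity."""
    u = np.eye(n, dtype=complex)
    u[m:m + 2, m:m + 2] = t
    return u

rng = np.random.default_rng(1)
N, layers = 6, 6
U = np.eye(N, dtype=complex)
for layer in range(layers):                                 # rectangular mesh of MZIs
    for m in range(layer % 2, N - 1, 2):
        U = embed(N, m, mzi(*rng.uniform(0, 2 * np.pi, 2))) @ U

print(np.allclose(U.conj().T @ U, np.eye(N)))               # True: the mesh is unitary
```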
The input vector in the homodyne implementation of [51] is encoded onto a pulse train, which is fanned out to an array of homodyne detectors, where each detector computes a product between the input and a row of a matrix encoded into optical pulse trains. The charge accumulated on the homodyne detector thus performs the matrix-vector multiplication. The output is sent through an electrical nonlinearity and converted back to an optical signal using a modulator. The advantage of the homodyne detection scheme is that the matrix elements (weights) are encoded optically and can be dynamically reconfigured. This procedure requires a reduced number of photonic components: the number of modulators, detectors, and beamsplitters grows linearly with the number of neurons. The homodyne detection architecture can be parallelized to implement general matrix-matrix multiplication by routing the light out of plane [51]. This is useful in practical NNs that reuse weights (either natively in convolutional layers or through batching).

Nonlinearity in an ANN is required to implement the thresholding effect of the neuron. Some photonic devices exhibit nonlinear neuron-like (gate-like) transfer functions; however, the challenge is to achieve cascadability. Photonic neurons must be capable of reacting to multiple optical inputs, applying a nonlinearity and producing an optical output suitable to drive other photonic neurons. Integrated photonic solutions use either an optical/electrical/optical (O/E/O) or an all-optical design to achieve such cascadability. In the O/E/O approach, nonlinearities may be introduced during the E/O conversion stage by employing lasers, saturated modulators or photodetector-modulators [59], or in the electronic domain only (e.g. the nonlinear dynamics of spiking photonic neurons could be implemented with a superconducting electronic signal pathway [60]).

NN architectures can take different forms: feed-forward and feedback, layered and recurrent, spiking or continuous, etc. Each neural model has a different signal representation, training method and network topology. Weight configurations can differ depending on the training type: supervised training, unsupervised training or programmatic 'compilation'. Topology describes the graph structure of neuron connectivity, and it is often advantageous to ANN operation to constrain the topology to guide weight configurations. Therefore, hardware implementation details may differ between different ANNs, while the key technologies necessary for practical realization include active on-chip electronics and light sources. Many photonic architectures have already been demonstrated: a recurrent ANN, continuous-time and programmed by a compiler [48]; a feed-forward, single-valued and externally trained ANN [49]; a spiking, feed-forward ANN with both external and local training [56]; a feed-forward multilayer ANN with semiconducting few-photon light-emitting diodes and superconducting-nanowire single-photon detectors [54]; and diffractive networks with a nonlinearity [47]. The computational tasks solved by these platforms cover the main functions attributed to ML and AI: image and audio recognition and classification, simulation of dynamical systems, combinatorial optimization and many other applications, which we will discuss in Section V.

Figure 4: Complex-valued coherent optical NN. (a) Scheme with an input layer, multiple hidden layers, and an output layer. (b) The schematic of the optical neural chip implementing complex-valued networks. The single chip performs all stages, such as input preparation, weight multiplication and coherent detection. The division and modulation of the light signals (\(i_{1}-i_{6}\)) are realized by the MZIs (red). The green MZI separates the reference light. Blue MZIs are used to implement the \(6\times 6\) complex-valued weight matrix. Grey MZIs are used for on-chip coherent detection. (c) The workflow of the ONC system. Figure from [50].
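The time-multiplexed homodyne accumulation described above can be captured in a few lines. In this toy discrete-time model (an assumption-laden sketch, not the circuit of [51]), each detector integrates the beat note between the input pulse train and its own optically encoded weight train, and the integrated charge reproduces one row of the matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 16
x = rng.normal(size=N)           # input pulse amplitudes, one per time step
W = rng.normal(size=(N, N))      # weight pulse amplitudes, one train per detector

charge = np.zeros(N)
for t in range(N):               # pulses arrive sequentially
    charge += W[:, t] * x[t]     # each detector accumulates its beat-note charge

print(np.allclose(charge, W @ x))    # True: accumulated charge == matrix-vector product
```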
Some of the architectures and their different experimental realizations are shown in Figs. 2, 3 and 4. The key merit of NN hardware is its energy consumption, which can be evaluated in terms of processing speed in petaMACs (multiply-accumulate operations) per second per mm\({}^{2}\) [61] and energy efficiency in attojoules per MAC [62]. In general, current optoelectronic hardware offers great advantages for implementing ANNs, but eliminating the electrical contribution will inevitably be beneficial. For practical applications of neuromorphic photonic systems, one needs to reduce the heat dissipated during information transfer between electrons and photons. Such a reduction can be achieved by improving optical sources, high-efficiency modulators, and photonic analogue-to-digital interfaces. Current photonic platforms lack the functionality of electronic processors, such as logic gates, high-level compilers and assemblers, analogue-digital-analogue conversion and memory. Although photonics provides advantages in connectivity and linear operations over electronics, on-chip memory is challenging. 'In-memory' computing, where processing is performed in situ with an array of memory devices called memristors, has been established [63; 64]; however, reading and writing at high frequencies is still challenging. The recent trends in the development of ANNs show an increasing demand for lowering the power consumption of the devices, while the requirements for parallelism and scalability have remained the same through the years [65]. Thus, the optical domain offers a promising solution to future hardware requirements.

### Reservoir and other neuromorphic computing systems

Reservoir computing (RC) is a recurrent NN-based framework for computation that maps input signals into a specific computational space of fixed nonlinear system dynamics. This system is usually called a "reservoir", and its state is passed to a simple readout mechanism, specifically trained to produce the final output [66]. The original concepts of RC can be traced to liquid-state machines [67] and echo-state networks [68]. Many physical systems can reproduce this computational framework, and the optical/photonic domain is no exception. The extension of RC to deep hierarchical RC allows one to create more efficient models and simultaneously investigate the inherent role of layered composition in recurrent structures. Another promising research direction is to combine RC with quantum physical systems to access a larger computational space.

The idea of RC is to exploit the rich nonlinear dynamics of controllable nonlinear systems and simultaneously overcome the disadvantages of recurrent architectures with their challenging and time-consuming training, for both hardware and software systems. The RC training is performed only at the readout stage, as the reservoir dynamics are fixed. This readout framework enjoys the benefits of the particular photonic physical system, such as speed or energy consumption, reducing learning costs.
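Because only the readout is trained, RC reduces to linear regression once the reservoir states have been recorded. The sketch below, a software echo-state network in the spirit of [68], makes this concrete; the reservoir size, spectral radius, ridge parameter and the toy one-step-ahead prediction task are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

n_res, T = 200, 1000
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo-state property)

u = np.sin(0.2 * np.arange(T + 1))[:, None]       # toy signal to be predicted one step ahead
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])              # fixed, untrained reservoir dynamics
    states[t] = x

target = u[1:, 0]                                 # predict u[t+1] from the state at time t
reg = 1e-6                                        # ridge regularisation of the readout
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ target)
print(np.mean((states @ W_out - target) ** 2))    # small training error
```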
Another benefit of RC is the ability to learn temporal (dynamic) dependencies, in contrast to the feed-forward architectures used for static (non-temporal) data processing. The simplicity of the training procedure in RC is attractive; however, accessing complex dynamics without a rigorous understanding can lead to many problems. Operating within the RC framework usually requires extensive experiments and experimental verification, because a unified theory of RC is lacking. Another disadvantage is performance instability due to the noise present, typical for nearly chaotic dynamical systems. Nevertheless, RC has been successfully applied to many practical problems, such as temporal pattern classification, time series forecasting, pattern generation, adaptive filtering and control, and system approximation. Moreover, RC can also be used conventionally for static data processing.

The first all-optical implementation of RC was demonstrated with a simple optical delayed feedback loop combined with the nonlinearity of an optical amplifier [70]. Concerning free-space optics principles, an image processing task was successfully solved using a predesigned configuration with a diffraction grating and Fourier imaging with randomly interconnected microring resonators [71]. A reservoir consisting of a diffractive optical element was described based on an \(8\times 8\) laser array (of VCSELs) and an SLM; it showed rich dynamics with the potential for scaling up [72]. Further modifications of this setup with a laser illumination field and a digital micro-mirror device allowed one to realise a large-scale RC scheme with 2025 diffractively coupled photonic nodes applied to a time series prediction task [73]. The recurrent 24-node silicon photonic NN, in which microring weight banks configure connections, was programmed using a "neural compiler" to solve a differential system emulation task with a 294-fold acceleration against a conventional benchmark [48]. Some hybrid architectures, such as opto-electronic devices, similarly benefit from the RC concept. For example, excellent performance has been obtained for speech recognition [74, 75, 76], chaotic time series prediction [75, 77, 78], and radar signal forecasting [79], with operating speeds in the megahertz range and the potential to increase them to gigahertz speeds while preserving state-of-the-art numerical accuracy. Additional cases of successful RC have been reported in the literature [66, 74, 80]. We will consider NN and RC architectures involving quantum effects in Section VII.1. The general structure of an RC scheme is illustrated by Fig. 5.

## III Nonlinear optimization specific optical machines

A large class of problems that can be solved by optical hardware comprises nonlinear programming problems. These seek to minimize a nonlinear objective function \(E(\mathbf{x})\) of real or complex variables \(\mathbf{x}\) subject to a series of constraints in the form of equalities or inequalities, i.e. \(g(\mathbf{x})\leq 0\) and \(h(\mathbf{x})=0\). Such a general framework covers many applications across the social sciences, finance, telecommunications, aerospace, and the biological and chemical industries [81]. Many nonlinear optimisation problems present a significant challenge, as the number of operations required to solve them usually grows exponentially with the number of variables.
This algorithmic complexity is the reason for using specialised techniques such as genetic algorithms, particle swarm optimisation, simulated annealing and population annealing. Quadratic programming (QP), the minimisation of quadratic functions of variables subject to linear constraints, is a usual simplification of such problems, because nonlinear optimisation problems are quadratic to second order in the vicinity of the optimal solution. Such an approximation can be successfully performed even outside the space of feasible solutions. QPs appear in least-squares regression or as part of a bigger problem, such as support vector machine (SVM) training.

The apparent correspondence between the QP objective function and 2-local spin Hamiltonians of various physical systems allows one to map the problem onto a physical setup. Here, the degrees of freedom \(\mathbf{x}\) are associated with "spins" and the cost function \(E(\mathbf{x})\) is associated with a "Hamiltonian" that specifies the interaction patterns and strengths between the spins. There are several possible ways such a system can find the optimal solution, i.e. the ground state of the corresponding spin Hamiltonian. Depending on the nature of the system, it can use either a quasi-equilibrium or a non-equilibrium regime. A system in thermodynamic equilibrium may find the optimal solution by quantum annealing, which is executed with the time-dependent Hamiltonian \[H(t)=\bigg{(}1-\frac{t}{\tau}\bigg{)}H_{0}+\frac{t}{\tau}H_{\rm objective}, \tag{1}\] where \(H_{0}\) is the initial trivial Hamiltonian with a known ground state, and \(H_{\rm objective}\) is the final Hamiltonian at \(t=\tau\), which encodes the original objective function \(E\). One can keep the system in the ground state during the adiabatic evolution from \(H_{0}\) to \(H_{\rm objective}\). For the transformation to be adiabatic, the time scale \(\tau\) must be much larger than that defined by the inverse of the spectral gap [82]. However, the process becomes inefficient for larger systems and sophisticated (glassy) \(H_{\rm objective}\), because the spectral gap typically shrinks exponentially fast with the system size. The excited states lead to large errors while simultaneously slowing down the annealing procedure.
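A tiny numerical illustration of Eq. (1) and of the role of the spectral gap is sketched below: for a few qubits, we interpolate between a transverse-field \(H_{0}\) and a random Ising \(H_{\rm objective}\) and track the gap along the schedule. The instance and the schedule resolution are arbitrary assumptions; the shrinking gap towards \(t=\tau\) (here partly due to the twofold degeneracy of the field-free Ising ground state) is what limits the annealing speed.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]]); sz = np.diag([1.0, -1.0]); I = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at `site` into an n-qubit tensor product."""
    return reduce(np.kron, [single if k == site else I for k in range(n)])

n = 4
H0 = -sum(op(sx, i, n) for i in range(n))              # trivial transverse-field start
rng = np.random.default_rng(4)
J = np.triu(rng.normal(size=(n, n)), 1)                # random Ising instance
Hobj = -sum(J[i, j] * op(sz, i, n) @ op(sz, j, n)
            for i in range(n) for j in range(i + 1, n))

for s in np.linspace(0.0, 1.0, 6):                     # s = t / tau
    E = np.linalg.eigvalsh((1 - s) * H0 + s * Hobj)    # Eq. (1)
    print(f"s = {s:.1f}, spectral gap = {E[1] - E[0]:.3f}")
```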
Many open non-equilibrium gain-dissipative systems, such as lasers and photonic or polaritonic condensates, are non-Hermitian systems and, therefore, do not have a ground state. Instead, they tend to minimise losses on the route to coherence. One can use a geometric analogy to describe their operational principle as approaching the surface of the optimisation cost function (the loss landscape) from below. Two main processes lead to loss minimisation: bosonic stimulation below the threshold, and the coherence of operation at the threshold, which is responsible for the quality of the solution. After the gain is increased to the point where it overcomes the linear losses and is stabilised by the nonlinearity of the gain saturation, the emergent coherent state minimises the losses (equivalent to maximising the total number of particles) and hence minimises the objective spin Hamiltonian into which the losses are mapped. The resulting evolution of the system's elements closely resembles the dynamics of Hopfield networks, which were proposed for solving quadratic optimisation problems forty years ago [83].

Despite the successes and the excitement generated back then, optimisers based on Hopfield networks were almost forgotten, primarily due to the high connectivity required between the neurons and the concomitant evolution time of the networks on classical architectures. The recent interest in Hopfield networks reemerged as it became possible to emulate them with physical systems such as electronic circuits or photonic NNs. Photonic systems have an advantage over their electronic counterparts due to the picosecond-to-femtosecond time scale of their operation; at the same time, many signals can flow through a single optical waveguide. As a result, a photonic implementation of Hopfield networks as optimisers can have large dimensionality, dense connectivity, and a fast convergence time to the optimal solution.

### Spin Hamiltonians

Most real-life decision or optimisation problems present a severe challenge to conventional classical computers, with classic examples of so-called "hard optimisation tasks" being the travelling salesman problem, the dynamic analysis of financial markets, the prediction of new chemical materials, and ML. Mathematically, many of these optimisation problems admit a reformulation into the problem of finding the ground state of a particular spin Hamiltonian, which can be emulated with a given simulator (e.g. a solid-state or optical system). The overhead in the number of additional variables needed for this mapping is, at most, polynomial [84]. Better still, such a system needs to map the variables of the desired objective function easily into the spin Hamiltonian elements (spins, currents, etc.). Additionally, one wants to tune short- and long-range interactions between the elements independently and to perform measurements that yield the answer with the required precision. Such spin-model Hamiltonians are experimentally challenging to implement and control. Still, the possible advantages in dealing with large problem sizes have led to an intensive search for a superior simulator. Such simulators have been realised to various extents in different physical systems and are covered in this review. Across all these systems, two classes of spin Hamiltonians are generally considered: Ising and XY.

The Ising model attracts the most attention, since an extensive range of challenging discrete combinatorial optimisation problems, e.g. travelling salesman, graph colouring, graph partitioning, etc., can be mapped into it [84]. This model is formulated for \(N\) classical "spins" \(s_{j}\) that take discrete values \(\{-1,1\}\) to minimise the quadratic unconstrained binary optimisation problem (QUBO): \[\min_{s_{i}}\quad-\,\sum_{i=1}^{N}\,\sum_{j=1,j<i}^{N}J_{ij}s_{i}s_{j}+\sum_{i=1}^{N}h_{i}s_{i},\quad\text{subject to}\quad s_{i}\in\{-1,1\}, \tag{2}\] where \(h_{i}\) represents an external (magnetic) field; this term can be incorporated into the \(\mathbf{J}\) matrix by considering \(N+1\) spins and will thus be omitted for the rest of this review. The experimental realization of **nonlinear** terms beyond quadratic in the Ising Hamiltonian would lead to a \(k\)-local spin Hamiltonian with \(k>2\) and would allow for a direct mapping of higher-order binary optimization problems (HOBO), including Max-SAT [85] or number factorization [86]: \[\min_{s_{i}}\quad-\sum_{i_{1},i_{2},\ldots,i_{k}}^{N}Q_{i_{1},i_{2},\ldots,i_{k}}s_{i_{1}}s_{i_{2}}\cdots s_{i_{k}}\quad\text{subject to}\quad s_{i_{l}}\in\{-1,1\}.
\tag{3}\] In the XY model, the "spins" are continuous vectors \(\mathbf{s_{j}}=(\cos\theta_{j},\sin\theta_{j})\), and the corresponding quadratic continuous optimization problem (QCO) can be formulated as \[\min_{\mathbf{s_{i}}}-\sum_{i,j<i}J_{ij}\mathbf{s_{i}}\cdot\mathbf{s}_{j}=\min_{\theta_{i}}-\sum_{i,j<i}J_{ij}\cos(\theta_{i}-\theta_{j})\quad\text{subject to}\quad\theta_{i}\in[0,2\pi), \tag{4}\] which is directly applicable to phase retrieval problems [87; 88; 89; 90]. When the phases \(\theta_{j}\) are limited to discrete values \(2\pi/n\) with an integer \(n>2\), the model (4) recovers the \(n\)-state Potts model, with applications in protein folding [91]. The appearance of continuous spins is a common feature of many optical systems, because short photonic pulses can be characterized through amplitude and phase variables. Some optical hardware for ML takes advantage of this feature: for example, complex-valued NNs [50] or the more unusual concept of analogue transformations using a nonlinear set of functions [92].

### P, NP, NP-complete problems

The computational complexity of a problem is determined by how the time, or the number of operations, required to solve it depends on the problem's size. A problem belongs to the \(\mathbb{P}\) class when a polynomial algorithm exists for solving it (e.g. searching for the maximum element in an array with no prior information). If a polynomial algorithm exists for verifying a solution, the problem belongs to the non-deterministic polynomial-time class \(\mathbb{NP}\) (so \(\mathbb{P}\subseteq\mathbb{NP}\)); such problems do not always admit an efficient (polynomial-time) method of finding a solution. Whether \(\mathbb{P}=\mathbb{NP}\) is true or not is a major unsolved problem in computer science, although it is widely believed to be untrue. The most difficult problems in \(\mathbb{NP}\) are called \(\mathbb{NP}\)-complete. They are equivalent to each other in the sense that either all of them or none admit a polynomial-time algorithm (e.g. the travelling salesman problem, spin glass models and integer linear programming are in general \(\mathbb{NP}\)-complete problems). A problem is called \(\mathbb{NP}\)-hard (informally, at least as hard as the hardest problems in \(\mathbb{NP}\)) if the existence of an efficient algorithm for its solution implies the existence of such an algorithm for all the \(\mathbb{NP}\)-complete problems. In general, if a decision problem with a yes or no answer (e.g. does a particular Ising Hamiltonian have a ground state energy less than some value?) is \(\mathbb{NP}\)-complete, then its corresponding optimization problem (e.g. what is the ground state energy of this Ising Hamiltonian?) is said to be \(\mathbb{NP}\)-hard. This means that \(\mathbb{NP}\)-hard problems are no easier to solve than the corresponding \(\mathbb{NP}\)-complete decision problems.

The computational complexity of the Ising model has been studied before [93], where the Ising model with a magnetic field (2) and equal antiferromagnetic couplings was shown to be \(\mathbb{NP}\)-hard for planar graphs. \(\mathbb{NP}\)-hardness was also demonstrated for the three-dimensional Ising model with nearest-neighbour interactions and coupling strengths randomly drawn from \(\{-1,0,1\}\). \(\mathbb{NP}\)-hardness implies the hardness of the hardest instances of the considered problems, while the average problem can be polynomially easy.
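To see why problem (2) is demanding, consider the brute-force baseline sketched below: it enumerates all \(2^{N}\) spin configurations of a small random instance (couplings drawn from \(\{-1,0,1\}\), \(h_{i}=0\); the instance is illustrative). This exponential sweep is precisely what analogue machines aim to shortcut.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
N = 10
J = np.triu(rng.choice([-1.0, 0.0, 1.0], size=(N, N)), 1)   # J_ij for i < j, as in Eq. (2)

def ising_energy(s):
    """E(s) = -sum_{i<j} J_ij s_i s_j for spins s_i in {-1, +1} (h_i = 0)."""
    return -s @ J @ s

ground = min((np.array(s) for s in product([-1, 1], repeat=N)), key=ising_energy)
print(ground, ising_energy(ground))    # exhaustive search over all 2^N configurations
```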
The existence of universal spin Hamiltonians has been established [94]. Universality means that all classical spin models can be reproduced within such a model, and certain simple Hamiltonians, such as the 2D Ising model on a square lattice with transverse fields and nearest-neighbour interactions of infinite precision, are universal [94]. Thus, due to the \(\mathbb{NP}\)-hardness of the Ising model, there should exist a polynomial-time mapping of many practically relevant \(\mathbb{NP}\)-complete problems to an Ising Hamiltonian whose decision version solves the \(\mathbb{NP}\)-complete problem of interest. The mappings of various \(\mathbb{NP}\) problems, including Karp's 21 \(\mathbb{NP}\)-complete problems, to Ising models with a polynomial overhead were explicitly formulated [84].

A problem belongs to the \(\mathbb{P}\) class only if all its instances can be solved in polynomial time, while for a problem to be \(\mathbb{NP}\)-complete it is enough to have some instances that are hard to solve. Such instances are said to represent worst-case behaviour. How to distinguish hard instances from simple ones is a cornerstone question for analogue physical optimisers; such an understanding is necessary to evaluate their scalability and efficiency [95]. It is believed that a procedure for creating "hard" instances of spin Hamiltonians may be found at the intersection of computational complexity and statistical physics; e.g. the hardness of problems can be connected to the existence of a first-order phase transition in the system (see [96; 97; 98; 99] and references therein). Indeed, even a medium-size hard instance is difficult to solve on a classical computer due to the exponential growth of the number of operations with size. Thus, the time required to find the ground state energy depends on the structure of the coupling matrix \(\mathbf{J}\). For instance, finding the global minimum of the XY model for positive definite matrices remains \(\mathbb{NP}\)-hard due to the non-convex constraints; still, it can be effectively approximated using a semidefinite programming relaxation with a performance guarantee [100; 101]. Sparsity also plays an important role, and for sufficiently sparse coupling matrices fast methods exist [102].

Establishing a unified set of optimization problems with tunable hardness and known solutions is an ongoing research direction. It would allow for an objective benchmark of classical and/or quantum simulators and algorithms; otherwise, it is hard to evaluate the performance of state-of-the-art platforms and methods. Current research has made a good start in developing a standardised procedure for performance evaluation. For example, the "optimisation simplicity criterion" was recently proposed to identify computationally simple instances [95]. Optical machines, with their mode-selection operation, often follow the dominant eigenvalue of the coupling matrix and find minimisers that correspond to the signs of the principal eigenvector components. If the minimisers of a given problem have this property, the solution will be found easily in polynomial (at most quadratic) time. One such popular example is the Ising model on the Möbius ladder graph [95]. By rewiring the Möbius ladder graph into random 3-regular graphs, one can probe an intermediate computational complexity between the \(\mathbb{P}\) and \(\mathbb{NP}\)-hard classes with several numerical methods. Another way to construct instances for testing involves the planted ensemble technique [99; 103].
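The eigenvector behaviour mentioned above is easy to probe numerically. In the sketch below (a toy check, not the criterion of [95] itself; the dense Gaussian instance is an assumption), the signs of the principal eigenvector of \(\mathbf{J}\) are compared against the brute-force ground state: whenever the two energies coincide, the instance is "simple" in the above sense.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
N = 10
A = rng.normal(size=(N, N))
J = (A + A.T) / 2
np.fill_diagonal(J, 0)                               # symmetric couplings, no self-interaction

def energy(s):                                       # E = -1/2 sum_ij J_ij s_i s_j
    return -0.5 * s @ J @ s

vals, vecs = np.linalg.eigh(J)
s_eig = np.sign(vecs[:, np.argmax(vals)])            # round the principal eigenvector

s_exact = min((np.array(s) for s in product([-1, 1], repeat=N)), key=energy)
print(energy(s_eig), energy(s_exact))                # heuristic vs exact minimum
```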
## IV Description of physical optical platforms for optimization

Rather than trying to model nature, one can pursue the reverse idea of exploiting physical phenomena to solve \(\mathbb{NP}\)-complete problems. The concept of using simulators or analogue processing devices is quite old; see, for example, [104]. However, in recent years one could observe a competition between different physical platforms to solve classical optimization problems faster than can be achieved on conventional hardware. This competition has resulted in the rapid emergence of a new field of Hamiltonian simulators at the intersection of laser and condensed matter physics, engineering and complexity theory. Here we discuss various physical systems that have appeared as promising platforms for solving computational problems.

### Complex laser networks

A new generation of complex lasers, such as degenerate cavity lasers, multimode fibre amplifiers, large-aperture VCSELs, and random lasers, have many advantages over the relatively simple traditional laser resonators in terms of their computing properties [105]. They have many spatial degrees of freedom, their nonlinear interactions within the gain material can be controlled by adjusting the spatial structures of the lasing modes, the spatial coherence of their emission can be tuned over a wide range, and the output beams may have arbitrary profiles. These properties allow complex lasers to be used for RC [106] or to be mapped to hard computational problems.

In laser networks, the coupling can be engineered by mutual light injection from one laser to another. This introduces losses that depend on the relative phases of the lasers. Such dissipative coupling drives the system to a phase-locking configuration that minimises the losses. If the amplitudes of the lasers are about the same, a steady state at a minimum of the XY Hamiltonian is found [107]. Degenerate cavity lasers are beneficial as solvers, since all their transverse modes have nearly identical \(Q\). This implies that a large number of transverse modes lase simultaneously, since they all have similar lasing thresholds [105]. The evolution of \(N\) single transverse and longitudinal mode class-B lasers can be described by the rate equations [108; 109] for the amplitude \(A_{i}\), phase \(\theta_{i}\), and gain \(G_{i}\) of the \(i\)-th laser \[\frac{dA_{i}}{dt}=(G_{i}-\alpha_{i})\frac{A_{i}}{\tau_{p}}+\sum_{j}J_{ij}\frac{A_{j}}{\tau_{p}}\cos(\theta_{i}-\theta_{j}), \tag{5}\] \[\frac{d\theta_{i}}{dt}=\Omega_{i}-\sum_{j}J_{ij}\frac{A_{j}}{\tau_{p}A_{i}}\sin(\theta_{i}-\theta_{j}), \tag{6}\] \[\frac{dG_{i}}{dt}=\frac{1}{\tau_{c}}[P_{i}-G_{i}(1+|A_{i}|^{2})], \tag{7}\] where \(P_{i},\alpha_{i},\Omega_{i}\) represent the pump strength, loss, and frequency detuning of laser \(i\), respectively, whereas \(\tau_{p}\) and \(\tau_{c}\) denote the cavity round-trip time and the carrier lifetime, respectively. The coupling strengths between the \(i\)-th and \(j\)-th lasers are represented by \(J_{ij}\). If the amplitudes of all lasers are equal, Eq. (6) reduces to the Kuramoto equation of coupled phase oscillators \[\frac{d\theta_{i}}{dt}=\Omega_{i}-\frac{1}{\tau_{p}}\sum_{j}J_{ij}\sin(\theta_{i}-\theta_{j}). \tag{8}\] Equation (8) is the celebrated Kuramoto model of identical oscillators, which is widely used to describe the emergence of coherent behaviour in complex systems [110; 111]. By the LaSalle invariance principle [112], every trajectory of the Kuramoto model converges to a minimum of the XY Hamiltonian.
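This convergence is easy to reproduce numerically. The sketch below integrates Eq. (8) with zero detunings (\(\Omega_{i}=0\), \(\tau_{p}=1\)) for a random symmetric coupling matrix and reports the XY energy of Eq. (4); the instance, step size and integration time are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20
A = rng.normal(size=(N, N))
J = (A + A.T) / 2
np.fill_diagonal(J, 0)

def xy_energy(theta):                 # Eq. (4): -sum_{i<j} J_ij cos(theta_i - theta_j)
    return -0.5 * np.sum(J * np.cos(theta[:, None] - theta[None, :]))

theta = rng.uniform(0, 2 * np.pi, N)
dt = 0.01
for _ in range(5000):                 # Euler integration of Eq. (8), Omega_i = 0, tau_p = 1
    theta += -dt * np.sum(J * np.sin(theta[:, None] - theta[None, :]), axis=1)

print(xy_energy(theta))               # the dynamics settles into a (generally local) minimum
```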
For laser arrays, it was shown that the probability of finding the global minimum of the XY Hamiltonian in the experimental realization agrees with the numerical simulation of Eqs. (5-7). However, simulating the Kuramoto model, Eq. (8), with the same matrix of coupling strengths gives a much lower probability of finding the global minimum. This result implies that the amplitude dynamics described by Eq. (5) provide a mechanism for reaching lower energy states by pumping from below [109]. Consequently, cavity lasers can be used as efficient physical simulators for finding the global minimum of the XY Hamiltonian and, therefore, for solving phase retrieval problems. Particularly successful in these tasks was a digital degenerate cavity laser [90]. It is an all-optical system that uses a nonlinear lasing process to find a solution that best satisfies the constraint on the Fourier magnitudes of the light scattered from an object. To ensure that the solution of the phase retrieval problem is found, a compact support aperture is introduced inside the cavity, so that different configurations of laser phases compete to find the one with minimal losses. The system combines the advantages of short round-trip times, of the order of 20 ns, and high parallelism in selecting the winning mode.

### Coherent Ising machine

A network of coupled optical parametric oscillators (OPOs) is an alternative physical system for solving the Ising problem ([113; 114; 115; 116; 117; 118; 119] and references therein). Each OPO is a nonlinear oscillator with two possible phase states above the threshold, which can be interpreted as an Ising spin. These artificial Ising spins are encoded in the optical phases of short laser pulses generated by a nonlinear optical process, i.e. optical parametric amplification. The OPO-based simulator, the coherent Ising machine (CIM), is a gain-dissipative system in which the ground state of the Ising Hamiltonian corresponds to the lowest-loss configuration. The optimal solution is found by driving the system close to the near-threshold regime, where the other local energy minima are still unstable. Currently, the most successful implementations of CIMs use fibre-based degenerate OPOs (DOPOs) and measurement-based feedback coupling, in which a matrix-vector multiplication is performed on an FPGA embedded in the feedback loop; see the scheme depicted in Fig. 6. The computational performance of such a scalable optical processor, bounded by the electronic feedback, was demonstrated for various large-scale Ising problems [113, 114, 115]. The comparison of a possible CIM speedup over classical algorithms is an ongoing study [116, 120]. Furthermore, the ability to implement arbitrary coupling connections [113] between any two spins implies better scalability than the solid-state annealer, i.e. the D-Wave machine [114]. In a CIM, each Ising spin corresponds to a DOPO that is described by a stochastic equation for the complex amplitude of the signal field \(a_{i}\): \[\frac{da_{i}}{dt}=pa_{i}^{*}-a_{i}-|a_{i}|^{2}a_{i}+\sum_{j}J_{ij}a_{j}, \tag{9}\]
where the dynamics is defined by a linear pump term \(p\), normalised linear and nonlinear losses, and mutual couplings \(J_{ij}\). To realise these couplings experimentally, a portion of the light is extracted from the cavity after each round trip. That light is then homodyned against a reference pulse to produce a measurement of \(a_{i}\), which is supplied to the FPGA, where a feedback signal is computed for each pulse. Lastly, an optical modulator converts the signal back to light for the next round trip.

Figure 6: Schematics of the coherent Ising machine (CIM) with the feedback mechanism. The time-multiplexed pulse degenerate parametric oscillator is formed by a non-linear crystal (periodically poled lithium niobate (PPLN)) in a fibre optic ring cavity containing 160 pulses. The feedback signal couples the independent pulses in the cavity and is computed from the measurements of different pulse fractions. IM - intensity modulator; PM - phase modulator; LO - local oscillator; SHG - second-harmonic generation; FPGA - field-programmable gate array. The figure is taken from [113].

The equations (9) are often reformulated in terms of the in-phase and quadrature components \(a_{i}=c_{i}+is_{i}\), giving the equations in real terms: \[\frac{dc_{i}}{dt}=\bigg{(}p-1-(c_{i}^{2}+s_{i}^{2})\bigg{)}c_{i}+\sum_{j}J_{ij}c_{j}, \tag{10}\] \[\frac{ds_{i}}{dt}=\bigg{(}-p-1-(c_{i}^{2}+s_{i}^{2})\bigg{)}s_{i}+\sum_{j}J_{ij}s_{j}. \tag{11}\] The computational effectiveness of these equations has been demonstrated by tackling small-size Ising-type problems with up to 20 spins [118]. In the part devoted to polariton condensates, we will show that, for achieving the global minimum, the realisation of an individual pump variation \(p_{i}\) that equalises all the signal amplitudes \(|a_{i}|\) is crucial.

Phase stability along the whole length of the cavity is required, making the DOPO system highly susceptible to external perturbations that can affect performance [114]. Furthermore, the nonlinear DOPO generation process demands powerful laser systems and temperature-controlled nonlinear materials, which results in large and complex optical setups. These issues have led to recent proposals of other physical platforms implementing a CIM-like machine. A CIM based on optoelectronic oscillators with self-feedback was suggested as a more stable and cheaper alternative, based on solving Ising optimisation problems on regular and frustrated graphs with up to 100 spins with similar or better performance compared to the original DOPO-based CIM [117]. An analogue all-optical implementation of a CIM based on a network of injection-locked multicore fibre lasers demonstrated the possibility of solving Ising Hamiltonians with up to thirteen nodes [121]. The dynamics of the network of injection-locked lasers were based on nonlinear coupled photon rate equations, and the couplings were implemented using SLMs. The couplings were reported to depend on the photon numbers, which are not known beforehand; this can be a significant obstacle to solving a given Ising Hamiltonian with the proposed photonic CIM. To resolve this issue, approaches similar to gain feedback [122; 123] may be considered in the future. Another large-scale optical Ising machine based on the use of an SLM was experimentally demonstrated by encoding the binary phases in separated spatial points of the optical wavefront of an amplitude-modulated laser beam and realising configurations with thousands of spins with tunable all-to-all pairwise interactions [124].
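The following sketch integrates the classical amplitude equations (10)-(11) for a small random instance, with the pump \(p\) slowly ramped through the threshold; the in-phase components bifurcate to positive or negative values whose signs are read out as Ising spins. The couplings, ramp and step size are illustrative, and the noiseless Euler integration is a simplification of the stochastic DOPO dynamics.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 16
J = np.triu(rng.choice([-0.1, 0.1], size=(N, N)), 1)
J = J + J.T                                       # symmetric couplings J_ij

c = 1e-3 * rng.normal(size=N)                     # in-phase components, tiny noise seed
s = 1e-3 * rng.normal(size=N)                     # quadrature components
dt, steps = 0.01, 20000
for t in range(steps):
    p = -1.0 + 2.0 * t / steps                    # slow pump ramp through threshold
    r2 = c**2 + s**2
    dc = (p - 1 - r2) * c + J @ c                 # Eq. (10)
    ds = (-p - 1 - r2) * s + J @ s                # Eq. (11)
    c += dt * dc
    s += dt * ds

spins = np.sign(c)                                # binary phase states read out as Ising spins
print(spins, -0.5 * spins @ J @ spins)            # candidate solution and its Ising energy
```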
The essential elements of a CIM are DOPOs with an unconventional operating mechanism called mode selection, also known as the gain-dissipative principle. We briefly describe this operational regime here: each neuron is prepared in a linear superposition state of different excitations to implement a quantum parallel search. The cost function is mapped to the effective loss, i.e. the photon decay rate, of the given network by setting the coupling coefficients proportional to \(J_{ij}\), which encode the information about the given task. The ground state of the Ising Hamiltonian corresponds to the oscillation mode with the minimum network loss. The system reaches the ground state with a minimum loss at the threshold pump rate: it starts oscillating as a single stable mode, which triggers the stimulated emission of photons and saturates the gain for all the other modes. Detecting this single oscillation mode yields the solution to the desired problem.

### Photon and polariton networks

Photons have both attractive and unattractive properties for computational tasks. Beyond the commonly known optical platforms, such as free-space optical setups or systems of lasers, it is possible to bind photons to matter-wave excitations. This gives rise to unique designs combining photons with matter, such as exciton-polaritons. Microcavity exciton-polaritons, or simply polaritons, are quasi-particles that result from the hybridisation of light confined inside semiconductor microcavities and bound electron-hole pairs (excitons). The steady states of these nonequilibrium systems are set by the balance between the pumping intensity, coming from the interconversion of the exciton reservoir into polaritons, and the losses due to the leakage of photons. Polaritons are bosons and obey Bose-Einstein statistics; therefore, they can form a condensed (coherent) state above a critical density [125]. Thus, polaritons offer a unique playground to explore nonequilibrium condensation and related effects in solids. The advantage for such explorations comes from the polariton's small effective mass, 4-5 orders of magnitude smaller than the electron mass. The design and choice of material allow one to control the polariton mass and to realise such solid-state nonequilibrium condensates not only at cryogenic temperatures but also at room temperature in organic structures.

The weak coupling at high temperatures and high pumping intensities transitions continuously to strong coupling at lower temperatures and lower pumping intensities. In the limit of small gain, i.e. small losses, solid-state condensates resemble equilibrium Bose-Einstein condensates (BECs); they approach lasers in the regime of high gain, i.e. high losses. This transition from equilibrium BECs to conventional lasers was described within a unified approach via polariton condensates [126]. In another system, closely resembling the physics of polariton condensates, the macroscopic occupation of the lowest mode of a gas of photons confined in a dye-filled optical microcavity was recently shown [127; 128; 129; 130]. The rapid thermalization of the rovibrational modes of the dye molecules by their collisions with the solvent, together with the phonon dressing of the absorption and emission by the dye molecules, leads to a thermal equilibrium distribution of photons and the concomitant accumulation of low-energy photons. Such systems resemble microlasers [131] but, unlike microlasers, exhibit a sharp threshold that occurs far below inversion.

Many techniques have been proposed and realised in experiments to construct lattices of polariton or photon condensates. Polariton lattices can be optically engineered by injecting polaritons into specific areas of the sample using an SLM [132; 133; 134; 135; 136].
Various potential landscapes to confine polaritons or photons have also been engineered [137; 138; 139]. The rate equations describing the evolution of gain-dissipative condensates in a lattice were derived using the tight-binding approximation of the space- and time-resolved mean-field equations [123; 140] and take the form of the Stuart-Landau equations \[\dot{\Psi}_{i}=-iU|\Psi_{i}|^{2}\Psi_{i}+(\gamma_{i}-|\Psi_{i}|^{2})\Psi_{i}+\sum_{j\neq i}\mathcal{C}_{ij}\Psi_{j}, \tag{12}\] where \(\Psi_{i}=\sqrt{\rho_{i}}\exp[i\theta_{i}]\) is the complex amplitude of the \(i\)-th condensate, \(U\) is the strength of the self-interactions between the quasi-particles, and \(\gamma_{i}\) is the effective injection rate (the difference between the pumping of quasi-particles into the system and the linear losses). The coupling strength \(\mathcal{C}_{ij}=J_{ij}+iG_{ij}\) is generally a complex number and consists of the Heisenberg coupling \(J_{ij}\), mediated by the injection reservoir, and the Josephson part \(G_{ij}\), which comes from exchange interactions between the condensates. The system described by Eq. (12) reaches a fixed point when \(J_{ij}\gg G_{ij}\) and pumping feedback is introduced into the system [123]. The feedback on the pumping intensity ensures that all the occupations are the same at the fixed point by adjusting the pumping if the occupation exceeds the set threshold value \(|\Psi_{i}|^{2}=\rho_{\rm th}\). The total injection of particles into the system of \(N\) condensates at the fixed point is given by \[\sum_{i=1}^{N}\gamma_{i}=N\rho_{\rm th}-\sum_{i=1}^{N}\sum_{j<i}^{N}J_{ij}\cos(\theta_{i}-\theta_{j}). \tag{13}\] Choosing the lowest possible total particle injection \(\sum\gamma_{i}\) that leads to the occupation \(\rho_{\rm th}\) for each condensate guarantees that the minimum of the XY Hamiltonian is reached. To find the actual global minimum, the system has to be brought slowly to the condensation threshold while spending enough time in its neighbourhood to span various phase configurations driven by the system noise (classical and quantum fluctuations). When the system reaches a phase configuration in the vicinity of the minimum of the XY Hamiltonian, it quickly converges to it by the gradient descent given by the imaginary part of Eq. (12): \[\dot{\theta}_{i}=-U\rho_{\rm th}-\sum_{j\neq i}^{N}J_{ij}\sin(\theta_{i}-\theta_{j}). \tag{14}\] This idea has been theoretically justified [123] and experimentally realised for simple polariton graphs [136]. It was also proposed how to extend the scheme to discrete optimisation problems such as QUBO (minimising the Ising Hamiltonian) or \(n\)-state Potts Hamiltonians [141]. When the resonant excitation is combined with a non-resonant one, the spins are forced to take discrete values aligned with the directions set by the resonant excitation. If an \(n:1\) resonant drive is added to the system, the dynamics of the coherent centres obey \[\dot{\Psi}_{i}=-iU|\Psi_{i}|^{2}\Psi_{i}+(\gamma_{i}-|\Psi_{i}|^{2})\Psi_{i}+\sum_{j\neq i}J_{ij}\Psi_{j}+h(t)\Psi_{i}^{*(n-1)}, \tag{15}\] where \(h(t)\) is an increasing function that reaches a constant value \(H>\max_{i}\sum_{j}|J_{ij}|\) at the threshold. At the fixed point, Eq. (13) is replaced with \[\sum_{i=1}^{N}\gamma_{i}=N\rho_{\rm th}-\sum_{i=1}^{N}\sum_{j<i}^{N}J_{ij}\cos(\theta_{i}-\theta_{j})-H\rho_{\rm th}^{n/2-1}\cos(n\theta_{i}).
\tag{16}\] At \(n=2\), the last term on the right-hand side penalises phases deviating from \(0\) or \(\pi\), reducing the optimization problem to QUBO. For \(n>2\), the \(n\)-state Potts Hamiltonian is minimized. The minimization of HOBO may be achieved when the system operates much above the threshold, where higher-order terms must be taken into account [142]. If the time evolution of the reservoir of noncondensed particles is slow, the system of \(N\) interacting coherent centres is better described by the following equations [140]: \[\dot{\Psi}_{i}=-iU|\Psi_{i}|^{2}\Psi_{i}+(R_{i}-\gamma_{c})\Psi_{i}+\sum_{j\neq i}J_{ij}\Psi_{j}, \tag{17}\] \[\dot{R}_{i}=\Gamma_{i}-\gamma_{R}R_{i}-R_{i}|\Psi_{i}|^{2}, \tag{18}\] where \(R_{i}\) is the occupation of the \(i\)-th reservoir, and \(\Gamma_{i}\), \(\gamma_{R}\) and \(\gamma_{c}\) characterize the rate of particle injection into the reservoir and the linear losses of the reservoir and condensate, respectively.

Figure 7: Top: Schematic of the condensate density map for a five-vertex polariton graph. The sign of the coupling depends on the separation distance between the sites and is either ferromagnetic (solid-blue lines) or anti-ferromagnetic (dashed-red lines). Each vertex of the polariton graph represents a local phase mapped to a classical vector spin. Bottom: schematics of different types of annealing for finding the global minimum of the energy landscape of the simulated XY Hamiltonian [136].

If one replaces \(\Psi_{i}\) by the electric field and \(R_{i}\) by the population inversion of the \(i\)-th laser, the result is a form of the Lang-Kobayashi equations normally derived to describe the dynamical behaviour of coupled lasers from Lamb's semiclassical laser theory [143; 144]. The total injection of particles into the system of \(N\) condensates at the fixed point is given by \[\sum_{i=1}^{N}\Gamma_{i}=(\gamma_{R}+\rho_{\rm th})[N\gamma_{c}-\sum_{i=1}^{N}\sum_{j<i}^{N}J_{ij}\cos(\theta_{i}-\theta_{j})]. \tag{19}\] Similarly to Eq. (13), if the total injection into the system is minimal, the phases of the coherent centres minimize the XY Hamiltonian.
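As a numerical companion to Eq. (12) and the feedback rule above, the sketch below integrates a small Stuart-Landau network with real couplings (\(G_{ij}=0\)) and a simple proportional feedback that raises each \(\gamma_{i}\) until every occupation settles at \(\rho_{\rm th}\). The coupling instance, feedback gain and time step are illustrative assumptions; the final phases give a (generally local) XY minimiser.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 8
A = rng.normal(size=(N, N))
J = 0.05 * (A + A.T)
np.fill_diagonal(J, 0)

U, rho_th, eps, dt = 0.1, 1.0, 0.05, 0.01
psi = 1e-2 * (rng.normal(size=N) + 1j * rng.normal(size=N))   # weak noisy seed
gamma = np.full(N, -0.5)                                      # start below threshold
for _ in range(50000):
    rho = np.abs(psi) ** 2
    # Eq. (12) with real couplings, C_ij = J_ij:
    psi += dt * ((-1j * U * rho + gamma - rho) * psi + J @ psi)
    gamma += dt * eps * (rho_th - rho)     # feedback: equalise all occupations at rho_th

theta = np.angle(psi)                      # condensate phases play the role of XY spins
print(-0.5 * np.sum(J * np.cos(theta[:, None] - theta[None, :])))   # XY energy reached
```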
### Mathematical description of optical optimisers

Many existing optical machines can be described as the evolution of a set of \(N\) classical degrees of freedom. The variety of optical platforms, such as atoms, polaritons, excitons, photons, etc., share many similar features in their mathematical description. We present a structured list of the main equations used in the context of nature-inspired physical systems and algorithms in Fig. 8. There are a few main reasons to highlight the unified picture behind these equations. The first is to show that all of the presented equations represent the same phenomena of a minimization principle and bifurcation dynamics, unifying many equations from mathematics, physics, the theory of neural networks, etc.; see [145]. The second important feature is reflected in the structure of the list: one can easily find the correct transformation between any two chosen equations. This is the reason we non-rigorously placed them in an order such that the canonical Andronov-Hopf oscillator (AHO) model resides at the top of the list and the most straightforward gradient descent resides at the bottom. One can arrive at the required equations starting from the canonical model by applying the proper transformation in the neighbourhood of the bifurcation leading to the solution, or by omitting some of the terms or derivatives. Moreover, the differences between the presented equations often lie only in the chosen parametrization of the system. The optimization process can be carried out differently, even within the scope of classical dynamical systems, depending on the chosen parameters. Such a unified framework allows one to merge many empirical results and to work within a universal model for a better comparison of results.

The Principle of Least Action, the Principle of Minimum Power Dissipation (or Minimum Entropy Generation) and the Variational Principle are good demonstrations that "optimization is built into physics" [146]. In Hamilton's formulation, the fundamental Principle of Least Action states that the true dynamical trajectory of a system between an initial and a final configuration in a specified time is found by choosing, among the set of possible imaginary trajectories, the one that makes the action locally stationary (in other words, has least action). Such a variational task is an excellent example of physics spawning complex problems. For even more complicated tasks, one can consider the formulations of the Principle of Least Action for classical and quantum field theories. We do not include the explicit Hamiltonian equations (\(\dot{q}_{i}=\frac{\partial H}{\partial p_{i}},\quad\dot{p}_{i}=-\frac{\partial H}{\partial q_{i}}\)) in the second block of Fig. 8; however, they are also connected with the presented equations and serve as a perfect entry point for considering the whole list from the physicist's point of view. Within the scope of this review, we restrict ourselves to classical systems with a discrete number of degrees of freedom and focus on PDEs that can be mapped into Newtonian-like equations of motion. Additionally, we do not pay much attention to modifications of the original equations of motion, such as Lagrange multipliers, holonomic constraints or relativistic factors.

Another good entry point to Fig. 8 is the well-known classical gradient-descent dynamics with the target cost function defined by the gradients \(-\sum_{j\neq i}Q_{ij}\left(x_{j}\right)\), which is the most straightforward of the presented dynamical systems. One can connect it with gradient descent with momentum (see the centre of Fig. 8), i.e. the classical momentum (CM) method [147], or its improved version, the Nesterov accelerated gradient descent [148]. The Kuramoto model is a well-known mathematical model used to describe synchronization phenomena in systems of coupled oscillators [149; 150; 151]. One can obtain this model from the AHO equations using a transformation that involves the eigenvalues and eigenvectors of the coupling matrix in the neighbourhood of the Hopf bifurcation, derive it directly from a nontrivial dissipative Hamiltonian, or view it as gradient descent over the cost function corresponding to the classical XY Hamiltonian. The bottom part of Fig. 8 consists of the Hopfield NN and coherent Ising machine descriptions. The Hopfield NN is a recurrent artificial NN and can be viewed as a gradient-descent variant with an effective projection term with characteristic time \(\tau\) and the gradient terms \(-\sum_{j\neq i}Q_{ij}\left(x_{j}\right)\), which are usually represented through \(-\sum_{j\neq i}J_{ij}\varphi(x_{j})\), where \(\varphi(x_{j})\) is the projection function and \(J_{ij}\) are the coupling strengths [83]. The CIM equations are very close to the Hopfield description.
A CIM is a network of OPOs in which the "strongest" collective mode of oscillations, emerging as the system goes above threshold, corresponds to an optimal solution of a particular Ising problem [113; 114]. The main difference between the classical description of the CIM (which is debated to be essentially non-classical [152; 153]) and the Hopfield NN is the additional pumping term \(p\) and the saturation mechanism \(-x_{i}^{2}\). The middle part of Fig. 8 contains the simulated bifurcation machine (SBM) equations, which are inspired by the adiabatic evolution of classical nonlinear Hamiltonian systems exhibiting bifurcation phenomena [154; 155; 156] (a numerical sketch in this spirit appears below). The higher derivative makes the connection with physics more visible and improves the performance of the simulation algorithm for specific parameters. An alternative perspective on the connections between physical Lagrangian/Hamiltonian systems and neural network evolution is given by modern Hopfield networks, or dense associative memories [157; 158]. Modern Hopfield networks operate with feature \(x_{i}\) and memory (hidden) \(h_{\mu}\) neurons that evolve as continuous variables in continuous time. The characteristic times for the two groups are \(\tau_{f}\) and \(\tau_{h}\). The symmetric coupling functions are chosen according to \(Q_{i\mu}\left(h_{\mu}\right)=\xi_{i\mu}f_{\mu}\) and \(G_{\mu i}\left(x_{i}\right)=\xi_{\mu i}g_{i}\) and connect only neurons from different groups, i.e. a feature neuron \(i\) to a memory neuron \(\mu\) and vice versa. The outputs of the memory neurons and the feature neurons are denoted by the non-linear functions \(f_{\mu}=f(\{h_{\mu}\})\) and \(g_{i}=g(\{x_{i}\})\), respectively. These functions can be represented as derivatives of the Lagrangian functions of the two groups of neurons, \(f_{\mu}=\frac{\partial L_{h}}{\partial h_{\mu}}\) and \(g_{i}=\frac{\partial L_{x}}{\partial x_{i}}\). Choosing a specific Lagrangian defines the network's dynamics (or update rule), which minimises the energy function. One can recover an effective theory of the evolution by integrating out the hidden neurons.

The upper part of Fig. 8 contains the Andronov-Hopf oscillator model [159; 160], the canonical model describing the appearance of bifurcations, which are among the essential phenomena observed in neuron dynamics, responsible for periodic activity. The functions \(Q_{ij}\) account for the interaction between the \(i\)-th and \(j\)-th oscillators, while \(\gamma_{i}\), \(\omega_{i}\), \(\sigma_{i}\), \(U_{i}\) represent the effective gain, self-frequency, nonlinear dissipation and self-interactions, respectively. Many laser [161], photonic, polaritonic [140] and biological [162] systems exhibit the so-called Andronov-Hopf bifurcation at the threshold, which can spawn limit-cycle behaviour. The AHO model can be an attractive choice as a unifying framework for the many models presented here, which demonstrate a variety of collective phenomena [145]. Another important property of the AHO model is its canonicity, which means that, in the vicinity of the bifurcation, one can obtain every equation below the AHO in Fig. 8 by a certain transformation, while the reverse is not true. One can also investigate the bifurcation phenomena and the time-dependent behaviour of the coefficients near the bifurcation point, since this is the crucial mechanism by which the system finds a solution to the optimization task. The AHO model shares its canonicity with another model: weakly interacting neural networks. Such a network consists of \(N\) neural oscillators comprising excitatory (\(x_{i}(t)\)) and inhibitory (\(y_{i}(t)\)) parts that evolve according to the presented dynamical equations [162]. In the local context, the functions \(f,g\) are responsible for the internal behaviour of the \(i\)-th part of the system, while \(p,q\) represent the external interactions, the strength of which is parametrized by \(\epsilon\). The explicit transformations between the equations can be found in [163; 164; 165]. We omit the explicit description of the transformations that lead from the top equations to the bottom ones; a detailed discussion and the corresponding references can be found in [145]. Although coupled microelectromechanical systems (MEMS) do not contain optical elements, they are governed by similar second-order differential equations [165]. The transitions from the AHO to the CIM, Hopfield or SBM equations can also be found in [145].
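Returning to the SBM mentioned above, here is a compact sketch in its spirit (the ballistic variant with inelastic walls); the coefficients, ramp schedule and random instance are illustrative assumptions rather than the tuned parameters of [154; 155; 156].

```python
import numpy as np

rng = np.random.default_rng(10)
N = 16
J = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), 1)
J = J + J.T                                     # symmetric Ising couplings

a0, c0, dt, steps = 1.0, 0.5 / np.sqrt(N), 0.05, 4000
x = 1e-2 * rng.normal(size=N)                   # positions: continuous spin proxies
y = 1e-2 * rng.normal(size=N)                   # conjugate momenta
for t in range(steps):
    a = a0 * t / steps                          # slowly ramped bifurcation parameter
    y += dt * (-(a0 - a) * x + c0 * J @ x)      # second-order (Newtonian-like) dynamics
    x += dt * a0 * y
    wall = np.abs(x) > 1                        # inelastic walls at |x| = 1
    x[wall] = np.sign(x[wall])
    y[wall] = 0.0

spins = np.sign(x)
print(spins, -0.5 * spins @ J @ spins)          # candidate Ising solution and its energy
```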
AHO shares its canonicity with another model: weakly interacting neural networks. Such a network consists of \(N\) neural oscillators comprising excitatory (\(x_{i}(t)\)) and inhibitory (\(y_{i}(t)\)) variables that evolve according to the presented dynamical equations [162]. In the local context, the functions \(f,g\) are responsible for the internal behaviour of the \(i\)-th part of the system, while \(p,q\) represent the external interactions, whose strength is parametrized by the parameter \(\epsilon\). The explicit transformations between the equations can be found in [163; 164; 165]. We omit the explicit description of the transformations that lead from the top equations to the bottom ones; a detailed discussion and the corresponding references can be found in [145]. Although coupled microelectromechanical systems (MEMs) do not contain optical elements, they are governed by second-order differential equations similar to the optical ones [165]. The transitions from the AHO to the CIM, Hopfield or SBM equations can also be found in [145]. It is important to remember that introducing sophisticated time dynamics of the parameters can improve the minimisation properties of each of the presented types of equations. For example, it is possible to introduce specific time schedules (e.g. the chaotic amplitude method, which anneals the coupling terms depending on the discrepancy between the oscillator amplitude and its saturation point [166]) or higher-order terms (e.g. \(\dot{\psi}_{i_{k}}\sim\sum_{i_{1},i_{2},\ldots,i_{k-1}}^{N}Q_{i_{1},i_{2},\ldots,i_{l},\ldots,i_{k}}\psi_{i_{1}}\psi_{i_{2}}\ldots\psi_{i_{l}}\ldots\psi_{i_{k-1}}^{*}\) [142]). An additional note should highlight the Principle of Minimum Power Dissipation and its role in analogue optimization machines. It was shown that many physical systems act through this principle and perform Lagrange function optimization [146]; the Lagrange multipliers are given by the gain or loss coefficients or their time-varying parametrization (see, for example, the equations of the CIM). Depending on the characteristics of the machine, this can be helpful in many other applied domains.

Figure 8: The ordered list of the main models from different branches of science used in the context of optimization. The most general equations are closer to the top, starting with the canonical AHO model, which encompasses all equations below through certain transformations, while the simpler ones, like gradient descent, are located at the bottom. We non-rigorously group the models according to their use of second-order derivative terms. The functions \(\sum_{j\neq i}Q_{ij}\left(x_{j}\right)\) can take different forms, such as \(\eta\frac{\partial E}{\partial x_{i}}\) in the gradient-descent case, or \(Q_{ij}\left(x_{j}\right)=J_{ij}x_{j}\) and \(Q_{ij}\left(x_{j}\right)=J_{ij}\varphi(x_{j})\) in the case of the Hopfield NNs.

The operation of optical machines consisting of \(N\) elements can be described in a unified fashion as the evolution of a set of \(N\) classical or quantum oscillators. The difference between classical and quantum comes from the system's initial state, which affects the speed and probability of finding the final state (usually a solution to a problem). If the occupation numbers of the oscillators are large and somewhat uncertain and the interactions are weak, then the system evolves as an ensemble of classical fields with the corresponding classical-field action [167]. This analogy is valid for any bosonic oscillators, including optical ones: atoms, polaritons, excitons, photons, etc.
For instance, the density matrix of a completely disordered, weakly interacting Bose gas with large and somewhat uncertain occupation numbers is almost diagonal in the coherent-state representation. The initial state can be viewed as a statistical ensemble of coherent states, and, to leading order, each coherent state evolves along its classical trajectory. The evolution leads to an explosive increase of occupation numbers in the low-energy region of wave-number space, where the ordering process takes place [167]. Even if the occupation numbers are of order unity in the initial state, so that the classical matter-field description is not yet applicable, the evolution, which can be described at this stage by the standard Boltzmann quantum kinetic equation, inevitably results in the appearance of large occupation numbers in the low-energy region of the particle distribution. Therefore, one can switch from the kinetic equation to the matter-field description for the long-wavelength component of the field at the particular moment of the evolution when the occupation numbers become appropriately large. When this happens, the optical system can be described using a classical matter field. However, the quantum dynamics before this moment plays a crucial role: this fully quantum dynamics, with entanglement and superposition of states, allows for a complete scanning of the high-dimensional space of the system until the coherent state is found. After that, the system behaves classically, while this coherent state settles to a fixed point that is a solution to the problem. During the passage to the coherent state, the quantum effects should enhance the search for the optimal state and potentially lead to a quantum speed-up.

### Associative memory model

In this section, we present the associative memory model as one of the NN models that exploit the links with spin Hamiltonians. This correspondence implies that many physical systems with nontrivial (nonzero) interaction potentials can be used as computational devices. The standard model of associative memory [83] uses a system of \(N\) binary neurons, with values \(\pm 1\). A configuration of all the neurons is denoted by a vector \(\sigma_{i}\), \(i=1,\ldots,N\). The model stores \(K\) memories, denoted by \(\xi_{i}^{\mu}\), \(\mu=1,\ldots,K\), which are also binary. The model is defined by an energy function (further, a Lyapunov function), which is given by \[E=-\frac{1}{2}\sum_{i,j=1}^{N}\sigma_{i}J_{ij}\sigma_{j},\quad J_{ij}=\sum_{\mu=1}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}, \tag{20}\] and a dynamical update rule that decreases the energy at every update. The fundamental task is that, when presented with a new pattern, the network should respond with the stored memory that most closely resembles the input. Many physical systems we considered in Section IV can follow the gradient of this Lyapunov function, which automatically converts them into an ANN. The theory of Hebbian learning [168; 169] addresses associative memory and describes how to prescribe the coupling coefficients \(J_{ij}\) between the neurons (usually normalised by the number of patterns \(K\)). Usually, \(J_{ij}\) is taken as the sum of the outer products of the stored patterns. One can find other ways to define the coupling coefficients in associative memory, e.g. the pseudoinverse rule, the Storkey learning rule and others. There has been a lot of work investigating this model's capacity, defined as the maximal number of memories that the network can store and reliably retrieve.
It has been demonstrated that, in the case of random memories, this maximal value is of the order of \(K^{\rm max}\approx 0.14N\) [170; 83; 171]. If one attempts to store more patterns, several neighbouring memories in the configuration space merge, producing a global minimum of the energy (20) and thus preventing recovery of the stored memories. It is possible to improve the capacity up to \(K^{\rm max}=N\) by modifying the Hamiltonian (20) in a way that removes second-order correlations between the stored memories [172]. The simple associative memory model (20) has many benefits. Firstly, it is quadratic in its variables, which means that the energy gradient is linear in those variables; therefore, one can easily calculate the corresponding updates of the neurons that lower the energy function (20). A further consequence of this mathematical structure is that one can reproduce the energy function (20), together with the required dynamical behaviour, using various physical hardware systems. To build an associative memory machine, one needs to connect the elements representing the analogue variables via a nontrivial interaction potential proportional to the strengths \(J_{ij}\) and project the final stable state into the discrete domain to obtain the binary states of the neurons. Furthermore, the model's simplicity makes it easy to modify and to incorporate other extensions. Finally, the model's universality means it is possible to solve different tasks via associative memory by mapping between tasks; for example, a classification task can be reduced to pattern recognition/restoration. Another well-known name for the associative memory model is the Hopfield NN, a form of recurrent ANN with binary threshold nodes. Moreover, the Hopfield NN shares many other similarities with the physical spin-glass model and several combinatorial optimization tasks. For example, the Hopfield model is isomorphic to the Ising model of magnetism (at zero temperature) [173], which has been extensively analyzed in physical contexts. In combinatorial optimization, finding the ground state of the Ising model is \(\mathbb{NP}\)-hard and can be related to the QUBO (2). Moreover, computing the statistical sum of a spin glass belongs to the same \(\mathbb{NP}\)-hard complexity class, which was a significant obstacle to calculating its various thermodynamic quantities. Other examples of tasks are the Boolean satisfiability problem, or SAT [174], and weighted MAX-2-SAT. To fully define the associative memory model, one has to specify the dynamical update rule of the neurons. For instance, the update rule can describe the discrete state of the neurons in discrete time steps: \[\sigma_{i}(t+1)=\left\{\begin{array}{ll}1,&\text{if }\sum_{j}J_{ij}\sigma_{j}(t)>0,\\ -1,&\text{otherwise},\end{array}\right. \tag{21}\] with the same notation as in Eq. (20). The continuous version has the form: \[\frac{dx_{i}}{dt}=-\frac{x_{i}}{\tau}+\sum_{j}J_{ij}g\left(x_{j}\right)+h_{i}, \tag{22}\] where \(x_{i}\) denotes the mean state of the \(i\)-th neuron, which can take continuous values in an initially defined range, \(h_{i}\) is a direct input or bias coefficient in case the Lyapunov function (20) has a non-zero field, \(g\) is a monotone function that bounds the continuous states and converts them into discrete ones in the final state of convergence, i.e. establishes the correspondence \(\sigma_{i}=g(x_{i})\), and \(\tau\) is the characteristic time of the convergence in Eq. (22) to an optimal or suboptimal solution.
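A compact numerical sketch of Eqs. (20) and (21) (ours; the pattern count, network size and noise level are illustrative) shows Hebbian storage and retrieval of a corrupted pattern:

```python
import numpy as np

# Minimal sketch (ours) of the associative memory defined by Eqs. (20)-(21):
# Hebbian couplings J_ij = sum_mu xi_i^mu xi_j^mu and asynchronous threshold
# updates.  We normalise by N here; the text mentions normalisation by K --
# a positive rescaling does not affect the sign-based updates.
rng = np.random.default_rng(2)
N, K = 100, 5
xi = rng.choice([-1, 1], size=(K, N))      # K random binary patterns
J = xi.T @ xi / N                          # Hebbian rule
np.fill_diagonal(J, 0.0)

def retrieve(sigma, sweeps=10):
    """Asynchronous updates of Eq. (21) until (approximate) convergence."""
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            sigma[i] = 1 if J[i] @ sigma > 0 else -1
    return sigma

# Corrupt 15% of the first pattern's bits and let the network restore it.
probe = xi[0].copy()
flip = rng.choice(N, size=15, replace=False)
probe[flip] *= -1
restored = retrieve(probe)
print("overlap with stored pattern:", (restored @ xi[0]) / N)
```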
The analogue computation with the NN can be described as the evolution of vector-state variables in a high-dimensional continuous space, which one can trace precisely using Eq. (22). The vital aspect of such a differential-equation structure is the existence of a Lyapunov function. The Lyapunov function \(H\) behind the Hopfield NN leads to an understanding of the possible final states, which appear to be attractors of the system's dynamical behaviour. For both models, one can realise the dynamical state update using the particular hardware systems described previously. However, one should differentiate between the regimes that can be realised at the hardware level: finding the ground state (the global minimum) of the model versus pattern restoration (descending on the surface of the Lyapunov function towards its nearest minimum). The explicit formula for the Lyapunov function in the discrete variant of the model with a non-zero field is: \[H=-\frac{1}{2}\sum_{i,j=1}^{N}\sigma_{i}J_{ij}\sigma_{j}-\sum_{i=1}^{N}h_{i}\sigma_{i}. \tag{23}\] In the case of the continuous variables of Eq. (22), the same function has a slightly different form: \[H=-\frac{1}{2}\sum_{i,j=1}^{N}\sigma_{i}J_{ij}\sigma_{j}-\sum_{i=1}^{N}h_{i}\sigma_{i}+\frac{1}{\tau}\sum_{i=1}^{N}\int^{\sigma_{i}}g^{-1}(Z)dZ, \tag{24}\] where the last term appears due to the correspondence between the discrete and continuous states \(\sigma_{i}=g(x_{i})\). For \(g(x)\), one usually picks \(g(x)=\tanh(x/\beta)\), where the parameter \(\beta\) tends to zero during the evolution of the Hopfield NN, forcing the last term of Eq. (24) to vanish; see [175], with additional emphasis on optimization problems. The essential property of the dynamical update rules is that the energy decreases throughout the system's evolution, which leads to final stable patterns in phase space. The classical Hopfield NN has many modifications concerning the Lyapunov function, the variable update rules and other features. One version is known as the modern Hopfield NN [157]. Modern Hopfield networks with continuous states can be integrated into deep learning architectures because they are continuous and differentiable with respect to their parameters. Moreover, they retrieve patterns with just one update, conforming to deep learning layers. For these reasons, modern Hopfield networks can serve as specialised layers in deep networks to equip them with memories. Possible applications of Hopfield layers in deep network architectures include multiple instance learning, defence against adversarial attacks [176], processing of and learning with point sets, sequence analysis and time-series prediction, and storing and retrieving reference data, e.g. the training data, outliers, high-error data points and prototypes, among many other purposes [157]. Even more importantly, the functionality of modern Hopfield networks can be compared with various methods from the ML domain, such as SVMs, random forests, boosting, decision trees, Bayesian methods and many others [177; 178]. As we mentioned above, many optical systems can perform optimization tasks. Since there are intrinsic similarities between this task and the associative memory model, one can exploit this relation to realize a Hopfield NN on an optical setup. Such realizations include the previously discussed laser networks, Ising machines, photon [179] and polariton systems [136], and the confocal cavity QED NN [180]; see Fig. 9.
The connection between optical networks and the Hopfield model is important since it allows one to incorporate such layers into more complex optical architectures without complicated adjustments.

Figure 9: (a) Four nodes with all-to-all coupling and sign-changing connectivity between spin ensembles. Blue and red show ferromagnetic versus antiferromagnetic \(J_{ij}\) links. One can find the physical details in [181]. (b) The realization of the Hopfield NN by the spin ensemble. Binary neurons \(s_{i}\) of a single-layer network are recurrently fed back and subjected to a linear transform \(J\) with a consequent element-wise threshold operation. (c) The Hopfield model exhibits an energy landscape with many metastable states. Energy-minimizing dynamics drive similar spin configurations to the stored local minimum, characterized by its basin of attraction. Too many memories make the basins of attraction vanish. (d) Schematic of the associative memory problem: recalling multiple stored patterns by completing distorted input images. Figure from [181].

### Higher-order systems

One significant extension of the Hopfield model is incorporating tensor terms, which depend on the \(\sigma_{i}\) variables polynomially in \(n\) [182]. Such an extension allows one to increase the number of stored patterns to \(K^{max}=\alpha_{n}N^{n-1}\), where \(\alpha_{n}\) is a numerical constant. Moreover, it is possible to observe the so-called "feature to prototype transition" when increasing \(n\) in the NN training. The prototype theory provides an alternative approach to learning in which objects are recognized as a whole. Although tensor terms are assumed not to be biologically plausible [158], they can be reproduced on some artificial physical setups [142]. From this perspective, artificial tensor platforms can significantly benefit from such technological opportunities. The higher-order Hopfield NNs [183] can be written as \[\frac{dx_{l}}{dt}=-\frac{x_{l}}{\tau}+\sum_{\bar{\Omega}}\mathbf{A}_{l,i_{1},\cdots,i_{k}}^{k}s_{i_{1}}\cdots s_{i_{k}};\hskip 28.452756pts_{l}=g\bigg{(}\frac{x_{l}(t)}{\beta}\bigg{)}, \tag{25}\] where \(x_{l}\) are real continuous variables, \(g(x)\) is the threshold function and \(\beta\) is a scaling parameter that can depend on time. Such systems can solve HOBO, see Eq. (3), because of the \(k\)-local coupling. It was shown [142] that polariton systems above the threshold are described by \[\frac{d\Psi_{l}}{dt} =\Psi_{l}(\gamma_{l}(t)-|\Psi_{l}|^{2})+\sum_{\bar{\Omega}}\mathbf{A}_{i_{1},\cdots,i_{k}}^{k}\Psi_{i_{1}}...\Psi_{i_{k}}^{*}, \tag{26}\] \[\frac{d\gamma_{l}}{dt} =\epsilon(\rho_{\text{th}}-|\Psi_{l}|^{2}), \tag{27}\] where \(\bar{\Omega}\) is the set of indices that excludes index \(l\). Eq. (27) describes the feedback mechanism that drives all \(\rho_{i}=|\Psi_{i}|^{2}\) to a priori set values \(\rho_{\text{th}}\), and \(\epsilon\) characterizes how fast \(\gamma_{i}\) adjusts to changes in \(\rho_{i}\). Next, we proceed with the different ways of connecting practical computational tasks with the actual physical behaviour of the presented systems.

## V Mathematical formulation of applications

This section considers a range of generic applications that follow from a network's ability to solve optimization problems and/or act as a Hopfield network. We start with simple problems from classical computer science and their mapping to the QUBO problem.
We then move to modern tasks that differ in information capacity and are considered to suffer from the so-called "curse of dimensionality", where it is more suitable to work with probability distributions instead of individual variables. However, in both cases, we do not pay attention to whether the presented mapping is efficient (as in the following subsection) or not (when one needs multiple sequential operations with a considerable amount of pre- and post-processing in between). Some of the inefficient embeddings can still pose mathematical challenges and can be improved either in the general formulation or with task-specific information. At the end of this chapter, we discuss NN architectures and their capabilities.

### Direct encoding/decoding

This subsection describes the connections and correspondences between different computational tasks established during the last 50 years [174; 184; 185]. The propositional satisfiability problem (SAT) lies at the heart of such correspondences. It is the fundamental problem of determining whether a set of sentences in propositional logic is satisfiable. A clause is built as the disjunction, the logical OR (denoted by \(\vee\)), of some Boolean variables or their negations. A set of several clauses, which must be satisfied simultaneously, is the conjunction, the logical AND (denoted by \(\wedge\)), of the clauses. One can write a satisfiability problem in the general form: \[(x_{1}\lor x_{2}\vee...)\wedge(y_{1}\lor y_{2}\vee...)\wedge...(...), \tag{28}\] where the \(x_{i},y_{i}\) are "literals": any of the original variables or their negations. The form (28) is called a conjunctive normal form (CNF), and one can easily see that any logical statement between Boolean variables can be written as a CNF. SAT was the first problem proven to be \(\mathbb{NP}\)-complete [174; 184]. Currently, no known algorithm efficiently solves every SAT instance, and the question of its existence is equivalent to the famous \(\mathbb{P}\) vs \(\mathbb{NP}\) problem. Nevertheless, many heuristic SAT algorithms can solve problem instances involving a significant number of variables, sufficient for many applications. Additionally, many versions of the SAT problem exist, like 3-SAT and its generalization k-SAT, HORN-SAT, and XOR-SAT, which can better suit particular unconventional tasks. One specific SAT version, weighted MAX-2-SAT, can easily be reformulated as the QUBO problem that often appears in this review. A simple 2-SAT instance has \(m\) clauses of 2 literals each. MAX-2-SAT is the problem of assigning values that maximize the number of satisfied clauses. Weighted MAX-SAT gives each clause a positive weight so that a measure of the cost of violated clauses appears in the problem. To reformulate a weighted MAX-2-SAT problem as a QUBO, one uses the fact that maximizing the weight of satisfied clauses is equivalent to minimizing the weight of unsatisfied clauses, together with the Boolean identity \(\overline{x_{i}\lor x_{j}}=\overline{x_{i}}\wedge\overline{x_{j}}\). The final form then reads: \[\max_{x_{i}}\sum_{i,j<i}w_{ij}x_{i}x_{j}, \tag{29}\] which is a QUBO of the same form as Eq. (2). Thus, the connection between SAT (which can be converted into weighted MAX-2-SAT by means of Boolean logic) and QUBO is revealed.
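As an illustration (ours; the three-clause instance is hypothetical), the following sketch expands the violation indicator of each weighted 2-clause into constant, linear and quadratic coefficients and recovers the optimum by brute force:

```python
import numpy as np

# Minimal sketch (ours): encode a weighted MAX-2-SAT instance as a QUBO by
# penalizing each violated clause, following the logic around Eq. (29).
# A literal is (var_index, is_negated); a clause is (lit1, lit2, weight).
clauses = [((0, False), (1, True), 2.0),   # (x0 OR NOT x1), weight 2
           ((1, False), (2, False), 1.0),  # (x1 OR x2), weight 1
           ((0, True),  (2, True), 3.0)]   # (NOT x0 OR NOT x2), weight 3
n = 3

# A clause (a OR b) is violated iff both literals are false; for a 0/1
# variable x, "x is false" is (1 - x) and "NOT x is false" is x.  The product
# of the two indicators expands into constant, linear and quadratic terms.
Q = np.zeros((n, n))    # quadratic coefficients (upper triangle)
lin = np.zeros(n)       # linear coefficients
const = 0.0
for (i, neg_i), (j, neg_j), w in clauses:
    # indicator(x) = c0 + c1*x: (1,-1) for a plain literal, (0,1) if negated
    ci = (0.0, 1.0) if neg_i else (1.0, -1.0)
    cj = (0.0, 1.0) if neg_j else (1.0, -1.0)
    const += w * ci[0] * cj[0]
    lin[i] += w * ci[1] * cj[0]
    lin[j] += w * ci[0] * cj[1]
    Q[min(i, j), max(i, j)] += w * ci[1] * cj[1]

def violated_weight(x):
    return const + lin @ x + x @ Q @ x

# Brute force over all assignments: the QUBO minimum is the MAX-2-SAT optimum.
best = min(range(2**n),
           key=lambda b: violated_weight(np.array([(b >> k) & 1 for k in range(n)])))
print("best assignment:", [(best >> k) & 1 for k in range(n)])
```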
The seminal work [84] provided Ising formulations for many \(\mathbb{NP}\)-complete and \(\mathbb{NP}\)-hard problems, covering all of Karp's 21 \(\mathbb{NP}\)-complete problems. Among the covered problems one can find number partitioning, graph partitioning, clique existence, binary integer linear programming, exact cover, set packing (or maximal independent set), vertex cover, satisfiability (with emphasis on the 3-SAT to MIS reduction), set cover, knapsack with integer weights, graph colouring, Hamiltonian cycles and paths, the travelling salesman problem, Steiner trees, feedback vertex set, feedback edge set and graph isomorphism, as well as some useful tricks for near-term quantum adiabatic optimization devices. We mention some of them in a slightly different form below.

### Logistics

Logistics and planning problems are usually related to the well-known travelling salesman problem (TSP). For example, the problem of a salesman travelling through \(N\) cities connected by weighted edges \(w_{uv}\geq 0\) from the set \(E\) (the weights can represent distances and other costs associated with travelling between the cities) can be formulated as the following Ising problem of size \(N^{2}\): \[H_{\text{TSP}} = A\sum_{i=1}^{N}\biggl(1-\sum_{v=1}^{N}x_{v,i}\biggr)^{2}+A\sum_{v=1}^{N}\biggl(1-\sum_{i=1}^{N}x_{v,i}\biggr)^{2}+A\sum_{(uv)\notin E}\sum_{i=1}^{N}x_{u,i}x_{v,i+1}+B\sum_{(uv)\in E}w_{u,v}\sum_{i=1}^{N}x_{u,i}x_{v,i+1}. \tag{30}\] Each spin \(x_{v,i}\in\{0,1\}\) in Eq. (30) represents the vertex \(v\) and its order \(i\) in a path. The first three terms regulate all valid routes in this representation: each city should be in the route (first term) and appear only once (second term), and any adjacent cities in the route should be connected (third term), while the search for the optimal route is realised by minimising the sum of the weights along the route (fourth term). A reasonable choice of the constants \(A\) and \(B\) (e.g. \(A\) sufficiently large with respect to \(B>0\)) guarantees that only the space of valid routes is explored. Reshaping the two-dimensional spin matrix with elements \(x_{v,i}\) into a spin vector of size \(N^{2}\) allows one to recover the coupling matrix \(\mathbf{J}\) and magnetic field \(\mathbf{h}\) of the corresponding Ising Hamiltonian. One can reduce the size of the Ising problem to \((N-1)^{2}\) by fixing a particular city to be the first in the route. Note that the Hamiltonian \(H_{\text{TSP}}\) can represent both directed and undirected graphs, and the generalisation to cycle-optimisation problems is straightforward. It has also been used for finding transportation routes that minimise costs.
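A minimal sketch (ours; a complete graph is assumed, so the third penalty term of Eq. (30) vanishes, the route is treated cyclically, and the penalty values are illustrative) of assembling \(H_{\text{TSP}}\) as a QUBO matrix:

```python
import numpy as np

# Minimal sketch (ours): assemble the QUBO matrix of the TSP Hamiltonian (30)
# for variables x_{v,i} flattened to index v*N + i (complete graph assumed).
N = 4
rng = np.random.default_rng(3)
W = rng.uniform(1, 10, size=(N, N))
W = (W + W.T) / 2                       # symmetric edge weights
A, B = 50.0, 1.0                        # A chosen large with respect to B*max(W)

idx = lambda v, i: v * N + i
Q = np.zeros((N * N, N * N))

# Constraint terms: each position i holds exactly one city and each city v
# appears exactly once.  With x^2 = x, (1 - sum x)^2 contributes -1 per
# variable and +2 per unordered pair, up to a constant.
for i in range(N):
    for v in range(N):
        Q[idx(v, i), idx(v, i)] += -2 * A      # linear parts of both squares
        for u in range(v + 1, N):
            Q[idx(v, i), idx(u, i)] += 2 * A   # two cities at one position
    for j in range(i + 1, N):
        for v in range(N):
            Q[idx(v, i), idx(v, j)] += 2 * A   # one city at two positions

# Tour-length term: consecutive positions i, i+1 (cyclically) pay the weight.
for i in range(N):
    for u in range(N):
        for v in range(N):
            if u != v:
                Q[idx(u, i), idx(v, (i + 1) % N)] += B * W[u, v]

def cost(x):  # QUBO value, up to the dropped constant 2*A*N from the squares
    return x @ Q @ x

# A valid tour 0 -> 1 -> 2 -> 3 as a check against an invalid assignment.
x_tour = np.zeros(N * N)
for v in range(N):
    x_tour[idx(v, v)] = 1.0
print("valid tour cost:", cost(x_tour), " empty assignment cost:", cost(np.zeros(N * N)))
```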
### Portfolio optimization

Optimizing a portfolio means finding the most profitable combination of investments for an institution or individual. One modern formulation of the portfolio optimization problem has the following form [186]: \[\min_{0\leq x_{i}\leq 1}\lambda\bigg[\sum_{i=1}^{N}\sum_{j=1}^{N}J_{ij}x_{i}x_{j}\bigg]-(1-\lambda)\bigg[\sum_{i=1}^{N}\mu_{i}x_{i}\bigg],\quad\sum_{i=1}^{N}x_{i}=1, \tag{31}\] where \(N\) is the number of different assets and \(x_{i}\) is the decision variable representing the proportion of capital invested in asset \(i\). Here the coupling coefficient \(J_{ij}\) represents the covariance between the returns of assets \(i\) and \(j\), \(\mu_{i}\) is the mean return of asset \(i\), and \(\lambda\in[0,1]\) is the risk-aversion parameter. When \(\lambda=0\), the model maximizes the portfolio's mean return, and the optimal solution is formed only by the assets with the greatest mean return. When \(\lambda=1\), only the total risk associated with the portfolio is minimized.

There are different modifications of the portfolio optimization problem. For instance, one can introduce bounding and cardinality constraints that specify that there should be \(K\) different assets in the portfolio and/or that the portion of some assets should be within certain bounds. This is achieved by \[\sum_{i=1}^{N}z_{i}=K,\quad\epsilon_{i}z_{i}\leq x_{i}\leq\delta_{i}z_{i},\quad z_{i}\in\{0,1\}. \tag{32}\] The cardinality-constrained mean-variance model is a mixed quadratic and integer programming problem in the \(\mathbb{NP}\)-hard class. Although the problem is not a combinatorial optimization problem, we take advantage of the fact that the objective function has the same form as the energy function of Hopfield networks. Consequently, it is minimised if we follow the Hopfield dynamics, and Hopfield NNs have solved this problem efficiently [187; 188]. The discrete dynamics becomes \[x_{i}(t+1)=G_{i}\bigg[-2\lambda\sum_{j}J_{ij}x_{j}(t)+(1-\lambda)\mu_{i}\bigg], \tag{33}\] where \(G_{i}\) is a sigmoid with values in \([\epsilon_{i},\delta_{i}]\). When solving any optimization problem, constraints usually appear in the energy function. However, in many cases of Hopfield networks this is not necessary: constraints on \(x_{i}\) are satisfied by the sigmoid activation function, since its outputs already lie inside the desired interval. To fulfil the cardinality constraints, we begin with \(3K/2\) neurons. After reaching a minimum of the objective function, we remove the asset with the smallest output and repeat this process until the network contains precisely \(K\) assets. These remaining assets solve the original portfolio selection problem. To satisfy the constraint \(\sum x_{i}=1\), one can use various adjustments, for instance, evaluating the feasibility of every portfolio and rescaling the proportions of capital \(x_{i}\) to be invested in each selected asset [187].

### Phase retrieval

The minimisation of the XY model (solving QCO) is directly related to the notoriously hard-to-solve phase retrieval problem. The problem's objective is to recover a general signal (or image) from the magnitude of its Fourier transform [87; 88; 89]. This problem arises because signal detectors can usually record only the modulus of the diffraction pattern, therefore losing the information about the phase of the optical wave. Mathematically, one needs to recover a signal \(\mathbf{x}\in\mathbb{C}^{m}\) from the amplitude \(\mathbf{b}=|\mathbf{A}\mathbf{x}|\), where \(\mathbf{A}\in\mathbb{C}^{n\times m}\), \(\mathbf{b}\in\mathbb{R}^{n}\). The phase recovery problem [189] can then be formulated as: \[\min_{x_{j},u_{i}}\sum_{i}\bigg(\sum_{j}A_{ij}x_{j}-b_{i}u_{i}\bigg)^{2} \tag{34}\] where \(\mathbf{u}\in\mathbb{C}^{n}\) is a phase vector that satisfies \(\mathbf{A}\mathbf{x}=diag(\mathbf{b})\mathbf{u}\), with \(|u_{i}|=1\) for \(i=\overline{1,n}\). This optimization problem can be further rewritten as \[\min\sum_{ij}M_{ij}u_{i}u_{j}\quad\text{subject to}\quad|u_{i}|=1,\;i=\overline{1,n}, \tag{35}\] where \(\mathbf{M}=diag(\mathbf{b})(\mathbf{I}-\mathbf{A}\mathbf{A}^{\dagger})diag(\mathbf{b})\) is a Hermitian matrix, \(\mathbf{I}\) is the identity matrix, and \(\mathbf{A}^{\dagger}\) is the Moore-Penrose inverse of the matrix \(\mathbf{A}\) (see [189] for details).

### Machine learning

The growth of data now surpasses our capacity to process it, in terms of both human and computational resources.
The development of data-driven methods also marks the transition from the classical computer science paradigm to the modern ML setting: questions about the abilities of precisely crafted algorithms and their worst-case scenarios are replaced by questions about the most probable cases and the design of NN architectures. One of the ML field's main goals is to predict specific outcomes from given data. The richness of the data dramatically influences a method's performance, making it easier to find patterns and to expect accurate results. Considering complicated methods and deep NN architectures, there are three crucial components in ML: data, features, and algorithms. Practically speaking, data comes in many forms: e-mails, stock-price time series, user databases, and collections of experimental measurements. Moreover, it can be collected in different ways: either manually, which is usually slow and costly but introduces few errors, or automatically, by feeding everything to sorting algorithms. Depending on the context, the collected data (or datasets) can be of great value, which determines the demand for suitable rare datasets. Features represent the properties of the considered objects, so a small number of essential, well-sorted features can in most cases guarantee the success of the ML approach to a problem. However, it is very time-consuming to identify features in so-called 'raw' big datasets and to select the right ones. Moreover, one sometimes has to avoid human-based decisions to prevent introducing subjectivity and opinion-based bias when optimizing the model performance. The latest deep learning successes are therefore partially tied to automatic feature engineering, in contrast to earlier, partially empirical ML models. The last component of the considered scheme is the algorithm. The choice of method for solving a particular task depends on the context and influences such parameters as the final model's accuracy, speed, and computational complexity; in general, one can solve a problem in many different ways. The components were presented according to their significance in the ML pipeline: simply put, useful information can only be extracted from a meaningful dataset. The following subsection starts the discussion with the simple classical algorithms, which are the basis of many existing applications. Then, we outline the central ideas behind the main ML methods that will be the centre of attention for transferring onto special-purpose hardware. At the end of this chapter, we cover the wide range of capabilities of NNs.

#### v.5.1 Regression

Regression analysis is one of the earliest methods in statistical modelling; it estimates the relationships between a dependent variable and independent variables. The most common form of regression analysis is linear regression. This model assumes that the dependent variables, denoted by \(y_{i}\), depend linearly on the \(m\)-vector of points \(\{x_{i1},\ldots,x_{im}\}_{i=1}^{n}\), up to disturbance terms \(\epsilon_{i}\) in each case. This relationship can be written in the following form: \[y_{i}=\beta_{0}+\beta_{1}x_{i1}+\cdots+\beta_{m}x_{im}+\epsilon_{i}=\sum_{j=0}^{m}\beta_{j}x_{ij}+\epsilon_{i}. \tag{36}\] To shorten the notation, we use the matrix form \(\mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\epsilon}\), where \(\mathbf{y}=\{y_{i}\},\mathbf{X}=\{x_{ij}\},\mathbf{\beta}=\{\beta_{j}\},\mathbf{\epsilon}=\{\epsilon_{i}\}\), \(i=1,\ldots,n\), \(j=0,\ldots,m\), with \(x_{i0}=1\).
The linear regression task is to estimate the values of the regression coefficients \(\beta_{j}\), given the data points \(x_{ij}\) and the observables \(y_{i}\), so that the error term \(\mathbf{\epsilon}=\mathbf{y}-\mathbf{X}\mathbf{\beta}\) is minimized. One can use different metrics for that purpose, such as the sum of squared errors \(\epsilon_{i}\). The most common parameter estimation technique is least-squares estimation. Here, the optimum parameters are defined through the minimization of the mean squared loss \[\min_{\beta_{j}}\sum_{i=1}^{n}\left(\sum_{j=0}^{m}\beta_{j}x_{ij}-y_{i}\right)^{2}, \tag{37}\] which can be connected with the conventional QP. The optimal solution can be obtained by differentiating Eq. (37) with respect to the parameters \(\beta_{j}\) and equating the result to zero. In matrix notation, the solution can be written as \[\mathbf{\beta}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y}. \tag{38}\] There exist different modifications of this procedure: generalized least squares, where one introduces a certain degree of correlation between the residuals \(\epsilon_{i}\) in Eq. (37), or weighted least squares, where knowledge of the variance of the observations is incorporated as coefficients \(w_{k}\) in front of each residual. Moreover, intrinsically different techniques can be based on maximum likelihood estimation, Bayesian methods, or regularization. A natural extension of linear regression consists in replacing the linear dependence with a polynomial one. In the case of one argument, it is possible to rewrite Eq. (36) as \[y_{i}=\beta_{0}+\beta_{1}x_{i}+\beta_{2}x_{i}^{2}+\cdots+\beta_{m}x_{i}^{m}+\epsilon_{i}=\sum_{j=0}^{m}\beta_{j}x_{i}^{j}+\epsilon_{i}. \tag{39}\] Given the data points \(x_{i}^{j}\), the task is the same as Eq. (37), except for the change of variables \(x_{ij}\to x_{i}^{j}\). Similarly, it is possible to replace the polynomial basis with a set of nonlinear functions \(f_{j}(x_{i})\), so that \(x_{i}^{j}\to f_{j}(x_{i})\). Multiple linear regression is a generalization of linear regression with more than one independent variable. The basic model for multiple linear regression can be written in a similar form: \[\mathbf{y}_{i}=\beta_{0}+\beta_{1}\mathbf{X}_{i1}+\beta_{2}\mathbf{X}_{i2}+\cdots+\beta_{m}\mathbf{X}_{im}+\mathbf{e}_{i}=\sum_{j=0}^{m}\beta_{j}\mathbf{X}_{ij}+\mathbf{e}_{i}, \tag{40}\] where instead of the variables \(x_{ij}\) one has a set of matrix elements \(\mathbf{X}_{ij}\) of size \(k\times k\). Depending on the chosen matrix norm, it is possible to formulate the task of finding the regression coefficients. Taking the squared Frobenius norm of the matrix, the search for the optimal coefficients \(\beta_{i}\) is equivalent to solving Eq. (37), except for the additional sum over the \(k^{2}\) matrix elements: \[\min_{\beta_{j}}\sum_{l=1}^{k^{2}}\sum_{i=1}^{n}\left(\sum_{j=0}^{m}\beta_{j}x_{ij}^{l}-y_{i}^{l}\right)^{2}. \tag{41}\] This can be extended further to multivariate linear regression or combined with a nonlinear basis, with minor consequences for the parameter search and hardware operations, apart from a more complicated procedure for preprocessing the coefficients. Regression can be considered the simplest form of supervised learning.
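A minimal numerical sketch of the normal-equation solution (38) on synthetic data (ours; sizes and noise level are illustrative):

```python
import numpy as np

# Minimal sketch (ours): least-squares regression via the normal equations of
# Eq. (38), beta = (X^T X)^(-1) X^T y, with a column of ones for the intercept.
rng = np.random.default_rng(4)
n, m = 50, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, m))])  # x_{i0} = 1
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.05 * rng.normal(size=n)              # noisy observations

beta = np.linalg.solve(X.T @ X, X.T @ y)   # numerically safer than an explicit inverse
print("estimated coefficients:", beta)      # close to beta_true
```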
#### v.5.2 Classification

Classification is one of the most popular ML tasks. Its purpose is to sort objects among initially defined classes. The earliest algorithms include naive Bayes and decision trees. Here, we only consider Markov random field (MRF) encoding, which is the general case of such models. The \(k\)-nearest neighbours algorithm is a non-parametric classification method used in statistics [190; 191]. It classifies objects by considering their \(k\) nearest neighbours with defined classes; the assignment of objects to a particular group is repeated until convergence. We omit the corresponding explicit formulas because of their similarity to those of \(k\)-means, the unsupervised clusterization algorithm presented below. Both methods are usually based on Euclidean distances and can easily be transferred to special-purpose optimization hardware. Another classification method is the support vector machine (SVM). The SVM is a supervised learning model that analyses data for classification purposes. It constructs a hyperplane between the classes of training data points in a high-dimensional space, emphasising a good separation achieved by maximising its margin. The SVM was introduced in [192] and standardised in [193]. The linear SVM deals with \(n\) points \(\mathbf{x}\) in an \(m\)-dimensional space, where each point has been assigned a binary class \(y_{i}=\pm 1\). The task is to construct a hyperplane that divides the two groups with the maximum distance between them. The so-called "hard margin" scenario assumes that the initial data are linearly separable. One starts by constructing two parallel hyperplanes separating the groups of different classes with the largest distance between the two surfaces; the target surface between these hyperplanes is called the maximum-margin hyperplane. These surfaces can be described mathematically as: \[\mathbf{w}^{T}\mathbf{x}^{i}-b=\sum_{j}w_{j}x_{j}^{i}-b=\pm 1, \tag{42}\] where \(w_{j}\) are the components of the normal vector of both hyperplanes, \(x_{j}^{i}\) are the \(m\)-dimensional coordinates of the vector with serial number \(i\), \(b\) defines the shift of the surface with respect to the origin, and \(\pm 1\) defines the class: everything above \(y=1\) belongs to one class, and everything below \(y=-1\) belongs to the other. The offset of the hyperplane is determined by \(b/\left\|\mathbf{w}\right\|\), while the margin width equals \(2/\left\|\mathbf{w}\right\|\). To maximize the margin, one has to minimize the norm \(\left\|\mathbf{w}\right\|\), and hence its square \(\left\|\mathbf{w}\right\|^{2}\). This task can be reformulated as an optimization problem, adding constraints that prevent data points from falling into the margin: \[\begin{split}\min\left\|\mathbf{w}\right\|\\ \text{s.t. }y_{i}\left(\mathbf{w}^{T}\mathbf{x}^{i}-b\right)\geq 1,\text{ for }i=1,...,n\end{split} \tag{43}\] The natural extension of the SVM considers the so-called "soft margin" case, where the given data points are assumed not to be linearly separable. In this case, one introduces a new variable \(\xi_{i}=\max(0,1-y_{i}\left(\mathbf{w}^{T}\mathbf{x}^{i}-b\right))\) for each point \(i\), usually referred to as the hinge loss function, which plays a regularizing role. Thus, it is possible to rewrite Eq. (43) as
\[\begin{split}&\min\frac{1}{n}\sum_{i=1}^{n}\xi_{i}+C\|\mathbf{w}\|^{2}\\ &\text{s.t. }y_{i}\left(\mathbf{w}^{T}\mathbf{x}^{i}-b\right)\geq 1-\xi_{i}\text{ and }\xi_{i}\geq 0,\text{ for all }i,\end{split} \tag{44}\] where the constant \(C\) regulates the interplay between the pure hard-margin classifier and the soft-margin one. We can reformulate the problem using Lagrangian duality: \[\begin{split}&\max_{a_{i}}\sum_{i=1}^{n}a_{i}-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}y_{i}a_{i}\left(\mathbf{x}_{i}^{T}\mathbf{x}_{j}\right)y_{j}a_{j}\\ &\text{s.t. }\sum_{i=1}^{n}a_{i}y_{i}=0,\text{ and }0\leq a_{i}\leq\frac{1}{2nC}\text{ for all }i,\end{split} \tag{45}\] where the normal vector \(\mathbf{w}\) is expressed through the new variables \(a_{i}\), so that \(\mathbf{w}=\sum_{i=1}^{n}a_{i}y_{i}\mathbf{x}^{i}\), and the offset of the surface is recovered via \(b=\mathbf{w}^{T}\mathbf{x}^{i}-y_{i}\). Thus, one obtains a problem with an exact QP formulation, which can be solved with standard quadratic algorithms and, therefore, with special-purpose hardware. It is worth mentioning the nonlinear extension of the SVM, which solves nonlinear classification tasks and can exploit different functional forms of kernels: one can replace the scalar dot product in the quadratic form of Eq. (45) by a kernel function \(k(\mathbf{x}_{i},\mathbf{x}_{j})\), depending on the properties of the analogue hardware.

#### v.5.3 Finding the principal eigenvector

Finding the principal (dominant) eigenvector of a given matrix \(\mathbf{J}\) belongs to the \(\mathbb{P}\) class of problems. However, finding the dominant eigenvector of an ever-growing large matrix becomes a computationally intensive task incompatible with Moore's law, while a range of real-life problems would benefit from the fast calculation of the principal eigenvector. For instance, the PageRank algorithm [194; 195] evaluates the relative importance of web pages by exploiting the web link structure. The web is represented as a directed graph, where each page is a node and each hyperlink is an edge connecting one page to another. For the entire database of web pages, the PageRank algorithm computes a single score vector, the PageRank. The algorithm's key underlying assumption is that pages transfer importance to other pages via links; hence, the PageRank components determine the importance of pages. Mathematically, finding the PageRank vector is equivalent to calculating the principal eigenvector of the link-structure matrix, the Google matrix. Besides, calculating the principal eigenvector is required in social network analysis, recommendation systems, bibliometrics, bioinformatics, DNA sequencing, and distributed computing systems [196; 197; 198]. There are numerous applications of PageRank to networks in chemistry and the engineering sciences for investigating and analysing complex systems. As engineered systems grow in size, they become increasingly complex, with networks and submodules interacting in unpredictable, nonlinear ways; network analysis methods like PageRank help to organise and study these complexities [197]. For instance, MonitorRank diagnoses root causes of issues in a modern distributed system from error logs and tracing/debugging information [199].
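A minimal sketch (ours; the four-page link graph and damping factor are illustrative assumptions) of extracting the principal eigenvector of a Google-type matrix by power iteration:

```python
import numpy as np

# Minimal sketch (ours): power iteration for the principal eigenvector of a
# PageRank-style Google matrix G built from a small hypothetical link graph.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to
n, d = 4, 0.85                                   # d is the damping factor

A = np.zeros((n, n))
for page, outs in links.items():
    for target in outs:
        A[target, page] = 1.0 / len(outs)        # column-stochastic link matrix
G = d * A + (1 - d) / n * np.ones((n, n))        # Google matrix

r = np.full(n, 1.0 / n)
for _ in range(100):                             # power iteration
    r = G @ r
    r /= r.sum()                                 # keep it a probability vector
print("PageRank scores:", r)
```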
PageRank has also been used for road and urban space networks, helping to predict traffic flow and human movement: it was shown that PageRank is the best network measure for predicting traffic on individual roads [200]. The advantage of using optical systems for calculating the principal eigenvector has recently been shown [198]. For a certain choice of control parameters, the steady state of optical networks can solve an eigenvalue maximization problem [201], which results in finding the energy state dictated by the signs of the eigenvector corresponding to the largest eigenvalue of the interaction matrix, i.e. the principal eigenvector. In particular, the estimates presented in [198] show that special-purpose optical machines for PageRank calculations may provide dramatic improvements in power consumption over classical computing architectures.

#### v.5.4 Dimensionality reduction

Dimensionality reduction is the transformation of data from a space with many dimensions into a low-dimensional space, usually preserving meaningful and valuable properties of the original data. High-dimensional data are difficult to handle in practice due to the growth of the space volume. Dimensionality reduction is therefore standard in data-intensive fields: it is used in signal processing, neuroinformatics, and bioinformatics [202; 203], and one can find its applications in recommender systems [204], semantic search [205], and as a primary tool in many domains involving numerical analysis. One of the well-known methods for dimensionality reduction is principal component analysis (PCA) [206]. The idea behind PCA is to approximate the data with linear manifolds of lower dimension. PCA can alternatively be interpreted as finding the subspaces of lower dimension such that the variation of the data under orthogonal projection onto them is maximal. The initial task behind PCA is to find the best approximation of the data points by lines and surfaces. Given the set of vectors \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m}\in\mathbb{R}^{n}\), the aim is to find the sequence of \(k\)-dimensional affine spaces \(L_{k}\subset\mathbb{R}^{n}\) that solve \[\min_{L_{k}}\sum_{i=1}^{m}\mathrm{d}^{2}\left(\mathbf{x}_{i},L_{k}\right)=\min_{a_{jl}}\sum_{i=1}^{m}\sum_{l=1}^{n}\left(x_{il}-a_{0l}-\sum_{j=1}^{k}a_{jl}\sum_{q=1}^{n}a_{jq}\left(x_{iq}-a_{0q}\right)\right)^{2}, \tag{46}\] for each \(k\), where \(\mathrm{d}\left(\mathbf{x}_{i},L_{k}\right)\) is the Euclidean distance from the point \(\mathbf{x}_{i}\) to \(L_{k}\). Affine spaces \(L_{k}\) are defined as the sets of linear combinations \(L_{k}=\{\mathbf{a}_{0}+\alpha_{1}\mathbf{a}_{1}+\cdots+\alpha_{k}\mathbf{a}_{k}\}\) with coefficients \(\alpha_{i}\in\mathbb{R}\), where the vectors \(\{\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{k}\}\subset\mathbb{R}^{n}\) form an orthonormal basis in \(\mathbb{R}^{n}\). Eq. (46) is an optimization problem. The initial vector \(\mathbf{a}_{0}\) is simply defined as the minimizer \[\mathbf{a}_{0}=\underset{\mathbf{a}_{0}}{\mathrm{argmin}}\sum_{i=1}^{m}\mathrm{d}^{2}\left(\mathbf{x}_{i},L_{0}\right)=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}_{i}. \tag{47}\] The next components are found iteratively by subtracting the projection, \(\mathbf{x}_{i}\to\mathbf{x}_{i}-\mathbf{a}_{j}\left(\mathbf{a}_{j}^{T}\mathbf{x}_{i}\right)\) (with the scalar product \(\mathbf{a}_{j}^{T}\mathbf{x}_{i}\)), for the vectors corresponding to \(L_{j}\): \[\mathbf{a}_{j}=\underset{\|\mathbf{a}_{j}\|=1}{\mathrm{argmin}}\left(\sum_{i=1}^{m}\left(\mathbf{x}_{i}-\mathbf{a}_{j}\left(\mathbf{a}_{j}^{T}\mathbf{x}_{i}\right)\right)^{2}\right). \tag{48}\]
The iterations continue until the number of affine spaces \(k\) reaches \(n-1\), one less than the dimension of the initial problem space. Using the identity \(||\mathbf{x}_{i}-\mathbf{a}_{j}\left(\mathbf{a}_{j}^{T}\mathbf{x}_{i}\right)||^{2}=||\mathbf{x}_{i}||^{2}-\left(\mathbf{a}_{j}^{T}\mathbf{x}_{i}\right)^{2}\), one can easily map this task into a QP in the \(\mathbf{a}_{i}\) variables with normalization constraints and the coupling matrix \(J_{ij}=-x_{i}x_{j}\). To shorten the presented notation, the iterative procedure can be written as the maximization tasks \[\hat{\mathbf{X}}_{k}=\mathbf{X}-\sum_{s=1}^{k-1}\mathbf{X}\mathbf{w}_{(s)}\mathbf{w}_{(s)}^{\mathrm{T}}, \tag{49}\] \[\mathbf{w}_{(k)}=\underset{\|\mathbf{w}\|=1}{\mathrm{arg\,max}}\left\{\left\|\hat{\mathbf{X}}_{k}\mathbf{w}\right\|^{2}\right\}, \tag{50}\] where \(k\) is the number of the principal component, \(\mathbf{X}\) is the data matrix of size \(n\times m\), and \(\mathbf{w}_{s}=\left(w_{1},\ldots,w_{m}\right)_{(s)}\) are the weight coefficients. If sequential operation is limited on a specific hardware system, one can still use the first iteration of the PCA method to obtain the largest eigenvalue of a matrix. One can find many alternative formulations of the PCA task, such as cancelling correlations between coordinates, i.e. covariance matrix diagonalization, or singular value decomposition. Singular value decomposition (SVD) is a factorization of a rectangular matrix in the form \[\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}, \tag{51}\] where \(\mathbf{U}\) is a unitary matrix (representing a rotation in the geometrical interpretation of linear transformations), \(\mathbf{\Sigma}\) is a rectangular diagonal matrix with non-negative real numbers on the diagonal (called the singular values, whose action has the interpretation of scaling by the diagonal elements), and \(\mathbf{V}^{\top}\) is another unitary matrix (with the same rotation interpretation). SVD is essential in the standard techniques of latent semantic analysis (LSA) [207; 208], whose purpose is to process documents and detect relationships between libraries and terms. There is a direct correspondence between PCA and the SVD decomposition. To perform PCA, one has to find the eigenvectors of the covariance matrix \(\mathbf{X}\mathbf{X}^{\top}\) (without the appropriate scaling factor \(\frac{1}{n-1}\)). The covariance matrix is diagonalizable, and with normalized eigenvectors one can write \[\mathbf{X}\mathbf{X}^{\top}=\mathbf{W}\mathbf{D}\mathbf{W}^{\top}. \tag{52}\] Applying SVD to the same data matrix \(\mathbf{X}\) gives \[\mathbf{X}\mathbf{X}^{\top}=\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\right)\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\right)^{\top}=\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\right)\left(\mathbf{V}\mathbf{\Sigma}\mathbf{U}^{\top}\right), \tag{53}\] which gives \[\mathbf{W}\mathbf{D}\mathbf{W}^{\top}=\mathbf{U}\mathbf{\Sigma}^{2}\mathbf{U}^{\top}. \tag{54}\] Using this correspondence, one can perform the SVD decomposition as PCA on special-purpose hardware.
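A short numerical check (ours; the random data matrix is illustrative) of the PCA-SVD correspondence of Eqs. (52)-(54):

```python
import numpy as np

# Minimal sketch (ours): numerical check of the PCA-SVD correspondence in
# Eqs. (52)-(54): the eigenvectors of X X^T coincide (up to sign) with the
# left singular vectors U, and the eigenvalues with the squared singular values.
rng = np.random.default_rng(5)
X = rng.normal(size=(5, 12))                  # data matrix with n=5 features

eigval, W = np.linalg.eigh(X @ X.T)           # covariance eigendecomposition
U, S, Vt = np.linalg.svd(X, full_matrices=False)

print(np.allclose(np.sort(eigval), np.sort(S**2)))   # True: D = Sigma^2
print(np.allclose(np.abs(W[:, ::-1]), np.abs(U)))    # True up to column signs
```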
#### v.5.5 Clusterization

Clusterization, in its most general description, is the separation of objects according to specific criteria. The goal can be defined as classification without any prior information about the classes; the number of clusters can be set in advance or determined automatically by the machine. The algorithm determines the objects' similarity by their marked features and puts objects with many similar features in the same class. There are successful applications of clusterization in market analysis (consumer analytics), image compression, data analytics, and anomaly detection. \(K\)-means clustering is a clustering method that aims to partition \(n\) observations into \(k\) clusters, where each observation belongs to the cluster with the nearest mean, also called a centroid [209; 210; 211]. There are heuristic algorithms that deal with such an assignment; however, the problem is \(\mathbb{NP}\)-hard. Given a set of observations \(\{\mathbf{x}_{1},...,\mathbf{x}_{n}\}\) in a \(d\)-dimensional space, the \(k\)-means algorithm aims to partition these observations into \(k\) sets \(\{S_{1},S_{2},...,S_{k}\}\) so as to minimise the within-cluster sum of squares (or variance): \[\operatorname*{arg\,min}_{S_{i}}\sum_{i=1}^{k}\sum_{\mathbf{x}\in S_{i}}\left\|\mathbf{x}-\boldsymbol{\mu}_{S_{i}}\right\|^{2}, \tag{55}\] where \(\boldsymbol{\mu}_{S_{i}}\) is the mean of the points in the set \(S_{i}\). One usually uses an iterative technique consisting of two steps to perform such an optimisation (see the sketch below). Given an initial set of \(k\) means \(\boldsymbol{m}_{1}^{1},...,\boldsymbol{m}_{k}^{1}\), the first step is to assign each observation to the cluster with the nearest mean, according to the Euclidean distance. The next step is to recalculate the centroids: \(\boldsymbol{m}_{i}^{t+1}=\frac{1}{|S_{i}^{(t)}|}\sum_{\boldsymbol{x}_{j}\in S_{i}^{(t)}}\boldsymbol{x}_{j}\). Finally, the loop is run until convergence. Since the algorithm assigns objects to the nearest cluster by Euclidean distance, it is a suitable method for transferring its sequential operations to specific hardware. Mean shift is a high-dimensional-space analysis method for locating the maxima of a density function given discrete data sampled from that arbitrary density function. It is helpful in complex hierarchical algorithms and is used in different computer vision and image processing domains. Given data points \(\boldsymbol{x}_{i}\) in \(n\)-dimensional space, one can use a kernel function \(k(r)\), acting on the norm value \(r\), to determine the mean shift's value. The kernel function has to be non-negative, non-increasing and continuous; for example, one can use the flat kernel, with \(k(r)=1\) if \(r<r_{0}\) and \(0\) otherwise. Each iteration involves calculating the function \[F(x)=\sum_{i}k\left(\frac{(\boldsymbol{x}-\boldsymbol{x}_{i})^{2}}{\alpha^{2}}\right), \tag{56}\] where \(\alpha\) sets the kernel width. The maximum of \(F(x)\) is computed using the square norm.
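Returning to Eq. (55), a minimal sketch (ours; the synthetic blobs and iteration count are illustrative) of the two-step Lloyd iteration:

```python
import numpy as np

# Minimal sketch (ours) of the two-step iteration for Eq. (55):
# assign points to the nearest centroid, then recompute the centroids.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(loc, 0.3, size=(30, 2))        # three synthetic blobs
               for loc in ([0, 0], [3, 3], [0, 4])])
k = 3
centroids = X[rng.choice(len(X), k, replace=False)]       # random initialisation

for _ in range(20):                                       # Lloyd iterations
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)                             # assignment step
    centroids = np.array([X[labels == i].mean(axis=0)     # update step
                          if np.any(labels == i) else centroids[i]
                          for i in range(k)])             # keep empty clusters put

print("centroids:\n", centroids)
```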
### Neural networks

ANNs are often associated with ML. We considered the associative memory model, a simple recurrent shallow NN, in subsection IV.5. This model can be extended to higher-order systems, simultaneously gaining many useful properties. However, optical systems are not tied only to this type of architecture [10]. Any NN can be defined as a set of neurons and connections between them. An artificial neuron's task is to take input numbers, process them in a certain way (executing a special function), and output the result. The standard mathematical transformation of one NN layer can be written as \(\varphi(\sum_{i=0}^{N}w_{i}x_{i}+b)\), where \(w_{i}\) denote the weights for the input data points \(x_{i}\) (or independent variables), the constant \(b\) is a shift called the bias, and \(\varphi\) is a nonlinear activation function. A single-layer NN that performs such a transformation and produces a single output number is called a perceptron. Perceptrons assembled into multilayered structures are called multilayer perceptrons. An introduction to NNs is presented in [212; 213; 214], with more modern work in [215] and the latest results after the deep learning breakthrough in [216]. The activation function \(\varphi\) plays an essential role in the NN design because, in its absence, the output signal would simply be a linear function of the input. There are many activation functions, such as the binary step function, the sigmoid (or logistic function), the hyperbolic tangent, etc. They allow a NN to map inputs to outputs appropriately; thus, a NN is considered a universal function approximator [217]. To choose the NN weights, one usually uses the backpropagation procedure [218; 219], although there are many alternatives. Backpropagation consists of tuning the NN weights according to the difference between the actual output value of the network and the predicted one, with the final goal of minimizing this discrepancy, or cost function. The tuning procedure involves computing the gradients of the total discrepancy on each layer, starting from the final one, and updating the corresponding weight values. Over a large number of such iterations, the weights are expected to be tuned in the desired way. Many deep NNs (NNs with many layers) can be mapped into a shallow one with a significant overhead in the number of neurons per layer. That means that any deep NN functionality can, in principle, be performed on a physical device suited to a shallow architecture; with an appropriate mapping, both networks will have the same approximation qualities [220; 221; 222; 223]. A well-trained NN can approximate many complicated algorithms, some of which are presented in this review. However, one has to provide enough input conditions and good output answers, especially when the problem is of high computational complexity, and the resulting correlations still need to be better understood. The valuable properties of NNs go far beyond the optimization domain; we consider some of them below.

#### v.6.1 Neural networks and dynamical systems

Using ML models in the domain of the physical sciences, i.e. incorporating physical laws and domain knowledge into neural architectures, is called physics-informed machine learning (PIML). It provides a powerful approach to modelling different physical phenomena. This rapidly growing field can pursue many goals, among them constructing better predictive models with high accuracy and reliable generalization abilities, increasing the data processing rate, accelerating dynamical processes through optimized architectures, and solving inverse problems with interpretable models. One should expect the emulation of complex nonlinear dynamics to benefit from PIML, as can be seen in applications to weather forecasting [224], the modelling of turbulence [225; 226], nonlinear dynamics [227; 228], and applications of ML to Koopman operator theory [229]. Optical hardware can potentially be used to speed up these applications. The correspondence between NN architectures and dynamical systems is straightforward: some NNs can be viewed as discretizations of dynamical systems, and, conversely, one can design NNs to have specific properties, such as invertibility [230].
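A minimal sketch (ours; the shared weight matrix and step size are illustrative) of this correspondence: the forward pass of a residual layer \(x\to x+\Delta t\,f(x)\) is exactly an explicit Euler integration of \(\dot{x}=f(x)\):

```python
import numpy as np

# Minimal sketch (ours): a residual layer x_{t+1} = x_t + dt * f(x_t) is the
# explicit Euler discretization of the ODE dx/dt = f(x), illustrating the
# NN <-> dynamical-system correspondence mentioned above.
rng = np.random.default_rng(7)
dim, depth, dt = 4, 50, 0.1
W = rng.normal(scale=0.3, size=(dim, dim))       # shared weights, illustrative

def f(x):
    return np.tanh(W @ x)                        # the layer's vector field

x = rng.normal(size=dim)
for _ in range(depth):                           # forward pass == Euler steps
    x = x + dt * f(x)
print("network output:", x)
```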
This correspondence can broaden the applicability of potential optical hardware; the connection between dynamical systems and deep learning can be found in [231]. The generalization of optimization algorithms inspired by different optical systems has a canonical universality property [232].

### Probabilistic graphical models

Graphical models provide a natural tool for dealing with uncertainty and complexity throughout applied mathematics and engineering. The graph-theoretic side of graphical models offers an appealing interface by which one can model data structures that lend themselves naturally to the design of efficient general-purpose algorithms. Many models in statistics, systems engineering, information theory, and pattern recognition are special cases of the general graphical model formalism, which is vital for representing joint probability distributions and for inference based on given observations [233; 234; 235]. Probabilistic graphical models (PGMs) are graphs whose nodes represent random variables, while the edges connecting them represent conditional independence assumptions; a PGM can thus be thought of as a compact representation of a joint probability distribution. There are two main kinds of graphical models: undirected ones, known as Markov random fields (MRFs), widely used in the physics and vision communities, and directed ones, also known as Bayesian networks (BNs), belief networks or causal models, which are more popular in the AI and ML communities [233]. The spin Hamiltonians are particularly useful for PGMs. We recall the Ising spin model of Eq. (2) with zero-field coefficients (\(h_{i}=0\)). Each spin variable \(s_{i}\) can be treated as a random binary variable, so that the coupling strengths serve as the connections between random variables. A certain configuration of spin variables \(X=(x_{1},\ldots,x_{N})\in\{-1,+1\}^{N}\) is called an assignment. The probability of an assignment in the PGM is given by \[\mathrm{P}(s_{i}=x_{i})=\frac{1}{Z}\exp\left(-\sum_{i=1}^{N}\sum_{j=1,j<i}^{N}J_{ij}x_{i}x_{j}\right), \tag{57}\] where \[Z=\sum_{X\in\{-1,+1\}^{N}}\exp\left(-\sum_{i=1}^{N}\sum_{j=1,j<i}^{N}J_{ij}x_{i}x_{j}\right) \tag{58}\] is the so-called partition function. There are several quantities of interest in PGMs. First is the inference task: the computation of the quantity \(Z\) given by Eq. (58). Exact inference is the computation of \(Z\) over all possible assignments, which is a hard problem for an arbitrary graph: the running time of exact algorithms for finding \(Z\) is exponential in the size of the largest cluster of the corresponding graph nodes. There are rare cases of Ising model graphs for which it is possible to compute the partition function in polynomial time, but the problem of computing \(Z\) is generally hard; hence, approximate inference is widely used. Other quantities of interest include finding the low-energy states (low-energy sampling), worst margin violators, constituents of partition functions such as assignment likelihoods and marginal probabilities, and certain moments of the partition function and the target value. Some popular approximate inference methods include sampling (Monte Carlo), variational methods and message-passing algorithms [233].
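A minimal sketch (ours; the couplings and sweep count are illustrative) of the simplest such sampler, Gibbs sampling from the distribution (57), where each spin is resampled from its exact conditional given the rest:

```python
import numpy as np

# Minimal sketch (ours): Gibbs sampling from the Ising PGM of Eq. (57).
# Each spin is resampled from its conditional distribution given the others,
# so long runs draw assignments with probability ~ exp(-sum_{j<i} J_ij x_i x_j).
rng = np.random.default_rng(8)
N = 20
J = rng.normal(scale=0.3, size=(N, N))
J = np.triu(J, 1) + np.triu(J, 1).T           # symmetric, zero diagonal

def gibbs_sample(sweeps=500):
    x = rng.choice([-1, 1], size=N)
    for _ in range(sweeps):
        for i in range(N):
            field = J[i] @ x                  # local field from the neighbours
            p_up = 1.0 / (1.0 + np.exp(2.0 * field))   # P(x_i = +1 | rest)
            x[i] = 1 if rng.random() < p_up else -1
    return x

sample = gibbs_sample()
print("sampled assignment:", sample)
```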
This inflexibility is why the sampling procedure is the most promising application from the hardware perspective, especially for inference tasks. Expanding the spin machines' functionality is a promising direction, given the speed and energy efficiency of the optical domain. Physical systems typically realise symmetric coupling coefficients, making the model undirected. Using system-specific devices that redirect light, it is possible to introduce asymmetry into the variable connections, which opens the path to directed PGMs. In addition to the universality concept, one can see many practical tasks encoded into the Ising model (such as portfolio optimisation) as special cases of PGMs. Moreover, the hardware's ability to realise high-order interaction terms allows one to encode complicated conditional dependencies with little or no overhead in the number of variables. However, the application scope of optical machines aimed at simulating PGMs extends far beyond this class of problems. One can encode complicated large graphs with many factors, representing large-scale practical problems, and efficiently use them as decision-support networks. There are also applications in control theory and game theory. For example, a PGM can compactly model joint probability distributions using sparse graphs that reflect conditional independence relationships in complex systems. Multi-attribute cost functions (or utility functions) can be decomposed similarly. For instance, let the general cost function be a sum of local cost functions. Each local term has parental nodes (random variables or factors) upon which it depends. Moreover, some of the utility nodes will also have action (control) nodes as parent nodes, because they depend on the state of the environment and on the performed actions. The resulting graph is called an influence diagram. Using such a diagram, one can perform sampling, similar to the inference task, and compute the optimal (sequence of) action(s) to maximise or minimise the cost function [233; 236]. The same strategy has been applied in multi-person game theory [237]. In this way, one can exploit optical spin machines to investigate dynamical systems and decision policies on factor graphs. There are many more applications of this correspondence between spin system functionality, control theory, and decision-making. The advantages of optical systems will benefit large complex graphs with complex connections between units [233]. Exploring the functionality of optical machines with respect to different paradigms is a promising research direction.

### Image processing

Several problems in computer vision can be formulated as binary quadratic programs, a particular case of QUBO; one can also see the similarity with PGMs. The conventional approach to such problems is the semidefinite relaxation technique, which has proved quite efficient [238]. The problems discussed include image co-segmentation, image segmentation with different constraints, graph matching, image deconvolution, graph bisection, and others. The computational complexity of these problems is high, which made it necessary to propose an improved version of the semidefinite programming approach that is more efficient and scalable. Some of these formulations are listed below with minimal detail, and we refer the reader to the original work [239].
\[\begin{array}{rl}\min_{\mathbf{x}\in\{-1,+1\}^{N}}&\mathbf{x}^{\top}\mathbf{A}\mathbf{x}\\ \text{s.t.}&\left(\mathbf{x}^{\top}\mathbf{t}_{i}\right)^{2}\leq\kappa^{2}n_{i}^{2},\;i=1,\ldots,s,\end{array} \tag{59}\]

is the image co-segmentation task with the matrix \(\mathbf{A}\) [239], where \(s\) is the number of images, \(n_{i}\) is the number of pixels of the \(i\)-th image, and \(n=\sum_{i=1}^{s}n_{i}\). Here \(\mathbf{t}_{i}\in\{0,1\}^{n}\) is the indicator vector for the \(i\)-th image, and \(\kappa\in(0,1]\) is a parameter.

\[\begin{array}{rl}\min_{\mathbf{x}\in\{0,1\}^{KL}}&\mathbf{h}^{\top}\mathbf{x}+\mathbf{x}^{\top}\mathbf{H}\mathbf{x}\\ \text{s.t.}&\sum_{j=1}^{L}x_{(i-1)L+j}=1,\;i=1,\ldots,K\\ &\sum_{i=1}^{K}x_{(i-1)L+j}\leq 1,\;j=1,\ldots,L\end{array} \tag{60}\]

is the graph matching task, where \(x_{(i-1)L+j}=1\) if the \(i\)-th source point is matched to the \(j\)-th target point and \(0\) otherwise. \(h_{(i-1)L+j}\) records the local feature similarity between source point \(i\) and target point \(j\), while \(H_{(i-1)L+j,(k-1)L+l}=\exp\left(-\left(\mathrm{d}_{ij}-\mathrm{d}_{kl}\right)^{2}/\sigma^{2}\right)\) encodes the structural consistency of source points \(i,j\) and target points \(k,l\). The corresponding details can be found in [240].

\[\min_{\mathbf{x}\in\{0,1\}^{n}}\|\mathbf{q}-\mathbf{K}\mathbf{x}\|_{2}^{2}+\mathrm{S}(\mathbf{x}) \tag{61}\]

is the image deconvolution task, where \(\mathbf{K}\) is the convolution matrix corresponding to the blurring kernel \(\mathbf{k}\), \(\mathrm{S}\) denotes the smoothness cost, and \(\mathbf{x}\) and \(\mathbf{q}\) represent the input image and the blurred image, respectively [238].

\[\begin{split}\min_{\mathbf{x}\in\{-1,1\}^{n}}&-\mathbf{x}^{\top}\mathbf{W}\mathbf{x},\\ \text{s.t.}&\;\mathbf{x}^{\top}\mathbf{1}=0\end{split} \tag{62}\]

is the graph bisection task with \(W_{ij}=\exp\left(-\mathrm{d}_{ij}^{2}/\sigma^{2}\right)\) if \((i,j)\in\mathcal{E}\) and \(0\) otherwise, where \(\mathrm{d}_{ij}\) denotes the Euclidean distance between \(i\) and \(j\). These tasks can potentially be mapped onto special-purpose hardware dealing with quadratic assignments or low-level programmable tasks.

### Several examples of hardware embeddings

Here we consider the hardware representation of several tasks discussed previously. We characterise each assignment by stating its possible embedding on a particular hardware type (spin machines) and several parameters of such an embedding: whether the variables are discrete or continuous (the latter can require additional overhead in the number of discrete operational units), whether the mapping of the problem coefficients is direct or only partial, whether the hardware requires a sequential manner of operation, whether additional constraints must be incorporated into the coefficients of the problem, and other details. Overall, these factors determine whether a given embedding is efficient (no significant overhead, no sequential operations, etc.). We present the examples in Table 1.
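To illustrate how a constraint can be folded into the coefficients of a spin machine (the "partial mapping due to the constraints" noted in Table 1), the sketch below (our illustrative Python; the penalty weight \(\lambda\) and the toy instance are our choices) replaces the balance constraint \(\sum_{i}x_{i}=0\) of the graph bisection task, Eq. (62), by a quadratic penalty and checks the result by brute force on a small graph.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 10
pts = rng.random((n, 2))
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
W = np.exp(-d**2 / 0.5**2)         # weights W_ij of Eq. (62)
np.fill_diagonal(W, 0.0)

# Replace the balance constraint sum_i x_i = 0 by a quadratic penalty:
# (sum_i x_i)^2 = x^T 11^T x vanishes exactly on balanced partitions, so
# min -x^T W x + lam * x^T 11^T x is an unconstrained Ising-type problem.
lam = W.sum()                       # large enough to dominate any imbalance gain
Q = -W + lam * np.ones((n, n))

best_e, best_x = np.inf, None
for bits in product([-1, 1], repeat=n):    # brute force; fine for small n
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_e:
        best_e, best_x = e, x
print("partition:", best_x, "balanced:", int(best_x.sum()) == 0)
```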
## VI Main directions of technological development in optical computing

The demand for computational resources is gaining momentum due to their use in many practical applications. This trend is supported by growing industrial interest from prominent IT companies (Microsoft, Google, IBM, Amazon, etc.) and fast-growing start-ups. To get a better global picture, one must understand the current paradigm of conventional heavy computation and what advantages optical machines can offer. First, we present several key metrics of the standard approaches and then show the benefits of optical devices. After that, we describe the strategies pursued in photonic neuromorphic computing.

### Performance of information processing systems

In general, Moore's law concerns several metrics. All of them are reaching saturation, but at a different pace. To maintain the same effectiveness of the hardware, new technology is required. However, the more significant demand for superior hardware is caused by the explosive growth of AI applications, which puts far more pressure on research and development. For example, the need for computational capabilities increased by more than five orders of magnitude from 2012 to 2018 because of AI developments, as shown in the OpenAI report [8].

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline Assignment & Formulation & Details \\
\hline MIN-2-SAT & \(\min_{\mathbf{x}\in\{0,1\}^{n}}\sum_{i,j<i}w_{ij}x_{i}x_{j}\) & Discrete variables, direct mapping, straightforward operation with respect to dynamical updates, efficient. \\
\hline Phase retrieval & \(\min_{\mathbf{u}}\sum_{ij}M_{ij}u_{i}u_{j}\) & Continuous variables, direct mapping on the QCO (Eq. (4)), straightforward operation, efficient concerning the QCO hardware. \\
\hline Regression & \(\min_{\boldsymbol{\beta}}\sum_{i=1}^{n}\left(\sum_{j=0}^{m}\beta_{j}x_{ij}-y_{i}\right)^{2}\) & Continuous variables, partial mapping, straightforward operation, inefficient concerning the variables mapping. \\
\hline SVM & \(\min\|\mathbf{w}\|\Rightarrow\min\sum_{j}w_{j}^{2}\) s.t. \(y_{i}\left(\mathbf{w}^{T}\mathbf{x}_{i}-b\right)\geq 1\), \(i=1,\ldots,n\) (66) & Continuous variables, partial mapping, straightforward operation, inefficient concerning the variables mapping. \\
\hline k-means & \(\min_{S}\sum_{i=1}^{k}\sum_{\mathbf{x}\in S_{i}}\left\|\mathbf{x}-\boldsymbol{\mu}_{S_{i}}\right\|^{2}\) & Continuous variables, partial mapping, sequential operation, inefficient variables mapping and operation setup. \\
\hline Graph bisection & \(\min_{x_{i}\in\{-1,1\}}-\sum_{i,j}w_{ij}x_{i}x_{j}\) s.t. \(\sum_{i}x_{i}=0\) & Discrete variables, partial mapping due to the constraints, straightforward operation, inefficient representation of the constraints. \\
\hline Image co-segmentation & Eq. (59) & Discrete variables, partial mapping due to the constraints, straightforward operation, overhead on auxiliary variables, inefficient concerning the constraints and overhead. \\
\hline
\end{tabular}
\caption{Examples of hardware embeddings for several of the tasks considered in this review.}
\end{table}

Several key metrics characterise the performance of information processing systems. We will use MAC (multiply-accumulate operation, containing one multiplication and one addition) and FLOP (floating-point operation). The relationship between them is that 1 MAC counts as 2 FLOP. One usually uses MACs and FLOPs to measure the speed performance of a device, which depends on the frequency or the characteristic intrinsic operation time of the hardware.
Alternatively, one can use operations per second (OPs), be it a conventional mathematical operation or a hardware state switching, but this notation is rarely used. Another important metric is energy consumption or efficiency, which can be measured in FLOPs/W (FLOPs per watt). One can consider alternative metrics, such as the total training energy in joules in the case of NN training, or J per spike for operations performed on a spiking NN (SNN) architecture. Many combined metrics and their variations exist, such as speed per area (Op/s/mm), that describe other characteristics of the hardware. Other important parameters of a hardware setup may include the analogue noise level, scalability properties, specific architecture parameters, etc. Data centres that use thousands of CPUs and hundreds of GPUs consume megawatts of power. Despite the versatility of conventional computers, their characteristics are not enough to achieve high performance in the key metrics. Thus, application-specific hardware that differs in architecture and logic reduces the gap between the desired efficiency and computer capabilities. One can find several discussions of these devices in [198; 241], with the corresponding comparison of the key metrics; see also Fig. 10. In addition, we mention some of these electronic devices as reference points for comparison with optical devices.

Figure 10: Left panel: computing power and energy efficiency of different types of computing hardware. The schematic distribution of the processing power versus energy efficiency is shown for several CPUs, GPUs, FPGAs, supercomputers and potential unconventional computing devices based on optical systems; reproduced with permission from [198]. Right panel: energy efficiency versus computing speed per area for spike-event hardware compared with results described in the literature; reproduced with permission from [241].

Classical computing architectures can differ in their details within one type of device. However, it is common to characterise them using two key metrics: the processing power, or computing speed, in FLOPS, and the energy efficiency. One can use the ratio of the FLOPS to the power consumption in watts (W) to obtain the energy efficiency metric [198]. The standard estimate of modern CPU efficiency is 2 TFLOPs, while the power efficiency is about 10 GFLOPs/W. We can take the Intel Xeon processor as one of the top devices in terms of efficiency for double-precision arithmetic, with 4.8 TFLOPs and 29 GFLOPs/W [242]. Graphics processing units (GPUs) are advanced specialised electronic architectures and the workhorses of current ML tasks in real applications because of their parallel computing capabilities. Most GPUs operate at near 0.3 kW power consumption, with a range of 0.5 to 7 TFLOPs and a corresponding 1.6 to 23 GFLOPs/W energy efficiency for double-precision arithmetic. Another type of classical hardware is powerful non-distributed computer systems, which are not as energy efficient but have enormous computing power. The top-10 list starts with the NVIDIA DGX SuperPOD, with 2356 TFLOPs and nearly 26.2 GFLOPs/W. The most powerful supercomputer in processing power is Fujitsu's Fugaku, with 442000 TFLOPs and 14.8 GFLOPs/W. One can link several devices into a powerful distributed system to achieve much higher processing power at additional energy cost. Another class of electronic devices can be termed "dedicated hardware". Although the GPU is not usually attributed to this class, it performs a similar role. A good example is the field-programmable gate array (FPGA), an integrated circuit that can be configured by a customer using a hardware description language.
On average, an FPGA can achieve 10 TFLOPs with a near 50 GFLOPs/W energy consumption rate, and several times more (x 5, x 7) when working with lower-precision numbers [243]. Another example of dedicated hardware is Google's Tensor Processing Unit (TPU). TPUs are custom-developed application-specific integrated circuits (ASICs) built to accelerate ML workloads. Their efficiency can be estimated as 90 TFLOPs and 400 GFLOPs/W [244]. Further improvements in electronic special-purpose devices are expected to come from analogue architectures based on memristors [245], non-volatile memories, compact low-voltage field-effect transistors, and the engineering of heterostructures of two-dimensional materials that take quantum effects into account. Another option is to explore different architectures of dedicated hardware. For example, IBM claims to achieve 176000 times better energy efficiency with its bio-inspired neuromorphic TrueNorth chip than a conventional general-purpose Intel i7 system for specific applications [246]. Nevertheless, TrueNorth has a relatively slow clock rate of 1 kHz and an approximate energy efficiency of 2.3 pJ/bit. Moreover, it requires additional connections for the incoming neural spikes. One can further explore Intel's Loihi [247] or NeuroGrid [248] devices, which are close to the modern GPU [249]. Despite impressive and innovative developments, the presented classical architectures are not enough to satisfy the demand. For example, some estimates of the requirements of future autonomous vehicles call for information processing at a 100 TOps rate with an energy consumption of less than 100 W and, additionally, low latency [250].

### Optical energy consumption

Optical devices can process information nearly instantaneously. Additional advantages include negligible energy consumption and heat generation. State-of-the-art CPU and GPU metrics can be converted into 20 pJ/MAC [251]. Dedicated hardware and application-specific circuits can achieve 1 pJ/MAC at reduced calculation precision [252]. The same so-called "ideal" benchmark is supported by the work [49], where the authors used a programmable nanophotonic processor with a cascaded array of 56 programmable MZIs in a silicon photonic integrated circuit to perform a vowel recognition task. Modern AI chips can reach 100 mW/GOps of operating power, but future competitive requirements are \(\sim 10-1\) mW/GOps [8]. We can consider several examples of photonic hardware and highlight their technical characteristics, such as speed and energy consumption. The photonic accelerator architecture based on coherent detection [51] enables a new class of ultra-low-energy processors operating at very low (sub-aJ) energies per MAC operation. These structures can be reprogrammed and trained on the fly and have good scalability of up to one million elements. Additionally, [51] discusses the "standard quantum limit" for optical NNs, which can be bounded by 50 zJ/MAC for irreversible digital computation. Optical NNs can achieve accurate results with extremely low optical energies [253].
It was shown experimentally that an optical NN with optically computed dot products achieved high accuracy on MNIST digit classification using only a few photons (of the order of \(10^{-19}\) J of optical energy) per weight multiplication. The essential idea was to reduce the noise arising from accumulating scalar multiplications into dot-product sums. Some optical machines can use pre-optimised mathematical structures for architectural benefits. A good example is an energy-efficient, high-throughput and compact tensorised optical NN exploiting the tensor-train decomposition [254]. Such a NN can improve the energy efficiency by a factor of \(1.4\times 10^{4}\) compared with digital electronic ANN hardware and by a factor of \(2.9\times 10^{2}\) compared with silicon photonic technologies. Moreover, it was possible to achieve better energy efficiency with fewer elements in the footprint-energy efficiency calculation [254]. In general, neuromorphic photonic systems potentially offer petaMAC-per-second-per-mm\({}^{2}\) processing speeds [61] and attojoule-per-MAC energy efficiencies [62]. Energy consumption is closely related to the physical properties of the neural architecture. For example, event-driven spiking neural networks (SNNs) outperform ANNs in energy efficiency. The dynamics of many models can be described using the universal Izhikevich model [249]. Event-driven neuromorphic computing overcomes the limitations of traditional von Neumann architectures but has several specific problems with throughput, scalability, training methods, etc. The successful implementation of an optoelectronic spiking neuron inspired by the Izhikevich model was reported in [241]. A nanoscale optoelectronic neuron with a 200 aJ/spike input can trigger the output of on-chip nanolasers with 10 fJ/spike. This neuron can support a fanout of \(\sim 80\) or overcome 19 dB of excess optical loss while running at 10 GSpikes/second in the NN. Such a scheme corresponds to a 100-fold throughput and 1000-fold energy-efficiency improvement compared to state-of-the-art electrical neuromorphic hardware such as Loihi and NeuroGrid [241]. Hybrid systems of quasiparticles can be another potential platform for spiking architectures: exciton-polaritons can achieve 1 pJ/spike at 100 ps timescales [255; 256].

### Evaluation of speed

A universal optical computer has never been a viable option to compete with classical computers. Instead, a specialised optical computer, or an optical block as part of a hybrid classical/non-classical architecture, has become the focus of recent research. One of the first realisations of simple mathematical operations, a free-space fan-in/out vector-matrix multiplication, was introduced by Goodman in 1978 [257]. It is the essential linear algebra operation: the input vector is loaded into an array of light sources, and the multiplication matrix is encoded into the SLM. The light propagation is analogous to broadcasting the initial vector onto the SLM, which performs element-wise multiplication, after which a lens gathers all the beams in the horizontal direction and sums the intensities. One can evaluate the performance of this device as \(N^{2}\) MAC for one multiplication of a vector of \(N\) elements with a square \(N\times N\) matrix. The effective performance is, however, limited by the system's frequency \(f\), mainly that of the SLM, resulting in \(fN^{2}\) MACs; see also [258]. Nevertheless, using a 256-length input vector and a 125 MHz frequency, the device's performance can reach an impressive \(\sim 8\) TMACs.
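These back-of-envelope figures are easy to reproduce. The sketch below (illustrative Python; the 1550 nm wavelength is our assumption, chosen simply as a representative telecom value) evaluates the \(fN^{2}\) throughput quoted above and compares the reported \(10^{-19}\) J per weight multiplication with the energy of a single photon.

```python
# Throughput of a free-space fan-in/out vector-matrix multiplier:
# one pass performs N^2 MACs, repeated at the SLM frame rate f.
f = 125e6                       # SLM frequency, Hz
N = 256                         # input vector length
throughput = f * N**2           # MAC/s
print(f"throughput: {throughput / 1e12:.1f} TMAC/s")   # ~8.2 TMAC/s

# Energy scale: one photon at an assumed telecom wavelength of 1550 nm
h, c = 6.626e-34, 3.0e8         # Planck constant (J*s), speed of light (m/s)
E_photon = h * c / 1550e-9      # ~1.3e-19 J
print(f"photon energy: {E_photon:.2e} J")
# So ~1e-19 J per weight multiplication is roughly one photon per MAC.
```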
Other schemes based on different forms of free-space matrix-vector multiplication can reach similar values. In 2020, Lightmatter presented an optoelectronic hybrid chip, 'Mars', with \(0.4-4\) TMACs depending on the frequency of the weights [259]. A massively parallel convolution scheme with a 16x16 'tensor core' based on a crossbar architecture has been built on a chip with a 13 GHz input modulation speed and approximately 2 TMACs [260]. Another scheme, based on electro-optical Mach-Zehnder modulators, represents a universal optical vector convolutional accelerator, achieves more than ten TOPS, and was subsequently used successfully as an optical convolutional neural network for facial and handwritten-digit image recognition [261]. Most photonic hardware with a feed-forward architecture can operate at high (GHz) speeds and usually has good scalability characteristics [51; 253; 254]. Another critical factor affecting an optical device's speed performance is the hardware architecture. From this perspective, RC might improve many aspects of optical computing devices; one could expect several orders of magnitude of speed-up compared to the typical ANN structure. RC optoelectronic/optical implementations are usually divided into spatially distributed and time-delayed [80]. An RC scheme on a silicon photonic chip with optical waveguides, splitters and optical combiners can achieve data processing rates of 0.12 and up to 12.5 Gbit/s [262]. Moreover, more exotic physical systems, such as exciton-polaritons, can reach similar performance, so that SNN architectures can achieve characteristic operation times of the order of 100 ps with an energy efficiency of 1 pJ/spike [255; 256]. Optics-based spin machines also enjoy competitive speed characteristics. The CIM evolved from having just 4 spins and 12 connections in 2014 (Stanford) to 16K spins and 256M connections in 2021. The 2000-node version achieves the semidefinite-relaxation minimum of a cost function in 0.1 ms and further improves the solution [263]. A new generation of CIMs based on thin-film LiNbO3 (TFLN) photonic circuits will be released in 2022. It will feature an OPO network with \(\sim\mu\)W pump power, \(\sim\)fs pulse duration, a 100 GHz - 1 THz clock frequency and the synchronised operation of multiple CIMs on a chip. Exciton-polaritons possess even faster, ultrafast timescales. For example, the polariton graph simulator [136] is easily scalable to 10K elements and shows \(\sim 100\) ps operational times, while the degenerate-laser system [90] has a \(\sim\mu\)s characteristic timescale. However, all-to-all controllable couplings have yet to be experimentally implemented.

### Other important properties

Other essential factors undoubtedly affect optical devices' attractiveness and performance, such as the intrinsic noise and analogue accuracy of the hardware. For example, recognition results on MNIST handwritten digits can show different accuracy on different devices, which can be a good measure of how well a particular NN is adjusted to a specific task [241]. A comprehensive analysis of the error sources and their classification for an electro-optical device can be found in [8]. An essential part of the hardware is its structure/architecture. It affects many other properties of an optical device, be it the accuracy, scalability, the potential for future optimisation, etc. The interplay between the hardware's electronic and photonic components depends on the architecture.
It directly affects the optical/electronic conversion, the storing and reading of data, and the cost of logic operations in the case of a hybrid architecture. Scalability is one of the key metrics and is a consequence of the architecture choice. It measures the ability of a system to keep its algorithmic performance with a growing number of variables. Optical setups enjoy additional degrees of freedom compared to conventional electronic hardware. For example, short optical pulses can be parameterised by two independent variables in the complex plane. In addition, one can explore optics-specific degrees of freedom, such as the polarisation and orbital angular momentum of light. Lastly, current optical hardware tends to employ classical algorithms and NN architectures that are conventional for standard electronic architectures. These algorithms are designed using Boolean logic, which suits a digital computing system but is not always optimal for an optics implementation. Therefore, developing specialised algorithms optimised for optical computing platforms is necessary, further reducing the operational complexity and execution time.

### Optical minimisers of spin Hamiltonians

The optical systems described in this review as optical minimisers of spin Hamiltonians aim to find the global minimum of hard optimisation problems. They offer the potential for finding a better solution to a wide variety of nonlinear optimisation problems in a fixed time, finding a solution of a given precision faster, or solving more complex problems at a fixed and limited cost. All these machines have advantages and limitations. They vary in scalability, the ability to engineer the required couplings, the flexibility of tuning the interactions, the precision of the read-out, and the factors facilitating the approach to the global rather than a local minimum. However, they all have some part of their operation that promises increased performance over classical computations. To solve an optimisation problem on an optical minimiser of spin Hamiltonians, one needs an optimal mapping of the real-life problem onto a spin Hamiltonian; some such mappings are known [84], while for others, finding an optimal mapping is half the success in solving the problem. While combinatorial optimisation often focuses on finding just one of the absolute-minimum configurations, in many applications it is desirable to obtain many or all degenerate absolute minima and, in some cases, to sample many low-energy excited states as well [264]. Such a sampling capability benefits applications that need distributional information about optimal solutions, such as implementing Boltzmann machines as generative models for ML [265]. In industrial settings, accessing a pool of candidate solutions to an optimisation problem can make processes more efficient and flexible; for example, in drug discovery [266], structure-based lead optimisation could generate many candidate molecules for simultaneous testing. Another approach is to decompose large optimisation problems into subproblems to be solved separately (e.g., to accommodate hardware limitations). Better solutions to the original problem can be constructed using multiple low-energy samples rather than just the optimum of each subproblem [267]. However, a spin minimiser designed for combinatorial optimisation is not necessarily well-suited to sampling all ground and low-energy states.
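As a concrete illustration of such a mapping (following the well-known encoding of number partitioning into an Ising Hamiltonian; the instance below is our toy example), the sketch constructs couplings \(J_{ij}=a_{i}a_{j}\), for which the ground states of \(H=\sum_{i<j}J_{ij}s_{i}s_{j}\) correspond to balanced partitions, and enumerates them by brute force. Note that the ground state is at least doubly degenerate (a global spin flip yields the same partition), which already hints at why sampling all minima can matter.

```python
import numpy as np
from itertools import product

a = np.array([4, 5, 6, 7, 8])           # numbers to partition (toy instance)
J = np.outer(a, a)                      # Ising couplings J_ij = a_i * a_j

def H(s):
    # H = sum_{i<j} J_ij s_i s_j = ((sum_i a_i s_i)^2 - sum_i a_i^2) / 2,
    # minimised exactly when the two subset sums are equal.
    return sum(J[i, j] * s[i] * s[j]
               for i in range(len(s)) for j in range(i + 1, len(s)))

states = [np.array(p) for p in product([-1, 1], repeat=len(a))]
energies = np.array([H(s) for s in states])
ground = [s for s, e in zip(states, energies) if e == energies.min()]
for s in ground:
    print(s, "subset sums:", a[s > 0].sum(), a[s < 0].sum())
```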
The nonlinear stochastic dynamics of such machines in the presence of quantum noise can be exploited to sample degenerate ground and low-energy spin configurations of spin models. When such optical machines operate in a quantum-noise-dominated regime with short photon lifetimes (i.e., low cavity finesse), homodyne monitoring of the system can produce samples of low-energy spin configurations more efficiently than their classical analogues [268]. An additional advantage of the discovered principles of operation of optical minimisers of spin Hamiltonians is the opportunity to formulate new optimisation algorithms to be realised on specialised but classical computing architectures: FPGAs, GPUs, etc. For example, the principle of operation of the CIM was implemented as a network of nonlinear oscillators described by simplified equations [269]. A similar approach has recently been realised on FPGAs using a network of Duffing oscillators [270]. To properly assess the properties of such systems, one can use computer simulations in several scenarios. Such emulations allow one to avoid labour-intensive experiments, properly predict the properties of such systems, tune and optimise the parameters for optimal performance, and even inspire new classes of algorithms for conventional computers. The emulation algorithms can be found in [122; 271]. Such techniques apply to a broad class of NNs.
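A minimal emulation in this spirit (our sketch, not the actual algorithms of [269; 270]; all parameters are illustrative) integrates amplitude equations of the form \(\dot{a}_{i}=(p-1-a_{i}^{2})a_{i}+\varepsilon\sum_{j}J_{ij}a_{j}\): the pump-dependent bifurcation pushes the amplitudes towards \(\pm\sqrt{p-1}\), while the coupling term favours sign configurations with low Ising energy, and the signs of the steady-state amplitudes are read out as spins.

```python
import numpy as np

def emulate_cim(J, p=1.5, eps=0.1, dt=0.01, steps=5000, seed=0):
    """Euler integration of a simplified network of coupled nonlinear
    oscillators; sign(a_i) is read out as the Ising spin s_i."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    a = 1e-3 * rng.standard_normal(n)        # weak random initial field
    for _ in range(steps):
        a += dt * ((p - 1 - a**2) * a + eps * (J @ a))
    return np.sign(a)

def ising_energy(s, J):
    return -0.5 * s @ J @ s

rng = np.random.default_rng(1)
J = rng.choice([-1.0, 1.0], size=(16, 16))
J = np.triu(J, 1)
J = J + J.T                                  # symmetric, zero diagonal
s = emulate_cim(J)
print("spins:", s.astype(int), "energy:", ising_energy(s, J))
```

Slowly ramping the pump \(p\) through the oscillation threshold, rather than fixing it, is the usual refinement that improves the quality of the found minima.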
### Efficiency of Artificial neural networks

Overall, ANNs help process large data sets and combine and analyse vast amounts of information quickly and without explicit instructions. Therefore, multiple NN architectures have been investigated and implemented in various applications. Developing a variety of NNs is essential because different NNs can be represented through different architectures while maintaining a certain universality in approximating and representing many complex systems, which expands the already significant scope of NN applicability. Many linear transformations can be performed with passive optics without power consumption and with minimal latency at rates over 50 Gb/s. The feasibility of optical logic gates has also been demonstrated [272; 273; 274; 275; 276]. However, attempts to replicate classical Boolean electronic logic circuits in photonics have not proved successful. Analogue photonic computing devices are especially suitable for NNs, which require fast and energy-efficient (although approximate) computations. Furthermore, in principle, many optical nonlinearities can be used to implement various nonlinear functions [277]. Recent developments suggest that optical implementations of NNs can overcome electronic solutions in terms of computational speed and energy efficiency. However, as discussed in this review, the challenge of developing truly deep NNs with photonics remains unsolved. Photonic multilayer perceptrons and photonic SNNs have great potential for realising all-optical ANNs. In the near term, photonic accelerators for convolutional NNs (multiple layers, weight sharing, sparse topology) are the most promising photonic solutions to enhance inference speed and reduce power consumption. However, there are still many other opportunities to explore and improve in the implementation of photonic NNs. For example, more research is needed to assess whether a specific type of deep NN can be implemented optically efficiently, i.e., in a way that provides advantages over fully electronic implementations. Furthermore, photonics has not yet implemented some deep NNs (long short-term memory NNs, generative adversarial nets, geometric deep NNs, deep belief networks, etc.). The ultimate goal is to demonstrate large networks with thousands of nodes and interconnections across many hidden layers, i.e., truly deep architectures. Therefore, it is essential to work on photonic NN cascadability (enabled by low propagation losses, crosstalk, and noise) and robustness to fabrication imperfections and parameter drifts over time [278]. For instance, resonant structures like microring resonators are susceptible to manufacturing deviations [279]. At the same time, because of their reconfigurability, linear optical processors based on MZIs appear more robust to process inaccuracies. Some studies discuss how to achieve reliable photonic computations even with imperfect components [280]. The all-optical implementation of the nonlinear activation function requires further investigation. Nonlinearities can be emulated in software, but integrating nonlinear elements into hardware is still challenging. Several approaches to address this issue have been reported using MZIs [281], graphene and quantum-well electro-optic absorption modulators, and photonic crystals [282]. Some technological breakthroughs would benefit photonic NNs, particularly the implementation of an integrated, non-volatile and energy-efficient photonic memory element. In this scenario, using phase-change materials seems the most promising approach to achieving such photonic memories, since they have also shown the potential for multi-level storage [283]. Moreover, cells of such materials have recently been exploited in photonic NNs, mainly for SNNs [56].

#### vi.6.1 All-optical backpropagation

When training NNs, one usually considers the backpropagation algorithm by default. The essential idea behind backpropagation is to compute the gradient of the loss function with respect to each weight by the chain rule, doing so sequentially, one layer at a time, iterating backwards from the last layer to avoid redundant calculations of intermediate terms [216; 218]. Such a procedure allows one to fit the weights of a NN for a given task. Still, the complexity of backpropagation is enormous. It grows linearly with the number of training examples or batches and with the number of iterations, which is not known in advance, times the basic complexity of the feed-forward input propagation, which can be estimated as a sequence of matrix-vector multiplications. These estimates hold in many cases, assuming a batch gradient descent algorithm and simple matrix multiplication for the input propagation. However, one can reduce the number of steps with some approximate schemes. At the moment, there are many different ways to train NNs, including variants of backpropagation or alternatives, such as learning without backpropagation [284]. Thus, the backpropagation algorithm remains one of the most expensive components to compute. The significant power and time consumption arises from the sequential computation of gradients in the backpropagation procedure of NN training. Backpropagation through nonlinear neurons is another challenge for the field of optical NNs and a significant conceptual barrier to all-optical training schemes.
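For reference, the chain-rule structure that any all-optical scheme would have to reproduce is compact in software. Below is a minimal numpy sketch (our illustration: a two-layer network with a tanh hidden layer and a squared loss on random toy data), in which the backward pass reuses the stored forward activations layer by layer, exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3))        # batch of 8 inputs
y = rng.standard_normal((8, 1))        # targets
W1, W2 = rng.standard_normal((3, 5)), rng.standard_normal((5, 1))

lr = 0.05
for step in range(200):
    # Forward pass (activations are stored for reuse in the backward pass)
    h = np.tanh(x @ W1)
    out = h @ W2
    loss = 0.5 * np.mean((out - y) ** 2)

    # Backward pass: chain rule applied layer by layer, output to input
    d_out = (out - y) / len(x)         # dL/d(out)
    dW2 = h.T @ d_out                  # dL/dW2
    d_h = (d_out @ W2.T) * (1 - h**2)  # through tanh: d tanh(z)/dz = 1 - tanh(z)^2
    dW1 = x.T @ d_h                    # dL/dW1

    W1 -= lr * dW1
    W2 -= lr * dW2
print(f"final loss: {loss:.4f}")
```

The two bottlenecks visible here, the strictly sequential backward sweep and the differentiation through the nonlinearity, are precisely the operations that are hard to realise optically.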
Although several practical, simple solutions exist, such as using the approximation provided by a pump-probe scheme that requires only passive optical elements [285], or measuring the forward- and backward-propagated optical fields based on light reciprocity and phase conjugation principles [286], these schemes still involve digital electronics or programming a high-speed SLM, respectively. Therefore, with only incomplete solutions available, work on the end-to-end optical training of NNs is in progress. Achieving an efficient all-optical backpropagation training method (besides the realisation of depth and nonlinearities) would be a major achievement in the field. Such a realisation is likely a matter of time, since there are no fundamental restrictions on such a development [287; 288].

### Statistical sampling

Statistical sampling is another essential domain where using optical machines can be beneficial. PGMs can effectively represent the probability distributions of different factors in complex systems. Moreover, due to their universal structure, one can model complicated large graphs with many factors for various practical problems. The correspondence between the Ising model and the probability measure of a pairwise PGM allows one to solve many tasks, such as inference based on given observations, or sampling. For example, the latter can yield the most and the least probable states by flipping the sign of the energy function. Furthermore, additional specific mechanisms present in several types of hardware can enhance the sampling procedure, so that it can serve as a source of additional information for particular problems. Unfortunately, the simulation of PGMs using optical machines remains underexplored. The obvious directions are to increase the programmability of optical spin models, to access more options for manipulating the Ising/XY/Potts, etc., states, or to decompose large and rich PGMs into discrete approximations accessible to spin Hamiltonian simulators. Another option is to investigate additional hardware improvements in the context of PGMs. Finally, there are many more applications of the correspondence between spin system functionality, control theory, and decision-making.

#### vi.7.1 Neural architectures and transfer learning

NN architectures can differ in form (deep and shallow, feed-forward and recurrent), training methods, network topologies, and operational principles. Some photonic architectures were mentioned before; see Section II.3. Moreover, some of these structures are better suited to one purpose than another. For example, recurrent NNs are good at tackling temporal dependencies, while convolutional NNs are the standard architecture in image processing tasks. However, one cannot easily realise all of the architectures on particular hardware, due to its physical limitations or the ineffectiveness of the design. To deal with the transfer of functionality between different architectures, one can turn to the domain of transfer learning. Originally, transfer learning was a research direction in ML aimed at gaining knowledge from solving one type of problem and using it in a different but related domain; see the recent reviews [289; 290]. Viewed more broadly, however, transfer learning is a way to transfer the features of one architecture to another and make the problem more hardware-friendly. The transfer of functionality will strongly influence the ML domain and benefit the hardware computing field.
It is of particular importance for optical devices, which have certain engineering limitations on the realisation of some architectures. Many more related research directions, like neural architecture search, can be adapted to optimise hardware systems.

## VII Optical quantum computing

ANNs in photonic integrated circuits and optical minimisers of spin Hamiltonians are the main paradigms for optical platforms that have already established an engineering base and clear development directions. Compared with emerging quantum technologies, a high-risk endeavour, classical optical devices offer advantages in speed, parallelism, energy consumption and operational policy in the short to medium term. We can therefore say that optical technologies are repeating the development of their electronic special-purpose hardware analogues, with technological progress making "another loop in its spiral development". Hence, they represent a solid investment in research and development activities with clear advantages. The story of quantum computers is related to exciting developments in physics and the theory of computation. There has been a recent surge of investment by large public and startup companies. Such a ramping up of industrial activity requires careful examination of the commercial potential of quantum computing technology. There are many hardware platforms on which quantum computing can be developed, and it is still to be determined which technology, or combination of technologies, will prove most successful. This section assesses the current status and future potential of quantum computing based on photons. The current view of the academic community is to exert caution when discussing future practical applications of quantum computing technology, because it is so different from the information technology we use now. Many believe that quantum technology will substantially impact society in the decades ahead [291]. Still, not many are confident about the commercial potential of quantum technology in the near term (five to ten years) [292]. Others are sceptical that quantum computers will ever become useful [293]. At the core of the critics' argument against the feasibility of quantum computers lies the notion of complexity. So far, only a very low-level complexity class of probability distributions has been identified as described by noisy intermediate-scale quantum computers. Such computers would allow neither good-quality quantum error correction nor a demonstration of "quantum supremacy", the ability of quantum computers to make computations that are impossible or extremely hard for classical computers [293].

### Quantum optical devices

The operation of quantum computers relies on three principles: quantum entanglement, quantum complexity and quantum error correction. Quantum computers exploit the characteristic correlations among the parts of a quantum system to make them robust and scalable to large devices solving hard problems. By 2022, many advances in quantum computing had been announced (but some were also refuted). The leading technologies are based on superconducting qubits (Google, IBM, Rigetti) and trapped ions (IonQ, Honeywell). The Google team announced quantum supremacy using 53 qubits in 2019; IBM entangled 65 qubits while revealing a road map to more than 1000 by 2023. The advantages of superconducting qubit systems are that they are based on well-developed semiconductor technology; however, they have to be kept cold (10 mK) and have short decoherence times (\(<10\,\mu\)s).
In contrast, trapped ions are very stable, with much longer decoherence times (minutes), longer-range interactions (beyond nearest neighbours), and the best reported quantum volume among quantum computing systems. However, many lasers need to be controlled simultaneously, the operation is slower, and it is hard to put many ions on a chip. So far, IonQ has achieved 32 trapped ions in a chain and promises to achieve quantum supremacy by 2025 and to solve interesting real-life problems by 2028. There are other proposals and small-scale realisations using silicon quantum dots [294], diamond vacancies [295], neutral atoms [296; 297], etc. One of the biggest disappointments was experienced in 2021 by Microsoft, which had invested in topological qubits. In theory, a topological qubit created from a pair of Majorana zero modes could benefit from topological protection. Topological protection leads to stability and a lack of decoherence that could help topological quantum computers scale up in power more easily than other approaches. An experimental realisation of the theoretically predicted Majorana zero modes was reported in 2018 [298], but the paper was retracted following the discovery of errors in the presented data. Quantum computers based on photons had been considered impractical in the early days of quantum computer development because of difficulties in generating and controlling the required quantum states. However, such computers are now being developed by photonic companies such as Xanadu (Toronto) and PsiQuantum (Palo Alto, CA), in addition to intensive academic research. The advantages of photon-based quantum computers are room-temperature operation, much longer decoherence times (from ms to hours), and systems that are cheaper and easier to build. However, they become large quickly (although PsiQuantum claims that one million qubits would still be possible). For a photon-based quantum computer, boson sampling was proposed as a counterpart to the random quantum circuits of superconducting qubit systems. A sampling task is one where the computer generates samples from a specific probability distribution. Quantum algorithms allow sampling from probability distributions well beyond the capabilities of classical computers. The most famous example is Shor's factorisation algorithm, which exploits the ability to sample efficiently from a probability distribution based on the Fourier coefficients of a function on a quantum computer. There is a quantum uncertainty associated with the amplitude and phase of any state of light. In squeezed states of light, this quantum uncertainty is unequally distributed between the amplitude and phase, and the more the state is squeezed, the more photons it contains. Multi-photon squeezed light is found in many quantum-optics experiments, and quantum computing models based on these states have been studied for over two decades [299]. In particular, it was proposed that even a relatively simple optical circuit that exploits the properties of squeezed light and consists of beam splitters and photon counters could carry out a sampling algorithm at a speed beyond the reach of classical computers [300; 301]. It was also proposed that such an algorithm has many practical applications [302], such as finding matching configurations between molecules [303] or the different states of a molecule [304]. More rigorously, the boson sampler is a quantum optical device in which a linear optical network mixes many non-classical photon sources.
As a result, the photons are indistinguishable and, when originating from different sources, lead to complex photon-counting statistics at the output detectors. When the number of input/output channels of the boson sampler is large, the emulation of such a device with a classical computer is believed to be \(\#\mathbb{P}\)-hard [300; 305]. In the original formulation, the boson sampler was introduced as a device consisting of single-photon sources, a linear interferometer and photon-counting detectors at the output channels. Several experiments implemented variations of this set-up: 5 input photons in a 21-mode optical circuit [306], and 20 input photons in a 60-mode interferometer [307]. Using single-photon sources creates various technological complications that limit the scalability necessary to overcome classical computations (which roughly scale as \(2^{k}\) in the number of operations, where \(k\) is the number of input photons). The lack of scalability in single-photon-based experiments on integrated platforms is due to non-deterministic state preparation and gate implementation. Instead, deterministically prepared squeezed states and linear optics with non-Gaussian operations provided by photon-counting detectors allow significant scaling up of the number of input/output channels. Therefore, Gaussian boson sampling was proposed, where the single-photon sources are replaced by single-mode squeezed light generated by parametric down-conversion sources [300]. A paper published in Science at the end of 2020 reported achieving a "quantum computational advantage" by implementing Gaussian boson sampling using 50 input channels and a 100-mode interferometer [308]. The authors claim that their device produces in 200 seconds samples that would require billions of years on classical computers. Specifically, the paper reports a Gaussian boson sampling experiment representing a quantum state in a \(10^{30}\)-dimensional Hilbert space and a sampling rate \(10^{14}\) times faster than that of digital supercomputers. This paper was described as the first independent verification of Google's quantum advantage claims and was claimed to surpass Google's supremacy by several orders of magnitude. The huge computational advantage reported in [308] is based on specific statistical tests measuring the proximity of the measured samples to the outcomes of noiseless simulations of the quantum experiment performed on a classical digital supercomputer. It was previously shown that a classically sampled distribution might pass the same statistical tests by only reproducing small-scale correlations of the actual theoretical distribution [309]. Moreover, a polynomial-time algorithm based on taking a truncated Fourier-Hermite expansion of the boson sampling distribution [309] may achieve similar or better sampling quality under the statistical tests of [308]. Another method for attaining similar sampling quality, based on an algorithm of Clifford and Clifford [310], was also proposed [311]. Finally, very recently, a series of approximations was introduced to generate the probability distribution of any specific measurement outcome with polynomial complexity [312]. The accuracy of the experiment was matched at the fourth order of the approximation using a laptop computer.
The algorithm was tuned towards the actual experiment and applies only to Gaussian boson sampling (not Fock-state boson sampling) [300], only to threshold detectors (not photon-counting detectors), and only to a small number of modes (not quadratic in the number of photons, as in the original proposal [300]). Further experiments, however, reported nontrivial genuine high-order correlations in the GBS samples, which presents evidence of robustness against possible classical simulation schemes [313]. So far, experimental implementations of GBS lack programmability (reconfigurability of the circuitry) or have prohibitive loss rates that limit the scalability. There is a need for rigorous theoretical evidence of the classical hardness of GBS, although some progress was recently made [314]. A more recent (2021) experiment by Xanadu and NIST attempted to remedy this [315]. The circuitry they implemented is programmable and potentially highly scalable. The system uses eight modes of strongly squeezed vacuum, initialised as two-mode squeezed states in single temporal modes, that pass through a fully programmable four-mode interferometer with photon-number-resolving readout on all outputs. This technological advance was achieved by using strong squeezing and high sampling rates. The interferometer implemented a user-programmable gate sequence based on a network of beam splitters and phase shifters. The resulting eight-mode Gaussian state is measured in the Fock basis using eight independent photon-number-resolving detectors. The total device was composed of a 10 mm \(\times\) 4 mm photonic chip, coupled with a high-level application programming interface running on a classical computer. There are many problems to overcome before quantum sampling implemented on a quantum computer becomes useful for real-world applications. Photon losses need to be controlled and significantly decreased, so that photons can travel through the circuitry, to improve scalability. The sampling fidelity and the quality of the squeezed states must be improved. The most exciting applications would require individual control of the degree of squeezing and the amount of optical power in each squeezed state. The number of commercial applications that can be implemented using the current architecture is limited. The Xanadu group implemented two potentially practical algorithms. By encoding the problems into the beam-splitter network, they use the generated samples to determine energy spectra for transitions between molecular states and to find the similarity between graph representations of different molecules. In particular, a graph can be encoded in a photonic circuit by mapping the graph's adjacency matrix into the structure of a linear optical interferometer with squeezed light [316]. The photon-counting statistics can be used to specify so-called "feature vectors", which represent the graphs in Euclidean space [317], so that the distance between them can be used to quantify the similarity of the corresponding graphs. Such a similarity measure between graphs derived from a Gaussian boson sampling device is important, for instance, for classification in ML. Recently, other optical computing platforms based on squeezed states have been theoretically proposed on the route to a useful optical quantum computer [318; 319]; see Fig. 11.

### Boson sampling and graph isomorphism

Using a light-interference network for quantum analogue calculation has many practical advantages.
We mentioned that operating a boson sampling setup allows one to calculate the permanent of a specific matrix, which is extremely hard from the computational perspective. However, turning this computation into something practically useful is more complicated, and it remained an open question for some time, with a few debates still ongoing. Recently, the connection between a Gaussian boson sampler and the graph isomorphism problem was established [321]. The graphs are encoded into quantum states of light, and their properties are then probed with photon-number-resolving detectors. Using a complete set of graph invariants, the authors prove that the detection probabilities of the setup can be combined so that the isomorphism between two graphs is established precisely when the output detection probabilities are equal. It is still an open question whether graph isomorphism has a specific complexity type; it is believed to belong to the class of \(\mathbb{NP}\)-intermediate computational problems. The existence of a polynomial-time algorithm that can determine whether two graphs are isomorphic remains in question; however, quasi-polynomial algorithms exist. Recent advances in photonic boson sampling, with a description of both the technological improvements and the future challenges, can be found in [322]. The proposed connection between graph isomorphism and boson sampling can be further extended to other practical tasks, such as constructing graph kernels for ML applications operating on graph-structured data [317].

Figure 11: a) Definition of the problem: sample from the specific distribution defined by the modulus-squared permanents of submatrices of a Haar-random unitary matrix \(U\). b) The scheme of the photonic experiments, i.e. single photons propagate through a linear optical network and are then detected. c) A classical boson sampling algorithm based on Metropolised independence sampling, using the distinguishable-particle transition probabilities as the proposal distribution. Reproduced from [320] with permission.
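To make this classical hardness concrete: the best-known exact classical algorithm for the permanent, Ryser's inclusion-exclusion formula, takes \(O(2^{n}n)\) operations, so adding photons to a boson sampler roughly squares the classical simulation time. A short sketch (our illustration, assuming only numpy) is given below.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n)."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # Product over rows of the row sums restricted to column subset S
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

A = np.ones((4, 4))
print(permanent_ryser(A))   # permanent of the all-ones n x n matrix is n! = 24
```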
### Quantum ML

Programmable waveguide meshes can be configured to execute any linear transformation between sets of input and output waveguides. These operations are also at the core of photonic quantum computing, where the quantum information is represented by quantum states of light propagating through photonic integrated circuits [323]. A typical scheme encodes a qubit as a single photon in a superposition of two rail waveguides [324]. Noisy intermediate-scale quantum (NISQ) devices have now shown potential in quantum ML, which promises to process large data sets vastly faster than classical computers [325]. In these proposals, quantum ML parallels classical photonic deep-NN accelerators: stages of linear waveguide meshes connected by activation layers; in the quantum case, however, these activation layers must have strong reversible nonlinearities [326]. In such a 'quantum optical neural network' (QONN), programming a NISQ computer reduces to training the phases in the waveguide mesh through supervised learning on input and output quantum states. A QONN can be taught to perform a range of quantum information processing tasks, including quantum optical state compression and reinforcement learning. Recently, a QONN managed to program a one-way quantum repeater [326; 327]. However, these concepts, and many other ideas in neural quantum architectures developed earlier, are far from practical usefulness given the current state of experiments.

### Comparison with other quantum approaches to optimization

As previously discussed, the CIM has shown several orders of magnitude time-to-solution advantage over the D-Wave2000Q quantum annealer on similar dense matrix instances. At the same time, recent comparisons of (1) the quantum approximate optimization algorithm (QAOA) and two widely studied competing methods, quantum annealing and simulated annealing [328]; (2) the D-Wave2000Q quantum annealer and the IBM Q Experience system implementing QAOA [329]; and (3) the benchmarking of QAOA on Google's "Sycamore" [330] allow us (to some extent) to compare the performance of optical spin machines with QAOA. In the QAOA, the variational wavefunction resembles a trotterised version of the quantum annealing procedure:

\[|\Psi(\beta,\gamma)\rangle=\prod_{i=1}^{p}e^{-i\beta_{i}H_{0}}e^{-i\gamma_{i}H_{\rm objective}}|+\rangle, \tag{70}\]

where the starting state \(|+\rangle=\prod_{i}(|0\rangle_{i}+|1\rangle_{i})/\sqrt{2}\) is the product of eigenstates of \(\sigma_{x}\) with eigenvalue 1, which is simultaneously the superposition of all computational basis states. In contrast to a trotterised version of quantum annealing, the parameters \(\beta_{i}\) and \(\gamma_{i}\) are adjusted in a classical learning loop to minimise the objective function; such adjustments are considered \(\mathbb{NP}\)-hard problems themselves. As \(p\rightarrow\infty\), the QAOA approaches smooth quantum annealing. The results of [328] show that QAOA can deterministically find the solution of specially constructed optimization problems in cases where both quantum annealing and simulated annealing fail (wide and tall energy barriers of the function to be minimized). However, an efficient classical algorithm exists for these instances. In [329], small (up to \(N=18\)) weighted Max-Cut problems and 2-SAT problems were tested on the D-Wave2000Q quantum annealer and the IBM Q Experience. The actual IBM Q machine on 16 qubits gave such poor solution quality that the real physical experiment on the D-Wave2000Q was compared to a simulation of QAOA. Even in this case, physical quantum annealing showed much better success probabilities than QAOA (99.92% vs 8.84% at \(p=1\) and 42.39% at \(p=3\), respectively, on matrices as small as \(N=8\)). The conclusion was that, for the set of problem instances considered and taking the success probability as the measure, "the QAOA cannot compete with quantum annealing". The corresponding plots can be found in Fig. 12. In [330], the authors ran the Google Sycamore superconducting-qubit quantum processor on combinatorial optimization problems with QAOA, using the planar graph matching the hardware connectivity. They also applied the QAOA to the Sherrington-Kirkpatrick (SK) model and Max-Cut, both high-dimensional graph problems for which the QAOA requires significant compilation. The problems were solved up to \(N=23\) numerically (without noise) and experimentally. For QAOA, the theoretically optimal \(\beta,\gamma\) and \(p\in\{1,\ldots,5\}\) were used in the experiments. The average success probabilities for problems on graphs matching the hardware connectivity reached a plateau for \(N>8\) at about 80% (numerically) and about 45% (experimentally). For the SK and Max-Cut problems, performance quickly deteriorated (for any \(p\)) to the probability of finding a solution by random guessing (for \(N>15\)). The authors concluded that while no existing quantum processors can outperform classical optimization heuristics, applying popular methods such as the QAOA to prototypical problems can serve as a benchmark for comparing various hardware platforms. For quantum optimization to compete with classical methods on real-world problems, it is necessary to push beyond contrived problems at low circuit depth.

Figure 12: The ratio of the found solution to the best solution when using QAOA for various problem sizes \(N\). Each solution is averaged across ten random instances (standard deviation is given as error bars). The experimental solutions of the SK and Max-Cut models approach random guessing as \(N\) increases. The figure is taken from [330].
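For readers who want to reproduce the qualitative behaviour of Eq. (70), a depth-one QAOA for Max-Cut can be simulated directly on a statevector. The sketch below (our toy illustration in plain numpy, not the compilation of [330]; the 4-vertex ring graph and the grid search over \((\gamma,\beta)\) are our choices) reports the probability of measuring an optimal cut, i.e. the "success probability" quoted in the comparisons above.

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # 4-vertex ring; max cut = 4
n = 4
states = np.array(list(product([0, 1], repeat=n)))      # computational basis
cut = np.array([sum(s[i] != s[j] for i, j in edges) for s in states])

X = np.array([[0, 1], [1, 0]])
def mixer(beta):
    """exp(-i*beta*sum_i X_i), built as a Kronecker product of 1-qubit gates."""
    u1 = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, u1)
    return U

best = 0.0
plus = np.full(2**n, 2**(-n / 2), dtype=complex)        # the |+...+> state
for gamma in np.linspace(0, np.pi, 40):
    for beta in np.linspace(0, np.pi, 40):
        psi = np.exp(-1j * gamma * cut) * plus          # diagonal phase e^{-i*gamma*C}
        psi = mixer(beta) @ psi                         # mixer e^{-i*beta*sum X_i}
        p_opt = np.sum(np.abs(psi[cut == cut.max()])**2)
        best = max(best, p_opt)
print(f"best p=1 success probability: {best:.3f}")
```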
The authors of [330] concluded that while no existing quantum processors can outperform classical optimization heuristics, applying popular methods such as the QAOA to prototypical problems can serve as a benchmark for comparing various hardware platforms. For quantum optimization to compete with classical methods on real-world problems, it is necessary to push beyond contrived problems at low circuit depth.

Figure 12: The ratio of the found solution to the best solution when using QAOA for various problem sizes \(N\). Each solution is averaged across ten random instances (the standard deviation is given as error bars). The experimental solutions of the SK and Max-Cut models approach random guessing as \(N\) increases. The figure is taken from [330].

## VII.5 Quantum effects and optical machines

As we can see, various optical hardware platforms use different mechanisms for their operation. The primary mechanism can be purely classical, purely quantum, or hybrid in nature. Even when operating near the classical limit, quantum effects can be essential and can greatly influence the actual operation regime. For example, it was shown that the nonlinear stochastic dynamics of the CIM in the presence of quantum noise can be efficiently exploited to sample degenerate ground and low-energy spin configurations of the Ising model, on the example of Max-Cut problems [331]. Both quantum noise and optical nonlinearities play an essential role in the system dynamics: removing these essential elements results in the degradation of the sampling performance. Supplementary numerical results beyond what classical simulation can reproduce complement the description of the role of quantum mechanisms in the CIM operation. Another work [152] studies the performance scaling of three quantum approaches to combinatorial optimization: the CIM, discrete adiabatic quantum computation, and the Dürr-Høyer algorithm for quantum minimum finding, which is based on Grover's search. The authors claim that the CIM performance is dramatically better for solving Max-Cut problems. Moreover, the CIM is competitive against various heuristic solvers implemented on CPUs, GPUs, and FPGAs. Many optical devices fall under the category of open quantum systems, and such a formalism is necessary to account for many complicated effects. For this purpose, a Markovian open quantum systems framework has been developed [332; 333]. The effective dynamics of the reduced density matrix of the system give rise to the Lindblad-form master equation, which allows one to trace such effects as equilibration with the pump and decay processes, thermalisation of the system, and different aspects of the interaction with the environment. Although the numerical methods for such processes are quite complicated, one usually develops approximate schemes that account for the omitted effects. There are many more systems where this approach could be beneficial for describing subtle but essential features; besides the CIM, another example is the exciton-polariton system frequently mentioned before. Furthermore, one should pay attention to other microscopic processes in the EP system, since such a consideration gives more degrees of freedom to inspect compared to the simple mean-field theory [334; 335; 336].

## VIII Final Remarks

### Benchmarking optical machines

The research and development of optical hardware is currently experiencing significant growth. The main problem is comparing the capabilities of optical machines, as they are often tested on different problems of variable sizes and difficulty.
Thus it is hard to extract the scaling properties of a particular mechanism from either experiments or numerical emulation. Another problem lies in biased results, which can be cherry-picked for better demonstrative purposes. In general, it is hard to find extensive, complete, up-to-date and unbiased results comparing different types of optical hardware. Although the majority of the optical NN architectures can be compared using standard metrics, namely the accuracy on particular datasets and the required workload, we outline here what is known and how some of the optical machines can be compared. For example, in [337] comparisons between memristors, a GPU, D-Wave and the CIMs were made using the same set of dense 60-node Max-Cut graphs. The time to solution (with 99% probability of reaching the optimal solution) was \(600\,\mu\)s for the CIM and 1000 s for the D-Wave machine. In [117], it was reported that an Ising machine based on optoelectronic feedback systems solved Max-Cut optimization problems on regular and frustrated graphs with 100 spins, showing similar or better performance compared to CIMs based on DOPOs. Since OEO-based CIMs can be implemented as integrated photonic circuits, the flexible spin coupling can be realized optically with programmable silicon photonic circuits. This will take full advantage of the high bandwidth of the optical system and bring a significant speedup over existing CIM concepts. Establishing universal benchmarks will attract more people, since understanding the hardware's successes and failures on particular problems allows one to maximize its utility. Moreover, this research direction shares issues with the ongoing studies of NN architectures and of phase transitions in statistical approaches to computational problems [98], which makes progress cross-beneficial for all of these domains.

## VIII.2 The most promising applications for optical computing

Our subjective perspective is that modern optical computing has the potential to give a significant computational advantage in three major applied areas: **Neural networks, Nonlinear optimization, and Statistical sampling.** Optical hardware is a promising platform for accelerating these applications, with many computational advantages coming from the hybrid quantum/classical mode of operation. Optics naturally supports these tasks and also benefits from many more factors, such as specific architectures and their interplay with the natural properties of light. For example, mode selection is one beneficial regime of operation: a quantum system spans a high-dimensional space of possible solutions and finds an optimal one by settling into the first available coherent state with a large occupation. Another component is classical dynamical-system behaviour that can mimic NN dynamics, following classical gradient dynamics on a changing energy landscape while tunnelling through barriers to the nearest energy minimum; the task is achieved if this minimum corresponds to the optimal solution of the problem. Finally, a similar mechanism is responsible for sampling the low-energy subspace of the landscape.

### Future perspectives

The 'no free lunch' (NFL) theorem in optimisation states that any two optimisation algorithms have the same performance when averaged across all possible problem instances. Applied to the hardware instead of the algorithms, the theorem carries many more implications.
One of the consequences of the NFL theorem is the correspondence between the solver/hardware structure and the hardness of the problem, with best-case and worst-case scenarios. To use optical spin machines to speed up the solution of specific real-life industrial problems, one needs to think hard about the range of applications, which may go far beyond QUBO. These applications have to closely match the optical machine's operational principle to exploit all of its potential advantages. Many questions need to be addressed before optical spin machines become useful for real-life applications. Which platforms should we use for comparison between different machines? What is the relative importance of quantum-optical vs classical, classical vs classical, hybrid vs classical, and optical vs other physics-based hardware advantages? How does hardware performance compare to the best algorithms run on traditional systems? To answer these questions, we need to introduce a standard for fair comparisons between the machines and approaches. Which section of the workflow is it more advantageous to optimize, and should sections closer to the hardware or closer to the user be prioritised? How do we properly optimise pre-processing and post-processing? How do we evaluate results, and which metric should be used? The proximity between found QUBO solutions can be evaluated using, for instance, the Hamming distance, the distance in energy space, the ratio of the energies, the accuracy of the neural architecture, or some other generalisation of error metrics. How do we evaluate optimisation performance along several important dimensions, e.g. computation time, solution quality, energy efficiency, and input scope? How do we account for overheads with respect to the given architecture and the task-specific constraints? How do we optimize the implementation: translation, embedding, tuning and post-processing? How do we find inputs that are hard and relevant, and avoid using trivial ones? Phase diagrams should be drawn for the parametrised tasks. How do we maximise the generality of the conclusions, and to what extent can we do so if only some combinations of inputs and hardware can be tested? The advantage will come when we (i) develop purpose-built solutions tuned to specific applications, (ii) develop hybrid algorithms and approaches (e.g. including ML as a part of the hybrid solutions) and (iii) leverage programmable accelerators for core tasks. More research is needed to bring the potential of optical (or any other unconventional) computing systems to real-life applications. Answering these critical questions will bring us closer to a better understanding of the underlying principles of unconventional optical machines, improve their performance, and hence achieve a significant practical impact.

###### Acknowledgements.

N.G.B. thanks the Julian Schwinger Foundation grant JSF-19-02-0005 for the financial support.
2303.10207
Generalized Differential and Integral Calculus and Heisenberg Uncertainty Principle
This paper presents a generalization for Differential and Integral Calculus. Just as the derivative is the instantaneous angular coefficient of the tangent line to a function, the generalized derivative is the instantaneous parameter value of a reference function (derivator function) tangent to the function. The generalized integral reverses the generalized derivative, and its calculation is presented without antiderivatives. Generalized derivatives and integrals are presented for polynomial, exponential and trigonometric derivators and integrators functions. As an example of the application of Generalized Calculus, the concept of instantaneous value provided by the derivative is used to precisely determine time and frequency (or position and momentum) in a function (signal or wave function), opposing Heisenberg's Uncertainty Principle.
Fernando Marques de Almeida Nogueira
2023-02-10T19:55:26Z
http://arxiv.org/abs/2303.10207v1
# Calculus ++

###### Abstract

This paper presents a generalization for Differential and Integral Calculus. Just as the derivative is the instantaneous angular coefficient of the tangent line to a function, the generalized derivative is the instantaneous parameter value of a reference function (derivator function) tangent to the function. The generalized integral reverses the generalized derivative, and its calculation is presented without antiderivatives. Generalized derivatives and integrals are presented for polynomial, exponential and trigonometric derivator and integrator functions. As an example of the application of Generalized Calculus, the concept of instantaneous value provided by the derivative is used to precisely determine time and frequency (or position and momentum) in a function (signal or wave function), opposing Heisenberg's Uncertainty Principle.

**Keywords:** Differential and Integral Calculus, Instantaneous Frequency, Heisenberg's Uncertainty Principle

## 1 Introduction

Differential and Integral Calculus has become one of the main mathematical tools that made possible discoveries and advances in several areas such as Physics, Chemistry, Economics, Computer Science, Engineering, and even Biology and Medicine. Moreover, in Mathematics itself, Differential and Integral Calculus is used in other areas, such as Linear Algebra, Analytical Geometry, Probability, and Optimization, among others. Differential and Integral Calculus was developed by Isaac Newton [1] (1643-1727) and Gottfried Wilhelm Leibniz [2] (1646-1716), independently of each other, in the 17th century, and it basically established three operations applicable to any function: the calculus of limits, derivatives, and integrals. The derivative concerns the instantaneous rate of change of a function. On the other hand, the integral concerns the area under the curve described by a function. Both the derivative and the integral are based on the calculus of infinitesimals through the concept of limit, and the Fundamental Theorem of Calculus formalizes the inverse-operations relationship between Differential and Integral Calculus. The derivative is an operation performed on any function \(f(x)\)1, resulting in another function \(f^{\prime}(x)\) that represents the slope of the tangent line to \(f(x)\) for each \(x\). Differential Calculus uses the line as the "reference function" and its slope as the result of the derivative.

Footnote 1: \(f(x)\) is formally defined in the remaining sections.

**Remark**: _Why use only the line as the reference function and its slope as the result of the derivative?_

This paper presents the derivative performed for other reference functions different from the line, and for other parameters different from the slope of the line, thus generalizing Differential Calculus. Since the derivative and the integral are inverse operations, the same generalization concept employed for Differential Calculus is applied to Integral Calculus.

### The Derivative, its Generalization and the Antiderivative

The derivative of a function can be understood as a linear interpolation process. Let \(\mathbb{I}\) be a non-empty open interval, \(f:\mathbb{I}\to\mathbb{R}\) a function, \(y=f(x)\), \(\mathbb{I}\subseteq\mathbb{R}\), \(x_{0}\in\mathbb{I}\) and \(\Delta\in\mathbb{R}\), as illustrated in figure 1.
Two points determine a line: from the points \((x_{0},f(x_{0}))\) and \((x_{0}+\Delta,f(x_{0}+\Delta))\) it is possible to calculate the angular \((a_{1})\) and linear \((a_{0})\) coefficients of the linear equation \(y=a_{1}x+a_{0}\) secant to the graph of the function \(f(x)\). This calculation is obtained by solving the following linear system:

\[S:\left\{\begin{array}{l}f(x_{0})=a_{1}x_{0}+a_{0}\\ f(x_{0}+\Delta)=a_{1}(x_{0}+\Delta)+a_{0}\end{array}\right. \tag{1}\]

The resolution of (1) is:

\[a_{1}=\frac{f(x_{0}+\Delta)-f(x_{0})}{\Delta} \tag{2}\]

\[a_{0}=\frac{f(x_{0})(x_{0}+\Delta)-f(x_{0}+\Delta)x_{0}}{\Delta} \tag{3}\]

In differential calculus, the angular coefficient \((a_{1})\) in (2) is known as Newton's Difference Quotient. For small \(\Delta\) values, the linear equation \(y=a_{1}x+a_{0}\) will be practically tangential to the graph of the function \(f(x)\) near the point \(x_{0}\), and in the limit \(\Delta\to 0\), this line will be tangential to the graph of \(f(x)\) at the point \(x_{0}\). Applying the limit \(\Delta\to 0\) in (1) gives:

\[S:\left\{\begin{array}{l}f(x_{0})=a_{1}x_{0}+a_{0}\\ f(x_{0}+\Delta\to 0)=a_{1}(x_{0}+\Delta\to 0)+a_{0}\end{array}\right. \tag{4}\]

And its resolution is:

\[a_{1}\mid_{x_{0}}=\lim_{\Delta\to 0}\frac{f(x_{0}+\Delta)-f(x_{0})}{\Delta} \tag{5}\]

\[a_{0}\mid_{x_{0}}=\lim_{\Delta\to 0}\frac{f(x_{0})(x_{0}+\Delta)-f(x_{0}+\Delta)x_{0}}{\Delta} \tag{6}\]

In (5), the value of \(a_{1}\mid_{x_{0}}\) is the value of the derivative of \(f(x)\) at the point \(x_{0}\). The value of \(a_{0}\mid_{x_{0}}\) in (6) **is not used** in traditional differential and integral calculus.

Figure 1: Secant line (red) passing through the points \((x_{0},f(x_{0}))\) and \((x_{0}+\Delta,f(x_{0}+\Delta))\) (left figure) and the tangent line at the point \((x_{0},f(x_{0}))\) (right figure) to the function (black).

Generalizing to any point \(x\) in the domain, with \(a_{1}^{ins}:\mathbb{I}\rightarrow\mathbb{R}\) the instantaneous \(a_{1}\) function, the derivative of \(f(x)\) is:

\[\frac{df(x)}{dx}=a_{1}^{ins}(x)=\lim_{\Delta\to 0}\frac{f(x+\Delta)-f(x)}{\Delta} \tag{7}\]

**Remark**: _Therefore, the derivative of a function is the application of the limit \(\Delta\to 0\) to Newton's Difference Quotient._

This process uses a linear procedure to determine the **slope** (angular coefficient) of the equation of the tangent line to any function at a given point. This linear procedure is simply a linear interpolation, or a regression to the linear equation, for two infinitesimally close points belonging to \(f(x)\). However, similarly to (7), from (6), with \(a_{0}^{ins}:\mathbb{I}\rightarrow\mathbb{R}\) the instantaneous \(a_{0}\) function, one can write:

\[\mathfrak{D}\{\}=a_{1}x+a_{0}\]

\[\mathfrak{D}\{a_{0}\}\frac{df(x)}{dx}=a_{0}^{ins}(x)=\lim_{\Delta\to 0}\frac{f(x)(x+\Delta)-f(x+\Delta)x}{\Delta} \tag{8}\]

where \(\mathfrak{D}\{\}\) is the tangent function to \(f(x)\) on which the derivative is defined (the line equation in this case, as in the "classical derivative", but it could be any other function), and \(\mathfrak{D}\{a_{0}\}\frac{df(x)}{dx}\) indicates under which parameter of \(\mathfrak{D}\{\}\) the derivative \(\frac{df(x)}{dx}\) is defined.
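As a quick numerical sanity check of (7)-(8) (our sketch; it uses the sample function \(f(x)=x^{2}+2x+3\), which reappears later in (26)-(28)), one can solve the two-point system (1) at a small finite \(\Delta\):

```python
def f(x):
    return x**2 + 2*x + 3

def tangent_line_params(f, x, delta=1e-6):
    # Solve the two-point secant system (1), i.e. evaluate (2) and (3) at small delta.
    a1 = (f(x + delta) - f(x)) / delta                    # Newton's Difference Quotient (2)
    a0 = (f(x) * (x + delta) - f(x + delta) * x) / delta  # linear coefficient (3)
    return a1, a0

x = 1.5
a1, a0 = tangent_line_params(f, x)
print(f"a1 ~ {a1:.4f}  (exact 2x+2   = {2*x + 2})")
print(f"a0 ~ {a0:.4f}  (exact -x^2+3 = {3 - x**2})")
# The tangent line reproduces f at the point of tangency:
print(f"a1*x + a0 = {a1*x + a0:.4f}, f(x) = {f(x)}")
```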
In this context, analogously to (8), the generalized notation for the "classical derivative" (7) is:

\[\mathfrak{D}\{\}=a_{1}x+a_{0}\]

\[\mathfrak{D}\{a_{1}\}\frac{df(x)}{dx}=a_{1}^{ins}(x)=\lim_{\Delta\to 0}\frac{f(x+\Delta)-f(x)}{\Delta} \tag{9}\]

The reasoning used in (1) to (9) can be generalized to other functions (and not just the linear equation) and their respective parameters. This concept can also be applied to the integral of a function. For example, the notation for the inverse operation of (9) is:

\[\mathfrak{I}\{\}=a_{1}x+a_{0} \tag{10}\]

\[F(x)=\mathfrak{I}\{a_{1}\}\int f(x)dx\]

where \(\mathfrak{I}\{\}\) is the tangent function to \(F:\mathbb{I}\rightarrow\mathbb{R}\) on which the integral is defined, and \(\mathfrak{I}\{a_{1}\}\int f(x)dx\) indicates under which parameter of \(\mathfrak{I}\{\}\) the integral of \(f(x)\) is defined. As a result, the following concepts can be defined:

* **Derivator Function** is the function \(\mathfrak{D}\{\}\) that is used in the interpolation process (the classic derivative uses the linear equation as Derivator Function).
* **Derivator Parameter** is the parameter of interest \(p_{k}\) of \(\mathfrak{D}\{\}\), represented by \(\mathfrak{D}\{p_{k}\}\), where \(p_{k}\in\mathfrak{P}\), \(\mathfrak{P}=\{p_{0},p_{1},p_{2},...,p_{N-1}\}\) is the set of \(N\in\mathbb{N}\) parameters of \(\mathfrak{D}\{\}\), \(k\in\mathbb{N}\) and \(k<N\) (the classic derivative uses the angular coefficient \(a_{1}\) as Derivator Parameter).
* **Integrator Function** is the function \(\mathfrak{I}\{\}\) that is used in the process of obtaining the primitive function (the classic integral uses the linear equation as Integrator Function).
* **Integrator Parameter** is the parameter of interest \(p_{k}\) of \(\mathfrak{I}\{\}\), represented by \(\mathfrak{I}\{p_{k}\}\), where \(p_{k}\in\mathfrak{P}\), \(\mathfrak{P}=\{p_{0},p_{1},p_{2},...,p_{N-1}\}\) is the set of \(N\in\mathbb{N}\) parameters of \(\mathfrak{I}\{\}\), \(k\in\mathbb{N}\) and \(k<N\) (the classic integral uses the angular coefficient \(a_{1}\) as Integrator Parameter).

Figure 2 illustrates the names of the functions and operations involved in Generalized Differential and Integral Calculus.

Figure 2: Functions and operations involved in Differential and Integral Calculus.

## 2 Background

Different forms of the derivative have already been established. These forms use concepts different from the foundation employed for the generalization of Differential and Integral Calculus presented in this article.

* Symmetric Derivative [3]

A simple variant of the "classical derivative" is the Symmetric Derivative, which uses Newton's Difference Quotient in a symmetrical form. The Symmetric Derivative \(f_{S}^{\prime}\) is defined as:

\[f_{S}^{\prime}=\lim_{\Delta\to 0}\frac{f(x+\Delta)-f(x-\Delta)}{2\Delta} \tag{11}\]

Although the Symmetric Derivative uses a different form of Newton's Difference Quotient, the derivative function can still be understood as the slope of the tangent line to the function \(f(x),\forall x\). In this form, the Symmetric Derivative is a different manner of defining the "classical derivative".

* Fréchet Derivative [4]

Given normed vector spaces \(\mathbb{V}\) and \(\mathbb{U}\), \(\mathbb{W}\subseteq\mathbb{V}\), let \(f:\mathbb{W}\rightarrow\mathbb{U}\) be a function Fréchet differentiable at \(x\in\mathbb{V}\).
If there is a bounded linear operator \(A:\mathbb{V}\rightarrow\mathbb{U}\) such that:

\[\lim_{\Delta\to 0}\frac{\parallel f(x+\Delta)-f(x)-A\Delta\parallel_{\mathbb{U}}}{\parallel\Delta\parallel_{\mathbb{V}}}=0 \tag{12}\]

then \(A\) is the derivative of \(f\) at \(x\). The Fréchet Derivative is used on vector-valued functions of multiple real variables and to define the Functional Derivative, generalizing the derivative of a real-valued function of a single real variable.

* Functional Derivative [4]

Another form of derivative is the Functional Derivative. Given \(\mathbb{V}\) a vector (function) space, \(\mathbb{K}\) a field, \(F\) a functional, \(F:\mathbb{V}\rightarrow\mathbb{K}\), \(f\in\mathbb{V}\), and \(\zeta\) an arbitrary function, the Functional Derivative of \(F\) at \(f\), \(\frac{d(F)}{d(f)}\), is:

\[\int\frac{d(F)}{d(f)}(x)\zeta(x)dx=\lim_{\Delta\to 0}\frac{F(f+\Delta\zeta)-F(f)}{\Delta} \tag{13}\]

In this case, the concept of the derivative is applied to a functional and not to a function. In this paper, the concept of the derivative is generalized to functions.

* Fractional Derivative [5]

The derivative can be repeated \(n\) times over a function, resulting in the derivative's order. Thus, the order of the derivative is clearly a natural number (\(n\in\mathbb{N}\)). The fractional derivative generalizes the concept of derivative order so that the order \(\alpha\) of the fractional derivative is \(\alpha\in\mathbb{R}\) or even \(\alpha\in\mathbb{C}\). It is then possible, under this generalization, to calculate the derivative of \(f(x)\) of order \(\alpha=2.5\) or \(\alpha=-1\) (the integral of \(f(x)\)), for example. In this paper, the derivative is generalized to functions and not to the order of the derivative.

* q-Derivative [6]

The q-Derivative of a function \(f(x)\) is a q-analog of the "classic derivative". Let \(q\in\mathbb{R}\); it is given by:

\[\left(\frac{d}{dx}\right)_{q}f(x)=\frac{f(qx)-f(x)}{qx-x} \tag{14}\]

For \(q\to 1\), the q-Derivative is the "classic derivative".

* Arithmetic Derivative [7]

Let \(a,b\in\mathbb{N}\) and \(p\) a prime number; the arithmetic derivative \(D(\cdot)\) is such that:

\[\begin{array}{l}D(0)=D(1)=0\\ D(p)=1,\forall\text{ prime }p\\ D(a\cdot b)=D(a)b+aD(b)\end{array} \tag{15}\]

The Arithmetic Derivative is a "number derivative" based on prime factorization. The arithmetic derivative can be extended to rational numbers. Other forms of derivatives include:

* Carlitz derivative [8]
* Covariant derivative [9]
* Dini derivative [10]
* Exterior derivative [11]
* Gateaux derivative [12]
* H derivative [13]
* Hasse derivative [14]
* Lie derivative [15]
* Pincherle derivative [16]
* Quaternionic derivative [17]
* Radon-Nikodym derivative [18]
* Semi-differentiability [19]
* Subderivative [20]
* Weak derivative [21]

All of these forms use concepts different from the foundation employed for the generalization of Differential and Integral Calculus presented in this article.

## 3 Polynomial Derivator Functions

Let \(n\in\mathbb{N}\) and \(a_{i}\in\mathbb{R}\), \(\forall i\in\mathbb{N}\). \(P:\mathbb{R}\rightarrow\mathbb{R}\) is a Polynomial Function if:

\[P(x)=a_{n}x^{n}+a_{n-1}x^{(n-1)}+....+a_{1}x^{1}+a_{0}x^{0}=\sum_{i=0}^{n}a_{i}x^{i} \tag{16}\]

In (16), the value of \(n\) defines the degree of the polynomial. For \(n=1\), the polynomial is a linear equation, and two points are needed to define its parameters (as in (1)).
For \(n=2\) and \(n=3\), the polynomial is a quadratic (parabola) and a cubic function, and \(3\) and \(4\) points are needed to define their parameters, respectively. For other degrees of the polynomial, the reasoning is analogous; therefore, \(n+1\) points are necessary to define the parameters of a polynomial of degree \(n\). Using a polynomial of degree 2 (\(n=2\)) as derivator function, the derivative becomes an interpolation process to the quadratic function for three infinitesimally close points belonging to \(f(x)\), resulting in the **Parabolic Derivative**, as shown in figure 3. The system is:

\[S:\left\{\begin{array}{l}f(x_{0})=a_{2}x_{0}^{2}+a_{1}x_{0}+a_{0}\\ f(x_{0}+\Delta)=a_{2}(x_{0}+\Delta)^{2}+a_{1}(x_{0}+\Delta)+a_{0}\\ f(x_{0}+2\Delta)=a_{2}(x_{0}+2\Delta)^{2}+a_{1}(x_{0}+2\Delta)+a_{0}\end{array}\right. \tag{17}\]

Figure 3: Secant parabola (red) passing through the points \(x_{0}\), \(x_{0}+\Delta\) and \(x_{0}+2\Delta\) (left figure) and tangent parabola at the point \(x_{0}\) (right figure) to the function (black).

Generalizing to any point \(x\) in the domain, with \(a_{0}^{ins}:\mathbb{I}\rightarrow\mathbb{R}\), \(a_{1}^{ins}:\mathbb{I}\rightarrow\mathbb{R}\) and \(a_{2}^{ins}:\mathbb{I}\rightarrow\mathbb{R}\) the instantaneous \(a_{0}\), \(a_{1}\) and \(a_{2}\) functions, and applying the limit \(\Delta\to 0\), the resolution of (17) is:

\[\mathfrak{D}\{\}=a_{2}x^{2}+a_{1}x+a_{0} \tag{18}\]

\[\mathfrak{D}\{a_{2}\}\frac{df(x)}{dx}=a_{2}^{ins}(x)=\lim_{\Delta\to 0}\frac{1}{2}\frac{f(x)-2f(x+\Delta)+f(x+2\Delta)}{\Delta^{2}} \tag{19}\]

\[\mathfrak{D}\{a_{1}\}\frac{df(x)}{dx}=a_{1}^{ins}(x)=\lim_{\Delta\to 0}-\frac{1}{2}\frac{K_{0}f(x)-K_{1}f(x+\Delta)+K_{2}f(x+2\Delta)}{\Delta^{2}} \tag{20}\]

where

\[K_{0}=2x+3\Delta\]
\[K_{1}=4x+4\Delta\]
\[K_{2}=2x+\Delta\]

\[\mathfrak{D}\{a_{0}\}\frac{df(x)}{dx}=a_{0}^{ins}(x)=\lim_{\Delta\to 0}\frac{1}{2}\frac{K_{0}f(x)+K_{1}f(x+\Delta)+K_{2}f(x+2\Delta)}{\Delta^{2}} \tag{21}\]

where

\[K_{0}=2\Delta^{2}+x^{2}+3x\Delta\]
\[K_{1}=-2x^{2}-4x\Delta\]
\[K_{2}=x^{2}+x\Delta\]

In this form, (19), (20) and (21) define the Parabolic Derivative with respect to the \(a_{2}\), \(a_{1}\) and \(a_{0}\) parameters, respectively. For polynomials of other degrees, the procedure is similar to that performed in (17). For example, the generalized polynomial derivative (polynomial derivator function) of \(f(x)=cx^{m}\), \(c,m\in\mathbb{R}\), for the highest-degree parameter \(n\in\mathbb{N}^{*}\) of the polynomial derivator function is:

\[\mathfrak{D}\{\}=a_{n}x^{n}+a_{n-1}x^{(n-1)}+...+a_{1}x^{1}+a_{0}x^{0} \tag{22}\]

\[\mathfrak{D}\{a_{n}\}\frac{d(cx^{m})}{dx}=c\left(\prod_{i=0}^{n-1}\frac{m-i}{i+1}\right)x^{(m-n)}=\frac{c}{n!}\left(\prod_{i=0}^{n-1}(m-i)\right)x^{(m-n)} \tag{23}\]

The Antiderivative of (23) is:

\[\mathfrak{I}\{\}=\mathfrak{D}\{\}=a_{n}x^{n}+a_{n-1}x^{(n-1)}+...+a_{1}x^{1}+a_{0}x^{0} \tag{24}\]

\[\mathfrak{I}\{a_{n}\}\int cx^{m}dx=\left(\frac{cx^{(m+n)}}{(m+n)\prod_{i=1}^{n-1}\frac{m+i}{i+1}}\right),\forall(m+n)\neq 0\wedge n\geq 2 \tag{25}\]

\[\mathfrak{I}\{a_{1}\}\int cx^{m}dx=\left(\frac{cx^{(m+1)}}{(m+1)}\right),m\neq-1\]

For other functions \(f(x)\) and/or other derivator parameters, the reasoning is analogous to (17), (23) and (25). A numerical check of the parabolic derivative is sketched below.
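As referenced above, here is a numerical check of (19)-(21) (our sketch, reusing the running example \(f(x)=x^{2}+2x+3\) from (26)): since \(f\) is itself a parabola, the tangent parabola coincides with \(f\), so the three instantaneous parameters should come out constant, \(a_{2}=1\), \(a_{1}=2\), \(a_{0}=3\), at every \(x\).

```python
def f(x):
    return x**2 + 2*x + 3

def parabolic_derivatives(f, x, d=1e-4):
    # Evaluate the finite-Delta expressions behind (19)-(21); they are exact
    # (up to float rounding) when f is a parabola.
    f0, f1, f2 = f(x), f(x + d), f(x + 2*d)
    a2 = 0.5 * (f0 - 2*f1 + f2) / d**2                                    # (19)
    a1 = -0.5 * ((2*x + 3*d)*f0 - (4*x + 4*d)*f1 + (2*x + d)*f2) / d**2   # (20)
    a0 = 0.5 * ((2*d**2 + x**2 + 3*x*d)*f0 + (-2*x**2 - 4*x*d)*f1
                + (x**2 + x*d)*f2) / d**2                                 # (21)
    return a2, a1, a0

for x in (0.0, 1.0, -2.5):
    a2, a1, a0 = parabolic_derivatives(f, x)
    print(f"x = {x:5.1f}: a2 ~ {a2:.4f}, a1 ~ {a1:.4f}, a0 ~ {a0:.4f}")  # 1, 2, 3
```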
### Vanishing Terms and Primitives

The "classic derivative" uses the linear equation as the derivator function and the angular coefficient as the derivator parameter (as in (9)). However, the derivative in this form does not model the linear coefficient of the derivator function, and therefore this term, if it exists in the derivand \(f(x)\), "vanishes" for the differential operator, not influencing the derivative. Using the linear equation as the derivator function and the linear coefficient as the derivator parameter, as in (8), the derivative in this form does not model the first-degree term (angular coefficient) of the derivator function; therefore, this term "vanishes" for the differential operator, not influencing the derivative. The following example is suitable for showing this case. Considering:

\[f(x)=x^{2}+2x+3 \tag{26}\]

for \(\mathfrak{D}\{\}=a_{1}x+a_{0}\), the derivatives are:

\[\mathfrak{D}\{a_{1}\}\frac{df(x)}{dx}=2x+2 \tag{27}\]

and

\[\mathfrak{D}\{a_{0}\}\frac{df(x)}{dx}=-x^{2}+3 \tag{28}\]

The terms \(+3\) and \(2x\) in (26) vanish in (27) and (28), respectively. For \(\mathfrak{I}\{\}=a_{1}x+a_{0}\), the antiderivatives of (27) and (28) are, respectively:

\[\mathfrak{I}\{a_{1}\}\int(2x+2)dx=x^{2}+2x \tag{29}\]

and

\[\mathfrak{I}\{a_{0}\}\int(-x^{2}+3)dx=x^{2}+3 \tag{30}\]

Since (27) and (28) do not model the terms \(a_{0}\) and \(a_{1}x\), respectively, the antiderivatives (29) and (30) do not return (26) and must be completed with the terms \(k_{0}\) and \(k_{1}x\) (with \(k_{0}\) and \(k_{1}\) constants):

\[\mathfrak{I}\{a_{1}\}\int(2x+2)dx=x^{2}+2x+k_{0} \tag{31}\]

and

\[\mathfrak{I}\{a_{0}\}\int(-x^{2}+3)dx=x^{2}+k_{1}x+3 \tag{32}\]

The addition of the terms \(k_{0}\) and \(k_{1}x\) in (31) and (32) is necessary because, **independently of \(k_{0}\) and \(k_{1}x\)**, the derivatives are the same:

\[\mathfrak{D}\{a_{1}\}\frac{d(x^{2}+2x+k_{0})}{dx}=2x+2 \tag{33}\]

and

\[\mathfrak{D}\{a_{0}\}\frac{d(x^{2}+k_{1}x+3)}{dx}=-x^{2}+3 \tag{34}\]

### Integrals without Antiderivatives

The Fundamental Theorem of Calculus (FTC) [22] establishes the relationship between differential calculus and integral calculus as **inverse operations** (with reservations). The FTC is divided into two parts. Part 1 shows that the derivative of the integral of \(f(x)\) is equal to \(f(x)\): this is perfect! Part 2 reverses the order; that is, the integral of the derivative of \(f(x)\) is equal to \(f(x)\), however, plus a constant \(k\), that is, \(f(x)+k\): this is perfect too, but the exact return to the function \(f(x)\) does not occur when the derivative is performed first and then the integral. Thus, in formal terms, the FTC states that the operations of derivation and integration are inverse, apart from a constant value.

\[\frac{d\int f(x)dx}{dx}=f(x)\neq\int\frac{df(x)}{dx}dx=f(x)+k \tag{35}\]

This problem occurs simply because the "classic derivative" only gives the instantaneous rate of change of a function over its domain. This rate, as seen in (5), is the angular coefficient when the linear equation is used as derivator function. Obviously, the linear equation cannot be defined by its angular coefficient \(a_{1}\) alone: the linear coefficient \(a_{0}\) also needs to be defined for the linear equation to be complete. The derivation process is carried out by applying the concept of limit to Newton's quotient. On the other hand, the integration process does not have a specific constructive form and is obtained, in practice, through the calculation of antiderivatives.
Nonetheless, a function can be defined by applying the integrator function \(\mathfrak{I}\{\}\) to the generalized derivatives for a given derivator function \(\mathfrak{D}\{\}\) over **all** their respective parameters, with \(\mathfrak{I}\{\}=\mathfrak{D}\{\}\). For the integrator function \(\mathfrak{I}\{\}=a_{1}x+a_{0}\) (linear equation) and \(\mathfrak{I}\{\}=\mathfrak{D}\{\}\), the primitive \(f(x)\) is:

\[f(x)=\mathfrak{D}\{a_{1}\}\frac{df(x)}{dx}x+\mathfrak{D}\{a_{0}\}\frac{df(x)}{dx} \tag{36}\]

It is important to emphasize that \(f(x)\) was obtained from its generalized derivatives without the integration process (antiderivative) conventionally used in "classical integral calculus". For example, from the functions (27) and (28) (the derivatives of \(f(x)=x^{2}+2x+3\)), the primitive \(f(x)\) is:

\[f(x)=(2x+2)x-x^{2}+3=x^{2}+2x+3 \tag{37}\]

The **exact** return to \(f(x)\) is obtained ((37) equals (26)). The concept involved in obtaining the function \(f(x)\) is:

**Theorem**: _Let \(\mathbb{I}\) be a non-empty open interval, \(\mathbb{I}\subseteq\mathbb{C}\), \(h:\mathbb{I}\to\mathbb{C}\) a function, \(y=h(x;\;p_{0},p_{1},p_{2},...,p_{N-1})\) and \(\mathfrak{P}=\{p_{0},p_{1},p_{2},...,p_{N-1}\}\) the set of \(N\in\mathbb{N}\) parameters._

_If \(S\) is a system that has a unique solution for \(N\) points \((x_{k},y_{k})\), \(k\in\mathbb{N}\), \(k\leq N-1\), such as:_

\[S:\left\{\begin{array}{l}y_{0}=h(x_{0};\;p_{0},p_{1},p_{2},...,p_{N-1})\\ y_{1}=h(x_{1};\;p_{0},p_{1},p_{2},...,p_{N-1})\\ y_{2}=h(x_{2};\;p_{0},p_{1},p_{2},...,p_{N-1})\\ ...\\ y_{N-1}=h(x_{N-1};\;p_{0},p_{1},p_{2},...,p_{N-1})\end{array}\right. \tag{38}\]

\(\mathfrak{I}\{\}=\mathfrak{D}\{\}=h(x;\;p_{0},p_{1},p_{2},...,p_{N-1})\)_, and \(f(x)\) is differentiable on \(\mathfrak{D}\{\}\), \(\forall x\in\mathbb{I}\), then \(f(x)\) can be described by \(\mathfrak{I}\{\}\) whose parameters are given by its generalized derivatives in their respective \(N\) parameters, i.e. \(f(x)=h(x;\;\mathfrak{D}\{p_{0}\},\mathfrak{D}\{p_{1}\},\mathfrak{D}\{p_{2}\},...,\mathfrak{D}\{p_{N-1}\})\)._

## 4 Exponential Derivator Functions

Let \(A,a,b,x\in\mathbb{R}\), \(a^{ins}:\mathbb{R}\rightarrow\mathbb{C}\), \(b^{ins}:\mathbb{R}\rightarrow\mathbb{C}\) and \(f:\mathbb{R}\rightarrow\mathbb{R}\) an exponential function:

\[f(x)=Ae^{ax},A>0 \tag{39}\]

Making \(A=e^{b}\), (39) becomes:

\[f(x)=e^{b}e^{ax}=e^{ax+b} \tag{40}\]

The following system can be written:

\[S:\left\{\begin{array}{l}f(x)=e^{ax+b}\\ f(x+\Delta)=e^{a(x+\Delta)+b}\end{array}\right. \tag{41}\]

Solving the system and applying the limit \(\Delta\to 0\) in (41), the Exponential Derivative (the derivator function is exponential) of a function \(f(x)\) becomes:

\[\mathfrak{D}\{\}=e^{ax+b} \tag{42}\]

\[\mathfrak{D}\{a\}\frac{df(x)}{dx}=a^{ins}(x)=\lim_{\Delta\to 0}\frac{ln(f(x+\Delta))-ln(f(x))}{\Delta} \tag{43}\]

\[\mathfrak{D}\{b\}\frac{df(x)}{dx}=b^{ins}(x)=\lim_{\Delta\to 0}\frac{ln(f(x))(x+\Delta)-ln(f(x+\Delta))x}{\Delta} \tag{44}\]

\(f(x)\) can be reconstructed from its exponential derivatives as:

\[f(x)=e^{\mathfrak{D}\{a\}\frac{df(x)}{dx}x+\mathfrak{D}\{b\}\frac{df(x)}{dx}} \tag{45}\]

The function \(e^{-i\omega x}\) (the kernel of the Fourier Transform) is discussed in Section 6.2.
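A quick numerical illustration of (43)-(45) (our sketch, assuming the toy function \(f(x)=2e^{0.5x}\), i.e. \(a=0.5\) and \(b=\ln 2\)):

```python
import math

def f(x):
    return 2.0 * math.exp(0.5 * x)

def exponential_derivatives(f, x, delta=1e-7):
    a_ins = (math.log(f(x + delta)) - math.log(f(x))) / delta                       # (43)
    b_ins = (math.log(f(x)) * (x + delta) - math.log(f(x + delta)) * x) / delta     # (44)
    return a_ins, b_ins

x = 1.2
a_ins, b_ins = exponential_derivatives(f, x)
print(f"a_ins ~ {a_ins:.6f} (expected 0.5)")
print(f"b_ins ~ {b_ins:.6f} (expected ln 2 = {math.log(2):.6f})")
# Reconstruction (45): f(x) = exp(a_ins*x + b_ins)
print(f"exp(a*x + b) = {math.exp(a_ins * x + b_ins):.6f}, f(x) = {f(x):.6f}")
```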
## 5 Trigonometric Derivator Functions

Let \(\omega,\phi\in\mathbb{R}\) be the frequency and phase, respectively, and \(\omega^{ins}:\mathbb{R}\rightarrow\mathbb{C}\), \(\phi^{ins}:\mathbb{R}\rightarrow\mathbb{C}\) the instantaneous frequency and phase, respectively. The following system can be written:

\[S:\left\{\begin{array}{l}f(x)=\sin(\omega x+\phi)\\ f(x+\Delta)=\sin(\omega(x+\Delta)+\phi)\end{array}\right. \tag{46}\]

Solving the system and applying the limit \(\Delta\to 0\) in (46), the sinusoidal derivative (the derivator function is sinusoidal) of a function \(f(x)\) becomes:

\[\mathfrak{D}\{\}=\sin(\omega x+\phi) \tag{47}\]

\[\begin{split}\mathfrak{D}\{\omega\}\frac{df(x)}{dx}=\omega^{ins}(x)=\lim_{\Delta\to 0}\frac{\arcsin(f(x+\Delta))-\arcsin(f(x))}{\Delta}=\\ =\lim_{\Delta\to 0}\frac{\arcsin(f(x+\Delta)\sqrt{1-f(x)^{2}}-f(x)\sqrt{1-f(x+\Delta)^{2}})}{\Delta}\end{split} \tag{48}\]

\[\mathfrak{D}\{\phi\}\frac{df(x)}{dx}=\phi^{ins}(x)=\lim_{\Delta\to 0}\frac{\arcsin(f(x))(x+\Delta)-\arcsin(f(x+\Delta))x}{\Delta} \tag{49}\]

The same can be written for the cosine and tangent functions:

\[\mathfrak{D}\{\}=\cos(\omega x+\phi) \tag{50}\]

\[\begin{split}\mathfrak{D}\{\omega\}\frac{df(x)}{dx}=\omega^{ins}(x)=\lim_{\Delta\to 0}\frac{\arccos(f(x+\Delta))-\arccos(f(x))}{\Delta}=\\ =\lim_{\Delta\to 0}\frac{\arccos(f(x+\Delta)f(x)+\sqrt{(1-f(x+\Delta)^{2})(1-f(x)^{2})})}{\Delta}\end{split} \tag{51}\]

\[\mathfrak{D}\{\phi\}\frac{df(x)}{dx}=\phi^{ins}(x)=\lim_{\Delta\to 0}\frac{\arccos(f(x))(x+\Delta)-\arccos(f(x+\Delta))x}{\Delta} \tag{52}\]

\[\mathfrak{D}\{\}=\tan(\omega x+\phi) \tag{53}\]

\[\begin{split}\mathfrak{D}\{\omega\}\frac{df(x)}{dx}=\omega^{ins}(x)&=\lim_{\Delta\to 0}\frac{\arctan(f(x+\Delta))-\arctan(f(x))}{\Delta}=\\ &=\lim_{\Delta\to 0}\frac{\arctan\left(\frac{f(x+\Delta)-f(x)}{1+f(x+\Delta)f(x)}\right)}{\Delta}\end{split} \tag{54}\]

\[\mathfrak{D}\{\phi\}\frac{df(x)}{dx}=\phi^{ins}(x)=\lim_{\Delta\to 0}\frac{\arctan(f(x))(x+\Delta)-\arctan(f(x+\Delta))x}{\Delta} \tag{55}\]

If \(f(x)\in[-1,1]\), then (48) to (52) \(\in\mathbb{R}\); otherwise (48) to (52) \(\in\mathbb{C}\). \(f(x)\) can be reconstructed from its sinusoidal, cosinusoidal and tangential derivatives, respectively, as:

\[f(x)=\sin\left(\mathfrak{D}\{\omega\}\frac{df(x)}{dx}x+\mathfrak{D}\{\phi\}\frac{df(x)}{dx}\right) \tag{56}\]

\[f(x)=\cos\left(\mathfrak{D}\{\omega\}\frac{df(x)}{dx}x+\mathfrak{D}\{\phi\}\frac{df(x)}{dx}\right) \tag{57}\]

\[f(x)=\tan\left(\mathfrak{D}\{\omega\}\frac{df(x)}{dx}x+\mathfrak{D}\{\phi\}\frac{df(x)}{dx}\right) \tag{58}\]

## 6 Instantaneous Frequency and Heisenberg's Uncertainty Principle

Let \(\Omega(x):\mathbb{R}\rightarrow\mathbb{R}\) be a phase function [23], and \(\varphi(x,\omega)\) a waveform (signal, Wave Function [24]) given by:

\[\varphi(x,\omega)=\sin(\Omega(x)) \tag{59}\]

If \(\Omega(x)\) is known, the determination of the instantaneous frequency \(\omega(x)\) presents no difficulty: it is given by

\[\omega(x)=\frac{d\Omega(x)}{dx} \tag{60}\]

However, in many real applications \(\Omega(x)\) is not known, but only the waveform \(\varphi(x,\omega)\); then, determining \(\omega(x)\) or \(x(\omega)\) precisely from \(\varphi(x,\omega)\) is not a possible task, according to Heisenberg's Uncertainty Principle [25], [26]. Heisenberg's Uncertainty Principle was first proposed for Quantum Mechanics [27].
However, it is used to demonstrate that there is a limit to the accuracy with which a pair of canonically conjugate variables [28] in phase space, e.g. \((x,\omega)\) or \((x,p)\), where \(p\) is the momentum, can be measured simultaneously.

_"...Thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely..."_ _Heisenberg, 1927_

De Broglie's relation [29] establishes the undulatory nature of the particle (matter) by:

\[k=\frac{p}{\hbar} \tag{61}\]

where \(k\) is the wavenumber (spatial frequency), \(p\) is the momentum and \(\hbar\) is the reduced Planck's constant. Thus, one can understand that determining the momentum \(p\) as a function of the position \(x\) is equivalent to determining the wavenumber \(k\) as a function of the position \(x\), or even the (temporal) frequency \(\omega\) as a function of the time \(x\) (\(x\), in this case, is the "position" in time) - the Instantaneous Frequency. The classical mathematical operation that changes the domain of a function from time to frequency (and vice versa) is the Fourier Transform, but a non-zero function \(f(x):\mathbb{R}\to\mathbb{R}\) and its Fourier Transform \(F(\omega):\mathbb{R}\to\mathbb{C}\) cannot both be sharply localized [26]. Figure 4 shows a function \(f(x):\mathbb{R}\to\mathbb{R}\) (a sinusoidal wave with frequency equal to 2 Hz) and \(|F(\omega)|:\mathbb{R}\to\mathbb{R}\) in the \(x\) (time) and \(\omega\) (frequency) domains. \(F(\omega)\) has no information about \(x\), and \(f(x)\) has no information about \(\omega\).

Figure 4: Function in time and frequency domain.

A Window Function \(g(x):\mathbb{R}\to\mathbb{R}\) that is "well localized" in time is used to localize the frequency in time. Figure 5 shows the wide (above) and narrow (below) window function and its respective Fourier Transform Magnitude \(|G(\omega)|:\mathbb{R}\to\mathbb{R}\) (narrow (above) and wide (below)).

Figure 5: Window Functions in time and frequency domain.

The function \(f(x)\) is multiplied by the Window Function \(g(x)\). Figure 6 shows the wide (above) and narrow (below) Windowed Function and its respective Fourier Transform Magnitude \(|F(\omega)*G(\omega)|\) (narrow (above) and wide (below)).

Figure 6: Windowed Functions in time and frequency domain.

The Windowed Function in the frequency domain should have only one component (at 2 Hz), but it has components at several frequencies with non-zero amplitudes. This fact is due to the Windowed Function in the frequency domain being the result of the convolution of the function with the Window Function in the frequency domain (in an analogous form, one can use the concept of wave packets in order to locate a wave in space [30]). So it is impossible to identify whether a particular frequency component is due to the function or to the Window Function. To measure the frequency as a function of time, it was necessary to "locate" the wave in time using the Window Function. However, this fact goes beyond the classical concept in physics of the observer effect [31], [32], in which, to make a measurement, it is necessary to interfere with the system being measured (which causes uncertainty).
As everything that exists is a wave (the wave nature of matter), Heisenberg's Uncertainty Principle states that uncertainty occurs not only due to the measurement of an experiment (observer effect) but due to the impossibility of localizing a wave sharply in the time and frequency (wavenumber, momentum, among others) domains simultaneously. Analytically, Heisenberg's Uncertainty Principle can be demonstrated considering \(\psi(x)\) and \(\Psi(p)\) wave functions that are Fourier Transforms2 of each other, for the position \(x\) and the momentum \(p\), respectively.

Footnote 2: \(\psi(x)\) and \(\Psi(p)\) are functions in two corresponding orthonormal bases in Hilbert space and, therefore, are Fourier Transforms of each other, and \(x\) and \(p\) are conjugate variables.

Born's rule [33] states that \(|\psi(x)|^{2}\) and \(|\Psi(p)|^{2}\) are probability density functions, and then the variances of position \(\sigma_{x}^{2}\) and momentum \(\sigma_{p}^{2}\) are:

\[\sigma_{x}^{2}=\int_{-\infty}^{\infty}x^{2}|\psi(x)|^{2}\,dx\ =\int_{-\infty}^{\infty}\left(x\psi(x)\right)^{*}x\psi(x)\,dx \tag{62}\]

\[\sigma_{p}^{2}=\int_{-\infty}^{\infty}p^{2}|\Psi(p)|^{2}\,dp\ =\int_{-\infty}^{\infty}\left(p\Psi(p)\right)^{*}p\Psi(p)\,dp \tag{63}\]

Let \(f(x)=x\psi(x)\):

\[\sigma_{x}^{2}=\int_{-\infty}^{\infty}f^{*}(x)f(x)\,dx\ =\int_{-\infty}^{\infty}|f(x)|^{2}\,dx\ =\langle f|f\rangle \tag{64}\]

Let \(\mathcal{F}\left\{.\right\}\) be the Fourier transform, \(-i\hbar\frac{d\psi(x)}{dx}\) the momentum operator in position space, \(G(p)=p\Psi(p)\) and \(g(x)=\mathcal{F}\left\{G(p)\right\}\); applying Parseval's theorem [34]:

\[\sigma_{p}^{2}=\int_{-\infty}^{\infty}G^{*}(p)G(p)\,dp\ =\int_{-\infty}^{\infty}|G(p)|^{2}\,dp\ =\int_{-\infty}^{\infty}|g(x)|^{2}\,dx\ =\langle g|g\rangle \tag{65}\]

Using the Cauchy-Schwarz inequality [35]:

\[\langle f|f\rangle\langle g|g\rangle\geq|\langle f|g\rangle|^{2} \tag{66}\]

\[|\langle f|g\rangle|^{2}\geq\left(\mathrm{Im}\,\langle f|g\rangle\right)^{2}=\left(\frac{\langle f|g\rangle-\langle g|f\rangle}{2i}\right)^{2} \tag{67}\]

\[\begin{split}\langle f|g\rangle&-\langle g|f\rangle=\\ =\int_{-\infty}^{\infty}x\psi^{*}(x)\biggl{(}-i\hbar\frac{d\psi(x)}{dx}\biggr{)}\,dx\ -\int_{-\infty}^{\infty}\biggl{(}-i\hbar\frac{d\psi^{*}(x)}{dx}\biggr{)}x\psi(x)\,dx\ =i\hbar\end{split} \tag{68}\]

Applying (68) in (67):

\[|\langle f|g\rangle|^{2}\geq\biggl{(}\frac{i\hbar}{2i}\biggr{)}^{2}=\frac{\hbar^{2}}{4} \tag{69}\]

Applying (64), (65) and (69) in (66):

\[\sigma_{x}^{2}\sigma_{p}^{2}\geq\frac{\hbar^{2}}{4},\quad\text{i.e.}\quad\sigma_{x}\sigma_{p}\geq\frac{\hbar}{2} \tag{70}\]

The demonstration of the uncertainty principle is strictly mathematical: any pair of conjugate variables will produce the same result as this demonstration. Following Kennard's consideration [36], with \(\Delta x=\sigma_{x}\) the uncertainty in the position \(x\) (proportional to the width of the Window Function in the time or space domain), \(\Delta p=\sigma_{p}\) the uncertainty in the momentum, and \(h\) Planck's constant, Heisenberg's Uncertainty Principle is normally presented as:

\[\Delta x\Delta p\geq\frac{h}{4\pi} \tag{71}\]
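The Gaussian wave packet saturates the bound (70). Here is a numerical check (our sketch, with \(\hbar=1\), \(\sigma=0.7\) and arbitrary grid sizes) using the FFT as a discretised Fourier transform:

```python
import numpy as np

sigma = 0.7
n, L = 4096, 40.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-sigma * x**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize |psi|^2 in x

# Continuum Fourier transform approximated by the FFT (unitary convention);
# only |Psi| is needed, so the linear phase from the grid offset is irrelevant.
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
dp = p[1] - p[0]
Psi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dp)   # guard against discretization error

sx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)  # <x> = 0 by symmetry
sp = np.sqrt(np.sum(p**2 * np.abs(Psi)**2) * dp)  # <p> = 0 by symmetry
print(f"sigma_x * sigma_p = {sx * sp:.6f} (bound in (70): 0.5)")
```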
In the frequency domain, let \(\Delta k\) be the uncertainty in the wavenumber (spatial frequency) or \(\Delta\omega\) the uncertainty in the (temporal) frequency (\(\Delta k\) and \(\Delta\omega\) are proportional to the width of the Window Function in the frequency domain). Through de Broglie's relation (61), (71) can be written as:

\[\Delta x\Delta k\geq\frac{1}{2\pi}\quad\mbox{or}\quad\Delta x\Delta\omega\geq\frac{1}{2\pi} \tag{72}\]

Another way to understand Heisenberg's Uncertainty Principle (and perhaps the simplest) is through the Fourier Transform of the Gaussian function. Let \(f(x)\) be a Gaussian function in the space (time) domain \(x\); then \(F(\omega)\), its Fourier transform in the frequency domain \(\omega\), is also a Gaussian function. The standard deviation \(\sigma\) can be understood as a measure of precision, and it occurs inversely in \(f(x)\) and \(F(\omega)\): if the uncertainty is small in one domain, it is large in the other domain.

\[f(x)=e^{-\sigma x^{2}}\Leftrightarrow F(\omega)=\frac{1}{\sqrt{2\sigma}}e^{-\frac{\omega^{2}}{4\sigma}} \tag{73}\]

A time-frequency representation is used when it is necessary to "localize" \(\omega\) in \(x\) (instantaneous frequency) and vice versa. This representation is also known as a spectrogram, generally obtained through the Short Time Fourier Transform [37] (or other transforms, such as the Wavelet Transform [38], the Wigner-Ville distribution function [39], etc.). A classic example of a signal whose frequency varies with time (a non-stationary signal) is the Chirp Signal [40].

### Chirp Derivative

A Chirp Signal can be defined with the following waveform (signal):

\[f(x)=\sin(2\pi\int\omega(x)dx+\phi), \tag{74}\]

where \(\omega(x)\) is the instantaneous frequency function and \(\Omega(x)=2\pi\int\omega(x)dx+\phi\) is the phase function. The following example is suitable for showing the determination of the instantaneous frequency. Consider a Chirp Signal with instantaneous frequency function given by:

\[\omega(x)=x^{2}+2x+1 \tag{75}\]

For \(\phi=0\), the waveform is:

\[f(x)=\sin\biggl{(}2\pi\biggl{(}\int(x^{2}+2x+1)\,dx\biggr{)}+0\biggr{)}=\sin\biggl{(}2\pi\biggl{(}\frac{x^{3}}{3}+x^{2}+x\biggr{)}\biggr{)} \tag{76}\]

Figure 8 shows the spectrogram for the signal sampled at 100 samples per unit of \(x\) (100 Hz if \(x\) is given in seconds), obtained through the Short Time Fourier Transform with a Gaussian window function (Gabor Transform [37]) and standard deviation equal to 1. The \(|F(x,\omega)|\) values (z-axis) are proportional to the energy in the signal at \((x,\omega)\). For each \(x\) (or \(\omega\)) value there is a range of \(\omega\) (or \(x\)) values for which \(|F(x,\omega)|\) is non-zero. These intervals in the \((x,\omega)\) domain represent the uncertainty in determining the instantaneous frequency in this signal representation. With:

\[\omega(x)=\omega_{1}x+\omega_{0} \tag{77}\]

(78) is a Linear Chirp (or Quadratic Phase Signal), with initial frequency \(\omega_{0}\) (at \(x=0\)) and chirp rate \(\omega_{1}\):

\[f(x)=\sin\!\left(2\pi\!\left(\int(\omega_{1}x+\omega_{0})\,dx\right)+\phi\right)=\sin\!\left(2\pi\!\left(\frac{1}{2}\omega_{1}x^{2}+\omega_{0}x\right)+\phi\right) \tag{78}\]

The resolution of the system

\[S:\left\{\begin{array}{l}f(x)=\sin\!\left(2\pi\!\left(\frac{1}{2}\omega_{1}x^{2}+\omega_{0}x\right)+\phi\right)\\ f(x+\Delta)=\sin\!\left(2\pi\!\left(\frac{1}{2}\omega_{1}(x+\Delta)^{2}+\omega_{0}(x+\Delta)\right)+\phi\right)\\ f(x+2\Delta)=\sin\!\left(2\pi\!\left(\frac{1}{2}\omega_{1}(x+2\Delta)^{2}+\omega_{0}(x+2\Delta)\right)+\phi\right)\end{array}\right.
\tag{79}\]

applying the limit \(\Delta\to 0\), is:

\[\mathfrak{D}\{\omega_{1}\}\frac{df(x)}{dx}=\omega_{1}^{ins}(x)=\lim_{\Delta\to 0}\frac{K_{0}-K_{1}+K_{2}}{2\pi\Delta^{2}} \tag{80}\]

where

\[K_{0}=\arcsin(f(x))\]
\[K_{1}=2\arcsin(f(x+\Delta))\]
\[K_{2}=\arcsin(f(x+2\Delta))\]

\[\mathfrak{D}\{\omega_{0}\}\frac{df(x)}{dx}=\omega_{0}^{ins}(x)=\lim_{\Delta\to 0}-\frac{1}{2}\frac{K_{0}-K_{1}+K_{2}}{2\pi\Delta^{2}} \tag{81}\]

where

\[K_{0}=(2x+3\Delta)\arcsin(f(x))\]
\[K_{1}=(4x+4\Delta)\arcsin(f(x+\Delta))\]
\[K_{2}=(2x+\Delta)\arcsin(f(x+2\Delta))\]

\[\mathfrak{D}\{\phi\}\frac{df(x)}{dx}=\phi(x)=\lim_{\Delta\to 0}\frac{1}{2}\frac{K_{0}+K_{1}+K_{2}}{\Delta^{2}} \tag{82}\]

where

\[K_{0}=(2\Delta^{2}+x^{2}+3x\Delta)\arcsin(f(x))\]
\[K_{1}=(-2x^{2}-4x\Delta)\arcsin(f(x+\Delta))\]
\[K_{2}=(x^{2}+x\Delta)\arcsin(f(x+2\Delta))\]

Figure 8: Short Time Fourier Transform - \(|F(x,\omega)|\).

According to (77), the instantaneous frequency function is obtained as:

\[\omega(x)=\mathfrak{D}\{\omega_{1}\}\frac{df(x)}{dx}x+\mathfrak{D}\{\omega_{0}\}\frac{df(x)}{dx} \tag{83}\]

\(\mathfrak{D}\{\omega_{1}\}\) and \(\mathfrak{D}\{\omega_{0}\}\) (\(\mathfrak{D}\{\phi\}\) is not needed in this example) are:

\[\mathfrak{D}\{\omega_{1}\}\frac{df(x)}{dx}=\frac{2\cos\!\left(\frac{2\,\pi\,x\left(x^{2}+3\,x+3\right)}{3}\right)^{3}\,(x+1)}{\left(\cos\!\left(\frac{2\,\pi\,x\left(x^{2}+3\,x+3\right)}{3}\right)^{2}\right)^{\frac{3}{2}}} \tag{84}\]

\[\mathfrak{D}\{\omega_{0}\}\frac{df(x)}{dx}=-\frac{\cos\!\left(\frac{2\,\pi\,x\left(x^{2}+3\,x+3\right)}{3}\right)^{3}\,\left(x^{2}-1\right)}{\left(\cos\!\left(\frac{2\,\pi\,x\left(x^{2}+3\,x+3\right)}{3}\right)^{2}\right)^{\frac{3}{2}}} \tag{85}\]

Expressions (84) and (85) take positive and negative values, which are therefore associated with positive and negative frequency values. In absolute value, (84) and (85) are, respectively:

\[\left|\mathfrak{D}\{\omega_{1}\}\frac{df(x)}{dx}\right|=|2x+2| \tag{86}\]

\[\left|\mathfrak{D}\{\omega_{0}\}\frac{df(x)}{dx}\right|=|-x^{2}+1| \tag{87}\]

However, negative frequency values can be neglected, and therefore the modulus functions in (86) and (87) can be removed without loss of generality. According to (83), the frequency function \(\omega_{qr}(x)\) can be reconstructed as:

\[\omega_{qr}(x)=(2x+2)x-x^{2}+1=x^{2}+2x+1 \tag{88}\]

It is important to note that (88) is obtained from (76) and not from (60), i.e., the exact instantaneous frequency is obtained from the waveform (wave function or signal) \(f(x)\) and not from the phase function \(\Omega(x)\), and **there is no uncertainty.**

**Remark**: _Heisenberg's Uncertainty Principle was not respected._

From the instantaneous frequency function \(\omega(x)\), the amplitude spectrum \(F(\omega)\) can be calculated as:

\[F(\omega)=\frac{1}{\omega}\int_{-\infty}^{\infty}\omega(x)h(x)dx, \tag{89}\]

with

\[h(x)=\begin{cases}1;&\text{if }\omega(x)=\omega;\\ 0;&\text{otherwise}\end{cases} \tag{90}\]

### Fourier Derivative

An important exponential function is \(e^{-i\omega x}\), with \(i=\sqrt{-1}\) and \(x,\omega\in\mathbb{R}\), which is the kernel of the Fourier Transform [34].
Adding a scaling factor (as in (39)) to this kernel, and proceeding analogously to (41), the **Fourier Derivative** of a function \(f(x)\) is:

\[\mathfrak{D}\{\}=e^{-i\omega x+b} \tag{91}\]

\[\mathfrak{D}\{\omega\}\frac{df(x)}{dx}=\omega(x)=\lim_{\Delta\to 0}\frac{ln(f(x+\Delta))-ln(f(x))}{-i\Delta} \tag{92}\]

\[\mathfrak{D}\{b\}\frac{df(x)}{dx}=b(x)=\lim_{\Delta\to 0}\frac{ln(f(x))(x+\Delta)-ln(f(x+\Delta))x}{\Delta} \tag{93}\]

\(f(x)\) can be reconstructed from its Fourier derivatives as:

\[f(x)=e^{-i\mathfrak{D}\{\omega\}\frac{df(x)}{dx}x+\mathfrak{D}\{b\}\frac{df(x)}{dx}} \tag{94}\]

The parameter \(\omega\) is the frequency in the kernel of the Fourier Transform, and therefore (92) is the instantaneous frequency (\(\omega(x)\in\mathbb{C}\)). Let \(A\in\mathbb{R}\) and a wave function of the type \(\psi(x,\omega)=Ae^{-i\omega x}\). Considering, as an example, \(A=2\), \(\omega(x)=x^{3}+2x\) (the frequency \(\omega\) varies with \(x\), i.e. \(\omega(x)\)) and \(\Omega(x)\) the phase function (as in the Chirp Signal (74)), the wave function \(\psi(x,\omega)\) becomes:

\[\psi(x)=2e^{-i\Omega(x)}=2e^{-i\left(\frac{x^{4}}{4}+x^{2}\right)} \tag{95}\]

where

\[\Omega(x)=\int(x^{3}+2x)dx=\left(\frac{x^{4}}{4}+x^{2}\right) \tag{96}\]

Applying (92) and (93) to (95):

\[\mathfrak{D}\{\omega\}\frac{d\psi(x)}{dx}=\omega(x)=x^{3}+2x \tag{97}\]

\[\mathfrak{D}\{b\}\frac{d\psi(x)}{dx}=b(x)=i\left(\frac{3x^{4}}{4}+x^{2}\right)+\ln 2 \tag{98}\]

Applying (94) to (97) and (98), the wave function \(\psi(x)\) in (95) can be reconstructed as:

\[\psi(x)=e^{-i(x^{3}+2x)x+i\left(\frac{3x^{4}}{4}+x^{2}\right)+\ln 2}=2e^{-i\left(\frac{x^{4}}{4}+x^{2}\right)} \tag{99}\]

**There is no uncertainty.**

**Remark**: _Heisenberg's Uncertainty Principle was not respected._

## 7 Conclusion

In a simplified and summarized manner, Differential Calculus is based on applying a limit tending to zero to Newton's Difference Quotient applied to any function \(f(x)\). This operation determines another function (the derivative) whose values represent the instantaneous angular coefficients of the tangent lines to the function \(f(x)\). This paper showed that Differential and Integral Calculus can be applied to other parameters of other functions, called derivator and integrator functions. All the theory presented can be applied in two or more dimensions (partial derivatives and multiple integrals), as well as to well-established operations of classical differential and integral calculus such as the chain rule, derivatives and integrals of products and quotients, and differential and integral equations, among others; this is suggested as future work. Some examples were presented, with emphasis on the determination of the instantaneous frequency. Although Heisenberg's Uncertainty Principle is formalized as a property of waves, this paper has shown that the uncertainty occurs due to the methodology employed for determining the instantaneous frequency in a function (wave function or signal). Heisenberg's Uncertainty Principle is based on the use of integral transforms (such as the Fourier Transform and similar wave packets) to take a function in the time (or space) domain to its representation in the frequency domain and vice versa. An integral transform is obviously based on the calculation of integrals. Hence, the integral is suitable for measuring general quantities associated with the whole function domain, such as an area, expected value, norm, autocorrelation, and even a frequency distribution (spectral density), but not instantaneous quantities.
Integral transforms (or wave packets) will produce uncertainty in the phase space of canonically conjugate variables. Nevertheless, why use a mathematical operation based on the integral to try to determine instantaneous quantities? In turn, the derivative is suitable for measuring instantaneous quantities of a function. This paper presented a way to obtain the instantaneous frequency of a function given in the time (or space) domain using derivatives (and not integrals). The Fourier, Trigonometric and Chirp Derivatives are examples of different ways to obtain the instantaneous frequency sharply.
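As a closing numerical illustration (ours, not from the original paper), the chirp-derivative formulas (80), (81) and (83) can be evaluated directly on the waveform (76); the point \(x=0.1\) is chosen away from the branch points of \(\arcsin(\sin(\cdot))\), where the sign ambiguity discussed around (84)-(87) does not arise.

```python
import math

def f(x):
    # Waveform (76): sin(2*pi*(x^3/3 + x^2 + x)), so omega(x) = x^2 + 2x + 1.
    return math.sin(2 * math.pi * (x**3 / 3 + x**2 + x))

def chirp_derivatives(f, x, delta=1e-5):
    g0 = math.asin(f(x))
    g1 = math.asin(f(x + delta))
    g2 = math.asin(f(x + 2 * delta))
    w1 = (g0 - 2 * g1 + g2) / (2 * math.pi * delta**2)                    # (80)
    w0 = -0.5 * ((2 * x + 3 * delta) * g0 - (4 * x + 4 * delta) * g1
                 + (2 * x + delta) * g2) / (2 * math.pi * delta**2)       # (81)
    return w1, w0

x = 0.1
w1, w0 = chirp_derivatives(f, x)
print(f"omega_1 ~ {w1:.4f} (expected 2x+2 = {2*x + 2})")
print(f"omega_0 ~ {w0:.4f} (expected -x^2+1 = {1 - x**2})")
print(f"omega(x) ~ {w1*x + w0:.4f} via (83) (exact x^2+2x+1 = {x**2 + 2*x + 1})")
```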
2310.08570
Non-symmetric stable processes: Dirichlet heat kernel, Martin kernel and Yaglom limit
We study a $d$-dimensional non-symmetric strictly $\alpha$-stable L\'{e}vy process $\mathbf{X}$, whose spherical density is bounded and bounded away from the origin. First, we give sharp two-sided estimates on the transition density of $\mathbf{X}$ killed when leaving an arbitrary $\kappa$-fat set. We apply these results to get the existence of the Yaglom limit for an arbitrary $\kappa$-fat cone. In the meantime we also obtain the spatial asymptotics of the survival probability at the vertex of the cone expressed by means of the Martin kernel for $\Gamma$ and its homogeneity exponent. Our results hold for the dual process $\widehat{\mathbf{X}}$, too.
Łukasz Leżaj
2023-10-12T17:56:27Z
http://arxiv.org/abs/2310.08570v1
# Non-symmetric stable processes: Dirichlet heat kernel, Martin kernel and Yaglom limit

###### Abstract.

We study a \(d\)-dimensional non-symmetric strictly \(\alpha\)-stable Lévy process \(\mathbf{X}\), whose spherical density is bounded and bounded away from the origin. First, we give sharp two-sided estimates on the transition density of \(\mathbf{X}\) killed when leaving an arbitrary \(\kappa\)-fat set. We apply these results to get the existence of the Yaglom limit for an arbitrary \(\kappa\)-fat cone. In the meantime we also obtain the spatial asymptotics of the survival probability at the vertex of the cone expressed by means of the Martin kernel for \(\Gamma\) and its homogeneity exponent. Our results hold for the dual process \(\widehat{\mathbf{X}}\), too.

Key words and phrases: non-symmetric stable process, heat kernel estimates, Dirichlet problem, Yaglom limit, Martin kernel

2020 Mathematics Subject Classification: Primary: 60J50, 60F05; secondary: 60G52, 60J35

The author was partially supported by the National Science Centre (Poland): grant 2021/41/N/ST1/04139.

## 1. Introduction

The aim of this article is to establish certain results which are well known for rotation-invariant \(\alpha\)-stable Lévy processes, but have so far remained unknown in the non-symmetric case. Given \(d\in\{1,2,\ldots\}\) and \(\alpha\in(0,2)\), we let \(\mathbf{X}\) be a strictly \(\alpha\)-stable Lévy process in \(\mathbb{R}^{d}\). It is known that the Lévy measure \(\nu\) of \(\mathbf{X}\) has the following representation:

\[\nu(B)=\int_{\mathbb{S}^{d-1}}\,\zeta(\mathrm{d}\xi)\int_{0}^{\infty}\mathds{1}_{B}(r\xi)r^{-1-\alpha}\,\mathrm{d}r,\quad B\in\mathcal{B}(\mathbb{R}^{d}).\]

The measure \(\zeta\) is called the spherical measure, and it describes the intensity of jumps of \(\mathbf{X}\) in certain directions. We consider the following assumption:

**A**: The spherical measure \(\zeta\) has a density \(\lambda\) with respect to the surface measure, which is bounded and bounded away from zero, i.e. there is \(\theta\in(0,1]\) such that

\[\frac{\mathrm{d}\zeta}{\mathrm{d}\sigma}=\lambda\qquad\text{and}\qquad\theta\leqslant\lambda(\xi)\leqslant\theta^{-1},\quad\xi\in\mathbb{S}^{d-1}.\]

Informally speaking, assumption \(\mathbf{A}\) implies that the process \(\mathbf{X}\) is, in a sense, _similar_ to the (classical) isotropic \(\alpha\)-stable Lévy process. Of course, if \(\lambda\) is constant on the unit sphere, then one recovers the well-known rotation-invariant case with the fractional Laplacian \(-(-\Delta)^{\alpha/2}\) as the generator. For an open set \(D\) and \(x,y\in D\), we let \(p_{t}^{D}(x,y)\) be the Dirichlet heat kernel of \(D\), that is, the transition density of \(\mathbf{X}\) killed when exiting \(D\). Accordingly, for \(x\in D\),

\[\mathbb{P}_{x}(\tau_{D}>t)=\int_{D}p_{t}^{D}(x,y)\,\mathrm{d}y\]

is the survival probability of \(\mathbf{X}\) in \(D\). Let \(\widehat{\mathbf{X}}:=-\mathbf{X}\) be the dual process of \(\mathbf{X}\). For consistency of notation, all objects pertaining to \(\widehat{\mathbf{X}}\) will also be decorated with the superscript "\(\widehat{\ }\)". For instance, for \(x,y\in\mathbb{R}^{d}\), we let \(\widehat{p}_{t}(x,y)=p_{t}(y,x)\) be the (free) heat kernel of \(\widehat{\mathbf{X}}\). For the definition of \(\kappa\)-fat sets, see Section 2. Our first result is the Varopoulos-type (see Varopoulos [62]) factorisation of the Dirichlet heat kernel corresponding to the process \(\mathbf{X}\) killed when exiting \(D\).
**Theorem 1.1**.: _Suppose that \(R\in(0,\infty]\) and there is \(\kappa\in(0,1)\) such that \(D\) is \((\kappa,r)\)-fat for all \(r\in(0,R]\). Assume that \(\mathbf{X}\) is a strictly \(\alpha\)-stable Levy process satisfying \(\mathbf{A}\). Then for all \(c\geqslant 1\) there is \(C=C(\alpha,d,\lambda,\kappa,c)\) such that for all \(x,y\in D\) and \(0<t\leqslant cR^{\alpha}\),_ \[C^{-1}p_{t}^{D}(x,y)\leqslant\mathbb{P}_{x}(\tau_{D}>t)p_{t}(x,y)\widehat{ \mathbb{P}}_{y}(\tau_{D}>t)\leqslant Cp_{t}^{D}(x,y). \tag{1.1}\] The proof of Theorem 1.1 is provided at the end of Section 3. Put in context, it is an extension and natural continuation of Bogdan et al. [10, Theorem 1], where the analogous result was proved for the rotation-invariant \(\alpha\)-stable Levy process, i.e. when \(\zeta\) is uniformly distributed on the unit sphere \(\mathbb{S}^{d-1}\). Here we allow the strictly stable Levy process to be non-symmetric (and in particular: anisotropic), as long as the distribution of the directions of jumps is absolutely continuous and its density is uniformly bounded and bounded away from zero. Observe that \(\lambda\) need not be continuous and the domain \(D\) may be unbounded. As a particular application, one can consider a \(\kappa\)-fat set \(\Gamma\) which is invariant under rescaling, i.e. for every \(r>0\) we have \(ry\in\Gamma\) whenever \(y\in\Gamma\). To wit, \(\Gamma\) is a generalised \(\kappa\)-fat cone in \(\mathbb{R}^{d}\). By Theorem 1.1, we immediately obtain a global sharp two-sided estimate of the Dirichlet heat kernel of \(\Gamma\), i.e. \[p_{t}^{\Gamma}(x,y)\approx\mathbb{P}_{x}(\tau_{\Gamma}>t)p_{t}(x,y)\widehat{ \mathbb{P}}_{y}(\tau_{\Gamma}>t),\quad t>0,\,x,y\in\Gamma.\] Here the symbol \(f\approx g\) means that the ratio of the two functions \(f\) and \(g\) stays between two positive constants. In the second part of the article we exploit the factorisation above to analyse the limiting behaviour of the process living in \(\Gamma\). Denote \(\mathbf{1}:=(0,\ldots,0,1)\in\mathbb{R}^{d}\). Our second result gives the existence of the Yaglom limit for \(\mathbf{X}\) in the \(\kappa\)-fat cone \(\Gamma\). **Theorem 1.2**.: _Let \(\mathbf{X}\) be a strictly \(\alpha\)-stable Levy process satisfying \(\mathbf{A}\). Assume \(\Gamma\) is a \(\kappa\)-fat cone such that \(\mathbf{1}\in\Gamma\). There is a probability measure \(\mu\) concentrated on \(\Gamma\) such that for every Borel set \(A\subseteq\Gamma\) and every probability measure \(\gamma\) on \(\Gamma\) satisfying \(\int_{\Gamma}(1+|y|)^{\alpha}\,\gamma(\mathrm{d}y)<\infty\),_ \[\lim_{t\to\infty}\mathbb{P}_{\gamma}(t^{-1/\alpha}X_{t}\in A|\tau_{\Gamma}>t )=\mu(A). \tag{1.2}\] Theorem 1.2 is proved at the end of Section 4 as Theorem 4.18. In particular, by letting \(\gamma\) be a Dirac delta at \(x\), one can consider the process which starts from an arbitrary fixed point \(x\in\Gamma\). Informally speaking, we show that, given the survival of \(\mathbf{X}\) in the cone \(\Gamma\), the limiting probability distribution of the properly rescaled process is independent of the initial distribution. In fact, the measure \(\mu\) has a strictly positive density on \(\Gamma\) given by the Martin kernel for the cone (see Lemma 4.1) and the stationary density of the corresponding Ornstein-Uhlenbeck semigroup from Theorem 4.11 (for the exact formula for the density, see Section 4 and Theorem 4.18). 
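Let us also record a short consistency check which explains the normalisation in (1.2). By the strict stability of \(\mathbf{X}\) and the scale invariance of \(\Gamma\), for \(\gamma=\delta_{x}\) one has \[\mathbb{P}_{x}\big{(}t^{-1/\alpha}X_{t}\in A\,\big{|}\,\tau_{\Gamma}>t\big{)}=\mathbb{P}_{t^{-1/\alpha}x}\big{(}X_{1}\in A\,\big{|}\,\tau_{\Gamma}>1\big{)},\quad t>0,\] so the long-time limit in Theorem 1.2 may equivalently be viewed as the limit of the distribution of the process at time \(1\), conditioned to stay in \(\Gamma\), as its starting point tends to the vertex of the cone. 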
Let us comment on the methodology and refer to previous developments in the literature. Theorem 1.1 is the next member of a large family of results concerning (Dirichlet) heat kernel estimates of stable processes in various domains. The analysis began with the estimates of the Green function in \(C^{1,1}\) domains for the (classical) Laplace operator given by Zhao [68]. The extension to Lipschitz domains was accomplished by Bogdan [9]. Shortly afterwards, Zhang [67] established sharp two-sided estimates of the Dirichlet heat kernel in \(C^{1,1}\) domains, which were extended later to Lipschitz domains in Varopoulos [62]. A parallel track was taken for the (non-local) fractional Laplacian, where the Green function estimates for \(C^{1,1}\) domains were obtained by Kulczycki [40] and Chen and Song [24], and for Lipschitz domains -- by Jakubowski [35]. The last steps were provided by Chen et al. [20] and Bogdan et al. [10], who developed sharp two-sided estimates of the Dirichlet heat kernel in \(C^{1,1}\) and Lipschitz domains, respectively. The paper [10] is our roadmap in the first part of the article. We note that since then, analogous results have been obtained for various types of symmetric processes, see, for instance, Chen et al. [19, 21]. In a more general setting, Bogdan et al. [11] and Chen et al. [22] considered a large class of unimodal Levy processes whose characteristic exponent satisfies weak scaling conditions, in particular when the upper scaling index is strictly smaller than \(2\). Subordinated Brownian motions which do not satisfy this assumption were studied by Kim and Mimica [37]. Grzywny et al. [28] considered symmetric Markov processes whose jumping kernel is comparable to that of an unimodal Levy process. Gradient estimates of the Dirichlet heat kernel were studied by Kulczycki and Ryznar [42]. Out of the vast body of results, we would also like to mention a recent preprint of Chen et al. [18], who studied the Dirichlet heat kernel for the cylindrical \(\alpha\)-stable Levy process, a case -- in a sense -- complementary to our situation. For more information we refer the reader to the articles above and the references therein. As already mentioned, the techniques used to derive the factorisation of the Dirichlet heat kernel are inspired by [10], with the necessary changes concerning the non-symmetry of the process. To give one example, the Dirichlet heat kernel is no longer symmetric, which makes it necessary to work simultaneously with \(\mathbf{X}\) and its dual \(\widehat{\mathbf{X}}\). Accordingly, the factorisation (1.1) is also expressed by means of objects pertaining to both the initial process \(\mathbf{X}\) and its dual \(\widehat{\mathbf{X}}\), c.f. [62, Eq. (0.25)]. For this reason, the boundary behaviour of the Dirichlet heat kernel may be significantly different in the variable \(x\) than in \(y\). In fact, this is the case even in the simple setting of the half-space, as we demonstrate in Example 4.8. It is crucial in our development that despite the non-symmetry of the process, our assumptions on \(\mathbf{X}\)_are_ symmetric in \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\). In particular, the boundary Harnack principle holds for both \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\) (with the same constants) by virtue of Bogdan et al. [14]. However, other classical bricks in the potential theory of \(\mathbf{X}\), such as the explicit formula for the Green function or the Poisson kernel of the ball, are not available in the general non-symmetric case. 
Nonetheless, the ideas of [10] can be transferred to our setting. Here we should note that some of the basic tools from potential theory are provided by the aptly titled paper of Vondracek [63], and we frequently refer to his work in what follows. Interestingly, the _standard_ track of results (that is: Dirichlet heat kernel estimates follow from sharp Green function estimates) seems to be disturbed in our case. The application of Theorem 1.1 to unbounded cones provides the crucial tool in the analysis of the limiting distribution of the process conditioned not to leave the cone. This limit, if it exists, is called the Yaglom limit due to the seminal paper of Yaglom [65], who identified the so-called quasi-stationary distribution for Bienayme-Galton-Watson processes. Since then, it has been studied in various settings and for different types of stochastic processes and we refer to van Doorn and Pollett [61] for a comprehensive study. Out of the vast number of papers, let us mention Seneta and Vere-Jones [55] and Tweedie [59] in the discrete-time Markov chain case, and Jacka and Roberts [34] in the continuous-time case. Markov chains on non-negative integers with the origin as the absorbing state were studied, i.a., by Ferrari et al. [26], van Doorn [60] and Flaspohler and Holmes [27]. See also e.g. Bean et al. [5] for quasi-birth-and-death processes, Asselah et al. [2] for the Fleming-Viot process, Lambert [46] for branching processes or the recent paper by Harris et al. [30] for non-local branching Markov processes. This short list is far from complete and for more information we refer the reader to the references in the mentioned articles and to the survey [61]. The study of Levy processes in the context of the Yaglom limit and quasi-stationary distributions started with Martinez and San Martin [48], who studied the case of Brownian motion with drift as a counterpart of results obtained by Iglehart [32] for the random walk with negative drift. Spectrally one-sided Levy processes were investigated by Kyprianou and Palmowski [44], Mandjes et al. [47] and Palmowski and Vlasiou [51]. One-dimensional self-similar Markov processes were considered by Haas and Rivero [29] and we note that in this case the appropriate rescaling of the process, similar to (1.2), is essential to obtain a non-trivial limit. In the multi-dimensional case, Bogdan et al. [15] obtained the Yaglom limit in Lipschitz (generalised) cones for the isotropic \(\alpha\)-stable Levy process. Recently Armstrong et al. [1] showed that the limit is in fact the same for every unimodal Levy process which is in the domain of attraction of the isotropic \(\alpha\)-stable law. We should also mention the results of Zhang et al. [66], who studied the Yaglom limit of Markov processes killed when leaving sets of bounded volume, but their approach does not apply to general unbounded cones. We note in passing that the problem we study is intrinsically connected to, but nonetheless different from, the conditioning of the process to stay forever in a certain set. For more information we refer the reader to Bertoin and Doney [6], Chaumont [16] and Chaumont and Doney [17], or to Kyprianou et al. [45], where the isotropic \(\alpha\)-stable Levy process conditioned to stay in a Lipschitz cone is considered. 
The article [15] and its successor [1] were both based on a tricky compactness argument and a formula expressing the survival probability as a Green potential, which, after a rather technical argument, yields the spatial asymptotics of the survival probability at the vertex of the cone by means of its Martin kernel. A completely different approach was applied in the recent preprint of Bogdan et al. [13], where the authors considered the generalised Ornstein-Uhlenbeck semigroup and established the existence of its stationary density, which was later used to derive the existence of the Yaglom limit for a more general family of \(\kappa\)-fat cones. This line of attack seems more versatile, since once the existence of the aforementioned stationary density is proved, the remaining part follows by the scaling property of the process; therefore, it should be applicable to other self-similar processes which enjoy both sharp two-sided estimates on the Dirichlet heat kernel and a \(P_{t}^{\Gamma}\)-invariant function which allows for Doob-type conditioning. In [13], this role is played by the Martin kernel for \(\Gamma\) with the pole at infinity, whose existence was established by Banuelos and Bogdan [3]. We follow this path and first collect some results and methods from the literature to conclude the existence and basic properties of the Martin kernel for the non-symmetric \(\alpha\)-stable process \(\mathbf{X}\) and its dual \(\widehat{\mathbf{X}}\) in Lemma 4.1. The crucial property is the homogeneity of order \(\beta\in[0,\alpha)\), which, perhaps unsurprisingly, turns out to be different for \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\) even when \(\Gamma\) is a half-space, see Example 4.7. For this reason, the rest of the analysis performed in Section 4 needs to be more delicate. For instance, the definition of the conditioned kernel \(\rho_{t}\) (4.4) involves Martin kernels for both the starting process and its dual counterpart. This is due to the structure of the factorisation in Theorem 1.1, which implies that the sharp estimate of \(\rho_{t}\) in (4.9) is, in a sense, _symmetric_ in \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\). Proposition 4.9 then provides the appropriate control of the first and the last term of (4.9). With these tools at our disposal, one can obtain the stationary density \(\varphi\) of the corresponding Ornstein-Uhlenbeck semigroup \(L_{t}\) (see (4.13) and (4.16) for definitions and Theorem 4.11 for the statement) and use it to conclude the existence of the desired Yaglom limit. Similarly to [15], we first obtain the spatial asymptotics of the survival probability in Corollary 4.17 and then use it in the proof of Theorem 1.2. Observe that the Yaglom measure \(\mu\) has a density which is expressed by objects pertaining to both \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\). Clearly, if \(\mathbf{X}\) is symmetric, then \(\mathbf{X}\stackrel{{\mathrm{d}}}{{=}}\widehat{\mathbf{X}}\), but even in this case we extend results from [15] and [13], where the isotropic case was considered. Thus, Theorem 1.2 may be perceived as the next stage of development concerning the analysis of self-similar processes living in scale-invariant open sets. Finally, to give proper credit, we note that the roadmap for the method applied in [13], which we extend in this work, was set by Bogdan et al. [12], where the Hardy operator on \(\mathbb{R}^{d}\setminus\{0\}\) was considered. The article is organised as follows. 
Section 2 contains basic preliminary results concerning the non-symmetric strictly \(\alpha\)-stable processes, as well as the crucial bound on the growth of harmonic functions close to the boundary (see Lemma 2.2). Section 3 is devoted to the proof of Theorem 1.1. In Section 4 we first discuss the Martin kernel of the cone \(\Gamma\) and provide a simple but relevant example where the exponents \(\beta\) and \(\widehat{\beta}\) differ (see Example 4.7). This case also yields an example for Theorem 1.1, where the phenomenon of different boundary behaviour in the variables \(x\) and \(y\) is visible, see Example 4.8. Then we apply the derived tools to the analysis of the corresponding generalised Ornstein-Uhlenbeck semigroup (Theorem 4.11) and use it to prove Theorem 1.2. **Notation.** Throughout the article, \(c\) will be a positive constant, which may vary from line to line in chains of estimates. Sometimes we will use the notation \(c_{1},c_{2},\ldots\) to distinguish certain constants. By writing \(c=c(a)\) we mean that \(c\) depends at most on the parameter \(a\). The notation \(c=c(a,b,\ldots)\) is defined accordingly. For \(x,z\in\mathbb{R}^{d}\), we denote by \(x\cdot z\) the standard scalar product in \(\mathbb{R}^{d}\). As usual, \(|x|\) is the Euclidean norm. For \(r>0\) we let \(B(x,r):=\{z\in\mathbb{R}^{d}\colon|x-z|<r\}\) be the ball centred at \(x\) of radius \(r\). To abbreviate the notation, we will write \(B_{r}:=B(0,r)\). For an open set \(D\), we denote by \(\delta_{D}(x):=\inf\{|y-x|\colon y\in D^{c}\}\) the distance to the complement of \(D\). Similarly, \(\operatorname{dist}(x,D):=\inf\{|y-x|\colon y\in D\}\) and \(\operatorname{dist}(D_{1},D_{2}):=\inf_{x\in D_{1}}\operatorname{dist}(x,D_{2})\) for open sets \(D_{1}\) and \(D_{2}\). All considered sets, functions and measures are assumed to be Borel. As already noted, for two positive functions \(f\) and \(g\), by writing \(f\approx g\) we mean that the ratio of \(f\) and \(g\) is bounded from above and below by positive constants. The symbols \(\lesssim\) and \(\gtrsim\) are defined accordingly. The set of \(d\times d\) matrices with real entries will be denoted by \(\mathcal{M}_{d}\). To avoid unnecessary considerations, we set \(\mathbb{N}=\{1,2,\ldots\}\). Finally, we let \(a\wedge b=\min\{a,b\}\) and \(a\lor b=\max\{a,b\}\). ## 2. Preliminaries Let \(d\in\mathbb{N}\). Throughout the paper we assume that \(\mathbf{X}\) is a \(d\)-dimensional strictly stable Levy process with the stability index \(\alpha\in(0,2)\), that is, for all \(a>0\) the following equality of distributions holds: \[\big{(}X_{at}:t\geqslant 0\big{)}\stackrel{{ d}}{{=}}\big{(}a^{1/ \alpha}X_{t}:t\geqslant 0\big{)}. \tag{2.1}\] We note that for \(\alpha=2\) one recovers Brownian motion, a case excluded from our considerations. 
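As a quick numerical illustration of (2.1) in dimension one, the following Monte Carlo sketch (an illustration only, assuming SciPy's `levy_stable`, whose default parametrisation with `loc=0` gives a strictly stable law for \(\alpha\neq 1\)) checks that \(X_{2}=X_{1}+X_{1}^{\prime}\) matches \(2^{1/\alpha}X_{1}\) in distribution, where \(X_{1}^{\prime}\) is an independent copy of \(X_{1}\):

```python
import numpy as np
from scipy.stats import levy_stable

# Monte Carlo check of strict stability (2.1) with a = 2 in dimension one:
# X_2 = X_1 + X_1' (independent copies) should match 2^(1/alpha) * X_1.
alpha, beta = 1.5, 0.5   # stability index and skewness (an arbitrary choice)
n = 100_000
rng = np.random.default_rng(0)

x2 = (levy_stable.rvs(alpha, beta, size=n, random_state=rng)
      + levy_stable.rvs(alpha, beta, size=n, random_state=rng))
scaled = 2 ** (1 / alpha) * levy_stable.rvs(alpha, beta, size=n, random_state=rng)

qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(x2, qs))      # empirical quantiles of X_2
print(np.quantile(scaled, qs))  # should be close to the line above
```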
Thus, there is a function \(\psi\colon\mathbb{R}^{d}\to\mathbb{C}\) such that \[\mathbb{E}e^{i\xi\cdot X_{t}}=e^{-t\psi(\xi)},\quad\xi\in\mathbb{R}^{d},\] and \[\psi(\xi)=-i\xi\cdot\gamma-\int_{\mathbb{R}^{d}}\big{(}e^{i\xi\cdot z}-1-i\xi \cdot z\mathds{1}_{|z|<1}\big{)}\,\nu(\mathrm{d}z),\] where \(\gamma\in\mathbb{R}^{d}\) is the drift component and \(\nu\) is the Levy measure satisfying \[\nu(\{0\})=0\qquad\text{and}\qquad\int_{\mathbb{R}^{d}}\big{(}1\wedge|z|^{2} \big{)}\,\nu(\mathrm{d}z)<\infty.\] Moreover, there is a finite measure \(\zeta\) on \(\mathbb{S}^{d-1}\) such that \[\nu(B)=\int_{\mathbb{S}^{d-1}}\zeta(\mathrm{d}\xi)\int_{0}^{\infty}\mathds{1 }_{B}(r\xi)r^{-1-\alpha}\,\mathrm{d}r,\quad B\in\mathcal{B}(\mathbb{R}^{d}), \tag{2.2}\] see e.g. [54, Theorem 14.3]. The measure \(\zeta\) is called the spherical measure and it describes the possibly non-isotropic intensity of jumps of the process \(\mathbf{X}\) in different directions. We note here that the trivial case \(\alpha=1\) and \(\nu\equiv 0\), that is, \(\mathbf{X}\) being a deterministic drift, is excluded. Therefore, the following characterisation of strictly stable Levy processes in our setting holds true (see e.g. [54, Theorem 14.7]): * Let \(\alpha\in(0,1)\). \(\mathbf{X}\) is strictly stable if and only if \[\gamma-\int_{B_{1}}z\,\nu(\mathrm{d}z)=0.\] * Let \(\alpha=1\). \(\mathbf{X}\) is strictly stable if and only if \[\gamma=0\qquad\text{and}\qquad\int_{\mathbb{S}^{d-1}}\xi\,\zeta(\mathrm{d}\xi )=0.\] * Let \(\alpha\in(1,2)\). \(\mathbf{X}\) is strictly stable if and only if \[\gamma+\int_{B_{1}^{c}}z\,\nu(\mathrm{d}z)=0.\] Put differently, every finite measure on \(\mathbb{S}^{d-1}\) induces a strictly \(\alpha\)-stable Levy process. The conditions above imply that, given \(\alpha\in(0,2)\), the _average expansion rate_ of every strictly \(\alpha\)-stable Levy process is the same. To make this statement precise, following Pruitt [53], we define \[h(r)=r^{-2}\int_{B_{r}}|z|^{2}\,\nu(\mathrm{d}z)+\nu(B_{r}^{c})+r^{-1}\bigg{|} \gamma+\int_{\mathbb{R}^{d}}z\big{(}\mathds{1}_{|z|<r}-\mathds{1}_{|z|<1} \big{)}\,\nu(\mathrm{d}z)\bigg{|},\quad r>0.\] Using (2.2) together with the strict stability, it is easy to show that there is \(c=c(\alpha,d,\zeta)\) such that \[h(r)=cr^{-\alpha},\quad r>0. \tag{2.3}\] If we define \(S(r)=\inf\{t>0\colon|X_{t}|>r\}\), then [53] yields that \(\mathbb{E}S(r)\approx 1/h(r)\), which, by the equation above, justifies the _similar average expansion_ of strictly stable processes. It is known that \(\mathbf{X}\) is a Markov process with transition function given by \[P_{t}(x,A)=\int_{A}p_{t}(x,y)\,\mathrm{d}y,\] where \(p_{t}(x,y):=p_{t}(y-x)\) is the probability density function, so that \[\int_{\mathbb{R}^{d}}p_{t}(x)e^{i\xi\cdot x}\,\mathrm{d}x=e^{-t\psi(\xi)},\quad \xi\in\mathbb{R}^{d}.\] By Hartman and Wintner [31], \(p_{t}\) is smooth and integrable for all \(t>0\). The strict stability (2.1) translates into the scaling property of the transition density as follows: \[p_{t}(x,y)=t^{-d/\alpha}p_{1}(t^{-1/\alpha}x,t^{-1/\alpha}y),\quad x,y\in \mathbb{R}^{d},\,t>0. \tag{2.4}\] In particular, \(p\) is jointly continuous on \((0,\infty)\times\mathbb{R}^{d}\times\mathbb{R}^{d}\). 
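For the reader's convenience, let us note that (2.4) is immediate from (2.1): for every Borel set \(A\subseteq\mathbb{R}^{d}\) and \(t>0\), \[\int_{A}p_{t}(x)\,\mathrm{d}x=\mathbb{P}(X_{t}\in A)=\mathbb{P}\big{(}t^{1/\alpha}X_{1}\in A\big{)}=\int_{t^{-1/\alpha}A}p_{1}(y)\,\mathrm{d}y=t^{-d/\alpha}\int_{A}p_{1}\big{(}t^{-1/\alpha}x\big{)}\,\mathrm{d}x.\] 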
Our **standing** assumption in this article is that the spherical measure \(\zeta\) has a density \(\lambda\) with respect to the surface measure \(\sigma\) which is bounded and bounded away from zero, that is \[\frac{\mathrm{d}\zeta}{\mathrm{d}\sigma}=\lambda\qquad\text{and}\qquad\theta \leqslant\lambda(\xi)\leqslant\theta^{-1},\quad\xi\in\mathbb{S}^{d-1}, \tag{2.5}\] for some \(\theta\in(0,1]\). It is clear that if \(\lambda\) is constant, then \(\zeta\) is the uniform distribution on the sphere \(\mathbb{S}^{d-1}\) and we recover the Levy measure of the isotropic \(\alpha\)-stable Levy process. One immediately obtains from (2.5) that the Levy measure \(\nu\) of \(\mathbf{X}\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{R}^{d}\) with the density satisfying \(\nu(x)=\lambda(x/|x|)|x|^{-d-\alpha}\), \(x\in\mathbb{R}^{d}\setminus\{0\}\), and \[\theta|x|^{-d-\alpha}\leqslant\nu(x)\leqslant\theta^{-1}|x|^{-d-\alpha},\quad x \in\mathbb{R}^{d}\setminus\{0\}. \tag{2.6}\] By [64, Theorem 1.5] and the scaling property (2.4) (see also [63] for some partial results), \[p_{t}(x,y)\approx t^{-d/\alpha}\wedge t|x-y|^{-d-\alpha},\quad t>0,\,x,y\in\mathbb{R }^{d}, \tag{2.7}\] and in particular, \[p_{1}(x,y)\approx(1+|y-x|)^{-d-\alpha},\quad x,y\in\mathbb{R}^{d}. \tag{2.8}\] Accordingly, \(p_{1}(0,0)>0\) and \(\mathbf{X}\) is of type A in the terminology of Taylor [58]. Combining the equation above with (2.6), we obtain \[p_{1}(x,y)\approx 1\wedge\nu(y-x)\approx p_{1}(y,x),\quad x,y\in\mathbb{R}^{d}. \tag{2.9}\] It follows also that for any constant \(c>0\) one has \[p_{t}(x,y)\approx p_{ct}(x,y), \tag{2.10}\] with the implied constant depending on \(c\). ### Killed process Assume \(D\) is a domain in \(\mathbb{R}^{d}\). We let \(\tau_{D}\) be the first exit time from \(D\), i.e. \[\tau_{D}=\inf\{t\geqslant 0\colon X_{t}\notin D\}.\] The process \(\mathbf{X}^{D}\) killed after exiting \(D\) is then defined by \[X_{t}^{D}=\begin{cases}X_{t},&t<\tau_{D},\\ \partial,&t\geqslant\tau_{D},\end{cases}\] where \(\partial\) is a cemetery point adjoined to \(D\). For every \(t>0\) and \(x,y\in D\) we define the (Dirichlet) heat kernel \[p_{t}^{D}(x,y)=p_{t}(x,y)-\mathbb{E}_{x}[\tau_{D}<t;p_{t-\tau_{D}}(X_{\tau_{D}},y)].\] Then it follows that \(p_{t}^{D}(x,y)\leqslant p_{t}(x,y)\) and consequently, \[\int_{D}p_{t}^{D}(x,y)\,\mathrm{d}y\leqslant 1. \tag{2.11}\] In an analogous way we define the dual killed process \(\widehat{\mathbf{X}}^{D}\) and the dual (Dirichlet) heat kernel \(\widehat{p}_{t}^{D}\). The function \((t,x,y)\mapsto p_{t}^{D}(x,y)\) is continuous on \((0,\infty)\times D\times D\) and satisfies the Chapman-Kolmogorov property: for every \(t,s>0\) and \(x,y\in D\), \[p_{t+s}^{D}(x,y)=\int_{D}p_{t}^{D}(x,z)p_{s}^{D}(z,y)\,\mathrm{d}z. \tag{2.12}\] Moreover, for all \(x,y\in D\) and all \(t>0\), \[p_{t}^{D}(x,y)=\widehat{p}_{t}^{D}(y,x). \tag{2.13}\] These (basic) properties are stated and proved in [63] as part of Theorem 3.2. Their proof is essentially an application of arguments from Chung and Zhao [25, Section 2.2], developed for the (classical) Brownian motion case, and from Chen et al. [23], combined with the necessary bounds on the heat kernel \(p_{t}\) [63, Proposition 2.1]; in fact, these properties hold for arbitrary domains \(D\subseteq\mathbb{R}^{d}\). The analogous property for the isotropic \(\alpha\)-stable process is well known, see e.g. [23, Theorems 2.3 and 2.4]; it also follows from [25, Section 2.2]. 
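Numerically, the killed process is straightforward to approximate. The following rough Monte Carlo sketch (an illustration only, again assuming SciPy's `levy_stable`; since the path is monitored only on a discrete time grid, exits between grid points are missed and the resulting estimate of the survival probability is biased upwards) estimates \(\mathbb{P}_{x}(\tau_{D}>t)\) for \(D=(0,1)\):

```python
import numpy as np
from scipy.stats import levy_stable

# Crude Monte Carlo estimate of the survival probability P_x(tau_D > t)
# for a one-dimensional strictly stable process in D = (0, 1).
alpha, beta = 1.5, 0.3          # an arbitrary non-symmetric choice
x0, t = 0.5, 0.25
n_steps, n_paths = 200, 5_000
dt = t / n_steps
rng = np.random.default_rng(1)

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    # stationary independent increments and (2.1):
    # X_{s+dt} - X_s has the law of dt^(1/alpha) * X_1
    z = levy_stable.rvs(alpha, beta, size=n_paths, random_state=rng)
    x = np.where(alive, x + dt ** (1 / alpha) * z, x)
    alive &= (x > 0) & (x < 1)

print("estimated survival probability:", alive.mean())
```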
For every \(t>0\), \(x\in D\) and \(f\in L^{\infty}(D)\), \[P_{t}^{D}f(x):=\mathbb{E}_{x}[t<\tau_{D};f(X_{t})]=\int_{D}p_{t}^{D}(x,y)f(y) \,\mathrm{d}y.\] To wit, \(P_{t}^{D}\) is the killed semigroup corresponding to the killed strictly stable process \(\mathbf{X}^{D}\). In particular, setting \(f\equiv 1\) yields the survival probability \[\mathbb{P}_{x}(\tau_{D}>t)=\int_{D}p_{t}^{D}(x,y)\,\mathrm{d}y. \tag{2.14}\] It goes without saying that the same results hold for the dual process \(\widehat{\mathbf{X}}^{D}\), too. By (2.4), the scaling property holds for the Dirichlet heat kernel, too: \[p_{t}^{D}(x,y)=t^{-d/\alpha}p_{1}^{t^{-1/\alpha}D}\big{(}t^{-1/\alpha}x,t^{-1/ \alpha}y\big{)},\quad x,y\in D,\,t>0. \tag{2.15}\] Put differently, \[p_{r^{\alpha}t}^{rD}(rx,ry)=r^{-d}p_{t}^{D}(x,y),\quad r,t>0,\,x,y\in D. \tag{2.16}\] Therefore, one also has \[\mathbb{P}_{x}(\tau_{D}>t)=\mathbb{P}_{t^{-1/\alpha}x}\Big{(}\tau_{t^{-1/ \alpha}D}>1\Big{)},\quad x\in\mathbb{R}^{d},\,t>0. \tag{2.17}\] Throughout the article, we will often impose some restrictions on the geometry of the open set \(D\). We will say that \(D\) is \((\kappa,r)\)-fat at \(Q\in\overline{D}\), if there is a point \(A=A_{r}(Q)\in D\cap B(Q,r)\) such that \(B(A,\kappa r)\subseteq D\cap B(Q,r)\). If this property holds for all \(Q\in\overline{D}\), then \(D\) is said to be \((\kappa,r)\)-fat. If there is \(R>0\) such that \(D\) is \((\kappa,r)\)-fat for all \(r\in(0,R]\), then we simply say that \(D\) is \(\kappa\)-fat. For example, every Lipschitz set is \(\kappa\)-fat. Of course, if \(D\) is \(\kappa\)-fat, then it is also \(\kappa^{\prime}\)-fat for all \(\kappa^{\prime}<\kappa\). We also note that in general, the point \(A_{r}(Q)\) need not be uniquely determined, but we choose to accept this slight abuse of notation. We say that a function \(u\colon\mathbb{R}^{d}\to\mathbb{R}\) is harmonic (with respect to \(\mathbf{X}\)) in an open set \(D\subseteq\mathbb{R}^{d}\) if for every bounded open set \(B\subseteq D\), \[u(x)=\mathbb{E}_{x}\big{[}\tau_{B}<\infty;u(X_{\tau_{B}})\big{]},\quad x\in B,\] where the integral on the right-hand side is assumed to be absolutely convergent. If the identity above is satisfied with \(B=D\), then \(u\) is regular harmonic in \(D\). Also, we will say that \(u\) is (regular) co-harmonic, if it is (regular) harmonic with respect to the dual process \(\widehat{\mathbf{X}}\). The probability distribution of \(X_{\tau_{D}}\) is the harmonic measure \(P_{D}(x,\,\cdot\,)\), that is \[\mathbb{P}_{x}(X_{\tau_{D}}\in A)=\int_{A}\,P_{D}(x,\mathrm{d}y),\quad A \subseteq\mathbb{R}^{d}.\] Thus, a regular harmonic function \(u\) satisfies \[u(x)=\int_{D^{c}}u(y)\,P_{D}(x,\mathrm{d}y),\quad x\in D.\] The Green function \(G_{D}(x,y)\) is given by \[G_{D}(x,y)=\int_{0}^{\infty}p_{t}^{D}(x,y)\,\mathrm{d}t,\quad x,y\in D.\] We will say that \(D\) is Greenian if \(G_{D}\) is finite almost everywhere on \(D\times D\). This is the case if, for example, \(D\) is bounded. If \(\alpha<d\), then \(\mathbf{X}\) is transient and every \(D\subseteq\mathbb{R}^{d}\) is Greenian, see [54, Chapter 7]. For \(d=1\) and \(\alpha\in(1,2)\), every \(D\neq\mathbb{R}\) is Greenian, see e.g. Port [52]. If \(\alpha=d=1\), then \(\mathbf{X}\) is a symmetric Cauchy process and the explicit formula for the Green function of the half-line was given by M. Riesz, see, for example, Blumenthal et al. [7]. 
On the other hand, in this case singletons are polar sets by [54, Remark 43.6 and Example 43.7] and consequently, neither \(\mathbb{R}\) nor \(\mathbb{R}\setminus\{0\}\) is Greenian. For details we refer the reader to [54, Chapter 8]. Throughout the paper we adopt the convention that \(G_{D}(x,y)=0\) whenever \(x\in D^{c}\) or \(y\in D^{c}\). The scaling property of \(p^{D}\) (2.16) implies the scaling of \(G_{D}\) in the following form: \[G_{rD}(rx,ry)=r^{\alpha-d}G_{D}(x,y),\quad r>0,\,x,y\in D. \tag{2.18}\] Moreover, observe that by the strong Markov property, for every \(B\subseteq D\), \[p^{D}_{t}(x,y)=p^{B}_{t}(x,y)+\mathbb{E}_{x}[\tau_{B}<t;p^{D}_{t-\tau_{B}}(X_{ \tau_{B}},y)].\] Integrating this identity with respect to \(\mathrm{d}t\) yields \[G_{D}(x,y)=G_{B}(x,y)+\mathbb{E}_{x}G_{D}(X_{\tau_{B}},y),\quad x,y\in D.\] Therefore, \(x\mapsto G_{D}(x,y)\) is regular harmonic in \(B\) (and consequently: continuous in \(B\), c.f. [63, Theorem 6.7]), provided that \(y\in D\setminus\overline{B}\). One also concludes that \(G_{B}(x,y)\leqslant G_{D}(x,y)\) for all \(x,y\in\mathbb{R}^{d}\). We note in passing that if \(\alpha<d\), then (finite) \(U:=G_{\mathbb{R}^{d}}\) is called the potential kernel of \(\mathbf{X}\). It follows from the scaling property (2.4) together with (2.8) that \[U(x,y)\approx|x-y|^{\alpha-d},\quad x\neq y,\] i.e. the potential kernel of \(\mathbf{X}\) is comparable to the one of the isotropic \(\alpha\)-stable Levy process. The analogous property for the Green function of an arbitrary open set holds away from the boundary of \(D\), see e.g. [63, Theorem 4.4]. For more information we refer the reader to [63]. Let us comment on the harmonic measure \(P_{D}\). The celebrated Ikeda-Watanabe formula [33] states that for \(x\in D\), the joint distribution of \((\tau_{D},X_{\tau_{D}-},X_{\tau_{D}})\) restricted to the event \(\{\tau_{D}<\infty,X_{\tau_{D}-}\neq X_{\tau_{D}}\}\) has a density function given by \[(0,\infty)\times D\times\overline{D}^{c}\ni(s,u,z)\mapsto p^{D}_{s}(x,u)\nu(z -u). \tag{2.19}\] In particular, the distribution of \(X_{\tau_{D}}\) on \(\overline{D}^{c}\) has a density function called the Poisson kernel of \(D\): \[P_{D}(x,z)=\int_{D}G_{D}(x,y)\nu(z-y)\,\mathrm{d}y,\quad x\in D,z\in\overline{ D}^{c}. \tag{2.20}\] Therefore, under the condition that the process does not hit the boundary when exiting the set \(D\), i.e. when \(\mathbb{P}_{x}(X_{\tau_{D}}\in\partial D)=0\), by the discussion above one has \[\mathbb{P}_{x}(X_{\tau_{D}}\in U)=\int_{U}P_{D}(x,z)\,\mathrm{d}z,\quad U\subseteq \mathbb{R}^{d}.\] This is the case if \(D\) is a Lipschitz domain, see Sztonyk [57, Theorem 1.1 and the discussion below it] and Millar [50]. To simplify the notation, we denote the Poisson kernel of the ball \(B_{r}\) by \(P_{r}\). The following immediate extension of [63, Lemma 5.3] will be useful in what follows. **Proposition 2.1**.: _Let \(r>0\) and \(\varepsilon>0\). For all \(y\in B_{r}\) and \(z\in B^{c}_{(1+\varepsilon)r}\) we have_ \[\theta\Big{(}\frac{\varepsilon}{1+\varepsilon}\Big{)}^{d+\alpha}\frac{ \mathbb{E}_{y}\tau_{B_{r}}}{|z|^{d+\alpha}}\leqslant P_{r}(y,z)\leqslant \theta^{-1}\Big{(}\frac{2+\varepsilon}{1+\varepsilon}\Big{)}^{d+\alpha}\frac{ \mathbb{E}_{y}\tau_{B_{r}}}{|z|^{d+\alpha}}.\] In particular, it follows by Pruitt's estimates [53] and (2.3) that \(P_{r}(y,z)\) is strictly positive for all \(y\in B_{r}\) and \(z\in\overline{B}^{c}_{r}\). ### Properties of harmonic functions Let us recall two basic tools of potential theory. 
First, by [63, Theorem 6.7 and Corollary 6.9] (see also Song and Vondracek [56]), for any connected open set \(D\) and any compact subset \(K\subseteq D\) there is a constant \(c=c(d,\alpha,\lambda,D,K)\) such that for every function \(h\) which is non-negative in \(\mathbb{R}^{d}\) and harmonic in \(D\), (HI) \[h(x)\leqslant ch(y),\quad x,y\in K.\] The property (HI) is the _Harnack inequality_ and it describes the behaviour of harmonic functions inside the domain of harmonicity. Note that for the symmetric case, this follows also from a seminal paper of Bass and Levin [4]. The phenomena close to the boundary are governed by the _boundary Harnack principle_, which we now invoke. Let \(D\) be an arbitrary open set. In view of Bogdan et al. [14, Example 5.5], the global scale-invariant boundary Harnack inequality holds for \(\mathbf{X}\), i.e. for every \(x_{0}\in\mathbb{R}^{d}\), every \(r>0\) and all non-negative functions \(f,g\) which are regular harmonic in \(D\cap B(x_{0},2r)\) and vanish on \(D^{c}\cap B(x_{0},2r)\) one has (BHP) \[\frac{f(x)}{g(x)}\leqslant\mathrm{C}_{\mathrm{BHI}}\frac{f(y)}{g(y)},\quad x,y\in D\cap B(x_{0},r),\] where \(\mathrm{C}_{\mathrm{BHI}}=\mathrm{C}_{\mathrm{BHI}}(\alpha,d,\lambda)\). Note that all the assumptions are symmetric for \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\) and consequently, (HI) and (BHP) hold for regular co-harmonic functions, too. The following lemma gives the upper bound on the growth of harmonic functions close to the boundary and is crucial for our development. It is a generalisation of [8, Lemma 5], where the isotropic case was considered. We point out that, as in the rotation-invariant setting, the growth order \(\gamma\) is strictly less than \(\alpha\). **Lemma 2.2**.: _Let \(R\in(0,\infty]\) and assume that an open set \(D\) is \((\kappa,r)\)-fat for some \(\kappa\in(0,1)\) and all \(r\in(0,R)\). There are \(c=c(\alpha,d,\lambda,\kappa,R)\in(0,\infty)\) and \(\gamma\in[0,\alpha)\) such that for all boundary points \(x_{0}\in\partial D\), all \(r\in(0,R)\) and all non-negative functions \(f\) which are regular harmonic in \(D\cap B(x_{0},r)\),_ \[f(A_{s}(x_{0}))\geqslant c\bigg{(}\frac{|A_{s}(x_{0})-x_{0}|}{r}\bigg{)}^{ \gamma}f(A_{r}(x_{0})),\quad s\in(0,r).\] Proof.: We follow the proof of [8, Lemma 5]. In fact, the only difference is the derivation of the counterpart of [8, eq. (3.32)]. Once this relation is proved, the claim follows by exactly the same arguments. We use analogous notation and geometry. First, without loss of generality we may assume that \(x_{0}=0\), \(r=1\) and \(f(A_{r}(0))=1\). Let \(T=2/\kappa\). For \(k=0,1,\ldots\), we set \(r_{k}=T^{-k}\), \(A_{k}=A_{r_{k}}(0)\) and \(D_{k}=B(A_{k},r_{k+1})\). We then clearly have for \(k>l\geqslant 0\) and \(y\in D_{l}\) that \(|D_{l}|\geqslant c_{1}T^{-dl}\), \(\mathrm{diam}(D_{k})\geqslant c_{1}T^{-k}\) and \(\mathrm{dist}(A_{k},y)\leqslant c_{1}T^{-l}\) for some constant \(c_{1}=c_{1}(d,\kappa)\). Next, by the Harnack inequality (HI), there is \(c_{2}=c_{2}(d,\alpha,\lambda,\kappa,R)\) such that \[\frac{f(x)}{f(y)}\leqslant c_{2},\quad x,y\in D_{k},\,k=0,1,\ldots.\] It follows that \[f(A_{k})=\int_{D_{k}^{c}}f(z)P_{D_{k}}(A_{k},z)\,\mathrm{d}z\geqslant c_{2}^{ -1}\sum_{l=0}^{k-1}f(A_{l})\int_{D_{l}}P_{r_{k+1}}(0,z-A_{k})\,\mathrm{d}z.\] Observe that for \(z\in D_{l}\), \(l=0,1,\ldots,k-1\), one also has \(|z-A_{k}|\geqslant\kappa r_{k}\). 
Therefore, using Proposition 2.1 with \(\varepsilon=(\kappa r_{k})/r_{k+1}-1=1\), together with (2.3), we conclude that \[f(A_{k})\geqslant c_{3}\sum_{l=0}^{k-1}T^{-(k-l)}f(A_{l}),\] where \(c_{3}=c_{3}(d,\alpha,\lambda,\kappa)\). With the inequality above at hand, one can directly copy the proof of [8, Lemma 5] to conclude the claim, where in place of [8, Lemma 2] one may use [63, Lemma 6.8] or simply apply the (classical) Harnack inequality (HI). ## 3. Factorisation of the Dirichlet heat kernel This section is devoted to the proof of Theorem 1.1. In what follows it will be crucial to exploit the geometric properties of \(\kappa\)-fat sets. Recall that \(D\) is \(\kappa\)-fat, if there is \(R>0\) such that for every \(r\in(0,R)\) and \(x\in\overline{D}\) there is a point \(A_{r}(x)\) such that \(B(A_{r}(x),\kappa r)\subseteq B(x,r)\cap D\). We then adopt the following standard notation, c.f. [10, Definition 2] and [22, Figure 1]. Namely, for \(x\in D\) and \(r>0\), we let \[U^{x,r}=B(x,|x-A_{r}(x)|+\kappa r/3)\cap D,\qquad B_{1}^{x,r}=B(A_{r}(x), \kappa r/3),\] so that \(B_{1}^{x,r}\subseteq U^{x,r}\). We also set \(A_{r}^{\prime}(x)\) and \(B_{2}^{x,r}=B(A_{r}^{\prime}(x),\kappa r/6)\) so that \(B(A_{r}^{\prime}(x),\kappa r/3)\subseteq B(A_{r}(x),\kappa r)\setminus U^{x,r}\) and consequently \(\operatorname{dist}(U^{x,r},B_{2}^{x,r})\geqslant\kappa r/6\). Finally we let \(V^{x,r}=B(x,|x-A_{r}(x)|+\kappa r)\cap D\). The structures above are presented in Figure 1 (illustration of the geometric structure of \(\kappa\)-fat sets). We start with the following observation on the survival probability. **Lemma 3.1**.: _There is a constant \(c=c(\alpha,d,\lambda,\kappa)\) such that if \(D\) is \((\kappa,1)\)-fat at \(x\), then_ \[\mathbb{P}_{x}(\tau_{D}>1/3)\leqslant c\mathbb{P}_{x}(\tau_{D}>3).\] Proof.: Let \(x\in D\) and denote \(A=A_{1}(x)\), \(U=U^{x,1}\), \(B_{1}=B_{1}^{x,1}\) and \(B_{2}=B_{2}^{x,1}\). For \(|x-A|<\kappa/2\) we have \(B(x,\kappa/2)\subseteq D\) and \[1\geqslant\mathbb{P}_{x}(\tau_{D}>1/3)\geqslant\mathbb{P}_{x}(\tau_{D}>3) \geqslant\mathbb{P}_{x}\Big{(}\tau_{B(x,\kappa/2)}>3\Big{)}=\mathbb{P}_{0 }(\tau_{B_{\kappa/2}}>3)=c>0,\] with the constant \(c\) depending on \(\alpha\), \(d\), \(\lambda\) and \(\kappa\). For \(|x-A|\geqslant\kappa/2\) we write \[\mathbb{P}_{x}(\tau_{D}>1/3)\leqslant\mathbb{P}_{x}(\tau_{U}>1/3)+\mathbb{P}_{ x}(X_{\tau_{U}}\in D). \tag{3.1}\] Observe that for \(B=B(x,|x-A|+\kappa/3)\) we have \(U\subseteq B\) and consequently, \[\mathbb{P}_{x}(X_{\tau_{U}}\in\partial U\cap D)\leqslant\mathbb{P}_{x}(X_{ \tau_{B}}\in\partial B)=0,\] see e.g. [57, Section 4] or [63, Proposition 5.2]. Therefore, \[\mathbb{P}_{x}(X_{\tau_{U}}\in D)=\mathbb{P}_{x}(X_{\tau_{U}}\in D \setminus\overline{U}).\] Hence, by [14, Remark 3.8] with \(x_{0}=x\), \(y=A\), \(R=|x-A|+\kappa/3\), \(r=R-\kappa/6\), \(E_{1}=D\setminus\overline{U}\) and \(E_{2}=B_{2}\) in the notation of [14, Remark 3.8], we get that \[\frac{\mathbb{P}_{x}(X_{\tau_{U}}\in D)}{\mathbb{P}_{A}(X_{\tau_{U}}\in D)} \leqslant c\frac{\mathbb{P}_{x}(X_{\tau_{U}}\in B_{2})}{\mathbb{P}_{A}(X_{ \tau_{U}}\in B_{2})}\] with \(c=c(d,\alpha,\lambda,\kappa)\). Next, we infer from (2.3) and Proposition 2.1 that \(P_{B_{1}}(A,\,\cdot\,)\) is strictly positive on \(\overline{B_{1}}^{c}\); therefore, \[\mathbb{P}_{A}(X_{\tau_{U}}\in B_{2})\geqslant\mathbb{P}_{A}(X_{ \tau_{B_{1}}}\in B_{2})=c>0.\] Thus, \[\mathbb{P}_{x}(X_{\tau_{U}}\in D)\leqslant c\mathbb{P}_{x}(X_{ \tau_{U}}\in B_{2}). \tag{3.2}\]
Moreover, note that \(|y-u|\approx 1\) for \(y\in B_{2}\) and \(u\in U\). By the Ikeda-Watanabe formula (2.20) and (2.6), \[\mathbb{P}_{x}(X_{\tau_{U}}\in B_{2})=\int_{U}G_{U}(x,u)\int_{B_{2}}\nu(y-u) \,\mathrm{d}y\,\mathrm{d}u\approx\int_{U}G_{U}(x,u)\,\mathrm{d}u=\mathbb{E}_{ x}\tau_{U}.\] The Markov inequality yields \(\mathbb{P}_{x}(\tau_{U}>1/3)\leqslant 3\mathbb{E}_{x}\tau_{U}\). Combining with (3.1) and (3.2), we get \[\mathbb{P}_{x}(\tau_{D}>1/3)\leqslant c\mathbb{E}_{x}\tau_{U},\] and by the strong Markov property, \[\mathbb{E}_{x}\tau_{U}\leqslant c\mathbb{P}_{x}(X_{\tau_{U}}\in B _{2})\leqslant c\mathbb{E}_{x}\Big{[}X_{\tau_{U}}\in B_{2};\mathbb{P}_{X_{ \tau_{U}}}\big{(}\tau_{B(X_{\tau_{U}},\kappa/6)}>3\big{)}\Big{]}\leqslant c \mathbb{P}_{x}(\tau_{D}>3), \tag{3.3}\] which ends the proof. **Remark 3.2**.: We note two observations from the proof of Lemma 3.1. 1. Let \(V=V^{x,1}\). Then in (3.3) one in fact has \[\mathbb{E}_{x}\tau_{U}\lesssim\mathbb{E}_{x}\Big{[}X_{\tau_{U}} \in B_{2};\mathbb{P}_{X_{\tau_{U}}}\big{(}\tau_{B(X_{\tau_{U}},\kappa/6)}>3 \big{)}\Big{]}\leqslant\mathbb{P}_{x}(\tau_{V}>3)\leqslant\mathbb{P}_{x}( \tau_{D}>3).\] Clearly, if \(D\) is \((\kappa,1)\)-fat at \(x\), it is \((\kappa/3,1)\)-fat at \(x\), too. Since \(D\cap B(x,|x-A|+\kappa/3)=U\), by the proof above we obtain \[\mathbb{P}_{x}(\tau_{D}>1/3)\approx\mathbb{P}_{x}(\tau_{D}>3)\approx\mathbb{P }_{x}(\tau_{D}>1)\approx\mathbb{P}_{x}(\tau_{U}>1)\approx\mathbb{P}_{x}(X_{ \tau_{U}}\in D)\approx\mathbb{E}_{x}\tau_{U}. \tag{3.4}\] 2. In fact, in (3.4) we may replace \(3\) by any constant \(a\geqslant 1\) at the cost of the comparability constant depending also on \(a\). We also note that the analogous results hold for the dual process \(\widehat{\mathbf{X}}^{D}\), too. **Lemma 3.3**.: _Let \(D_{1},D_{3}\subseteq D\) be open and such that \(\operatorname{dist}(D_{1},D_{3})>0\). Denote \(D_{2}=D\setminus(D_{1}\cup D_{3})\). If \(x\in D_{1}\) and \(y\in D_{3}\), then_ \[p_{1}^{D}(x,y)\leqslant\mathbb{P}_{x}(X_{\tau_{D_{1}}}\in D_{2}) \sup_{s<1,z\in D_{2}}p_{s}(z,y)+\mathbb{E}_{x}\tau_{D_{1}}\sup_{u\in D_{1},z \in D_{3}}\nu(z-u)\] _and_ \[p_{1}^{D}(x,y)\geqslant\mathbb{P}_{x}(\tau_{D_{1}}>1)\widehat{ \mathbb{P}}_{y}(\tau_{D_{3}}>1)\inf_{u\in D_{1},z\in D_{3}}\nu(z-u).\] Proof.: By the strong Markov property, \[p_{1}^{D}(x,y) =\mathbb{E}_{x}\Big{[}\tau_{D_{1}}<1;p_{1-\tau_{D_{1}}}^{D}\big{(} X_{\tau_{D_{1}}},y\big{)}\Big{]}\] \[=\mathbb{E}_{x}\Big{[}\tau_{D_{1}}<1,X_{\tau_{D_{1}}}\in D_{2};p_{ 1-\tau_{D_{1}}}^{D}\big{(}X_{\tau_{D_{1}}},y\big{)}\Big{]}+\mathbb{E}_{x} \Big{[}\tau_{D_{1}}<1,X_{\tau_{D_{1}}}\in D_{3};p_{1-\tau_{D_{1}}}^{D}\big{(}X_{ \tau_{D_{1}}},y\big{)}\Big{]}\] \[=:\mathrm{I}_{1}+\mathrm{I}_{2}.\] We clearly have \[\mathrm{I}_{1}\leqslant\mathbb{P}_{x}\big{(}X_{\tau_{D_{1}}}\in D_{2}\big{)} \sup_{s<1,z\in D_{2}}p_{s}(z,y).\] Assume first that \(D_{1}\) is such that \(\mathbb{P}_{x}(X_{\tau_{D_{1}}}\in\partial D_{1}\cap D)=0\); this holds, for instance, when \(D_{1}=D\cap E\) for some Lipschitz domain \(E\). 
Then the density of \((\tau_{D_{1}},X_{\tau_{D_{1}}})\) at \((s,z)\) with \(z\in D\) is given by (see (2.19)) \[f_{x}(s,z)=\int_{D_{1}}p_{s}^{D_{1}}(x,u)\nu(z-u)\,\mathrm{d}u.\] Thus, using (2.14), for \(z\in D_{3}\), \[f_{x}(s,z)=\int_{D_{1}}p_{s}^{D_{1}}(x,u)\nu(z-u)\,\mathrm{d}u\leqslant\mathbb{ P}_{x}(\tau_{D_{1}}>s)\sup_{u\in D_{1},z\in D_{3}}\nu(z-u), \tag{3.5}\] hence, by (2.11) and (2.13), \[\mathrm{I}_{2} =\int_{0}^{1}\int_{D_{3}}p_{1-s}^{D}(z,y)f_{x}(s,z)\,\mathrm{d}z \,\mathrm{d}s\] \[\leqslant\sup_{u\in D_{1},z\in D_{3}}\nu(z-u)\int_{0}^{1}\int_{D _{3}}\widehat{p}_{1-s}^{D}(y,z)\mathbb{P}_{x}(\tau_{D_{1}}>s)\,\mathrm{d}z\, \mathrm{d}s\] \[\leqslant\int_{0}^{1}\mathbb{P}_{x}(\tau_{D_{1}}>s)\,\mathrm{d}s \cdot\sup_{u\in D_{1},z\in D_{3}}\nu(z-u)\] \[\leqslant\mathbb{E}_{x}\tau_{D_{1}}\cdot\sup_{u\in D_{1},z\in D_{ 3}}\nu(z-u),\] and the upper estimate follows in this case. For general \(D_{1}\), we pick an approximating sequence \((E_{n})_{n\in\mathbb{N}}\) such that \(E_{n}\subseteq E_{n+1}\), \(\cup_{n\in\mathbb{N}}E_{n}=D_{1}\) and \(\mathbb{P}_{x}(X_{\tau_{E_{n}}}\in\partial E_{n}\cap D)=0\), and use the continuity of the heat kernel \(p\). Note that the continuity of the Levy density \(\nu\) is not necessary. For the proof of the second part, we use the reverse of estimate (3.5) together with (2.13) to obtain \[\mathrm{I}_{2} \geqslant\inf_{u\in D_{1},z\in D_{3}}\nu(z-u)\int_{0}^{1}\int_{D _{3}}p_{1-s}^{D}(z,y)\mathbb{P}_{x}(\tau_{D_{1}}>s)\,\mathrm{d}z\,\mathrm{d}s\] \[\geqslant\mathbb{P}_{x}(\tau_{D_{1}}>1)\inf_{u\in D_{1},z\in D_{3} }\nu(z-u)\int_{0}^{1}\int_{D_{3}}\widehat{p}_{1-s}^{D_{3}}(y,z)\,\mathrm{d}z\, \mathrm{d}s\] \[\geqslant\mathbb{P}_{x}(\tau_{D_{1}}>1)\widehat{\mathbb{P}}_{y}( \tau_{D_{3}}>1)\inf_{u\in D_{1},z\in D_{3}}\nu(z-u).\] **Lemma 3.4**.: _If \(D\) is \((\kappa,1)\)-fat at \(x\) and \(y\), then there is a constant \(c=c(\alpha,d,\lambda,\kappa)\) such that_ \[p_{2}^{D}(x,y)\leqslant c\mathbb{P}_{x}(\tau_{D}>2)p_{2}(x,y)\widehat{\mathbb{ P}}_{y}(\tau_{D}>2).\] Proof.: We first claim that if \(D\) is \((\kappa,1)\)-fat at \(u\), then there is \(c=c(\alpha,d,\lambda,\kappa)\) such that \[p_{1}^{D}(u,v)\leqslant cp_{1}(u,v)\mathbb{P}_{u}(\tau_{D}>1),\quad v\in D, \tag{3.6}\] and \[\widehat{p}_{1}^{D}(u,v)\leqslant cp_{1}(v,u)\widehat{\mathbb{P}}_{u}(\tau_{D} >1),\quad v\in D. \tag{3.7}\] Indeed, for \(|u-v|\leqslant 8\) we have \(p_{1}(u,v)\approx p_{1}(v,u)\approx 1\) (see (2.9)); hence, by the semigroup property, (2.7) and Lemma 3.1, \[p_{1}^{D}(u,v) =\int_{D}p_{1/2}^{D}(u,z)p_{1/2}^{D}(z,v)\,\mathrm{d}z\] \[\leqslant\sup_{z\in\mathbb{R}^{d}}p_{1/2}(z,v)\mathbb{P}_{u}(\tau_ {D}>1/2)\] \[\leqslant c\mathbb{P}_{u}(\tau_{D}>1)\] \[\leqslant cp_{1}(u,v)\mathbb{P}_{u}(\tau_{D}>1),\] and (3.6) follows in this case. Therefore, we assume that \(|u-v|>8\) and use Lemma 3.3 with \(A=A_{1}(u)\), \(D_{1}=U^{u,1}=D\cap B(u,|u-A|+\kappa/3)\) and \(D_{3}=\left\{z\in D\colon|z-u|>\frac{1}{2}|u-v|\right\}\). Observe that by (2.6), (2.7) and (2.8), \[\sup_{s<1,z\in D_{2}}p_{s}(z,v)\leqslant cp_{1}(u,v)\] and \[\sup_{w\in D_{1},z\in D_{3}}\nu(z-w)\leqslant cp_{1}(u,v).\] Hence, by Lemma 3.3, \[p_{1}^{D}(u,v) \leqslant cp_{1}(u,v)(\mathbb{P}_{u}(X_{\tau_{D_{1}}}\in D_{2})+ \mathbb{E}_{u}\tau_{D_{1}})\] \[\leqslant cp_{1}(u,v)(\mathbb{P}_{u}(X_{\tau_{D_{1}}}\in D)+ \mathbb{E}_{u}\tau_{D_{1}})\] \[\leqslant cp_{1}(u,v)\mathbb{P}_{u}(\tau_{D}>1),\] where the last inequality is a consequence of Remark 3.2. 
For the proof of (3.7) it remains to consider the dual process \(\widehat{\mathbf{X}}\), use (2.9) and proceed in exactly the same way. Thus, by the semigroup property, (2.13), (3.6) and (3.7), and Lemma 3.1 with (2.9), \[p_{2}^{D}(x,y) =\int_{D}p_{1}^{D}(x,z)p_{1}^{D}(z,y)\,\mathrm{d}z\] \[=\int_{D}p_{1}^{D}(x,z)\widehat{p}_{1}^{D}(y,z)\,\mathrm{d}z\] \[\leqslant c\mathbb{P}_{x}(\tau_{D}>1)\widehat{\mathbb{P} }_{y}(\tau_{D}>1)\int_{D}p_{1}(x,z)p_{1}(z,y)\,\mathrm{d}z\] \[\leqslant c\mathbb{P}_{x}(\tau_{D}>2)\widehat{\mathbb{P} }_{y}(\tau_{D}>2)p_{2}(x,y).\] **Remark 3.5**.: In fact, under the assumptions of Lemma 3.4 we get that \[p_{1}^{D}(x,y)\leqslant c\mathbb{P}_{x}(\tau_{D}>1)\widehat{\mathbb{P}}_{y}( \tau_{D}>1)p_{1}(x,y). \tag{3.8}\] For the proof let us consider the modified Levy measure \(\widetilde{\nu}=\frac{1}{2}\nu\), the corresponding kernels \(\widetilde{p}\) and \(\widetilde{p}^{D}\), and the probability distribution \(\widetilde{\mathbb{P}}\); note that the resulting process is again strictly \(\alpha\)-stable and satisfies \(\mathbf{A}\). Then by elementary calculations one gets that \(\widetilde{p}_{t}^{D}(x,y)=p_{t/2}^{D}(x,y)\) for all \(x,y\in D\) and all \(t>0\). It follows from Lemma 3.4 that \[p_{1}^{D}(x,y) =\widetilde{p}_{2}^{D}(x,y)\] \[\leqslant c\widetilde{\mathbb{P}}_{x}(\tau_{D}>2)\widetilde{ \widehat{\mathbb{P}}}_{y}(\tau_{D}>2)\widetilde{p}_{2}(x,y)\] \[=c\mathbb{P}_{x}(\tau_{D}>1)\widehat{\mathbb{P}}_{y}( \tau_{D}>1)p_{1}(x,y).\] Now we deal with the lower estimate of the Dirichlet heat kernel. **Lemma 3.6**.: _Let \(r>0\). There is \(c=c(\alpha,d,\lambda,r)\) such that_ \[p_{1}^{B(u,r)\cup B(v,r)}(u,v)\geqslant cp_{1}(u,v),\quad u,v\in\mathbb{R}^{d}.\] Proof.: If \(|u-v|>r/2\), then we apply Lemma 3.3 with \(D=B(u,r)\cup B(v,r)\), \(D_{1}=B(u,r/8)\) and \(D_{3}=B(v,r/8)\), so that \[p_{1}^{B(u,r)\cup B(v,r)}(u,v) \geqslant\mathbb{P}_{u}(\tau_{D_{1}}>1)\widehat{\mathbb{P}}_{v}( \tau_{D_{3}}>1)\inf_{w\in D_{1},z\in D_{3}}\nu(z-w)\] \[\geqslant c\mathbb{P}_{0}(\tau_{B_{r/8}}>1)\widehat{\mathbb{P}}_{ 0}(\tau_{B_{r/8}}>1)p_{1}(u,v)\] \[=cp_{1}(u,v),\] where the second inequality follows from (2.9). 
The case \(|u-v|\leqslant r/2\) is even simpler; by domain monotonicity and translation invariance, together with the continuity and strict positivity of \(p_{1}^{B_{r}}\) (see [63, Theorem 3.2(4)]) and (2.8), we then have \[p_{1}^{B(u,r)\cup B(v,r)}(u,v)\geqslant\inf_{|z|<r/2}p_{1}^{B_{r}}(0,z) \geqslant c\geqslant cp_{1}(u,v).\] **Lemma 3.7**.: _If \(D\) is \((\kappa,1)\)-fat at \(x\) and \(y\), then there is \(c=c(\alpha,d,\lambda,\kappa)\) such that_ \[p_{3}^{D}(x,y)\geqslant c\mathbb{P}_{x}(\tau_{D}>3)p_{3}(x,y)\widehat{\mathbb{P} }_{y}(\tau_{D}>3).\] Proof.: By the semigroup property, Lemma 3.6 with \(r=\kappa/6\) and (2.8), we have \[p_{3}^{D}(x,y) \geqslant\int_{B_{2}^{x,1}}\int_{B_{2}^{y,1}}p_{1}^{D}(x,u)p_{1}^ {D}(u,v)p_{1}^{D}(v,y)\,\mathrm{d}u\,\mathrm{d}v\] \[\geqslant cp_{1}(x,y)\int_{B_{2}^{x,1}}p_{1}^{D}(x,u)\,\mathrm{d}u \int_{B_{2}^{y,1}}\widehat{p}_{1}^{D}(y,v)\,\mathrm{d}v.\] For \(u\in B_{2}^{x,1}=B(A_{1}^{\prime}(x),\kappa/6)\), using Lemma 3.3 with \(D_{1}=U^{x,1}\) and \(D_{3}=B(A_{1}^{\prime}(x),\kappa/4)\), (2.6) and Remark 3.2, we get \[p_{1}^{D}(x,u) \geqslant\mathbb{P}_{x}(\tau_{D_{1}}>1)\widehat{\mathbb{P}}_{u}( \tau_{D_{3}}>1)\inf_{w\in D_{1},z\in D_{3}}\nu(z-w)\] \[\geqslant c\mathbb{P}_{x}(\tau_{D_{1}}>1)\widehat{\mathbb{P}}_{0}( \tau_{B_{\kappa/12}}>1)\] \[\geqslant c\mathbb{P}_{x}(\tau_{D}>1).\] In the same way we obtain, for \(v\in B_{2}^{y,1}\), \[\widehat{p}_{1}^{D}(y,v)\geqslant c\widehat{\mathbb{P}}_{y}(\tau_{D}>1).\] Therefore, by Lemma 3.1 and (2.10), \[p_{3}^{D}(x,y) \geqslant c\mathbb{P}_{x}(\tau_{D}>1)p_{1}(x,y)\widehat{\mathbb{P} }_{y}(\tau_{D}>1)\] \[\geqslant c\mathbb{P}_{x}(\tau_{D}>3)p_{3}(x,y)\widehat{\mathbb{P} }_{y}(\tau_{D}>3).\] **Remark 3.8**.: Proceeding exactly as in Remark 3.5 we conclude that under the assumptions of Lemma 3.7 it holds that \[p_{1}^{D}(x,y)\geqslant c\mathbb{P}_{x}(\tau_{D}>1)\widehat{\mathbb{P}}_{y}( \tau_{D}>1)p_{1}(x,y). \tag{3.9}\] Proof of Theorem 1.1.: Suppose first that \(R\in(0,\infty)\) and \(c=1\). Then, for \(0<t\leqslant R^{\alpha}\), the set \(t^{-1/\alpha}D\) is \((\kappa,1)\)-fat and the result follows from (3.8), (3.9) and scaling properties (2.4), (2.15) and (2.17) with \(C=C(\alpha,d,\lambda,\kappa)\). For \(c>1\) we use the fact that if \(D\) is \((\kappa,R)\)-fat, then it is also \((\kappa/a,aR)\)-fat for any \(a\in[1,\infty)\). Thus, setting \(a=c^{1/\alpha}\) we arrive at the previous setting at the cost of worsening the constant. In this case \(C=C(\alpha,d,\lambda,\kappa,c)\). For the case \(R=\infty\) we note that by the convention adopted in Section 2, \(D\) is \((\kappa,r)\)-fat for all \(r>0\) with \(\kappa\) independent of \(r\). Thus, for any \(t>0\) one can pick \(r>0\) such that \(r^{\alpha}\geqslant t\) and apply the reasoning above to conclude the claim. This time the constant does not depend on \(c\), i.e. \(C=C(\alpha,d,\lambda,\kappa)\). ## 4. Stable processes in generalised \(\kappa\)-fat cones Now we specify our analysis to open sets \(\Gamma\subseteq\mathbb{R}^{d}\) which are invariant under rescaling, i.e. for every \(r>0\), \(ry\in\Gamma\) whenever \(y\in\Gamma\). In other words, from this moment on, \(\Gamma\) is a generalised cone in \(\mathbb{R}^{d}\). Thus, if \(0\in\Gamma\), then necessarily \(\Gamma=\mathbb{R}^{d}\). Otherwise, \(\Gamma\) can be characterised by its intersection with the unit sphere \(\mathbb{S}^{d-1}\). For example, for \(d=1\) we have four possibilities: \(\Gamma=\mathbb{R}\), \(\Gamma=\mathbb{R}\setminus\{0\}\), \(\Gamma=(0,\infty)\) and \(\Gamma=(-\infty,0)\). 
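For a concrete instance of the fatness condition from Section 2 in this setting, consider \(\Gamma=(0,\infty)\): for every \(Q\in\overline{\Gamma}=[0,\infty)\) and \(r>0\) one may take \(A_{r}(Q)=Q+r/2\), so that \[B\big{(}A_{r}(Q),r/2\big{)}=(Q,Q+r)\subseteq\Gamma\cap B(Q,r),\] i.e. \(\Gamma\) is \((1/2,r)\)-fat for every \(r>0\); the mirrored choice of \(A_{r}(Q)\) handles \(Q<0\), so \(\mathbb{R}\setminus\{0\}\) is \((1/2,r)\)-fat as well. 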
We note that by the scaling invariance, if \(\Gamma\) is \((\kappa,r)\)-fat for some \(r>0\), then it is in fact \((\kappa,r)\)-fat for every \(r>0\), with the constant \(\kappa\) independent of \(r\). In what follows, it will be convenient to fix a reference point \(\mathbf{1}=(0,\dots,0,1)\in\mathbb{R}^{d}\). Note that if \(\mathbf{1}\notin\Gamma\), then one can consider an appropriate rotation matrix \(U\in\mathcal{M}_{d}\) such that \(\mathbf{1}\in\{Ux:x\in\Gamma\}\). It is then easy to see that \(U\mathbf{X}:=(UX_{t}\colon t\geqslant 0)\) is also a strictly stable Levy process with the spherical measure \(\widetilde{\zeta}\) given by \(\widetilde{\zeta}(B)=\zeta(\{x\colon Ux\in B\})\) and the drift component \(\widetilde{\gamma}:=U\gamma+\int Ux(\mathbf{1}_{B_{1}}(Ux)-\mathbf{1}_{B_{1}}( x))\,\nu(\mathrm{d}x)\), c.f. [54, Proposition 11.10]. Clearly, if \(\mathbf{X}\) satisfies (2.5), then so does \(U\mathbf{X}\) (with the same constant \(\theta\)). Thus, at the possible cost of _rotating_ the cone \(\Gamma\) and the underlying process \(\mathbf{X}\), we may and do assume that \(\mathbf{1}\in\Gamma\). ### Martin kernel for the cone and its properties First, we collect and apply results from general theory [38, 39] to establish the existence and basic properties of the Martin kernel for arbitrary \(\kappa\)-fat cones \(\Gamma\). Given \(R>0\), we define \(\Gamma_{R}:=\Gamma\cap B_{R}\). The following is a version of [3, Theorem 3.2] for anisotropic \(\alpha\)-stable processes. **Lemma 4.1**.: _Assume \(\Gamma\) is a \(\kappa\)-fat cone in \(\mathbb{R}^{d}\) and suppose \(\mathbf{X}\) is a strictly stable Levy process satisfying \(\mathbf{A}\). There is a unique non-negative function \(M_{\Gamma}\) on \(\mathbb{R}^{d}\) such that \(M_{\Gamma}(\mathbf{1})=1\), \(M_{\Gamma}(x)=0\) for \(x\in\Gamma^{c}\) and \(M_{\Gamma}\) is regular harmonic with respect to \(\mathbf{X}\) in every open bounded subset \(B\subseteq\Gamma\), i.e._ \[M_{\Gamma}(x)=\mathbb{E}_{x}M_{\Gamma}(X_{\tau_{B}}),\quad x\in\Gamma.\] _Moreover, \(M_{\Gamma}\) is locally bounded and homogeneous of order \(\beta=\beta(\alpha,\lambda,\Gamma)\in[0,\alpha)\), i.e._ \[M_{\Gamma}(x)=|x|^{\beta}M_{\Gamma}(x/|x|),\quad x\in\Gamma. \tag{4.1}\] _If \(\Gamma\) is Greenian, then one also has_ \[M_{\Gamma}(x)=\lim_{\Gamma\ni y,|y|\to\infty}\frac{G_{\Gamma}(x,y)}{G_{\Gamma }(\mathbf{1},y)}.\] The function \(M_{\Gamma}\) is called the Martin kernel with pole at infinity for the cone \(\Gamma\) and the process \(\mathbf{X}\). Proof of Lemma 4.1.: We first establish the existence of a function satisfying the desired properties. If \(d=1\) and \(\alpha=1\), then singletons are polar sets (see, for example, [54, Chapter 8]). Thus, for \(\Gamma=\mathbb{R}\) or \(\Gamma=\mathbb{R}\setminus\{0\}\) we see that \(M_{\Gamma}=\mathds{1}_{\Gamma}\) satisfies the required properties with \(\beta=0\). The same candidate works for \(d=1\), \(\alpha\in(1,2)\) and \(\Gamma=\mathbb{R}\). Thus, we may assume that \(\Gamma\) is Greenian. Since \(G_{\Gamma}(x,y)=0\) when \(x\in\Gamma^{c}\) and \(y\in\mathbb{R}^{d}\), we may and do assume that \(x\in\Gamma\). By [38, Example 5.1], \(\mathbf{X}\) satisfies all the assumptions stated in [38]; thus, by [39, Proposition 4.1, Remark 4.2(b) and the discussion below it], infinity is accessible from \(\Gamma\) with respect to \(\mathbf{X}\), i.e. \(\mathbb{E}_{x}\tau_{\Gamma}=\infty\) for every \(x\in\Gamma\). 
Then it follows from [39, Theorem 1.3(b)] that the limit \[M_{\Gamma}(x)=\lim_{\Gamma\ni y,|y|\to\infty}\frac{G_{\Gamma}(x,y)}{G_{\Gamma }(\mathbf{1},y)}\] exists and is finite for every \(x\in\Gamma\). Moreover, by [39, proof of Theorem 1.3(b)], for every bounded open set \(B\subseteq\Gamma\), \[M_{\Gamma}(x)=\mathbb{E}_{x}M_{\Gamma}(X_{\tau_{B}}),\quad x\in B.\] That is, \(M_{\Gamma}\) is regular harmonic in \(B\). The existence part is thus established. In the remaining part we follow the proof of [3, Theorem 3.2] to derive its basic properties. First, we will show that \(M_{\Gamma}\) is locally bounded and homogeneous of order \(\beta=\beta(\alpha,\lambda,\Gamma)\in[0,\alpha)\), i.e. \[M_{\Gamma}(x)=|x|^{\beta}M_{\Gamma}(x/|x|),\quad x\in\Gamma. \tag{4.2}\] Indeed, in the non-Greenian case one has \(M_{\Gamma}=\mathds{1}_{\Gamma}\) as above. Then \(\beta=0\) and \(M_{\Gamma}\) is locally bounded. Thus, we assume that \(\Gamma\) is Greenian, fix \(R>0\) and let \(x\in\Gamma_{R}\). Since \(G_{\Gamma}(x,\,\cdot\,)\) is regular co-harmonic in \(\Gamma\setminus\Gamma_{8R}\), by the boundary Harnack principle at infinity [38, Corollary 2.2, Remark 2.3 and Example 5.1], there is \(c>0\) such that for some fixed \(y_{0}\in\Gamma\setminus\Gamma_{8R}\), \[\frac{G_{\Gamma}(x,y)}{G_{\Gamma}(\mathbf{1},y)}\leqslant c\frac{G_{\Gamma}(x,y_{0})}{G_{\Gamma}(\mathbf{1},y_{0})},\quad y\in\Gamma\setminus\Gamma_{8R}.\] Thus, \(M_{\Gamma}\) is locally bounded. Next, recall that \(k\Gamma=\Gamma\) for every \(k>0\). Therefore, by the scaling property of \(G_{\Gamma}\) (2.18), for every \(x,y\in\Gamma\), \[\frac{G_{\Gamma}(kx,y)}{G_{\Gamma}(\mathbf{1},y)}\cdot\frac{G_{\Gamma}(\mathbf{ 1},y)}{G_{\Gamma}(k\mathbf{1},y)}=\frac{G_{\Gamma}(kx,y)}{G_{\Gamma}(k\mathbf{ 1},y)}=\frac{k^{-d+\alpha}G_{\Gamma}(x,k^{-1}y)}{k^{-d+\alpha}G_{\Gamma}( \mathbf{1},k^{-1}y)}=\frac{G_{\Gamma}(x,k^{-1}y)}{G_{\Gamma}(\mathbf{1},k^{-1} y)}.\] By taking the limit as \(|y|\to\infty\) we see that \[M_{\Gamma}(kx)=M_{\Gamma}(x)M_{\Gamma}(k\mathbf{1}).\] It follows that \[M_{\Gamma}(kl\mathbf{1})=M_{\Gamma}(k\mathbf{1})M_{\Gamma}(l\mathbf{1}),\] for every positive \(k,l\). Note that in view of [63, Theorem 6.7], \(M_{\Gamma}\) is continuous on \(\Gamma\). Thus, there is \(\beta\in\mathbb{R}\) such that \(M_{\Gamma}(k\mathbf{1})=k^{\beta}M_{\Gamma}(\mathbf{1})=k^{\beta}\) and hence, \[M_{\Gamma}(kx)=k^{\beta}M_{\Gamma}(x),\quad x\in\Gamma.\] The local boundedness of \(M_{\Gamma}\) implies that \(\beta\geqslant 0\) and Lemma 2.2 entails that \(\beta\leqslant\gamma<\alpha\). Thus, \(\beta\in[0,\alpha)\). To conclude the proof, it remains to note that the uniqueness may be verified using the boundary Harnack principle (BHP) exactly as in the proof of [3, Theorem 3.2]. **Remark 4.2**.: By inspecting the proof above, one can quickly verify that exactly the same reasoning may be applied to the dual process \(\widehat{\mathbf{X}}\). Therefore, the Martin kernel \(\widehat{M}_{\Gamma}\) for the cone exists, too, and the statements of Lemma 4.1 and Corollary 4.3 remain in force, when one replaces \(G_{\Gamma}\) by \(\widehat{G}_{\Gamma}\), \(M_{\Gamma}\) by \(\widehat{M}_{\Gamma}\) and \(\beta\) by \(\widehat{\beta}\). Note that Lemma 4.1 does not give information on the relation between \(\beta\) and \(\widehat{\beta}\), c.f. Remark 4.4(2) and Example 4.7. The following version of [10, Theorem 2] is now an immediate corollary of Lemma 4.1 and Remark 4.2. **Corollary 4.3**.: _Suppose \(\Gamma\) is a \(\kappa\)-fat cone. 
Then for all \(t>0\) and \(x,y\in\Gamma\), we have_ \[\mathbb{P}_{x}(\tau_{\Gamma}>t)\approx\frac{M_{\Gamma}(x)}{M_{\Gamma}(A_{t^{ 1/\alpha}}(x))}\qquad\text{and}\qquad\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>t )\approx\frac{\widehat{M}_{\Gamma}(y)}{\widehat{M}_{\Gamma}(A_{t^{1/\alpha}} (y))}.\] _Furthermore,_ \[p_{t}^{\Gamma}(x,y)\approx\frac{M_{\Gamma}(x)}{M_{\Gamma}(A_{t^{1/\alpha}}(x)) }p_{t}(x,y)\frac{\widehat{M}_{\Gamma}(y)}{\widehat{M}_{\Gamma}(A_{t^{1/\alpha }}(y))},\quad t>0,\,x,y\in\Gamma.\] _The comparability constants depend at most on \(d,\alpha,\kappa\) and \(\lambda\)._ Proof.: Recall that \(\Gamma\) is \((\kappa,r)\)-fat for every \(r>0\) with the constant \(\kappa\) independent of \(r\). Thus, with Lemma 4.1, Remark 3.2, (BHP) and Theorem 1.1 at hand, the proof mimics that of [10, Theorem 2] and is therefore omitted. The dual counterpart follows by Remark 4.2. **Remark 4.4**.: 1. Proceeding as at the end of the proof of [3, Theorem 3.2] with the help of [63, Proposition 6.1(3)], one can see that \(\beta=0\) if and only if \(\Gamma^{c}\) is polar for \(\mathbf{X}\). Moreover, the following _monotonicity_ of the homogeneity index holds: if \(\gamma,\Gamma\) are \(\kappa\)-fat cones in \(\mathbb{R}^{d}\) and \(\gamma\subseteq\Gamma\), then \(\beta(\alpha,\lambda,\gamma)\geqslant\beta(\alpha,\lambda,\Gamma)\) and the equality holds if and only if \(\Gamma\setminus\gamma\) is a polar set for \(\mathbf{X}\). This is proved exactly as in [3, Lemma 3.3]. 2. By [54, Theorem 42.30], \(\Gamma^{c}\) is polar for \(\mathbf{X}\) if and only if it is polar for the dual process \(\widehat{\mathbf{X}}\). Therefore, using part (1) we conclude that \(\beta=0\) if and only if \(\widehat{\beta}=0\). **Remark 4.5**.: By [63, Theorem 6.7], \(M_{\Gamma}\) is continuous in \(\Gamma\). Let us discuss some examples where the Martin kernel can be explicitly identified. Note that even in the isotropic setting, there are but a few cases when the exponent \(\beta\) and the formula for \(M_{\Gamma}\) are explicitly known, c.f. [3] and [49]. If we step outside the isotropic world, then the situation gets more complicated. The simplest case was already hinted at in the proof of Lemma 4.1. **Example 4.6**.: Let \(\Gamma\) be \(\kappa\)-fat and such that \(\Gamma^{c}\) is a polar set. Then \(M_{\Gamma}=\mathds{1}_{\Gamma}\) satisfies the required properties and by Lemma 4.1, it is the Martin kernel for \(\Gamma\). This class includes \(\Gamma=\mathbb{R}^{d}\) for \(\alpha\in(0,2)\) and \(\Gamma=\mathbb{R}^{d}\setminus\{x_{d}=0\}\) for \(\alpha\in(0,1]\), since then by [54, Theorem 42.30], \(\{x_{d}=0\}\) is a polar set. Now let \(\Gamma\) be the half-space \(\mathbb{H}_{d}:=\{x\in\mathbb{R}^{d}\colon x_{d}>0\}\) in \(\mathbb{R}^{d}\). In the rotation-invariant case, we have by [3, Example 3.2] that \(M_{\Gamma}(x)=\delta_{\Gamma}(x)^{\alpha/2}\) and \(\beta=\alpha/2\). In view of the boundary Harnack inequality, this exponent is in line with the typical rate of decay of harmonic functions in smooth domains, c.f. Chen and Song [24] and Kulczycki [41]. It turns out that dropping the symmetry assumption results in a decay rate depending on the asymmetry of the process, c.f. Juszczyszyn [36, Theorem 1.1]. This is also reflected in the homogeneity order of \(M_{\Gamma}\), as we argue below. Recall that every finite measure \(\zeta\) on the sphere \(\mathbb{S}^{d-1}\) induces a strictly \(\alpha\)-stable Levy process on \(\mathbb{R}^{d}\). 
**Example 4.7**.: Suppose that \(\zeta\) has a density \(\lambda\) with respect to the surface measure \(\sigma\), which is Hölder continuous on \(\mathbb{S}^{d-1}\) of (some fixed) order \(\varepsilon>0\). Let \(\Gamma=\mathbb{H}_{d}\) be a half-space in \(\mathbb{R}^{d}\). By [36, Lemma 2.16], the function \(h(x):=\delta_{\Gamma}(x)^{\gamma}\) is regular harmonic in every bounded open subset of \(\mathbb{H}_{d}\), where \(\gamma=\alpha\mathbb{P}(\langle X_{1},\mathbf{1}\rangle>0)\). By Lemma 4.1, it is the Martin kernel for \(\Gamma\) for the strictly \(\alpha\)-stable Lévy process \(\mathbf{X}\) corresponding to the spherical density \(\lambda\), with the homogeneity order \(\beta=\alpha\mathbb{P}(\langle X_{1},\mathbf{1}\rangle>0)\). Let us take a closer look at the exponent \(\beta\). It is clear that if \(\mathbf{X}\) is symmetric (not necessarily isotropic), then \(\beta=\alpha/2\) as in [3, Example 3.2]. In general, the process \(Y_{t}:=\langle X_{t},\mathbf{1}\rangle\) is a one-dimensional strictly \(\alpha\)-stable Lévy process with the (one-dimensional) Lévy density \(\nu(z)=c_{-}|z|^{-1-\alpha}\mathds{1}_{z<0}+c_{+}|z|^{-1-\alpha}\mathds{1}_{z>0}\), where \[c_{-}=\int_{\mathbb{S}^{d-1}\cap\{x_{d}>0\}}\lambda(-w)w_{d}^{\alpha}\,\mathrm{d}w\qquad\text{and}\qquad c_{+}=\int_{\mathbb{S}^{d-1}\cap\{x_{d}>0\}}\lambda(w)w_{d}^{\alpha}\,\mathrm{d}w,\] see e.g. [36, Lemma 2.12]. Let \(\alpha\neq 1\). By Zolotarev [69], one has \[\beta=\alpha\mathbb{P}(Y_{1}>0)=\frac{\alpha}{2}+\frac{1}{\pi}\arctan\bigg{(}\frac{c_{+}-c_{-}}{c_{+}+c_{-}}\tan\frac{\pi\alpha}{2}\bigg{)}.\] In particular, choosing \(\lambda\) so that \(c_{+}\neq c_{-}\) yields \(\beta\neq\alpha/2\). The feature presented in Example 4.7 has an expected but nonetheless interesting consequence. We still assume \(\Gamma=\mathbb{H}_{d}\) and let \(\widehat{M}_{\Gamma}\) be the Martin kernel for the dual process \(\widehat{\mathbf{X}}\) for \(\Gamma\) with the homogeneity order \(\widehat{\beta}\). Continuing with the notation from Example 4.7, it is clear that \(\widehat{\lambda}(w)=\lambda(-w)\) for \(w\in\mathbb{S}^{d-1}\). Thus, \(\widehat{c}_{-}=c_{+}\), \(\widehat{c}_{+}=c_{-}\) and consequently, \(\widehat{\beta}\neq\beta\) unless \(c_{+}=c_{-}\). Put differently, even in the simple case of the half-space, the homogeneity (growth) orders \(\beta\) and \(\widehat{\beta}\) may be different. Consequently, in general one cannot expect the same exponents (that is: \(\beta=\widehat{\beta}\)) for dual processes \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\). We continue with Example 4.7 and use it to derive explicit estimates on the Dirichlet heat kernel for \(\mathbb{H}_{d}\). **Example 4.8**.: Again let \(\Gamma=\mathbb{H}_{d}\), \(t>0\) and suppose that \(x,y\in\Gamma\). Note that for every \(r>0\) we have \(\delta_{\Gamma}(A_{r}(x))\approx r\vee\delta_{\Gamma}(x)\).
Corollary 4.3 together with Example 4.7 then imply \[\mathbb{P}_{x}(\tau_{\Gamma}>t)\approx\frac{M_{\Gamma}(x)}{M_{\Gamma}(A_{t^{1/\alpha}}(x))}=\bigg{(}\frac{\delta_{\Gamma}(x)}{t^{1/\alpha}\vee\delta_{\Gamma}(x)}\bigg{)}^{\beta}\approx\bigg{(}1\wedge\frac{\delta_{\Gamma}(x)}{t^{1/\alpha}}\bigg{)}^{\beta},\quad t>0,\,x\in\Gamma.\] Similarly, \[\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>t)\approx\bigg{(}1\wedge\frac{\delta_{\Gamma}(y)}{t^{1/\alpha}}\bigg{)}^{\widehat{\beta}},\quad t>0,\,y\in\Gamma.\] Therefore, again by Corollary 4.3, the global explicit two-sided estimate holds: \[p_{t}^{\Gamma}(x,y)\approx\bigg{(}1\wedge\frac{\delta_{\Gamma}(x)}{t^{1/\alpha}}\bigg{)}^{\beta}p_{t}(x,y)\bigg{(}1\wedge\frac{\delta_{\Gamma}(y)}{t^{1/\alpha}}\bigg{)}^{\widehat{\beta}},\quad t>0,\,x,y\in\Gamma.\] As a side remark, we observe that in this example one has \(\beta+\widehat{\beta}=\alpha\mathbb{P}(Y_{1}\neq 0)\). Thus, by [54, Theorem 42.30], \(\beta+\widehat{\beta}=\alpha\) if and only if \(\alpha\in(0,1]\). We also note that in view of [36, Lemma 2.14], in this case there are constants \(\beta_{\min},\beta_{\max}\) satisfying \(\beta_{\min}>\max\{0,\alpha-1\}\), \(\beta_{\max}<\min\{\alpha,1\}\), \(\beta_{\min}+\beta_{\max}=\alpha\) and such that \(\beta,\widehat{\beta}\in[\beta_{\min},\beta_{\max}]\). ### Yaglom limit in cones In the remaining part of the paper we apply the method developed in [13] to obtain the Yaglom limit for non-symmetric strictly \(\alpha\)-stable Lévy processes in arbitrary \(\kappa\)-fat cones. Its versatility lies in the fact that once the necessary tools are available, the existence of the limit in Theorem 1.2 follows by a similar argument. These building blocks are: the boundary Harnack inequality, sharp two-sided estimates on the Dirichlet heat kernel of the cone, and nice scaling properties of a \(P_{t}^{\Gamma}\)-invariant function, which is later used for a version of Doob conditioning of the process killed when exiting the cone \(\Gamma\). In our case, the first follows from Bogdan et al. [14], the second is provided by Theorem 1.1, and the third is a consequence of properties of the Martin kernel \(M_{\Gamma}\) for the cone, which are gathered in Lemma 4.1. With these tools at hand, we follow step by step the procedure described in [13]. For the convenience of the reader, we provide all the details, unless the extension is immediate and requires no changes in the proof. Let \(\Gamma\) be a fixed \(\kappa\)-fat cone in \(\mathbb{R}^{d}\) such that \(\mathbf{1}\in\Gamma\). Recall that by Lemma 4.1, \(M_{\Gamma}\) is homogeneous of order \(\beta\in[0,\alpha)\) and regular harmonic in every bounded subset of \(\Gamma\). Thus, following [13, Theorem 3.1] one can prove that \(M_{\Gamma}\) is invariant for the killed semigroup \(P_{t}^{\Gamma}\), i.e. for every \(t>0\) and \(x\in\Gamma\), \[P_{t}^{\Gamma}M_{\Gamma}(x)=M_{\Gamma}(x). \tag{4.3}\] These two properties justify the following definition of the (version of) conditioned kernel \[\rho_{t}(x,y)=\frac{p_{t}^{\Gamma}(x,y)}{M_{\Gamma}(x)\widehat{M}_{\Gamma}(y)},\quad x,y\in\Gamma,\,t>0. \tag{4.4}\] Using (4.3) it is easy to see that \[\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=1,\quad x\in\Gamma,\,t>0, \tag{4.5}\] and \[\int_{\Gamma}\rho_{t}(x,y)\rho_{s}(y,z)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=\rho_{t+s}(x,z),\quad x,z\in\Gamma,\,t,s>0.
\tag{4.6}\] Thus, \(\rho_{t}\) is a transition probability density on \(\Gamma\) with respect to the measure \(M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\). We note that \(\rho_{t}\)_need not_ be symmetric. By (2.15) and Lemma 4.1, the following scaling property holds for \(\rho\): for all \(x,y\in\Gamma\) and all \(t>0\), \[\rho_{t}(x,y)=\frac{t^{-d/\alpha}p_{1}^{\Gamma}(t^{-1/\alpha}x,t^{-1/\alpha}y)}{t^{(\beta+\widehat{\beta})/\alpha}M_{\Gamma}(t^{-1/\alpha}x)\widehat{M}_{\Gamma}(t^{-1/\alpha}y)}=t^{-(d+\beta+\widehat{\beta})/\alpha}\rho_{1}(t^{-1/\alpha}x,t^{-1/\alpha}y). \tag{4.7}\] Put differently, \[\rho_{st}\big{(}t^{1/\alpha}x,t^{1/\alpha}y\big{)}=t^{-(d+\beta+\widehat{\beta})/\alpha}\rho_{s}(x,y),\quad x,y\in\Gamma,\,s,t>0. \tag{4.8}\] By Theorem 1.1, we also have \[\rho_{t}(x,y)\approx\frac{\mathbb{P}_{x}(\tau_{\Gamma}>t)}{M_{\Gamma}(x)}p_{t}(x,y)\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>t)}{\widehat{M}_{\Gamma}(y)},\quad x,y\in\Gamma,\,t>0. \tag{4.9}\] The structure of the factorisation (4.9) is one of the reasons why we choose to define \(\rho_{t}\) as in (4.4) instead of the classical Doob h-transform using the invariant function \(M_{\Gamma}\). In what follows, it will be crucial to control the expression \(\mathbb{P}_{x}(\tau_{\Gamma}>t)/M_{\Gamma}(x)\) and its dual counterpart \(\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>t)/\widehat{M}_{\Gamma}(y)\) for \(x,y\in\Gamma\). With the boundary Harnack principle (BHP), the Ikeda-Watanabe formula (2.19) and Lemma 4.1 at hand, one can directly repeat the proof of [3, Lemma 4.2] and its extension [13, Lemma 3.2] to get the following. **Proposition 4.9**.: 1. _For every_ \(R\in(0,\infty)\)_, there exists a constant_ \(c_{1}=c_{1}(\alpha,\Gamma,\lambda,R)\) _such that_ \[c_{1}^{-1}M_{\Gamma}(x)t^{-\beta/\alpha}\leqslant\mathbb{P}_{x}(\tau_{\Gamma}>t)\leqslant c_{1}M_{\Gamma}(x)t^{-\beta/\alpha},\quad x\in\Gamma_{Rt^{1/\alpha}},\,t>0.\] 2. _There exists a constant_ \(c_{2}=c_{2}(\alpha,\Gamma,\lambda)\)_, such that_ \[\mathbb{P}_{x}(\tau_{\Gamma}>t)\leqslant c_{2}\big{(}t^{-\beta/\alpha}+t^{-1}|x|^{\alpha-\beta}\big{)}M_{\Gamma}(x),\quad x\in\Gamma,\,t>0.\] 3. _Statements_ (1) _and_ (2) _hold true for the dual process_ \(\widehat{\mathbf{X}}\)_, too._ Proposition 4.9 together with (4.9) and (2.8) yield two crucial estimates on the conditioned kernel \(\rho_{t}\). First, from (1) we get that \[\rho_{1}(x,y)\approx(1+|y|)^{-d-\alpha}\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)},\quad x\in\Gamma_{R},\,y\in\Gamma, \tag{4.10}\] with the implied constant dependent at most on \(\alpha\), \(\Gamma\), \(\lambda\) and \(R\). By parts (2) and (3), one also concludes that there is a constant \(c=c(\alpha,\Gamma,\lambda,R)\), such that \[\rho_{1}(x,y)\leqslant c(1+|y|)^{-d-\widehat{\beta}},\quad x\in\Gamma_{R},\,y\in\Gamma. \tag{4.11}\] In a similar way, one also obtains \[\rho_{1}(x,y)\leqslant c(1+|x|)^{-d-\beta},\quad y\in\Gamma_{R},\,x\in\Gamma. \tag{4.12}\] **Remark 4.10**.: We note that it follows from (4.5) and (4.10) that the function \[\Gamma\ni y\mapsto(1+|y|)^{-d-\alpha}\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)}\] is integrable with respect to \(M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\). Following [12] and [13], we now apply a change of time and rescaling of space by setting \[\ell_{t}(x,y):=\rho_{1-e^{-t}}(e^{-t/\alpha}x,y),\quad x,y\in\Gamma,\,t>0.
\tag{4.13}\] Using (4.5) and (4.6) one can quickly verify that \[\int_{\Gamma}\ell_{t}(x,y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=1,\quad x\in\Gamma,\,t>0, \tag{4.14}\] and \[\int_{\Gamma}\ell_{t}(x,y)\ell_{s}(y,z)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=\ell_{s+t}(x,z),\quad x,z\in\Gamma,\,s,t>0. \tag{4.15}\] To wit, \(\ell_{t}\) is a transition probability density on \(\Gamma\) with respect to \(M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\). The corresponding (Ornstein-Uhlenbeck type) semigroup is defined as follows: \[L_{t}f(y)=\int_{\Gamma}\ell_{t}(x,y)f(x)M_{\Gamma}(x)\widehat{M}_{\Gamma}(x)\,\mathrm{d}x,\quad y\in\Gamma,\,t>0. \tag{4.16}\] It is easy to see that the operators \(L_{t}\) are bounded and linear, thus continuous on \(L^{1}(M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y)\). Moreover, we call a non-negative function \(\varphi\) satisfying \(\int_{\Gamma}\varphi(x)M_{\Gamma}(x)\widehat{M}_{\Gamma}(x)\,\mathrm{d}x=1\) a _density_. The Fubini-Tonelli theorem entails that for every \(f\geqslant 0\), \[\int_{\Gamma}L_{t}f(y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=\int_{\Gamma}f(x)M_{\Gamma}(x)\widehat{M}_{\Gamma}(x)\,\mathrm{d}x.\] Thus, the operators \(L_{t}\) preserve densities. The following result is crucial for our development. As we shall see, the function \(\varphi\) from Theorem 4.11 will provide a density of the Yaglom measure, see Theorem 4.18. **Theorem 4.11**.: _Let \(\Gamma\) be a \(\kappa\)-fat cone. There is a unique stationary density \(\varphi\) for the operators \(L_{t}\), \(t>0\)._ The proof of Theorem 4.11 follows the one of [13, Theorem 3.4] with the necessary adaptations resulting from a (slightly) different reference measure and the lack of symmetry of the kernels \(\rho_{t}\). For this reason, we provide the key calculations for the convenience of the reader. Proof of Theorem 4.11.: Fix \(t>0\). First observe that by (4.8), for a non-negative function \(f\), \[L_{t}f(z) =\int_{\Gamma}\rho_{1-e^{-t}}\big{(}e^{-t/\alpha}y,z\big{)}f(y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\] \[=\int_{\Gamma}e^{t(d+\beta+\widehat{\beta})/\alpha}\rho_{e^{t}-1}(y,e^{t/\alpha}z)f(y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y.\] Consider a family \(F\) of non-negative functions \(f\) of the form \[f(y)=\int_{\Gamma_{1}}\rho_{1}(x,y)\,\eta(\mathrm{d}x),\quad y\in\Gamma,\] where \(\eta\) is some sub-probability measure concentrated on \(\Gamma_{1}\). We first observe that by (4.10), for every \(f\in F\), \[f(y)\lesssim(1+|y|)^{-d-\alpha}\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)},\quad y\in\Gamma,\] with the implied constant dependent at most on \(\alpha,\Gamma\) and \(\lambda\). Thus, by Remark 4.10, the family \(F\) is uniformly integrable with respect to \(M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\). Moreover, by the Fubini-Tonelli theorem, (4.6) and (4.8), for \(f\in F\), \[L_{t}f(z) =\int_{\Gamma_{1}}e^{t(d+\beta+\widehat{\beta})/\alpha}\int_{\Gamma}\rho_{1}(x,y)\rho_{e^{t}-1}(y,e^{t/\alpha}z)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\,\eta(\mathrm{d}x)\] \[=\int_{\Gamma_{1}}e^{t(d+\beta+\widehat{\beta})/\alpha}\rho_{e^{t}}(x,e^{t/\alpha}z)\,\eta(\mathrm{d}x)\] \[=\int_{\Gamma_{1}}\rho_{1}(e^{-t/\alpha}x,z)\,\eta(\mathrm{d}x)\] \[=\int_{\Gamma_{1}}\rho_{1}(x,z)\,\widetilde{\eta}(\mathrm{d}x)\] for some sub-probability measure \(\widetilde{\eta}\) concentrated on \(\Gamma_{1}\). It follows that \(L_{t}F\subseteq F\).
Since \(F\) is clearly a convex set, one can follow the steps in [13, proof of Theorem 3.4] and conclude that there is a density \(\varphi\in\overline{F}\) satisfying \(L_{t}\varphi=\varphi\). The uniqueness of \(\varphi\) follows then from the strict positivity of \(\ell_{t}\) and the fact that the operators \(L_{t}\) commute, see [12, proof of Theorem 3.2]. We now concentrate on the stationary density \(\varphi\) and its properties. First, by Kulik and Scheutzow [43, Theorem 1 and Remark 2], it is the limit of transition probabilities \(\ell_{t}\) in the following sense: for every \(x\in\Gamma\), \[\lim_{t\to\infty}\int_{\Gamma}|\ell_{t}(x,y)-\varphi(y)|M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=0. \tag{4.17}\] Taking (4.13) into account, we intend to re-phrase the limit (4.17) by replacing \(\ell_{t}(x,y)\) with \(\rho_{1}(x,y)\) and \(t\to\infty\) with \(\Gamma\ni x\to 0\). First we observe that the convergence (4.17) is in fact uniform in \(x\) in every bounded subset \(A\subseteq\Gamma\). Indeed, for some fixed \(x_{0}\in A\), by (4.15), (4.13) and (4.10), we have \[\int_{\Gamma}|\ell_{t+1}(x,y)-\varphi(y)|M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\] \[= \int_{\Gamma}\Big{|}\int_{\Gamma}\ell_{1}(x,z)\big{(}\ell_{t}(z,y)-\varphi(y)\big{)}M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\Big{|}M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\] \[\leqslant \int_{\Gamma}c\ell_{1}(x_{0},z)\int_{\Gamma}|\ell_{t}(z,y)-\varphi(y)|M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\,M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z. \tag{4.18}\] By (4.17), for every \(z\in\Gamma\), the inner integral converges to \(0\) as \(t\to\infty\). Using (4.14) and Theorem 4.11, we also see that it is uniformly bounded by \(2\). Thus, by the dominated convergence theorem, the iterated integral (4.18) converges to \(0\) as \(t\to\infty\), and the claim follows. Put differently, \[\lim_{t\to\infty}\int_{\Gamma}\Big{|}\rho_{1-e^{-t}}\big{(}e^{-t/\alpha}x,y\big{)}-\varphi(y)\Big{|}M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=0 \tag{4.19}\] uniformly in \(x\in A\) for every bounded subset \(A\subseteq\Gamma\). We now derive a desired reformulation of (4.17). **Corollary 4.12**.: _Let \(\Gamma\) be a \(\kappa\)-fat cone. Then_ \[\lim_{\Gamma\ni x\to 0}\int_{\Gamma}|\rho_{1}(x,y)-\varphi(y)|M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=0.\] Proof.: Let \(A\) be a bounded subset of \(\Gamma\). The scaling property (4.7) yields that \[\rho_{1-e^{-t}}\big{(}e^{-t/\alpha}z,u\big{)}=(1-e^{-t})^{-(d+\beta+\widehat{\beta})/\alpha}\rho_{1}\Big{(}(e^{t}-1)^{-1/\alpha}z,(1-e^{-t})^{-1/\alpha}u\Big{)},\quad z,u\in\Gamma.\] Thus, (4.19) together with (4.14) and the triangle inequality entail that \[\lim_{t\to\infty}\int_{\Gamma}\Big{|}\rho_{1}\Big{(}(e^{t}-1)^{-1/\alpha}z,(1-e^{-t})^{-1/\alpha}u\Big{)}-\varphi(u)\Big{|}M_{\Gamma}(u)\widehat{M}_{\Gamma}(u)\,\mathrm{d}u=0\] uniformly in \(z\in A\). By the continuity of dilations in \(L^{1}(\mathbb{R}^{d})\), the change of variables \(y=(1-e^{-t})^{-1/\alpha}u\), (4.2) with Remark 4.2, and the triangle inequality, \[\lim_{t\to\infty}\int_{\Gamma}\Big{|}\rho_{1}\Big{(}(e^{t}-1)^{-1/\alpha}z,y\Big{)}-\varphi(y)\Big{|}M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=0\] uniformly in \(z\in A\). To conclude the proof, we take \(A=\mathbb{S}^{d-1}\) and \(x=(e^{t}-1)^{-1/\alpha}z\) with \(t=\ln(1+|x|^{-\alpha})\) and \(z=x/|x|\in A\). In fact, the stationary density \(\varphi\) may be refined to a continuous function on \(\Gamma\).
**Lemma 4.13**.: _After a modification on a set of Lebesgue measure zero, \(\varphi\) is continuous on \(\Gamma\) and_ \[\varphi(y)\approx(1+|y|)^{-d-\alpha}\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)},\quad y\in\Gamma.\] Proof.: First we note that \[\varphi(y)\approx(1+|y|)^{-d-\alpha}\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)}\quad\mathrm{a.e.}\,y\in\Gamma,\] by virtue of Corollary 4.12 and (4.10). Moreover, by Theorem 4.11, \(\varphi(y)=L_{1}\varphi(y)\) a.e. \(y\in\Gamma\). Hence, it is enough to prove that \(L_{1}\varphi\) is continuous on \(\Gamma\). By (4.10) and (4.13), we have \[\ell_{1}(x,y)\approx\frac{\mathbb{P}_{e^{-1/\alpha}x}\big{(}\tau_{\Gamma}>1-e^{-1}\big{)}}{M_{\Gamma}(e^{-1/\alpha}x)}p_{1-e^{-1}}(e^{-1/\alpha}x,y)\frac{\widehat{\mathbb{P}}_{y}(\tau_{\Gamma}>1)}{\widehat{M}_{\Gamma}(y)},\quad x,y\in\Gamma.\] By (2.17) and Remark 3.2, \[\mathbb{P}_{e^{-1/\alpha}x}\big{(}\tau_{\Gamma}>1-e^{-1}\big{)}\approx\mathbb{P}_{x}(\tau_{\Gamma}>e-1)\approx\mathbb{P}_{x}(\tau_{\Gamma}>1).\] Therefore, if we let \(R>1\), then by Proposition 4.9, homogeneity of \(M_{\Gamma}\) (4.2) and (2.8) with (2.10), \[\ell_{1}(x,y)\lesssim(1+|x|)^{-d-\alpha}\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)},\quad x\in\Gamma,y\in\Gamma_{R}.\] Here the implied constant may depend on \(R\). The dominated convergence theorem yields the continuity of \(L_{1}\varphi\) on \(\Gamma_{R}\), and by the arbitrary choice of \(R\) we conclude the proof. We are now able to assert that there is a finite limit of \(\rho_{t}(x,y)\) as \(\Gamma\ni x\to 0\), expressed by means of the stationary density \(\varphi\). **Theorem 4.14**.: _Let \(\Gamma\) be a \(\kappa\)-fat cone. For every \(t>0\), uniformly in \(y\in\Gamma\),_ \[\lim_{\Gamma\ni x\to 0}\rho_{t}(x,y)=t^{-(d+\beta+\widehat{\beta})/\alpha}\varphi\big{(}t^{-1/\alpha}y\big{)}.\] **Remark 4.15**.: By Remark 4.4(2), \(\beta=0\) if and only if \(\widehat{\beta}=0\), and then one has \(M_{\Gamma}=\widehat{M}_{\Gamma}=\mathds{1}_{\Gamma}\). Consequently, \(\rho_{t}(x,y)=p_{t}(x,y)\) and using the Chapman-Kolmogorov property (2.12) together with the scalings (2.16) one can verify that \(\varphi(y)=p_{1}(0,y)\) is the stationary density from Theorem 4.11. Therefore, the statement of Theorem 4.14 trivialises to the continuity property of the heat kernel. Proof of Theorem 4.14.: By Remark 4.15, we may assume that \(\beta>0\) and \(\widehat{\beta}>0\). We first claim that there is a constant \(c\in(0,\infty)\) dependent at most on \(\alpha\), \(d\), \(\Gamma\) and \(\lambda\), such that for all \(x\in\Gamma_{1}\) and \(y\in\Gamma\), \[\int_{\Gamma}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\leqslant c(1+|y|)^{-\widehat{\beta}}. \tag{4.20}\] To this end, denote \(\tilde{y}=2^{1/\alpha}y\), so that \(|\tilde{y}|\approx|y|\). By (4.10), Lemma 4.13, (4.9) and Proposition 4.9, for all \(z\in\Gamma\) we have \[|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,\tilde{y})M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\lesssim(1+|z|)^{-d-\alpha}(1+|z-\tilde{y}|)^{-d-\alpha}(1+|y|)^{\alpha-\widehat{\beta}}.\] We split the integral (4.20) as follows.
For \(z\in A:=B(\tilde{y},|\tilde{y}|/2)\) we have \(|z|\approx|\tilde{y}|\approx|y|\) and \(1+|z-\tilde{y}|\geqslant 1\), which implies \[\int_{A}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\lesssim|y|^{d}(1+|y|)^{-d-\widehat{\beta}}\leqslant(1+|y|)^{-\widehat{\beta}}.\] On \(\Gamma\setminus A\) we use the fact that \(1+|z|\geqslant 1\) to get \[\int_{\Gamma\setminus A}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z \lesssim(1+|y|)^{\alpha-\widehat{\beta}}\int_{\Gamma\setminus A}(1+|z-\tilde{y}|)^{-d-\alpha}\,\mathrm{d}z\] \[\lesssim(1+|y|)^{\alpha-\widehat{\beta}}\int_{|\tilde{y}|/2}^{\infty}(1+r)^{-1-\alpha}\,\mathrm{d}r\] \[\lesssim(1+|y|)^{-\widehat{\beta}}.\] Thus, we obtain (4.20) as claimed. Next, we prove that, uniformly in \(y\in\Gamma\), \[\lim_{\Gamma\ni x\to 0}\int_{\Gamma}\rho_{1}(2^{1/\alpha}x,z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z=\int_{\Gamma}\varphi(z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z. \tag{4.21}\] Indeed, let \(\varepsilon>0\). Since \(\widehat{\beta}>0\), by (4.20), there is \(R>0\) depending at most on \(\alpha,d,\Gamma,\lambda\) and \(\varepsilon\) such that \[\int_{\Gamma}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z<\varepsilon, \tag{4.22}\] provided that \(y\in\Gamma\setminus\Gamma_{R}\). If \(y\in\Gamma_{R}\), then by (4.12) and Corollary 4.12, \[\int_{\Gamma}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\] \[\lesssim \int_{\Gamma}|\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)|M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\] \[< \varepsilon, \tag{4.23}\] for all \(y\in\Gamma_{R}\) and all \(x\in\Gamma_{1}\) with \(|x|\) small enough. Note that the implied constant in (4.23) does not depend on \(y\). Putting together (4.22) with (4.23), we conclude (4.21), as desired. For the final part we note that by the scalings (4.8) and the Chapman-Kolmogorov property (4.6), \[\rho_{1}(x,y) =2^{(d+\beta+\widehat{\beta})/\alpha}\rho_{2}(2^{1/\alpha}x,2^{1/\alpha}y)\] \[=2^{(d+\beta+\widehat{\beta})/\alpha}\int_{\Gamma}\rho_{1}(2^{1/\alpha}x,z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z.\] It follows now from (4.21), (4.8) and Theorem 4.11 that \[\lim_{\Gamma\ni x\to 0}\rho_{1}(x,y)=2^{(d+\beta+\widehat{\beta})/\alpha}\int_{\Gamma}\varphi(z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\] \[=\int_{\Gamma}\varphi(z)\rho_{1/2}(2^{-1/\alpha}z,y)\,M_{\Gamma}(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\] \[=L_{\ln 2}\varphi(y)\] \[=\varphi(y).\] **Remark 4.16**.: Recall that for every \(x\in\Gamma\) and \(t>0\), \[\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=1.\] By (4.10), Theorem 4.14 and the dominated convergence theorem, we also have \[\int_{\Gamma}\varphi(y)M_{\Gamma}(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y=1.\] With Theorem 4.14 at hand, the following corollary readily follows. Recall that the comparability of the survival probability and the Martin kernel was given in Proposition 4.9. Now we focus on the limiting behaviour as \(\Gamma\ni x\to 0\) and derive the aforementioned asymptotic relation. **Corollary 4.17**.: _Let \(\Gamma\) be a \(\kappa\)-fat cone.
Then, for every \(t>0\),_ \[\lim_{\Gamma\ni x\to 0}\frac{\mathbb{P}_{x}(\tau_{\Gamma}>t)}{M_{\Gamma}(x)}=C_{1}t^{-\beta/\alpha},\qquad\text{where}\qquad C_{1}=\int_{\Gamma}\varphi(z)\widehat{M}_{\Gamma}(z)\,\mathrm{d}z\in(0,\infty).\] Proof.: First let \(t=1\). By (2.14), \[\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)}=\int_{\Gamma}\frac{p_{1}^{\Gamma}(x,y)}{M_{\Gamma}(x)}\,\mathrm{d}y=\int_{\Gamma}\rho_{1}(x,y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y.\] We let \(\Gamma\ni x\to 0\) and use (4.10), the dominated convergence theorem and Theorem 4.14 to get the claim. For the general case we use the scalings (2.17) and (4.1). We are now ready to prove our second main result, c.f. Theorem 1.2. Note that the limiting Yaglom measure is absolutely continuous with respect to the Lebesgue measure, with the density given by objects pertaining to both \(\mathbf{X}\) and \(\widehat{\mathbf{X}}\). **Theorem 4.18**.: _Let \(\mathbf{X}\) be a strictly stable Lévy process satisfying \(\mathbf{A}\). Assume \(\Gamma\) is a \(\kappa\)-fat cone. There is a probability measure \(\mu\) concentrated on \(\Gamma\) such that for every Borel set \(A\subseteq\Gamma\) and every probability measure \(\gamma\) on \(\Gamma\) satisfying \(\int_{\Gamma}(1+|y|)^{\alpha}\,\gamma(\mathrm{d}y)<\infty\),_ \[\lim_{t\to\infty}\mathbb{P}_{\gamma}(t^{-1/\alpha}X_{t}\in A\,|\,\tau_{\Gamma}>t)=\mu(A),\] _where_ \[\mu(A)=\frac{1}{C_{1}}\int_{A}\varphi(y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y,\quad A\subseteq\Gamma.\] Proof.: Assume first that the process starts from a fixed point \(x\in\Gamma\) with probability \(1\), that is, the initial distribution \(\gamma\) is a Dirac delta at \(x\). By (2.14) and the scaling property (2.15), \[\mathbb{P}_{x}(t^{-1/\alpha}X_{t}\in A\,|\,\tau_{\Gamma}>t) =\frac{\mathbb{P}_{x}(t^{-1/\alpha}X_{t}\in A,\,\tau_{\Gamma}>t)}{\mathbb{P}_{x}(\tau_{\Gamma}>t)}\] \[=\frac{\mathbb{P}_{t^{-1/\alpha}x}(X_{1}\in A,\,\tau_{\Gamma}>1)}{\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)}\] \[=\int_{A}\frac{p_{1}^{\Gamma}(t^{-1/\alpha}x,y)}{M_{\Gamma}(t^{-1/\alpha}x)}\,\mathrm{d}y\cdot\frac{M_{\Gamma}(t^{-1/\alpha}x)}{\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)}\] \[=\int_{A}\rho_{1}(t^{-1/\alpha}x,y)\widehat{M}_{\Gamma}(y)\,\mathrm{d}y\cdot\frac{M_{\Gamma}(t^{-1/\alpha}x)}{\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)}.\] Thus, using Corollary 4.17, Theorem 4.14, (4.10) and the dominated convergence theorem we get the conclusion in this case. Note that, in fact, the convergence is uniform in \(x\in B\) for every bounded \(B\subseteq\Gamma\). For the general initial distribution \(\gamma\) satisfying the integrability condition \(\int_{\Gamma}(1+|y|)^{\alpha}\,\gamma(\mathrm{d}y)<\infty\), one can follow the proof of [13, Theorem 3.12] line by line to deduce the desired result. For this reason, we omit the details. **Remark 4.19**.: As a complement of Remark 4.2 and Proposition 4.9(3), we note that in exactly the same way one may obtain the dual counterparts of Corollary 4.17 and Theorem 1.2. To wit, one can show that there is a stationary density \(\widehat{\varphi}\) for the Ornstein-Uhlenbeck operators \(\widehat{L}_{t}\) given by the integral kernels \(\widehat{\ell}_{t}\) associated to the dual conditioned kernel \(\widehat{\rho}_{t}(x,y):=\rho_{t}(y,x)\) for \(x,y\in\Gamma\). Accordingly, c.f.
Lemma 4.13, \[\widehat{\varphi}(y)\approx(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)},\quad y\in\Gamma,\] and as a consequence, \[\lim_{\Gamma\ni x\to 0}\frac{\widehat{\mathbb{P}}_{x}(\tau_{\Gamma}>t)}{\widehat{M}_{\Gamma}(x)}=\widehat{C}_{1}t^{-\widehat{\beta}/\alpha},\qquad\text{where}\qquad\widehat{C}_{1}=\int_{\Gamma}\widehat{\varphi}(z)M_{\Gamma}(z)\,\mathrm{d}z\in(0,\infty),\] and for every \(\kappa\)-fat cone \(\Gamma\) and every probability measure \(\gamma\) satisfying \(\int_{\Gamma}(1+|y|)^{\alpha}\,\gamma(\mathrm{d}y)<\infty\), \[\lim_{t\to\infty}\widehat{\mathbb{P}}_{\gamma}\big{(}t^{-1/\alpha}\widehat{X}_{t}\in A\big{|}\tau_{\Gamma}>t\big{)}=\widehat{\mu}(A),\quad A\subseteq\Gamma,\] where \[\widehat{\mu}(A):=\frac{1}{\widehat{C}_{1}}\int_{A}\widehat{\varphi}(y)M_{\Gamma}(y)\,\mathrm{d}y,\quad A\subseteq\Gamma.\] The necessary changes essentially consist of replacing the underlying objects with their dual counterparts. The details are left for the reader.
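As a quick numerical illustration of Example 4.7 and Proposition 4.9(1) (ours, not part of the above argument), the following Monte Carlo sketch for \(d=1\) and \(\Gamma=(0,\infty)\) compares the empirical decay of \(\mathbb{P}_{x}(\tau_{\Gamma}>t)\) with the predicted rate \(t^{-\beta/\alpha}\), where \(\beta/\alpha=\mathbb{P}(Y_{1}>0)\) is given by Zolotarev's formula. The discrete-time monitoring of the path and the use of SciPy's stable sampler (assumed here, in its default S1 parameterization with zero location, to be strictly stable for \(\alpha\neq 1\)) make this a rough sanity check only.

```python
import numpy as np
from scipy.stats import levy_stable

alpha, skew = 1.5, 0.5   # stability index; skewness (c+ - c-)/(c+ + c-)
rho = 0.5 + np.arctan(skew * np.tan(np.pi * alpha / 2)) / (np.pi * alpha)

rng = np.random.default_rng(1)
n_paths, n_steps, dt, x0 = 20_000, 400, 0.005, 1.0
# Strictly stable increments over a time step dt scale like dt^{1/alpha}.
incs = levy_stable.rvs(alpha, skew, size=(n_paths, n_steps),
                       random_state=rng) * dt ** (1 / alpha)
paths = x0 + np.cumsum(incs, axis=1)
alive = np.minimum.accumulate(paths, axis=1) > 0   # running infimum still in Gamma

t1, t2 = n_steps // 4, n_steps             # two observation horizons
s1, s2 = alive[:, t1 - 1].mean(), alive[:, t2 - 1].mean()
est = -np.log(s2 / s1) / np.log(t2 / t1)   # empirical survival exponent
print(f"empirical exponent ~ {est:.3f} vs predicted beta/alpha = {rho:.3f}")
```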
2305.06933
The NetMob23 Dataset: A High-resolution Multi-region Service-level Mobile Data Traffic Cartography
Digital sources have been enabling unprecedented data-driven and large-scale investigations across a wide range of domains, including demography, sociology, geography, urbanism, criminology, and engineering. A major barrier to innovation is represented by the limited availability of dependable digital datasets, especially in the context of data gathered by mobile network operators or service providers, due to concerns about user privacy and industrial competition. The resulting lack of reference datasets curbs the production of new research methods and results, and prevents verifiability and reproducibility of research outcomes. The NetMob23 dataset offers a rare opportunity to the multidisciplinary research community to access rich data about the spatio-temporal consumption of mobile applications in a developed country. The generation process of the dataset sets a new quality standard, leading to information about the demands generated by 68 popular mobile services, geo-referenced at a high resolution of $100\times100$ $m^2$ over 20 metropolitan areas in France, and monitored during 77 consecutive days in 2019.
Orlando E. Martínez-Durive, Sachit Mishra, Cezary Ziemlicki, Stefania Rubrichi, Zbigniew Smoreda, Marco Fiore
2023-05-11T16:12:31Z
http://arxiv.org/abs/2305.06933v2
# The NetMob23 Dataset: A High-resolution Multi-region Service-level Mobile Data Traffic Cartography ###### Abstract. Digital sources have been enabling unprecedented data-driven and large-scale investigations across a wide range of domains, including demography, sociology, geography, urbanism, criminology, and engineering. A major barrier to innovation is represented by the limited availability of dependable digital datasets, especially in the context of data gathered by mobile network operators or service providers, due to concerns about user privacy and industrial competition. The resulting lack of reference datasets curbs the production of new research methods and results, and prevents verifiability and reproducibility of research outcomes. The NetMob23 dataset offers a rare opportunity to the multidisciplinary research community to access rich data about the spatio-temporal consumption of mobile applications in a developed country. The generation process of the dataset sets a new quality standard, leading to information about the demands generated by 68 popular mobile services, geo-referenced at a high resolution of \(100\times 100\) m\({}^{2}\) over 20 metropolitan areas in France, and monitored during 77 consecutive days in 2019.
Only a handful of open data challenges have granted the research community access to data of this kind. The Data for Development (D4D) challenges organized by Orange in 2013 and 2014 (Wang et al., 2014; Wang et al., 2015), the International Telecommunication Union (ITU) challenge to investigate the Ebola epidemic in West African countries in 2015 (Wang et al., 2015), the Telecom Italia Big Data Challenge launched in 2014 and 2015 (Wang et al., 2015), the Data for Refugees (D4R) challenge conducted by Turk Telekom in 2018 (Zhu et al., 2018) or the Future Cities Challenge supported by Foursquare during the NetMob 2019 conference (Zhu et al., 2018) are prominent examples of such past initiatives. Our dataset, referred to as NetMob23 hereinafter, sets a new standard for mobile network data made available to the research community, from multiple perspectives as follows. * While previous datasets have largely focused on Call Detail Records (CDRs), which only capture network events associated with voice calls and text messages and are therefore sparse and irregular over time, NetMob23 contains information about the data traffic generated by the mobile devices attached to a modern 4G cellular network, which has been for the past ten years the vastly predominant way of accessing wireless network services.
* The NetMob23 dataset captures traffic in a developed country like France, which offers a different perspective than earlier datasets focusing on developing countries; also, the data spans 20 metropolitan areas in France, offering the possibility of generalizing analyses and juxtaposing results across heterogeneous urbanization levels and population densities. * Unlike any dataset previously available to the research community within the framework of open challenges, NetMob23 offers rich information about the usage of 68 popular mobile services, which opens significant opportunities to understand the consumption of applications and its implications across research domains. * The original generation process behind the NetMob23 dataset makes a major step beyond the legacy approach of using Voronoi tessellations as a proxy for antenna coverage, and results in a dataset of unprecedented spatial accuracy, where the mobile data traffic information is mapped to more than \(870,000\) tiles of high-resolution regular grids, each spanning \(100\times 100\) m\({}^{2}\), for a total of over 440 billion data points. Overall, the NetMob23 dataset has the potential to support many novel explorations of mobile network traffic and enable the development of new applications on top of those findings. We look forward to seeing creative and constructive uses of the data by the research community worldwide. ## 2. Data Sources The generation process underpinning NetMob23 hinges upon open-source geospatial data and extensive measurements from Orange, a major mobile network operator in France. We note that Orange has roughly a 30% market penetration that is relatively uniform across the French territory. This provides a solid statistical basis for downstream analyses that generalize to the entirety of the local population. We next present the different data sources, and discuss the ethical standards of the data collection and processing. ### Metropolitan areas We use geographical shapefiles outlining the 20 major _metropolitan regions_ in France, a zoning employed by national and local administrations to carry out joint planning of the educational, cultural, economic, and social initiatives over the country territory. Each metropolitan region encompasses a set of neighboring _communes_, which are French local administrative zones analogous to a civil township in the United States, and thus covers dense urban, suburban, and more rural areas that constitute the conurbation of a specific city. Figure 1 shows the location and contour of all metropolitan areas covered by the NetMob23 dataset, overlaid on a country-wide map of population density. The areas of a few sample cities are magnified to appreciate the diversity of population density captured by each of them. Overall, our dataset includes the foremost industrial, commercial, and financial centers of France, which are home to more than one third of the total population in the country. Figure 1. Population density map of France, highlighting the 20 metropolitan areas covered by our dataset. Zoomed-in maps show the heterogeneity of population density in six representative cities, as indicated by the arrows. ### Mobile network traffic We employ mobile data traffic collected over the 20 target metropolitan areas for 77 consecutive days, _i.e._, roughly two and a half months, from March 16, 2019, to May 31, 2019.
#### 2.2.1. Service-level traffic volumes The traffic measurements were performed by Orange using passive measurement probes tapping at the Gi, SGi and Gn interfaces connecting the Gateway GPRS Support Nodes (GGSNs) and the Packet Data Network Gateways (PGWs) of the Long Term Evolution (LTE) Evolved Packet Core (EPC) network to external public data networks (PDNs). This monitoring strategy allows capturing all 4G traffic traversing the mobile network serving the whole country. The probes run dedicated proprietary classifiers that allow associating individual traffic flows to their corresponding mobile applications, so as to enable network monitoring, traffic engineering and research activities. Ultimately, the EPC probes gather information about the uplink (UL) and downlink (DL) volume of the demand generated by each of 68 mobile services, which are together responsible for about 70% of the total mobile data traffic observed by Orange in France. Figure 2 offers a look into the applications included in the dataset, and a ranking of the same based on the fraction of total traffic they generate. We observe that the services generating the largest demands are responsible for around 10% of the overall mobile network usage. The logarithmic scale of the ordinate highlights the power law that characterizes the relative consumption across applications, and the consequent diversity of per-service traffic, which spans three orders of magnitude when juxtaposing the top and bottom applications in the dataset. #### 2.2.2. Traffic flow to eNodeB association We associate traffic volumes to specific base stations using network signalling data (NSD) captured by probes monitoring the LTE S1 interface connecting eNodeBs, _i.e._, 4G base stations, to the Mobility Management Entity (MME). The NSD allow tracking the eNodeB serving a mobile device across all control-plane events, which include (_i_) voice and texting communications such as call establishments and SMS transmissions, (_ii_) handovers, _i.e._, device cell changes during communication, (_iii_) Tracking Area (TA) updates, _i.e._, cell changes that cross boundaries among larger regions, which trigger control messages also from idle devices, (_iv_) active paging, _i.e._, periodic requests to update the location of the device started from the network side, (_v_) network attaches and detaches generated by devices joining or leaving the network as they are turned on/off, or (_vi_) data connections, _i.e._, requests to assign resources for traffic generated by mobile applications running on the device. Such high-frequency NSD events allow associating each traffic flow to the exact sequence of eNodeBs that serviced it, and thus to accurately link (portions of) the traffic volume of the flow to each base station. #### 2.2.3. Service-level traffic time series at eNodeBs We aggregate the traffic volume generated by all uplink or downlink flows pertaining to a given mobile service and served by the same eNodeB. We also aggregate the resulting traffic volumes over 15-minute time intervals, _i.e._, a temporal granularity that allows observing a wide range of time-dependent phenomena, while keeping the dataset at a reasonable size. Figure 2. Mobile services in the NetMob23 dataset, ranked by the fraction of total demand they generate.
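As an illustration of this aggregation step (ours; the flow-record schema and column names below are hypothetical, since the raw probe output is not part of the release), classified flow records could be reduced to 15-minute, per-service, per-eNodeB time series as follows:

```python
import pandas as pd

def flows_to_timeseries(flows: pd.DataFrame) -> pd.DataFrame:
    """Aggregate classified flow records into 15-minute eNodeB time series.

    Expects one row per (portion of a) flow, with hypothetical columns:
    ts (datetime64), enodeb, service, ul_bytes, dl_bytes.
    """
    return (flows
            .groupby(["enodeb", "service",
                      pd.Grouper(key="ts", freq="15min")])[["ul_bytes", "dl_bytes"]]
            .sum()
            .sort_index())

# Toy usage: three Netflix flow records at one eNodeB -> two 15-minute slots.
toy = pd.DataFrame({
    "ts": pd.to_datetime(["2019-03-16 00:03", "2019-03-16 00:12",
                          "2019-03-16 00:21"]),
    "enodeb": ["eNB1"] * 3, "service": ["Netflix"] * 3,
    "ul_bytes": [10, 5, 7], "dl_bytes": [800, 450, 600],
})
print(flows_to_timeseries(toy))
```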
In order not to disclose the sensitive information of the actual volume of traffic served by the mobile network operator, we normalize all traffic by a same random value. Therefore, traffic is not provided with a specific unit (e.g., bytes) but it is still fully comparable across space and time. ### Coverage dataset Having associated the correct service-level traffic to each eNodeB over time, we employ coverage information for each eNodeB to perform a spatial mapping of the traffic information to the geographical space. This allows producing data with a much higher spatial resolution than using legacy approaches such as Voronoi tessellations. Specifically, we start from coverage data computed using a commercial radio-frequency signal propagation tool. For every eNodeB, coverage is encoded as probabilities of association over a regular grid tessellation with tiles of \(100\times 100\) m\({}^{2}\) each. A bi-dimensional matrix of \(600\times 600\) tiles is produced for a single eNodeB, hence providing complete coverage information over an area of 60 km\({}^{2}\) surrounding the base station. The matrix tiles contain probabilities \(p(i\mid\ell)\) that explain the likelihood of a User Equipment (UE) to connect to eNodeB \(i\) while at tile \(\ell\). By inverting this probability information via Bayes' theorem, under the assumption of a uniform density of population within the area covered by the eNodeB, we derive \(p(\ell\mid i)\), _i.e._, the probability that a device associated with a base station \(i\) is located at tile \(\ell\). We use \(p(\ell\mid i)\) to distribute over the high-resolution grid of \(100\times 100\)-m\({}^{2}\) tiles the service-level traffic observed by each eNodeB in a probabilistic fashion, as detailed in Section 3. ## 3. Generation Methodology The different data presented above are combined to generate the final NetMob23 dataset, following a process that is visually summarized in Figure 3. Formally, let us denote by \(\mathcal{T}^{i}_{a}(t)\) the mobile traffic generated by application \(a\) at eNodeB \(i\in I\) during time slot \(t\in T\), where \(I\) is the set of all base stations and \(T\) denotes the whole system observation period. Also, recall that \(\mathcal{P}(\ell\mid i)\) is the probability of a user to be at a location \(\ell\) while being connected to eNodeB \(i\). As portrayed in Figure 3, we first multiply traffic observed at each eNodeB (denoted by B in the figure) by the UE positioning probability of the same eNodeB (A in the figure), so as to distribute over space the traffic volume observed at the base station. This operation is repeated for each time slot \(t\) of the time series of every mobile service (_e.g._, Spotify in the figure). The result is a traffic map \(\mathcal{M}^{i}_{a}(t)\), which represents how the traffic generated by a given service \(a\) (again, Spotify in the figure example) at the target eNodeB \(i\) is probabilistically distributed over the geographical space at time \(t\). Since the same regular grid tessellation is used for probabilities \(\mathcal{P}(\ell\mid i)\) across all base stations \(i\in I\), it is possible to repeat the process above for all eNodeBs, and generate consistent maps of the spatial traffic of a same application \(a\) over the whole mobile network. This is attained by simply adding Figure 3. Summary of the dataset generation methodology. 
Figure 3. Summary of the dataset generation methodology. (A) Coverage matrix for eNodeB \(i\). (B) Traffic time series aggregated over all users for application \(a\), i.e., Spotify, at eNodeB \(i\). (C) Spatial mapping of the Spotify traffic at eNodeB \(i\) during time step \(t\). (D) Overall spatial distribution of Spotify traffic in \(t\), as the sum of all eNodeB maps. We repeat the above steps for all \(t\in T\) and for all applications to ultimately obtain a complete spatiotemporal representation of the service-level traffic in the 20 target metropolitan areas introduced in Section 2.1 over a high-resolution grid of locations \(\ell\). ### Ethics considerations The mobile network traffic data we use to generate the dataset was collected, processed and aggregated as described in Section 2.2 in full compliance with Article 89 of the General Data Protection Regulation (GDPR) (Zheng et al., 2019), under the supervision of the Data Protection Officer (DPO) at Orange. In particular, all data management was performed on a secure platform at the operator's premises and the raw data was deleted immediately afterward. The resulting service-level time series represent traffic aggregated over all UEs both in space, at eNodeB level, and in time, over 15-minute intervals. Moreover, the traffic associated to different base stations is further aggregated via the spatial mapping described earlier. The final representation does not allow re-identifying or tracking individual users. ### Final dataset format To facilitate access to the data, the NetMob23 dataset is divided into spatial representations and traffic information, which are detailed next. As part of the material provided with the dataset, we also make available Jupyter notebooks in Python with examples of manipulations of the dataset and calculations of high-level statistical indicators and plots2. Footnote 2: [https://github.com/nds-group/netmob2023challenge](https://github.com/nds-group/netmob2023challenge). #### 3.2.1. Spatial representation We publish a GeoJson file for each metropolitan region with the WGS84 coordinate system. Each of these files contains the tile identifier of each regular grid tile in the target urban area, and the corresponding polygon that bounds the \(100\times 100\)-m\({}^{2}\) geographical surface of the tile. We also provide an alternative matrix representation of the space, in the form of a text file containing the number of rows and columns in the matrix, whose values are also shown in Table 1. #### 3.2.2. Traffic information The traffic dataset folder is organized according to a tree structure, as shown in Fig. 4. The root is the target urban region, followed by the application name and then the day, with two leaves, _i.e._, text files, for the UL and DL directions, respectively. In each text file, one line represents the (normalized) traffic in a given tile of the city, for the corresponding application and date, from midnight to 23:45 at 15-minute time steps, as shown in Table 2. More precisely, each line contains the following fields: * tile is the tile identifier used in the GeoJson spatial representation file for the same urban area. In the case of the matrix representation of tiles, the position inside the matrix can be retrieved by an integer division and a modulo operation. * \(t_{hh.mm}\) is a vector of 96 values, corresponding to the traffic observed in UL or DL at location tile for the target city, service, and day, during all 15-minute time slots, starting from midnight to 23:45. Due to daylight saving time, the series of March 31, 2019 contain only 92 values.
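A minimal loading sketch for the layout just described (ours; the exact file naming, delimiter, and column order are assumptions to be checked against the released data and the companion notebooks):

```python
import pandas as pd

def load_day(path: str) -> pd.DataFrame:
    """Read one UL or DL text file: a tile identifier followed by the
    96 quarter-hour (normalized, unitless) traffic values of one day."""
    cols = ["tile"] + [f"t{q:02d}" for q in range(96)]
    return pd.read_csv(path, sep=" ", header=None, names=cols)

# Hypothetical usage, following the tree structure of Figure 4:
# df = load_day("Paris/Netflix/20190316/DL.txt")
# profile = df.drop(columns="tile").sum()  # city-wide daily traffic profile
```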
\begin{table}
\begin{tabular}{l|cccccccccccccccccccc}
 & \multicolumn{20}{c}{The 20 target urban regions} \\
\hline
Rows & 334 & 208 & 195 & 409 & 330 & 426 & 228 & 211 & 226 & 334 & 151 & 277 & 150 & 282 & 409 & 423 & 305 & 296 & 280 & 251 \\
Columns & 342 & 268 & 234 & 251 & 342 & 287 & 246 & 210 & 269 & 327 & 165 & 425 & 214 & 256 & 346 & 370 & 501 & 258 & 347 & 270 \\
\end{tabular}
\end{table}
Table 1. Dimensions of the matrices of regular grid tiles in each of the 20 target urban regions.
Figure 4. Hierarchical organization of traffic files.
## 4. Qualitative Analysis
In this section, we provide a preliminary exploration of some characteristics of the NetMob23 dataset. The analysis aims at discussing basic properties of the spatial and temporal evolution of the service-level mobile traffic, and at highlighting anomalies inherent to the dataset that may bias downstream usages of the data.
### Anomaly analysis
The NetMob23 dataset is based on traffic information measured in a production network. As such, it is characterized by anomalous events of technical nature, caused, _e.g._, by exceptional surges in demand, radio access network malfunctioning, network configuration errors, energy supply problems, or issues in the traffic monitoring system. We report examples of anomalous events that can be observed in the NetMob23 dataset in Figure 5, for different combinations of cities and services. The plots show the overall traffic in UL and DL, with anomalies evidenced by light red areas. The three plots show for instance the effects of a nationwide network outage on May 12, 2019, which caused both UL and DL traffic to drop substantially in the whole Orange network. The top and bottom plots also highlight an event affecting only the South-West of France but for a longer period, spanning from May 22 through May 25: the problem caused a reduction of traffic in Bordeaux and Toulouse, but not in Paris, due to the geographical locations of the cities. More rarely, outages concern specific applications, such as the services of Meta on April 14, as seen, _e.g._, in the top plot for Facebook Live traffic. The full list of major anomalies we detected in the data is provided in Table 3.
Figure 5. Traffic time series highlighting different anomalies in various cities of the NetMob23 dataset.
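When computing aggregate statistics from the dataset, such anomalous periods are easy to exclude beforehand. A minimal sketch follows; the exclusion set is a placeholder to be populated with whichever entries of Table 3 are relevant to the analysis at hand.

```python
# Hypothetical exclusion list; populate it from Table 3 (e.g., the
# nationwide outage of May 12, 2019 mentioned above).
ANOMALOUS_DATES = {"2019-05-12"}

def drop_anomalous_days(series_by_date):
    """`series_by_date` maps 'YYYY-MM-DD' -> 96-value traffic vector.
    Returns a copy without the days flagged as anomalous, so that they
    do not bias averages computed downstream."""
    return {day: values for day, values in series_by_date.items()
            if day not in ANOMALOUS_DATES}
```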
### Temporal Analysis
People tend to follow fairly regular patterns in their daily lives, and mobile network data have been repeatedly shown to be affected by such periodicity (Bordeaux et al., 2019; Bordeaux et al., 2019; Bordeaux et al., 2019). This is true at the level of the total mobile traffic, but even more so when considering individual applications, since users employ those at specific times of their weekly routine. Figure 6 shows the time series of four major mobile applications with very diverse usage patterns in Paris. LinkedIn is a business and employment-oriented social media application that is used to connect with other professionals and to share resumes, work, or professional events; Netflix is an application that people use for entertainment; Apple iCloud is a platform that is used to store and sync data on various Apple devices for applications like Apple Mail, Apple Calendar, Apple Photos, Apple Notes, contacts, settings, and backup files; and Uber is used for long and short rides. Each of these applications shows a different pattern in its time series: LinkedIn shows a high traffic peak from the early morning hours until the afternoon, after which the traffic values for both UL and DL start to decrease. Conversely, Netflix shows a high traffic peak in the late evening, because people are usually free during these hours, while Apple iCloud does not follow either the day or night pattern. Finally, Uber generates a clear pattern where weekend traffic is almost the same during the day and night, implying that the service is used more on weekends, when people travel and take rides to return late at night. As another notable difference across applications, the UL/DL ratio is very heterogeneous, which could be noted already in Figure 2.
Figure 6. Traffic time series of four different applications in Paris, as observed during the same week.
These behaviors are not a specific feature of a large metropolis like Paris. The same application in fact tends to be mostly used in the same way in different geographic locations, as shown in Figure 7. There, we consider the example of Netflix, and display its usage in four cities, namely Bordeaux, Lyon, Toulouse, and Marseille. The patterns are very much comparable in all plots, although minor changes occur that would deserve a deeper investigation.
### Spatial Analysis
The NetMob23 dataset allows investigating spatial properties of the mobile service usage as well. Figure 8 shows the average traffic maps on Mondays (top row) and Sundays (bottom row) for the same four applications considered before, _i.e._, Netflix, LinkedIn, Apple iCloud, and Uber. Each of the plots shows the whole Paris urban area and a zoom on the city center delimited by the local inner beltway. The figure allows comparing the applications along two dimensions. On the one hand, looking at plots from left to right shows that the different applications have very diverse geographical distributions of their generated traffic. Netflix, for example, is pervasively distributed across the region, which is even more evident in the zoomed map, whereas LinkedIn and Apple iCloud have a high concentration in the part of the city where there are many large offices and workplaces. In the case of iCloud, notable traffic is also recorded at touristic spots, amusement parks, or entertainment facilities: we speculate that these may be locations where people usually upload photos or videos to their personal cloud. In the case of Uber, there is a clear distribution of traffic on city streets, highways, train stations, and airports. On the other dimension, a comparison of the demands for the same application on working and weekend days, _i.e._, from top to bottom of Figure 8, also reveals interesting phenomena. For instance, Netflix is used in a fairly constant way during the week, whereas Uber is more actively used on weekends than on weekdays. On the contrary, LinkedIn and Apple iCloud traffic largely drops on weekends, although Apple iCloud usage persists in the more touristic areas of the city.
Figure 8. Spatial traffic generated by four applications in Paris, on Mondays (top) and Sundays (bottom).
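The Monday-versus-Sunday comparison above reduces to averaging the daily vectors by weekday. A possible sketch, assuming the per-day dictionary format used in the earlier snippets:

```python
import numpy as np
from datetime import date

def average_daily_profile(series_by_date, weekday):
    """Mean 96-slot profile over all days falling on `weekday`
    (0 = Monday, ..., 6 = Sunday), e.g., to contrast the Monday and
    Sunday maps of Figure 8 one tile at a time."""
    profiles = [np.asarray(values)
                for day, values in series_by_date.items()
                if date.fromisoformat(day).weekday() == weekday]
    return np.mean(profiles, axis=0) if profiles else None
```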
## 5. Additional Resources
The NetMob23 dataset can be enriched via combination with other sources of information. We list a number of sources of sociodemographic and telecommunication indicators that cover the French territory at around the same time of the mobile traffic data collection.
_Administrative boundaries_. These resources are available on the Open platform for French public data and at the National Institute of Geographic and Forest Information (IGN).
* _Regions_ are the top-level territorial division of France, which is presently divided into \(22\) such zones (Krishnan, 2017).
* _Departments_ are administrative units of France governed by an elected body, _i.e._, the departmental council. There are currently \(96\) departments in metropolitan France.
* _Arrondissements_ are the next level of subdivision of the departments and organize the local police, fire department, and occasionally elections. There are currently \(332\) arrondissements in France.
* _Communes_ are the smallest and the oldest French administrative unit (Krishnan, 2017), administered by the municipal council and headed by a mayor. Currently, there are about \(36,000\) communes in mainland France.
* _Urban Units_ are formed on the basis of contiguous built-up areas (Krishnan, 2017), and typically merge neighboring communes joined by urban continuity.
* _IRIS_ is a fine-grained territorial subdivision of France, such that the number of inhabitants in each IRIS zone is around \(2,000\). This definition is employed by the French National Institute of Statistics and Economic Studies (INSEE) for statistical analyses (Krishnan, 2017; Krishnan, 2017).
_Socio-economic indicators_. INSEE collects data on population, education, income and other socio-economic statistics on a quinquennial basis across the whole of France. The institute has an open-data website that includes, among others, the information below.
* _Population_ contains data for different age groups for both men and women from different years at Commune (Krishnan, 2017) and IRIS (Krishnan, 2017) levels.
* _Average income_ encompasses medians and quantiles of income per consumption unit per household at commune (Krishnan, 2017) and IRIS levels for various years. In addition, the data also include Gini indexes that provide information on income inequality between individuals and households in a given region.
* _Educational level_ information includes the number of people attending school or university, in communes (Krishnan, 2017) and IRIS zones (Krishnan, 2017), for different age groups.
_Telecommunications_. The National Agency for Radio Frequencies (ANFR) and the Authority for the Regulation of Electronic Communications (Arcep) are French regulatory bodies in the telecommunication area, and gather data such as coverage and antenna locations for electronic communications. Some potentially relevant datasets are made available by the agencies, as listed below.
* _Operator coverage_ is a cartographic platform that assembles all geographic data related to different mobile networks (2G, 3G, 4G, 5G) for all operators (Krishnan, 2017).
* _Radio site and antenna locations_ are handled by ANFR, which updates its mobile network deployment observatory monthly and lists all radio sites authorized on French territory via a cartographic platform (Krishnan, 2017).
## 6. Concluding Remarks
This paper presents a dataset of mobile traffic made available to the research community within the context of a challenge, which features for the first time service-level demands, as well as unprecedented spatial resolution and geographical coverage in a developed country. We believe that the NetMob23 dataset will inspire researchers to design and implement a number of innovative analyses, and discover new knowledge about how and why people consume mobile applications.
## Acknowledgements The mobile network traffic and coverage data presented in Section 2.2 and Section 2.3 were collected during the research project CANCAN (Content and Context based Adaptation in Mobile Networks), grant no. ANR-18-CE25-0011, funded by the French National Research Agency (ANR). The NetMob 2023 Data Challenge is organized with the support of the research project NetSense, grant no. 2019-T1/TIC-16037 funded by Comunidad de Madrid, and of the research project CoCo5G (Traffic Collection, Contextual Analysis, Data-driven Optimization for 5G), grant no. ANR-22-CE25-0016, funded by the French National Research Agency (ANR).
2308.09424
Feel the Breeze: Promoting Relaxation in Virtual Reality using Mid-Air Haptics
Mid-air haptic interfaces employ focused ultrasound waves to generate touchless haptic sensations on the skin. Prior studies have demonstrated the potential positive impact of mid-air haptic feedback on virtual experiences, enhancing aspects such as enjoyment, immersion, and sense of agency. As a highly immersive environment, Virtual Reality (VR) is being explored as a tool for stress management and relaxation in current research. However, the impact of incorporating mid-air haptic stimuli into relaxing experiences in VR has not been studied thus far. In this paper, for the first time, we design a mid-air haptic stimulation that is congruent with a relaxing scene in VR, and conduct a user study investigating the effectiveness of this experience. Our user study encompasses three different conditions: a control group with no relaxation intervention, a VR-only relaxation experience, and a VR+Haptics relaxation experience that includes the mid-air haptic feedback. While we did not find any significant differences between the conditions, a trend suggesting that the VR+Haptics condition might be associated with greater pleasure emerged, requiring further validation with a larger sample size. These initial findings set the foundation for future investigations into leveraging multimodal interventions in VR, utilising mid-air haptics to potentially enhance relaxation experiences.
Naga Sai Surya Vamsy Malladi, Viktorija Paneva, Jörg Müller
2023-08-18T09:45:42Z
http://arxiv.org/abs/2308.09424v1
# Feel the Breeze: Promoting Relaxation in Virtual Reality using Mid-Air Haptics
###### Abstract
Mid-air haptic interfaces employ focused ultrasound waves to generate touchless haptic sensations on the skin. Prior studies have demonstrated the potential positive impact of mid-air haptic feedback on virtual experiences, enhancing aspects such as enjoyment, immersion, and sense of agency. As a highly immersive environment, Virtual Reality (VR) is being explored as a tool for stress management and relaxation in current research. However, the impact of incorporating mid-air haptic stimuli into relaxing experiences in VR has not been studied thus far. In this paper, for the first time, we design a mid-air haptic stimulation that is congruent with a relaxing scene in VR, and conduct a user study investigating the effectiveness of this experience. Our user study encompasses three different conditions: a control group with no relaxation intervention, a VR-only relaxation experience, and a VR+Haptics relaxation experience that includes the mid-air haptic feedback. While we did not find any significant differences between the conditions, a trend suggesting that the VR+Haptics condition might be associated with greater pleasure emerged, requiring further validation with a larger sample size. These initial findings set the foundation for future investigations into leveraging multimodal interventions in VR, utilising mid-air haptics to potentially enhance relaxation experiences.
Human-centered computing: Human computer interaction (HCI), Interaction paradigms, Virtual reality; Human-centered computing: HCI design and evaluation methods, User studies
## 1 Introduction
Stress is the human body's way of responding to any stimulus that could be perceived as a threat. Prior research demonstrates that stress can have a negative impact on memory, cognition, and learning [30, 40]. Notably, an extensive study conducted as part of the World Health Organisation World Mental Health International College Student Initiative, involving over 20,000 students across 9 countries and 24 universities, showed that 93.7% of the respondents experienced at least some stress in at least one life area [22]. The study also identified a causal effect of stress on six types of mental health disorders, further underlining the compelling need to investigate strategies for stress management and prevention. In particular, there is a need for a user-friendly stress management solution that can be easily incorporated into daily life. Furthermore, the solution should be easily accessible to nonclinical groups and potentially complement other forms of treatments and interventions. Due to its immersive nature, VR holds promise as a viable tool for fostering relaxation and mitigating stress. However, most VR relaxation applications are based only on the visual and auditory modalities, overlooking the potential benefits of touch (e.g., in perceiving and communicating emotions [16]). In this paper, we introduce a multimodal relaxation experience that incorporates a serene VR scene with congruent mid-air haptic sensations. Using mid-air haptic technology, tactile sensations are generated directly on the user's skin without physical contact. We hypothesise that integrating visual, auditory, and haptic stimuli to create a congruent multimodal experience in VR will result in heightened relaxation and more effective stress reduction.
We conduct a comprehensive evaluation to investigate the effectiveness of our approach in reducing stress and promoting relaxation, compared to a control condition with no relaxation intervention, and to a purely VR experience with no haptic feedback. The findings of this study contribute to the growing field of stress management and relaxation in virtual environments, with potential applications in diverse settings, from well-being interventions to therapeutic contexts.
## 2 Related Work
### VR for Stress Management and Relaxation
VR, with its ability to transport users to fully immersive environments that are vastly different from the real world, has emerged as a powerful tool for applications ranging from exposure-based therapy to gaming and simulation [13, 28]. Recent studies have leveraged this immersive property to design and investigate different relaxation techniques in VR, for example in combination with meditation [21] or breathing exercises [33]. VR relaxation experiences involving natural scenery, deployed during work breaks, have been shown to be effective forms of stress management and relaxation among office workers [35], clinicians [2], and college students [21]. Most of the existing VR relaxation applications in the literature primarily rely on the visual and auditory modalities, with the exception of a few studies that have incorporated olfactory interfaces [3]. To our knowledge, no VR application that incorporates mid-air haptic stimulation has been designed and tested thus far for the purpose of stress reduction and relaxation.
### Haptics for Stress Management and Relaxation
Touch is an important communication modality that can have therapeutic effects and hence plays a role in many relaxation interfaces [19]. A variety of haptic devices have been designed to promote relaxation, including Good Vibes [23], a haptic sleeve containing vibrotactile actuators that use dynamic vibration patterns as a soothing intervention in stressful situations. Bumatay et al. [9] designed a mobile meditation tool and a breathing guide involving a vibrating huggable pillow, while Haynes et al. [18] recently introduced a huggable haptic interface that uses pneumatics, instead of vibration motors, to more closely simulate slow breathing. However, wearable or handheld devices often provide a limited variety of haptic sensations. In contrast, EmotionAir [36] proposes non-contact tactile stimulation using a rotatable air nozzle and investigates how different parameters of the airflow relate to the valence, arousal, and dominance dimensions of the affective scale. Although the air jet haptic display can cover relatively large areas of the body, e.g., the forearm, it has a limited resolution. For a more detailed overview of affective haptic interfaces, please see Eid et al. [12].
### Mid-Air Haptic Interfaces
Mid-air haptic technology offers a touchless and hygienic method for providing haptic feedback to the user, as it operates at a distance without requiring any direct physical contact with the user [10, 20]. This technology employs an array of ultrasonic transducers to emit ultrasound waves, modulated down to a frequency perceptible by the mechanoreceptors in the human skin [10]. The parameters of the array can be adjusted to generate various haptic patterns, shapes, and textures [15, 17, 29].
Ultrasonic mid-air haptic displays offer high spatial and temporal resolution, operate at a distance, and enable users to interact with them using only their bare hands, without the need for wearables or attached gadgets.
### Affective Mid-Air Haptics
Findings of prior studies indicate that it is possible to convey emotional meaning using ultrasonic mid-air haptics [26]. Seinfeld et al. [31] found that visuotactile feedback using mid-air haptic stimulation has a positive effect on the illusion of being affectively touched by a virtual avatar and embodied in a virtual body in VR, compared to a purely visual condition. An audio-haptic demonstrator was developed to augment emotional short story narratives through mid-air haptics [32]. Tsumoto et al. [37] investigated the effect of different spatiotemporal parameters of the mid-air haptic stimulus on the perceived pleasantness. Moreover, further studies have revealed that adding a mid-air haptic stimulus can enhance participant immersion in art experiences [39], increase enjoyment when interacting with digital signage [25], positively influence emotions when watching short movies [1], and increase the implicit sense of agency during a virtual button press [14].
## 3 Methods
### Experimental Design
To control for any lasting effects of both the stress-inducing and the relaxation intervention, we opted for a between-group experimental design. We note that this is an accepted practice for relaxation studies in the literature [9, 21, 23, 34]. Participants were randomly assigned to one of the following three conditions: 1) No relaxation experience (_Control_), 2) Relaxation experience in VR (_VR-only_), and 3) Relaxation experience in VR with mid-air haptic feedback (_VR+Haptics_).
### Participants
A total of 24 participants (13 female, 10 male, and 1 other) with a mean age of 25.75 (SD=2.44) were recruited for the experiment. Most of the participants were students, while the rest were university employees. The study followed ethical standards as per the Helsinki Declaration. All participants received a monetary reimbursement for their participation.
### Experimental Stimuli
In both the VR-only and VR+Haptics conditions, we utilised a Vive Pro1 head-mounted display. The mid-air haptic stimulus in the VR+Haptics condition was generated using an Ultraleap STRATOS Explore development kit2. The therapeutic effects of natural environments, particularly those associated with water bodies, are well-documented in the literature [4]. Considering this, we selected a coastal setting for the relaxation experience in our experiment. By making the sea breeze tangible using mid-air haptics, we aimed to create a holistic multisensory environment. With permission from the content owners, we used a 360° stock video3 showcasing a serene beach scene where the waves gently splash against the coast. An image of the scene is shown in Figure 2 a). The footage also includes an audio track of the sea breeze.
Footnote 1: www.vive.com
Footnote 2: www.ultraleap.com/haptics
Footnote 3: www.atmosphaeres.com
In order to generate a congruent mid-air haptic sensation, we adopted a systematic, iterative approach. First, taking into account the capabilities of the haptic device and principles grounded in physics (specifically, the consistent onshore movement of sea breezes during daytime, owing to the difference in air pressure arising from temperature disparities [5]), we designed an initial haptic stimulation pattern. We used amplitude modulation to generate multiple haptic control points at a frequency of 200Hz. To refine the haptic sensation, we subjected it to pilot testing with 5 users and iteratively improved it based on the feedback obtained. As a result of this preliminary study, we identified the following characteristics as closely approximating the users' perception of a sea breeze sensation (see Figure 1):
1. The haptic sensation of the sea breeze encompasses 4 points of contact on the palm. The points are at a distance of 2.5cm from each other.
2. The haptic sensation starts at an 8cm offset above the centre of the hand.
3. The 4 points of contact are in a straight line, and the line of haptic points moves from the tip of the fingers to the base of the palm in steps of 0.75cm, with a delay of 0.1s. When the points reach the base of the palm, there is a 0.5s pause before they are generated again at the tip of the fingers.
4. The angle between the motion vector of the haptic stimulation (perpendicular to the line of haptic points) and the y-axis of the palm, when oriented flat facing the mid-air haptic array, randomly varies every three iterations, adopting values from the range of [-45°, 45°].
Figure 1: Illustration depicting our mid-air haptic representation of a sea breeze. The red and blue circles are two different examples of possible sets of haptic control points in a single iteration. The grey lines flanking the y-axis represent the permissible range of angles that the direction of motion of the haptic stimulation line can make with the y-axis. This angle is randomly varied every third iteration. The blue arrow indicates the direction of motion of the blue haptic points (oriented at a 0° angle relative to the y-axis), while the red arrow indicates the direction of the red points (oriented at a 45° angle relative to the y-axis).
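The listed parameters fully determine the trajectory of the control points, so one sweep of the breeze can be sketched in a few lines of Python. The geometry below (palm y-axis pointing towards the fingertips, assumed sweep length, frame timing) reflects our reading of the description and of Figure 1, not code from the study; the 200Hz amplitude modulation itself is assumed to be configured on the haptic device rather than in this trajectory code.

```python
import numpy as np

# Parameters from the characteristics above (metres and seconds).
N_POINTS, SPACING = 4, 0.025      # four contact points, 2.5 cm apart
START_OFFSET = 0.08               # line starts 8 cm above the palm centre
STEP, FRAME_DELAY = 0.0075, 0.1   # 0.75 cm per step, one step every 0.1 s
PAUSE = 0.5                       # pause at the palm base before restarting

def breeze_sweep(angle_deg):
    """Yield successive frames of four (x, y) control-point positions in
    the palm plane for one fingertip-to-palm-base sweep, tilted by
    `angle_deg` in [-45, 45] (re-drawn every third sweep by the caller)."""
    a = np.deg2rad(angle_deg)
    motion = np.array([np.sin(a), -np.cos(a)])   # fingertips -> palm base
    lateral = np.array([np.cos(a), np.sin(a)])   # along the line of points
    offsets = (np.arange(N_POINTS) - (N_POINTS - 1) / 2) * SPACING
    n_steps = int(2 * START_OFFSET / STEP)       # assumed sweep length
    for k in range(n_steps):
        centre = -(START_OFFSET - k * STEP) * motion
        yield [centre + o * lateral for o in offsets]  # one 0.1 s frame
```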
### Measures and Procedure
Upon arrival in the lab, the participant was provided with a document that contained basic information about the study. They were encouraged to ask any questions they might have about the study or the technology used, and were reminded that they could quit the experiment at any time. Then they read and signed a consent form, and filled in a demographics questionnaire. Next, the participant was seated on a comfortable chair with armrests, and they were instructed to sit still for 2 minutes to establish a baseline. After this period passed, they were asked to fill in the pleasure and arousal dimensions of the Self-Assessment Manikin (SAM) questionnaire [7]. Then the participant underwent a mild stress test. For this purpose we used the Short Sing-a-Song Stress Test (SSST) [38], which induces social-evaluative stress similar to that of the Trier Social Stress Test [24] and the Sing-a-Song Stress Test [8]. In particular, after reading some statements on the computer screen, the participant gets unexpectedly prompted to sing a song out loud for 60 seconds. A slight variation of this stress task was used in [23]. On completing the test, the participant filled in the SAM questionnaire again. If the participant belonged to the control condition, they were asked to sit still for the next 5 minutes before the SAM questionnaire was administered for the third time. Subsequently, a short interview about their experience was conducted.
Participants in the VR-only condition were instructed to put on the VR headset and the headphones; then they had 5 minutes to experience the VR scene. Participants in the VR+Haptics condition additionally received congruent mid-air haptic stimulation via the haptic device, placed approximately 20cm below their palm (see Figure 2 b). After the 5 minutes passed, they were again asked to fill in the SAM questionnaire. At the end of the experiment, we asked participants who were exposed to the VR scene 12 additional study-specific questions, to better understand how they experienced the relaxation intervention. The questions were formulated along the example of the short form of the User Engagement Scale (UES) [27], and similarly answered on a 5-point Likert scale (1 meaning _not at all_, and 5 _very likely/strongly agree_). The full list of questions is provided in Appendix A.
Figure 2: a) An image of the 360° video of a beach scene, which also included an audio track of the sea breeze. b) The image shows a participant experiencing the VR+Haptics condition. They wore a VR headset and extended one hand beyond the armrest of the chair over the haptics board, which was positioned around 20 cm below the palm. The other hand was in a relaxed resting position.
## 4 Results
### SAM
The results of the SAM questionnaire for the arousal dimension are shown in Figure 3. In the control condition, the initial mean arousal rating of 1.88 (SD 0.35) increased to 2.38 (SD 0.92) after the stress test, and decreased to 2.00 (SD 1.31) after the 5-minute rest. In the VR-only condition, the mean score of 1.75 (SD 0.89) after the baseline interval increased to 2.13 (SD 0.64) after the stress test, and then decreased to 1.25 (SD 0.46) after the relaxation experience in VR. Lastly, in the VR+Haptics condition, the mean score of 1.88 (SD 0.64) increased to 2.13 (SD 0.99) after the stress test, and then decreased to 1.25 (SD 0.71) after the relaxation experience. The Kruskal-Wallis test indicated no significant difference in the arousal ratings between the different relaxation conditions (\(\chi^{2}=4.22\), \(df=2\), \(p=0.12\)).
Figure 3: Boxplot of the Arousal dimension for the three conditions (Control, VR-only, and VR+Haptics).
Figure 4 shows the ratings in the pleasure dimension of the SAM questionnaire. In the control condition, the mean pleasure rating of 3.63 (SD 0.74) slightly decreased to 3.5 (SD 0.76) after the stress test, and increased to 3.88 (SD 0.64) after the rest period. In the VR-only condition, the mean pleasure was 3.88 (SD 0.99) after the baseline. Then it slightly increased to 4.00 (SD 0.53) after the stress test, and increased again to 4.25 (SD 0.46) after the VR relaxation experience. In the VR+Haptics condition, the mean pleasure rating of 4.13 (SD 0.83) remained the same after the stress test, and increased to 4.5 (SD 0.53) after the relaxation in VR with mid-air haptic feedback. The Kruskal-Wallis test indicated a potential trend in the pleasure ratings between the conditions (\(\chi^{2}=5.90\), \(df=2\), \(p=0.05\)), suggesting a potential pattern or tendency among the conditions. To confirm the significance of this trend, further investigation with a larger sample size is necessary.
Figure 4: Boxplot of the Pleasure dimension for the three conditions (Control, VR-only, and VR+Haptics).
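The group comparisons above rely on the Kruskal-Wallis H-test, which is available in SciPy. A minimal sketch of such a comparison follows; the rating arrays are placeholders and not the study's per-participant data.

```python
from scipy.stats import kruskal

# Placeholder post-intervention pleasure ratings, one list per condition
# (n = 8 in each group, on the 1-5 SAM scale).
control = [4, 3, 4, 4, 5, 4, 3, 4]
vr_only = [4, 5, 4, 4, 4, 5, 4, 4]
vr_haptics = [5, 4, 5, 4, 5, 4, 5, 5]

# The H statistic is chi-square distributed with df = k - 1 = 2 for k = 3 groups.
h_stat, p_value = kruskal(control, vr_only, vr_haptics)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```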
### Additional Questions
The results of the additional study-specific questionnaire that we administered after the VR-only and the VR+Haptics conditions, to better understand how participants experienced the relaxation procedure, are shown in Figure 5. For the VR-only condition, the participants rated the experience as involving (in terms of losing track of time) with a mean score of 3.63 (SD 1.06), relaxing with 4.13 (SD 0.64), would recommend it to others with 4.38 (SD 0.52), rewarding with 3.63 (SD 1.51), fun with 4.38 (SD 0.74), frustrating with 1.13 (SD 0.35), and tiring with 2.00 (SD 1.20). The VR+Haptics condition was rated as involving with a mean score of 3.88 (SD 0.99), relaxing with 4.38 (SD 0.74), would recommend it to others with 3.75 (SD 1.28), rewarding with 4.13 (SD 0.83), fun with 4.63 (SD 0.74), frustrating with 1.25 (SD 0.46), and tiring with 1.63 (SD 0.74). Using the Kruskal-Wallis test, we did not find a significant difference between the conditions (p\(>\)0.05).
Figure 5: Boxplot of the responses to questions 1 to 9 of the additional study-specific questionnaire for the VR-only and the VR+Haptics conditions.
The last three questions of the questionnaire were related to the experience with the mid-air haptic feedback, hence they were only answered by the participants in the VR+Haptics condition. The results are presented in Figure 6. The participants rated the haptic feedback as soothing with a mean score of 4.38 (SD 0.52), and distracting with 1.75 (SD 0.89). Lastly, participants rated the correspondence of the haptics with the VR scene with a mean score of 4.00 (SD 0.93).
Figure 6: Boxplot of the responses to questions 10 to 12 of the additional study-specific questionnaire for the VR+Haptics condition.
## 5 Discussion
Our study focused on exploring the potential of a multimodal relaxation experience, combining a VR scene with congruent mid-air haptic feedback, in alleviating stress. We discuss our findings in relation to existing literature and implications for future stress management and relaxation applications. We combined the immersive quality of VR with the potential benefits of mid-air haptic technology, hypothesising that this integration would lead to heightened relaxation and stress reduction. While we were not able to confirm this hypothesis in this initial investigation, we did observe some interesting trends that warrant further investigation with a larger sample size. Specifically, the results indicate a trend suggesting that the VR+Haptics condition might be associated with higher pleasure ratings compared to the other two conditions. On average, participants rated the experience in the VR+Haptics condition as more involving, relaxing, and fun, while being less tiring, compared to the VR-only condition. However, they hesitated about recommending the VR+Haptics experience to their friends and family, as they expressed some concerns regarding the price of the haptics hardware as well as operating it on their own in everyday life. Similarly, the higher average frustration score for the VR+Haptics condition might be due to unfamiliarity with the haptics board, although in the study we observed that, after some short instructions, the participants were able to correctly position their hand and optimally perceive the mid-air haptic stimulus. Participants in the VR+Haptics condition found the mid-air haptic feedback to be soothing and well-aligned with the VR scene, and most of them did not perceive it as distracting. There are several study limitations that should be considered. Firstly, the relatively small sample size, consisting mainly of university students and employees, might limit the generalisability of our findings. Furthermore, our study's focus on immediate effects and participants' immediate perceptions limits our ability to assess the sustained impact over time.
Longitudinal studies are required to determine the real-world effectiveness of the relaxation experience in stress management over extended time periods. Secondly, individual differences and expectations, stemming from prior experiences and personal preferences, could have influenced participants' interaction with the interventions. For instance, the Sing-a-Song Stress Test might not have been as effective with participants who have experience in performing arts. This might partially explain the unchanged mean pleasure score in the VR+Haptics condition and the higher mean pleasure scores in the VR-only condition after the stress test, compared to the baseline scores. Alternative stress-inducing tasks, such as the mental arithmetic stress test [6], could be considered for future studies. Thirdly, the design of the mid-air haptic feedback pattern, while rooted in iterative preliminary testing, may not account for individual variability in sensitivity and perception. For example, age-related differences in haptic sensitivity might exist. This highlights the potential for customisation to optimise the relaxation experience. Finally, augmenting the VR scene with additional (possibly interactive) elements, such as visual breathing guidance [33] or a virtual breathing coach [11], might further increase user engagement and immersion in the relaxation experience. Implementing interactive breath guidance using the mid-air haptic feedback offers yet another promising avenue for future exploration.
## 6 Conclusion
This study investigated for the first time the impact of using congruent mid-air haptic feedback, in combination with a natural scene in VR, on promoting relaxation.
2307.10928
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. We publicly release the evaluation data and code implementation at https://github.com/kaistAI/FLASK.
Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo
2023-07-20T14:56:35Z
http://arxiv.org/abs/2307.10928v4
# FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
###### Abstract
Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce **FLASK** (**F**ine-grained **L**anguage Model Evaluation based on **A**lignment **SK**ill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. We publicly release the evaluation data and code implementation at [https://github.com/kaistAI/FLASK](https://github.com/kaistAI/FLASK).
## 1 Introduction
Large Language Models (LLMs) have shown an impressive capability of following user instructions by aligning to human values, such as responding in a helpful, honest, and harmless manner (Ouyang et al., 2022; Bai et al., 2022; Kim et al., 2023; Korbak et al., 2023; Askell et al., 2021). In particular, techniques such as instruction tuning or reinforcement learning from human feedback (RLHF) have significantly improved this ability by fine-tuning a pretrained LLM on diverse tasks or user preferences (Ouyang et al., 2022; Chung et al., 2022; Wang et al., 2022). However, evaluating the alignment of LLMs to human values is challenging for two reasons. First, open-ended user instructions usually require a composition of multiple abilities, which makes measurement with a single metric insufficient. Second, since these instructions are task-agnostic, the required abilities often vary from one instance to another, making it impractical to use a fixed set of metrics. Currently, the evaluation of LLMs primarily relies on multiple independent benchmarks using automatic metrics (accuracy, ROUGE, etc.) or overall scoring of the model response based on human or model-based preference (Longpre et al., 2023; Wang et al., 2023; Ouyang et al., 2022; Zheng et al., 2023). However, both evaluation settings are insufficient. Benchmarks that adopt multiple metrics are not scalable, since each of them targets different skills, domains, and difficulties, such as GSM8K (Cobbe et al., 2021) for logical correctness and TruthfulQA (Lin et al., 2022) for truthfulness. Also, relying on these automatic metrics limits interpretability and reliability, because only task-wise analysis is possible and automatic metrics are sensitive to surface forms (Krishna et al., 2021). Moreover, merely assigning a single score based on preferences does not tell the whole story, because there could be multiple axes along which to evaluate the response, such as completeness, factuality, etc. Instead, we need to evaluate the model's performance using fine-grained criteria to comprehend the model from various perspectives.
Although many recent works have studied multi-metric or fine-grained evaluation of LLMs, they mainly focus on a fixed metric set across instances for specific tasks, which is not applicable to the task-agnostic evaluation setting for LLM alignment (Liu et al., 2023; Liang et al., 2022; Lee et al., 2022; Min et al., 2023; Krishna et al., 2023). To address the limitations of current evaluation settings, we propose **FLASK** (Fine-grained Language Model Evaluation based on **A**lignment **SK**ill Sets), a novel evaluation protocol that adopts a fine-grained scoring setup, enabling task-agnostic skill evaluation aligned with the provided instructions. We define 4 primary abilities which are divided into 12 fine-grained skills for comprehensive language model evaluation: Logical Thinking (Logical Correctness, Logical Robustness, Logical Efficiency), Background Knowledge (Factuality, Commonsense Understanding), Problem Handling (Comprehension, Insightfulness, Completeness, Metacognition), and User Alignment (Conciseness, Readability, Harmlessness). First, we collect a total of 1,740 evaluation instances from various NLP datasets and annotate the relevant set of skills (a _skill set_), domains, and the difficulty level for each instance. Then, evaluators assign scores ranging from 1 to 5 for each annotated skill based on the reference answer and skill-specific scoring rubrics, where the evaluators could be human evaluators or state-of-the-art LLMs1. For the 89 instances that are labeled to be most difficult (FLASK-Hard), we additionally adopt an even more fine-grained evaluation using _instance-specific_ rubrics. The overall illustration is shown in Figure 1.
Footnote 1: We provide further discussions of using LLMs as evaluators in Appendix D.2.
By applying FLASK, we compare and analyze various open-source and proprietary LLMs depending on the skill set, target domain, and difficulty. We conduct both human-based and model-based evaluations and observe that their results are highly correlated. We experimentally observe that applying fine-grained evaluation not only leads to better interpretability but also better reliability, increasing the correlation between human and model evaluation and mitigating the bias of model-based evaluation. Also, by conducting extensive analysis based on automatic model-based evaluation, we present several findings:
* We observe that current open-source LLMs significantly underperform proprietary LLMs for Logical Thinking and Background Knowledge abilities.
* We observe that some skills, such as Logical Correctness and Logical Efficiency, require larger model sizes to effectively acquire them compared to other skills.
* We show that even state-of-the-art proprietary LLMs struggle on the FLASK-Hard set, with up to 50% performance degradation for some skills compared to the whole FLASK evaluation set.
We suggest that comprehensive analysis of LLMs through fine-grained evaluation is important and practical for both the developers and practitioners. For model developers, FLASK facilitates accurate interpretation of the model's current state, providing clear guidance for improving model alignment.
Figure 1: (a) Skill-agnostic evaluation gives a single overall score for the model response, which limits interpretability. (b) Fine-grained evaluation of FLASK first annotates fine-grained metadata for each instruction and conducts evaluation by assigning a score to each skill based on skill-specific or instance-specific score rubrics.
For practitioners, FLASK's fine-grained comparison of different LLMs helps recommend suitable models for specific situations. ## 2 Related Works Holistic Evaluation of LLMsHolistic evaluation of LLMs is crucial for assessing model strengths, weaknesses, and potential risks (Shevlane et al., 2023; Liang et al., 2022; Gehrmann et al., 2022; Chia et al., 2023; Laskar et al., 2023). To comprehensively evaluate the performance of LLMs, many works have assessed models on multiple independent benchmarks using automated metrics, such as accuracy for knowledge/reasoning tasks or ROUGE for long-form text generation (Chung et al., 2022; Hendrycks et al., 2020; Suzgun et al., 2022; Wang et al., 2022c; Gao et al., 2021; Zhong et al., 2023). To assess multiple aspects of the model response, multi-metric evaluation settings have been proposed, providing a more comprehensive perspective of the model performance beyond accuracy (Liang et al., 2022; Thoppilan et al., 2022; Fu et al., 2023; Jain et al., 2023; Lee et al., 2022). Furthermore, to faithfully evaluate LLMs on tasks such as fact verification or summarization, recent works have proposed fine-grained atomic evaluation settings (Min et al., 2023; Krishna et al., 2023). Especially, Wu et al. (2023a); Lightman et al. (2023) show that fine-grained evaluation of model responses could be utilized for better rewards. In FLASK, we adopt an _instance-wise_ fine-grained multi-metric setting, which distinguishes it from previous works and is more applicable to evaluate the general capabilities of LLMs. Alignment of LLMsAligning pre-trained LLMs to human values can be achieved through different fine-tuning techniques such as supervised instruction tuning or reinforcement learning from human feedback (RLHF). For instruction tuning, various techniques have shown effectiveness such as task and model scaling (Mishra et al., 2022; Wei et al., 2021; Wang et al., 2022c; Chung et al., 2022), dataset distillation (Chiang et al., 2023; Taori et al., 2023; Xu et al., 2023; Dettmers et al., 2023; Geng et al., 2023; Gao et al., 2023; Zhang et al., 2023), instruction generation (Ye et al., 2022b; Honovich et al., 2022b), data augmentation through model-generated response (Wang et al., 2022b; Honovich et al., 2022a; Kim et al., 2023b), multilingual instruction tuning (Muenringhoff et al., 2022) or in-context instruction learning (Ye et al., 2023a). For RLHF, techniques such as training on synthetic feedback (Bai et al., 2022b; Kim et al., 2023c) or applying reinforcement learning during pretraining (Korbak et al., 2023) have shown to better control the model's response to make LLMs aligned to human values. However, a comprehensive comparison between various user-aligned models trained with different techniques is yet to be studied in sufficient detail. ## 3 FLASK: Fine-grained Language Model Evaluation Protocol We introduce FLASK, a fine-grained skill set-based evaluation protocol for assessing the alignment of language models. We define 4 primary abilities, divided into 12 skills, that are necessary to follow user instructions in a desirable manner (Section 3.1). We specify the process of the evaluation dataset construction (Section 3.2) and the evaluation process (Section 3.3). Additionally, for a challenging scenario, we introduce FLASK-Hard (Section 3.4). The illustration of the overall process is shown in Figure 21 in the Appendix. 
We emphasize that applying instance-wise multi-metric evaluation is what mainly distinguishes our work from previous evaluation settings, enabling task-agnostic evaluation. In this work, we consider two types of evaluators: human evaluators and Eval LM, one of the state-of-the-art LLMs used for evaluation. ### Skill set Categorization Expanding upon previous research on language model evaluation (Sugawara and Aizawa, 2016; Sugawara et al., 2017; Radzwill and Benton, 2017; Schlegel et al., 2020; Rogers et al., 2021), we define a taxonomy of skills to comprehensively evaluate the performance of LLMs. Our taxonomy suggests a systematic framework for classifying the key dimensions of pertinent skills necessary to follow a broad range of single-turn, English natural instructions. Based on the skill categorization of Rogers et al. (2021) which was specifically proposed for question answering and reading comprehension, we recategorize skills suitable for LLM alignment. Our proposed categorization includes four primary abilities, each of which is further divided into 2-4 skills, resulting in a total of 12 skills: * **Logical Thinking** refers to the ability to apply reasoning, critical thinking, and deductive skills when processing and responding to instructions. In order to do so, models should generate a logically correct final answer (Logical Correctness) while preserving generalizability during the step-by-step logical process without any contradiction (Logical Robustness). Also, the logical process should be efficient and not contain any unnecessary steps (Logical Efficiency). * **Background Knowledge** comprises the capacity to generate responses by accessing a broad repository of general and domain-specific information. This ability requires the model to provide accurate and contextually relevant responses to instructions requiring factual (Factuality) or commonsense knowledge (Commonsense Understanding). * **Problem Handling** pertains to the proficiency in addressing challenges that emerge while processing and responding to user instructions. This category encompasses the capacity to understand the implicit and explicit purpose and requirements of the instruction (Comprehension), develop creative perspectives or interpretations of the instruction (Insightfulness), handle the instruction by providing in-depth and in-breadth information (Completeness), and be aware of its own capability to answer the instruction (Metacognition). * **User Alignment** represents the ability to empathize with the user and align its responses to the user's intentions, preferences, and expectations. This category encompasses the model's ability to structure the answer to promote the users' readability (Readability), presenting a concise response for the reader without unnecessary information (Conciseness), and considering potential risks to user safety (Harmlessness). We ensure that each skill offers a wide range of criteria for a holistic evaluation of various LLMs. We provide the specific definition for each skill in Table 11 in the Appendix. ### Evaluation Data Construction For constructing the evaluation data, we collect input and output pairs from various datasets, modify the collected instances, and filter based on length criteria, yielding a total of 1,740 instances sourced from 122 datasets. We first collect input (instruction) and output (reference answer) pairs from diverse English NLP datasets that are either multi-task datasets (e.g. MMLU (Hendrycks et al., 2020)) or single-task datasets (e.g. 
GSM8K (Cobbe et al., 2021)). For single-task datasets, we restrict them to account for at most 20 instances per dataset for diversity. After collection, we modify the instances by manually writing instructions for datasets that do not include instructions. Lastly, we remove instances where the input length is longer than 2048. More details including the list of source datasets are provided in Appendix J. For each evaluation instance, we annotate the metadata which consists of 1) the essential skills to follow the instruction, 2) target domains, and 3) the difficulty level of the instructions. We first validate that human labelers and Eval LM have a high correlation for the metadata annotation on a subset of 200 instances. We have observed a 95.22% acceptance rate for skill annotation, an 81.32% acceptance rate for domain annotation, and a Pearson correlation coefficient of 0.774 for difficulty annotation. Since the model-based annotation has acceptable noise and high correlation to human labelers, we utilize the Eval LM for metadata annotation to reduce the burden of human annotations. We provide more details on validating the annotation of Eval LM in Appendix G.2. For the selection of necessary skills, the Eval LM selects the top-3 essential skills required to follow the instructions for each instance, from the 12 skills defined in Section 3.1. We achieve this by providing the Eval LM with the instruction, reference answer, and descriptions of all 12 skills. For domain annotation, we identify 10 domains: Humanities, Language, Culture, Health, History, Natural Science, Math, Social Science, Technology, and Coding by modifying the Wikipedia categorization of Reid et al. (2022). Lastly, for difficulty level annotation, we divide the difficulty level into 5 levels based on the extent of required domain knowledge by referencing Webb's depth of knowledge (Webb, 1997; 1999) and NIH proficiency scale2: simple lifestyle knowledge, advanced lifestyle knowledge, formal education knowledge, major-level knowledge, and expert-level knowledge where we map each level into a level from 1 to 5. Details of the metadata annotation process are provided in Appendix E and the statistics of the evaluation dataset are provided in Appendix F. ### Evaluation Process Utilizing the annotated metadata for each instance, we evaluate and analyze the target model response in a fine-grained manner. Given the evaluation instruction, reference answer, response of the target model, and pre-defined score rubric for each selected skill from Section 3.2, evaluators (human annotators or Eval LM) allocate a score from 1 to 5 based on the skill-specific score rubrics that have a corresponding description for each score. For model-based evaluation, we enforce the Eval LM to generate a rationale before assigning a score, motivated by the effectiveness of CoT prompting (Wei et al., 2022b) for the evaluation of LLMs (Liu et al., 2023). After the evaluators assign a score for each skill of the instance, we aggregate the scores based on the skill, domain, and difficulty level for fine-grained analysis. Through this analysis, we can understand how a specific target model performs on a specific composition of metadata. The illustration of the evaluation process and the score rubric for each skill is provided in Figure 1 and Appendix K.1. ### Flask-Hard To compare state-of-the-art LLMs in a challenging setting, we additionally introduce FLASK-Hard subset. 
For FLASK-Hard construction, we select instances that are annotated as expert-level knowledge difficulty (Level 5), yielding a total of 89 instances. Instances of FLASK-Hard include challenging problems such as predicting a checkmate move, or solving a math problem that requires a deep understanding of major-level theorems. Since FLASK-Hard consists of difficult instructions that require extensive domain knowledge which may prevent reliable evaluation, we explore a more fine-grained evaluation setting for FLASK-Hard. Instead of using a fixed score rubric for each skill, we introduce an _instance-specific_ score rubric for each skill. Specifically, Eval LM first generates at most 5 subquestions (checklists) that correspond to one of the related skills annotated in Section 3.2 for each instance. Then, we manually remove duplicates or subquestions unrelated to the annotated skillset. After we annotate subquestions for each instance, evaluators give a score ranging from 1 to 5 based on the judgment of whether the model response accomplished the specific requirement of the subquestion. We specify the illustration in Figure 1 and the prompt in Figure 34 (Appendix) for the instance-specific score rubric, respectively. ## 4 Reliability of FLASK In this section, we investigate the reliability of FLASK by 1) measuring the correlation between human-based and model-based evaluation and 2) the robustness to stylistic changes of model-based evaluation. For correlation measurement, we conduct both human-based and model-based evaluations on 200 instances randomly sampled from the whole FLASK evaluation set. We recruited 10 human labelers who have majored in various fields including computer science, mathematics, economics, business, chemistry, etc. We evaluate 4 models: 1) GPT-3.5, 2) Bard, 3) Vicuna-13B, and 4) Alpaca-13B3. For model-based evaluation, we use GPT-4 (OpenAI, 2023) as the default Eval LM since it is known to show the highest correlation with human labelers (Liu et al., 2023; Dubois et al., 2023)4. Details of the human evaluation process are provided in Appendix G.1 and the analysis of inter-labeler agreement between skills is provided in Appendix C.1. To measure the robustness to stylistic changes, we use the response of GPT-3.5 of FLASK-Hard and generate an adversarial set to make the response more verbose. We measure the consistency of the scores given by the Eval LM between the original and the adversarial response. Footnote 3: We specify the details of models being evaluated in Appendix B. Footnote 4: We use the gpt-4–0613 version for model-based evaluation. We show the result of using another model (Claude) for model-based evaluation in Appendix C.7. **Fine-graininess leads to a high correlation between human-based and model-based evaluation.** We compare the result of human-based and model-based evaluation of FLASK in Figure 2. Overall, the tendency is similar between the two evaluation settings: Alpaca model results in the worst performance for most of the skills, and both Vicuna and Alpaca have a significant performance gap between GPT-3.5 and Bard on Logical Thinking (Logical Robustness, Logical Correctness, Logical Efficiency) and Background Knowledge abilities (Factuality, Commonsense Understanding skills) compared to other skills. However, it's worth noting that both evaluation settings are necessary, as neither is perfect and they complement each other. 
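The Spearman, Kendall-Tau, and Pearson correlations used for this comparison (reported in Table 1 below) are standard measures available in SciPy; the following is a minimal sketch, where the two score lists are hypothetical paired per-skill scores from the two evaluator types.

```python
from scipy.stats import spearmanr, kendalltau, pearsonr

def evaluator_correlations(human_scores, model_scores):
    """Correlation between paired 1-5 scores assigned by human labelers
    and by the Eval LM to the same (instance, skill) pairs."""
    rho, _ = spearmanr(human_scores, model_scores)
    tau, _ = kendalltau(human_scores, model_scores)
    r, _ = pearsonr(human_scores, model_scores)
    return {"spearman": rho, "kendall": tau, "pearson": r}
```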
In human-based evaluation, we observe a central tendency bias (Goldfarb-Tarrant et al., 2020), where labelers tend to assign middle scores on the Likert scale more often, resulting in a more uniform score distribution. Also, human labelers are prone to fatigue since the annotation task requires knowledge-intensive evaluation, such as judging code implementation tasks (Casper et al., 2023; Bowman et al., 2022). On the other hand, model-based evaluation is known to possess style and verbosity biases (Wang et al., 2023; Dubois et al., 2023; Zheng et al., 2023), where the evaluation model tends to prefer responses similar to its own generation style and responses with longer lengths. For example, for some skills, the Eval LM tends to prefer the response style of GPT-3.5 over that of Bard, unlike human evaluators.

To quantitatively analyze the correlation between human-based and model-based evaluation, we measure the Spearman, Kendall-Tau, and Pearson correlations. We first observe that using an automatic metric (ROUGE-L) results in the lowest correlation. Next, we compare the skill-specific rubric setting of FLASK with the reference answer-guided, _skill-agnostic_ evaluation setting introduced in Zheng et al. (2023) and illustrated in Figure 1a, which provides a single overall score without considering the skill set5. As shown in Table 1, applying a skill-specific fine-grained evaluation leads to a stronger correlation between human-based and model-based evaluation consistently across various Eval LMs. Also, by comparing different Eval LMs, we observe that GPT-4 shows the highest correlation compared to GPT-3.5 and Claude. Additionally, we analyze the effect of including a reference answer, generating a rationale before assigning a score, and including a score rubric for each skill during the model-based evaluation of FLASK, respectively. As shown in Table 1, removing any of these factors leads to a significant drop in the correlation, especially for the reference answer.

\begin{table}
\begin{tabular}{l|c c c}
 & \(\rho\) & \(\tau\) & \(r\) \\ \hline
ROUGE-L & 0.333 & 0.240 & 0.289 \\
skill-agnostic (GPT-3.5) & 0.360 & 0.267 & 0.450 \\
FLASK (GPT-3.5) & 0.424 & 0.330 & 0.449 \\
skill-agnostic (Claude) & 0.352 & 0.264 & 0.391 \\
FLASK (Claude) & 0.432 & 0.334 & 0.458 \\
skill-agnostic (GPT-4) & 0.641 & 0.495 & 0.673 \\
FLASK (GPT-4) & **0.680** & **0.541** & **0.732** \\
– Reference Answer & 0.516 & 0.429 & 0.566 \\
– Rationale & 0.634 & 0.523 & 0.683 \\
– Score rubric & 0.646 & 0.512 & 0.696 \\
\end{tabular}
\end{table}

Table 1: Correlation between model-based evaluation and human labelers for skill-agnostic (skill-agnostic rubric) and FLASK (skill-specific rubric) settings across different Eval LMs (GPT-3.5, Claude, GPT-4). We report Spearman (\(\rho\)), Kendall-Tau (\(\tau\)), and Pearson (\(r\)) correlations. We also measure the effect of including a reference answer, rationale generation, and a score rubric.

Figure 2: (a) The skill comparison between different models (GPT-3.5, Vicuna, Bard, Alpaca) through human-based evaluation on the subset of the FLASK evaluation set. (b) The skill comparison between different models through model-based evaluation of FLASK. Both settings are highly correlated with each other.

**Fine-grained evaluation mitigates the bias of model-based evaluation.** As mentioned previously, model-based evaluation is known to be prone to biases (Wang et al., 2023; Zheng et al., 2023).
Among various biases, we investigate the effect of fine-grained evaluation on verbosity bias, which is quantitatively measurable in a controllable setup. We take the original responses of GPT-3.5 on FLASK-Hard and prompt GPT-3.5 to make each response more verbose while retaining its contents. We measure the robustness of an evaluation method by calculating the ratio of cases in which the Eval LM assigns the same score regardless of the stylistic change. We compare the skill-agnostic evaluation, the skill-specific rubric of FLASK, and the instance-specific rubric of FLASK introduced in Section 3.4 and illustrated in Figure 1. As shown in Figure 3, the robustness increases as the fine-graininess of the evaluation setting increases. This indicates that increasing the fine-graininess can mitigate the biases and enhance the reliability of model-based evaluation to some extent. We provide the correlation between response length and the performance score for each skill of various models on the whole FLASK evaluation set in Figure 22 and Table 5 in the Appendix. Although the instance-specific rubric is the most robust to stylistic changes, it is more costly since it requires an additional stage of annotating subquestions and manual validation. We therefore utilize the instance-specific rubric for FLASK-Hard only and leave extending it to the whole evaluation set, as well as the investigation of other biases, for future work.

Footnote 6: For the evaluation settings of FLASK, we exclude the scores corresponding to Completeness and Conciseness since these skills are inherently dependent on the length of the response.

## 5 Analysis based on Automatic Evaluation of FLASK

Although conducting both human-based and model-based evaluation is reliable for comprehensive analysis, human-based evaluation is time-consuming and expensive. Therefore, given the high correlation with human-based evaluation shown in Table 1, we focus on automatic model-based evaluation for an extensive analysis of LLMs on the whole FLASK evaluation set.

**Open-source models significantly underperform proprietary models on particular skills.** First, to compare open-source models with proprietary models on the entire set, we compare GPT-3.5, Vicuna-13B, and WizardLM-13B, where the latter two models are trained with GPT-3.5 responses during instruction tuning. As shown in Figure 4, Vicuna and WizardLM show similar performance across all skills. In contrast to the claim of Xu et al. (2023), this implies that the effect of complex instructions is not significant when using the same base model, teacher model, and training configuration. By comparing GPT-3.5 and the two open-source models (Vicuna and WizardLM), we observe that Problem Handling and User Alignment abilities, including Metacognition, Readability, and Conciseness, can be almost fully imitated. However, a large gap is especially noticeable for Logical Thinking and Background Knowledge abilities. This result aligns with Gudibande et al. (2023), which demonstrates that open-source models only imitate the _style_ of proprietary models rather than their _factuality_. We also observe a similar tendency for larger open-source models such as Tülu-65B, as shown in Table 9. By analyzing the performance in terms of each domain, we find that both open-source models significantly underperform GPT-3.5 in the Math and Coding domains, as shown in Figure 28a in the Appendix.
Figure 3: Comparison of skill-agnostic, skill-specific, and instance-specific score rubrics in terms of their robustness to stylistic changes.

Figure 4: The performance comparison between GPT-3.5, Vicuna, and WizardLM for each skill on the FLASK evaluation set.

We conjecture that the failures of open-source models on these domains are due to a lack of domain-specific pre-training. Moreover, by analyzing the performance by difficulty level in Figure 29 in the Appendix, both open-source models consistently exhibit poor performance, especially on Logical Thinking and Background Knowledge abilities.

**Some skills require larger model sizes.** We analyze the effect of model scale for each skill by comparing Tülu 7B, 13B, 30B, and 65B, shown in Figure 5. Overall, larger models lead to better performance, which aligns with the results of Chung et al. (2022) and Wei et al. (2022a). However, the range of improvement varies across skills. For example, skills such as Readability, Harmlessness, and Metacognition show slow improvement as the model scales up, whereas skills such as Logical Robustness, Logical Correctness, and Logical Efficiency show rapid improvements. Using FLASK, we confirm the findings of Gudibande et al. (2023) that skills requiring logical reasoning or fact retrieval benefit significantly from model scaling. Interestingly, for some skills the performance nearly saturates after a particular scale: Logical Efficiency and Conciseness after 30B, Insightfulness after 13B, and Metacognition after 7B. This suggests that some skills necessitate larger model sizes, while others can be achieved with smaller models. By analyzing the effect of model scaling across difficulty levels for each skill, we find that scaling the model size is more effective for easier instructions, as shown in Figure 6. Larger Tülu models reduce the performance gap with GPT-3.5 especially for simple lifestyle knowledge (Level 1), whereas the gap increases for higher difficulties. This implies that scaling the model size might not be the optimal solution for harder instructions. We provide the results for each domain in Figure 31 and additionally observe that different skills require different numbers of training steps in Appendix C.6.

Figure 5: The performance of Tülu shown for each skill depending on the model scale (7B, 13B, 30B, 65B). While skills such as Logical Robustness and Logical Correctness largely benefit from model scaling, smaller models also perform well in skills such as Readability and Metacognition.

Figure 6: The performance comparison among GPT-3.5, Tülu-7B, 13B, 30B, and 65B for Logical Robustness, Logical Correctness, Factuality, and Completeness, depending on the difficulty of the instructions. Larger models show effectiveness on easier instructions especially. The full results are shown in Figure 30 (Appendix).

**Proprietary models also struggle on the FLASK-Hard set.** We also compare the performance of various proprietary models (GPT-3.5, Bard, Claude, InstructGPT, GPT-4) on the FLASK evaluation set, as shown in Figure 7(a). Claude shows the best performance on all Problem Handling skills, while GPT-3.5 shows the best performance on Logical Thinking and Background Knowledge. InstructGPT shows the worst performance across most skills because it often provides short responses that do not fully address the intention of the given instruction. We provide the comparison between proprietary models for each domain in Figure 32.
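The per-skill and per-difficulty comparisons discussed above boil down to grouped averages of per-instance rubric scores. A minimal, hypothetical sketch of that aggregation follows; the column names and rows are ours, not FLASK's.

```python
# Minimal sketch: aggregate per-instance rubric scores by model, skill, and
# difficulty level. All column names and rows are illustrative placeholders.
import pandas as pd

scores = pd.DataFrame({
    "model":      ["tulu-7b", "tulu-65b", "gpt-3.5", "tulu-65b"],
    "skill":      ["Logical Correctness"] * 4,
    "difficulty": [1, 1, 5, 5],    # 1 = simple lifestyle ... 5 = expert-level
    "score":      [2, 4, 3, 3],    # 1-5 rubric scores assigned by the evaluator
})

# Mean score per (model, skill): the numbers behind the per-skill comparisons.
by_skill = scores.groupby(["model", "skill"])["score"].mean()

# Mean score per (model, skill, difficulty): the scaling-vs-difficulty view.
by_difficulty = scores.groupby(["model", "skill", "difficulty"])["score"].mean()
print(by_skill, by_difficulty, sep="\n\n")
```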
Furthermore, we compare the performance of different proprietary models on the FLASK-Hard set, as shown in Figures 7(b) and 7(c), which adopt the skill-specific and instance-specific score rubrics, respectively. First, we observe that on FLASK-Hard the performance degrades significantly for Logical Thinking and Background Knowledge abilities compared to Figure 7(a). Also, by comparing the other models with GPT-4, we observe a large gap for Logical Correctness, Insightfulness, and Commonsense Understanding. Interestingly, even the state-of-the-art GPT-4 model performs poorly on the Logical Correctness and Factuality skills on the FLASK-Hard set. This suggests that there is significant room for improvement in those abilities even for proprietary models. By comparing Figures 7(b) and 7(c), we can observe that adopting an instance-specific score rubric leads to a lower score overall. This indicates that the instance-specific score rubric is a stricter setting since it requires accomplishing a more specific requirement, as shown in the example of Figure 1. Although an in-depth analysis of model scales or training techniques is infeasible for proprietary models, FLASK-Hard could provide action items for the companies developing them.

## 6 Application of FLASK

**FLASK for Developers.** FLASK enables model developers to more accurately analyze the performance of their own models and suggests detailed action items for intermediate model checkpoints. Specifically, developers working on open-source LLMs can compare their performance with proprietary LLMs and try to close the gap between them, especially for Logical Thinking and Background Knowledge abilities. On the other hand, developers working on proprietary LLMs can devise methods to enhance the performance of their own models on the FLASK-Hard set. Similar to the role of Wang et al. (2022) and Longpre et al. (2023) for instruction-tuned LLMs and Longpre et al. (2023) and Xie et al. (2023) for pre-trained LLMs, FLASK can be utilized for making better base models, better training datasets, and better training techniques.

**FLASK for Practitioners.** FLASK enables practitioners to select appropriate LLMs for different situations, similar to the role of Jiang et al. (2023). Because the evaluation setting of FLASK is dynamic, practitioners can perform metadata annotation on their own test sets and approximate which models would be suitable. For example, if the end use case is a chatbot for chit-chat, a fine-tuned 7B open-source model might be enough. In contrast, it might be worthwhile to pay for API calls to proprietary LLMs for complex reasoning tasks. Potentially, the results of FLASK could be used to automatically route and recommend suitable LLMs depending on the instruction.

Figure 7: (a) Performance comparison of various proprietary models (GPT-3.5, Bard, InstructGPT, Claude) on the FLASK evaluation set. (b) Performance comparison of various proprietary models on the FLASK-Hard evaluation set using skill-specific score rubrics. (c) Performance comparison of various proprietary models on the FLASK-Hard evaluation set using instance-specific score rubrics. Exact numbers, including those for open-source models, are reported in Table 9 and Table 10 (Appendix).

## 7 Conclusion

In this paper, we introduce FLASK, a fine-grained language skill set evaluation setting for the alignment of language models.
We categorize 12 fine-grained skills to evaluate LLMs and annotate the necessary skills, the target domain, and the difficulty level for each instance. FLASK provides a comprehensive and interpretable analysis of the capabilities of LLMs by enabling analysis of performance across different skills, domains, and difficulty levels. We also observe that applying fine-grained evaluation yields better reliability, both in terms of the correlation between human-based and model-based evaluation and in terms of the robustness of model-based evaluation to stylistic changes. We analyze various open-source and proprietary LLMs and suggest that FLASK could be utilized for building better language models and for providing meaningful insights into various LLMs for both developers and practitioners. We hope that FLASK can serve as an initial guideline for fine-grained evaluation towards a comprehensive and reliable evaluation setting.

#### Acknowledgments

We thank Hyunji Lee, Yizhong Wang, Eric Wallace, and Swaroop Mishra for helpful discussions and constructive feedback. We also thank the members of KAIST for participating in the human evaluation for FLASK.
2302.13427
Detecting Learning by Exporting and from Exporters
Existing literature at the nexus of firm productivity and export behavior mostly focuses on "learning by exporting," whereby firms can improve their performance by engaging in exports. Whereas, the secondary channel of learning via cross-firm spillovers from exporting peers, or "learning from exporters," has largely been neglected. Omitting this important mechanism, which can benefit both exporters and non-exporters, may provide an incomplete assessment of the total productivity benefits of exporting. In this paper, we develop a unified empirical framework for productivity measurement that explicitly accommodates both channels. To do this, we formalize the evolution of firm productivity as an export-controlled process, allowing future productivity to be affected by both the firm's own export behavior as well as export behavior of spatially proximate, same-industry peers. This facilitates a simultaneous, "internally consistent" identification of firm productivity and the corresponding effects of exporting. We apply our methodology to a panel of manufacturing plants in Chile in 1995-2007 and find significant evidence in support of both direct and spillover effects of exporting that substantially boost the productivity of domestic firms.
Jingfang Zhang, Emir Malikov
2023-02-26T22:31:39Z
http://arxiv.org/abs/2302.13427v1
# Detecting Learning _by_ Exporting and _from_ Exporters

###### Abstract

Existing literature at the nexus of firm productivity and export behavior mostly focuses on "learning by exporting," whereby firms can improve their performance by engaging in exports. Whereas, the secondary channel of learning via cross-firm spillovers from exporting peers, or "learning from exporters," has largely been neglected. Omitting this important mechanism, which can benefit both exporters and non-exporters, may provide an incomplete assessment of the total productivity benefits of exporting. In this paper, we develop a unified empirical framework for productivity measurement that explicitly accommodates both channels. To do this, we formalize the evolution of firm productivity as an export-controlled process, allowing future productivity to be affected by both the firm's own export behavior as well as the export behavior of spatially proximate, same-industry peers. This facilitates a simultaneous, "internally consistent" identification of firm productivity and the corresponding effects of exporting. We apply our methodology to a panel of manufacturing plants in Chile in 1995-2007 and find significant evidence in support of both direct and spillover effects of exporting that substantially boost the productivity of domestic firms.

**Keywords**: exporting, learning by exporting, learning from exporters, production function estimation, productivity, spillovers

**JEL Classification**: D24, F10, L10

## 1 Introduction

Governments in both developing and developed countries commonly pursue policies aimed at promoting exports. In addition to boosting aggregate demand, such policies are also routinely justified by arguing that domestic exporters benefit from export-driven productivity improvements via the absorption of new technologies from abroad, learning of international best practices that lead to improved business processes, productivity enhancements driven by exposure to more competition, scale effects, quality and variety effects, etc. This productivity-enhancing mechanism is usually referred to as "learning by exporting" (LBE) (e.g., Clerides et al., 1998; Aw et al., 2000; Delgado et al., 2002; Baldwin & Gu, 2004; Van Biesebroeck, 2005; De Loecker, 2007, 2013). Such export-related productivity gains are facilitated by the firm's _own_ direct access to foreign customers, partners and rivals.

Domestic firms, however, can learn not only from their own export experiences but also from their local _peers_ who engage in exports, and this indirect learning opportunity is available to both exporters and non-exporters. These external export-driven productivity spillovers are a type of cross-firm peer effects, which we refer to as "learning from exporters" (LFE), and effectively capture the secondary productivity effects of export engagement. Such cross-firm spillovers may occur via labor turnover, learning by imitation, exposure to affiliate and/or competitor products, etc. (see Greenaway et al., 2004; Sala & Yalcin, 2015). For instance, the movement of workers from exporting firms to other domestic firms may facilitate the dispersion of tacit knowledge about more innovative/efficient foreign technologies or institutional knowledge about foreign markets, which may help these firms improve their productivity too. Taking the indirect cross-firm productivity effect of export engagement for granted, as customarily done in the literature on the productivity-exports nexus, will likely underestimate the total productivity benefits of exporting.
Besides, omitting this important mechanism may also contaminate the measurement of the more traditional LBE effects on firm productivity because it may lead to an endogeneity-inducing omitted variable bias. In this paper, we contribute to the literature by extending De Loecker (2013) to develop a unified empirical framework for productivity measurement that explicitly accommodates both the direct LBE channel taking place _within_ the firm and the indirect LFE channel working _between_ spatially and industrially proximate firms, which we then apply to a panel of manufacturing plants in Chile in 1995-2007.

While the literature on (within-firm) LBE effects is rather well-established (e.g., Kunst & Marin, 1989; Aw & Hwang, 1995; Bernard & Jensen, 1999; Baldwin & Gu, 2003; Greenaway & Kneller, 2004; Keller, 2004; Blalock & Gertler, 2004; De Loecker, 2007, 2013; Wagner, 2007; Salomon & Jin, 2008; Park et al., 2010; Aw et al., 2011; Kasahara & Lapham, 2013; Manjon et al., 2013), the empirical analysis of external effects of exporting on _productivity_ in the industry beyond exporter firms is practically non-existent. Existing empirical work on "export spillovers" focuses mainly on how average export participation in the industry affects the export status or marginal cost of non-exporters nearby (e.g., Aitken et al., 1997; Clerides et al., 1998; Bernard & Jensen, 2004; Greenaway et al., 2004; Greenaway & Kneller, 2008; Koenig, 2009; Koenig et al., 2010; Alvarez et al., 2013; Poncet & Mayneris, 2013), whereas the productivity implications of export spillovers in the domestic industry remain understudied, essentially being limited to three studies (see Wei & Liu, 2006; Alvarez & López, 2008; Jung, 2010).1

Footnote 1: Wei & Liu (2006) examine productivity spillovers from exports in China's manufacturing, Alvarez & López (2008) use older Chilean manufacturing data, and Jung's (2010) dissertation studies Korean manufacturing plants. All three, however, employ empirical strategies that are affected by the econometric issues we discuss here.

These empirical analyses of productivity spillovers from exporting are typically done in one of two ways.2 The first approach is two-step, whereby one first estimates unobserved firm productivity via standard proxy methods, ignoring the dependence of the former on exports under the assumption of an _exogenous_ first-order Markov evolution of productivity, and then examines spillover effects in a second step by (linearly) regressing the already estimated firm productivity on the average export spillover exposure (e.g., Alvarez & López, 2008; Jung, 2010). Taken at face value, such a second-step analysis is problematic because it contradictorily postulates the existence of an endogenous exporting-productivity relationship that is at odds with the first-stage assumption that firm productivity is purely autoregressive. De Loecker (2013) makes the same argument in his critique of two-step analyses of LBE, but it is equally important in the context of spillovers via LFE. Aside from the econometric complications arising from its inherently contradictory setup, the two-step approach also cannot provide a structurally meaningful interpretation of LFE effects on productivity because of its inability to distinguish between different data-generating processes, all of which--and not only the one with non-zero LFE--can produce a positive relation between the industry's average export participation and firm productivity.
Footnote 2: In fact, this is also the case for studies of spillovers from imports, R&D, FDI and other production-related firm activities in the market; see Malikov & Zhao (2021) for more discussion.

We avoid this internal inconsistency in our model by explicitly accounting for potential export spillover-induced productivity improvements during the estimation of firm productivity, which is partly what enables us to consistently estimate LFE effects along with the other production-function components. Otherwise, estimates of both productivity and the export effects thereon can be biased due to the omission from the productivity-proxy function of relevant measures of the peers' exporting activities, which are correlated with the firm's own export behavior as well as its quasi-fixed inputs (via its latent productivity). Empirical findings of external export spillovers based on a two-step estimation procedure may therefore be spurious.

The other popular approach to testing export spillovers involves a single step and is based on estimation of the Griliches (1979)-style "augmented production function" which, besides the conventional inputs, also admits--usually in a linear fashion--measures of the firm's own and others' exporting activities (e.g., Wei & Liu, 2006). Although such an approach explicitly recognizes the existence of both LBE and LFE effects on productivity during the estimation, it does so by restrictively assuming that the productivity effects of exporting are homogeneous across all firms, e.g., regardless of their productivity levels. Besides the linearity of the log Cobb-Douglas form that rules out heterogeneous effects, such an approach is also problematic because it assumes that the relationship between exporting and productivity is deterministic and, more importantly, it implies the possibility of unit-elastic substitution between inputs and export variables (see De Loecker, 2013, for more on these points).

To improve upon the above methods, in this paper we seek to measure and test the direct LBE and indirect LFE effects on productivity in a consistent structural framework of firm production. Building upon Doraszelski & Jaumandreu (2013) and De Loecker (2013), we formalize the evolution of firm productivity as an endogenous export-controlled process, where we explicitly accommodate two potential channels--internal and external (direct and indirect)--by which exporting may impact the productivity of domestic firms. Specifically, we allow the firm's future productivity to be affected not only by its own past export behavior but also by that of its spatially proximate peers in the industry. This facilitates a simultaneous, internally consistent identification of firm productivity and the corresponding LBE and LFE effects.3

Footnote 3: While our approach is related to that by Malikov & Zhao (2021), whose recent work also concerns an internally consistent measurement of cross-firm productivity spillovers, it is distinct in that we focus on the identification of "contextual" spillover effects--in the Manski (1993) nomenclature--of peers' _activities_ (namely, exporting) on productivity, whereas they consider the measurement of "endogenous" effects of peers' _productivity_. The type of spillovers that we study here is more predominant in the literature, particularly in the context of exporting.
Given the objective of our paper, in our analysis we abstract away from other plausible and no less important channels of cross-firm interactions that may lead to productivity spillovers--including local agglomeration, R&D externalities via technology/knowledge sharing, and spillovers along supply chains and material-product connections--and focus exclusively on spillovers from exporting. Similar to De Loecker's (2013) approach, ours is also silent on the exact theoretical mechanism by which LBE and LFE occur. Instead of pinpointing specific channels, which is predictably more demanding on data and experiment design, we pursue the simpler but more feasible goal of testing for the presence of these export-driven effects on productivity in general.

To achieve identification in the face of the endogeneity-inducing correlation of input allocations and, potentially, exporting with unobserved firm productivity (Griliches & Mairesse, 1998), we rely on the popular proxy variable technique. Our identification strategy utilizes a structural link between the parametric production function and the firm's first-order condition for static inputs, which helps us circumvent Ackerberg et al.'s (2015) and Gandhi et al.'s (2020) non-identification critiques of conventional proxy-based estimators à la Olley & Pakes (1996) and Levinsohn & Petrin (2003). In addition, owing to a nonparametric treatment of the firm productivity process, our model enables us to accommodate heterogeneity in the productivity effects of exporting across firms. This also lets us explore potential nonlinearities in the LBE and LFE effects, whereby they can interact with each other as well as, more importantly, with the firm's own productivity, thus indirectly allowing for conditioning on the firm's capacity to learn given its proximity to the technology frontier. In sum, not only do we provide a more comprehensive picture of the productivity effects of exporting, but we do so in a robust way by dealing with the internal inconsistency and identification problems prevalent in prior literature.

We study the productivity-enhancing effects of exporting using plant-level data on Chilean manufacturers during the 1995-2007 period, with exporters accounting for about 21% of the sample. Using our semiparametric methodology, we find that exporters enjoy a statistically significant
Using simple back-of-envelope calculations, we find that long-run total LBE and LFE effects on mean firm productivity are, respectively, 0.70% and 0.63% per percentage point of export intensity. Thus, in the long-run equilibrium a permanent 10 percentage point increase in mean export intensity of _all_ firms in the local industry is roughly estimated to produce a sizable 13.3% boost to average firm productivity through both direct LBE and indirect LFE channels. The rest of the paper is organized as follows. Section 2 presents the conceptual framework. Section 3 describes our identification and estimation procedure. The data are discussed in Section 4. We report the empirical results in Section 5. Section 6 concludes. ## 2 Conceptual Framework Consider the firm \(i(=1,\ldots,n)\) at time \(t(=1,\ldots,T)\). Following the convention of productivity literature (e.g., Olley and Pakes, 1996; Blundell and Bond, 2000; Levinsohn and Petrin, 2003; De Loecker and Warzynski, 2012; Doraszelski and Jaumandreu, 2013; Ackerberg et al., 2015; Konings and Vanormelingen, 2015; Jin et al., 2019), we assume that the firm employs physical capital \(K_{it}\), labor \(L_{it}\) and an intermediate input such as materials \(M_{it}\) to produce the output \(Y_{it}\) via the Cobb-Douglas production technology subject to the Hicks-neutral productivity: \[Y_{it}=A_{0}K_{it}^{a_{K}}L_{it}^{a_{L}}M_{it}^{a_{M}}\exp\left\{\omega_{it}+ \eta_{it}\right\}, \tag{2.1}\] where \(A_{0}\) is a scalar constant; \((\alpha_{K},\alpha_{L},\alpha_{M})^{\prime}\) are the input elasticities; \(\omega_{it}\) is the firm's persistent productivity (TFP) which is known to the firm at time \(t\) but unknown to an econometrician; and \(\eta_{it}\) is a random _i.i.d._ productivity shock such that \(E[\eta_{it}|\mathcal{I}_{it}]=E[\eta_{it}]=0\), where \(\mathcal{I}_{it}\) is the \(i\)th firm's information set in period \(t\). As in many studies in productivity literature (e.g., Gandhi et al., 2020; Tsionas and Mallick, 2019; Hou et al., 2020; Malikov and Zhao, 2021; Malikov et al., 2023), physical capital \(K_{it}\) and labor \(L_{it}\) are said to be subject to adjustment frictions, and the firm optimizes them dynamically at time \(t-1\) rendering these inputs predetermined quasi-fixed state variables. The materials input \(M_{it}\) is freely varying and determined by the firm statically at time \(t\). Both \(K_{it}\) and \(L_{it}\) follow their respective laws of motion: \[K_{it}=I_{it-1}+(1-\delta)K_{it-1}\quad\text{and}\quad L_{it}=H_{it}+L_{it-1}, \tag{2.2}\] where \(I_{it}\), \(H_{it}\) and \(\delta\) respectively denote the gross investment, net hiring and capital depreciation rate of the firm \(i\) in period \(t\). We assume that the risk-neutral firm faces perfectly competitive output and input markets and seeks to maximize a discounted stream of the expected life-time profits subject to its state variables and expectations about the market structure variables including prices that are common to all firms. In this paper, our principal interest is in the measurement of internal and external productivity effects of exporting in a domestic industry. Instead of modeling the firm's export behavior in a discrete fashion by focusing on its "status" as popularly done in the literature (e.g., Blalock & Gertler, 2004; Van Biesebroeck, 2005; Amiti & Konings, 2007; Kasahara & Lapham, 2013), we formalize the firm's exporting in a richer, continuous framework along the lines of De Loecker (2007) and Malikov et al. (2020). 
We rely on the firm's export intensity as a measure of its own export behavior as well as to model its exposure to peer exporters in the industry. Concretely, let \(X_{it}\in[0,1]\) denote the firm's export intensity defined as the nominal share of its total output produced for export abroad, with its boundary values corresponding to wholly domestic and fully export-oriented firms. We conceptualize the firm's exporting decisions as a choice of the degree of its export orientation \(X_{it}\). Given the documented persistence of export experience over time, we formalize these decisions as dynamic but, unlike other dynamic production choices by the firm in our model such as capital investment or hiring, they need not be subject to adjustment frictions that would render them delayed and, hence, predetermined.4 This assumption is important because it does _not_ rule out a contemporaneous correlation between firm productivity \(\omega_{it}\) and its export orientation \(X_{it}\). Thus, the firm's export intensity \(X_{it}\) evolves according to the following dynamic process:

Footnote 4: To make these distinctions clearer: if the optimal decision concerning production in period \(t\) is affected by its history, then that decision is said to be "dynamic." If, due to adjustment frictions, a decision concerning production in period \(t\) is effectively made at \(t-1\), then we say it is "predetermined." In this nomenclature, \(K_{it}\) and \(L_{it}\) are dynamic and predetermined, whereas \(X_{it}\) is dynamic but chosen at time \(t\).

\[X_{it}=\mathcal{X}_{it}+X_{it-1}, \tag{2.3}\] where \(\mathcal{X}_{it}\) regulates the endogenous adjustment in the degree of export orientation \(X_{it}\) at \(t\). The firm's dynamic optimization problem is then described by the following Bellman equation: \[\mathbb{V}_{t}\big{(}\Xi_{it}\big{)}=\max_{I_{it},H_{it},\mathcal{X}_{it}} \Big{\{}\Pi_{t}(\Xi_{it})-C_{t}^{I}(I_{it},K_{it})-C_{t}^{H}(H_{it},L_{it})-C_{ t}^{X}(\mathcal{X}_{it},X_{it-1})+\mathbb{E}\Big{[}\mathbb{V}_{t+1}\big{(}\Xi_{it+1} \big{)}\Big{|}\Xi_{it},I_{it},H_{it},\mathcal{X}_{it}\Big{]}\Big{\}}, \tag{2.4}\] where \(\Xi_{it}=(\omega_{it},K_{it},L_{it},X_{it-1})^{\prime}\in\mathcal{I}_{it}\) are the state variables; \(\Pi_{t}(\Xi_{it})\) is the restricted profit function derived as a value function corresponding to the static optimization problem in (3.1); and \(C_{t}^{\kappa}(\cdot)\) is the cost of adjustments in capital (\(\kappa=I\)), labor (\(\kappa=H\)) and exporting (\(\kappa=X\)). From the laws of motion in (2.2)-(2.3), in the above dynamic problem the firm's exporting behavior \(X_{it+1}\) is chosen (via \(\mathcal{X}_{it+1}\)) in time period \(t+1\), unlike the amounts of the dynamic inputs \(K_{it+1}\) and \(L_{it+1}\), which are chosen by the firm in time period \(t\) (via \(I_{it}\) and \(H_{it}\), respectively). Solving (2.4) for \(I_{it}\), \(H_{it}\) and \(\mathcal{X}_{it}\) yields their respective optimal policy functions.

Next we formalize the productivity effects of exports. We do so by extending De Loecker's (2013) framework to accommodate not only the more traditional direct LBE effects but also indirect effects via learning from exporting peers. That is, we explicitly model two potential channels--internal and external (direct and indirect)--by which exporting may impact the productivity of domestic firms.
The first channel, referred to as "learning by exporting," takes place _within_ the firm and is commonly attributed to the exporter firm's absorption of new technologies from abroad, learning of international best practices that lead to improved manufacturing processes, productivity enhancements driven by the exposure to more competition, scale effects, quality and variety effects, etc. These export-related productivity gains are facilitated by the firm's _own_ direct access to foreign customers and rivals.

The second channel is less obvious and oftentimes left unaccounted for. Domestic firms (both exporters and non-exporters) can learn not only from their own export behavior but also indirectly from the export behavior of their _peers_. These external export-driven productivity spillovers are a type of _cross-firm_ peer effects, which we refer to as "learning from exporters," and effectively capture the secondary productivity effects of export engagement. Such cross-firm spillovers may arise through the pooling of workers, informal contacts (e.g., attendance of trade shows, exposure to affiliate and/or competitor products and marketing, learning by imitation, customer-supplier discussions) or more formal reverse engineering. For instance, by monitoring successful exporting peers' market behavior, both in domestic and foreign markets, domestic firms can imitate and then adopt their business strategies to boost their own productivity. Alternatively, the movement of labor from exporting firms to other domestic firms may facilitate the dispersion of tacit knowledge about more innovative/efficient foreign technologies and better business practices or institutional knowledge about foreign markets, which may help hiring firms increase their productivity.

To capture export-driven productivity spillovers, we proxy each firm's exposure to exporters in the industry using the average export intensity of its spatially proximate _peers_ operating in the same industry, defined as \[\overline{X}_{it}=\sum_{j\neq i}p_{ijt}X_{jt}, \tag{2.5}\] where \(\{p_{ijt};\ j(\neq i)=1,\ldots,n\}\) are the peer-firm weights identifying exporters in the firm \(i\)'s industry and spatial locality in period \(t\).
Thus, \(\overline{X}_{it}\) captures the _external_ export orientation of the local industry which firm \(i\) is exposed to. This measure varies across both the firms and time. While closely related, \(\overline{X}_{it}\) is therefore not the "grand" industry average but the peer average in the industry. This distinction is crucial for the separable identification of LBE and LFE effects (more on this later). We model firm productivity evolution as a controlled first-order Markov process, whereby we allow firm \(i\) to improve its future productivity not only via learning by exporting but also via learning from exporting peers. Generalizing Doraszelski & Jaumandreu's (2013) and De Loecker's (2013) formulations to include the indirect cross-firm effects of exporting, the productivity process is \[\omega_{it}=h\left(\omega_{it-1},X_{it-1},\overline{X}_{it-1}\right)+\zeta_{it}, \tag{2.6}\] where \(h(\cdot)\) is the conditional mean function of \(\omega_{it}\); and \(\zeta_{it}\) is a zero-mean random innovation in persistent productivity that is unanticipated by the firm at \(t-1\): \(E[\zeta_{it}|\mathcal{I}_{it-1}]=0\). The LBE and LFE effects can then be measured as \(LBE_{it}=\partial E[\omega_{it}|\cdot]/\partial X_{it-1}\) and \(LFE_{it}=\partial E[\omega_{it}|\cdot]/\partial\overline{X}_{it-1}\), respectively. Identification of the productivity effects of exporting in our model is based on several structural timing assumptions. The evolution process in (2.6) assumes that both the internal and external learning associated with exports occurs with a delay which is why the dependence of \(\omega_{it}\) on controls is lagged implying that export-driven improvements in firm productivity take a period to materialize. Such a timing assumption is quite common in the LBE literature (e.g., Van Biesebroeck, 2005; De Loecker, 2013; Malikov et al., 2020, 2021, 2023). In \(E[\zeta_{it}|\mathcal{I}_{it-1}]=0\), we also assume that the firm does not experience changes in exporting in light of expected _future_ innovations in its productivity. This rules out the firm's ability to systematically predict future productivity shocks. Instead, the productivity process in (2.6) says that firms anticipate the effect of their export experience \(X_{it-1}\) on future productivity \(\omega_{it}\) when adjusting the former in period \(t-1\), and the conditional mean \(E[\omega_{it}|\omega_{it-1},X_{it-1},\overline{X}_{it-1}]\) captures that _expected_ productivity. But the _actual_ firm productivity at time \(t\) also includes an unanticipated innovation \(\zeta_{it}\). In essence, the error term \(\zeta_{it}\) represents unpredictability that is naturally associated with any productivity-modifying learning activities such as chance in discovery, success in implementation, etc. The innovation \(\zeta_{it}\) is realized after \(X_{it-1}\) will have already been chosen. This structural timing assumption about the arrival of \(\zeta_{it}\), which renders the firm's _past_ export experience mean-orthogonal to a random innovation at time \(t\), helps us identify both the direct learning and external spillover effects. In Appendix A, we also discuss how inclusion of the lagged firm productivity in (2.6) indirectly allows us to, at least partly, control for self-selection of firms into exporting based on their productivity levels. 
Since, in the productivity process (2.6), exporting enters the conditional mean of productivity via two variables, of natural interest is the ability of our model to separate the direct LBE effect from indirect LFE spillovers. Using simple calculus, we can show that, owing to the definition of the average _peer_ export orientation, which excludes the export information pertaining to the \(i\)th firm: \[\frac{\partial E[\omega_{it}|\cdot]}{\partial X_{it-1}}=\frac{\partial h(\cdot)}{ \partial X_{it-1}}+\frac{\partial h(\cdot)}{\partial\overline{X}_{it-1}}\times \frac{\partial\overline{X}_{it-1}}{\partial X_{it-1}}=LBE_{it}, \tag{2.7}\] because \(\partial\overline{X}_{it-1}/\partial X_{it-1}=\partial\sum_{j\neq i}p_{ijt}X_ {jt-1}/\partial X_{it-1}=0\). Thus, \(LBE_{it}\) is _separably_ identifiable. Intuitively, all observable variation in expected productivity due to a change in the firm's own export intensity is attributable to the direct learning effect because its exporting does not immediately affect the export behavior of its peers. Obviously, the separability of the two effects would be impossible if, in place of the average _peer_ export intensity, we had used the total average of _all_ firms, as oftentimes done in the literature (e.g., Alvarez & López, 2008; Jung, 2010).

## 3 Empirical Strategy

Estimating the production function using least squares would result in a simultaneity bias due to the dependence of freely varying inputs (regressors in the production function) on unobserved firm productivity \(\omega_{it}\) because the latter is a part of the firm's information set \(\mathscr{I}_{it}\) based upon which it makes optimal input allocation decisions at \(t\). This omitted variable bias is also known as a "transmission bias" (Griliches & Mairesse, 1998). The proxy variable method proposed by Olley & Pakes (1996) and extended by Levinsohn & Petrin (2003) tackles this endogeneity problem by proxying for the unobservable \(\omega_{it}\) via the observable static intermediate input \(M_{it}\) and then using weakly exogenous higher-order lags of inputs to instrument for the endogenous freely varying inputs. Recently, this methodology has been critiqued for its lack of identification due to the perfect functional dependence between freely varying inputs and self-instrumenting quasi-fixed factors (Ackerberg et al., 2015) and the violation of the "rank condition" in the instrumentation of these endogenous freely varying inputs (Gandhi et al., 2020). As a solution, Gandhi et al. (2020) have suggested employing the information contained in the first-order condition for static inputs to identify both the production function and latent firm productivity. However, because their procedure is fully nonparametric, its implementation is three-stage and quite computationally burdensome, especially in its requirement to integrate the estimated static input elasticity function at each observation in order to recover the unknown production function. In this paper, we therefore rely on Malikov & Zhao's (2021) easier-to-implement semiparametric adaptation of the Gandhi et al. (2020) methodology (which we modify to suit our research question), which utilizes a prespecified parametric form of the production function to derive the proxy function. This is similar to the idea pursued by Doraszelski & Jaumandreu (2013). The semiparametric adaptation can significantly ease the demand on data as well as the computational burden of estimation.

_Identification._--Consider the firm's optimality condition for materials.
Since the intermediate input \(M_{it}\) is freely varying and thus affects profits only in the current period, the firm's restricted expected profit maximization problem with respect to \(M_{it}\) is as follows: \[\max_{M_{it}}P_{t}^{Y}A_{0}K_{it}^{\alpha_{K}}L_{it}^{\alpha_{L}}M_{it}^{\alpha _{M}}\exp\left\{\omega_{it}\right\}\theta-P_{t}^{M}M_{it}, \tag{3.1}\] where \(P_{t}^{Y}\) and \(P_{t}^{M}\) respectively denote the output and material input prices, both of which are competitively determined. The constant \(\theta\) is defined as \(\theta\equiv E\left[\exp\left\{\eta_{it}\right\}\mid\mathscr{I}_{it}\right]\). Taking the log-ratio of the first-order condition with respect to \(M_{it}\) \[\alpha_{M}P_{t}^{Y}A_{0}K_{it}^{\alpha_{K}}L_{it}^{\alpha_{L}}M_{it}^{\alpha_ {M}-1}\exp\left\{\omega_{it}\right\}\theta=P_{t}^{M} \tag{3.2}\] and the production function in (2.1) gives \[\ln\left(S_{it}^{M}\right)=\ln\left(\alpha_{M}\theta\right)-\eta_{it}, \tag{3.3}\] where \(S_{it}^{M}\equiv\frac{P_{t}^{M}M_{it}}{P_{t}^{Y}Y_{it}}\) is the intermediate input share of output. Thus, we can identify the composite constant \(\alpha_{M}\theta\) from the unconditional moment \(E\left[\eta_{it}\right]=0\), from which we have that \[\ln\left(\alpha_{M}\theta\right)=E\left[\ln\left(S_{it}^{M}\right)\right]. \tag{3.4}\] We can also separately identify the constant \(\theta\) via \(\theta\equiv E\left[\exp\left\{\eta_{it}\right\}\right]=E\left[\exp\left\{\ln \left(\alpha_{M}\theta\right)-\ln\left(S_{it}^{M}\right)\right\}\right]=E \left[\exp\left\{E\left[\ln\left(S_{it}^{M}\right)\right]-\ln\left(S_{it}^{M} \right)\right\}\right]\), with equation (3.4) used to substitute for \(\ln\left(\alpha_{M}\theta\right)\) in the third equality. Combining this result with (3.4), we identify the firm's material elasticity \(\alpha_{M}\) as \[\alpha_{M}=\exp\left\{E\left[\ln\left(S_{it}^{M}\right)\right]\right\}/E \left[\exp\left\{E\left[\ln\left(S_{it}^{M}\right)\right]-\ln\left(S_{it}^{M }\right)\right\}\right], \tag{3.5}\] which is a unique function of the first moments of the data.

To identify the rest of the production function as well as latent firm productivity, we take the log of (2.1) on both sides to obtain \[y_{it}=\alpha_{0}+\alpha_{K}k_{it}+\alpha_{L}l_{it}+\alpha_{M}m_{it}+\omega_{ it}+\eta_{it}, \tag{3.6}\] where \(\alpha_{0}\equiv\ln A_{0}\), and the lower-case variables correspond to the logs of the respective upper-case variables. Exploiting the Markov process of \(\omega_{it}\) in (2.6) and bringing the already identified material elasticity \(\alpha_{M}\) to the left-hand side, we rewrite (3.6) as follows: \[y_{it}^{*}=\alpha_{K}k_{it}+\alpha_{L}l_{it}+g\left(\omega_{it-1},X_{it-1}, \overline{X}_{it-1}\right)+\zeta_{it}+\eta_{it}, \tag{3.7}\] where \(y_{it}^{*}=y_{it}-\alpha_{M}m_{it}\) is fully identified and can be treated as an observable, and \(g(\cdot)\equiv h(\cdot)+\alpha_{0}\) is of unknown functional form.
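To illustrate, the estimator of \(\alpha_{M}\) in (3.5) depends only on sample moments of the log material share \(S_{it}^{M}\). A minimal numpy sketch on simulated shares follows; the variable names and the data-generating process are ours, not the paper's.

```python
# Minimal sketch: estimation of the material elasticity via eq. (3.5), using
# only the log material share of output. Data are simulated according to
# ln(S) = ln(alpha_M * theta) - eta, with eta ~ N(0, 0.1), as in eq. (3.3).
import numpy as np

rng = np.random.default_rng(0)
eta = rng.normal(0.0, 0.1, size=5000)
s_m = 0.29 * np.exp(-eta)          # implies alpha_M * theta = 0.29

log_s = np.log(s_m)
log_alpha_theta = log_s.mean()                        # eq. (3.4)
theta_hat = np.exp(log_alpha_theta - log_s).mean()    # estimate of E[exp{eta}]
alpha_m_hat = np.exp(log_alpha_theta) / theta_hat     # eq. (3.5)
eta_hat = log_alpha_theta - log_s                     # recovered shocks
print(alpha_m_hat)   # alpha_M = 0.29 / theta, recovered up to sampling noise
```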
Next, from equation (3.2) we derive the explicit form of the conditional demand function for \(M_{it}\), which we then invert to proxy for the unobservable scalar \(\omega_{it}\) in (3.7), in the spirit of material-based proxy estimators: \[y_{it}^{*}=\alpha_{K}k_{it}+\alpha_{L}l_{it}+g\left(\left[m_{it-1}^{*}-\alpha_{K }k_{it-1}-\alpha_{L}l_{it-1}\right],X_{it-1},\overline{X}_{it-1}\right)+\zeta_{ it}+\eta_{it}, \tag{3.8}\] where \(m_{it-1}^{*}=\ln\left(P_{t-1}^{M}/P_{t-1}^{Y}\right)-\ln\left(\alpha_{M} \theta\right)-\left(\alpha_{M}-1\right)m_{it-1}\) is also fully identified and treated as an observable. Since all regressors appearing in (3.8), including \(k_{it}\), \(l_{it}\), \(k_{it-1}\), \(l_{it-1}\), \(m_{it-1}^{*}(m_{it-1})\), \(X_{it-1}\) and \(\overline{X}_{it-1}\), are weakly exogenous based on our structural assumptions, there is no endogenous covariate on the right-hand side of the equation. That is, \[E\left[\zeta_{it}+\eta_{it}|k_{it},l_{it},k_{it-1},l_{it-1},m_{it-1}^{*}(m_{it- 1}),X_{it-1},\overline{X}_{it-1}\right]=0, \tag{3.9}\] and equation (3.8) is identified.

From (3.8)-(3.9), it is obvious that, if there were indeed non-zero export spillovers and we had failed to account for them in the firm's productivity evolution process, then \(\overline{X}_{it-1}\) would have been omitted from the proxy function \(g(\cdot)\) in (3.8) and, consequently, been absorbed, along with its interactions with the other arguments of the proxy, into the error term. In that case, the error term would contain variation from the firm's quasi-fixed inputs, its own export intensity, as well as the average export orientation of its peers. Generally, these would all be correlated with the quasi-fixed inputs and the export variable included as regressors, thus violating the weak exogeneity condition, and the model would be unidentified due to the omitted variable bias. This highlights the importance of explicitly embedding the external spillover channel into the analytical framework. Lastly, we can recover latent firm productivity up to a constant using the identified production-function parameters and the productivity shock: \(\omega_{it}+\alpha_{0}=y_{it}-\alpha_{K}k_{it}-\alpha_{L}l_{it}-\alpha_{M}m_{it }-\eta_{it}\).

_Estimation procedure.--_The estimation is simple and involves a two-stage procedure. In the first stage, we estimate \(\alpha_{M}\) via a sample counterpart of (3.5) constructed using sample averages computed from the raw data on material shares: \[\widehat{\alpha}_{M}=\exp\left\{\frac{1}{nT}\sum_{i}\sum_{t}\ln\left(S_{it}^{ M}\right)\right\}\bigg{/}\left[\frac{1}{nT}\sum_{i}\sum_{t}\exp\left\{ \left[\frac{1}{nT}\sum_{i}\sum_{t}\ln\left(S_{it}^{M}\right)\right]-\ln\left( S_{it}^{M}\right)\right\}\right].\] As a by-product, we also have \(\ln\overline{(\alpha_{M}\theta)}=\frac{1}{nT}\sum_{i}\sum_{t}\ln\left(S_{it}^{ M}\right)\) and \(\widehat{\eta}_{it}=\ln\overline{(\alpha_{M}\theta)}-\ln\left(S_{it}^{M}\right)\). With these estimates in hand, we then construct estimates of \(y_{it}^{*}\) and \(m_{it}^{*}\) as \(\widehat{y}_{it}^{*}=y_{it}-\widehat{\alpha}_{M}m_{it}\) and \(\widehat{m}_{it-1}^{*}=\ln\left(\frac{P_{t-1}^{M}}{P_{t-1}^{Y}}\right)-\ln \overline{(\alpha_{M}\theta)}-(\widehat{\alpha}_{M}-1)m_{it-1}\), respectively.

The second-stage estimation requires the choice of an approximator for the unknown \(g(\cdot)\). We use the popular second-order polynomial sieves (e.g., Gandhi et al., 2020).
Specifically, we approximate \(g(\cdot)\) as follows: \[g(\cdot)\approx\left[1,W_{it-1}(\mathbf{\alpha}),W_{it-1}^{2}(\mathbf{\alpha}),X_{it-1},X_{it-1}^{2},\overline{X}_{it-1},\overline{X}_{it-1}^{2},W_{it-1}(\mathbf{\alpha}) X_{it-1},W_{it-1}(\mathbf{\alpha})\overline{X}_{it-1},X_{it-1}\overline{X}_{it-1} \right]^{\prime}\mathbf{\gamma}=\mathbf{\lambda}_{it}(\mathbf{\alpha})^{\prime}\mathbf{\gamma}, \tag{3.10}\] where \(\mathbf{\alpha}=(\alpha_{K},\alpha_{L})^{\prime}\), \(W_{it-1}(\mathbf{\alpha})=\widehat{m}_{it-1}^{*}-\alpha_{K}k_{it-1}-\alpha_{L}l_{it-1}\), and \(\mathbf{\gamma}\) is the unknown parameter vector. We estimate (3.8) using a nonlinear least squares method to obtain the second-stage estimates of \((\alpha_{K},\alpha_{L})^{\prime}\) and \(\mathbf{\gamma}\): \[\min_{\alpha_{K},\alpha_{L},\mathbf{\gamma}}\,\sum_{i}\sum_{t}\left(\widehat{y}_{it }^{*}-\alpha_{K}k_{it}-\alpha_{L}l_{it}-\mathbf{\lambda}_{it}(\alpha_{K},\alpha_{L})^{ \prime}\mathbf{\gamma}\right)^{2}.\] With the estimated production-function parameters in hand, we can compute the productivity effects via \(\widehat{LBE}_{it}=\partial\widehat{g}(\cdot)/\partial X_{it-1}\) and \(\widehat{LFE}_{it}=\partial\widehat{g}(\cdot)/\partial\overline{X}_{it-1}\), where \(\widehat{g}(\cdot)=\mathbf{\lambda}_{it}(\widehat{\mathbf{\alpha}})^{\prime}\widehat{ \mathbf{\gamma}}\).5 We also recover \(\omega_{it}\) up to a constant via \(\widehat{\omega_{it}+\alpha_{0}}=y_{it}-\widehat{\alpha}_{K}k_{it}-\widehat{ \alpha}_{L}l_{it}-\widehat{\alpha}_{M}m_{it}-\widehat{\eta}_{it}\).

Footnote 5: Note that the definitions of the LBE and LFE effects are based on the gradients of \(h(\cdot)\) but, since \(g(\cdot)\) and \(h(\cdot)\) differ only by an additive constant, their gradients are equal.

For statistical inference, we employ accelerated bias-corrected percentile bootstrap confidence intervals. Details are provided in Appendix C.

## 4 Data

Our data come from the Encuesta Nacional Industrial Anual (ENIA), a national industrial survey conducted annually by the Chilean National Institute of Statistics. The sample period runs from 1995 to 2007. Manufacturing plants are classified into 22 industry groups according to the 2-digit International Standard Industry Classification (ISIC) code. The dataset contains information on plants from 13 regions: Tarapaca, Antofagasta, Atacama, Coquimbo, Valparaiso, Libertador Gral. Bernardo O'Higgins, Maule, Biobio, La Araucania, Los Lagos, Aisen del Gral. Carlos Ibanez del Campo, Magallanes and Chilean Antarctica, and the Santiago Metropolitan region. Though each observation represents a plant rather than a firm, single-plant establishments account for over 90% of the total units (also see Pavcnik, 2002).

Total output is defined as total revenues from the sale of products and work done and, as such, our analysis is carried out using a revenue-based productivity measure (TFPR). As in De Loecker (2013), the measured TFPR will capture export-driven changes in both cost and demand factors. Hence, a positive LBE and/or LFE effect can be interpreted as working through either a decline in production cost or an increase in demand, or both. For more on the latter point, see Appendix B. Capital is the fixed-assets balance for buildings, machinery and vehicles at the end of a period. Materials are defined as the total expenditure on intermediate inputs consisting of raw materials and other intermediates.
## 4 Data

Our data come from the Encuesta Nacional Industrial Anual (ENIA), a national industrial survey conducted annually by the Chilean National Institute of Statistics. The sample period runs from 1995 to 2007. Manufacturing plants are classified into 22 industry groups according to the 2-digit International Standard Industry Classification (ISIC) code. The dataset contains information on plants from 13 regions: Tarapaca, Antofagasta, Atacama, Coquimbo, Valparaiso, Libertador Gral. Bernardo O'Higgins, Maule, Biobio, La Araucania, Los Lagos, Aisen del Gral. Carlos Ibanez del Campo, Magallanes and Chilean Antarctica, and the Santiago Metropolitan region. Though each observation represents a plant rather than a firm, single-plant establishments account for over 90% of total units (also see Pavcnik, 2002). The total output is defined as total revenues from the sale of products and work done and, as such, our analysis is carried out using a revenue-based productivity measure (TFPR). As in De Loecker (2013), the measured TFPR will capture export-driven changes in both the cost and demand factors. Hence, a positive LBE and/or LFE effect can be interpreted as working through either a decline in production cost or an increase in demand, or both. For more on the latter point, see Appendix B. Capital is the fixed assets balance for buildings, machinery and vehicles at the end of a period. Materials are defined as the total expenditure on intermediate inputs consisting of raw materials and other intermediates. These three variables are measured in hundred thousands of pesos, and we deflate them using price deflators at the 4-digit ISIC level. We measure labor using the total number of people working at the plant. We drop observations that contain missing or negative values for these variables and exclude extreme outliers lying outside the interval between the 1st and 99th percentiles of these four variables. In the end, our sample consists of 8,353 manufacturing plants with a total of 47,622 observations. Export intensity is calculated as the nominal proportion of the firm's exports in its total sales, ranging from 0 to 1 by construction. Exporters account for 20.6% of the plants in the sample. As discussed earlier, we measure each plant's exposure to exporters using the average export intensity of its peers, excluding the "recipient" plant itself. Peers are identified as operating in the same region (out of 13) and the same 2-digit industry within the same period (a total of 186 peer groups), but we remove the geographic limits in one of the robustness checks. The exposure variable also ranges between 0 and 1 by construction. Table C.1 in Appendix D provides the summary statistics for our data.
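Constructing the leave-one-out peer exposure measure described above is straightforward in pandas. The sketch below is our own illustration with assumed column names (`region`, `industry2d`, `year`, `export_intensity`).

```python
# Leave-one-out peer exposure: the average export intensity of all *other*
# plants in the same region-industry-year cell.
import pandas as pd

def peer_export_exposure(df):
    grp = df.groupby(["region", "industry2d", "year"])["export_intensity"]
    total = grp.transform("sum")
    count = grp.transform("count")
    # Exclude the recipient plant itself; cells with a single plant are undefined.
    return ((total - df["export_intensity"]) / (count - 1)).where(count > 1)

df = pd.DataFrame({
    "region": ["A", "A", "A", "B"],
    "industry2d": [15, 15, 15, 15],
    "year": [2000, 2000, 2000, 2000],
    "export_intensity": [0.0, 0.5, 0.1, 0.2],
})
print(peer_export_exposure(df))  # plant 1 in cell A: (0.5 + 0.1)/2 = 0.3; cell B -> NaN
```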
## 5 Results

Our primary interest is in the estimates of productivity effects of exporting. Owing to a nonparametric form of the conditional mean of \(\omega_{it}\), we obtain observation-specific estimates of the LBE and LFE effects. Since these effects are defined as the gradients of expected future firm log-productivity with respect to export intensity or the peer average thereof, both of which are proportions, the reported \(LBE_{it}\) (\(LFE_{it}\)) is a semi-elasticity measuring percentage changes in productivity per unit percentage point change in the firm's export intensity (the average peer export orientation of the local industry). Lastly, both effects measure "short-run" productivity improvements per annum, which however can accumulate over time owing to the persistent nature of productivity evolution.

### Exporter productivity differential

As a preliminary analysis, we first examine the overall cross-firm differential in productivity \(\omega_{it}\) across exporters and non-exporters in the Chilean manufacturing sector.6 The mean estimate of the (log)-productivity differential between exporters and non-exporters is 0.28 with the corresponding 95% bootstrap percentile confidence interval of (0.26, 0.31), which indicates that, on average, exporters are more productive than non-exporters by roughly 28%. Footnote 6: The associated production function parameter estimates are \(\hat{\alpha}_{K}=0.21\), \(\hat{\alpha}_{L}=0.46\) and \(\hat{\alpha}_{M}=0.29\). Scale elasticity, defined as the sum of all three input elasticities, is 0.96 with the corresponding 95% bootstrap confidence interval of (0.938, 0.975), indicating the decreasing returns to scale, consistent with the profit-maximizing behavior. Allowing for differences in estimators and sample periods, our estimates are in the same ballpark as the input elasticity and returns to scale values reported by other studies that used Chilean data. We further investigate if a plant's exporter status commands a productivity premium of varying magnitude and significance at different points in the productivity distribution. We do so by estimating a simple quantile regression: \(Q_{\tau}[\omega_{it}|\cdot]=\beta_{0,\tau}+\beta_{1,\tau}1(X_{it}>0)\) for \(\tau\in(0,1)\). Employing a quantile regression enables us to explore potential distribution heterogeneity in the exporter productivity differential. We estimate this model for the quantile index \(\tau\) taking values from 0.2 to 0.8 (with 0.05 increments) thereby focusing on the central portion of the productivity distribution. Figure 1 plots the quantile regression estimates of the \(\beta_{1,\tau}\) coefficient (the exporter productivity differential) against \(\tau\), along with the 95% confidence intervals. The quantile productivity differentials are all significantly positive and increasing with the quantile of \(\omega\), indicating that the productivity divergence between exporters and non-exporters is more prominent magnitude-wise among more productive firms.7 Footnote 7: Including controls for the plant size (proxied by the number of employees) as well as the region and year effects produces similar findings.

Figure 1: Exporter Productivity Differential Estimates across Productivity Quantiles [_Notes_: Shaded bands correspond to 95% confidence intervals. Solid horizontal lines correspond to the productivity differentials estimated at the conditional mean, with their respective 95% confidence intervals shown using dashed lines.]

For a more holistic look at the productivity differential between exporters and non-exporters, we also plot the kernel density of firm log-productivity for exporters and non-exporters, as shown in Figure 2. This allows us to compare distributions of productivity estimates as opposed to merely focusing on marginal moments or quantiles. The figure indicates that exporters appear to enjoy a productivity premium over non-exporters distribution-wise.

Figure 2: Distributions of log-Productivity by the Exporter Status [_Notes_: Vertical lines show respective sample means.]

To support this visual evidence, we do a formal test to check if exporters are more productive than non-exporters along the entire distribution of productivity. We utilize a generalization of the Kolmogorov-Smirnov test proposed by Linton et al. (2005) to test the (first-order) stochastic dominance of exporters' productivity over non-exporters'. This test permits the variables to be estimated latent quantities as opposed to observables from the data and to also share dependence (in our case, the dependence is due to their construction using the same set of parameter estimates). Employing the sub-sampling procedure from Linton et al. (2005), we obtain a \(p\)-value for the test statistic of 0.7789 and, thus, comfortably fail to reject the null hypothesis that non-exporters' productivity is stochastically dominated by that of exporters. Altogether, we can conclude that exporters enjoy a statistically significant productivity premium over non-exporters along the entire distribution of firm productivity.
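The quantile regression above can be estimated with off-the-shelf tools. A minimal sketch, assuming `omega_hat` and `exporter` are numpy arrays holding the estimated log-productivity and the exporter dummy:

```python
# Quantile regression of productivity on the exporter dummy, evaluated on a
# grid of quantile indices; an illustrative sketch, not the paper's code.
import numpy as np
import statsmodels.api as sm

def exporter_premium_by_quantile(omega_hat, exporter,
                                 taus=np.arange(0.2, 0.81, 0.05)):
    X = sm.add_constant(exporter.astype(float))      # [1, 1(X > 0)]
    model = sm.QuantReg(omega_hat, X)
    # beta_{1,tau}: the exporter productivity differential at each quantile.
    return {round(t, 2): model.fit(q=t).params[1] for t in taus}
```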
### Learning by exporting and from exporters

The exporter productivity premium can reflect both the learning and self-selection effects. To zoom in on the within-firm evidence of productivity-enhancing _learning_ effects of exporting and to test if the firm's past export experience as well as the experience of its peers impact its future productivity, we use \(LBE_{it}=\partial E\big{[}\omega_{it}|\omega_{it-1},X_{it-1},\overline{X}_{it-1}\big{]}\big{/}\partial X_{it-1}\) and \(LFE_{it}=\partial E\big{[}\omega_{it}|\omega_{it-1},X_{it-1},\overline{X}_{it-1}\big{]}\big{/}\partial\overline{X}_{it-1}\), which are conditioned on the firm's past productivity level \(\omega_{it-1}\) thereby, as discussed in Appendix A, allowing us to account for self-selection into exporting based on an _a priori_ higher productivity. In effect, both LBE and LFE measure a differential in mean future productivity identified from the difference in current productivity between firms/peers with _different_ export experiences, holding their input use fixed. Table 1 reports a summary of point estimates of these LBE and LFE effects for all firms as well as for exporters and non-exporters only. We also test for the statistical significance of these effects at _each_ observation. The shares of observations for which each of the two productivity effects of exporting is significant at the 5% level are provided in the far right column of the table. The LBE effect is estimated to average at 0.36 for the entire sample, indicating that a percentage point increase in the firm's own export intensity raises its future productivity by 0.36%. The point estimates are between 0.32 and 0.45 within the inter-quartile range. Overall, the LBE productivity effect is statistically significant for 93% of firms in our data set. We also document notable differences in the magnitude and prevalence of LBE across exporters and non-exporters. For exporters (\(X_{it-1}>0\)), the mean LBE effect is significant at 0.158, albeit the observation-specific point estimates are statistically non-zero only for 68% of the exporting firms. But the evidence is more ubiquitous among non-exporters. While the latter category of firms does not actually export, our methodology still produces an estimate of \(LBE_{it}=\partial E\big{[}\omega_{it}|\cdot\big{]}\big{/}\partial X_{it-1}\) for them which, in this case, is evaluated at \(X_{it-1}=0\). Essentially, these estimates of the LBE effect are "counterfactual" and provide a measurement of how much non-exporters' productivity would change if they were to begin exporting (i.e., marginally increase their export intensity from zero to a positive value).8 The average LBE estimate for non-exporters is 0.418 and significant. In fact, the point estimates of the LBE effect are statistically significant for virtually all non-exporters (99%). Magnitude-wise, the average effect size for non-exporters is about 2.6 times larger than that for active exporters. These findings suggest that the bulk of a productivity boost attributable to (internal) learning from exporting is realized upon the domestic firm's entry into export markets as it gains access to new technology and business practices, and the effectiveness of LBE is diminishing as the firm further specializes in exporting.9 This initial effect likely captures the productivity-boosting effects of various export-related investments that new exporters normally undertake.
Namely, the decisions to start exporting tend to go together with other firm-level actions that can enhance productivity such as the adoption of new/upgraded technologies, quality upgrading or R&D spending (e.g., see Verhoogen, 2008; Aw et al., 2011; Bustos, 2011). Due to the lack of richer data, we are unable to unbundle these individual effects, and our LBE estimates ought to be interpreted as measuring their composite export-focused effect attributable to multiple channels. Footnote 9: The diminishing productivity return to the own export experience is also corroborated by the estimates of the LBE function which we discuss later. On average, the size of the LFE effect is on par with that of LBE: a pooled mean estimate of the LFE effect is statistically significant at 0.32 (vs. 0.36 for LBE) whereby a percentage point increase in the average peer export intensity within a domestic firm's local industry boosts its future productivity by 0.32%. It may perhaps surprise the reader that the indirect cross-firm LFE effect is estimated to boost firm productivity by almost as much as the LBE occurring internally. Of note, however, is that these two effects are not commensurable with one another because, by construction, they capture impacts of qualitatively different "treatments." While \(LBE_{it}\) measures the productivity effect of a unit change in _one_ firm's (own) export intensity \(X_{it-1}\), \(LFE_{it}\) measures the effect of a unit change in the average peer export intensity \(n_{it-1}^{-1}\sum_{j\in\mathscr{L}_{it-1}}X_{jt-1}\) of the entire local industry, a change that is equivalent to an increase in the export orientation of _all_ peers by a unit. As such, the scale of the two treatments is vastly different. To wit, the LFE effect quantifies a _total_ indirect/spillover effect on firm \(i\)'s productivity at time \(t\) from all its peers of whom, at time \(t-1\), it had \(n_{it-1}\).

\begin{table}
\begin{tabular}{l c c c c|c}
\hline \hline
 & \multicolumn{4}{c|}{_Point Estimates_} & Stat. Signif. \\
 & Mean & 1st Qu. & Median & 3rd Qu. & (\% Obs.) \\
\hline
 & \multicolumn{5}{c}{\textbf{Learning by Exporting}} \\
All Firms & 0.363 & 0.323 & 0.390 & 0.451 & 92.7 \\
 & (0.189, 0.555) & (0.139, 0.531) & (0.207, 0.603) & (0.250, 0.675) & \\
Exporters & 0.158 & –0.000 & 0.257 & 0.353 & 68.3 \\
 & (0.068, 0.254) & (–0.072, 0.067) & (0.117, 0.419) & (0.184, 0.546) & \\
Non-exporters & 0.418 & 0.356 & 0.408 & 0.465 & 99.2 \\
 & (0.234, 0.643) & (0.170, 0.572) & (0.220, 0.628) & (0.259, 0.695) & \\
 & \multicolumn{5}{c}{\textbf{Learning from Exporters}} \\
All Firms & 0.324 & 0.141 & 0.296 & 0.457 & 68.5 \\
 & (0.116, 0.545) & (–0.059, 0.363) & (0.069, 0.520) & (0.166, 0.729) & \\
Exporters & 0.508 & 0.182 & 0.399 & 0.778 & 73.9 \\
 & (0.302, 0.76) & (–0.013, 0.432) & (0.189, 0.618) & (0.508, 1.148) & \\
Non-exporters & 0.275 & 0.132 & 0.28 & 0.42 & 67.1 \\
 & (0.027, 0.489) & (–0.088, 0.34) & (0.027, 0.497) & (0.124, 0.701) & \\
\hline \hline
\multicolumn{6}{l}{_Notes_: Reported is a summary of point estimates of the LBE and LFE effects tabulated by the firm’s exporter status, with 95% bootstrap percentile confidence intervals in parentheses. The far right column reports the share of the (sub)sample for which the observation-specific estimates are statistically significant at the 5% level.} \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The LBE and LFE Productivity Effect Estimates
The average spillover effect on future productivity of firm \(i\) from a percentage point increase in a _single_ peer's \(X_{jt-1}\) is a significantly more modest \(0.013\%\). In what follows, we continue focusing on the (total) LFE effect that aggregates spillovers from all peers--over the average spillover across only a pair of firms--because its effect size is a more policy-relevant quantity. We find that the LFE effect is significantly non-zero only for 69% of recipient firms, thus suggesting that the indirect cross-firm productivity-boosting effect of exporting is less prevalent in Chilean manufacturing than the direct effect taking place within a firm (recall, 93%). Of interest, we also document that the LFE effect is stronger for firms who also export themselves. Namely, the average LFE effect for exporter firms is estimated at 0.508, whereas the corresponding estimate for non-exporters is half that at 0.275. In addition, non-zero learning from exporters is also more prevalent for exporters (74%) than it is for non-exporters (67%). The empirical evidence therefore suggests that exporters benefit from the exposure to peer exporters in their local industry more than non-exporters. Plausibly, there may be complementarities between internal/direct and external/indirect learning from export experiences. This is reasonable because challenges associated with engagements in the foreign market can make exporters more motivated and pressured than their fully domestically-oriented non-exporting peers to improve further and more intensely. It is important to re-emphasize that, given an inherently "black box" nature of production functions and the residual-based definition of TFP, _magnitudes_ of LFE point estimates ought to be taken with caution and, probably, not at their face value. As noted earlier, interpretation of the LFE measure based on spatio-industrial proximity of exporting peers as capturing, specifically, the spillover effects of exporting requires that we assume away all other plausible channels of cross-firm interactions that may also spur productivity spillovers, including spatial agglomeration, R&D and FDI externalities, spillovers along supply chains, etc. These other factors are expected to correlate with exporting behavior, and the LFE effect that we measure may partly reflect spillovers that they induce across firms as well. Unless all plausible internal and external productivity modifiers are controlled for when modeling firm productivity evolution, measured LFE effects will expectedly be crude. This is an obvious limitation of our analysis but also happens to be the case for any other study that has ever sought to measure productivity spillovers. (The same arguments can also be made about LBE.) Consequently, we mainly pursue a simpler, more feasible objective of testing for _non-zero_ export-driven spillover effects on productivity. That is, our focus is on the "extensive margin," i.e., whether productivity spillovers from peer exporters occur in general, as opposed to how big these effects are. To this end, we find that the productivity externalities from exporting peers are significant in two-thirds of plants.

_Cumulative Long-Run Effects.--_As noted earlier, the reported point estimates of the LBE and LFE effects are short-run and do not account for dynamic effects over time. Obviously, owing to the persistence of productivity, the cumulative implications of exporting for domestic firms' productivity in the long run will be more sizable.
Because the estimated productivity process is nonlinear, the computation of such cumulative effects is, however, not clear-cut. To roughly size up the economic significance of LBE and LFE for productivity in the domestic industry in the long-run equilibrium, we perform the following back-of-envelope calculation. Approximating \(h(\cdot)\) using a linear expansion around the point such that its evaluated gradients are equal to the mean estimates from our semi-parametric model, i.e., \[\omega_{it}\approx\text{const}+0.482\omega_{it-1}+0.363X_{it-1}+0.324\overline{X}_{it-1}+\text{error}_{it},\] under the temporal stationarity of log-productivity we have that the long-run LBE and LFE effects are respectively equal to \(0.363/(1-0.482)=0.701\) and \(0.324/(1-0.482)=0.627\). Thus, in the long-run equilibrium a permanent 10 percentage point increase in the mean export intensity of _all_ firms in a local industry is roughly estimated to produce a sizable 13.3% boost to the average productivity of domestic firms.10 This estimate incorporates both direct LBE and indirect LFE channels as well as the temporal multiplier effect due to persistence over time. Whether occurring through technology/efficiency-driven cost savings or an increase in demand (or both), this improvement in the long-run average TFPR, \(E[\omega]\), associated with an industry's 10% pivot towards more exporting is roughly equivalent to \(13.3\%\times E[Y]=3,680\) thousand pesos more in revenues for an average domestic firm, holding the input usage fixed. Footnote 10: The total long-run effect on \(E[\omega]\) of a 10 percentage point increase in \(E[X]\) is \(dE[\omega]/dE[X]\times 10=\left(\partial E[\omega]/\partial E[X]+\partial E[\omega]/\partial E[\overline{X}]\times dE[\overline{X}]/dE[X]\right)\times 10=(0.701+0.627)\times 10=13.3\), where we have made use of \(dE[\overline{X}]/dE[X]=1\).
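The back-of-envelope arithmetic is easy to verify directly; note that with the rounded coefficients shown, the second multiplier evaluates to about 0.625 rather than the reported 0.627, the small gap reflecting rounding of the plugged-in means.

```python
# Long-run multipliers implied by the linearized productivity process
# omega_t ~ const + rho*omega_{t-1} + b_x*X_{t-1} + b_xbar*Xbar_{t-1}.
rho, b_x, b_xbar = 0.482, 0.363, 0.324
lr_lbe = b_x / (1 - rho)                  # ~ 0.701
lr_lfe = b_xbar / (1 - rho)               # ~ 0.625 with these rounded inputs
total_10pp = (lr_lbe + lr_lfe) * 10       # permanent +10pp for all firms -> ~13.3%
print(round(lr_lbe, 3), round(lr_lfe, 3), round(total_10pp, 1))
```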
_Heterogeneity and Nonlinearity.--_Although we find that average effect sizes of LBE and LFE are commensurate, individual firms are highly heterogeneous across many dimensions including their productivity, the degree of their export orientation as well as the intensity of their exposure to other exporters in the industry. It is therefore of interest to investigate if these characteristics influence the effect size of their internal and external learning from exporting. Recall that we obtain the estimates of the productivity effects of exporting via \(\widehat{LBE}_{it}=\partial\widehat{g}(\cdot)/\partial X_{it-1}\) and \(\widehat{LFE}_{it}=\partial\widehat{g}(\cdot)/\partial\overline{X}_{it-1}\), where we estimate \(g\big{(}\omega_{it-1},X_{it-1},\overline{X}_{it-1}\big{)}\) using the second-order polynomial sieve approximation given in (3.10). Thus, by derivation, both \(\widehat{LBE}_{it}\) and \(\widehat{LFE}_{it}\) are the estimated linear _functions_ of the "determinants" of firm productivity \(\big{(}\omega_{it-1},X_{it-1},\overline{X}_{it-1}\big{)}\).11 Table 2 reports the estimates of parameters on these three variables for LBE and LFE. Footnote 11: Concretely, under the quadratic polynomial approximation of \(g(\cdot)\) in (3.10), the estimated gradient functions are \(\widehat{LBE}_{it}=\partial\widehat{g}(\cdot)/\partial X_{it-1}=\widehat{\lambda}_{x}+2\widehat{\lambda}_{xx}X_{it-1}+\widehat{\lambda}_{\omega x}\widehat{\omega}_{it-1}+\widehat{\lambda}_{x\bar{x}}\overline{X}_{it-1}\) and \(\widehat{LFE}_{it}=\partial\widehat{g}(\cdot)/\partial\overline{X}_{it-1}=\widehat{\lambda}_{\bar{x}}+2\widehat{\lambda}_{\bar{x}\bar{x}}\overline{X}_{it-1}+\widehat{\lambda}_{\omega\bar{x}}\widehat{\omega}_{it-1}+\widehat{\lambda}_{x\bar{x}}X_{it-1}\), where the \(\widehat{\lambda}\)s are the estimated polynomial coefficients. Table 2 reports the coefficients on the regressors in these functions. The coefficient estimates on \(\omega_{it-1}\) for both LBE and LFE are significantly negative, indicating that the magnitude of these effects declines as firms get more productive. Thus, less productive plants benefit more from export-driven learning, be it an internal or external channel. This finding suggests that plants that are already highly productive have less to gain from export-driven learning as well as less to learn from their exporting peers, presumably because they are much closer to the technology/knowledge frontier compared to less productive domestic firms. We also find that a firm's own export intensity \(X_{it-1}\) has a significantly negative effect on learning by exporting but a significantly positive effect on learning from exporters, indicating that less export-oriented plants improve more via learning by exporting and less via learning from exporters (and vice versa). Basically, this is indicative of the diminishing productivity return to the own export experience: an increase in the degree of a firm's export orientation enhances its productivity at a decreasing rate. But we do not find such a pattern for the LFE effect. On the contrary, the more export-oriented the plant is, the higher the cross-firm export-driven productivity spillovers are, which buttresses our earlier discussion of potential complementarities between exporting and external learning in connection with Table 1. Lastly, the results in Table 2 suggest that the average peer export orientation in the local industry \(\overline{X}_{it-1}\) has a significantly positive influence on learning by exporting, indicating that a greater exposure to exporters helps plants absorb productivity improvements from their own export behavior. At the same time, we find no significant effect of \(\overline{X}_{it-1}\) on the LFE effect size, indicating that the average export participation in the industry improves plant productivity at a constant rate.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
 & LBE & LFE \\
\hline
\(\omega_{it-1}\) & –0.113 & –0.346 \\
 & (–0.212, –0.039) & (–0.575, –0.134) \\
\(X_{it-1}\) & –0.970 & 1.210 \\
 & (–1.454, –0.435) & (0.583, 2.008) \\
\(\overline{X}_{it-1}\) & 1.210 & 0.220 \\
 & (0.583, 2.008) & (–0.959, 1.377) \\
\hline \hline
\end{tabular}

_Notes_: Reported are the parameter estimates for the LBE and LFE functions derived from the polynomial approximation of the conditional mean of \(\omega_{it}\), along with the 95% bootstrap percentile confidence intervals in parentheses.
\end{table}
Table 2: Estimates of the LBE and LFE Functions
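Given estimates of the ten sieve coefficients, the observation-level effects underlying Table 2 are simple linear maps. A sketch of these gradient functions follows; the coefficient ordering mirrors the basis in (3.10) and is our assumption for illustration.

```python
# Gradients of the quadratic sieve g(.) with respect to X and Xbar, in the
# spirit of footnote 11; `gamma` holds the 10 coefficients ordered as
# [1, W, W^2, X, X^2, Xbar, Xbar^2, W*X, W*Xbar, X*Xbar].
def lbe_lfe(gamma, w, x_lag, xbar_lag):
    (_, g_w, g_ww, g_x, g_xx, g_xb, g_xbxb, g_wx, g_wxb, g_xxb) = gamma
    lbe = g_x + 2 * g_xx * x_lag + g_wx * w + g_xxb * xbar_lag    # d g / d X
    lfe = g_xb + 2 * g_xbxb * xbar_lag + g_wxb * w + g_xxb * x_lag  # d g / d Xbar
    return lbe, lfe
```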
_Robustness Analysis.--_We also assess the robustness of our main empirical finding of significant LBE and LFE effects to the choice of how we measure (\(i\)) the export experience of firms and (\(ii\)) their exposure to export experiences of their peers. Specifically, we re-estimate our model using a binary export-status variable \(1(X_{it}>0)\) as well as redefine \(\overline{X}_{it}\) by no longer restricting the pool of a firm's peers to the same geographic region. The latter is motivated by the plausible argument that, unlike the pooling of workers, the knowledge diffusion mechanism need not be local and may spill widely across space. These results are summarized in Table 3. Two observations are in order here. First, using a discrete measure of export experience yields smaller effect sizes for both LBE and LFE. Second, expanding the group of peer firms to include all regions, thereby covering a wider spillover pool, results in significantly larger estimates of the external LFE effect while expectedly having no notable impact on the internal LBE effect.12 The latter indirectly lends support to the hypothesis that export spillover effects may have a broad geographical scope. By restricting the extent of spillovers to the same region in our main specification, we also restrict the reach of cross-firm export externalities. Footnote 12: This is so despite the fact that the indirect LFE effect _per_ each of a firm's peers actually gets diluted; see the tables in Appendix E.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & (I) & (II) & (III) & (IV) \\
\hline
LBE & 0.363 & 0.379 & 0.147 & 0.151 \\
 & (0.189, 0.555) & (0.207, 0.575) & (0.117, 0.176) & (0.121, 0.179) \\
 & [92.7\%] & [89.4\%] & [99.6\%] & [99.6\%] \\
LFE & 0.324 & 0.912 & 0.231 & 0.463 \\
 & (0.116, 0.546) & (0.635, 1.169) & (0.117, 0.331) & (0.321, 0.645) \\
 & [68.6\%] & [80.0\%] & [85.1\%] & [66.2\%] \\
\hline
\multicolumn{5}{l}{_Exporting Measure_} \\
intensity & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) \\
status & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\
\multicolumn{5}{l}{_Pool of Peer Exporters_} \\
same industry & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
same region & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\times\) \\
\hline \hline
\end{tabular}

_Notes_: Summarized are the results for the LBE and LFE effects obtained using alternative variable specifications. We report the mean point estimates along with 95% bootstrap percentile confidence intervals (in parentheses) as well as the shares of the sample for which the observation-specific estimates are statistically significant at the 5% level [in brackets]. Model (I) is our main specification, the results for which we reproduce here for convenience.
\end{table}
Table 3: Robustness to Alternative Measures of Exporting and of the Exposure to Exporters

As discussed earlier, to identify the spillover effects of exporting beyond exporters, we rule out other potential channels of cross-firm interactions within regions and industries that stem from the spatio-industrial proximity of firms to one another. However, we can somewhat relax this assumption by allowing for peer network unobservables that, besides just exporting, may give rise to spillovers in productivity as well (e.g., regional agglomerations). The requirement is that these unobservables be time-invariant, in which case we can control for potential region- and industry-level confounders using peer group fixed effects. We consider group effects across both the spatial and industrial dimensions and re-estimate our baseline specification including region and industry fixed effects. The corresponding results are summarized in Table 4. While predictably the LBE estimates of within-firm effects of exporting hardly change, the mean effect size of external spillovers (LFE) increases notably when we rely solely on within-region or within-region-and-industry variation in the estimation. The increase in the magnitude of spillovers is consistent with significant between-region/industry heterogeneity in (peer) firm exporting which, when omitting group effects, "dilutes" the strength of cross-firm effects in the baseline model due to the variation across peer groups.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & (I) & (II) & (III) \\
\hline
LBE & 0.363 & 0.381 & 0.367 \\
 & (0.189, 0.555) & (0.202, 0.577) & (0.189, 0.559) \\
 & [92.7\%] & [93.2\%] & [92.8\%] \\
LFE & 0.324 & 0.711 & 0.496 \\
 & (0.116, 0.546) & (0.512, 0.958) & (0.236, 0.762) \\
 & [68.6\%] & [94.6\%] & [83.6\%] \\
\hline
Region FE & \(\times\) & \(\checkmark\) & \(\checkmark\) \\
Industry FE & \(\times\) & \(\times\) & \(\checkmark\) \\
\hline \hline
\end{tabular}

_Notes_: Summarized are the results for the LBE and LFE effects obtained using alternative fixed-effect specifications. We report the mean point estimates along with 95% bootstrap percentile confidence intervals (in parentheses) as well as the shares of the sample for which the observation-specific estimates are statistically significant at the 5% level [in brackets]. Model (I) is our main specification, the results for which we reproduce here for convenience.
\end{table}
Table 4: Robustness to Controlling for Peer Group Effects
Overall, the cross-specification variation in estimates is unsurprising and, in fact, expected because the models treat cross-firm interactions somewhat differently and/or utilize different variation in the data to identify the effects. The qualitative implication however remains unchanged across all alternative specifications: both LBE and LFE are statistically significant and widely prevalent. Also, consistently across these alternatives, we obtain LFE effects that, on average, exceed LBE, implying that the total cross-firm effect of exporting--_after aggregating across all peers_--may even outsize the direct learning effect occurring within one firm.13 If anything, this further underscores the importance of measuring both effects when seeking to quantify the total social effects of exporting (and export-promoting policies, by extension) on domestic industries. Footnote 13: Analogues of Tables 3–4 that report _average_ indirect LFE effects (per peer) are available in Appendix E. As with our main specifications, these average indirect effects are much smaller than the direct LBE effect.

_Alternative Two-Step Procedure.--_Empirical analyses of export-driven spillovers in productivity oftentimes employ a two-step methodology, whereby one first estimates firm productivity via off-the-shelf proxy methods that assume--and thus impose in the estimation--an _exogenous_ Markov process for productivity and then tests for spillovers in a second step by (linearly) regressing the firm productivity estimated in the first step on the export spillover exposure variable(s). For instance, this is the approach taken by Alvarez & Lopez (2008). As discussed earlier, not only is it internally inconsistent (contradictory) in that a non-zero internal LBE and/or external LFE effect found in the second step _must_ violate the key identifying assumption of the first-step estimator, but it is also the case that it is typically implemented using export spillover exposure measures that average across all firms, including the recipient of spillovers.
The latter produces regression coefficients that cannot separate the LBE from the LFE effect, muddying their interpretation.14 Footnote 14: For further discussion as well as an illustration for the case of an Alvarez & Lopez (2008)-type second-step regression in (5.1), see Appendix F. Notwithstanding these issues, we implement a two-step approach to estimate the LFE effect and compare it with ours. More concretely, we re-estimate our model to obtain firm productivity but, this time, assuming an exogenous Markov process for productivity, whereby \(\omega_{it}=h(\omega_{it-1})+\zeta_{it}\) is imposed in place of (2.6), and then run the following Alvarez & Lopez (2008)-style second-step linear regression: \[E[\widetilde{\omega}_{it}|\cdot]=\beta_{0}+\beta_{x}X_{it-1}+\beta_{\bar{x}}\overline{X}_{it-1}, \tag{5.1}\] where \(\widetilde{\omega}_{it}\) is the first-step estimate of exogenously evolving \(\omega_{it}\), and the spillover pool \(\overline{X}_{it-1}\) is measured using the "grand" average \(\sum_{j}p_{ijt-1}X_{jt-1}\) that does not exclude \(X_{it-1}\). To maximize comparability, we specify the above second-step regression in lags to match the dynamics of (2.6). Figure 3 summarizes the estimates of \(\beta_{\bar{x}}\) from (5.1) estimated with and without peer group fixed effects. As we show in Appendix F, the \(\beta_{\bar{x}}\) parameter measures the LFE effect as defined by us in the rest of the paper using the _peer_ average measure of \(\overline{X}_{it-1}\), viz., \(LFE\equiv\partial E[\omega_{it}|\cdot]/\partial\sum_{j\neq i}p_{ijt-1}X_{jt-1}\).15 The three fixed-effects specifications match those reported in Table 4. Footnote 15: The coefficient on the firm's own \(X_{it-1}\) is significant and stable across all three specifications (ranging from 0.160 to 0.168), but we do not focus on it here due to the difficulty of interpreting it, since it does not separately identify the LBE from the LFE. Estimates from the two-step procedure and our model are non-negligibly different. The specification with both region and industry fixed effects, which drastically reduces the amount of identifying variation in \(\overline{X}_{it-1}\) defined at the region-industry level, leads to a noisy estimate of \(\beta_{\bar{x}}\) implying an insignificant LFE. Under the other specifications, the \(\beta_{\bar{x}}\) estimates are all statistically significant but much larger (even exceeding the unit elasticity) than the corresponding mean LFE effects from our model.
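For completeness, the second step of such a two-step comparison boils down to a simple linear regression of first-step productivity estimates on the lagged export variables. A sketch with illustrative variable names:

```python
# Alvarez & Lopez (2008)-style second-step regression (5.1): OLS of first-step
# productivity estimates on lagged own and grand-average export intensity.
import numpy as np
import statsmodels.api as sm

def second_step_lfe(omega_tilde, x_lag, xbar_grand_lag):
    X = sm.add_constant(np.column_stack([x_lag, xbar_grand_lag]))
    fit = sm.OLS(omega_tilde, X).fit(cov_type="HC1")  # heteroskedasticity-robust
    return fit.params[2]                              # beta_xbar, the LFE coefficient
```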
Given the lack of internal consistency and the associated misspecification of a productivity process in the first and second steps of the two-step procedure, it is difficult to precisely rationalize why these LFE point estimates differ from ours. Having said that, compared to the two-step procedure, our methodology exhibits a significantly smaller variation in LFE estimates across specifications, lending assurance to their robustness. On the other hand, the dramatic volatility of an Alvarez & Lopez (2008)-style approach is consistent with the simulation evidence reported by Malikov & Zhao (2021) who find that two-step procedures suffer from non-vanishing biases in spillover effect estimates and these biases can be both positive and negative.

Figure 3: Alternative Two-Step Estimates of the LFE Effect

## 6 Conclusion

This paper extends De Loecker (2013) to develop a unified empirical framework for productivity measurement that explicitly accommodates both the direct LBE channel taking place _within_ a firm as well as the indirect LFE channel working _between_ firms, which enables us to robustly measure and test these two effects in a consistent structural framework. We do so by formalizing the evolution of firm productivity as an export-controlled process, with future productivity potentially affected not only by the firm's own export behavior but also by that of its spatially proximate peers in the industry. This allows a simultaneous, "internally consistent" identification of firm productivity and the corresponding effects of exporting. Our identification strategy utilizes a structural link between the parametric production function and the firm's first-order condition for static inputs, which helps us circumvent recent non-identification critiques of conventional proxy-based productivity estimators. In addition, owing to a nonparametric treatment of the firm productivity process, our model enables us to accommodate heterogeneity and nonlinearity in the productivity effects of exporting across firms. We apply our semiparametric methodology to a panel of manufacturing plants in Chile in 1995-2007 and find significant evidence of both LBE and LFE. Overall, the LBE effect on productivity is statistically significant for 93% of firms in our main specification. On average, the size of the LFE effect is commensurate to that of LBE but, at the observation level, the LFE effect is significantly non-zero for 69% of plants only, thereby suggesting that the indirect productivity-boosting effect of exporting is less prevalent in Chilean manufacturing than the direct learning effect taking place within the firm. We also document that the LFE effect is stronger for firms who also export. This empirical evidence therefore suggests that exporters benefit from the exposure to peer exporters in their local industry more than non-exporters, plausibly because there may be complementarities between internal/direct and external/indirect learning from export experiences. Using a simple back-of-envelope calculation, we find that the long-run LBE and LFE effects on mean firm productivity are respectively 0.70% and 0.63% per percentage point of export intensity. Thus, in the long-run equilibrium a permanent 10 percentage point increase in the mean export intensity of all firms in the local industry is roughly estimated to produce a sizable 13.3% boost to the average firm productivity through both direct LBE and indirect LFE channels.
2301.12269
Methods and Tools for Monitoring Driver's Behavior
In-vehicle sensing technology has gained tremendous attention due to its ability to support major technological developments, such as connected vehicles and self-driving cars. In-vehicle sensing data are invaluable and important data sources for traffic management systems. In this paper we propose an innovative architecture of unobtrusive in-vehicle sensors and present the methods and tools that are used to measure the behavior of drivers. The proposed architecture, including the methods and tools, is used in our NIH project to monitor and identify older drivers with early dementia.
Muhammad Tanveer Jan, Sonia Moshfeghi, Joshua William Conniff, Jinwoo Jang, Kwangsoo Yang, Jiannan Zhai, Monica Rosselli, David Newman, Ruth Tappen, Borko Furht
2023-01-28T19:00:50Z
http://arxiv.org/abs/2301.12269v2
# Methods and Tools for Monitoring Driver's Behavior

###### Abstract

In-vehicle sensing technology has gained tremendous attention due to its ability to support major technological developments, such as connected vehicles and self-driving cars. In-vehicle sensing data are invaluable and important data sources for traffic management systems. In this paper we propose an innovative architecture of unobtrusive in-vehicle sensors and present the methods and tools that are used to measure the behavior of drivers. The proposed architecture, including the methods and tools, is used in our NIH project to monitor and identify older drivers with early dementia.

driver's behavior, in-vehicle sensing, in-vehicle cameras, telematics sensors

## I Introduction

About one in ten older adults in the U.S. have Alzheimer's disease (AD) and another 15 to 20% have mild cognitive impairment (MCI), a third of whom will develop dementia within 5 years. Individuals with dementia eventually become unable to perform complex everyday activities, including driving, so most current driving research focuses on MCI or early-stage dementia. Our 5-year project, funded by NIH, focuses on creating an innovative in-vehicle sensor architecture, selecting a large group of older drivers aged 65 to 85 years, measuring the drivers' behavior over a 3-year period, and identifying cognitive changes that can lead to early dementia. Leading vehicle manufacturers, including Volvo [1], Ford [2], and Kia [3], have adopted driver-alert applications that warn the driver in order to avoid accidents. Most of those systems use similar techniques and algorithms to monitor driver behavior, but they are limited to alerting the driver to specific situations so that incidents can be avoided: a driver-alert system warns a driver who has been driving for a long time, and an advanced driver-assistance system warns the driver when the vehicle is drifting into another lane or when someone is in the driver's blind spot. Measured driving behaviors can be divided into three types [9]: * Driver-based: drowsiness, sensation seeking, impulsivity, etc. * Driving-based: distraction, attention, etc. * Qualitative-based: speeding, braking, lane changing, etc. Among these, distraction and drowsiness are major behaviors that change over time and can be used to gain key insights into the participants' driving. Typical distractions during driving occur when drivers look off the road or text and use the phone [4, 5, 6]. Metrics that are used to measure distraction include head pose [7] and gaze patterns [8].

## II In-Vehicle Sensor Architecture

We designed an innovative architecture of unobtrusive in-vehicle sensors. The concept of utilizing in-vehicle sensors to measure the behavior of drivers and detect cognitive change is in itself innovative and reflective of the rapid development of these sensors and their application to monitoring driver behavior. Published reports on the proposed sensors are limited, particularly in regard to small sample sizes, the duration of testing, and the number of sensors and/or cognitive tests utilized. Furthermore, few prospective studies examined patients with abnormal aging (MCI and early dementia), and there have been no studies including a culturally diverse sample. Our proposal addresses these limitations with a fully powered study, a complete package of unobtrusive sensors, and an array of cognitive tests including the LASSI-L and others that are particularly sensitive to the early changes in cognition.
Furthermore, the algorithms to measure Driver Behavior Indices are unique and will contribute significantly to the application of the results in real-world settings. The specific technical innovations of this project include the following: * an adaptive driver-behavior data sampling technique to improve the efficiency of the data collection for driver behavior analytics, * the data fusion of telematics, vision-sensing, and environmental data to define highly detailed driver behavior indices, * the creation of a scalable spatial network query processing platform for heterogeneous data sources. The architecture of the proposed in-vehicle sensors is shown in Fig. 1. Each in-vehicle sensor comprises two distributed sensing units: * an in-vehicle vision sensing unit, * an in-vehicle telemetry unit. For example, the GPS clock information is shared throughout the in-vehicle sensor network to achieve precise time synchronization. The proposed sensor supports powerful computing resources for real-time, customized onboard data processing, data fusion, and machine vision. Customized sensor enclosures are designed for the sensors to support a cooling system and to minimize the size of the sensor hardware. Each in-vehicle sensor has local data storage to store the in-vehicle sensing data. The data are uploaded to a central database during participants' quarterly cognitive testing visits. The in-vehicle sensing units are the main components used in this study; there are two types of sensing units.

### _Vision Sensors_

The major components of the vision sensing unit are a forward-facing camera, a driver-facing camera, and an MDVR device to store data. The vision sensor unit is mounted on the windshield. The vision sensors use computer vision algorithms to unobtrusively * track eye and head movements, * read facial micro-expressions, * maintain driving situation awareness (e.g., traffic light classification, taillights of front vehicles, and lane marking, stop sign, and vehicle detection). Table I shows the indices measured by the driver-facing and front cameras. We next describe several methods and tools that use the vision sensors to measure the driver's behavior.

Fig. 1: Architecture of Vision Sensor.

#### II-A1 Region of Interest and Face Detection

Multiple techniques are used for face detection based on a region of interest (RoI). The RoI is identified by considering different factors such as the angle of the camera, the height of the driver, and the placement of the camera. Because the number of drivers is large, we use RoI selection and face detection interchangeably: a fast but less accurate face detection library first gives a rough estimate of the location of the face, and based on that location the RoI is selected with the maximum area around the face, both to absorb any false positives and to rule out the passenger's face. After the RoI is identified, it is passed to a more accurate face detector to obtain an exact location with facial landmarks. Figs. 2 and 3 show results from our vision sensor system.

#### II-A2 Eyes and Mouth Detection

Eyes and mouth detection are two more features implemented in our vision system. Both features are detected separately and are based on Haar-cascade classifiers. The functions return a bounding box for the detected eye and mouth areas, which can be passed as an array or tuple, for an RGB or grayscale image, for further use. We also implemented emotion recognition, but it was excluded due to insufficient accuracy [11]. Figs. 4 and 5 illustrate mouth and eye detection in our video sensor system.
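To illustrate the two-stage detection logic described above, here is a hedged sketch using OpenCV's stock Haar cascades; the project's actual detectors, model files, and thresholds may differ (for instance, OpenCV ships an eye cascade but no dedicated mouth cascade, so only eyes are refined here).

```python
# Two-stage sketch: a fast Haar cascade gives a rough face location, the
# largest box defines the RoI (ruling out the passenger's smaller face), and
# a second detector then refines within the RoI only.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_driver_face_and_eyes(frame_bgr, margin=0.25):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection to absorb false positives and exclude
    # the passenger's (smaller, more distant) face.
    x, y, w, h = max(faces, key=lambda b: b[2] * b[3])
    dx, dy = int(margin * w), int(margin * h)
    H, W = gray.shape
    roi = gray[max(0, y - dy):min(H, y + h + dy),
               max(0, x - dx):min(W, x + w + dx)]
    # Refine within the RoI only: cheaper and with fewer spurious hits.
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return roi, eyes
```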
#### II-A3 Lane Detection

Lane detection is another feature of the video sensor system; it uses a deep learning model to detect the lane while driving. It takes an image or video as input and displays a green marker on the lane in which the driver is currently driving. Rain detection is also included in this framework; it uses a machine learning model to detect rain.

Fig. 2: Face Detection based on region-of-interest. Fig. 3: 6 Facial landmark detection. Fig. 4: Eyes Detection. Fig. 5: Mouth Detection.

### _Telematics Sensors_

The major sensing components of the telematics unit are * an on-chip RTK GNSS module, * an Inertial Measurement Unit (IMU), * an on-board diagnostics (OBD) scanner. They enable obtaining high-precision positioning data, real-time vehicle state information (e.g., engine RPM, vehicle speed, and pedal position), and the vehicle's dynamic motions.

#### II-B1 High-precision Positioning

An on-chip RTK GNSS module allows capturing a complete picture of the vehicle's high-precision positioning data, making in-depth driver behavior analytics at the lane and intersection levels possible. This new technology provides highly precise positioning data compared to typical GPS modules (accuracy about 4.9 m). The precise vehicle localization achieved by this new technology is important to capture lane deviations, travel patterns in parking lot areas, and detailed turning behaviors.

#### II-B2 Onboard Diagnostics

A Bluetooth OBD scanner obtains real-time data streams from the vehicle's Controller Area Network (CAN bus). The standardized OBD-II protocol facilitates easy data translation into human-readable formats based on standardized OBD-II PIDs without reverse engineering work.

#### II-B3 Motion and Orientation

IMU sensors, consisting of tri-axial accelerometers, gyroscopes, and magnetometers, are used to capture the vehicle's dynamic motions and orientations. They estimate the harsh acceleration, braking, and cornering of the vehicle, which are among the popular indicators of driver behavior. IMU sensors also identify driving over potholes and raised pavement markers, allowing for analysis of how drivers react to unexpected pavement defects and lane departures in conjunction with the other telematics data.
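As an illustration of how harsh-driving indicators can be derived from the IMU stream, consider the following sketch; the thresholds (in g) and sampling rate are illustrative placeholders rather than the project's calibrated values.

```python
# Minimal harsh-event flagging from tri-axial accelerometer data; production
# systems would typically filter the raw signal and calibrate per vehicle.
import numpy as np

def flag_harsh_events(ax, ay, fs=100, accel_g=0.35, brake_g=-0.40, corner_g=0.40):
    """ax: longitudinal, ay: lateral acceleration in g, sampled at fs Hz."""
    events = {
        "harsh_acceleration": np.flatnonzero(ax > accel_g),
        "harsh_braking": np.flatnonzero(ax < brake_g),
        "harsh_cornering": np.flatnonzero(np.abs(ay) > corner_g),
    }
    return {k: v / fs for k, v in events.items()}  # event timestamps in seconds
```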
## III Driver Behavior Indices

Our objective is to measure and monitor changes in Driver Behavior Indices (DBIs) using the vision and telematics sensor data. The selection of the DBIs is designed to reflect older drivers' cognitive function and driving performance. The DBIs are evaluated for each driver and summarized on a daily, weekly, and monthly basis. DBIs are classified into four categories; examples of DBIs are shown in Table II. For illustration, Figs. 6 and 7 show the driving patterns of two senior drivers over a 2-week period based on the data obtained from the in-vehicle cameras, which measured the number of times the driver closed their eyes, the number of distractions, line crossings, and near-collision events. Details of the data can be reviewed in previous work published in the same domain [10].

## IV Conclusion

We presented an innovative architecture of an in-vehicle sensor system that consists of vision and telemetry sensors and includes a set of AI algorithms to measure driver behavior indices. The system has already been installed in about 70 cars driven by older drivers in Florida with the objective of monitoring and detecting cognitive changes in these drivers. The final goal of the project is to identify those drivers whose cognitive changes imply early dementia. The detailed results of the study will be published soon.

## V Acknowledgment

The project mentioned in Section I is supported by Award Number GT003128-NIH from the National Institutes of Health. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.
2303.04272
MU-Massive MIMO with Multiple RISs: SINR Maximization and Asymptotic Analysis
In this letter, we investigate the signal-to-interference-plus-noise-ratio (SINR) maximization problem in a multi-user massive multiple-input-multiple-output (massive MIMO) system enabled with multiple reconfigurable intelligent surfaces (RISs). We examine two zero-forcing (ZF) beamforming approaches for interference management namely BS-UE-ZF and BS-RIS-ZF that enforce the interference to zero at the users (UEs) and the RISs, respectively. Then, for each case, we resolve the SINR maximization problem to find the optimal phase shifts of the elements of the RISs. Also, we evaluate the asymptotic expressions for the optimal phase shifts and the maximum SINRs when the number of the base station (BS) antennas tends to infinity. We show that if the channels of the RIS elements are independent and the number of the BS antennas tends to infinity, random phase shifts achieve the maximum SINR using the BS-UE-ZF beamforming approach. The simulation results illustrate that by employing the BS-RIS-ZF beamforming approach, the asymptotic expressions of the phase shifts and maximum SINRs achieve the rate obtained by the optimal phase shifts even for a small number of the BS antennas.
Somayeh Aghashahi, Zolfa Zeinalpour-Yazdi, Aliakbar Tadaion, Mahdi Boloursaz Mashhadi, Ahmed Elzanaty
2023-03-07T22:49:21Z
http://arxiv.org/abs/2303.04272v2
# MU-Massive MIMO with Multiple RISs: SINR Maximization and Asymptotic Analysis

###### Abstract

In this letter, we investigate the signal-to-interference-plus-noise-ratio (SINR) maximization problem in a multi-user massive multiple-input-multiple-output (massive MIMO) system enabled with multiple reconfigurable intelligent surfaces (RISs). We examine two zero-forcing (ZF) beamforming approaches for interference management, namely BS-UE-ZF and BS-RIS-ZF, that enforce the interference to zero at the users (UEs) and the RISs, respectively. Then, for each case, we resolve the SINR maximization problem to find the optimal phase shifts of the elements of the RISs. Also, we evaluate the asymptotic expressions for the optimal phase shifts and the maximum SINRs when the number of the base station (BS) antennas tends to infinity. We show that if the channels of the RIS elements are independent and the number of the BS antennas tends to infinity, random phase shifts achieve the maximum SINR using the BS-UE-ZF beamforming approach. The simulation results illustrate that by employing the BS-RIS-ZF beamforming approach, the asymptotic expressions of the phase shifts and maximum SINRs achieve the rate obtained by the optimal phase shifts even for a small number of the BS antennas.

RIS-assisted communication, Multi-RIS, MIMO, Zero-Forcing.

## I Introduction

Reconfigurable intelligent surface (RIS) is a new physical layer technology introduced to overcome the drawbacks of wireless communications such as fading, blockage, and interference [1]. An RIS is a planar array of passive reflecting elements whose phase shifts can be continuously adjusted, thereby changing the phase of the reflected signals [2]. Thus, based on the scenario, the behavior of the wireless environment can be improved by deploying the RISs in appropriate places and optimizing the phase shifts of their elements. For instance, exploiting an RIS in a cell with one user can solve the blockage problem or increase the SNR of the user equipment (UE) [3]. However, in multi-user scenarios with a multiple-input multiple-output (MIMO) base station (BS), interference management by the joint design of the beamforming vectors of the BS and the phase shifts of the RIS elements is a challenge. Several studies have been conducted on solving the challenges of beamforming and phase shift design in an RIS-aided multi-user MIMO scenario with different objectives [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], such as the maximization of the sum rate [4, 5], spectral efficiency [6] and energy efficiency [7, 8], the minimization of the transmission power [9] and of the exposure to electromagnetic fields [12, 13], and the max-min SINR problem [10, 11]. However, the proposed solutions involve intricate algorithms, and straightforward designs are not presented in the literature. Moreover, the asymptotic behavior for a massive number of BS antennas has not been investigated. Regarding the above considerations, in this letter, we investigate the SINR maximization problem in the downlink transmission of a multi-user massive MIMO system assisted with multiple RISs. In this scenario, we assume that some of the UEs are directly served by the BS, while others have no direct channel to the BS and are served through the RISs. Each RIS serves several UEs in a specific geographic area, forming a cluster. We use two zero-forcing (ZF) approaches for the beamforming of the BS.
The advantage of ZF beamforming is that it completely cancels the interference, and this gives us a degree of freedom for the asymptotic analysis and the phase shift design. Moreover, ZF beamforming in massive MIMO systems achieves a near-optimal performance [15]. The main contributions of this letter can be summarized as follows: (i) We employ a ZF beamforming approach named BS-UE-ZF that nulls the interference at the UEs, and obtain the optimal phase shifts of the RISs that maximize the SINR of the UEs. Then, for the case that one UE is connected to each RIS, we derive asymptotic expressions for the optimal phase shifts as the number of the BS antennas tends to infinity. (ii) We prove that if the RISs experience i.i.d. normal channels, random phase shifts achieve the maximum SINR when the number of the BS antennas tends to infinity. (iii) For the case that one UE is connected to each RIS, we propose another ZF beamforming approach named BS-RIS-ZF that nulls the interference at the RISs. This approach leads to closed-form expressions for the RIS phase shifts that maximize the SINRs at the UEs. (iv) We evaluate asymptotic expressions for the optimal phase shifts and the maximum SINRs for the BS-RIS-ZF.

## II System Model

We consider downlink transmission in a multi-RIS multi-user massive MIMO system. We assume that \(U_{d}\) of the UEs (named direct UEs) are directly connected to the BS, and the remaining UEs (named blocked UEs) have no direct connection with the BS; \(K\) RISs are deployed to assist the communication between the BS and these UEs. For \(k\in\mathcal{K}\triangleq\{1,\dots,K\}\), \(L_{k}\) of the blocked UEs are served through the \(k^{\rm th}\) RIS; these UEs are selected by Algorithm 1. Moreover, the channels between this RIS and the other UEs are blocked, as considered in [16].

Fig. 1: The system model.

Let \(M\) be the number of the BS antennas and \(N\) be the number of the elements at each of the RISs. Also, \(\mathbf{H}_{k}\in\mathbb{C}^{M\times N}\), \(\mathbf{h}_{k,\ell}\in\mathbb{C}^{N\times 1}\), and \(\mathbf{h}_{d,u}\in\mathbb{C}^{M\times 1}\) denote the channel matrix between the BS and the \(k^{\rm th}\) RIS, the channel vector between the \(k^{\rm th}\) RIS and the \(\ell^{\rm th}\) blocked UE connected to this RIS, and the channel between the BS and the \(u^{\rm th}\) direct UE, respectively. We assume that each of the channel coefficients has a complex normal distribution. The channel between the BS and each RIS is correlated such that \(\mathbb{E}\{\mathbf{H}_{k}^{H}\mathbf{H}_{k}\}=\mathbf{R}_{k}\), while the channels between the BS and different RISs are independent, i.e., \(\mathbb{E}\{\mathbf{H}_{k}^{H}\mathbf{H}_{k^{\prime}}\}=\mathbf{0}\) for \(k\neq k^{\prime}\). The diagonal reflection matrix for the \(k^{\rm th}\) RIS is \(\mathbf{\Phi}_{k}\triangleq{\rm diag}(e^{j\phi_{k,1}},\dots,e^{j\phi_{k,N}})\), where for \(i\in\mathcal{N}\triangleq\{1,\dots,N\}\), \(\phi_{k,i}\) is the phase shift of the \(i^{\rm th}\) element of the RIS. Thus, the received signal at the \(\ell^{\rm th}\) blocked UE connected to the \(k^{\rm th}\) RIS is given by \[y_{b,k,\ell}=\mathbf{h}_{k,\ell}^{H}\mathbf{\Phi}_{k}\mathbf{H}_{k}^{H}\mathbf{w}_{b,k,\ell}x_{b,k,\ell}+\mathbf{h}_{k,\ell}^{H}\mathbf{\Phi}_{k}\mathbf{H}_{k}^{H}\Bigg(\sum_{\begin{subarray}{c}n=1\\ n\neq\ell\end{subarray}}^{L_{k}}\mathbf{w}_{b,k,n}x_{b,k,n}+\sum_{\begin{subarray}{c}m=1\\ m\neq k\end{subarray}}^{K}\sum_{n=1}^{L_{m}}\mathbf{w}_{b,m,n}x_{b,m,n}+\sum_{u=1}^{U_{d}}\mathbf{w}_{d,u}x_{d,u}\Bigg)+n_{b,k,\ell}, \tag{1}\] where \(\mathbf{w}_{b,k,\ell}\) and \(x_{b,k,\ell}\) are the beamforming vector and the transmitted symbol for this UE, \(\mathbf{w}_{d,u}\) and \(x_{d,u}\) are those of the \(u^{\rm th}\) direct UE, and \(n_{b,k,\ell}\) is the additive noise.
The optimum value of \(\mathbf{v}_{k}\) in (7) is equal to the eigenvector of the matrix \(\sum_{\ell=1}^{L_{k}}\mathbf{q}_{k,\ell}\mathbf{q}_{k,\ell}^{H}\) corresponding to its maximum eigenvalue. Thus, for \(L_{k}=1\), the optimum value of \(\phi_{k,i}\) is \[\phi_{k,i}=-\angle\big(h_{k,1,i}^{*}[\mathbf{H}_{k}^{H}\mathbf{Q}_{1}^{H}(\mathbf{Q}_{1}\mathbf{Q}_{1}^{H})^{-1}\mathbf{e}_{k}]_{i}\big),\quad k\in\mathcal{K},\ i\in\mathcal{N}. \tag{8}\] In the following proposition, we obtain the asymptotic form of (8) as \(M\longrightarrow\infty\).

**Proposition 1**.: _Considering the BS-UE-ZF beamforming in (5) and the asymptotic regime where \(M\longrightarrow\infty\), when one UE is connected to the \(k^{\rm th}\) RIS the phase shifts of the RIS elements that maximize the SINR of the UEs can be found by solving the following equation system:_ \[\phi_{k,i}=-\angle\Big(h_{k,1,i}^{*}\sum_{\ell=1}^{N}R_{k,i,\ell}e^{-j\phi_{k,\ell}}h_{k,1,\ell}\Big),\quad i\in\mathcal{N}, \tag{9}\] _where \(R_{k,i,\ell}\) is the element \((i,\ell)\) of the correlation matrix \(\mathbf{R}_{k}=\mathbb{E}\{\mathbf{H}_{k}^{H}\mathbf{H}_{k}\}\)._

Proof.: Considering \(\mathbf{Q}_{1}=\left[\mathbf{g}_{1,1}\ \ldots\ \mathbf{g}_{K,1}\ \ \mathbf{h}_{d,1}\ \ldots\ \mathbf{h}_{d,U_{d}}\right]^{H}\) with \(\mathbf{g}_{k,1}\triangleq\mathbf{H}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}\), for \(k\in\mathcal{K}\) we have \[\mathbf{H}_{k}^{H}\mathbf{Q}_{1}^{H}(\mathbf{Q}_{1}\mathbf{Q}_{1}^{H})^{-1}\mathbf{e}_{k}=\mathbf{H}_{k}^{H}\left[\mathbf{H}_{1}\mathbf{\Phi}_{1}^{H}\mathbf{h}_{1,1}\ \ldots\ \mathbf{H}_{K}\mathbf{\Phi}_{K}^{H}\mathbf{h}_{K,1}\ \ \mathbf{h}_{d,1}\ \ldots\ \mathbf{h}_{d,U_{d}}\right]\times\begin{bmatrix}\mathbf{g}_{1,1}^{H}\mathbf{g}_{1,1}&\ldots&\mathbf{g}_{1,1}^{H}\mathbf{g}_{K,1}&\mathbf{g}_{1,1}^{H}\mathbf{h}_{d,1}&\ldots&\mathbf{g}_{1,1}^{H}\mathbf{h}_{d,U_{d}}\\ \vdots&&\vdots&\vdots&&\vdots\\ \mathbf{g}_{K,1}^{H}\mathbf{g}_{1,1}&\ldots&\mathbf{g}_{K,1}^{H}\mathbf{g}_{K,1}&\mathbf{g}_{K,1}^{H}\mathbf{h}_{d,1}&\ldots&\mathbf{g}_{K,1}^{H}\mathbf{h}_{d,U_{d}}\\ \vdots&&\vdots&\vdots&&\vdots\\ \mathbf{h}_{d,U_{d}}^{H}\mathbf{g}_{1,1}&\ldots&\mathbf{h}_{d,U_{d}}^{H}\mathbf{g}_{K,1}&\mathbf{h}_{d,U_{d}}^{H}\mathbf{h}_{d,1}&\ldots&\mathbf{h}_{d,U_{d}}^{H}\mathbf{h}_{d,U_{d}}\end{bmatrix}^{-1}\mathbf{e}_{k}. \tag{10}\] Also, if \(\mathbf{H}_{k}=\left[\mathbf{h}_{k,1}^{\prime}\ \ldots\ \mathbf{h}_{k,N}^{\prime}\right]\), as \(M\longrightarrow\infty\) we get \(\mathbf{h}_{k,\ell}^{\prime H}\mathbf{h}_{n,m}^{\prime}\longrightarrow\mathbb{E}\{\mathbf{h}_{k,\ell}^{\prime H}\mathbf{h}_{n,m}^{\prime}\}\) [18], and thus \[\mathbf{H}_{k}^{H}\mathbf{H}_{n}\longrightarrow\begin{cases}\mathbf{R}_{k}&n=k\\ \mathbf{0}_{N\times N}&n\neq k\end{cases}, \tag{11}\] where \(\mathbf{R}_{k}=\mathbb{E}\{\mathbf{H}_{k}^{H}\mathbf{H}_{k}\}\) and \(\mathbf{0}_{N\times N}\) is an \(N\times N\) zero matrix. Therefore, as \(M\longrightarrow\infty\) we have \[\mathbf{H}_{k}^{H}\mathbf{Q}_{1}^{H}(\mathbf{Q}_{1}\mathbf{Q}_{1}^{H})^{-1}\mathbf{e}_{k}=\left[\mathbf{0}\ \ldots\ \mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}\ \ldots\ \mathbf{0}\right]\times\mathrm{diag}\Big(\frac{1}{\mathbf{h}_{1,1}^{H}\mathbf{\Phi}_{1}\mathbf{R}_{1}\mathbf{\Phi}_{1}^{H}\mathbf{h}_{1,1}},\ldots,\frac{1}{\mathbf{h}_{K,1}^{H}\mathbf{\Phi}_{K}\mathbf{R}_{K}\mathbf{\Phi}_{K}^{H}\mathbf{h}_{K,1}},\frac{1}{\mathbf{h}_{d,1}^{H}\mathbf{h}_{d,1}},\ldots,\frac{1}{\mathbf{h}_{d,U_{d}}^{H}\mathbf{h}_{d,U_{d}}}\Big)\mathbf{e}_{k}=\frac{\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}{\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}, \tag{12}\] and thus (8) becomes \[\phi_{k,i}=-\angle\Big(\frac{1}{\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}\,h_{k,1,i}^{*}\sum_{\ell=1}^{N}R_{k,i,\ell}e^{-j\phi_{k,\ell}}h_{k,1,\ell}\Big)=-\angle\Big(h_{k,1,i}^{*}\sum_{\ell=1}^{N}R_{k,i,\ell}e^{-j\phi_{k,\ell}}h_{k,1,\ell}\Big),\quad k\in\mathcal{K},\ i\in\mathcal{N}, \tag{13}\] where (13) results from the fact that \(\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}\) is a real and positive number.

**Corollary 1**.: _For the special case that the channel vectors between the BS and each of the elements of the \(k^{\rm th}\) RIS are independent, i.e., \(\mathbf{R}_{k}=\mathbf{I}\), employing the BS-UE-ZF approach, the SINR of the blocked UE connected to the \(k^{\rm th}\) RIS is maximized by considering random phase shifts for the RIS elements, as \(M\longrightarrow\infty\)._

Proof.: By substituting \(\mathbf{R}_{k}\) with \(\mathbf{I}\) in (9), we get \(\phi_{k,i}=\phi_{k,i}\ \forall k,i\), meaning that \(\phi_{k,i}\) can be randomly chosen.
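Equation (9) defines the phases only implicitly. A simple way to solve such a system numerically is fixed-point iteration; the sketch below is our own illustration (convergence is assumed rather than proven), and it also makes Corollary 1 visible: with \(\mathbf{R}_{k}=\mathbf{I}\), every phase vector is a fixed point.

```python
# Fixed-point iteration for the implicit phase-shift system (9); an
# illustrative sketch, not code from the letter.
import numpy as np

def solve_phases(R, h, iters=500):
    """R: (N, N) BS-RIS correlation matrix; h: (N,) RIS-UE channel."""
    phi = np.zeros(len(h))
    for _ in range(iters):
        s = R @ (np.exp(-1j * phi) * h)      # sum_l R_{k,i,l} e^{-j phi_l} h_l
        phi = -np.angle(np.conj(h) * s)      # update per (9)
    return phi

# With R = I the update returns phi unchanged (any phases solve (9)),
# which is exactly the statement of Corollary 1.
```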
Therefore, as \(M\longrightarrow\infty\) we have

\[\mathbf{H}_{k}^{H}\mathbf{Q}_{1}^{H}(\mathbf{Q}_{1}\mathbf{Q}_{1}^{H})^{-1}\mathbf{e}_{k}=\left[\mathbf{0}\ \ldots\ \mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}\ \ldots\ \mathbf{0}\right]\times\mathrm{diag}\Big{(}\frac{1}{\mathbf{h}_{1,1}^{H}\mathbf{\Phi}_{1}\mathbf{R}_{1}\mathbf{\Phi}_{1}^{H}\mathbf{h}_{1,1}},\ldots,\frac{1}{\mathbf{h}_{K,1}^{H}\mathbf{\Phi}_{K}\mathbf{R}_{K}\mathbf{\Phi}_{K}^{H}\mathbf{h}_{K,1}},\frac{1}{\mathbf{h}_{d,1}^{H}\mathbf{h}_{d,1}},\ldots,\frac{1}{\mathbf{h}_{d,U_{d}}^{H}\mathbf{h}_{d,U_{d}}}\Big{)}\mathbf{e}_{k}=\frac{\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}{\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}, \tag{12}\]

and thus (8) becomes

\[\phi_{k,i}=-\sphericalangle\Big{(}\frac{1}{\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}}h_{k,1,i}^{*}\sum_{\ell=1}^{N}R_{k,i,\ell}e^{-j\phi_{k,\ell}}h_{k,1,\ell}\Big{)}=-\sphericalangle(h_{k,1,i}^{*}\sum_{\ell=1}^{N}R_{k,i,\ell}e^{-j\phi_{k,\ell}}h_{k,1,\ell}),\ \ k\in\mathcal{K},i\in\mathcal{N}, \tag{13}\]

where the second equality in (13) results from the fact that \(\mathbf{h}_{k,1}^{H}\mathbf{\Phi}_{k}\mathbf{R}_{k}\mathbf{\Phi}_{k}^{H}\mathbf{h}_{k,1}\) is a real and positive number.

**Corollary 1**.: _For the special case that the channel vectors between the BS and each of the elements of the \(k^{\rm th}\) RIS are independent, i.e., \(\mathbf{R}_{k}=\mathbf{I}\), employing the BS-UE-ZF approach, the SINR of the blocked UE connected to the \(k^{\rm th}\) RIS is maximized by considering random phase shifts for the RIS elements, as \(M\longrightarrow\infty\)._

Proof.: By substituting \(\mathbf{R}_{k}=\mathbf{I}\) in (9), the equation system reduces to \(\phi_{k,i}=-\sphericalangle(|h_{k,1,i}|^{2}e^{-j\phi_{k,i}})=\phi_{k,i}\ \forall k,i\), i.e., every choice of phase shifts satisfies it, meaning that \(\phi_{k,i}\) can be randomly chosen.

### _BS-RIS-ZF Beamforming_

In this section, we consider the case that one UE is connected to each RIS. We design a novel approach, i.e., BS-RIS-ZF, in which the beamforming vectors are derived to enforce the interference at the RISs and at the direct UEs to zero2. Thus, the beamforming vectors should satisfy the following conditions for \(k\in\mathcal{K}\) and \(u\in\{1,\ldots,U_{d}\}\): Footnote 2: Note that the BS-RIS-ZF beamforming approach can also be implemented in the general scenario; but in this case, the phase shifts of the RISs must be designed to cancel the interference, which is out of the scope of this letter.

\[\mathbf{H}_{k}^{H}\mathbf{w}_{b,k,1}=\mathbf{1},\ \mathbf{H}_{m}^{H}\mathbf{w}_{b,k,1}=\mathbf{0}\ \forall m\neq k,\ \mathbf{h}_{d,u}^{H}\mathbf{w}_{b,k,1}=0,\ \mathbf{h}_{d,u}^{H}\mathbf{w}_{d,u}=1,\ \mathbf{H}_{k}^{H}\mathbf{w}_{d,u}=\mathbf{0},\ \mathbf{h}_{d,n}^{H}\mathbf{w}_{d,u}=0\ \forall n\neq u. \tag{14}\]
By defining \(\mathbf{Q}_{2}\triangleq\left[\mathbf{H}_{1}\ \ldots\ \mathbf{H}_{K}\ \ \mathbf{h}_{d,1}\ \ldots\ \mathbf{h}_{d,U_{d}}\right]^{H}\), the BS-RIS-ZF beamforming matrix can be written as

\[\mathbf{W}_{\rm ZF}^{\prime}=\mathbf{Q}_{2}^{H}(\mathbf{Q}_{2}\mathbf{Q}_{2}^{H})^{-1}\mathbf{\Gamma}, \tag{15}\]

where

\[\mathbf{\Gamma}=\begin{bmatrix}\mathrm{blkdiag}(\mathbf{1}_{1\times N},\ldots,\mathbf{1}_{1\times N})&\mathbf{0}_{K\times U_{d}}\\ \mathbf{0}_{U_{d}\times KN}&\mathbf{I}_{U_{d}}\end{bmatrix}^{T},\]

and \(\mathbf{1}_{1\times N}\) denotes an all-ones row vector of length \(N\). Thus, the SINR of the blocked UE connected to the \(k^{\rm th}\) RIS is

\[\mathrm{SINR}_{b,k,1}=\frac{\big{|}\sum_{i=1}^{N}e^{j\phi_{k,i}}h_{k,1,i}^{*}[\mathbf{H}_{k}^{H}\mathbf{Q}_{2}^{H}(\mathbf{Q}_{2}\mathbf{Q}_{2}^{H})^{-1}\mathbf{\Gamma}\,\mathbf{e}_{k}]_{i}\big{|}^{2}}{\sigma_{k}^{2}}, \tag{16}\]

where \(\sigma_{k}^{2}\) denotes the noise power at the UE, and hence the optimal phase shifts maximizing (16) are

\[\phi_{k,i}=-\sphericalangle(h_{k,1,i}^{*}[\mathbf{H}_{k}^{H}\mathbf{Q}_{2}^{H}(\mathbf{Q}_{2}\mathbf{Q}_{2}^{H})^{-1}\mathbf{\Gamma}\,\mathbf{e}_{k}]_{i}),\ k\in\mathcal{K},\ i\in\mathcal{N}. \tag{17}\]

**Proposition 2**.: _Considering the BS-RIS-ZF beamforming in (15) and the asymptotic regime where \(M\longrightarrow\infty\), the optimal phase shifts of the elements of the \(k^{\rm th}\) RIS are_

\[\phi_{k,i}=-\sphericalangle(h_{k,1,i}^{*}f_{k,i}),\ k\in\mathcal{K},\ i\in\mathcal{N}, \tag{18}\]

_and the maximum value of \(\mathrm{SINR}_{b,k,1}\) is_

\[\mathrm{SINR}_{b,k,1}^{*}=\frac{1}{\sigma_{k}^{2}}\Big{(}\sum_{i=1}^{N}|f_{k,i}||h_{k,1,i}|\Big{)}^{2}, \tag{19}\]

_where \(f_{k,i}\) is the \(i^{\mathrm{th}}\) element of the vector_

\[\mathbf{f}_{k}=\left[\mathbf{0}_{N\times(k-1)N}\ \mathbf{R}_{k}\ \mathbf{0}_{N\times((K-k)N+U_{d})}\right]\times\mathrm{blkdiag}(\mathbf{R}_{1}^{-1},\ldots,\mathbf{R}_{K}^{-1},\mathbf{I}_{U_{d}})\,\mathbf{\Gamma}\,\mathbf{e}_{k},\]

_and \(\mathrm{blkdiag}(\cdot)\) returns a block diagonal matrix constructed from its arguments._

Proof.: As \(M\longrightarrow\infty\), employing (11), we have

\[(\mathbf{Q}_{2}\mathbf{Q}_{2}^{H})^{-1}=\mathrm{blkdiag}(\mathbf{R}_{1}^{-1},\ldots,\mathbf{R}_{K}^{-1},\mathbf{I}_{U_{d}}), \tag{20}\]

and

\[\mathbf{H}_{k}^{H}\mathbf{Q}_{2}^{H}=\left[\mathbf{0}_{N\times(k-1)N}\quad\mathbf{R}_{k}\quad\mathbf{0}_{N\times((K-k)N+U_{d})}\right]. \tag{21}\]

Thus, substituting (20) and (21) in (17), the asymptotic expressions for the optimal phase shifts are as in (18). Then, substituting the phase shifts obtained by (18), together with (20) and (21), in (16), the asymptotic value of the maximum \(\mathrm{SINR}_{b,k,1}\) is obtained as in (19).

**Analysis of the rank of matrix \(\mathbf{Q}_{2}\):** Since the performance of the BS-RIS-ZF approach can be affected by the rank of the matrix \(\mathbf{Q}_{2}\), we analyze the rank of this matrix in this section. Assuming that \(\mathbf{H}_{k}=\mathbf{F}_{k}\mathbf{D}_{k}\), where \(\mathbf{F}_{k}\) is an \(M\times N\) matrix with i.i.d. normal random variable elements and \(\mathbf{D}_{k}=\mathbf{R}_{k}^{1/2}\), the matrix \(\mathbf{Q}_{2}\) can be rewritten as \(\left[\mathbf{F}_{1}\mathbf{D}_{1}\ \ldots\ \mathbf{F}_{K}\mathbf{D}_{K}\ \mathbf{H}_{d}\right]^{H}\), where \(\mathbf{H}_{d}=[\mathbf{h}_{d,1}\ \ldots\ \mathbf{h}_{d,U_{d}}]\). Considering the fact that the elements of the random matrices \(\mathbf{F}_{k},\ k\in\mathcal{K}\), and \(\mathbf{H}_{d}\) are independent of each other, we conclude that the rank of the matrix \(\mathbf{Q}_{2}\) is equal to \(\sum_{k=1}^{K}\mathrm{rank}(\mathbf{D}_{k}^{H}\mathbf{F}_{k}^{H})+\mathrm{rank}(\mathbf{H}_{d})\). Also, from [20], we have \(\mathrm{rank}(\mathbf{D}_{k}^{H}\mathbf{F}_{k}^{H})\leqslant\min\{\mathrm{rank}(\mathbf{D}_{k}^{H}),\mathrm{rank}(\mathbf{F}_{k}^{H})\}\), and \(\mathrm{rank}(\mathbf{F}_{k}^{H})\leqslant\min\{M,N\}\). Hence, assuming that \(N\leqslant M\), we get \(\mathrm{rank}(\mathbf{D}_{k}^{H}\mathbf{F}_{k}^{H})\leqslant\mathrm{rank}(\mathbf{D}_{k}^{H})\) and thus \(\mathrm{rank}(\mathbf{Q}_{2})\leqslant\sum_{k=1}^{K}\mathrm{rank}(\mathbf{D}_{k}^{H})+\mathrm{rank}(\mathbf{H}_{d})\).
Moreover, the elements of \(\mathbf{H}_{d}\) are independently distributed and thus \(\mathrm{rank}(\mathbf{H}_{d})=\min\{M,U_{d}\}\). Therefore, assuming that \(U_{d}<M\), we have \(\mathrm{rank}(\mathbf{Q}_{2})\leqslant\sum_{k=1}^{K}\mathrm{rank}(\mathbf{D}_{k}^{H})+U_{d}\). Consequently, we conclude that the correlation of the BS-to-RIS channels can restrict the rank of the matrix \(\mathbf{Q}_{2}\).

### _Computational Complexity_

The number of multiplications required to compute the beamforming vectors and the RIS phase shifts in the BS-UE-ZF approach is equal to \(U_{b}((U_{b}+U_{d})^{3}+2M(U_{b}+U_{d})^{2}+MN(U_{b}+U_{d})+MN^{2}+MN+1)\), while for the BS-RIS-ZF method it is equal to \(U_{b}((NU_{b}+U_{d})^{3}+(2M+U_{b}+K+U_{d})(NU_{b}+U_{d})^{2}+MN(NU_{b}+U_{d})+1)\). We observe that the complexity of both approaches with respect to \(M\) is of the order of \(\mathcal{O}(M)\). Also, their complexities with respect to \(N\) are of the order of \(\mathcal{O}(N)\) and \(\mathcal{O}(N^{3})\), respectively. Therefore, the complexity of the BS-RIS-ZF approach is more sensitive to \(N\).

## IV Simulation Results

In this section, we conduct simulations to verify the performance of the proposed beamforming and phase shift design approaches. In this regard, we use the channel estimation approach proposed in [19] and compare the results with the case of perfect channel estimation. The RIS element correlation model is adopted from [21]. The minimum distance between the elements of each RIS is equal to \(d=\lambda\) and \(d=\lambda/4\) for the cases of i.i.d. and correlated channels, respectively, where \(\lambda\) is the wavelength. Also, the area of each RIS element is equal to \(A=d^{2}\), and \(\mu\lambda^{2}=-75\) dB, where \(\mu\) is the average intensity attenuation [21]. Moreover, the carrier frequency is \(f=1800\) MHz and the power spectral density of the AWGN at the UEs is equal to \(-174\ \mathrm{dBm}/\mathrm{Hz}\).

In Fig. 2, we illustrate the average sum rate versus the number of BS antennas for various phase shift design approaches and numbers of RIS elements in the BS-UE-ZF beamforming scenario, considering both i.i.d. and correlated channels. It can be noticed that in both cases increasing the number of elements at the RISs improves the average sum rate, and \(N=1\) leads to the lowest average sum rate. In Fig. 2(a), we observe that for i.i.d. channels, by increasing the number of BS antennas, the average sum rate of random phase shifts tends to the average sum rate of the optimal ones (see Corollary 1). In Fig. 2(b), we observe that in the case of correlated channels, by increasing the number of BS antennas, the average sum rate of the asymptotic phase shifts converges to the average sum rate of the optimal phase shifts.

Fig. 2: Average sum rate exploiting the BS-UE-ZF approach vs. the number of the BS antennas (\(M\)), \(K=4\), \(U_{d}=2\); (a): i.i.d. channels, (b): correlated channels.

Moreover, both of these phase shift design approaches achieve a considerably higher average sum rate compared to random phase shifts. In Fig. 3, we plot the average sum rate versus the number of BS antennas for the various phase shift design approaches in the BS-RIS-ZF beamforming scenario, and also depict the asymptotic average sum rate obtained from the asymptotic SINRs. We observe that for both i.i.d. and correlated channels, the rates of the asymptotic phase shifts and the asymptotic SINRs accurately track the rate of the optimal phase shifts.
Moreover, it can be observed that in the case of i.i.d. channels, the average sum rates start to increase once the condition \(M\geqslant KN+U_{d}\) is satisfied, and channel correlation does not have a restrictive impact on the point at which the rates start to rise. Furthermore, in Fig. 2 and Fig. 3, we illustrate the curves corresponding to the case with channel estimation error (the channel estimation approach in [19] is used). We observe that for both i.i.d. and correlated channels, there is no significant reduction in the sum rate of the BS-UE-ZF and BS-RIS-ZF approaches when there is some error in the channel estimation.

## V Conclusion

In this letter, we considered a multi-RIS, multi-user massive MIMO system and investigated the SINR maximization problem in the asymptotic scenario where the number of BS antennas tends to infinity. We examined two ZF beamforming approaches, i.e., BS-UE-ZF and BS-RIS-ZF, which null the interference at the UEs and at the RISs, respectively. For each of the proposed methods, we obtained the optimal phase shifts of the elements of the RISs that maximize the SINR of the UEs. Considering the BS-UE-ZF beamforming approach, we showed that when the channels of the RIS elements are independent, random phase shifts achieve the maximum SINR for an asymptotically large number of BS antennas. For the BS-UE-ZF beamforming method, the simulation results showed that the asymptotic expressions of the RIS phase shifts achieve the rate of the optimal phase shifts, even for a small number of BS antennas.
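As a quick numerical sanity check of Proposition 1 and Corollary 1, the following minimal NumPy sketch iterates the fixed-point equation (9) and compares the resulting array gain \(\mathbf{h}^{H}\mathbf{\Phi}\mathbf{R}\mathbf{\Phi}^{H}\mathbf{h}\) (the quantity appearing in (12)) against random phase shifts. The exponential correlation model used here for \(\mathbf{R}\) is our own assumption, standing in for the model of [21].

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_corr(N, rho):
    # Exponential correlation model (an assumption, in place of [21]);
    # rho = 0 recovers R = I, the i.i.d. case of Corollary 1.
    idx = np.arange(N)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def objective(phi, h, R):
    # h^H Phi R Phi^H h with Phi = diag(exp(j*phi)); real for Hermitian R.
    a = np.conj(h) * np.exp(1j * phi)
    return np.real(a @ R @ np.conj(a))

def fixed_point_phases(h, R, iters=200):
    # Iterate Eq. (9): phi_i <- -angle(h_i^* sum_l R_{il} e^{-j phi_l} h_l).
    phi = rng.uniform(0.0, 2.0 * np.pi, h.size)
    for _ in range(iters):
        phi = -np.angle(np.conj(h) * (R @ (np.exp(-1j * phi) * h)))
    return phi

N = 64
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
for rho in (0.0, 0.95):
    R = exp_corr(N, rho)
    phi_rand = rng.uniform(0.0, 2.0 * np.pi, N)
    phi_fp = fixed_point_phases(h, R)
    print(f"rho={rho}: random={objective(phi_rand, h, R):.2f}, "
          f"fixed point={objective(phi_fp, h, R):.2f}")
```

For \(\rho=0\) the two values coincide, since the objective is independent of the phases, while for strongly correlated channels the fixed-point phases yield a visibly larger gain.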
2310.03425
Distinguishing Low-lying Vector Beauty-charm Meson via Polarization Analysis
To distinguish the low-lying vector beauty-charm meson, we systematically study the $B_c^*\to B_c+\gamma$, $B_c^*\to \ell+{\nu}_{\ell}$ and $B_c^{(*)}\to J/\psi+nh$ processes within effective theory by the helicity decomposition method. The significant difference of polarization asymmetry in $B_c^{(*)}\to J/\psi+nh$ indicates a general law in vector-to-vector and pseudoscalar-to-vector transition processes, which can be tested in current and future LHC experiments. In the end, we discuss the experiment search and discovery potential for the low-lying vector beauty-charm meson.
Yiqi Geng, Mingqi Cao, Ruilin Zhu
2023-10-05T10:04:15Z
http://arxiv.org/abs/2310.03425v1
# Distinguishing Low-lying Vector Beauty-charm Meson via Polarization Analysis ###### Abstract To distinguish the low-lying vector beauty-charm meson, we systematically study the \(B_{c}^{*}\to B_{c}+\gamma\), \(B_{c}^{*}\to\ell+\nu_{\ell}\) and \(B_{c}^{(*)}\to J/\psi+nh\) processes within effective theory by the helicity decomposition method. The significant difference of polarization asymmetry in \(B_{c}^{(*)}\to J/\psi+nh\) indicates a general law in vector-to-vector and pseudoscalar-to-vector transition processes, which can be tested in current and future LHC experiments. In the end, we discuss the experimental search and discovery potential for the low-lying vector beauty-charm meson. _Introduction._ Understanding Quantum Chromodynamics (QCD) color confinement is one of the fundamental goals of particle physics. The low-lying vector beauty-charm meson \(B_{c}^{*}\) is believed to exist in various quark models and in lattice simulations of first-principles QCD; however, it has not been identified in particle experiments. The \(B_{c}^{*}\) meson has become the last missing piece of the low-lying vector meson spectroscopy puzzle since the next-to-last vector ground state, \(B_{s}^{*}\), was probed by the CUSB-II detector in 1990 [1]. The long-standing difficulties in discovering the \(B_{c}^{*}\) meson come from two aspects. On the one hand, the \(B_{c}^{*}\) meson is produced in large quantities at hadron colliders, while observation of the major decay channel \(B_{c}^{*}\to B_{c}+\gamma\) is extremely difficult due to the low energy of the photon. A complete determination of both the emitted photon energy and the decay width has not been given in the literature. Recently, the second and third members of the beauty-charm meson family, i.e., the first radially excited pseudoscalar and vector states \(B_{c}(2S)\) and \(B_{c}^{*}(2S)\), have been discovered and confirmed by investigating the \(B_{c}+2\pi\) invariant mass spectrum in the CMS [2] and LHCb [3] experiments, after a previous pioneering observation of one excited peak at the ATLAS detector [4]. In both the CMS and LHCb experiments, two excited structures are observed, but reconstruction of the \(B_{c}^{*}(2S)\) state relies on the unobserved photon in \(B_{c}^{*}(2S)\to B_{c}^{*}(\to B_{c}+\gamma)+2\pi\), where the absolute mass satisfies \(m_{B_{c}^{*}(2S)}=m_{B_{c}^{*}(2S)}|_{rec}+E_{\gamma}\) with the missing photon energy \(E_{\gamma}=\Delta M_{b\bar{c}(1S)}=m_{B_{c}^{*}}-m_{B_{c}}\). Thus the probe of the \(B_{c}^{*}\) meson will affect the final determination of the \(B_{c}^{*}(2S)\) absolute mass. A precise study of the hyperfine mass splitting is also helpful for understanding the low-energy effective theory of QCD. On the other hand, the partial decay widths of weak decay channels such as \(B_{c}^{*}\to J/\psi+X_{H,L}\) are expected to be of the same order of magnitude as those of the ground-state beauty-charm meson decays \(B_{c}\to J/\psi+X_{H,L}\), with H(L) denoting hadrons (leptons). But the weak decay branching fractions of \(B_{c}^{*}\) are suppressed by a factor \(\Gamma(B_{c})/\Gamma(B_{c}^{*})\) with magnitude around \(10^{-4}\) to \(10^{-5}\). Using the data sample corresponding to an integrated luminosity of \(9fb^{-1}\), the LHCb collaboration has successfully measured 36463 \(B_{c}\to J/\psi+X_{H}\) weak decay events [5]. Thus one can expect several \(B_{c}^{*}\) weak decay events in the existing LHCb Run-2 data samples.
However, the reconstruction of \(B_{c}^{*}\) weak decay events is still challenging, because the small hyperfine mass splitting of the beauty-charm mesons leads to two relatively close peaks, one of which is very high due to the large number of \(B_{c}\) decay events. In this Letter, we present a polarization analysis of \(B_{c}^{*}\) electromagnetic and weak decays. We generalize the low-energy effective theory for heavy quarkonium electromagnetic interactions to the unequal quark mass case. The decay width of the radiative decay \(B_{c}^{*}\to B_{c}+\gamma\) is investigated in a model-independent way, where the dependence of the \(B_{c}^{*}\) electromagnetic decay width on the emitted photon energy is given. The weak decays \(B_{c}^{(*)}\to J/\psi+n\pi\) are studied in QCD effective theory. By fitting the \(B_{c}\to J/\psi+3\pi\) partial distribution data from the LHCb experiment, we can extract the spectral function of three pions. We find that the three-pion spectral function is dominated by two resonance contributions, i.e., the \(a_{1}(1260)\) and \(\pi_{2}(2005)\) states. By employing the helicity decomposition method, we find that the two kinds of channels, \(B_{c}^{*}\to J/\psi+n\pi\) and \(B_{c}\to J/\psi+n\pi\), have extremely different polarization behaviors as functions of the invariant mass of the \(n\pi\) system. The \(B_{c}^{*}\) meson can thus be distinguished in \(J/\psi+n\pi\) invariant mass distributions by introducing a new polarization observable and measuring its value in particle experiments at the LHC. _Radiative decay._ The lifetime of the vector beauty-charm meson \(B_{c}^{*}\) is much shorter than that of the ground pseudoscalar beauty-charm meson \(B_{c}\), since the \(B_{c}^{*}\) meson can radiate into the \(B_{c}\) meson with several tens of \(MeV\) of phase space, while the \(B_{c}\) meson has to decay weakly. For the transition of doubly heavy quark mesons, potential nonrelativistic QCD (pNRQCD) is a powerful model-independent effective theory. Its Lagrangian can be obtained from QCD by integrating out quark and gluon modes with momenta and energies of the order of the heavy quark mass and the heavy quark relative momentum [6]. In the pNRQCD effective theory, two fields \(S=S(r,R,t)\) and \(O=O(r,R,t)\), denoting the color singlet and octet quark-antiquark states respectively, are introduced. \(r\) is the heavy quark relative coordinate, while \(R\) is the center-of-mass coordinate. In the equal quark mass case, the pNRQCD Lagrangian relevant to describe the magnetic dipole transition at order \(E_{\gamma}^{3}v^{2}/m^{2}\) is systematically established in Ref. [7], where the radiative decay width is obtained as \(\Gamma_{J/\psi\rightarrow\eta_{c}+\gamma}=(1.5\pm 1.0)keV\), in excellent agreement with experimental data.
We generalize the pNRQCD Lagrangian to the unequal quark mass case, and then the effective Lagrangian at order \(E_{\gamma}^{3}v^{2}/m^{2}\) can be written as

\[\mathcal{L}_{\gamma\mathrm{pNRQCD}}=\int d^{3}r\,\mathrm{Tr}\Big{[}e\frac{e_{Q}-e_{Q^{\prime}}}{2}V_{A}^{\mathrm{em}}\,\mathrm{S}^{\dagger}\mathbf{r}\cdot\mathbf{E}^{\mathrm{em}}\mathrm{S}+e\frac{e_{Q}m_{Q^{\prime}}-e_{Q^{\prime}}m_{Q}}{4m_{Q}m_{Q^{\prime}}}\Big{[}V_{S}^{\frac{\sigma\cdot B}{m}}\left\{\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot\mathbf{B}^{\mathrm{em}}\right\}\mathrm{S}+\frac{1}{8}V_{S}^{(\mathbf{r}\cdot\mathbf{\nabla})^{2}\frac{\sigma\cdot B}{m}}\left\{\mathrm{S}^{\dagger},\mathbf{r}^{i}\mathbf{r}^{j}\left(\mathbf{\nabla}^{i}\mathbf{\nabla}^{j}\mathbf{\sigma}\cdot\mathbf{B}^{\mathrm{em}}\right)\right\}\mathrm{S}+V_{O}^{\frac{\sigma\cdot B}{m}}\left\{\mathrm{O}^{\dagger},\mathbf{\sigma}\cdot\mathbf{B}^{\mathrm{em}}\right\}\mathrm{O}\Big{]}+e\frac{e_{Q}m_{Q^{\prime}}^{2}-e_{Q^{\prime}}m_{Q}^{2}}{32m_{Q}^{2}m_{Q^{\prime}}^{2}}\Big{[}4\frac{V_{S}^{\frac{\sigma\cdot B}{m^{2}}}}{r}\left\{\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot\mathbf{B}^{\mathrm{em}}\right\}\mathrm{S}+4\frac{V_{S}^{\frac{\sigma\cdot(\hat{r}\times(\hat{r}\times B))}{m^{2}}}}{r}\left\{\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot[\hat{\mathbf{r}}\times(\hat{\mathbf{r}}\times\mathbf{B}^{\mathrm{em}})]\right\}\mathrm{S}-V_{S}^{\frac{\sigma\cdot(\nabla\times E)}{m^{2}}}\left[\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot[-i\mathbf{\nabla}\times,\mathbf{E}^{\mathrm{em}}]\right]\mathrm{S}-V_{S}^{\frac{\sigma\cdot(\nabla_{r}\times(r\cdot\nabla)E)}{m^{2}}}\left[\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot[-i\mathbf{\nabla}_{r}\times,\mathbf{r}^{i}\left(\mathbf{\nabla}^{i}\mathbf{E}^{\mathrm{em}}\right)]\right]\mathrm{S}\Big{]}+e\frac{e_{Q}m_{Q^{\prime}}^{3}-e_{Q^{\prime}}m_{Q}^{3}}{8m_{Q}^{3}m_{Q^{\prime}}^{3}}\Big{[}V_{S}^{\frac{\nabla_{r}^{2}\sigma\cdot B}{m^{3}}}\left\{\mathrm{S}^{\dagger},\mathbf{\sigma}\cdot\mathbf{B}^{\mathrm{em}}\right\}\nabla_{r}^{2}\,\mathrm{S}+V_{S}^{\frac{(\nabla_{r}\cdot\sigma)(\nabla_{r}\cdot B)}{m^{3}}}\left\{\mathrm{S}^{\dagger},\mathbf{\sigma}^{i}\mathbf{B}^{\mathrm{em}\,j}\right\}\mathbf{\nabla}_{r}^{i}\mathbf{\nabla}_{r}^{j}\,\mathrm{S}\Big{]}\Big{]}, \tag{1}\]

where \(Q\) and \(Q^{\prime}\) denote the two different heavy quarks. The final result for the decay width of \(B_{c}^{*}(p)\to B_{c}(p^{\prime})+\gamma(k)\) is

\[\Gamma_{B_{c}^{*}\to B_{c}+\gamma}=\frac{\alpha\,(e_{Q}m_{Q^{\prime}}-e_{Q^{\prime}}m_{Q})^{2}E_{\gamma}^{3}}{3m_{Q}^{2}m_{Q^{\prime}}^{2}}V_{S}^{\frac{\sigma\cdot B}{m}}\left(1-\frac{E_{\gamma}}{m_{B_{c}^{*}}}\right), \tag{2}\]

where the photon energy is expressed as \(E_{\gamma}=\frac{m_{B_{c}^{*}}^{2}-m_{B_{c}}^{2}}{2m_{B_{c}^{*}}}\). The matching coefficient is known at one loop, \(V_{S}^{\frac{\sigma\cdot B}{m}}=1+C_{F}\frac{\alpha_{s}}{2\pi}\) [8]. Other higher-order pNRQCD operators are not considered here; however, one expects that their contributions are small, similar to the case of bottomonium. We choose \(Q=b\) and \(Q^{\prime}=c\) for the beauty and charm quarks in the following. The total decay width of the vector \(B_{c}^{*}\) meson can be approximated as \(\Gamma\simeq\Gamma_{B_{c}^{*}\to B_{c}+\gamma}\), since other weak decay channels carry a suppression factor \(\Gamma(B_{c})/\Gamma(B_{c}^{*})\) with magnitude around \(10^{-4}\) to \(10^{-5}\).
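As a rough numerical illustration of Eq. (2), the following sketch evaluates the width. It assumes the pole masses and the lattice \(B_{c}^{*}\) mass quoted in the next paragraph, a PDG-like \(B_{c}\) mass, and \(\alpha_{s}\simeq 0.26\) in the one-loop matching coefficient; these inputs are our own choices for illustration.

```python
import math

alpha_em = 1.0 / 137.036          # fine-structure constant
alpha_s, C_F = 0.26, 4.0 / 3.0    # assumed alpha_s value at a soft scale
m_b, m_c = 4.8, 1.6               # heavy-quark pole masses, GeV
e_b, e_c = -1.0 / 3.0, 2.0 / 3.0  # quark electric charges
m_Bc_star = 6.331                 # lattice B_c* mass, GeV [9]
m_Bc = 6.2745                     # B_c mass, GeV (assumed PDG-like input)

E_gamma = (m_Bc_star**2 - m_Bc**2) / (2.0 * m_Bc_star)  # photon energy
V_S = 1.0 + C_F * alpha_s / (2.0 * math.pi)             # one-loop matching [8]

# Eq. (2) with Q = b, Q' = c; result in GeV, printed in eV.
gamma = (alpha_em * (e_b * m_c - e_c * m_b) ** 2 * E_gamma**3
         / (3.0 * m_b**2 * m_c**2) * V_S * (1.0 - E_gamma / m_Bc_star))
print(f"E_gamma = {1e3 * E_gamma:.1f} MeV, Gamma = {1e9 * gamma:.0f} eV")
```

With these inputs one obtains \(E_{\gamma}\approx 56\) MeV and a width of order \(100\) eV, in the ballpark of the \(114\) eV estimate quoted below; the spread in the pole masses and in the hyperfine splitting dominates the uncertainty.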
Considering that the heavy quark pole masses are usually chosen as \(m_{b}=4.8\pm 0.2GeV\) and \(m_{c}=1.6\pm 0.1GeV\), the total decay width of the vector \(B_{c}^{*}\) meson as a function of the emitted photon energy, or equivalently of the hyperfine mass splitting, is plotted in Fig. 1.

Figure 1: Total decay width of the low-lying vector \(B_{c}^{*}\) meson as a function of the emitted photon energy.

If we choose the vector \(B_{c}^{*}\) meson mass as \(6331(4)(6)MeV\) from the Lattice QCD simulation [9], the total decay width of the vector \(B_{c}^{*}\) meson is estimated as \(\Gamma=114^{+60}_{-42}eV\), where the large uncertainty comes from the sensitivity of the decay width to the meson mass. One should note that there are already several theoretical predictions in the literature [10; 11; 12; 13; 14]; however, a model-independent investigation is first given in our paper. In the calculation, only the two polarization states of the vector \(B_{c}^{*}\) meson with \(|J=1,\lambda=\pm 1\rangle\) contribute, and they contribute equally in the radiative \(B_{c}^{*}\) meson decays. In the rest frame of the final \(B_{c}\) meson, the angular momentum projection is identical to the vector \(B_{c}^{*}\) meson helicity \(\lambda\). This phenomenon can be understood from the conservation of angular momentum and parity. Since the photon has only two transverse polarization states, the initial \(B_{c}^{*}\) meson with \(|J=1,\lambda=0\rangle\) cannot emit a photon along the momentum direction, and this configuration is thus forbidden in the radiative decay. A right-hand circularly polarized photon is emitted when the initial \(B_{c}^{*}\) meson with \(|J=1,\lambda=1\rangle\) decays into the zero-spin \(B_{c}\) meson, while a left-hand circularly polarized photon is emitted when the initial \(B_{c}^{*}\) meson with \(|J=1,\lambda=-1\rangle\) decays into the zero-spin \(B_{c}\) meson. _Weak Decay._ In the radiative decay \(B_{c}^{*}\to B_{c}+\gamma\), only the transverse polarizations of the \(B_{c}^{*}\) meson with \(|J=1,\lambda=\pm 1\rangle\) contribute, while the longitudinal polarization of the \(B_{c}^{*}\) meson with \(|J=1,\lambda=0\rangle\) decouples. In the weak decays, both the transversely and longitudinally polarized \(B_{c}^{*}\) mesons come in and contribute to the Feynman amplitudes. The pure leptonic weak decays \(B_{c}^{*}\to\ell+\nu_{\ell}\) have been studied up to three-loop accuracy in Refs. [15; 16], where the branching ratios are found to be of magnitude around \(10^{-6}\). Focusing on the polarization decomposition, the transverse and longitudinal \(B_{c}^{*}\) leptonic decay widths are

\[\Gamma(B_{c}^{*}(\lambda=\pm 1)\to\ell\nu_{\ell})=\frac{\left|V_{cb}\right|^{2}}{12\pi}G_{F}^{2}f_{B_{c}^{*}}^{2}\left(1-\frac{m_{\ell}^{2}}{m_{B_{c}^{*}}^{2}}\right)^{2}m_{B_{c}^{*}}^{3},\qquad\Gamma(B_{c}^{*}(\lambda=0)\to\ell\nu_{\ell})=\frac{m_{\ell}^{2}}{2m_{B_{c}^{*}}^{2}}\,\Gamma(B_{c}^{*}(\lambda=\pm 1)\to\ell\nu_{\ell}), \tag{3}\]

where the factor \((m_{\ell}/m_{B_{c}^{*}})^{2}\) in the longitudinal polarization formula represents a helicity suppression, which is just a consequence of angular momentum conservation. The weak decays \(B_{c}^{*}\to J/\psi+X_{H,L}\) are also good channels to probe the \(B_{c}^{*}\) meson at hadron colliders.
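For orientation, inserting PDG lepton masses and the lattice \(B_{c}^{*}\) mass quoted above into the suppression factor of Eq. (3) gives (our own numerical evaluation)

\[\frac{\Gamma(B_{c}^{*}(\lambda=0)\to\ell\nu_{\ell})}{\Gamma(B_{c}^{*}(\lambda=\pm 1)\to\ell\nu_{\ell})}=\frac{m_{\ell}^{2}}{2m_{B_{c}^{*}}^{2}}\approx 3.9\times 10^{-2}\ (\ell=\tau),\qquad 1.4\times 10^{-4}\ (\ell=\mu),\]

so the longitudinal mode is essentially invisible except in the \(\tau\) channel.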
The vector current form factors of \(B_{c}^{*}\) into \(J/\psi\) can be defined as

\[\left\langle J/\psi\left(\epsilon^{\prime},p^{\prime}\right)|\bar{b}\gamma_{\mu}c|B_{c}^{*}\left(\epsilon,p\right)\right\rangle=-\left(\epsilon\cdot\epsilon^{\prime*}\right)\left[P_{\mu}V_{1}\left(q^{2}\right)-q_{\mu}V_{2}\left(q^{2}\right)\right]-\left(\epsilon\cdot q\right)\epsilon_{\mu}^{\prime*}V_{3}\left(q^{2}\right)+\left(\epsilon^{\prime*}\cdot q\right)\epsilon_{\mu}V_{4}\left(q^{2}\right)+\frac{\left(\epsilon\cdot q\right)\left(\epsilon^{\prime*}\cdot q\right)}{M^{2}-M^{\prime 2}}\left[\left(P_{\mu}-\frac{M^{2}-M^{\prime 2}}{q^{2}}q_{\mu}\right)V_{5}\left(q^{2}\right)+\frac{M^{2}-M^{\prime 2}}{q^{2}}q_{\mu}V_{6}\left(q^{2}\right)\right], \tag{4}\]

where \(P=p+p^{\prime}\) and \(q=p-p^{\prime}\). \(M\) and \(M^{\prime}\) are the masses of \(B_{c}^{*}\) and \(J/\psi\), respectively. Similarly, the axial-vector current form factors of \(B_{c}^{*}\) into \(J/\psi\) can be defined as

\[\left\langle J/\psi\left(\epsilon^{\prime},p^{\prime}\right)|\bar{b}\gamma_{\mu}\gamma_{5}c|B_{c}^{*}\left(\epsilon,p\right)\right\rangle=i\varepsilon_{\mu\nu\alpha\beta}\epsilon^{\alpha}\epsilon^{\prime*\beta}\left[P^{\nu}A_{1}\left(q^{2}\right)+q^{\nu}A_{2}\left(q^{2}\right)\right]+\frac{i\varepsilon_{\mu\nu\alpha\beta}P^{\alpha}q^{\beta}}{M^{2}-M^{\prime 2}}\left[\left(\epsilon^{\prime*}\cdot q\right)\epsilon^{\nu}A_{3}\left(q^{2}\right)-\left(\epsilon\cdot q\right)\epsilon^{\prime*\nu}A_{4}\left(q^{2}\right)\right]. \tag{5}\]

The differential distribution for \(B_{c}^{(*)}\to J/\psi+nh\) can be decomposed as

\[\frac{d\Gamma(B_{c}^{(*)}\to J/\psi+nh)}{dq^{2}}=\sum_{\lambda_{i}}\frac{\left|V_{cb}\right|^{2}G_{F}^{2}a_{1}^{2}|\mathbf{p}^{\prime}|}{32\pi M^{2}}\Gamma_{J_{1}\lambda_{1}J_{2}\lambda_{2}\lambda_{nh}}, \tag{6}\]

where \(\Gamma_{J_{1}\lambda_{1}J_{2}\lambda_{2}\lambda_{nh}}\) is the helicity component with the initial meson angular momentum \(J_{1}\) and the \(J/\psi\) angular momentum \(J_{2}\). The \(J/\psi\) momentum is \(|\mathbf{p}^{\prime}|=((M^{2}+M^{\prime 2}-q^{2})^{2}/(4M^{2})-M^{\prime 2})^{1/2}\). Due to angular momentum conservation, we have the following nontrivial helicity components:

\[\Gamma_{11110}=2\left[V_{1}^{2}\left(\left(M-M^{\prime}\right)^{2}-q^{2}\right)\left(\left(M^{\prime}+M\right)^{2}-q^{2}\right)+\left(A_{1}\left(M^{2}-M^{\prime 2}\right)+A_{2}q^{2}\right)^{2}\right]\rho_{T}^{nh}(q^{2}), \tag{7}\]

\[\Gamma_{1111t}=2\left[A_{1}^{2}\left(-2M^{2}\left(M^{\prime 2}+q^{2}\right)+\left(M^{\prime 2}-q^{2}\right)^{2}+M^{4}\right)+\left(V_{1}\left(M^{\prime 2}-M^{2}\right)+q^{2}V_{2}\right)^{2}\right]\rho_{L}^{nh}(q^{2}), \tag{8}\]

while the other four nontrivial helicity components for the vector \(B_{c}^{*}\) decay are more complicated and are given in the Appendix. Due to symmetry, \(\lambda_{1,2}=1\) represents \(\lambda_{1,2}=\pm 1\).
For the pseudoscalar \(B_{c}\) decay, there are similar helicity components

\[\Gamma_{00100}=\frac{\rho_{T}^{nh}(q^{2})}{4M^{\prime 2}\left(M^{\prime}+M\right)^{2}}\left[-A_{2}^{\prime}\left(M^{4}-2M^{2}\left(M^{\prime 2}+q^{2}\right)+\left(M^{\prime 2}-q^{2}\right)^{2}\right)+A_{1}^{\prime}\left(M^{\prime}+M\right)^{2}\left(M^{2}-M^{\prime 2}-q^{2}\right)\right]^{2}, \tag{9}\]

\[\Gamma_{0010t}=\rho_{L}^{nh}(q^{2})\,A_{0}^{\prime 2}\left[-2M^{2}\left(M^{\prime 2}+q^{2}\right)+M^{4}+\left(M^{\prime 2}-q^{2}\right)^{2}\right], \tag{10}\]

\[\Gamma_{00111}=\frac{2q^{2}\rho_{T}^{nh}(q^{2})}{\left(M^{\prime}+M\right)^{2}}\left[A_{1}^{\prime 2}\left(M^{\prime}+M\right)^{4}+V^{\prime 2}\left(M^{4}-2M^{2}\left(M^{\prime 2}+q^{2}\right)+\left(M^{\prime 2}-q^{2}\right)^{2}\right)\right], \tag{11}\]

where the definitions of the \(B_{c}\to J/\psi\) form factors \(A_{i}^{\prime}(q^{2})\) and \(V^{\prime}(q^{2})\) can be found in Eqs. (2-3) of Ref. [17]. The above spectral functions \(\rho_{T,L}^{nh}(q^{2})\) are universal and can be defined as

\[\int\frac{d\Phi(W^{*}\to nh)}{2\pi}\epsilon_{\mu}^{nh}\epsilon_{\nu}^{*nh}=\left(q_{\mu}q_{\nu}-q^{2}g_{\mu\nu}\right)\rho_{T}^{nh}(q^{2})+q_{\mu}q_{\nu}\rho_{L}^{nh}(q^{2}). \tag{12}\]

In principle, the dimensionless spectral functions \(\rho_{T,L}^{nh}(q^{2})\) can be determined from nonperturbative calculations or from experimental data. The LHCb collaboration has studied the \(B_{c}^{+}\to J/\psi+\pi^{+}+\pi^{-}+\pi^{+}\) process and measured the \(3\pi\) invariant mass distribution in Ref. [18]. The polarization measurement of the \(J/\psi\) was not performed in this process; however, the \(3\pi\) distribution has a large peak around the \(a_{1}(1260)\) and a small peak around the \(\pi_{2}(2005)\), as shown in Fig. 2. Thus we can conclude that the spectral function \(\rho_{T}^{nh}(q^{2})\) dominates in \(B_{c}\to J/\psi+3\pi\). We use the following parametrization form for \(\rho_{T}^{3\pi}\):

\[\rho_{T}^{3\pi}(m^{2})=\frac{a}{2m}\Big{(}\frac{m^{2}-m_{\pi}^{2}e}{m^{2}}\Big{)}^{-2}(1-m^{2}f)\times\Big{[}\frac{1}{b^{2}/4+(m-m_{1})^{2}}+\frac{c}{d^{2}/4+(m-m_{2})^{2}}\Big{]}, \tag{13}\]

where \(m_{1}\) and \(m_{2}\) are the masses of the \(a_{1}(1260)\) and \(\pi_{2}(2005)\) resonances. Fitting the \(a_{1}(1260)\) and \(\pi_{2}(2005)\) peaks, the chi-square goodness of fit is \(\chi^{2}/dof=1.65\). The parameters are fitted as \(a=0.12GeV\), \(b=0.341GeV\), \(c=0.021\), \(d=0.256GeV\), \(e=-12.456\), and \(f=-0.069GeV^{-2}\). Therein \(b=0.341GeV\) and \(d=0.256GeV\) account for the decay widths of the \(a_{1}(1260)\) and \(\pi_{2}(2005)\), respectively. For future theoretical and experimental studies, this process can be employed to precisely measure the basic quantities of both the \(a_{1}(1260)\) and \(\pi_{2}(2005)\) states. Note that the value \(a=0.12GeV\) is obtained by considering the \(B_{c}\) hadroproduction cross section of around \(100nb\) at the LHC in Ref. [21]. Employing the \(\rho(770)\)-dominance model for \(\rho_{T}^{2\pi}\),

\[\rho_{T}^{2\pi}(m^{2})=\frac{a^{\prime}}{\Gamma_{\rho}^{2}/4+(m-m_{\rho})^{2}}, \tag{14}\]

where the parameter \(a^{\prime}=0.1198GeV^{2}\) can be extracted from the theoretical prediction of \(B_{c}\to J/\psi+\rho\) in Ref. [22].
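To make the fitted parametrization (13) easy to reuse, a minimal sketch follows. The resonance masses \(m_{1}\) and \(m_{2}\) are not listed among the fitted parameters above, so PDG-like values are assumed here for illustration.

```python
import numpy as np

# Fitted parameters quoted in the text (GeV where dimensionful).
a, b, c, d = 0.12, 0.341, 0.021, 0.256
e, f = -12.456, -0.069            # dimensionless, GeV^-2
m_pi = 0.1396                     # GeV
m1, m2 = 1.26, 2.005              # assumed a_1(1260) and pi_2(2005) masses, GeV

def rho_T_3pi(m):
    """Transverse 3-pi spectral function of Eq. (13); m = sqrt(q^2) in GeV."""
    prefactor = a / (2.0 * m) * ((m**2 - m_pi**2 * e) / m**2) ** (-2) * (1.0 - m**2 * f)
    peaks = 1.0 / (b**2 / 4.0 + (m - m1) ** 2) + c / (d**2 / 4.0 + (m - m2) ** 2)
    return prefactor * peaks

for m in np.arange(0.8, 2.6, 0.3):
    print(f"m = {m:.2f} GeV  ->  rho_T = {rho_T_3pi(m):.4f}")
```

The two Lorentzian terms reproduce the large \(a_{1}(1260)\) peak and the smaller \(\pi_{2}(2005)\) shoulder visible in Fig. 2.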
The nontrivial results for the various form factors at leading order can be calculated in NRQCD [23]:

\[V_{1}(y)=\frac{128\pi(z+1)^{5/2}\alpha_{s}\phi_{{}^{1}S_{0}[c\bar{c}]}^{(0)}(0)\phi_{{}^{1}S_{0}[c\bar{b}]}^{(0)}(0)}{3z^{3/2}m_{b}^{3}(y-z+1)^{2}(y+z-1)^{2}},\quad V_{3}(y)=2V_{1}(y)=2A_{1}(y),\quad V_{2}(y)=A_{2}(y)=\frac{1-z}{1+z}V_{1}(y),\quad V_{4}(y)=\frac{4z}{1+z}V_{1}(y), \tag{15}\]

where \(z=m_{c}/m_{b}\) and \(y=\sqrt{q^{2}/m_{b}^{2}}\). The HPQCD collaboration has performed the first lattice calculation of the \(B_{c}\to J/\psi\) form factors [24]. In previous works [22; 25; 26; 27], various \(B_{c}\to J/\psi\) form factors have been systematically studied by the NRQCD+HPQCD approach along with the BGL parametrization method [28]. Similarly, we can further determine the \(B_{c}^{*}\to J/\psi\) form factors after combining the lattice QCD results for the \(B_{c}\to J/\psi\) form factors and the NRQCD relations among the form factors. Higher-order calculations affect the NRQCD relations among the form factors only slightly. According to the LHCb reconstruction efficiency in Ref. [18], the event yields per every 50 MeV can also be obtained for both the \(B_{c}\to J/\psi+2\pi\) and \(B_{c}^{*}\to J/\psi+3\pi\) processes, which are plotted in Fig. 3. The helicity formula for the decay width is given in Eq. (6). One can further define the polarization asymmetry \(\alpha_{LT}\) as

\[\alpha_{LT}=\sum_{\lambda_{1},\lambda_{nh}}\frac{\Gamma_{J_{1}\lambda_{1}11\lambda_{nh}}-\Gamma_{J_{1}\lambda_{1}10\lambda_{nh}}}{\Gamma_{J_{1}\lambda_{1}11\lambda_{nh}}+\Gamma_{J_{1}\lambda_{1}10\lambda_{nh}}}, \tag{16}\]

where we only consider the transverse and longitudinal polarizations of the \(J/\psi\) because of the feasibility of their measurement in particle experiments.

Figure 2: The invariant mass distribution of \(m_{3\pi}\) in the \(B_{c}^{+}\to J/\psi\pi^{+}\pi^{-}\pi^{+}\) process. The red points are from the LHCb measurements based on the sample corresponding to an integrated luminosity of \(9fb^{-1}\) of data [18], while the blue line is our fitting result.

We plot the results for \(\alpha_{LT}\) of \(B_{c}^{*}\to J/\psi+n\pi\) and \(B_{c}\to J/\psi+n\pi\) with \(n=2\) or \(n=3\) in Fig. 4. In the calculation, we found that the \(J/\psi\) prefers to be longitudinally polarized in \(B_{c}\) decays, while it prefers to be transversely polarized in \(B_{c}^{*}\) decays, which indicates a general law of polarization asymmetry for pseudoscalar(vector)-meson to vector-meson decays. In the \(P\to V\) transition, the final vector meson \(V\) prefers to be longitudinally polarized and becomes 100% longitudinally polarized (\(\alpha_{LT}=-1\)) at the maximum recoil point (\(q^{2}=0\)). In the \(V\to V^{\prime}\) transition, the final vector meson \(V^{\prime}\) prefers to be transversely polarized and acquires a large polarization rate (\(0.5<\alpha_{LT}<1\)) at the maximum recoil point (\(q^{2}=0\)). This general law can also be tested in various processes such as \(B_{s}/B_{s}^{*}\to D_{s}^{*}+nh\), \(B/B^{*}\to D^{*}+nh\), \(D_{s}/D_{s}^{*}\to\phi+nh\) and \(D/D^{*}\to K^{*}+nh\). Going back to the identification of the \(B_{c}^{*}\) meson, one may expect around 20 \(B_{c}^{*}\to J/\psi+\pi\) and 280 \(B_{c}^{*}\to J/\psi+\ell+\nu_{\ell}\) events at LHCb Run-2, assuming \(9\times 10^{8}\)\(B_{c}^{*}\) mesons are produced [21]. However, only 1 \(B_{c}^{*}\to J/\psi+\pi\) and 11 \(B_{c}^{*}\to J/\psi+\ell+\nu_{\ell}\) events can be reconstructed when the LHCb efficiency in Ref. [5] is taken into account.
The number of reconstructed events will increase by a factor of about 33 in the future LHCb Run-3 and Run-4 experiments. The polarization measurement of the \(J/\psi\) will eliminate the possible \(B_{c}\) background in probing the \(B_{c}^{*}\) meson. Apart from the channels discussed in this paper, one can also probe the \(B_{c}^{*}\) meson via \(B_{c}^{*}\to B_{s}/B+n\pi\), with branching ratios of around \(10^{-5}\). _Conclusion._ In this Letter, the electromagnetic and weak decays of the \(B_{c}^{*}\) are studied in a model-independent way. We have presented the helicity decomposition of \(B_{c}^{*}\to B_{c}+\gamma\), \(B_{c}^{*}\to\ell+\nu_{\ell}\) and \(B_{c}^{*}\to J/\psi+nh\) through polarization analysis. The polarization asymmetry \(\alpha_{LT}\) introduced in this paper is an important physical observable to distinguish the initial pseudoscalar \(B_{c}\) and vector \(B_{c}^{*}\) states. It also reveals a general law in \(P\to V\) and \(V\to V^{\prime}\) transition processes, which can be tested by polarization measurements of the final vector mesons. In the end, the long-sought vector \(B_{c}^{*}\) meson has a good opportunity to be resolved during LHCb Run-3 or Run-4 and at future experiments such as CEPC running at the \(Z\) boson pole. _Acknowledgement._ We thank Prof. Chao-Hsi Chang and Prof. Bing-Song Zou for valuable discussions. This work is supported by NSFC under Grants No. 12322503, No. 12047503, and No. 12075124, and by the Natural Science Foundation of Jiangsu under Grant No. BK20211267. ## Appendix The other four helicity components in Eq. (6) have the following expressions:

\[\Gamma_{10111}=\frac{q^{2}\rho_{T}^{nh}(q^{2})}{2M^{2}\left(M^{2}-M^{\prime 2}\right)^{2}}\left[2A_{1}\left(M^{2}-M^{\prime 2}\right)\left(3M^{2}+M^{\prime 2}-q^{2}\right)\left(\left(A_{2}-2A_{4}\right)M^{2}q^{2}+\left(A_{2}+A_{4}\right)\left(M^{2}-M^{\prime 2}\right)^{2}-\left(A_{2}+2A_{4}\right)q^{2}M^{\prime 2}+A_{4}q^{4}\right)+\left(\left(A_{2}-2A_{4}\right)M^{2}q^{2}+\left(A_{2}+A_{4}\right)\left(M^{2}-M^{\prime 2}\right)^{2}-\left(A_{2}+2A_{4}\right)q^{2}M^{\prime 2}+A_{4}q^{4}\right)^{2}+A_{1}^{2}\left(M^{2}-M^{\prime 2}\right)^{2}\left(3M^{2}+M^{\prime 2}-q^{2}\right)^{2}+V_{3}^{2}\left(M-M^{\prime}\right)^{2}\left(M^{\prime}+M\right)^{2}\left(\left(M-M^{\prime}\right)^{2}-q^{2}\right)\left(\left(M^{\prime}+M\right)^{2}-q^{2}\right)\right],\]

\[\Gamma_{10100}=\frac{\left(-2M^{2}\left(M^{\prime 2}+q^{2}\right)+M^{4}+\left(M^{\prime 2}-q^{2}\right)^{2}\right)\rho_{T}^{nh}(q^{2})}{16M^{2}M^{\prime 2}\left(M^{2}-M^{\prime 2}\right)^{2}}\left[\left(M-M^{\prime}\right)\left(M^{\prime}+M\right)\left(M^{2}\left(2V_{1}-V_{3}+V_{4}\right)+\left(2V_{1}+V_{3}-V_{4}\right)M^{\prime 2}+q^{2}\left(-2V_{1}+V_{3}+V_{4}\right)\right)+V_{5}\left(\left(M-M^{\prime}\right)^{2}-q^{2}\right)\left(\left(M^{\prime}+M\right)^{2}-q^{2}\right)\right]^{2},\]

\[\Gamma_{1010t}=\frac{\rho_{L}^{nh}(q^{2})}{16M^{2}M^{\prime 2}}\left[2M^{2}\left(\left(-V_{3}+V_{4}+V_{6}\right)M^{\prime 2}+q^{2}\left(V_{1}+V_{2}-V_{3}+V_{4}+V_{6}\right)\right)+M^{4}\left(-\left(2V_{1}-V_{3}+V_{4}+V_{6}\right)\right)+\left(M^{\prime 2}-q^{2}\right)\left(\left(2V_{1}+V_{3}-V_{4}-V_{6}\right)M^{\prime 2}+q^{2}\left(2V_{2}-V_{3}+V_{4}+V_{6}\right)\right)\right]^{2},\]

\[\Gamma_{11101}=\frac{q^{2}\rho_{T}^{nh}(q^{2})}{2M^{\prime 2}\left(M^{2}-M^{\prime 2}\right)^{2}}\left[2A_{1}\left(M^{2}-M^{\prime 2}\right)\left(M^{2}+3M^{\prime 2}-q^{2}\right)\left(-q^{2}\left(2A_{3}\left(M^{2}+M^{\prime 2}\right)+A_{2}\left(M^{2}-M^{\prime 2}\right)\right)+A_{3}q^{4}+\left(A_{2}+A_{3}\right)\left(M^{2}-M^{\prime 2}\right)^{2}\right)+\left(A_{3}q^{4}-q^{2}\left(2A_{3}\left(M^{2}+M^{\prime 2}\right)+A_{2}\left(M^{2}-M^{\prime 2}\right)\right)+\left(A_{2}+A_{3}\right)\left(M^{2}-M^{\prime 2}\right)^{2}\right)^{2}+A_{1}^{2}\left(M^{2}-M^{\prime 2}\right)^{2}\left(M^{2}+3M^{\prime 2}-q^{2}\right)^{2}+V_{4}^{2}\left(M^{2}-M^{\prime 2}\right)^{2}\left(\left(M-M^{\prime}\right)^{2}-q^{2}\right)\left(\left(M^{\prime}+M\right)^{2}-q^{2}\right)\right].\]
2309.01351
Adv3D: Generating 3D Adversarial Examples for 3D Object Detection in Driving Scenarios with NeRF
Deep neural networks (DNNs) have been proven extremely susceptible to adversarial examples, which raises special safety-critical concerns for DNN-based autonomous driving stacks (i.e., 3D object detection). Although there are extensive works on image-level attacks, most are restricted to 2D pixel spaces, and such attacks are not always physically realistic in our 3D world. Here we present Adv3D, the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs). Advances in NeRF provide photorealistic appearances and 3D accurate generation, yielding a more realistic and realizable adversarial example. We train our adversarial NeRF by minimizing the surrounding objects' confidence predicted by 3D detectors on the training set. Then we evaluate Adv3D on the unseen validation set and show that it can cause a large performance reduction when rendering NeRF in any sampled pose. To generate physically realizable adversarial examples, we propose primitive-aware sampling and semantic-guided regularization that enable 3D patch attacks with camouflage adversarial texture. Experimental results demonstrate that the trained adversarial NeRF generalizes well to different poses, scenes, and 3D detectors. Finally, we provide a defense method to our attacks that involves adversarial training through data augmentation. Project page: https://len-li.github.io/adv3d-web
Leheng Li, Qing Lian, Ying-Cong Chen
2023-09-04T04:29:01Z
http://arxiv.org/abs/2309.01351v2
# Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF ###### Abstract Deep neural networks (DNNs) have been proven extremely susceptible to adversarial examples, which raises special safety-critical concerns for DNN-based autonomous driving stacks (_i.e._, 3D object detection). Although there are extensive works on image-level attacks, most are restricted to 2D pixel spaces, and such attacks are not always physically realistic in our 3D world. Here we present Adv3D, the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs). Advances in NeRF provide photorealistic appearances and 3D-accurate generation, yielding a more realistic and realizable adversarial example. We train our adversarial NeRF by minimizing the surrounding objects' confidence predicted by 3D detectors on the training set. Then we evaluate Adv3D on the unseen validation set and show that it can cause a large performance reduction when rendering the NeRF in any sampled pose. To generate physically realizable adversarial examples, we propose primitive-aware sampling and semantic-guided regularization that enable 3D patch attacks with camouflage adversarial texture. Experimental results demonstrate that the trained adversarial NeRF generalizes well to different poses, scenes, and 3D detectors. Finally, we provide a defense method against our attacks that involves adversarial training through data augmentation. Project page: len-li.github.io/adv3d-web. ## 1 Introduction The perception system of self-driving cars relies heavily on DNNs to process input data and comprehend the environment. Although DNNs have exhibited great improvements in performance, they have been found vulnerable to adversarial examples [2; 16; 27; 45]. These adversarial examples, crafted by adding imperceptible perturbations to the input data, can lead DNNs to make wrong predictions. Motivated by the safety-critical nature of self-driving cars, we aim to explore the possibility of generating physically realizable adversarial examples that disrupt 3D detectors in driving scenarios, and to further improve the robustness of 3D detectors through adversarial training. 2D pixel perturbations (digital attacks) [16; 45] have been proven effective in attacking DNNs in various computer vision tasks [13; 57; 60]. However, these 2D pixel attacks are restricted to the digital space and are difficult to realize in our 3D world. To address this challenge, several works have proposed physical attacks. For example, Athalye _et al._[2] propose the framework of Expectation over Transformation (EOT) to improve attack robustness under 3D transformations. Other researchers generate adversarial examples beyond the image space through differentiable rendering, as seen in [58; 63]. These methods show great promise for advancing the field of 3D adversarial attacks and defenses but are still limited to synthetic environments. Given the safety-critical demands of self-driving cars, several works have proposed physically realizable attacks and defense methods in driving scenarios. For example, Cao _et al._[5; 6] propose to learn 3D-aware adversarial attacks capable of generating adversarial meshes to attack 3D detectors. However, their works only consider learning a 3D adversarial example for a few specific frames. Thus, the learned example is not universal and may not transfer to other scenes. To mitigate this problem, Tu _et al._[47, 48] propose to learn a transferable adversary that is placed on top of a vehicle.
Such an adversary can be used in any scene to hide the attacked object from 3D detectors. However, reproducing their attack in the physical world can be challenging, since their adversary must have direct contact with the attacked object. We list detailed comparisons with prior works in Tab. 1. To address the above challenges and generate 3D adversarial examples in driving scenarios, we build Adv3D upon recent advances in NeRF [38] that provide both differentiable rendering and realistic synthesis. In order to generate physically realizable attacks, we model Adv3D in a patch-attack [44] manner and use an optimization-based approach that starts from a realistic NeRF object [29] to learn its 3D adversarial texture. We optimize the adversarial texture to minimize the predicted confidence of all objects in the scenes, while keeping the shape unchanged. During evaluation, we render the input-agnostic NeRF in randomly sampled poses; we then paste the rendered patch onto the unseen validation set to evaluate the attack performance. Owing to its transferability across poses and scenes, our adversarial example can be executed without prior knowledge of the scene and does not need direct contact with the attacked objects, thus making for more feasible attacks compared with [47, 48, 61, 66]. Finally, we provide thorough evaluations of Adv3D on camera-based 3D object detection with the nuScenes [4] dataset. Our contributions are summarized as follows: * We introduce Adv3D, the first exploration of formulating adversarial examples as NeRF to attack 3D detectors in autonomous driving. Adv3D provides 3D-aware and photorealistic synthesis that was previously unavailable. * By incorporating the proposed primitive-aware sampling and semantic-guided regularization, Adv3D generates adversarial examples with enhanced physical realism and realizability. * We conduct extensive real-world experiments and demonstrate the transferability of our adversarial examples across unseen environments and detectors. ## 2 Related Work ### 2.1 Adversarial Attack DNNs are known to be vulnerable to adversarial attacks, where a small perturbation of the input data can cause drastic changes in the output predictions. Szegedy _et al._[45] first discovered that adversarial examples, generated by adding visually imperceptible perturbations to the original images, make DNNs predict a wrong category with high confidence. These vulnerabilities were also discovered in object detection and semantic segmentation [33, 60]. Moreover, DPatch [33] proposes transferable patch-based attacks by compositing a small patch onto the input image. However, perturbing image pixels alone does not guarantee that adversarial examples can be created in the physical world. To address this issue, several works have performed physical attacks [2, 21, 26, 50, 55, 56, 62] and exposed real-world threats. For example, Cheng _et al._[11] developed an adversarial patch with physically oriented transformations to attack a depth estimation network. AdvPC [19] investigates adversarial perturbations on 3D point clouds. SADA [18] proposes semantic adversarial diagnostic attacks in various autonomous applications. ViewFool [14] and VIAT [43] evaluate the robustness of DNNs to adversarial viewpoints by using NeRF's differentiability. In our work, we mainly aim to generate 3D adversarial examples for 3D object detection in driving scenarios. \begin{table} \begin{tabular}{l c c c} \hline \hline **Methods** & **Transferability** & **Adv. Type** & **Additional Requirements** \\
\hline Cao _et al._[5, 6] & Poses & 3D Mesh & Model, Annotation \\ Tu _et al._[47, 48] & Poses, Scenes & 3D Mesh & Model, Annotation \\ Xie _et al._[61] & Scenes, Categories & 2D Patch & Model, Annotation \\ \hline Adv3D & Poses, Scenes, Categories & 3D NeRF & Model \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with prior works of adversarial attack in autonomous driving. ### 2.2 Robustness in Autonomous Driving Given the safety-critical nature of autonomous driving, it is necessary to pay special attention to the robustness of such systems [51]. LiDAR-Adv [6] proposes to learn input-specific adversarial point clouds to fool LiDAR detectors. Tu _et al_. [48] produce generalizable point clouds that can be placed on a vehicle roof to hide it. Furthermore, several works [1, 5, 47] try to attack multi-sensor fusion systems by optimizing 3D meshes through differentiable rendering. We compare our method with prior works in Tab. 1. Our method demonstrates stronger transferability and fewer requirements than prior works. ### 2.3 Image Synthesis using NeRF NeRF [38] enables photorealistic synthesis in a 3D-aware manner. Recent advances [49, 64] in NeRF allow for control over the materials, illumination, and 6D pose of objects. Additionally, NeRF's rendering comes directly from real-world reconstruction, providing more physically accurate and photorealistic synthesis than previous mesh-based methods that relied on manually crafted assets. Moreover, volumetric rendering [22] enables NeRF to perform accurate and efficient gradient computation compared with the dedicated renderers used in mesh-based differentiable rendering [9, 24, 32]. Recently, there has been tremendous progress in driving scene simulation using NeRF. Block-NeRF [46] achieves city-scale reconstruction by modeling the blocks of cities with several isolated NeRFs to increase capacity. FEGR [55] learns to intrinsically decompose the driving scene for applications such as relighting. Lift3D [29] uses NeRF to generate new objects and augment them into driving datasets, demonstrating the capability of NeRF to improve downstream task performance. Driving scene simulation provides a perfect test bed to evaluate the effectiveness of self-driving cars. Our method is related to Lift3D, but aims to understand and improve the robustness of 3D detectors. ## 3 Preliminary ### 3.1 Camera-based 3D Object Detection in Autonomous Driving Camera-based 3D object detection is a fundamental task in autonomous driving. Without loss of generality, we focus on evaluating the robustness of camera-based 3D detectors. The 3D detectors process image data and aim to predict the 3D bounding boxes of all surrounding objects. The parameterization of a 3D bounding box can be written as \(\mathbf{b}=\{\mathbf{R},\mathbf{t},\mathbf{s},c\}\), where \(\mathbf{R}\in SO(3)\) is the rotation of the box, \(\mathbf{t}=(x,y,z)\) indicates the translation of the box center, \(\mathbf{s}=(l,w,h)\) represents the size (length, width, and height) of the box, and \(c\) is the confidence of the predicted box. The network structures of camera-based 3D object detectors can be roughly categorized into FoV-based (front-of-view) and BEV-based (bird's-eye-view) ones. FoV-based methods [52, 53, 54] can easily be built by adding 3D attribute branches to 2D detectors. BEV-based methods [41, 42] typically convert 2D image features to BEV features using the camera parameters, and then directly detect objects on the BEV plane. We refer readers to recent surveys [28, 34] for more detail.
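As a minimal illustration of this box parameterization (a sketch of the data structure, not any detector's actual API), the box and its eight corners can be written as:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box3D:
    """3D bounding box b = {R, t, s, c} as parameterized in Sec. 3.1."""
    R: np.ndarray  # (3, 3) rotation matrix in SO(3)
    t: np.ndarray  # (3,) center translation (x, y, z)
    s: np.ndarray  # (3,) size (l, w, h)
    c: float       # detection confidence

    def corners(self) -> np.ndarray:
        # Scale the unit cube [-0.5, 0.5]^3, rotate, then translate: 8 x 3 corners.
        u = np.array([[x, y, z] for x in (-0.5, 0.5)
                                for y in (-0.5, 0.5)
                                for z in (-0.5, 0.5)])
        return (self.R @ (u * self.s).T).T + self.t
```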
### 3.2 Differentiable Rendering using NeRF Our method leverages the differentiable rendering scheme proposed by NeRF [38]. NeRF parameterizes the volumetric density and color as a function of input coordinates, using a multi-layer perceptron (MLP) or hybrid neural representations [15, 39, 7] to represent this function. For each pixel of an image, a ray \(\mathbf{r}(t)=\mathbf{r}_{o}+\mathbf{r}_{d}\cdot t\) is cast from the camera origin \(\mathbf{r}_{o}\) through the pixel direction \(\mathbf{r}_{d}\), parameterized by the distance \(t\). Along each ray, we uniformly sample \(K\) points from the near plane \(t_{near}\) to the far plane \(t_{far}\); the \(k^{th}\) distance is thus calculated as \(t_{k}=t_{near}+(t_{far}-t_{near})\cdot k/K\). For any queried point \(\mathbf{r}(t_{k})\) on the ray, the network takes its position \(\mathbf{r}\left(t_{k}\right)\) and predicts the per-point color \(\mathbf{c}_{k}\) and density \(\tau_{k}\) with:

\[\left(\mathbf{c}_{k},\tau_{k}\right)=\mathrm{Network}\left(\mathbf{r}\left(t_{k}\right)\right). \tag{1}\]

Note that we omit the direction term, as suggested by [17]. The final predicted color of each pixel \(\mathbf{C}(\mathbf{r})\) is computed by approximating the volume rendering integral using numerical quadrature [37]:

\[\mathbf{C}(\mathbf{r})=\sum_{k=0}^{K-1}T_{k}\left(1-\exp\left(-\tau_{k}\left(t_{k+1}-t_{k}\right)\right)\right)\mathbf{c}_{k},\quad\text{with}\quad T_{k}=\exp\left(-\sum_{k^{\prime}<k}\tau_{k^{\prime}}\left(t_{k^{\prime}+1}-t_{k^{\prime}}\right)\right). \tag{2}\]

We build our NeRF upon Lift3D [29]. Lift3D is a 3D generation framework that produces photorealistic objects by fitting, with NeRF, multi-view images synthesized by 2D generative models [23]. The network of Lift3D is a conditional NeRF with an additional latent code input, which controls the shape and texture of the rendered object. The conditional NeRF in Lift3D is a tri-plane-parameterized [7] generator. With its realistic generation and 3D controllability, Lift3D has demonstrated that NeRF-generated training data can help to improve downstream task performance. To further explore and exploit these properties of NeRF, we present a valuable and important application in this work: we leverage NeRF-generated data to investigate and improve the robustness of the perception system of self-driving cars.

## 4 Method

We illustrate the pipeline of Adv3D in Fig. 1. We aim to learn a transferable adversarial example in 3D detection that, when rendered in any pose (_i.e._, location and rotation), can effectively hide surrounding objects from 3D detectors in any scene by lowering their confidence. In Sec. 4.1, to improve the physical realizability of adversarial examples, we propose (1) primitive-aware sampling that enables 3D patch attacks, (2) a disentangled NeRF parameterization that provides feasible geometry, and (3) semantic-guided regularization that enables camouflage adversarial textures. To enhance the transferability across poses and scenes, we formulate the learning paradigm of Adv3D within the EOT framework [2] in Sec. 4.3.

### 4.1 3D Adversarial Example Generation

We use a gradient-based method to train our adversarial examples. The training pipeline involves 4 steps: **(i)** randomly sampling the pose of an adversarial example, **(ii)** rendering the example in the sampled pose, **(iii)** pasting the rendered patch into an original image of the training set, and finally, **(iv)** computing the loss and optimizing the latent codes.
During inference, we discard step **(iv)**.

#### 4.1.1 Pose Sampling

To achieve adversarial attacks in arbitrary object poses, we apply Expectation over Transformation (EOT) [2] by randomly sampling object poses. The poses of adversarial examples are parameterized as 3D boxes \(\mathbf{b}\) that are restricted to a predefined ground plane in front of the camera. We model the ground plane as a uniform distribution \(\mathcal{B}\) over a specific range that is detailed in the supplement. During training, we independently sample the rendering poses of the adversarial examples, and approximate the expectation by taking the average loss over the whole batch.

Figure 1: **Adv3D** aims to generate 3D adversarial examples that consistently perform attacks under different poses during rendering. We initialize adversarial examples from Lift3D [29]. During training, we optimize the texture latent codes of NeRF to minimize the detection confidence of all surrounding objects. During inference, we evaluate the performance reduction of pasting the adversarial patch rendered using randomly sampled poses on the validation set.

#### 4.1.2 Primitive-aware Sampling

We model the primitive of adversarial examples as a NeRF tightly bounded by a 3D box, in order to enable non-contact and physically realizable attacks. During volume rendering, we compute the intersection of the rays \(\mathbf{r}(t)\) with the sampled pose \(\mathbf{b}=\{\mathbf{R},\mathbf{t},\mathbf{s}\}\in\mathcal{B}\), finding the first and last hit points of the box, \((t_{near},t_{far})\), by the AABB-ray intersection algorithm [36]. We then sample our points inside the range \((t_{near},t_{far})\) to avoid large numbers of unnecessary samples and to avoid contact with the environment:

\[(t_{near},t_{far})=Intersect(\mathbf{r},\mathbf{b}), \tag{3}\]
\[\mathbf{r}^{\prime}(t_{k})=\mathbf{\tilde{r}}(t_{near})+(\mathbf{\tilde{r}}(t_{far})-\mathbf{\tilde{r}}(t_{near}))\cdot k/K, \tag{4}\]
\[\mathbf{\tilde{r}}(t)=Transform(\mathbf{r}(t),\mathbf{b}), \tag{5}\]

where \(\mathbf{\tilde{r}}(t)\) denotes the sampled points after an additional global-to-local transformation. Specifically, we use a 3D affine transformation to map the originally sampled points \(\mathbf{r}(t)=\mathbf{r}_{o}+\mathbf{r}_{d}\cdot t\) into a canonical space \(\mathbf{\tilde{r}}=\{x,y,z\}\in[-1,1]\). This ensures that all the sampled points, regardless of their distance from the origin, are transformed into the range \([-1,1]\), thus providing a compact input representation for the NeRF network. The transformation is given by:

\[Transform(\mathbf{r},\mathbf{b})=\mathbf{s}^{-1}\cdot(\mathbf{R}^{-1}\cdot\mathbf{r}-\mathbf{t}), \tag{6}\]

where \(\mathbf{b}=\{\mathbf{R},\mathbf{t},\mathbf{s}\}\), \(\mathbf{R}\in SO(3)\) is the rotation matrix of the box, and \(\mathbf{t},\mathbf{s}\in\mathbb{R}^{3}\) are the translation and scale vectors that move and scale the unit cube to the desired location and size. The parameters of \(\mathbf{b}\) are sampled from the pre-defined distribution \(\mathcal{B}\) detailed in the supplement. Then, the points lying in \([-1,1]\) are projected to exactly cover the tri-plane features \(\mathbf{z}\) for interpolation. Finally, a small MLP takes the interpolated features as input and predicts RGB and density:

\[(\mathbf{c}_{k},\tau_{k})=\mathrm{MLP}(Interpolate(\mathbf{z},\mathbf{r}^{\prime}(t_{k}))). \tag{7}\]
#### 4.1.3 Disentangled NeRF Parameterization The original parameterization of NeRF combines shape and texture in a single MLP, resulting in entangled shape and texture generation. Since shape variation is challenging to reproduce in the real world, we disentangle shape from texture generation and set only the texture as adversarial. We obtain texture latents \(\mathbf{z_{tex.}}\) and shape latents \(\mathbf{z_{shape}}\) from Lift3D. During volume rendering, we disentangle shape and texture generation by separately predicting RGB and density: \[\mathbf{c}_{k}=\mathrm{Network}(\mathbf{z_{tex.}},\mathbf{r}^{\prime}\left(t _{k}\right)),\quad\tau_{k}=\mathrm{Network}(\mathbf{z_{shape}},\mathbf{r}^{ \prime}\left(t_{k}\right)), \tag{8}\] where \(\mathbf{z_{shape}}\) is fixed and \(\mathbf{z_{tex.}}\) is optimized. Our disentangled parameterization can also be seen as the geometry regularization of [47, 48], but it keeps the geometry unchanged from that of an ordinary vehicle, leading to a more realizable adversarial example. #### 4.1.4 Semantic-guided Regularization Setting the entire surface of the vehicle as adversarial texture is straightforward, but not always feasible in the real world. To improve physical realizability, we propose to optimize individual semantic parts, such as the doors and windows of a vehicle. Specifically, as shown in Fig. 2 (d, e), we set only a specific part of the vehicle as adversarial texture while keeping the others unchanged. This semantic-guided regularization leads to a camouflaged adversarial texture that is less likely to be spotted in the real world. To achieve this, we add a semantic branch to Lift3D [29] to predict the semantic part labels of the sampled points. We re-train Lift3D by fitting multi-view images and semantic labels generated by EditGAN [31]. Using semantic-guided regularization, we maintain the original texture and the adversarial part texture at the same time, but optimize only the adversarial part texture while leaving the original texture unchanged. This approach allows us to preserve the large majority of parts as usual and alter only the specific adversarial parts (see Fig. 2 (b, c)). Potential attackers could easily print the adversarial sticker and attach it to the corresponding semantic part of a vehicle to hide surrounding objects. In our implementation, we query the NeRF network twice, once for the adversarial texture and once for the original texture; we then replace the relevant part of the original texture with the adversarial texture, indexed by the semantic labels in point space (a sketch is given below). ### Gradient Propagation After rendering the adversarial examples, we paste the adversarial patch into the original image through image composition. The attacked image can be expressed as \(I_{1}\times M+I_{2}\times(1-M)\), where \(I_{1}\) and \(I_{2}\) are the patch and the original image, and \(M\) is the foreground mask predicted by NeRF. Next, the attacked images are fed to pretrained and fixed 3D detectors to compute the objective and back-propagate the gradients. Since both the rendering and detection pipelines are differentiable, Adv3D allows gradients from the objective to flow into the texture latent codes during optimization.
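The sketch promised above: the disentangled query of Eq. (8) combined with the semantic part replacement. Here `nerf.density`, `nerf.color`, and `nerf.semantic` are hypothetical stand-ins for Lift3D's tri-plane generator and the added semantic branch, which we do not reproduce.

```python
import torch

def query_disentangled(pts, z_tex_adv, z_tex_orig, z_shape, nerf, part_id):
    """Eq. (8) with semantic-guided regularization: density comes from the
    frozen shape latent; color is adversarial only where the predicted
    semantic label matches the chosen part (e.g., a door or window)."""
    density = nerf.density(z_shape, pts)           # tau_k; z_shape stays fixed
    rgb_adv = nerf.color(z_tex_adv, pts)           # first query: adversarial texture
    rgb_orig = nerf.color(z_tex_orig, pts)         # second query: original texture
    part = nerf.semantic(pts)                      # per-point part labels
    mask = (part == part_id).unsqueeze(-1).float() # 1 on the adversarial part
    rgb = mask * rgb_adv + (1.0 - mask) * rgb_orig # replace only that part
    return rgb, density
```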
### Learning Paradigm We formulate our learning paradigm as EOT [2], which finds adversarial texture latent codes by minimizing the expectation of a binary cross-entropy loss over sampled poses and original images: \[\mathbf{z_{tex.}}=\arg\min_{\mathbf{z_{tex.}}}\mathbb{E}_{\mathbf{b}\sim \mathcal{B}}\,\mathbb{E}_{\mathbf{x}\sim\mathcal{X}}\left[-\log\left(1-P(I( \mathbf{x},\mathbf{b},\mathbf{z_{tex.}}))\right)\right], \tag{9}\] where \(\mathbf{b}\) is the rendering pose sampled from the predefined ground-plane distribution \(\mathcal{B}\), \(\mathbf{x}\) is the original image sampled from the training set \(\mathcal{X}\), \(I(\mathbf{x},\mathbf{b},\mathbf{z_{tex.}})\) is the attacked image composited from the original image \(\mathbf{x}\) and the adversarial patch rendered using pose \(\mathbf{b}\) and texture latent code \(\mathbf{z_{tex.}}\), and \(P(I(\cdot))\) denotes the confidence of all proposals predicted by the detectors. We approximate the expectation by averaging the objective over an independently sampled batch. The objective is a binary cross-entropy loss that minimizes the confidence of all predicted bounding boxes, including both adversarial and normal objects. Built within the EOT framework, Adv3D improves the transferability and robustness of adversarial examples over the sampled parameters (here, poses and scenes). This means that the attack can be performed without prior knowledge of the scene and can disrupt models across different poses and times in a non-contact manner. ### Adversarial Defense by Data Augmentation To defend against our adversarial attack, we also study adversarial training to improve the robustness of 3D detectors. Adversarial training is typically performed by adding image perturbations using a few PGD steps [35; 59] during the training of networks. However, our adversarial example is too expensive to generate inside the bi-level loop of the min-max optimization objective. Thus, instead of generating adversarial examples from scratch at every iteration, we directly leverage the transferable adversarial examples to augment the training set. We use the trained adversarial example to locally store a large number of rendered images, avoiding repeated computation. During adversarial training, we randomly paste a rendered adversarial patch into the training images with a probability of \(30\%\), while leaving the others unchanged. We provide experimental results in Sec. 5.4.
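Putting Sec. 4 together, one optimization step of Eq. (9) might look as follows. This is a hedged sketch: `nerf.render`, `pose_dist.sample`, and `detector.confidences` are placeholder interfaces, not APIs from the paper's codebase, and the small epsilon is ours for numerical stability.

```python
import torch

def eot_attack_step(z_tex, nerf, detector, images, pose_dist, optimizer):
    """One EOT step: sample poses b ~ B, composite the rendered patch into each
    training image, and minimize -log(1 - P) over all predicted boxes."""
    poses = pose_dist.sample(len(images))          # one pose per image in the batch
    patch, mask = nerf.render(z_tex, poses)        # differentiable NeRF rendering
    attacked = patch * mask + images * (1 - mask)  # I1 * M + I2 * (1 - M)
    conf = detector.confidences(attacked)          # P(I(x, b, z_tex)) per proposal
    loss = -torch.log(1.0 - conf + 1e-6).mean()    # Eq. (9), averaged over the batch
    optimizer.zero_grad()
    loss.backward()                                # detector frozen; only z_tex updates
    optimizer.step()
    return float(loss)
```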
## 5 Experiments In this section, we first describe the training details of our adversarial attacks. Then we present the experiments on semantic-guided regularization in Sec. 5.1, the analysis of the 3D-aware attack in Sec. 5.2, the transferability across detectors in Sec. 5.3, and our adversarial defense method in Sec. 5.4. Figure 2: Rendered results of our adversarial examples. **(a)** Image and semantic label of an instance predicted by NeRF. **(b)** Top: our example without semantic-guided regularization. Bottom: our example with semantic-guided regularization. **(c)** Multi-view-consistent synthesis of our examples. **(d, e)** The texture transfer results of side- and back-part adversaries to other vehicles. **Dataset.** We conduct our experiments on the widely used nuScenes dataset [4]. This dataset is collected using 6 surround-view cameras that cover the full 360\({}^{\circ}\) field of view around the ego-vehicle. The dataset contains 700 scenes for training and 150 scenes for validation. In our work, we train our adversarial examples on the training set and evaluate the performance drop on the validation set. **Training.** We implement our methods using PyTorch [40] and MMDetection3D [12]. All detectors are restored from checkpoints available in their open-source repositories to match the original performance exactly. We select a single instance from Lift3D [29] as the initialization of our examples. We conduct our experiments using 8 NVIDIA A100 80G GPUs. We use the Adam optimizer [25] with a learning rate of 1e-3 for the texture latents. In practice, we optimize the texture latents on the training set for five epochs with the same batch size as used for training the detectors. We do not use any regularization except for semantic-guided regularization. Unless otherwise specified, we render two adversarial examples per image; we ablate the number of rendered adversaries in the supplement. **Target Detectors and Metrics.** We evaluate the robustness of six representative detectors: three FoV-based and three BEV-based. The FoV-based methods are FCOS3D [52], PGD-Det [53], and DETR3D [54]. The BEV-based methods are BEVDet [20], BEVFormer-Tiny [30], and BEVFormer-Base. Following prior work [61], we evaluate the performance drop on the validation set after the attack. Specifically, we use the mean Average Precision (mAP) and the nuScenes Detection Score (NDS) [4] to evaluate the performance of the 3D detectors. **Quantitative Results.** We provide the experimental results of our adversarial attacks in Tab. 2. The attacks are conducted in a full-part manner without semantic-guided regularization to investigate the upper limit of attack performance. We find that FoV-based and BEV-based detectors display similar robustness under our attack. Meanwhile, we observe a large improvement in robustness from a stronger backbone (ResNet101 versus ResNet50) when comparing BEVFormer-Base with BEVFormer-Tiny. We hope these results will inspire researchers to develop 3D detectors with enhanced robustness. \begin{table} \begin{tabular}{l l c|c c c c} \hline \hline **Models** & **Backbone** & **Type** & **Clean NDS** & **Adv NDS** & **Clean mAP** & **Adv mAP** \\ \hline FCOS3D [52] & ResNet101 & FoV & 0.3770 & 0.2674 & 0.2980 & 0.1272 \\ PGD-Det [53] & ResNet101 & FoV & 0.3934 & 0.2694 & 0.3174 & 0.1321 \\ DETR3D [54] & ResNet101 & FoV & 0.4220 & 0.2755 & 0.3470 & 0.1336 \\ BEVDet [20] & ResNet50 & BEV & 0.3822 & 0.2247 & 0.3076 & 0.1325 \\ BEVFormer-Tiny [30] & ResNet50 & BEV & 0.3540 & 0.2264 & 0.2524 & 0.1217 \\ BEVFormer-Base [30] & ResNet101 & BEV & 0.5176 & 0.3800 & 0.4167 & 0.2376 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of different detectors under our attack. Clean NDS and mAP denote evaluation using the original validation data. Adv NDS and mAP denote evaluation using the attacked data. Figure 3: Visualization of BEVDet [20] predictions on the nuScenes validation set under our attacks. The visualization threshold is set at \(0.6\). The adversarial NeRF can hide surrounding objects by minimizing their predicted confidence in a non-contact manner (making the yellow boxes disappear). **Visualization Results.** We visualize our attack results with semantic-guided regularization in Fig. 3 (a, b), and without regularization in Fig. 3 (c). The disappearance of detected objects is caused by their lowered confidence scores. For example, the confidence predicted by the detectors in Fig. 3 (a) has declined from \(0.6\) to \(0.4\), and the boxes are therefore filtered out by the visualization threshold of \(0.6\).
In Fig. 3 (a), we find that our adversarial NeRF is realistic enough to be detected by a 3D detector when it does not display much of the adversarial texture. However, once the vehicle shows a larger area of the adversarial texture, as in Fig. 3 (b), it hides all objects including itself, due to our untargeted objective. ### Semantic Parts Analysis In Tab. 3, we study the impact of different semantic parts on attack performance. Specifically, we focus on three salient parts of the vehicle: the front, side, and rear. Our results show that, compared with adversarial attacks using full parts, semantic-guided regularization leads to a slightly lower performance drop but retains a realistic appearance and a less conspicuous adversarial texture, as illustrated in Fig. 2 (b). Since we do not have access to annotations during training, we additionally conduct a "No Part" experiment, in which no part of the texture is adversarial, to evaluate the impact of collision and occlusion. We acknowledge that part of the performance degradation can be attributed to the occlusion of original objects and to false-positive predictions of adversarial objects (see Fig. 3 (a)), since we do not add the adversarial objects to the ground truth of the validation set. ### Effectiveness of the 3D-aware Attack To validate the effectiveness of our 3D attacks, we ablate the impact of different poses on attack performance. In Fig. 4 (a), we divide the BEV plane into \(10\times 10\) bins ranging over \(x\in[-5m,5m]\) and \(z\in[10m,15m]\). We then evaluate the relative mAP drop (in percent) of BEVDet [20] by sampling one adversarial example inside the bin per image, while keeping rotation randomly sampled. Similarly, we conduct experiments with \(30\) uniform rotation bins over \([0,2\pi]\) in Fig. 4 (b). The experimental results demonstrate that all locations and rotations achieve a valid attack (performance drop \(>30\%\)), demonstrating the pose transferability of our 3D-aware attack. A finding that contrasts with prior work [48] is the impact of near versus far locations along the \(z\) axis: our adversarial example is more effective in the near region, while Tu _et al._ [48] report a roughly uniform distribution across regions. We hypothesize that attack performance is proportional to the area of the rendered patch, which is strongly related to the object's location. Similar findings hold for rotation: a vehicle oriented perpendicular to the ego vehicle yields a larger rendered area and thus better attack performance. ### Transferability Across Different Detectors In Tab. 5, we evaluate the transferability of adversarial examples across different detectors. To this end, we train a single adversarial example for each detector separately, then use it to evaluate the performance drop of the other detectors. We find a high degree of transferability between different models. Among them, DETR3D [54] appears to be more resilient to adversarial attacks than the other detectors. We hypothesize this can be attributed to the sparsity of the query-based method: during the projection of a 3D query onto the 2D image plane, only a single feature point is indexed by interpolation, so fewer adversarial feature regions are sampled. This finding may have useful implications for the development of more robust 3D detectors in the future.
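The cross-detector protocol of Tab. 5 reduces to a simple double loop. The sketch below assumes a generic `evaluate(detector, adv_example)` helper returning validation mAP; nothing here is specific to the released code.

```python
def transfer_matrix(detectors, adv_examples, evaluate):
    """Tab. 5 protocol: an adversarial example trained on each source detector
    is used to attack every target detector; entries report target mAP."""
    results = {}
    for source, adv in adv_examples.items():       # adv trained on `source`
        for target, det in detectors.items():
            results[(target, source)] = evaluate(det, adv)
    return results                                 # rows: targets, columns: sources
```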
### Adversarial Defense by Data Augmentation We present the results of adversarial training in Tab. 4. The symbol \(\dagger\) indicates attacks using the same adversarial example used in adversarial training, while § indicates a different example. We observe that incorporating adversarial training improves not only the robustness against the seen adversarial examples, but also the clean performance. However, we also note that our adversarial training does not transfer to unseen adversarial examples trained in the same way, mainly because the adversarial example is fixed during adversarial training. We hope that future work will conduct in-depth investigations and consider handling the bi-level loop of adversarial training in order to better defend against adversarial attacks. ## 6 Limitation and Future Work **Learning to Sample and Attack.** As we do not have access to the dataset annotations, we cannot model the explicit relationship between adversarial and normal objects to avoid collisions, and the collision itself can cause a performance drop ("No Part" in Tab. 3). Future work can apply geometry-aware composition [10] to mitigate this problem. Additionally, future research can explore learning to predict the optimal poses of adversarial objects to maximize the effectiveness of attacks. **Potential Harmful Consequences.** The trained adversarial examples have the potential to induce serious traffic accidents in driving scenarios. However, our work is not intended to cause disruptions in autonomous driving systems. Instead, our goal is to use these examples to gain a deeper understanding of such systems and to improve their robustness. We hope our work draws more attention from the community to further verify and enhance the robustness of autonomous driving systems. \begin{table} \begin{tabular}{l c|c c c c c} \hline \hline **Target \(\backslash\) Source** & Clean & FCOS3D & PGD-Det & DETR3D & BEVDet & BEVFormer \\ \hline FCOS3D [52] & 0.298 & **0.124** & 0.141 & 0.144 & 0.176 & 0.158 \\ PGD-Det [53] & 0.317 & 0.172 & **0.131** & 0.150 & 0.186 & 0.172 \\ DETR3D [54] & 0.347 & 0.188 & 0.170 & **0.133** & 0.212 & 0.198 \\ BEVDet [20] & 0.307 & 0.148 & 0.145 & 0.140 & **0.132** & 0.140 \\ BEVFormer [30] & 0.252 & 0.175 & 0.155 & 0.136 & 0.177 & **0.124** \\ \hline \hline \end{tabular} \end{table} Table 5: Transferability of our attack to unseen detectors. We evaluate the robustness of **target** detectors using an adversarial example trained on **source** detectors. Reported in mAP. ## 7 Conclusion In this paper, we propose **Adv3D**, the first attempt to model adversarial examples as NeRFs. Adv3D enhances the physical realizability of attacks through our proposed primitive-aware sampling and semantic-guided regularization. Compared with prior work on adversarial examples in autonomous driving, our examples are more threatening in practice: they carry out non-contact attacks, have feasible 3D shapes like ordinary vehicles, and display camouflaged adversarial texture. Extensive experimental results also demonstrate that Adv3D transfers well to different poses, scenes, and detectors. We hope our work provides valuable insights for creating more realistic evaluations to investigate and improve the robustness of autonomous driving systems.
2310.14159
Can Language Models Laugh at YouTube Short-form Videos?
As short-form funny videos on social networks are gaining popularity, it becomes demanding for AI models to understand them for better communication with humans. Unfortunately, previous video humor datasets target specific domains, such as speeches or sitcoms, and mostly focus on verbal cues. We curate a user-generated dataset of 10K multimodal funny videos from YouTube, called ExFunTube. Using a video filtering pipeline with GPT-3.5, we verify both verbal and visual elements contributing to humor. After filtering, we annotate each video with timestamps and text explanations for funny moments. Our ExFunTube is unique over existing datasets in that our videos cover a wide range of domains with various types of humor that necessitate a multimodal understanding of the content. Also, we develop a zero-shot video-to-text prompting to maximize video humor understanding of large language models (LLMs). With three different evaluation methods using automatic scores, rationale quality experiments, and human evaluations, we show that our prompting significantly improves LLMs' ability for humor explanation.
Dayoon Ko, Sangho Lee, Gunhee Kim
2023-10-22T03:01:38Z
http://arxiv.org/abs/2310.14159v3
# Can Language Models Laugh at YouTube Short-form Videos? ###### Abstract As short-form funny videos on social networks are gaining popularity, it becomes demanding for AI models to understand them for better communication with humans. Unfortunately, previous video humor datasets target specific domains such as speeches or sitcoms, and mostly focus on verbal cues. We curate a user-generated dataset of 10K multimodal funny videos from YouTube, called **ExFunTube**. Using a video filtering pipeline with GPT-3.5, we verify both verbal and visual elements contributing to humor. After filtering, we annotate each video with timestamps and text explanations for funny moments. Our ExFunTube is unique over existing datasets in that our videos cover a wide range of domains with various types of humor that necessitate a multimodal understanding of the content. Also, we develop a zero-shot video-to-text prompting to maximize video humor understanding of large language models (LLMs). With three different evaluation methods using automatic scores, rationale quality experiments, and human evaluations, we show that our prompting significantly improves LLMs' ability for humor explanation. ## 1 Introduction Today, a huge number of short-form funny videos are popularly circulated on social media platforms. Although humor often triggers instant laughter, understanding humor is not a straightforward process. Numerous studies [1, 184, 178, 190, 189, 170, 192, 183] have explored the cognitive process of humor appreciation. For instance, Hazlitt (1845) and Kant (1786) propose the incongruity theory, asserting that incongruity provokes laughter. Nerhardt (1970) further develops the idea by defining incongruity as the discrepancy between expectation and content, such as punchlines or cartoons. Suls (1972) suggests the incongruity-resolution theory, positing that humor arises only when the incongruity is resolved by retrieving information from the joke, cartoon, or the perceiver's own knowledge. Since a sufficient understanding of the context is required to perceive and further resolve the incongruity, understanding humor can be challenging. Nevertheless, if AI models can understand humor, they could interact more effectively with humans by providing empathetic responses based on users' sense of humor. Furthermore, if the models understand short-form funny videos, they can recommend videos based on users' preferences or even generate witty titles based on video contexts. Several studies [19, 20, 21, 18] have collected humorous video datasets to investigate whether models can understand if a video is funny or not. However, these datasets have been gathered from limited domains, such as speeches or sitcoms. For example, Hasan et al. (2019) collect videos from TED, where there is a single speaker and visual cues are restricted to gestures or facial expressions. Figure 1: An example from the ExFunTube dataset. We curate funny short-form videos in various domains through a filtering pipeline that verifies both verbal and visual elements contributing to humor. Each video is annotated with timestamps and explanations for funny moments. In this example, three funny moments are identified. Castro et al. (2019) build the MUStARD dataset from four sitcoms, mainly from "Friends" and "Big Bang Theory," and Patro et al. (2021) collect the MHD dataset from the sitcom "Big Bang Theory."
However, in sitcoms, the fixed actors follow a predetermined script on a constructed set, and the punchline plays a crucial role, so the visual elements may contribute less to the humor. Moreover, the aforementioned video datasets only have binary labels indicating whether the content is humorous or not. As binary classification may not evaluate whether a model truly understands the humor in a video, Kumar et al. (2022) collect WITS with annotated text explanations. However, this dataset is limited to sarcasm, a specific form of humor, and focuses on sarcasm explanation in dialogue. This highlights the need for a humor explanation dataset that gives more weight to visual elements and covers general humor. To this end, we curate **ExFunTube**, a dataset of funny, short-form videos with explanations. These videos are collected from user-generated YouTube videos shared on the "r/youtubehaiku" subreddit. In this subreddit, users upload short-form funny videos, typically up to 30 seconds long. We develop a video filtering pipeline with GPT-3.5 (Ouyang et al., 2022), designed to exclude videos with minimal visual impact on humor. Then, we annotate the collected videos with timestamps and text explanations of funny moments, as exemplified in Figure 1. Recent LLMs can, to some extent, explain humor present in text (Chowdhery et al., 2022). Inspired by recent research on multimodal-informed prompting (Zeng et al., 2022), we convert video content into text, leveraging various zero-shot models on the diverse modalities of the video. We provide LLMs with the resulting text prompt as a linguistic summary of the video content. Specifically, we consider two modalities of the video content: visual and audio. From the visual modality, we obtain dense video descriptions. From the audio modality, we acquire speech transcripts and sound labels. Finally, we integrate them chronologically into a text prompt that can maximize LLMs' ability for humor explanation. Since evaluating a model's ability to explain humor is challenging, we report our results in three different ways: model-based automatic scores, rationale quality metrics with the moment localization task, and human evaluation. First, we report model-based metrics instead of those using word overlap. Second, we conduct a rationale quality experiment, which assesses the quality of explanations by the accuracy of predicting gold labels (Wiegreffe et al., 2021). Finally, we carry out human evaluations on sampled test examples. Across these three evaluations, our prompting approach considerably improves the humor explanation performance of three important LLMs: zero-shot GPT-3.5 and finetuned T5 (Raffel et al., 2020) and BART (Lewis et al., 2020). To summarize, our key contributions are: 1.
We curate **ExFunTube**, a dataset consisting of 10,136 user-generated, funny short-form videos. Each video is annotated with timestamps and explanations of funny moments. As compared in Table 1, our ExFunTube is unique over existing datasets in that our videos cover a wide range of domains with various types of humor that necessitate a multimodal understanding of the content. 2. We design a zero-shot video-to-text prompting that converts video content into text to maximize LLMs' ability to explain video humor. 3. With three different evaluation methods of model-based lexical scores, rationale quality scores, and human evaluations, we verify that our prompting improves LLMs' performance on humor explanation. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Dataset** & **Modality** & **Type** & \begin{tabular}{c} **\#Data** \\ **Points** \\ \end{tabular} & **Data Config** & **Exp** & **Task** \\ \hline ExPUN & T & Pun & 2K & \{Pun, Keywords, Up to 5 scores \& explanations\} & ✓ & Pun _Exp_ \\ \hline AVH / FOR & I & \begin{tabular}{c} Abstract \\ Scene \\ \end{tabular} & 3K / 15K & \{A funny image, An unfunny image, 10 funniness ratings\} & - & \begin{tabular}{c} Image Humor \\ Scoring \& Altering \\ \end{tabular} \\ \hline NYCC & I,T & Cartoon & 0.7K & \begin{tabular}{c} \{Cartoon, Three finalist captions, 3 annotations of locations, \\ descriptions, uncanny descriptions, relevant entities, and explanations\} \\ \end{tabular} & ✓ & Cartoon Caption _Exp_ \\ MORE & I,T & Posts & 3K & \{Image, Caption, 1 explanation\} & ✓ & Image Sarcasm _Exp_ \\ \hline MUStARD & V,A,T & Sitcom & 6K & \{Video, Binary (funny/unfunny) label\} & - & Video Sarcasm _BC_ \\ WITS & V,A,T & Sitcom & 2.2K & \{Video, One explanation\} & ✓ & Dialogue Sarcasm _Exp_ \\ UR-FUNNY & V,A,T & Speech & 8K & \{Video, Binary (funny/unfunny) label\} & - & Video Humor _BC_ \\ MHD & V,T & Sitcom & 11K & \{Video, Binary (funny/unfunny) label\} & - & Video Humor _BC_ \\ \hline ExFunTube & V,A,T & \begin{tabular}{c} Short-form \\ YouTube videos \\ \end{tabular} & 10K & \{Video, Up to 3 timestamps \& explanations\} & ✓ & Video Humor _Exp_ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of our ExFunTube with previous humor datasets: ExPUN (Sun et al., 2022), AVH&FOR (Chandrasekaran et al., 2016), NYCC (Hessel et al., 2022), MORE (Desai et al., 2022), MUStARD (Castro et al., 2019), WITS (Kumar et al., 2022), UR-FUNNY (Hasan et al., 2019), and MHD (Patro et al., 2021). In the Modality column, I, V, A, and T denote image, video, audio, and text, respectively. The #Data Points column shows only the number of positive (humorous) data points. The Data Config column specifies the composition of each data point. The Exp column indicates the presence of annotated explanations. In the Task column, _Exp_ and _BC_ are abbreviations of the explanation generation and binary classification tasks, respectively. ## 2 Related work **Humor Understanding**. It has been a long-standing question whether AI models can understand humor in text, images, or videos. Early studies focused on classifying whether text Annamoradnejad and Zoghi (2020), images Chandrasekaran et al. (2016), or videos Hasan et al. (2019); Castro et al. (2019); Patro et al. (2021) are humorous or not. Some studies, such as Chandrasekaran et al. (2016), also rate the degree to which abstract scenes are perceived as humorous. However, binary classifications or ratings do not fully evaluate whether a model understands humor in detail.
Recent humor studies have shifted towards having models explain humor. Sun et al. (2022) augment the SemEval 2017 Task 7 Miller et al. (2017) with funniness ratings and explanations. Hessel et al. (2022) augment the New Yorker cartoon captions with explanations. Desai et al. (2022) propose a dataset of explanations for sarcastic captions, and Kumar et al. (2022) collect sarcastic videos from a sitcom with explanations. **Natural Language Explanation**. As tasks of interest become increasingly complex, predicting labels may not be enough to evaluate the models' true understanding. Thus, some works make models explain their decisions as an alternative. For instance, FLUTE Chakrabarty et al. (2022) augments e-SNLI Camburu et al. (2018) to curate figurative texts with labels for natural language inference (NLI) tasks and evaluates model-generated explanations. To evaluate model explanations, they utilize the rationale quality metric suggested by Wiegreffe et al. (2021). As word-overlap scores may be insufficient for the evaluation of explanations, Wiegreffe et al. (2021) propose a rationale quality metric that calculates the difference in prediction scores for gold labels when rationales are provided or not: Acc (IR \(\rightarrow\) O) - Acc (I \(\rightarrow\) O), where I, R, and O denote input, rationale, and gold label, respectively. In addition, Sun et al. (2022) evaluate explanations by comparing the accuracy of joke classification with and without explanations: Acc (IE \(\rightarrow\) O) - Acc (I \(\rightarrow\) O), where E denotes explanation. We introduce a moment localization task to compute the rationale quality score of video explanations. **Modular Vision-Language Learning**. As pretrained models become larger and are trained with more extensive datasets, various multimodal comprehension tasks have been tackled by composing these pretrained models. One approach is to transform visual information into discrete text words Zeng et al. (2022); Yang et al. (2022); Wang et al. (2022). Zeng et al. (2022) propose a modular framework that leverages an LLM to construct the input text for the subsequent model based on the output of multimodal models in the previous stage. They demonstrate performance improvements in image captioning and visual question answering (VQA) tasks. Another approach connects pretrained models through continuous feature embeddings Patro et al. (2021); Alayrac et al. (2022); Tiong et al. (2022). Li et al. (2023a) pretrain additional lightweight modules that bridge a frozen image encoder and LLMs to eliminate the modality gap between the two frozen pretrained models. Tewel et al. (2022) connect a frozen image encoder with a frozen language decoder and evolve additional pseudo tokens during inference to perform the video captioning task. Recently, there have been efforts to integrate these two approaches. Li et al. (2023b) introduce VideoChat, a chat-centric video understanding system consisting of two modules: VideoChat-Text and VideoChat-Embed. The former generates text descriptions from the video, and the latter encodes the video as embeddings. These text descriptions and embeddings are combined with a received question to form a prompt, based on which the LLM generates a response. In our work, we combine vision-language pretrained models with LLMs through text for two uses: (i) video filtering for collecting multimodal funny videos and (ii) video-to-text generation to provide LLMs with a prompt of video content.
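For reference, the simulatability-style metric quoted above can be sketched in a few lines; `model.predict` is a hypothetical label classifier, not a specific system from the cited works.

```python
def rationale_quality(model, data):
    """Acc(IR -> O) - Acc(I -> O) from Wiegreffe et al. (2021): how much a
    rationale R improves gold-label prediction over the input I alone."""
    acc_ir = sum(model.predict(d["input"] + " " + d["rationale"]) == d["label"]
                 for d in data) / len(data)
    acc_i = sum(model.predict(d["input"]) == d["label"] for d in data) / len(data)
    return acc_ir - acc_i
```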
## 3 The ExFunTube Dataset The ExFunTube dataset comprises 10,136 videos, each annotated with timestamps of funny moments and corresponding explanations describing why each moment is humorous. The purpose of this dataset is to evaluate models' ability to explain why a given video is funny, as a measure of understanding video humor. ### Video Collection and Filtering We initially crawl all 220K videos shared on the subreddit "r/youtubehaiku,"1 where people share humorous short-form YouTube videos lasting up to 30 seconds. To ensure multimodal humor in videos, we design a four-step filtering pipeline that selects videos with both visual and verbal elements contributing to humor, as shown in Figure 2. Footnote 1: [https://www.reddit.com/r/youtubehaiku/](https://www.reddit.com/r/youtubehaiku/) **Video Caption and Transcript**. In the first step (Figure 2 (a)), we obtain a transcript and a video caption to describe the verbal and visual elements of a video clip, respectively. We extract a video caption using a zero-shot video captioning model (Tewel et al., 2022). Since our dataset contains diverse videos, such as animations and edited videos not present in previous video datasets, we choose a model that utilizes both CLIP (Radford et al., 2021) and GPT-2 (Radford et al., 2019), which are pretrained on huge Web-sourced data. We transcribe the audio from the video clip using the speech-to-text model Whisper (Radford et al., 2022). We remove videos with no speech or in languages other than English. **Multimodal Humor**. Our goal is to collect videos that are funny from both verbal and visual elements, instead of funny from only one modality. Thus, as shown in Figure 2 (b), we first verify that the video is verbally funny; we do this by checking whether GPT-3.5 can find a funny utterance given a pair of the video caption and the transcript. If GPT-3.5 detects no funny utterance, we filter out the video. Next, as shown in Figure 2 (c), we again prompt GPT-3.5 to find a funny utterance with only a transcript (_i.e._, no video caption). If no funny utterance is detected, then we accept this video. The rationale is that the humor of this video is _multimodal_: the visual caption is required to identify the fun in the video. Otherwise, if GPT-3.5 can find a funny utterance in this case, we perform a further inspection as follows. **Difference in Explanations**. In the last step (Figure 2 (d)), GPT-3.5 is prompted to generate one-sentence explanations for two cases: when given both a video caption and a transcript, and when given only a transcript. We then measure the similarity between the two explanations using the SentBERT score (Reimers and Gurevych, 2019), which embeds each sentence and calculates the cosine similarity of their embeddings. The reason for adopting the SentBERT score is that it can reflect the semantics of the entire sentence. If the score is higher than a threshold, we exclude the video, since the video caption does not contribute to the humor explanation. Otherwise, the video is accepted. Figure 2: The video filtering pipeline selects multimodal funny videos. Red boxes display the actual prompts provided to GPT-3.5. See the details in § 3.1. (a) We generate a transcript and a caption from the input video. (b) Via GPT-3.5 prompting, we filter out the video that is not funny from the transcript and caption. (c) The video is accepted if it is funny from both the transcript and caption but not from the transcript only, since its humor is multimodal.
(d) GPT-3.5 generates humor explanations with or without the video caption. We remove videos whose two explanations are too similar, since their humor is then not multimodal. Examples for each case are presented in the Appendix. **Rationale of Our Pipeline**. There is not yet a method to gauge the extent and manner in which visual elements contribute to humor. In other benchmarks, the multimodality of datasets has been validated by analyzing the performance gap when visual information is either provided or not Hasan et al. (2019); Patro et al. (2021); Kumar et al. (2022). Similarly, we collect videos that exhibit differences in the assigned task (_i.e._, identifying humorous utterances with GPT-3.5) with or without visual information. In the field of NLI, previous works Liu et al. (2022); Wiegreffe et al. (2022); Chakrabarty et al. (2022) leverage the power of LLMs such as GPT-3 Brown et al. (2020) in creating figurative language examples or explanations for them. Likewise, we use GPT-3.5 to check the difference between generated explanations. To the best of our knowledge, this is the first approach that employs explanations for curating a dataset. Thanks to this pipeline, we collect 21K high-quality multimodal humorous videos. **Postprocessing**. To ensure that our dataset does not contain any disrespectful or harmful content towards individuals or animals, we conduct a thorough manual review of all 21K videos. We filter out videos using five criteria based on the safety objectives outlined by Thoppilan et al. (2022): (i) Discrimination: videos displaying discrimination based on race, gender, sexual orientation, age, or disability. (ii) Animal cruelty: videos depicting acts of animal cruelty, such as a cat falling. (iii) Dangerous goods, services, activities, or self-harm: videos featuring dangerous content like drugs, violence, or bullying. (iv) Obscenities or profanities: videos containing explicit language or sexual actions. (v) Shocking content: videos that include shocking content, such as gunshots or explosions. After this filtering, about 50% of the videos are removed, and we are left with 10,136 videos. ### Data annotations We crowdsource via Amazon Mechanical Turk (AMT) to annotate the start and end timestamps of funny moments and to provide text explanations for each moment. To participate in our dataset annotation, workers must meet the following criteria: a HIT approval rate of 99% or higher, a total of more than 10,000 approved HITs, and a location in AU, CA, GB, NZ, or US. We conduct a qualification test for these workers, selecting those who can effectively explain humor. Out of 219 workers, only 60 pass the qualification test, indicating the thoroughness of our selection. For each video, we instruct one worker to first identify up to three funny moments within a video (up to 30 seconds long) and then annotate why each moment is funny. To make workers explain both humor elements and justifications, we provide a recommended format: "_[What is funny]. It is funny because [Why funny]_". We only accept responses including both descriptions (_What_) and justifications (_Why_) and reject those that lack either. Given the difficulty of the task, we offer detailed feedback to the workers, helping them improve their performance to a high annotation standard. As a result, we obtain 11,166 explanations, each paired with the start and end timestamps of a moment. They consist of 44.3 words on average. Out of 10,136 videos, 9,222 contain one funny moment, 798 contain two, and 116 contain three. Most videos contain a single funny moment since videos are typically shorter than 30 seconds; however, given the varied content in each video, there can be any number of funny moments.
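The similarity test in step (d) of the filtering pipeline can be sketched with the sentence-transformers library. The encoder choice and threshold below are our assumptions; the paper specifies neither here.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SentBERT encoder

def keeps_video(expl_with_caption: str, expl_transcript_only: str,
                threshold: float = 0.8) -> bool:
    """Keep the video only if the two GPT-3.5 explanations differ enough,
    i.e., the video caption actually changed the humor explanation."""
    emb = model.encode([expl_with_caption, expl_transcript_only],
                       convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()  # SentBERT cosine similarity
    return score < threshold                     # too similar -> filtered out
```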
## 4 Approach We explore an approach to explaining video humor. Our idea is to first convert the video content into fine-grained text and then take advantage of recent powerful LLMs in a zero-shot manner. We aim to extract as much information from the videos into text as possible. Figure 3 shows the zero-shot video-to-text prompting that converts video content into a text input for LLMs. ### Fine-grained Text Prompts Videos contain visual and audio modalities. The audio is further split into speech and sound. For each component, we first generate text descriptions using state-of-the-art zero-shot models. Then, we arrange the text descriptions in chronological order and use them as a prompt. **Visual**. In order to populate high-quality text descriptions of the visual modality, we first (i) segment the video, (ii) generate multiple frame captions, and (iii) retrieve the best-matching caption with a video-to-text model. First, we employ PySceneDetect2 to divide a video into a set of \(N\) segments based on visual changes. During the filtering pipeline (§ 3.1), the speech-to-text model Whisper generates timestamps for each utterance. We also use them to split the segments further, resulting in more fine-grained and semantically meaningful video segments. Next, we extract frames at a rate of 5 fps from each of the \(N\) video segments. We generate \(K(=20)\) captions per frame using the image captioning model BLIP-2 Li et al. (2023a) with a "Who is doing what?" prompt, which can enhance action detection. We then have a frame caption corpus (# frames \(\times\)\(K\) captions) per segment. Subsequently, we use the video-to-text model InternVideo Wang et al. (2022) to retrieve the caption that best matches each video segment from the respective frame corpus. Finally, we obtain one caption per segment, resulting in a total of \(N\) captions, which are fine-grained descriptions of the visual component. **Speech**. We transcribe the audio with Whisper Radford et al. (2022), as done in our video filtering pipeline. We then predict the number of speakers and assign a speaker to each utterance utilizing ChatGPT OpenAI (2023). This speaker separation helps a deeper understanding of the dialogue. **Sound**. We extract sound tags to provide more context. We use an audio tagging model Schmid et al. (2022) to classify the entire audio stream. We select the top 3 predicted tags with confidence values higher than a threshold (0.3). We concatenate the tags and insert them at the beginning of the prompt. This can provide the model with the overall atmosphere of the video. ### Prompt Configuration and LLMs After extracting text from visual, speech, and sound, we configure the prompt as in the example of Figure 3. The prompt starts with a predefined text "Please generate \(\sim\)" to instruct LLMs to explain as if they were watching the video. We then include sound tags enclosed in parentheses and arrange the extracted text of speech and visuals for each video segment chronologically. To distinguish between video segments, we begin each segment with "Scene: ". Finally, we ask LLMs to generate an explanation of up to three sentences. **LLMs**. Although any LLMs can be adopted, we use three different ones: finetuned T5 Raffel et al.
(2020) and BART Lewis et al. (2020), and zero-shot GPT-3.5 text-davinci-003. Figure 3: (a) A zero-shot video-to-text prompting for converting video content into fine-grained text (§ 4.1). For the visual modality, the video is first divided into \(N\) segments, for each of which many candidate captions are generated, and the best one is finally chosen. For the audio modality, a transcript with speaker separation and sound tags are obtained. (b) The fine-grained text is configured as an input prompt to LLMs (§ 4.2). ## 5 Experiments We experiment with different models to see how well they explain the humor in the ExFunTube videos. We evaluate the models in three different ways: model-based automatic scores, rationale quality experiments, and human evaluation. ### Experimental Setup **Baselines**. We evaluate four types of explanation models. (i) **Text-only LLMs** generate explanations when only a transcript is provided (_i.e._, no visual input). We use T5 Large and BART Large with finetuning, and GPT-3.5 as a zero-shot model. (ii) **MAF** (Kumar et al., 2022) is a multimodal end-to-end model designed for video sarcasm explanation. It generates explanations from features of the three components (visual, speech, and audio). We train the model on our dataset. (iii) **VideoChat-Text** (Li et al., 2023b) is a multimodal prompting framework that textualizes video information into text, including video/clip captions, objects contained in the video, and a transcript. Given the prompt, GPT-3.5 generates explanations in a zero-shot manner. (iv) **LLMs with our prompting** generate explanations given a prompt created by our zero-shot video-to-text prompting, using the same LLMs as in (i): T5, BART, and GPT-3.5. Note that T5 and BART are finetuned to generate explanations given the generated prompts, while GPT-3.5 generates in a zero-shot manner. **Explanation Generation**. For all models finetuned on our dataset, we employ K-fold cross-validation as follows. We divide the entire dataset of 10,136 videos into five equal-sized subsets. In each iteration, we train the model on three subsets, use one subset for validation, and test on the remaining subset. We repeat this process five times, rotating the test subset in each iteration. Finally, we obtain predicted explanations for the entire set. **Evaluation**. To compare the predicted explanation with the gold explanation for each video, we concatenate the explanations for each moment into a single, unified explanation. For more details on the experiments, please refer to the Appendix. ### Results of Model-based Automatic Scores Since metrics based on word overlap may fail to reflect faithfulness and plausibility, as highlighted by Sun et al. (2022), we evaluate explanations using two model-based scores: the SentBERT score and ROSCOE (Golovneva et al., 2022). ROSCOE is a suite of metrics designed to evaluate the reasoning process within chain-of-thought prompting (Wei et al., 2022). It is suitable for our explanation task since our goal is to uncover the _reason_ for laughter (_i.e._, why is the video humorous?). Among the various scores provided by ROSCOE, we use the reasoning alignment (RA) score, which computes the contextual similarity between the hypothesis and the reasoning. Table 2 reports the model-based automatic scores of different methods.
We show not only the mean metric values but also the proportions of the test set with scores above various thresholds; @\(K\) represents the proportion of data points with scores equal to or greater than \(K\). The results show that, except for SentBERT @0.7, GPT-3.5 with our prompting reaches the best performance. In particular, the SentBERT and ROSCOE scores with our prompting are higher than those of the text-only baselines in all cases. In addition, our method outperforms the multimodal end-to-end baseline MAF and the multimodal zero-shot prompting baseline VideoChat-Text. The comparison of @\(K\) metrics shows even more significant differences, particularly for SentBERT @0.5 and ROSCOE @0.8, where the performance margin ranges from 0.1 (BART) to 0.27 (GPT-3.5) compared to the text-only baselines. This means that using transcripts alone may not be sufficient to understand the humor in our videos. \begin{table} \begin{tabular}{l l c c c c c c c c c c c} \hline \hline & & \multicolumn{8}{c}{Automatic Score} & \multicolumn{2}{c}{Rationale Quality} & Human \\ & & \multicolumn{5}{c}{SentBERT (\(\uparrow\))} & \multicolumn{3}{c}{ROSCOE (RA) (\(\uparrow\))} & \multicolumn{2}{c}{IoU (\(\downarrow\))} & Rating (\(\uparrow\)) \\ \cline{3-13} & & @0.7 & @0.6 & @0.5 & @0.4 & Mean & @0.8 & @0.7 & Mean & @0.3 & @0.5 & \\ \hline \multirow{3}{*}{Text-Only} & T5 & 0.154 & 0.355 & 0.585 & 0.795 & 0.534 & 0.406 & 0.871 & 0.780 & 10.3 & 21.9 & - \\ & BART & 0.169 & 0.388 & 0.617 & 0.807 & 0.545 & 0.440 & 0.875 & 0.785 & 13.7 & 30.1 & 0.178 \\ & GPT-3.5 & 0.149 & 0.310 & 0.556 & 0.774 & 0.529 & 0.371 & 0.841 & 0.772 & 18.8 & 22.5 & 0.385 \\ \hline MAF & - & 0.149 & 0.375 & 0.604 & 0.809 & 0.541 & 0.438 & 0.880 & 0.785 & 13.1 & 25.3 & 0.131 \\ \hline VideoChat-Text & GPT-3.5 & 0.115 & 0.345 & 0.618 & 0.839 & 0.539 & 0.414 & 0.900 & 0.783 & 13.9 & 26.5 & - \\ \hline \multirow{3}{*}{Our Prompting} & T5 & 0.230 & 0.483 & 0.719 & 0.887 & 0.584 & 0.543 & 0.932 & 0.804 & **2.9** & 12.5 & - \\ & BART & **0.238** & 0.500 & 0.730 & 0.886 & 0.588 & 0.554 & 0.935 & 0.805 & 6.3 & 23.9 & 0.282 \\ & GPT-3.5 & 0.214 & **0.541** & **0.806** & **0.945** & **0.602** & **0.639** & **0.971** & **0.817** & 5.5 & **9.3** & **0.523** \\ \hline Gold & & - & - & - & - & - & - & - & - & - & - & 0.792 \\ \hline \hline \end{tabular} \end{table} Table 2: Humor explanation results in terms of automatic scores (SentBERT and ROSCOE), rationale quality scores, and human rating. In the automatic scores, @K shows the proportion of test explanations whose scores are higher than K, and the Mean column is the average score of each metric. For rationale quality scores with funny moment localization, we adopt two IoU thresholds, 0.3 and 0.5; lower scores are better. For the human rating, five workers rate each of 100 randomly selected test videos from No (0), Weak No (0.25), Neutral (0.5), Weak Yes (0.75), to Yes (1). After excluding the highest and lowest scores, the remaining scores are averaged. ### Results of Rationale Quality Scores We conduct a rationale quality experiment following Wiegreffe et al. (2021) and Sun et al. (2022). Since our dataset consists of videos, unlike theirs, we adapt the experimentation scheme by evaluating rationale quality through a moment localization task, which aims to predict the funny moments, defined by their start and end timestamps in a video, given the text explanation. We use QD-DETR Moon et al.
(2023) as a localizer and divide the entire dataset into 8:1:1 splits for training (8,110), validation (1,013), and testing (1,013). During training, the localizer learns to predict the gold timestamps given a gold explanation. At inference, we compute the rationale quality as the difference in the localizer's predictions between receiving a model-generated explanation and receiving a gold explanation. Let \(M\) be a model-generated explanation, \(G\) a gold explanation, and \(\tau\) a threshold. For each test data point, we calculate the maximum IoU over the top-5 candidates given \(M\) or \(G\), denoted \(\mathrm{IoU}_{M}\) and \(\mathrm{IoU}_{G}\), respectively. We use the top 5 since there can be up to three funny moments in a single video and the localization predictions can overlap with each other. We accumulate the difference when \(\mathrm{IoU}_{M}>\tau\). The final score \(S\) is the sum of differences over all test data: \[S=\sum_{i=1}^{n}(\mathrm{IoU}_{G_{i}}-\mathrm{IoU}_{M_{i}})\cdot\mathbb{1}( \mathrm{IoU}_{M_{i}}>\tau),\] where \(n\) is the number of test data points, and \(\mathbb{1}(\cdot)\) is the indicator function. Table 2 shows the results when the IoU threshold \(\tau\) is set to 0.3 and 0.5. A lower score is better, as it is closer to the gold standard. For each LLM, the performance improves when our prompting is included, compared to the corresponding text-only baseline. In particular, our approach improves GPT-3.5 the most, with a score gap of 13.3 at threshold 0.3 and of 13.2 at 0.5. Again, all LLMs with our prompting perform better than MAF and VideoChat-Text. ### Results of Human Evaluations For the human evaluation, we employ 10 AMT workers, selected using the same criteria as in the dataset annotation but excluding those who already participated in the annotation. We randomly select 100 videos and evaluate the explanations generated by all models except the baselines using T5 and VideoChat-Text, which show worse automatic scores than the other text-only or multimodal baselines. We obtain human evaluations in two forms: rating and comparison. For the rating, workers rate each explanation as No (0), Weak No (0.25), Neutral (0.5), Weak Yes (0.75), or Yes (1) and check any shortcomings. We ask five workers for each explanation, exclude the highest and lowest scores, and take the average. For the comparison, workers compare GPT-3.5 with our prompting against (1) text-only GPT-3.5, (2) MAF, and (3) the gold explanations, and choose the better explanation. We ask five workers for each pair of comparisons. The rating results are presented on the far right of Table 2. The scores of BART and GPT-3.5 increase by about 0.1 when our prompting is included. The comparison results are presented in Figure 4. The number of votes for text-only GPT-3.5 is significantly lower than that for GPT-3.5 with our prompting, indicating that visual information is valuable and that our prompting conveys it effectively. In both rating and comparison, MAF shows lower performance than the text-only models despite being a multimodal model. This suggests that providing visual information as text to LLMs can be more effective than training a multimodal model end-to-end. Moreover, GPT-3.5 with our prompting, which shows the best results, still scores lower than Gold, indicating that understanding and explaining the humor in our dataset remains unsolved.
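The score \(S\) defined above is straightforward to compute once the localizer's top-5 IoUs are available. A minimal sketch; the per-example maximum IoUs are assumed to be precomputed.

```python
def rationale_quality_score(iou_gold, iou_model, tau=0.3):
    """S = sum_i (IoU_Gi - IoU_Mi) * 1[IoU_Mi > tau]; lower is better.
    iou_gold[i] / iou_model[i]: max IoU over the localizer's top-5 moment
    predictions given the gold / model explanation for test point i."""
    return sum(g - m for g, m in zip(iou_gold, iou_model) if m > tau)
```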
Figure 4: Results of human preference: comparing GPT-3.5 with our prompting to text-only GPT-3.5, MAF, and Gold, respectively. ### Analyzing LLMs with Humor Taxonomy We classify our dataset into a total of 20 humor categories, referring to Martin and Ford (2018) and Buijzen and Valkenburg (2004), and observe the baselines' performance across the humor taxonomy. We provide ChatGPT with the 20 categories along with a brief description and one example (_i.e._, one-shot learning) and instruct ChatGPT to classify each video based on the given explanation. Thanks to ChatGPT's powerful in-context learning capability, we effectively classify 10,136 videos based on their corresponding explanations. Figure 5 shows the models' performance by humor category. Excluding the Jokes and Self-deprecating classes, the performance increases with our prompting in all categories. In particular, the performance significantly increases in Clownish humor, Visual gags, and Slapsticks, which heavily rely on visual elements. This indicates that our zero-shot video-to-text prompting effectively conveys visual elements to the LLM. ### Ablation Study We compare the importance of each modality for humor explanation. Table 3 presents the SentBERT and ROSCOE scores when the visual, speech, and sound components are removed from the prompt one at a time. For GPT-3.5 with our prompting, the performance without the visual component drops as much as when speech is removed, indicating that the visual component plays an important role in our dataset. Moreover, the performance decreases whenever any component is removed, which suggests that all three components are crucial for understanding and explaining the humorous videos in our dataset. Additional ablation studies are presented in the Appendix. ## 6 Conclusion We introduced ExFunTube, a dataset consisting of 10,136 user-generated videos annotated with timestamps and explanations of funny moments. Our dataset aims to assess how well AI models understand and explain video humor. We devised a zero-shot video-to-text prompting to make existing LLMs better explain the video content. With three different evaluation methods, we demonstrated that the humor in our dataset is multimodal and that our prompting maximized LLMs' ability to generate explanations. However, as the performance still falls short of human levels, our dataset remains sufficiently challenging and calls for future research. Furthermore, future work could consider training models with user feedback for personalized humor understanding. Figure 5: Explanation performance according to humor taxonomy. We categorize all videos into 20 humor classes and compare the performance of eight different baselines in terms of the SentBERT score. The humor taxonomy is arranged in descending order of proportion in our dataset. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{GPT-3.5 w/ Prompting} \\ \cline{2-5} & w/o V & w/o T & w/o A & w/ V, T, A \\ \hline SentBERT & 0.512 & 0.497 & 0.574 & **0.602** \\ ROSCOE (RA) & 0.778 & 0.763 & 0.801 & **0.817** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation results of GPT-3.5 with our prompting measured by SentBERT and ROSCOE scores when each modality component is removed. V, T, and A denote visual, speech, and sound, respectively. ### Limitations Since the copyright remains with the original owners of the videos in our dataset, we will distribute only URLs instead of the videos themselves.
Our method relies on the performance of existing state-of-the-art models, as we use them in a zero-shot composition. Also, our approach composes models through text, so it could be worth exploring an adapter-based method for prompt tuning during inference. We processed the videos by dividing them into three modalities, but we did not consider the temporal structure of the sound. As timing can play a role in humor, analyzing the sound along the timeline could be helpful. Lastly, humor is subjective, which means that our collected explanations may be subjective, too. ## Ethics Statement We put much effort into ensuring that our dataset contains no inappropriate videos that may raise ethical issues. Based on the safety rules of Thoppilan et al. (2022), the authors manually viewed each video entirely from start to end and filtered it if any content matched the filtering criteria presented in the dataset postprocessing. Although we carefully reviewed all the videos, there could still be some videos that some viewers find uncomfortable. If such inappropriate videos are found, we will remove them in the future. Also, since we only recruit workers in AU, CA, GB, NZ, and US, as mentioned in the Appendix, cultural and geographic biases may influence the humor explanations. ## Acknowledgments We sincerely thank Jaekeyeom Kim, Jaewoo Ahn, Soochan Lee, Wonkwang Lee, Yeda Song, and Jaehyeon Son for their valuable comments. We would also like to thank the AMT workers for their commitment to building the **ExFunTube** dataset. This work was supported by the SNU-Global Excellence Research Center establishment project, the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2023-00274280), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00156, Fundamental research on continual meta-learning for quality enhancement of casual videos and their 3D metaverse transformation), and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)). Gunhee Kim is the corresponding author.
2306.09188
A note on secant defective varieties and Clifford modules
We generalise a construction of Landsberg, which associates certain Clifford algebra representations to Severi varieties. We thus obtain a new proof of Russo's Divisibility Property for LQEL varieties.
Oliver Nash
2023-06-15T15:15:52Z
http://arxiv.org/abs/2306.09188v1
# A note on secant defective varieties and Clifford modules ###### Abstract We generalise a construction of Landsberg, which associates certain Clifford algebra representations to Severi varieties. We thus obtain a new proof of Russo's Divisibility Property for LQEL varieties. ## 1 Introduction The geometry of secant-defective varieties is surprisingly rich. In the early \(20^{\mathrm{th}}\) Century, the subject captured the attention of several members of the Italian School of Algebraic Geometry and important results appear in numerous beautiful old papers, such as those of Scorza [15], Severi [16], and Terracini [17]. In the 1980s the subject enjoyed a renaissance, largely due to a series of breakthroughs made by Zak [18]. Zak's solution of Hartshorne's linear normality conjecture [3] led to his classification of maximally-secant-deficient varieties, which he named Severi varieties. He showed that there are exactly four Severi varieties, that they correspond to normed division algebras, and that their dimensions are 2, 4, 8, 16. The hardest part of the classification was establishing the dimension restriction. In 1996, an intriguing paper of Landsberg [8] appeared in which he showed that the extrinsic geometry of a Severi variety induces certain Clifford algebra representations. Using the classification of Clifford modules, it is then trivial to see that the dimensions of the Severi varieties must take the values already established by Zak. The main purpose of this note is to show that Landsberg's Clifford modules may be generalised. Severi varieties are examples of a class of varieties introduced by Russo [13] in 2009, called LQEL varieties, and we show that Landsberg's construction works in this more general setting. Actually the generalisation is only an extremely mild extension of Landsberg's results. However we believe it is worth highlighting because, together with the classification of Clifford modules, it provides a new proof of Russo's Divisibility Property for LQEL varieties (corollary 2.8). ## 2 Secant defective varieties and Clifford modules We shall follow Landsberg [6, 8] closely and so recall his constructions and notation1. Footnote 1: Note that [6] (which we follow) adopts slightly different conventions than [8]. For example \(\operatorname{Ann}(v)\) in [8] corresponds to \(\mathbb{P}\operatorname{Ann}(v)\) in [6]. Our primary object of concern is a subvariety of projective space. We write: \[X\subseteq\mathbb{P}^{n+a},\] to indicate that the variety \(X\) is \(n\)-dimensional and that the embedding has codimension \(a\). We work over \(\mathbb{C}\) throughout, and assume that \(X\) is smooth, irreducible, and non-degenerate2, with secant deficiency \(\delta\geq 1\). Footnote 2: Not contained in a hyperplane. ### Second fundamental form We recall [2] that the second fundamental form of an embedding \(X\subseteq\mathbb{P}^{n+a}\) is a section of \(\operatorname{Hom}(S^{2}TX,N)\), where \(S^{2}TX\) is the symmetric square of the tangent bundle and \(N\) is the normal bundle. Thus for \(x\in X\), we have a symmetric bilinear map: \[II^{x}:S^{2}T_{x}X\to N_{x}.\] When we have chosen a point \(x\in X\) and there is no possibility of confusion, we will write \(T\) for \(T_{x}X\), \(N\) for \(N_{x}\) and \(II\) for \(II^{x}\). 
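To keep a concrete picture in mind, the following coordinate description may be helpful (a standard worked example in the spirit of [2]; the normalisation below is ours). In a suitable affine chart centred at \(x\), with coordinates \((w,z)\in\mathbb{C}^{n}\times\mathbb{C}^{a}\) chosen so that \(T_{x}X=\{z=0\}\), the variety \(X\) is locally the graph: \[z^{\mu}=\tfrac{1}{2}\,q^{\mu}_{ij}w^{i}w^{j}+O(|w|^{3}),\qquad 1\leq\mu\leq a,\] and the second fundamental form is precisely the quadratic part of this graph, \(II(u,v)^{\mu}=q^{\mu}_{ij}u^{i}v^{j}\), so that \(II^{*}\) sends \(f=f_{\mu}\,dz^{\mu}\in N^{*}\) to the quadric \(f_{\mu}q^{\mu}_{ij}w^{i}w^{j}\in S^{2}T^{*}\).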
Taking the transpose, we also regard the second fundamental form as a linear system of quadrics: \[II^{*}:N^{*}\to S^{2}T^{*}.\] A key observation is that global properties of \(X\) are visible infinitesimally via the second fundamental form, and exceptional global properties tend to produce linear systems of quadrics with exceptional properties. For example, if \(X\) has a smooth dual variety, then at a general point \(II^{*}\) is a linear system of quadrics of _constant rank_, and from this follows Landman's parity theorem for dual-deficient varieties (see [6, Theorem 12.4.8 and Corollary 12.4.10]). We shall show that if \(X\) has the exceptional property that the Gauss map of a general tangential projection has zero-dimensional fibres, then its second fundamental form can be used to construct certain Clifford modules, and from this follows Russo's Divisibility Property for secant-deficient varieties (see [13, Theorem 2.8] or [14, Theorem 4.2.8]). ### Tangential projections We now assume \(X\) is non-linear3 and make two definitions to fix notation: Footnote 3: Note that this is automatic if \(\operatorname{Sec}(X)\neq\mathbb{P}^{n+a}\) since \(X\) is non-degenerate. **Definition 2.1**.: _Let \(x\in X\) and let \(\mathbb{T}_{x}X\subseteq\mathbb{P}^{n+a}\) be the embedded tangent space at \(x\). Away from \(\mathbb{T}_{x}X\) we define a rational map:_ \[\pi_{x}:X \rightarrow\mathbb{P}N,\] \[y \mapsto[\langle y,\mathbb{T}_{x}X\rangle],\] _where \(N\) is the fibre of the normal bundle of \(X\) at \(x\). The map \(\pi_{x}\) is known as the tangential projection map at \(x\)._ We recommend [14, §2.3.2, §3.3] for a valuable discussion of tangential projections. **Definition 2.2**.: _Let \(x\in X\) and \(II\) be the second fundamental form at \(x\). Away from \(\operatorname{Baseloc}II^{*}\) we define the rational map:_ \[ii:\mathbb{P}T \rightarrow\mathbb{P}N,\] \[[v] \mapsto[II(v,v)].\] We recall that the closures of the images of \(\pi_{x}\) and \(ii\) coincide and have dimension \(n-\delta\) (see e.g., [14, Proposition 2.3.5] and its proof). Let this \((n-\delta)\)-dimensional irreducible subvariety be: \[Z\subseteq\mathbb{P}N,\] and let: \[\operatorname{Sec}(X)\subseteq\mathbb{P}^{n+a},\] be the \((2n+1-\delta)\)-dimensional secant variety of \(X\), then we note for future reference that: \[\operatorname{codim}Z=\operatorname{codim}\operatorname{Sec}(X). \tag{1}\] ### Second fundamental form of a tangential projection The following is essentially a restatement of [8, Lemma 6.6]. Given a general point \(x\in X\) and a general4 vector \(v\in T\), let \(z=ii([v])\). It follows from the definition of \(ii\) that there is a natural commutative diagram: Footnote 4: Thus \(v\) is non-zero and \([v]\notin\operatorname{Baseloc}II^{*}\). where \(ii_{*[v]}\) is the derivative of \(ii\) at \([v]\) and \(II_{v}\) is the map: \[II_{v}:w\mapsto II(v,w).\] We thus have natural exact sequences: \[0\rightarrow\langle v,\ker II_{v}\rangle\to T\xrightarrow{\rho_{T}}T_{z}Z\to 0, \tag{2}\] and: \[0\to II_{v}(T)\to N\xrightarrow{\rho_{N}}N_{z}Z\to 0,\] where \(N_{z}Z\) is the normal bundle of \(Z\subseteq\mathbb{P}N\) at \(z\). The maps \(\rho_{T}\) and \(\rho_{N}\) fit into the following commutative diagram: (3) where \(\widetilde{II}\) is the second fundamental form of \(Z\) at \(z\). ### Singular locus of the second fundamental form Griffiths and Harris noticed that the quadrics of the second fundamental form are all singular along the fibres of the Gauss map. 
In fact they proved [2, (2.6)]: \[\operatorname{Singloc}(N^{*})=T_{x}F,\] where \(F\) is the fibre of the Gauss map through \(x\) and for any \(A\subseteq N^{*}\), \(\operatorname{Singloc}(A)\) is the intersection of all the singular loci: \[\operatorname{Singloc}(A)=\bigcap_{f\in A}\{v\in T\ |\ v\,\lrcorner\,II^{*}(f)=0\}. \tag{4}\] A key insight of Landsberg [8] was that there are natural subsystems \(A\subseteq N^{*}\) for which \(\operatorname{Singloc}(A)\) captures more delicate geometric features of \(X\). Indeed it follows from (3) that there is a natural exact sequence: \[0\to\ker\rho_{T}\to\operatorname{Singloc}(\rho_{N}^{*}(N_{z}^{*}Z))\to\operatorname{Singloc}(N_{z}^{*}Z)\to 0, \tag{5}\] and so for \(v\in T\), Landsberg defined: \[\operatorname{Ann}(v)=\rho_{N}^{*}(N_{z}^{*}Z)=II_{v}(T)^{\perp}=\{f\in N^{*}\ |\ v\,\lrcorner\,II^{*}(f)=0\},\] and studied the middle term \(\operatorname{Singloc}(\operatorname{Ann}(v))\) appearing in (5). We note for future reference that: \[\dim\operatorname{Ann}(v)=\operatorname{codim}Z=\operatorname{codim}\operatorname{Sec}(X)\quad\text{by (1)}.\] **Lemma 2.3**.: _Suppose that the Gauss map of \(Z\) has zero-dimensional fibres and let \(v\in T\) be general. Then:_ \[\operatorname{Singloc}(\operatorname{Ann}(v))=\langle v,\ker II_{v}\rangle.\] Proof.: By (2) we have \(\ker\rho_{T}=\langle v,\ker II_{v}\rangle\), and the result of Griffiths and Harris applied to \(Z\) identifies \(\operatorname{Singloc}(N_{z}^{*}Z)\) with the tangent space to the fibre of the Gauss map of \(Z\) through \(z\), which vanishes by hypothesis; the claim thus follows from (5). 
**Example 2.4**.: _The LQEL varieties introduced by Russo [13] include:_

* _the quadratic Veronese embeddings_ \(\nu_{2}(\mathbb{P}^{n})\subseteq\mathbb{P}^{n(n+3)/2}\) _for_ \(n\geq 2\) _(_\(\delta=1\)_),_
* _the binary Segre embeddings_ \(\mathbb{P}^{n}\times\mathbb{P}^{m}\subseteq\mathbb{P}^{mn+m+n}\) _for_ \(m+n\geq 3\) _(_\(\delta=2\)_),_
* _the rank-2 Plucker embeddings_ \(G(2,n)\subseteq\mathbb{P}^{(n-2)(n+1)/2}\) _for_ \(n\geq 5\) _(_\(\delta=4\)_),_
* _the 16-dimensional Severi variety in_ \(\mathbb{P}^{26}\) _(_\(\delta=8\)_),_

_as well as their linear projections. We recommend [14] for further discussion._ **Remark 2.5**.: _In [14, Definition 3.3.1], given general points \(x,y\in X\) and general \(p\in\operatorname{Sec}(X)\) on the line \(\overline{xy}\), Russo defines the contact locus \(\Gamma_{p}\subseteq X\) as:_ \[\Gamma_{p}=\overline{\{x\in X\ |\ T_{x}X\subseteq T_{p}\operatorname{Sec}(X)\}},\] _and notes that by Terracini's lemma:_ \[\Sigma_{p}\subseteq\Gamma_{p},\] _where \(\Sigma_{p}\) is the entry locus of \(X\) with respect to \(p\in\operatorname{Sec}(X)\)._ _Let \(Z=\overline{\pi_{x}(X)}\) be the closure of the tangential projection at \(x\) and \(F\subseteq Z\) be the fibre of the Gauss map of \(Z\) through \(z=\pi_{x}(y)\). As argued by Russo in the proof of [14, Lemma 3.3.2] the irreducible components of \(\pi_{x}^{-1}(F)\) and \(\Gamma_{p}\) through \(y\) coincide. We should thus have a natural exact sequence of tangent spaces:_ \[0\to T_{y}\Sigma_{p}\to T_{y}\Gamma_{p}\to T_{z}F\to 0. 
\tag{8}\] _The line \(\overline{yp}\) naturally determines a vector \(v\in T_{y}X\), and we expect:_ \[\operatorname{Singloc}(\operatorname{Ann}(v))=T_{y}\Gamma_{p},\] _so that (8) can be interpreted as (5). Given this, [14, Lemma 3.3.2 (2)] would provide an alternate proof of lemma 2.3 above._ We come at last to our main point: **Theorem 2.6**.: _Let \(X\subseteq\mathbb{P}^{n+a}\) be a smooth, irreducible, non-degenerate variety of secant deficiency \(\delta\geq 1\) such that \(\operatorname{Sec}(X)\neq\mathbb{P}^{n+a}\). Suppose that the Gauss map of the tangential projection at a general point \(x\) has zero-dimensional fibres, and let \(v\in T_{x}X\) be general. Then \(T/\operatorname{Singloc}(\operatorname{Ann}(v))\) carries a natural Clifford module structure over \(\ker II_{v}\)._ Proof.: Let \(Z=\overline{\pi_{x}(X)}\subseteq\mathbb{P}N\) be the closure of the image of the tangential projection at \(x\). The result we need is exactly [8, Lemma 6.26] except that we have made no assumption about \(Z\) being a cone (instead assuming that its Gauss map has zero-dimensional fibres) and we do not assume that \(Z\) is a hypersurface. In view of (1), \(Z\) is a hypersurface if and only if \(\operatorname{Sec}(X)\) is. However since the linear projection from a linear subspace which does not meet \(\operatorname{Sec}(X)\) is an isomorphism, we may select a maximal such subspace and project down to the case that \(\operatorname{Sec}(X)\) is a hypersurface; the lemma then applies, and our proof is complete. For the benefit of readers who wish to compare with [6], we provide a reference for the argument as presented there. The key equation is [6, (12.22) page 374]: \[q^{n+j}_{\epsilon\kappa}q^{n+k}_{\delta i}+q^{n+j}_{\delta k}q^{n+k}_{\epsilon i}=-2q^{n+1}_{\epsilon\delta}\delta^{i}_{j}\quad\forall\epsilon,\delta,j,k,i.\] The key assumption required for the derivation is \(S=0\) where: \[S=\dim\operatorname{Singloc}\operatorname{Ann}(v)-\dim\langle v,\ker II_{v}\rangle,\] which follows from lemma 2.3. **Remark 2.7**.: _We can restate theorem 2.6 without referring to the second fundamental form as follows._ _Let \(X\) be as in theorem 2.6 and let \(p\in\operatorname{Sec}(X)\) and \(x\in X\) be general points. Let \(Q\subseteq X\) be the irreducible component of the \(p\)-entry locus through \(x\) and let \(Q^{\prime}\subseteq Q\) be the corresponding5 irreducible component of the tangent locus through \(x\). Then \(T_{x}Q^{\prime}\) carries a non-degenerate quadratic form and the fibre \(N^{x}_{Q|X}\) of the normal bundle of \(Q\) in \(X\) is a Clifford module for the Clifford algebra \(Cl(T_{x}Q^{\prime})\)._ Footnote 5: See [11, Lemma 2.4]. We emphasise the following corollary: **Corollary 2.8**.: _Let \(X\) be as in theorem 2.6. Then:_ \[2^{\lfloor\frac{\delta-1}{2}\rfloor}\ \Big{|}\ \ n-\delta.\] Proof.: The result follows immediately from theorem 2.6 together with the classification of Clifford modules. Indeed if there exists a \(k\)-dimensional Clifford module of a non-degenerate \(l\)-dimensional complex quadratic form, then: \[p\ |\ \ k,\] where \(p=2^{\left\lfloor\frac{l}{2}\right\rfloor}\). The reason is that the Clifford algebra of the quadratic form is the matrix algebra \(\mathbb{C}^{p\times p}\) if \(l\) is even or \(\mathbb{C}^{p\times p}\oplus\mathbb{C}^{p\times p}\) if \(l\) is odd (see e.g., [1, Table 1]) and the natural action of \(\mathbb{C}^{p\times p}\) on \(\mathbb{C}^{p}\) is its unique irreducible representation. 
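As a quick sanity check (the script below is ours; the \((n,\delta)\) pairs are read off, in low-dimensional instances, from the LQEL examples listed above), one can verify the Divisibility Property numerically:

```python
# Verify 2^floor((delta - 1)/2) divides n - delta (corollary 2.8)
# on low-dimensional instances of the classical LQEL examples.

def divisibility_ok(n: int, delta: int) -> bool:
    """Russo's Divisibility Property for an n-dimensional variety of secant deficiency delta."""
    return (n - delta) % 2 ** ((delta - 1) // 2) == 0

examples = [
    ("Veronese nu_2(P^5)", 5, 1),       # quadratic Veronese embedding, delta = 1
    ("Segre P^2 x P^3", 5, 2),          # binary Segre embedding, delta = 2
    ("Pluecker G(2, 6)", 8, 4),         # rank-2 Pluecker embedding, dim = 2(6 - 2)
    ("Severi variety in P^26", 16, 8),  # the 16-dimensional Severi variety
]

for name, n, delta in examples:
    assert divisibility_ok(n, delta), name
    print(f"{name}: 2^{(delta - 1) // 2} divides {n - delta}")
```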
**Remark 2.9**.: _The divisibility condition established in corollary 2.8 was first proved by Russo and appeared in [13, Theorem 2.8] (see also [14, Theorem 4.2.8]). The proof involved an inductive study of the Hilbert scheme of lines through a general point of an LQEL variety._ _A second proof (due to the author) appeared as [11, Corollary 2.6]. This proof was topological and the key was a calculation in topological \(K\)-theory._ _We now have a third proof (albeit with slightly different assumptions) and this time it is Clifford module theory that is the key._ _It would be interesting to explore the relationship between the second and third proofs given the deep connections between \(K\)-theory and Clifford modules identified by Atiyah, Bott, and Shapiro in [1]. The first step should be to argue that Landsberg's construction actually provides bundles of representations of Clifford algebras, as \(x\) varies over a general tangent locus._ **Remark 2.10**.: _Note that the proof of corollary 2.8 shows that the 2 which appears in the expression \((\delta-1)/2\) corresponds to the mod-2 periodicity of Morita equivalence classes of complex Clifford algebras. Thus it is the same 2 which appears in complex Bott periodicity._ **Remark 2.11**.: _A quite different connection between Clifford modules and Severi varieties arises in the context of 'Clifford structures', introduced by Moroianu and Semmelmann in [10]. The Severi varieties appear in the classification of parallel non-flat even Clifford structures in [10] (see also [12]). It might be interesting to explore whether these Clifford structures have any relationship to Landsberg's Clifford modules._ ## 3 A remark about the \(\delta\leq 8\) problem Let \(X\subseteq\mathbb{P}^{n+a}\) be a smooth, irreducible, non-degenerate, subvariety with \(\operatorname{Sec}(X)\neq\mathbb{P}^{n+a}\). It follows from Zak's proof of Hartshorne's linear normality conjecture that the secant deficiency satisfies: \[\delta\leq\left\lfloor\frac{n}{2}\right\rfloor. \tag{9}\] In the course of their exposition [9] of Zak's work, Lazarsfeld and Van de Ven highlighted that all known examples of \(X\) as above satisfy \(\delta\leq 8\). They thus posed the problem to investigate whether \(\delta\) could be arbitrarily large (see [9, SS1f, page 19]). In view of (9), any \(X\) with \(\delta>8\) must have dimension \(n\geq 18\). Very little progress has been made on this problem in the 40 years since it was first posed. Kaji [7] has shown that any variety with \(\delta>8\) must be non-homogeneous but otherwise the problem remains completely open: all known examples still satisfy \(\delta\leq 8\) and the 16-dimensional Severi variety remains the only variety known to achieve \(\delta=8\). The problem remains open even for the very special class of LQEL varieties (see [14, chapter 4.4, page 113] as well as [4, Remark 3.8] and [5, Conjecture 4.5]). We mention this problem here, to highlight that by combining known results, one may sharpen (9) slightly as follows: **Lemma 3.1**.: _Let \(X\subseteq\mathbb{P}^{n+a}\) be a smooth, irreducible, non-degenerate subvariety with \(\operatorname{Sec}(X)\neq\mathbb{P}^{n+a}\). Suppose \(n\geq 17\), then:_ \[\delta\leq\left\lfloor\frac{n-1}{2}\right\rfloor.\] Proof.: For a general tangential projection of \(X\), let \(\tilde{\gamma}\) be the dimension of the fibre of its Gauss map and \(\tilde{\xi}\) its dual deficiency. Suppose first that \(\tilde{\gamma}=0\). 
We may assume \(\delta\geq 1\) (otherwise there is nothing to prove) so by Scorza's Lemma [14, Theorem 3.3.3]\(X\) is an LQEL variety. By [14, Corollary 4.4.11]: \[\delta\leq\left\lfloor\frac{n+8}{3}\right\rfloor\leq\left\lfloor\frac{n-1}{2}\right\rfloor\] since \(n\geq 17\). It remains to consider the case \(\tilde{\gamma}\geq 1\). By [14, Theorem 5.4.1, Lemma 3.3.2]: \[\delta\leq\frac{n-\tilde{\xi}}{2}\leq\frac{n-\tilde{\gamma}}{2}\leq\frac{n-1} {2},\] as required6. Footnote 6: We note in passing that one could instead deal with the case \(\tilde{\gamma}\geq 1\) using results of Landsberg. Indeed after projecting to ensure \(\operatorname{Sec}(X)\) is a hypersurface, one could apply [8, Corollary 7.3] (equivalently, the inequality at the bottom of page 373 of [6]). Note that lemma 3.1 increases the dimension restriction on a variety with \(\delta>8\) slightly to \(n\geq 19\). One might hope to make further progress by studying varieties for which \(\tilde{\gamma}=1\) and then arguing as in lemma 3.1 but with three cases corresponding to whether \(\tilde{\gamma}\) is 0, 1, or at least 2.
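The elementary inequality invoked in the LQEL case of the proof of lemma 3.1 can also be checked mechanically (a small verification script, ours):

```python
# Check floor((n + 8)/3) <= floor((n - 1)/2), as used in the proof of
# lemma 3.1; it holds for every n >= 17 and fails at n = 16 (8 > 7).
for n in range(17, 10**4):
    assert (n + 8) // 3 <= (n - 1) // 2, n
print("floor((n+8)/3) <= floor((n-1)/2) holds for all 17 <= n < 10^4")
```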
2307.06192
Failed supernova simulations beyond black hole formation
We present an axisymmetric failed supernova simulation beyond black hole formation, for the first time with numerical relativity and two-moment multi-energy neutrino transport. To ensure stable numerical evolution, we use an excision method for neutrino radiation-hydrodynamics within the inner part of the black hole domain. We demonstrate that our excision method is capable of stably evolving the radiation-hydrodynamics in dynamical black hole spacetime. As a remarkable signature of the final moment of the proto-neutron star (PNS), we find the emergence of high energy neutrinos. Those high energy neutrinos are associated with the PNS shock surface being swallowed by the central black hole and could be a possible observable of failed supernovae.
Takami Kuroda, Masaru Shibata
2023-07-12T14:33:27Z
http://arxiv.org/abs/2307.06192v2
# Failed supernova simulations beyond black hole formation ###### Abstract We present an axisymmetric failed supernova simulation beyond black hole formation, for the first time with numerical relativity and two-moment multi-energy neutrino transport. To ensure stable numerical evolution, we use an excision method for neutrino radiation-hydrodynamics within the inner part of the black hole domain. We demonstrate that our excision method is capable of stably evolving the radiation-hydrodynamics in dynamical black hole spacetime. As a remarkable signature of the final moment of the proto-neutron star (PNS), we find the emergence of high energy neutrinos. Those high energy neutrinos are associated with the PNS shock surface being swallowed by the central black hole and could be a possible observable of failed supernovae. keywords: (stars:) supernovae: general - stars: black holes - neutrinos - gravitational waves ## 1 Introduction Massive stellar collapse is one of the main formation channels of stellar-mass black holes (BHs), whose existence was observationally substantiated through numerous coalescence events (e.g. Abbott et al., 2016, 2019). Massive stars heavier than \(\sim 8\,\mathrm{M}_{\odot}\) undergo a catastrophic gravitational core-collapse (CC) at the end stage of their evolution. The subsequent evolutionary path is rich in variety and determines the remnant property. Broadly speaking, less to moderately massive stars explode as core-collapse supernovae (CCSNe), whereas more massive stars are prone to fail to explode, sometimes completely and sometimes exhibiting only a feeble explosion (Nomoto et al., 2006; Tanaka et al., 2009). At the same time, some of the more massive stars are known to be accompanied by a very energetic explosion termed a hypernova (Iwamoto et al., 1998), whose explosion energy is about one order of magnitude larger than that of canonical SNe. The CCSN explosion scenario and the mass range determining the fate are yet to be fully understood (for reviews, see Janka et al., 2016; Muller, 2016; Burrows and Vartanyan, 2021). It is evident, however, that unless the explosion possesses sufficient energy to expel substantial amounts of the stellar mantle, the central compact remnant will ultimately acquire a mass that surpasses the maximum mass limit, above which its internal pressure cannot counteract its own self-gravitational force, thereby leading to the formation of a black hole. The remnant property is tightly connected with its progenitor mass (Woosley et al., 2002; Heger et al., 2003). In general, the more massive the progenitor is, the higher the probability of BH formation. Moreover, recent parametric studies, focusing on the explodability by the standard neutrino heating mechanism, have revealed that the compactness (O'Connor and Ott, 2011) could potentially be a good indicator of BH formation (see also, e.g., Ugliano et al., 2012; Sukhbold et al., 2016; Muller et al., 2016; Ertl et al., 2016; Ebinger et al., 2019). As these studies indicate, the formation of a BH is predominantly determined by the compactness of the progenitor star, along with the detailed explosion scenario (but see Burrows and Vartanyan (2021) for counterexamples). There are currently numerous multi-dimensional simulations reporting a successful SN explosion (e.g., Muller and Vartanyan, 2020; Burrows et al., 2020; Stockinger et al., 2020; Bollig et al., 2021; Nakamura et al., 2022; Vartanyan et al., 2022). 
These studies are primarily directed towards less massive, or more precisely less compact, progenitor stars, in which the canonical neutrino heating mechanism can trigger the explosion, leaving behind a neutron star (NS). However, there are several pieces of observational evidence of a "failed" supernova (Kochanek et al., 2008; Smartt, 2015; Adams et al., 2017). These events report a sudden disappearance of a red supergiant, suggesting that the whole progenitor star collapses and becomes a BH without a noticeable explosion. Furthermore, exceptionally low-energy SNe, e.g., SN 2008ha (Valenti et al., 2009; Foley et al., 2009), were detected, which could possibly be explained by "fallback" during the SN explosion (Kawabata et al., 2010; Fryer et al., 2009). Should these events be the gravitational collapse of a massive star, the remnant is most likely a BH, given the inferred small ejecta mass. These observations, possibly associated with BH formation, strongly motivate us to explore the failed and fallback SN scenarios. There were, however, severe numerical difficulties in performing SN simulations in BH spacetime. First, multi-dimensional SN simulations in general relativity (GR), for instance with numerical relativity, are still rare, e.g., Muller et al. (2010) (and its subsequent works) using the so-called conformal flatness condition (CFC) or Kuroda et al. (2016) with a Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formalism (Shibata and Nakamura, 1995; Baumgarte and Shapiro, 1999). Since BHs are fundamentally general relativistic objects, the formation process, namely from the onset of gravitational collapse of the massive progenitor to BH formation and beyond, can be precisely followed only by numerical relativity. Second, sophisticated neutrino transport is essential for modern SN simulations. However, numerical relativity simulation in BH spacetime combined with sophisticated neutrino transport is currently still challenging. To date, simulations only up to BH formation (Kuroda et al., 2018; Shibagaki et al., 2020; Kuroda et al., 2022) or switching to Newtonian gravity with a large excision region (several times the Schwarzschild radius) immediately after BH formation (Chan et al., 2018; Rahman et al., 2022) have been reported. Very recently Sykes et al. (2023) reported the first SN simulations solving the full spatial domain above the BH, i.e., without discarding a large computational domain in the vicinity of the central BH, based on the CFC metric. The main obstacle for neutrino transport in BH spacetime, or rather immediately after BH formation, stems from the rapid change of the matter field. At the moment of BH formation, the (rest mass) density just above the BH is generally high, \(\gtrsim 10^{14}\) g cm\({}^{-3}\). The density, however, quickly decreases to \(\sim 10^{10}\) g cm\({}^{-3}\) within a few ms concomitantly with the proto-neutron star (PNS) being swallowed by the central BH. This indicates that the region in the vicinity of the BH rapidly shifts from optically thick to thin conditions, and such extreme conditions make neutrino transport with full interactions a significantly challenging subject. In addition, the matter (and probably also radiation) field inside the BH is typically required to be "excised" for stable numerical evolution. As of now, however, there is no concrete prescription for how the radiation field should be treated inside the excised region and inside the BH for stable numerical evolution. 
In this study, we report our first SN simulation beyond BH formation with numerical relativity and multi-energy neutrino transport. We use an excision method for both matter and neutrino radiation fields inside a part of the BH domain. Our excision method demonstrates stable evolution immediately after BH formation as well as in the subsequent BH accretion phase. Furthermore, we find the emergence of high energy neutrinos associated with the PNS shock surface being swallowed by the central BH, which could potentially be a probe of the very final moment of the PNS. We also show that these high energy neutrinos could be detectable by the current and next-generation neutrino detectors if the BH formation happens in our Galaxy. This paper is organized as follows. Section 2 starts with a concise summary of our GR radiation-hydrodynamic scheme with an excision scheme and also describes the initial setup of the simulation. The main results and detailed analysis of our new findings are presented in Section 3. We summarize our results and conclude in Section 4. Throughout the paper, Greek indices run from 0 to 3 and Latin indices from 1 to 3, except \(\nu\) and \(\varepsilon\) which denote neutrino species and energy, respectively. ## 2 Method In our full GR radiation-hydrodynamics simulations, we solve the evolution equations of metric, hydrodynamics, and energy-dependent neutrino radiation. Each of the evolution equations is solved in an operator-splitting manner, while the system evolves self-consistently as a whole, satisfying the Hamiltonian and momentum constraints (Kuroda et al., 2016). In Sec. 2.1, we describe our numerical method focusing particularly on the excision method applied to the neutrino radiation-hydrodynamics variables. Sec. 2.2 is devoted to explaining the computed model and numerical setup. ### Radiation hydrodynamics in BH spacetime We solve full GR multi-energy neutrino transport equations in axisymmetric 2 + 1 dimensions (two spatial dimensions and one momentum-space dimension). Details of the code are described in our previous studies (Kuroda et al., 2016, 2022). The black hole spacetime is evolved using the BSSN formalism (Shibata & Nakamura, 1995; Baumgarte & Shapiro, 1999) with fourth order finite differencing for the spatial derivatives and a four-step Runge-Kutta method. We choose the '1+log' slicing condition for the lapse and the gamma-driver condition for the shift vector (Alcubierre et al., 2003). BH formation is determined by identifying the location of the apparent horizon (AH) with an AH finder, e.g., Shibata (1997). After the AH formation, we enforce an excision method for the radiation-hydrodynamics inside the AH, while we evolve the full black hole spacetime without excision for the geometrical variables. Here we will briefly explain our excision technique for the radiation-hydrodynamics. Once the AH is found, we divide the interior of the AH into two regions: inner and outer. The interface of these two regions is located at \(fr_{\rm AH}(\theta)\), where \(f\in[0,1]\) and \(r_{\rm AH}(\theta)\) denotes the radius of the AH in the \(\theta\)-direction, with \(\theta\) being the angle with respect to the \(z\)-axis. In the outer region, we solve the full neutrino radiation-hydrodynamics in the same way as outside the AH (i.e. \(r>r_{\rm AH}\)). 
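The decomposition just described can be summarised in a short sketch (ours, not the authors' code; the function names, the toy grid, and the value of \(f\) are assumptions for illustration only). Cells with \(r<fr_{\rm AH}(\theta)\) are excised and overwritten with the frozen values given in the equation that follows; cells with \(fr_{\rm AH}(\theta)\leq r\leq r_{\rm AH}(\theta)\) form a buffer inside the AH in which the full radiation-hydrodynamics is still solved; cells outside the AH are evolved as usual.

```python
import numpy as np

def excision_zones(r, theta, r_ah_of_theta, f=0.5):
    """Integer mask per cell: 0 = excised, 1 = buffer inside the AH, 2 = exterior."""
    r_ah = r_ah_of_theta(theta)
    zones = np.full(r.shape, 2, dtype=int)   # default: outside the AH
    zones[r <= r_ah] = 1                      # inside the AH: buffer region
    zones[r < f * r_ah] = 0                   # innermost region: excised
    return zones

# Toy example: a spherical AH of radius 10 km on a small (r, theta) grid;
# f = 0.5 is an assumed illustration, not a value quoted in the paper.
r, theta = np.meshgrid(np.linspace(0.0, 30.0, 64),
                       np.linspace(0.0, np.pi / 2, 32), indexing="ij")
zones = excision_zones(r, theta, lambda th: 10.0 + 0.0 * th)
print(np.bincount(zones.ravel()))  # cell counts: excised, buffer, exterior
```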
On the other hand, we excise the inner region and artificially set all primitive variables, i.e., the rest mass density \(\rho\), entropy \(s\), electron fraction \(Y_{e}\), spatial components of the four-velocity \(u^{i}\), and the zeroth and first order neutrino radiation moments (\(E_{(\nu,\varepsilon)},F^{i}_{(\nu,\varepsilon)}\)), as \[\left[\begin{array}{c}\rho\\ u^{i}\\ s\\ Y_{e}\\ E_{(\nu,\varepsilon)}\\ F^{i}_{(\nu,\varepsilon)}\end{array}\right]=\left[\begin{array}{c}\sim 0.1\rho_{\rm max}\\ 0\\ \approx 1.5\,k_{\rm B}\,{\rm baryon}^{-1}\\ \approx 0.15\\ E_{{\rm thick},(\nu,\varepsilon)}\\ F^{i}_{{\rm thick},(\nu,\varepsilon)}\end{array}\right],\] where the subscript 'thick' denotes the radiation moments evaluated in the optically thick limit. The 2D axially symmetric computational domain extends to \(1.5\times 10^{4}\,\mathrm{km}\) from the center. In the cylindrical computational domain, 2:1 ratio nested boxes with 11 refinement levels are embedded, and each nested box contains \(64\times 64\) cells so that the finest resolution at the center becomes \(\approx\)230 m. In this work, we assume plane symmetry with respect to the equatorial plane. The neutrino energy space \(\varepsilon\) logarithmically covers from 3 to 400 MeV with 14 energy bins. In this study, we use the up-to-date neutrino rates of Kotake et al. (2018), which are used also in our recent studies (Kuroda et al., 2022; Kuroda & Shibata, 2023). ## 3 Results We first describe the picture of the post-bounce evolution until the formation of the BH. Fig. 1 shows: (a) the maximum rest-mass density \(\rho_{\mathrm{max},15}\) in units of \(10^{15}\,\mathrm{g\,cm^{-3}}\) (black), the baryon mass of the PNS \(M_{\mathrm{PNS}}\) (blue), and the central lapse function \(\alpha_{\mathrm{c}}\) (red); (b) the neutrino luminosity \(L_{\nu,51}\) in units of \(10^{51}\,\mathrm{erg\,s^{-1}}\) for each neutrino species; and (c) the neutrino mean energy \(\langle\varepsilon_{\nu}\rangle\). The PNS surface is defined by the location for which the rest mass density drops below \(10^{10}\,\mathrm{g\,cm^{-3}}\). \(L_{\nu}\) and \(\langle\varepsilon_{\nu}\rangle\) are evaluated from the emergent neutrino spectra measured at \(r=400\,\mathrm{km}\). In panel (a), we also plot the maximum mass of the DD2 EOS for cold and non-rotating stars as the horizontal dash-dotted line at \(2.42\,\mathrm{M_{\odot}}\). Panel (a) exhibits that \(M_{\mathrm{PNS}}\) exceeds the maximum allowed mass of the current EOS at \(t_{\mathrm{pb}}\sim 100\,\mathrm{ms}\). However, because of an additional contribution from thermal pressure, the PNS does not immediately collapse to a black hole. From the maximum density evolution, we see a sharp increase at \(t_{\mathrm{pb}}\sim 177\,\mathrm{ms}\); at the same time, \(\alpha_{\mathrm{c}}\) decreases to \(\sim 0\). This signals the BH formation. Prior to the BH formation, at \(t_{\mathrm{pb}}\gtrsim 160\,\mathrm{ms}\), the electron and anti-electron type neutrino luminosities show a decreasing trend, while heavy-lepton neutrinos show a rapid increase in both luminosity and mean energy. These features were previously identified in 1D full-GR simulations with Boltzmann neutrino transport (Liebendorfer et al., 2004) and are commonly observed in the literature, due to the rapid contraction of the PNS to the forming BH (see also, Sumiyoshi et al. (2007); Fischer et al. (2009); Hempel et al. (2012); Gullin et al. (2022) as well as 3D models by Kuroda et al. (2018); Shibagaki et al. (2021)). The overall features before the BH formation are in good agreement with our former model \(z70\) reported in Kuroda et al. 
(2022), in which the DD2-based nuclear EOS taking into account a first-order quantum chromodynamics (QCD) phase transition was used. Taking into account the fact that the QCD phase transition occurs after the PNS starts collapsing (Kuroda et al., 2022), the agreement between the current and previous models is quite reasonable. We also compare the BH formation time with previous related studies. O'Connor & Ott (2011) presented a nice correlation between the BH formation time, obtained from various 1D GR models, and the compactness parameter of the progenitor star. According to their Fig. 6, massive stars having \(\xi_{2.5}=1\), which is the case for the current model, form a BH at \(t_{\mathrm{pb}}\sim 250-750\,\mathrm{ms}\), where the time variation reflects the different nuclear EOSs. Powell et al. (2021) performed faint SN simulations in 3D using a zero-metallicity progenitor star with \(85\,\mathrm{M_{\odot}}\), whose compactness parameter is \(\xi_{2.5}=0.86\). They witnessed shock revival prior to BH formation, which to some extent suppresses subsequent mass accretion onto the PNS and may delay the BH formation. Their models exhibited BH formation occurring at \(t_{\mathrm{pb}}\sim 290-590\,\mathrm{ms}\). Using similarly massive progenitor stars, Rahman et al. (2022) also demonstrated faint SN scenarios with BH formation occurring at \(t_{\mathrm{pb}}\sim 350-400\,\mathrm{ms}\). In addition, a recent study of Sykes et al. (2023) presented BH formation at \(t_{\mathrm{pb}}\sim 220\,\mathrm{ms}\) for the same progenitor model used in Powell et al. (2021). Considering that our numerical formalism is totally independent of these previous studies and also that we use a different progenitor model, some variations in the BH formation time are expected to emerge. At the same time, less massive stars, e.g., with \(\xi_{2.5}\sim 0.25\), are predicted to form a BH only at \(t_{\mathrm{pb}}\gtrsim\)2 s unless successful shock revival occurs (O'Connor & Ott, 2011); all previous studies, including this one, present consistent BH formation times that are substantially quicker than the \(t_{\mathrm{pb}}\gtrsim\)2 s expected for such less massive stars. Next we discuss the neutrino radiation-hydrodynamics evolution after the BH formation, focusing mainly on how effectively our excision method manages to prevent the propagation of spurious behaviours that often appear at the excision boundary. Fig. 2 displays spherically averaged spatial profiles of the rest mass density (top-left), electron fraction (top-right), entropy (middle-left), radial component of the three velocity (middle-right), electron type neutrino luminosity (bottom-left), and anti-electron type (solid line) and heavy-lepton type (dash-dotted line) neutrino luminosities (bottom-right), at several time slices. In the middle-left panel, we additionally plot a temperature profile, but only at the formation of the BH (red dash-dotted line), which is used in the later discussion with Fig. 3. Each color represents the post BH formation time \(t_{\mathrm{BH}}\), denoted in the top-left panel. Once the AH is formed, we plot structures only outside the AH. Slightly before AH formation, at \(t_{\mathrm{BH}}=-0.1\,\mathrm{ms}\), the central density exceeds \(10^{15}\,\mathrm{g\,cm^{-3}}\) and the velocity profile inside the PNS shows an infalling structure. For \(t_{\mathrm{BH}}\geq 0\,\mathrm{ms}\), for which we apply the excision method described in the previous section, we see essentially no numerical instabilities at the interface of the AH. 
All the neutrino radiation fields and hydrodynamical variables exhibit smooth structures across the AH and are subsequently swallowed into the BH. Figure 1: Overall evolution features. Panel (a): the maximum rest-mass density \(\rho_{\mathrm{max}}\) (black), central lapse function \(\alpha_{\mathrm{c}}\) (red), and baryon mass of the PNS \(M_{\mathrm{PNS}}\) (blue); (b): neutrino luminosity \(L_{\nu,51}\) in units of \(10^{51}\,\mathrm{erg\,s^{-1}}\); and (c) neutrino mean energy \(\langle\varepsilon_{\nu}\rangle\). Neutrino profiles are evaluated at \(r=400\,\mathrm{km}\). In panels (b) and (c), the color represents the neutrino species: electron type neutrino (black), electron type antineutrino (red), and heavy lepton type neutrino (blue). From the density structural evolution, the maximum density drops by four orders of magnitude, from \(\sim 10^{14}\,\mathrm{g\,cm^{-3}}\) to \(\sim 10^{10}\,\mathrm{g\,cm^{-3}}\), within a few ms, presenting a clear transition from optically thick to thin conditions. This feature makes SN simulations in dynamical BH spacetime numerically very challenging. We found that, if we suddenly switch off the neutrino-matter interactions inside the AH, it causes spurious behaviors, which eventually leak out to the outside and lead to a code crash. Therefore we believe that it is essential to ensure a buffer zone between the AH and the excised region, especially when the neutrino radiation fields are taken into account. During the first few ms after AH formation, low-\(Y_{e}\) and high entropy material, which represents typical shocked PNS material, is still present outside the AH. It is, however, immediately swallowed by the BH, and for \(t_{\mathrm{BH}}\gtrsim 3\) ms the BH accretion enters a nearly steady state, exhibiting high-\(Y_{e}\) (\(\sim 0.49\)) and relatively low entropy (\(\sim 5\,\mathrm{k_{B}}\,\mathrm{baryon^{-1}}\)) flows (see magenta lines). Next we focus on how the neutrino signals in association with the BH formation are radiated away. The bottom two panels indicate that all neutrino species have an outgoing flux for \(r\gtrsim 30\,\mathrm{km}\) at the time of the BH formation. In the vicinity of the AH, on the other hand, the neutrino radiation fields experience a strong drag from the infalling high density component (\(\gtrsim 10^{12}\,\mathrm{g\,cm^{-3}}\)) and have an inward flux. After the mass accretion becomes a nearly steady state flow for \(t_{\mathrm{BH}}\gtrsim 3\) ms, the dominant neutrino-matter interaction is electron capture, due to the continuous replenishment of high-\(Y_{e}\) material (\(\sim 0.49\), see top-right panel) from the stellar mantle. It results in a sustained neutrino emission even after the BH formation for electron type neutrinos (see blue and magenta lines in the bottom-left panel in Fig. 2), while the remaining neutrino species have essentially no production channel and their luminosities quickly subside. Sykes et al. (2023) reported a BH excision scheme with neutrino transport. In their long-time failed CCSN simulation in 1D spherical symmetry, qualitatively similar spatial profiles of the neutrino luminosities, namely a relatively strong \(\nu_{e}\) emission continuing even after BH formation, were also found. Fig. 3 displays: (a) the irreducible mass \(M_{\mathrm{irr}}\) and the 2-norm of the Hamiltonian constraint violation \(||H||_{2}\), (b) neutrino luminosities, and (c) mean neutrino energies, as a function of \(t_{\mathrm{BH}}\). 
Here, \(M_{\mathrm{irr}}\) is defined by the area of the apparent horizon \(A\) as \(M_{\mathrm{irr}}=\sqrt{A/16\pi}\) (cf. Baumgarte et al., 1996; Shibata, 1997), and \(||H||_{2}\) measures the constraint violation only for numerical cells outside the AH. From panel (a), the irreducible mass shows an increasing trend from \(M_{\mathrm{irr}}\sim 2.88\,\mathrm{M_{\odot}}\) to \(\sim 3.06\,\mathrm{M_{\odot}}\) during the first \(40\,\mathrm{ms}\). At the moment of the AH formation, the measured value of the proto-neutron star mass, \(M_{\mathrm{PNS}}\), is \(\sim 2.76\,\mathrm{M_{\odot}}\), which rapidly decreases to \(\lesssim 0.001\,\mathrm{M_{\odot}}\) (the total mass outside of the AH where \(\rho\geq 10^{10}\,\mathrm{g\,cm^{-3}}\) is met) within a few ms. It means that the estimated \(M_{\mathrm{irr}}\) is slightly larger than \(M_{\mathrm{PNS}}\) at \(t_{\mathrm{BH}}=0\,\mathrm{ms}\). Furthermore, from panel (a), \(M_{\mathrm{irr}}\) initially shows a slightly odd behavior, a nearly constant evolution until \(t_{\mathrm{BH}}\sim 8\,\mathrm{ms}\), and it increases afterward. From these, we naively suspect that the current numerical resolution at the center, \(\Delta x\sim 230\,\mathrm{m}\), might not be high enough1 to accurately resolve the location of the apparent horizon and may tend to overestimate the initial BH mass approximately by \(\sim 0.1\,\mathrm{M_{\odot}}\), i.e., a \(\sim 3\,\%\) error in the evaluation of the total BH mass or the AH radius. However, once the system relaxes to a quasi-steady state for \(t_{\mathrm{BH}}\gtrsim 10\,\mathrm{ms}\), \(M_{\mathrm{irr}}\) increases with a reasonable growth rate of \(\dot{M}_{\mathrm{irr}}\approx 4.66\,\mathrm{M_{\odot}}\) s\({}^{-1}\), which agrees approximately with that of the PNS mass, \(\dot{M}_{\mathrm{PNS}}\approx 4.73\,\mathrm{M_{\odot}}\) s\({}^{-1}\), before the BH formation (see panel (a) in Fig. 1). Figure 3: Post BH formation evolution of: (a) the irreducible mass \(M_{\mathrm{irr}}\) and the 2-norm of the Hamiltonian constraint violation \(||H||_{2}\), (b) neutrino luminosities, and (c) mean neutrino energies, as a function of \(t_{\mathrm{BH}}\). In panels (b) and (c), the color represents the neutrino species: electron type neutrino (black), electron type antineutrino (red), and heavy lepton type neutrino (blue). Figure 2: Spherically averaged radial profiles of the rest mass density \(\rho\) (top-left), electron fraction \(Y_{e}\) (top-right), entropy per baryon \(s\) (middle-left), radial component of the three-velocity \(v^{r}=u^{r}/u^{t}\) (middle-right), and neutrino luminosity for \(\nu_{e}\) (bottom-left), \(\bar{\nu}_{e}\) (solid, bottom-right), and \(\nu_{\mathrm{x}}\) (dash-dotted, bottom-right) at different times denoted in the top-left panel. In the middle-left panel, we also plot a temperature profile, but only at \(t_{\mathrm{BH}}=0\,\mathrm{ms}\) (red dash-dotted line).
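As a quick unit check on the irreducible-mass formula \(M_{\mathrm{irr}}=\sqrt{A/16\pi}\) quoted above (a sketch, ours; geometric units \(G=c=1\) with \(GM_{\odot}/c^{2}\simeq 1.477\) km are assumed):

```python
import math

GEOM_MSUN_KM = 1.476625  # G * M_sun / c^2 in km

def irreducible_mass_msun(area_km2: float) -> float:
    """M_irr = sqrt(A / (16 pi)), converted from km to solar masses."""
    return math.sqrt(area_km2 / (16.0 * math.pi)) / GEOM_MSUN_KM

# Consistency check: a Schwarzschild horizon with M = 2.88 M_sun has
# area A = 16 pi M^2, so we should recover M_irr = 2.88 M_sun.
m_km = 2.88 * GEOM_MSUN_KM
print(irreducible_mass_msun(16.0 * math.pi * m_km**2))  # -> 2.88
```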
The neutrino mean energy (\(\varepsilon_{\nu}\)) may reveal the final moment of devastating PNS cooling. As can clearly seen, \(\langle\varepsilon_{\nu}\rangle\) for all neutrino species show a drastic increase at \(t_{\rm BH}\sim 3\,\)ms. This is particularly the case for heavy lepton type neutrinos, which show a remarkably high mean energy of \(\langle\varepsilon_{\nu_{x}}\rangle\sim 90\,\)MeV. These values are even higher than those from the QCD CCSN models (Fischer et al., 2018; Kuroda et al., 2022), which are also known to emit high energy neutrinos \(\langle\varepsilon_{\nu_{x}}\rangle\sim 40\,\)MeV due to strong shock heating in association with the quark core bounce. We will now shortly discuss their possible excitation mechanism. First, since we measure the emergent neutrino signals at \(r=400\,\)km, these high energy neutrinos are produced at \(t_{\rm BH}\sim 1-2\,\)ms. From Fig. 2, this time corresponds exactly to the time when huge amounts of hot PNS envelope together with a shock surface infall with a relativistic speed of \(\sim 0.3c\). The highest temperature of collapsing PNS material (middle-left panel in Fig. 2) for the regions of \(r\gtrsim 30\,\)km, where \(F_{\nu_{x}}\) has a positive sign (bottom-right panel) and can contribute to the emergent neutrino spectrum, is merely \(T\sim 10\,\)MeV. It indicates that heavy lepton type neutrinos, whose energy are \(\langle\varepsilon_{\nu_{x}}\rangle\sim 30\,\)MeV, could be barely explained via such as pair production channel, although it is not likely for much higher neutrino energy of \(\sim 90\,\)MeV. To further discuss their origin, we examine their spectral features. Fig. 4 depicts: (a) the distribution function \(f_{e}\)2 for \(\tilde{\nu}_{e}\) (black lines) and \(\nu_{x}\) (red lines) at three different time slices: \(t_{\rm BH}=0\,\)ms, 3 ms (corresponding to the time when high energy neutrinos are observed), and 7 ms, (b) time evolution of distribution function \(f_{e}\) for all energy bins higher than \(\varepsilon\geq 52\,\)MeV (this time, 52, 78, 117, 176, and 265 MeV) (solid lines) and mean energy \(\langle\varepsilon\rangle\) (dashed line) for \(\tilde{\nu}_{e}\), and (c) same as the panel (b) but for \(\nu_{x}\). All these values are measured at \(r=400\,\)km. Footnote 2: We reconstruct the distribution function \(f_{e}\) simply by \(f_{e}=J_{e}/4\pi\,e^{3}\), where \(J_{e}\) denotes the zeroth order neutrino radiation moment measured in the comoving frame at the energy bin \(\varepsilon\). With an appropriate closure relation, \(J_{e}\) is determined from the zeroth and first order radiation momenta (\(E_{e}\), \(F_{e}^{H}\)), which are measured in the Eulerian frame and are the basic variables evolved in our M1 neutrino transport. From panel (a), the energy spectrum at \(t_{\rm BH}=3\,\)ms for \(\nu_{x}\) exhibits a flatter profile with relatively more populations for neutrinos with \(\gtrsim 50\,\)MeV. Such feature cannot be seen in other two time snapshots. We attribute the flatter profile to a consequence of more effective isoenergy scatterings taking place in the upstream to the relativistically infalling shock surface. Because of the rapid infall of the PNS shock surface (see \(\nu_{r}\)-profiles from \(t_{\rm BH}=-0.1\,\)ms to 1 ms in Fig. 2), the outgoing comoving neutrino flux ahead of the shock becomes relatively larger. Consequently the effect of isoenergy neutrino scattering becomes more prominent compared to the case with a stationary shock surface. 
Furthermore, that impact is more visible for high energy neutrinos, as the cross section of isoenergy scattering is proportional to the square of the incoming neutrino energy. Indeed, from panel (c), the distribution function for heavy lepton type neutrinos shows an increasing (decreasing) trend for \(\varepsilon\geq 117\) (\(\leq 78\)) MeV at \(t_{\rm BH}\lesssim 3\,\)ms. Particularly at the energy bin \(\varepsilon=117\,\)MeV (\(f_{\varepsilon=117}\): red line), its increase is noteworthy, with its maximum appearing at \(t_{\rm BH}\sim 3\,\)ms. Neutrinos at higher energy bins (\(\varepsilon=176\) and \(265\,\)MeV) also show a sudden increase, with slight time delays of \(\sim 0.5\,\)ms from the peak time of \(f_{\varepsilon=117}\). These time delays arise mostly because higher energy neutrinos require a longer time to escape from the collapsing stellar mantle. On the other hand, regarding \(\bar{\nu}_{e}\) (as well as \(\nu_{e}\)), the smaller population of high energy neutrinos (\(\varepsilon\gtrsim 50\,\)MeV) prior to the BH formation, compared with that of \(\nu_{x}\) (compare the two thin lines in panel (a)), simply leads to a less noticeable increase at \(t_{\rm BH}\sim 3-4\,\)ms. Additionally, the presence of charged current reactions tends to suppress their increase. In fact, \(f_{\varepsilon\geq 117}\) for \(\bar{\nu}_{e}\) shows approximately an order of magnitude smaller values than that for \(\nu_{x}\). These features result in the observed high energy neutrinos being most pronounced for the heavy lepton type (Fig. 3). Although our moment formalism cannot capture the particle acceleration mechanisms at the shock front, non-thermal shock acceleration (Kazanas & Ellison, 1981; Givanooni et al., 1989; Nagakura & Hotokezaka, 2021) is also reported to excite high energy neutrinos from CCSNe. As a comparison with previous studies, Gullin et al. (2022) performed GR Monte Carlo neutrino transport and reported high energy neutrinos with \(\langle\varepsilon_{\nu_{x}}\rangle\sim 50\,\)MeV in association with BH formation. Since their calculations are performed on fixed spacetime and matter fields after BH formation, quantitative differences in \(\langle\varepsilon_{\nu}\rangle\) from ours are inevitable. We, however, believe that the emission of high energy neutrinos just after the BH formation seems to be a common feature and might be used as a smoking gun of the infall of the PNS surface. Rahman et al. (2022) performed CCSN simulations with BH formation. However, since they excise the innermost \(400\,\)km once they find the AH, and since their models present a successful shock expansion, i.e., corresponding to the fallback SN model, the emergence of high energy neutrinos similar to ours was not reported. Finally, we discuss observable multi-messenger signals for the current failed CCSN model. Fig. 5 displays from top: (a) the neutrino detection rate \(\Gamma\) of Hyper-Kamiokande (HK) (Abe et al., 2011; Hyper-Kamiokande Proto-Collaboration et al., 2018); (b) \(\Gamma\) of IceCube (IC) (Abasi et al., 2011; Salathe et al., 2012); (c) the matter origin gravitational waves (GWs) \(Dh_{+}\); and (d) the spectrogram of \(h_{+}\) obtained from a short-time Fourier transform. We assume a source distance of \(D=10\,\)kpc. \(h_{+}\) is the gravitational wave strain, which is calculated from a standard quadrupole formula, and we show only the non-vanishing component in the axisymmetric case, observed along the equatorial plane. The neutrino detection rate \(\Gamma\) is evaluated in the same way as Kuroda et al. 
(2022), assuming a Fermi-Dirac distribution for the neutrino energy spectrum (Lund et al., 2010; Takiwaki & Kotake, 2018). Note that in the evaluation of \(\Gamma\), we consider two extreme cases: all \(\bar{\nu}_{e}\) emitted from the source reach the detectors without neutrino flavor conversion and cause the signal at the detectors (black lines in the figure); all \(\bar{\nu}_{x}\) (identical to \(\nu_{x}\) in this study) emitted from the source are completely swapped with \(\bar{\nu}_{e}\) and cause the signals (red lines). In the insets of the upper two panels, we show a magnified view of \(\Gamma\) relative to the BH formation time \(t_{\rm BH}\) to highlight the detection of high energy neutrinos. Regarding the neutrino detection rate \(\Gamma\), the two extreme cases, i.e., with and without neutrino flavor conversion, essentially show a quantitatively similar monotonic increase until the BH formation. This feature can be seen for both detectors. This indicates that the possible range of neutrino oscillation effects (see Mirizzi et al., 2016, for a review), i.e. the region bounded by the two lines in panels (a,b), is quite small compared to previous studies using less massive progenitor stars (e.g. Tamborra et al., 2012; Kuroda et al., 2022). For instance, \(\Gamma_{\bar{\nu}_{e}\to\bar{\nu}_{e}}\) becomes \(\sim 1.5\) times higher than \(\Gamma_{\bar{\nu}_{x}\to\bar{\nu}_{e}}\) for \(t_{\rm pb}\gtrsim 100\,\)ms for CCSN models with less massive progenitor stars, while the current one with a more massive progenitor star presents roughly comparable values. Another remarkable feature is the rapid increase of \(\Gamma_{\bar{\nu}_{x}\to\bar{\nu}_{e}}\) (red lines) as the PNS approaches BH formation (\(t_{\rm pb}\gtrsim 150\) ms). It is a clear signature of the increasing behavior of both \(L_{\nu_{x}}\) and \(\langle\varepsilon_{\nu_{x}}\rangle\) shown in Fig. 1. We also discuss whether the high energy heavy lepton type neutrinos, as a possible signature of the shock surface being swallowed by the BH, could be observed. From the insets, we can marginally observe a slight increase of \(\Gamma_{\bar{\nu}_{x}\to\bar{\nu}_{e}}\) (red lines) at \(t_{\rm BH}\sim 3\) ms, which is more visible for IC. This time is consistent with the emission time of the high energy neutrinos (see panel (c) in Fig. 3). If we could observe such a tentative increase of the neutrino detection rate during the exponential decay, it could be a possible signature of the aforementioned final moment of the PNS shock surface. The bottom two panels show the emitted GWs. We see essentially the same features as have been discussed for model \(z70\) in Kuroda et al. (2022). During the first \(\sim 50\) ms after bounce, relatively large and low frequency GWs originating from postbounce convective motions are observed, whose amplitudes and frequencies reach \(\sim 50\) cm and \(\sim 100\) Hz, respectively. Afterward the gravitational waveform shows a considerable subsidence, which is then disrupted at \(t_{\rm pb}\gtrsim 120\) ms. At the moment of BH formation, burst-like GWs of the order of \(\sim 100\) cm are emitted, presenting broadband emission. Once the BH is formed and the BH accretion settles into a quasi-steady state for \(t_{\rm BH}\gtrsim 3\) ms, we observe essentially no GWs for the current non-rotating model. 
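The spectral assumption entering \(\Gamma\) can likewise be sketched (ours; a plain Fermi-Dirac shape with zero degeneracy, normalised to a given luminosity at \(D=10\) kpc, with detector response, cross sections, and oscillation physics deliberately omitted; the relation \(\langle\varepsilon\rangle\simeq 3.15\,T\) holds for zero chemical potential):

```python
import numpy as np

KPC_CM = 3.0857e21  # kpc in cm
MEV_ERG = 1.602e-6  # MeV in erg

def fermi_dirac_flux(eps, lum_erg_s, mean_e_mev, d_kpc=10.0):
    """Differential number flux [1/(cm^2 s MeV)] at Earth for a Fermi-Dirac spectrum."""
    temp = mean_e_mev / 3.15                    # <eps> ~ 3.15 T for zero degeneracy
    spec = eps**2 / (np.exp(eps / temp) + 1.0)  # unnormalised FD shape
    spec /= np.trapz(eps * spec, eps)           # normalise the energy integral to unity
    n_dot = lum_erg_s / MEV_ERG                 # total energy output in MeV/s
    return n_dot * spec / (4.0 * np.pi * (d_kpc * KPC_CM) ** 2)

eps = np.linspace(0.5, 100.0, 400)              # MeV
flux = fermi_dirac_flux(eps, 2e49, 12.0)        # toy luminosity and mean energy
print(f"spectrum peaks near {eps[np.argmax(flux)]:.1f} MeV")
```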
As a comparison with a previous 2D GR study (Rahman et al., 2022), which performed faint SN simulations using an \(80\,\mathrm{M}_{\odot}\) progenitor star, the current GWs show consistent behavior in the initial convection phase (\(t_{\rm pb}\lesssim 50\) ms). During this phase, the amplitude and typical frequency reach \(Dh\sim 30-40\) cm and \(F\sim 100\) Hz, respectively, in their non-rotating model. These values are quite consistent with our findings. Although a direct comparison in the subsequent phase (\(t_{\rm pb}\gtrsim 50\) ms until BH formation) may not be so meaningful, as their models are faint SNe, i.e., exhibit shock revival before BH formation, high frequency GWs (\(F\sim 1000\) Hz) are also observed prior to BH formation, which could potentially be another common feature. ## 4 Summary We have presented results of a 2D axisymmetric CCSN simulation for a massive star of \(70\,\mathrm{M}_{\odot}\). Our core-collapse supernova model is based on numerical relativity, which solves the GR neutrino-radiation hydrodynamics equations together with the two-moment (M1) neutrino transport equations of Kuroda et al. (2016). We used up-to-date neutrino opacities following Kotake et al. (2018) and employed the DD2 EOS of Typel et al. (2010). In this framework, we follow for the first time the evolution "beyond BH formation". To ensure stable numerical evolution, we use an excision method for the neutrino radiation-hydrodynamics, while we evolve the geometrical variables in the entire computational domain. Our results showed PNS evolution and multi-messenger signals during the PNS contraction phase consistent with previous studies that used the same progenitor model (Kuroda et al., 2018; Shibagaki et al., 2021; Kuroda et al., 2022). The current non-rotating PNS model exceeds the maximum NS mass for the DD2 EOS at \(\sim 100\) ms after bounce. Subsequently, it initiates the second gravitational collapse, resulting in BH formation at \(t_{\rm pb}\sim 177\) ms. After we identify the AH, our excision technique demonstrates its capability to stably evolve the radiation-hydrodynamics in a dynamical BH spacetime. We solve the full neutrino-matter interactions, taking into account the gravitational redshift and Doppler terms, from the AH down to the excision domain, so that spurious oscillations often appearing around the excision surface do not leak outside the AH. We also mention that our current numerical method satisfies the Hamiltonian constraint well and that its violation after BH formation is free from secular growth. Figure 4: From left: (a) the distribution function \(f_{\varepsilon}\) for \(\tilde{\nu}_{e}\) (black lines) and \(\nu_{x}\) (red lines) at three different time slices: \(t_{\rm BH}=0\) ms, 3 ms (corresponding to the time when high energy neutrinos are observed), and 7 ms, (b) time evolution of the distribution function \(f_{\varepsilon}\) for all energy bins with \(\varepsilon\geq 52\) MeV (that is, 52, 78, 117, 176, and 265 MeV) (solid lines) and the mean energy \(\langle\varepsilon\rangle\) (dashed line) for \(\tilde{\nu}_{e}\), and (c) same as panel (b) but for \(\nu_{x}\). All these values are measured at \(r=400\) km. Figure 5: From top: (a) the neutrino detection rate \(\Gamma\) of Hyper-Kamiokande (HK); (b) \(\Gamma\) of IceCube (IC); (c) matter origin GWs \(Dh_{+}\); and (d) spectrogram of \(h_{+}\) obtained by a short-time Fourier transform. We assume a source distance of \(D=10\) kpc. 
After the BH formation, the PNS envelope was simply swallowed by the BH and the system transitioned to a nearly steady BH-accretion phase within a few ms. Afterward the BH mass, i.e., the area of the AH, gradually increases because of the continuous mass inflow. The accretion flow is composed of high-\(Y_{e}\) (\(\sim 0.5\)) material, reflecting the composition of the progenitor core (i.e., iron). In contrast to the simple collapse dynamics of the PNS, its impact on the emergent neutrino signals was not so trivial. Our findings are: (1) neutrinos with significantly high energies, especially heavy lepton type neutrinos whose mean energy reaches \(\sim 90\) MeV, are observed during the infall phase of the PNS envelope, and (2) a steady state emission of electron type neutrinos appears in the BH accretion phase. Possible observations of high energy neutrinos from BH formation are also reported in a previous similar (but spherically symmetric) study by Gullin et al. (2022). We attribute the first feature to more efficient isoenergy scatterings between the neutrinos, which strive to emerge from the shock surface, and the infalling stellar mantle ahead of the shock, which is mainly composed of heavy nuclei. Using the time evolution of the neutrino spectral properties, we showed that the propagation of high energy neutrinos is indeed hindered when the PNS shock surface drastically collapses (i.e., 1 ms\(\lesssim t_{\rm BH}\lesssim 2\) ms). Once the shock surface is engulfed by the BH, those neutrinos are radiated away, with some time delays for higher energy neutrinos. In the BH accretion phase, the main component of the accretion flow is the high-\(Y_{e}\) stellar mantle, whose temperature is at most a few MeV. Therefore the main neutrino emission channel is electron capture on heavy nuclei occurring in the vicinity of the AH. It results in a nearly constant electron type neutrino luminosity, as also reported in Sykes et al. (2023). We would like to emphasize that these neutrino properties could be revealed only by full neutrino radiation-hydrodynamic simulations in numerical relativity without excising the relevant region outside the AH, i.e., by fully solving the region outside the BH. In this study we employed only one non-rotating progenitor model. In future work, we are interested in exploring various CCSN models accompanied by BH formation. For instance, the fallback scenario is one of the interesting topics. The current progenitor model has a significantly high compactness \(\xi_{2.5}=1.0\) at the precollapse stage (O'Connor & Ott (2011) and see also Table 1 in Kuroda et al. (2022)), which leads to strong mass accretion during the PNS contraction phase. It therefore induces the collapse of the PNS core without affording an opportunity for shock revival. However, if one considers less compact stars (Chan et al., 2018; Powell et al., 2021) or rotating stars (Rahman et al., 2022), shock revival aided by neutrino heating could happen before BH formation. Such systems could be observed as faint supernovae (Kochanek et al., 2008; Adams et al., 2017) and should be distinguished from the current failed SN (or direct BH formation) model with no shock revival. The progenitor model dependency should definitely be explored in future studies to explain various observations. Another interesting topic to be explored is the collapsar scenario (MacFadyen & Woosley, 1999) as a possible route to long gamma-ray bursts and hypernovae. In the collapsar scenario, a BH surrounded by a massive disk, i.e., a highly non-spherical system, is formed. 
Such systems can be followed only in full numerical relativity, without approximations such as the CFC approximation. For instance, after the formation of a massive disk, viscous effects significantly heat the disk, eventually leading to the launch of energetic outflows (in the context of both NS mergers and massive stellar collapse, see, e.g., Fernandez & Metzger, 2013; Just et al., 2015; Fujibayashi et al., 2020a,b). As another intriguing and also challenging topic in the context of the collapsar scenario, the impact of magnetic fields threading the central BH is undoubtedly worth exploring as a possible origin of relativistic jets generated via, e.g., the Blandford-Znajek mechanism (Blandford & Znajek, 1977). It has recently been demonstrated by Christie et al. (2019); Hayashi et al. (2022) that the Blandford-Znajek mechanism is a promising mechanism for launching a jet, though so far only in the framework of compact mergers. We will explore this fascinating topic in our future CCSN studies. ## Acknowledgements We acknowledge K. Kiuchi, S. Fujibayashi, and A. Betranhandy for fruitful discussions. This work was in part supported by Grant-in-Aid for Scientific Research (Nos. 20H00158 and 23H04900) of Japanese MEXT/JSPS. Numerical computations were carried out on the Sakura and Raven clusters at the Max Planck Computing and Data Facility. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2307.01825
Self-similar solution for fractional Laplacian in cones
We construct a self-similar solution of the heat equation for the fractional Laplacian with Dirichlet boundary conditions in every fat cone. As applications, we give the Yaglom limit and entrance law for the corresponding killed isotropic stable L\'{e}vy process and precise large-time asymptotics for solutions of the Cauchy problem in the cone.
Krzysztof Bogdan, Piotr Knosalla, Łukasz Leżaj, Dominika Pilarczyk
2023-07-04T16:56:42Z
http://arxiv.org/abs/2307.01825v2
# Self-similar solution for fractional Laplacian in cones ###### Abstract. We construct a self-similar solution of the heat equation for the fractional Laplacian with Dirichlet boundary conditions in every fat cone. As applications, we give the Yaglom limit and entrance law for the corresponding killed isotropic stable Levy process and precise large-time asymptotics for solutions of the Cauchy problem in the cone. Key words and phrases:Dirichlet heat kernel; Martin kernel; entrance law; Yaglom limit; stable process; cone 2020 Mathematics Subject Classification: Primary: 60G18, 60J35; secondary: 60G51, 60J50 Krzysztof Bogdan was partially supported by the National Science Centre (Poland): grant 2017/27/B/ST1/01339. Lukasz Lezaj was partially supported by the National Science Centre (Poland): grant 2021/41/N/ST1/04139. ## 1. Introduction Let \(d\in\mathbb{N}\), \(\alpha\in(0,2)\), and let \(\Gamma\subseteq\mathbb{R}^{d}\) be an open cone. We consider the following Cauchy problem: \[\begin{cases}\partial_{t}u(t,x)=\Delta^{\alpha/2}u(t,x),&t>0,\ x\in\Gamma,\\ u(t,x)=0,&t>0,\ x\in\Gamma^{c},\\ u(0,x)=f(x),&x\in\Gamma,\end{cases}\] where \(\Delta^{\alpha/2}:=-(-\Delta)^{\alpha/2}\) is the fractional Laplacian. Its solutions are given by the semigroup \(P_{t}^{\Gamma}\) of the isotropic \(\alpha\)-stable Levy process killed upon exiting \(\Gamma\), with the Dirichlet heat kernel \(p_{t}^{\Gamma}(x,y)\) as integral kernel. A crucial role below is played by the Martin kernel \(M_{\Gamma}\) of \(\Gamma\) with the pole at infinity, which is homogeneous of some order \(\beta\in[0,\alpha)\); see Section 2 for precise definitions. Our first main result yields a self-similar solution \(\Psi_{t}\), which may be viewed as an entrance law from the vertex of the cone. **Theorem 1.1**.: _Assume \(\Gamma\) is a fat cone. For all \(t>0\) and \(y\in\Gamma\), the limit_ \[\Psi_{t}(y):=\lim_{\Gamma\ni x\to 0}\frac{p_{t}^{\Gamma}(x,y)}{M_{\Gamma}(x)}\] _exists, is positive, and satisfies_ \[\Psi_{t}(y)=t^{-(d+\beta)/\alpha}\Psi_{1}\big{(}t^{-1/\alpha}y\big{)},\quad t>0,\,y\in\Gamma, \tag{1.1}\] _and_ \[\int_{\Gamma}p_{s}^{\Gamma}(x,y)\Psi_{t}(y)\,\mathrm{d}y=\Psi_{s+t}(x),\quad s,t>0,\,x\in\Gamma. \tag{1.2}\] 
Our second main result gives precise large-time asymptotics for solutions of the Cauchy problem in the cone; here \(\|\cdot\|_{q,M_{\Gamma}}\) denotes the weighted norm defined in Section 4. **Theorem 1.2**.: _Let \(q\in[1,\infty)\), let \(\Gamma\) be a smooth cone and \(\beta\geqslant\alpha/2\). Then for every \(f\in L^{1}(M_{\Gamma})\), with \(A=\int_{\Gamma}f(x)M_{\Gamma}(x)\,\mathrm{d}x\),_ \[\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\big{\|}P_{t}^{\Gamma}f-A\Psi_{t}\big{\|}_{q,M_{\Gamma}}=0. \tag{1.3}\] As further applications, we give the Yaglom limit (quasi-stationary distribution) in Theorem 3.11, which describes the behavior of the stable process starting from a fixed point \(x\in\Gamma\) and conditioned to stay in a cone, generalizing Theorem 1.1 of [16]. In Theorem 3.12 we extend both results to every initial distribution with finite moment of order \(\alpha\). Note that once the existence and properties of the stationary density \(\varphi\) are established, the results of Subsection 3.3 follow by scaling. Notably, our approach applies to rather general self-similar transition densities, at least when they enjoy sharp two-sided bounds and an invariant function exists. 
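Let us indicate the simple mechanism behind (1.1): granted the existence of the limit, the scaling of the Dirichlet heat kernel and the homogeneity of the Martin kernel (see (2.2) and (2.10) below) yield \[\Psi_{t}(y)=\lim_{\Gamma\ni x\to 0}\frac{p_{t}^{\Gamma}(x,y)}{M_{\Gamma}(x)}=\lim_{\Gamma\ni x\to 0}\frac{t^{-d/\alpha}\,p_{1}^{\Gamma}\big{(}t^{-1/\alpha}x,t^{-1/\alpha}y\big{)}}{t^{\beta/\alpha}\,M_{\Gamma}\big{(}t^{-1/\alpha}x\big{)}}=t^{-(d+\beta)/\alpha}\Psi_{1}\big{(}t^{-1/\alpha}y\big{)},\] since \(M_{\Gamma}(x)=t^{\beta/\alpha}M_{\Gamma}(t^{-1/\alpha}x)\) and \(t^{-1/\alpha}x\to 0\) as \(\Gamma\ni x\to 0\). The actual proof, given in Section 3, proceeds through the renormalized kernel \(\rho\). 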
For an approach to entrance laws based on fluctuation theory of Markov additive processes, we refer to [27], see also Chaumont et al. [20]. In passing, we also note that the Yaglom limit for random walks in cones is discussed by Denisov and Wachtel [27]. For a broad survey on quasi-stationary distributions, we refer to van Doorn and Pollett [35]. Self-similar solutions for general homogeneous semigroups are discussed in Cholewa and Rodriguez-Bernal [21]; see Patie and Savov [30] for a treatment of generalized Ornstein-Uhlenbeck semigroups, which they call generalized Laguerre semigroups. Results related to Theorem 1.2, but for the fractal Burgers equation and the fractional \(p\)-Laplacian, can be found in Biler et al. [5, Theorem 2.2] and Vazquez [37, Theorem 1.2], respectively. ## 2. Preliminaries For \(x,z\in\mathbb{R}^{d}\), the standard scalar product is denoted by \(x\cdot z\) and \(|z|\) is the Euclidean norm. For \(x\in\mathbb{R}^{d}\) and \(r\in(0,\infty)\), we let \(B(x,r)=\{y\in\mathbb{R}^{d}\colon|x-y|<r\}\), the ball centered at \(x\) with radius \(r\), and we write \(B_{r}:=B(0,r)\). All the considered sets, functions and measures are Borel. For non-negative functions \(f,g\), we write \(f\approx g\) if there is a number \(c\in(0,\infty)\), i.e., a _constant_, such that \(c^{-1}f\leqslant g\leqslant cf\), and write \(f\lesssim g\) if there is a constant \(c\) such that \(f\leqslant cg\). Recall that \(\alpha\in(0,2)\) and let \[\nu(z)=c_{d,\alpha}|z|^{-d-\alpha},\quad z\in\mathbb{R}^{d},\] where the constant \(c_{d,\alpha}\) is such that \[\int_{\mathbb{R}^{d}}\big{(}1-\cos(\xi\cdot z)\big{)}\nu(z)\,\mathrm{d}z=|\xi|^{\alpha},\quad\xi\in\mathbb{R}^{d}.\] For \(t>0\) we let \[p_{t}(x):=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-t|\xi|^{\alpha}}e^{-i\xi\cdot x}\,\mathrm{d}\xi,\quad x\in\mathbb{R}^{d}. \tag{2.1}\] By the Levy-Khintchine formula, \(p_{t}\) is a probability density function and \[\int_{\mathbb{R}^{d}}e^{i\xi\cdot x}\,p_{t}(x)\,\mathrm{d}x=e^{-t|\xi|^{\alpha}},\quad\xi\in\mathbb{R}^{d},\;t>0.\] We consider the isotropic \(\alpha\)-stable Levy process \(\mathbf{X}=(X_{t},t\geqslant 0)\) in \(\mathbb{R}^{d}\), with \[p_{t}(x,y):=p_{t}(y-x),\quad x,y\in\mathbb{R}^{d},\;t>0,\] as transition density. Thus, \[\mathbb{E}_{x}e^{i\xi\cdot X_{t}}=\int_{\mathbb{R}^{d}}e^{i\xi\cdot y}\,p_{t}(x,y)\,\mathrm{d}y=e^{i\xi\cdot x-t|\xi|^{\alpha}},\quad\xi\in\mathbb{R}^{d},\;x\in\mathbb{R}^{d},\;t>0.\] The Levy-Khintchine exponent of \(\mathbf{X}\) is, of course, \(|\xi|^{\alpha}\) and \(\nu\) is the intensity of jumps. By (2.1), \[p_{t}(x,y)=t^{-d/\alpha}p_{1}\big{(}t^{-1/\alpha}x,t^{-1/\alpha}y\big{)},\quad x,y\in\mathbb{R}^{d},\;t>0, \tag{2.2}\] and \[p_{t}\left(Tx,Ty\right)=p_{t}(x,y),\quad x,y\in\mathbb{R}^{d},\;t>0, \tag{2.3}\] for every isometry \(T\) on \(\mathbb{R}^{d}\). It is well known that \[p_{t}(x,y)\approx t^{-d/\alpha}\wedge t|y-x|^{-d-\alpha},\quad x,y\in\mathbb{R}^{d},\;t>0, \tag{2.4}\] see, e.g., [7]. We then consider the time of the first exit of \(\mathbf{X}\) from the cone \(\Gamma\), \[\tau_{\Gamma}:=\inf\{t\geqslant 0\colon X_{t}\notin\Gamma\},\] and we define the Dirichlet heat kernel for \(\Gamma\), \[p_{t}^{\Gamma}(x,y):=p_{t}(x,y)-\mathbb{E}_{x}\left[\tau_{\Gamma}<t;p_{t-\tau_{\Gamma}}\left(X_{\tau_{\Gamma}},y\right)\right],\quad x,y\in\Gamma,\ t>0,\] see [15, 22]. It immediately follows that \(p_{t}^{\Gamma}(x,y)\leqslant p_{t}(x,y)\) for all \(x,y\in\Gamma\) and \(t>0\). 
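In passing, the scaling (2.2) is verified by the substitution \(\eta=t^{1/\alpha}\xi\) in (2.1): \[p_{t}(x)=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-t|\xi|^{\alpha}}e^{-i\xi\cdot x}\,\mathrm{d}\xi=(2\pi)^{-d}t^{-d/\alpha}\int_{\mathbb{R}^{d}}e^{-|\eta|^{\alpha}}e^{-i\eta\cdot t^{-1/\alpha}x}\,\mathrm{d}\eta=t^{-d/\alpha}p_{1}\big{(}t^{-1/\alpha}x\big{)},\] and the two-variable form follows since \(p_{t}(x,y)=p_{t}(y-x)\). 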
The Dirichlet heat kernel is non-negative and symmetric: \(p_{t}^{\Gamma}(x,y)=p_{t}^{\Gamma}(y,x)\) for \(x,y\in\Gamma\), \(t>0\). It satisfies the Chapman-Kolmogorov equations: \[p_{t+s}^{\Gamma}(x,y)=\int_{\Gamma}p_{t}^{\Gamma}(x,z)p_{s}^{\Gamma}(z,y)\,\mathrm{d}z,\quad x,y\in\Gamma,\ s,t>0. \tag{2.5}\] For nonnegative or integrable functions \(f\) we define the _killed semigroup_ by \[P_{t}^{\Gamma}f(x):=\mathbb{E}_{x}\left[\tau_{\Gamma}>t;f(X_{t})\right]=\int_{\Gamma}p_{t}^{\Gamma}(x,y)f(y)\,\mathrm{d}y,\quad x\in\Gamma,\ t>0.\] In particular, for \(f\equiv 1\) we obtain the _survival probability_: \[\mathbb{P}_{x}(\tau_{\Gamma}>t)=\int_{\Gamma}p_{t}^{\Gamma}(x,y)\,\mathrm{d}y,\quad x\in\Gamma,\,t>0, \tag{2.6}\] see [12, Remark 1.9]. Since \(t^{-1/\alpha}\Gamma=\Gamma\), the scaling (2.2) extends to the Dirichlet heat kernel: \[p_{t}^{\Gamma}(x,y)=t^{-d/\alpha}p_{1}^{\Gamma}(t^{-1/\alpha}x,t^{-1/\alpha}y),\quad x,y\in\Gamma,\ t>0.\] As a consequence, \[\mathbb{P}_{x}(\tau_{\Gamma}>t)=\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1),\quad x\in\Gamma,\ t>0. \tag{2.7}\] Furthermore, by (2.3), \[p_{t}^{T\Gamma}(Tx,Ty)=p_{t}^{\Gamma}(x,y),\quad x,y\in\Gamma,\ t>0. \tag{2.8}\] The operators \(P_{t}^{\Gamma}\) and the kernel \(p_{t}^{\Gamma}(x,y)\) are the main subject of the paper. In view of (2.8), without loss of generality we may assume that \(\mathbf{1}:=(0,\ldots,0,1)\in\Gamma\). By [3, Theorem 3.2], there is a unique non-negative function \(M_{\Gamma}\) on \(\mathbb{R}^{d}\) such that \(M_{\Gamma}(\mathbf{1})=1\), \(M_{\Gamma}=0\) on \(\Gamma^{c}\), and for every open bounded set \(B\subseteq\Gamma\), \[M_{\Gamma}(x)=\mathbb{E}_{x}M_{\Gamma}(X_{\tau_{B}}),\quad x\in\mathbb{R}^{d}. \tag{2.9}\] Moreover, \(M_{\Gamma}\) is locally bounded on \(\mathbb{R}^{d}\) and homogeneous of some order \(\beta\in[0,\alpha)\), i.e., \[M_{\Gamma}(x)=|x|^{\beta}M_{\Gamma}(x/|x|),\quad x\in\Gamma. \tag{2.10}\] We call \(M_{\Gamma}\) the Martin kernel of \(\Gamma\) with the pole at infinity. **Example 2.1**.: By [3], \(\beta=\alpha/2\) if \(\Gamma\) is a half-space and \(\beta=\alpha-1\) if \(\Gamma=\mathbb{R}\setminus\{0\}\) and \(1<\alpha<2\). By [13], \(\beta=(\alpha-1)/2\) if \(\Gamma=\mathbb{R}^{2}\setminus([0,\infty)\times\{0\})\) and \(1<\alpha<2\). Throughout the article, we often assume that \(\Gamma\) is _fat_, i.e., \(\kappa\in(0,1)\) exists such that for all \(Q\in\overline{\Gamma}\) and \(r\in(0,\infty)\), there is a point \(A=A_{r}(Q)\in\Gamma\cap B(Q,r)\) such that \(B(A,\kappa r)\subseteq\Gamma\cap B(Q,r)\), see [11, Definition 1]. Recall that \(\Gamma\) is _smooth_ if \(d=1\) or \(d\geqslant 2\) and \(\Gamma\cap\mathbb{S}^{d-1}\) is a \(C^{1,1}\) subset of \(\mathbb{S}^{d-1}\). Furthermore, a cone \(\Gamma\) is called _right-circular_ if \(\Gamma=\{x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\setminus\{0\}:x_{d}>|x|\cos\eta\}\). The parameter \(\eta\in(0,\pi)\) is usually called the angle of the cone. Of course, every right-circular cone is smooth, and every smooth cone is fat. By [11, Theorem 1], the following approximate factorization holds true for fat cones: \[p_{t}^{\Gamma}(x,y)\approx\mathbb{P}_{x}(\tau_{\Gamma}>t)p_{t}(x,y)\mathbb{P}_{y}(\tau_{\Gamma}>t),\quad x,y\in\Gamma,\ t>0. \tag{2.11}\] For \(R\in(0,\infty)\), we let \(\Gamma_{R}:=\Gamma\cap B_{R}\), the _truncated cone_. ## 3. Doob conditioning The Martin kernel \(M_{\Gamma}\) is invariant for the semigroup \(P_{t}^{\Gamma}\), as follows. 
**Theorem 3.1**.: _For all \(x\in\Gamma\) and \(t>0\), we have \(P_{t}^{\Gamma}M_{\Gamma}(x)=M_{\Gamma}(x)\)._ Proof.: Fix \(t>0\) and \(x\in\Gamma\). We have \[P_{t}^{\Gamma}M_{\Gamma}(x)=\mathbb{E}_{x}\Big{[}\tau_{\Gamma}>t;\,M_{\Gamma}( X_{t})\Big{]}. \tag{3.1}\] Let \(R>0\) and \(\tau_{R}:=\tau_{\Gamma_{R}}\). By (2.9) and the strong Markov property, \[M_{\Gamma}(x)=\mathbb{E}_{x}M_{\Gamma}(X_{\tau_{\Gamma_{R}}})=\mathbb{E}_{x}M _{\Gamma}\big{(}X_{t\wedge\tau_{R}}\big{)}=\mathbb{E}_{x}\Big{[}X_{t\wedge\tau _{R}}\in\Gamma;\,M_{\Gamma}\big{(}X_{t\wedge\tau_{R}}\big{)}\Big{]}, \tag{3.2}\] where the last equality follows from the fact that \(M_{\Gamma}=0\) outside \(\Gamma\). We note that \(\mathbb{P}_{x}-a.s.\), \(\tau_{R}\to\tau_{\Gamma}\) as \(R\to\infty\) (see, e.g., [1, proof of Proposition A.1]). We consider two scenarios. On \(\{\tau_{\Gamma}=\infty\}\), for \(R\) large enough, we have: \(\tau_{R}>t\), \(\mathds{1}_{X_{t\wedge\tau_{R}}\in\Gamma}=1=\mathds{1}_{t<\tau_{\Gamma}}\), and \[M_{\Gamma}\big{(}X_{t\wedge\tau_{R}}\big{)}\mathds{1}_{X_{t\wedge\tau_{R}} \in\Gamma}=M_{\Gamma}(X_{t})=M_{\Gamma}(X_{t})\mathds{1}_{t<\tau_{\Gamma}}.\] On \(\{\tau_{\Gamma}<\infty\}\), for \(R\) large enough we have: \(\tau_{R}=\tau_{\Gamma}\), \(\mathds{1}_{X_{t\wedge\tau_{R}}\in\Gamma}=\mathds{1}_{t<\tau_{\Gamma}}\), and \[M_{\Gamma}\big{(}X_{t\wedge\tau_{R}}\big{)}\mathds{1}_{X_{t\wedge\tau_{R}} \in\Gamma}=M_{\Gamma}(X_{t})\mathds{1}_{t<\tau_{\Gamma}},\] too. In both cases, the integrand on the right-hand side of (3.2) converges \(a.s.\) to the integrand on the right-hand side of (3.1) as \(R\to\infty\). By the local boundedness of \(M_{\Gamma}\) and (2.10), \[\Big{|}M_{\Gamma}\big{(}X_{t\wedge\tau_{R}}\big{)}\mathds{1}_{X_{t\wedge\tau_ {R}}\in\Gamma}\Big{|}\leqslant c|X_{t\wedge\tau_{R}}|^{\beta}\leqslant c(X_{t} ^{*})^{\beta},\] where \[X_{t}^{*}:=\sup_{0\leqslant s\leqslant t}|X_{s}|.\] Using [4, Theorem 2.1] and the fact that \(\beta\in[0,\alpha)\), we conclude that \(\mathbb{E}_{x}(X_{t}^{*})^{\beta}<\infty\). An application of the dominated convergence theorem ends the proof. ### Renormalized kernel We define the renormalized (Doob-conditioned) kernel \[\rho_{t}(x,y)=\frac{p_{t}^{\Gamma}(x,y)}{M_{\Gamma}(x)M_{\Gamma}(y)},\quad x,y\in\Gamma,\;t>0. \tag{3.3}\] Note that \(\rho\) is jointly continuous. By Theorem 3.1, \[\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}^{2}(y)\,\mathrm{d}y=1,\quad x\in\Gamma,\; t>0, \tag{3.4}\] and by (2.5), \[\int_{\Gamma}\rho_{t}(x,y)\rho_{s}(y,z)M_{\Gamma}^{2}(y)\,\mathrm{d}y=\rho_{t+ s}(x,z),\quad x,y\in\Gamma,\;s,t>0. \tag{3.5}\] In other words, \(\rho_{t}\) is a symmetric transition probability density on \(\Gamma\) with respect to the measure \(M_{\Gamma}^{2}(y)\,\mathrm{d}y\). Furthermore, the following scaling property holds true: for all \(x,y\in\Gamma\) and all \(t>0\), \[\rho_{t}(x,y)=\frac{t^{-d/\alpha}p_{1}^{\Gamma}(t^{-1/\alpha}x,t^{-1/\alpha}y) }{t^{2\beta/\alpha}M_{\Gamma}(t^{-1/\alpha}x)M_{\Gamma}(t^{-1/\alpha}y)}=t^{- (d+2\beta)/\alpha}\rho_{1}(t^{-1/\alpha}x,t^{-1/\alpha}y). \tag{3.6}\] Therefore, \[\rho_{st}(t^{1/\alpha}x,t^{1/\alpha}y)=t^{-(d+2\beta)/\alpha}\rho_{s}(x,y), \quad x,y\in\Gamma,\;s,t>0. \tag{3.7}\] By (2.11), for fat cones we have \[\rho_{t}(x,y)\approx\frac{\mathbb{P}_{x}(\tau_{\Gamma}>t)}{M_{\Gamma}(x)}p_{t}( x,y)\frac{\mathbb{P}_{y}(\tau_{\Gamma}>t)}{M_{\Gamma}(y)},\quad x,y\in\Gamma,\,t>0. \tag{3.8}\] The boundary behavior of \(\mathbb{P}_{x}(\tau_{\Gamma}>t)/M_{\Gamma}(x)\) is important due to (3.8), but it is rather elusive. 
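We also record that (3.4) is simply the invariance from Theorem 3.1 rewritten in terms of \(\rho\): \[\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}^{2}(y)\,\mathrm{d}y=\frac{1}{M_{\Gamma}(x)}\int_{\Gamma}p_{t}^{\Gamma}(x,y)M_{\Gamma}(y)\,\mathrm{d}y=\frac{P_{t}^{\Gamma}M_{\Gamma}(x)}{M_{\Gamma}(x)}=1.\] 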
The next lemma strengthens the upper bound from [3, Lemma 4.2]. **Lemma 3.2**.: _There exists a constant \(c\) depending only on \(\alpha\) and \(\Gamma\), such that_ \[\mathbb{P}_{x}(\tau_{\Gamma}>t)\leqslant c\big{(}t^{-\beta/\alpha}+t^{-1}|x|^{\alpha-\beta}\big{)}M_{\Gamma}(x),\quad t>0,\,x\in\Gamma. \tag{3.9}\] _Remark 3.3_.: (1) For \(t=1\), (3.9) reads as follows, \[\mathbb{P}_{x}(\tau_{\Gamma}>1)\leqslant c(1+|x|^{\alpha-\beta})M_{\Gamma}(x),\quad x\in\Gamma. \tag{3.10}\] (2) The estimate (3.9) applies to arbitrary cones and arguments \(t,x\), however, it is not optimal. For example, for the right-circular cones, we can confront (3.8) with \[M_{\Gamma}(x)\approx\delta_{\Gamma}(x)^{\alpha/2}|x|^{\beta-\alpha/2},\quad x\in\Gamma,\] and \[\mathbb{P}_{x}(\tau_{\Gamma}>1)\approx\big{(}1\wedge\delta_{\Gamma}(x)\big{)}^{\alpha/2}\big{(}1\wedge|x|\big{)}^{\beta-\alpha/2},\quad x\in\Gamma,\] as provided by [29, Lemma 3.3] and [11, Example 7]. (3) For the right-circular cones, the ratio \[\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)}\approx\frac{(1+\delta_{\Gamma}(x))^{-\alpha/2}}{(1+|x|)^{\beta-\alpha/2}},\quad x\in\Gamma,\] is bounded if and only if \(\beta\geqslant\alpha/2\). 
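For the reader's convenience, the comparability in point (3) follows from point (2) by elementary algebra: writing \((1\wedge s)/s=1/(1\vee s)\approx(1+s)^{-1}\) for \(s>0\), we get \[\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)}\approx\bigg{(}\frac{1\wedge\delta_{\Gamma}(x)}{\delta_{\Gamma}(x)}\bigg{)}^{\alpha/2}\bigg{(}\frac{1\wedge|x|}{|x|}\bigg{)}^{\beta-\alpha/2}\approx\frac{(1+\delta_{\Gamma}(x))^{-\alpha/2}}{(1+|x|)^{\beta-\alpha/2}},\quad x\in\Gamma.\] 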
Proof of Lemma 3.2.: We slightly modify the proof of [3, Lemma 4.2]. First, suppose that \(t=1\). The case \(x\in\Gamma_{1}\) in (3.10) is resolved by [3, Lemma 4.2], so we assume that \(x\in\Gamma\setminus\Gamma_{1}\). For every \(z\in\mathbb{R}^{d}\setminus\{0\}\) we define its projection on the unit sphere \(\tilde{z}:=z/|z|\). By (2.7), \[\mathbb{P}_{x}(\tau_{\Gamma}>1)=\mathbb{P}_{\tilde{x}}(\tau_{\Gamma}>|x|^{-\alpha}).\] Then we have \[\mathbb{P}_{\tilde{x}}(\tau_{\Gamma}>|x|^{-\alpha})\leqslant\mathbb{P}_{\tilde{x}}(\tau_{\Gamma_{2}}>|x|^{-\alpha})+\mathbb{P}_{\tilde{x}}(\tau_{\Gamma_{2}}<\tau_{\Gamma}).\] By the boundary Harnack principle (BHP), see Song and Wu [34, Theorem 3.1], and the homogeneity of \(M_{\Gamma}\) (2.10), \[\mathbb{P}_{\tilde{x}}(\tau_{\Gamma_{2}}<\tau_{\Gamma})\leqslant\mathbb{P}_{\mathbf{1}}(\tau_{\Gamma_{2}}<\tau_{\Gamma})M_{\Gamma}(\tilde{x})=c_{1}|x|^{-\beta}M_{\Gamma}(x). \tag{3.11}\] We let \[c_{2}=\inf_{y\in\Gamma_{2}}\int_{\Gamma\setminus\Gamma_{2}}\nu(y-z)\,\mathrm{d}z.\] Clearly, \(c_{2}>0\). We recall the Ikeda-Watanabe formula: \[\mathbb{P}_{x}[\tau_{D}\in I,\ X_{\tau_{D}-}\in A,\ X_{\tau_{D}}\in B]=\int_{I}\int_{A}\int_{B}\,p_{s}^{D}(x,v)\nu(z-v)\,\mathrm{d}z\,\mathrm{d}v\,\mathrm{d}s,\] where \(x\in D\), \(I\subseteq[0,\infty)\), \(A\subseteq D\) and \(B\subseteq D^{c}\), see, e.g., Bogdan et al. [17, Section 4.2]. By Markov inequality and BHP, \[\mathbb{P}_{\tilde{x}}\big{(}\tau_{\Gamma_{2}}>|x|^{-\alpha}\big{)} \leqslant|x|^{\alpha}\mathbb{E}_{\tilde{x}}\tau_{\Gamma_{2}}=|x|^{\alpha}\int_{\Gamma_{2}}G_{\Gamma_{2}}(\tilde{x},y)\,\mathrm{d}y\] \[\leqslant c_{2}^{-1}|x|^{\alpha}\int_{\Gamma\setminus\Gamma_{2}}\int_{\Gamma_{2}}G_{\Gamma_{2}}(\tilde{x},y)\nu(y-z)\,\mathrm{d}y\,\mathrm{d}z\] \[\leqslant c_{2}^{-1}|x|^{\alpha}\mathbb{P}_{\tilde{x}}\big{(}X_{\tau_{\Gamma_{2}}}\in\Gamma\big{)}\] \[\leqslant c_{1}c_{2}^{-1}|x|^{\alpha}\mathbb{P}_{\mathbf{1}}(X_{\tau_{\Gamma_{2}}}\in\Gamma)M_{\Gamma}(\tilde{x})\] \[=c_{1}c_{2}^{-1}c|x|^{\alpha-\beta}M_{\Gamma}(x).\] By (3.11), we get (3.10) when \(x\in\Gamma\setminus\Gamma_{1}\). For arbitrary \(t>0\), we use (2.7) and (3.10): \[\mathbb{P}_{x}(\tau_{\Gamma}>t) =\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)\] \[\leqslant c\left(1+\left(t^{-1/\alpha}|x|\right)^{\alpha-\beta}\right)M_{\Gamma}(t^{-1/\alpha}x)\] \[=c\left(t^{-\beta/\alpha}+t^{-1}|x|^{\alpha-\beta}\right)M_{\Gamma}(x).\] By the proof of [3, Lemma 4.2], for every \(R\in(0,\infty)\) there exists a constant \(c\), depending only on \(\alpha\), \(\Gamma\) and \(R\), such that \[c^{-1}M_{\Gamma}(x)t^{-\beta/\alpha}\leqslant\mathbb{P}_{x}(\tau_{\Gamma}>t)\leqslant cM_{\Gamma}(x)t^{-\beta/\alpha},\quad x\in\Gamma_{Rt^{1/\alpha}},\;t>0.\] In particular, for fat cones, in view of (2.4) and (3.8), \[\rho_{1}(x,y)\approx(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)},\quad x\in\Gamma_{R},\;y\in\Gamma, \tag{3.12}\] with comparability constant depending only on \(\alpha\), \(\Gamma\) and \(R\). Using Lemma 3.2 we also conclude that for every \(R\geqslant 1\) there is a constant \(c\) depending only on \(R\), \(\alpha\) and \(\Gamma\), such that \[\rho_{1}(x,y)\leqslant c(1+|y|)^{-d-\beta},\quad x\in\Gamma_{R},\;y\in\Gamma. \tag{3.13}\] ### Ornstein-Uhlenbeck kernel Encouraged by [14], we let \[\ell_{t}(x,y):=\rho_{1-e^{-t}}(e^{-t/\alpha}x,y),\quad x,y\in\Gamma,\;t>0, \tag{3.14}\] and, by (3.7), we get the Chapman-Kolmogorov property for \(\ell_{t}\): \[\int_{\Gamma}\ell_{t}(x,y)\ell_{s}(y,z)M_{\Gamma}^{2}(y)\,\mathrm{d}y=\ell_{t+s}(x,z),\quad x,z\in\Gamma,\;s,t>0.\] By (3.4), \[\int_{\Gamma}\ell_{t}(x,y)M_{\Gamma}^{2}(y)\,\mathrm{d}y=1,\quad x\in\Gamma,\;t>0.\] Thus, \(\ell_{t}\) is a transition probability density on \(\Gamma\) with respect to \(M_{\Gamma}^{2}(y)\,\mathrm{d}y\). We define the corresponding Ornstein-Uhlenbeck semigroup: \[L_{t}f(y)=\int_{\Gamma}\ell_{t}(x,y)f(x)M_{\Gamma}^{2}(x)\,\mathrm{d}x,\quad y\in\Gamma,\;t>0.\] We easily see that the operators are bounded on \(L^{1}(M_{\Gamma}^{2}(y)\,\mathrm{d}y)\). In fact, they preserve densities, i.e., functions \(f\geqslant 0\) such that \(\int_{\Gamma}f(x)M_{\Gamma}^{2}(x)\,\mathrm{d}x=1\). Before we go into the details, let us note that the relations (3.12) and (3.13) will be crucial in what follows. Both of them rely on the factorization of the Dirichlet heat kernel (2.11), which is valid for fat sets. For this reason, although it is usually clear from the setting, to avoid unnecessary considerations we assume **below in this section** that \(\Gamma\) is a fat cone. **Theorem 3.4**.: _Assume \(\Gamma\) is a fat cone. Then there is a unique stationary density \(\varphi\) for the operators \(L_{t}\), \(t>0\)._ Proof.: Fix \(t>0\) and consider the family \(F\) of non-negative functions on \(\Gamma\) that have the form \[f(y)=\int_{\Gamma_{1}}\rho_{t}(x,y)\,\mu(\mathrm{d}x),\quad y\in\Gamma,\] for some sub-probability measure \(\mu\) concentrated on \(\Gamma_{1}\). By (3.4), \(F\subseteq L^{1}(M_{\Gamma}^{2}(y))\). By the scaling (3.7) and the same reasoning as in the proof of [14, Theorem 3.2], \(L_{t}F\subseteq F\). Since \(L_{t}\) is continuous, we also have \(L_{t}\overline{F}\subseteq\overline{F}\), where \(\overline{F}\) is the closure of \(F\) in the norm topology of \(L^{1}(\Gamma,M_{\Gamma}^{2}(y)\,\mathrm{d}y)\). Next, we observe that \(F\) is convex, therefore by [19, Theorem 3.7], \(\overline{F}\) is equal to the closure of \(F\) in the weak topology. In view of (3.12), \[f(y)\lesssim(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)},\quad y\in\Gamma, \tag{3.15}\] uniformly for \(f\in F\). 
Moreover, (3.4) and (3.12) show that the right-hand side of (3.15) is integrable with respect to \(M_{\Gamma}^{2}(y)\,\mathrm{d}y\). Therefore, the family \(F\) is uniformly integrable with respect to \(M_{\Gamma}^{2}(y)\,\mathrm{d}y\). By [8, Theorem 4.7.20], \(F\) is weakly pre-compact in \(L^{1}(M_{\Gamma}^{2}(y))\), so \(\overline{F}\) is weakly compact. Furthermore, we invoke [19, Theorem 3.10] to conclude that \(L_{t}\) is weakly continuous. By the Schauder-Tychonoff fixed point theorem [32, Theorem 5.28], there is a density \(\varphi\in\overline{F}\) satisfying \(L_{t}\varphi=\varphi\). It is unique by the strict positivity of the kernel \(\ell_{t}\), and is the same for every \(t>0\), see the proof of [14, Theorem 3.2]. Let us note that by Theorem 3.4 and [26, Theorem 1 and Remark 2], the following stability result for the kernels \(\ell_{t}\) in \(L^{1}(M_{\Gamma}^{2}(y)\,\mathrm{d}y)\) holds true for every \(x\in\Gamma\): \[\int_{\Gamma}|\ell_{t}(x,y)-\varphi(y)|M_{\Gamma}^{2}(y)\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty. \tag{3.16}\] We claim that the convergence in (3.16) is in fact uniform for \(x\) in any bounded subset \(A\subseteq\Gamma\). Indeed, let \(x,x_{0}\in A\). In view of (3.14) and (3.8) we may write \[\int_{\Gamma}|\ell_{1+t}(x,y)-\varphi(y)|M_{\Gamma}^{2}(y)\,\mathrm{d}y =\int_{\Gamma}\Big{|}\int_{\Gamma}\ell_{1}(x,z)\big{(}\ell_{t}(z,y)-\varphi(y)\big{)}M_{\Gamma}^{2}(z)\,\mathrm{d}z\Big{|}M_{\Gamma}^{2}(y)\,\mathrm{d}y\] \[\leqslant c\int_{\Gamma}\ell_{1}(x_{0},z)\int_{\Gamma}|\ell_{t}(z,y)-\varphi(y)|M_{\Gamma}^{2}(y)\,\mathrm{d}yM_{\Gamma}^{2}(z)\,\mathrm{d}z. \tag{3.17}\] By (3.16), for every \(z\in\Gamma\), \[I_{t}(z):=\int_{\Gamma}|\ell_{t}(z,y)-\varphi(y)|M_{\Gamma}^{2}(y)\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty.\] Moreover, \(I_{t}(z)\leqslant\int_{\Gamma}\big{(}\ell_{t}(z,y)+\varphi(y)\big{)}M_{\Gamma}^{2}(y)\,\mathrm{d}y=2\). Since \[\int_{\Gamma}2\ell_{1}(x_{0},z)M_{\Gamma}^{2}(z)\,\mathrm{d}z=2<\infty,\] by the dominated convergence theorem the iterated integral in (3.17) tends to \(0\) as \(t\to\infty\), so the convergence in (3.16) is uniform for all \(x\in A\), as claimed. By rewriting (3.16) in terms of \(\rho\), we get that, uniformly for \(x\in A\), \[\int_{\Gamma}\Big{|}\rho_{1-e^{-t}}\big{(}e^{-t/\alpha}x,y\big{)}-\varphi(y)\Big{|}M_{\Gamma}^{2}(y)\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty. \tag{3.18}\] This leads to the following spatial asymptotics for \(\rho_{1}\). **Corollary 3.5**.: _Let \(\Gamma\) be a fat cone. If \(\Gamma\ni x\to 0\) then \(\int_{\Gamma}|\rho_{1}(x,y)-\varphi(y)|M_{\Gamma}^{2}(y)\,dy\to 0\)._ Proof.: By the scalings (3.6) and (3.7), \[\rho_{1-e^{-t}}(e^{-t/\alpha}x,y)=\big{(}1-e^{-t}\big{)}^{-(d+2\beta)/\alpha}\rho_{1}\Big{(}\big{(}e^{t}-1\big{)}^{-1/\alpha}x,\big{(}1-e^{-t}\big{)}^{-1/\alpha}y\Big{)},\] thus, in view of (3.18), \[\int_{\Gamma}\bigg{|}\rho_{1}\Big{(}\big{(}e^{t}-1\big{)}^{-1/\alpha}x,\big{(}1-e^{-t}\big{)}^{-1/\alpha}y\Big{)}-\varphi(y)\bigg{|}M_{\Gamma}^{2}(y)\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty. 
\tag{3.19}\] By the continuity of dilations in \(L^{1}(\mathbb{R}^{d})\), \[\int_{\Gamma}\bigg{|}\varphi\Big{(}\big{(}1-e^{-t}\big{)}^{1/\alpha}y\Big{)}M_{\Gamma}^{2}(y)-\varphi(y)M_{\Gamma}^{2}(y)\bigg{|}\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty.\] Thus, by a change of variables in (3.19) and the triangle inequality, we conclude that \[\int_{\Gamma}\bigg{|}\rho_{1}\Big{(}\big{(}e^{t}-1\big{)}^{-1/\alpha}z,y\Big{)}-\varphi(y)\bigg{|}M_{\Gamma}^{2}(y)\,\mathrm{d}y\to 0\quad\text{as}\quad t\to\infty\] uniformly for all \(z\in A\). To end the proof, we take \(A=B_{1}\) and \(x=\big{(}e^{t}-1\big{)}^{-1/\alpha}z\), where \(t=\ln\big{(}1+|x|^{-\alpha}\big{)}\) and \(z=x/|x|\in A\). **Lemma 3.6**.: _After a modification on a set of Lebesgue measure \(0\), \(\varphi\) is continuous on \(\Gamma\) and_ \[\varphi(y)\approx(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)},\quad y\in\Gamma.\] Proof.: By Corollary 3.5 and (3.12), \[\varphi(y)\approx(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)}\] on \(\Gamma\) less a set of Lebesgue measure zero. Theorem 3.4 entails that \(\varphi=L_{1}\varphi\;a.e.\), so it suffices to verify that \(L_{1}\varphi\) is continuous on \(\Gamma\). To this end we note that \(\ell_{1}(x,y)\) is continuous in \(x,y\in\Gamma\). Next, by (3.14) and (3.8), \[\ell_{1}(x,y)\approx\frac{\mathbb{P}_{e^{-1/\alpha}x}\big{(}\tau_{\Gamma}>1-e^{-1}\big{)}}{M_{\Gamma}\big{(}e^{-1/\alpha}x\big{)}}p_{1-e^{-1}}\big{(}e^{-1/\alpha}x,y\big{)}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1-e^{-1})}{M_{\Gamma}(y)},\quad x,y\in\Gamma.\] Let \(R>1\). By (2.4) and (3.9), [11, Remark 3] and (2.7), and the homogeneity (2.10) of \(M_{\Gamma}\), \[\ell_{1}(x,y)\lesssim(1+|x|)^{-d-\alpha}\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)},\quad x\in\Gamma,y\in\Gamma_{R}.\] By the dominated convergence theorem, \(L_{1}\varphi\) is continuous on \(\Gamma_{R}\). In what follows, \(\varphi\) denotes the continuous modification from Lemma 3.6. **Theorem 3.7**.: _Let \(\Gamma\) be a fat cone. For every \(t>0\), uniformly in \(y\in\Gamma\) we have_ \[\rho_{t}(0,y):=\lim_{\Gamma\ni x\to 0}\rho_{t}(x,y)=t^{-(d+2\beta)/\alpha}\varphi\big{(}t^{-1/\alpha}y\big{)}.\] Proof.: If \(\beta=0\) then \(\rho_{t}(x,y)=p_{t}(x,y)\) and the claim is simply the continuity property of the heat kernel \(p_{t}\). Thus, we assume that \(\beta>0\). We only prove the claim for \(t=1\); the extension to arbitrary \(t\) is a consequence of the scaling (3.6). By (3.7) and the Chapman-Kolmogorov property, for \(x,y\in\Gamma\), \[\rho_{1}(x,y) =2^{-(d+2\beta)/\alpha}\rho_{2}\big{(}2^{1/\alpha}x,2^{1/\alpha}y\big{)}\] \[=2^{-(d+2\beta)/\alpha}\int_{\Gamma}\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z.\] We will prove that, uniformly in \(y\in\Gamma\), \[\int_{\Gamma}\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}\rho_{1}(z,2^{1/\alpha}y\big{)}M_{\Gamma}^{2}(z)\,\mathrm{d}z\to\int_{\Gamma}\varphi(z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z \tag{3.20}\] as \(\Gamma\ni x\to 0\). To this end we first claim that there is \(c\in(0,\infty)\) dependent only on \(\alpha\) and \(\Gamma\), such that for all \(x\in\Gamma_{1}\) and \(y\in\Gamma\), \[\int_{\Gamma}|\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)|\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z\leqslant c(1+|y|)^{-\beta}. \tag{3.21}\] Indeed, denote \(\tilde{y}=2^{1/\alpha}y\). 
By (3.8), Lemma 3.6 and (3.10), for all \(z,y\in\Gamma\) and \(x\in\Gamma_{1}\) we have \[|\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)|\rho_{1}(z,\widetilde{y})M_{\Gamma}^{2}(z)\lesssim(1+|z|)^{-d-\alpha}(1+|z-\widetilde{y}|)^{-d-\alpha}(1+|y|)^{\alpha-\beta}.\] We split the integral in (3.21) into two integrals. For \(z\in A:=B(\widetilde{y},|\widetilde{y}|/2)\) we use the fact that \(|z|\approx|\widetilde{y}|\approx|y|\) and \(1+|z-\widetilde{y}|\geqslant 1\), therefore \[\int_{A}|\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)|\rho_{1}(z,\widetilde{y})M_{\Gamma}^{2}(z)\,\mathrm{d}z\lesssim|y|^{d}(1+|y|)^{-d-\beta}\leqslant(1+|y|)^{-\beta}. \tag{3.22}\] For \(z\in\Gamma\setminus A\) we simply have \(1+|z|\geqslant 1\), thus, \[\int_{\Gamma\setminus A}|\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)|\rho_{1}\big{(}z,\widetilde{y}\big{)}M_{\Gamma}^{2}(z)\,\mathrm{d}z \lesssim(1+|y|)^{\alpha-\beta}\int_{\Gamma\setminus A}(1+|z-\widetilde{y}|)^{-d-\alpha}\,\mathrm{d}z\] \[\lesssim(1+|y|)^{\alpha-\beta}\int_{|\widetilde{y}|/2}^{\infty}(1+r)^{-1-\alpha}\,\mathrm{d}r\] \[\lesssim(1+|y|)^{-\beta}.\] Combining it with (3.22), we arrive at (3.21), as claimed. Let \(\varepsilon>0\). In view of (3.21) and the fact that \(\beta>0\), there is \(R\in(0,\infty)\) depending only on \(\alpha\), \(\beta\), \(\Gamma\) and \(\varepsilon\) such that \[\int_{\Gamma}|\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)|\rho_{1}\big{(}z,2^{1/\alpha}y\big{)}M_{\Gamma}^{2}(z)\,\mathrm{d}z<\varepsilon, \tag{3.23}\] provided that \(y\in\Gamma\setminus\Gamma_{R}\). For \(y\in\Gamma_{R}\), by (3.13) we get \[\int_{\Gamma}\big{|}\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)\big{|}\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z\lesssim\int_{\Gamma}\big{|}\rho_{1}\big{(}2^{1/\alpha}x,z\big{)}-\varphi(z)\big{|}M_{\Gamma}^{2}(z)\,\mathrm{d}z,\] with the implied constant dependent only on \(\alpha\), \(\beta\), \(\Gamma\) and \(R\), but not otherwise dependent on \(y\). Thus, by Corollary 3.5, \[\int_{\Gamma}\big{|}\rho_{1}(2^{1/\alpha}x,z)-\varphi(z)\big{|}\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z<\varepsilon \tag{3.24}\] for all \(y\in\Gamma_{R}\) and all sufficiently small \(x\in\Gamma_{1}\). Putting (3.24) together with (3.23) we arrive at (3.20). Using the scaling property (3.7) and Theorem 3.4, \[\lim_{\Gamma\ni x\to 0}\rho_{1}(x,y) =2^{-(d+2\beta)/\alpha}\int_{\Gamma}\varphi(z)\rho_{1}(z,2^{1/\alpha}y)M_{\Gamma}^{2}(z)\,\mathrm{d}z\] \[=\int_{\Gamma}\varphi(z)\rho_{1/2}(2^{-1/\alpha}z,y)M_{\Gamma}^{2}(z)\,\mathrm{d}z=L_{\ln 2}\varphi(y)=\varphi(y).\] The proof is complete. Note that by the symmetry of \(\rho_{t}\), for \(x\in\Gamma\), \[\rho_{t}(x,0):=\lim_{\Gamma\ni y\to 0}\rho_{t}(x,y)=\rho_{t}(0,x)=t^{-(d+2\beta)/\alpha}\varphi\big{(}t^{-1/\alpha}x\big{)}.\] Recall also that by (3.12) and (3.4), \[\rho_{1}(x,y)\approx(1+|y|)^{-d-\alpha}\frac{\mathbb{P}_{y}(\tau_{\Gamma}>1)}{M_{\Gamma}(y)}\in L^{1}(M_{\Gamma}^{2}(y)\,\mathrm{d}y).\] Thus, by Theorem 3.7 and the dominated convergence theorem, \[\int_{\Gamma}\varphi(x)M_{\Gamma}^{2}(x)\,\mathrm{d}x=1. \tag{3.25}\] Let us summarize the results of this section in one statement. **Theorem 3.8**.: _Assume \(\Gamma\) is a fat cone. 
Then the function \(\rho\) has a continuous extension to \((0,\infty)\times(\Gamma\cup\{0\})\times(\Gamma\cup\{0\})\) and_ \[\rho_{t}(0,y):=\lim_{\Gamma\ni x\to 0}\rho_{t}(x,y)\in(0,\infty),\quad t>0,\,y\in\Gamma, \tag{3.26}\] _satisfies_ \[\rho_{t}(0,y)=t^{-(d+2\beta)/\alpha}\rho_{1}(0,t^{-1/\alpha}y),\quad t>0,\,y\in\Gamma, \tag{3.27}\] _and_ \[\int_{\Gamma}\rho_{t}(0,y)\rho_{s}(y,z)M_{\Gamma}^{2}(y)\,dy=\rho_{t+s}(0,z),\quad s,t>0,\,z\in\Gamma. \tag{3.28}\] Proof.: The existence of the limit (3.26) and the scaling property (3.27) are proved in Theorem 3.7, see also Lemma 3.6. For the proof of (3.28) we employ (3.5) to write \[\rho_{t+s}(0,z)=\lim_{\Gamma\ni y\to 0}\rho_{t+s}(y,z)=\lim_{\Gamma\ni y\to 0}\int_{\Gamma}\rho_{t}(y,w)\rho_{s}(w,z)M_{\Gamma}^{2}(w)\,\mathrm{d}w,\] and use (3.4), (3.6), (3.13), and the dominated convergence theorem. Thus, it remains to prove the continuity of \(\rho\) on \((0,\infty)\times(\Gamma\cup\{0\})\times(\Gamma\cup\{0\})\). By symmetry and the Chapman-Kolmogorov property (3.5) of \(\rho_{1}\), \[\rho_{1}(x,y)=\int_{\Gamma}\rho_{1/2}(x,z)\rho_{1/2}(y,z)M_{\Gamma}^{2}(z)\,\mathrm{d}z,\quad x,y\in\Gamma. \tag{3.29}\] The continuity of \(\rho_{1}\) on \(\Gamma\times\Gamma\) together with Theorem 3.7 implies that for every \(x_{0},y_{0}\in\Gamma\cup\{0\}\) we have \(\rho_{1/2}(x,z)\to\rho_{1/2}(x_{0},z)\) and \(\rho_{1/2}(y,z)\to\rho_{1/2}(y_{0},z)\) as \(x\to x_{0}\) and \(y\to y_{0}\). Moreover, (3.12) entails that \[\rho_{1/2}(x,z)\rho_{1/2}(y,z)M_{\Gamma}^{2}(z)\leqslant c(1+|z|)^{-2d-2\alpha},\] with the constant \(c\) possibly dependent on \(x_{0}\) and \(y_{0}\). It follows by the dominated convergence theorem that \[\rho_{1}(x,y)\to\int_{\Gamma}\rho_{1/2}(x_{0},z)\rho_{1/2}(y_{0},z)M_{\Gamma}^{2}(z)\,\mathrm{d}z,\] as \(x\to x_{0}\) and \(y\to y_{0}\) and in view of (3.29), it is an extension of \(\rho_{1}\) to \((\Gamma\cup\{0\})\times(\Gamma\cup\{0\})\), which will be denoted by the same symbol. It follows now from (3.6) that \[t^{-(d+2\beta)/\alpha}\rho_{1}(t^{-1/\alpha}x,t^{-1/\alpha}y),\quad x,y\in\Gamma\cup\{0\},\,t>0,\] is a finite continuous extension of \(\rho_{t}\) for every \(t>0\). It remains to observe that the extension is unique and jointly continuous in \((t,x,y)\in\mathbb{R}_{+}\times(\Gamma\cup\{0\})\times(\Gamma\cup\{0\})\). **Corollary 3.9**.: _We have_ \[\rho_{1}(0,0)=\lim_{\Gamma\ni x,y\to 0}\rho_{1}(x,y)\in(0,\infty).\] Proof.: By Theorem 3.8, \(\rho_{1}(0,0)=\lim_{\Gamma\ni y\to 0}\varphi(y)=:\varphi(0)\). Thus, the claim follows by Lemma 3.6 and [3, Lemma 4.2]. Proof of Theorem 1.1.: By (3.26), \(\Psi_{t}(x)=\rho_{t}(0,x)M_{\Gamma}(x)\), \(t>0\), \(x\in\Gamma\). Thus, the existence of \(\Psi_{t}\) is just a reformulation of (3.26). The scaling property (1.1) follows immediately from (3.27) and the homogeneity of the Martin kernel (2.10), and (1.2) is equivalent to (3.28). We conclude this part by rephrasing (3.25) in terms of \(\Psi_{t}\): \[\int_{\Gamma}\Psi_{t}(x)M_{\Gamma}(x)\,\mathrm{d}x=1,\quad t>0. \tag{3.30}\] ### Yaglom limit The above results quickly lead to the calculation of the Yaglom limit for the stable process (conditioned to stay in a cone). Note that our proof is different from that in [16]. We also cover more general cones, including \(\mathbb{R}\setminus\{0\}\) and \(\mathbb{R}^{2}\setminus([0,\infty)\times\{0\})\). First, we obtain the following extension of [16, Theorem 3.1]. **Corollary 3.10**.: _Let \(\Gamma\) be a fat cone. 
For every \(t>0\),_ \[\lim_{\Gamma\ni x\to 0}\frac{\mathbb{P}_{x}(\tau_{\Gamma}>t)}{M_{\Gamma}(x)}=C_{1}t^{-\beta/\alpha}\quad\text{where}\quad C_{1}=\int_{\Gamma}\varphi(z)M_{\Gamma}(z)\,dz\in(0,\infty).\] Proof.: It is enough to prove the claim for \(t=1\); the general case follows by the scalings (2.7) and (2.10). We have \[\frac{\mathbb{P}_{x}(\tau_{\Gamma}>1)}{M_{\Gamma}(x)}=\int_{\Gamma}\frac{p_{1}^{\Gamma}(x,y)}{M_{\Gamma}(x)}\,\mathrm{d}y=\int_{\Gamma}\rho_{1}(x,y)M_{\Gamma}(y)\,\mathrm{d}y,\quad x\in\Gamma.\] We use (3.12), the dominated convergence theorem, and Theorem 3.7 to get the conclusion. The first identity below is the Yaglom limit. **Theorem 3.11**.: _Assume \(\Gamma\) is a fat cone and let \(B\) be a bounded subset of \(\Gamma\). Then, uniformly in \(x\in B\),_ \[\lim_{t\to\infty}\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)=\mu(A),\quad A\subseteq\Gamma,\] _where_ \[\mu(A):=\frac{1}{C_{1}}\int_{A}\varphi(y)M_{\Gamma}(y)\;dy,\quad A\subseteq\Gamma.\] Proof.: By (2.6) and the scaling property (2.7), \[\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right) =\frac{\mathbb{P}_{x}\left(\tau_{\Gamma}>t,t^{-1/\alpha}X_{t}\in A\right)}{\mathbb{P}_{x}(\tau_{\Gamma}>t)}\] \[=\frac{\mathbb{P}_{t^{-1/\alpha}x}\left(\tau_{\Gamma}>1,X_{1}\in A\right)}{\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)}\] \[=\int_{A}\frac{p_{1}^{\Gamma}\left(t^{-1/\alpha}x,y\right)}{M_{\Gamma}(t^{-1/\alpha}x)}\;\mathrm{d}y\cdot\frac{M_{\Gamma}(t^{-1/\alpha}x)}{\mathbb{P}_{t^{-1/\alpha}x}(\tau_{\Gamma}>1)}.\] The claim follows by Corollary 3.10, (3.12), and the dominated convergence theorem. **Theorem 3.12**.: _If \(\Gamma\) is a fat cone and \(\gamma\) is a probability measure on \(\Gamma\) with \(\int_{\Gamma}(1+|y|)^{\alpha}\,\gamma(\mathrm{d}y)<\infty\), then_ \[\lim_{t\to\infty}\mathbb{P}_{\gamma}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)=\mu(A),\quad A\subseteq\Gamma.\] Proof.: Let \(t\geqslant 1\). In view of [24, Lemma 8.7], we may write \[\mathbb{P}_{\gamma}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right) =\frac{\mathbb{P}_{\gamma}\left(t^{-1/\alpha}X_{t}\in A,\tau_{\Gamma}>t\right)}{\mathbb{P}_{\gamma}\left(\tau_{\Gamma}>t\right)}\] \[=\int_{\Gamma}\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)\frac{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{\gamma}\left(\tau_{\Gamma}>t\right)}\,\gamma(\mathrm{d}x).\] We first prove that for all \(x\in\Gamma\), \[\frac{\mathbb{P}_{\gamma}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}:=\int_{\Gamma}\frac{\mathbb{P}_{y}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}\,\gamma(\mathrm{d}y)\to\int_{\Gamma}\frac{M_{\Gamma}(y)}{M_{\Gamma}(x)}\,\gamma(\mathrm{d}y), \tag{3.31}\] as \(t\to\infty\). Indeed, fix \(x\in\Gamma\). First we note that by local boundedness of \(M_{\Gamma}\) and (2.10), \[\int_{\Gamma}M_{\Gamma}(y)\,\gamma(\mathrm{d}y)\leqslant c\int_{\Gamma}(1+|y|)^{\beta}\,\gamma(\mathrm{d}y)<\infty,\] so the right-hand side of (3.31) is finite. 
Next, by Corollary 3.10, (2.7), and (2.10), \[\lim_{t\to\infty}\frac{\mathbb{P}_{y}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}=\lim_{t\to\infty}\frac{\mathbb{P}_{t^{-1/\alpha}y}\left(\tau_{\Gamma}>1\right)M_{\Gamma}(t^{-1/\alpha}x)}{\mathbb{P}_{t^{-1/\alpha}x}\left(\tau_{\Gamma}>1\right)M_{\Gamma}(t^{-1/\alpha}y)}\frac{M_{\Gamma}(y)}{M_{\Gamma}(x)}=\frac{M_{\Gamma}(y)}{M_{\Gamma}(x)},\quad x,y\in\Gamma.\] Moreover, since \(x\) is fixed, we may assume that \(t\geqslant 1\vee|x|^{\alpha}\). Thus, by [3, Lemma 4.2], Lemma 3.2, the local boundedness of \(M_{\Gamma}\) and (2.10), \[\frac{\mathbb{P}_{y}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}\leqslant c\frac{\left(t^{-\beta/\alpha}+t^{-1}|y|^{\alpha-\beta}\right)M_{\Gamma}(y)}{t^{-\beta/\alpha}M_{\Gamma}(x)}\leqslant c\frac{(1+|y|^{\alpha-\beta})M_{\Gamma}(y)}{M_{\Gamma}(x)}\leqslant c\frac{(1+|y|)^{\alpha}}{M_{\Gamma}(x)}.\] Thus, the dominated convergence theorem yields (3.31), as desired. Next, we consider a family \(\mathcal{F}_{1}\) of functions \(f_{t}\) of the form \[f_{t}(x)=\frac{\mathbb{P}_{x}(\tau_{\Gamma}>t)}{\mathbb{P}_{\gamma}(\tau_{\Gamma}>t)},\quad x\in\Gamma,\,t\geqslant 1.\] Denote \[f(x)=\frac{M_{\Gamma}(x)}{\int_{\Gamma}M_{\Gamma}(y)\,\gamma(\mathrm{d}y)},\quad x\in\Gamma.\] By virtue of (3.31), \(f_{t}\to f\) everywhere in \(\Gamma\) as \(t\to\infty\). Thus, \(f_{t}\to f\) in measure \(\gamma\) as \(t\to\infty\), see [33, Definition 22.2]. Moreover, we have \[\int_{\Gamma}f(x)\,\gamma(\mathrm{d}x)=1=\lim_{t\to\infty}1=\lim_{t\to\infty}\int_{\Gamma}f_{t}(x)\,\gamma(\mathrm{d}x).\] Therefore, by [33, Theorem 22.7], the family \(\mathcal{F}_{1}\) is uniformly integrable. If we now consider the family \(\mathcal{F}_{2}\) of functions \(\tilde{f}_{t}\) of the form \[\tilde{f}_{t}(x)=\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)f_{t}(x),\quad x\in\Gamma,\,t\geqslant 1,\] then a trivial bound \(\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)\leqslant 1\) shows that \(\mathcal{F}_{2}\) is uniformly integrable as well (see, e.g., [33, Theorem 22.9]). By Theorem 3.11, (3.31) and [33, Theorem 22.7], \[\lim_{t\to\infty}\int_{\Gamma}\mathbb{P}_{x}\left(t^{-1/\alpha}X_{t}\in A\;\Big{|}\;\tau_{\Gamma}>t\right)\frac{\mathbb{P}_{x}\left(\tau_{\Gamma}>t\right)}{\mathbb{P}_{\gamma}\left(\tau_{\Gamma}>t\right)}\,\gamma(\mathrm{d}x)=\int_{\Gamma}\mu(A)\frac{M_{\Gamma}(x)}{\int_{\Gamma}M_{\Gamma}(y)\,\gamma(\mathrm{d}y)}\,\gamma(\mathrm{d}x)=\mu(A).\] The proof is complete. **Example 3.13**.: Note that \(\beta=0\) if and only if \(\Gamma^{c}\) is a polar set and then \(M_{\Gamma}(x)=1\) for all \(x\in\Gamma\), see [3, Theorem 3.2]. Consequently, we have \(p_{t}^{\Gamma}(x,y)=p_{t}(x,y)\) and \(\mathbb{P}_{x}(\tau_{\Gamma}>t)=1\) for all \(x,y\in\Gamma\) and all \(t>0\). It follows that \(\rho_{t}(x,y)=p_{t}(x,y)\) and a direct calculation using the Chapman-Kolmogorov property entails that \(\varphi(y)=p_{1}(0,y)\) is the stationary density for the (classical) \(\alpha\)-stable Ornstein-Uhlenbeck semigroup, see (3.14) and Theorem 3.4. 
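Namely, since here \(M_{\Gamma}\equiv 1\) and \(\ell_{1}(x,y)=p_{1-e^{-1}}(e^{-1/\alpha}x,y)\) by (3.14), the substitution \(x=e^{1/\alpha}w\) together with (2.2) and the Chapman-Kolmogorov property of \(p_{t}\) gives \[L_{1}\varphi(y)=\int_{\Gamma}p_{1-e^{-1}}\big{(}e^{-1/\alpha}x,y\big{)}p_{1}(0,x)\,\mathrm{d}x=\int_{\Gamma}p_{1-e^{-1}}(w,y)\,p_{e^{-1}}(0,w)\,\mathrm{d}w=p_{1}(0,y)=\varphi(y),\] where we used \(e^{d/\alpha}p_{1}\big{(}0,e^{1/\alpha}w\big{)}=p_{e^{-1}}(0,w)\), by (2.2). 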
The statement of Theorem 3.7 thus reduces to the continuity property of the heat kernel of the isotropic \(\alpha\)-stable Levy process. Theorems 3.11 and 3.12 trivialize in a similar way. Incidentally, in this case the moment condition on \(\gamma\) in Theorem 3.12 is superfluous. Further examples are given in Section 4. ## 4. Asymptotic behavior for the killed semigroup This section is devoted to examples and applications in Functional Analysis and Partial Differential Equations. Note that in Lemmas 4.1 and 4.2 we do not assume that \(\Gamma\) is fat. **Lemma 4.1**.: \(\{P_{t}^{\Gamma}\}_{t>0}\) _is a strongly continuous contraction semigroup on \(L^{1}(M_{\Gamma})\) and_ \[\int_{\Gamma}P_{t}^{\Gamma}f(x)M_{\Gamma}(x)\,dx=\int_{\Gamma}f(x)M_{\Gamma}(x)\,dx,\quad t>0,\quad f\in L^{1}(M_{\Gamma}). \tag{4.1}\] Proof.: Let \(f\geqslant 0\). By the Fubini-Tonelli theorem, the symmetry of \(p_{t}^{\Gamma}\) and Theorem 3.1, \[\int_{\Gamma}P_{t}^{\Gamma}f(x)M_{\Gamma}(x)\,\mathrm{d}x=\int_{\Gamma}\int_{\Gamma}p_{t}^{\Gamma}(x,y)f(y)M_{\Gamma}(y)\,\mathrm{d}y\,\mathrm{d}x=\int_{\Gamma}f(y)M_{\Gamma}(y)\,\mathrm{d}y. \tag{4.2}\] Since \(|P_{t}^{\Gamma}f|\leqslant P_{t}^{\Gamma}|f|\), the contractivity follows. Furthermore, for arbitrary \(f\in L^{1}(M_{\Gamma})\) we write \(f=f_{+}-f_{-}\) and use (4.2) to prove (4.1). The semigroup property follows from (2.5). To prove the strong continuity, we fix \(f\in L^{1}(M_{\Gamma})\) and let \(G:=fM_{\Gamma}\in L^{1}(\Gamma)\). There is a sequence \(g_{n}\in C_{c}^{\infty}(\Gamma)\) such that \(\|g_{n}-G\|_{L^{1}(\Gamma)}\to 0\) as \(n\to\infty\). For \(f_{n}:=g_{n}/M_{\Gamma}\) we get \(f_{n}\in C_{c}^{\infty}(\Gamma)\) and \(\|f_{n}-f\|_{L^{1}(M_{\Gamma})}=\|g_{n}-G\|_{L^{1}(\Gamma)}\to 0\). By the first part of the proof, \[\|P_{t}^{\Gamma}f-f\|_{L^{1}(M_{\Gamma})} \leqslant\|P_{t}^{\Gamma}f-P_{t}^{\Gamma}f_{n}\|_{L^{1}(M_{\Gamma})}+\|P_{t}^{\Gamma}f_{n}-f_{n}\|_{L^{1}(M_{\Gamma})}+\|f_{n}-f\|_{L^{1}(M_{\Gamma})}\] \[\leqslant 2\|f_{n}-f\|_{L^{1}(M_{\Gamma})}+\|P_{t}^{\Gamma}f_{n}-f_{n}\|_{L^{1}(M_{\Gamma})}.\] It remains to prove that \(\|P_{t}^{\Gamma}f-f\|_{L^{1}(M_{\Gamma})}\to 0\) as \(t\to 0^{+}\) for every \(f\in C_{c}^{\infty}(\Gamma)\). To this end we let \(\varepsilon>0\) and choose \(R>0\) such that \(\operatorname{supp}f\subseteq B_{R}\) and \(\int_{\Gamma\setminus\Gamma_{R}}P_{t}^{\Gamma}|f|(x)M_{\Gamma}(x)\,\mathrm{d}x<\varepsilon\). Then, \[\|P_{t}^{\Gamma}f-f\|_{L^{1}(M_{\Gamma})}<\int_{\Gamma_{R}}|P_{t}^{\Gamma}f(x)-f(x)|M_{\Gamma}(x)\,\mathrm{d}x+\varepsilon. \tag{4.3}\] Considering the integrand in (4.3), for all \(x\in\Gamma_{R}\) we have \[|P_{t}^{\Gamma}f(x)-f(x)|\leqslant\int_{\Gamma}p_{t}^{\Gamma}(x,y)|f(y)-f(x)|\,\mathrm{d}y+|f(x)|\mathbb{P}_{x}(\tau_{\Gamma}\leqslant t). \tag{4.4}\] Since \(P_{t}f\to f\) uniformly as \(t\to 0^{+}\), for \(t>0\) small enough we get \[\int_{\Gamma}p_{t}^{\Gamma}(x,y)|f(y)-f(x)|\,\mathrm{d}y\leqslant\int_{\mathbb{R}^{d}}p_{t}(x,y)|f(y)-f(x)|\,\mathrm{d}y<\varepsilon.\] On the other hand, \[|f(x)|\mathbb{P}_{x}(\tau_{\Gamma}\leqslant t)\leqslant\|f\|_{\infty}\sup_{x\in K}\mathbb{P}_{x}(\tau_{\Gamma}\leqslant t),\] where \(K:=\operatorname{supp}f\). We have \(r:=\operatorname{dist}(K,\Gamma^{c})>0\), so \[\mathbb{P}_{x}(\tau_{\Gamma}\leqslant t)\leqslant\mathbb{P}_{x}(\tau_{B(x,r)}\leqslant t)=\mathbb{P}_{0}(\tau_{B_{r}}\leqslant t)\leqslant ctr^{-\alpha}<\varepsilon,\] for \(t\) small enough, see, e.g., [31]. 
By (4.3) and (4.4) we get, as required, \[\|P_{t}^{\Gamma}f-f\|_{L^{1}(M_{\Gamma})}<\varepsilon+(\varepsilon+\|f\|_{ \infty}\varepsilon)|\Gamma_{R}|\sup_{\Gamma_{R}}M_{\Gamma}.\] Recall that \[\|f\|_{q,M_{\Gamma}}:=\|f/M_{\Gamma}\|_{L^{q}(M_{\Gamma}^{2})}=\bigg{(}\int_{ \Gamma}|f(x)|^{q}M_{\Gamma}^{2-q}(x)\,\mathrm{d}x\bigg{)}^{\frac{1}{q}}=\|f\|_{ L^{q}(M_{\Gamma}^{2-q})},\] if \(1\leqslant q<\infty\), and \[\|f\|_{\infty,M_{\Gamma}}:=\operatorname*{ess\,sup}_{x\in\Gamma}|f(x)|/M_{ \Gamma}(x).\] The following characterization of _hypercontractivity_ of \(P_{t}^{\Gamma}\) is crucial for the proof of (1.3). **Lemma 4.2**.: _Let \(q\in[1,\infty)\). We have_ \[\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}}\leqslant Ct^{-\frac{d+2\beta}{\alpha}\frac{ q-1}{q}}\|f\|_{1,M_{\Gamma}} \tag{4.5}\] _for all \(t>0\) and all non-negative functions \(f\) on \(\mathbb{R}^{d}\) if and only if_ \[\sup_{y\in\Gamma}\int_{\Gamma}\rho_{1}(x,y)^{q}M_{\Gamma}^{2}(x)\,dx<\infty. \tag{4.6}\] Proof.: Assume (4.6). Let \(f\geqslant 0\). With the notation \(F:=f/M_{\Gamma}\) we get \[\|P_{1}^{\Gamma}f\|_{q,M_{\Gamma}}=\bigg{(}\int_{\Gamma}\bigg{(}\int_{\Gamma} \rho_{1}(x,y)F(y)M_{\Gamma}^{2}(y)\,\mathrm{d}y\bigg{)}^{q}M_{\Gamma}^{2}(x) \,\mathrm{d}x\bigg{)}^{1/q}.\] Let \(c\) be the supremum in (4.6). By Minkowski integral inequality, \[\bigg{(}\int_{\Gamma}\bigg{(}\int_{\Gamma}\rho_{1}(x,y)F(y)M_{ \Gamma}^{2}(y)\,\mathrm{d}y\bigg{)}^{q}M_{\Gamma}^{2}(x)\,\mathrm{d}x\bigg{)} ^{1/q}\leqslant\int_{\Gamma}\bigg{(}\int_{\Gamma}\rho_{1}(x,y)^{q}M_{\Gamma}^{ 2}(x)\,\mathrm{d}x\bigg{)}^{1/q}F(y)M_{\Gamma}^{2}(y)\,\mathrm{d}y\] \[\leqslant c\int_{\Gamma}F(y)M_{\Gamma}^{2}(y)\,\mathrm{d}y=c\|f \|_{1,M_{\Gamma}}.\] For \(t>0\), by scaling we get (4.5) as follows: \[\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}} =t^{\frac{d+\beta(2-q)}{\alpha q}}\|P_{1}^{\Gamma}f(t^{1/\alpha} \,\cdot)\|_{q,M_{\Gamma}}\leqslant ct^{\frac{d+\beta(2-q)}{\alpha q}}\|f(t^{ 1/\alpha}\,\cdot)\|_{1,M_{\Gamma}}\] \[=ct^{\frac{d+\beta(2-q)}{\alpha q}}t^{-\frac{d+\beta}{\alpha}}\| f\|_{1,M_{\Gamma}}=ct^{-\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|f\|_{1,M_{ \Gamma}}.\] Conversely, assume (4.5). Let \(y\in\Gamma\). Let \(g_{n}\geqslant 0\), \(n\in\mathbb{N}\), be functions in \(C_{c}^{\infty}(\Gamma)\) approximating \(\delta_{y}\), the Dirac measure at \(y\), as follows: \[\int_{\Gamma}g_{n}(x)\,\mathrm{d}x=1,\qquad\text{and}\qquad\lim_{n\to\infty} \int_{\Gamma}h(x)g_{n}(x)\,\mathrm{d}x=h(y),\] for every function \(h\) continuous near \(y\). For \(f_{n}:=g_{n}/M_{\Gamma}\), \(\|f_{n}\|_{1,M_{\Gamma}}=\|g_{n}\|_{1}=1\) and \[P_{1}^{\Gamma}f_{n}(x)=\int_{\Gamma}p_{1}^{\Gamma}(x,z)\frac{g_{n}(z)}{M_{ \Gamma}(z)}\,\mathrm{d}z\to\frac{p_{1}^{\Gamma}(x,y)}{M_{\Gamma}(y)},\] as \(n\to\infty\). By (4.5) and Fatou's lemma, \[C^{q} \geqslant\liminf_{n\to\infty}\|P_{1}^{\Gamma}f_{n}\|_{q,M_{\Gamma}} ^{q}\] \[=\liminf_{n\to\infty}\int_{\Gamma}\bigg{|}\int_{\Gamma}\frac{p_{1 }^{\Gamma}(x,z)}{M_{\Gamma}(z)}g_{n}(z)\,\mathrm{d}z\bigg{|}^{q}M_{\Gamma}^{2- q}(x)\,\mathrm{d}x\] \[\geqslant\int_{\Gamma}\rho_{1}(x,y)^{q}M_{\Gamma}^{2}(x)\,\mathrm{ d}x.\] Since \(y\in\Gamma\) was arbitrary, we obtain (4.6). _Remark 4.3_.: Of course, (4.5) extends to arbitrary \(f\in L^{1}(M_{\Gamma})\). **Example 4.4**.: As in Example 3.13, we assume that \(\beta=0\). In fact, to simplify notation, let \(\Gamma=\mathbb{R}^{d}\). Then (4.6) is trivially satisfied for every \(q\in[1,\infty)\), because \(\rho_{1}(x,y)=p_{1}(x,y)\) is bounded. 
Therefore, by (4.5), for each \(f\in L^{1}\), \[\|P_{t}f\|_{q}\leqslant Ct^{-\frac{d}{\alpha}\frac{q-1}{q}}\|f\|_{1}.\] This agrees with [36], see also [14]. Here is a refinement of Lemma 4.2. **Lemma 4.5**.: _Let \(q\in[1,\infty)\), assume (4.6) and suppose \(\Gamma\) is fat. If \(f\in L^{1}(M_{\Gamma})\) and \(\int_{\Gamma}f(x)M_{\Gamma}(x)\,dx=0\), then_ \[\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}}=0. \tag{4.7}\] _If, additionally, \(f\) has compact support, then (4.7) is true for \(q=\infty\), too._ Proof.: Let \(\omega>0\). First, we prove (4.7) for a compactly supported function \(f\in L^{1}(M_{\Gamma})\) satisfying \[\int_{\Gamma}f(x)M_{\Gamma}(x)\,\mathrm{d}x=0. \tag{4.8}\] _Step 1. Case \(q=\infty\)._ For \(t>0\) we let \[I(t):=t^{\frac{d+2\beta}{\alpha}}\|P_{t}^{\Gamma}f\|_{\infty,M_{\Gamma}}=t^{\frac{d+2\beta}{\alpha}}\sup_{x\in\Gamma}\Bigl{|}\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}(y)f(y)\,\mathrm{d}y\Bigr{|}\,.\] By (4.8), \[I(t)=t^{\frac{d+2\beta}{\alpha}}\sup_{x\in\Gamma}\Bigl{|}\int_{\Gamma}\left(\rho_{t}(x,y)-\rho_{t}(x,0)\right)M_{\Gamma}(y)f(y)\,\mathrm{d}y\Bigr{|}\,.\] Since \(f\) has compact support, for sufficiently large \(t>0\) we have \[I(t)=t^{\frac{d+2\beta}{\alpha}}\sup_{x\in\Gamma}\left|\int_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}\left(\rho_{t}(x,y)-\rho_{t}(x,0)\right)M_{\Gamma}(y)f(y)\,\mathrm{d}y\right|\leqslant t^{\frac{d+2\beta}{\alpha}}\sup_{\begin{subarray}{c}x\in\Gamma\\ |y|\leqslant t^{\frac{1}{\alpha}}\omega\end{subarray}}|\rho_{t}(x,y)-\rho_{t}(x,0)|\int_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}M_{\Gamma}(y)|f(y)|\,\mathrm{d}y=\sup_{\begin{subarray}{c}x\in\Gamma\\ |y|\leqslant\omega\end{subarray}}|\rho_{1}\left(x,y\right)-\rho_{1}\left(x,0\right)|\int_{\Gamma}M_{\Gamma}(y)|f(y)|\,\mathrm{d}y,\] where in the last step we used the scaling (3.6) of \(\rho\). By Theorem 3.7, we can make it arbitrarily small by choosing small \(\omega\), and (4.7) follows in this case. _Step 2. Case \(q=1\)._ For \(t>0\) we let \[J(t):=\|P_{t}^{\Gamma}f\|_{1,M_{\Gamma}}=\int_{\Gamma}\left|\int_{\Gamma}p_{t}^{\Gamma}(x,y)M_{\Gamma}(x)f(y)\,\mathrm{d}y\right|\,\mathrm{d}x=\int_{\Gamma}\Big{|}\int_{\Gamma}\rho_{t}(x,y)M_{\Gamma}^{2}(x)f(y)M_{\Gamma}(y)\,\mathrm{d}y\Big{|}\,\mathrm{d}x.\] Applying (4.8), we get \[J(t)\leqslant\int_{\Gamma}\int_{\Gamma}\big{|}\rho_{t}(x,y)-\rho_{t}(x,0)\big{|}M_{\Gamma}^{2}(x)|f(y)|M_{\Gamma}(y)\,\mathrm{d}y\,\mathrm{d}x.\] Since \(f\) has compact support, \[J(t)\leqslant\int_{\Gamma}\int_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}\left|\rho_{t}(x,y)-\rho_{t}(x,0)\right|M_{\Gamma}^{2}(x)|f(y)|M_{\Gamma}(y)\,\mathrm{d}y\,\mathrm{d}x\leqslant\sup_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}\|\rho_{t}(\cdot,y)-\rho_{t}(\cdot,0)\|_{L^{1}(M_{\Gamma}^{2})}\int_{\Gamma}M_{\Gamma}(y)|f(y)|\,\mathrm{d}y,\] for sufficiently large \(t\). In view of (2.10) and (3.6), by changing variables \(t^{-1/\alpha}x\to x\) and \(t^{-1/\alpha}y\to y\) we obtain \[\sup_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}\|\rho_{t}(\cdot,y)-\rho_{t}(\cdot,0)\|_{L^{1}(M_{\Gamma}^{2})}=t^{-\frac{d+2\beta}{\alpha}}\sup_{|y|\leqslant t^{\frac{1}{\alpha}}\omega}\int\left|\rho_{1}\left(t^{-1/\alpha}x,t^{-1/\alpha}y\right)-\rho_{1}\left(t^{-1/\alpha}x,0\right)\right|M_{\Gamma}^{2}(x)\,\mathrm{d}x=\sup_{|y|\leqslant\omega}\|\rho_{1}(\cdot,y)-\rho_{1}(\cdot,0)\|_{L^{1}(M_{\Gamma}^{2})}.\] By Corollary 3.5, we can make it arbitrarily small by choosing small \(\omega\), so (4.7) is true. _Step 3.
Case \(q\in(1,\infty)\)._ By Hölder's inequality we get that, as \(t\to\infty\), \[t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}}=t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\bigg{(}\int_{\Gamma}|P_{t}^{\Gamma}f(x)/M_{\Gamma}(x)|^{q-1}|P_{t}^{\Gamma}f(x)M_{\Gamma}(x)|\,\mathrm{d}x\bigg{)}^{\frac{1}{q}}\leqslant\left(t^{\frac{d+2\beta}{\alpha}}\|P_{t}^{\Gamma}f\|_{\infty,M_{\Gamma}}\right)^{\frac{q-1}{q}}\|P_{t}^{\Gamma}f\|_{1,M_{\Gamma}}^{\frac{1}{q}}\to 0,\] since both factors converge to zero as \(t\to\infty\) by _Step 1_ and _Step 2_. Finally, consider an arbitrary \(f\in L^{1}(M_{\Gamma})\) with \(\int_{\Gamma}f(x)M_{\Gamma}(x)\,\mathrm{d}x=0\). Let \(R>0\) and \(f_{R}(x)=(f(x)-c_{R})\mathbf{1}_{|x|\leqslant R}\), where \(c_{R}=\int_{|x|\leqslant R}f(x)M_{\Gamma}(x)\,\mathrm{d}x/\int_{|x|\leqslant R}M_{\Gamma}(x)\,\mathrm{d}x\). Of course, \[\int_{\Gamma}M_{\Gamma}(x)f_{R}(x)\,\mathrm{d}x=0, \tag{4.9}\] and \(f_{R}\) is compactly supported. Furthermore, due to our assumptions, \[\|f-f_{R}\|_{L^{1}(M_{\Gamma})}=|c_{R}|\int_{|x|\leqslant R}M_{\Gamma}(x)\,\mathrm{d}x+\int_{|x|>R}M_{\Gamma}(x)|f(x)|\,\mathrm{d}x=\Big{|}\int_{|x|\leqslant R}M_{\Gamma}(x)f(x)\,\mathrm{d}x\Big{|}+\int_{|x|>R}M_{\Gamma}(x)|f(x)|\,\mathrm{d}x\to 0\] as \(R\to\infty\). Let \(\varepsilon>0\) and choose \(R>0\) so large that \[\|f-f_{R}\|_{1,M_{\Gamma}}<\varepsilon.\] For \(q=1\), by using the triangle inequality and Lemma 4.1, we get \[\|P_{t}^{\Gamma}f\|_{1,M_{\Gamma}}\leqslant\|P_{t}^{\Gamma}f_{R}\|_{1,M_{\Gamma}}+\|P_{t}^{\Gamma}(f-f_{R})\|_{1,M_{\Gamma}}\leqslant\|P_{t}^{\Gamma}f_{R}\|_{1,M_{\Gamma}}+\|f-f_{R}\|_{1,M_{\Gamma}},\] and _Step 2_ yields \[\limsup_{t\to\infty}\|P_{t}^{\Gamma}f\|_{1,M_{\Gamma}}\leqslant\varepsilon,\] which proves (4.7) in this case. If \(1<q<\infty\), then using the triangle inequality and Lemma 4.2, we obtain \[t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}}\leqslant t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f_{R}\|_{q,M_{\Gamma}}+t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}(f-f_{R})\|_{q,M_{\Gamma}}\leqslant t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f_{R}\|_{q,M_{\Gamma}}+C\|f-f_{R}\|_{1,M_{\Gamma}}.\] By (4.9) and _Step 3_, \[\limsup_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f\|_{q,M_{\Gamma}}\leqslant 2C\varepsilon.\] This completes the proof of (4.7) for \(q\in(1,\infty)\). **Theorem 4.6**.: _Let \(q\in[1,\infty)\), assume (4.6) and suppose \(\Gamma\) is fat. Then for \(f\in L^{1}(M_{\Gamma})\) and \(A=\int_{\Gamma}f(x)M_{\Gamma}(x)\,dx\),_ \[\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f-A\Psi_{t}\|_{q,M_{\Gamma}}=0.\] _Remark 4.7_.: In view of (3.30) and Lemma 4.1, the constant \(A\) in Theorem 4.6 satisfies \[\int_{\Gamma}\left(P_{t}^{\Gamma}f(x)-A\Psi_{s}(x)\right)M_{\Gamma}(x)\,\mathrm{d}x=0,\quad s,t>0.\] Proof of Theorem 4.6.: By (2.5), (1.2), Remark 4.7 and Lemma 4.5, \[\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}f-A\Psi_{t}\|_{q,M_{\Gamma}}=\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t+1}^{\Gamma}f-A\Psi_{t+1}\|_{q,M_{\Gamma}}=\lim_{t\to\infty}t^{\frac{d+2\beta}{\alpha}\frac{q-1}{q}}\|P_{t}^{\Gamma}\big{(}P_{1}^{\Gamma}f-A\Psi_{1}\big{)}\|_{q,M_{\Gamma}}=0.\] ### Applications We conclude the article by providing several applications and examples of our results.
In particular, we draw the reader's attention to Lemma 4.9, which provides a sharp distinction between cones contained in the half-space \(\mathbb{R}^{d}_{+}:=\{x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\colon x_{d}>0\}\) and those which contain \(\mathbb{R}^{d}_{+}\). The same behavior is displayed by a larger class of smooth cones, as we assert in Corollary 4.12. First, we note a simple observation. **Example 4.8**.: Let \(q=1\). By (3.4), the condition (4.6) holds for every fat cone \(\Gamma\). **Lemma 4.9**.: _Let \(q\in(1,\infty)\) and suppose \(\Gamma\) is a right-circular cone. Then (4.6) holds if \(\beta\geqslant\alpha/2\). Conversely, if \(d\geqslant 2\) and \(\beta<\alpha/2\), then (4.6) does not hold._ Proof.: Recall that by [11, Theorem 2 and Eq. (3)], \[p_{1}^{\Gamma}(x,y)\approx p_{1}(x,y)\frac{\big{(}1\wedge\delta_{\Gamma}(x)\big{)}^{\alpha/2}\big{(}1\wedge\delta_{\Gamma}(y)\big{)}^{\alpha/2}}{(1\wedge|x|)^{\alpha/2-\beta}(1\wedge|y|)^{\alpha/2-\beta}},\quad x,y\in\Gamma. \tag{4.10}\] Moreover, [29, Lemma 3.3] entails that \[M_{\Gamma}(x)\approx\delta_{\Gamma}(x)^{\alpha/2}|x|^{\beta-\alpha/2},\quad x\in\mathbb{R}^{d}.\] Using this together with (4.10) and (2.4), we infer that, for \(x,y\in\Gamma\), \[\begin{split}\rho_{1}(x,y)&\approx p_{1}(x,y)\frac{\big{(}1\wedge\delta_{\Gamma}(x)\big{)}^{\alpha/2}\big{(}1\wedge\delta_{\Gamma}(y)\big{)}^{\alpha/2}}{\big{(}1\wedge|x|\big{)}^{\alpha/2-\beta}\big{(}1\wedge|y|\big{)}^{\alpha/2-\beta}\delta_{\Gamma}(x)^{\alpha/2}|x|^{\beta-\alpha/2}\delta_{\Gamma}(y)^{\alpha/2}|y|^{\beta-\alpha/2}}\\ &\approx\big{(}1+|x-y|\big{)}^{-d-\alpha}\frac{\big{(}1+\delta_{\Gamma}(x)\big{)}^{-\alpha/2}\big{(}1+\delta_{\Gamma}(y)\big{)}^{-\alpha/2}}{(1+|x|)^{\beta-\alpha/2}(1+|y|)^{\beta-\alpha/2}}.\end{split} \tag{4.11}\] Let \(q\in(1,\infty)\) and assume \(\beta\geqslant\alpha/2\). Then it follows from (4.11) that \(\rho_{1}(x,y)\lesssim 1\) for \(x,y\in\Gamma\), and (3.4) entails that \[\begin{split}\int_{\Gamma}\rho_{1}(x,y)^{q}M_{\Gamma}^{2}(x)\,\mathrm{d}x&\leqslant\|\rho_{1}(\,\cdot\,,y)\|_{\infty}^{q-1}\int_{\Gamma}\rho_{1}(x,y)M_{\Gamma}^{2}(x)\,\mathrm{d}x\\ &=\|\rho_{1}(\,\cdot\,,y)\|_{\infty}^{q-1}\lesssim 1.\end{split} \tag{4.12}\] Thus, we get (4.6) as claimed. Now assume that \(d\geqslant 2\) and \(\beta<\alpha/2\). Let \(y\in\Gamma\) be such that \(\delta_{\Gamma}(y)=2\), so that \(A:=B(y,1)\subseteq\Gamma\). Then for \(x\in A\) one clearly has \(1+|x-y|\approx 1\) and \(\delta_{\Gamma}(x)\approx 1\). It then follows from (4.11) that \[\begin{split}\int_{\Gamma}\rho_{1}(x,y)^{q}M_{\Gamma}^{2}(x)\,\mathrm{d}x&\geqslant\int_{A}\rho_{1}(x,y)^{q}M_{\Gamma}^{2}(x)\,\mathrm{d}x\\ &\approx\int_{A}(1+|x|)^{q(\alpha/2-\beta)}(1+|y|)^{q(\alpha/2-\beta)}\delta_{\Gamma}(x)^{\alpha}|x|^{2\beta-\alpha}\,\mathrm{d}x\\ &\approx|y|^{(q-1)(\alpha-2\beta)}.\end{split}\] Since \(\alpha-2\beta>0\) and \(q>1\), by taking \(|y|\to\infty\) we see that (4.6) cannot hold in this case. **Example 4.10**.: Considering the direct part of Lemma 4.9, we note that for \(d=1\) one cannot have at the same time \(\delta_{\Gamma}(y)\approx 1\) and \(|y|\to\infty\). In fact, in this case either \(\Gamma=(0,\infty)\) or \(\Gamma=\mathbb{R}\setminus\{0\}\). In both situations \(\delta_{\Gamma}(y)=|y|\) for \(y\in\Gamma\) and (4.11) yields the boundedness of \(\rho_{1}\). When \(\Gamma=(0,\infty)\), one has \(\beta=\alpha/2\) by [3, Example 3.2].
If \(\alpha\in(1,2)\) and \(\Gamma=\mathbb{R}\setminus\{0\}\), then \(\Gamma^{c}=\{0\}\) is a non-polar set and \(\beta=\alpha-1\), see [3, Example 3.3]. We note that both \(\Gamma=(0,\infty)\) and \(\Gamma=\mathbb{R}\setminus\{0\}\) are (trivially) smooth cones. In both cases, (4.6) holds by (4.11) and (4.12). **Example 4.11**.: Let \(d\geqslant 2\) and \(\Gamma\) be a right-circular cone which is a subset of the half-space \(\mathbb{R}_{+}^{d}:=\{x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\colon x_{d}>0\}\). Then by [3, Example 3.2 and Lemma 3.3] we have \(\beta\geqslant\alpha/2\) and Lemma 4.9 gives (4.6). In particular, one can take \(\Gamma=\mathbb{R}_{+}^{d}\). On the other hand, if \(\Gamma\) is such that \(\mathbb{R}_{+}^{d}\subsetneq\Gamma\), then \(\beta<\alpha/2\) by [3, Lemma 3.3], and Lemma 4.9 asserts that (4.6) does not hold. **Corollary 4.12**.: _Let \(d\geqslant 2\) and suppose \(\Gamma\) is a smooth cone. Then (4.6) holds if and only if \(\beta\geqslant\alpha/2\)._ Proof.: Recall that \(\Gamma\) is open and \(C^{1,1}\) outside of the origin. From the harmonicity and homogeneity of \(M_{\Gamma}\), by the boundary Harnack principle we get, as in [29, Lemma 3.3], that \[M_{\Gamma}(x)\approx\delta_{\Gamma}(x)^{\alpha/2}|x|^{\beta-\alpha/2},\quad x\in\Gamma.\] Moreover, since the smooth cone is fat, its Dirichlet heat kernel satisfies (4.10). Thus, one can directly repeat the proof of Lemma 4.9 to conclude the claim.
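As a purely illustrative aside, the decay rate in Example 4.4 can be checked numerically in the one case where the free kernel is explicit: \(d=1\), \(\alpha=1\), where \(p_{t}(x,y)=t/\big(\pi(t^{2}+(x-y)^{2})\big)\) is the Cauchy kernel. The following minimal sketch (an assumption-laden numerical check, not part of the argument above) verifies for \(q=2\) that \(t^{(q-1)/q}\|P_{t}f\|_{q}\) stays bounded; in fact it converges to \(\|f\|_{1}/\sqrt{2\pi}\approx 0.3989\,\|f\|_{1}\):

```python
import numpy as np

# Numerical check of (4.5) in the setting of Example 4.4 (beta = 0), with
# d = 1, alpha = 1, where p_t is the explicit Cauchy kernel. We take q = 2
# and monitor t^{1/2} * ||P_t f||_2 / ||f||_1, which should remain bounded
# (and converges to 1/sqrt(2*pi) ~ 0.3989).

y = np.linspace(-40.0, 40.0, 2001)                # f is negligible outside [-40, 40]
f = np.exp(-np.abs(y)) * (1.0 + 0.5 * np.sin(3.0 * y))   # a positive test f in L^1
norm_f1 = np.trapz(np.abs(f), y)

for t in [1.0, 10.0, 100.0, 1000.0]:
    # grid adapted to the spread of P_t f (width ~ t for the Cauchy kernel)
    x = np.linspace(-80.0 * t, 80.0 * t, 4001)
    kernel = t / (np.pi * (t**2 + (x[:, None] - y[None, :])**2))
    Ptf = np.trapz(kernel * f[None, :], y, axis=1)        # P_t f on the grid
    l2 = np.sqrt(np.trapz(Ptf**2, x))                     # ||P_t f||_2
    print(f"t = {t:7.1f}   t^(1/2) ||P_t f||_2 / ||f||_1 = {np.sqrt(t) * l2 / norm_f1:.4f}")
```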
2310.15247
SyncFusion: Multimodal Onset-synchronized Video-to-Audio Foley Synthesis
Sound design involves creatively selecting, recording, and editing sound effects for various media like cinema, video games, and virtual/augmented reality. One of the most time-consuming steps when designing sound is synchronizing audio with video. In some cases, environmental recordings from video shoots are available, which can aid in the process. However, in video games and animations, no reference audio exists, requiring manual annotation of event timings from the video. We propose a system to extract repetitive action onsets from a video, which are then used - in conjunction with audio or textual embeddings - to condition a diffusion model trained to generate a new synchronized sound effects audio track. In this way, we leave complete creative control to the sound designer while removing the burden of synchronization with video. Furthermore, editing the onset track or changing the conditioning embedding requires much less effort than editing the audio track itself, simplifying the sonification process. We provide sound examples, source code, and pretrained models to facilitate reproducibility.
Marco Comunità, Riccardo F. Gramaccioni, Emilian Postolache, Emanuele Rodolà, Danilo Comminiello, Joshua D. Reiss
2023-10-23T18:01:36Z
http://arxiv.org/abs/2310.15247v1
# SyncFusion: Multimodal Onset-synchronized Video-to-Audio Foley Synthesis ###### Abstract Sound design involves creatively selecting, recording, and editing sound effects for various media like cinema, video games, and virtual/augmented reality. One of the most time-consuming steps when designing sound is synchronizing audio with video. In some cases, environmental recordings from video shoots are available, which can aid in the process. However, in video games and animations, no reference audio exists, requiring manual annotation of event timings from the video. We propose a system to extract repetitive action onsets from a video, which are then used - in conjunction with audio or textual embeddings - to condition a diffusion model trained to generate a new synchronized sound effects audio track. In this way, we leave complete creative control to the sound designer while removing the burden of synchronization with video. Furthermore, editing the onset track or changing the conditioning embedding requires much less effort than editing the audio track itself, simplifying the sonification process. We provide sound examples, source code, and pretrained models to facilitate reproducibility1. Footnote 1: [https://mcomunita.github.io/diffusion-sfz_page](https://mcomunita.github.io/diffusion-sfz_page) Marco Comunita\({}^{1}\)+ Riccardo F. Gramaccioni\({}^{2}\)+ Emilian Postolache\({}^{2}\) Emanuele Rodola\({}^{2}\) Danilo Comminiello\({}^{2}\) Joshua D. Reiss\({}^{1}\) Footnote †: \({}^{1}\)Centre for Digital Music, Queen Mary University of London, UK \({}^{2}\)Sapienza University of Rome, Italy Sound effects synthesis, foley, diffusion models, audio-video synchronization, multimodal audio synthesis. ## 1 Introduction Sound plays an essential part in the narration of any audiovisual work. Consequently, it should come as no surprise that the role of the Foley artist - who creates sound effects for films, video games, commercials, etc. - is crucial to achieving top-quality productions. This task presents considerable difficulties, as it is necessary to create an audio track that corresponds perfectly, both in time and content, to the video to be soundtracked. In contexts like video games and animated movies, sound designers often receive silent videos, requiring them to create soundtracks entirely from scratch without timing guidance. In other cases, like cinematographic filming, a raw audio track may accompany the video, but Foley artists can only rely on it for timing, while sounds have to be recreated from the ground up, often adopting totally different materials to those present in the video in order to seek a hyper-realism that can benefit the narration. In any case, the essence of the work lies in sourcing and creating high-quality sounds, leading them to build distinctive sound libraries used across their productions. While the creative aspect is highly stimulating, sound designers dedicate the majority of their time to meticulous, repetitive tasks, such as the precise synchronization of sound events with video moments, which is crucial for maintaining viewer immersion. Many works have tried to automate audio-video synchronization: [1] propose a transformer-based architecture that allows the analysis of long video sequences; similar solutions have been adopted in [2], while in [3] the analysis is extended to lip synchronization for singing voices. In [4] the aim is to generate a musical soundtrack that is synchronized with the movie pictures.
In sound design, the target is to create an ambient soundtrack that perfectly describes the scene in terms of the general mood of the audio-visual work, while following the scenes' transitions as well as temporal and spatial localization. Spatio-temporal event localization is a key computer vision task that many research studies have tried to solve through the use of deep learning techniques. However, works have often focused on detecting and counting the number of repetitions of a particular action in a video [5, 6]. In [7] the repetition count is class agnostic, which is a case of major interest for the analysis of movie sequences, video games, and so forth. However, these works do not provide a precise timing of the actions, which is crucial information for soundtracking. In recent years, video-to-audio tasks have started gathering wider attention, and some works analyzed the generation of audio tracks that are temporally and thematically aligned to a given video sequence [8, 9]. Contrastive learning is providing remarkable results in domain translation from video to audio for solving this challenging problem [10, 11]. The authors of [12] proposed a model to generate a soundtrack for a silent input video, given a reference audio-video pair that specifies what the video should sound like. Compared to previous work, conditioning the audio generation model with onsets of the actions to be soundtracked can provide Foley artists with greater creative control, allowing them to bypass the mechanical task of manually annotating each repetition of the relevant action and focus exclusively on the quality of the sounds to be produced. Furthermore, modifying an onset track is very simple and can be of great help when editing. Therefore, we decided to base our work on a model that is able to perform onset detection of the actions present in a silent input video. Next, we developed a diffusion model that, conditioned on an embedding of how the actions in the video should sound and an onset track depicting when those actions occur over time, generates an audio track that is aligned in time and content with the input video. This choice is motivated by the extraordinary results that diffusion models have recently achieved in audio generation [13, 14]. A block diagram of the proposed system is shown in Figure 1. Figure 1: The overall architecture consists of two distinct parts: 1) an Onset Model that, given a silent video, extracts the onsets for the actions in that video; and 2) a Diffusion Model which, conditioned on the onset track and a CLAP embedding, generates synchronized audio that can be used as a soundtrack for the input video. ## 2 Method In this section we detail the different components of the proposed model: the video onset network, which detects the occurrences of the relevant actions in the silent input video; the audio representation network, which obtains an embedding of the desired target sound from an audio or textual source; and the diffusion model, which, conditioned on the onset track and sound embedding, generates the onset-synchronized audio track. ### Video Onset Detection Inspired by the work done in [12], we selected a ResNet(2+1)D-18 as the video onset detection network. Following their implementation, we removed all temporal striding so that the last convolutional layer would have the same temporal sampling rate as the input video and therefore preserve more detailed temporal information.
At the final stage, after pooling, a fully connected layer outputs a label of the same length in frames as the input video. Each element of this label is a prediction representing the presence or absence of a given action at the specific frame. Consequently, the resulting label will be a binary mask in which the value \(1\) for the \(i\)-th element represents the presence of an onset for frame \(i\), while the value \(0\) indicates that no action has been detected for that frame. Therefore, given a silent input video \(V\in\mathbb{R}^{C\times H\times W\times T}\), where \(C\) is the number of input channels, \(H\times W\) is the dimension in height and width of each frame and \(T\) is the total duration of the video expressed in frames, the video onset model outputs an onset label \(o\in\mathbb{R}^{T}\) where each element \(o_{i}\) is defined as: \[o_{i}=\left\{\begin{array}{ll}1,&\text{if there is an action in the $i$-th frame}\\ 0,&\text{if there is no action in the $i$-th frame}\end{array}\right.\] ### Audio Representation In recent years there has been a substantial effort to develop general-purpose audio representations that generalize well to a variety of downstream tasks [15], with contrastive learning becoming a widely adopted training regime [16], especially in the case of multimodal approaches. A successful example of multimodal representation learning is CLAP [17], where embeddings for the audio and text modalities are aligned in the latent space. We leverage such alignment by conditioning our synthesis model on audio embeddings only at training time, allowing textual queries as a secondary conditioning modality at inference time. ### Sound Effects Synthesis We generate a time-domain sound effect sequence \(\mathbf{x}(0)\) by employing a variance-preserving continuous-time diffusion model \(S_{\theta}\)[18, 19], capturing the gradient of the noisy log-distribution: \[\nabla_{\mathbf{x}(t)}\log p(\mathbf{x}(t))\approx S_{\theta}(\mathbf{x}(t),\sigma(t)),\] where \(p(\mathbf{x}(t))=\int p(\mathbf{x}(t)\mid\mathbf{x}(0))\,p(\mathbf{x}(0))\,\mathrm{d}\mathbf{x}(0)\), with \[p(\mathbf{x}(t)\mid\mathbf{x}(0))=\mathcal{N}(\mathbf{x}(t)\mid\alpha(t)\mathbf{x}(0),\sigma^{2}(t)\mathbf{I})\] a Gaussian perturbation kernel. Following [19], we use a noise schedule \(\sigma(t)\in[0,1]\) and \(\alpha(t)=\cos(0.5\pi\sigma(t))\). \(S_{\theta}\) is trained by minimizing the following loss: \[\mathbb{E}_{\sigma(t)\sim[0,1],\mathbf{x}(t)}\left[\|S_{\theta}(\mathbf{x}(t),\sigma(t))-\mathbf{v}(t)\|_{2}^{2}\right]\,,\] where \(\mathbf{x}(t)=\alpha(t)\mathbf{x}(0)+\beta(t)\boldsymbol{\epsilon}\), \(\mathbf{v}(t)=\alpha(t)\boldsymbol{\epsilon}-\beta(t)\mathbf{x}(0)\), with \(\beta(t)=\sin(0.5\pi\sigma(t))\) and \(\boldsymbol{\epsilon}\) white noise. For sampling, we use a standard DDIM integrator [20]. The architecture of \(S_{\theta}\) is a UNet that follows the design of Moûsai [21]. The encoder/decoder structure of the residual UNet has 8 layers and a total downsampling/upsampling factor of 1024. The innermost 4 layers perform self-attention with 8 attention heads and 64 features. To condition with CLAP embeddings we use cross-attention and train with classifier-free guidance [22]. To condition the diffusion model with the onsets, we feed them to a convolutional encoder that follows the structure of the UNet encoder and inject the channels at the corresponding layer inside the UNet. Figure 2: Example showing ground truth audio and video, detected onsets and generated audio.
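To make the objective above concrete, the following is a minimal sketch of one training step with onset and CLAP conditioning. The call signature of `model` (our \(S_{\theta}\)), the zero-embedding used for the unconditional branch, and the helper for rasterizing frame-level onsets to audio rate are illustrative assumptions, not the exact interface of our implementation:

```python
import torch
import torch.nn.functional as F

def onsets_to_audio_rate(frame_labels, fps=15, sr=48000, length=2**18):
    # Rasterize per-frame binary labels to a binary track at audio sample rate.
    track = torch.zeros(1, length)
    idx = (torch.nonzero(frame_labels).squeeze(-1).float() / fps * sr).long()
    track[0, idx.clamp(max=length - 1)] = 1.0
    return track

def v_diffusion_loss(model, x0, onsets, clap_emb, p_uncond=0.1):
    # x0: clean audio [B, C, L]; onsets: binary track [B, 1, L] at audio rate;
    # clap_emb: CLAP embedding of the conditioning audio slice [B, D].
    b = x0.shape[0]
    sigma = torch.rand(b, device=x0.device)               # noise level in [0, 1]
    alpha = torch.cos(0.5 * torch.pi * sigma).view(b, 1, 1)
    beta = torch.sin(0.5 * torch.pi * sigma).view(b, 1, 1)
    eps = torch.randn_like(x0)
    x_t = alpha * x0 + beta * eps                         # perturbed input
    v_target = alpha * eps - beta * x0                    # v-objective target
    # classifier-free guidance: drop the embedding with probability p_uncond
    # (zeros stand in here for the constant unconditional embedding)
    drop = torch.rand(b, 1, device=x0.device) < p_uncond
    clap_emb = torch.where(drop, torch.zeros_like(clap_emb), clap_emb)
    v_pred = model(x_t, sigma, onsets, clap_emb)          # assumed signature
    return F.mse_loss(v_pred, v_target)
```

At inference time, the same angular schedule is used by the DDIM sampler, with the conditional and unconditional predictions combined at the chosen guidance scale.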
## 3 Experimental Design ### Dataset To train and test our models we adopt the widely-used Greatest Hits dataset [23]. This dataset includes videos of humans using a drumstick to hit or rub objects or surfaces. The choice of a drumstick as the striking object is useful, as it minimally occludes each frame, enabling the video onset detection network to better comprehend the scene. Each video in the dataset captures the drumstick action, and the audio is recorded with a shotgun microphone attached to the camera, followed by denoising. The dataset contains onset annotations for each video, along with action and material labels for most events. This comprehensive dataset is fundamental for our model since it is the only one of sufficient size and quality, providing the audio-visual information our model relies on. Altogether, the dataset consists of 977 videos captured both outdoors and indoors. Indoor scenes contain a variety of hard and soft materials, such as metal, plastic and cloth, while the outdoor scenes contain materials that scatter and deform, such as grass, leaves and water. On average, each video contains 48 actions, divided between hitting and scratching. This ensures that each extracted chunk of video, lasting either 2s (for the onset model) or 6s (for the synthesis model), contains a sufficient number of hits. We divided the dataset into 683 videos for the training set, 98 for the validation set and 196 for the test set (70/10/20%). ### Experiments We split the problem of video-to-audio synthesis into a video analysis stage and a sound synthesis stage. In order to establish which of these stages is the limiting factor on the overall performance, we organize training and evaluation into three main parts: the video onset detection stage, the sound effects synthesis stage, and the complete system. Evaluation of the complete system is conducted on pre-trained models, i.e., we do not attempt end-to-end training of both models. As objective metrics - similar to previous work [12] - we measure the accuracy on the number of detected/synthesized onsets and the average precision score2, which measures the synchronization between the models' outputs and the ground truth. To further evaluate how well the synthesized audio approximates the training data we use the Frechet Audio Distance (FAD) [24], which correlates with human judgment and is also a measure of the perceived quality of individual sounds. Footnote 2: [https://scikit-learn.org/](https://scikit-learn.org/) **Video Onset Detection --** To assess the performance of the onset detection model we rely on ground truth annotations. Since the dataset includes - on average - an event every 1.5 s, we train the model on 2s long video chunks. Furthermore, to train more efficiently, we downsample the videos to a 15fps frame rate. To construct the input, we extract the single frames as image files, and feed the network groups of 30 consecutive frames. Overall, each batch has size \([B,T,C,H,W]\), with \(B\) batch size, \(T\) frames, \(C\) color channels, \(H\) and \(W\) height and width of each frame. We run the same experiment twice to compare the performance with and without augmentations [25] on the input frames. Without augmentations, the frames are simply resized to a 112-by-112 dimension to match the model requirements, and are channel normalized with mean and standard deviation computed across the dataset for each color channel. When using augmentations, we first resize to 128-by-128 and apply a random crop to the final size; we also apply color jitter before normalization, as sketched below.
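A possible implementation of this preprocessing with torchvision follows; the jitter strengths are illustrative assumptions, and `mean`/`std` stand for the per-channel statistics computed over the dataset:

```python
import torchvision.transforms as T

mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]    # placeholders for dataset statistics

train_tf = T.Compose([
    T.Resize((128, 128)),
    T.RandomCrop((112, 112)),                    # random crop to the model input size
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    T.ToTensor(),
    T.Normalize(mean, std),
])

test_tf = T.Compose([                            # no augmentation at test time
    T.Resize((112, 112)),
    T.ToTensor(),
    T.Normalize(mean, std),
])
```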
Augmentations are only applied to the training and validation sets. We train each model for 100 epochs on a binary cross-entropy loss with a batch size of 16, using the AdamW optimizer with a weight decay of \(1\cdot 10^{-3}\) and a learning rate of \(1\cdot 10^{-4}\). To generate the output binary labels we use a sigmoid on the network output and apply a threshold of 0.5. We compute accuracy and average precision score at 15fps. **Sound Effects Synthesis --** To train the diffusion model we use audio batches - extracted from the videos - of shape \([B,C,L]\), with \(B\) batch size, \(C\) audio channels, and \(L\) length in samples. The model trains on windows of \(2^{18}\) samples at 48kHz (\(\sim\)6s). Ground truth onset annotations are used to build binary tensors at audio rate for conditioning. We also zero out the audio track before the first onset to remove possible tails from previous events. Finally, a conditioning segment is extracted by randomly choosing an onset and slicing until the following one. This slice is embedded with CLAP and the result is fed to the UNet via cross-attention. For classifier-free guidance, during training, we use a constant embedding with 0.1 probability, and for inference, we use an embedding scale of 2. The model is trained with the AdamW optimizer with a batch size of 2 for 1000 epochs, with a weight decay of \(1\cdot 10^{-3}\) and a learning rate of \(1\cdot 10^{-4}\). To evaluate our model, we create 2-second clips using the initial ground truth onsets from the test set videos. We exclude tracks with zero onsets, leaving us with 160 segments. To account for any potential initial onset misses in the manual annotations of the Greatest Hits dataset, we reset the beginning of both the generated and ground truth tracks until the first annotated event. Differently from the onset model, in this case we compute the objective metrics at audio sample rate, using a confidence interval of 50ms. Furthermore, we compute the FAD for both audio and text modalities, using the labels available in the dataset as text queries. **Complete System --** Performance of the complete system is measured by first generating the binary labels with the pre-trained onset networks and converting them into onset tracks at audio sample rate to condition the synthesis model. Onsets are then extracted from the ground truth and synthesized audio using _librosa_3, and a tolerance of \(\pm\)50ms is applied to compute the onset synchronization precision. Although we have ground truth annotations, we adopt this approach for a fair comparison with the baseline described below, which does not use annotations. We generate 160 chunks as in the previous scenario. Even if _librosa_'s onset detection tool has mainly been used for peak detection in musical audio segments, its direct application here is justified by the fact that the ground truth and generated audio tracks contain minimal if any background noise, allowing for correct onset extraction without further processing steps. Footnote 3: [https://librosa.org/doc/0.10.0/index.html](https://librosa.org/doc/0.10.0/index.html) An example of the proposed model's output is shown in Fig. 2. **Baseline --** We compare our approach with a recent work [12] in which a model is proposed to sonify a silent video using a conditioning audio-visual pair. CondFoleyGen uses SpecVQGAN [26] to learn a codebook from the training data spectrograms.
During training, the code for the conditioning audio is passed as input - along with the embeddings for the conditioning and silent videos - to a transformer, which autoregressively predicts the codes that should represent the target sound. Then, a MelGAN vocoder [27] produces the waveform from the spectrogram reconstructed by the codebook decoder. Finally, an audio-visual synchronization model is used to re-rank many generated soundtracks and choose the one with the best temporal alignment. Since no pre-trained models were available at the time of writing, we re-trained the model using the code provided in the official repository4 and following details from the paper. Accordingly, we trained the SpecVQGAN codebook for 400 epochs and the transformer for 40 epochs in order to make a fair comparison with our model. Footnote 4: [https://github.com/XYPB/CondFoleyGen/tree/main](https://github.com/XYPB/CondFoleyGen/tree/main) ## 4 Results **Video Onset Detection --** Table 1 shows the results for the onset detection model in the two cases. Given that the architecture is fairly simple and not specifically designed for the task at hand, the overall results are satisfying, with augmentation improving both accuracy and average precision. In either case, the model seems to be well suited to condition our synthesis model. We noticed that the network tends to overestimate the number of detected events, which becomes the main limitation in terms of reliability for the overall system. **Sound Effects Synthesis --** Table 2 reports the evaluation for the diffusion model. Accuracy and average precision are computed between conditioning onset tracks and generated output, with a tolerance of \(\pm 50\)ms. FAD is computed between conditioning and synthesized audio. Metrics are measured for both audio and text modalities, to assess the impact of training with the audio modality only. The model learns very well to synchronize generated audio with the onsets. This supports the idea of separating the video analysis and audio synthesis tasks, knowing that, upon development of better video understanding models, a more precise conditioning could be given for synthesis in terms of both timing and target sound. The FAD for the two modalities is very similar, highlighting the alignment in the embedding space. **Complete System --** Results for the complete model are shown in Table 3. Comparing with Table 2, we notice how the onset detection model is the limiting factor in terms of accuracy and synchronization. In this case, augmentations have a negative impact, but the objective metrics remain in line with Table 1. With respect to the baseline, the strong conditioning induced by the onset track results in higher performance with almost half the number of parameters. The increase in onset accuracy with respect to the onset model alone might be explained by the tendency of the onset model to overestimate the number of onsets. When detected from the audio, nearby false positives in the conditioning onset track might be "obscured" by preceding ones, leading to a few percentage points of improvement. The proposed system also surpasses the baseline - although by a limited margin - in terms of audio quality when measured with the FAD. However, with respect to the diffusion model alone, we also observe a degradation in the FAD.
This can be explained by the fact that, in the complete system, we do not zero out the generated and test tracks as we do with the standalone diffusion model, resulting in a lower correlation between the ground truth and generated tracks. ## 5 Discussion Video-to-audio tasks are garnering researchers' attention thanks to the rapid advancement of both vision and audio generation architectures. Even though sound effects and environmental sounds are crucial for the task, these applications are lagging behind speech and music synthesis, especially in professional sound design, where high-quality audio libraries and annotations are vital. In fact, no datasets specific to Foley generation tasks are available, with Greatest Hits [23] being the only exception. This choice allowed us to compare with the selected baseline, although the use of such a specific dataset - with not entirely realistic scenes - is a limitation of our work. ## 6 Conclusion and Future Work In this paper we propose a model for the sonification of silent videos by generating an audio track that is temporally and semantically aligned with the target video. Our model is divided into two parts: a video onset network, with which the onsets of actions present in an input silent video can be extracted; and a diffusion model that, conditioned on an onset track and a latent representation of the desired sound, generates a matching audio track that is synchronized to the onsets. In future work, we plan to create a new dataset with audio-video pairs and onset annotations for scenes of interest in Foley generation. We will extract these scenes from films and video games to test the model in realistic settings. Additionally, we aim to explore novel approaches for training the onset model with minimal annotations, avoiding the need for manual annotation of every action in the video. ## 7 Acknowledgements M.C. is funded by UKRI and EPSRC as part of the "UKRI CDT in Artificial Intelligence and Music", under grant EP/S022694/1. The work of R. F. G. was partly supported by the PNRR MUR project "Centro Nazionale 1 - Spoke 6", under grant number CN1321845CE18353. E.P. is supported by the ERC Grant no. 802554 (SPECGEO) and PRIN 2020 project no.2020TA3K9N (LEGO.AI).
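For reference, the FAD used throughout Section 4 is the Frechet distance between Gaussians fitted to the embedding distributions of reference and generated audio [24]. A minimal sketch, assuming `emb_ref` and `emb_gen` are precomputed `[N, D]` embedding arrays (e.g. from a VGGish backbone):

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_ref, emb_gen):
    # Fit a Gaussian (mean, covariance) to each embedding set and compute
    # ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}).
    mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_ref, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):       # numerical noise can yield tiny
        covmean = covmean.real         # imaginary parts; discard them
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```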
2305.11297
Planetesimal formation via the streaming instability with multiple grain sizes
Kilometre-sized planetesimals form from pebbles of a range of sizes. We present the first simulations of the streaming instability that begin with a realistic, peaked size distribution, as expected from grain growth predictions. Our 3D numerical simulations directly form planetesimals via the gravitational collapse of pebble clouds. Models with multiple grain sizes show spatially distinct dust populations. The smallest grains in the size distribution do not participate in the formation of filaments or the planetesimals that are formed by the remaining 80% of the dust mass. This implies a size cutoff for pebbles incorporated into asteroids and comets. Disc observations cannot resolve this dust clumping. However, we show that clumping, combined with optical depth effects, can cause significant underestimates of the dust mass, with 20%-80% more dust being present even at moderate optical depths if the streaming instability is active.
Josef Rucska, James Wadsley
2023-05-18T20:39:31Z
http://arxiv.org/abs/2305.11297v2
# Planetesimal formation via the streaming instability with multiple grain sizes ###### Abstract Kilometre-sized planetesimals form from pebbles of a range of sizes. We present the first simulations of the streaming instability that begin with a realistic, peaked size distribution, as expected from grain growth predictions. Our 3D numerical simulations directly form planetesimals via the gravitational collapse of pebble clouds. Models with multiple grain sizes show spatially distinct dust populations. The smallest grains in the size distribution do not participate in the formation of filaments or the planetesimals that are formed by the remaining \(\sim\)80% of the dust mass. This implies a size cutoff for pebbles incorporated into asteroids and comets. Observations cannot resolve this dust clumping. However, we show that clumping, combined with optical depth effects, can cause significant underestimates of the dust mass, with 20%-80% more dust being present even at moderate optical depths if the streaming instability is active. keywords: hydrodynamics - instabilities - protoplanetary discs - planets and satellites: formation ## 1 Introduction In the process of planet formation, planetary embryos grow from the collisions of many millions of 1 km to 100 km sized planetesimals. In turn, planetesimals are born from millimetre-centimeter sized pebbles in protoplanetary discs. However, the formation of planetesimals cannot occur through simple pathways such as the collisional coagulation of progressively larger objects. It is well understood that collisions between objects in the range of 1 cm to 1 m in protoplanetary disc environments are predominantly destructive, resulting in smaller remnants from the original bodies in the collision (Zsom et al., 2010; Gutler et al., 2010; Windmark et al., 2012). Further, all solid objects orbiting in protoplanetary discs experience a headwind as they orbit through the gaseous component of the disc. At \(\sim\)1 m sizes, this process is maximally efficient, and causes the rapid orbital decay of these objects, sending them into the central star on timescales on the order of a few hundred years (Weidenschilling, 1977b). Hence, planet formation requires a mechanism that is capable of rapidly forming planetesimals directly from cm sized pebbles. A leading candidate for this process is known as the streaming instability (SI), first studied by Youdin & Goodman (2005) (see also: Youdin & Johansen, 2007; Johansen & Youdin, 2007). The SI is a specific example of a broader family of resonant drag instabilities that exist when the aerodynamic drag timescale becomes resonant with another dynamical timescale in the disc (Squire & Hopkins, 2018, 2020). In the saturated, non-linear phase of the instability, the SI is capable of producing strong, localized overdensities of clouds of pebble-sized dust that can then gravitationally collapse into planetesimals (Johansen et al., 2007), thus directly overcoming the aforementioned growth barriers. Since the seminal work from Johansen et al. (2007), over a decade of research has explored planetesimal formation via the SI with high resolution 3D hydrodynamic simulations (Johansen et al., 2009, 2012, 2015; Simon et al., 2016, 2017; Schafer et al., 2017; Abod et al., 2019; Li et al., 2019; Nesvorny et al., 2019, 2021; Gole et al., 2020; Rucska & Wadsley, 2021; Carrera et al., 2021, 2022; Carrera & Simon, 2022). 
The streaming instability has proven to be a robust mechanism for forming planetesimals, so long as the local protoplanetary disc region meets the prerequisite conditions of enhanced dust mass concentration (i.e. supersolar) and sufficiently large dust grains (Carrera et al., 2015; Yang et al., 2017; Li & Youdin, 2021). Protoplanetary discs observed with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large Telescope (VLT)/SPHERE and Subaru/HiCIAO have shown features with concentrated dust mass, such as rings (e.g. Dullemond et al., 2018; Macias et al., 2019; Muto et al., 2012; Avenhaus et al., 2018), non-axisymmetric bumps (e.g. van der Marel et al., 2013, 2015; Cazzoletti et al., 2018; van der Marel et al., 2021) and spiral structure (e.g. Benisty et al., 2015; Perez et al., 2016; Benisty et al., 2017). Some rings may have sufficiently high dust concentrations to initiate planetesimal formation via the SI (Stammler et al., 2019; Mauco et al., 2021). Indeed, Carrera et al. (2021), Carrera et al. (2022) and Xu & Bai (2022, 2020) show that persistent radial gas pressure maxima, which likely play a role in the formation of the observed large-scale rings (Whipple, 1972), can sufficiently concentrate dust to trigger planetesimal formation via the SI. Observations of minor solar system bodies support the idea that these objects may have been formed via the SI or a similar process. Asteroids are commonly described as "rubble piles": gravitationally bound conglomerates of smaller pebbles with significant bulk porosity (Walsh, 2018). Further, results from the Rosetta mission to comet 67P/Churyumov-Gerasimenko suggest this object likely formed from a cloud of millimetre-sized dust particles (Blum et al., 2017; Fulle & Blum, 2017). Results from the New Horizons flyby of the Kuiper belt object (486958) Arrokoth suggest that this object, a contact binary with two distinct lobes, is likely a result of the slow decay of a binary orbit, where the two progenitor objects formed via the gravitational collapse of a pebble cloud (McKinnon et al., 2020; Grishin et al., 2020; Morohnic et al., 2021). Nesvorny et al. (2019) and Nesvorny et al. (2021) also show that the gravitational collapse of dense pebble clouds from SI simulations can produce planetesimal binaries with properties similar to binaries observed in the Kuiper belt. Kavelaars et al. (2021) measured the size distribution of objects in the cold classical Kuiper belt and find it is well described by an exponential cut-off at large sizes--a feature predicted by the streaming instability. The New Horizons mission also observed the craters that cover the 4 billion-year-old surfaces of the Pluto-Charon system, enabling an analysis of the inferred size distribution of the impactors from the early Solar system that produced those craters (Singer et al., 2019; Robbins & Singer, 2021; Robbins et al., 2017). Singer et al. (2019) find a deficit of craters at small sizes, for impactors below \(\lesssim\) 1-2 km in diameter. Unfortunately, current limits on computational power prevent 3D simulations of the SI from providing any insights on the SI-formed planetesimal size distribution at these small sizes (Simon et al., 2016; Li et al., 2019; see Section 4.1 of Rucska & Wadsley, 2021 for further discussion). Observational constraints on planetesimal formation are difficult to acquire, yet there is a general agreement between observational data and predictions from models of planetesimal formation via the SI. 
To date, the SI remains a leading candidate for the efficient formation of planetesimals, yet there remains open questions regarding this process, such as how the presence of a distribution of dust grain sizes affects outcomes regarding planetesimal formation. ### Dust grain size distributions in protoplanetary discs Observations reveal that protoplanetary discs in nature have at least two distinct dust populations (e.g. Franceschi et al., 2023): \(\sim\)millimetre-sized pebbles which have settled to the disc mid-plane and are most readily visible via their sub-mm wavelength thermal emission with ALMA (e.g. Andrews et al., 2016; van der Marel et al., 2021; Maucco et al., 2021), and \(\sim\)micron-sized grains suspended vertically in the disc, seen in infrared scattered light (e.g. Muto et al., 2012; Benisty et al., 2015; Avenhaus et al., 2018). Though these components occur in spatially distinct regions in the disc, they are likely linked, as grain growth theory shows that pebbles can readily grow via coagulation from the micron-size grains that the disc inherits from the interstellar medium (Birnstiel et al., 2011, 2015, see Birnstiel et al., 2016, for a review). Once the grain growth/fragmentation process reaches equilibrium, the predicted outcome from the widely-used Birnstiel et al. (2011) model is a grain size distribution described by multiple power-laws and a distinct peak, so that most of the mass in the distribution is within a factor of two of a specific grain size. ### Streaming instability with a distribution of grain sizes Until recently, there were few studies of the streaming instability with multiple sizes. Johansen et al. (2007) included multiple dust species in a subset of their runs, but the focus of their work was the onset of planetesimal formation rather than the behavior of the different grains. Bai & Stone (2010) modelled discs with a variety of grain size distributions simultaneously in 3D simulations, and explored the influence of these distributions on properties of the non-linear, saturated state of the SI, pre-planetesimal formation. Recently, there have been multiple studies on how particle size distributions influence the linear growth phase of the SI (Krapp et al., 2019; Paardekooper et al., 2020, 2021; McNally et al., 2021; Zhu & Yang, 2021) and the linear and non-linear phase in 2D numerical simulations (Schaffer et al., 2018, 2021; Yang & Zhu, 2021). These studies explored linear SI growth rates and the clumping of dust in the non-linear phase for distributions with a wide range of grain sizes. Overall, they conclude that the SI can produce strong dust clumping so long as the local dust-to-gas mass density ratio is large, approaching unity, and that the grain size distribution involves sufficiently large grains (near approximately a centimetre in size). In this paper, we study dust that is well within the strong growth regime, and follow the non-linear phase of the SI all the way to planetesimal formation. We expand upon prior work by Bai & Stone (2010) (3D), Schaffer et al. (2018, 2021) (2D) and Yang & Zhu (2021) (2D). The grain size distributions in these studies are power laws, with exponents similar to the fiducial slope for interstellar grains from Mathis et al. (1977). In this study, we sample the grain size distribution of Birnstiel et al. (2011), which is the equilibrium outcome of a grain growth/fragmentation model applicable to the midplane of protoplanetary discs, where planetesimal formation is believed to occur. The Birnstiel et al. 
(2011) distribution deviates from a single power law and includes a peak at large sizes. Thus, in our discretized version of that grain size distribution, the spacing between the representative grain sizes of the bins is not equal, in linear or logarithmic space, which is distinct from prior work on this subject. We present the first 3D, vertically stratified simulations of the SI with multiple species of dust grains since Bai & Stone (2010), and compare the non-linear development of the SI in dust with multiple sizes against data from our prior work which used a single size (Rucska & Wadsley, 2021). We highlight the differences in the dust surface density distribution between multi-size and single-size models, along with a novel analysis that reveals the observational consequences of the strong dust clumping seen in our runs, and explore how grains of different sizes participate in planetesimal formation. Our paper is organized as follows. In Section 2 we present our methods and choice of parameters and a discussion about the dust grain size distribution we model. Section 3 focuses on the different dust surface density distributions between our multi-size model and prior work with single grain sizes, and the observational consequences of these differences. Section 4 focuses on how the different grain sizes participate in the non-linear filament and planetesimal formation process. In Section 5 we summarize our key results and discuss how this paper influences the current understanding of planetesimal formation via the SI. ## 2 Methods We model a local portion of a near-Keplerian protoplanetary disc. We study the dynamics of a gas phase aerodynamically coupled to a dust/solids phase. The specifics of our numerical and hydrodynamic set-up are nearly identical to those described in Rucska and Wadsley (2021), so we briefly summarize those methods here and refer a reader interested in a more detailed discussion to that paper. We use the shearing sheet approximation of Goldreich and Lynden-Bell (1965) to track the local dynamics of a Cartesian frame co-rotating at the Keplerian orbital velocity. We employ the Athena hydrodynamics code (Stone et al., 2008; Stone and Gardiner, 2009) with the solids particle module (Bai and Stone, 2010) to numerically evolve the protoplanetary disc system. Vertically, the box is centred on the disc midplane (\(z=0\)), and the co-rotating frame of reference leads to an imposed background velocity in the azimuthal (\(y\)) direction described by \((q\Omega x)\hat{y}\). Here \(x\) is the radial co-ordinate in the co-rotating frame, with \(x=0\) being the radial centre of the box, and \(q\) is the power-law index of the angular velocity with radial position in the disc, \(\Omega\propto r^{-q}\), so that in Keplerian discs \(q=3/2\).
The equations that describe the dynamics of the gas and solids (dust) are \[\frac{\partial\rho_{g}}{\partial t}+\nabla\cdot(\rho_{g}\mathbf{u})=0, \tag{1}\] \[\frac{\partial\rho_{g}\mathbf{u}}{\partial t}+\nabla\cdot(\rho_{g}\mathbf{u}\mathbf{u})=-\nabla P_{g}+\rho_{g}\Bigg{[}-2\mathbf{\Omega}\times\mathbf{u}+2q\,\Omega^{2}x\,\hat{\mathbf{x}}-\Omega^{2}z\,\hat{\mathbf{z}}+\mu\frac{\overline{\mathbf{v}}-\mathbf{u}}{t_{\rm stop}}\Bigg{]}, \tag{2}\] \[\frac{d\mathbf{v}^{\prime}_{i}}{dt}=2(v^{\prime}_{iy}-\eta v_{K})\Omega\hat{\mathbf{x}}-(2-q)v^{\prime}_{ix}\Omega\hat{\mathbf{y}}-\Omega^{2}z\,\hat{\mathbf{z}}-\frac{\mathbf{v}^{\prime}_{i}-\mathbf{u}^{\prime}}{t_{\rm stop}}+\mathbf{F}_{g}, \tag{3}\] where \(\rho_{g}\) is the gas mass density, \(P_{g}\) is the gas pressure, \(\mathbf{u}\) is the velocity of the gas, \(\mathbf{v}^{\prime}_{i}\) is the velocity of an individual dust particle in the frame of the background shear flow, and \(\overline{\mathbf{v}}\) is the mass-weighted average velocity of the dust in a gas cell. The gas equation of state is isothermal, \(P_{g}=\rho_{g}c_{s}^{2}\), where \(c_{s}\) is the sound speed. The quantity \(\mu\equiv\rho_{d}/\rho_{g}\) is the local ratio of dust to gas mass density, and \(\eta\) controls the strength of the radially inward drag force on the dust, which is related to the steepness of the radial gas pressure gradient (see Section 2.1). The quantity \(t_{\rm stop}\) is the time-scale for the exchange of momentum between the dust and gas phases, which depends on local gas quantities such as density and temperature, and, crucially, the physical size of the dust grains. We discuss this parameter in more detail in Section 2.2 as it is central to the context for this paper. For the numerical algorithms, as in Rucska and Wadsley (2021), we use the standard Athena options for the Riemann solver (HLLC) and the hydrodynamics integrator (corner transport upwind), and a semi-implicit integrator for the dust momentum equations, with a triangular-shaped cloud scheme to interpolate the dust particle properties with the simulation grid. In equation 3, the background shear flow has been subtracted from the dust and gas velocities. Separating the advection of the shear velocity from local deviations leads to a more efficient and accurate numerical integration (Masset, 2000; Johnson et al., 2008). We use the shearing box boundary conditions, which are periodic in the azimuthal (\(y\)) and vertical (\(z\)) directions and shear periodic in the radial (\(x\)) direction (Hawley et al., 1995; Stone and Gardiner, 2010). The term \(\mathbf{F}_{g}\) in equation 3 represents the gravitational acceleration. Not all prior high-resolution studies of the non-linear SI include the effects of the self-gravity of the dust density field, but since our study is in part focused on the properties of planetesimals, it is included here. Self-gravity enables the collapse of dense dust material into gravitationally bound objects (i.e. planetesimals). Following Simon et al. (2016), based on an implementation described and tested in Rucska and Wadsley (2021), this acceleration is computed via the gradient of the gravitational potential of the dust density field, and this potential is computed from the solution to Poisson's equation, \[\mathbf{F}_{g}=-\nabla\Phi_{d}, \tag{4}\] \[\nabla^{2}\Phi_{d}=4\pi G\rho_{d}. \tag{5}\] Here \(G\) is the gravitational constant. Our parameterization of this constant is discussed further in the next section (equation 9).
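The core of such a spectral solve can be sketched as follows. This is a simplified illustration assuming fully periodic boundaries on a uniform cubic grid and our scale-free code units (where, per equation 9 below with \(\Omega=1\), \(4\pi G=\widetilde{G}\)); the production solver instead uses shear-periodic horizontal and vacuum vertical boundary conditions:

```python
import numpy as np

def dust_potential(rho_d, box_size=0.2, g_tilde=0.05):
    # Solve nabla^2 Phi_d = g_tilde * rho_d on a periodic (N, N, N) grid
    # via FFT: Phi_hat = -g_tilde * rho_hat / k^2, with the k = 0 mode zeroed.
    n = rho_d.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(rho_d - rho_d.mean())   # remove mean for solvability
    k2[0, 0, 0] = 1.0                             # avoid division by zero
    phi_hat = -g_tilde * rho_hat / k2
    phi_hat[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(phi_hat))

# The acceleration in equation 4 then follows from finite differences,
# e.g. F_x[i] = -(Phi[i+1] - Phi[i-1]) / (2 * dx).
```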
Note, we neglect the gravitational influence of the gas, since the gas density perturbations in these kinds of local protoplanetary disc models are very small (Li et al., 2018). We use the fast Fourier transform Poisson solver available in Athena (Kim and Ostriker, 2017) to solve equation 5, with shear-periodic horizontal boundary conditions and vacuum boundary conditions vertically. ### Physical and numerical parameters, initial conditions In this section we discuss the physical parameters that influence the dynamics of our local protoplanetary disc system. In this study we choose identical or very similar values to previous work on these systems (Simon et al., 2016; Schafer et al., 2017; Johansen et al., 2012; Li et al., 2018; Gole et al., 2020; Rucska and Wadsley, 2021). These parameters and our choices are summarized in Table 1 and briefly discussed in this section, with a more in-depth discussion of the stopping time parameter \(\tau_{s}\) in Section 2.2. The total mass of the dust particles is controlled by the ratio of the dust mass surface density to the gas surface density \[Z=\frac{\Sigma_{d}}{\Sigma_{g}}, \tag{6}\] and we choose \(Z=0.02\), which is a slightly supersolar metal mass ratio. Note that our simulation domains model only a fraction of the vertical gas scale height while capturing the full dust scale height. Thus the effective surface density mass ratio within the simulation domain is higher than 0.02. Following the discussion from Section 2.4 of Rucska and Wadsley (2021), the ratio of total dust mass to total gas mass within the full simulation domain is approximately 0.25. \begin{table} \begin{tabular}{l l} \hline Run names & \(\tau_{s}\) - grain stopping time(s) \\ \hline S0, S1, S2, S3 (S) & 0.314 \\ M6-0, M6-1, M6-2, M6-3, M6-4 (M6) & 0.036, 0.191, 0.270, 0.314, 0.353, 0.412 \\ M12 & 0.021, 0.113, 0.170, 0.218, 0.256, 0.284, 0.305, 0.324, 0.342, 0.363, 0.390, 0.437 \\ M18 & 18 values between 0.016 and 0.450 \\ \hline \multicolumn{2}{l}{Domain size: \((L_{x}\times L_{y}\times L_{z})/H_{g}=0.2\times 0.2\times 0.2\)} \\ \multicolumn{2}{l}{Grid resolution: \(N_{\rm cell}=N_{x}\times N_{y}\times N_{z}=120\times 120\times 120\)} \\ \multicolumn{2}{l}{\(N_{\rm par}/(N_{\rm species}\times N_{\rm cell})=1\), \(Z=0.02\), \(\widetilde{G}=0.05\), \(\Pi=0.05\)} \\ \hline \end{tabular} \end{table} Table 1: Simulation parameters. The radial gas pressure gradient, represented by the \(-2\eta v_{K}\Omega\) term in equation 3, is parameterized via \(\eta\): \[\eta=n\frac{c_{s}^{2}}{v_{K}^{2}}, \tag{7}\] where \(n\) is the pressure power-law index, \(P_{g}\propto r^{-n}\), the local Keplerian speed is \(v_{K}\), and the isothermal sound speed is \(c_{s}\). This pressure gradient shifts the azimuthal component of the dust and gas velocities by \(\eta v_{K}\), and we subtract this shift from our data to conduct analysis in this shifted frame. As with other work, in our simulations \(\eta\) is ultimately controlled by a similar parameter \[\Pi=\frac{\eta v_{K}}{c_{s}}, \tag{8}\] and we choose \(\Pi=0.05\), a typical value that applies to a wide variety of disc models (Bai & Stone, 2010). The strength of self-gravity versus tidal shear is controlled by \[\widetilde{G}\equiv\frac{4\pi G\rho_{g,0}}{\Omega^{2}}. \tag{9}\] Selecting \(\widetilde{G}=0.05\) is equivalent to a Toomre (1964) \(Q\) of 32, so the gas phase is gravitationally stable, supporting our exclusion of the gas density field in solving for the gravitational potential (equation 5).
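As a quick arithmetic check of this statement (assuming the standard definitions \(Q=c_{s}\Omega/(\pi G\Sigma_{g})\) and \(\rho_{g,0}=\Sigma_{g}/(\sqrt{2\pi}H_{g})\), which together with equation 9 give \(Q=4/(\sqrt{2\pi}\,\widetilde{G})\)):

```python
import numpy as np

# Toomre Q implied by the self-gravity parameter G_tilde (equation 9),
# assuming Q = c_s * Omega / (pi * G * Sigma_g) and
# rho_g0 = Sigma_g / (sqrt(2 pi) * H_g), with H_g = c_s / Omega.
g_tilde = 0.05
Q = 4.0 / (np.sqrt(2.0 * np.pi) * g_tilde)
print(f"Q = {Q:.1f}")   # ~31.9, i.e. Q ~ 32 as quoted
```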
In equation 9, \(\rho_{g,0}\) is the gas midplane density. The gas density is initialized to have a Gaussian profile vertically with a scale height \(H_{g}\), and a uniform distribution in the radial and azimuthal directions. The dust phase is initialized analogously except with a scale height of \(H_{d}=0.02H_{g}\). We set the units of our scale-free model such that \(\rho_{g,0}=H_{g}=\Omega=c_{s}=1\). See Section 2.4 of Rucska & Wadsley (2021) for a discussion on how to convert these units to physical units. Using the minimum mass solar nebula model of Hayashi (1981) and placing our model at 3 AU, there is approximately 1.5 \(M_{\rm Ceres}\) worth of dust mass in our full simulation domain. The 3D simulation domains we study have equal lengths of \(L_{x}=L_{y}=L_{z}=0.2\,H_{g}\), and we choose a grid resolution of \(N_{x}=N_{y}=N_{z}=120\). Simon et al. (2016) show that this grid resolution, and Rucska & Wadsley (2021) that this box size, are sufficient to accurately capture the planetesimal formation process with our chosen set of physical parameters. This grid resolution matches that of Rucska & Wadsley (2021). We choose a dust resolution such that the total number of particles for each grain species is equal to the total number of grid points in the gas grid. The millions of dust particles are initially placed so that the overall dust density distribution is uniform in the \(x\)-\(y\) plane and follows a Gaussian profile vertically. The precise initial positions of the particles are set via a random number generator. As in Rucska & Wadsley (2021), we re-run multiple simulations that are otherwise identical except for the initial seed for the random number generator, which gives a different initial (and very small amplitude) noise pattern to the dust density in each run. Once the streaming instability develops into the non-linear phase, the initial perturbations result in dramatic variations in the dust density. Thus, re-running simulations with different initial seeds probes the stochastic qualities of the non-linear SI and the variance in the outcomes in a way that a single simulation cannot. In Table 1, the simulation labels S0,...,S3 denote the simulations which use a single dust grain size (these are the same L02(a-d) simulations from Rucska & Wadsley, 2021), and analogously the labels M6-0,...,M6-4 represent five simulations that use multiple grain species simultaneously, with the only difference being the random seed that sets the initial particle distribution. The M12 and M18 simulations sample the same size distribution as the M6-0,...,M6-4 simulations but with a greater number of grain species/bins. Details on the grain sizes in each simulation are discussed in the following section. ### Grain size distribution We base our distribution of grain sizes on the results from Birnstiel et al. (2011), a widely used model of the collisional growth and fragmentation of dust grains in protoplanetary discs. Dynamics such as local turbulence, vertical settling, and radial drift affect the relative velocities between dust grains and can lead to grain growth or fragmentation via destructive collisions, depending on local conditions (for a review, see Birnstiel et al., 2016). Birnstiel et al. (2011) conclude that the dust grain population will equilibrate towards a size distribution with a shape that depends on properties of the disc (see their Fig. 6). 
Relevant properties include the gas surface density, midplane temperature, the Shakura & Sunyaev (1973) \(\alpha\) turbulent viscosity parameter, and a fragmentation threshold velocity for the grains. The authors also provide an online tool for exploring different combinations of disc quantities. For the distribution shape we study, we choose \(\Sigma_{g}=100\) g/cm\({}^{2}\) and \(T_{\rm mid}=100\) K, roughly equivalent to a radial position of \(\sim 5\) AU for a disc with \(\Sigma_{g}(r)=1000\,(r/{\rm AU})^{-3/2}\) g/cm\({}^{2}\) (e.g. minimum mass solar nebula model; Weidenschilling, 1977) and \(T_{\rm mid}=200\,(r/{\rm AU})^{-3/7}\) K (e.g. Chiang & Goldreich, 1997). For the other parameters we choose \(\alpha=1\times 10^{-4}\) and \(v_{\rm frag}=3\) m/s. These choices lead to a distribution that peaks around \(\sim\)4 cm. Within the simulations, the impact of grain size is to set the characteristic time-scale for the aerodynamic coupling between the dust and gas, \(t_{\rm stop}\) (equations 2 and 3). There are different forms for this stopping time depending on the regime of drag one considers, but for protoplanetary discs, almost all grains are in the Epstein drag regime (Epstein, 1924; Birnstiel et al., 2016), where the size of the dust grains is smaller than the mean free path of the gas particles. The form of \(t_{\rm stop}\) in this regime is \[t_{\rm stop}=\frac{\rho_{s}}{\rho_{g}c_{s}}s, \tag{10}\] where \(\rho_{s}\) is the material density of the particles (approximately 2.6 g cm\({}^{-3}\) for silicates; Moore & Rose, 1973), \(\rho_{g}\) is the local gas density, \(c_{s}\) is the local sound speed, which depends on the gas temperature, and \(s\) is the size of the dust grains. Thus, for the same gas properties, \(t_{\rm stop}\) scales linearly with grain size. In our models, as with other studies of the streaming instability, we model the drag coupling between the dust and gas with a dimensionless parameter \(\tau_{s}=t_{\rm stop}\Omega\), \[\tau_{s}=\frac{\Omega\rho_{s}s}{\rho_{g}c_{s}}. \tag{11}\] In our disc model, the midplane gas density is \(\rho_{g,0}=(1/\sqrt{2\pi})(\Sigma_{g}/H_{g})\) and \(H_{g}=c_{s}/\Omega\) (Armitage, 2020), so that \(\tau_{s}=(\rho_{s}/\Sigma_{g})s\). With our above choices for the disc properties in the grain size distribution, the previously mentioned size peak of \(\sim\)4 cm translates to \(\tau_{s}\sim 0.1\). In this study, we wish to directly compare our results to both our previous study (Rucska & Wadsley, 2021) and prior work which has focused on a single stopping time of \(\tau_{s}=0.314\). Thus, we maintain the original shape of this particular Birnstiel et al. (2011) distribution from our chosen disc parameters, but slightly shift the peak to \(\tau_{s}=0.314\). Within the model of Birnstiel et al. (2011) this is equivalent to moving the fragmentation threshold from 3 m/s to \(\sim 5\) m/s or modifying the temperature profile. Setting the peak at our previous single grain size value allows us to directly compare these two different representations of the dust environment. #### 2.2.1 Sampling the Birnstiel et al. (2011) distribution Figure 1 shows the dust mass surface density distribution that we sample, as a function of \(\tau_{s}\). The six different bins (red curve), used for the five M6-0 \(\ldots\) M6-4 simulations, are chosen such that there is roughly equal dust mass in each bin while enforcing that one bin is centred on the peak at \(\tau_{s}=0.314\). 
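As an illustration of this discretization, the sketch below constructs equal-mass bin edges from a tabulated \(\Sigma_{d}(\tau_{s})\) and then assigns each bin a representative \(\tau_{s}\) that preserves the bin-integrated drag, the criterion formalized in equation 12 below. The function names and the trapezoidal quadrature are our own illustrative choices, and the tabulation is assumed fine enough that each bin contains several points.

```python
import numpy as np

def equal_mass_bin_edges(tau, sigma, n_bins=6):
    # Invert the cumulative dust mass of the tabulated distribution
    # Sigma_d(tau_s) so that each bin holds roughly the same total mass.
    mass = np.concatenate(
        [[0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(tau))]
    )
    targets = np.linspace(0.0, mass[-1], n_bins + 1)
    return np.interp(targets, mass, tau)

def representative_tau(tau, sigma, edges):
    # Equal-drag representative per bin: tau_bin = (bin mass) / (bin drag),
    # i.e. the solution of equation 12 for tau_bin.
    reps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (tau >= lo) & (tau <= hi)
        mass = np.trapz(sigma[m], tau[m])          # integral of Sigma_d dtau
        drag = np.trapz(sigma[m] / tau[m], tau[m])  # integral of (Sigma_d/tau) dtau
        reps.append(mass / drag)
    return np.array(reps)
```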
Since the streaming instability is driven by the aerodynamic coupling between the dust and gas, we choose the representative \(\tau_{s}\) for each bin so that there is the same total drag force (proportional to \(\Sigma_{d}/\tau_{s}\)) in each bin as in the original distribution. This requires us to satisfy the equality \[\int_{\tau_{\rm bin,l}}^{\tau_{\rm bin,r}}\frac{\Sigma_{d}(\tau_{s}^{\prime})}{\tau_{s}^{\prime}}\,d\tau_{s}^{\prime}=\frac{\Sigma_{\rm bin}}{\tau_{\rm bin}}(\tau_{\rm bin,r}-\tau_{\rm bin,l}), \tag{12}\] where \(\tau_{\rm bin,(r,l)}\) are the right and left \(\tau_{s}\) values in each bin, \(\Sigma_{\rm bin}\) is the mean height of the distribution in that bin, and \(\tau_{\rm bin}\) is the representative size in that bin, which we solve for. We see from Figure 1 that \(\tau_{\rm bin}\) in the bins for \(\tau_{s}>0.1\) roughly tracks the half-mass point of the \(\Sigma_{d}(\tau_{s})\) curve, but in the bin for the smallest grains, \(\tau_{\rm bin}\) is closer to the leftmost, small-\(\tau_{s}\) edge of the bin, because the drag force per unit mass scales as \(1/\tau_{s}\). Table 1 lists the exact values of \(\tau_{\rm bin}\) (hereafter just referred to by \(\tau_{s}\)) modelled simultaneously by our simulations, with each species given a roughly equal amount of the total dust mass in the simulation domain1. Note that these values of \(\tau_{s}\) are not equally spaced, linearly or logarithmically, which is different from the distributions modelled by prior work (Johansen et al., 2007; Bai & Stone, 2010; Schaffer et al., 2018; Yang & Zhu, 2021), although those studies also used equal-mass bins in their discretized distributions. Footnote 1: We could not simultaneously ensure that our sample has one bin with a representative \(\tau_{\rm bin}\) exactly at the peak \(\tau_{s}=0.314\) and have the bins contain exactly equal mass. However, all bin total masses are within 10% of each other. #### 2.2.2 Increasing the number of grain species To accompany our main M6-0 \(\ldots\) M6-4 simulations which use 6 bins, we also run two simulations with more species of grains/bins in order to test how our results are affected by the number of species present. We run one with 12 bins and the other with 18 bins, which we denote M12 and M18 respectively. When creating these samples with additional bins, we subdivide each of the original 6 bins into 2 and 3 bins, again keeping an equal mass in each bin. This maintains the original bin edges from the 6-bin sample and thus allows for a more straightforward comparison of the results between the different discrete distributions. Once the new (additional) bin edges are computed, the same equal-drag procedure of equation 12 is used to select a representative \(\tau_{s}\) for each bin. ### Planetesimal/clump identification To quantify how the different sized grains participate in the formation of planetesimals in our simulations, we must first identify which grains are a part of bound planetesimals. For this study, we accomplish this with a dust density cut. The Hill radius denotes a region where the gravity of an object in a circumstellar disc dominates over the shear due to the velocity gradient of the background Keplerian rotation. This shear is the only force that directly opposes the gravitational collapse of the dust. As described in Rucska & Wadsley (2021), we can convert the Hill radius into a Hill density, above which a dust clump is unstable to gravitational collapse. 
In the physical parameters of our model, this Hill density is given by \[\rho_{H}=9\frac{\Omega^{2}}{4\pi G}. \tag{13}\] With our choice of parameters, \(\rho_{H}=180\) in code units. We identify all particles within cells with dust densities greater than \(\rho_{H}\) as being a part of bound planetesimals, and all adjacent cells above this threshold are considered the same planetesimal. The triangular-shaped cloud scheme that translates particle data to the gas grid smooths the dust density on the length scale of a single grid cell. As a result, some cells with relatively few particles have a dust density above \(\rho_{H}\) because there are tens of thousands of particles in the neighbouring cells. We also average clump-related data over the multiple M6-0 \(\ldots\) M6-4 simulations, removing some of the influence of the stochastic nature of the non-linear SI from our results concerning planetesimals. In this paper, we are not interested in the details of the clump mass distributions, so we do not opt for a more sophisticated clump finding algorithm as in Rucska & Wadsley (2021). ## 3 Dust surface density at different grain sizes In this section we examine the dust surface density in the multiple-grain simulations (M6-0, \(\ldots\), M6-4, hereafter referred to collectively as M6) and compare them with the surface density from the single-grain simulations (S0, \(\ldots\), S3, hereafter S). We inspect the surface density maps visually and then present a quantitative analysis of the rudimentary observational consequences of the differences seen in these maps. The dust surface density in the 6 different sizes or species of dust grains at \(t=100\Omega^{-1}\) in the M6-0 simulation is shown in Figure 2. We present all data from single snapshots at \(t=100\Omega^{-1}\) because at this stage planetesimal formation has begun in earnest, but the planetesimals have not yet disrupted the other features in the dust, such as the filaments. As we discuss in detail in Section 4.1 of Rucska & Wadsley (2021), the numerical cross-sections of the planetesimals in our simulations (and all similar simulations in the literature) are unphysically large, which causes the planetesimals, once formed, to interact more strongly with the other dust particles than we would expect in nature. Thus, the true final state of the dust surface density post planetesimal formation is uncertain. We pick \(t=100\Omega^{-1}\) as a compromise to capture the coexistence of the planetesimals and the filaments, which we expect to be typical of the saturated stage of the non-linear streaming instability.

Figure 1: Grain size distribution sampled for this study. The grey curve represents the surface density distribution as a function of grain size according to a grain growth model in collision-fragmentation equilibrium (Birnstiel et al., 2011). The red curve represents our sampling of the distribution with six bins, and the vertical dashed lines are the representative \(\tau_{s}\) selected for each bin (see Section 2.2.1).

We notice immediately in Figure 2 that the smallest sized dust grains (lowest \(\tau_{s}\)) do not readily collect into filaments or planetesimals at all, even when the larger grains are producing dense features simultaneously. All grains with \(\tau_{s}>0.1\) participate in the structure of the filaments, while the distribution of the \(\tau_{s}=0.036\) grains is smooth with relatively little spatial variation. 
Secondly, with a more careful visual inspection of the \(\tau_{s}=0.191\) surface density map, one can see that the brightest, \(\sim\)2-3 cell wide objects (which represent the planetesimals) are less bright than in the maps for the \(\tau_{s}>0.2\) grains, suggesting the \(\tau_{s}=0.191\) grains do not incorporate into planetesimals as readily (for more quantitative results concerning clumping, see Section 4). In the full dust surface density, which includes all grains ("all \(\tau_{s}\)" panel), we see altogether the filaments, planetesimals, and the smooth, dispersed quality of the smallest grains, which is most apparent in the space between filaments. Similar visual features can be seen in other studies of the non-linear SI with multiple grain sizes (cf. Yang & Zhu 2021 Figures 4 and 6, Bai & Stone 2010b Figure 2, Johansen et al. 2007 Figure 2). We can make similar observations when comparing the surface density maps of the M6 and S simulations, shown in Figure 3. The S cases use one grain size of \(\tau_{s}=0.314\) and thus more closely resemble the \(\tau_{s}=0.270\) to \(0.412\) grains from the M6 simulations, in that the dust mass at these sizes is predominantly concentrated into planetesimals and filaments which are separated by relatively empty regions with surface densities \(\lesssim\)10% of the mean surface density. The smaller grains in the M6 simulations fill these empty regions.

Figure 2: Dust surface density in the \(x\)-\(y\) (radial-azimuthal) plane for each species of grain in the M6-0 simulation. The two right columns represent the surface density in the individual species, each identified by their grain size, which here is represented by the dimensionless stopping time, \(\tau_{s}\) (see equation 11 and surrounding discussion). The lone panel in the left column represents the total dust surface density in the simulation, with all grain species. The colour represents the logarithm of the dust surface density normalized by the mean dust surface density. The mean and normalization are computed in each panel individually. These data represent the simulation at time \(t=100\) in units of the inverse orbital frequency, \(\Omega^{-1}\).

More quantitative confirmation of these observations can be seen in the probability distribution functions (PDFs) of the dust surface density, in Figure 4. The top panel shows the PDFs for the individual grains from the M6-0 simulation at \(t=100\Omega^{-1}\). Here, we quantify what is observable in Figure 2: the distribution of surface density in the \(\tau_{s}=0.036\) grains is narrow, peaking around the mean, \(\langle\Sigma_{d}\rangle\). The \(\tau_{s}=0.191\) distribution is wider by an order of magnitude in each direction, which is a sign that these grains are participating in filaments. Yet only the grains with \(\tau_{s}>0.2\) extend out to surface densities greater than \(100\,\langle\Sigma_{d}\rangle\), a (rough) proxy for planetesimals. The PDFs of the particle volume density from Yang & Zhu (2021) Figure 8 show similar segregation by grain size. The bottom panel of Figure 4 highlights how this affects the overall surface density in the M6 simulations. The M6 PDFs extend only as low as \(0.1\,\langle\Sigma_{d}\rangle\), while the distributions from the single-size \(\tau_{s}=0.314\) simulations extend down to \(0.01\,\langle\Sigma_{d}\rangle\). 
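For concreteness, a PDF of this kind can be computed from a single surface density map along the following lines (a minimal sketch; the logarithmic binning range is our own choice):

```python
import numpy as np

def surface_density_pdf(sigma_map, n_bins=80):
    # Distribution of Sigma_d / <Sigma_d> over all (x, y) cells of one map,
    # normalized as a probability density (cf. Figure 4).
    s = sigma_map.ravel() / sigma_map.mean()
    edges = np.logspace(-3.0, 3.0, n_bins + 1)
    pdf, _ = np.histogram(s, bins=edges, density=True)
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    return centres, pdf
```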
Interestingly, when looking at the \(\tau_{s}=0.314\) grains from the M6 simulations on their own, these PDFs show there are more low surface density areas in these grains than there are in the S simulations at this size. This suggests that the presence of different-sized dust grains in the M6 simulations results in more empty or lower surface density regions than if the \(\tau_{s}=0.314\) grains were left to evolve on their own. The PDFs for the M12 and M18 simulations, which have more grain size bins than the M6 simulations, are also shown in the bottom panel of Figure 4. These PDFs follow the M6 data closely, suggesting that six grain species is sufficient to capture the main effects on the dust surface density. The differences in the S (blue) and M6 (red) PDFs have interesting observational consequences. There are many more regions with low surface density in the S simulations and (on average) more regions at higher surface densities. Depending on the opacity of the dust, this could lead to a lower estimate of the total dust mass from observations due to optical depth effects. We explore this idea in the next section.

Figure 3: Dust surface density at \(t=100\,\Omega^{-1}\) in the \(x\)-\(y\) (radial-azimuthal) plane for all simulations. The S simulations use a single grain size, and the M6 simulations use multiple sizes simultaneously. See Table 1 for a summary of the simulation parameters.

### Observational consequences Observations of some bright rings in protoplanetary discs have come to the interesting conclusion that the thermal emission from the dust in these rings is likely not optically thick (Dullemond et al., 2018; Huang et al., 2018; Cazzoletti et al., 2018; Macias et al., 2019; Mauco et al., 2021). Other studies have shown that, with a parameterized model of planetesimal formation via the streaming instability, this can be explained by pebble-sized dust in rings being converted into planetesimals, which do not contribute to mm wavelength emission (Stammler et al., 2019; Mauco et al., 2021). Taking this idea a step further, Scardoni et al. (2021) used the dust surface density profiles from 2D simulations of the SI and explored how the dust clumping would affect observations. They use a complex model for the dust opacity (Birnstiel et al., 2018) and find general agreement with observed properties of discs such as the fraction of the emission that is optically thick and the spectral index. Unsurprisingly, they conclude that planetesimal formation can reduce the optical depth of emission at mm wavelengths. In this section, we construct two mass correction factors which quantify the observational implications of the varying degrees of dust clumping seen in our simulations2. We explore how these mass correction factors vary with optical depth (\(\tau_{\rm opt}=\kappa\Sigma_{d}\)), and how they evolve over time. Note that, to leading order, effects like disc inclination can be wrapped into a different effective \(\kappa\). We forgo a detailed mock observational treatment and complicated calculations of the dust opacity. Instead, we examine how the mean emission is modified due to both unresolved structure from the SI and differences between single grain size and multiple grain size models. Footnote 2: Note, the dust features created by the SI occur on length scales several orders of magnitude below 1 AU, and are hence unresolvable by any contemporary observational facility. 
The intensity of emission (\(I\)) from a source of radiation (source function \(S\), and other dust physical properties assumed constant) can be expressed as \[I(\tau_{\rm opt})=S\big{(}1-e^{-\tau_{\rm opt}}\big{)}, \tag{14}\] where \(\tau_{\rm opt}\) is the optical depth, and the wavelength dependence of all quantities has been ignored. We consider a simple prescription for the optical depth, \(\tau_{\rm opt}=\kappa\Sigma\), where \(\kappa\) is the dust opacity and \(\Sigma\) is the dust surface density. For optically thin emission (low opacity and/or low surface density), \(\tau_{\rm opt}\ll 1\), and then \(I\approx S\kappa\Sigma\), linear in the surface density. If \(S\) and \(\kappa\) are known, we can estimate \(\Sigma_{\rm est}=I/S\kappa\). However, assuming the emission is optically thin systematically underestimates the surface density at points where \(\tau_{\rm opt}\) is not small, as \[\Sigma_{\rm est}=\frac{\big{(}1-e^{-\kappa\Sigma}\big{)}}{\kappa}. \tag{15}\] This expression is useful as it does not require knowledge of the source function to assess the potential for systematic errors. We take the surface density map from our simulations, \(\Sigma_{d}(x,y)\), and compute a converted surface density map, \(\Sigma_{\rm est}(x,y)\), via equation 15.

Figure 4: Probability distribution functions (PDFs) of the dust surface density in our simulations at \(t=100\Omega^{-1}\). _Top._ PDF for the M6-0 simulation for each \(\tau_{s}\) (grain size) bin (cf. Figure 2). The normalization by the mean surface density is computed for each grain species individually. _Bottom._ PDFs for all simulations (cf. Figure 3). The red (blue) shaded regions represent the maximum and minimum bounds among all M6 (S) simulations, and the solid lines represent the mean PDFs. The grey shaded data and solid line represent the \(\tau_{s}=0.314\) grains from the M6 simulations only. The dashed curves are the PDFs for the M12 and M18 simulations.

Figure 5: Observational dust mass correction factors. _Top._ \(C\): The ratio of the true surface density (\(\Sigma_{d}\)) to an estimate that assumes the disc to be optically thin with no unresolved structures within the beam, as a function of the mean optical depth. _Bottom._ \(C_{\rm clump}\): The correction factor due to unresolved clumping from the streaming instability alone, with uniform optical depth effects removed. These curves are equivalent to dividing the coloured and dashed curves in the top panel by the solid black curve in the top panel. _Both._ Each blue or red curve represents an individual simulation. Details on how these factors are defined are in Section 3.1.

We allow for a wide range of constant opacities, \(\kappa\), acknowledging that there are large uncertainties in the values of dust opacity (Birnstiel et al., 2018). We then take the spatial average, using the fact that all features in our simulations would be unresolved within an observational beam. Henceforth, we will refer to this average using \(\Sigma_{\rm est}=\langle\Sigma_{\rm est}(x,y)\rangle\). Finally, we construct two dust mass correction factors from these average estimated surface densities. First, we use the ratio of the true averaged dust surface density of the simulation, \(\Sigma_{\rm actual}=\langle\Sigma_{d}(x,y)\rangle\), to this estimate: \[C=\frac{\Sigma_{\rm actual}}{\Sigma_{\rm est}}. \tag{16}\] \(C\) is an overall correction factor for optically thick, unresolved clumping. 
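Equations 15 and 16 are straightforward to evaluate on a simulated map; a minimal sketch (our own function name, with a constant \(\kappa\) assumed):

```python
import numpy as np

def mass_correction_factor(sigma_map, kappa):
    # Per-cell optically thin estimate (eq. 15), averaged as if unresolved
    # within one observational beam, then compared to the true mean (eq. 16).
    sigma_est = (1.0 - np.exp(-kappa * sigma_map)) / kappa
    return sigma_map.mean() / sigma_est.mean()
```

The clumping-only factor defined next divides this by the same quantity evaluated for a spatially uniform map.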
To examine clumping alone, we can compute a new estimate, \(\Sigma_{\rm est,uniform}\), assuming a _uniform_ dust density distribution, \(\Sigma(x,y)=\langle\Sigma(x,y)\rangle\), and compare that to \(\Sigma_{\rm est}\) from our strongly clumped simulations. The ratio, \[C_{\rm clump}=\frac{\Sigma_{\rm est,uniform}}{\Sigma_{\rm est}}=\frac{C}{C_{\rm uniform}}, \tag{17}\] measures the correction associated with dust clumping only (here due to the streaming instability). \(C_{\rm uniform}=\Sigma_{\rm actual}/\Sigma_{\rm est,uniform}\) is the correction associated with assuming low optical depths. If one is confident about the optical depth of an observed source, \(C_{\rm clump}\) represents the factor the inferred dust surface density should be multiplied by if the SI is believed to have caused significant unresolved clumping in that region of the disc. The top panel of Figure 5 shows \(C\) as a function of \(\tau_{\rm opt}\). At low \(\tau_{\rm opt}\), \(C\sim 1\) for all simulations, since in this regime the optically thin assumption is valid by definition. At intermediate \(\tau_{\rm opt}\), expected variation between simulations (Rucska & Wadsley, 2021) leads to a spread in \(C\). At high \(\tau_{\rm opt}\), all simulations converge to \(C\sim\tau_{\rm opt}\) since the exponential term in equation 15 vanishes, removing any dependence of \(C\) on the surface density distribution in the simulations and hence any influence of clumping. It is at intermediate values of \(\tau_{\rm opt}\) where differences between the sets of simulations are apparent. To understand the impact of clumping, we look at the clumping correction factor, \(C_{\rm clump}\), in the bottom panel of Figure 5. This factor is the ratio of \(C\) for each simulation (coloured and dashed lines, top panel) to \(C_{\rm uniform}\), calculated for a spatially uniform dust surface density distribution at \(\langle\Sigma_{d}\rangle\) (black line, top panel3). Hence, what \(C_{\rm clump}\) highlights is the influence of the different amounts of clumping in the dust surface density maps/distributions between the two sets of simulations (Figures 3 and 4). If the optical depth is well constrained (e.g. from grain properties), it is the factor one would multiply a dust surface density inferred from observations by in order to account for the (unresolved) dust clumping from our simulations. Footnote 3: This curve is simply a plot of \(\tau_{\rm opt}/(1-\exp(-\tau_{\rm opt}))\). We note that \(C_{\rm clump}\) peaks near \(\tau_{\rm opt}=1\) for all simulations, with peak values generally higher for the single-size simulations S than for the M6 simulations with multiple sizes. As we discussed earlier in Section 3, when compared with the simulations with multiple grains, the dust in the S simulations is more heavily concentrated into denser structures. As a consequence, in our simple model, the dust emission from the S models is overall less bright than the M6 models, as there is relatively less dust mass in the inter-filament space, and there is more dust in the filaments and planetesimals, where the emission is saturated at intermediate optical depths. Thus more of the dust mass is "hidden" in the S simulations. The peak values of \(C_{\rm clump}\) are between \(\sim 1.2-1.5\) for the M6, M12, and M18 simulations, which we believe to be more representative of protoplanetary disc grain size distributions in nature than a single size. Thus, for dust grains described by a Birnstiel et al. 
(2011) grain size distribution with stopping times peaked at \(\tau_{s}=0.314\), observational estimates of the dust mass from protoplanetary discs could be too low by 20-30% in regions of the disc where the streaming instability is active. If the grain size distribution were instead much more strongly peaked at a single size, i.e. closer to the S models than M6, then the clumping mass correction factor could be as high as a factor of two. The values of \(C_{\rm clump}\) for a mean optical depth of \(\tau_{\rm opt}=1.0\), plotted over time in Figure 6, are fairly stable over the course of our simulations. Once planetesimal formation begins, the main features of the dust surface density which influence \(C_{\rm clump}\) persist over dozens of dynamical timescales. Note that the single snapshot values of \(C_{\rm clump}\) presented in Figure 5 are from \(t=100\,\Omega^{-1}\), and in Figure 6 this is one of the rare times where there is slight overlap between the single-size and multiple-size simulations. At most other times the curves in Figure 6 do not overlap at all. Similar to the single snapshot data, the curves for the M12 (12 bins) and M18 (18 bins) simulations in Figure 6 are consistent with the M6 simulations, suggesting incorporating more grain species does not influence our results. Note, we do not plot \(C\) over time as it has the same shape as \(C_{\rm clump}\), since the normalizations of \(C\) and \(C_{\rm clump}\) (at a specific optical depth) differ only by constant factors: \(C\) uses \(\langle\Sigma_{d}\rangle\) as a reference point and, for the purposes of Figure 6, \(C_{\rm clump}\) uses the uniform-distribution estimate at \(\tau_{\rm opt}=1.0\). ## 4 Planetesimal Composition: Grain Size In this section we explore the composition of the bound clumps formed in our simulations in terms of the various dust species within them, as well as the composition of the dust mass that is lost from each clump from simulation snapshot to snapshot. The bright cells in the surface density maps in Figure 3 indicate that all simulations from our study produce dense, gravitationally bound clumps.

Figure 6: The dust mass correction factor \(C_{\rm clump}\) over time in all simulations, at an optical depth of \(\tau_{\rm opt}=1.0\). The shaded regions are bounded by the maximum and minimum values across the sample of multiple simulations, and the solid curves represent the means of that sample.

As described in Section 2.3, we identify bound clumps (i.e. planetesimals) as regions where the 3D dust volume density (\(\rho_{d}\)) exceeds the Hill density (\(\rho_{H}\)) threshold above which it is unstable to gravitational collapse. The fraction of mass in clumps for each grain size is shown in Figure 7. The different coloured bands represent the mass in each grain size bin. Table 2 shows the time averages for these data over the range of time across the full \(x\)-axis in Figure 7. As seen in the top panel and in Table 2(a), the majority of the mass in clumps (\(>90\%\)) is in the grains with \(\tau_{s}>0.2\). A small fraction of the clumps are composed of \(\tau_{s}=0.191\) grains and there is effectively no clump mass associated with the \(\tau_{s}=0.036\) grains. These results corroborate earlier observations from Figure 2 regarding the decreased prominence or total lack of visible planetesimals in the surface density maps for these grain sizes. Some particles that are within a clump in one snapshot are not within that same clump4 in the consecutive snapshot. 
These particles may be loosely bound at the edge of the gravitational influence of the planetesimal (i.e. near the Hill radius) or simply passing through the high-density grid cells that are identified as planetesimals. We will explore these ideas with velocity and vertical position data in Section 4.2. For the purposes of this analysis, we identify these transient clump particles as "lost", and plot the composition of this lost dust mass in the bottom panel of Figure 7 and provide the time averages of these data in Table 2(b). The lost dust mass is nearly evenly distributed among the grains at \(\tau_{s}>0.1\), with the highest proportion involving the \(\tau_{s}=0.270\) grains. Note that, on average, approximately 10% of all the mass in clumps is consistently lost between snapshots. Footnote 4: Planetesimals in concurrent simulation snapshots which share over 50% of the same unique particles (determined by particle ID numbers) are determined to be the same clump. We can combine the results from the two panels of Figure 7 into a single idea known as the residence time - a quantity that estimates how long the dust mass of a particular grain species will remain in clumps given how quickly that mass is lost. This is represented simply by \[\text{Residence time}=\Delta t_{\text{snap}}\left(\frac{\text{Mass in clumps}}{\text{Mass lost between snapshots}}\right), \tag{18}\] where \(\Delta t_{\text{snap}}\) is the amount of time between data outputs and in this study is equal to \(2.0\,\Omega^{-1}\). The residence time is hence equivalent to dividing the data in Table 2(a) by the data in Table 2(b) and multiplying by \(\Delta t_{\text{snap}}\). We present calculations of the residence time in Table 3.

\begin{table} (a) Dust mass in bound clumps (as fraction of total dust mass). \begin{tabular}{c c c c} \hline \(\tau_{s}\) & 6 bin runs & 12 bin & 18 bin \\ \hline 0.036 & \(4.90\times 10^{-5}\) & \(4.38\times 10^{-4}\) & \(5.84\times 10^{-4}\) \\ 0.191 & \(7.73\times 10^{-3}\) & \(7.47\times 10^{-3}\) & \(1.08\times 10^{-2}\) \\ 0.270 & \(2.60\times 10^{-2}\) & \(1.81\times 10^{-2}\) & \(3.20\times 10^{-2}\) \\ 0.314 & \(3.38\times 10^{-2}\) & \(2.29\times 10^{-2}\) & \(4.23\times 10^{-2}\) \\ 0.353 & \(3.30\times 10^{-2}\) & \(2.24\times 10^{-2}\) & \(4.33\times 10^{-2}\) \\ 0.412 & \(3.43\times 10^{-2}\) & \(2.06\times 10^{-2}\) & \(4.46\times 10^{-2}\) \\ \hline \end{tabular} (b) Dust mass lost by clumps (as fraction of total dust mass). \begin{tabular}{c c c c} \hline \(\tau_{s}\) & 6 bin runs & 12 bin & 18 bin \\ \hline 0.036 & \(4.71\times 10^{-5}\) & \(2.74\times 10^{-4}\) & \(3.52\times 10^{-4}\) \\ 0.191 & \(2.04\times 10^{-3}\) & \(1.99\times 10^{-3}\) & \(2.70\times 10^{-3}\) \\ 0.270 & \(2.78\times 10^{-3}\) & \(2.51\times 10^{-3}\) & \(3.83\times 10^{-3}\) \\ 0.314 & \(2.43\times 10^{-3}\) & \(2.21\times 10^{-3}\) & \(3.38\times 10^{-3}\) \\ 0.353 & \(1.95\times 10^{-3}\) & \(1.75\times 10^{-3}\) & \(2.73\times 10^{-3}\) \\ 0.412 & \(1.93\times 10^{-3}\) & \(1.72\times 10^{-3}\) & \(2.55\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 2: Time averages (\(t=80\)-\(120\,\Omega^{-1}\)) of the total dust mass in bound clumps and lost by clumps, split by \(\tau_{s}\) (grain size) (cf. Figure 7).

Figure 7: Fraction of total dust mass for particles in bound clumps or lost by clumps, for each \(\tau_{s}\) (grain size) bin. These data represent an average over the whole group of M6 simulations. The coloured bands represent the fractional mass for each \(\tau_{s}\). The data for each grain size are vertically stacked so that the total mass in clumps (or lost by clumps) for all dust grains is tracked by the top of the pink shaded region. The data for \(\tau_{s}=0.036\) are too small to be seen on this scale; see Table 2 for time-averaged values of these data for all \(\tau_{s}\), and for the data from the M12 and M18 simulations. 
\begin{table} \begin{tabular}{c c c c} \hline \(\tau_{s}\) & Mass & Mass lost & Residence \\ & in clumps & from clumps & time (\(\Omega^{-1}\)) \\ \hline 0.036 & \(4.90\times 10^{-5}\) & \(4.71\times 10^{-5}\) & 2.08 \\ 0.191 & \(7.73\times 10^{-3}\) & \(2.04\times 10^{-3}\) & 7.60 \\ 0.270 & \(2.60\times 10^{-2}\) & \(2.78\times 10^{-3}\) & 18.7 \\ 0.314 & \(3.38\times 10^{-2}\) & \(2.43\times 10^{-3}\) & 27.8 \\ 0.353 & \(3.30\times 10^{-2}\) & \(1.95\times 10^{-3}\) & 33.9 \\ 0.412 & \(3.43\times 10^{-2}\) & \(1.93\times 10^{-3}\) & 35.6 \\ \hline \end{tabular} \end{table} Table 3: Residence time (equation 18) for the different dust grains in the M6 simulations.

This table confirms our prior conclusions when considering both panels of Figure 7 together: the largest grains are the most bound, longest-lived components of the planetesimals. All grains with \(\tau_{s}>0.2\) have residence times above 18 \(\Omega^{-1}\), and this quantity increases monotonically with \(\tau_{s}\). The smallest grains at \(\tau_{s}=0.036\) have residence times comparable to \(\Delta t_{\rm snap}\), suggesting they form only a transient component of the clump mass5. Footnote 5: A more sophisticated clump-finding approach may definitively determine these small grains to be kinematically unbound. However, our simpler (and less expensive) analysis reaches the same conclusion to the degree of precision suitable for our study. Interestingly, the \(\tau_{s}=0.191\) grains have an intermediate residence time of \(\sim 8\,\Omega^{-1}\). We can also observe from the dust surface density maps for each grain species (Fig. 2) and the PDF of those surface densities (top panel, Fig. 4) that the \(\tau_{s}=0.191\) grains exhibit behavior that is not like the smallest grains or the larger grains. The smallest grains do not participate in any kind of dust clumping, and the larger grains readily form gravitationally unstable planetesimals. Our results suggest the \(\tau_{s}=0.191\) grain behavior is in between these two regimes. We can see evidence of this in-between behavior for the \(\tau_{s}=0.191\) grains in Figure 8, which shows the amount of dust mass above a certain density threshold at each grain size at \(t=100\,\Omega^{-1}\). In the bottom panel, the threshold is \(\rho_{H}\), and hence these data are equivalent to (a single time/vertical slice of) the data from the top panel of Figure 7. We see similar conclusions as before: the \(\tau_{s}>0.2\) grains dominate the clump mass budget, the \(\tau_{s}=0.036\) grains are not a part of the clumps at all, and the \(\tau_{s}=0.191\) grains make up a small fraction of the mass at clump densities. In the top panel of Figure 8, the threshold is \(\rho_{g,0}\), the midplane gas density. 
In our simulations and those like them from the literature, the gas density displays little variation, even when the streaming instability develops strong dust clumps and filaments (Li et al., 2018). So the \(\rho_{g,0}\) threshold effectively marks the boundary where the dust density dominates the total local mass density (\(\rho=\rho_{d}+\rho_{g}\)), an important regime for the streaming instability (Youdin & Goodman, 2005). We see that the \(\tau_{s}=0.036\) grains are underrepresented, even at the lower threshold of \(\rho_{g,0}\), representing \(\sim 7\%\) of all dust mass. Meanwhile, the \(\tau_{s}=0.191\) grains contribute just as much mass above this threshold as the larger grains. Including observations from the dust surface density at each grain size (Fig. 2), we can interpret the data in Figure 8 as supporting the idea that the \(\tau_{s}=0.191\) grains form filaments but not strong clumps, while the smaller \(\tau_{s}=0.036\) grains form neither. In other words, the \(\rho_{g,0}\) threshold appears to delimit the dust density boundary for the filamentary features. ### Simulations with larger numbers of species As with the results from Section 3, using a larger number of grain species to sample the grain size distribution does not change our results. In Table 2, we include time averages of the mass in clumps and lost by clumps for the M12 and M18 simulations. As discussed in Section 2.2.2, the larger bin samples are created by sub-sampling the 6 bins from the M6 simulations, so that we can easily combine the sub-sampled bins to match the \(\tau_{s}\) bin boundaries from the 6-bin sample for the purposes of comparison. The overall conclusions from the M12 and M18 data are the same: the larger \(\tau_{s}>0.2\) grains dominate the clump mass budget, while the dust mass lost is more evenly spread among the \(\tau_{s}>0.1\) grains. Also, the shapes of the curves from Figure 8 are within the bounds set by the M6 simulations. We note that, as a whole, including Figure 5, the M12 simulation has slightly lower mass in clumps and dense structures than the M6 average, while the M18 data are slightly above this average. We do not interpret these differences as evidence that an increased number of bins affects planetesimal formation in a deterministic way. Rather, we view these differences as a consequence of the non-linear nature of the developed stage of the streaming instability. The variability in the SI is immediately observable as the range of outcomes among the individual M6 and S simulations, and was the overarching theme of our previous study (Rucska & Wadsley, 2021). ### Dust velocity In this section we use velocity data to further explore the differences in behavior between the smaller and larger dust grains in our simulations, and the consequences this has on planetesimal formation. A 2D histogram of the dust particles in the dust volume density (\(\rho_{d}\)) and individual particle velocity (\(|v_{\rm dust}|\)) phase space, for the M6-0 run, is shown in Figure 9. Also plotted, as the white curve, is the magnitude of the equilibrium drift velocity for the dust from the Nakagawa-Sei-Hayashi (NSH) solution (Nakagawa et al., 1986), which tracks the expected steady-state drift rates of the dust (in the absence of complex dynamics like the non-linear SI). 
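For reference, the single-species form of this equilibrium (see also equations 7 of Youdin & Johansen, 2007) is simple to evaluate; a sketch in code units, where \(\eta v_{K}=\Pi c_{s}=0.05\) and \(\epsilon=\rho_{d}/\rho_{g}\):

```python
import numpy as np

def nsh_drift_speed(tau_s, eps, eta_vk=0.05):
    # Magnitude of the equilibrium (NSH) dust velocity relative to the
    # Keplerian flow for one species with stopping time tau_s and
    # dust-to-gas ratio eps.
    denom = (1.0 + eps) ** 2 + tau_s ** 2
    v_x = -2.0 * tau_s / denom * eta_vk    # radial drift
    v_y = -(1.0 + eps) / denom * eta_vk    # azimuthal offset
    return np.hypot(v_x, v_y)
```

As \(\epsilon\to\infty\) both components vanish (the dust drags the gas towards Keplerian rotation), while for \(\epsilon\ll 1\) the speed approaches the usual \(\sim\eta v_{K}\) drift.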
We see that at large \(\rho_{d}\) the expected drift velocity falls to 0, predicting that the dust fully decouples from the dust-gas equilibrium and orbits at the Keplerian velocity, and at low \(\rho_{d}\) the drift velocity approaches the radial pressure gradient offset \(\sim\eta v_{K}\), with a factor of order unity that depends on \(\tau_{s}\). The smallest \(\tau_{s}=0.036\) dust grains have most of their mass below \(\rho_{g,0}\), which is in line with conclusions regarding Figure 8. Nearly all of the dust at this size (which does not form filaments or clumps) follows the NSH equilibrium curve closely. This provides further evidence that these smallest grains do not participate in highly non-linear behavior that deviates from analytical, steady-state expectations. Most of the \(\tau_{s}=0.191\) grains do not exist at densities above \(\rho_{H}=180\), but between 30 and \(100\,\rho_{g,0}\), which corroborates earlier discussions in Section 4 which conclude these grains predominantly participate in filament formation but not clump formation. The lower density dust between \(\sim 0.3\) and \(10\,\rho_{g,0}\) primarily follows the NSH equilibrium curve. For the larger \(\tau_{s}>0.2\) grains, most of their mass exists at large densities well above \(\rho_{H}\). Dust in the centre of planetesimals can be seen as the bright yellow pixels at \(\rho_{d}\geq 10\rho_{H}\). The lines above and below these brightest pixels show that the dust superparticles (the dust resolution elements) can have slightly different velocities within a single grid cell. As with the \(\tau_{s}=0.191\) grains, the lower density dust is centered around the NSH expectations. Note that for all grains with dust densities above \(\sim 30\,\rho_{g,0}\), the bulk of the mass deviates substantially from the NSH equilibrium, settling at velocities between \(\sim 0.001\) and \(0.01c_{s}\). This is evidence of small amplitude, local turbulence, likely driven in part by the dense dust clumps near the midplane imparting substantial momentum onto the gas over small length scales. The width of the histogram about the NSH curve at lower densities is likely a result of this more dispersed dust interacting with stirred-up midplane gas.

Figure 8: Total dust mass above certain density thresholds (\(\widetilde{\rho}\)) as a function of grain size (\(\tau_{s}\)), normalized by the total dust mass in each grain size bin. In the top panel the threshold is the initial midplane gas density \(\rho_{g,0}\), and in the bottom panel the threshold is the Hill density (equation 13), above which dust forms gravitationally bound planetesimals. The shaded regions represent the bounds of the maximum and minimum across the five M6 simulations. The M12 and M18 data are shown with grey and black curves, multiplied by 2 and 3 to allow for a direct comparison with the M6 simulations, which have fewer bins and hence more dust mass per bin.

### Vertical position We can further highlight the different behavior between the different sized dust grains by briefly exploring the properties of the vertical (out of midplane) dynamics. Figure 10 shows the dust surface density in the radial-vertical (\(x\)-\(z\)) plane. We can see that the small grains have a much more extended vertical profile than any of the larger grains, with no bright features. Comparatively, the \(\tau_{s}=0.314\) grains (which look nearly identical to the other grains in the largest four sizes, which are not shown) are distributed very close to the midplane. 
The \(\tau_{s}=0.191\) grains are slightly more extended, with slightly broader features than the large grains, and the bright planetesimal between \(x=0.0\) and \(0.05H_{g}\) is not very bright in these grains. Yet, the filament features are readily visible.

Figure 9: 2D histogram in the dust density-velocity phase space for the different grains in the M6-0 simulation at \(t=100\Omega^{-1}\). Each panel is the histogram for an individual grain species. Note that all dust velocities are measured with respect to the background Keplerian flow. The (logarithmic) colourbar is normalized to the total dust mass in the simulation. The darkest bins do not contain any particles; a minimum value is applied for aesthetic purposes. The solid white curve represents the NSH equilibrium velocity (Nakagawa et al., 1986; see also equations 7 in Youdin & Johansen, 2007) and the vertical white dashed line represents the Hill density in our simulation units (equation 13). The NSH velocity is a function of \(\tau_{s}\) and the local dust-to-gas mass ratio, \(\epsilon=\rho_{d}/\rho_{g}\). Since \(\rho_{g}\approx 1\) throughout our simulation domain, we use \(\rho_{d}\) as a proxy for \(\epsilon\).

We can further quantify these observations by computing the dust scale height, defined as \[H_{p}=\sqrt{\frac{1}{N_{\rm par}}\sum_{i}^{N_{\rm par}}(z_{i}-\overline{z})^{2}}, \tag{19}\] and a similar (and related) quantity, the root mean-square (RMS) \(z\) velocity, \[v_{z,\rm rms}=\sqrt{\frac{1}{N_{\rm par}}\sum_{i}^{N_{\rm par}}(v_{z,i}-\overline{v_{z}})^{2}}. \tag{20}\] These values for all dust grains in the M6-0 simulation are presented in Table 4. The \(H_{p}\) data confirm what is visible in the vertical surface density: the smallest grains have by far the most vertically extended scale heights, and the scale height monotonically decreases with \(\tau_{s}\). The scale height is directly related to the RMS of the vertical dust velocity since it is only through turbulent motions, which provide a constant source of vertical velocity dispersion, that the dust can maintain a persistent scale height (Youdin & Lithwick, 2007). Similar to observations made by Schaffer et al. (2018, 2021) in their 2D simulations of the SI with multiple grains, it appears in our simulations that the larger grains stir up turbulence near the midplane, which causes the smaller grains, which are more tightly coupled to the gas aerodynamically (short drag stopping times), to remain suspended at relatively large scale heights. The vertical RMS velocity for the gas near the midplane is \(4.13\times 10^{-3}\) (in units of \(c_{s}\)), which is very close to \(v_{z,\rm rms}\) for the small \(\tau_{s}=0.036\) grains. ## 5 Conclusions and Discussion In this study we model a patch of a protoplanetary disc in 3D numerical hydrodynamics simulations. We model the dust component of the disc with multiple grain sizes simultaneously under conditions that are unstable to the streaming instability, and track the non-linear development of the SI to the formation of bound planetesimals. This paper extends previous work that used multiple grain sizes in simulations of the non-linear phase of the SI (Johansen et al., 2007; Bai & Stone, 2010; Schaffer et al., 2018, 2021; Yang & Zhu, 2021). 
Most prior work used a grain size distribution with a number density described by a single power law, but in our study we sample a distribution that is the output of a widely-used model of grain growth and fragmentation applicable to the midplane of protoplanetary discs (Birnstiel et al., 2011). To compare our multi-species results to prior work which modelled the dust with a single species, we match the peak of the size distribution to the grain size studied in Rucska & Wadsley (2021). Our main results are as follows: (i) Only larger grains with dimensionless stopping times \(\tau_{s}>0.1\) participate strongly in the non-linear SI, producing filaments and regions with large dust densities that gravitationally collapse into planetesimals. The smaller grains do not form filaments or clumps at all, despite the fact they are embedded in an environment where roughly 5/6 of the dust mass is forming dense structures. This confirms a basic property of the multi-species SI at the non-linear stage (Bai & Stone, 2010; Yang & Zhu, 2021), which remains true for a realistic protoplanetary disc grain size distribution from Birnstiel et al. (2011). The net result is that there is more dust mass in the regions between the filaments in the multi-species simulations when compared to the single grain simulations, and slightly less mass in the dense structures. (ii) Clumping of dust via the SI on sub-AU length scales reduces the average surface brightness for a given amount of dust. This confirms in 3D models that the SI may explain the lower than expected (order unity) optical depths inferred in observed protoplanetary disc rings (see Section 3.1 for details). We estimate that 20%-80% more dust may be present than in uniform mass distribution models. The effect is less severe for multi-size versus single-size models. (iii) We identify bound clumps and dense dust features. Larger \(\tau_{s}\gtrsim 0.2\) grains form clumps; \(\tau_{s}\lesssim 0.04\) grains do not form clumps or filaments. Intermediate sizes are somewhat in between, forming filaments but not clumps. The velocities of the smallest grains are quite different from the larger grains in clumps and filaments, suggesting that these small grains--with a short drag stopping time that enforces tight coupling to the gas--simply sweep by the planetesimals rather than becoming incorporated into them. This implies a lower size cutoff for pebble and dust grains incorporated into asteroids and comets.

\begin{table} \begin{tabular}{c c c} \hline \(\tau_{s}\) & \(H_{p}\) (\(H_{g}\)) & \(v_{z,\rm rms}\) (\(c_{s}\)) \\ \hline 0.036 & \(11.7\times 10^{-3}\) & \(4.16\times 10^{-3}\) \\ 0.191 & \(4.47\times 10^{-3}\) & \(2.87\times 10^{-3}\) \\ 0.270 & \(3.40\times 10^{-3}\) & \(2.50\times 10^{-3}\) \\ 0.314 & \(3.03\times 10^{-3}\) & \(2.31\times 10^{-3}\) \\ 0.353 & \(2.77\times 10^{-3}\) & \(2.29\times 10^{-3}\) \\ 0.412 & \(2.64\times 10^{-3}\) & \(2.37\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 4: Particle scale height and vertical RMS velocity for the different dust grains in the M6-0 simulation at \(t=100\Omega^{-1}\).

Figure 10: Dust surface density in the \(x\)-\(z\) (radial-vertical) plane for a subset of grains from the M6-0 simulation at \(t=100\Omega^{-1}\).

(iv) The main group of the multi-species runs in this study used 6 bins or species to sample the grain size distribution. We test 12 and 18 bins to show convergence. 
Using more bins appears to have no appreciable effect on the results for the multi-species simulations, and we conclude that 6 bins is sufficient to study peaked grain size distributions. ### The future of planetesimal formation via the SI with multiple grain sizes Including multiple sizes in models of the non-linear SI affects not just planetesimal formation but also the observable properties of protoplanetary discs. Most prior work on the SI has modelled the dust with a single grain size. However, recent observations of protoplanetary discs (see Andrews, 2020, for a review) and results from grain growth theory (Birnstiel et al., 2011, 2015) suggest that there is a distribution of dust grain sizes within discs. An important consideration then is what the shape of this distribution should be. In this paper we have shown that just an order of magnitude difference in grain size can determine whether grains are fully active in the SI all the way to planetesimal formation, or whether they do not even form filaments. This result motivates further exploration of the grain size distribution parameter space. Our study represents a single instance of the Birnstiel et al. (2011) distribution for a specific set of disc conditions. In our results, most of the species participate in planetesimal formation. Shifting the distribution peak to smaller sizes--equivalent to considering different radial positions in the disc--would move dust mass from species that undergo strong clumping towards species that do not participate in planetesimal formation or primarily form only filaments. Presumably, this would result in an overall decrease in the total dust mass that is converted to planetesimals and may act to suppress the instability itself. Extending our work to a broader range of distributions would reveal how planetesimal formation varies in conditions at different radial locations in the disc. Of particular interest is a distribution with a more equal mix of SI-active and SI-inactive grains. These conditions likely describe the onset of the SI and planetesimal formation. Early in the disc lifetime, most of the dust in the midplane may be too small (\(\tau_{s}\lesssim 0.04\)) to participate in planetesimal formation initially, and then grow through mutual collisions (e.g. Birnstiel et al., 2011) to sizes that are unstable to the SI. However, the time-scales for grain growth are typically \(>10^{4}\) yr (e.g. Birnstiel et al., 2012), while the timescale for planetesimal formation via the SI is much shorter6. Thus, for initially small grains, planetesimal formation may occur as grains grow. It would be interesting to explore this initial planetesimal formation phase with a dust size distribution that includes a larger proportion of smaller, SI-inactive grains. Footnote 6: For the timescales in our study, \(100\,\Omega^{-1}\approx 16\) orbital periods, which is equivalent to \(\sim\)200 years at 5 AU around a solar mass star. More realistically, however, it is likely grain growth and the streaming instability occur simultaneously. Dust growth and fragmentation is driven by collisions between dust grains. The source of the relative velocity for these collisions in models such as Birnstiel et al. (2011) is an underlying turbulence that may be driven by large scale hydrodynamic instabilities (see Lyra & Umurhan, 2019, for a review). The streaming instability generates its own turbulence locally (e.g. 
Li et al., 2018) that drives relative velocities between dust grains, especially when a distribution of sizes is considered (Bai & Stone, 2010). How these SI-driven collisions influence grain growth remains unstudied. A possible technique may be a model where the dust size can change based on collisions and expectations of growth/fragmentation. These dynamic grain size models have been applied to global models of disc evolution (e.g. Gonzalez et al., 2017; Drazkowska et al., 2021), yet have not appeared in high-resolution studies of the SI. Our results show that, under the SI, a distribution of sizes will segregate spatially. The larger, pebble-sized dust settles to the midplane and undergoes vigorous non-linear dynamics leading to filament and planetesimal formation, while the smaller grains remain vertically suspended and occupy the space between filaments. Thus, the influence of grain growth likely varies spatially as well. Perhaps the small, vertically suspended grains could grow to sizes that are more SI-active, settle towards the midplane, and participate in planetesimal formation. The dense, dust-dominated regions within filaments could promote the growth of pebbles to larger sizes than is possible in gas-dominated regimes. Or, the pebbles in filaments could fragment to smaller SI-inactive grains and reduce the efficiency of planetesimal formation. These smaller sized, fragmented remnants would be created at low scale heights near the midplane, and it is unclear how those grains would interact with clumps and pebble-rich filaments. Such possibilities could be explored in dynamic grain size models. Incorporating grain growth introduces models dependent on physical units (e.g. the fragmentation threshold velocity). This breaks the scale-free property of the common shearing box model used in high-resolution studies of the SI, which allows, for example, \(\tau_{s}=0.314\) dust to represent different physical grain sizes depending on the disc model and radial position. This means multiple simulations will be required to model how grain growth theory interacts with the local dynamics of the streaming instability under different disc conditions. The composition of grains could also influence both grain growth and the aerodynamic coupling between the solids and gas phase. Icy grains can stick together at larger collisional velocities than silicate grains (e.g. Gundlach & Blum, 2015), and since icy grains are, generally speaking, larger than dry grains, they can radially drift through protoplanetary discs at different rates (Drazkowska & Alibert, 2017). If both dry and icy grains co-exist in a disc region that is unstable to the SI (in the vicinity of a disc ice line), our results suggest the two populations could become spatially separated. The small, dry grains would preferentially remain suspended above the disc midplane while the larger, SI-active icy grains would form filaments and planetesimals. This would distinguish the chemical composition of the planetesimals from the overall dust population within which they are formed. Improving numerical resolution to near-planetesimal (\(\sim\)10 km) length scales could confirm our interpretation that small grains do not participate in clump formation because they are tightly coupled to the gas, which flows around the planetesimals. 
An increase in resolution to this scale is not possible with the methods applied to the streaming instability thus far, but may be approachable with adaptive resolution techniques and/or zoom-in simulations with small domains. The ability of the available numerical schemes to model an aerodynamically coupled solids-gas system with a dynamic grain size distribution also remains unexplored. Bai & Stone (2010) suggested that 1 dust superparticle per grid cell per grain species is adequate to capture the non-linear SI, and this has been the literature standard since. It is unclear how this would translate to a dust phase with a continuous, dynamic size range. Difficulties and uncertainties aside, we believe a dynamic dust size distribution could be a promising avenue for approaching a more realistic model of planetesimal formation via the streaming instability. ## Acknowledgements These simulations were performed on the Niagara supercomputing cluster operated by SciNet and Compute Canada. JW thanks NSERC for funding support. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2303.12410
EDGI: Equivariant Diffusion for Planning with Embodied Agents
Embodied agents operate in a structured world, often solving tasks with spatial, temporal, and permutation symmetries. Most algorithms for planning and model-based reinforcement learning (MBRL) do not take this rich geometric structure into account, leading to sample inefficiency and poor generalization. We introduce the Equivariant Diffuser for Generating Interactions (EDGI), an algorithm for MBRL and planning that is equivariant with respect to the product of the spatial symmetry group SE(3), the discrete-time translation group Z, and the object permutation group Sn. EDGI follows the Diffuser framework (Janner et al., 2022) in treating both learning a world model and planning in it as a conditional generative modeling problem, training a diffusion model on an offline trajectory dataset. We introduce a new SE(3)xZxSn-equivariant diffusion model that supports multiple representations. We integrate this model in a planning loop, where conditioning and classifier guidance let us softly break the symmetry for specific tasks as needed. On object manipulation and navigation tasks, EDGI is substantially more sample efficient and generalizes better across the symmetry group than non-equivariant models.
Johann Brehmer, Joey Bose, Pim de Haan, Taco Cohen
2023-03-22T09:19:39Z
http://arxiv.org/abs/2303.12410v2
# EDGI: Equivariant Diffusion for Planning with Embodied Agents ###### Abstract Embodied agents operate in a structured world, often solving tasks with spatial, temporal, and permutation symmetries. Most algorithms for planning and model-based reinforcement learning (MBRL) do not take this rich geometric structure into account, leading to sample inefficiency and poor generalization. We introduce the Equivariant Diffuser for Generating Interactions (EDGI), an algorithm for MBRL and planning that is equivariant with respect to the product of the spatial symmetry group \(\mathrm{SE}(3)\), the discrete-time translation group \(\mathbb{Z}\), and the object permutation group \(\mathrm{S}_{n}\). EDGI follows the Diffuser framework (Janner et al., 2022) in treating both learning a world model and planning in it as a conditional generative modeling problem, training a diffusion model on an offline trajectory dataset. We introduce a new \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant diffusion model that supports multiple representations. We integrate this model in a planning loop, where conditioning and classifier-based guidance allow us to softly break the symmetry for specific tasks as needed. On navigation and object manipulation tasks, EDGI improves sample efficiency and generalization. ## 1 Introduction Our world is awash with symmetries. The laws of physics are the same everywhere in space and time--they are symmetric under translations and rotations of spatial coordinates as well as under time shifts.1 In addition, whenever multiple identical or equivalent objects are labeled with numbers, the system is symmetric with respect to a permutation of the labels. Embodied agents are exposed to this structure, and many common robotic tasks exhibit spatial, temporal, or permutation symmetries. The gaits of a quadruped are independent of whether it is moving East or North, and a robotic gripper would interact with multiple identical objects independently of their labeling. However, most reinforcement learning (RL) and planning algorithms do not take this rich structure into account. While they have achieved remarkable success on well-defined problems after sufficient training, they are often sample-inefficient (Holland et al., 2018) and lack robustness to changes in the environment.
Figure 1: Schematic of EDGI in a navigation task, where the agent (red square) plans the next actions (red dots) to reach the goal (green star) without touching obstacles (grey circles). **Top**: planning as conditional sampling from a diffusion model. **Bottom**: effect of a group action. Equivariance requires the diagram to commute.
To improve the sample efficiency and robustness of RL algorithms, we believe it is paramount to develop them with an awareness of their symmetries. Such algorithms should satisfy two key desiderata. First, policy and world models should be equivariant with respect to the relevant symmetry group. Often, for embodied agents this will be a subgroup of the product group of the spatial symmetry group \(\mathrm{SE}(3)\), the group of discrete time shifts \(\mathbb{Z}\), and one or multiple object permutation groups \(\mathrm{S}_{n}\). Second, it should be possible to softly break (parts of) the symmetry group to solve concrete tasks. For example, a robotic gripper might be tasked with moving an object to a specific point in space, which breaks the symmetry group \(\mathrm{SE}(3)\). 
First works on equivariant RL have demonstrated the potential benefits of this approach (van der Pol et al., 2020; Walters et al., 2020; Mondal et al., 2021; Muglich et al., 2022; Wang and Walters, 2022; Wang et al., 2022; Cetin et al., 2022; Rezaei-Shoshtari et al., 2022; Deac et al., 2023). However, these works generally only consider small finite symmetry groups such as \(C_{n}\) and do not usually allow for soft symmetry breaking at test time. In this paper, we introduce the _Equivariant Diffuser for Generating Interactions_ (EDGI), an equivariant algorithm for model-based reinforcement learning and planning. EDGI consists of a base component that is equivariant with respect to the full product group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\) and supports the multiple different representations of this group we expect to encounter in embodied environments. Moreover, EDGI allows for a flexible soft breaking of the symmetry at test time depending on the task. Our work builds on the Diffuser method by Janner et al. (2022), who approach both the learning of a dynamics model and planning within it as a generative modeling problem. The key idea in Diffuser is to train a diffusion model on an offline dataset of state-action trajectories. To plan with this model, one samples from it conditionally on the current state, using classifier guidance to maximize the reward. Our main contribution is a new diffusion model that is equivariant with respect to the product group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\) of spatial, temporal, and permutation symmetries and supports data consisting of multiple representations. We introduce a new way of embedding multiple input representations into a single internal representation, as well as novel temporal, object, and permutation layers that act on the individual symmetries. When integrated into a planning algorithm, our approach allows for a soft breaking of the symmetry group through test-time task specifications both through conditioning and classifier guidance. We demonstrate EDGI in 3D navigation and robotic object manipulation environments. Compared to non-equivariant baselines, we find a performance improvement in the low-data regime as well as increased robustness to symmetry transformations of the environment. ## 2 Background **Equivariant deep learning**. Equivariant networks directly encode the symmetries described by a group \(G\) in their architecture. For the purposes of this paper, we are interested in the symmetries of 3D space, which include translations and rotations and are described by the special Euclidean group \(\mathrm{SE}(3)\), discrete-time translations \(\mathbb{Z}\), and object permutations, which are defined using the symmetric group of \(n\) elements \(\mathbb{S}_{n}\). A function \(f:\mathcal{X}\to\mathcal{Y}\) is called \(G\)-equivariant if \(g\cdot f(x)=f(g\cdot x)\) for all \(g\in G\) and \(x\in\mathcal{X}\). Here \(\mathcal{X}\) and \(\mathcal{Y}\) are input and output spaces that carry a \(G\) action denoted by \(\cdot\). The function \(f\) is called \(G\)-invariant if the group action in \(\mathcal{Y}\) is trivial, \(g\cdot y=y\). We will focus on \(\mathcal{X}=\mathbb{R}^{n}\), \(\mathcal{Y}=\mathbb{R}^{m}\), and linear group actions or representations, which are group homomorphisms \(\rho:G\to\mathrm{GL}(\mathbb{R}^{k})\). For generative modeling, we seek to model \(G\)-invariant densities. 
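To make the definition concrete, the following minimal sketch (our illustration, not code from the paper) numerically checks equivariance for the permutation group \(\mathrm{S}_{n}\); the map `f`, a DeepSets-style layer combining a per-element transform with a broadcast sum, is a hypothetical example of an equivariant function.

```python
import numpy as np

def f(x):
    # A simple S_n-equivariant map (DeepSets-style): a shared per-element
    # transform plus a permutation-invariant aggregate broadcast to all rows.
    return np.tanh(x) + x.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))   # 5 objects with 3 features each
perm = rng.permutation(5)     # a group element g of S_5

# Equivariance: g . f(x) == f(g . x)
assert np.allclose(f(x)[perm], f(x[perm]))
```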
As proven in (Kohler et al., 2020; Bose and Kobyzev, 2021; Papamakarios et al., 2021), given a \(G\)-invariant prior density it is sufficient to construct a \(G\)-equivariant map to reach the desired \(G\)-invariant target density. In Sec. 3, we design \(G\)-equivariant diffusion architectures to model a distribution of trajectories that are known to be symmetric with respect to the product group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\). **Diffusion models**. Diffusion models (Sohl-Dickstein et al., 2015) are latent variable models that generate data by iteratively inverting a diffusion process. This diffusion process starts from a clean data sample \(x_{0}\sim q(x_{0})\) and progressively injects noise for \(i\in[T]\) steps until the distribution is pure noise. The reverse, generative process takes a sample from a noise distribution and denoises it by progressively adding back structure, until we return to a sample that resembles being drawn from the empirical data distribution \(p(x)\). In diffusion models, it is customary to choose a parameter-free diffusion process (e. g. Gaussian noise with fixed variance). Specifically, define \(q(x_{t}|x_{t-1})\) as the forward diffusion distribution modeled as a Gaussian centered around the sample \(x_{t-1}\) at timestep \(t-1\): \(q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\), where \(\beta_{t}\) is a known variance schedule. The reverse generative process is learnable and can be parametrized using another distribution \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\sigma_{t}^{2}I)\), and the constraint that the terminal marginal at time \(T\) is a standard Gaussian--i.e. \(p(x_{T})=\mathcal{N}(0,I)\). The generative process can be learned by maximizing a variational lower bound on the marginal likelihood. In practice, instead of predicting the mean of the noisy data, it is convenient to predict the noise level \(\epsilon_{t}\) directly (Ho et al., 2020). Furthermore, to perform low-temperature sampling in diffusion models it is possible to leverage a pretrained classifier to guide the generation process (Dhariwal and Nichol, 2021). To do so we can modify the diffusion score by including the gradient of the log likelihood of the classifier \(\tilde{\epsilon}_{\theta}(x_{t},t,c)=\epsilon_{\theta}(x_{t},t)-\lambda\sigma _{t}\nabla_{x_{t}}\log p(c|x_{t})\), where \(\lambda\) is the guidance weight and \(c\) is the label. **Trajectory optimization with diffusion**. We are interested in modeling systems that are governed by discrete-time dynamics of a state \(s_{h+1}=f(s_{h},a_{h})\), given the state \(s_{h}\) and action \(a_{h}\) taken at timestep \(h\). The goal in trajectory optimization is then to find a sequence of actions \(\mathbf{a}_{0:H}^{*}\) that maximizes an objective (reward) \(\mathcal{J}\) which factorizes over per-timestep rewards \(r(s_{h},a_{h})\). Formally, this corresponds to the optimization problem \(\mathbf{a}_{0:H}^{*}=\operatorname*{arg\,max}_{a_{0:H}}\mathcal{J}(s_{0},a_{0 :H})=\operatorname*{arg\,max}_{a_{0:H}}\sum_{h=0}^{H}r(s_{h},a_{h})\), where \(H\) is the planning horizon and \(\tau=(s_{0},a_{0},\ldots,s_{H},a_{H})\) denotes the trajectory. A practical method to solve this optimization problem is to unify the problem of learning a model of the state transition dynamics and the problem of planning with this model into a single generative modeling problem. Janner et al. 
(2022) propose to train a diffusion model on offline trajectory data consisting of state-action pairs, learning a density \(p_{\theta}(\tau)\). Planning can then be phrased as a conditional sampling problem: finding the distribution \[\tilde{p}_{\theta}(\tau)\propto p_{\theta}(\tau)c(\tau), \tag{1}\] over trajectories \(\tau\), where \(c(\tau)\) encodes constraints on the trajectories and specifies the task, for instance as a reward function. Diffusion models allow conditioning in a way similar to inpainting in generative image modeling, and reward maximization in analogy to classifier-based guidance. ## 3 Equivariant diffuser for generating interactions (EDGI) We now describe our EDGI method. We begin by discussing the symmetry group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\) and common representations in robotic problems. In Sec. 3.2 we introduce our key novelty, an \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant diffusion model for state-action trajectories \(\tau\). Finally, we show how a diffusion model trained on offline trajectory data can be used for planning in Sec. 3.3. ### Symmetry and representations **Symmetry group**. We consider the symmetry group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\), which is a product of three distinct groups: 1. the group of spatial translations and rotations \(\mathrm{SE}(3)\), 2. the discrete time translation symmetry \(\mathbb{Z}\), and 3. the permutation group over \(n\) objects \(\mathrm{S}_{n}\). It is important to note, however, that this symmetry group may be softly broken in an environment. For instance, the direction of gravity usually breaks the spatial symmetry group \(\mathrm{SE}(3)\) to the smaller group \(\mathrm{SE}(2)\), and distinguishable objects in a scene may break permutation invariance. We follow the philosophy of modeling invariance with respect to the larger group and including any symmetry-breaking effects as inputs to the networks. We require that spatial positions are always expressed relative to a reference point, for example, the robot base or center of mass. This guarantees equivariance with respect to spatial translations: to achieve \(\mathrm{SE}(3)\) equivariance, we only need to design an \(\mathrm{SO}(3)\)-equivariant architecture. **Data representations**. We consider 3D environments that contain an embodied agent as well as \(n\) other objects. We parameterize their degrees of freedom with two \(\mathrm{SO}(3)\) representations, namely the scalar representation \(\rho_{0}\) and the vector representation \(\rho_{1}\). Any \(\mathrm{SE}(3)\) pose can be transformed to these two representations; see Appendix A for details. We assume that all trajectories transform under the regular representation of the time translation group \(\mathbb{Z}\) (similar to how images transform under spatial translations). Under \(\mathrm{S}_{n}\), object properties permute, while robot properties or global properties of the state remain invariant. Each feature is thus either in the trivial or the standard representation of \(\mathrm{S}_{n}\). Overall, we thus expect data in environments experienced by our embodied agent to fall into four representations of the symmetry group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\): scalar object properties, vector object properties, scalar robotic degrees of freedom (or other global properties of the system), and vector robotic degrees of freedom (again including other global properties of the system). 
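As an illustration of these four representation types and the corresponding group actions, consider the following sketch; the code, shapes, and names are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

H, n = 16, 4                              # trajectory steps, objects
s_obj = np.random.randn(H, n)             # SO(3) scalars per object
v_obj = np.random.randn(H, n, 3)          # SO(3) vectors per object
s_glob = np.random.randn(H)               # scalar robot/global properties
v_glob = np.random.randn(H, 3)            # vector robot/global properties

R = Rotation.random().as_matrix()         # g in SO(3)
P = np.random.permutation(n)              # g in S_n
k = 3                                     # g in Z (time translation)

# SO(3): scalars transform with rho_0 (unchanged), vectors with rho_1.
v_obj_rot, v_glob_rot = v_obj @ R.T, v_glob @ R.T

# S_n: object features permute; global features are invariant.
s_obj_perm, v_obj_perm = s_obj[:, P], v_obj[:, P]

# Z: all features transform in the regular representation, i.e. a shift
# along the time axis (np.roll is only a periodic stand-in for a shift).
shifted = [np.roll(a, k, axis=0) for a in (s_obj, v_obj, s_glob, v_glob)]
```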
### Equivariant diffusion model Our main contribution is a novel \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant diffusion model which leads to an _invariant_ distribution over trajectories. Specifically, given an invariant base density with respect to our chosen symmetry group--a Gaussian satisfies this property for \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)--and an equivariant denoising model with respect to the same group we arrive at a diffusion model that is \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-invariant (Kohler et al., 2020; Papamakarios et al., 2021). Under mild assumptions, such an equivariant map that pushes forward the base density always exists (Bose and Kobyzev, 2021). We design a novel equivariant architecture for the denoising model \(f\). Implemented as a neural network, it maps noisy input trajectories \(\tau\) and a diffusion time step \(i\) to an estimate \(\hat{\epsilon}\) of the noise vector that generated the input. Our architecture does this in three steps. First, the input trajectory consisting of various representations is transformed into an internal representation of the symmetry group. Second, in this representation the data are processed with an equivariant network. Finally, the outputs are transformed from the internal representation into the original data representations present in the trajectory. We illustrate the architecture of our EDGI model in Fig. 2.
Figure 2: Architecture of our \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant denoising network. Input trajectories (top left), which consist of features in different representations of the symmetry group, are first transformed into a single internal representation (green block). The data are then processed with equivariant blocks (blue), which consist of convolutional layers along the time dimension, attention over objects, normalization layers, and geometric layers, which mix scalar and vector components of the internal representations. These blocks are combined into a U-net architecture. For simplicity, we leave out many details, including residual connections, downsampling, and upsampling layers; see Appendix A.
**Step 1: Representation mixer**. The input noisy trajectory consists of features in different representations of the symmetry group (see above). While it is possible to mirror these input representations for the hidden states of the neural network, the design of equivariant architectures is substantially simplified if all inputs and outputs transform under a single representation. Hence, we decouple the data representation from the representation used internally for the computation--in a similar fashion to graph neural networks that decouple the data and computation graphs. **Internal representation**. We define a single internal representation that, for each trajectory time step \(t\in[H]\), each object \(o\in[n]\), and each channel \(c\in[n_{c}]\), consists of an \(\mathrm{SO}(3)\) scalar \(s_{toc}\) and an \(\mathrm{SO}(3)\) vector \(v_{toc}\). Footnote 2: Pairing up just one scalar and one vector is a design choice; for systems in which scalar or vectorial quantities play a larger role, it may be beneficial to use multiple copies of either representation here. We write \(w_{toc}=(s_{toc},v_{toc})\in\mathbb{R}^{4}\). Under spatial rotations \(g\in\mathrm{SO}(3)\), these features thus transform as the direct sum of the scalar and vector representations \(\rho_{0}\oplus\rho_{1}\): 
\[w_{toc}=\begin{pmatrix}s_{toc}\\ v_{toc}\end{pmatrix}\to w^{\prime}_{toc}=\begin{pmatrix}\rho_{0}(g)s_{toc}\\ \rho_{1}(g)v_{toc}\end{pmatrix}\,. \tag{2}\] These internal features transform in the regular representation under time shift and in the standard representation under permutations \(\mathbb{P}\) as \(w_{toc}\to w_{to^{\prime}c}=\sum_{o}\mathbb{P}_{o^{\prime}o}\,w_{toc}\). There are thus no global (not object-specific) properties in our internal representations. **Transforming input representations into internal representations**. The first layer in our network transforms the input \(\tau\), which consists of features in different representations of \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\), into the internal representation. On the one hand, we pair up \(\mathrm{SO}(3)\) scalars and \(\mathrm{SO}(3)\) vectors into \(\rho_{0}\oplus\rho_{1}\) features. On the other hand, we get rid of global features - those unassigned to one of the \(n\) objects in the scene - by including them in the representation of each of the \(n\) objects. Concretely, for each object \(o\in[n]\), each trajectory step \(t\in[H]\), and each channel \(c\in[n_{c}]\), we define the input in the internal representation as \(w_{toc}\in\mathbb{R}^{4}\) as follows: \[w_{toc}=\begin{pmatrix}\sum_{c^{\prime}}\mathbf{W}^{1}_{occ^{\prime}}s_{toc^{\prime}}\\ \sum_{c^{\prime}}\mathbf{W}^{2}_{occ^{\prime}}v_{toc^{\prime}}\end{pmatrix}+\begin{pmatrix}\sum_{c^{\prime}}\mathbf{W}^{3}_{occ^{\prime}}\bar{s}_{tc^{\prime}}\\ \sum_{c^{\prime}}\mathbf{W}^{4}_{occ^{\prime}}\bar{v}_{tc^{\prime}}\end{pmatrix}\,, \tag{3}\] where \(\bar{s}_{tc^{\prime}}\) and \(\bar{v}_{tc^{\prime}}\) denote the global (not object-specific) scalar and vector features. The matrices \(\mathbf{W}^{1,2,3,4}\) are learnable and of dimension \(n\times n_{c}\times n_{s}^{\text{object}}\), \(n\times n_{c}\times n_{v}^{\text{object}}\), \(n\times n_{c}\times n_{s}^{\text{global}}\), or \(n\times n_{c}\times n_{v}^{\text{global}}\), respectively. Here \(n_{s}^{\text{object}}\) is the number of \(\mathrm{SO}(3)\) scalar quantities associated with each object in the trajectory, \(n_{v}^{\text{object}}\) is the number of \(\mathrm{SO}(3)\) vector quantities associated with each object, \(n_{s}^{\text{global}}\) is the number of scalar quantities associated with the robot or global properties of the system, and \(n_{v}^{\text{global}}\) is the number of vectors of that nature. The number of input channels \(n_{c}\) is a hyperparameter. We initialize the matrices \(\mathbf{W}^{i}\) such that Eq. (3) corresponds to a concatenation of all object-specific and global features along the channel axis at the beginning of training. **Step 2: \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant \(\mathbf{U}\)-net**. We then process the data with an \(\mathrm{SO}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant denoising network. Its key components are three alternating types of layers. Each type acts on the representation dimension of one of the three symmetry groups while leaving the other two invariant--i. e. they do not mix the representation dimensions handled by the other two layer types (a minimal code sketch of the temporal layer follows the list below): * _Temporal layers_: Time-translation-equivariant convolutions along the temporal direction (i. e. along trajectory steps), organized in a U-Net architecture. * _Object layers_: Permutation-equivariant self-attention layers over the object dimension. * _Geometric layers_: \(\mathrm{SO}(3)\)-equivariant interaction between the scalar and vector features. 
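The following is a minimal sketch of the first layer type, assuming PyTorch; the tensor layout (batch, channels, objects, geometric components, time) and the hyperparameters are our illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class TemporalLayer(nn.Module):
    """Bias-free 1D convolution along the trajectory time axis.

    Input shape: (batch, channels, objects, 4, time), where the size-4 axis
    holds one SO(3) scalar and the three components of one SO(3) vector.
    Objects and geometric components are folded into the batch so they are
    never mixed, preserving S_n and SO(3) equivariance; only time and
    channels are mixed. Omitting the bias keeps the vector part equivariant.
    """
    def __init__(self, c_in, c_out, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, w):
        b, c, o, g, t = w.shape
        w = w.permute(0, 2, 3, 1, 4).reshape(b * o * g, c, t)
        w = self.conv(w)
        return w.reshape(b, o, g, -1, t).permute(0, 3, 1, 2, 4)
```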
In addition, we use residual connections, a new type of normalization layer that does not break equivariance, and context blocks that process conditioning information and embed it in the internal representation (see Appendix A for more details). These layers are combined into an equivariant block consisting of one instance of each layer, and the equivariant blocks are arranged in a U-net, as depicted in Fig. 2. Between the levels of the U-net, we downsample (upsample) along the trajectory time dimension by factors of two, increasing (decreasing) the number of channels correspondingly. **Temporal layers**. Temporal layers consist of \(1\)D convolutions along the trajectory time dimension. To preserve \(\mathrm{SO}(3)\) equivariance, these convolutions do not add any bias, and there is no mixing of features associated with different objects, nor of the four geometric features of the internal \(\mathrm{SO}(3)\) representation. **Permutation layers**. Permutation layers enable features associated with different objects to interact via an equivariant multi-head self-attention layer. Here, there is no mixing between features associated with different time steps, nor between the four geometric features of the internal \(\mathrm{SO}(3)\) representation. This is \(\mathrm{SO}(3)\)-equivariant, as the attention weights are computed from \(\mathrm{SO}(3)\)-invariant norms. **Geometric layers**. Geometric layers enable mixing between the scalar and vector quantities that are combined in the internal representation but do not mix between different objects or across the time dimension. We construct an expressive equivariant map between scalar and vector inputs and outputs following Villar et al. (2021): We first separate the inputs into \(\mathrm{SO}(3)\) scalar and vector components, \(w_{toc}=(s_{toc},v_{toc})^{T}\). We then construct a complete set of \(\mathrm{SO}(3)\) invariants by combining the scalars and pairwise inner products between the vectors, \(S_{to}=\{s_{toc}\}_{c}\cup\{v_{toc}\cdot v_{toc^{\prime}}\}_{c,c^{\prime}}\). These are then used as inputs to two MLPs \(\phi\) and \(\psi\), and finally we obtain output scalars and vectors, \(w^{\prime}_{toc}=(\phi(S_{to})_{c},\sum_{c^{\prime}}\psi(S_{to})_{cc^{\prime} }v_{toc^{\prime}})\). Villar et al. (2021) show that this approach can approximate any equivariant map between \(\mathrm{SO}(3)\) scalars and vectors under mild assumptions. In its original form, however, it can become prohibitively expensive, as the number of \(\mathrm{SO}(3)\) invariants \(S_{to}\) scales quadratically with the number of channels. Thus, we first linearly map the input vectors into a smaller number of vectors, apply this transformation, and increase the number of channels again with another linear map. **Step 3: Representation unmixer**. The equivariant network outputs internal representations \(w_{toc}\) that are transformed back to data representations using linear maps, in analogy to Eq. (3). Global properties, e. g. robotic degrees of freedom, are aggregated from the object-specific internal representations by taking the mean, minimum, and maximum across the objects. These three aggregates are then concatenated along the channel dimension. We find it beneficial to apply an additional geometric layer to these aggregated global features before separating them into the original representations. **Training**. We train EDGI by optimizing a simplified variational lower bound (Ho et al., 2020) on offline trajectories without any reward information. 
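To make the geometric layer above concrete, here is a minimal sketch of the Villar et al. (2021)-style construction, assuming PyTorch; the hidden width, the activation, and the omission of the channel-reducing linear maps mentioned above are our simplifications.

```python
import torch
import torch.nn as nn

class GeometricLayer(nn.Module):
    """SO(3)-equivariant mixing of scalar and vector channels.

    s: (..., c) SO(3) scalars; v: (..., c, 3) SO(3) vectors.
    All learnable maps act only on SO(3)-invariant quantities, so the
    output scalars are invariant and the output vectors are equivariant.
    """
    def __init__(self, c, hidden=64):
        super().__init__()
        n_inv = c + c * c                 # scalars plus pairwise <v_c, v_c'>
        self.phi = nn.Sequential(nn.Linear(n_inv, hidden), nn.SiLU(),
                                 nn.Linear(hidden, c))
        self.psi = nn.Sequential(nn.Linear(n_inv, hidden), nn.SiLU(),
                                 nn.Linear(hidden, c * c))

    def forward(self, s, v):
        c = s.shape[-1]
        gram = torch.einsum('...ci,...di->...cd', v, v)  # SO(3)-invariant
        inv = torch.cat([s, gram.flatten(-2)], dim=-1)
        s_out = self.phi(inv)                            # new scalars
        coef = self.psi(inv).unflatten(-1, (c, c))       # invariant coefficients
        v_out = torch.einsum('...cd,...di->...ci', coef, v)
        return s_out, v_out
```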
### Planning with equivariant diffusion **Planning as diffusion**. A diffusion model trained on offline trajectory data jointly learns a world model and a policy. Following Janner et al. (2022), we use it to solve planning problems by choosing a sequence of actions to maximize the expected task rewards. To do this, we use three features of diffusion models. The first is the ability to sample from them by drawing noisy trajectory data from the base distribution and iteratively denoising it with the learned network, yielding trajectories similar to those in the training set. The second is conditioning: for sampled trajectories to be useful for planning, they need to begin in the current state of the environment, which we achieve by conditioning the sampling process such that the initial state of the generated trajectories matches the current state, in analogy to inpainting. The third is guidance: we can steer the sampling procedure toward solving concrete tasks specified at test time using classifier-based guidance, where a regression model is trained offline to map trajectories to task rewards. **Symmetry breaking**. By construction, our equivariant diffusion model learns an \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-invariant density over trajectories. Unconditional samples will reflect this symmetry property--it will be equally likely to sample a trajectory and its rotated or permuted counterpart. However, concrete tasks will often break this invariance, for instance by requiring that a robot or object is brought into a particular location. Our EDGI approach allows us to elegantly break the symmetry at test time for concrete tasks. Such a soft symmetry breaking can happen through conditioning, for instance by specifying the initial or final state of the sampled trajectories, or through a non-invariant reward model used for guidance during sampling. ## 4 Experiments We demonstrate the effectiveness of incorporating symmetries as a powerful inductive bias in the Diffuser algorithm with experiments in two environments. The first environment is a 3D navigation task, in which an agent needs to navigate a number of obstacles to reach a goal state. Rewards are awarded based on the distance to the goal at each step, with penalties for collisions with obstacles. The position of the obstacles and the goal state are different in each episode and part of the observation. For simplicity, the actions directly control the acceleration of the agent and we use identical spherical obstacles. Please see Fig. 1 for a schematic representation of this task and Appendix B for more details and the reward structure for this task. In our remaining experiments, the agent controls a simulated Kuka robotic arm interacting with four blocks on a table. Following Janner et al. (2022), we consider three different tasks: an unconditional block stacking task, a conditional block stacking task where the stacking order is specified, and a rearrangement problem, in which the stacking order has to be changed in a particular way. For both environments, we generate an offline trajectory dataset of roughly \(10^{5}\) (navigation) or (manipulation) trajectories. We describe the setup in detail in Appendix C. **Algorithms**. We train our EDGI on the offline dataset and use conditional sampling to plan the next actions. For the conditional and rearrangement tasks in the Kuka environment, we also use classifier guidance following Janner et al. (2022). As our main baseline, we compare our results to the (non-equivariant) Diffuser model (Janner et al., 2022). 
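For concreteness, the planning loop combining conditioning and classifier-based guidance can be sketched as follows; this is a schematic illustration with simplified update rules and our own names (`eps_model`, `reward_grad`), not the exact implementation of Janner et al. (2022).

```python
import torch

def guided_plan(eps_model, reward_grad, s0, shape, alphas, alpha_bars, sigmas, lam=1.0):
    """Schematic DDPM-style sampling with inpainting-style conditioning on the
    current state s0 and classifier-based reward guidance.

    alphas, alpha_bars, sigmas: 1D tensors holding the noise schedule.
    """
    tau = torch.randn(shape)                    # sample from the invariant base density
    state_dim = s0.shape[0]
    for i in reversed(range(len(alphas))):
        tau[0, :state_dim] = s0                 # condition: fix the initial state
        eps = eps_model(tau, i)
        eps = eps - lam * sigmas[i] * reward_grad(tau)   # guidance term
        # one (simplified) denoising step
        tau = (tau - (1 - alphas[i]) / (1 - alpha_bars[i]).sqrt() * eps) \
              / alphas[i].sqrt()
        if i > 0:
            tau = tau + sigmas[i] * torch.randn_like(tau)
    tau[0, :state_dim] = s0
    return tau
```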
We also compare two offline RL baselines reported by Janner et al. (2022), BCQ (Fujimoto et al., 2019) and CQL (Kumar et al., 2020). **Task performance**. We report the results on both navigation and object manipulation tasks in Tab. 1. For each environment, we evaluate \(100\) episodes and report the average reward and standard error for each method. In the navigation task, the baseline Diffuser fails to solve the problem, even after substantially increasing the model's capacity compared to the hyperparameters used in Janner et al. (2022). EDGI achieves a substantially better performance. In the Kuka environment, we find that EDGI achieves rewards comparable with the original Diffuser model within the error bars, and both methods clearly outperform the BCQ and CQL baselines. **Sample efficiency**. Next, we study the sample efficiency by training EDGI and Diffuser models on small subsets of the training data. The results in Fig. 3 show that our EDGI model achieves reasonable rewards in both environments even when trained on only \(0.1\%\) of the training data, while the baseline Diffuser struggles in this setting. This provides evidence for the benefits of the inductive bias of equivariant models and matches similar observations in other works on using symmetries in an RL context (van der Pol et al., 2020; Walters et al., 2020; Mondal et al., 2021; Rezaei-Shoshtari et al., 2022; Deac et al., 2023). **Group generalization**. Finally, we demonstrate that equivariance improves generalization across the \(\mathrm{SO}(3)\) symmetry group. In both environments, we train EDGI and Diffuser models on restricted offline datasets in which all trajectories are oriented in a particular way. In particular, in the navigation environment, we only use training data that navigates towards a goal location with \(x=0\). In the robotic manipulation tasks, we only use training trajectories where the red block is in a position with \(x=0\) at the beginning of the episode. We test all agents on the original environment, where they encounter goal positions and block configurations unseen during training. We show results for these experiments in Tab. 1. The original Diffuser performs substantially worse, showing its limited capability to generalize to the new setting. In contrast, the performance of EDGI is robust to this domain shift, confirming that equivariance helps in generalizing across the symmetry group. ## 5 Related work Our work builds on two foundational lines of research: framing planning as a generative modeling problem and equivariant deep learning. The closest work to ours is the original Diffuser paper (Janner et al., 2022), which we used as a baseline. Concurrent to our work, Diffuser was extended by Ajay et al. (2022), who used a separate inverse dynamics model and classifier-free guidance. The key novelty of our work is that we make this approach aware of the symmetry structure of planning problems through a new \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\)-equivariant denoising network yielding an invariant distribution over trajectories while allowing for soft symmetry breaking as dictated by the task. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{4}{c}{**Standard setting**} & \multicolumn{2}{c}{\(\mathrm{SO}(3)\) **generalization**} \\ Environment & BCQ & CQL & Diffuser & EDGI (ours) & Diffuser & EDGI (ours) \\ \hline Navigation & – & – & \(\mathbf{94.9}_{\pm 3.9}\) & \(\mathbf{95.1}_{\pm 3.4}\) & \(5.6_{\pm 4.4}\) & \(\mathbf{83.3}_{\pm 3.5}\) \\ \hline Unconditional & \(0.0\) & \(24.4\) & \(\mathbf{61.3}_{\pm 2.7}\) & \(\mathbf{62.0}_{\pm 2.1}\) & \(39.3_{\pm 2.5}\) & \(\mathbf{59.9}_{\pm 2.4}\) \\ Conditional & \(0.0\) & \(0.0\) & \(\mathbf{52.3}_{\pm 3.5}\) & \(45.8_{\pm 4.3}\) & \(17.7_{\pm 2.3}\) & \(\mathbf{37.9}_{\pm 5.8}\) \\ Rearrangement & \(0.0\) & \(0.0\) & \(\mathbf{54.0}_{\pm 3.5}\) & \(\mathbf{53.0}_{\pm 3.5}\) & \(20.3_{\pm 2.7}\) & \(\mathbf{48.8}_{\pm 3.6}\) \\ Average & \(0.0\) & \(8.1\) & \(\mathbf{55.9}_{\pm 1.9}\) & \(\mathbf{53.6}_{\pm 2.0}\) & \(25.8_{\pm 1.4}\) & \(\mathbf{48.9}_{\pm 2.4}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on navigation tasks and block stacking problems with a Kuka robot. We report normalized cumulative rewards, showing the mean and standard errors over 100 episodes. Results consistent with the best results within the errors are bold. BCQ and CQL results are taken from Janner et al. (2022); for Diffuser, we show our reproduction using their codebase. **Left**: Models trained on the standard datasets. **Right**: \(\mathrm{SO}(3)\) generalization experiments, with training data restricted to specific spatial orientations such that the agent encounters previously unseen states at test time. **Equivariant deep learning.** Baking symmetries into deep learning architectures was first studied for geometric transformations in the work of Cohen and Welling (2016), and for permutations in the DeepSets architecture (Zaheer et al., 2017). Follow-up work on group convolutional networks focused on both spherical geometry (Cohen et al., 2018) and building kernels using irreducible group representations (Cohen and Welling, 2016; Weiler and Cesa, 2019; Cesa et al., 2021). For symmetries of the 3D space--i. e. subgroups of \(\mathrm{E}(3)\)--a dominant paradigm is to use the message passing framework (Gilmer et al., 2017) along with geometric quantities like positions, velocities, and relative angles (Satorras et al., 2021; Schutt et al., 2021; Batatia et al., 2022). **Equivariance in RL.** The role of symmetries has also been explored in reinforcement learning problems with a body of work focusing on symmetries of the joint state-action space of an MDP (van der Pol et al., 2020; Walters et al., 2020; Mondal et al., 2021; Muglich et al., 2022; Wang and Walters, 2022; Wang et al., 2022; Cetin et al., 2022; Rezaei-Shoshtari et al., 2022). More recently, model-based approaches--like our proposed EDGI--have also benefited from increased data efficiency through the use of symmetries of the environment (Deac et al., 2023). **Equivariant generative models**. Early efforts in learning invariant densities using generative models utilized the continuous normalizing flow (CNF) framework. A variety of works imbued symmetries by designing equivariant vector fields (Kohler et al., 2020; Rezende and Mohamed, 2015; Bose and Kobyzev, 2021). As flow-based models enjoy exact density estimation, they are a natural fit for applications in theoretical physics (Boyda et al., 2020; Kanwar et al., 2020) and modeling equivariant densities on manifolds (Katsman et al., 2021). 
Other promising approaches to equivariant generative modeling include equivariant score matching (De Bortoli et al., 2022) and equivariant diffusion models (Hoogeboom et al., 2022; Xu et al., 2022; Igashov et al., 2022). Our proposed EDGI model extends the latter category to the product group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\) and increases flexibility with respect to the data representations. ## 6 Discussion Embodied agents often solve tasks that are structured through the spatial, temporal, or permutation symmetries of our 3D world. Taking this structure into account in the design of planning algorithms can improve sample efficiency and generalization--notorious weaknesses of RL algorithms. We introduced EDGI, an equivariant planning algorithm that operates as conditional sampling in a generative model. The main innovation is a new diffusion model that is equivariant with respect to the symmetry group \(\mathrm{SE}(3)\times\mathbb{Z}\times\mathrm{S}_{n}\) of spatial, temporal, and object permutation symmetries. Beyond this concrete architecture, our work presents a general blueprint for the construction of networks that are equivariant with respect to a product group and support multiple representations in the data. Integrating this equivariant diffusion model into a planning algorithm allows us to model an invariant base density, but still solve non-invariant tasks through task-specific soft symmetry breaking. We demonstrated the performance, sample efficiency, and robustness of EDGI on object manipulation and navigation tasks. While our work shows encouraging results, training and planning are currently expensive. Progress on this issue can come both from more efficient layers in the architecture of the denoising model and from switching to recent continuous-time diffusion methods with accelerated sampling.
Figure 3: Average reward as a function of training dataset size for EDGI and Diffuser. **Left**: navigation environment. **Right**: Kuka object manipulation, averaged over the three tasks.
## Acknowledgements We would like to thank Gabriele Cesa, Daniel Dijkman, and Pietro Mazzaglia for helpful discussions.
2302.05245
Count-min sketch with variable number of hash functions: an experimental study
Conservative Count-Min, an improved version of Count-Min sketch [Cormode, Muthukrishnan 2005], is an online-maintained hashing-based data structure summarizing element frequency information without storing elements themselves. Although several works attempted to analyze the error that can be made by Count-Min, the behavior of this data structure remains poorly understood. In [Fusy, Kucherov 2022], we demonstrated that under the uniform distribution of input elements, the error of conservative Count-Min follows two distinct regimes depending on its load factor. In this work, we provide a series of experimental results providing new insights into the behavior of conservative Count-Min. Our contributions can be seen as twofold. On one hand, we provide a detailed experimental analysis of the behavior of Count-Min sketch in different regimes and under several representative probability distributions of input elements. On the other hand, we demonstrate improvements that can be made by assigning a variable number of hash functions to different elements. This includes, in particular, reduced space of the data structure while still supporting a small error.
Éric Fusy, Gregory Kucherov
2023-02-10T13:49:57Z
http://arxiv.org/abs/2302.05245v2
# Count-min sketch with variable number of hash functions: an experimental study ###### Abstract Conservative Count-Min, an improved version of Count-Min sketch [Cormode, Muthukrishnan 2005], is an online-maintained hashing-based data structure summarizing element frequency information without storing elements themselves. Although several works attempted to analyze the error that can be made by Count-Min, the behavior of this data structure remains poorly understood. In [Fusy, Kucherov 2022], we demonstrated that under the uniform distribution of input elements, the error of conservative Count-Min follows two distinct regimes depending on its load factor. In this work, we provide a series of experimental results providing new insights into the behavior of conservative Count-Min. Our contributions can be seen as twofold. On one hand, we provide a detailed experimental analysis of the behavior of Count-Min sketch in different regimes and under several representative probability distributions of input elements. On the other hand, we demonstrate improvements that can be made by assigning a variable number of hash functions to different elements. This includes, in particular, reduced space of the data structure while still supporting a small error. ## 1 Introduction Regarded from the most general viewpoint, _Count-Min sketch_ is a data structure for representing an associative array of numbers indexed by elements (keys) drawn from a large universe, where the array is input through a stream of (key,value) updates so that the current value associated to a key is the sum of all previous updates of this key. Perhaps the most common setting for applying Count-Min, which we focus on in this paper, is the _counting_ setting where all update values are \(+1\). In this case, the value of a key is its _count_ telling how many times this key has appeared in the stream. In other words, Count-Min can be seen as representing a _multiset_, that is a mapping of a subset of keys to non-negative integers. With this latter interpretation in mind, each update will be called _insertion_. The main supported query of Count-Min is a query of the count of a given key, and the returned estimate may not be exact, but can only overestimate the true count. The counting setting occurs in practical scenarios such as network traffic monitoring, as well as in other applications related to data stream mining [18]. It also occurs in bioinformatics [25, 2, 29], where the input can only be scanned online, due to its large size. More examples of applications are given in [17]. Count-Min relies on hash functions but, unlike classic hash tables, does not store elements but only count information (hence the term _sketch_). It was proposed in [13]; however, a very similar data structure was proposed earlier in [10] under the name _Spectral Bloom filter_. The latter, in turn, is closely related to _Counting Bloom filters_[20]. In this work, we adopt the definition of [10] but still call it Count-Min to be consistent with the name commonly adopted in the literature. A survey on Count-Min can be found e.g. in [11]. In this paper, we study a stronger version of Count-Min called _conservative_. This modification of Count-Min was introduced in [18] under the name _conservative update_; see [11]. It was also discussed in [10] under the name _minimal increase_. Conservative Count-Min provides strictly tighter count estimates using the same memory and thus strictly outperforms the original version. 
The price to pay is the impossibility of handling deletions (negative updates), whereas the original Count-Min can handle deletions as well, provided that the cumulative counts remain non-negative (condition known as _strict turnstile model_[23]). Analysis of error of conservative Count-Min is a difficult problem having direct consequences on practical applications. Below in Section 2.2 we survey known related results in more detail. In our previous work [21], we approached this problem through the relationship with _random hypergraphs_. We proved, in particular, that if the elements represented in the data structure are uniformly distributed in the input, the error follows two different regimes depending on the _peelability_ property of the underlying _hash hypergraph_. While properties of random hypergraphs have been known to be crucially related to some data structures (see Section 2.3), this had not been known for Count-Min. Starting from these results, in this paper we extend and strengthen this analysis in several ways, providing experimental demonstrations in support of our claims. Our first goal is to provide a fine analysis of the "anatomy" of conservative Count-Min, describing its behavior in different regimes. Our main novel contribution is the demonstration that assigning a different number of hash functions to different elements can significantly reduce the error and, as a consequence, lead to memory savings. Another major extension concerns the probability distribution of input elements: here we study non-uniform distributions as well, in particular step distribution and Zipf's distribution, and analyze the behavior of Count-Min for these distributions. This analysis is important not only because non-uniform distributions commonly occur in practice, but also because this provides important insights for the major application of Count-Min: detection of most frequent elements (sometimes called _heavy hitters_[23, 8, 12]). In particular, we consider the "small memory regime" (_supercritical_, in our terminology) when the number of distinct represented elements is considerably larger than the size of the data structure, and analyze conditions under which most frequent elements are evaluated with negligible error. This has direct applications to the frequent elements problem. To conclude the introduction, we note that the experimental character of our analysis does not restrict the generality of our results, which hold for a wide range of parameters. This follows from the general nature of tested hypotheses, as well as from theoretical justifications based on previous works. ## 2 Background and related work ### Conservative Count-Min: definitions A Count-Min sketch is a counter array \(A\) of size \(n\) together with a set of hash functions mapping elements (keys) of a given universe \(U\) to \([1..n]\). In this work, each element \(e\in U\) can in general be assigned a different number \(k_{e}\) of hash functions. Hash functions are assumed fully random, therefore we assume w.l.o.g. that an element \(e\) is assigned hash functions \(h_{1},\ldots,h_{k_{e}}\). At initialization, counters \(A[i]\) are set to \(0\). When processing an insertion of an input element \(e\), basic Count-Min increments by \(1\) each counter \(A[h_{i}(e)]\), \(1\leq i\leq k_{e}\). The conservative version of Count-Min increments by \(1\) only the smallest of all \(A[h_{i}(e)]\). 
That is, \(A[h_{i}(e)]\) is incremented by \(1\) if and only if \(A[h_{i}(e)]=\min_{1\leq j\leq k_{e}}\{A[h_{j}(e)]\}\) and is left unchanged otherwise. In both versions, the _estimate_ of the number of occurrences of a queried element \(e\) is computed by \(c(e)=\min_{1\leq i\leq k_{e}}\{A[h_{i}(e)]\}\). It is easily seen that for any input sequence of elements, the estimate computed by original Count-Min is greater than or equal to the one computed by the conservative version. In this work, we study the conservative version of Count-Min. Let \(H\) denote a selection of hash functions \(H=\{h_{1},h_{2},\ldots\}\). Consider an input sequence \(I\) of insertions of length \(N\) and let \(E\) be the set of distinct elements appearing in \(I\). The _relative error_ of an element \(e\) is defined by \(\operatorname{err}_{H,I}(e)=(c(e)-o(e))/o(e)\), where \(o(e)\) is the number of occurrences of \(e\) in the input. The _combined error_ is an average error over all elements in \(I\) weighted by the number of occurrences, i.e. \[\operatorname{err}_{H,I}=\frac{1}{N}\sum_{e\in E}o(e)\cdot\operatorname{err}_{H,I}(e)=\frac{1}{N}\sum_{e\in E}(c(e)-o(e)).\] We assume that \(I\) is an i.i.d. random sequence drawn from a probability distribution on a set of elements \(E\subseteq U\). A key parameter is the size of \(E\) relative to the size \(n\) of \(A\). By analogy to hash tables, \(\lambda=|E|/n\) is called the _load factor_, or simply the _load_. ### Analysis of conservative Count-Min: prior works Motivated by applications to traffic monitoring, [6] was probably the first work devoted to the analysis of conservative Count-Min in the counting setting. Their model assumed that all \(\binom{n}{k}\) counter combinations are equally likely, where \(k\) hash functions are applied to each element. This corresponds to the regime where \(|E|\gg n\). The focus of [6] was on the analysis of the _growth rate_ of counters, i.e. the average number of counter increments per insertion, using a technique based on Markov chains and differential equations. We will come back to the results of [6] later on in our work. Another approach proposed in [17] is based on a simulation of a conservative Count-Min sketch by a hierarchy of ordinary Bloom filters. The obtained error bounds are not explicit but are expressed via a recursive relation based on false positive rates of corresponding Bloom filters. Recent works [4, 3] propose an analytical approach for computing error bounds that depend on element probabilities, assumed independent but not necessarily uniform, in particular leading to improved precision bounds for detecting heavy hitters. However, the efficiency of this technique is more limited when all element probabilities are small. In particular, if the input distribution is uniform, their approach does not bring out any improvement over the general bounds known for original Count-Min. In our recent work [21], we proposed an analysis of conservative Count-Min based on its relationship with random hypergraphs. We summarize the main results of this work below in Section 2.4. ### Hash hypergraph Many hashing-based data structures are naturally associated with hash hypergraphs so that hypergraph properties are directly related to the proper functioning of the data structure. This is the case with Cuckoo hash tables [27] and Cuckoo filters [19], Minimal Perfect Hash Functions and Static Functions [24], Invertible Bloom Lookup Tables [22], and some others. 
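Before turning to these connections, a minimal sketch of the data structure itself may help. The code below is our illustration: fully random hash functions are simulated by drawing, once per element, a random set of counters, which is exactly the element's edge in the hash hypergraph discussed next.

```python
import random

class ConservativeCountMin:
    def __init__(self, n, k_for_element, seed=0):
        self.A = [0] * n              # the counter array
        self.k_for = k_for_element    # element -> number k_e of hash functions
        self.rng = random.Random(seed)
        self.edges = {}               # element -> its edge {h_1(e), ..., h_ke(e)}

    def _edge(self, e):
        # Simulate fully random hash functions: draw k_e counters once per element.
        if e not in self.edges:
            k = self.k_for(e)
            self.edges[e] = [self.rng.randrange(len(self.A)) for _ in range(k)]
        return self.edges[e]

    def insert(self, e):
        edge = self._edge(e)
        m = min(self.A[i] for i in edge)
        for i in edge:                # conservative update: only minimal counters grow
            if self.A[i] == m:
                self.A[i] += 1

    def estimate(self, e):
        return min(self.A[i] for i in self._edge(e))
```

With `k_for_element = lambda e: 3` this is the usual 3-uniform setting; variable assignments of \(k_{e}\) are the subject of Section 3.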
[30] provides an extended study of relationships between properties of hash hypergraphs and some of those data structures. A Count-Min sketch is associated with a _hash hypergraph_\(H=(V,E)\) where \(V=\{1..n\}\) and \(E=\{\{h_{1}(e),...h_{k_{e}}(e)\}\}\) for all distinct input elements \(e\). We use notation \(\mathcal{H}_{n,m}\) for hypergraphs with \(n\) vertices and \(m\) edges, and \(\mathcal{H}_{n,m}^{k}\) for \(k\)-uniform such hypergraphs, where all edges have cardinality \(k\). In the latter case, since our hash functions are assumed fully random, a hash hypergraph is a \(k\)-uniform Erdos-Renyi random hypergraph. As inserted elements are assumed to be drawn from a random distribution, it is convenient to look at the functioning of a Count-Min sketch as a stochastic process on the associated hash hypergraph [21]. Each vertex holds a counter initially set to zero and therefore each edge is associated with a set of counters held by corresponding vertices. Inserting an element consists in incrementing the minimal counters of the corresponding edge, and retrieving the estimate of an element returns the minimum value among the counters of the corresponding edge. From now on in our presentation, we will interchangeably speak of distinct elements and edges of the associated hash hypergraph, as well as of counters and vertices. Thus, we will call the _vertex value_ the value of the corresponding counter, and the _edge value_ the estimate of the corresponding element. Also, we will speak about the _load_ of a hypergraph understood as the ratio \(|E|/|V|\). ### Hypergraph peelability and phase transition of error Let \(H=(V,E)\) be a hypergraph. Then \(H\) is called _peelable_ if iterating the following step starting from \(H\) results in the empty graph: if the graph has a vertex of degree 1 or 0, delete this vertex together with the incident edge (if any). Like many other properties of random hypergraphs, peelability undergoes a phase transition. Consider the Erdos-Renyi \(k\)-uniform hypergraph model where graphs are drawn from \(\mathcal{H}_{n,m}^{k}\) uniformly at random. It is shown in [26] that a phase transition occurs at a (computable) peelability threshold \(\lambda_{k}\): a random graph from \(\mathcal{H}_{n,\lambda n}^{k}\) is with high probability (w.h.p.) peelable if \(\lambda<\lambda_{k}\), and w.h.p. non-peelable if \(\lambda>\lambda_{k}\). The first values are \(\lambda_{2}=0.5\), \(\lambda_{3}\approx 0.818\), \(\lambda_{4}\approx 0.772\), etc., \(\lambda_{3}\) being the largest. Note that the case \(k=2\) is special: below the threshold, only the weaker property holds that a negligible fraction of vertices remains after peeling. Peelability is known to be directly relevant to certain constructions of Minimal Perfect Hash Functions [24] as well as to the good functioning of Invertible Bloom filters [22]. In [21], we showed that _under uniform distribution of elements_, the error of Count-Min follows two regimes depending on the load factor which determines the peelability of the underlying hash hypergraph. When the load factor is below the peelability threshold, the relative error made by Count-Min is w.h.p. \(o(1)\) when both the sketch size and the input size grow. On the other hand, when the load is larger than the peelability threshold, the relative error is w.h.p. \(\Theta(1)\) with a value depending on the ratio itself. We will call these two regimes _subcritical_ and _supercritical_, respectively. 
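For completeness, a compact sketch of the peeling procedure (our code): it decides whether a hypergraph on `n` vertices, given as a list of edges (lists of vertices), can be fully peeled.

```python
def peelable(n, edges):
    incident = [set() for _ in range(n)]
    for j, edge in enumerate(edges):
        for v in edge:
            incident[v].add(j)
    alive = set(range(len(edges)))
    stack = [v for v in range(n) if len(incident[v]) <= 1]
    while stack:
        v = stack.pop()
        for j in list(incident[v]):   # v has at most one live incident edge
            alive.discard(j)
            for u in edges[j]:        # removing the edge lowers other degrees
                incident[u].discard(j)
                if len(incident[u]) == 1:
                    stack.append(u)
    return not alive                  # peelable iff no edge survives

# Example: a random 3-uniform hypergraph at load lambda = m/n.
import random
rng = random.Random(1)
n, lam = 100000, 0.8
edges = [[rng.randrange(n) for _ in range(3)] for _ in range(int(lam * n))]
print(peelable(n, edges))             # w.h.p. True below ~0.818, False above
```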
### Assigning variable number of hash functions: mixed hypergraphs The best peelability threshold \(\lambda_{3}\approx 0.818\) can be improved in at least two different ways. One way is to use a carefully defined class of hash functions, which amounts to replacing uniform sampling of \(k\)-edges by a specific non-uniform sampling. Thus, [16] showed that the peelability threshold can be increased to \(\approx 0.918\) for \(k=3\) and up to \(\approx 0.999\) for larger \(k\)'s if a special class of hypergraphs is used. Another somewhat surprising idea, which we apply in this paper, is to allow a different number of hash functions for different elements, that is, to consider non-uniform hypergraphs. Following [15], Rink [28] showed that _mixed hypergraphs_ with two types of edges of different cardinalities, each constituting a constant fraction of all edges, may have a larger peelability threshold than uniform ones: for example, hypergraphs with a fraction of \(\approx 0.887\) of edges of cardinality \(3\) and the remaining edges of cardinality \(21\) have the peelability threshold \(\approx 0.920\), larger than the best threshold \(0.818\) achieved by uniform hypergraphs. We adopt the notation of [28] for mixed hypergraphs: by writing \(k=(k_{1},k_{2})\) we express that the hypergraph contains edges of cardinality \(k_{1}\) and \(k_{2}\), and \(k=(k_{1},k_{2};\alpha)\) specifies in addition that the fraction of \(k_{1}\)-edges is \(\alpha\). The idea of using different number of hash functions for different elements has also appeared in data structures design. [7] proposed _weighted Bloom filters_ which apply a different number of hash functions depending on the frequency with which elements are queried and on probabilities for elements to belong to the set. The idea is to assign more hash functions to elements with higher query frequency and to those with smaller occurrence probability. It is shown that this leads to a reduced false positive probability, where the latter is defined to be weighted by query frequencies. This idea was further refined in [31], and then again in [5], under the name _Daisy Bloom filter_. ## 3 Results ### Uniform distribution We start with the case where input elements are uniformly distributed, i.e. edges of the associated hash hypergraph have equal probabilities to be processed for updates. #### 3.1.1 Subcritical regime In [21], we showed that the error made by Count-Min sketch follows two regimes depending on peelability of the underlying hash hypergraph. The latter is determined, with high probability, by whether its load \(\lambda\) is smaller or larger than the peelability threshold. On the other hand, as shown in [28], the peelability threshold can be larger for mixed hypergraphs where edges have different cardinalities, as opposed to uniform hypergraphs. This suggests that, by using a different number of hash functions for different elements, one could "extend" the regime of \(o(1)\) error of the Count-Min sketch. Figure 1 confirms this expectation. It shows the average relative error as a function of the load factor for three types of hypergraphs: 2-uniform, 3-uniform and a mixed hypergraph in which a \(0.885\) fraction of edges have cardinality \(3\) and the remaining ones have cardinality \(14\). 
Figure 1 confirms this assumption. It shows the average relative error as a function of the load factor for three types of hypergraphs: 2-uniform, 3-uniform, and a mixed hypergraph where a \(0.885\) fraction of edges are of cardinality \(3\) and the remaining ones are of cardinality \(14\). The curves for 2-uniform and 3-uniform hypergraphs show phase transitions at load factors approaching \(0.5\) and \(\approx 0.818\), the respective peelability thresholds. It is clearly seen that the phase transition for the mixed hypergraphs occurs at a larger value, approaching \(\approx 0.898\), which is the peelability threshold for this class of hypergraphs [28]. While this result follows by combining results of [28] and [21], it has not been observed earlier and has an important practical consequence: _using a varying number of hash functions in Count-Min sketch allows one to increase the load factor while keeping negligibly small error_. In particular, for the same input, this leads to space savings compared to the uniform case. Note that the parameters \(k=(3,14;0.885)\) are borrowed from [28] in order to make sure that the phase transition corresponds to the peelability threshold obtained in [28]. In practice, "simpler" parameters can be chosen; for example, we found that \(k=(2,5;0.5)\) produces essentially the same curve as \(k=(3,14;0.885)\).

Figure 1: \(\mathrm{err}_{H,I}\) for small \(\lambda=m/n\), for uniform distribution and different types of hypergraphs: 2-uniform, 3-uniform and (3,14)-mixed with a fraction of \(0.885\) of 3-edges (parameters borrowed from [28]). Data obtained for \(n=1000\). The input size in each experiment is \(5,000\) times the number of edges. Each average is taken over 10 random hypergraphs.

#### 3.1.2 Supercritical regime

For small loads (subcritical regime), the best edge cardinality for the uniform case is \(k=3\), in that it has the largest phase transition value \(\lambda_{3}\approx 0.818\) and therefore supports a vanishing error with the smallest possible space. As shown above, this space can be further reduced by using a varying number of hash functions for different elements.

When the load factor becomes large (supercritical regime), the situation changes drastically. When the load factor just surpasses the threshold, some edges are still evaluated with small or zero error, whereas for the other edges the error becomes large. This "intermediate regime" has been illustrated in [21]. Interestingly, edge values are distributed in this regime in a very peculiar way, concentrating around several values (see Figure 3 in [21] for illustration). These values must be explained by some graph structural patterns which remain to be elucidated.

When the load factor grows even larger, the multi-level pattern of edge values disappears and all edge values become concentrated around the same value. We call this phenomenon _saturation_. For example, for \(k=3\), saturation occurs at around \(\lambda=6\). Under this regime, the hash hypergraph is dense enough so that its specific topology is likely to be irrelevant and the largest counter level "percolates" into all vertex counters. In other words, all counters grow at the same rate, without any of them "lagging behind" because of particular graph structural patterns (such as edges containing leaf vertices).

Bianchi et al. [6] did their analysis under the assumption that each of the \(\binom{n}{k}\) edges is equally likely to be processed at each step. This emulates the situation where the load factor is very large and the hypergraph is saturated. The focus of [6] is on the _growth rate_, which is the expected number of counter increments when processing an edge.
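The growth rate is easy to estimate empirically by instrumenting the `ConservativeCountMin` sketch above (a rough simulation in the spirit of [6], not their Markov-chain analysis; names and parameter choices are ours):

```python
def empirical_growth_rate(n=1000, k=3, lam=10.0, steps=50_000, seed=1):
    """Average number of counters incremented per insertion, measured on a
    uniform stream over m = lam * n distinct elements (saturated regime)."""
    import random
    rng = random.Random(seed)
    cm = ConservativeCountMin(n, k, seed=seed)
    elements = range(int(lam * n))
    increments = 0
    for _ in range(steps):
        e = rng.choice(elements)
        edge = cm._edge(e)
        low = min(cm.counters[v] for v in edge)
        # Counters currently at the minimum are exactly those insert() will bump.
        increments += sum(1 for v in edge if cm.counters[v] == low)
        cm.insert(e)
    return increments / steps
```

For \(k=3\) and large \(\lambda\), the returned value should lie strictly between \(1\) and \(3\).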
Obviously, this number varies between \(1\) and \(k\), and [6] establishes that larger values of \(k\) imply larger growth rates. Note that the growth rate determines the slope of the linear dependence of \(\operatorname{err}_{H,I}\) on \(\lambda\).

The case \(k=1\) has not been considered in [6]. In this case, the growth rate is trivially \(1\), as each insertion increments exactly one counter. Furthermore, \(\operatorname{err}_{H,I}\) can be easily inferred in this case, as the error of a given key is determined by the number of other keys hashed to the same counter. The number of such keys is approximated by the Poisson distribution with parameter \(\frac{m}{n}=\lambda\), with expected value \(\lambda\). Since keys are uniformly distributed, \(\operatorname{err}_{H,I}=\lambda\) is a good estimator of the average relative error. However, it is non-trivial to see how this compares to the error for \(k=2\) in the case of moderate load factors. Figure 2 clarifies the situation. It shows that \(k=1\) produces a larger error up to a certain load factor, beyond which it drops below the error for \(k=2\). This "intermediate regime" roughly corresponds to the \(n\ln n\) coupon collector bound, i.e. to the regime where a significant number of counters remain zero. In this case, even if the total sum of counters is smaller for \(k=1\) than for \(k=2\), it is more evenly distributed in the latter case, similar to the well-known _power of choice_ phenomenon in resource allocation [1]. Since empty counters are irrelevant for the average relative error, this results in a smaller error for \(k=2\) compared to \(k=1\). Note that the configuration \(k=1\) does not benefit from the advantage of the conservative update strategy over the regular Count-Min (with a single counter per element, the two coincide) and is of limited practical interest. It is however interesting to observe that \(k=2\) still outperforms \(k=1\) for moderate \(\lambda\).

#### 3.1.3 Mixed hypergraphs

We have just seen that for sufficiently large loads, the configuration \(k=1\) results in the smallest error. Our next question is whether using a varying number of hash functions (mixed hypergraphs) can make the error even smaller, by analogy to the subcritical regime where it extends the range of load values supporting \(o(1)\) error. At first glance, this question is not relevant, as the configuration \(k=1\) obviously has the smallest possible sum of counters, since every insertion increments at least one counter, and therefore \(k=1\) seems to yield the smallest possible estimates. This argument, however, applies only to the regime where the hypergraph is saturated (all counters are hit) and \(\lambda\) is large enough so that counters are concentrated. Perhaps surprisingly, it turns out that the error produced by \(k=1\) can be improved upon for a large interval of \(\lambda\), before \(\lambda\) reaches the saturation point, by using a varying number of hash functions. Figure 3 illustrates this phenomenon. It compares the error produced by the uniform case \(k=1\) and a mixed configuration with \(80\%\) of edges of cardinality \(1\) and \(20\%\) of edges of cardinality \(3\). The latter produces a smaller error for values of \(\lambda\) up to about \(50\).
Similarly to the previous section, this is explained by the _power of choice_ effect: 3-edges "smooth out" counter values, making the combined error smaller, which in the not-yet-asymptotic regime (finite \(n\)) slightly outweighs the fact that the presence of 3-edges makes the sum of counter values larger (as some insertions increment more than one counter).

Figure 2: \(\mathrm{err}_{H,I}\) as a function of \(\lambda=m/n\), for uniform distribution and supercritical regime.

### Step distribution

The analysis of the behavior of conservative Count-Min under uniform distribution of input elements shows that in the supercritical regime, the error made by the sketch grows linearly with the load factor. This implies a limited practical utility of the sketch in this regime. On the other hand, in many practical situations, input elements do not occur with the same frequency. This motivates the application of Count-Min to non-uniform distributions and, in particular, to the detection and analysis of frequent elements in the input stream. One popular problem here is the computation of _heavy hitters_, to which Count-Min sketches have previously been applied [13].

In this section, we focus on the simplest non-uniform distribution, the _step distribution_, in order to examine the behavior of Count-Min sketch in the presence of elements with different frequencies. Our model is as follows. We assume that input elements are classified into two groups that we call _hot_ and _cold_, where a hot element has a larger appearance probability than a cold one. Note that we assume prior knowledge of whether a given element is hot or cold. This setting is similar to the one studied for Bloom filters augmented with prior membership and query probabilities [5]. Seen in this context, our definition of \(\mathrm{err}_{H,I}\) assumes that the query probability of an element and its appearance probability in the input are equal.

We assume that the load factors of hot and cold elements are \(\lambda_{h}\) and \(\lambda_{c}\) respectively; that is, there are \(\lambda_{h}n\) hot and \(\lambda_{c}n\) cold edges in the hash hypergraph. The _gap factor_ \(G>1\) denotes the ratio between the probabilities of a hot and a cold element. Let \(p_{h}\) (resp. \(p_{c}\)) denote the probability for an input element to be hot (resp. cold). Then \(p_{h}/p_{c}=G\lambda_{h}/\lambda_{c}\), and since \(p_{h}+p_{c}=1\), we have

\[p_{h}=\frac{G\lambda_{h}}{\lambda_{c}+G\lambda_{h}},\ \ p_{c}=\frac{\lambda_{c}}{\lambda_{c}+G\lambda_{h}}.\]

For example, if there are 10 times more distinct cold elements than hot ones (\(\lambda_{h}/\lambda_{c}=0.1\)) but each hot element is 10 times more frequent than a cold one (\(G=10\)), then hot and cold elements occupy equal fractions of the input (\(p_{h}=p_{c}=0.5\)).

In the rest of this section, we will be interested in the combined error of hot elements alone, denoted \(\mathrm{errhot}_{H,I}\). If \(E_{h}\subseteq E\) is the subset of hot elements, and \(N_{h}\) is the total number of occurrences of hot elements in the input, then \(\mathrm{errhot}_{H,I}\) is defined by

\[\mathrm{errhot}_{H,I}=\frac{1}{N_{h}}\sum_{e\in E_{h}}o(e)\cdot\mathrm{err}(e)=\frac{1}{N_{h}}\sum_{e\in E_{h}}(c(e)-o(e)).\]
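In code, the class probabilities \(p_{h},p_{c}\) above read as follows (a trivial helper with our own naming; the assertion reproduces the worked example):

```python
def step_probabilities(lam_h, lam_c, G):
    """Probabilities that a stream element is hot / cold, for hot and cold
    loads lam_h, lam_c and gap factor G."""
    p_h = G * lam_h / (lam_c + G * lam_h)
    return p_h, 1.0 - p_h

assert step_probabilities(0.1, 1.0, 10) == (0.5, 0.5)   # lam_h/lam_c = 0.1, G = 10
```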
#### 3.2.1 "Interaction" of hot and cold elements

A partition of elements into hot and cold induces a partition of the underlying hash hypergraph into two subgraphs that we call the _hot_ and _cold subgraphs_, respectively. Since hot elements have larger counts, one might speculate that counters associated with hot edges are larger than the counts of cold elements and are therefore not incremented by the latter. Then, \(\mathrm{errhot}_{H,I}\) would be entirely defined by the hot subgraph, considered under the uniform distribution of hot elements in the input. In particular, \(\mathrm{errhot}_{H,I}\) as a function of \(\lambda_{h}\) should behave the same way as \(\mathrm{err}_{H,I}\) for the uniform distribution (see Section 3.1).

This conjecture, however, is not true in general. One reason is that there is a positive probability that all nodes of a cold edge are incident to hot edges as well. As a consequence, "hot counters" (i.e. those incident to hot edges) gain additional increments due to cold edges, and the latter contribute to the overestimation of hot edge counts. Figure 4a illustrates this point. It shows, for \(k=3\), \(\mathrm{errhot}_{H,I}\) as a function of \(\lambda_{h}\) in the presence of cold elements with \(\lambda_{c}=5\), for the gap value \(G=20\). For the purpose of comparison, the orange curve shows the error for the uniform distribution (as in Figure 1), that is, the error that hot elements would have if cold elements were not there. We clearly observe the contribution of cold elements to the error, even in the load interval below the peelability threshold. Figure 4b illustrates that when the gap becomes larger (here, \(G=50\)), the contribution of cold elements diminishes and the curve approaches the one of the uniform distribution. A larger gap leads to larger counter values for hot elements and, as a consequence, to a smaller relative impact of cold ones.

Figure 3: \(\mathrm{err}_{H,I}\) as a function of \(\lambda\), for uniform distribution and supercritical regime. Uniform configuration \(k=1\) (blue curve) vs. mixed configuration \(k=(1,3;0.8)\) (orange curve). For clarity, the difference between the former and the latter is shown in the right plot.

Another reason for which the above conjecture may not hold is the following: even if the number of hot elements is very small, when the gap factor is not large enough, the cold edges may cause the counters to become large, provided that \(\lambda_{c}\) is large enough, in particular in the saturation regime described in Section 3.1.2. As a consequence, the "background level" of counters created by cold edges may be larger than the true counts of hot edges, causing the latter to be overestimated. As an example, consider again the configuration with \(k=3\) and \(\lambda_{c}=5\). From Figure 2 it follows that the cold elements taken alone have an error of about 6 on average (\(\approx 6.25\), to be precise), which means an about \(7\times\) overestimate. Since the graph is saturated in this regime (see Section 3.1.2), most of the counters will be about 7 times larger than the counts of cold edges. Now, if a hot element is only 5 times more frequent than a cold one, hot elements will be about \(1.4\times\) overestimated, i.e. will have an error of about 0.4. This situation is illustrated in Figure 4c.

Figure 4: \(\mathrm{errhot}_{H,I}\) for \(k=3\) depending on \(\lambda_{h}\), in presence of cold elements with \(\lambda_{c}=5\) (blue curves) and without any cold elements (orange curve).

#### 3.2.2 Mixed hypergraphs

The analysis above shows that in the presence of a "background" formed by a large number of cold elements, the error of hot elements starts growing at much smaller load factors than without cold elements, even if the latter are much less frequent than the former.
Inspired by the results of Section 3.1.3, one may ask if the interval of negligible error can be extended by employing the idea of a variable number of hash functions. Note that here this idea applies more naturally, by assigning different numbers of hash functions to hot and cold elements. Figure 5 illustrates that this is indeed possible by assigning a smaller number of hash functions to hot elements and a larger number to cold ones. It is clearly seen that the interval supporting close-to-zero errors is extended. This happens because when the hot subgraph is not too dense, increasing the cardinality of cold edges leads to a higher probability that at least one of the vertices of such an edge is not incident to a hot edge. As a consequence, this element does not affect the error of hot edges. For the same reason, decreasing the cardinality of hot edges (here, from 3 to 2) improves the error, as this increases the fraction of vertices non-incident to hot edges.

#### 3.2.3 Saturation in supercritical regime

In Section 3.1.2 we discussed the saturation regime occurring for large load values: when the load grows sufficiently large, i.e. the hash hypergraph becomes sufficiently dense, all counters reach the same level, erasing distinctions between edges. In this regime, assuming a fixed load (graph density) and the uniform distribution of input, the edge value depends only on the input size and not on the graph structure (with high probability). It is in this context that Bianchi et al. [6] studied the growth rate of edge values depending on input size.

It is an interesting, natural, and practically important question whether this saturation phenomenon holds for non-uniform distributions as well, as it is directly related to the capacity of distinguishing elements of different frequency. A full and precise answer to this question is not within the scope of this work. We believe that the answer is positive at least when the distribution is piecewise uniform, i.e. when edges are partitioned into several classes and are equiprobable within each class, provided that each class takes a linear fraction of all elements. Here we illustrate this thesis with the step distribution.

Figure 6 illustrates the saturation phenomenon by showing average values of hot and cold edges (\(G=10\)) for three different configurations: 2-uniform, 3-uniform, and (2,5)-mixed. Note that the \(x\)-axis here shows the total load \(\lambda=\lambda_{h}+\lambda_{c}\), where \(\lambda_{h}=0.1\cdot\lambda\) and \(\lambda_{c}=0.9\cdot\lambda\); that is, the numbers of both hot and cold edges grow linearly with the total number of edges. One can observe that in all configurations, the values of hot and cold edges converge, which is a demonstration of the saturation phenomenon. Interestingly, the "convergence speed" heavily depends on the configuration: the convergence is "slower" for uniform configurations, whereas in the mixed configuration it occurs right after the small-error regime for hot edges.

Bianchi et al. [6] observed that in the case of a non-uniform distribution of input elements, the level of counter values in the saturated regime is upper-bounded by the level of counter values when the input distribution is modeled by a uniform choice among all \(\binom{n}{k}\) possible edges. This latter level is computed in [6] using a method based on Markov chains and differential equations. We believe that this method can be extended to the case of mixed graphs as well and leave it for future work.
Evaluating this level is important, as it determines the threshold such that elements whose frequency exceeds it are evaluated with negligible error. We will expand on this in the next section with the case of Zipf's distribution.

### Zipf's distribution

Power-law distributions are omnipresent in practical applications. The simplest of those is Zipf's distribution, which is often used as a test case for different algorithms, including Count-Min sketches [14, 6, 17, 9, 4]. Under Zipf's distribution, element probabilities in descending order are proportional to \(1/i^{\beta}\), where \(i\) is the rank of the key and \(\beta\geq 0\) is the _skewness_ parameter. Note that for \(\beta=0\), Zipf's distribution reduces to the uniform one. Zipf's distribution is an important test case for our study as well, as it forces several (few) most frequent elements to have very large counts and a large number of elements (the _heavy tail_) to have small counts whose values decrease only polynomially in the element rank and are therefore of the same order of magnitude.

Bianchi et al. [6, Fig. 1] observed that for Zipf's distribution in the supercritical regime, the estimates follow a "waterfall-type behavior": the most frequent elements have essentially exact estimates, whereas the other elements all have about the same estimate regardless of their frequency. Figure 7 illustrates this phenomenon for different skewness values.

Figure 7: Exact (blue) and estimated (orange) edge values for Zipf's distribution as a function of the element frequency rank, plotted in double log scale. All plots obtained for \(n=1000\), \(\lambda=5\), \(k=2\), and the input size \(50\cdot 10^{6}\). Estimates are averaged over 10 hash function draws.

The waterfall-type behavior for Zipf's distribution is well explained by the analysis we developed in the previous sections. The "waterfall pool level" of values (called the _error floor_ in [6]) is the effect of saturation formed by the heavy-tail elements. The "exceptionally frequent" elements are too few to affect the saturation level (their number is \(\ll n\)); they turn out to constitute "peaks" above the level and are thus estimated without error. Naturally, smaller skewness values make the distribution less steep and reduce the number of "exceptionally frequent" elements. For example, according to Figure 7, for \(\lambda=5\) and \(k=2\), about 50 most frequent elements are evaluated without error for \(\beta=0.7\), about 40 for \(\beta=0.5\), and only 5 for \(\beta=0.3\).

Following our results from the previous sections, we studied whether a varying number of hash functions can extend the range of frequent elements estimated with small error. We found that for moderate loads \(\lambda\), this is indeed possible. Figure 8 illustrates this for \(\lambda=2\). It shows a "zoom" around the "break point" (see Figure 7) for \(k=2\) vs. \(k=(2,5;0.2)\). For clarity, plots are shown in regular scale and for elements of rank \(20\) to \(120\) only. The figure demonstrates that the case \(k=(2,5;0.2)\) provides a sharper break (see also Figure 6). As a result, even if the saturation level is higher for \(k=(2,5;0.2)\) than for \(k=2\), about \(70\) most frequent elements are evaluated with small error with \(k=(2,5;0.2)\), vs. about \(50\) with \(k=2\).

Figure 8: Exact (blue) and estimated (orange) edge values for Zipf's distribution as a function of the element frequency rank, for \(n=1000\), \(\lambda=2\), and input size \(20\cdot 10^{6}\). Values for ranks 20 to 120 only are shown. Estimates are averaged over 50 hash function draws.
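For reference, a Zipfian input stream such as the one used in these experiments can be generated in a few lines (our own helper; parameters mirror the figures only approximately):

```python
import random

def zipf_stream(m, beta, length, seed=0):
    """Stream of element ranks 1..m drawn i.i.d. with P(i) proportional to 1/i**beta."""
    rng = random.Random(seed)
    ranks = range(1, m + 1)
    weights = [1.0 / i**beta for i in ranks]
    return rng.choices(ranks, weights=weights, k=length)

# e.g. feed into the sketch: for e in zipf_stream(5000, 0.7, 10**6): cm.insert(e)
```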
## 4 Conclusions

In this paper, we presented a series of experimental results providing new insights into the behavior of the conservative Count-Min sketch. Some of them have direct applications to the practical usage of this data structure. The main results can be summarized as follows.

* For the uniform distribution of input elements, assigning a different number of hash functions to different elements extends the subcritical regime (range of load factors \(\lambda\)) that supports an asymptotically vanishing relative error. This immediately implies space savings for Count-Min configurations verifying this regime. For non-uniform distributions, a varying number of hash functions allows extending the regime of negligible error for the most frequent elements.
* In the supercritical regime (here, \(\lambda>6\)) and under the uniform distribution, for a limited range of load factors (roughly, \(\lambda<50\)), a varying number of hash functions allows reducing the error even compared to the configuration \(k=1\), which yields the smallest error among constant \(k\)'s.
* Under "sufficiently uniform distributions", including uniform and step distributions, a Count-Min sketch reaches a saturation regime when \(\lambda\) becomes sufficiently large. In this regime, counters become concentrated around the same value and elements of different frequencies become indistinguishable.
* Frequent elements that can be estimated with small error can be seen as those which surpass the saturation level formed by the majority of other elements. For example, in the case of Zipf's distribution, those elements are the few "exceptionally frequent" ones, whereas the saturation is ensured by the heavy-tail elements. Applying a varying number of hash functions can increase the number of those elements for moderate loads \(\lambda\).

Many of those results lack a precise mathematical analysis. Perhaps the most relevant to the practical usage of Count-Min is the question of the saturation level, as it provides a lower bound on the frequency of elements that will be estimated with small error, which in turn is fundamental information for heavy-hitter types of applications. As we mentioned earlier, we believe that the analysis of [6] can be extended to the step distribution; however, providing an analysis for more complex distributions, including Zipf's distribution, is an open problem.
2308.10175
BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge
Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that each sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks that off-screen sounds and background noise often contaminate the audio recordings in real-world scenarios. They impose significant challenges on building a consistent semantic mapping between audio and visual signals for AVS models and thus impede precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework by incorporating multi-modal foundation knowledge. In a nutshell, our BAVS is designed to eliminate the interference of background noise or off-screen sounds in segmentation by establishing the audio-visual correspondences in an explicit manner. In the first stage, we employ a segmentation model to localize potential sounding objects from visual data without being affected by contaminated audio signals. Meanwhile, we also utilize a foundation audio classification model to discern audio semantics. Considering the audio tags provided by the audio foundation model are noisy, associating object masks with audio tags is not trivial. Thus, in the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the authentic-sounding objects. Here, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. We then examine the label concurrency between the localized objects and classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment real-sounding objects. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise. Our project website is https://yenanliu.github.io/AVSS.github.io/.
Chen Liu, Peike Li, Hu Zhang, Lincheng Li, Zi Huang, Dadong Wang, Xin Yu
2023-08-20T06:48:08Z
http://arxiv.org/abs/2308.10175v1
# BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge

###### Abstract

Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that each sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks that off-screen sounds and background noise often contaminate the audio recordings in real-world scenarios. They impose significant challenges on building a consistent semantic mapping between audio and visual signals for AVS models and thus impede precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework by incorporating multi-modal foundation knowledge1. In a nutshell, our BAVS is designed to eliminate the interference of background noise or off-screen sounds in segmentation by establishing the audio-visual correspondences in an explicit manner. In the first stage, we employ a segmentation model to localize potential sounding objects from visual data without being affected by contaminated audio signals. Meanwhile, we also utilize a foundation audio classification model to discern audio semantics. Considering the audio tags provided by the audio foundation model are noisy, associating object masks with audio tags is not trivial. Thus, in the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the authentic-sounding objects. Here, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. We then examine the label concurrency between the localized objects and classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment real-sounding objects. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise. Our project website is [https://yemanliu.github.io/AVSS.github.io/](https://yemanliu.github.io/AVSS.github.io/).

Audio-visual segmentation, visual sound localization, and audio-visual hierarchical trees.

Footnote 1: Foundation knowledge refers to the information extracted from off-the-shelf large models.

## I Introduction

Human perception is multi-dimensional, including vision, hearing, touch, taste, and smell [2]. Among them, audio and vision are very important perceptual modalities in our daily life, and their correspondences provide rich semantics for humans [3]. Understanding these correspondences fosters the development of audio-visual collaboration tasks, such as visual sound localization (VSL) [4, 5, 6] and audio-visual segmentation (AVS) [1, 7]. Unlike VSL, which represents sounding objects by heatmaps, AVS strives to delineate the shape of sounding sources at the pixel level.

Existing AVS methods directly decode the aligned audio-visual features into the masks of sounding sources. However, they may struggle to learn a consistent mapping between the audio and visual modalities when audio signals contain noise. As shown in the last two rows of Fig. 1, when the input audio signal (_i.e._, a gunshot) is mixed with background noise or off-screen sounds, the audio-visual segmentation results tend to incorporate silent regions. This indicates that current AVS methods might not perform well when audio recordings contain interference. To tackle the above challenges, we develop a two-stage bootstrapping audio-visual segmentation framework (BAVS).
In a nutshell, in the first stage, we intend to segment all potential sounding objects and extract the semantics of audio signals. In the second stage, we aim to associate all potential sounding objects with the audio tags and thus attain authentic audible sources.

Fig. 1: Comparisons of our method and TPAVI [1]. When audio signals involve background noise (white noise) or off-screen sounds (electrical clipper), TPAVI [1] tends to incorporate silent regions (_i.e._, the man) in segmentation results. In contrast, our method segments the sounding object (the gun) more precisely, especially near the gun boundaries. This implies that our method is robust against background noise and off-screen sounds. The comparison regions are highlighted by yellow bounding boxes.

In the first stage, we propose to leverage a segmentation model to localize potential-sounding objects. However, we observed that directly learning a segmentation model with the provided labels struggles to segment all potential-sounding objects. The primary reason is that the label of a sounding object in one frame can be changed to "background" when it is a silent one in other visual frames. This leads to ambiguity in learning a segmentation model and degrades its performance. To tackle this problem, we devise a silent object-aware objective (SOAO) based on the semantics extracted by an off-the-shelf large foundation model. To be specific, SOAO excludes penalties in segmenting silent objects that do not have ground-truth object masks in visual frames, while increasing the penalty in recognizing objects as background. By doing so, we are able to segment various potential sounding instances. Meanwhile, we employ a state-of-the-art audio foundation model to acquire semantic tags for each audio recording. This allows us to discern the underlying audio semantics, including on-screen sounds, background noise, and off-screen sounds.

In the second stage, we focus on establishing a consistent mapping between audio and visual semantics. One direct solution is to align the top-\(k\) most confident audio tags with the corresponding segmented instance labels. However, we found that such audio tags generated by the audio foundation model are not always accurate due to background noise and off-screen sounds. For example, lawn mower sounds with surrounding noise can be easily misidentified as helicopter sounds. To overcome the aforementioned issue and localize genuine-sounding objects, we devise an audio-visual semantic integration strategy (AVIS). Specifically, we first construct an audio-visual tree based on the hierarchical correlations between sounds and object categories, as shown in Fig. 4. The hierarchical structure indicates how the audio categories of potential sounds correspond to the visual categories. Moreover, to enhance the robustness of AVIS against noisy tags, we group sounding sources based on their harmonics and overtones. For instance, sounding sources such as "car", "truck", and "bus" are grouped under a parent node "road vehicle", as they may emit similar sounds. In this manner, when a visual instance does not have a corresponding audio tag, we will examine whether an audio tag from the same parent node can be assigned to this instance. If we can find one, we consider this visual instance to be a genuine-sounding object. Extensive experiments conducted on AVS datasets demonstrate that our method achieves state-of-the-art performance.
Moreover, we also illustrate that our framework remains effective and reliable in noisy audio scenarios. Our contributions are three-fold:

* We introduce a novel two-stage bootstrapping audio-visual segmentation framework (BAVS). Our framework shows strong robustness against background noise and off-screen sounds.
* We devise a silent object-aware objective (SOAO) to address the label-shifting issue in learning our segmentation model. This objective enables our model to segment all potential sounding objects.
* We develop an audio-visual semantic integration strategy (AVIS) by proposing an audio-visual tree. This strategy allows us to establish a consistent audio-visual mapping and thus find the authentic sounding objects.

## II Related Works

### _Visual Sound Localization_

Visual sound localization (VSL) aims to localize the sounding object in the visual data based on provided audio signals [4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15]. Predominantly, VSL methods explore the correspondence and feature similarity between audio and visual modalities to generate localization heatmaps. For example, Senocak _et al._[8] leverage a triplet loss to correlate audio signals with their corresponding visual frames. Oya _et al._[9] devise a similarity loss that encourages the similarity of paired audio-visual features while suppressing the similarity of non-matched pairs. However, these methods align the two modality features at the frame level, inevitably incorporating silent regions during alignment. This hinders the formation of consistent audio-visual correlations, thereby degrading the performance of VSL methods. To enhance localization accuracy, several works attempt to associate the sounding regions with audio signals by employing contrastive loss mechanisms [11, 14, 15, 16, 5]. Some approaches utilize pre-trained detection models to refine the possible sounding regions [13], while others focus on extracting challenging samples from visual frames [15, 5]. For instance, Chen _et al._[15] introduce a mechanism for mining challenging samples within a contrastive learning framework. Meanwhile, SLAVC [5] proposes a protocol for generating hard negative samples. However, these methods are less effective in scenarios with multiple sounding objects, due to the additive nature of audio recordings. Recently, Zhou _et al._[1, 7] develop audio-visual segmentation datasets, encompassing mask annotations for audible objects. With pixel-level supervision, more precise localization can be achieved. In this work, we investigate the essential factors that limit the performance of audio-visual segmentation and subsequently propose a two-stage framework to address these limitations.

### _Image Segmentation Networks_

Image segmentation aims to partition an image into distinct regions, where each segment represents an object or a part of an object [17, 18]. Image segmentation tasks can be divided into three categories based on how semantic labels are assigned to individual pixels: semantic segmentation [19], instance segmentation [20], and panoptic segmentation [21]. Conventionally, deep network-based methods are widely used to address these tasks, such as Mask-RCNN [20] and U-Net [19]. Furthermore, techniques that aggregate global feature contexts, such as DANet [22], OCNet [23], and CCNet [24], have also been developed. Recently, transformer-based architectures have gained prominence in image segmentation due to their enhanced ability to capture long-range context [25, 26, 27].
However, these methods are tailored for only one specific sub-task, either semantic segmentation or instance segmentation. To address this challenge, Cheng _et al._[28] introduce a method to combine both semantic and instance segmentation tasks within a single model framework. This pioneering methodology paves the way for sophisticated techniques that consolidate various sub-tasks within a singular architectural framework, such as Mask2Former [29] and MP-Former [30]. Different from image segmentation tasks, which only consider the visual modality, audio-visual segmentation demands an intricate understanding of the interaction between audio and visual modalities. TPAVI [7] pioneers an approach to the audio-visual segmentation task, first fusing features from both modalities with a cross-attention mechanism and then decoding masks from the fused features. However, this strategy impairs the effectiveness of audio semantics and amplifies visual semantic ambiguity when the audio is contaminated with noise.

## III Proposed Method

In this section, we present the details of the proposed framework. First, we apply the foundation models to extract prior knowledge for both visual and audio signals in Sec. III-A. We then describe potential-sounding object segmentation in Sec. III-B. Last, we introduce audio-visual tree construction and the audio-visual semantic integration strategy (AVIS) in Sec. III-C. The overall structure of our proposed framework is illustrated in Fig. 2.

### _Foundation Knowledge Priors_

#### III-A1 Visual Foundation Knowledge Extraction

To identify the semantic labels of the silent objects in each visual frame, we adopt a large off-the-shelf foundation model (_i.e._, UniDiffuser [31]) to extract visual foundation knowledge. Specifically, UniDiffuser is pre-trained on a large-scale dataset of paired image-text data. Hence, when feeding the frames into the model, the generated captions usually contain much richer semantics and encompass an extensive vocabulary. This implies an opportunity to extract detailed semantic labels from these captions. For simplicity, we take one frame as an example to illustrate the overall process. Given the generated caption \(c\) of the frame, we utilize the noun parser [32] to collect a set of nouns \(\mathcal{N}=\{n_{i},i=1,\dots,M_{1}\}\), where \(M_{1}\) denotes the number of extracted nouns. For example, as illustrated in Fig. 3, given one generated image caption "There is a pink parrot standing on a woman's hand", we can collect the nouns "parrot", "hand", and "woman" from the caption. These nouns denote the semantic labels of all potential-sounding objects in the frame, including silent objects and sounding objects. Based on the category labels of sounding objects in the current frame, we can identify silent objects by excluding those associated with sounding objects from \(\mathcal{N}\). However, the direct exclusion is challenged by the expression inconsistency between the extracted nouns and the category labels. For example, UniDiffuser might extract a more specific expression such as "parrot", while the AVS datasets broadly categorize it as "bird". To address this, we implement a semantic alignment strategy, unifying different terminologies before exclusion. In conjunction with the obtained noun set \(\mathcal{N}\), we characterize the category label set as \(\mathcal{C}=\{c_{i},i=1,\dots,M_{2}\}\), where \(M_{2}\) indicates the number of object categories.
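This alignment, formalized in Eq. (1) below, boils down to a nearest-neighbor search in embedding space. As a rough preview in code, here is a minimal sketch assuming the `sentence-transformers` library (and the `all-MiniLM-L6-v2` model) as a stand-in for the semantic extractor \(\mathcal{F}(\cdot)\) of [33]; all names are our assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

def align_nouns_to_categories(nouns, categories, model_name="all-MiniLM-L6-v2"):
    """Replace each extracted noun by the dataset category with the highest
    cosine similarity between their embeddings (cf. Eq. (1))."""
    model = SentenceTransformer(model_name)
    n_emb = np.asarray(model.encode(nouns), dtype=np.float32)       # (M1, d)
    c_emb = np.asarray(model.encode(categories), dtype=np.float32)  # (M2, d)
    n_emb /= np.linalg.norm(n_emb, axis=1, keepdims=True)
    c_emb /= np.linalg.norm(c_emb, axis=1, keepdims=True)
    sims = n_emb @ c_emb.T                                          # pairwise cosines
    return {n: categories[j] for n, j in zip(nouns, sims.argmax(axis=1))}

# e.g. align_nouns_to_categories(["parrot", "woman", "hand"], ["bird", "person", "dog"])
```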
Then, we employ a semantic extractor \(\mathcal{F}(\cdot)\)[33] to derive word representations for both the noun set \(\mathcal{N}\) and the category label set \(\mathcal{C}\). For each noun \(n_{i}\) in \(\mathcal{N}\), we calculate its cosine similarity score \(s(i,j)\) with the category name \(c_{j}\) in \(\mathcal{C}\), formulated as:

\[s(i,j)=\cos(\mathcal{F}(n_{i}),\mathcal{F}(c_{j})), \tag{1}\]

where \(\cos(\cdot)\) denotes the cosine similarity function. We thus obtain a similarity score set \(\mathcal{S}_{i}=\{s(i,k),k=1,\dots,M_{2}\}\) for each noun \(n_{i}\). We sort the elements in \(\mathcal{S}_{i}\) in descending order and identify the best matching pair \(\langle n_{i},c_{best}\rangle\). The noun \(n_{i}\) in \(\mathcal{N}\) is then replaced by its corresponding category label \(c_{best}\), resulting in an updated set \(\hat{\mathcal{N}}=\{\hat{n}_{i},i=1,\dots,M_{1}\}\). Leveraging the set \(\hat{\mathcal{N}}\), we finally obtain the semantic labels of silent objects in the current frame by excluding those encapsulated in \(\mathcal{C}\). For each visual frame, we perform the same procedure and obtain the corresponding labels of silent objects.

Fig. 2: Overview of our BAVS framework. In the first stage, we first utilize an off-the-shelf large foundation multi-modal model to extract the visual semantics. Based on the visual semantics, we introduce a silent object-aware objective (SOAO) to our segmentation model and thus obtain the potential sounding instance labels and masks. Moreover, we employ a large pre-trained audio classification foundation model to collect semantic tags for each audio recording. In the second stage, we first introduce an audio-visual tree to fit sound semantics to their sounding sources. Then we present the audio-visual semantic integration strategy (AVIS) to establish a consistent audio-visual mapping between the segmented instances and the audio-semantic tags.

#### III-A2 Audio Foundation Knowledge Extraction

Similarly, we employ a large off-the-shelf audio foundation model (_i.e._, Beats [34]) to discern the audio semantics of an audio signal. It is noteworthy that Beats [34] is trained on AudioSet [35], the largest audio event dataset to date. This dataset is characterized by its exceptionally detailed categorization of audio recordings, including various types of ambient noise such as wind, thunder, rain, and white noise. Utilizing Beats enables our framework to efficiently differentiate on-screen sounds, background noise, and off-screen sounds. In the process of obtaining audio tags for each audio signal in the AVS datasets, the first step involves forwarding the audio signal to Beats [34] to attain its audio representation. Subsequently, a projection layer is employed to generate the semantic distribution across all available categories. Lastly, a sigmoid function determines the confidence score associated with each audio tag.

### _Potential-sounding Objects Segmentation_

After obtaining the semantic labels of silent objects in Sec. III-A1, we introduce the details of potential-sounding object segmentation here. We first explain the details of the adopted network architecture and then elaborate on the training objectives.

#### III-B1 Network Architecture

We employ a potential-sounding object segmentation network (PSOS) to localize all potential-sounding objects. The architecture is inspired by previous unified frameworks that show excellent performance across different segmentation tasks [28, 29].
As indicated in Fig. 2, we first feed a visual frame into a backbone network to extract its image feature \(f\). The feature is then input into a pixel decoder and a transformer decoder. The pixel decoder progressively upsamples the image feature to generate a per-pixel embedding \(\varepsilon_{pixel}\), and the transformer decoder produces \(N\) segmentation embeddings \(\varepsilon_{mask}\). These embeddings contribute to the formation of \(N\) class predictions, along with \(N\) corresponding mask embeddings. Here, the mask embeddings are further processed by the _classes head_, assigning a class label to each mask. Furthermore, the combination of the mask embeddings and the per-pixel embedding \(\varepsilon_{pixel}\) is fed into the _masks head_ to derive binary masks. Thus, we obtain the predicted results \(Q=\{(p_{i},m_{i}),i=1,\dots,N\}\), where \(p_{i}\in\mathbb{R}^{C+1}\) denotes the classification score and \(m_{i}\in\{0,1\}^{H\times W}\) symbolizes the binary mask. Note that a "no object" class is introduced to represent either the background regions or the categories that fall outside the scope of AVS datasets. In practice, \(N\) is usually much larger than the number of ground-truth instances in the visual images. This ensures that all potential-sounding objects have corresponding predictions. In calculating the training loss, we first employ the bipartite matching algorithm to identify the best prediction for each ground truth. After obtaining the best predictions, we calculate the training loss between the matched predictions and the ground truth for optimization.

#### III-B2 Training Objectives

Besides the obtained prediction set \(Q\), we denote the ground-truth set \(Q^{gt}=\{(c_{j}^{gt},m_{j}^{gt}),j=1,\dots,N^{gt}\}\), where \(N^{gt}\) is the number of ground truth, and \(c_{j}^{gt}\) and \(m_{j}^{gt}\) represent the label and the binary mask of the \(j\)-th ground truth, respectively. Based on the matching index \(\sigma\) between the ground truth and the predicted results, previous works [28] adopt the focal loss \(\mathcal{L}_{\mathrm{focal}}\)[36], the dice loss \(\mathcal{L}_{\mathrm{dice}}\)[37], and the cross-entropy loss as the training objective. Mathematically, it is represented by:

\[\begin{split}\mathcal{L}_{\mathrm{seg}}=&\sum_{j=1}^{N}[\lambda_{f}\mathcal{L}_{\mathrm{focal}}(m_{\sigma(j)},m_{j}^{gt})\\ &+\lambda_{d}\mathcal{L}_{\mathrm{dice}}(m_{\sigma(j)},m_{j}^{gt})-\log p_{\sigma(j)}(c_{j}^{gt})],\end{split} \tag{2}\]

where \(\lambda_{f}\) and \(\lambda_{d}\) are the hyper-parameters that balance the focal loss, dice loss, and cross-entropy loss. Although this objective has shown good performance in traditional segmentation tasks, it falls short in AVS tasks. This is primarily attributed to the label-shift phenomenon inherent to the AVS dataset. Specifically, the label of an object would shift to "background" if the object is no longer sounding in the input audio-visual pair. This leads to ambiguity for segmentation models in training and thus causes the model to overfit to the salient objects in visual frames. To address such problems, we introduce an innovative objective function, dubbed the silent object-aware objective (SOAO).

Fig. 3: Illustration of visual foundation knowledge extraction. We first employ a caption generator to attain textual descriptions for the visual frame. Then we utilize the noun parser to obtain the nouns in the caption. Subsequently, a semantic extractor is adopted to obtain the representations of the extracted nouns and the category labels of sounding objects.
We compute the similarity between the extracted nouns and the category labels and replace each noun with the most similar category label. After filtering out the ground-truth labels of the frame, we obtain the labels of silent objects.

SOAO aims to increase the diversity of segmented objects, guided by semantic- and instance-level constraints. The first, semantic-level constraint explicitly makes the segmentation model aware of the silent objects. Previously, predictions from the set \(Q\) could be divided into two subsets: one corresponding to the ground truth \(Q^{gt}\) and the other aligning with "no object" \(\varnothing\). In optimization, constraints are imposed on the predictions in both subsets. Such a process neglects the potential silent objects in "no object", which tends to compromise the performance of the obtained models in AVS tasks. Building upon the semantic labels of silent objects derived in Sec. III-A1, we introduce a more sophisticated constraint. Specifically, we exclude the predictions that align with the silent objects and only impose constraints on the remaining predictions in the "no object" set. These constrained predictions are denoted as \(\varnothing_{new}\). Mathematically, the semantic-level constraint is expressed as:

\[\mathcal{L}_{\mathrm{cls}}=\sum\nolimits_{j=1}^{N}-\log p_{j\in\varnothing_{new}}(c^{bg}), \tag{3}\]

where \(c^{bg}\) represents the "no object" class, and \(p_{j\in\varnothing_{new}}(c^{bg})\) denotes the probability corresponding to this class.

The second, instance-level constraint is designed to reduce the overlap between non-object regions and the foreground. Specifically, when masks are predicted to be the "no object" class, this constraint actively reduces their intersection with the ground-truth object regions (foreground). This constraint is mathematically expressed as:

\[\mathcal{L}_{\mathrm{ins}}=\sum\nolimits_{j=1}^{N}\mathbbm{1}_{j\in\varnothing}\frac{m_{j}\cap(\bigcup_{k=1}^{N^{gt}}m_{k}^{gt})}{m_{j}\cup(\bigcup_{k=1}^{N^{gt}}m_{k}^{gt})}, \tag{4}\]

where \(m_{j}\) denotes the \(j\)-th predicted mask in "no object" and \(m_{k}^{gt}\) represents the \(k\)-th ground-truth mask. \(\bigcup_{k=1}^{N^{gt}}m_{k}^{gt}\) represents the ground-truth object regions. The term \(m_{j}\cap\left(\bigcup_{k=1}^{N^{gt}}m_{k}^{gt}\right)\) represents the intersection between the "no object" region and the foreground regions, while \(m_{j}\cup\left(\bigcup_{k=1}^{N^{gt}}m_{k}^{gt}\right)\) captures the union of the two regions. By introducing this constraint, the model is directed towards focusing on silent regions, thereby enhancing the diversity of segmented results.

Overall, the objective of our segmentation is defined as follows:

\[\mathcal{L}_{\mathrm{SOAO}}=\mathcal{L}_{\mathrm{seg}}+\lambda_{cls}\mathcal{L}_{\mathrm{cls}}+\lambda_{ins}\mathcal{L}_{\mathrm{ins}}, \tag{5}\]

where \(\lambda_{cls}\) and \(\lambda_{ins}\) are the balancing hyper-parameters. With the training objective \(\mathcal{L}_{\mathrm{SOAO}}\), our segmentation model overcomes the issue of overfitting to the most prominent objects and effectively identifies and localizes all potential-sounding objects in the visual frame.

### _Audio-Visual Semantic Integration_

#### III-C1 Audio-visual Tree Construction

To establish a consistent mapping between audio and visual semantics, we first construct an audio-visual tree. The audio tags extracted from Beats (Sec. III-A2) are inherently granular.
For example, distinct audio labels such as horse barking, clip-clop, neigh, sink, and clutter all denote horse-related sounds. Thus, we establish a mapping from these audio tags to the respective visual categories. This relationship is visualized in Fig. 4, where each yellow node represents an audio tag, and each red node corresponds to a visual category. Using the derived confidence scores of the audio tags, we aggregate the scores corresponding to the same visual category. Any visual category (red node) with a non-zero score is then incorporated into the potential-sounding object set. This set is represented as \(\mathcal{T}_{c}=\{t_{i}:\{a_{j},j=1,\ldots,J\},i=1,\ldots,N_{1}\}\), where \(N_{1}\) denotes the number of reserved nodes in this set, \(\{a_{j},j=1,\ldots,J\}\) denotes all the audio tags that correspond to one visual category \(t_{i}\), and \(J\) is the number of audio tags.

Fig. 4: The audio-visual tree consists of three layers. In this diagram, we only display part of the nodes in each layer. The complete tree structure files can be downloaded from our project website. From the first layer to the third layer, there are 24, 156, and 527 nodes, respectively. The yellow nodes in the third layer represent the audio tags. In the second layer, the red nodes are the visual categories. Moreover, the blue nodes are the high-level category groupings.

Considering the complexity of the input audio, the audio tags generated by the audio foundation model may be inaccurate. Specifically, the genuine categories in the audio might receive diminished scores, while semantically analogous categories could have much higher scores. Such inaccuracy can result in the exclusion of the genuine category label from \(\mathcal{T}_{c}\). For example, the audio of "ambulance" may receive a lower score for the visual category "ambulance" and be excluded from \(\mathcal{T}_{c}\). Its semantically related category "bus", however, might have a high score, ensuring its inclusion in \(\mathcal{T}_{c}\). To retrieve the missed genuine category, _e.g._, "ambulance", we thus construct a high-level mapping based on the harmonics and overtones of sounds. If two red nodes (representing visual categories) may emit similar sounds, we group them in a high-level set, denoted as \(\mathcal{T}_{p}=\{\hat{t_{i}}:\{t_{k},k=1,\ldots,K\},i=1,\ldots,N_{2}\}\). Here, \(N_{2}\) denotes the number of nodes in the set, \(\{t_{k},k=1,\ldots,K\}\) denotes all the visual categories that share similar sounds, and \(K\) is the number of visual categories included in node \(\hat{t}_{i}\). By integrating this mapping structure, we obtain an audio-visual tree that comprises three hierarchical layers: the third layer denotes audio tags, the second layer encapsulates visual categories of sounding objects, and the first layer signifies high-level category groupings.
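Concretely, the tree can be represented as a nested mapping, and the aggregation of tag confidences onto visual categories becomes a simple traversal. A minimal sketch follows; the miniature tree and all tag names below are illustrative only, not the released tree (which has 24/156/527 nodes per layer):

```python
# Illustrative miniature of the audio-visual tree (layers: group -> category -> tags).
AV_TREE = {
    "road vehicle": {
        "car": ["car passing by", "engine", "honk"],
        "bus": ["bus"],
        "ambulance": ["ambulance (siren)"],
    },
    "horse": {
        "horse": ["clip-clop", "neigh"],
    },
}

def potential_sounding_set(tag_scores, tree=AV_TREE):
    """Aggregate audio-tag confidence scores onto visual categories; every
    category with a non-zero aggregated score enters T_c."""
    t_c = {}
    for group in tree.values():
        for category, tags in group.items():
            score = sum(tag_scores.get(t, 0.0) for t in tags)
            if score > 0.0:
                t_c[category] = score
    return t_c
```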
#### III-C2 Audio-Visual Semantic Integration

Based on the prediction set \(Q\) from the potential-sounding object segmentation network, we also derive a potential-sounding object set for the visual data of the AVS datasets. Considering the presence of numerous overlapped and low-quality masks in \(Q\), we perform a two-phase filtering process. Specifically, in the first phase, we partition the masks from the set \(Q\) into multiple subsets based on their predicted labels. In each subset, we arrange the objects in descending order of their confidence scores \(p_{i}\). Masks with the highest scores are selected to form the preliminary object set \(\hat{P}_{c}=\{(v_{i},m_{i}),i=1,\ldots,N_{3}\}\), where \(v_{i}\) denotes the instance label, \(m_{i}\in\{0,1\}^{H\times W}\) represents the binary mask, and \(N_{3}\) is the total number of selected objects in the first phase. In the second phase, we further identify potential-sounding objects from the remaining masks. For each remaining mask, we compute its intersection-over-union (IoU) scores with all the masks in \(\hat{P}_{c}\). If all the IoU scores of the mask are below the threshold \(t\), we then add this mask to \(\hat{P}_{c}\). After performing this process, we finally obtain the potential-sounding object set \(P_{c}=\{(v_{i},m_{i}),i=1,\ldots,N_{4}\}\), where \(N_{4}\) denotes the number of potential-sounding objects after the two phases. Notably, the label \(v_{i}\) shares the same category set with the nodes in the second layer of the audio-visual tree, ensuring consistent categorization and naming.

Based on the audio-visual tree, we perform a consistent mapping between audio and visual semantics. Specifically, for a visual mask in \(P_{c}\), we identify it as the mask of a sounding object if its semantic label is present in both \(P_{c}\) and \(\mathcal{T}_{c}\). Once it is identified, the related mask and the category are removed from \(P_{c}\) and \(\mathcal{T}_{c}\), respectively. If this condition is not satisfied, we retrieve the categories that share similar sounds based on the high-level set \(\mathcal{T}_{p}\). Similarly, if the semantic label of the mask appears in both \(P_{c}\) and the retrieved categories, we also consider it as the mask of a sounding object. In contrast, if neither condition is fulfilled, the visual mask is labeled as the mask of a silent object.

### _Inference_

Given an audio-visual pair, we input the visual frame into the PSOS network to obtain the potential-sounding object set \(P_{c}\). Meanwhile, we generate the audio tags for the input audio signal by leveraging the audio foundation model. Based on the audio-visual tree, we attain the sounding source set \(\mathcal{T}_{c}\) and the high-level set \(\mathcal{T}_{p}\) of the input audio recording. Then, we utilize our audio-visual integration strategy to filter out the masks of silent objects in \(P_{c}\) and obtain the final masks for sounding objects.

## IV Experiments

### _Experimental Setup_

#### IV-A1 Implementation Details

For visual mask generation, we employ Swin Transformer (Swin) [38], Pyramid Vision Transformer v2 (PVTv2) [39], and ResNet50 [40] as the backbones of our segmentation network, all of which are pre-trained on ImageNet [41]. All visual frames are resized to a resolution of 224 \(\times\) 224 \(\times\) 3 pixels. During the training stage, the models are optimized by the Adam optimizer with a learning rate of 1e-4 [42]. We set the batch size to 20. The model with the best performance on the validation dataset is employed to segment all potential sounding objects. For audio semantic tagging, we adopt the Beats model [34] pre-trained on AudioSet [35]. The input audio signals are sampled at a rate of 48,000 samples per second, each with a duration of one second.

#### IV-A2 Audio-Visual Segmentation Datasets

To validate our approach, we utilize two established audio-visual segmentation datasets: the AVS dataset [7] and the AVSS dataset [1]. We adopt the same data split rules as in [7] and [1].

**AVS Dataset.** The AVS dataset consists of 5,356 video samples distributed over 23 categories [7].
In this dataset, each video is evenly split into five video clips, and each clip lasts one second. The last visual frame and the audio signal of the video clip are annotated and regarded as one audio-visual pair. The dataset is further categorized into two distinct sub-datasets: the single-sounding source sub-dataset (AVS-S4) comprising 4,932 videos and the multi-sounding source setting (AVS-MS3) with 424 videos. In AVS-S4, each audio-visual pair possesses only one sounding object, while AVS-MS3 includes pairs with multiple sounding objects. It is important to highlight that the ground truth for each pair is a binary mask, where "0" indicates the silent region and "1" represents the sounding region.

**AVSS Dataset.** The AVSS dataset has 12,356 videos across 70 categories [1]. Different from the AVS dataset, which solely offers binary mask annotations, AVSS provides semantic-level annotations for sounding sources. The dataset also introduces 7,000 new videos, each lasting 10 seconds. Similar to the AVS dataset, each video in AVSS is uniformly divided into 10 clips, where the last frame and the corresponding audio signal form an audio-visual pair.

#### IV-A3 Evaluation Metrics

In this study, we employ the Jaccard Index (\(\mathcal{J}\)) [43] and the F-score (\(\mathcal{F}\)) to evaluate the obtained AVS models. Specifically, \(\mathcal{J}\) calculates the intersection-over-union (IoU) value between the predicted masks of sounding objects and the ground truth. \(\mathcal{F}\) provides a comprehensive evaluation of the model in terms of precision and recall. It is defined as \(\mathcal{F}=\frac{(1+\beta^{2})\cdot\mathrm{precision}\cdot\mathrm{recall}}{\beta^{2}\cdot\mathrm{precision}+\mathrm{recall}}\). In line with previous works [1, 7], we set \(\beta^{2}\) as 0.3.

### _Quantitative Evaluation_

#### IV-B1 Comparison with the AVS method

We benchmark our approach against the state-of-the-art method TPAVI [1, 7]. As suggested in Tab. I, our method consistently outperforms TPAVI on all datasets. Note that on the AVS-MS3 sub-dataset, our method achieves a significant 5.63% improvement with respect to metric \(\mathcal{J}\). This implies that our method excels in handling scenarios involving multiple sound sources. Significantly, the AVSS dataset presents more challenges, characterized by more intricate audio recording environments and the necessity for semantic-level predictions. Hence, scores under both metrics are lower when compared to those on the AVS dataset. Nonetheless, our method still surpasses TPAVI by a large margin, _e.g._, 3.82% with respect to metric \(\mathcal{J}\). These results demonstrate the effectiveness and robustness of our method in more complex audio-visual segmentation cases.

#### IV-B2 Comparison with the VSL methods

We further compare our method with state-of-the-art methods in Visual Sound Localization (VSL), including EZLSL [11], LVS [15], SLAVC [5], and SSPL [14]. For a fair comparison, we re-train these models on both the AVS and AVSS datasets. While VSL methods are crafted for locating sounding sources, their outputs are primarily heatmaps. Hence, we cannot directly use metrics \(\mathcal{F}\) and \(\mathcal{J}\) to compare our method with VSL methods. To solve this problem, we first convert the ground-truth masks of the AVSS dataset into binary masks and then calculate the values under metrics \(\mathcal{J}\) and \(\mathcal{F}\).
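For concreteness, both metrics are straightforward to compute on binary masks; a minimal sketch following the definitions above (our own helper functions, with \(\beta^{2}=0.3\)):

```python
import numpy as np

def jaccard(pred, gt):
    """J: intersection-over-union of two binary masks (boolean arrays)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f_score(pred, gt, beta2=0.3):
    """F-score with beta^2 = 0.3, from pixel-wise precision and recall."""
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```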
### _Quantitative Evaluation_

#### IV-B1 Comparison with the AVS method

We benchmark our approach against the state-of-the-art method TPAVI [1, 7]. As shown in Tab. I, our method consistently outperforms TPAVI on all datasets. Note that on the AVS-MS3 sub-dataset, our method achieves a significant 5.63% improvement with respect to metric \(\mathcal{J}\). This implies that our method excels in handling scenarios involving multiple sound sources. Significantly, the AVSS dataset presents more challenges, characterized by more intricate audio recording environments and the necessity for semantic-level predictions. Hence, scores under both metrics are lower than those on the AVS dataset. Nonetheless, our method still surpasses TPAVI by a large margin, _e.g._, 3.82% with respect to metric \(\mathcal{J}\). These results demonstrate the effectiveness and robustness of our method in more complex audio-visual segmentation cases.

#### IV-B2 Comparison with the VSL methods

We further compare our method with state-of-the-art methods in Visual Sound Localization (VSL), including EZLSL [11], LVS [15], SLAVC [5], and SSPL [14]. For a fair comparison, we re-train these models on both the AVS and AVSS datasets. While VSL methods are crafted for locating sounding sources, their outputs are primarily heatmaps. Hence, we cannot directly use metrics \(\mathcal{F}\) and \(\mathcal{J}\) to compare our method with VSL methods. To solve this problem, we first convert the ground truth masks of the AVSS dataset into binary masks and then calculate the values under metrics \(\mathcal{J}\) and \(\mathcal{F}\). As shown in Tab. II, our method consistently outperforms the baselines under both metrics. For example, compared to EZLSL, our approach achieves notable improvements of 49.69% on AVS-S4, 35.62% on AVS-MS3, and 25.45% on AVSS with respect to the \(\mathcal{J}\) metric. It is worth noting that these VSL methods neglect the challenges introduced by off-screen sounds and background noise. Most of them are trained on the VGGSound [44] dataset, which comprises audio recordings curated from controlled, interference-free settings. Although we have re-trained them on the AVS and AVSS datasets, the models are still incapable of effectively handling cases with noise or off-screen sounds due to their inherent limitations.

#### IV-B3 Analysis of background noise and off-screen sounds

To demonstrate the effectiveness of our method against background noise or off-screen sounds, we compare with TPAVI on data with explicit background noise or off-screen sounds. In detail, we add white noise or an off-screen sound (_i.e._, "electrical clipper") to each audio recording in the AVSS test data. As shown in Tab. III, our method exhibits superior performance across both challenging settings. Specifically, in the case involving off-screen sounds (w/ off-screen sounds), our method witnesses a marginal drop of 0.07% under metric \(\mathcal{J}\), whereas TPAVI suffers a significant decline of 6.21%. In the case of white noise (w/ noise), the \(\mathcal{J}\) result of our method drops from 33.59% to 30.51%, while TPAVI decreases from 29.77% to 23.59%. These results imply that TPAVI struggles to establish a consistent mapping between audio and visual signals when facing the interference of background noise and off-screen sounds. In contrast, our proposed method effectively mitigates these adverse effects.

Fig. 5: Qualitative comparisons with the state-of-the-art TPAVI on the single-sounding source and multi-sounding source cases. We highlight regions for comparison by yellow bounding boxes. Among all results, different colors suggest distinct sounding sources.

### _Qualitative Evaluation_

#### IV-C1 Comparison with the AVS method

Fig. 5 illustrates the qualitative results on both single-sounding source and multi-sounding source scenarios. In single-sounding source data (Fig. 5 (a)), all methods are able to locate the sounding objects. However, our proposed method achieves finer results along the object boundaries. For example, while TPAVI ignores the propeller of the helicopter, our method successfully segments that region. In the multi-source cases (Fig. 5 (b)), TPAVI fails to segment some sounding regions or misclassifies parts of sounding regions. These problems can be attributed to its training strategy. Specifically, TPAVI directly aligns the audio with the corresponding visual regions and neglects the multiple nuanced audio semantics in the audio recording. This means one visual instance could be associated with multiple audio sources in one audio recording, thus resulting in impaired performance. In contrast, our method first obtains the object masks and audio semantics, then associates them based on our proposed integration strategy. Our method avoids the problems in TPAVI and obtains high-quality segmentation masks for all sounding objects. To show the model performance under contaminated audio signals, we present the visualization results of our method and TPAVI in Fig. 6.
For instance, when the audio signal is mixed with white noise, TPAVI tends to segment the silent region ("girl") while our method avoids the segmentation of any silent regions (Fig. 6 (a)). As indicated by Fig. 6 (b), when the audio signal is mixed with the off-screen sound ("electrical clipper"), TPAVI tends to overlook the sounding region ("elephant") while our method still segments the complete region. These observations highlight that our method builds strong audio-visual semantic correlations and is robust against background noise and off-screen sounds.

Fig. 6: Visual results of adding white noise to original audio recordings. Red bounding boxes highlight the specific regions for comparison. While the input audio-visual pairs contain interference, our framework can still segment sounding objects accurately.

#### IV-C2 Comparison with the VSL methods

Fig. 7 illustrates the visual results of our method and the visual sound localization methods. As shown in Fig. 7 (b), visual sound localization methods fail to localize multiple sounding regions. They tend to localize a single sounding region or cover all significant objects within visual frames. For instance, if the visual frame contains "humans", these models tend to focus only on the regions of "humans", even if the "humans" do not produce any sound. In contrast, our method avoids such problems and demonstrates a superior ability to precisely locate multiple sounding sources. To further illustrate the anti-interference ability of all methods, we visualize their localization results under scenarios with external sound interference or the absence of sound. As depicted in Fig. 7, although the audio is silent or mixed with "white noise" and "off-screen sounds", VSL methods are still prone to localize the most prominent object in visual frames. This observation suggests that VSL methods cannot localize the genuine sounding sources but just focus on the salient objects. In contrast, our approach demonstrates a nuanced sensitivity to audio changes and is robust against noise interference.

### _Ablation Study_

#### IV-D1 Impact of various backbones

To demonstrate that the superiority of our method benefits from the method design rather than the backbone models, we adopt various networks, _i.e._, ResNet50 [40], PVT-v2 [39], and Swin [38], as the backbone of our segmentation model. Additionally, we also provide the parameter amount (Params) and computation cost (GFLOPs) of all methods for further analysis. As suggested by Tab. IV, our method consistently outperforms all other methods across different backbones. In addition, our method attains lower Params and GFLOPs when compared with TPAVI under the same backbone.

#### IV-D2 Impact of SOAO

As discussed in Sec. III-B, we increase the variety of segmented masks by introducing SOAO. To demonstrate the effectiveness of its two components (\(\mathcal{L}_{cls}\) and \(\mathcal{L}_{ins}\)), we conduct an ablation study on the AVSS dataset. As suggested by Tab. V, both components significantly improve the segmentation performance across all datasets. Specifically, with the involvement of \(\mathcal{L}_{cls}\) and \(\mathcal{L}_{ins}\), the obtained segmentation model achieves nearly 3% improvement under metric \(\mathcal{J}\). Additionally, we adopt \(\mathcal{I}\) to denote the number of recalled potential-sounding objects.
The much higher performance under \(\mathcal{I}\) validates that our SOAO encourages the segmentation model to segment richer objects in visual images rather than overfitting to the salient ones.

#### IV-D3 Impact of AVIS

In this work, we employ the audio-visual semantic integration strategy (AVIS) to establish the audio-visual semantic correlation. To demonstrate the effectiveness of AVIS, we perform an ablation study on all datasets, and the results are shown in Tab. VI. With the incorporation of AVIS, the \(\mathcal{J}\) results are increased by 3.45% on AVS-S4, 5.96% on AVS-MS3, and 6.77% on AVSS. The above results suggest that AVIS adequately builds a strong correlation between audio and visual signals.

Fig. 7: Visual comparisons with the visual sound localization methods including EZLSL [11], LVS [15], and SSPL [14]. Original Audio means the audio signal matches the visual frame. (a) and (b) show the visual results of all methods on the single-sounding source and multi-sounding source cases, respectively. w/ white noise indicates the original audio blended with white noise, and w/ clipper represents the original audio mixed with the off-screen sound "clipper". Silent denotes that the input audio is silent.

## V Conclusions

In this paper, we present a two-stage bootstrapping audio-visual segmentation framework for AVS tasks. Based on the visual knowledge from visual foundation models and the proposed silent object-aware objective (SOAO), our segmentation model identifies multiple potential sounding objects within the image. Meanwhile, leveraging the knowledge from the audio foundation models, we distinguish the semantics of each audio recording, including the background noise and the off-screen sounds. Finally, we use the proposed audio-visual semantic integration strategy to pinpoint the genuine sounding objects by examining the label co-occurrence between segmented objects and audio semantics. Extensive experiments demonstrate the superiority of our method in addressing AVS tasks, especially in cases with background noise and off-screen sounds. We hope that our work can provide insights for future research.
2303.12376
Graph Data Models and Relational Database Technology
Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, and this approach brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graphical databases, leveraging the existing support for the unified modelling language UML. Ideally, the integration of systems designed with different models, for example, graphical and relational database, should also be supported. In this work, we implement this approach, using metadata in a relational database management system (DBMS).
Malcolm Crowe, Fritz Laux
2023-03-22T08:24:06Z
http://arxiv.org/abs/2303.12376v1
# Graph Data Models and Relational Database Technology ###### Abstract Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, and this approach brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graphical databases, leveraging the existing support for the unified modelling language UML. Ideally, the integration of systems designed with different models, for example, graphical and relational databases, should also be supported. In this work, we implement this approach, using metadata in a relational database management system (DBMS). typed graph model; graph schema; relational database; implementation; information integration. ## I Introduction For many years, the process of database implementation has included a conceptual data modeling phase, and this has often been supported by declarative structures using annotations or attributes [1]. Some recent DBMS have included metadata in the relational model to form a bridge with the physical database. This approach brings significant advantages in that the data model can be enforced across all applications for a single database. In previous work [2], we provided mapping rules for the TGM so that data models can play a similar role in graphical databases, using the notations of UML [3]. During such early conceptual model building, incremental and interactive exploration can be helpful [4], as fully automated integration tools may combine things in an inappropriate way, and the use of data types [5] can help to ensure that semantic information is included not merely in the model, but also in the final database. In this short paper, we report on such an implementation of the TGM, using metadata in a relational DBMS [6], partly inspired by recent developments in the PostgreSQL community [7]. As with the original relational model, the Typed Graph Model (TGM) has a rigorous mathematical foundation as an instance of a Graph Schema. The plan of this short paper is to review the TGM in Section II and discuss the implementation details in Section III, including an illustrative example. Section IV provides some conclusions. ## II The Typed Graph Model and Information Integration We will construct a TGM for a database by declaring instances of nodes and edges as an alternative to specifying tables of nodes and edges. ### _Typed Graphs formalism_ In this section, we review the informal definition of the TGM from [2], using small letters for elements (nodes, edges, data types, etc.) and capital letters for sets of elements. Sets of sets are printed as bold capital letters. A typical example would be \(n\in N\in\mathbf{N}\subseteq\wp(N)\), where \(N\) is any set and \(\wp(N)\) is the power-set of \(N\). Let \(T\) denote a set of simple or structured (complex) data types. A data type \(t:=(l,d)\in T\) has a name \(l\) and a definition \(d\). Examples of simple (predefined) types are (\(int\),\(\mathbb{Z}\)), (\(char\),\(ASCII\)), (\(\%\),[0..100]), etc. It is also possible to define complex data types, like an order line (\(OrderLine\),(\(posNo\),\(partNo\),\(partDescription\),\(quantity\))).
The components need to be identified in \(T\), e.g., (\(posNo\),\(int>0\)). Recursion is allowed as long as the defined structure has a finite number of components.

**Definition 1** (Typed Graph Schema, TGS): _A typed graph schema is a tuple TGS=(\(N_{S}\),\(E_{S}\),\(\varrho\),\(\tau\),\(T\),\(C\)) where:_

* \(N_{S}\) is the set of named (labeled) objects (nodes) \(n\) with properties of data type \(t:=(l,d)\in T\), where \(l\) is the label and \(d\) the data type definition.
* \(E_{S}\) is the set of named (labeled) edges \(e\) with a structured property \(p:=(l,d)\in T\), where \(l\) is the label and \(d\) the data type definition.
* \(\varrho\) is a function that associates each edge \(e\) with a pair of object sets (\(O\),\(A\)), i.e., \(\varrho(e):=(O_{e},A_{e})\) with \(O_{e},A_{e}\in\wp(N_{S})\). \(O_{e}\) is called the _tail_ and \(A_{e}\) is called the _head_ of an edge \(e\).
* \(\tau\) is a function that assigns to each node \(n\) of an edge \(e\) a pair of positive integers (\(i_{n}\),\(k_{n}\)), i.e., \(\tau(e):=(i_{n},k_{n})\) with \(i_{n}\in\mathbb{N}_{0}\) and \(k_{n}\in\mathbb{N}\). The function \(\tau\) defines the min-max multiplicity of an edge connection. If the min-value \(i_{n}\) is 0, then the connection is optional.
* \(C\) is a set of integrity constraints, which the graph database must obey.

The notation for defining the data types \(T\), which are used for node types \(N_{S}\) and edge types \(E_{S}\), can be freely chosen; in this implementation, SQL is used for identifiers and expressions, together with a strongly typed relational database engine. The integrity constraints \(C\) restrict the model beyond the structural limitations of the multiplicity \(\tau\) of edge connections. Typical constraints in \(C\) are semantic restrictions of the content of an instance graph.
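To make Definition 1 concrete, the following is a minimal sketch, in Python, of one possible in-memory representation of a typed graph schema; the class and field names are illustrative assumptions, and the paper's own implementation uses relational base tables instead:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass(frozen=True)
class DataType:
    label: str        # l: the type name, e.g. "posNo"
    definition: str   # d: the definition, e.g. "int > 0"

@dataclass
class NodeType:
    label: str
    properties: Dict[str, DataType]

@dataclass
class EdgeType:
    label: str
    properties: Dict[str, DataType]
    tail: Set[str]    # O_e: labels of the node types the edge leaves
    head: Set[str]    # A_e: labels of the node types the edge arrives at
    multiplicity: Dict[str, Tuple[int, int]]  # tau: (min, max) per node type

@dataclass
class TypedGraphSchema:
    node_types: Dict[str, NodeType] = field(default_factory=dict)
    edge_types: Dict[str, EdgeType] = field(default_factory=dict)
    constraints: List[str] = field(default_factory=list)  # C, e.g. SQL CHECK texts
```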
**Definition 2** (Typed Graph Model): _A typed graph model is a tuple TGM=(\(N\),\(E\),TGS,\(\varphi\)) where:_

* \(N\) is the set of named (labeled) nodes \(n\) with data types from \(N_{S}\) of schema TGS.
* \(E\) is the set of named (labeled) edges \(e\) with properties of types from \(E_{S}\) of schema TGS.
* TGS is a typed graph schema as defined above.
* \(\varphi\) is a homomorphism that maps each node \(n\) and edge \(e\) of TGM to the corresponding type element of TGS, formally: \(\varphi:\text{TGM}\rightarrow\text{TGS}\) with \(n\mapsto\varphi(n)\in N_{S}\) and \(e\mapsto\varphi(e)\in E_{S}\).

## III Implementation

Nodes and edges are declared with CREATE statements, in which an identifier and an optional label name a node and its type, string values are single quoted, and doc is a JSON-like structure providing a set of properties and value expressions, possibly including metadata definitions for ranges and multiplicity. Such declarative statements build a base table in the database for each label. Nodes and edges, and new node types and edge types, can be introduced with this syntax. The database engine constructs a base table for each distinct label, with columns sufficient to represent the associated properties. These base tables for node types (or edge types) contain a single row for each node (resp. edge), including node references. They can be equipped with indexes, constraints, and triggers in the normal ways. To the normal SQL DML, we add the syntax for the MATCH query, which has a similar syntax, except that it may contain unbound identifiers for nodes and edges, their labels and/or their properties:

Match: MATCH Node {',' Node} [whereClause] Statement.

The first part of the MATCH clause defines a graph expression. We say that a graph expression is bound if it contains only constant values, and all its nodes and edges are in the database. The MATCH algorithm proceeds along the node expressions, matching more and more of its nodes and edges with those in the database by assigning values to the unbound identifiers. If we cannot progress to the next part of the MATCH clause, we backtrack by undoing the last binding and taking an alternative value. If the processing reaches the end of the MATCH statement, the set of bindings contributes a row in the default result, subject to the optional WHERE condition. These rows then act as a source of values for the following statement.
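The backtracking evaluation just described can be sketched as follows, here simplified to node patterns over an in-memory label index; edge traversal and the WHERE condition are omitted, so this illustrates only the control flow, not the engine's implementation:

```python
def match(patterns, graph, bindings=None):
    """Enumerate all bindings of the unbound identifiers in `patterns`.

    patterns: list of (identifier, label) pairs, in MATCH-clause order.
    graph:    dict mapping a label to the list of node ids of that type.
    Yields one bindings dict (a result row) per complete match.
    """
    bindings = {} if bindings is None else bindings
    if not patterns:
        yield dict(bindings)          # reached the end of the MATCH clause
        return
    (ident, label), rest = patterns[0], patterns[1:]
    if ident in bindings:             # already bound: only check consistency
        if bindings[ident] in graph.get(label, []):
            yield from match(rest, graph, bindings)
        return
    for node in graph.get(label, []): # try each candidate value in turn
        bindings[ident] = node
        yield from match(rest, graph, bindings)
        del bindings[ident]           # backtrack: undo the last binding

# e.g. list(match([("X", "Customer"), ("Y", "Order")],
#                 {"Customer": ["Joe"], "Order": ["Ord201"]}))
```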
### _Outline of the usage of the TGM_

Following the suggestion in [5], we consider the use of the TGM in analysis, where an interactive process is envisaged. The nodes and edges contained in the database combine to form a set of disjoint graphs that is initially empty. Adding a node to the database adds a new entry to this set. When an edge is added, either the two endpoints are in the same graph, or else the edge will connect two previously disjoint graphs. If each graph in the set is identified by a representative node (such as the one with the lowest uid) and maintains a list of the nodes and edges it contains, it is easy to manage the set of graphs as data is added to the database. If an edge is removed, the graph containing it might now be in at most two pieces: the simplest algorithm removes it from the set and adds its nodes and edges back in. The database with its added graph information can be used directly in ordinary database application processing, with the advantage of being able to perform graph-oriented querying and graph-oriented stored procedures. The normal processing of the database engine naturally enforces the type requirements of the model, and also enforces any constraints specified in graph-oriented metadata. The nodes and edges are rows in ordinary tables that can be accessed and refined using normal SQL statements. In particular, using the usual dotted syntax, properties can be SET and updated, and can be removed by being set to NULL. As the TGM is developed and merged with other graphical data, conflicts will be detected and diagnostics will help to identify any obstacles to integrating a new part of the model, so that the model as developed to that point can be refined.

### _An example_

To get started with a customer-supplier ordering system, we could have a number of problematic CREATE statements such as:

CREATE (Joe:Customer {"Name":'Joe Edwards', Address:'10 Station Rd.'}),
(Joe)-[:Ordered {"Date":date'22/11/2002'}]->(Ord201:"Order")-[:Item {Qty: 5}]->("16/50x100":WoodScrew),
(Ord201)-[:Item {Qty: 5}]->("Fiber 12cm":WallPlug),
(Ord201)-[:Item {Qty: 1}]->("500ml":RubberGlue)

Primary keys for edges are here being left to the engine to supply - they could be specified explicitly if preferred. Name, Order and Date are in double quotes because they are reserved words in SQL. By default, the entire CREATE statement shown is considered a single transaction by the database engine: if the syntax checker is happy with it, it will be automatically committed. It is easy to criticize what the user offers here: the graph would benefit from splitting up composite information such as Fiber 12cm and 16/50x100 to clarify the meaning of the components and facilitate processing. Such changes can be made by the designer later. Assuming the database is empty before we start, the first line above, if committed, would create a new base table CUSTOMER (a NodeType):

CREATE TYPE CUSTOMER as ("Name" char, ADDRESS char) NodeType

The NodeType metadata flag adds as the first column a primary key column ID of type char, so that the new CUSTOMER table has an initial row ('Joe','Joe Edwards','10 Station Rd.'). That would work.
The next line defines four more base tables, two NodeTypes and two EdgeTypes:

CREATE TYPE "Order" NodeType
CREATE TYPE WOODSCREW NodeType
CREATE TYPE ORDERED as ("Date" date) EdgeType(CUSTOMER,"Order")
CREATE TYPE ITEM as (QTY int) EdgeType("Order",WOODSCREW)

This also will work, but is probably not what the analyst wanted, because the Item edge type connects to nodes of type WOODSCREW. If this is committed, we cannot later have an Item edge connecting to a WALLPLUG. But nothing is committed yet, so when the database engine finds this difficulty, it simply replaces the specification :WoodScrew in the second line by :WoodScrew:&1, with similar changes to WallPlug and RubberGlue. This adds a new anonymous base node type for these node types, with a system-generated name:

CREATE TYPE &1 NodeType

and the type proposals become:

CREATE TYPE ITEM as (QTY int) EdgeType("Order",&1)
CREATE TYPE WoodScrew UNDER &1
CREATE TYPE WallPlug UNDER &1
CREATE TYPE RubberGlue UNDER &1

The analyst can be advised that this has been done, and they can later choose a better name for the new NodeType &1 (maybe PRODUCT?). This process of generalization can be offered as a standard database transformation. After the nodes and edges have been generated and the transaction commits, the node and edge data would be installed in the database as follows:

CUSTOMER ('Joe', 'Joe Edwards', '10 Station Rd.')
"Order" ('Ord201')
WOODSCREW ('16/50x100')
WALLPLUG ('Fiber 12cm')
RUBBERGLUE ('500ml')
ORDERED ('82', 'Joe', 'Ord201', date'22/11/2002')
ITEM ('83', 'Ord201', '16/50x100', 5) ('84', 'Ord201', 'Fiber 12cm', 5) ('85', 'Ord201', '500ml', 1)

This is satisfyingly neat. We see that while the metadata flag NodeType gave the node type a first column ID that is a primary key, the metadata flag EdgeType has given the edge types three initial columns: ID, a primary key; LEAVING, a foreign key to the leaving node type; and ARRIVING, a foreign key to the arriving type. Note also that ITEM's arriving type is the new anonymous type &1. It is noteworthy that this mechanism allows schemas to evolve bottom-up during the database design, as envisaged in [2]. The normal schema-first strategy is still available, and the two approaches can be combined for convenience. Either way, the database will contain a rigorous and enforceable relational schema at all stages, since any declarations that would not be enforceable will be rejected before being committed to the database. During refinement of the model, there are opportunities for adding constraints and other metadata. Such details, and the enhanced diagnostics mentioned above, are the subject of ongoing research. The conference presentation will provide an opportunity for a demonstration of the process and more details on MATCH.

## IV Conclusions

The purpose of this paper was to report some progress in our Typed Graph Modeling workstream. The work is available on GitHub [8] for free download and use and is not covered by any patent or other restrictions. The current "alpha" state of the software implements all of the above ideas but lacks the suggested interaction with the model designer. The test suite includes a version of the running example together with others that demonstrate the integration of the relational and typed graph model concepts in the Pyrho DBMS.
2302.05523
RFI Flagging in Solar and Space Weather Low Frequency Radio Observations
Radio spectroscopy provides a unique inspection perspective for solar and space weather research, which can reveal the plasma and energetic electron information in the solar corona and inner heliosphere. However, Radio-Frequency Interference (RFI) from human activities affects sensitive radio telescopes, and significantly affects the quality of observation. Thus, RFI detection and mitigation for the observations is necessary to obtain high quality, science-ready data. The flagging of RFI is particularly challenging for the solar and space weather observations at low frequency, because the solar radio bursts can be brighter than the RFI, and may show similar temporal behavior. In this work, we investigate RFI flagging methods for solar and space weather observations, including a strategy for AOFlagger, and a novel method that makes use of a morphology convolution. These algorithms can effectively flag RFI while preserving solar radio bursts.
Peijin Zhang, André R. Offringa, Pietro Zucca, Kamen Kozarev, Mattia Mancini
2023-02-10T22:03:50Z
http://arxiv.org/abs/2302.05523v1
# RFI Flagging in Solar and Space Weather Low Frequency Radio Observations ###### Abstract Radio spectroscopy provides a unique inspection perspective for solar and space weather research, which can reveal the plasma and energetic electron information in the solar corona and inner heliosphere. However, Radio-Frequency Interference (RFI) from human activities affects sensitive radio telescopes, and significantly affects the quality of observation. Thus, RFI detection and mitigation for the observations is necessary to obtain high quality, science-ready data. The flagging of RFI is particularly challenging for solar and space weather observations at low frequency, because the solar radio bursts can be brighter than the RFI, and may show similar temporal behavior. In this work, we investigate RFI flagging methods for solar and space weather observations, including a strategy for AOFlagger, and a novel method that makes use of a morphology convolution. These algorithms can effectively flag RFI while preserving solar radio bursts. keywords: Sun: radio radiation, methods: data analysis ## 1 Introduction High-resolution solar and space weather radio spectroscopy can provide rich information for the study of transient short-term radio bursts, the density fluctuations of the inner heliosphere, ionospheric detection, as well as long-term variations. Solar activity and its interaction with the plasma of the inner heliosphere generate radio emissions, which can be used to characterize the energy-releasing process and the plasma properties in the solar atmosphere, as well as in interplanetary space. For example, tied-array-beam observations were used for detailed dynamic spectrum studies of Type III solar radio bursts (Zhang et al., 2019). The fine structures in a solar Type II radio burst can indicate the details of the shock evolution in a solar eruption (Magdalenic et al., 2020). Also, the interplanetary scintillation (IPS) method uses the spectroscopy of a static astronomical source (e.g., Cygnus A, Cassiopeia A) to diagnose the small-scale density structures in the inner heliosphere and ionosphere from electron density fluctuations in the dynamic spectra. The Low-Frequency Array (LOFAR) (van Haarlem et al., 2013) has proven to be a useful instrument for the study of solar activity and space weather (Oberoi and Kasper, 2004; Dabrowski et al., 2016). The project 'LOFAR for Space Weather' (Carley et al., 2020, LOFAR4SW) aimed at providing an upgraded design to make LOFAR capable of performing continuous observations of the Sun and space weather during the possible time windows. Recently, a new project led by Dr. Pietro Zucca at ASTRON has started: 'Incremental Development Of LOFAR Space weather' (IDOLS). IDOLS provides all-time single-station observations of the Sun and space weather, and a daily interferometric image of the Sun with arrays of both Low Band Antennas (LBA) and High Band Antennas (HBA). For the dynamic spectrum observations, IDOLS uses a single core station (ID:CS032) detached from the LOFAR network to perform solar observations during the day and scintillation observations during the night. With the current development of IDOLS, a growing amount of spectroscopy data will be available for solar and space weather monitoring and research work. Unfortunately, radio frequency interference (RFI) can significantly damage the quality of observations, especially in the low frequency range (<300 MHz) (Offringa et al., 2013).
RFI signal contamination is a long-standing problem in radio astronomy, and it is becoming more challenging because of the increased radio spectrum occupancy by technology. RFI flagging has been widely studied and many different algorithms have been developed. AOFlagger (Offringa et al., 2012) is a flagger based on filtering, combinatorial thresholding, and morphological operations. AOFlagger is highly efficient and widely used in low-frequency radio observations, and is part of the default LOFAR preprocessing pipeline for interferometric imaging observations. However, it is less commonly used for transient processing. Recently, machine learning techniques have been applied to RFI detection (Zhang et al., 2019). Yang et al. (2020) proposed a machine learning model called RFI-Net. The method is implemented in the RFI flagging of the Five-hundred-meter Aperture Spherical radio Telescope (FAST). Sun et al. (2022) developed an RFI-flagging tool based on Convolutional Neural Networks (CNN). The machine learning methods usually require large training datasets, and the flagging process is usually computationally intensive.

The RFI flagging task for solar and space weather observations is different from the flagging for imaging observations with long integrations (e.g., deep extragalactic surveys). For such imaging observations, the signal of the target object is usually static or slowly varying. Therefore, threshold-based methods are effective in marking the very bright and fast varying RFI. For example, the SumThreshold step in AOFlagger (Offringa et al., 2010) can effectively identify the samples with a high probability of being RFI. Then, the scale-invariant rank (SIR) operation is used to detect temporally and spectrally nearby weaker RFI (Offringa et al., 2012; van de Gronde et al., 2016). These signal characteristics differ from solar and space weather observations, where the solar radio bursts can be strong (sometimes much brighter than the RFI) and change rapidly in both time and frequency. A flagging strategy designed for non-transient observations will classify the samples with solar radio bursts as RFI.

This work addresses the problem of RFI flagging in solar and space weather observations. We propose a flagging method based on morphological feature matching in the dynamic spectrum, an updated flagging strategy for AOFlagger, and a hybrid method combining the first two. We also test the flagging precision of these methods. This paper is arranged as follows: in Section 2, we present the algorithms and how they are implemented in the data processing of LOFAR solar and space weather observations. In Section 3, we show the results of the flagging on observational data and simulated dynamic spectra. Section 4 presents the implementation of the method. A summary and discussion are given in Section 5.

## 2 Algorithm

The dynamic spectrum data processing for solar and space weather observations of LOFAR includes two steps: flagging and averaging. The purpose of flagging and averaging for solar and space weather spectroscopy is to obtain a smoother and clearer dynamic spectrum to study the spectral features (e.g., the frequency drift rate of solar Type III radio bursts, or the bandwidth and duration of solar noise storms). The flagging step creates a binary map that marks the RFI samples, and the averaging step downsamples the dynamic spectrum to a lower resolution and smaller size, to ease data distribution and transfer.
The averaging discards the samples marked as RFI. This section presents the details of the data processing steps for the dynamic spectrum of solar and space weather observations.

### Flagging

In this sub-section, we introduce the RFI identification method.

#### 2.1.1 ConvRFI

The idea of using a morphological convolution for RFI detection is based on describing the RFI morphology (or shape) with convolution kernels, and applying these kernels in convolution operations over the dynamic spectrum. In the convolution result array, the pixels which match the morphology described by the kernel will be positive, indicating detected RFI. We select convolutional kernels that are designed to match the typical behavior of RFI. In observations, the most common types of RFI are local lightning storms and communication transmissions. The spectrum of lightning is wide-band and transient; it appears as features parallel to the frequency axis in the dynamic spectra, while the spectrum of communication transmissions is narrow-band, appearing parallel to the time axis in the dynamic spectra. Considering these characteristics of RFI, the kernels are prepared as shown in Fig. 1. Each kernel (\(K_{i}\)) is a two-valued 2D array: each element is set either to the value -1, or to \(a_{i}\), a parameter larger than zero that represents the sensitivity of the kernel for flagging. Larger values of \(a_{i}\) correspond to a higher sensitivity of flagging. The flagging scheme \(B_{\text{flag}}\) can be expressed as:

\[B_{\text{flag}}=B_{f0}\cup B_{f1}\cup B_{f2}\cup B_{f3}\cup B_{f4}\cup B_{f5} \tag{1}\]

in which,

\[B_{i} =h(conv(D,K_{i}))\quad[i=0,1,2,3,4,5] \tag{2}\]
\[B_{f0} =corr(B_{0},h(K_{0}))\]
\[B_{f1} =corr(B_{1},h(K_{1}))\]
\[B_{f2} =corr((B_{2}\cap\overline{B_{0}}),h(K_{2}))\]
\[B_{f3} =corr((B_{3}\cap\overline{B_{0}}),h(K_{3}))\]
\[B_{f4} =corr((B_{4}\cap\overline{B_{1}}),h(K_{4}))\]
\[B_{f5} =corr((B_{5}\cap\overline{B_{1}}),h(K_{5}))\]

where \(D\) is the dynamic spectrum, \(K_{i}\) are the kernels shown in Fig. 1, \(B_{i}\) are the convolution results with kernel \(K_{i}\), indicating the detection points, \(B_{fi}\) are the binary-value arrays of flagging corresponding to the morphology described by kernel \(K_{i}\), \(\overline{B_{i}}\) is the logical negation of \(B_{i}\), and \(h(x)\) is the Heaviside function. \(conv\) and \(corr\) are the convolution and correlation operations defined as:

\[conv(D,K)[i,j]=\sum_{m=0}^{m=N}\sum_{n=0}^{n=N}D\left[i-\left(m-\frac{N-1}{2}\right),j-\left(n-\frac{N-1}{2}\right)\right]\,K[m,n] \tag{3}\]

\[corr(D,K)[i,j]=\sum_{m=0}^{m=N}\sum_{n=0}^{n=N}D\left[i+\left(m-\frac{N-1}{2}\right),j+\left(n-\frac{N-1}{2}\right)\right]\,K[m,n] \tag{4}\]

with \(N\) as the dimension of the kernel: \(N=3\) for \(K_{0-1}\), and \(N=5\) for \(K_{2-5}\). Eq. (2) shows that, if kernels \(K_{2-5}\) are convolved with a very bright line, they will produce RFI-positive samples, meaning the strong line-like features marked by \(K_{0,1}\) will also be flagged as edge-like features by \(K_{2-5}\). When the lines are flagged as sharp edges, the samples near the bright lines will also be marked as RFI. This will cause over-flagging near the strong line features if we directly take the union of the flagging results of the line features and the sharp edges.
Thus, we have adopted the strategy of first performing the line-feature convolutions to flag the line features (\(B_{f0,1}\)), and then running the sharp-edge convolutions excluding the line-feature points, as expressed in Eq. (2) for \(B_{f2}\)-\(B_{f5}\). With a dynamic spectrum \(D\) as input, after the operation described in Eq. (1), we obtain a binary flagging map \(B_{\text{flag}}\), with RFI pixels set to 1 and the remaining points set to 0. The sensitivity parameters \(a_{i}\) can be used to control the sensitivity to each morphology; for example, a large value of \(a_{0}\) and a small value of \(a_{1}\) make the algorithm more sensitive to features parallel to the time axis but less sensitive to features parallel to the frequency axis. The most computationally intensive tasks in the flagging steps are the convolution and correlation operations on 2D arrays. For this, we use the Conv2D operation implemented in PyTorch, a machine learning framework providing a highly optimized implementation of convolution operations on both CPU and GPU. The performance and resource requirements are presented in detail in Sec. 3.4.

#### 2.1.2 AOFlagger Local-RMS

The RFI selection of AOFlagger is based on weighted low-pass filtering, combinatorial thresholding (SumThreshold), and morphological expansion (the SIR operator). The thresholding step is sensitive to large values; it is thus likely to select all samples during bright solar radio bursts. To achieve RFI flagging for dynamic spectra with strong radio bursts, we use relative thresholding with reference to the root-mean-square (RMS) of nearby samples, where the RMS is weighted by a Gaussian kernel excluding the flagged samples. The local-RMS method helps avoid flagging bright samples in the radio bursts as RFI. A further option is to use the detection results of ConvRFI as the initial flags for the local-RMS AOFlagger strategy. This will be referred to as the 'hybrid method' in the following sections.

### Averaging

After flagging, to down-sample the data into a smaller size, one can perform the averaging with the following expression:

\[I_{avg}[n,m]=\frac{\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}(I_{raw}\times\overline{B_{flag}})[n\times N+i,m\times M+j]}{\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}\overline{B_{flag}}[n\times N+i,m\times M+j]} \tag{5}\]

where the multiplication and division operations in this equation are element-wise, and \(I_{avg}[n,m]\) is the averaged dynamic spectrum at time index \(n\) and frequency index \(m\). Within \(\overline{B_{flag}}\), the RFI-positive samples correspond to the value '0', and the RFI-negative samples correspond to the value '1'. Thus, with this method, the flagged samples are given zero weight in the averaging. The averaging window size is \(N\) and \(M\) in time and frequency, respectively. As shown in Eq. (5), the averaging step and averaging window are of the same length. The averaging described in Eq. (5) can be expressed in the form of a convolution operation:

\[I_{avg}=\frac{conv(I_{raw}\times\overline{B_{flag}},U_{MN};\text{stride}=[M,N])}{conv(\overline{B_{flag}},U_{MN};\text{stride}=[M,N])}, \tag{6}\]

where \(U_{MN}\) is a matrix with all elements set to 1 and a size of \(M\times N\), and the stride length defines how many steps we take when sliding the convolution core (e.g., \(U_{MN}\)) across the array (e.g., \(\overline{B_{flag}}\)). This convolution operation can also be implemented with the Conv2D module in PyTorch.
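For illustration, here is a minimal PyTorch sketch of these two convolution-based steps: a single line-detection kernel in the spirit of Eq. (2), and the zero-weight averaging of Eq. (6). The window sizes and the sensitivity value are placeholders, and this is not the IDOLS pipeline code:

```python
import torch
import torch.nn.functional as F

def flag_lines(dyn_spec, a=1.66):
    """Flag features parallel to the time axis with a 3x3 line kernel (K_0-like).

    dyn_spec: 2D tensor (freq, time) of normalized intensities.
    Returns a {0, 1} mask of the same shape. Following the paper's scheme,
    the centre row of the kernel is the sensitivity a, all other elements
    are -1. (PyTorch's conv2d is a cross-correlation, which coincides with
    the convolution of Eq. (3) for this 180-degree-symmetric kernel.)
    """
    k = torch.full((1, 1, 3, 3), -1.0)
    k[0, 0, 1, :] = a
    resp = F.conv2d(dyn_spec[None, None], k, padding=1)
    return (resp > 0).float()[0, 0]   # Heaviside h(.) of Eq. (2)

def masked_average(dyn_spec, flags, m=16, n=64):
    """Eq. (6): down-sample by (m, n) in (freq, time), giving the flagged
    samples zero weight; fully flagged windows come out as 0."""
    keep = (1.0 - flags)[None, None]
    ones = torch.ones(1, 1, m, n)
    num = F.conv2d(dyn_spec[None, None] * keep, ones, stride=(m, n))
    den = F.conv2d(keep, ones, stride=(m, n)).clamp(min=1.0)
    return (num / den)[0, 0]
```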
With the above flagging and averaging methods, the procedure is: flag at high resolution, then average down to low resolution. By flagging the higher-resolution input data, the RFI (narrow in time and frequency) contaminates a smaller portion of the dynamic spectrum. As a result, the averaging result has a larger contribution from RFI-free samples.

## 3 Flagging Results

In this section, we present the test results of the various methods described in the previous section.

### Dataset

The algorithms presented above are designed and tested for high resolution dynamic spectra at low frequency (10-90 MHz). We use solar observations with the low-band antennas (LBA) of LOFAR. The time resolution of the raw data is 10.5 ms. The observation covers 10-90 MHz with 6400 frequency channels, giving a frequency resolution of 12.2 kHz. Values are stored as single precision floating point values. The data rate is 8.19 GB/h, saved in HDF5 files. For the all-day spectroscopy observations of the project IDOLS, this corresponds to approximately 0.7 TB of full Stokes (I,Q,U,V) data per day. To test our method, we use a dynamic spectrum recorded with the LBA on 19-May-2022 near 07:00 UT. The raw dynamic spectrum is shown in Fig. 2(A). The dynamic spectrum shows a strong solar radio burst, as well as strong RFI.

### Flagging on observed RFI

Fig. 2 (B,C,D) shows the flagging results (masks) of ConvRFI with different parameter combinations. Panel (B) uses \(a_{i}\) = [0.01,0.01,0.45,0.45], which flags mainly on sharp edge features without triggering specifically on line-like features. Panel (C) uses \(a_{i}\) = [1.66,1.66,0.01,0.01], which flags mainly on line-like features without triggering specifically on sharp-edge features. Panel (D) uses \(a_{i}\) = [1.66,1.66,0.45,0.45], which combines sharp edge detection and line detection. Comparing the three parameter combinations, we can see that the flagging result is best when \(a_{i}\) = [1.66,1.66,0.45,0.45] is used (panel D). If we compare panel (B) and panel (D) of Fig. 2, for frequencies near 29 MHz and 40 MHz, we can clearly see the over-flagging of the sharp edge feature without line feature flagging. The result of using combined parameters (panel D) shows accurate flagging of line and sharp edge features. In our pipeline, we therefore use the parameter combination \(a_{i}\) = [1.66,1.66,0.45,0.45] for LOFAR-LBA flagging. As a comparison, we also applied AOFlagger using the default flagging strategy without further parameter tweaking, as well as the AOFlagger local-RMS method. As shown in Fig. 3, with the default flagging strategy of AOFlagger in panel (B), the RFI pixels are well-flagged, but a significant part of the solar radio bursts is also flagged out as RFI.

Figure 1: Convolution kernels. \(a_{0-3}\) control the sensitivity of RFI detection.

Figure 2: ConvRFI applied to the observed dynamic spectrum with different parameter combinations. The white patches indicate the mask of flagged time-frequency samples. Panel (A) presents the dynamic spectrum without flagging, panel (B) shows the result of sharp edge detection, panel (C) shows the horizontal and vertical line detection, and panel (D) uses a combination.

Figure 3: Result of various flagging strategies. Panel (A) is the observed dynamic spectrum, panels (B-E) are the flagging results, and panels (A1-E1) zoom in on the magenta rectangles in panels (A-E).
This demonstrates that RFI detection for high-time-resolution data, which may contain transients of interest, requires the design of different methods. One method that we test is to adapt the AOFlagger strategy so that the flag thresholds of the SumThreshold step are relative to the local RMS. We will refer to this method as AOFlagger local-RMS. With this strategy, most samples of the solar radio burst are preserved (not flagged), as shown in panel (C) of Fig. 3. Comparing the results of AOFlagger local-RMS and ConvRFI, we can see that the vertical RFI lines are better detected by AOFlagger local-RMS, but not fully masked by ConvRFI. ConvRFI tends to ignore weak RFI within the solar radio bursts, but preserves more radio burst samples. As shown in panels (C1,D1,E1), by using a hybrid method where the flag output of ConvRFI is used as input for the AOFlagger local-RMS method, the result has better flagging coverage of both the slash-line-shaped RFI and the narrowband RFI.

Fig. 4 shows the averaged dynamic spectrum with and without flagging. The averaging window is 64 time samples and 16 frequency channels, respectively. The RFI between 20-30 MHz and at 40 MHz is reduced, and the effect of RFI on the radio burst is reduced as well. Fig. 5 shows the power spectra of raw data taken from a 15 s integration of an interval around the burst and of a quiet interval. Before flagging, the spectrum shows residual spikes due to RFI in both intervals. After flagging, the power spectrum is smoother.

### Evaluation

To test the flagging performance of the methods against a ground truth, we overlay simulated RFI (hera_sim) (Parsons et al., 2012) on a segment of the dynamic spectrum with no visible RFI. Fig. 6 shows an example of simulated RFI with an intensity of 1% of the peak intensity in the dynamic spectrum, and the flagging results of the four methods. With the simulated RFI, we can compare the ground-truth RFI mask with the flagging result, from which the correctness of RFI flagging can be classified into four categories: true positive (TP) is the number of successfully detected RFI samples, false positive (FP) is the number of RFI-free samples flagged as RFI, true negative (TN) is the number of RFI-free samples not flagged as RFI, and false negative (FN) is the number of RFI samples not flagged. The ratios of these quantities,

\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{7}\]

\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{8}\]

\[\text{Accuracy}=\frac{\text{TP}}{\text{TP}+\text{FN}+\text{FP}} \tag{9}\]

are used to evaluate the flagging results.

Figure 4: The averaged dynamic spectrum. Panel (A): without flagging; panel (B): with flagging.

Figure 5: The raw power spectra of the quiet time and also the burst time.

Figure 6: The RFI simulation overlapped on the observed radio bursts. Panel (G) shows the ground truth mask of flagging. Panel (A) shows the overlapped dynamic spectrum, panels (B, C, D, E) are the flagging results of the four flagging methods indicated in the label.

We generate the RFI at different intensities to benchmark the performance of the flagging methods under different RFI conditions. The RFI relative intensity is defined as the intensity of the generated RFI normalized by the peak radio burst intensity. For example, RFI relative intensity = 0.1 means the received RFI flux is 0.1 times the radio burst peak flux. The resulting precision, recall, and accuracy are shown in Fig. 7. We see that the AOFlagger local-RMS and hybrid methods have very similar results for all three measures. The AOFlagger default (not optimized) method has very low precision but high recall in the test result, representing a high over-flagging ratio. In terms of accuracy, the AOFlagger local-RMS, ConvRFI and hybrid methods all score above 0.96 for the tested relative RFI intensity range (\(10^{-4}\sim 1\)). As shown in the precision panel of Fig. 7, AOFlagger local-RMS and hybrid perform significantly better in the weak-RFI situation, while ConvRFI scores higher for stronger RFI (relative intensity > 0.01).

### Efficiency

The computational efficiency of the AOFlagger-based methods is well described in Offringa et al. (2012), and more recently in Offringa et al. (2023). AOFlagger is shown there to reach a flagging speed of 370 MB/s on an 8-core desktop machine. In this section, we provide a performance test of the ConvRFI method. The flagging task mainly consumes three types of resources: input-output (IO), memory, and computation. For dynamic spectrum data stored as HDF files, the IO time consumption refers to the time used to transfer data from disk to host memory (RAM), and the performance is determined by the disk reading speed. The memory usage is the total size of the temporary variables used for flagging (also for GPU computation). For GPU computations, an extra step of transferring data from host memory to GPU memory is added, which can also be time-consuming. The time consumption of the computation part is determined by the algorithm efficiency and the computing resources (cores of the CPU or GPU). To test the efficiency, we use a data segment of \(0.32\times 10^{6}\) time slots, with 6400 frequency channels; the total size is about 7.6 Gigabytes. We tested the algorithm on two types of machines: an 8-core CPU laptop with GPU and a server with 128 cores. The benchmark result is shown in Table 1. The first two rows represent runs on the laptop, and the third row is on the server. From the benchmark of the three rounds of processing, we can see that the GPU run has the best overall performance, although it requires an extra step of transferring data from host memory (RAM) to GPU memory (VRAM). For a data segment of 7.6 GB, the GPU version took 21.4 seconds in total, of which the time consumption for IO, flagging, and averaging took similar amounts of time - each about 5 seconds. The pure computational speed of flagging is about 1.5 GB/s. For the CPU versions with both 8 and 128 cores, the most time-consuming part is the flagging: the pure computational speed of flagging is about 0.05 GB/s on an 8-core CPU and 0.23 GB/s on a 128-core CPU. The IO performance is much better on the server due to higher-speed hard drives. Overall, the flagging speed including the IO and averaging time can reach 0.35 GB/s on the GPU laptop, and 0.2 GB/s on the CPU server.

### RFI ratio

With the tested methods, most RFI can be detected and the spectrum of interest can be extracted. We can then determine the residual spectral coverage after RFI flagging. To assess the residual spectral coverage for the solar and space weather observations of LOFAR, we run flagging on the spectrum data of both daytime solar observations and nighttime ionospheric scintillation observations targeting Cassiopeia A; in this test we use ConvRFI for its better precision with strong RFI. The data was recorded by the LOFAR core station CS032LBA.
The nighttime observations operate in the frequency range of 25-65 MHz; the daytime observations operate in the frequency range of 10-90 MHz. As shown in Fig. 8, in the major part of the time-frequency domain, the RFI ratio is below 2% (indicated by the blue area), which we define as 'high quality dynamic spectrum'. In the lower frequency range (<30 MHz), the quality is much worse: the RFI ratio reaches 10%-20% near 30 MHz, and can exceed 40% below 20 MHz. There is persistent RFI near 28, 40, and 80 MHz. Comparing Fig. 8 and Fig. 4, for the frequency range above 30 MHz, the RFI ratio is mostly below 5%, and the features of the radio burst are well presented. In the frequency range of 20-30 MHz, the RFI ratio is about 5%-30%, and we can see some faint artifacts in the flagged and averaged dynamic spectrum. For the frequency range below 20 MHz, a major part of the spectrum has an RFI ratio above 40%, due in part to ionospheric reflection and absorption. Therefore, most of the solar radio bursts cannot be resolved below 20 MHz.

## 4 Implementation

We implement RFI detection in the pre-processing pipeline of the project 'Incremental Development of LOFAR for Space Weather' (IDOLS). We select different methods for different scenarios. The data processing procedure is shown in Fig. 9. As LOFAR is capable of simultaneous multi-beam observation (Mol & Romein, 2011), the telescope provides simultaneous solar and calibrator observations. The calibrator observation is processed with the AOFlagger default pipeline, since the calibrator is static in flux. It is then averaged in time to get the bandpass response of the calibrator (\(\text{Obs}_{\text{cal}}(f)\)). Combining \(\text{Obs}_{\text{cal}}(f)\) with the spectrum flux model (\(\text{Model}_{\text{cal}}(f)\)) of the calibrator (e.g., Perley & Butler, 2017), we can apply the relative calibration for the solar observation with

\[\text{Flux}_{\text{sun}}(t,f)=\frac{\text{Model}_{\text{cal}}(f)}{\text{Obs}_{\text{cal}}(f)}\times\text{Obs}_{\text{sun}}(t,f).\]

For the target observation, the flagging method depends on the objective. ConvRFI is used for solar radio bursts, in which case weak RFI removal is not strongly required and ConvRFI has higher precision with strong RFI (shown in Fig. 7). The hybrid method is used for quiet-Sun fluctuation and scintillation studies, which are not sensitive to dynamic spectrum completeness but are sensitive to weak RFI.
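As an illustration, here is a minimal sketch of this relative calibration step, assuming the time-averaged bandpass and the flux model are sampled on the same frequency grid; the function and array names are ours, not the pipeline's:

```python
import numpy as np

def calibrate_solar(obs_sun, obs_cal, model_cal):
    """Relative flux calibration of the solar dynamic spectrum.

    obs_sun:   observed solar dynamic spectrum, shape (time, freq).
    obs_cal:   time-averaged calibrator bandpass Obs_cal(f), shape (freq,).
    model_cal: calibrator flux model Model_cal(f), shape (freq,).
    Returns Flux_sun(t, f) following the equation above.
    """
    gain = model_cal / obs_cal        # Model_cal(f) / Obs_cal(f)
    return obs_sun * gain[np.newaxis, :]
```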
This makes real-time flagging possible for the solar and space weather dynamic spectrum observations of LOFAR. The method ConvRFI is based on morphological convolution, equivalent to matched filters. For now, considering that the features of RFI are usually line-like with sharp edges, we have implemented the corresponding 6 convolutional cores. This can easily be extended to mark other features by changing or appending convolutional cores accordingly. Another avenue for RFI flagging would be to apply machine learning (ML) methods. Supervised ML methods are more flexible, because the flagging result of a machine learning model depends on the training sets; thus, the model can be fed a given type of data with a given type of RFI feature, and trained to adapt to a particular application scenario. Compared to machine learning based methods, the algorithm-based methods (such as AOFlagger and ConvRFI) have the advantages of lower complexity, higher computational efficiency, robustness, and no need for training sets. Thus, algorithmic approaches are widely implemented in the data processing pipelines of radio telescopes like LOFAR and MWA.

## Data Availability

The dynamic spectrum data of LOFAR (ID: L860566, L861370) is publicly available on the LTA ([http://lta.lofar.org/](http://lta.lofar.org/)) after a period of 1-year data protection according to the LOFAR data policy. The data from recent events are available on request to the author.

## Acknowledgements

This project is mainly supported by the STELLAR project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 952439. K. Kozarev acknowledges support from the Bulgarian National Science Fund, VIHREN program, under contract KP-06-DV-8/18.12.2019.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline [ms](10\({}^{-3}\)s) & IO & RAM to VRAM & ConvRFI & VRAM to RAM & Averaging & Total \\ \hline GPU RTX3060 & 5557.9 & 2549.5 & 5012.8 & 2937.8 & 5336.5 & 21394.5 \\ CPU 8-core & 5128.7 & - & 148420.3 & - & 4989.1 & 158538.1 \\ CPU 128-core & 2332.4 & - & 32790.5 & - & 1111.1 & 36234.0 \\ \hline \end{tabular} \end{table} Table 1: The time cost of ConvRFI to process 7.6 GB of data, with an input data shape of 320000 time slots and 6400 frequency channels, including the data reading (IO), the transfer time between CPU and GPU, and the step of averaging (downsampling) in both time and frequency. The output of the execution is a binary map with the RFI time-frequency points set to '1' and all other points set to '0', together with a small-size averaged dynamic spectrum.

Figure 8: The RFI ratio of the averaged dynamic spectrum.

Figure 7: Evaluation results of the 4 methods.
2310.05732
Improved Scheduling with a Shared Resource
We consider the following shared-resource scheduling problem: Given a set of jobs $J$, for each $j\in J$ we must schedule a job-specific processing volume of $v_j>0$. A total resource of $1$ is available at any time. Jobs have a resource requirement $r_j\in[0,1]$, and the resources assigned to them may vary over time. However, assigning them less will cause a proportional slowdown. We consider two settings. In the first, we seek to minimize the makespan in an online setting: The resource assignment of a job must be fixed before the next job arrives. Here we give an optimal $e/(e-1)$-competitive algorithm with runtime $\mathcal{O}(n\cdot \log n)$. In the second, we aim to minimize the total completion time. We use a continuous linear programming (CLP) formulation for the fractional total completion time and combine it with a previously known dominance property from malleable job scheduling to obtain a lower bound on the total completion time. We extract structural properties by considering a geometrical representation of a CLP's primal-dual pair. We combine the CLP schedule with a greedy schedule to obtain a $(3/2+\varepsilon)$-approximation for this setting. This improves upon the so far best-known approximation factor of $2$.
Christoph Damerius, Peter Kling, Florian Schneider
2023-10-09T14:04:48Z
http://arxiv.org/abs/2310.05732v2
# Improved Scheduling with a Shared Resource ###### Abstract We consider the following shared-resource scheduling problem: Given a set of jobs \(J\), for each \(j\in J\) we must schedule a job-specific processing volume of \(v_{j}>0\). A total resource of 1 is available at any time. Jobs have a resource requirement \(r_{j}\in[0,1]\), and the resources assigned to them may vary over time. However, assigning them less will cause a proportional slowdown. We consider two settings. In the first, we seek to minimize the makespan in an online setting: The resource assignment of a job must be fixed before the next job arrives. Here we give an optimal \(e/(e-1)\)-competitive algorithm with runtime \(\mathrm{O}(n\log n)\). In the second, we aim to minimize the total completion time. We use a continuous linear programming (CLP) formulation for the fractional total completion time and combine it with a previously known dominance property from malleable job scheduling to obtain a lower bound on the total completion time. We extract structural properties by considering a geometrical representation of a CLP's primal-dual pair. We combine the CLP schedule with a greedy schedule to obtain a \((3/2+\varepsilon)\)-approximation for this setting. This improves upon the so far best-known approximation factor of 2.

Keywords: Approximation Algorithm · Malleable Job Scheduling · Makespan · List Scheduling · Completion Time · Continuous Linear Program

## 1 Introduction

Efficient allocation of scarce resources is a versatile task lying at the core of many optimization problems. One of the most well-studied resource allocation problems is parallel processor scheduling, where a number of _jobs_ need (typically at least temporarily exclusive) access to one or multiple _machines_ to be completed. The problem variety is huge and might depend on additional constraints, parameters, available knowledge, or the optimization objective (see [16]). In the context of computing systems, recent years demonstrated a bottleneck shift from _processing power_ (number of machines) towards _data throughput_. Indeed, thanks to cloud services like AWS and Azure, machine power is available in abundance while data-intensive tasks (e.g., training LLMs like ChatGPT) rely on a high data throughput. If the bandwidth of such data-intensive tasks is, say, halved, they may experience a serious performance drop, while computation-heavy tasks care less about their assigned bandwidth. In contrast to the number of machines, throughput is (effectively) a _continuously divisible_ resource whose distribution may be easily changed at runtime. This opens an opportunity for adaptive redistribution of the available resource as jobs come and go. Other examples of similarly flexible resources include power supply or the heat flow in cooling systems. This work adapts formal models from a recent line of work on such flexible resources [1, 7, 15] and considers them under new objectives and settings. Classical _resource constrained scheduling_ [10, 13, 17, 18] assumes an ,,all-or-nothing" mentality (a job can be processed if it receives its required resource but is not further affected). One key aspect of the model we consider is the impact of the amount of received resource on the jobs' performance (sometimes referred to as _resource-dependent processing times_ [11, 12, 13, 14]). The second central aspect is that we allow a job's resource assignment to change while the job is running.
### Model Description and Preliminaries

We consider a scheduling setting where a set \(J=[n]:=\{1,2,\ldots,n\}\) of \(n\in\mathbb{N}\) _jobs_ compete for a finite, shared resource in order to be processed. A _schedule_ \(R=(R_{j})_{j\in J}\) consists of an (integrable) function \(R_{j}:\mathbb{R}_{\geq 0}\to[0,1]\) for each \(j\in J\) (the job's _resource assignment_) that returns what fraction of the resource is assigned to \(j\) at time \(t\in\mathbb{R}_{\geq 0}\). We use \(R(t)=(R_{j}(t))_{j\in J}\) to refer to the _resource distribution_ at time \(t\) and \(\bar{R}(t):=\sum_{j\in J}R_{j}(t)\) for the _total resource usage_ at time \(t\). Each \(j\in J\) comes with a (_processing_) _volume_ \(v_{j}\in\mathbb{R}_{\geq 0}\) (the total amount of resource the job needs to receive over time in order to be completed) and a _resource requirement_ \(r_{j}\in[0,1]\) (the maximum fraction of the resource the job can be assigned). We say a schedule \(R=(R_{j})_{j\in J}\) is _feasible_ if:

* the resource is never overused: \(\forall t\in\mathbb{R}_{\geq 0}:\,\bar{R}(t)\leq 1\),
* a job never receives more than its resource requirement: \(\forall t\in\mathbb{R}_{\geq 0}:\,R_{j}(t)\leq r_{j}\), and
* all jobs are completed: \(\forall j\in J:\,\int_{0}^{\infty}R_{j}(t)\,\mathrm{d}t\geq v_{j}\).

For \(j\in J\) we define its _processing time_ \(p_{j}\coloneqq v_{j}/r_{j}\) as the minimum time that \(j\) requires to be completed. See Figure 1(a) for an illustration of these notions. For a schedule \(R=(R_{j})_{j\in J}\) we define \(C_{j}(R)\coloneqq\sup\,\{t\geq 0\mid R_{j}(t)>0\}\) as the _completion time_ of job \(j\in J\). We measure the quality of a schedule \(R\) via its _makespan_ \(M(R)\coloneqq\max\,\{C_{j}(R)\mid j\in J\}\) and its _total completion time_ \(C(R)\coloneqq\sum_{j\in J}C_{j}(R)\). Our analysis additionally considers the _total fractional completion time_ \(C^{F}(R)\coloneqq\sum_{j\in J}C_{j}^{F}(R)\), where \(C_{j}^{F}(R)\coloneqq\int_{0}^{\infty}R_{j}(t)\cdot t/v_{j}\,\mathrm{d}t\) is job \(j\)'s _fractional completion time_.

Relation to Malleable Tasks with Linear Speedup. Our problem assumes an arbitrarily divisible resource, as for example the bandwidth shared by jobs running on the same host. Another common case is jobs that compete for a _discrete_ set of resources, such as a number of available processing units. This is typically modeled by a scheduling problem where a set \(J\) of \(n\) _malleable_ jobs of different sizes \(s_{j}\) (length when run on a single machine) must be scheduled on \(m\) machines. Each machine can process at most one job at a time, but a job \(j\) can be processed on up to \(\delta_{j}\in[m]\) machines in parallel with a linear speedup. Jobs are preemptable, i.e., they can be paused and continued later on, possibly on a different number of machines. See [16, Ch. 25] for a more detailed problem description. This formulation readily maps to our problem by setting \(j\)'s processing volume to \(v_{j}=s_{j}/m\) and its resource requirement to \(r_{j}=\delta_{j}/m\in(0,1]\). The only difference is that our schedules allow for arbitrary resource assignments, while malleable job scheduling requires that each job \(j\) gets an _integral_ number \(\delta_{j}\) of machines (i.e., resource assignments must be multiples of \(1/m\)). However, as observed by Beaumont et al.
[4], fractional schedules can be easily transformed to adhere to this constraint:

_Observation 1_ ([4, Theorem 3, reformulated]).: Consider a feasible schedule \(R\) for a job set \(J\) in which \(j\in J\) completes at \(C_{j}\). Let \(m:=1/\min\{r_{j}\mid j\in J\}\). We can transform each \(R_{j}\) without changing \(C_{j}\) to get \(R_{j}(t)\in\{i/m\mid i\in[m]\cup\{0\}\}\) for any \(t\in\mathbb{R}_{\geq 0}\) and such that each \(R_{j}\) changes at most once between consecutive completion times.

We first consider _online makespan minimization_ (Section 2), where the scheduler must commit to future resource assignments as jobs arrive (as in list-scheduling). Afterwards, we consider _offline total completion time minimization_ (Section 3).

### Related Work

Our model falls into the class of continuous shared-resource job scheduling as introduced in [1] and its variants [7, 15]. These models have the same relation between a job's resource requirement, the assigned resource, and the resulting processing time as we do, but consider only makespan minimization as the objective. The two main differences are that they assumed an additional constraint on the number of machines and considered discrete time slots in which resource assignments may not change. Another closely related model is _malleable_ job scheduling, where the number of machines assigned to a job can be dynamically adjusted over time. If each job \(j\) has its own upper limit \(\delta_{j}\) on the number of processors it can be assigned, the model becomes basically equivalent to our shared-resource job scheduling problem (as discussed at the end of Section 1.1). Drozdowski [9] gave a simple greedy algorithm for minimizing the makespan in the offline setting (see also Section 2). Decker et al. [8] considered total completion time minimization for _identical_ malleable jobs for an otherwise rather general (possibly non-linear) speed-up function. They gave a \(5/4\)-approximation for this setting. Beaumont et al. [4] is closest to our model. In particular, they assumed job-dependent resource limits \(\delta_{j}\) that correspond to our resource requirements. For minimizing weighted total completion time, they used a water-fill approach to prove the existence of structurally nice solutions (cf. our water-filling approach in Section 2). Their main result is a (non-clairvoyant) 2-approximation algorithm for the weighted case. Their algorithm WDEQ assigns each job a number of processors according to its relative weight, but no more than the limit imposed by \(\delta_{j}\). Our results in Section 3 yield an improved approximation ratio of \(3/2+\varepsilon\) at the cost of clairvoyance (i.e., we must know the jobs' volumes and resource requirements). Also, our algorithm only handles the unweighted case. Other related models, such as rigid and moldable scheduling, disallow the resource assignment of a job to be adjusted after it has been started (see [16] for details).

### Our Contribution and Methods

For our model, makespan minimization is known to be offline solvable (see Section 2). We thus concentrate on an online (list-scheduling) setting where jobs are given sequentially and we must commit to a resource assignment without knowing the number of jobs and future jobs' properties. We use a water-filling approach that is known to produce ,,flattest" schedules [4].
We derive properties that are necessary and sufficient for any \(c\)-competitive algorithm by providing conditions on _\(c\)-extendable_ schedules (\(c\)-competitive schedules to which we can add any job while remaining \(c\)-competitive). From this, we derive slightly weaker _universal schedules_ that are just barely \(c\)-extendable and show that schedules derived via water-fill are always flatter than universal schedules. Optimizing the value of \(c\) yields \(e/(e-1)\)-competitiveness. We then show that no algorithm can have a lower competitive ratio than \(e/(e-1)\). Our main result considers _offline total completion time minimization_. We improve upon the so far best result for this variant (a 2-approximation [4]) by providing a \((3/2+\varepsilon)\)-approximation running in time polynomial in \(n\) and \(1/\varepsilon\). The result relies on a continuous linear programming (CLP) formulation for the fractional total completion time, for which we consider primal-dual pairs. The primal solution represents the resource assignments over time, while the dual represents the _priority_ of jobs over time. We then extract additional properties about the primal/dual pair. Roughly, our method is as follows. We draw both the primal and dual solutions into a two-dimensional coordinate system. See Figure 2(b) for an example. We then merge both solutions into a single 3D coordinate system by sharing the time axis and use these solutions as a blueprint for shapes in this coordinate system (see Figure 3). The volumes of these shapes then correspond to parts of the primal and dual objectives. We use a second algorithm called Greedy that attempts to schedule jobs as early as possible. Choosing the better of the CLP and the greedy solution gives us the desired approximation.

## 2 Makespan Minimization

This section considers our resource-aware scheduling problem under the makespan objective. For the offline problem, it is well known that the optimal makespan \(M^{*}(J)\) for a job set \(J=[n]\) with total volume \(V(J)=\sum_{j\in J}v_{j}\) is \(M^{*}(J)=\max\bigl(\{V(J)\}\cup\{p_{j}\mid j\in J\}\bigr)\) and that a corresponding schedule can be computed in time \(\mathrm{O}(n)\) [16, Section 25.6]. The idea is to start with a (possibly infeasible) schedule \(R\) that finishes all jobs at time \(p_{\max}\coloneqq\max\{p_{j}\mid j\in J\}\) by setting \(R_{j}(t)=v_{j}/p_{\max}\) for \(t\in[0,p_{\max})\) and \(R_{j}(t)=0\) for \(t>p_{\max}\). This schedule uses a constant total resource of \(\bar{R}\coloneqq V(J)/p_{\max}\) until all jobs are finished. If \(\bar{R}\leq 1\) (the resource is not overused), this schedule is feasible and optimal (any schedule needs time at least \(p_{\max}\) to finish the ,,longest" job). Otherwise we scale all jobs' resource assignments by \(1/\bar{R}\) to get a new feasible schedule that uses a constant total resource of 1 until all jobs are finished at time \(V(J)\). Again, this is optimal (any schedule needs time at least \(V(J)\) to finish a total volume of \(V(J)\)).

List-Scheduling Setting. Given that the offline problem is easy, the remainder of this section considers the (online) list-scheduling setting. That is, an (online) algorithm \(\mathcal{A}\) receives the jobs from \(J=[n]\) one after another. Given job \(j\in J\), \(\mathcal{A}\) must fix \(j\)'s resource assignment \(R_{j}:\,\mathbb{R}_{\geq 0}\to[0,1]\) without knowing \(n\) or the properties of future jobs. We refer to the resulting schedule by \(\mathcal{A}(J)\).
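Before turning to the online setting, here is a minimal sketch of the offline construction described above (our own illustration, not code from the paper); it returns the optimal makespan together with one constant resource rate per job:

```python
def offline_optimal(jobs):
    """jobs: list of (volume v_j, resource requirement r_j) with 0 < r_j <= 1.

    Returns the optimal makespan M*(J) and a constant rate per job such
    that running job j at its rate on [0, M*) schedules exactly v_j volume.
    """
    p_max = max(v / r for v, r in jobs)   # longest processing time p_j = v_j / r_j
    total = sum(v for v, _ in jobs)       # total volume V(J)
    makespan = max(p_max, total)          # M*(J) = max({V(J)} U {p_j})
    # Feasibility: v_j / M* <= v_j / p_j = r_j, and the rates sum to
    # V(J) / M* <= 1, so neither constraint is violated.
    rates = [v / makespan for v, _ in jobs]
    return makespan, rates

# Job set of Figure 2: M*(J) = max(9, 11) = 11.
M, rates = offline_optimal([(1, 3/4), (4, 1/2), (6, 2/3)])
```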
As usual in the online setting without full information, we seek to minimize the worst-case ratio between the costs of the computed and optimal schedules. More formally, we say a schedule \(R\) for a job set \(J\) is _\(c\)-competitive_ if \(M(R)\leq c\cdot M^{*}(J)\). Similarly, we say an algorithm \(\mathcal{A}\) is _\(c\)-competitive_ if for any job set \(J\) we have \(M\bigl(\mathcal{A}(J)\bigr)\leq c\cdot M^{*}(J)\).

An Optimal List-Scheduling Algorithm. Water-filling algorithms are natural greedy algorithms for scheduling problems with a continuous, preemptive character. They often yield structurally nice schedules [2; 4; 6]. In this section, we show that water-filling (described below) yields a simple, optimal online algorithm for our problem.

Theorem 2.1: _Algorithm WaterFill has competitive ratio \(e/(e-1)\) for the makespan. No deterministic online algorithm can have a lower worst-case competitive ratio._

We first describe a single step WFStep(\(R\),\(\iota\),\(C\)) of WaterFill (illustrated in Figure 1(b)). It takes a schedule \(R=(R_{j})_{j\in J}\) for some job set \(J\), a new job \(\iota\notin J\), and a _target completion time_ \(C\). Its goal is to _augment_ \(R\) by \(\iota\) _with completion time_ \(C\), i.e., to feasibly complete \(\iota\) by time \(C\) without altering the resource assignments \(R_{j}\) for any \(j\in J\). To this end, define the _\(h\)-water-level_ \(\mathrm{wl}_{h}(t)\coloneqq\min\{r_{\iota},\max\{h-\bar{R}(t),0\}\}\) at time \(t\) (the resource that can be assigned to \(\iota\) at time \(t\) without exceeding total resource \(h\)). Note that \(\iota\) can be completed by time \(C\) iff \(\int_{0}^{C}\mathrm{wl}_{1}(t)\,\mathrm{d}t\geq v_{\iota}\) (the total leftover resource suffices to complete \(\iota\)'s volume by time \(C\)). If \(\iota\) cannot be completed by time \(C\), WFStep(\(R\),\(\iota\),\(C\)) _fails_. Otherwise, it _succeeds_ and returns a schedule that augments \(R\) with the resource assignment \(R_{\iota}=\mathrm{wl}_{h^{*}}\) for job \(\iota\), where \(h^{*}\coloneqq\inf_{h\in[0,1]}\{h\mid\int_{0}^{C}\mathrm{wl}_{h}(t)\,\mathrm{d}t\geq v_{\iota}\}\) is the smallest water level at which \(\iota\) can be scheduled. WaterFill is defined recursively via WFStep. Given a job set \(J=[n]\), define \(H_{j}\coloneqq M^{*}([j])\cdot e/(e-1)\) as the target completion time for job \(j\in J\) (remember that \(M^{*}([j])\) can be easily computed, as described at the beginning of this section). Assuming WaterFill computed a feasible schedule \(R^{(j-1)}\) for the first \(j-1\) jobs (with \(R^{(0)}(t)=0\) for all \(t\in\mathbb{R}_{\geq 0}\)), we set \(R^{(j)}\coloneqq\mathrm{WFStep}(R^{(j-1)},j,H_{j})\). If this step succeeds, the resulting schedule is clearly \(e/(e-1)\)-competitive by the choice of \(H_{j}\). The key part of the analysis is to show that these water-filling steps indeed always succeed. We start with the observation that water-fill schedules always result in ,,staircase-like" schedules (see Figure 1(b)), a fact also stated in [4] (using a slightly different wording).

_Observation 2_ ([4, Lemma 3]).: Consider a schedule \(R\) whose total resource usage \(\bar{R}\) is non-increasing (piecewise constant). If WFStep(\(R\),\(\iota\),\(C\)) successfully augments \(R\) by a job \(\iota\), the resulting total resource usage is also non-increasing (piecewise constant).
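To make the water-filling step concrete, the following sketch is our own illustration (not the paper's code); it assumes the total usage \(\bar{R}\) on \([0,C)\) is stored as a non-increasing step function (cf. Observation 2) and approximates \(h^{*}\) by bisection:

```python
def wf_step(steps, r, v, eps=1e-12):
    """WFStep for a new job iota with resource requirement r and volume v.

    steps: list of (length, height) pieces describing the total usage
    bar{R} on [0, C), with non-increasing heights. Returns iota's resource
    assignment as (length, rate) pieces, or None if the step fails.
    """
    def volume_at(h):
        # volume schedulable for iota under water level h:
        # integral of wl_h(t) = min(r, max(h - bar{R}(t), 0)) over [0, C)
        return sum(length * min(r, max(h - height, 0.0))
                   for length, height in steps)

    if volume_at(1.0) < v - eps:
        return None                    # WFStep(R, iota, C) fails
    lo, hi = 0.0, 1.0                  # bisect for the smallest level h*
    for _ in range(100):
        mid = (lo + hi) / 2
        if volume_at(mid) >= v:
            hi = mid
        else:
            lo = mid
    return [(length, min(r, max(hi - height, 0.0))) for length, height in steps]

# Example: usage is 1.0 for one time unit, then 0.25 for three units; add
# a job with r = 1/2 and v = 1 that must finish by C = 4.
assignment = wf_step([(1.0, 1.0), (3.0, 0.25)], r=0.5, v=1.0)
```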
Next, we formalize that WFStep generates the "flattest" schedules: if there is _some_ way to augment a schedule by a job that completes until time \(C\), then the augmentation can be done via WFStep.

Definition 1: The _upper resource distribution_ \(A_{R}^{C}(y)\) of a schedule \(R\) is the total volume above height \(y\) before time \(C\) in \(R\). Given schedules \(R\), \(S\) (for possibly different job sets), we say \(R\) is _flatter_ than \(S\) (\(R\preceq S\)) if \(A_{R}^{C}(y)\leq A_{S}^{C}(y)\) for all \(C\in\mathbb{R}_{\geq 0}\), \(y\in[0,1]\).

Lemma 1 ([4, Lemma 4, slightly generalized]).: _Consider two schedules \(R\preceq S\) for possibly different job sets. Let \(S^{\prime}\) denote a valid schedule that augments \(S\) by a new job \(\iota\) completed until time \(C\). Then WFStep(\(R\),\(\iota\),\(C\)) succeeds and WFStep(\(R\),\(\iota\),\(C\))\(\preceq S^{\prime}\)._

Next, we characterize \(c\)-competitive schedules that can be augmented by _any_ job while staying \(c\)-competitive.

Definition 2: A schedule \(R\) is _\(c\)-extendable_ if it is \(c\)-competitive and if it can be feasibly augmented by _any_ new job \(\iota\) such that the resulting schedule is also \(c\)-competitive.

Lemma 2: _Consider a job set \(J\) of volume \(V\) and with maximal processing time \(p_{\max}\). A \(c\)-competitive schedule \(R\) for \(J\) is \(c\)-extendable if and only if_ \[\forall y\text{ with }(c-1)/c<y\leq 1:\quad A_{R}^{\infty}(y)\leq(c-1)\cdot(1-y)/y\cdot\max\{V,p_{\max}\cdot y\}. \tag{1}\]

See Appendix 0.A.2 for the proof of Lemma 2. While Lemma 2 gives a strong characterization, the bound on the right-hand side of Equation (1) cannot be easily translated into a proper schedule for the given volume. Thus we introduce proper (idealized) schedules that adhere to a slightly weaker version of Equation (1). These schedules are barely \(e/(e-1)\)-extendable. Our proof of Theorem 2.1 combines their existence with Lemma 1 to deduce that WaterFill is \(e/(e-1)\)-competitive.

Definition 3: For any \(V\in\mathbb{R}_{\geq 0}\) we define the _universal schedule_1 \(U_{V}:\mathbb{R}_{\geq 0}\rightarrow[0,1]\) via Footnote 1: One can think of \(U_{V}\) as a schedule for a single job of volume \(V\) and resource requirement 1. Since there is only one job, we identify \(U_{V}\) with its total resource usage function \(\bar{U}_{V}\). \[U_{V}(t):=\begin{cases}1&\text{if }t<\frac{1}{e-1}\cdot V,\\ 1-\ln\left(t\cdot\frac{e-1}{V}\right)&\text{if }\frac{1}{e-1}\cdot V\leq t<\frac{e}{e-1}\cdot V,\text{ and}\\ 0&\text{otherwise}.\end{cases} \tag{2}\]

See Figure 4 for an illustration of universal schedules. With \(c=e/(e-1)\), one can easily check that \(A_{U_{V}}^{\infty}(y)=\frac{e^{1-y}-1}{e-1}\cdot V\leq(c-1)\cdot\frac{1-y}{y}\cdot V\). Thus, by Lemma 2, universal schedules (and any flatter schedules for the same volume) are \(e/(e-1)\)-extendable. Our final auxiliary lemma extends the optimality of WaterFill from Lemma 1 to certain augmentations of universal schedules. See Appendix 0.A.3 for the proof of Lemma 3.

Lemma 3: _Consider the universal schedule \(U_{V}\), a new job \(\iota\) of volume \(v\) and processing time \(p\), as well as a target completion time \(H\geq\frac{e}{e-1}\cdot\max\{V+v,p\}\)._
_Then WFStep\((U_{V},\iota,H)\preceq U_{V+v}\)._

The above enables us to prove the competitiveness of WaterFill from Theorem 2.1: We show inductively that WaterFill produces a feasible schedule \(R^{(j)}\) for the first \(j\) jobs (using that \(R^{(j-1)}\) is ,,flatter" than \(U_{V([j-1])}\) together with Lemma 1) and use this to prove \(R^{(j)}\preceq U_{V([j])}\) (via Lemma 3). By universality, this implies that all \(R^{(j)}\) are \(e/(e-1)\)-extendable (and thus, in particular, \(e/(e-1)\)-competitive). The full proof of Theorem 2.1 is given in Appendix 0.A.1.

## 3 Total Completion Time Minimization

This section considers total completion time minimization and represents our main contribution. In contrast to offline makespan minimization (Section 2), it remains unknown whether there is an efficient algorithm to compute an offline schedule with minimal total completion time. The so far best polynomial-time algorithm achieved a 2-approximation [4]. We improve upon this, as stated in the following theorem.

Theorem 3.1: _There is a \((3/2+\varepsilon)\)-approximation algorithm for total completion time minimization. Its running time is polynomial in \(n\) and \(1/\varepsilon\)._

For clarity of presentation, we analyze an idealized setting in the main part. The details for the actual result can be found in Appendices 0.B and 0.C.

Algorithm Description. Our algorithm computes two _candidate schedules_ using the two sub-algorithms Greedy and LSAapprox (described below). It then returns the schedule with the smaller total completion time among the two candidates. Sub-algorithm Greedy processes the jobs in ascending order of their volume. To process a job, Greedy assigns it as much resource as possible as early as possible in the schedule. Formally, for jobs \(J=[n]\) ordered such that \(v_{1}\leq\cdots\leq v_{n}\), the schedule \(R^{G}\) of Greedy is calculated recursively using \(R^{G}_{j}(t)=\mathds{1}_{t<t_{j}}\cdot\min(r_{j},1-\sum_{i=1}^{j-1}R^{G}_{i}(t))\)3, where the completion time \(t_{j}\) of job \(j\) is set such that \(j\) schedules exactly its volume \(v_{j}\). See Figure 2(a) for an example of a Greedy schedule. Sub-algorithm LSAapprox deals with solutions to the following _continuous linear program_ (\(CLP\)). Footnote 3: \(\mathds{1}_{t<t_{j}}\) denotes the indicator function that is 1 for \(t<t_{j}\) and 0 everywhere else. \[\begin{array}{ll}\mbox{minimize }\sum_{j\in J}\int_{0}^{\infty}\frac{t\cdot R_{j}(t)}{v_{j}}\,\mathrm{d}t\quad\mbox{s.t.}&\int_{0}^{\infty}R_{j}(t)\,\mathrm{d}t\geq v_{j}\;\;\forall j\in J\\ &0\leq R_{j}(t)\leq r_{j}\;\;\forall j\in J,\,t\in\mathbb{R}_{\geq 0}\\ &\sum_{j\in J}R_{j}(t)\leq 1\;\;\forall t\in\mathbb{R}_{\geq 0}\end{array}\] Roughly, LSAapprox first subdivides the job set into those jobs that produce a high completion time and the remaining jobs. For the former, an approximate solution is computed using the dual of the discretization (an LP) of the above \(CLP\). For the latter, it is enough to reserve a small portion of the resource to schedule them with small completion times. The details of this algorithm are explained in Appendix 0.C.1. For clarity of presentation, the main part will only do a simplified analysis using an idealization of LSAapprox. For the detailed analysis using LSAapprox, we refer to Appendix 0.C. For the analysis of Greedy, we refer to Appendix 0.B.2.
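As an illustration of Greedy, the following sketch is our own (it uses a segment-based representation; nothing here is code from the paper) and computes the completion times \(t_{j}\):

```python
def greedy(jobs):
    """jobs: list of (v_j, r_j). Returns the completion times of Greedy's
    schedule R^G (indexed as in the input) and their sum."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # v_1 <= ... <= v_n
    segments = [(0.0, float("inf"), 0.0)]   # (start, end, used resource)
    completion = [0.0] * len(jobs)
    for j in order:
        v, r = jobs[j]
        remaining, new_segments = v, []
        for start, end, used in segments:
            rate = min(r, 1.0 - used)        # as much resource as possible ...
            if remaining <= 0.0 or rate <= 0.0:
                new_segments.append((start, end, used))
            elif rate * (end - start) < remaining:   # ... as early as possible
                remaining -= rate * (end - start)
                new_segments.append((start, end, used + rate))
            else:                            # job j completes inside this segment
                t_j = start + remaining / rate
                completion[j], remaining = t_j, 0.0
                new_segments.append((start, t_j, used + rate))
                if t_j < end:
                    new_segments.append((t_j, end, used))
        segments = new_segments
    return completion, sum(completion)

# Job set of Figure 2: Greedy yields t_1 = 4/3, t_2 = 26/3, t_3 = 73/6.
print(greedy([(1, 3/4), (4, 1/2), (6, 2/3)]))
```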
### Analysis via a Bounded Fractionality Gap

Throughout the analysis, we use \(C^{*}\) to denote the optimal total completion time and \(C^{F*}\) the optimal fractional total completion time. We require an algorithm that produces a schedule \(R\) with a small _fractionality gap_ \(\gamma(R)\coloneqq C(R)/C^{F*}\), i.e., we compare the total completion time of \(R\) with the optimal fractional total completion time for the same job set. We show the following generalization of Theorem 3.1.

Theorem 3.2: _Assume that there is a polynomial-time algorithm \(A\) for total completion time minimization that produces a schedule \(R\) with fractionality gap \(\gamma(R)\geq 1\). Then there exists a polynomial-time \((\gamma(R)+1)/2\)-approximation for total completion time minimization._

The proof of Theorem 3.2 relies on Proposition 1 (three lower bounds on the optimal total completion time) and Proposition 2 (Greedy's objective in relation to these bounds). Lower Bound (1) (the _Length_ or _Height_ Bound) and Bound (2) (the _Squashed Area_ Bound) are due to Beaumont et al. [4, Def. 6,7]. Bound (3) is our novel lower bound. The proof can be found in Appendix 0.B.1.

Proposition 1: _Assuming \(v_{1}\leq\cdots\leq v_{n}\), the following are lower bounds on \(C^{*}\):_ (1) \(C^{L}\coloneqq\sum_{j\in J}p_{j}\) \quad (2) \(C^{A}\coloneqq\sum_{j=1}^{n}\sum_{i=1}^{j}v_{i}\) \quad (3) \(C^{F*}+1/2\cdot C^{L}\)

Proposition 2: _The Greedy schedule \(R^{G}\) satisfies \(C(R^{G})\leq C^{A}+C^{L}\)._

Figure 2: Schedules for a job set \(J=[3]\) with \((v_{1},r_{1})=(1,3/4)\), \((v_{2},r_{2})=(4,1/2)\) and \((v_{3},r_{3})=(6,2/3)\). (a) Greedy’s schedule. (b) Above: a primal (resource) schedule. Below: a dual (priority) schedule. With the dual variables having values \(\alpha_{1}=51/16\), \(\alpha_{2}=39/16\) and \(\alpha_{3}=31/16\), the volumes of the jobs are exactly scheduled. (See Appendix 0.B.5 for calculations.)

Using them, we can give the proof of Theorem 3.2.

Proof of Theorem 3.2.: We run both Greedy and \(A\) in polynomial time to produce schedules \(R^{G}\) and \(R^{A}\), respectively, and choose the schedule with the smaller total completion time. Using Propositions 1 and 2 and the fractionality gap \(\gamma:=\gamma(R^{A})\), we can bound the cost \(C:=\min(C(R^{A}),C(R^{G}))\) of the resulting schedule in terms of \(C^{*}\): \[C\leq\min(\gamma\cdot C^{F*},C^{A}+C^{L})\leq\min(\gamma\cdot(C^{*}-1/2\cdot C^{L}),C^{*}+C^{L})\] \[=\frac{\gamma+1}{2}C^{*}-\frac{\gamma+2}{4}C^{L}+\min\left(\frac{\gamma-1}{2}C^{*}-\frac{\gamma+2}{4}C^{L},\frac{\gamma+2}{4}C^{L}-\frac{\gamma-1}{2}C^{*}\right)\leq\frac{\gamma+1}{2}C^{*}\qed\]

### The fractionality gap of line schedules

For the remainder of this paper, we will introduce _line schedules_ and their structural properties. Roughly, a line schedule is a certain primal-dual pair for the \(CLP\) defined in Section 3 and its dual, which we call \(DCP\):
\[\text{maximize}\;\sum_{j\in J}\alpha_{j}v_{j}-\sum_{j\in J}r_{j}\int_{0}^{\infty}\beta_{j}(t)\,\mathrm{d}t-\int_{0}^{\infty}\gamma(t)\,\mathrm{d}t\] \[\text{s.t. }\alpha_{j},\beta_{j}(t),\gamma(t)\geq 0\quad\forall j\in J,\,t\in\mathbb{R}_{\geq 0}\qquad\gamma(t)+\beta_{j}(t)\geq\alpha_{j}-t/v_{j}\;\;\forall j\in J,\,t\in\mathbb{R}_{\geq 0}\]

It is obtained by dualizing the time-discretized version of the \(CLP\) (see Appendix 0.B.3) and then extending its constraints to the continuous time domain. _Line schedules_ formalize the idea that, if we know the dual \(\alpha\)-values, we can reconstruct all remaining primal/dual variables to obtain a primal-dual pair. If the \(\alpha\)-values are chosen correctly, then the volumes scheduled in the primal are exactly the desired volumes \((v_{j})_{j\in J}\). To this end, we will _assume_ that we have access to an algorithm called LS that produces such a line schedule \(R^{F}\) with \(C^{F}(R^{F})=C^{F*}\). We can then show that LS produces schedules with a fractionality gap of 2:

Proposition 3.: _The LS schedule \(R^{F}\) satisfies \(\gamma(R^{F})\leq 2\)._

In the following, we develop the details of line schedules. To this end, first define a _primal-dual pair_ as a tuple (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(v\)) that fulfills the following continuous _slackness conditions (sc)_. Again, these are found by extending the time-discretized version of the \(CLP\) to the continuous time domain. These conditions hold for all \(j\in J\) and \(t\in\mathbb{R}_{\geq 0}\). \[(\alpha\text{-sc}):\;\alpha_{j}\Bigl(v_{j}-\int_{0}^{\infty}R_{j}(t)\,\mathrm{d}t\Bigr)=0\qquad(\beta\text{-sc}):\;\beta_{j}(t)\,(r_{j}-R_{j}(t))=0\] \[(\gamma\text{-sc}):\;\gamma(t)\Bigl(1-\sum\nolimits_{j\in J}R_{j}(t)\Bigr)=0\qquad(R\text{-sc}):\;R_{j}(t)\,(\alpha_{j}-t/v_{j}-\beta_{j}(t)-\gamma(t))=0\] If we choose arbitrary \(\alpha\)-values, then the corresponding line schedule is still a primal-dual pair, except that it possibly schedules a different set of volumes, i.e., the \(\alpha\)-sc only holds if we replace \(v_{j}\) in the constraint by some other volume \(\bar{v}_{j}\). This fact is used for the detailed proof of our \((3/2+\varepsilon)\)-approximation, see Appendix 0.B.3. To this end, define the _dual line_ \(d_{j}(t)\coloneqq\alpha_{j}-t/v_{j}\) for each \(j\in J\). The intuition behind a line schedule is now that the heights of the dual lines represent priorities: jobs are scheduled (with the maximum remaining schedulable resource) in decreasing order of the dual line heights at the respective time points. Jobs are not scheduled if their dual line lies below zero. This is formalized in the following definition. (In Figure 2(b), we supplement the example from Figure 2(a) with a depiction of the dual lines.)

Definition 4: We call a job set \(J\) _non-degenerate_ if all job volumes are pairwise distinct, i.e., \(v_{j}\neq v_{j^{\prime}}\) for all \(j,j^{\prime}\in J\). Define a total order for each \(t\geq 0\) as \(j^{\prime}\succ_{t}j:\Leftrightarrow d_{j^{\prime}}(t)>d_{j}(t)\), or \(d_{j^{\prime}}(t)=d_{j}(t)\) and \(v_{j^{\prime}}>v_{j}\). The _line schedule_ of \(\alpha\) is a tuple (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(v\)) (recursively) defined as follows.
\[R_{j}(t)=\mathds{1}_{d_{j}(t)>0}\cdot\min\Bigl(r_{j},\,1-\sum\nolimits_{j^{\prime}\succ_{t}j}R_{j^{\prime}}(t)\Bigr)\qquad\beta_{j}(t)=\max(0,\,d_{j}(t)-\gamma(t))\] \[\gamma(t)=\max(0,\,d_{j}(t)),\text{ where }j\text{ is the smallest job according to }\succ_{t}\text{ with }R_{j}(t)>0\]

Equipped with the definition of a line schedule, we can now tackle the proof of Proposition 3. It requires the following two properties of the assumed algorithm LS. First, Lemma 4 allows us to bound the completion times of a fractional schedule in terms of the \(\alpha\)-variables in the \(DCP\):

Lemma 4: _Algorithm LS produces a schedule \(R^{F}\) with \(C_{j}(R^{F})\leq\alpha_{j}v_{j}\) for all \(j\in J\)._

Second, we show the following lemma. Abbreviate \(P=\sum_{j\in J}\int_{0}^{\infty}t\cdot R_{j}(t)/v_{j}\,\mathrm{d}t\) (the primal's objective), and \(A=\sum_{j\in J}\alpha_{j}v_{j}\), \(B=\sum_{j\in J}r_{j}\int_{0}^{\infty}\beta_{j}(t)\,\mathrm{d}t\) and \(\Gamma=\int_{0}^{\infty}\gamma(t)\,\mathrm{d}t\) (the parts of the dual objective).

Lemma 5: _Algorithm LS produces a schedule \(R^{F}\) such that there exists a primal-dual pair \((R^{F},\cdot,\cdot,\cdot,\cdot)\) that fulfills strong duality (\(A=B+\Gamma+P\)) and balancedness (\(P=B+\Gamma\))._

Using these lemmas, we can show Proposition 3.

Proof of Proposition 3.: Using Lemmas 4 and 5, we show the statement as follows: \[C(R^{F})=\sum_{j\in J}C_{j}(R^{F})\leq\sum_{j\in J}\alpha_{j}v_{j}=A=A-B-\Gamma+P=2P=2C^{F}(R^{F})=2C^{F*}\qed\]

The proof of Lemma 4 can be found in Appendix 0.B. The remainder of this section will initiate the proof of Lemma 5. We first give a geometric understanding of the involved quantities (\(P\),\(A\),\(B\),\(\Gamma\)). We build a 3D coordinate system from a line schedule. The time axis is shared, and the ordinates form the remaining two axes. We then draw 3D shapes into this coordinate system that correspond to parts of the above quantities and therefore of the \(CLP/DCP\) objectives. These shapes are described in detail in Appendix 0.B.6. Generally, these shapes are constructed such that the primal and dual schedules can be ,,seen" from above or from the front. In our case, the primal schedule is seen from the top, and the dual schedule from the front. Figure 3 illustrates the shapes in our construction. For each part of the objective \(\Psi\in\{P,A,B,\Gamma\}\), we have a corresponding shape \(\Psi^{\mathrm{all}}\), which is subdivided into pieces \(\Psi^{i,l}\). We can show that certain pieces are pairwise non-overlapping (Lemma 7), that the \(A\)-pieces make up all other pieces (Lemma 8), and we can relate the volumes of these pieces with one another and with the actual objective (Lemma 9).

Lemma 7: _Let \(V\) and \(W\), \(V\neq W\), be \(P\)-pieces, \(B\)-pieces or \(\Gamma\)-pieces (every combination allowed), or both be \(A\)-pieces. Then \(V\) and \(W\) do not overlap._

Lemma 8: _\(A^{\mathrm{all}}\) is composed of the other shapes, i.e., \(A^{\mathrm{all}}=P^{\mathrm{all}}\cup B^{\mathrm{all}}\cup\Gamma^{\mathrm{all}}\)._

Lemma 9: _The following statements hold:_ 1. \(|P^{i,l}|=|B^{i,l}|+|\Gamma^{i,l}|\) _for all_ \(i\),\(l\)_._ 2. \(|\Psi^{\mathrm{all}}|=\Psi\) _for all_ \(\Psi\in\{P,A,B,\Gamma\}\)_._

Due to space limitations, we give the actual construction of the pieces and the proofs of Lemmas 7 to 9 in Appendix 0.B.6. Now we can give the proof of Lemma 5.
Proof of Lemma 5.: Using Lemmas 7 to 9, we get \[A=|A^{\mathrm{all}}|=|P^{\mathrm{all}}\cup B^{\mathrm{all}}\cup\Gamma^{\mathrm{all}}|=|P^{\mathrm{all}}|+|B^{\mathrm{all}}|+|\Gamma^{\mathrm{all}}|=P+B+\Gamma\] \[=|P^{\mathrm{all}}|+|B^{\mathrm{all}}|+|\Gamma^{\mathrm{all}}|=\sum\nolimits_{i,l}|P^{i,l}|+|B^{i,l}|+|\Gamma^{i,l}|=\sum\nolimits_{i,l}2|P^{i,l}|=2P.\qed\]

Figure 3: (a) \(P\)-shapes for the job set from Figure 2(b). \(P\)-shapes are delimited from below by \(d_{j}(t)\) (extended into the resource axis), from above by \(\alpha_{j}\), and their top surface follows the primal schedule. (b) The shapes shown represent the union of \(B\)- and \(\Gamma\)-shapes. They are delimited from the left (right) by \(t=0\) (\(d_{j}(t)\)) (extended into the resource axis), and from top and bottom by the value of \(d_{j}(t)\) at the starting and finishing time of some piece of \(j\). See Definitions 5 and 6 for the formal definition of these shapes.

## Appendix 0.A Details for Section 2

### Proof of Theorem 2.1

We first prove that WaterFill is competitive. Consider an arbitrary job set \(J=[n]\). For \(j\in J\cup\{0\}\) let \(R^{(j)}\) denote the schedule WaterFill produces for the job set \([j]\subseteq J\) of the first \(j\) jobs. Similarly define \(O^{(j)}\coloneqq M^{*}([j])\) as the optimal makespan for the first \(j\) jobs. To simplify notation, we define \(V(j)\coloneqq V([j])\) as the total volume of the first \(j\) jobs. We show inductively that the computation of \(R^{(j)}\) succeeds and that \(R^{(j)}\preceq U_{V(j)}\) for all \(j\in J\cup\{0\}\). By universality, this implies that all \(R^{(j)}\) are \(e/(e-1)\)-extendable (and thus, in particular, \(e/(e-1)\)-competitive). For the base case \(j=0\), note that \(R^{(0)}\) and \(U_{V(0)}\) are identical (the trivial 0-schedule that schedules no volume at all). Thus, clearly \(R^{(0)}\preceq U_{V(0)}\). Now consider \(j\geq 1\) and assume \(R^{(j-1)}\preceq U_{V(j-1)}\). By definition of universal schedules, \(U_{V(j-1)}\) can be feasibly augmented by \(j\) with completion time \(H_{j}=\frac{e}{e-1}\cdot O^{(j)}\geq\frac{e}{e-1}\cdot\max\{V(j),p_{j}\}\). Using Lemma 1, we get \(R^{(j)}=\textsc{WFStep}(R^{(j-1)},j,H_{j})\preceq\textsc{WFStep}(U_{V(j-1)},j,H_{j})\). Combining this with Lemma 3, which gives \(\textsc{WFStep}(U_{V(j-1)},j,H_{j})\preceq U_{V(j)}\), we get the desired statement \(R^{(j)}\preceq U_{V(j)}\), finishing the proof of the competitiveness. For the optimality of WaterFill, consider the algorithm \(c\)-WaterFill that is identical to WaterFill but uses target completion times \(H_{j}=c\cdot M^{*}([j])\) in the recursion. By Lemma 1, if there is a \(c\)-competitive algorithm, then \(c\)-WaterFill cannot fail, as it can schedule the jobs with the same completion times \(C_{j}\leq c\cdot M^{*}([j])\). Thus, it is sufficient to prove that for any \(c<e/(e-1)\) there is an instance for which \(c\)-WaterFill fails. Let \(c<e/(e-1)\) and consider the job set \(J=[n]\) with \(v_{j}\coloneqq 1/n\) and \(r_{j}\coloneqq 1/j\) for \(j\in J\). By construction, \(M^{*}([j])=j/n\). Thus, the target completion times for \(c\)-WaterFill are \(H_{j}=c\cdot j/n\). For the sake of a contradiction, assume \(c\)-WaterFill successfully computes a feasible schedule for \(J\).
Consider the intervals \(I_{j}\coloneqq[H_{j},H_{j+1})=[c\cdot\alpha,c\cdot\alpha+c/n)\) for \(\alpha:=j/n\) and note that \(c\)-WaterFill cannot schedule any job \(j^{\prime}<j\) during \(I_{j}\) (by construction, \(c\)-WaterFill completes \(j^{\prime}\) at time \(H_{j^{\prime}}<H_{j}\)). Thus, the total resource usage during \(I_{j}\) is at most \(\min\{\frac{1}{j}+\frac{1}{j+1}+\cdots+\frac{1}{n},1\}=\min\{\mathcal{H}_{n}-\mathcal{H}_{j-1},1\}\), where \(\mathcal{H}_{k}\) denotes the \(k\)-th harmonic number. For \(n\rightarrow\infty\), \(I_{j}=I_{\alpha n}\) becomes a point interval at time point \(H_{j}=c\cdot\alpha\) with total resource usage at most \(\min\{\lim_{n\rightarrow\infty}\mathcal{H}_{n}-\mathcal{H}_{\alpha n-1},1\}=\min\{\lim_{n\rightarrow\infty}\mathcal{H}_{n}-\mathcal{H}_{\alpha n},1\}\). Using the Euler-Mascheroni constant \(\gamma\coloneqq\lim_{n\rightarrow\infty}(\mathcal{H}_{n}-\ln n)\), we evaluate \[\lim_{n\rightarrow\infty}\mathcal{H}_{n}-\mathcal{H}_{\alpha n}=\lim_{n\rightarrow\infty}(\mathcal{H}_{n}-\ln n)+\ln n-(\mathcal{H}_{\alpha n}-\ln\alpha n)-\ln\alpha n \tag{3}\] \[=\lim_{n\rightarrow\infty}\gamma+\ln n-\gamma-\ln\alpha n=-\ln\alpha\] Thus, at time point \(t=\alpha\cdot c\), the algorithm has total resource usage at most \(-\ln(t/c)\). This is non-increasing in \(t\), so we look for the time point \(t^{*}\) where \(-\ln(t^{*}/c)=1\) (before \(t^{*}\) the schedule can have total resource usage 1). Solving this yields \(t^{*}=c/e\). Since \(c\)-WaterFill must be finished by \(H_{n}=c\), the total volume it can schedule is at most \[t^{*}\cdot 1+\int_{t^{*}}^{c}-\ln\frac{t}{c}\,\mathrm{d}t=ce^{-1}+\int_{ce^{-1}}^{c}-\ln\frac{t}{c}\,\mathrm{d}t=ce^{-1}+\left[t-t\ln\frac{t}{c}\right]_{ce^{-1}}^{c}=c\,\frac{e-1}{e} \tag{4}\] Because \(c<e/(e-1)\) and for large enough \(n\), this implies that \(c\)-WaterFill schedules a total volume of strictly less than 1 (the total volume of the job set), contradicting the feasibility of the computed schedule.

### Proof of Lemma 2

Consider a \(c\)-competitive schedule \(R\) for a job set \(J\) of volume \(V\) and with maximal processing time \(p_{\max}\). Let \(\mathrm{OPT}:=\max\{V,p_{\max}\}\) denote the optimal makespan for \(J\) and set \(H:=c\cdot\mathrm{OPT}\). Schedule \(R\) is \(c\)-extendable if and only if it can be augmented by _any_ new job of volume \(v\in\mathbb{R}_{\geq 0}\), resource requirement \(r\in(0,1]\) and processing time \(p:=v/r\) that completes until time \(H^{\prime}:=c\cdot\mathrm{OPT}^{\prime}\), with \(\mathrm{OPT}^{\prime}:=\max\{V+v,p_{\max},p\}\). Let \(A_{R}:=A_{R}^{\infty}\). Note that the new job can be scheduled with completion time \(H^{\prime}\) if and only if \[r\cdot H^{\prime}-A_{R}(1-r)\geq v \tag{5}\] (the available free area before \(H^{\prime}\) suffices to schedule volume \(v\)). Rearranging and using the definition of \(H^{\prime}\) we get
\[\begin{split}A_{R}(1-r)\leq&\,r\cdot\left(c\cdot\max\{V+p\cdot r,\,p_{\max},\,p\}-p\right)\\ =&\,r\cdot\max\{c\cdot V+(cr-1)\cdot p,\;c\cdot p_{\max}-p,\;(c-1)\cdot p\}.\end{split} \tag{6}\] Note that for \(r\geq 1/c\), the right-hand side is at least \(cr\cdot V+(cr-1)\cdot pr\geq V\), while the left-hand side is clearly at most \(V\). Thus, in this case the inequality is trivially true. So assume \(r<1/c\). Note that the left-hand side does not depend on \(p\). For the right-hand side, we can compute the partial derivatives w.r.t. \(p\) of the three terms in the maximum. Note that only the rightmost term in the maximum has a positive derivative and that for \(p=0\) it is zero (and, thus, clearly smaller than the other two terms). This implies that the worst case (minimal right-hand side) occurs for \(p\) such that \((c-1)\cdot p=\max\{c\cdot V+(cr-1)\cdot p,\,c\cdot p_{\max}-p\}\), which is equivalent to \(p=\max\{V+r\cdot p,\,p_{\max}\}\). Using that \(p=V+r\cdot p\) implies (by recursion) \(p=V/(1-r)\), the worst case for our inequality becomes \[A_{R}(1-r)\leq r\cdot\max\{c\cdot V+(cr-1)\cdot p,\,c\cdot p_{\max}-p\}=(c-1)\cdot r/(1-r)\cdot\max\{V,\,p_{\max}\cdot(1-r)\}. \tag{7}\] Using the substitution \(y=1-r\), we get the desired result.

### Proof of Lemma 3

We first note that the condition \(H\geq\frac{e}{e-1}\cdot\max\{V+v,p\}\) on the target completion time ensures that WFStep(\(U_{V}\),\(\iota\),\(H\)) succeeds: the value \(\max\{V+v,p\}\) is the optimal makespan for scheduling \(\iota\) together with the volume from \(U_{V}\) (remember that \(U_{V}\) can be thought of as scheduling a single job of volume \(V\) and resource requirement 1). Since universal schedules are \(e/(e-1)\)-extendable, \(U_{V}\) can be augmented by \(\iota\) finishing at time \(\frac{e}{e-1}\cdot\max\{V+v,p\}\) (or later). Thus, by Lemma 1, WFStep(\(U_{V}\),\(\iota\),\(H\)) succeeds. Now let \(W\) and \(U\) denote the schedules WFStep(\(U_{V}\),\(\iota\),\(H\)) and \(U_{V+v}\), respectively. To simplify notation, we also identify \(W\) and \(U\) with their total resource usage functions \(\bar{W}\) and \(\bar{U}\), respectively. We must prove \(W\preceq U\), which is equivalent to showing that \(\Delta(C,y):=A_{U}^{C}(y)-A_{W}^{C}(y)\geq 0\) for all \(C\in\mathbb{R}_{\geq 0}\) and \(y\in[0,1]\). We illustrate the possible situations in Figure 4. From the definition of WFStep, we know that there is exactly one time point \(t^{*}\) at which the function \(U-W\) switches signs (it goes from positive to negative). With the notation \((x)_{+}\coloneqq\max\{x,0\}\), we have \[\Delta(C,y)=\int_{0}^{C}\big{(}U(t)-y\big{)}_{+}-\big{(}W(t)-y\big{)}_{+}\,\mathrm{d}t. \tag{8}\] Using the monotonicity of both \(W\) and \(U\), we can consider their (say left-continuous) inverse functions \(W^{-1}\) and \(U^{-1}\), which allow us to compute the partial derivatives of \(\Delta\) as \[\frac{\partial}{\partial C}\Delta(C,y)=\big{(}U(C)-y\big{)}_{+}-\big{(}W(C)-y\big{)}_{+} \tag{9}\] and \[\frac{\partial}{\partial y}\Delta(C,y)=\min\{W^{-1}(y),C\}-\min\{U^{-1}(y),C\}. \tag{10}\] We use these in the remainder of the proof to gradually prove \(\Delta(C,y)\geq 0\) for all \(C\in\mathbb{R}_{\geq 0}\) and \(y\in[0,1]\).
First, for any \(y\in[0,1]\) we obviously have \(\Delta(0,y)=A_{U}^{0}(y)-A_{W}^{0}(y)=0-0=0\) by definition of \(A_{U}^{0}(y)\) and \(A_{W}^{0}(y)\). Moreover, since by definition of \(t^{*}\) we have \(U(C)-W(C)\geq 0\) for all \(C\in[0,t^{*}]\), we get \(\frac{\partial}{\partial C}\Delta(C,y)=\big{(}U(C)-y\big{)}_{+}-\big{(}W(C)-y\big{)}_{+}\geq\big{(}W(C)-y\big{)}_{+}-\big{(}W(C)-y\big{)}_{+}=0\). So \(\Delta(0,y)=0\) and \(C\mapsto\Delta(C,y)\) is non-decreasing on \([0,t^{*}]\), implying that \(\Delta(C,y)\geq\Delta(0,y)\geq 0\) for all \(C\in[0,t^{*}]\) and \(y\in[0,1]\). Now fix any \(C>t^{*}\) (such that \(U(C)-W(C)\leq 0\) by definition of \(t^{*}\)). For any \(y\in\big{[}W(C),1\big{]}\), we have \(y\geq W(C)\geq U(C)\), yielding \(\frac{\partial}{\partial C}\Delta(C,y)=0-0=0\). So \(\Delta(t^{*},y)\geq 0\) and the function \(C\mapsto\Delta(C,y)\) is non-decreasing on \([t^{*},\infty]\), implying \(\Delta(C,y)\geq\Delta(t^{*},y)\geq 0\) for all \(C\in[t^{*},\infty]\) and \(y\in\big{[}W(C),1\big{]}\). It remains to consider \(C\in[t^{*},\infty]\) and \(y\in\big{[}0,W(C)\big{)}\). For the sake of a contradiction, assume there are \(\bar{C}\in[t^{*},\infty]\) and \(\bar{y}\in\big{[}0,W(\bar{C})\big{)}\) with \(\Delta(\bar{C},\bar{y})<0\). Using that \(W^{-1}\) is non-increasing, for any \(y\in[0,\bar{y}]\) we get (since \(y\leq\bar{y}\leq W(\bar{C})\)) that \(W^{-1}(y)\geq W^{-1}(W(\bar{C}))\geq\bar{C}\) (note that \(W(\bar{C})\) might be at a discontinuity of \(W^{-1}\)). Then we have \(\frac{\partial}{\partial y}\Delta(\bar{C},y)=\min\big{\{}W^{-1}(y),\bar{C}\}-\min\big{\{}U^{-1}(y),\bar{C}\}=\bar{C}-\min\big{\{}U^{-1}(y),\bar{C}\}\geq 0\). So \(\Delta(\bar{C},\bar{y})<0\) and the function \(y\mapsto\Delta(\bar{C},y)\) is non-decreasing on \([0,\bar{y}]\), implying \(\Delta(\bar{C},y)\leq\Delta(\bar{C},\bar{y})<0\) for all \(y\in[0,\bar{y}]\). In particular, \(\Delta(\bar{C},0)<0\). But then \(\frac{\partial}{\partial C}\Delta(C,0)=\big{(}U(C)-y\big{)}_{+}-\big{(}W(C)-y\big{)}_{+}=U(C)-W(C)\leq 0\) for all \(C\in[\bar{C},\infty]\). So \(\Delta(\bar{C},0)<0\) and the function \(C\mapsto\Delta(C,0)\) is non-increasing on \([\bar{C},\infty]\), implying \(\Delta(\infty,0)\leq\Delta(\bar{C},0)<0\). This clearly contradicts \(\Delta(\infty,0)=A_{U}^{\infty}(0)-A_{W}^{\infty}(0)=(V+v)-(V+v)=0\), finishing the proof.

Figure 4: Universal schedules \(U_{V}\) and \(U_{V+v}\). The blue area indicates a new job \(\iota\) with volume \(v\) and resource requirement \(r\) that is scheduled via WFStep(\(U_{V}\),\(\iota\),\(H\)). Depending on the resource requirement \(r\), the yellow line enters the blue area exactly once, either on the upper plateau (a) or on the lower plateau (b).

## Appendix 0.B Details for Section 3

### Proof of Proposition 1

In this sub-section we prove Proposition 1. Lower Bounds (1) and (2) can be gleaned from Beaumont et al. [4, Definitions 6 and 7]. For the sake of completeness, we give their proofs here.

Proof of Bounds (1) and (2) of Proposition 1.: For Bound (1), suppose that there is infinite resource available (instead of 1). Then an optimal schedule \(R^{*}\) will set \(R^{*}_{j}(t)=\mathds{1}_{t\in[0,p_{j})}\cdot r_{j}\), so \(C_{j}(R^{*}_{j})=p_{j}\). Since all previously feasible schedules remain feasible, the optimal total completion time may only decrease. Hence, the total completion time \(C^{L}=\sum_{j\in J}p_{j}\) is a lower bound on \(C^{*}\), i.e., \(C^{*}\geq C^{L}\).
For Bound (2), suppose instead that we increase all resource requirements to 1. Again, the optimal total completion time may at most decrease. The problem is now essentially \(1|pmtn|\sum_{j\in J}C_{j}\), for which it is well known that there is an optimal schedule that uses the SPT rule (shortest processing time first) [3]. This is equivalent to scheduling the jobs according to their volumes \(v_{1}\leq\cdots\leq v_{n}\) in ascending order. The total completion time then becomes \(C^{A}=\sum_{j=1}^{n}\sum_{i=1}^{j}v_{i}\). This establishes our second lower bound: \(C^{*}\geq C^{A}\). For Bound (3), we need a result by Sadykov [19]. It states that there is a schedule with minimum total completion time that has the _ascending property_, i.e., each resource assignment \(R_{j}\) is non-decreasing until the respective job \(j\) completes.

**Lemma 10** ([19, Theorem 1]).: _There exists an optimal schedule \(R^{*}\) for total completion time minimization such that \(R^{*}_{j}(t)\leq R^{*}_{j}(t^{\prime})\) for all \(t\leq t^{\prime}\leq C_{j}(R^{*})\) and \(j\in J\).6_ Footnote 6: Their model limits the number of machines \(m\). We can effectively assume \(m=|J|\).

We then show that at least half of the volume of each job lies after the job's fractional completion time (see Lemma 11). The proof can be found below.

**Lemma 11**.: _In ascending schedules \(R\), each job \(j\in J\) schedules at least \(v_{j}/2\) volume after \(C^{F}_{j}(R)\)._

Using this lemma, we can give the proof of Bound (3) of Proposition 1.

Proof of Lower Bound (3) of Proposition 1.: By Lemma 10, there exists an optimal schedule \(R^{*}\) that has the ascending property. By Lemma 11, \(R^{*}\) schedules at least \(v_{j}/2\) volume of each \(j\in J\) after \(C^{F}_{j}(R^{*})\). So \(j\) requires at least \(p_{j}/2\) units of time after \(C^{F}_{j}(R^{*})\) to finish, since \(R_{j}(t)\leq r_{j}\) for all \(t\geq 0\). Hence \(C_{j}(R^{*})\geq C^{F}_{j}(R^{*})+p_{j}/2\). Summing over all \(j\in J\) yields \(C^{*}=C(R^{*})\geq C^{F}(R^{*})+1/2\cdot\sum_{j\in J}p_{j}\geq C^{F*}+1/2\cdot C^{L}\).

Proof of Lemma 11.: Let \(V_{j}(R)\) be the volume scheduled for \(j\) before \(C^{F}_{j}(R)\). The statement is equivalent to showing that \(V_{j}(R)\leq v_{j}/2\). For that we construct a new resource assignment \(\tilde{R}_{j}\) with \(C^{F}_{j}(\tilde{R})\geq C^{F}_{j}(R)\). It is constructed from \(R\) by rescheduling the volume around \(C^{F}_{j}(R)\) such that a constant resource of \(r\coloneqq R_{j}(C^{F}_{j}(R))\neq 0\) is used. Formally, we set \(\tilde{R}_{j}(t)=r\cdot\mathds{1}_{a_{j}\leq t<b_{j}}\), where \(a_{j}\coloneqq C^{F}_{j}(R)-V_{j}(R)/r\) and \(b_{j}\coloneqq C^{F}_{j}(R)+(v_{j}-V_{j}(R))/r\). This means that the volume before (after) \(C^{F}_{j}(R)\) stays before (after) \(C^{F}_{j}(R)\), respectively. Because \(R\) is ascending, we only shift volume from earlier to later time points in \(\tilde{R}\) compared to \(R\). From this, \(C^{F}_{j}(\tilde{R})\geq C^{F}_{j}(R)\) follows directly. We calculate \(C^{F}_{j}(\tilde{R})\).
It is \[C^{F}_{j}(\tilde{R})=\int_{0}^{\infty}\frac{t\cdot\tilde{R}_{j}(t)}{v_{j}}\,\mathrm{d}t=\frac{r}{v_{j}}\int_{a_{j}}^{b_{j}}t\,\mathrm{d}t=\frac{r}{v_{j}}\cdot\frac{1}{2}(b_{j}^{2}-a_{j}^{2})=\frac{r}{2v_{j}}\cdot(b_{j}-a_{j})(a_{j}+b_{j})\] \[=\frac{r}{2v_{j}}\cdot\frac{v_{j}}{r}\left(C^{F}_{j}(R)-\frac{V_{j}(R)}{r}+C^{F}_{j}(R)+\frac{v_{j}-V_{j}(R)}{r}\right)=C^{F}_{j}(R)+\frac{v_{j}-2V_{j}(R)}{2r}\] Combining this with \(C^{F}_{j}(\tilde{R})\geq C^{F}_{j}(R)\) yields \(V_{j}(R)\leq v_{j}/2\), which is the desired result.

### Analysis of Greedy

In this section we show Proposition 2. Throughout this sub-section, we assume that all \(j\in J\) are ordered as algorithm Greedy sorts them, i.e., \(v_{1}\leq\cdots\leq v_{n}\). Imagine we cut Greedy's schedule \(R^{G}\) at specific time points \(0=\tau_{0}<\cdots<\tau_{m}=M(R^{G})\). We then observe for each \(i\in[m]\) the sub-schedule \(R^{\tau_{i}}\) that contains all job volumes scheduled in the time interval \([0,\tau_{i})\) in \(R^{G}\). We can then associate each sub-schedule with its total completion time \(C(R^{\tau_{i}})\) by only looking at the portions of jobs scheduled and ignoring all so-far unscheduled jobs. At the same time, we consider the lower-bound equivalents from Proposition 1 for these job portions, i.e., \(C^{L}(\tau_{i})\) and \(C^{A}(\tau_{i})\) (see below for formal notation). We can then easily see that \(C^{L}(\tau_{0})=C^{A}(\tau_{0})=C(R^{\tau_{0}})=0\) as well as \(C^{L}(\tau_{m})=C^{L}\), \(C^{A}(\tau_{m})=C^{A}\) and \(C(R^{\tau_{m}})=C(R^{G})\). By inductive application of the following Lemma 12, Proposition 2 follows.

Lemma 12: _If \(C(R^{\tau_{i}})\leq C^{L}(\tau_{i})+C^{A}(\tau_{i})\) for \(i<m\), then also \(C(R^{\tau_{i+1}})\leq C^{L}(\tau_{i+1})+C^{A}(\tau_{i+1})\)._

We prove Lemma 12 after giving some additional notation and observations about Greedy's schedules.

Additional Notation: Let \(0=\tau_{0}<\cdots<\tau_{m}=M(R^{G})\), where \(\tau_{i}\), \(i\in[m]\), denotes the \(i\)-th smallest distinct completion time in \(R^{G}\). Let \(R^{\tau}(t)\coloneqq R^{G}(t)\cdot\mathds{1}_{t<\tau}\) be the (sub-)schedule of \(R^{G}\) up to time point \(\tau\in\mathbb{R}_{\geq 0}\). It schedules exactly \(v_{j}(\tau)\coloneqq\int_{0}^{\tau}R^{G}_{j}(t)\,\mathrm{d}t\) volume for each job \(j\in J\). We use \(J^{\tau}\coloneqq\{j\in J\mid v_{j}(\tau)>0\}\) to denote the set of \(n(\tau)\coloneqq|J^{\tau}|\) jobs scheduled by \(R^{\tau}\). From Proposition 1, we get the lower bounds \(C^{L}(\tau)\coloneqq\sum_{j\in J}v_{j}(\tau)/r_{j}\) and \(C^{A}(\tau)\coloneqq\sum_{j=1}^{n(\tau)}\sum_{i=1}^{j}v_{i}(\tau)\) for the optimal total completion time of \(J^{\tau}\).

Observation 3: The solution \(R^{G}\) produced by Greedy has the following properties: 1. \(R^{G}\) stays constant within each time interval \([\tau_{i-1},\tau_{i})\) for all \(i\in[m]\). 2. At any time point \(t\), \(R^{G}\) has at most one job \(j\) that does not receive its full resource requirement, i.e., \(R^{G}_{j}(t)<r_{j}\). Furthermore, \(j\) has the highest \(v_{j}\) among all jobs scheduled somewhere within \([0,t)\).

Observation 3 is straightforward to prove. Using this observation, we can now prove Lemma 12.

Proof of Lemma 12: We determine how \(C(R^{\tau})\), \(C^{L}(\tau)\) and \(C^{A}(\tau)\) change as we increase \(\tau\) from \(\tau_{i}\) to \(\tau_{i+1}\). We can essentially view this as a two-step process.
Using this Observation, we can now prove Lemma 12.

Proof of Lemma 12.: We determine how \(C(R^{\tau})\), \(C^{L}(\tau)\) and \(C^{A}(\tau)\) change as we increase \(\tau\) from \(\tau_{i}\) to \(\tau_{i+1}\). We can essentially view this as a two-step process. First, some new jobs may be started at \(\tau_{i}\), say jobs \(J_{i}^{+}\coloneqq\{k,k+1,\ldots,l-1\}\) (\(J_{i}^{+}=\emptyset\) is possible). This changes \(C(R^{\tau})\), as we now have to add the completion times of the jobs in \(J_{i}^{+}\). At the same time, \(n(\tau)\) and \(C^{A}(\tau)\) change; \(C^{L}(\tau)\) does not change. Second, we increase the volume scheduled for the jobs scheduled within \([\tau_{i},\tau_{i+1})\) (which we denote by \(J_{i}\)) by increasing \(\tau\) to \(\tau_{i+1}\).

Assume first that \(\bar{R}^{G}(t)=1\) for \(t\in[\tau_{i},\tau_{i+1})\). For the first step, when we add the jobs \(J_{i}^{+}\), \(C(R^{\tau})\) increases by \(|J_{i}^{+}|\cdot\tau_{i}=(l-k)\cdot\tau_{i}\). Similarly, \(C^{A}(\tau)\) increases by \(\sum_{j=k}^{l-1}\sum_{i^{\prime}=1}^{j}v_{i^{\prime}}(\tau_{i})=(l-k)\cdot\sum_{i^{\prime}=1}^{k-1}v_{i^{\prime}}(\tau_{i})=(l-k)\cdot\tau_{i}\), where the middle equality uses that \(v_{j^{\prime}}(\tau_{i})=0\) for all newly started jobs \(j^{\prime}\geq k\), and the last equality comes from the fact that \(\bar{R}^{G}(t)=1\) for all \(t<\tau_{i}\). This establishes that \(C(R^{\tau})\) and \(C^{A}(\tau)\) change by the same amount (and \(C^{L}(\tau)\) does not change). For the second step, recall Observation 3. Because \(R^{G}\) stays constant within \([\tau_{i},\tau_{i+1})\), \(C(R^{\tau})\) increases by \(|J_{i}|\cdot(\tau_{i+1}-\tau_{i})\), where \(|J_{i}|\) is the number of jobs scheduled within this interval. Statement (2) ensures that all jobs in \(J_{i}\) receive their full resource requirement within \([\tau_{i},\tau_{i+1})\), except for the one with the highest index. Therefore, \(C^{L}(\tau)\) increases by at least \((|J_{i}|-1)\cdot(\tau_{i+1}-\tau_{i})\): each job \(j\in J_{i}\) that receives its full resource requirement in \([\tau_{i},\tau_{i+1})\) increases \(C^{L}(\tau)\) by \((v_{j}(\tau_{i+1})-v_{j}(\tau_{i}))/r_{j}=(\tau_{i+1}-\tau_{i})\cdot r_{j}/r_{j}=\tau_{i+1}-\tau_{i}\). For \(C^{A}(\tau)\), we see an increase by at least \(\sum_{j^{\prime}=1}^{n(\tau)}v_{j^{\prime}}(\tau_{i+1})-\sum_{j^{\prime}=1}^{n(\tau)}v_{j^{\prime}}(\tau_{i})=\tau_{i+1}-\tau_{i}\). To summarize, \(C(R^{\tau})\) increases at most as fast as \(C^{L}(\tau)+C^{A}(\tau)\) when increasing \(\tau\) from \(\tau_{i}\) to \(\tau_{i+1}\).

Assume instead that \(\bar{R}^{G}(t)\neq 1\) for \(t\in[\tau_{i},\tau_{i+1})\). Then all jobs receive their full resource requirement by definition of Greedy, and \(C^{L}(\tau)\) increases at least as much as \(C(R^{\tau})\), analogous to the above argument.

### Generalized CLP and DCP

For the purpose of proving Lemma 6 (see Appendix 0.B.4), we first extend the definitions of \(CLP\) and \(DCP\) from Section 3. In these generalizations, we change the volume that is required for scheduling from the vector \(v\) to a vector of volumes \(\bar{v}\). However, the volumes within the \(CLP\) objective remain intact. The \(CLP(\bar{v})\) then becomes

\[\begin{array}{ll}\text{minimize}&\sum_{j\in J}\int_{0}^{\infty}\frac{t\cdot R_{j}(t)}{v_{j}}\,\mathrm{d}t\\ \text{subject to}&\int_{0}^{\infty}R_{j}(t)\,\mathrm{d}t\geq\bar{v}_{j}\quad\forall j\in J\\ &\textstyle\sum_{j\in J}R_{j}(t)\leq 1\quad\forall t\in\mathbb{R}_{\geq 0}\\ &0\leq R_{j}(t)\leq r_{j}\quad\forall j\in J,\,t\in\mathbb{R}_{\geq 0}\end{array}\]

Its slotted counterpart, with slots \(i\in I\) of width \(\delta\) and variables \(V_{j,i}\) (the volume of job \(j\) scheduled in slot \(i\)), is

\[\begin{array}{ll}\text{minimize}&\sum_{j\in J}\frac{1}{v_{j}}\sum_{i\in I}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)\\ \text{subject to}&\textstyle\sum_{i\in I}V_{j,i}\geq\bar{v}_{j}\quad\forall j\in J\\ &\textstyle\sum_{j\in J}V_{j,i}\leq\delta\quad\forall i\in I\\ &0\leq V_{j,i}\leq r_{j}\cdot\delta\quad\forall j\in J,\,i\in I\end{array}\]

with dual

\[\begin{array}{l}\text{maximize}\ \sum_{j\in J}\alpha_{j}\bar{v}_{j}-\sum_{j\in J}r_{j}\sum_{i\in I}\beta_{j,i}\cdot\delta-\sum_{i\in I}\gamma_{i}\cdot\delta\\ \alpha_{j},\beta_{j,i},\gamma_{i}\geq 0\quad\forall j\in J,\,i\in I\\ \gamma_{i}+\beta_{j,i}\geq\alpha_{j}-(i\delta-\delta/2)/v_{j}\quad\forall j\in J,\,i\in I\end{array}\]

Notice that the terms \((i\delta-\delta/2)\), \(i\in I\), represent the midpoints of the \(i\)'th slot. In the primal objective, this amounts to a rectangular integration using samples at the slot midpoints. The corresponding dual constraint is the same as in the continuous version, restricted to the slot midpoints. When \(\delta\) approaches zero, the rectangular integration effectively becomes an integral, and the constraints/slackness conditions hold for all \(t\geq 0\). For the constraints \(0\leq V_{j,i}\leq r_{j}\cdot\delta\) and \(\sum_{j\in J}V_{j,i}\leq\delta\), it is easy to see that a division by \(\delta\) effectively yields their continuous counterparts.

#### 0.B.3.2 Slackness Conditions and Primal-Dual-Pairs.

The \(LP\) slackness conditions are as follows.

1. \(\alpha_{j}(\bar{v}_{j}-\sum_{i\in I}V_{j,i})=0\ \ \forall j\in J\)
2. \(\beta_{j,i}(r_{j}\cdot\delta-V_{j,i})=0\ \ \forall j\in J,i\in I\)
3. \(\gamma_{i}(\delta-\sum_{j\in J}V_{j,i})=0\ \ \forall i\in I\)
4. \(V_{j,i}(\alpha_{j}-(i\delta-\delta/2)/v_{j}-\beta_{j,i}-\gamma_{i})=0\ \ \forall j\in J,i\in I\)

For \(CLP(\bar{v})/DCP(\bar{v})\), we establish the following continuous slackness conditions. For \(\bar{v}=v\), these become equivalent to the slackness conditions in Section 0.B.2.

1. (\(\alpha\)-slackness condition): \(\alpha_{j}(\bar{v}_{j}-\int_{0}^{\infty}R_{j}(t)\mathrm{d}t)=0\ \ \forall j\in J\)
2. (\(\beta\)-slackness condition): \(\beta_{j}(t)(r_{j}-R_{j}(t))=0\ \ \forall j\in J,t\in\mathbb{R}_{\geq 0}\)
3. (\(\gamma\)-slackness condition): \(\gamma(t)(1-\sum_{j\in J}R_{j}(t))=0\ \ \forall t\in\mathbb{R}_{\geq 0}\)
4. (\(R\)-slackness condition): \(R_{j}(t)(\alpha_{j}-t/v_{j}-\beta_{j}(t)-\gamma(t))=0\ \ \forall j\in J,t\in\mathbb{R}_{\geq 0}\)

A _primal-dual-pair_ for \(CLP(\bar{v})/DCP(\bar{v})\) is a 5-tuple (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)) consisting of a schedule \(R\), a dual schedule \(\alpha\), a tuple of functions \((\beta_{j})_{j\in J}\) (with \(\beta_{j}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\)) and a function \(\gamma:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\), such that the above slackness conditions are fulfilled (for volumes \(\bar{v}=(\bar{v}_{j})_{j\in J}\)).
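Since the continuous slackness conditions above are what all later constructions are checked against, a numerical sanity check on a time grid can be useful. The following helper is a hypothetical sketch of ours; the discretization and the function signatures are not from the paper.

```python
# Check the four continuous slackness conditions on a uniform time grid.

def check_slackness(R, alpha, beta, gamma, v, r, vbar, ts, tol=1e-6):
    """R[j], beta[j]: callables of t; gamma: callable of t; ts: uniform grid."""
    dt = ts[1] - ts[0]
    for j in range(len(v)):
        scheduled = sum(R[j](t) * dt for t in ts)            # ~ integral of R_j
        assert alpha[j] * abs(vbar[j] - scheduled) < tol      # alpha-slackness
        for t in ts:
            assert beta[j](t) * (r[j] - R[j](t)) < tol        # beta-slackness
            d_j = alpha[j] - t / v[j]                         # dual line d_j(t)
            assert R[j](t) * abs(d_j - beta[j](t) - gamma(t)) < tol  # R-slackness
    for t in ts:
        used = sum(R[j](t) for j in range(len(v)))
        assert gamma(t) * (1 - used) < tol                    # gamma-slackness
```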
### Proof of Lemma 6

The purpose of this sub-section is to prove the following lemma.

Lemma 6: _For any job set \(J\) there exists an \(\alpha\) such that the line schedule of \(\alpha\) is a primal-dual pair._

We require a slightly more general version of this lemma in Appendix 0.C. This generalization is based on the continuous linear programs \(CLP(\bar{v})\) and \(DCP(\bar{v})\) defined in Appendix 0.B.3. With this, the statement we want to show for this subsection becomes the following.

Lemma 13: _For any job set \(J\) (with volumes \(v\)) and for any volumes \(\bar{v}\) there exists an \(\alpha\) such that the line schedule of \(\alpha\) is a primal-dual pair for \(CLP(\bar{v})\) and \(DCP(\bar{v})\)._

Note that the concept of a _line schedule_ remains the same as in Section 3; however, for a primal-dual-pair, we require that \(\int_{0}^{\infty}R_{j}(t)\mathrm{d}t=\bar{v}_{j}\) for all \(j\in J\).

Our proof idea for Lemma 13 is as follows. Consider an arbitrary dual schedule \(\alpha\). We can use Definition 4 to extend \(\alpha\) to a line schedule (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)). This allows us to define a function mapping \(\alpha\) to the scheduled volumes, \(v(\alpha)\coloneqq(v_{j}(\alpha))_{j\in J}\) with \(v_{j}(\alpha)\coloneqq\int_{0}^{\infty}R_{j}(t)\mathrm{d}t\). Our general idea is then to show that it is possible to guide \(\alpha\) such that \(v(\alpha)\) converges to the desired volumes \(\bar{v}\), which then proves Lemma 13. For its proof, we require the following two lemmas. In Lemma 14, we show that any line schedule as constructed in Definition 4 is a primal-dual-pair. In Lemma 15, we show various properties of the function \(v(\cdot)\) defined above.

Lemma 14: _The line schedule of any vector \((\alpha_{j})_{j\in J}\) is a primal-dual pair._

Lemma 15: _The following observations hold for the function \(v(\cdot)\) and a vector \(\alpha\):_

* \(v(\cdot)\) _is continuous._
* _If_ \(\alpha_{j}=0\) _for some_ \(j\in J\)_, then_ \(v_{j}(\alpha)=0\)_._
* _If_ \(v_{j}(\alpha)\leq x\) _for some_ \(j\in J\) _and_ \(x\geq 0\)_, then there exists_ \(\hat{\alpha}_{j}\geq 0\) _such that replacing_ \(\alpha_{j}\) _with_ \(\hat{\alpha}_{j}\) _in_ \(\alpha\) _gives a new vector_ \(\tilde{\alpha}\) _with_ \(v_{j}(\tilde{\alpha})=x\)_._
* _If we increase_ \(\alpha_{j}\) _to obtain_ \(\tilde{\alpha}\)_, then_ \(v_{j}(\tilde{\alpha})\geq v_{j}(\alpha)\) _and_ \(v_{j^{\prime}}(\tilde{\alpha})\leq v_{j^{\prime}}(\alpha)\) _for all_ \(j^{\prime}\in J\setminus\{j\}\)_._

Equipped with these two lemmas, we can now provide the proof of Lemma 13.

Proof of Lemma 13.: Let \(\alpha^{(0)}=\mathbf{0}\), i.e., all dual lines pass through the origin. We will (recursively) define \(\alpha^{(i+1)}\) from \(\alpha^{(i)}\) (\(i\in\mathbb{N}\)). For any \(\alpha\) with \(v(\alpha)\leq\bar{v}\), define \(\alpha^{+}\) where each \(\alpha_{j}\), \(j\in J\), is replaced by \(\hat{\alpha}_{j}\) according to the third statement of Lemma 15, i.e., we raise each dual line such that the respective job would receive \(\bar{v}_{j}\) volume if considered in isolation. Then define \(\alpha^{(i+1)}=(\alpha^{(i)})^{+}\). Notice that the sequence \((\alpha^{(i)})_{i\geq 0}\) is monotonically non-decreasing. Specifically, if \(v_{j}(\alpha^{(i)})\neq\bar{v}_{j}\), then \(\alpha^{(i+1)}\neq\alpha^{(i)}\). Furthermore, because of the third statement of Lemma 15, it follows from \(v_{j}(\alpha^{(i)})\leq\bar{v}_{j}\) that \(v_{j}(\alpha^{(i+1)})\leq\bar{v}_{j}\). From this it also follows that all fixed points \(\alpha\) of the map \(\alpha\mapsto\alpha^{+}\) must satisfy \(v(\alpha)=\bar{v}\). If we can show that the sequence is also bounded, then by the monotone convergence theorem it converges to its supremum, which we call \(\alpha^{*}=\lim_{i\to\infty}\alpha^{(i)}\). Since the sequence can only converge to a fixed point, we then get \(v_{j}(\alpha^{*})=\bar{v}_{j}\). Using Lemma 14, the line schedule of \(\alpha^{*}\) is then a primal-dual pair for \(CLP(\bar{v})\) and \(DCP(\bar{v})\).

So it only remains to show that \((\alpha^{(i)})_{i\geq 0}\) is bounded. Specifically, we show that, for each \(j\in J\), \(\alpha_{j}\leq(\sum_{j^{\prime}\in J}\bar{v}_{j^{\prime}})/(v_{j}\cdot\min_{j^{\prime}\in J}r_{j^{\prime}})\). Suppose the opposite. Then, by the definition of a line schedule, for each \(t<\alpha_{j}v_{j}\), there is at least one job scheduled at time \(t\) that receives at least \(\min_{j^{\prime}\in J}r_{j^{\prime}}\) resource. Then a total volume of at least \(\alpha_{j}v_{j}\cdot\min_{j^{\prime}\in J}r_{j^{\prime}}>\sum_{j^{\prime}\in J}\bar{v}_{j^{\prime}}\) is scheduled, a contradiction. With this, we established that there exists an upper bound on each \(\alpha_{j}\) in the process, thus finishing the proof.
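The monotone process in this proof also suggests a simple numerical procedure. The sketch below is ours; it assumes an oracle volumes(alpha) that evaluates \(v(\alpha)\) for the line schedule of \(\alpha\) (cf. Definition 4) and replaces the exact raising step of Lemma 15 by bisection.

```python
# Approximate the fixed point alpha* of the map alpha -> alpha^+.

def find_alpha(volumes, vbar, rounds=50, steps=60, hi=1e9):
    n = len(vbar)
    alpha = [0.0] * n                         # alpha^(0) = 0
    for _ in range(rounds):                   # alpha^(i+1) = (alpha^(i))^+
        for j in range(n):
            lo_j, hi_j = alpha[j], hi         # raising alpha_j raises v_j (Lemma 15)
            for _ in range(steps):            # bisect for v_j(alpha) = vbar_j
                alpha[j] = (lo_j + hi_j) / 2
                if volumes(alpha)[j] < vbar[j]:
                    lo_j = alpha[j]
                else:
                    hi_j = alpha[j]
    return alpha
```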
It remains to give the proofs of the remaining two lemmas. For Lemma 14, we essentially have to check the slackness conditions. For Lemma 15, the desired statements stem from geometric observations about line schedules.

Proof of Lemma 14.: We show that the slackness conditions from Appendix 0.B.3 are fulfilled. Let (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)) be the line schedule of \(\alpha\). The \(\alpha\)-slackness conditions are trivially fulfilled by the definition of \(\bar{v}\). For the \(\beta\)-slackness conditions, let \(t\in\mathbb{R}_{\geq 0}\) and \(j\in J\). Observe that if \(\beta_{j}(t)>0\) (otherwise the condition is trivially true), then \(\beta_{j}(t)=d_{j}(t)-\gamma(t)>0\) and \(d_{j}(t)>0\). Because of the choice of \(\gamma\), we must have that \(j\) receives its full resource requirement at \(t\), i.e., \(R_{j}(t)=r_{j}\), which fulfills the condition. For the \(\gamma\)-slackness condition, if \(\gamma(t)=0\) then the condition is fulfilled; otherwise, since \(\gamma(t)=d_{j}(t)>0\) for some job \(j\), and by definition of \(R_{j}(t)\), the resource must be exhausted, i.e., \(\bar{R}(t)=1\), which means that the \(\gamma\)-slackness condition is true. Finally, for the \(R\)-slackness condition, when a job is scheduled at time \(t\), then \(\gamma(t)\leq d_{j}(t)\) and therefore \(\beta_{j}(t)=d_{j}(t)-\gamma(t)\). This implies \(\alpha_{j}-t/v_{j}-\beta_{j}(t)-\gamma(t)=0\), which fulfills these conditions as well.

Proof of Lemma 15.:

1. If \(\alpha\) is changed continuously, then all dual lines change continuously. Since the job set is non-degenerate, all dual lines have different slopes, so their intersections change continuously with \(\alpha\). Then also the time points at which \(R_{j}\) changes over time change continuously, and as such also \(v(\alpha)\).
2. This statement holds trivially by definition of a line schedule.
3. We can set \(\alpha_{j}\) such that \(d_{j}(t)>d_{j^{\prime}}(t)\) for all \(t\in[0,\max\{\alpha_{j^{\prime}}v_{j^{\prime}}\,|\,j^{\prime}\in J\})\) and \(j^{\prime}\neq j\), i.e., the dual line \(d_{j}(t)\) lies above all other dual lines until all dual lines fall below zero. As such, by raising \(\alpha_{j}\), we can make \(v_{j}(\tilde{\alpha})\) arbitrarily large. Because \(v(\cdot)\) is continuous in \(\alpha\), the intermediate value theorem guarantees the existence of an \(\hat{\alpha}_{j}\) such that the corresponding \(\tilde{\alpha}\) has \(v_{j}(\tilde{\alpha})=x\).
4. As we increase \(\alpha_{j}\), the value of the dual line \(d_{j}(t)\) at each time point \(t\) only increases. As such, \(j\) can only move up in the order \(\succ_{t}\) (defined in Definition 4) and thus only increases its resource assignment for all \(t\geq 0\). The converse is true for all other jobs \(j^{\prime}\neq j\): they may at most move down according to \(\succ_{t}\) and thus reduce their \(R_{j^{\prime}}(t)\). From this, the statement immediately follows by the definition of \(v(\cdot)\).
### Calculation for Figure 2b

To show that each job schedules its volume, we first calculate the intersections between the dual lines. Two dual lines for jobs \(j,j^{\prime}\) intersect when \(\alpha_{j}-t/v_{j}=\alpha_{j^{\prime}}-t/v_{j^{\prime}}\), i.e., when \(t=(\alpha_{j}-\alpha_{j^{\prime}})/(1/v_{j}-1/v_{j^{\prime}})\). We get the intersections \(t_{j,j^{\prime}}\) between jobs \(j,j^{\prime}\) as \(t_{1,2}=(51/16-39/16)/(1/1-1/4)=1\), \(t_{1,3}=(51/16-31/16)/(1/1-1/6)=3/2\) and \(t_{2,3}=(39/16-31/16)/(1/4-1/6)=6\). \(j_{1}\) receives \(r_{1}=3/4\) resource within \([0,t_{1,2})=[0,1)\) and \(1-r_{2}=1/2\) resource within \([t_{1,2},t_{1,3})=[1,3/2)\). The volume scheduled is \(3/4\cdot(1-0)+1/2\cdot(3/2-1)=1=v_{1}\). Similarly, for \(j_{2}\), \((1-r_{1})\cdot(t_{1,2}-0)+r_{2}\cdot(t_{2,3}-t_{1,2})+(1-r_{3})\cdot(\alpha_{2}v_{2}-t_{2,3})=1/4\cdot 1+1/2\cdot 5+1/3\cdot 15/4=4=v_{2}\); and for \(j_{3}\), \((1-r_{2})\cdot(t_{2,3}-t_{1,3})+r_{3}\cdot(\alpha_{3}v_{3}-t_{2,3})=1/2\cdot 9/2+2/3\cdot 45/8=6=v_{3}\).
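The numbers above can be reproduced exactly with rational arithmetic; the following short script (ours) recomputes the dual-line intersections and the collected volumes.

```python
from fractions import Fraction as F

v = {1: F(1), 2: F(4), 3: F(6)}
r = {1: F(3, 4), 2: F(1, 2), 3: F(2, 3)}
alpha = {1: F(51, 16), 2: F(39, 16), 3: F(31, 16)}

def t_cross(j, k):  # intersection of the dual lines d_j and d_k
    return (alpha[j] - alpha[k]) / (F(1) / v[j] - F(1) / v[k])

t12, t13, t23 = t_cross(1, 2), t_cross(1, 3), t_cross(2, 3)
vol1 = r[1] * t12 + (1 - r[2]) * (t13 - t12)
vol2 = (1 - r[1]) * t12 + r[2] * (t23 - t12) + (1 - r[3]) * (alpha[2] * v[2] - t23)
vol3 = (1 - r[2]) * (t23 - t13) + r[3] * (alpha[3] * v[3] - t23)
print(t12, t13, t23)     # 1 3/2 6
print(vol1, vol2, vol3)  # 1 4 6
```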
### Geometric Shapes and their properties

This sub-section aims to show the three lemmas about the 3D shapes suggested in Section 3.2, specifically Lemmas 7 to 9. Before we can show these, we need to become more concrete about the 3D shapes. For the purpose of Appendix 0.C, we use the more general \(CLP(\bar{v})/DCP(\bar{v})\) definitions. In particular, we define the objective quantities \(P=\sum_{j\in J}\int_{0}^{\infty}t\cdot R_{j}(t)/v_{j}\,\mathrm{d}t\) (the primal objective) and \(A=\sum_{j\in J}\alpha_{j}\bar{v}_{j}\), \(B=\sum_{j\in J}r_{j}\int_{0}^{\infty}\beta_{j}(t)\mathrm{d}t\) and \(\Gamma=\int_{0}^{\infty}\gamma(t)\mathrm{d}t\) (the parts of the dual objective).

For the 3D shapes, we will first define a geometrical representation of a (primal) schedule (see Definition 5). Each job is assigned a subset of points in \([0,\infty)\times[0,1)\) that indicates where that job is scheduled. We want to construct primal and dual volume pieces for jobs \(j\) such that the time points \(t\) where they start/end satisfy \(d_{j}(t)=\gamma(t)\). This is justified by statement 2 of the following Observation 4. That is why we want each job to be assigned the same portion of the resource axis \([0,1)\) at least until \(d_{j}(t)=\gamma(t)\).

_Observation 4_.: A line schedule (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)) for a job set \(J\) has the following properties.

1. \(\gamma\) is continuous, \(\gamma\) is strictly monotonically decreasing in an interval \([0,t_{\gamma}]\) for some \(t_{\gamma}\geq 0\), and \(\gamma(t)=0\) for all \(t\geq t_{\gamma}\).
2. If for each \(\varepsilon>0\) there is some \(t^{\prime}\in[t-\varepsilon,t)\) with \(R_{j}(t^{\prime})\neq R_{j}(t)\), i.e., \(j\)'s resource assignment changes at time \(t\), then \(d_{j}(t)=\gamma(t)\).

The proof can be found below. Formally, we define the point subsets as follows.

Definition 5: Let (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)) be a completion, and let \(0=t_{0}<t_{1}<\cdots<t_{m}\) be the time points where \(R\) changes, i.e., such that \(R\) is constant in each interval \([t_{i-1},t_{i})\) and \(R(t_{i-1})\neq R(t_{i})\) for all \(i\in[m]\). A _geometrical representation_ \(\Omega=(\Omega_{j})_{j\in J}\) of \(R\) consists of point sets \(\Omega_{j}\subseteq[0,\infty)\times[0,1)\) for each \(j\in J\), such that the following properties hold.

1. \(\{r\mid(t,r)\in\Omega_{j}\}\) is measurable and its measure is \(R_{j}(t)\) for all \(t\geq 0\).
2. For all \(t\geq 0\) and \(j\in J\), if for each \(\varepsilon>0\) there exists a \(t^{\prime}\in[t-\varepsilon,t)\) such that \(\{r\mid(t,r)\in\Omega_{j}\}\neq\{r\mid(t^{\prime},r)\in\Omega_{j}\}\), then \(\gamma(t)=d_{j}(t)\).

Such a geometrical representation can always be constructed. When jobs gain or lose resource at some time point, we make sure that only their portion of the resource axis \([0,1)\) is involved at that time point. As such, the second condition of a geometrical representation can always be fulfilled. With this, we are now ready to define the individual 3D shapes.

Definition 6: Let \(\Omega\) be a geometrical representation of a completion (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)). For \(0=r^{0}<r^{1}<\cdots<r^{k}=1\), define strips \(\Omega^{i}\) (\(i\in[k]\)) for some \(k\in\mathbb{N}\), where \(\Omega^{i}=(\Omega^{i}_{j})_{j\in J}\) and \(\Omega^{i}_{j}=\{(t,r)\in\Omega_{j}\mid r\in[r^{i-1},r^{i})\}\), such that for each \(t\in\mathbb{R}_{\geq 0}\), \(|\{j\in J\mid(t,\cdot)\in\Omega^{i}_{j}\}|\leq 1\). Define \(R^{i}=(|\Omega^{i}_{j}(t)|)_{j\in J}\), where \(\Omega^{i}_{j}(t^{\prime})\coloneqq\{(t,r)\in\Omega^{i}_{j}\mid t=t^{\prime}\}\). We then require that if \(R^{i}_{j}(t)>0\) for some \(t\in\mathbb{R}_{\geq 0}\), then \(R^{i}_{j}(t)=q^{i}\coloneqq r^{i}-r^{i-1}\).

Consider any strip \(\Omega^{i}\). Subdivide it by time points \(0=t^{i,0}<\cdots<t^{i,K^{i}}\) such that each \(T^{i,l}\coloneqq[t^{i,l-1},t^{i,l})\) (\(l\in[K^{i}]\)) is an inclusive-maximal time interval in which some job is scheduled within \(\Omega^{i}\). Let the job scheduled in the time interval \(T^{i,l}\) be called \(j^{i,l}\). We define for each \(i\in[k],l\in[K^{i}]\) the following \(\Psi\)-_pieces_ (\(\Psi\in\{P,A,B,\Gamma\}\)):

1. \(P^{i,l}=\{(t,r,\alpha)\,|\,(t,r)\in\Omega^{i}_{j^{i,l}},\,d_{j^{i,l}}(t)\leq\alpha<\alpha_{j^{i,l}},\,t\in T^{i,l}\}\)
2. \(A^{i,l}=\{(t,r,\alpha)\,|\,(t,r)\in\Omega^{i}_{j^{i,l}},\,0\leq\alpha<\alpha_{j^{i,l}},\,t\in T^{i,l}\}\)
3. \(B^{i,l}=\{(t,r,\alpha)\,|\,r\in\Omega^{i}_{j^{i,l}}(t^{i,l-1}),\,\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t)),\,\alpha\geq\gamma(t),\,t\geq 0\}\)
4. \(\Gamma^{i,l}=\{(t,r,\alpha)\,|\,r\in\Omega^{i}_{j^{i,l}}(t^{i,l-1}),\,\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t)),\,\alpha<\gamma(t),\,t\geq 0\}\)

Based on this, we define suitable shapes per job. For all \(\Psi\in\{P,A,B,\Gamma\}\) and \(j\in J\), we define the \(\Psi\)-shape \(\Psi_{j}=\bigcup_{i\in[k],l\in[K^{i}],j^{i,l}=j}\Psi^{i,l}\). Further define \(\Psi^{\rm all}=\bigcup_{i\in[k],l\in[K^{i}]}\Psi^{i,l}\).
Proof of Observation 4.: By Lemma 14, (\(R\),\(\alpha\),\(\beta\),\(\gamma\),\(\bar{v}\)) is also a primal-dual pair, and as such the slackness conditions are fulfilled.

1. Start at \(t=0\) and observe how \(\gamma\) changes as we increase \(t\). \(\gamma\) always coincides with some dual line \(d_{j}\) or its value is zero by definition. As long as no dual line crosses \(d_{j}\), dual lines above (below) \(d_{j}\) stay above (below) \(d_{j}\), respectively. Therefore the resource assignment of the respective jobs stays constant, and \(\gamma\) coincides with the same dual line. As such, \(\gamma\) is monotonically decreasing until it becomes zero. It remains to show that \(\gamma\) is continuous. Note that job \(j\) (where currently \(d_{j}(t)=\gamma(t)\)) uses up the remaining resource. When \(d_{j}\) crosses some other dual line \(d_{j^{\prime}}\), \(\gamma\) will either continue to coincide with \(d_{j}\) or with \(d_{j^{\prime}}\): this is because \(j\) and \(j^{\prime}\) will together still use up the remaining resource. An analogous argument applies if multiple dual lines cross at a common point; then \(\gamma\) continues as one of them. As such, \(\gamma\) is continuous. Trivially, there exists a point \(t_{\gamma}\geq 0\) such that \(\gamma(t)=0\) for all \(t\geq t_{\gamma}\).
2. This follows from the same argument as above: as long as no dual line crosses the dual line \(d_{j}\) corresponding to \(\gamma\), dual lines above (below) \(d_{j}\) stay above (below) \(d_{j}\), respectively, and the respective job's resource assignment stays constant. Therefore, when we have \(R_{j}(t-\delta)\neq R_{j}(t)\) for some \(t\geq 0\) and arbitrarily small \(\delta>0\), then \(j\)'s dual line must cross \(\gamma\) exactly at \(t\), so \(d_{j}(t)=\gamma(t)\).

We are now ready to prove Lemmas 7 to 9.

Proof of Lemma 7.: Suppose that \(V=A^{i,l}\) and \(W=A^{i^{\prime},l^{\prime}}\) with \(V\neq W\) overlap. Both contain only points (\(t,r,\alpha\)) with \((t,r)\in\Omega^{i}\) (respectively \(\Omega^{i^{\prime}}\)). Therefore we must have \(i=i^{\prime}\) for them to overlap. However, \(T^{i,l}\) and \(T^{i,l^{\prime}}\) are disjoint unless \(l=l^{\prime}\). Therefore \(V=W\), a contradiction. For the second statement, if both pieces are \(P\)-pieces, the argument is analogous. Next, note that if \(V=B^{i,l}\) and \(W=\Gamma^{i,l}\), they cannot overlap because of the conditions \(\alpha\geq\gamma(t)\) and \(\alpha<\gamma(t)\), respectively. As such, we continue by considering the unions \(U^{i,l}=B^{i,l}\cup\Gamma^{i,l}=\{(t,r,\alpha)\,|\,r\in\Omega^{i}_{j^{i,l}}(t^{i,l-1}),\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))\}\) instead of the individual \(B\)- and \(\Gamma\)-pieces. Specifically, we assume \(V\) or \(W\) to be \(U^{i,l}\) instead of \(B^{i,l}\) or \(\Gamma^{i,l}\). Suppose \(V=U^{i,l}\) and \(W=U^{i^{\prime},l^{\prime}}\). Then there exists some (\(t,r,\alpha\))\(\in U^{i,l}\cap U^{i^{\prime},l^{\prime}}\). Similarly as above, (\(t,r\))\(\in\Omega^{i}_{j}\cap\Omega^{i^{\prime}}_{j^{\prime}}\) for some \(j,j^{\prime}\in J\), so \(i=i^{\prime}\) must hold. W.l.o.g. let \(l>l^{\prime}\). Then \(\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))\leq\gamma(t^{i,l-1})\leq\gamma(t^{i,l^{\prime}})\leq\alpha\), where the second inequality follows from \(l>l^{\prime}\) and the second statement of Observation 4.
This is a contradiction, so \(l=l^{\prime}\) and as such \(V=W\).

Let w.l.o.g. \(V=P^{i,l}\) and \(W=U^{i^{\prime},l^{\prime}}\). Assuming they overlap, we have some (\(t,r,\alpha\)) \(\in P^{i,l}\cap U^{i^{\prime},l^{\prime}}\). For the same reasons as above, \(i=i^{\prime}\) must hold. Assume that \(l=l^{\prime}\). Then \(d_{j^{i,l}}(t)\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))\leq d_{j^{i,l}}(t)\), a contradiction. So we must have \(l\neq l^{\prime}\). Let \(j=j^{i,l}\) and \(j^{\prime}=j^{i^{\prime},l^{\prime}}\). Assume \(l<l^{\prime}\). Then the two pieces are separated over the priority axis, i.e., \(\alpha\geq d_{j}(t)\geq d_{j}(t^{i,l})=\gamma(t^{i,l})\geq\gamma(t^{i,l^{\prime}})\geq\min(\gamma(t^{i,l^{\prime}}),d_{j^{\prime}}(t^{i,l^{\prime}}))>\alpha\). Assume instead \(l>l^{\prime}\). Using the second statement of Observation 4, we must then have \(t<t^{i,l^{\prime}}\), as otherwise \(\gamma(t^{i,l^{\prime}})\leq\alpha<\min(\gamma(t^{i,l^{\prime}-1}),d_{j^{i,l^{\prime}}}(t))\leq\min(\gamma(t^{i,l^{\prime}-1}),d_{j^{i,l^{\prime}}}(t^{i,l^{\prime}}))=\min(\gamma(t^{i,l^{\prime}-1}),\gamma(t^{i,l^{\prime}}))\leq\gamma(t^{i,l^{\prime}})\). Since \(t\in T^{i,l}\), we have \(t\geq t^{i,l-1}\geq t^{i,l^{\prime}}>t\), a contradiction.

Proof of Lemma 8.: We show two directions, starting with \(A^{\rm all}\subseteq P^{\rm all}\cup B^{\rm all}\cup\Gamma^{\rm all}\). Let (\(t,r,\alpha\)) \(\in A^{\rm all}\), specifically (\(t,r,\alpha\)) \(\in A^{i,l}\) for some \(i\in[k],l\in[K^{i}]\). Denote \(j\coloneqq j^{i,l}\). If \(\alpha\geq d_{j^{i,l}}(t)\), then (\(t,r,\alpha\)) \(\in P_{j}\subseteq P^{\rm all}\). If \(\gamma(t^{i,l})\leq\alpha<d_{j^{i,l}}(t)\), then, since \(t\in T^{i,l}\), \(\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))\) and \(t\geq 0\). Also, \(r\in\Omega_{j}^{i}(t^{i,l-1})\), and as such (\(t,r,\alpha\)) \(\in B^{i,l}\cup\Gamma^{i,l}\subseteq B^{\rm all}\cup\Gamma^{\rm all}\). Otherwise, \(\alpha<\gamma(t^{i,l})\). In this case there exists some \(l^{\prime}>l\) such that \(\alpha\in[\gamma(t^{i,l^{\prime}}),\gamma(t^{i,l^{\prime}-1}))\). This is because of the second property of Observation 4. In the time interval \(T^{i,l^{\prime}}\), the full resource must still be used, as \(\gamma\) is still positive. Therefore, there exist \(B\)- and \(\Gamma\)-pieces \(B^{i,l^{\prime}}\) and \(\Gamma^{i,l^{\prime}}\). To show that (\(t,r,\alpha\)) lies in one of them, it only remains to show that \(\alpha<d_{j^{i,l^{\prime}}}(t)\). This follows from the fact that \(d_{j^{i,l^{\prime}}}(t^{i,l^{\prime}-1})=\gamma(t^{i,l^{\prime}-1})\) by Definition 5, \(t\leq t^{i,l^{\prime}-1}\) (which follows from \(t\leq t^{i,l}\) and \(l^{\prime}>l\)) and the fact that \(d_{j^{i,l^{\prime}}}\) is monotonically decreasing. This shows \(A^{\rm all}\subseteq P^{\rm all}\cup B^{\rm all}\cup\Gamma^{\rm all}\).

Now let (\(t,r,\alpha\)) \(\in P^{\rm all}\cup B^{\rm all}\cup\Gamma^{\rm all}\). First assume that (\(t,r,\alpha\)) \(\in P^{i,l}\) for some \(i\in[k]\), \(l\in[K^{i}]\). Then (\(t,r\)) \(\in\Omega_{j^{i,l}}\) and \(t\in T^{i,l}\) as well as \(\alpha<\alpha_{j^{i,l}}\), which implies (\(t,r,\alpha\)) \(\in A^{\rm all}\). On the other hand, if (\(t,r,\alpha\)) \(\in B^{i,l}\cup\Gamma^{i,l}\) for some \(i\in[k]\), \(l\in[K^{i}]\), then \(r\in\Omega_{j^{i,l}}(t^{i,l-1})\) and \(\alpha<d_{j^{i,l}}(t)\leq\alpha_{j^{i,l}}\). This shows that \(P^{\rm all}\cup B^{\rm all}\cup\Gamma^{\rm all}\subseteq A^{\rm all}\).
Proof of Lemma 9.: We show the following more detailed statements for all \(j\in J\), \(i\in[k]\) and \(l\in[K^{i}]\), from which the statements in the lemma follow. Note again that we are using the general \(CLP(\bar{v})/DCP(\bar{v})\) definitions (see Appendix 0.B.3).

1. \(|P^{i,l}|=q^{i}\int_{t^{i,l-1}}^{t^{i,l}}t/v_{j}\,\mathrm{d}t=|B^{i,l}|+|\Gamma^{i,l}|\)
2. \(|P_{j}|=\int_{0}^{\infty}R_{j}(t)\cdot t/v_{j}\,\mathrm{d}t\)
3. \(|A_{j}|=\alpha_{j}\bar{v}_{j}\)
4. \(|\Gamma^{i,l}|=q^{i}\int_{\gamma(t^{i,l})}^{\gamma(t^{i,l-1})}\gamma^{-1}(\alpha)\,\mathrm{d}\alpha\)
5. \(|\Psi^{\rm all}|=\Psi\) for all \(\Psi\in\{P,A,B,\Gamma\}\)

These statements are proven as follows.

1. By definition, \(P^{i,l}\) is a shape that is \(q^{i}\) deep into the resource axis, and its front face is a (possibly degenerate) trapezoid. At time \(t\in T^{i,l}\), it has a height of \(\max(0,\alpha_{j}-(\alpha_{j}-t/v_{j}))=t/v_{j}\), where \(j\coloneqq j^{i,l}\). As such, \(|P^{i,l}|=q^{i}\int_{t^{i,l-1}}^{t^{i,l}}t/v_{j}\,\mathrm{d}t\). Next, consider the union of the \(B\)- and \(\Gamma\)-pieces

\[B^{i,l}\cup\Gamma^{i,l}=\{(t,r,\alpha)\,|\,r\in\Omega_{j^{i,l}}^{i}(t^{i,l-1}),\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t)),t\geq 0\}.\]

By Lemma 7, the two pieces do not overlap; as such, the union has a volume of \(|B^{i,l}|+|\Gamma^{i,l}|\). Again, the shape is \(q^{i}\) deep into the resource axis. Its front face is a trapezoid with a height of \(\max(0,\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))-\gamma(t^{i,l}))\) at time \(t\). We can then calculate

\[|B^{i,l}|+|\Gamma^{i,l}|=q^{i}\int_{0}^{\infty}\max(0,\min(\gamma(t^{i,l-1}),d_{j}(t))-\gamma(t^{i,l}))\,\mathrm{d}t\]
\[=q^{i}\left(\int_{0}^{t^{i,l-1}}\gamma(t^{i,l-1})-\gamma(t^{i,l})\,\mathrm{d}t+\int_{t^{i,l-1}}^{t^{i,l}}d_{j}(t)-\gamma(t^{i,l})\,\mathrm{d}t\right)\]
\[=q^{i}\left(\int_{0}^{t^{i,l-1}}d_{j}(t^{i,l-1})-d_{j}(t^{i,l})\,\mathrm{d}t+\int_{t^{i,l-1}}^{t^{i,l}}d_{j}(t)-d_{j}(t^{i,l})\,\mathrm{d}t\right)\]
\[=q^{i}\left(\int_{0}^{t^{i,l-1}}\alpha_{j}-\frac{t^{i,l-1}}{v_{j}}-\alpha_{j}+\frac{t^{i,l}}{v_{j}}\,\mathrm{d}t+\int_{t^{i,l-1}}^{t^{i,l}}\alpha_{j}-\frac{t}{v_{j}}-\alpha_{j}+\frac{t^{i,l}}{v_{j}}\,\mathrm{d}t\right)\]
\[=\frac{q^{i}}{v_{j}}\left(\int_{0}^{t^{i,l-1}}t^{i,l}-t^{i,l-1}\,\mathrm{d}t+\int_{t^{i,l-1}}^{t^{i,l}}t^{i,l}\,\mathrm{d}t-\int_{t^{i,l-1}}^{t^{i,l}}t\,\mathrm{d}t\right)\]
\[=\frac{q^{i}}{v_{j}}\left((t^{i,l}+t^{i,l-1})(t^{i,l}-t^{i,l-1})-\frac{1}{2}\left((t^{i,l})^{2}-(t^{i,l-1})^{2}\right)\right)=q^{i}\int_{t^{i,l-1}}^{t^{i,l}}\frac{t}{v_{j}}\,\mathrm{d}t.\]

2. We show \(|P_{j}|=\int_{0}^{\infty}R_{j}(t)\cdot t/v_{j}\,\mathrm{d}t\):

\[|P_{j}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}],j^{i,l}=j}P^{i,l}\Big{|}=\sum_{i\in[k],l\in[K^{i}],j^{i,l}=j}|P^{i,l}|=\sum_{i\in[k],l\in[K^{i}],j^{i,l}=j}\int_{t^{i,l-1}}^{t^{i,l}}q^{i}\frac{t}{v_{j}}\,\mathrm{d}t\]
\[=\sum_{i\in[k]}\int_{0}^{\infty}q^{i}\cdot\mathds{1}_{(t,\cdot)\in\Omega_{j}^{i}}\frac{t}{v_{j}}\,\mathrm{d}t=\int_{0}^{\infty}R_{j}(t)\frac{t}{v_{j}}\,\mathrm{d}t\]

3. The volume of any \(A\)-piece \(A^{i,l}\) is by definition \(|A^{i,l}|=q^{i}\alpha_{j^{i,l}}|T^{i,l}|\). Then

\[|A_{j}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}],j^{i,l}=j}A^{i,l}\Big{|}=\sum_{i\in[k],l\in[K^{i}],j^{i,l}=j}|A^{i,l}|=\sum_{i\in[k],l\in[K^{i}],j^{i,l}=j}q^{i}\alpha_{j}|T^{i,l}|=\alpha_{j}\sum_{i\in[k]}|\Omega_{j}^{i}|=\alpha_{j}\bar{v}_{j}\]

4. The inequalities describing \(\Gamma^{i,l}\) are \(\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j^{i,l}}(t))\) and \(\alpha<\gamma(t)\).
They can be simplified to \(\gamma(t^{i,l})\leq\alpha<\min(\gamma(t^{i,l-1}),\gamma(t))\), as the scheduled job \(j^{i,l}\) must always satisfy \(d_{j^{i,l}}(t)\geq\gamma(t)\) to be scheduled. Therefore, \(\Gamma^{i,l}\) contains all points (\(t\),\(r\),\(\alpha\)) such that \(r\in\Omega_{j^{i,l}}^{i}(t^{i,l-1})\), \(\alpha\) lies between \(\gamma(t^{i,l})\) and \(\gamma(t^{i,l-1})\), and (\(t\),\(\alpha\)) is a point to the left of the curve \(\gamma\) (since \(\alpha<\gamma(t)\)). Since \(\gamma\) is strictly monotonically decreasing in the interval \([t^{i,l-1},t^{i,l})\) (see Observation 4), we can express \(|\Gamma^{i,l}|\) by an integral over its inverse as

\[|\Gamma^{i,l}|=q^{i}\int_{\gamma(t^{i,l})}^{\gamma(t^{i,l-1})}\gamma^{-1}(\alpha)\,\mathrm{d}\alpha.\]

(By Observation 4, \(\gamma\) is invertible over the interval \([0,t_{\gamma})\) for some \(t_{\gamma}\geq 0\), and as such also over \([t^{i,l-1},t^{i,l})\).)

5. For \(\Psi\in\{P,A\}\), we can calculate

\[|\Psi^{\mathrm{all}}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}]}\Psi^{i,l}\Big{|}=\Big{|}\bigcup_{j\in J}\bigcup_{i\in[k],l\in[K^{i}],j=j^{i,l}}\Psi^{i,l}\Big{|}=\Big{|}\bigcup_{j\in J}\Psi_{j}\Big{|}=\sum_{j\in J}|\Psi_{j}|=\Psi,\]

where in the last step we insert the results from statements 2 and 3, respectively.

Consider \(\Psi=B\). We have to show \(|B_{j}|=r_{j}\int_{0}^{\infty}\beta_{j}(t)\,\mathrm{d}t\), since this will allow us to derive the statement:

\[|B^{\mathrm{all}}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}]}B^{i,l}\Big{|}=\Big{|}\bigcup_{j\in J}\bigcup_{i\in[k],l\in[K^{i}],j=j^{i,l}}B^{i,l}\Big{|}=\sum_{j\in J}|B_{j}|=\sum_{j\in J}r_{j}\int_{0}^{\infty}\beta_{j}(t)\,\mathrm{d}t=B\]

So consider any \(B_{j}\). It consists of pieces \(B^{i,l}\) with \(j^{i,l}=j\). \(B^{i,l}\) is \(q^{i}\) deep into the resource axis. Further, the points \((t,r,\alpha)\in B^{i,l}\) satisfy \(\max(\gamma(t^{i,l}),\gamma(t))\leq\alpha<\min(\gamma(t^{i,l-1}),d_{j}(t))\). Using that \(\gamma(t)\leq\alpha<\gamma(t^{i,l-1})\) and the fact that \(\gamma\) is monotonically decreasing (see Observation 4), we derive \(t^{i,l-1}\leq t\). From \(\gamma(t^{i,l})\leq\alpha<d_{j}(t)\) it follows that \(t<t^{i,l}\), since by definition of a geometrical representation we have \(\gamma(t^{i,l})=d_{j}(t^{i,l})\). As such, \(t\in T^{i,l}\). The height of the piece at time \(t\in T^{i,l}\) is then \(d_{j}(t)-\gamma(t)=\beta_{j}(t)\) (the last equality follows from the \(R\)-slackness condition). As such, the volume of the piece is \(|B^{i,l}|=q^{i}\int_{t^{i,l-1}}^{t^{i,l}}\beta_{j}(t)\,\mathrm{d}t\). Because of the \(\beta\)-slackness condition, \(\beta_{j}(t)>0\) iff \(j\) is assigned \(r_{j}\) resource at time \(t\). As such, when we sum over all strip widths \(q^{i}\) where \(j\) is scheduled, we obtain exactly \(r_{j}\).
As such, we can finish our calculation:

\[|B_{j}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}],j=j^{i,l}}B^{i,l}\Big{|}=\sum_{i\in[k],l\in[K^{i}],j=j^{i,l}}|B^{i,l}|=\sum_{i\in[k],l\in[K^{i}],j=j^{i,l}}\int_{t^{i,l-1}}^{t^{i,l}}q^{i}\beta_{j}(t)\,\mathrm{d}t=r_{j}\int_{0}^{\infty}\beta_{j}(t)\,\mathrm{d}t\]

Lastly, for \(\Psi=\Gamma\), we calculate

\[|\Gamma^{\mathrm{all}}|=\Big{|}\bigcup_{i\in[k],l\in[K^{i}]}\Gamma^{i,l}\Big{|}=\sum_{i\in[k],l\in[K^{i}]}|\Gamma^{i,l}|=\sum_{i\in[k]}q^{i}\sum_{l\in[K^{i}]}\int_{\gamma(t^{i,l})}^{\gamma(t^{i,l-1})}\gamma^{-1}(\alpha)\,\mathrm{d}\alpha\]
\[=\sum_{i\in[k]}q^{i}\int_{\gamma(t_{\gamma})}^{\gamma(0)}\gamma^{-1}(\alpha)\,\mathrm{d}\alpha=\sum_{i\in[k]}q^{i}\int_{0}^{t_{\gamma}}\gamma(t)\,\mathrm{d}t=\int_{0}^{\infty}\gamma(t)\,\mathrm{d}t=\Gamma\]

## A (\(3/2+\varepsilon\))-Approximation for Total Completion Time Minimization

### Algorithm Description

Our \((3/2+\varepsilon)\)-approximation follows the same principle as described in Section 3, but uses algorithm \(\operatorname{\textsc{LSAapprox}}\) instead of \(\operatorname{\textsc{LS}}\). Algorithm Greedy stays the same. We give a broad description of \(\operatorname{\textsc{LSAapprox}}\). It first computes an approximate solution to the \(CLP\) (see Section 3). For that, the job set is first subdivided (see Definition 7 for the formal definition). Then a small portion of the total resource (say \(\mu=\kappa\cdot\varepsilon\in(0,1)\), \(1/\mu\in\mathbb{N}\), for some constant factor \(\kappa>0\)) is reserved for jobs with a small resource requirement and/or processing time. For the remaining jobs, the dual \(LP\) (see Appendix 0.B.3) is solved optimally with a polynomial number of slots. The \(\alpha\)-values of this solution are then used to construct a line schedule (\(R\),\(\alpha\),\(\cdot\),\(\cdot\),\(\bar{v}\)) for some \(\bar{v}\). (It is clear that this can be done in polynomial time, as it effectively boils down to calculating the intersections of the dual lines.) Because generally \(\bar{v}\neq v\) will hold, the schedule is scaled horizontally afterwards, i.e., a new schedule \(\tilde{R}(t)=R(t/s)\) is created, where \(s=\max\{v_{j}/\bar{v}_{j}\,|\,j\in J\}\). Lastly, \(\tilde{R}\) is squashed to only use \(1-\mu\) of the overall resource (scaled vertically by \(1-\mu\) and horizontally by \(1/(1-\mu)\)), and packed together with a schedule for the small jobs that uses \(\mu\) resource. We denote the schedule produced by \(\operatorname{\textsc{LSAapprox}}\) by \(R^{FA}\).

In principle, we could just squash the \(LP\) solution for these remaining jobs to only use the remaining \(1-\mu\) resource and would obtain an FPTAS for the fractional problem. The issue with this approach is, however, that the \(LP\)/dual \(LP\) solutions do not necessarily correspond to line schedules, and we therefore could not use our knowledge about these. This is why we reuse the \(\alpha\)-values from the dual \(LP\) to produce a line schedule. We then show that the scheduled volumes \(\bar{v}\) are close to the actual volumes \(v\) (in dependence on the granularity of the slots). By also choosing \(\mu\) to be very small, we can guarantee that the overall error is small enough.
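The two rescalings used by \(\operatorname{\textsc{LSAapprox}}\) are easy to state in code. The following self-contained sketch is ours (with a toy one-job schedule); it implements the horizontal stretch by \(s\) and the squashing to a \(1-\mu\) resource share, and checks that stretching scales volumes while squashing preserves them.

```python
# stretch: R~(t) = R(t/s); squash: rates scaled by (1-mu), time by 1/(1-mu).

def stretch(R, s):
    return lambda j, t: R(j, t / s)          # volumes grow by the factor s

def squash(R, mu):
    return lambda j, t: (1 - mu) * R(j, t * (1 - mu))   # volumes preserved

R = lambda j, t: 0.5 if 0 <= t < 2 else 0.0  # toy job: rate 1/2 on [0, 2)
R2 = squash(stretch(R, 1.25), 0.1)
vol = sum(R2(0, k * 0.001) * 0.001 for k in range(10000))
print(round(vol, 3))  # ~1.25 = 1.25 * the original volume of 1
```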
Analogous to Proposition 3, we will prove the following proposition.

Proposition 4: _For any \(\varepsilon>0\), \(\operatorname{\textsc{LSAapprox}}\) produces in polynomial time in \(n\),\(1/\varepsilon\) a schedule \(R^{FA}\) with \(\gamma(R^{FA})\leq 2+\varepsilon\)._

Using this proposition, we can show our main theorem.

Theorem 0.D.2: _There is a \((3/2+\varepsilon)\)-approximation algorithm for total completion time minimization. Its running time is polynomial in \(n\) and \(1/\varepsilon\)._

Proof.: The proof follows the same idea as the proof of Theorem 3. We run Greedy and \(\operatorname{\textsc{LSAapprox}}\) and choose the schedule with the smaller total completion time. Both algorithms run in polynomial time in \(n\),\(1/\varepsilon\) (\(\operatorname{\textsc{LSAapprox}}\) by Proposition 4). Plugging Proposition 4 into Theorem 3, we obtain an approximation ratio of \((3+\varepsilon)/2\leq 3/2+\varepsilon\).

The remainder of this section is dedicated to the proof of Proposition 4. The proof is based on the following Propositions 5 to 7. These propositions use a subdivision of \(J\) into three job sets. These are \(J^{\mathrm{l}}\) (_light_ jobs: jobs with small resource requirement), \(J^{\mathrm{sh}}\) (_short-heavy_ jobs: jobs with large resource requirement but small processing time) and \(J^{\mathrm{lh}}\) (_long-heavy_ jobs: both large resource requirement and processing time). For details, see Definition 7 below. Proposition 5 essentially allows us to ignore the light and short-heavy jobs for the remainder of the analysis. Proposition 6 shows that, with a fine enough slotting, the optimal \(LP\) solution (and therefore also the optimal dual \(LP\) solution) closely approximates the optimal \(CLP\) solution. Lastly, Proposition 7 deals with the step of \(\operatorname{\textsc{LSAapprox}}\) where we translate the optimal \(\alpha\)-values into a line schedule. It guarantees that the volume scheduled by the line schedule is close to the actual job volumes and that the cost of the produced line schedule is close to the optimal dual \(LP\) cost.

Proposition 5: _Let \(R^{\mathrm{lh}}\) be a feasible schedule for \(J^{\mathrm{lh}}\) and \(R^{*}\) an optimal schedule for \(J\). Then we can compute in polynomial time a feasible schedule \(R^{FA}\) for \(J\) such that \(C(R^{FA})\leq(1+2\mu)\cdot(C(R^{\mathrm{lh}})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*}))\) if \(\mu>0\) is sufficiently small._

Proposition 6: _Let \(R^{S}\) be a schedule induced by an optimal (slotted) \(LP\) solution for \(J^{\mathrm{lh}}\) with a time horizon \(T=n\cdot p_{\max}\) and a slot width of \(\delta=T\cdot(\mu/n)^{6}\), and let \(R^{\mathrm{lh}*}\) be an optimal fractional schedule for \(J^{\mathrm{lh}}\). Then \(C^{F}(R^{S})\leq(1+2\mu)\cdot C^{F}(R^{\mathrm{lh}*})\)._

Proposition 7: _Let \(\alpha\) be the \(\alpha\)-values of an optimal dual \(LP\) solution for \(J^{\mathrm{lh}}\) (with the parameters of Proposition 6), and let \((R^{\mathrm{lh}},\alpha,\cdot,\cdot,\bar{v})\) be the line schedule of \(\alpha\). Then (1) \(\bar{v}_{j}\geq(1-\mu^{4}/n)\cdot v_{j}\) for all \(j\in J^{\mathrm{lh}}\), and (2) \(C^{F}(R^{\mathrm{lh}})\leq(1+5\mu)\cdot C^{F}(R^{S})\)._

Proof of Proposition 4.: Recall that the total completion time of a line schedule is at most twice its fractional total completion time. Since the line schedule is stretched horizontally by \(s=\max\{v_{j}/\bar{v}_{j}\,|\,j\in J^{\mathrm{lh}}\}\leq 1/(1-\mu^{4}/n)\) by statement (1) of Proposition 7, statement (2) of Proposition 7 yields \(C(R^{\mathrm{lh}})\leq 2s\cdot(1+5\mu)\cdot C^{F}(R^{S})\leq(2+\mu)(1+5\mu)\cdot C^{F}(R^{S})\) for sufficiently small \(\mu\). We can bound the cost of \(R^{FA}\) by using this and Propositions 5 to 7:

\[C(R^{FA})\leq(1+2\mu)\Bigg{(}C(R^{\mathrm{lh}})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\Bigg{)}\leq(1+2\mu)\Bigg{(}(2+\mu)(1+5\mu)C^{F}(R^{S})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\Bigg{)}\]
\[\leq(1+2\mu)\Bigg{(}(2+\mu)(1+5\mu)(1+2\mu)C^{F}(R^{\mathrm{lh}*})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\Bigg{)}\leq(1+2\mu)(2+\mu)(1+5\mu)(1+2\mu)\,C^{F*}\leq^{*}(2+20\mu)\,C^{F*}\]

By defining \(\mu=\varepsilon/20\), we obtain the desired bound. For the running time, note first that \(\operatorname{\textsc{LSAapprox}}\) runs in polynomial time in \(1/\mu\) and \(n\). Since \(\mu\) and \(\varepsilon\) differ only by a multiplicative factor, \(\operatorname{\textsc{LSAapprox}}\) also runs in polynomial time in \(1/\varepsilon\).

### Dividing the Set of Jobs

Our goal for this sub-section is to show Proposition 5 so that we can essentially ignore all jobs except \(J^{\mathrm{lh}}\). Before proving Proposition 5, we give the formal definition of the job subsets.

**Definition 7**.: We define the sets of

1. _light_ jobs \(J^{\mathrm{l}}=\{j\in J\,|\,r_{j}\leq\mu/n\}\),
2. _short-heavy_ jobs \(J^{\mathrm{sh}}=\{j\in J\,|\,p_{j}\leq(\mu/n)^{2}\cdot p_{\mathrm{max}}\text{ and }r_{j}>\mu/n\}\), and
3. _long-heavy_ jobs \(J^{\mathrm{lh}}=J\setminus(J^{\mathrm{l}}\cup J^{\mathrm{sh}})\).
As described in Appendix 0.C.1, we squash \(R^{\mathrm{lh}}\) to only use \(1-\mu\) resource, and then pack the remaining jobs such that they only use \(\mu\) resource in total at each time point. To explain in more detail, each job \(j\in J^{\mathrm{l}}\cup J^{\mathrm{sh}}\) is assigned a resource of \(r^{(j)}\coloneqq\min(\mu/n,r_{j})\) until it finishes, while \(R^{\mathrm{lh}}\) is squashed such that it uses at most a resource of \(1-\mu\). Formally, define

\[R^{FA}_{j}(t)=\begin{cases}r^{(j)}\cdot\mathds{1}_{t<v_{j}/r^{(j)}}&\text{if }j\in J^{\mathrm{l}}\cup J^{\mathrm{sh}},\\ (1-\mu)\cdot R^{\mathrm{lh}}_{j}(t\cdot(1-\mu))&\text{if }j\in J^{\mathrm{lh}}.\end{cases}\]

The schedule is feasible: all jobs are scheduled, the light and short-heavy jobs use a total resource of at most \(n\cdot\min(\mu/n,r_{j})\leq\mu<1\), so the resource is not overused, and \(r^{(j)}\leq r_{j}\), so each light and short-heavy job is assigned at most its resource requirement.

Proof of Proposition 5.: Obviously, \(R^{FA}\) (as described above) can be computed in polynomial time. We calculate the cost of the resulting schedule. It is easy to calculate that \(C_{j}(R^{FA})=1/(1-\mu)\cdot C_{j}(R^{\mathrm{lh}})\) for \(j\in J^{\mathrm{lh}}\). Further, jobs \(j\in J^{\mathrm{l}}\) are scheduled from time \(0\) with their full resource requirement, so they cannot be scheduled better in \(R^{*}\), giving \(C_{j}(R^{FA})\leq C_{j}(R^{*})\). Notice that \(C_{j_{\max}}\geq p_{\max}\) for any job \(j_{\max}\notin J^{\mathrm{sh}}\) with \(p_{j_{\max}}=p_{\max}\) (such a job must exist since \(\mu<1\leq n\)). The total completion time of the short-heavy jobs is then

\[\sum_{j\in J^{\mathrm{sh}}}C_{j}(R^{FA})=\sum_{j\in J^{\mathrm{sh}}}\frac{v_{j}}{\min(\mu/n,r_{j})}\leq\sum_{j\in J^{\mathrm{sh}}}\frac{np_{j}r_{j}}{\mu}\leq\sum_{j\in J^{\mathrm{sh}}}\frac{r_{j}\mu p_{\max}}{n}\leq\mu p_{\max}\leq\mu\cdot\max(C(R^{\mathrm{lh}}),C(R^{*})),\]

where the last inequality is due to \(j_{\max}\in J^{\mathrm{lh}}\cup J^{\mathrm{l}}\). Therefore,

\[C(R^{FA})=\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{FA})+\sum_{j\in J^{\mathrm{sh}}}C_{j}(R^{FA})+\sum_{j\in J^{\mathrm{lh}}}C_{j}(R^{FA})\]
\[\leq\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})+\mu\cdot\max(C(R^{\mathrm{lh}}),C(R^{*}))+\frac{1}{1-\mu}\sum_{j\in J^{\mathrm{lh}}}C_{j}(R^{\mathrm{lh}})\]
\[\leq\left(\frac{1}{1-\mu}+\mu\right)C(R^{\mathrm{lh}})+(1+\mu)\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\]
\[\leq\left(\frac{1}{1-\mu}+\mu\right)\left(C(R^{\mathrm{lh}})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\right)\leq^{*}(1+2\mu)\left(C(R^{\mathrm{lh}})+\sum_{j\in J^{\mathrm{l}}}C_{j}(R^{*})\right)\]
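For concreteness, Definition 7 and the resource assignment for the small jobs translate directly into code; the (v, r)-pair representation below is our own sketch.

```python
# Job subdivision (Definition 7) and the small-job rates used in R^FA.

def subdivide(jobs, mu):
    """jobs: list of (v, r) pairs; returns index lists (light, short-heavy, long-heavy)."""
    n = len(jobs)
    p_max = max(v / r for v, r in jobs)
    light, short_heavy, long_heavy = [], [], []
    for j, (v, r) in enumerate(jobs):
        if r <= mu / n:
            light.append(j)
        elif v / r <= (mu / n) ** 2 * p_max:
            short_heavy.append(j)
        else:
            long_heavy.append(j)
    return light, short_heavy, long_heavy

def small_job_rate(v, r, mu, n, t):
    """Rate r^(j) = min(mu/n, r), held until the job finishes at time v / r^(j)."""
    rj = min(mu / n, r)
    return rj if t < v / rj else 0.0
```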
### Approximating the \(CLP\) with an \(LP\) Solution

This sub-section is concerned with the proof of Proposition 6. For this we consider optimal \(LP\) solutions using the parameters described in the proposition, specifically a time horizon \(T=n\cdot p_{\max}\) and a slot width of \(\delta=T\cdot(\mu/n)^{6}\). Using these parameters, we can first make the following observation for the schedule \(R^{S}\) for the long-heavy jobs \(J^{\mathrm{lh}}\).

Observation 5: Consider a schedule \(R^{S}\) corresponding to an optimal \(LP\) solution for \(J^{\mathrm{lh}}\), and let \(\alpha\) be the corresponding \(\alpha\)-values of the optimal dual \(LP\) solution. Then the following statements hold.

1. Each \(j\in J^{\mathrm{lh}}\) can process at most \(\mu^{4}/n^{3}\cdot v_{j}\) volume in each slot.
2. The resource assignments of \(R^{S}\) and of the schedule \(R^{\mathrm{lh}}\) from the line schedule \((R^{\mathrm{lh}},\alpha,\cdot,\cdot,\bar{v})\) of \(\alpha\) differ in at most \(n^{2}\) slots.

Proof.:

1. In each slot, a job \(j\in J^{\mathrm{lh}}\) can process at most a volume of \(r_{j}\delta=r_{j}T\cdot(\mu/n)^{6}=r_{j}\cdot\mu^{6}/n^{5}\cdot p_{\max}<r_{j}\cdot\mu^{6}/n^{5}\cdot p_{j}n^{2}/\mu^{2}=\mu^{4}/n^{3}\cdot v_{j}\), where the inequality comes from the definition of long-heavy jobs.
2. The two schedules may only differ in slots where two dual lines (for the vector \(\alpha\)) intersect with each other or with the time axis. There are \(|J^{\mathrm{lh}}|\leq n\) dual lines, so the number of intersections between two dual lines is at most \(n(n-1)/2\). Additionally, there are at most \(n\) intersections of a dual line with the time axis. In the worst case, each intersection lies in a different slot and the intersections happen above the time axis, so there are at most \(n(n-1)/2+n\leq n^{2}\) slots that contain dual line intersections.

Our general proof idea for Proposition 6 is to convert an optimal fractional solution \(R^{\mathrm{lh}*}\) for \(J^{\mathrm{lh}}\) into a slotted solution \(R^{S}\) and to observe how the costs of the two schedules relate. The conversion happens by averaging the resource assignment of \(R^{\mathrm{lh}*}\) in each slot. This averaging mostly does not change the cost too much, because the start and end points of a slot are close enough together that rescheduling volume inside of it does not make a big difference to the overall objective. However, in the first few slots, this argument breaks down. This is why we first show that the cost of the first few slots is negligible (see Lemma 16). Afterwards, we can show Proposition 6. We denote by \(f\coloneqq 3n^{3}/\mu\) the number of slots at the beginning of the schedule that we neglect.

Lemma 16: _Let \(R^{S}\) be a schedule induced by an optimal (slotted) \(LP\) solution for \(J^{\mathrm{lh}}\) with a time horizon \(T=n\cdot p_{\max}\) and a slot width of \(\delta=T\cdot(\mu/n)^{6}\). Let \(C^{F}_{\mathrm{first}}\) be the contribution of the first \(f\) slots to the fractional total completion time of \(R^{S}\), and \(C^{F}_{\mathrm{last}}\) the contribution of the other slots. Then \(C^{F}_{\mathrm{first}}\leq\mu^{4}/n^{3}\cdot f\cdot\frac{1}{1-\mu}\cdot C^{F}_{\mathrm{last}}\)._

Proof.: By Observation 5, each job \(j\) can schedule at most \(\mu^{4}/n^{3}\cdot v_{j}\) volume in each slot. This means that the first \(f\) slots process at most \(f\cdot\mu^{4}/n^{3}\cdot v_{j}\) volume of job \(j\), and consequently, the other slots process at least \((1-f\cdot\mu^{4}/n^{3})v_{j}\geq(1-\mu)v_{j}\) volume. Using this, we can show that the cost of the first \(f\) slots is negligible (we bound the contribution of each single job \(j\); summing over all jobs yields the lemma):

\[C^{F}_{\mathrm{first}}=\frac{1}{v_{j}}\sum_{i\leq f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)\leq\frac{1}{v_{j}}\left(f\cdot\delta-\frac{\delta}{2}\right)\sum_{i\leq f}V_{j,i}\leq\frac{\mu^{4}}{n^{3}}f\cdot\frac{1}{1-\mu}\cdot\frac{1}{v_{j}}\left(f\cdot\delta-\frac{\delta}{2}\right)\sum_{i>f}V_{j,i}\]
\[\leq\frac{\mu^{4}}{n^{3}}f\cdot\frac{1}{1-\mu}\cdot\frac{1}{v_{j}}\sum_{i>f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)=\frac{\mu^{4}}{n^{3}}f\cdot\frac{1}{1-\mu}\cdot C^{F}_{\mathrm{last}}.\qed\]
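For intuition on the orders of magnitude of these parameters, here is a toy evaluation (the values are arbitrary, the formulas are the ones above):

```python
# Slot width delta, neglected prefix f = 3n^3/mu, per-slot volume cap.

n, mu, p_max = 4, 0.5, 10.0
T = n * p_max
delta = T * (mu / n) ** 6
f = 3 * n ** 3 / mu
cap = mu ** 4 / n ** 3       # each job schedules <= cap * v_j per slot
print(T, delta, f, cap)      # 40.0 0.000152587890625 384.0 0.0009765625
```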
Proof of Proposition 6.: First note that \(R^{\mathrm{lh}*}\) does not schedule past the time horizon \(T\), by definition of \(T\). We show that \(C^{F}(R^{S})\leq(1+2\mu)\cdot C^{F}(R^{\mathrm{lh}*})\) by providing a slotted solution based on \(R^{\mathrm{lh}*}\). For that, we set the \(LP\) variables \(V_{j,i}=\int_{(i-1)\delta}^{i\delta}R_{j}^{\mathrm{lh}*}(t)\,\mathrm{d}t\). Call this \(LP\) solution \(V\). Denote by \(C_{j}^{F}(V)\) the fractional contribution of job \(j\) in the \(LP\) for solution \(V=(V_{j,i})_{j\in J,i\in I}\), i.e., \(C_{j}^{F}(V)=1/v_{j}\cdot\sum_{i\in I}V_{j,i}(i\delta-\delta/2)\). We bound the cost of \(V\) using Lemma 16. The cost of \(V\) for a job \(j\) is as follows:

\[C_{j}^{F}(V)=\frac{1}{v_{j}}\sum_{i\in I}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)=\frac{1}{v_{j}}\sum_{i\leq f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)+\frac{1}{v_{j}}\sum_{i>f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)\leq\left(1+\frac{\mu^{4}}{n^{3}}f\cdot\frac{1}{1-\mu}\right)\frac{1}{v_{j}}\sum_{i>f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)\]

We can then bound the right-hand factor \(RH\coloneqq\frac{1}{v_{j}}\sum_{i>f}V_{j,i}\left(i\delta-\frac{\delta}{2}\right)\) as follows (we temporarily omit the prefactor and continue the calculation):

\[RH\leq\frac{1}{v_{j}}\sum_{i>f}\frac{i}{i-1}\cdot(i-1)\delta V_{j,i}\leq\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i>f}(i-1)\delta V_{j,i}\leq\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i>f}\left((i-1)\delta V_{j,i}+\frac{V_{j,i}^{2}}{2r_{j}}\right)\]
\[=\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i>f}\frac{r_{j}}{2}\left(\left((i-1)\delta+\frac{V_{j,i}}{r_{j}}\right)^{2}-((i-1)\delta)^{2}\right)=\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i>f}\int_{(i-1)\delta}^{(i-1)\delta+V_{j,i}/r_{j}}r_{j}\cdot t\,\mathrm{d}t\]
\[\leq\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i>f}\int_{(i-1)\delta}^{i\delta}R_{j}^{\mathrm{lh}*}(t)\cdot t\,\mathrm{d}t\leq\frac{f+1}{f}\cdot\frac{1}{v_{j}}\sum_{i\in I}\int_{(i-1)\delta}^{i\delta}R_{j}^{\mathrm{lh}*}(t)\cdot t\,\mathrm{d}t=\frac{f+1}{f}\int_{0}^{\infty}\frac{R_{j}^{\mathrm{lh}*}(t)\cdot t}{v_{j}}\,\mathrm{d}t=\frac{f+1}{f}\,C_{j}^{F}(R^{\mathrm{lh}*})\]

In total, we get

\[C_{j}^{F}(V)\leq\frac{f+1}{f}\left(1+\frac{\mu^{4}}{n^{3}}f\cdot\frac{1}{1-\mu}\right)\cdot C_{j}^{F}(R^{\mathrm{lh}*})=\left(1+\frac{\mu}{3n^{3}}\right)\left(1+\frac{3\mu^{3}}{1-\mu}\right)\cdot C_{j}^{F}(R^{\mathrm{lh}*})\leq^{*}(1+2\mu)\,C_{j}^{F}(R^{\mathrm{lh}*}).\]

The statement is then obtained by bounding \(C_{j}^{F}(R^{S})\leq C_{j}^{F}(V)\) and summing over all \(j\in J^{\mathrm{lh}}\).
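The averaging step \(V_{j,i}=\int_{(i-1)\delta}^{i\delta}R^{\mathrm{lh}*}_{j}(t)\,\mathrm{d}t\) from this proof can be sketched as follows (toy parameters and numeric integration are ours; for the parameters of Proposition 6 the number of slots would of course be huge):

```python
# Convert a fractional schedule R (callable) into slotted variables V[j][i].

def to_slots(R, n_jobs, T, delta, grid=100):
    m = int(round(T / delta))
    h = delta / grid
    V = [[0.0] * (m + 1) for _ in range(n_jobs)]
    for j in range(n_jobs):
        for i in range(1, m + 1):
            start = (i - 1) * delta
            V[j][i] = sum(R(j, start + (k + 0.5) * h) * h for k in range(grid))
    return V

R = lambda j, t: 0.5 if t < 2 else 0.0       # toy single-job schedule
print([round(x, 3) for x in to_slots(R, 1, T=4.0, delta=1.0)[0]])
# [0.0, 0.5, 0.5, 0.0, 0.0]
```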
### Comparing a Line Schedule with its Slotted Counterpart

In this sub-section, we prove Proposition 7. The first statement is straightforward to prove using Observation 5.

Proof of Statement 1 of Proposition 7.: By the second statement of Observation 5, there can be at most \(n^{2}\) slots that contain intersections between two dual lines, or of a dual line with the time axis. By Definition 4 and the \(LP\) slackness conditions, the resource assignment stays the same in all slots that do not contain such intersections. By the first statement of Observation 5, each job \(j\in J^{\mathrm{lh}}\) can only process \(\mu^{4}/n^{3}\cdot v_{j}\) volume in such a slot. It follows that \(j\) can lose at most \(\mu^{4}/n\cdot v_{j}\) volume compared to a schedule corresponding to the primal solution that corresponds to \(\alpha\). Hence \(\bar{v}_{j}\geq v_{j}-\mu^{4}/n\cdot v_{j}=(1-\mu^{4}/n)v_{j}\).

The second statement is more difficult to prove. We first use Observation 5 to establish that there are at most \(n^{2}\) slots where \(R^{\mathrm{lh}}\) and \(R^{S}\) differ. Similar to the proof of Lemma 16, we want to essentially ignore these slots, i.e., bound their cost in terms of the cost of the other slots. Because of Lemma 16, this is easy for the first \(f\) slots. For the other slots, we require another lemma. First, let us define the _cost rate_ of a resource distribution \(R(t)\) as \(c(R(t))\coloneqq\sum_{j\in J}R_{j}(t)/v_{j}\). Lemma 17 then states that the cost rate is non-increasing over time.

Lemma 17: _Let \((R^{\mathrm{lh}},\alpha,\cdot,\cdot,\bar{v})\) be a line schedule. Then \(c(R^{\mathrm{lh}}(\cdot))\) is monotonically non-increasing._

The proof is given below. With it, we can bound the cost of a slot in terms of the cost of earlier slots, since the later slot has a smaller cost rate. We then want to assign to each slot where \(R^{\mathrm{lh}}\) and \(R^{S}\) differ (and that is not among the first \(f\) slots) a sufficient number of earlier slots that in total have a much higher cost. In short, we have to find a mapping as guaranteed by the following lemma.

Lemma 18: _Consider a slot set \(\bar{I}\) with \(|\bar{I}|\leq n^{2}\) and \(i>f\) for all \(i\in\bar{I}\). Then we can find a map \(M:\bar{I}\to P(I\setminus\bar{I})\) with \(\forall i^{\prime}\in M(i):i-2n^{3}/\mu<i^{\prime}<i\), such that \(M(i)\cap M(i^{\prime})=\varnothing\) for all \(i\neq i^{\prime}\) and \(|M(i)|=n/\mu\)._

Using Lemmas 17 and 18, we are now able to prove the second statement of Proposition 7.

Proof of Statement 2 of Proposition 7.: Let \(c_{i}\) be the cost of \(R^{\mathrm{lh}}\) in slot \(i\), i.e.,

\[c_{i}\coloneqq\int_{i\delta}^{(i+1)\delta}\sum_{j\in J^{\mathrm{lh}}}\frac{R^{\mathrm{lh}}_{j}(t)}{v_{j}}t\,\mathrm{d}t=\int_{i\delta}^{(i+1)\delta}c(R^{\mathrm{lh}}(t))\,t\,\mathrm{d}t,\]

such that \(C^{F}(R^{\mathrm{lh}})=\sum_{i=0}^{\infty}c_{i}\). We subdivide the set of slots into three sets, namely

1. \(I_{\mathrm{first}}\): the first \(f\) slots,
2. \(\bar{I}\): the set of all slots \(i\notin I_{\mathrm{first}}\) where \(R^{\mathrm{lh}}\) and \(R^{S}\) differ,
3. \(I_{\mathrm{rem}}\): all remaining slots.

We first show that the cost of the slots in \(\bar{I}\) is negligible. For that, consider any slot \(i\in\bar{I}\). Then, using Lemma 17, we can first bound \(c_{i}\):

\[c_{i}=\int_{i\delta}^{(i+1)\delta}c(R^{\mathrm{lh}}(t))\,t\,\mathrm{d}t\leq\int_{i\delta}^{(i+1)\delta}c(R^{\mathrm{lh}}(i\delta))(i+1)\delta\,\mathrm{d}t=c(R^{\mathrm{lh}}(i\delta))(i+1)\delta^{2}\]
On the other hand, if we use the mapping \(M\) from Lemma 18, we can show that the slots in \(M(i)\) have a much higher total cost:

\[\sum_{i^{\prime}\in M(i)}c_{i^{\prime}}=\sum_{i^{\prime}\in M(i)}\int_{i^{\prime}\delta}^{(i^{\prime}+1)\delta}c(R^{\mathrm{lh}}(t))\,t\,\mathrm{d}t\geq\sum_{i^{\prime}\in M(i)}\int_{i^{\prime}\delta}^{(i^{\prime}+1)\delta}c(R^{\mathrm{lh}}((i^{\prime}+1)\delta))\,i^{\prime}\delta\,\mathrm{d}t\]
\[\geq\sum_{i^{\prime}\in M(i)}c(R^{\mathrm{lh}}(i\delta))\,i^{\prime}\delta^{2}\geq c(R^{\mathrm{lh}}(i\delta))\cdot\delta^{2}\sum_{i^{\prime}\in M(i)}(i-2n^{3}/\mu)=c(R^{\mathrm{lh}}(i\delta))\cdot\delta^{2}\cdot\frac{n}{\mu}\cdot(i-2n^{3}/\mu)\]

As such, we can establish for their ratio:

\[\frac{c_{i}}{\sum_{i^{\prime}\in M(i)}c_{i^{\prime}}}\leq\frac{i+1}{\frac{n}{\mu}\cdot(i-2n^{3}/\mu)}\leq\frac{f+1}{\frac{n}{\mu}\cdot(f-2n^{3}/\mu)}\]

Inserting \(f\), we get

\[\frac{\frac{3n^{3}}{\mu}+1}{\frac{n}{\mu}\cdot\left(\frac{3n^{3}}{\mu}-\frac{2n^{3}}{\mu}\right)}=\frac{3n^{3}+\mu}{n^{3}}\cdot\frac{\mu}{n}\leq 4\mu\]

Thus we established that the cost of the slots in \(I_{\mathrm{first}}\) is negligible (Lemma 16) and that the cost of \(\bar{I}\) is negligible:

\[\sum_{i\in I_{\mathrm{first}}}c_{i}\leq\frac{\mu^{4}}{n^{3}}\cdot f\cdot\frac{1}{1-\mu}\cdot\left(\sum_{i\in\bar{I}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\right)=\frac{3\mu^{3}}{1-\mu}\cdot\left(\sum_{i\in\bar{I}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\right)\]
\[\sum_{i\in\bar{I}}c_{i}\leq 4\mu\left(\sum_{i\in I_{\mathrm{first}}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\right)\]

Summing both, we obtain

\[\sum_{i\in I_{\mathrm{first}}}c_{i}+\sum_{i\in\bar{I}}c_{i}\leq\frac{3\mu^{3}}{1-\mu}\cdot\left(\sum_{i\in\bar{I}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\right)+4\mu\left(\sum_{i\in I_{\mathrm{first}}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\right).\]

Rearranging this yields

\[\left(\frac{3\mu^{3}}{1-\mu}+4\mu\right)\sum_{i\in I_{\mathrm{rem}}}c_{i}\geq(1-4\mu)\sum_{i\in I_{\mathrm{first}}}c_{i}+\left(1-\frac{3\mu^{3}}{1-\mu}\right)\sum_{i\in\bar{I}}c_{i}\geq\left(1-4\mu-\frac{3\mu^{3}}{1-\mu}\right)\left(\sum_{i\in I_{\mathrm{first}}}c_{i}+\sum_{i\in\bar{I}}c_{i}\right)\]

For the total cost, we get that

\[C^{F}(R^{\mathrm{lh}})=\sum_{i=0}^{\infty}c_{i}=\sum_{i\in I_{\mathrm{first}}}c_{i}+\sum_{i\in\bar{I}}c_{i}+\sum_{i\in I_{\mathrm{rem}}}c_{i}\leq\left(1+\frac{\frac{3\mu^{3}}{1-\mu}+4\mu}{1-4\mu-\frac{3\mu^{3}}{1-\mu}}\right)\sum_{i\in I_{\mathrm{rem}}}c_{i}\leq^{*}(1+5\mu)\,C^{F}(R^{S}),\]

where the last inequality is due to \(R^{S}\) and \(R^{\mathrm{lh}}\) having the same cost in all slots from \(I_{\mathrm{rem}}\).

Lastly, we give the proofs of Lemmas 17 and 18.

Proof of Lemma 17.: By Lemma 14, \((R^{\mathrm{lh}},\alpha,\cdot,\cdot,\bar{v})\) is a primal-dual pair and therefore fulfills the slackness conditions for \(CLP(\bar{v})/DCP(\bar{v})\). By Definition 4, we have \(R(t)=R(t^{\prime})\) if \(\succ_{t}\) and \(\succ_{t^{\prime}}\) induce the same total order and the same set of dual lines lies above the time axis. The cost rate may only change at a time point \(t\) if the resource distribution changes at that point. By definition, this is only possible if
\(d_{j}(t)=0\) for some job \(j\) or 2. \(d_{j}(t)=d_{j^{\prime}}(t)\) for two jobs \(j\),\(j^{\prime}\). In the first case, \(R(\cdot)\) does only change at \(t\) when \(j\) was scheduled before \(t\). In this case, by Definition 4, only \(R^{\mathrm{lh}}_{j}(t)\) drops to zero, decreasing \(c(R^{\mathrm{lh}}(\cdot))\) at \(t\). In the second case, two jobs \(j\),\(j^{\prime}\) exchange some volume \(r>0\), i.e., \(R^{\mathrm{lh}}_{j}(\cdot)\) reduces by \(r\), and \(R^{\mathrm{lh}}_{j^{\prime}}\) increases by \(r\). This can only happen when \(v_{j}<v_{j^{\prime}}\) since all dual lines are monotonically decreasing. Therefore, the cost rate will change by \(r\cdot(-1/v_{j}+1/v_{j^{\prime}})<0\) at time \(t\), again decreasing the cost rate. If more than two jobs meet at \(t\), then we can express the change in cost rate as a multiple such changes. Proof of Lemma 18.: We build \(M\) from \(\bar{I}\) by considering each \(i\in\bar{I}\) in descending order. For each such \(i\in\bar{I}\), we define \(i\) as the largest \((n/\mu)\) slots that are smaller than \(i\) and not from \(\bar{I}\). Since \(\bar{I}\) does not contain any of the first \(f\) slots and since \(n^{2}\cdot(n/\mu)\leq f\), we have enough slots to assign. By this assignment, all desired properties of \(M\) are fulfilled, except we still have to show that \(\forall i^{\prime}\in M(i)\dvtx i-2n^{3}/\mu<i^{\prime}\) holds for any \(i\in\bar{I}\): Since \(|\bar{I}|\leq n^{2}\) and we choose the largest \(n/\mu\) slots for each set \(M(i)\), we have that \(\forall i^{\prime}\in M(i)\dvtx i-n^{2}-n^{2}\cdot(n/\mu)\geq i-2n^{3}/\mu\).
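To make the greedy construction in the proof of Lemma 18 concrete, here is a minimal Python sketch (ours, not part of the original analysis); it assumes integer slot indices, and it makes explicit that skipping slots already assigned to a previously processed (larger) element of \(\bar{I}\) is what guarantees the disjointness \(M(i)\cap M(i^{\prime})=\varnothing\):

```python
import math

def build_mapping(bad_slots, n, mu):
    """Greedy construction of the map M from the proof of Lemma 18.

    `bad_slots` plays the role of the set \\bar{I} (at most n^2 slots, all
    larger than f = 3n^3/mu). For each bad slot i, in descending order, we
    collect the largest n/mu earlier slots that are neither bad nor already
    assigned to another bad slot.
    """
    k = math.ceil(n / mu)                      # |M(i)| = n/mu
    bad, used, mapping = set(bad_slots), set(), {}
    for i in sorted(bad, reverse=True):        # descending order, as in the proof
        chosen, j = [], i - 1
        while len(chosen) < k and j >= 0:
            if j not in bad and j not in used:
                chosen.append(j)
            j -= 1
        used.update(chosen)
        mapping[i] = chosen
    return mapping

print(build_mapping(bad_slots={50, 51, 60}, n=2, mu=1.0))
# {60: [59, 58], 51: [49, 48], 50: [47, 46]}
```

Since \(\bar{I}\) contains none of the first \(f=3n^{3}/\mu\) slots and \(n^{2}\cdot(n/\mu)\leq f\), the inner loop always finds enough free slots, as the proof argues.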
2304.09097
Sheaf4Rec: Sheaf Neural Networks for Graph-based Recommender Systems
Recent advancements in Graph Neural Networks (GNN) have facilitated their widespread adoption in various applications, including recommendation systems. GNNs have proven to be effective in addressing the challenges posed by recommendation systems by efficiently modeling graphs in which nodes represent users or items and edges denote preference relationships. However, current GNN techniques represent nodes by means of a single static vector, which may inadequately capture the intricate complexities of users and items. To overcome these limitations, we propose a solution integrating a cutting-edge model inspired by category theory: Sheaf4Rec. Unlike single vector representations, Sheaf Neural Networks and their corresponding Laplacians represent each node (and edge) using a vector space. Our approach takes advantage of this theory and results in a more comprehensive representation that can be effectively exploited during inference, providing a versatile method applicable to a wide range of graph-related tasks and demonstrating unparalleled performance. Our proposed model exhibits a noteworthy relative improvement of up to 8.53% on F1-Score@10 and an impressive increase of up to 11.29% on NDCG@10, outperforming existing state-of-the-art models such as Neural Graph Collaborative Filtering (NGCF), KGTORe and other recently developed GNN-based models. In addition to its superior predictive capabilities, Sheaf4Rec shows remarkable improvements in terms of efficiency: we observe substantial runtime improvements ranging from 2.5% up to 37% when compared to other GNN-based competitor models, indicating a more efficient way of handling information while achieving better performance. Code is available at https://github.com/antoniopurificato/Sheaf4Rec.
Antonio Purificato, Giulia Cassarà, Federico Siciliano, Pietro Liò, Fabrizio Silvestri
2023-04-07T07:03:54Z
http://arxiv.org/abs/2304.09097v3
# Sheaf Neural Networks for Graph-based Recommender Systems ###### Abstract. Recent progress in Graph Neural Networks has resulted in wide adoption by many applications, including recommendation systems. The reason for Graph Neural Networks' superiority over other approaches is that many problems in recommendation systems can be naturally modeled as graphs, where nodes can be either users or items and edges represent preference relationships. In current Graph Neural Network approaches, nodes are represented with a static vector learned at training time. This static vector might only be suitable to capture some of the nuances of the users or items they define. To overcome this limitation, we propose using a recently proposed model inspired by category theory: Sheaf Neural Networks. Sheaf Neural Networks, and their associated Laplacian, can address the previous problem by associating every node (and edge) with a vector space instead of a single vector. The vector space representation is richer and allows picking the proper representation at inference time. This approach can be generalized for different related tasks on graphs and achieves state-of-the-art performance in terms of F1-Score@N in collaborative filtering and Hits@20 in link prediction. For collaborative filtering, the approach is evaluated on MovieLens 100K with a 5.1% improvement, on MovieLens 1M with a 5.4% improvement and on Book-Crossing with a 2.8% improvement, while for link prediction on the ogbl-ddi dataset with a 1.6% refinement with respect to the respective baselines. Recommender systems, graph neural networks, link prediction, collaborative filtering
## 1. Introduction In this research, we are proposing to use a novel class of GNN architectures: "_Sheaf Neural Networks_". Sheaf Neural Networks (SNNs) are a recently proposed and novel class of Graph Neural Network models inspired by Category Theory. They have been used in many different tasks and have been proven superior to GNNs (Beng et al., 2015). In this class of models, the basic building blocks are _Cellular Sheaves_.
Cellular sheaves associate a vector space with each node and edge of a graph and linear maps between these spaces. Simply put, Sheaf Neural Networks are Graph Neural Networks that operate over Cellular Sheaves (Beng et al., 2015). SNNs work by computing a generalization of the well-known Graph Laplacian, the so-called _Sheaf Laplacian_ (Chen et al., 2016). The Sheaf Laplacian is indeed a generalization of the Graph Laplacian: when the vector spaces of the cellular sheaves are 1-dimensional and we apply identity maps between them, the two Laplacians are perfectly equivalent. The Sheaf Laplacian is computed via the restriction maps in a non-parametric manner at pre-processing time (Beng et al., 2015). Our research extends the results in the literature on the theory of SNNs to two real-world problems: item recommender systems and link prediction. In particular, we show that applying SNNs to the task of graph-based recommender systems greatly improves the performance of both item recommender and link prediction systems. Experiments show that our proposed SNN-based models outperform all state-of-the-art models in the respective tasks: in the case of recommendation, we outperform the state-of-the-art method by 5.1% on MovieLens 100k, 5.4% on MovieLens 1M and 2.8% on Book-Crossing, and in the case of link prediction, we have a 1.6% improvement on the ogbl-ddi dataset. The innovative contributions of this research work are the following: * We propose a novel architecture for product and link recommender systems based on Sheaf Neural Networks that achieves state-of-the-art results in the respective tasks. * We demonstrate, through different experiments, the relation between our model and the use of the Bayesian Personalised Ranking loss function. * We perform extensive experiments on multiple datasets for top-N recommendation and link prediction tasks, which verify our solution's rationality and potential. Experimental results show that SNNs outperform all the other baselines on the two tasks we focus on in this paper. The rest of the paper is organized as follows. Section 2 reviews recent works in the current literature. Section 3 gives a background on Graph Neural Networks and sheaf theory. Section 4 explains our approach. Section 5 gives implementation details such as the dataset choice and the setting for the experiments. In Section 6, we demonstrate the effectiveness of the proposed solution, and we conclude in Section 7 with pointers to future ideas. ## 2. Related Work This section introduces some examples of state-of-the-art recommender systems. ### Deep learning-based recommender systems We can distinguish between two categories of existing DL-based recommendation models. For each category, an example is given. * Recommendation based on neural building blocks (Han et al., 2015): the DL technique determines the recommendation model's applicability. For example, MLPs can simply model the non-linear interactions between users and items; CNNs can extract local and global representations from heterogeneous data sources like text and image; recommender systems can model the temporal dynamics and sequential evolution of content information using RNNs.
* Recommendation based on deep hybrid models (Zheng et al., 2016): this technique is based on a mix of deep learning techniques that change depending on the task. One common idea is to mix long-term and short-term networks to model long-term and short-term preferences, respectively. Usually, this approach achieves better results than the previous one. Zheng et al. (Zheng et al., 2016) first convert a user's implicit feedback into a "like" vector and a confidence vector, and then they model the probability of the "like" vector, weighted by the confidence vector. Dong et al. (Dong et al., 2015) propose a hybrid model which jointly performs deep users' and items' latent factor learning from side information and collaborative filtering from the rating matrix. The output of this model approximates the predicted rating, and then a list of ranked items is generated for each user based on these predicted ratings. ### Autoencoder-based recommender systems An Autoencoder is an unsupervised model attempting to reconstruct its input data in the output layer. In general, the bottleneck layer (the middle-most layer) is used as a salient feature representation of the input data (Sachdeva et al., 2017). The main variants of Autoencoders are the denoising Autoencoder (Wang et al., 2017) and the variational Autoencoder (VAE) (Sachdeva et al., 2017). There are two ways to apply an Autoencoder to a recommender system: * Using the Autoencoder to learn lower-dimensional feature representations at the bottleneck layer. * Filling the blanks of the interaction matrix directly in the reconstruction layer. Sachdeva et al. (Sachdeva et al., 2017) introduce a recurrent version of the VAE, where instead of passing a subset of the whole history regardless of temporal dependencies, they pass the consumption sequence subset through a recurrent neural network. At each time-step of the RNN, the sequence is fed through a series of fully-connected layers, the output of which models the probability distribution of the most likely future preferences. Han et al. (Han et al., 2015) improve on the previous idea. They first pre-train an autoencoder with the local kernelised weight matrix, which transforms the data from one space into the feature space by using a 2d-RBF kernel. Then, the pre-trained autoencoder is fine-tuned with the rating matrix, produced by a convolution-based global kernel, which captures the characteristics of each item. ## 3. Background We briefly review the necessary background to understand the solution we propose. We start by summarizing the main characteristics of GNNs, and we then introduce cellular sheaf theory to finally present the learning algorithm we use: _Neural Sheaf Diffusion_. ### Graph Neural Networks The rise of GNNs mainly originates from the advancement of convolutional neural networks (CNNs) and Graph Representation Learning (GRL) (Goh et al., 2017). When applied to regular Euclidean data such as images or texts, CNNs are extremely effective in extracting localized features. However, for non-Euclidean data like graphs, CNNs require generalization to handle situations where operation objects (e.g., pixels in images or nodes on graphs) are non-fixed in size. In terms of GRL, it aims to generate low-dimensional vectors for graph nodes, edges, or subgraphs, which represent the complex connection structures of graphs. A graph is represented as \(G=(V,E)\), where \(V\) is the set of nodes and \(E\) is the set of edges. The set of edges can also be described by an adjacency matrix \(A\).
Let \(v_{i}\in V\) be a node and \(e_{ij}=(v_{i},v_{j})\in E\) be an edge pointing from \(v_{j}\) to \(v_{i}\). \(A\) is defined as: \[A_{ij}=\begin{cases}1\text{ if }\ e_{ij}\in E\\ 0\text{ otherwise}\end{cases} \tag{1}\] The neighbourhood of a node \(v\) is denoted as: \[N(v)=\{u\in V\mid(v,u)\in E\} \tag{2}\] The degree matrix of \(G\) is a matrix \(D\) which contains information about the number of edges attached to each vertex: \[D_{ij}=\begin{cases}degree(i)\text{ if }i=j\\ 0\text{ otherwise}\end{cases} \tag{3}\] Using the definitions of the degree matrix \(D\) and the adjacency matrix \(A\), the Laplacian can be easily defined as: \[L=D-A \tag{4}\] Given the graph data, the main idea of GNNs is to iteratively aggregate feature information from neighbours and integrate the aggregated information with the current central node representation during the propagation process. From the perspective of network architecture, a GNN stacks multiple propagation layers consisting of aggregation and update operations. The formulation of propagation is: \[Update:H^{(l)}=f(H^{(l-1)},A) \tag{5}\] \[Aggregate:H^{(l)}=\sigma(D_{v}^{-\frac{1}{2}}AWD_{e}^{-1}A^{T}D_{v}^{-\frac{1}{2}}H^{(l-1)}W^{(l)}) \tag{6}\] where the aggregation function shown is the one for a hypergraph neural network, \(\sigma\) is the activation function, and \(D_{v}\) and \(D_{e}\) denote the diagonal matrices of the vertex degrees and the edge degrees, respectively. ### Sheaf theory A cellular sheaf \((G,\mathcal{F})\) on an undirected graph \(G=(V,E)\) consists of: * A vector space \(\mathcal{F}(v)\) for each \(v\in V\); * A vector space \(\mathcal{F}(e)\) for each \(e\in E\); * A linear map \(\mathcal{F}_{v\trianglelefteq e}:\mathcal{F}(v)\rightarrow\mathcal{F}(e)\) for each incident node-edge pair \(v\trianglelefteq e\). The vector spaces of the nodes and edges are called stalks, while the linear maps are called restriction maps. Given a sheaf \((G,\mathcal{F})\), we define the space of 0-cochains \(C^{0}(G,\mathcal{F})\) as the direct sum over the vertex stalks: \[C^{0}(G,\mathcal{F})=\bigoplus_{v\in V}\mathcal{F}(v) \tag{7}\] The space of 1-cochains \(C^{1}(G,\mathcal{F})\) is the direct sum over the edge stalks. Given some arbitrary orientation for each edge \(e\), we define the co-boundary map \(\delta:C^{0}(G,\mathcal{F})\to C^{1}(G,\mathcal{F})\) as: \[\delta(x)_{e}=\mathcal{F}_{v\trianglelefteq e}x_{v}-\mathcal{F}_{u\trianglelefteq e}x_{u} \tag{8}\] The sheaf Laplacian of a sheaf is defined as: \[L_{\mathcal{F}}=\delta^{T}\delta,\qquad L_{\mathcal{F}}(x)_{v}=\sum_{v,u\trianglelefteq e}\mathcal{F}_{v\trianglelefteq e}^{T}(\mathcal{F}_{v\trianglelefteq e}x_{v}-\mathcal{F}_{u\trianglelefteq e}x_{u}) \tag{9}\] It can be seen how the sheaf Laplacian is strictly related to the Laplacian defined in Equation 4 but considers restriction maps instead of edges. In fact, the sheaf Laplacian is a generalization of the graph Laplacian that encodes additional relational structure parameterized by the underlying graph. The normalised sheaf Laplacian \(\Delta_{\mathcal{F}}\) is defined as: \[\Delta_{\mathcal{F}}=D^{-\frac{1}{2}}L_{\mathcal{F}}D^{-\frac{1}{2}} \tag{10}\] where \(D\) is the block-diagonal of \(L_{\mathcal{F}}\).
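To make these definitions concrete, the following is a minimal numpy sketch (ours, not the authors' code) that builds the co-boundary map of Equation 8 and the sheaf Laplacian \(L_{\mathcal{F}}=\delta^{T}\delta\) of Equation 9 from explicit restriction maps, and checks that with 1-dimensional stalks and identity restriction maps it reduces to the graph Laplacian \(L=D-A\) of Equation 4:

```python
import numpy as np

def sheaf_laplacian(edges, restrictions, n, d):
    """Sheaf Laplacian L_F = delta^T delta for a graph with n nodes and
    d-dimensional stalks. `restrictions[(v, e)]` is the d x d restriction
    map F_{v <| e} for node v on the e-th edge; `edges` is a list of
    (u, v) pairs with an arbitrary fixed orientation (Equation 8)."""
    m = len(edges)
    delta = np.zeros((m * d, n * d))          # co-boundary map C^0 -> C^1
    for e, (u, v) in enumerate(edges):
        delta[e*d:(e+1)*d, v*d:(v+1)*d] = restrictions[(v, e)]
        delta[e*d:(e+1)*d, u*d:(u+1)*d] = -restrictions[(u, e)]
    return delta.T @ delta

# Triangle graph, 1-dimensional stalks, identity restriction maps:
edges = [(0, 1), (1, 2), (0, 2)]
restrictions = {(node, e): np.eye(1)
                for e, (u, v) in enumerate(edges) for node in (u, v)}
L = sheaf_laplacian(edges, restrictions, n=3, d=1)
print(L)  # [[ 2. -1. -1.] [-1.  2. -1.] [-1. -1.  2.]] = D - A
```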
### Neural sheaf diffusion Given a graph \(G=(V,E)\) where each node of the graph \(v\in V\) has a \(d\)-dimensional feature vector \(x_{v}\in\mathcal{F}(v)\), we construct an \(nd\)-dimensional vector \(x\in C^{0}(G,\mathcal{F})\) by column-stacking the individual vectors \(x_{v}\). We produce the feature matrix \(\mathbf{X}\in\mathbb{R}^{(nd)\times f}\) where the columns of \(\mathbf{X}\) are vectors in \(C^{0}(G,\mathcal{F})\). Sheaf diffusion is a process on \((G,\mathcal{F})\) governed by the following differential equation: \[\mathbf{X}(0)=\mathbf{X},\ \ \ \dot{\mathbf{X}}(t)=-\Delta_{\mathcal{F}}\mathbf{X}(t) \tag{11}\] which is discretised via the explicit Euler scheme with unit step size: \[\mathbf{X}(t+1)=\mathbf{X}(t)-\Delta_{\mathcal{F}}\mathbf{X}(t)=(\mathbf{I}_{nd}-\Delta_{\mathcal{F}})\mathbf{X}(t) \tag{12}\] In the model used by (Bang et al., 2017) the previous equation is discretised in the following way: \[\mathbf{X}(t+1)=\mathbf{X}(t)-\sigma(\Delta_{\mathcal{F}(t)}(\mathbf{I}_{n}\otimes\mathbf{W}_{1}^{t})\mathbf{X}(t)\mathbf{W}_{2}^{t}) \tag{13}\] with the sheaf \(\mathcal{F}(t)\) and the weights \(\mathbf{W}_{1}^{t}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{2}^{t}\in\mathbb{R}^{f\times f}\) time-dependent, meaning that the underlying geometry evolves.
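As an illustration, here is a small sketch of the plain Euler discretisation in Equation 12 (our simplification: a fixed sheaf and no weights or nonlinearity, unlike the learned variant in Equation 13):

```python
import numpy as np

def sheaf_diffusion(X, delta_F, steps):
    """Explicit Euler discretisation of sheaf diffusion with unit step size
    (Equation 12): X(t+1) = (I_nd - Delta_F) X(t). `delta_F` is the
    (nd x nd) normalised sheaf Laplacian; X is the (nd x f) feature matrix
    of column-stacked node features."""
    step = np.eye(delta_F.shape[0]) - delta_F
    for _ in range(steps):
        X = step @ X
    return X

# Diffusion smooths features: with the normalised Laplacian of the triangle
# graph (1-dimensional stalks, identity maps), repeated steps drive the
# features of all nodes towards their mean (here zero).
L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])
d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(L)))
delta_F = d_inv_sqrt @ L @ d_inv_sqrt
X = np.array([[1.0], [0.0], [-1.0]])
print(sheaf_diffusion(X, delta_F, steps=10))  # close to the zero vector
```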
## 4. Method ### Collaborative filtering We consider the information that a generic user embedding would have at each layer of the GNN. The user nodes are the input embeddings provided to the model in the input layer, as shown in Figure 2. After the input layer, a new item embedding is generated and is passed to Layer 1. The user node embedding will then get updated with a new item embedding containing semantic information related to its connected users. Note that, during this layer, the nodes learn not only from the embedding they obtained in the input layer but also from this new embedding. In Layer 2 the previous item embedding is updated with a new one. In this way, we try to cluster users with the same item interests. As a result, the user node in the final layer will gain knowledge about the embeddings of users 2 hops away in the GNN. Semantically, in terms of recommendations, we have made this user's node embedding more similar to those of other users who share the same item interests. The reason behind the choice of node embeddings is straightforward: at the node level, an embedding vector is generated for each node in the graph. This embedding vector can hold the graph representation and structure. At each step, the user's node embedding becomes more similar to those of other users who share the same item interests. Essentially, nodes in close proximity should also have vectors in close proximity to each other. One advantage of our approach is that the connection between the proposed embedding and the architecture is straightforward. The SNN takes as input an embedding vector with the modified representation of users and items and the size of the graph (the sum of the number of users and items). Then all the information from the embedding is stored in the corresponding vector spaces. The complete process can be found in Figure 1. #### 4.1.1. Loss function A surrogate loss function is used to enable efficient gradient-based optimization and also to work in the best way with sheaves. Specifically, we use the Bayesian Personalized Ranking (BPR) loss as the surrogate. To understand BPR, we need to define the notion of positive and negative edges: positive edges are those that exist in the graph and negative edges are those that do not. In our bipartite graph, the score of the user \(u\) for the item \(v\) is: \[f_{\theta}(u,v)=e_{u}^{T}e_{v} \tag{14}\] where \(e_{u}\) is the embedding related to the user \(u\) and \(e_{v}\) is the embedding related to the item \(v\). Consequently, for the user \(u\) we can define the score of a positive edge containing \(u\) as \(s(p)=f_{\theta}(u,v_{pos})\) and the score of a negative edge containing \(u\) as \(s(n)=f_{\theta}(u,v_{neg})\). The BPR loss is: \[BPR(u)=\sum_{p,n}-\ln(\sigma(s(p)-s(n))) \tag{15}\] To efficiently estimate the BPR loss and optimize \(f_{\theta}\), we train the model by sampling mini-batches. For each mini-batch, we sample the set of users and for each user sample one positive item (the item from a positive edge containing the user in question) and one negative item.
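The following is a short PyTorch sketch of this mini-batch BPR objective (Equations 14 and 15); the tensor shapes and names are illustrative, not the paper's actual implementation:

```python
import torch

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """Mini-batch BPR loss: -log(sigmoid(s(p) - s(n))) summed over samples,
    with scores s = e_u^T e_v as in Equation 14. Each row pairs one user
    with one sampled positive and one sampled negative item."""
    pos_scores = (user_emb * pos_item_emb).sum(dim=-1)  # s(p) per sample
    neg_scores = (user_emb * neg_item_emb).sum(dim=-1)  # s(n) per sample
    return -torch.log(torch.sigmoid(pos_scores - neg_scores)).sum()

# One sampled mini-batch: 4 users, one positive and one negative item each.
torch.manual_seed(0)
users = torch.randn(4, 64, requires_grad=True)
pos, neg = torch.randn(4, 64), torch.randn(4, 64)
loss = bpr_loss(users, pos, neg)
loss.backward()  # gradients flow back into the user embeddings
```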
### SheafNN and recommendation The improvements in this work are due to the use of topology. Since the SNN works using sheaves, we need to use an update function based on simplicial complexes (Bordes and T.). The messages are: \[m_{B}^{t+1}(\sigma)=AGG_{\tau\in B(\sigma)}(M_{B}(h_{\sigma}^{t},h_{\tau}^{t})) \tag{16}\] \[m_{\uparrow}^{t+1}(\sigma)=AGG_{\tau\in N_{\uparrow}(\sigma),\beta\in C(\sigma,\tau)}(M_{\uparrow}(h_{\sigma}^{t},h_{\tau}^{t},h_{\beta}^{t})) \tag{17}\] The update function is: \[h_{\sigma}^{t+1}=U\left(h_{\sigma}^{t},m_{B}^{t+1}(\sigma),m_{\uparrow}^{t+1}(\sigma)\right) \tag{18}\] \(\sigma\), \(\tau\) and \(\beta\) are cells, while \(B\) represents a boundary. The update function \(U\) receives as input two different types of messages, \(m_{B}\) and \(m_{\uparrow}\). Our network transforms a standard graph into a sheaf. An example of a sheaf can be found in Figure 3. We consider the node \(V\) as the user \(V\) and the node \(U\) as the user \(U\). We work in the following way: \(\mathcal{F}(v)\) is the vector space containing all the opinions of \(V\), which we split into positive and negative opinions, and the same holds for \(\mathcal{F}(u)\) and user \(U\). Up to this moment, the main difference is that the information about a certain user is stored in the corresponding vector space. The edge \(e\) contains all the items in common between the two users, and the features of these items are modelled in the vector space \(\mathcal{F}(e)\). Every iteration of the training updates these features for all the users. From the previous chapter, we also know that \(f_{\theta}(u,v)=e_{u}^{T}e_{v}\) is the score for the user \(u\). But with this representation, we have the user embedding in \(\mathcal{F}(u)\) and the item embedding in \(\mathcal{F}(e)\). As a result, the score can simply be found in the restriction maps \(\mathcal{F}_{u\trianglelefteq e}\) and \(\mathcal{F}_{v\trianglelefteq e}\), and the more we train our network the higher the score. For this reason, we have chosen to use the BPR loss; in fact, with this representation it is extremely simple to compute. ### Link prediction The goal of the SNN is to learn the embeddings of each node and to use them to predict edges in the graph. However, the SNN computes node embeddings for all nodes in the graph, whereas what we want to do is make predictions on pairs of nodes. For this reason, a module is created that takes pairs of node embeddings (i.e., an edge) and classifies whether the edge connecting two drugs exists or not. This module is a simple neural network called the Link Predictor. The Link Predictor (shown in Figure 4) takes the element-wise product of the real-valued embedding vectors of 2 nodes (\(h_{i}\) and \(h_{j}\)) and computes the probability score of whether there exists a link between the 2 nodes through a multi-layer perceptron. Also in this case a node embedding is created, and the creation of the embedding is of primary importance for the training. Since the task is different, the SNN is used differently. Given two generic nodes \(U\) and \(V\), the vector space \(\mathcal{F}(u)\) associated to node \(U\) contains information about the molecule \(U\), and the same holds for \(V\). The restriction maps \(\mathcal{F}_{u\trianglelefteq e}\) and \(\mathcal{F}_{v\trianglelefteq e}\) contain information about the interaction between \(U\) and \(V\). During the training, at every iteration the vector space \(\mathcal{F}(e)\) related to the edge \(e\) and the restriction maps are updated with new features about the corresponding molecules. Also in this case the SNN takes as input simply the embedding vector computed in the previous step. #### 4.3.1. Loss function The objective is to maximize the probability of correct edges (positive predictions \(p\)) while minimizing the probability of incorrect edges (negative predictions \(n\)). The loss function is the following: \[\mathcal{L}=-\mu(\ln(p))-\mu(\ln(1-n)) \tag{19}\] where \(\mu\) represents the mean.
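A minimal PyTorch sketch of the Link Predictor of Figure 4 and of the loss in Equation 19 could look as follows (the layer sizes and names are our assumptions, not the paper's actual configuration):

```python
import torch
import torch.nn as nn

class LinkPredictor(nn.Module):
    """Sketch of the Link Predictor: the element-wise product h_i * h_j of
    two node embeddings is passed through an MLP that outputs the
    probability of an edge between nodes i and j."""
    def __init__(self, dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, h_i, h_j):
        return self.mlp(h_i * h_j).squeeze(-1)

def link_loss(p, n):
    """Equation 19: maximise the log-probability of positive edges while
    minimising that of negative ones (mu denotes the mean)."""
    return -torch.mean(torch.log(p)) - torch.mean(torch.log(1 - n))

predictor = LinkPredictor()
h = torch.randn(10, 64)              # node embeddings produced by the SNN
p = predictor(h[[0, 1]], h[[2, 3]])  # scores of two positive pairs
n = predictor(h[[4, 5]], h[[6, 7]])  # scores of two negative pairs
loss = link_loss(p, n)
```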
## 5. Experiments In this section, we test the performance of Sheaf Neural Networks on two different tasks: collaborative filtering and link prediction. For collaborative filtering, three different datasets are used: MovieLens 100k, MovieLens 1M and the Book-Crossing dataset. For link prediction, the ogbl-ddi dataset is used. ### Datasets description #### 5.1.1. Book-Crossing dataset The Book-Crossing dataset (Song et al., 2016) is a collection of user ratings of books. It comes with both explicit ratings (1-10 stars) and implicit ratings (user interacted with the book). It is composed of 278858 members of Book-Crossing and 1157112 ratings, referring to 271379 distinct ISBNs. #### 5.1.2. MovieLens The MovieLens 1M dataset contains around 1 million anonymous ratings of approximately 3900 movies by 6040 users, where each user rated at least 20 items. The MovieLens 100k dataset contains around 100k anonymous ratings of approximately 1700 movies by 1000 users, where each user rated at least 20 items. The ratings in the MovieLens 1M and MovieLens 100K datasets are made on a 5-star scale, with 1-star increments. In both datasets, each user is described by demographic info (age, gender, occupation). A good way to visualize the interactions in a recommender system is by using a bipartite graph with the users and items (movies in this case) as nodes and edges between them indicating user-item interactions. The graph will be bipartite because users can be interested in items, but items and users can't be interested in other items or users, respectively. #### 5.1.3. ogbl-ddi This dataset contains 4,267 nodes and 1,334,889 edges representing the drug-drug interaction network. In the ogbl-ddi dataset, each node represents an FDA-approved or experimental drug. Edges represent interactions between drugs and can be interpreted as a phenomenon where the joint effect of taking two drugs together is considerably different from the expected effect in which drugs act independently of each other. A protein-target split is implemented, meaning that drug edges are split according to what proteins those drugs target in the body. As a result, the test set consists of drugs that predominantly bind to different proteins from drugs in the train and validation sets. This means that drugs in the test set work differently in the body, and have a rather different biological mechanism of action than drugs in the train and validation sets. Figure 3. Sheaf structure. It is possible to see the stalks associated with each node and the restriction maps. ### Competitors We compare our model with various baselines and current state-of-the-art models, including recently published neural architectures, and we now present a brief summary of our competitors to provide a better understanding of these models. #### 5.2.1. Collaborative filtering * CoFM (Krishnan et al., 2017) jointly trains FM and TransE by sharing parameters or regularization of aligned items and entities. * MVAE (Krishnan et al., 2017) is an extension of variational autoencoders (VAEs) to collaborative filtering for implicit feedback. * SVAE (Savaghi et al., 2018) is an improvement of MVAE using sequential VAEs based on a recurrent neural network. * LinUCB (Liu et al., 2019) is a multi-armed bandit approach which recommends items to the user based on the contextual information about the user and items. * SVD (Krishnan et al., 2017) is a popular algorithm utilizing Singular Value Decomposition for the process of recommendation. * DeepMF (Wang et al., 2018) is a state-of-the-art neural network architecture based on Matrix Factorization for recommendation. * MDP (Krishnan et al., 2017) works as a reinforcement learning agent modeling collaborative filtering as a Markov Decision Process (MDP). * LibFM (Liu et al., 2019) is a widely used feature-based factorization model for CTR prediction. * RippleNet (Wang et al., 2018) is a memory-network-like approach that propagates users' preferences on the KG for recommendation. * KGNN-LS (Wang et al., 2018) transforms the graph into a weighted graph and uses a GNN to compute item embeddings. #### 5.2.2. Link prediction * Node2Vec (Chen et al., 2017) learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighbourhoods of nodes. * SEAL (Wang et al., 2018) learns a function mapping the subgraph patterns to link existence. * GCN (Glorot et al., 2017) is based on an efficient variant of convolutional neural networks which operate directly on graphs. * GraphSAGE (Chen et al., 2017) leverages node feature information to efficiently generate node embeddings for unseen data. ### Metrics #### 5.3.1. Collaborative filtering We use three evaluation metrics that have been widely used in previous work: * Precision@N: the fraction of the recommended items that are relevant to the user. We compute the mean over all users as the final precision; * Recall@N: the proportion of items relevant to the user that have been successfully recommended. We compute the mean over all users as the final recall; * F1-Score@N: the harmonic mean of precision at rank N and recall at rank N. \begin{table} \begin{tabular}{c c c} \hline \hline & MovieLens 100k & MovieLens 1M \\ \hline Number of users & 943 & 6040 \\ Number of items & 1682 & 3952 \\ Number of ratings & 100000 & 1000029 \\ Number of all genres & 19 & 18 \\ Average number of genres & 1.7 & 1.6 \\ \hline \hline \end{tabular} \end{table} Table 1. Main stats about MovieLens 100k and MovieLens 1M. Figure 4. Link predictor architecture. It takes as input \(h_{i}\odot h_{j}\) and outputs the probability of a link between node \(i\) and node \(j\). #### 5.3.2. Link prediction The performance is evaluated by Hits@20: each true drug interaction is ranked among a set of approximately
100,000 randomly-sampled negative drug interactions, and we count the ratio of positive edges that are ranked at 20th place or above. ## 6. Results All the experiments were performed on a single NVIDIA RTX A6000 with 10752 CUDA cores and 48 GB RAM. In all the experiments, different seed values are used to see how the results of the models change with the split. ### ogbl-ddi In this section, we test the performance of the SNN on the ogbl-ddi dataset. During the comparison, the learning rate and the weight decay are fixed and equal to 0.003 and 0.0005, respectively. The model is trained with the Adam optimizer, and 300 epochs are performed, completing the training phase in about 18 hours. Training our model takes quite a long time due to the inherently complex structure of SNNs, and the operations performed make computing the backpropagation algorithm less time-efficient than for other architectures. It is important to notice that the variance of our results, compared to our competitors, decreases consistently. Our network is less affected by variations in the parameters than the other architectures, making our method stable and, therefore, fitter to be used in different problems. We ran statistical tests on the results obtained by our models on this task to assess the significance of the differences. We used Student's unpaired t-test between our predictions and the predictions of our competitors with independent and identically distributed samples. By selecting \(p=0.05\), we demonstrated that our results are statistically significant if we compare them against GraphSAGE, GCN, and Node2Vec. ### MovieLens 1M In this section, we test the performance of the SNN on MovieLens 1M. We used a 90-10 train-test split: 90% of the samples are used for training and 10% for testing. This is the most common example of a split for this dataset. During the comparison, the learning rate and the weight decay are fixed and equal to 0.005 and 0.0001. The model is trained with a computational time of 210 minutes. The selected size for the embedding vector is 64; consequently, the input size of the SNN is 64. We use the BPR loss as the loss function, as explained in Section 4. For the sake of completeness, we also experimented with different loss functions. In recommender systems, the most common example of a loss function is the Root Mean Squared Error (RMSE) loss. We also tested our approach using this loss function, but our experimental results show that the F1-Score was lower. The main advantage of BPR over RMSE is the computational time: Table 3 shows that BPR is up to 50% faster than RMSE. We also tested the Binary Cross-Entropy (BCE) loss. In this case, the efficiency of the method was comparable, but the performance in terms of F1-Score was much worse. Thus, we also did not consider this loss when training our models. The choice of F1-Score@10 and F1-Score@100 as evaluation metrics is not random. In fact, by choosing \(N=10\) and \(N=100\) we are working with two different sizes for the sets of relevant items for the users. The most common graph-based recommender systems have a high F1-Score@N only for small values of N; in our case, the F1-Score is really high also for high values of N. In Figure 5 it is possible to compare our results with other state-of-the-art systems, and we demonstrate that our solution is able to achieve good performance also with high values of \(N\). In this figure it was not possible to represent the values of F1-Score@N for values of \(N>50\) because the other approaches obtained very bad results.
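For clarity, here is a small Python sketch (ours) of the two ranking metrics used in this evaluation, F1-Score@N from Section 5.3.1 and Hits@20 from Section 5.3.2, on toy data:

```python
import numpy as np

def f1_at_n(recommended, relevant, n):
    """Precision@N / Recall@N / F1-Score@N for one user (Section 5.3.1).
    `recommended` is a ranked item list, `relevant` a set of held-out items;
    the reported figures are the mean of this value over all users."""
    hits = len(set(recommended[:n]) & relevant)
    precision, recall = hits / n, hits / max(len(relevant), 1)
    return 0.0 if hits == 0 else 2 * precision * recall / (precision + recall)

def hits_at_k(pos_scores, neg_scores, k=20):
    """Hits@20 (Section 5.3.2): the fraction of true drug interactions
    scored higher than the k-th highest of the randomly-sampled negatives."""
    threshold = np.sort(neg_scores)[-k]
    return float(np.mean(pos_scores > threshold))

print(f1_at_n(["a", "b", "c", "d"], {"b", "d", "e"}, n=4))  # ~0.571
rng = np.random.default_rng(0)
print(hits_at_k(rng.normal(2, 1, 1000), rng.normal(0, 1, 100_000)))
```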
Finally, in Table 4 we can see that the SNN outperforms all the baselines in terms of F1-Score and Recall. In Figure 6 we have tested the value of F1 when increasing the number of returned recommendations, \(K\). We can observe that the F1-Score starts decreasing for \(K>60\). This is different from our competitors, where this value ranges between 20 and 40. This experiment shows that this architecture can achieve good results also when we produce a high number of recommended items that are potentially relevant for the users. For example, consider MovieLens: the MovieLens dataset contains movie ratings, and therefore the higher the number of movies that we are able to recommend, the more satisfied the user. ### MovieLens 100K In this section, we test the performance of the SNN on MovieLens 100k. We used an 80-20 train-test split: 80% of the samples are used for training and 20% for testing. The values of the hyperparameters are different from the previous experiment: the learning rate is fixed at 0.001 while the weight decay is 0.0002. The model is trained with the Adam optimizer and 50 epochs are performed, with a computational time of 30 minutes. It is possible to see how our solution outperforms all the baselines in terms of F1-Score. The results can be seen in Table 5. One of the advantages of the SNN is the possibility of preventing over-smoothing: deeply stacking the number of layers does not decrease the performance. In a lot of GNNs the features become progressively smoother with increased depth (Krizhevsky et al., 2017). In our case, the best results are obtained using 5 layers, as shown in Figure 7. \begin{table} \begin{tabular}{c c c} \hline \hline Model & Validation Hits@20 & Test Hits@20 \\ \hline Node2Vec & 0.329\(\pm\)0.012 & 0.233\(\pm\)0.021 \\ SEAL & 0.285\(\pm\)0.027 & 0.306\(\pm\)0.039 \\ GCN\({}^{\dagger}\) & 0.555\(\pm\)0.021 & 0.370\(\pm\)0.051 \\ GraphSAGE\({}^{\dagger}\) & 0.626\(\pm\)0.037 & 0.539\(\pm\)0.047 \\ **SNN (our)**\({}^{\dagger}\) & **0.632\(\pm\)0.036** & **0.555\(\pm\)0.044** \\ \hline \hline \end{tabular} \end{table} Table 2. Results of the evaluation using the ogbl-ddi dataset. SNN has a 32% improvement on the test hits with respect to the baseline and a 1.6% improvement with respect to the best model. \(\dagger\) means statistically significant (Student's unpaired t-test, p \(<\) 0.05). \begin{table} \begin{tabular}{c c c c} \hline \hline Loss & \#Layers & F1-Score@10 & Time \\ \hline RMSE & 2 & 0.112 & 283 \\ RMSE & 5 & 0.148 & 371 \\ BCE & 2 & 0.087 & 198 \\ BCE & 5 & 0.101 & 262 \\ BPR & 2 & 0.163 & 163 \\ BPR & 5 & 0.192 & 210 \\ \hline \hline \end{tabular} \end{table} Table 3. Performance of the same model using different loss functions and numbers of layers. The computational time is in minutes, \(\downarrow\) is better. ### Book-Crossing In this section, we test the performance of the SNN on Book-Crossing. During the comparison, the learning rate and the weight decay are fixed and equal to 0.001 and 0.0005. The model is trained for 70 epochs with a computational time of 251 minutes, using the BPR loss as the loss function. The computational time is a bit higher with respect to the previous experiments because the dataset is bigger and also contains a lot of side information, so the processing time is higher. In this experiment, we obtained better results by setting the selected size for the embedding vector to 32. As we can see in Table 6, we outperform the state-of-the-art results both on Recall@10 and Recall@100. ## 7.
Conclusions and Future Work Associating each node and each edge with a vector space is an effective strategy for recommendation. To prove this, we used our SNN architecture, and the experimental evaluation highlights the capability of the SNN to consistently outperform state-of-the-art models on two different tasks. These results finally prove the applicability of algebraic topology to machine learning tasks. In the future, we are interested in using this architecture in new studies. Thanks to its topological structure, we think this network could obtain good results in different applications by storing the side information of the nodes in the corresponding stalks. A possible future direction could be Next Point-Of-Interest recommendation, to provide users with where to go and how to plan the day based on previous visits as well as current status, or feature selection. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Precision@10 & Recall@10 & F1-Score@10 & Precision@100 & Recall@100 & F1-Score@100 \\ \hline \hline MVAE\({}^{\dagger}\) & 0.090 & 0.091 & 0.091 & 0.054 & 0.414 & 0.096 \\ SVAE\({}^{\dagger}\) & 0.144 & 0.125 & 0.134 & 0.069 & 0.494 & 0.121 \\ CoFM\({}^{\dagger}\) & **0.321** & 0.130 & 0.178 & 0.354 & 0.135 & 0.189 \\ **SNN (our)** & 0.284 & **0.145** & **0.192** & 0.122 & **0.520** & **0.198** \\ \hline \hline \end{tabular} \end{table} Table 4. Results of the evaluation using the MovieLens 1M dataset. SNN has a 5.4% improvement on F1@10 with respect to the baseline and a 0.7% improvement with respect to the best model. Figure 5. F1-Score values on MovieLens 1M compared with state-of-the-art solutions. The F1-Score of CAKR (Krizhevsky et al., 2017), RippleNet (Zhu et al., 2017) and Ripp-MKR (Ripp et al., 2018) drops extremely fast, whereas the F1-Score of our model decreases much more slowly. Figure 6. The F1-Score of our solution starts decreasing at a value of K between 60 and 70. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Precision@10 & Recall@10 & F1-Score@10 & Precision@20 & Recall@20 & F1-Score@20 \\ \hline LinUCB\({}^{\dagger}\) & 0.321 & 0.135 & 0.192 & 0.286 & 0.203 & 0.238 \\ SVD\({}^{\dagger}\) & 0.343 & 0.163 & 0.222 & 0.282 & 0.245 & 0.272 \\ DeepMF\({}^{\dagger}\) & 0.344 & 0.160 & 0.219 & 0.314 & 0.244 & 0.262 \\ MDP\({}^{\dagger}\) & **0.357** & 0.168 & 0.228 & **0.313** & 0.251 & 0.278 \\ **SNN (our)** & 0.210 & **0.287** & **0.243** & 0.265 & **0.309** & **0.285** \\ \hline \hline \end{tabular} \end{table} Table 5. Results of the evaluation using the MovieLens 100k dataset. SNN has a 5.1% improvement on F1@10 with respect to the baseline and a 1.5% improvement with respect to the best model. \(\dagger\) is used when the results are taken from the original paper.
2310.15521
Zr-Co-Al bulk metallic glass composites containing B2 ZrCo via rapid quenching and annealing
As a promising remedy for overcoming the limited ductility and work softening of bulk metallic glasses (BMGs), BMG composites incorporating a B2 crystalline phase have attracted considerable attention. Here, we explore the formation of Zr-Co-Al BMG composites by quenching alloys Zr$_{55}$Co$_{31}$Al$_{14}$, Zr$_{54.5}$Co$_{33.5}$Al$_{12}$, Zr$_{53.5}$Co$_{36.5}$Al$_{10}$, Zr$_{52.5}$Co$_{37.5}$Al$_{10}$, and Zr$_{43}$Co$_{43}$Al$_{14}$. We found the first alloy fully amorphous whereas the fifth was fully crystallized upon quenching. The other three were quenched to generate composite structures, with a higher fraction of B2 ZrCo phase with increasing Co/Zr ratio and decreasing Al content. For comparison, the formation of B2 ZrCo in annealed Zr$_{55}$Co$_{31}$Al$_{14}$ was also studied. For both approaches the influence of crystalline phases on hardness was examined.
Yu Chen, Chunguang Tang, Kevin Laws, Qiang Zhu, Michael Ferry
2023-10-24T04:57:53Z
http://arxiv.org/abs/2310.15521v1
# Zr-Co-Al bulk metallic glass composites containing B2 ZrCo via rapid quenching and annealing ###### Abstract As a promising remedy for overcoming the limited ductility and work softening of bulk metallic glasses (BMGs), BMG composites incorporating a B2 crystalline phase have attracted considerable attention. Here, we explore the formation of Zr-Co-Al BMG composites by quenching alloys Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\), Zr\({}_{54.5}\)Co\({}_{33.5}\)Al\({}_{12}\), Zr\({}_{53.5}\)Co\({}_{36.5}\)Al\({}_{10}\), Zr\({}_{52.5}\)Co\({}_{37.5}\)Al\({}_{10}\), and Zr\({}_{43}\)Co\({}_{43}\)Al\({}_{14}\). We found the first alloy fully amorphous whereas the fifth was fully crystallized upon quenching. The other three were quenched to generate composite structures, with a higher fraction of B2 ZrCo phase with increasing Co/Zr ratio and decreasing Al content. For comparison, the formation of B2 ZrCo in annealed Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\) was also studied. For both approaches the influence of crystalline phases on hardness was examined. Published at Journal of Alloys and Compounds 820 (2020) 153079. ## Introduction Bulk Metallic Glasses (BMGs) are promising structural materials with high strength, hardness and elastic limit. However, their low tensile ductility at failure and strain-softening associated with autocatalytic shear-banding at room temperature significantly hamper their applications. These problems have been tackled by designing BMG composites containing ductile crystalline phases dispersed throughout the glassy matrix [1]. While compressive work hardening [2] and large tensile ductility [1; 3] have been reported for BMG composites with ductile crystalline phases, the combination of tensile plasticity and work hardening is only accomplished via BMG composites containing the B2 (Pm\(\bar{3}\)m, space group number 221) crystalline phase, such as Zr-Cu-Al BMG composites [2; 4; 5; 6]. Different from conventional ductile crystalline phases, the B2 phase can respond to the applied stress or strain via martensite transformation (B2 to B19\({}^{\prime}\) or B2 to B33), resulting in work-hardening and transformation-induced plasticity. Recently, a few studies have shown that ZrCo in the B2 phase, due to its large number of slip systems, is inherently ductile [7; 8; 9; 10]. For example, tensile elongations greater than 7% [7] and 20% [8] have been reported for B2 ZrCo after hot-rolling and recrystallization, and compressive strain greater than 40% [10] has also been reported for as-cast B2 ZrCo. On the other hand, the glass forming abilities (GFAs) of Zr-Co-Al alloys have been systematically investigated [11; 12; 13] and some alloys with high GFA have been identified. For example, fully amorphous Zr\({}_{56}\)Co\({}_{28}\)Al\({}_{16}\) cylinders with diameter up to 18 mm were produced [13]. Amorphous Zr\({}_{56}\)Co\({}_{44-x}\)Al\({}_{x}\) (\(x\)=12, 14, 16, 18) exhibits high compressive plasticity between \(\sim\)5% and \(\sim\)11% and strength more than 2 GPa [14; 15]. The success of CuZr-based BMG composites implies the potential to further improve the mechanical properties of Zr-Co-Al BMGs by designing composites containing B2 ZrCo. There have been some attempts to produce the B2 ZrCo phase within a Zr-Co-Al BMG matrix by annealing [14], but so far as-cast Zr-Co-Al BMG composites have not been reported. We note that the B2 ZrCo phase has been found in as-cast (ZrCo)\({}_{100-x}\)Al\({}_{x}\) (x=1, 2, 3, 5), but these alloys have poor GFA and are fully crystallized upon quenching [9; 16; 17].
In this work, by exploring the ternary composition space we identify certain compositions of significant potential for the formation of BMG composites containing B2 ZrCo. For comparison, we also study the formation of B2 ZrCo in devitrified Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\). ## Methods To date a number of Zr-Co-Al compositions have already been explored, as indicated by the solid blue squares (for high GFA) or the open blue squares (for poor GFA) in Fig. 1. We start with a composition near the glass-forming Zr\({}_{56}\)Co\({}_{28}\)Al\({}_{16}\) and increase its crystallization ability by changing the composition towards Zr\({}_{50}\)Co\({}_{50}\). To this end we selected three compositions, Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\), Zr\({}_{54.5}\)Co\({}_{33.5}\)Al\({}_{12}\), and Zr\({}_{53.5}\)Co\({}_{36.5}\)Al\({}_{10}\). A fourth alloy, Zr\({}_{52.5}\)Co\({}_{37.5}\)Al\({}_{10}\), was also selected in view of the relatively poor GFA for compositions with Al below 10%. Since alloy (ZrCo)\({}_{100-x}\)Al\({}_{x}\) is well characterized by B2 ZrCo but has poor GFA at low \(x\), we also studied alloy Zr\({}_{43}\)Co\({}_{43}\)Al\({}_{14}\). For simplicity, these five alloys are denoted hereafter as alloys A1 to A5, respectively. All five compositions studied in this work are indicated in Fig. 1 by the solid red circles. Alloys were fabricated by mini arc melting the mixture of pure Zr (99.95%), Co (99.95%) and Al (99.99%) metals in a Ti-gettered high-purity argon atmosphere. The alloys were remelted a minimum of six times before being suction-cast into cylinders of 3 mm in diameter and 40 mm in length. Thermal analysis was performed using differential scanning calorimetry (DSC, NETZSCH 404) at a heating rate of 20 K/min. Structural characterizations were performed using X-ray diffraction (XRD, Bruker D8, Philips X'Pert, operating voltage of 45 kV and Cu-K\({}_{\alpha}\) radiation) and Scanning Electron Microscopy (SEM, Hitachi 3400X) equipped with Energy Dispersive Spectroscopy (EDS). Microhardness was measured using a Struers Durascan micro-hardness tester at 1 kg with a dwell time of 15 seconds. To examine the effect of devitrification on the formation of the B2 phase, Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\) was annealed using DSC near the onsets of the first and second crystallization, respectively, for various times before cooling the samples by switching off the DSC. The heating rate for annealing was 20 K/min. ## Results and Discussion The DSC curves of as-cast alloys A1-A5 are shown in Fig. 2, with the thermal parameters summarized in Table 1. As can be seen, as the content of Al decreases and the Co/Zr ratio increases in alloys A1-A4, the latent heat associated with the two crystallization peaks decreases. For alloy A5, no crystallization peaks were detected. These results indicate decreasing GFA as the composition changes from alloy A1 to A5. The width of the supercooled liquid region (\(\Delta T_{x}\)=\(T_{x1}\)\(-\)\(T_{g}\)) for alloys A1 and A2 was determined to be 49 and 37 K, respectively. These are close to the reported values for the similar compositions Zr\({}_{56}\)Co\({}_{30}\)Al\({}_{14}\) (40 K) and Zr\({}_{56}\)Co\({}_{32}\)Al\({}_{12}\) (38 K).
Based on the proposal of Nagel and Tauc, whereby metallic glasses can be stabilized at compositions for which the Fermi level is located at a minimum of the electronic density of states, some authors have investigated the GFA of Zr-Co-Al alloys from the perspective of the conduction electron concentration, i.e., the number of (effective) conduction electrons per atom (\(e_{c}/a\)). Using the assigned elemental \(e_{c}/a\) values (1.5, 0, and 3 for Zr, Co, and Al, respectively), they found a positive correlation between the stability of the glass and \(e_{c}/a\) in the range 1.3\(<\)\(e_{c}/a\)\(<\)1.5 for (Zr\({}_{9}\)Co\({}_{4}\))\({}_{100-x}\)Al\({}_{x}\), as indicated by arrow 1 in Fig. 1. In our study, \(e_{c}/a\) decreases from 1.245 for alloy A1 to 1.065 for alloy A5 (Table 1), which appears to be consistent with the decreasing GFA and with the previous findings. However, it is worth noting that Zr\({}_{64.6}\)Co\({}_{17.7}\)Al\({}_{17.7}\) (\(e_{c}/a\)=1.5) and some other compositions have high \(e_{c}/a\) values but poor GFA, indicating that there may be only a finite \(e_{c}/a\) band favourable for glass formation, or that other effects are at play in these alloys.

It is interesting to compare the GFA of (ZrCo)\({}_{100-x}\)Al\({}_{x}\) and its counterpart (ZrCu)\({}_{100-x}\)Al\({}_{x}\) at low \(x\), since both Zr\({}_{50}\)Co\({}_{50}\) and Zr\({}_{50}\)Cu\({}_{50}\) can exhibit a B2 crystal phase, which, according to the binary phase diagrams, precipitates directly from the liquid. Previous results [9; 16; 17] and the present work indicate that (ZrCo)\({}_{100-x}\)Al\({}_{x}\) has poor GFA for \(x\) as high as 14, whereas (ZrCu)\({}_{100-x}\)Al\({}_{x}\) (\(x\leq 10\)) is a good glass former [22], albeit with nanocrystals found in the glass matrix [23; 24; 25]. This is consistent with the fact that Cu contributes one (effective) electron [26] and hence less Al is required to form a bulk glass. The significant GFA difference could also be a reflection of the relative stability of B2 ZrCo and B2 ZrCu, the former being stable at low temperature while the latter decomposes into Cu\({}_{10}\)Zr\({}_{7}\) and CuZr\({}_{2}\) around 1000 K according to the phase diagram [27], although we note that the ternary phase equilibria of the two systems are somewhat different, with many ternary intermetallics present in the Zr-Co-Al system.

Figure 1: Zr-Co-Al composition diagram. The square symbols (solid for amorphous, open for crystalline) represent previously explored compositions of various sample sizes. Arrows 1 and 2 indicate two previously studied series of compositions, which have increasing and constant conduction electron concentration, respectively. The red circles represent the compositions studied in this work, and the crystal phases in the samples are listed in descending order of their X-ray diffraction intensities. The dashed line is a guide for the eye.

Figure 2: DSC curves for as-cast alloys A1-A5, with the extracted thermal properties given in Table 1. \(T_{a1}\) and \(T_{a2}\) indicate the annealing temperatures for alloy A1. Inset in top panel: the determination of \(T_{g}\) of alloy A1. \(T_{i}\) is the inflection point where the heat flow has its lowest local derivative (\(d\)HF\(/dT\)).
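For concreteness, the \(e_{c}/a\) column of Table 1 can be reproduced directly from the compositions; below is a minimal sketch, using only the elemental values (1.5, 0 and 3) assigned in the text.

```python
# Effective conduction-electron concentration e_c/a for alloys A1-A5,
# using the elemental values assigned in the text: Zr 1.5, Co 0, Al 3.
compositions = {                      # atomic fractions (Zr, Co, Al)
    "A1": (0.550, 0.310, 0.140),
    "A2": (0.545, 0.335, 0.120),
    "A3": (0.535, 0.365, 0.100),
    "A4": (0.525, 0.375, 0.100),
    "A5": (0.430, 0.430, 0.140),
}
e_elem = {"Zr": 1.5, "Co": 0.0, "Al": 3.0}

for name, (zr, co, al) in compositions.items():
    eca = zr * e_elem["Zr"] + co * e_elem["Co"] + al * e_elem["Al"]
    print(f"{name}: e_c/a = {eca:.3f}")
# Reproduces the e_c/a column of Table 1 (to rounding):
# 1.245, 1.178, 1.103, 1.088, 1.065
```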
As the GFA decreases, the alloys (partially) crystallize during quenching. As shown in Fig. 3, three crystalline phases, namely Zr\({}_{2}\)Co, ZrCo and Zr\({}_{2}\)Al, were detected in alloy A2. The formation of the Zr\({}_{2}\)Co and ZrCo phases is understandable since, if we neglect the influence of Al, the binary alloy Zr\({}_{62}\)Co\({}_{38}\), corresponding to the Zr/Co ratio of alloy A2, solidifies into Zr\({}_{2}\)Co and ZrCo. For alloys A3 and A4, which have lower Zr/Co ratios, the crystal phases Zr\({}_{2}\)Co and Zr\({}_{2}\)Al disappear and a new Zr\({}_{6}\)CoAl\({}_{2}\) phase forms. At the same time, the fraction of the B2 ZrCo phase increases as the composition changes from alloy A2 to A4. Alloy A5 was fully crystallized on quenching, characterized by a dominant B2 ZrCo phase and a small amount of the Zr\({}_{5}\)Co\({}_{7}\)Al\({}_{3}\) phase. These two phases are also those found in as-cast Zr\({}_{47.5}\)Co\({}_{47.5}\)Al\({}_{5}\) [9], where the Zr\({}_{5}\)Co\({}_{7}\)Al\({}_{3}\) phase segregates to the grain boundaries of the B2 ZrCo phase. For 3% or lower Al content, the influence of Al on the crystallization dynamics of (ZrCo)\({}_{100-x}\)Al\({}_{x}\) is negligible and the crystalline phases are B2 ZrCo, Co\({}_{2}\)Zr, and CoZr\({}_{2}\) [17]. The DSC and XRD results discussed above clearly indicate that _in-situ_ Zr-Co-Al BMG composites can be fabricated by tuning the composition. For comparison, we also explored the formation of BMG composites by annealing alloy A1 for various dwell times at 753 and 843 K, respectively. These two temperatures are slightly below the respective crystallization onsets (Fig. 2). Transient annealing at 753 K results in an emerging peak of the ZrCoAl phase, and, after annealing for 100 min, ZrCo and Zr\({}_{6}\)CoAl\({}_{2}\) peaks also emerge, but overall the amorphous hump is still preserved. Annealing at 843 K makes the crystalline peaks more pronounced, but no new crystalline phases were observed. These three phases were also observed upon annealing Zr\({}_{56}\)Co\({}_{28}\)Al\({}_{16}\) and Zr\({}_{55}\)Co\({}_{20}\)Al\({}_{25}\) [14; 18], and they were also the phases of as-cast, crystallized Zr\({}_{64.6}\)Co\({}_{17.7}\)Al\({}_{17.7}\) [11]. However, comparing the annealed and as-cast alloys in this study, the product phases are somewhat different, reflecting the complexity of crystallization in Zr-Co-Al alloys. Fig. 4 shows some typical SEM micrographs of the crystallites, together with the line-scanning results.

Figure 3: (left) XRD patterns for as-cast alloys A1-A5; (right) alloy A1 annealed at 753 K and 843 K for various times.

\begin{table} \begin{tabular}{c c c c c c c c} alloy & \(T_{g}\) (K) & \(T_{x1}\) (K) & \(\Delta T_{x}\) (K) & \(\Delta H_{1}\) (J/g) & \(\Delta H_{2}\) (J/g) & \(e_{c}/a\) & reference \\ \hline Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\) (A1) & 737 & 785 & 49 & 33.64 & 27.59 & 1.245 & this work \\ Zr\({}_{54.5}\)Co\({}_{33.5}\)Al\({}_{12}\) (A2) & 737 & 774 & 37 & 10.38 & 7.58 & 1.178 & this work \\ Zr\({}_{53.5}\)Co\({}_{36.5}\)Al\({}_{10}\) (A3) & 751 & 798 & 37 & 1.28 & 1.53 & 1.103 & this work \\ Zr\({}_{52.5}\)Co\({}_{37.5}\)Al\({}_{10}\) (A4) & 768 & 803 & 35 & 0.15 & 0.27 & 1.088 & this work \\ Zr\({}_{43}\)Co\({}_{43}\)Al\({}_{14}\) (A5) & \multicolumn{5}{c}{in as-cast crystalline state} & 1.065 & this work \\ Zr\({}_{56}\)Co\({}_{30}\)Al\({}_{14}\) & 748 & 788 & 40 & - & - & 1.26 & [15] \\ Zr\({}_{56}\)Co\({}_{32}\)Al\({}_{12}\) & 747 & 785 & 38 & - & - & 1.20 & [15] \\ \end{tabular} \end{table} Table 1: Thermal and electronic properties of alloys A1-A5 and of similar reported alloys. The glass transition temperature (\(T_{g}\)), the onset temperature of crystallization (\(T_{x1}\)), the width of the supercooled liquid region (\(\Delta T_{x}\)), and the latent heats of crystallization (\(\Delta H_{1}\), \(\Delta H_{2}\)) were measured at a heating rate of 20 K/min, the same as in reference [15]. The average number of effective conduction electrons per atom (\(e_{c}/a\)) is also listed.
In alloys A2 and A4, dendrites with a composition close to ZrCo formed due to the segregation of Co from the matrix into the dendrites, with Zr and Al segregating in the opposite direction. It can be seen that the dendrites in alloy A4 are larger than those in alloy A2, consistent with the DSC results. For alloy A1 annealed for 30 minutes at 843 K, areas with compositions ZrCo and ZrCoAl can be seen in the line-scanning plot, consistent with the XRD results. To understand the effect of BMG composite formation on the mechanical properties, we carried out uniaxial compression testing on the as-cast alloys A1 to A4. It can be seen from Fig. 5 that the stress-strain curves are slightly curved in the initial elastic regime, indicating a varying Young's modulus \(E\). We attribute this to slightly non-parallel faces of the cylindrical samples, which can result in a continuously changing \(E\) at the initial deformation stage that is lower than the true \(E\) (see the Appendix). We also expect \(E\) to be influenced by machine compliance and other factors. In view of these effects, and of the fact that \(E\) is relatively insensitive to small composition changes, we did not 'rectify' the stress-strain curves by scaling. Comparing the measured \(E\) of A1, \(\sim\)40 GPa based on the linear portion in Fig. 5, with the reported \(\sim\)80-90 GPa for Zr\({}_{56}\)Co\({}_{30}\)Al\({}_{14}\) [15], we estimate that machine compliance causes a reduction of about 50% in the measured \(E\). For amorphous Zr\({}_{56}\)Co\({}_{44-x}\)Al\({}_{x}\) (\(x\) from 12 to 18) [15], good compressive plasticity has been reported, albeit sensitive to the fraction of free volume and to the processing parameters [28]. In this study we did not observe noticeable plasticity in the amorphous sample A1. According to Fig. 5, samples A1 and A2 have similar strengths, and samples A3 and A4 have noticeably higher strengths than A1 and A2. Compared with the other samples, A4 exhibits noticeable plasticity and hardening, which we attribute to the B2 phase, the dominant crystal phase in A4. We also performed Vickers hardness tests as a complementary approach for understanding the mechanical behaviour. As shown in Fig. 6, the hardness of as-cast alloy A1 is about 530 kgf/mm\({}^{2}\), and it increases to about 550 and 600 kgf/mm\({}^{2}\) for alloys A2 and A3, respectively. Alloy A4 has a similar hardness to alloy A3. Since the alloys in this study were quenched in the same manner, the influence of differences in quenching rate can be neglected and the hardness is largely determined by the formation of crystalline phases. Consistent with our results, numerous studies [29; 30; 31; 32] have shown that the hardness of BMGs increases as the volume fraction of crystalline phases increases.

Figure 4: SEM micrographs (left) and the corresponding line-scanning compositions (right) for selected alloys. For as-cast alloys A2 and A4, crystalline areas with compositions near Zr\({}_{50}\)Co\({}_{50}\) were found. For alloy A1 annealed for 30 min at 843 K, crystalline areas with compositions close to Zr\({}_{50}\)Co\({}_{50}\) and ZrCoAl were found.
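As an aside, the Vickers values above are quoted in kgf/mm\({}^{2}\); converting to SI units is a one-liner (1 kgf/mm\({}^{2}\) = 9.80665 MPa):

```python
# Convert Vickers hardness from kgf/mm^2 to GPa (1 kgf = 9.80665 N)
for hv in (530, 550, 600):
    print(f"{hv} kgf/mm^2 = {hv * 9.80665e-3:.2f} GPa")
# 530 -> 5.20 GPa, 550 -> 5.39 GPa, 600 -> 5.88 GPa
```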
Nevertheless, the influence of crystalline phases on hardness is complex and depends on various factors, such as the strength/hardness of the crystal phases relative to that of the glassy matrix, whether or not the crystalline phases strain harden, and how their distribution interacts with shear banding. The increased hardness of BMG composites does not necessarily imply a higher hardness of the embedded crystals compared with the matrix. Indeed, it has been reported for Zr-Cu-based composites [5; 33] that B2 ZrCu is softer than the glassy matrix, whereas the transformation-induced B19\({}^{\prime}\) ZrCu is harder than the matrix [5]. Annealing of alloy A1 also results in an increase in hardness (Fig. 6). For annealing at 753 K and transient annealing at 843 K, the magnitude of the hardness increase is similar to that of the as-cast alloys. The maximum hardness is reached after annealing for 20 minutes at 843 K, and further annealing to 30 min results in a significant drop in hardness. It is worth noting that, while for the as-cast alloys the hardness increases because of the crystalline phases, this is not the case for transient annealing at 753 K, where the crystal fraction is minimal. The increase in hardness associated with annealing near the glass transition temperature is usually attributed to free-volume annihilation [34; 35] associated with structural relaxation. Nevertheless, a series of recent simulations [36; 37; 38; 39] suggests that the relaxation-induced increase in hardness could be a result of increased five-fold topological symmetry in annealed BMGs. Consistent with these simulations, it was found that low-temperature annealing results in, along with densification, more icosahedral clusters [36], which are high in five-fold symmetry. Microscopically, the fraction of five-fold symmetry of an atomic cluster, rather than its volume, was found to correlate positively with the resistance to shearing [37; 38; 39]. As can be seen from Fig. 6, shear bands formed around the indents for alloys A1 and A3, but were not observed for alloy A4, in which the fraction of glass was small according to the DSC results. This implies that shear banding becomes more difficult in the presence of crystals within the glassy matrix. A similar trend was found for the annealed A1 samples. It has been reported that an extended annealing time at high temperatures can result in embrittlement, exemplified by crack formation upon indentation [29]. For A1 annealed for 30 minutes at 843 K, radial surface cracks were found around one of the indentation tips, where high tensile stress exists [40]. The formation of cracks and the concomitant decrease in hardness for A1 indicate that long-time annealing at high temperature, which may cause full crystallization rather than BMG composites, is not favorable for optimizing the mechanical properties. As a final comment, we compare the formation of the B2 phase in Zr-Co-based and Zr-Cu-based BMG composites.

Figure 5: Uncorrected compressive stress-strain curves for the as-cast samples. The curves are shifted vertically for visual clarity. The linear dotted lines for A1 and A4 are guides for the eye.

Figure 6: (Top): Vickers hardness of the as-cast alloys and of annealed alloy A1, normalized by that of as-cast alloy A1. The label ‘753/0’ means annealing at 753 K with zero dwell time. (Bottom): Optical microscopy images of typical indents.
Due to the high GFA of Zr\({}_{50}\)Cu\({}_{50}\), Zr-Cu-based BMG composites are often made with compositions close to Zr\({}_{50}\)Cu\({}_{50}\) [4; 5], and, as a result, often only the B2 ZrCu phase, in the form of spherical particles, precipitates out of the glassy matrix. This can lead to significant hardening and large tensile plasticity [5]. In contrast, the formation of Zr-Co-Al BMG composites with a large fraction of equimolar Zr and Co is difficult because of the poor GFA of Zr\({}_{50}\)Co\({}_{50}\). On the other hand, for Zr-Co-Al alloys with unequal Zr and Co concentrations, the formation of B2 ZrCo is often accompanied by other crystal phases whose effects on the mechanical properties can be complex. It is also important to note that increasing the GFA of Zr-Co-Al alloys requires a relatively high Al content, and the Al dissolved in the B2 phase (see Fig. 4) may affect the morphology and properties of the latter. In the future, it may be useful to enhance the GFA of Zr-Co-Al alloys with a high fraction of equimolar Zr and Co by, for example, adding other alloying elements.

## Summary

In this work we have explored the possibility of fabricating Zr-Co-Al bulk metallic glass composites containing the B2 ZrCo phase via 1) rapid quenching of alloys of various compositions and 2) annealing of the as-cast amorphous alloys. In the first approach, it was found that, as the composition changes from Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\) towards Zr\({}_{52.5}\)Co\({}_{37.5}\)Al\({}_{10}\) (with increasing Co/Zr ratio and decreasing Al fraction), the glass forming ability decreases and the B2 ZrCo phase forms, along with Zr\({}_{2}\)Co, Zr\({}_{2}\)Al, and Zr\({}_{6}\)CoAl\({}_{2}\) phases. In particular, the B2 ZrCo phase was dominant in Zr\({}_{52.5}\)Co\({}_{37.5}\)Al\({}_{10}\). The alloy Zr\({}_{43}\)Co\({}_{43}\)Al\({}_{14}\), with equimolar Zr and Co, was found to fully crystallize into B2 ZrCo and minor Zr\({}_{5}\)Co\({}_{7}\)Al\({}_{3}\) upon quenching. In the second approach, annealing amorphous Zr\({}_{55}\)Co\({}_{31}\)Al\({}_{14}\) close to the glass transition temperature \(T_{g}\) results in structural relaxation with minimal crystallization of B2 ZrCo, ZrCoAl, and Zr\({}_{6}\)CoAl\({}_{2}\). A large volume fraction of these crystals was found upon annealing at 843 K, above the first DSC exothermic peak, and the alloy was almost fully crystallized after annealing for 30 min. For both approaches, the formation of crystalline phases increases the hardness of the alloys, but full crystallization upon annealing results in embrittlement and a decrease in hardness.

## Appendix

To study the effect of non-parallel sample surfaces on the measured (nominal) Young's modulus, we consider a model sample as shown in Fig. A1 (top). For simplicity we assume that only one surface is not normal to the loading axis \(x\) and that the sample cross-section is rectangular. We also assume a perfect testing machine with zero compliance. The non-parallel sample can be approximated as a combination of \(N\) slabs of different lengths whose interactions are neglected. When the sample is compressed by \(dx\), the strain in slab \(i\) is \[\varepsilon_{i}=(dx-\frac{i-1}{N-1}x_{1})/(x_{0}-\frac{i-1}{N-1}x_{1}) \tag{1}\] where \(x_{1}/(N-1)\) represents the length difference between two neighbouring slabs. The compressive force in slab \(i\) is \[f_{i}=\varepsilon_{i}E_{0}(y_{0}/N)z \tag{2}\] where \(E_{0}\) is the true modulus of the sample and \(y_{0}z\) is its cross-sectional area.
The total force \(F\) in the sample is the sum of the \(f_{i}\). With the nominal strain \(\varepsilon\)=\(dx/x_{0}\) and the nominal modulus \(E\)=\(F/(y_{0}z\varepsilon)\) of the sample, one obtains the normalized modulus \(E_{n}\)=\(E/E_{0}\) as \[E_{n}=\sum_{i=1}^{n}\frac{\varepsilon_{i}}{N\varepsilon} \tag{3}\] where \(n\) (up to \(N\)) is the number of slabs deformed. Several observations can be made from Eq. (3). First, geometrically, the measured modulus depends only on the dimensions (\(x_{0}\) and \(x_{1}\)) along the loading axis, not on the cross-sectional dimensions \(y_{0}\) and \(z\). Second, when the nominal strain \(\varepsilon\) increases while the sloped end is only partially deformed (\(n\)\(<\)\(N\)), not only \(\varepsilon_{i}\) but also \(n\) increases, resulting in a relatively rapid increase in the nominal \(E\). Third, even when the whole sloped surface is deformed (\(n\)=\(N\)), the nominal modulus \(E\) is still smaller than the true modulus \(E_{0}\), since \(\varepsilon_{i}\)\(<\)\(\varepsilon\) for \(i\)\(>\)\(1\). Assuming a large \(N\) (\(N\)=100, so that the sloped surface is 'smooth'), we examined the effects of strain and geometry on the normalized modulus \(E_{n}\). As can be seen from Fig. A1, for a given \(x_{1}/x_{0}\) value, \(E_{n}\) increases nearly linearly as \(\varepsilon\) increases up to \(x_{1}/x_{0}\), and this increasing modulus causes the curvature of the stress-strain curves in the initial elastic regime (see Fig. 5). Beyond the point \(\varepsilon\)=\(x_{1}/x_{0}\), \(E_{n}\) increases asymptotically towards some value less than unity (before the sample yields), and as a result the stress-strain curves become more linear at larger strains. For a given strain \(\varepsilon\), the larger the \(x_{1}/x_{0}\) value, the lower the nominal modulus \(E\). A previous experiment showed that a slanted contact due to a tilted platen can also reduce the measured modulus [41]. That study revealed that the measured \(E\) once the sample surface is in full contact is higher for a higher length/diameter ratio. This is consistent with our analysis since, for a given diameter and tilt angle (equivalent to a given \(x_{1}\) in our analysis), a high length/diameter ratio translates into a low \(x_{1}/x_{0}\) and, hence, a high \(E\), as indicated by points \(c\) and \(d\) in Fig. A1.
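The slab model above is simple to evaluate numerically. Below is a minimal sketch directly implementing Eqs. (1)-(3), with lengths measured in units of \(x_{0}\); the parameter values are illustrative, not those used for Fig. A1.

```python
import numpy as np

def normalized_modulus(eps, x1, N=100):
    """Nominal-to-true modulus ratio E_n = E/E0 from the slab model,
    Eqs. (1)-(3). Lengths are in units of x0, so `x1` means x1/x0."""
    i = np.arange(N)
    offset = i / (N - 1) * x1            # (i-1)/(N-1) * x1 for slab i
    eps_i = (eps - offset) / (1.0 - offset)
    eps_i = np.clip(eps_i, 0.0, None)    # slabs not yet in contact carry no load
    return eps_i.sum() / (N * eps)

# E_n rises quickly while eps < x1/x0, then approaches a plateau below 1
for x1 in (0.01, 0.02, 0.05):
    En = [normalized_modulus(e, x1) for e in (0.005, 0.02, 0.08)]
    print(x1, [round(v, 3) for v in En])
```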
2309.00729
Invariant approach to the Driven Jaynes-Cummings model
We investigate the dynamics of the driven Jaynes-Cummings model, where a two-level atom interacts with a quantized field and both, atom and field, are driven by an external classical field. Via an invariant approach, we are able to transform the corresponding Hamiltonian into the one of the standard Jaynes-Cummings model. Subsequently, the exact analytical solution of the Schr\"odinger equation for the driven system is obtained and employed to analyze some of its dynamical variables.
I. Bocanegra, L. Hernández-Sánchez, I. Ramos-Prieto, F. Soto-Eguibar, H. M. Moya-Cessa
2023-09-01T20:36:11Z
http://arxiv.org/abs/2309.00729v3
# Invariant approach to the driven Jaynes-Cummings model

###### Abstract

We investigate the dynamics of the driven Jaynes-Cummings model, where a two-level atom interacts with a quantized field and both atom and field are driven by an external classical field. Via an invariant approach, we are able to transform the corresponding Hamiltonian into that of the standard Jaynes-Cummings model. Subsequently, the exact analytical solution of the Schrödinger equation for the driven system is obtained and employed to analyze some of its dynamical variables.

## I Introduction

The Jaynes-Cummings model (JCM) is probably the most fundamental theoretical model in quantum optics [1]. It is also the simplest exactly solvable model describing the interaction between matter and electromagnetic radiation. The JCM consists of a single two-level atom interacting with a single quantized mode of the electromagnetic field in a lossless cavity, under the dipole and rotating wave approximations [2]. In the strong-coupling (or ultra-strong coupling) regime of matter-radiation interaction, when the rotating wave approximation is no longer valid, the corresponding model is known as the Rabi model. It has been proved that the presence of the counter-rotating terms in the corresponding (Rabi) Hamiltonian is responsible for richer dynamics [3; 4]. Although numerical results had been known for a while, the Rabi model was only recently solved by Braak, through highly sophisticated analytical techniques [5]. In turn, over the years, the JCM has been exhaustively studied, extended and generalized. Such generalizations intend to address more involved and realistic aspects of the interaction between atoms and fields, beyond the simplifications of the original model [6; 7; 8; 9]. These include the generalized JCM (which incorporates multiple atomic levels [10; 11; 12] or field modes [13; 14]), the dispersive JCM [15], models including nonlinear effects [16; 17; 18] and losses [19; 20], among others [21; 22; 23; 24; 25]. The standard JCM predicts that the atom and the quantized field become entangled, ceasing to be individual systems and turning into a kind of "molecule" [26]. In fact, Alsing et al. [26] demonstrated that in order to analyze this molecule it is necessary to probe it in some manner, and they showed that an external classical field is the natural way to do so. This leads to a new generalization of the conventional JCM, referred to as the "driven Jaynes-Cummings model" [26; 27]. In their article, Alsing et al. analyze two types of driving mechanism: the first involves the external classical field driving the cavity mode (which was experimentally reported by Thompson et al. [28]), and the second involves the classical field driving the two-level atom. In both cases, the eigenenergies and eigenstates of the system are determined. However, we emphasize that nothing about the dynamical variables of the system (atomic inversion, average photon number, etc.) is said in [26]. Additionally, Dutra et al. [27] analyze the scenario in which the classical field drives the atom only, discussing the necessary criteria for the model to have physical significance. They also show how the driven JCM can be transformed into the standard one, enabling the calculation of certain dynamical variables of the system.
In this study, we are interested in investigating the most general case of the driven Jaynes-Cummings model, which allows for the simultaneous excitation of both the atom and the quantized field by an external classical field. Our aim is to establish a methodology that enables the calculation of the dynamical variables of the driven system in a direct and straightforward manner, something not presented in [26]. Therefore, obtaining the solution of the Schrödinger equation in the general driven case constitutes our main motivation and contribution. In that respect, the present work also represents a generalization of [27]. Note also that although we study the interaction of two fields (classical and quantum) with a two-level atom, the Hamiltonian we solve may also appear in ion-laser interactions [29; 30; 31]. The paper is organized as follows: in Section II, a detailed description of the model under study is given, and the invariant approach is described. In Section III, the time-dependent Hamiltonian of the driven system is related to that of the conventional JCM by means of unitary transformations, in accordance with the invariant technique. In Section IV, the general solution of the Schrödinger equation for the driven Hamiltonian is obtained. By choosing specific initial conditions for both the atom and the quantized field, the atomic inversion, the average photon number, the Mandel Q parameter, and the entropy of the system are easily determined (as examples of the dynamical variables) and studied from the general solution. In Section V, the dispersive regime is considered. Finally, in Section VI the main conclusions are presented.

## II The Driven System and the Invariant Approach

Let us consider a system consisting of a two-level atom, with states denoted as \(\left|g\right\rangle\) (ground state) and \(\left|e\right\rangle\) (excited state), having a transition frequency \(\omega_{eg}\). The atom is placed within a cavity (the reader may think of a cavity formed by perfectly reflecting mirrors) sustaining a single quantized electromagnetic field mode with frequency \(\omega_{c}\). Additionally, an external classical field with frequency \(\omega_{0}\) drives both the atom and the quantized field. This setup is depicted in Fig. 1. Furthermore, we assume that the coupling between the cavity mode and the atom is significantly larger than the cavity damping and atomic decay rates, enabling us to neglect their effects [27]. Based on these assumptions, and in the dipole and rotating wave approximations, the time-dependent Hamiltonian describing the system can be written as [26] (we set \(\hbar=1\) throughout) \[\hat{\mathcal{H}}=\frac{\omega_{eg}}{2}\hat{\sigma}_{z}+\omega_{c}\hat{a}^{\dagger}\hat{a}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right)+\zeta\left(\hat{\sigma}_{-}e^{\mathrm{i}\,\omega_{0}t}+\hat{\sigma}_{+}e^{-\mathrm{i}\,\omega_{0}t}\right)+\xi\left(\hat{a}e^{\mathrm{i}\,\omega_{0}t}+\hat{a}^{\dagger}e^{-\mathrm{i}\,\omega_{0}t}\right), \tag{1}\] where the real parameters \(g\), \(\zeta\) and \(\xi\) are the coupling constants between the atom and the quantized field, between the external classical field and the atom, and between the quantized and classical fields, respectively.
As usual, the creation and annihilation operators \(\hat{a}^{\dagger}\) and \(\hat{a}\), satisfying the commutation relation \(\left[\hat{a},\hat{a}^{\dagger}\right]=1\), describe the quantized field, while the pseudo-spin operators \(\hat{\sigma}_{+}=\left|e\right\rangle\left\langle g\right|\), \(\hat{\sigma}_{-}=\left|g\right\rangle\left\langle e\right|\), and \(\hat{\sigma}_{z}=\left|e\right\rangle\left\langle e\right|-\left|g\right\rangle\left\langle g\right|\), with the commutation relations \(\left[\hat{\sigma}_{+},\hat{\sigma}_{-}\right]=\hat{\sigma}_{z}\) and \(\left[\hat{\sigma}_{z},\hat{\sigma}_{\pm}\right]=\pm 2\hat{\sigma}_{\pm}\), describe the atomic part of the system. It is possible to write an invariant \(\hat{I}\) satisfying [32; 33] \[\frac{d\hat{I}}{dt}=\frac{\partial\hat{I}}{\partial t}-\mathrm{i}[\hat{I},\hat{\mathcal{H}}]=0, \tag{2}\] for the time-dependent Hamiltonian (1), in the form \[\hat{I}=\frac{\hat{\sigma}_{z}}{2}+\hat{a}^{\dagger}\hat{a}+\alpha\left(\hat{a}e^{\mathrm{i}\,\omega_{0}t}+\hat{a}^{\dagger}e^{-\mathrm{i}\,\omega_{0}t}\right), \tag{3}\] where \(\alpha\) is a real constant to be determined. For \(\hat{I}\) to fulfill (2), it is necessary that \(\alpha=\zeta/g\) and \(\xi=\alpha(\omega_{c}-\omega_{0})\), constraints that prevent the classical and quantized fields from being on resonance. The time dependence of the invariant (3) can be eliminated by changing to a frame rotating at frequency \(\omega_{0}\), i.e., \[\hat{I}_{T}:=\hat{T}\hat{I}\hat{T}^{\dagger}=\frac{\hat{\sigma}_{z}}{2}+\hat{a}^{\dagger}\hat{a}+\alpha\left(\hat{a}+\hat{a}^{\dagger}\right), \tag{4}\] where \(\hat{T}=\exp\left[\mathrm{i}\,\omega_{0}t(\hat{n}+\hat{\sigma}_{z}/2)\right]\), with \(\hat{n}=\hat{a}^{\dagger}\hat{a}\) the usual number operator. Furthermore, by transforming (4) with the Glauber displacement operator \(\hat{D}(\alpha)=\exp\left[\alpha(\hat{a}^{\dagger}-\hat{a})\right]\) [34], the well-known constant of motion of the standard JCM is obtained [35] \[\hat{I}_{D}:=\hat{D}(\alpha)\hat{I}_{T}\hat{D}^{\dagger}(\alpha)=\frac{\hat{\sigma}_{z}}{2}+\hat{a}^{\dagger}\hat{a}. \tag{5}\] The above result suggests that, by properly transforming the Hamiltonian (1), we can arrive at the (solvable) Jaynes-Cummings Hamiltonian, as we show in the next section. It is important to emphasize that if the classical field drives only the atom or only the quantized field, it is not possible to write such an invariant. It is also worth stressing that when the classical field drives the quantized field only, even though a solution exists [26], it is of little practical use for studying the system dynamics.

## III Connection between the Driven and the Standard JCM

The dynamics of the system associated with the Hamiltonian (1) is governed by the Schrödinger equation \[\mathrm{i}\,\frac{\partial\left|\psi(t)\right\rangle}{\partial t}=\hat{\mathcal{H}}\left|\psi(t)\right\rangle. \tag{6}\] As we saw previously, we can move to a frame that rotates at frequency \(\omega_{0}\). We propose \(\left|\psi(t)\right\rangle=\hat{T}^{\dagger}\left|\phi(t)\right\rangle\); therefore, the resulting Schrödinger equation is \[\mathrm{i}\,\frac{\partial\left|\phi(t)\right\rangle}{\partial t}=\hat{\mathcal{H}}_{T}\left|\phi(t)\right\rangle, \tag{7}\] Figure 1: Scheme of a lossless cavity formed by perfectly reflecting mirrors (in shades of gray).
Within the space between the mirrors, a two-level atom with transition frequency \(\omega_{eg}\) interacts with a quantized field of frequency \(\omega_{c}\) (in shades of red). Additionally, both the atom and the quantized field are driven by an external classical field with frequency \(\omega_{0}\). The classical field that drives the quantized field is represented by the thick horizontal arrow in green, while the one that impinges on the atom is depicted by a thin diagonal arrow in black. with \[\begin{split}\hat{\mathcal{H}}_{T}&=\hat{T}\hat{\mathcal{H}}\hat{T}^{\dagger}-\mathrm{i}\,\hat{T}\partial_{t}\hat{T}^{\dagger}\\ &=\Delta_{c}\hat{n}+\frac{\Delta_{eg}}{2}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right)\\ &\quad+\zeta\left(\hat{\sigma}_{-}+\hat{\sigma}_{+}\right)+\xi\left(\hat{a}+\hat{a}^{\dagger}\right),\end{split} \tag{8}\] where \(\Delta_{c}=\omega_{c}-\omega_{0}\) and \(\Delta_{eg}=\omega_{eg}-\omega_{0}\) represent the detunings between the quantized and the classical fields, and between the atomic transition and the classical field frequencies, respectively. If we now perform a unitary transformation such that \(\ket{\phi(t)}=\hat{D}^{\dagger}(\alpha)\ket{\chi(t)}\), we arrive at the Schrödinger equation \[\mathrm{i}\,\frac{\partial\ket{\chi(t)}}{\partial t}=\hat{\mathcal{H}}_{D}\ket{\chi(t)}, \tag{9}\] where \(\hat{\mathcal{H}}_{D}\) is given by \[\begin{split}\hat{\mathcal{H}}_{D}&=\hat{D}(\alpha)\hat{\mathcal{H}}_{T}\hat{D}^{\dagger}(\alpha)\\ &=\Delta_{c}\hat{n}+\frac{\Delta_{eg}}{2}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right)+\alpha(\alpha\Delta_{c}-2\xi)\\ &\quad+\left(\zeta-g\alpha\right)\left(\hat{\sigma}_{-}+\hat{\sigma}_{+}\right)+\left(\xi-\alpha\Delta_{c}\right)\left(\hat{a}+\hat{a}^{\dagger}\right).\end{split} \tag{10}\] As before, setting \[\alpha=\frac{\zeta}{g}, \tag{11}\] and \(\Delta_{c}=g\xi/\zeta\), the last two terms in (10) are eliminated, and we obtain \[\begin{split}\hat{\mathcal{H}}_{D}&=\hat{D}\left(\zeta/g\right)\hat{\mathcal{H}}_{T}\hat{D}^{\dagger}\left(\zeta/g\right)\\ &=\Delta_{c}\hat{n}+\frac{\Delta_{eg}}{2}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right)-\zeta\xi/g.\end{split} \tag{12}\] The last term above can be ignored, as it plays no role in the dynamics of the system, resulting in the standard Jaynes-Cummings Hamiltonian \[\hat{\mathcal{H}}_{\text{JCM}}=\Delta_{c}\hat{n}+\frac{\Delta_{eg}}{2}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right). \tag{13}\] Finally, we can move to a frame rotating at frequency \(\Delta_{c}\), via the transformation \(\hat{S}=\exp\left[\mathrm{i}\,\Delta_{c}t(\hat{n}+\hat{\sigma}_{z}/2)\right]\), such that \(\ket{\chi(t)}=\hat{S}^{\dagger}\ket{\eta(t)}\). The equation to solve is then \[\mathrm{i}\,\frac{\partial\ket{\eta(t)}}{\partial t}=\hat{\mathcal{H}}_{\text{I}}\ket{\eta(t)}, \tag{14}\] with the interaction Hamiltonian \[\hat{\mathcal{H}}_{\text{I}}=\frac{\Delta}{2}\hat{\sigma}_{z}+g\left(\hat{\sigma}_{+}\hat{a}+\hat{\sigma}_{-}\hat{a}^{\dagger}\right), \tag{15}\] where \(\Delta=\Delta_{eg}-\Delta_{c}=\omega_{eg}-\omega_{c}\).
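As a numerical sanity check of the chain of transformations (8)-(12), one can build \(\hat{\mathcal{H}}_{T}\) in a truncated Fock basis and verify that the displacement \(\hat{D}(\zeta/g)\) removes the driving terms. A minimal sketch follows; the parameter values are illustrative assumptions, and the comparison is restricted to low-lying Fock states to avoid truncation artifacts at the cutoff.

```python
import numpy as np
from scipy.linalg import expm

g, zeta, xi, Deg = 1.0, 0.3, 0.15, 2.0    # illustrative values (assumption)
alpha = zeta / g                          # Eq. (11)
Dc = g * xi / zeta                        # required detuning Delta_c

M = 80                                    # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, M)), 1)  # annihilation operator
ad = a.conj().T
If, Ia = np.eye(M), np.eye(2)
sz = np.diag([1.0, -1.0])                 # atomic basis ordered (|e>, |g>)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_+
sm = sp.T                                 # sigma_-
kron = np.kron

# H_T of Eq. (8): JCM part plus the two driving terms
H_T = (Dc * kron(ad @ a, Ia) + 0.5 * Deg * kron(If, sz)
       + g * (kron(a, sp) + kron(ad, sm))
       + zeta * kron(If, sp + sm) + xi * kron(a + ad, Ia))

# Displace with D(alpha) = exp[alpha (a^dag - a)], as in Eq. (10)
D = kron(expm(alpha * (ad - a)), Ia)
H_D = D @ H_T @ D.conj().T

# Expected result, Eq. (12): standard JCM shifted by -zeta*xi/g
H_JC = (Dc * kron(ad @ a, Ia) + 0.5 * Deg * kron(If, sz)
        + g * (kron(a, sp) + kron(ad, sm)) - (zeta * xi / g) * np.eye(2 * M))

k = 2 * 40  # compare only the block with fewer than 40 photons
print(np.max(np.abs(H_D[:k, :k] - H_JC[:k, :k])))  # tiny (round-off level)
```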
## IV Dynamics

The evolution operator associated with (15) is widely known [7; 36; 37], and can be expressed as \[\hat{U}_{\text{I}}=e^{-\mathrm{i}\,t\hat{\mathcal{H}}_{\text{I}}}=\begin{pmatrix}\hat{U}_{11}(t)&\hat{U}_{12}(t)\\ \hat{U}_{21}(t)&\hat{U}_{22}(t)\end{pmatrix}, \tag{16}\] where \[\hat{U}_{11}(t)=\cos\left(\hat{\Omega}_{n+1}t\right)-\mathrm{i}\,\frac{\Delta}{2}\frac{\sin\left(\hat{\Omega}_{n+1}t\right)}{\hat{\Omega}_{n+1}}, \tag{17a}\] \[\hat{U}_{12}(t)=-\mathrm{i}\,g\hat{a}\frac{\sin\left(\hat{\Omega}_{n}t\right)}{\hat{\Omega}_{n}}, \tag{17b}\] \[\hat{U}_{21}(t)=-\mathrm{i}\,g\hat{a}^{\dagger}\frac{\sin\left(\hat{\Omega}_{n+1}t\right)}{\hat{\Omega}_{n+1}}, \tag{17c}\] \[\hat{U}_{22}(t)=\cos\left(\hat{\Omega}_{n}t\right)+\mathrm{i}\,\frac{\Delta}{2}\frac{\sin\left(\hat{\Omega}_{n}t\right)}{\hat{\Omega}_{n}}, \tag{17d}\] and \[\hat{\Omega}_{n}=\left(\frac{\Delta^{2}}{4}+g^{2}\hat{n}\right)^{1/2}. \tag{18}\] Then, the solution of the initial Schrödinger equation (6) is given by \[\ket{\psi(t)}=\hat{T}^{\dagger}\hat{D}^{\dagger}\left(\zeta/g\right)\hat{S}^{\dagger}\hat{U}_{\text{I}}(t)\hat{D}\left(\zeta/g\right)\ket{\psi(0)}, \tag{19}\] since \(\hat{T}(0)=\hat{S}(0)=\hat{1}\), with \(\hat{1}\) the identity operator. Recall that we have set \(\Delta_{c}=g\xi/\zeta\); thus there are only five free parameters out of the initial six in the Hamiltonian (1). From now on, we set \(\omega_{0}=\omega_{c}-g\xi/\zeta\). The general solution (19) allows us to calculate and analyze the dynamical variables of the driven system, enabling also a direct comparison with the standard JCM, as shown next. For the sake of simplicity, we consider that the field is initially in a coherent state \(\ket{\beta}\), where \(\beta\) is an arbitrary complex number, while the atom is in the excited state \(\ket{e}\); that is, our initial state will be \(\ket{\psi(0)}=\ket{\beta}\otimes\ket{e}=\ket{\beta,e}\).

### Atomic inversion

The atomic inversion \(W(t)\) is a meaningful observable that indicates changes in the population distribution of the atom and contains important statistical information about the field. It is defined as the difference between the probability of the atom being in its excited state and the probability of it being in its ground state. It can be calculated as the expectation value of the operator \(\hat{\sigma}_{z}\), namely, \(W(t)=\left\langle\psi(t)\right|\hat{\sigma}_{z}\left|\psi(t)\right\rangle\). From (19), we get \[W(t)=\sum_{n=0}^{\infty}P_{n}(\gamma)\left\{\cos^{2}\left(\Omega_{n+1}t\right)+\left[\frac{\Delta^{2}}{4}-g^{2}(n+1)\right]\frac{\sin^{2}\left(\Omega_{n+1}t\right)}{\Omega_{n+1}^{2}}\right\}, \tag{20}\] where \(P_{n}\) is the probability of detecting \(n\) photons in the field, which is given by the Poisson distribution \[P_{n}(\gamma)=e^{-\left|\gamma\right|^{2}}\frac{\left|\gamma\right|^{2n}}{n!}, \tag{21}\] with \(\gamma=\beta+\alpha\), and \(\alpha\) given in (11). Besides, \[\Omega_{n}=\left(\frac{\Delta^{2}}{4}+g^{2}n\right)^{1/2}. \tag{22}\] It is important to note that the expression (20) differs from that of the conventional JCM only in a shift in the probability distribution of photons: we go from \(P_{n}(\beta)\) in the usual JCM to \(P_{n}(\beta+\alpha)\) in the driven system. The magnitude of the shift is determined by the couplings \(\zeta\) and \(g\). Fig. 2 illustrates the aforementioned effect; Fig. 2(a) and Fig. 2(b) show the atomic inversion \(W(t)\) in the driven and conventional JCM, respectively.
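For reference, Eq. (20) is straightforward to evaluate numerically. The sketch below assumes illustrative parameter values (the values used for Fig. 2 are not restated here) and a real, positive \(\gamma=\beta+\zeta/g\):

```python
import numpy as np
from scipy.special import gammaln

def atomic_inversion(t, g=1.0, Delta=0.0, beta=4.0, zeta=0.5, n_max=400):
    """Atomic inversion W(t) of the driven JCM, Eq. (20): the standard JCM
    series with the Poisson distribution displaced to gamma = beta + zeta/g."""
    gamma = beta + zeta / g
    n = np.arange(n_max)
    logP = -gamma**2 + 2 * n * np.log(gamma) - gammaln(n + 1)    # Eq. (21)
    Pn = np.exp(logP)
    Om = np.sqrt(Delta**2 / 4 + g**2 * (n + 1))                  # Omega_{n+1}, Eq. (22)
    t = np.atleast_1d(t)[:, None]
    terms = Pn * (np.cos(Om * t)**2
                  + (Delta**2 / 4 - g**2 * (n + 1)) * np.sin(Om * t)**2 / Om**2)
    return terms.sum(axis=1)

ts = np.linspace(0.0, 60.0, 1200)
W = atomic_inversion(ts)   # collapses and revivals, shifted relative to zeta = 0
```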
For the chosen values of the parameters, it is evident that the occurrence time of the first revival in the driven case [Fig. 2(a)] increases with the value of \(\alpha\), in comparison with the conventional JCM [Fig. 2(b)]. In other words, as \(\zeta\) (\(g\)) is increased (decreased), the time at which the first revival occurs is displaced; from a physical point of view, this means that if the coupling \(\zeta\) between the classical field and the atom is far greater than the coupling \(g\) between the atom and the quantized field, the transitions may be suppressed by the interaction with the classical field.

### Average photon number

It is also important to analyze another observable: the expectation value of the number operator \(\hat{n}\), namely \(\left\langle\hat{n}(t)\right\rangle=\left\langle\psi(t)\right|\hat{n}\left|\psi(t)\right\rangle\). By studying \(\left\langle\hat{n}(t)\right\rangle\), we can gain a better understanding of the statistical properties of the system, including the photon distribution and its relation to the dynamics of the atom-field interaction. From (19), we obtain \[\left\langle\hat{n}(t)\right\rangle=S_{1}(t)-2\alpha\,\text{Re}\left[\gamma\exp\left(-\mathrm{i}\,\Delta_{c}t\right)S_{2}(t)\right]+\alpha^{2}, \tag{23}\] where \[S_{1}(t)=\sum_{n=0}^{\infty}P_{n}\left(\gamma\right)\left[\left|\gamma\right|^{2}\left|V_{1}^{(n+2)}\left(t\right)\right|^{2}+\left(n+1\right)^{2}g^{2}\left|V_{2}^{(n+1)}\left(t\right)\right|^{2}\right], \tag{24}\] \[S_{2}(t)=\sum_{n=0}^{\infty}P_{n}\left(\gamma\right)\left[\bar{V}_{1}^{(n+1)}\left(t\right)V_{1}^{(n+2)}\left(t\right)+\left(n+2\right)g^{2}V_{2}^{(n+1)}\left(t\right)V_{2}^{(n+2)}\left(t\right)\right], \tag{25}\] and \[V_{1}^{(n)}\left(t\right)=\cos\left(\Omega_{n}t\right)-\mathrm{i}\,\frac{\Delta}{2}\frac{\sin\left(\Omega_{n}t\right)}{\Omega_{n}}, \tag{26}\] \[V_{2}^{(n)}\left(t\right)=\frac{\sin\left(\Omega_{n}t\right)}{\Omega_{n}}, \tag{27}\] with the bar denoting complex conjugation, and \(\text{Re}\left[z\right]\) denoting the real part of \(z\). It is relevant to note that, similarly to the case of the atomic inversion (20), the probability distribution of the number of photons in (23) undergoes a change when going from the conventional to the driven JCM: \(P_{n}(\beta)\to P_{n}(\beta+\alpha)\). Nevertheless, the resulting \(\left\langle\hat{n}(t)\right\rangle\) reveals a modification of the Rabi frequency \(\Omega_{n}\) (this can particularly be seen in the shifts of the labels \(\Omega_{n}\rightarrow\Omega_{n+1}\), \(\Omega_{n}\rightarrow\Omega_{n+2}\) in the expressions for \(S_{1}(t)\) and \(S_{2}(t)\) above, through \(V_{1}^{(n)}(t)\) and \(V_{2}^{(n)}(t)\)), which leads to a strikingly different behavior of the average photon number in comparison with the standard JCM. This fact is illustrated in Fig. 3; in Fig. 3(a) the average photon number \(\left\langle\hat{n}(t)\right\rangle\) in the driven JCM is shown, and in Fig. 3(b) the average photon number of the usual JCM is depicted; the same values of the parameters used in Fig. 2 are employed. Unlike the usual JCM [Fig. 3(b)], in which \(\langle\hat{n}(t)\rangle\) exhibits a behavior similar to the atomic inversion [Fig. 2(b)], the driven JCM exhibits completely different dynamics due to the direct influence of the external classical field on the cavity mode, which feeds it with photons.
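The closed-form solution (19) also lends itself to a direct numerical implementation, which is how curves like the 'numerical' ones in Figs. 3 and 4 can be cross-checked. Below is a minimal sketch in a truncated Fock basis; all parameter values are illustrative assumptions (recall the constraints \(\Delta_{c}=g\xi/\zeta\) and \(\omega_{0}=\omega_{c}-\Delta_{c}\)), and \(\beta\) is taken real and positive.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

g, zeta, xi, w_eg, w_c, beta = 1.0, 0.5, 0.25, 5.0, 5.0, 4.0   # assumptions
alpha, Dc = zeta / g, g * xi / zeta
w0, Delta = w_c - Dc, w_eg - w_c

M = 150                                          # Fock cutoff
a = np.diag(np.sqrt(np.arange(1, M)), 1); ad = a.conj().T
If, Ia = np.eye(M), np.eye(2)
sz = np.diag([1.0, -1.0])                        # atomic basis (|e>, |g>)
sp = np.array([[0.0, 1.0], [0.0, 0.0]]); sm = sp.T
kron = np.kron

H_I = 0.5 * Delta * kron(If, sz) + g * (kron(a, sp) + kron(ad, sm))   # Eq. (15)
ev, V = np.linalg.eigh(H_I)                      # U_I(t) = V exp(-i ev t) V^dag
D = kron(expm(alpha * (ad - a)), Ia)             # Glauber displacement
diagN = np.diag(kron(ad @ a, Ia) + 0.5 * kron(If, sz))   # n + sigma_z/2

k = np.arange(M)                                 # coherent state |beta>
coh = np.exp(-beta**2 / 2 + k * np.log(beta) - 0.5 * gammaln(k + 1))
psi0 = kron(coh, [1.0, 0.0])                     # |beta> (x) |e>

def psi(t):                                      # Eq. (19), right to left
    x = D @ psi0                                 # D(zeta/g)
    x = V @ (np.exp(-1j * ev * t) * (V.conj().T @ x))   # U_I(t)
    x = np.exp(-1j * Dc * t * diagN) * x                # S^dagger (diagonal)
    x = D.conj().T @ x                                  # D^dagger(zeta/g)
    return np.exp(-1j * w0 * t * diagN) * x             # T^dagger (diagonal)

n_diag = np.diag(kron(ad @ a, Ia))
ts = np.linspace(0.0, 50.0, 400)
nbar = [float(np.sum(n_diag * np.abs(psi(t))**2)) for t in ts]   # <n(t)>
```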
In addition, in the driven JCM the average photon number shows the _super revivals_ discussed in [27]; if we analyze \(\langle\hat{n}(t)\rangle\) at times larger than those of Fig. 3, it can be observed [Fig. 4(a)] that \(\langle\hat{n}(t)\rangle\) shows a collapse, just as in the conventional JCM [Fig. 3(b)], though at larger times. Moreover, at even larger times [see Fig. 4(b)], the corresponding revival can be seen. Such large-scale fluctuations (referred to as super revivals) were previously noted and studied in [27], where the external classical field drives the atom only. Thus, they are present in the general case as well, where the classical field also drives the quantized cavity field, as can be clearly seen from Fig. 4. Finally, note that the collapse in Fig. 4(a) occurs at \(\langle\hat{n}\rangle\sim 13.44\) (red dashed line), while in the conventional case [Fig. 3(b)] it occurs at \(\langle\hat{n}\rangle=8.5\) [35]. Of course, this is attributed to the classical driving field that, as mentioned, provides the quantized field with photons, increasing \(\langle\hat{n}(t)\rangle\).

### Mandel \(Q\) parameter

The Mandel \(Q\) parameter is defined as follows [39]: \[Q=\frac{\langle\hat{n}^{2}\rangle-\langle\hat{n}\rangle^{2}}{\langle\hat{n}\rangle}-1, \tag{28}\] and it measures the deviation from a Poissonian distribution. In other words, it gives information about the nature (sub- or super-Poissonian) of the quantized cavity field. For \(Q>0\) (\(Q<0\)) we have a super-Poissonian (sub-Poissonian) distribution of photons; for \(Q=0\), the distribution of photons is Poissonian. In Figure 5, we show the Mandel \(Q\) parameter as defined by (28) for the driven JCM, as well as for the conventional case. It is seen that in both the driven and the conventional JCM, the field shows sub- and super-Poissonian features. In particular, the driven case (a) presents slower dynamics, in agreement with what has been observed for the average photon number \(\langle\hat{n}(t)\rangle\), due to the presence of the driving classical field.

Figure 3: Average photon number \(\langle\hat{n}(t)\rangle\) corresponding to the same initial condition and parameters used in Fig. 2. In (a) and (b), the average photon number is shown for the driven JCM and the standard JCM, respectively. The black lines correspond to the analytical result, while the green ones represent the numerical result.

Figure 4: Average photon number \(\langle\hat{n}(t)\rangle\) in the driven JCM at relatively large times. The same initial condition and parameters of Fig. 2 were used. In (a) the collapse of \(\langle\hat{n}(t)\rangle\) can be clearly seen, while in (b) the collapse-revival is observed. The black lines denote the analytical result, while the green ones represent the numerical result.

### Entanglement and von Neumann entropy

Even though the quantized field and the atom are initially separate entities, through their interaction they form a composite system and become entangled; in other words, the initially separable state becomes entangled, leaving each subsystem in a mixed state. An accurate quantitative measure of the degree of mixing is obtained from the (von Neumann) entropy of the system, defined as [35] \[S=-\mathrm{Tr}\left\{\hat{\rho}\ln\hat{\rho}\right\}, \tag{29}\] with \(\hat{\rho}=\left|\psi(t)\right\rangle\left\langle\psi(t)\right|\) the density matrix of the composite system.
As the initial states of the field and atom are pure states (\(\left|\psi(0)\right\rangle=\left|\beta,e\right\rangle\)), the corresponding initial entropies of the quantum-field and atomic subsystems, \(S_{F}\) and \(S_{A}\), are equal to zero. In fact, as the initial state of the composite system is a pure (separable) state, the total entropy \(S\) is zero as well. Furthermore, as the entropy of a closed system does not change in time, we have \(S(t)=0\) for all \(t\). From the Araki-Lieb theorem [35; 36; 2] \[\left|S_{A}-S_{F}\right|\leq S\leq S_{A}+S_{F}, \tag{30}\] it follows that \(S_{F}(t)=S_{A}(t)\). Therefore we can focus, for instance, on the entropy of the atomic subsystem \[S_{A}=-\mathrm{Tr}_{A}\left\{\hat{\rho}_{A}\ln\hat{\rho}_{A}\right\}, \tag{31}\] where \[\hat{\rho}_{A}=\mathrm{Tr}_{F}\left\{\hat{\rho}\right\}. \tag{32}\] Using basic properties of the trace, it is easy to show that the atomic entropy (31) is given by \[S_{A}=-\lambda_{1}\ln\lambda_{1}-\lambda_{2}\ln\lambda_{2}, \tag{33}\] with \(\lambda_{1}\) and \(\lambda_{2}\) the eigenvalues of the matrix \(\hat{\rho}_{A}\) (a short numerical sketch is given below). Fig. 6 shows a comparison of the atomic entropy for the driven and standard JCM. It can be observed that in the driven case [Fig. 6(a)] the minimum entropy is displaced to larger values of \(t\) (dashed red vertical line, at around \(t\sim 11.5\)) with respect to the minimum entropy (at around \(t\sim 9.83\)) of the conventional JCM [Fig. 6(b)]. This in turn corresponds to the displacement observed in the atomic inversion (Fig. 2), caused by the classical driving field. At the minimum entropy, the quantum-field and two-level-atom subsystems behave nearly as separate, independent entities (see [36] and references therein). Also, the decrease of the entropy to its minimum is known to coincide with the collapse of \(W(t)\) (see for instance [35] and compare the time scales in Fig. 2 and Fig. 6). In other words, in the driven case [Fig. 6(a)] the entropy takes a longer time to reach its minimum value than in the conventional case [Fig. 6(b)]. This is also in agreement with the longer collapse observed in Fig. 2(a) compared with Fig. 2(b), due to the classical driving field.

Figure 5: Mandel \(Q\) parameter (28) in the driven JCM (a), and its comparison with the conventional case (b). The same initial condition and parameters of Fig. 2 were used. These plots correspond to numerical calculations; however, the analytical expressions are straightforward to obtain from the exact solution of Eq. (6).

Figure 6: Entropy \(S_{A}(t)\) corresponding to the same initial condition and parameters used in Fig. 2. In (a) and (b) the entropy is shown for the driven and standard JCM, respectively. These plots correspond to numerical calculations; however, the analytical expressions are straightforward to obtain from the exact solution of Eq. (6) (see also Ref. [35]).

## V Dispersive model

In this section we analyze the dispersive interaction for the general driven case; that is, we consider \(\left|\Delta\right|\gg g\). In turn, the importance of the dispersive
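As a numerical complement to Section IV, both the Mandel \(Q\) parameter of Eq. (28) and the atomic entropy of Eq. (33) are easy to evaluate from a numerically propagated state such as the `psi(t)` of the earlier sketch (field \(\otimes\) atom layout with Fock cutoff \(M\)); a minimal sketch, with the sanity-check state an assumption for illustration:

```python
import numpy as np
from scipy.special import gammaln

def mandel_Q(psi, M):
    """Mandel Q, Eq. (28), from the photon-number distribution of psi."""
    pn = (np.abs(psi.reshape(M, 2))**2).sum(axis=1)   # P(n), traced over the atom
    n = np.arange(M)
    nbar = (pn * n).sum()
    return ((pn * n**2).sum() - nbar**2) / nbar - 1.0

def atomic_entropy(psi, M):
    """Von Neumann entropy S_A, Eq. (33): the reduced density matrix of the
    atom has eigenvalues equal to the squared Schmidt coefficients of psi."""
    lam = np.linalg.svd(psi.reshape(M, 2), compute_uv=False)**2
    lam = lam[lam > 1e-15]
    return float(-(lam * np.log(lam)).sum())

# Sanity check: a coherent product state gives Q ~ 0 and S_A ~ 0
M, beta = 150, 4.0
k = np.arange(M)
coh = np.exp(-beta**2 / 2 + k * np.log(beta) - 0.5 * gammaln(k + 1))
psi0 = np.kron(coh, [1.0, 0.0])
print(mandel_Q(psi0, M), atomic_entropy(psi0, M))
```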
2301.03402
Convergence rates for defect modes in large finite resonator arrays
We show that defect modes in infinite systems of resonators have corresponding modes in finite systems which converge as the size of the system increases. We study the generalized capacitance matrix as a model for three-dimensional coupled resonators with long-range interactions and consider defect modes that are induced by compact perturbations. If such a mode exists, then there are elements of the discrete spectrum of the corresponding truncated finite system that converge to each element of the pure point spectrum. The rate of convergence depends on the dimension of the lattice. When the dimension of the lattice is equal to that of the physical space, the convergence is exponential. Conversely, when the dimension of the lattice is less than that of the physical space, the convergence is only algebraic, because of long-range interactions arising due to coupling with the far field.
Habib Ammari, Bryn Davies, Erik Orvehed Hiltunen
2023-01-09T14:58:10Z
http://arxiv.org/abs/2301.03402v4
# Spectral convergence of defect modes in large finite resonator arrays

###### Abstract

We show that defect modes in infinite systems of resonators have corresponding modes in finite systems which converge as the size of the system increases. We study the generalized capacitance matrix as a model for three-dimensional coupled resonators with long-range interactions and consider defect modes that are induced by compact perturbations. If such a mode exists, then there are elements of the discrete spectrum of the corresponding truncated finite system that converge to each element of the pure point spectrum. The rate of convergence depends on the dimension of the lattice. When the dimension of the lattice is equal to that of the physical space, the convergence is exponential. Conversely, when the dimension of the lattice is less than that of the physical space, the convergence is only algebraic, owing to long-range interactions arising from coupling with the far field.

**Mathematics Subject Classification (MSC2010):** 35J05, 35C20, 35P20.

**Keywords:** finite crystals, metamaterials, edge effects, capacitance coefficients, subwavelength resonance

## 1 Introduction

Much of the physical literature concerning wave propagation in periodic media relies on a believable but highly non-trivial piece of logic. That is, researchers want to be able to relate the spectral properties of infinite periodic structures to those of truncated, finite versions of the same material. The motivation for this is that infinite periodic structures can be described very concisely using Floquet-Bloch analysis [11]. However, finite, truncated versions of the structure are often required when it comes to either numerical or physical experiments. It is perfectly plausible that the two structures should behave similarly, particularly away from the edges of the truncated structure and especially when the truncated structure is very large. However, a precise convergence theory relating the spectra of these two quite different differential operators is, in general, yet to be developed. The many interesting phenomena that occur at the edges of periodic arrays have been studied in some detail [9]. For example, there is a tendency for wave energy to be localized at the edges of the structure, taking the form of surface waves [15, 20]. This is an example of an _edge effect_ and highlights that there will always be fundamental differences between how infinite and truncated structures interact with waves. Another important question that has been explored in this field, and which is intimately related to the results presented in this work, is the extent to which waves incident on the edge of a truncated periodic structure can excite Bloch waves in the structure (thus replicating the behaviour of its infinite counterpart) [10, 18, 19]. The central question of this work is the extent to which the resonant spectra of infinite and truncated structures can be related. We will focus on localized modes, which decay quickly outside of some compact region, meaning they are less severely affected by edge effects. Additionally, localized modes are the eigenmodes of interest for many wave-guiding applications. Existing results have shown that in certain one- and two-dimensional systems, any _defect mode_ of the infinite structure will have a corresponding mode in the truncated structure converging to the defect mode as the size tends to infinity [12, 13, 14, 16].
A defect mode is a mode that is created by perturbing the periodic structure so as to introduce a defect. Such a mode is characterized by being spatially localized (in the sense that it decays quickly enough to be square integrable along the axis or axes of periodicity) and by having an eigenfrequency that belongs to the pure point spectrum of the perturbed periodic operator. The terms "pure point spectrum" and "defect mode eigenfrequency" are preferred by spectral analysts and wave physicists, respectively, and we will use them somewhat interchangeably here. The rest of the spectrum will typically be composed of the _continuous spectrum_, which corresponds to the Bloch modes that propagate through the material without decaying. In previous works, it was shown that the convergence of the defect mode eigenfrequencies to the pure point spectrum of the infinite periodic operator is exponential with respect to the size of the truncated array [12, 13, 14, 16]. These results concerned either one-dimensional systems or two-dimensional systems with two-dimensional lattices. In this work, we study the _generalized capacitance matrix_, which is a dense resonator model that includes long-range interactions [1]. This model gives a leading-order characterisation of the resonant modes of a three-dimensional scattering problem with high-contrast resonators. However, it can be viewed more generally as a canonical model for coupled resonators. In this setting, we will prove that any defect mode eigenfrequency of the infinite structure has a sequence of eigenvalues of the truncated structures converging to it. We expand on the one- and two-dimensional models explored previously by showing that this convergence is algebraic when the dimension of the periodic lattice is less than the dimension of the differential operator (which is three, in this work). This is due to the long-range interactions that arise since waves are able to radiate in the "spare" dimensions and couple with the far field. When the lattice is three-dimensional, so that there are no spare dimensions, we see the same exponential convergence as for one-dimensional lattices in one-dimensional differential problems [12, 13, 16] and two-dimensional lattices in two-dimensional differential problems [14]. We believe that similar behaviour will be observed in other multi-dimensional differential systems and dense matrix models. This paper is split into three main parts. In Section 2, we introduce the matrix model (the generalized capacitance matrix) that we will study and prove some elementary properties that lay the foundations for the subsequent analysis. Section 3 contains the main results of this work, which show that the truncated structures have eigenfrequencies that converge to the pure point spectrum of the infinite structure. Finally, in Section 4, we present numerical evidence for the convergence of the truncated spectra to the continuous spectrum. Proving convergence to these Bloch modes remains an open problem; however, the constructive nature of the generalized capacitance matrix approach presented in this work provides a promising platform for future investigations.

## 2 The generalized capacitance matrix model

In this section, we will introduce the generalized capacitance matrix model that will be the object of this study. Its definition uses layer potentials to capture the (potentially complex) shapes of the resonators.
In Appendix A we briefly present asymptotic results showing how this model can be deduced from a subwavelength resonance problem with a system of high-contrast resonators. Finally, we will prove a convergence result for the capacitance coefficients that will be the basis of the theorems in subsequent sections.

### Definition

We study a system of periodically repeated resonators in a lattice in \(\mathbb{R}^{3}\). We take lattice vectors \(l_{1},\ldots,l_{d}\in\mathbb{R}^{3}\), where \(0<d\leq 3\), and let \(\Lambda\) denote the lattice generated by these vectors. In other words, \[\Lambda:=\left\{m_{1}l_{1}+\cdots+m_{d}l_{d}\ |\ m_{i}\in\mathbb{Z}\right\}.\] At this point, we remark that there are three possible cases: \(d=1\), corresponding to a _chain_ of resonators; \(d=2\), corresponding to a _screen_ of resonators; or \(d=3\), corresponding to a _crystal_ of resonators. For simplicity, we assume that the lattice is aligned with the first \(d\) coordinate axes. We take \(Y\subset\mathbb{R}^{3}\) to be a single unit cell, \[Y=\begin{cases}\{c_{1}l_{1}+x_{2}e_{2}+x_{3}e_{3}\mid 0\leq c_{1}\leq 1,x_{2},x_{3}\in\mathbb{R}\},&d=1,\\ \{c_{1}l_{1}+c_{2}l_{2}+x_{3}e_{3}\mid 0\leq c_{1},c_{2}\leq 1,x_{3}\in\mathbb{R}\},&d=2,\\ \{c_{1}l_{1}+c_{2}l_{2}+c_{3}l_{3}\mid 0\leq c_{1},c_{2},c_{3}\leq 1\},&d=3.\end{cases}\] We let \(D\subset Y\) be a collection of \(N\) resonators contained in \(Y\), \[D=\bigcup_{i=1}^{N}D_{i},\] where the \(D_{i}\) are disjoint domains in \(Y\) with boundaries \(\partial D_{i}\in C^{1,s}\) for some \(s>0\). In the periodic lattice, we let \(D_{i}^{m}=D_{i}+m\), for \(m\in\Lambda\), and then denote the full lattice as \[\mathcal{D}=\bigcup_{m\in\Lambda}\bigcup_{i=1}^{N}D_{i}^{m}.\] We will define a finite system of resonators resulting from truncation of the periodic lattice. Let \(I_{r}\subset\Lambda\) be the set of all lattice points within distance \(r\) from the origin, \[I_{r}=\{m\in\Lambda\mid|m|<r\}.\] We define the finite collection of resonators \(\mathcal{D}_{\mathrm{f}}=\mathcal{D}_{\mathrm{f}}(r)\) as \[\mathcal{D}_{\mathrm{f}}(r)=\bigcup_{m\in I_{r}}D+m.\] In this setting, \(\mathcal{D}_{\mathrm{f}}\) is a finite lattice in which \(D\) is the single, repeated unit. The goal is to clarify in which sense the spectral properties of a finite, but large, lattice can be approximated by those of the corresponding infinite one. We let \(G\) be the Green's function for Laplace's equation in three dimensions: \[G(x)=-\frac{1}{4\pi|x|}.\] Given a bounded domain \(\Omega\subset\mathbb{R}^{3}\), we then define the _single layer potential_ \(\mathcal{S}_{\Omega}:L^{2}(\partial\Omega)\to H^{1}(\partial\Omega)\) as \[\mathcal{S}_{\Omega}[\varphi](x):=\int_{\partial\Omega}G(x-y)\varphi(y)\;\mathrm{d}\sigma(y),\quad x\in\partial\Omega.\] In particular, \(\mathcal{S}_{\Omega}\) is known to be invertible [7]. For a finite lattice, we define the capacitance coefficients as \[(C_{\mathrm{f}}^{mn})_{ij}(r)=\int_{\partial D_{i}^{m}}\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}^{-1}[\chi_{\partial D_{j}^{n}}]\;\mathrm{d}\sigma, \tag{2.1}\] for \(1\leq i,j\leq N\) and \(m,n\in I_{r}\). Here, we explicitly indicate the dependence on the size \(r\) of the truncated lattice. For \(m,n\in I_{r}\), we observe that \(C_{\mathrm{f}}^{mn}(r)\) is a matrix of size \(N\times N\), while the block matrix \(C_{\mathrm{f}}=(C_{\mathrm{f}}^{mn})\) is a matrix of size \(N|I_{r}|\times N|I_{r}|\). We next define the capacitance coefficients for the infinite lattice.
We begin by defining the dual lattice \(\Lambda^{*}\) of \(\Lambda\) as the lattice generated by \(\alpha_{1},...,\alpha_{d}\) satisfying \(\alpha_{i}\cdot l_{j}=2\pi\delta_{ij}\) and \(P_{\perp}\alpha_{i}=0\), for \(i,j=1,...,d\) (here, \(P_{\perp}\) denotes the orthogonal projection onto the last \(3-d\) coordinates). We define the _Brillouin zone_ \(Y^{*}\) as \(Y^{*}:=\big{(}\mathbb{R}^{d}\times\{\mathbf{0}\}\big{)}/\Lambda^{*}\), where \(\mathbf{0}\) is the zero-vector in \(\mathbb{R}^{3-d}\). We remark that \(Y^{*}\) can be written as \(Y^{*}=Y^{*}_{d}\times\{\mathbf{0}\}\), where \(Y^{*}_{d}\) has the topology of a torus in \(d\) dimensions. When \(\alpha\in Y^{*}\setminus\{0\}\), we can define the quasi-periodic Green's function \(G^{\alpha}(x)\) as

\[G^{\alpha}(x):=\sum_{m\in\Lambda}G(x-m)e^{\mathrm{i}\alpha\cdot m}. \tag{2.2}\]

Figure 1: This work studies the convergence of the eigenfrequencies of defect modes in a truncated periodic material to the spectrum of the corresponding infinite material. We use capacitance matrices as a canonical model for many-body scattering of time-harmonic waves. The aim of this work is to show how eigenvalues of the finite capacitance matrix \(C_{\mathrm{f}}(r)\) converge to those of the real-space capacitance matrix \(\mathfrak{C}\). The Fraktur font for \(\mathfrak{C}\) denotes the fact that this is an infinite matrix. Our strategy is to compare the spectrum of \(C_{\mathrm{f}}(r)\) with the truncated capacitance matrix \(C_{\mathrm{t}}(r)\), which is obtained by truncating all but a finite \(O(r)\) number of rows in \(\mathfrak{C}\), before letting \(r\to\infty\).

Throughout this work, we use the block matrix notation \((C^{mn})_{ij}\) to refer to the \(i,j\in\{1,\dots,N\}\) entry of the \(m,n\in\Lambda\) block in a matrix \(C\). The series in (2.2) converges uniformly for \(x\) and \(y\) in compact sets of \(\mathbb{R}^{3}\), with \(x\neq y\) and \(\alpha\neq 0\). Given a bounded domain \(\Omega\subset Y\), we can then define the _quasi-periodic_ single layer potential \(\mathcal{S}^{\alpha}_{\Omega}:L^{2}(\partial\Omega)\to H^{1}(\partial\Omega)\) as

\[\mathcal{S}^{\alpha}_{\Omega}[\varphi](x):=\int_{\partial\Omega}G^{\alpha}(x-y)\varphi(y)\;\mathrm{d}\sigma(y),\quad x\in\partial\Omega. \tag{2.3}\]

For \(\alpha\in Y^{*}\) and for \(1\leq i,j\leq N\), the quasi-periodic capacitance matrix ("dual-space" representation) is the \(N\times N\)-matrix defined as

\[\widehat{C}^{\alpha}_{ij}=\int_{\partial D_{i}}(\mathcal{S}^{\alpha}_{D})^{-1}[\chi_{\partial D_{j}}]\;\mathrm{d}\sigma. \tag{2.4}\]

For \(1\leq i,j\leq N\), we can then define the "real-space" capacitance coefficients at the lattice point \(m\) by

\[C^{m}_{ij}=\frac{1}{|Y^{*}|}\int_{Y^{*}}\widehat{C}^{\alpha}_{ij}e^{-\mathrm{i}\alpha\cdot m}\;\mathrm{d}\alpha. \tag{2.5}\]

Here, \(C^{0}_{ij}\) corresponds to the diagonal block which contains the capacitance coefficients of the resonators within a single unit cell. We use the notation \(\mathfrak{C}\) to denote the infinite matrix that contains all the \(C^{m}_{ij}\) coefficients, for all \(1\leq i,j\leq N\) and all \(m\in\Lambda\). A final, important quantity for the analysis in this work is the truncated capacitance matrix \(C_{\mathrm{t}}\). This is obtained by keeping only \(N|I_{r}|\times N|I_{r}|\) coefficients from \(\mathfrak{C}\), to give a matrix that is the same size as \(C_{\mathrm{f}}\). A schematic of the various pieces of notation used in this article and how they relate to each other is given in Figure 1.
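To make the dual-to-real-space step concrete, the following minimal Python sketch approximates the integral in (2.5) by a midpoint quadrature rule over the Brillouin zone, for a one-dimensional lattice (\(d=1\)) with lattice constant \(L\). The function `C_hat`, returning the \(N\times N\) quasi-periodic capacitance matrix for a given \(\alpha\), is assumed to be supplied externally (for instance by a boundary-element discretization of (2.4)); the toy dispersion in the demonstration is purely illustrative and is not derived from any layer potential.

```python
import numpy as np

def real_space_capacitance(C_hat, m, L=1.0, n_quad=512):
    """Midpoint-rule approximation of (2.5) for a 1D lattice:
    C^m = (1/|Y*|) * int_{Y*} C_hat(alpha) e^{-i alpha (m L)} d(alpha),
    with Y* = [-pi/L, pi/L) and |Y*| = 2 pi / L. Midpoints avoid the
    singular point alpha = 0 of the quasi-periodic Green's function."""
    d_alpha = 2 * np.pi / (L * n_quad)
    alphas = -np.pi / L + (np.arange(n_quad) + 0.5) * d_alpha
    integral = sum(C_hat(a) * np.exp(-1j * a * m * L) for a in alphas) * d_alpha
    return integral / (2 * np.pi / L)

# Toy single-resonator (N = 1) dispersion, for illustration only:
C_demo = lambda a: np.array([[2.0 - 0.5 * np.cos(a)]])
print(real_space_capacitance(C_demo, 0).real)  # diagonal block, approx. 2
print(real_space_capacitance(C_demo, 1).real)  # nearest-neighbour block, approx. -0.25
```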
The proof strategy deployed in this work is to compare the spectra of \(C_{\mathrm{f}}\) with that of \(C_{\mathrm{t}}\), and then let \(r\to\infty\) in order to approximate the spectrum of \(\mathfrak{C}\). In particular, the modes that we will compare are _defect modes_, which are spatially localized modes that exist due to the presence of defects in the otherwise periodic material, an example of which is shown in Figure 2. We will model defect modes through pre-multiplication by a defect matrix \(\mathfrak{B}\). For each \(m\in\Lambda\), we let \(B^{m}\) be an \(N\times N\) diagonal matrix

\[B^{m}=\left(\begin{array}{cccc}b^{m}_{1}&0&\cdots&0\\ 0&b^{m}_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&b^{m}_{N}\end{array}\right), \tag{2.6}\]

where the diagonal entries \(b^{m}_{i}\) are real-valued parameters. In this work, we only consider _compact_ defects, where \(b^{m}_{i}=1\) for all but finitely many \(i\) and \(m\). For the infinite structure, we let \(\mathfrak{B}\) be the infinite block-diagonal matrix that contains \(B^{m}\) for all \(m\in\Lambda\). Under this assumption on the \(b^{m}_{i}\), \(\mathfrak{B}\) is said to be a compact perturbation of the identity. The spectrum of the infinite structure is given by the solutions to the spectral problem

\[\mathfrak{B}\mathfrak{C}\mathfrak{u}=\lambda\mathfrak{u}. \tag{2.7}\]

For the finite structure of size \(r\), we let \(B_{\mathrm{t}}\) be the block-diagonal matrix \((B^{m})\), \(m\in I_{r}\), and consider the spectral problem

\[B_{\mathrm{t}}C_{\mathrm{f}}u=\lambda u.\]

An example of such a defect mode is shown in Figure 2. A system of 31 resonators is modelled, with the finite defect matrix \(B_{\mathrm{t}}\) chosen to be the identity, perturbed so that its central element is \((B_{\mathrm{t}})^{0}_{11}=2\).

Figure 2: An example of a localized defect mode for a system of 31 resonators. The eigenvalues of the finite matrix \(B_{\mathrm{t}}C_{\mathrm{f}}\) are computed, where \(C_{\mathrm{f}}\) is the generalized capacitance matrix for a system of evenly spaced resonators and \(B_{\mathrm{t}}\) is the identity matrix but with the central entry \((B_{\mathrm{t}})^{0}_{11}=2\).

The generalized capacitance matrix serves not only as a canonical model for coupled resonators (whose interaction terms decay as \(r^{-1}\)), but can also be derived from first principles in certain physical settings. For example, in Appendix A we briefly explain how this model arises for a system of high-contrast resonators, in which case the eigenstates of the generalized capacitance matrix fully characterize the subwavelength resonant spectrum of the system.

### Convergence of capacitance coefficients

Based on the layer-potential characterization of capacitance, we prove in this section that the capacitance coefficients of a large but finite structure converge, as the size grows, to the corresponding coefficients of the infinite structure. We begin with the following result, which collects some well-known results on the capacitance matrices [1, 8].

**Lemma 2.1**.: _Let \(\widehat{C}^{\alpha}\) and \(C_{\mathrm{f}}\) be the quasi-periodic and finite capacitance matrix, respectively. Then_

1. _\(\widehat{C}^{\alpha}\) and \(C_{\mathrm{f}}\) are symmetric, positive definite matrices;_
2. _\(\widehat{C}^{\alpha}\) and \(C_{\mathrm{f}}\) are strictly diagonally dominant matrices;_
3. _We have \((\widehat{C}^{\alpha})_{ii}>0\) and \((C_{\mathrm{f}}^{mm})_{ii}>0\). Moreover, for \(i\neq j\) or \(m\neq n\) we have \((\widehat{C}^{\alpha})_{ij}<0\) and \((C_{\mathrm{f}}^{mn})_{ij}<0\)._
The next result shows that a fixed block of the infinite capacitance matrix is approximately equal to the corresponding block of the capacitance matrix of the finite structure. In other words, the finite-structure capacitance coefficients can be approximated through the infinite structure as long as we are sufficiently far away from the edges of the finite structure.

**Theorem 2.2**.: _For fixed \(m,n\in\Lambda\), we have_

\[\lim_{r\to\infty}C_{\mathrm{f}}^{mn}(r)=C^{m-n}.\]

Proof.: Firstly, observe that

\[\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}[\psi]=\sum_{m\in I_{r}}\mathcal{S}_{D+m}[\psi_{m}],\]

where \(\psi_{m}=\psi|_{\partial D+m}\). Recall that the quasi-periodic single-layer potential is defined as

\[\mathcal{S}_{D}^{\alpha}[\phi]=\int_{\partial D}\sum_{m\in\Lambda}G(x-y-m)e^{\mathrm{i}\alpha\cdot m}\phi(y)\:\mathrm{d}\sigma.\]

Given \(\phi\in L^{2}(\partial D)\), we define \(\phi_{m}^{\alpha}\in L^{2}(\partial D+m)\) as

\[\phi_{m}^{\alpha}(y)=\phi(y-m)e^{\mathrm{i}\alpha\cdot m}.\]

Then it is clear that

\[\mathcal{S}_{D}^{\alpha}[\phi]=\sum_{m\in\Lambda}\mathcal{S}_{D+m}[\phi_{m}^{\alpha}].\]

We can then decompose

\[\mathcal{S}_{D}^{\alpha}[\phi]=\sum_{m\in I_{r}}\mathcal{S}_{D+m}[\phi_{m}^{\alpha}]+\int_{\partial D}\sum_{m\in\Lambda\setminus I_{r}}G(x-y-m)e^{\mathrm{i}\alpha\cdot m}\phi(y)\:\mathrm{d}\sigma=\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}[\phi^{\alpha}]+\mathcal{R}^{\alpha}[\phi],\]

where, in the operator norm, \(\mathcal{R}^{\alpha}=o(1)\) as \(r\to\infty\). From the Neumann series, we now have

\[(\mathcal{S}_{D}^{\alpha})^{-1}[\chi_{\partial D_{i}}]=\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}^{-1}[\chi_{i}^{\alpha}]+o(1), \tag{2.8}\]

where \(\chi_{i}^{\alpha}\) is defined as

\[\chi_{i}^{\alpha}=\sum_{m\in I_{r}}\chi_{\partial D_{i}^{m}}e^{\mathrm{i}\alpha\cdot m}.\]

From Lemma B.2 in Appendix B, we know that the error term in (2.8) holds uniformly in \(\alpha\). If \(m,n\in I_{r}\) are fixed and \(i,j=1,...,N\), we then have from (2.8) that

\[C_{ij}^{m-n}=\frac{1}{|Y^{*}|}\int_{Y^{*}}\int_{\partial D_{i}}e^{-\mathrm{i}\alpha\cdot(m-n)}\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}^{-1}[\chi_{j}^{\alpha}]\,\mathrm{d}\sigma\,\mathrm{d}\alpha+o(1)=\int_{\partial D_{i}^{m}}\mathcal{S}_{\mathcal{D}_{\mathrm{f}}}^{-1}[\chi_{\partial D_{j}^{n}}]\,\mathrm{d}\sigma+o(1)=(C_{\mathrm{f}}^{mn})_{ij}(r)+o(1).\]

This proves the claim.

The numerical results presented in Figure 3 demonstrate the convergence of the capacitance coefficients, as established by Theorem 2.2. We plot \(|(C^{00}_{\mathrm{f}})_{11}-C^{0}_{11}|\) for a one-dimensional, two-dimensional, and three-dimensional lattice. When \(d=1\) or \(d=2\), the convergence is algebraic. The crucial property here is that the dimension of the lattice is less than the three dimensions of the underlying differential problem. As a result, waves are able to propagate away from the structure in the "spare" dimensions. This introduces long-range interactions to the system, as non-adjacent resonators are able to interact by coupling with the far field. Conversely, when \(d=3\) the dimension of the lattice is maximal (in the sense that we have a three-dimensional lattice in three-dimensional space). In this case, we see exponential convergence, as there are no "spare" dimensions that allow waves to propagate away from the structure and couple with the far field.

Figure 3: Convergence of the capacitance coefficients of large finite lattices. For each lattice, there is a single resonator in the unit cell (\(N=1\)). In each case, we plot \(|(C^{00}_{\mathrm{f}})_{11}-C^{0}_{11}|\) for increasing size \(r\) of the finite structure. Observe the log-log scales in (a) and (b) and the semi-log scale in (c), which correspond to algebraic and exponential convergence, respectively.
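The qualitative behaviour in Figure 3 can be reproduced with a much cruder model. In the sketch below, each resonator in a chain is replaced by a small sphere of radius \(a\), and the capacitance matrix of the truncated structure is approximated by the classical point-conductor formula \(C_{\mathrm{f}}\approx A^{-1}\) with \(A_{ii}=1/(4\pi a)\) and \(A_{ij}=1/(4\pi|z_{i}-z_{j}|)\); signs follow the physics convention, and this leading-order approximation replaces the layer-potential computation (2.1), so it should be read as an illustration rather than as the method of this paper. The central entry then settles down algebraically as the chain grows, mirroring the \(d=1\) case.

```python
import numpy as np

def central_capacitance(r, a=0.1):
    """Central diagonal entry of the point-conductor capacitance matrix
    for a chain of 2r+1 spheres of radius a with unit spacing."""
    z = np.arange(-r, r + 1, dtype=float)
    dist = np.abs(z[:, None] - z[None, :])
    A = 1.0 / (4 * np.pi * np.where(dist == 0.0, a, dist))
    return np.linalg.inv(A)[r, r]

radii = [5, 10, 20, 40, 80, 160]
vals = np.array([central_capacitance(r) for r in radii])
# successive differences shrink algebraically as the truncation radius doubles
print(np.abs(np.diff(vals)))
```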
## 3 Convergence to pure point spectrum

In this section, we study a problem where the infinite structure has a pure point spectrum, corresponding to a localized mode. We introduce a defect to the model in order to create such a mode. For a finite, truncated structure, there will be an eigenvalue arbitrarily close to the pure point spectrum.

### Example of a defect structure

Before developing any convergence theory, we present an example of a defect structure exhibiting a pure point spectrum, corresponding to a localized mode. We take a lattice with a single resonator (\(N=1\)) inside each unit cell and perturb the material parameter of a single resonator (the "defect"). In other words,

\[b_{1}^{m}=\begin{cases}1,&m\neq 0,\\ 1+x,&m=0,\end{cases} \tag{3.1}\]

for some parameter \(x>-1\). The eigenvalues of the (infinite-dimensional) generalized capacitance matrix \(\mathfrak{B}\mathfrak{C}\) in this setting were studied in [2]. It was found that \(\lambda\) is an eigenvalue of \(\mathfrak{B}\mathfrak{C}\) if and only if it is a root of the equation

\[\frac{x}{|Y^{*}|}\int_{Y^{*}}\frac{\lambda_{1}^{\alpha}}{\lambda-\lambda_{1}^{\alpha}}\,\mathrm{d}\alpha=1, \tag{3.2}\]

where \(\lambda_{1}^{\alpha}\) is the single eigenvalue of the quasi-periodic capacitance matrix \(\widehat{C}^{\alpha}\) of the unperturbed periodic structure. This equation has a solution \(\lambda=\lambda_{0}\) precisely in the case \(x>0\). In other words, the defect induces an eigenvalue \(\lambda_{0}\) in the pure point spectrum of \(\mathfrak{B}\mathfrak{C}\), corresponding to an exponentially localized eigenmode. An example of such a localized eigenmode was shown in Figure 2.
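For a concrete illustration, the defect eigenvalue \(\lambda_{0}\) can be computed by solving (3.2) numerically. The sketch below does this for a one-dimensional lattice with \(N=1\), using a placeholder band function \(\lambda_{1}^{\alpha}\) (in practice this would be the eigenvalue of the quasi-periodic capacitance matrix \(\widehat{C}^{\alpha}\)); for \(x>0\), the root lies just above the maximum of the band, consistent with the discussion above.

```python
import numpy as np
from scipy.optimize import brentq

L = 1.0
d_alpha = 2 * np.pi / (L * 4096)
alphas = -np.pi / L + (np.arange(4096) + 0.5) * d_alpha
band = 2.0 - np.cos(alphas * L)          # placeholder band lambda_1^alpha, max = 3

def defect_equation(lam, x):
    """x/|Y*| * int_{Y*} lambda_1^alpha / (lam - lambda_1^alpha) d(alpha) - 1."""
    integral = np.sum(band / (lam - band)) * d_alpha
    return x / (2 * np.pi / L) * integral - 1.0

x = 0.5                                   # defect strength, x > 0
lam0 = brentq(defect_equation, band.max() + 1e-8, 50.0, args=(x,))
print(lam0)                               # defect eigenvalue above the band
```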
### Convergence of defect modes

In this section, we prove that, if the infinite structure has a localized mode, there will be an eigenvalue of the truncated structure arbitrarily close to the localized frequency. We let \(\mathfrak{C}\) denote the infinite capacitance matrix. As before, we let \(C_{\mathrm{f}}\) denote the capacitance matrix of a finite structure, of size \(N|I_{r}|\times N|I_{r}|\). Furthermore, we let \(C_{\mathrm{t}}\) denote the truncated matrix of \(\mathfrak{C}\) of size \(N|I_{r}|\times N|I_{r}|\), and similarly let \(B_{\mathrm{t}}\) be the truncation of \(\mathfrak{B}\). At this point, we emphasize that \(C_{\mathrm{t}}\) is "nonphysical" in the sense that it does not correspond to a capacitance matrix associated to any physical structure but, rather, to the finite matrix obtained by simply truncating the infinite matrix \(\mathfrak{C}\). We assume that \(\mathfrak{B}\mathfrak{C}\) has a localized eigenmode \(\mathfrak{u}\), and let \(u_{\mathrm{t}}\) be the truncation of \(\mathfrak{u}\) of size \(N|I_{r}|\). The first result follows only from the decay of the localized mode.

**Lemma 3.1**.: _Assume that \(\mathfrak{B}\) is a compact perturbation of the identity, such that \(\mathfrak{B}\mathfrak{C}\) has a localized eigenmode \(\mathfrak{u}\) with corresponding eigenvalue \(\lambda\). Then there is an eigenvalue \(\tilde{\lambda}=\tilde{\lambda}(r)\) of \(B_{\mathrm{t}}C_{\mathrm{t}}\) satisfying_

\[\lim_{r\to\infty}\tilde{\lambda}(r)=\lambda.\]

Proof.: We let \(\mathfrak{u}_{\mathrm{t}}\) be the infinite vector obtained by padding \(u_{\mathrm{t}}\) with \(0\). Since \(\mathfrak{u}\) is in \(\ell^{2}(\Lambda)\), for any \(\varepsilon>0\) we can choose \(r\) large enough so that

\[\|\mathfrak{u}-\mathfrak{u}_{\mathrm{t}}\|_{\ell^{2}}<\varepsilon.\]

Since \(\mathfrak{B}\mathfrak{C}\) is a bounded operator, we then have

\[\|\lambda\mathfrak{u}-\mathfrak{B}\mathfrak{C}\mathfrak{u}_{\mathrm{t}}\|_{\ell^{2}}<K\varepsilon,\]

for some \(K>0\). Restricting to the finite block of size \(r\), we have

\[\|\lambda u_{\mathrm{t}}-B_{\mathrm{t}}C_{\mathrm{t}}u_{\mathrm{t}}\|_{2}<K\varepsilon.\]

In other words, \(\lambda\) is in the \(K\varepsilon\)-pseudospectrum of \(B_{\mathrm{t}}C_{\mathrm{t}}\), and since \(B_{\mathrm{t}}C_{\mathrm{t}}\) is normal, we have an eigenvalue \(\tilde{\lambda}\) of \(B_{\mathrm{t}}C_{\mathrm{t}}\) satisfying

\[|\tilde{\lambda}(r)-\lambda|\leq K\varepsilon.\]

This proves the claim.

Next, we study the properties of \(C_{\mathrm{f}}\) as the size of the finite structure increases.

**Lemma 3.2**.: _For \(i=1,...,N+1\), assume that \(B_{i}\subset\mathbb{R}^{3}\) are disjoint, connected domains and let_

\[B=\bigcup_{i=1}^{N}B_{i},\quad\widetilde{B}=\bigcup_{i=1}^{N+1}B_{i}.\]

_Let \(C_{ij}\), \(\widetilde{C}_{ij}\) denote the capacitance coefficients associated to \(B\) and \(\widetilde{B}\), respectively. Then_

\[C_{ii}\leq\widetilde{C}_{ii},\quad i=1,...,N.\]

Proof.: We will use a variational characterization of the capacitance coefficients. Let \(\mathcal{H}=\{v\in H^{1}_{\rm loc}(\mathbb{R}^{3})\mid v(x)\sim|x|^{-1}\) as \(x\to\infty\}\) and let

\[\mathcal{V}=\{v\in\mathcal{H}\mid v|_{\partial B_{j}}=\delta_{ij}\text{ for }j=1,...,N\},\qquad\widetilde{\mathcal{V}}=\{v\in\mathcal{H}\mid v|_{\partial B_{j}}=\delta_{ij}\text{ for }j=1,...,N+1\}.\]

Observe that \(\widetilde{\mathcal{V}}\subset\mathcal{V}\). It then follows that

\[C_{ii}=\min_{v\in\mathcal{V}}\int_{\mathbb{R}^{3}}|\nabla v|^{2}\,\mathrm{d}x\leq\min_{v\in\widetilde{\mathcal{V}}}\int_{\mathbb{R}^{3}}|\nabla v|^{2}\,\mathrm{d}x=\widetilde{C}_{ii}.\]

**Remark 3.3**.: Lemma 3.2 states that the diagonal capacitance coefficients will always increase when adding additional resonators. In the physical situation of electrostatics this result is intuitive: the self-capacitance of a conductor can only increase if additional conductors are introduced.

**Lemma 3.4**.: _As \(r\to\infty\), we have \(\|C_{\mathrm{f}}\|_{2}<K\) for some \(K\) independent of \(r\)._

Proof.: We know that the capacitance matrix \(C_{\mathrm{f}}\) is diagonally dominant:

\[(C_{\mathrm{f}}^{mm})_{ii}>\sum_{(n,j)\neq(m,i)}\big{|}(C_{\mathrm{f}}^{mn})_{ij}\big{|},\]

for any \(i,m\). For fixed \(i\) and \(m\), we know from Lemma 3.2 that \((C_{\mathrm{f}}^{mm})_{ii}(r)\) is increasing in \(r\), and for all \(r\) we have

\[(C_{\mathrm{f}}^{mm})_{ii}(r)<C_{ii}^{0},\]

where, as before, \(C_{ii}^{0}\) is the corresponding entry of the infinite capacitance matrix \(\mathfrak{C}\). In particular, the eigenvalues of \(C_{\mathrm{f}}(r)\) are bounded as \(r\to\infty\), which shows the claim.

As discussed above, the matrix \(C_{\mathrm{t}}\) appearing in Lemma 3.1 is nonphysical, as it is a truncation of the matrix for the infinite system.
Instead, we need to phrase the result for the matrix \(C_{\mathrm{f}}\), which describes the finite system. The following theorem is the main result of this section.

**Theorem 3.5**.: _Assume that \(\mathfrak{B}\) is a compact perturbation of the identity, such that \(\mathfrak{B}\mathfrak{C}\) has a localized eigenmode \(\mathfrak{u}\) with corresponding eigenvalue \(\lambda\). Then there is an eigenvalue \(\hat{\lambda}=\hat{\lambda}(r)\) of \(B_{\mathrm{t}}C_{\mathrm{f}}\) satisfying_

\[\lim_{r\to\infty}\hat{\lambda}(r)=\lambda.\]

Proof.: We let

\[K_{1}=\sup_{r>0}\|C_{\mathrm{f}}(r)-C_{\mathrm{t}}\|_{2},\]

and observe from Lemma 3.4 that \(K_{1}<\infty\). We also let

\[K_{2}=\|B_{\mathrm{t}}\|_{2}.\]

Given \(\varepsilon>0\), we pick \(r_{0}>0\) such that the following four terms are small:

\[\|C_{0,\mathrm{f}}-C_{0,\mathrm{t}}\|_{2}<\frac{\varepsilon}{4K_{2}},\quad\|u_{\mathrm{t}}-u_{0,\mathrm{t}}\|_{2}<\frac{\varepsilon}{4K_{1}K_{2}},\quad\|B_{\mathrm{t}}(C_{\mathrm{t}}-C_{0,\mathrm{t}})u_{0,\mathrm{t}}\|_{2}<\frac{\varepsilon}{4},\quad\|B_{\mathrm{t}}(C_{\mathrm{f}}-C_{0,\mathrm{f}})u_{0,\mathrm{t}}\|_{2}<\frac{\varepsilon}{4}\]

for all \(r\) large enough; the first inequality follows from Theorem 2.2 while the subsequent inequalities follow from the \(\ell^{2}(\Lambda)\)-decay of \(\mathfrak{u}\). Here, \(C_{0,\mathrm{t}},u_{0,\mathrm{t}}\), and \(C_{0,\mathrm{f}}\) are the truncations of \(C_{\mathrm{t}},u_{\mathrm{t}}\), and \(C_{\mathrm{f}}\) to the smaller lattice of radius \(r_{0}\) (padded with zero where needed for the matrix operations). We know from Lemma 3.1 that we can take \(r\) large enough so that \(B_{\mathrm{t}}C_{\mathrm{t}}\) has an eigenvalue \(\tilde{\lambda}\) of distance \(\varepsilon\) from \(\lambda\). We then have

\[B_{\mathrm{t}}C_{\mathrm{f}}u_{\mathrm{t}}=B_{\mathrm{t}}C_{\mathrm{t}}u_{\mathrm{t}}+B_{\mathrm{t}}(C_{\mathrm{f}}-C_{\mathrm{t}})(u_{\mathrm{t}}-u_{0,\mathrm{t}})+B_{\mathrm{t}}(C_{0,\mathrm{f}}-C_{0,\mathrm{t}})u_{0,\mathrm{t}}+B_{\mathrm{t}}(C_{\mathrm{f}}-C_{0,\mathrm{f}})u_{0,\mathrm{t}}-B_{\mathrm{t}}(C_{\mathrm{t}}-C_{0,\mathrm{t}})u_{0,\mathrm{t}}. \tag{3.3}\]

Then

\[\|(B_{\mathrm{t}}C_{\mathrm{f}}-B_{\mathrm{t}}C_{\mathrm{t}})u_{\mathrm{t}}\|_{2}<\varepsilon,\]

which means that there is an eigenvalue \(\hat{\lambda}\) of distance \(\varepsilon\) from \(\tilde{\lambda}\), and hence \(|\hat{\lambda}-\lambda|<2\varepsilon\).

**Remark 3.6**.: As an example, \(\mathfrak{B}\) and \(\mathfrak{C}\) as given in Section 3.1 satisfy the assumptions of Theorem 3.5.

### Numerical illustration

Figure 4 shows the convergence of the difference between the defect frequency computed for a finite structure and for the corresponding infinite structure, computed analytically using _e.g._ (3.2). Comparing Figure 4 with Figure 3, it appears that the error of the frequency of the defect mode inherits the convergence rate of the capacitance coefficients. In other words, when \(d=1\) or \(d=2\), there are long-range interactions through coupling with the far field, leading to algebraic convergence. In \(d=3\), there are no "spare" dimensions and the convergence is exponential. This is consistent with the results for one-dimensional models [12, 13, 16] and for two-dimensional lattices in two-dimensional problems [14].

Figure 4: Convergence of the frequency of the defect modes, for a defect on the central resonator (with \(x=1\)) created by perturbing a single entry of \(\mathfrak{B}\).
(a) A one-dimensional lattice with a single resonator in the unit cell (\(N=1\)). The difference between the defect frequency computed for a finite structure and for the corresponding infinite structure scales as \(O(r^{-1.4})\), where \(r\) is the length of the truncated structure. The lower right plot shows the spectrum of successively larger lattices. In the geometry sketch on the right, the corresponding entry \(b_{1}^{n}\) from the matrix \(\mathfrak{B}\) is shown above each resonator. (b) A two-dimensional square lattice with a single resonator in the unit cell (\(N=1\)). Here, the error scales as \(O(r^{-3.3})\), where \(r\) is the width of the (square) truncated structure. (c) A three-dimensional cubic lattice with a single resonator in the unit cell (\(N=1\)). Here, the error scales as \(O(e^{-2.4r})\), where \(r\) is the width of the (cubic) truncated structure.

**Remark 3.7**.: Comparing Figure 3 and Figure 4, it appears that the error of the frequency of the defect mode inherits the convergence rate of the capacitance coefficients. While this is unsurprising, it turns out not to be the case for other types of defect. For example, in Figure 5 we show the convergence of the defect modes in a dislocated Su-Schrieffer-Heeger (SSH) lattice, which is a one-dimensional lattice of resonators arranged in pairs (so \(N=2\)). This system supports two defect modes that are known to be _topologically protected_ and benefit from enhanced robustness properties (see [3] for details). The even mode experiences \(O(r^{-1.7})\) convergence while the odd mode converges at a faster \(O(r^{-3.8})\) rate. Understanding these different convergence rates is a valuable question for future study.

## 4 Convergence to continuous spectrum

Through numerical experiments, we can illustrate how the discrete spectrum of the truncated structure approximates the Floquet-Bloch spectral bands of the infinite structure. Making analytic statements relating these two quantities, however, is a challenging problem that is beyond the scope of the present work. The two spectra have very different fundamental characteristics and a greater understanding of the edge effects that occur at the ends of the finite structure would be needed in order to make progress on this fiendish question.

We now outline the method used to compute the discrete band structure which, given the set of eigenpairs \((\omega_{j},u_{j})\) of a truncated structure, approximates the band structure of the periodic structure. If we take the size \(r\) of the truncated structure to be reasonably large, the eigenmode \(u_{j}\) will _approximately_ be a linear combination of Bloch modes with frequency \(\omega_{j}\). To compare the discrete eigenvalues of the truncated problem to the continuous spectrum of the periodic problem, we 'reverse engineer' the appropriate quasi-periodicities \(\alpha\) corresponding to these Bloch modes. Observe that \(u_{j}\) is a vector of length \(N|I_{r}|\). If we let \((u_{j})_{m}\) denote the vector of length \(N\) associated to cell \(m\in\Lambda\), we define the truncated Floquet transform of \(u_{j}\) as

\[(\hat{u}_{j})_{\alpha}=\sum_{m\in I_{r}}(u_{j})_{m}e^{\mathrm{i}\alpha\cdot m},\qquad\alpha\in Y^{*}. \tag{4.1}\]

Observe that \((\hat{u}_{j})_{\alpha}\) is a vector of length \(N\). Looking at the 2-norm \(\|(\hat{u}_{j})_{\alpha}\|_{2}\) as a function of \(\alpha\), this function has distinct peaks at certain values of \(\alpha\).
We then take the quasi-periodicity associated to the mode \(u_{j}\) as

\[\operatorname*{argmax}_{\alpha\in Y^{*}}\|(\hat{u}_{j})_{\alpha}\|_{2}. \tag{4.2}\]

Note that the symmetry of the problem means that if \(\alpha\) is an approximate quasi-periodicity then so will \(-\alpha\) be. In cases of additional symmetries of the lattice, we expect additional symmetries of the quasi-periodicities.

Figure 6(a) shows the subwavelength continuous spectrum of an infinite array of resonators, which takes the form of a single spectral band. It is plotted alongside the discrete spectrum of a truncated array of 50 resonators, for which the quasi-periodicities have been approximated using the method outlined above. The discrete band structure mostly follows the infinite one closely, even for this relatively small truncated array. The frequencies close to zero are not exhibited in the finite structure, as the edge effects have the greatest effect on low-frequency modes. We would need to consider a much larger truncated structure to capture the lowest frequency part of the spectrum.

Figure 5: Convergence of the frequency of the defect modes in a lattice with resonators arranged in pairs (\(N=2\)) and a defect corresponding to the two central resonators being removed. This gives two topologically protected edge modes. Here, the error scales as \(O(r^{-1.7})\) for the even mode and \(O(r^{-3.8})\) for the odd mode, where \(r\) is the length of the truncated structure.

This behaviour can also be observed in more complicated structures. In Figure 6(b), we compare the continuous and truncated spectra of an array of resonators arranged in pairs (dimers). The truncated structure has 100 resonators arranged in 50 pairs. This geometry is an example of the famous SSH chain [17] which has been shown to have fascinating topological properties [5]. This system has two subwavelength spectral bands and the truncated modes are split between approximating the two bands.

Additionally, we can consider this method for lattices of higher dimension. Figure 7(a) shows the case of a square lattice of resonator dimers. Similarly to Figure 6(b), there is a band gap between the first and the second bands, and we see a close agreement between the discrete and the continuous band structure. Figure 7(b) shows a similar figure in the case of a honeycomb lattice, where the finite lattice is truncated along zig-zag edges of the lattice. As shown in [6], there are Dirac cones on each corner of the Brillouin zone. In the truncated structure, in addition to the "bulk modes" whose frequencies closely agree with the continuous spectrum, there are "edge modes" which are localized around the edges and whose points in the band structure lie away from the continuous bands.

## 5 Concluding remarks

In this work, we have demonstrated the convergence of defect modes in large resonator arrays to the corresponding modes in the infinite, periodic structure. We have studied this using the generalized capacitance matrix, which is a canonical model for three-dimensional wave scattering by resonant systems with long-range interactions. Our conclusions could also be generalized to other models, since the decay of the Helmholtz Green's function is the key feature that underpins our results.

Figure 6: The continuous spectrum of the infinite structure and the discrete spectrum of the truncated structure for one-dimensional lattices. (a) Single periodic resonators (\(N=1\)) with a truncated structure consisting of 50 resonators.
(b) Periodic pairs of resonators (\(N=2\)) with a truncated structure containing 100 resonators. In both cases, the truncated Floquet transform (4.1) is used to approximate the quasi-periodicity of the truncated modes.

Our results clarify the exponential convergence of defect modes that was observed in previous studies [12, 13, 14, 16]. We observed that the exponential convergence occurs only when the dimension of the periodic lattice is equal to that of the differential problem. When the lattice has fewer dimensions than the space it is embedded in, the convergence is algebraic. This is due to the fact that waves are able to propagate away from the structure in the "spare" directions, leading to long-range interactions.

A significant advantage of the model used in this work is that the Bloch modes, in addition to the defect modes, are also concisely characterized. As detailed in Section 4, this provides a numerical method for approximating the continuous spectrum. Importantly, this constructive approach presents a possible avenue for proving statements about the convergence of eigenvalues to the continuous spectrum. We see developing this convergence theory as a challenging but important problem for future investigation. Even in one-dimensional models, demonstrating convergence to the continuous spectrum remains an open problem.

## Data availability

No new data were created or analyzed in this study.

Figure 7: Examples of continuous and discrete spectra of the infinite and truncated structures, respectively. (a) A square lattice with two resonators per unit cell, resulting in two bands separated by a gap. (b) A honeycomb lattice with Dirac cones at the vertices of the Brillouin zone. In both cases, the truncated structures have 800 resonators and the truncated Floquet transform is used to approximate the quasi-periodicity of the truncated modes.

## Appendix A Asymptotic derivation of the model

In this brief appendix, we recall how the generalized capacitance matrix arises through an asymptotic treatment of a system of coupled high-contrast resonators. In particular, it can be used to characterize the subwavelength (_i.e._ asymptotically low-frequency) resonance of the system. For more details and a review of extensions to other settings (such as non-Hermitian and time-modulated systems) see [1]. We will present the results for a finite system of resonators. Analogous results hold for infinite periodic systems, by modifying the Green's function appropriately [1].

We suppose that the domains \(D_{i}\subset\mathbb{R}^{3}\), as considered already in this work, represent the material inclusions that will act as our resonators. We consider the scattering of time-harmonic waves with frequency \(\omega\) and will solve a Helmholtz scattering problem in three dimensions. This Helmholtz problem, which can be used to model acoustic, elastic and polarized electromagnetic waves, represents the simplest model for wave propagation that still exhibits the rich phenomena associated to subwavelength physics. We use \(v_{i}\) to denote the wave speed in each resonator \(D_{i}\), in which case \(k_{i}=\omega/v_{i}\) is the wave number in \(D_{i}\). Similarly, the wave speed and wave number in the background medium are denoted by \(v\) and \(k\). Finally, we must introduce the material contrast parameters \(\delta_{1},\dots,\delta_{N}\). These parameters describe the contrast between the material inside \(D_{i}\) and the background material.
For example, in the case of an acoustic system, \(\delta_{i}\) is the density of the material inside \(D_{i}\) divided by the density of the background material. We will want these contrast parameters to be small (an air bubble in water is one famous example in the setting of acoustics). Then, for the domain

\[D=\bigcup_{m\in I_{r}}\bigcup_{i=1}^{N}(D_{i}+m),\]

we consider the Helmholtz resonance problem

\[\left\{\begin{array}{ll}\Delta u+k^{2}u=0&\text{in }\mathbb{R}^{3}\setminus\overline{D},\\ \Delta u+k_{i}^{2}u=0&\text{in }D_{i}+m,\text{ for }i=1,\dots,N,\ m\in I_{r},\\ u|_{+}-u|_{-}=0&\text{on }\partial D,\\ \delta_{i}\frac{\partial u}{\partial\nu}\Big{|}_{+}-\frac{\partial u}{\partial\nu}\Big{|}_{-}=0&\text{on }\partial D_{i}+m,\text{ for }i=1,\dots,N,\ m\in I_{r},\\ u(x)\text{ satisfies the Sommerfeld radiation condition,}\end{array}\right.\] (A.1)

where the Sommerfeld radiation condition says that

\[\lim_{|x|\to\infty}|x|\left(\frac{\partial}{\partial|x|}-\mathrm{i}k\right)u=0,\quad\text{uniformly in all directions }x/|x|,\] (A.2)

and guarantees that energy is radiated outwards by the scattered solution.

The asymptotic regime we consider is that the material contrast parameters are all small while the wave speeds are all of order one. That is, there exists some \(\delta>0\) such that

\[\delta_{i}=O(\delta)\quad\text{and}\quad v,v_{i}=O(1)\quad\text{as}\quad\delta\to 0,\text{ for }i=1,\dots,N.\] (A.3)

Within this setting, we are interested in solutions to the resonance problem (A.1) that are _subwavelength_ in the sense that

\[\omega\to 0\quad\text{as}\quad\delta\to 0.\] (A.4)

To be able to characterize the subwavelength resonant modes of this system, we must define the _generalized_ capacitance coefficients. Recall the capacitance coefficients \((C_{\mathrm{f}}^{mn})_{ij}\) from (2.1). Then, we define the corresponding generalized capacitance coefficient as

\[(\mathcal{C}_{\mathrm{f}}^{mn})_{ij}=\frac{\delta_{i}v_{i}^{2}}{|D_{i}^{m}|}(C_{\mathrm{f}}^{mn})_{ij},\] (A.5)

where \(|D_{i}^{m}|\) is the volume of the bounded subset \(D_{i}^{m}\). Then, the eigenvalues of \(\mathcal{C}_{\mathrm{f}}\) determine the subwavelength resonant frequencies of the system, as prescribed by the following theorem.

**Theorem A.1**.: _Consider a system of \(N|I_{r}|\) subwavelength resonators in \(\mathbb{R}^{3}\). For sufficiently small \(\delta>0\), there exist \(N|I_{r}|\) subwavelength resonant frequencies \(\omega_{1}(\delta),\ldots,\omega_{N|I_{r}|}(\delta)\) with non-negative real parts. Further, the subwavelength resonant frequencies are given by_

\[\omega_{n}=\sqrt{\lambda_{n}}+O(\delta)\quad\text{as}\quad\delta\to 0,\]

_where \(\{\lambda_{n}:n=1,\ldots,N|I_{r}|\}\) are the eigenvalues of the generalized capacitance matrix \(\mathcal{C}_{\text{f}}\), which satisfy \(\lambda_{n}=O(\delta)\) as \(\delta\to 0\)._

A similar result exists for an infinite periodic structure, in terms of the eigenvalues of the generalized quasi-periodic capacitance matrix, as defined in (2.4); see [1] for details. The definition (A.5) clarifies the motivation for pre-multiplying by the perturbation matrix \(\mathfrak{B}\) to describe defects. When \(\mathfrak{B}\) is a compact perturbation of the identity, it describes defects that correspond to changing the material parameters on a finite number of resonators, so that the quantity \(\delta_{i}v_{i}^{2}\) corresponding to those resonators is altered.
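The passage from capacitance to resonant frequencies in (A.5) and Theorem A.1 amounts to a diagonal rescaling followed by an eigenvalue computation, as in the following minimal sketch; the capacitance matrix and material data below are placeholders chosen for illustration, not computed from (2.1).

```python
import numpy as np

def subwavelength_frequencies(C_f, delta, v, vol):
    """Leading-order resonant frequencies from Theorem A.1:
    omega_n = sqrt(lambda_n) + O(delta), where lambda_n are the
    eigenvalues of diag(delta_i v_i^2 / |D_i|) @ C_f, cf. (A.5)."""
    gen_C = np.diag(delta * v**2 / vol) @ C_f
    lam = np.linalg.eigvals(gen_C)
    return np.sqrt(np.sort(lam.real))

# Placeholder data: three resonators, symmetric positive-definite C_f
C_f = np.array([[ 2.0, -0.3, -0.1],
                [-0.3,  2.0, -0.3],
                [-0.1, -0.3,  2.0]])
delta = np.full(3, 1e-3)                  # small contrast parameters
print(subwavelength_frequencies(C_f, delta, v=np.ones(3), vol=np.ones(3)))
```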
## Appendix B Uniformity across the Brillouin zone

In this appendix, we provide additional details of the proof of Theorem 2.2. The main result is Lemma B.2, which shows that \((\mathcal{S}^{\alpha}_{D})^{-1}\) is, in operator norm, uniformly bounded for \(\alpha\) in a neighbourhood of \(0\). The analysis is similar to [4, Section 3.3].

From _e.g._ [7], we have a dual-space representation of \(G^{\alpha}\) given by

\[G^{\alpha}(x)=-\frac{1}{|Y|}\sum_{q\in\Lambda^{*}}\frac{e^{\mathrm{i}(\alpha+q)\cdot x}}{|\alpha+q|^{2}}=\frac{-e^{\mathrm{i}\alpha\cdot x}}{|Y||\alpha|^{2}}-\frac{1}{|Y|}\sum_{q\in\Lambda^{*}\setminus\{0\}}\frac{e^{\mathrm{i}(\alpha+q)\cdot x}}{|\alpha+q|^{2}}.\]

Define the periodic Green's function \(G^{0}\) as

\[G^{0}(x)=-\frac{1}{|Y|}\sum_{q\in\Lambda^{*}\setminus\{0\}}\frac{e^{\mathrm{i}q\cdot x}}{|q|^{2}}.\]

For \(\alpha\) close to zero, we then have

\[G^{\alpha}(x)=\frac{-1}{|Y||\alpha|^{2}}-\frac{\mathrm{i}\alpha\cdot x}{|Y||\alpha|^{2}}+\frac{(\alpha\cdot x)^{2}}{2|Y||\alpha|^{2}}+G^{0}(x)+O(|\alpha|).\]

Consequently, for \(\alpha\) close to zero, we have an expansion of the single-layer potential \(\mathcal{S}^{\alpha}_{D}\):

\[\mathcal{S}^{\alpha}_{D}[\psi](x)=-\frac{1}{|Y||\alpha|^{2}}\int_{\partial D}\psi(y)\,\mathrm{d}\sigma-\frac{\mathrm{i}}{|Y||\alpha|^{2}}\int_{\partial D}\alpha\cdot(x-y)\psi(y)\,\mathrm{d}\sigma+\frac{1}{2|Y||\alpha|^{2}}\int_{\partial D}\bigl{(}\alpha\cdot(x-y)\bigr{)}^{2}\psi(y)\,\mathrm{d}\sigma+\mathcal{S}^{0}_{D}[\psi](x)+O(|\alpha|).\] (B.1)

**Lemma B.1**.: _If \(\mathcal{S}^{0}_{D}[\varphi]=K\chi_{\partial D}\) for some constant \(K\) and some \(\varphi\in L^{2}(\partial D)\) satisfying \(\int_{\partial D}\varphi\,\mathrm{d}\sigma=0\), then \(\varphi=0\)._

Proof.: For \(x\in\mathbb{R}^{3}\setminus\mathcal{D}\), define \(V(x):=\mathcal{S}^{0}_{D}[\varphi](x)\). Then \(V\) solves the following differential problem,

\[\left\{\begin{array}{ll}\Delta V=0&\text{in }\mathbb{R}^{3}\setminus\mathcal{D},\\ V|_{+}=K&\text{on }\partial\mathcal{D},\\ V(x+m)=V(x)&\text{for all }m\in\Lambda.\end{array}\right.\] (B.2)

Moreover, using the jump relations and integration by parts, we have that

\[\int_{\partial D}\varphi\,\mathrm{d}\sigma=K\int_{Y\setminus D}|\nabla V|^{2}\,\mathrm{d}x=0.\]

If \(K\neq 0\), it follows from (B.2) that \(\int_{Y\setminus D}|\nabla V|^{2}\,\mathrm{d}x\neq 0\), which is a contradiction. In other words, we must have \(K=0\), so that \(\mathcal{S}^{0}_{D}[\varphi]=0\) and \(\int_{\partial D}\varphi\,\mathrm{d}\sigma=0\). From [4, Lemma 3.7], we have that \(\varphi=0\).

**Lemma B.2**.: \(\|(\mathcal{S}^{\alpha}_{D})^{-1}\|\)_, in operator norm, is bounded for \(\alpha\) in a neighbourhood of \(0\)._

Proof.: To reach a contradiction, we assume that \(\mathcal{S}^{\alpha}_{D}[\phi]=O(|\alpha|)\) for some \(\phi\), which can be written as \(\phi=\phi_{0}+|\alpha|\phi_{1}\), where \(\phi_{0}\) is nonzero, does not depend on \(\alpha\), and \(\phi_{1}=O(1)\) as \(|\alpha|\to 0\). Also define \(\mathbf{v}=\frac{\alpha}{|\alpha|}\).
From (B.1) it follows that

\[\begin{split}&\int_{\partial D}\phi_{0}\:\mathrm{d}\sigma=0,\\ &\int_{\partial D}\phi_{1}(y)\:\mathrm{d}\sigma+\mathrm{i}\int_{\partial D}\mathbf{v}\cdot(x-y)\phi_{0}(y)\:\mathrm{d}\sigma=O(|\alpha|),\\ &K(\mathbf{v})-\mathrm{i}\mathbf{v}\cdot x\int_{\partial D}\phi_{1}(y)\:\mathrm{d}\sigma+\frac{1}{2}\int_{\partial D}\bigl{(}\mathbf{v}\cdot(x-y)\bigr{)}^{2}\phi_{0}(y)\:\mathrm{d}\sigma+|Y|\mathcal{S}^{0}_{D}[\phi_{0}]=O(|\alpha|),\end{split}\]

where \(K\) is constant as a function of \(x\). Simplifying, we have that

\[\frac{1}{2}\int_{\partial D}\bigl{(}\mathbf{v}\cdot(x-y)\bigr{)}^{2}\phi_{0}(y)\:\mathrm{d}\sigma=-(\mathbf{v}\cdot x)\int_{\partial D}(\mathbf{v}\cdot y)\phi_{0}(y)\:\mathrm{d}\sigma+\frac{1}{2}\int_{\partial D}(\mathbf{v}\cdot y)^{2}\phi_{0}(y)\:\mathrm{d}\sigma.\]

In total, we get

\[\mathcal{S}^{0}_{D}[\phi_{0}](x)=\tilde{K}(\mathbf{v})+\frac{2(\mathbf{v}\cdot x)}{|Y|}\int_{\partial D}(\mathbf{v}\cdot y)\phi_{0}(y)\:\mathrm{d}\sigma,\]

where \(\tilde{K}\) is constant in \(x\). Observe that \(\mathcal{S}^{0}_{D}[\phi_{0}](x)\) is independent of \(\mathbf{v}\). As a function of \(x\), it is constant for \(x\in\mathbf{v}^{\perp}\), and so it is constant for all \(x\). From Lemma B.1 we get that \(\phi_{0}=0\), which proves the claim.
2307.08801
Towards Automated Design of Riboswitches
Experimental screening and selection pipelines for the discovery of novel riboswitches are expensive, time-consuming, and inefficient. Using computational methods to reduce the number of candidates for the screen could drastically decrease these costs. However, existing computational approaches do not fully satisfy all requirements for the design of such initial screening libraries. In this work, we present a new method, libLEARNA, capable of providing RNA focus libraries of diverse variable-length qualified candidates. Our novel structure-based design approach considers global properties as well as desired sequence and structure features. We demonstrate the benefits of our method by designing theophylline riboswitch libraries, following a previously published protocol, and yielding 30% more unique high-quality candidates.
Frederic Runge, Jörg K. H. Franke, Frank Hutter
2023-07-17T19:34:59Z
http://arxiv.org/abs/2307.08801v1
# Towards Automated Design of Riboswitches

###### Abstract

Experimental screening and selection pipelines for the discovery of novel riboswitches are expensive, time-consuming, and inefficient. Using computational methods to reduce the number of candidates for the screen could drastically decrease these costs. However, existing computational approaches do not fully satisfy all requirements for the design of such initial screening libraries. In this work, we present a new method, _libLEARNA_, capable of providing RNA focus libraries of diverse variable-length qualified candidates. Our novel structure-based design approach considers global properties as well as desired sequence and structure features. We demonstrate the benefits of our method by designing theophylline riboswitch libraries, following a previously published protocol, and yielding 30% more unique high-quality candidates.
## 1 Introduction

However, existing computational design approaches do not fully satisfy the requirements for building such initial screening libraries: in particular, they struggle to provide qualified candidates of variable length under joint sequence and structure constraints, which limits diversity and hinders their applicability to larger scale library designs. This is also true for tools that were specifically developed for the task of riboswitch design, like the sampling approach proposed by Hammer et al. (2017). In this work, we present a novel structure-based RNA design algorithm, _libLEARNA_, that is capable of designing large amounts of diverse candidate sequences with different lengths, while considering desired sequence and structural constraints, as well as nucleotide distributions, during the design process.
In particular, our contributions are as follows:

* We introduce a novel _RNA design paradigm_ that enables the design of variable-length RNA sequences from desired arbitrary sequence and structure parts.
* We improve an existing _RNA design algorithm_, _LEARNA_ (Runge et al., 2019), with a masked training objective to enable efficient RNA library generation, including global sequence properties due to the robustness of our approach.
* We show the benefits of our approach by exemplarily designing theophylline riboswitch libraries, following a previously proposed protocol by Wachsmuth et al. (2012), yielding 30% more unique candidates compared to the original approach and up to 47% when designing sequences with desired G and C nucleotide ratios, while providing greater structural diversity across nearly uniformly distributed sequence lengths.

## 2 Method

We first recap the foundation of our method, the _LEARNA_ algorithm, followed by the changes we apply to the state representations, the training procedure, and the meta-optimization process, which lead to its successor _libLEARNA_.

### Background

_LEARNA_ is a generative automated deep reinforcement learning (AutoRL) (Parker-Holder et al., 2022) algorithm for the inverse RNA folding problem (Hofacker et al., 1994). The algorithm samples RNA sequences from a learned policy given a target secondary structure. In an inner-loop, a deep reinforcement learning (RL) algorithm meta-learns an RNA design policy across thousands of different inverse RNA folding tasks. The validation loss is communicated to an efficient Bayesian Optimization method, BOHB (Falkner et al., 2018), which jointly optimizes the configuration of the RL system in the outer-loop. The actions of the RL agent correspond to placing a nucleotide (A, C, G, or U) for each position, or directly placing Watson-Crick pairs (A-U, U-A, G-C, or C-G) in case the structure indicates that the given position is paired to another nucleotide. States are defined as local representations of the input structure in dot-bracket format (Hofacker et al., 1994) using an n-gram centered around the current position. After applying a folding algorithm to the designed sequence, the reward function is based on the Hamming distance between the folded candidate and the desired structure. Finally, the best-performing configuration on the validation set is evaluated at test time.

### libLEARNA

We improve _LEARNA_ to design RNA candidates from arbitrary sequence and structure constraints via an extended state representation, changes in the training procedure, additional configuration-space dimensions, and changes to the general objective of the meta-optimization process.

**State Representations** To inform _libLEARNA_ about constraints in the sequence and the structure, we extend the state representations of _LEARNA_. In particular, we numerically encode pairs of a sequence symbol and its corresponding structure symbol for each position of the design task. This enlarges the state space of _libLEARNA_ compared to _LEARNA_ but allows _libLEARNA_ to be more flexible in terms of the design tasks it can be applied to.

**Training** Instead of training on tasks of the inverse RNA folding problem, we use a masked training objective similar to the masked language model training in BERT (Devlin et al., 2018). In particular, we follow Runge et al. (2019)
to generate three training datasets with different length distributions (\(\leq 200\) nucleotides (nt), \(\geq 200\) nt, random length) of \(100000\) samples each, and a non-overlapping validation set of \(100\) samples from the Rfam (Griffiths-Jones et al., 2003) database version 14.1, using _RNAfold_ (Lorenz et al., 2011) for folding the sequences. However, in contrast to Runge et al. (2019), we do not mask the entire sequences but derive tasks that correspond to the design of RNAs from desired sequence and structure parts by applying a random masking procedure to the sequences and the structures, detailed in Appendix A. The task of _libLEARNA_ during training then is to fill the masked parts of the sequence such that, after applying a folding algorithm to the designed sequence, all positions of the resulting folding satisfy the given positional constraints of the masked structure. Similar to Runge et al. (2019), the reward is based on the Hamming distance, while masked positions in the structure are ignored. Our design procedure describes a completely new structure-based RNA design paradigm, since the design algorithm is not informed about the pairing conditions _a priori_. This is in contrast to previous work in the field of structure-based RNA design and enables the design of RNAs from arbitrary sequence and structure parts while creating larger structural diversity. It further allows us to extend the task at any given point, since we are no longer bound to explicit pairing positions. However, we allow indexing pairs to explicitly indicate pairing positions if desired. More details and examples of tasks for the datasets can be found in Appendix A.

**Meta-Optimization** For _libLEARNA_, we mainly adopt the configuration space proposed by Runge et al. (2019) but introduce four new dimensions. (1) While algorithms for the inverse RNA folding problem typically benefit from directly predicting Watson-Crick base pairs at sites that are known to be paired with another nucleotide, the pairing partners for many paired sites are not trivially distinguishable as a result of our masking procedure during training. However, we include the choice to make use of the direct prediction of pairs if the pairing partner can be identified, i.e. there is no masking between the pairing positions. (2) We add a choice to the configuration space to dynamically adapt the states based on the design progress, to inform the agent about its own decisions. (3) The choice of training data and (4) the schedule of the tasks during training can have a strong impact on the final performance of the learning algorithm. We, therefore, include two dimensions to choose from three different training data distributions and curricula (unsorted tasks or sorted by length). We further allowed searching over an additional LSTM layer. The result is an 18-dimensional search space to jointly optimize over the network architecture, all parts of the MDP, as well as training hyperparameters, task distributions, and schedule, using BOHB (Falkner et al., 2018). We use the exact same setup during meta-optimization as Runge et al. (2019), with the same training budgets and validation protocols. However, while Runge et al. (2019) optimized for an RL algorithm without any policy updates at test time, we directly optimize for an algorithm with policy updates at evaluation time to increase the adaptation capabilities of our approach. The meta-optimization procedure, the configuration space, as well as the final configuration of _libLEARNA_ are detailed in Appendix B.
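To illustrate the masked training objective described above, the following sketch shows a simplified single-span version of the masking procedure and of the structure reward. The actual multi-span procedure of _libLEARNA_ is detailed in Appendix A of the paper, so the functions below (whose names are our own) should be read as a toy approximation.

```python
import random

def mask_task(seq: str, struct: str, mask_char: str = "?"):
    """Mask one random contiguous span in both sequence and structure
    (toy version; libLEARNA samples multiple masked regions)."""
    i = random.randrange(len(seq))
    j = random.randrange(i, len(seq)) + 1
    masked = mask_char * (j - i)
    return seq[:i] + masked + seq[j:], struct[:i] + masked + struct[j:]

def structure_reward(designed: str, target: str) -> float:
    """Hamming-distance-based reward; masked target positions are ignored."""
    mism = sum(1 for a, b in zip(designed, target) if b != "?" and a != b)
    return 1.0 - mism / len(target)

seq, db = "GGGAAACCC", "(((...)))"
print(mask_task(seq, db))          # e.g. ('GG???ACCC', '((???.)))')
```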
The meta-optimization procedure, the configuration space, as well as the final configuration of _libLEARNA_ are detailed in Appendix B. ## 3 Riboswitch Library Generation After describing the original procedure for the design and evaluation of candidates for synthetic riboswitches for theophylline-dependent regulation of transcription proposed by Wachsmuth et al. (2012), we detail our approach for the construction of an RNA design space and the design of RNA libraries, and evaluate both methods. ### Original Setup **Initial Setting** Originally, Wachsmuth et al. (2012) constructed riboswitch candidates from (1) the TCT8-4 theophylline aptamer sequence and structure, (2) a spacer sequence of 6 to 20 nucleotides (nt), (3) a sequence of 10nt to 21nt complementary to the 3\({}^{\prime}\)-end of the aptamer, and (4) a U-stretch of 8nt at the 3\({}^{\prime}\)-end of the construct. **Design Procedure** To generate candidates, Wachsmuth et al. (2012) designed a large library of random sequences for the spacer region (6-20nt) and a library of sequences complementary to the 3\({}^{\prime}\)-end of the aptamer (10-21nt). From these sets, randomly sampled sequences were combined with the aptamer and the 8-U-stretch. ### libLEARNA Setup **Design Space Formulation** We start the formulation of our design space from the entire sequence and the structure part of the unbound aptamer, a spacer region of unknown sequence, a region complementary to the 3\({}^{\prime}\)-end of the aptamer of at least 10nt, and the unpaired 8-U-stretch, similar to Wachsmuth et al. (2012). To ensure a spacer length of at least 6nt, we use the shared structure constraints of the six final constructs proposed by Wachsmuth et al. (2012), which were also used to restrict the unknown parts of the aptamer. We then introduce three positions where the task can be extended (indicated by \(\hat{?}\) in Table 1) to fit the requirements and fix the length of the design space to 66nt-91nt according to Wachsmuth et al. (2012). The final definition of the design space and the proposed constructs of Wachsmuth et al. (2012) are shown in Table 1. [Table 1: the riboswitch constructs of Wachsmuth et al. (2012) (RS1, ...) and the _libLEARNA_ design space, split into the aptamer, spacer, complementary region, and 8-U-stretch parts; the sequence entries of the table were lost in extraction.] **Design Procedure** The input to _libLEARNA_ is the design space shown in Table 1. To generate candidates, _libLEARNA_ internally first samples a masked task from the design space, by uniformly sampling a sequence of masking tokens for each extension position considering the length constraints, and then tries to solve the task with a single shot. ### Experiments We assess the performance of _libLEARNA_ against the library generation procedure originally proposed by Wachsmuth et al. (2012) in two experiments: (1) the design of a library based on sequence and structure constraints only, and (2) the design of candidates when additionally querying _libLEARNA_ for candidates with a specific G and C nucleotide ratio (GC-content), given a tolerance of \(0.01\). We note that _libLEARNA_ was never trained for predictions with desired GC-contents but that a GC-content loss-term (the absolute deviation of the GC-content of the current candidate sequence from the desired GC-content) was simply added to the reward function of _libLEARNA_. The reward function then is a weighted sum (without any tuning, setting all weights to 1) of the structure- and the GC-loss. However, similar to the structural improvement step described by Runge et al.
(2019), we implement a GC-improvement step to guide the agent for this challenging task, detailed in Appendix C. For each experiment, we generate 50000 candidates with the approach of Wachsmuth et al. (2012) and with _libLEARNA_, and evaluate them as follows. **Evaluation** The evaluation of the designed candidates was performed following Wachsmuth et al. (2012): after dropping duplicates, the designed sequences were folded using _RNAfold_ and verified for (1) the existence of two hairpin structure elements, the aptamer hairpin for binding the ligand and the terminator hairpin formed between the 3\({}^{\prime}\)-end of the aptamer, the spacer, and the region complementary to the aptamer sequence, (2) no pairing within the last seven nucleotides of the 8-U-stretch, and (3) no pairing between the spacer and the aptamer in a sequence of folding steps, simulating co-transcriptional folding with a fixed elongation speed of five. A candidate that did not pass all criteria was rejected. **Results** We observe that _libLEARNA_ generates considerably more candidates that pass the design criteria compared to the original procedure proposed by Wachsmuth et al. (2012), yielding 30% more satisfying candidates on average (Table 2). Further, the candidates are nearly uniformly distributed across the lengths of the design space, especially for the longer sequences (Figure 1 left), and the structural diversity generated by _libLEARNA_ is around 23% higher (Table 2). When designing candidates with desired GC-contents, _libLEARNA_ provides up to 47% more candidates that satisfy the design criteria and contain the specific G and C nucleotide distribution (Figure 1 right). Remarkably, _libLEARNA_ can also design such candidates at the margins of the possible GC-contents (\(0.3\) and \(0.6\)), which lie between \(0.29\) and \(0.63\) for the given riboswitch design space. ## 4 Conclusion We propose a new RNA design paradigm based on a masked prediction task that enables the design of RNA libraries with large structural diversity from arbitrary sequence and structure constraints. We use this paradigm to develop a new algorithm, _libLEARNA_, capable of generating these libraries. We demonstrate the benefits of our method on the example of designing variable-length riboswitches, showing the efficiency of our approach. Our results for library generation with desired G and C nucleotide distributions show that _libLEARNA_ can handle global constraints, while the robustness to reward changes suggests potential for handling more constraints, e.g. a desired energy threshold of the structures, which we will assess in future work. Overall, our approach bears great potential to support experimental pipelines. \begin{table} \begin{tabular}{l c c} \hline \hline **Method** & **Valid Candidates [\%]** & **Unique Structures** \\ \hline Wachsmuth et al. (2012) & 41.7 & 6316.6 \\ _libLEARNA_ & **70.9** & **8269.6** \\ \hline \hline \end{tabular} \end{table} Table 2: **Overview of candidates that satisfy the design criteria.** All numbers are averages across five runs with different random seeds. Figure 1: **Length and GC-content distribution in the generated libraries.** (Left) The plot displays the distribution of the candidates across different lengths. (Right) The plot shows the number of unique candidates that satisfy all design criteria and have the specified G and C nucleotide distribution when running _libLEARNA_ with a desired GC-content. All numbers are averages across five runs with different random seeds.
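As a concrete reading of the GC-content experiment above, the augmented objective can be sketched as follows (a hedged sketch building on the `masked_hamming_reward` helper from the earlier snippet; names and the exact formulation are ours, not the authors'):

```python
# Sketch of the GC-augmented objective of Section 3.3: an unweighted
# (all weights = 1) sum of the structure loss and the absolute deviation of
# the candidate's GC-content from the desired value.

def gc_content(sequence):
    return (sequence.count("G") + sequence.count("C")) / len(sequence)

def gc_augmented_reward(sequence, target_structure, desired_gc, fold):
    structure_loss = 1.0 - masked_hamming_reward(sequence, target_structure, fold)
    gc_loss = abs(gc_content(sequence) - desired_gc)
    return 1.0 - (structure_loss + gc_loss)  # weighted sum of losses, weights 1
```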
## Acknowledgements This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828. The authors further acknowledge support by the state of Baden-Wurttemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG.
2303.13068
Effects of magnetic field on the evolution of energy density fluctuations
We study the effects of a static and uniform magnetic field on the evolution of energy density fluctuations present in a medium. By numerically solving the relativistic Boltzmann-Vlasov equation within the relaxation time approximation, we explicitly show that magnetic field can affect the characteristics of energy density fluctuations at the timescale the system achieves local thermodynamic equilibrium. A detailed momentum mode analysis of fluctuations reveals that magnetic field increases the damping of mode oscillations, especially for the low momentum modes. This leads to a reduction in the ultraviolet (high momentum) cutoff of fluctuations and also slows down the dissipation of relatively low momentum fluctuation modes. We discuss the phenomenological implications of our study on various sources of fluctuations in relativistic heavy-ion collisions.
Shreyansh S. Dave, Subrata Pal
2023-03-23T06:52:51Z
http://arxiv.org/abs/2303.13068v2
# Effects of magnetic field on the evolution of energy density fluctuations ###### Abstract We study the effects of a static and uniform magnetic field on the evolution of energy density fluctuations present in a medium. By numerically solving the relativistic Boltzmann-Vlasov equation within the relaxation time approximation, we explicitly show that magnetic field can affect the characteristics of energy density fluctuations at the timescale the system achieves local thermodynamic equilibrium. A detailed momentum mode analysis of fluctuations reveals that magnetic field increases the damping of mode oscillations, especially for the low momentum modes. This leads to a reduction in the ultraviolet (high momentum) cutoff of fluctuations and also slows down the dissipation of relatively low momentum fluctuation modes. We discuss the phenomenological implications of our study on various sources of fluctuations in relativistic heavy-ion collisions. ## I. Introduction The effects of magnetic field on the bulk evolution of dynamical systems have been extensively studied and found to be crucial for a proper understanding of the physical properties of the system. For example, the evolution of cosmic fluid in the early Universe has been found to be affected by a primordial magnetic field that can non-trivially modify the power spectrum of cosmic microwave background radiation (CMBR) [1]. The fluid evolution in the post-merger (ringdown) phase of a binary neutron star merger and the gravitational collapse of a homogeneous dust [2] can be influenced by the magnetic field [3, 4, 5, 6, 7] modifying the strain amplitude and frequency spectrum of the gravitational waves [8, 9, 10, 11]. A strong magnetic field can also be generated in the participant zone in relativistic heavy-ion collisions [12, 13] which can qualitatively modify the azimuthal anisotropic flow and power spectrum of the flow fluctuations of the hadrons [14, 15, 16, 17, 18, 19, 20]. These systems were investigated within the relativistic magnetohydrodynamic (RMHD) framework where the medium is assumed to be in the local thermodynamic equilibrium. The observable consequences of magnetic field basically stem from the stiffening of the equation of state, which causes an increase in the sound speed in the plane perpendicular to the magnetic field and generates an additional momentum anisotropy in the fluid evolution [16, 17, 18, 21]. However, expanding systems are naturally not in thermal equilibrium, but may gradually approach equilibrium from out-of-equilibrium initial conditions. Such scenarios have been perceived in the reheating of early Universe in the presence of a magnetic field [22, 23], and in noncentral relativistic heavy-ion collisions where a deconfined state of Quark-Gluon-Plasma (QGP) [24, 25, 26] is formed in presence of a large magnetic field [12, 13, 27, 28, 29]. In these situations, the RMHD framework is not applicable and an out-of-equilibrium description is required to explore the evolution of the physical quantities as they approach local-equilibrium. In general, fluctuations in physical quantities can exist at various length scales and may largely influence the dynamics of the system. In fact, fluctuations can be exploited to infer the information of a system at various time/length scales. 
For cosmic fluid and astrophysical systems, compared to long wavelength fluctuations, the short wavelength fluctuations (comparable to the coarse-grained length scale for hydrodynamic description) may not have any significant effect on the bulk evolution. In contrast, in heavy-ion collisions, fluctuations of short wavelengths comparable to the length scale, \(\ell\sim 1\,\)fm, are particularly important in the description of the evolution dynamics of QGP droplet of transverse length \(L\sim 10\,\)fm. These fluctuations are dominantly present at the initial stages of the collision [30, 31], and also during the space-time evolution (collisional and thermal/hydrodynamic fluctuations), and play an important role in the bulk hydrodynamic description of the QGP medium [32, 33, 34].1 Footnote 1: Additionally, at the moderate collision energies, large critical fluctuations can inevitably arise near the QCD critical point and can potentially be used to probe the critical point [34, 35]. While these model studies of fluctuations were carried out in the absence of magnetic field, the short-lived strong magnetic field produced in nuclear collisions [12] can affect mainly the "fast-evolving" short wavelength fluctuations and thereby the observables that are sensitive to fluctuations. Thus, it is important to study the impact of magnetic field on these fluctuations that can provide reliable insight on the medium properties in a model-to-data comparison [36, 37, 38, 39].2 Footnote 2: Interestingly, the ideal RMHD simulations of QGP in relativistic heavy-ion collisions mostly show the effect of magnetic field on the higher flow harmonics [17]. In this work, we investigate within kinetic transport [40, 41, 42], the effects of an external magnetic field on the evolution of energy density fluctuations present in a slightly out-of-equilibrium medium of electrically charged particles. The magnetic field \(B\) is considered to be static, uniform, and marginally weaker than the thermal energy or the temperature \(T\) of the medium, i.e., \(\sqrt{|qB|}<T\). In particular, we solve numerically the relativistic Boltzmann-Vlasov equation within the relaxation time approximation (RTA) [43; 44; 45], and show that the magnetic field can affect the evolution of energy density fluctuations in the transverse direction to \(B\), leading to fluctuations of completely different characteristics at the timescale at which the medium achieves local-equilibrium. We perform momentum mode analysis of the fluctuations and demonstrate that the magnetic field enhances the damping of mode oscillations. We extend the analysis to the high momentum scale of fluctuations where the nonhydrodynamic modes of RTA kinetic theory dominate [46; 47; 48; 49; 50; 39] and determine the ultraviolet cutoff of fluctuations above which all the higher momentum modes get suppressed while approaching local-equilibrium. We show that this cutoff decreases (i.e., the short-wavelength cutoff increases) with increasing magnetic field. We emphasize that the analysis presented here is quite general and can be applied to any system whose constituents are electrically charged. For inclusiveness, we consider a system of charged pions and discuss the phenomenological implications on relativistic heavy-ion collisions. This paper is organized as follows. Section II deals with a detailed description of the Boltzmann-Vlasov equation, where the underlying assumptions for solving this equation are discussed. The simulation details are given in Sec. 
III, and the simulation results are presented in Sec. IV. In Sec. IVA, the effects of magnetic field on the evolution of energy density fluctuations are studied with an extensive analysis of the momentum modes of fluctuations. The effects of magnetic field on the other components of energy-momentum tensor are shown in Sec. IVB. In Sec. IVC, the effects of magnetic field on a generic initial energy density profile are presented, which readily elucidate the implications of the preceding sections. The phenomenological implications of the results, especially in the context of QGP formation in relativistic heavy ion collisions are discussed in Sec. V. Finally, we conclude with a summary of the work in Sec. VI. Throughout the study, we consider the Minkowski space-time metric as \(\eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\), and work in the units \(k_{B}=\hbar=c=1\). The four-position and four-momentum of particle (and anti-particle) are represented by \(x^{\mu}=(t,\mathbf{x})\) and \(k^{\mu}=(k^{0},\mathbf{k})\), respectively, where \(k^{\mu}\) is normalized to the particle's rest mass square as \(k^{\mu}k_{\mu}=m_{0}^{2}\) giving \(k^{0}=\sqrt{\mathbf{k}^{2}+m_{0}^{2}}=\)\(E_{\mathbf{{}_{k}}}\) -- the energy of particle with three-momentum \(\mathbf{k}\). ## II. Boltzmann-Vlasov Equation To study the effects of magnetic field on the equilibration of a system, we solve the relativistic Boltzmann-Vlasov (BV) equation [41; 51]: \[k^{\mu}\partial_{\mu}f+qF^{\mu\nu}k_{\nu}\frac{\partial}{\partial k^{\mu}}f= \mathcal{C}[f,\bar{f}]. \tag{1}\] This provides the time evolution of the single-particle phase-space distribution function \(f\equiv f(t,\mathbf{x},\mathbf{k})\) of particles with electric charge \(q\). Here \(F^{\mu\nu}\) is the electromagnetic field tensor whose components are treated as external fields.3 The collision integral \(\mathcal{C}[f,\bar{f}]\) makes the BV equation nonlinear which cannot be solved analytically [29; 52; 53]. We consider the linear approximation -- also known as the relaxation time approximation (RTA) [43; 44; 45] -- where the system is assumed to be slightly away from the equilibrium state such that the distribution function can be written as \(f=f_{\mathrm{eq}}+\delta f\), where \(f_{\mathrm{eq}}\) is the local equilibrium distribution function of the system and \(\delta f\ll f_{\mathrm{eq}}\) gives the deviation from \(f_{\mathrm{eq}}\). Footnote 3: Thus, unlike a RMHD fluid [17], there is no feedback of the medium on the electromagnetic fields. In RTA, the collision integral can be written in the linear form as \(\mathcal{C}[f,\bar{f}]\)=\(-\frac{k^{\mu}u_{\mu}}{\tau_{c}}\delta f\)[42; 44], where \(\tau_{c}\) is the relaxation time which sets a timescale for local equilibration [44; 45], \(u^{\mu}\)=\(\gamma(1,\mathbf{v})\) the four-velocity of the fluid, and \(\gamma\)=\((1-\mathbf{v}^{2})^{-1/2}\) the Lorentz factor. For anti-particles, the Boltzmann-Vlasov equation has a form similar to Eq. (1) with \((f,q)\leftrightarrow(\bar{f},-q)\) and the collision term under the RTA as \(\mathcal{\bar{C}}[f,\bar{f}]\)=\(-\frac{k^{\mu}u_{\mu}}{\tau_{c}}\delta\bar{f}\).4 The collision term within RTA is constrained by the Landau matching conditions required to satisfy the net-particle four-current and energy-momentum conservations [55; 56; 44]. These conditions are given by Footnote 4: Note that a magnetic field can modify the relaxation time \(\tau_{c}\)[54]. 
However, we have rescaled the space-time coordinates by \(\tau_{c}\) and hence it will not enter in the BV equation explicitly. \[\begin{split} u_{\mu}T^{\mu\nu}&=u_{\mu}T^{\mu \nu}_{\mathrm{eq}},\\ u_{\mu}N^{\mu}&=u_{\mu}N^{\mu}_{\mathrm{eq}}, \end{split} \tag{2}\] which should be satisfied throughout the evolution of distribution functions [55; 44]. Here \(T^{\mu\nu}\) and \(T^{\mu\nu}_{\mathrm{eq}}\) are the energy-momentum tensors of the medium corresponding to distribution functions \((f,\bar{f})\) and local equilibrium distribution functions \((f_{\mathrm{eq}},\bar{f}_{\mathrm{eq}})\), respectively. Likewise, \(N^{\mu}\) and \(N^{\mu}_{\mathrm{eq}}\) are the net-particle four-currents corresponding to \((f,\bar{f})\) and \((f_{\mathrm{eq}},\bar{f}_{\mathrm{eq}})\), respectively. These variables can be calculated by using the relations [51; 55] \[\begin{split} T^{\mu\nu}&=\int dK\ k^{\mu}k^{\nu}(f+ \bar{f}),\\ N^{\mu}&=\int dK\ k^{\mu}(f-\bar{f}),\end{split} \tag{3}\] where \(dK=d^{3}k/[(2\pi)^{3}E_{\bf k}]\) is the Lorentz invariant momentum space integration measure. The Landau matching conditions are satisfied by the Landau-Lifshitz's definition of four-velocity \(u^{\mu}\)[51; 55]: \[u^{\mu}=\frac{T^{\mu\nu}u_{\nu}}{u_{\rho}T^{\rho\sigma}u_{\sigma}}. \tag{4}\] In this definition, the momentum density and energy flux are zero in the local rest frame of the medium.5 Footnote 5: The Eckart’s definition of fluid velocity [51] becomes ambiguous at zero chemical potential and therefore not used here. We consider a static and uniform magnetic field along \(y\) direction, i.e., **B**=\(B_{0}\hat{y}\), which yields the BV equation of the form \[E_{\bf k}\frac{\partial f}{\partial t}+k_{x}\frac{\partial f}{ \partial x}+k_{y}\frac{\partial f}{\partial y}+k_{z}\frac{\partial f}{ \partial z}+qB_{0}\Big{(}k_{x}\frac{\partial f}{\partial k_{z}}-k_{z}\frac{ \partial f}{\partial k_{x}}\Big{)}\] \[=\frac{k^{\mu}u_{\mu}}{\tau_{c}}(f_{\rm eq}-f). \tag{5}\] The magnetic field thus affects the distribution function in the \((k_{x},k_{z})\) plane, but has no effect in the \(k_{y}\) direction. This suggest that during evolution, the magnetic field can by itself generate three-dimensional spatial anisotropies of any fluctuation present in the system. Further, in the RTA (without non-linearity), direct coupling between the fluctuation modes in the transverse \((xz)\) plane and the parallel \((y)\) direction is not expected. Consequently, such anisotropies can grow in the linear regime until the fluctuations decay to vanishingly small values. Solving the integro-differential Boltzmann-Vlasov equation in (6+1)-dimensional phase-space becomes computationally quite intensive as we are interested in studying the short-wavelength fluctuations that require small lattice spacing and hence large number of lattice points. We take recourse to a tractable (3+1)-dimension equation, where we consider the evolution of distribution function \(f\equiv f(t,x,k_{x},k_{z})\) for magnetic field pointing along \(y\) direction. This implies a variation of \(f\) in phase-space along \(x\) direction and homogeneity along \(y\) direction with \(k_{y}=0\).6 Further, \(f\) is taken homogeneous along spatial \(z\) direction, but its variation is accounted along \(k_{z}\). 
This allows us to study the effect of magnetic field that generates finite values of \(T^{0z}\) and \(T^{xz}\).7 For numerical simulations we rewrite the BV equations for particles and anti-particles in the dimensionless form Footnote 6: This assumption will not alter our final conclusions as the evolution of the distribution functions \((f,\bar{f})\) is unaffected by the magnetic field along \(y\) direction. \[E_{\bf k^{\prime}}\frac{\partial f}{\partial t^{\prime}}+k^{ \prime}_{x}\frac{\partial f}{\partial x^{\prime}}+\beta_{0}\Big{(}k^{\prime}_{ x}\frac{\partial f}{\partial k^{\prime}_{z}}-k^{\prime}_{z}\frac{\partial f}{ \partial k^{\prime}_{x}}\Big{)}=k^{\prime\mu}u_{\mu}(f_{\rm eq}-f),\] \[E_{\bf k^{\prime}}\frac{\partial\bar{f}}{\partial t^{\prime}}+k^ {\prime}_{x}\frac{\partial\bar{f}}{\partial x^{\prime}}-\beta_{0}\Big{(}k^{ \prime}_{x}\frac{\partial\bar{f}}{\partial k^{\prime}_{x}}-k^{\prime}_{z} \frac{\partial\bar{f}}{\partial k^{\prime}_{x}}\Big{)}=k^{\prime\mu}u_{\mu}( \bar{f}_{\rm eq}-\bar{f}), \tag{6}\] where we have introduced the dimensionless variables \(t^{\prime}\)=\(t/\tau_{c}\), \(x^{\prime}\)=\(x/\tau_{c}\), \({\bf k^{\prime}}\)=\({\bf k}/m_{0}\) (hence \(E_{\bf k^{\prime}}\)=\(E_{\bf k}/m_{0}\)), with \(|{\bf k}|=\sqrt{k_{x}^{2}+k_{z}^{2}}\). The term \(\beta_{0}\)=\(|qB|\tau_{c}/m_{0}\), is dubbed as the "magnetic field parameter" which is varied to study the impact of magnetic field \(B\) on the medium evolution. The two coupled differential equations in Eq. (6) are simultaneously solved numerically for complete dynamical evolution. To ensure energy-momentum and net-particle four-current conservations by the RTA collision kernel, the Landau matching conditions given in Eq. (2) are imposed, i.e., \(\varepsilon=\varepsilon_{\rm eq}(T,\mu)\) and \(n=n_{\rm eq}(T,\mu)\)[57; 58; 44; 55]. The energy density \(\varepsilon=u_{\mu}u_{\nu}T^{\mu\nu}\) and the net-particle density \(n=u_{\mu}N^{\mu}\) can be obtained from Eq. (3). Similarly, the equilibrium energy density \(\varepsilon_{\rm eq}=u_{\mu}u_{\nu}T_{\rm eq}^{\mu\nu}\) and the net-particle density \(n=u_{\mu}N_{\rm eq}^{\mu}\) can be obtained from Eq. (3) by using the local-equilibrium distributions \[f_{\rm eq}=\frac{1}{\exp{(\beta E_{\bf k}-\alpha)\pm 1}},\ \bar{f}_{\rm eq}= \frac{1}{\exp{(\beta E_{\bf k}+\alpha)\pm 1}}, \tag{7}\] where \(\pm\) refer to fermions and bosons, respectively. The associated equilibrium temperature \(T=1/\beta\), chemical potential \(\mu=\alpha/\beta\) are solved by using the matching conditions. It may be noted that alternative collision kernels are also available, such as the Bhatnagar-Gross-Krook (BGK) collision kernel [43] and its relativistic extensions [59; 60; 61; 62; 63] that conserves net-particle four-current, and the recently proposed novel RTA [64]. To generate the initial configuration of \(f\) (and \(\bar{f}\)), the following procedure is adopted. We start with a local equilibrium state defined by a temperature \(T(x^{\prime})\) that gives a local equilibrium distribution function \(f_{\rm eq}\). Specifically, we consider sine-Gaussian function for temperature fluctuations which gives \[T(x^{\prime})=T_{0}+\delta T\sin{\Big{(}\frac{2\pi rx^{\prime}}{L}\Big{)}}\exp( -x^{\prime 2}/2\chi^{2}), \tag{8}\] where \(T_{0}\) resembles the global equilibrium temperature with a corresponding distribution function \(f_{0}\). \(\delta T\) is the temperature fluctuation scale factor which is taken sufficiently small \(\delta T\)=\(0.01T_{0}\). 
This ensures that long timescales are required to smooth out the inhomogeneities in \(T(x^{\prime})\) and achieve global equilibrium in the system of total size \(L\). The Gaussian width is taken to be \(\chi\)=\(L/10\) and the "fluctuation parameter" \(r\) is varied to simulate the effects of magnetic field on different wavelength fluctuations. This procedure also ensures that the spatial variations in \(f_{\rm eq}\) are sufficiently localized compared to the total system size \(L\). We perturb further the local equilibrium state in the \((x^{\prime},k^{\prime}_{x},k^{\prime}_{z})\) space such that the system achieves an out-of-equilibrium state. For this purpose, we consider random fluctuations, \(\delta f\), on top of \(f_{\rm eq}\), which gives the initial distribution function as \(f\)=\(f_{\rm eq}\)+\(\delta f\). The choice of \(\delta f\) is dictated by the Landau matching conditions [44, 56] such that the initial out-of-equilibrium distribution function, \(f\), gives the same net-particle number density and energy density as that given by \(f_{\rm eq}\). The random fluctuation \(\delta f\) is then taken as \[\delta f={\cal R}_{x^{\prime}k^{\prime}_{x}k^{\prime}_{z}}f_{\rm eq}, \tag{9}\] where \({\cal R}_{x^{\prime}k^{\prime}_{x}k^{\prime}_{z}}\) is a small random number in the \((x^{\prime},k^{\prime}_{x},k^{\prime}_{z})\) space which varies from negative to positive values. The maximum value of \(|{\cal R}_{x^{\prime}k^{\prime}_{x}k^{\prime}_{z}}|\) is set by the RTA condition \(\delta f\ll f_{\rm eq}\), which gives \(|{\cal R}_{x^{\prime}k^{\prime}_{x}k^{\prime}_{z}}|\ll 1\). We set the maximum value of \(|{\cal R}_{x^{\prime}k^{\prime}_{x}k^{\prime}_{z}}|=0.01\). Likewise, an initial configuration for \(\bar{f}\) is also generated. Note that our final conclusions of the study are completely independent of the choices of initial fluctuations. The fluctuations given in Eqs. (8) and (9) may be considered as an individual component out of all possible sources of non-equilibrium fluctuations present in the system, such as the initial-state or hydrodynamic fluctuations present in relativistic energy nuclear collisions [30, 31]8. It is therefore important to analyze the effects of magnetic field on the evolution of these fluctuations. Footnote 8: For various modes of energy density fluctuations in the transverse plane of the colliding system, see also Ref. [65]. ## III Simulation Details The Boltzmann-Vlasov equations (6) are solved by performing numerical simulations on a lattice of dimension \(1500\times 500\times 500\) with 1500 lattice points along \(x^{\prime}\) direction. The spatial and momentum step sizes are chosen to be \(\Delta x^{\prime}\)=\(\Delta k^{\prime}_{x}\)=\(\Delta k^{\prime}_{z}\)=0.01, yielding a total system size of \(L=15\) in spatial direction and \(2L_{{}_{k^{\prime}}}=5\) in the momentum direction. The time step for evolution of distribution functions is taken to be \(\Delta t^{\prime}\)=\(\Delta x^{\prime}/2\). At the initial time, the medium velocity is considered to be zero. The simulations are performed using the second-order Leapfrog method [66] with periodic boundary conditions along all the three directions (one space and two momentum). With the given choice of simulation parameters, it is convenient to consider a medium of charged pions \(\pi^{\pm}\) (of mass \(m_{0}=m_{\pi}=140\) MeV) which is slightly away from the equilibrium state with an ambient (global equilibrium) temperature of \(T_{0}=50\) MeV. 
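As an illustration of this setup, a minimal sketch (ours, not the authors' simulation code) assembles the initial state of Eqs. (7)-(9) on a reduced lattice; the grid sizes are shrunk from the paper's \(1500\times 500\times 500\) lattice so the snippet runs quickly, and the Landau matching constraint on \(\delta f\), Eq. (2), is omitted:

```python
import numpy as np

m0, T0, muQ = 140.0, 50.0, 100.0     # MeV: pion mass, ambient T0, charge chemical
                                     # potential (mu_Q = 100 MeV is used for most results)
L, Lk = 15.0, 2.5                    # spatial size and momentum half-extent (dimensionless)
nx, nk = 150, 50                     # reduced lattice (paper: 1500 and 500 points)
x = np.linspace(-L/2, L/2, nx)
kx = np.linspace(-Lk, Lk, nk)
kz = np.linspace(-Lk, Lk, nk)
KX, KZ = np.meshgrid(kx, kz, indexing="ij")
E = np.sqrt(KX**2 + KZ**2 + 1.0)     # E_k' = E_k / m0 on the k_y = 0 slice

# Sine-Gaussian temperature fluctuation, Eq. (8)
r, dT, chi = 2.0, 0.01 * T0, L / 10.0
T = T0 + dT * np.sin(2*np.pi*r*x/L) * np.exp(-x**2 / (2*chi**2))

# Local-equilibrium Bose-Einstein distributions, Eq. (7): beta*E_k = (m0/T)*E_k'
beta = m0 / T[:, None, None]
alpha = muQ / T[:, None, None]
f_eq    = 1.0 / (np.exp(beta*E - alpha) - 1.0)   # pi+
fbar_eq = 1.0 / (np.exp(beta*E + alpha) - 1.0)   # pi-

# Random perturbation, Eq. (9), with |R| <= 0.01 (the full simulation further
# constrains delta f via the Landau matching conditions, which we skip here)
rng = np.random.default_rng(0)
f    = f_eq    * (1.0 + 0.01 * rng.uniform(-1.0, 1.0, f_eq.shape))
fbar = fbar_eq * (1.0 + 0.01 * rng.uniform(-1.0, 1.0, fbar_eq.shape))
```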
The charge chemical potential \(\mu_{Q}\) is varied between \(0-100\) MeV, though most of our results are presented at \(\mu_{Q}=100\) MeV.9 In the equilibrium, this pionic medium follows the Bose-Einstein distribution function. Footnote 9: At a non-zero \(\mu_{Q}\), the effects of magnetic field on the evolution of fluctuations become explicit. In the study, the strength of magnetic field is constrained by the thermal energy (\(\sim T\)) of the medium [40] and taken in the marginally weak field limit \(\sqrt{|qB|}<T\)[53]. (A large \(B\) would cause Landau quantization of the energy levels in the transverse plane, which is not the interest of the present study.) In terms of the magnetic field parameter \(\beta_{0}\) of Eq. (6), this condition becomes equivalent to \(\beta_{0}<\tau_{c}T^{2}/m_{0}\). For the pionic medium considered at \(T\simeq 50\) MeV with \(\mu_{Q}=0-100\) MeV and for typical values of relaxation time \(\tau_{c}\gtrsim 15\) fm [67, 68], the above condition gives \(\beta_{0}\lesssim 1\). Accordingly, we have taken values of \(\beta_{0}=0-0.7\) in this analysis. It may be noted that the strength of the magnetic field generated in the medium in relativistic heavy-ion collision can be estimated from the difference in the polarizations of \(\Lambda\) and \(\bar{\Lambda}\) hyperons (\(\Delta{\cal P}={\cal P}_{\Lambda}-{\cal P}_{\bar{\Lambda}}\)) via \(B\propto T_{f}|\Delta{\cal P}|\)[13]. For STAR measurements of \({\cal P}_{\Lambda}\) and \({\cal P}_{\bar{\Lambda}}\) in Au+Au collisions at c.m. energy \(\sqrt{s_{NN}}=200\) GeV [69], a conservative upper bound of \(|qB|<2.7\times 10^{-3}m_{\pi}^{2}\) was estimated at the level of one standard deviation at a freeze-out temperature of \(T_{f}=150\) MeV. Since the observed value of \(|\Delta{\cal P}|\) is found to increase rapidly with decreasing collision energy up to \(\sqrt{s_{NN}}=7.7\) GeV [69, 70, 71], the magnetic field \(|qB|\) and thereby \(\beta_{0}\) could also increase (see also Ref. [62], where the magnetic field was shown to increase at lower collision energy mainly due to early freeze-out time). Moreover, for three standard deviations, the upper bound on the magnetic field strength becomes about six times larger than the bound given above [13]. ## IV Simulation Results and Discussions ### Energy density fluctuations In the Boltzmann-Vlasov simulation, we solve the distribution functions of particles and anti-particles at each space-time point and calculate the "dimensionless" energy density of the system by using the relation \[\hat{\varepsilon}(x^{\prime},t^{\prime})=\int_{-L_{\cal W}}^{L_{\cal W}}\int_{-L _{\cal W}}^{L_{\cal W}}dk^{\prime}_{x}dk^{\prime}_{z}\ E_{{}_{\cal W}}(f+\bar{ f}). \tag{10}\] The energy density fluctuations can be obtained from \[\delta\hat{\varepsilon}=\frac{1}{\hat{\varepsilon}_{0}}\{\hat{\varepsilon}(x^ {\prime},t^{\prime})-\langle\hat{\varepsilon}(x^{\prime},t^{\prime})\rangle\}, \tag{11}\] where \(\hat{\varepsilon}_{0}\) is the global equilibrium energy density [calculated from Eq. (10) by using \((f_{0},\bar{f}_{0})\)], and \(\langle\hat{\varepsilon}(x^{\prime},t^{\prime})\rangle\) is the average of \(\hat{\varepsilon}(x^{\prime},t^{\prime})\) over all lattice points at time \(t^{\prime}\). For our choice of \(T_{0}=50\) MeV and \(\mu_{Q}=100\) MeV, one obtains \(\hat{\varepsilon}_{0}\approx 2.4\). 
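Continuing the sketch above (again our illustration, not the production code), Eqs. (10) and (11) can be evaluated directly on this grid by trapezoidal quadrature over the momentum plane; with \(T_{0}=50\) MeV and \(\mu_{Q}=100\) MeV the global-equilibrium integral comes out at roughly 2.4, consistent with the value quoted in the text:

```python
def energy_density(f, fbar):
    """Dimensionless Eq. (10): integrate E_k' (f + fbar) over the kx'-kz' plane."""
    integrand = E[None, :, :] * (f + fbar)
    return np.trapz(np.trapz(integrand, kz, axis=2), kx, axis=1)

eps = energy_density(f, fbar)                         # eps_hat(x', t'=0)

# Global equilibrium at uniform T0 for the normalization eps_hat_0
f0    = 1.0 / (np.exp((m0/T0)*E - muQ/T0) - 1.0)
f0bar = 1.0 / (np.exp((m0/T0)*E + muQ/T0) - 1.0)
eps0 = np.trapz(np.trapz(E * (f0 + f0bar), kz, axis=1), kx, axis=0)

delta_eps = (eps - eps.mean()) / eps0                 # Eq. (11)
```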
Figure 1 shows the energy density fluctuations, \(\delta\hat{\varepsilon}\), at the initial time \(t/\tau_{c}=0\) (dash-dotted line), and at time \(t/\tau_{c}=3.0\) in the absence (solid line) and presence (dashed line) of the magnetic field, for the magnetic field parameter \(\beta_{0}=0.5\) and the fluctuation parameter \(r=2.0\). Compared to the initial state, at later times the fluctuation spreads out spatially and the peak amplitudes are dominantly suppressed. The inclusion of magnetic field dampens the evolution/expansion of the underlying medium, which in turn slows down the propagation of the fluctuations in the transverse direction. As a result, the fluctuations persist with somewhat larger magnitudes and for a longer duration, and should influence the final observables compared to the magnetic-field-free situation. ### 1. Fourier modes of energy density fluctuations In this subsection, we analyze the effects of magnetic field on energy density fluctuations in terms of momentum modes. We note that in a non-equilibrium state various momentum modes can be present, including very high momentum modes whose decay timescales are smaller than or equal to the local-equilibration timescale. The momentum of the critical mode (the mode whose decay timescale is comparable to the local-equilibration timescale) naturally sets an ultraviolet cutoff of fluctuations, above which all higher momentum modes of fluctuations are suppressed in local-equilibrium. In RTA, this cutoff is set by the local equilibrium relaxation time \(\tau_{c}\), where the modes with momenta \(\kappa\gtrsim\tau_{c}^{-1}\) are suppressed at the timescale of \(\tau_{c}\), while modes with momenta \(\kappa\lesssim\tau_{c}^{-1}\) can survive to further participate in the (hydrodynamic or RMHD) evolution. In the following analysis, we use this fact to set the momentum range (wavelength) of initial fluctuations, by accordingly varying the fluctuation parameter \(r\) of Eq. (8) in the range \(r\in[0.5,4.8]\). To quantify the evolution of energy density fluctuations with and without magnetic field, we perform a Fourier transform from configuration \(x^{\prime}\) space to the momentum \(\kappa^{\prime}\) space as \[\Pi(\kappa^{\prime},t^{\prime})=\frac{1}{L}\int_{-L/2}^{L/2}dx^{\prime}\ \delta\hat{ \varepsilon}(x^{\prime},t^{\prime})\ e^{i\kappa^{\prime}x^{\prime}}, \tag{12}\] where \(\kappa^{\prime}=\kappa\tau_{c}\) is the dimensionless momentum of the mode \(\Pi(\kappa^{\prime},t^{\prime})\), and \(\kappa\) the dimensionful momentum. Figure 2 shows the momentum spectrum (Im[\(\Pi(\kappa^{\prime},t^{\prime})\)] versus \(\kappa^{\prime}\)) of energy density fluctuations at the initial time (dash-dotted line), and at \(t/\tau_{c}=3.0\) in the absence (solid line) and in the presence (dashed line) of magnetic field, for the same simulation parameters as used in Fig. 1. At the initial time \(t=0\), the peak momentum of the spectrum at \(\kappa_{p}\tau_{c}=0.88\) corresponds to the most dominant mode; the magnitude and position of the peak depend on the fluctuation parameter \(r\) of Eq. (8). In the inset of Fig. 2, the modulus of modes, \(|\Pi(\kappa^{\prime},t^{\prime})|\), versus \(\kappa\tau_{c}\) is plotted for \(\beta_{0}=0\) and \(0.5\) at \(t/\tau_{c}=3.0\). The modulus of modes, \(|\Pi(\kappa^{\prime},t^{\prime})|\), quantifies the strength of fluctuations present in the system; thus, as the fluctuations evolve and damp, each mode Im[\(\Pi(\kappa^{\prime},t^{\prime})\)] would decrease with increasing time.
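A direct discretization of Eq. (12), continuing the earlier sketch (variable names `x`, `L`, and `delta_eps` as defined there), reads:

```python
def fourier_mode(delta_eps, kappa):
    """Discrete version of Eq. (12) for one dimensionless momentum kappa'."""
    return np.trapz(delta_eps * np.exp(1j * kappa * x), x) / L

kappas = 2.0 * np.pi * np.arange(1, 40) / L        # modes resolved on the box
Pi = np.array([fourier_mode(delta_eps, kp) for kp in kappas])
# Im(Pi) picks out the sine-like fluctuation seeded by Eq. (8); |Pi| measures
# the surviving strength of each mode as the system evolves.
```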
Figure 1: Energy density fluctuations at the initial time \(t/\tau_{c}=0\) (dash-dotted line), and at time \(t/\tau_{c}=3.0\) in the absence (solid line) and presence (dashed line) of magnetic field for the magnetic field parameter \(\beta_{0}=0.5\) and the fluctuation parameter \(r=2.0\).

Figure 2: Momentum spectrum of energy density fluctuations at initial time (dash-dotted line), and at time \(t/\tau_{c}=3.0\) in the absence (solid line) and in the presence (dashed line) of magnetic field, for the same simulation parameters as used in Fig. 1. The dotted vertical line indicates the mode at momentum \(\kappa\tau_{c}=1\). In the inset, the modulus of modes versus \(\kappa\tau_{c}\) is plotted for \(\beta_{0}=0\) and \(0.5\) at \(t/\tau_{c}=3.0\).

Since the high momentum modes decay faster, the peak of the spectrum shifts towards lower momenta at later times, as can be seen at \(t/\tau_{c}=3.0\). Moreover, certain higher momentum modes in the spectrum become negative, which indicates that these modes are not just decaying but also performing oscillations over time; see Fig. 3 for such damped harmonic oscillations of various modes present in the spectrum. In Fig. 2 we also present the momentum spectrum in the presence of magnetic field at time \(t/\tau_{c}=3.0\) (dashed line). The magnetic field clearly affects the entire spectrum of fluctuations, retaining to some extent the strength of the initial fluctuations in the low momentum regime (\(\kappa\tau_{c}\lesssim 1.2\)), while suppressing the _modulus_ of the higher momentum modes relative to the \(B=0\) situation; see the inset of Fig. 2. This leads to a characteristic change in the energy density fluctuations during the evolution towards equilibrium. Below, we show that the magnetic field increases the damping coefficient of the mode oscillations, causing such characteristic changes in the fluctuations. To determine in which momentum regime the fluctuation modes are strongly affected by the magnetic field at a given time, we display in Fig. 4 the momentum spectrum for the difference of modes with and without magnetic field at time \(t/\tau_{c}=3.0\). At the starting time \(t=0\), the momentum spectra with and without \(B\) are identical. Subsequently, at later times, the magnetic field is found to influence various modes differently, leading to such a structure. The maximum difference, corresponding to the peak position, occurs at a momentum \(\kappa\tau_{c}\simeq 1.0\), which is larger than the peak-position momentum (\(\kappa\tau_{c}\simeq 0.5\)) in Fig. 2. This implies that at any instant, the magnetic field affects quantitatively more the evolution of the higher momentum modes of fluctuations, as these fast modes experience a stronger Lorentz force (along the \(z^{\prime}\) direction). For further analysis we focus on the time evolution of the most dominant mode of energy density fluctuations. Figure 5(a) shows the time evolution of \(\mathrm{Im}[\Pi(\kappa_{p}^{\prime},t^{\prime})]\) for the dominant mode for \(\kappa_{p}^{\prime}=\kappa_{p}\tau_{c}=0.88\) (corresponding to fluctuation parameter \(r=2.0\)) in the absence (solid line) and presence (dashed line) of magnetic field. It is clear that the effect of magnetic field becomes noticeable at times \(t/\tau_{c}\gtrsim 1\): it enforces a higher magnitude of the fluctuation strength at early times, followed by a gradual increase in the damping of the oscillating modes (as shown in the inset). In Fig. 5(b), we also show the evolution of a high momentum mode, \(\kappa_{p}\tau_{c}=2.0\) (for \(r=4.8\)).
This high momentum (short wavelength) mode is strongly suppressed and quickly damped in presence of magnetic field. The above results demonstrate that magnetic field can affect the evolution of energy density fluctuations in the transverse plane at the timescale \(t/\tau_{c}\sim 1\). Consequently, the characteristics of fluctuations in three dimensional physical space can get modified, which may generate additional spatial anisotropies. For a precise quantification of the growth of these anisotropies due to magnetic field, it is necessary to perform a full (6+1)-dimensional phase-space simulation. Nevertheless, in the present (3+1)D evolution, the relation between the dominant modes of fluctuations with and without magnetic field can provide some crucial insight about this growth. As mentioned above and evident from Figs. 3 and 5, the time evolution of \(\mathrm{Im}[\Pi(\kappa_{p}^{\prime},t^{\prime})]\) can be best fitted with the damped harmonic oscillator function: \[\mathrm{Im}[\Pi(\kappa_{p}^{\prime},t^{\prime})]=\alpha_{0}\cos(\omega t^{ \prime}-\phi)\exp(-\gamma_{m}t^{\prime}). \tag{13}\] Here \(\alpha_{0}\equiv\alpha_{0}(\kappa_{p}^{\prime})\) is the amplitude scale factor of the oscillator, \(\omega\equiv\omega(\kappa_{p}^{\prime})\) the dimensionless angular frequency, \(\phi\equiv\phi(\kappa_{p}^{\prime})\) the phase, and \(\gamma_{m}\equiv\gamma_{m}(\kappa_{p}^{\prime})\) the dimensionless damping coefficient.

Figure 3: Damped harmonic oscillations of various non-zero modes present in the spectrum of Fig. 2 (without magnetic field). Each mode is normalized with its initial value \(\alpha_{0}\) (at \(t/\tau_{c}=0\)) and marked with the corresponding momentum \(\kappa\tau_{c}\).

We find that the magnetic field has an insignificant effect on the oscillatory factor \(\alpha_{0}\cos(\omega t^{\prime}-\phi)\), but increases the damping coefficient \(\gamma_{m}\); further, \(\omega\) is always greater than \(\gamma_{m}\) -- representing an underdamped oscillator. This yields a relation between the fluctuation modes with and without \(B\) as \[\mbox{Im}[\Pi(\beta_{0})]\approx\mbox{Im}[\Pi(0)]\exp[-\delta\gamma_{m}\gamma_ {m}(0)t^{\prime}], \tag{14}\] where \(\delta\gamma_{m}=[\gamma_{m}(\beta_{0})-\gamma_{m}(0)]/\gamma_{m}(0)\) is the fractional change in the damping coefficient induced by the magnetic field, which is completely independent of the initial magnitude of energy density fluctuations. The decay of the modes in presence of \(B\) can then be conveniently determined by \(\delta\gamma_{m}\). Note that \(\gamma_{m}^{-1}\tau_{c}\) sets a decay timescale of the mode, which can also be identified as the "relaxation time" of the particular mode. Figure 6 shows the variation of the "dimensionless decay timescale" \(\gamma_{m}^{-1}\) of a mode with peak momentum \(\kappa_{p}\tau_{c}\) for different values of the magnetic field parameter \(\beta_{0}\). The values of momentum in the range \(\kappa_{p}\tau_{c}\in[0.64,2.0]\) are obtained by varying the fluctuation parameter between \(r\in[0.5,4.8]\). In general, for any value of \(\beta_{0}\), the decay timescale \(\gamma_{m}^{-1}\) exhibits a decreasing trend with increasing \(\kappa_{p}\tau_{c}\) as the higher momentum modes decay fast. A stronger magnetic field in the medium, i.e.,
with increasing \(\beta_{0}\), reduces the decay timescale especially of the slow modes, which experience the magnetic force for a longer duration -- even though, at any instant of time, the higher momentum modes are quantitatively more influenced by the magnetic field (see Fig. 4). Each curve in Fig. 6, corresponding to a \(\beta_{0}\) value, can be best fitted with a power law scaling \(\gamma_{m}^{-1}=s_{0}(\kappa_{p}\tau_{c})^{-s_{1}}+s_{2}\). The mode that has a decay timescale comparable to the local-equilibration timescale of the system can be identified with a momentum cutoff \(\kappa_{c}\tau_{c}\) above which all the higher momentum modes are suppressed. In other words, it resembles a "dimensionless" wavelength cutoff \(\lambda_{c}^{*}=\lambda_{c}/\tau_{c}\) below which any inhomogeneity in the energy density is suppressed. This cutoff can be determined by putting \(\gamma_{m}^{-1}=1\) in the above power law scaling. Figure 7 illustrates the qualitative growth of the wavelength cutoff, \(\lambda_{c}^{*}(\beta_{0})\), with the magnetic field parameter \(\beta_{0}\); the value of \(\lambda_{c}(\beta_{0}=0)\) turns out to be about \(2.5\tau_{c}\). Note that \(\lambda_{c}(\beta_{0})\) is equivalent to the coarse-grained length scale inherent in the hydrodynamic description of the medium evolution. The above analysis indicates that magnetic field suppresses the short wavelength fluctuations up to a larger wavelength (as compared to \(B=0\)), resulting in a smoother coarse-grained structure of the hydrodynamic variables. Given that \(\delta\gamma_{m}\) is enhanced more towards the lower momentum modes (see Fig. 6), we can also obtain a power law scaling behavior of \(\delta\gamma_{m}\) with \(\kappa_{p}\tau_{c}\) and \(\beta_{0}\) (as found for \(\gamma_{m}^{-1}\)), namely: \[\delta\gamma_{m}\approx\beta_{0}^{2}\left[2(\kappa_{p}\tau_{c})^{-(0.03/\beta _{0}+2)}-0.5\right]. \tag{15}\] This relation is completely independent of the choice of initial energy density fluctuations, and perfectly valid in the above-mentioned range of \(\kappa_{p}\tau_{c}\).

Figure 5: Time evolution of the most dominant mode of energy density fluctuations (a) for \(\kappa_{p}\tau_{c}=0.88\) (corresponding to fluctuation parameter \(r=2.0\)) and (b) for \(\kappa_{p}\tau_{c}=2.0\) (for \(r=4.8\)). The results are in the absence (solid line) and presence (dashed line) of magnetic field. Insets show the mode evolution for a longer time.

Figure 6: Variation of the "dimensionless decay timescale" \(\gamma_{m}^{-1}\) of a mode with peak momentum \(\kappa_{p}\tau_{c}\) for different values of the magnetic parameter \(\beta_{0}\).

From the above relation one can determine the fractional change in the modes, generated by the magnetic field, by using the expression \(\delta\Pi=({\rm Im}[\Pi(\beta_{0})]-{\rm Im}[\Pi(0)])/{\rm Im}[\Pi(0)]\). Although the damping coefficient of modes and the effects of magnetic field would be quite different in three dimensional physical space, \(\delta\Pi\) may be valid in that case as well. Hence, using Eqs. (14) and (15), \(\delta\Pi\) provides a measure of the spatial anisotropies in the energy density solely generated by the magnetic field. ### B. Evolution of \(\hat{T}^{\mu\nu}\) and dependence on \(\mu_{Q}\) In the previous subsection, the effects of magnetic field on the evolution of energy density fluctuations (\(\delta\hat{\varepsilon}=\delta\hat{T}^{00}\)) have been studied and found to increase the damping coefficient of mode oscillations.
In this subsection, we shall explore magnetic effects on the other components of the energy-momentum tensor \(\hat{T}^{\mu\nu}\), which is calculated by using Eq. (3) and performing the momentum integrals as done in Eq. (10). The spatial variation of the energy-momentum tensor components, \(\hat{T}^{0x}\), \(\hat{T}^{0z}\), \(\hat{T}^{xz}\), and \((\hat{T}^{xx}-\hat{T}^{zz})\), is shown in Fig. 8 at time \(t/\tau_{c}=3.0\) in the absence (solid line) and in the presence (dashed line) of magnetic field, for the same simulation parameters as used in Fig. 1. The variation of \(\hat{T}^{0x}\) arises due to the spatial gradients present in the energy density fluctuations as shown in Fig. 1. It is clear from Fig. 8 that the magnetic field suppresses \(\hat{T}^{0x}\), which essentially leads to the increase in the damping coefficient of the mode oscillations (as shown previously). The most appreciable effects of magnetic field involve the \(z\)-components, namely \(\hat{T}^{0z}\) and \(\hat{T}^{xz}\), which become rather large compared to the vanishingly small values for \(B=0\). Such a behavior can be traced to the Lorentz force exerted by the magnetic field, which acts along the \(z^{\prime}\) direction on the medium evolving along \(x^{\prime}\). The magnetic field is seen to also suppress substantially the magnitude of \((\hat{T}^{xx}-\hat{T}^{zz})\). It is important to comment on the effects of the charge chemical potential \(\mu_{Q}\) on the characteristic change seen for the components of \(\hat{T}^{\mu\nu}\) and, in particular, on the energy density fluctuations in the presence of \(B\). At \(\mu_{Q}=0\) (when the net-electric charge density \(\hat{n}=\hat{n}_{+}-\hat{n}_{-}=0\) due to identical particle and anti-particle number densities), the equal and opposite Lorentz force exerted by the magnetic field along the \(z^{\prime}\) direction on the positively and negatively charged particles causes \(\hat{T}^{0z}\) and \(\hat{T}^{xz}\) to vanish throughout their evolution. Only at _finite_ \(\mu_{Q}\) do the magnitudes of \(\hat{T}^{0z}\) and \(\hat{T}^{xz}\) become non-zero in the presence of magnetic field, due to the net-electric charge density imbalance (as seen in Fig. 8). This also suggests that, irrespective of the value of \(\mu_{Q}\), a finite \(B\) will affect solely the magnitude of the momentum density \(\hat{T}^{0x}\) and its accompanying energy density fluctuations \(\delta\hat{\varepsilon}\). Consequently, the earlier discussed scaling of \(\delta\gamma_{m}\) for a given \(\beta_{0}\) will remain unaltered, which we have verified for a range of \(\mu_{Q}/T_{0}=0-2.0\) at a fixed equilibrium temperature \(T_{0}\). ### C. Fluctuations with multiple modes To illustrate the effects of magnetic field on energy density fluctuations having more than one initial dominant mode, we consider fluctuations of the form \[T(x^{\prime})=T_{0}+\left[\delta T\cos\left(\frac{2\pi r_{1}x^{\prime}}{L}\right)+\delta T_{1}\sin\left(\frac{2\pi r_{2}x^{\prime}}{L}\right)\right]\exp(-x^{\prime 2}/2\chi^{2}), \tag{16}\] where the mode mixing parameter \(\delta T_{1}\) is varied between zero and \(\delta T\) to generate different energy density fluctuations of different amplitudes.

Figure 8: Spatial variation of the components of the energy-momentum tensor (a) \(\hat{T}^{0x}\), (b) \(\hat{T}^{0z}\), (c) \(\hat{T}^{xz}\), and (d) (\(\hat{T}^{xx}-\hat{T}^{zz}\)) at a time \(t/\tau_{c}=3.0\) in the absence (solid line) and presence (dashed line) of the magnetic field for the same simulation parameters as used in Fig. 1.
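For illustration, the two-mode profile of Eq. (16) can be generated by extending the earlier sketch (variable names `x`, `L`, `T0`, `dT`, and `chi` as defined there; the values of \(r_{1}\) and \(r_{2}\) used in the study are quoted below):

```python
r1, r2, dT1 = 0.5, 8.0, dT          # dT1 = dT corresponds to maximal mode mixing
T_multi = T0 + (dT * np.cos(2*np.pi*r1*x/L)
                + dT1 * np.sin(2*np.pi*r2*x/L)) * np.exp(-x**2 / (2*chi**2))
```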
For the present study, we have taken \(r_{1}=0.5\) and \(r_{2}=8.0\), which allows the generation of a low and a very high momentum mode, respectively. The choice of sine and cosine functions invokes some arbitrariness (such as an asymmetry about \(x^{\prime}=0\)), resembling to some extent a general form of energy density fluctuations present in a physical system, for example, the initial-state fluctuations in the Glauber model [72] or in the gluon saturation model [73], as commonly employed for initial conditions in the modeling of relativistic heavy-ion collisions. The other simulation parameters are the same as taken previously. Figure 9 shows the spatial dependence of the evolution of the energy density fluctuations, \((\hat{\varepsilon}-\hat{\varepsilon}_{0})\). The results are for a relatively small mode mixing \(\delta T_{1}=0.2\delta T\), as shown in Fig. 9(a), and for maximal mixing \(\delta T_{1}=\delta T\), as shown in Fig. 9(b). The initial energy density profile (dash-dotted lines) becomes rather smooth at a later time of \(t/\tau_{c}=3.0\) (solid and dashed lines). This arises as the fluctuations of wavelengths shorter than the cutoff \(\lambda_{c}^{*}\) are dominantly suppressed at times \(t/\tau_{c}\gtrsim 1\). In the presence of magnetic field, at \(t/\tau_{c}=3.0\) (dashed lines), the energy density profile becomes slightly smoother and has a larger peak value as compared to \(B=0\) (solid lines); see the inset of Fig. 9(b). This smoothening is essentially caused by the enhanced suppression of the short wavelength (high momentum) modes, whereas the larger peak is due to the slow dissipation of the long wavelength (low momentum) fluctuations at early times (as discussed in Fig. 5). We present in Fig. 10 the momentum spectrum [the modulus of the Fourier modes, \(|\Pi_{m}(\kappa^{\prime},t^{\prime})|\), versus \(\kappa\tau_{c}\)] of the energy density fluctuations of Fig. 9(b). In the presence of magnetic field (dashed line), the high momentum dominant modes (at \(\kappa\tau_{c}\approx 3.0\) and \(4.0\)) are more suppressed, while the low momentum modes (\(\kappa\tau_{c}<1.2\)) are somewhat less dissipated [as compared to \(B=0\) (solid line)] at the later time \(t/\tau_{c}=3.0\). All these lead to qualitatively different characteristics of the fluctuations in the presence of magnetic field. ## V. Phenomenological Implications In relativistic heavy-ion collisions, the colliding nuclei create a deconfined medium of quarks and gluons [29, 74]. In the gluon saturation model, thermalization or hydrodynamization occurs at \(\tau_{\rm eq}\simeq 0.4\) fm [30], when the magnetic field can still survive with appreciable strength; the magnetic field may persist in the entire partonic and possibly hadronic phase if the medium has a large electrical conductivity [75]. In the pre-equilibration dynamics at \(\tau\lesssim 0.4\) fm, various short wavelength modes are present in the system with decay timescales smaller than or comparable to this time (see Fig. 2 in Ref. [31]). Such fluctuation modes have sources in initial-state fluctuations in the nucleon positions, parton production and dynamics, and hadron production and evolution. The magnetic field can increase the damping coefficient of mode oscillations and modify the characteristics of short wavelength fluctuations in the reaction plane (transverse to \(B\)), as demonstrated in this work. Consequently, this can have measurable effects on the azimuthal anisotropy of particle production/emission, namely the collective flow harmonics \(v_{n}(p_{T})=\langle\cos n(\phi-\Psi_{n})\rangle\) (especially the odd harmonics that are driven by initial-state fluctuations) and the flow fluctuations [36, 37, 38, 39, 76, 77].
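As a point of reference, the flow coefficient defined above can be estimated from a particle sample as in the following schematic (the reconstruction of the event-plane angle \(\Psi_{n}\) is not shown; all inputs are assumed given):

```python
import numpy as np

def flow_harmonic(phis, psi_n, n):
    """Estimate v_n = <cos n(phi - Psi_n)> from azimuths `phis` in one pT bin."""
    return np.mean(np.cos(n * (np.asarray(phis) - psi_n)))
```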
In particular, the flow and flow-fluctuation observables would exhibit a noticeable suppression, reflecting the enhanced damping of fluctuation modes over the evolution time as found here (Fig. 6). Moreover, a smoother energy density profile, induced by the suppression of short wavelength fluctuations in the presence of magnetic field, should also show some qualitative changes in the power spectrum of flow fluctuations [\(v_{n}(p_{T})\) versus \(n\)] at higher \(n\). The hydrodynamic fluctuations [32, 33, 78] and disturbances in the medium due to energy deposition by a partonic jet [79, 80, 81] can prevail during the entire evolution of the system. The thermal or hydrodynamic fluctuations are correlated over short length scales, which generates a short-range correlation peak at small rapidity separation \(\Delta y=y_{1}-y_{2}\) and a nontrivial structure at large \(\Delta y\) in the two-particle rapidity correlation [32, 33, 78]. Such long-range rapidity structures have been observed in multiparticle correlation measurements involving heavy ion and high-multiplicity light particle collision experiments at relativistic energies. Our analysis suggests magnetic damping of the peak at \(\Delta y\approx 0\) and a farther spread of the correlations in the rapidity separation. The disturbance generated by the energy-momentum deposited in the vicinity of hard jets traversing the QGP, and the modifications of the jet shape and jet substructure observables due to (enhanced) rescattering of the emitted soft gluons in the medium, will be sensitive to the magnetic field. On the other hand, near the critical end point in the QCD phase diagram, the correlations among fluctuations diverge, resulting in new fluctuation modes [82, 83]. The non-monotonous behavior in the event-by-event fluctuations with varying c.m. energy \(\sqrt{s_{NN}}\) signals the location of the critical point. As the QCD critical point is expected to be at finite baryon density and at moderate collision energy [84], the strength of the magnetic field will be relatively smaller; it, however, decays slowly and can slow down some of the modes of critical fluctuations via an increase of the damping coefficient. This can essentially enhance the magnitude of the observable signatures of the critical point. A detailed numerical simulation involving all the discussed features can provide quantitative effects of the magnetic field. ## VI. Summary and Conclusions In this paper we have studied the effects of magnetic field on the evolution of energy density fluctuations in the transverse direction. We find characteristic changes in the fluctuations at the timescale required by the system to achieve local thermal equilibrium. The magnetic damping of the underlying medium slows down the dissipation of the initial strength of the fluctuation near its peak, while spreading out the fluctuation spatially to larger distances at early times. The increased Lorentz force at later times enforces a larger damping of the fluctuation compared to the field-free case. A detailed Fourier mode analysis of the energy density fluctuations reveals that the low momentum modes, which survive longer in the entire evolution of the system, are strongly damped as compared to the fast evolving high momentum modes. This behavior is found to progressively increase with the strength of the magnetic field. However, at any instant of time, the magnetic field affects quantitatively more the high momentum modes as compared to the low momentum modes.
This leads to a growth in the cutoff for the shortest wavelength fluctuations present in the system. If this cutoff is identified with the coarse-grained length scale in the hydrodynamic description of the medium, it indicates an enhanced smoothening of the energy density profile in the presence of magnetic field. Further, the fluctuations in the direction transverse to the magnetic field are essentially affected, and moreover, various components of the energy-momentum tensor in the transverse direction are found to be modified differently: namely, \(\hat{T}^{0x}\) is suppressed, while the \(z\)-components \(\hat{T}^{0z}\) and \(\hat{T}^{xz}\) are generated for a fluid evolving along the \(x\)-direction. As a result, additional spatial and momentum anisotropies can be generated in the three dimensional physical space by the magnetic field. The present study has crucial phenomenological implications in the context of understanding the properties of the quark-gluon plasma formed in relativistic heavy-ion collisions, and in general for any small system whose constituents are electrically charged. In particular, the magnetic field can have a noticeable impact on potentially important observables such as the flow harmonics and flow fluctuations, the hydrodynamic fluctuations, the jet substructure, and the dynamics of correlations and fluctuations close to the QCD critical point. In this study, we have worked in the relaxation time approximation, where the nonlinear effects that can arise from a proper collision integral have been ignored. The non-linearity can affect the evolution of energy density fluctuations as well as the effects of magnetic field studied in this work. We defer its inclusion to future studies. ###### Acknowledgements. We would like to thank Rajeev Bhalerao and Sunil Jaiswal for useful discussions. The simulations were performed at the High Performance Computing cluster at TIFR, Mumbai. The authors acknowledge financial support by the Department of Atomic Energy (Government of India) under Project Identification No. RTI 4002.
2309.06358
Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering
Robustness in Natural Language Processing continues to be a pertinent issue, where state-of-the-art models underperform under naturally shifted distributions. In the context of Question Answering, work on domain adaptation methods continues to be a growing body of research. However, very little attention has been given to the notion of domain generalization under natural distribution shifts, where the target domain is unknown. With drastic improvements in the quality of and access to generative models, we answer the question: How do generated datasets influence the performance of QA models under natural distribution shifts? We perform experiments on 4 different datasets under varying amounts of distribution shift, and analyze how "in-the-wild" generation can help achieve domain generalization. We take a two-step generation approach, generating both contexts and QA pairs to augment existing datasets. Through our experiments, we demonstrate how augmenting reading comprehension datasets with generated data leads to better robustness towards natural distribution shifts.
Arijit Ghosh Chowdhury, Aman Chadha
2023-09-03T03:27:06Z
http://arxiv.org/abs/2309.06358v2
# Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering ###### Abstract Robustness in Natural Language Processing continues to be a pertinent issue, where state-of-the-art models underperform under naturally shifted distributions. In the context of Question Answering, work on domain adaptation methods continues to be a growing body of research. However, very little attention has been given to the notion of domain generalization under natural distribution shifts, where the target domain is unknown. With drastic improvements in the quality of and access to generative models, we answer the question: How do generated datasets influence the performance of QA models under natural distribution shifts? We perform experiments on 4 different datasets under varying amounts of distribution shift, and analyze how "in-the-wild" generation can help achieve domain generalization. We take a two-step generation approach, generating both contexts and QA pairs to augment existing datasets. Through our experiments, we demonstrate how augmenting reading comprehension datasets with generated data leads to better robustness towards natural distribution shifts. ## 1 Introduction Natural language processing has seen substantial progress over the last few years owing to the emerging abilities of pre-trained language models. In many benchmarks, large pre-trained models adapted to a target dataset reach or even surpass human performance. However, very often these models fail to generalize to changing test distributions. Through this work, we perform a systematic study of how "in-the-wild" generation can affect the distributional robustness of question-answering models trained on the popular Stanford Question Answering Dataset (SQUAD) Rajpurkar et al. (2016). Synthetic data generation is a widely adopted method for domain adaptation in QA systems Shakeri et al. (2020) Yue et al. (2021) Yue et al. (2022). However, domain adaptation methods have access to unlabelled/labelled data belonging to the target domain, and do not account for unseen natural distribution shifts. Our work studies the effect of generated data on distribution shifts where the target domain is unseen. The conception of a dataset has undergone significant evolution in recent times. This transformation has been catalyzed by the advent of generative models trained 'in-the-wild', such as those described in Brown et al. (2020), Bubeck et al. (2023), and Touvron et al. (2023). These models, which utilize vast and diverse datasets across a range of domains, have facilitated the infusion of the web with synthesized data of high calibre, applicable to an extensive array of conceptual topics. Interestingly, these models are not merely confined to generation based on a pre-established distribution; they possess the capacity for repeated prompting, resulting in the creation of markedly diverse data. In the context of this emerging model paradigm, our research investigates the following query: How do generated datasets affect the distributional robustness of Question Answering models? Specifically, we focus on _Natural Distribution Shifts_, where the test distribution is naturally occurring Wang et al. (2021); robustness is then defined in terms of a model's performance under distribution shift. We present an overview of our generation setup in Figure 1. For generating data, we utilize GPT-3.5 Brown et al. (2020), and create a question-answering dataset using questions provided in the SQUAD Rajpurkar et al. (2016) dataset.
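As a rough illustration of step one of the two-step setup (described in detail in Section 3), a generation call might look like the sketch below. The scripts themselves are not published in the paper; the pre-1.0 `openai` Python client is assumed, and the prompt wording follows Section 3.1.

```python
import openai  # pre-1.0 client assumed

openai.api_key = "YOUR_API_KEY"

def generate_context(question: str, max_words: int = 250) -> str:
    """Step 1 of the two-step pipeline: generate a SQUAD-style context
    paragraph conditioned on an existing SQUAD question (Section 3.1)."""
    prompt = f"Generate a paragraph which answers the following question: {question}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    context = response.choices[0].message.content
    # clip to ~250 words, matching the average SQUAD context length
    return " ".join(context.split()[:max_words])
```

Step two would then feed the returned paragraph to the SQUAD-trained question generation model of Section 3.2.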
We use a dual generation approach, by first prompting the language model to generate a context for a question given in the SQUAD dataset, and then generating question-answer pairs for the newly generated context. As the prevalence of generative AI continues to grow, it is anticipated that vast quantities of synthesized data will be utilized directly for machine learning model training. It is crucial to acknowledge, however, that there currently exists a scarcity of standardized evaluation and critique methods for such datasets. The generation of data unveils an array of unique opportunities and obstacles, thus warranting a more comprehensive evaluation process. Therefore, as a key part of our research, we present SQUAD-G, a question-answering dataset derived from generated data. This dataset will be made accessible to the public, facilitating straightforward and replicable benchmarking, alongside the assessment of the utility and critique of generated datasets, thus ensuring reliable and trustworthy natural language processing. ## 2 Related Work ### Distribution Shift in NLP Ramponi and Plank (2020) provide a comprehensive survey of domain adaptation and generalization problems in NLP, dividing methods into _data centric_ and _model centric_. In our work, we adopt a data centric approach towards domain generalization. Wang et al. (2022) note that there is often a significant overlap in test-train data for QA models, and SOTA models often fail to perform well on questions that cannot be memorized from training data. This motivates our work to create diverse datasets for training and evaluation. This work also highlights the scarcity of research done for generalization in QA models, especially for natural distribution shifts. Arora et al. (2021) outline several different kinds of out-of-distribution data in NLP, and highlight the need to better benchmark performance on these samples. ### Robustness in Question Answering Longpre et al. (2019) performed one of the first experiments towards domain agnostic question answering using data augmentation techniques that utilize pretrained models. Miller et al. (2020) introduce three new datasets for QA models, constructed from New York Times articles, Reddit Posts, and Amazon Product Reviews. This is the first comprehensive study analyzing the effect of natural distribution shift on QA models. Our work builds on this by utilizing these datasets to evaluate our data augmentation method. Furthermore, Miller et al. (2020) conduct an extensive evaluation of the robustness of different model and adaptation methods under 15 distribution shifts in question answering. However, they do not explore new methods that incorporate better generalization in QA models. Liu et al. (2021) highlight similar challenges in open domain question answering, breaking down the root causes of failure. Our work is grounded in extractive question answering, and aims to be the first study of the effects of creating datasets using large language models for downstream QA tasks. ### Generalization using Generated Data Gowal et al. (2021) demonstrate how generated data conditioned on the training set can lead to adversarially robust models, while Bartolo et al. (2021) create synthetic data to improve adversarial robustness in QA models. Our work confirms and extends this hypothesis for QA models specifically for natural distribution shifts. More recently, context generation has been used as a data augmentation method for text classification Mekala et al.
(2022) and information retrieval Su et al. (2022). Our work "instructs" a GPT-3.5 model Wei et al. (2022) to generate a context for a given question. Along similar lines, Bansal and Grover (2023) demonstrate how Stable Diffusion 1 can be used to create diverse datasets for image classification. We leverage the generation capabilities of large language models to generate diverse datasets for downstream tasks that can be inferenced using smaller models. Footnote 1: [https://github.com/CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) ## 3 Methodology ### Context Generation We first generate contexts by conditioning them on a question present in the SQUAD dataset. This allows the language model to generate a paragraph that can be used to generate question-answer pairs. Since the paragraph is generated using an existing question, the generated context is consistent with the informative trivia format of SQUAD-like datasets. To maintain further consistency, the generated context is clipped to be within 250 words, based on the average context length present in the SQUAD dataset. We prompt GPT-3.5 (gpt-3.5-turbo) 2 in the following manner: _Generate a paragraph which answers the following question: (question)_. Here the question is sampled from the SQUAD dataset. Figure 1 demonstrates the generation process. Additionally, Appendix A contains examples from the generation process. Footnote 2: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) ### Question Answer Generation After the context is created, the generated paragraph is used to create question-answer pairs. This is done by using a T5-based question generation model Lopez et al. (2020) that is trained on the SQUAD dataset, which takes a paragraph as an input and returns a question-answer pair. We use the open source3 implementation for this model. Additionally, we filter out QA pairs based on round-trip consistency Alberti et al. (2019). Footnote 3: [https://github.com/patii-suraj/question-generation](https://github.com/patii-suraj/question-generation) ## 4 Experiments ### Setup We train an extractive reading comprehension model using SQUAD V1.1, using the RoBERTa-Base model across all our experiments. We use a learning rate of \(3e-5\), a batch size of \(16\), and run our experiments for \(3\) epochs each. We use the implementation provided by HuggingFace, and run our models on a stand-alone Nvidia A100 GPU provided by Google Colab. We do not use GPT-3.5 as a baseline since the purpose of this study is to specifically measure the performance of smaller models. For all our experiments, we measure F1 and Exact Match scores to quantify performance on Natural Distribution Shift (NDS) datasets. ### Datasets We use the following datasets created by Miller et al. (2020) to set up our test bed: The **New Wikipedia** dataset contains newer QA pairs from the Wikipedia articles used by the SQUAD V1.1 dataset. It contains 7938 test samples from 48 contexts. The **New York Times** dataset contains articles from the New York Times which are then used to annotate QA pairs in the same format as SQUAD. It is ensured that the passage length statistics stay the same. It contains 10,065 test samples from 46 articles. The **Reddit** dataset contains articles from Reddit where the authors concatenated each post's title with its body. This dataset contains 9,803 test samples from 1969 posts. The **Amazon Product Reviews** dataset contains user-generated product reviews from the "Home and Kitchen" category on Amazon.
This data contains 9,885 test samples from 1909 reviews. ## 5 Results ### Does generated data help with distributional robustness? We evaluate the F1 and Exact Match scores of models trained with different datasets on natural distribution shift (NDS) benchmarks. We note the average EM and F1 numbers across three random seeds in Table 1. The models are trained on an equal amount of real and generated data. We find that when the model trained on SQUAD is subjected to natural distribution shift datasets, its performance significantly deteriorates. Figure 1: Overview of the generation system. Our method creates a generated dataset which is then augmented with the real dataset to train a question answering model. A noteworthy observation was that exclusive training on the generated data resulted in substandard performance on both the SQUAD and its Natural Distribution Shift (NDS) datasets. The inferior absolute performance could potentially be attributed to the distribution disparity between the source and the generated training datasets. Interestingly, we observe that for the model trained on the generated data, the performance gaps between the real validation dataset and its NDS datasets are low, which might be attributed to the benefits of training on diverse generated data. Across all experiments, we note that the performance decrease is slightly less pronounced on the New York Times test set, which resembles the Wikipedia-based training set in SQUAD most closely. Similarly, it is not surprising that the performance drop is substantial on user-generated noisy datasets like Amazon Reviews and Reddit posts. Finally, we expose our model to an evenly-distributed blend of real and generated datasets, with the goal of investigating the impact of generative augmentations. Our results reveal that the absolute performance of the model, when trained with a combination of real and generated data, either parallels or exceeds the performance of models trained exclusively on either real or generated datasets, across all naturally distributed datasets. This observation suggests that the incorporation of real data into the training process is indeed essential for attaining superior absolute performance. To summarize, while using solely generated data improves robustness at the expense of absolute performance, a blend of real and artificially generated data presents the ideal balance for robust and precise training. ### How much generated data is needed? Here, we investigate how different combinations of the generated dataset can help the classifiers take advantage of the complementary strengths of the two data sources (Table 2). To do so, we assessed the average performance of models trained with three different input mixing combinations created by using 50%, 100%, and 200% of the generated dataset. We observed an increase in performance on shifted datasets as the size of the generated data increases while keeping the amount of real data fixed. However, when the proportion of the generated data increases twofold while keeping the proportion of the real data fixed, we observe that the performance gains are only marginal. Additionally, we note that using only half of the generated data does not provide enough meaningful signal in terms of diversity and does not lead to major performance improvements compared to training on real data.
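The input mixing itself is simple to realize. Below is a sketch of ours (the paper does not publish its scripts): SQUAD-format JSON with a top-level "data" list is assumed, and fractions above 100% are realized by resampling with replacement, which is our assumption.

```python
import json
import random

def mix_squad_datasets(real_path, gen_path, gen_fraction=1.0, seed=42):
    """Blend a SQUAD-format real dataset with a slice of the generated one.
    gen_fraction = 0.5, 1.0, 2.0 corresponds to the 50%/100%/200% rows of
    Table 2; above 1.0 we resample with replacement (our assumption)."""
    with open(real_path) as f:
        real = json.load(f)["data"]
    with open(gen_path) as f:
        gen = json.load(f)["data"]
    rng = random.Random(seed)
    if gen_fraction <= 1.0:
        sampled = rng.sample(gen, int(len(gen) * gen_fraction))
    else:
        sampled = gen + rng.choices(gen, k=int(len(gen) * (gen_fraction - 1.0)))
    mixed = real + sampled
    rng.shuffle(mixed)
    return {"version": "squad-g-mix", "data": mixed}
```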
Overall, we found that the ideal split between real and generated data is a 50-50 split, where the two datasets are able to complement each other in terms of providing both diversity and in-domain samples at the same time. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Dataset** & SQUAD & \multicolumn{2}{c}{NewWiki} & \multicolumn{2}{c}{NYT} & \multicolumn{2}{c}{Amazon} & \multicolumn{2}{c}{Reddit} \\ \hline **Metrics** & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM \\ \hline Real data & 90.4 & 83.0 & 89.4 & 79.2 & 86.4 & 76.1 & 79.9 & 66.4 & 80.1 & 67.1 \\ Generated data & 79.5 & 64.6 & 80.1 & 65.3 & 76.5 & 63.2 & 72.4 & 59.5 & 72.7 & 60.2 \\ Real + Generated data & **92.7** & **84.7** & **91.1** & **80.4** & **88.9** & **79.3** & **80.3** & **67.1** & **81.7** & **68.7** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on natural distribution shift datasets. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Dataset** & SQUAD & \multicolumn{2}{c}{NewWiki} & \multicolumn{2}{c}{NYT} & \multicolumn{2}{c}{Amazon} & \multicolumn{2}{c}{Reddit} \\ \hline **Metrics** & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM \\ \hline Real + 50\% Generated data & 91.4 & 81.1 & 90.4 & 82.2 & 87.4 & 77.1 & 79.7 & 65.4 & 80.3 & 67.4 \\ Real + 100\% Generated data & 92.7 & 84.7 & 91.1 & 80.4 & **88.9** & **79.3** & 80.3 & 67.1 & **81.7** & **68.7** \\ Real + 200\% Generated data & **92.9** & **84.8** & **91.3** & **80.7** & 88.5 & 79.1 & **80.9** & **67.3** & 80.8 & 68.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on varying amounts of generated data. ## 6 Conclusion and Future Avenues We created a framework that enhances the performance of reading comprehension models by supplementing real datasets with a diverse dataset generated by contemporary, real-world generative models. Our findings indicate that this training method yields superior results on test datasets and those with natural distribution shifts, due to the added robustness from training on the generated data as opposed to traditional methods. We studied how varying amounts of generated data impact these patterns. We found that data generated by large, current language models proves more effective for developing sturdy models in Natural Language Processing (NLP). Our work also suggests a positive trend towards using high-quality synthetic data to train smaller, specialized models. In the future, we want to explore a more extensive comparison against question generation methods and how this paradigm fits into fine-tuning larger language models. ## 7 Limitations The development of robust question answering models using synthetic data derived from modern large language models (LLMs) demonstrates a notable progression in the field of generative AI for trustworthy machine learning (ML). However, this study does not delve into other vital facets such as fairness and privacy. The primary focus of our methodology is to generate data from a large language model in a zero-shot fashion for topics that are adequately encapsulated within its range. Nonetheless, it becomes essential to fine-tune the base model for distributions that aren't sufficiently represented in its training data. In spite of these limitations, the central contributions of this paper continue to hold substantial value and offer key insights for advancing the positive influence in the realm of trustworthy ML.
2304.05875
Lensing with generalized symmetrons
Generalized symmetrons are models that have qualitatively similar features to the archetypal symmetron, but have barely been studied. In this article, we investigate for what parameter values the fifth forces induced by disformally coupling generalized symmetrons can provide an explanation for the difference between baryonic and lens masses of galaxies. While it is known that the standard symmetron struggles with providing an alternative source for the lensing otherwise attributed to particle dark matter, we show that some generalized symmetron models are more suitable for complying with existing constraints on disformal couplings. This motivates future studies of these only little explored models.
Christian Käding
2023-04-12T14:13:13Z
http://arxiv.org/abs/2304.05875v2
# Lensing with generalized symmetrons ###### Abstract Generalized symmetrons are models that have qualitatively similar features to the archetypal symmetron, but have barely been studied. In this article, we investigate for what parameter values the fifth forces induced by disformally coupling generalized symmetrons can provide an explanation for the difference between baryonic and lens masses of galaxies. While it is known that the standard symmetron struggles with providing an alternative source for the lensing otherwise attributed to particle dark matter, we show that some generalized symmetron models are more suitable for complying with existing constraints on disformal couplings. This motivates future studies of these only little explored models. ## I Introduction Some of modern physics' most prominent open problems can be found in cosmology, i.e. the questions after the natures of dark energy (DE) and dark matter (DM). In order to tackle these problems, a zoo of modifications of general relativity have been considered. Amongst those, scalar-tensor theories [1] are some of the most studied ones. An overview of models that address the problems of DE and DM can be found in Refs. [2; 3]. Some of the scalar fields appearing in scalar-tensor theories are expected to cause a fifth force of Nature, which, however, is in tension with Solar System-based experiments [4; 5; 6]. This led to some of these models being already ruled out [7]. Though, so-called screened scalar fields, see Refs. [8; 9] for reviews, have ways of circumventing Solar System constraints by screening their fifth forces in environments of higher mass densities, such that the forces are effectively rendered to be feeble. The most well-known screened scalar field models include the chameleon [10; 11], the symmetron [12; 13; 14; 15; 16; 17; 18; 19], the environment dependent dilaton [14; 20; 21; 22; 23; 24], and the galileon [25; 26; 27]. In recent years these models have been (proposed to be) tested in various experiments, see e.g. Refs. [8; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44], studied as quantum fields [45; 46; 47; 48], and proposed for investigations in analogue gravity simulations [49]. Screened scalar field models have a rich phenomenology, which allows them to serve as promising candidate theories for DM or DE. For example, in Ref. [50] it was shown that the symmetron fifth force can explain the stability and rotation curves of disk galaxies, while Ref. [51] demonstrated how the same force can lead to the observed motion of stars perpendicular to the plane of the Milky Way disk. This made the symmetron a promising alternative to particle DM. Though, in Refs. [52] and [48] it was shown that, using the same parameter values as in Ref. [50], the difference between the baryonic masses of galaxies and the masses required for causing the observed (gravitational) lensing cannot be explained by a symmetron fifth force alone, but potentially by adding another scalar field or considering a hybrid model between modified gravity and particle DM. In the present article we revisit the idea of a symmetron fifth force being an alternative to particle DM. However, instead of only considering the archetypal symmetron, we discuss so-called generalized symmetrons, which were discovered using tomographic methods [53; 54]. This class of models comprises the standard symmetron, but also an, in principle, infinite number of qualitatively similar generalizations. 
Generalized symmetrons have barely been studied in the literature [8] and their parameter spaces are still unconstrained. As we show in the present article, some generalized symmetron models can actually explain the observed lensing otherwise attributed to particle DM while complying with existing constraints [55; 56] on the necessary disformal coupling [57]. Since the parameter spaces of those models have yet not been constrained by experiments, they offer significantly more freedom to discuss them as alternatives to particle DM than many other screened scalar field models including the standard symmetron do. This motivates future studies of these yet only little explored models. The article is structured as follows: In Sec. II we review the generalized symmetron models, while in Sec. III, based on the discussion in Ref. [48], we derive the deviation of a galaxy's lensing from general relativity's predictions due the presence of a generalized symmetron fifth force. Subsequently, in Sec. IV, we study which parts of the generalized symmetron parameter spaces are suitable for explaining the difference between baryonic and lens masses of galaxies. Finally, we draw our conclusions in Sec. V. ## II Generalized symmetrons The standard symmetron model is a scalar-tensor modification of gravity and introduces a new type of scalar field - the symmetron. It was first mentioned in Refs. [12; 13; 14; 15; 16; 17] and then introduced with its current name in Refs. [18; 19]. Besides being a potential DM candidate, the symmetron also motivated a new inflationary scenario [58] and was proposed as a solution to the \(H_{0}\)-tension [59]. The symmetron \(\varphi\) is described by the action \[S\;=\;\int d^{4}x\sqrt{-g}\left(\frac{M_{P}^{2}}{2}R-\frac{1}{2}(\partial\varphi) ^{2}-V(\varphi)\right)\;\;, \tag{1}\] where \(M_{P}\) is the reduced Planck mass, and the effective symmetron potential is given by \[V(\varphi)\;=\;\frac{1}{2}\left(\frac{\rho}{{\cal M}^{2}}-\mu^{2}\right) \varphi^{2}+\frac{\lambda}{4}\varphi^{4}\;\;. \tag{2}\] Here, \(\rho=-T^{\mu}_{\;\;\mu}\) is the density for dust-like matter, \({\cal M}\) is a constant with dimension of a mass that parametrizes the coupling to matter, \(\mu\) is a tachyonic mass, and \(\lambda\) is the dimensionless self-coupling constant of the symmetron. The universal coupling to the trace of the matter energy-momentum tensor \(T^{\mu}_{\;\;\mu}\) arises since the symmetron couples conformally to the metric tensor via a factor \[A(\varphi)\;=\;1+\frac{\varphi^{2}}{{\cal M}^{2}}+{\cal O}\left(\frac{\varphi^ {4}}{{\cal M}^{4}}\right)\;\;,\;\;\varphi\ll{\cal M}\;\;. \tag{3}\] This factor is used in order to translate from one conformal frame [1] to another, e.g. from the Einstein frame given in terms of the metric \(g\) to the Jordan frame given in terms of \(\tilde{g}\) via \(\tilde{g}_{\mu\nu}=A(\varphi)g_{\mu\nu}\). Due to its coupling to matter, an additional force of Nature, a so-called fifth force, is expected to be mediated by the symmetron. Though, in order to avoid Solar System constraints on fifth forces [4; 5; 6], the symmetron is equipped with a screening mechanism, which renders the fifth force to be feeble in regions where the trace of the energy-momentum tensor, or, in case of dust, the density \(\rho\), is sufficiently large. This screening is achieved by the behavior of the symmetron's non-linear effective potential given in Eq. (2). 
As long as \(\mu^{2}{\cal M}^{2}>\rho\), this potential has minima at \[\varphi_{0}\;=\;\pm\sqrt{\frac{1}{\lambda}\left(\mu^{2}-\frac{\rho}{{\cal M}^{2}}\right)}\;\;. \tag{4}\] However, if this condition is not fulfilled, the vacuum expectation value (vev) of the symmetron can only be \(\varphi_{0}=0\). Both possible scenarios are depicted in Fig. 1. Figure 1: Potential \(V(\varphi)\) of the (generalized) symmetron: if the density \(\rho\) is sufficiently small, then the potential takes on the shape of the blue curve and the field has a non-vanishing vev. However, if \(\rho>\mu^{2}{\cal M}^{2}\) (orange curve), then the potential can only have a minimum at \(\varphi=0\), which results in the fifth force being screened. Separating the symmetron into its vev and a small fluctuation \(\delta\varphi\), such that \(\varphi=\varphi_{0}+\delta\varphi\), it can be shown that the symmetron fifth force at leading order goes like \(F_{\varphi}\sim\varphi_{0}\nabla\delta\varphi\). Consequently, in situations where \(\mu^{2}{\cal M}^{2}<\rho\), the symmetron-induced fifth force is screened since its leading term vanishes, leaving only small corrections of higher order in \(\delta\varphi\). Besides this standard symmetron model, a radiatively-stable symmetron has been developed [60], and generalizations were discovered by using tomographic methods [53; 54]. The latter are the main subject of this article and discussed in what follows. Generalized symmetrons couple to the metric tensor via \[A(\varphi)\;=\;1+\frac{\varphi^{2\alpha}}{{\cal M}^{2\alpha}}+{\cal O}\left(\frac{\varphi^{4\alpha}}{{\cal M}^{4\alpha}}\right)\;\;,\;\;\varphi\ll{\cal M}\;\;, \tag{5}\] and have potentials [8] \[V(\varphi)\;=\;\left(\frac{\rho}{{\cal M}^{2\alpha}}-\mu^{4-2\alpha}\right)\varphi^{2\alpha}+\frac{\varphi^{2\beta}}{\Lambda^{2\beta-4}}\;\;, \tag{6}\] where \(\Lambda\) describes a symmetron self-coupling constant which generally has the dimension of a mass, but is replaced \(1/\Lambda^{2\beta-4}\rightarrow\lambda/4\) for \(\beta=2\). Note that, for \(\alpha=2\), also \(\mu\) must be replaced by a dimensionless constant. The numbers \(\alpha,\beta\in\mathbb{Z}^{+}\) label each model and are, in principle, arbitrary, but must adhere to \(\beta>\alpha\). Choosing the smallest possible pair \((\alpha,\beta)=(1,2)\) recovers the standard symmetron model as given in Eq. (2). Each choice for \((\alpha,\beta)\) leads to a model that has similar qualitative features to the standard symmetron. This means, the potential in Eq. (6) is also represented by Fig. 1, a generalized symmetron fifth force is screened in regimes where \(\rho>\mu^{4-2\alpha}{\cal M}^{2\alpha}\) since there \(\varphi_{0}=0\), and in unscreened regimes the field's vev takes on the form \[\varphi_{0}\;=\;\pm\;^{2(\beta-\alpha)}\!\!\sqrt{\frac{\alpha}{\beta}\Lambda^{2\beta-4}\left(\mu^{4-2\alpha}-\frac{\rho}{{\cal M}^{2\alpha}}\right)}\;\;. \tag{7}\] In an unscreened situation a generalized symmetron has a mass \[m^{2}\;=\;4\beta(\beta-\alpha)\frac{\varphi_{0}^{2(\beta-1)}}{\Lambda^{2\beta-4}}\;\;, \tag{8}\] while in case of total screening, this becomes \[m^{2}\;=\;\begin{cases}\frac{\rho}{{\cal M}^{2}}-\mu^{2}&,\;\;\alpha=1\\ 0&,\;\;\alpha>1\end{cases}\;. \tag{9}\]
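Since these vevs and masses recur throughout the analysis below, it is convenient to evaluate Eqs. (7) and (8) numerically. The helper below is our own sketch (names and the example parameter point are ours); natural units in eV are assumed, and arbitrary-precision arithmetic is used because the scales involved span dozens of orders of magnitude.

```python
from mpmath import mp, mpf, power

mp.dps = 60  # the scales below span dozens of orders of magnitude

def vev_and_mass(alpha, beta, mu, M, Lam, rho):
    """Unscreened vev (Eq. 7) and mass squared (Eq. 8) of an (alpha, beta)
    generalized symmetron; dimensionful inputs in eV (natural units).
    Caveats from the text: for beta = 2 replace 1/Lam**(2*beta-4) -> lambda/4,
    and for alpha = 2 the parameter mu is dimensionless."""
    bracket = power(mu, 4 - 2*alpha) - rho / power(M, 2*alpha)
    if bracket <= 0:
        return mpf(0), mpf(0)  # screened region: vanishing vev
    phi0 = power(mpf(alpha)/beta * power(Lam, 2*beta - 4) * bracket,
                 mpf(1)/(2*(beta - alpha)))
    m2 = 4*beta*(beta - alpha) * power(phi0, 2*(beta - 1)) / power(Lam, 2*beta - 4)
    return phi0, m2

# e.g. the (5,7) model near the parameter point quoted later in Sec. IV,
# in the low-density environment surrounding a galaxy:
phi0, m2 = vev_and_mass(5, 7, mpf("1e24"), power(10, mpf("58.7")),
                        power(10, mpf("37.4")), mpf("2.59e-11"))
print(phi0, m2)
```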
## III Lensing Gravitational lensing became the first experimentally confirmed novel effect predicted by general relativity when, in 1919, the bending of light in the Sun's gravitational field was observed [61]. Today gravitational lensing serves as a valuable tool for indirect observations of DM [62]. If a symmetron fifth force is supposed to act as an alternative to particle DM, as suggested by the findings in Refs. [50; 51], then it must be able to explain the observed roughly \(1:5\) ratio between baryonic mass and DM in galaxies [63], i.e. contribute to the lensing about \(5\) times as much as the Newtonian potential \(\Phi\) of a galaxy. In Refs. [52] and [48] it was shown that the standard or \((1,2)\)-symmetron cannot provide such an explanation for the parameter values required by Ref. [50]. Though, other models of the class of generalized symmetrons might still be able to serve as sensible alternatives to particle DM since they show the same qualitative features as the \((1,2)\)-symmetron, but are as yet much less constrained. In order to study the effect of a generalized symmetron on lensing, we now derive a measure that enables us to compare the contribution from the fifth force with the one from Newtonian gravity. For this, we follow the discussion in Ref. [48], which in turn is based on elaborations made in Ref. [64]: We consider a perturbed Friedmann-Lemaître-Robertson-Walker (FLRW) background \[ds^{2} = a^{2}(\tau)[-(1+2\Psi)d\tau^{2}+(1+2\Phi)\delta_{ij}dx^{i}dx^{j}] \tag{10}\] with conformal time \(d\tau=dt/a(t)\) and Newtonian potentials \(\Phi,\Psi\ll 1\), which fulfill the no-slip condition \(\Phi=-\Psi\). Furthermore, we assume that the thickness of the lens (e.g. a galaxy) is much smaller than the distance \(d_{L}\) between lens and observer (on Earth), and that the deflection angles are very small. This assumption justifies a thin-lens approximation, which effectively considers the lens to have no thickness and to lie within a two-dimensional plane. This so-called lens plane is perpendicular to the line between light source and observer. Using this approximation, we introduce the coordinate system \(\{\tau,r,x^{1}\approx r\theta^{1},x^{2}\approx r\theta^{2}\}\) with radial coordinate \(r\) and deflection angles \(\theta^{1,2}\). See Fig. 2 for a depiction of the setup. We introduce the photon momentum \(k^{\mu}:=dx^{\mu}/d\kappa\) with some affine parameter \(\kappa\), which gives the geodesic equation \[\frac{dk^{\mu}}{d\kappa}+\Gamma^{\mu}_{\nu\sigma}k^{\nu}k^{\sigma} \ =\ 0\enspace. \tag{11}\] The photon momentum can be separated into a background vector \(\hat{k}^{\mu}\) and a small perturbation \(\delta k^{\mu}\) due to the presence of a lens mass, such that \(k^{\mu}=\hat{k}^{\mu}+\delta k^{\mu}\). Without the lens mass, the photon trajectory is not subject to deflection, which means that \(\hat{k}^{x^{1,2}}=0\) must hold for the background momentum. Consequently, the on-shell condition \(k^{\mu}k_{\mu}=0\) implies \(\hat{k}^{r}=\hat{k}^{0}\). Taking all this into account, and considering only terms up to first order in the Newtonian potentials and the momentum perturbations, for the purely Newtonian case without any fifth forces the geodesic Eq. (11) can be used to derive the lensing force law \[\frac{d^{2}x^{i}}{dr^{2}}\;=\;(\Phi-\Psi)_{,x^{i}}\;\;,\;\;i\in\{1,2\}\;\;. \tag{12}\] A similar force law can be derived when also including generalized symmetrons. Though, it must be stressed that a conformal coupling of the scalar field \(\varphi\) to the metric tensor alone is not sufficient to have any effect on lensing. This was, for example, explicitly shown in Ref.
[48], but can also simply be concluded from the fact that the energy-momentum tensor of a massless particle like the photon has a vanishing trace. Figure 2: Schematics of gravitational lensing in two dimensions with thin-lens approximation: The source sends out light which is distorted by the lens and then received by an observer. \(d_{L}\) denotes the distance between the observer and the lens plane in which the lens is situated. The radial coordinate \(r\) equals \(0\) at the position of the observer and is orthogonal to the lens plane. \(\theta\) denotes the angle between \(r\) and the light ray reaching the observer (here it represents either \(\theta^{1}\) or \(\theta^{2}\)). The coordinate \(x\) is approximated by \(r\theta\) since \(\theta\) is assumed to be very small. [48] Therefore, the scalar \(\varphi\) must not only couple conformally, but also disformally via \[\bar{g}_{\mu\nu} = A(\varphi)g_{\mu\nu}+B\varphi_{,\mu}\,\varphi_{,\nu}\ \, \tag{13}\] \[\bar{g}^{\mu\nu} = A^{-1}(\varphi)\left(g^{\mu\nu}-\frac{B}{C(\varphi)}\varphi^{,\mu}\,\varphi^{,\nu}\,\right)\ \, \tag{14}\] where \(A(\varphi)\) is the conformal factor from Eq. (5), \(B\) is the disformal coupling parameter, which for simplicity is assumed to be a constant, and \(C(\varphi):=A(\varphi)+B(\partial\varphi)^{2}\). \(\bar{g}\) denotes the metric in the Jordan frame and \(g\) the metric in the Einstein frame [1]. For the Jordan frame Christoffel symbols we find \[\bar{\Gamma}^{\mu}_{\nu\sigma} = \Gamma^{\mu}_{\nu\sigma}+A^{-1}\left[A_{,(\nu}\,g^{\mu}_{\sigma)}-\frac{1}{2}A^{,\mu}\,g_{\nu\sigma}\right] \tag{15}\] \[+\frac{B}{C}\varphi^{,\mu}\left[\varphi_{,\nu\sigma}\,-\varphi_{,\rho}\,\Gamma^{\rho}_{\nu\sigma}+\frac{A^{-1}}{2}\varphi^{,\rho}\,A_{,\rho}\,g_{\nu\sigma}-A^{-1}A_{,(\nu}\,\varphi_{,\sigma)}\,\right]\ \,\] where the second term in the first line represents a contribution which arises purely from the conformal coupling and therefore does not affect the lensing. Substituting Eq. (5) into Eq. (15), following the same procedure as for the derivation of Eq. (12), and assuming the lens object to be static, the force law \[\frac{d^{2}x^{i}}{dr^{2}} = (\Phi-\Psi)_{,x^{i}}-\frac{B}{C}\varphi^{,x^{i}}\left[\left(\varphi_{,yz}-\frac{2\alpha}{{\cal M}^{2\alpha}}\varphi^{2\alpha-1}\varphi_{,y}\,\varphi_{,z}\,\right)\frac{dx^{y}}{dr}\frac{dx^{z}}{dr}\right. \tag{16}\] \[\left.-2{\cal H}\varphi_{,y}\,\frac{dx^{y}}{dr}+a^{2}\varphi_{,y}\,(\Phi-\Psi)^{,y}\,-2\varphi_{,r}\,\Phi_{,r}\,\right]\ \,\] where \(y,z\in\{r,x^{1},x^{2}\}\), and \({\cal H}\) denotes the conformal Hubble parameter, can be obtained. Comparing this force law with the one in Eq. (12), we find that the term proportional to \(B\) in Eq. (16) corresponds to the contribution of the generalized symmetron fifth force to the lensing effect. Next, we assume \(a({\rm today})=1\), use the no-slip condition for the Newtonian potentials, and consider the force law only in the lens plane \(r=d_{L}\), where we expect the lensing to be maximal and \(dx/dr\) to vanish: \[\frac{d^{2}x^{i}}{dr^{2}}\bigg{|}_{d_{L}} = \left.\left[2\Phi_{,x^{i}}\,-\frac{B}{C}\varphi^{,x^{i}}\left(\varphi_{,rr}\,-\frac{2\alpha}{{\cal M}^{2\alpha}}\varphi^{2\alpha-1}(\varphi_{,r}\,)^{2}-2{\cal H}\varphi_{,r}\,+2\varphi_{,x^{j}}\,\Phi^{,x^{j}}\,\right)\right]\right|_{d_{L}}\ . \tag{17}\] In order to get a rough numerical estimate of the generalized symmetron contribution, we assume its source, i.e.
a galaxy acting as lens, to be a disk lying in the lens plane with homogeneous, constant mass density and radius \(R\), leading to a total galaxy mass \(M\). Since this approximated lens has a spherical symmetry, we can restrict our investigation to the case \(\theta^{2}=0\), and consequently only work with \(\theta^{1}=:\theta\). Applying Newton's shell theorem and considering that the radial coordinate within the lens plane originating from the disk's center can be expressed as \(r\theta\) in the observer's coordinates, the Newtonian potential becomes \[\Phi\;=\;-\frac{GM}{r\theta}\;\;. \tag{18}\] Some references, including Refs. [48; 54], suggest an approximation for the symmetron field profile outside a homogeneous, spherically symmetric source of radius \(R\), which is valid for a symmetron mass \(m_{\rm out}\) outside the source fulfilling \(r\theta<m_{\rm out}^{-1}\). However, in what follows, this approximation will not always be good. Therefore, we use the symmetron profile [37] \[\varphi\;=\;v-\frac{D}{r\theta}e^{-m_{\rm out}r\theta} \tag{19}\] with \[D\;:=\;(v-w)Re^{m_{\rm out}R}\frac{Rm_{\rm in}-{\cal T}}{Rm_{\rm in }+Rm_{\rm out}{\cal T}}\;\;, \tag{20}\] where \(m_{\rm in}\) is the field's mass within the source, \(v:=\varphi_{0,\rm out}\) and \(w:=\varphi_{0,\rm in}\) are the vev in the environment surrounding the source and within the source, respectively, and \({\cal T}:=\tanh(m_{\rm in}R)\). The solution in Eq. (19) requires that we only consider terms up to first order in \(\delta\varphi\), and, at this order, is even valid for any generalized symmetron model. Substituting Eqs. (18) and (19) into Eq. (17) leads us to \[\left.\frac{d^{2}x}{dr^{2}}\right|_{d_{L}}\;=\;\frac{2GM}{d_{L}^ {2}\theta^{2}}\left[1+F\right] \tag{21}\] with \[F\;:= \frac{BD^{2}}{Cd_{L}^{4}\theta^{2}}e^{-2m_{\rm out}d_{L}\theta}(1 +m_{\rm out}d_{L}\theta)\bigg{\{}\left(1+\frac{d_{L}\theta}{2GM}\right)\bigg{[} 1+(1+m_{\rm out}d_{L}\theta)^{2} \tag{22}\] \[+\frac{2\alpha D}{d_{L}\theta{\cal M}^{2\alpha}}\left(v-\frac{D} {d_{L}\theta}e^{-m_{\rm out}d_{L}\theta}\right)^{2\alpha-1}e^{-m_{\rm out}d_{ L}\theta}(1+m_{\rm out}d_{L}\theta)^{2}+2{\cal H}d_{L}(1+m_{\rm out}d_{L} \theta)\bigg{]}\] \[-\frac{1+m_{\rm out}d_{L}\theta}{\theta^{2}}\bigg{\}}\] being a term that describes the contribution of the generalized symmetron fifth force to lensing in comparison to the one originating from Newtonian gravity. Furthermore, in the lens plane we have: \[C|_{d_{L}}\;=\;1+\frac{1}{{\cal M}^{2\alpha}}\left(v-\frac{D}{d _{L}\theta}e^{-m_{\rm out}d_{L}\theta}\right)^{2\alpha}+(1-2\Phi|_{d_{L}}) \frac{BD^{2}}{d_{L}^{4}\theta^{4}}e^{-2m_{\rm out}d_{L}\theta}(1+m_{\rm out}d _{L}\theta)^{2}(1+\theta^{2})\;\;. \tag{23}\] In Eq. (22) we see that \(F\sim D^{2}\), such that from Eq. (20) we can find \(F\sim(v-w)^{2}\). This result implies that if \(v\) and \(w\) are very similar, and have the same sign, for example in some situations where the field is unscreened both in- and outside the source, the contribution of the generalized fifth force to lensing can be very small. However, it is not necessarily true that both vev need to have the same sign. Since we are interested in checking whether it is at all possible to explain the observed lensing by a generalized symmetron fifth force, we consider the best case scenario, in which \(v=+|v|\) and \(w=-|w|\). ## IV Model parameters We now want to consider different generalized symmetron models and see for what points in their parameter spaces the expression in Eq. 
(22) gives \(F\approx 5\), such that the contribution of a fifth force can be interpreted as an alternative to particle DM, at least when it comes to lensing. For this, we consider the same galaxy parameters as in Ref. [48], i.e. we look at a Milky Way-like galaxy with \(M=6\times 10^{11}M_{\odot}\approx 6.67\times 10^{77}\,\mathrm{eV}\) and a scale length \(R=5\,\mathrm{kpc}\approx 2.69\times 10^{26}\,\mathrm{eV}^{-1}\)[65]. Ref. [66] reports lensing by galaxies at redshift \(z=1\) under angles of about \(1\,\mathrm{arcmin}\). Therefore, we choose \(\theta=\frac{\pi}{10800}\) and \(d_{L}\approx 6.60\times 10^{32}\,\mathrm{eV}^{-1}\), where we obtained the latter from \(d_{L}\approx zd_{H}\)[67] using the Hubble length \(d_{H}\). For the density around the galaxy we assume \(\rho_{\mathrm{out}}\approx 2.59\times 10^{-11}\,\mathrm{eV}^{4}\)[68]. In addition, we use \(G\approx 6.71\times 10^{-57}\,\mathrm{eV}^{-2}\) and \(\mathcal{H}\approx 1.51\times 10^{-33}\,\mathrm{eV}\). We start the discussion by again looking at the \((1,2)\)-symmetron before we show that moving to larger values of \((\alpha,\beta)\) is beneficial for complying with the existing constraint \(B<5.6\times 10^{-48}\,\mathrm{eV}^{-4}\)[55; 56] on the disformal coupling parameter. ### \((1,2)\)-symmetron Ref. [50] suggested that the \((1,2)\)-symmetron could explain the stability and rotation curves of disk galaxies, and therefore be a possible candidate for an alternative to particle DM. For this to work, the symmetron parameters were chosen to be \(\mathcal{M}=M_{P}/10\), \(v=\mathcal{M}/150\), and \(\mu=3\times 10^{-30}\,\mathrm{eV}\). In Refs. [52] and [48] it was shown that those parameter choices require a disformal coupling parameter \(B\) much larger than permitted by the constraints given in Refs. [55; 56]. This result also held for realistic variations of the galaxy parameters. We now study for what values of \(\mathcal{M}\) and the self-coupling constant \(\lambda\) the \((1,2)\)-symmetron can actually comply with the constraints on disformal couplings. For the disformal coupling parameter we choose \(B=10^{-49}\,\mathrm{eV}^{-4}\) in order to consider a value that is not yet excluded by experiments, and for \(\mu\) we initially take the same value as in Ref. [50]. We find that \(\lambda\approx 10^{-174.6}\) is the largest and \({\cal M}\approx 10^{58.7}\,\)eV the smallest possible value in order to find \(F\approx 5\), while simultaneously fulfilling the perturbative condition \(\varphi\ll{\cal M}\). Even though there are not necessarily any restrictions on the permitted values for the coupling constants, it is certainly peculiar to have a theory with such a small self-coupling and, more strikingly, a mass scale that is more than 30 orders of magnitude above the Planck mass. Note that for these parameter values the field is not screened within the galaxy. Furthermore, since in this parameter regime \(1/m_{\rm in}>R\), i.e. the Compton wavelength of the field is larger than the galaxy scale, the field adapts to the size of the galaxy and consequently rather takes on the mass \(m_{\rm in}\approx q/R\) and the vev \[w\;\approx\;\pm\,^{\beta-1}\!\!\sqrt{\frac{q\Lambda^{\beta-2}}{2\sqrt{\beta( \beta-\alpha)}R}}\;\;, \tag{24}\] where \(q\) is a fudge factor that would have to be computed numerically taking into account more detailed properties of a galaxy, but is assumed to be of order 1 (compare with the fudge factor, e.g., in Ref. [69]). 
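Because Eq. (22) mixes scales across hundreds of orders of magnitude, evaluating it reliably again calls for arbitrary-precision arithmetic. The helper below is our own transcription of Eqs. (18)-(23) (names and structure are ours, not the paper's); the model-dependent vevs and masses in- and outside the galaxy must be supplied separately, e.g. from Eqs. (7), (8) and (24).

```python
from mpmath import mp, mpf, exp, tanh, pi

mp.dps = 80

# galaxy and cosmology inputs quoted in the text (natural units, eV)
Mgal = mpf("6.67e77")   # galaxy mass
R    = mpf("2.69e26")   # scale length
dL   = mpf("6.60e32")   # lens distance
th   = pi / 10800       # 1 arcmin
G    = mpf("6.71e-57")
Hc   = mpf("1.51e-33")  # conformal Hubble parameter
Bdis = mpf("1e-49")     # disformal coupling, just below current bounds

def F_lensing(alpha, Mcoup, v, w, m_in, m_out):
    """Direct transcription of Eqs. (18)-(23): fifth-force lensing
    contribution relative to the Newtonian one, evaluated in the lens plane."""
    x = dL * th                                   # r*theta at r = d_L
    T = tanh(m_in * R)
    D = (v - w) * R * exp(m_out * R) * (R*m_in - T) / (R*m_in + R*m_out*T)  # Eq. (20)
    phi = v - D/x * exp(-m_out * x)               # field profile, Eq. (19)
    Phi = -G * Mgal / x                           # Newtonian potential, Eq. (18)
    u = 1 + m_out * x
    pref = Bdis * D**2 / (dL**4 * th**2) * exp(-2*m_out*x) * u
    inner = (1 + x/(2*G*Mgal)) * (
        1 + u**2
        + 2*alpha*D/(x * Mcoup**(2*alpha)) * phi**(2*alpha - 1) * exp(-m_out*x) * u**2
        + 2*Hc*dL*u
    ) - u/th**2
    C = (1 + phi**(2*alpha)/Mcoup**(2*alpha)      # Eq. (23)
         + (1 - 2*Phi) * Bdis * D**2/(dL**4 * th**4)
           * exp(-2*m_out*x) * u**2 * (1 + th**2))
    return pref * inner / C                       # Eq. (22)
```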
Considering smaller values of the disformal coupling parameter only worsens the situation, i.e. requires smaller values of \(\lambda\) and larger \({\cal M}\). As was also observed in Ref. [48], changing the galaxy parameters to other realistic values does not significantly improve the situation. What remains to be checked is how the result is affected by a change in the tachyonic mass \(\mu\). As it turns out, and can be seen in Fig. 3, the value for \(\mu\) used in Ref. [50] is sitting within a small part of the parameter space that allows for \(F\approx 5\). Looking at Eqs. (19) and (20), and remembering that \(F\sim D^{2}\), we see that significantly increasing or decreasing \(\mu\) beyond this small section of the parameter space, while keeping all other parameters at the same values, leads to \(F\) getting rapidly smaller since the combined exponential function in Eqs. (19) and (20) becomes smaller due to the enlarged symmetron mass or the vev reduces, respectively. In order to counteract the declining \(F\), such that \(F\approx 5\) can be recovered, we have to consider even smaller values of \(\lambda\) and consequently larger values of \({\cal M}\). This is the exact opposite of what we had hoped to achieve, which is why varying \(\mu\) can be excluded as a possibility for improving the case for the \((1,2)\)-symmetron. ### \((1,\beta)\)-symmetrons with \(\beta\geq 3\) Now we move to the first set of non-standard symmetrons and consider models with \((1,\beta)\) for \(\beta\geq 3\). For those, Eq. (22) has still the same form as for the \((1,2)\)-symmetron, but the vev and the unscreened masses are different according to Eqs. (7) and (8). Furthermore, instead of using a dimensionless constant \(\lambda\), we now have to work with \(\Lambda\) as another mass scale besides \(\mathcal{M}\). Again choosing \(\mu=3\times 10^{-30}\,\mathrm{eV}\), we find the results presented in Fig. 4 for \(3\leq\beta\leq 10\). We observe that going from \(\beta=2\) to \(\beta=3\) requires a slightly lower \(\mathcal{M}\) for \(F\approx 5\) and \(\varphi/\mathcal{M}\ll 1\) but the \(\Lambda\) of the \((1,3)\)-symmetron must be more than 100 orders of magnitude above the Planck mass. This renders this theory to be unrealistic as an alternative to particle DM. At \(\beta=4\) there is another slight decrease in \(\mathcal{M}\), while from \(\beta=5\) on this mass scale increases again until it settles around \(\mathcal{M}\approx 10^{58.3}\,\)eV even for very large \(\beta\). This means, compared to the standard symmetron, only changing \(\beta\) barely improves the value of the matter coupling mass scale. The self-coupling constant \(\Lambda\) is strictly monotonically decreasing and approaches \(\Lambda\approx 10^{57.2}\,\)eV for large \(\beta\). This means, the two mass scales in \((1,\beta\geq 3)\)-symmetron theories are getting closer to each other with increasing \(\beta\) until \(\Lambda\) becomes smaller than \(\mathcal{M}\) and both finally differ by approximately 1 order of magnitude. Even though there are parts of their parameter spaces for which generalized symmetrons with \(\alpha=1\) can explain the lensing otherwise attributed to particle DM, they require coupling constants which are at least 30 order of magnitude above the Planck mass. ### \((\alpha,\beta)\)-symmetrons with \(\alpha\geq 2\) Finally, we also consider generalized symmetron models with \(\alpha\geq 2\). Now we encounter a couple of differences to the previously considered models: Eq. 
(22) changes with every possible value of \(\alpha\), and \(\mu\) is either dimensionless (for \(\alpha=2\)) or appears in Eq. (6) with negative order (for \(\alpha\geq 3\)). Generally, it can be said that studying these models is numerically more intricate than it was for the ones we considered before. This is due to the fact that, within the considered astrophysical setup, \(\alpha\geq 2\)-symmetrons can lead to extremely small terms that have to be compensated by extremely large terms in order to find any results of order 1. For example, Eq. (22) can be separated into two terms, one outside (\(F_{o}\)) and one inside (\(F_{i}\)) the curly brackets. \(F_{o}\) tends to become very small since it is suppressed by at least \(B/d_{L}^{3}\theta\sim 10^{-144}\,\)eV\({}^{-1}\) (for \(B=10^{-49}\,\)eV\({}^{-4}\)). While generalized symmetrons with only \(\alpha=1\) seem to not struggle with compensating such smallness by leading to a sufficiently large \(F_{i}\), the models discussed here are more problematic since they lead to even more extreme values for \(F_{o}\) and \(F_{i}\). Beginning with the \((2,3)\)-symmetron model, we can find no point in the parameter space that allows for \(F\approx 5\). The closest value we find is around \(F\sim 10^{-139}\). This is caused by \(F_{i}\) not reaching sufficiently large values and whenever it increases for parameter values beyond the maximum of \(F\), \(F_{o}\) decreases even faster. Increasing \(\beta\) does not lead to greatly improved results. Moving to \(\alpha=3\), from where on \(\mu\) is a mass scale, we again do not find any point in the parameter space that allows for \(F\approx 5\), but can at least reach \(F\sim 10^{-117}\). The first model we find that allows for \(F\approx 5\) is the \((5,7)\)-symmetron around the parameter space point \((\Lambda,\mathcal{M},\mu)=(10^{37.4},10^{58.7},10^{24.0})\,\mathrm{eV}\). Interestingly, from \(\alpha=5\) on, changing \(\beta\) can actually have a significant impact on the results. However, while for \(\alpha=6\) we still need at least \(\beta=\alpha+2\), from \(\alpha=7\) on we can obtain \(F\approx 5\) even for \(\beta=\alpha+1\). With increasing \(\alpha\) we have to use smaller \(\Lambda\) in order to reach \(F\approx 5\), such that we are getting closer to the Planck scale. For example, in the \((11,12)\)-symmetron model we find \(F\approx 5\) at the point \((\Lambda,\mathcal{M},\mu)=(10^{27.3},10^{58.7},10^{24.0})\,\mathrm{eV}\), where \(\Lambda\) is close to \(M_{P}\). As can be seen in the examples presented above, increasing \(\alpha\) is useful for reducing \(\Lambda\), but \(\mathcal{M}\) remains more than 30 order of magnitude above the Planck scale independent of the choice of \((\alpha,\beta)\). ## V Conclusions In this article we discussed generalizations of the symmetron model, characterized by a pair of positive integers \((\alpha,\beta)\), and investigated for what parameter values their fifth forces can explain the difference between baryonic and lens masses of galaxies, which is otherwise attributed to particle DM. For this, we first reviewed generalized symmetrons and then derived a measure \(F\) that allowed us to compare the lensing contribution from Newtonian gravity with one from a gravity-like fifth force induced by a disformally coupling generalized symmetron. 
We looked at a Milky Way-like galaxy and checked, for a selection of generalized symmetron models, for which parameter values the condition \(F\approx 5\), corresponding to the expected ratio between DM and baryonic matter in galaxies, is fulfilled. With \(B=10^{-49}\,\mathrm{eV}^{-4}\) we chose a disformal coupling parameter close to the maximal value allowed by current experimental constraints. For the standard symmetron, corresponding to \((\alpha,\beta)=(1,2)\), we found that a tiny self-coupling constant \(\lambda\) and a matter coupling mass scale \(\mathcal{M}\) more than 30 orders of magnitude larger than the Planck mass \(M_{P}\) are required in order to find \(F\approx 5\). Increasing \(\beta\), which required us to work with a mass scale \(\Lambda\) instead of a dimensionless \(\lambda\), led to both \(\Lambda\) and \(\mathcal{M}\) being at least 30 orders of magnitude above the Planck scale. Varying \(\alpha\) as well finally proved most promising, since it allowed us to use generalized symmetron fifth forces as explanations for the observed lensing excess at mass scales \(\Lambda\) and \(\mu\) around or even below the Planck scale. However, in no model was it possible to significantly reduce the value for \(\mathcal{M}\), which means it must always be much larger than \(M_{P}\). From Refs. [52] and [48] we know that the \((1,2)\)-symmetron is not able to successfully act as an alternative to particle DM since it cannot explain lensing. This is also reflected in the fact that, as we showed in the present article, the standard symmetron fifth force would require an extremely small \(\lambda\) for \(F\approx 5\). In contrast, some generalized symmetron models with larger \(\alpha\) are better suited to explaining the observed lensing insofar as they instead require the self-interaction mass parameter \(\Lambda\) to be around the Planck scale, which is a typical value expected for many screened scalar field models. Though, every model, even for very large \(\alpha\) and \(\beta\), required \(\mathcal{M}\) to be at least 30 orders of magnitude above \(M_{P}\) in order to reach \(F\approx 5\). There are several possible ways around this conundrum: either we accept that we have such a large fundamental mass scale in Nature, we relax the requirement on \(F\) and want a generalized symmetron fifth force to only partially explain DM, we find a way to relax the requirement \(\varphi\ll\mathcal{M}\), or we consider a hybrid model between modified gravity and particle DM, as was suggested for the standard symmetron in Ref. [52]. In any case, generalized symmetrons beyond \((\alpha,\beta)=(1,2)\) are interesting theories to study since, to date, there are no experimental constraints on their parameter spaces. In addition, redoing the analyses made in Refs. [50] and [51] for some of the more promising models, for example theories like \((\alpha,\beta)=(5,7)\) and beyond, might in the future demonstrate that generalized symmetrons are actually good alternatives to particle DM. ###### Acknowledgements. CK is grateful to C. Burrage and M. Pitschmann for useful discussions, and to C. Voith and B. Burazor Domazet for spotting a sign mistake in the lensing calculation. This article was supported by the Austrian Science Fund (FWF): P 34240-N, and is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology).
2308.05578
Searching for the flavon at current and future colliders
The $ B_3 - L_2$ $ Z' $ model may explain certain features of the fermion mass spectrum as well as the $b \rightarrow s \mu^+ \mu^-$ anomalies. The $ Z' $ acquires its mass via a TeV-scale scalar field, the flavon, whose vacuum expectation value spontaneously breaks the family non-universal gauged $ U(1)_{B_3 - L_2} $ symmetry. We review the key features of the model, with an emphasis on its scalar potential and the flavon field, and use experimental data and perturbativity arguments to place bounds upon the Higgs-flavon mixing angle. Finally, we discuss flavonstrahlung as a means to discover the flavon experimentally and compute flavonstrahlung cross-sections at current and future colliders.
Eetu Loisa
2023-08-10T13:41:55Z
http://arxiv.org/abs/2308.05578v1
# Searching for the flavon at current and future colliders ###### Abstract The \(B_{3}-L_{2}\)\(Z^{\prime}\) model may explain certain features of the fermion mass spectrum as well as the \(b\to s\mu^{+}\mu^{-}\) anomalies. The \(Z^{\prime}\) acquires its mass via a TeV-scale scalar field, the flavon, whose vacuum expectation value spontaneously breaks the family non-universal gauged \(U(1)_{B_{3}-L_{2}}\) symmetry. We review the key features of the model, with an emphasis on its scalar potential and the flavon field, and use experimental data and perturbativity arguments to place bounds upon the Higgs-flavon mixing angle. Finally, we discuss _flavonstrahlung_ as a means to discover the flavon experimentally and compute flavonstrahlung cross-sections at current and future colliders. _21st Conference on Flavor Physics and CP Violation (FPCP 2023)_ _29 May - 2 June 2023 Lyon, France_ ## 1 Introduction Several observables involving rare \(B\) meson decays with muonic final states remain in disagreement with Standard Model (SM) predictions. For instance, discrepant branching ratios of \(B\to K\mu\mu\), \(B\to K^{*}\mu\mu\) and \(B_{s}\to\phi\mu\mu\)[1, 2, 3] as well as angular observables of \(B\to K^{*}\mu\mu\)[4] are hinting that there may be New Physics (NP) at play. These measurements can be accounted for by the \(B_{3}-L_{2}\) model [5, 6, 7], which introduces a family non-universal \(U(1)_{B_{3}-L_{2}}\) abelian gauge symmetry, mediated by a \(Z^{\prime}\) gauge boson able to contribute to \(b\to s\mu^{+}\mu^{-}\) transitions. Whilst the recent LHCb update on the lepton flavour universality ratios \(R_{K}\) and \(R_{K^{*}}\)[8, 9] guides us towards NP scenarios coupling to both muons and electrons [10, 11], the one-parameter fits based solely on the Wilson coefficient \({\cal C}_{9\mu}\) are still able to improve considerably upon the SM \(\chi^{2}\)[12]. Owing to its family non-universal nature, the \(B_{3}-L_{2}\) model is also able to explain certain features of the CKM matrix, setting up the foundations for further model-building to address the fermion mass problem. Since the \(Z^{\prime}\) is assumed to be massive, we are compelled to introduce a scalar flavon field that breaks the \(U(1)_{B_{3}-L_{2}}\) symmetry by developing a vacuum expectation value (VEV). In what follows, we will review the construction of the \(B_{3}-L_{2}\) model and study the phenomenology of the flavon field with an eye on the Higgs-flavon mixing. Finally, we will study the production of the flavon at hadron and muon colliders. This write-up draws heavily from ref. [13], which the interested reader can refer to for a more detailed account of our findings. ## 2 \(B_{3}-L_{2}\) model To construct the \(B_{3}-L_{2}\) model, one begins by extending the \(SU(3)\times SU(2)_{L}\times U(1)_{Y}\) gauge group of the SM by an abelian \(U(1)_{B_{3}-L_{2}}\) factor in a direct product. The SM fermions carry charges proportional to third family baryon number (\(B_{3}\)) minus second family lepton number (\(L_{2}\)) under the new symmetry. The exact charge assignments are shown in table 1. Gauge anomaly cancellation is ensured by assuming the existence of three right-handed neutrinos. Because the \(U(1)_{B_{3}-L_{2}}\) symmetry is gauged, it implies the existence of an electrically neutral \(Z^{\prime}\) boson which is able to contribute to flavour-changing neutral current (FCNC) processes, including \(b\to s\mu^{+}\mu^{-}\) transitions.
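The anomaly-cancellation claim is easy to cross-check directly from the charge assignments of table 1 (displayed below). The following short script is ours, assuming the usual SM hypercharges; the two non-abelian conditions reduce to simple charge sums, as noted in the comments.

```python
from fractions import Fraction as Fr

# Each entry: (multiplicity = colour x isospin components, hypercharge Y,
# B3-L2 charge X from table 1, chirality: +1 for LH, -1 for RH Weyl fermions)
fields = [
    (6, Fr(1, 6), 0, +1), (6, Fr(1, 6), 0, +1), (6, Fr(1, 6), 1, +1),     # Q'_1,2,3
    (3, Fr(2, 3), 0, -1), (3, Fr(2, 3), 0, -1), (3, Fr(2, 3), 1, -1),     # u'_R
    (3, Fr(-1, 3), 0, -1), (3, Fr(-1, 3), 0, -1), (3, Fr(-1, 3), 1, -1),  # d'_R
    (2, Fr(-1, 2), 0, +1), (2, Fr(-1, 2), -3, +1), (2, Fr(-1, 2), 0, +1), # L'_1,2,3
    (1, Fr(-1), 0, -1), (1, Fr(-1), -3, -1), (1, Fr(-1), 0, -1),          # e'_R
    (1, Fr(0), 0, -1), (1, Fr(0), -3, -1), (1, Fr(0), 0, -1),             # nu'_R
]

def trace(py, px):
    """Chirality-weighted trace sum(Y**py * X**px) over all Weyl fermions."""
    return sum(s * n * y**py * x**px for n, y, x, s in fields)

# SU(3)^2-X: 2*X_Q - X_u - X_d = 0 per family; SU(2)^2-X: 3*X_Q3 + X_L2 = 3 - 3 = 0.
for name, (py, px) in {"grav^2-X": (0, 1), "Y^2-X": (2, 1),
                       "Y-X^2": (1, 2), "X^3": (0, 3)}.items():
    print(name, trace(py, px))  # each prints 0 once the three nu_R are included
```

Removing the three right-handed neutrinos from the list makes the \(X^{3}\) and gravitational traces non-zero, which is precisely why they are required.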
We also introduce a complex scalar field called the flavon (\(\theta\)), which is a SM singlet but carries charge \(q_{\,\theta}\) under \(U(1)_{B_{3}-L_{2}}\). (We assume \(q_{\,\theta}=1\) in this work.) The flavon field develops a non-zero VEV near the TeV scale, \(\langle\theta\rangle=v_{\,\theta}\neq 0\). This breaks \(U(1)_{B_{3}-L_{2}}\) and yields the \(Z^{\prime}\) mass \(M_{Z^{\prime}}=q_{\,\theta}g_{Z^{\prime}}v_{\,\theta}\), where \(g_{Z^{\prime}}\) is the coupling constant of the new gauge symmetry. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \(Q^{\prime}_{iL}\) & \(u^{\prime}_{iR}\) & \(d^{\prime}_{iR}\) & \(L^{\prime}_{1}\) & \(L^{\prime}_{2}\) & \(L^{\prime}_{3}\) & \(e^{\prime}_{1R}\) & \(e^{\prime}_{2R}\) & \(e^{\prime}_{3R}\) \\ 0 & 0 & 0 & 0 & -3 & 0 & 0 & -3 & 0 \\ \hline \(\nu^{\prime}_{1R}\) & \(\nu^{\prime}_{2R}\) & \(\nu^{\prime}_{3R}\) & \(Q^{\prime}_{3L}\) & \(u^{\prime}_{3R}\) & \(d^{\prime}_{3R}\) & \(H\) & \(\theta\) & \\ 0 & -3 & 0 & 1 & 1 & 1 & 0 & \(q_{\,\theta}\) & \\ \hline \hline \end{tabular} \end{table} Table 1: The \(U(1)_{B_{3}-L_{2}}\) charge assignments. A prime stands for a weak eigenstate Weyl fermion and the family index \(i\) takes values 1 and 2. The flavon charge \(q_{\,\theta}\) is a non-zero rational number set equal to unity in this work. The fermionic couplings of the \(Z^{\prime}\) are expressed as \[\mathcal{L}_{Z^{\prime}\psi}=-g_{Z^{\prime}}\left(\overline{Q^{\prime}_{3L}}\slashed{Z}^{\prime}Q^{\prime}_{3L}+\overline{u^{\prime}_{3R}}\slashed{Z}^{\prime}u^{\prime}_{3R}+\overline{d^{\prime}_{3R}}\slashed{Z}^{\prime}d^{\prime}_{3R}-3\overline{L^{\prime}_{2L}}\slashed{Z}^{\prime}L^{\prime}_{2L}-3\overline{e^{\prime}_{2R}}\slashed{Z}^{\prime}e^{\prime}_{2R}-3\overline{\nu^{\prime}_{2R}}\slashed{Z}^{\prime}\nu^{\prime}_{2R}\right) \tag{1}\] where the primed fermion fields are in the weak eigenbasis. In order to obtain phenomenologically useful results we must transform to the (unprimed) mass eigenbasis. Expressing the three-component column vectors in family space as boldface letters, the transformation between the two bases is written as \[\mathbf{P}^{\prime}_{I}=V_{I}\mathbf{P}_{I} \tag{2}\] for \(I\in\{u_{L},d_{L},e_{L},\nu_{L},u_{R},d_{R},e_{R},\nu_{R}\}\). Owing to the family universality of its gauge interactions, the SM is not sensitive to the form of the individual mixing matrices \(V_{I}\). The model builder is allowed to tune the \(V_{I}\) to their liking as long as the matrices are unitary and reproduce the measured CKM and PMNS matrix elements, given by \(V_{\text{CKM}}=V_{u_{L}}^{\dagger}V_{d_{L}}\) and \(U_{\text{PMNS}}=V_{\nu_{L}}^{\dagger}V_{e_{L}}\). We pick a simple ansatz which is able to mediate \(b\to s\mu^{+}\mu^{-}\) transitions by turning on the \(\mathcal{C}_{9\mu}\) Wilson coefficient and not ruled out by tight experimental bounds on FCNC processes: \[V_{d_{L}}=\begin{pmatrix}1&0&0\\ 0&\cos\theta_{sb}&-\sin\theta_{sb}\\ 0&\sin\theta_{sb}&\cos\theta_{sb}\end{pmatrix}, \tag{3}\] \(V_{d_{R}}=1,V_{e_{R}}=1,V_{e_{L}}=1\) and \(V_{u_{R}}=1\), where 1 denotes the 3 by 3 identity matrix. These imply \(V_{u_{L}}=V_{d_{L}}V_{\text{CKM}}^{\dagger}\) and \(V_{\nu_{L}}=U_{\text{PMNS}}^{\dagger}\). Substituting the ansatz into Eq. (1),
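As a quick numerical cross-check of the rotation (our sketch, not the paper's code; the value of \(\theta_{sb}\) is purely illustrative), one can conjugate the weak-basis coupling matrix \(\mathrm{diag}(0,0,1)\), read off from table 1 for the left-handed down-type quarks, by the ansatz of Eq. (3) and recover the \(\overline{s}b\) coupling \(\frac{1}{2}\sin 2\theta_{sb}\) that appears in Eq. (4) below:

```python
import numpy as np

theta_sb = 0.1                  # illustrative value; in the fit it is tuned to C9mu
c, s = np.cos(theta_sb), np.sin(theta_sb)

V_dL = np.array([[1, 0, 0],
                 [0, c, -s],
                 [0, s,  c]])   # the ansatz of Eq. (3), real and orthogonal

coupling_weak = np.diag([0.0, 0.0, 1.0])       # only the third family couples (table 1)
coupling_mass = V_dL.T @ coupling_weak @ V_dL  # mass-basis couplings of d_L quarks

print(coupling_mass[1, 2], 0.5 * np.sin(2 * theta_sb))  # identical: the s-b vertex
# The diagonal picks up sin^2 and cos^2 entries: flavour-diagonal s and b couplings.
```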
one finds the terms \[\mathcal{L}_{Z^{\prime}\psi}\supset-g_{Z^{\prime}}\left[\left(\frac{1}{2}\sin 2\theta_{sb}\overline{s}\slashed{Z}^{\prime}P_{L}b+\text{H.c.}\right)-3\overline{\mu}\slashed{Z}^{\prime}\mu\right] \tag{4}\] which, upon integrating out the \(Z^{\prime}\), will contribute to the \(\mathcal{O}_{9\mu}\) operator in the WET Hamiltonian \[\mathcal{H}_{\text{WET}}\supset\mathcal{C}_{9\mu}\mathcal{N}(\overline{s}\gamma_{\nu}P_{L}b)(\overline{\mu}\gamma^{\nu}\mu) \tag{5}\] with \(\mathcal{C}_{9\mu}\sim g_{Z^{\prime}}^{2}\sin 2\theta_{sb}/M_{Z^{\prime}}^{2}\), and \(\mathcal{N}\) a model-independent constant. For each choice of \(g_{Z^{\prime}}\) and \(M_{Z^{\prime}}\), we tune \(\theta_{sb}\) such that this expression matches the value of \(\mathcal{C}_{9\mu}\) obtained from fits to \(b\to s\mu^{+}\mu^{-}\) data.1 This procedure eliminates \(\theta_{sb}\) as an independent parameter. Footnote 1: We have used the best-fit value obtained in [14] before the LHCb updates [8, 9] of the lepton flavour universality ratios \(R_{K}\) and \(R_{K^{*}}\), \(\mathcal{C}_{9\mu}=-0.73\pm 0.15\). The results presented here are very weakly dependent on the value of \(\theta_{sb}\) and using an up-to-date determination of \(\mathcal{C}_{9\mu}\) would not have an observable impact on them. The \(U(1)_{B_{3}-L_{2}}\) symmetry places restrictions on the forms of the Yukawa matrices of the model, causing the CKM matrix to take the form \[V_{\text{CKM}}^{\text{model}}\sim\begin{pmatrix}\times&\times&0\\ \times&\times&0\\ 0&0&\times\end{pmatrix} \tag{6}\] at the renormalisable level. Here \(\times\) stands for an arbitrary order-one element. This broadly agrees with the experimentally determined CKM matrix [15], \[V_{\text{CKM}}^{\text{exp.}}\approx\begin{pmatrix}1&0.2&0.004\\ 0.2&1&0.04\\ 0.009&0.04&1\end{pmatrix}, \tag{7}\] enabling the \(U(1)_{B_{3}-L_{2}}\) model to act as a bottom-up starting point in explaining the structure of the CKM matrix. ### Spontaneous symmetry breaking The flavon field \(\theta\) modifies the scalar potential of the theory, which reads \[V(H,\theta)=-\mu_{H}^{2}H^{\dagger}H+\lambda_{H}(H^{\dagger}H)^{2}-\mu_{\theta}^{2}\theta^{*}\theta+\lambda_{\theta}(\theta^{*}\theta)^{2}+\lambda_{\theta H}\theta^{*}\theta H^{\dagger}H. \tag{8}\] The last term is of particular significance as it allows the SM Higgs to interact with the flavon. Scalar potentials of this form have been studied in more detail in e.g. [16, 17, 18]. Working in the unitary gauge and expanding both fields about their VEVs: \[H=\begin{pmatrix}0\\ \frac{v_{H}+h^{\prime}}{\sqrt{2}}\end{pmatrix},\qquad\theta=\frac{v_{\theta}+\theta^{\prime}}{\sqrt{2}}, \tag{9}\] one obtains terms bilinear in the two scalars: \[V(H,\theta)\supset-\lambda_{\theta H}v_{\theta}v_{H}h^{\prime}\theta^{\prime}, \tag{10}\] meaning that the scalar field mass matrix is non-diagonal. We perform a field rotation parameterised by an angle \(\phi\) to go from the (primed) non-diagonal field basis to the (unprimed) mass basis: \[\begin{pmatrix}h\\ \theta\end{pmatrix}=\begin{pmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{pmatrix}\begin{pmatrix}h^{\prime}\\ \theta^{\prime}\end{pmatrix}. \tag{11}\] We call \(\phi\) the Higgs-flavon mixing angle. ## 3 Constraints on Higgs-flavon mixing We review here the main experimental and theoretical constraints on the Higgs-flavon mixing angle \(\phi\). See e.g. refs. [18, 19, 20] for more detailed discussions in the context of the real singlet extension of the SM.
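The diagonalisation in Eqs. (10)-(11) is straightforward to reproduce numerically. The sketch below is ours, with illustrative parameter values, and it assumes the standard tree-level mass-squared matrix that follows from the potential (8) once the minimisation conditions are imposed; the sign convention of the off-diagonal entry does not affect the magnitudes quoted.

```python
import numpy as np

# Illustrative inputs, not fits: quartic couplings and VEVs (GeV).
lam_H, lam_theta, lam_thetaH = 0.13, 0.5, 0.05
v_H, v_theta = 246.0, 3000.0

# Tree-level mass-squared matrix in the (h', theta') basis.
M2 = np.array([[2.0 * lam_H * v_H**2,         lam_thetaH * v_H * v_theta],
               [lam_thetaH * v_H * v_theta,   2.0 * lam_theta * v_theta**2]])

vals, vecs = np.linalg.eigh(M2)   # ascending eigenvalues; columns = mass eigenstates
print("masses [GeV]:", np.sqrt(vals))     # ~125 GeV and ~3 TeV for these inputs
print("|sin(phi)|  :", abs(vecs[1, 0]))   # theta'-admixture of the light state
```

For these inputs the light state sits near 125 GeV and \(|\sin\phi|\approx 4\times 10^{-3}\), comfortably inside the bound \(|\sin\phi|\lesssim 0.15\) derived below.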
Since most of these constraints are not affected by the \(Z^{\prime}\) or the flavon field being complex, we obtain constraints that largely align with the literature on the SM singlet extension. Here, we impose four constraints on the \(B_{3}-L_{2}\) model, corresponding to the four coloured regions in figure 1: 1. Exclusion limits from direct Higgs/scalar searches at hadron colliders, obtained using the public code HiggsBounds[21]. This corresponds to the dark green region in figure 1. 2. Higgs signal strength measurements. The limit is obtained using the ATLAS Run 2 combination of the global signal strength [22]. This limit is shown in light green in the figure. 3. Perturbativity of the quartic couplings. Here, we require that the three quartic couplings in Eq. 8 satisfy \(|\lambda_{H}|,|\lambda_{\theta}|,|\lambda_{\theta H}|<4\pi\) to ensure that the model remains perturbative. This requirement gives the orange region in the figure. 4. Measurements of the \(W\) boson mass. Taking the \(Z\) boson mass \(M_{Z}\), the Fermi constant \(G_{F}\) and the fine structure constant \(\alpha_{\rm EM}\) as experimental inputs, the \(W\)-boson mass is predicted to be \[M_{W}^{2}=\frac{1}{2}M_{Z}^{2}\bigg{[}1+\sqrt{1-\frac{4\pi\alpha}{\sqrt{2}G_{F}M_{Z}^{2}}\big{[}1+\Delta r(M_{W}^{2})\big{]}}\,\bigg{]}, \tag{12}\] where the \(\Delta r\) parameter captures SM loop effects as well as the BSM contributions coming from the flavon and the \(Z^{\prime}\). We require that the model prediction of \(M_{W}\) agrees with the experimental world average (excluding the 2022 CDF measurement [23]) reported by the Particle Data Group [15], which gives the yellow region in the figure. The figure shows that, for flavons with \(\mathcal{O}(\mathrm{TeV})\) masses, mixings of magnitude \(|\sin\phi|\lesssim 0.15\) are allowed. The strictest constraint currently comes from the \(W\) boson mass measurements. We note that since the \(B_{3}-L_{2}\) model can only ever make the \(W\) boson lighter, the model is unable to account for the 2022 CDF measurement. Figure 1: Various bounds on the Higgs–flavon mixing angle coming from experimental measurements and theoretical constraints. The coloured regions correspond to 95% CL search limits. The dark green region comes from direct collider searches using scalar search data from the LHC and the Tevatron, whereas the light green constraint is derived from the ATLAS Higgs signal strength measurements. The requirement that the quartic couplings of the scalar potential be perturbative gives rise to the orange limit. The yellow band, which is also the strictest constraint, results from insisting on agreement between the measured value of the \(W\) boson mass and the theory prediction. ## 4 Flavonstrahlung at current and future colliders We will now discuss the feasibility of producing the flavon at a particle collider. To this end, we turn our attention to a process called _flavonstrahlung_ shown in figure 2, which proceeds from a \(b\overline{b}\) or \(\mu^{+}\mu^{-}\) initial state and leads to a final state \(Z^{\prime}\theta\) pair (with further decays into SM particles). The flavonstrahlung process is special compared to the conventional Higgs-like production modes of the flavon because it combines the \(Z^{\prime}\) and the flavon in a single process in a way that, if observed, would confirm the role of the flavon as the source of the \(U(1)_{B_{3}-L_{2}}\) symmetry breaking.
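Equation (12) is easy to evaluate once \(\Delta r\) is supplied. The one-liner below is our sketch; the \(\Delta r\) value is an assumed, SM-like number, and since \(\Delta r\) strictly depends on \(M_{W}\) itself, a full treatment would iterate to self-consistency.

```python
import math

M_Z, G_F, alpha = 91.1876, 1.1663787e-5, 1 / 137.035999  # GeV, GeV^-2, dimensionless
delta_r = 0.036   # assumed SM-like value; flavon and Z' loops would shift it

disc = 1 - (4 * math.pi * alpha) / (math.sqrt(2) * G_F * M_Z**2) * (1 + delta_r)
M_W = math.sqrt(0.5 * M_Z**2 * (1 + math.sqrt(disc)))
print(f"M_W = {M_W:.2f} GeV")   # ~80.4 GeV, close to the PDG world average
```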
Another advantage is that, unlike the conventional Higgs-like production modes, the flavonstrahlung cross-section is proportional to \(\cos^{2}\phi\) and so does not vanish in the limit of zero Higgs-flavon mixing (\(\phi\to 0\)). We use the event generator MadGraph5_aMC@NLO v.3.4.1 [24] to calculate leading-order flavonstrahlung cross-sections for \(pp\) and \(\mu^{+}\mu^{-}\) collisions \(\sigma(pp/\mu^{+}\mu^{-}\to\theta Z^{\prime}\to\theta\mu^{+}\mu^{-})\). The final \(Z^{\prime}\) is assumed to decay into a di-muon pair, a choice motivated by the large \(Z^{\prime}\to\mu^{+}\mu^{-}\) branching ratio and the clean experimental signature it provides. To study the process systematically, we select currently allowed combinations of \(\{M_{Z^{\prime}},g_{Z^{\prime}}\}\) and compute the flavonstrahlung cross-section as a function of the flavon mass \(m_{\theta}\). These benchmark parameter choices are indicated by the five coloured stars in the left-hand panel of figure 3, which is adapted from figure 10 of ref. [25]. The solid and dashed lines stand for various constraints on the parameter space as explained in detail in the caption of figure 3. We have chosen a representative value of the Higgs-flavon mixing angle close to its upper limit, \(\sin\phi=0.15\), noting again that the cross-sections become smaller as \(\phi\) is increased. We start by computing flavonstrahlung cross-sections at a centre-of-mass energy \(\sqrt{s}=14\) TeV, corresponding to the HL-LHC. The cross-sections for the five benchmark points are shown in the right-hand side of figure 3, where the colours of the lines correspond to the colours of the stars in the left-hand panel. Assuming an HL-LHC integrated luminosity of 3000 fb\({}^{-1}\), the plot shows that less than \(\mathcal{O}(1)\) flavonstrahlung events are expected. We conclude that the cross-sections are too small for discovery at the HL-LHC, at least when the flavon charge \(q_{\,\theta}\) is set to unity. Having seen that the HL-LHC lacks the centre-of-mass energy to look for flavonstrahlung, we would now like to investigate the prospects of various future colliders in observing the process. We shall start with a 100 TeV hadron collider with an assumed integrated luminosity of 20-30 ab\({}^{-1}\), representing the FCC-hh. Figure 4 shows the cross-sections for the five parameter space points represented by the coloured stars in figure 3. The resulting cross-sections are enhanced by 3-5 orders of magnitude compared to the HL-LHC. To estimate the reach of the collider, we regard parameter space points at which fewer than 10 flavonstrahlung events are expected to be produced as undiscoverable. Employing this basic approach, we find that the collider can investigate the parameter space up to approximately 5 TeV flavon and \(Z^{\prime}\) masses, as long as \(g_{Z^{\prime}}\gtrsim 0.4\). For the smallest allowed values of the coupling, \(g_{Z^{\prime}}\lesssim 0.4\), the mass reach is more restricted, but the collider remains sensitive up to flavon masses of around 2 TeV. We also simulate flavonstrahlung at 3 TeV and 10 TeV \(\mu^{+}\mu^{-}\) colliders with integrated luminosities of 1 ab\({}^{-1}\) and 10 ab\({}^{-1}\), respectively. The results for the five benchmark points are shown in figure 5. Figure 2: Flavonstrahlung at a hadron collider or a muon collider. Figure 4: Tree-level flavonstrahlung cross-sections for 100 TeV \(pp\) collisions for \(q_{\theta}=1\). Each coloured line corresponds to a parameter space point labelled by a star of the same colour in figure 3.
Figure 3: The left-hand panel, based on figure 9 of ref. [25], shows the \(g_{Z^{\prime}}-M_{Z^{\prime}}\) plane of the parameter space. Everything above the solid black line is excluded at the 95% CL by the LHC whereas the dashed black line indicates the projected 95% CL sensitivity of the HL-LHC. The dashed and solid green lines indicate the \(\Gamma/M_{Z^{\prime}}=1/3\) and \(\Gamma/M_{Z^{\prime}}=1\) bounds, above which perturbative computations become inaccurate. The blue dashed lines are bounds arising from the neutrino trident cross-sections and \(B_{s}-\overline{B_{s}}\) mixing; the region of parameter space between the two lines is currently allowed. Coloured stars have been superposed on the figure, with each star labelling a benchmark point in the parameter plane. The right-hand panel shows tree-level flavonstrahlung cross-sections for 14 TeV \(pp\) collisions with the flavon charge \(q_{\theta}\) set to unity. Each coloured line corresponds to a parameter space point labelled by a star of the same colour. Figure 5 shows that the cross-sections at the 3 TeV collider are large enough to reach regions of the parameter space up to \(M_{Z^{\prime}}\lesssim 5\) TeV and \(m_{\,\theta}\lesssim 2.5\) TeV. As for the 10 TeV muon collider, the figure shows that there is excellent reach up to \(M_{Z^{\prime}}\lesssim 15\) TeV and \(m_{\,\theta}\lesssim 8\) TeV. The flavonstrahlung cross-sections at the 10 TeV muon collider are 2-3 orders of magnitude larger than at the 100 TeV hadron collider, for a fixed parameter space point. This outcome is to be expected since flavonstrahlung at a hadron collider requires a \(b\overline{b}\) partonic initial state, whereas the muon collider can utilise nearly the entire beam luminosity for flavonstrahlung production. ## 5 Conclusions The \(B_{3}-L_{2}\)\(Z^{\prime}\) model, whose salient features we have reviewed, is motivated by the \(b\to s\mu^{+}\mu^{-}\) anomalies and the fermion mass puzzle. We have studied the flavon potential of the model and placed constraints on the size of the Higgs-flavon mixing in the model, concluding that mixing of magnitude \(|\sin\phi|<0.15\) is currently allowed. The flavon may be produced through the flavonstrahlung process at hadron or muon colliders. Whilst the cross-sections are likely too small for flavonstrahlung to be observed at the HL-LHC, we have shown that a 100 TeV hadron collider or a 10 TeV muon collider would have excellent discovery prospects. ###### Acknowledgments. EL would like to thank Ben Allanach for his contributions to the work presented here. EL is supported by STFC consolidated grant ST/T000694/1.
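The reach statements above boil down to simple event counting, \(N=\sigma\times\int\!\mathcal{L}\,dt\), combined with the 10-event discovery criterion. A toy sketch (ours; the cross-section values are invented placeholders of roughly the right order, not readouts from figures 3-5):

```python
fb = 1.0                        # work in femtobarns; luminosities in fb^-1
benchmarks = {                  # (assumed cross-section [fb], integrated luminosity [fb^-1])
    "HL-LHC, 14 TeV, 3 ab^-1":         (1e-4 * fb, 3_000),
    "FCC-hh, 100 TeV, 30 ab^-1":       (1e-1 * fb, 30_000),
    "muon collider, 10 TeV, 10 ab^-1": (1.0 * fb, 10_000),
}
for name, (sigma, lumi) in benchmarks.items():
    n_events = sigma * lumi
    verdict = "discoverable" if n_events >= 10 else "undiscoverable"
    print(f"{name}: N = {n_events:g} -> {verdict}")
```

With these placeholder numbers the HL-LHC expects well under one event while the future machines sit far above the 10-event threshold, mirroring the qualitative conclusions of the text.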
2307.02492
Some familiar graphs on the rings of measurable functions
In this paper, replacing 'equality' by 'equality almost everywhere' we modify several terms associated with the ring of measurable functions defined on a measure space $(X, \mathcal{A}, \mu)$ and thereby study the graph theoretic features of the modified comaximal graph, annihilator graph and the weakly zero-divisor graph of the said ring. The study reveals a structural analogy between the modified versions of the comaximal and the zero-divisor graphs, which prompted us to investigate whether these two graphs are isomorphic. Introducing a quotient-like concept, we find certain subgraphs of the comaximal graph and the zero-divisor graph of $\mathcal{M}(X, \mathcal{A})$ and show that these two subgraphs are always isomorphic. Choosing $\mu$ as a counting measure, we prove that even if these two induced graphs are isomorphic, the parent graphs may not be so. However, in the case of the Lebesgue measure space on $\mathbb{R}$, we establish that the comaximal and the zero-divisor graphs are isomorphic. Observing that both of the comaximal and the zero-divisor graphs of the ring $\mathcal{M}(X, \mathcal{A})$ are subgraphs of the annihilator graph of the said ring, we find equivalent conditions for their equalities in terms of the partitioning of $X$ into two atoms. Moreover, the non-atomicity of the underlying measure space $X$ is characterized through graph theoretic phenomena of the comaximal and the annihilator graph of $\mathcal{M}(X, \mathcal{A})$.
Pratip Nandi, Atasi Deb Ray, Sudip Kumar Acharyya
2023-07-04T04:23:09Z
http://arxiv.org/abs/2307.02492v1
# Some familiar graphs on the rings of measurable functions ###### Abstract. In this paper, replacing 'equality' by 'equality almost everywhere' we modify several terms associated with the ring of measurable functions defined on a measure space \((X,\mathcal{A},\mu)\) and thereby study the graph theoretic features of the modified comaximal graph, annihilator graph and the weakly zero-divisor graph of the said ring. The study reveals a structural analogy between the modified versions of the comaximal and the zero-divisor graphs, which prompted us to investigate whether these two graphs are isomorphic. Introducing a quotient-like concept, we find certain subgraphs of the comaximal graph and the zero-divisor graph of \(\mathcal{M}(X,\mathcal{A})\) and show that these two subgraphs are always isomorphic. Choosing \(\mu\) as a counting measure, we prove that even if these two induced graphs are isomorphic, the parent graphs may not be so. However, in the case of the Lebesgue measure space on \(\mathbb{R}\), we establish that the comaximal and the zero-divisor graphs are isomorphic. Observing that both of the comaximal and the zero-divisor graphs of the ring \(\mathcal{M}(X,\mathcal{A})\) are subgraphs of the annihilator graph of the said ring, we find equivalent conditions for their equalities in terms of the partitioning of \(X\) into two atoms. Moreover, the non-atomicity of the underlying measure space \(X\) is characterized through graph theoretic phenomena of the comaximal and the annihilator graph of \(\mathcal{M}(X,\mathcal{A})\). Key words and phrases:Rings of measurable functions, Zero-divisor graph, Co-maximal graph, Annihilator graph, Complemented graph, Bipartite graph, dominating number, Atomic measure, Lebesgue measure 2020 Mathematics Subject Classification: Primary 13A70; Secondary 05C60, 05C63, 05C90 The first author thanks the CSIR, New Delhi - 110001, India, for financial support. ## 1. Introduction Let \(G\) be a graph with \(V\) as the set of vertices of \(G\). \(G\) is said to be a simple graph if \(G\) contains no self-loops and no parallel edges. \(G\) is a connected graph if every pair of vertices is connected by a path in \(G\). A stable set in \(G\) is a subset of \(V\) in which no two vertices are adjacent. For a cardinal number \(\alpha\), \(G\) is called an \(\alpha\)-partite graph if \(G\) contains \(\alpha\)-many disjoint stable sets whose union is \(V\). An \(\alpha\)-partite graph is called a complete \(\alpha\)-partite graph if any two vertices from two different stable sets are adjacent. If \(\alpha=2\), we call these graphs bipartite and complete bipartite respectively. The distance between two vertices \(u,v\) in \(G\) is the length of the shortest path joining \(u,v\), denoted by \(d(u,v)\). The eccentricity of a vertex \(v\) is defined by \(ecc(v)=max\{d(u,v):u\in V\}\). The diameter of \(G\) is \(diam(G)=max\{d(u,v):u,v\in V\}\) and the girth \(gr(G)\) of \(G\) is the length of the smallest cycle in \(G\). \(G\) is triangulated or hypertriangulated according as every vertex is a vertex of a triangle or every edge is an edge of a triangle in \(G\). For two vertices \(u,v\in V\), \(c(u,v)\) denotes the length of the smallest cycle in \(G\) containing \(u,v\). Two vertices \(u,v\) are said to be orthogonal in \(G\), denoted by \(u\perp v\), if \(u,v\) are adjacent and \(c(u,v)>3\). \(G\) is called a complemented graph if for every \(u\in V\), there exists \(v\in V\) such that \(u\perp v\).
A complemented graph is called uniquely complemented if whenever \(u\perp v\) and \(u\perp w\) in \(G\) for \(u,v,w\in V\), then \(v,w\) are adjacent to the same set of vertices in \(G\). The clique number \(cl(G)\) of \(G\) is the maximum cardinality of complete subgraphs of \(G\). A subset \(V_{1}\) of \(V\) is a dominating set in \(G\) if for every \(u\in V\setminus V_{1}\), \(u,v\) are adjacent in \(G\) for some \(v\in V_{1}\) and the dominating number \(dt(G)=min\{|V_{1}|:V_{1}\) is a dominating set in \(G\}\). A subset \(V_{1}\) of \(V\) is a total dominating set in \(G\) if for every \(u\in V\), \(u,v\) are adjacent in \(G\) for some \(v\in V_{1}\) and the total dominating number \(dt_{t}(G)=min\{|V_{1}|:V_{1}\) is a total dominating set in \(G\}\). For a cardinal number \(\alpha\), \(G\) is said to be \(\alpha\)-colorable if there is a map \(\psi:V\rightarrow[0,\alpha]\) such that \(\psi(u)\neq\psi(v)\) whenever \(u,v\) are adjacent in \(G\). The chromatic number \(\chi(G)\) of \(G\) is \(min\{\alpha:G\) is \(\alpha\)-colorable}. For any subset \(V^{\prime}\) of \(V\), the induced subgraph \(G^{\prime}\) of \(G\) induced by \(V^{\prime}\) is the graph whose vertex set is \(V^{\prime}\) and two vertices in \(G^{\prime}\) are adjacent if they are adjacent in \(G\). Let \(G_{1},G_{2}\) be two graphs having \(V_{1},V_{2}\) as their sets of vertices respectively. A bijective map \(\psi:V_{1}\to V_{2}\) is called a graph isomorphism between the two graphs \(G_{1},G_{2}\) if it preserves the adjacency relations; i.e., \(a,b\in V_{1}\) are adjacent in \(G_{1}\) if and only if \(\psi(a),\psi(b)\) are adjacent in \(G_{2}\). For more graph related terms we refer to [8]. Let \((X,\mathcal{A},\mu)\) be a measure space and \(\mathcal{M}(X,\mathcal{A})\) be the corresponding ring of real-valued measurable functions on \(X\). It is clear that \(Z(f),X\setminus Z(f)\in\mathcal{A}\) for all \(f\in\mathcal{M}(X,\mathcal{A})\), here \(Z(f)=\{x\in X:f(x)=0\}\) is the zero set of \(f\). For any subset \(A\) of \(X\), the characteristic function of \(A\) is denoted by \(1_{A}\) and is given by \[1_{A}(x)=\begin{cases}1&\text{ if }x\in A\\ 0&\text{ otherwise}\end{cases}\] Therefore, \(1_{A}\in\mathcal{M}(X,\mathcal{A})\) if and only if \(A\in\mathcal{A}\). An element \(A\in\mathcal{A}\) is called an atom in \(X\) if \(\mu(A)>0\) and \(A\) cannot be written as a union of two disjoint measurable sets, each with positive measure, i.e., \(A\) cannot be written as \(B\sqcup C\), where \(B,C\in\mathcal{A}\) with \(\mu(B),\mu(C)>0\), here '\(\sqcup\)' denotes the disjoint union of two sets. Suppose that \(A\in\mathcal{A}\) is an atom in \(X\). Then for any \(B\in\mathcal{A}\) with \(\mu(B)=0\), it can easily be proved that \(A\setminus B\) and \(A\cup B\) are both atoms in \(X\). The measure \(\mu\) is said to be an atomic measure if every measurable set with positive measure contains an atom. On the other hand, \(\mu\) is said to be non-atomic if \(\mathcal{A}\) does not contain any atom. Let \(R\) be a commutative ring. There are several graphs, viz. the zero-divisor graph, the comaximal graph, etc., defined on \(R\) to study the interaction between the properties of the ring and the properties of the respective graphs. The zero-divisor graph \(\Gamma(R)\) of \(R\) has its vertices as the set of non-zero zero-divisors of \(R\) with two distinct vertices \(x,y\) declared adjacent if and only if \(x.y=0\) [[4], [7]].
The comaximal graph of \(R\), defined in [12], has the elements of \(R\) as its vertices, where two distinct vertices \(x\) and \(y\) are adjacent if and only if the sum of the principal ideals generated by \(x,y\) is \(R\). Later in [6], the vertex set of the comaximal graph of \(R\) was redefined as \(R\setminus[U(R)\cup J(R)]\), where \(U(R)\) and \(J(R)\) denote the set of all units in \(R\) and the Jacobson radical of \(R\) respectively. This graph is denoted by \(\Gamma^{\prime}_{2}(R)\). The annihilator graph \(AG(R)\) of \(R\), introduced in [5], is a supergraph of \(\Gamma(R)\) having the same set of vertices, where two distinct vertices \(x,y\) are adjacent if and only if \(ann(x)\cup ann(y)\subsetneqq ann(x.y)\); here \(ann(a)=\{r\in R:a.r=0\}\) denotes the annihilator ideal of \(a\) in \(R\). The weakly zero-divisor graph \(W\Gamma(R)\) of \(R\), introduced in [11], is also a supergraph of \(\Gamma(R)\) with the same set of vertices, where two distinct vertices \(x,y\) are adjacent if and only if there exist \(a\in ann(x)\setminus\{0\}\) and \(b\in ann(y)\setminus\{0\}\) such that \(a.b=0\). As proposed in the introduction of this paper, we study the behaviour of the well-known comaximal, annihilator and weakly zero-divisor graphs of the ring of measurable functions defined over a measure space \((X,\mathcal{A},\mu)\), redefining them via \(\mu\). In this context, a few of the ring theoretic terms also demand natural modifications when 'equality' is replaced by 'equality almost everywhere', in the presence of a measure \(\mu\) on \((X,\mathcal{A})\). We now describe this modification and redefine the terms, retaining their nomenclature in most of the cases. Two functions \(f,g\in\mathcal{M}(X,\mathcal{A})\) are said to be equal almost everywhere on \(X\) with respect to the measure \(\mu\) if \(\mu(\{x\in X:f(x)\neq g(x)\})=0\) and in this case we write \(f\equiv g\) a.e. on \(X\). If \(f\in\mathcal{M}(X,\mathcal{A})\) is such that \(\mu(X\setminus Z(f))>0\), then \(f\) is said to be non-zero almost everywhere on \(X\) (in short, non-zero a.e. on \(X\)). **Definition 2.1**.: 1. A function \(f\in\mathcal{M}(X,\mathcal{A})\) which is non-zero a.e. on \(X\), is called a zero-divisor in \(\mathcal{M}(X,\mathcal{A})\) if there exists \(g\in\mathcal{M}(X,\mathcal{A})\) such that \(g\) is non-zero a.e. on \(X\) and \(f.g\equiv 0\) a.e. on \(X\). The set of all zero-divisors in \(\mathcal{M}(X,\mathcal{A})\) is denoted by \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) (in short, \(\mathscr{D}\)). 2. A function \(f\in\mathcal{M}(X,\mathcal{A})\) is called a \(\mu\)-unit in \(\mathcal{M}(X,\mathcal{A})\) if \(\mu(Z(f))=0\). \(U(\mathcal{M}(X,\mathcal{A}))\) stands for the set of all \(\mu\)-units in \(\mathcal{M}(X,\mathcal{A})\). 3. An ideal \(I\) in \(\mathcal{M}(X,\mathcal{A})\) is said to be almost \(\mathcal{M}(X,\mathcal{A})\) if \(I\) is a principal ideal generated by a \(\mu\)-unit in \(\mathcal{M}(X,\mathcal{A})\). Since the Jacobson radical of \(\mathcal{M}(X,\mathcal{A})\) is \((0)\), we redefine \(J(\mathcal{M}(X,\mathcal{A}))=\{f\in\mathcal{M}(X,\mathcal{A}):f\equiv 0\ \text{a.e. on }X\}\). For each \(f\in\mathcal{M}(X,\mathcal{A})\), the annihilator ideal of \(f\) is also redefined as \(ann(f)=\{g\in\mathcal{M}(X,\mathcal{A}):f.g\equiv 0\ \text{a.e. on }X\}\). We can also write \(ann(f)\) as \(\{g\in\mathcal{M}(X,\mathcal{A}):\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\}\), where \(f\in\mathcal{M}(X,\mathcal{A})\).
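These \(\mu\)-modified notions are concrete enough to experiment with on a finite toy measure space, and such experiments also confirm the first results proved below (Theorems 2.2 and 2.5). The following sketch is our illustration, not part of the paper; since membership in \(ann(f)\) depends only on cozero sets, quantifying over \(\{0,1\}\)-valued functions suffices.

```python
from itertools import product

# A toy measure space: X = {0,1,2,3} with mu({0}) = 0 and mu({x}) = 1 otherwise,
# so "almost everywhere" statements may ignore the point 0.
X = range(4)
mu = lambda A: sum(1 for x in A if x != 0)

def Z(f):                                # zero set of f (a tuple of values on X)
    return {x for x in X if f[x] == 0}

def coz(f):                              # cozero set X \ Z(f)
    return set(X) - Z(f)

def in_ann(f, g):                        # g lies in ann(f): mu(coz f ∩ coz g) = 0
    return mu(coz(f) & coz(g)) == 0

fs = list(product([0, 1], repeat=4))     # all {0,1}-valued test functions

# Theorem 2.2: f is a zero-divisor iff mu(Z(f)) > 0 and mu(X \ Z(f)) > 0.
def zd_by_definition(f):
    return mu(coz(f)) > 0 and any(mu(coz(g)) > 0 and in_ann(f, g) for g in fs)

for f in fs:
    assert zd_by_definition(f) == (mu(Z(f)) > 0 and mu(coz(f)) > 0)

# Theorem 2.5: ann(f) ⊆ ann(g) iff mu(Z(f) \ Z(g)) = 0.
for f in fs:
    for g in fs:
        contained = all(in_ann(g, h) for h in fs if in_ann(f, h))
        assert contained == (mu(Z(f) - Z(g)) == 0)

print("Theorems 2.2 and 2.5 verified on the toy space")
```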
The following theorem completely describes the zero-divisors of \(\mathcal{M}(X,\mathcal{A})\): **Theorem 2.2**.: \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))=\{f\in\mathcal{M}(X,\mathcal{A}):\mu(Z(f))>0\) _and \(\mu(X\setminus Z(f))>0\}\)._ Proof.: Let \(f\in\mathcal{M}(X,\mathcal{A})\) be such that \(\mu(Z(f))>0\) and \(\mu(X\setminus Z(f))>0\). Then \(f\) is non-zero a.e. on \(X\) and \(g=1_{Z(f)}\) satisfies \(f.g=0\) on \(X\). Clearly \(g\in\mathcal{M}(X,\mathcal{A})\) and \(\mu(X\setminus Z(g))=\mu(Z(f))>0\), i.e., \(g\) is non-zero a.e. on \(X\). So, \(f\in\mathscr{D}\). Conversely let \(f\in\mathscr{D}\). Then by the definition of a zero-divisor, \(\mu(X\setminus Z(f))>0\) and there exists \(g\in\mathcal{M}(X,\mathcal{A})\) with \(\mu(X\setminus Z(g))>0\) such that \(f.g\equiv 0\) a.e., i.e., there exists \(A\in\mathcal{A}\) with \(\mu(A)=0\) such that \(f.g=0\) on \(X\setminus A\). So, \(X\setminus Z(g)=(X\setminus Z(g)\cap A)\sqcup(X\setminus Z(g)\cap X\setminus A)\) and \(\mu(X\setminus Z(g)\cap A)=0\). This implies that \(\mu(X\setminus Z(g)\cap X\setminus A)=\mu(X\setminus Z(g))>0\). Therefore, \(\mu(Z(f))>0\). We observe in the next theorem that the statement that the ring \(\mathcal{M}(X,\mathcal{A})\) has no divisor of zero is equivalent to a purely measure theoretic phenomenon of the measure space \((X,\mathcal{A},\mu)\). **Theorem 2.3**.: _The following statements are equivalent:_ 1. _Every measurable set in_ \(X\) _with positive measure is an atom._ 2. \(X\) _is an atom._ 3. \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _is an empty set._ Proof.: \((1)\implies(2)\) is trivial, as \(\mu(X)>0\), and \((2)\implies(3)\) follows directly from the definition of an atom. \((3)\implies(1)\): Let \((1)\) be false. Then there exists \(A\in\mathcal{A}\) with \(\mu(A)>0\) which is not an atom. So \(A\) is expressible as the disjoint union of some \(A_{1},A_{2}\in\mathcal{A}\) with \(\mu(A_{1})>0\) and \(\mu(A_{2})>0\). In such a case, \(1_{A_{1}},1_{A_{2}}\in\mathscr{D}\), proving \(\mathscr{D}\neq\emptyset\). Following are a couple of examples of measure spaces where every measurable set with positive measure is an atom and therefore, by Theorem 2.3, their corresponding rings of measurable functions possess no divisor of zero. **Example 2.4**.: (1) Let \(X\) be a non-empty set, \(\mathcal{A}=\mathscr{P}(X)\) (= the power set of \(X\)) and \(x_{0}\in X\). Consider the Dirac measure at \(x_{0}\) given by \[\delta_{x_{0}}(A)=\begin{cases}0&\text{ if }x_{0}\notin A\\ 1&\text{ if }x_{0}\in A.\end{cases}\] (2) Let \(X\) be an uncountable set, \(\mathcal{A}=\{A\subseteqq X:\text{ either }A\text{ or }X\setminus A\text{ is countable}\}\). Consider the co-countable measure given by \[\rho(A)=\begin{cases}0&\text{ if }A\text{ is countable}\\ \infty&\text{ if }X\setminus A\text{ is countable}.\end{cases}\] **Theorem 2.5**.: _For any \(f,g\in\mathcal{M}(X,\mathcal{A})\), \(ann(f)\subset ann(g)\) if and only if \(\mu(Z(f)\setminus Z(g))=0\)._ Proof.: Let \(E=Z(f)\setminus Z(g)\). If \(\mu(E)=0\), then \(Z(f)\cap X\setminus E\subset Z(g)\). Now let \(h\in ann(f)\), i.e., \(h.f=0\) on \(X\setminus F\) for some \(F\in\mathcal{A}\) with \(\mu(F)=0\). Therefore \(X\setminus Z(h)\cap X\setminus F\subset Z(f)\implies X\setminus Z(h)\cap X\setminus(E\cup F)\subset Z(g)\implies h.g=0\) on \(X\setminus(E\cup F)\). Since \(\mu(E\cup F)=0\), \(h.g\equiv 0\) a.e. on \(X\), i.e., \(h\in ann(g)\). Conversely let \(\mu(E)>0\). Let \(h=1_{E}\).
Then \(h\in\mathcal{M}(X,\mathcal{A})\) and \(X\setminus Z(h)=E=Z(f)\setminus Z(g)\). Therefore \(f.h=0\) on \(X\), while \(X\setminus Z(h)\cap X\setminus Z(g)=E\implies\mu(X\setminus Z(h)\cap X\setminus Z(g))>0\). Thus \(h\in ann(f)\setminus ann(g)\), i.e., \(ann(f)\not\subset ann(g)\). **Corollary 2.6**.: For \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), \(\mu(Z(f)\triangle Z(g))=0\) if and only if \(ann(f)=ann(g)\). In view of Theorem 2.3, throughout this paper, we assume that \(X\) is not an atom and therefore, for every atom \(A\in\mathcal{A}\), \(\mu(X\setminus A)>0\). **The comaximal graph \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) of \(\mathcal{M}(X,\mathcal{A})\)** We redefine the comaximal graph of \(\mathcal{M}(X,\mathcal{A})\), taking into account the measure defined on the measurable space \((X,\mathcal{A})\) and investigate in this section its graph features via the properties of the underlying measure space \((X,\mathcal{A},\mu)\) and the ring \(\mathcal{M}(X,\mathcal{A})\). **Definition 3.1**.: The comaximal graph \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) (in short, \(\Gamma^{\prime}_{2}\)) of \(\mathcal{M}(X,\mathcal{A})\) is a graph on the vertex set \(\mathcal{M}(X,\mathcal{A})\setminus[U(\mathcal{M}(X,\mathcal{A}))\cup J(\mathcal{M}(X,\mathcal{A}))]\) and the adjacency relation for two vertices \(f,g\) is given by: \(f,g\) are adjacent if and only if the sum of the principal ideals generated by \(f,g\) is almost \(\mathcal{M}(X,\mathcal{A})\). In notation, \(<f>+<g>=<u>\), where \(u\) is a \(\mu\)-unit in \(\mathcal{M}(X,\mathcal{A})\). Theorems 2.2 and 2.3 show that the vertex set of the comaximal graph \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) and that it is non-empty under the assumption that \(X\) is not an atom. In [9], the author defined the zero-divisor graph \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) (in short, \(\Gamma\)) of \(\mathcal{M}(X,\mathcal{A})\) whose vertex set is \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) and the adjacency relation is given by: \(f\) is adjacent to \(g\) if and only if \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). So, the sets of vertices of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) coincide. The adjacency relation in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) can be interpreted through the almost disjointness of the corresponding zero sets of the vertices in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\), as seen in the next result. **Theorem 3.2**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) if and only if \(\mu(Z(f)\cap Z(g))=0\)._ Proof.: Let \(f,g\) be adjacent in \(\Gamma^{\prime}_{2}\). Then there exists a \(\mu\)-unit \(u\) in \(\mathcal{M}(X,\mathcal{A})\) such that \(<f>+<g>=<u>\), i.e., \(f.f_{1}+g.g_{1}=u\) for some \(f_{1},g_{1}\in\mathcal{M}(X,\mathcal{A})\). Therefore, \(Z(f)\cap Z(g)\subset Z(u)\). Since \(u\) is a \(\mu\)-unit, \(\mu(Z(u))=0\implies\mu(Z(f)\cap Z(g))=0\). Conversely, let \(\mu(Z(f)\cap Z(g))=0\). We claim that \(<f>+<g>=<u>\), where \(u=f^{2}+g^{2}\), a \(\mu\)-unit in \(\mathcal{M}(X,\mathcal{A})\). Clearly \(u\in<f>+<g>\). Also \(f\in<u>\), because \(f=u.f_{1}\), where \[f_{1}(x)=\begin{cases}\frac{f(x)}{f^{2}(x)+g^{2}(x)}&\text{ if }x\notin Z(f)\cap Z(g)\\ 0&\text{ if }x\in Z(f)\cap Z(g)\end{cases}\] and similarly, \(g\in<u>\). Hence, \(<f>+<g>=<u>\).
i.e., \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\). The following theorem provides a necessary and sufficient condition under which a pair of vertices of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) admits of a third vertex adjacent to both of them. **Theorem 3.3**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then there is a vertex in \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) adjacent to both \(f,g\) in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) if and only if \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\)._ Proof.: Let \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\). Consider \(h=1_{Z(f)\cup Z(g)}\in\mathcal{M}(X,\mathcal{A})\). Clearly \(Z(h)=X\setminus Z(f)\cap X\setminus Z(g)\implies h\in\mathscr{D}\). Also \(Z(f)\cap Z(h)=\emptyset=Z(g)\cap Z(h)\) i.e., \(h\) is adjacent to both \(f,g\) in \(\Gamma^{\prime}_{2}\). Conversely, let \(h\in\mathscr{D}\) be adjacent to both \(f,g\) in \(\Gamma^{\prime}_{2}\). So \(\mu(Z(f)\cap Z(h))=0=\mu(Z(g)\cap Z(h))\) and hence, \(\mu(Z(h)\cap[Z(f)\cup Z(g)])=0\). Now \(Z(h)=(Z(h)\cap[Z(f)\cup Z(g)])\sqcup(Z(h)\cap X\setminus[Z(f)\cup Z(g)]) \implies\mu(Z(h)\cap X\setminus[Z(f)\cup Z(g)])=\mu(Z(h))>0\), as \(h\in\mathscr{D}\). Consequently \(\mu(X\setminus[Z(f)\cup Z(g)])>0\); i.e., \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\). **Corollary 3.4**.: Every edge \(f-g\) in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is either an edge of a triangle or an edge of a square according as \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\) or \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Proof.: If \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\), then by Theorem 3.3, there exists a vertex adjacent to both \(f,g\). If \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\), then \(f-g-2f-2g-f\) forms a square in \(\Gamma^{\prime}_{2}\). **Corollary 3.5**.: \(gr(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A})))\leq 4\)_._ The distance between two vertices in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is given below. **Theorem 3.6**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then_ \[d(f,g)=\begin{cases}1&\text{ if }\mu(Z(f)\cap Z(g))=0\\ 2&\text{ if }\mu(Z(f)\cap Z(g))>0\text{ and }\mu(X\setminus Z(f)\cap X \setminus Z(g))>0\\ 3&\text{ if }\mu(Z(f)\cap Z(g))>0\text{ and }\mu(X\setminus Z(f)\cap X \setminus Z(g))=0\end{cases}\] Proof.: The conditions for \(d(f,g)=1\) or \(2\) are straightforward consequences of the adjacency relation and Theorem 3.3. From these two results, \(d(f,g)=3\implies\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Conversely let \(\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Then certainly, \(d(f,g)>2\). Let \(h_{1}=1_{Z(f)}\) and \(h_{2}=1_{Z(g)}\). Then \(h_{1},h_{2}\in\mathscr{D}\) and \(Z(h_{1})=X\setminus Z(f),Z(h_{2})=X\setminus Z(g)\implies Z(f)\cap Z(h_{1})= \emptyset=Z(h_{2})\cap Z(g)\) and also \(\mu(Z(h_{1})\cap Z(h_{2}))=0\). Therefore \(f-h_{1}-h_{2}-g\) is a path in \(\Gamma^{\prime}_{2}\). Consequently, \(d(f,g)=3\). **Corollary 3.7**.: \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is a connected graph._ **Lemma 3.8**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\)._ 1. _If_ \(\mu(Z(f)\triangle Z(g))=0\)_, then_ \(f,g\) _are not adjacent in_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) 2. _If_ \(\mu(Z(f)\setminus A)=0\) _and_ \(\mu(Z(g)\cap A)=0\) _for some_ \(A\in\mathcal{A}\)_, then_ \(f,g\) _are adjacent in_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\)_._ 3. 
_If_ \(f,g\) _are adjacent in_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\)_, then for any_ \(f_{1}\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _with_ \(\mu(Z(f_{1})\triangle Z(f))=0\) _and for any_ \(g_{1}\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _with_ \(\mu(Z(g_{1})\triangle Z(g))=0\)_,_ \(f_{1},g_{1}\) _are adjacent in_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\)_._ Proof.: 1. Let \(\mu(Z(f)\triangle Z(g))=0\). If \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\), then \(\mu(Z(f)\cap Z(g))=0\). Now \(Z(f)\subset(Z(f)\cap Z(g))\cup(Z(f)\triangle Z(g))\implies\mu(Z(f))=0\), a contradiction to \(f\in\mathscr{D}\). Therefore, \(f,g\) are not adjacent in \(\Gamma^{\prime}_{2}\). 2. Let \(\mu(Z(f)\setminus A)=0\) and \(\mu(Z(g)\cap A)=0\) for some \(A\in\mathcal{A}\). Now \(Z(f)\cap Z(g)\subset[A\cup(Z(f)\setminus A)]\cap[(X\setminus A)\cup(Z(g)\cap A)]\implies\mu(Z(f)\cap Z(g))=0\). Therefore, \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\). 3. Let \(f,g\) be adjacent in \(\Gamma^{\prime}_{2}\) and \(\mu(Z(f_{1})\triangle Z(f))=0=\mu(Z(g_{1})\triangle Z(g))\) for some \(f_{1},g_{1}\in\mathscr{D}\). Since \(f,g\) are adjacent, \(\mu(Z(f)\cap Z(g))=0\). Now \(Z(f_{1})\cap Z(g_{1})\subset[Z(f)\cup(Z(f_{1})\triangle Z(f))]\cap[Z(g)\cup(Z(g_{1})\triangle Z(g))]\). Therefore, \(f_{1},g_{1}\) are adjacent in \(\Gamma^{\prime}_{2}\). **Lemma 3.9**.: _A partition of \(X\) by two atoms \(A,B\) yields a partition of \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) by two sets \(\{f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A})):\mu(Z(f)\triangle A)=0\}\) and \(\{f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A})):\mu(Z(f)\triangle B)=0\}\)._ Proof.: To prove this result, it is enough to show that for each \(f\in\mathscr{D}\), either \(\mu(Z(f)\triangle A)=0\) or \(\mu(Z(f)\triangle B)=0\). Let \(f\in\mathscr{D}\). Since \(A\) is an atom, either \(\mu(A\cap Z(f))=0\) or \(\mu(A\cap X\setminus Z(f))=0\). At first, let \(\mu(A\cap Z(f))=0\). By hypothesis, \(A=X\setminus B\) and so, \(\mu(X\setminus B\cap Z(f))=0\). If \(\mu(B\cap Z(f))=0\), then \(\mu(Z(f))=0\) which contradicts \(f\in\mathscr{D}\). Hence, \(\mu(B\cap Z(f))>0\) which implies \(\mu(B\cap X\setminus Z(f))=0\), as \(B\) is also an atom. Consequently, \(\mu(B\triangle Z(f))=0\). Analogously, the hypothesis \(\mu(A\cap X\setminus Z(f))=0\) would lead to the conclusion \(\mu(Z(f)\triangle A)=0\). **Theorem 3.10**.: \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is a complete bipartite graph if and only if \(X\) is partitioned into two atoms._ Proof.: If \(\Gamma^{\prime}_{2}\) is a complete bipartite graph, then it has a bipartition, say \(V_{1},V_{2}\). Since both \(V_{1},V_{2}\) are non-empty, we choose and fix \(f\in V_{1}\) and \(g\in V_{2}\). \(\Gamma^{\prime}_{2}\) being complete bipartite, no \(h\in\mathscr{D}\) is adjacent to both \(f,g\) and hence by Theorem 3.3, \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Also \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\implies\mu(Z(f)\cap Z(g))=0\). We claim that \(Z(f)\) is an atom. To prove this, let \(A\in\mathcal{A}\) with \(\mu(A)>0\). If \(\mu(X\setminus A)=0\), then \(\mu(X\setminus A\cap Z(f))=0\). Now let \(\mu(X\setminus A)>0\). Then \(1_{A},1_{X\setminus A}\in\mathscr{D}\) and \(1_{A},1_{X\setminus A}\) are adjacent in \(\Gamma^{\prime}_{2}\). If \(1_{A}\in V_{1}\), then \(1_{X\setminus A}\in V_{2}\implies 1_{X\setminus A},f\) are adjacent, i.e., \(\mu(A\cap Z(f))=0\). If \(1_{A}\in V_{2}\), then \(1_{A},f\) are adjacent and so \(\mu(X\setminus A\cap Z(f))=0\).
In any case either \(\mu(A\cap Z(f))=0\) or \(\mu(X\setminus A\cap Z(f))=0\). Consequently, \(Z(f)\) is an atom. Similarly we can show that \(Z(g)\) is an atom. Let \(A=Z(f)\setminus Z(g)=Z(f)\setminus[Z(f)\cap Z(g)]\) and \(B=Z(g)\cup X\setminus Z(f)=Z(g)\cup[X\setminus Z(f)\cap X\setminus Z(g)]\). Then \(X=A\sqcup B\) and \(A,B\) are atoms. Conversely let \(A,B\) be two atoms in \(X\) such that \(X=A\sqcup B\). Let \(V_{1}=\{f\in\mathscr{D}:\mu(Z(f)\triangle A)=0\}\) and \(V_{2}=\{f\in\mathscr{D}:\mu(Z(f)\triangle B)=0\}\). By Lemma 3.9, \(\mathscr{D}=V_{1}\sqcup V_{2}\). Again by Lemma 3.8(1), \(V_{1},V_{2}\) are stable sets in \(\Gamma^{\prime}_{2}\). Let \(f\in V_{1}\) and \(g\in V_{2}\). Then \(\mu(Z(f)\triangle A)=0=\mu(Z(g)\triangle B)\implies\mu(Z(f)\setminus A)=0=\mu(Z(g)\setminus B)=\mu(Z(g)\cap A)\), as \(B=X\setminus A\). By Lemma 3.8(2), \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\). Consequently, \(\Gamma^{\prime}_{2}\) is a complete bipartite graph. **Corollary 3.11**.: \(X\) _is partitioned into two atoms if and only if for each \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), both \(Z(f)\) and \(X\setminus Z(f)\) are atoms._ Proof.: Let \(X=A\sqcup B\), where \(A,B\) are two atoms. Then by Lemma 3.9, \(\mu(Z(f)\triangle A)=0\) or \(\mu(Z(f)\triangle B)=0\). Without loss of generality let \(\mu(Z(f)\triangle A)=0\implies\mu(Z(f)\setminus A)=0\). Now \(Z(f)=[A\setminus(A\triangle Z(f))]\cup[Z(f)\setminus A]\implies Z(f)\) is an atom. Let \(g=1_{Z(f)}\). Then \(g\in\mathscr{D}\) and \(Z(g)=X\setminus Z(f)\). Similarly we can show that \(Z(g)\), i.e., \(X\setminus Z(f)\), is an atom. Conversely, suppose \(X\) cannot be partitioned into two atoms and let \(f\in\mathscr{D}\). Since \(X=Z(f)\sqcup X\setminus Z(f)\), either \(Z(f)\) or \(X\setminus Z(f)\) is not an atom. Replacing \(Z(f)\) by \(X\setminus Z(f)\) in the proof of Theorem 3.10 and making some suitable modifications we can prove that \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) is a complete bipartite graph if and only if \(X\) is partitioned into two atoms. This observation is recorded in the following theorem. **Theorem 3.12**.: _The following statements are equivalent:_ 1. \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is a complete bipartite graph._ 2. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is a complete bipartite graph._ 3. \(X\) _is partitioned into two atoms._ In the next few theorems we find the eccentricity, diameter and the lengths of possible cycles in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and determine how far the measure \(\mu\) is responsible to make \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) triangulated, hypertriangulated and complemented. **Theorem 3.13**.: _Let \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then_ \[ecc(f)=\begin{cases}2&\text{ if $Z(f)$ is an atom}\\ 3&\text{ otherwise}\end{cases}\] Proof.: Let \(Z(f)\) be an atom and \(g\in\mathscr{D}\). Then either \(\mu(Z(f)\cap Z(g))=0\) or \(\mu(Z(f)\cap X\setminus Z(g))=0\). If \(\mu(Z(f)\cap Z(g))=0\), then \(f,g\) are adjacent. If \(\mu(Z(f)\cap X\setminus Z(g))=0\), then \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\), for otherwise \(\mu(X\setminus Z(g))=0\). By Theorem 3.6, \(d(f,g)=2\). Thus \(ecc(f)=2\). Next, let \(Z(f)\) not be an atom, i.e., there exist \(A,B\in\mathcal{A}\) with \(\mu(A),\mu(B)>0\) such that \(Z(f)=A\sqcup B\).
Clearly, \(1_{A}\in\mathscr{D}\), \(Z(f)\cap Z(1_{A})=B\) and \(X\setminus Z(f)\cap X\setminus Z(1_{A})=\emptyset\), i.e., \(\mu(Z(f)\cap Z(1_{A}))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(1_{A}))=0\). By Theorem 3.6, \(d(f,1_{A})=3\implies ecc(f)=3\). **Corollary 3.14**.: The diameter of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is \[diam(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A})))=\begin{cases}2&\text{ if $X$ is partitioned into two atoms}\\ 3&\text{ otherwise}\end{cases}\] Proof.: If \(X\) is partitioned into two atoms, then by Theorem 3.10, \(\Gamma^{\prime}_{2}\) is a complete bipartite graph and hence \(diam(\Gamma^{\prime}_{2})=2\). If \(X\) cannot be partitioned into two atoms, then for any \(f\in\mathscr{D}\), either \(Z(f)\) or \(X\setminus Z(f)\) is not an atom. By Theorem 3.13, \(ecc(f)=3\) if \(Z(f)\) is not an atom, and \(ecc(1_{Z(f)})=3\) if \(X\setminus Z(f)\) is not an atom. In any case, the diameter of \(\Gamma^{\prime}_{2}\) is \(3\). **Theorem 3.15**.: \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _is a vertex of a triangle in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) if and only if \(X\setminus Z(f)\) is not an atom._ Proof.: If \(X\setminus Z(f)\) is not an atom, then there exist \(A,B\in\mathcal{A}\) with \(\mu(A),\mu(B)>0\) and \(X\setminus Z(f)=A\sqcup B\). Let \(g=1_{X\setminus A}\) and \(h=1_{X\setminus B}\). Then \(g,h\in\mathscr{D}\) and \(Z(g)=A,Z(h)=B\implies Z(f)\cap Z(g)=\emptyset=Z(g)\cap Z(h)=Z(h)\cap Z(f)\). Thus \(f-g-h-f\) is a triangle in \(\Gamma^{\prime}_{2}\). Conversely, if \(f\) is a vertex of a triangle in \(\Gamma^{\prime}_{2}\), then there exist \(g,h\in\mathscr{D}\) such that \(f-g-h-f\) is a triangle in \(\Gamma^{\prime}_{2}\). Since \(f,g\) are adjacent, \(\mu(Z(f)\cap Z(g))=0\implies\mu(X\setminus Z(f)\cap Z(g))>0\), for otherwise \(\mu(Z(g))=0\). Similarly the adjacency of \(f,h\) implies \(\mu(X\setminus Z(f)\cap Z(h))>0\). Let \(A=X\setminus Z(f)\cap Z(g)\) and \(B=X\setminus Z(f)\cap X\setminus Z(g)\). Then \(\mu(A)>0\) and \(X\setminus Z(f)=A\sqcup B\). Since \(g,h\) are adjacent, \(\mu(Z(g)\cap Z(h))=0\implies\mu(A\cap Z(h))=0\). Now \(X\setminus Z(f)\cap Z(h)=(A\cap Z(h))\sqcup(B\cap Z(h))\implies\mu(B\cap Z(h))=\mu(X\setminus Z(f)\cap Z(h))>0\implies\mu(B)>0\). Thus \(X\setminus Z(f)=A\sqcup B\) and \(\mu(A),\mu(B)>0\), which proves that \(X\setminus Z(f)\) is not an atom. **Corollary 3.16**.: The girth of \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) is \[gr(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A})))=\begin{cases}4&\text{ if $X$ is partitioned into two atoms}\\ 3&\text{ otherwise}\end{cases}\] Proof.: If \(X\) is partitioned into two atoms, then by Theorem 3.10, \(\Gamma_{2}^{\prime}\) is a complete bipartite graph and hence \(gr(\Gamma_{2}^{\prime})=4\). If \(X\) cannot be partitioned into two atoms, then for each \(f\in\mathscr{D}\), either \(Z(f)\) or \(X\setminus Z(f)\) is not an atom. Consequently, by Theorem 3.15, \(1_{Z(f)}\) or \(f\) is a vertex of a triangle in \(\Gamma_{2}^{\prime}\). So, \(gr(\Gamma_{2}^{\prime})=3\). **Theorem 3.17**.: \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) _is triangulated if and only if \(\mu\) is non-atomic._ Proof.: If \(\mu\) is non-atomic, then \(X\setminus Z(f)\) is not an atom for all \(f\in\mathscr{D}\). By Theorem 3.15, every \(f\in\mathscr{D}\) is a vertex of a triangle. In other words, \(\Gamma_{2}^{\prime}\) is triangulated. Conversely let \(X\) contain an atom, say \(A\).
Then \(\mu(X\setminus A)>0\), for otherwise \(X\) would be an atom. Thus \(1_{A}\in\mathscr{D}\) and \(X\setminus Z(1_{A})=A\) is an atom. Hence by Theorem 3.15, \(1_{A}\) is not a vertex of a triangle; i.e., \(\Gamma_{2}^{\prime}\) is not triangulated. **Theorem 3.18**.: _Given \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), there exists an edge containing \(f\) which is not an edge of any triangle in \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\)._ Proof.: For each \(f\in\mathscr{D}\), consider \(g=1_{Z(f)}\in\mathscr{D}\). Then \(f,g\) are adjacent and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). By Theorem 3.3, there does not exist any vertex in \(\mathscr{D}\) adjacent to both \(f\) and \(g\). Therefore, the edge \(f-g\) is not an edge of any triangle in \(\Gamma_{2}^{\prime}\). **Corollary 3.19**.: \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) is not hypertriangulated. If \(X\) is partitioned into two atoms, then by Theorem 3.10, \(\Gamma_{2}^{\prime}\) is a complete bipartite graph and hence \(c(f,g)=4\) for all \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). We now calculate \(c(f,g)\) for \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), when \(X\) is not partitioned into two atoms. **Theorem 3.20**.: _Suppose \(X\) cannot be partitioned into two atoms and \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then_ \[c(f,g)=\begin{cases}3&\text{ if $\mu(Z(f)\cap Z(g))=0$ and $\mu(X\setminus Z(f)\cap X\setminus Z(g))>0$}\\ 4&\text{ if $\mu(Z(f)\cap Z(g))=0$ and $\mu(X\setminus Z(f)\cap X\setminus Z(g))=0$}\\ &\text{ or if $\mu(Z(f)\cap Z(g))>0$ and $\mu(X\setminus Z(f)\cap X\setminus Z(g))>0$}\\ 6&\text{ if $\mu(Z(f)\cap Z(g))>0$ and $\mu(X\setminus Z(f)\cap X\setminus Z(g))=0$}\end{cases}\] Proof.: It is clear that \(c(f,g)=3\) if and only if \(f-g\) is an edge of a triangle. So, \(c(f,g)=3\) if and only if \(f,g\) are adjacent in \(\Gamma_{2}^{\prime}\) and there exists a vertex in \(\Gamma_{2}^{\prime}\) adjacent to both \(f\) and \(g\); i.e., by Theorem 3.2 and Theorem 3.3, \(\mu(Z(f)\cap Z(g))=0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\). Let \(c(f,g)=4\). If \(f,g\) are adjacent in \(\Gamma_{2}^{\prime}\), then there exists no vertex adjacent to both \(f\) and \(g\), for otherwise \(c(f,g)=3\). Therefore, \(\mu(Z(f)\cap Z(g))=0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). If \(f,g\) are not adjacent, then there exists a vertex adjacent to both \(f,g\) in \(\Gamma_{2}^{\prime}\), because \(c(f,g)=4\). Therefore, \(\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\). Conversely let \(\mu(Z(f)\cap Z(g))=0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Then \(f,g\) are adjacent and there does not exist any vertex adjacent to both \(f,g\implies c(f,g)>3\). Clearly \(f-g-2.f-2.g-f\) is a \(4\)-cycle in \(\Gamma_{2}^{\prime}\) which implies \(c(f,g)=4\). Again let \(\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\). Then \(f,g\) are non-adjacent and there exists \(h\in\mathscr{D}\) adjacent to both \(f,g\) in \(\Gamma^{\prime}_{2}\). Clearly, \(c(f,g)>3\) and \(f-h-g-2.h-f\) is a 4-cycle in \(\Gamma^{\prime}_{2}\implies c(f,g)=4\). If \(c(f,g)=6\), then it follows from the previous two cases that \(\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). Conversely let \(\mu(Z(f)\cap Z(g))>0\) and \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0\). So, \(f,g\) are not adjacent and there does not exist any vertex adjacent to both \(f,g\).
Since \(f,g\) are non-adjacent and have no common neighbour, there does not exist any 4-cycle or 5-cycle in \(\Gamma^{\prime}_{2}\) containing \(f,g\) as vertices. In other words, \(c(f,g)>5\). By Theorem 3.6, \(d(f,g)=3\); i.e., there exist \(h_{1},h_{2}\in\mathscr{D}\) such that \(f-h_{1}-h_{2}-g\) is a path in \(\Gamma^{\prime}_{2}\). Consequently, \(f-h_{1}-h_{2}-g-2.h_{2}-2.h_{1}-f\) is a 6-cycle in \(\Gamma^{\prime}_{2}\implies c(f,g)=6\). [Figure: a schematic diagram, omitted here, illustrating the four cases of Theorem 3.20 and the corresponding values of \(c(f,g)\).] Now \(Z(k)=Z(g)\cap X\setminus Z(h)\) and so \(Z(k)\cap Z(h)=\emptyset\implies\mu(Z(k)\cap Z(h))=0\) and also \(Z(k)\cap Z(f)\subset Z(g)\cap Z(f)\implies\mu(Z(k)\cap Z(f))=0\), as \(f\perp g\). Therefore \(k\) is adjacent to both \(f,h\), which contradicts that \(f\perp h\). Therefore, \(\mu(Z(g)\cap X\setminus Z(h))=0\) and similarly \(\mu(X\setminus Z(g)\cap Z(h))=0\). Hence \(\mu(Z(g)\triangle Z(h))=0\). By Lemma 3.22, \(g,h\) are adjacent to the same set of vertices in \(\Gamma^{\prime}_{2}\). Consequently, \(\Gamma^{\prime}_{2}\) is uniquely complemented. We now introduce special subgraphs of \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) which enable us to find conditions under which \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) become isomorphic. On \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), we define a relation "\(f\sim g\) if and only if \(\mu(Z(f)\triangle Z(g))=0\)". It is easy to see that \(\sim\) is an equivalence relation on \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) and hence it partitions \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) into disjoint equivalence classes, denoted by \([f]\), for each \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Certainly, for each \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), \(f\sim 1_{X\setminus Z(f)}\). We choose a subset \(V\) of \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) obeying the following conditions: 1. every element of \(V\) is of the form \(1_{A}\) where \(\mu(A)>0\) and \(\mu(X\setminus A)>0\). 2. \(1_{A},1_{B}\) are distinct elements in \(V\) if and only if \(\mu(A\triangle B)>0\). Then \(V\) is a collection of distinct class representatives corresponding to the equivalence relation \(\sim\) on \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Let \(G_{2}\) be the induced subgraph of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) whose set of vertices is \(V\). We first observe the following: **Observation 3.24**.: 1. \([1_{A}]\) is a stable set in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\), for each \(1_{A}\in V\). (Follows from Lemma 3.8(1)). 2. For any two distinct stable sets \([1_{A}],[1_{B}]\) in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\), either \([1_{A}]\sqcup[1_{B}]\) is a stable set or \([1_{A}]\sqcup[1_{B}]\) forms a complete bipartite subgraph of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) (Follows from Lemma 3.8(2)). Thus \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is a \(|V|\)-partite graph such that for any two stable sets \(V_{1},V_{2}\), either \(V_{1}\sqcup V_{2}\) is a stable set or \(V_{1}\sqcup V_{2}\) is a complete bipartite subgraph of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\).
Therefore, \(G_{2}\) is a subgraph of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) where two vertices \(1_{A}\), \(1_{B}\) are adjacent if and only if the corresponding two stable sets \([1_{A}]\) and \([1_{B}]\) make a complete bipartite subgraph. We jot down in the next theorem some of the features highlighting the behaviour of \(G_{2}\) as a graph and observe the analogy with its parent graph: **Theorem 3.25**.: 1. _Let_ \(1_{A},1_{B}\in V\)_. There is a vertex in_ \(V\) _adjacent to both_ \(1_{A},1_{B}\) _in_ \(G_{2}\) _if and only if_ \(\mu(A\cap B)>0\)_._ 2. _The distance between two vertices_ \(1_{A},1_{B}\) _in_ \(G_{2}\) _is given by_ \[d_{G_{2}}(1_{A},1_{B})=\begin{cases}1&\text{if }\mu(X\setminus A\cap X\setminus B)=0\\ 2&\text{if }\mu(X\setminus A\cap X\setminus B)>0\text{ and }\mu(A\cap B)>0\\ 3&\text{if }\mu(X\setminus A\cap X\setminus B)>0\text{ and }\mu(A\cap B)=0\end{cases}\] 3. \(G_{2}\) _is a connected graph._ 4. \(|V|=2\)_, i.e.,_ \(G_{2}=K_{2}\)_, if and only if_ \(X\) _can be partitioned into two atoms._ 5. _The diameter of_ \(G_{2}\) _is_ \(diam(G_{2})=\begin{cases}1&\text{if }|V|=2\\ 3&\text{otherwise}\end{cases}\)__ 6. _The eccentricity of_ \(1_{A}\in V\) _in_ \(G_{2}\) _is_ \(ecc_{G_{2}}(1_{A})=\begin{cases}1&\text{if }|V|=2\\ 2&\text{if }X\setminus A\text{ is an atom and }|V|>2\\ 3&\text{otherwise}\end{cases}\)__ 7. _A vertex_ \(1_{A}\in V\) _is a vertex of a triangle in_ \(G_{2}\) _if and only if_ \(A\) _is not an atom._ 8. _The girth of_ \(G_{2}\) _is_ \(gr(G_{2})=\begin{cases}\infty&\text{if }|V|=2\\ 3&\text{otherwise}\end{cases}\)__ 9. \(G_{2}\) _is triangulated if and only if_ \(\mu\) _is a non-atomic measure._ 10. \(G_{2}\) _is not hypertriangulated._ 11. \(1_{A},1_{B}\in V\) _are orthogonal in_ \(G_{2}\) _if and only if_ \(\mu(A\cap B)=0=\mu(X\setminus A\cap X\setminus B)\)_._ 12. _For every_ \(1_{A}\in V\) _there exists a unique_ \(1_{B}\in V\) _such that_ \(1_{A}\perp_{G_{2}}1_{B}\) _[uniqueness follows from Lemma_ 3.22_]__._ 13. \(G_{2}\) _is uniquely complemented._ To compare the clique number of \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) with that of its subgraph \(G_{2}\), we need the following lemma. **Lemma 3.26**.: _If \(M\) is a complete subgraph of \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\), then there exists a complete subgraph \(M^{\prime}\) of \(G_{2}\) such that \(|M|=|M^{\prime}|\)._ Proof.: Let \(M\) be a complete subgraph of \(\Gamma_{2}^{\prime}\). For each vertex \(f\) in \(M\), there is a vertex \(1_{A}\in V\) such that \(f\sim 1_{A}\). Let \(f,g\) be distinct vertices in \(M\) and let \(1_{A},1_{B}\in V\) be such that \(1_{A}\sim f\) and \(1_{B}\sim g\). Since \(M\) is a complete subgraph of \(\Gamma_{2}^{\prime}\), \(f,g\) are adjacent in \(\Gamma_{2}^{\prime}\). By Lemma 3.8(3), \(1_{A},1_{B}\) are adjacent. Consequently, \(1_{A},1_{B}\) are distinct vertices in \(G_{2}\) and they are adjacent in \(G_{2}\). Let \(M^{\prime}\) be the subgraph of \(G_{2}\) whose vertex set is \(\{1_{A}\in V:1_{A}\sim f\) for some vertex \(f\) in \(M\}\). Clearly, \(M^{\prime}\) is a complete subgraph of \(G_{2}\) and \(|M|=|M^{\prime}|\). Since \(G_{2}\) is a subgraph of \(\Gamma_{2}^{\prime}\), \(cl(G_{2})\leq cl(\Gamma_{2}^{\prime})\).
So, using Lemma 3.26, we get the following: **Theorem 3.27**.: _The clique numbers of \(G_{2}\) and \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) are the same._ As \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) is a \(|V|\)-partite graph, \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) is \(|V|\)-colorable and consequently \(\chi(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A})))\leq|V|=|G_{2}|\). We conclude from the following lemma that the chromatic numbers of \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) and \(G_{2}\) are equal. **Lemma 3.28**.: _Let \(\alpha\) be a cardinal number. If \(G_{2}\) is \(\alpha\)-colorable, then we can color \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) by using \(\alpha\)-many colors._ Proof.: For each \(f\in\mathscr{D}\), there exists \(1_{A}\in V\) such that \(f\sim 1_{A}\). Since \(G_{2}\) is \(\alpha\)-colorable, \(1_{A}\) already gets a color. We color each \(f\in[1_{A}]\) by the color of \(1_{A}\in V\). As a consequence, each \(f\in\mathscr{D}\) gets a color from the available \(\alpha\)-many colors. It only remains to show that this coloring of \(\Gamma_{2}^{\prime}\) is consistent. Let \(f,g\in\mathscr{D}\) be colored by the same color, say the color of \(1_{A}\). By our method of coloring of \(\Gamma_{2}^{\prime}\), \(f\sim 1_{A}\) and \(g\sim 1_{A}\). Since \([1_{A}]\) is a stable set in \(\Gamma_{2}^{\prime}\), \(f,g\) are non-adjacent in \(\Gamma_{2}^{\prime}\). Consequently, \(\Gamma_{2}^{\prime}\) is \(\alpha\)-colorable. Since \(G_{2}\) is a subgraph of \(\Gamma_{2}^{\prime}\), \(\chi(G_{2})\leq\chi(\Gamma_{2}^{\prime})\). From Lemma 3.28, we get the following result. **Theorem 3.29**.: _The chromatic numbers of \(G_{2}\) and \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) are equal._ **Theorem 3.30**.: \(dt(G_{2})\leq dt(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A})))\)_._ Proof.: Let \(D\subset\mathscr{D}\) be a dominating set in \(\Gamma_{2}^{\prime}\). Let \(V^{\prime}=\{1_{A}\in V:1_{A}\sim f\) for some \(f\in D\}\). Clearly, \(|V^{\prime}|\leq|D|\) and \(D\cap V\subset V^{\prime}\). To show that \(V^{\prime}\) is a dominating set in \(G_{2}\), let \(1_{B}\in V\setminus V^{\prime}\). If \(1_{B}\in D\), then \(1_{B}\in V^{\prime}\), which is not the case. So, \(1_{B}\in\mathscr{D}\setminus D\). Since \(D\) is a dominating set in \(\Gamma_{2}^{\prime}\), there exists \(f\in D\) such that \(f,1_{B}\) are adjacent in \(\Gamma_{2}^{\prime}\). Let \(1_{A}\in V^{\prime}\) be such that \(1_{A}\sim f\). By Observation 3.24, \(1_{A},1_{B}\) are adjacent in \(\Gamma_{2}^{\prime}\) and hence they are adjacent in \(G_{2}\). Therefore, \(V^{\prime}\) is a dominating set in \(G_{2}\). Now \(dt(G_{2})\leq|V^{\prime}|\leq|D|\) and this holds for every dominating set \(D\) in \(\Gamma_{2}^{\prime}\). Consequently, \(dt(G_{2})\leq dt(\Gamma_{2}^{\prime})\). **Lemma 3.31**.: _Let \(D\subset\mathscr{D}\)._ 1. _Every total dominating set in_ \(G_{2}\) _is also a total dominating set in_ \(\Gamma_{2}^{\prime}(\mathcal{M}(X,\mathcal{A}))\)_._ 2. _If_ \(D\) _is a total dominating set in_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\)_, then there exists a total dominating set_ \(D^{\prime}\) _in_ \(G_{2}\) _such that_ \(|D|\geq|D^{\prime}|\)_._ Proof.: 1. Let \(V^{\prime}\) be a total dominating set in \(G_{2}\) and \(f\in\mathscr{D}\). Let \(1_{A}\in V\) be such that \(f\sim 1_{A}\). Since \(V^{\prime}\) is a total dominating set in \(G_{2}\), there exists \(1_{B}\in V^{\prime}\) such that \(1_{A},1_{B}\) are adjacent in \(G_{2}\).
By Lemma 3.8(3), \(f,1_{B}\) are adjacent in \(\Gamma^{\prime}_{2}\). Thus \(V^{\prime}\) is a total dominating set in \(\Gamma^{\prime}_{2}\). 2. Let \(D\) be a total dominating set in \(\Gamma^{\prime}_{2}\) and \(D^{\prime}=\{1_{A}\in V:1_{A}\sim f\text{ for some }f\in D\}\). Clearly, \(|D^{\prime}|\leq|D|\) and \(D\cap V\subset D^{\prime}\). Let \(1_{A}\in V\). Then \(1_{A}\in\mathscr{D}\) and hence there exists \(f\in D\) such that \(f,1_{A}\) are adjacent in \(\Gamma^{\prime}_{2}\). Consequently, \(1_{A}\) is adjacent to \(1_{B}\) in \(G_{2}\) where \(f\sim 1_{B}\in D^{\prime}\). Hence, \(D^{\prime}\) is a total dominating set in \(G_{2}\). From Lemma 3.31 the following result is immediate. **Theorem 3.32**.: _The total dominating numbers of \(G_{2}\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are equal._ For any two subsets \(A,B\) of \(X\), as \(A\triangle B=(X\setminus A)\triangle(X\setminus B)\) holds, we get a similar induced subgraph \(G\) of the zero-divisor graph \(\Gamma(\mathcal{M}(X,\mathcal{A}))\), arising from the same equivalence relation \(\sim\) on \(\mathscr{D}\) and considering the same set of vertices \(V\) as in the case of the comaximal graph. In this case too, each equivalence class \([1_{A}]\) is a stable set in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and for any two distinct equivalence classes \([1_{A}]\) and \([1_{B}]\), either \([1_{A}]\sqcup[1_{B}]\) is a stable set in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) or they make a complete bipartite subgraph of \(\Gamma(\mathcal{M}(X,\mathcal{A}))\). Hence, \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) is also a \(|V|\)-partite graph. This inherent similarity between the induced subgraphs \(G\) and \(G_{2}\) leads to a stronger conclusion, as seen in the next theorem. **Theorem 3.33**.: \(G\) _and \(G_{2}\) are graph isomorphic._ Proof.: Let us define a function \(\psi:V\to V\) as follows: \(\psi(1_{A})=1_{\psi(A)}\), where \(1_{\psi(A)}\sim 1_{X\setminus A}\). Since \(1_{A}\in V\) implies \(\mu(A),\mu(X\setminus A)>0\), there exists \(1_{B}\in V\) such that \(1_{B}\sim 1_{X\setminus A}\). Also if \(1_{A}\) and \(1_{B}\) are two distinct elements in \(V\), then \(\mu(A\triangle B)>0\implies\mu((X\setminus A)\triangle(X\setminus B))>0\). As \(1_{\psi(A)}\sim 1_{X\setminus A}\) and \(1_{\psi(B)}\sim 1_{X\setminus B}\), \(\mu(\psi(A)\triangle\psi(B))>0\); i.e., \(1_{\psi(A)}\) and \(1_{\psi(B)}\) are distinct in \(V\). Consequently, \(\psi\) is well-defined and injective. It is also clear that, for each \(1_{B}\in V\), \(\psi(1_{A})=1_{B}\), where \(1_{A}\sim 1_{X\setminus B}\). Thus \(\psi\) is surjective and hence \(\psi\) is a bijection on \(V\). We now claim that \(\psi\) preserves the adjacency relation between \(G\) and \(G_{2}\). Let \(1_{A},1_{B}\in V\) be adjacent in \(G\); i.e., \(\mu(A\cap B)=0\). It suffices to show that \(1_{\psi(A)},1_{\psi(B)}\) are adjacent in \(G_{2}\). We have \(1_{\psi(A)}\sim 1_{X\setminus A}\) and \(1_{\psi(B)}\sim 1_{X\setminus B}\). Since \(\mu(A\cap B)=0\), \(1_{X\setminus A},1_{X\setminus B}\) are adjacent in \(G_{2}\). By Lemma 3.8(3), \(\psi(1_{A}),\psi(1_{B})\) are adjacent in \(G_{2}\). Similarly if \(1_{A},1_{B}\) are adjacent in \(G_{2}\), then \(\psi^{-1}(1_{A}),\psi^{-1}(1_{B})\) are adjacent in \(G\). Hence \(\psi\) is an isomorphism between the two graphs \(G\) and \(G_{2}\). Theorem 3.33 prompted us to suspect that \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are isomorphic.
But in the last section of this paper we exhibit an example of a measure space for which these two graphs are not isomorphic. However, we establish a sufficient condition for these two graphs to be isomorphic in general, as given below. If \(|[1_{A}]|=|[1_{X\setminus A}]|\) for each \(1_{A}\in V\), then there exists a bijection between \([1_{A}]\) and \([1_{\psi(A)}]\), because \([1_{X\setminus A}]=[1_{\psi(A)}]\). For each \(1_{A}\in V\), let \(\phi_{A}:[1_{A}]\rightarrow[1_{\psi(A)}]\) be a bijection. Define \(\phi:\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\rightarrow\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) by \(\phi(f)=\phi_{A}(f)\) whenever \(f\sim 1_{A}\). Since each \(\phi_{A}\) is a bijection and \(V\) is the collection of the class representatives under the equivalence relation \(\sim\), \(\phi\) is a bijective map. Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) be adjacent in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\). Let \(1_{A},1_{B}\in V\) be such that \(f\sim 1_{A}\) and \(g\sim 1_{B}\). Since \(f,g\) are adjacent, it can be proved easily that \(1_{A},1_{B}\) are adjacent in \(G\). Therefore \(\psi(1_{A}),\psi(1_{B})\) are adjacent in \(G_{2}\). By the definition of \(\phi\), \(\phi(f)\sim\psi(1_{A})\) and \(\phi(g)\sim\psi(1_{B})\). Therefore \(\phi(f),\phi(g)\) are adjacent in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\). Similarly if \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\), then \(\phi^{-1}(f),\phi^{-1}(g)\) are adjacent in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) (follows from Lemma 3.8(3)). Hence, \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are graph isomorphic. We record this fact in the form of a theorem. **Theorem 3.34**.: _If \(|[1_{A}]|=|[1_{X\setminus A}]|\) for each \(1_{A}\in V\), then \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are graph isomorphic._ ## 4. **The annihilator graph \(AG(\mathcal{M}(X,\mathcal{A}))\) of \(\mathcal{M}(X,\mathcal{A})\)** We redefine the annihilator graph of the ring \(\mathcal{M}(X,\mathcal{A})\) as follows: **Definition 4.1**.: The annihilator graph \(AG(\mathcal{M}(X,\mathcal{A}))\) (in short \(AG\)) of \(\mathcal{M}(X,\mathcal{A})\) is a simple graph whose set of vertices is \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) and the adjacency relation is given by the following rule: \(f,g\) are adjacent if and only if \(ann(f)\cup ann(g)\subsetneq ann(f.g)\). It is quite clear that for any two \(f,g\in\mathcal{M}(X,\mathcal{A})\), \(ann(f)\cup ann(g)\subseteq ann(f.g)\). So, two vertices \(f,g\) are non-adjacent if and only if \(ann(f)\cup ann(g)=ann(f.g)\). We first observe that the adjacency of two vertices of \(AG(\mathcal{M}(X,\mathcal{A}))\) can be determined by measuring the differences between their corresponding zero sets. **Theorem 4.2**.: \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _are adjacent in \(AG(\mathcal{M}(X,\mathcal{A}))\) if and only if \(\mu(Z(f)\setminus Z(g))>0\) and \(\mu(Z(g)\setminus Z(f))>0\)._ Proof.: Let \(f,g\) be non-adjacent in \(AG\). Then \(ann(f)\cup ann(g)=ann(f.g)\). Since the union of two ideals is an ideal only when one of them contains the other, either \(ann(f)\subset ann(g)\) or \(ann(g)\subset ann(f)\). By Theorem 2.5, either \(\mu(Z(f)\setminus Z(g))=0\) or \(\mu(Z(g)\setminus Z(f))=0\). Conversely, for definiteness' sake, let \(\mu(Z(f)\setminus Z(g))=0\). Then by Theorem 2.5, \(ann(f)\subset ann(g)\). Let \(h\in ann(f.g)\).
Then \(\mu(X\setminus Z(h)\cap X\setminus Z(f)\cap X\setminus Z(g))=0\). Now, \(X\setminus Z(h)\cap X\setminus Z(g)=(X\setminus Z(h)\cap X\setminus Z(g)\cap Z(f))\cup(X\setminus Z(h)\cap X\setminus Z(g)\cap(X\setminus Z(f)))\subset(Z(f)\setminus Z(g))\cup(X\setminus Z(h)\cap X\setminus Z(g)\cap X\setminus Z(f))\) implies \(\mu(X\setminus Z(h)\cap X\setminus Z(g))=0\). Hence, \(h\in ann(g)\); i.e., \(ann(g)=ann(f)\cup ann(g)=ann(f.g)\), which proves that \(f,g\) are non-adjacent in \(AG\). The next theorem sets conditions for the existence of a third vertex, adjacent to a given pair of vertices in \(AG(\mathcal{M}(X,\mathcal{A}))\). **Theorem 4.3**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\)._ 1. _If_ \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\) _or_ \(\mu(Z(f)\cap Z(g))>0\)_, then there is a vertex adjacent to both_ \(f\) _and_ \(g\) _in_ \(AG(\mathcal{M}(X,\mathcal{A}))\)_._ 2. _Let_ \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0=\mu(Z(f)\cap Z(g))\)_. Then there exists a vertex adjacent to both_ \(f\) _and_ \(g\) _if and only if_ \(Z(f)\) _and_ \(Z(g)\) _are not atoms._ Proof.: It is easy to check that \(\mu(X\setminus Z(f)\cap X\setminus Z(g))>0\) or \(\mu(Z(f)\cap Z(g))>0\) imply that \(1_{Z(f)\cup Z(g)}\) or \(1_{Z(f)\cap Z(g)}\) is adjacent to both \(f\), \(g\) in \(AG\) respectively. Let \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0=\mu(Z(f)\cap Z(g))\). Without loss of generality, let \(Z(f)=X\setminus Z(g)\). If both \(Z(f)\) and \(Z(g)\) are not atoms, then there exist \(A_{1},A_{2},B_{1},B_{2}\in\mathcal{A}\) each with positive measure such that \(Z(f)=A_{1}\sqcup A_{2}\) and \(Z(g)=B_{1}\sqcup B_{2}\). Consider \(h=1_{A_{1}\cup B_{1}}\). Clearly \(h\in\mathscr{D}\) and \(A_{2}\cup B_{2}\subset Z(h)\). Therefore, \(\mu(Z(f)\setminus Z(h))>0\) and \(\mu(Z(h)\setminus Z(f))>0\); i.e., \(f,h\) are adjacent in \(AG\). Similarly \(g,h\) are adjacent in \(AG\). Conversely let \(h\) be adjacent to both \(f,g\) in \(AG\). Then \(\mu(Z(f)\setminus Z(h))>0\) and \(\mu(Z(h)\setminus Z(g))>0\). Now \(Z(f)=(Z(f)\cap Z(h))\sqcup(Z(f)\cap X\setminus Z(h))=(Z(h)\setminus Z(g))\sqcup(Z(f)\setminus Z(h))\), as \(Z(f)=X\setminus Z(g)\). Therefore, \(Z(f)\) is not an atom and similarly, \(Z(g)\) is also not an atom. **Corollary 4.4**.: Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then \[d(f,g)=\begin{cases}1&\text{ if $f,g$ are adjacent}\\ 2&\text{ otherwise}\end{cases}\] **Corollary 4.5**.: \(ecc(f)=2\) for all \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). **Corollary 4.6**.: The diameter of \(AG(\mathcal{M}(X,\mathcal{A}))\) is \(2\). We have already seen that the vertex sets of the comaximal graph and the zero-divisor graph of \(\mathcal{M}(X,\mathcal{A})\) are also \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\); i.e., the same as the vertex set of \(AG\). If \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) are adjacent in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) then \(\mu(Z(f)\cap Z(g))=0\). So, \((Z(f)\setminus Z(g))\sqcup(Z(f)\cap Z(g))=Z(f)\implies\mu(Z(f)\setminus Z(g))=\mu(Z(f))>0\). Similarly, \(\mu(Z(g)\setminus Z(f))>0\). This indicates that \(f,g\) are adjacent in \(AG(\mathcal{M}(X,\mathcal{A}))\). By an analogous argument we also get that the vertices which are adjacent in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\), are adjacent in \(AG(\mathcal{M}(X,\mathcal{A}))\).
We record these observations in the form of a theorem: **Theorem 4.7**.: _Both \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are subgraphs of \(AG(\mathcal{M}(X,\mathcal{A}))\)._ **Theorem 4.8**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) _is a complete bipartite graph if and only if \(X\) is partitioned into two atoms._ Proof.: Let \(AG\) be complete bipartite with bipartition \(V_{1},V_{2}\). Since \(V_{1}\neq\emptyset\) and \(V_{2}\neq\emptyset\), fix \(f\in V_{1}\) and \(g\in V_{2}\). Since \(AG\) is a complete bipartite graph, there does not exist any vertex adjacent to both \(f,g\) and hence by Theorem 4.3, \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0=\mu(Z(f)\cap Z(g))\) and either \(Z(f)\) or \(Z(g)\) is an atom. Suppose \(Z(f)\) is an atom. We claim that \(Z(g)\) is also an atom. If possible let \(Z(g)=A_{1}\sqcup A_{2}\) where \(A_{1},A_{2}\in\mathcal{A}\) with \(\mu(A_{1}),\mu(A_{2})>0\). Consider \(h=1_{X\setminus(Z(f)\cup A_{1})}\) and \(k=1_{X\setminus(Z(f)\cup A_{2})}\). Then \(h,k\in\mathscr{D}\), \(Z(h)=Z(f)\cup A_{1}\) and \(Z(k)=Z(f)\cup A_{2}\). Since \(Z(f)\subset Z(h)\) and \(Z(f)\subset Z(k)\), by Theorem 4.2 neither \(h\) nor \(k\) is adjacent to \(f\), so \(h,k\) lie in the same part of the bipartition as \(f\). But \(\mu(Z(h)\setminus Z(k))=\mu(A_{1}\setminus Z(f))=\mu(A_{1})>0\) and \(\mu(Z(k)\setminus Z(h))=\mu(A_{2}\setminus Z(f))=\mu(A_{2})>0\) (as \(\mu(Z(f)\cap Z(g))=0\)), so by Theorem 4.2, \(h,k\) are adjacent in \(AG\), which contradicts that \(AG\) is complete bipartite. Thus \(Z(g)\) is an atom. Let \(A=Z(f)\setminus Z(g)=Z(f)\setminus(Z(f)\cap Z(g))\) and \(B=Z(g)\cup(X\setminus Z(f))=Z(g)\cup((X\setminus Z(f))\cap(X\setminus Z(g)))\). Thus \(A,B\) are atoms and \(X=A\sqcup B\). Conversely let \(A,B\) be two atoms of \(X\) such that \(X=A\sqcup B\). Then by Lemma 3.9, \(\mathscr{D}\) is partitioned into two sets, say \(V_{1},V_{2}\), where \(V_{1}=\{f\in\mathscr{D}:\mu(A\triangle Z(f))=0\}\) and \(V_{2}=\{f\in\mathscr{D}:\mu(B\triangle Z(f))=0\}\). Let \(f_{1},f_{2}\in V_{1}\). Then \(\mu(Z(f_{1})\setminus A)=0=\mu(A\setminus Z(f_{2}))\). Now \(Z(f_{1})\subset A\sqcup Z(f_{1})\setminus A\implies Z(f_{1})\setminus Z(f_{2})\subset A\setminus Z(f_{2})\sqcup Z(f_{1})\setminus A\implies\mu(Z(f_{1})\setminus Z(f_{2}))=0\); i.e., \(f_{1},f_{2}\) are non-adjacent in \(AG\). Consequently \(V_{1}\) is a stable set in \(AG\) and similarly \(V_{2}\) is a stable set. Now let \(f\in V_{1}\) and \(g\in V_{2}\). Then \(\mu(Z(f)\setminus A)=0=\mu(Z(g)\setminus B)\). Now, \(Z(f)\subset A\sqcup Z(f)\setminus A\) and \(Z(g)\subset B\sqcup Z(g)\setminus B\). Therefore, \(Z(f)\cap Z(g)\subset(A\sqcup Z(f)\setminus A)\cap(B\sqcup Z(g)\setminus B)\subset(A\cap B)\cup(Z(f)\setminus A)\cup(Z(g)\setminus B)\implies\mu(Z(f)\cap Z(g))=0\); i.e., \(f,g\) are adjacent in \(\Gamma^{\prime}_{2}\). By Theorem 4.7, \(f,g\) are adjacent in \(AG\) and hence \(AG\) is a complete bipartite graph. Combining Theorem 3.12 and Theorem 4.8, we get the following result. **Corollary 4.9**.: The following statements are equivalent: 1. \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) is a complete bipartite graph. 2. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is a complete bipartite graph. 3. \(AG(\mathcal{M}(X,\mathcal{A}))\) is a complete bipartite graph. 4. \(X\) is partitioned into two atoms. **Theorem 4.10**.: _The following statements are equivalent:_ 1. \(\Gamma(\mathcal{M}(X,\mathcal{A}))=AG(\mathcal{M}(X,\mathcal{A}))\)_._ 2. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))=AG(\mathcal{M}(X,\mathcal{A}))\)_._ 3. \(X\) _is partitioned into two atoms._ Proof.: If \(X\) is partitioned into two atoms, then all the three graphs are complete bipartite with the same bipartition and hence they are equal. Suppose \(X\) cannot be partitioned into two atoms. Consider an \(A\in\mathcal{A}\) such that \(\mu(A),\mu(X\setminus A)>0\). Then either \(A\) or \(X\setminus A\) is not an atom.
Suppose \(X\setminus A\) is not an atom. Then there exist \(A_{1},A_{2}\in\mathcal{A}\) with \(\mu(A_{1}),\mu(A_{2})>0\) such that \(X\setminus A=A_{1}\sqcup A_{2}\). Let \(f_{1}=1_{A\sqcup A_{1}}\in\mathscr{D}\) and \(f_{2}=1_{A\sqcup A_{2}}\in\mathscr{D}\). Then \(X\setminus Z(f_{1})\cap X\setminus Z(f_{2})=A\) and \(Z(f_{1})\cap Z(f_{2})=\emptyset\). Therefore, \(f_{1},f_{2}\) are not adjacent in \(\Gamma\), but they are adjacent in \(\Gamma^{\prime}_{2}\) and hence they are adjacent in \(AG\). Therefore, \(\Gamma\neq AG\). Again let \(g_{1}=1_{A_{1}}\in\mathscr{D}\) and \(g_{2}=1_{A_{2}}\in\mathscr{D}\). Then \(Z(g_{1})\cap Z(g_{2})=A\) and \(X\setminus Z(g_{1})\cap X\setminus Z(g_{2})=\emptyset\). Therefore, \(g_{1},g_{2}\) are not adjacent in \(\Gamma^{\prime}_{2}\), but are adjacent in \(\Gamma\) and hence they are adjacent in \(AG\). Therefore, \(\Gamma^{\prime}_{2}\neq AG\). **Theorem 4.11**.: _A vertex \(f\) in \(AG(\mathcal{M}(X,\mathcal{A}))\) is a vertex of a triangle if and only if either \(Z(f)\) or \(X\setminus Z(f)\) is not an atom._ Proof.: If \(X\setminus Z(f)\) is not an atom, then by Theorem 3.15, \(f\) is a vertex of a triangle in \(\Gamma^{\prime}_{2}\). By Theorem 4.7, \(f\) is a vertex of a triangle in \(AG\). Again if \(Z(f)\) is not an atom, then there exist \(A_{1},A_{2}\in\mathcal{A}\) with \(\mu(A_{1}),\mu(A_{2})>0\) such that \(Z(f)=A_{1}\sqcup A_{2}\). Clearly \(1_{A_{1}},1_{A_{2}}\in\mathscr{D}\) and \(f.1_{A_{1}}\equiv 0\equiv 1_{A_{1}}.1_{A_{2}}\equiv 1_{A_{2}}.f\) on \(X\); i.e., \(f-1_{A_{1}}-1_{A_{2}}-f\) is a triangle in \(\Gamma\). By Theorem 4.7, \(f-1_{A_{1}}-1_{A_{2}}-f\) is a triangle in \(AG\). Let both \(Z(f)\) and \(X\setminus Z(f)\) be atoms. Then by Theorem 4.8, \(AG\) is complete bipartite. Hence \(f\) cannot be a vertex of a triangle. **Corollary 4.12**.: The girth of \(AG(\mathcal{M}(X,\mathcal{A}))\) is given by \[gr(AG(\mathcal{M}(X,\mathcal{A})))=\begin{cases}4&\text{ if }X\text{ is partitioned into two atoms}\\ 3&\text{ otherwise}\end{cases}\] **Corollary 4.13**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) is triangulated if and only if \(X\) is not partitioned into two atoms. The next result directly follows from Theorem 4.3. **Theorem 4.14**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). Then \(f\perp g\) in \(AG(\mathcal{M}(X,\mathcal{A}))\) if and only if \(\mu(X\setminus Z(f)\cap X\setminus Z(g))=0=\mu(Z(f)\cap Z(g))\) and either \(Z(f)\) or \(Z(g)\) is an atom._ **Corollary 4.15**.: An edge \(f-g\) in \(AG(\mathcal{M}(X,\mathcal{A}))\) is an edge of a triangle if and only if \(f\not\perp g\). If \(X\) can be partitioned into two atoms, then \(AG(\mathcal{M}(X,\mathcal{A}))\) is never hypertriangulated. On the contrary, if \(X\) cannot be partitioned into two atoms, then for any \(A\in\mathcal{A}\) with \(\mu(A)>0\) and \(\mu(X\setminus A)>0\), either \(A\) or \(X\setminus A\) is not an atom. This condition helps in determining exactly when the graph \(AG(\mathcal{M}(X,\mathcal{A}))\) is hypertriangulated. **Theorem 4.16**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) _is hypertriangulated if and only if \(\mu\) is non-atomic._ Proof.: If \(\mu\) is non-atomic, then for any pair of vertices \(f,g\) in \(AG\), \(f\not\perp g\) (by Theorem 4.14), because \(Z(f)\) and \(Z(g)\) are not atoms. Consequently, by Corollary 4.15, \(AG\) is hypertriangulated. Conversely, let \(A\in\mathcal{A}\) be an atom. Then, \(\mu(X\setminus A)>0\), as \(X\) is not an atom. Thus, \(1_{A},1_{X\setminus A}\in\mathscr{D}\).
By Theorem 4.14, \(1_{A}\perp 1_{X\setminus A}\). Therefore, \(1_{A}-1_{X\setminus A}\) is not an edge of a triangle in \(AG\). **Corollary 4.17**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) is hypertriangulated if and only if \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is triangulated. **Theorem 4.18**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). In \(AG(\mathcal{M}(X,\mathcal{A}))\),_ \[c(f,g)=\begin{cases}3&f,g\text{ are adjacent and }f\not\perp g\\ 4&\text{ either }f,g\text{ are not adjacent or }f\perp g\end{cases}\] Proof.: Let \(f,g\in\mathscr{D}\). If \(f,g\) are adjacent and \(f\not\perp g\), then there exists \(h\in\mathscr{D}\) adjacent to both \(f,g\); i.e., \(f-g-h-f\) is a triangle in \(AG\). Thus \(c(f,g)=3\). If \(f,g\) are not adjacent in \(AG\), then they are not adjacent in \(\Gamma^{\prime}_{2}\) also, because \(\Gamma^{\prime}_{2}\) is a subgraph of \(AG\). So, \(\mu(Z(f)\cap Z(g))>0\). By Theorem 4.3, there exists \(h\in\mathscr{D}\) adjacent to both \(f,g\). Thus \(f-h-g-2.h-f\) is a square in \(AG\implies c(f,g)\leq 4\). Since \(f,g\) are not adjacent in \(AG\), \(c(f,g)=4\). If \(f\perp g\), then \(f-g-2.f-2.g-f\) is a square in \(AG\). Hence, \(c(f,g)\leq 4\). Since \(f\perp g\), no vertex is adjacent to both \(f,g\); i.e., \(c(f,g)>3\). Therefore, \(c(f,g)=4\). **Theorem 4.19**.: \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) _has an orthogonal complement if and only if either \(Z(f)\) or \(X\setminus Z(f)\) is an atom._ Proof.: If one of \(Z(f)\) and \(X\setminus Z(f)\) is an atom, then by Theorem 4.14, \(f\perp 1_{Z(f)}\) in \(AG\). Conversely let \(f\perp g\) for some \(g\in\mathscr{D}\). Then \(\mu(Z(f)\cap Z(g))=0=\mu(X\setminus Z(f)\cap X\setminus Z(g))\) and either \(Z(f)\) or \(Z(g)\) is an atom. If \(Z(g)\) is an atom, then \(X\setminus Z(f)\) is an atom, because \(X\setminus Z(f)=\big(X\setminus Z(f)\cap X\setminus Z(g)\big)\cup\big(Z(g)\setminus(Z(f)\cap Z(g))\big)\). Therefore either \(Z(f)\) or \(X\setminus Z(f)\) is an atom. **Lemma 4.20**.: _If \(X\) is partitioned into three atoms, then for each \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), either \(Z(f)\) or \(X\setminus Z(f)\) is an atom._ Proof.: Let \(X=A\sqcup B\sqcup C\), where \(A,B,C\) are atoms. If possible let there be some \(f\in\mathscr{D}\) such that both \(Z(f),X\setminus Z(f)\) are not atoms. So there are \(A_{1},A_{2},B_{1},B_{2}\in\mathcal{A}\) such that \(Z(f)=A_{1}\sqcup A_{2}\), \(X\setminus Z(f)=B_{1}\sqcup B_{2}\) and \(\mu(A_{1}),\mu(A_{2}),\mu(B_{1}),\mu(B_{2})>0\). Then \(\mathscr{B}=\{A,B,C\}\) and \(\mathscr{B}^{\prime}=\{A_{1},A_{2},B_{1},B_{2}\}\) constitute partitions of \(X\) by sets with positive measure. Hence, by the pigeonhole principle (as \(\mathscr{B}^{\prime}\) has four members while \(\mathscr{B}\) has only three), there exist \(E\in\mathscr{B}\) and \(F_{1},F_{2}\in\mathscr{B}^{\prime}\) such that \(\mu(E\cap F_{1}),\mu(E\cap F_{2})>0\). Now \(E=(E\cap F_{1})\sqcup(E\setminus F_{1})\). Since \(F_{1},F_{2}\) are disjoint, \(E\cap F_{2}\subset E\setminus F_{1}\) and hence \(\mu(E\setminus F_{1})>0\). This contradicts that \(E\) is an atom. **Theorem 4.21**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) _is complemented if and only if \(X\) is partitioned into either two or three atoms._ Proof.: We prove this in three cases. Case I: \(X=A\sqcup B\), where \(A,B\) are atoms. By Theorem 4.8, \(AG\) is a complete bipartite graph and hence \(AG\) is complemented. Case II: \(X=A\sqcup B\sqcup C\), where \(A,B,C\) are atoms.
Then by Lemma 4.20, either \(Z(f)\) or \(X\setminus Z(f)\) is an atom for each \(f\in\mathscr{D}\). By Theorem 4.19, each \(f\in\mathscr{D}\) has an orthogonal complement. Consequently, \(AG\) is complemented. Case III: Neither Case I nor Case II. Subcase I: \(X\) contains no atom. Then for any pair \(f,g\in\mathscr{D}\), \(f\not\perp g\), by Theorem 4.14. Hence, \(AG\) is not complemented. Subcase II: \(X\) contains an atom, say \(A\in\mathcal{A}\). Then \(\mu(X\setminus A)>0\), for otherwise \(X\) would be an atom. By our hypothesis, \(X\setminus A\) is not an atom. Then \(X\setminus A=B_{1}\sqcup B_{2}\), for some \(B_{1},B_{2}\in\mathcal{A}\) with \(\mu(B_{1}),\mu(B_{2})>0\). Then again, one of \(B_{1},B_{2}\) is not an atom. Without loss of generality, let \(B_{1}\) be not an atom. Then \(B_{1}=C_{1}\sqcup C_{2}\), for some \(C_{1},C_{2}\in\mathcal{A}\) with \(\mu(C_{1}),\mu(C_{2})>0\). Consider \(f\in\mathscr{D}\) such that \(Z(f)=C_{1}\sqcup B_{2}\). Then \(X\setminus Z(f)=C_{2}\sqcup A\). Therefore, both of \(Z(f)\) and \(X\setminus Z(f)\) are not atoms. By Theorem 4.19, \(f\in\mathscr{D}\) has no orthogonal complement. Consequently, \(AG\) is not complemented. **Lemma 4.22**.: _Let \(f,g,h\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) be such that \(f\perp g\) and \(f\perp h\) in \(AG\). Then \(g,h\) are adjacent to the same set of vertices._ Proof.: By Theorem 4.14, \(\mu(Z(f)\cap Z(g))=0=\mu(X\setminus Z(f)\cap X\setminus Z(g))\) and \(\mu(Z(f)\cap Z(h))=0=\mu(X\setminus Z(f)\cap X\setminus Z(h))\); i.e., \(\mu(Z(g)\triangle(X\setminus Z(f)))=0=\mu(Z(h)\triangle(X\setminus Z(f)))\). This implies \(\mu(Z(g)\triangle Z(h))=0\). Let \(k\in\mathscr{D}\) be adjacent to \(g\); i.e., \(\mu(Z(k)\setminus Z(g))>0\) and \(\mu(Z(g)\setminus Z(k))>0\). Now \(Z(g)\setminus Z(k)=(Z(g)\cap Z(h)\cap X\setminus Z(k))\sqcup((Z(g)\setminus Z(h))\cap X\setminus Z(k))\). Since \(\mu(Z(g)\setminus Z(h))=0\), \(\mu(Z(g)\cap Z(h)\cap X\setminus Z(k))=\mu(Z(g)\setminus Z(k))>0\implies\mu(Z(h)\setminus Z(k))\geq\mu(Z(g)\cap Z(h)\cap X\setminus Z(k))>0\); i.e., \(\mu(Z(h)\setminus Z(k))>0\). Similarly, \(\mu(Z(k)\setminus Z(h))>0\). Thus \(k\) is adjacent to \(h\). Likewise if \(k\) is adjacent to \(h\), then it is adjacent to \(g\) also. Hence \(g,h\) are adjacent to the same set of vertices. As a consequence of this lemma, we get the following theorem. **Theorem 4.23**.: \(AG(\mathcal{M}(X,\mathcal{A}))\) _is complemented if and only if \(AG(\mathcal{M}(X,\mathcal{A}))\) is uniquely complemented._ The fact that \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) are subgraphs of \(AG(\mathcal{M}(X,\mathcal{A}))\) helps us to determine the dominating sets in \(AG(\mathcal{M}(X,\mathcal{A}))\), as seen in the next result. **Theorem 4.24**.: \(dt(AG(\mathcal{M}(X,\mathcal{A})))=2\)_._ Proof.: Fix an \(A\in\mathcal{A}\) with \(\mu(A),\mu(X\setminus A)>0\). We claim that \(\{1_{A},1_{X\setminus A}\}\) is a dominating set in \(AG\). Suppose \(f\in\mathscr{D}\) is not adjacent to \(1_{X\setminus A}\) in \(AG\); i.e., either \(\mu(Z(f)\setminus A)=0\) or \(\mu(A\setminus Z(f))=0\). If \(\mu(Z(f)\setminus A)=0\), then \(\mu(Z(f)\cap X\setminus A)=0\). So \(f,1_{A}\) are adjacent in \(\Gamma^{\prime}_{2}\) and hence by Theorem 4.7, \(f,1_{A}\) are adjacent in \(AG\). If \(\mu(A\setminus Z(f))=0\), then \(\mu(X\setminus Z(f)\cap A)=0\). Thus \(f,1_{A}\) are adjacent in \(\Gamma\). Again by Theorem 4.7, \(f,1_{A}\) are adjacent in \(AG\).
Hence, in any case, \(f\) is adjacent to \(1_{A}\) in \(AG\). Consequently, \(\{1_{A},1_{X\setminus A}\}\) is a dominating set in \(AG\). Also for each \(f\in\mathscr{D}\), \(f,2.f\) are not adjacent in \(AG\), which proves that singleton sets are not dominating sets in \(AG\). Hence \(dt(AG)=2\). In the above theorem, we note that the dominating set \(\{1_{A},1_{X\setminus A}\}\) is a total dominating set in \(AG(\mathcal{M}(X,\mathcal{A}))\). We record this observation in the following result. **Corollary 4.25**.: \(dt_{t}(AG(\mathcal{M}(X,\mathcal{A})))=2\)_._ In Theorem 4.10, we have seen that \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))=AG(\mathcal{M}(X,\mathcal{A}))\) if and only if \(X\) is partitioned into two atoms. We now show that these two graphs are identical if and only if they are isomorphic. **Theorem 4.26**.: _The graphs \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and \(AG(\mathcal{M}(X,\mathcal{A}))\) are isomorphic if and only if \(X\) is partitioned into two atoms._ Proof.: If \(X\) is partitioned into two atoms, then by Theorem 4.10, these two graphs are equal and so, they are isomorphic. If \(X\) is not partitioned into two atoms, then two cases may arise: either \(X\) is partitioned into three atoms or it is not. First we assume that \(X\) is partitioned into three atoms, say \(X=A\sqcup B\sqcup C\) where \(A,B,C\) are atoms. If possible let \(\psi\) be a graph isomorphism between the graphs \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and \(AG(\mathcal{M}(X,\mathcal{A}))\). Now \(1_{A}\) is a vertex in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and \(ecc(1_{A})=3\) in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) (by Theorem 3.13), because \(Z(1_{A})=B\sqcup C\) is not an atom. Since \(\psi\) is a graph isomorphism, \(ecc(\psi(1_{A}))=3\) in \(AG(\mathcal{M}(X,\mathcal{A}))\), which contradicts the fact that \(ecc(f)=2\) for every vertex \(f\) in \(AG(\mathcal{M}(X,\mathcal{A}))\) (Corollary 4.5). Therefore, there does not exist any graph isomorphism between these two graphs. Finally, let \(X\) be neither partitioned into two atoms nor into three atoms. Then \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is a complemented graph (by Theorem 3.21), though \(AG(\mathcal{M}(X,\mathcal{A}))\) is not complemented (by Theorem 4.21). Thus, they are not isomorphic as graphs. **Corollary 4.27**.: \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(AG(\mathcal{M}(X,\mathcal{A}))\) are isomorphic if and only if \(X\) is partitioned into two atoms. ## 5. **The weakly zero-divisor graph \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) of \(\mathcal{M}(X,\mathcal{A})\)** The weakly zero-divisor graph \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) (in short \(W\Gamma\)) of \(\mathcal{M}(X,\mathcal{A})\) is defined as a graph with \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) as the set of vertices, where two vertices \(f,g\) are adjacent if there exist \(h_{1}\in ann(f)\cap\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) and \(h_{2}\in ann(g)\cap\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) such that \(h_{1}.h_{2}\equiv 0\) a.e. on \(X\). The adjacency relation in \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) is interpreted via the measure \(\mu\) as follows: **Theorem 5.1**.: _Let \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). In \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\),_ 1. _if_ \(\mu(Z(f)\triangle Z(g))>0\)_, then_ \(f,g\) _are adjacent._ 2. _if_ \(\mu(Z(f)\triangle Z(g))=0\) _and_ \(Z(f)\) _is not an atom, then_ \(f,g\) _are adjacent._ 3.
_if_ \(\mu(Z(f)\triangle Z(g))=0\) _and_ \(Z(f)\) _is an atom, then_ \(f,g\) _are not adjacent._ Proof.: 1. Since \(\mu(Z(f)\triangle Z(g))>0\), either \(\mu(Z(f)\setminus Z(g))>0\) or \(\mu(Z(g)\setminus Z(f))>0\). Without loss of generality let \(\mu(Z(f)\setminus Z(g))>0\). Consider \(h_{1}=1_{Z(f)\setminus Z(g)}\) and \(h_{2}=1_{Z(g)}\). Then \(h_{1},h_{2}\in\mathscr{D}\) and \(h_{1}.f=0=h_{2}.g\) on \(X\). So, \(h_{1}\in ann(f)\cap\mathscr{D}\) and \(h_{2}\in ann(g)\cap\mathscr{D}\). Also \(h_{1}.h_{2}=0\) on \(X\). Hence, \(f,g\) are adjacent in \(W\Gamma\). 2. Since \(Z(f)\) is not an atom, \(Z(f)=A\sqcup B\) for some \(A,B\in\mathcal{A}\) with \(\mu(A),\mu(B)>0\). Clearly \(1_{A},1_{B}\in\mathscr{D}\) and \(1_{A}.f=0=1_{B}.f\) on \(X\); i.e., \(1_{A},1_{B}\in ann(f)\). Since \(\mu(Z(f)\triangle Z(g))=0\), by Corollary 2.6, \(ann(f)=ann(g)\implies 1_{B}\in ann(g)\). Also, \(1_{A}.1_{B}=0\) on \(X\implies f,g\) are adjacent in \(W\Gamma\). 3. If possible let \(f\) and \(g\) be adjacent. Then there exist \(h_{1}\in ann(f)\cap\mathscr{D}\) and \(h_{2}\in ann(g)\cap\mathscr{D}\) such that \(\mu(X\setminus Z(h_{1})\cap X\setminus Z(h_{2}))=0\). Since \(\mu(Z(f)\triangle Z(g))=0\), by Lemma 2.6, \(ann(f)=ann(g)\). Therefore \(h_{1},h_{2}\in ann(f)\); i.e., there exists a set \(E\in\mathcal{A}\) with \(\mu(E)=0\) such that \(X\setminus Z(h_{1})\cap X\setminus E\subset Z(f)\) and \(X\setminus Z(h_{2})\cap X\setminus E\subset Z(f)\). Let \(A=X\setminus Z(h_{1})\cap X\setminus E\) and \(B=Z(f)\setminus A\). Clearly, \(A,B\in\mathcal{A}\) and \(Z(f)=A\sqcup B\). Since \(h_{1}\in\mathscr{D}\) and \(\mu(E)=0\), \(\mu(A)>0\). Again, \((X\setminus E\cap X\setminus Z(h_{2}))\setminus(X\setminus Z(h_{1})\cap X\setminus Z(h_{2}))\subset B\). Since \(h_{2}\in\mathscr{D}\) and \(\mu(X\setminus Z(h_{1})\cap X\setminus Z(h_{2}))=0=\mu(E)\), \(\mu(B)>0\). This contradicts the hypothesis that \(Z(f)\) is an atom. From Theorem 5.1, it follows that \(f,f\) are adjacent if and only if \(Z(f)\) is not an atom. In other words, **Corollary 5.2**.: \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\) is self-adjacent in \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) if and only if \(Z(f)\) is not an atom. To make \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) a simple graph, we redefine the vertex set of the graph as \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))=\{f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A})):Z(f)\text{ is an atom}\}\). If \(\mu\) is non-atomic, then \(\mathcal{A}\) does not contain any atom and hence \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) is an empty graph. For the non-emptiness of \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\), we assume that \(\mathcal{A}\) contains at least one atom \(A\) with \(\mu(X\setminus A)>0\). Then the adjacency relation between two vertices in \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) takes the form as described in the next theorem. **Theorem 5.3**.: _Let \(f,g\in\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\). Then \(f,g\) are adjacent in \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) if and only if \(\mu(Z(f)\triangle Z(g))>0\)._ Consider the equivalence relation \(\sim\) on \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) given by: "\(f\sim g\) if and only if \(\mu(Z(f)\triangle Z(g))=0\)". For each \(f\in\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\), let \([f]\) denote the equivalence class of \(f\) under the relation \(\sim\) on \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\). From Theorem 5.3, it follows that 1.
for each \(f\in\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\), \([f]\) is a stable set in \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\). 2. if \([f],[g]\) are distinct classes, then for all \(f_{1}\in[f]\) and for all \(g_{1}\in[g]\), \(f_{1},g_{1}\) are adjacent in \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\). If \(W\) is the collection of distinct class representatives under \(\sim\) on \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\), then we get the following theorem. **Theorem 5.4**.: \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is a complete \(|W|\)-partite graph._ It is known that if \(A\in\mathcal{A}\) is an atom, then \(\mu(X\setminus A)>0\). Moreover, two atoms \(A,B\in\mathcal{A}\) are said to be distinct if \(\mu(A\triangle B)>0\). So, \(|W|\) equals the number of distinct atoms in \(\mathcal{A}\). This observation leads to the following theorem. **Theorem 5.5**.: \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is a complete bipartite graph if and only if \(\mathcal{A}\) contains exactly two distinct atoms._ **Theorem 5.6**.: _If \(\mathcal{A}\) contains at least three distinct atoms, then the following hold:_ 1. \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is triangulated._ 2. \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is hypertriangulated._ 3. _The girth of_ \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is_ \(3\)_._ 4. _No pair of vertices in_ \(\mathscr{D}^{\prime}(\mathcal{M}(X,\mathcal{A}))\) _are orthogonal in_ \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\)_._ 5. \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is not a complemented graph._ We list the values of certain important graph parameters of \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) in the following theorem: **Theorem 5.7**.: 1. _The dominating number of_ \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is_ \(2\)_._ 2. _The chromatic number of_ \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is_ \(|W|\)_._ 3. _The clique number of_ \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is_ \(|W|\)_._ ## 6. **Illustration via two familiar Measure Spaces** ### On the Counting Measure Space Let \(X\) be a non-empty set, \(\mathcal{A}=\mathscr{P}(X)\), the power set of \(X\), and \(\mu\), the counting measure on \(X\), be defined as follows: \[\mu(E)=\begin{cases}|E|\text{ if }E\text{ is finite}\\ \infty\text{ otherwise}\end{cases}\] Then \(\mathcal{M}(X,\mathcal{A})=\mathbb{R}^{X}\) and \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))=\{f\in\mathcal{M}(X,\mathcal{A}):Z(f)\neq\emptyset,X\}\). For the non-emptiness of \(\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\), we always consider \(|X|>1\). Clearly, each singleton set is an atom in \((X,\mathcal{A},\mu)\). Choosing \((X,\mathcal{A},\mu)\) as the counting measure space, the following two theorems list the important features of the comaximal graph and the annihilator graph of \(\mathcal{M}(X,\mathcal{A})\). **Theorem 6.1.1**.: _Consider the comaximal graph \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) of \(\mathcal{M}(X,\mathcal{A})\) and \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\)._ 1. _There exists a vertex adjacent to both_ \(f\) _and_ \(g\) _if and only if_ \(Z(f)\cup Z(g)\neq X\)_._ 2. \(d(f,g)=\begin{cases}1&Z(f)\cap Z(g)=\emptyset\\ 2&Z(f)\cap Z(g)\neq\emptyset\text{ and }Z(f)\cup Z(g)\neq X\\ 3&Z(f)\cap Z(g)\neq\emptyset\text{ and }Z(f)\cup Z(g)=X\\ \end{cases}\)__ 3. \(ecc(f)=\begin{cases}2&\text{ if }|Z(f)|=1\\ 3&\text{ otherwise}\end{cases}\)__ 4. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is complete bipartite if and only if_ \(|X|=2\)_._ 5.
_The diameter of_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is_ \(\begin{cases}2&|X|=2\\ 3&|X|\geq 3\end{cases}\)__ 6. _The girth of_ \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is_ \(\begin{cases}4&|X|=2\\ 3&|X|\geq 3\end{cases}\)__ 7. \(f\) _is a vertex of a triangle if and only if_ \(|X\setminus Z(f)|>1\)_._ 8. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) _is never triangulated and never hypertriangulated._ 9. \(f\perp g\) _if and only if_ \(Z(f)=X\setminus Z(g)\)_._ **Theorem 6.1.2**.: _Consider the annihilator graph \(AG(\mathcal{M}(X,\mathcal{A}))\) of \(\mathcal{M}(X,\mathcal{A})\) and \(f,g\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\)._ 1. _There exists a vertex adjacent to both_ \(f,g\) _if and only if either_ \(Z(f)\cup Z(g)\neq X\) _or_ \(Z(f)\cap Z(g)\neq\emptyset\) _or_ \(|Z(f)|,|Z(g)|\geq 2\)_._ 2. \(d(f,g)=\begin{cases}1&Z(f)\setminus Z(g)\neq\emptyset\text{ and }Z(g)\setminus Z(f)\neq\emptyset\\ 2&\text{ otherwise}\end{cases}\)__ 3. \(ecc(f)=2\)__ 4. \(AG(\mathcal{M}(X,\mathcal{A}))\) _is complete bipartite if and only if_ \(|X|=2\)_._ 5. \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))=AG(\mathcal{M}(X,\mathcal{A}))\) _if and only if_ \(|X|=2\) _if and only if_ \(\Gamma(\mathcal{M}(X,\mathcal{A}))=AG(\mathcal{M}(X,\mathcal{A}))\)_._ 6. _The girth of_ \(AG(\mathcal{M}(X,\mathcal{A}))\) _is_ \(\begin{cases}4&|X|=2\\ 3&|X|\geq 3\end{cases}\)__ 7. \(AG(\mathcal{M}(X,\mathcal{A}))\) _is not hypertriangulated._ 8. \(f\) _has an orthogonal complement in_ \(AG(\mathcal{M}(X,\mathcal{A}))\) _if and only if either_ \(|Z(f)|=1\) _or_ \(|X\setminus Z(f)|=1\)_._ 9. \(AG(\mathcal{M}(X,\mathcal{A}))\) _is uniquely complemented if and only if_ \(|X|\leq 3\)_._ For this measure space, the Theorem 5.4 takes the following form: **Theorem 6.1.3**.: \(W\Gamma(\mathcal{M}(X,\mathcal{A}))\) _is a complete \(|X|\)-partite graph where the stable sets are \(W_{x}=\{f\in\mathcal{M}(X,\mathcal{A}):Z(f)=\{x\}\}\), \(x\in X\)._ In the counting measure space, the equivalence relation described before reads as "\(f\sim g\) if and only if \(Z(f)=Z(g)\)" and therefore \([f]=\{g\in\mathcal{M}(X,\mathcal{A}):Z(f)=Z(g)\}\). Hence, the vertex set of the induced graph \(G_{2}\) of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) becomes \(V=\{1_{A}:A\neq\emptyset,X\}\). In \(G_{2}\), two distinct vertices \(1_{A},1_{B}\) are adjacent if and only if \(X\setminus A\cap X\setminus B=\emptyset\). In the next few results we observe that in the counting measure space, the chromatic number and clique number of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) solely depend on the cardinality of the underlying space \(X\). We prove these results using the induced graph \(G_{2}\) of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) and Theorems 3.27 and 3.29 of Section 3. **Theorem 6.1.4**.: _The chromatic number of \(G_{2}\) is \(|X|\)._ Proof.: Clearly, \(\{1_{X\setminus\{x\}}:x\in X\}\) is a complete subgraph of \(G_{2}\). Therefore, \(\chi(G_{2})\geq|X|\). We begin with coloring each \(1_{X\setminus\{x\}}\in V\) by using distinct colors. Let \(1_{A}\in V\setminus\{1_{X\setminus\{x\}}:x\in X\}\). Then \(|X\setminus A|\geq 2\). We color \(1_{A}\) by one of the colors of \(1_{X\setminus\{x\}}\), where \(x\in X\setminus A\). Suppose \(1_{A},1_{B}\in V\) are colored by the same color, say by the color of \(1_{X\setminus\{x\}}\). Then by hypothesis, \(x\in X\setminus A\cap X\setminus B\). Consequently, \(1_{A},1_{B}\) are not adjacent in \(G_{2}\).
This gives a consistent coloring of \(G_{2}\) by \(|X|\)-many colors. Therefore, \(\chi(G_{2})\leq|X|\). **Corollary 6.1.5**.: The chromatic number of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is \(|X|\). **Theorem 6.1.6**.: _The clique number of \(G_{2}\) is \(|X|\)._ Proof.: Since \(\{1_{X\setminus\{x\}}:x\in X\}\) is a complete subgraph of \(G_{2}\), \(cl(G_{2})\geq|X|\). For any graph \(H\), we know that \(cl(H)\leq\chi(H)\). Hence by Theorem 6.1.4, \(cl(G_{2})\leq|X|\). **Corollary 6.1.7**.: The clique number of \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is \(|X|\). **Remark 6.1.8**.: A graph is said to be weakly perfect if the clique number and the chromatic number are equal. Since \(cl(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A})))=|X|=\chi(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A})))\), in this measure space, \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) is a weakly perfect graph. We now compare the comaximal graph and the zero-divisor graph of \(\mathcal{M}(X,\mathcal{A})\) in the counting measure space. For this, we need the following lemmas. **Lemma 6.1.9**.: _For each \(f\in\mathcal{M}(X,\mathcal{A})\), \(|[f]|=|\mathbb{R}^{|X\setminus Z(f)|}|\)._ Proof.: Let \(X\setminus Z(f)=\{x_{\alpha}:\alpha\in\Lambda\}\). Define \(\eta:[f]\rightarrow(\mathbb{R}\setminus\{0\})^{|\Lambda|}\) by \(\eta(g)=(g(x_{\alpha}))_{\alpha\in\Lambda}\). \(\eta\) is well defined, since for all \(g\in[f]\), \(X\setminus Z(g)=X\setminus Z(f)\). Let \(g_{1},g_{2}\in[f]\) be such that \(\eta(g_{1})=\eta(g_{2})\); i.e., \(g_{1}(x_{\alpha})=g_{2}(x_{\alpha})\) for all \(\alpha\in\Lambda\). Also for all \(x\notin X\setminus Z(f)\), \(g_{1}(x)=0=g_{2}(x)\implies g_{1}=g_{2}\) on \(X\). Therefore \(\eta\) is injective. Let \(y\in(\mathbb{R}\setminus\{0\})^{|\Lambda|}\). Then \(y\) can be written as \(y=(y_{\alpha})_{\alpha\in\Lambda}\), where \(y_{\alpha}\in\mathbb{R}\setminus\{0\}\) for all \(\alpha\in\Lambda\). Let \(g(x)=\begin{cases}y_{\alpha}&\text{ if }x=x_{\alpha}\\ 0&\text{ otherwise}\end{cases}\). Since \(y_{\alpha}\neq 0\) for all \(\alpha\in\Lambda\), \(X\setminus Z(g)=\{x_{\alpha}:\alpha\in\Lambda\}=X\setminus Z(f)\implies g\in[f]\). Clearly \(\eta(g)=y\). Consequently \(\eta\) is a bijection; i.e., \(|[f]|=|(\mathbb{R}\setminus\{0\})^{|\Lambda|}|=|\mathbb{R}^{|X\setminus Z(f)|}|\). Let \(f\in\mathscr{D}(\mathcal{M}(X,\mathcal{A}))\). By Theorem 6.1.1(3) it follows that \(ecc(f)=2\) in \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) if and only if \(Z(f)\) is a singleton set. Similarly, it can be proved that \(ecc(f)=2\) in \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) if and only if \(X\setminus Z(f)\) is a singleton set. These observations lead to the following important lemma. **Lemma 6.1.10**.: _If \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are graph isomorphic, then for each \(x\in X\) there exists \(y\in X\) such that \(|[1_{x}]|=|[1_{X\setminus\{y\}}]|\)._ Proof.: Let \(\psi:\Gamma\rightarrow\Gamma^{\prime}_{2}\) be a graph isomorphism. Fix an \(x\in X\). Let \(f\in[1_{x}]\). Then \(X\setminus Z(f)=\{x\}\implies ecc(f)=2\) in \(\Gamma\). Since \(\psi\) is a graph isomorphism, \(ecc(\psi(f))=2\) in \(\Gamma^{\prime}_{2}\implies Z(\psi(f))\) is a singleton set, i.e., \(Z(\psi(f))=\{y\}\) for some \(y\in X\). Therefore, \(\psi(f)\in[1_{X\setminus\{y\}}]\). Conversely let \(g\in[1_{X\setminus\{y\}}]\).
Then \(Z(g)=\{y\}\implies ecc(g)=2\) in \(\Gamma^{\prime}_{2}\implies ecc(\psi^{-1}(g))=2\) in \(\Gamma\implies X\setminus Z(\psi^{-1}(g))\) is a singleton set, i.e., \(X\setminus Z(\psi^{-1}(g))=\{x^{\prime}\}\) for some \(x^{\prime}\in X\). If \(x\neq x^{\prime}\), then \(1_{x},\psi^{-1}(g)\) are adjacent in \(\Gamma\). Therefore their images under \(\psi\) are adjacent in \(\Gamma^{\prime}_{2}\), i.e., \(\psi(1_{x}),g\) are adjacent in \(\Gamma^{\prime}_{2}\). But \(1_{x}\in[1_{x}]\implies\psi(1_{x})\in[1_{X\setminus\{y\}}]\), by our definition. Therefore, \(Z(\psi(1_{x}))=\{y\}=Z(g)\implies Z(\psi(1_{x}))\cap Z(g)=\{y\}\). So, \(\psi(1_{x}),g\) are not adjacent in \(\Gamma^{\prime}_{2}\), which is a contradiction. Therefore, \(X\setminus Z(\psi^{-1}(g))=\{x\}\implies\psi^{-1}(g)\in[1_{x}]\). Hence, the restriction map \(\psi:[1_{x}]\rightarrow[1_{X\setminus\{y\}}]\) is a bijection, proving \(|[1_{x}]|=|[1_{X\setminus\{y\}}]|\). **Theorem 6.1.11**.: \(\Gamma(\mathcal{M}(X,\mathcal{A}))\) _and \(\Gamma^{\prime}_{2}(\mathcal{M}(X,\mathcal{A}))\) are graph isomorphic if and only if \(X\) is at most countable._ Proof.: If \(X\) is at most countable, then for any \(f\in\mathscr{D}\), \(|X\setminus Z(f)|\) is countable and hence by Lemma 6.1.9, \(|[f]|=|\mathbb{R}^{|X\setminus Z(f)|}|=|\mathbb{R}|=\mathfrak{c}\). So, \(|[f]|=|[g]|\) for all \(f,g\in\mathscr{D}\). Therefore by Theorem 3.34, \(\Gamma\) and \(\Gamma_{2}^{\prime}\) are isomorphic. Conversely let \(X\) be uncountable. Then for any \(x,y\in X\), \(X\setminus\{x\}\) is uncountable \(\implies|\mathbb{R}^{|X\setminus\{x\}|}|>|\mathbb{R}^{|\{y\}|}|\); i.e., \(|[1_{X\setminus\{x\}}]|>|[1_{y}]|\). By Lemma 6.1.10, \(\Gamma\) and \(\Gamma_{2}^{\prime}\) are not graph isomorphic. **Remark 6.1.12**.: Let \((X,\mathcal{A},\mu)\) be the counting measure space, where \(X\) is an uncountable set. Then by Theorem 6.1.11, the zero-divisor graph and the comaximal graph of \(\mathcal{M}(X,\mathcal{A})\) are never isomorphic as graphs. But \(G\) and \(G_{2}\) are graph isomorphic. ### On the Lebesgue Measure Space Let \((\mathbb{R},\mathcal{A},\ell)\) be the Lebesgue measure space on \(\mathbb{R}\), where \(\mathcal{A}\) is the collection of all Lebesgue measurable subsets of \(\mathbb{R}\). We know that the Lebesgue measure on \(\mathbb{R}\) is a non-atomic measure. So we have the following results: **Theorem 6.2.1**.: 1. _The diameter of the comaximal graph_ \(\Gamma_{2}^{\prime}(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _of_ \(\mathcal{M}(\mathbb{R},\mathcal{A})\) _is_ \(3\)_._ 2. _The eccentricity of every vertex in_ \(\Gamma_{2}^{\prime}(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _is_ \(3\)_._ 3. _The girth of_ \(\Gamma_{2}^{\prime}(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _is_ \(3\)_._ 4. \(\Gamma_{2}^{\prime}(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _is always a triangulated graph._ 5. _The annihilator graph_ \(AG(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _of_ \(\mathcal{M}(\mathbb{R},\mathcal{A})\) _is always triangulated as well as hypertriangulated._ 6. \(AG(\mathcal{M}(\mathbb{R},\mathcal{A}))\) _can never be a complemented graph._ 7. _The zero-divisor graph and the comaximal graph of_ \(\mathcal{M}(\mathbb{R},\mathcal{A})\) _are not isomorphic to the annihilator graph of_ \(\mathcal{M}(\mathbb{R},\mathcal{A})\)_._ 8. _The weakly zero-divisor graph of_ \(\mathcal{M}(\mathbb{R},\mathcal{A})\) _is an empty graph._ As \(\mathcal{A}\) contains \(2^{\mathfrak{c}}\)-many distinct Lebesgue measurable sets, \(|\mathcal{M}(\mathbb{R},\mathcal{A})|=2^{\mathfrak{c}}\).
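For completeness, this cardinality claim follows from a short cardinal-arithmetic estimate (sketched here; it uses only the fact just quoted about \(\mathcal{A}\)):

\[2^{\mathfrak{c}}=|\{1_{A}:A\in\mathcal{A}\}|\leq|\mathcal{M}(\mathbb{R},\mathcal{A})|\leq|\mathbb{R}^{\mathbb{R}}|=\mathfrak{c}^{\mathfrak{c}}=\left(2^{\aleph_{0}}\right)^{\mathfrak{c}}=2^{\mathfrak{c}}.\]

Indeed, distinct measurable sets yield distinct indicator functions, which gives the lower bound, while every element of \(\mathcal{M}(\mathbb{R},\mathcal{A})\) is in particular a function from \(\mathbb{R}\) to \(\mathbb{R}\), which gives the upper bound.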
We now show that the comaximal graph and the zero-divisor graph of \(\mathcal{M}(\mathbb{R},\mathcal{A})\) are isomorphic. **Lemma 6.2.2**.: _Let \(A\in\mathcal{A}\) be such that \(A\) is compact in \(\mathbb{R}\) (with respect to the usual topology). Then there exist \(a,b\in A\) such that \(\ell(A\cap[a,b])=\ell(A)\)._ Proof.: Since \(A\) is compact in \(\mathbb{R}\), \(A\) is closed and \(A\subset[x,y]\) for some \(x,y\in\mathbb{R}\). Clearly, \(\ell(A)\leq y-x\), finite. Let \(B=\{z\in[x,y]:\ell(A\cap[z,y])=\ell(A)\}\) and \(a=\sup B\). Since \(x\in B\), \(B\neq\emptyset\) and hence such \(a\) exists and \(x\leq a<y\). If possible let \(a\notin A\); then, \(A\) being closed, \(a\) is not a limit point of \(A\). There exists \(\delta>0\) such that \((a-\delta,a+\delta)\cap A=\emptyset\). Therefore \(\ell((a-\delta,a+\delta)\cap A)=0\). Since \(a\) is the supremum of the set \(B\), there exists \(a^{\prime}\in B\) such that \(a-\delta<a^{\prime}\leq a\). Now \(a^{\prime}\in B\implies\ell(A\cap[a^{\prime},y])=\ell(A)\). Therefore \(\ell(A\cap[x,a^{\prime}))=0\). This along with the fact \(\ell((a-\delta,a+\delta)\cap A)=0\) implies \(\ell(A\cap[x,a+\delta))=0\implies\ell(A\cap[a+\delta,y])=\ell(A)\); i.e., \(a+\delta\in B\), which contradicts that \(a=\sup B\). Therefore \(a\in A\). Also \([x,a)=\bigcup\limits_{n=1}^{\infty}[x,a-\frac{1}{n}]\implies\ell(A\cap[x,a))\leq\sum\limits_{n=1}^{\infty}\ell(A\cap[x,a-\frac{1}{n}])\). Since \(a=\sup B\), \(\ell(A\cap[x,a-\frac{1}{n}])=0\) for all \(n\) and hence \(\ell(A\cap[x,a))=0\implies\ell(A\cap[a,y])=\ell(A)\). Similarly if \(b=\inf\{z\in[a,y]:\ell(A\cap[a,z])=\ell(A)\}\), then \(b\in A\) and \(\ell(A\cap[a,b])=\ell(A)\). **Lemma 6.2.3**.: _Let \(A\in\mathcal{A}\) and \(\ell(A)>0\). Then for every \(r\in[0,\ell(A)]\), there exists \(A_{r}\in\mathcal{A}\) such that \(A_{r}\subset A\) and \(\ell(A_{r})=r\)._ Proof.: We consider the function \(\psi:\mathbb{R}\to[0,\ell(A)]\) given by \(\psi(x)=\ell(A\cap(-\infty,x])\). Since \(|\psi(x)-\psi(y)|\leq|x-y|\) for all \(x,y\in\mathbb{R}\), \(\psi\) is continuous on \(\mathbb{R}\). Also \(\lim\limits_{x\to-\infty}\psi(x)=0\) and \(\lim\limits_{x\to\infty}\psi(x)=\ell(A)\). Therefore \(\psi\) takes every value in \([0,\ell(A)]\); i.e., for each \(r\in[0,\ell(A)]\), there exists \(x_{r}\in\mathbb{R}\) such that \(\psi(x_{r})=r\). So, \(\ell(A\cap(-\infty,x_{r}])=r\), and we may take \(A_{r}=A\cap(-\infty,x_{r}]\). In Lemma 6.2.3, if \(A\) is a compact set, then with \(a,b\in A\) as in Lemma 6.2.2, \(\psi\) may be taken as \(\psi:[a,b]\to[0,\ell(A)]\), \(\psi(x)=\ell(A\cap[a,x])\). Also for each \(r\in[0,\ell(A)]\), there exists \(y_{r}\in A\cap[a,b]\) such that \(\ell(A\cap[a,y_{r}])=r\). **Theorem 6.2.4**.: _If \(\ell(A)>0\) for some \(A\in\mathcal{A}\), then \(A\) contains an uncountable set of measure zero._ Proof.: Without loss of generality let \(\ell(A\cap[0,1])>0\). By the regularity of \(\ell\), there exists a compact set \(K\) in \(\mathbb{R}\) such that \(\ell(K)>0\) and \(K\subset A\cap[0,1]\). By Lemma 6.2.2, there exist \(a,b\in K\) such that \(\ell(K\cap[a,b])=\ell(K)\). By the observation after Lemma 6.2.3, there exist \(x_{11},x_{12}\in K\cap[a,b]\) such that \(\ell(K\cap[a,x_{11}])=\frac{1}{3}\ell(K)\) and \(\ell(K\cap[a,x_{12}])=\frac{2}{3}\ell(K)\). Let \(I_{11}=K\cap[a,x_{11}]\), \(I_{12}=K\cap[x_{11},x_{12}]\) and \(I_{13}=K\cap[x_{12},b]\). Then \(\ell(I_{11})=\frac{1}{3}\ell(K)=\ell(I_{12})=\ell(I_{13})\); i.e., we trisect \(K\cap[a,b]\) into three parts \(I_{11},I_{12},I_{13}\) with equal measure. Let \(E_{1}=I_{11}\sqcup I_{13}\).
Then \(E_{1}\in\mathcal{A}\) and \(\ell(E_{1})=\frac{2}{3}\ell(K)\). Similarly we trisect \(I_{11}\) into \(I_{21}=K\cap[a,x_{21}],I_{22}=K\cap[x_{21},x_{22}]\) and \(I_{23}=K\cap[x_{22},x_{11}]\) with equal measure, where \(x_{21},x_{22}\in K\cap[a,b]\). We also trisect \(I_{13}\) into \(I_{24}=K\cap[x_{12},x_{23}],I_{25}=K\cap[x_{23},x_{24}]\) and \(I_{26}=K\cap[x_{24},b]\) with equal measure, where \(x_{23},x_{24}\in K\cap[a,b]\). Let \(E_{2}=(I_{21}\sqcup I_{23})\sqcup(I_{24}\sqcup I_{26})\). Then \(E_{2}\in\mathcal{A}\) and \(\ell(E_{2})=(\frac{2}{3})^{2}\ell(K)\). Continuing this process, at the \(n\)-th stage, we get \(E_{n}\in\mathcal{A}\) which is the union of \(2^{n}\)-many disjoint measurable sets \(I_{n1},\ldots,I_{n2^{n}}\) each of which has measure \(\frac{1}{3^{n}}\ell(K)\). Thus \(\ell(E_{n})=(\frac{2}{3})^{n}\ell(K)\). Let \(E=\bigcap\limits_{n=1}^{\infty}E_{n}\). Then \(E\in\mathcal{A}\) and \(\ell(E)=\lim\limits_{n\to\infty}\ell(E_{n})=\ell(K)\lim\limits_{n\to\infty}\left(\frac{2}{3}\right)^{n}=0\). For every decreasing chain \(I_{1k_{1}}\supset I_{2k_{2}}\supset\cdots\) of sets chosen from the successive stages, the intersection \(\bigcap\limits_{n=1}^{\infty}I_{nk_{n}}\) is nonempty (being an intersection of nested nonempty compact sets) and \(\bigcap\limits_{n=1}^{\infty}I_{nk_{n}}\subset\bigcap\limits_{n=1}^{\infty}E_{n}=E\implies E\neq\emptyset\). We claim that \(E\) is uncountable. If possible let \(E\) be countable, say \(E=\{x_{n}:n\in\mathbb{N}\}\). Since \(x_{1}\in E\), \(x_{1}\in E_{1}\implies\) there exists \(k_{1}\in\{1,3\}\) such that \(x_{1}\notin I_{1k_{1}}\). Similarly \(x_{2}\in E_{2}\implies x_{2}\notin I_{2k_{2}}\) where \(I_{2k_{2}}\) is one of the measurable sets in \(E_{2}\) such that \(I_{2k_{2}}\subset I_{1k_{1}}\). Thus for each \(n\in\mathbb{N}\), there exists some \(k_{n}\in\{1,2,...,2^{n}\}\) such that \(x_{n}\notin I_{nk_{n}}\) and \(I_{nk_{n}}\subset I_{(n-1)k_{n-1}}\). Since \(I_{nk_{n}}\subset I_{(n-1)k_{n-1}}\) for all \(n\in\mathbb{N}\), again by compactness \(\bigcap\limits_{n=1}^{\infty}I_{nk_{n}}\neq\emptyset\), say \(x\in\bigcap\limits_{n=1}^{\infty}I_{nk_{n}}\). Then \(x\in E\) and \(x\notin\{x_{1},x_{2},...,x_{n},....\}=E\), a contradiction. Hence, \(E\) is an uncountable set. **Corollary 6.2.5**.: If \(\ell(A)>0\) for some \(A\in\mathcal{A}\), then \(A\) contains exactly \(2^{\mathfrak{c}}\)-many Lebesgue measurable sets. Proof.: Clearly, \(A\) cannot contain more than \(2^{\mathfrak{c}}\)-many sets, as \(A\subset\mathbb{R}\). By Theorem 6.2.4, \(A\) contains an uncountable set of measure zero, say \(E\); in fact, the proof of Theorem 6.2.4 produces one point of \(E\) for each of the \(\mathfrak{c}\)-many decreasing chains \(I_{1k_{1}}\supset I_{2k_{2}}\supset\cdots\), and distinct chains eventually pass through disjoint sets, so \(|E|=\mathfrak{c}\). Since \(\ell\) is a complete measure and \(\ell(E)=0\), every subset of \(E\) is Lebesgue measurable. Consequently, \(A\) contains \(2^{\mathfrak{c}}\)-many Lebesgue measurable sets. We already made a partition of \(\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\) by the equivalence relation "\(\sim\)" given by: "\(f\sim g\) if and only if \(\ell(Z(f)\triangle Z(g))=0\)". For each \(f\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\), we consider \([f]\) as the equivalence class of \(f\) under the equivalence relation "\(\sim\)" which is precisely the set \(\{g\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A})):\ell(Z(f)\triangle Z(g))=0\}\). Clearly, for each \(f\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\), \(|[f]|\leq|\mathcal{M}(\mathbb{R},\mathcal{A})|=2^{\mathfrak{c}}\). \(V\) is the collection of all class representatives of the form \(1_{A}\) under \(\sim\), as in all the previous cases.
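When \(K\) is itself a closed interval, the trisection above reduces to the classical middle-thirds Cantor construction, and the decay of \(\ell(E_{n})\) can be checked directly. The following sketch (with \(K=[0,1]\); the helper name `next_stage` is ours, for illustration only) does so with exact rational arithmetic.

```python
# A minimal sketch of the trisection in Theorem 6.2.4 for K = [0, 1],
# where I_{n1}, ..., I_{n 2^n} become the 2^n intervals of the classical
# middle-thirds Cantor construction. We check that ell(E_n) = (2/3)^n
# tends to 0 while each stage still consists of nonempty compact pieces.
from fractions import Fraction

def next_stage(intervals):
    """Replace each closed interval by its two outer thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))          # left outer third
        out.append((b - third, b))          # right outer third
    return out

stage = [(Fraction(0), Fraction(1))]        # E_0 = K = [0, 1]
for n in range(1, 9):
    stage = next_stage(stage)
    measure = sum(b - a for a, b in stage)
    assert len(stage) == 2 ** n
    assert measure == Fraction(2, 3) ** n   # ell(E_n) = (2/3)^n * ell(K)
    print(f"E_{n}: {len(stage)} pieces, measure = {measure} = {float(measure):.4f}")
```

The intersection of the stages is then the Cantor set: an uncountable compact set of measure zero, which is exactly the phenomenon Theorem 6.2.4 extracts from an arbitrary set of positive measure.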
**Lemma 6.2.6**.: _For each \(f\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\), \(|[f]|=2^{\mathfrak{c}}\)._ Proof.: Let \(f\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\). Then \(\ell(X\setminus Z(f))>0\). By Corollary 6.2.5, \(X\setminus Z(f)\) contains exactly \(2^{\mathfrak{c}}\)-many Lebesgue measurable sets. Let \(\{A_{\alpha}:\alpha\in\Lambda\}\) be the collection of all Lebesgue measurable subsets of \(X\setminus Z(f)\), where \(|\Lambda|=2^{\mathfrak{c}}\). For each \(\alpha\in\Lambda\), let \(f_{\alpha}:\mathbb{R}\to\mathbb{R}\) be defined by \[f_{\alpha}(x)=\begin{cases}0&\text{ if }x\in Z(f)\\ 1&\text{ if }x\in A_{\alpha}\\ 2&\text{ otherwise}\end{cases}\] Clearly \(f_{\alpha}\in[f]\) for all \(\alpha\in\Lambda\). Also for distinct \(\alpha,\beta\in\Lambda\), \(f_{\alpha}\neq f_{\beta}\). Therefore \(|[f]|\geq 2^{\mathfrak{c}}\). Hence \(|[f]|=2^{\mathfrak{c}}\). **Theorem 6.2.7**.: _In the Lebesgue measure space \((\mathbb{R},\mathcal{A},\ell)\), the zero-divisor graph and the comaximal graph on \(\mathcal{M}(\mathbb{R},\mathcal{A})\) are isomorphic._ Proof.: By Lemma 6.2.6, \(|[f]|=2^{\mathfrak{c}}\) for all \(f\in\mathscr{D}(\mathcal{M}(\mathbb{R},\mathcal{A}))\). In particular \(|[1_{A}]|=|[1_{X\setminus A}]|\) for each \(1_{A}\in V\). Hence by Theorem 3.34, the zero-divisor graph and the comaximal graph on \(\mathcal{M}(\mathbb{R},\mathcal{A})\) are isomorphic. Since the Lebesgue measure on \(\mathbb{R}\) is a non-atomic measure, we feel tempted to ask the following question. **Question 6.2.8**.: Suppose \((X,\mathcal{A},\mu)\) is a non-atomic measure space. Are the zero-divisor graph and the comaximal graph of \(\mathcal{M}(X,\mathcal{A})\) isomorphic?
2310.10459
On Turán inequality for ultraspherical polynomials
We show that the normalised ultraspherical polynomials, $G_n^{(\lambda)}(x)=C_n^{(\lambda)}(x)/C_n^{(\lambda)}(1)$, satisfy the following stronger version of Turán inequality, $$|x|^\theta \left(G_n^{(\lambda)}(x)\right)^2 -G_{n-1}^{(\lambda)}(x)G_{n+1}^{(\lambda)}(x) \ge 0 ,\;\;\;|x| \le 1, $$ where $\theta=4/(2-\lambda)$ if $-1/2 <\lambda \le 0$, and $\theta=2/(1+2\lambda)$ if $\lambda \ge 0$. We also provide a similar generalisation of Turán inequalities for some symmetric orthogonal polynomials with a finite or infinite support defined by a three-term recurrence.
Ilia Krasikov
2023-10-16T14:41:37Z
http://arxiv.org/abs/2310.10459v2
# On Turán inequality for ultraspherical polynomials ###### Abstract. We show that the normalised ultraspherical polynomials, \(G_{n}^{(\lambda)}(x)=C_{n}^{(\lambda)}(x)/C_{n}^{(\lambda)}(1)\), satisfy the following stronger version of Turán inequality, \[|x|^{\theta}\left(G_{n}^{(\lambda)}(x)\right)^{2}-G_{n-1}^{(\lambda)}(x)G_{n+1}^{(\lambda)}(x)\geq 0,\ \ \ |x|\leq 1,\] where \(\theta=4/(2-\lambda)\) if \(-1/2<\lambda\leq 0\), and \(\theta=2/(1+2\lambda)\) if \(\lambda\geq 0\). We also provide a similar generalisation of Turán inequalities for some symmetric orthogonal polynomials with a finite or infinite support defined by a three-term recurrence. **Keywords:** Orthogonal polynomials, Turán inequality, ultraspherical polynomials, Gegenbauer polynomials, three-term recurrence. _2010 Mathematics Subject Classification 33C45_ ## 1. Introduction Let \[y_{n}(x)=G_{n}^{(\lambda)}(x)=\frac{C_{n}^{(\lambda)}(x)}{C_{n}^{(\lambda)}(1)}\] be the normalised ultraspherical polynomials, \(G_{n}^{(\lambda)}(1)=1\). The classical Turán inequality states [4] that \[y_{n}^{2}(x)-y_{n-1}(x)y_{n+1}(x)\geq 0,\ \ \lambda\geq 0,\ \ |x|\leq 1. \tag{1}\] Gerhold and Kauers [2] gave a computer proof of the following stronger result in the special case \(\lambda=\frac{1}{2}\) of Legendre polynomials \(P_{n}(x)\), \[|x|P_{n}^{2}(x)-P_{n-1}(x)P_{n+1}(x)\geq 0,\ \ \ |x|\leq 1. \tag{2}\] In turn, this was extended by Nikolov and Pillwein to the ultraspherical case [3], \[|x|y_{n}^{2}(x)-y_{n-1}(x)y_{n+1}(x)\geq 0,\ \ -1/2<\lambda\leq 1/2,\ \ |x|\leq 1. \tag{3}\] In this paper we use an elementary geometric approach which, in particular, leads to the following sharper generalization of (3): **Theorem 1**.: _For \(\lambda>-1/2\) and \(|x|\leq 1\) the following Turán type inequality holds,_ \[\Delta_{n}=\Delta_{n}^{(\lambda)}(x)=|x|^{\theta}y_{n}^{2}(x)-y_{n-1}(x)y_{n+1}(x)\geq 0, \tag{4}\] _where_ \[\theta=\left\{\begin{array}{ll}\frac{4}{2-\lambda}\,,&-1/2<\lambda\leq 0;\\ \\ \frac{2}{1+2\lambda}\,,&\lambda\geq 0.\end{array}\right.\] _Moreover, for \(\lambda\geq 0\) the above value of \(\theta\) is the best possible for all \(n\geq 1\)._ For \(-1/2<\lambda<0\) the obtained result \(\theta=\dfrac{4}{2-\lambda}\) is not sharp and, in fact, the maximal possible value of \(\theta\) depends on \(n\). Note also that generally one must have \(\theta\leq 2\) for all results of this type. This is so because for any family of orthogonal polynomials defined by the general three-term recurrence \[p_{n+1}(x)=(b_{n}x+c_{n})p_{n}(x)-a_{n}p_{n-1}(x),\] the following easy-to-check identity holds for all \(x\in\mathbb{R}\): \[\dfrac{(xb_{n}+c_{n})^{2}}{4a_{n}}\,p_{n}^{2}(x)-p_{n-1}(x)p_{n+1}(x)=\dfrac{(p_{n+1}(x)-a_{n}p_{n-1}(x))^{2}}{4a_{n}}\geq 0. \tag{5}\] Thus, all local maxima of the ratio \(\dfrac{p_{n-1}(x)p_{n+1}(x)}{p_{n}^{2}(x)}\) lie on the curve \(\dfrac{(xb_{n}+c_{n})^{2}}{4a_{n}}\). In particular, (5) is sharp in the orthogonality interval. For the normalised ultraspherical polynomials, which are defined by the recurrence \[(n+2\lambda)y_{n+1}=2(n+\lambda)xy_{n}-ny_{n-1}, \tag{6}\] identity (5) yields \[\left(1+\dfrac{\lambda^{2}}{n(n+2\lambda)}\right)x^{2}y_{n}^{2}-y_{n-1}y_{n+1}\geq 0,\ \ x\in\mathbb{R}. \tag{7}\] Thus, the inequality \[x^{2}y_{n}^{2}-y_{n-1}y_{n+1}\geq 0,\ \ x\in[-1,1],\] is possible only for \(\lambda=0\), that is, for the Chebyshev polynomials \(T_{n}\). Our proof of Theorem 1 can be extended to some other families of orthogonal polynomials.
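Theorem 1 is easy to probe numerically. The following minimal sketch (ours, not the paper's) checks inequality (4) on a grid using SciPy's Gegenbauer polynomials; the choice \(\lambda=3/4\), the grid, and the tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.special import gegenbauer

lam = 0.75                       # an illustrative lambda > 0
theta = 2.0 / (1.0 + 2.0 * lam)  # exponent from Theorem 1 for lambda >= 0
x = np.linspace(-1.0, 1.0, 2001)

def G(n):
    """Normalised ultraspherical polynomial G_n = C_n / C_n(1)."""
    C = gegenbauer(n, lam)
    return C(x) / C(1.0)

for n in range(1, 8):
    delta = np.abs(x) ** theta * G(n) ** 2 - G(n - 1) * G(n + 1)
    assert delta.min() > -1e-9, (n, delta.min())
print("Turán-type inequality (4) holds on the grid for n = 1..7")
```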
In particular, we provide the following generalisation of the case \(\lambda<0\) of Theorem 1. Consider a family of symmetric polynomials \(p_{n}(x)\) orthogonal on \([-1,1]\) and normalised by \(p_{n}(1)=1\). Let \(p_{-1}=0,\ p_{0}=1\) and \(a_{0}=0\); then such polynomials are defined by the following three-term recurrence, \[(1-a_{n})p_{n+1}(x)=xp_{n}(x)-a_{n}p_{n-1}(x),\ \ 0<a_{n}<1. \tag{8}\] **Theorem 2**.: _Suppose that the sequence \(a_{n}\) in (8) is decreasing and \(\frac{1}{2}<a_{n}<1\). Then the following Turán type inequality holds,_ \[\Delta_{n}=|x|^{\theta}p_{n}^{2}(x)-p_{n-1}(x)p_{n+1}(x)\geq 0,\ \ |x|\leq 1,\] _where_ \[\theta=\inf_{n\geq 1}\dfrac{2\log\dfrac{(1-a_{n})\,a_{n+1}}{(1-a_{n+1})\,a_{n}}}{\log\dfrac{4(1-a_{n})\,a_{n+1}^{2}}{a_{n}}}. \tag{9}\] In the ultraspherical case with \(-\frac{1}{2}<\lambda\leq 0\), this gives the same value for \(\theta\) as Theorem 1. We will also consider orthogonal polynomials with (possibly) unbounded support, a family which includes the Hermite polynomials as a special case. It was observed in [1] that the Turán inequality \[p_{n}^{2}(x)-p_{n-1}(x)p_{n+1}(x)\geq 0 \tag{10}\] holds for any family of monic symmetric orthogonal polynomials \(p_{n}(x)\) defined by the three-term recurrence \[p_{n+1}(x)=xp_{n}(x)-a_{n}p_{n-1}(x),\ \ p_{-1}=0,\ p_{0}=1,\ \ \ a_{n}>0, \tag{11}\] with increasing coefficients \(a_{n}\). We will show that (10) can also be strengthened in the spirit of Theorem 1. Note that in the case of an unbounded support, that is, \(a_{n}\to\infty\) in (11), any inequality of the form \[\phi(x)p_{n}^{2}(x)-p_{n-1}(x)p_{n+1}(x)\geq 0,\] must satisfy \(\lim\limits_{x\to\infty}\phi(x)=1\), as \[\lim\limits_{x\to\infty}\frac{p_{n-1}(x)p_{n+1}(x)}{p_{n}^{2}(x)}=1,\] and possibly even stronger restrictions due to the asymptotic behaviour of \(p_{n}(x)\). **Theorem 3**.: _Let \(p_{n}(x)\) be a family of monic symmetric orthogonal polynomials satisfying (11) with an increasing sequence \(a_{n},\ n=1,2,...\,\). Then the following Turán type inequality holds,_ \[\Delta_{n}(x)=\frac{x^{2}}{x^{2}+a_{n}-a_{n-1}}\,p_{n}^{2}(x)-p_{n-1}(x)p_{n+1}(x)\geq 0,\ \ x\in\mathbb{R}. \tag{12}\] _In particular, for the Hermite polynomials in the monic or standard normalisation [5] we have_ \[\frac{x^{2}}{x^{2}+\frac{1}{2}}H_{n}^{2}(x)-H_{n-1}(x)H_{n+1}(x)\geq 0. \tag{13}\] It would be interesting to obtain similar inequalities for Laguerre-type polynomials orthogonal on \([0,\infty)\). Our approach is based on the following observation. Setting \[t=t_{n}(x)=\frac{p_{n+1}(x)}{p_{n}(x)} \tag{14}\] and using the corresponding three-term recurrence, one can rewrite a Turán type inequality as a quadratic form \(Q(x)=A(x)t^{2}+B(x)t+C(x)\) in \(t\), where we have to assume that \(A(x)>0\), since otherwise the form attains negative values in the oscillatory region, where \(t\) runs over all real values. Suppose now that one wants to prove that \(Q_{n+1}(x)=A_{n+1}(x)t^{2}+B_{n+1}(x)t+C_{n+1}(x)\geq 0\), using that \(Q_{n}(x)=A_{n}(x)t^{2}+B_{n}(x)t+C_{n}(x)\geq 0\). Consider the corresponding curves defined by the equalities \[\mathcal{T}_{n}(x)=A_{n}(x)\tau^{2}+B_{n}(x)\tau+C_{n}(x)=0,\] and \[\mathcal{T}_{n+1}(x)=A_{n+1}(x)\tau^{2}+B_{n+1}(x)\tau+C_{n+1}(x)=0.\] First, one checks that \(\mathcal{T}_{n}(x)\) and \(\mathcal{T}_{n+1}(x)\) do not intersect by, say, showing that the resultant of the curves in \(\tau\) does not vanish.
Next, since \(Q_{n}(x)\geq 0\), the quadratic \(Q_{n+1}(x)\) could be negative only inside the region enclosed by \(\mathcal{T}_{n}(x)\). Therefore, it is enough to check that the curves are nested with \(\mathcal{T}_{n+1}(x)\) lying inside \(\mathcal{T}_{n}(x)\). Fig. 1 illustrates the above arguments for the ultraspherical case. ## 2. Proof of Theorem 1 We will proceed by induction on \(n\geq 1\). Without loss of generality we assume that \(\theta<2\) and \(x\in[0,1]\). Furthermore, when it is convenient, we may restrict the value of \(x\) to the open interval \((0,1)\), treating the endpoints \(x=0,1\) as limiting cases. Assume first that \(-\frac{1}{2}<\lambda<0\) and check that \(\Delta_{1}>0\). In this case already the assumption \(\theta<2\) will suffice. We have \[(1+2\lambda)\Delta_{1}=1-2(1+\lambda)x^{2}+(1+2\lambda)x^{2+\theta}\geq 1-2(1+\lambda)x^{2}+(1+2\lambda)x^{4}=\] \[(1-x^{2})\left(1-(1+2\lambda)x^{2}\right)>0.\] For \(\lambda>0\) and \(\theta=\frac{2}{1+2\lambda}\), noticing that \[(1+2\lambda)\frac{d\Delta_{1}}{dx}=-4(1+\lambda)(1-x^{\theta})x\leq 0,\] one finds \[\Delta_{1}(x)>\Delta_{1}(1)=0.\] Note that for \(n=1\) the chosen value \(\theta=\frac{2}{1+2\lambda}\) is the best possible because the expansion of \(\Delta_{1}\) into the Taylor series around \(x=1\) is \[(1+2\lambda)\Delta_{1}(x)=(2-(1+2\lambda)\theta)\,(1-x)+O((1-x)^{2}).\] More work is needed for the induction step in order to show that \(\Delta_{n}>0\) implies \(\Delta_{n+1}>0\). We split the proof into a few lemmas. As in (14), for fixed \(n\) and \(\lambda\) we introduce the function \[t=t_{n}^{(\lambda)}(x)=\frac{G_{n+1}^{(\lambda)}(x)}{G_{n}^{(\lambda)}(x)}\,.\] Figure 1. The black line is \(t_{4}\); the blue and red lines show \(\mathcal{T}_{4}\) and \(\mathcal{T}_{5}\), respectively, for three different values of \(\lambda\). Using (6), we can rewrite \(\Delta_{n}\) and \(\Delta_{n+1}\) as follows, \[\frac{n\Delta_{n}}{\left(G_{n}^{(\lambda)}(x)\right)^{2}}=(n+2\lambda)t^{2}-2(n+\lambda)xt+nx^{\theta}, \tag{15}\] \[\frac{(n+2\lambda+1)\Delta_{n+1}}{\left(G_{n}^{(\lambda)}(x)\right)^{2}}=(n+2\lambda+1)x^{\theta}t^{2}-2(n+\lambda+1)xt+n+1. \tag{16}\] Now consider the corresponding curves \[\mathcal{T}_{n}=(n+2\lambda)\tau_{n}^{2}-2(n+\lambda)x\tau_{n}+nx^{\theta}=0,\] and \[\mathcal{T}_{n+1}=(n+2\lambda+1)x^{\theta}\tau_{n+1}^{2}-2(n+\lambda+1)x\tau_{n+1}+n+1=0,\] or explicitly, \[\tau_{n}=\frac{(n+\lambda)x\pm\sqrt{(n+\lambda)^{2}x^{2}-n(n+2\lambda)x^{\theta}}}{n+2\lambda}\,, \tag{17}\] \[\tau_{n+1}=\frac{(n+\lambda+1)x\pm\sqrt{(n+\lambda+1)^{2}x^{2}-(n+1)(n+2\lambda+1)x^{\theta}}}{(n+2\lambda+1)x^{\theta}}\,, \tag{18}\] where \(0\leq x\leq 1\). Both curves have a single component and are simultaneously real for \[x\geq x_{0}=\left(\frac{(n+1)(n+2\lambda+1)}{(n+\lambda+1)^{2}}\right)^{1/(2-\theta)}\,, \tag{19}\] where \(x_{0}\) is just the vertex of \(\mathcal{T}_{n+1}\). Thus, for \(x\leq x_{0}\) both functions \(\Delta_{n}\) and \(\Delta_{n+1}\) are nonnegative, the first one by the induction hypothesis. So (as it will be necessary for \(\lambda<0\)), we may restrict the proof to \(x>x_{0}\). To show that \(\mathcal{T}_{n}\) and \(\mathcal{T}_{n+1}\) do not intersect, we calculate the resultant \(R_{n}(x,\theta)\) of \(\mathcal{T}_{n}(x)\) and \(\mathcal{T}_{n+1}(x)\) in \(\tau\) and check that it does not vanish.
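The resultant just mentioned can also be cross-checked symbolically. This small SymPy sketch (ours, not the paper's), with \(u\) standing for \(x^{\theta}\), reproduces the expression \(R_{n}=A^{2}-4x^{2}BC\) stated in Lemma 4 below.

```python
import sympy as sp

n, lam, x, u, tau = sp.symbols('n lambda x u tau')
# the curves (15)-(16) as quadratics in tau, with u standing for x**theta
Tn  = (n + 2*lam)*tau**2 - 2*(n + lam)*x*tau + n*u
Tn1 = (n + 2*lam + 1)*u*tau**2 - 2*(n + lam + 1)*x*tau + (n + 1)

# A, B, C as defined in Lemma 4, with x**theta replaced by u
A = n*(n + 2*lam + 1)*u**2 - (n + 1)*(n + 2*lam)
B = n*(n + lam + 1)*u - (n + 1)*(n + lam)
C = (n + lam)*(n + 2*lam + 1)*u - (n + lam + 1)*(n + 2*lam)

R = sp.resultant(Tn, Tn1, tau)
assert sp.expand(R - (A**2 - 4*x**2*B*C)) == 0
print("resultant agrees with A^2 - 4 x^2 B C")
```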
**Lemma 4**.: _Let \(0<x<1\), \(\lambda>-1/2\), and let \(\theta\) be as defined in Theorem 1; then \(R_{n}(x,\theta)>0\)._ Proof.: The resultant of \(\mathcal{T}_{n}(x)\) and \(\mathcal{T}_{n+1}(x)\) in \(\tau\) can be written as \[R_{n}(x,\theta)=A^{2}-4x^{2}BC,\] where \[A=n(n+2\lambda+1)x^{2\theta}-(n+1)(n+2\lambda),\] \[B=n(n+\lambda+1)x^{\theta}-(n+1)(n+\lambda),\] \[C=(n+\lambda)(n+2\lambda+1)x^{\theta}-(n+\lambda+1)(n+2\lambda).\] We shall consider two cases. _Case 1._ \(\lambda>0\). First, we observe that \(R_{0}(x,\theta)>0\) for \(\theta=\frac{2}{1+2\lambda}\) and \(0<x<1\). Indeed, \[\rho(x)=\frac{1}{4\lambda^{2}}\,R_{0}(x,\theta)=1-2(1+\lambda)x^{2}+(1+2\lambda)x^{2+\frac{2}{1+2\lambda}}>0,\] since \[\rho^{\prime}(x)=4(1+\lambda)(x^{\theta}-1)x<0,\;\text{and}\;\;\rho(1)=0.\] Hence, it will be enough to prove the following inequality, \[D_{n}(x,\theta)=\frac{R_{n}(x,\theta)-R_{0}(x,\theta)}{n(n+2\lambda+1)(1-x^{\theta})}>0,\] where \[D_{n}(x,\theta)=n(n+2\lambda+1)(1-x^{\theta})(1-2x+x^{\theta})(1+2x+x^{\theta})\,+4\lambda\eta(x),\] and \[\eta(x)=1-(3+\lambda)x^{2}+(1+x^{2}+\lambda x^{2})x^{\theta}.\] The first summand on the right-hand side of \(D_{n}(x,\theta)\) is positive as \(0<\theta<2\) and \(0<x<1\). Let us show that \(\eta(x)\) is also positive in that region. We have \[\eta(x)>1-(3+\lambda)x^{2}+(1+x^{2}+\lambda x^{2})x^{2}=(1-x^{2})(1-x^{2}-\lambda x^{2}),\] hence \(\eta(x)>0\) for \(x<\frac{1}{\sqrt{1+\lambda}}\). Assume now that \(\frac{1}{\sqrt{1+\lambda}}\leq x<1\), then \[\frac{(1+2\lambda)x}{2}\,\eta^{\prime}(x)=\left(1+2(1+\lambda)^{2}x^{2}\right)x^{\theta}-(2\lambda^{2}+7\lambda+3)x^{2}<\] \[1+2(1+\lambda)^{2}x^{2}-(2\lambda^{2}+7\lambda+3)x^{2}=1-(1+3\lambda)x^{2}\leq 1-\frac{1+3\lambda}{1+\lambda}<0,\] hence \(\eta(x)>\eta(1)=0\). This completes the proof of Case 1. _Case 2._ \(-1/2<\lambda<0\). To show that \(R_{n}(x,\theta)>0\), we may assume that \(BC>0\), otherwise there is nothing to prove. By \(\theta<2\) we have \[R_{n}(x,\theta)=A^{2}-4x^{2}BC\geq A^{2}-4x^{\theta}BC=\] \[z(nz+2\lambda)\left((n+2\lambda+1)z-2\lambda\right)\left(n(n+2\lambda+1)z+2\lambda\right),\ \ z=1-x^{\theta}.\] This implies that \(R_{n}(x,\theta)>0\) for \(\frac{(n+1)(n+2\lambda)}{n(n+2\lambda+1)}<x^{\theta}<1\). We have to show that \(R_{n}(x,\theta)\geq 0\) only for \(x_{0}<x<1\), where \(x_{0}\) is defined by (19). Hence, it is enough to check that for \(\theta=4/(2-\lambda)\), \[x_{0}^{\theta}>\frac{(n+1)(n+2\lambda)}{n(n+2\lambda+1)}\,,\] or, equivalently, \[g(n,\lambda)=2\ln\frac{(n+1)(n+2\lambda+1)}{(n+\lambda+1)^{2}}-\lambda\ln\frac{n(n+2\lambda+1)}{(n+1)(n+2\lambda)}>0.\] Since \[\frac{\partial g(n,\lambda)}{\partial n}=-\frac{2\lambda^{2}(3n+2\lambda^{2}+3\lambda+1)}{n(n+1)(n+\lambda+1)(n+2\lambda)(n+2\lambda+1)}<0,\] we have \[g(n,\lambda)>\lim_{n\to\infty}g(n,\lambda)=0.\] This completes the proof.
**Lemma 5**.: _Let \(0<x<1\) and \(\lambda>-1/2\); then the curves \(\mathcal{T}_{n}\) and \(\mathcal{T}_{n+1}\) are nested, and \(\mathcal{T}_{n+1}\) lies inside \(\mathcal{T}_{n}\)._ Proof.: For \(\lambda>0\), using an obvious notation for the branches of the roots in (17) and (18), we obtain at \(x=1\) \[\left(\tau_{n}^{(+)},\tau_{n}^{(-)}\right)=\left(1,\frac{n}{n+2\lambda}\,\right),\ \ \left(\tau_{n+1}^{(+)},\tau_{n+1}^{(-)}\right)=\left(1,\frac{n+1}{n+2\lambda+1}\,\right).\] Similarly, for \(\lambda<0\), we obtain \[\left(\tau_{n}^{(+)},\tau_{n}^{(-)}\right)=\left(\frac{n}{n+2\lambda}\,,1\right),\ \ \left(\tau_{n+1}^{(+)},\tau_{n+1}^{(-)}\right)=\left(\frac{n+1}{n+2\lambda+1}\,,1\right).\] Since the curves do not intersect for \(x<1\), it follows that \(\mathcal{T}_{n+1}\) is enclosed inside \(\mathcal{T}_{n}\). This completes the proof of inequality (4). It is left to show that for \(\lambda\geq 0\) the value \(\theta=2/(2\lambda+1)\) is sharp for all \(n\geq 1\). The following lemma completes the proof of Theorem 1. **Lemma 6**.: _Let \(\lambda\geq 0\), then for any \(n\geq 1\) inequality (4) does not hold with \(\theta>\frac{2}{1+2\lambda}\) for \(x\) less than and sufficiently close to one._ Proof.: Expanding \(\Delta_{n}\) into the Taylor series around \(x=1\) (we used Mathematica), one gets \[\binom{n+2\lambda-1}{n}^{2}\Delta_{n}=\frac{(2-\theta-2\lambda\theta)\Gamma^{2}(n+2\lambda)}{(1+2\lambda)\Gamma^{2}(2\lambda)\,n!^{2}}\,(1-x)+O((1-x)^{2}).\] Hence, \(\Delta_{n}<0\) in the vicinity of \(x=1\) if \(\theta>\frac{2}{1+2\lambda}\,.\) Note that the main term of the expansion for \(\theta=\frac{2}{1+2\lambda}\) is \[\frac{16\lambda^{2}(3n^{2}+6\lambda n+2\lambda^{2}-\lambda)\Gamma^{2}(n+2\lambda)}{(3+2\lambda)\Gamma^{2}(2\lambda+2)n!^{2}}\,(1-x)^{2},\] which is still positive. Since the vertex of \(\mathcal{T}_{n}\) is situated at \[\tilde{x}=\left(\frac{n(n+2\lambda)}{(n+\lambda)^{2}}\right)^{1/(2-\theta)},\ \ \text{where}\ \ \frac{1}{2-\theta}=\left\{\begin{array}{cc}\frac{1}{2}-\frac{1}{\lambda}\,,&-\frac{1}{2}<\lambda<0,\\ \\ \frac{1}{2}+\frac{1}{4\lambda}\,,&\lambda\geq 0,\end{array}\right.\] the inequalities of Theorem 1 are non-trivial only in the intervals \((\tilde{x},1)\) and \((-1,-\tilde{x})\). Outside these regions \(\Delta_{n}>0\) just as a positive definite quadratic in \(t_{n}\). In particular, for a fixed \(\lambda\) the length of these intervals shrinks as \(O(n^{-2})\). Using that \(\Delta_{n}\) must be positive definite in the oscillatory region, where \(t_{n}\) runs through all real values, we can provide more information about the location of the vertex \(\tilde{x}\) of \(\mathcal{T}_{n}\) with respect to the zeros of \(C_{n+1}^{(\lambda)}(x)\). **Claim 1**.: _Let \(x_{1}>x_{2}>\ldots>x_{n+1}\) be the zeros of \(C_{n+1}^{(\lambda)}(x)\). Then_ \[\tilde{x}>\left\{\begin{array}{cc}x_{2},&-1/2<\lambda<0,\\ \\ x_{1},&\lambda\geq 0.\end{array}\right.\] Proof.: First notice that \(|x_{i}|<1\). If \(\lambda=0\) the situation is trivial as \(\tilde{x}=1\). Assuming that \(\lambda\neq 0\), we observe that \(\Delta_{n}>0\) means that \(t_{n}\) does not intersect \(\mathcal{T}_{n}\). Notice also that \(t_{n}\) is an increasing function of \(x\) for any \(n\), as \[y_{n}^{2}(x)\frac{dt_{n}}{dx}=y_{n+1}^{\prime}(x)y_{n}(x)-y_{n}^{\prime}(x)y_{n+1}(x)>0\] is just the Christoffel-Darboux kernel. It is also easy to check that \(\mathcal{T}_{n}^{(-)}(x)\), the lower branch of \(\mathcal{T}_{n}\), is positive and decreasing in \(x\).
The situation when \(x_{2}\geq\tilde{x}\) is impossible, as then \(t_{n}\), running through the whole interval \([0,\infty)\), intersects \(\mathcal{T}_{n}^{(-)}(x)\) at some point between \(x_{2}\) and \(x_{1}\). The same is true regarding the interval \([x_{1},1]\), provided the intersection occurs before \(x=1\). For \(x\in(x_{1},1]\), \(t_{n}(x)\) increases from zero to one. However \(\tau_{n}^{(-)}(1)=\frac{n+\lambda-|\lambda|}{n+2\lambda}<1\) for \(\lambda>0\), where \(\tau_{n}^{(\pm)}\) are defined by (17). Therefore the case \(x_{1}\geq\tilde{x}\) is impossible for \(\lambda>0\), as otherwise \(t\) and \(\mathcal{T}_{n}^{(-)}(x)\) would intersect inside the interval \((x_{1},1)\). The different situations appearing in the proof of Claim 1 are illustrated in Fig. 1 for \(n=4\) and three values of \(\lambda\). Here \(x_{2}<\tilde{x}<x_{1}\) for \(\lambda=-\frac{1}{3}\), and \(x_{1}<\tilde{x}\) for \(\lambda=-\frac{1}{4}\) and \(\lambda=1\). In general, the difference between the cases \(\lambda<0\) and \(\lambda>0\) in Claim 1 occurs since \(t\) passes below \(\mathcal{T}_{n}\) for \(\lambda<0\) and above it for \(\lambda>0\). For \(\lambda<0\) we have \(\tau_{n}^{(-)}(1)=1\), and both situations, \(x_{2}<\tilde{x}<x_{1}\) and \(x_{1}<\tilde{x}\leq 1\), may occur depending on the values of \(\lambda\) and \(n\). **Remark 1**.: _It seems difficult to find the exact value of \(\theta\) for \(-\frac{1}{2}<\lambda<0\). Let us note that for \(\theta>\frac{8}{4-\lambda}\) the resultant changes sign for sufficiently large \(n\). Thus the curves intersect and our arguments fail. Indeed, for \(\theta>\frac{8}{4-\lambda}\), in the notation of Lemma 5, the Taylor series expansion around \(n=\infty\) at \(\hat{x}=\frac{3x_{0}+1}{4}\) yields,_ \[\tau_{n}^{(+)}(\hat{x})-\tau_{n+1}^{(+)}(\hat{x})=\frac{3\lambda\left((4-\lambda)\theta-8\right)}{4(2-\theta)n^{2}}+O(n^{-3})<0,\] \[\tau_{n+1}^{(-)}(\hat{x})-\tau_{n}^{(-)}(\hat{x})=\frac{\lambda\left((4+3\lambda)\theta-8\right)}{4(2-\theta)n^{2}}+O(n^{-3})>0.\] _For \(\theta=\frac{8}{4-\lambda}\) the curves probably do not intersect as_ \[R_{n}(\hat{x},\,\frac{8}{4-\lambda})=\frac{9}{2}\,(8+\lambda^{2})\lambda^{4}n^{-4}+O(n^{-5})>0.\] ## 3. Proof of Theorem 2 Proof.: It is enough to consider the case \(0<x<1\), as in Theorem 1. We may assume that the coefficients \(a_{n}\) are strictly decreasing, treating the general situation as a limiting case. By the assumption \(\theta<2\), we get \[\Delta_{1}=x^{2+\theta}-\frac{x^{2}-a_{1}}{1-a_{1}}>\frac{1-x^{2}}{1-a_{1}}(2a_{1}-1)>0.\] Note also that if the Turán inequality of the theorem holds for some \(\theta_{0}\), it also holds for any \(\theta<\theta_{0}\). The proof is by induction on \(n\) and is similar to that of Theorem 1. To show that \(\Delta_{n+1}\geq 0\) we consider the following two expressions, \[\frac{a_{n}\Delta_{n}}{p_{n}^{2}(x)}\ \ \text{and}\ \ \ \frac{(1-a_{n+1})\Delta_{n+1}}{p_{n}^{2}(x)}\,,\] set \(t=p_{n+1}(x)/p_{n}(x)\), and define the corresponding curves \[\mathcal{T}_{n}(x)=(1-a_{n})\tau^{2}-x\tau+a_{n}x^{\theta}=0,\] \[\mathcal{T}_{n+1}(x)=(1-a_{n+1})x^{\theta}\tau^{2}-x\tau+a_{n+1}=0,\] with the vertices \[\left(4a_{n}-4a_{n}^{2}\right)^{1/(2-\theta)}\text{ and }\left(4a_{n+1}-4a_{n+1}^{2}\right)^{1/(2-\theta)},\] respectively. Both curves have a single component and are simultaneously real for \[x\geq x_{0}=\left(4a_{n+1}-4a_{n+1}^{2}\right)^{1/(2-\theta)}.\] First, we have to show that \(\mathcal{T}_{n}\) and \(\mathcal{T}_{n+1}\) do not intersect on \((0,1)\).
The resultant of \(\mathcal{T}_{n}\) and \(\mathcal{T}_{n+1}\) in \(\tau\) is given by \[R_{n}(x,\theta)=A^{2}-x^{2}BC, \tag{20}\] where \[A=a_{n}(1-a_{n+1})x^{2\theta}-(1-a_{n})a_{n+1},\] \[B=a_{n+1}-a_{n}x^{\theta},\] \[C=1-a_{n}-(1-a_{n+1})x^{\theta}.\] If \(BC\leq 0\) there is nothing to prove. Otherwise, using \(x^{2}<x^{\theta}\), we get \[R_{n}(x,\theta)>A^{2}-x^{\theta}BC=(x^{\theta}-1)\left(a_{n}(1-a_{n+1})x^{\theta}-a_{n+1}(1-a_{n})\right)\times\] \[\left(a_{n}(1-a_{n+1})x^{2\theta}-\left((1-a_{n})(1-a_{n+1})+a_{n}a_{n+1}\right)x^{\theta}+a_{n+1}(1-a_{n})\right),\] and on the interval below the first and third factors are negative while the second one is positive. Thus, \(R_{n}(x,\theta)>0\) for \[\frac{a_{n+1}(1-a_{n})}{a_{n}(1-a_{n+1})}<x^{\theta}<1.\] Hence, the curves do not intersect on \((x_{0},1)\) if \[x_{0}^{\theta}>\frac{a_{n+1}(1-a_{n})}{a_{n}(1-a_{n+1})}\,.\] This yields inequality (9). It is left to check that \(\mathcal{T}_{n+1}(x)\) lies inside \(\mathcal{T}_{n}(x)\). For \(x=1\), one finds \[\mathcal{T}_{n}^{(+)}(1)=\frac{a_{n}}{1-a_{n}}>\mathcal{T}_{n+1}^{(+)}(1)=\frac{a_{n+1}}{1-a_{n+1}}\,,\quad\mathcal{T}_{n}^{(-)}(1)=\mathcal{T}_{n+1}^{(-)}(1)=1.\] Hence, the curves are nested with \(\mathcal{T}_{n+1}\) lying inside \(\mathcal{T}_{n}\). **Remark 2**.: _It is easy to show that for ultraspherical polynomials and \(\lambda<0\), Theorem 2 leads to the same result as Theorem 1. Indeed, for the ultraspherical case, \(a_{n}=\frac{n}{2(n+\lambda)}\), and_ \[F(n)=\frac{2\log\frac{(1-a_{n})\,a_{n+1}}{(1-a_{n+1})\,a_{n}}}{\log\frac{4(1-a_{n})\,a_{n+1}^{2}}{a_{n}}}=\frac{2\log\frac{(n+1)(n+2\lambda)}{n(n+2\lambda+1)}}{\log\frac{(n+1)^{2}(n+2\lambda)}{n(n+\lambda+1)^{2}}}\,.\] _One finds \(\frac{dF(n)}{dn}<0\), hence,_ \[\inf F(n)=\lim_{n\to\infty}F(n)=\frac{4}{2-\lambda}\,.\] _On the other hand, an attempt to generalise the case \(\lambda>0\) would require working with recurrence (8) for an increasing sequence \(a_{n}\), where \(a_{n}<1/2\). It is unclear how to extract \(\theta\) from the inequality \(R_{n}(x,\theta)>0\) in that case._ ## 4. Proof of Theorem 3 Proof.: The proof is by induction on \(n\). Without loss of generality, we assume that the sequence \(a_{i}\) is strictly increasing and that \(x>0\). It will be convenient to set \(a_{0}=0\), then \(\Delta_{1}(x)=a_{1}^{2}/(x^{2}+a_{1})>0\). Rewriting \(\Delta_{n}\) and \(\Delta_{n+1}\) in terms of \(t=\frac{p_{n+1}(x)}{p_{n}(x)}\), and setting \(d_{i}=a_{i}-a_{i-1}>0\), we consider \(a_{n}(x^{2}+d_{n})\frac{\Delta_{n}(x)}{p_{n}^{2}(x)}\) and \((x^{2}+d_{n+1})\frac{\Delta_{n+1}(x)}{p_{n}^{2}(x)}\). Then the corresponding curves are \[\mathcal{T}_{n}(x)=(x^{2}+d_{n})\tau^{2}-(x^{2}+d_{n})x\tau+a_{n}x^{2}=0,\] \[\mathcal{T}_{n+1}(x)=x^{2}\tau^{2}-(x^{2}+d_{n+1})x\tau+a_{n+1}(x^{2}+d_{n+1})=0.\] To show that \(\mathcal{T}_{n}\) and \(\mathcal{T}_{n+1}\) do not intersect, we calculate the resultant in \(\tau\). This yields a rather long expression of the form \[\sum_{i=0}^{3}b_{i}\,x^{2i},\] where all \(b_{i}\) are positive as they are composed of positive monomials. Therefore the resultant does not vanish. We omit the details. It is left to verify that the curves are nested and \(\mathcal{T}_{n+1}\) lies inside \(\mathcal{T}_{n}\). For this, it is enough to check that the value of \(\mathcal{T}_{n}\) at the vertex of \(\mathcal{T}_{n+1}\), with coordinates \[\left(\sqrt{3a_{n+1}+a_{n}}\,,\frac{2a_{n+1}}{\sqrt{3a_{n+1}+a_{n}}}\right),\] is negative. Indeed, one calculates that it is \[-\frac{6d_{n+1}^{3}+(17a_{n}+2d_{n})d_{n+1}^{2}+6a_{n}(2a_{n}+d_{n})d_{n+1}+4a_{n}^{2}d_{n}}{3a_{n+1}+a_{n}}<0.\] Hence, \(\Delta_{n+1}(x)>0\), provided \(\Delta_{n}(x)>0\). This completes the proof of (12).
Finally, let \(H_{n}(x)\) and \(\mathcal{H}_{n}(x)\) be the Hermite polynomials in the standard and monic normalisation, respectively, \(\mathcal{H}_{n}(x)=2^{-n}H_{n}(x)\). Now (13) follows by taking \(a_{n}=n/2\) in the monic case; the statement carries over to the standard normalisation since both terms in (13) then scale by the same factor \(4^{n}\).
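Inequality (13) can be sanity-checked without floating-point cancellation in the cleared form \(x^{2}H_{n}^{2}-(x^{2}+\frac{1}{2})H_{n-1}H_{n+1}\geq 0\). The sketch below is ours, not the paper's; the rational grid is an illustrative choice, and the arithmetic is exact.

```python
from fractions import Fraction

def hermite_seq(x, nmax):
    """Physicists' Hermite polynomials H_0..H_nmax at x,
    via the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}."""
    H = [Fraction(1), 2 * x]
    for k in range(1, nmax):
        H.append(2 * x * H[k] - 2 * k * H[k - 1])
    return H

# check the cleared form x^2 H_n^2 - (x^2 + 1/2) H_{n-1} H_{n+1} >= 0
for num in range(-80, 81):
    x = Fraction(num, 10)  # x in [-8, 8] with step 1/10
    H = hermite_seq(x, 12)
    for n in range(1, 12):
        assert x**2 * H[n]**2 - (x**2 + Fraction(1, 2)) * H[n-1] * H[n+1] >= 0
print("cleared form of (13) verified exactly on the grid")
```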
2305.14489
Are Large Language Models Robust Coreference Resolvers?
Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language. At the same time, pre-trained large language models (LMs) have been reported to exhibit strong zero- and few-shot learning abilities across a wide range of NLP tasks. However, prior work mostly studied this ability using artificial sentence-level datasets such as the Winograd Schema Challenge. In this paper, we assess the feasibility of prompt-based coreference resolution by evaluating instruction-tuned language models on difficult, linguistically-complex coreference benchmarks (e.g., CoNLL-2012). We show that prompting for coreference can outperform current unsupervised coreference systems, although this approach appears to be reliant on high-quality mention detectors. Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods; yet continued fine-tuning of neural models should still be preferred if small amounts of annotated examples are available.
Nghia T. Le, Alan Ritter
2023-05-23T19:38:28Z
http://arxiv.org/abs/2305.14489v2
# Are Large Language Models Robust Zero-shot Coreference Resolvers? ###### Abstract Recent progress in domain adaptation for coreference resolution relies on continued training using annotated data from target domains. At the same time, pre-trained large language models (LMs) have exhibited strong zero- and few-shot learning abilities across a wide range of NLP tasks including pronoun resolution. While this demonstrates evidence of coreference ability, previous work has mostly studied this ability using simple sentence-level datasets such as the Winograd Schema Challenge. In this work, we assess the feasibility of zero-shot learning for coreference resolution by evaluating instruction-tuned language models on more difficult, linguistically-complex coreference benchmarks (e.g., CoNLL-2012). We demonstrate that zero-shot prompting outperforms current unsupervised coreference systems. Further investigations reveal the robust zero-shot generalization ability of instruction-tuned LMs across a wide range of domains, languages, and time periods, as well as a strong reliance on high-quality mention detection systems. ## 1 Introduction Entity coreference resolution aims to find all spans within an input text that refer to the same entity (Jurafsky and Martin, 2000). As an important task in the core NLP pipeline, coreference resolution has received considerable attention from the NLP community over the years, with most progress driven by neural coreference models (Lee et al., 2017; Wu et al., 2020; Joshi et al., 2020) that achieve substantial improvements on the CoNLL-2012 benchmark (Pradhan et al., 2012). There has also been an increasing interest in the generalization of coreference systems to domains and languages beyond CoNLL-2012 (Xia and Van Durme, 2021; Bohnet et al., 2022). On the other hand, unsupervised (Haghighi and Klein, 2010) and few-shot (Le et al., 2022) coreference research has received less attention in recent years, despite the fact that learning from little labeled data is desirable for coreference systems when adapting to new domains and where annotating full coreference chains is expensive (Gandhi et al., 2022). Concurrently, there has been a great deal of research in zero- and few-shot learning based on prompting pre-trained language models, due to their emerging abilities in solving a variety of NLP tasks (Brown et al., 2020; Raffel et al., 2020; Ouyang et al., 2022; Schick and Schütze, 2021). There have also been several attempts at evaluating pre-trained LMs' coreference abilities under zero- and few-shot settings: Brown et al. (2020) first demonstrated the effectiveness of prompting GPT-3 for resolving coreference on the Winograd Schema Challenge (WSC), Yang et al. (2022) showed that coreference resolution was a challenging task for GPT-2 when prompted with multiple-choice templates, and Agrawal et al. (2022) successfully reframed clinical pronoun resolution as span generation. While these studies revealed some evidence of the coreference abilities of large LMs, they evaluated the models on either sentence-level or syntactically simple datasets. In contrast, the traditional benchmark dataset for coreference resolution, CoNLL-2012, contains document-level examples with much more complex linguistic annotations (Pradhan et al., 2012). In this paper, we aim to bridge the gap between the coreference and language modeling literatures by investigating to what extent instruction-tuned language models (e.g., InstructGPT) can perform zero-shot coreference resolution.
In preliminary experiments (Appendix §A.1), we found that zero-shot prompting with a Question Answering (QA) template, like previous work, is less effective at eliciting coreference knowledge than a document-level, marker-based prompt template (Figure 1). We then show that prompting LLMs is a feasible strategy for zero-shot coreference, achieving competitive performance compared to previous unsupervised coreference resolution systems. Our analysis reveals strong antecedent linking abilities as well as the generalizability of LLMs to different domains, languages, and time periods. However, we also observe that the models struggle to predict the correct mention spans, instead showing strong reliance on a robust external mention detector. ## 2 Prompt-based Coreference Resolution Previous work in zero- and few-shot coreference resolution assumes access to candidate mentions to resolve, usually pronouns in the passage [11, 12]. This method is akin to the pipeline approach in modelling coreference: a mention detector generates a candidate mention set, followed by an antecedent linker that links each anaphor to an antecedent [13, 14]. We adopt this formulation, following previous work: given a document, we assume the existence of a set of mentions, prompt an autoregressive language model with several handcrafted prompts, and extract the predicted coreference links (Figure 1). Prior work has mainly experimented with question answering prompts for pronoun resolution [11, 12] and demonstrated their effectiveness compared with other templates such as multiple-choice [11, 12]. However, in a preliminary study (§A.1), we found that prompting with a QA template struggled to compete with comparable rule-based unsupervised coreference systems [13], even when providing gold mentions and few-shot guidance [12], or when scaling to larger LMs (achieving 61 F\({}_{1}\) compared to 72 F\({}_{1}\) from Lee et al. [11]). We also experimented with an alternative document-level template that is able to elicit more coreference links than the usual QA template, achieving an F\({}_{1}\) of 81 in the zero-shot setting. In this template, the mentions of the input text are first marked with special tokens indicating a span to annotate (e.g., _Mr. Clinton \(\rightarrow\) [Mr. Clinton](#)_). The LM is then given instructions to annotate each marked span with a cluster ID (e.g., _[Mr. Clinton](#) \(\rightarrow\) [Mr. Clinton](#cluster_1)_). Given the strong results over the QA template, we used this document template for all zero-shot prompting experiments. Figure 1: An example of zero-shot coreference resolution with LLM prompting. Here we show the two prompt templates experimented with in this work: the Question Answering and Document templates. In the QA template, the language model generates the answer when given a passage and an open-ended \(wh\)-question [11]. In contrast, the document template marks the candidate mentions and asks the LM to annotate the cluster IDs for each mention directly within the text (represented by different colors). Both templates require a mention detector to generate candidate mentions to resolve. ## 3 CoNLL-2012 Experiments In this section, we describe the experiments that investigate the zero-shot coreference abilities of large LMs on the CoNLL-2012 benchmark [12]. We found that InstructGPT [11] yields competitive results with previous unsupervised models. Notably, it significantly outperforms comparable methods when gold mentions are provided (Table 1).
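To make the document template concrete, here is a minimal sketch of how such a prompt could be built and its completion parsed. This is our illustration only: the instruction wording and helper names are assumptions, not the authors' published prompt.

```python
import re

def mark_mentions(text: str, mentions: list[tuple[int, int]]) -> str:
    """Wrap each (start, end) character span as [span](#),
    working right to left so earlier offsets stay valid."""
    for start, end in sorted(mentions, reverse=True):
        text = text[:start] + f"[{text[start:end]}](#)" + text[end:]
    return text

# hypothetical instruction wording
INSTRUCTION = (
    "Annotate every marked span with a coreference cluster id, rewriting "
    "[span](#) as [span](#cluster_k) so that spans referring to the same "
    "entity share the same k.\n\n"
)

def parse_clusters(annotated: str) -> dict[str, list[str]]:
    """Group the annotated spans in the LM's completion by cluster id."""
    clusters: dict[str, list[str]] = {}
    for span, cid in re.findall(r"\[([^\]]+)\]\(#(cluster_\d+)\)", annotated):
        clusters.setdefault(cid, []).append(span)
    return clusters

doc = "Mr. Clinton said he would veto the bill."
prompt = INSTRUCTION + mark_mentions(doc, [(0, 11), (17, 19)])
# `prompt` is sent to the LM; its completion is parsed with parse_clusters
```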
### 3.1 Experimental Details **Dataset and Evaluation Metrics.** We report results on the traditionally benchmarked English OntoNotes 5.0 dataset (Weischedel et al., 2011; Pradhan et al., 2012), which spans seven distinct genres such as news, telephone conversations, and religious text. We follow the standard train-dev-test splits from previous work (statistics in Table 11) and report CoNLL F\({}_{1}\), which averages over three coreference-based metrics: MUC, B3, and CEAF\({}_{\phi_{4}}\). **Systems.** The primary language model we investigate is the instruction-tuned 175B GPT-3 model (text-davinci-003) from the InstructGPT series, which we refer to as InstructGPT. While we do not know the exact training details for this model, previous work has reported evidence of its coreference abilities via few-shot prompting (Agrawal et al., 2022). Since the zero-shot setting assumes no access to supervised training data, we mainly compare zero-shot InstructGPT to a popular unsupervised coreference system: Stanford's deterministic resolver, which we refer to as dcoref (Lee et al., 2013). This rule-based coreference resolver consists of multiple sieves, where each sieve is a set of handcrafted rules that filters out mentions. The sieves are ordered from highest to lowest precision to minimize cascading errors from previous sieves. We use the open-sourced implementation of dcoref to obtain the results in this study.1 Footnote 1: [https://nlp.stanford.edu/software/docref.html](https://nlp.stanford.edu/software/docref.html) In addition, we include two comparable neural, end-to-end systems: longdoc (Toshniwal et al., 2020), an entity-ranking model designed for long documents, and SpanBERT+c2f (Stolfo et al., 2022), a weakly-supervised system that trained a SpanBERT-based coarse-to-fine architecture (Joshi et al., 2020) on dcoref coreference predictions. For longdoc, since the authors pre-trained variants of this model on different coreference datasets, we select the best results trained on two out-of-domain datasets, PreCo (Chen et al., 2018) and Litbank (Bamman et al., 2020). \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{System} & \multicolumn{3}{c}{MUC} & \multicolumn{3}{c}{B3} & \multicolumn{3}{c}{CEAF\({}_{\phi_{4}}\)} & CoNLL \\ \cline{2-11} & P & R & F\({}_{1}\) & P & R & F\({}_{1}\) & P & R & F\({}_{1}\) & F\({}_{1}\) \\ \hline \hline \multicolumn{11}{c}{_Predicted mentions_} \\ dcoref (Lee et al., 2013) & 67.7 & 67.8 & 67.7 & 59.3 & 52.8 & 55.9 & 49.3 & 56.0 & 52.5 & 58.6 \\ SpanBERT+c2f (Stolfo et al., 2022) & 67.4 & 69.8 & 68.6 & 52.4 & 61.8 & 56.7 & 54.1 & 51.4 & **52.7** & 59.3 \\ longdoc (Toshniwal et al., 2021) & - & - & - & - & - & - & - & - & - & 58.8 \\ InstructGPT (Ouyang et al., 2022) & 71.1 & 69.7 & **70.4** & 58.1 & 58.6 & **58.4** & 60.6 & 45.1 & 51.7 & **60.1** \\ \hline \multicolumn{11}{c}{_Gold mentions_} \\ dcoref (Lee et al., 2013) & 90.0 & 74.5 & 81.6 & 84.2 & 59.7 & 70.0 & 74.4 & 61.4 & 67.3 & 72.9 \\ longdoc (Toshniwal et al., 2021) & - & - & - & - & - & - & - & - & - & 77.6 \\ InstructGPT (Ouyang et al., 2022) & 89.6 & 88.9 & **89.2** & 76.0 & 89.2 & **79.4** & 84.8 & 65.2 & **73.7** & **80.8** \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot result on English OntoNotes test set for predicted mentions (top) and gold mentions (bottom). Toshniwal et al.
(2021) only reported the final CoNLL F\({}_{1}\) in their paper and we could not find the detailed coreference metrics. Figure 2: Resolution accuracy by mention types (amongst the recalled mentions) on OntoNotes dev set. **Settings.** We report results under two settings: predicted mentions, where only raw text is provided as input, and gold mentions, where the gold mention boundaries are provided as input. To obtain predicted mentions, we use the same mentions output by dcoref as input into InstructGPT prompts. ### 3.2 Results Table 1 shows the results for the different unsupervised coreference systems. We note that prompting InstructGPT outperforms the other unsupervised systems for both predicted and gold mentions, with the performance gaps increasing for gold mentions. This result demonstrates the feasibility of prompting large LMs for zero-shot coreference resolution, particularly in the setting where the mentions are known. To further understand the strengths and weaknesses of InstructGPT, we break down the results according to different _resolution classes_, following Lu and Ng (2020). Specifically, for each coarse-grained mention class (named entity, pronoun, nominal), we compute the _resolution accuracy_, which is the percentage of anaphors correctly linked to an antecedent (Figure 2). We observe that InstructGPT does particularly well in pronoun resolution, corroborating previous work (Agrawal et al., 2022). It struggles more for named entities and the particularly difficult nominal resolution. However, InstructGPT still remains competitive with dcoref for these classes, with the gaps increasing when gold mentions are provided. ### 3.3 The Importance of Mention Detection While zero-shot prompting of LLMs can be competitive with previous unsupervised systems, the quality of the candidate mentions seems to have a considerable effect on the final performance. We quantify the importance of high-quality Mention Detection (MD) for prompting InstructGPT by measuring the coreference performance of InstructGPT and dcoref when provided with different candidate mention sets generated by different mention detectors (Figure 3).2 In addition, we analyze the performance of InstructGPT when zero-shot prompting for mentions, by prompting the LM with a simple mention-detection template that asks it to output a list of named entities, pronouns, and nominal phrases in the input text (Table 10). We report this result in Table 2. Footnote 2: Details of the mention detectors are presented in §A.2. First, InstructGPT consistently outperforms dcoref as MD performance increases. In general, coreference performance improves as the mention detection score increases. This is not surprising, as it has been similarly reported in previous work studying mention detection in neural coreference resolution systems (Lu and Ng, 2020). However, InstructGPT again consistently outperforms dcoref, regardless of MD performance. Second, InstructGPT struggles with zero-shot mention detection, performing much worse than dcoref, which was also developed on the OntoNotes dataset (Table 2). Further analysis by mention types shows it particularly struggles to recall nominal mentions. A qualitative example in Table 3 demonstrates that while InstructGPT was able to recover a considerable portion of named entities and pronouns, it also made numerous errors, including span errors, extra entities, and missing mentions (Kummerfeld and Klein, 2013).
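For contrast with the document template, the kind of zero-shot mention-detection prompt just described (the paper's Table 10 is not reproduced here, so the wording below is our assumption) might be sketched as:

```python
# illustrative zero-shot mention-detection prompt; the wording is an
# assumption, not the paper's Table 10 text
MD_TEMPLATE = (
    "List all mentions in the passage below, one per line, covering "
    "named entities, pronouns, and nominal phrases.\n\n"
    "Passage: {passage}\n\nMentions:\n"
)

def mention_recall(predicted: list[str], gold: list[str]) -> float:
    """Fraction of gold mention strings recovered (exact string match;
    span offsets are ignored in this simplified sketch)."""
    return sum(1 for m in gold if m in predicted) / len(gold) if gold else 0.0
```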
Given that what constitutes a mention can depend heavily on the annotation guidelines of specific datasets and domains, it may be challenging to ask an MD system to predict mentions without any labeled examples. \begin{table} \begin{tabular}{l c c} \hline \hline Type & InstructGPT & dcoref \\ \hline Name & 50.0 & **78.7** \\ Pronoun & 75.9 & **94.7** \\ Nominal & 18.7 & **52.7** \\ \hline All (F\({}_{1}\)) & 46.5 & **76.6** \\ \hline \hline \end{tabular} \end{table} Table 2: MD recall broken down by mention types, on OntoNotes dev set. Last row shows the overall F\({}_{1}\). In addition to being overall worse than dcoref, InstructGPT particularly struggles on recalling nominal noun phrases. Figure 3: CoNLL F\({}_{1}\) as a function of MD F\({}_{1}\), on OntoNotes dev set. InstructGPT and dcoref were fed the same outputs from the MD systems in Table 8. [Table 3: a qualitative mention detection example on an OntoNotes passage beginning "Nine years ago today, allegations of infidelity almost derailed Bill Clinton's journey from hope to the White House. Bob Glascoff tracks the life of ..."; the remaining extracted rows of this table were duplicated beyond recovery.] Since mention detection plays a crucial role in coreference resolution (Wu and Gardner, 2021), as well as in its generalizability to different domains, it appears that high-quality mention detection is a prerequisite for zero-shot coreference resolution. Fortunately, however, mention annotation has been shown to be much less costly than annotating full coreference chains (Gandhi et al., 2022). ## 4 Generalizing Zero-shot Coreference Recent research in coreference largely focuses on the generalization ability of neural models beyond the OntoNotes dataset (Xia and Van Durme, 2021; Gandhi et al., 2022; Bohnet et al., 2022). Given the effectiveness of zero-shot prompting of InstructGPT, and that this approach does not require any training data, we are interested in how well InstructGPT generalizes to different domains (§4.1), languages (§4.2), and time periods (§4.3). The coreference datasets for the different domains and languages are given in Table 4.3 Since zero-shot mention detection has been shown to be fairly challenging (§3.3), the experiments in this section are evaluated using gold mentions. Footnote 3: We left out GAP and WSC due to the simplicity of these datasets, as well as their having been extensively studied by previous work. As for PreCo, we contacted the authors but were not able to obtain it. ### 4.1 Can InstructGPT generalize zero-shot coreference across domains? We use the datasets benchmarked in Toshniwal et al. (2021) due to their diversity in genres (news, Wikipedia, conversations), document lengths (long vs. short), and annotation guidelines (singletons vs. non-singletons). As in §2, we compare zero-shot InstructGPT with longdoc and dcoref. In addition, while not directly comparable, we include few-shot results of the TRANSFER-ON and SpanBERT models from Xia and Van Durme (2021) as the current best results for few-shot coreference domain adaptation.4 Note that longdoc and TRANSFER-ON were also fully trained on a source coreference dataset (PreCo and OntoNotesen, respectively). For evaluation, we follow the annotation schema of the corresponding dataset (_i.e.,_ if the dataset contains singletons, then we also output singletons). Footnote 4: Figure 1 of Xia and Van Durme (2021). Model summaries are detailed in Table 12. Table 5 shows that InstructGPT performs competitively with the other models in the zero-shot setting, despite not being explicitly trained on a task-specific source dataset like the longdoc and TRANSFER-ON models. When compared to dcoref and SpanBERT (models that were not trained on source coreference datasets), InstructGPT outperforms them by a significant margin. Interestingly, we note the degradation in performance of dcoref on LitBank (LB) and QuizBowlCoref (QBC), possibly due to the inclusion of singletons (mentions without an antecedent) in these datasets. InstructGPT does not seem to suffer from this difference in annotation schema. ### 4.2 Can InstructGPT also generalize zero-shot coreference across languages?
To test the zero-shot generalization of InstructGPT on resolving coreference across multiple languages, we experimented with the Chinese and Arabic portions of OntoNotes and the multilingual coreference SemEval-2010 dataset (Recasens et al., 2010). A notable difference between OntoNotes and SemEval-2010 is the annotation of singletons, which has led to different evaluation methods for SemEval-2010. We follow the evaluation setting of previous work for each of the evaluated languages: excluding singletons from both the predicted and evaluation clusters for Chinese and Arabic, while excluding singletons from the predicted set but keeping them in the evaluation sets for the other languages. We refer to Section 5 of Bohnet et al. (2022) for more discussion on this. Since we could not find prior work on zero-shot multilingual coreference resolution evaluated on gold mentions,5 we opted to compare zero-shot InstructGPT with the closest alternatives: few-shot cross-lingual neural models from Xia and Van Durme (2021), TRANSFER-EN and XLM-R.6 Both use a pretrained XLM-RoBERTa-large encoder fine-tuned with 10 documents from the target language. Additionally, TRANSFER-EN was fully trained on English OntoNotes before continuing training on the target language, which makes it a stronger model than XLM-R. Footnote 5: Bohnet et al. (2022) reported results on zero-shot multilingual coreference. However, their models predict mentions themselves and do not accept candidate mentions as input. Footnote 6: Figure 6 of Xia and Van Durme (2021). From the results in Table 6, we observe that InstructGPT outperforms XLM-R (which was not trained on a source coreference dataset) across all languages. It is even competitive with TRANSFER-EN (which was fully trained on OntoNotesen) on Chinese and Dutch. ### 4.3 What about different time periods? An interesting dimension along which to analyze zero-shot coreference generalization is whether current zero-shot systems are robust to temporal changes (Agarwal and Nenkova, 2022; Liu and Ritter, 2023), since having coreference systems that can generalize beyond training and test datasets that were created over a decade ago (e.g., OntoNotes) can be beneficial. To that end, we compare dcoref and InstructGPT on two new silver-annotated coreference datasets from different time periods: **OntoNotes\({}^{\text{WSJ}}\)**, which contains 56 Wall Street Journal (WSJ) articles from 1989, and **RealNews\({}^{\text{WSJ}}\)**, which contains 56 WSJ articles sampled from the RealNews dataset (Zellers et al., 2019), dated from February 2015 to February 2019. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Test & Toks/Doc & \% Sing. \\ \hline OntoNotesen & 348 & 489 & 0.0 \\ \hline LitBank & 10 & 2105 & 19.8 \\ Character Iden. & 192 & 262 & 6.4 \\ WikiCoref & 30 & 1996 & 0.0 \\ QuizBowlCoref & 400 & 126 & 26.0 \\ \hline OntoNoteszh & 218 & 412 & 0.0 \\ OntoNotesar & 44 & 681 & 0.0 \\ SemEvalca & 167 & 293 & 45.9 \\ SemEvalnl & 72 & 666 & 13.0 \\ SemEvalit & 46 & 891 & 61.9 \\ SemEvales & 168 & 303 & 47.7 \\ \hline \hline \end{tabular} \end{table} Table 4: Statistics of datasets used in this work. The first five datasets are used as benchmarks in Toshniwal et al. (2021). We include only the number of test documents (first col.) since our main evaluation setting is zero-shot. A detailed version is shown in Table 11.
Since RealNews\({}^{\text{WSJ}}\) does not have coreference annotations, we used SpanBERT (Joshi et al., 2020), which was fine-tuned on the in-domain OntoNotes train set, to obtain _silver annotations_ for both datasets. We then evaluate the models on these silver annotations, with mentions given as before. Further details on how we sampled and annotated these datasets are presented in §A.3. **Results and Discussion.** Table 7 shows the results of dcoref and InstructGPT on both datasets. First, the results on OntoNotes\({}^{\text{WSJ}}\) (even on silver annotations) are fairly consistent with the previous results on the full OntoNotes test set in Table 1. Specifically, dcoref and InstructGPT achieve 70.8 and 80.9 respectively on OntoNotes\({}^{\text{WSJ}}\), and 72.9 and 80.8 respectively on the OntoNotes test set. Second, on RealNews\({}^{\text{WSJ}}\), we see a clear degradation in the performance of dcoref (-7.2 F\({}_{1}\)), whereas the reduction is less pronounced for InstructGPT (-2.7 F\({}_{1}\)). This indicates that InstructGPT appears more robust to temporal changes in the data. Further analysis of which aspects of coreference are more affected than others would be an interesting avenue for future work. ## 5 Related Work **Domain Adaptation for Coreference.** Previous work has reported that neural coreference resolvers trained on a single dataset struggle with out-of-domain generalization, with some performing worse than rule-based systems (Moosavi and Strube, 2018). Several solutions to this challenge have been proposed with varying success: Xia and Van Durme (2021) showed that continued training can help generalize to different domains and languages with as few as 10 annotated documents, and Toshniwal et al. (2021) demonstrated that joint training on large coreference corpora with different annotations can help neural models adapt to new domains. Recently, Gandhi et al. (2022) demonstrated that adapting mention annotations to new domains instead of the entire coreference chains is more cost-efficient while also improving domain adaptation performance. Their findings are in line with insights from analyzing the different components of neural coreference systems: improving mention detection provides the largest improvement to coreference performance (Lu and Ng, 2020; Wu and Gardner, 2021). We observe a similar trend with zero-shot prompting of InstructGPT. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Lang.} & \multicolumn{1}{c}{TRANSFER-EN} & \multicolumn{1}{c}{XLM-R} & \multicolumn{1}{c}{InstructGPT} \\ & en \(\rightarrow\) few-shot & \(\varnothing\) \(\rightarrow\) few-shot & \(\varnothing\) \(\rightarrow\) 0-shot \\ \hline Chinese (zh) & 75.0 & 70.0 & 77.3 \\ Arabic (ar) & 80.0 & 49.0 & 65.6 \\ Catalan (ca) & 52.0 & 29.0 & 41.9 \\ Dutch (nl) & 71.0 & 42.0 & 70.8 \\ Italian (it) & 46.0 & 25.0 & 41.4 \\ Spanish (es) & 57.0 & 35.0 & 42.2 \\ \hline \hline \end{tabular} \end{table} Table 6: CoNLL F\({}_{1}\) on the non-English portions of OntoNotes (Chinese and Arabic) and the SemEval-2010 dataset.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **source \(\rightarrow\) target** & **ON\({}^{\text{en}}\)** & **LB** & **CI** & **WC** & **QBC** & **Avg.** \\ \hline TRANSFER-ON (Xia and Van Durme, 2021) & ON\({}^{\text{en}}\)\(\rightarrow\) few-shot & - & 85.0 & - & - & 81.0 & 83.0 \\ SpanBERT (Xia and Van Durme, 2021) & \(\varnothing\)\(\rightarrow\) few-shot & - & 63.0 & - & - & 59.0 & 61.0 \\ \hline longdoc (Toshniwal et al., 2021) & PreCo \(\rightarrow\) 0-shot & 76.8 & **81.1** & 66.5 & 67.0 & **77.3** & 73.7 \\ dcoref (Lee et al., 2013) & \(\varnothing\)\(\rightarrow\) 0-shot & 72.9 & 55.4 & - & 72.4 & 34.8 & 59.0 \\ InstructGPT & \(\varnothing\)\(\rightarrow\) 0-shot & **80.8** & 77.0 & **72.6** & **72.9** & 68.3 & **74.3** \\ \hline \hline \end{tabular} \end{table} Table 5: CoNLL F\({}_{1}\) on different English coreference datasets, with the macro average shown in the last column. The “source \(\rightarrow\) target” column denotes which source coreference dataset the models were trained on and whether they were continued training on target datasets (few-shot) or not (0-shot). The first two rows are not directly comparable to InstructGPT, since they were fine-tuned with (limited) target documents. Overall, InstructGPT exhibits strong zero-shot generalization results, despite not being explicitly trained on any coreference dataset.
2310.08216
The carnivorous plant Genlisea harnesses active particle dynamics to prey on microfauna
Carnivory in plants is an unusual trait that has arisen multiple times, independently, throughout evolutionary history. Plants in the genus Genlisea are carnivorous, and feed on microorganisms that live in soil using modified subterranean leaf structures (rhizophylls). A surprisingly broad array of microfauna has been observed in the plants' digestive chambers, including ciliates, amoebae and soil mites. Here we show, through experiments and simulations, that Genlisea exploit active matter physics to 'rectify' bacterial swimming and establish a local flux of bacteria through the structured environment of the rhizophyll towards the plant's digestion vesicle. In contrast, macromolecular digestion products are free to diffuse away from the digestion vesicle and establish a concentration gradient of carbon sources to draw larger microorganisms further inside the plant. Our experiments and simulations show that this mechanism is likely to be a localised one, and that no large-scale efflux of digested matter is present.
José Martín-Roca, C. Miguel Barriuso-Gutiérrez, Raúl Martínez Fernández, Camila Betterelli Giuliano, Rongjing Zhang, Chantal Valeriani, Laurence G. Wilson
2023-10-12T11:04:42Z
http://arxiv.org/abs/2310.08216v1
# The carnivorous plant _Genlisea_ harnesses active particle dynamics to prey on microfauna ###### Abstract Carnivory in plants is an unusual trait that has arisen multiple times, independently, throughout evolutionary history. Plants in the genus _Genlisea_ are carnivorous, and feed on microorganisms that live in soil using modified subterranean leaf structures (rhizophylls). A surprisingly broad array of microfauna has been observed in the plants' digestive chambers, including ciliates, amoebae and soil mites. Here we show, through experiments and simulations, that _Genlisea_ exploit active matter physics to 'rectify' bacterial swimming and establish a local flux of bacteria through the structured environment of the rhizophyll towards the plant's digestion vesicle. In contrast, macromolecular digestion products are free to diffuse away from the digestion vesicle and establish a concentration gradient of carbon sources to draw larger microorganisms further inside the plant. Our experiments and simulations show that this mechanism is likely to be a localised one, and that no large-scale efflux of digested matter is present. Carnivorous plants are unusual organisms that have evolved to survive in nutrient-poor environments by trapping and digesting animal prey, typically insects. Most commonly this is to supplement their intake of soil macronutrients such as nitrogen and phosphorus [1; 2]. Carnivorous plants have adopted a range of prey-capture strategies: pitfall traps (_Sarracenia, Nepenthes, Heliamphora_), 'flypaper' traps (_Drosera, Pinguicula_) and suction traps (_Utricularia_) amongst others. Carnivory has also given insight into evolutionary biology. Not only is carnivory a rare trait, but there are several evolutionarily distinct lineages that have arrived at the same basic trapping principle -- for example, the sticky-leaved flypaper traps of _Byblis_ and _Drosophyllum_[3]. The genus _Genlisea_ is comparatively obscure, though it was included in Darwin's work 'Insectivorous Plants' in 1875 [4]. Approximately 30 extant species are distributed across tropical Africa, Central and South America, often in inaccessible and sparsely populated regions. The parts of _Genlisea_ spp. that lie above the surface are superficially unremarkable. The plant is marked by a rosette of small oblong or obovate green leaves, up to a centimeter or so in size. Extending beneath the soil from the main core of the plant are a series of white or translucent tube-like rhizophylls that possess a pronounced bulge (the vesicle) part-way along their length, and split into two twisted terminal structures (see Fig. 1a). From an anatomical point of view, _Genlisea_ is rootless [5]; these white appendages are underground leaves specially adapted for the capture and digestion of microorganisms. The interior of the rhizophyll is hollow from the vesicle to the spiral-shaped openings at its distal end. The hollow core is filled with rows of detentive hairs that point upwards towards the vesicle; these have been posited to act like an eel or lobster trap [6], allowing soil-dwelling organisms such as soil mites to pass inwards while making escape difficult. Glands, most densely clustered in the rhizophyll vesicle, secrete digestive enzymes [7; 8] which break down prey, and reabsorb nutrients through pores in the cuticle of the digestive glands. The interior milieu of the vesicle has been characterised as mucilaginous [9]. The manner in which _Genlisea_ traps its prey remains controversial.
Darwin noted that microorganisms entering the branches of the rhizophylls would find their egress prevented by the rows of detentive hairs, but stated that it is not clear what would entice the microorganisms to enter in the first place. Furthermore, he noticed that the digestive vesicles of the plant were filled with soil particles and other inorganic debris; this debris is often seen in older rhizophylls, and is too large to have arrived there by Brownian motion. Juniper _et al._[10] suggest that the plants actively pump fluid into their traps, similar to the closely related bladderworts (_Utricularia_ spp.), although at that time there were few studies of living plants, and the flow rates and concomitant energy consumption required to sustain flow seem prohibitive. Several authors have speculated upon whether the plant uses an attractant to lure prey into its traps. Barthlott _et al._ introduced ciliates tagged with \({}^{35}\)S and later found the tags had accumulated in the leaves of the plant [11]. Darnowski and Fritz [12] tested whether agar that had been placed near _Genlisea_ had absorbed any putative 'lure' chemical. Producing lure molecules would incur an energetic cost on the plant, discouraging their creation. In this work, we show that active matter physics principles play a hitherto unrecognised role in the flux of living matter into _Genlisea_. Over recent decades, Soft Matter Physics and Statistical Physics have provided useful insights into complex biological phenomena such as cell division, tissue morphogenesis and population dynamics [13; 14; 15]. Active Matter represents a fundamentally new non-equilibrium branch of soft condensed matter physics, studying out-of-equilibrium systems in which energy is supplied at the level of individual entities and translated into unidirectional motion, for example by 'active particles' [16], which dissipate energy while moving [17; 18]. Active systems give rise to unexpected collective phenomena not observed in equilibrium systems, from colonies of bacteria to flocks of birds [18; 19; 20]. Active particles can be synthetic (such as active colloids [21; 22; 23; 24; 25; 26]) or living (such as bacteria [27; 28; 29]). Microorganisms live in environments where viscous forces are orders of magnitude larger than inertial forces (_i.e._ low-Reynolds-number environments). Here, fluid motion is described by the time-independent Stokes equation. This gives rise to behaviour qualitatively different from the macroscopic case, and so analogies drawn between _Genlisea_ and human-scale eel traps or lobster pots are only superficial -- different physics obtains at the microscale. Bacterial cells (sizes \(\sim 10^{-6}\,\)m) often swim in a series of relatively straight runs (length \(\sim 10^{-4}-10^{-5}\,\)m) separated by reorientation events. Depending on the species, bacterial reorientations vary from deflections in swimming trajectory [30] through to reversals [31, 32, 33] or complete stops [34], during which time Brownian motion reorients cells. One of the most commonly observed swimming phenotypes is the 'run-and-tumble' motility observed in soil bacteria such as _Bacillus subtilis_ and enteric bacteria such as _Escherichia coli_. When _E. coli_ is confined in the presence of a microfabricated wall, Galajda and coworkers demonstrated that its motion is rectified by funnel-shaped openings [35].
This behaviour is qualitatively different from that of true Brownian particles [36] and from that of active polar particles [37] in the same confining geometry. We find that _Genlisea_ exploits phenomena observed in the study of active matter in the presence of obstacles to capture prey. We demonstrate how an unusual carnivorous plant has harnessed the rectification of bacterial motion to capture microorganisms, while allowing an attractive chemical gradient of organic molecules to form inside its traps. We first demonstrate that the presence of larger prey microorganisms enhances the transport of environmental debris into the traps, but show no evidence for a chemical 'lure' for these organisms. Next we demonstrate that _Genlisea_ can rectify bacterial swimming, that the geometry of the hairs within the plants is close to the optimal value for trapping, and that the hairs (rather than a proteinaceous mucilage) are likely the dominant contributor to trapping. Figure 1.a shows the gross morphology of an excavated plant, and Figure 1.b a scanning electron microscopy cross-section of a trap, showing the detentive hairs. To establish a mechanism for large debris to arrive in the vesicle under normal growth conditions (_rhizophylls_ grow downward, so material is transported against gravity), we used fluorescent particles, diameter \(15\upmu\)m, as probes (Fig. 1.c). Excised but intact _rhizophylls_ were placed in a sample chamber containing water and probe particles, both with (right-hand side panel) and without (left-hand side panel) ciliate microorganisms (_Paramecium multimicronucleatum_, with cell bodies around \(100\upmu\)m long and \(50\upmu\)m in diameter). To compare rhizophylls of different sizes, we normalised the data by dividing the number of beads counted in a trap (\(N_{beads}\)) by the area of the trap 'mouth' (\(A_{mouth}\)) to give a net flux of beads, as shown in Fig. 1.d: \(\Phi=N_{beads}/A_{mouth}\). The chosen particles were too large to be significantly transported by Brownian motion in the vertical direction. The presence of ciliates (_Paramecium multimicronucleatum_) gives a roughly three-fold increase in the number of particles in the vesicles (Fig. 1.c and d). Therefore, the presence of microorganisms is sufficient to transport inorganic material into the traps. We note that placing rhizophylls in the sample chambers, and removing them for particle counting, causes a small flow within the sample chamber, which appears to be responsible for some transport, albeit at a lower level: a few particles could be found in the rhizophylls irrespective of the presence of microorganisms. To investigate the presence or absence of a prey attractant, we built a T-maze choice assay of the type used by van Houten _et al._[38] (diagram in Fig. 1.e), and tested the reaction of ciliates, similar to those previously determined to be prey microorganisms [11], to soil eluate. Eluate from a pot containing a plant was placed in one test arm of the T-maze, and a control solution in the other. Negative controls were provided by eluate from bare media pots, and by placing eluate in both arms of the maze. A 5 mM solution of NH\({}_{4}\)Cl was used as a positive control [38]. 5 ml of distilled water containing _P. multimicronucleatum_ was placed in the final arm of the T-maze, and the stopcock opened. After 30 mins, the stopcock was closed and the ciliates in each arm were counted under a microscope at low magnification and dark field illumination.
The effect of the chemical stimulus was measured using the 'index of chemokinesis' (\(\chi\)), reported in Fig. 1.f: \[\chi=\frac{N_{test}}{N_{test}+N_{control}}, \tag{1}\] where \(N_{test}\) and \(N_{control}\) are the numbers of cells in the test and control arm of the T-maze, respectively. A value of \(\chi=1\) indicates a chemoattractant strong enough to draw all cells into the test arm of the choice chamber, and a value of \(\chi=0\) corresponds to a perfect repellent (all cells in the control arm). A value of \(\chi=0.5\) corresponds to the null result: the microorganisms are neither attracted nor repelled by the test substance (shown by a continuous line in Fig. 1.f). Figure 1: **Experiments showing transport of active and inactive matter.****a** Photograph of a _G. hispidula_ plant: Photosynthetic leaf _(L)_, digestive vesicle _(V)_, trap neck _(N)_, trap bifurcation _(B)_, and the characteristic ‘corkscrew’ structure containing the trap openings _(T)_. **b** A cut-away SEM image showing the interior of one of the traps. The detentive hairs point upwards along the rhizophyll’s central channel towards the digestive vesicle (\(100\,\mu\)m scale bar). **c** An epi-fluorescence image of \(15\,\mu\)m tracers inside the vesicle (left) and the trap neck (right) of a _G. hispidula_ rhizophyll. **d** Concentration of \(15\,\mu\)m fluorescent tracers (normalized by the trap’s open area) in the absence (red) or presence (blue) of ciliate prey (mean values in black, error bars show S.E.M.) **e** Chemotaxis assay chamber presenting different stimuli to planktonic ciliates (see text). **f** Chemokinesis coefficients. Each point corresponds to an experiment involving an initial loading of around 150 ciliates. Red boxes indicate 95% confidence intervals. The horizontal black line represents the null hypothesis. **g** A ‘cartoon’ of the bacterial rectification setup, with a section of rhizophyll trap neck connecting two chambers. **h** Number of cells at each end of the trap neck section after two hours (black data points represent mean and 95% CI). The values of \(\chi\) reported in Fig. 1.f show no clear evidence of a chemical lure for these ciliates.
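A minimal sketch (ours, not the authors' analysis code) of how \(\chi\) and an interval like those in Fig. 1.f can be computed; treating each ciliate's arm choice as an independent Bernoulli trial is our assumption, as the paper does not state how its confidence intervals were obtained:

```python
from scipy.stats import binomtest

# Sketch of the chemokinesis index of Eq. (1), with a 95% confidence
# interval from an exact binomial test (our modelling assumption: each
# ciliate's choice of arm is an independent Bernoulli trial).
def chemokinesis(n_test: int, n_control: int, conf: float = 0.95):
    chi = n_test / (n_test + n_control)
    ci = binomtest(n_test, n_test + n_control).proportion_ci(confidence_level=conf)
    return chi, (ci.low, ci.high)

# Illustrative counts for one run loading ~150 ciliates (placeholder values).
chi, (lo, hi) = chemokinesis(n_test=80, n_control=70)
print(f"chi = {chi:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")  # chi = 0.5 is the null
```

Previous studies of the carnivory of _Genlisea_ have focused on interactions with protozoa, but many smaller microorganisms exist in the soil alongside them. In a nutrient-poor environment, a broader prey spectrum increases the chances that nutritional requirements will be met. It has been estimated that there are between \(10^{4}\)-\(10^{5}\) protozoa per gram of soil [39], compared to around \(10^{8}\) bacteria in the same mass [40]. The biomasses of these categories are therefore likely to be similar. Rhizosphere microbiomes are complex [41], but the swimming phenotype of common model soil-dwelling and root-associated bacterial species such as _B. subtilis_ is a canonical run-and-tumble, quantitatively similar to _E. coli_[42]. The size and arrangement of detentive hairs within the rhizophylls is reminiscent of microfabricated devices used in previous studies of active matter systems [35, 43, 44]. Inspired by such results, we prepared chambers divided into two, with the halves bridged by excised rhizophylls (shown in 'cartoon' form in Fig. 1g). The rhizophylls were aligned in opposite directions in alternating channels, and both halves of the chambers filled with an initially uniform concentration of _E. coli_. After two hours, bacteria in a 2 mm\({}^{2}\) field of view at either end of each rhizophyll were counted.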
As shown in Figure 1.h, we observed an enrichment of 10-15% in the number of cells present at the end of the rhizophyll previously connected to the vesicle (as compared to the trap entrance). We use this information to guide numerical simulations that untangle the relative importance of different aspects of the trap structure. We simulated suspensions of run-and-tumble particles (disk-like, with diameter \(\sigma\) and propulsion speed \(v\)) confined in a channel. Figure 2.a represents typical trajectories of active particles in the funneled channel mimicking the plant's hairs, whose geometry has been tailored using parameters borrowed from experiments: the individual tracks are colour-coded to indicate time. Detentive hairs within the traps have been omitted for clarity, but their influence can be seen in the chevron-shaped deviations in the trajectories, which lead to the trap vesicle on the right. The particles' motion is characterised by a persistence length \(l_{p}=v\tau_{p}\), where \(\tau_{p}\) is the persistence time, or the time between reorientations (Fig. 2.b). Interactions with the detentive hairs are shown in Fig. 2.c, where \(\theta\) is the angle that the hairs make with the rhizophyll wall. Two-dimensional numerical simulations were carried out (see Supplementary Materials).
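The ingredients of this model are simple enough to sketch; the following minimal run-and-tumble implementation (ours, without the channel or hairs) illustrates the parameters \(v\), \(\tau_{p}\) and \(l_{p}=v\tau_{p}\) used below:

```python
import numpy as np

# Minimal 2D run-and-tumble sketch (ours; the published simulations also
# include the channel, hairs and WCA interactions). A particle moves at
# constant speed v and tumbles to a random new direction at rate 1/tau_p,
# giving a persistence length l_p = v * tau_p.
rng = np.random.default_rng(0)

def run_and_tumble(v=1.0, tau_p=1.0, dt=0.01, n_steps=10_000):
    pos = np.zeros((n_steps, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi)
    for i in range(1, n_steps):
        if rng.random() < dt / tau_p:  # tumble event
            theta = rng.uniform(0.0, 2.0 * np.pi)
        pos[i] = pos[i - 1] + v * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

traj = run_and_tumble()
print("net displacement after 10^4 steps:", np.linalg.norm(traj[-1]))
```

To quantify the trapping efficacy, we examine the rate of accumulation in the vesicle, as well as the final (steady state) fraction of cells located there. Figure 2.d shows the time dependence of cell accumulation, for cells with different swimming speeds (\(v=0.1\)-\(0.8\), as indicated in the legend). Intuitively, cells that swim faster accumulate more quickly in the vesicle. The 'toy model' invoked is clearly sufficient to allow trapping of bacteria in the vesicle: the vesicle comprises 10% of the length of the rhizophyll, but accommodates at least 15% of the cells, even in the case of the weakest trapping, rising to over 60% of cells for the strongest trapping. If the cells' tumble rate is held constant but their speed is increased, the persistence length \(l_{p}\) grows. This results in a more efficient rectification of the swimming behaviour, and therefore a greater fraction of cells accumulating within the vesicle. Nevertheless, the stochastic swimming process does offer cells a chance of escape. Figure 2.e shows the steady-state accumulation of cells in the vesicle, as a function of swimming speed. The accumulation saturates with around 65% of the cells in the vesicle, presumably limited by cells stochastically escaping the trap. Our microscopy studies, as well as those of other authors [5], show that the detentive hairs within the rhizophyll vary in length somewhat between species, but that the gap in the rhizophyll centre is roughly constant. We therefore chose to vary \(\theta\) and the hair length \(h\), while keeping \(h\sin\theta\) (their projection on the \(y\)-axis) constant. Figure 2.f shows that the trapping efficiency, expressed as the ratio of the number of trapped cells (\(N_{c}\)) to the total number of cells (\(N\)), increases as \(\theta\) decreases (filled symbols), up to a saturation level that is close to the value achieved by increasing the swimming speed (or persistence length). Although smaller values of \(\theta\) will lead to increased trapping, there is an energetic cost associated with growing longer hairs, which to a first approximation scales linearly with \(h\).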
For a constant value of \(h\sin\theta\), the trapping efficiency becomes \(N_{c}\sin\theta/N\), which is plotted against the second vertical axis of Fig. 2.f (empty symbols), showing a peak at around \(60^{\circ}\), close to the value observed in SEM studies. Figure 2: **Geometrical contributions to carnivorous trapping.****a** Trajectories of simulated bacteria within the rhizophyll of _Genlisea_, with dimensions taken from experiments. The detentive trap hairs have been omitted for clarity. The structures have three regions including an ‘exterior’ (no hairs), neck (with hairs), and vesicle (no hairs, but variable friction). **b** A ‘cartoon’ showing the characteristic swimming mode and parameters for the bacterium: size (\(\sigma\)), speed (\(v\)), persistence time (\(\tau_{p}\)) and persistence length (\(l_{p}=v\tau_{p}\)). **c** Subsection of a particle trajectory from panel **a** showing a simulated bacterium interacting with a hair. The hair lies at an angle \(\theta\) to the rhizophyll wall; black arrows show the direction of the bacterium’s travel. **d** The trapping efficiency \(N_{c}/N\) for fixed trap geometry and variable swimming speed. Both the trapping rate (initial gradient) and saturation occupancy (plateau level at late times) increase with \(v\); the time is indicated in non-dimensionalised simulation units. **e** The saturation level of \(N_{c}/N\) as a function of swimming speed. The escape probability decreases with \(l_{p}\). **f** Dependence of trapping efficiency on the hair angle \(\theta\), for hairs with a fixed projection on the \(y\)-axis (see text). Smaller angles lead to more efficient trapping, but at a higher cost of production. A first-order correction to these values that takes into account the extra cost of producing longer hairs is shown with green squares (right-hand vertical axis), and peaks around \(60^{\circ}\). **g** The effect of decreased mobility within the vesicle on trapping efficiency. There is no apparent evidence that increasing the friction coefficient in the vesicle increases trapping; the hairs are therefore the dominant contribution to prey trapping. Lastly, we investigated the effect of the mucilaginous plug that has been observed in the vesicle of species of _Genlisea_. It is difficult to determine definitively whether the mucilage is a product of the plant, or the consequence of the digestion of microbes. Nevertheless, its rheology may have consequences for the trapping behaviour of the plant. In a situation where the viscosity is variable, we anticipate that purely Brownian diffusing particles will have a distribution \(\rho(r)\) that is uniform, because the equilibrium distribution \(\rho(r)\sim\exp\left[-\frac{U(r)}{k_{B}T}\right]\) depends only on the potential \(U(r)\) and not on the local mobility. Conversely, swimming particles accumulate in regions of lower mobility. We therefore perform simulations in which the stomach region has increased friction \(\gamma\) (Fig. 2.g). These show that lower particle mobility has only a marginal trapping effect, and that the accumulation in the vesicle is governed by the geometry of the hairs in the rhizophyll channel. We have shown that plants in this fascinating genus constitute a naturally-occurring active matter rectifier, allowing them to increase their supply of nutrients in an otherwise nutrient-poor environment by trapping bacterial prey. Darwin's 'eel trap' description [4] was based on observations of microfauna that became stuck in _Genlisea_ rhizophylls, unable to escape.
We show that the same structures that trapped these arthropods are sufficient to guide bacteria -- orders of magnitude smaller -- to the plant's digestive vesicles. Moreover, we explain the presence of large soil particles in the vesicle without recourse to an active mechanism such as fluid flows (for which there is little evidence in this genus [45]), but find no evidence for a chemical lure, at least one suitable for attracting a common genus of ciliate. Although this plant is considered a 'true' carnivore due to its production of protease [46], carnivory is a spectrum, with genera such as _Roridula_ using symbiosis with insect commensals to facilitate prey digestion [47]. Our study raises the intriguing possibility of non- or quasi-carnivorous subterranean structures in a potentially wide range of other plants that use quirks of their morphology to sequester microorganisms that then die near the plant, releasing nutrients -- a web of subtle carnivory beneath our feet. ## Materials and Methods ### Plant Growth Experiments were conducted on _Genlisea hispidula_, with additional measurements on _Genlisea lobata_\(\times\)_violacea_ in the case of the chemoattractant tests. The plants were grown in 5 cm pots filled with a 1:1:1 peat-sand-perlite soil medium. Each pot was placed into an individual plastic cup within the terrarium to prevent the water outflow from each pot mixing together (necessary for the chemotaxis experiments). The plants were watered with distilled water, and placed on a 16 hour photoperiod under 'cool white' compact fluorescent lamps (color temperature 6500 K). During cultivation, a USB data logger was used to record the average temperature and relative humidity in the terrarium: around 24\({}^{\circ}\)C and 45% humidity during the day, and 19.5\({}^{\circ}\)C at night with 65% humidity. The microscopy experiments were conducted at the ambient temperature of the lab (away from the fluorescent lighting) of 21\(\pm\)1\({}^{\circ}\)C. The SEM image in Fig. 1b is from _G. hispidula_, prepared according to the protocols provided in previous studies [5; 9; 10]. ### Transport of debris into vesicles As previously stated, inorganic debris is found inside the digestive vesicle of _Genlisea_ spp., and this debris must be actively moved into the traps against gravity. The most likely agent seems to be the prey animals that live in the surrounding environment. To test the hypothesis that prey animals push inorganic material into the traps, several plantlets of _G. hispidula_ were excavated and washed thoroughly with distilled water. Around six rhizophylls were excised from the plants and cut above the digestive vesicle so that the hollow rhizophyll channel was maintained. The plants and excised traps were placed in separate petri dishes, which were divided into two groups, test and control. Petri dishes in the test group contained a suspension of fluorescent beads (diameter 15 \(\upmu\)m, 'dragon green', Bangs Labs Inc.) mixed with a culture of the ciliate _Paramecium multimicronucleatum_ (Carolina Biological Supply Co.); the control group of petri dishes contained only a suspension of beads (at the same concentration as the test dishes).
The particles that we use are too large to be significantly transported by Brownian motion in the vertical direction; we would expect their concentration to decrease exponentially against gravity, with the characteristic length scale given by Perrin [48], \(l_{c}=k_{B}T/m^{*}g\), where \(k_{B}T\) is the thermal energy, \(m^{*}\) is a particle's buoyant mass and \(g\) is the acceleration due to gravity. For particles the size of ours, \(l_{c}\approx 5\mu\)m, so we expect the fluorescent tracers to lie more or less in a layer on the bottom surface of the petri dish, though they are free to diffuse in the horizontal plane.
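A minimal numerical sketch of this estimate (ours; the bead-water density mismatch below is an assumed illustrative value, not one quoted in the paper, and \(l_{c}\) is extremely sensitive to it):

```python
import math

# Perrin sedimentation length l_c = k_B * T / (m* g), where m* is the
# buoyant mass. The density mismatch delta_rho is an assumption of this
# sketch; l_c varies inversely with it.
def perrin_length(diameter_m: float, delta_rho_kg_m3: float, T: float = 294.0) -> float:
    k_B, g = 1.380649e-23, 9.81
    volume = math.pi * diameter_m**3 / 6.0
    buoyant_mass = volume * delta_rho_kg_m3   # m* = V * delta_rho
    return k_B * T / (buoyant_mass * g)

# For a 15 um sphere, the quoted l_c ~ 5 um corresponds to a bead very
# nearly density-matched to water (delta_rho ~ 0.05 kg/m^3, assumed here):
print(perrin_length(15e-6, delta_rho_kg_m3=0.05), "m")
```

After 7 days, plants and traps were washed extensively with distilled water and analyzed under a fluorescence microscope. The number of beads found inside each trap was recorded. As can be seen in Fig. 1a, the rhizophylls from a single plant are often of different sizes, with different trap thicknesses and lengths. To make a valid comparison between rhizophylls, we normalize the number of beads each trap ingested by its effective opening ('mouth') size, giving a total flux of \(\Phi\) beads/mm\({}^{2}\). The trap shapes are rather complicated, so we modeled each trap arm as a cylinder with an opening fraction per unit surface area, and the 'mouth' region at the trap bifurcation as an open rectangle. The dimensions of the trap were determined from digital photographs. ### Presence of a chemoattractant To test for the presence or absence of a soluble chemoattractant produced by the plant, we initially attempted some holographic particle tracking [31; 49] to see if the swimming patterns of _P. multimicronucleatum_ were modified when swimming close to the rhizophyll. These results were inconclusive so instead we performed a T-maze choice assay of the type used by van Houten _et al._[38] (as in Fig. 1 **e**). Five pots containing _G. hispidula_ plants, five containing _G. lobata_\(\times\)_violacea_, and five pots containing bare potting media without _Genlisea_ were kept in the conditions described above. Each pot was placed in an individual plastic cup so that the water outflow from the pots could not mix together. After 3 months, each pot was washed through with around 50 ml of distilled water, and the eluate that accumulated in the plastic cups was collected. The eluate from a pot containing a plant was placed in one test arm of the T-maze, and a control solution in the other. Negative controls were provided by the eluate from bare media pots, and by comparing plant solutions to themselves. A 5 mM solution of NH\({}_{4}\)Cl was used as a positive control [38]. 5 ml of distilled water containing _P. multimicronucleatum_ was placed in the final arm of the T-maze, and the stopcock opened. After 30 mins, the stopcock was closed and the ciliates in each arm were counted under a microscope at low magnification and dark field illumination. The effect of the chemical stimulus was measured using the index of chemokinesis \(\chi\) (Eq. 1). ### Bacterial rectification To demonstrate that the rhizophyll 'neck' is capable of rectifying bacterial swimmers, an assay was developed as pictured in Fig. 1g. Channels measuring 50 mm \(\times\) 4 mm \(\times\) 3 mm were constructed from UV-curing glue and glass slides. 2 cm sections of trap 'necks' cut from _G. hispidula_ were carefully washed in DI water and placed in the channels. These were sealed in place using UV curing glue to give an external barrier between the ends of the neck sections. The chambers were then filled with a suspension of _E.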
coli_ bacteria in tryptone broth as described previously [50]. The suspension initially occupied both halves of the chamber, with bacteria at a uniform concentration. After a period of 2 hours, the density of bacteria immediately adjacent to the neck openings was measured. ### Simulation details To explore the hypothesis of living particles being trapped inside the rhizophylls, we numerically study the relationship between the motion of prey organisms and the hair geometry. Based on the experimental observation of [35], our model consists of a two-dimensional system of \(N\) run-and-tumble particles. The active particles of diameter \(\sigma\) are kept in a two-dimensional closed geometry which consists of a channel with hairs of identical shape and orientation placed at regular intervals. Channel hairs are composed of non-motile circles, while the channel's walls are directly simulated as straight boundaries. This geometry (shown in 'cartoon' form in Fig. 2a) is a simplified representation of a rhizophyll. Particles move due to self-propulsion in a straight line at speed \(v\) during a period of time \(\tau_{p}\), after which they randomly change their direction of motion, but not their speed. Interactions between particles and with the walls of the channel are solved via a Molecular Dynamics algorithm computing the interacting forces between particles using a WCA potential (see Supplementary Material for more details). We consider a channel of width \(L_{y}=25\sigma_{0}\) and length \(L_{x}=775\sigma_{0}\), containing particles of diameter \(\sigma_{0}\) at a number density \(\rho=0.1\sigma_{0}^{-2}\). The distance between consecutive hairs along the walls is \(d=25\sigma_{0}\). The opening between opposing hairs in the channel is \(3\sigma_{0}\). In Fig. 2.e, the values used for the simulations are \(\theta=60^{\circ}\) (tilt angle of the hairs with respect to the longitudinal axis of the channel), \(v=1.0\sigma_{0}/\tau_{0}\), \(\tau_{p}=1.0\tau_{0}\), \(k_{B}T=1.0\epsilon_{0}\), \(\gamma_{0}=1.0\,m_{0}/\tau_{0}\), \(m=m_{0}\), where \(\tau_{0}\), \(\epsilon_{0}\), \(m_{0}\) and \(\sigma_{0}\) are Lennard-Jones units for time, energy, mass and distance, related as \(\epsilon_{0}=m_{0}\sigma_{0}^{2}/\tau_{0}^{2}\). The channel is divided into three sectors: mouth (\(0<x<75\sigma_{0}\)), root (\(75\sigma_{0}<x<700\sigma_{0}\)) and stomach (\(700\sigma_{0}<x<775\sigma_{0}\)) in every simulation. The rightmost section is considered as the stomach and the particles lying there are considered trapped, for counting purposes.
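A minimal sketch (ours) of the WCA pair force referred to above, i.e. the purely repulsive Lennard-Jones interaction truncated and shifted at its minimum \(r=2^{1/6}\sigma\):

```python
import numpy as np

# Sketch of the WCA (purely repulsive, truncated-shifted Lennard-Jones)
# pair force used to resolve particle-particle and particle-hair overlaps;
# epsilon and sigma are the Lennard-Jones units defined above.
def wca_force(r_vec, epsilon=1.0, sigma=1.0):
    r2 = np.dot(r_vec, r_vec)
    r_cut2 = (2.0 ** (1.0 / 6.0) * sigma) ** 2   # WCA cutoff at 2^(1/6) sigma
    if r2 >= r_cut2:
        return np.zeros_like(r_vec)
    sr6 = (sigma**2 / r2) ** 3
    # F = 24 * eps * (2 (sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2 * r_vec

print(wca_force(np.array([1.0, 0.0])))  # repulsive along +x, since r < 2^(1/6)
```

###### Acknowledgements. The authors would like to thank John Chervinsky for assistance in producing the SEM images. This work was funded by the Rowland Institute at Harvard (RZ and LGW) and the CAPES Science Without Borders Program (CBG, Process Number: 7340/11-7). C.V. acknowledges funding EUR2021-122001, PID2019-105343GB-I00, IHRC22/00002 and PID2022-140407NB-C21 from MINECO.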
2308.12004
Lifting Klein-Gordon/Einstein Solutions to General Nonlinear Sigma-Models: the Wormhole Example
We describe a simple technique for generating solutions to the classical field equations for an arbitrary nonlinear sigma-model minimally coupled to gravity. The technique promotes an arbitrary solution to the coupled Einstein/Klein-Gordon field equations for a single scalar field $\sigma$ to a solution of the nonlinear sigma-model for $N$ scalar fields minimally coupled to gravity. This mapping between solutions does not require there to be any target-space isometries and exists for every choice of geodesic computed using the target-space metric. In some special situations -- such as when the solution depends only on a single coordinate (e.g. for homogeneous time-dependent or static spherically symmetric configurations) -- the general solution to the sigma-model equations can be obtained in this way. We illustrate the technique by applying it to generate Euclidean wormhole solutions for multi-field sigma models coupled to gravity starting from the simplest Giddings-Strominger wormhole, clarifying why in the wormhole case Minkowski-signature target-space geometries can arise. We reproduce in this way the well-known axio-dilaton string wormhole and we illustrate the power of the technique by generating simple perturbations to it, like those due to string or $\alpha'$ corrections.
Philippe Brax, C. P. Burgess, F. Quevedo
2023-08-23T08:46:48Z
http://arxiv.org/abs/2308.12004v1
# Lifting Klein-Gordon/Einstein Solutions to General Nonlinear Sigma-Models: the Wormhole Example ###### Abstract We describe a simple technique for generating solutions to the classical field equations for an arbitrary nonlinear sigma-model minimally coupled to gravity. The technique promotes an _arbitrary_ solution to the coupled Einstein/Klein-Gordon field equations for a single scalar field \(\sigma\) to a solution of the nonlinear sigma-model for \(N\) scalar fields minimally coupled to gravity. This mapping between solutions does not require there to be any target-space isometries and exists for every choice of geodesic computed using the target-space metric. In some special situations - such as when the solution depends only on a single coordinate (_e.g._ for homogeneous time-dependent or static spherically symmetric configurations) - the general solution to the sigma-model equations can be obtained in this way. We illustrate the technique by applying it to generate Euclidean wormhole solutions for multi-field sigma models coupled to gravity starting from the simplest Giddings-Strominger wormhole, clarifying why in the wormhole case Minkowski-signature target-space geometries can arise. We reproduce in this way the well-known axio-dilaton string wormhole and we illustrate the power of the technique by generating simple perturbations to it, like those due to string or \(\alpha^{\prime}\) corrections. ###### Contents * 1 Motivation * 2 Mechanism * 2.1 A broad class of sigma-model solutions * 2.2 Two-field special case * 3 Applications to wormholes * 3.1 Giddings-Strominger wormhole * 3.2 Timelike target-space geodesics * 3.3 2D Kahler target space ## 1 Motivation Although gravity is famously the weakest known force it is also the one most feared in everyday mishaps. The astronomical number of particles in even everyday macroscopic objects builds a measurable gravitational force out of the extremely feeble gravitational pull of each particle separately, because gravitational interactions of complicated sources simply add up rather than screen. The fact that such feeble forces are detectable makes precision tests of gravity [1] very sensitive probes for the existence of other new fields mediating macroscopic classical forces, even if these also couple to ordinary matter at the particle level with only gravitational strength. This makes these tests particularly sensitive to the existence of any light bosons [2; 3] whose coupling-to-mass ratio is the same for macroscopic objects as for microscopic particles. The resulting constraints are poison for the many cosmological models that would otherwise like to use light, gravitationally coupled scalars to solve problems in late-time cosmology (as has been known for many years [4; 5]). The continued cosmological appeal of these models has led to a broad search for new types of scalar-matter interactions for which a scalar's couplings to macroscopic objects can be much smaller than - _i.e._ 'screened' relative to - their per-particle couplings [6; 7; 8; 9; 10; 11] (for reviews see [12; 13; 14]). A recent line of this research similarly seeks macroscopic screening due to the mutual interactions of multiple scalars [15; 16; 17]. A practical problem when analyzing the constraints in these models is explicitly computing the scalar fields that might be expected around a macroscopic source, since this requires solving a coupled set of nonlinear partial differential equations in which the mutual scalar interactions play a prominent role.
That is, the kinetic part of the action for a general collection of scalars \[S_{\rm kin}=-\frac{f^{2}}{2}\int{\rm d}^{4}x\,\sqrt{-g}\;{\cal G}_{ab}(\phi)\,\partial_{\mu}\phi^{a}\,\partial^{\mu}\phi^{b}\,, \tag{1}\] is characterized by a target-space metric \({\cal G}_{ab}(\phi)\) (and overall normalization scale \(f\)), whose presence complicates solving the field equations found by varying \(S_{\rm EH}+S_{\rm kin}\) with respect to \(g_{\mu\nu}\) and \(\phi^{a}\) (where \(S_{\rm EH}\) is the usual Einstein-Hilbert action).1 Progress in understanding screening requires finding solutions to these equations, though explicit solutions are known only for simple special choices for \({\cal G}_{ab}\). One such choice is the 2D \(SL(2,R)\)-invariant target-space metric \[{\cal G}_{ab}\,{\rm d}\phi^{a}\,{\rm d}\phi^{b}=\frac{3}{4}\left(\frac{{\rm d}\tau^{2}+{\rm d}{\sf a}^{2}}{\tau^{2}}\right) \tag{2}\] considered in [15, 16], for which the solutions seemed to rely on the existence of target-space isometries in an important way. Our purpose here is to describe a very general mechanism for generating exact solutions to the classical field equations for multiple gravitating scalar fields that are coupled to one another through an essentially arbitrary target-space metric \({\cal G}_{ab}(\phi)\). For spherically symmetric (or homogeneously rolling) configurations the solution-generating technique we describe gives the general solution to the field equations, and agrees with the earlier solutions of [15, 16] once restricted to their specific couplings. We also use our technique to rederive and generalize known axio-dilaton wormhole solutions. ## 2 Mechanism The procedure starts with _any_ solution, \(\{\sigma(x),\mathfrak{g}_{\mu\nu}(x)\}\), to the coupled Einstein/Klein-Gordon field equations for a single real scalar field, \[\Box\sigma=0\qquad\mbox{and}\qquad M_{p}^{2}{\cal R}_{\mu\nu}+f^{2}\partial_{\mu}\sigma\,\partial_{\nu}\sigma=0\,, \tag{3}\] and maps it onto a general solution, \(\{\phi^{a}(x),g_{\mu\nu}\}\), to the equations that follow from adding \(S_{\rm kin}\) from (1) to the Einstein-Hilbert action. The metric and scalar fields in the sigma-model case are given explicitly by \[g_{\mu\nu}=\mathfrak{g}_{\mu\nu}\qquad\mbox{and}\qquad\phi^{a}(x)=\phi^{a}[\sigma(x)]\,, \tag{4}\] where the functions \(\phi^{a}(\mathfrak{s})\) describe _any_ geodesic of the target-space metric \({\cal G}_{ab}\) parameterized by arc-length \(\mathfrak{s}\) along the curve (as measured using \({\cal G}_{ab}\)). The construction works equally well in any spacetime dimension or in Euclidean or Minkowski signature (with Euclideanization leading to a few complications, as described in §3). For example, specializing to 4D and using spherically symmetric configurations \(\sigma=\sigma(r)\) in the equations (3) leads to the solutions [18, 19] \[g_{\mu\nu}\,{\rm d}x^{\mu}{\rm d}x^{\nu}=-f(r)\,{\rm d}t^{2}+\frac{{\rm d}r^{2}}{f(r)}+h(r)\left({\rm d}\theta^{2}+\sin^{2}\theta\,{\rm d}\varphi^{2}\right) \tag{5}\] with \[f(r)=\left(1-\frac{r_{0}}{r}\right)^{\delta}\,,\quad h(r)=r^{2}\left(1-\frac{r_{0}}{r}\right)^{1-\delta}\quad\mbox{and}\quad e^{\sqrt{2}\,f\sigma(r)/M_{p}}=\left(1-\frac{r_{0}}{r}\right)^{\gamma}\,, \tag{6}\] where the parameters \(\delta\) and \(\gamma\) both lie in the interval \((0,1)\) because they must satisfy \(\delta^{2}+\gamma^{2}=1\).
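Before lifting this seed solution it is easy to check directly: for the metric (5) one has \(\sqrt{-g}=h(r)\sin\theta\) and \(g^{rr}=f(r)\), so \(\Box\sigma=0\) reduces to \(\partial_{r}\big{[}h(r)f(r)\,\sigma^{\prime}(r)\big{]}=0\). A minimal symbolic sketch of this check (ours, not the authors' code):

```python
import sympy as sp

# Symbolic check (our sketch) that the profile (6) solves Box(sigma) = 0
# for the metric (5): h(r) * f(r) * sigma'(r) must be r-independent.
r, r0, delta, gamma, fc, Mp = sp.symbols('r r_0 delta gamma f M_p', positive=True)

f = (1 - r0 / r) ** delta
h = r**2 * (1 - r0 / r) ** (1 - delta)
sigma = gamma * Mp / (sp.sqrt(2) * fc) * sp.log(1 - r0 / r)  # from e^{sqrt(2) f sigma/M_p}

flux = sp.simplify(h * f * sp.diff(sigma, r))
print(flux)               # the constant gamma*M_p*r_0/(sqrt(2)*f)
print(sp.diff(flux, r))   # 0, i.e. Box(sigma) = 0
```

The Einstein equations then impose only the quoted constraint \(\delta^{2}+\gamma^{2}=1\).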
The corresponding sigma-model solution has precisely the same metric (5) and (6) with \(\phi^{a}=\phi^{a}[\sigma(r)]\) obtained by replacing \(\mathfrak{s}\to\sigma(r)\) in any of the target space's geodesics \(\phi^{a}(\mathfrak{s})\). In the specific \(SL(2,R)\)-invariant case considered in [15], where \({\cal G}_{ab}\) is given by (2), the geodesics are given explicitly by \[\tau(\sigma)=\frac{\beta}{\cosh\sigma}\qquad\mbox{and}\qquad\mathfrak{a}(\sigma)=\mathfrak{a}_{0}-\beta\tanh\sigma\,. \tag{7}\] Exact solutions to the full Einstein/sigma-model field equations are then given by using in these expressions \(\sigma\to\sigma(r)\) given by (4). These solutions generalize the weak-field solutions of [15] to the domain of strong gravitational fields. The explicit and constructive nature of the solution lends itself to a broad class of applications. For instance, for supersymmetric sigma-models in 4D the scalar fields are typically complex, \(\{\phi^{i},\bar{\phi}^{\bar{i}}\}\), with a target-space metric that is Kahler and so \({\cal G}_{i\bar{j}}=\partial_{i}\partial_{\bar{j}}K\) for some Kahler potential \(K(\phi,\bar{\phi})\). Geodesics in this case satisfy the equations \(\ddot{\phi}^{i}+{\cal G}^{i\bar{j}}\partial_{k}\partial_{l}\partial_{\bar{j}}K\,\dot{\phi}^{k}\dot{\phi}^{l}=0\). In many applications (such as to string compactifications) \(K\) is given in successive approximations, but for any given accuracy of \(K\) the above construction of explicit classical solutions goes through (we describe a simple example below). The rest of this section proves the fundamental assertion underlying the above solution-generating technique. This is followed in §3 by its application to the construction of wormhole solutions for several types of sigma models coupled to gravity. ### A broad class of sigma-model solutions This section defines the system whose field equations are to be solved and describes a broad class of vacuum solutions to them. #### 2.1.1 Action and field equations Consider a general sigma model containing \(N\) fields, \(\phi^{a}\), with a target space metric \({\cal G}_{ab}(\phi)\), in terms of which target-space proper distance is given by \[{\rm d}\mathfrak{s}^{2}={\cal G}_{ab}(\phi)\,{\rm d}\phi^{a}\,{\rm d}\phi^{b}\,. \tag{6}\] We imagine the scalar self-interactions to be governed by the sigma model based on this metric, with Einstein-frame lagrangian density \[{\cal L}=-\sqrt{-g}\left[\frac{M_{p}^{2}}{2}{\cal R}+\frac{f^{2}}{2}\,{\cal G}_{ab}(\phi)\,\partial_{\mu}\phi^{a}\,\partial^{\mu}\phi^{b}\right]\,, \tag{7}\] for which the scalar field equations are \[\partial_{\mu}\Big{(}\sqrt{-g}\,{\cal G}_{ab}\,\partial^{\mu}\phi^{b}\Big{)}-\frac{1}{2}\,\sqrt{-g}\,\partial_{a}{\cal G}_{bc}\,\partial_{\mu}\phi^{b}\,\partial^{\mu}\phi^{c}=\sqrt{-g}\;{\cal G}_{ab}\Big{[}\Box\phi^{b}+\Gamma^{b}_{cd}\,\partial_{\mu}\phi^{c}\,\partial^{\mu}\phi^{d}\Big{]}=0\,, \tag{8}\] with \(\Gamma^{a}_{bc}(\phi)\) being the Christoffel symbols built from the metric \({\cal G}_{ab}\). The trace-reversed Einstein equations similarly are \[{\cal R}_{\mu\nu}+\frac{f^{2}}{M_{p}^{2}}\;{\cal G}_{ab}\,\partial_{\mu}\phi^{a}\,\partial_{\nu}\phi^{b}=0\,. \tag{9}\] #### 2.1.2 Geodesic solutions There is a very general solution to these equations obtained by specializing to \(\phi^{a}(\sigma)\) lying along a curve in the target space along which the single variable \(\sigma(x)\) varies in spacetime.
When this is true the derivatives become \[\partial_{\mu}\phi^{a}=\dot{\phi}^{a}\,\partial_{\mu}\sigma\quad\mbox{and}\quad\partial_{\mu}\partial_{\nu}\phi^{a}=\ddot{\phi}^{a}\,\partial_{\mu}\sigma\,\partial_{\nu}\sigma+\dot{\phi}^{a}\,\partial_{\mu}\partial_{\nu}\sigma\,, \tag{10}\] where over-dots denote differentiation with respect to \(\sigma\) along the curve. These imply \[\sqrt{-g}\;\Box\phi^{a}=\partial_{\mu}\Big{(}\sqrt{-g}\;\partial^{\mu}\phi^{a}\Big{)}=\partial_{\mu}\Big{(}\sqrt{-g}\;\dot{\phi}^{a}\,\partial^{\mu}\sigma\Big{)}=\sqrt{-g}\Big{(}\ddot{\phi}^{a}\partial_{\mu}\sigma\,\partial^{\mu}\sigma+\dot{\phi}^{a}\,\Box\sigma\Big{)}\,, \tag{11}\] and so the left-hand sides of the scalar field equations (8) become \[\Box\phi^{a}+\Gamma^{a}_{bc}\partial_{\mu}\phi^{b}\,\partial^{\mu}\phi^{c}=\Big{(}\ddot{\phi}^{a}+\Gamma^{a}_{bc}\,\dot{\phi}^{b}\dot{\phi}^{c}\Big{)}\partial_{\mu}\sigma\,\partial^{\mu}\sigma+\dot{\phi}^{a}\,\Box\sigma\,. \tag{13}\] A sufficient condition for the field equation to be satisfied therefore is to choose the curve \(\phi^{a}(\sigma)\) to be any affinely parameterized geodesic, for which \[\ddot{\phi}^{a}+\Gamma^{a}_{bc}\,\dot{\phi}^{b}\dot{\phi}^{c}=0\,, \tag{14}\] provided we also take the spacetime-dependence of \(\sigma(x)\) to satisfy \(\Box\sigma=0\). The Einstein equations also simplify for these solutions. Parameterizing the geodesic using arc length along the curve, \(\sigma=\mathfrak{s}\), implies (9) becomes \[\mathcal{R}_{\mu\nu}+\frac{f^{2}}{M_{p}^{2}}\;\mathcal{G}_{ab}\,\dot{\phi}^{a}\,\dot{\phi}^{b}\partial_{\mu}\sigma\,\partial_{\nu}\sigma=\mathcal{R}_{\mu\nu}+\frac{f^{2}}{M_{p}^{2}}\;\partial_{\mu}\sigma\,\partial_{\nu}\sigma=0\,, \tag{15}\] where the first equality uses \(\mathcal{G}_{ab}\dot{\phi}^{a}\dot{\phi}^{b}=1\) for any curve parametrised by its arc-length \(\mathfrak{s}\). Any target-space geodesic promotes any solution \(\mathfrak{s}(x)\) to the coupled Einstein/Klein-Gordon equations for a massless field into a fully nonlinear solution to the coupled sigma-model/Einstein equations. This construction does not usually provide the general solution to these field equations, but does so in the special case that we seek solutions depending only on a single coordinate - _e.g._ spherical symmetry, \(\mathfrak{s}=\mathfrak{s}(r)\), or homogeneous time evolution, \(\mathfrak{s}=\mathfrak{s}(t)\) - since in this case the solution is determined by the initial values of the fields and their first derivatives in this one coordinate direction and these map perfectly onto the choices \(\phi^{a}_{0}\) and \(\dot{\phi}^{a}_{0}\) that specify a geodesic. If the target space has symmetries, the existence of conservation laws along the geodesics makes it simpler to solve for them explicitly. For instance suppose there is a target-space Killing vector \(X^{a}(\phi)\) satisfying \[\nabla_{a}X_{b}+\nabla_{b}X_{a}=0 \tag{16}\] where \(X_{a}:=\mathcal{G}_{ab}\,X^{b}\). Then \[\frac{\mathrm{d}}{\mathrm{d}\mathfrak{s}}\Big{(}X_{a}\dot{\phi}^{a}\Big{)}=\partial_{c}X_{a}\dot{\phi}^{c}\dot{\phi}^{a}+X_{a}\ddot{\phi}^{a}=\Big{(}\nabla_{c}X_{a}\Big{)}\dot{\phi}^{c}\dot{\phi}^{a}+X_{a}\Big{(}\ddot{\phi}^{a}+\Gamma^{a}_{bc}\dot{\phi}^{b}\,\dot{\phi}^{c}\Big{)}=0\,, \tag{17}\] and so both \(X_{a}\dot{\phi}^{a}\) and \(\mathcal{G}_{ab}\,\dot{\phi}^{a}\dot{\phi}^{b}\) are functions involving only first derivatives of \(\phi^{a}\) that are constants along the geodesic.
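As a concrete check of the geodesic condition (14), the \(SL(2,R)\) curves quoted in (7) can be verified symbolically; the overall factor \(3/4\) in the metric (2) drops out of the Christoffel symbols, and \(\sigma\) parameterises the curve at constant speed, hence affinely. A minimal sketch (ours):

```python
import sympy as sp

# Sketch verifying that tau = beta/cosh(sigma), a = a0 - beta*tanh(sigma)
# satisfies the geodesic equation of ds^2 ~ (dtau^2 + da^2)/tau^2, whose
# nonzero Christoffel symbols give:
#   tau'' - (tau'^2 - a'^2)/tau = 0   and   a'' - 2 tau' a'/tau = 0.
s, beta, a0 = sp.symbols('sigma beta a_0', positive=True)

tau = beta / sp.cosh(s)
a = a0 - beta * sp.tanh(s)

eq_tau = sp.diff(tau, s, 2) - (sp.diff(tau, s) ** 2 - sp.diff(a, s) ** 2) / tau
eq_a = sp.diff(a, s, 2) - 2 * sp.diff(tau, s) * sp.diff(a, s) / tau
print(sp.simplify(eq_tau), sp.simplify(eq_a))   # 0 0
```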
### Two-field special case For concreteness' sake this section explores in detail the simplest case, for which there are two scalars, \(\{\phi^{a}\}=\{\phi,\mathfrak{a}\}\). For simplicity (and with axion applications in mind) we take a target space metric for which one direction is a would-be axion, with a shift symmetry of the target space metric, and the most general metric consistent with this assumption can be written \[\mathrm{d}\mathfrak{s}^{2}=\mathrm{d}\phi^{2}+W^{2}(\phi)\,\mathrm{d}\mathfrak{a}^{2} \tag{18}\] through an appropriate choice of field variables. Stability requires that \(W\) does not vanish for any \(\phi\). The gravity-scalar action in these variables then is \[S=-\frac{1}{2}\int\mathrm{d}^{4}x\,\sqrt{-g}\,\Big{\{}M_{p}^{2}\,\mathcal{R}+f^{2}\Big{[}(\partial\phi)^{2}+W^{2}(\phi)\,(\partial\mathfrak{a})^{2}\Big{]}\Big{\}}\,, \tag{19}\] leading to the scalar field equations \[f^{2}\,\nabla_{\mu}\Big{(}W^{2}\,\partial^{\mu}\mathfrak{a}\Big{)}=0\qquad\mbox{and}\qquad f^{2}\Big{[}\square\phi-WW^{\prime}\;(\partial\mathfrak{a})^{2}\Big{]}=0\,, \tag{19}\] and the trace-reversed Einstein equation \[\mathcal{R}_{\mu\nu}+\frac{f^{2}}{M_{p}^{2}}\Big{[}\partial_{\mu}\phi\,\partial_{\nu}\phi+W^{2}\,\partial_{\mu}\mathfrak{a}\,\partial_{\nu}\mathfrak{a}\Big{]}=0\,. \tag{20}\] In particular, solutions with constant \(\phi=\phi_{0}\) are only consistent with nonzero \((\partial\mathfrak{a})^{2}\) if \(W^{\prime}(\phi_{0})=0\). #### 2.2.1 The geodesic solutions We now display the solutions to these field equations (19) and (20) corresponding to evaluating along a target-space geodesic. #### Target-space geodesics If we use \(\phi\) as the coordinate along a curve then the target-space distance \(\mathfrak{s}\) between two points measured along a curve \(\mathfrak{a}(\phi)\) is given in terms of the target-space metric (18) by \[\mathfrak{s}=\int_{\phi_{0}}^{\phi_{1}}\mathrm{d}\phi\;\sqrt{1+W^{2}\,(\mathfrak{a}^{\prime})^{2}} \tag{21}\] where \(\mathfrak{a}^{\prime}=\mathrm{d}\mathfrak{a}/\mathrm{d}\phi\). Geodesics are the curves for which this is stationary against small variations of the function \(\mathfrak{a}(\phi)\), and so \[\frac{\mathrm{d}}{\mathrm{d}\phi}\left[\frac{W^{2}\mathfrak{a}^{\prime}}{\sqrt{1+W^{2}(\mathfrak{a}^{\prime})^{2}}}\right]=0\quad\text{which implies}\quad\frac{W^{2}\mathfrak{a}^{\prime}}{\sqrt{1+W^{2}(\mathfrak{a}^{\prime})^{2}}}=C \tag{22}\] for some integration constant \(C\). Choosing the convention \(W>0\) implies \(\mathfrak{a}(\phi)\) satisfies \[\frac{\mathrm{d}\mathfrak{a}}{\mathrm{d}\phi}=\frac{C}{W\sqrt{W^{2}-C^{2}}}\quad\text{and so}\quad\frac{\mathrm{d}\mathfrak{s}}{\mathrm{d}\phi}=\sqrt{1+W^{2}(\mathfrak{a}^{\prime})^{2}}=\frac{W}{\sqrt{W^{2}-C^{2}}}\,. \tag{23}\] Switching to \(\mathfrak{s}\) as the parameter along the curve - so \(\mathfrak{a}=\mathfrak{a}(\mathfrak{s})\) and \(\phi=\phi(\mathfrak{s})\) - then gives \[\frac{\mathrm{d}\mathfrak{a}}{\mathrm{d}\mathfrak{s}}=\frac{C}{W^{2}}\quad\text{and}\quad\frac{\mathrm{d}\phi}{\mathrm{d}\mathfrak{s}}=\sqrt{1-\frac{C^{2}}{W^{2}}}\,. \tag{24}\] These can be integrated explicitly once \(W(\phi)\) is specified.
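For choices of \(W(\phi)\) without closed-form geodesics, the system (24) integrates readily by standard numerical means; a minimal sketch (ours), with the illustrative choice \(W(\phi)=\cosh\phi\) so that \(W^{2}>C^{2}\) along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch integrating the geodesic system (24): d(a)/ds = C/W^2 and
# d(phi)/ds = sqrt(1 - C^2/W^2). W(phi) = cosh(phi) is our illustrative
# choice, not one taken from the paper.
C = 0.8
W = np.cosh

def rhs(s, y):
    phi, a = y
    W2 = W(phi) ** 2
    return [np.sqrt(max(1.0 - C**2 / W2, 0.0)), C / W2]

sol = solve_ivp(rhs, (0.0, 10.0), y0=[0.5, 0.0], rtol=1e-8)
phi, a = sol.y
print(phi[-1], a[-1])   # endpoint of the arc-length-parameterised geodesic
```

Consider now allowing \(\mathfrak{s}=\mathfrak{s}(x)\) to vary in space and time.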
Chasing through the definitions shows \(\mathfrak{a}(\mathfrak{s})\) inherits the derivatives \[\partial_{\mu}\Big{(}\sqrt{-g}\;W^{2}\partial^{\mu}\mathfrak{a}\Big{)}=C\partial_{\mu}\Big{(}\sqrt{-g}\;\partial^{\mu}\mathfrak{s}\Big{)} \tag{25}\] and \(\phi(\mathfrak{s})\) similarly satisfies \[\square\phi=\frac{C^{2}W^{\prime}}{W^{3}}\,(\partial\mathfrak{s})^{2}+\square\mathfrak{s}\,\sqrt{1-\frac{C^{2}}{W^{2}}}=WW^{\prime}\,(\partial\mathfrak{a})^{2}+\square\mathfrak{s}\,\sqrt{1-\frac{C^{2}}{W^{2}}}\,. \tag{26}\] These show that the vacuum field equations (19) are automatically satisfied if \(\square\mathfrak{s}=0\). Furthermore the vacuum Einstein equation (20) becomes \[\mathcal{R}_{\mu\nu}+\left[\left(\frac{\mathrm{d}\phi}{\mathrm{d}\mathfrak{s}}\right)^{2}+W^{2}\left(\frac{\mathrm{d}\mathfrak{a}}{\mathrm{d}\mathfrak{s}}\right)^{2}\right]\partial_{\mu}\mathfrak{s}\,\partial_{\nu}\mathfrak{s}=\mathcal{R}_{\mu\nu}+\partial_{\mu}\mathfrak{s}\,\partial_{\nu}\mathfrak{s}=0\,, \tag{27}\] where the first equality uses (18). As expected, the geodesic map \(\{{\mathfrak{a}}({\mathfrak{s}})\,,\phi({\mathfrak{s}})\}\) promotes any fully nonlinear solution \(\{{\mathfrak{s}}(x),g_{\mu\nu}(x)\}\) to the field equations for a massless Klein-Gordon field coupled to Einstein gravity to a fully nonlinear solution \(\{{\mathfrak{a}}(x),\phi(x),g_{\mu\nu}(x)\}\) for the two-field sigma model coupled to gravity. A special case of this was used in [16] for the specific \(SL(2,R)\)-invariant choice for \(W\) coming from string theory, generalizing the earlier broad class of radial solutions found for that model in [15]. But the arguments above show that the relation is much more general because it works for more than just two fields and for arbitrary target-space metrics. ## 3 Applications to wormholes Wormholes provide a specific and concrete example where this construction might be used. This is because explicit wormhole solutions are known for the massless Einstein/Klein-Gordon system, whose symmetries often ensure they depend only on a single coordinate. In the specific instance of wormholes the relation between solutions and target-space geodesics has been known for some time (see for example [20, 21]). ### Giddings-Strominger wormhole The simplest example starts from a Euclidean solution for a 2-form Kalb-Ramond field \(B_{\mu\nu}\) coupled to gravity [22] (see also [21, 23]) through the action \[S=-\frac{1}{2}\int{\rm d}^{4}x\,\sqrt{-g}\left[M_{p}^{2}{\cal R}+\frac{1}{3!f^{2}}\,H_{\mu\nu\lambda}H^{\mu\nu\lambda}\right] \tag{18}\] where \(H={\rm d}B\) is the 3-form Kalb-Ramond field strength. The asymptotically flat Euclidean-signature field equations for this system support a wormhole solution of the form \[{\rm d}s^{2}=\left(1-\frac{r_{0}^{4}}{r^{4}}\right)^{-1}{\rm d}r^{2}+r^{2}{\rm d}\sigma_{3}^{2}\quad\mbox{and}\quad H_{ijk}=\frac{n}{2\pi^{2}r^{3}}\,\epsilon_{ijk}\,, \tag{19}\] where \(x^{i}\) and \({\rm d}\sigma_{3}^{2}\) are respectively coordinates and the line element for the unit 3-sphere and \(\epsilon_{ijk}\) is the volume form for the spatial slices at fixed \(r\). \(n\) is an integer that determines the quantization of flux \[\oint_{S_{3}}H=n \tag{20}\] in terms of which the field equations determine the constant \(r_{0}\) to be \[r_{0}^{4}=\frac{n^{2}}{24\pi^{4}f^{2}M_{p}^{2}}\,.
\tag{21}\] This solution with \(r_{0}\leq r<\infty\) describes the nucleation of a 3-sphere baby universe from a larger space, with subsequent classical baby-universe evolution starting with vanishing initial time derivatives and the initial spatial metric and 3-form field as given by the \(r=r_{0}\) slice of the wormhole solution. #### Dual scalar wormhole The above solution also points to the existence of a wormhole solution for the Einstein/Klein-Gordon system that can be obtained by dualizing, with the dimensionless scalar \(\mathfrak{s}(x)\) related to the 3-form field by \[H^{\mu\nu\lambda}=f^{2}\epsilon^{\mu\nu\lambda\rho}\partial_{\rho}\mathfrak{s}\,. \tag{22}\] Standard arguments show that using this transformation in the action (18) leads (in Minkowski signature) to a standard scalar action \[S=-\frac{1}{2}\int\mathrm{d}^{4}x\,\sqrt{-g}\Big{(}M_{p}^{2}\mathcal{R}+f^{2}\partial_{\mu}\mathfrak{s}\,\partial^{\mu}\mathfrak{s}\Big{)}\,, \tag{12}\] as appropriate for a scalar with positive kinetic energy. There is a subtlety in this duality when applied to Euclidean-signature wormhole solutions, however [24, 25, 26, 27, 28]. The subtlety arises because in Euclidean signature applying (22) to (18) leads to \[S_{{}_{E}}=\frac{1}{2}\int\mathrm{d}^{4}x_{{}_{E}}\,\sqrt{g}\,\left(M_{p}^{2}\mathcal{R}-f^{2}\partial_{m}\mathfrak{s}\,\partial^{m}\mathfrak{s}\right), \tag{13}\] rather than (12), corresponding to _negative_ kinetic energy. Indeed, using (22) in the wormhole solution (19) leads to a configuration that solves the field equations \[\mathcal{R}_{mn}-\frac{f^{2}}{M_{p}^{2}}\,\partial_{m}\mathfrak{s}\,\partial_{n}\mathfrak{s}=0\quad\text{and}\quad g^{mn}\nabla_{m}\nabla_{n}\mathfrak{s}=0\,, \tag{14}\] appropriate to the Euclidean action (13). In particular the metric remains given by (19) and the scalar is given by \(f^{2}\partial_{m}\mathfrak{s}=\frac{1}{3!}\epsilon_{mnpq}H^{npq}\) and so \[\partial_{r}\mathfrak{s}=\frac{1}{3!f^{2}}\,\sqrt{g_{rr}}\,\epsilon_{ijk}H^{ijk}=\frac{n}{2\pi^{2}f^{2}r^{3}}\left(1-\frac{r_{0}^{4}}{r^{4}}\right)^{-1/2}\,, \tag{15}\] which implies \[\mathfrak{s}(r)=\mathfrak{s}_{0}+\frac{n}{4\pi^{2}f^{2}r_{0}^{2}}\,\tan^{-1}\left(\frac{\sqrt{r^{4}-r_{0}^{4}}}{r_{0}^{2}}\right)=\mathfrak{s}_{0}+\frac{n}{4\pi^{2}f^{2}r_{0}^{2}}\cos^{-1}\left(\frac{r_{0}^{2}}{r^{2}}\right)=\mathfrak{s}_{0}+\sqrt{\frac{3}{2}}\left(\frac{M_{p}}{f}\right)\cos^{-1}\left(\frac{r_{0}^{2}}{r^{2}}\right)\,, \tag{16}\] where the additive constant satisfies \(\mathfrak{s}_{0}=\mathfrak{s}(r_{0})\) and the last equality uses (21) to eliminate \(n\). We see that \(\mathfrak{s}\) asymptotes to a constant of order \(M_{p}/f\) in the asymptotically flat large-\(r\) limit and remains bounded as \(r\to r_{0}\), with \(\oint_{\mathcal{S}_{3}}\epsilon_{mnpq}\partial^{q}\mathfrak{s}\propto\oint_{\mathcal{S}_{3}}H\) also finite (by construction). Eq. (15) can be seen by inspection to satisfy the massless Klein-Gordon equation \(\square\mathfrak{s}=0\) for the metric (19) because \(\sqrt{g}\,g^{rr}\partial_{r}\mathfrak{s}\) is a constant.
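Both closed forms in (16), and the constancy of \(\sqrt{g}\,g^{rr}\partial_{r}\mathfrak{s}\), are easy to confirm numerically; a minimal sketch (ours), in units where \(r_{0}=n=f=1\):

```python
import numpy as np

# Numerical sanity checks (our sketch) on the dual-scalar wormhole:
# (i) the arctan and arccos forms of s(r) in Eq. (16) agree, and
# (ii) sqrt(g) g^{rr} ds/dr is r-independent (angular factors dropped),
#      which is the statement Box(s) = 0 for the metric (19).
r0 = n = f = 1.0
r = np.linspace(1.001 * r0, 20.0, 2000)

form1 = np.arctan(np.sqrt(r**4 - r0**4) / r0**2)
form2 = np.arccos(r0**2 / r**2)
print("max difference:", np.max(np.abs(form1 - form2)))        # ~ machine precision

ds_dr = n / (2 * np.pi**2 * f**2 * r**3) / np.sqrt(1 - r0**4 / r**4)
flux = r**3 * np.sqrt(1 - r0**4 / r**4) * ds_dr
print("spread:", np.ptp(flux), "value:", n / (2 * np.pi**2 * f**2))
```

Why is it consistent to use a solution to the Klein-Gordon/Einstein equations with the wrong-sign Euclidean kinetic term as a wormhole for the scalar that has positive energy when dualized in Minkowski signature?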
Consistency between the Euclidean and Minkowski-signature dualities rests on careful identification of the quantum amplitude to which the wormhole contributes in both the Kalb-Ramond theory and its scalar dual (see _e.g._ [29] for a general discussion of the relationship between WKB states and path-integral saddle points in quantum mechanics). The Kalb-Ramond instanton (10) corresponds to a transition where the baby universe emerges with a specified field strength \(H_{ijk}\), as can be seen from the wormhole boundary conditions at \(r=r_{0}\). For the dual scalar field the relation (10) shows this corresponds to computing a transition to a baby universe state for which the scalar field's canonical momentum, \[\mathfrak{p}=\sqrt{-g}\,\partial_{t}\mathfrak{s}\,, \tag{17}\] (where \(t\) is Minkowski-signature time) takes a given real value - as opposed to being in an eigenstate of \(\mathfrak{s}\) itself. If we dualize in Minkowski signature and then Euclideanize we seek solutions to (27) rather than (10), but the presence of the time derivative in the boundary condition (29) drives \(\mathfrak{s}\) to be imaginary in Euclidean signature when \(\mathfrak{p}\) is real [25, 26], leading to a saddle point for the positive-energy Euclidean action (11) on which the field \(\mathfrak{s}\) is imaginary. This is equivalent to seeking a real solution to (10), and so is a real saddle point for a wrong-sign Euclidean action (12).

### Timelike target-space geodesics

Our goal is to map the above wormhole solution to more general sigma models using the general geodesic transformation described in §2.1. The above Euclidean considerations modify some of the details of this mapping, but it otherwise goes through as before. With the above discussion in mind we choose to explore this mapping using real solutions to a Euclidean action for which one of the scalars has a wrong-sign kinetic term (having in mind for the sigma model the same saddle-point discussion described for the GS wormhole in [24, 25, 26, 27]). This leads to two main changes:

* One of the scalars has a wrong-sign Euclidean kinetic term, which as the above discussion argues corresponds to preparing a baby universe in an eigenstate of canonical momentum, as would naturally occur if that particular scalar arises as the dual of a flux field like \(H_{\mu\nu\lambda}\). This is ensured if the target space metric \(\mathcal{G}_{ab}\,\mathrm{d}\phi^{a}\,\mathrm{d}\phi^{b}\) has Minkowski signature rather than the standard Euclidean signature usually required by positive kinetic energy. Because of the important role played by shift symmetry when dualizing we assume the negative eigenvalue of the metric corresponds to a symmetry (axion) direction in field space.
* Since we wish the sigma-model field equations to be automatic consequences of \(\square\mathfrak{s}=0\) and (10), we demand that the geodesic solution to (13) be timelike, so that \(\mathcal{G}_{ab}\,\dot{\phi}^{a}\,\dot{\phi}^{b}=-1\). This ensures -- through (14) -- that the Euclidean Einstein equation (10) generates solutions to the Euclidean Einstein equations for the full sigma model.

#### Two-field special case

As applied to the two-field special case introduced in §2.2 this leads to the target-space metric \[\mathcal{G}_{ab}\,\mathrm{d}\phi^{a}\,\mathrm{d}\phi^{b}=\mathrm{d}\phi^{2}-W^{2}(\phi)\,\mathrm{d}\mathfrak{a}^{2}\,, \tag{3.12}\] which is to be contrasted with (17).
Using \(\tau\) to denote proper distance along a timelike curve \(\mathfrak{a}(\phi)\) in this metric we have \[\tau=\int_{\phi_{0}}^{\phi_{1}}\mathrm{d}\phi\;\sqrt{-1+W^{2}\,(\mathfrak{a}^{\prime})^{2}} \tag{3.13}\] where \(\mathfrak{a}^{\prime}=\mathrm{d}\mathfrak{a}/\mathrm{d}\phi\). Geodesics are solutions to \[\frac{W^{2}\mathfrak{a}^{\prime}}{\sqrt{-1+W^{2}(\mathfrak{a}^{\prime})^{2}}}=C \tag{3.14}\] for some real integration constant \(C\) and so \(\mathfrak{a}(\phi)\) and \(\tau(\phi)\) satisfy \[\frac{\mathrm{d}\mathfrak{a}}{\mathrm{d}\phi}=\frac{C}{W\sqrt{C^{2}-W^{2}}}\quad\text{and}\quad\frac{\mathrm{d}\tau}{\mathrm{d}\phi}=\sqrt{-1+W^{2}(\mathfrak{a}^{\prime})^{2}}=\frac{W}{\sqrt{C^{2}-W^{2}}}\,. \tag{3.15}\] Inverting to find \(\phi=\phi(\tau)\) allows us to use \(\tau\) as the parameter along the curve. This implies the functions \(\mathfrak{a}=\mathfrak{a}(\tau)\) and \(\phi=\phi(\tau)\) solve \[\frac{\mathrm{d}\mathfrak{a}}{\mathrm{d}\tau}=\frac{C}{W^{2}}\quad\text{and}\quad\frac{\mathrm{d}\phi}{\mathrm{d}\tau}=\sqrt{\frac{C^{2}}{W^{2}}-1}\,. \tag{3.16}\] The Euclidean Einstein equation is then \[{\cal R}_{mn}+\left[\left(\frac{{\rm d}\phi}{{\rm d}\tau}\right)^{2}-W^{2}\left(\frac{{\rm d}{\sf a}}{{\rm d}\tau}\right)^{2}\right]\partial_{m}\tau\,\partial_{n}\tau={\cal R}_{mn}-\partial_{m}\tau\,\partial_{n}\tau=0\,, \tag{3.17}\] as required, where the second equality uses the timelike nature of the geodesic. Eq. (3.17) ensures that the metric is always given by (3.2) regardless of the sigma model chosen. We must demand \(0\leq W^{2}\leq C^{2}\) for the existence of real solutions. We also seek solutions for which \(\partial_{r}\phi=0\) at \(r=r_{0}\), so that subsequent Minkowski-signature evolution starts with the initial condition \(\partial_{t}\phi=0\). Although this would seem to be true automatically if \(\phi=\phi_{0}\) were constant, we have seen - see the discussion below (2.20) - that such solutions only exist if the constant value \(\phi=\phi_{0}\) is one for which \(W^{\prime}(\phi_{0})\) vanishes.2 Footnote 2: Constant-\(\phi\) solutions cannot be generated using the above arguments because their starting point (3.13) assumes \(\phi\) can be used as a parameter along the geodesic.

### Exponential special case

For the special case \(W(\phi)=e^{-\beta\phi/2}\) the geodesic equation for \(\phi(\tau)\) can be integrated quite generally and implies \[e^{-\beta\phi(\tau)}=\frac{C^{2}\tan^{2}\left(\frac{1}{2}\beta\tau\right)}{1+\tan^{2}\left(\frac{1}{2}\beta\tau\right)}=C^{2}\sin^{2}\left(\frac{1}{2}\beta\tau\right) \tag{3.18}\] where the new integration constant is absorbed into \(\tau_{0}\) appearing in the Euclideanized Klein-Gordon solution \(\tau(x)\) given by (3.10) (and reproduced here for convenience of reference) \[\tau(r)=\tau_{0}+\sqrt{\frac{3}{2}}\left(\frac{M_{p}}{f}\right)\cos^{-1}\left(\frac{r_{0}^{2}}{r^{2}}\right)\,. \tag{3.19}\] Using (3.18) to integrate \({\sf a}(\tau)\) then gives \[{\sf a}(\tau)=-\frac{2\cot\left(\frac{1}{2}\beta\tau\right)}{\beta C}+c_{2}\,, \tag{3.20}\] where \(c_{2}\) is a new integration constant. Our boundary condition is \(\partial_{r}\phi=0\) at \(r=r_{0}\), but since \(\partial_{r}\tau\) does not vanish at \(r=r_{0}\) we choose \(\tau_{0}\) so that \({\rm d}\phi/{\rm d}\tau\) vanishes at \(\tau_{0}\) while \({\sf a}(\tau)\) remains bounded.
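The geodesic functions can also be obtained numerically from (3.16). A minimal scipy sketch (illustrative \(\beta\) and \(C\), not values from the paper; it integrates the increasing branch away from the turning point, where \(\mathrm{d}\phi/\mathrm{d}\tau=0\)) reproduces the closed forms (3.18) and (3.20):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, C = 1.0, 2.0   # illustrative values for W = exp(-beta*phi/2)

def rhs(tau, y):
    phi, a = y
    W2 = np.exp(-beta * phi)
    return [np.sqrt(max(C**2 / W2 - 1.0, 0.0)),   # dphi/dtau, + branch of (3.16)
            C / W2]                                # da/dtau, Eq. (3.16)

phi_exact = lambda t: -np.log(C**2 * np.sin(0.5 * beta * t)**2) / beta   # Eq. (3.18)
a_exact = lambda t: -2.0 / (beta * C * np.tan(0.5 * beta * t))           # Eq. (3.20), c2 = 0

tau0 = np.pi / beta + 1e-4    # start just past the turning point
sol = solve_ivp(rhs, [tau0, 1.8 * np.pi / beta], [phi_exact(tau0), a_exact(tau0)],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (1.2 * np.pi / beta, 1.6 * np.pi / beta):
    print(sol.sol(t), [phi_exact(t), a_exact(t)])   # numeric solution matches the closed forms
```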
This implies \(\cos\left(\frac{1}{2}\beta\tau_{0}\right)=0\) and so \(\tau_{0}=\pi/\beta\), implying (3.18) and (3.20) become \[e^{-\beta\phi(\tau)}=e^{-\beta\phi_{0}}\cos^{2}\left[\sqrt{\frac{3}{2}}\left(\frac{\beta M_{p}}{2f}\right)\cos^{-1}\left(\frac{r_{0}^{2}}{r^{2}}\right)\right]\,, \tag{3.21}\] and \[{\sf a}(\tau)={\sf a}_{0}+\frac{2}{\beta}\,e^{\beta\phi_{0}/2}\tan\left[\sqrt{\frac{3}{2}}\left(\frac{\beta M_{p}}{2f}\right)\cos^{-1}\left(\frac{r_{0}^{2}}{r^{2}}\right)\right]\,, \tag{3.22}\] in agreement with the expressions given in [30]. Notice that \(\phi(r)\) is bounded - and so \(W(\phi)\) is nonzero - for all \(r>r_{0}\) only if \(\beta<\beta_{c}\) where \[\beta_{c}:=\frac{2\sqrt{2}\,f}{\sqrt{3}\,M_{p}}\,. \tag{3.23}\]

### 2D Kähler target space

The real power of being able to use target-space geodesics to promote the GS wormhole to a general sigma model lies in the ability it provides to generate explicit wormhole solutions in a broad class of situations. We illustrate this by exploring a broader class of 2D target spaces that in Minkowski signature correspond to 2D Kähler geometries. Kähler geometries are of interest in this context because they are naturally generated in higher-dimensional supersymmetric theories, with the Kähler condition related to the condition for some supersymmetries to survive compactification. For our purposes a Kähler metric is a complex manifold where there exist complex coordinates \(\{z^{i},\bar{z}^{\bar{\jmath}}\}\) for which the metric's only nonzero components are \(g_{i\bar{\jmath}}(z,\bar{z})=\partial_{i}\partial_{\bar{\jmath}}K(z,\bar{z})\) for some real Kähler potential \(K(z,\bar{z})\). For two real dimensions there is only a single complex coordinate \(\{z,\bar{z}\}\) and the metric is determined by one real function \[\mathfrak{g}:=g_{z\bar{z}}=\partial\bar{\partial}K \tag{3.24}\] of \(K(z,\bar{z})\). For such a geometry the only nonzero Christoffel symbol is the purely holomorphic one \[\Gamma:=\Gamma^{z}_{zz}=g^{z\bar{z}}\partial g_{\bar{z}z}=\partial\ln\mathfrak{g}\,, \tag{3.25}\] and its complex conjugate \(\overline{\Gamma}:=\Gamma^{\bar{z}}_{\bar{z}\bar{z}}\). The geodesic equation for such a manifold therefore separates into a holomorphic condition \[\ddot{z}+\Gamma^{z}_{zz}\dot{z}^{2}=\ddot{z}+\dot{z}^{2}\,\partial\ln\mathfrak{g}=0\,, \tag{3.26}\] and its (antiholomorphic) complex conjugate. Rather than continuing in these coordinates we instead change variables to the real and imaginary parts, \(z=\frac{1}{2}(\mathfrak{t}+i\mathfrak{a})\), in order to make contact with the discussion in previous sections. For the same reason we also impose a shift symmetry for \(\mathfrak{a}\), and so specialize to Kähler potentials of the form \[K(z,\bar{z})=K(z+\bar{z})\,, \tag{3.27}\] for which \(\mathfrak{g}=\mathfrak{g}(z+\bar{z})=\mathfrak{g}(\mathfrak{t})>0\). In this case the target-space line element is \[\mathrm{d}\mathfrak{s}^{2}=2g_{z\bar{z}}\mathrm{d}z\,\mathrm{d}\bar{z}=\frac{\mathfrak{g}}{2}\Big{(}\mathrm{d}\mathfrak{t}^{2}+\mathrm{d}\mathfrak{a}^{2}\Big{)} \tag{3.28}\] and so the Minkowski-signature sigma-model lagrangian is given by \[\mathcal{L}=-\sqrt{-g}\left[\frac{M_{p}^{2}}{2}\,\mathcal{R}+2f^{2}\mathfrak{g}(z+\bar{z})\ \partial_{\mu}z\partial^{\mu}\bar{z}\right]=-\frac{1}{2}\sqrt{-g}\Big{\{}M_{p}^{2}\mathcal{R}+f^{2}\mathfrak{g}(\mathfrak{t})\left[(\partial\mathfrak{t})^{2}+(\partial\mathfrak{a})^{2}\right]\Big{\}}\,.
\tag{3.29}\] Contact with earlier sections is made by changing coordinates to proper distance \(\phi(\mathfrak{t})\), defined by \(\mathrm{d}\phi^{2}:=\mathrm{d}\mathfrak{t}^{2}\,\mathfrak{g}(\mathfrak{t})\) using \[\frac{\mathrm{d}\phi}{\mathrm{d}\mathfrak{t}}=\sqrt{\mathfrak{g}(\mathfrak{t})}\,, \tag{3.30}\] because this takes the lagrangian into the form (2.18) with \[W^{2}(\phi)=\mathfrak{g}[\mathfrak{t}(\phi)]=\left(\frac{\mathrm{d}\phi}{\mathrm{d}\mathfrak{t}}\right)^{2}\,. \tag{3.31}\] Given a choice for \(\mathfrak{g}(\mathfrak{t})\) one integrates (3.30) to obtain \(\phi(\mathfrak{t})\), inverts the result to get \(\mathfrak{t}(\phi)\) and so computes \(W(\phi)\) using (3.31). The resulting geodesics can then be computed and used to generate Minkowski-signature solutions (or Euclidean-signature wormholes) by using the result in (24) (or (3.16)) together with the appropriate Klein-Gordon/Einstein solution. Alternatively we may express the solutions directly in terms of the fields \(\mathfrak{t}(\tau)\) and \(\mathfrak{a}(\tau)\). Using (3.31) and (3.16) we get \[\int\frac{\mathfrak{g}(\mathfrak{t})\mathrm{d}\mathfrak{t}}{\sqrt{C^{2}-\mathfrak{g}(\mathfrak{t})}}=\tau+\kappa \tag{3.32}\] with \(\kappa\) a constant. Given \(\mathfrak{g}\) we can invert this equation to obtain \(\mathfrak{t}(\tau)\). Also, since \(\mathrm{d}\mathfrak{a}/\mathrm{d}\tau=C/\mathfrak{g}\) we can then find \(\mathfrak{a}(\tau)\) directly after solving for \(\mathfrak{t}(\tau)\), or explicitly: \[\mathfrak{a}(\mathfrak{t})=C\int\frac{\mathrm{d}\mathfrak{t}}{\sqrt{C^{2}-\mathfrak{g}(\mathfrak{t})}}. \tag{3.33}\]

#### The volume modulus

As an illustration consider \[K(z+\bar{z})=-\alpha^{2}\ln(z+\bar{z})\,, \tag{3.34}\] which in practical applications often applies to a compactification's volume modulus. The special case \(\alpha=\sqrt{3}\) is a no-scale model [31]. For this class of Kähler potentials we have \[\mathfrak{g}=K^{\prime\prime}=\frac{\alpha^{2}}{(z+\bar{z})^{2}}\,, \tag{3.35}\] and so \[\frac{\mathrm{d}\phi}{\mathrm{d}\mathfrak{t}}=\frac{\alpha}{\mathfrak{t}}\quad\text{which implies}\quad\mathfrak{t}(\phi)=\alpha\,e^{\phi/\alpha}\,, \tag{3.36}\] where the integration constant is used to choose the origin in \(\phi\)-space to satisfy \(\mathfrak{t}(0)=\alpha\) for later convenience. Then \[W(\phi)=\frac{1}{\mathrm{d}\mathfrak{t}/\mathrm{d}\phi}=e^{-\beta\phi/2}\quad\text{with}\quad\beta=\frac{2}{\alpha}\,. \tag{3.37}\] The wormhole solutions for this model are given explicitly by (3.21) and (3.22) above, and are non-singular for all \(r\) provided \(\alpha>\alpha_{c}\) where (recall (3.23)) \[\alpha_{c}:=\sqrt{\tfrac{3}{2}}\left(\frac{M_{p}}{f}\right)\,. \tag{3.38}\] More interestingly, string-scale corrections are known in some cases to modify the Kähler potential for the volume modulus from (3.34) to \[K(z,\bar{z})=-\alpha^{2}\ln\left[z+\bar{z}+\frac{c}{(z+\bar{z})^{p}}\right]\simeq-\alpha^{2}\ln(z+\bar{z})-\frac{\alpha^{2}c}{(z+\bar{z})^{p+1}} \tag{3.39}\] for real values \(c\) and \(p\) (which remains a no-scale model when \(\alpha=\sqrt{3}\) and \(p=0\)). This is to be understood as the next-to-leading part of an expansion in inverse powers of \(z+\bar{z}\gg 1\).
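The chain from \(\mathfrak{g}(\mathfrak{t})\) to \(W(\phi)\) can be automated symbolically. A small sympy sketch for the volume-modulus case (3.34)-(3.37) (illustrative, with the integration constant fixed as in the text):

```python
import sympy as sp

t, phi, alpha = sp.symbols('t phi alpha', positive=True)

g = alpha**2 / t**2                                            # Eq. (3.35): g(t) = K''
phi_of_t = sp.integrate(sp.sqrt(g), t) - alpha * sp.log(alpha)  # Eq. (3.30), constant chosen so t(0) = alpha
t_of_phi = sp.solve(sp.Eq(phi, phi_of_t), t)[0]                 # inverts to t = alpha*exp(phi/alpha), Eq. (3.36)
W = sp.simplify(sp.sqrt(g.subs(t, t_of_phi)))                   # Eq. (3.31)
print(t_of_phi, W)   # alpha*exp(phi/alpha) and exp(-phi/alpha): W = exp(-beta*phi/2) with beta = 2/alpha
```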
With this choice we have \[\mathfrak{g}(\mathfrak{t})\simeq\frac{\alpha^{2}}{\mathfrak{t}^{2}}\left[1-\frac{(p+1)(p+2)c}{\mathfrak{t}^{p+1}}\right] \tag{3.40}\] and so \[\phi(\mathfrak{t})\simeq\alpha\ln\mathfrak{t}+\frac{\alpha(p+2)c}{2\mathfrak{t}^{p+1}}\quad\text{which inverts to}\quad\mathfrak{t}(\phi)\simeq\alpha e^{\phi/\alpha}-\frac{(p+2)c}{2\alpha^{p}}\,e^{-p\phi/\alpha} \tag{3.41}\] leading to \[W(\phi)\simeq e^{-\phi/\alpha}\left[1-\frac{p(p+2)c}{2\alpha^{p+1}}\,e^{-(p+1)\phi/\alpha}\right]\,. \tag{3.42}\] More explicitly we can find \(\mathfrak{t}(\tau)\) and \(\mathfrak{a}(\tau)\) as outlined in equations (3.32) and (3.33), with \[\tau+\kappa\simeq\alpha^{2}\int\frac{d\mathfrak{t}}{\mathfrak{t}\sqrt{C^{2}\mathfrak{t}^{2}-\alpha^{2}}}\left(1-\frac{(p+1)(p+2)c}{\mathfrak{t}^{p+1}}\right) \tag{3.43}\] which integrates to give \(\mathfrak{t}(\tau)\). For the simplest case \(p=0\) we can do the integrals and get \[\tau+\kappa\simeq\alpha\tan^{-1}\left(\frac{\sqrt{C^{2}\mathfrak{t}^{2}-\alpha^{2}}}{\alpha}\right)-\frac{2c}{\mathfrak{t}}\sqrt{C^{2}\mathfrak{t}^{2}-\alpha^{2}}\qquad(\text{when }p=0)\,. \tag{3.44}\] For \(c=0\) the second term vanishes and the first term reproduces the result of (3.18). Similarly: \[\mathfrak{a}=C\int\frac{d\mathfrak{t}}{\sqrt{C^{2}-\mathfrak{g}(\mathfrak{t})}}\simeq C\int\frac{\mathfrak{t}d\mathfrak{t}}{\sqrt{C^{2}\mathfrak{t}^{2}-\alpha^{2}}}\left[1-\frac{\alpha^{2}(p+1)(p+2)c}{2C^{2}\mathfrak{t}^{p+3}}\right] \tag{3.45}\] For \(p=0\) this gives: \[\mathfrak{a}(\mathfrak{t}(\tau))\simeq\frac{\sqrt{C^{2}\mathfrak{t}^{2}-\alpha^{2}}}{C}\left(1-\frac{c}{\mathfrak{t}}\right)\qquad(\text{when }p=0)\,. \tag{3.46}\] These simple examples illustrate the power of the general technique.

## Summary

In summary, we describe in this note a solution-generating technique that promotes classical solutions of the fully nonlinear Einstein/Klein-Gordon equations for a single scalar field into exact solutions to the classical equations for a general non-linear sigma model involving \(N\) scalar fields. It returns the general solutions in situations where the fields depend on only a single parameter. We illustrate the method by reproducing known wormhole solutions and extending them for more complicated target-space metrics, such as those that encode corrections to the Kähler potential in supersymmetric examples. Other applications include the description of homogeneous cosmological configurations when the scalar potential can be neglected, such as in a period of kinetic-energy domination, or the study of spherically symmetric configurations such as arise when exploring potential screening mechanisms mediated by multiple mutually interacting scalar fields. More dynamical applications would be to wave propagation and to black-hole mergers or black-hole/neutron-star mergers, where solutions to the evolution for a single Einstein/Klein-Gordon pair can be lifted to more general multiple-scalar models. Although these solutions need not be the most general ones in these more dynamical settings (though could be for waves that are functions of a single variable, like \(x-t\)), they might capture the dominant behaviour when only a single field captures the response. Although target-space geodesics have been used in some specific instances in the literature, we hope the generality of this solution-generating technique proves useful for the many circumstances where multiple scalar fields play a role in physics beyond the Standard Model.
## Acknowledgements We thank Adam Solomon for helpful conversations. CB's research was partially supported by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. The work of FQ has been partially supported by STFC consolidated grants ST/P000681/1, ST/T000694/1.
2308.02217
Nonlinear wave damping by Kelvin-Helmholtz instability induced turbulence
Magnetohydrodynamic kink waves naturally form as a consequence of perturbations to a structured medium, for example transverse oscillations of coronal loops. Linear theory has provided many insights in the evolution of linear oscillations, and results from these models are often applied to infer information about the solar corona from observed wave periods and damping times. However, simulations show that nonlinear kink waves can host the Kelvin-Helmholtz instability (KHi) which subsequently creates turbulence in the loop, dynamics which are beyond linear models. In this paper we investigate the evolution of KHi-induced turbulence on the surface of a flux tube where a non-linear fundamental kink-mode has been excited. We control our numerical experiment so that we induce the KHi without exciting resonant absorption. We find two stages in the KHi turbulence dynamics. In the first stage, we show that the classic model of a KHi turbulent layer growing $\propto t$ is applicable. We adapt this model to make accurate predictions for damping of the oscillation and turbulent heating as a consequence of the KHi dynamics. In the second stage, the now dominant turbulent motions are undergoing decay. We find that the classic model of energy decay proportional to $t^{-2}$ approximately holds and provides an accurate prediction of the heating in this phase. Our results show that we can develop simple models for the turbulent evolution of a non-linear kink wave, but the damping profiles produced are distinct from those of linear theory that are commonly used to confront theory and observations.
Andrew Hillier, Iñigo Arregui, Takeshi Matsumoto
2023-08-04T09:20:49Z
http://arxiv.org/abs/2308.02217v2
# Nonlinear wave damping by Kelvin-Helmholtz instability induced turbulence ###### Abstract Magnetohydrodynamic kink waves naturally form as a consequence of perturbations to a structured medium, for example transverse oscillations of coronal loops. Linear theory has provided many insights in the evolution of linear oscillations, and results from these models are often applied to infer information about the solar corona from observed wave periods and damping times. However, simulations show that nonlinear kink waves can host the Kelvin-Helmholtz instability (KHi) which subsequently creates turbulence in the loop, dynamics which are beyond linear models. In this paper we investigate the evolution of KHi-induced turbulence on the surface of a flux tube where a non-linear fundamental kink-mode has been excited. We control our numerical experiment so that we induce the KHi without exciting resonant absorption. We find two stages in the KHi turbulence dynamics. In the first stage, we show that the classic model of a KHi turbulent layer growing \(\propto t\) is applicable. We adapt this model to make accurate predictions for damping of the oscillation and turbulent heating as a consequence of the KHi dynamics. In the second stage, the now dominant turbulent motions are undergoing decay. We find that the classic model of energy decay proportional to \(t^{-2}\) approximately holds and provides an accurate prediction of the heating in this phase. Our results show that we can develop simple models for the turbulent evolution of a non-linear kink wave, but the damping profiles produced are distinct from those of linear theory that are commonly used to confront theory and observations. ## 1 Introduction Observations show that the solar atmosphere is filled with highly structured plasma forming loops and threads of material tracing the magnetic field. A number of different phenomena lead to oscillations of these structures in the direction transverse to the field. A few examples are the flare-generated transverse coronal loop oscillations (Aschwanden et al., 1999; Nakariakov et al., 1999); the more prevalent small-scale disturbances in active region loops (McIntosh et al., 2011); Doppler-shift disturbances in extended regions of the solar corona (Tomczyk et al., 2007); propagating transverse waves in prominences (Lin et al., 2007; Schmieder et al., 2013); or the occurrence of transverse waves generated from colliding plasma flows (Antolin et al., 2018). A common interpretation has been given to these oscillations, in terms of propagating or standing transverse magnetohydrodynamic (MHD) kink waves (see e.g., Ruderman & Erdelyi, 2009; Goossens et al., 2011; Nakariakov et al., 2021, for comprehensive reviews). The usual paradigm under which theoretical studies on MHD kink waves have been carried out is that of waves modelled as linear perturbations to an initial background state (Roberts, 1983, 2000). Theoretical models for the damping of kink waves have also focused mainly on the linear regime (Goossens et al., 1992, 2002; Ruderman & Roberts, 2002; Goossens et al., 2006). However, as evidenced by the catalogues compiled by Goddard & Nakariakov (2016) and Nechaeva et al. (2019), many of the observed kink oscillations have amplitudes that are large and their damping seems to depend on the oscillation amplitude. For at least some of the observed oscillations, the nonlinearities associated with the perturbations to the system are non-negligible. 
This can lead to damping of the oscillations through nonlinearities (e.g. Chen & Schuck, 2007; Van Doorsselaere et al., 2021; Arregui, 2021). A well-established result from analytical studies and numerical simulations is that plasma motions in a flux tube undergoing a nonlinear kink oscillation can lead to the development of the Kelvin-Helmholtz instability (KHi) (see e.g. Terradas et al., 2008; Antolin et al., 2014, 2015; Magyar & Van Doorsselaere, 2016) and the subsequent turbulence the instability can induce (e.g. Hillier & Arregui, 2019; Hillier et al., 2020). In this case, the Kelvin-Helmholtz instability is a parasitic instability that grows on the shear flow that exists on the flanks of the oscillating flux tube. If the instability can grow, it is then able to develop nonlinearities, and if there is sufficient energy in the flow it can then develop turbulence (Hillier, 2019; Hillier and Arregui, 2019). This turbulence can extract energy from an oscillation, either providing a saturation mechanism for the amplitude of the wave for a driven oscillation (Hillier et al., 2020) or leading to damping of an impulsively excited mode. The turbulence excited by the Kelvin-Helmholtz instability is fundamentally different from MHD wave turbulence (e.g. van Ballegooijen et al., 2011), where nonlinear MHD waves interact to create a daughter wave of higher frequency. The key difference is that even though both of these mechanisms use the large-scale oscillation as the energy source, in the wave turbulence model the large-scale oscillation is also involved in creating the energy cascade process. For Kelvin-Helmholtz turbulence this is not the case, with the energy cascade being related to the nonlinearities of an instability. In this paper we perform a detailed analysis of the evolution of the Kelvin-Helmholtz instability on the surface of an oscillating flux tube, identifying how the turbulent dynamics results in two different phases of the evolution of the oscillation amplitude of the tube. We then develop an analytic model for the first stage of the evolution of the flux tube. Here we focus on the large-scale response of the oscillating tube to the Kelvin-Helmholtz turbulence, in particular the damping of the oscillations and the heating this creates. During the second phase of the dynamics we find that the turbulent energy dominates the wave energy, so we develop a model only for this based on classic models of decaying turbulence, using these to predict the heating rate in the latter stage.

## 2 Simulations of kink-wave-driven Kelvin-Helmholtz instability and related wave damping

To build a model of wave damping through Kelvin-Helmholtz turbulence, it is necessary to first identify the fundamental processes that are occurring. To do this we perform a 3D ideal MHD simulation of a nonlinear kink oscillation that develops the Kelvin-Helmholtz instability. Through this we look to identify what it means for the Kelvin-Helmholtz instability to damp a kink wave, and look at the fundamental processes involved in the damping.

### Simulation setup

We perform this ideal MHD simulation of an impulsively excited MHD kink wave using the MHD routines of the (PIP) code (Hillier et al., 2016).
We solve the evolution in 3D in a Cartesian reference frame using the non-dimensionalised ideal MHD equations in conservative form, namely: \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0, \tag{1}\] \[\frac{\partial}{\partial t}(\rho\mathbf{v})+\nabla\!\cdot\!\left(\rho\mathbf{v}\mathbf{v}+P\mathbf{I}-\mathbf{B}\mathbf{B}+\frac{\mathbf{B}^{2}}{2}\mathbf{I}\right)=0, \tag{2}\] \[\frac{\partial}{\partial t}\left(e+\frac{B^{2}}{2}\right)+\nabla\cdot\left[\mathbf{v}(e+P)-(\mathbf{v}\times\mathbf{B})\times\mathbf{B}\right]=0, \tag{3}\] \[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times(\mathbf{v}\times\mathbf{B})=0, \tag{4}\] \[\nabla\cdot\mathbf{B}=0, \tag{5}\] \[e\equiv\frac{P}{\gamma-1}+\frac{1}{2}\rho v^{2}, \tag{6}\] which allows us to calculate the evolution of the primitive variables, i.e. the density (\(\rho\)), velocity field (\(\mathbf{v}=(v_{x},v_{y},v_{z})^{T}\)), pressure (\(P\)) and magnetic field (\(\mathbf{B}\)), through the evolution of the relevant conserved quantities. Note that in our normalisation we have absorbed the vacuum magnetic permeability into \(\mathbf{B}\), meaning that the local Alfvén speed is given by \(V_{\mathrm{A}}=|\mathbf{B}|/\sqrt{\rho}\). These equations are solved using a fourth-order central difference approximation for spatial derivatives (calculated on a uniform mesh) with a four-step Runge-Kutta time integration. We have non-dimensionalised the equations using a characteristic coronal density (\(\rho_{c}\)), sound speed (\(C_{s}\)) and lengthscale (\(L_{c}\)). Here we do not include any explicit resistive or viscous terms in the equations. Dissipation at some level is inherent in the simulation due to the finite grid size and the use of flux limiters to smooth sharp structures. As the code is written in conservative form, any extraction of energy from the flow or magnetic field results in a corresponding increase in internal energy (i.e. heating). This allows us to quantify the magnitude of any heating that occurs in the simulation whilst being able to run calculations in the least viscous, least resistive regime we can achieve for the resolution. The initial conditions are of a dense tube, aligned with the direction of the magnetic field, which we then perturbed to excite the fundamental kink mode of the system. The initial density profile is given by \[\rho(x,y,z)=\rho_{e}+\frac{\rho_{i}-\rho_{e}}{2}\left(1-\tanh\left(2^{6}\left(\frac{r}{R}-1\right)\right)\right), \tag{7}\] with \(r=\sqrt{x^{2}+y^{2}}\), \(R\) the radius of the tube, and \(\rho_{i}\) and \(\rho_{e}\) the internal density of the tube and the external density. We take \(R=0.3\), \(\rho_{e}=1\) and \(\rho_{i}=3\). The initial pressure is uniform throughout the domain taking a value \(P(x,y,z)=1/\gamma\) (which gives an initial sound speed of \(C_{s}=1\)) and a magnetic field of \(\mathbf{B}=(B_{x},B_{y},B_{z})^{\mathrm{T}}=\sqrt{2/(\gamma\beta)}(0,0,1)^{\mathrm{T}}\), with \(\beta=0.05\). The calculations are performed in a domain of \(x\in[-L_{x},L_{x}]\), \(y\in[0,L_{y}]\) and \(z\in[0,L_{z}]\) with \(L_{x}=L_{y}=1\) and \(L_{z}=10\). We use a grid size of \(\Delta x=0.005\), \(\Delta y=0.0025\) and \(\Delta z=0.1\). This tube is perturbed with a lateral velocity perturbation of the form \[v_{x}(x,y,z)=\frac{V_{0}}{2}\left(1-\tanh\left(2^{6}\left(\frac{r}{R}-1\right)\right)\right)\sin\left(\frac{\pi z}{2L_{z}}\right), \tag{8}\] to excite a fundamental kink mode of the system. We use a value of \(V_{0}=0.2\).
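For reference, a minimal numpy sketch of the initial condition (Eqs. 7 and 8) evaluated at the apex plane, together with the standard thin-tube linear-theory estimate of the fundamental kink frequency for this setup. The grid resolution here is purely illustrative, and the kink-speed formula is textbook linear theory rather than an expression taken from this paper:

```python
import numpy as np

gamma, beta = 5.0/3.0, 0.05
R, rho_i, rho_e, V0, Lz = 0.3, 3.0, 1.0, 0.2, 10.0

x = np.linspace(-1.0, 1.0, 401)
y = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(x, y, indexing='ij')
r = np.sqrt(X**2 + Y**2)

envelope = 0.5 * (1.0 - np.tanh(2**6 * (r / R - 1.0)))  # smoothed top-hat of radius R
rho = rho_e + (rho_i - rho_e) * envelope                 # Eq. (7): rho_i inside, rho_e outside
vx = V0 * envelope                                       # Eq. (8) at z = Lz, where the sin factor is 1
print(rho.max(), rho.min(), vx.max())                    # ~3 on axis, ~1 far away, ~0.2

B = np.sqrt(2.0 / (gamma * beta))             # |B| in code units (~4.9)
c_k = B * np.sqrt(2.0 / (rho_i + rho_e))      # kink speed sqrt((rho_i*VAi^2 + rho_e*VAe^2)/(rho_i + rho_e))
print(np.pi / (2.0 * Lz) * c_k)               # ~0.54 for the fundamental (node-antinode) mode
```

The final line gives a frequency close to the value measured from the simulation in the next subsection.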
We also set the \(y\)- and \(z\)-components of the velocity to zero (i.e. \(v_{y}=v_{z}=0\)). We set the boundaries of our domain to be either periodic or possessing symmetries. The boundaries at \(x=-L_{x}\) and \(x=L_{x}\) are set to be periodic. The boundaries at \(y=0\) and \(y=L_{y}\) are symmetric boundaries such that the magnetic field is imposed to be in the (\(x\), \(z\))-plane, as is the flow. This implies that \(v_{y}=B_{y}=0\) on the boundary and \(\partial v_{x}/\partial y=\partial v_{z}/\partial y=\partial B_{x}/\partial y=\partial B_{z}/\partial y=\partial\rho/\partial y=0\). The \(z\)-boundaries are symmetric with the magnetic field penetrating the boundary, with the imposed symmetries on the magnetic and flow fields being those of a node (\(v_{x}=v_{y}=v_{z}=0\) and \(\partial B_{x}/\partial z=\partial B_{y}/\partial z=\partial B_{z}/\partial z=\partial\rho/\partial z=0\) at \(z=0\)) and anti-node (\(B_{x}=B_{y}=v_{z}=0\) and \(\partial v_{x}/\partial z=\partial v_{y}/\partial z=\partial B_{z}/\partial z=\partial\rho/\partial z=\partial P/\partial z=0\) at \(z=L_{z}\)).

### The evolution of the kink wave and its damping through Kelvin-Helmholtz induced turbulence

Having been subject to its initial kick, as explained in the previous subsection, the tube begins to oscillate. The wave excited is (or at least is dominated by) a fundamental kink mode with frequency \(\omega_{\mathrm{KINK}}=0.53\), resulting in the dense tube oscillating back and forth. For our initial conditions we have set a sharp boundary in density between the tube and the external medium (see Equation 7). Therefore, these oscillations are not subject to resonant damping, which requires a continuous non-uniform variation of density such that the fundamental kink mode has its frequency in the Alfvén continuum. However, as the natural state for these oscillations is for a strong shear flow to develop between the tube and the external medium (Sakurai et al., 1991; Goossens et al., 1992), the boundary of the oscillating tube becomes unstable to the Kelvin-Helmholtz instability (Terradas et al., 2008; Antolin et al., 2014, 2015; Magyar and Van Doorsselaere, 2016). Figure 1 shows the density evolution of the tube cross-section at the wave apex (the \(x-y\) plane at \(z=L_{z}\)). These snapshots are taken every 3.2 time units to show the evolution of the oscillation at approximately every quarter period. As the oscillations proceed, we can clearly see the development of Kelvin-Helmholtz roll-ups on the top of the tube. After approximately the first quarter period (t=3.2) the instability has started to grow, though the scales associated with this (and with that the area of the cross-section of the tube that has been disturbed by the development of the instability) are small. As the wave continues to oscillate the Kelvin-Helmholtz vortices become larger, and with this the area of the cross-section that has been disturbed by the turbulence keeps on increasing. Ultimately, the vast majority of the tube becomes part of the turbulent layer. Looking at this in further detail, Figure 2 shows the temporal evolution of the area of the cross-section of the tube at the apex (normalised by the initial value) for different density thresholds. We plot the areas with density \(\rho\geq 2.7\) (shown by the red line), \(\rho\geq 1.89\) (shown by the black line) and \(\rho\geq 1.1\) (shown by the blue line).
The area shown by the red curve acts as a proxy for the area that has not been disturbed by the Kelvin-Helmholtz induced turbulence. It is clear that once the KHi develops, material that is initially part of the kink oscillation of the tube then joins the mixing layer. Once the undisturbed area of the tube at the apex becomes about 20% of its initial area, we can see that the decrease in size of the tube core slows. The area of material with \(\rho\geq 1.89\) stays approximately constant over time. This is an important result as it connects to the model of Hillier and Arregui (2019) where, for the density contrast used in this calculation, the density value of \(\rho=1.89\) is predicted to be an area-conserving threshold, and so is predicted to show no evolution. Looking back at Figure 1 we can see by eye that the total displacement of the tube, at least as measured by the displacement along the \(x\) axis, does not appear to be showing a strong decay with time. We can see this somewhat more clearly in Figure 3 which shows the spatial evolution of \(v_{x}\) at \(y=0\) and \(z=L_{z}\) over time. The blue and red colours show the positive and negative velocities respectively. Until a time of \(t\approx 44\), the magnitude of the displacement and the magnitude and coherency of the velocity field do not show any drastic change. However, after that time, corresponding to where the decrease in area for the density threshold of \(\rho\geq 2.7\) is clearly reduced in Figure 2, there is a complete change in the dynamics with clear reductions in the lateral displacement of the tube and a velocity field which is less coherent and of smaller magnitude. The temporal evolution of the velocity field in the plane of the apex provides further details on the evolution from coherent oscillations to turbulent motions. Figure 4 shows the evolution of \(v_{x}\) in the plane at the apex of the oscillation (\(z=L_{z}\)). In this plot we have added two black lines to approximately denote the inner and outer edges of the turbulent layer using the density thresholds of \(\rho=2.7\) and \(\rho=1.1\) respectively. Though this is more pronounced initially, throughout the evolution the \(x\) velocity for the region inside the inner black line (i.e. the core of the tube) is relatively coherent at all times. However, in the turbulent layer (i.e. the region between the two black lines) there is more significant fluctuation around the average motion. This includes local speed-ups in the flow, which are a common feature of Kelvin-Helmholtz roll-ups (e.g. Hasegawa et al., 2006), and other turbulent structuring. The coherent component of the motions in the turbulent layer clearly does not have to move with the core of the tube: sometimes they both move in the same direction and sometimes they move in opposite directions. There is one question that Figure 4 (along with Figures 1 and 3) elicits related to measuring how the wave damps. We can see that at least for the first few periods the core of the tube undergoes coherent oscillations, but the whole evolution of the tube takes into account the turbulent layer, which is growing over time, has more incoherent motions, and has a coherent component of the motions that moves differently to the core. Therefore, it is necessary to ask: How do we measure the coherent motions of the loop? And are all these measures the same?

Figure 1: Contour plots of the density distribution in the (\(x\), \(y\))-plane taken at \(z=L_{z}\) showing the time evolution of the KHi on the surface of the oscillating flux tube. Time is given in units of the sound crossing time. Colour contours show densities between a lower value of 0.99 and an upper value of 3.01.

Figure 2: Normalised area of the tube cross-section at the apex that has \(\rho\geq 2.7\) (red line), \(\rho\geq 1.89\) (black line) and \(\rho\geq 1.1\) (blue line).
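One concrete realisation of such a measure is the density-thresholded centre-of-mass used for Figure 5 below. A minimal numpy sketch (the array names are hypothetical stand-ins for an apex-plane snapshot):

```python
import numpy as np

def com_displacement(rho, x_coord, threshold):
    """Centre-of-mass displacement in x of all material with rho >= threshold,
    for 2D arrays sampled on the apex plane."""
    mask = rho >= threshold
    return np.sum(rho[mask] * x_coord[mask]) / np.sum(rho[mask])

# usage sketch, with rho and X taken from an apex-plane snapshot:
# for thr in (1.1, 1.62, 2.7):
#     print(thr, com_displacement(rho, X, thr))
```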
Clearly the way we measure the bulk motion determines how we view the change in the oscillation amplitude with time. We can see this very aspect in Figure 5, where the centre-of-mass displacement in the \(x\)-direction has been calculated in three slightly different ways, resulting in very different measured displacements. For the solid black line, the displacement in the 2D plane at the apex for all the material above a density threshold of \(\rho\geq 1.62\) is displayed. The solid red line is a similar calculation, but for a density threshold of \(\rho\geq 1.1\), and the blue line for a density threshold of \(\rho\geq 2.7\). We can see that initially the amplitude of the oscillation, as given by the maximum and minimum displacements, decreases monotonically for all cases. However, for the different density thresholds the rate at which the amplitude reduces (and the form of the envelope giving the damping of the oscillating profile) clearly differs. We can also see that there is a clear period drift for the cases with lower thresholds compared to the \(\rho\geq 2.7\) curve. After \(t\approx 44\) the oscillations are difficult to distinguish in the lower threshold curves, but are clear and consistent in the \(\rho\geq 2.7\) curve that captures the motion of the core of the tube. Ultimately this leads us to a fundamental question: How do we quantify damping of a kink oscillation when the loop develops a growing turbulent layer? The standing kink wave is a coherent oscillation of a dense tube aligned with the magnetic field. The damping of these oscillations, when considering linear wave theory, should be the same wherever it is measured, not heavily dependent on how the tube is being measured. Clearly something different is happening in this simulation, where not only the amplitude but also the period of the oscillations measured is a function of the density threshold used to calculate a centre-of-mass amplitude evolution. Another key aspect of this simulation is that there are two clear phases of the dynamics: an initial period of damping oscillations connected to the growth of a turbulent layer, followed by a later stage when the tube cross-section is almost completely turbulent and the growth of the layer is almost completely arrested. In the next two Sections we will present models for these two phases by developing and combining aspects of the models in Hillier and Arregui (2019) and Hillier et al. (2023), and benchmarking the predictions of the models with key aspects of the simulated evolution.

Figure 3: Contour plot of the \(x\) velocity in the \(x\)-\(t\) plane at \(y=0\) and \(z=L_{z}\). Blue (red) colours show positive (negative) values. Colour contour is only given for regions with \(\rho\geq 1.62\).

## 3 Modelling the development of a turbulent layer around an oscillating tube

In this section we develop a formulation to describe the evolution of the kink oscillation into a turbulent tube, i.e. the first stage of the damping as described in the previous section.
The first stage of the damping process happens as the turbulent layer on the boundary grows. This process exchanges momentum from inside and outside of the tube, manifesting as the originally coherent oscillations of the tube developing a more incoherently moving outer layer, with a coherently moving inner region. This can be seen in Figure 5 where the oscillations of the centre-of-mass for the material with density greater than \(1.1\) has an amplitude decaying faster than just the material with density greater than \(2.7\). In this regime, we can expect the velocity jump across the turbulent layer stays relatively constant because the velocity at the centre of the tube (as evidenced by Figure 3) remains at approximately the same magnitude until \(t\approx 44\). If we consider a simpler flow, i.e. a non-oscillating hydrodynamic shear layer, it is well known that for a constant velocity jump across the layer the thickness (\(h\)) of the turbulent layer grows as (e.g. Winant and Browand, 1974) \[h=\beta_{\rm MIX}\Delta Vt, \tag{9}\] where \(\beta_{\rm MIX}\) is a constant for a given mixing layer but the particular value will depend on the density contrast of the layer (Baltzer and Livescu, 2020). Using a dimensional analysis of the model proposed in Hillier and Arregui (2019) and developed in Hillier et al. (2023) we propose Figure 4: \(x\)- component of the velocity in the \(x\)–\(y\) plane at \(z=L_{z}\). Colours correspond to same velocities as used in Figure 3. Black lines show the \(\rho=1.1\) and \(\rho=2.7\) transition. snapshots are taken at the same time as those shown in figure 1 Figure 5: Plot of the evolution of the amplitude of the displacement of the centre-of-mass of the tube in the \(x\) direction over time. The three curves are shown for the centre-of-mass calculated for density thresholds of \(\rho\geq 1.1\) (red), \(\rho\geq 1.62\) (black) and \(\rho\geq 2.7\) (blue). that the mixing layer grows as \[h=C_{1}\sqrt{\frac{1}{2}}\frac{(\rho_{i}\rho_{e})^{1/4}}{\sqrt{\rho_{i}}+\sqrt{ \rho_{e}}}\Delta Vt, \tag{10}\] where \(C_{1}\) is a constant of between \(\sim 0.1\) and \(\sim 1\). Through comparison with mixing in hydrodynamic shear flows, the value of \(C_{1}\) is expected to be in the range of \(C_{1}=0.3\)(Baltzer and Livescu, 2020) to \(0.5\)(Brown and Roshko, 1974) as discussed in Hillier et al. (2023). To determine an appropriate value for the mixing constant \(C_{1}\) for this problem we will look more at the mixing in this section. By doing this we can then use this model of an expanding shear layer, combined with information on the structure of the shear layer taken from Hillier and Arregui (2019), to calculate the rate at which mass and momentum are transferred into or out from the oscillating loop and with that how the oscillation damps. Before making these comparisons, it is necessary to measure the magnitude of the shear flow across the turbulent layer as this is what drives the Kelvin-Helmholtz instability induced turbulence. Figure 6 shows the shear flow at the surface of the tube measured at \(x=0\) and \(z=L_{z}\) at \(t=11.6\) (blue line) and \(t=18\) (red line). The velocity is reformulated such that the value is positive at \(y=0\) for easy comparison. Two dashed horizontal lines are added at \(v_{x}=0.15\) and \(v_{x}=-0.05\) to show the approximate values of the flow inside the dense tube and outside the tube. This implies the magnitude of the shear flow is \(\Delta V\approx 0.2\). 
Some consideration has to be given to the fact that this is an oscillatory flow and as such will not have the same magnitude of shear flow at any given time. We hypothesize that, as the turbulent motions drive the growth in layer width, the Root-Mean-Squared (RMS) shear flow speed is the appropriate value to use, and therefore we redefine \(h\) with a factor of \(1/\sqrt{2}\), \[h=\frac{1}{2}C_{1}\frac{(\rho_{i}\rho_{e})^{1/4}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}\Delta Vt. \tag{11}\] Note that this is not strictly necessary, as the constant \(C_{1}\) has to be calculated in any case and its value would simply scale up or down based on the numerical constants included in the model. To make the value of \(C_{1}\) as close to unity as possible (i.e. explicitly including as many of the physical processes that determine its value as possible), we include this factor here.

### Mass evolution

To study the evolution of the tube oscillations in Section 2.2 we looked at the centre-of-mass motions of the system based on given density thresholds (e.g. Figure 5). However, these different thresholds result in changes in the mass over time, so a first important step is to determine how the mass above a given threshold evolves with time. Following the self-similar argument, we can predict the evolution of the total mass as \[\frac{dm_{\rho>\rho_{T}}}{dt}=\frac{d}{dt}(\Delta m_{\rho>\rho_{T}}h2R), \tag{12}\] where \(m_{\rho>\rho_{T}}\) is the mass above a given density threshold and \(\Delta m_{\rho>\rho_{T}}\) is the change in mass above that density threshold for a mixing layer of unit width as predicted by the model of Hillier and Arregui (2019). As the actual detailed wave dynamics are not so important for the construction of this model, we have decided for simplicity to take a rectangular cross-section of length \(2R\) and initial height \(H=\pi R/4\) to model the initial cross-section of the tube, which simplifies the calculation of the evolution of the dynamics in the analytical model. The simple justification of this is that it maintains the cross-sectional area, mass, momentum and kinetic energy of the tube whilst making analytic progress easier. Based on the model predictions of Hillier and Arregui (2019), \(\Delta m_{\rho>\rho_{T}}\) is a constant in time, therefore our model becomes \[\frac{dm_{\rho>\rho_{T}}}{dt}=\Delta m_{\rho>\rho_{T}}2RC_{1}\frac{1}{2}\frac{(\rho_{i}\rho_{e})^{1/4}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}\Delta V. \tag{13}\] Using the model of Hillier and Arregui (2019), the value of \(\Delta m_{\rho>\rho_{T}}\) is calculated by first calculating the initial mass of the dense region by multiplying the density of the high density region by the fraction of a unit length that would have been occupied by this dense phase pre-mixing. Then we integrate the density component of the model of Hillier and Arregui (2019) over a unit length from the density threshold value to the maximum density value (in this case 3). This is plotted in Figure 7 for different thresholds for the initial density contrast of our simulation. We can see that for a small enough threshold the mass will be growing in time, but for larger thresholds there is a loss of mass. The value where the mass evolution is roughly neutral, i.e. \(\Delta m_{\rho>\rho_{T}}=0\), is \(\rho_{T}=1.62\).

Figure 6: Variation of \(v_{x}\) with \(y\) for \(t=11.6\) (blue line) and \(t=18\) (red line). The sign of \(v_{x}\) is set such that it is positive at \(y=0\) for easy comparison.
Equation 13 can be integrated to give \[m_{\rho>\rho_{T}}(t)=m_{\rho>\rho_{T}}(0)+\Delta m_{\rho>\rho_{T}}RC_{1}\frac{(\rho_{i}\rho_{e})^{1/4}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}\Delta Vt. \tag{14}\] To confirm that Equation 14 gives the expected evolution of the mass, we compare the evolution of the mass for three different density thresholds from the simulation with the predicted evolution given by Equation 14. Figure 8 shows the evolution of the mass at the apex for three density thresholds: \(\rho_{T}=1.1\) (blue line), \(\rho_{T}=1.62\) (black line) and \(\rho_{T}=2.7\) (red line). The dashed lines are the predicted evolution curves for these three thresholds (during this initial regime of the evolution) using \(C_{1}=0.3\), which was determined by fitting by eye to the \(\rho_{T}=1.1\) line. The value of \(C_{1}\) found here (\(C_{1}=0.3\)) is in the range of the values found in hydrodynamic simulations/experiments (Brown and Roshko, 1974; Baltzer and Livescu, 2020). Overall the simulated curves follow the predicted linear trend. It is clear that though there is some evolution in the total mass for the density threshold of \(\rho_{T}=1.62\), this is a sufficiently accurate threshold to use to maintain an approximately constant mass in a centre-of-mass velocity calculation. The other two curves show the large variation in mass that can occur when different thresholds are employed. The reason the \(\rho_{T}=1.62\) threshold shows some evolution could be due to errors in applying this Cartesian model for the mixing to this more complex geometry, but other well-established reasons unrelated to the mixing process, like the ponderomotive force driving mass accumulation at the loop apex (e.g. Terradas and Ofman, 2004), exist to explain this. It is worth noting that the density threshold of \(\rho_{T}=1.89\), which is predicted to be area conserving by the model of Hillier and Arregui (2019), is approximately area conserving for the duration of the simulation (as evidenced in Figure 2).

Figure 8: Evolution of the mass of material found above three different density thresholds, with \(\rho_{T}=1.1\) shown in blue, \(\rho_{T}=1.62\) shown in black, and \(\rho_{T}=2.7\) shown in red. The dashed lines are the predicted evolution using \(C_{1}=0.3\), with the colours corresponding to the appropriate density threshold.

One consequence of having determined the value for \(C_{1}\) is that it allows an upper bound for the time this model holds to be determined. That is to say we can calculate the time when the mixing layer has become large enough that all of the initial cross-section of the tube is predicted to be engulfed in the turbulent mixing layer. Using the asymmetry of the mixing layer predicted by Hillier and Arregui (2019), the whole cross-section of the tube should have become turbulent when \[\frac{\sqrt{\rho_{e}}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}h(t)=H, \tag{15}\] where as stated previously \(H\) is half the initial height used for the rectangular approximation of the tube, given as \(H=\pi R/4\). This implies that at a time of \[t=\frac{2H}{0.3\Delta V}\frac{(\sqrt{\rho_{i}}+\sqrt{\rho_{e}})^{2}}{\rho_{i}^{1/4}\rho_{e}^{3/4}}=44.5, \tag{16}\] the whole tube would have become turbulent and, therefore, the model of a growing turbulent layer is invalid. This predicts that a second stage to the dynamics must have been reached by this time, which is exactly what is seen in Figure 3.
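The engulfment time of Eq. (16) follows from a few lines of arithmetic; a quick sketch combining the layer-growth rate of Eq. (11) with the asymmetry factor of Eq. (15):

```python
import numpy as np

rho_i, rho_e, dV, R, C1 = 3.0, 1.0, 0.2, 0.3, 0.3
H = np.pi * R / 4.0                                    # half-height of the rectangular model

dh_dt = 0.5 * C1 * (rho_i * rho_e)**0.25 / (np.sqrt(rho_i) + np.sqrt(rho_e)) * dV   # Eq. (11)
inward_fraction = np.sqrt(rho_e) / (np.sqrt(rho_i) + np.sqrt(rho_e))                # Eq. (15)
print(H / (inward_fraction * dh_dt))   # ~44.5, matching Eq. (16)
```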
### Momentum evolution

Following on from the model of the mass evolution, we can develop a similar (though slightly more complex) model for the evolution of the momentum. To do this we split the momentum above a given density threshold into two components: 1) the momentum held in the material still corresponding to the original tube (\(M_{\rm core}\)), and 2) the momentum injected into the mixing layer (\(M_{\rm L}\)). This is the same concept as has been used for the modelling of the mass evolution, but with the added complexity that the momentum changes sign as the system undergoes its natural oscillations. The combination of a global oscillation at the kink frequency with the momentum extraction/injection process acts to make the momentum extraction (from the core) or injection (into the layer) oscillatory. Therefore, the forcing in the system driving the momentum change is oscillatory, meaning that it is natural to attempt to approximate this system by a forced linear oscillator. Firstly, to model the evolution of the momentum in the core, we know that the core of the tube is oscillating at the frequency of the kink wave (\(\omega_{\rm KINK}\)) and the momentum extraction is driven by the shear-flow dynamics which extract momentum at the same frequency. Therefore, we need to look at the case where the forcing occurs at the same frequency (i.e. the resonant frequency) as the natural oscillations of the system. This can be stated in equation form as \[\frac{dM_{\rm core}}{dt}=-\omega_{\rm KINK}^{2}I_{\rm core}-2\dot{F}_{\rm core}\cos(\omega_{\rm KINK}t), \tag{17}\] i.e. a forced linear oscillator where the system is being forced at a resonant frequency. Here \(M_{\rm core}\) is the momentum of the core; we define \(I_{\rm core}\) such that \(dI_{\rm core}/dt=M_{\rm core}\), and the forcing (\(\dot{F}_{\rm core}\)) is given by \[\dot{F}_{\rm core}=2R\overline{M}_{0}\frac{dh}{dt}, \tag{18}\] with \(\overline{M}_{0}\) the initial momentum density given by \[\overline{M}_{0}=\rho_{i}V_{i}, \tag{19}\] where \(V_{i}\) is the speed of the dense material, in this case \(V_{i}=0.15\) as can be seen in Figure 6. Equation 17 has the solution \[I_{\rm core}(t)=A\sin(\omega_{\rm KINK}t)+B\cos(\omega_{\rm KINK}t)-\frac{\dot{F}_{\rm core}}{\omega_{\rm KINK}}t\sin(\omega_{\rm KINK}t). \tag{20}\] If we follow our initial assumption that the forcing term is in phase with the oscillations, this implies that \(B=-\dot{F}_{\rm core}/\omega_{\rm KINK}^{2}\), which leads to \[I_{\rm core}(t)=\frac{\overline{M}_{0}}{\omega_{\rm KINK}}\sin(\omega_{\rm KINK}t)-\frac{\dot{F}_{\rm core}}{\omega_{\rm KINK}^{2}}\cos(\omega_{\rm KINK}t)-\frac{\dot{F}_{\rm core}}{\omega_{\rm KINK}}t\sin(\omega_{\rm KINK}t). \tag{21}\] This leads to the momentum evolution of the core following \[M_{\rm core}=2RH\overline{M}_{0}\cos(\omega_{\rm KINK}t)\left(1-\frac{0.3}{2H}\frac{\rho_{i}^{1/4}\rho_{e}^{3/4}}{(\sqrt{\rho_{i}}+\sqrt{\rho_{e}})^{2}}\Delta Vt\right). \tag{22}\] To model the momentum injected into the mixing layer requires more subtlety. Unlike the mass evolution, the momentum is not a positive definite quantity; it oscillates around zero, so the sign of the momentum injected into the mixing layer changes periodically. On top of this, the local field lines in the mixing layer have their own characteristic frequency of oscillation (i.e. Alfvén frequencies), which can be different from the frequency with which momentum is being injected.
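The secular damping implied by the resonantly forced oscillator can be checked directly; a minimal scipy sketch (with an illustrative forcing amplitude and unit initial momentum, not values from the simulation) integrates Eq. (17) and compares against the \((M_{0}-\dot{F}t)\cos(\omega t)\) pattern that underlies Eq. (22):

```python
import numpy as np
from scipy.integrate import solve_ivp

# dI/dt = M, dM/dt = -w^2 I - 2*Fdot*cos(w t), forced at resonance (Eq. 17)
w, Fdot, M0 = 0.53, 0.01, 1.0

def rhs(t, y):
    I, M = y
    return [M, -w**2 * I - 2.0 * Fdot * np.cos(w * t)]

# initial conditions consistent with Eq. (21): I(0) = -Fdot/w^2, M(0) = M0
sol = solve_ivp(rhs, [0.0, 50.0], [-Fdot / w**2, M0],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (10.0, 30.0, 50.0):
    print(sol.sol(t)[1], (M0 - Fdot * t) * np.cos(w * t))   # the two columns agree
```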
The simple model we put forward to explain how the momentum in the mixing layer develops is based again on a forced linear oscillator, but this time the characteristic frequency of the oscillation of the layer and that of the core are not assumed to be the same. We consider a single, representative Alfvén frequency for the mixing layer (a composite Alfvén frequency or kink frequency of the layer), based on the turbulent motions coupling all the different regions of the layer and allowing it to develop oscillations at a single representative frequency. As the model of Hillier and Arregui (2019) proposes, the representative density of the layer is \(\sqrt{\rho_{i}\rho_{e}}\). Therefore, the simple estimate of the non-dimensional frequency would be \(\omega_{\rm A}=kB/(\rho_{i}\rho_{e})^{1/4}\), with \(k=\pi/(2L_{z})\) the longitudinal wavenumber of the fundamental mode. However, as the inverse of the square-root of a mean is not the same as the mean of the inverse of a square-root, we calculate the latter from the model of the density distribution across the layer from Hillier and Arregui (2019) and use this to calculate the approximate Alfvén frequency of the mixing layer (\(\omega_{\rm A}\approx 0.6\)). Here we model the whole layer as a single forced linear oscillator with momentum \(M_{\rm L}\). Mathematically, the evolution of the momentum would be modelled as: \[\frac{dM_{\rm L}}{dt}=-\omega_{\rm A}^{2}I_{\rm L}-2\dot{F}_{L}\cos(\omega_{\rm KINK}t), \tag{23}\] where the forcing is occurring at the kink frequency (\(\omega_{\rm KINK}\)). We define \(I_{\rm L}\) such that \(dI_{\rm L}/dt=M_{\rm L}\) and \(\dot{F}_{L}\) is the forcing term relating to the rate at which momentum is added into the mixing layer from the external regions. This is given by \[\dot{F}_{L}=\overline{M}C_{1}\frac{(\rho_{i}\rho_{e})^{1/4}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}\Delta VR, \tag{24}\] with \(\overline{M}\) as the momentum injection calculated from the model of Hillier and Arregui (2019) over a region of unit thickness. The precise calculation is given by \[\overline{M}=\left(0.15-\frac{\Delta V\sqrt{\rho_{e}}}{\sqrt{\rho_{i}}+\sqrt{\rho_{e}}}\right)\times\int_{y^{\prime}(\rho_{min})}^{y^{\prime}(\rho_{max})}\rho dy^{\prime}. \tag{25}\] The term in the bracket gives the mean velocity of the mixing layer in the rest frame of the simulation. This is then multiplied by the integral of the density between the two density limits of interest across the predicted average density profile of the mixing layer model of Hillier and Arregui (2019) for a layer with width of unit length. Equation 23 leads to the well-known solution (in the case that the characteristic Alfvén frequency of the layer is not the same as the kink frequency of the tube) \[I_{\rm L}=A\sin(\omega_{\rm A}t)+B\cos(\omega_{\rm A}t)+\frac{2\dot{F}_{L}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}\cos(\omega_{\rm KINK}t). \tag{26}\] If we take \(I_{\rm L}=0\) at \(t=0\), this implies that \[B=-\frac{2\dot{F}_{L}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}. \tag{27}\] Equation 26 can be differentiated to give the momentum evolution in the mixing layer as \[M_{\rm L}=A\omega_{\rm A}\cos(\omega_{\rm A}t)+\frac{2\dot{F}_{L}\omega_{\rm A}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}\sin(\omega_{\rm A}t)-\frac{2\dot{F}_{L}\omega_{\rm KINK}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}\sin(\omega_{\rm KINK}t).
\tag{28}\] We then have \(A=M_{\rm L}(0)/\omega_{\rm A}\) giving \[M_{\rm L}= M_{\rm L}(0)\cos(\omega_{\rm A}t)+\frac{2\dot{F}_{L}\omega_{\rm A}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}\sin(\omega_{\rm A}t)\] \[-\frac{2\dot{F}_{L}\omega_{\rm KINK}}{\omega_{\rm A}^{2}-\omega_{\rm KINK}^{2}}\sin(\omega_{\rm KINK}t). \tag{29}\] Figure 9 shows the comparison between the predicted momentum and kinetic energy evolution of the core of the tube (\(M_{\rm core}\)) and the mixing layer (\(M_{L}\)) and the corresponding simulation results. What is presented here is not a fit, but the solution of an initial value problem compared with the results from the 3D simulations. This model clearly captures the key features of the evolution. Going further, we can simplify Equation 29 by taking \(M_{\rm L}(0)=0\) and looking at the case where \(\omega_{\rm A}-\omega_{\rm KINK}\) is small. By setting \(\omega_{\rm D}=(\omega_{\rm A}-\omega_{\rm KINK})/2\) and \(\omega_{\rm S}=(\omega_{\rm A}+\omega_{\rm KINK})/2\), which means \(\omega_{\rm A}=\omega_{\rm S}+\omega_{\rm D}\) and \(\omega_{\rm KINK}=\omega_{\rm S}-\omega_{\rm D}\), we have \[M_{\mathrm{L}}\approx \frac{\dot{F}(\omega_{\mathrm{S}}+\omega_{\mathrm{D}})}{2\omega_{\mathrm{S}}\omega_{\mathrm{D}}}\sin((\omega_{\mathrm{S}}+\omega_{\mathrm{D}})t)-\frac{\dot{F}(\omega_{\mathrm{S}}-\omega_{\mathrm{D}})}{2\omega_{\mathrm{S}}\omega_{\mathrm{D}}}\sin((\omega_{\mathrm{S}}-\omega_{\mathrm{D}})t). \tag{30}\] This can be expanded out through double angle formulas to give \[M_{\mathrm{L}}\approx \frac{\dot{F}}{2\omega_{\mathrm{S}}\omega_{\mathrm{D}}}\times \tag{31}\] \[((\omega_{\mathrm{S}}+\omega_{\mathrm{D}})(\sin(\omega_{\mathrm{S}}t)\cos(\omega_{\mathrm{D}}t)+\cos(\omega_{\mathrm{S}}t)\sin(\omega_{\mathrm{D}}t))\] \[-(\omega_{\mathrm{S}}-\omega_{\mathrm{D}})(\sin(\omega_{\mathrm{S}}t)\cos(\omega_{\mathrm{D}}t)-\cos(\omega_{\mathrm{S}}t)\sin(\omega_{\mathrm{D}}t))).\] Taking that \(\omega_{\mathrm{D}}\ll\omega_{\mathrm{S}}\), this can again be simplified to \[M_{\mathrm{L}}\approx\frac{\dot{F}}{\omega_{\mathrm{D}}}\cos(\omega_{\mathrm{S}}t)\sin(\omega_{\mathrm{D}}t). \tag{32}\] This implies we expect our momentum in the mixing layer to oscillate with a frequency of \(\omega_{\mathrm{S}}\) inside an envelope that evolves as \(\sin(\omega_{\mathrm{D}}t)\). Our result above implies that we can refine our prediction for when this initial phase of the dynamics will end. The model we have put forward for this period of the dynamics is that of the oscillations in the core of the tube driving the motions. However, it is clear from panel (b) of Figure 9 (showing the kinetic energy evolution of the core, mixing layer bulk motions and the turbulent fluctuations) that we reach a time where the assumption that the core of the tube holds the majority of the kinetic energy associated with coherent motions no longer holds.
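As a consistency check on this beating behaviour, the short sketch below (with illustrative stand-in values for the frequencies and forcing, not measured simulation quantities) evaluates Equation 29 with \(M_{\rm L}(0)=0\) against the approximate beat form of Equation 32.

```python
# Check that the off-resonant solution (Equation 29 with M_L(0) = 0) reduces
# to the beat form of Equation 32 when omega_A and omega_KINK are close.
# omega_A follows the text (~0.6); the other values are illustrative.
import numpy as np

omega_A = 0.6   # composite Alfven/kink frequency of the layer (from the text)
omega_K = 0.5   # assumed kink frequency of the core (illustrative)
Fdot = 0.02     # assumed forcing amplitude (illustrative)

omega_S = 0.5 * (omega_A + omega_K)   # "sum" (carrier) frequency
omega_D = 0.5 * (omega_A - omega_K)   # "difference" (envelope) frequency

t = np.linspace(0.0, 200.0, 8000)

# Equation 29 with M_L(0) = 0:
M_exact = (2.0 * Fdot / (omega_A**2 - omega_K**2)) * (
    omega_A * np.sin(omega_A * t) - omega_K * np.sin(omega_K * t))

# Equation 32: carrier at omega_S inside a sin(omega_D t) envelope.
M_beat = (Fdot / omega_D) * np.cos(omega_S * t) * np.sin(omega_D * t)

print("relative residual:",
      np.max(np.abs(M_exact - M_beat)) / np.max(np.abs(M_exact)))
```

The residual is of order \(\omega_{\rm D}/\omega_{\rm S}\), as expected from the expansion; the \(\sin(\omega_{\rm D}t)\) envelope is what sets the timescale on which the layer's coherent momentum grows at the expense of the core.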
At a time of \(t\approx 33\) the energy of the coherent motions of the mixing layer becomes greater than that of the oscillations of the core. So even without including the energy of the turbulent motions we can see that our assumption that the core energetically dominates the dynamics only holds for a limited time. This switch from the core to the layer dominating the energetics might explain why the model is less accurate once we reach a time of \(t\approx 40\).

### Predicted evolution of amplitude and velocity amplitude of the wave motions

With predictions for both the momentum evolution and the mass evolution above a given density threshold, the oscillation can then be characterised through an estimate of its centre-of-mass velocity. This can be formulated mathematically as \[V_{\mathrm{CoM},\rho\geq\rho_{\mathrm{T}}}=\frac{\int_{\rho\geq\rho_{\mathrm{T}}}\rho v_{x}dA}{\int_{\rho\geq\rho_{\mathrm{T}}}\rho dA}=\frac{M_{\rho\geq\rho_{\mathrm{T}}}}{m_{\rho\geq\rho_{\mathrm{T}}}}, \tag{33}\] with \(\rho_{\mathrm{T}}\) the threshold density used. That is, the integral of the density-weighted velocity divided by the integral of the density gives a centre-of-mass velocity, but this is just the momentum divided by the mass. As both \(m_{\rho\geq\rho_{\mathrm{T}}}\) and \(M_{\rho\geq\rho_{\mathrm{T}}}\) have direct predictions, as shown in the previous subsections, we can use these to subsequently make a prediction of \(V_{\mathrm{CoM},\rho\geq\rho_{\mathrm{T}}}\). Panels (a) and (c) of Figure 10 show the temporal evolution of the centre-of-mass velocity for just the core of the tube at its apex (panel a) and when the mixing layer is also included (panel c). Unsurprisingly, given the accuracy of the momentum and mass evolution up until a time of \(t\approx 40\), these panels show the model is a good representation of the simulation results. A measure of the evolution of the velocity of the loop apex over time can be used to make a prediction for the evolution of the position of the loop apex (i.e. the wave amplitude) over time. By integrating \(V_{\mathrm{CoM},\rho\geq\rho_{\mathrm{T}}}\) over time, we can make a subsequent prediction for the amplitude evolution (\(A_{\rho\geq\rho_{\mathrm{T}}}\)) of \[A_{\rho\geq\rho_{\mathrm{T}}}= \int_{0}^{t}\frac{\int_{\rho\geq\rho_{\mathrm{T}}}\rho v_{x}dA}{\int_{\rho\geq\rho_{\mathrm{T}}}\rho dA}dt+A_{\rho\geq\rho_{\mathrm{T}}}(0)= \int_{0}^{t}\frac{M_{\rho\geq\rho_{\mathrm{T}}}}{m_{\rho\geq\rho_{\mathrm{T}}}}dt+A_{\rho\geq\rho_{\mathrm{T}}}(0). \tag{34}\] Due to the nonlinear nature of the system we are studying, the \(A_{\rho\geq\rho_{\mathrm{T}}}\) defined above, though dimensionally the same, is different from the centre-of-mass amplitude, which is defined as \[A_{\mathrm{CoM},\rho\geq\rho_{\mathrm{T}}}=\frac{\int_{\rho\geq\rho_{\mathrm{T}}}\rho xdA}{\int_{\rho\geq\rho_{\mathrm{T}}}\rho dA}. \tag{35}\] This difference in formulation is very important to consider when comparing results for a nonlinear system, as differing definitions will lead to some differences in the results. This is seen in panels (b) and (d) of Figure 10 where, looking only at the core of the tube (with its simpler evolution), the model prediction for the amplitude evolution (\(A_{\rho\geq\rho_{\mathrm{T}}}\)) is a reasonable representation of the centre-of-mass amplitude from the simulation (\(A_{\mathrm{CoM},\rho\geq\rho_{\mathrm{T}}}\)).
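The chain from the model outputs to an observable amplitude is short enough to summarise in code. The sketch below (with placeholder time series standing in for the model momentum and mass of the previous subsections, not the actual model output) implements Equations 33 and 34: divide the momentum by the mass to obtain the centre-of-mass velocity, then integrate in time for the amplitude.

```python
# Minimal sketch of Equations 33-34: V = M/m, then A = integral of V dt.
# The arrays below are illustrative placeholders for the model predictions
# of M_rho(t) and m_rho(t) above a chosen density threshold.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 100.0, 5000)
M_model = np.cos(0.5 * t) * np.exp(-0.01 * t)   # oscillating, slowly decaying momentum
m_model = 1.0 + 0.02 * t                        # mass growing as material is entrained

V_com = M_model / m_model                              # Equation 33
A0 = 0.0                                               # assumed initial displacement
A = A0 + cumulative_trapezoid(V_com, t, initial=0.0)   # Equation 34

# Note: A defined this way is a time integral of the centre-of-mass velocity;
# for a nonlinear system it need not equal the centre-of-mass position of
# Equation 35, which is exactly the caveat raised in the text.
print("final amplitude estimate:", A[-1])
```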
However, in panel (d) there is a much greater decrease in the centre-of-mass amplitude (not seen in the centre-of-mass velocity) compared to the model.

Figure 10: Comparison of the model and the simulation for the centre-of-mass velocity amplitude (panels (a) and (c)) and the centre-of-mass amplitude of the oscillatory motions (panels (b) and (d)). Panels (a) and (b) show the temporal evolution for the material with \(\rho\geq 2.7\) and panels (c) and (d) show the temporal evolution for the material with \(\rho\geq 1.1\). The solid lines are calculated from the simulation and the dashed lines are \(V_{\rm CoM,\rho\geq\rho_{T}}\) (panels (a) and (c)) or \(A_{\rho\geq\rho_{T}}\) (panels (b) and (d)).

### Bounding the heating rate in the first phase of the dynamics

An important question, especially in terms of how the results of this paper may impact our understanding of wave heating of the solar corona, is: can we predict the heating rates in the simulation from the model of Kelvin-Helmholtz turbulence? As we have performed an ideal MHD simulation, we do not have any explicit heating terms from which to calculate this, but the numerical dissipation will (due to the use of a conservative scheme) result in energy lost from the kinetic and magnetic energies being added to the internal energy. Combined with the choice of boundary conditions, which mean energy cannot leave the simulation domain, this gives us some measure of the dissipative heating from the turbulence. The black line in Figure 11 shows the increase from the initial value of the total internal energy over time. At the end of this phase (around \(t\sim 40\)) we find a total internal energy increase in the simulation volume that is \(\approx 0.1\%\) of the initial internal energy of the simulation domain. To make a prediction of the rate at which the internal energy increases (i.e. a heating rate), first we need to make an estimate of how the magnitude of the turbulent energy (both in the velocity and the magnetic fields) held in the mixing layer evolves over time. To do this, we need to account for all the energy that has not been previously accounted for in Section 3.2, i.e. energy not in the bulk motions of the layer. This energy can be distributed into three categories: the turbulent energy, the energy in the variation of the mean, and the energy lost by the forcing of the mixing layer not being resonant with the layer's natural frequency. The first of these is simple to understand: it is the energy that is predicted to be in turbulent fluctuations by the model of Hillier and Arregui (2019). The total energy of this component is calculated as (Hillier and Arregui, 2019) \[E_{\rm turb}=\frac{1}{16}\frac{\rho_{i}\rho_{e}}{(\sqrt{\rho_{i}}+\sqrt{\rho_{e}})^{2}}\Delta V^{2}2Rh(t)L_{z}, \tag{36}\] where a factor of 1/2 has been introduced as a consequence of integrating along the loop. The second of these is not seen as turbulent energy by Hillier and Arregui (2019), but has to do with the mean flow having a distribution across the mixing layer. This enters the calculations here as the mixing layer is no longer moving with the shear driving but is drifting out of phase. As such it is not clear how much of the energy we expect to be in the mean variation is kept there and how much might be released for further turbulence. The total energy of this component is calculated as \[E_{\rm{meandist}}\approx E_{\rm{turb}}.
\tag{37}\] Finally, we have the energy that is lost by forcing the mixing layer at a frequency different to its natural oscillatory frequency. This is just the difference between the energy of the bulk motions of the mixing layer when they are forced at the kink frequency and when forced at the natural frequency of the layer. This difference is more pronounced for larger density contrasts, highlighting a greater potential for heating in those situations. As this calculation already takes into account the cross-sectional area, it is multiplied by \(L_{z}\) to give a total energy \(E_{\rm{force}}\). The total possible energy in the turbulence in the mixing layer can then be calculated by summing these three: i.e. \(E_{\rm{tot}}=E_{\rm{turb}}+E_{\rm{meandist}}+E_{\rm{force}}\). The evolution of \(E_{\rm{tot}}\) is shown with the dashed red line in Figure 11. This can be used as an expected upper limit from turbulent heating in the simulation. We can add more nuance to these arguments. We should expect a time delay between energy being brought into the mixing layer and its dissipation after a turbulent cascade. To model this, we can use the approximate self-similarity of the turbulent energy cascade, which implies that the velocity fluctuations at a given length scale (\(l\)) scale as \(\propto(l/l_{0})^{1/3}\), where \(l_{0}\) is the largest scale of the turbulent cascade or integral length scale (Kolmogorov, 1941). Assuming that the turbulent cascade involves the standard nonlinear process where energy is passed from one scale to the scale of half that size, and that the turbulence cascades to infinitesimal scales (the Reynolds and magnetic Reynolds numbers tending to infinity), leads to the following estimate of the dissipation time \(\tau_{\rm{DISS}}\) (e.g. Onsager, 1949) \[\tau_{\rm{DISS}}=\sum_{i=0}^{\infty}\frac{l_{i}}{v_{i}}=\frac{l_{0}}{v_{0}}\left(1+\frac{1}{2^{2/3}}+\frac{1}{2^{4/3}}+...\right)\approx 2.70\frac{l_{0}}{v_{0}}, \tag{38}\] where the subscript denotes the level in the cascade and \(\tau_{\rm{EDDY}}=l_{0}/v_{0}\) is the large-scale vortex turnover time. This leads us to estimate that, of the energy that has been brought into the mixing layer and has formed part of the turbulent fluctuations at any given time, a fraction \(\tau_{\rm{EDDY}}/\tau_{\rm{DISS}}=1/2.7\) will have been dissipated, leaving 1.7/2.7 as part of the turbulence. This leads to an estimation of a lower bound for the heating rate of \((E_{\rm{turb}}+E_{\rm{force}})/2.7\). This is shown with the red solid line in Figure 11. Overall, the two bounds we derive do bracket the growth of the internal energy of the simulation. The fact that the growth is closest to the upper bound may imply that the turbulent energy is very efficiently dissipated (in much less than 2.7 turnover times) or, more likely, that some extra heating occurs because the energy from the initial kick is not all trapped in the oscillation.

## 4 Modelling energy dissipation in the second phase of the dynamics

Having developed a model to explain the evolution of the first phase, we now turn our attention to the second phase of the dynamics. The key characteristic of this regime that we use to guide the model is that the energy of the dynamics is dominated by the turbulent component (see Figure 12 after \(t\approx 40\)). It is also important to note that no further energy is input into the system, and without an energy source the turbulence will decay over time (again see Figure 12 after \(t\approx 40\)).
Therefore, we focus on developing a model for the decay of the turbulence.

Figure 11: Change in internal energy (solid black line) with predicted slope based on the model of turbulent heating as the Reynolds number tends to infinity (solid red line) and the predicted upper limit of the heating rate (dashed red line).

To model the decay of the turbulence, we will no longer consider the energy of the oscillations and restrict our arguments solely to the evolution of the turbulence. This simplification makes the model we propose for the rate at which the turbulence decays that of simple decaying turbulence, independent of the oscillatory dynamics. The model we use is based on those first proposed by Taylor (1935) and Kolmogorov (1941), where the turbulent transport takes the turbulent energy held at large scales to smaller scales until it is dissipated. This can be understood simply through the nonlinear turbulent transport through spatial scales, which can be approximated by: \[\frac{dE_{\rm turb}}{dt}=-\mathbf{v}_{\rm turb}\cdot\nabla E_{\rm turb}, \tag{39}\] where \(E_{\rm turb}\) is the energy held in the turbulent fluctuations of the velocity and magnetic field and \(\mathbf{v}_{\rm turb}\) are the turbulent motions that transport the energy to smaller scales. The RHS of Equation 39 can be approximated by using the root-mean-square (RMS) of the turbulent motions (\(v_{\rm RMS}\)) and the largest lengthscale of the turbulence (also known as the integral lengthscale), and further by connecting to the energy of the turbulent fluctuations, to get (e.g. Kolmogorov, 1941; Onsager, 1949) \[\frac{dE_{\rm turb}}{dt}\approx-C\frac{v_{\rm RMS}}{L(t)}E_{\rm turb}\approx-\frac{C}{L(t)}\sqrt{\frac{2}{A(t)\sqrt{\rho_{i}\rho_{e}}}}E_{\rm turb}^{3/2}, \tag{40}\] where \(v_{\rm RMS}\approx\sqrt{2E_{\rm turb}/(A(t)\sqrt{\rho_{i}\rho_{e}})}\) (following Hillier & Arregui, 2019) is an estimate of the connection between the root-mean-squared velocity of the turbulent motions and the turbulent energy, with \(A(t)\) the area of the turbulent region and \(L(t)\) the integral length-scale of the turbulent motions. Figure 2 shows that at late times the area of the turbulent region is approximately constant, implying that \(A(t)\) is constant (which we denote as \(A_{0}\)). If the area is not changing then we can also assume that the integral lengthscale \(L(t)\) is constant, which we denote as \(l_{0}\) following Section 3.4. Separating the variables in Equation 40 and then integrating leads to the following integral equation: \[\int_{E(t_{0})}^{E(t)}E^{-3/2}dE=-\frac{C}{l_{0}}\sqrt{\frac{2}{A_{0}\sqrt{\rho_{e}\rho_{i}}}}\int_{t_{0}}^{t}dt. \tag{41}\] Therefore, \[E_{\rm turb}=\frac{1}{(\alpha(t-t_{0})+E(t_{0})^{-1/2})^{2}}, \tag{42}\] with \[\alpha=\frac{C}{l_{0}}\sqrt{\frac{2}{A_{0}\sqrt{\rho_{e}\rho_{i}}}}. \tag{43}\] Based on the area of the mixing layer shown in Figure 1, we take \(A_{0}\) to be an annulus with outer radius of 0.45 and inner radius of 0.15. This leads to a lengthscale \(l_{0}\), which we take to be the difference between the inner and outer radius, of 0.3. For dynamical consistency we take that the unknown constant \(C\) has the value \(C_{1}/2.7\), where the 2.7 comes from the cascade time for Kolmogorov turbulence (see Section 3.4), i.e. \(C=C_{1}/2.7=0.3/2.7\). This then gives \[\alpha=\frac{0.3}{2.7l_{0}}\sqrt{\frac{2}{A_{0}\sqrt{\rho_{e}\rho_{i}}}}. \tag{44}\] The simple implication from this model is that the turbulent energy should decay at late times \(\propto t^{-2}\).
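The numbers entering this decay law are all stated above, so the prediction is easy to reproduce. The sketch below (assuming, for illustration, an internal-to-external density contrast \(\rho_{i}/\rho_{e}=3\) and a placeholder starting energy; the annulus geometry follows the text, and the extra \(\sqrt{4/3}\) factor used in the comparison below is omitted) evaluates the cascade factor of Equation 38, \(\alpha\) from Equation 44, and the decay of Equation 42, whose late-time behaviour is the \(t^{-2}\) law just quoted.

```python
# Sketch of the decaying-turbulence prediction, Equations 42 and 44, with the
# cascade factor of Equation 38 verified as a geometric series. The densities
# and starting energy are illustrative assumptions; the geometry is from the text.
import numpy as np

# Equation 38: sum_i 2^(-2i/3) = 1/(1 - 2^(-2/3)) ~ 2.70
cascade = 1.0 / (1.0 - 2.0 ** (-2.0 / 3.0))
print("cascade factor:", cascade)

rho_i, rho_e = 3.0, 1.0              # assumed density contrast (illustrative)
R_out, R_in = 0.45, 0.15             # annulus radii quoted in the text
A0 = np.pi * (R_out**2 - R_in**2)    # area of the turbulent annulus
l0 = R_out - R_in                    # integral length scale, 0.3
C = 0.3 / cascade                    # C = C1/2.7 with C1 = 0.3

alpha = (C / l0) * np.sqrt(2.0 / (A0 * np.sqrt(rho_e * rho_i)))   # Equation 44

t0, E0 = 40.0, 0.01                  # assumed start time and energy (illustrative)
t = np.linspace(t0, 200.0, 500)
E = 1.0 / (alpha * (t - t0) + E0 ** -0.5) ** 2    # Equation 42

# At late times E ~ (alpha * t)^(-2): the classic t^-2 decay quoted above.
print("alpha =", alpha, "; E(t0), E(end):", E[0], E[-1])
```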
This is a classic result for decaying turbulence with a fixed length-scale of the large-scale motions (e.g. Taylor, 1935; Oberlack, 2002; Sagaut & Cambon, 2018). Figure 13 shows the comparison between the theoretically predicted decay and that shown for the kinetic energy in the simulation at the apex of the tube. Note that \(\alpha\) has been multiplied by a factor of \(\sqrt{4/3}\) because we predict that the total kinetic energy will be \(4/3\) greater than the kinetic energy in the \(x\)-direction. This can be inferred from the model of Hillier and Arregui (2019) due to the approximate equipartition between the energy found in the component of the flow varying about the mean layer velocity for a shear flow and the turbulent component of the flow.

Figure 12: Kinetic energy of the x component of the velocity at the apex of the oscillation for the core of the tube (black line), the coherent motions of the mixing layer (red line), with the energy of everything left, i.e. the incoherent, turbulent motions (blue line).

Overall the match shown in Figure 13 between the model and simulation results is good, though this is only over a relatively short time range, making stronger statements difficult. If the energy of the turbulence is decaying, the reason is that the energy held in the turbulent fluctuations is being dissipated. Therefore, we can expect the decay of the turbulence to directly connect to heating: if the turbulent energy goes down, this should directly correspond to the same level of increase in thermal energy. Assuming the turbulent energy (taking the turbulent energy to be the sum of turbulent kinetic energy and turbulent magnetic energy) decays uniformly along the flux tube, we can integrate \(E(t_{0})-E_{\rm turb}\) over the length of the tube, and this gives the amount of energy dissipated. To take into account the velocity (and with that magnetic field) fluctuations in the \(y\)-direction we multiply \(E(t_{0})-E_{\rm turb}\) by \(4/3\) and again take the same factor of \(\sqrt{4/3}\) in \(\alpha\). The comparison between the relative increase in internal energy of the model and the simulation is shown in Figure 14, with the black line giving the simulation results and the red line that of the model (the model is only shown after the time of \(t=42\) from which it was derived). The model clearly provides a reasonable representation of the evolution, implying that the key physical concepts needed to understand the increase in thermal energy of the simulation have been included in the model.

## 5 Summary and Discussion

In this paper we have investigated the dynamics of a nonlinear kink wave excited in a flux tube. The numerical study was designed to capture the fundamentals of the system we are interested in, i.e. kink oscillations of coronal loops, whilst maintaining sufficient simplicity and conserving energy in the domain to make analysis straightforward. The oscillating tube develops the Kelvin-Helmholtz instability on its surface, developing a turbulent layer. It is the growth of this turbulent layer that characterises the first stage of the dynamics in the simulation. Once the layer becomes sufficiently large and becomes the dominant source of kinetic energy, the dynamics transition to a second stage characterised by the decay of the turbulent energy in an annulus around the much-reduced core of the loop. We presented simplified analytic models to explain key aspects of these two stages.
The first stage of the dynamics is characterised by the growth of a turbulent layer that grows until most of the tube cross-section is turbulent. To model this evolution, we combined the mixing models of Hillier and Arregui (2019) and Hillier et al. (2023) to create new models for how the mass and momentum (and as a consequence energy, wave velocity, wave amplitude and heating) evolve over time.

Figure 13: Plot of the normalised turbulent energy against modified time. The black line shows the results of the simulation and the red line the predicted decay from the model.

Figure 14: Evolution of total thermal energy over time (black line). The model of the expected increase in thermal energy as a consequence of heating due to the decay of turbulence is shown with the red line.

This model made some clear predictions about when this stage has to end. As the growth of the mixing layer is linearly proportional to the magnitude of the shear flow, we would naturally expect this stage to finish sooner for highly nonlinear waves. A fundamental aspect of the model for this stage of the evolution is the self-similar evolution of the turbulent layer. This model was developed for steady hydrodynamic shear flows, e.g. Winant and Browand (1974). However, in this study we have a strong magnetic field, an oscillating flow and a curved boundary. In spite of these departures from the base model, the self-similar evolution can still be clearly distinguished. This implies that this is a robust physical mechanism that could occur in flows in many systems with strong magnetic fields. As the self-similar model comes from dimensional analysis of the system, it inherently has unknown constants associated with it. For the case presented in this paper, the constant we determined was consistent with previous hydrodynamic studies of shear-layer mixing. However, a wider parameter survey is necessary to determine if this constant has some as yet unknown dependency on the system that is not built into the model.

In the second stage of the evolution, as the energy of the turbulent motions dominates the energy of the coherent wave motions, the model we propose to understand this phase focuses on the decay of this turbulent energy. This model provides a good prediction of the temporal decay of the turbulent energy of the simulation, which allows a reasonable estimate of the heating in the simulation due to this turbulent decay. One key aspect of this model is that we assume the characteristic length-scale of the turbulence remains fixed. However, the growth of the turbulent area shown by the difference between the blue and red lines in Figure 2 implies that even after \(t\approx 50\) the area does increase. This could imply that there is also some growth in the largest scale of the turbulence in the layer over time. This would change the exponent of the power law of the turbulent decay (e.g. Kolmogorov, 1941; Oberlack, 2002). Clearly, the simulated energy evolves close to \(\sim t^{-2}\), but it would be interesting to explore how including an evolving length-scale improves the modelling of the system. With both phases of the dynamics, what is striking about the models proposed is that they are both taken almost directly from their hydrodynamic equivalents, without major revision due to the presence of strong magnetic fields.
We have implicitly used the magnetic fields in both phases, mainly through assuming that Alfven waves are travelling back and forth along the magnetic field to connect turbulent fluctuations in one region in \(z\) to other regions in \(z\). In the first phase this implies the influence of the mixing at the apex is felt along the full length of the tube, creating velocity and magnetic field fluctuations all along the tube, and allowing the coherent oscillations of the mixing layer to develop. For the second phase, this connection means that we assume turbulence has developed along the full length of the tube and that the magnitude of the turbulent energy (whether it is in turbulent kinetic energy or turbulent magnetic energy) is the same along the tube due to the magnetic field connecting regions along \(z\). This assumption allows us to calculate heating rates for the system. Another consequence of the magnetic field is that it makes the turbulent motions highly anisotropic (with KHi swirls being highly elongated along the \(z\)-direction, e.g. Antolin et al., 2018), giving quasi-2D like behaviour. This may imply that the value of the constant \(C_{1}\) should be nearer to \(0.5\) than the value of \(\sim 0.3\) we have found for the first phase of the dynamics. However, we know that some of the energy of the turbulence is held in magnetic fluctuations, which will reduce the magnitude of the turbulent motions, slowing mixing. Assuming that there is equipartition between the energy of velocity and magnetic field fluctuations, this would correspond to a factor of \(1/\sqrt{2}\) in Equation 11, which would increase the value found for \(C_{1}\) by \(\sqrt{2}\). Taking that the velocity shear is maximal at the apex and zero at \(z=0\) because of the nature of the MHD kink wave under study, this would again put another factor of \(1/\sqrt{2}\) in Equation 11 and give another subsequent increase in the value found for \(C_{1}\) by \(\sqrt{2}\). Therefore, in some sense, part of the effect of the magnetic field on the turbulence model is wrapped up in the value of the constant \(C_{1}\).

### Resonant absorption and its lack of consequence in our simulations

For the simulations presented in this paper, as with many others of KHi turbulence developing from oscillations of tubes with thin boundaries (Terradas et al., 2008; Magyar and Van Doorsselaere, 2016), the turbulent mixing broadens the tube boundary. This takes the tube density profile to be one where resonant layers (Goossens et al., 1992) would exist for a linear wave perturbation. Therefore, it may be expected that resonant absorption could develop in the tube once mixing has made a clear boundary layer between the inside and outside of the tube. However, there is no evidence of a resonant layer forming in the mixing layer at any point in our simulation, and our model does not need to invoke resonant absorption at all to explain the evolution of the turbulence. So why is resonant absorption not present? The simple answer is that the development of turbulence naturally inhibits the formation of a resonant layer. There are two explanations as to why we should not expect it to develop. Firstly, the KHi turbulence is constantly driving changes in the local conditions of the mixing layer, meaning that the resonant field lines are constantly changing, and through the swirling motion of the turbulence, field lines with different Alfven speeds are locked together and unable to move freely (which inhibits them from developing their own oscillations).
Secondly (another way of looking at this), turbulence at small scales can act as an effective diffusion at larger scales (e.g. Taylor, 1922). We can infer that there would be a strongly enhanced magnetic diffusion felt by the tube in the mixing layer as a consequence of the turbulent motions. As the efficacy of resonant absorption is greatly reduced in the presence of strong diffusion, it can be understood that turbulence would work to suppress the development of a resonant layer. It is worth noting that the magnitude of the turbulent diffusion scales with the magnitude of the velocity shear. Therefore, smaller shear flows will produce less turbulent diffusion and at sufficiently small magnitude may allow the resonant absorption process to manifest. There is, however, one further point to consider. The oscillation frequency of the mixing layer (where a resonant layer could develop) is not that of the original kink oscillation, but a hybrid of that frequency and the kink frequency of the layer itself. Therefore it is likely any resonant process would be associated with the oscillation frequency of the mixing layer and not that of the initial kink oscillation. If this is the case, then the growth of the KHi would completely remove the possibility of resonant absorption of the global kink mode from the system.

### Why is this model different from the other "KHi" damping model?

The model presented by Van Doorsselaere et al. (2021) is labelled in their work as damping by the Kelvin-Helmholtz instability. However, the model presented in this paper and their model have vastly different formulations, with very different dependencies on the density contrast (and, in the case of the model in this paper, two distinct regimes). This leads to the question: how can two different formulations be modelling the same physical phenomenon? The model presented in this paper is developed from direct analysis of the turbulence created by the Kelvin-Helmholtz instability and, as shown by the comparison in Hillier et al. (2020) and Section 2, gives results that are consistent with those found in simulations of loop oscillations that develop the Kelvin-Helmholtz instability. The model presented in Van Doorsselaere et al. (2021) used the linear eigenfunction for a kink wave and then prescribed a nonlinear amplitude to the oscillation. By integrating the wave oscillation over one period they could calculate the energy extraction rate through these nonlinearities. As they only study the particular wave mode associated with the kink wave, there are no other modes that are excited in the system. This means that there is no perturbation to seed the linear growth of the Kelvin-Helmholtz instability, and as a consequence there can be no Kelvin-Helmholtz turbulence. That is to say, the model they calculated does not allow the growth of the Kelvin-Helmholtz instability, so it cannot have any damping through Kelvin-Helmholtz turbulence. This leads to the question: what is the nature of the nonlinear damping investigated in Van Doorsselaere et al. (2021)? The initial conditions of their model are not an exact solution of the nonlinear equations (unlike a linear Alfven wave in incompressible nonlinear MHD). Therefore, the kink wave (characterised by a wavenumber \(k_{0}\) and azimuthal wavemode \(m=1\)) does work on the neighbouring Fourier mode (in this case with wavenumber \(k=2k_{0}\) and azimuthal wave number \(m=2\)).
As with any forcing problem, the energy will be trapped in that wave mode efficiently if the forcing (which occurs at twice the kink frequency) and the wave mode being forced are resonant (this is the basis of any wave turbulence argument, e.g. Goldreich and Sridhar, 1995). This clearly is a different mechanism to KHi damping of a kink wave as presented in this paper, highlighting that when considering nonlinear mechanisms for wave damping there are many different ways these nonlinearities can manifest.

### Heating rates and the implications for coronal heating

One of the important predictions of the models presented here is the heating rate from both stages of the evolution. To put these rates in context, we need to compare the characteristic heating rate of the simulation with the cooling time for comparable coronal plasma. The cooling time for the solar corona (based on a number density of \(10^{9}\,\mathrm{cm}^{-3}\)) is \(\tau_{\mathrm{cool}}\sim 10^{3}\,\mathrm{s}\). In the time units of our simulation (i.e. sound crossing times), if we assume a loop length of \(10^{10}\,\mathrm{cm}\) and a sound speed of \(1.2\times 10^{7}\,\mathrm{cm}\,\mathrm{s}^{-1}\), the cooling time corresponds to \(24\tau\). As this is an exponential decay, it is the equivalent of the thermal energy reducing by a factor of \(1/\exp(1)\). Note this ignores the losses through thermal conduction to the chromosphere and so should be taken merely as a crudely representative value. For the heating, we can see from Figure 11 that at \(t\approx 40\) the internal energy had increased by a value of \(\sim 0.018\). This gives a rate of increase of internal energy of \(0.00045\). Dividing the total internal energy at the start of the simulation, reduced by a factor of \(\exp(1)\), by this rate gives a heating time of \(\sim 10^{4}\tau\), which is significantly larger than the cooling time of \(24\tau\). Therefore, even though the kink wave in this simulation is a relatively nonlinear perturbation (with the initial kick of 20% of the sound speed), the heating rates found are significantly smaller than those needed to balance radiative losses. The small heating rates we find in our simulation are consistent with those found by Howson et al. (2017), where explicit viscosity and magnetic resistivity were included.

### A corollary on coronal seismology

The work presented in this paper leads to a very important corollary about the damping of kink waves in the solar atmosphere. It is standard to fit either exponential or, in some cases, Gaussian damping envelopes to the observed decay of loop oscillations (e.g. Nechaeva et al., 2019). However, these damping envelopes are fundamentally connected to linear wave theory. The work presented in Sections 3.2 and 3.3 highlights how KHi turbulence produces nonlinear wave damping. The wave damping envelope for this nonlinear mechanism has a different functional form to the linear damping profiles. Even more importantly, exactly how the tube motions are measured changes the damping envelope that is measured. This implies that when measuring the damping of kink waves from observations of the solar corona, it is important to consider more than just linear damping envelopes to understand the evolution.

AH is supported by STFC Research Grant No. ST/V000659/1. IA and AH are supported by project PID2021-127487NB-I00 from Ministerio de Ciencia e Innovacion and FEDER funds. AH and TM are supported by JSPS KAKENHI Grant Number 19K03669.
AH would like to acknowledge the discussions with members of ISSI Team 457 "The Role of Partial Ionization in the Formation, Dynamics and Stability of Solar Prominences", which have helped improve the ideas in this manuscript. AH and TM would like to acknowledge support by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. The active branch of the (PIP) code is available on GitHub ([https://github.com/AstroSnow/PIP](https://github.com/AstroSnow/PIP)). All simulation data is available upon reasonable request.
2307.03954
Magnon influence on the superconducting density of states in superconductor$-$ferromagnetic-insulator bilayers
Superconductor$-$ferromagnetic-insulator heterostructures are paradigmatic systems for studying the mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection, and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor$-$ferromagnetic-insulator interface on the spin-split superconductivity. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting density of states, which is typical for superconductors in the effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnet exchange field.
A. S. Ianovskaia, A. M. Bobkov, I. V. Bobkova
2023-07-08T11:20:25Z
http://arxiv.org/abs/2307.03954v2
# Magnon influence on the superconducting DOS in FI/S bilayers

###### Abstract

Superconductor/ferromagnetic insulator (FI/S) heterostructures are paradigmatic systems for studying the mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface on the spin-split superconductivity. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting DOS, which is typical for superconductors in the effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnet exchange field.

## I Introduction

Long ago it was demonstrated that the exchange field of ferromagnetic insulators (FIs), such as EuS and EuO, can spin-split the excitation spectrum of an adjacent thin-film superconductor [1; 2; 3]. The spin splitting in the DOS observed in those experiments resembles the spin splitting created by a strong in-plane field applied to a thin superconducting film. This discovery opened up the way for performing spin-polarized tunneling measurements without the need to apply large magnetic fields. A renewed interest in studying ferromagnet/superconductor (F/S) structures came with the active development of superconducting spintronics [4; 5], caloritronics and spin caloritronics [6; 7]. In particular, in F/S structures with a spin-split density of states (DOS) a series of promising phenomena have been studied. Among them are giant thermoelectric [8; 9; 10; 11; 12; 13; 14; 15; 16] and thermospin effects [9; 17; 18], highly efficient thermally-induced domain wall motion [19], spin and heat valves [20; 21; 22; 23; 24], cooling at the nanoscale [25; 26], low-temperature thermometry and the development of sensitive electron thermometers [27]. The spin-split DOS in F/S structures has also been explored in the presence of magnetic inhomogeneities, such as textured ferromagnets and domain walls [28; 29; 30; 31; 32; 33; 34]. Characteristic signatures of equal-spin triplet pairing were reported [32]. It was shown that the characteristic spatial and energy dependence of the spin-dependent DOS allows one to tomographically extract the structure of the spin-triplet Cooper pairs [34]. Furthermore, the influence of the domain structure on the position-averaged superconducting DOS in FI/S bilayers was studied [24]. Another important direction in the field of F/S hybrid structures is the investigation of the interplay between the superconducting state and ferromagnetic excitations - magnons. A series of interesting results, presumably related to the influence of the superconductor on the magnon spectrum, have been reported.
In particular, it was found that the adjacent superconductor works as a spin sink strongly influencing the Gilbert damping of the magnon modes [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45] and can result in shifting of the k = 0 magnon frequencies (Kittel mode) [40; 43; 44; 45]. The electromagnetic interaction between magnons in ferromagnets and superconductors also results in the appearance of magnon-fluxon excitations [46] and in efficient gating of magnons [47]. Further, it was reported that the magnetic proximity effect in thin-film F/S hybrids results in the appearance of magnon-cooparons, which are composed of a magnon in F and an accompanying cloud of spinful triplet pairs in S [48]. Some aspects of the back influence of magnons on the superconducting state have already been investigated. For example, a possible realization of magnon-mediated superconductivity in F/S hybrids has been proposed [49; 50; 51; 52; 53; 54]. At the same time, the influence of magnons via the magnetic proximity effect on the superconducting DOS has practically not been studied yet, although the electron-magnon interaction and its influence on the DOS in ferromagnetic metals were investigated long ago [55; 56]. Here we consider how the effects of electron-magnon interactions in FI/S thin-film hybrids manifest themselves in the superconducting DOS and quasiparticle spectra of the superconductor. It is found that the magnon-mediated electron spin-flip processes cause the interaction and mixing of the spin-split bands, resulting in their reconstruction, which is especially important near the edge of the superconducting gap. We demonstrate that the classical BCS-like Zeeman-split shape of the superconducting DOS can be strongly modified due to the electron-magnon interaction, and that this modification is temperature-dependent. The influence of magnons on the temperature dependence of the Zeeman splitting of the DOS and the relevance of our findings to existing and future experiments are also discussed. The paper is organized as follows. In Sec. II we describe the system under consideration and the Green's function formalism taking into account magnon self-energies. In Sec. III the modifications of the quasiparticle spectra in the superconductor due to the electron-magnon coupling are discussed. In Sec. IV we study signatures of the electron-magnon interaction in the Zeeman-split superconducting DOS and their temperature dependence. Our conclusions are summarized in Sec. V.

## II System and Formalism

We consider a thin-film bilayer as depicted in Fig. 1, in which a ferromagnetic insulator FI is interfaced with a conventional spin-singlet s-wave superconductor S. The thickness of the S layer \(d_{S}\) is assumed to be small compared to the superconducting coherence length \(\xi_{S}\). In this case the S layer can be considered as homogeneous along the normal to the interface plane. The FI layer in its ground state is magnetized in-plane, along the \(z\)-direction. The Hamiltonian of the system takes the form: \[\hat{H}=\hat{H}_{S}+\hat{H}_{FI}+\hat{H}_{ex}, \tag{1}\] where \(\hat{H}_{S}\) is the standard mean-field BCS Hamiltonian describing electrons in the superconducting film: \[\hat{H}_{S}=\sum_{\mathbf{k}\sigma}\xi_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}-\sum_{\mathbf{k}}\Delta c^{\dagger}_{\mathbf{k}\uparrow}c^{\dagger}_{-\mathbf{k}\downarrow}-\sum_{\mathbf{k}}\Delta^{*}c_{-\mathbf{k}\downarrow}c_{\mathbf{k}\uparrow}.
\tag{2}\] \(\xi_{\mathbf{k}}=k^{2}/2m-\mu\) is the normal state kinetic energy of the electrons in the S layer, counted from the chemical potential of the superconductor \(\mu\). \(\Delta\) is the superconducting order parameter in S, which is assumed to be of conventional isotropic \(s\)-wave type. \(c^{+}_{\mathbf{k}\sigma}\) and \(c_{\mathbf{k}\sigma}\) are creation and annihilation operators of electrons with the wave vector \(\mathbf{k}\) and spin \(\sigma\). \(\hat{H}_{FI}\) describes magnons in the FI. Assuming easy-axis magnetic anisotropy in the FI it can be written as \[\hat{H}_{FI}=\sum_{\mathbf{q}}(\omega_{0}+D\mathbf{q}^{2})b^{\dagger}_{\mathbf{q}}b_{\mathbf{q}}, \tag{3}\] where \(b^{+}_{\mathbf{q}}\) and \(b_{\mathbf{q}}\) are creation and annihilation operators of magnons in FI with wave vector \(\mathbf{q}\), \(\omega_{0}=|\gamma|(\mu_{0}H_{0}+2K_{a}/M_{s})\) is the magnonic frequency at \(q=0\), \(D\) is the magnon stiffness constant, \(\gamma\) is the typically negative gyromagnetic ratio, \(M_{s}\) is the saturation magnetization, \(\mu_{0}\) is the permeability of free space, \(K_{a}\) is the easy-axis anisotropy constant and \(H_{0}\) is the external field (which can be equal to zero in our consideration). Electronic and magnonic wave vectors \(\mathbf{k}\) and \(\mathbf{q}\) are assumed to be two-dimensional (2D), that is, the electrons and magnons can only propagate in the plane of the FI/S interface. The wave functions along the \(y\)-direction, perpendicular to the interface, are assumed to be quantized. For simplicity, we keep only one transverse magnon mode in the formulas. In fact, we have checked that different modes give quantitatively different, but qualitatively the same, contributions to the considered self-energies. Their effect can be accounted for by multiplying our results for the self-energy corrections by an effective number of working transverse modes (see below). \(\hat{H}_{ex}\) accounts for the exchange interaction between S and FI: \[\hat{H}_{ex}=-J\int d^{2}\mathbf{\rho}\mathbf{S}_{FI}(\mathbf{\rho})\mathbf{s}_{e}(\mathbf{\rho}), \tag{4}\] where \(\mathbf{\rho}\) is a two-dimensional radius-vector in the interface plane, \(\mathbf{S}_{FI}\) and \(\mathbf{s}_{e}\) are the spin density operators in the FI and S, respectively. \(J\) is the interface exchange constant. By performing the Holstein-Primakoff transformation to the second order in the magnonic operators in Eq.
(4) one obtains \[\hat{H}_{ex}=\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{3}, \tag{5}\] with \[\hat{H}_{1}=\sum_{\mathbf{k},\mathbf{k}^{\prime}}U_{\mathbf{k},\mathbf{k}^{\prime}}(c^{\dagger}_{\mathbf{k},\uparrow}c_{\mathbf{k}^{\prime},\uparrow}-c^{\dagger}_{\mathbf{k},\downarrow}c_{\mathbf{k}^{\prime},\downarrow}), \tag{6}\] \[U_{\mathbf{k},\mathbf{k}^{\prime}}=\frac{JM_{s}}{2|\gamma|}\int d^{2}\rho\Psi^{*}_{\mathbf{k}}(\mathbf{\rho})\Psi_{\mathbf{k}^{\prime}}(\mathbf{\rho}),\] \[\hat{H}_{2}=\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q},\mathbf{q}^{\prime}}T_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q},\mathbf{q}^{\prime}}b^{\dagger}_{\mathbf{q}}b_{\mathbf{q}^{\prime}}(c^{\dagger}_{\mathbf{k},\uparrow}c_{\mathbf{k}^{\prime},\uparrow}-c^{\dagger}_{\mathbf{k},\downarrow}c_{\mathbf{k}^{\prime},\downarrow}),\] \[T_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q},\mathbf{q}^{\prime}}=-\frac{J}{2}\int d^{2}\rho\Psi^{*}_{\mathbf{k}}(\mathbf{\rho})\Psi_{\mathbf{k}^{\prime}}(\mathbf{\rho})\phi^{*}_{\mathbf{q}}(\mathbf{\rho})\phi_{\mathbf{q}^{\prime}}(\mathbf{\rho}), \tag{7}\] \[\hat{H}_{3}=\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}V_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}(b_{\mathbf{q}}c^{\dagger}_{\mathbf{k},\uparrow}c_{\mathbf{k}^{\prime},\downarrow}+b^{\dagger}_{\mathbf{q}}c^{\dagger}_{\mathbf{k}^{\prime},\downarrow}c_{\mathbf{k},\uparrow}),\] \[V_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}=J\sqrt{\frac{M_{s}}{2|\gamma|}}\int d^{2}\rho\Psi^{*}_{\mathbf{k}}(\mathbf{\rho})\Psi_{\mathbf{k}^{\prime}}(\mathbf{\rho})\phi_{\mathbf{q}}(\mathbf{\rho}), \tag{8}\] where \(\hat{H}_{1}\) describes a spin-splitting of the electronic energy spectrum in S in the mean-field approximation. The second term \(\hat{H}_{2}\) represents the Ising term, which physically accounts for the renormalization of the spin splitting by the magnonic contribution. Since the processes of the spin transfer between electrons and magnons are of primary importance for our consideration, when calculating the electronic Green's function we simplify this term by substituting the magnon operator \(b^{\dagger}_{\mathbf{q}}b_{\mathbf{q}^{\prime}}\) by its averaged value \(\langle b^{\dagger}_{\mathbf{q}}b_{\mathbf{q}^{\prime}}\rangle=n_{\mathbf{q}}\delta_{\mathbf{q}\mathbf{q}^{\prime}}\), where \(n_{\mathbf{q}}\) is the density of magnons with wave vector \(\mathbf{q}\). The third term \(\hat{H}_{3}\) transfers spin between electron and magnon operators and will turn out to be the most significant for the effects under consideration.

Figure 1: Sketch of the FI/S thin-film bilayer.

If we choose the wave functions of electrons \(\Psi_{\mathbf{k}}(\mathbf{\rho})\) and magnons \(\phi_{\mathbf{q}}(\mathbf{\rho})\) at the interface in the form of plane waves propagating along the interface, that is \(\Psi_{\mathbf{k}}(\mathbf{\rho})=(1/\sqrt{d_{S}})e^{i\mathbf{k}\mathbf{\rho}}\) and \(\phi_{\mathbf{q}}(\mathbf{\rho})=(1/\sqrt{d_{FI}})e^{i\mathbf{q}\mathbf{\rho}}\), then \(\hat{H}_{ex}\) can be simplified: \[\hat{H}_{ex}=\tilde{U}\sum_{k}(c_{k,\uparrow}^{\dagger}c_{k,\uparrow}-c_{k,\downarrow}^{\dagger}c_{k,\downarrow})+V\sum_{k,q}(b_{q}c_{k,\uparrow}^{\dagger}c_{k-q,\downarrow}+b_{q}^{\dagger}c_{k-q,\downarrow}^{\dagger}c_{k,\uparrow}), \tag{9}\] where \(\tilde{U}=-J(M_{s}-N_{m}|\gamma|)/(2|\gamma|d_{S})\) is the averaged spin-splitting field in the superconductor renormalized by the magnon density \(N_{m}\), and \(V=J\sqrt{M_{s}/2|\gamma|d_{FI}A}(1/d_{S})\) is the electron-magnon coupling constant, where \(A\) is the area of the FI/S interface.
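Since the renormalization of \(\tilde{U}\) is controlled by the thermal magnon density, it is useful to note that for the quadratic dispersion \(\omega_{0}+Dq^{2}\) the areal density of thermally excited 2D magnons has a simple closed form, \(n_{m}=(k_{B}T/4\pi D)\ln[1/(1-e^{-\omega_{0}/k_{B}T})]\). The sketch below is a side calculation, not taken from the paper (units with \(\hbar=k_{B}=1\) and energies measured in \(\Delta\)); it checks this closed form against direct quadrature.

```python
# Thermal 2D magnon areal density for omega_q = omega_0 + D q^2: the quantity
# that drives the magnonic renormalization N_m of the spin splitting U-tilde.
# A side estimate (not from the paper); hbar = k_B = 1, energies in Delta.
import numpy as np
from scipy.integrate import quad

omega0 = 0.03   # magnon gap in units of Delta (value used in the paper)
T = 0.1         # temperature in units of Delta (typical value in the paper)
D = 1.0         # stiffness; n_m scales simply as 1/D, so set D = 1 here

# Direct 2D integral: (1/2pi) Int q dq / (exp((omega0 + D q^2)/T) - 1);
# the upper limit 5.0 is far into the exponential tail for these parameters.
integrand = lambda q: q / (np.exp((omega0 + D * q**2) / T) - 1.0) / (2.0 * np.pi)
n_quad, _ = quad(integrand, 0.0, 5.0)

# Closed form: n_m = (T / 4 pi D) ln(1 / (1 - exp(-omega0/T)))
n_closed = (T / (4.0 * np.pi * D)) * np.log(1.0 / (1.0 - np.exp(-omega0 / T)))

print(n_quad, n_closed)   # agree; n_m grows roughly as T ln(T/omega0) for T >> omega0
```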
Introducing the following Nambu spinor \(\tilde{\Psi}_{\mathbf{k}}=(c_{\mathbf{k}\uparrow},c_{\mathbf{k}\downarrow},-c_{-\mathbf{k}\downarrow}^{\dagger},c_{-\mathbf{k}\uparrow}^{\dagger})^{T}\), we define the Gor'kov Green's function in the Matsubara representation \(\hat{G}_{\mathbf{k}}(\tau)=-\langle T_{\tau}\tilde{\Psi}_{\mathbf{k}}\tilde{\Psi}_{\mathbf{k}}^{\dagger}\rangle\), where \(\langle T_{\tau}...\rangle\) means imaginary-time-ordered thermal averaging. Turning to the Matsubara frequency representation, the Green's function obeys the following equation: \[(i\omega-\xi_{k}\tau_{z}-\tilde{U}\sigma_{z}-\Delta\tau_{x}-\hat{\Sigma}_{m})\hat{G}_{\mathbf{k}}(\omega)=1, \tag{10}\] where \(\omega\) is the fermionic Matsubara frequency, \(\sigma_{i}\) and \(\tau_{i}\) (\(i=x,y,z\)) are Pauli matrices in spin and particle-hole spaces, respectively. \(\hat{\Sigma}_{m}\) is the magnonic self-energy, which describes corrections to the electronic Green's function due to the electron-magnon interaction and in the framework of the self-consistent Born approximation takes the form: \[\hat{\Sigma}_{m}=-V^{2}T\sum_{\mathbf{q},\Omega}B_{\mathbf{q}}(\Omega)\left\{\sigma_{+}\hat{G}_{\mathbf{k}-\mathbf{q}}(\omega-\Omega)\sigma_{-}+\sigma_{-}\hat{G}_{\mathbf{k}+\mathbf{q}}(\omega+\Omega)\sigma_{+}\right\}, \tag{11}\] where \(\sigma_{\pm}=(\sigma_{x}\pm i\sigma_{y})\), \(\Omega\) is the bosonic Matsubara frequency and \(B_{\mathbf{q}}(\Omega)=[i\Omega-(\omega_{0}+Dq^{2})]^{-1}\) is the magnonic Green's function. From the spin structure of Eq. (11) it is seen that \(\hat{\Sigma}_{m}\) is diagonal in spin space. For this reason the electronic Green's function, which is given by the solution of Eq. (10), is also a diagonal matrix in spin space, and Eq. (10) can be written for both spin subbands separately: \[(i\omega-\xi_{k}\tau_{z}-\sigma\tilde{U}-\Delta\tau_{x}-\hat{\Sigma}_{m,\sigma})\hat{G}_{\mathbf{k},\sigma}(\omega)=1, \tag{12}\] where \(\hat{G}_{\mathbf{k},\sigma}\) is a \(2\times 2\) matrix in the particle-hole space corresponding to the electron spin \(\sigma=\uparrow,\downarrow\). \(\hat{\Sigma}_{m,\sigma}\) is also a \(2\times 2\) matrix in the particle-hole space, representing the magnonic self-energy for the given spin subband \(\sigma\): \[\hat{\Sigma}_{m,\sigma}=-V^{2}T\sum_{\mathbf{q},\Omega}B_{\mathbf{q}}(\Omega)\hat{G}_{\mathbf{k}-\sigma\mathbf{q},\bar{\sigma}}(\omega-\sigma\Omega). \tag{13}\] When appearing as a factor in the expressions, \(\sigma\) means \(+1\) (\(-1\)) for the spin-up (spin-down) subband, and \(\bar{\sigma}\) denotes the opposite spin subband. The dimensionless coupling constant quantifying the strength of the electron-magnon coupling is \(K=V^{2}A/4\pi\hbar v_{F}\sqrt{D\Delta}\). Our numerical estimates, made for parameters corresponding to EuS/Al or YIG/Nb structures, suggest that \(K\) should be rather small, \(K\ll 1\); for a detailed discussion of the numerical estimates see Sec. IV. The smallness of the electron-magnon coupling constant allows us to use the non-self-consistent Born approximation when calculating the magnon self-energy. That is, we substitute \(\hat{G}_{\mathbf{k}-\sigma\mathbf{q},\bar{\sigma}}\) by the bare superconducting Green's function obtained without taking into account the magnon self-energy, \(\hat{G}_{\mathbf{k}-\sigma\mathbf{q},\bar{\sigma}}^{(0)}\), in Eq. (13). Then the explicit solution of Eq.
(12) takes the form: \[\hat{G}_{\mathbf{k},\sigma}(\omega)=\frac{i\widetilde{\omega}_{\mathbf{k},\sigma}+\widetilde{\xi}_{\mathbf{k},\sigma}\tau_{z}+\widetilde{\Delta}_{\mathbf{k},\sigma}\tau_{x}}{(i\widetilde{\omega}_{\mathbf{k},\sigma})^{2}-(\widetilde{\xi}_{\mathbf{k},\sigma})^{2}-(\widetilde{\Delta}_{\mathbf{k},\sigma})^{2}}, \tag{14}\] where all the quantities marked by \(\tilde{\ }\) are renormalized by the electron-magnon interaction as follows: \[\widetilde{\Delta}_{\mathbf{k},\sigma}(\omega)=\Delta+\delta\Delta_{\mathbf{k},\sigma}(\omega)=\Delta-V^{2}T\sum_{\mathbf{q},\Omega}B_{\mathbf{q}}(\Omega)\frac{\Delta}{(i\omega-i\sigma\Omega+\widetilde{U}\sigma)^{2}-\xi_{\mathbf{k}-\sigma\mathbf{q}}^{2}-|\Delta|^{2}}, \tag{15}\] \[\widetilde{\xi}_{\mathbf{k},\sigma}(\omega)=\xi_{\mathbf{k}}+\delta\xi_{\mathbf{k},\sigma}(\omega)=\xi_{\mathbf{k}}-V^{2}T\sum_{\mathbf{q},\Omega}B_{\mathbf{q}}(\Omega)\frac{\xi_{\mathbf{k}-\sigma\mathbf{q}}}{(i\omega-i\sigma\Omega+\widetilde{U}\sigma)^{2}-\xi_{\mathbf{k}-\sigma\mathbf{q}}^{2}-|\Delta|^{2}}, \tag{16}\] \[\widetilde{\varepsilon}_{\mathbf{k},\sigma}(\omega)=i\omega-\widetilde{U}\sigma+\delta\varepsilon_{\mathbf{k},\sigma}(\omega)=i\omega-\widetilde{U}\sigma+V^{2}T\sum_{\mathbf{q},\Omega}B_{\mathbf{q}}(\Omega)\frac{i\omega-i\sigma\Omega+\widetilde{U}\sigma}{(i\omega-i\sigma\Omega+\widetilde{U}\sigma)^{2}-\xi_{\mathbf{k}-\sigma\mathbf{q}}^{2}-|\Delta|^{2}}. \tag{17}\] For the problem under consideration all the in-plane directions of \(\mathbf{k}\) are equivalent. For this reason the magnonic corrections only depend on the absolute value \(k\) of the wave vector. Further, in order to study the quasiparticle spectra and density of states, we turn from Matsubara frequencies to real energies in the Green's functions, \(i\omega\rightarrow\varepsilon+i\delta\), where \(\delta\) is an infinitesimal positive number. The magnonic corrections for spin-up electrons \(\delta\Delta_{\mathbf{k},\uparrow}\), \(\delta\xi_{\mathbf{k},\uparrow}\) and \(\delta\varepsilon_{\mathbf{k},\uparrow}\) are presented in Figs. 2-4 as functions of the quasiparticle energy \(\varepsilon\) and \(\xi_{\mathbf{k}}\equiv\xi\), which after linearization in the vicinity of the Fermi surface takes the form \(\xi_{\mathbf{k}}\approx\mathbf{v}_{F}(\mathbf{k}-\mathbf{k}_{F})\). The key features of the corrections, which can be seen in the presented plots, are: (i) The dependence of the corrections on \(\xi\) is very weak. The reason is that the most important range of the magnonic wave numbers contributing to the corrections is \(q\lesssim 1/\xi_{S}\), where \(\xi_{S}=v_{F}/\Delta\) is the superconducting coherence length. Then, taking parameters of the magnon spectrum corresponding to YIG (\(\omega_{0,YIG}\sim 10^{-1}\Delta\), \(D_{YIG}\approx 5\times 10^{-40}\,\mathrm{J\,m^{2}}\)) or EuS (\(\omega_{0,EuS}\sim 10^{-2}\Delta\), \(D_{EuS}\approx 3\times 10^{-42}\,\mathrm{J\,m^{2}}\)), we obtain that \(Dq^{2}\ll\omega_{0}\) to very good accuracy for all reasonable parameters. Consequently, one can disregard \(Dq^{2}\) with respect to \(\omega_{0}\) in the magnonic Green's function \(B_{\mathbf{q}}\), and after linearization of \(\xi_{\mathbf{k}-\sigma\mathbf{q}}\approx\mathbf{v}_{F}(\mathbf{k}-\sigma\mathbf{q}-\mathbf{k}_{F})\) in the vicinity of the Fermi surface we see that the dependence on \(\mathbf{k}\) drops from Eqs. (15)-(17).
(ii) The correction to the normal state electron dispersion \(\delta\xi\) is small with respect to all other corrections and is neglected below. (iii) The important corrections \(\delta\Delta\) and \(\delta\varepsilon\) have peaks at the energies corresponding to the superconducting coherence peaks of the _opposite_ spin subbands. While the coherence peaks for the spin-up subband are located at \(\varepsilon=\pm\Delta+\tilde{U}\), the peaks of the corrections are at \(\varepsilon=\pm\Delta-\tilde{U}\). This is an obvious consequence of the process of electron spin flip accompanied by emission or absorption of a magnon. (iv) The correction \(\delta\Delta\) represents an effective contribution to the superconducting order parameter induced from the pure singlet pairing \(\Delta\) via the electron-magnon interaction. It depends on the Matsubara frequency and contains both singlet and triplet components. As can be seen from Eq. (15), the correction obeys the condition \(\delta\Delta_{\uparrow}(\omega)=\delta\Delta_{\downarrow}(-\omega)\). It means that the triplet component \(\delta\Delta_{t}(\omega)=\delta\Delta_{\uparrow}(\omega)-\delta\Delta_{\downarrow}(\omega)=-\delta\Delta_{t}(-\omega)\) works as an effective odd-frequency _superconducting order parameter_. This situation is rather unusual because typically in F/S hybrid systems we encounter an odd-frequency anomalous Green's function, but at the same time the order parameter is still even in frequency within the conventional BCS weak-coupling theory.

## III Quasiparticle spectra

Now we turn to the discussion of how quasiparticle spectra in the S layer are modified by the electron-magnon interaction. In Fig. 5(a) we present the spectral functions for both spins in the S layer calculated from the Green's function (14) according to the relation \[A_{\sigma}(\varepsilon,\mathbf{k})=-\frac{1}{\pi}\text{Tr}\left\{\frac{1+\tau_{z}}{2}\text{Im}[\hat{G}^{R}_{\mathbf{k},\sigma}(\varepsilon)]\right\}. \tag{18}\] The spectral function is isotropic in momentum space and for this reason we plot it as a function of \(\xi_{\mathbf{k}}\equiv\xi\). The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra: (i) The Zeeman splitting of spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins. (ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and of quasiparticles at the upper part of the spin-down branch is considerably suppressed, which is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that \(\omega_{0}+Dq^{2}\ll\Delta\), that is, the magnon energies are small compared to \(\Delta\) in the whole range of \(\xi\) considered in Fig. 5.
The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra: (i) The Zeeman splitting of spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins. (ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and of quasiparticles at the upper part of the spin-down branch is considerably suppressed, which is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that \(\omega_{0}+Dq^{2}\ll\Delta\), that is, the magnon energies are small compared to \(\Delta\) in the whole range of \(\xi\) considered in Fig. 5. The lower (upper) part of the spin-down (up) positive (negative) energy branch is not broadened because there are no available states for the opposite-spin quasiparticles at the appropriate energies and, consequently, the spin-flip processes are not allowed. (iii) In Fig. 5(a) we also see a reconstruction of the spin-down spectral branch in the energy range of the bottom of the spin-up branch. In order to investigate this effect in more detail we plot the same figure on a logarithmic scale in Fig. 5(b), which allows us to clearly see weak spectral features. Figs. 5(c) and (d) represent the spectral functions for the spin-up band on the normal and on the logarithmic scale, respectively. From Figs. 5(b) and (d) it is seen that due to the electron-magnon interaction, in the energy region of the extremum of the spin-up (down) branch a nonzero density of states appears for the opposite-spin branch. It looks like a horizontal line starting from the bottom of the corresponding branch. This line is horizontal due to the independence of the electron-magnon self-energy corrections (15) and (17) of \(\xi\). This mixing of the spin-up and spin-down bands resulting from the magnon-mediated spin-flip processes is natural and exists at all energies, but the spectral weight of the opposite-spin branch is too small except for the regions of the extrema of the bands corresponding to the coherence peaks of the superconducting DOS. Intersection of the additional lines with the original spin-down band results in its reconstruction, which looks like an avoided crossing point. The results for the spectral function presented and discussed above correspond to \(T=0.1\Delta\). This temperature is higher than the gap in the magnonic spectrum \(\omega_{0}=0.03\Delta\), which we take in our calculations. Therefore, a large number of thermal magnons are excited at this temperature. In Fig. 6 the spectral function is shown for the lower temperature \(T=0.01\Delta<\omega_{0}\). It is seen that the characteristic signatures of the magnon-mediated spin-flip processes, that is, the mixing, reconstruction and broadening of the branches, are much less pronounced due to the suppression of thermally excited magnons at such low temperatures.

Figure 6: The same as in Figs. 5(a)-(b), but for lower temperature \(T=0.01\Delta\). The other parameters are the same. The reconstruction and broadening of the spectra are much less pronounced.

## IV DOS in the presence of magnons

Now we turn to a discussion of the local density of states (LDOS) in the S layer, which is calculated as the momentum-integrated spectral function: \[N(\varepsilon)=\int\frac{d^{2}k}{(2\pi)^{2}}A(\varepsilon,\mathbf{k}). \tag{19}\] Fig. 7(a) demonstrates the LDOS in the presence of electron-magnon interaction (solid line) as compared to the LDOS calculated at \(V=0\) (dashed line). The LDOS at \(V=0\), that is, calculated assuming the mean-field approximation for the exchange field, takes the conventional BCS-like shape. It manifests Zeeman-split coherence peaks, and the outer peak is always higher than the inner one. The electron-magnon interaction inverts the relative ratio of the peak heights and broadens the outer peaks, while the width of the inner peaks remains unchanged. The reason is the same as for the broadening of the spectra in Fig. 5: electron spin-flip processes accompanied by a magnon emission or absorption. The outer coherence peaks in Fig. 7(a) correspond to the energy regions of the bottom (top) of the positive(negative)-energy spin-up(down) bands.
This type of broadening, which only affects the outer peaks, differs from the other physical mechanisms resulting in the broadening of the coherence peaks, such as the orbital effect of the magnetic field, inelastic scattering or magnetic impurities, which affect all the peaks [57] and can be roughly described by the Dynes parameter. The other important manifestation of the electron-magnon interaction is that the shape of the LDOS strongly depends on temperature even at very low temperatures \(\sim\omega_{0}\ll\Delta\), in agreement with the behavior of the spectral function discussed above. The temperature evolution of the LDOS is presented in Fig. 8. It is seen that the broadening of the outer peak develops with increasing temperature in the temperature range \(\sim\omega_{0}\). This is clear if we remember that the broadening is caused by the spin-flip processes, which are mediated by the thermally excited magnons. We do not consider larger temperatures \(T\gg\omega_{0}\) comparable to the critical temperature of the superconducting film, because in this temperature range the temperature dependence of the superconducting gap comes into play and the correct consideration of the problem requires solving the self-consistency equation for the order parameter.

Figure 7: (a) LDOS in the S layer with (solid line, \(K=0.01\)) and without (dashed, \(K=0\)) taking into account electron-magnon interaction. (b) Spin-resolved LDOS \(N_{\uparrow}\) (red) and \(N_{\downarrow}\) (blue) for \(K=0.01\). The solid line in panel (a) is obtained by summing the red and blue curves from panel (b). \(T=0.1\Delta\), \(\tilde{U}=0.5\Delta\).

Now let us discuss numerical estimates of the dimensionless constant \(K=V^{2}A/4\pi\hbar v_{F}\sqrt{D\Delta}\), which controls the strength of the electron-magnon coupling. Substituting \(V=J\sqrt{M_{s}/2|\gamma|d_{FI}A(1/d_{S})}\) and expressing the interface exchange coupling constant via the experimentally accessible quantity \(\tilde{U}\) as \(|J|=2|\gamma|\tilde{U}d_{S}/M_{s}\) (where to the leading approximation we neglect the magnonic contribution to the magnetization), we obtain \(K=\tilde{U}^{2}(2|\gamma|/M_{s})1/(4\pi\hbar\sqrt{D\Delta}v_{F}d_{FI})\) for one transverse magnon mode. The effective number of working transverse modes is \(N_{\perp}\sim d_{FI}/a\), where \(a\) is the interatomic distance in the ferromagnet. According to our estimates, for \(d_{FI}\approx 10\,nm\) we have \(N_{\perp}\sim 2\div 5\). One can take the following parameters for YIG/Nb heterostructures: \(\tilde{U}/\Delta=0.5\), \(v_{F}=10^{6}m/s\), \(\Delta_{Nb}=2.7*10^{-22}J\), \(a=1.2\,nm\), \(2|\gamma|/M_{s}=3.3*10^{-27}m^{3}\), \(D=D_{bare,YIG}-\delta D_{YIG}\), where \(D_{bare,YIG}=5*10^{-40}J*m^{2}\)[58] is the exchange stiffness of YIG and \(\delta D_{YIG}\) is the renormalization of the stiffness in FI/S bilayers due to the formation of magnon-Cooparon quasiparticles [48]. As was predicted [48], for the material parameters of YIG/Nb heterostructures \(\delta D_{YIG}\) can be \(\sim(0.5\div 1)D_{YIG,bare}\) for \(d_{FI}\sim(1\div 0.5)d_{S}\). Therefore, the electron-magnon coupling constant for YIG/Nb heterostructures can vary in a wide range \(K_{YIG/Nb}\gtrsim 10^{-4}\). The values \(K\sim 0.01\) considered here can be realized in the regime of strong renormalization of the exchange stiffness constant \(D\).
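The estimate of \(K\) for YIG/Nb can be checked with a few lines of arithmetic. The sketch below evaluates the formula quoted above with \(\hbar\) restored from the definition \(K=V^{2}A/4\pi\hbar v_{F}\sqrt{D\Delta}\); the fraction of the bare stiffness that survives the renormalization is scanned as an assumption.

```python
import numpy as np

hbar = 1.0546e-34                  # J*s
# YIG/Nb parameters quoted in the text (SI units).
Delta = 2.7e-22                    # J   (Nb gap)
U = 0.5 * Delta                    # J   (effective exchange field)
g_over_Ms = 3.3e-27                # m^3 (2|gamma|/M_s)
v_F = 1.0e6                        # m/s
d_FI = 10e-9                       # m
D_bare = 5e-40                     # J*m^2 (bare YIG stiffness)

def K_per_mode(D_ren):
    """K = U^2 (2|gamma|/M_s) / (4*pi*hbar*sqrt(D*Delta)*v_F*d_FI) for one transverse mode."""
    return U**2 * g_over_Ms / (4 * np.pi * hbar * np.sqrt(D_ren * Delta) * v_F * d_FI)

for frac in (1.0, 0.5, 0.01):      # assumed fraction of the bare stiffness that survives
    print(f"D = {frac} * D_bare -> K per mode ~ {K_per_mode(frac * D_bare):.1e}")
```

With \(N_{\perp}\sim 2\div 5\) working modes and a moderately renormalized stiffness, this reproduces the order of magnitude \(K_{YIG/Nb}\gtrsim 10^{-4}\) quoted above; a strong suppression of \(D\) pushes \(K\) toward the \(10^{-2}\) range used in the plots.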
For EuS/Al heterostructures one can take \(\tilde{U}/\Delta=0.25\)[24], \(v_{F}=10^{6}m/s\), \(\Delta_{Al}=3.5*10^{-23}J\), \(a=10^{-10}m\), \(2|\gamma|/M_{s}=3.3*10^{-28}m^{3}\), \(D=D_{bare,EuS}\), where \(D_{bare,EuS}=3*10^{-42}J*m^{2}\)[59]. The superconducting renormalization of the stiffness due to the formation of magnon-Cooparon quasiparticles is predicted to be small for the parameters corresponding to EuS/Al heterostructures at reasonable thicknesses \(d_{FI}\), due to the smaller value of \(\Delta\) and larger \(M_{s}\). Substituting these parameters into the expression for \(K\), we come to the conclusion that for EuS/Al heterostructures \(K_{EuS/Al}\sim 10^{-7}\div 10^{-6}\), that is, the electron-magnon effects are unlikely to be observed in such structures. In general, the electron-magnon effects in the LDOS and quasiparticle spectra should be more pronounced in ultra-thin superconducting films with high critical temperatures, where large absolute values of the effective exchange field \(\tilde{U}\) can be realized. Smaller values of the exchange stiffness of the ferromagnet will also enhance the effect. The manifestations of the electron-magnon coupling become more pronounced at \(T\gtrsim\omega_{0}\) and grow with temperature. Now we discuss the influence of the electron-magnon interaction on the effective Zeeman splitting, which is defined as the distance between the split coherence peaks of the LDOS divided by 2. Experimentally, a low-temperature reduction of the effective Zeeman splitting at \(T\ll\Delta\) for EuS/Al heterostructures has been reported [24]. It was ascribed to the presence of weakly bound spins at the interface of the EuS/Al. The renormalization of the effective exchange field in the superconductor by the thermal magnons can also contribute to this effect. Indeed, the fit of the experimentally observed temperature dependence of the distance between the Zeeman-split coherence peaks \(\Delta V_{peak}(T)\) by \(2|\tilde{U}|=J(M_{s}-N_{m}|\gamma|)/(2|\gamma|d_{S})\), with the magnon density \(N_{m}=(1/Sd_{FI})\sum_{q}\left\{\exp[(\omega_{0}+Dq^{2})/T]-1\right\}^{-1}\) and \(\omega_{0}\approx 0.03K\), is in reasonable agreement with the experimental data. In addition, the broadening of the outer coherence peaks, predicted in this work, leads to an enhancement of the distance between the spin-split coherence peaks. The broadening becomes stronger with increasing temperature. This effect leads to an apparent growth of the peak splitting with temperature and, therefore, acts opposite to the renormalization of the effective Zeeman field by magnons. However, our numerical estimates suggest that the temperature growth is unlikely to be observed, at least for heterostructures consisting of the materials discussed above, because the renormalization of the effective Zeeman field by magnons dominates.

Figure 8: Inner LDOS peak (left panel) and outer LDOS peak (right panel) as functions of energy. Different curves correspond to different temperatures. \(K=0.01\), \(\tilde{U}=0.5\Delta\).
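As a rough numerical companion to the Zeeman-splitting fit above, the sketch below evaluates the thermal magnon density \(N_{m}\) by converting the \(q\)-sum to a 2D Bose integral; the FI thickness and the continuum approximation are assumptions for illustration only.

```python
import numpy as np

kB = 1.381e-23                     # J/K
# EuS-like illustrative parameters; omega0 ~ 0.03 K is the magnon gap quoted in the fit.
omega0 = 0.03 * kB                 # J
D = 3e-42                          # J*m^2 (EuS stiffness)
d_FI = 5e-9                        # m (assumed FI thickness)

def magnon_density(T_kelvin):
    """N_m per unit volume: (1/d_FI) * Int d^2q/(2pi)^2 n_B(omega0 + D q^2),
    which evaluates analytically to -T ln(1 - exp(-omega0/T)) / (4*pi*D*d_FI)."""
    T = T_kelvin * kB
    return -T * np.log1p(-np.exp(-omega0 / T)) / (4 * np.pi * D * d_FI)

for T in (0.05, 0.1, 0.2, 0.4):
    print(f"T = {T:4.2f} K -> N_m ~ {magnon_density(T):.2e} m^-3")
```

The rapid growth of \(N_{m}\) on the scale \(T\sim\omega_{0}\) is what drives the low-temperature reduction of the effective exchange field \(\tilde{U}\) discussed above.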
## V Conclusions

In this work the influence of the electron-magnon interaction at the superconductor/ferromagnetic-insulator interface in thin-film FI/S heterostructures on the spectrum of quasiparticles and the LDOS in the superconducting layer is studied. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed. The reconstruction is most pronounced in the region of the bottom of the energetically unfavorable spin band because of the enhanced density of the electronic states and the existence of available states in the opposite-spin band. The BCS-like Zeeman-split shape of the superconducting DOS, which is typical for superconductors in an effective exchange field, is strongly modified due to the electron-magnon interaction. The outer spin-split coherence peaks are broadened, while the inner peaks remain intact. This type of broadening is a clear signature of the magnon-mediated spin flips and strongly differs from other mechanisms of coherence-peak broadening, which usually influence all peaks. The broadening grows with temperature due to the thermal excitation of magnons. The features in the electronic DOS described above are mainly caused by the magnonic contributions to the electron self-energy that are diagonal in the particle-hole space, that is, by quasiparticle processes. Besides that, we have also found a magnonic contribution to the electronic self-energy that is off-diagonal in the particle-hole space. It mimics an odd-frequency superconducting order parameter admixture to the leading singlet order parameter. The study of its influence on the superconducting properties of the system may be an interesting direction for future research.

## VI Acknowledgments

We acknowledge the discussions of the exchange interaction Hamiltonian with Akashdeep Kamra. The work was supported by the Russian Science Foundation via the RSF project No. 22-42-04408.
2302.10890
Learning Interpretable Low-dimensional Representation via Physical Symmetry
We have recently seen great progress in learning interpretable music representations, ranging from basic factors, such as pitch and timbre, to high-level concepts, such as chord and texture. However, most methods rely heavily on music domain knowledge. It remains an open question what general computational principles give rise to interpretable representations, especially low-dim factors that agree with human perception. In this study, we take inspiration from modern physics and use physical symmetry as a self-consistency constraint for the latent space of time-series data. Specifically, it requires the prior model that characterises the dynamics of the latent states to be equivariant with respect to certain group transformations. We show that physical symmetry leads the model to learn a linear pitch factor from unlabelled monophonic music audio in a self-supervised fashion. In addition, the same methodology can be applied to computer vision, learning a 3D Cartesian space from videos of a simple moving object without labels. Furthermore, physical symmetry naturally leads to counterfactual representation augmentation, a new technique which improves sample efficiency.
Xuanjie Liu, Daniel Chin, Yichen Huang, Gus Xia
2023-02-05T21:48:42Z
http://arxiv.org/abs/2302.10890v4
# Learning Interpretable Low-dimensional Representation via Physical Symmetry

###### Abstract

Interpretable representation learning has been playing a key role in creative intelligent systems. In the music domain, current learning algorithms can successfully learn various features such as pitch, timbre, chord, texture, etc. However, most methods rely heavily on music domain knowledge. It remains an open question what general computational principles _give rise to_ interpretable representations, especially low-dim factors that agree with human perception. In this study, we take inspiration from modern physics and use _physical symmetry_ as a self-consistency constraint for the latent space. Specifically, it requires the prior model that characterises the dynamics of the latent states to be _equivariant_ with respect to certain group transformations. We show that physical symmetry leads the model to learn a _linear_ pitch factor from unlabelled monophonic music audio in a self-supervised fashion. In addition, the same methodology can be applied to computer vision, learning a 3D Cartesian space from videos of a simple moving object without labels. Furthermore, physical symmetry naturally leads to _representation augmentation_, a new technique which improves sample efficiency.

## 1 Introduction

Interpretable representation learning has been playing a key role in building creative intelligent systems. Taking the _music_ domain as an example, tailored models [11] have been developed to learn pitch, timbre, melody contour, chord progression, texture, etc. from music audio. These human-interpretable representations have greatly improved the performance of generative algorithms in various music creation tasks, including inpainting [21], harmonization [13], (re-)arrangement, and performance rendering [1]. However, most representation learning models still rely heavily on domain-specific knowledge, for example, pitch scales or instrument labels for learning pitch and timbre representations [1, 19, 20, 21, 22], and chord and rhythm labels for learning higher-level representations [1, 23, 24, 25]. Such an approach is very different from human learning; even without formal music training, one can at least perceive _pitch_, a fundamental music concept, from the experience of listening to music. Hence, it remains an open question how to learn an interpretable pitch factor using inductive biases that are more general. In other words, _what general computational principle gives rise to (or lets the human mind re-create) the concept of pitch_. We see a similar issue in other domains. For instance, various computer-vision models [15, 26, 27] can learn 3D representations of human faces or a particular scene by using domain knowledge (e.g., labelling of meshes and voxels, 3D convolution, etc.). But when these domain setups are absent, it remains a non-trivial task to learn the 3D location of a simple moving object in a self-supervised fashion. Inspired by modern physics, we explore using _physical symmetry_ (i.e., symmetry of physical laws) as a weak self-consistency constraint for the learned latent \(z\) space. As indicated in Figure 1, this general inductive bias requires the learned prior model \(R\), which is the induced physical law describing the temporal flow of the latent states, to be equivariant to a certain transformation \(S\) (e.g., translation or rotation). Formally, \(z_{t+1}=R(z_{t})\) if and only if \(z_{t+1}^{S}=R(z_{t}^{S})\), where \(z^{S}=S(z)\).
In other words, \(R\) and \(S\) commute on \(z\), i.e., \(R(S(z))=S(R(z))\). Note that this approach is fundamentally different from most existing symmetry-informed models [1], in which the symmetry property is used to constrain the encoder or the decoder. Specifically, we design self-supervised learning with physical symmetry (**SPS**), a method that adopts an encoder-decoder framework and applies physical symmetry to the prior model.

Figure 1: An illustration of physical symmetry as our inductive bias.

We show that SPS learns a _linear_ pitch factor (that agrees with human music perception) from monophonic music audio without any domain-specific knowledge about pitch scales, f0, or harmonic series. The same methodology can be applied to the computer vision domain, learning 3D Cartesian space from monocular videos of a bouncing ball shot from a fixed perspective. In particular, we see four desired properties of SPS as a self-supervised algorithm for interpretability:

* **Interpretability**: SPS learns interpretable low-dimensional factors that agree with human perception, without domain-specific knowledge. (See Section 4.)
* **Sample efficiency**: representation augmentation, which follows naturally from physical symmetry, improves sample efficiency. (See Section 5.2.)
* **Robustness**: Even with an incorrect symmetry assumption, SPS can still learn more interpretable representations than baselines. (See Section 5.3.)
* **Extendability**: SPS can be easily combined with other learning techniques. For example, if we _further_ assume an extra global invariant style code, the model becomes a disentangled sequential autoencoder, capable of learning content-style disentanglement from temporal signals. (See appendix.)

## 2 Intuition

The idea of using physical symmetry for representation learning comes from modern physics. In classical physics, scientists usually first induce physical laws from observations and then discover symmetry properties of the law. (E.g., Newton's law of gravitation, which was induced from planetary orbits, is symmetric with respect to Galilean transformations.) In contrast, in modern physics, scientists often start from a symmetry assumption, based on which they derive the corresponding law and predict the properties (representations) of fundamental particles. (E.g., general relativity was developed based on a firm assumption of symmetry with respect to Lorentz transformations.) Analogously, we use physical symmetry as an inductive bias of our representation learning model, which helps us learn a regularised prior and an interpretable low-dim latent space. In other words, if it is a belief of many physicists that symmetry in physical law is a major design principle of nature, we regard symmetry in physical law as a general inductive bias of the human mind. The introduction of physical symmetry naturally leads to **representation augmentation**, a novel learning technique which helps improve sample efficiency. As indicated in Figure 1, representation augmentation means to "imagine" \(z_{t}^{S}\) as an extra training sample for the prior model \(R\). Therefore, it imposes a regularisation on the prior model by requiring the prediction of the \(z\) sequence to be _equivariant_ with respect to certain group transformations \(S\). Through the lens of causality, physical symmetry can be seen as a counterfactual inductive bias, which asks _what if_ the prior model made predictions based on transformed latent codes. It also constrains the encoder and decoder _indirectly through the prior model_, since the network is trained in an end-to-end fashion.
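To make the constraint concrete, here is a toy check of the commutation requirement \(R(S(z))=S(R(z))\) in the music setting, where \(S\) ranges over translations. The specific dynamic \(R\) below (a constant pitch step) is a hypothetical stand-in, chosen only because it is translation-equivariant by construction.

```python
import numpy as np

def R(z, step=0.5):
    """A hypothetical latent dynamic: every note moves up by a fixed interval."""
    return z + step

def S(z, shift=2.0):
    """A group element of (R, +): translate the whole latent trajectory."""
    return z + shift

z = np.linspace(0.0, 3.0, 7)          # a toy latent pitch trajectory
assert np.allclose(R(S(z)), S(R(z)))  # the prior commutes with S, as physical symmetry demands
```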
## 3 Methodology

With physical symmetry, we aim to learn an interpretable low-dimensional representation \(z_{i}\) of each high-dimensional sample \(x_{i}\) from time-series \(\mathbf{x}_{1:T}\). We focus on two problems in this paper: 1) to learn a _1D linear_ pitch factor of music notes from music audio, where each \(x_{i}\) is a spectrogram of a note, and 2) to learn _3D Cartesian location_ factors of a simple moving object (a bouncing ball) from its trajectory shot by a fixed, single camera, where each \(x_{i}\) is an image.

### Model

Figure 2 shows the model design of SPS. During the training process, the temporal data input \(\mathbf{x}_{1:T}\) is first fed into the encoder \(E\) to obtain the corresponding representation \(\mathbf{z}_{1:T}\). Then it is fed into _three_ branches. In the first branch (the green line), \(\mathbf{z}_{1:T}\) is decoded directly by the decoder \(D\) to reconstruct \(\mathbf{x}^{\prime}_{1:T}\). In the second branch (the orange line), \(\mathbf{z}_{1:T}\) is passed through the prior model \(R\) to predict its next timestep, \(\hat{\mathbf{z}}_{2:T+1}\), which is then decoded to reconstruct \(\hat{\mathbf{x}}_{2:T+1}\). In the third branch (the blue line), we transform \(\mathbf{z}_{1:T}\) with \(S\), pass it through \(R\), and transform it back using the inverse transformation \(S^{-1}\) to predict another version of the next timestep, \(\tilde{\mathbf{z}}_{2:T+1}\), and finally decode it to \(\tilde{\mathbf{x}}_{2:T+1}\). We get three outputs from the model: \(\mathbf{x}^{\prime}_{1:T}\), \(\hat{\mathbf{x}}_{2:T+1}\), and \(\tilde{\mathbf{x}}_{2:T+1}\). The underlying idea of physical symmetry is that the dynamics of the latent factor and its transformed version _follow the same physical law_ characterised by \(R\). Therefore, \(\hat{\mathbf{z}}\) and \(\tilde{\mathbf{z}}\) should be close to each other, and so should \(\hat{\mathbf{x}}\) and \(\tilde{\mathbf{x}}\), assuming \(S\) is a proper transformation. This self-consistency constraint helps the network learn a more regularised latent space.

### Training objective

The total loss contains four terms: reconstruction loss \(\mathcal{L}_{\text{rec}}\), prior prediction loss \(\mathcal{L}_{\text{prior}}\), symmetry-based loss \(\mathcal{L}_{\text{sym}}\), and KL divergence loss \(\mathcal{L}_{\text{KLD}}\). Formally, \[\mathcal{L}=\mathcal{L}_{\text{rec}}+\lambda_{1}\mathcal{L}_{\text{prior}}+\lambda_{2}\mathcal{L}_{\text{sym}}+\lambda_{3}\mathcal{L}_{\text{KLD}}, \tag{1}\] where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are weighting parameters. By referring to the notations in section 3.1, \[\mathcal{L}_{\text{rec}}=\mathcal{L}_{\text{BCE}}(\mathbf{x}^{\prime}_{1:T},\mathbf{x}_{1:T})+\mathcal{L}_{\text{BCE}}(\hat{\mathbf{x}}_{2:T},\mathbf{x}_{2:T})+\mathcal{L}_{\text{BCE}}(\tilde{\mathbf{x}}_{2:T},\mathbf{x}_{2:T}), \tag{2}\] \[\mathcal{L}_{\text{prior}}=\ell_{2}(\hat{\mathbf{z}}_{2:T},\mathbf{z}_{2:T}), \tag{3}\] \[\mathcal{L}_{\text{sym}}=\ell_{2}(\tilde{\mathbf{z}}_{2:T},\mathbf{z}_{2:T})+\ell_{2}(\tilde{\mathbf{z}}_{2:T},\hat{\mathbf{z}}_{2:T}). \tag{4}\] \(\mathcal{L}_{\text{KLD}}\) is the Kullback-Leibler divergence between the prior distribution of \(z_{i}\) and a standard Gaussian. Lastly, we build two versions of SPS: SPS\({}_{\text{VAE}}\) and SPS\({}_{\text{AE}}\), with the latter replacing the VAE with an AE (and thus trivially having no \(\mathcal{L}_{\text{KLD}}\)).
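The following is a minimal single-sequence sketch of the three branches and the losses in Eqs. (1)-(4). The tiny linear networks, the sizes, and the unit loss weights are placeholders rather than the paper's architecture, and \(S\) is drawn from the music-task group \((\mathbb{R},+)\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T_len, x_dim, z_dim = 8, 64, 1
enc = nn.Linear(x_dim, z_dim)                               # stands in for the encoder E
dec = nn.Sequential(nn.Linear(z_dim, x_dim), nn.Sigmoid())  # decoder D (BCE targets in [0, 1])
prior = nn.Linear(z_dim, z_dim)                             # prior model R: z_t -> z_{t+1}

x = torch.rand(T_len, x_dim)          # one sequence x_{1:T}
z = enc(x)                            # z_{1:T}

x_rec = dec(z)                        # branch 1: direct reconstruction x'_{1:T}
z_hat = prior(z[:-1])                 # branch 2: prior prediction z_hat_{2:T}
x_hat = dec(z_hat)
s = torch.randn(1)                    # branch 3: S -> R -> S^{-1}, with S a random translation
z_tilde = prior(z[:-1] + s) - s       # z_tilde_{2:T}
x_tilde = dec(z_tilde)

loss = (F.binary_cross_entropy(x_rec, x)          # L_rec, branch 1
        + F.binary_cross_entropy(x_hat, x[1:])    # L_rec, branch 2
        + F.binary_cross_entropy(x_tilde, x[1:])  # L_rec, branch 3
        + F.mse_loss(z_hat, z[1:])                # L_prior (Eq. 3)
        + F.mse_loss(z_tilde, z[1:])              # L_sym, first term (Eq. 4)
        + F.mse_loss(z_tilde, z_hat))             # L_sym, second term (Eq. 4)
loss.backward()
```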
### Symmetry-based representation augmentation

During training, \(S\) provides the _representation augmentation_, since it creates extra fake sequences of \(z\) (i.e., imaginary experience) to help train the prior. In practice, for each batch we apply \(K\) different transformations \(S_{1:K}\) to \(\mathbf{z}\) and yield \(K\) fake sequences. Thus, the two terms of the symmetry-based loss can be specified as: \[\ell_{2}(\mathbf{\tilde{z}}_{2:T},\mathbf{z}_{2:T})=\frac{1}{K}\sum_{k=1}^{K} \ell_{2}(S_{k}^{-1}(R(S_{k}(\mathbf{z}_{1:T-1}))),\mathbf{z}_{2:T}), \tag{5}\] \[\ell_{2}(\mathbf{\tilde{z}}_{2:T},\mathbf{\hat{z}}_{2:T})=\frac{1}{K}\sum_{k=1 }^{K}\ell_{2}(S_{k}^{-1}(R(S_{k}(\mathbf{z}_{1:T-1}))),\mathbf{\hat{z}}_{2:T}), \tag{6}\] where the lower case \(k\) denotes the index of a specific transformation and we refer to \(K\) as the _augmentation factor_. Likewise, the last term of the reconstruction loss can be specified as: \[\mathcal{L}_{\text{BCE}}(\mathbf{\tilde{x}}_{2:T},\mathbf{x}_{2:T})=\frac{1}{ K}\sum_{k=1}^{K}\mathcal{L}_{\text{BCE}}(D(S_{k}^{-1}(R(S_{k}(\mathbf{z}_{1:T-1})))),\mathbf{x}_{2:T}). \tag{7}\] Each \(S\) applied to each sequence \(\mathbf{z}_{1:T}\) belongs to a certain group, and different groups are used for different problems. For the music problem, we assume \(\mathbf{z}_{i}\) to be 1D and use random \(S\in G\cong(\mathbb{R},+)\). In other words, we add or subtract a random scalar from the representation. As for the video problem, we assume \(\mathbf{z}_{i}\) to be 3D and use random \(S\in G\cong(\mathbb{R}^{2},+)\times\mathrm{SO}(2)\). In other words, random rotation and translation are applied on two dimensions of \(\mathbf{z}_{i}\).
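A minimal sketch of sampling the \(K\) transformations for the video task's group assumption \((\mathbb{R}^{2},+)\times\mathrm{SO}(2)\) follows. Which two latent dimensions the group acts on is an assumption here; we leave the second dimension untouched, matching the un-augmented height axis reported in Section 4.2.

```python
import torch

def sample_S_3d(z, K=4):
    """Apply K random elements of (R^2, +) x SO(2) to 3D latents z of shape (..., 3):
    rotate and translate dimensions (z1, z3), keep z2 fixed."""
    outs = []
    for _ in range(K):
        theta = torch.rand(()) * 2 * torch.pi
        c, s = torch.cos(theta), torch.sin(theta)
        rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])  # 2x2 rotation
        t = torch.randn(2)                                              # 2D translation
        zs = z.clone()
        zs[..., [0, 2]] = z[..., [0, 2]] @ rot.T + t
        outs.append(zs)
    return outs

augmented = sample_S_3d(torch.randn(16, 10, 3))   # K fake sequences for one batch
```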
## 4 Results

We test SPS under two modalities of temporal signals: music (section 4.1) and video (section 4.2). The highlight is that SPS successfully learns interpretable low-dimensional factors without sacrificing the accuracy of reconstruction and prediction. In this section, we only present the results trained on simple datasets with the basic model setup. In the appendix, we present extra results trained on more complicated datasets as well as a more advanced setup, SPS+, which is capable of doing content-style disentanglement on top of interpretable learning. The results very much agree with the simple cases.

### Learning linear pitch factor from music audio

#### Dataset

We synthesise a dataset of 27 audio clips, each containing 15 notes in major scales with the first 8 notes ascending and the last 8 notes descending. When training, we convert wave files to power spectrograms so that reconstruction and prediction can be performed on the sequences of images. We refer readers to appendix A.1 for more details.

#### Results on interpretable pitch space

Figure 3 shows that the 1D pitch factor learned by our model has a linear relation with the true pitch, which agrees with human perception. The plot shows the mappings of two tasks and six models. In the embedding task (the first row), the \(x\)-axis is the true pitch and the \(y\)-axis is the embedded \(\mathbf{z}\). In the synthesis task (the second row), the \(x\)-axis is \(\mathbf{z}\) and the \(y\)-axis is the detected pitch (by the YIN algorithm, a standard pitch-estimation method [4]) of decoded (synthesised) notes. The first two models are SPS based on VAE and AE, respectively, trained with representation augmentation factor \(K=4\). The third and fourth models are trained without constraints of physical symmetry (\(K=0\)), serving as our ablations. The fifth one is a vanilla \(\beta\)-VAE, trained only to reconstruct, not to predict. The last one is SPICE [10], a SOTA unsupervised pitch estimator _with strong domain knowledge on how pitch linearity is reflected in log-frequency spectrograms_. As the figure shows, 1) without explicit knowledge of pitch, our model learns a more interpretable pitch factor than \(\beta\)-VAE, and the result is comparable to SPICE, and 2) without the Gaussian prior assumption on the latent variable distribution, our model SPS\({}_{\text{AE}}\) also learns a continuous representation space.

Figure 2: An overview of our model. \(\mathbf{x}_{1:T}\) is fed into the encoder \(E\) to obtain the corresponding representation \(\mathbf{z}_{1:T}\), which is then fed into three different branches yielding three outputs respectively: \(\mathbf{x}^{\prime}_{1:T}\), \(\mathbf{\hat{x}}_{2:T+1}\) and \(\mathbf{\tilde{x}}_{2:T+1}\). Here, \(R\) is the prior model and \(S\) is the symmetry operation. The inductive bias of physical symmetry enforces \(R\) to be equivariant with respect to \(S\), so \(\mathbf{\hat{z}}\) and \(\mathbf{\tilde{z}}\) should be close to each other, and so should \(\mathbf{\hat{x}}\) and \(\mathbf{\tilde{x}}\).

Table 1 shows a more quantitative analysis, using \(R^{2}\) as the metric to evaluate the linearity of the pitch-against-\(z\) mapping from the encoder (embedding) and the decoder (synthesis), respectively. All models except SPICE are trained with 10 random initializations.

**Reconstruction and prior prediction** We investigate the reconstruction and prediction capacities of our model and show that they are not harmed by adding symmetry constraints. We compare our model, our model ablating symmetry constraints, and a \(\beta\)-VAE trained solely for the reconstruction of power spectrograms. Table 2 reports per-pixel BCE of the reconstructed sequences from the original input frames (Self-recon) and from the RNN predictions (Image-pred). We also include \(\mathcal{L}_{\mathrm{prior}}\), the MSE loss on the RNN-predicted \(\hat{z}\) as defined in section 3.2. The results show that our models slightly surpass the ablation and baseline models in all three metrics.

### Learning object 3D coordinates from videos of a moving object

**Dataset** We run physical simulations of a bouncing ball in a 3D space and generate 512 trajectories, yielding a dataset of videos. Please see appendix A.1 for more details.

**Results on interpretable 3D representation** Figure 4 visually evaluates the interpretability of the learned location factors by traversing the \(z\) space, one dimension at a time, and using the learned decoder to synthesise images. If the learned factors are linear w.r.t. the 3D Cartesian coordinates, the synthesised ball should display linear motions as we change \(z\) linearly. In brief, SPS learns a more interpretable and linear \(z\) space. Here, subplot (a) depicts the results of SPS\({}_{\text{VAE}}\) with \(K\)=4. We see that the un-augmented dimension, \(z_{2}\), controls the height of the ball, while \(z_{1}\) and \(z_{3}\) move the ball along the horizontal (ground) plane. Each axis is much more linear than in (b) and (c). Subplot (b) evaluates SPS\({}_{\text{VAE}}\) with representation augmentation \(K\)=0, essentially turning _off_ SPS. As \(z_{i}\) varies, the ball seems to travel along curves in the 3D space, showing the ablation learns some continuity w.r.t. the 3D space, but is obviously far from linear. In (c), the \(\beta\)-VAE fails to give consistent meaning to any axis.

\begin{table} \begin{tabular}{l c c} \hline \hline Methods & Learned factor \(R^{2}\uparrow\) & Synthesis \(R^{2}\uparrow\) \\ \hline SPS\({}_{\text{VAE}}\), \(K\)=4 (Ours) & **1.00\(\pm\)0.00** & 0.96\(\pm\)0.02 \\ SPS\({}_{\text{AE}}\), \(K\)=4 (Ours) & **1.00\(\pm\)0.00** & **0.96\(\pm\)0.01** \\ SPS\({}_{\text{VAE}}\), \(K\)=0 (Ablation) & 0.99\(\pm\)0.01 & 0.93\(\pm\)0.03 \\ SPS\({}_{\text{AE}}\), \(K\)=0 (Ablation) & 0.99\(\pm\)0.00 & 0.94\(\pm\)0.03 \\ \(\beta\)-VAE & 0.72\(\pm\)0.25 & 0.69\(\pm\)0.24 \\ SPICE & **1.00** & N/A \\ \hline \hline \end{tabular} \end{table} Table 1: The linearity of the learned pitch factor and the synthesized sound pitch, evaluated by \(R^{2}\).

Figure 3: A visualisation of the mapping between the 1D learned factor \(\mathbf{z}\) and the true pitch, in which a straighter line indicates a better result. In the upper row, models encode notes in the test set to \(\mathbf{z}\). The \(x\) axis shows the true pitch and the \(y\) axis shows the learned pitch factor. In the lower row, the \(x\) axis traverses the \(\mathbf{z}\) space. The models decode \(\mathbf{z}\) to audio clips. We apply YIN to the audio clips to detect the pitch, which is shown by the \(y\) axis. In both rows, a linear, noiseless mapping is ideal, and our method performs the best.
Table 3 further shows quantitative evaluations of the linearity of the learned location factor, in which we see that SPS outperforms other models by a large margin. To measure linearity, we fit a linear regression from \(z\) to the true 3D location over the test set and then compute the Mean Square Errors (MSE). Therefore, a smaller MSE indicates a better fit. Each model is evaluated with 10 random initialisations. Here, we also include the results of SPS\({}_{\text{AE}}\). Very similar to the music experiment in section 4.1, we again see that even without the Gaussian prior assumption, our model SPS\({}_{\text{AE}}\) learns an interpretable latent space comparable to SPS\({}_{\text{VAE}}\).

**Reconstruction and prior prediction** Table 4 displays the reconstruction and prediction losses on the test set. Results show that adding symmetry constraints does not significantly hurt the prediction losses. Frame-wise self-reconstruction is worse for the SPS models, but only by a small margin.

\begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Self-recon \(\downarrow\) & Image-pred \(\downarrow\) & \(\mathcal{L}_{\mathrm{prior}}\downarrow\) \\ \hline SPS\({}_{\text{VAE}}\), \(K\)=4 (Ours) & **0.0375\(\pm\)0.0012** & **0.0377\(\pm\)0.0007** & **0.0057\(\pm\)0.0015** \\ SPS\({}_{\text{AE}}\), \(K\)=4 (Ours) & 0.0381\(\pm\)0.0012 & 0.0384\(\pm\)0.0009 & 0.0068\(\pm\)0.0031 \\ SPS\({}_{\text{VAE}}\), \(K\)=0 (Ablation) & 0.0384\(\pm\)0.0014 & 0.0388\(\pm\)0.0013 & 0.0134\(\pm\)0.0101 \\ SPS\({}_{\text{AE}}\), \(K\)=0 (Ablation) & 0.0386\(\pm\)0.0012 & 0.0391\(\pm\)0.0012 & 0.0075\(\pm\)0.0024 \\ \(\beta\)-VAE & 0.0406\(\pm\)0.0008 & N/A & N/A \\ \hline \hline \end{tabular} \end{table} Table 2: Reconstruction and prediction results on the audio task.

Figure 4: A visualization of latent-space traversal performed on three models: (a) ours, (b) ablation, and (c) baseline, in which we see (a) achieves better linearity and interpretability. Here, row \(i\) shows the generated images when changing \(z_{i}\) and keeping \(z_{\neq i}=0\), where the \(x\) axis varies \(z_{i}\) from \(-2\sigma_{z}\) to \(+2\sigma_{z}\). We center and normalise \(z\), so that the latent space from different runs is aligned for fair comparison. Specifically, in (a), changing \(z_{2}\) controls the ball's height, and changing \(z_{1},z_{3}\) moves the ball parallel to the ground plane. In contrast, the behavior in (b) and (c) is less interpretable.

Figure 5: A comparison of linear projection MSEs among different augmentation factors (\(K\)) and training set sizes, which shows that representation augmentation improves sample efficiency. (Smaller values mean better results.)

## 5 Analysis

To better understand the effects of representation augmentation (first introduced in section 3.3), we ran extra experiments with different \(S\) and \(K\). We choose the vision problem since a 3D latent space manifests a more obvious difference when physical symmetry is applied. In section 5.1, we show that a larger augmentation factor \(K\) leads to higher sample efficiency. In section 5.2, we visualise the change of the learned latent space against the training epoch according to different values of \(K\). In section 5.3, we show that some deliberately incorrect group assumptions \(S\) can also achieve good results.

### Representation augmentation improves sample efficiency

Figure 5 shows that _a larger factor of representation augmentation leads to a lower linear projection loss_ (the measurement defined in section 4.2) of the learned 3D representation. Here, \(K\) is the augmentation factor, and \(K=0\) means the model is trained without physical symmetry. The comparative study is conducted on 4 training set sizes (256, 512, 1024, and 2048), in which each box plot shows the results of 10 experiments trained with a fixed \(K\) and random initialisation. We see that a larger \(K\) leads to better results and compensates for the lack of training data. E.g., the loss trained on 256 samples with \(K=4\) is comparable to the loss trained on 1024 samples with \(K=0\), and the loss trained on 512 samples with \(K=4\) is even lower than the loss trained on 2048 samples with \(K=0\). Furthermore, when \(K=0\), increasing the number of training samples beyond a certain point does not further shrink the error, but increasing \(K\) still helps.
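Since both Table 3 and Figure 5 rely on the linear projection loss, here is a small sketch of that metric: a least-squares fit from the learned factors to the true coordinates, followed by the residual MSE. The exact fitting details are assumptions.

```python
import numpy as np

def linear_projection_mse(z, coords):
    """Fit coords ~ W z + b by least squares and report the per-axis residual MSE,
    a sketch of the interpretability metric behind Table 3."""
    Z = np.hstack([z, np.ones((len(z), 1))])        # add a bias column
    W, *_ = np.linalg.lstsq(Z, coords, rcond=None)  # (z_dim + 1, 3) weights
    resid = coords - Z @ W
    return (resid**2).mean(axis=0)                  # MSE per coordinate axis

# Toy usage: a latent space that is a linearly mixed version of the true coordinates.
rng = np.random.default_rng(0)
coords = rng.normal(size=(1000, 3))
z = coords @ rng.normal(size=(3, 3)) + 0.01 * rng.normal(size=(1000, 3))
print(linear_projection_mse(z, coords))             # near zero for a linear latent space
```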
### Representation augmentation improves interpretability

Figure 6 visualises the latent space during different stages of model training, and we see that a larger \(K\) leads to a better enforcement of interpretability. The horizontal axis shows the training epoch. Three experiments with different \(K\) values (\(\times 0,\times 4,\times 16\)) are stacked vertically. Each experiment is trained twice with random initialisation. Each subplot shows the orthogonal projection of the \(z\) space onto the plane spanned by \(z_{1}\) and \(z_{3}\), therefore hiding most of the \(y\) axis (i.e. ball height) wherever a linear disentanglement is fulfilled. During training, the role of physical symmetry is to "straighten" the encoded grid, and a larger \(K\) yields a stronger effect.

### Representation augmentation with deliberately incorrect group assumptions

Additionally, we test SPS with deliberately incorrect group assumptions. The motivation is as follows. In real applications, researchers may incorrectly specify the symmetry constraint when the data are complex or the symmetry is not known _a priori_. SPS is more useful if it works with various group assumptions close to the truth. In our analysis, we are surprised to find that SPS still learns interpretable representations under alternate group assumptions obtained by perturbing the correct one. Figure 7 shows our results with the vision task (on the bouncing ball dataset).
The \(x\) tick labels show the augmentation method. Its syntax follows section 3.3, e.g., "\((\mathbb{R}^{1},+)\times\mathrm{SO}(2)\)" denotes augmenting representations by 1D translations and 2D rotations. The \(y\) axis of the plot is still the linear projection loss (as discussed in section A.4) that evaluates the interpretability of the learned representation. As is shown by the boxplot, five out of five perturbed group assumptions yield better results than the "w/o Symmetry" baseline. Particularly, \((\mathbb{R}^{3},+)\times\mathrm{SO}(2)\) and \((\mathbb{R}^{2},+)\times\mathrm{SO}(3)\) learn significantly more linear representations, showing that some symmetry assumptions are "less incorrect" than others, and that SPS can achieve good results under a multitude of group assumptions.

## 6 Related work

The idea of using a predictive model for better self-supervised learning has been well established [1, 1, 10]. In terms of model architecture, our model is very similar to VRNN [10]. In addition, our model can be seen as a variation of the joint-embedding predictive architecture (JEPA) in [1] if we eliminate the reconstruction losses on the observation. In fact, we see the network topology of a model as the "hardware" and the learning strategy (e.g., contrastive method, regularised method, or a mixed one) as the "software". The main contribution of this study lies in the learning strategy: to use physical symmetry to limit the complexity of the prior model, and to use representation augmentation to increase sample efficiency. The existing notion of "symmetry" as in [11, 1] is very different from physical symmetry as an inductive bias for representation learning. Most current symmetry-based methods care about the relation between the observation \(x\) and the latent \(z\) [2, 1, 13, 14]. E.g., when a certain transformation is applied to \(x\), \(z\) should simply stay invariant or follow the same or a similar transformation. Such an assumption inevitably requires some knowledge in the domain of \(x\). In contrast, physical symmetry focuses solely on the dynamics of \(z\), and therefore we only have to make assumptions about the underlying group transformation in the latent space. We see the two most relevant works in the field of reinforcement learning [13, 14], which apply an equivariance assumption similar to the physical symmetry used in this paper. The major differences are twofold. First, to disentangle the basic factors, our method requires no interactions with the environment. Second, our method is much more concise; it needs no other tailored components or inductive biases such as the symmetric embedding network and contrastive loss used in [13] or the MDP homomorphism applied in [13].

## 7 Conclusion and future work

In this paper, we use physical symmetry as a novel inductive bias to learn interpretable and low-dimensional representations from time-series data.
Experiments show that physical symmetry effectively distills an interpretable linear pitch concept, which agrees with human music perception and plays a fundamental role in music creation, from music audio without any labels. With the same method, we can learn the concept of 3D Cartesian space from monocular videos of a bouncing ball shot from a fixed perspective. In addition, a robust training technique, representation augmentation, is developed to enforce physical symmetry during training. Analysis shows that representation augmentation leads to higher sample efficiency and latent-space interpretability, and it stays effective even when the symmetry assumption is incorrect.

\begin{table} \begin{tabular}{l c c c} \hline \hline Method & Self-recon \(\downarrow\) & Image-pred \(\downarrow\) & \(\mathcal{L}_{\mathrm{prior}}\downarrow\) \\ \hline SPS\({}_{\mathrm{VAE}}\), \(K\)=4 (Ours) & 0.64382 \(\pm\) 9e-05 & 0.6456 \(\pm\) 4e-04 & 0.14 \(\pm\) 0.05 \\ SPS\({}_{\mathrm{AE}}\), \(K\)=4 (Ours) & 0.64386 \(\pm\) 7e-05 & 0.6458 \(\pm\) 3e-04 & 0.17 \(\pm\) 0.07 \\ SPS\({}_{\mathrm{VAE}}\), \(K\)=0 (Ablation) & 0.64372 \(\pm\) 4e-05 & 0.6459 \(\pm\) 2e-04 & 0.19 \(\pm\) 0.10 \\ SPS\({}_{\mathrm{AE}}\), \(K\)=0 (Ablation) & 0.64367 \(\pm\) 5e-05 & **0.6456 \(\pm\) 1e-04** & **0.11 \(\pm\) 0.03** \\ \(\beta\)-VAE & **0.64345 \(\pm\) 5e-05** & N/A & N/A \\ \hline \hline \end{tabular} \end{table} Table 4: Reconstruction and prediction losses of the video task. Two outliers are removed from the 50 runs.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & \(x\) axis MSE \(\downarrow\) & \(y\) axis MSE \(\downarrow\) & \(z\) axis MSE \(\downarrow\) & MSE \(\downarrow\) \\ \hline SPS\({}_{\mathrm{VAE}}\), \(K\)=4 (Ours) & **0.11 \(\pm\) 0.09** & **0.31 \(\pm\) 0.34** & 0.26 \(\pm\) 0.34 & 0.26 \(\pm\) 0.30 \\ SPS\({}_{\mathrm{AE}}\), \(K\)=4 (Ours) & 0.13 \(\pm\) 0.07 & 0.39 \(\pm\) 0.33 & **0.21 \(\pm\) 0.17** & **0.24 \(\pm\) 0.17** \\ SPS\({}_{\mathrm{VAE}}\), \(K\)=0 (Ablation) & 0.33 \(\pm\) 0.10 & 0.80 \(\pm\) 0.18 & 0.75 \(\pm\) 0.17 & 0.62 \(\pm\) 0.14 \\ SPS\({}_{\mathrm{AE}}\), \(K\)=0 (Ablation) & 0.26 \(\pm\) 0.09 & 0.44 \(\pm\) 0.27 & 0.55 \(\pm\) 0.17 & 0.42 \(\pm\) 0.15 \\ \(\beta\)-VAE & 0.36 \(\pm\) 0.03 & 0.70 \(\pm\) 0.01 & 0.68 \(\pm\) 0.03 & 0.58 \(\pm\) 0.01 \\ \hline \hline \end{tabular} \end{table} Table 3: Linear fits between the true location and the learned location factor. We run the encoder on the test set to obtain data pairs in the form of (location factor, true coordinates). We then run a linear fit on the data pairs to evaluate factor interpretability. Two outliers are removed from the 50 runs.

Last but not least, we see that with physical symmetry, our sequential representation learning model can drop the Gaussian prior regularisation on the latent space. Such a result empirically indicates that physical symmetry, as a causal (counterfactual) inductive bias, might be more essential compared to the Gaussian prior as a purely statistical regularization. For future work, we see several directions to improve SPS's generality and soundness. First, what if the underlying concept following physical symmetry contains only partial information and cannot fully reconstruct the input? We believe this problem is related to content-style disentanglement and JEPA [11], and we show some preliminary results in appendix A.3 and A.4.
Second, how to extend the current model to distill concepts from multibody systems, e.g., to learn the pitch concept from _polyphonic_ music and to learn 3D space from videos of _multiple_ moving objects? Finally, can we formalise a theory to quantify the effect of representation augmentation, e.g., to measure the degrees of freedom of the latent space with vs. without physical symmetry, and explain why incorrect symmetry assumptions still lead to a correct, interpretable latent space? Above all, we hope this study sheds some light on interpretable representation learning and explainable AI in general.

Figure 6: A visualisation of the learned latent space against the training epoch, in which we see that a larger \(K\) leads to a stronger enforcement on learning a linear latent space. Here, we plot how the encoder projects an equidistant 3D grid of true Cartesian coordinates onto the \(z\) space. Different colours denote the respective axes in the true coordinates.

Figure 7: Evaluation on various group assumptions, which shows that physical symmetry is a very robust inductive bias, as even the incorrect symmetry assumptions lead to better results than the baseline. Here, the \(y\) axis is the linear projection loss between the learned location factor and the true coordinates, so a lower value means better interpretability of representations. The leftmost box shows the baseline without symmetry constraint. The next five boxes show five deliberately _incorrect_ group assumptions, trained with \(K\)=4. The rightmost box shows the correct group assumption.

**Reproducibility statement** The source code for training and testing the models, as well as generating the figures and tables, is publicly available at [https://github.com/double-blind-75098/Learning-basic-interpretable-factors-from-temporal-signals-via-physical-symmetry](https://github.com/double-blind-75098/Learning-basic-interpretable-factors-from-temporal-signals-via-physical-symmetry).

**Acknowledgments** We'd like to thank Dr. Zhao Yang and Dr. Maigo Wang for inspiring discussions on physical symmetry. Also, we'd like to extend our sincere thanks to Junyan Jiang and Yixiao Zhang for their useful and practical suggestions.
2308.14475
Interactive Multi-Interest Process Pattern Discovery
Process pattern discovery methods (PPDMs) aim at identifying patterns of interest to users. Existing PPDMs typically are unsupervised and focus on a single dimension of interest, such as discovering frequent patterns. We present an interactive multi-interest-driven framework for process pattern discovery aimed at identifying patterns that are optimal according to a multi-dimensional analysis goal. The proposed approach is iterative and interactive, thus taking experts' knowledge into account during the discovery process. The paper focuses on a concrete analysis goal, i.e., deriving process patterns that affect the process outcome. We evaluate the approach on real-world event logs in both interactive and fully automated settings. The approach extracted meaningful patterns validated by expert knowledge in the interactive setting. Patterns extracted in the automated settings consistently led to prediction performance comparable to or better than patterns derived considering single-interest dimensions, without requiring user-defined thresholds.
Mozhgan Vazifehdoostirani, Laura Genga, Xixi Lu, Rob Verhoeven, Hanneke van Laarhoven, Remco Dijkman
2023-08-28T10:26:37Z
http://arxiv.org/abs/2308.14475v1
# Interactive Multi-Interest Process Pattern Discovery

###### Abstract

Process pattern discovery methods (PPDMs) aim at identifying patterns of interest to users. Existing PPDMs typically are unsupervised and focus on a single dimension of interest, such as discovering frequent patterns. We present an interactive multi-interest-driven framework for process pattern discovery aimed at identifying patterns that are optimal according to a multi-dimensional analysis goal. The proposed approach is iterative and interactive, thus taking experts' knowledge into account during the discovery process. The paper focuses on a concrete analysis goal, i.e., deriving process patterns that affect the process outcome. We evaluate the approach on real-world event logs in both interactive and fully automated settings. The approach extracted meaningful patterns validated by expert knowledge in the interactive setting. Patterns extracted in the automated settings consistently led to prediction performance comparable to or better than patterns derived considering single-interest dimensions, without requiring user-defined thresholds.

Keywords: Process Pattern Discovery, Multi-interest Pattern Detection, Process Mining, Outcome-Oriented Process Patterns.

## 1 Introduction

Process pattern discovery methods (PPDMs) aim to discover process patterns that are _of interest_ for the human analyst, where a process pattern corresponds to a set of process activities (possibly annotated with additional data) together with their ordering relations. The interest of a pattern is usually computed according to one or more functions. Previous studies highlighted how these techniques often uncovered interesting behaviors that would otherwise remain hidden in start-to-end process models [23]. Several approaches have been proposed to discover process patterns from a given event log [23, 7, 13], and the discovered patterns have been employed in several applications, for instance event abstraction [20] or trace classification [26]. However, most of these approaches focus on a single interest dimension. In particular, they usually aim to detect _frequent_ patterns, which often leads to the generation of a multitude of non-interesting patterns and possibly to missing interesting but infrequent ones [22]. As pointed out by recent studies in the pattern mining field [12], the concept of _interest_ of a pattern is often linked to multiple dimensions, some of which may be in conflict with each other. These considerations also hold in the process domain, since processes emerge from the interplay of multiple factors, highlighting the need for multi-dimensional thinking in process analysis [11].
Besides the _multi-objective_ challenge, most PPDMs are unsupervised and suffer from pattern explosion in real-life event logs. Previous studies showed that leveraging expert domain knowledge can avoid or mitigate the pattern explosion issue [19, 2]. A semi-supervised PPDM [19] was proposed that lets users manually select and extend patterns. However, the approach still relies exclusively on frequency-based metrics. Also, the burden of selecting and extending the discovered patterns is left to the user as a manual task without much guidance. In this work, we introduce the IMPresseD framework (**I**nteractive **M**ulti-interest **P**rocess **P**attern **D**iscovery) for process pattern discovery. IMPresseD is designed to derive interesting and easily interpretable patterns for the end users by combining different strategies. First, the framework allows users to define different _interest functions_ to measure the interest of patterns, supporting customizable multi-dimensional analysis goals. In this way, the user has more control over the measures of relevance that they use, which is expected to lead to patterns that are indeed considered meaningful by end users. Multi-objective optimization strategies are used so that, to identify the relevant patterns, the user has to go over far fewer candidates than the ones obtained by threshold-dependent strategies. The framework supports an in-depth analysis of the pattern characteristics, which also considers the characteristics of the process executions in which the pattern occurs. Finally, the approach is iterative and interactive. At each step, the user is presented with the process patterns that are best according to the user-defined interest functions, and they can select the ones to expand further. To showcase the framework's usefulness, we also discuss how to use it with a concrete analysis goal, i.e., _deriving process patterns affecting the process outcome_. This is inherently a complex problem for which different aspects need to be considered. Furthermore, to the best of our knowledge, most outcome-oriented pattern detection approaches do not support a multi-dimensional analysis. Given this concrete analysis goal, we carried out a two-fold evaluation to validate our approach. First, we use a real-world case study in healthcare to show the capability of the proposed framework in supporting domain experts in extracting meaningful patterns in an interactive setting. Then, we evaluate our approach in a quantitative experiment to assess the predictive power of the automatically discovered patterns. We compare the results of our approach with the ones obtained by using a single metric and by using the entire pattern set without filtering. The obtained results show that the discovered set of patterns consistently ranked within the top positions, while patterns mined by adopting single metrics led to more unstable performance. Furthermore, the proposed framework returned a set of patterns significantly smaller than the entire pattern set while preserving comparable predictive power. Summing up, the paper contributes to the literature by introducing:

* a multi-interest and interactive process pattern discovery framework;
* tailored interest functions for discovering process patterns affecting the outcome of the process.

The remainder of this paper is organized as follows. Section 2 reviews the relevant related work. Section 3 provides basic concepts used throughout the paper.
Section 4 introduces the proposed framework, together with a concrete instantiation of the interest functions to support outcome-oriented pattern discovery. Section 5 presents and discusses the evaluation. Finally, Section 6 draws the conclusion and delineates some ideas for future studies.

## 2 Related work

Most previous PPDMs take an event log as input and generate patterns based on user-defined thresholds on a set of predefined measures of interest. These approaches vary depending on the type of patterns they aim to extract. Early work focused on discovering sequential patterns over event traces, such as identifying sequences that fit predefined templates [6] or applying a sequence pattern mining algorithm [14]. More recent research has focused on patterns representing more complex control-flow relationships, for instance episodes representing eventually-follows relations [16], or graphs representing both sequential and concurrent behaviors [15, 9]. Patterns that represent a more comprehensive set of control-flow relationships, including sequences, concurrency, and choice, are considered in the approach proposed by Tax et al. [23]. This approach has been extended to allow the extraction of patterns based on a more general set of utility functions [22]. Taking into account the context in which patterns are observed, Acheli et al. extended previous work to discover contextual behavioral patterns, allowing for insights into the aspects that influence the execution of a process [1]. Although these unsupervised PPDMs can uncover interesting patterns, they offer little or no support for multi-dimensional analysis goals involving possibly conflicting dimensions. Furthermore, they do not incorporate user knowledge, which often results in the return of uninteresting patterns. A possible mitigation strategy for this problem consists in keeping "humans in the loop", as observed by previous authors. For instance, Benevento et al. showed potential improvements in the quality and clarity of process models by employing interactive process discovery in modeling healthcare processes, compared to traditional automated discovery techniques [4, 3]. Within the PPDMs domain, a semi-supervised approach has been proposed for discovering process patterns which involves the user in the pattern extraction process [19]. However, this approach only exploits frequency-based interest functions based on user-defined thresholds.

## 3 Preliminaries

In this section, we recall the basic concepts needed to introduce our framework.

Definition 1 (Event): Let \(\mathcal{AC}\) be the universe of activities, \(\mathcal{C}\) be the universe of case identifiers, \(\mathcal{T}\) be the time domain, and \(\mathcal{D}_{1},\mathcal{D}_{2},...,\mathcal{D}_{m}\) be the sets of additional attributes with \(i\in[1,m]\), \(m\ \in\ \mathbb{Z}\). An event is a tuple \(e=(a,c,t,d_{1},\ldots,d_{m})\), where \(a\in\mathcal{AC}\), \(c\in\mathcal{C}\), \(t\in\mathcal{T}\) and \(d_{i}\in\mathcal{D}_{i}\).

Definition 2 (Trace, event log): A _trace_\(\sigma=\langle e_{1},\cdots,e_{n}\rangle\) is a finite non-empty sequence of events \(e_{1},\cdots,e_{n}\) whose timestamps do not decrease. Let \(\mathcal{S}\) denote the universe of all possible traces; an _event log_ can then be defined as a set of traces \(L=\{\sigma_{1},\sigma_{2},\cdots,\sigma_{n}\}\subseteq S\). We use \(E_{\sigma}\) for the set of events in trace \(\sigma\).
We define \(\pi_{act}(e)\), \(\pi_{time}(e)\), \(\pi_{case}(e)\), and \(\pi_{d_{i}}(e)\) to return the activity, timestamp, case identifier and the attribute \(d_{i}\) associated with \(e\), respectively. A well-known issue of log traces is that they flatten the real ordering relations among process events, hiding possible concurrency [17]. Since we intend to discover patterns representing both sequential and concurrent relations, we convert log traces into so-called _partially ordered traces_. It is possible to derive partially ordered traces from fully ordered traces by using a conversion oracle function obtained from expert knowledge or data analysis [10, 18].

Definition 3 (Partially ordered trace): Given a conversion oracle function \(\varphi\) and a log trace \(\sigma\), a _partially ordered trace_ \(\varphi(\sigma)=(E_{\sigma},\prec_{\sigma})\) is a Directed Acyclic Graph (DAG), where \(E_{\sigma}\) and \(\prec_{\sigma}\subseteq E_{\sigma}\times E_{\sigma}\) correspond to the set of nodes and edges, respectively. We define the matrix \(A_{\varphi(\sigma)}\) as an upper triangular adjacency matrix that specifies directed edges from \(e\) to \(e^{\prime}\), with \(e,e^{\prime}\in E_{\sigma}\). Also, \(R_{\varphi(\sigma)}\) is the reachability matrix derived from \(A_{\varphi(\sigma)}\) to represent all possible paths from \(e\) to \(e^{\prime}\) of length \(l\) such that \(2\leq l\leq\mid\sigma\mid-1\). For each pair of events \(e,e^{\prime}\in E_{\sigma}\), such that \(e\neq e^{\prime}\), we define the following ordering relations:

* if \(R_{\varphi(\sigma)}(e,e^{\prime})\neq 0\), \(e^{\prime}\) _eventually follows_ \(e\),
* if \(A_{\varphi(\sigma)}(e,e^{\prime})\neq 0\), \(e^{\prime}\) _directly follows_ \(e\),
* if \(R_{\varphi(\sigma)}(e,e^{\prime})=0\) and \(R_{\varphi(\sigma)}(e^{\prime},e)=0\), then \(e\) is _concurrent_ with \(e^{\prime}\).

Definition 4 (Process pattern): A process pattern \(P=(N,\mapsto,\alpha,\beta)\) is a DAG, where:

* \(N\) _is a set of nodes,_
* \(\mapsto\) _is a set of edges over_ \(N\)_,_
* \(\alpha\) _is a function that assigns a label_ \(\alpha(n)\) _to any node_ \(n\in N\)_,_
* \(\beta\) _is a foundational pattern for_ \(P\)_, which means pattern_ \(P\) _is extended from pattern_ \(\beta\)_._

_Also, if \(\mid N\mid=1\), then \(\beta\) is NULL, i.e., a single node is considered a pattern without any foundational pattern._

Examples of process patterns can be found in Fig. 2. \(P^{1}_{\theta},...,P^{5}_{\theta}\) all share the same foundational pattern \(\theta\), represented by the single node \(b\). In turn, pattern \(\omega\) is the foundational pattern for \(P^{1}_{\omega}\) and \(P^{2}_{\omega}\) in the middle column. Given a process pattern, an _instance_ of the pattern is an occurrence of the pattern in a log trace.

Definition 5 (Pattern instances set): Let \(P=(N,\mapsto,\alpha,\beta)\) be a pattern, \(\varphi(\sigma)=(E_{\sigma},\prec_{\sigma})\) a partially ordered trace, \(A_{\mapsto}\) be an upper triangular adjacency matrix over \(N\), and \(R_{\hookrightarrow}\) be the reachability matrix of size \(\mid N\mid-1\) derived from \(A_{\mapsto}\). 
Given a subset \(E^{\prime}\subseteq E_{\sigma}\) of nodes in \(\varphi(\sigma)\), such that there is a bijective function \(I:E^{\prime}\to N\), we define the _pattern instances_ of \(P\) in \(\varphi(\sigma)\) as \(PI(P,\varphi(\sigma))=\{E^{\prime}\mid\forall e,e^{\prime}\in E^{\prime},A_{\varphi(\sigma)}(e,e^{\prime})=A_{\mapsto}(I(e),I(e^{\prime}))\wedge R_{\varphi(\sigma)}(e,e^{\prime})=R_{\hookrightarrow}(I(e),I(e^{\prime}))\wedge\pi_{act}(e)=\alpha(I(e))\}\). The pattern instances set of pattern \(P\) over event log \(L\) is defined as \(PIS(P,L,\varphi)=\bigcup_{\sigma\in L}PI(P,\varphi(\sigma))\).

## 4 IMPresseD framework

Given an event log, the objective of the IMPresseD framework (Fig. 1) is to discover the set of process patterns that are best according to multiple interest functions defined by the user. The framework includes the following steps.

1. Converting all traces in the event log into partially ordered traces using a conversion oracle derived from expert knowledge or data analysis.
2. Defining the interest functions that capture the users' notion of pattern interestingness based on their analysis goal. Analytical dashboards to visualize the discovered patterns and the computed interest functions are also defined.
3. Extracting patterns of length 1, i.e., individual activities.
4. Measuring the interestingness of each discovered pattern through the set of interest functions defined at _Step 2_.
5. Returning the set of patterns that are the best according to the interest functions (i.e., non-dominated patterns in the Pareto front).
6. If the user is satisfied with the current set of patterns or no extension is possible, the procedure ends. Otherwise, the user selects pattern(s) to extend (i.e., the foundational patterns), and the procedure goes to _Step 7_.
7. Building all extensions of the foundational patterns and going to _Step 4_.

In the remainder of this section, we delve into the pattern _selection_ (Steps 4 and 5) and _extension_ (Step 7). Finally, we show an instantiation of the interest functions (Step 2) using an analysis goal for the discovery of process patterns affecting the process outcome.

### Pattern selection

Let \(\mathcal{P}_{i}=\{P^{1},P^{2},...,P^{k}\}\) be the set of all patterns discovered in the \(i^{th}\) iteration of the method and let \(\mathcal{I}=\{\mathcal{I}_{1},\mathcal{I}_{2},...,\mathcal{I}_{m}\}\) be the set of interest functions, where \(\forall\mathcal{I}_{k}\in\mathcal{I}\), \(\mathcal{I}_{k}:\mathcal{P}_{i}\rightarrow\mathbb{R}\). The pattern selection module aims to return the set of patterns (\(P^{*}\subseteq\mathcal{P}_{i}\)) that optimize the pre-defined interest functions. This corresponds to solving a _multi-objective optimization problem (MOP)_. Several approaches have been proposed in the literature to solve a MOP. Note, however, that a feasible solution optimizing all objective functions simultaneously usually does not exist. Therefore, the goal is to find the so-called _Pareto front_, i.e., the set of patterns that are not dominated by any other pattern in terms of the multiple interest functions. Informally, solutions on the Pareto front are such that no objective can be improved without worsening at least one of the other objectives. In this paper, we use the algorithm proposed by [5] to filter out dominated patterns. 
For any pair of patterns \(P^{l}\), \(P^{j}\in\mathcal{P}_{i}\), we say that \(P^{l}\) dominates \(P^{j}\) if and only if: a) \(\forall\mathcal{I}_{k}\in\mathcal{I},\mathcal{I}_{k}(P^{l})\) is no worse than \(\mathcal{I}_{k}(P^{j})\); b) \(\exists\mathcal{I}_{k}\in\mathcal{I},\mathcal{I}_{k}(P^{l})\) is strictly better than \(\mathcal{I}_{k}(P^{j})\).

Figure 1: Overview of the IMPresseD framework

### Pattern extension

Informally, extending a pattern \(P\) means generating a new pattern \(P^{\prime}\) by adding new nodes and edges to \(P\) according to a set of _extension rules_ applied on partially ordered traces involving at least one instance of the pattern. Formally, let \(\varphi(\sigma)=(E_{\sigma},\prec_{\sigma})\) be a partially ordered trace, and \(P=(N,\mapsto,\alpha,\beta)\) be a pattern, such that \(\mid PI(P,\varphi(\sigma))\mid>0\) and \(E^{\prime}\in PI(P,\varphi(\sigma))\). An extension operator is a function \(Ext_{f}\) that takes as input pattern \(P\) and a pattern instance \(E^{\prime}\) and returns a new pattern \(P^{\prime}\) according to the extension rule \(f\). Specifically, \(Ext_{f}(P,E^{\prime})=(N\cup V_{f},\mapsto\cup\mapsto_{f},\alpha\cup\alpha^{\prime},P)\), where \(V_{f}\) is the set of nodes in \(\varphi(\sigma)\) satisfying the ordering relation expressed by the extension rule \(f\), \(\mapsto_{f}\) is the set of edges linking the nodes of \(V_{f}\) with the nodes in \(E^{\prime}\), and \(\alpha^{\prime}\) is the labelling function for the nodes in \(V_{f}\). In this paper, \(f\in\{\mapsto,\mapsto^{\prime},||,\leadsto,\leadsto^{\prime},dc\}\), which represents respectively: (1) _directly following_, (2) _directly preceding_, (3) _concurrent_, (4) _eventually following_, (5) _eventually preceding_, (6) _direct context_ relations. Given \(A_{\varphi(\sigma)}\) as the adjacency matrix and \(R_{\varphi(\sigma)}\) as the reachability matrix of size \(\mid\sigma\mid-1\) over \(E_{\sigma}\), we define \(V_{\mapsto}=\{e\in E_{\sigma}\mid\forall n\in E^{\prime},e\notin E^{\prime},A_{\varphi(\sigma)}(n,e)=1\}\). In a similar way, \(V_{\leadsto}=\{e\in E_{\sigma}\mid\forall n\in E^{\prime},e\notin E^{\prime},A_{\varphi(\sigma)}(n,e)=0,R_{\varphi(\sigma)}(n,e)>0\}\) and \(V_{||}=\{e\in E_{\sigma}\mid\forall n\in E^{\prime},e\notin E^{\prime},A_{\varphi(\sigma)}(n,e)=0,R_{\varphi(\sigma)}(n,e)=0,R_{\varphi(\sigma)}(e,n)=0\}\). Note that \(V_{\mapsto^{\prime}}\) and \(V_{\leadsto^{\prime}}\) can be derived by changing the order of \(e\) and \(n\) in the definition of \(V_{\mapsto}\) and \(V_{\leadsto}\), respectively. Finally, \(dc\) is defined as \(Ext_{\rm dc}(P,E^{\prime})=Ext_{\mapsto}(P,E^{\prime})\cup Ext_{\mapsto^{\prime}}(P,E^{\prime})\cup Ext_{||}(P,E^{\prime})\).

Fig. 2 illustrates some examples of pattern extensions. The black dotted boxes in each column of the figure highlight the instance found in the partially ordered trace of a pattern \(P\) we want to extend. For instance, single node \(b\) is an instance of pattern \(\theta\), and its corresponding extensions are patterns \(P^{1}_{\theta},...,P^{5}_{\theta}\), where the number in the red dotted box reflects the ordering of the rule set. For instance, the first (second) rule represents the _directly following (preceding)_ relation, which results in pattern \(P^{1}_{\theta}\) (\(P^{2}_{\theta}\)); the third rule involves nodes _concurrent_ with \(b\), i.e., in this case, only node \(c\); and so on. 
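As a compact illustration of how the dominance check of Section 4.1 and the neighbor sets \(V_{f}\) above could be computed, consider the following sketch. It assumes, for simplicity, that every interest function is to be maximized (a minimized function such as the case distance can be negated) and that the matrices of \(\varphi(\sigma)\) are NumPy arrays; the function names are ours, not the tool's actual API:

```python
from typing import Dict, List, Set
import numpy as np

def dominates(si: List[float], sj: List[float]) -> bool:
    """P^l dominates P^j iff it is no worse on every interest function
    and strictly better on at least one (all functions maximized)."""
    return all(a >= b for a, b in zip(si, sj)) and \
           any(a > b for a, b in zip(si, sj))

def pareto_front(scores: Dict[str, List[float]]) -> Set[str]:
    """Keep only the patterns not dominated by any other pattern."""
    return {p for p, sp in scores.items()
            if not any(dominates(sq, sp)
                       for q, sq in scores.items() if q != p)}

def v_directly_follows(A: np.ndarray, instance: Set[int]) -> Set[int]:
    """V_|->: events outside the instance that directly follow every
    event of the instance (A is the adjacency matrix of phi(sigma))."""
    return {e for e in range(A.shape[0])
            if e not in instance and all(A[n, e] == 1 for n in instance)}
```

In each iteration, the Pareto filter replaces threshold tuning: only the non-dominated patterns are shown to the user, and an extension rule then adds the nodes of the computed neighbor set, with edges to the instance, as in \(Ext_{f}\).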
Users can select all or a subset of the rules \(f\) to explore all possibilities for the extension of a selected foundational pattern in each iteration of IMPresseD. Therefore, the set of all patterns extended from the foundational pattern \(P\) using a subset of rules \(F^{\prime}\) is defined as \(\mathcal{P}_{P}=\bigcup_{\sigma\in L}\bigcup_{E^{\prime}\in PI(P,\varphi(\sigma))}\bigcup_{f\in F^{\prime}}Ext_{f}(P,E^{\prime})\).

Figure 2: Pattern extension procedure example

### Interest functions for outcome-oriented pattern detection

To show a concrete example of the use of the IMPresseD framework, this section outlines tailored interest functions for the outcome-oriented pattern discovery goal. We designed these functions by analyzing related literature and through discussions with domain experts. While previous studies in outcome-oriented pattern discovery have focused on identifying patterns that are highly correlated with the outcome [21], we argue that correlation should not be the only dimension of interest. Our discussions with healthcare experts revealed that ignoring the frequency measure may lead to identifying overly rare patterns that are often less interesting. In addition, frequent patterns that are not highly correlated may still be worth exploring. For example, a particular treatment "A" may be highly frequent but not highly correlated. However, when studying different extensions of "A", some interesting correlated patterns may emerge. Hence, in our analysis, we define a frequency-based interest besides the correlation-based interest. Moreover, it is well-known that potential confounding variables may play an important role in determining the outcome of a treatment process [25]. For example, let treatment pattern \(P_{1}\) be detected as a pattern that negatively affects the treatment outcome. We may find that \(P_{1}\) is only delivered to elderly patients. This questions the reliability of the relation between \(P_{1}\) and the treatment outcome, since the patients' age may actually be the real factor leading to worse treatment results. To mitigate the effect of confounding variables, we consider the distance between cases with or without a specific pattern as the third interest dimension. Following these observations, we established three dimensions of interest to support outcome-oriented pattern discovery, with their corresponding interest functions.

#### 4.3.1 Frequency interest evaluates the frequency of occurrence of a pattern in the event log. In this study, we define the _frequency interest function_ as the percentage of cases that contain at least one instance of pattern \(P\): \(\textit{CC}(P,L,\varphi)=\frac{\mid\{\sigma\in L\,:\,\mid PI(P,\varphi(\sigma))\mid>0\}\mid}{\mid L\mid}\)

#### 4.3.2 Outcome interest measures the effect of each pattern on the process outcome. For continuous outcome values, we use a correlation-based function. For categorical outcomes, we use an information-gain-based function. Let \(\Phi\) be a set of values representing possible outcomes. The outcome of a process is defined as a function \(f:\mathcal{S}\rightarrow\Phi\), that maps the set of all possible input traces to the set of all possible outcome values. Then we define \(\mathcal{OV}=(f(\sigma))_{\sigma\in L}\) as the outcome vector for event log \(L\). 
Let \(\textit{PC-freq}(P,\varphi(\sigma))=\mid PI(P,\varphi(\sigma))\mid\) be the frequency of pattern \(P\) in trace \(\varphi(\sigma)\); we define \(\mathcal{FV}=(\textit{PC-freq}(P,\varphi(\sigma)))_{\sigma\in L}\) as the frequency vector of pattern \(P\) for event log \(L\). Then, the _outcome interest function_ is defined as \(OI(P,L,\varphi)=\rho(\mathcal{OV},\mathcal{FV})\), where for _continuous outcomes_ \(\rho\) is the Spearman correlation coefficient, while for _categorical outcomes_ \(\rho\) is the information gain.

#### 4.3.3 Case distance interest is designed to mitigate the impact of confounding variables. Here, we consider initial case attributes as potential confounding variables. Let \(\mathcal{AT}\) be a set of user-defined case attributes; \(AT_{\sigma_{i}}=(\pi_{d_{j}}(e_{1}))_{d_{j}\in\mathcal{AT}}\) is the vector of initial case attributes corresponding to trace \(\sigma_{i}\). Let \(C_{P}=\{\sigma\in L\,:\,\mid PI(P,\varphi(\sigma))\mid>0\}\) be the set of cases including an instance of the pattern \(P\) and \(C_{\bar{P}}=L\setminus C_{P}\) be the set of cases without \(P\). Then we define the _case distance function_ as \(\mathit{CD}(P,L,\mathcal{AT})=\sum_{\sigma_{i}\in C_{P}}\sum_{\sigma_{j}\in C_{\bar{P}}}\frac{1}{|L|}dist(AT_{\sigma_{i}},AT_{\sigma_{j}})\). Let \(dist_{Euc}\) be the _Euclidean_ distance for the numerical features, \(dist_{Jac}\) be the _Jaccard_ distance for the \(m\) categorical features, and \(F_{normal}\) be a normalization function; then \(dist=\frac{F_{normal}(dist_{Euc})+dist_{Jac}}{m+1}\), as defined in [8]. Ideally, \(\mathit{CD}(P,L,\mathcal{AT})\) should equal \(0\) to ensure that pattern \(P\) is not influenced by any confounding variable. However, in real-life scenarios, some differences between case attributes are inevitable. To assist users in analyzing which case attributes might have an effect on the outcome, we present a dashboard that visualizes the differences in selected case attributes. This enables the user to pinpoint specific case attributes that may be important for pattern \(P\), or to explore the reasons behind a process behavior if it is related to the case dimension. An example of this dashboard is presented in Fig. 4.

## 5 Implementation and evaluation

This section aims to demonstrate the usefulness of the IMPresseD framework for a concrete analysis goal defined by expert users (i.e., detecting process patterns affecting the process outcome) through two forms of evaluation. We have implemented an open-source tool in Python for outcome-oriented pattern discovery goals, which is publicly available through GitHub 7.

Footnote 7: [https://github.com/MozhganVD/InteractivePatternDetection](https://github.com/MozhganVD/InteractivePatternDetection)

The first evaluation (user-based evaluation) aims to show the usefulness of the proposed framework in supporting the user in dealing with pattern discovery in an interactive and multi-interest setting. In the second evaluation (quantitative evaluation), we performed a comparative analysis using different sets of patterns in a fully automated setting to evaluate their predictive capabilities.

### User-based evaluation

#### 5.1.1 Evaluation setup. The goal of this evaluation is to determine whether our framework is able to discover patterns confirming expert knowledge of the treatment process. 
To this end, we asked two expert users from the medical domain to use the IMPresseD tool on historical data to discover treatment process patterns affecting patients' survival time. We then asked the users to validate the discovered patterns using their own medical knowledge. As interest functions, we maximize \(CC(P,L,\varphi)\) and \(OI(P,L,\varphi)\) based on the Spearman correlation, and minimize \(CD(P,L,\mathcal{AT})\). Regarding the visualization dashboard, we opted for _distribution plots_ for the numerical features (e.g., age, albumin level, etc.) and _pie charts_ for the categorical features (e.g., gender, morphology, etc.) based on expert suggestion. We also visualized the Kaplan-Meier curve, as it is a very common graphical representation of the survival probability for a group of patients based on their observed survival times. The log-rank test is also included to check the significance of the difference in survival time between cases with or without a particular pattern.

#### 5.1.2 Dataset. We used an event log provided by the Netherlands Cancer Registry (NCR) regarding the treatment process for patients with metastatic stomach or esophageal cancer. These patients can usually not be cured and receive palliative care to increase the quality of the remaining lifetime and possibly extend it. Therefore, the outcome of the treatment process is the patient survival time. We performed some data preprocessing following the indications of the domain experts. Specifically, we removed cases with logging errors (e.g., patients for which the survival time was not known), as well as exceptional cases or outliers, like patients who received one or multiple treatment(s) abroad. Similarly, patients with severely deteriorated health were removed from the dataset, as they are not fit enough to receive any treatment. At the end of preprocessing, the event log consisted of 957 cases, 32 distinct treatment codes, and 368 process variants. We also used domain knowledge as a conversion oracle for transforming each trace into a partially ordered trace. In particular, two groups of treatments are considered to be parallel: 1) systemic treatments starting within three days of each other, and 2) all treatments which start and end on the same day.

#### 5.1.3 Results. In the first iteration of the algorithm (extension step 0), we obtained 8 non-dominated treatments, as shown in Fig. 3 (3D graph on the left side). The framework allows users to assess every single non-dominated treatment in a three-dimensional view.

Figure 3: Non-dominated patterns in three iterations of the discovery algorithm

Users can select each of the single treatments recommended by the Pareto front as a foundational pattern and apply the extension functions discussed in Section 4.2. The experts selected _capecitabine_ and _paclitaxel_ as interesting patterns to extend. In the second iteration (extension step 1), we identified 18 patterns out of the 206 patterns extended from _capecitabine_ and 14 patterns out of the 116 patterns extended from _paclitaxel_ in the Pareto front, indicating that the use of the defined interest functions and of the Pareto front enables users to concentrate on at most 10% of the total discovered patterns in this step. The experts decided to filter out patterns occurring in fewer than 10 patients, thus focusing on the 8 and 6 most frequent patterns from _capecitabine_ and _paclitaxel_ within the Pareto front, reported in Fig. 3. 
The users decided to stop after one extension step for _capecitabine_, while a second extension step was carried out for _paclitaxel_. For each pattern, the values of each interest function are reported. Furthermore, we also generate a dashboard showing its control-flow structure and corresponding case data. The main goal of the dashboard is to allow users to compare the case attributes of the cases with and without a pattern. These dashboards enable users to investigate the reasons behind each process behavior. An example is shown in Fig. 4. This pattern depicts a treatment pathway that commences with _oxaliplatin_ and _capecitabine_. After some time, _oxaliplatin_ is stopped, and _capecitabine_ is continued. The significant difference between the survival curves of patients with and without this pattern suggests the efficacy of this treatment combination. Cases with and without the pattern are quite similar according to the selected attributes, though the dashboard shows that this pattern was never prescribed to patients with a tumor morphology labeled as "other" (which was in line with the experts' expectations). Patterns colored green in the tables in Fig. 3 are the patterns marked as interesting by the expert users (i.e., patterns validated by medical knowledge). The users considered two patterns extending _capecitabine_ not interesting because of a too low correlation with the outcome, leading to very similar survival times for patients with and without the pattern (MedianOutcome_in/out in the dashboard), which does not allow them to say anything about the relationship with the outcome. As regards patterns extended from _paclitaxel_, in the first step only one pattern was marked as not interesting. The reason is that the users expected an additional treatment which, however, was not possible to detect in combination with the discovered patterns. Further investigations are needed to determine why the occurrence of this particular treatment in the dataset does not fit with the experts' expectations. Note that for the second extension, with foundational pattern _paclitaxel_5_, the users were especially interested in extensions involving radiotherapy. The last extended pattern in extension step 2 did not involve radiotherapy and was hence marked as not interesting. Overall, the detected patterns confirmed the effectiveness of previously known combinations of treatments, providing valuable evidence-based insight. Only a few patterns were marked as not interesting. Both users found the visualization dashboard very helpful in understanding the detected patterns and in uncovering potential relations with the case attributes. We would like to point out that, without using the Pareto front, users would have to either try different thresholds or explore all the extended patterns manually.

### Quantitative evaluation

#### 5.2.1 Evaluation setup. The goal of the quantitative evaluation is to assess the predictive capabilities of patterns detected employing multi-interest functions compared to patterns detected utilizing a single dimension or without any filtering. If the multi-interest functions obtain a predictive performance in line with the other strategies, this shows that they preserve the same predictive power while leading to more meaningful process patterns, filtering out many non-interesting ones (as illustrated in the user-based evaluation). 
We drew inspiration from the common evaluation used in "deviance mining" [21] to assess the quality of the set of discovered patterns in predicting the outcome of the process without exploiting the user's knowledge. In this setting, discovered patterns are treated as independent features, while the process's outcome is considered the dependent feature. Frequency-based encoding is used to encode the independent features. Specifically, we compare the performance of decision trees (DTs) trained on the \(K\) patterns obtained from the Pareto front in each extension step to those trained on the _top_ \(K\) patterns identified by considering every single dimension, as well as those trained on all discovered patterns. To achieve this, all the \(K\) non-dominated patterns in the \(i^{th}\) extension step were used as foundational patterns to be extended in iteration \(i+1\). As interest functions, we maximize the outcome (information-gain-based) and frequency functions and minimize the case distance function. During the pattern discovery procedure, we only considered the training set to prevent potential bias or information leakage in the evaluation.

Figure 4: An example of dashboard visualization for a pattern extended from _capecitabine_. Note: the inner ring of the pie charts and the red color in the distribution plots correspond to the cases containing the pattern shown in the dashboard.

**Datasets.** We analyzed the three most commonly used event logs in the outcome prediction literature, namely _BPIC2012_, _BPIC2011_, and _Production_, by leveraging preprocessed and labeled logs from prior research [24]. We used all case-related attributes for calculating the case distance function. For the NCR dataset, we divided the survival time into three classes with equal frequency based on the experts' knowledge for this evaluation.

**Results.** Fig. 5 presents the results of the 5-fold cross-validation (i.e., the average F1-score with minimum and maximum obtained values). The DT trained on the patterns obtained from the Pareto front outperformed, or was as accurate as, its counterparts. The only configuration that does better in some cases is the all_patterns configuration, which involves a much higher number of patterns. Indeed, on average, the ratio between the size of the feature set obtained from the Pareto front and the size of the feature set obtained in the all_patterns configuration is 47.5%. This result shows that, using Pareto-optimal solutions, we combine the best of multiple criteria and manage to retain discriminative information with a smaller number of patterns than all possible ones. Another interesting finding is that the results of the DT trained on patterns from the Pareto front consistently rank among the best ones, while the results of DTs trained on patterns obtained from single dimensions show a stronger dependency on the dataset. When comparing single-interest measures, the case distance obtained the worst performance on most of the tested datasets. The _outcome measure_ outperformed all single measures in 5 out of 10 studied event logs (BPIC11_2, BPIC11_4, BPIC12_1, BPIC12_2, BPIC12_3), while the _frequency interest_ outperformed the other single measures in 3 event logs (BPIC11_3, NCR, Production). This suggests that there might be a relationship between the characteristics of the event log and the predictive power of single-interest dimensions.

Figure 5: Quantitative evaluation results
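A minimal sketch of the evaluation protocol just described (frequency-based encoding plus a decision tree under 5-fold cross-validation) could look as follows; the helper name and hyperparameters are illustrative, not the exact setup of our experiments:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def score_pattern_set(freq_matrix: np.ndarray, outcome: np.ndarray) -> float:
    """Frequency-based encoding: entry (i, k) of freq_matrix counts the
    instances of pattern k in trace i; outcome holds the class labels.
    Returns the mean weighted F1-score over a 5-fold cross-validation."""
    dt = DecisionTreeClassifier(random_state=0)
    return cross_val_score(dt, freq_matrix, outcome,
                           cv=5, scoring="f1_weighted").mean()

# e.g., comparing the Pareto-front patterns against all discovered ones:
# f1_pareto = score_pattern_set(X[:, pareto_columns], y)
# f1_all    = score_pattern_set(X, y)
```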
### Discussion

The quantitative evaluation indicates that using the Pareto front leads to comparable or better prediction performance than that achieved by using single measures, and with far fewer patterns than using all possible ones. Using the Pareto front also has the additional advantage that less effort is required than when selecting a threshold for a specific metric. Furthermore, the developed approach provides a flexible means for the user to define the desired pattern characteristics. Note that the quantitative evaluation also shows that the proposed method has the potential to be used in a fully automated setting. However, a surprising observation is that extending the process patterns often does not improve the prediction results, except for a slight improvement in performance after the first extension in the NCR dataset. This may be due to an overlap between the patterns obtained from the \((i+1)^{th}\) iteration and the foundational patterns in the \(i^{th}\) iteration. Considering all patterns in the Pareto front as foundational patterns to be extended in the next iteration may have led to overlaps that increase the dimension of the problem without adding much new information. One direction for future research would be to minimize the overlap between the patterns obtained from each iteration. On the other hand, the results of the user-based evaluation demonstrate the usefulness of the IMPresseD framework in discovering process patterns for supporting outcome-oriented process pattern detection. The Pareto-front selection of patterns allows users to reach their desired patterns without exploring many non-interesting ones. Furthermore, the designed visualization dashboard provides effective support to the human analyst in exploring and interpreting the patterns. We would like to point out that, to the best of our knowledge, no other process pattern discovery tool provides these functionalities. However, this evaluation has some threats to validity. First, being based on a single use case, these results cannot be generalized to different contexts. Furthermore, only two experts were involved in verifying the discovered patterns. To mitigate these threats, we provide a prototype of the tool to enable other researchers to replicate our results and apply the approach to other case studies. Furthermore, a comprehensive survey involving more experts from different perspectives, such as data scientists and oncologists, is planned to evaluate the proposed method on a wider scale.

## 6 Conclusion and future work

The paper presented the IMPresseD framework, designed to derive interesting and easily interpretable process patterns for the end users. The framework is iterative and interactive and allows the user to select the most interesting patterns to expand further. The paper also discussed a concrete analysis goal of deriving process patterns affecting the process outcome, which is a complex problem that requires considering different aspects. The paper evaluated the proposed approach using a real-life case study in healthcare and in a completely automated setting using publicly available event logs. Overall, the paper contributes to the process pattern discovery literature by introducing a framework that takes into account a multi-dimensional notion of interest and by demonstrating its effectiveness through empirical evaluations. 
In future work, to further evaluate and enhance the efficacy of our proposed framework, we intend to conduct a comprehensive survey that draws on a wider range of expert knowledge and opinions. This survey will allow us to gather valuable feedback on the usefulness of our framework and explore potential avenues for future research. Additionally, we plan to explore further extension operators to discover more complex patterns, as well as to introduce constraints on the pattern extension in a fully automated setting.
2306.02963
Normalized ground states for a biharmonic Choquard system in $\mathbb{R}^4$
In this paper, we study the existence of normalized ground state solutions for the following biharmonic Choquard system \begin{align*} \begin{split} \left\{ \begin{array}{ll} \Delta^2u=\lambda_1 u+(I_\mu*F(u,v))F_u (u,v), \quad\mbox{in}\ \ \mathbb{R}^4, \\ \Delta^2v=\lambda_2 v+(I_\mu*F(u,v)) F_v(u,v), \quad\mbox{in}\ \ \mathbb{R}^4, \\ \displaystyle\int_{\mathbb{R}^4}|u|^2dx=a^2,\quad \displaystyle\int_{\mathbb{R}^4}|v|^2dx=b^2,\quad u,v\in H^2(\mathbb{R}^4), \end{array} \right. \end{split} \end{align*} where $a,b>0$ are prescribed, $\lambda_1,\lambda_2\in \mathbb{R}$, $I_\mu=\frac{1}{|x|^\mu}$ with $\mu\in (0,4)$, $F_u,F_v$ are partial derivatives of $F$ and $F_u,F_v$ have exponential subcritical or critical growth in the sense of the Adams inequality. By using a minimax principle and analyzing the behavior of the ground state energy with respect to the prescribed mass, we obtain the existence of ground state solutions for the above problem.
Wenjing Chen, Zexi Wang
2023-06-05T15:27:14Z
http://arxiv.org/abs/2306.02963v1
# Normalized ground states for a biharmonic Choquard system in \(\mathbb{R}^{4}\)

###### Abstract

In this paper, we study the existence of normalized ground state solutions for the following biharmonic Choquard system \[\left\{\begin{array}{ll}\Delta^{2}u=\lambda_{1}u+(I_{\mu}*F(u,v))F_{u}(u,v),\quad\mbox{in}\ \ \mathbb{R}^{4},\\ \Delta^{2}v=\lambda_{2}v+(I_{\mu}*F(u,v))F_{v}(u,v),\quad\mbox{in}\ \ \mathbb{R}^{4},\\ \int_{\mathbb{R}^{4}}|u|^{2}dx=a^{2},\quad\int_{\mathbb{R}^{4}}|v|^{2}dx=b^{2},\quad u,v\in H^{2}(\mathbb{R}^{4}),\end{array}\right.\] where \(a,b>0\) are prescribed, \(\lambda_{1},\lambda_{2}\in\mathbb{R}\), \(I_{\mu}=\frac{1}{|x|^{\mu}}\) with \(\mu\in(0,4)\), \(F_{u},F_{v}\) are partial derivatives of \(F\), and \(F_{u},F_{v}\) have exponential subcritical or critical growth in the sense of the Adams inequality. By using a minimax principle and analyzing the behavior of the ground state energy with respect to the prescribed mass, we obtain the existence of ground state solutions for the above problem.

**Keywords:** Normalized solution; Biharmonic system; Choquard nonlinearity; Exponential subcritical or critical growth.

**2020 Mathematics Subject Classification:** 31B30, 35J35, 35J61, 35J91.

## 1 Introduction and statement of main results

This work is devoted to the following biharmonic nonlinear Schrodinger system with a general Choquard nonlinear term \[\left\{\begin{array}{ll}i\frac{\partial\psi_{1}}{\partial t}-\Delta^{2}\psi_{1}+(I_{\mu}*F(\psi_{1},\psi_{2}))F_{\psi_{1}}(\psi_{1},\psi_{2})=0,\quad\mbox{in}\ \mathbb{R}^{N}\times\mathbb{R},\\ i\frac{\partial\psi_{2}}{\partial t}-\Delta^{2}\psi_{2}+(I_{\mu}*F(\psi_{1},\psi_{2}))F_{\psi_{2}}(\psi_{1},\psi_{2})=0,\quad\mbox{in}\ \mathbb{R}^{N}\times\mathbb{R},\end{array}\right. \tag{1.1}\] where \(N\geq 1\), \(i\) denotes the imaginary unit, \(I_{\mu}=\frac{1}{|x|^{\mu}}\) with \(\mu\in[0,N)\), \(\Delta^{2}\) denotes the biharmonic operator, and \(F_{\psi_{1}},F_{\psi_{2}}\) are partial derivatives of \(F\). System (1.1) is derived in [25, 30, 31] to reveal the effects of a small fourth-order dispersion term in the Schrodinger equation. For convenience, we write \(z=(z_{1},z_{2})\), and \(F\) satisfies:

\((H_{1})\) For \(j=1,2\), \(F_{z_{j}}(z)\in\mathbb{R}\) for \(z_{j}\in\mathbb{R}\), \(F_{z_{j}}(e^{i\theta_{1}}z_{1},e^{i\theta_{2}}z_{2})=e^{i\theta_{j}}F_{z_{j}}(z_{1},z_{2})\) for any \(\theta_{j}\in\mathbb{R}\), \(z_{j}\in\mathbb{C}\);

\((H_{2})\) \(F(z_{1},z_{2})=\int_{0}^{|z_{1}|}\int_{0}^{|z_{2}|}\frac{\partial^{2}F(s,t)}{\partial s\partial t}dsdt\) for any \(z_{1},z_{2}\in\mathbb{C}\).

We look for standing wave solutions of (1.1), that is, solutions of the form \(\psi_{1}(t,x)=e^{-i\lambda_{1}t}u(x)\) and \(\psi_{2}(t,x)=e^{-i\lambda_{2}t}v(x)\), where \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) and \(u,v\in H^{2}(\mathbb{R}^{N})\) are time-independent real-valued functions. Then \(u,v\) satisfy \[\left\{\begin{array}{ll}\Delta^{2}u=\lambda_{1}u+(I_{\mu}*F(u,v))F_{u}(u,v),&\mbox{in }\mathbb{R}^{N},\\ \Delta^{2}v=\lambda_{2}v+(I_{\mu}*F(u,v))F_{v}(u,v),&\mbox{in }\mathbb{R}^{N}.\end{array}\right. \tag{1.2}\] If \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) are fixed parameters, there are many results for (1.2) obtained by variational methods, see e.g. [18, 23, 43, 47, 54]. Another interesting way to find solutions of (1.2) is to search for solutions with prescribed mass, where \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) arise as Lagrange multipliers. 
This type of solution is called a normalized solution, and this approach is particularly meaningful from the physical point of view since, in addition to the conservation of mass guaranteed by \((H_{1})\) and \((H_{2})\), the mass often has an important physical meaning, e.g., it represents the power supply in nonlinear optics, or the total number of atoms in Bose-Einstein condensation. In this case, particular attention is devoted to the least energy solutions, also called ground state solutions, namely solutions minimizing the associated energy functional among all nontrivial solutions; the associated energy is called the ground state energy. Consider the nonlinear Schrodinger equation with a normalization constraint \[\left\{\begin{array}{ll}-\Delta u=\lambda u+f(u),\quad\mbox{in }\ \mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2},\quad u\in H^{1}(\mathbb{R}^{N}).\end{array}\right. \tag{1.3}\] If \(f(u)=|u|^{p-2}u\), the associated energy functional of (1.3) is given by \[\mathcal{J}_{1}(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx-\frac{1}{p}\int_{\mathbb{R}^{N}}|u|^{p}dx.\] From the variational point of view, \(\mathcal{J}_{1}\) is bounded from below on \(\widehat{S}(a)=\{u\in H^{1}(\mathbb{R}^{N}):\int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2}\}\) for \(p\in(2,2+\frac{4}{N})\) (\(L^{2}\)-subcritical case); here \(2+\frac{4}{N}\) is called the \(L^{2}\)-critical exponent, which comes from the Gagliardo-Nirenberg inequality [45]. Thus, a ground state solution of (1.3) can be found as a global minimizer of \(\mathcal{J}_{1}\) on \(\widehat{S}(a)\), see e.g. [17, 36, 48, 51, 52]. If \(p\in(2+\frac{4}{N},2^{*})\) (\(L^{2}\)-supercritical case), on the contrary, \(\mathcal{J}_{1}\) is unbounded from below on \(\widehat{S}(a)\), so it seems impossible to obtain a solution of (1.3) by searching for a global minimizer, where \(2^{*}=\infty\) if \(N\leq 2\) and \(2^{*}=\frac{2N}{N-2}\) if \(N\geq 3\). Furthermore, since \(\lambda\in\mathbb{R}\) is unknown and the embedding \(H^{1}(\mathbb{R}^{N})\hookrightarrow L^{2}(\mathbb{R}^{N})\) is not compact (even for \(H^{1}_{rad}(\mathbb{R}^{N})\)), some classical methods cannot be directly used to prove the boundedness and compactness of a \((PS)\) sequence. Jeanjean [28] first showed that a normalized ground state solution for (1.3) does exist in this case, by showing that the mountain pass geometry of \(\mathcal{J}_{1}|_{\widehat{S}(a)}\) allows one to construct a \((PS)\) sequence related to the Pohozaev identity, which gives boundedness. By using a minimax principle based on homotopy stable families, Bartsch and Soave [10, 11] also presented a new approach that is based on a natural constraint associated to the problem and proved the existence of normalized solutions for (1.3). For more related results on normalized solutions for (1.3) in the \(L^{2}\)-supercritical case, the reader may refer to [4, 13, 29, 33, 49, 50, 56] and references therein. Consider now the mixed dispersion nonlinear Schrodinger equation with a prescribed \(L^{2}\)-norm constraint \[\left\{\begin{array}{ll}\Delta^{2}u-\beta\Delta u=\lambda u+f(u),\quad\mbox{in }\ \mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2},\quad u\in H^{2}(\mathbb{R}^{N}).\end{array}\right. \tag{1.4}\] This kind of problem gives rise to a new \(L^{2}\)-critical exponent \(2+\frac{8}{N}\). When \(\beta>0\), \(f(u)=|u|^{q-2}u\) and \(q\in(2,2+\frac{8}{N})\), Bonheure et al. [14] studied (1.4) and established the existence and qualitative properties of minimizers. By using the minimax principle, Bonheure et al. 
[15] obtained the existence of ground state solutions and the multiplicity of radial solutions for (1.4) when \(q\in(2+\frac{8}{N},4^{*})\), where \(4^{*}=\infty\) if \(N\leq 4\) and \(4^{*}=\frac{2N}{N-4}\) if \(N\geq 5\). More recently, Fernandez et al. have made some improvements to [14] and [15]; we refer to [24] for more details. When \(\beta<0\), the problem is more involved, see [41, 16] for \(q\in(2,2+\frac{8}{N})\) and [39] for \(q\in(2+\frac{8}{N},4^{*})\). Moreover, Luo and Zhang [40] studied normalized solutions of (1.4) with a general nonlinear term \(f\), and obtained the existence and orbital stability of minimizers when \(\beta\in\mathbb{R}\) and \(f\) satisfies suitable \(L^{2}\)-subcritical assumptions. For normalized solutions of (1.4) with a general Choquard nonlinear term, we refer the readers to [19, 20]. Consider now the following system with \(L^{2}\)-constraints \[\left\{\begin{array}{ll}-\Delta u=\lambda_{1}u+F_{u}(u,v),\quad\text{in }\mathbb{R}^{N},\\ -\Delta v=\lambda_{2}v+F_{v}(u,v),\quad\text{in }\mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2},\quad\int_{\mathbb{R}^{N}}|v|^{2}dx=b^{2},\quad u,v\in H^{1}(\mathbb{R}^{N}).\end{array}\right. \tag{1.5}\] This system has an important physical significance in nonlinear optics and Bose-Einstein condensation. The most famous case is that of coupled Gross-Pitaevskii equations in dimension \(N\leq 3\) with \(F_{u}(u,v)=\mu_{1}|u|^{p-2}u+r_{1}\kappa|v|^{r_{2}}|u|^{r_{1}-2}u\), \(F_{v}(u,v)=\mu_{2}|v|^{q-2}v+r_{2}\kappa|u|^{r_{1}}|v|^{r_{2}-2}v\), \(p=q=4\), \(r_{1}=r_{2}=2\), and \(\mu_{1},\mu_{2},\kappa>0\), which models Bose-Einstein condensation. The particular case in \(\mathbb{R}^{3}\) was investigated in the companion paper [7], and has been further developed by many scholars, see [9, 10, 11, 12] for normalized solutions of (1.5) in \(\mathbb{R}^{3}\), [6, 8, 27, 34, 38, 55] in \(\mathbb{R}^{N}\), and [46] in bounded domains. It is worth pointing out that in [22], the authors first considered normalized solutions of (1.5) with general nonlinear terms involving exponential critical growth in \(\mathbb{R}^{2}\). The study of normalized solutions for (1.5) is a hot topic in nonlinear PDEs nowadays. However, as far as we know, there are only a few papers dealing with such problems with general nonlinear terms besides the one already mentioned [22]. In this work, we study normalized solutions to the following biharmonic Choquard system with general nonlinear terms in \(\mathbb{R}^{4}\) \[\left\{\begin{array}{ll}\Delta^{2}u=\lambda_{1}u+(I_{\mu}*F(u,v))F_{u}(u,v),\quad\text{in }\mathbb{R}^{4},\\ \Delta^{2}v=\lambda_{2}v+(I_{\mu}*F(u,v))F_{v}(u,v),\quad\text{in }\mathbb{R}^{4},\\ \int_{\mathbb{R}^{4}}|u|^{2}dx=a^{2},\quad\int_{\mathbb{R}^{4}}|v|^{2}dx=b^{2},\quad u,v\in H^{2}(\mathbb{R}^{4}),\end{array}\right. \tag{1.6}\] where \(a,b>0\), \(\lambda_{1},\lambda_{2}\in\mathbb{R}\), \(I_{\mu}=\frac{1}{|x|^{\mu}}\) with \(\mu\in(0,4)\), and \(F_{u},F_{v}\) are partial derivatives of \(F\) which have exponential subcritical or critical growth in the sense of the Adams inequality. To the best of our knowledge, in the literature there is no contribution devoted to the study of coupled Choquard systems with general nonlinear terms involving exponential critical growth in \(\mathbb{R}^{4}\) (even without the \(L^{2}\)-norm constraints). Compared with the single equation, there are some difficulties to deal with: \((i)\) We need to establish the Adams inequality for vector functions. 
Commonly, using the Adams inequality [43] and Young's inequality, one can obtain an Adams inequality for vector functions. However, using that inequality, we cannot get a good upper bound for the ground state energy (the optimal upper bound is \(\frac{8-\mu}{16}\)) in the exponential critical case, which is crucial to rule out the trivial case for the weak limit of a \((PS)_{E(a,b)}\) sequence, where \(E(a,b)\) is the ground state energy defined in (3.2). Fortunately, inspired by a useful algebraic inequality given in [21], we overcome this difficulty and establish in Lemma 2.5 the Adams inequality for vector functions in exactly the form we need. \((ii)\) In [1, 2, 22], the authors considered a coupled system involving exponential critical growth in \(\mathbb{R}^{2}\), and the condition for the estimation of the energy level is only given in an algebraic form: \[F(u,v)\geq\kappa|(u,v)|^{\sigma}\quad\text{for any $(u,v)\in\mathbb{R}^{2}$ and some $\kappa,\sigma>0$.}\] In this paper, based on the Adams functions [37], we will give a more natural growth condition in the exponential critical case. However, these Adams functions cannot be directly applied to study normalized solutions, because the functions are unknown on some ranges and the \(L^{2}\)-norms of \(u,\nabla u,\Delta u\) are given as \(O(\frac{1}{\log n})\); after a normalization, this is unfavorable for a refined estimation. Hence, we use the modified Adams functions introduced in [20] to complete the estimation. Now, we introduce the precise assumptions under which our problem is studied. Assume that \(F\) satisfies:

\((F_{1})\) For \(j=1,2\), \(F_{z_{j}}(z)\in C(\mathbb{R}\times\mathbb{R},\mathbb{R})\), and \(F_{z_{j}}(z)=o(|z|^{\tau})\) as \(|z|\to 0\) for some \(\tau>2-\frac{\mu}{4}\);

\((F_{2})\) There exists a constant \(\theta>3-\frac{\mu}{4}\) such that \[0<\theta F(z)\leq z\cdot\nabla F(z),\ \ \text{for all $z\in(\mathbb{R}\times\mathbb{R})\backslash(0,0)$,}\quad\text{where $\nabla F(z)=(F_{z_{1}}(z),F_{z_{2}}(z))$;}\]

\((F_{3})\) \(F_{z_{1}}(0,z_{2})\neq 0\) for all \(z_{2}\in\mathbb{R}\backslash\{0\}\) and \(F_{z_{2}}(z_{1},0)\neq 0\) for all \(z_{1}\in\mathbb{R}\backslash\{0\}\);

\((F_{4})\) For any \(z\in\mathbb{R}\backslash\{0\}\times\mathbb{R}\backslash\{0\}\), \(0<F_{z_{j}}(z)z_{j}<(2-\frac{\mu}{4})F(z)\), \(j=1,2\);

\((F_{5})\) For any \(z\in\mathbb{R}\backslash\{0\}\times\mathbb{R}\backslash\{0\}\), let \(\mathfrak{F}(z)=z\cdot\nabla F(z)-(2-\frac{\mu}{4})F(z)\); \(\nabla\mathfrak{F}(z)\) exists, and \[(3-\frac{\mu}{4})F(\hat{z})\mathfrak{F}(\tilde{z})<F(\hat{z})\tilde{z}\cdot\nabla\mathfrak{F}(\tilde{z})+\mathfrak{F}(\hat{z})(\mathfrak{F}(\tilde{z})-F(\tilde{z})),\quad\text{for any $\hat{z},\tilde{z}\in\mathbb{R}\backslash\{0\}\times\mathbb{R}\backslash\{0\}$.}\]

Our main results are stated as follows:

**Theorem 1.1**.: _Assume that \(F_{z_{j}}\) (\(j=1,2\)) has exponential subcritical growth at \(\infty\), that is,_ \[\lim_{|z|\to+\infty}\frac{|F_{z_{j}}(z)|}{e^{\alpha|z|^{2}}}=0,\quad\text{for all $\alpha>0$.}\] _Moreover, assume \(F\) satisfies \((F_{1})-(F_{5})\); then problem (1.6) admits a ground state solution._

**Theorem 1.2**.: _Assume that \(F_{z_{j}}\) (\(j=1,2\)) has exponential critical growth at \(\infty\), that is,_ \[\lim_{|z|\to+\infty}\frac{|F_{z_{j}}(z)|}{e^{\alpha|z|^{2}}}=\left\{\begin{array}{ll}0,&\text{for all $\alpha>32\pi^{2}$,}\\ +\infty,&\text{for all $0<\alpha<32\pi^{2}$.}\end{array}\right.\] _Moreover, assume \(F\) satisfies \((F_{1})-(F_{5})\), and_

\((F_{6})\) _There exists \(\varrho>0\) such
that \(\liminf_{|z_{1}|,|z_{2}|\to+\infty}\frac{F(z)[z\cdot\nabla F(z)]}{e^{64\pi^{2}|z|^{2}}}\geq\varrho\),_

_then problem (1.6) admits a ground state solution._

**Remark 1.1**.: By \((F_{2})\) and \((F_{4})\), we have \(3-\frac{\mu}{4}<\theta<4-\frac{\mu}{2}\).

**Remark 1.2**.: (i) \((F_{3})\) is a technical condition used to rule out the semitrivial case for the weak limit of a \((PS)_{E(a,b)}\) sequence. (ii) By using \((F_{4})\), we can prove that the sequences of Lagrange multipliers \(\{\lambda_{1,n}\}\) and \(\{\lambda_{2,n}\}\) are bounded in \(\mathbb{R}\), and, up to a subsequence, \(\lambda_{1,n}\to\lambda_{1}<0\), \(\lambda_{2,n}\to\lambda_{2}<0\) as \(n\to\infty\). (iii) Under the condition \((F_{5})\), we can prove that there exists a unique \(s_{(u,v)}\in\mathbb{R}\) such that \(\mathcal{H}((u,v),s_{(u,v)})\in\mathcal{P}(a,b)\) for any \((u,v)\in\mathcal{S}\), where \(\mathcal{H}((u,v),s)\), \(\mathcal{P}(a,b)\) and \(\mathcal{S}\) are defined later.

Our paper is arranged as follows. Section 2 contains some preliminary results. In Section 3, we give the variational framework of problem (1.6). In Section 4, we study problem (1.6) in the exponential subcritical case. Section 5 is devoted to the exponential critical case.

## 2 Preliminaries

In this section, we give some preliminaries. For nonlocal problems with a Riesz potential, the well-known Hardy-Littlewood-Sobolev inequality will be used in the following.

**Proposition 2.1**.: _[_35_, Theorem 4.3]_ _Assume that \(1<r\), \(t<\infty\), \(0<\mu<4\) and \(\frac{1}{r}+\frac{\mu}{4}+\frac{1}{t}=2\). Then there exists \(C(\mu,r,t)>0\) such that_ \[\Big{|}\int_{\mathbb{R}^{4}}(I_{\mu}*g(x))h(x)dx\Big{|}\leq C(\mu,r,t)\|g\|_{r}\|h\|_{t} \tag{2.1}\] _for all \(g\in L^{r}(\mathbb{R}^{4})\) and \(h\in L^{t}(\mathbb{R}^{4})\)._

**Lemma 2.1**.: _(Cauchy-Schwarz type inequality) [42] For \(g,h\in L^{1}_{loc}(\mathbb{R}^{4})\), there holds_ \[\int_{\mathbb{R}^{4}}(I_{\mu}*|g(x)|)|h(x)|dx\leq\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*|g(x)|)|g(x)|dx\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*|h(x)|)|h(x)|dx\Big{)}^{\frac{1}{2}}. \tag{2.2}\]

**Lemma 2.2**.: _(Gagliardo-Nirenberg inequality) [45] For any \(u\in H^{2}(\mathbb{R}^{4})\) and \(p\geq 2\), it holds that_ \[\|u\|_{p}\leq B_{p}\|\Delta u\|_{2}^{\frac{p-2}{p}}\|u\|_{2}^{\frac{2}{p}}, \tag{2.3}\] _where \(B_{p}\) is a constant depending on \(p\)._

**Lemma 2.3**.: _(i) [58, Theorem 1.2] If \(\alpha>0\) and \(u\in H^{2}(\mathbb{R}^{4})\), then_ \[\int_{\mathbb{R}^{4}}(e^{\alpha u^{2}}-1)dx<+\infty;\] _(ii) [43, Proposition 7] There exists a constant \(C>0\) such that_ \[\sup_{u\in H^{2}(\mathbb{R}^{4}),\|\Delta u\|_{2}\leq 1}\int_{\mathbb{R}^{4}}(e^{\alpha u^{2}}-1)dx\leq C\] _for all \(0<\alpha\leq 32\pi^{2}\)._

**Lemma 2.4**.: _[_21_, Lemma 2.3]_ _Suppose that \(a_{1},a_{2},\ldots,a_{k}\geq 0\) with \(a_{1}+a_{2}+\cdots+a_{k}<1\); then there exist \(p_{1},p_{2},\ldots,p_{k}>1\) satisfying \(\frac{1}{p_{1}}+\frac{1}{p_{2}}+\cdots+\frac{1}{p_{k}}=1\) such that \(p_{i}a_{i}<1\) for any \(i=1,2,\ldots,k\). 
Moreover, if \(a_{1},a_{2},\ldots,a_{k}\geq 0\) satisfy \(a_{1}+a_{2}+\cdots+a_{k}=1\), then we can take \(p_{i}=\frac{1}{a_{i}}\) such that \(\frac{1}{p_{1}}+\frac{1}{p_{2}}+\cdots+\frac{1}{p_{k}}=1\) and \(p_{i}a_{i}=1\) for any \(i=1,2,\ldots,k\)._

Let \(\mathcal{X}=H^{2}(\mathbb{R}^{4})\times H^{2}(\mathbb{R}^{4})\) with the norm \[\|(u,v)\|:=(\|u\|^{2}+\|v\|^{2})^{\frac{1}{2}}=\Big{(}\int_{\mathbb{R}^{4}}|\Delta u|^{2}+|\Delta v|^{2}+|u|^{2}+|v|^{2}dx\Big{)}^{\frac{1}{2}}.\] Similar to \(H^{2}(\mathbb{R}^{4})\), \(\mathcal{X}\) is a Hilbert space and satisfies \[\mathcal{X}\hookrightarrow L^{p}(\mathbb{R}^{4},\mathbb{R}^{2}):=L^{p}(\mathbb{R}^{4})\times L^{p}(\mathbb{R}^{4})\quad\text{and}\quad\mathcal{X}\hookrightarrow\hookrightarrow L^{q}_{Loc}(\mathbb{R}^{4},\mathbb{R}^{2}):=L^{q}_{Loc}(\mathbb{R}^{4})\times L^{q}_{Loc}(\mathbb{R}^{4}) \tag{2.4}\] for any \(p\geq 2\), \(q\geq 1\), where \(\|(u,v)\|_{p}:=\|(u,v)\|_{L^{p}(\mathbb{R}^{4},\mathbb{R}^{2})}=(\|u\|_{p}^{p}+\|v\|_{p}^{p})^{\frac{1}{p}}=\Big{(}\int_{\mathbb{R}^{4}}(|u|^{p}+|v|^{p})dx\Big{)}^{\frac{1}{p}}\). Moreover, we set \[\mathcal{S}:=\Big{\{}(u,v)\in\mathcal{X}:\int_{\mathbb{R}^{4}}|u|^{2}dx=a^{2},\int_{\mathbb{R}^{4}}|v|^{2}dx=b^{2}\Big{\}}.\] Applying Lemmas 2.3 and 2.4, we immediately have the following lemma.

**Lemma 2.5**.: _(i) If \(\alpha>0\) and \((u,v)\in\mathcal{X}\), then_ \[\int_{\mathbb{R}^{4}}(e^{\alpha|(u,v)|^{2}}-1)dx<+\infty;\] _(ii) There exists a constant \(C>0\) such that_ \[\sup_{(u,v)\in\mathcal{X},\|\Delta(u,v)\|_{2}\leq 1}\int_{\mathbb{R}^{4}}(e^{\alpha|(u,v)|^{2}}-1)dx\leq C\] _for all \(0<\alpha\leq 32\pi^{2}\), where \(\|\Delta(u,v)\|_{2}=\|(\Delta u,\Delta v)\|_{2}=(\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2})dx)^{\frac{1}{2}}\)._

Proof.: \((i)\) By Lemma 2.3 and Young's inequality, for any \(t>1\) and \(t^{\prime}=\frac{t}{t-1}\), we have \[\int_{\mathbb{R}^{4}}(e^{\alpha|(u,v)|^{2}}-1)dx=\int_{\mathbb{R}^{4}}(e^{\alpha(u^{2}+v^{2})}-1)dx\leq\frac{1}{t}\int_{\mathbb{R}^{4}}(e^{t\alpha u^{2}}-1)dx+\frac{1}{t^{\prime}}\int_{\mathbb{R}^{4}}(e^{t^{\prime}\alpha v^{2}}-1)dx<\infty.\]

\((ii)\) Using Lemma 2.4 with \(k=2\), we know that, for any \(a_{1}+a_{2}\leq 1\), there exist \(p_{1},p_{2}>1\) satisfying \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=1\) such that \(p_{1}a_{1}\leq 1,p_{2}a_{2}\leq 1\). For any \((u,v)\in\mathcal{X}\) with \(\|\Delta(u,v)\|_{2}\leq 1\), there exist \(p_{1},p_{2}>1\) satisfying \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=1\) such that \(p_{1}\|\Delta u\|_{2}^{2}\leq 1\), \(p_{2}\|\Delta v\|_{2}^{2}\leq 1\). By Young's inequality and Lemma 2.3(ii), we obtain \[\int_{\mathbb{R}^{4}}(e^{\alpha|(u,v)|^{2}}-1)dx\leq\frac{1}{p_{1}}\int_{\mathbb{R}^{4}}(e^{p_{1}\alpha\|\Delta u\|_{2}^{2}(\frac{u}{\|\Delta u\|_{2}})^{2}}-1)dx+\frac{1}{p_{2}}\int_{\mathbb{R}^{4}}(e^{p_{2}\alpha\|\Delta v\|_{2}^{2}(\frac{v}{\|\Delta v\|_{2}})^{2}}-1)dx\leq C.\]

**Definition 2.1**.: _[_26_, Definition 3.1]_ _Let \(B\) be a closed subset of \(X\). 
A class \(\mathcal{G}\) of compact subsets of \(X\) is a homotopy stable family with boundary \(B\) provided_

\((i)\) _Every set in \(\mathcal{G}\) contains \(B\);_

\((ii)\) _For any set \(A\in\mathcal{G}\) and any \(\eta\in C([0,1]\times X,X)\) satisfying \(\eta(t,w)=w\) for all \((t,w)\in(\{0\}\times X)\cup([0,1]\times B)\), one has \(\eta(\{1\}\times A)\in\mathcal{G}\)._

**Lemma 2.6**.: _[_26_, Theorem 3.2]_ _Let \(\psi\) be a \(C^{1}\)-functional on a complete connected \(C^{1}\)-Finsler manifold \(X\) (without boundary), and consider a homotopy stable family \(\mathcal{G}\) with a closed boundary \(B\). Set \(\tilde{c}=\tilde{c}(\psi,\mathcal{G})=\inf_{A\in\mathcal{G}}\max_{w\in A}\psi(w)\) and suppose that_ \[\sup\psi(B)<\tilde{c}.\] _Then, for any sequence of sets \(\{A_{n}\}\subset\mathcal{G}\) satisfying \(\lim_{n\to\infty}\sup_{w\in A_{n}}\psi(w)=\tilde{c}\), there exists a sequence \(\{w_{n}\}\subset X\) such that_

\((i)\) \(\lim_{n\to\infty}\psi(w_{n})=\tilde{c}\)_;_

\((ii)\) \(\lim_{n\to\infty}\|\psi^{\prime}(w_{n})\|_{*}=0\)_;_

\((iii)\) \(\lim_{n\to\infty}dist(w_{n},A_{n})=0\)_._

We observe that \(B=\emptyset\) is admissible; it then suffices to follow the usual convention of defining \(\sup\psi(\emptyset)=-\infty\). The functionals \(G_{1}(u):=\|u\|_{2}^{2}-a^{2}\) and \(G_{2}(v):=\|v\|_{2}^{2}-b^{2}\) are of class \(C^{1}\), and for any \((u,v)\in\mathcal{S}\) we have \(\langle G_{1}^{\prime}(u),u\rangle=2a^{2}>0\), \(\langle G_{2}^{\prime}(v),v\rangle=2b^{2}>0\). Therefore, by the implicit function theorem, \(\mathcal{S}\) is a \(C^{1}\)-Finsler manifold.

**Lemma 2.7**.: _[_32_, Lemma 4.8]_ _Let \(\Omega\subseteq\mathbb{R}^{4}\) be any open set. For \(1<s<\infty\), let \(\{u_{n}\}\) be bounded in \(L^{s}(\Omega)\) and \(u_{n}(x)\to u(x)\) a.e. in \(\Omega\). Then \(u_{n}(x)\rightharpoonup u(x)\) in \(L^{s}(\Omega)\)._

## 3 Variational framework

It is standard to see that any critical point of the energy functional \[\mathcal{J}(u,v)=\frac{1}{2}\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2})dx-\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx,\] restricted to \(\mathcal{S}\) corresponds to a solution of (1.6). If \(F\) satisfies \((F_{1})\) and \(F_{z_{j}}\) (\(j=1,2\)) has exponential subcritical growth at \(\infty\), fix \(q>2\); then for any \(\xi>0\) and \(\alpha>0\), there exists a constant \(C_{\xi}>0\) such that \[|F_{z_{1}}(z)|,|F_{z_{2}}(z)|\leq\xi|z|^{\tau}+C_{\xi}|z|^{q}(e^{\alpha|z|^{2}}-1)\quad\text{for all }z=(z_{1},z_{2})\in\mathbb{R}\times\mathbb{R},\] and using \((F_{2})\), we have \[|F(z)|\leq\xi|z|^{\tau+1}+C_{\xi}|z|^{q+1}(e^{\alpha|z|^{2}}-1)\quad\text{for all }z\in\mathbb{R}\times\mathbb{R}. \tag{3.1}\] By (3.1), using the Hardy-Littlewood-Sobolev inequality [35] and Lemma 2.5, we obtain that \(\mathcal{J}\) is well defined in \(\mathcal{X}\) and \(\mathcal{J}\in C^{1}(\mathcal{X},\mathbb{R})\) with \[\langle\mathcal{J}^{\prime}(u,v),(\varphi,\psi)\rangle=\int_{\mathbb{R}^{4}}(\Delta u\Delta\varphi+\Delta v\Delta\psi)dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\varphi dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\psi dx,\] for any \((u,v),(\varphi,\psi)\in\mathcal{X}\). The very same discussion applies when \(F\) satisfies \((F_{1})\) and \(F_{z_{j}}\) (\(j=1,2\)) has exponential critical growth at \(\infty\). 
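For instance, for the nonlocal term, applying (2.1) with the symmetric exponents \(r=t=\frac{8}{8-\mu}\) (note that \(\frac{1}{r}+\frac{\mu}{4}+\frac{1}{t}=\frac{8-\mu}{8}+\frac{\mu}{4}+\frac{8-\mu}{8}=2\)) to \(g=h=F(u,v)\) gives \[\Big{|}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx\Big{|}\leq C(\mu)\|F(u,v)\|_{\frac{8}{8-\mu}}^{2},\] and the right-hand side is finite for every \((u,v)\in\mathcal{X}\) by the growth estimate (3.1), Lemma 2.5 and the embeddings (2.4).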
To understand the geometry of \(\mathcal{J}|_{\mathcal{S}}\), for any \(s\in\mathbb{R}\) and \(u\in H^{2}(\mathbb{R}^{4})\), we define \[\mathcal{H}(u,s)(x)=e^{2s}u(e^{s}x).\] One can easily check that \(\|\mathcal{H}(u,s)\|_{2}=\|u\|_{2}\) for any \(s\in\mathbb{R}\). As a consequence, for any \((u,v)\in\mathcal{S}\), it holds that \(\mathcal{H}((u,v),s):=(\mathcal{H}(u,s),\mathcal{H}(v,s))\in\mathcal{S}\) for any \(s\in\mathbb{R}\), and \(\mathcal{H}((u,v),s_{1}+s_{2})=\mathcal{H}(\mathcal{H}((u,v),s_{1}),s_{2})=\mathcal{H}(\mathcal{H}((u,v),s_{2}),s_{1})\) for any \(s_{1},s_{2}\in\mathbb{R}\). By Lemma 4.2 (or Lemma 5.2), we find that \(\mathcal{J}\) is unbounded from below on \(\mathcal{S}\). It is well known that solutions of (1.6) satisfy the Pohozaev identity \[P(u,v)=\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2})dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))\mathfrak{F}(u,v)dx,\] where \(\mathfrak{F}(z)=z\cdot\nabla F(z)-(2-\frac{\mu}{4})F(z)\). This motivates us to consider the minimization on the natural constraint \[\mathcal{P}(a,b)=\Big{\{}(u,v)\in\mathcal{S}:P(u,v)=0\Big{\}},\] i.e., \[E(a,b)=\inf_{(u,v)\in\mathcal{P}(a,b)}\mathcal{J}(u,v). \tag{3.2}\] As will be shown in Lemma 4.3 (or Lemma 5.3), \(\mathcal{P}(a,b)\) is nonempty, and from Lemma 3.2, we can see that any critical point of \(\mathcal{J}|_{\mathcal{S}}\) stays in \(\mathcal{P}(a,b)\); thus any critical point \((u,v)\) of \(\mathcal{J}|_{\mathcal{S}}\) with \(\mathcal{J}(u,v)=E(a,b)\) is a ground state solution of (1.6).

**Lemma 3.1**.: _Assume that \(u_{n}\to u\) in \(H^{2}(\mathbb{R}^{4})\) and \(s_{n}\to s\) in \(\mathbb{R}\); then \(\mathcal{H}(u_{n},s_{n})\to\mathcal{H}(u,s)\) in \(H^{2}(\mathbb{R}^{4})\) as \(n\to\infty\)._

Proof.: The proof can be found in Lemma 2.6 of [20]; we omit it here.

**Lemma 3.2**.: _If \((u,v)\in\mathcal{X}\) is a critical point of \(\mathcal{J}|_{\mathcal{S}}\), then \((u,v)\in\mathcal{P}(a,b)\)._

Proof.: If \((u,v)\in\mathcal{X}\) is a critical point of \(\mathcal{J}|_{\mathcal{S}}\), there exist \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) such that \[\int_{\mathbb{R}^{4}}(\Delta u\Delta\varphi+\Delta v\Delta\psi)dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\varphi dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\psi dx=\int_{\mathbb{R}^{4}}(\lambda_{1}u\varphi+\lambda_{2}v\psi)dx \tag{3.3}\] for any \((\varphi,\psi)\in\mathcal{X}\). Testing (3.3) with \((\varphi,\psi)=(u,v)\), we have \[\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2})dx=\int_{\mathbb{R}^{4}}(\lambda_{1}|u|^{2}+\lambda_{2}|v|^{2})dx+\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))[(u,v)\cdot\nabla F(u,v)]dx. \tag{3.4}\] On the other hand, consider a cut-off function \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{4},[0,1])\) such that \(\varphi(x)=1\) if \(|x|\leq 1\), \(\varphi(x)=0\) if \(|x|\geq 2\). For any fixed \(\rho>0\), set \((\widetilde{u}_{\rho}(x),0)=(\varphi(\rho x)x\cdot\nabla u(x),0)\) and \((0,\widetilde{v}_{\rho}(x))=(0,\varphi(\rho x)x\cdot\nabla v(x))\) as test functions for (3.3); we obtain \[\int_{\mathbb{R}^{4}}\Delta u\Delta\widetilde{u}_{\rho}dx=\lambda_{1}\int_{\mathbb{R}^{4}}u\widetilde{u}_{\rho}dx+\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\widetilde{u}_{\rho}dx \tag{3.5}\] and \[\int_{\mathbb{R}^{4}}\Delta v\Delta\widetilde{v}_{\rho}dx=\lambda_{2}\int_{\mathbb{R}^{4}}v\widetilde{v}_{\rho}dx+\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\widetilde{v}_{\rho}dx. 
\tag{3.6}\] Moreover, \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\widetilde{u}_{\rho}dx+\int_{ \mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\widetilde{v}_{\rho}dx=\int_{\mathbb{ R}^{4}}(I_{\mu}*F(u,v))\varphi(\rho x)[x\cdot\nabla_{x}F(u,v)]dx.\] By [44], we know \[\lim_{\rho\to 0}\int_{\mathbb{R}^{4}}u\widetilde{u}_{\rho}dx=-2\int_{\mathbb{R}^ {4}}|u|^{2}dx,\quad\lim_{\rho\to 0}\int_{\mathbb{R}^{4}}v\widetilde{v}_{\rho}dx=-2 \int_{\mathbb{R}^{4}}|v|^{2}dx,\] and a direct computation shows that \[\lim_{\rho\to 0}\int_{\mathbb{R}^{4}}\Delta u\Delta\widetilde{u}_{\rho}dx=0, \quad\lim_{\rho\to 0}\int_{\mathbb{R}^{4}}\Delta v\Delta\widetilde{v}_{\rho}dx=0.\] We claim that \[\lim_{\rho\to 0}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))\varphi(\rho x)[x\cdot \nabla_{x}F(u,v)]dx=-(4-\frac{\mu}{2})\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx.\] Indeed, integrating by parts, we find that \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))\varphi(\rho x)[x\cdot\nabla _{x}F(u,v)]dx\] \[= \int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}|x-y|^{-\mu}F(u(y),v(y)) \varphi(\rho x)[x\cdot\nabla_{x}F(u(x),v(x))]dxdy\] \[= \frac{1}{2}\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\Big{(}|x-y| ^{-\mu}F(u(y),v(y))\varphi(\rho x)[x\cdot\nabla_{x}F(u(x),v(x))]\] \[+|x-y|^{-\mu}F(u(x),v(x))\varphi(\rho y)[y\cdot\nabla_{y}F(u(y),v (y))]\Big{)}dxdy\] \[= -\frac{1}{2}\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\Big{(}- \mu|x-y|^{-\mu}F(u(y),v(y))F(u(x),v(x))\frac{x(x-y)\varphi(\rho x)}{|x-y|^{2}}\] \[-\mu|x-y|^{-\mu}F(u(x),v(x))F(u(y),v(y))\frac{-y(x-y)\varphi(\rho y )}{|x-y|^{2}}\] \[+\rho|x-y|^{-\mu}F(u(y),v(y))[x\cdot\nabla_{x}\varphi(\rho x)]F( u(x),v(x))\] \[+\rho|x-y|^{-\mu}F(u(x),v(x))[y\cdot\nabla_{y}\varphi(\rho y)]F( u(y),v(y))\] \[+4|x-y|^{-\mu}F(u(y),v(y))\varphi(\rho x)F(u(x),v(x))\] \[+4|x-y|^{-\mu}F(u(x),v(x))\varphi(\rho y)F(u(y),v(y))\Big{)}dxdy.\] Using the Lebesgue dominated convergence theorem, we prove the claim. Adding (3.5) and (3.6), we obtain \[\int_{\mathbb{R}^{4}}(\lambda_{1}|u|^{2}+\lambda_{2}|v|^{2})dx+(2-\frac{\mu}{ 4})\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx=0.\] This together with (3.4) yields that \[\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2})dx-\int_{\mathbb{R}^{4}}( I_{\mu}*F(u,v))\mathfrak{F}(u,v)dx=0,\] where \(\mathfrak{F}(u,v)=(u,v)\cdot\nabla F(u,v)-(2-\frac{\mu}{4})F(u,v)\). Thus, \((u,v)\in\mathcal{P}(a,b)\). ## 4 Exponential subcritical case In this section, we will study problem (1.6) in the exponential subcritical case, and establish the existence of ground state solutions. 
**Lemma 4.1**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential subcritical growth at \(\infty\), let \(\{(u_{n},v_{n})\}\subset\mathcal{S}\) be a bounded sequence in \(\mathcal{X}\), up to a subsequence, if \((u_{n},v_{n})\rightharpoonup(u,v)\) in \(\mathcal{X}\), for any \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{4})\), we have_ \[\int_{\mathbb{R}^{4}}\Delta u_{n}\Delta\varphi dx\to\int_{\mathbb{R}^{4}} \Delta u\Delta\varphi dx,\quad\int_{\mathbb{R}^{4}}\Delta v_{n}\Delta\varphi dx \to\int_{\mathbb{R}^{4}}\Delta v\Delta\varphi dx,\ \text{as}\ n\to\infty, \tag{4.1}\] \[\int_{\mathbb{R}^{4}}u_{n}\varphi dx\to\int_{\mathbb{R}^{4}}u\varphi dx,\quad\int_ {\mathbb{R}^{4}}v_{n}\varphi dx\to\int_{\mathbb{R}^{4}}v\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{4.2}\] _and_ \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx \to\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{4.3}\] \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{v_{n}}(u_{n},v_{n})\varphi dx \to\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\varphi dx,\ \ \text{as}\,\ n\to\infty. \tag{4.4}\] Proof.: For any fixed \(v\in H^{2}(\mathbb{R}^{4})\), define \[f_{v}(u):=\int_{\mathbb{R}^{4}}\Delta u\Delta vdx,\quad h_{v}(u):=\int_{ \mathbb{R}^{4}}uvdx,\quad\text{for every}\ u\in H^{2}(\mathbb{R}^{4}).\] Then, by the Holder inequality, we have \[|f_{v}(u)|,|h_{v}(u)|\leq\|v\|\|u\|,\] this yields that \(f_{v}\) and \(h_{v}\) are continuous linear functionals on \(H^{2}(\mathbb{R}^{4})\). Thus, by \(u_{n}\rightharpoonup u\), \(v_{n}\rightharpoonup v\) in \(H^{2}(\mathbb{R}^{4})\), and \(C_{0}^{\infty}(\mathbb{R}^{4})\) is dense in \(H^{2}(\mathbb{R}^{4})\), we have that (4.1) and (4.2) hold. Next, we prove (4.3)-(4.4). Since \(\{(u_{n},v_{n})\}\) is bounded in \(\mathcal{X}\), we can fix \(\alpha>0\) close to \(0\) and \(t>1\) close to \(1\) such that \[\sup_{n\in\mathbb{N}^{+}}\frac{4\alpha t\|\Delta(u_{n},v_{n})\|_{2}^{2}}{4- \mu}\leq 32\pi^{2}.\] Using Lemma 2.5 and Young's inequality, it yields that \[\sup_{n\in\mathbb{N}^{+}}\int_{\mathbb{R}^{4}}(e^{\alpha|(u_{n},v_{n})|^{2}}- 1)^{t}dx\leq\sup_{n\in\mathbb{N}^{+}}\int_{\mathbb{R}^{4}}(e^{\alpha t\| \Delta(u_{n},v_{n})\|_{2}^{2}(\frac{|(u_{n},v_{n})|}{\|\Delta(u_{n},v_{n})\| _{2}})^{2}}-1)dx\leq C.\] Therefore, \((e^{\alpha|(u_{n},v_{n})|^{2}}-1)\) is uniformly bounded in \(L^{t}(\mathbb{R}^{4})\). Since \((e^{\alpha|(u_{n},v_{n})|^{2}}-1)\to(e^{\alpha|(u,v)|^{2}}-1)\) a.e. in \(\mathbb{R}^{4}\), by Lemma 2.7, we obtain \((e^{\alpha|(u_{n},v_{n})|^{2}}-1)\rightharpoonup(e^{\alpha|(u,v)|^{2}}-1)\) in \(L^{t}(\mathbb{R}^{4})\). By corollary 2.1 of [19], we know \[\|I_{\mu}*\varphi\|_{\frac{4s}{4-(4-\mu)s}}\leq C\|\varphi\|_{s}\] for any \(\varphi\in L^{s}(\mathbb{R}^{4})\) and \(s\in(1,\frac{4}{4-\mu})\). 
Hence, let \(s\to\frac{4}{4-\mu}\), then \(\frac{4s}{4-(4-\mu)s}\to\infty\) and \[\|I_{\mu}*F(u_{n},v_{n})\|_{\infty}\leq C\|F(u_{n},v_{n})\|_{\frac{4}{4-\mu}} \quad\text{for all}\ n\in\mathbb{N}^{+},\] and by (3.1), \[|F(u_{n},v_{n})|\leq\xi|(u_{n},v_{n})|^{\tau+1}+C_{\xi}|(u_{n},v_{n})|^{q+1}(e ^{\alpha|(u_{n},v_{n})|^{2}}-1)\quad\text{for all}\ n\in\mathbb{N}^{+}.\] For \(t^{\prime}=\frac{t}{t-1}\), using Lemma 2.5, the Holder inequality, and the Sobolev inequality, we have \[\|F(u_{n},v_{n})\|_{\frac{4}{4-\mu}} \leq\bigg{(}\int_{\mathbb{R}^{4}}\Big{(}\xi|(u_{n},v_{n})|^{\tau +1}+C_{\xi}|(u_{n},v_{n})|^{q+1}(e^{\alpha|(u_{n},v_{n})|^{2}}-1)\Big{)}^{ \frac{4}{4-\mu}}dx\bigg{)}^{\frac{4-\mu}{4}}\] \[\leq\bigg{(}\int_{\mathbb{R}^{4}}\Big{(}C|(u_{n},v_{n})|^{\frac{4( \tau+1)}{4-\mu}}+C|(u_{n},v_{n})|^{\frac{4(\sigma+1)}{4-\mu}}(e^{\frac{4 \alpha|(u_{n},v_{n})|^{2}}{4-\mu}}-1)\Big{)}dx\bigg{)}^{\frac{4-\mu}{4}}\] \[\leq C\|(u_{n},v_{n})\|_{\frac{4(\tau+1)}{4-\mu}}^{\tau+1}+C\bigg{(} \int_{\mathbb{R}^{4}}|(u_{n},v_{n})|^{4}\frac{(q+1)}{4-\mu}(e^{\frac{4\alpha|(u_ {n},v_{n})|^{2}}{4-\mu}}-1)dx\bigg{)}^{\frac{4-\mu}{4}}\] \[\leq C\|(u_{n},v_{n})\|_{\frac{4(\tau+1)}{4-\mu}}^{\tau+1}+C\Big{(} \int_{\mathbb{R}^{4}}|(u_{n},v_{n})|^{\frac{4(q+1)\epsilon^{\prime}}{4-\mu}} dx\Big{)}^{\frac{4-\mu}{4\epsilon^{\prime}}}\Big{(}\int_{\mathbb{R}^{4}}(e^{\frac{4 \alpha|(u_{n},v_{n})|^{2}}{4-\mu}}-1)dx\Big{)}^{\frac{4-\mu}{4\epsilon^{\prime }}}\] \[\leq C\|(u_{n},v_{n})\|_{\frac{4(\tau+1)}{4-\mu}}^{\tau+1}+C\|(u_ {n},v_{n})\|_{\frac{4(q+1)\epsilon^{\prime}}{4-\mu}}^{q+1}\Big{(}\int_{ \mathbb{R}^{4}}(e^{\frac{4\alpha|\Delta(u_{n},v_{n})|^{2}}{4-\mu}}(\frac{|(u_ {n},v_{n})|}{|\Delta(u_{n},v_{n})|_{2}})^{2}-1)dx\Big{)}^{\frac{4-\mu}{4 \epsilon}}\] \[\leq C\|(u_{n},v_{n})\|_{\frac{4(\tau+1)}{4-\mu}}^{\tau+1}+C\|(u_ {n},v_{n})\|_{\frac{4(q+1)\epsilon^{\prime}}{4-\mu}}^{q+1}\Big{(}\int_{ \mathbb{R}^{4}}(e^{\frac{4\alpha|\Delta(u_{n},v_{n})|^{2}}{4-\mu}}(\frac{|(u_ {n},v_{n})|}{|\Delta(u_{n},v_{n})|_{2}})^{2}-1)dx\Big{)}^{\frac{4-\mu}{4 \epsilon}}\] \[\leq C\|(u_{n},v_{n})\|^{\tau+1}+C\|(u_{n},v_{n})\|^{q+1}\leq C,\] where we have used the fact that \(\tau>2-\frac{\mu}{4}>1\), \(q>2\), and \(t^{\prime}\) large enough. Hence, for any \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{4})\), we have \[(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi\to(I_{\mu}*F(u,v))F_{u} (u,v)\varphi\quad\text{a.e. in }\mathbb{R}^{4},\] \[|(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi|\leq C|F_{u_{n}}(u_{n}, v_{n})||\varphi|\leq C|(u_{n},v_{n})|^{\tau}|\varphi|+C|(u_{n},v_{n})|^{q}| \varphi|(e^{\alpha|(u_{n},v_{n})|^{2}}-1)\] for all \(n\in\mathbb{N}^{+}\), and \[|(u_{n},v_{n})|^{\tau}|\varphi|+|(u_{n},v_{n})|^{q}|\varphi|(e^{\alpha|(u_{n}, v_{n})|^{2}}-1)\to|(u,v)|^{\tau}|\varphi|+|(u,v)|^{q}|\varphi|(e^{\alpha|(u,v)|^{ 2}}-1)\quad\text{a.e. in }\mathbb{R}^{4}.\] Denote \(\Omega:=supp\varphi\). In the following, we prove that \[\int_{\Omega}|(u_{n},v_{n})|^{\tau}|\varphi|dx\to\int_{\Omega}|(u,v)|^{\tau}| \varphi|dx,\ \ \text{as}\ \,n\to\infty \tag{4.5}\] and \[\int_{\Omega}|(u_{n},v_{n})|^{q}|\varphi|(e^{\alpha|(u_{n},v_{n})|^{2}}-1)dx \to\int_{\Omega}|(u,v)|^{q}|\varphi|(e^{\alpha|(u,v)|^{2}}-1)dx,\ \ \text{as}\ \,n\to\infty. \tag{4.6}\] Now, we show that \[|(u_{n},v_{n})|^{\tau}\to|(u,v)|^{\tau}\quad\text{in }L^{2}(\Omega)\quad\text{and} \quad|(u_{n},v_{n})|^{q}\to|(u,v)|^{q}\quad\text{in }L^{t^{\prime}}(\Omega). 
\tag{4.7}\] Since \(\tau>2-\frac{\mu}{4}>1\), we have \(u_{n}\to u\) and \(v_{n}\to v\) in \(L^{2\tau}(\Omega)\), thus \(|u_{n}|^{2}\to|u|^{2}\), \(|v_{n}|^{2}\to|v|^{2}\) in \(L^{\tau}(\Omega)\), and \[\||(u_{n},v_{n})|^{2}-|(u,v)|^{2}\|_{L^{\tau}(\Omega)}=\||u_{n}|^{2}-|u|^{2}+|v _{n}|^{2}-|v|^{2}\|_{L^{\tau}(\Omega)}\leq\||u_{n}|^{2}-|u|^{2}\|_{L^{\tau}( \Omega)}+\||v_{n}|^{2}-|v|^{2}\|_{L^{\tau}(\Omega)},\] which implies \(|(u_{n},v_{n})|^{2}\to|(u,v)|^{2}\) in \(L^{\tau}(\Omega)\), and we have \(|(u_{n},v_{n})|^{\tau}\to|(u,v)|^{\tau}\) in \(L^{2}(\Omega)\). Similarly, we can prove that \(|(u_{n},v_{n})|^{q}\to|(u,v)|^{q}\) in \(L^{t^{\prime}}(\Omega)\). By the definition of weak convergence, we obtain (4.5), and \[\Big{|}\int_{\Omega}|(u_{n},v_{n})|^{q}|\varphi|(e^{\alpha|(u_{n}, v_{n})|^{2}}-1)dx-\int_{\Omega}|(u,v)|^{q}|\varphi|(e^{\alpha|(u,v)|^{2}}-1)dx\Big{|}\] \[\leq\|\varphi\|_{\infty}\int_{\Omega}\Big{|}|(u_{n},v_{n})|^{q}-|( u,v)|^{q}\Big{|}(e^{\alpha|(u_{n},v_{n})|^{2}}-1)dx\] \[\quad+\|\varphi\|_{\infty}\int_{\Omega}|(u,v)|^{q}\Big{|}(e^{\alpha| (u_{n},v_{n})|^{2}}-1)-(e^{\alpha|(u,v)|^{2}}-1)\Big{|}dx\] \[\leq\|\varphi\|_{\infty}\Big{(}\int_{\Omega}\Big{|}|(u_{n},v_{n})|^{q}-| (u,v)|^{q}\Big{|}^{t^{\prime}}dx\Big{)}^{\frac{1}{t^{\prime}}}\Big{(}\int_{ \Omega}(e^{\alpha|(u_{n},v_{n})|^{2}}-1)^{t}dx\Big{)}^{\frac{1}{t}}\] \[\quad+\|\varphi\|_{\infty}\int_{\Omega}|(u,v)|^{q}\Big{|}(e^{ \alpha|(u_{n},v_{n})|^{2}}-1)-(e^{\alpha|(u,v)|^{2}}-1)\Big{|}dx\to 0,\ \ \text{as}\ \,n\to\infty.\] Applying a variant of the Lebesgue dominated convergence theorem, we obtain (4.3). Similar argument shows that (4.4) holds. ### The behaviour of \(E(a,b)\) The goal of this subsection is to characterize the behaviour of \(E(a,b)\) as \(a,b>0\) vary. In particular, we will prove the continuity of the function \((a,b)\mapsto E_{r}(a,b)\), the monotonicity of the functions \(a\mapsto E(a,b)\) and \(b\mapsto E(a,b)\), and the limit behaviour of \(E(a,b)\) as \((a,b)\to(0,0)\), where \(E_{r}(a,b)\) is the radial ground state energy defined in (4.10). **Lemma 4.2**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential subcritical growth at \(\infty\), then for any fixed \((u,v)\in\mathcal{S}\), we have_ \((i)\)_\(\mathcal{J}(\mathcal{H}((u,v),s))\to 0^{+}\) as \(s\to-\infty\);_ \((ii)\)_\(\mathcal{J}(\mathcal{H}((u,v),s))\to-\infty\) as \(s\to+\infty\)._ Proof.: A straightforward calculation shows that for any \(q>2\), \[\|\mathcal{H}(u,s)\|_{2}=a,\quad\|\mathcal{H}(v,s)\|_{2}=b,\] \[\|\Delta\mathcal{H}(u,s)\|_{2}=e^{2s}\|\Delta u\|_{2},\quad\|\Delta\mathcal{H }(v,s)\|_{2}=e^{2s}\|\Delta v\|_{2},\] and \[\|\mathcal{H}((u,v),s)\|_{q}=e^{\frac{2(q-2)s}{q}}\|(u,v)\|_{q}.\] So there exists \(s_{1}<<0\) such that \[\|\Delta\mathcal{H}((u,v),s)\|_{2}^{2}=\|\Delta\mathcal{H}(u,s)\|_{2}^{2}+\| \Delta\mathcal{H}(v,s)\|_{2}^{2}\leq C\] for all \(s\leq s_{1}\). 
Fix \(\alpha>0\) close to \(0\) and \(t>1\) close to \(1\) such that \[\frac{8\alpha t\|\Delta\mathcal{H}((u,v),s)\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}.\] For \(t^{\prime}=\frac{t}{t-1}\), using Lemma 2.5, the Holder inequality, and the Sobolev inequality, we have \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}((u,v),s)))F(\mathcal{ H}((u,v),s))dx\leq\|F(\mathcal{H}((u,v),s))\|_{\frac{8}{8-\mu}}^{2}\] \[\leq\bigg{(}\int_{\mathbb{R}^{4}}\Big{(}\xi|\mathcal{H}((u,v),s)| ^{\tau+1}+C_{\xi}|\mathcal{H}((u,v),s)|^{q+1}(e^{\alpha|\mathcal{H}((u,v),s)| ^{2}}-1)\Big{)}^{\frac{8}{8-\mu}}dx\bigg{)}^{\frac{8-\mu}{4}}\] \[\leq\bigg{(}\int_{\mathbb{R}^{4}}\Big{(}C|\mathcal{H}((u,v),s)| ^{\frac{8(\tau+1)}{8-\mu}}+C|\mathcal{H}((u,v),s)|^{\frac{8(q+1)}{8-\mu}}\,(e ^{\frac{8\alpha|\mathcal{H}((u,v),s)|^{2}}{8-\mu}}-1)\Big{)}dx\bigg{)}^{\frac{8 -\mu}{4}}\] \[\leq C\|\mathcal{H}((u,v),s)\|_{\frac{8(\tau+1)}{8-\mu}}^{2(\tau+ 1)}+C\bigg{(}\int_{\mathbb{R}^{4}}|\mathcal{H}((u,v),s)|^{\frac{8(q+1)}{8-\mu} }\,(e^{\frac{8\alpha|\mathcal{H}((u,v),s)|^{2}}{8-\mu}}-1)dx\bigg{)}^{\frac{8- \mu}{4}}\] \[\leq C\|\mathcal{H}((u,v),s)\|_{\frac{8(r+1)}{8-\mu}}^{2(r+1)}+C\Big{(}\int_{ \mathbb{R}^{4}}|\mathcal{H}((u,v),s)|^{\frac{8(q+1)\ell^{\prime}}{8-\mu}}dx\Big{)} ^{\frac{8-\mu}{4\ell^{\prime}}}\Big{(}\int_{\mathbb{R}^{4}}(e^{\frac{8\alpha \ell|\mathcal{H}((u,v),s)|^{2}}{8-\mu}}-1)dx\Big{)}^{\frac{8-\mu}{4\ell}}\] \[\leq C\|\mathcal{H}((u,v),s)\|_{\frac{8(r+1)}{8-\mu}}^{2(r+1)}+C \|\mathcal{H}((u,v),s)\|_{\frac{8(q+1)\ell^{\prime}}{8-\mu}}^{2(q+1)}\Big{(} \int_{\mathbb{R}^{4}}(e^{\frac{8\alpha\ell\|\Delta\mathcal{H}((u,v),s)|^{2}}{ 8-\mu}(\frac{|\mathcal{H}((u,v),s)|}{|\Delta\mathcal{H}((u,v),s)|^{2}})^{2}}- 1)dx\Big{)}^{\frac{8-\mu}{4\ell}}\] \[\leq C\|\mathcal{H}((u,v),s)\|_{\frac{8(r+1)}{8-\mu}}^{2(r+1)}+C \|\mathcal{H}((u,v),s)\|_{\frac{8(q+1)\ell^{\prime}}{8-\mu}}^{2(q+1)}\] \[=Ce^{(4\tau+\mu-4)s}\|(u,v)\|_{\frac{8(r+1)}{8-\mu}}^{2(\tau+1)}+ Ce^{(4q+4-\frac{8-\mu}{l^{\prime}})s}\|(u,v)\|_{\frac{8(q+1)\ell^{\prime}}{8- \mu}}^{2(q+1)}\] \[=Ce^{(4\tau+\mu-4)s}\Big{(}\|u\|_{\frac{8(r+1)}{8-\mu}}^{\frac{8( r+1)}{8-\mu}}+\|v\|_{\frac{8(r+1)}{8-\mu}}^{\frac{8(r+1)}{8-\mu}}\Big{)}^{ \frac{8-\mu}{4}}+Ce^{(4q+4-\frac{8-\mu}{\ell^{\prime}})s}\Big{(}\|u\|_{\frac{8( q+1)\ell^{\prime}}{8-\mu}}^{\frac{8(q+1)\ell^{\prime}}{8-\mu}}+\|v\|_{\frac{8(q+1) \ell^{\prime}}{8-\mu}}^{\frac{8(q+1)\ell^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu }{4\ell^{\prime}}}, \tag{4.8}\] for all \(s\leq s_{1}\). Since \(\tau>2-\frac{\mu}{4}\), \(q>2\) and \(t^{\prime}\) large enough, it follows from (4) that \[\mathcal{J}(\mathcal{H}((u,v),s))= \frac{1}{2}e^{4s}(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})-\int _{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}((u,v),s)))F(\mathcal{H}((u,v),s))dx \to 0^{+},\] as \(s\to-\infty\). For any fixed \(s>>0\), set \[\mathcal{M}(t)=\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(tu,tv))F(tu,tv)dx, \quad\text{for }t>0.\] It follows from \((F_{2})\) that \[\frac{\frac{d\mathcal{M}(t)}{dt}}{\mathcal{M}(t)}>\frac{2\theta}{t},\quad \text{for }t>0.\] Thus, integrating this over \([1,e^{2s}]\), we have \[\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(e^{2s}u,e^{2s}v))F(e^{2s}u,e^{2s}v) dx\geq\frac{e^{4\theta s}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx. 
\tag{4.9}\] Therefore, \[\mathcal{J}(\mathcal{H}((u,v),s))\leq \frac{e^{4s}}{2}\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2 })dx-\frac{e^{(4\theta+\mu-8)s}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v)F(u,v)dx.\] Since \(\theta>3-\frac{\mu}{4}\), the above inequality yields that \(\mathcal{J}(\mathcal{H}((u,v),s))\to-\infty\) as \(s\to+\infty\). **Lemma 4.3**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential subcritical growth at \(\infty\), then for any fixed \((u,v)\in\mathcal{S}\), the following statements hold._ \((i)\) _The function \(\mathcal{J}(\mathcal{H}((u,v),s))\) achieves its maximum with positive level at a unique point \(s_{(u,v)}\in\mathbb{R}\) such that \(\mathcal{H}((u,v),s_{(u,v)})\in\mathcal{P}(a,b)\)._ \((ii)\) _The mapping \((u,v)\mapsto s_{(u,v)}\) is continuous in \((u,v)\in\mathcal{S}\)._ Proof.: \((i)\) By Lemma 4.2, we have \[\lim_{s\to-\infty}\mathcal{J}(\mathcal{H}((u,v),s))=0^{+}\quad\text{and}\lim_{s \to+\infty}\mathcal{J}(\mathcal{H}((u,v),s))=-\infty.\] Therefore, there exists \(s_{(u,v)}\in\mathbb{R}\) such that \(P(\mathcal{H}((u,v),s_{(u,v)}))=\frac{1}{2}\frac{d}{ds}\mathcal{J}(\mathcal{H}( (u,v),s))|_{s=s_{(u,v)}}=0\), and \(\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))>0\). In the following, we prove the uniqueness of \(s_{(u,v)}\) for any \((u,v)\in\mathcal{S}\). Taking into account that \(\frac{d}{ds}\mathcal{J}(\mathcal{H}((u,v),s))|_{s=s_{(u,v)}}=0\), using (\(F_{5}\)), we deduce that \[\frac{d^{2}}{ds^{2}}\mathcal{J}(\mathcal{H}((u,v),s))\Big{|}_{s=s_{ (u,v)}}\] \[=8e^{4s_{(u,v)}}\int_{\mathbb{R}^{4}}(|\Delta u|^{2}+|\Delta v|^{2 })dx-\frac{(8-\mu)^{2}}{2}e^{(\mu-8)s_{(u,v)}}\int_{\mathbb{R}^{4}}(I_{\mu}*F( e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v))F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)dx\] \[\quad+(28-4\mu)e^{(\mu-8)s_{(u,v)}}\int_{\mathbb{R}^{4}}(I_{\mu}*F (e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v))\] \[\quad\times\Big{[}(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)\cdot(\frac{ \partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}u)},\frac {\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}v)})\Big{]}dx\] \[\quad-4e^{(\mu-8)s_{(u,v)}}\int_{\mathbb{R}^{4}}\Big{(}I_{\mu}* \Big{[}(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)\cdot(\frac{\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}u)},\frac{\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}v)})\Big{]}\Big{)}\] \[\quad\times\Big{[}(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)\cdot(\frac{ \partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}u)},\frac {\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}v)})\Big{]}dx\] \[\quad-4e^{(\mu-8)s_{(u,v)}}\int_{\mathbb{R}^{4}}(I_{\mu}*F(e^{2s_ {(u,v)}}u,e^{2s_{(u,v)}}v))\Big{[}(e^{2s_{(u,v)}}u)^{2}\frac{\partial^{2}F(e^{ 2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}u)^{2}}\] \[\quad+(e^{2s_{(u,v)}}v)^{2}\frac{\partial^{2}F(e^{2s_{(u,v)}}u,e^ {2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}v)^{2}}\Big{]}dx\] \[=-(8-\mu)(6-\frac{\mu}{2})\int_{\mathbb{R}^{4}}(I_{\mu}*F( \mathcal{H}((u,v),s_{(u,v)})))F(\mathcal{H}((u,v),s_{(u,v)}))dx\] \[\quad+(36-4\mu)\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}((u,v), s_{(u,v)})))\Big{[}\mathcal{H}((u,v),s_{(u,v)})\cdot(\frac{\partial F( \mathcal{H}((u,v),s_{(u,v)}))}{\partial(\mathcal{H}(u,s_{(u,v)}))},\frac{ \partial F(\mathcal{H}((u,v),s_{(u,v)}))}{\partial(\mathcal{H}(v,s_{(u,v)}))}) \Big{]}dx\] \[\quad-4\int_{\mathbb{R}^{4}}\Big{(}I_{\mu}*\Big{[}\mathcal{H}((u, v),s_{(u,v)})\cdot(\frac{\partial 
F(\mathcal{H}((u,v),s_{(u,v)}))}{\partial( \mathcal{H}(u,s_{(u,v)}))},\frac{\partial F(\mathcal{H}((u,v),s_{(u,v)}))}{ \partial(\mathcal{H}(v,s_{(u,v)}))})\Big{]}\Big{)}\] \[\quad\times\Big{[}\mathcal{H}((u,v),s_{(u,v)})\cdot(\frac{ \partial F(\mathcal{H}((u,v),s_{(u,v)}))}{\partial(\mathcal{H}(u,s_{(u,v)}))}, \frac{\partial F(\mathcal{H}((u,v),s_{(u,v)}))}{\partial(\mathcal{H}(v,s_{(u,v )}))})\Big{]}dx\] \[\quad-4\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}((u,v),s_{(u,v)} )))\Big{[}(\mathcal{H}(u,s_{(u,v)}))^{2}\frac{\partial^{2}F(\mathcal{H}((u,v ),s_{(u,v)}))}{\partial(\mathcal{H}(u,s_{(u,v)}))^{2}}+(\mathcal{H}(v,s_{(u,v )}))^{2}\frac{\partial^{2}F(\mathcal{H}((u,v),s_{(u,v)}))}{\partial(\mathcal{H }(u,s_{(u,v)}))^{2}}\Big{]}dx\] \[=4\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{A}{|x-y|^{\mu} }dxdy<0,\] where \[A= (3-\frac{\mu}{4})F(\mathcal{H}((u(y),v(y)),s_{(u(y),v(y))}))\mathfrak{ \mathfrak{F}}(\mathcal{H}((u(x),v(x)),s_{(u(x),v(x)})))\] \[-F(\mathcal{H}((u(y),v(y)),s_{(u(y),v(y))}))\] \[\times\Big{[}\mathcal{H}((u(x),v(x)),s_{(u(x),v(x)}))\cdot(\frac{ \partial\mathfrak{F}(\mathcal{H}((u(x),v(x)),s_{(u(x),v(x)})))}{\partial( \mathcal{H}(u(x),s_{(u(x),v(x)})))},\frac{\partial\mathfrak{F}(\mathcal{H}((u (x),v(x)),s_{(u(x),v(x)})))}{\partial(\mathcal{H}(v(x),s_{(u(x),v(x)})))})\Big{]}\] \[-\mathfrak{F}(\mathcal{H}((u(y),v(y)),s_{(u(y),v(y))}))[\mathfrak{ \mathfrak{F}}(\mathcal{H}((u(x),v(x)),s_{(u(x),v(x)})))-F(\mathcal{H}((u(x),v(x) ),s_{(u(x),v(x)})))]<0,\] this prove the uniqueness of \(s_{(u,v)}\). \((ii)\) By \((i)\), the mapping \((u,v)\mapsto s_{(u,v)}\) is well defined. Let \(\{(u_{n},v_{n})\}\subset\mathcal{S}\) be any sequence such that \((u_{n},v_{n})\rightarrow(u,v)\) in \(\mathcal{X}\) as \(n\rightarrow\infty\). We only need to prove that up to a subsequence, \(s_{(u_{n},v_{n})}\to s_{(u,v)}\) in \(\mathbb{R}\) as \(n\rightarrow\infty\). We first show that \(\{s_{(u_{n},v_{n})}\}\) is bounded in \(\mathbb{R}\). If up to a subsequence, \(s_{(u_{n},v_{n})}\to+\infty\) as \(n\to\infty\), then by (4.9), \((F_{2})\) and \((u_{n},v_{n})\to(u,v)\neq(0,0)\) in \(\mathcal{X}\) as \(n\to\infty\), we have \[0 \leq\lim_{n\to\infty}e^{-4s_{(u_{n},v_{n})}}\mathcal{J}(\mathcal{ H}((u_{n},v_{n}),s_{(u_{n},v_{n})}))\] \[\leq\lim_{n\to\infty}\frac{1}{2}\Big{[}\int_{\mathbb{R}^{4}}(| \Delta u_{n}|^{2}+|\Delta v_{n}|^{2})dx-e^{(4\theta+\mu-12)s_{(u_{n},v_{n})}} \int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx\Big{]}=-\infty,\] which is a contradiction. Therefore, \(\{s_{(u_{n},v_{n})}\}\) is bounded from above. On the other hand, by Lemma 3.1, \(\mathcal{H}((u_{n},v_{n}),s_{(u,v)})\to\mathcal{H}((u,v),s_{(u,v)})\) in \(\mathcal{X}\) as \(n\to\infty\), it follows from \((i)\) that \[\mathcal{J}(\mathcal{H}((u_{n},v_{n}),s_{(u_{n},v_{n})}))\geq\mathcal{J}( \mathcal{H}((u_{n},v_{n}),s_{(u,v)}))=\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)} ))+o_{n}(1)\] and thus \[\liminf_{n\to\infty}\mathcal{J}(\mathcal{H}((u_{n},v_{n}),s_{(u_{n},v_{n})})) \geq\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))>0.\] If up to a subsequence, \(s_{(u_{n},v_{n})}\to-\infty\) as \(n\to\infty\), using \((F_{2})\), we get \[\mathcal{J}(\mathcal{H}((u_{n},v_{n}),s_{(u_{n},v_{n})}))\leq\frac{e^{4s_{(u_ {n},v_{n})}}}{2}(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2})\to 0,\; \text{as}\;\,n\to\infty,\] which is an absurd. Therefore, \(\{s_{(u_{n},v_{n})}\}\) is bounded from below. Up to a subsequence, we assume that \(s_{(u_{n},v_{n})}\to s_{*}\) as \(n\to\infty\). 
Recalling that \((u_{n},v_{n})\to(u,v)\) in \(\mathcal{X}\) as \(n\to\infty\), then \(\mathcal{H}((u_{n},v_{n}),s_{(u_{n},v_{n})})\to\mathcal{H}((u,v),s_{*})\) in \(\mathcal{X}\) as \(n\to\infty\). Since \(P(\mathcal{H}((u_{n},v_{n}),s_{(u_{n},v_{n})}))=0\) for any \(n\in\mathbb{N}^{+}\), it follows that \(P(\mathcal{H}((u,v),s_{*}))=0\). By the uniqueness of \(s_{(u,v)}\), we get \(s_{(u,v)}=s_{*}\), thus \((ii)\) is proved. **Lemma 4.4**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then there exists \(\delta>0\) small enough such that_ \[\mathcal{J}(u,v)\geq\frac{1}{4}(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})\] _and_ \[P(u,v)\geq\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2}\] _for all \((u,v)\in\mathcal{S}\) satisfying \(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2}\leq\delta\)._ Proof.: Arguing as (4.8), for any \(\alpha\to 0^{+}\) and \(t\to 1^{+}\), using (2.3), we have \(\frac{8\alpha t\|\Delta(u,v)\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}\) and \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx\] \[\leq C\Big{(}\|u\|_{\frac{8(t+1)}{8-\mu}}^{\frac{8(\tau+1)}{8- \mu}}+\|v\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8 -\mu}{4}}+C\Big{(}\|u\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8(q+1)t^{ \prime}}{8-\mu}}+\|v\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8(q+1)t^{ \prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{4t^{\prime}}}\] \[\leq C\Big{(}a^{2}\|\Delta u\|_{2}^{\frac{8\tau-8+2\mu}{8-\mu}}+b ^{2}\|\Delta v\|_{2}^{\frac{8\tau-8+2\mu}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}\] \[\quad+C\Big{(}a^{2}\|\Delta u\|_{2}^{\frac{8(q+1)t^{\prime}-16+2 \mu}{8-\mu}}+b^{2}\|\Delta v\|_{2}^{\frac{8(q+1)t^{\prime}-16+2\mu}{8-\mu}} \Big{)}^{\frac{8-\mu}{4t^{\prime}}}\] \[\leq C(\|\Delta u\|_{2}^{\frac{8\tau-8+2\mu}{4}}+\|\Delta v\|_{2}^{ \frac{8\tau-8+2\mu}{4}})+C(\|\Delta u\|_{2}^{\frac{8(q+1)t^{\prime}-16+2\mu}{4 t^{\prime}}}+\|\Delta v\|_{2}^{\frac{8(q+1)t^{\prime}-16+2\mu}{4t^{ \prime}}})\] \[\leq\Big{[}C\delta^{\tau+\frac{\mu}{4}-2}+C\delta^{q-\frac{8-\mu} {4t^{\prime}}}\Big{]}\|\Delta u\|_{2}^{2}+\Big{[}C\delta^{\tau+\frac{\mu}{4}-2}+ C\delta^{q-\frac{8-\mu}{4t^{\prime}}}\Big{]}\|\Delta v\|_{2}^{2}.\] Since \(\tau>2-\frac{\mu}{4}\), \(q>2\) and \(t^{\prime}=\frac{t}{t-1}\) large enough, choosing \(\delta>0\) small enough, we conclude the result. Similarly, we can prove the other. **Lemma 4.5**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then we have_ \[\inf_{(u,v)\in\mathcal{P}(a,b)}(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})>0 \quad\text{and}\quad E(a,b)>0.\] Proof.: By Lemma 4.3, we obtain \(\mathcal{P}(a,b)\neq\emptyset\). Suppose that there exists a sequence \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) such that \((\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2})\to 0\) as \(n\to\infty\), then by Lemma 4.4, up to a subsequence, \[0=P(u_{n},v_{n})\geq\int_{\mathbb{R}^{4}}(|\Delta u_{n}|^{2}+|\Delta v_{n}|^{2} )dx\geq 0,\] which implies that \(\int_{\mathbb{R}^{4}}|\Delta u_{n}|^{2}dx=\int_{\mathbb{R}^{4}}|\Delta v_{n}|^ {2}dx=0\) for any \(n\in\mathbb{N}^{+}\). Hence, by \((F_{2})\) and \(P(u_{n},v_{n})=0\), we have \[0 =\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))\Big{(}\frac{8-\mu} {4}F(u_{n},v_{n})-(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})\Big{)}dx\] \[\leq\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))(\frac{8-\mu}{4 \theta}-1)[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx\leq 0.\] So \(u_{n},v_{n}\to 0\) a.e. 
in \(\mathbb{R}^{4}\), which is contradict to \(a,b>0\). For any \((u,v)\in\mathcal{P}(a,b)\), by Lemma 4.3, we have \[\mathcal{J}(u,v)=\mathcal{J}(\mathcal{H}((u,v),0))\geq\mathcal{J}(\mathcal{H} ((u,v),s))\quad\text{for all $s\in\mathbb{R}$}.\] Let \(\delta>0\) be the number given by Lemma 4.4 and \(4s:=\ln\frac{\delta}{(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})}\). Then \(e^{4s}(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})=\delta\), by Lemma 4.4, we deduce that \[\mathcal{J}(u,v)\geq\mathcal{J}(\mathcal{H}((u,v),s))\geq\frac{e^{4s}}{4}(\| \Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})=\frac{\delta}{4}>0.\] By the arbitrariness of \((u,v)\in\mathcal{P}(a,b)\), we derive the conclusion. Denote \(\mathcal{X}_{r}=H^{2}_{rad}(\mathbb{R}^{4})\times H^{2}_{rad}(\mathbb{R}^{4})\), \(\mathcal{S}_{r}=\mathcal{S}\cap\mathcal{X}_{r}\), \(\mathcal{P}_{r}(a,b)=\mathcal{P}(a,b)\cap\mathcal{X}_{r}\), \(S(c)=\Big{\{}u\in H^{2}(\mathbb{R}^{4}):\int_{\mathbb{R}^{4}}|u|^{2}dx=c \Big{\}}\), \(S_{r}(c)=S(c)\cap H^{2}_{rad}(\mathbb{R}^{4})\) for any \(c>0\), and the radial ground state energy \[E_{r}(a,b)=\inf_{(u,v)\in\mathcal{P}_{r}(a,b)}\mathcal{J}(u,v). \tag{4.10}\] Here \(H^{2}_{rad}(\mathbb{R}^{4})\) denotes the space of radially symmetric functions in \(H^{2}(\mathbb{R}^{4})\). The next lemma is focused on showing the continuity of \(E_{r}(a,b)\) in the exponential subcritical case. **Lemma 4.6**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then the function \((a,b)\mapsto E_{r}(a,b)\) is continuous._ Proof.: For any given \(a,b>0\), let \(\{a_{n}\}\), \(\{b_{n}\}\) be any positive sequence such that \(a_{n}\to a\) and \(b_{n}\to b\) as \(n\to\infty\), we only need to prove that \(E_{r}(a_{n},b_{n})\to E_{r}(a,b)\) as \(n\to\infty\). On the one hand, for any \((u,v)\in\mathcal{P}_{r}(a,b)\), then \[(\frac{a_{n}}{a}u,\frac{b_{n}}{b}v)\in S_{r}(a_{n})\times S_{r}(b_{n})\] for \(n\in\mathbb{N}^{+}\). Since \((\frac{a_{n}}{a}u,\frac{b_{n}}{b}v)\to(u,v)\) in \(\mathcal{X}_{r}\) as \(n\to\infty\), by Lemmas 3.1 and 4.3, we have \(s_{(\frac{an}{a}u,\frac{b_{n}}{b}v)}\to s_{(u,v)}=0\) in \(\mathbb{R}\) as \(n\to\infty\) and \[\mathcal{H}((\frac{a_{n}}{a}u,\frac{b_{n}}{b}v),s_{(\frac{an}{a}u,\frac{b_{n}} {b}v)})\to\mathcal{H}((u,v),s_{(u,v)})=(u,v)\quad\text{in $\mathcal{X}_{r}$ as $n\to\infty$}.\] As a consequence, \(\limsup_{n\to\infty}E_{r}(a_{n},b_{n})\leq\limsup_{n\to\infty}\mathcal{J}( \mathcal{H}((\frac{a_{n}}{a}u,\frac{b_{n}}{b}v),s_{(\frac{an}{a}u,\frac{b_{n}} {b}v)}))=\mathcal{J}(u,v)\). By the arbitrariness of \((u,v)\in\mathcal{P}_{r}(a,b)\), we get \(\limsup_{n\to\infty}E_{r}(a_{n},b_{n})\leq E_{r}(a,b)\). On the other hand, for any \(n\in\mathbb{N}^{+}\), by the definition of \(E_{r}(a_{n},b_{n})\), there exists \((\hat{u}_{n},\hat{v}_{n})\in\mathcal{P}_{r}(a_{n},b_{n})\) such that \[\mathcal{J}(\hat{u}_{n},\hat{v}_{n})\leq E_{r}(a_{n},b_{n})+\frac{1}{n},\] thus \(\limsup_{n\to\infty}\mathcal{J}(\hat{u}_{n},\hat{v}_{n})\leq E_{r}(a,b)\). This with \(P(\hat{u}_{n},\hat{v}_{n})=0\) and \((F_{2})\) yields that \(\{(\hat{u}_{n},\hat{v}_{n})\}\) is bounded in \(\mathcal{X}_{r}\). 
Setting \[s_{n}:=\Big{(}\frac{a}{a_{n}}\Big{)}^{\frac{1}{2}},\quad t_{n}:=\Big{(}\frac{ b}{b_{n}}\Big{)}^{\frac{1}{2}},\quad(\tilde{u}_{n},\tilde{v}_{n}):=(\hat{u}_{n}( \frac{x}{s_{n}}),\hat{v}_{n}(\frac{x}{t_{n}}))\in\mathcal{S}_{r},\] then by Lemma 4.3, we have \[E_{r}(a,b) \leq\mathcal{J}(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})}))\] \[\leq\mathcal{J}(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{(\tilde {u}_{n},\tilde{v}_{n})}))+\Big{|}\mathcal{J}(\mathcal{H}((\tilde{u}_{n},\tilde {v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})}))-\mathcal{J}(\mathcal{H}((\hat{u} _{n},\hat{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})}))\Big{|}\] \[\leq E_{r}(a_{n},b_{n})+\frac{1}{n}+\Big{|}\mathcal{J}(\mathcal{ H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})}))-\mathcal{J}( \mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})})) \Big{|}\] \[=:E_{r}(a_{n},b_{n})+\frac{1}{n}+A(n).\] We complete the proof if we can prove that \(\limsup_{n\to\infty}A(n)=0\). Indeed, \[A(n) =\Big{|}\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}(( \hat{u}_{n},\hat{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})})))F(\mathcal{H}(( \hat{u}_{n},\hat{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})}))dx\] \[\qquad-\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}(( \tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})})))F(\mathcal{H}(( \tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})}))dx\Big{|}.\] We claim that \(\limsup_{n\to\infty}s_{(\tilde{u}_{n},\tilde{v}_{n})}<+\infty\), otherwise, \(\limsup_{n\to\infty}s_{(\tilde{u}_{n},\tilde{v}_{n})}=+\infty\). First, we prove that: up to a subsequence and up to translations in \(\mathbb{R}^{4}\), there exists \((\tilde{u},\tilde{v})\in\mathcal{X}_{r}\) such that \((\tilde{u}_{n},\tilde{v}_{n})\rightharpoonup(\tilde{u},\tilde{v})\neq(0,0)\) in \(\mathcal{X}_{r}\). Since \(s_{n},t_{n}\to 1\) as \(n\to\infty\), and \(\{(\tilde{u}_{n},\hat{v}_{n})\}\) is bounded in \(\mathcal{X}_{r}\), we know \(\{(\tilde{u}_{n},\tilde{v}_{n})\}\) is bounded in \(\mathcal{X}_{r}\), up to a subsequence, we assume that \((\hat{u}_{n},\hat{v}_{n})\rightharpoonup(\hat{u},\hat{v})\) and \((\tilde{u}_{n},\tilde{v}_{n})\rightharpoonup(\tilde{u},\tilde{v})\) in \(\mathcal{X}_{r}\). By the uniqueness of limit, we have \((\hat{u},\hat{v})=(\tilde{u},\tilde{v})\). For any \(r>0\), let \[\varrho:=\limsup_{n\to\infty}\Big{(}\sup_{y\in\mathbb{R}^{4}}\int_{B(y,r)}(| \tilde{u}_{n}|^{2}+|\tilde{v}_{n}|^{2})dx\Big{)}.\] If \(\varrho>0\), there exists \(\{y_{n}\}\subset\mathbb{R}^{4}\) such that \(\int_{B(y_{n},1)}(|\tilde{u}_{n}|^{2}+|\tilde{v}_{n}|^{2})dx>\frac{\varrho}{2}\), i.e., \(\int_{B(0,1)}(|\tilde{u}_{n}(x-y_{n})|^{2}+|\tilde{v}_{n}(x-y_{n})|^{2})dx>\frac {\varrho}{2}\). Up to a subsequence, and up to translations in \(\mathbb{R}^{4}\), \((\tilde{u}_{n},\tilde{v}_{n})\rightharpoonup(\tilde{u},\tilde{v})\neq(0,0)\) in \(\mathcal{X}_{r}\). If \(\varrho=0\), using the Lions lemma [57, Lemma 1.21], \(\tilde{u}_{n},\tilde{v}_{n}\to 0\) in \(L^{p}(\mathbb{R}^{4})\) for any \(p>2\). 
As a consequence, arguing as (4.8), for any \(\alpha\to 0^{+}\), \(t\to 1^{+}\), by \(\tau>2-\frac{\mu}{4}>1\), \(q>2\) and \(t^{\prime}=\frac{t}{t-1}\) large enough, we have \(\frac{8\alpha t\|\Delta(\hat{u}_{n},\hat{v}_{n})\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}\) and \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(\hat{u}_{n},\hat{v}_{n})[(\hat{u} _{n},\hat{v}_{n})\cdot\nabla F(\hat{u}_{n},\hat{v}_{n})]dx\] \[\leq C\Big{(}\|\hat{u}_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+ 1)}{8-\mu}}+\|\hat{v}_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8-\mu}{8-\mu}}+ \|\hat{v}_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)t^{\prime}}{8-\mu}} +\|\hat{v}_{n}\|_{\frac{8(\tau+1)t^{\prime}}{8-\mu}}^{\frac{8-\mu}{8-\mu}}\] \[= C\Big{(}s_{n}^{-4}\|\tilde{u}_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{ \frac{8(\tau+1)}{8-\mu}}+t_{n}^{-4}\|\tilde{v}_{n}\|_{\frac{8(\tau+1)}{8-\mu}} ^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}+C\Big{(}s_{n}^{-4}\|\tilde {u}_{n}\|_{\frac{8(\tau+1)t^{\prime}}{8-\mu}}^{\frac{8(\sigma+1)t^{\prime}}{8- \mu}}+t_{n}^{-4}\|\tilde{v}_{n}\|_{\frac{8(\sigma+1)t^{\prime}}{8-\mu}}^{\frac {8-\mu}{8-\mu}}\Big{)}^{\frac{8-\mu}{4t^{\prime}}}\to 0,\] as \(n\to\infty\). From the above equality and \(P(\hat{u}_{n},\hat{v}_{n})=0\), using \((F_{2})\), we deduce that \(\|\Delta\hat{u}_{n}\|_{2}^{2}+\|\Delta\hat{v}_{n}\|_{2}^{2}\to 0\) as \(n\to\infty\). In view of Lemma 4.4, up to a subsequence, we obtain \[0=P(\hat{u}_{n},\hat{v}_{n})\geq\int_{\mathbb{R}^{4}}(|\Delta\hat{u}_{n}|^{2}+ |\Delta\hat{v}_{n}|^{2})dx\geq 0,\] which implies that \(\int_{\mathbb{R}^{4}}|\Delta\hat{u}_{n}|^{2}dx=\int_{\mathbb{R}^{4}}|\Delta \hat{v}_{n}|^{2}dx=0\) for any \(n\in\mathbb{N}^{+}\). Hence, by \((F_{2})\) and \(P(\hat{u}_{n},\hat{v}_{n})=0\), we have \[0 =\int_{\mathbb{R}^{4}}(I_{\mu}*F(\hat{u}_{n},\hat{v}_{n}))\Big{(} \frac{8-\mu}{4}F(\hat{u}_{n},\hat{v}_{n})-(\hat{u}_{n},\hat{v}_{n})\cdot\nabla F (\hat{u}_{n},\hat{v}_{n})\Big{)}dx\] \[\leq\int_{\mathbb{R}^{4}}(I_{\mu}*F(\hat{u}_{n},\hat{v}_{n}))( \frac{8-\mu}{4\theta}-1)[(\hat{u}_{n},\hat{v}_{n})\cdot\nabla F(\hat{u}_{n}, \hat{v}_{n})]dx\leq 0,\] So \(\hat{u}_{n},\hat{v}_{n}\to 0\) a.e. in \(\mathbb{R}^{4}\), which is contradict to \(a_{n},b_{n}>0\). If \(\limsup_{n\to\infty}s_{(\tilde{u}_{n},\tilde{v}_{n})}=+\infty\), using (4.9), \((F_{2})\) and \((\tilde{u}_{n},\tilde{v}_{n})\to(\tilde{u},\tilde{v})\neq(0,0)\) a.e. in \(\mathbb{R}^{4}\), we have \[0 \leq\liminf_{n\to\infty}e^{-4s_{(\tilde{u}_{n},\tilde{v}_{n})}} \mathcal{J}(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde {v}_{n})}))\] \[\leq\liminf_{n\to\infty}\frac{1}{2}\Big{[}\int_{\mathbb{R}^{4}}(| \Delta\tilde{u}_{n}|^{2}+|\Delta\tilde{v}_{n}|^{2})dx-e^{(4\theta+\mu-12)s_{( \tilde{u}_{n},\tilde{v}_{n})}}\int_{\mathbb{R}^{4}}(I_{\mu}*F(\tilde{u}_{n}, \tilde{v}_{n})F(\tilde{u}_{n},\tilde{v}_{n})dx\Big{]}=-\infty,\] which is an absurd. Hence, \(\limsup_{n\to\infty}s_{(\tilde{u}_{n},\tilde{v}_{n})}<+\infty\), up to a subsequence, we assume that \(\sup_{n\in\mathbb{N}^{+}}s_{(\tilde{u}_{n},\tilde{v}_{n})}<+\infty\). 
According to \(\{(\hat{u}_{n},\hat{v}_{n})\}\) and \(\{(\tilde{u}_{n},\tilde{v}_{n})\}\) are bounded in \(\mathcal{X}_{r}\), using \(\sup_{n\in\mathbb{N}^{+}}s_{(\tilde{u}_{n},\tilde{v}_{n})}<+\infty\), we obtain \[\sup_{n\in\mathbb{N}^{+}}\|\Delta\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})})\|_{2}^{2}<+\infty,\quad\sup_{n\in\mathbb{N}^{+}} \|\Delta\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n}) })\|_{2}^{2}<+\infty.\] Fix \(\alpha\to 0^{+}\) and \(t\to 1^{+}\) such that \[\sup_{n\in\mathbb{N}^{+}}\alpha t\|\Delta(\hat{u}_{n},\hat{v}_{n})\|_{2}^{2} \leq 32\pi^{2},\quad\sup_{n\in\mathbb{N}^{+}}\alpha t\|\Delta(\tilde{u}_{n}, \tilde{v}_{n})\|_{2}^{2}\leq 32\pi^{2},\] and \[\sup_{n\in\mathbb{N}^{+}}\frac{4\alpha t\|\Delta\mathcal{H}((\hat{u}_{n},\hat{v}_{ n}),s_{(\tilde{u}_{n},\tilde{v}_{n})})\|_{2}^{2}}{4-\mu}\leq 32\pi^{2},\quad\sup_{n\in\mathbb{N}^{+}} \frac{4\alpha t\|\Delta\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n}, \tilde{v}_{n})})\|_{2}^{2}}{4-\mu}\leq 32\pi^{2}.\] With a similar proof of Lemma 4.1, we can prove that, for any \(\alpha\to 0^{+}\) and \(t\to 1^{+}\), \[(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)\rightharpoonup(e^{\alpha|( \tilde{u},\tilde{v})|^{2}}-1),\quad(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{ 2}}-1)\rightharpoonup(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1),\quad\text{in }L^{t}( \mathbb{R}^{4})\] and \[\|(I_{\mu}*F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u}_{n}, \tilde{v}_{n})})))\|_{\infty}\leq C,\quad\|(I_{\mu}*F(\mathcal{H}((\tilde{u}_ {n},\tilde{v}_{n}),s_{(\tilde{u}_{n},\tilde{v}_{n})})))\|_{\infty}\leq C.\] Hence, \[(I_{\mu}*F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{u }_{n},\tilde{v}_{n})})))F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})}))\] \[-(I_{\mu}*F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{ u}_{n},\tilde{v}_{n})})))F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})}))\to 0\quad\text{a.e. in }\mathbb{R}^{4},\] \[|(I_{\mu}*F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{ u}_{n},\tilde{v}_{n})})))F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})}))\] \[-(I_{\mu}*F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{(\tilde{ u}_{n},\tilde{v}_{n})})))F(\mathcal{H}((\tilde{u}_{n},\tilde{v}_{n}),s_{( \tilde{u}_{n},\tilde{v}_{n})}))|\] \[\leq C(|(\tilde{u}_{n},\tilde{v}_{n})|^{\tau+1}+|(\tilde{u}_{n},\tilde{ v}_{n})|^{\tau+1})+C[|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}(e^{\alpha|(\tilde{u}_{n}, \tilde{v}_{n})|^{2}}-1)+|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}(e^{\alpha|( \tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)]\] for all \(n\in\mathbb{N}^{+}\), and \[|(\tilde{u}_{n},\tilde{v}_{n})|^{\tau+1}+|(\tilde{u}_{n},\tilde{ v}_{n})|^{\tau+1}+|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}(e^{\alpha|(\tilde{u}_{n}, \tilde{v}_{n})|^{2}}-1)+|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}(e^{\alpha|( \tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)\] \[\to 2|(\tilde{u},\tilde{v})|^{\tau+1}+2|(\tilde{u},\tilde{v})|^{q+1} (e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1)\quad\text{a.e. 
in }\mathbb{R}^{4}.\] In the following, we prove that \[\int_{\mathbb{R}^{4}}(|(\tilde{u}_{n},\tilde{v}_{n})|^{\tau+1}+|(\tilde{u}_{n },\tilde{v}_{n})|^{\tau+1})dx\to 2\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{ \tau+1}dx,\,\,\,\text{as}\,\,\,n\to\infty \tag{4.11}\] and \[\int_{\mathbb{R}^{4}}[|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}(e^{ \alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)+|(\tilde{u}_{n},\tilde{v}_{n})|^ {q+1}(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)]dx\] \[\to 2\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{q+1}(e^{ \alpha|(\tilde{u},\tilde{v})|^{2}}-1)dx,\,\,\,\text{as}\,\,\,n\to\infty. \tag{4.12}\] Using the compact embedding \(H^{2}_{rad}(\mathbb{R}^{4})\hookrightarrow L^{p}(\mathbb{R}^{4})\) for any \(p>2\), arguing as (4.7), we obtain (4.11) and \(|(\hat{u}_{n},\hat{v}_{n})|^{q+1}\to|(\tilde{u},\tilde{v})|^{q+1}\), \(|(\tilde{u}_{n},\tilde{v}_{n})|^{q+1}\to|(\tilde{u},\tilde{v})|^{q+1}\) in \(L^{t^{\prime}}(\mathbb{R}^{4})\), where \(t^{\prime}=\frac{t}{t-1}\). By the definition of weak convergence, using the Holder inequality, we infer that \[\Big{|}\int_{\mathbb{R}^{4}}[|(\hat{u}_{n},\hat{v}_{n})|^{q+1}(e^{ \alpha|(\hat{u}_{n},\hat{v}_{n})|^{2}}-1)+|(\tilde{u}_{n},\tilde{v}_{n})|^{q+ 1}(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)]dx-\int_{\mathbb{R}^{4}}2|( \tilde{u},\tilde{v})|^{q+1}(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1)dx\Big{|}\] \[\leq\Big{|}\int_{\mathbb{R}^{4}}|(\hat{u}_{n},\hat{v}_{n})|^{q+1 }(e^{\alpha|(\hat{u}_{n},\hat{v}_{n})|^{2}}-1)dx-\int_{\mathbb{R}^{4}}|(\tilde {u},\tilde{v})|^{q+1}(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1)dx\Big{|}\] \[\quad+\Big{|}\int_{\mathbb{R}^{4}}|(\tilde{u}_{n},\tilde{v}_{n})|^{ q+1}(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)dx-\int_{\mathbb{R}^{4}}|(\tilde {u},\tilde{v})|^{q+1}(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1)dx\Big{|}\] \[\leq\int_{\mathbb{R}^{4}}\Big{|}|(\hat{u}_{n},\hat{v}_{n})|^{q+1} -|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e^{\alpha|(\hat{u}_{n},\hat{v}_{n})|^{2}}- 1)dx+\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e^{\alpha|( \hat{u}_{n},\hat{v}_{n})|^{2}}-1)-(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1) \Big{|}dx\] \[\quad+\int_{\mathbb{R}^{4}}\Big{|}|(\tilde{u}_{n},\tilde{v}_{n})|^{ q+1}-|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1) dx+\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e^{\alpha|(\tilde{u}_{n}, \tilde{v}_{n})|^{2}}-1)-(e^{\alpha|(\tilde{u},\tilde{v})|^{2}}-1)\Big{|}dx\] \[\leq\Big{(}\int_{\mathbb{R}^{4}}\Big{|}|(\hat{u}_{n},\hat{v}_{n})|^{q+1}-|( \tilde{u},\tilde{v})|^{q+1}|^{t^{\prime}}dx\Big{)}^{\frac{1}{t^{\prime}}}\Big{(} \int_{\mathbb{R}^{4}}(e^{\alpha|(\hat{u}_{n},\hat{v}_{n})|^{2}}-1)^{t}dx\Big{)} ^{\frac{1}{t}}\] \[\quad+\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e ^{\alpha|(\hat{u}_{n},\hat{v}_{n})|^{2}}-1)-(e^{\alpha|(\tilde{u},\tilde{v})|^ {2}}-1)\Big{|}dx\] \[\quad+\Big{(}\int_{\mathbb{R}^{4}}\Big{|}|(\tilde{u}_{n},\tilde{v }_{n})|^{q+1}-|(\tilde{u},\tilde{v})|^{q+1}|^{t^{\prime}}dx\Big{)}^{\frac{1}{ t^{\prime}}}\Big{(}\int_{\mathbb{R}^{4}}(e^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^ {2}}-1)^{t}dx\Big{)}^{\frac{1}{t}}\] \[\quad+\int_{\mathbb{R}^{4}}|(\tilde{u},\tilde{v})|^{q+1}\Big{|}(e ^{\alpha|(\tilde{u}_{n},\tilde{v}_{n})|^{2}}-1)-(e^{\alpha|(\tilde{u},\tilde{v })|^{2}}-1)\Big{|}dx\to 0,\quad\text{as $n\to\infty$}.\] Applying a variant of the Lebesgue dominated convergence theorem, we get \(\limsup_{n\to\infty}A(n)=0\). Thus \(E_{r}(a_{n},b_{n})\to E_{r}(a,b)\) as \(n\to\infty\). 
**Lemma 4.7**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential subcritical growth at \(\infty\), then the functions \(a\mapsto E(a,b)\) and \(b\mapsto E(a,b)\) are non-increasing._ Proof.: For any given \(a,b>0\), if \(\tilde{a}>a\) and \(\tilde{b}>b\), we shall prove that \(E(\tilde{a},b)\leq E(a,b)\) and \(E(a,\tilde{b})\leq E(a,b)\). By the definition of \(E(a,b)\), for any \(\varepsilon>0\), there exists \((u,v)\in\mathcal{P}(a,b)\) such that \[\mathcal{J}(u,v)\leq E(a,b)+\frac{\varepsilon}{3}. \tag{4.13}\] Consider a cut-off function \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{4},[0,1])\) such that \(\varphi(x)=1\) if \(|x|\leq 1\), \(\varphi(x)=0\) if \(|x|\geq 2\). For any small \(\delta>0\), define \(u_{\delta}(x)=\varphi(\delta x)u(x)\in H^{2}(\mathbb{R}^{4})\backslash\{0\}\), then \((u_{\delta}(x),v(x))\to(u,v)\) in \(\mathcal{X}\) as \(\delta\to 0^{+}\). By Lemmas 3.1 and 4.3, we have \(s_{(u_{\delta},v)}\to s_{(u,v)}=0\) in \(\mathbb{R}\) as \(\delta\to 0^{+}\) and \[\mathcal{H}((u_{\delta},v),s_{(u_{\delta},v)})\to\mathcal{H}((u,v),s_{(u,v)})= (u,v)\quad\text{in $\mathcal{X}$ as $\delta\to 0^{+}$}.\] Fix \(\delta_{0}>0\) small enough such that \[\mathcal{J}(\mathcal{H}((u_{\delta_{0}},v),s_{(u_{\delta_{0}},v)}))\leq \mathcal{J}(u,v)+\frac{\varepsilon}{3}. \tag{4.14}\] Let \(\psi(x)\in C_{0}^{\infty}(\mathbb{R}^{4})\) satisfy \(supp(\psi)\subset B_{1+\frac{4}{\delta_{0}}}(0)\backslash B_{\frac{4}{\delta_{ 0}}}(0)\), and set \[\hat{u}_{\delta_{0}}=\frac{\tilde{a}^{2}-\|u_{\delta_{0}}\|_{2}^{2}}{\|\psi\|_ {2}^{2}}\psi.\] Define \(\hat{u}_{\lambda}=u_{\delta_{0}}+\mathcal{H}(\hat{u}_{\delta_{0}},\lambda)\) for any \(\lambda<0\). Since \[dist(u_{\delta_{0}},\mathcal{H}(\hat{u}_{\delta_{0}},\lambda))\geq\frac{2}{ \delta_{0}}>0,\] we have \(\|\hat{u}_{\lambda}\|_{2}^{2}=\tilde{a}^{2}\), i.e., \((\hat{u}_{\lambda},v)\in S(\tilde{a})\times S(b)\). We claim that \(s_{(\hat{u}_{\lambda},v)}\) is bounded from above as \(\lambda\to-\infty\). Otherwise, by (4.9), \((F_{2})\) and \((\hat{u}_{\lambda},v)\to(u_{\delta_{0}},v)\neq(0,0)\) a.e. in \(\mathbb{R}^{4}\) as \(\lambda\to-\infty\), we have \[0 \leq\lim_{n\to\infty}e^{-4s_{(\hat{u}_{\lambda},v)}}\mathcal{J}( \mathcal{H}((\hat{u}_{\lambda},v),s_{(\hat{u}_{\lambda},v)}))\] \[\leq\lim_{n\to\infty}\frac{1}{2}\Big{[}\int_{\mathbb{R}^{4}}(| \Delta\hat{u}_{\lambda}|^{2}+|\Delta v|^{2})dx-e^{(4\theta+\mu-12)s_{(\hat{u}_{ \lambda},v)}}\int_{\mathbb{R}^{4}}(I_{\mu}*F(\hat{u}_{\lambda},v))F(\hat{u}_{ \lambda},v)dx\Big{]}=-\infty,\] which is a contradiction. Thus \(s_{(\hat{u}_{\lambda},v)}+\lambda\to-\infty\) as \(\lambda\to-\infty\), by \((F_{2})\), we get \[\mathcal{J}(\mathcal{H}((\hat{u}_{\delta_{0}},0),s_{(\hat{u}_{\lambda},v)}+ \lambda))\leq\frac{e^{4(s_{(\hat{u}_{\lambda},v)}+\lambda)}}{2}\|\Delta\hat{u}_ {\delta_{0}}\|_{2}^{2}\to 0,\quad\text{as}\;\;\lambda\to-\infty. 
\tag{4.15}\] Now, using Lemma 4.3, \((\ref{eq:2.1})-(\ref{eq:2.1})\), we obtain \[E(\tilde{a},b)\leq\mathcal{J}(\mathcal{H}((\hat{u}_{\lambda},v), s_{(\hat{u}_{\lambda},v)})) =\mathcal{J}(\mathcal{H}((u_{\delta_{0}},v),s_{(\hat{u}_{\lambda},v)}))+ \mathcal{J}(\mathcal{H}((\mathcal{H}(\hat{u}_{\delta_{0}},0),\lambda),s_{( \hat{u}_{\lambda},v)}))\] \[=\mathcal{J}(\mathcal{H}((u_{\delta_{0}},v),s_{(\hat{u}_{\lambda },v)}))+\mathcal{J}(\mathcal{H}((\hat{u}_{\delta_{0}},0),s_{(\hat{u}_{\lambda },v)}+\lambda))\] \[\leq\mathcal{J}(\mathcal{H}((u_{\delta_{0}},v),s_{(u_{\delta_{0} },v)}))+\mathcal{J}(\mathcal{H}((\hat{u}_{\delta_{0}},0),s_{(\hat{u}_{\lambda },v)}+\lambda))\leq E(a,b)+\varepsilon.\] By the arbitrariness of \(\varepsilon>0\), we deduce that \(E(\tilde{a},b)\leq E(a,b)\). Similarly, we prove \(E(a,\tilde{b})\leq E(a,b)\). **Lemma 4.8**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\). Suppose that \((\ref{eq:2.1})\) possesses a ground state solution \((u,v)\) with \(\lambda_{1},\lambda_{2}<0\), then \(E(a^{\prime},b)<E(a,b)\) for any \(a^{\prime}>a\) close to \(a\) and \(E(a,b^{\prime})<E(a,b)\) for any \(b^{\prime}>b\) close to \(b\)._ Proof.: For any \(t>0\) and \(s\in\mathbb{R}\), we know \(\mathcal{H}((tu,v),s)\in S(ta)\times S(b)\) and \[\mathcal{J}(\mathcal{H}((tu,v),s))= \frac{e^{4s}}{2}\int_{\mathbb{R}^{4}}(t^{2}|\Delta u|^{2}+| \Delta v|^{2})dx-\frac{e^{(\mu-8)s}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(te^{2s} u,e^{2s}v))F(te^{2s}u,e^{2s}v)dx.\] Denote \(\alpha(t,s):=\mathcal{J}(\mathcal{H}((tu,v),s))\), then \[\frac{\partial\alpha(t,s)}{\partial t}= te^{4s}\int_{\mathbb{R}^{4}}|\Delta u|^{2}dx-e^{(\mu-8)s}\int_{ \mathbb{R}^{4}}(I_{\mu}*F(te^{2s}u,e^{2s}v))\frac{\partial F(te^{2s}u,e^{2s}v )}{\partial(te^{2s}u)}e^{2s}udx:=\frac{B}{t},\] where \[B= \langle\mathcal{J}^{\prime}(\mathcal{H}((tu,v),s)),\mathcal{H}((tu,v),s) \rangle-e^{4s}\int_{\mathbb{R}^{4}}|\Delta v|^{2}dx-e^{(\mu-8)s}\int_{ \mathbb{R}^{4}}(I_{\mu}*F(te^{2s}u,e^{2s}v))\frac{\partial F(te^{2s}u,e^{2s}v )}{\partial(e^{2s}v)}e^{2s}vdx.\] By Lemma 3.1, \(\mathcal{H}((tu,v),s)\to(u,v)\) in \(\mathcal{X}\) as \((t,s)\to(1,0)\), and since \(\lambda_{1}<0\), we have \[\langle\mathcal{J}^{\prime}(u,v),(u,v)\rangle-\int_{\mathbb{R}^{4}}|\Delta v|^ {2}dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)vdx=\lambda_{1}\|u\|_{2}^ {2}=\lambda_{1}a^{2}<0.\] Hence, one can fix \(\delta>0\) small enough such that \[\frac{\partial\alpha(t,s)}{\partial t}<0\quad\text{for any }(t,s)\in(1,1+ \delta]\times[-\delta,\delta].\] For any \(t\in(1,1+\delta]\) and \(s\in[-\delta,\delta]\), using the mean value theorem, we obtain \[\alpha(t,s)=\alpha(1,s)+(t-1)\frac{\partial\alpha(t,s)}{\partial t}\Big{|}_{t =\xi}<\alpha(1,s)\] for some \(\xi\in(1,t)\). By Lemma 4.3, \(s_{(tu,v)}\to s_{(u,v)}=0\) in \(\mathbb{R}\) as \(t\to 1^{+}\). For any \(a^{\prime}>a\) close to \(a\), let \(t_{0}=\frac{a^{\prime}}{a}\), then \[t_{0}\in(1,1+\delta],\quad s_{(t_{0}u,v)}\in[-\delta,\delta]\] and thus, using Lemma 4.3 again, \[E(a^{\prime},b)\leq\alpha(t_{0},s_{(t_{0}u,v)})<\alpha(1,s_{(t_{0}u,v)})= \mathcal{J}(\mathcal{H}((u,v),s_{(t_{0}u,v)}))\leq\mathcal{J}(u,v)=E(a,b).\] Analogously, we can prove that \(E(a,b^{\prime})<E(a,b)\) for any \(b^{\prime}>b\) close to \(b\) From Lemmas 4.7 and 4.8, we directly obtain **Lemma 4.9**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\). 
Suppose that (1.6) possesses a ground state solution with \(\lambda_{1},\lambda_{2}<0\), then \(E(a^{\prime},b)<E(a,b)\) for any \(a^{\prime}>a\) and \(E(a,b^{\prime})<E(a,b)\) for any \(b^{\prime}>b\)._ **Lemma 4.10**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then_ \[\lim_{(a,b)\to(0^{+},0^{+})}E(a,b)=+\infty.\] Proof.: It is sufficient to prove that for any \(\{(u_{n},v_{n})\}\in\mathcal{X}\backslash(0,0)\) such that \(P(u_{n},v_{n})=0\) and \(\lim_{n\to\infty}(\|u_{n}\|_{2}^{2}+\|v_{n}\|_{2}^{2})=0\), one has \(\lim_{n\to\infty}\mathcal{J}(u_{n},v_{n})=+\infty\). Setting \[4s_{n}:=\ln(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2})\quad\text{ and}\quad(\hat{u}_{n},\hat{v}_{n}):=\mathcal{H}((u_{n},v_{n}),-s_{n}),\] then \(\|\Delta\hat{u}_{n}\|_{2}^{2}+\|\Delta\hat{v}_{n}\|_{2}^{2}=1\) and \(\|\hat{u}_{n}\|_{2}^{2}+\|\hat{v}_{n}\|_{2}^{2}=\|u_{n}\|_{2}^{2}+\|v_{n}\|_{2} ^{2}\). Since \(\lim_{n\to\infty}(\|u_{n}\|_{2}^{2}+\|v_{n}\|_{2}^{2})=0\), we have \[\limsup_{n\to\infty}\Big{(}\sup_{y\in\mathbb{R}^{4}}\int_{B(y,r)}(|\hat{u}_{n }|^{2}+|\hat{v}_{n}|^{2})dx\Big{)}=0,\] for any \(r>0\). Using the Lions lemma [57, Lemma 1.21], \(\hat{u}_{n},\hat{v}_{n}\to 0\) in \(L^{p}(\mathbb{R}^{4})\) for any \(p>2\). As a consequence, arguing as (4.8), for any fixed \(s>0\), as \(\alpha\to 0^{+}\) and \(t\to 1^{+}\), by \(\tau>2-\frac{\mu}{4}>1\), \(q>2\), and \(t^{\prime}=\frac{t}{t-1}\) large enough, we have \(\frac{8\alpha t\|\Delta\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s)\|_{2}^{2}}{8- \mu}\leq 32\pi^{2}\) and \[\int_{\mathbb{R}^{4}}(I_{\mu}*F(\mathcal{H}((\hat{u}_{n},\hat{v}_ {n}),s)))F(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s))dx\] \[\leq Ce^{(4\tau+\mu-4)s}\Big{(}\|\hat{u}_{n}\|_{\frac{8(\tau+1)}{ 8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}+\|\hat{v}_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{ \frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}+Ce^{(4q+4-\frac{8-\mu}{p^{ \prime}})s}\Big{(}\|\hat{u}_{n}\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8(q+ 1)t^{\prime}}{8-\mu}}+\|\hat{v}_{n}\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8 (q+1)t^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{4t^{\prime}}}\] Combing this and \(P(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{n}))=P(u_{n},v_{n})=0\), using Lemma 4.3, we derive that \[\mathcal{J}(u_{n},v_{n})=\mathcal{J}(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{ n}))\geq\mathcal{J}(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{n}),s))=\mathcal{J}( \mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s))\geq e^{4s}+o_{n}(1).\] By the arbitrariness of \(s\in\mathbb{R}\), we deduce that \(\lim_{n\to\infty}\mathcal{J}(u_{n},v_{n})=+\infty\). ### Palais-Smale sequence In this section, using the minimax principle based on the homotopy stable family of compact subsets of \(\mathcal{S}\), we will construct a \((PS)_{E(a,b)}\) sequence on \(\mathcal{P}(a,b)\) for \(\mathcal{J}\). 
**Proposition 4.1**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then there exists a \((PS)_{E(a,b)}\) sequence \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) for \(\mathcal{J}\)._ Following by [57], we recall that the tangent space of \(\mathcal{S}\) at \((u,v)\) is defined by \[T_{(u,v)}\mathcal{S}=\Big{\{}(\varphi,\psi)\in\mathcal{X}:\int_{\mathbb{R}^{4}} (u\varphi+v\psi)dx=0\Big{\}}.\] To prove Proposition 4.1, we borrow some arguments from [10, 11] and consider the functional \(\mathcal{S}\to\mathbb{R}\) defined by \[\mathcal{I}(u,v):=\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)})),\] where \(s_{(u,v)}\in\mathbb{R}\) is the unique number obtained in Lemma 4.3 for any \((u,v)\in\mathcal{S}\). By Lemma 4.3, we know that \(s_{(u,v)}\) is continuous as a mapping of \((u,v)\in\mathcal{S}\). However, it remains unknown that whether \(s_{(u,v)}\) is of class \(C^{1}\). Inspired by [53, Proposition 2.9], we observe that **Lemma 4.11**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then \(\mathcal{I}:\mathcal{S}\to\mathbb{R}\) is of class \(C^{1}\) and_ \[\langle\mathcal{I}^{\prime}(u,v),(\varphi,\psi)\rangle= e^{4s_{(u,v)}}\int_{\mathbb{R}^{4}}(\Delta u\Delta\varphi+\Delta v \Delta\psi)dx-e^{(\mu-8)s_{(u,v)}}\int_{\mathbb{R}^{4}}(I_{\mu}*F(e^{2s_{(u,v) }}u,e^{2s_{(u,v)}}v))\] \[\times\Big{[}(e^{2s_{(u,v)}}\varphi,e^{2s_{(u,v)}}\psi)\cdot( \frac{\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}u)}, \frac{\partial F(e^{2s_{(u,v)}}u,e^{2s_{(u,v)}}v)}{\partial(e^{2s_{(u,v)}}v)} )\Big{]}dx\] \[= \langle\mathcal{J}^{\prime}(\mathcal{H}((u,v),s_{(u,v)})),\mathcal{ H}((\varphi,\psi),s_{(u,v)})\rangle\] _for any \((u,v)\in\mathcal{S}\) and \((\varphi,\psi)\in T_{(u,v)}\mathcal{S}\)._ Proof.: Let \((u,v)\in\mathcal{S}\) and \((\varphi,\psi)\in T_{(u,v)}\mathcal{S}\), then for any \(|t|\) small enough, by Lemma 4.3, we obtain \[\mathcal{I}(u+t\varphi,v+t\psi)-\mathcal{I}(u,v)\] \[= \mathcal{J}(\mathcal{H}((u+t\varphi,v+t\psi),s_{(u+t\varphi,v+t \psi)}))-\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))\] \[\leq \mathcal{J}(\mathcal{H}((u+t\varphi,v+t\psi),s_{(u+t\varphi,v+t \psi)}))-\mathcal{J}(\mathcal{H}((u,v),s_{(u+t\varphi,v+t\psi)}))\] \[= \frac{e^{4s_{(u+t\varphi,v+t\psi)}}}{2}\int_{\mathbb{R}^{4}} \Big{[}|\Delta(u+t\varphi)|^{2}-|\Delta u|^{2}+|\Delta(v+t\psi)|^{2}-|\Delta v |^{2}\Big{]}dx\] \[-\frac{e^{(\mu-8)s_{(u+t\varphi,v+t\psi)}}}{2}\int_{\mathbb{R}^{4 }}\Big{[}(I_{\mu}*F(e^{2s_{(u+t\varphi,v+t\psi)}}(u+t\varphi),e^{2s_{(u+t \varphi,v+t\psi)}}(v+t\psi)))\] \[\times F(e^{2s_{(u+t\varphi,v+t\psi)}}(u+t\varphi),e^{2s_{(u+t \varphi,v+t\psi)}}(v+t\psi))\] \[-(I_{\mu}*F(e^{2s_{(u+t\varphi,v+t\psi)}}u,e^{2s_{(u+t\varphi,v+t \psi)}}v))F(e^{2s_{(u+t\varphi,v+t\psi)}}u,e^{2s_{(u+t\varphi,v+t\psi)}}v) \Big{]}dx\] \[= \frac{e^{4s_{(u+t\varphi,v+t\psi)}}}{2}\int_{\mathbb{R}^{4}} \Big{(}t^{2}|\Delta u|^{2}+t^{2}|\Delta v|^{2}+2t\Delta u\Delta\varphi+2t \Delta v\Delta\psi\Big{)}dx\] \[-\frac{e^{(\mu-8)s_{(u+t\varphi,v+t\psi)}}}{2}\int_{\mathbb{R}^{4 }}(I_{\mu}*F(e^{2s_{(u+t\varphi,v+t\psi)}}u,e^{2s_{(u+t\varphi,v+t\psi)}}v))\] \[\times\Big{[}(e^{2s_{(u+t\varphi,v+t\psi)}}t\varphi,e^{2s_{(u+t \varphi,v+t\psi)}}t\psi)\cdot(F_{z_{1}}\big{|}_{z_{1}=e^{2s_{(u+t\varphi,v+t \psi)}}(u+\xi_{t}t\varphi)},F_{z_{2}}\big{|}_{z_{2}=e^{2s_{(u+t\varphi,v+t \psi)}}(v+\xi_{t}t\psi)})\Big{]}dx\] 
\[-\frac{e^{(\mu-8)s_{(u+t\varphi,v+t\psi)}}}{2}\int_{\mathbb{R}^{4 }}(I_{\mu}*F(e^{2s_{(u+t\varphi,v+t\psi)}}u,e^{2s_{(u+t\varphi,v+t\psi)}}v))\] \[\times\Big{[}(e^{2s_{(u+t\varphi,v+t\psi)}}t\varphi,e^{2s_{(u+t \varphi,v+t\psi)}}t\psi)\cdot(F_{z_{1}}\big{|}_{z_{1}=e^{2s_{(u+t\varphi,v+t \psi)}}(u+\xi_{t}t\varphi)},F_{z_{2}}\big{|}_{z_{2}=e^{2s_{(u+t\varphi,v+t \psi)}}(v+\xi_{t}t\psi)})\Big{]}dx,\] where \(\xi_{t}\in(0,1)\). Analogously, we have \[\mathcal{I}(u+t\varphi,v+t\psi)-\mathcal{I}(u,v)\] \[\geq \frac{e^{4s_{(u,v)}}}{2}\int_{\mathbb{R}^{4}}\Big{(}t^{2}|\Delta u |^{2}+t^{2}|\Delta v|^{2}+2t\Delta u\Delta\varphi+2t\Delta v\Delta\psi\Big{)}dx\] \[\max_{(u,v)\in D_{n}}\mathcal{I}(u,v)=\max_{(u,v)\in A_{n}}\mathcal{I}(u,v)\to e_{ \mathcal{G}},\quad\text{as $n\to\infty$.}\] which implies that \(\{D_{n}\}\subset\mathcal{G}\) is another minimizing sequence of \(e_{\mathcal{G}}\). By Lemma 2.6, we obtain a \((PS)_{eg}\) sequence \(\{(\hat{u}_{n},\hat{v}_{n})\}\subset\mathcal{S}\) for \(\mathcal{I}\) such that \(\lim_{n\to\infty}dist((\hat{u}_{n},\hat{v}_{n}),D_{n})=0\). Let \[(u_{n},v_{n}):=\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{(\hat{u}_{n},\hat{v}_{ n})}),\] then we prove that \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) is the desired sequence. We claim that there exists \(C>0\) such that \(e^{-4s_{(\hat{u}_{n},\hat{v}_{n})}}\leq C\) for any \(n\in\mathbb{N}^{+}\). Indeed, we observe that \[e^{-4s_{(\hat{u}_{n},\hat{v}_{n})}}=\frac{\|\Delta\hat{u}_{n}\|_{2}^{2}+\|\Delta \hat{v}_{n}\|_{2}^{2}}{\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2}}.\] Since \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\), by Lemma 4.5, there exists a constant \(\widetilde{C}>0\) such that \(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2}\geq\widetilde{C}\) for any \(n\in\mathbb{N}^{+}\). Regarding the term of \(\{(\hat{u}_{n},\hat{v}_{n})\}\), since \(D_{n}\subset\mathcal{P}(a,b)\) for any \(n\in\mathbb{N}^{+}\) and for any \((u,v)\in\mathcal{P}(a,b)\), one has \(\mathcal{J}(u,v)=\mathcal{I}(u,v)\), thus \[\max_{(u,v)\in D_{n}}\mathcal{J}(u,v)=\max_{(u,v)\in D_{n}}\mathcal{I}(u,v) \to e_{\mathcal{G}},\quad\text{as }n\to\infty.\] This together with \(D_{n}\subset\mathcal{P}(a,b)\) and \((F_{2})\) yields \(\{D_{n}\}\) is uniformly bounded in \(\mathcal{X}\), thus from \[\lim_{n\to\infty}dist((\hat{u}_{n},\hat{v}_{n}),D_{n})=0,\] we obtain \(\sup_{n\geq 1}\|(\hat{u}_{n},\hat{v}_{n})\|^{2}<\infty\). This prove the claim. From \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\), it follows that \[\mathcal{J}(u_{n},v_{n})=\mathcal{I}(u_{n},v_{n})=\mathcal{I}(\hat{u}_{n},\hat {v}_{n})\to e_{\mathcal{G}},\quad\text{as }n\to\infty.\] For any \((\varphi,\psi)\in T_{(u_{n},v_{n})}\mathcal{S}\), we have \[\int_{\mathbb{R}^{4}}\Big{(}\hat{u}_{n}e^{-2s_{(\hat{u}_{n},\hat {v}_{n})}}\varphi(e^{-s_{(\hat{u}_{n},\hat{v}_{n})}}x)+\hat{v}_{n}e^{-2s_{( \hat{u}_{n},\hat{v}_{n})}}\psi(e^{-s_{(\hat{u}_{n},\hat{v}_{n})}}x)\Big{)}dx\] \[= \int_{\mathbb{R}^{4}}\Big{(}\hat{u}_{n}(e^{s_{(\hat{u}_{n},\hat {v}_{n})}}y)e^{2s_{(\hat{u}_{n},\hat{v}_{n})}}\varphi(y)+\hat{v}_{n}(e^{s_{( \hat{u}_{n},\hat{v}_{n})}}y)e^{2s_{(\hat{u}_{n},\hat{v}_{n})}}\psi(y)\Big{)}dy =\int_{\mathbb{R}^{4}}(u_{n}\varphi+v_{n}\psi)dx=0,\] which means \(\mathcal{H}((\varphi,\psi),-s_{(\hat{u}_{n},\hat{v}_{n})})\in T_{(\hat{u}_{n},\hat{v}_{n})}\mathcal{S}\). 
Also, we have

\[\|(e^{-2s_{(\hat{u}_{n},\hat{v}_{n})}}\varphi(e^{-s_{(\hat{u}_{n},\hat{v}_{n})}}x),e^{-2s_{(\hat{u}_{n},\hat{v}_{n})}}\psi(e^{-s_{(\hat{u}_{n},\hat{v}_{n})}}x))\|^{2}\]
\[= e^{-4s_{(\hat{u}_{n},\hat{v}_{n})}}(\|\Delta\varphi\|_{2}^{2}+\|\Delta\psi\|_{2}^{2})+2e^{-2s_{(\hat{u}_{n},\hat{v}_{n})}}(\|\nabla\varphi\|_{2}^{2}+\|\nabla\psi\|_{2}^{2})+(\|\varphi\|_{2}^{2}+\|\psi\|_{2}^{2})\]
\[\leq C(\|\Delta\varphi\|_{2}^{2}+\|\Delta\psi\|_{2}^{2})+2\sqrt{C}(\|\nabla\varphi\|_{2}^{2}+\|\nabla\psi\|_{2}^{2})+(\|\varphi\|_{2}^{2}+\|\psi\|_{2}^{2})\leq\max\{1,C\}\|(\varphi,\psi)\|^{2}.\]

By Lemma 4.11, for any \((\varphi,\psi)\in T_{(u_{n},v_{n})}\mathcal{S}\), we deduce that

\[|\langle\mathcal{J}^{\prime}(u_{n},v_{n}),(\varphi,\psi)\rangle| =|\langle\mathcal{J}^{\prime}(\mathcal{H}((\hat{u}_{n},\hat{v}_{n}),s_{(\hat{u}_{n},\hat{v}_{n})})),\mathcal{H}(\mathcal{H}((\varphi,\psi),-s_{(\hat{u}_{n},\hat{v}_{n})}),s_{(\hat{u}_{n},\hat{v}_{n})})\rangle|\]
\[=|\langle\mathcal{I}^{\prime}(\hat{u}_{n},\hat{v}_{n}),\mathcal{H}((\varphi,\psi),-s_{(\hat{u}_{n},\hat{v}_{n})})\rangle|\]
\[\leq\|\mathcal{I}^{\prime}(\hat{u}_{n},\hat{v}_{n})\|_{*}\cdot\|\mathcal{H}((\varphi,\psi),-s_{(\hat{u}_{n},\hat{v}_{n})})\|\]
\[\leq\max\{1,\sqrt{C}\}\|\mathcal{I}^{\prime}(\hat{u}_{n},\hat{v}_{n})\|_{*}\cdot\|(\varphi,\psi)\|,\]

which implies that \(\|\mathcal{J}^{\prime}(u_{n},v_{n})\|_{*}\leq\max\{1,\sqrt{C}\}\|\mathcal{I}^{\prime}(\hat{u}_{n},\hat{v}_{n})\|_{*}\to 0\) as \(n\to\infty\). Thus \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) is a \((PS)_{e_{\mathcal{G}}}\) sequence for \(\mathcal{J}\).

**Proof of Proposition 4.1:** Note that the class \(\mathcal{G}\) of all singletons included in \(\mathcal{S}\) is a homotopy stable family of compact subsets of \(\mathcal{S}\) without boundary. By Lemma 4.12, if \(e_{\mathcal{G}}>0\), then there exists a \((PS)_{e_{\mathcal{G}}}\) sequence \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) for \(\mathcal{J}\). By Lemma 4.5, we know \(E(a,b)>0\), so if we can prove that \(e_{\mathcal{G}}=E(a,b)\), then the proof is complete. In fact, by the definition of \(\mathcal{G}\), we have

\[e_{\mathcal{G}}=\inf_{A\in\mathcal{G}}\max_{(u,v)\in A}\mathcal{I}(u,v)=\inf_{(u,v)\in\mathcal{S}}\mathcal{I}(u,v)=\inf_{(u,v)\in\mathcal{S}}\mathcal{I}(\mathcal{H}((u,v),s_{(u,v)}))=\inf_{(u,v)\in\mathcal{S}}\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)})).\]

For any \((u,v)\in\mathcal{S}\), it follows from \(\mathcal{H}((u,v),s_{(u,v)})\in\mathcal{P}(a,b)\) that \(\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))\geq E(a,b)\); by the arbitrariness of \((u,v)\in\mathcal{S}\), we get \(e_{\mathcal{G}}\geq E(a,b)\). On the other hand, for any \((u,v)\in\mathcal{P}(a,b)\), by Lemma 4.3, we deduce that \(s_{(u,v)}=0\) and \(\mathcal{J}(u,v)=\mathcal{J}(\mathcal{H}((u,v),0))\geq\inf_{(u,v)\in\mathcal{S}}\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))\); by the arbitrariness of \((u,v)\in\mathcal{P}(a,b)\), we have \(E(a,b)\geq e_{\mathcal{G}}\).

For the sequence \(\{(u_{n},v_{n})\}\) obtained in Proposition 4.1, by \((F_{2})\), we know that \(\{(u_{n},v_{n})\}\) is bounded in \(\mathcal{X}\); up to a subsequence, we assume that \((u_{n},v_{n})\rightharpoonup(u_{a},v_{b})\) in \(\mathcal{X}\).
Furthermore, by \(\mathcal{J}^{\prime}|_{\mathcal{S}}(u_{n},v_{n})\to 0\) as \(n\to\infty\) and the Lagrange multiplier rule, there exist two sequences \(\{\lambda_{1,n}\},\{\lambda_{2,n}\}\subset\mathbb{R}\) such that

\[\int_{\mathbb{R}^{4}}(\Delta u_{n}\Delta\varphi+\Delta v_{n}\Delta\psi)dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{v_{n}}(u_{n},v_{n})\psi dx\]
\[= \int_{\mathbb{R}^{4}}(\lambda_{1,n}u_{n}\varphi+\lambda_{2,n}v_{n}\psi)dx+o_{n}(1)\|(\varphi,\psi)\| \tag{4.16}\]

for every \((\varphi,\psi)\in\mathcal{X}\).

**Lemma 4.13**.: _Assume that \(F\) satisfies \((F_{1})-(F_{3})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then up to a subsequence and up to translations in \(\mathbb{R}^{4}\), \(u_{a}\neq 0\) and \(v_{b}\neq 0\)._

Proof.: Let

\[\varrho:=\limsup_{n\to\infty}\Big{(}\sup_{y\in\mathbb{R}^{4}}\int_{B(y,1)}(|u_{n}|^{2}+|v_{n}|^{2})dx\Big{)}.\]

If \(\varrho>0\), then there exists \(\{y_{n}\}\subset\mathbb{R}^{4}\) such that \(\int_{B(y_{n},1)}(|u_{n}|^{2}+|v_{n}|^{2})dx>\frac{\varrho}{2}\), i.e., \(\int_{B(0,1)}(|u_{n}(x-y_{n})|^{2}+|v_{n}(x-y_{n})|^{2})dx>\frac{\varrho}{2}\). Up to a subsequence, and up to translations in \(\mathbb{R}^{4}\), \((u_{n},v_{n})\rightharpoonup(u_{a},v_{b})\neq(0,0)\) in \(\mathcal{X}\). From (4.16) and Lemma 4.1, we can see that \((u_{a},v_{b})\) is a weak solution of (1.2) with \(N=4\). Assume that \(u_{a}=0\), then by \((F_{2})\) and \((F_{3})\), we know that \(v_{b}=0\). Similarly, \(v_{b}=0\) implies \(u_{a}=0\). This is impossible, since \((u_{a},v_{b})\neq(0,0)\).

If \(\varrho=0\), using the Lions lemma [57, Lemma 1.21], \(u_{n},v_{n}\to 0\) in \(L^{p}(\mathbb{R}^{4})\) for any \(p>2\). As a consequence, arguing as in (4.8), since \(\{(u_{n},v_{n})\}\) is bounded in \(\mathcal{X}\), choosing \(\alpha>0\) small and \(t>1\) close to \(1\), by \(\tau>2-\frac{\mu}{4}>1\), \(q>2\), and \(t^{\prime}=\frac{t}{t-1}\) large enough, we have \(\sup_{n\in\mathbb{N}^{+}}\frac{8\alpha t\|\Delta(u_{n},v_{n})\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}\) and

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx\]
\[\leq C\Big{(}\|u_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}+\|v_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}+C\Big{(}\|u_{n}\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8(q+1)t^{\prime}}{8-\mu}}+\|v_{n}\|_{\frac{8(q+1)t^{\prime}}{8-\mu}}^{\frac{8(q+1)t^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{4t^{\prime}}}\to 0,\quad\text{as }n\to\infty.\]

From the above estimate and \(P(u_{n},v_{n})=0\), we deduce that \(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2}\to 0\) as \(n\to\infty\); hence \((F_{2})\) implies \(\lim_{n\to\infty}\mathcal{J}(u_{n},v_{n})=0\), which is absurd, since \(E(a,b)>0\).

**Lemma 4.14**.: _Assume that \(F\) satisfies \((F_{1})-(F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential subcritical growth at \(\infty\), then \(\{\lambda_{1,n}\}\) and \(\{\lambda_{2,n}\}\) are bounded in \(\mathbb{R}\).
Furthermore, up to a subsequence, \(\lambda_{1,n}\to\lambda_{1}<0\) and \(\lambda_{2,n}\to\lambda_{2}<0\) in \(\mathbb{R}\) as \(n\to\infty\)._

Proof.: Using \((u_{n},0)\) and \((0,v_{n})\) as test functions in (4.16), we have

\[\lambda_{1,n}a^{2}=\int_{\mathbb{R}^{4}}|\Delta u_{n}|^{2}dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})u_{n}dx+o_{n}(1)\|u_{n}\|\]

and

\[\lambda_{2,n}b^{2}=\int_{\mathbb{R}^{4}}|\Delta v_{n}|^{2}dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{v_{n}}(u_{n},v_{n})v_{n}dx+o_{n}(1)\|v_{n}\|.\]

By \(P(u_{n},v_{n})=0\), we obtain

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx= \int_{\mathbb{R}^{4}}(|\Delta u_{n}|^{2}+|\Delta v_{n}|^{2})dx+\frac{8-\mu}{4}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx.\]

This together with \((F_{4})\) and the boundedness of \(\{(u_{n},v_{n})\}\) yields that \(\{\lambda_{1,n}\}\) and \(\{\lambda_{2,n}\}\) are bounded in \(\mathbb{R}\). Moreover,

\[-\lambda_{1,n}= \frac{1}{a^{2}}\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))(\frac{8-\mu}{4}F(u_{n},v_{n})-F_{v_{n}}(u_{n},v_{n})v_{n})dx+\int_{\mathbb{R}^{4}}|\Delta v_{n}|^{2}dx\Big{)}\]

and

\[-\lambda_{2,n}= \frac{1}{b^{2}}\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))(\frac{8-\mu}{4}F(u_{n},v_{n})-F_{u_{n}}(u_{n},v_{n})u_{n})dx+\int_{\mathbb{R}^{4}}|\Delta u_{n}|^{2}dx\Big{)}.\]

Since \(u_{a}\neq 0\) and \(v_{b}\neq 0\), using \((F_{2})\), \((F_{4})\) and Fatou's lemma, we obtain \(\liminf\limits_{n\to\infty}-\lambda_{1,n}>0\) and \(\liminf\limits_{n\to\infty}-\lambda_{2,n}>0\), namely, \(\limsup\limits_{n\to\infty}\lambda_{1,n}<0\) and \(\limsup\limits_{n\to\infty}\lambda_{2,n}<0\). Since \(\{\lambda_{1,n}\}\) and \(\{\lambda_{2,n}\}\) are bounded, up to a subsequence, we assume that \(\lambda_{1,n}\to\lambda_{1}<0\) and \(\lambda_{2,n}\to\lambda_{2}<0\) in \(\mathbb{R}\) as \(n\to\infty\).

### Proof of Theorem 1.1

Proof of Theorem 1.1.: Under the assumptions of Theorem 1.1, from (4.16) and Lemmas 4.1, 4.13, 4.14, we know \((u_{a},v_{b})\) is a nontrivial weak solution of (1.2) with \(\lambda_{1},\lambda_{2}<0\), \(N=4\), and \(P(u_{a},v_{b})=0\). Using the Brezis-Lieb lemma [57, Lemma 1.32], we have

\[\|u_{n}\|_{2}^{2}=\|u_{n}-u_{a}\|_{2}^{2}+\|u_{a}\|_{2}^{2}+o_{n}(1)\quad\text{and}\quad\|v_{n}\|_{2}^{2}=\|v_{n}-v_{b}\|_{2}^{2}+\|v_{b}\|_{2}^{2}+o_{n}(1).\]

Let \(a_{1}:=\|u_{a}\|_{2}>0\), \(b_{1}:=\|v_{b}\|_{2}>0\), and \(a_{1,n}:=\|u_{n}-u_{a}\|_{2}\), \(b_{1,n}:=\|v_{n}-v_{b}\|_{2}\), then \(a^{2}=a_{1}^{2}+a_{1,n}^{2}+o_{n}(1)\) and \(b^{2}=b_{1}^{2}+b_{1,n}^{2}+o_{n}(1)\). Since \(P(u_{a},v_{b})=0\), using \((F_{2})\) and Fatou's lemma, we have

\[\mathcal{J}(u_{a},v_{b}) =\mathcal{J}(u_{a},v_{b})-\frac{1}{2}P(u_{a},v_{b})\]
\[=\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{a},v_{b}))\Big{[}(u_{a},v_{b})\cdot\nabla F(u_{a},v_{b})-(3-\frac{\mu}{4})F(u_{a},v_{b})\Big{]}dx\]
\[\leq\frac{1}{2}\liminf\limits_{n\to\infty}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))\Big{[}(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})-(3-\frac{\mu}{4})F(u_{n},v_{n})\Big{]}dx\]
\[=\liminf\limits_{n\to\infty}(\mathcal{J}(u_{n},v_{n})-\frac{1}{2}P(u_{n},v_{n}))=E(a,b).\]

On the other hand, it follows from Lemma 4.7 that \(\mathcal{J}(u_{a},v_{b})\geq E(a_{1},b_{1})\geq E(a,b)\). Thus \(\mathcal{J}(u_{a},v_{b})=E(a_{1},b_{1})=E(a,b)\), and it follows from Lemmas 4.9 and 4.14 that \(\|u_{a}\|_{2}=a\), \(\|v_{b}\|_{2}=b\). This implies \((u_{a},v_{b})\) is a ground state solution of (1.6).
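The scaling arguments of this section all rest on the mass-preserving dilation \(\mathcal{H}((u,v),s)(x)=(e^{2s}u(e^{s}x),e^{2s}v(e^{s}x))\) on \(\mathbb{R}^{4}\), whose two key properties are that it preserves the \(L^{2}\)-norm and scales \(\|\Delta\cdot\|_{2}^{2}\) by \(e^{4s}\). The following sympy sketch (an aside, not part of the paper; the Gaussian is only a convenient radial test profile) verifies both properties:

```python
# Sanity check (not from the paper) of the dilation H((u,v),s)(x) =
# (e^{2s} u(e^{s}x), e^{2s} v(e^{s}x)): on R^4 it preserves the L^2-norm and
# scales ||Delta u||_2^2 by e^{4s}.  For radial functions, dx = 2*pi^2*r^3 dr
# and the Laplacian is f'' + (3/r) f'.
import sympy as sp

r = sp.symbols('r', positive=True)
lam = sp.symbols('lambda', positive=True)   # lam stands for e^{s}

def l2sq(f):
    # squared L^2-norm of a radial function on R^4
    return 2*sp.pi**2*sp.integrate(f**2*r**3, (r, 0, sp.oo))

def lap4(f):
    # radial Laplacian on R^4
    return sp.diff(f, r, 2) + 3*sp.diff(f, r)/r

u = sp.exp(-r**2)                  # test profile
us = lam**2*u.subs(r, lam*r)       # H(u, s) with lam = e^{s}

print(sp.simplify(l2sq(us) - l2sq(u)))                     # 0: mass is preserved
print(sp.simplify(l2sq(lap4(us)) - lam**4*l2sq(lap4(u))))  # 0: ||Delta.||_2^2 scales by e^{4s}
```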
## 5 Exponential critical case

This section is devoted to studying (1.6) in the exponential critical case.

**Lemma 5.1**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\), and let \(\{(u_{n},v_{n})\}\subset\mathcal{S}\) be a bounded \((PS)_{E(a,b)}\) sequence of \(\mathcal{J}\) in \(\mathcal{X}\). If, up to a subsequence, \((u_{n},v_{n})\rightharpoonup(u,v)\) in \(\mathcal{X}\) and_

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx\leq K_{0} \tag{5.1}\]

_for some constant \(K_{0}>0\), then for any \(\varphi\in C^{\infty}_{0}(\mathbb{R}^{4})\), we have_

\[\int_{\mathbb{R}^{4}}\Delta u_{n}\Delta\varphi dx\to\int_{\mathbb{R}^{4}}\Delta u\Delta\varphi dx,\ \ \ \int_{\mathbb{R}^{4}}\Delta v_{n}\Delta\varphi dx\to\int_{\mathbb{R}^{4}}\Delta v\Delta\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{5.2}\]

\[\int_{\mathbb{R}^{4}}u_{n}\varphi dx\to\int_{\mathbb{R}^{4}}u\varphi dx,\ \ \ \int_{\mathbb{R}^{4}}v_{n}\varphi dx\to\int_{\mathbb{R}^{4}}v\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{5.3}\]

_and_

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx\to\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{5.4}\]

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{v_{n}}(u_{n},v_{n})\varphi dx\to\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{v}(u,v)\varphi dx,\ \ \text{as}\,\ n\to\infty, \tag{5.5}\]

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx\to\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx,\ \ \text{as}\,\ n\to\infty. \tag{5.6}\]

Proof.: By a proof similar to that of Lemma 4.1, (5.2)-(5.3) hold. We adopt some ideas of Lemma 2.4 in [3] to prove (5.4) and (5.5). Let \(\Omega\) be any compact subset of \(\mathbb{R}^{4}\), \(\Omega^{\prime}\subset\subset\Omega\) and \(\phi\in C^{\infty}_{0}(\Omega)\) such that \(0\leq\phi\leq 1\) and \(\phi=1\) in \(\Omega^{\prime}\); then, taking \(\phi\) as a test function in the first equation of (1.6) and using the Hölder inequality, we get

\[\int_{\Omega^{\prime}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})dx\leq\int_{\Omega}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\phi dx\leq C\|u_{n}\|\|\phi\|\leq C,\]

which implies that \(\omega_{n}:=(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\) is bounded in \(L^{1}(\Omega)\); up to a subsequence, \(\omega_{n}\to\omega\) in the weak*-topology as \(n\to\infty\), where \(\omega\) denotes a Radon measure. So for any \(\varphi\in C^{\infty}_{0}(\Omega)\), we get

\[\lim_{n\to\infty}\int_{\Omega}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx=\int_{\Omega}\varphi d\omega.\]

Now recall that \(\{(u_{n},v_{n})\}\subset\mathcal{S}\) is a \((PS)_{E(a,b)}\) sequence of \(\mathcal{J}\); hence, for any \(\varphi\in C^{\infty}_{0}(\Omega)\),

\[\lim_{n\to\infty}\int_{\mathbb{R}^{4}}\Delta u_{n}\Delta\varphi-\lambda_{1}u_{n}\varphi dx=\int_{\Omega}\varphi d\omega,\]

which implies that \(\omega\) is absolutely continuous with respect to the Lebesgue measure.
Then, by the Radon-Nikodym theorem, there exists a function \(g\in L^{1}(\Omega)\) such that for any \(\varphi\in C^{\infty}_{0}(\Omega)\),

\[\int_{\Omega}\varphi d\omega=\int_{\Omega}\varphi gdx.\]

Since the above holds for every compact set \(\Omega\subset\mathbb{R}^{4}\), there exists a function \(g\in L^{1}_{loc}(\mathbb{R}^{4})\) such that for any \(\varphi\in C^{\infty}_{0}(\mathbb{R}^{4})\),

\[\lim_{n\to\infty}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx=\int_{\mathbb{R}^{4}}\varphi d\omega=\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F_{u}(u,v)\varphi dx.\]

So (5.4) holds. Similarly, we can prove (5.5). Next, adopting some ideas of Lemma 3.3 in [5], we prove (5.6). By \((F_{2})\), (2.2), (5.1) and Fatou's lemma, we have

\[\Big{|}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx\Big{|}\]
\[\leq\Big{|}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))(F(u_{n},v_{n})-F(u,v))dx\Big{|}+\Big{|}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))(F(u_{n},v_{n})-F(u,v))dx\Big{|}\]
\[\leq\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*(F(u_{n},v_{n})-F(u,v)))(F(u_{n},v_{n})-F(u,v))dx\Big{)}^{\frac{1}{2}}\]
\[\quad+\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u,v))F(u,v)dx\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*(F(u_{n},v_{n})-F(u,v)))(F(u_{n},v_{n})-F(u,v))dx\Big{)}^{\frac{1}{2}}\]
\[\leq C\Big{(}\int_{\mathbb{R}^{4}}(I_{\mu}*(F(u_{n},v_{n})-F(u,v)))(F(u_{n},v_{n})-F(u,v))dx\Big{)}^{\frac{1}{2}}.\]

Now we only need to prove that

\[\lim_{n\to\infty}\int_{\mathbb{R}^{4}}(I_{\mu}*(F(u_{n},v_{n})-F(u,v)))(F(u_{n},v_{n})-F(u,v))dx=0.\]

For a fixed \(S>0\), denote

\[A=\Big{\{}x\in\mathbb{R}^{4}:|u_{n}(x)|\leq S\,\,\,\text{and}\,\,\,|v_{n}(x)|\leq S\Big{\}},\quad B=\Big{\{}x\in\mathbb{R}^{4}:|u(x)|\leq S\,\,\,\text{and}\,\,\,|v(x)|\leq S\Big{\}},\]
\[C=\Big{\{}x\in\mathbb{R}^{4}:|u_{n}(x)|\geq S\,\,\,\text{or}\,\,\,|v_{n}(x)|\geq S\Big{\}},\quad D=\Big{\{}x\in\mathbb{R}^{4}:|u(x)|\geq S\,\,\,\text{or}\,\,\,|v(x)|\geq S\Big{\}}.\]

Then

\[\int_{\mathbb{R}^{4}}(I_{\mu}*(F(u_{n},v_{n})-F(u,v)))(F(u_{n},v_{n})-F(u,v))dx\]
\[\leq \int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{|F(u_{n},v_{n})\chi_{A}(y)-F(u,v)\chi_{B}(y)||F(u_{n},v_{n})\chi_{A}(x)-F(u,v)\chi_{B}(x)|}{|x-y|^{\mu}}dxdy\]
\[\quad+2\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{(F(u_{n},v_{n})\chi_{A}(y)+F(u,v)\chi_{B}(y)+F(u,v)\chi_{D}(y))F(u_{n},v_{n})\chi_{C}(x)}{|x-y|^{\mu}}dxdy\]
\[\quad+2\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{(F(u_{n},v_{n})\chi_{A}(y)+F(u,v)\chi_{B}(y))F(u,v)\chi_{D}(x)}{|x-y|^{\mu}}dxdy\]
\[\quad+\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{F(u_{n},v_{n})\chi_{C}(y)F(u_{n},v_{n})\chi_{C}(x)}{|x-y|^{\mu}}dxdy+\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{F(u,v)\chi_{D}(y)F(u,v)\chi_{D}(x)}{|x-y|^{\mu}}dxdy\]
\[:= I_{1}+I_{2}+I_{3}+I_{4}+I_{5}.\]

From (2.1), we obtain \(I_{j}=o(1)\) for \(j=2,\cdots,5\) when \(S\) is large enough.
Moreover,

\[I_{1}=\int_{\mathbb{R}^{4}}\int_{\mathbb{R}^{4}}\frac{|F(u_{n},v_{n})\chi_{A}(y)-F(u,v)\chi_{B}(y)||F(u_{n},v_{n})\chi_{A}(x)-F(u,v)\chi_{B}(x)|}{|x-y|^{\mu}}dxdy,\]

and, arguing as in [5, Lemma 3.3], one obtains \(I_{1}=o(1)\) as \(n\to\infty\), which yields (5.6).

\[\frac{8\alpha t\|\Delta(u,v)\|_{2}^{2}}{8-\mu}\leq 32\pi^{2},\]

with a proof similar to that of Lemma 4.4, we complete the proof.

**Lemma 5.5**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), then we have_

\[\inf_{(u,v)\in\mathcal{P}(a,b)}(\|\Delta u\|_{2}^{2}+\|\Delta v\|_{2}^{2})>0\quad\text{and}\quad E(a,b)>0.\]

Proof.: Using Lemma 5.4, an argument similar to that of Lemma 4.5 completes the proof.

Using Lemma 5.3 and following the arguments of Lemmas 4.7-4.9, we can prove that the following lemmas hold.

**Lemma 5.6**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), then the functions \(a\mapsto E(a,b)\) and \(b\mapsto E(a,b)\) are non-increasing._

**Lemma 5.7**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), suppose that (1.6) possesses a ground state solution with \(\lambda_{1},\lambda_{2}<0\), then \(E(a^{\prime},b)<E(a,b)\) for any \(a^{\prime}>a\) close to \(a\) and \(E(a,b^{\prime})<E(a,b)\) for any \(b^{\prime}>b\) close to \(b\)._

**Lemma 5.8**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), suppose that (1.6) possesses a ground state solution with \(\lambda_{1},\lambda_{2}<0\), then \(E(a^{\prime},b)<E(a,b)\) for any \(a^{\prime}>a\) and \(E(a,b^{\prime})<E(a,b)\) for any \(b^{\prime}>b\)._

### The estimation for the upper bound of \(E(a,b)\)

In this subsection, we use the modified Adams functions introduced in [20] to estimate the upper bound of \(E(a,b)\), which is important for exponential critical growth problems. Fix \(\varphi\in C_{0}^{\infty}([0,\infty),[0,1])\) such that \(\varphi(t)=1\) if \(0\leq t\leq 1\) and \(\varphi(t)=0\) if \(t\geq 2\).
Define a sequence of functions \(\tilde{\omega}_{n}\) by

\[\tilde{\omega}_{n}(x)=\left\{\begin{array}{ll}\sqrt{\frac{\log n}{8\pi^{2}}}+\frac{1-n^{2}|x|^{2}}{\sqrt{32\pi^{2}\log n}},&\text{ for }\,|x|\leq\frac{1}{n},\\ -\frac{\log|x|}{\sqrt{8\pi^{2}\log n}},&\text{ for }\,\frac{1}{n}<|x|\leq 1,\\ -\frac{\varphi(|x|)\log|x|}{\sqrt{8\pi^{2}\log n}},&\text{ for }\,1<|x|<2,\\ 0,&\text{ for }\,|x|\geq 2.\end{array}\right.\]

One can check that \(\tilde{\omega}_{n}\in H^{2}(\mathbb{R}^{4})\). A straightforward calculation shows that

\[\|\tilde{\omega}_{n}\|_{2}^{2}=\frac{1+32M_{1}}{128\log n}-\frac{1}{96n^{4}}-\frac{1}{192n^{4}\log n}=\frac{1+32M_{1}}{128\log n}+o(\frac{1}{\log^{4}n}),\]

\[\|\nabla\tilde{\omega}_{n}\|_{2}^{2}=\frac{1+2M_{2}}{8\log n}-\frac{1}{12n^{2}\log n}=\frac{1+2M_{2}}{8\log n}+o(\frac{1}{\log^{3}n}),\]

and

\[\|\Delta\tilde{\omega}_{n}\|_{2}^{2}=1+\frac{4+M_{3}}{4\log n},\]

where

\[M_{1}=\int_{1}^{2}\varphi^{2}(r)r^{3}\log^{2}rdr,\]

\[M_{2}=\int_{1}^{2}\Big{(}\varphi^{\prime}(r)\log r+\frac{\varphi(r)}{r}\Big{)}^{2}r^{3}dr,\]

and

\[M_{3}=\int_{1}^{2}\Big{(}\varphi^{\prime\prime}(r)\log r+\frac{2\varphi(r)+2r\varphi^{\prime}(r)+3r\varphi^{\prime}(r)\log r}{r^{2}}\Big{)}^{2}r^{3}dr.\]

For any \(c>0\), let \(\omega_{n}^{c}=\frac{c\tilde{\omega}_{n}}{\|\tilde{\omega}_{n}\|_{2}}\). Then \(\omega_{n}^{c}\in S(c)\) and

\[\|\nabla\omega_{n}^{c}\|_{2}^{2}=\frac{c^{2}(\frac{1+2M_{2}}{8\log n}+o(\frac{1}{\log^{3}n}))}{\frac{1+32M_{1}}{128\log n}+o(\frac{1}{\log^{4}n})}=\frac{16c^{2}(1+2M_{2})}{1+32M_{1}}\Big{(}1+o(\frac{1}{\log^{2}n})\Big{)}, \tag{5.7}\]

\[\|\Delta\omega_{n}^{c}\|_{2}^{2}=\frac{c^{2}(1+\frac{4+M_{3}}{4\log n})}{\frac{1+32M_{1}}{128\log n}+o(\frac{1}{\log^{4}n})}=\frac{128c^{2}}{1+32M_{1}}\Big{(}\frac{4+M_{3}}{4}+\log n+o(\frac{1}{\log^{2}n})\Big{)}. \tag{5.8}\]

Furthermore, we have

\[\omega_{n}^{c}(x)=\left\{\begin{array}{ll}\frac{c(1+o(\frac{1}{\log^{3}n}))}{\sqrt{\frac{1+32M_{1}}{128}}}\Big{(}\frac{\log n}{\sqrt{8\pi^{2}}}+\frac{1-n^{2}|x|^{2}}{\sqrt{32\pi^{2}}}\Big{)},&\mbox{for}\;\;|x|\leq\frac{1}{n},\\ -\frac{c(1+o(\frac{1}{\log^{3}n}))}{\sqrt{\frac{1+32M_{1}}{128}}}\frac{\log|x|}{\sqrt{8\pi^{2}}},&\mbox{for}\;\;\frac{1}{n}<|x|\leq 1,\\ -\frac{c(1+o(\frac{1}{\log^{3}n}))}{\sqrt{\frac{1+32M_{1}}{128}}}\frac{\varphi(|x|)\log|x|}{\sqrt{8\pi^{2}}},&\mbox{for}\;\;1<|x|<2,\\ 0,&\mbox{for}\;\;|x|\geq 2.\end{array}\right.
\tag{5.9}\]

For any \(t>0\), let

\[g_{n}(t):=\mathcal{J}(t\omega_{n}^{a}(t^{\frac{1}{2}}x),t\omega_{n}^{b}(t^{\frac{1}{2}}x))= \frac{t^{2}}{2}\int_{\mathbb{R}^{4}}(|\Delta\omega_{n}^{a}|^{2}+|\Delta\omega_{n}^{b}|^{2})dx-\frac{t^{\frac{\mu}{2}-4}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t\omega_{n}^{a},t\omega_{n}^{b}))F(t\omega_{n}^{a},t\omega_{n}^{b})dx.\]

By Lemmas 5.3 and 5.5, we know \(E(a,b)=\inf_{(u,v)\in\mathcal{S}}\max_{s\in\mathbb{R}}\mathcal{J}(\mathcal{H}((u,v),s))>0\); this together with \((\omega_{n}^{a},\omega_{n}^{b})\in\mathcal{S}\) yields that

\[0<E(a,b)\leq\max_{s\in\mathbb{R}}\mathcal{J}(\mathcal{H}((\omega_{n}^{a},\omega_{n}^{b}),s))=\max_{t>0}g_{n}(t).\]

**Lemma 5.9**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), then for any fixed \(n\in\mathbb{N}^{+}\), \(\max_{t>0}g_{n}(t)\) is attained at some \(t_{n}>0\)._

Proof.: For any fixed \(n\in\mathbb{N}^{+}\) and \(t>0\) small, fix \(\alpha>32\pi^{2}\) close to \(32\pi^{2}\) and \(m>1\) close to \(1\) such that

\[\frac{8\alpha m\|\Delta(t\omega_{n}^{a},t\omega_{n}^{b})\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}.\]

Arguing as in (4.8), for \(m^{\prime}=\frac{m}{m-1}\), we have

\[\frac{t^{\frac{\mu}{2}-4}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t\omega_{n}^{a},t\omega_{n}^{b}))F(t\omega_{n}^{a},t\omega_{n}^{b})dx\]
\[\leq Ct^{\frac{\mu}{2}-4}\Big{(}\|t\omega_{n}^{a}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}+\|t\omega_{n}^{b}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}+Ct^{\frac{\mu}{2}-4}\Big{(}\|t\omega_{n}^{a}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}+\|t\omega_{n}^{b}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{4m^{\prime}}}\]
\[=Ct^{2\tau-2+\frac{\mu}{2}}\Big{(}\|\omega_{n}^{a}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}+\|\omega_{n}^{b}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{4}}+Ct^{2q-2+\frac{\mu}{2}}\Big{(}\|\omega_{n}^{a}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}+\|\omega_{n}^{b}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{4m^{\prime}}},\]

where \(\tau>2-\frac{\mu}{4}\) and \(q>2\). So \(g_{n}(t)>0\) for \(t>0\) small enough. By (4.9), we obtain

\[\frac{t^{\frac{\mu}{2}-4}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t\omega_{n}^{a},t\omega_{n}^{b}))F(t\omega_{n}^{a},t\omega_{n}^{b})dx\geq\frac{t^{2\theta+\frac{\mu}{2}-4}}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(\omega_{n}^{a},\omega_{n}^{b}))F(\omega_{n}^{a},\omega_{n}^{b})dx.\]

Since \(\theta>3-\frac{\mu}{4}\), we have that \(g_{n}(t)<0\) for \(t>0\) large enough. Thus \(\max\limits_{t>0}g_{n}(t)\) is attained at some \(t_{n}>0\).

**Lemma 5.10**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{6})\) and \(F_{z_{j}}(j=1,2)\) has exponential critical growth at \(\infty\), then \(\max\limits_{t>0}g_{n}(t)<\frac{8-\mu}{16}\) for \(n\in\mathbb{N}^{+}\) large enough._

Proof.: By Lemma 5.9, \(\max\limits_{t>0}g_{n}(t)\) is attained at some \(t_{n}>0\) and thus \(g_{n}^{\prime}(t_{n})=0\).
By \((F_{2})\),

\[t_{n}^{2}\int_{\mathbb{R}^{4}}(|\Delta\omega_{n}^{a}|^{2}+|\Delta\omega_{n}^{b}|^{2})dx=\frac{\mu-8}{4}t_{n}^{\frac{\mu}{2}-4}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b}))F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})dx\]
\[+t_{n}^{\frac{\mu}{2}-4}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b}))(\frac{\partial F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})}{\partial(t_{n}\omega_{n}^{a})}t_{n}\omega_{n}^{a}+\frac{\partial F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})}{\partial(t_{n}\omega_{n}^{b})}t_{n}\omega_{n}^{b})dx\]
\[\geq \frac{4\theta+\mu-8}{4\theta}t_{n}^{\frac{\mu}{2}-4}\int_{\mathbb{R}^{4}}(I_{\mu}*F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b}))(\frac{\partial F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})}{\partial(t_{n}\omega_{n}^{a})}t_{n}\omega_{n}^{a}+\frac{\partial F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})}{\partial(t_{n}\omega_{n}^{b})}t_{n}\omega_{n}^{b})dx. \tag{5.10}\]

By \((F_{6})\), for any \(\varepsilon>0\), there exists \(R_{\varepsilon}>0\) such that for any \(|z_{1}|,|z_{2}|\geq R_{\varepsilon}\),

\[F(z)[z\cdot\nabla F(z)]\geq(\varrho-\varepsilon)e^{64\pi^{2}|z|^{2}}. \tag{5.11}\]

**Case 1:** \(\lim\limits_{n\to\infty}(t_{n}^{2}\log n)=0\). Then \(\lim\limits_{n\to\infty}t_{n}=0\); by (5.7) and (5.8), we have that

\[\frac{t_{n}^{2}}{2}\int_{\mathbb{R}^{4}}(|\Delta\omega_{n}^{a}|^{2}+|\Delta\omega_{n}^{b}|^{2})dx\to 0,\text{ as }n\to\infty.\]

Noting that \(F(t_{n}\omega_{n}^{a},t_{n}\omega_{n}^{b})>0\) by \((F_{2})\), we have

\[0<g_{n}(t_{n})\leq\frac{t_{n}^{2}}{2}\int_{\mathbb{R}^{4}}(|\Delta\omega_{n}^{a}|^{2}+|\Delta\omega_{n}^{b}|^{2})dx,\]

thus \(\lim\limits_{n\to\infty}g_{n}(t_{n})=0\), and we conclude.

**Case 2:** \(\lim\limits_{n\to\infty}(t_{n}^{2}\log n)=l\in(0,\infty]\). We claim that \(l<\infty\). Otherwise, if \(l=\infty\), then \(\lim\limits_{n\to\infty}(t_{n}\log n)=\infty\).
By (5.7)-(5.9) and (5.10)-(5.11), we have

\[\frac{128(a^{2}+b^{2})t_{n}^{2}}{1+32M_{1}}\Big{(}\frac{4+M_{3}}{4}+\log n+o(\frac{1}{\log^{2}n})\Big{)}\]
\[\geq \frac{4\theta+\mu-8}{4\theta}t_{n}^{\frac{\mu}{2}-4}\int_{B_{\frac{1}{n}}(0)}\int_{B_{\frac{1}{n}}(0)}\frac{F(t_{n}\omega_{n}^{a}(y),t_{n}\omega_{n}^{b}(y))(\frac{\partial F(t_{n}\omega_{n}^{a}(x),t_{n}\omega_{n}^{b}(x))}{\partial(t_{n}\omega_{n}^{a}(x))}t_{n}\omega_{n}^{a}(x)+\frac{\partial F(t_{n}\omega_{n}^{a}(x),t_{n}\omega_{n}^{b}(x))}{\partial(t_{n}\omega_{n}^{b}(x))}t_{n}\omega_{n}^{b}(x))}{|x-y|^{\mu}}dxdy\]
\[\geq \frac{(4\theta+\mu-8)(\varrho-\varepsilon)^{2}}{4\theta}t_{n}^{\frac{\mu}{2}-4}e^{\frac{1024(a^{2}+b^{2})t_{n}^{2}\log^{2}n(1+o(\frac{1}{\log^{3}n}))}{1+32M_{1}}}\int_{B_{\frac{1}{n}}(0)}\int_{B_{\frac{1}{n}}(0)}\frac{dxdy}{|x-y|^{\mu}}.\]

Since \(B_{\frac{1}{n}-|x|}(0)\subset B_{\frac{1}{n}}(x)\) for any \(|x|\leq\frac{1}{n}\), the last integral can be estimated as follows:

\[\int_{B_{\frac{1}{n}}(0)}\int_{B_{\frac{1}{n}}(0)}\frac{dxdy}{|x-y|^{\mu}} =\int_{B_{\frac{1}{n}}(0)}dx\int_{B_{\frac{1}{n}}(x)}\frac{dz}{|z|^{\mu}}\]
\[\geq\int_{B_{\frac{1}{n}}(0)}dx\int_{B_{\frac{1}{n}-|x|}(0)}\frac{dz}{|z|^{\mu}}\]
\[=\frac{2\pi^{2}}{4-\mu}\int_{B_{\frac{1}{n}}(0)}\Big{(}\frac{1}{n}-|x|\Big{)}^{4-\mu}dx\]
\[=\frac{4\pi^{4}}{4-\mu}\int_{0}^{\frac{1}{n}}\Big{(}\frac{1}{n}-r\Big{)}^{4-\mu}r^{3}dr\]
\[=\frac{24\pi^{4}}{(4-\mu)(5-\mu)(6-\mu)(7-\mu)(8-\mu)n^{8-\mu}}=\frac{C_{\mu}}{n^{8-\mu}},\]

where

\[C_{\mu}=\frac{24\pi^{4}}{(4-\mu)(5-\mu)(6-\mu)(7-\mu)(8-\mu)}.\]

Consequently, we obtain

\[\frac{128(a^{2}+b^{2})t_{n}^{2}}{1+32M_{1}}\Big{(}\frac{4+M_{3}}{4}+\log n+o(\frac{1}{\log^{2}n})\Big{)}\]
\[\geq \frac{(4\theta+\mu-8)C_{\mu}(\varrho-\varepsilon)^{2}}{4\theta}t_{n}^{\frac{\mu}{2}-4}e^{\Big{(}\frac{1024(a^{2}+b^{2})t_{n}^{2}\log n(1+o(\frac{1}{\log^{3}n}))}{1+32M_{1}}-(8-\mu)\Big{)}\log n}, \tag{5.12}\]

which is a contradiction when \(l=\infty\). Thus \(l\in(0,\infty)\), and \(\lim_{n\to\infty}t_{n}=0\), \(\lim_{n\to\infty}(t_{n}\log n)=\infty\). By (5.12), letting \(n\to\infty\), we have that

\[0<l<\frac{(1+32M_{1})(8-\mu)}{1024(a^{2}+b^{2})}.\]

Indeed, if \(l>\frac{(1+32M_{1})(8-\mu)}{1024(a^{2}+b^{2})}\), the right side of (5.12) approaches infinity as \(n\to\infty\), which is impossible. If \(l=\frac{(1+32M_{1})(8-\mu)}{1024(a^{2}+b^{2})}\), by the definition of \(\omega_{n}^{a}\) and \(\omega_{n}^{b}\), we can find that

\[A_{n}:=\frac{1024(a^{2}+b^{2})t_{n}^{2}\log n(1+o(\frac{1}{\log^{3}n}))}{1+32M_{1}}-(8-\mu)\to 0^{+},\quad\text{as $n\to\infty$},\]

and using Taylor's formula, we have

\[n^{A_{n}}=1+A_{n}\log n+\frac{A_{n}^{2}\log^{2}n}{2}+\cdots\geq 1.\]

Thus

\[\frac{128(a^{2}+b^{2})t_{n}^{2}\log n}{1+32M_{1}}\geq\frac{(4\theta+\mu-8)C_{\mu}(\varrho-\varepsilon)^{2}}{4\theta}t_{n}^{\frac{\mu}{2}-4},\]

which is a contradiction. Therefore,

\[\lim_{n\to\infty}g_{n}(t_{n})\leq \lim_{n\to\infty}\frac{t_{n}^{2}}{2}\int_{\mathbb{R}^{4}}(|\Delta\omega_{n}^{a}|^{2}+|\Delta\omega_{n}^{b}|^{2})dx\]
\[= \frac{64(a^{2}+b^{2})l}{1+32M_{1}}<\frac{8-\mu}{16}.\]

This ends the proof.

### Palais-Smale sequence

Similar to the arguments in Section 4.2, we apply Lemma 2.6 to construct a \((PS)_{E(a,b)}\) sequence on \(\mathcal{P}(a,b)\) for \(\mathcal{J}\). Here, we only state the conclusions without proof.
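Before stating them, we record a quick symbolic sanity check (an aside, not part of the paper) of two radial computations from the previous subsection: the \(|x|\leq 1\) portion of the expansion of \(\|\tilde{\omega}_{n}\|_{2}^{2}\), and the constant \(C_{\mu}\) from the proof of Lemma 5.10, spot-checked at the sample value \(\mu=2\). The sketch writes \(L\) for \(\log n\) and treats it as a free positive parameter:

```python
# Symbolic sanity checks (not part of the paper) for two radial integrals of
# this section.  For radial functions on R^4, dx = 2*pi^2*r^3 dr.  We set
# L = log n and N = exp(L) (= n) so boundary terms stay consistent.
import sympy as sp

r, L = sp.symbols('r L', positive=True)
N = sp.exp(L)   # N plays the role of n, so 1/n = exp(-L)

# (i) the |x| <= 1 part of ||tilde{omega}_n||_2^2; the claimed expansion is
#     1/(128 L) - 1/(96 N^4) - 1/(192 N^4 L)  (the M_1-term lives on 1 < |x| < 2)
inner  = sp.sqrt(L/(8*sp.pi**2)) + (1 - N**2*r**2)/sp.sqrt(32*sp.pi**2*L)
middle = -sp.log(r)/sp.sqrt(8*sp.pi**2*L)
I = 2*sp.pi**2*(sp.integrate(inner**2*r**3,  (r, 0, 1/N))
              + sp.integrate(middle**2*r**3, (r, 1/N, 1)))
print(sp.simplify(sp.expand(I - (1/(128*L) - 1/(96*N**4) - 1/(192*N**4*L)))))  # -> 0

# (ii) spot-check of C_mu at mu = 2:
#      (4*pi^4/(4-mu)) * int_0^{1/N} (1/N - r)^{4-mu} r^3 dr  =  C_mu / N^{8-mu}
mu = 2
C_mu = 24*sp.pi**4/((4-mu)*(5-mu)*(6-mu)*(7-mu)*(8-mu))
J = sp.integrate((1/N - r)**(4-mu)*r**3, (r, 0, 1/N))
print(sp.simplify(4*sp.pi**4/(4-mu)*J - C_mu/N**(8-mu)))                       # -> 0
```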
**Lemma 5.11**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\), then the functional \(\mathcal{I}(u,v)=\mathcal{J}(\mathcal{H}((u,v),s_{(u,v)}))\) is of class \(C^{1}\) and_

\[\langle\mathcal{I}^{\prime}(u,v),(\varphi,\psi)\rangle=\langle\mathcal{J}^{\prime}(\mathcal{H}((u,v),s_{(u,v)})),\mathcal{H}((\varphi,\psi),s_{(u,v)})\rangle\]

_for any \((u,v)\in\mathcal{S}\) and \((\varphi,\psi)\in T_{(u,v)}\mathcal{S}\)._

**Lemma 5.12**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\). Let \(\mathcal{G}\) be a homotopy stable family of compact subsets of \(\mathcal{S}\) without boundary (i.e., \(B=\emptyset\)) and set_

\[e_{\mathcal{G}}:=\inf_{A\in\mathcal{G}}\max_{(u,v)\in A}\mathcal{I}(u,v).\]

_If \(e_{\mathcal{G}}>0\), then there exists a \((PS)_{e_{\mathcal{G}}}\) sequence \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) for \(\mathcal{J}\)._

**Proposition 5.1**.: _Assume that \(F\) satisfies \((F_{1})\), \((F_{2})\), \((F_{5})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\), then there exists a \((PS)_{E(a,b)}\) sequence \(\{(u_{n},v_{n})\}\subset\mathcal{P}(a,b)\) for \(\mathcal{J}\)._

For the sequence \(\{(u_{n},v_{n})\}\) obtained in Proposition 5.1, by \((F_{2})\), we know that \(\{(u_{n},v_{n})\}\) is bounded in \(\mathcal{X}\); up to a subsequence, we assume that \((u_{n},v_{n})\rightharpoonup(u_{a},v_{b})\) in \(\mathcal{X}\). Furthermore, by \(\mathcal{J}^{\prime}|_{\mathcal{S}}(u_{n},v_{n})\to 0\) as \(n\to\infty\) and the Lagrange multiplier rule, there exist two sequences \(\{\lambda_{1,n}\},\{\lambda_{2,n}\}\subset\mathbb{R}\) such that

\[\int_{\mathbb{R}^{4}}(\Delta u_{n}\Delta\varphi+\Delta v_{n}\Delta\psi)dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{u_{n}}(u_{n},v_{n})\varphi dx-\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F_{v_{n}}(u_{n},v_{n})\psi dx\]
\[= \int_{\mathbb{R}^{4}}(\lambda_{1,n}u_{n}\varphi+\lambda_{2,n}v_{n}\psi)dx+o_{n}(1)\|(\varphi,\psi)\|, \tag{5.13}\]

for every \((\varphi,\psi)\in\mathcal{X}\).

**Lemma 5.13**.: _Assume that \(F\) satisfies \((F_{1})-(F_{3})\), \((F_{5})\), \((F_{6})\) and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\), then up to a subsequence and up to translations in \(\mathbb{R}^{4}\), \(u_{a}\neq 0\) and \(v_{b}\neq 0\)._

Proof.: If \((u_{a},v_{b})=(0,0)\), by \(\mathcal{J}(u_{n},v_{n})=E(a,b)+o_{n}(1)\), \(P(u_{n},v_{n})=0\) and \((F_{2})\), we get

\[\mathcal{J}(u_{n},v_{n})-\frac{1}{2}P(u_{n},v_{n}) =\frac{1}{2}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))\Big{[}(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})-(3-\frac{\mu}{4})F(u_{n},v_{n})\Big{]}dx\]
\[\geq\frac{4\theta+\mu-12}{8}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx.\]

By \(\theta>3-\frac{\mu}{4}\), we have

\[\limsup_{n\to\infty}\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx\leq\frac{8E(a,b)}{4\theta+\mu-12}. \tag{5.14}\]

This with \(P(u_{n},v_{n})=0\) and the boundedness of \(\{(u_{n},v_{n})\}\) implies that, up to a subsequence, there exists a constant \(L_{0}>0\) such that

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx\leq L_{0}.
\tag{5.15}\]

From Lemma 5.1, we can see

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))F(u_{n},v_{n})dx=o_{n}(1).\]

Hence, by Lemma 5.10, we have

\[\limsup_{n\to\infty}(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2})\leq 2E(a,b)<\frac{8-\mu}{8}.\]

Up to a subsequence, we assume that \(\sup_{n\in\mathbb{N}^{+}}(\|\Delta u_{n}\|_{2}^{2}+\|\Delta v_{n}\|_{2}^{2})<\frac{8-\mu}{8}\). Using (2.1) again, we have

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx\leq C\|F(u_{n},v_{n})\|_{\frac{8}{8-\mu}}\|(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})\|_{\frac{8}{8-\mu}}.\]

Fix \(\alpha>32\pi^{2}\) close to \(32\pi^{2}\) and \(m>1\) close to \(1\) such that

\[\sup_{n\in\mathbb{N}^{+}}\frac{8\alpha m\|\Delta(u_{n},v_{n})\|_{2}^{2}}{8-\mu}\leq 32\pi^{2}.\]

Arguing as in (4.8), for \(m^{\prime}=\frac{m}{m-1}\), we have

\[\|F(u_{n},v_{n})\|_{\frac{8}{8-\mu}}\leq C\Big{(}\|u_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}+\|v_{n}\|_{\frac{8(\tau+1)}{8-\mu}}^{\frac{8(\tau+1)}{8-\mu}}\Big{)}^{\frac{8-\mu}{8}}+C\Big{(}\|u_{n}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}+\|v_{n}\|_{\frac{8(q+1)m^{\prime}}{8-\mu}}^{\frac{8(q+1)m^{\prime}}{8-\mu}}\Big{)}^{\frac{8-\mu}{8m^{\prime}}}\to 0,\quad\text{as }n\to\infty.\]

Similarly, we can prove that

\[\|(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})\|_{\frac{8}{8-\mu}}\to 0,\quad\text{as }n\to\infty.\]

Therefore, we get

\[\int_{\mathbb{R}^{4}}(I_{\mu}*F(u_{n},v_{n}))[(u_{n},v_{n})\cdot\nabla F(u_{n},v_{n})]dx=o_{n}(1).\]

Recalling that \(P(u_{n},v_{n})=0\), we get \(\|\Delta u_{n}\|_{2}+\|\Delta v_{n}\|_{2}=o_{n}(1)\), which implies \(E(a,b)=0\); this is impossible, since \(E(a,b)>0\). Hence \((u_{a},v_{b})\neq(0,0)\). From (5.13) and Lemma 5.1, we can see that \((u_{a},v_{b})\) is a weak solution of (1.2) with \(N=4\). Assume that \(u_{a}=0\), then by \((F_{3})\), we know that \(v_{b}=0\). Similarly, \(v_{b}=0\) implies \(u_{a}=0\). This is impossible, since \((u_{a},v_{b})\neq(0,0)\).

**Lemma 5.14**.: _Assume that \(F\) satisfies \((F_{1})-(F_{6})\), and \(F_{z_{j}}\)(\(j=1,2\)) has exponential critical growth at \(\infty\), then \(\{\lambda_{1,n}\}\) and \(\{\lambda_{2,n}\}\) are bounded in \(\mathbb{R}\). Furthermore, up to a subsequence, \(\lambda_{1,n}\to\lambda_{1}<0\) and \(\lambda_{2,n}\to\lambda_{2}<0\) in \(\mathbb{R}\) as \(n\to\infty\)._

Proof.: The proof is similar to that of Lemma 4.14, so we omit it.

### Proof of Theorem 1.2

Proof of Theorem 1.2.: Using (5.15) and Lemmas 5.1, 5.6, 5.8, 5.13, 5.14, we can proceed exactly as in the proof of Theorem 1.1.

### Acknowledgments

The authors were supported by National Natural Science Foundation of China 11971392.
2308.03109
Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code
Code translation aims to convert source code from one programming language (PL) to another. Given the promising abilities of large language models (LLMs) in code synthesis, researchers are exploring their potential to automate code translation. The prerequisite for advancing the state of LLM-based code translation is to understand their promises and limitations over existing techniques. To that end, we present a large-scale empirical study to investigate the ability of general LLMs and code LLMs for code translation across pairs of different languages, including C, C++, Go, Java, and Python. Our study, which involves the translation of 1,700 code samples from three benchmarks and two real-world projects, reveals that LLMs are yet to be reliably used to automate code translation -- with correct translations ranging from 2.1% to 47.3% for the studied LLMs. Further manual investigation of unsuccessful translations identifies 15 categories of translation bugs. We also compare LLM-based code translation with traditional non-LLM-based approaches. Our analysis shows that these two classes of techniques have their own strengths and weaknesses. Finally, insights from our study suggest that providing more context to LLMs during translation can help them produce better results. To that end, we propose a prompt-crafting approach based on the symptoms of erroneous translations; this improves the performance of LLM-based code translation by 5.5% on average. Our study is the first of its kind, in terms of scale and breadth, that provides insights into the current limitations of LLMs in code translation and opportunities for improving them. Our dataset -- consisting of 1,700 code samples in five PLs with 10K+ tests, 43K+ translated code, 1,748 manually labeled bugs, and 1,365 bug-fix pairs -- can help drive research in this area.
Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar, Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh Sinha, Reyhaneh Jabbarvand
2023-08-06T13:33:13Z
http://arxiv.org/abs/2308.03109v3
# Understanding the Effectiveness of Large Language Models in Code Translation ###### Abstract Code translation aims to convert source code from one programming language (PL) to another. Given the promising abilities of large language models (LLMs) in code synthesis, researchers are actively exploring their potential to automate code translation, i.e., generating code in a target PL from its equivalent in another PL. The prerequisite for advancing the state of LLM-based code translation is to understand their limitations. To that end, we present a large-scale empirical study to investigate the ability of LLMs, including general LLMs and code LLMs, for code translation across pairs of different languages, including C, C++, Go, Java, and Python. Our analysis involves the translation of 1,700 code samples from three distinct benchmarks and real-world projects, revealing that LLMs are yet to be reliably used to automate code translation--with incorrect translations ranging from 52.7% to 97.9% across the studied LLMs. Further manual investigation of _unsuccessful_ translations among all PLs identifies 14 root causes for translation bugs. Based on the insights from the empirical study, we propose a prompt-crafting approach to provide additional context for LLMs, improving the performance of LLM-based code translation by 5.5% on average across different PLs, LLMs, and benchmarks. Our study is the first of its kind, in terms of its scale and breadth, that provides insights into the current limitations of LLMs in code translation and opportunities for improving them. Our collected extensive dataset--consisting of 1,700 code samples written in five PLs with 10K+ tests, 43K+ translated code, 1,725 manually labeled bugs, and 1,365 bug-fix pairs generated using LLMs--can help drive research in this area. Code Translation, Bug Study, Large Language Models, Prompt Engineering ## 1 Introduction Code translation, source-to-source compilation, or transpilation, entails transforming a piece of code from one programming language (PL) to another, while preserving the original functionality. Code translation has many use cases, such as modernizing enterprise applications [17], [14], [15], [16], migrating legacy software in proprietary PLs to cloud-native applications implemented in general-purpose PLs [18], [19], [20], [21], [22], [23], [24], and facilitating the training of models for better code synthesis [2], [10], [25], [26]. Translating the software/code to a modern and unified PL can significantly reduce software-maintenance effort, improve the overall reliability of software, and boost non-functional properties such as security and performance [27], [28], [29], [30], [4], [5], [6].
Due to the importance and benefits of code translation, several techniques have been proposed and deployed to automate reliable translation between different PLs [28], [29], [20], [21], [22], [23], [24], [25], [26], including those leveraging _Large Language Models_ (hereafter, LLMs) for code translation [18], [27], [28], [29], [30].
2301.08557
The linear stability of weakly charged and slowly rotating Kerr-Newman family of charged black holes
In this paper, we prove the linear stability of weakly charged and slowly rotating Kerr-Newman black holes under coupled gravitational and electromagnetic perturbations. We show that the solutions to the linearized Einstein-Maxwell equations decay at an inverse polynomial rate to a linearized Kerr-Newman solution plus a pure gauge term. This work builds on the framework developed by H\"{a}fner-Hintz-Vasy for the study of the Einstein vacuum equations. We work in the generalized wave map and Lorenz gauge. The proof involves the analysis of the resolvent of the Fourier transformed linearized Einstein-Maxwell operator on asymptotically flat spaces, which relies on recent advances in microlocal analysis and non-elliptic Fredholm theory developed by Vasy. The most delicate part of the proof is the description of the resolvent at low frequencies.
Lili He
2023-01-18T09:58:13Z
http://arxiv.org/abs/2301.08557v2
# The linear stability of weakly charged and slowly rotating Kerr-Newman family of charged black holes ###### Abstract. In this paper, we prove the linear stability of weakly charged and slowly rotating Kerr-Newman black holes under coupled gravitational and electromagnetic perturbations. We show that the solutions to the linearized Einstein-Maxwell equations decay at an inverse polynomial rate to a linearized Kerr-Newman solution plus a pure gauge term. This work builds on the framework developed in [57] for the study of the Einstein vacuum equations. We work in the generalized wave map and Lorenz gauge. The proof involves the analysis of the resolvent of the Fourier transformed linearized Einstein-Maxwell operator on asymptotically flat spaces, which relies on recent advances in microlocal analysis and non-elliptic Fredholm theory developed in [109]. The most delicate part of the proof is the description of the resolvent at low frequencies. ###### Contents * 1 Introduction * 1.1 Statement of the main result * 1.2 Related works * 1.3 Main ideas of the proof * 1.4 Outline * 1.5 List of notations * 1.6 Acknowledgements * 2 b- and scattering geometry * 2.1 b- and scattering vector bundles * 2.2 Radial compactification * 2.3 Function spaces * 3 Initial value problems formalism of Einstein-Maxwell equations * 3.1 Einstein-Maxwell equations * 3.2 Initial value problems for the nonlinear system * 3.3 Initial value problems for the linearized system * 4 Kerr-Newman black holes * 4.1 The Reissner-Nordström family * 4.2 The slowly rotating Kerr-Newman family * 4.3 Stationarity, vector bundles, and geometric operators * 4.4 Constructing gauged initial data * 5 Analysis of the linearized gauge-fixed Einstein-Maxwell operator * 5.1 Microlocal geometry and dynamics of Kerr-Newman spacetimes * 5.2 Uniform Fredholm estimates * 5.3 Description of the kernel of the wave type operators * 5.4 Semiclassical behavior of Kerr-Newman spacetimes * 5.5 High energy estimates * 5.6 Energy estimates * 6 Spherical harmonic decompositions * 7 Mode stability of the Reissner-Nordström spacetime * 7.1 Scalar type perturbations * 7.2 Vector type perturbations

black holes. It was discovered in [99], following the discovery of the Kerr solution [74]. The family of KN solutions \((g_{b},F_{b})\) to the Einstein-Maxwell equations is characterized by the parameters \(b=(\mathbf{m},\mathbf{a},\mathbf{Q}_{e},\mathbf{Q}_{m})\in\mathbb{R}^{6}\), where \(\mathbf{m}>0\), \(\mathbf{a}\in\mathbb{R}^{3}\), and \(\mathbf{Q}_{e},\mathbf{Q}_{m}\) are interpreted as the mass, angular momentum, electric charge and magnetic charge, respectively. The KN spacetime reduces to the Reissner-Nordström (RN) spacetime for \(\mathbf{a}=0\), and the RN spacetime reduces to the Schwarzschild spacetime when \(\mathbf{Q}_{e}=\mathbf{Q}_{m}=0\). ### Statement of the main result The _stability problem_ for black hole solutions concerns the long time behavior of solutions to the Einstein-Maxwell system with initial data close to known black hole solutions. The problem of stability of black hole solutions can be roughly divided into three formulations, each of increasing difficulty: mode stability of the linearized equations, linear stability, and full nonlinear stability. The problem of mode stability of the linearized equations involves separating the solutions into modes \(\psi(t,r,\theta,\phi)=e^{-i\omega t}e^{im\phi}R(r)S(\theta)\) and aiming at proving the lack of exponentially growing (in \(t\)) modes for all metric or curvature components.
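As a minimal illustration of the mode-stability formulation just described (an aside, not from the paper): the time behavior of a single mode is governed entirely by the sign of \(\operatorname{Im}\omega\), since \(|e^{-i\omega t}|=e^{(\operatorname{Im}\omega)t}\), so ruling out exponentially growing modes amounts to ruling out frequencies with \(\operatorname{Im}\omega>0\):

```python
# Minimal illustration (not from the paper): the modulus of the time factor of
# a mode exp(-i*omega*t) is exp(Im(omega)*t), so exponential growth in t occurs
# exactly when Im(omega) > 0.
import sympy as sp

t = sp.symbols('t', positive=True)
om_r, om_i = sp.symbols('omega_r omega_i', real=True)
mode = sp.exp(-sp.I*(om_r + sp.I*om_i)*t)
print(sp.simplify(mode*sp.conjugate(mode)))   # -> exp(2*omega_i*t), i.e. |mode|^2
```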
The proof of linear stability of the Einstein-Maxwell system means proving boundedness and decay for the solutions of the linearized Einstein-Maxwell equations. Finally, the nonlinear stability problem for the Einstein-Maxwell equations aims at showing that a small perturbation of a KN black hole converges to another member of the KN family. In order to understand nonlinear stability, one must first understand linear stability. In this paper, we consider the linear stability problem, i.e., the linearized Einstein-Maxwell equations under coupled gravitational and electromagnetic perturbations, for weakly charged and slowly rotating Kerr-Newman black holes, that is, the case \(|\mathbf{a}|+|\mathbf{Q}_{e}|+|\mathbf{Q}_{m}|\ll\mathbf{m}\). To describe our result, we first recall the initial value problem formalism of the Einstein-Maxwell equations (see §§3.2-3.3 for details). Given a \(5\)-tuple \((\Sigma_{0},h,k,\mathbf{E},\mathbf{H})\) where \(\Sigma_{0}\) is a smooth \(3\)-manifold, \(h\) is a Riemannian metric on \(\Sigma_{0}\), \(k\) is a symmetric \(2\)-tensor on \(\Sigma_{0}\), and \(\mathbf{E},\mathbf{H}\) are \(1\)-forms on \(\Sigma_{0}\), one seeks \((M,g,F)\) such that \((g,F)\) solve the Einstein-Maxwell system, \((h,k)\) are the metric and second fundamental form induced by \(g\), and \((\mathbf{E},\mathbf{H})\) are the electric and magnetic field induced by \((g,F)\) on \(\Sigma_{0}\). A necessary and sufficient condition for local solvability of the Einstein-Maxwell equations (see [18, 20] and [19, §6.10]) is that \((h,k,\mathbf{E},\mathbf{H})\) satisfy the constraint equations, which consist of the Gauss-Codazzi equations and the pull-back of the Maxwell equations to \(\Sigma_{0}\). Linearizing the initial value problem gives rise to the corresponding initial value problem of the linearized Einstein-Maxwell equations. We now state a first version of our linear stability result. For a more formal statement, see Theorem 14.2. **Theorem 1.1**.: _Let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q}_{e},\mathbf{Q}_{m})\) be the black hole parameters satisfying \(|\mathbf{a}|+|\mathbf{Q}_{e}|+|\mathbf{Q}_{m}|\ll\mathbf{m}\). Let \(0<\alpha<1\). Let \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) be an initial data set on a \(3\)-manifold \(\Sigma_{0}\) satisfying the linearized constraint equations, and having decay_

\[|\dot{h}|\lesssim r^{-1-\alpha},\quad|\dot{k}|,|\dot{\mathbf{E}}|,|\dot{\mathbf{H}}|\lesssim r^{-2-\alpha},\]

_and similar bounds after applying a few \(r\partial_{r},\partial_{\omega}\) derivatives. Let \((\dot{g},\dot{F})\) be a solution to the linearized Einstein-Maxwell equations attaining the initial data \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\). Then \((\dot{g},\dot{F})\) decays at an inverse polynomial rate in time \(t_{b,*}\) to a linearized Kerr-Newman solution plus a pure gauge solution.
That is, there exist linearized black hole parameters \(\dot{b}=(\dot{\mathbf{m}},\dot{\mathbf{a}},\dot{\mathbf{Q}}_{e},\dot{\mathbf{Q}}_{m})\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\times\mathbb{R}\) and a vector field \(V\) such that_

\[\dot{g}=\dot{g}_{b}(\dot{b})+\mathcal{L}_{V}g_{b}+\tilde{g},\quad\dot{F}=\dot{F}_{b}(\dot{b})+\mathcal{L}_{V}F_{b}+\tilde{F},\]

_where_

\[(\dot{g}_{b}(\dot{b}),\dot{F}_{b}(\dot{b})):=\frac{d}{ds}\Big{|}_{s=0}(g_{b+s\dot{b}},\ F_{b+s\dot{b}})\]

_is the linearized Kerr-Newman solution and the tail \((\tilde{g},\tilde{F})\) satisfies the bound_

\[|\tilde{g}|+|\tilde{F}|\lesssim t_{b,*}^{-\alpha-1+\epsilon},\quad\epsilon>0.\]

_Remark 1.2_.: Using the rotation invariance of the Maxwell equation, one can always arrange for the initial data to have vanishing magnetic charge, i.e., \(\mathbf{Q}_{m}=0\), and thus the electromagnetic \(2\)-form can be written as \(F=dA\) (see §3.1 for details). In this paper, we will restrict to the case where the magnetic charge is \(0\) and study Einstein-Maxwell equations of the following form

\[\mathrm{Ric}(g)_{\mu\nu}=2T_{\mu\nu}(g,dA),\quad\delta_{g}dA=0.\]

We also refer the readers to Remark 4.2 for the relation between the non-magnetically charged RN spacetime \((g_{b},F_{b}=dA_{b})\) and the magnetically charged RN spacetime \((g_{b,m},F_{b,m})\).

_Remark 1.3_.: The hypersurface \(\{t_{b,*}=c\}\) terminates at null infinity. We also note that for bounded \(r\), the tail \((\tilde{g},\tilde{F})\) satisfies the bound \(|\tilde{g}|+|\tilde{F}|\lesssim\mathfrak{t}^{-1-\alpha+\epsilon}\) for all \(\epsilon>0\), where \(\mathfrak{t}=t\) for large \(r\).

_Remark 1.4_.: By imposing the linearized generalized wave map gauge condition on \(\dot{g}\) and replacing \((\dot{g}_{b}(\dot{b}),\dot{F}_{b}(\dot{b}))\) by its gauge-fixed version, one can choose \(V\) such that \(V=V_{1}+V_{2}\) with \(V_{1}\) lying in a \(6\)-dimensional space (only depending on \(b\)) of smooth vector fields (asymptotic to a linear combination of spatial translations and boosts) on \(M\), and \(V_{2}\) decaying (in \(t_{b,*}\)) at the rate \(t_{b,*}^{-\alpha+\epsilon}\), \(\epsilon>0\).

For charged asymptotically flat black hole spacetimes (i.e., the cosmological constant \(\Lambda=0\)), the study of mode stability of Reissner-Nordström black holes was initiated by Moncrief [96, 97, 98] by considering the metric and electromagnetic \(4\)-potential perturbations. Chandrasekhar [16] and Chandrasekhar-Xanthopoulos [17] studied the perturbations of Reissner-Nordström black holes in the Newman-Penrose formalism. We will work with the formalism of Kodama-Ishibashi [80]. The linear stability of the full subextremal Reissner-Nordström spacetime was proved by Giorgi [49, 50]. As for the Kerr-Newman spacetime, the mode stability of wave equations on the Kerr-Newman metric was established in [22]. The Teukolsky and Regge-Wheeler equations governing the linear stability of the Kerr-Newman spacetime to coupled electromagnetic and gravitational perturbations were derived in [51]. The Carter tensor and the physical-space analysis in perturbations of Kerr-Newman spacetime were discussed in [48]. Dirac waves on Kerr-Newman spacetimes were considered by Finster-Kamran-Smoller-Yau [44]. We also mention the numerical works on the mode stability [33, 100] and nonlinear stability [122] of the Kerr-Newman spacetime. For the case of a positive cosmological constant \(\Lambda>0\), the nonlinear stability of slowly rotating Kerr-Newman-de Sitter black holes was proved by Hintz [60].
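To make the weakly charged regime concrete, here is an illustrative aside (the metric function below is the standard Reissner-Nordström normalization, which is not displayed in this excerpt): the horizons of the RN metric sit at \(r_{\pm}=\mathbf{m}\pm\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\), so the subextremal condition \(|\mathbf{Q}|<\mathbf{m}\) yields two horizons, and as \(\mathbf{Q}\to 0\) they tend to the Schwarzschild values \(2\mathbf{m}\) and \(0\); this is the picture behind perturbing off Schwarzschild for small charge.

```python
# Illustrative aside (assumption flagged: f(r) = 1 - 2m/r + Q^2/r^2 is the
# standard Reissner-Nordstrom metric function; it is not displayed in this
# excerpt).  Horizon radii and their Schwarzschild limit as Q -> 0:
import sympy as sp

r, m, Q = sp.symbols('r m Q', positive=True)
f = 1 - 2*m/r + Q**2/r**2
horizons = sp.solve(sp.Eq(f, 0), r)            # the two roots m +- sqrt(m**2 - Q**2)
print(horizons)
print([sp.limit(h, Q, 0) for h in horizons])   # the Schwarzschild limit: {0, 2*m}
```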
### Related works Besides the above references, we also mention the closely related results on the Einstein vacuum equations. The nonlinear stability of Minkowski spacetime was first proved by Christodoulou-Klainerman [21] and later an alternative proof was given by Lindblad-Rodnianski [81, 82, 83] using wave coordinates. For the case \(\Lambda>0\), see Friedrich [47] for the nonlinear stability of de Sitter spacetime. There is a large amount of literature on scalar waves on Schwarzschild and Kerr spacetimes. The study of scalar waves on Schwarzschild metrics was initiated by Wald [116] and Kay-Wald [73], where the boundedness of the scalar waves was established. Blue-Soffer [10] proved spacetime Morawetz type estimates and local decay for the solutions of semilinear wave equations on the Schwarzschild manifold, with refinements and extensions due to Blue-Sterbenz [13] and Blue-Soffer [11]. Dafermos-Rodnianski [28, 30] proved the boundedness and decay of scalar waves on the Schwarzschild background by quantifying the celebrated redshift effect. Marzuola-Metcalfe-Tataru-Tohaneanu [87] established local energy estimates, as well as Strichartz estimates, for scalar waves on Schwarzschild spacetimes. We also mention the work by Lindblad-Tohaneanu [84] on the scalar wave equations on perturbations of Schwarzschild metrics, as well as the work by Holzegel-Kauffman [65] on wave equations on Schwarzschild spacetimes with small non-decaying first order terms. The boundedness and decay of solutions of the scalar wave equations on Kerr spacetimes was studied in Dafermos-Rodnianski [26, 29], Tataru-Tohaneanu [107] and Andersson-Blue [4] for \(\mathbf{a}\ll\mathbf{m}\), as well as in Dafermos-Rodnianski-Shlapentokh-Rothman [31] for the full subextremal range \(|\mathbf{a}|<\mathbf{m}\). We also mention the work Lindblad-Tohaneanu [85] on the scalar wave equations on metrics close to slowly rotating Kerr spacetimes. As for non scalar fields on Schwarzschild and Kerr backgrounds, we refer to the work by Blue [12], Sterbenz-Tataru [106] and Andersson-Bäckdahl [1] on Maxwell's equation on Schwarzschild spacetimes. Maxwell's equation on Kerr spacetimes was discussed in Andersson-Blue [5] and Andersson-Bäckdahl-Blue [2]. Dirac waves on Kerr spacetimes were considered by Finster-Kamran-Smoller-Yau [43]. The study of mode stability of Schwarzschild black holes was initiated by Regge-Wheeler [102], followed by the work of Zerilli [121] and Vishveshwara [115] in metric perturbations. Bardeen-Press [9] discussed the perturbations of Schwarzschild black holes using the Newman-Penrose formalism. We also mention the work by Kodama-Ishibashi [79] on the mode stability of Schwarzschild black holes. Dafermos-Holzegel-Rodnianski [24] proved the linear stability of the Schwarzschild metric in double null gauge. Later, the linear stability of the Schwarzschild metric in (generalized) harmonic gauge was studied by Hung [69, 70], Hung-Keller-Wang [71] and Johnson [72]. For progress on the nonlinear stability, Klainerman-Szeftel [78] proved the nonlinear stability of the Schwarzschild metric under axially symmetric and polarized perturbations. Recently, Dafermos-Holzegel-Rodnianski-Taylor [25] proved the nonlinear stability of Schwarzschild spacetimes under a restrictive symmetry class, which excludes rotating Kerr solutions as final state of the evolution. Teukolsky [108] derived the equations governing the perturbations of Kerr black holes in the Newman-Penrose formalism.
Whiting [118] established the mode stability of the Teukolsky equation for Kerr spacetimes, followed by refinements and extensions by Shlapentokh-Rothman [104] and Andersson-Ma-Paganini-Whiting [7]. Dafermos-Holzegel-Rodnianski [23] proved boundedness and decay for the Teukolsky equation on Kerr spacetimes with \(|\mathbf{a}|\ll\mathbf{m}\). Andersson-Bäckdahl-Blue-Ma [3] proved the linear stability of the slowly rotating Kerr metric for initial data with strong decay using the Newman-Penrose formalism. Häfner-Hintz-Vasy [57] proved the linear stability of slowly rotating Kerr spacetimes using a wave map gauge. Recently, the nonlinear stability of slowly rotating Kerr spacetimes was proved in a series of works of Klainerman-Szeftel and Giorgi-Klainerman-Szeftel [52, 53, 75, 76, 77]. For the large \(\mathbf{a}\) case, Shlapentokh-Rothman-Teixeira da Costa [105] established boundedness and decay for the Teukolsky equation on Kerr spacetimes. Andersson-Häfner-Whiting [6] gave the mode analysis for the linearized equations on the subextremal Kerr metric. In the case of the Einstein equations with a positive cosmological constant, the decay (to constants) of the solutions of the scalar wave equations on the Schwarzschild-de Sitter background was discussed by Bony-Häfner [14], Dafermos-Rodnianski [27] and Melrose-Sá Barreto-Vasy [91]. Dyatlov [35, 36, 37] proved that the solutions of the scalar wave equations on Kerr-de Sitter spacetimes decay exponentially to constants. We also mention Mavrogiannis's works [88, 89, 90] on quasilinear wave equations on cosmological black hole backgrounds. The global nonlinear stability of slowly rotating Kerr-de Sitter spacetimes was proved by Hintz-Vasy [64], followed by the work of Fang [41, 42].

### Main ideas of the proof
The proof of Theorem 1.1 is based on the framework developed in [57], where the Einstein vacuum equations were studied. The presence of the electromagnetic term on the right hand side of the Einstein equations adds new difficulties to the analysis of the problem.
1. As in the study of the Einstein vacuum equations in [57], we can exploit the diffeomorphism invariance of the Einstein-Maxwell equations and impose a generalized wave map gauge. In addition to this, we also need to choose a gauge (generalized Lorenz gauge) for the electromagnetic 4-potential \(A\) in order to express the Einstein-Maxwell equations (1.1) as a system of nonlinear hyperbolic equations, which is defined as the gauge-fixed Einstein-Maxwell equations. Correspondingly, the linearization of the gauge-fixed Einstein-Maxwell system gives rise to a system of linear hyperbolic equations.
2. We need to establish the generalized mode stability for the _ungauged_ linearized Einstein-Maxwell system (3.29) around a RN solution: all generalized mode solutions of the _ungauged_ linearized Einstein-Maxwell system (3.29) are sums of linearized KN solutions and pure gauge solutions. In this work, we give a self-contained proof in full detail of this generalized mode stability, which completes and extends the proof by Kodama and Ishibashi [80].
3. The fact that the KN metric is not Ricci-flat creates the major difficulty in the mode analysis of the wave type operator \(\delta_{g_{b}}G_{g_{b}}\delta^{*}_{g_{b}}\) (where \((\delta^{*}_{g_{b}}u)_{\alpha\beta}=\frac{1}{2}(\nabla_{\alpha}u_{\beta}+\nabla_{\beta}u_{\alpha})\) is the symmetric gradient operator, \((\delta_{g_{b}}h)_{\beta}=-\nabla^{\alpha}h_{\alpha\beta}\) is the negative divergence operator, and \(G_{g_{b}}=\mathrm{Id}-\frac{1}{2}g_{b}\mathrm{tr}_{g_{b}}\) is the trace reversal operator) acting on 1-forms. We note that the operator \(\delta_{g_{b}}G_{g_{b}}\delta^{*}_{g_{b}}\) (or its modifications) occurs as both the gauge propagation operator and the gauge potential operator for the Einstein equations. To deal with this issue, we restrict to the weakly charged case, where we are able to exploit a perturbation argument to extend the invertibility of the relevant operators on the Schwarzschild metric to weakly charged Reissner-Nordström spacetimes.

Concretely, we first employ the generalized wave map gauge \[\widetilde{\Upsilon}^{E}(g;g_{b}):=\Upsilon^{E}(g;g_{b})-\theta(g;g_{b})=0,\quad\Upsilon^{E}(g;g_{b})=g_{\mu\nu}g^{\kappa\lambda}\left(\Gamma(g)^{\mu}_{\kappa\lambda}-\Gamma(g_{b})^{\mu}_{\kappa\lambda}\right) \tag{1.2}\] where \(\theta(g;g_{b})\) is linear in \(g\) and \(\theta(g_{b};g_{b})=0\). By introducing this type of gauge condition with a suitable choice of \(\theta\), the gauge potential operator is given by \(\tilde{\delta}_{g_{b}}G_{g_{b}}\delta^{*}_{g_{b}}\), where \(\tilde{\delta}_{g_{b}}\) is a zero order modification of \(\delta_{g_{b}}\). In contrast to \(\delta_{g_{b}}G_{g_{b}}\delta^{*}_{g_{b}}\), which is the gauge potential operator corresponding to the standard wave map gauge \(\Upsilon^{E}(g;g_{b})=0\), the operator \(\tilde{\delta}_{g_{b}}G_{g_{b}}\delta^{*}_{g_{b}}\) is invertible (on suitable function spaces) in the Schwarzschild case \(b=(\mathbf{m},0,0)\), and thus a perturbation argument enables us to extend this invertibility to the weakly charged and slowly rotating KN metrics with sufficiently small charge. Moreover, the generalized wave map gauge allows us to exclude pure gauge solutions, which have no geometric meaning. Next, we implement the constraint damping, which was first discussed in [55] and played a central role both in the numerical work [101] and in the stability proofs [57, 60, 63, 64]. The reason we introduce the constraint damping, i.e., use \(\tilde{\delta}^{*}_{g_{b}}\) in the construction of the linearized gauge-fixed Einstein-Maxwell equations, is that we need to use the invertibility of the corresponding gauge propagation operator \(\delta_{g_{b}}G_{g_{b}}\tilde{\delta}^{*}_{g_{b}}\) to exclude the generalized modes of the linearized gauge-fixed Einstein-Maxwell equations \(L_{b}(\dot{g},\dot{A})=0\) which grow linearly in \(t_{b,*}\) and whose leading terms are given by linearized (in \((\dot{\mathbf{m}},0,\dot{\mathbf{Q}})\)) KN solutions. We also note that we use a perturbation argument to prove the invertibility of the gauge propagation operator \(\delta_{g_{b}}G_{g_{b}}\tilde{\delta}^{*}_{g_{b}}\) on the weakly charged and slowly rotating KN metrics \(g_{b}\).

#### 1.3.1. Gauge fixing
In this paper, we introduce the generalized wave map gauge and the generalized Lorenz gauge.
Concretely, one fixes a background metric \(g^{0}\), and requires that the identity map \((M,g)\to(M,g^{0})\) be a wave map. This is equivalent to \[\Upsilon^{E}(g;g^{0})=g_{\mu\nu}g^{\kappa\lambda}\left(\Gamma(g)^{\mu}_{\kappa\lambda}-\Gamma(g^{0})^{\mu}_{\kappa\lambda}\right)=0.\] We define the _generalized wave map gauge_ as follows \[\widetilde{\Upsilon}^{E}(g;g^{0}):=\Upsilon^{E}(g;g^{0})-\theta(g;g^{0})=0\] where \(\theta(g;g^{0})\) is linear in \(g\), contains no derivatives of \(g\), and satisfies \(\theta(g^{0};g^{0})=0\). Then we consider the gauge-fixed Einstein equations \[P^{E}(g,A):=\mathrm{Ric}(g)-2T(g,dA)-\widetilde{\delta}^{*}_{g}\widetilde{\Upsilon}^{E}(g;g^{0})=0.\] At the linearized level, we take \(g_{b}\), around which we linearize the equations, as the background metric, and thus obtain the linearized gauge-fixed Einstein equations \[L^{E}_{b}(\dot{g},\dot{A}):=D_{g_{b}}P^{E}(\dot{g},\dot{A})=-\frac{1}{2}\square_{g_{b}}\dot{g}+\mathrm{l.o.t.}=0,\] which is a system of linear wave equations. As for the Maxwell equations, we introduce the generalized Lorenz gauge. We fix a background metric \(g^{0}\), and then a (modified) Lorenz gauge reads \[\Upsilon^{M}(g,A;g^{0})=\mathrm{tr}_{g}\delta^{*}_{g^{0}}A=0.\] We fix a background 1-form \(A^{0}\) and define the _generalized Lorenz gauge_ as \[\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0}):=\Upsilon^{M}(g,A;g^{0})-\Upsilon^{M}(g,A^{0};g^{0}),\] and then obtain the gauge-fixed Maxwell equations \[P^{M}(g,A):=\delta_{g}dA-d\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0.\] Again, at the linearized level, we take \((g_{b},A_{b})\), around which we linearize the equations, as the background metric and 1-form, and thus have \[L^{M}_{b}(\dot{g},\dot{A}):=D_{g_{b}}P^{M}(\dot{g},\dot{A})=-\square_{g_{b}}\dot{A}+\mathrm{l.o.t.}=0,\] which is a system of linear wave equations. We see that a solution of the ungauged equations satisfying the gauge conditions gives rise to a solution of the gauge-fixed equations. As for the other direction, suppose that \((g,A)\) solves the gauge-fixed Einstein-Maxwell system \[\left(P^{E}(g,A),P^{M}(g,A)\right)=0,\] and satisfies \[\left(\widetilde{\Upsilon}^{E}(g;g_{b}),\ \widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})\right)=0\quad\text{at }\Sigma_{0}\] and \[\left(\mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{E}(g;g_{b}),\ \mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})\right)=0\quad\text{at }\Sigma_{0},\] where \(\Sigma_{0}=\{\mathfrak{t}=0\}\) is the spacelike Cauchy hypersurface. Applying \(\delta_{g}\) to \[P^{M}(g,A)=\delta_{g}dA-d\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})=0\] gives \[-\delta_{g}d\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})=\square_{g}\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})=0,\] and thus, in view of the vanishing initial data of \(\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})\), we obtain \(\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})=0\). Applying \(\delta_{g}G_{g}\) to \(P^{E}(g,A)=\operatorname{Ric}(g)-2T(g,dA)-\widetilde{\delta}_{g}^{*}\widetilde{\Upsilon}^{E}(g;g_{b})=0\) and using the second Bianchi identity and the fact that \(\delta_{g}T=0\) yields \[-\delta_{g}G_{g}\widetilde{\delta}_{g}^{*}\widetilde{\Upsilon}^{E}(g;g_{b})=0,\] which is a wave-type equation for the \(1\)-form \(\widetilde{\Upsilon}^{E}(g;g_{b})\), and thus \(\widetilde{\Upsilon}^{E}(g;g_{b})=0\). We conclude that \((g,A)\) satisfies the gauge conditions, and thus solves the Einstein-Maxwell system. Correspondingly, this direction holds on the linearized level as well.
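For later use, we record where the pure gauge solutions of the linearized system come from; the following is a standard computation, stated here only as a sketch. Let \(\omega\) be a \(1\)-form with metric dual vector field \(\omega^{\sharp}\), let \(\psi_{s}\) be the flow of \(\omega^{\sharp}\), and let \(a_{s}\) be a family of functions with \(a_{0}=0\) and \(\frac{d}{ds}a_{s}|_{s=0}=\phi\). Differentiating the gauge transformation \((g,A)\mapsto(\psi_{s}^{*}g,\psi_{s}^{*}A+da_{s})\) at \(s=0\) gives \[(\dot{g},\dot{A})=\big(\mathcal{L}_{\omega^{\sharp}}g,\ \mathcal{L}_{\omega^{\sharp}}A+d\phi\big)=\big(2\delta^{*}_{g}\omega,\ \mathcal{L}_{\omega^{\sharp}}A+d\phi\big),\] where we used \((\mathcal{L}_{\omega^{\sharp}}g)_{\mu\nu}=\nabla_{\mu}\omega_{\nu}+\nabla_{\nu}\omega_{\mu}=2(\delta^{*}_{g}\omega)_{\mu\nu}\). Since the Einstein-Maxwell equations are invariant under pull-back by diffeomorphisms and under \(A\mapsto A+da\), any such pair solves the ungauged linearized Einstein-Maxwell equations around a solution \((g,A)\); these are the _pure gauge solutions_ appearing in §1.3.3 and §1.3.4 below.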
#### 1.3.2. Fredholm framework setup
According to the above discussion, it suffices to consider the equation \(L_{b}(\dot{g},\dot{A}):=(2L_{b}^{E},L_{b}^{M})(\dot{g},\dot{A})=0\). A simple linear theory presented in [64] reduces the initial value problem for \(L_{b}(\dot{g},\dot{A})=0\) to the following inhomogeneous problem. Concretely, let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) be close to \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). Fix a function \(t_{b,*}\) which equals \(t+r_{(\mathbf{m}_{0},0,\mathbf{Q}_{0}),*}\) near the event horizon and \(t-r_{(\mathbf{m},0,\mathbf{Q}),*}\) near null infinity \(\mathscr{I}^{+}\), where \(r_{b,*}\) is the tortoise coordinate. Then we consider the following inhomogeneous equation \[L_{b}(\dot{g},\dot{A})=(2L_{b}^{E}(\dot{g},\dot{A}),L_{b}^{M}(\dot{g},\dot{A}))=f\] where \(f\) has compact support in \(t_{*}\) and decays in \(r\) at the rate \(r^{-2-\alpha}\). Our strategy is to take the Fourier transform of the inhomogeneous equation \(L_{b}(\dot{g},\dot{A})=f\) in \(t_{b,*}\). We define the Fourier transform of \((\dot{g},\dot{A})\) as \[(\hat{\dot{g}}(\sigma),\hat{\dot{A}}(\sigma)):=\int_{\mathbb{R}}e^{it_{b,*}\sigma}(\dot{g},\dot{A})(t_{b,*})\,dt_{b,*} \tag{1.3}\] and likewise for \(f\). According to a crude energy estimate in Proposition 5.27, we have the following integral representation \[(\dot{g}(t_{b,*}),\ \dot{A}(t_{b,*}))=\frac{1}{2\pi}\int_{\operatorname{Im}\sigma=M+1}e^{-i\sigma t_{b,*}}\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)\,d\sigma,\quad M\gg 1. \tag{1.4}\] We expect to shift the contour of integration to \(\operatorname{Im}\sigma=c\) for any \(c>0\). To this end, we need to understand the location of the poles of \(\widehat{L_{b}}(\sigma)^{-1}\) and establish the invertibility of \(\widehat{L_{b}}(\sigma)\) when \(|\operatorname{Re}\sigma|\gg 1\). Concretely, we need to establish uniform Fredholm estimates (down to \(\sigma=0\)) and high energy estimates. The uniform Fredholm estimates allow us to perform the perturbation argument (which is used repeatedly in this paper), while the high energy estimates imply the invertibility of \(\widehat{L_{b}}(\sigma)\) when \(|\operatorname{Re}\sigma|\gg 1\). We point out that the whole of §5 is devoted to using microlocal tools to obtain the aforementioned uniform Fredholm estimates and high energy estimates. In brief, the proof of the uniform Fredholm estimates uses the non-elliptic Fredholm framework of [109], radial point estimates at the event horizon [109], scattering radial point estimates at infinity ([94, 111] for non-zero \(\sigma\) and [113] for \(\sigma\) near zero), propagation of singularities estimates [34], and (for \(\sigma=0\)) elliptic b-theory [93]. As for the proof of the high energy estimates, in addition to semiclassical versions of the aforementioned estimates, it uses estimates at normally hyperbolic trapping [61, 39, 119] and high energy estimates at infinity [111, 114].
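Before turning to the mode analysis, it is helpful to keep in mind the heuristic dictionary, made precise in §11 and §13, between the singularities of \(\widehat{L_{b}}(\sigma)^{-1}\) in \(\sigma\) and the behavior of \((\dot{g},\dot{A})\) in \(t_{b,*}\). In the representation (1.4), a pole of \(\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)\) of order \(k\) at \(\sigma=\sigma_{0}\) contributes, upon shifting the contour across it, terms of the form \[e^{-i\sigma_{0}t_{b,*}}\,t_{b,*}^{j}\,u_{j},\qquad 0\leq j\leq k-1,\] with stationary \(u_{j}\). Thus poles with \(\operatorname{Im}\sigma_{0}>0\) would correspond to exponentially growing solutions (and are excluded by mode stability), a double pole at \(\sigma=0\) corresponds to the linearly growing generalized zero modes described in §1.3.4 below, and a remainder which is conormal of size \(|\sigma|^{\alpha}\) at \(\sigma=0\) contributes, under the inverse Fourier transform, a tail decaying at the rate \(t_{b,*}^{-1-\alpha}\).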
#### 1.3.3. Mode stability of the gauge-fixed Einstein-Maxwell operator for \(C^{-1}\leq|\sigma|\leq C\), \(\operatorname{Im}\sigma\geq 0\), with \(C>0\)
In view of the high energy estimates, we are left with the analysis of \(\widehat{L_{b}}(\sigma)\) for \(\sigma\) in a bounded region in the closed upper half plane. For the analysis in the region \(C^{-1}\leq|\sigma|\leq C\), \(\operatorname{Im}\sigma\geq 0\) with \(C>0\), we prove that \(\widehat{L_{b_{0}}}(\sigma)\), with \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\) being a Reissner-Nordström parameter, is invertible, and then use a perturbation argument (in this compact (in \(\sigma\)) region) to extend the invertibility to \(\widehat{L_{b}}(\sigma)\) for nearby \(b\). For the proof of the invertibility of \(\widehat{L_{b_{0}}}(\sigma)\), we make use of the geometric mode stability of Reissner-Nordström black holes and the invertibility of the gauge propagation operator and the gauge potential operator. Concretely, suppose that \(\widehat{L_{b_{0}}}(\sigma)(\dot{g}_{0},\dot{A}_{0})=0\). Then \((\dot{g},\dot{A})=e^{-it_{b_{0},*}\sigma}(\dot{g}_{0},\dot{A}_{0})\) is a mode solution of \(L_{b_{0}}(\dot{g},\dot{A})=0\). According to the discussion in §1.3.1, we see that the solution satisfies \[\delta_{g_{b_{0}}}d\Big(D_{(g_{b_{0}},A_{b_{0}})}\widetilde{\Upsilon}^{M}(\dot{g},\dot{A};g_{b_{0}},A_{b_{0}})\Big)=0.\] According to the mode stability of \(\square_{g_{b_{0}},0}\), we see that \(D_{(g_{b_{0}},A_{b_{0}})}\widetilde{\Upsilon}^{M}(\dot{g},\dot{A};g_{b_{0}},A_{b_{0}})=0\). Therefore, \((\dot{g},\dot{A})\) solves the linearized Maxwell equations. By the discussion in §1.3.1 again, we obtain that \[\delta_{g_{b_{0}}}G_{g_{b_{0}}}\widetilde{\delta}_{g_{b_{0}}}^{*}\Big(D_{g_{b_{0}}}\widetilde{\Upsilon}^{E}(\dot{g};g_{b_{0}})\Big)=0.\] Using the mode stability of the gauge propagation operator (this is where we need the _smallness_ of the charge), we see that \(D_{g_{b_{0}}}\widetilde{\Upsilon}^{E}(\dot{g};g_{b_{0}})=0\). Therefore, \((\dot{g},\dot{A})\) is a mode solution of the _ungauged_ linearized Einstein-Maxwell system. Using the geometric mode stability of Reissner-Nordström solutions, we conclude that \((\dot{g},\dot{A})\) is a pure gauge mode solution, that is, \[(\dot{g},\dot{A})=\big(2\delta^{*}_{g_{b_{0}}}\omega,\ \mathcal{L}_{\omega^{\sharp}}A_{b_{0}}+d\phi\big)\] where \(\omega\) is a 1-form and \(\phi\) is a scalar function, both of which are modes. Plugging them back into the linearized gauge conditions, and with our choice of the generalized gauge conditions, we have \[\tilde{\delta}_{g_{b_{0}}}G_{g_{b_{0}}}\delta^{*}_{g_{b_{0}}}\omega=0,\quad\delta_{g_{b_{0}}}d\Big(\mathcal{L}_{\omega^{\sharp}}A_{b_{0}}+d\phi\Big)=0.\] First by the mode stability of the gauge potential operator (this is again where we need the _smallness_ of the charge) and then by the mode stability of \(\square_{g_{b_{0}},0}\), we see that \((\omega,\phi)=0\). This proves the injectivity of \(\widehat{L_{b_{0}}}(\sigma)\). Since \(\widehat{L_{b_{0}}}(\sigma)\) has index 0, it follows that \(\widehat{L_{b_{0}}}(\sigma)\) is invertible.

#### 1.3.4. The analysis of \(\widehat{L_{b}}(\sigma)^{-1}\) near \(\sigma=0\)
According to Theorem 10.1, the space \(\mathcal{K}_{b}\) of zero energy modes of \(L_{b}\) is 8-dimensional; it is the sum of a 5-dimensional space of gauge-fixed versions of linearized Kerr-Newman solutions, and a 3-dimensional space of pure gauge solutions \((2\delta^{*}_{g_{b}}\omega_{b},\ \mathcal{L}_{\omega_{b}^{\sharp}}A_{b}+d\phi_{b})\) with \(\omega_{b}\) asymptotic to a spatial translation.
According to Proposition 10.5, the space \(\widehat{\mathcal{K}}_{b}\) of generalized zero energy modes of \(L_{b}\) (which have \(o(1)\) decay as \(r\to\infty\) for fixed \(t_{b,*}\) but are allowed to grow polynomially in \(t_{b,*}\)) is the sum of \(\mathcal{K}_{b}\) and a 3-dimensional space of pure gauge solutions \((2\delta^{*}_{g_{b}}\hat{\omega}_{b},\ \mathcal{L}_{\hat{\omega}_{b}^{\sharp}}A_{b}+d\hat{\phi}_{b})\) with \(\hat{\omega}_{b}\) asymptotic to a Lorentz boost and the coefficient of \(t_{b,*}\) being a stationary pure gauge solution \((2\delta^{*}_{g_{b}}\omega_{b},\ \mathcal{L}_{\omega_{b}^{\sharp}}A_{b}+d\phi_{b})\). Therefore, \(\widehat{L_{b}}(\sigma)^{-1}\) is more singular when acting on the 3-dimensional space of stationary pure gauge solutions because of the existence of generalized zero energy solutions of \(L_{b}(\dot{g},\dot{A})=0\) with linear growth in \(t_{b,*}\) and leading order term given by a stationary pure gauge solution. Motivated by this fact, we write \(\widehat{L_{b}}(\sigma)\) (after multiplying by an operator which is invertible near \(\sigma=0\)) as a \(3\times 3\) block matrix. The basic mechanism underlying the analysis of \(\widehat{L_{b}}(\sigma)^{-1}\) near \(\sigma=0\) is the formal expansion of the resolvent near \(\sigma=0\), \[\widehat{L_{b}}(\sigma)^{-1}f=\widehat{L_{b}}(0)^{-1}f+(\widehat{L_{b}}(\sigma)^{-1}-\widehat{L_{b}}(0)^{-1})f=u_{0}+\sigma\widehat{L_{b}}(\sigma)^{-1}f_{1},\] where we use the formal resolvent identity \(\widehat{L_{b}}(\sigma)^{-1}-\widehat{L_{b}}(0)^{-1}=-\widehat{L_{b}}(\sigma)^{-1}(\widehat{L_{b}}(\sigma)-\widehat{L_{b}}(0))\widehat{L_{b}}(0)^{-1}\) and thus obtain \[u_{0}=\widehat{L_{b}}(0)^{-1}f,\quad f_{1}=-\sigma^{-1}\big(\widehat{L_{b}}(\sigma)-\widehat{L_{b}}(0)\big)u_{0}.\] The first term \(u_{0}\) is \(\sigma\)-independent. As for the second term \(\sigma\widehat{L_{b}}(\sigma)^{-1}f_{1}\), we gain a factor of \(\sigma\) (i.e., more regularity in \(\sigma\) and thus more decay in \(t_{b,*}\) after taking the inverse Fourier transform). However, since \(\widehat{L_{b}}(0)^{-1}\) loses two orders of decay in \(r\) while \(\sigma^{-1}(\widehat{L_{b}}(\sigma)-\widehat{L_{b}}(0))=2i\rho(\rho\partial_{\rho}-1)+\rho^{2}C^{\infty}\) with \(\rho=1/r\) only gains back one order of decay in \(r\), it follows that \(f_{1}\) has one order of decay less than \(f\). We expect to perform the above expansion iteratively, \[\widehat{L_{b}}(\sigma)^{-1}f=u_{0}+\sigma u_{1}+\cdots+\sigma^{k}u_{k},\] where \[u_{k}=\widehat{L_{b}}(0)^{-1}f_{k},\quad f_{k+1}=-\sigma^{-1}\big(\widehat{L_{b}}(\sigma)-\widehat{L_{b}}(0)\big)u_{k},\] as often as possible. However, this iteration requires that \(u_{k}=\widehat{L_{b}}(0)^{-1}f_{k}\) have a certain asymptotic behavior or decay rate as \(r\to\infty\) such that \(\sigma^{-1}(\widehat{L_{b}}(\sigma)-\widehat{L_{b}}(0))u_{k}\) lies in the domain of \(\widehat{L_{b}}(\sigma)^{-1}\). That is, we have to stop the iteration once \(u_{k}=\widehat{L_{b}}(0)^{-1}f_{k}\) fails this requirement, and thus we obtain an error term \(\sigma^{k}\widehat{L_{b}}(\sigma)^{-1}f_{k}\) in the expansion.
Specifically, a detailed analysis in §11.2 implies that \(\widehat{L_{b}}(\sigma)\) (up to multiplication by an operator invertible near \(\sigma=0\)) is equal to \[\begin{pmatrix}L_{00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b,\sigma)&\sigma\widetilde{L}_{02}(b,\sigma)\\ \sigma\widetilde{L}_{10}(b,\sigma)&\sigma^{2}\widetilde{L}_{11}(b,\sigma)&\sigma^{2}\widetilde{L}_{12}(b,\sigma)\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\widetilde{L}_{21}(b,\sigma)&\sigma\widetilde{L}_{22}(b,\sigma)\end{pmatrix}\] where \(L_{00}(b,\sigma),\ \widetilde{L}_{11}(b,\sigma),\ \widetilde{L}_{22}(b,\sigma)\) are invertible near \(\sigma=0\). Therefore, one obtains \[\widehat{L_{b}}(\sigma)^{-1}=\sigma^{-2}P_{2}(b)+\sigma^{-1}P_{1}(b)+R^{1}(b,\sigma)+R^{2}(b,\sigma),\quad R^{1}(b,\sigma)\sim|\sigma|^{\alpha-1},\quad R^{2}(b,\sigma)\sim|\sigma|^{\alpha},\] where \(P_{1}(b),P_{2}(b)\) are independent of \(\sigma\) and of finite rank with explicit range: linearized KN solutions and pure gauge solutions. The range of \(R^{1}(b,\sigma)\) consists of pure gauge solutions, and this allows us to rewrite the inverse Fourier transform of \(R^{1}(b,\sigma)f\) as a pure gauge solution plus a remainder term decaying faster in time \(t_{b,*}\) (in fact at the rate \(t_{b,*}^{-1-\alpha}\)).

### Outline
The paper is organized as follows.
* In §2, we introduce the b- and scattering geometric structures on manifolds with boundaries or corners (in this paper, we work on the compactification of the spatial slice \(\Sigma\) of the spacetime \(M\)), and the corresponding function spaces.
* In §3, we discuss the initial value problems formalism for the Einstein-Maxwell equations. We also introduce our choice of gauges, the generalized wave map gauge and the generalized Lorenz gauge, and then derive the gauge-fixed Einstein-Maxwell system. We include the discussion at both the nonlinear and linearized levels.
* In §4, we realize the family of slowly rotating KN metrics as a smooth family of stationary metrics on a fixed manifold \(M\). We also introduce several vector bundles over \(M\) and discuss how to construct the gauged Cauchy data for the gauge-fixed Einstein-Maxwell system from any given initial data \((h,k,\mathbf{E},\mathbf{H})\) satisfying the constraint equations.
* In §5, we set up the general Fredholm framework, based on tools from microlocal analysis. Concretely, we prove the uniform Fredholm estimates and high energy estimates for the relevant wave type operators (acting on both scalar functions and tensor bundles). We also establish a crude energy estimate which gives an exponentially growing bound on the energy (over a spacelike hypersurface terminating at null infinity) of solutions to the wave equations on the slowly rotating Kerr-Newman metric.
* In §6, we introduce the spherical harmonic decompositions, which will be used in the proof of the geometric mode stability of the Reissner-Nordström spacetime in §7.
* In §7, we prove the version of the geometric mode stability of the Reissner-Nordström spacetime used in the subsequent sections.
* In §8, we discuss the modes of the scalar wave operator on RN and slowly rotating KN spacetimes.
* In §9, we analyze the modes, on various function spaces (which correspond to different decay rates at spatial infinity), of the gauge propagation operator \(\mathcal{P}_{b,\gamma}=\delta_{g_{b}}G_{g_{b}}\widetilde{\delta}_{g_{b},\gamma}^{*}\) and the gauge potential wave operator \(\mathcal{W}_{b,\gamma}=\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta_{g_{b}}^{*}\), both of which are wave-type operators acting on 1-forms. From this section on, we restrict to the small charge case.
* In §10, we prove that the linearized gauge-fixed Einstein-Maxwell system \(L_{b_{0},\gamma}\) (linearized around the weakly charged RN metric and electromagnetic 4-potential \((g_{b_{0}},A_{b_{0}})\)) has no non-zero modes in the closed upper half plane. Moreover, we describe the spaces of zero modes and generalized zero modes (with linear growth in \(t_{b,*}\)) of \(L_{b,\gamma}\) both in the RN case and in the slowly rotating KN case with small charge.
* In §11, we prove the mode stability of the linearized gauge-fixed Einstein-Maxwell operator \(L_{b,\gamma}\) for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\), and \(\mathrm{Im}\,\sigma\geq 0\), \(\sigma\neq 0\) (i.e., the invertibility of the operator \(\widehat{L_{b,\gamma}}(\sigma)\) on suitable function spaces for \(\mathrm{Im}\,\sigma\geq 0\), \(\sigma\neq 0\)), and also provide a description of the structure of the resolvent \(\widehat{L_{b,\gamma}}(\sigma)^{-1}\) near \(\sigma=0\).
* In §12, we study the higher regularity of the resolvent \((\widehat{L_{b,\gamma}}(\sigma))^{-1}\) of the linearized gauge-fixed Einstein-Maxwell operator.
* In §13, we put all the previous sections together to study the asymptotic behavior of the solutions \((\dot{g},\dot{A})\) to the inhomogeneous linearized gauge-fixed Einstein-Maxwell equations \(L_{b,\gamma}(\dot{g},\dot{A})=f\).
* In §14, we reduce the initial value problem for the Einstein-Maxwell equations to the inhomogeneous problem studied in the previous section.
* In Appendix A, we provide a detailed calculation of the linearized Einstein-Maxwell system around RN black holes, which is needed in the proof of the geometric mode stability of the Reissner-Nordström spacetime in §7.
* In Appendix B, we present the calculation of the subprincipal operator of the wave operator acting on tensor bundles at trapping.
* In Appendix C, we include the computation of the subprincipal operator of the wave operator acting on tensor bundles at radial points at the event horizon, which is needed in the radial point estimates at event horizons.
### List of notations
We list the notations used frequently in this paper and point out their first appearance:
* \(\mathcal{A}^{\ell}\): weighted \(L^{\infty}\) conormal function space, see (2.4)
* \(A_{b_{0}}\): Reissner-Nordström electromagnetic 4-potentials with parameters \(b_{0}\), see (4.10)
* \(A_{b}\): Kerr-Newman electromagnetic 4-potentials with parameters \(b\), see (4.24)
* \(b_{0}\): fixed Reissner-Nordström parameters, see (4.1)
* \(b\): Kerr-Newman parameters, see (4.14)
* \(\delta_{g}\): negative divergence, \((\delta_{g}u)_{\mu_{1}\dots\mu_{N}}=-\nabla^{\lambda}u_{\lambda\mu_{1}\dots\mu_{N}}\)
* \(\delta_{g}^{*}\): symmetric gradient, \((\delta_{g}^{*}u)_{\mu\nu}=\frac{1}{2}(u_{\mu;\nu}+u_{\nu;\mu})\)
* \(\widetilde{\delta}_{g,\gamma}\): modification of the negative divergence, see (4.62)
* \(\widetilde{\delta}_{g,\gamma}^{*}\): modification of the symmetric gradient, see (4.62)
* \(g_{b_{0}}\): Reissner-Nordström metrics with parameter \(b_{0}\), see (4.24)
* \(g_{b}\): Kerr-Newman metrics with parameters \(b\), see (4.24)
* \(\gamma_{0}\): map assigning to a function on \(M\) its Cauchy data at \(\Sigma_{0}\), see (4.85)
* \(\bar{H}_{\mathrm{b}}^{s,\ell}\): weighted b-Sobolev space of extendible distributions, see (2.8)
* \(\dot{H}_{\mathrm{b}}^{s,\ell}\): weighted b-Sobolev space of supported distributions, see (2.8)
* \(\bar{H}_{\mathrm{b},h}^{s,\ell}\): semiclassical weighted b-Sobolev space of extendible distributions, see (2.10)
* \(\dot{H}_{\mathrm{b},h}^{s,\ell}\): semiclassical weighted b-Sobolev space of supported distributions, see (2.10)
* \(i_{b}\): map constructing Cauchy data from geometric initial data, see Proposition 4.14
* \(\mathcal{K}_{b}\): the space of zero energy modes of \(L_{b,\gamma}\), see (10.8)
* \(\widehat{\mathcal{K}}_{b}\): the space of generalized zero energy modes of \(L_{b,\gamma}\), see Proposition 10.5
* \(\mathcal{K}_{b}^{*}\): the space of dual zero energy modes of \(L_{b,\gamma}^{*}\), see (10.9)
* \(\widehat{\mathcal{K}}_{b}^{*}\): the space of dual generalized zero energy modes of \(L_{b,\gamma}^{*}\), see Proposition 10.5
* \(\mathcal{L}_{V}\): Lie derivative along the vector field \(V\)
* \(\widetilde{\mathcal{L}}_{T}V\): equal to \(\mathcal{L}_{V}T\), see Definition 3.9
* \(L_{b,\gamma}\): linearized gauge-fixed Einstein-Maxwell equations \((2L_{b,\gamma}^{E},L_{b,\gamma}^{M})\)
* \(L_{b,\gamma}^{E}\): linearized gauge-fixed Einstein equations, see (4.63)
* \(L_{b,\gamma}^{M}\): linearized gauge-fixed Maxwell equations, see (4.63)
* \(\mathcal{M}\): the static region of a fixed Reissner-Nordström spacetime, see (4.5)
* \(M\): extended manifold on which \(g_{b}\) is defined, see (4.8)
* \(\mathcal{P}_{b,\gamma}\): gauge propagation operator for the generalized wave map gauge 1-form \(\widetilde{\Upsilon}^{E}\), see (4.66)
* \(\Sigma_{0}\): a spacelike Cauchy hypersurface of \(M\), see (4.17)
* \(t\): static time coordinate, see (4.2), or Boyer-Lindquist coordinate, see (4.15)
* \(t_{0}\): incoming Eddington-Finkelstein coordinate on a fixed Reissner-Nordström manifold, see (4.6)
* \(t_{\chi_{0}}\): a time function interpolating between \(t_{0}\) near the event horizon and \(t\) near \(r=\infty\), see (4.9)
* \(t_{b,*}\): a time function which is smooth across the event horizon and transversal to null infinity near \(r=\infty\), see (4.30)
* \(t_{*}\): equal to \(t_{b_{0},*}\), see (4.11)
* \(\mathfrak{t}\): a timelike function with respect to the Kerr-Newman metric \(g_{b}\), equal to \(t\) near \(r=\infty\), see (4.16)
* \(\theta\): gauge source function for the generalized wave map gauge, see (3.17)
* \(\mathcal{U}_{b_{0}}\): parameter space for slowly rotating Kerr-Newman black holes, see Lemma 4.1
* \(\mathcal{U}_{b_{0},m}\): parameter space for slowly rotating Kerr-Newman black holes allowing for magnetic charges, see (4.25)
* \(\Upsilon^{E}\): wave map gauge 1-form, see (3.15)
* \(\Upsilon^{M}\): modified Lorenz gauge function, see (3.19)
* \(\widetilde{\Upsilon}^{E}\): generalized wave map gauge 1-form, see (3.17)
* \(\widetilde{\Upsilon}^{M}\): generalized Lorenz gauge function, see (3.20)
* \(\mathcal{W}_{b,\gamma}\): gauge potential operator for the Einstein equations, see (4.66)
* \(\mathcal{X}\): the spatial slice of the static region \(\mathcal{M}\), see (4.5)
* \(X\): the spatial slice of the extended manifold \(M\), see (4.8)
* \(\bar{X}\): compactification of \(X\) at \(r=\infty\), see (4.12)
* \(\partial_{-}\bar{X}\): an artificial boundary of \(\bar{X}\), see (4.12)
* \(\partial_{+}\bar{X}\): the boundary at spatial infinity \(r=\infty\) of \(\bar{X}\), see (4.12)
* \(\widehat{\bullet}(\sigma)\): Fourier transform \(e^{it_{b,*}\sigma}\bullet e^{-it_{b,*}\sigma}\) of \(\bullet\) with respect to the time function \(t_{b,*}\); in this paper, \(\bullet=\Box_{g_{b},0},\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma},L_{b,\gamma}\)

### Acknowledgements
The author would like to thank her advisor, Hans Lindblad, for his constant support and encouragement. The author is also grateful to Peter Hintz and András Vasy for helpful discussions.

## 2. b- and scattering geometry
In this section, we will recall the b- and scattering geometry and analysis on manifolds with boundaries or corners. Here, we follow the discussion in [57, §2]. In §2.1, we recall the notions of b- and scattering vector bundles; we refer the readers to [93, §2] and [95, §2] for a more detailed exposition. In §2.2, we recall the notion of radial compactification. In §2.3, we define the relevant Sobolev, conormal and polyhomogeneous function spaces on manifolds with boundaries or corners. Let \(\bar{X}\) be a general compact manifold with boundary \(\partial\bar{X}\), and let \(\rho\in C^{\infty}(\bar{X})\) be a defining function for the boundary \(\partial\bar{X}\), that is, \(\rho\geq 0\), \(\partial\bar{X}=\rho^{-1}(0)\) and \(d\rho\neq 0\) on \(\partial\bar{X}\).

### b- and scattering vector bundles
The spaces of _b-vector fields_ \(\mathcal{V}_{\mathrm{b}}(\bar{X})\) and _scattering vector fields_ \(\mathcal{V}_{\mathrm{sc}}(\bar{X})\) are defined as \[\mathcal{V}_{\mathrm{b}}(\bar{X})=\{V\in\mathcal{V}(\bar{X})\mid\text{$V$ is tangent to the boundary $\partial\bar{X}$}\},\quad\mathcal{V}_{\mathrm{sc}}(\bar{X})=\rho\mathcal{V}_{\mathrm{b}}(\bar{X}). \tag{2.1}\] Let \(\rho\in C^{\infty}(\bar{X})\) be a boundary defining function and let \(y_{1},\cdots,y_{n-1}\) be additional coordinates near the boundary. Then \(\mathcal{V}_{\mathrm{b}}(\bar{X})\) is spanned, over \(C^{\infty}(\bar{X})\), by \(\rho\partial_{\rho}\) and \(\partial_{y_{i}}\), while \(\mathcal{V}_{\mathrm{sc}}(\bar{X})\) is spanned, over \(C^{\infty}(\bar{X})\), by \(\rho^{2}\partial_{\rho}\) and \(\rho\partial_{y_{i}}\).
Correspondingly, we have the natural b-tangent bundle \({}^{\mathrm{b}}T\bar{X}\) and scattering tangent bundle \({}^{\mathrm{sc}}T\bar{X}\) over \(\bar{X}\) such that \[\mathcal{V}_{\mathrm{b}}(\bar{X})=C^{\infty}(\bar{X};{}^{\mathrm{b}}T\bar{X}),\quad\mathcal{V}_{\mathrm{sc}}(\bar{X})=C^{\infty}(\bar{X};{}^{\mathrm{sc}}T\bar{X}).\] Restriction to the interior \(X\) gives rise to smooth bundle maps \({}^{\mathrm{b}}T\bar{X}\to TX\) and \({}^{\mathrm{sc}}T\bar{X}\to TX\), which are isomorphisms over the interior \(X\) and vanish at the boundary \(\partial\bar{X}\). The space \(\mathrm{Diff}^{k}_{\mathrm{b}}(\bar{X})\), resp. \(\mathrm{Diff}^{k}_{\mathrm{sc}}(\bar{X})\), of b-, resp. scattering, differential operators of degree \(k\) consists of finite sums of up to \(k\)-fold products of b-, resp. scattering, vector fields. The dual bundles are denoted by \({}^{\mathrm{b}}T^{*}\bar{X}\), resp. \({}^{\mathrm{sc}}T^{*}\bar{X}\), and called the _b-cotangent bundle_, resp. the _scattering cotangent bundle_. Now \(\{d\rho/\rho,dy_{i}\}\), resp. \(\{d\rho/\rho^{2},dy_{i}/\rho\}\), gives a local basis of \({}^{\mathrm{b}}T^{*}\bar{X}\), resp. \({}^{\mathrm{sc}}T^{*}\bar{X}\). A _scattering metric_ is a section \(g\in C^{\infty}(\bar{X};S^{2}\,{}^{\mathrm{sc}}T^{*}\bar{X})\) which defines a non-degenerate fiber metric on \({}^{\mathrm{sc}}T\bar{X}\), while a _b-metric_ is a section \(g\in C^{\infty}(\bar{X};S^{2}\,{}^{\mathrm{b}}T^{*}\bar{X})\) which defines a non-degenerate fiber metric on \({}^{\mathrm{b}}T\bar{X}\).

### Radial compactification
These aforementioned b- and scattering structures arise naturally on compactifications of non-compact manifolds (see [95, §1]). Now we carry out the _radial compactification_ of the Euclidean space \(\mathbb{R}^{n}\). **Definition 2.1**.: The _radial compactification_ of the Euclidean space \(\mathbb{R}^{n}\) is defined as \[\overline{\mathbb{R}^{n}}:=\big(\mathbb{R}^{n}\sqcup([0,1)_{\rho}\times\mathbb{S}^{n-1})\big)/\sim \tag{2.2}\] where \(\sim\) identifies \((\rho,\omega)\in(0,1)_{\rho}\times\mathbb{S}^{n-1}\) with the point \(\rho^{-1}\omega\in\mathbb{R}^{n}\). The quotient carries the smooth structure where being smooth on \(\overline{\mathbb{R}^{n}}\) means smoothness on the interior \(\mathbb{R}^{n}\) in the usual sense, and smoothness on \([0,1)_{\rho}\times\mathbb{S}^{n-1}\) in \((\rho,\omega)\). Near the boundary \(\rho^{-1}(0)\), with \(\rho=r^{-1}\), the space of b-vector fields is spanned over \(C^{\infty}(\overline{\mathbb{R}^{n}})\) by \(\rho\partial_{\rho}=-r\partial_{r}\) and \(\mathcal{V}(\mathbb{S}^{n-1})\), while the space of scattering vector fields is spanned over \(C^{\infty}(\overline{\mathbb{R}^{n}})\) by \(\rho^{2}\partial_{\rho}=-\partial_{r}\) and \(\rho\mathcal{V}(\mathbb{S}^{n-1})\). Returning to the standard coordinates \(x^{1},\ldots,x^{n}\) on \(\mathbb{R}^{n}\), a direct calculation shows that \(\partial_{x^{1}},\ldots,\partial_{x^{n}}\) extend to the radial compactification \(\overline{\mathbb{R}^{n}}\) as smooth scattering vector fields and form a basis of \({}^{\mathrm{sc}}T\overline{\mathbb{R}^{n}}\), that is, the space \(\mathcal{V}_{\mathrm{sc}}(\overline{\mathbb{R}^{n}})\) of scattering vector fields is spanned, over \(C^{\infty}(\overline{\mathbb{R}^{n}})\), by \(\{\partial_{x^{1}},\ldots,\partial_{x^{n}}\}\).
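Indeed, this can be checked by a short computation in polar coordinates (a sketch of the standard verification): writing \(x=r\omega\) with \(r=|x|\), \(\omega\in\mathbb{S}^{n-1}\), and \(\rho=r^{-1}\), one has \[\partial_{x^{i}}=\omega^{i}\,\partial_{r}+\frac{1}{r}\,\Omega_{i}=-\omega^{i}\rho^{2}\partial_{\rho}+\rho\,\Omega_{i},\] where \(\Omega_{i}\in\mathcal{V}(\mathbb{S}^{n-1})\) is a smooth vector field on the sphere (independent of \(\rho\)). Both summands are smooth combinations of \(\rho^{2}\partial_{\rho}\) and \(\rho\,\mathcal{V}(\mathbb{S}^{n-1})\), so \(\partial_{x^{i}}\in\mathcal{V}_{\mathrm{sc}}(\overline{\mathbb{R}^{n}})\); conversely, one checks that the \(\partial_{x^{i}}\) form a frame of \({}^{\mathrm{sc}}T\overline{\mathbb{R}^{n}}\) down to the boundary.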
We also note that the linear vector fields \(\partial_{x^{1}},\ldots,\partial_{x^{n}}\) and \(x^{i}\partial_{x^{j}}\), \(1\leq i,j\leq n\), lift to span, over \(C^{\infty}(\overline{\mathbb{R}^{n}})\), the space \(\mathcal{V}_{\mathrm{b}}(\overline{\mathbb{R}^{n}})\) of b-vector fields. As for the dual bundles, correspondingly we note that the differentials \(dx^{i}\), \(1\leq i\leq n\), of the standard coordinates form a basis of \({}^{\mathrm{sc}}T^{*}\overline{\mathbb{R}^{n}}\). Therefore, the Euclidean metric \((dx^{1})^{2}+\cdots+(dx^{n})^{2}\) is a particular example of a _scattering metric_, i.e., an element of \(C^{\infty}(\overline{\mathbb{R}^{n}};S^{2}\,{}^{\mathrm{sc}}T^{*}\overline{\mathbb{R}^{n}})\). In this work, we are particularly interested in the behavior of the Hamiltonian vector field associated to a smooth function \(p\in C^{\infty}({}^{\mathrm{sc}}T^{*}\bar{X})\), where \(\bar{X}\) is a manifold with boundary. Concretely, given a smooth function \(p\in C^{\infty}({}^{\mathrm{sc}}T^{*}\bar{X})\), the Hamilton vector field \(H_{p}\) extends from the interior to a scattering vector field on \({}^{\mathrm{sc}}T^{*}\bar{X}\), which is again a manifold with boundary \({}^{\mathrm{sc}}T^{*}_{\partial\bar{X}}\bar{X}\). In this paper, we consider the Hamilton vector field \(H_{p}\) where \(p(x,\xi):=|\xi|^{2}_{g(x)^{-1}}\) and \(g\in C^{\infty}(\bar{X};S^{2}\,{}^{\mathrm{sc}}T^{*}\bar{X})\) is a scattering metric. To give a uniform discussion of the behavior of the Hamiltonian vector field \(H_{p}\in\mathcal{V}_{\mathrm{sc}}({}^{\mathrm{sc}}T^{*}\bar{X})\) near the boundary \(\partial\bar{X}\) and near infinity in the fibers of the scattering cotangent bundle \({}^{\mathrm{sc}}T^{*}\bar{X}\), we consider the compactified scattering cotangent bundle \(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X}\) as our basic microlocal space. We note that \(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X}\) is a compact manifold with corners obtained by the radial compactification of the fibers of \({}^{\mathrm{sc}}T^{*}\bar{X}\) (see Figure 1). Let \(\rho_{\mathrm{total}}=x\rho_{\xi}\) be the total boundary defining function of \(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X}\), where \(x\), resp. \(\rho_{\xi}\), is the boundary defining function of the boundary of \(\bar{X}\), resp. of fiber infinity. Then we have \(\mathcal{V}_{\mathrm{sc}}(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X})=\rho_{\mathrm{total}}\mathcal{V}_{\mathrm{b}}(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X})\), and thus the rescaled Hamiltonian vector field \(\rho_{\mathrm{total}}^{-1}H_{p}\), where \(p\in C^{\infty}({}^{\mathrm{sc}}T^{*}\bar{X})\), is a b-vector field on \(\overline{{}^{\mathrm{sc}}T^{*}}\bar{X}\).

### Function spaces
We next recall the notion of b- and scattering Sobolev spaces. Fix a volume _scattering density_ \(\mu\) on \(\bar{X}\), which in the local coordinates \(x\geq 0\), \(y\in\mathbb{R}^{n-1}\) is a positive multiple of \(|\frac{dx}{x^{2}}\frac{dy}{x^{n-1}}|\). With respect to this scattering density, we define the space \(L^{2}(\bar{X})=L^{2}_{\mathrm{sc}}(\bar{X})\); since \(\bar{X}\) is compact, any two choices of scattering density give rise to equivalent norms on \(L^{2}(\bar{X})\). We note that using a b-density is the usual convention for the definition of b-Sobolev spaces; however, _we emphasize that in this paper, we use the scattering density even for b-Sobolev spaces_.
For \(s\in\mathbb{N}_{0}\), we define the _b-_ or _scattering Sobolev spaces_ as follows: \[H^{s}_{\bullet}(\bar{X}):=\{u\in L^{2}(\bar{X},\nu)\mid V_{1}\cdots V_{j}u\in L^{2}(\bar{X},\nu),\ V_{i}\in\mathcal{V}_{\bullet}(\bar{X}),\ 1\leq i\leq j\leq s\},\quad\bullet=\mathrm{b},\mathrm{sc},\] which can be extended to \(s\in\mathbb{R}\) by duality and interpolation. _Weighted b- and scattering Sobolev spaces_ are defined by \[H^{s,\ell}_{\bullet}(\bar{X})=\rho^{\ell}H^{s}_{\bullet}(\bar{X}):=\{u\mid\rho^{-\ell}u\in H^{s}_{\bullet}(\bar{X})\}.\] The space \[H^{\infty,\ell}_{\mathrm{b}}(\bar{X})=\bigcap_{s\in\mathbb{R}}H^{s,\ell}_{\mathrm{b}}(\bar{X})\] is a Fréchet space and we refer to its elements as _weighted (\(L^{2}\)-)conormal functions_. Dually, we define \[H^{-\infty,\ell}_{\mathrm{b}}(\bar{X})=\bigcup_{s\in\mathbb{R}}H^{s,\ell}_{\mathrm{b}}(\bar{X}).\] We note that \(H^{s,\ell}_{\mathrm{b}}(\bar{X})\subset C^{-\infty}(\bar{X})\), where \(C^{-\infty}(\bar{X})\) is the space of extendible distributions on \(\bar{X}\) in the sense of [68, Appendix B]; it is the dual of the space \(\dot{C}^{\infty}(\bar{X})\) (the space of smooth functions vanishing, with all derivatives, at the boundary \(\partial\bar{X}\)). We also define the spaces \[H^{s,\ell+}_{\mathrm{b}}(\bar{X}):=\bigcup_{\epsilon>0}H^{s,\ell+\epsilon}_{\mathrm{b}}(\bar{X}),\quad H^{s,\ell-}_{\mathrm{b}}(\bar{X}):=\bigcap_{\epsilon>0}H^{s,\ell-\epsilon}_{\mathrm{b}}(\bar{X}) \tag{2.3}\] for \(s\in\mathbb{R}\cup\{\pm\infty\}\). We also use the _weighted \(L^{\infty}\)-conormal function space_ \(\mathcal{A}^{\ell}(\bar{X})\) to describe the behavior of functions near the boundary \(\partial\bar{X}=\{\rho=0\}\): \[\mathcal{A}^{\ell}(\bar{X}):=\{u\in\rho^{\ell}L^{\infty}(\bar{X})\colon\mathrm{Diff}_{\mathrm{b}}(\bar{X})u\subset\rho^{\ell}L^{\infty}(\bar{X})\}. \tag{2.4}\] The relation between \(H^{\infty,\ell}_{\mathrm{b}}(\bar{X})\) and \(\mathcal{A}^{\ell}(\bar{X})\) is as follows: for \(\bar{X}=\overline{\mathbb{R}^{3}}\), a direct calculation implies \(\mathcal{A}^{\ell}(\overline{\mathbb{R}^{3}})\subset H^{\infty,\ell-3/2-}_{\mathrm{b}}(\overline{\mathbb{R}^{3}})\), and the Sobolev embedding theorem gives \(H^{\infty,\ell}_{\mathrm{b}}(\overline{\mathbb{R}^{3}})\subset\mathcal{A}^{\ell+3/2}(\overline{\mathbb{R}^{3}})\). We note that since we use the scattering density (instead of the b-density) to define the b-Sobolev spaces, there is a shift \(\frac{3}{2}\) in the weight. More specifically, for \(s>\frac{3}{2}\), we have \[H^{s,\ell}_{\mathrm{b}}(\overline{\mathbb{R}^{3}};|dx^{1}\,dx^{2}\,dx^{3}|)=H^{s,\ell+3/2}_{\mathrm{b}}(\overline{\mathbb{R}^{3}};\langle r\rangle^{-3}|dx^{1}\,dx^{2}\,dx^{3}|)\hookrightarrow\langle r\rangle^{-\ell-3/2}L^{\infty}(\overline{\mathbb{R}^{3}}). \tag{2.5}\] The spaces \(\mathcal{A}^{\ell+}(\bar{X})\) and \(\mathcal{A}^{\ell-}(\bar{X})\) can be defined analogously to (2.3). The above spaces of sections of vector bundles are defined using local trivializations. We now recall the notion of _\(\mathcal{E}\)-smoothness_ and its basic properties (see [93, §5]). **Definition 2.2** ([93, Definition 5.23]).: An _index set_ is a discrete subset \(\mathcal{E}\subset\mathbb{C}\times\mathbb{N}_{0}\) satisfying
1. \((z_{j},k_{j})\in\mathcal{E},\ |(z_{j},k_{j})|\to\infty\Longrightarrow\mathrm{Im}\,z_{j}\to-\infty\);
2. \((z,k)\in\mathcal{E}\Longrightarrow(z,l)\in\mathcal{E}\) for \(l\in\mathbb{N}_{0},\ 0\leq l\leq k\);
3. \((z,k)\in\mathcal{E}\Longrightarrow(z-ij,k)\in\mathcal{E}\) for \(j\in\mathbb{N}_{0}\).

**Definition 2.3** ([93, Equation 5.73]).: Let \(\mathcal{E}\) be an index set. A _polyhomogeneous function_ on \(\bar{X}\) with index set \(\mathcal{E}\) is a smooth (in the interior of \(\bar{X}\)) function \(u\) for which there are \(a_{(z,j)}\in C^{\infty}(\partial\bar{X})\), \((z,j)\in\mathcal{E}\), such that \[u-\sum_{\begin{subarray}{c}(z,j)\in\mathcal{E}\\ \mathrm{Im}\,z\geq-N\end{subarray}}\rho^{iz}(\log\rho)^{j}a_{(z,j)}\in\mathcal{A}^{N}(\bar{X})\quad\forall\,N\in\mathbb{R}. \tag{2.6}\] The space of polyhomogeneous functions on \(\bar{X}\) with index set \(\mathcal{E}\) is denoted by \(\mathcal{A}^{\mathcal{E}}_{\mathrm{phg}}(\bar{X})\). Another characterization of polyhomogeneous functions with index set \(\mathcal{E}\) is given by \[u\in\mathcal{A}^{\mathcal{E}}_{\mathrm{phg}}(\bar{X})\Longleftrightarrow\prod_{\begin{subarray}{c}(z,j)\in\mathcal{E}\\ \mathrm{Im}\,z\geq-N\end{subarray}}(\rho D_{\rho}-z)u\in\mathcal{A}^{N}(\bar{X})\quad\forall\,N\in\mathbb{R},\] where \(D_{\rho}=i^{-1}\partial_{\rho}\) and \(\rho\) is the boundary defining function of \(\bar{X}\). Let \(\bar{X}^{\prime}\) be a compact manifold with boundary, and let \(\bar{X}\subset\bar{X}^{\prime}\) be a submanifold with boundary. Suppose that the boundary of \(\bar{X}\) has two components \[\partial\bar{X}=\partial_{-}\bar{X}\sqcup\partial_{+}\bar{X},\qquad\partial_{-}\bar{X}=\partial\bar{X}\setminus\partial\bar{X}^{\prime},\quad\partial_{+}\bar{X}=\partial\bar{X}^{\prime}; \tag{2.7}\] we call \(\partial_{+}\bar{X}\) the boundary 'at infinity', and \(\partial_{-}\bar{X}\) the 'artificial' boundary in the interior of \(\bar{X}^{\prime}\). Working on such \(\bar{X}\), we impose the b- and scattering structures near \(\partial_{+}\bar{X}\) (i.e., the boundary 'at infinity'). That is, we define (by a slight abuse of notation) \[\mathcal{V}_{\mathrm{b}}(\bar{X}):=\{V|_{\bar{X}}\ |\ V\in\mathcal{V}_{\mathrm{b}}(\bar{X}^{\prime})\},\quad\mathcal{V}_{\mathrm{sc}}(\bar{X}):=\{V|_{\bar{X}}\ |\ V\in\mathcal{V}_{\mathrm{sc}}(\bar{X}^{\prime})\}.\] A typical example of the setting (2.7) is \(\bar{X}^{\prime}=\overline{\mathbb{R}^{n}}\) and \(\bar{X}=\overline{\{r\geq 1\}}\subset\bar{X}^{\prime}\). In this example, \(\partial_{+}\bar{X}=\partial\bar{X}^{\prime}\) is the boundary 'at infinity' and \(\partial_{-}\bar{X}=\{r=1\}\) is the boundary in the interior of \(\bar{X}^{\prime}=\overline{\mathbb{R}^{n}}\). See Figure 2. Because of the existence of the 'artificial' boundary \(\partial_{-}\bar{X}\), we further introduce the following two classes of Sobolev spaces, the _extendible Sobolev spaces_ \(\bar{H}^{s,\ell}_{\bullet}(\bar{X})\) and the _supported Sobolev spaces_ \(\dot{H}^{s,\ell}_{\bullet}(\bar{X})\): \[\bar{H}^{s,\ell}_{\bullet}(\bar{X}):=\{u|_{\bar{X}^{\circ}}\mid u\in H^{s,\ell}_{\bullet}(\bar{X}^{\prime})\},\quad\dot{H}^{s,\ell}_{\bullet}(\bar{X}):=\{u\mid u\in H^{s,\ell}_{\bullet}(\bar{X}^{\prime}),\ \operatorname{supp}u\subset\bar{X}\},\quad\bullet=\operatorname{b},\operatorname{sc}. \tag{2.8}\] Away from \(\partial_{-}\bar{X}\), these agree with the standard spaces \(H^{s,\ell}_{\bullet}(\bar{X})\). Therefore, the extendibility/support considerations arise only at \(\partial_{-}\bar{X}\), not at \(\partial_{+}\bar{X}\). Let \(M=\mathbb{R}_{\mathfrak{t}}\times\bar{X}\) be a stationary spacetime with 'spatial part' \(\bar{X}\) and define the projection \(\pi_{\bar{X}}\colon M\to\bar{X}\).
Let \(\mathcal{F}(\bar{X})\) be a function space on \(\bar{X}\) such as \(\mathcal{F}(\bar{X})=\bar{H}^{\infty,\ell}_{\operatorname{b}}(\bar{X})\) or \(\dot{H}^{\infty,\ell}_{\operatorname{b}}(\bar{X})\); then \(\pi^{*}_{\bar{X}}\mathcal{F}(\bar{X})\) is a \(\mathfrak{t}\)-independent function space on \(M\). For later use, we further introduce the space of _generalized zero modes_ (which depend on \(\mathfrak{t}\) polynomially) \[\operatorname{Poly}(\mathfrak{t})\mathcal{F}(\bar{X}):=\bigcup_{k\in\mathbb{N}}\operatorname{Poly}^{k}(\mathfrak{t})\mathcal{F}(\bar{X}),\quad\operatorname{Poly}^{k}(\mathfrak{t})\mathcal{F}(\bar{X}):=\left\{\sum_{j=0}^{k}\mathfrak{t}^{j}a_{j}\colon a_{j}\in\mathcal{F}(\bar{X})\right\}. \tag{2.9}\] Here, \(\bar{X}\) (or \(d\mathfrak{t}\)) does not have to be spacelike (or timelike). For the high energy estimates, we need to work with _semiclassical b-Sobolev spaces_ \(H^{s,\ell}_{\operatorname{b},h}(\bar{X})\). The semiclassical b-Sobolev norms are defined for \(s\in\mathbb{N}_{0}\), \(u\in H^{s,\ell}_{\operatorname{b}}(\bar{X})\), \(h\in(0,1]\) by \[\|u\|_{H^{s}_{\operatorname{b},h}(\bar{X})}^{2}:=\sum_{\begin{subarray}{c}1\leq i_{1},\ldots,i_{j}\leq N\\ 0\leq j\leq s\end{subarray}}\|(hV_{i_{1}})\cdots(hV_{i_{j}})u\|_{L^{2}(\bar{X})}^{2},\quad\|u\|_{H^{s,\ell}_{\operatorname{b},h}(\bar{X})}:=\|\rho^{-\ell}u\|_{H^{s}_{\operatorname{b},h}(\bar{X})}, \tag{2.10}\] where \(V_{1},\ldots,V_{N}\in\mathcal{V}_{\operatorname{b}}(\bar{X})\) span \(\mathcal{V}_{\operatorname{b}}(\bar{X})\) over \(C^{\infty}(\bar{X})\). This definition of the semiclassical norm can be extended to \(s\in\mathbb{R}\) by duality and interpolation. (Alternatively, \(H^{s,\ell}_{\operatorname{b},h}(\bar{X})\) can be defined by using semiclassical b-pseudodifferential operators, see [63, Appendix A].) If \(\bar{X}\) is a compact manifold as in the setting (2.7), one can also define semiclassical extendible and supported b-Sobolev spaces \(\bar{H}^{s,\ell}_{\operatorname{b},h}(\bar{X})\) and \(\dot{H}^{s,\ell}_{\operatorname{b},h}(\bar{X})\) analogously to (2.8).

## 3. Initial value problems formalism for Einstein-Maxwell equations
In this section, we shall discuss the initial value problems formalism for the Einstein-Maxwell equations. Here, we follow the discussion in [60, §3]. As a general reference, we refer the readers to [19, §6.10]. In §3.1, we reduce the Einstein-Maxwell equations for \((g,F)\) in the form (3.1)-(3.2) to the study of the equations for \((g,A)\) of the form (3.9). In §3.2, we formulate the initial value problems for the Einstein-Maxwell equations and then write the Einstein-Maxwell equations with respect to a _generalized wave map gauge_ and a _generalized Lorenz gauge_. In §3.3, we discuss the linearization around a special solution of the Einstein-Maxwell equations, as well as the linearization of the gauge-fixed Einstein-Maxwell equations.

### Einstein-Maxwell equations
The Einstein-Maxwell equations read \[\operatorname{Ric}(g)_{\mu\nu}=2T(g,F)_{\mu\nu}, \tag{3.1}\] \[dF=0,\quad\delta_{g}F=0, \tag{3.2}\] where \(g\) is a Lorentzian metric with signature \((-,+,+,+)\), \(\operatorname{Ric}(g)\) is its Ricci curvature tensor, \(T_{\mu\nu}(g,F):=F_{\mu\alpha}F_{\nu}^{\ \alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\) is the energy momentum tensor associated with an electromagnetic field represented by a \(2\)-form \(F\), and \((\delta_{g}F)_{\beta}=-\nabla^{\alpha}F_{\alpha\beta}\) is the negative divergence.
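Before proceeding, we sketch a standard computation that is used repeatedly below: if \(F\) solves (3.2), then the energy momentum tensor is divergence-free. Namely, \[\nabla^{\mu}T_{\mu\nu}=(\nabla^{\mu}F_{\mu\alpha})F_{\nu}{}^{\alpha}+F^{\mu\alpha}\nabla_{\mu}F_{\nu\alpha}-\frac{1}{2}F^{\alpha\beta}\nabla_{\nu}F_{\alpha\beta},\] and the identity \(\nabla_{\nu}F_{\alpha\beta}+\nabla_{\alpha}F_{\beta\nu}+\nabla_{\beta}F_{\nu\alpha}=0\) (equivalent to \(dF=0\)) shows that the last two terms cancel, so that \(\nabla^{\mu}T_{\mu\nu}=(\nabla^{\mu}F_{\mu\alpha})F_{\nu}{}^{\alpha}\), which vanishes when \(\delta_{g}F=0\).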
As computed above, if \(F\) solves (3.2), then \(\delta_{g}T(g,F)=0\). Let \((g,F)\) be a smooth solution of (3.1)-(3.2). Let \(i\colon\Sigma_{0}\hookrightarrow M\) be a spacelike hypersurface with future timelike unit normal field \(n\), and let \(h=g|_{\Sigma_{0}}\) be the induced metric on \(\Sigma_{0}\) with respect to \(g\). The electromagnetic initial data on \(\Sigma_{0}\) are the _magnetic field_ \(\mathbf{H}\) and the _electric field_ \(\mathbf{E}\). The \(1\)-form \(\mathbf{H}\) is the form induced on \(\Sigma_{0}\) by the electromagnetic field \(F\), while \(\mathbf{E}\) is the \(1\)-form on \(\Sigma_{0}\) relative to the future timelike unit normal \(n\) to \(\Sigma_{0}\) in the spacetime metric \(g\). Specifically, the electric and magnetic fields \(\mathbf{E},\mathbf{H}\), as measured by observers with \(4\)-velocity \(n\), are defined as follows \[\mathbf{E}:=-i^{*}\iota_{n}F=\star_{h}i^{*}\star_{g}F,\quad\mathbf{H}:=i^{*}\iota_{n}\star_{g}F=\star_{h}i^{*}F \tag{3.3}\] where \(\star\) is the Hodge star operator. A direct calculation implies that, writing \(T_{g,F}:=T(g,F)\), \[\begin{split} T_{g,F}(n,n)&=\frac{1}{2}(|\mathbf{E}|_{h}^{2}+|\mathbf{H}|_{h}^{2}),\\ T_{g,F}(n,\cdot)&=\star_{h}(\mathbf{H}\wedge\mathbf{E})\quad\text{on}\quad T\Sigma_{0},\end{split} \tag{3.4}\] where \(T_{g,F}(n,n)\) is interpreted as the energy density of the electromagnetic field measured by an observer with \(4\)-velocity \(n\). We now define the following rotation of \(F,\mathbf{E},\mathbf{H}\): \[F_{\theta}:=\cos(\theta)F+\sin(\theta)\star_{g}F, \tag{3.5}\] \[(\mathbf{E}_{\theta},\mathbf{H}_{\theta}):=(\cos(\theta)\mathbf{E}-\sin(\theta)\mathbf{H},\,\sin(\theta)\mathbf{E}+\cos(\theta)\mathbf{H}). \tag{3.6}\] Then \(\mathbf{E}_{\theta}\) and \(\mathbf{H}_{\theta}\) are the electric and magnetic fields, respectively, induced by \((g,F_{\theta})\). Now we establish the following lemma. **Lemma 3.1**.: _If \((g,F)\) solves the Einstein-Maxwell equations (3.1)-(3.2), then so does \((g,F_{\theta})\) for all \(\theta\in\mathbb{R}\), where \(F_{\theta}\) is defined as in (3.5)._ Proof.: According to the facts that \(\star_{g}\star_{g}=-\mathrm{Id}\) and \(\delta_{g}=\star_{g}d\star_{g}\) when acting on \(2\)-forms, the equations (3.2) are equivalent to \(dF=0\), \(d\star_{g}F=0\). Then it follows that \[dF_{\theta}=\cos(\theta)dF+\sin(\theta)d\star_{g}F=0,\quad\delta_{g}F_{\theta}=\star_{g}(\cos(\theta)d\star_{g}F-\sin(\theta)dF)=0.\] Since the energy momentum tensor \(T(g,F)\) can be rewritten as \[T(g,F)_{\mu\nu}=\frac{1}{2}g^{\lambda\sigma}\Big(F_{\mu\lambda}F_{\nu\sigma}+(\star_{g}F)_{\mu\lambda}(\star_{g}F)_{\nu\sigma}\Big),\] it follows that \(T(g,F_{\theta})_{\mu\nu}=T(g,F)_{\mu\nu}\). This proves that \((g,F_{\theta})\) solves the Einstein-Maxwell equations (3.1)-(3.2). For simplicity, we may assume that \[M\cong\mathbb{R}_{\mathfrak{t}}\times I_{r}\times\mathbb{S}^{2},\quad\Sigma_{0}=\{\mathfrak{t}=0\}\subset M, \tag{3.7}\] where \(I\subset\mathbb{R}\) is a non-empty open interval, and \(\Sigma_{0}\) is spacelike. Then given a solution \((g,F)\) of (3.1)-(3.2), and a \(2\)-sphere \(S=\{\mathfrak{t}=\mathfrak{t}_{0},\ r=r_{0}\}\subset M\) (with \(\mathfrak{t}_{0}\in\mathbb{R}\), \(r_{0}\in I\)), we can define the _electric_ and _magnetic charges_ \[Q_{e}(g,F):=\frac{1}{4\pi}\int_{S}\star_{g}F,\qquad Q_{m}(F):=\frac{1}{4\pi}\int_{S}F. \tag{3.8}\] By Stokes' theorem and the equations (3.2), i.e., \(dF=0\) and \(d\star_{g}F=0\), the charges \(Q_{e}\) and \(Q_{m}\) are independent of the specific choice of \(S\).
In particular, we can choose \(S\subset\Sigma_{0}\), and use (3.3) and \(\star_{h}\star_{h}=\mathrm{Id}\) when acting on \(1\)-forms to find that \[\begin{split} Q_{e}(g,F)&=Q_{e}(h,\mathbf{E}):=\frac{1}{4\pi}\int_{S}\star_{h}\mathbf{E}=\frac{1}{4\pi}\int_{S}\langle\mathbf{E},\nu\rangle\,d\sigma,\\ Q_{m}(F)&=Q_{m}(\mathbf{H}):=\frac{1}{4\pi}\int_{S}\star_{h}\mathbf{H}=\frac{1}{4\pi}\int_{S}\langle\mathbf{H},\nu\rangle\,d\sigma,\end{split}\] with \(\nu\) the outward pointing unit normal to \(S\) (with respect to \(h\)) and \(d\sigma\) the volume element of \(S\). Since the second de Rham cohomology of \(I_{r}\times\mathbb{S}^{2}\) is non-trivial, \(dF=0\) does not imply that \(F=dA\) for some \(1\)-form \(A\) globally. In fact, \(H^{2}_{\mathrm{dR}}(I_{r}\times\mathbb{S}^{2},\mathbb{R})\cong\mathbb{R}\) and \(F\mapsto Q_{m}(F)\) induces an isomorphism. Therefore, the vanishing of \(Q_{m}(F)\) is equivalent to the condition that \(F=dA\) for some \(1\)-form \(A\). Now we show that the vanishing of \(Q_{m}(F)\) can always be arranged by the rotation transformation (3.5). **Lemma 3.2**.: _Let \((g,F)\) solve the Einstein-Maxwell equations (3.1)-(3.2). In the setting (3.7), there exists \(\theta\in\mathbb{R}\) such that \(Q_{m}(F_{\theta})=0\), with \(F_{\theta}\) defined in (3.5). Moreover, for such \(\theta\), we have_ \[Q_{e}(g,F_{\theta})^{2}=Q_{m}(F)^{2}+Q_{e}(g,F)^{2}.\] Proof.: With \(F_{\theta}=\cos(\theta)F+\sin(\theta)\star_{g}F\), we have \(Q_{m}(F_{\theta})=\cos(\theta)Q_{m}(F)+\sin(\theta)Q_{e}(g,F)\) and, using \(\star_{g}\star_{g}=-\mathrm{Id}\) on \(2\)-forms, \(Q_{e}(g,F_{\theta})=\cos(\theta)Q_{e}(g,F)-\sin(\theta)Q_{m}(F)\). Choosing \(\theta\in\mathbb{R}\) such that \((\cos\theta,\sin\theta)\perp(Q_{m}(F),Q_{e}(g,F))\) gives \(Q_{m}(F_{\theta})=0\), and for this choice \(Q_{e}(g,F_{\theta})^{2}=Q_{m}(F)^{2}+Q_{e}(g,F)^{2}\), which finishes the proof. In view of Lemma 3.2, we may assume \(Q_{m}(F)=0\), which is equivalent to the fact that \(F=dA\) for some \(1\)-form \(A\). This reduces the study of the Einstein-Maxwell equations on \(M\) in the form (3.1)-(3.2) to the study of the following system in \((g,A)\): \[\mathrm{Ric}(g)=2T(g,dA),\qquad\delta_{g}dA=0, \tag{3.9}\] which we continue to call the Einstein-Maxwell equations. In this work, we will study the Einstein-Maxwell equations of the above form (3.9).

### Initial value problems for the nonlinear system
We now discuss the initial value problem for the Einstein-Maxwell equations (3.9). Given any initial data \[(\Sigma_{0},h,k,\mathbf{E},\mathbf{H}), \tag{3.10}\] where \(\Sigma_{0}\) is a smooth \(3\)-manifold equipped with a Riemannian metric \(h\), a symmetric \(2\)-tensor \(k\), and \(1\)-forms \(\mathbf{E},\mathbf{H}\), one seeks a solution \((M,g,A)\) of (3.9) with data \((\Sigma_{0},h,k,\mathbf{E},\mathbf{H})\) in the sense that \(\Sigma_{0}\hookrightarrow M\) is a spacelike embedded hypersurface with respect to \(g\), \(h,k\) are respectively the induced metric and second fundamental form of \(\Sigma_{0}\) with respect to \(g\), and \(F=dA\) induces the given fields \(\mathbf{E},\mathbf{H}\) on \(\Sigma_{0}\) according to (3.3). We define the initial data map \[\tau\colon(g,F)\mapsto(h,k,\mathbf{E},\mathbf{H}) \tag{3.11}\] if \((g,F)\) induces the data \((h,k,\mathbf{E},\mathbf{H})\) at \(\Sigma_{0}\) in the above sense.
We note that the data \((h,k,\mathbf{E},\mathbf{H})\) cannot be prescribed freely: they must satisfy the following constraint equations, which are obtained by evaluating \(G_{g}(\mathrm{Ric}(g)-2T(g,F))\) on \((n,n)\), where \(G_{g}=\mathrm{Id}-\frac{1}{2}g\mathrm{tr}_{g}\) and \(n\) is the future timelike unit normal to \(\Sigma_{0}\), and on \((n,X)\) with \(X\in T\Sigma_{0}\), by pulling back \(dF=0\) and \(d\star_{g}F=0\) to \(\Sigma_{0}\), and finally by adding the constraint that the magnetic charge \(Q_{m}(F)=Q_{m}(\mathbf{H})\) vanishes: \[\begin{split} R_{h}-|k|_{h}^{2}+(\mathrm{tr}_{h}k)^{2}&=2(|\mathbf{E}|_{h}^{2}+|\mathbf{H}|_{h}^{2}),\\ \delta_{h}k+d\mathrm{tr}_{h}k&=2\star_{h}(\mathbf{H}\wedge\mathbf{E});\end{split} \tag{3.12}\] \[\delta_{h}\mathbf{E}=0,\quad\delta_{h}\mathbf{H}=0; \tag{3.13}\] \[Q_{m}(\mathbf{H})=\frac{1}{4\pi}\int_{S}\star_{h}\mathbf{H}=0. \tag{3.14}\] The constraints (3.12)-(3.13) are necessary and sufficient conditions for the local solvability of the Einstein-Maxwell equations (see [19, §6.10]). _Remark 3.3_.: In the setting \(\Sigma_{0}\cong I_{r}\times\mathbb{S}^{2}\) as in (3.7), in view of the discussion around (3.5) and Lemma 3.2, the constraint equations are invariant under replacing \((\mathbf{E},\mathbf{H})\) by \((\mathbf{E}_{\theta},\mathbf{H}_{\theta})\), and by choosing suitable \(\theta\) we have \(Q_{m}(\mathbf{H}_{\theta})=0\). Therefore, we can solve the Einstein-Maxwell equations for \((g,A)\) with initial data \((\mathbf{E}_{\theta},\mathbf{H}_{\theta})\), and then \(F:=(dA)_{-\theta}\) gives rise to a solution of (3.1)-(3.2) with initial data \((\mathbf{E},\mathbf{H})\). _Under the additional assumption \(Q_{m}(\mathbf{H})=0\)_, we can solve the Einstein-Maxwell equations of the form (3.9) for \((g,A)\), with the electromagnetic field then given by \(F=dA\). Due to the diffeomorphism invariance of the Einstein-Maxwell equations, one has gauge freedom and thus can fix a gauge. Here, we employ the _generalized wave map gauge_. Concretely, we fix a Lorentzian background metric \(g^{0}\) and define the _gauge 1-form_ \[\Upsilon^{E}(g;g^{0}):=g(g^{0})^{-1}\delta_{g}G_{g}g^{0}. \tag{3.15}\] In local coordinates, the above gauge 1-form can be written as \[\Upsilon^{E}(g;g^{0})_{\alpha}=g_{\alpha\kappa}g^{\mu\nu}(\Gamma(g)^{\kappa}_{\mu\nu}-\Gamma(g^{0})^{\kappa}_{\mu\nu}). \tag{3.16}\] We note that \(\Upsilon^{E}(g;g^{0})\equiv 0\) if and only if the identity map \(\mathrm{Id}\colon(M,g)\to(M,g^{0})\) is a wave map, and thus \(\Upsilon^{E}(g;g^{0})\equiv 0\) gives rise to the wave map gauge (see [32]). The _generalized wave map gauge_ is defined as \[\widetilde{\Upsilon}^{E}(g;g^{0})=\Upsilon^{E}(g;g^{0})-\theta(g;g^{0})=0 \tag{3.17}\] where \(\theta\) is a 1-form linear in \(g\), independent of the derivatives of \(g\), and satisfying \(\theta(g^{0};g^{0})=0\). We also point out that if the background metric \(g^{0}\) is the Minkowski metric, one obtains generalized wave coordinates for \(\theta(g;g^{0})\neq 0\) (see [46]) and wave coordinates for \(\theta(g;g^{0})=0\) (see [19, §6]), respectively. The system (3.9) has an additional gauge freedom: the Einstein-Maxwell equations are invariant under the gauge transformation \[(g,A)\mapsto(\phi_{*}g,\phi_{*}A+da) \tag{3.18}\] for any diffeomorphism \(\phi\colon M\to M\) and smooth function \(a\in C^{\infty}(M)\). A typical gauge for \(A\) is the _Lorenz gauge_, which is given by \(\delta_{g}A=0\).
In fact, it is more convenient to write \(\delta_{g}=-\mathrm{tr}_{g}\delta_{g}^{*}\) and to introduce the _gauge function_ \[\Upsilon^{M}(g,A;g^{0}):=\mathrm{tr}_{g}\delta_{g^{0}}^{*}A, \tag{3.19}\] where \(g^{0}\) is a fixed background metric as above. Therefore, \(\Upsilon^{M}(g,A)\) equals \(-\delta_{g}A\) up to terms of order \(0\) in \(A\). The _generalized Lorenz gauge_ is defined as \[\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=\Upsilon^{M}(g,A;g^{0})-\Upsilon^{M}(g,A^{0};g^{0})=0 \tag{3.20}\] where \(A^{0}\) is a fixed \(1\)-form.

_Remark 3.4_.: We claim that any solution \((g,A)\) to the Einstein-Maxwell equations can be transformed into another solution \((\phi^{*}g,\phi^{*}A+da)\) which satisfies the generalized wave map gauge and generalized Lorenz gauge conditions. Concretely, we define new coordinates \((y^{\alpha})\) by solving \[\begin{cases}\Box_{g}y^{\alpha}-\theta(g;g^{0})_{\beta}\nabla^{\beta}y^{\alpha}=0\\ (y^{\alpha},dy^{\alpha})|_{\Sigma_{0}}=(x^{\alpha},dx^{\alpha})|_{\Sigma_{0}}.\end{cases}\] Then the pull back \((\phi^{*}g,\phi^{*}A)\) of \((g,A)\) with respect to the diffeomorphism \(\phi(y)=x\) satisfies the generalized wave map gauge condition and is also a solution to the Einstein-Maxwell equations. Then with \((\phi^{*}g,\phi^{*}A)\), one continues to solve the equation \(\Upsilon^{M}(\phi^{*}g,\phi^{*}A+da;g^{0})=\Upsilon^{M}(\phi^{*}g,A^{0};g^{0})\), which is a linear scalar wave equation for \(a\), and thus \((\phi^{*}g,\phi^{*}A+da)\) satisfies the generalized Lorenz gauge condition and solves the Einstein-Maxwell equations.

We now consider the gauge-fixed system \[P^{E}(g,A) :=\mathrm{Ric}(g)-\tilde{\delta}_{g}^{*}\widetilde{\Upsilon}^{E}(g;g^{0})-2T(g,dA)=0 \tag{3.21}\] \[P^{M}(g,A) :=\delta_{g}dA-d\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0 \tag{3.22}\] where \(\tilde{\delta}_{g}^{*}\) is a zero order modification of \(\delta_{g}^{*}\), i.e., \(\tilde{\delta}_{g}^{*}=\delta_{g}^{*}+B\) with \(B\in C^{\infty}(M;\mathrm{Hom}(T^{*}M,S^{2}T^{*}M))\), and \((\delta_{g}^{*}u)_{\mu\nu}=\frac{1}{2}(u_{\mu;\nu}+u_{\nu;\mu})\) is the symmetric gradient. Now, (3.21)-(3.22) is a quasilinear hyperbolic system for \((g,A)\) (which is principally scalar if one multiplies the first equation by \(2\)). In order to relate the initial value problem for the Einstein-Maxwell system for \((g,A)\) to that of the quasilinear hyperbolic system (3.21)-(3.22), we need the following lemma.

**Lemma 3.5**.: _Suppose \((g,A)\) solves the gauge-fixed Einstein-Maxwell equation_ \[P(g,A)=(P^{E}(g,A),\,P^{M}(g,A))=0\] _and attains the given initial data \((g_{0},g_{1},A_{0},A_{1})\) at the initial hypersurface \(\Sigma_{0}=\{\mathfrak{t}=0\}\) which satisfy the constraint equations (3.12)-(3.14). If \((g,A)\) satisfies the generalized wave map gauge and Lorenz gauge conditions initially at \(\Sigma_{0}\), that is,_ \[\widetilde{\Upsilon}^{E}(g;g^{0})=0,\qquad\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0\quad\text{at}\quad\Sigma_{0}=\{\mathfrak{t}=0\}, \tag{3.23}\] _then \((g,A)\) solves the Einstein-Maxwell equations with the same initial data._

Proof.: We first prove that if \((g,A)\) solves \(P(g,A)=0\) and satisfies the constraint equations and the gauge conditions at \(\Sigma_{0}\), then \[\mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{E}(g;g^{0})=0,\qquad\mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0\quad\text{at}\quad\Sigma_{0}. \tag{3.24}\]
Since the constraint equations (3.12) are equivalent to the vanishing of \(G_{g}(\mathrm{Ric}(g)-2T(g,dA))\) evaluated on \((n,n)\) and \((n,X)\) at \(\Sigma_{0}\), where \(n\) is the future timelike unit normal to \(\Sigma_{0}\) and \(X\in T\Sigma_{0}\), evaluating \(G_{g}P^{E}(g,A)\) on \((n,X)\) and \((n,n)\) and using the fact that \(\widetilde{\Upsilon}^{E}(g;g^{0})=0\) at \(\Sigma_{0}\) yields \(\mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{E}(g;g^{0})=0\) at \(\Sigma_{0}\). As for the Maxwell part, the constraint equation implies that the evaluation of \(\delta_{g}dA\) on \(n\) is zero at \(\Sigma_{0}\) and thus we have \(\mathcal{L}_{\partial_{t}}\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0\) at \(\Sigma_{0}\). Applying the negative divergence operator \(\delta_{g}\) to the equation \(P^{M}(g,A)=0\) yields \[\delta_{g}d\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0.\] We thus obtain a linear hyperbolic equation for \(\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})\) with vanishing initial data. Therefore, \(\widetilde{\Upsilon}^{M}(g,A;g^{0},A^{0})=0\) on \(M\), which implies that \((g,A)\) solves the Maxwell part \(\delta_{g}dA=0\) of the Einstein-Maxwell equations. According to the second Bianchi identity \((\delta_{g}G_{g}\mathrm{Ric}(g)=0)\) and the fact that \(\delta_{g}dA=0\) (and thus \(\delta_{g}G_{g}T(g,dA)=\delta_{g}T(g,dA)=0\)), applying first the trace reversal operator \(G_{g}=\mathrm{Id}-\frac{1}{2}g\mathrm{tr}_{g}\) and then the negative divergence operator \(\delta_{g}\) to \(P^{E}(g,A)=0\) yields \[\delta_{g}G_{g}\tilde{\delta}_{g}^{*}\widetilde{\Upsilon}^{E}(g;g^{0})=0.\] We again obtain a linear hyperbolic system for \(\widetilde{\Upsilon}^{E}(g;g^{0})\) with vanishing initial data, and thus \(\widetilde{\Upsilon}^{E}(g;g^{0})=0\) on \(M\), which implies that \((g,A)\) solves the Einstein part \(\mathrm{Ric}(g)-2T(g,dA)=0\) of the Einstein-Maxwell equations. This completes the proof.

Owing to Lemma 3.5, in order to solve the initial value problem for the Einstein-Maxwell equations, it suffices to construct the gauged initial data satisfying the conditions (3.23) and solve the gauge-fixed Einstein-Maxwell equations (3.21)-(3.22) with the constructed gauged initial data. We postpone the construction of the gauged initial data to §4.4.

### Initial value problems for the linearized system

In this work, we are interested in understanding the long time behavior of a general solution to the linearized Einstein-Maxwell equations. Suppose \((g,F)\) is a solution to the Einstein-Maxwell system (3.1)-(3.2) on a spacetime \(M=\mathbb{R}\times\Sigma_{0}\). Then around \((g,F)\), the _linearized Einstein-Maxwell equations_ are defined as \[\begin{split} D_{g}\mathrm{Ric}(\dot{g})-2D_{g,F}T(\dot{g},\dot{F}):=\frac{d}{ds}\Big{|}_{s=0}\Big{(}(\mathrm{Ric}-2T)(g+s\dot{g},F+s\dot{F})\Big{)}=0,\\ d\dot{F}=0,\quad D_{g,F}(\delta_{(\cdot)}(\cdot))(\dot{g},\dot{F}):=\frac{d}{ds}(\delta_{g+s\dot{g}}(F+s\dot{F}))\Big{|}_{s=0}=0.\end{split} \tag{3.25}\] Suppose \(\Sigma_{0}\) is spacelike for \(g\). We denote by \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) the linearization of the initial data map \(\tau\) around \((g,F)\). We note that \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) are the initial data we impose in the initial value problem for the linearized Einstein-Maxwell equations (3.25).
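For orientation, we record an immediate consequence of (3.25) (our remark; it follows since \(T(g,F)\) is quadratic in \(F\)): if the background electromagnetic field vanishes, \(F=0\), then \(D_{g,F}T(\dot{g},\dot{F})=0\) and \(D_{g,F}(\delta_{(\cdot)}(\cdot))(\dot{g},\dot{F})=\delta_{g}\dot{F}\), so the linearized system decouples into the linearized vacuum Einstein equations and the source-free Maxwell equations,
\[D_{g}\mathrm{Ric}(\dot{g})=0,\qquad d\dot{F}=0,\quad\delta_{g}\dot{F}=0.\]
On the charged backgrounds of interest here this decoupling fails, and the coupling terms are computed explicitly in (4.64)-(4.65) below.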
As in the case of the non-linear Einstein-Maxwell equations, where the initial data should satisfy the constraint equations, the initial data \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) for the linearized Einstein-Maxwell equations are required to satisfy the _linearized constraint equations_ (see [19, §7.2]), which are defined as the linearization of (3.12)-(3.13) around the data \((h,k,\mathbf{E},\mathbf{H})\) induced by \((g,F)\). Motivated by (3.5) and (3.6), we introduce the following transformation \[\dot{F}_{\theta} :=\cos(\theta)\dot{F}+\sin(\theta)\frac{d}{ds}\Big{|}_{s=0}\Big{(}\star_{g+s\dot{g}}(F+s\dot{F})\Big{)}, \tag{3.26}\] \[(\dot{\mathbf{E}}_{\theta},\dot{\mathbf{H}}_{\theta}) :=(\cos(\theta)\dot{\mathbf{E}}-\sin(\theta)\dot{\mathbf{H}},\,\sin(\theta)\dot{\mathbf{E}}+\cos(\theta)\dot{\mathbf{H}}). \tag{3.27}\] Then we have \[(\dot{h},\dot{k},\dot{\mathbf{E}}_{\theta},\dot{\mathbf{H}}_{\theta})=D_{(g,F_{\theta})}\tau(\dot{g},\dot{F}_{\theta}).\] Now we prove the analogue of Lemma 3.1 at the linearized level.

**Lemma 3.6**.: _If \((\dot{g},\dot{F})\) solves the linearized Einstein-Maxwell equations (3.25) linearized around \((g,F)\), then \((\dot{g},\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})\) solves (3.25) linearized around \((g,F_{\theta})\) for any \(\theta,c\in\mathbb{R}\)._

Proof.: Since \(dF=0\) and \(d\Big{(}\frac{d}{ds}\Big{|}_{s=0}\big{(}\star_{g+s\dot{g}}(F+s\dot{F})\big{)}\Big{)}=0\), it follows that \[d(\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})=\cos(\theta)d\dot{F}+\sin(\theta)d\Big{(}\frac{d}{ds}\Big{|}_{s=0}\big{(}\star_{g+s\dot{g}}(F+s\dot{F})\big{)}\Big{)}+c\,dF_{\theta+\frac{\pi}{2}}=0.\] We next calculate \[\frac{d}{ds}\Big{|}_{s=0}\Big{(}\star_{g+s\dot{g}}\big{(}F_{\theta}+s(\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})\big{)}\Big{)} =\frac{d}{ds}\Big{|}_{s=0}\Big{(}\star_{g+s\dot{g}}\big{(}F_{\theta}+s\dot{F}_{\theta}\big{)}\Big{)}+c\star_{g}F_{\theta+\frac{\pi}{2}}\] \[=\cos(\theta)\frac{d}{ds}\Big{|}_{s=0}\Big{(}\star_{g+s\dot{g}}(F+s\dot{F})\Big{)}-\sin(\theta)\dot{F}+c\star_{g}F_{\theta+\frac{\pi}{2}}\] and this implies \[\frac{d}{ds}\Big{|}_{s=0}\Big{(}\delta_{g+s\dot{g}}\big{(}F_{\theta}+s(\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})\big{)}\Big{)}=0.\] According to the calculation for \(T(g,F)\) in the proof of Lemma 3.1, with \((F+s\dot{F})_{\theta}=\cos(\theta)(F+s\dot{F})+\sin(\theta)\star_{g+s\dot{g}}(F+s\dot{F})\) we have \[T(g+s\dot{g},F+s\dot{F}) =T(g+s\dot{g},(F+s\dot{F})_{\theta})=T(g+s\dot{g},F_{\theta}+s\dot{F}_{\theta})+\mathcal{O}(s^{2})\] and thus \[D_{(g,F_{\theta})}T(\dot{g},\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}}) =D_{(g,F)}T(\dot{g},\dot{F})+\frac{c}{2}g^{\lambda\sigma}((F_{\theta})_{\mu\lambda}(F_{\theta+\frac{\pi}{2}})_{\nu\sigma}+(F_{\theta+\frac{\pi}{2}})_{\mu\lambda}(F_{\theta})_{\nu\sigma})\] \[\quad+\frac{c}{2}g^{\lambda\sigma}((\star_{g}F_{\theta})_{\mu\lambda}(\star_{g}F_{\theta+\frac{\pi}{2}})_{\nu\sigma}+(\star_{g}F_{\theta+\frac{\pi}{2}})_{\mu\lambda}(\star_{g}F_{\theta})_{\nu\sigma})\] \[=D_{(g,F)}T(\dot{g},\dot{F}).\] This implies that \((\dot{g},\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})\) solves the linearization of the equation \(\mathrm{Ric}-2T=0\) around \((g,F_{\theta})\).

If \(M\cong\mathbb{R}\times\Sigma_{0}\), we can define linearized charges as follows \[\dot{Q}_{e}(\dot{g},\dot{F}):=\frac{1}{4\pi}\int_{S}\frac{d}{ds}\Big{|}_{s=0}\star_{g+s\dot{g}}(F+s\dot{F}),\qquad\dot{Q}_{m}(\dot{F}):=\frac{1}{4\pi}\int_{S}\frac{d}{ds}\Big{|}_{s=0}(F+s\dot{F})=\frac{1}{4\pi}\int_{S}\dot{F}. \tag{3.28}\]
If \((\dot{g},\dot{F})\) solves the linearized Einstein-Maxwell equations, by Stokes' theorem and the linearized equations (3.25), \(\dot{Q}_{e}\) and \(\dot{Q}_{m}\) are independent of the specific choice of \(S\). In particular, we can choose \(S\subset\Sigma_{0}\), and use the linearization of (3.3) around \((g,F)\) to find that \[\dot{Q}_{e}(\dot{g},\dot{F}) =\dot{Q}_{e}(\dot{h},\dot{\mathbf{E}}):=\frac{1}{4\pi}\int_{S}\frac{d}{ds}\Big{|}_{s=0}\star_{h_{s}}\mathbf{E}_{s},\] \[\dot{Q}_{m}(\dot{F}) =\dot{Q}_{m}(\dot{\mathbf{H}}):=\frac{1}{4\pi}\int_{S}\frac{d}{ds}\Big{|}_{s=0}\star_{h_{s}}\mathbf{H}_{s},\] where \(h_{s}=h+s\dot{h}\), \(\mathbf{E}_{s}=\mathbf{E}+s\dot{\mathbf{E}}\) and \(\mathbf{H}_{s}=\mathbf{H}+s\dot{\mathbf{H}}\). We note that the analogue of Lemma 3.2 holds for the linearized charges.

**Lemma 3.7**.: _Let \((\dot{g},\dot{F})\) solve the linearized Einstein-Maxwell equations around \((g,F)\). In the setting (3.7), there exist \(c,\theta\in\mathbb{R}\) such that \(Q_{m}(F_{\theta})=0\) and \(\dot{Q}_{m}(\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})=0\), with \(F_{\theta+\frac{\pi}{2}}\) and \(\dot{F}_{\theta}\) defined in (3.5) and (3.26), respectively._

Proof.: First, according to Lemma 3.2, there exists \(\theta\in\mathbb{R}\) such that \(Q_{m}(F_{\theta})=0\). Since \[Q_{m}^{2}(F_{\theta+\frac{\pi}{2}})=Q_{e}(g,F_{\theta})^{2}=Q_{m}(F)^{2}+Q_{e}(g,F)^{2}\neq 0\] (here we use that the background charges \(Q_{m}(F)\) and \(Q_{e}(g,F)\) do not both vanish in our setting), we can choose \[c=-\frac{\dot{Q}_{m}(\dot{F}_{\theta})}{Q_{m}(F_{\theta+\frac{\pi}{2}})},\] with which we have \(\dot{Q}_{m}(\dot{F}_{\theta}+cF_{\theta+\frac{\pi}{2}})=\dot{Q}_{m}(\dot{F}_{\theta})+cQ_{m}(F_{\theta+\frac{\pi}{2}})=0\). This finishes the proof.

Therefore, in addition to the linearized constraint equations, we may assume that \(\dot{Q}_{m}(\dot{F})=0\), which is an analogue of condition (3.14) and can always be arranged by Lemma 3.7. Concretely, without loss of generality, suppose that \(F\) satisfies \(Q_{m}(F)=0\). Then for any \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) satisfying the linearized constraint equations, by choosing a suitable \(c\in\mathbb{R}\) such that \(\dot{Q}_{m}(\dot{\mathbf{H}}_{\theta})-cQ_{m}(\mathbf{E})=0\), the condition (3.14) is arranged.
In the context of the present paper, given initial data satisfying the linearized constraints and having vanishing linearized magnetic charge, we solve the following linearized Einstein-Maxwell system \[\mathscr{L}(\dot{g},\dot{A})=\big{(}\mathscr{L}_{1}(\dot{g},\dot{A}),\mathscr{L}_{2}(\dot{g},\dot{A})\big{)}=0, \tag{3.29}\] where \[\begin{array}{l}\mathscr{L}_{1}(\dot{g},\dot{A})=D_{g}\mathrm{Ric}(\dot{g})-2D_{(g,dA)}T(\dot{g},d\dot{A}),\\ \mathscr{L}_{2}(\dot{g},\dot{A})=D_{(g,A)}(\delta_{(\cdot)}d(\cdot))(\dot{g},\dot{A}).\end{array} \tag{3.30}\] By choosing \((g,A)\), around which we linearize the Einstein-Maxwell equations, as the background Lorentzian metric and 1-form in the gauge 1-form \(\Upsilon^{E}\), gauge source function \(\theta\) and gauge function \(\Upsilon^{M}\), one can consider the initial value problem for the corresponding linearized gauge-fixed Einstein-Maxwell equations \[\begin{array}{l}L^{E}(\dot{g},\dot{A}):=D_{g}(\mathrm{Ric})(\dot{g})-\tilde{\delta}_{g}^{*}D_{g}\Big{(}\Upsilon^{E}-\theta\Big{)}(\dot{g};g)-2D_{(g,dA)}T(\dot{g},d\dot{A})=0,\\ L^{M}(\dot{g},\dot{A}):=D_{(g,A)}(\delta_{(\cdot)}d(\cdot))(\dot{g},\dot{A})-d\Big{(}D_{(g,A)}\Upsilon^{M}(\dot{g},\dot{A};g)-D_{g}\Upsilon^{M}(\dot{g},A;g)\Big{)}=0.\end{array} \tag{3.31}\] We recall from [117, §7.5] and [54, §3] that \[(D_{g}\mathrm{Ric})(\dot{g})=-\frac{1}{2}\Box_{g}\dot{g}-\delta_{g}^{*}\delta_{g}G_{g}\dot{g}+\mathscr{R}_{g}(\dot{g}), \tag{3.32}\] where \(\Box_{g}=\nabla^{\alpha}\nabla_{\alpha}\) is the tensor wave operator, \(\mathscr{R}_{g}(\dot{g})_{\mu\nu}=R(g)^{\kappa}_{\ \mu\nu}{}^{\lambda}\dot{g}_{\kappa\lambda}+\frac{1}{2}(\mathrm{Ric}(g)^{\kappa}_{\mu}\ \dot{g}_{\nu\kappa}+\mathrm{Ric}(g)^{\kappa}_{\nu}\ \dot{g}_{\mu\kappa})\), and \[D_{g}\Upsilon^{E}(\dot{g};g)=-\delta_{g}G_{g}\dot{g}. \tag{3.33}\] Since \(\tilde{\delta}_{g}^{*}\) is a zero order modification of \(\delta_{g}^{*}\) and \(D_{(g,dA)}T(\dot{g},d\dot{A})\) is of order \(0\) in \(\dot{g}\) and of order \(1\) in \(\dot{A}\), it follows that the first equation in (3.31) has the principal part \(-\frac{1}{2}\Box_{g}\dot{g}\). We define the _linearized generalized wave map gauge_ as \[D_{g}\widetilde{\Upsilon}^{E}(\dot{g};g)=D_{g}(\Upsilon^{E}-\theta)(\dot{g};g)=0. \tag{3.34}\] Analogously, we define the _linearized generalized Lorenz gauge_ as \[D_{(g,A)}\widetilde{\Upsilon}^{M}(\dot{g},\dot{A};g,A)=D_{(g,A)}\Upsilon^{M}(\dot{g},\dot{A};g)-D_{g}\Upsilon^{M}(\dot{g},A;g)=-\delta_{g}\dot{A}=0, \tag{3.35}\] and the second equation in (3.31) then has principal part \((\delta_{g}d+d\delta_{g})\dot{A}\) and only involves up to first derivatives of \(\dot{g}\). Thus, the linearized system (3.31) is a linear hyperbolic system for \((\dot{g},\dot{A})\), whose initial value problem we are able to solve. As discussed in the non-linear case, in order to relate the initial value problem for the linearized Einstein-Maxwell system for \((\dot{g},\dot{A})\) to that of the linear hyperbolic system (3.31), we need an analogue of Lemma 3.5 at the linearized level.

**Lemma 3.8**.: _Suppose \((\dot{g},\dot{A})\) solves the linearized gauge-fixed Einstein-Maxwell equation (3.31)_ \[L(\dot{g},\dot{A})=(L^{E}(\dot{g},\dot{A}),\,L^{M}(\dot{g},\dot{A}))=0\] _and attains the given initial data \((\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\) at the initial hypersurface \(\Sigma_{0}=\{\mathfrak{t}=0\}\) which satisfy the linearized constraint equations._
_If \((\dot{g},\dot{A})\) satisfies the linearized generalized wave map gauge and Lorenz gauge conditions initially at \(\Sigma_{0}\), that is,_ \[D_{g}\widetilde{\Upsilon}^{E}(\dot{g};g)=0,\qquad\delta_{g}\dot{A}=0\quad\text{at}\quad\Sigma_{0}, \tag{3.36}\] _then \((\dot{g},\dot{A})\) solves the linearized Einstein-Maxwell equations with the same initial data._

Proof.: The proof is completely analogous to that of Lemma 3.5.

Therefore, in order to solve the initial value problem for the linearized Einstein-Maxwell equations, we again need to construct the gauged initial data satisfying the conditions (3.36). We postpone the construction of the gauged initial data to §4.4.

Finally, we discuss the pure gauge solutions generated from the invariance of the Einstein-Maxwell system under the gauge transformation introduced in the non-linear setting, \((g,A)\mapsto(\phi^{*}g,\phi^{*}A+da)\). Linearizing the gauge transformation around \((\phi,a)=(\mathrm{Id},0)\) on a solution \((\dot{g},\dot{A})\) of (3.29) yields the following transformation acting on \((\dot{g},\dot{A})\) \[(\dot{g},\dot{A})\mapsto(\dot{g}+\mathcal{L}_{V}g,\dot{A}+\mathcal{L}_{V}A+da), \tag{3.37}\] and the linearized Einstein-Maxwell system is invariant under the above transformation (3.37). We note that writing \(V=\omega^{\sharp}\), we have \(\mathcal{L}_{\omega^{\sharp}}g=2\delta_{g}^{*}\omega\). For notational convenience, we introduce the following definition:

**Definition 3.9** ([60, Definition 2.4]).: For a vector field \(V\) and a tensor \(T\), define \(\tilde{\mathcal{L}}_{T}V:=\mathcal{L}_{V}T\).

According to Remark 3.4 in the non-linear setting, any solution \((g,A)\) can be transformed into another one satisfying the generalized wave map and Lorenz gauge conditions. Here we state its analogue in the linear case.

_Remark 3.10_.: Any solution \((\dot{g},\dot{A})\) to the linearized Einstein-Maxwell equations can be transformed into another solution \((\dot{g}+\mathcal{L}_{V}g,\dot{A}+\mathcal{L}_{V}A+da)\) which satisfies the linearized generalized wave map gauge and Lorenz gauge conditions. Concretely, we first solve the equation \[D_{g}\widetilde{\Upsilon}^{\mathrm{E}}(\dot{g}+\mathcal{L}_{V}g;g)=0 \tag{3.38}\] for the vector field \(V\). The equation (3.38) is a wave equation since its principal part is given by \(-\Box_{g}V^{\flat}\). Then we solve \[\delta_{g}(\dot{A}+\mathcal{L}_{V}A+da)=0, \tag{3.39}\] whose principal part is \(-\Box_{g}a\), for the scalar function \(a\). If we choose trivial Cauchy data for \(V\) and \(a\) on \(\Sigma_{0}\), then \((\dot{g}+\mathcal{L}_{V}g,\dot{A}+\mathcal{L}_{V}A+da)\) solves (3.29), satisfies the linearized generalized wave map and Lorenz gauge conditions, and attains the same initial data \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) as \((\dot{g},\dot{A})\).

## 4. Kerr-Newman black holes

In §4.1, we introduce the Reissner-Nordstrom (RN) family of black holes, which is parametrized by \((\mathbf{m}_{0},0,\mathbf{Q}_{0})\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\) with \(\mathbf{m}_{0}>0,|\mathbf{Q}_{0}|<\mathbf{m}_{0}\). In §4.2, we introduce the Kerr-Newman (KN) family of black holes with parameters \((\mathbf{m},\mathbf{a},\mathbf{Q})\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\) close to \((\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and define this family of slowly rotating KN metrics as a smooth family of stationary metrics on a fixed manifold \(M\).
In §4.3, we introduce several vector bundles over \(M\) and analyze, near infinity, the structure of the differential operators which will be used in the subsequent sections. In §4.4, we will discuss how to construct the gauged Cauchy data for the gauge-fixed Einstein-Maxwell system from the given initial data \((h,k,\mathbf{E},\mathbf{H})\) and also consider its linearized version.

### The Reissner-Nordstrom family

Given a set of parameters \[b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R},\quad\mathbf{m}_{0}>0,\ |\mathbf{Q}_{0}|<\mathbf{m}_{0}, \tag{4.1}\] where the parameter \(\mathbf{m}_{0}>0\) denotes the mass and \(\mathbf{Q}_{0}\) denotes the charge, the Reissner-Nordstrom (RN) solution to the Einstein-Maxwell equations is then given by the spherically symmetric and static RN metric \[g_{b_{0}}=-\mu_{b_{0}}(r)\,dt^{2}+\mu_{b_{0}}(r)^{-1}\,dr^{2}+r^{2}\not{g}, \tag{4.2}\] where \(\not{g}\) is the round metric on \(\mathbb{S}^{2}\) and \[\mu_{b_{0}}(r)=1-\frac{2\mathbf{m}_{0}}{r}+\frac{\mathbf{Q}_{0}^{2}}{r^{2}}, \tag{4.3}\] and the RN 4-potential for the electromagnetic field \[\tilde{A}_{b_{0}}=\frac{\mathbf{Q}_{0}}{r}dt,\quad\tilde{F}_{b_{0}}=d\tilde{A}_{b_{0}}=-\frac{\mathbf{Q}_{0}}{r^{2}}dr\wedge dt. \tag{4.4}\] When \(|\mathbf{Q}_{0}|<\mathbf{m}_{0}\), the expression \(\mu_{b_{0}}\) has two distinct positive roots \(\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}}=r_{c}<r_{b_{0}}=\mathbf{m}_{0}+\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}}\). There is a Cauchy horizon at \(r=r_{c}\) and an _event horizon_ \(\mathcal{H}\) at \(r_{b_{0}}\). The expressions (4.2) and (4.4) are valid in the _static region_, which is sometimes also called the exterior region or domain of outer communication, \[\mathcal{M}=\mathbb{R}_{t}\times\mathcal{X},\quad\mathcal{X}=(r_{b_{0}},\infty)_{r}\times\mathbb{S}^{2}. \tag{4.5}\] The singularity of the expression (4.2) at \(r=r_{b_{0}}\) is a coordinate singularity and can be removed by a change of coordinates. We define a new coordinate system \((t_{0},r,\theta,\varphi)\), called _incoming Eddington-Finkelstein coordinates_, by means of the function \(t_{0}\) \[t_{0}=t+r_{b_{0},*},\quad\frac{dr_{b_{0},*}}{dr}=\frac{1}{\mu_{b_{0}}}, \tag{4.6}\] where \(r_{b_{0},*}\) is called the _tortoise coordinate_ or _Regge-Wheeler radial coordinate_. In this coordinate system, the RN metric is given by \[g_{b_{0}}=-\mu_{b_{0}}(r)dt_{0}^{2}+2\,dt_{0}dr+r^{2}\not{g}. \tag{4.7}\] Now the expression (4.7) for \(g_{b_{0}}\) extends analytically beyond the event horizon \(\mathcal{H}=\{r=r_{b_{0}}\}\), and thus \(g_{b_{0}}\) is a smooth and non-degenerate metric on the extended manifold \[M=\mathbb{R}_{t_{0}}\times X\supset\mathcal{M},\quad X=[r_{-},\infty)_{r}\times\mathbb{S}^{2}\supset\mathcal{X}, \tag{4.8}\] where \(r_{-}\in(r_{c},r_{b_{0}})\). At the same time, we can take the RN 4-potential to be \(\mathbf{Q}_{0}r^{-1}\,dt_{0}\), which differs from \(\tilde{A}_{b_{0}}\) by an exact form \(d(\int\mathbf{Q}_{0}r^{-1}\mu^{-1}\,dr)\). Since \(dt\) is timelike when \(r\) is sufficiently large, to capture this useful structure we introduce another coordinate \[t_{\chi_{0}}:=t+\int\frac{\chi_{0}(r)}{\mu_{b_{0}}(r)}\,dr, \tag{4.9}\] where \(\chi_{0}(r)\in C^{\infty}(\mathbb{R})\) is identically \(1\) for \(r\leq 4r_{b_{0}}/3\) and vanishes for \(r\geq 3r_{b_{0}}/2\).
Therefore, \(t_{\chi_{0}}\) is a smooth interpolation between \(t_{0}\) near the event horizon and \(t\) near infinity \(r=\infty\), with a suitable choice of the constant of integration. Then we define the RN 4-potential as \[A_{b_{0}}=\frac{\mathbf{Q}_{0}}{r}dt_{\chi_{0}}. \tag{4.10}\] In order to describe the waves near the future event horizon and future null infinity, we introduce the following function \[t_{*}=t_{b_{0},*}:=t+r_{b_{0},*}\chi_{0}(r)-r_{b_{0},*}(1-\chi_{0}(r)), \tag{4.11}\] where \(\chi_{0}(r)\) is defined as in (4.9); it smoothly interpolates between \(t+r_{b_{0},*}\) near the event horizon and \(t-r_{b_{0},*}\) near future null infinity. Later on, we will study forcing problems for wave equations of the type \(\Box_{g_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}}u=f\) on the manifold \[\mathbb{R}_{t_{*}}\times X,\quad X=[r_{-},\infty)_{r}\times\mathbb{S}^{2},\] where \(f\) is supported in \(t_{*}\geq 0\). Recalling Definition 2.1 of the radial compactification, we now compactify \(X\) in a similar manner \[\bar{X}:=\big{(}X\sqcup([0,r_{b_{0}}^{-1})_{\rho}\times\mathbb{S}^{2})\big{)}\big{/}\sim \tag{4.12}\] where \(\sim\) identifies \((\rho,\omega)\in(0,r_{b_{0}}^{-1})_{\rho}\times\mathbb{S}^{2}\) with the point \(\rho^{-1}\omega\in X\). Therefore, we have \(\bar{X}=\overline{\{r\geq r_{-}\}}\subset\overline{\mathbb{R}^{3}}\) with \(\partial_{-}\bar{X}=r^{-1}(r_{-})\) and \(\partial_{+}\bar{X}=\partial\overline{\mathbb{R}^{3}}=\{\rho=0\}\subset\bar{X}\). Then we see that \(\mathbb{R}_{t_{*}}\times\partial\bar{X}\) has two components: one is a spacelike hypersurface inside the black hole, given by \[\Sigma_{\text{fin}}:=\mathbb{R}_{t_{*}}\times\partial_{-}\bar{X}, \tag{4.13}\] and the other is the future null infinity \(\mathbb{R}_{t_{*}}\times\partial_{+}\bar{X}\), which is also denoted by \(\mathscr{I}^{+}\). See Figure 3 for the level sets of \(t,\ t_{0},\ t_{\chi_{0}},\ t_{*}\).

### The slowly rotating Kerr-Newman family

Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) be a parameter of a RN black hole. Consider Kerr-Newman black hole parameters \[b=(\mathbf{m},\mathbf{a},\mathbf{Q})\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}, \tag{4.14}\] where \(\mathbf{a}\in\mathbb{R}^{3}\) is the angular momentum. If \(a=|\mathbf{a}|\neq 0\), we choose the polar coordinates \((\theta,\varphi)\) on \(\mathbb{S}^{2}\) such that \(\hat{\mathbf{a}}=\mathbf{a}/|\mathbf{a}|\) is defined by \(\theta=0\), and the vector field \(\partial_{\varphi}\) generates counterclockwise rotation around \(\mathbf{a}/|\mathbf{a}|\), with \(\mathbb{R}^{3}\) carrying the standard orientation.
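Before passing to Boyer-Lindquist coordinates, we record a quick symbolic sanity check (our addition; the script and all names in it are ours) that the static-region Reissner-Nordstrom pair (4.2), (4.4) from §4.1 indeed solves (3.1)-(3.2), i.e. \(\mathrm{Ric}(g)=2T(g,F)\) with \(T_{\mu\nu}=g^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\):

```python
# Minimal sympy sketch: check that the Reissner-Nordstrom metric
# g = -mu dt^2 + mu^{-1} dr^2 + r^2 (dtheta^2 + sin^2 theta dphi^2)
# with F = -(Q/r^2) dr /\ dt satisfies Ric(g) = 2 T(g, F).
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
m, Q = sp.symbols('m Q', positive=True)
x = [t, r, th, ph]
mu = 1 - 2*m/r + Q**2/r**2

g = sp.diag(-mu, 1/mu, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d]))/2 for d in range(4)))
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + quadratic terms
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
                           + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
                                 for d in range(4)) for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))

# F = dA with A = (Q/r) dt, i.e. F_{tr} = Q/r^2, F_{rt} = -Q/r^2
F = sp.zeros(4, 4)
F[0, 1], F[1, 0] = Q/r**2, -Q/r**2

Fup = ginv*F*ginv                                   # F^{ab}
F2 = sum(F[a, b]*Fup[a, b] for a in range(4) for b in range(4))
T = sp.Matrix(4, 4, lambda a, b: sum(F[a, c]*F[b, d]*ginv[c, d]
                                     for c in range(4) for d in range(4)) - g[a, b]*F2/4)

print(sp.simplify(Ric - 2*T))                       # expected: the zero matrix
```

Setting \(Q=0\) in the same script reduces the check to the Schwarzschild vacuum identity \(\mathrm{Ric}(g)=0\).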
A very convenient coordinate system for the Kerr-Newman metric is Boyer-Lindquist coordinates \((t,r,\theta,\varphi)\) (see [15]), in which the Kerr-Newman metric takes the form \[\begin{split} g_{b}^{\mathrm{BL}}&=-\frac{\Delta_{b}}{\rho_{b}^{2}}(dt-a\sin^{2}\theta\,d\varphi)^{2}+\rho_{b}^{2}\Big{(}\frac{dr^{2}}{\Delta_{b}}+d\theta^{2}\Big{)}+\frac{\sin^{2}\theta}{\rho_{b}^{2}}\big{(}a\,dt-(r^{2}+a^{2})d\varphi\big{)}^{2},\\ (g_{b}^{\mathrm{BL}})^{-1}&=-\frac{1}{\Delta_{b}\rho_{b}^{2}}\big{(}(r^{2}+a^{2})\partial_{t}+a\partial_{\varphi}\big{)}^{2}+\frac{\Delta_{b}}{\rho_{b}^{2}}\partial_{r}^{2}+\frac{1}{\rho_{b}^{2}}\partial_{\theta}^{2}+\frac{1}{\rho_{b}^{2}\sin^{2}\theta}(\partial_{\varphi}+a\sin^{2}\theta\,\partial_{t})^{2},\\ \Delta_{b}&=r^{2}-2\mathbf{m}r+\mathbf{Q}^{2}+a^{2},\ \ \rho_{b}^{2}=r^{2}+a^{2}\cos^{2}\theta,\quad a=|\mathbf{a}|.\end{split} \tag{4.15}\] In the Boyer-Lindquist coordinates, the KN 4-potential reads \[\tilde{A}_{b}=\frac{\mathbf{Q}r}{\rho_{b}^{2}}(dt-a\sin^{2}\theta\,d\varphi)\] and the corresponding electromagnetic 2-form is \[\begin{split}\tilde{F}_{b}&=-\frac{\mathbf{Q}}{\rho_{b}^{4}}\Big{(}(r^{2}-a^{2}\cos^{2}\theta)dr\wedge(dt-a\sin^{2}\theta\,d\varphi)\Big{)}\\ &\quad+\frac{\mathbf{Q}r}{\rho_{b}^{4}}\Big{(}2a\cos\theta\sin\theta\,d\theta\wedge(a\,dt-(r^{2}+a^{2})\,d\varphi)\Big{)}.\end{split}\] We note that \((g_{b}^{\mathrm{BL}},\tilde{A}_{b})\) is a solution to the Einstein-Maxwell equations.

In the initial value problem for the Einstein-Maxwell equations, one places asymptotically flat initial data on a Cauchy hypersurface \[\Sigma_{0}\subset M,\] which is spacelike and equal to \(t^{-1}(0)\) where \(r\) is large.

Figure 3. Penrose diagram of the Reissner-Nordström metric (including the future/past event horizon \(\mathcal{H}^{\pm}\), future/past null infinity \(\mathscr{I}^{\pm}\), future/past timelike infinity \(i^{\pm}\), and spacelike infinity \(i^{0}\)), together with the level sets of the static time function \(t\), of the null coordinate \(t_{0}\) from (4.6), of the coordinate \(t_{\chi_{0}}\) from (4.9), and of the function \(t_{*}\) from (4.11). Also shown are the boundaries \(\Sigma_{\mathrm{fin}}=\mathbb{R}_{t_{*}}\times\partial_{-}\bar{X}\) and \(\mathbb{R}_{t_{*}}\times\partial_{+}\bar{X}\) (future null infinity \(\mathscr{I}^{+}\)).

We introduce the following coordinates \[\mathfrak{t}=t+\int\chi_{1}(r)\Big{(}\frac{r^{2}+a^{2}}{\Delta_{b}}\sqrt{1-\frac{\Delta_{b}}{r^{2}+a^{2}}}\Big{)}dr,\quad\varphi_{1}=\varphi+\int\chi_{1}(r)\frac{a}{\Delta_{b}}\,dr \tag{4.16}\] where \(\chi_{1}(r)\) is a nonnegative smooth function such that \(\chi_{1}(r)=1\) for \(r\leq 3\mathbf{m}\) and \(\chi_{1}(r)=0\) for \(r\geq 4\mathbf{m}\). In the new coordinates \((\mathfrak{t},r,\theta,\varphi_{1})\), \(g_{b}^{\mathrm{BL}}\) and \((g_{b}^{\mathrm{BL}})^{-1}\) are smooth at the event horizon, and furthermore \[(g_{b}^{\mathrm{BL}})^{-1}(d\mathfrak{t},d\mathfrak{t})=-1+\frac{\chi_{1}^{2}-1}{\rho_{b}^{2}}\Big{(}\frac{(r^{2}+a^{2})^{2}}{\Delta_{b}}-(r^{2}+a^{2})\Big{)}=\begin{cases}-1,&r\leq 3\mathbf{m}\\ \leq-1,&3\mathbf{m}\leq r\leq 4\mathbf{m}\\ (g_{b}^{\mathrm{BL}})^{-1}(dt,dt),&r\geq 4\mathbf{m}\end{cases}.\] By choosing a proper integration constant we have \(\mathfrak{t}=t\) for \(r\geq 4\mathbf{m}\). Then we impose asymptotically flat initial data on the Cauchy hypersurface \[\Sigma_{0}:=\{\mathfrak{t}=0\}\subset M. \tag{4.17}\]
In the present paper, we consider the parameters \((\mathbf{m},\mathbf{a},\mathbf{Q})\) close to \((\mathbf{m}_{0},0,\mathbf{Q}_{0})\), that is, we work on slowly rotating (\(|\mathbf{a}|\ll\mathbf{m}+|\mathbf{Q}|\)) Kerr-Newman black holes. As in the RN case, the singularity of the expression (4.15) of the KN metric at \[r=r_{b}:=\mathbf{m}+\sqrt{\mathbf{m}^{2}-(|\mathbf{Q}|^{2}+|\mathbf{a}|^{2})} \tag{4.18}\] is a coordinate singularity and can be removed by introducing new coordinates. We let \[t_{b,\chi}=t+\int\frac{r^{2}+a^{2}}{\Delta_{b}}\chi(r)\,dr,\quad\varphi_{b,\chi}=\varphi+\int\frac{a}{\Delta_{b}}\chi(r)\,dr, \tag{4.19}\] where \(\chi(r)\in C^{\infty}(\mathbb{R})\) is identically \(1\) near \(\{r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}\). Then the metric \(g_{b}^{\mathrm{BL}}\) and its inverse take the form \[\begin{split} g_{b}^{\mathrm{BL}}=-\frac{\Delta_{b}}{\rho_{b}^{2}}(dt_{b,\chi}&-a\sin^{2}\theta\,d\varphi_{b,\chi})^{2}+2\chi(dt_{b,\chi}-a\sin^{2}\theta\,d\varphi_{b,\chi})dr\\ &+\frac{\rho_{b}^{2}}{\Delta_{b}}(1-\chi^{2})dr^{2}+\rho_{b}^{2}\,d\theta^{2}+\frac{\sin^{2}\theta}{\rho_{b}^{2}}\big{(}a\,dt_{b,\chi}-(r^{2}+a^{2})d\varphi_{b,\chi}\big{)}^{2},\\ (g_{b}^{\mathrm{BL}})^{-1}=\rho_{b}^{-2}\bigg{(}&\Delta_{b}\partial_{r}^{2}+\frac{\chi^{2}-1}{\Delta_{b}}\Big{(}(r^{2}+a^{2})\partial_{t_{b,\chi}}+a\partial_{\varphi_{b,\chi}}\Big{)}^{2}+\partial_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\Big{(}\partial_{\varphi_{b,\chi}}+a\sin^{2}\theta\partial_{t_{b,\chi}}\Big{)}^{2}\\ &+2\chi\Big{(}(r^{2}+a^{2})\partial_{t_{b,\chi}}+a\partial_{\varphi_{b,\chi}}\Big{)}\partial_{r}\bigg{)},\end{split} \tag{4.20}\] which is smooth and non-degenerate on the extended manifold \(M_{b}=\mathbb{R}_{t_{b,\chi}}\times[r_{-},\infty)_{r}\times\mathbb{S}_{\theta,\varphi_{b,\chi}}^{2}\) with \(r_{-}\) defined in §4.1. Correspondingly, we have \[\tilde{A}_{b}=\frac{\mathbf{Q}r}{\rho_{b}^{2}}(dt_{b,\chi}-a\sin^{2}\theta\,d\varphi_{b,\chi})-\chi\frac{\mathbf{Q}r}{\Delta_{b}}dr.\] Since the last term is an exact \(1\)-form which makes no contribution to the electromagnetic \(2\)-form, it follows that one can define the KN \(4\)-potential on \(M_{b}\) as \[A_{b}^{\mathrm{BL}}:=\frac{\mathbf{Q}r}{\rho_{b}^{2}}(dt_{b,\chi}-a\sin^{2}\theta\,d\varphi_{b,\chi}). \tag{4.21}\]

Near the poles \(\theta=0,\pi\) of \(\mathbb{S}^{2}\), we introduce the smooth coordinates \(x=\sin\theta\cos\varphi_{b,\chi}\) and \(y=\sin\theta\sin\varphi_{b,\chi}\). Then we have \(\sin^{2}\theta=x^{2}+y^{2}\) and \[\partial_{\varphi_{b,\chi}}=x\partial_{y}-y\partial_{x},\quad\partial_{\theta}=\cos\theta\cos\varphi_{b,\chi}\partial_{x}+\cos\theta\sin\varphi_{b,\chi}\partial_{y}.\] Therefore, the smoothness of \((g_{b}^{\mathrm{BL}})^{-1}\) near the poles \(\theta=0,\pi\) (i.e., \(x=y=0\)) follows from writing \(\partial_{\theta}^{2}+\sin^{-2}\theta\partial_{\varphi_{b,\chi}}^{2}\) (with the squares denoting symmetric tensor products) as \[\partial_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\partial_{\varphi_{b,\chi}}^{2}=\partial_{x}^{2}+\partial_{y}^{2}-(x\partial_{x}+y\partial_{y})^{2}, \tag{4.22}\] which is smooth near \(x=y=0\). Since the volume form is given by \[dvol_{g_{b}^{\mathrm{BL}}}=(r^{2}+a^{2}(1-x^{2}-y^{2}))(1-x^{2}-y^{2})^{-1/2}\,dt_{b,\chi}\,dr\,dx\,dy\] and thus smooth at the poles \(\theta=0,\pi\), we conclude that \(g_{b}^{\mathrm{BL}}\) indeed extends smoothly and non-degenerately to the poles \(\theta=0,\pi\).
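Because the coefficient structure in (4.22) is easy to get wrong, here is a short symbolic verification (our addition); we represent the symmetric tensor squares by outer products of the coefficient vectors of \(\partial_{\theta}\) and \(\partial_{\varphi_{b,\chi}}\) in the frame \((\partial_{x},\partial_{y})\):

```python
# Verify (4.22) as an identity of symmetric 2-tensors (our sketch):
# with x = sin(th)cos(ph), y = sin(th)sin(ph), the right-hand side
# dx^2 + dy^2 - (x dx + y dy)^2 has matrix Id - (x,y)(x,y)^T.
import sympy as sp

th, ph = sp.symbols('theta phi')
x, y = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph)

v_th = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph)])  # coefficients of d_theta
v_ph = sp.Matrix([-y, x])                                          # d_phi = x d_y - y d_x

lhs = v_th*v_th.T + (v_ph*v_ph.T)/sp.sin(th)**2
rhs = sp.Matrix([[1 - x**2, -x*y], [-x*y, 1 - y**2]])

print(sp.simplify(lhs - rhs))  # expected: the zero 2x2 matrix
```

The matrix on the right is \(\mathrm{Id}-(x,y)(x,y)^{T}\), which is visibly smooth (and positive definite for \(x^{2}+y^{2}<1\)) near \(x=y=0\).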
For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) close to \((\mathbf{m}_{0},0,\mathbf{Q}_{0})\), we take \(\chi=\chi_{0}\) as defined in (4.9) and choose suitable constants of integration such that \(t_{b,\chi_{0}}=t\), \(\varphi_{b,\chi_{0}}=\varphi\) for \(r\geq 3r_{b_{0}}/2\). Then we define the following diffeomorphism \[\begin{split}\Phi_{b}\colon M=\mathbb{R}_{t_{\chi_{0}}}\times X=\mathbb{R}_{t_{\chi_{0}}}\times[r_{-},\infty)_{r}\times\mathbb{S}_{\theta,\varphi}^{2}\to M_{b}=\mathbb{R}_{t_{b,\chi_{0}}}\times[r_{-},\infty)_{r}\times\mathbb{S}_{\theta,\varphi_{b,\chi_{0}}}^{2},\\ (t_{b,\chi_{0}},r,\theta,\varphi_{b,\chi_{0}})(\Phi_{b}(p))=(t_{\chi_{0}},r,\theta,\varphi)(p)\end{split} \tag{4.23}\] where \(t_{\chi_{0}}\) is defined in (4.9) and \((\theta,\varphi)\) are polar coordinates on \(\mathbb{S}^{2}\) satisfying that \(\mathbf{a}/|\mathbf{a}|\) is defined by \(\theta=0\). Using this family of diffeomorphisms \(\Phi_{b}\), we define \[g_{b}=(\Phi_{b})^{*}(g_{b}^{\mathrm{BL}})\in C^{\infty}(M;S^{2}T^{*}M),\quad A_{b}=(\Phi_{b})^{*}(A_{b}^{\mathrm{BL}})\in C^{\infty}(M;T^{*}M), \tag{4.24}\] which are a smooth family of stationary metrics and \(1\)-forms on \(M\), respectively. For \(b=b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), this pullback indeed gives rise to the RN black hole \((g_{b_{0}},A_{b_{0}})\). Following the argument in [63, Proposition 3.5], one can prove that \((g_{b},A_{b})\) depends smoothly on \(b\) when \(b\) is close to \(b_{0}\).

**Lemma 4.1**.: _Let \(\mathcal{U}_{b_{0}}\) be a sufficiently small neighborhood of \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and let \(M=\mathbb{R}_{t_{\chi_{0}}}\times X\) with \(X=\{p\in\mathbb{R}^{3}\mid|p|\geq r_{-}\}\). Then \((g_{b},A_{b})\in C^{\infty}(M;S^{2}T^{*}M\oplus T^{*}M)\) depends smoothly on \(b\in\mathcal{U}_{b_{0}}\)._

Proof.: Given \((\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(a=|\mathbf{a}|\neq 0\), let us denote the spherical coordinate system with north pole \(\theta=0\) at \(\mathbf{a}/|\mathbf{a}|\) by \((\theta_{b},\varphi_{b})\), so the pullbacks of the functions \(\Delta_{b},\rho_{b}^{2}\) to \(M\) are simply obtained by replacing \(\theta\) by \(\theta_{b}\). Then if \((r,\theta_{b},\varphi_{b})\) are the polar coordinates of a point \(p\in X\), we have \[r=|p|,\quad a\cos\theta_{b}=\langle\mathbf{a},\frac{p}{|p|}\rangle,\quad a^{2}\sin^{2}\theta_{b}=|\mathbf{a}|^{2}-a^{2}\cos^{2}\theta_{b},\] where \(|\cdot|\) and \(\langle\cdot,\cdot\rangle\) denote the Euclidean norm and inner product on \(X\subset\mathbb{R}^{3}\), respectively. Since \(r\neq 0\), this shows that \(r\), \(a\cos\theta_{b}\) and \(a^{2}\sin^{2}\theta_{b}\), and thus the pullbacks of \(\Delta_{b}\) and \(\rho_{b}\), are smooth (in \(b\)) families of smooth functions on \(M\). Then the smoothness of \(g_{b}\) and \(A_{b}\) in \(b\) follows from that of \(a\sin^{2}\theta_{b}d\varphi_{b}=(ar^{-2}\partial_{\varphi_{b}})^{\flat}\) (using the musical isomorphism on Euclidean \(\mathbb{R}^{3}\)), which we will show subsequently. We now prove the smooth dependence of the inverse metric \(g_{b}^{-1}\): in view of the expression (4.20) and the discussion around (4.22), all we need to show is that the vector fields \(\partial_{t_{\chi_{0}}},\partial_{r},a\partial_{\varphi_{b}}\) and \(a\sin\theta_{b}\partial_{\theta_{b}}\) depend smoothly on \(\mathbf{a}\). For \(\partial_{t_{\chi_{0}}}\) and for the radial vector field \(\partial_{r}=|p|^{-1}p\partial_{p}\), which do not depend on \(b\), the smoothness is clear.
Further, we have \[a\partial_{\varphi_{b}}=\nabla_{\mathbf{a}\times p}\quad\text{at}\quad p\in X,\] and the expression on the right-hand side is smooth in \(\mathbf{a}\). Indeed, if \(\mathbf{a}=a\tilde{e}_{3}:=(0,0,a)\), both sides equal \(a(x\partial_{y}-y\partial_{x})\) on \(\mathbb{R}^{3}_{x,y,z}\), and if \(\mathbf{a}\in\mathbb{R}^{3}\) is any given vector and \(R\in SO(\mathbb{R}^{3})\) is a rotation with \(R\mathbf{a}=a\tilde{e}_{3}\), \(a=|\mathbf{a}|\), then \((R_{*}(a\partial_{\varphi_{b}}))|_{p}=(a\partial_{\varphi_{a\tilde{e}_{3}}})|_{R(p)}\), which we just observed to be equal to \(\nabla_{a\tilde{e}_{3}\times R(p)}=R_{*}\nabla_{\mathbf{a}\times p}\). In a similar manner, we have \[a\sin\theta_{b}\partial_{\theta_{b}}=|p|^{-1}\nabla_{p\times(p\times\mathbf{a})}\quad\text{at}\quad p\in X,\] and again the expression on the right-hand side is smooth in \(\mathbf{a}\). This finishes the proof of the Lemma.

_Remark 4.2_.: We consider the enlarged parameter space \[\{(\mathbf{m},\mathbf{a},\mathbf{Q}_{e},\mathbf{Q}_{m})\}\subset\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\times\mathbb{R}, \tag{4.25}\] and let \(\mathcal{U}_{b_{0,m}}\) be a small neighborhood in it of a RN parameter \(b_{0,m}=(\mathbf{m}_{0},0,\mathbf{Q}_{0,e},\mathbf{Q}_{0,m})\) with magnetic charge. Now we define the map \[\beta\colon\mathcal{U}_{b_{0,m}}\ni(\mathbf{m},\mathbf{a},\mathbf{Q}_{e},\mathbf{Q}_{m})\mapsto(\mathbf{m},\mathbf{a},\sqrt{\mathbf{Q}_{e}^{2}+\mathbf{Q}_{m}^{2}})\in\mathcal{U}_{b_{0}},\] and then for \(b_{m}=(\mathbf{m},\mathbf{a},\mathbf{Q}_{e},\mathbf{Q}_{m})\in\mathcal{U}_{b_{0,m}}\), we define \[g_{b_{m}}:=g_{\beta(b_{m})},\quad F_{b_{m}}:=\mathbf{Q}_{e}F_{(\mathbf{m},\mathbf{a},1)}+\mathbf{Q}_{m}\star_{g_{b_{m}}}F_{(\mathbf{m},\mathbf{a},1)}. \tag{4.26}\] In view of Lemma 3.1, we see that \((g_{b_{m}},F_{b_{m}})\) is a solution of the Einstein-Maxwell equations (3.1)-(3.2). Since the charge always enters the KN metric through its square, it follows that \(g_{b_{m}}\) and \(F_{b_{m}}\) depend smoothly on \(b_{m}\in\mathcal{U}_{b_{0,m}}\).

_Remark 4.3_.: The diffeomorphism used to realize the family of KN black holes as a smooth family of black holes on a fixed manifold is not unique. We can also consider coordinates \(t_{b,1}\), \(\varphi_{b,1}\) (that is, take \(\chi\equiv 1\) in the definitions of \(t_{b,\chi}\) and \(\varphi_{b,\chi}\)) and use the following diffeomorphism \[\begin{split}\Phi_{b}^{0}\colon M=\mathbb{R}_{t_{0}}\times[r_{-},\infty)_{r}\times\mathbb{S}_{\theta,\varphi}^{2}\to M_{b}=\mathbb{R}_{t_{b,1}}\times[r_{-},\infty)_{r}\times\mathbb{S}_{\theta,\varphi_{b,1}}^{2},\\ (t_{b,1},r,\theta,\varphi_{b,1})(\Phi_{b}^{0}(p))=(t_{0},r,\theta,\varphi)(p)\end{split} \tag{4.27}\] to define \[g_{b}^{0}=(\Phi_{b}^{0})^{*}(g_{b}^{\mathrm{BL}})\in C^{\infty}(M;S^{2}T^{*}M),\quad A_{b}^{0}=(\Phi_{b}^{0})^{*}(A_{b}^{\mathrm{BL}})\in C^{\infty}(M;T^{*}M). \tag{4.28}\] Again, \((g_{b}^{0},A_{b}^{0})\) depends smoothly on \(b\) when \(b\) is close to \(b_{0}\); for \(b=b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), the above pullback (4.28) produces the RN black hole \((g_{b_{0}},A_{b_{0}})\). We point out that this representation is more useful when we do calculations near the event horizon.
In particular, for \(b=(\mathbf{m},0,\mathbf{Q})\), we have \[g_{(\mathbf{m},0,\mathbf{Q})}^{0}=-\mu_{(\mathbf{m},0,\mathbf{Q})}dt_{0}^{2}+2dt_{0}\,dr+r^{2}\not{g},\quad A_{(\mathbf{m},0,\mathbf{Q})}^{0}=\frac{\mathbf{Q}}{r}dt_{0},\quad\mu_{(\mathbf{m},0,\mathbf{Q})}=1-\frac{2\mathbf{m}}{r}+\frac{\mathbf{Q}^{2}}{r^{2}}. \tag{4.29}\] For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) close to \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), we also introduce the following function \[t_{b,*}:=t+r_{b_{0},*}\chi_{0}(r)-r_{(\mathbf{m},0,\mathbf{Q}),*}(1-\chi_{0}(r)), \tag{4.30}\] which generalizes \(t_{*}=t_{b_{0},*}\) defined in (4.11); it equals \(t-r_{(\mathbf{m},0,\mathbf{Q}),*}\) for \(r\geq 3r_{b_{0}}/2\). Now we turn to the discussion of the linearized KN solutions.

**Definition 4.4**.: For \(\dot{b}\in\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}\), we define the _linearized KN solutions_ as \[\dot{g}_{b}(\dot{b})=D_{b}g_{b}(\dot{b})=\frac{d}{ds}\Big{|}_{s=0}g_{b+s\dot{b}},\quad\dot{A}_{b}(\dot{b})=D_{b}A_{b}(\dot{b})=\frac{d}{ds}\Big{|}_{s=0}A_{b+s\dot{b}}. \tag{4.31}\]

Therefore, \((\dot{g}_{b}(\dot{b}),\dot{A}_{b}(\dot{b}))\) is a solution of the linearized Einstein-Maxwell equations (3.29) linearized around \((g_{b},A_{b})\). We can also linearize the family \((g_{b}^{0},A_{b}^{0})\) in the parameter \(b\) to obtain another version of linearized KN solutions \[\dot{g}_{b}^{0}(\dot{b})=D_{b}g_{b}^{0}(\dot{b})=\frac{d}{ds}\Big{|}_{s=0}g_{b+s\dot{b}}^{0},\quad\dot{A}_{b}^{0}(\dot{b})=D_{b}A_{b}^{0}(\dot{b})=\frac{d}{ds}\Big{|}_{s=0}A_{b+s\dot{b}}^{0}. \tag{4.32}\] Here we list the particular case \[\begin{split}\dot{g}_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}^{0}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})&=\big{(}\frac{2\dot{\mathbf{m}}}{r}-\frac{2\mathbf{Q}_{0}\dot{\mathbf{Q}}}{r^{2}}\big{)}\,dt_{0}^{2},\quad\dot{A}_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}^{0}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})=\frac{\dot{\mathbf{Q}}}{r}dt_{0},\\ \dot{g}_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}^{0}(0,\dot{\mathbf{a}},0)&=\Big{(}\big{(}-\frac{4\mathbf{m}_{0}}{r}+\frac{2\mathbf{Q}_{0}^{2}}{r^{2}}\big{)}dt_{0}-2dr\Big{)}\sin^{2}\theta\,d\varphi,\quad\dot{A}_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}^{0}(0,\dot{\mathbf{a}},0)=-\frac{\mathbf{Q}_{0}}{r}\sin^{2}\theta\,d\varphi,\end{split} \tag{4.33}\] where \(|\dot{\mathbf{a}}|=1\) and \((\theta,\varphi)\) are spherical coordinates adapted to \(\dot{\mathbf{a}}\).

_Remark 4.5_.: Since \(g_{b}=((\Phi_{b}^{0})^{-1}\circ\Phi_{b})^{*}g_{b}^{0}\), the linearized KN solution \((\dot{g}_{b}(\dot{b}),\dot{A}_{b}(\dot{b}))\) can be obtained from \((\dot{g}_{b}^{0}(\dot{b}),\dot{A}_{b}^{0}(\dot{b}))\) by subtracting off a Lie derivative of \((g_{b}^{0},A_{b}^{0})\) along a vector field generating the flow \((\Phi_{b}^{0})^{-1}\circ\Phi_{b}\). More specifically, in the coordinates \((t_{*},r,\theta,\varphi)\) on \(M\), one has \[(\Phi_{b}^{0})^{-1}\circ\Phi_{b}\colon(t_{*},\,r,\,\theta,\,\varphi)\mapsto\left(t_{*}+\int\left(\frac{r^{2}+a^{2}}{\Delta_{b}}-\frac{r^{2}}{\Delta_{b_{0}}}\right)(1-\chi_{0})\,dr,\,r,\,\theta,\,\varphi+\int\frac{a}{\Delta_{b}}(1-\chi_{0})\,dr\right). \tag{4.34a}\]
Therefore, given \(\dot{b}=(\dot{\mathbf{m}},\dot{\mathbf{a}},\dot{\mathbf{Q}})\), the above two versions of the linearized KN solutions are related by \[\begin{split}\dot{g}_{b_{0}}(\dot{b})&=\dot{g}_{b_{0}}^{0}(\dot{b})-\mathcal{L}_{V(\dot{b})}g_{b_{0}}^{0},\quad\dot{A}_{b_{0}}(\dot{b})=\dot{A}_{b_{0}}^{0}(\dot{b})-\mathcal{L}_{V(\dot{b})}A_{b_{0}}^{0},\\ V(\dot{b})&=\frac{d}{ds}\Big{|}_{s=0}\big{(}(\Phi_{b_{0}+s\dot{b}}^{0})^{-1}\circ\Phi_{b_{0}+s\dot{b}}\big{)}\\ &=\bigg{(}\dot{\mathbf{m}}\Big{(}\int_{r_{0}}^{r}\frac{2r^{3}}{\Delta_{b_{0}}^{2}}(1-\chi_{0})\,dr\Big{)}-\dot{\mathbf{Q}}\Big{(}\int_{r_{0}}^{r}\frac{2\mathbf{Q}_{0}r^{2}}{\Delta_{b_{0}}^{2}}(1-\chi_{0})\,dr\Big{)}\bigg{)}\partial_{t_{*}}+\dot{\mathbf{a}}\Big{(}\int_{r_{0}}^{r}\frac{1-\chi_{0}}{\Delta_{b_{0}}}dr\Big{)}\partial_{\varphi},\end{split} \tag{4.34b}\] where \(r_{0}\) in the definition of \(V(\dot{b})\) can be chosen freely.

### Stationarity, vector bundles, and geometric operators

Now we introduce some basics of vector bundles and geometric operators (see [57, §3.3]). In the notation (4.8), let \[\pi_{X}\colon M\to X\] be the projection to the spatial part of the manifold \(M\), whose definition is independent of the choice of time function. Let \(E_{1},\ E_{2}\to X\) be two vector bundles over \(X\), and suppose that \(\widehat{L}(0)\in\operatorname{Diff}(X;E_{1},E_{2})\) is a differential operator. Let \(\mathfrak{t}=t_{*}+F\) with \(F\in C^{\infty}(X)\) and \(d\mathfrak{t}\neq 0\) everywhere; we then define its _stationary extension_ \(L\) acting on a section \(u\in C^{\infty}(M;\pi_{X}^{*}E_{1})\) in the following way \[u\mapsto(Lu)(\mathfrak{t},-):=\widehat{L}(0)(u(\mathfrak{t},-))\in C^{\infty}(M;\pi_{X}^{*}E_{2}).\] We point out that this stationary extension indeed depends on the choice of the time function \(\mathfrak{t}\). However, when acting on a stationary section, the stationary extension defined above is _independent_ of the choice of the time function \(\mathfrak{t}\) since \[L\pi_{X}^{*}=\pi_{X}^{*}\widehat{L}(0). \tag{4.35}\] By making use of the stationary extension, one can regard \(\operatorname{Diff}_{\bullet}(\bar{X};E)\) as a subalgebra of \(\operatorname{Diff}_{\bullet}(M;\pi_{X}^{*}E)\) where \(\bullet=\operatorname{b},\operatorname{sc}\). Conversely, given a _stationary_ differential operator \(L\in\operatorname{Diff}(M;\pi_{X}^{*}E_{1},\pi_{X}^{*}E_{2})\) which commutes with \(\partial_{t}\), one can find a unique operator \(\widehat{L}(0)\in\operatorname{Diff}(X;E_{1},E_{2})\) such that the relation (4.35) holds. Moreover, \(\widehat{L}(0)\) is independent of the choice of the time function \(\mathfrak{t}\). More generally, we consider the formal conjugation of \(L\) by \(e^{i\sigma\mathfrak{t}}\) \[\widehat{L}(\sigma):=e^{i\sigma\mathfrak{t}}Le^{-i\sigma\mathfrak{t}}\in\operatorname{Diff}(X;E_{1},E_{2}).\] We note that using another time function \(\mathfrak{t}+F^{\prime}\) with \(F^{\prime}\in C^{\infty}(X)\) in the above formal conjugation means conjugating \(\widehat{L}(\sigma)\) by \(e^{i\sigma F^{\prime}}\). Motivated by the product decomposition (4.8), we use the splitting \[T^{*}M\cong T^{*}\mathbb{R}_{t_{0}}\oplus T^{*}X=\pi_{T}^{*}(T^{*}\mathbb{R}_{t_{0}})\oplus\pi_{X}^{*}(T^{*}X)\] where \(\pi_{T}\colon M\to\mathbb{R}_{t_{0}}\) is the projection. Correspondingly, we introduce the following _extended scattering cotangent bundle_ of \(\bar{X}\) \[\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}:=\mathbb{R}\,dt_{0}\oplus{}^{\mathrm{sc}}T^{*}\bar{X}. \tag{4.36}\]
Here \(dt_{0}\) is simply a notation for the basis of a real line bundle of rank \(1\) over \(\bar{X}\), which we identify with the differential of the time function \(t_{0}\in C^{\infty}(M)\) when we look at the pullback bundle \(\pi_{X}^{*}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}\to M\). The extended scattering cotangent bundle \(\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}\) is spanned, over \(C^{\infty}(\bar{X})\), by \(dt_{0}\) and the differentials \(dx^{i}\), where \((x^{1},x^{2},x^{3})\) are standard coordinates on \(X\subset\mathbb{R}^{3}\). We remark that we can also use another time function, for example \(t_{b,*}\), in (4.36) because the difference between the differentials of different time functions is given by a smooth scattering \(1\)-form on \(\bar{X}\).

Given a stationary metric \(g\) on \(M\), one can find a unique \(\tilde{g}\in C^{\infty}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) such that \(\pi_{X}^{*}\tilde{g}=g\). Identifying \(g\) with \(\tilde{g}\) and applying this procedure to the Kerr-Newman family, we obtain \[g_{b},\ g_{b}^{0}\in C^{\infty}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}). \tag{4.37}\] We also record the facts \[\begin{split} g_{b}-\underline{g}\in\rho C^{\infty}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}),\quad\underline{g}:=-dt_{\chi_{0}}^{2}+dr^{2}+r^{2}\not{g},\\ g_{(\mathbf{m},\mathbf{a},\mathbf{Q})}-g_{(\mathbf{m},0,\mathbf{Q})}\in\rho^{2}C^{\infty}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}),\end{split} \tag{4.38}\] that is, a KN metric is a \(\mathcal{O}(\rho)\), resp. \(\mathcal{O}(\rho^{2})\) perturbation of the Minkowski metric \(\underline{g}\), resp. the RN metric of the same mass and charge. We shall discuss some basic geometric operators (which will be used frequently later on) on Kerr-Newman spacetimes, for example, \[(\delta_{g}^{*}\omega)_{\mu\nu}=\frac{1}{2}(\nabla_{\nu}\omega_{\mu}+\nabla_{\mu}\omega_{\nu}),\quad(\delta_{g}h)_{\alpha}=-g^{\mu\nu}\nabla_{\mu}h_{\nu\alpha},\quad G_{g}=\operatorname{Id}-\frac{1}{2}g\operatorname{tr}_{g}. \tag{4.39}\] We denote by \[\square_{g,0},\ \square_{g,1},\ \square_{g,2}, \tag{4.40}\] the tensor wave operator \(g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) on scalar functions, \(1\)-forms, and symmetric \(2\)-tensors, respectively. When the bundle is clear from the context, we shall drop the notation of the bundles and simply write \(\Box_{g}\). We use the splitting \[\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X} =\langle dt\rangle\oplus{}^{\mathrm{sc}}T^{*}\bar{X}, \tag{4.41}\] \[S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X} =\langle dt^{2}\rangle\oplus(2dt\otimes_{s}{}^{\mathrm{sc}}T^{*}\bar{X})\oplus S^{2}{}^{\mathrm{sc}}T^{*}\bar{X}, \tag{4.42}\] and work in the trivialization of \({}^{\mathrm{sc}}T^{*}\bar{X}\), resp. \(S^{2}{}^{\mathrm{sc}}T^{*}\bar{X}\) given by \(dx^{i},1\leq i\leq 3\), resp. \(2dx^{i}\otimes_{s}dx^{j},1\leq i\leq j\leq 3\).

**Proposition 4.6**.: _Working in the splittings (4.41) and (4.42) and the trivializations of \({}^{\mathrm{sc}}T^{*}\bar{X}\) and \(S^{2}{}^{\mathrm{sc}}T^{*}\bar{X}\), we write the operators \(\Box_{g_{b},2},\Box_{g_{b},1},\Box_{g_{b},0}\) as_ \[\Box_{g_{b},0} =g_{b}^{-1}(dt_{\chi_{0}},dt_{\chi_{0}})\partial_{t_{\chi_{0}}}^{2}+\widehat{\Box_{g_{b},0}}(0)+Q_{b,0}^{1}\partial_{t_{\chi_{0}}}, \tag{4.43}\] \[\Box_{g_{b},1} =\Box_{g_{b},0}\otimes\mathrm{Id}_{4\times 4}+Q_{b,1}^{1}\partial_{t_{\chi_{0}}}+Q_{b,1}^{0}, \tag{4.44}\] \[\Box_{g_{b},2} =\Box_{g_{b},0}\otimes\mathrm{Id}_{10\times 10}+Q_{b,2}^{1}\partial_{t_{\chi_{0}}}+Q_{b,2}^{0}. \tag{4.45}\]
_Then we have_ \[\widehat{\Box_{g_{b},0}}(0)\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{2}(\bar{X}),\quad Q_{b,0}^{1}\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X}), \tag{4.46}\] \[Q_{b,1}^{1}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{0}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}),\quad Q_{b,1}^{0}\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}), \tag{4.47}\] \[Q_{b,2}^{1}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{0}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}),\quad Q_{b,2}^{0}\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}). \tag{4.48}\] _Moreover,_ \[\widehat{\Box_{g_{b},j}}(0)-\widehat{\Box_{\underline{g},j}}(0)\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2}(\bar{X}),\quad\underline{g}=-dt_{\chi_{0}}^{2}+dx^{2},\quad j=0,1,2. \tag{4.49}\]

Since the metric components are smooth away from \(\partial_{+}\bar{X}\), it suffices to analyze \(\Box_{g,\bullet},\bullet=0,1,2\) near spatial infinity \(\partial_{+}\bar{X}\) where \(t_{\chi_{0}}\equiv t\) is the static time coordinate. Proposition 4.6 is a consequence of the following lemma (by taking \(h=dx^{2}\)).

**Lemma 4.7**.: _Suppose \(g\) is a stationary Lorentzian metric on \(M\) satisfying_ \[g(\partial_{t},\partial_{t}) \in-1+\rho C^{\infty}(\bar{X}), \tag{4.50}\] \[g(\partial_{t},-) \in\rho^{2}C^{\infty}(\bar{X};{}^{\mathrm{sc}}T^{*}\bar{X}),\] \[g|_{{}^{\mathrm{sc}}T\bar{X}\times{}^{\mathrm{sc}}T\bar{X}} \in h+\rho C^{\infty}(\bar{X};S^{2}{}^{\mathrm{sc}}T^{*}\bar{X}),\] _where \(h\in C^{\infty}(\bar{X};S^{2}{}^{\mathrm{sc}}T^{*}\bar{X})\) is a Riemannian metric. Then the operators \(\Box_{g,2},\Box_{g,1},\Box_{g,0}\) satisfy the statements in Proposition 4.6 with \(Q_{g,j}^{0}\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X})\) replaced by \(Q_{g,j}^{0}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X})\) for \(j=1,2\)._

_Moreover, \(\widehat{\Box_{g,j}}(0)\bmod\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2}(\bar{X})\) only depends on \(h\) for \(j=0,1,2\)._

Proof.: We introduce coordinates \[x=(x^{0},x^{1},x^{2},x^{3}),\] where \(x^{0}=t\), and \(x^{1},x^{2},x^{3}\) are standard coordinates on \(\mathbb{R}^{3}\). We use Latin letters \(i,j,k,\cdots\) for indices from \(1\) to \(3\), and Greek letters \(\alpha,\beta,\mu,\nu,\cdots\) for indices from \(0\) to \(3\). We let \(\mathcal{O}^{k}:=\rho^{k}C^{\infty}(\bar{X})\) and interpret \(f=\mathcal{O}^{k}\) as \(f\in\mathcal{O}^{k}\) when \(f\) is a smooth function. We first compute the Levi-Civita connection of \(g\). Using the facts that \(g\) is stationary, \(\partial_{i}\in\rho\mathcal{V}_{\mathrm{b}}(\bar{X})\) and \[g^{-1}(dt,dt)\in-1+\rho C^{\infty}(\bar{X}),\quad g^{-1}(dt,-)\in\rho^{2}C^{\infty}(\bar{X};{}^{\mathrm{sc}}T\bar{X}),\quad g^{-1}|_{{}^{\mathrm{sc}}T^{*}\bar{X}\times{}^{\mathrm{sc}}T^{*}\bar{X}}\in h^{-1}+\rho C^{\infty}(\bar{X};S^{2}{}^{\mathrm{sc}}T\bar{X}),\] we find that \[\Gamma_{00}^{0}=\mathcal{O}^{4},\quad\Gamma_{00}^{i}=\mathcal{O}^{2},\quad\Gamma_{0i}^{0}=\mathcal{O}^{2},\quad\Gamma_{0i}^{j}=\mathcal{O}^{3},\quad\Gamma_{ij}^{0}=\mathcal{O}^{3},\quad\Gamma_{ij}^{k}=\Gamma_{ij}^{k}(h)+\mathcal{O}^{2},\quad\Gamma_{ij}^{k}(h)=\mathcal{O}^{1}. \tag{4.51}\]
Since \(\Box_{g,0}=g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}-g^{\alpha\beta}\Gamma_{\alpha\beta}^{\mu}\partial_{\mu}\) where \(g^{\alpha\beta}\) is the inverse to \(g\), we see that \[\Box_{g,0}=g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}+f^{\mu}\partial_{\mu},\quad f^{0}=\mathcal{O}^{3},\ f^{i}=\mathcal{O}^{1},\] and thus we can write \[\Box_{g,0}=g^{00}\partial_{t}^{2}+Q\partial_{t}+\widehat{\Box_{g,0}}(0)\quad\text{with}\quad Q\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X}),\ \widehat{\Box_{g,0}}(0)\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{2}(\bar{X}).\] Moreover, \(\widehat{\square_{g,0}}(0)\bmod\rho^{3}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X})\) only depends on \(h\). Next, we calculate \(\square_{g,1}\). Since \[(\square_{g,1}u)_{\alpha}=\square_{g,0}u_{\alpha}-2g^{\mu\nu}\Gamma^{\kappa}_{\mu\alpha}\partial_{\nu}u_{\kappa}+g^{\mu\nu}\Gamma^{\kappa}_{\mu\nu}\Gamma^{\beta}_{\kappa\alpha}u_{\beta}+g^{\mu\nu}\Gamma^{\kappa}_{\mu\alpha}\Gamma^{\beta}_{\kappa\nu}u_{\beta}-g^{\mu\nu}(\partial_{\mu}\Gamma^{\kappa}_{\nu\alpha})u_{\kappa},\] we can write \[\square_{g,1}=\square_{g,0}\otimes\mathrm{Id}_{4\times 4}+Q^{1}_{g,1}\partial_{t}+Q^{0}_{g,1}\] where \(Q^{1}_{g,1}\in\rho^{2}\mathrm{Diff}^{0}_{\mathrm{b}}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) and \(Q^{0}_{g,1}\in\rho^{2}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\). Moreover, \(Q^{0}_{g,1}\in\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) if \(\Gamma^{k}_{ij}(h)\) vanishes, and \(\widehat{\square_{g,1}}(0)\bmod\rho^{3}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X};\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) only depends on \(h\). Finally, we calculate \(\square_{g,2}\). Since \[(\square_{g,2}h)_{\alpha\beta}=\square_{g,0}h_{\alpha\beta}-2g^{\mu\nu}\Gamma^{\kappa}_{\mu\alpha}\partial_{\nu}h_{\kappa\beta}-2g^{\mu\nu}\Gamma^{\kappa}_{\mu\beta}\partial_{\nu}h_{\kappa\alpha}+S^{0}_{g,2}\] where \(S^{0}_{g,2}\in\mathrm{Diff}^{0}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\), whose coefficients are linear combinations of terms of the types \(g\Gamma\cdot\Gamma\) and \(g\partial\Gamma\), it follows that we can write \[\square_{g,2}=\square_{g,0}\otimes\mathrm{Id}_{10\times 10}+Q^{1}_{g,2}\partial_{t}+Q^{0}_{g,2}\] where \(Q^{1}_{g,2}\in\rho^{2}\mathrm{Diff}^{0}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) and \(Q^{0}_{g,2}\in\rho^{2}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\). Again, \(Q^{0}_{g,2}\in\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) if \(\Gamma^{k}_{ij}(h)\) vanishes, and \(\widehat{\square_{g,2}}(0)\bmod\rho^{3}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\) only depends on \(h\).

We next discuss the curvature tensor and other geometric operators associated to \(g_{b}\).

**Proposition 4.8**.: _Let \(g\) be a metric of the form (4.50) and \(\tilde{g}=-dt^{2}+h\), where \(h\in C^{\infty}(\bar{X};S^{2}{}^{\mathrm{sc}}T^{*}\bar{X})\) is Riemannian._
_Then we have_ \[\mathrm{Riem}(g)-\mathrm{Riem}(\tilde{g})\in\rho^{3}C^{\infty}\big{(}\bar{X};\widetilde{{}^{\mathrm{sc}}T}\bar{X}\otimes(\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})^{3}\big{)},\quad\mathrm{Ric}(g)-\mathrm{Ric}(\tilde{g})\in\rho^{3}C^{\infty}\big{(}\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}\big{)}, \tag{4.52}\] \[\delta_{g}=-\iota_{dt_{\chi_{0}}^{2}}\partial_{t_{\chi_{0}}}+\widehat{\delta}_{g}(0),\quad\widehat{\delta}_{g}(0)\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X}),\quad\widehat{\delta}_{g}(0)-\widehat{\delta}_{\tilde{g}}(0)\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X}), \tag{4.53}\] \[\delta_{g}^{*}=dt_{\chi_{0}}\otimes_{s}\partial_{t_{\chi_{0}}}+\widehat{\delta}_{g}^{*}(0),\quad\widehat{\delta}_{g}^{*}(0)\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}(\bar{X}),\quad\widehat{\delta}_{g}^{*}(0)-\widehat{\delta}_{\tilde{g}}^{*}(0)\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{0}(\bar{X}). \tag{4.54}\] _In particular, these hold for \(g=g_{b}\) and \(\tilde{g}=\underline{g}=-dt_{\chi_{0}}^{2}+dx^{2}\)._

Proof.: Since \[\Gamma^{0}_{00}(\tilde{g})=\Gamma^{i}_{00}(\tilde{g})=\Gamma^{0}_{0i}(\tilde{g})=\Gamma^{j}_{0i}(\tilde{g})=\Gamma^{0}_{ij}(\tilde{g})=0,\quad\Gamma^{k}_{ij}(\tilde{g})=\mathcal{O}^{1},\quad\Gamma^{\mu}_{\alpha\beta}(g)=\Gamma^{\mu}_{\alpha\beta}(\tilde{g})+\mathcal{O}^{2},\] and \(g,\tilde{g}\) are stationary, the proposition follows from the formulas \[\mathrm{Riem}(g)^{\nu}_{\mu\alpha\beta}=\partial_{\alpha}\Gamma^{\nu}_{\beta\mu}(g)-\partial_{\beta}\Gamma^{\nu}_{\alpha\mu}(g)+\Gamma^{\nu}_{\alpha\kappa}(g)\Gamma^{\kappa}_{\beta\mu}(g)-\Gamma^{\nu}_{\beta\kappa}(g)\Gamma^{\kappa}_{\alpha\mu}(g),\] \[\delta_{g}u=-g^{\mu\nu}\partial_{\mu}u_{\nu}+g^{\mu\nu}\Gamma^{\kappa}_{\mu\nu}(g)u_{\kappa},\quad(\delta_{g}h)_{\alpha}=-g^{\mu\nu}\partial_{\mu}h_{\nu\alpha}+g^{\mu\nu}\Gamma^{\kappa}_{\mu\nu}(g)h_{\kappa\alpha}+g^{\mu\nu}\Gamma^{\kappa}_{\mu\alpha}(g)h_{\nu\kappa},\] \[(\delta^{*}_{g}u)_{\alpha\beta}=\frac{1}{2}(\partial_{\alpha}u_{\beta}+\partial_{\beta}u_{\alpha})-\Gamma^{\kappa}_{\alpha\beta}(g)u_{\kappa}.\]

In view of the facts in (4.38), we have the following further leading order control.

**Lemma 4.9**.: _Suppose that \(g_{1},g_{2}\) are two metrics of the form (4.50) so that in addition \(g_{1}-g_{2}\in\rho^{2}C^{\infty}(\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})\). Then_ \[\widehat{\square_{g_{1},j}}(0)-\widehat{\square_{g_{2},j}}(0)\in\rho^{4}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X}),\quad j=0,1,2, \tag{4.55}\] \[\mathrm{Riem}(g_{2})-\mathrm{Riem}(g_{1})\in\rho^{4}C^{\infty}\big{(}\bar{X};\widetilde{{}^{\mathrm{sc}}T}\bar{X}\otimes(\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})^{3}\big{)},\quad\mathrm{Ric}(g_{2})-\mathrm{Ric}(g_{1})\in\rho^{4}C^{\infty}\big{(}\bar{X};S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X}\big{)}, \tag{4.56}\] \[\widehat{\delta_{g_{1}}}(0)-\widehat{\delta_{g_{2}}}(0)\in\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X}), \tag{4.57}\] \[\widehat{\delta^{*}_{g_{1}}}(0)-\widehat{\delta^{*}_{g_{2}}}(0)\in\rho^{3}C^{\infty}(\bar{X};\mathrm{Hom}(\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X},S^{2}\widetilde{{}^{\mathrm{sc}}T^{*}}\bar{X})). \tag{4.58}\]

Proof.: The proof follows from the facts that \(\Gamma^{\mu}_{\alpha\beta}(g_{1})=\mathcal{O}^{1}\) and \(\Gamma^{\mu}_{\alpha\beta}(g_{2})-\Gamma^{\mu}_{\alpha\beta}(g_{1})=\mathcal{O}^{3}\).

#### 4.3.1. Relevant geometric operators in linearized gauge-fixed Einstein-Maxwell equations

We now study the linearized gauge-fixed Einstein-Maxwell operator arising from the generalized wave map and Lorenz gauge.
**Definition 4.10**.: Let \(B\in C^{\infty}(M;\operatorname{Hom}(T^{*}M,S^{2}T^{*}M)),F\in C^{\infty}(M; \operatorname{Hom}(S^{2}T^{*}M,T^{*}M))\) and \(g\) be a Lorentzian metric. Then we define the _modified symmetric gradient_\(\widetilde{\delta}^{*}_{g,B}\) and _modified negative divergence_\(\widetilde{\delta}_{g,F}\) as follows. \[\widetilde{\delta}^{*}_{g,B}=\delta^{*}_{g}+B_{g},\quad\widetilde{\delta}_{g, F}=\delta_{g}+F_{g}. \tag{4.59}\] In this paper, we will use \(B_{g},F_{g}\) of the following form \[B_{g}=B(g;\mathfrak{c},\gamma):=2\gamma\mathfrak{c}\otimes_{s}(\bullet)- \frac{1}{2}\gamma g^{-1}(\mathfrak{c},\bullet)g, \tag{4.60}\] \[F_{g}=F(g;\mathfrak{c},\gamma):=2\gamma\mathfrak{c}_{\epsilon}^{g}(\bullet)- \frac{1}{2}\gamma\mathrm{tr}_{g}(\bullet). \tag{4.61}\] where \(\gamma\in\mathbb{R}\) is a sufficiently small constant to be determined later on, \(\mathfrak{c}\in C^{\infty}_{c}(X,\widetilde{\varsigma\Upsilon}^{*}\bar{X})\) is a stationary 1-form with compact support and \(\iota^{g}_{\epsilon}(h)=g^{\mu\nu}\mathfrak{c}_{\nu}h_{\mu\alpha}\). With the above choices of \(B_{g}\) and \(F_{g}\) in (4.60) and (4.61), we find that \(\widetilde{\delta}^{*}_{g,B}\) and \(\widetilde{\delta}_{g,F}\) are formally adjoint to each other. From now on, we denote \[\widetilde{\delta}^{*}_{g,\gamma}:=\widetilde{\delta}^{*}_{g,B}=\delta^{*}_{g }+B_{g},\quad\widetilde{\delta}_{g,\gamma}:=\widetilde{\delta}_{g,F}=\delta_ {g}+F_{g}. \tag{4.62}\] By choosing \(\tilde{\delta}^{*}_{g_{b}}=\widetilde{\delta}^{*}_{g_{b},\gamma}\) and \(\theta(g;b_{b})=F(g_{b};\mathfrak{c},\gamma)[G_{g_{b}}g]\), we obtain the _linearized gauge-fixed Einstein-Maxwell system_\(L_{b,\gamma}(\hat{g},\hat{A})=(2L^{E}_{b,\gamma}(\hat{g},\hat{A}),\ L^{M}_{b,\gamma}(\hat{g},\hat{A}))=0\) around \((g_{b},A_{b})\) \[\begin{split} L^{E}_{b,\gamma}(\hat{g},\hat{A}):=D_{g_{b}}( \mathrm{Ric})(\hat{g})-\tilde{\delta}^{*}_{g_{b},\gamma}D_{g_{b}}\widehat{ \Upsilon}^{E}(\hat{g};g_{b})-2D_{(g_{b},dA_{b})}T(\hat{g},d\hat{A}),\quad \widetilde{\Upsilon}^{E}=\Upsilon^{E}-\theta,\\ L^{M}_{b,\gamma}(\hat{g},\hat{A}):=D_{(g_{b},A_{b})}(\delta_{( \cdot)}d(\cdot))(\hat{g},\hat{A})-d\Big{(}D_{(g_{b},A_{b})}\widetilde{ \Upsilon}^{M}(\hat{g},\hat{A};g_{b},A_{b})\Big{)}.\end{split} \tag{4.63}\] According to the calculation (3.32)-(3.35), (A.1) and (A.4), we find that \[\begin{split} L^{E}_{b,\gamma}(\hat{g},\hat{A})&=- \frac{1}{2}\square_{g_{b},2}\hat{g}+(\tilde{\delta}^{*}_{g_{b},\gamma}-\delta^ {*}_{g_{b}})\delta_{g_{b}}G_{g_{b}}\hat{g}+\delta^{*}_{g_{b}}(\tilde{\delta}_ {g_{b},\gamma}-\delta_{g_{b}})G_{g_{b}}\hat{g}\\ &+(\tilde{\delta}^{*}_{g_{b},\gamma}-\delta^{*}_{g_{b}})(\tilde{ \delta}_{g_{b},\gamma}-\delta_{g_{b}})G_{g_{b}}\hat{g}+\mathscr{R}_{g_{b}}( \hat{g})_{\mu\nu}-2D_{g_{b},dA_{b}}T(\hat{g},d\hat{A}),\\ L^{M}_{b,\gamma}(\hat{g},\hat{A})&=(d\delta_{g_{b}}+ \delta_{g_{b}}d)\dot{A}-(\delta_{g_{b}}G_{g_{b}}\hat{g})^{\kappa}(dA_{b})_{ \kappa\mu}+\frac{1}{2}(\nabla^{\nu}\dot{g}^{\kappa}_{\ \mu}-\nabla^{\kappa}\dot{g}^{\nu}_{\ \mu})(dA_{b})_{\nu\kappa}+\dot{g}^{\nu \alpha}\nabla_{\alpha}(dA_{b})_{\nu\mu}\end{split} \tag{4.64}\] where \(d\delta_{g_{b}}+\delta_{g_{b}}d=-\square_{g_{b},1}+\mathrm{Ric}(g_{b})\) and \[\begin{split}(\mathscr{R}_{g_{b}}\hat{g})_{\mu\nu}& =\mathrm{Riem}(g_{b})^{\alpha}_{\ \ \mu\nu}\dot{g}_{\alpha\beta}+\frac{1}{2}\left(\mathrm{Ric}(g_{b})^{\ \kappa}_{\ \mu}\dot{g}_{\nu\kappa}+\mathrm{Ric}(g_{b})^{\ \kappa}_{\nu}\dot{g}_{\mu\kappa}\right),\\ D_{g_{b},dA_{b}}T(\hat{g},d\hat{A})&=\Big{(}(g_{b} 
)^{\alpha\beta}(d\hat{A})_{\mu\alpha}(dA_{b})_{\nu\beta}+(g_{b})^{\alpha\beta}( dA_{b})_{\mu\alpha}(d\hat{A})_{\nu\beta}\Big{)}-\frac{1}{2}(g_{b})_{\mu\nu}(d\hat{A})_{ \alpha\beta}(dA_{b})^{\alpha\beta}\\ &\quad-\dot{g}^{\alpha\beta}(dA_{b})_{\mu\alpha}(dA_{b})_{\nu \beta}-\frac{1}{4}\left(\dot{g}_{\mu\nu}(dA_{b})_{\alpha\beta}(dA_{b})^{\alpha \beta}-2(g_{b})_{\mu\nu}\dot{g}^{\kappa\alpha}(dA_{b})_{\alpha\beta}(dA_{b}) _{\kappa}^{\ \beta}\right).\end{split} \tag{4.65}\] We note that the first two terms of the second line of \(L^{E}_{b,\gamma}(\dot{g},\dot{A})\), the last two terms of \(D_{g_{b},dA_{b}}T(\dot{g},d\hat{A})\) and the last term of \(L^{M}_{b,\gamma}(\dot{g},\dot{A})\) contain no derivatives of \((\dot{g},\dot{A})\). For later use, we also define the gauge propagation operator \(\mathcal{P}_{b,\gamma}\) and the gauge potential wave operator \(\mathcal{W}_{b,\gamma}\) for the gauge 1-form \(\widetilde{\Upsilon}^{E}(\hat{g};g_{b})\): \[\begin{split}\mathcal{P}_{b,\gamma}&:=\delta_{g_{b}}G_{g _{b}}\tilde{\delta}^{*}_{g_{b},\gamma}=-\frac{1}{2}\square_{g_{b},1}-\frac{1}{2} \mathrm{Ric}(g_{b})+\delta_{g_{b}}G_{g_{b}}(\tilde{\delta}^{*}_{g_{b},\gamma}- \delta^{*}_{g_{b}}),\\ \mathcal{W}_{b,\gamma}&:=\bar{\delta}_{g_{b},\gamma}G_{g _{b}}\delta^{*}_{g_{b}}=-\frac{1}{2}\square_{g_{b},1}-\frac{1}{2} \mathrm{Ric}(g_{b})+(\bar{\delta}_{g_{b},\gamma}-\delta_{g_{b}})G_{g_{b}} \delta^{*}_{g_{b}}.\end{split} \tag{4.66}\] Then we have **Lemma 4.11**.: _Let \(L_{b,\gamma}=(2L^{E}_{b,\gamma},L^{M}_{b,\gamma}),\mathcal{P}_{b,\gamma}, \mathcal{W}_{b,\gamma}\) be defined as above. Then we have_ \[\mathcal{P}_{b,\gamma} =-\frac{1}{2}\square_{g_{b},1}+P^{1}\partial_{t_{\chi_{0}}}+P^{0}, \quad\widehat{\mathcal{P}_{b,\gamma}}(0)=-\frac{1}{2}\widehat{\square_{g,1}}(0)+P^{0}, \tag{4.67}\] \[\mathcal{W}_{b,\gamma} =-\frac{1}{2}\square_{g_{b},1}+W^{1}\partial_{t_{\chi_{0}}}+W^ {0},\quad\widehat{\mathcal{W}_{b,\gamma}}(0)=-\frac{1}{2}\widehat{\square_{g,1}}(0)+W^{0}, \tag{4.68}\] _where_ \[P^{1},W^{1}\in\rho^{\infty}{\rm Diff}^{0}_{\rm b}(\bar{X};\widetilde{s^{c}T^{*}} \bar{X}),\quad P^{0},W^{0}\in\rho^{4}{\rm Diff}^{1}_{\rm b}(\bar{X};\widetilde{s ^{c}T^{*}}\bar{X}). \tag{4.69}\] _We also have_ \[L_{b,\gamma}=-\Box_{g_{b},0}\otimes{\rm Id}_{14\times 14}+L^{1}\partial_{t_{ \chi_{0}}}+L^{0},\quad\widehat{L_{b,\gamma}}(0)=-\widehat{\Box_{g,0}}(0) \otimes{\rm Id}_{14\times 14}+L^{0}, \tag{4.70}\] _where_ \[L^{1}\in\rho^{2}{\rm Diff}^{0}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c} T^{*}}\bar{X}\oplus\widetilde{s^{c}T^{*}}\bar{X}),\quad L^{0}\in\rho^{3}{\rm Diff }^{1}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c}T^{*}}\bar{X}\oplus \widetilde{s^{c}T^{*}}\bar{X}). \tag{4.71}\] _Moreover, we have_ \[-\widehat{L_{b,\gamma}}(0)-\widehat{\Box_{g,0}}(0)\otimes{\rm Id}_{14\times 1 4}\in\rho^{3}{\rm Diff}^{2}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c}T^{* }}\bar{X}\oplus\widetilde{s^{c}T^{*}}\bar{X}),\quad\underline{g}=-dt^{2}_{ \chi_{0}}+dx^{2}. \tag{4.72}\] Proof.: Since \(\widetilde{\delta}^{*}_{g_{b},\gamma}-\delta^{*}_{g_{b}}\) and \(\bar{\delta}_{g_{b},\gamma}-\delta_{g_{b}}\) are compactly supported and \({\rm Ric}(g_{b})=\mathcal{O}^{4}\), then the conclusions (4.67)-(4.69) follow. 
According to (4.65) and using the facts that \({\rm Riem}(g_{b})=\mathcal{O}^{3},dA_{b}=\mathcal{O}^{2}\), we see that \(\mathscr{R}_{g_{b}}\in\rho^{3}{\rm Diff}^{0}_{\rm b}\) and \(D_{(g_{b},dA_{b})}T=T^{1}\partial_{t_{\chi_{0}}}+T^{0}\) with \[T^{1}\in\rho^{2}{\rm Diff}^{0}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c} T^{*}}\bar{X}\oplus\widetilde{s^{c}T^{*}}\bar{X}),\quad T^{0}\in\rho^{3}{\rm Diff }^{1}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c}T^{*}}\bar{X}\oplus \widetilde{s^{c}T^{*}}\bar{X}),\] and thus \[2L^{E}_{b,\gamma}(\mathring{g},\mathring{A})=-\Box_{g_{b},2}\mathring{g}+L^{E, 1}\partial_{t_{\chi_{0}}}+L^{E,0}\] with \[L^{E,1}\in\rho^{2}{\rm Diff}^{0}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c }T^{*}}\bar{X}\oplus\widetilde{s^{c}T^{*}}\bar{X}),\quad L^{E,0}\in\rho^{3}{ \rm Diff}^{1}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c}T^{*}}\bar{X}\oplus \widetilde{s^{c}T^{*}}\bar{X}).\] According to the expression for \(L^{E}_{b,\gamma}(\mathring{g},\mathring{A})\) in (4.64) and using Proposition 4.8, we find that \[L_{b,\gamma}(\mathring{g},\mathring{A})=-\Box_{g_{b},1}\mathring{A}+L^{M,1} \partial_{t_{\chi_{0}}}+L^{M,0}\] where \[L^{M,1}\in\rho^{2}{\rm Diff}^{0}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c }T^{*}}\bar{X}\oplus\widetilde{s^{c}T^{*}}\bar{X}),\quad L^{M,0}\in\rho^{3}{ \rm Diff}^{1}_{\rm b}(\bar{X};S^{2\widetilde{s}\widetilde{c}T^{*}}\bar{X}\oplus \widetilde{s^{c}T^{*}}\bar{X}).\] Combining the above conclusions for \(L^{E}_{b,\gamma}\) and \(L^{M}_{b,\gamma}\) and using the description of \(\Box_{g_{b},1},\Box_{g_{b},2}\) in Proposition 4.6, we obtain (4.70)-(4.72). Now we let \[\widehat{L_{b,\gamma}}(\sigma)=e^{it_{b,*}\sigma}L_{b,\gamma}e^{-it_{b,*} \sigma},\quad\widehat{\mathcal{P}_{b,\gamma}}(\sigma)=e^{it_{b,*}\sigma} \mathcal{P}_{b,\gamma}e^{-it_{b,*}\sigma},\quad\widehat{\mathcal{W}_{b,\gamma}} (\sigma)=e^{it_{b,*}\sigma}\mathcal{W}_{b,\gamma}e^{-it_{b,*}\sigma}. \tag{4.73}\] In order to study the property of \(\widehat{L_{b,\gamma}}(\sigma),\ \widehat{\mathcal{P}_{b,\gamma}}(\sigma),\ \widehat{\mathcal{W}_{b,\gamma}}(\sigma)\), we first express \(L_{b,\gamma},\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma}\) in the coordinates \((t_{b,*},x^{i}),i=1,2,3\). **Lemma 4.12**.: _In the coordinates \((t_{b,*},x^{i}),i=1,2,3\), the operator \(\Box_{g_{b},0}\) take the form_ \[\Box_{g_{b},0}=Q^{2}\partial^{2}_{t_{b,*}}+Q^{1}\partial_{t_{b,*}}+\widehat{ \Box_{g_{b},0}}(0) \tag{4.74}\] _where \(Q^{2}\in\rho^{2}C^{\infty}(\bar{X}),\ Q^{1}\in\rho^{2}\partial_{\rho}-2\rho+ \rho^{3}{\rm Diff}^{1}_{\rm b}(\bar{X})\) and \(\widehat{\Box_{g_{b},0}}(0)\in\rho^{2}{\rm Diff}^{2}_{\rm b}(\bar{X})\)._ _Moreover, the operators \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma},L_{b,\gamma}\) have the form_ \[\mathcal{P}_{b,\gamma} =-\frac{1}{2}\Box_{g_{b},0}\otimes{\rm Id}_{4\times 4}+\tilde{P}^{1} \partial_{t_{b,*}}+\tilde{P}^{0}, \tag{4.75}\] \[\mathcal{W}_{b,\gamma} =-\frac{1}{2}\Box_{g_{b},0}\otimes{\rm Id}_{4\times 4}+\tilde{W}^{1} \partial_{t_{b,*}}+\tilde{W}^{0},\] (4.76) \[L_{b,\gamma} =-\Box_{g_{b},0}\otimes{\rm Id}_{14\times 14}+\tilde{L}^{1} \partial_{t_{b,*}}+\tilde{L}^{0}, \tag{4.77}\] _where_ \[\tilde{P}^{1},\tilde{W}^{1},\tilde{L}^{1}\in\rho^{2}{\rm Diff}^{0}_{\rm b}(\bar{ X}),\quad\tilde{P}^{0},\tilde{W}^{0},\tilde{L}^{0}\in\rho^{3}{\rm Diff}^{0}_{\rm b}(\bar{X}).\] Proof.: With parameter \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\), near \(\partial_{+}\bar{X}\), we have \(t_{\chi_{0}}=t\) and \(t_{b,*}=t-r_{(\mathbf{m},0,\mathbf{Q}),*}\). 
Then changing from \((t_{\chi_{0}},r)\) coordinates to \((t_{b,*},r)\) coordinates transforms \(\partial_{t_{\chi_{0}}},\partial_{r}\) into \[\partial_{t_{b,*}},\quad\partial_{r}-\frac{1}{\mu_{(\mathbf{m},0,\mathbf{Q})}} \partial_{t_{b,*}}\quad\text{near}\quad\partial_{+}\bar{X},\quad\mu_{( \mathbf{m},0,\mathbf{Q})}=1-\frac{2\mathbf{m}}{r}+\frac{\mathbf{Q}^{2}}{r^{2}}.\] Since \(g_{b}-g_{(\mathbf{m},0,\mathbf{Q})}\in\rho^{2}C^{\infty}(\bar{X};S^{2\bar{ \mathrm{s}}\bar{c}\bar{T}^{*}\bar{X}})\), using the facts that in the coordinate \((t,x^{i})\), \[\Gamma^{\kappa}_{\alpha\beta}(g_{(\mathbf{m},0,\mathbf{Q})})=\mathcal{O}^{2}, \quad\Gamma^{\kappa}_{\alpha\beta}(g_{b})-\Gamma^{\kappa}_{\alpha\beta}(g_{( \mathbf{m},0,\mathbf{Q})})\in\mathcal{O}^{3},\] it follows that \[\square_{g_{b},0}-\square_{g_{(\mathbf{m},0,\mathbf{Q})},0}=\rho^{2}C^{\infty} (\bar{X})\partial_{t_{\chi_{0}}}^{2}+\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}( \bar{X})\partial_{t_{\chi_{0}}}+\rho^{4}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X})\] and thus \[\square_{g_{b},0}-\square_{g_{(\mathbf{m},0,\mathbf{Q})},0}=\rho^{2}C^{\infty} (\bar{X})\partial_{t_{b,*}}^{2}+\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}(\bar{X} )\partial_{t_{b,*}}+\rho^{4}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X}).\] Then it remains to calculate \(\square_{g_{(\mathbf{m},0,\mathbf{Q})},0}\) in \((t_{b,*},r)\) coordinates. Since near \(\partial_{+}\bar{X}\) \[g_{(\mathbf{m},0,\mathbf{Q})}=-\mu_{(\mathbf{m},0,\mathbf{Q})}dt_{b,*}^{2}-2 dt_{b,*}dr+r^{2}\not{g},\] we have for \(\rho=1/r\) \[\square_{g_{(\mathbf{m},0,\mathbf{Q})},0} =-2\partial_{r}\partial_{t_{b,*}}+\mu_{(\mathbf{m},0,\mathbf{Q})} \partial_{r}^{2}+\frac{2r-2\mathbf{m}}{r^{2}}\partial_{r}-\frac{2}{r}\partial _{t_{b,*}}+\frac{1}{r^{2}}\not{\Delta}\] \[=2\rho^{2}\partial_{\rho}\partial_{t_{b,*}}-2\rho\partial_{t_{b,* }}+\rho^{2}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X})\quad\text{near}\quad\partial _{+}\bar{X}.\] This completes the proof for \(\square_{g_{b},0}\). Then the proof for \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma},L_{b,\gamma}\) follows directly from Proposition 4.6 and Lemma 4.11. Then the following proposition is a direct result of Lemma 4.12. 
**Proposition 4.13**.: _The operator \(\square_{g_{b},0}\) take the form_ \[\widehat{\square_{g_{b},0}}(\sigma)=-2i\sigma\rho(\rho\partial_{\rho}-1)+ \widehat{\square_{g_{b},0}}(0)+\sigma Q+\sigma^{2}V \tag{4.78}\] _where \(V\in\rho^{2}C^{\infty}(\bar{X}),\ Q\in\rho^{3}\mathrm{Diff}^{1}_{\mathrm{b}}( \bar{X})\) and \(\widehat{\square_{g_{b},0}}(0)\in\rho^{2}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X})\)._ _Moreover, the operators \(\,,\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma},L_{b,\gamma}\) have the form_ \[\mathcal{P}_{b,\gamma} =i\sigma\rho(\rho\partial_{\rho}-1)\otimes\mathrm{Id}_{4\times 4}+ \widehat{\mathcal{P}_{b,\gamma}}(0)+\sigma Q_{P}+\sigma^{2}V_{P}, \tag{4.79}\] \[\mathcal{W}_{b,\gamma} =i\sigma\rho(\rho\partial_{\rho}-1)\otimes\mathrm{Id}_{4\times 4}+ \widehat{\mathcal{W}_{b,\gamma}}(0)+\sigma Q_{W}+\sigma^{2}V_{W},\] (4.80) \[L_{b,\gamma} =2i\sigma\rho(\rho\partial_{\rho}-1)\otimes\mathrm{Id}_{14\times 14 }+\widehat{L_{b,\gamma}}(0)+\sigma Q_{L}+\sigma^{2}V_{L}, \tag{4.81}\] _where_ \[V_{P},V_{W},V_{L}\in\rho^{2}\mathrm{Diff}^{0}_{\mathrm{b}}(\bar{X}),\quad Q_{ P},Q_{W},Q_{L}\in\rho^{2}\mathrm{Diff}^{1}_{\mathrm{sc}}(\bar{X}),\quad\widehat{ \mathcal{P}_{b,\gamma}}(0),\widehat{\mathcal{W}_{b,\gamma}}(0),\widehat{L_{b,\gamma}}(0)\in\rho^{2}\mathrm{Diff}^{2}_{\mathrm{b}}(\bar{X}).\] ### Constructing gauged initial data According to Lemmas 3.5 and 3.8, we now show how to construct gauged initial data for the gauge-fixed Einstein-Maxwell system (3.21)-(3.22) from initial data given on \(\Sigma_{0}\) which are close to the data \((h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})\) induced by the KN black holes with parameter \(b\). Recall that we exploit the generalized wave map gauge for the Einstein equations, \[\widetilde{\Upsilon}^{E}(g;g_{b})=\Upsilon^{E}(g;g_{b})-\theta(g;g_{b})=gg_{b}^ {-1}\delta_{g}G_{g}g_{b}-\theta(g;g_{b})=0 \tag{4.82}\] and the generalized Lorenz gauge for the Maxwell equations, \[\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b})=\Upsilon^{M}(g,A;g_{b})-\Upsilon^{M}(g,A_{b};g_{b})=\mathrm{tr}_{g}\delta_{g_{b}}^{*}A-\mathrm{tr}_{g}\delta_{g_{b}}^{ *}A_{b}=0. \tag{4.83}\] We point out that for a general treatment, we will construct the gauged initial data for any \(\theta(g;g_{b})\) satisfying that \(\theta(g_{b};g_{b})=0\) and \(\theta(g;g_{b})\) contains no derivatives of \(g\), without being restricted to the specific \(\theta(g;g_{b})\) we chose in SS4.3. In order to incorporate the vanishing magnetic charge condition, we consider for \(s>2,0<\alpha<1\) the subspace of the initial data for ungauged Einstein-Maxwell equations \[\begin{split}\mathcal{Z}^{s,\alpha}:=\Big{\{}&(h,k, \mathbf{E},\mathbf{H})\mid d\star_{h}\mathbf{H}=0,\ \int_{\mathbb{S}^{2}}\star_{h}\mathbf{H}=0,\\ &(h-h_{b},k-k_{b},\mathbf{E}-\mathbf{E}_{b},\mathbf{H}-\mathbf{H} _{b})\in\bar{H}_{\mathrm{b}}^{s+1,-1/2+\alpha}(\bar{\Sigma}_{0};S_{>0}^{2}{}^ {\mathrm{sc}}T^{*}\bar{\Sigma}_{0})\\ &\times\bar{H}_{\mathrm{b}}^{s,1/2+\alpha}(\bar{\Sigma}_{0};S^{2 \mathrm{sc}}T^{*}\bar{\Sigma}_{0})\times\bar{H}_{\mathrm{b}}^{s,1/2+\alpha}( \bar{\Sigma}_{0};{}^{\mathrm{sc}}T^{*}\bar{\Sigma}_{0})\times\bar{H}_{\mathrm{ b}}^{s,1/2+\alpha}(\bar{\Sigma}_{0};{}^{\mathrm{sc}}T^{*}\bar{\Sigma}_{0}) \Big{\}},\end{split} \tag{4.84}\] where \(S_{>0}^{2}{}^{\mathrm{sc}}T^{*}\bar{\Sigma}_{0}\) denotes the fiber bundle of positive definite inner products on \({}^{\mathrm{sc}}T\bar{\Sigma}_{0}\). 
We remark that the definition of the space \(\mathcal{Z}^{s,\alpha}\) includes (3.14) and one of the constraint equations (3.13), which is necessary since we study the Einstein-Maxwell system in terms of the potential \(A\) rather than the electromagnetic 2-form \(F\). For the initial value problems of the gauge-fixed Einstein-Maxwell system \(P(g,A)=0\), which is a quasi-linear hyperbolic system, the initial data induced by \((g,A)\) at the spacelike hypersurface \(\Sigma_{0}=\{\mathbf{t}=0\}\) is defined as \[\gamma_{0}(g,A):=(g|_{\Sigma_{0}},\mathcal{L}_{\partial_{0}}g|_{\Sigma_{0}},A |_{\Sigma_{0}},\mathcal{L}_{\partial_{0}}A|_{\Sigma_{0}}). \tag{4.85}\] In particular, we have \(\gamma_{0}(g_{b},A_{b})=(g_{b}|_{\Sigma_{0}},0,A_{b}|_{\Sigma_{0}},0)\). Then we define the subspace of the initial data for the gauge-fixed Einstein-Maxwell equations \[\begin{split}\mathcal{Y}^{s,\alpha}:=\Big{\{}&(g_{ 0},g_{1},A_{0},A_{1})\mid(g_{0},g_{1},A_{0},A_{1})-\gamma_{0}(g_{b},A_{b})\in \bar{H}_{\mathrm{b}}^{s+1,-1/2+\alpha}(\bar{\Sigma}_{0};S^{2}{}^{\mathrm{sc}} T^{*}\bar{\Sigma}_{0})\\ &\times\bar{H}_{\mathrm{b}}^{s,1/2+\alpha}(\bar{\Sigma}_{0};S^{2 \mathrm{sc}}\widetilde{T^{*}}\bar{\Sigma}_{0})\times\bar{H}_{\mathrm{b}}^{s,- 1/2+\alpha}(\bar{\Sigma}_{0};\widetilde{T^{*}}\bar{\Sigma}_{0})\times\bar{H}_ {\mathrm{b}}^{s-1,1/2+\alpha}(\bar{\Sigma}_{0};\widetilde{\mathrm{sc}} \widetilde{T^{*}}\Sigma_{0})\Big{\}}.\end{split} \tag{4.86}\] We now prove: **Proposition 4.14**.: _There exist a neighborhood_ \[(h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})\in\mathcal{U}\subset\mathcal{Z}^{ 2,\alpha}\] _of KN initial data and a smooth map_ \[i_{b}\colon\mathcal{U}\cap\mathcal{Z}^{s,\alpha}\to\mathcal{Y}^{s,\alpha}\] _for all \(s\geq 2,0<\alpha<1\), such that for \((h,k,\mathbf{E},\mathbf{H})\in\mathcal{U}\), the sections_ \[(g_{0},g_{1},A_{0},A_{1})=i_{b}(h,k,\mathbf{E},\mathbf{H})\] _induce the data \((h,k,\mathbf{E},\mathbf{H})\) on \(\Sigma_{0}=\{\mathbf{t}=0\}\), and they satisfy the gauge conditions (4.82) and (4.83) in the sense that for any section \((g,A)\) of \(S^{2}T^{*}M\oplus T^{*}M\) near \(\Sigma_{0}\) with \(\gamma_{0}(g,A)=(g_{0},g_{1},A_{0},A_{1})\), it induces the initial data \((h,k,\mathbf{E},\mathbf{H})\) at \(\Sigma_{0}\) (i.e., \(\tau((g,dA))=(h,k,\mathbf{E},\mathbf{H})\)) and the conditions (4.82) and (4.83) hold at \(\Sigma_{0}\)._ _Moreover, for exact KN data with parameter \(b\), we have \(i_{b}(h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})=\gamma_{0}(g_{b},A_{b})=(g_{b }|_{\Sigma_{0}},0,A_{b}|_{\Sigma_{0}},0)\)._ Proof.: We will start with the metric components \((g_{0},g_{1})\) of the map \(i_{b}\). We write \[g_{b}=-N_{b}^{2}(dt)^{2}+g_{b,ij}(dx^{i}+X_{b}^{i}dt)(dx^{j}+X_{b}^{j}dt)\] where \(N_{b}\in 1+\rho C^{\infty}(\Sigma_{0})\) and \(X_{b}\in\rho^{2}C^{\infty}(\bar{\Sigma}_{0};{}^{\mathrm{sc}}T\bar{\Sigma}_{0})\). Then we define the component \(g_{0}\) of \(i_{b}(h,k,\mathbf{E},\mathbf{B})\) as \[g_{0}:=g|_{\{\mathbf{t}=0\}}=-N_{b}^{2}(dt)^{2}+h_{ij}(dx^{i}+X_{b}^{i}dt)(dx^{ j}+X_{b}^{j}dt).\] Moreover, \(g_{0}=g_{b}|_{\Sigma_{0}}\) if \(h=h_{b}\) and \(g_{0}\) satisfies the estimate \[\|g_{0}-g_{b}|_{\Sigma_{0}}\|_{\bar{H}_{\mathrm{b}}^{s+1,-1/2+\alpha}(\bar{ \Sigma}_{0};S^{2\mathrm{sc}}\widetilde{T^{*}}\Sigma_{0})}\lesssim\|h-h_{b}\|_{ B_{b}^{s+1,-1/2+\alpha}(\Sigma_{0};\bar{\Sigma}_{0}^{2}{}^{\mathrm{sc}}T^{*} \bar{\Sigma}_{0})}.\] We next define \(g_{1}=\mathcal{L}_{\partial_{0}}g|_{\Sigma_{0}}\). 
Let \(\nabla^{g_{0}}\) be the Levi-Civita connection of \(g_{0}\) and let \(n\) resp. \(n_{b}\) be the future timelike unit normal to \(\Sigma_{0}\) with respect to \(g_{0}\) resp. \(g_{b}|_{\Sigma_{0}}\). Then we have \[n=\frac{-\nabla^{g_{0}}\mathbf{t}}{\sqrt{-g_{0}(\nabla^{g_{0}}\mathbf{t}, \nabla^{g_{0}}\mathbf{t})}}=(1+N_{b}^{-2}h(X_{b},X_{b}))^{-1/2}(\frac{1}{N_{b} },-\frac{X_{b}^{i}}{N_{b}}),\quad n-n_{b}\in\bar{H}_{\mathrm{b}}^{s+1,1/2+ \alpha}(\bar{\Sigma}_{0};\widetilde{\mathrm{sc}}\widetilde{T}\bar{\Sigma}_{0}).\] Since we require \[k_{ij}=\mathrm{I\!I}_{g}(\partial_{i},\partial_{j})|_{\Sigma_{0}}=-\langle \nabla^{g}_{\partial_{i}}n,\partial_{j}\rangle=-\frac{1}{2}\mathcal{L}_{n}g_{ ij}=-\frac{1}{2}n^{\alpha}\partial_{\alpha}g_{ij}-\frac{1}{2}g_{i\alpha}\partial_{j}n^{ \alpha}-\frac{1}{2}g_{j\alpha}\partial_{i}n^{\alpha},\] where the Greek index \(\alpha\) ranges over \(0,1,2,3\) with \(x^{0}=\mathfrak{t}\), we must define \[(g_{1})_{ij}=\partial_{t}g_{ij}|_{\Sigma_{0}}=-2(1+N_{b}^{-2}h(X_{b},X_{b}))^{1/ 2}N_{b}k_{ij}+\mathcal{L}_{X_{b}}h_{ij}\] Now the initial data for the remaining components of \(g_{1}\), i.e. \((g_{1})_{0\alpha}=\partial_{t}g_{0\alpha}|_{\Sigma_{0}}\), are fixed by the generalized wave map gauge condition. More precisely, since \(\theta(g;g_{b})\) does not contain the derivatives of \(g\), it follows that \[0=\widetilde{\Upsilon}^{E}(g;g_{b})_{i} =g_{i\kappa}g^{\mu\nu}(\Gamma(g)_{\mu\nu}^{\kappa}-\Gamma(g_{b})_ {\mu\nu}^{\kappa})-\theta(g;g_{b})_{i}\] \[=g^{\mu\nu}\partial_{\mu}g_{i\nu}-\frac{1}{2}g^{\mu\nu}\partial_ {i}g_{\mu\nu}-g_{i\kappa}g^{\mu\nu}\Gamma(g)_{\mu\nu}^{\kappa}-\theta(g;g_{b })_{i}\quad\text{at}\quad\Sigma_{0}\] fixes \(\partial_{\mathfrak{t}}g_{0i}|_{\Sigma_{0}}\) and \[0=\widetilde{\Upsilon}^{E}(g;g_{b})_{0} =g_{0\kappa}g^{\mu\nu}(\Gamma(g)_{\mu\nu}^{\kappa}-\Gamma(g_{b}) _{\mu\nu}^{\kappa})-\theta(g;g_{b})_{0}\] \[=g^{\mu\nu}\partial_{\mu}g_{0\nu}-\frac{1}{2}g^{\mu\nu}\partial_ {0}g_{\mu\nu}-g_{0\kappa}g^{\mu\nu}\Gamma(g)_{\mu\nu}^{\kappa}-\theta(g;g_{b })_{0}\quad\text{at}\quad\Sigma_{0}\] fixes \(\partial_{\mathfrak{t}}g_{00}|_{\Sigma_{0}}\). This explicit construction of \(g_{1}\) implies that \(g_{1}=\partial_{t}g_{b}|_{\Sigma_{0}}=0\) if \((h,k)=(h_{b},k_{b})\), and \[\|g_{1}\|_{\bar{B}_{\mathrm{b}}^{s,t/2+\alpha}(\Sigma_{0};\mathbb{S}^{2s\bar{ \omega}T^{*}}\Sigma_{0})}\lesssim\|h-h_{b}\|_{\bar{B}_{\mathrm{b}}^{s+1,-1/2+ \alpha}(\Sigma_{0};\mathbb{S}^{2}_{\geq 0}=T^{*}\Sigma_{0})}+\|k-k_{b}\|_{\bar{B}_{ \mathrm{b}}^{s,t/2+\alpha}(\Sigma_{0};\mathbb{S}^{2s\bar{\omega}T^{*}}\Sigma_ {0})}.\] So we finish the construction of the initial data \((g_{0},g_{1})\) which induces the data \((h,k)\) on \(\Sigma_{0}\) and satisfies the generalized wave map gauge condition at \(\Sigma_{0}\). We next turn to the construction of the components \((A_{0},A_{1})\) of the map \(i_{b}\). Here we closely follow the construction in [60, SS3.5]. Motivated by the relation \(F=dA\), we first construct a bounded linear map \[\mathcal{B}^{\sharp}\colon\Big{\{}u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{ \Sigma}_{0};\Lambda^{2\mathrm{sc}}T^{*}\bar{\Sigma}_{0})\colon du=0,\ \int_{\mathbb{S}^{2}}u=0\Big{\}}\to\bar{H}_{\mathrm{b}}^{s,\ell-1}(\bar{\Sigma} _{0};{}^{\mathrm{sc}}T^{*}\bar{\Sigma}_{0}) \tag{4.87}\] such that \(d\circ\mathcal{B}^{\sharp}=\mathrm{Id}\) with \(\ell>-1/2\). 
Using the structure \(\bar{\Sigma}_{0}\cong I\times\mathbb{S}^{2}\), \(I=[0,1/r_{-}]_{\rho=1/r}\), of \(\bar{\Sigma}_{0}\), we define \(\pi\colon\bar{\Sigma}_{0}\to\mathbb{S}^{2}\) to be the projection onto the second factor, and \(i::\mathbb{S}^{2}\hookrightarrow\bar{\Sigma}_{0}\) be the embedding \(\omega\mapsto(1/r_{-},\omega)\). Then the map \(K\colon\bar{H}_{\mathrm{b}}^{\sigma,\ell}(\bar{\Sigma}_{0};\Lambda^{\mathrm{ sc}}T^{*}\bar{\Sigma}_{0})\to\bar{H}_{\mathrm{b}}^{\sigma,\ell-1}(\bar{\Sigma}_{0}; \Lambda^{\mathrm{sc}}T^{*}\Sigma_{0})\), \(\sigma\geq 0,\ell>-1/2\), defined by linear extension from \[K(f(r,\omega)dr\wedge\pi^{*}u)(r,\omega):=(\pi^{*}u)(r,\omega)\int_{r_{-}}^{r} f(s,\omega)\,ds,\quad K(f(r,\omega)\pi^{*}u):=0,\] for \(f\in\bar{H}_{\mathrm{b}}^{\sigma,1/2+\alpha}(\bar{\Sigma}_{0})\) and \(u\in C^{\infty}(\mathbb{S}^{2};\Lambda T^{*}\mathbb{S}^{2})\), satisfies \(\mathrm{Id}-\pi^{*}j^{*}=dK+Kd\). Therefore, for \(u\) satisfying \(du=0\), we conclude that \(u=dKu+\pi^{*}(j^{*}u)\). According to Hodge decomposition theory and the fact that the de Rham cohomology group \(H^{2}_{dR}(\mathbb{S}^{2})\cong\mathbb{R}\), we see that \(u_{1}:=j^{*}u\in H^{s}(\mathbb{S}^{2};\Lambda^{2}T^{*}\mathbb{S}^{2})\) can be written uniquely as \(u_{1}=\not\!\!u_{0}+\not\!\!u_{2}\) where \(u_{0}\in\mathbb{R}\) and \(u_{2}\in H^{s+1}(\mathbb{S}^{2};T^{*}\mathbb{S}^{2})\). Here, \(\not\!\!d,\not\!d,\not\!\!d\) denote the exterior differential, codifferential and Hodge star on \(\mathbb{S}^{2}\). Since \(K(f(r,\omega)\pi^{*}u)=0\) and \(\int_{\mathbb{S}^{2}}u=0\), it follows that \[0=\int_{\mathbb{S}^{2}}\pi^{*}(j^{*}u)=\int_{\mathbb{S}^{2}}u^{\prime}=\int_{ \mathbb{S}^{2}}u_{0}\,d\mathrm{Vol}_{\not\!\!g}=4\pi u_{0},\] and thus \(u=d\mathcal{B}^{\sharp}u\) with \(\mathcal{B}^{\sharp}u:=Ku+\pi^{*}u_{2}\), which finishes the construction of the map (4.87). Then we define \[A_{0}=(\iota_{\partial_{t}}A_{b})\,dt+A_{0}^{\sharp},\quad A_{0}^{\sharp}:=i^{*} A_{b}+\mathcal{A}^{\sharp}(\star_{h}\mathbf{H}-\star_{h_{b}}\mathbf{H}_{b})\in i^{*}A_{b}+ \bar{H}_{\mathrm{b}}^{s,-1/2+\alpha}(\bar{\Sigma}_{0};{}^{\mathrm{sc}}T^{*}\bar{ \Sigma}_{0}).\] The above definition satisfies \(dA_{0}^{\sharp}=i^{*}(\star_{h}\mathbf{H})\), and \(A_{0}=A_{b}\) if \((h,\mathbf{H})=(h_{b},\mathbf{H}_{b})\). Next we assume that \[A_{1}=a_{1}\,dt+A_{1}^{\sharp}\] with \(a_{1}\in\bar{H}_{\mathrm{b}}^{s-1,\ell_{1}}(\bar{\Sigma}_{0})\) and \(A_{1}^{\sharp}\in\bar{H}_{\mathrm{b}}^{s-1,\ell}(\bar{\Sigma}_{0};{}^{\mathrm{sc}}T ^{*}\bar{\Sigma}_{0})\) to be determined. Recall the future timelike unit normal \(n\) to \(\Sigma_{0}\) with respect to \(g_{0}\) as defined above. The requirement that \(-i^{*}\iota_{n}dA=\mathbf{E}\) at \(\Sigma_{0}\) then reads \(\mathbf{E}=n^{0}(d(\iota_{\partial_{t}}A_{b})-A_{1}^{\sharp})-\iota_{n^{k} \partial_{k}}dA_{0}^{\sharp}\). Then we must define \[A_{1}^{\sharp}:=d(\iota_{\partial_{t}}A_{b})-\frac{(\mathbf{E}+\iota_{n^{k} \partial_{k}}dA_{0}^{\sharp})}{n^{0}}\in\bar{H}_{\mathrm{b}}^{s-1,1/2+\alpha}( \bar{\Sigma}_{0};{}^{\mathrm{sc}}T^{*}\bar{\Sigma}_{0}),\] which satisfies that \(A_{1}^{\sharp}=0\) if \((h,k,\mathbf{E},\mathbf{H})=(h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})\). 
Lastly, the generalized Lorenz gauge condition \[0=\widetilde{\Upsilon}^{M}(g,A;g_{b},A_{b}) =\operatorname{tr}_{g_{0}}\delta^{s}_{g_{0}}(A-A_{b})\] \[=\frac{1}{2}g_{0}^{\mu\nu}\Big{(}\partial_{\mu}(A_{\nu}-A_{b,\nu}) +\partial_{\mu}(A_{\nu}-A_{b,\nu})-2\Gamma^{\kappa}_{\mu\nu}(g_{0})(A_{\kappa} -A_{b,\kappa})\Big{)}\quad\text{at}\quad\Sigma_{0},\] where \(g_{0}^{\mu\nu}\) is the inverse \(g_{0}\), fixes \(a_{1}=\partial_{t}A_{0}\in\bar{H}^{s-1,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_ {0})\). In particular, we have \(a_{1}=0\) if the data \((h,k,\mathbf{E},\mathbf{H})\) are induced by \((g_{b},A_{b})\). Moreover, we have \[\|A_{0}-A_{b}|_{\Sigma_{0}}\|_{\bar{H}^{s,-1/2+\alpha}_{\mathrm{ b}}(\bar{\Sigma}_{0};\widetilde{\tau}^{*}\bar{\Sigma}_{0})}+\|A_{1}\|_{\bar{H}^{s-1,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};\widetilde{\tau}^{*}\bar{\Sigma}_{ 0})}\] \[\lesssim\|h-h_{b}\|_{\bar{H}^{s,-1/2+\alpha}_{\mathrm{b}}(\bar{ \Sigma}_{0};\bar{S}^{2}_{>0}\widetilde{\tau}^{*}\bar{\Sigma}_{0})}+\|\mathbf{H }-\mathbf{H}_{b}\|_{\bar{H}^{s,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0}; \widetilde{\tau}^{*}\bar{\Sigma}_{0})}+\|\mathbf{E}-\mathbf{E}_{b}\|_{\bar{H} ^{s,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};\widetilde{\tau}^{*}\bar{ \Sigma}_{0})}\] Also, according to the above explicit construction of the map \(i_{b}\), we see that the map \(i_{b}\) is smooth. This completes the proof. Now we can make use of the map \(i_{b}\) to construct gauged initial data at the linearized level. Correspondingly, we consider for \(s>2,0<\alpha<1\) the subspace of the initial data for ungauged linearized Einstein-Maxwell equations around \((g_{b},A_{b})\) \[\begin{split}\dot{\mathcal{Z}}^{s,\alpha}:=\Big{\{}(\dot{h},\dot {k},\dot{\mathbf{E}},\dot{\mathbf{H}})\mid D_{(h_{b},\mathbf{H}_{b})}(\star_{( \bullet)}(\bullet))(\dot{h},\dot{\mathbf{H}})=0,\;\int_{\mathbb{S}^{2}}\frac{ d}{ds}|_{s=0}\star_{h_{b}+sh}(\mathbf{H}_{b}+s\dot{\mathbf{H}})=0,\\ (\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\in\bar{H}^{s+ 1,-1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};S^{2}_{>0}{}^{\mathrm{s}\kappa}T^ {*}\bar{\Sigma}_{0})\\ \times\bar{H}^{s,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};S^{2 \mathrm{s}\mathrm{c}}T^{*}\bar{\Sigma}_{0})\times\bar{H}^{s,1/2+\alpha}_{ \mathrm{b}}(\bar{\Sigma}_{0};{}^{\mathrm{s}\mathrm{c}}T^{*}\bar{\Sigma}_{0}) \times\bar{H}^{s,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};{}^{\mathrm{s} \mathrm{c}}T^{*}\bar{\Sigma}_{0})\Big{\}},\end{split} \tag{4.88}\] where \(S^{2}_{>0}{}^{\mathrm{s}\kappa}T^{*}\bar{\Sigma}_{0}\) denotes the fiber bundle of positive definite inner products on \({}^{\mathrm{s}\kappa}T\Sigma_{0}\). We again remark that the definition of the space \(\dot{\mathcal{Z}}^{s,\alpha}\) incorporates the vanishing of the linearized magnetic charge and the linearization of one of the constraint equations (3.13). 
We also define the subspace of the initial data for the gauge-fixed linearized Einstein-Maxwell equations \[\begin{split}\dot{\mathcal{Y}}^{s,\alpha}:=\Big{\{}(\dot{g}_{0}, \dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\mid(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0}, \dot{A}_{1})\in\bar{H}^{s+1,-1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};S^{2 \widetilde{\mathrm{s}\kappa}T^{*}}\bar{\Sigma}_{0})\\ \times\bar{H}^{s,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};S^{2 \widetilde{\mathrm{s}\kappa}T^{*}}\bar{\Sigma}_{0})\times\bar{H}^{s,-1/2+ \alpha}_{\mathrm{b}}(\bar{\Sigma}_{0};\widetilde{\widetilde{T}^{*}\bar{ \Sigma}_{0}})\times\bar{H}^{s-1,1/2+\alpha}_{\mathrm{b}}(\bar{\Sigma}_{0}; \widetilde{\widetilde{\mathrm{s}\kappa}T^{*}}\Sigma_{0})\Big{\}}.\end{split} \tag{4.89}\] Then we have **Corollary 4.15**.: _For all \(s\geq 2,0<\alpha<1\), the map_ \[D_{(h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})}i_{b}\colon\dot{\mathcal{Z}}^{s, \alpha}\to\dot{\mathcal{Y}}^{s,\alpha}\] _satisfies that for \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\in\dot{\mathcal{Z}}^{s,\alpha}\), the sections_ \[(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})=D_{(h_{b},k_{b},\mathbf{E}_{b}, \mathbf{H}_{b})}i_{b}(\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\] _induce the data \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) on \(\Sigma_{0}=\{\mathbf{t}=0\}\), and they satisfy the linearized gauge conditions \(D_{g_{0}}\widetilde{\Upsilon}^{E}(\dot{g})=0\) and \(D_{(g_{b},A_{b})}\widetilde{\Upsilon}^{M}(\dot{g},\dot{A};g_{b},A_{b})=-\delta _{g_{b}}\dot{A}=0\) in the sense that for any section \((\dot{g},\dot{A})\) of \(S^{2}T^{*}M\oplus T^{*}M\) near \(\Sigma_{0}\) with \(\gamma_{0}(\dot{g},\dot{A})=(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\), it induces the initial data \((\dot{h},\dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\) at \(\Sigma_{0}\) and the linearized gauge conditions hold at \(\Sigma_{0}\)._ Proof.: Let \[h(s)=h_{b}+s\dot{h},\;k(s)=k_{b}+s\dot{k},\;\mathbf{E}(s)=\mathbf{E}_{b}+s\dot{ \mathbf{E}},\;\mathbf{B}(s)=\mathbf{H}_{b}+s\dot{\mathbf{H}}.\] For \(s\) close to \(0\), we find that \[\star_{h+s\dot{h}}\mathbf{H}(s)=\star_{h}\mathbf{H}+sD_{(h_{b},\mathbf{H}_{b})}( \star_{(\cdot)}(\cdot))(\dot{h},\dot{\mathbf{H}})+\mathcal{O}(s^{2})\] and \(d(s):=(h(s),k(s),\mathbf{E}(s),\mathbf{H}(s))\in\mathcal{U}\subset\mathcal{Z}^{2,\alpha}\). Therefore, we have \[(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})=\frac{d}{ds}i_{b}(d(s)) \big{|}_{s=0}=D_{(h_{b},k_{b},\mathbf{E}_{b},\mathbf{H}_{b})}i_{b}(\dot{h}, \dot{k},\dot{\mathbf{E}},\dot{\mathbf{H}})\] and \((\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\) satisfy the linearized gauge conditions in the sense that \[D_{g_{b}}\widetilde{\Upsilon}^{E}(\dot{g})=0,\quad D_{(g_{b},A_{b})}\widetilde{ \Upsilon}^{M}(\dot{g},\dot{A};g_{b},A_{b})=-\delta_{g_{b}}\dot{A}=0, \tag{4.90}\] for any \((\dot{g},\dot{A})\) with \(\gamma_{0}(\dot{g},\dot{A})=(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\) ## 5. Analysis of the linearized gauge-fixed Einstein-Maxwell operator Throughout this section, we continue using the notation from SS4. In SS5.1, we will discuss the characteristic set and the global dynamics of the Hamiltonian flow of the principal symbol \(p(\sigma)\) of the operator \(\widehat{\square_{g_{\mathrm{b}}}}(\sigma):=e^{it_{b,*}\sigma}\square_{g_{ \mathrm{b}}}e^{-it_{b,*}\sigma}\) on full subextremal KN spacetimes (see Figure 6 for an illustration). 
Concretely, first we will show that the characteristic set of \(p(\sigma)\) has two parts, one of which lies inside the ergoregion while the other is at the spatial infinity \(\partial_{+}\bar{X}\). Then we prove that there exist radial points, to which the nearby integral curves of the Hamiltonian vector field \(H_{p(\sigma)}\) converge in either forward direction or backward direction, at event horizon and spatial infinity \(\partial_{+}\bar{X}\). Finally, we describe the global dynamics of the Hamiltonian flow of the principal symbol \(p(\sigma)\). In SS5.2, we combine the global dynamics of the Hamiltonian flow of the principal symbol \(p(\sigma)\) established in SS5.1 with the elliptic estimate, propagation of singularities estimate, the radial point estimate at event horizon, the scattering radial point estimate at spatial infinity \(\partial_{+}\bar{X}\) and hyperbolic estimate to prove the uniform Fredholm estimates for the operators \(\widehat{\square_{g_{\mathrm{b}}}}(\sigma),\widehat{\mathcal{P}_{b,\gamma}}( \sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma),\widehat{L_{b,\gamma}}(\sigma)\) for bounded \(\sigma\) in the closed upper half plane. InSS5.3, we will give a detailed description of kernel of the following wave operators: \(\widehat{\square_{g_{\mathrm{b}}}}(\sigma)\) on scalar functions, \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) on scattering 1-forms and the linearized gauge-fixed Einstein-Maxwell operator \(\widehat{L_{b,\gamma}}(\sigma)\). In SS5.4, we will analyze the characteristic set and the global dynamics of the Hamiltonian flow of the semiclassical principal symbol \(p_{h,z}\) of the semiclassical rescaled operator \(h^{2}\widehat{\square_{g_{\mathrm{b}}}}(h^{-1}z)\) on full subextremal KN spacetimes for \(z\in\mathbb{R}\setminus 0\) (see Figure 10 for an illustration). Specifically, we first show that the characteristic set of \(p_{h,z}\) can be split into a disjoint union of two components. Then we prove that there exist radial points at event horizon and spatial infinity \(\partial_{+}\bar{X}\). Next, we discuss the trapped set associated to the Hamiltonian flow of the semiclassical symbol \(p_{h,z}\) and prove that it is normally hyperbolic. Finally, we describe the global dynamics of the Hamiltonian flow of the semiclassical principal symbol \(p_{h,z}\). In SS5.5, we combine the global dynamics of the Hamiltonian flow of the semiclassical principal symbol \(p_{h,z}\) established in SS5.4 with the elliptic estimate, propagation of singularities estimate, the radial point estimate at event horizon, the scattering radial point estimate at spatial infinity \(\partial_{+}\bar{X}\) and hyperbolic estimate, all of which are in the semiclassical version, together with the estimate at normally hyperbolic trapping to establish the high energy estimates for the operators \(\widehat{\square_{g_{\mathrm{b}}}}(\sigma),\widehat{\mathcal{P}_{b,\gamma}}( \sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma),\widehat{L_{b,\gamma}}(\sigma)\) for \(|\mathrm{Re}\,\sigma|\gg 1,\mathrm{Im}\,\sigma\leq C\) in the closed upper half plane. This implies the invertibility of these operators for \(|\mathrm{Re}\,\sigma|\gg 1,\mathrm{Im}\,\sigma\leq C\) in the closed upper half plane. In SS5.6, we shall establish the energy estimates for the solutions to various wave type equations on slowly rotating Kerr-Newman metrics. 
Fix KN black hole parameters \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) close to \(b_{0}=(\mathbf{m}_{0},\mathbf{a}_{0},\mathbf{Q}_{0})\) and let \[g=g_{\mathrm{b}}\] Recall that with \(a=|\mathbf{a}|\) \[g^{-1}=\rho_{b}^{-2}\bigg{(} \Delta_{b}\partial_{r}^{2}+\frac{\chi^{2}-1}{\Delta_{b}}\Big{(}( r^{2}+a^{2})\partial_{t_{\chi}}+a\partial_{\varphi}\Big{)}^{2}+\partial_{ \theta}^{2}+\frac{1}{\sin^{2}\theta}\Big{(}\partial_{\varphi}+a\sin^{2}\theta \partial_{t_{\chi}}\Big{)}^{2}\] \[+2\chi\Big{(}(r^{2}+a^{2})\partial_{t_{\chi}}+a\partial_{\varphi} \Big{)}\partial_{r}\bigg{)}.\] We define \[\widehat{\square_{g}}(\sigma):=e^{it_{b,*}\sigma}\square_{g_{\mathrm{b}}}e^{- it_{b,*}\sigma}.\] Let \(\overline{\mathrm{sc}T^{*}\bar{X}}\) be the compact manifold with the corners of codimension two obtained by radial compactification of the fibers \({}^{\mathrm{sc}}T^{*}\bar{X}\) (whose detail will be discussed later on). Then its boundary consists of three hypersurfaces \[\partial\overline{\mathrm{sc}T^{*}\bar{X}}={}^{\mathrm{sc}}S^{*}\bar{X}\cup \overline{\mathrm{sc}T^{*}}_{\partial_{-}\bar{X}}\bar{X}\cup\overline{ \mathrm{sc}T^{*}}_{\partial_{+}\bar{X}}\bar{X}\] where \({}^{\mathrm{sc}}S^{*}\bar{X}=({}^{\mathrm{sc}}T^{*}\bar{X}\setminus 0)/\mathbb{R}^{+}\) is called fiber infinity. ### Microlocal geometry and dynamics of Kerr-Newman spacetimes Now, we are ready to discuss the microlocal geometry of the KN metric \(g=g_{b}\). Away from \(\partial_{+}\bar{X}\), we note that \({}^{\rm sc}T^{*}\bar{X}\) is isomorphic to \(T^{*}\bar{X}\) and write the covectors as \[\xi_{r}dr+\xi_{\theta}d\theta+\xi_{\varphi}d\varphi.\] Then the principal symbol of \(\widehat{\square_{g}}(\sigma)\) is given by \[p=\sigma_{2}(\widehat{\square_{g}}(\sigma))=-\rho_{b}^{-2}\Big{(}\Delta_{b}\xi _{r}^{2}+\xi_{\theta}^{2}+\big{(}\frac{a^{2}}{\Delta_{b}}(\chi^{2}-1)+\frac{1}{ \sin^{2}\theta}\big{)}\xi_{\varphi}^{2}+2\chi a\xi_{\varphi}\xi_{r}\Big{)}.\] Near \(\partial_{+}\bar{X}\), we write the scattering covectors as \[\zeta_{\rho}\frac{d\rho}{\rho^{2}}+\zeta_{\theta}\frac{d\theta}{\rho}+\zeta_{ \varphi}\frac{d\varphi}{\rho},\quad\rho=\frac{1}{r}.\] Then the scattering principal symbol (see [94]) of \(\widehat{\square_{g}}(\sigma)\) at \(\partial_{+}\bar{X}\) is given by \[2\sigma\zeta_{\rho}-\zeta_{\rho}^{2}-\zeta_{\theta}^{2}-\frac{1}{\sin^{2} \theta}\zeta_{\varphi}^{2}=-(\zeta_{\rho}-\sigma)^{2}-\zeta_{\theta}^{2}- \frac{1}{\sin^{2}\theta}\zeta_{\varphi}^{2}+\sigma^{2}\] To handle the poles \(\{\theta=0\}\) and \(\{\theta=\pi\}\), where the spherical coordinates break down, we introduce new coordinates \[x_{1}=\sin\theta\cos\varphi,\quad x_{2}=\sin\theta\sin\varphi,\] and let \(\xi_{1}\) and \(\xi_{2}\) be dual variables to \(x_{1}\) and \(x_{2}\) respectively. Correspondingly, we have \[\xi_{\theta}=(x_{1}\xi_{1}+x_{2}\xi_{2})\cot\theta,\quad\xi_{\varphi}=x_{1} \xi_{2}-x_{2}\xi_{1}.\] and note that \(\xi_{\varphi}\) is a smooth function in \((x_{1},x_{2},\xi_{1},\xi_{2})\). Since \[\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2}=\frac{1-x_{1}^{2}- x_{2}^{2}}{x_{1}^{2}+x_{2}^{2}}(x_{1}\xi_{1}+x_{2}\xi_{2})^{2}+\frac{1}{x_{1}^{2}+x_ {2}^{2}}(x_{1}\xi_{2}-x_{2}\xi_{1})^{2}=\xi_{1}^{2}+\xi_{2}^{2}-(x_{1}\xi_{1}+ x_{2}\xi_{2})^{2}\] is a smooth function in \((r,x_{1},x_{2},\xi_{r},\xi_{1},\xi_{2})\) near the poles, so is \(p\). Therefore, we can perform all symbol calculations away from these two poles and extend the results to the poles. From now on, we do not emphasize the analysis near these two poles. #### 5.1.1. 
Characteristic set We first study the characteristic set. Away from \(\partial_{+}\bar{X}\), since \[p=-\rho_{b}^{-2}\Big{(}\Delta_{b}(\xi_{r}+\frac{a\chi}{\Delta_{b}}\xi_{ \varphi})^{2}+\xi_{\theta}^{2}+\frac{\Delta_{b}-a^{2}\sin^{2}\theta}{\Delta_{ b}\sin^{2}\theta}\xi_{\varphi}^{2}\Big{)},\] it follows that the characteristic set \(\{p=0,\xi\neq 0\}\) of \(p\) satisfies \[\{p=0,\xi=(\xi_{r},\xi_{\theta},\xi_{\varphi})\neq 0\}\subset\{\Delta_{b}-a^{2} \sin^{2}\theta\leq 0,\xi\neq 0\}\subset{}^{\rm sc}T^{*}\bar{X}\setminus 0.\] Since \(\chi=1\) in the region \(\Delta_{b}-a^{2}\sin^{2}\theta\leq 0\), the principal symbol can be written as \[p=-\rho_{b}^{-2}\Big{(}\Delta_{b}\xi_{r}^{2}+2a\xi_{r}\xi_{\varphi}+\tilde{p} \Big{)}\quad\text{where}\quad\tilde{p}=\xi_{\theta}^{2}+\frac{1}{\sin^{2} \theta}\xi_{\varphi}^{2},\] and thus \(\xi_{r}\neq 0\) at the characteristic set of \(p\). As a consequence, in the fiber radial compactification \(\overline{{}^{\rm sc}T^{*}\bar{X}}\) of \({}^{\rm sc}T^{*}\bar{X}\), we use the following coordinates near the fiber infinity \(\overline{{}^{\rm sc}T^{*}\bar{X}}\), \[\rho_{\xi}=\frac{1}{|\xi_{r}|}\in[0,\infty),\quad\hat{\xi}=(\hat{\xi}_{\theta},\hat{\xi}_{\varphi})=\rho_{\xi}(\xi_{\theta},\xi_{\varphi}). \tag{5.1}\] We note that \(\rho_{\xi}\) is a boundary defining function of \(\overline{{}^{\rm sc}T^{*}\bar{X}}\), i.e., \(\rho_{\xi}=0\) at \(\partial\overline{{}^{\rm sc}T^{*}\bar{X}}={}^{\rm sc}S^{*}\bar{X}\), and \(\hat{\xi}\) is a coordinate system on \({}^{\rm sc}S^{*}\bar{X}\). Since \(p\) is a homogeneous polynomial of order \(2\) in \(\xi=(\xi_{r},\xi_{\theta},\xi_{\varphi})\), the rescaled symbol \(\rho_{\xi}^{2}p\) and the rescaled Hamiltonian vector field \(\rho_{\xi}H_{p}\) extend smoothly to \(\overline{{}^{\rm sc}T^{*}\bar{X}}\). Therefore, we study the flow of the rescaled Hamiltonian vector field \(\rho_{\xi}H_{p}\) on the characteristic set \(\{\rho_{\xi}^{2}p=0\}\subset\overline{{}^{\rm sc}T^{*}\bar{X}}\setminus 0\). 
We now split the characteristic set \(\{\rho_{\xi}^{2}p=0\}\subset\overline{{}^{\rm sc}T^{*}\bar{X}}\setminus 0\) into two components \[\Sigma_{\pm}:=\{\rho_{\xi}^{2}p=0\}\cap\{\pm\xi_{r}>0\},\quad\partial\Sigma_{ \pm}=\Sigma_{\pm}\cap{}^{\rm sc}S^{*}\bar{X}.\] Near \(\partial_{+}\bar{X}\), we have \(\chi=0\) and thus \[g^{-1}=\rho_{b}^{-2}\bigg{(}\Delta_{b}\partial_{r}^{2}-\frac{1}{\Delta_{b}} \Big{(}(r^{2}+a^{2})\partial_{t_{\chi}}+a\partial_{\varphi}\Big{)}^{2}+\partial _{\theta}^{2}+\frac{1}{\sin^{2}\theta}\Big{(}\partial_{\varphi}+a\sin^{2} \theta\partial_{t_{\chi}}\Big{)}^{2}\bigg{)}.\] Then using \(t_{\chi}=t\) and \(t_{b,*}=t-r_{(\mathbf{m},\mathbf{a}_{0},\mathbf{Q}),*}\) near \(\partial_{+}\bar{X}\), we calculate \[\widehat{\square_{g}}(\sigma)=-2i\sigma\rho^{2}\partial_{\rho}+(\rho^{2} \partial_{\rho})^{2}+\rho^{2}\partial_{\theta}^{2}+\frac{\rho^{2}}{\sin^{2} \theta}\partial_{\varphi}^{2}+\rho\text{Diff}_{\text{sc}}^{2}(\bar{X}),\quad \rho=\frac{1}{r}.\] Therefore, the scattering principal symbol of \(\widehat{\square_{g}}(\sigma)\) at \(\partial_{+}\bar{X}\) is given by \[p_{\text{sc}}(\sigma)=\sigma_{\text{sc}}(\widehat{\square_{g}}(\sigma))=-( \zeta_{\rho}-\sigma)^{2}-\tilde{p}+\sigma^{2},\quad\tilde{p}=\zeta_{\theta}^{ 2}+\frac{1}{\sin^{2}\theta}\zeta_{\varphi}^{2},\] and the scattering characteristic set is the surface \[\Sigma_{\text{sc}}(\sigma)=\{p_{\text{sc}}(\sigma)=0\}\subset\overline{ \text{sc}T^{*}}_{\partial_{+}\bar{X}}\bar{X}.\] We note that \(\Sigma_{\text{sc}}(\sigma)\) is the scattering zero section \(o_{\partial_{+}\bar{X}}\subset\overline{\text{sc}T^{*}}_{\partial_{+}\bar{X} }\bar{X}\) for \(\text{Im}\,\sigma>0\) and \(\sigma=0\). #### 5.1.2. The radial points at event horizon We now analyze the behavior of the Hamiltonian flow \(\exp(s\rho_{\xi}H_{p})\) on the characteristic set \(\Sigma=\Sigma_{+}\cup\Sigma_{-}\subset\overline{\text{sc}T^{*}}\bar{X}\setminus 0\), that is, we consider the portion whose projection to the base space \((r,\theta,\varphi)\) is away from \(\partial_{+}\bar{X}\). 
Near \(\Sigma=\Sigma_{+}\cup\Sigma_{-}\), we see that \(\chi=1\) and thus \[p=-\rho_{b}^{-2}\Big{(}\Delta_{b}\xi_{r}^{2}+2a\xi_{r}\xi_{\varphi}+\tilde{p} \Big{)}\quad\text{where}\quad\tilde{p}=\xi_{\theta}^{2}+\frac{1}{\sin^{2} \theta}\xi_{\varphi}^{2}.\] Then we calculate \[H_{p}=-\rho_{b}^{-2}\Big{(}(2\Delta_{b}\xi_{r}+2a\xi_{\varphi})\partial_{r}+2 a\xi_{r}\partial_{\varphi}-\frac{\partial_{r}\Delta_{b}}{\partial r}\xi_{r}^{2} \partial_{\xi_{r}}+H_{\tilde{p}}\Big{)}-\rho_{b}^{-2}pH_{\rho_{b}^{2}}.\] In the coordinates (5.1) near the fiber infinity \({}^{\text{sc}}S^{*}\bar{X}\), we compute \[\begin{split}\rho_{\xi}H_{p}&=-\rho_{b}^{-2}\Big{(}( 2\Delta_{b}(\text{sgn}\,\xi_{r})+2a\hat{\xi}_{\varphi})\partial_{r}+2a(\text{ sgn}\,\xi_{r})\partial_{\varphi}-\frac{\partial\Delta_{b}}{\partial r}(\text{ sgn}\,\xi_{r})\xi_{r}\partial_{\xi_{r}}+\rho_{\xi}H_{\tilde{p}} \Big{)}-\rho_{b}^{-2}(\rho_{\xi}p)H_{\rho_{b}^{2}}\\ &=-\rho_{b}^{-2}\Big{(}(2\Delta_{b}(\text{sgn}\,\xi_{r})+2a\hat{ \xi}_{\varphi})\partial_{r}+2a(\text{sgn}\,\xi_{r})\partial_{\varphi}+2\hat{ \xi}_{\theta}\partial_{\theta}+\frac{2}{\sin^{2}\theta}\hat{\xi}_{\varphi} \partial_{\varphi}\Big{)}\\ &\quad-\rho_{b}^{-2}\frac{\partial\Delta_{b}}{\partial r}(\text{ sgn}\,\xi_{r})\Big{(}\rho_{\xi}\partial_{\rho_{\xi}}+\hat{\xi}_{\theta} \partial_{\hat{\xi}_{\theta}}+\hat{\xi}_{\varphi}\partial_{\hat{\xi}_{\varphi}} \Big{)}-\rho_{b}^{-2}\frac{2\cos\theta}{\sin^{3}\theta}\hat{\xi}_{\varphi}^{2} \partial_{\hat{\xi}_{\theta}}-\rho_{b}^{-2}(\rho_{\xi}^{2}p)\rho_{\xi}^{-1}H_{ \rho_{b}^{2}}.\end{split} \tag{5.2}\] We let \[\Lambda_{\pm}=\{\Delta_{b}=0,(\xi_{\theta},\xi_{\varphi})=0,\pm\xi_{r}>0\}\] and \[L_{\pm}:=\partial\Lambda_{\pm}=\Lambda_{\pm}\cap{}^{\text{sc}}S^{*}\bar{X}=\{ \Delta_{b}=0,\rho_{\xi}=\pm\frac{1}{\xi_{r}}=0,\hat{\xi}=0\}.\] Now we describe the property of the set \(L_{\pm}\). More specifically, \(L_{\pm}\) are _radial sources/sinks_ in the following sense (see [40, Definition E.50]): **Definition 5.1**.: Let \(\kappa:\overline{\text{sc}T^{*}}\bar{X}\setminus 0\to{}^{\text{sc}}S^{*}\bar{X}=({}^{ \text{sc}}T^{*}\bar{X}\setminus 0)/\mathbb{R}^{+}\) be the natural projection map, i.e., for each \((r,\theta,\varphi,\xi)\in{}^{\text{sc}}T^{*}\bar{X}\setminus 0\), the ray \((r,\theta,\varphi,s\xi)\) converges to \(\kappa((r,\theta,\varphi,\xi))\) as \(s\to\infty\). Then for the rescaled Hamiltonian flow \[\phi_{s}:=\exp(s\rho_{\xi}H_{p}):\overline{\text{sc}T^{*}}\bar{X}\to\overline{ \text{sc}T^{*}}\bar{X},\] we say that a nonempty compact \(\phi_{s}\)-invariant set \[L\subset\{\rho_{\xi}^{2}p=0\}\cap{}^{\text{sc}}S^{*}\bar{X}\] is a _radial source_ for \(p\), if there exists a neighborhood \(U\subset\overline{\text{sc}T^{*}}\bar{X}\) of \(L\) such that uniformly in \((r,\theta,\varphi,\xi)\in U\cap{}^{\text{sc}}T^{*}\bar{X}\), one have \[\kappa(\phi_{s}(r,\theta,\varphi,\xi))\to L\quad\text{as}\quad s\to-\infty\] and \[|\phi_{s}(r,\theta,\varphi,\xi)|\geq Be^{B|s|}|\xi|,\quad s\leq 0\] for some \(B>0\). Here, \(|\cdot|\) denotes a norm on the fibers of \({}^{\text{sc}}T^{*}\bar{X}\). A _radial sink_ for \(p\) is be definition a radial source for \(-p\). **Lemma 5.2**.: \(L_{\pm}\) _are invariant under the Hamiltonian flow \(\exp(s\rho_{\xi}H_{p})\) on the characteristic set \(\Sigma\). 
Moreover, \(L_{+}\) is a radial sink and \(L_{-}\) is a radial source for the flow \(\exp(s\rho_{\xi}H_{p})\), see Figure 4._ Proof.: Since \(\{\rho_{\xi}^{2}\tilde{p}=0\}\Leftrightarrow\{\hat{\xi}=(\hat{\xi}_{\theta},\hat {\xi}_{\varphi})=0\}\), we rewrite \[L_{\pm}=\{\Delta_{b}=0,\rho_{\xi}=\pm\frac{1}{\xi_{r}}=0,\rho_{\xi}^{2}\tilde{p }=0\}.\] Using the coordinates (5.1) and expression (5.2), we find that \[\begin{split}\rho_{\xi}H_{p}\rho_{\xi}&=-\rho_{b}^ {-2}\Big{(}\frac{\partial\Delta_{b}}{\partial r}+2r\rho_{\xi}^{2}p\Big{)}( \operatorname{sgn}\xi_{r})\rho_{\xi},\\ \rho_{\xi}H_{p}\hat{\xi}_{\varphi}&=-\rho_{b}^{-2} \Big{(}\frac{\partial\Delta_{b}}{\partial r}+2r\rho_{\xi}^{2}p\Big{)}( \operatorname{sgn}\xi_{r})\hat{\xi}_{\varphi},\\ \rho_{\xi}H_{p}\Delta_{b}&=-2\rho_{b}^{-2}\frac{ \partial\Delta_{b}}{\partial r}\Big{(}(\operatorname{sgn}\xi_{r})\Delta_{b}+a \hat{\xi}_{\varphi}\Big{)},\\ \rho_{\xi}H_{p}(\rho_{\xi}^{2}\tilde{p})&=-\rho_{b} ^{-2}\Big{(}2\frac{\partial\Delta_{b}}{\partial r}+4r\rho_{\xi}^{2}p\Big{)}( \operatorname{sgn}\xi_{r})\rho_{\xi}^{2}\tilde{p}-2\rho_{b}^{-2}a^{2}\sin(2 \theta)(\rho_{\xi}^{2}p)\hat{\xi}_{\theta}.\end{split} \tag{5.3}\] This implies that \(\rho_{\xi}H_{p}\) is tangent to \(L_{\pm}\), so \(L_{\pm}\) is invariant under the flow \(\exp(s\rho_{\xi}H_{p})\). Moreover, in a neighborhood of \(L_{\pm}=\{\Delta_{b}=0,\rho_{\xi}=\pm\xi_{r}^{-1}=0,\rho_{\xi}^{2}\tilde{p}=0\}\subset \{\rho_{\xi}^{2}p=0\}\), we have \[\begin{split}\pm\rho_{\xi}H_{p}\rho_{\xi}^{2}&\leq -C_{0}\rho_{\xi}^{2},\\ \pm\rho_{\xi}H_{p}\hat{\xi}_{\varphi}^{2}&\leq-C_{0} \hat{\xi}_{\varphi}^{2},\\ \pm\rho_{\xi}H_{p}\Delta_{b}^{2}&\leq-C_{0}\Delta_{ b}^{2}+C_{1}\hat{\xi}_{\varphi}^{2},\\ \pm\rho_{\xi}H_{p}(\rho_{\xi}^{2}\tilde{p})&\leq-C_{ 0}\rho_{\xi}^{2}\tilde{p}+C_{1}\big{(}\Delta_{b}^{2}+\hat{\xi}_{\varphi}^{2} \big{)}.\end{split} \tag{5.4}\] where \(C_{0},C_{1}>0\). It follows from (5.4) that in a sufficiently small neighborhood of \(L_{\pm}\) \[\pm\rho_{\xi}H_{p}f\leq-Cf,\quad f=\rho_{\xi}^{2}+\tilde{\xi}_{\varphi}^{2}+C _{2}\Delta_{b}^{2}+C_{3}\rho_{\xi}^{2}\tilde{p}\geq 0. \tag{5.5}\] for some \(C,C_{2},C_{3}>0\). This implies that for \((r,\theta,\varphi,\xi)\) in a sufficiently small neighborhood of \(L_{\pm}\), we have \[f(\phi_{s}((r,\theta,\varphi,\xi)))\leq e^{-C|s|}f((r,\theta,\varphi,\xi)) \quad\text{for}\quad\pm s>0.\] Therefore, we have uniformly in \((r,\theta,\varphi,\xi)\) in a neighborhood of \(L_{\pm}\) \[\phi_{s}((r,\theta,\varphi,\xi))\to L_{\pm}\quad\text{as}\quad s\to\pm\infty.\] Moreover, according to the first inequality in (5.4), we have \[\rho_{\xi}(\phi_{s}((r,\theta,\varphi,\xi)))\leq e^{-C_{0}|s|}\rho_{\xi}((r, \theta,\varphi,\xi))\quad\text{for}\quad\pm s\geq 0.\] This proves that \(L_{+}\) is a radial sink and \(L_{-}\) is a radial source. Figure 4. The flow of \(\exp(s\rho_{\xi}H_{p})\) near \(L_{\pm}\). It is projected to the \((r,\xi_{r})\) coordinates and drawn in a fiber-radially compactified view. The horizontal coordinate is \(r\); the dashed line is the event horizon \(\{r=r_{b}\}\). The vertical coordinate is \(\xi_{r}/\langle\xi_{r}\rangle\), so the top and bottom lines correspond to the fiber infinity \(\{\rho_{\xi}=0\}\). The midline \(\{\xi_{r}=0\}\) lies outside the characteristic set \(\{\rho_{\xi}^{2}p=0\}\). 
Finally, we consider the imaginary part of the operator \(\widehat{\square_{g}}(\sigma)\) at \(L_{\pm}\), which is needed to calculate the _threshold quantity_ for the order of the Sobolev spaces (here the differential order) in the radial point estimates. Recall that near \(L_{\pm}\), the inverse metric \(G\) takes the form \[g^{-1}=\rho_{b}^{-2}\Big{(}a^{2}\sin^{2}\theta\partial_{t_{\chi}}^{2}+2(r^{2}+a ^{2})\partial_{t_{\chi}}\partial_{r}+2a\partial_{t_{\chi}}\partial_{\varphi}+ \Delta_{b}\partial_{r}^{2}+2a\partial_{r}\partial_{\varphi}+\partial_{\theta}^{ 2}+\frac{1}{\sin^{2}\theta}\partial_{\varphi}^{2}\Big{)}\] and \(t_{\chi}=t_{b,*}\). A direct calculation shows that \[\widehat{\square_{g}}(\sigma)^{*}=\widehat{\square_{g}}(\bar{\sigma}).\] where \(\widehat{\square_{g}}(\sigma)^{*}\) denotes the formal adjoint of \(\widehat{\square_{g}}(\sigma)\) defined by \(\langle\widehat{\square_{g_{\mathrm{s}}}}(\sigma)u,,v\rangle=\langle u, \widehat{\square_{g}}(\sigma)^{*}v\rangle\) with respect to \(L^{2}(\bar{X};\sqrt{|\det g|}drd\theta d\varphi)\) for \(u,v\in C^{\infty}_{c}(X)\). We first compute \[p_{1}:=\sigma_{1}\big{(}\mathrm{Im}\,\widehat{\square_{g}}(\sigma)\big{)}= \sigma_{1}\Big{(}\frac{\widehat{\square_{g}}(\sigma)-\widehat{\square_{g}}( \sigma)^{*}}{2i}\Big{)}=\rho_{b}^{-2}\mathrm{Im}\,\sigma\Big{(}2(r^{2}+a^{2} )\xi_{r}+2a\xi_{\varphi}\Big{)}\] Let \(\beta_{0}\in C^{\infty}(L_{\pm})\) be a positive function defined as \[\beta_{0}=\mp\frac{\rho_{\xi}H_{p}\rho_{\xi}}{\rho_{\xi}}|_{L_{\pm}}.\] Then we define \(\tilde{\beta}\in C^{\infty}(L_{\pm})\) as \[\tilde{\beta}=\mp\frac{\rho_{\xi}p_{1}}{\beta_{0}}|_{L_{\pm}}\] Let \[\beta_{\mathrm{sup}}=\sup\tilde{\beta},\quad\beta_{\mathrm{inf}}=\inf\tilde{ \beta}.\] If \(\tilde{\beta}\) is a constant along \(L_{\pm}\), we may write \[\beta=\beta_{\mathrm{sup}}=\beta_{\mathrm{inf}}.\] We note that \(\beta_{\mathrm{sup}}\) and \(\beta_{\mathrm{inf}}\) are the relevant quantities in the radial point estimates. More specifically, in our setting where we consider the operator \(\widehat{\square_{g}}(\sigma)\) with \(\mathrm{Im}\,\sigma\geq 0\), the threshold regularity condition in the _high regularity radial estimate_ is given by \[s>\frac{1}{2}+\beta_{\mathrm{sup}}, \tag{5.6}\] while the _low regularity radial estimate_ requires \[s<\frac{1}{2}+\beta_{\mathrm{inf}}. \tag{5.7}\] Using the first equality in (5.3), we find for the operator \(\widehat{\square_{g}}(\sigma)\) that \[\tilde{\beta}=-\mathrm{Im}\,\sigma\frac{r_{b}^{2}+a^{2}}{r_{b}-\mathbf{m}},\] and thus \[\beta=\beta_{\mathrm{sup}}=\beta_{\mathrm{inf}}=-\mathrm{Im}\,\sigma\frac{r_{ b}^{2}+a^{2}}{r_{b}-\mathbf{m}}\leq 0\quad\text{for}\quad\mathrm{Im}\,\sigma\geq 0. \tag{5.8}\] #### 5.1.3. The radial points at spatial infinity \(\partial_{+}\bar{X}\) We now turn to the analysis near \(\partial_{+}\bar{X}\). 
Recall that for the operator \(\widehat{\square_{g}}(\sigma)\) with \(\mathrm{Im}\,\sigma\geq 0\), the scattering characteristic set is the surface \[\Sigma_{\mathrm{sc}}(\sigma)=\{(\rho=\frac{1}{r}=0,\theta,\varphi,\zeta)):p_{ \mathrm{sc}}(\sigma)=0\}\subset\overline{{}^{\mathrm{sc}}T^{*}}_{\partial_{+} \bar{X}}\bar{X}.\] where \[p_{\mathrm{sc}}(\sigma)=-(\zeta_{\rho}-\sigma)^{2}-\tilde{p}+\sigma^{2},\quad \tilde{p}=\zeta_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\zeta_{\varphi}^{2}.\] Using the identification of \(T^{*}\bar{X}\) and \({}^{\mathrm{sc}}T^{*}\bar{X}\) in \(\rho>0\) \[\zeta_{\rho}=\rho^{2}\xi_{\rho}=-\xi_{r},\quad(\zeta_{\rho},\zeta_{\varphi})= \rho(\xi_{\theta},\xi_{\varphi}),\] we find that the scattering Hamiltonian vector field associated to \(p\) in \(\rho>0\) is given by \[{}^{\rm sc}\!H_{p} =\rho^{2}\frac{\partial p}{\partial\zeta_{\rho}}\Big{(}\frac{ \partial}{\partial\rho}+\frac{2\zeta_{\rho}}{\rho}\frac{\partial}{\partial\zeta _{\rho}}+\big{(}\sum_{\mu=\theta,\varphi}\frac{\zeta_{\mu}}{\rho}\frac{ \partial}{\partial\zeta_{\mu}}\big{)}\Big{)}-\Big{(}\frac{\partial p}{ \partial\rho}+\frac{2\zeta_{\rho}}{\rho}\frac{\partial p}{\partial\zeta_{\rho} }+\big{(}\sum_{\mu=\theta,\varphi}\frac{\zeta_{\mu}}{\rho}\frac{\partial p}{ \partial\zeta_{\mu}}\big{)}\Big{)}\rho^{2}\frac{\partial}{\partial\zeta_{\rho}}\] \[\quad+\sum_{\mu=\theta,\varphi}\rho\Big{(}\frac{\partial p}{ \partial\zeta_{\mu}}\frac{\partial}{\partial\mu}-\frac{\partial p}{\partial \mu}\frac{\partial}{\partial\zeta_{\mu}}\Big{)}\] \[=\rho\Big{(}\frac{\partial p}{\partial\zeta_{\rho}}\big{(}\rho \frac{\partial}{\partial\rho}+(\sum_{\mu=\theta,\varphi}\zeta_{\mu}\frac{ \partial}{\partial\zeta_{\mu}}\big{)}\big{)}-\big{(}\rho\frac{\partial p}{ \partial\rho}+(\sum_{\mu=\theta,\varphi}\zeta_{\mu}\frac{\partial p}{\partial \zeta_{\mu}})\big{)}\frac{\partial}{\partial\zeta_{\rho}}+\sum_{\mu=\theta, \varphi}\big{(}\frac{\partial p}{\partial\zeta_{\mu}}\frac{\partial}{\partial \mu}-\frac{\partial p}{\partial\mu}\frac{\partial}{\partial\zeta_{\mu}}\big{)} \Big{)}.\] By introducing the following coordinates in the fiber radial compactification \(\overline{{}^{\rm sc}T^{*}\!X}\) of \({}^{\rm sc}T^{*}\bar{X}\) \[\rho_{\zeta}=(1+\zeta_{r}^{2}+\zeta_{\theta}^{2}+\frac{1}{\sin^{2}\!\theta} \zeta_{\varphi}^{2})^{-\frac{1}{2}},\quad\hat{\zeta}=(\hat{\zeta}_{\rho},\hat {\zeta}_{\theta},\hat{\zeta}_{\varphi})=\rho_{\zeta}(\zeta_{\rho},\zeta_{ \theta},\zeta_{\varphi}),\] the rescaled scattering Hamiltonian vector field \(\rho_{\zeta}\rho^{-1sc}\!H_{p}\) can be extended to an element in \(\mathcal{V}_{b}(\overline{{}^{\rm sc}T^{*}\!X})\) taking the following form \[{}^{\rm sc}\!H_{p}^{2,0} :=\rho_{\zeta}^{2-1}\rho^{-0-1{\rm sc}}\!H_{p}\] \[=\rho_{\zeta}\Big{(}\frac{\partial p}{\partial\zeta_{\rho}} \big{(}\rho\frac{\partial}{\partial\rho}+(\sum_{\mu=\theta,\varphi}\zeta_{\mu} \frac{\partial}{\partial\zeta_{\mu}})\big{)}-\big{(}\rho\frac{\partial p}{ \partial\rho}+(\sum_{\mu=\theta,\varphi}\zeta_{\mu}\frac{\partial p}{\partial \zeta_{\mu}})\big{)}\frac{\partial}{\partial\zeta_{\rho}}+\sum_{\mu=\theta, \varphi}\big{(}\frac{\partial p}{\partial\zeta_{\mu}}\frac{\partial}{\partial \mu}-\frac{\partial p}{\partial\mu}\frac{\partial}{\partial\zeta_{\mu}}\big{)} \Big{)}.\] We now discuss the integral curves of \({}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0}\) on the scattering characteristic set \(\Sigma_{\rm sc}(\sigma)=\{p_{\rm sc}(\sigma)=0\}\subset\overline{{}^{\rm sc}T^ {*}\!}_{\partial_{+}\bar{X}}\bar{X}\) for \(\sigma\in\mathbb{R}\setminus 
0\) (see [92]). Since \(\Sigma_{\rm sc}(\sigma)\) has no intersection with the fiber infinity \(\overline{\partial{}^{\rm sc}T^{*}\!}_{\partial_{+}\bar{X}}\bar{X}\), we can drop the factor \(\rho_{\zeta}\) which is non-zero on \(\Sigma_{\rm sc}(\sigma)\). Therefore, with \(p_{\rm sc}(\sigma)=-(\zeta_{\rho}-\sigma)^{2}-\tilde{p}+\sigma^{2}\), we write \[{}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0}=(-2\zeta_{\rho}+2\sigma)\rho\partial _{\rho}+2\tilde{p}\partial_{\zeta_{\rho}}+(-2\zeta_{\rho}+2\sigma)\sum_{\mu= \theta,\varphi}\zeta_{\mu}\partial_{\zeta_{\mu}}-H_{\tilde{p}}\quad\text{on} \quad\Sigma_{\rm sc}(\sigma)\] where \[H_{\tilde{p}}=2\zeta_{\theta}\partial_{\theta}+\frac{2\zeta_{\varphi}}{\sin^{2 }\theta}\partial_{\varphi}+\frac{\cos\theta}{\sin^{2}\theta}\zeta_{\varphi}^{2 }\partial_{\zeta_{\theta}}.\] **Lemma 5.3**.: _For \(0\neq\sigma\in\mathbb{R}\), on the characteristic variety_ \[\Sigma_{\rm sc}(\sigma)=\{p_{\rm sc}(\sigma)=-(\zeta_{\rho}-\sigma)^{2}-\tilde {p}+\sigma^{2}=0\}\subset\overline{{}^{\rm sc}T^{*}\!}_{\partial_{+}\bar{X}} \bar{X},\quad\tilde{p}=\zeta_{\theta}^{2}+\frac{1}{\sin^{2}\!\theta}\zeta_{ \varphi}^{2}, \tag{5.9}\] _there are two invariant submanifolds under the Hamiltonian flow \(\exp(s{}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0})\), one of which is the zero section \(R(0)=\{\rho=0,\zeta_{\rho}=0,(\zeta_{\theta},\zeta_{\varphi})=0\}\) and the other is \(R(\sigma)=\{\rho=0,\zeta_{\rho}=2\sigma,(\zeta_{\theta},\zeta_{\varphi})=0\}\)._ _For \((\theta^{0},\varphi^{0},\zeta_{\rho}^{0},\zeta_{\theta}^{0},\zeta_{\varphi}^{0}) \in\Sigma_{\rm sc}(\sigma)\setminus(R(0)\cup R(\sigma))\), the integral curves \(\exp(s{}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0})(\theta^{0},\varphi^{0},\zeta_{ \rho}^{0},\zeta_{\theta}^{0},\zeta_{\varphi}^{0})\) take the following form_ \[\zeta_{\rho}(s) =\sigma+|\sigma|\sin(s+s_{0}),\quad\zeta_{\rho}(0)=\zeta_{\rho}^{ 0},\] \[(\zeta_{\theta},\zeta_{\varphi}) =|\sigma|\cos(s+s_{0})(\tilde{\zeta}_{\theta},\tilde{\zeta}_{ \varphi}),\quad|\sigma|\cos(s_{0})=\tilde{p}(\theta^{0},\varphi^{0},\zeta_{ \theta}^{0},\zeta_{\varphi}^{0})^{\frac{1}{2}}, \tag{5.10}\] \[(\theta,\varphi,\tilde{\zeta}_{\theta},\tilde{\zeta}_{\varphi}) =\exp((s+s_{0})H_{-\frac{1}{2}\tilde{p}})(\theta^{0},\varphi^{0}, \zeta_{\theta}^{0},\zeta_{\varphi}^{0}),\quad(\tilde{\zeta}_{\theta}^{0},\tilde {\zeta}_{\varphi}^{0})=\tilde{p}(\theta^{0},\varphi^{0},\zeta_{\theta}^{0},\zeta_{ \varphi}^{0})^{-\frac{1}{2}}\cdot(\zeta_{\theta}^{0},\zeta_{\varphi}^{0})\] _where \(s_{0}\in(-\frac{\pi}{2},\frac{\pi}{2})\), \(s\in(-\frac{\pi}{2}-s_{0},\frac{\pi}{2}-s_{0})\) and \(\frac{ds}{ds^{\prime}}=2\tilde{p}(\theta,\varphi,\zeta_{\theta},\zeta_{\varphi})^ {\frac{1}{2}}\) where \(\frac{d}{ds^{\prime}}:={}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0}\)._ _This implies that for \(\sigma>0\), the zero section is a radial source and the nonzero one is a radial sink, while for \(\sigma<0\), the zero section is a radial sink and the nonzero one is a radial source, see Figure 5._ Proof.: At \(\partial\bar{X}\), the rescaled scattering Hamiltonian vector field is \[{}^{\rm sc}\!H_{p_{\rm sc}(\sigma)}^{2,0}=(-2\zeta_{\rho}+2\sigma)\rho\partial _{\rho}+2\tilde{p}\partial_{\zeta_{\rho}}+(-2\zeta_{\rho}+2\sigma)\sum_{\mu=\theta, \varphi}\zeta_{\mu}\partial_{\zeta_{\mu}}-H_{\tilde{p}},\] Introducing the polar coordinates with respect to the radial variable in \((\zeta_{\theta},\zeta_{\varphi})\) \[(\tilde{\zeta}_{\theta},\tilde{\zeta}_{\varphi})=\tilde{p}(\theta,\varphi,\zeta_{ 
\theta},\zeta_{\varphi})^{-\frac{1}{2}}\cdot(\zeta_{\theta},\zeta_{\varphi}), \quad|(\zeta_{\theta},\zeta_{\varphi})|=\tilde{p}(\theta,\varphi,\zeta_{\theta}, \zeta_{\varphi})^{\frac{1}{2}}.\] Let \(\frac{d}{ds^{\prime}}:={}^{\rm sc}\!H^{2,0}_{p_{\rm sc}(\sigma)}\). Then we have \[\frac{d}{ds^{\prime}}\zeta_{\rho}=2\tilde{p}(\theta,\varphi,\zeta_{ \theta},\zeta_{\varphi}),\quad\frac{d}{ds^{\prime}}\tilde{p}(\theta,\varphi, \zeta_{\theta},\zeta_{\varphi})^{1/2}=-2(\zeta_{\rho}-\sigma)\tilde{p}(\theta, \varphi,\zeta_{\theta},\zeta_{\varphi})^{\frac{1}{2}},\] \[\frac{d}{ds^{\prime}}(\tilde{\zeta}_{\theta},\tilde{\zeta}_{ \varphi})=\tilde{p}^{-\frac{1}{2}}(\frac{\partial\tilde{p}}{\partial\theta}, \frac{\partial\tilde{p}}{\partial\varphi})(\theta,\varphi,\zeta_{\theta},\zeta _{\varphi}),\quad\frac{d}{ds^{\prime}}(\theta,\varphi)=-(\frac{\partial\tilde{ p}}{\partial\zeta_{\theta}},\frac{\partial\tilde{p}}{\partial\zeta_{\varphi}})( \theta,\varphi,\zeta_{\theta},\zeta_{\varphi}).\] This implies that \(\frac{d}{ds^{\prime}}\) is tangent to \(R(0)\) and \(R(\sigma)\), so they are invariant under the Hamiltonian flow \(\exp({s^{\rm sc}\!H^{2,0}_{p_{\rm sc}(\sigma)}})\). When \(\tilde{p}(\theta,\varphi,\zeta_{\theta},\zeta_{\varphi})\neq 0\), we introduce a new parameter \(s\) satisfying \(\frac{ds}{ds^{\prime}}=2\tilde{p}(\theta,\varphi,\zeta_{\theta},\zeta_{\varphi })^{\frac{1}{2}}\) and obtain \[\frac{d}{ds}(\zeta_{\rho}-\sigma)=\tilde{p}(\theta,\varphi,\zeta_{ \theta},\zeta_{\varphi})^{\frac{1}{2}},\quad\frac{d}{ds}\tilde{p}(\theta, \varphi,\zeta_{\theta},\zeta_{\varphi})^{\frac{1}{2}}=-(\zeta_{\rho}-\sigma), \tag{5.11}\] \[\frac{d}{ds}(\tilde{\zeta}_{\theta},\tilde{\zeta}_{\varphi})= \frac{1}{2}(\frac{\partial\tilde{p}}{\partial\theta},\frac{\partial\tilde{p}}{ \partial\varphi})(\theta,\varphi,\tilde{\zeta}_{\theta},\tilde{\zeta}_{\varphi }),\quad\frac{d}{ds}(\theta,\varphi)=-\frac{1}{2}(\frac{\partial\tilde{p}}{ \partial\tilde{\zeta}_{\theta}},\frac{\partial\tilde{p}}{\partial\tilde{\zeta }_{\varphi}})(\theta,\varphi,\tilde{\zeta}_{\theta},\tilde{\zeta}_{\varphi}).\] Integrating the above system with respect to the new parameter \(s\) gives (5.10). For the propagation of singularities estimates at these radial points \(R(\sigma)\) and \(R(0)\), there is a _threshold quantity_ for the order of the Sobolev spaces (here the scattering decay order). Let \(\beta_{0}\) be a positive function defined at the radial points as \[{}^{\rm sc}\!H^{2,0}_{p_{\rm sc}(\sigma)}\rho=\pm\beta_{0}\rho.\] Here and in what follows, we use \(+\) for radial sources and \(-\) for radial sinks. We next define \(\tilde{\beta}\) at the radial points as \[\sigma_{\rm sc}(\rho^{-1}{\rm Im}\,\widehat{\square_{g}}(\sigma))=\sigma_{\rm sc }(\frac{\widehat{\square_{g}}(\sigma)-\widehat{\square_{g}}(\sigma)^{*}}{2i \rho})=\pm\beta_{0}\tilde{\beta}\] Let \[\beta_{\rm sup}=\sup\tilde{\beta},\quad\beta_{\rm inf}=\inf\tilde{\beta}\] where the supremum and infimum are taken over the radial points set. If \(\tilde{\beta}\) is a constant along the radial points set, we may write \[\beta=\beta_{\rm sup}=\beta_{\rm inf}.\] We note that \(\beta_{\rm sup}\) and \(\beta_{\rm inf}\) are the relevant quantities in the radial point estimates. 
More specifically, in our setting where we consider the operator \(\widehat{\square_{g}}(\sigma)\) with \(\sigma\in\mathbb{R}\setminus 0\), the threshold scattering decay condition in the _high scattering decay radial estimate_ is given by \[r>-\frac{1}{2}-\beta_{\rm inf}, \tag{5.12}\] while the _low scattering decay radial estimate_ requires \[r<-\frac{1}{2}-\beta_{\rm sup}. \tag{5.13}\] In our setting, since \(\widehat{\square_{g}}(\sigma)=\widehat{\square_{g}}(\sigma)^{*}\) for \(\sigma\in\mathbb{R}\setminus 0\), it follows that \[\beta=\beta_{\sup}=\beta_{\inf}=0\quad\text{on}\quad R(\sigma)\quad\text{and} \quad R(0). \tag{5.14}\] #### 5.1.4. Global dynamics of the Hamiltonian flow Now we study the global behavior of the flow of \(\rho_{\xi}H_{p}\) (resp., \({}^{\text{sc}}\!H_{p_{\text{sc}}(\sigma)}^{2,0}\)) away from infinity \(\partial_{+}\bar{X}=\{\rho=\frac{1}{r}=0\}\) (resp., at \(\partial_{+}\bar{X}\)). **Proposition 5.4**.: _The characteristic set of \(\widehat{\square_{g}}(\sigma)\) is a disjoint union \(\Sigma\cup\Sigma_{\text{sc}}(\sigma)\) (see Figure 6)._ 1. _Let_ \((r,\theta,\varphi,\xi)\in\Sigma_{\pm}\) _and put_ \(\phi(s)=\exp(s\rho_{\xi}H_{p})(r,\theta,\varphi,\xi)\)_. Then_ \(\phi(s)\to L_{\pm}\) _as_ \(s\to\pm\infty\)_. Moreover, if_ \((r,\theta,\varphi,\xi)\notin\Lambda_{\pm}=\{\Delta_{b}=0,(\xi_{\theta},\xi_{ \varphi})=0,\pm\xi_{r}>0\}\)_, then_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of deceasing_ \(r\) _at some_ \(s_{0}\) _with_ \(\pm s_{0}\leq 0\)_._ 2. _For_ \(\sigma\in\mathbb{R}\setminus 0\)_, let_ \((\rho=0,\theta,\varphi,\zeta)\in\Sigma_{\text{sc}}(\sigma)\) _and put_ \(\phi(s)=\exp(s^{\text{sc}}\!H_{p_{\text{sc}}(\sigma)}^{2,0})(\theta,\varphi,\zeta)\)_. Then either_ \(\phi(s)\to R(\frac{\sigma}{2}+\frac{|\sigma|}{2})=\{\rho=0,\zeta_{\rho}=\sigma +|\sigma|,\tilde{p}=0\}\) _as_ \(s\to\infty\)_, or_ \(\phi(s)\to R(\frac{\sigma}{2}-\frac{|\sigma|}{2})=\{\rho=0,\zeta_{\rho}=\sigma -|\sigma|,\tilde{p}=0\}\) _as_ \(s\to-\infty\)_. Moreover, if_ \((\rho=0,\theta,\varphi,\zeta)\notin R(0)\cup R(\sigma)\)_, then_ \(\phi(s)\to R(\frac{\sigma}{2}\pm\frac{|\sigma|}{2})\) _as_ \(s\to\pm\infty\)_._ Proof.: We only discuss the case \(\Sigma_{+}\) as the other case \(\Sigma_{-}\) can be handled in a similar manner. Let \((r,\theta,\varphi,\xi)\in\Sigma_{+}\) and put \(\phi(s)=\exp(s\rho_{\xi}H_{p})(r,\theta,\varphi,\xi)\). Since \(\Sigma_{+}\) is invariant under the flow \(\exp(s\rho_{\xi}H_{p})\), it follows that \(\phi(s)\in\Sigma_{\pm}\) for all \(s\) for which it is well-defined. Put \[f(s)=\big{(}\rho_{\xi}^{2}+(\Delta_{b}+2a\text{sgn}\,(\xi_{r})\hat{\xi}_{ \varphi})^{2}+\rho_{\xi}^{2}\tilde{p}\big{)}(\phi(s)),\quad\dot{f}(s)=\rho_{ \xi}H_{p}f(\phi(s)).\] Using the calculation in (5.3), we find that \[\dot{f}(s)\leq-2\big{(}\rho_{b}^{-2}\frac{\partial\Delta_{b}}{\partial r}f \big{)}(\phi(s))\leq-C_{0}f(s)\quad\text{on}\quad\Sigma_{+}\subset\{r\geq r_{- }\mid\Delta_{b}-a^{2}\sin^{2}\theta\leq 0\}\] for some \(C_{0}>0\). This gives \[f(s)\leq e^{-C_{0}s}f(0)=e^{-C_{0}s}f((r,\theta,\varphi,\xi))\quad\text{for} \quad s\geq 0\] and thus \[\phi(s)\to L_{+}\quad\text{as}\quad s\to\infty.\] Using the last expression in (5.3), we have \[\rho_{\xi}H_{p}(\rho_{\xi}^{2}\tilde{p})(\phi(s))=-2\big{(}\rho_{b}^{-2}\frac{ \partial\Delta_{b}}{\partial r}\rho_{\xi}^{2}\tilde{p}\big{)}(\phi(s))\leq-C_{ 0}(\rho_{\xi}^{2}\tilde{p})(\phi(s)).\] for some \(C_{0}>0\). 
Therefore \[(\rho_{\xi}^{2}\tilde{p})(\phi(s))\geq e^{-C_{0}s}(\rho_{\xi}^{2}\tilde{p})(r, \theta,\varphi,\xi)\] for \(s\leq 0\) as long as the flow remains in the region \(\Sigma_{+}\subset\{r\geq r_{-}\mid\Delta_{b}-a^{2}\sin^{2}\theta\leq 0\}\). Since on \(\Sigma_{+}\) \[2a^{2}+\frac{1}{2}\hat{\xi}_{\varphi}^{2}\geq-2a\hat{\xi}_{\varphi}=\Delta_{b }+\rho_{\xi}^{2}\tilde{p}\geq\Delta_{b}+\frac{1}{2}\rho_{\xi}^{2}\tilde{p}+ \frac{1}{2}\hat{\xi}_{\varphi}^{2}\] where we use \(\Delta_{b}+2a\hat{\xi}_{\varphi}+\rho_{\xi}^{2}\tilde{p}=0\) on \(\Sigma_{+}\) in the second equality and \(\rho_{\xi}^{2}\tilde{p}\geq\hat{\xi}_{\varphi}^{2}\) in the last inequality, it follows that \[2a^{2}-\Delta_{b}\geq\frac{1}{2}\rho_{\xi}^{2}\tilde{p}\quad\text{on}\quad \Sigma_{+}.\] Since \[\{\rho_{\xi}^{2}\tilde{p}=0\}\cap\Sigma_{+}=\Lambda_{+},\] then if \((r,\theta,\varphi,\xi)\in\Sigma_{+}\setminus\Lambda_{+}\), we have \[(2a^{2}-\Delta_{b})(\phi(s))\geq\frac{1}{2}\rho_{\xi}^{2}\tilde{p}(\phi(s)) \geq Ce^{-C_{0}s},\quad C,C_{0}>0\] for \(s\leq 0\) as long as the flow remains in the region \(\Sigma_{+}\subset\{r\geq r_{-}\mid\Delta_{b}-a^{2}\sin^{2}\theta\leq 0\}\). As a consequence, \(\phi(s)\) cross \(\partial_{-}\bar{X}\) into the inward direction of deceasing \(r\) at some \(s_{0}\) with \(s_{0}\leq 0\). This proves the part (i). The proof of part (2) follows from Lemma 5.3. #### 5.1.5. Second microlocalized scattering algebra To do the analysis at the scattering zero section at \(\partial_{+}\bar{X}\), we introduce the second microlocalized \(\bar{X}\) by blowing up the zero section \(o_{\partial_{+}\bar{X}}\) at \(\partial_{+}\bar{X}\) (i.e. replacing \(o_{\partial_{+}\bar{X}}\) by its inward pointing spherical normal bundle and thus obtaining a smooth manifold with corners), see Figure 7. On the other hand, from an analytically better behaved, but geometrically equivalent perspective, one takes \({}^{\text{b}}\overline{T^{*}}\bar{X}\) and blows up its corner \({}^{\text{b}}S^{*}_{\partial_{+}\bar{X}}\bar{X}\), i.e. the fiber infinity at \(\partial_{+}\bar{X}\), see Figure 8. These two spaces \([\overline{\overline{c}T^{*}}\bar{X};o_{\partial_{+}\bar{X}}]\) and \([\overline{\overline{b}T^{*}}\bar{X};{}^{\text{b}}S^{*}_{\partial_{+}\bar{X}} \bar{X}]\) are naturally the same, see [112, Lemma 5.1]. In order to actually do analysis, one needs to introduce the pseudodifferential operator space \(\Psi^{s,r,l}_{\mathrm{sc,b}}(\bar{X})\). The three orders of \(\Psi^{s,r,l}_{\mathrm{sc,b}}(\bar{X})\) are the sc-differential order \(s\), the sc-decay order \(r\) and b-decay order \(l\). The operators \(\Psi^{s,r,l}_{\mathrm{sc,b}}(\bar{X})\) arise from the quantization of the symbols in the following class \[S^{s,r,l}(\bar{X}):=\rho_{\mathrm{b}}^{-l}\rho_{\mathrm{sc}}^{-r}\rho_{ \infty}^{-s}S^{0,0,0}([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial _{+}\bar{X}}\bar{X}])=\rho_{\mathrm{b}}^{-l}\rho_{\mathrm{sc}}^{-r}\rho_{ \infty}^{-s}S^{0,0}(\overline{bT^{*}}\bar{X})\] where \(\rho_{\infty}\), resp. \(\rho_{\mathrm{sc}}\), resp. 
\(\rho_{\mathrm{b}}\) correspond to the boundary defining functions of the three boundary hypersurfaces of \([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial_{+}\bar{X}}\bar{X}]\) (or equivalently \([\overline{scT^{*}}\bar{X};\partial_{\partial_{+}\bar{X}}]\)): the lift of b-fiber infinity (or equivalently the lift of \(\mathrm{sc}\)-fiber infinity), the new front face of \([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial_{+}\bar{X}}\bar{X}]\) (or the lift of \({}^{\mathrm{sc}}T^{*}_{\partial_{+}\bar{X}}\bar{X}\) in \([\overline{scT^{*}}\bar{X};\partial_{\partial_{+}\bar{X}}\bar{X}]\)), and the lift of \(\partial_{+}\bar{X}\) in \([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial_{+}\bar{X}}\bar{X}]\) (or the new front face of \([\overline{scT^{*}}\bar{X};{}^{\mathrm{o}}\partial_{+}\bar{X}]\)), respectively. Here, the symbol spaces \(S^{0,0}(\overline{bT^{*}}\bar{X})\) (resp. \(S^{0,0,0}([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial_{+}\bar{X} }\bar{X}])\)) consist of \(L^{\infty}(\overline{bT^{*}}\bar{X})\) (resp. \(L^{\infty}([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}_{\partial_{+}\bar{X} }\bar{X}])\)) functions which are smooth away from the boundary hypersurfaces \({}^{\mathrm{b}}S^{*}\bar{X}\cup\overline{bT^{*}}\partial_{\partial_{+}\bar{X} }\bar{X}\) (resp. \({}^{\mathrm{b}}S^{*}\bar{X}\cup\overline{bT^{*}}\partial_{\partial_{+}\bar{X} }\bar{X}\cup\mathrm{f}\)), and they remain so under iterated applications of \(C^{\infty}\) b-vector fields on \(\overline{bT^{*}}\bar{X}\) (resp. \([\overline{bT^{*}}\bar{X};{}^{\mathrm{b}}S^{*}\bar{X}]\)), namely, vector fields tangent to the aforementioned boundary hypersurfaces. In particular, we have \[\Psi^{s,s+l,l}_{\mathrm{sc,b}}(\bar{X})=\Psi^{s,l}_{\mathrm{b}}(\bar{X}).\] We refer the reader to [112, SS5] for a more detailed description of the operators \(\Psi^{s,r,l}_{\mathrm{sc,b}}(\bar{X})\) and the symbols \(S^{s,r,l}(\bar{X})\). Correspondingly, one can define the second microlocal Sobolev space \(H^{s,r,l}_{\mathrm{sc,b}}\) as follows. Fix an elliptic operator \(A\in\Psi^{s,r,l}_{\mathrm{sc,b}}(\bar{X})\), let \[H^{s,r,l}_{\mathrm{sc,b}}(\bar{X})=\{u\in C^{-\infty}(\bar{X})\mid Au\in L^{2}\}.\] Therefore, we have \[H^{s,s+l,l}_{\mathrm{sc,b}}(\bar{X})=H^{s,l}_{\mathrm{b}}(\bar{X}).\] ### Uniform Fredholm estimates In this section, we prove the uniform Fredholm estimates first for operators \(\widehat{\square_{\partial_{\mathrm{b}}}}(\sigma)\) acting on scalar functions, then for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) acting on scattering 1-forms and the linearized gauge-fixed Einstein-Maxwell operator \(\widehat{L_{b,\gamma}}(\sigma)\), as well as their formal adjoints for \(\sigma\in\mathbb{C},|\sigma|\leq C\). To this end, we combine the global dynamics of of the Hamiltonian flow of the principal symbol \(p_{\sigma}\) established in 5.4 with the elliptic estimate, propagation of singularities estimate, the radial point estimate at event horizon, the scattering radial point estimate at spatial infinity \(\partial_{+}\bar{X}\) and hyperbolic estimate. #### 5.2.1. Uniform Fredholm estimates for scalar wave operators We first prove uniform Fredholm estimates for the operator \(\widehat{\square_{\partial_{\mathrm{b}}}}(\sigma)\) acting on scalar functions, as well as its formal adjoint with respect to volume density \(L^{2}(\bar{X};\sqrt{\det|g_{b}|}drd\theta d\varphi)\) for \(\sigma\in\mathbb{C},|\sigma|\leq C\). 
**Theorem 5.5**.: _Let \(b_{0}=(\mathbf{m}_{0},\mathbf{a}_{0},\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|+|\mathbf{a}_{0}|<\mathbf{m}_{0}\). Then there exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\) and for \(s>s_{0}>\frac{1}{2},\ell-1\leq\ell_{0}<\ell<-\frac{1}{2}\) with \(s+\ell>s_{0}+\ell_{0}>-\frac{1}{2}\) and \(\ell\neq-\frac{3}{2}\), the following holds._ 1. _For any fixed_ \(C_{1}>0\)_, there exist_ \(C>0\) _independent of_ \(b\)_, such that for_ \(\sigma\in\mathbb{C},\operatorname{Im}\sigma\geq 0,C_{1}^{-1}\leq|\sigma|\leq C _{1}\)_, we have_ \[\|u\|_{\tilde{H}_{\mathrm{b}}^{s,\ell}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\|_{ \tilde{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^{s_ {0},\ell_{0}}(\bar{X})}\Big{)},\] (5.15) \[\|u\|_{\tilde{H}_{\mathrm{b}}^{-s,\ell-1}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)^{*}u\|_{ \tilde{H}_{\mathrm{b}}^{-s,\ell-1}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^{- N,-\ell-3}(\bar{X})}\Big{)};\] (5.16) \[\|u\|_{\tilde{H}_{\mathrm{b}}^{s,\ell}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\|_{ \tilde{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^{ s_{0},\ell_{0}}(\bar{X})}\Big{)},\] (5.17) \[\|u\|_{\tilde{H}_{\mathrm{b}}^{-s+1,-\ell-2}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)^{*}u\|_{ \tilde{H}_{\mathrm{b}}^{-s,\ell-1}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^{ -N,-\ell-3}(\bar{X})}\Big{)}\] (5.18) _where_ \(C\) _only depends on_ \(b_{0},s,s_{0},\ell,\delta,C_{1}\)_. Therefore, the operators_ \[\widehat{\square_{g_{\mathrm{b}}}}(\sigma):\{u\in\tilde{H}_{\mathrm{b}}^{s, \ell}(\bar{X})\mid\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\in\tilde{H}_{ \mathrm{b}}^{s,\ell+1}(\bar{X})\}\rightarrow\bar{H}_{\mathrm{b}}^{s,\ell+1}( \bar{X})\] (5.19) \[\widehat{\square_{g_{\mathrm{b}}}}(\sigma):\{u\in\tilde{H}_{\mathrm{b}}^{ s,\ell}(\bar{X})\mid\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\in\tilde{H}_{ \mathrm{b}}^{s-1,\ell+2}(\bar{X})\}\rightarrow\bar{H}_{\mathrm{b}}^{s-1,\ell+2 }(\bar{X})\] (5.20) _are Fredholm for_ \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\)_._ 2. _Let_ \(\ell\in(-\frac{3}{2},-\frac{1}{2})\)_. For any fixed_ \(C_{1}>0\)_, there exist_ \(C>0\) _independent of_ \(b\)_, such that for_ \(\sigma\in\mathbb{C},\operatorname{Im}\sigma\geq 0,|\sigma|\leq C_{1}\)_, we have_ \[\|u\|_{\tilde{H}_{\mathrm{b}}^{s,\ell}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\|_{ \tilde{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^{ s_{0},\ell_{0}}(\bar{X})}\Big{)},\] (5.21) \[\|u\|_{\tilde{H}_{\mathrm{b}}^{-s+1,-\ell-2}(\bar{X})} \leq C\Big{(}\|\widehat{\square_{g_{\mathrm{b}}}}(\sigma)^{*}u\|_{ \tilde{H}_{\mathrm{b}}^{-s+1,-\ell-2}(\bar{X})}+\|u\|_{\tilde{H}_{\mathrm{b}}^ {-N,-\ell-3}(\bar{X})}\Big{)}\] _where_ \(C\) _only depends on_ \(b_{0},s,s_{0},\ell,\delta,C_{1}\)_. Therefore, the operator_ \[\widehat{\square_{g_{\mathrm{b}}}}(0):\{u\in\tilde{H}_{\mathrm{b}}^{s,\ell}( \bar{X})\mid\widehat{\square_{g_{\mathrm{b}}}}(0)u\in\tilde{H}_{\mathrm{b}}^{s -1,\ell+2}(\bar{X})\}\rightarrow\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\] (5.22) Figure 9. A phase space picture of the proof of the estimate (5.23), on the left, and (5.24), on the right. The coordinates and notations \(L_{\pm},R(0),R(\sigma)\) are the same as in Figure 6. 
For (5.23), we use hyperbolic estimates to control \(u\) via \(\chi u\); \(\chi\) is controlled (modulo elliptic estimates) by \(B_{\pm},B_{0},B_{\sigma}\); \(B_{0}\) is controlled by \(E_{0}\) using low b-decay radial point estimates and \(E_{0}\) is again controlled by \(B_{\sigma}\); finally \(B_{\pm},B_{\sigma}\) are controlled using high regularity radial points estimates at \(L_{\pm}\) and high sc-decay radial point estimates at \(R(\sigma)\). For (5.24), we use hyperbolic estimates to bound \((1-\chi)u\); \(\chi\) is controlled (modulo elliptic estimates) by \(B_{\pm}^{\prime},B_{0}^{\prime},B_{\sigma}^{\prime}\); \(B_{\pm}^{\prime}\) is controlled by \(E_{\pm}^{\prime}\) using low regularity radial point estimates and \(E_{\pm}^{\prime}\) is controlled by \(1-\chi\); \(B_{\sigma}^{\prime}\) is controlled by \(E_{\sigma}^{\prime}\) using low sc-decay radial points estimates and \(E_{\sigma}^{\prime}\) is controlled by \(B_{0}^{\prime}\); finally \(B_{0}^{\prime}\) is controlled using high b-decay radial points estimates at \(R(0)\). _is Fredholm._ Proof.: We first prove the statement (1). We claim that it suffices to prove the following estimates for \(s>s_{0}>\frac{1}{2},r>r_{0}>-\frac{1}{2},\ell<-\frac{1}{2},\ell\neq-\frac{3}{2}\) \[\|u\|_{\bar{H}^{s,r,\ell}_{\rm sc,b}(\bar{X})}\leq C\Big{(}\|\widehat{\square_{ g_{\rm b}}}(\sigma)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\rm sc,b}(\bar{X})}+\|u\|_{\bar{H}^{ s_{0},r_{0},\ell-1}_{\rm sc,b}(\bar{X})}\Big{)} \tag{5.23}\] and \[\|u\|_{\bar{H}^{-s+1,-r-1,-\ell-1}_{\rm sc,b}(\bar{X})}\leq C\Big{(}\|\widehat{ \square_{g_{\rm b}}}(\sigma)^{*}u\|_{\bar{H}^{-s,-r,-\ell}_{\rm sc,b}(\bar{X})} +\|u\|_{\bar{H}^{-N,-N,-\ell-3}_{\rm sc,b}(\bar{X})}\Big{)} \tag{5.24}\] Letting \(r=s+\ell>-\frac{1}{2}\) and \(r_{0}=s_{0}+\ell_{0}>-\frac{1}{2}\), we have \[\|u\|_{\bar{H}^{s,s+\ell,\ell}_{\rm sc,b}(\bar{X})}\leq C\Big{(}\|\widehat{ \square_{g_{\rm b}}}(\sigma)u\|_{\bar{H}^{s-1,s+\ell+1,\ell+1}_{\rm sc,b}(\bar {X})}+\|u\|_{\bar{H}^{s_{0},r_{0},\rm sc,b}_{\rm sc,b}(\bar{X})}\Big{)}\] and \[\|u\|_{\bar{H}^{-s+1,-s-\ell-1,-\ell-1}_{\rm sc,b}(\bar{X})}\leq C\Big{(}\| \widehat{\square_{g_{\rm b}}}(\sigma)^{*}u\|_{\bar{H}^{-s,-s-\ell-1,-\ell}_{ \rm sc,b}(\bar{X})}+\|u\|_{\bar{H}^{-N,-N,-\ell-3}_{\rm sc,b}(\bar{X})}\Big{)}.\] Then using the facts \(\bar{H}^{s,s+\ell,\ell}_{\rm sc,b}=\bar{H}^{s,b}_{\rm b},\hat{H}^{s,s+\ell, \ell}_{\rm sc,b}=\hat{H}^{s,l}_{\rm b}\), \[\bar{H}^{s,\ell+1}_{\rm b}=\bar{H}^{s,s+\ell+1,\ell+1}_{\rm sc,b}\subset\bar{ H}^{s-1,s+\ell+1,\ell+1}_{\rm sc,b},\quad\bar{H}^{s-1,\ell+2}_{\rm b}=\bar{H}^{s-1,s+ \ell+1,\ell+2}_{\rm sc,b}\subset\bar{H}^{s-1,s+\ell+1,\ell+1}_{\rm sc,b}\] and \[\hat{H}^{-s,-\ell-1}_{\rm b}=\hat{H}^{-s,-s-\ell-1,-\ell-1}_{\rm sc,b}\supset \hat{H}^{-s+1,-s-\ell-1,-\ell-1}_{\rm sc,b},\] \[\hat{H}^{-s+1,-\ell-2}_{\rm b}=\hat{H}^{-s+1,-s-\ell-1,-\ell-2}_{\rm sc,b}\supset \hat{H}^{-s+1,-s-\ell-1,-\ell-1}_{\rm sc,b},\] we obtain (5.15), (5.16), (5.17) and (5.18). Now we will show (5.23) and (5.24). Here we only discuss the case \({\rm Re}\,\sigma>0\) (see Figure 9 for a phase space illustration of the proof of estimates (5.23) and (5.24)), as the case \({\rm Re}\,\sigma\leq 0\) can be handled in a similar manner. * Proof of the estimate (5.23). 
For \(s>s_{0}>\frac{1}{2}\geq\frac{1}{2}-{\rm Im}\,(\sigma)\frac{r_{0}^{2}+a^{2}}{r _{0}^{2}-{\rm Im}}\) (as calculated (5.8)), using the high regularity radial point estimate (see [109, Proposition 2.3], [110, Proposition 5.27]), there exist \(B_{\pm},G_{\pm},S_{\pm}\in\Psi^{0}(\bar{X})\) microlocally supported near \(L_{\pm}\) with \(L_{\pm}\subset{\rm ell}(B_{\pm}),L_{\pm}\subset{\rm ell}(S_{\pm})\) and satisfying that all the forward (backward) null characteristics from \({\rm WF}(B_{\pm})\) tend to \(L_{\pm}\), while remaining in the elliptic set of \(G_{\pm}\), such that \[\|B_{\pm}u\|_{H^{s,r,\ell}_{\rm sc,b}(\bar{X})}\leq C\|G_{\pm}\widehat{\square_{ g_{\rm b}}}(\sigma)u\|_{H^{s-1,r+1,\ell+1}_{\rm sc,b}(\bar{X})}+C\|S_{\pm}u\|_{H^{s_{0},r_{0},\rm sc,b}_{\rm sc,b}(\bar{X})}\] (5.25) where \(r_{0}<r,\ell_{0}<\ell\) and the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. For \(r>r_{0}>-\frac{1}{2}\) (as calculated in (5.14)), using the high sc-decay radial point estimates at the non-zero scattering section \(R(\sigma)\) (see [111, SS4]. The proof follows from a positive commutator estimate \[i(\widehat{\square_{g_{\rm b}}}(\sigma)^{*}A-A\widehat{\square_{g_{\rm b}}}( \sigma))={\rm Im}\,\widehat{\square_{g_{\rm b}}}(\sigma)A+A{\rm Im}\,\widehat{ \square_{g_{\rm b}}}(\sigma)+i[{\rm Re}\,\widehat{\square_{g_{\rm b}}}(\sigma),A]\] where \[{\rm Re}\widehat{\square_{g_{\rm b}}}(\sigma)=\frac{\widehat{\square_{g_{\rm b}} }(\sigma)+\widehat{\square_{g_{\rm b}}}(\sigma)^{*}}{2},\quad{\rm Im}\,\widehat{ \square_{g_{\rm b}}}(\sigma)=\frac{\widehat{\square_{g_{\rm b}}}(\sigma)- \widehat{\square_{g_{\rm b}}}(\sigma)^{*}}{2i}.\] Let \(A\in\Psi^{2s-1,2r+1,2l+1}_{\rm sc,b}(\bar{X})\) with principal symbol \[a=\chi_{0}(\zeta_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\zeta_{\varphi}^{2})\chi_{ 1}((\zeta_{\rho}-2{\rm Re}\,z)^{2})\rho^{-2r+1}(\zeta_{\theta}^{2}+\frac{1}{ \sin^{2}\theta}\zeta_{\varphi}^{2}+\zeta_{\rho}^{2})^{s-\frac{1}{2}}\] where \(\chi_{0},\chi_{1}\) are identically \(0\) near \(0\) and have compact support sufficiently close to \(0\), and \(\chi_{1}\) has relatively large support such that \({\rm supp}\chi_{0}\cap{\rm supp}\chi_{1}^{\prime}\) is disjoint from the characteristic set of \({\rm Re}\,P_{\rm h}(z)\). Then using Lemma 4.1 and the calculation before Lemma 4.8 in [111], it follows that \(({\rm Im}\,\widehat{\square_{g_{\rm b}}}(\sigma)A+A{\rm Im}\,\widehat{\square_{ g_{\rm b}}}(\sigma)+i[{\rm Re}\,\widehat{\square_{g_{\rm b}}}(\sigma),A])\in\Psi^{2s,2r,2l}_{\rm sc,b}(\bar{X})\) whose principal symbol is positive definite elliptic near \(R(\sigma)\) if \(r>-\frac{1}{2}\). 
This, together with a regularization argument, proves high sc-decay radial point estimates at \(R(\sigma)\)), there exist \(B_{\sigma},G_{\sigma},S_{\sigma}\in\Psi^{0,0}_{\rm sc}(\bar{X})\) microlocally supported near \(R(\sigma)\) with \(R(\sigma)\subset\operatorname{ell}_{\text{sc}}(B_{\sigma}),R(\sigma)\subset \operatorname{ell}_{\text{sc}}(S_{\sigma})\) and satisfying that all the forward null characteristics from \(\operatorname{WF}_{\text{sc}}(B_{\sigma})\) tend to \(R(\sigma)\), while remaining in the scattering elliptic set of \(G_{\sigma}\), such that \[\|B_{\sigma}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq C\|G_{ \sigma}\widehat{\square_{g_{b}}}(\sigma)u\|_{H^{s-1,\nu+1,\ell+1}_{\text{sc}, \text{b}}(\bar{X})}+C\|S_{\sigma}u\|_{H^{s_{0},\text{cb}}_{\text{sc},\text{b}} (\bar{X})} \tag{5.26}\] where \(s_{0}<s,\ell_{0}<\ell\) and the differential order \(s\) and b-decay order \(\ell\) are actually irrelevant here. For \(\ell<-\frac{1}{2}\) (as calculated in (5.14)), using the low b-decay radial point estimates at the zero scattering section \(R(0)\) (see [111, SS4]), there exist \(B_{0},G_{0}\in\Psi^{0,0,0}_{\text{sc},\text{b}}(\bar{X})\) microlocally supported near \(R(0)\) (in fact near the blown up \(R(0)\) in the second microlocalized space introduced in SS5.1.5) with \(R(0)\subset\operatorname{ell}_{\text{sc},\text{b}}(B_{0})\) and \(E_{0}\in\Psi^{0,0,0}_{\text{sc},\text{b}}(\bar{X}),\operatorname{WF}_{\text {sc},\text{b}}(E_{0})\cap R(0)=\emptyset\), and satisfying that all the forward null characteristics from \(\operatorname{WF}_{\text{sc},\text{b}}(B_{0})\setminus R(0)\) enter \(\operatorname{ell}_{\text{sc},\text{b}}(E_{0})\), while remaining in the second microlocalized scattering elliptic set of \(G_{0}\), such that for any sufficiently large \(N>0\) \[\|B_{0}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq Ch^{-1}\|G_{0} \widehat{\square_{g_{b}}}(\sigma)u\|_{H^{s-1,\nu+1,\ell+1}_{\text{sc},\text{b }}(\bar{X})}+C\|E_{0}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}+C\|u\| _{\widetilde{H}^{-N,-N,\ell}_{\text{sc},\text{b}}(\bar{X})} \tag{5.27}\] where the differential order \(s\) is actually irrelevant here. Let \(r_{-}<r_{0}<r_{b}\) and \(\chi\in C^{\infty}_{c}(\bar{X})\) with \(\chi=1\) near \(r\geq r_{b}\) and \(\operatorname{supp}\chi\subset\{r\geq r_{0}\}\). By part (1) in Proposition 5.4, propagation of singularities estimates and elliptic estimates (Concretely, if \((r,\theta,\varphi,\xi)(\text{ or }(\rho,\theta,\varphi,\zeta))\in \operatorname{WF}(\chi)\cap(\Sigma\cup\Sigma_{\text{sc}})^{c}\), we use elliptic estimates. If \((r,\theta,\varphi,\xi)(\text{ or }(\rho,\theta,\varphi,\xi))\in\operatorname{WF}( \chi)\cap(\Sigma\cup\Sigma_{\text{sc}})\), then there exists \(s\in\mathbb{R}\) with \(\exp(s^{\text{sc}}\!H^{2,0}_{p_{\text{sc}},\text{b}})(\rho,\theta,\varphi, \xi)\in\operatorname{ell}_{h}(B_{\pm})\cup\operatorname{ell}_{h}(B_{1})\cup \operatorname{ell}_{h}(B_{0})\cup\operatorname{ell}_{h}(B_{z})\), and thus we use propagation of singularities estimates. We point out that in the set \(\{\operatorname{Re}p_{\text{sc}}(\sigma)=0\}\cap\partial_{+}\bar{X}\), the scattering symbol has a nonnegative imaginary part, so one can propagate estimates towards \(R(0)\). 
Finally, by using a pseudodifferential partition of unity, \(\chi\) can be written as s sum of operators falling into the above two case), we have \[\begin{split}\|\chi u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar {X})}&\leq C\|\widehat{\square_{g_{b}}}(\sigma)u\|_{H^{s-1,\nu+1, \ell+1}_{\text{sc},\text{b}}(\bar{X})}+C\|B_{+}u\|_{H^{s,\nu,\ell}_{\text{sc}, \text{b}}(\bar{X})}+C\|B_{-}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})} \\ &+C\|B_{0}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}+C\|B_{ \sigma}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}+C\|u\|_{\widetilde{H }^{-N,-N,\ell}_{\text{sc},\text{b}}(\bar{X})}.\end{split} \tag{5.28}\] By the same reasoning as above in the control of \(\chi u\), it follows that \[\|E_{0}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq C\|\widehat{ \square_{g_{b}}}(\sigma)u\|_{\widetilde{H}^{s-1,\nu+1,\ell+1}_{\text{sc},\text{b }}(\bar{X})}+C\|B_{\sigma}u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}+C\| u\|_{\widetilde{H}^{-N,-N,\ell}_{\text{sc},\text{b}}(\bar{X})}. \tag{5.29}\] Putting all the above estimates together yields for \(s>s_{0}>\frac{1}{2},r>r_{0}>-\frac{1}{2},\ell<-\frac{1}{2}\) and for any sufficiently large \(N>0\) \[\|\chi u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq C\|\widehat{ \square_{g_{b}}}(\sigma)u\|_{\widetilde{H}^{s-1,\nu+1,\ell+1}_{\text{sc},\text{b }}(\bar{X})}+C\|u\|_{\widetilde{H}^{s_{0},\text{cb}}_{\text{sc},\text{b}}(\bar{X} )}+C\|u\|_{\widetilde{H}^{-N,-N,\ell}_{\text{sc},\text{b}}(\bar{X})}. \tag{5.30}\] Since \[p=-\rho_{b}^{-2}\Big{(}\Delta_{b}\big{(}\xi_{r}+\frac{a\xi_{\varphi}}{\Delta_{b} }\big{)}^{2}-\frac{a^{2}\xi_{\varphi}^{2}}{\Delta_{b}}+\tilde{p}\Big{)}\] where \[\tilde{p}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2}\] is hyperbolic with respect to \(r\) in the region \(\{r_{-}\leq r<r_{b}\}\) (see [40, definition E.55]), using the hyperbolic estimates (see [40, Theorem E.56]), we have for all \(s,r,\ell\in\mathbb{R}\) \[\|(1-\chi)u\|_{\widetilde{H}^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq C\| \widehat{\square_{g_{b}}}(\sigma)u\|_{\widetilde{H}^{s-1,\nu+1,\ell+1}_{\text{sc}, \text{b}}(\bar{X})}+\|\chi u\|_{\widetilde{H}^{s,\nu,\ell}_{\text{sc},\text{b}}( \bar{X})} \tag{5.31}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually relevant here. Combining (5.30) with (5.31) yields for \(s>s_{0}>\frac{1}{2},r>r_{0}>-\frac{1}{2},\ell<-\frac{1}{2}\) and for any sufficiently large \(N>0\) \[\|u\|_{H^{s,\nu,\ell}_{\text{sc},\text{b}}(\bar{X})}\leq C\|\widehat{\square_{g_ {b}}}(\sigma)u\|_{H^{s-1,\nu+1,\ell+1}_{\text{sc},\text{b}}(\bar{X})}+C\|u\|_{ \widetilde{H}^{-N,-N,\ell}_{\text{sc},\text{b}}(\bar{X})}+C\|u\|_{H^{-N,-N,\ell}_{ \text{sc},\text{b}}(\bar{X})}. 
\tag{5.32}\] Since near \(\partial_{+}\bar{X}\) \[\widehat{\square_{g_{b}}}(\sigma)-N(\widehat{\square_{g_{b}}}(\sigma))\in\rho^{2 }\text{Diff}^{2}_{\text{b}}(\bar{X}),\quad N(\widehat{\square_{g_{b}}}( \sigma))=-2i\sigma\rho(\rho\partial_{\rho}-1),\] it follows that from [111, Lemma 4.13 and Proposition 4.16] that for \(\ell<-\frac{1}{2}\) \[\|u\|_{\bar{H}^{-N,-N,\ell}_{\mathrm{sc,b}}(\bar{X})} \leq\|u\|_{\bar{H}^{-N,-\ell-N,\ell}_{\mathrm{c}}(\bar{X})}=\|u\|_ {\bar{H}^{-N-\ell,\ell}_{\mathrm{c}}(\bar{X})}\leq C\|N(\widehat{\square_{g_{b}}} (\sigma))u\|_{\bar{H}^{-N-\ell,\ell+1}_{\mathrm{c}}(\bar{X})}\] \[\leq C\|\widehat{\square_{g_{b}}}(\sigma)u\|_{\bar{H}^{-N-\ell, \ell+1}_{\mathrm{c}}(\bar{X})}+C\|u\|_{\bar{H}^{-N-\ell+2,\ell-1}_{\mathrm{c}} (\bar{X})}\] \[=C\|\widehat{\square_{g_{b}}}(\sigma)u\|_{\bar{H}^{-N-\ell,-N+1, \ell+1}_{\mathrm{c}}(\bar{X})}+C\|u\|_{\bar{H}^{-N-\ell+2,-N+1,\ell-1}_{\mathrm{ c}}(\bar{X})},\] and thus for \(s>s_{0}>\frac{1}{2},r>r_{0}>-\frac{1}{2},\ell<-\frac{1}{2}\) \[\|u\|_{\bar{H}^{s,r,\ell}_{\mathrm{sc,b}}(\bar{X})}\leq C\|\widehat{\square_{ g_{b}}}(\sigma)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{c}}(\bar{X})}+C\|u\|_{ \bar{H}^{s_{0},r_{0},\ell-1}_{\mathrm{c}}(\bar{X})}. \tag{5.33}\] This finishes the proof of (5.23). * Proof of the estimate (5.24). For \(s>\frac{1}{2}\geq\frac{1}{2}-\operatorname{Im}\left(\sigma\right)\frac{r_{0}^ {2}+a^{2}}{r_{0}^{2}-\mathbf{m}}\), we have \(1-s\leq\frac{1}{2}-\operatorname{Im}\left(\bar{\sigma}\right)\frac{r_{0}^{2} +a^{2}}{r_{0}^{2}-\mathbf{m}}\). Then using the low regularity radial point estimates (see [109, Proposition 2.4], [110, Proposition 5.27]), there exist \(B^{\prime}_{\pm},G^{\prime}_{\pm}\in\Psi^{0}(\bar{X})\) microlocally supported near \(L_{\pm}\) with \(L_{\pm}\subset\operatorname{ell}(B^{\prime}_{\pm})\) and \(E^{\prime}_{\pm}\in\Psi^{0}(\bar{X}),\operatorname{WF}(E^{\prime}_{\pm})\cap L _{\pm}=\emptyset\), and satisfying that all the backward (forward) null characteristics from \(\operatorname{WF}(B^{\prime}_{\pm})\setminus L_{\pm}\) reach \(\operatorname{ell}(E^{\prime}_{\pm})\), while remaining in the elliptic set of \(G^{\prime}_{\pm}\), such that for any \(N\in\mathbb{R}\) \[\|B^{\prime}_{\pm}u\|_{H^{1-s,r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\] (5.34) \[\qquad\leq C\|G^{\prime}\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_ {H^{-s,-r,-\ell}_{\mathrm{c},b}(\bar{X})}+C\|E^{\prime}_{\pm}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}+C\|u\|_{\bar{H}^{-N,-N,-N}_{\mathrm{c}}(\bar {X})}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. For \(r>-\frac{1}{2}\), we have \(-r-1<-\frac{1}{2}\). Then using the low sc-decay radial point estimates at the non-zero scattering section \(R(\sigma)\) (see [111, SS4] and the above discussion about the proof of high sc-decay radial point estimates at \(R(\sigma)\). 
We note that the only difference is that now the term involving \(\chi^{\prime}_{0}\) does not have the correct sign and needs to be treated as an error term \(E^{\prime}_{\sigma}u\)), there exist \(B^{\prime}_{\sigma},G^{\prime}_{\sigma}\in\Psi^{0,0}_{\mathrm{sc}}(\bar{X})\) microlocally supported near \(R(\sigma)\) with \(R(\sigma)\subset\operatorname{ell}(B_{\sigma})\) and \(E^{\prime}_{\sigma}\in\Psi^{0,0}_{\mathrm{sc}}(\bar{X}),\operatorname{WF}_{ \mathrm{sc}}(E^{\prime}_{\sigma})\cap R(\sigma)=\emptyset\), and satisfying that all the backward null characteristics from \(\operatorname{WF}_{\mathrm{sc}}(B^{\prime}_{\sigma})\setminus R(\sigma)\) reach \(\operatorname{ell}_{\mathrm{sc},h}(E^{\prime}_{\sigma})\), while remaining in the scattering elliptic set of \(G^{\prime}_{\sigma}\), such that \[\|B^{\prime}_{\sigma}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\] (5.35) \[\qquad\leq C\|G^{\prime}_{\sigma}\widehat{\square_{g_{b}}}( \sigma)^{*}u\|_{H^{-s,-r,-\ell}_{\mathrm{sc,b}}(\bar{X})}+C\|E^{\prime}_{\sigma}u \|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}+C\|u\|_{\bar{H}^{-N,-N,-N}_{ \mathrm{c}}(\bar{X})}\] where the differential order \(s\) and b-decay order \(\ell\) are actually irrelevant here. For \(\ell<-\frac{1}{2}\), we have \(-\ell-1>-\frac{1}{2}\). Then using the high b-decay radial point estimates at the zero scattering section \(R(0)\) (see [111, SS4]), there exist \(B^{\prime}_{0},G^{\prime}_{0}\in\Psi^{0,0,0}_{\mathrm{sc,b}}(\bar{X})\) microlocally supported near \(R(0)\) (in fact near the blown up \(R(0)\) in the second microlocalized space introduced in SS5.1.5) with \(R(0)\subset\operatorname{ell}_{\mathrm{sc,b}}(B^{\prime}_{0})\), and satisfying all the backward null characteristics from \(\operatorname{WF}_{\mathrm{sc,b}}(B^{\prime}_{0})\) tend to \(R(0)\), while remaining in the second microlocalized scattering elliptic set of \(G^{\prime}_{0}\), such that for any \(N\in\mathbb{R}\) \[\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\leq C\|G^{ \prime}_{0}\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{H^{-s,-r,-\ell}_{\mathrm{ c}}(\bar{X})}+C\|u\|_{\bar{H}^{-N,-N,-\ell-1}_{\mathrm{c}}(\bar{X})}\] (5.36) where the differential order \(s\) is actually irrelevant here. By the same reasoning as in the proof of the estimate (5.23), we have \[\|\chi u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\] (5.37) \[\qquad\leq C\|\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{\bar{H}^{-s,- r,-\ell}_{\mathrm{c}}(\bar{X})}+C\|B_{+}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\] \[\qquad+C\|B^{\prime}_{-}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}( \bar{X})}+C\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}(\bar{X})}\] \[\qquad+C\|B^{\prime}_{\sigma}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{c}}( \bar{X})}+C\|u\|_{H^{-N,-N,-\ell-1}_{\mathrm{c}}(\bar{X})},\] and \[\begin{split}&\|E^{\prime}_{\pm}u\|_{H^{1-s,-\tau-1,-\ell-1}_{\rm{ \widetilde{X}}}}+\|E^{\prime}_{\sigma}u\|_{H^{1-s,-\tau-1,-\ell-1}_{\rm{ \widetilde{X}}}}\\ &\quad\leq C\|\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{\mathring {H}^{-s,-\tau,-\ell-1}_{\rm{\widetilde{X}}}}+C\|(1-\chi)u\|_{\mathring{H}^{1- s,-\tau-1,-\ell-1}_{\rm{\widetilde{X}}}}\\ &\quad\quad+C\|B^{\prime}_{0}u\|_{H^{1-s,-\tau-1,-\ell-1}_{\rm{ \widetilde{X}}}}+C\|u\|_{\mathring{H}^{-N,-N,-\ell-1}_{\rm{\widetilde{X}}}}. 
\end{split} \tag{5.38}\] Putting all the above estimates together yields for \(s>\frac{1}{2},r>-\frac{1}{2},\ell<-\frac{1}{2}\) \[\begin{split}\|\chi u\|_{H^{1-s,-\tau-1,-\ell-1}_{\rm{ \widetilde{X}}}}&\leq C\|\widehat{\square_{g_{b}}}(\sigma)^{*}u \|_{\mathring{H}^{-s,-\tau,-\ell-1}_{\rm{\widetilde{X}}}}+C\|(1-\chi)u\|_{ \mathring{H}^{1-s,-\tau-1,-\ell-1}_{\rm{\widetilde{X}}}}\\ &\quad+C\|u\|_{\mathring{H}^{-N,-N,-\ell-1}_{\rm{\widetilde{X}}}},\end{split} \tag{5.39}\] Again using the semiclassical hyperbolic estimates (see [40, Theorem E.56]), we have for all \(s,r,\ell\in\mathbb{R}\) \[\|(1-\chi)u\|_{\mathring{H}^{1-s,-\tau-1,-\ell-1}_{\rm{\widetilde{X}}}}\leq C \|\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{\mathring{H}^{-s,-\tau-\ell}_{\rm {\widetilde{X}}}} \tag{5.40}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually relevant here. Combining (5.39) with (5.40) yields for \(s>\frac{1}{2},r>-\frac{1}{2},\ell<-\frac{1}{2}\) \[\|u\|_{\mathring{H}^{1-s,-r-1,-\ell-1}_{\rm{\widetilde{X}}}}\leq C\|\widehat {\square_{g_{b}}}(\sigma)^{*}u\|_{\mathring{H}^{-s,-\tau,-\ell}_{\rm{ \widetilde{X}}}}+C\|u\|_{\mathring{H}^{-N,-N,-\ell-1}_{\rm{\widetilde{X}}}}. \tag{5.41}\] By the same argument as in the proof of the estimate (5.23), we have for \(s>\frac{1}{2},r>-\frac{1}{2},\ell<-\frac{1}{2}\) \[\|u\|_{\mathring{H}^{1-s,-r-1,-\ell-1}_{\rm{\widetilde{X}}}}\leq C\|\widehat {\square_{g_{b}}}(\sigma)^{*}u\|_{\mathring{H}^{-s,-r,-\ell}_{\rm{\widetilde{ X}}}}+C\|u\|_{\mathring{H}^{-N,-N,-\ell-2}_{\rm{\widetilde{X}}}}. \tag{5.42}\] This proves the estimate (5.24). * Fredholm property of \(\widehat{\square_{g_{b}}}(\sigma)\) for \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\). We only discuss the proof of (5.19) in detail as (5.20) can be handled in a completely analogous manner. We define the space \[\mathcal{X}(\sigma):=\{u\in\bar{H}^{s,\ell}_{\rm{b}}({\widetilde{X}})\mid \widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s,\ell+1}_{\rm{b}}({\widetilde{X}})\}.\] We note that \(\mathcal{X}(\sigma)\) is a Hilbert space endowed with the norm \(\|u\|_{\mathcal{X}(\sigma)}:=\|u\|_{\mathring{H}^{s,\ell}_{\rm{b}}}+\widehat{ \|\square_{g_{b}}}(\sigma)u\|_{\mathring{H}^{s,\ell+1}_{\rm{b}}}\). Let \(\{u_{j}\}\) is a bounded sequence in \(\ker\widehat{\square_{g_{b}}}(\sigma)\subset\mathcal{X}(\sigma)\). Since \(\bar{H}^{s,\ell}_{\rm{b}}\to\bar{H}^{s_{0},\ell_{0}}_{\rm{b}}\) is a compact embedding, by passing to a subsequence we may assume that \(u_{j}\) converges in \(\bar{H}^{s_{0},\ell_{0}}_{\rm{b}}\). By (5.15), we see that \(u_{j}\) is a Cauchy sequence in \(\bar{H}^{s,\ell}_{\rm{b}}\). This implies that \(\ker\widehat{\square_{g_{b}}}(\sigma)\) is finite dimensional. We then show that \(\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b}}}(\sigma)\) is closed in \(\bar{H}^{s,\ell+1}_{\rm{b}}\). Let \(\{f_{j}\}\subset\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b} }}(\sigma)\) be a convergent sequence with \(f_{j}\to f_{\infty}\in\bar{H}^{s,\ell+1}_{\rm{b}}\). We may write \(f_{j}=\widehat{\square_{g_{b}}}(\sigma)u_{j}\) where \(u_{j}\in\mathcal{X}(\sigma)\) is in the orthogonal complement of \(\ker\widehat{\square_{g_{b}}}(\sigma)\). We claim that \(\{u_{j}\}\) is bounded in \(\mathcal{X}(\sigma)\). (Otherwise, suppose that \(\{u_{j}\}\) is unbounded in \(\mathcal{X}(\sigma)\). Passing to a subsequence, we may assume that \(\|u_{j}\|_{\mathcal{X}(\sigma)}\to\infty\). 
Let \(\tilde{u}_{j}=u_{j}/\|u_{j}\|_{\mathcal{X}(\sigma)}\) and \(\tilde{f}_{j}=\widehat{\square_{g_{b}}}(\sigma)\tilde{u}_{j}=f_{j}/\|u_{j}\|_{ \mathcal{X}(\sigma)}\). Then \(\|\tilde{u}_{j}\|_{\mathcal{X}(\sigma)}=1\) and \(\|\tilde{f}_{j}\|_{\widetilde{H}^{s,\ell+1}_{\rm{b}}}\to 0\). By the argument as above, passing to a subsequence, we have that \(\tilde{u}_{j}\to\tilde{u}_{\infty}\in\mathcal{X}(\sigma)\) and thus \(\widehat{\square_{g_{b}}}(\sigma)\tilde{u}_{\infty}=0\). On the other hand, since \(\tilde{u}_{\infty}\) is in the orthogonal complement of \(\ker\widehat{\square_{g_{b}}}(\sigma)\), we have \(\widehat{\square_{g_{b}}}(\sigma)\tilde{u}_{\infty}\neq 0\); this gives a contradiction.) Then argue as above, by passing to a subsequence we may assume that \(u_{j}\to u_{\infty}\in\mathcal{X}(\sigma)\). It follows that \(\widehat{\square_{g_{b}}}(\sigma)u_{j}\to\widehat{\square_{g_{b}}}(\sigma)u_{ \infty}\in\bar{H}^{s,\ell+1}_{\rm{b}}\) and thus \(\widehat{\square_{g_{b}}}(\sigma)u_{\infty}=f_{\infty}\). This proves that \(f_{\infty}\in\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b}}}(\sigma)\) and thus \(\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b}}}(\sigma)\) is closed. Finally, it remains to show that \(\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b}}}(\sigma)\) has finite codimension. Proceeding as in the proof of the finite dimension property of \(\ker\widehat{\square_{g_{b}}}(\sigma)\) and using (5.16), we obtain that \(\ker\widehat{\square_{g_{b}}}(\sigma)^{*}\subset\mathring{H}^{-s,-\ell-1}_{\rm{ b}}\) is also finite dimensional. Let \(f\in\bar{H}^{s,\ell+1}_{\rm{b}}\) be such that \(\langle f,v\rangle=0\) for all \(v\in\ker\widehat{\square_{g_{b}}}(\sigma)^{*}\). We claim that there exists \(u\in\mathring{H}^{s,\ell}_{\rm{b}}\) such that \(\widehat{\square_{g_{b}}}(\sigma)u=f\). (This implies that \(\operatorname{ran}_{\mathcal{X}(\sigma)}\widehat{\square_{g_{b}}}(\sigma)\) has finite codimension.) We define \[\mathcal{Y}(\sigma):=\{v\in\mathring{H}^{-s,-\ell-1}_{\rm{\widetilde{X}}}({ \widetilde{X}})\mid\widehat{\square_{g_{b}}}(\sigma)^{*}v\in\mathring{H}^{-s,- \ell}_{\rm{\widetilde{X}}}({\widetilde{X}})\}.\] Let \(V\) be a complementary space of \(\ker\widehat{\square_{g_{b}}}(\sigma)^{*}\) in \(\mathring{H}^{-s,-\ell-1}_{\rm{\widetilde{X}}}\). Then there exists a constant \(C_{2}\) such that \[\|v\|_{\mathring{H}^{-s,-\ell-1}_{\rm{\widetilde{X}}}}\leq C_{2}\|\widehat{ \square_{g_{b}}}(\sigma)^{*}v\|_{\mathring{H}^{-s,-\ell-1}_{\rm{\widetilde{ If this were not true, we could find a sequence \(v_{j}\subset\mathcal{Y}(\sigma)\cap V\) such that \[\|v_{j}\|_{\dot{H}_{\mathrm{b}}^{-s,-\ell-1}}=1,\quad\|v_{j}\|_{\dot{H}_{ \mathrm{b}}^{-s,-\ell-1}}\geq j\|\widehat{\square_{g_{b}}}(\sigma)v_{j}\|_{\dot {H}_{\mathrm{b}}^{-s,-\ell}}.\] By (5.16) and the compact embedding \(\dot{H}_{\mathrm{b}}^{-s,-\ell-1}\to\dot{H}_{\mathrm{b}}^{-N,-\ell-3}\), passing to a subsequence we have \(v_{j}\to v_{\infty}\) with \(\|v_{\infty}\|_{\dot{H}_{\mathrm{b}}^{-s,-\ell-1}}=1\) and \(v_{\infty}\in V\cap\ker\widehat{\square_{g_{b}}}(\sigma)^{*}\); this gives a contradiction. 
If \(f\in\bar{H}_{\mathrm{b}}^{s,\ell+1}\) satisfies \(\langle f,v\rangle=0\) for all \(v\in\ker\widehat{\square_{g_{b}}}(\sigma)^{*}\), we have \[|\langle f,v\rangle|\leq C_{2}\|f\|_{\bar{H}_{\mathrm{b}}^{-s,\ell+1}}\| \widehat{\square_{g_{b}}}(\sigma)^{*}v\|_{\dot{H}_{\mathrm{b}}^{-s,-\ell}}, \quad v\in\mathcal{Y}(\sigma)\cap V.\] Since any \(v\in\mathcal{Y}(\sigma)\) can be written as \(v=v_{1}+v_{2}\) with \(v_{1}\in\ker\widehat{\square_{g_{b}}}(\sigma)^{*},v_{2}\in\mathcal{Y}(\sigma)\cap V\), it follows that \[|\langle f,v\rangle|=|\langle f,v_{2}\rangle|\leq C_{2}\|f\|_{\bar{H}_{ \mathrm{b}}^{s,\ell+1}}\|\widehat{\square_{g_{b}}}(\sigma)^{*}v_{2}\|_{\dot{H }_{\mathrm{b}}^{-s,-\ell}}=C_{2}\|f\|_{\bar{H}_{\mathrm{b}}^{s,\ell+1}}\| \widehat{\square_{g_{b}}}(\sigma)^{*}v\|_{\dot{H}_{\mathrm{b}}^{-s,-\ell}}, \quad v\in\mathcal{Y}(\sigma).\] By Hahn-Banach Theorem, the anti-linear form \(\widehat{\square_{g_{b}}}(\sigma)^{*}v\to\langle f,v\rangle\) with \(v\in\mathcal{Y}(\sigma)\) can be extended to a continuous anti-linear form on \(\dot{H}_{\mathrm{b}}^{-s,-\ell}\). Since \((\dot{H}_{\mathrm{b}}^{-s,-\ell})^{*}=\bar{H}_{\mathrm{b}}^{s,\ell}\), there exists \(u\in\bar{H}_{\mathrm{b}}^{s,\ell}\) such that \(\langle u,\widehat{\square_{g_{b}}}(\sigma)^{*}v\rangle=\langle f,v\rangle\) for \(v\in\mathcal{Y}(\sigma)\). In particular, for \(v\in C_{c}^{\infty}(X^{\circ})\) \[\langle\widehat{\square_{g_{b}}}(\sigma)u,v\rangle=\langle u,\widehat{ \square_{g_{b}}}(\sigma)^{*}v\rangle=\langle f,v\rangle.\] This proves that \(\widehat{\square_{g_{b}}}(\sigma)u=f\). We now prove statement (2). Combining the estimates near spatial infinity \(\partial_{+}\bar{X}\) (see [113, Proposition 5.3]) with the radial point estimates at event horizon, propagation of singularity estimates, elliptic estimates and hyperbolic estimates as in the proof of statement (1), we obtain (5.21). The Fredholm property follows as in the nonzero \(\sigma\) case. #### 5.2.2. Uniform Fredholm estimates for tensor wave operators Now we prove an analogy to Theorem 5.5 for the operators \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\), \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) and \(\widehat{L_{b,\gamma}}(\sigma)\) acting on bundles. We note that the formal adjoint of \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma},L_{b,\gamma}\) with respect to the natural inner product induced by \(g_{b}\) and the volume form \(dvol_{g_{b}}\) is \[\mathcal{W}_{b,\gamma},\quad\mathcal{P}_{b,\gamma},\quad\begin{pmatrix}G_{g_{b }}&0\\ 0&4\end{pmatrix}L_{b,\gamma}\begin{pmatrix}G_{g_{b}}&0\\ 0&\frac{1}{4}\end{pmatrix},\] respectively (see (9.4) and (9.5) for the derivation of the adjoint of \(L_{b,\gamma}\)). **Theorem 5.6**.: _Let \(b_{0}=(\mathbf{m}_{0},\mathbf{a}_{0},\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|+|\mathbf{a}_{0}|<\mathbf{m}_{0}\) and \(|\mathbf{a}_{0}|\ll|\mathbf{m}_{0}|+|\mathbf{Q}_{0}|\). Then there exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\) and for \(s>s_{0}>2\) (resp. \(s>s_{0}>3\)), \(\ell-1\leq\ell_{0}<\ell<-\frac{1}{2}\) with \(s+\ell>s_{0}+\ell_{0}>-\frac{1}{2}\) and \(\ell\neq-\frac{3}{2}\), the conclusions in Theorem 5.5 hold for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\) and \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) (resp. 
\(\widehat{L_{b,\gamma}}(\sigma)\))._ Proof.: The proof is analogous to that of Theorem 5.5, except for the computation of threshold regularity in the radial point estimate at event horizon and the threshold decay rate in the radial point estimate at spatial infinity \(\partial_{+}\bar{X}\). As for the calculation of threshold regularity in the radial point estimate at event horizon, the Reissner-Nordstrom metric case \(g_{b}=g_{(\mathbf{m}_{0},0,\mathbf{Q}_{0})}\) was done in appendix C. According to (C.21) and (5.8), for \(\mathrm{Im}\,\sigma\geq 0\), the threshold regularity is \(\frac{3}{2}-\frac{2\gamma}{\kappa}\) for \(\widehat{\mathcal{P}_{b_{0},\gamma}}(\sigma)\) and \(\frac{3}{2}+\frac{\gamma}{2\kappa}\) for \(\widehat{\mathcal{W}_{b_{0},\gamma}}(\sigma)\) acting on \(1\)-forms, and is \(\frac{5}{2}-\frac{2\gamma}{\kappa}\) for \(\widehat{L_{b,\gamma}}(\sigma)\) acting on \(S^{2\widetilde{\mathrm{s}}\widetilde{\mathrm{c}}\widetilde{\mathrm{c}}^{*} \widetilde{\mathrm{d}}\widetilde{\mathrm{s}}^{*}\widetilde{\mathrm{d}}\widetilde{ \mathrm{s}}^{*}\widetilde{\mathrm{d}}\widetilde{\mathrm{s}}}\). This implies that the threshold regularity for nearby Kerr-Newman metrics is close to \(\frac{3}{2}\) for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\) and \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\), and close to \(\frac{5}{2}\) for \(\widehat{L_{b,\gamma}}(\sigma)\) as \(\gamma\) is a sufficiently small constant. For \(0\neq\sigma\in\mathbb{R}\), the radial point estimate at \(R(0)\) and \(R(\sigma)\) for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\), \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\), \(\widehat{L_{b,\gamma}}(\sigma)\) requires the computation of a threshold decay rate relative to \(L^{2}(\bar{X})\). Concretely, the threshold \(-\frac{1}{2}\) from [111, Theorems 1.1 and 1.3] is modified by the subprincipal symbol \(\sigma_{\mathrm{sc}}(\frac{1}{2i\rho}(\widehat{\bullet}(\sigma)-\widehat{ \bullet}(\sigma)^{*}))|_{R(0),R(\sigma)}\) where \(\bullet=\mathcal{P}_{b,\gamma},\ \mathcal{W}_{b,\gamma},\ L_{b,\gamma}\). Working in the trivialization of \(S^{2\widetilde{\mathrm{s}}\widetilde{\mathrm{c}}\widetilde{\mathrm{c}}^{*} \widetilde{\mathrm{d}}\widetilde{\mathrm{s}}^{*}\widetilde{\mathrm{d}} \widetilde{\mathrm{s}}}\) given in terms of the differentials of standard coordinates \(t,x^{1},x^{2},x^{3}\), according to Proposition 4.6 and Lemma 4.11, we see that \[\sigma_{\rm sc}(\frac{1}{2i\rho}(\widehat{\mathbf{s}}(\sigma)- \widehat{\mathbf{s}}(\sigma)^{*}))|_{R(0),R(\sigma)}=\sigma_{\rm sc}(\frac{1}{2 i\rho}(\widehat{\Box_{g_{\rm b},0}}(\sigma)-\widehat{\Box_{g_{\rm b},0}}( \sigma)^{*})\otimes\mathrm{Id}_{4\times 4})|_{R(0),R(\sigma)}=0,\quad\bullet= \mathcal{P},\mathcal{W},\] \[\sigma_{\rm sc}(\frac{1}{2i\rho}(\widehat{L_{b,\gamma}}(\sigma)- \widehat{L_{b,\gamma}}(\sigma)^{*}))|_{R(0),R(\sigma)}=\sigma_{\rm sc}(\frac{1 }{2i\rho}(\widehat{\Box_{g_{\rm b},0}}(\sigma)-\widehat{\Box_{g_{\rm b},0}}( \sigma)^{*})\otimes\mathrm{Id}_{14\times 14})|_{R(0),R(\sigma)}=0.\] Therefore, the threshold decay rate is still \(-\frac{1}{2}\). ### Description of the kernel of the wave type operators In this section, we give a detailed description of kernel of the following wave operators: \(\widehat{\Box_{g_{\rm b}}}(\sigma)\) on scalar functions, \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) on scattering 1-forms and the linearized gauge-fixed Einstein-Maxwell operator \(\widehat{L_{b,\gamma}}(\sigma)\). 
**Proposition 5.7**.: _Let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|\mathbf{Q}|+|\mathbf{a}|<|\mathbf{m}|\) and \(a=|\mathbf{a}|\ll|\mathbf{m}|+|\mathbf{Q}|\). Suppose \(u\in\bar{H}_{\rm b}^{s,\ell}(\bar{X};\mathbb{C})\) with \(s>\frac{1}{2}\)._ 1. _If_ \(\widehat{\Box_{g_{\rm b}}}(0)u=0\) _and_ \(u\in\bar{H}_{\rm b}^{s,\ell}(\bar{X};\mathbb{C})\) _with_ \(\ell\in\mathbb{R}\)_, then_ \(u\in\bar{H}_{\rm b}^{\infty,\ell}(\bar{X};\mathbb{C})\) _and has a polyhomogeneous expansion with index set contained in_ \(\{(z,k)\mid z\in i\mathbb{Z},k\in\mathbb{N}\}\)_. In particular, if_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then_ \(u\in\mathcal{A}^{1}(\bar{X};\mathbb{C})\)_. More precisely, there exists_ \(u_{0}\in C^{\infty}(\partial_{+}\bar{X};\mathbb{C})\) _such that_ \(u-\frac{u_{0}}{r}\in\mathcal{A}^{2-}(\bar{X};\mathbb{C})\)_._ 2. _If_ \(\widehat{\Box_{g_{\rm b}}}(0)^{*}u=0\) _and_ \(u\in\dot{H}_{\rm b}^{-\infty,\ell}(\bar{X};\mathbb{C})\) _with_ \(\ell\in\mathbb{R}\)_, then_ \(u\in\dot{H}_{\rm b}^{\frac{1}{2}-,\ell}(\bar{X};\mathbb{C})\)_. Near_ \(\partial_{+}\bar{X}\)_,_ \(u\) _has the same polyhomegeneous expansion as in part (_1_)._ 3. _If_ \(\sigma\neq 0\) _and_ \(u\in\ker\widehat{\Box_{g_{\rm b}}}(\sigma)\cap\bar{H}_{\rm b}^{s,\ell}( \bar{X};\mathbb{C})\) _for some_ \(\ell\in\mathbb{R}\) _with_ \(s+\ell>-\frac{1}{2}\)_, then_ \(u\in\rho C^{\infty}(\partial_{+}\bar{X};\mathbb{C})+\mathcal{A}^{2-}(\bar{X}; \mathbb{C})\)_._ _The above statements also hold for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}\) acting on \(\omega\in\bar{H}_{\rm b}^{s,\ell}(\bar{X};\widetilde{\text{$T$}^{*}}\bar{X})\) with \(s>2\) (for statement (2), \(\omega\in\dot{H}_{\rm b}^{-\frac{1}{2}-C(\gamma,a),\ell}\)) and \(\widehat{L_{b,\gamma}}(\sigma)\) acting on \((\dot{g},\dot{A})\in\bar{H}_{\rm b}^{s,\ell}(\bar{X};\widetilde{\text{$T$}^{*} }\bar{X}\oplus S^{2\text{$\widehat{$\widehat{$\widehat{$\widehat{$\widehat{$ \widehat{$\widehat{$\widehat{$\widehat{$\widehat{$\widehat{$ \widehat{$\widehat{$\widehat{$\widehat{$\widehat{$\widehat{\widehat$}}}}}}}}}}}}})\) \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! At this point, we only know that \(\widehat{\chi u}(\xi)\) is holomorphic and Schwartz in \(\operatorname{Re}\xi\) (with \(\operatorname{Im}\xi\) fixed) in the region \(\operatorname{Im}\xi>-\ell-\frac{3}{2}\) and so is \(\hat{f}(\xi)\) in \(\operatorname{Im}\xi>-\ell-\frac{5}{2}\). Therefore, we have the following inverse Mellin transform \[\chi u(\rho)=\frac{1}{2\pi}\int_{\operatorname{Im}\xi=-\ell-\frac{3}{2}+\epsilon }\rho^{i\xi}\widehat{\chi u}(\xi)\,d\xi,\quad\epsilon>0.\] We denote by \(Y_{l}^{m}\) with \(l\in\mathbb{N},m\in\mathbb{Z},|m|\leq l\) the spherical harmonic function of degree \(l\) and order \(m\) which satisfies \(\not{\Delta}Y_{l}^{m}=-l(l+1)Y_{l}^{m}\). We denote by \(\mathbf{S}_{l}=\{Y_{l}^{m}:|m|\leq l\}\) the space of degree \(l\) spherical harmonic functions. 
Then one can conclude that \(\{\mathbf{S}_{l}:l\in\mathbb{N}\}\) form an orthogonal basis of \(L^{2}(\mathbb{S}^{2})\). Therefore, one can expand \((\widehat{\chi u})(\xi)\) and \(\hat{f}(\xi)\) in terms of spherical harmonics \(Y_{l}^{m}\), i.e., we write \((\widehat{\chi u})(\xi)=\sum u_{l}^{m}(\xi)Y_{l}^{m},\hat{f}=\sum f_{l}^{m}( \xi)Y_{l}^{m}\). Restricted to \(\mathbf{S}_{l}\), the inverse \[\widehat{N}(\xi)^{-1}=\left(i\xi(i\xi-1)+\not{\Delta}\right)^{-1}=\left(i\xi( i\xi-1)-l(l+1)\right)^{-1}\] is meromorphic in \(\xi\) with simple poles given by \(\xi=-i(l+1),il\). Then in the inverse Mellin transform \[\chi u(\rho)=\frac{1}{2\pi}\int_{\operatorname{Im}\xi=-\ell-\frac{3}{2}+ \epsilon}\rho^{i\xi}(\widehat{N}(\xi))^{-1}\hat{f}(\xi)\,d\xi,\quad\epsilon>0, \quad-\ell-\frac{3}{2}+\epsilon\notin\mathbb{Z},\] we can shift the contour of integration through the pole at \(\xi=ik\) to \(\operatorname{Im}\xi=-\ell-\frac{5}{2}+\epsilon\) where \(k\in\mathbb{Z}\) and \(k\in(-\ell-\frac{3}{2}+\epsilon,-\ell-\frac{5}{2}+\epsilon)\), and the Residue Theorem gives \[\chi u(\rho)=\rho^{-k}u_{k}+\tilde{u}_{k},\quad\tilde{u}_{k}=\frac{1}{2\pi} \int_{\operatorname{Im}\xi=-\ell-\frac{5}{2}+\epsilon}\rho^{i\xi}(\widehat{N} (\xi))^{-1}\hat{f}(\xi)\,d\xi\in\mathcal{A}^{\ell+\frac{5}{2}-\epsilon}( \bar{X};\mathbb{C}) \tag{5.44}\] where \[u_{k}=\begin{cases}\sum_{-k\leq m\leq k}\frac{-1}{2k+1}f_{k}^{m}(ik)Y_{k}^{m}, &k\geq 0\\ \sum_{k+1\leq m\leq-k-1}\frac{1}{2k+1}f_{-k-1}^{m}(-ik-i)Y_{-k-1}^{m},&k\leq-1 \end{cases}.\] To deduce the full polyhomogeneous expansion for \(\chi u\), we plug the above partial expansion (5.44) into the equation (5.43) to obtain \[N\tilde{u}_{k}=N(\chi u)=f_{1}+f_{2},\quad f_{1}\in\rho^{-k+1}C^{\infty}( \partial_{+}\bar{X};\mathbb{C})\quad\text{and}\quad f_{2}\in\mathcal{A}^{ \ell+\frac{7}{2}-\epsilon}(\bar{X},\mathbb{C}).\] Now we solve the following two equations \(N\tilde{u}_{k}^{j}=f_{j},j=1,2\). First, using the Mellin transform as before, we see that \[\tilde{u}_{k}^{2}=\rho^{-k+1}u_{k+1}+\tilde{u}_{k}\quad where\quad u_{k+1} \in\begin{cases}&\mathbf{S}_{k-1},\quad k-1\geq 0\\ &\mathbf{S}_{-k},\quad k-1\leq-1\end{cases}\quad\text{and}\quad\tilde{u}_{k+1} \in\mathcal{A}^{\ell+\frac{7}{2}-\epsilon}(\bar{X};\mathbb{C}).\] As for \(u_{k}^{1}\), we can solve \(N\tilde{u}_{k}^{1}=f_{1}\) more directly. Since any function in \(C^{\infty}(\partial_{+}\bar{X})\) can be expressed as a linear combination of \(\mathbf{S}_{l}\), it suffices to solve \(N\tilde{u}_{k}^{1}=f_{1}\) for \(f_{1}\in\rho^{-k}\mathbf{S}_{l}\). More explicitly, \[u_{k}^{1}=\begin{cases}(k(k+1)-l(l+1))^{-1}f_{1},&k(k+1)\neq l(l+1)\\ -(2k+1)^{-1}f_{1}\ln\rho,&k(k+1)=l(l+1)\end{cases}.\] Therefore, we have \[\chi u=\rho^{-k}u_{k}+u_{k}^{1}+\rho^{-k+1}u_{k+1}+\tilde{u}_{k+1}.\] Plugging the partial expansion into the equation (5.43) and proceeding as above iteratively give the full polyhomogeneous expansion. * Proof of statement (2). Near \(\partial_{+}\bar{X}\), according to the elliptic estimate for b-pseudodifferential operator, we conclude that \(u\in\bar{H}_{\mathrm{b}}^{\infty,\ell}\) there. Away form spatial infinity \(\partial_{+}\bar{X}\), since \(u\) is \(0\) in the region \(r<r_{b}\), by Proposition 5.4, propagation of singularity estimate and elliptic estimate we see that \(u\) is smooth away from the radial points at event horizon. Finally, using the low regularity radial point estimate yields \(u\in\hat{H}_{\mathrm{b}}^{\frac{7}{2}-\ell}(\bar{X};\mathbb{C})\). 
The polyhomogeneous expansion of \(u\) near \(\partial_{+}\bar{X}\) can be obtained as in the proof of statement (1) by exploiting the normal operator argument. * Proof of statement (3). Since \(s+\ell>-\frac{1}{2}\), according to the high sc-decay estimate at the radial point \(R(\sigma)\), propagation of singularity estimate and elliptic estimate, we see that away from the radial point \(R(0)\), \(u\) is in \(\widehat{H}^{\infty,\ell}_{\mathrm{b}}(\bar{X};\mathbb{C})\). Near \(R(0)\) the low b-decay radial point estimate gives \(u\in\widehat{H}^{\infty,\min\{-\frac{1}{2}-,\ell\}}_{\mathrm{b}}(\bar{X}; \mathbb{C})\) at \(R(0)\). Therefore, we have \(u\in\widehat{H}^{\infty,\min\{-\frac{1}{2}-,\ell\}}_{\mathrm{b}}(\bar{X}; \mathbb{C})\). Noe since \(\sigma\neq 0\), we have \(\widehat{\square_{g_{b}}}(\sigma)=-2i\sigma\rho(\rho\partial_{\rho}-1)+\rho^ {2}\mathrm{Diff}^{2}_{\mathrm{b}}\), and thus \(N(\widehat{\square_{g_{b}}}(\sigma))=-2i\sigma\rho(\rho\partial_{\rho}-1)\). Then using the typical normal operator argument as in the proof of statement (1), we obtain that \(u=\rho C^{\infty}(\partial_{+}\bar{X};\bar{X})+\mathcal{A}^{2}(-\bar{X}; \mathbb{C})\). * Proof of \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}\) and \(\widehat{L_{b,\gamma}}(\sigma)\). Near the spatial infinity, we work in the standard coordinate trivialization of \(\widehat{\mathrm{sc}T^{*}}\bar{X}\) for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}\) (which is given in terms of the differentials \(dt,dx^{1},dx^{2},dx^{3}\) of the standard coordinates) and \(S^{2\sec\overline{T^{*}}\bar{X}}\oplus\widehat{\mathrm{sc}T^{*}}\bar{X}\) for \(\widehat{L_{b,\gamma}}(\sigma)\) (which is given in terms of the differentials \(dt^{2},dtdx^{i},dx^{i}dx^{j},dt,dx^{i}1\leq j\leq i\leq 3\) of the standard coordinates). Then according to Propositions 4.6 and 4.13, the normal operator of \(2\widehat{\mathcal{P}_{b,\gamma}}(\sigma),2\widehat{\mathcal{W}_{b,\gamma}}\) (resp. \(\widehat{L_{b,\gamma}}(\sigma)\)) is, \(-\rho^{2}\Big{(}\rho\partial_{\rho}(\rho\partial_{\rho}-1)+\not{\Delta}\Big{)}\) for \(\sigma=0\) and \(2i\sigma\rho(\rho\partial_{\rho}-1)\) for \(\sigma\neq 0\), tensored with \(4\times 4\) identity matrix (reps. \(14\times 14\) identity matrix). Therefore, the proof for these operators follows in a completely analogous manner as before. ### Semiclassical behavior of Kerr-Newman spacetimes Recall that \[\widehat{\square_{g}}(\sigma):=e^{it_{b,*}\sigma}\square_{g_{b}}e^{-it_{b,*} \sigma}.\] where the inverse metric \(g^{-1}\) is given by \[g^{-1}=\rho_{\mathrm{b}}^{-2}\bigg{(} \Delta_{b}\partial_{r}^{2}+\frac{\chi^{2}-1}{\Delta_{b}}\Big{(}( r^{2}+a^{2})\partial_{t_{\chi}}+a\partial_{\varphi}\Big{)}^{2}+\partial_{ \theta}^{2}+\frac{1}{\sin^{2}\theta}\Big{(}\partial_{\varphi}+a\sin^{2}\theta \partial_{t_{\chi}}\Big{)}^{2} \tag{5.45}\] \[+2\chi\Big{(}(r^{2}+a^{2})\partial_{t_{\chi}}+a\partial_{\varphi} \Big{)}\partial_{r}\bigg{)}.\] We introduce the semiclassically rescaled version of the operator \(\widehat{\square_{g}}(\sigma)\): \[h^{2}\widehat{\square_{g}}(h^{-1}z),\quad h=\frac{1}{|\sigma|}\quad\text{and} \quad z=\frac{\mathrm{Re}\,\sigma}{|\sigma|}+i\frac{\mathrm{Im}\,\sigma}{| \sigma|}. \tag{5.46}\] When \(\mathrm{Im}\,\sigma\) is bounded, we write \(z=z_{\mathrm{R}}+ihz_{\mathrm{I}}\) and parametrize \(h^{2}\widehat{\square_{g}}(h^{-1}z)\) by \(z_{\mathrm{R}}\) and \(z_{\mathrm{I}}=h^{-1}\mathrm{Im}\,z\) rather than \(\mathrm{Re}\,z\) and \(\mathrm{Im}\,z\). 
This still gives a family of operators in \(\mathrm{Diff}^{2}_{\mathrm{sc},h}\) which depends smoothly on \(z_{\mathrm{R}},z_{\mathrm{I}}\), and its semiclassical principal symbol does not depend on \(z_{\mathrm{I}}\). Therefore, when analyzing the semiclassical principal symbol of \(h^{2}\widehat{\square_{g}}(h^{-1}z)\), it suffices to consider the case \(z\in\mathbb{R}\). For \(z\in\mathbb{R}\), the semiclassical principal symbol of \(h^{2}\widehat{\square_{g}}(h^{-1}z)\) is given by \[p_{h,z}(\xi):=\sigma_{h}(h^{2}\widehat{\square_{g}}(h^{-1}z))=-g^{-1}(-zdt_{b,*}+ \xi,-zdt_{b,*}+\xi),\quad\xi\in\partial_{t_{b,*}}^{\perp}={}^{\mathrm{sc}}T^{* }\bar{X}. \tag{5.47}\] As before, away from \(\partial_{+}\bar{X}\), we write the scattering covectors as \[\xi:=\xi_{r}dr+\xi_{\theta}d\theta+\xi_{\varphi}d\varphi.\] While near \(\partial_{+}\bar{X}\), we write them as \[\xi:=\xi_{\rho}\frac{d\rho}{\rho^{2}}+\xi_{\theta}\frac{d\theta}{\rho}+\xi_{ \varphi}\frac{d\varphi}{\rho},\quad\rho=\frac{1}{r}.\] #### 5.4.1. Characteristic set We first study the characteristic set \[\Sigma_{h}:=\{\langle\xi\rangle^{-2}p_{h,z}(\xi)=0\}\subset\overline{{}^{ \mathrm{sc}}T^{*}\bar{X}},\quad\langle\xi\rangle=\sqrt{1+\xi_{r}^{2}+\xi_{ \theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2}}.\] We will show that for \(z\in\mathbb{R}\setminus 0\), it splits into two components. To achieve this, we make use of an auxiliary covector \(d\mathfrak{t}\) which is timelike everywhere on \(\bar{X}\). **Lemma 5.8** (Splitting of the characteristic set).: _There exist closed \(z\)-dependent sets \(\Sigma_{h}^{\pm}\subset\overline{{}^{\rm sc}T^{*}\bar{X}}\) such that_ \[\{\langle\xi\rangle^{-2}p_{h,z}(\xi)=0\}=\Sigma_{h}^{+}\sqcup\Sigma_{h}^{-}, \quad z\in\mathbb{R}\setminus 0. \tag{5.48}\] _Moreover,_ \[\pm\langle\xi\rangle^{-1}g^{-1}(-zdt_{b,*}+\xi,d{\mathfrak{t}})>0\quad\text{on} \quad\Sigma_{h}^{\pm}\quad\text{for}\quad z\in\mathbb{R}\setminus 0. \tag{5.49}\] _Finally,_ \[(\Sigma_{h}^{\pm}\cap{}^{\rm sc}S^{*}\bar{X})\cap\{\Delta_{b}-a^{ 2}\sin^{2}\theta>0\}=\emptyset\quad\text{for}\quad z\in\mathbb{R}; \tag{5.50}\] \[\Sigma_{h}^{\pm}\cap\{\Delta_{b}-a^{2}\sin^{2}\theta>0\}= \emptyset\quad\text{for}\quad\mp z>0. \tag{5.51}\] Proof.: We put \[\Sigma_{h}^{\pm}:=\{\langle\xi\rangle^{-2}p_{h,z}(\xi)=0\}\cap\{\pm\langle \xi\rangle^{-1}g^{-1}(-zdt_{b,*}+\xi,d{\mathfrak{t}})\geq 0\}.\] It is clear that \(\Sigma_{h}^{\pm}\) are closed and their union is the characteristic set. In order to show that \(\Sigma_{h}^{+}\cap\Sigma_{h}^{-}=\emptyset\) and (5.49) for \(z\in\mathbb{R}\setminus 0\), we need \[\{\langle\xi\rangle^{-2}p_{h,z}(\xi)=0\}\cap\{\langle\xi\rangle^{-1}g^{-1}(- zdt_{b,*}+\xi,d{\mathfrak{t}})=0\}=\emptyset\quad\text{for}\quad z\in \mathbb{R}\setminus 0.\] On the fiber infinity \({}^{\rm sc}S^{*}\bar{X}\), \(\langle\xi\rangle^{-1}g^{-1}(\xi,d{\mathfrak{t}})=0\) implies that \(\langle\xi\rangle^{-1}\xi\) is spacelike or \(0\) as \(d{\mathfrak{t}}\) is timelike. If the intersection were not empty at \({}^{\rm sc}S^{*}\bar{X}\), \(\langle\xi\rangle^{-1}\xi\) must be \(0\), which is impossible. Also, the fact that \(\langle\xi\rangle^{-2}p_{h,z}(\xi)=\langle\xi\rangle^{-2}p(\xi)=0\) gives (5.50). In the interior \({}^{\rm sc}T^{*}\bar{X}\), if the intersection were not empty, since \(d{\mathfrak{t}}\) is timelike, \(g^{-1}(-zdt_{b,*}+\xi,d{\mathfrak{t}})=0\) implies that \(-zdt_{b,*}+\xi\) must be spacelike or \(0\). 
Then the equality \(g^{-1}(-zdt_{b,*}+\xi,-zdt_{b,*}+\xi)=0\) forces \(-zdt_{b,*}+\xi\) to be \(0\), which cannot happen for \(\xi\in\partial_{t_{b,*}}^{\perp}\) and \(z\in\mathbb{R}\setminus 0\). On the region \(\{\Delta_{b}-a^{2}\sin^{2}\theta>0\}\), we see that \(g^{-1}(\xi,\xi)>0\) for \(0\neq\xi\in{}^{\rm sc}T^{*}\bar{X}\). For any \(p\in\{\Delta_{b}-a^{2}\sin^{2}\theta>0\}\), let \(\{d{\mathfrak{t}},X_{1},X_{2},X_{3}\}\) be an orthogonal basis of \(T^{*}_{p}\bar{M}\) with \(G|_{p}(X_{i},X_{i})=1\). We write \[-zdt_{b,*}+\xi=(a_{0}-z)d{\mathfrak{t}}+\sum_{i=1}^{3}a_{i}X_{i},\] and then \(g^{-1}(-zdt_{b,*}+\xi,-zdt_{b,*}+\xi)=0\) gives \((a_{0}-z)^{2}G(d{\mathfrak{t}},d{\mathfrak{t}})+\sum_{i=1}^{3}a_{i}^{2}=0\). Since \(-zdt_{b,*}+\xi+zd{\mathfrak{t}}\in T^{*}_{p}\bar{X}\), it follows that \(g^{-1}(-zdt_{b,*}+\xi+zd{\mathfrak{t}},-zdt_{b,*}+\xi+zd{\mathfrak{t}})=a_{0}^{2}G(d{\mathfrak{t}},d{\mathfrak{t}})+\sum_{i=1}^{3}a_{i}^{2}>0\). Therefore, we have \(z^{2}-2a_{0}z>0\). Then \[\pm g^{-1}(-zdt_{b,*}+\xi,d{\mathfrak{t}})=\pm(a_{0}-z)g^{-1}(d{\mathfrak{t}},d{\mathfrak{t}})=\pm(a_{0}-\frac{z}{2}-\frac{z}{2})g^{-1}(d{\mathfrak{t}},d{\mathfrak{t}})>0\quad\text{for}\quad\pm z>0.\] This finishes the proof of (5.51). #### 5.4.2. The radial points at event horizon Now in the fiber radial compactification \(\overline{{}^{\rm sc}T^{*}\bar{X}}\) of \({}^{\rm sc}T^{*}\bar{X}\), we use the following coordinates near the fiber infinity \({}^{\rm sc}S^{*}\bar{X}\), \[\rho_{\xi}=(\xi_{r}^{2}+\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2})^{-\frac{1}{2}}\in[0,\infty),\quad\hat{\xi}=(\hat{\xi}_{r},\hat{\xi}_{\theta},\hat{\xi}_{\varphi})=\rho_{\xi}(\xi_{r},\xi_{\theta},\xi_{\varphi}). \tag{5.52}\] We note that \(\rho_{\xi}\) is a boundary defining function of \(\overline{{}^{\rm sc}T^{*}\bar{X}}\), i.e., \(\rho_{\xi}=0\) at \({}^{\rm sc}S^{*}\bar{X}\), and \[\hat{\xi}\in{}^{\rm sc}S^{*}\bar{X}=\{\hat{\xi}_{r}^{2}+\hat{\xi}_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\hat{\xi}_{\varphi}^{2}=1\}.\] We now analyze the behavior of the Hamiltonian flow \(\exp(s\rho_{\xi}H_{p_{h,z}})\) on the characteristic set \(\Sigma_{h}=\Sigma_{h}^{+}\cup\Sigma_{h}^{-}\), concentrating on the behavior away from \(\partial_{+}\bar{X}\). With \(t_{b,*}=t_{\chi}+c(r)\), we write \[p_{h,z}=-\rho_{b}^{-2}\Big{(}\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)+\frac{\chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}}{\Delta_{b}}\big{)}^{2}-\frac{1}{\Delta_{b}}\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}^{2}+\tilde{p}_{h,z}\Big{)} \tag{5.53}\] where \[\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin^{2}\theta z)^{2}. 
\tag{5.54}\] Then we calculate \[H_{p_{h,z}}=-\rho_{b}^{-2}\bigg{(}\Big{(}2\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)\big{)}+2\chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}\Big{)}\partial_{r}+\Big{(}2a\frac{\chi^{2}-1}{\Delta_{b}}\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}+2\chi a\big{(}\xi_{r}-c^{\prime}(r)z\big{)}\Big{)}\partial_{\varphi}\] \[\qquad\qquad-\frac{\partial(-\rho_{b}^{2}p_{h,z})}{\partial r}\partial_{\xi_{r}}+H_{\tilde{p}_{h,z}}\bigg{)}-\rho_{b}^{-2}p_{h,z}H_{\rho_{b}^{2}}\] where \[\frac{\partial(-\rho_{b}^{2}p_{h,z})}{\partial r}=2\Delta_{b}\Big{(}\xi_{r}-zc^{\prime}(r)+\frac{\chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}}{\Delta_{b}}\Big{)}\partial_{r}\Big{(}\xi_{r}-zc^{\prime}(r)+\frac{\chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}}{\Delta_{b}}\Big{)}\] \[\qquad+\frac{\partial\Delta_{b}}{\partial r}\big{(}\xi_{r}-zc^{\prime}(r)+\frac{\chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}}{\Delta_{b}}\big{)}^{2}-\partial_{r}\Big{(}\frac{1}{\Delta_{b}}\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}^{2}\Big{)}\] In the coordinates (5.52) near the fiber infinity \({}^{\rm sc}S^{*}\bar{X}\), we compute \[\rho_{\xi}H_{p_{h,z}}=-\rho_{b}^{-2}\bigg{(} \Big{(}2\Delta_{b}\big{(}\hat{\xi}_{r}-\rho_{\xi}zc^{\prime}(r)\big{)}-2\rho_{\xi}\chi(r^{2}+a^{2})z+2\chi a\hat{\xi}_{\varphi}\Big{)}\partial_{r} \tag{5.55}\] \[+\Big{(}2a\frac{\chi^{2}-1}{\Delta_{b}}\big{(}a\hat{\xi}_{\varphi}-\rho_{\xi}(r^{2}+a^{2})z\big{)}+2\chi a\big{(}\hat{\xi}_{r}-\rho_{\xi}c^{\prime}(r)z\big{)}\Big{)}\partial_{\varphi}\] \[-\rho_{\xi}\frac{\partial(-\rho_{b}^{2}p_{h,z})}{\partial r}\partial_{\xi_{r}}+\rho_{\xi}H_{\tilde{p}_{h,z}}\bigg{)}-\rho_{b}^{-2}(\rho_{\xi}^{2}p_{h,z})\rho_{\xi}^{-1}H_{\rho_{b}^{2}}.\] We let \[L_{\pm}:=\{\rho_{\xi}=0,\hat{\xi}_{r}=\pm 1,(\hat{\xi}_{\theta},\hat{\xi}_{\varphi})=0,\Delta_{b}=0\}\subset\Big{(}\Sigma_{h}^{\pm}\cap{}^{\rm sc}S^{*}\bar{X}\Big{)}.\] Now we describe the properties of the sets \(L_{\pm}\). **Lemma 5.9**.: \(L_{\pm}\) _are invariant under the Hamiltonian flow \(\exp(s\rho_{\xi}H_{p_{h,z}})\) on the characteristic set \(\Sigma_{h}\). Moreover, \(L_{+}\) is a radial sink and \(L_{-}\) is a radial source for the flow \(\exp(s\rho_{\xi}H_{p_{h,z}})\)._ Proof.: We rewrite \[L_{\pm}=\{\Delta_{b}=0,\rho_{\xi}=0,\hat{\xi}_{r}=\pm 1,\rho_{\xi}^{2}\tilde{p}_{h,z}=0\}.\] Using the coordinates (5.52), the expression (5.55) and the facts that \(c(r)=0,\chi=1\) near \(L_{\pm}\), we find that near \(L_{\pm}\) \[\rho_{\xi}H_{p_{h,z}}\rho_{\xi} =-\rho_{b}^{-2}\Big{(}\frac{\partial\Delta_{b}}{\partial r}\hat{\xi}_{r}^{3}-4r\rho_{\xi}\hat{\xi}_{r}^{2}z+H_{\tilde{p}_{h,z}}\rho_{\xi}\Big{)}\rho_{\xi}-\rho_{b}^{-2}(\rho_{\xi}^{2}p_{h,z})\rho_{\xi}^{-1}H_{\rho_{b}^{2}}\rho_{\xi}, \tag{5.56}\] \[\rho_{\xi}H_{p_{h,z}}\hat{\xi}_{\varphi} =(H_{p_{h,z}}\rho_{\xi})\hat{\xi}_{\varphi},\] \[\rho_{\xi}H_{p_{h,z}}\Delta_{b} =-2\rho_{b}^{-2}\frac{\partial\Delta_{b}}{\partial r}\Big{(}\Delta_{b}\hat{\xi}_{r}-\rho_{\xi}(r^{2}+a^{2})z+a\hat{\xi}_{\varphi}\Big{)},\] \[\rho_{\xi}H_{p_{h,z}}(\rho_{\xi}^{2}\tilde{p}_{h,z}) =2(H_{p_{h,z}}\rho_{\xi})\rho_{\xi}^{2}\tilde{p}_{h,z}-\rho_{b}^{-2}(\rho_{\xi}^{2}p_{h,z})\rho_{\xi}H_{\rho_{b}^{2}}\tilde{p}_{h,z}.\] This implies that \(\rho_{\xi}H_{p_{h,z}}\) is tangent to \(L_{\pm}\), so \(L_{\pm}\) are invariant under the flow \(\exp(s\rho_{\xi}H_{p_{h,z}})\). 
Moreover, in a neighborhood of \(L_{\pm}=\{\Delta_{b}=0,\rho_{\xi}=0,\hat{\xi}_{r}=\pm 1,\rho_{\xi}^{2}\tilde{p}_{h,z}=0\} \subset\{\rho_{\xi}^{2}p_{h,z}=0\}\), we have \[\pm\rho_{\xi}H_{p}\rho_{\xi}^{2} \leq-C_{0}\rho_{\xi}^{2}, \tag{5.57}\] \[\pm\rho_{\xi}H_{p}\hat{\xi}_{\varphi}^{2} \leq-C_{0}\hat{\xi}_{\varphi}^{2},\] \[\pm\rho_{\xi}H_{p}\Delta_{b}^{2} \leq-C_{0}\Delta_{b}^{2}+C_{1}(\rho_{\xi}^{2}+\hat{\xi}_{\varphi}^{2})\] \[\pm\rho_{\xi}H_{p}(\rho_{\xi}^{2}\tilde{p}) \leq-C_{0}\rho_{\xi}^{2}\tilde{p}+C_{1}(\rho_{\xi}^{2}+\Delta_{b}^{2}+\hat{\xi}_{\varphi}^{2})\] where \(C_{0},C_{1}>0\). It follows from (5.4) that in a sufficiently small neighborhood of \(L_{\pm}\) \[\pm\rho_{\xi}H_{p}f\leq-Cf,\quad f=\rho_{\xi}^{2}+\hat{\xi}_{\varphi}^{2}+C_{2}\Delta_{b}^{2}+C_{3}\rho_{\xi}^{2}\tilde{p}\geq 0 \tag{5.58}\] for some \(C,C_{2},C_{3}>0\). Proceeding as in the proof of Lemma 5.2 finishes the proof. Finally, since \(H_{p_{h,z}}\rho_{\xi}=H_{p}\rho_{\xi}\) and \[\begin{split}\rho_{\xi}\sigma_{h}\Big{(}h^{-1}\mathrm{Im}\,h^{2}\widehat{\square_{g_{b}}}(h^{-1}z)\Big{)}&=\mathrm{Im}\,(h^{-1}z)\rho_{b}^{-2}\Big{(}2(r^{2}+a^{2})\hat{\xi}_{r}+2a\hat{\xi}_{\varphi}\Big{)}\\ &=\rho_{\xi}\sigma_{1}\Big{(}\mathrm{Im}\,\widehat{\square_{g_{b}}}(h^{-1}z)\Big{)}\quad\text{at}\quad L_{\pm},\end{split} \tag{5.59}\] the calculation of the threshold regularity at \(L_{\pm}\) in the microlocal case applies here as well. #### 5.4.3. The radial points at spatial infinity \(\partial_{+}\bar{X}\) We now turn to the analysis near \(\partial_{+}\bar{X}\). Recall that near \(\partial_{+}\bar{X}\), the semiclassical principal symbol of \(h^{2}\widehat{\square_{g}}(h^{-1}z)\) for \(z\in\mathbb{R}\) is given by \[\begin{split}p_{h,z}(\xi)&=-g^{-1}(-zdt_{b,*}+\xi,-zdt_{b,*}+\xi)\\ &=-(\xi_{\rho}-z)^{2}+z^{2}-\tilde{p}+\rho\sum_{\begin{subarray}{c}0\leq j,k,m\leq 2\\ 0\leq j+k+m\leq 2\end{subarray}}a_{jkm}(\rho,\theta,\varphi)\xi_{\rho}^{j}\xi_{\theta}^{k}\xi_{\varphi}^{m},\quad\tilde{p}=\Big{(}\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2}\Big{)}\end{split} \tag{5.60}\] where \(\xi=\xi_{\rho}\frac{d\rho}{\rho^{2}}+\xi_{\theta}\frac{d\theta}{\rho}+\xi_{\varphi}\frac{d\varphi}{\rho}\) and \(a_{jkm}\in\mathcal{A}^{0}(\bar{X})\). Then the scattering Hamiltonian vector field associated to \(p_{h,z}\) in \(\rho>0\) is given by \[{}^{\mathrm{sc}}H_{p_{h,z}}=\rho\Big{(}(-2\xi_{\rho}+2z)\rho\partial_{\rho}+2\tilde{p}\,\partial_{\xi_{\rho}}+(-2\xi_{\rho}+2z)\sum_{\mu=\theta,\varphi}\xi_{\mu}\partial_{\xi_{\mu}}-H_{\tilde{p}}+\rho V\Big{)},\quad V\in\mathcal{V}_{\mathrm{b}}({}^{\mathrm{sc}}T^{*}\bar{X}),\] 
Proceeding as in the proof of Lemma 5.2, it follows that uniformly in \((\rho,\theta,\varphi,\xi)\in U^{\varepsilon}_{\pm}\cap\Sigma_{h}\), one has \[\phi_{s}(\rho,\theta,\varphi,\xi)\to\{\rho=0,\tilde{p}=0\}\cap(U^{\varepsilon} _{\pm}\cap\Sigma_{h})=R(\frac{z}{2}\pm\frac{|z|}{2})\quad\text{as}\quad s\to\pm\infty.\] Finally, since \(p_{h,z}\) at \(\partial_{+}\bar{X}\) equals to \(p_{\text{sc}}(z)\), the calculation of the threshold scattering decay order at the radial points in the microlocal case applies here as well. #### 5.4.4. Structure of the trapped sets We now study the structure of the trapped sets for KN spacetime with \(|\mathbf{a}|^{2}+\mathbf{Q}^{2}<\mathbf{m}^{2}\). First, we define and investigate the incoming and outgoing tails and the trapped set associated to the Hamiltonian flow \(\exp(s^{\text{sc}}H^{2,0}_{p_{h,z}})\) where \(p_{h,z}\) is the semiclassical symbol given by \[p_{h,z}=-\rho_{b}^{-2}\Big{(}\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)+\frac{ \chi\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}}{\Delta_{b}}\big{)}^{2}-\frac {1}{\Delta_{b}}\big{(}a\xi_{\varphi}-(r^{2}+a^{2})z\big{)}^{2}+\tilde{p}_{h,z} \Big{)}\] where \[\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin ^{2}\theta z)^{2}.\] To define them, we need an escape function. **Proposition 5.11**.: _Let \(f(r)\in C^{\infty}([r_{-},\infty))\) be defined as_ \[f(r)=\frac{\Delta_{b}(r)}{r^{4}}. \tag{5.64}\] _Then there exists \(\delta_{0}=\delta_{0}(\mathbf{m},\mathbf{a},\mathbf{Q})>0\) such that for \((r,\theta,\varphi,\xi)\in\overline{{}^{\text{sc}}T^{*}\bar{X}}\setminus(L^{\pm }\cup\partial_{+}\bar{X})\) and \(z\in\mathbb{R}\setminus 0\)_ \[f(r)<\delta_{0},\quad\langle\xi\rangle^{-2}p_{h,z}(r,\theta,\varphi,\xi)=0, \quad\langle\xi\rangle^{-1}H_{p_{h,z}}f(r,\theta,\varphi,\xi)=0\Longrightarrow (\langle\xi\rangle^{-1}H_{p_{h,z}})^{2}f(r,\theta,\varphi,\xi)<0. \tag{5.65}\] Proof.: We first show (5.65) on the fiber infinity \({}^{\text{sc}}S^{*}\bar{X}\) where \(\langle\xi\rangle^{-1}=0\). According to (5.50) in Lemma 5.8, we know that \[\{\langle\xi\rangle^{-2}p_{h,z}=0\}\cap{}^{\text{sc}}S^{*}\bar{X}\subset\{r \geq r_{-}\mid\Delta_{b}\leq a^{2}\sin^{2}\theta\}\subset\{r_{-}\leq r\leq \mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}.\] A direct calculation gives \[f^{\prime}(r)=\frac{r\partial_{r}\Delta_{b}(r)-4\Delta_{b}(r)}{r^{5}}=-\frac{2 r^{2}-6\mathbf{m}r+4a^{2}+4\mathbf{Q}^{2}}{r^{5}}>0\qquad\text{when}\quad r\in[r_{-}, \mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}].\] Therefore, \(\langle\xi\rangle^{-1}H_{p_{h,z}}f=0\) implies \(\langle\xi\rangle^{-1}H_{p_{h,z}}r=0\), that is, \[\langle\xi\rangle^{-1}H_{p_{h,z}}r=-\rho_{b}^{-2}\langle\xi\rangle^{-1}\Big{(} 2\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)\big{)}-2\chi(r^{2}+a^{2})z+2\chi a \xi_{\varphi}\Big{)}=0\quad\text{on}\quad{}^{\text{sc}}S^{*}\bar{X}\cap\Sigma_ {h}.\] This is equivalent to \[\Delta_{b}\hat{\xi}_{r}=-\chi a\hat{\xi}_{\varphi}\] where \(\hat{\xi}\) is defined as in (5.52). 
Plugging this back to \(\langle\xi\rangle^{-2}p_{h,z}=0\) yields \[\langle\xi\rangle^{-2}p_{h,z}=-\rho_{b}^{-2}\Big{(}-\Delta_{b}\hat{\xi}_{r}^{ 2}+\frac{\chi^{2}-1}{\Delta_{b}}a^{2}\hat{\xi}_{\varphi}^{2}+\hat{\xi}_{\theta }^{2}+\frac{1}{\sin^{2}\theta}\hat{\xi}_{\varphi}^{2}\Big{)}=0.\] Since \(\chi=1\) on \([r_{-},r_{b}]\), it follows that \[\{\langle\xi\rangle^{-2}p_{h,z}=0,\langle\xi\rangle^{-1}H_{p_{h,z}}f=0\}\cap{ }^{\text{sc}}S^{*}\bar{X}=L_{\pm}.\] On \(\{\langle\xi\rangle^{-2}p_{h,z}=0,\langle\xi\rangle^{-1}H_{p_{h,z}}f=0,r>r_{b} \}\cap{}^{\text{sc}}S^{*}\bar{X}\), we calculate \[(\langle\xi\rangle^{-1}H_{p_{h,z}})^{2}f=-2\rho_{b}^{-2}f^{\prime}(r)\Delta_{b }\langle\xi\rangle^{-2}H_{p_{h,z}}\xi_{r}=-\frac{2f^{\prime}(r)}{\rho_{b}^{4} \Delta_{b}}\frac{\partial\Delta_{b}}{\partial r}a^{2}\hat{\xi}_{\varphi}^{2}<0\] where we use the fact that \(\langle\xi\rangle^{-1}H_{p_{h,z}}\langle\xi\rangle^{-1}=0\) on \({}^{\text{sc}}S^{*}\bar{X}\). We next prove that in the region \(r_{-}\leq r\leq r_{b}\) where \(f(r)\leq 0\), \(\langle\xi\rangle^{-2}p_{h,z}\) and \(\langle\xi\rangle^{-1}H_{p_{h,z}}f\) cannot vanish at the same time in the interior \({}^{\text{sc}}T^{*}\bar{X}\) (i.e. away from the fiber infinity). Suppose that \(H_{p_{h,z}}f=0\) and \(p_{h,z}=0\) in \(r_{-}\leq r\leq r_{b}\). Then we have \[H_{p_{h,z}}f=-\rho_{b}^{-2}f^{\prime}(r)\Big{(}2\Delta_{b}\big{(}\xi_{r}-zc^{ \prime}(r)\big{)}-2\chi(r^{2}+a^{2})z+2\chi a\xi_{\varphi}\Big{)}=0.\] Since \(f^{\prime}(r)>0\) when \(r\in[r_{-},r_{b}]\), \(H_{p_{h,z}}f=0\) implies \[\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)\big{)}=\chi(r^{2}+a^{2})z-\chi a\xi_{ \varphi}.\] Plugging this into \(p_{h,z}=0\) and using the fact that \(\chi=1\) in \([r_{-},r_{b}]\) yield \[-\rho_{b}^{-2}\Big{(}-\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)\big{)}^{2}+\tilde {p}_{h,z}\Big{)}=0\quad\text{where}\quad\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{ 1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin^{2}\theta z)^{2},\] and thus \[\Delta_{b}(\xi_{r}-c^{\prime}(r)z)=0,\quad\xi_{\theta}=0,\quad\xi_{\varphi}=a \sin^{2}\theta z\quad\text{in}\quad r\in[r_{-},r_{b}].\] Therefore, for \(z\in\mathbb{R}\setminus 0\) \[H_{p_{h,z}}f=\rho_{b}^{-2}f^{\prime}(r)(2r^{2}z+2a^{2}\cos^{2}\theta z)\neq 0 \quad\text{in}\quad r\in[r_{-},r_{b}],\] which is a contradiction. We now turn to the region \(r>r_{b}\) where \(f>0\). We return to the analysis of \[f^{\prime}(r)=-\frac{2r^{2}-6\mathbf{m}r+4a^{2}+4\mathbf{Q}^{2}}{r^{5}}.\] It is clear that \(f^{\prime}(r)<0\) when \(r>3\mathbf{m}\) and \(f^{\prime}(r)>0\) when \(r_{b}=\mathbf{m}+\sqrt{\mathbf{m}^{2}-a^{2}-\mathbf{Q}^{2}}\leq r<\frac{3 \mathbf{m}+\sqrt{9\mathbf{m}^{2}-8a^{2}-8\mathbf{Q}^{2}}}{2}\), so in the region \(f(r)<\delta\) with \(\delta=f(3\mathbf{m})\), we have \(f^{\prime}(r)\neq 0\). 
Therefore, in the region \(f(r)<\delta=f(3\mathbf{m})\), \(H_{p_{h,z}}f=0\) is equivalent to \(H_{p_{h,z}}r=0\), which is given by \[\Delta_{b}\big{(}\xi_{r}-c^{\prime}(r)z\big{)}-\chi\big{(}(r^{2}+a^{2})z-a\xi_{\varphi}\big{)}=0.\] Under the conditions that \(p_{h,z}=0\) and \(H_{p_{h,z}}f=0\), we calculate \[H_{p_{h,z}}^{2}f=-2\rho_{b}^{-2}f^{\prime}(r)\Delta_{b}H_{p_{h,z}}\xi_{r}=-\frac{2f^{\prime}(r)}{\rho_{b}^{4}\Delta_{b}}(a\xi_{\varphi}-(r^{2}+a^{2})z)\Big{(}\frac{\partial\Delta_{b}}{\partial r}(a\xi_{\varphi}-(r^{2}+a^{2})z)+4zr\Delta_{b}\Big{)}.\] Next, we let \[A=a\xi_{\varphi}-(r^{2}+a^{2})z,\quad B=\xi_{\varphi}-a\sin^{2}\theta z.\] Then we have \(z=-\rho_{b}^{-2}(A-aB)\) and \[H_{p_{h,z}}^{2}f=-\frac{2f^{\prime}(r)}{\rho_{b}^{6}\Delta_{b}}A\Big{(}\frac{\partial\Delta_{b}}{\partial r}A\rho_{b}^{2}-4r\Delta_{b}(A-aB)\Big{)}=-\frac{2f^{\prime}(r)}{\rho_{b}^{6}\Delta_{b}}\Big{(}\big{(}\frac{\partial\Delta_{b}}{\partial r}\rho_{b}^{2}-4r\Delta_{b}\big{)}A^{2}+4ar\Delta_{b}AB\Big{)}.\] Using \(p_{h,z}=0\) and \(H_{p_{h,z}}f=0\), we obtain \[p_{h,z}=-\rho_{b}^{-2}\Big{(}-\frac{1}{\Delta_{b}}A^{2}+\xi_{\theta}^{2}+\frac{B^{2}}{\sin^{2}\theta}\Big{)}=0\] and thus \[A^{2}\geq\Delta_{b}B^{2} \tag{5.66}\] where we use the fact that \(B=\xi_{\varphi}=0\) when \(\sin\theta=0\). We also note that \(p_{h,z}=0\) and \(z\in\mathbb{R}\setminus 0\) imply \(A\neq 0\) since otherwise \(B=0\) and \(z=0\). We now calculate \[\big{(}\frac{\partial\Delta_{b}}{\partial r}\rho_{b}^{2}-4r\Delta_{b}\big{)}A^{2}+4ar\Delta_{b}AB <-2r(r^{2}-3\mathbf{m}r+(a^{2}+2\mathbf{Q}^{2}))A^{2}+4|a|r\Delta_{b}|AB|\] \[<-2r(r^{2}-3\mathbf{m}r+(a^{2}+2\mathbf{Q}^{2}))A^{2}+4\mathbf{m}r\Delta_{b}^{1/2}A^{2}\] \[<-2r(r^{2}-5\mathbf{m}r+(a^{2}+2\mathbf{Q}^{2}))A^{2}\] where we use \(|a|<\mathbf{m}\) and \(A^{2}\geq\Delta_{b}B^{2}\) in the second step and \(\Delta_{b}<r^{2}\) for \(r\geq r_{b}\) in the third step. As a consequence, \(H_{p_{h,z}}^{2}f<0\) when \(r>5\mathbf{m}\). We now assume that \(r_{b}<r\leq 5\mathbf{m}\) and \(f(r)<\delta^{\prime}\). Then we have \[\Delta_{b}(r) =r^{4}f(r)\leq(5\mathbf{m})^{4}f(r)<(5\mathbf{m})^{4}\delta^{\prime}\] \[\partial_{r}\Delta_{b} =2r-2\mathbf{m}>2\sqrt{\mathbf{m}^{2}-a^{2}-\mathbf{Q}^{2}}.\] By choosing \(\delta^{\prime}>0\) sufficiently small such that \[2\mathbf{m}\sqrt{\mathbf{m}^{2}-a^{2}-\mathbf{Q}^{2}}-4(5\mathbf{m})^{4}\delta^{\prime}-4\mathbf{m}(5\mathbf{m})^{2}\sqrt{\delta^{\prime}}\geq 0,\] we conclude that on \(\{r_{b}<r\leq 5\mathbf{m}\}\cap\{f(r)<\delta^{\prime}\}\) \[r^{5}f^{\prime}(r) =r\partial_{r}\Delta_{b}-4\Delta_{b}>2\mathbf{m}\sqrt{\mathbf{m}^{2}-a^{2}-\mathbf{Q}^{2}}-4(5\mathbf{m})^{4}\delta^{\prime}\geq 0,\] \[\big{(}\frac{\partial\Delta_{b}}{\partial r}\rho_{b}^{2}-4r\Delta_{b}\big{)}A^{2}+4ar\Delta_{b}AB \geq r\big{(}\frac{\partial\Delta_{b}}{\partial r}r-4\Delta_{b}-4\mathbf{m}\Delta_{b}^{1/2}\big{)}A^{2}\] \[>\mathbf{m}\big{(}2\mathbf{m}\sqrt{\mathbf{m}^{2}-a^{2}-\mathbf{Q}^{2}}-4(5\mathbf{m})^{4}\delta^{\prime}-4\mathbf{m}(5\mathbf{m})^{2}\sqrt{\delta^{\prime}}\big{)}A^{2}\geq 0,\] and thus \(H^{2}_{p_{h,z}}f<0\). Then letting \(\delta_{0}=\min\{f(5\mathbf{m}),\delta^{\prime}\}\) finishes the proof. We now define the outgoing/incoming tails and the trapped set (see [38, Definition 2.1], [40, Definition 6.1]). Recall that \[{}^{\mathrm{sc}}\!H^{2,0}_{p_{h,z}}=\rho^{-1}\langle\xi\rangle^{-1}\,{}^{\mathrm{sc}}\!H_{p_{h,z}},\quad\rho=r^{-1}.\] **Definition 5.12**.: Let \(f\) be the function defined as in Proposition 5.11. 
Let \((r,\theta,\varphi,\xi)\in{}^{\mathrm{sc}}\!T^{*}\!\bar{X}\) and \(\phi(s)=\exp(s{}^{\mathrm{sc}}\!H^{2,0}_{p_{h,z}})(r,\theta,\varphi,\xi)\) be a maximally extended integral curve with domain of definition \(I\subset\mathbb{R}\) on \(\Sigma_{h}=\{\langle\xi\rangle^{-2}p_{h,z}=0\}\subset{}^{\mathrm{sc}}\!T^{*}\!\bar{X}\). Let \(s_{-}=\inf I,s_{+}=\sup I\). We say that \(\phi(s)\) is trapped as \(s\to\pm\infty\), if there exist \(\delta>0\) and \(T>0\) such that \(f(\phi(s))>\delta\) for all \(\pm s\geq T\) (as a consequence, \(s_{\pm}=\pm\infty\)). We define the _incoming tail_ \(\Gamma_{-}\) and the _outgoing tail_ \(\Gamma_{+}\) as \[\Gamma_{\mp}=\{(r,\theta,\varphi,\xi)\in{}^{\mathrm{sc}}\!T^{*}\!\bar{X}\mid\phi(s)=\exp(s{}^{\mathrm{sc}}\!H^{2,0}_{p_{h,z}})(r,\theta,\varphi,\xi)\text{ is trapped as }s\to\pm\infty\}.\] Then we define the _trapped set_ as \(K:=\Gamma_{-}\cap\Gamma_{+}\). It follows from the definition that \(\Gamma_{\pm},K\) are invariant under the flow \(\exp(s{}^{\mathrm{sc}}\!H^{2,0}_{p_{h,z}})\). We next study the properties of \(\Gamma_{\pm}\) and \(K\). We need the following lemmas. **Lemma 5.13**.: _For \(z\in\mathbb{R}\setminus 0\), we have_ \[\pm\langle\xi\rangle^{-1}H_{p_{h,z}}r>0\quad\text{on}\quad(\Sigma_{h}^{\pm}\cap\{r_{-}\leq r\leq r_{b}\})\setminus L_{\pm}. \tag{5.67}\] Proof.: We first consider the case \(r_{-}\leq r<r_{b}\) and finite \(\xi\). Since \(\chi=1,c^{\prime}(r)=0\) on \(\{r_{-}\leq r<r_{b}\}\), we write \(p_{h,z}=0\) as \[p_{h,z}=-\rho_{b}^{-2}\Big{(}\Delta_{b}\xi_{r}^{2}-2(r^{2}+a^{2})\xi_{r}z+2a\xi_{r}\xi_{\varphi}+\tilde{p}_{h,z}\Big{)}=0\quad\text{where}\quad\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin^{2}\theta z)^{2}.\] This is a quadratic equation in \(\xi_{r}\) with positive discriminant on \(\{r_{-}\leq r<r_{b}\}\) since \(\Delta_{b}<0\) there. As a consequence, it has two distinct roots \(\xi_{r}^{-}<0<\xi_{r}^{+}\) and \(\pm\partial_{\xi_{r}}p_{h,z}>0\) at \(\xi_{r}^{\pm}\). By Lemma 5.8, \(\Sigma_{h}^{\pm}\) is characterized by the conditions \(\pm g^{-1}(-zdt_{\chi}+\xi,d\mathfrak{t})>0\) (because \(t_{b,*}=t_{\chi}\) in the region \(r_{-}\leq r\leq r_{b}\)), that is, with \(\mathfrak{t}=t_{\chi}+b(r)\) (according to the definition of \(\mathfrak{t}\) in (4.16), we see that \(b^{\prime}(r)\leq 0\) in the region \(r_{-}\leq r\leq r_{b}\)), we have \[\pm\xi_{r}>\pm\frac{b^{\prime}(r)\big{(}(r^{2}+a^{2})z-a\xi_{\varphi}\big{)}+a\big{(}a\sin^{2}\theta z-\xi_{\varphi}\big{)}}{r^{2}+a^{2}+b^{\prime}(r)\Delta_{b}(r)}. \tag{5.68}\] If \(g^{-1}(-zdt_{\chi}+\xi,d\mathfrak{t})=0\), then \(-zdt_{\chi}+\xi\) is a nonzero spacelike vector and thus \(p_{h,z}(\xi)=-g^{-1}(-zdt_{\chi}+\xi,-zdt_{\chi}+\xi)<0\). That is, substituting the right-hand side of (5.68) into \(p_{h,z}\) yields a negative number. 
Therefore, \[(r,\theta,\varphi,\xi_{r}^{\pm},\xi_{\theta},\xi_{\varphi})\in\Sigma_{h}^{\pm}\quad\text{and}\quad\pm\langle\xi\rangle^{-1}H_{p_{h,z}}r=\pm\langle\xi\rangle^{-1}\partial_{\xi_{r}}p_{h,z}>0\quad\text{at}\quad\xi_{r}^{\pm}.\] In case of fiber infinity \({}^{\mathrm{sc}}S^{*}\bar{X}\cap\{r_{-}\leq r<r_{b}\}\), using the coordinates (5.52), we write \[\rho_{\xi}^{2}p_{h,z}=-\rho_{b}^{-2}\Big{(}\Delta_{b}\hat{\xi}_{r}^{2}+2a\hat{\xi}_{r}\hat{\xi}_{\varphi}+\hat{\xi}_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\hat{\xi}_{\varphi}^{2}\Big{)}\] and \[\rho_{\xi}g^{-1}(-zdt_{\chi}+\xi,d\mathfrak{t})=\rho_{b}^{-2}\Big{(}\big{(}r^{2}+a^{2}+b^{\prime}(r)\Delta_{b}(r)\big{)}\hat{\xi}_{r}+\big{(}a(1+b^{\prime}(r))\big{)}\hat{\xi}_{\varphi}\Big{)}.\] Then the proof proceeds as in the finite \(\xi\) case. It remains to consider the case \(r=r_{b}\) where \(\Delta_{b}=0\). For finite \(\xi\), we write \[p_{h,z}=-\rho_{b}^{-2}\Big{(}-2(r^{2}+a^{2})\xi_{r}z+2a\xi_{r}\xi_{\varphi}+\tilde{p}_{h,z}\Big{)}=0\quad\text{where}\quad\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin^{2}\theta z)^{2}.\] Since \(a\xi_{\varphi}-(r^{2}+a^{2})z\neq 0\) on \(\Sigma_{h}\) (otherwise \(a^{2}\sin^{2}\theta=r^{2}+a^{2}\) at \(r_{b}\), which is impossible), the right-hand side of \(p_{h,z}\) is a linear equation in \(\xi_{r}\) with nonzero leading coefficient and thus has only one root. Again, by Lemma 5.8, \(\Sigma_{h}^{\pm}\) is characterized by the conditions \(\pm g^{-1}(-zdt_{\chi}+\xi,d\mathfrak{t})>0\), i.e., \[\pm\xi_{r}>\pm\frac{-\big{(}a(1+b^{\prime}(r))\big{)}\xi_{\varphi}+z\big{(}a^{2}\sin^{2}\theta+(r^{2}+a^{2})b^{\prime}(r)\big{)}}{r^{2}+a^{2}}. \tag{5.69}\] Similarly, substituting the right-hand side of (5.69) into \(p_{h,z}\) yields a negative number. Therefore, we have either \[(r,\theta,\varphi,\xi_{r},\xi_{\theta},\xi_{\varphi})\in\Sigma_{h}^{+}\quad\text{and}\quad\langle\xi\rangle^{-1}H_{p_{h,z}}r=\langle\xi\rangle^{-1}\partial_{\xi_{r}}p_{h,z}>0\quad\text{at}\quad\xi_{r},\] or \[(r,\theta,\varphi,\xi_{r},\xi_{\theta},\xi_{\varphi})\in\Sigma_{h}^{-}\quad\text{and}\quad\langle\xi\rangle^{-1}H_{p_{h,z}}r=\langle\xi\rangle^{-1}\partial_{\xi_{r}}p_{h,z}<0\quad\text{at}\quad\xi_{r}.\] In case of fiber infinity \({}^{\rm sc}S^{*}\bar{X}\cap\{r=r_{b}\}\), using the coordinates (5.52), we write \[\rho_{\xi}^{2}p_{h,z}=-\rho_{b}^{-2}\Big{(}2a\hat{\xi}_{r}\hat{\xi}_{\varphi}+\hat{\xi}_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\hat{\xi}_{\varphi}^{2}\Big{)}.\] Since \(\hat{\xi}_{\varphi}\neq 0\) on \((\{\langle\xi\rangle^{-2}p_{h,z}=0,r=r_{b}\}\cap{}^{\rm sc}S^{*}\bar{X})\setminus L_{\pm}\), the right-hand side of \(\rho_{\xi}^{2}p_{h,z}\) is a linear equation in \(\hat{\xi}_{r}\) with nonzero leading coefficient and thus has only one root. Then the proof proceeds as in the finite \(\xi\) case. **Lemma 5.14**.: _Let \(z\in\mathbb{R}\setminus 0\). Let \(\delta_{0}>0\) and \(f(r)\) be defined as in Proposition 5.11. Let \((r,\theta,\varphi,\xi)\in\overline{{}^{\rm sc}T^{*}\bar{X}}\setminus(L_{\pm}\cup\partial_{+}\bar{X})\) and \(\phi(s)=\exp(s\,{}^{\rm sc}H_{p_{h,z}}^{2,0})(r,\theta,\varphi,\xi)\) be a maximally extended flow on \(\Sigma_{h}=\{\langle\xi\rangle^{-2}p_{h,z}=0\}\)._ 1. _If_ \(f(\phi(s_{0}))<\delta_{0}\) _and_ \(\pm\partial_{s}f(\phi(s))\big|_{s=s_{0}}\leq 0\) _for some_ \(\pm s_{0}\geq 0\)_, then_ \(f(\phi(s))<\delta_{0}\) _and_ \(\pm\partial_{s}f(\phi(s))<0\) _for_ \(\pm s>s_{0}\) _and_ \(\phi(s)\) _is not trapped as_ \(\pm s\to\infty\)_._ 2. 
_If_ \(\phi(s)\) _is not trapped as_ \(\pm s\to\infty\)_, then either_ \(f(\phi(s))<\delta_{0}\) _and_ \(\pm\partial_{s}f(\phi(s))\leq 0\) _for all sufficiently large_ \(\pm s\)_, or_ \(\phi(s)\to L_{\pm}\) _(i.e._ \(f(s)\to 0\)_) as_ \(\pm s\to\infty\)_._ _Moreover,_ 3. _If_ \(\phi(s)\) _is not trapped as_ \(s\to\infty\)_, then for_ \(\phi(s)\subset\Sigma_{h}^{-}\)_,_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at some finite time_ \(s_{0}<\infty\) _or_ \(\rho(\phi(s))\to 0\) _as_ \(s\to\infty\)_, while for_ \(\phi(s)\subset\Sigma_{h}^{+}\)_,_ \(\phi(s)\to L_{+}\) _or_ \(\rho(\phi(s))\to 0\) _as_ \(s\to\infty\)_._ 4. _If_ \(\phi(s)\) _is not trapped as_ \(s\to-\infty\)_, then for_ \(\phi(s)\subset\Sigma_{h}^{-}\)_,_ \(\phi(s)\to L_{-}\) _or_ \(\rho(\phi(s))\to 0\) _as_ \(s\to-\infty\)_, while for_ \(\phi(s)\subset\Sigma_{h}^{+}\)_,_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at some finite time_ \(s_{0}>-\infty\) _or_ \(\rho(\phi(s))\to 0\) _as_ \(s\to-\infty\)_._ Proof.: We only prove the case \(\partial_{s}f(\phi(s))\leq 0\) as the other case \(-\partial_{s}f(\phi(s))\leq 0\) can be handled similarly. We put \[f(s)=f(\phi(s)),\quad\dot{f}(s)=\partial_{s}f(\phi(s))={}^{\rm sc}H_{p_{h,z}}^{2,0}f(\phi(s)),\quad\ddot{f}(s)=\partial_{s}^{2}f(\phi(s))=({}^{\rm sc}H_{p_{h,z}}^{2,0})^{2}f(\phi(s)).\] Let \(s>s_{0}\). If \(f(s)\) attains its minimum on the interval \([s_{0},s]\) at some point \(s_{1}\in(s_{0},s)\), then \[f(s_{1})<\delta_{0},\quad\dot{f}(s_{1})=0,\quad\ddot{f}(s_{1})\geq 0,\] which contradicts (5.65). If \(f(s)\) attains its minimum value at \(s_{0}\), then \(\dot{f}(s_{0})\geq 0\), and thus \(\dot{f}(s_{0})=0\). Again, it follows from (5.65) that \(s_{0}\) is a strict local maximum point, which is a contradiction. Therefore, \(f(s)\) must attain its minimum value at \(s\) and \(\dot{f}(s)<0\) (otherwise \(\dot{f}(s)=0\) and then by (5.65), \(s\) is a strict local maximum point, which is a contradiction), showing that \[f(s)<\delta_{0},\quad\dot{f}(s)<0\quad\text{for all}\quad s>s_{0}.\] Suppose \(\phi(s)\) is trapped as \(s\to\infty\). Then there exist \(\delta>0\) and \(T>0\) such that \(f(s)>\delta\) for all \(s\geq T\). Since \(\{f(r)\geq\delta\}\cap{}^{\rm sc}T^{*}\bar{X}\) is a compact set, we can take a sequence \(s_{j}\to\infty\) such that \(\phi(s_{j})\) converges to some \((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\in\{\delta\leq f(r)\leq f(s_{0})<\delta_{0}\}\). Since \(f(s)\) is nonincreasing and bounded for all \(s\geq s_{0}\), \(\lim_{s\to\infty}f(s)\) exists. Since \(\ddot{f}(s)\), \(\dot{f}(s)\) are bounded for all \(s\geq\max\{s_{0},T\}\), by Barbalat's Lemma [8], we have \({}^{\rm sc}H_{p_{h,z}}^{2,0}f((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty}))=({}^{\rm sc}H_{p_{h,z}}^{2,0})^{2}f((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty}))=0\). This contradicts (5.65). We now prove statements (2), (3) and (4). We only prove the case \(\phi(s)\) is not trapped as \(s\to\infty\) in detail as the other case can be handled similarly. If \(\phi(s)\subset\{f(r)\leq 0\}\) for all \(s\geq 0\), then \(\phi(s)\subset\{r_{-}\leq r\leq r_{b}\}\) since \(\phi(0)\notin\partial_{+}\bar{X}\). 
For the case \(\phi(s)\subset\Sigma_{h}^{-}\), since \(\phi(0)\notin L_{-}\), it follows from Lemma 5.13 that \(\phi(s)\) crosses \(\partial_{-}\bar{X}\) into the inward direction of decreasing \(r\) at finite time and thus there must exist \(s_{0}>0\) such that \(f(s_{0})<\delta_{0},\dot{f}(s_{0})<0\). Proceeding as in the proof of (1), we see that \(f(s)<\delta_{0},\dot{f}(s)\leq 0\) for all \(s\geq s_{0}\). While for the case \(\phi(s)\subset\Sigma_{h}^{+}\), we claim that \(\phi(s)\to L_{+}\) as \(s\to\infty\). Otherwise, there exists a sufficiently small neighborhood \(U_{+}\) of \(L_{+}\) such that \(\phi(s)\subset\Sigma_{h}^{+}\setminus U_{+}\) for all \(s>0\). By Lemma 5.13, there exists \(\delta>0\) such that \[\langle\xi\rangle^{-1}H_{p_{h,z}}r\geq\delta\quad\text{on}\quad(\Sigma_{h}^{+}\cap\{r_{-}\leq r\leq r_{b}+\delta\})\setminus U_{+} \tag{5.70}\] (Otherwise, if (5.70) does not hold for any positive \(\delta\), then we can find a sequence \(\{(r_{j},\theta_{j},\varphi_{j},\xi_{j})\}\) such that \(r_{-}\leq r_{j}\leq r_{b}+\frac{1}{j}\) and \(\langle\xi\rangle^{-1}H_{p_{h,z}}r((r_{j},\theta_{j},\varphi_{j},\xi_{j}))<\frac{1}{j}\). Let \(j\to\infty\), and then we obtain a limit point \[(r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\in(\Sigma_{h}^{+}\cap\{r_{-}\leq r\leq r_{b}\})\setminus L_{+}\] such that \(\langle\xi\rangle^{-1}H_{p_{h,z}}r((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty}))\leq 0\), which contradicts Lemma 5.13). It follows from (5.70) that \(\phi(s)\subset\{r\geq r_{b}+\delta\}\) for all large enough \(s\), which contradicts the assumption that \(\phi(s)\subset\{r_{-}\leq r\leq r_{b}\}\). On the other hand, suppose there exists \(s_{0}\geq 0\) such that \(f(s_{0})>0\). Since \(\phi(s)\) is not trapped as \(s\to\infty\), we can find \(s_{2}>s_{1}>0\) such that \(f(s_{2})<f(s_{1})<\delta_{0}\). Then arguing as in the proof of (1), we see that \(f(s)\) attains its minimum value on the interval \([s_{1},s_{2}]\) at the point \(s_{2}\), and thus \(\dot{f}(s_{2})\leq 0\). Proceeding again as in the proof of (1), we see that \(f(s)<\delta_{0},\dot{f}(s)\leq 0\) for all \(s\geq s_{2}\). Also, since \(\phi(s)\) is not trapped as \(s\to\infty\), it follows that either \(\rho(\phi(s))\to 0\) as \(s\to\infty\), or \(\phi(s)\) enters the region \(\{r\leq r_{b}+\delta\}\) for any \(\delta>0\) in finite time. Using (5.70), we find that \(\phi(s)\to L_{+}\) as \(s\to\infty\) if \(\phi(s)\subset\Sigma_{h}^{+}\). If \(\phi(s)\subset\Sigma_{h}^{-}\), suppose that \(\phi(s)\) never crosses \(\partial_{-}\bar{X}\) for \(s\geq 0\). Then \(\phi(s)\) is well defined for all \(s\geq 0\). Since \((\rho,\theta,\varphi,\xi)\notin L_{-}\), there exists a neighborhood \(V_{-}\) of \(L_{-}\) such that \((\rho,\theta,\varphi,\xi)\notin V_{-}\). Since \(L_{-}\) is a radial source, there exist a neighborhood \(U_{-}\) of \(L_{-}\) and \(T>0\) such that \[\exp(s\,{}^{\mathrm{sc}}H^{2,0}_{p_{h,z}})(U_{-})\subset V_{-}\quad\text{for all}\quad s\leq-T.\] Since \((\rho,\theta,\varphi,\xi)\notin V_{-}\), it follows that \(\phi(s)\notin U_{-}\) for all \(s\geq T\). Proceeding as in the proof of (5.70) yields \[-\langle\xi\rangle^{-1}H_{p_{h,z}}r\geq\delta\quad\text{on}\quad(\Sigma_{h}^{-}\cap\{r_{-}\leq r\leq r_{b}+\delta\})\setminus U_{-}\] for some \(\delta>0\). This forces \(r(\phi(s))>r_{b}+\delta\) for all \(s>T\) and thus \(\rho(\phi(s))\to 0\) as \(s\to\infty\). According to Lemma 5.14, it is clear that \(K\subset\{f(r)\geq\delta_{0}\}\subset\{r>r_{b}\}\). 
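The sign structure of the escape function \(f(r)=\Delta_{b}(r)/r^{4}\) used throughout the arguments above can be checked numerically. The following sketch is illustrative only and not part of the paper; the parameter values \((\mathbf{m},\mathbf{a},\mathbf{Q})\) are hypothetical subextremal choices. It verifies that \(f\) vanishes at \(r_{b}\), increases strictly up to its unique critical radius \(r=\big(3\mathbf{m}+\sqrt{9\mathbf{m}^{2}-8(a^{2}+\mathbf{Q}^{2})}\big)/2\), and decreases strictly beyond it, so that in particular \(f^{\prime}(r)<0\) for \(r\geq 3\mathbf{m}\), as exploited in the proof of Proposition 5.11.

```python
import numpy as np

# Hypothetical subextremal Kerr-Newman parameters: a^2 + Q^2 < m^2.
m, a, Q = 1.0, 0.5, 0.3
rb = m + np.sqrt(m**2 - a**2 - Q**2)          # event horizon radius r_b

Delta = lambda r: r**2 - 2*m*r + a**2 + Q**2  # Delta_b(r)
f = lambda r: Delta(r) / r**4                 # escape function of Proposition 5.11
fp = lambda r: -(2*r**2 - 6*m*r + 4*(a**2 + Q**2)) / r**5  # f'(r) = (r*Delta' - 4*Delta)/r^5

# Unique critical point of f in (r_b, infinity):
# the larger root of 2r^2 - 6mr + 4(a^2 + Q^2) = 0.
r_crit = (3*m + np.sqrt(9*m**2 - 8*(a**2 + Q**2))) / 2

r = np.linspace(rb + 1e-9, 20*m, 200_000)
assert abs(f(rb)) < 1e-12                      # f vanishes at the horizon
assert np.all(fp(r[r < r_crit - 1e-6]) > 0)    # f strictly increasing below r_crit
assert np.all(fp(r[r > r_crit + 1e-6]) < 0)    # f strictly decreasing above r_crit
assert np.all(fp(r[r >= 3*m]) < 0)             # in particular f' < 0 for r >= 3m
print(f"r_b = {rb:.6f}, critical radius = {r_crit:.6f}, max f = {f(r_crit):.6f}")
```

In particular, when \(a^{2}+\mathbf{Q}^{2}>0\) one has \(r_{\mathrm{crit}}<3\mathbf{m}\) and hence \(f(3\mathbf{m})<f(r_{\mathrm{crit}})\), so the sublevel set \(\{f<f(3\mathbf{m})\}\) avoids the critical radius; this is the observation behind the choice \(\delta=f(3\mathbf{m})\) in the proof of Proposition 5.11.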
Therefore, we now restrict our analysis to the region \(r>r_{b}\). Recall the calculation in the proof of Proposition 5.11: \[\Delta_{b}>0,\ H_{p_{h,z}}r=0\implies H_{p_{h,z}}^{2}r=2\rho_{b}^{-4}\Delta_{b}\partial_{r}\Big{(}\frac{(a\xi_{\varphi}-(r^{2}+a^{2})z)^{2}}{\Delta_{b}(r)}\Big{)}.\] We define \[F(r):=\frac{(a\xi_{\varphi}-(r^{2}+a^{2})z)^{2}}{\Delta_{b}(r)}. \tag{5.71}\] The key property of \(F(r)\) is given by **Proposition 5.15**.: _Let \(z\in\mathbb{R}\setminus 0\). For each \(r\in(r_{b},\infty)\), we have_ \[p_{h,z}=0,\ \partial_{r}F(r)=0\implies\partial_{r}^{2}F(r)>0. \tag{5.72}\] _Moreover, for each fixed \(\xi_{\varphi}\), there exists a unique \(r_{\xi_{\varphi}}\in(r_{b},\infty)\) such that \(\partial_{r}F(r_{\xi_{\varphi}})=0\)._ Proof.: Since \(p_{h,z}=0\), by the discussion around (5.66), we see that \(a\xi_{\varphi}-(r^{2}+a^{2})z\neq 0\). Then \[\partial_{r}F(r)=-\frac{(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}^{2}}\Big{(}\frac{\partial\Delta_{b}}{\partial r}(a\xi_{\varphi}-(r^{2}+a^{2})z)+4zr\Delta_{b}\Big{)}=0\] implies \[\partial_{r}^{2}F(r) =-\frac{(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}^{2}}\Big{(}\frac{\partial^{2}\Delta_{b}}{\partial r^{2}}(a\xi_{\varphi}-(r^{2}+a^{2})z)+2zr\frac{\partial\Delta_{b}}{\partial r}+4z\Delta_{b}\Big{)}\] \[=-\frac{8z^{2}r}{\Delta_{b}(\partial_{r}\Delta_{b})^{2}}\Big{(}2\frac{\partial^{2}\Delta_{b}}{\partial r^{2}}r\Delta_{b}-r(\partial_{r}\Delta_{b})^{2}-2\Delta_{b}\partial_{r}\Delta_{b}\Big{)}\] \[=\frac{32z^{2}r}{\Delta_{b}(\partial_{r}\Delta_{b})^{2}}\Big{(}r^{3}-3\mathbf{m}r^{2}+3\mathbf{m}^{2}r-\mathbf{m}(a^{2}+\mathbf{Q}^{2})\Big{)}>0\quad\text{on}\quad r_{b}<r<\infty.\] Since \(F(r)\to\infty\) as \(r\to r_{b}\) and \(r\to\infty\), \(F(r)\) must attain its global minimum value at some \(r_{\xi_{\varphi}}\in(r_{b},\infty)\) with \(\partial_{r}F(r_{\xi_{\varphi}})=0\). Moreover, the critical point of \(F\) is unique (otherwise, suppose that there are two points \(r_{1},r_{2}\in(r_{b},\infty)\) with \(r_{1}<r_{2}\) such that \(\partial_{r}F(r_{1})=\partial_{r}F(r_{2})=0\). Then let \(D=\{r_{1}<r<r_{2}\mid\partial_{r}F(r)>0\}\). Since \(\partial_{r}F(r_{1})=0,\partial_{r}^{2}F(r_{1})>0\), it follows that \(D\) is nonempty and we let \(d:=\sup D\). Since \(\partial_{r}F(r_{2})=0,\partial_{r}^{2}F(r_{2})>0\), we see that \(d\in(r_{1},r_{2})\). By continuity, we have \(\partial_{r}F(d)=0\). By the definition of \(d\), we have \(\partial_{r}^{2}F(d)\leq 0\), which is a contradiction). Let \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\in\Sigma_{h}\) and \((r(s),\theta(s),\varphi(s),\xi(s))=\exp(sH_{p_{h,z}})((r^{0},\theta^{0},\varphi^{0},\xi^{0}))\). Then we define \[\Phi^{0}(r):=\Phi(r;\theta^{0},\varphi^{0},\xi^{0}_{\theta},\xi^{0}_{\varphi})=F(r;\xi^{0}_{\varphi})-\tilde{p}_{h,z}(\theta^{0},\varphi^{0},\xi^{0}_{\theta},\xi^{0}_{\varphi}). \tag{5.73}\] Since \(\tilde{p}_{h,z}\) and \(\xi_{\varphi}\) are constants along the Hamiltonian flows \(\exp(sH_{p_{h,z}})\) on \(\Sigma_{h}\), it follows that \((r(s),\xi_{r}(s))\) is a Hamiltonian flow trajectory of \[H^{0}(r,\xi_{r}):=\Delta_{b}\big{(}\xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi^{0}_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}\big{)}^{2}-\Phi^{0}(r).\] In particular, \((r(s),\xi_{r}(s))\) solves the equation \(H^{0}(r,\xi_{r})=0\). 
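As a consistency check on Proposition 5.15 (a reduction not carried out in the paper, included here only as an illustrative sketch), one can specialize to the Schwarzschild case \(a=\mathbf{Q}=0\), where \(F(r)\) collapses to \(r^{4}z^{2}/\Delta_{b}(r)\) independently of \(\xi_{\varphi}\). The unique critical point \(r_{\xi_{\varphi}}\) is then the classical photon sphere \(r=3\mathbf{m}\), and it is a nondegenerate minimum, in agreement with (5.72).

```python
import sympy as sp

r, z, m = sp.symbols('r z m', positive=True)

# Schwarzschild reduction (a = Q = 0) of F(r) = (a*xi_phi - (r^2 + a^2)*z)^2 / Delta_b(r):
Delta = r**2 - 2*m*r
F = (r**2 * z)**2 / Delta

# Critical points of F on (r_b, infinity) = (2m, infinity); discard the spurious root r = 0.
crit = [c for c in sp.solve(sp.together(sp.diff(F, r)), r) if c != 0]
assert crit == [3*m], crit          # unique critical point: the photon sphere r = 3m

# F''(3m) = 18 z^2 > 0, so r = 3m is a nondegenerate minimum.
d2F = sp.diff(F, r, 2).subs(r, 3*m)
assert sp.simplify(d2F - 18*z**2) == 0
print(crit, sp.simplify(d2F))
```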
**Proposition 5.16**.: _In the region \(r>r_{b}\), the outgoing tail \(\Gamma^{+}\) and incoming tail \(\Gamma^{-}\) are given by_ \[\Gamma^{\pm}=\Big{\{}(r,\theta,\varphi,\xi)\,|\,p_{h,z}=0,\ \xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}=\mp\mathrm{sgn}\,(r-r_{\xi_{\varphi}})\sqrt{\frac{\Phi(r;\theta,\varphi,\xi_{\theta},\xi_{\varphi})}{\Delta_{b}}}\Big{\}} \tag{5.74}\] _where_ \[\Phi(r;\theta,\varphi,\xi_{\theta},\xi_{\varphi})=F(r;\xi_{\varphi})-\tilde{p}_{h,z}(\theta,\varphi,\xi_{\theta},\xi_{\varphi})\] _and \(r_{\xi_{\varphi}}\) is the only solution to the equations \(\Phi(r;\theta,\varphi,\xi_{\theta},\xi_{\varphi})=0\) and \(\partial_{r}\Phi(r;\theta,\varphi,\xi_{\theta},\xi_{\varphi})=0\). Moreover, \(\Gamma^{\pm}\) are smooth codimension \(1\) submanifolds of \(\Sigma_{h}\) intersecting transversely, and their intersection is the trapped set_ \[K=\Big{\{}(r,\theta,\varphi,\xi)\,|\,p_{h,z}=0,\ r=r_{\xi_{\varphi}},\ \xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}=0\Big{\}}, \tag{5.75}\] _which is a smooth codimension \(2\) submanifold of \(\Sigma_{h}\)._ Proof.: Let \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\in\Sigma_{h}\) and \((r(s),\theta(s),\varphi(s),\xi(s))=\exp(sH_{p_{h,z}})((r^{0},\theta^{0},\varphi^{0},\xi^{0}))\). By Proposition 5.15, \(\partial_{r}\Phi^{0}(r_{\xi^{0}_{\varphi}})=0\). If \(\Phi^{0}(r_{\xi^{0}_{\varphi}})\neq 0\), we have \(|\Phi^{0}(r(s))|+|\partial_{r}\Phi^{0}(r(s))|>0\). Suppose that \((r(s),\theta(s),\varphi(s),\xi(s))\) is trapped as \(s\to\infty\). Since \((r(s),\xi_{r}(s))\) solves the equation \(H^{0}(r,\xi_{r})=0\), arguing as in the proof of the statement (1) of Lemma 5.14, we can find a sequence \(s_{j}\to\infty\) such that \((r(s_{j}),\xi_{r}(s_{j}))\) converges to some \((r_{\infty},(\xi_{r})_{\infty})\) which satisfies \(H^{0}(r_{\infty},(\xi_{r})_{\infty})=0,H_{H^{0}}r((r_{\infty},(\xi_{r})_{\infty}))=H_{H^{0}}\xi_{r}((r_{\infty},(\xi_{r})_{\infty}))=0\). This contradicts the fact that the Hamiltonian vector field associated to \(H^{0}\) is nonzero on the set \(H^{0}=0\). This proves that \((r(s),\theta(s),\varphi(s),\xi(s))\) is not trapped as \(s\to\infty\). Similarly, \((r(s),\theta(s),\varphi(s),\xi(s))\) escapes as \(s\to-\infty\) as well. As a result, \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\notin\Gamma^{\pm}\). On the other hand, we consider the case \(\Phi^{0}(r_{\xi^{0}_{\varphi}})=0\). We note that the set of solutions to the equation \(H^{0}(r,\xi_{r})=0\) is given by the union \(\Gamma^{+,0}\cup\Gamma^{-,0}\) where \[\Gamma^{\pm,0}=\Big{\{}\xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi^{0}_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}=\mp\mathrm{sgn}\,(r-r_{\xi^{0}_{\varphi}})\sqrt{\frac{\Phi(r;\theta^{0},\varphi^{0},\xi^{0}_{\theta},\xi^{0}_{\varphi})}{\Delta_{b}}}\Big{\}}.\] First, \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\in\Sigma_{h}\) is equivalent to \(H^{0}(r^{0},\xi^{0}_{r})=0\), i.e., \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\in\Sigma_{h}\) implies \((r^{0},\xi^{0}_{r})\in\Gamma^{+,0}\cup\Gamma^{-,0}\). Next, if \((r^{0},\xi^{0}_{r})\in\Gamma^{\mp,0}\), we see that \((r(s),\theta(s),\varphi(s),\xi(s))\) is trapped as \(s\to\pm\infty\). Therefore, we conclude that \((r(s),\theta(s),\varphi(s),\xi(s))\) is trapped as \(s\to\pm\infty\) if and only if \((r^{0},\xi^{0}_{r})\in\Gamma^{\mp,0}\). 
Then we obtain \[K=\Gamma^{+}\cap\Gamma^{-} =\Big{\{}(r,\theta,\varphi,\xi)\,|\,p_{h,z}=0,\ r=r_{\xi_{\varphi}},\ \xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}=0\Big{\}}\] \[=\Big{\{}(r,\theta,\varphi,\xi)\,|\,p_{h,z}=0,\ \partial_{r}F=0,\ H_{p_{h,z}}r=0\Big{\}}.\] We note that \(d(H_{p_{h,z}}r)\in\operatorname{span}\{dr,d\xi_{r},d\xi_{\varphi}\}\) and the coefficient of \(d\xi_{r}\) is \(1\). In view of (5.72), \(d(\partial_{r}F)\in\operatorname{span}\{dr,d\xi_{\varphi}\}\) and the coefficient of \(dr\) is positive on \(K\). Since \(H_{p_{h,z}}r=0\) and \(\partial_{r}F=0\) imply \(\partial_{\xi_{r}}p_{h,z}=0\) and \(\partial_{r}p_{h,z}=0\) respectively on \(K\), we have \(dp_{h,z}\in\operatorname{span}\{d\theta,d\xi_{\theta},d\xi_{\varphi}\}\). Therefore, \(dp_{h,z},d(\partial_{r}F)\) and \(d(H_{p_{h,z}}r)\) are linearly independent on \(K\) and \(K\) is a smooth codimension \(2\) submanifold of \(\Sigma_{h}\). For any \((r^{0},\theta^{0},\varphi^{0},\xi^{0})\in K\), since \(\Phi^{0}(r)=\frac{1}{2}|\partial_{r}^{2}\Phi^{0}(r_{\xi^{0}_{\varphi}})|(r-r_{\xi^{0}_{\varphi}})^{2}\) modulo higher order terms in \(r-r_{\xi^{0}_{\varphi}}\), the function \(\mathrm{sgn}\,(r-r_{\xi_{\varphi}})\sqrt{\Phi/\Delta_{b}}\) appearing in (5.74) is smooth near \(K\). Moreover, if \((r,\theta,\varphi,\xi)\in\Gamma^{\pm}\), then its projection onto \((\theta,\varphi,\xi_{\theta},\xi_{\varphi})\) must lie in the projection of \(K\) onto \((\theta,\varphi,\xi_{\theta},\xi_{\varphi})\). Therefore, \(\Gamma^{\pm}\) are smooth codimension \(1\) submanifolds of \(\Sigma_{h}\) intersecting transversely at \(K\). Since \(TK\) is in the kernel of \(d(H_{p_{h,z}}r)\) and \(d(\partial_{r}F)\), we see that the symplectic complement of \(TK\) in \(\Sigma_{h}\) is \((TK)^{\perp}=\operatorname{span}\{H_{H_{p_{h,z}}r},H_{\partial_{r}F}\}\). By a direct calculation, we see that \(H_{\partial_{r}F}(H_{p_{h,z}}r)=-2\Delta_{b}\partial_{r}^{2}F\neq 0\) on \(K\), which implies \(TK\cap(TK)^{\perp}=0\) and thus \(K\) is symplectic. Now we verify that the trapped set \(K\) is normally hyperbolic whose definition will be given below (see [40, §6.3]). **Definition 5.17**.: Let \(K\) be given as above, i.e., \(K\) is symplectic. Let \(\phi_{s}=\exp(sH_{p_{h,z}}):{}^{\rm sc}T^{*}\bar{X}\to{}^{\rm sc}T^{*}\bar{X}\) be the Hamiltonian flow and \(\varphi_{\pm}\) be the defining functions of \(\Gamma^{\pm}\). We say \(K\) is normally hyperbolic if there exist \(C,\nu>0\) such that for all \((r,\theta,\varphi,\xi)\in K\) \[\frac{d(\varphi_{\pm}\circ\phi_{\pm s})(r,\theta,\varphi,\xi)}{d\varphi_{\pm}(r,\theta,\varphi,\xi)}\leq Ce^{-\nu s},\quad s\geq 0. \tag{5.76}\] Define the _minimal expansion rate_ \(\nu_{\min}>0\) as the supremum of all values of \(\nu\) for which there exists a constant \(C\) such that (5.76) holds. **Proposition 5.18**.: _Let \(K\) be defined as in (5.75). Then \(K\) is normally hyperbolic._ Proof.: It suffices to prove that the minimal expansion rate \(\nu_{\min}\) defined above is positive. By (5.74), we have \[\Gamma^{\pm}=\{\varphi_{\pm}=\xi_{r}-zc^{\prime}(r)+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})z)}{\Delta_{b}}\pm\operatorname{sgn}\big{(}r-r_{\xi_{\varphi}}\big{)}\sqrt{\frac{\Phi(r;\theta,\varphi,\xi_{\theta},\xi_{\varphi})}{\Delta_{b}}}=0\}\] where \((\theta,\varphi,\xi_{\theta},\xi_{\varphi})\in K\). That is, \(\varphi_{\pm}\) are defining functions of \(\Gamma^{\pm}\). 
Since \(p_{h,z}=\rho_{b}^{-2}\Delta_{b}\varphi_{+}\varphi_{-}\), it follows that \[H_{p_{h,z}}\varphi_{\pm}=\mp\rho_{b}^{-2}\Delta_{b}(H_{\varphi_{+}}\varphi_{-})\varphi_{\pm}+(H_{\rho_{b}^{-2}\Delta_{b}}\varphi_{\pm})\varphi_{\mp}\varphi_{\pm}:=\mp\nu_{\pm}\varphi_{\pm}\] and \(\nu_{+}|_{K}=\nu_{-}|_{K}=\rho_{b}^{-2}\Delta_{b}(H_{\varphi_{+}}\varphi_{-})|_{K}=\nu\). Since \[\partial_{\xi_{r}}(H_{p_{h,z}}\varphi_{\pm})=\partial_{\xi_{r}}(\mp\nu_{\pm}\varphi_{\pm})=\mp\big{(}(\partial_{\xi_{r}}\nu_{\pm})\varphi_{\pm}+\nu_{\pm}\big{)},\] using Proposition 5.15 we find that \[\nu=-\partial_{\xi_{r}}(H_{p_{h,z}}\varphi_{+})|_{K}=\rho_{b}^{-2}\sqrt{2\Delta_{b}\partial_{r}^{2}F(r_{\xi_{\varphi}})}>0.\] For \((r,\theta,\varphi,\xi)\in K\), we calculate \[\partial_{s}d(\varphi_{\pm}\circ\phi_{\pm s})=\mp d((\nu_{\pm}\varphi_{\pm})\circ\phi_{\pm s})=\mp(\nu_{\pm}\circ\phi_{\pm s})d(\varphi_{\pm}\circ\phi_{\pm s}).\] Therefore, for all \(T>0\) and \((r,\theta,\varphi,\xi)\in K\), we have \[\frac{d(\varphi_{\pm}\circ\phi_{\pm T})(r,\theta,\varphi,\xi)}{d\varphi_{\pm}(r,\theta,\varphi,\xi)}=e^{-\langle\nu\rangle_{T}^{\pm}T},\quad\langle\nu\rangle_{T}^{\pm}=\frac{1}{T}\int_{0}^{T}\nu\circ\phi_{\pm s}(r,\theta,\varphi,\xi)\,ds.\] As a result, the minimal expansion rate is given by \[\nu_{\min}=\min\{\ \liminf_{T\to\infty}\inf_{(r,\theta,\varphi,\xi)\in K}\langle\nu\rangle_{T}^{+},\quad\liminf_{T\to\infty}\inf_{(r,\theta,\varphi,\xi)\in K}\langle\nu\rangle_{T}^{-}\ \}>0. \tag{5.77}\] Finally, we further determine the position of the trapped set \(K\). **Lemma 5.19**.: _For \(z\in\mathbb{R}\setminus 0\), let \(\Sigma_{h}^{\pm}\) be defined as in Lemma 5.8. Then \(K\subset\Sigma_{h}^{\pm}\) for \(\pm z>0\). Moreover, \(\Gamma^{+}\cup\Gamma^{-}\subset\Sigma_{h}^{\pm}\) for \(\pm z>0\)._ Proof.: First, according to (5.51) in Lemma 5.8, we have \[\Sigma_{h}^{\mp}\subset\{r\geq r_{-}\mid\Delta_{b}\leq a^{2}\sin^{2}\theta\}\subset\{r_{-}\leq r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}\quad\text{for}\quad\pm z>0\] and thus \[K\cap\{r>\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}\subset\Sigma_{h}^{\pm}\quad\text{for}\quad\pm z>0.\] Since \(c^{\prime}(r)=0,\chi=1\) on \(r_{-}\leq r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\), then for \[(r,\theta,\varphi,\xi)\in K\cap\{r_{-}\leq r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}=K\cap\{r_{b}<r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\},\] we have \[a\xi_{\varphi}-(r^{2}+a^{2})z=-\frac{4zr\Delta_{b}}{\partial_{r}\Delta_{b}}\quad\text{and}\quad\xi_{r}=\frac{4zr}{\partial_{r}\Delta_{b}},\] and thus \[\pm g^{-1}(-zdt_{b,*}+\xi,d\mathfrak{t}) =\pm\rho_{b}^{-2}\Big{(}\xi_{r}(r^{2}+a^{2}+b^{\prime}(r)\Delta_{b})+b^{\prime}(r)(a\xi_{\varphi}-(r^{2}+a^{2})z)+(a\xi_{\varphi}-a^{2}\sin^{2}\theta z)\Big{)}\] \[=\pm\frac{4zr}{\rho_{b}^{2}\partial_{r}\Delta_{b}}\Big{(}r^{2}+a^{2}-\Delta_{b}\Big{)}\pm z=\pm\frac{4zr}{\rho_{b}^{2}\partial_{r}\Delta_{b}}\Big{(}2\mathbf{m}r-\mathbf{Q}^{2}\Big{)}\pm z>0\quad\text{for}\quad\pm z>0.\] This implies that \[K\cap\{r_{-}\leq r\leq\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\}\subset\Sigma_{h}^{\pm}\quad\text{for}\quad\pm z>0,\] and thus proves \(K\subset\Sigma_{h}^{\pm}\) for \(\pm z>0\). In order to prove \(\Gamma^{+}\cup\Gamma^{-}\subset\Sigma_{h}^{\pm}\) for \(\pm z>0\), it suffices to show that if \((r,\theta,\varphi,\xi)\in\Gamma^{\pm}\), then \(\phi(s)=\exp(s^{\text{sc}}H^{2,0}_{p_{h,z}})(r,\theta,\varphi,\xi)\to K\) as \(s\to\mp\infty\). 
We only prove the case of \((r,\theta,\varphi,\xi)\in\Gamma^{-}\) as the other case can be handled in a similar manner. Suppose that \(\phi(s)\nrightarrow K\) as \(s\to\infty\). Then there exists a sequence \(s_{j}\to\infty\) and a neighborhood \(U\) of \(K\) such that \(\phi(s_{j})\notin U\) for all \(j\). Since \(\phi(s)\) is trapped as \(s\to\infty\) (i.e., there exist \(\delta,T>0\) such that \(f(\phi(s))>\delta\) for all \(s>T\)), passing to a subsequence we have \[\phi(s_{j})\to(r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\quad\text{for some}\quad(r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\notin K=\Gamma^{+}\cap\Gamma^{-}.\] By Lemma 5.14, we find that \(\Gamma^{-}\) is closed and thus \((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\in\Gamma^{-}\). It follows that \((r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\notin\Gamma^{+}\), which means that \(\phi(s)\) is not trapped as \(s\to-\infty\). Then there exists \(T_{1}>0\) such that \(f(\phi_{\infty}(-T_{1}))<\delta\) where \(\phi_{\infty}(s)=\exp(s^{\text{sc}}H^{2,0}_{p_{h,z}})(r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\). Since \(\phi(s_{j})\to(r_{\infty},\theta_{\infty},\varphi_{\infty},\xi_{\infty})\), it follows that \[\exp(-T_{1}\,{}^{\text{sc}}H^{2,0}_{p_{h,z}})(\phi(s_{j}))\to\phi_{\infty}(-T_{1}).\] Therefore, \(f(\phi(s_{j}-T_{1}))<\delta\) for all sufficiently large \(j\). But this contradicts the fact that \(\phi(s)\) is trapped as \(s\to\infty\). #### 5.4.5. Global dynamics of the Hamiltonian flow Now we shall analyze the global behavior of the flow of \({}^{\text{sc}}\!H^{2,0}_{p_{h,z}}:=\rho^{-1}\langle\xi\rangle^{-1}\,{}^{\text{sc}}\!H_{p_{h,z}}\) on the semiclassical characteristic set \(\Sigma_{h}=\{\langle\xi\rangle^{-2}p_{h,z}=0\}\). We need the following technical lemma. **Lemma 5.20**.: _For \(z\in\mathbb{R}\setminus 0\), let \((\rho,\theta,\varphi,\xi)\in\Sigma_{h}\) and we put \(\phi(s)=\exp(s^{\text{sc}}\!H^{2,0}_{p_{h,z}})(\rho,\theta,\varphi,\xi)\). If \(\rho(\phi(s))\to 0\) as \(s\to\pm\infty\), then one has_ \[\phi(s)\to R(\frac{z}{2}\pm\frac{|z|}{2})=\{\rho=0,\xi=(z\pm|z|)\frac{d\rho}{\rho^{2}}\}\quad\text{as}\quad s\to\pm\infty.\] Proof.: Recall that \[\langle\xi\rangle^{\text{sc}}\!H^{2,0}_{p_{h,z}}=(-2\xi_{\rho}+2z)\rho\partial_{\rho}+2\tilde{p}\partial_{\xi_{\rho}}+(-2\xi_{\rho}+2z)\sum_{\mu=\theta,\varphi}\xi_{\mu}\partial_{\xi_{\mu}}-H_{\tilde{p}}+\rho V,\quad V\in\mathcal{V}_{\text{b}}({}^{\text{sc}}T^{*}\bar{X}).\] Since \(p_{h,z}=0\) along the integral curves of \({}^{\text{sc}}\!H^{2,0}_{p_{h,z}}\), we calculate \[\langle\xi\rangle^{\text{sc}}\!H^{2,0}_{p_{h,z}}\Big{(}\frac{\xi_{\rho}-z}{\rho}\Big{)}=\langle\xi\rangle\frac{2(\xi_{\rho}-z)^{2}+2\tilde{p}+\rho a_{1}}{\rho}=\langle\xi\rangle\frac{z^{2}+\rho a_{2}}{\rho},\quad a_{1},a_{2}\in\mathcal{A}^{0}(\bar{X}).\] Since \(\rho(\phi(s))\to 0\) as \(s\to\pm\infty\), then for any \(\epsilon>0\), there exists \(T>0\) such that \(\rho(\phi(s))<\epsilon\) for all \(\pm s>T\). Picking \(\epsilon\) sufficiently small, we see that \({}^{\text{sc}}\!H^{2,0}_{p_{h,z}}\Big{(}\frac{\xi_{\rho}-z}{\rho}\Big{)}\geq c_{1}>0\) for all \(\pm s>T\). This implies that \((\xi_{\rho}-z)/\rho\to\pm\infty\) as \(s\to\pm\infty\); in particular \(\pm(\xi_{\rho}-z)\geq 0\) for all \(\pm s>T_{1}\) where \(T_{1}>0\) is a sufficiently large constant. 
According to Lemma 5.10, there exist neighborhoods \(V_{\pm}^{\varepsilon^{\prime}}=\{(\xi_{\rho}-(z\pm(\text{sgn}\,z)z))^{2}+\tilde{p}+\rho^{2}\leq\epsilon^{\prime}\}\) of \(R(\frac{z}{2}\pm\frac{|z|}{2})\) such that once \(\phi(s)\) enters \(V_{\pm}^{\varepsilon^{\prime}}\) at some time \(\pm s_{0}>0\), it will converge to \(R(\frac{z}{2}\pm\frac{|z|}{2})\) as \(\pm s\to\infty\). Suppose that \(\phi(s)\notin V_{\pm}^{\varepsilon^{\prime}}\) for all \(\pm s\geq 0\). Since \(p_{h,z}=0\) along the integral curves of \({}^{\text{sc}}\!H^{2,0}_{p_{h,z}}\), it follows that \[|\xi_{\rho}-z|\leq\delta_{1},\quad\tilde{p}>\delta_{2}\quad\text{for all}\quad\pm s>T_{2} \tag{5.78}\] for some \(\delta_{1},\delta_{2}>0,T_{2}>T_{1}\). Then for all \(\pm s>T_{2}\), we have \[{}^{\text{sc}}\!H^{2,0}_{p_{h,z}}(\xi_{\rho}-z)=\langle\xi\rangle^{-1}(2\tilde{p}+\rho a_{2})>c_{2}>0,\quad a_{2}\in\mathcal{A}^{0}(\bar{X}),\] which contradicts (5.78). Therefore, \(\phi(s)\) must enter \(V_{\pm}^{\varepsilon^{\prime}}\) at some time \(\pm s_{0}>0\) and this finishes the proof of the lemma. Now we are in a position to describe the global behavior of the flow of \({}^{\text{sc}}\!H_{p_{h,z}}^{2,0}:=\rho^{-1}\langle\xi\rangle^{-1}\,{}^{\text{sc}}\!H_{p_{h,z}}\) on the semiclassical characteristic set \(\Sigma_{h}=\{\langle\xi\rangle^{-2}p_{h,z}=0\}\). **Proposition 5.21**.: _Let \(\pm z>0\). Let \(\phi(s)=\exp(s^{\text{sc}}\!H_{p_{h,z}}^{2,0})(\rho,\theta,\varphi,\xi)\) be the maximally extended integral curve of \({}^{\text{sc}}\!H_{p_{h,z}}^{2,0}\) on \(\Sigma_{h}\) with the domain of definition \(s\in I\subset\mathbb{R}\). Let \(s_{-}=\inf I,s_{+}=\sup I\). (see Figure 10)._ 1. _Let_ \(z>0\)_._ 1. _If_ \(\phi(s)\subset\Sigma_{h}^{-}\)_, then either_ \(\phi(s)\subset L_{-}\)_, or_ \(\phi(s)\to L_{-}\) _as_ \(s\to s_{-}=-\infty\) _and_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at finite time_ \(s_{+}<\infty\)_._ 2. _If_ \(\phi(s)\subset\Sigma_{h}^{+}\)_, then either_ \(\phi(s)\subset L_{+}\cup R(z)\cup R(0)\cup K\)_, or_ \(\phi(s)\to L_{+}\cup R(z)\cup K\) _as_ \(s\to s_{+}=\infty\) _and_ \(\phi(s)\to R(0)\cup K\) _as_ \(s\to s_{-}=-\infty\) _or_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at finite time_ \(s_{-}>-\infty\)_. Moreover, if_ \((\rho,\theta,\varphi,\xi)\notin K\)_, then_ \(\phi(s)\) _cannot converge to_ \(K\) _in both the forward and backward direction._ 2. _Let_ \(z<0\)_._ 1. _If_ \(\phi(s)\subset\Sigma_{h}^{+}\)_, then either_ \(\phi(s)\subset L_{+}\)_, or_ \(\phi(s)\to L_{+}\) _as_ \(s\to s_{+}=\infty\) _and_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at finite time_ \(s_{-}>-\infty\)_._ 2. _If_ \(\phi(s)\subset\Sigma_{h}^{-}\)_, then either_ \(\phi(s)\subset L_{-}\cup R(z)\cup R(0)\cup K\)_, or_ \(\phi(s)\to L_{-}\cup R(z)\cup K\) _as_ \(s\to s_{-}=-\infty\) _and_ \(\phi(s)\to R(0)\cup K\) _as_ \(s\to s_{+}=\infty\) _or_ \(\phi(s)\) _crosses_ \(\partial_{-}\bar{X}\) _into the inward direction of decreasing_ \(r\) _at finite time_ \(s_{+}<\infty\)_. Moreover, if_ \((\rho,\theta,\varphi,\xi)\notin K\)_, then_ \(\phi(s)\) _cannot converge to_ \(K\) _in both the forward and backward direction._ Proof.: We only consider the case \(z>0\) as the other case \(z<0\) can be proved similarly. 
First, we note that \(L_{+},L_{-},R(0),R(z),K\) are invariant under the flow \(\phi(s)\) and if \((\rho,\theta,\varphi,\xi)\in\Gamma^{\pm}\), then \(\phi(s)\to K\) as \(\mp s\to\infty\). If \((\rho,\theta,\varphi,\xi)\notin L_{+}\cup L_{-}\cup R(0)\cup R(z)\cup K\), then either \((\rho,\theta,\varphi,\xi)\notin\Gamma^{-}\) or \((\rho,\theta,\varphi,\xi)\notin\Gamma^{+}\). If \((\rho,\theta,\varphi,\xi)\notin\Gamma^{-}\), by Lemma 5.14, for \(\phi(s)\subset\Sigma_{h}^{+}\), either \(\phi(s)\to L_{+}\) or \(\rho(\phi(s))\to 0\) as \(s\to\infty\), while for \(\phi(s)\subset\Sigma_{h}^{-}\), \(\phi(s)\) crosses \(\partial_{-}\bar{X}\) into the inward direction of decreasing \(r\) at some finite time \(s_{0}<\infty\). If \(\rho(\phi(s))\to 0\) as \(s\to\infty\), according to Lemma 5.20, we have \(\phi(s)\to R(z)\) as \(s\to\infty\). If \((\rho,\theta,\varphi,\xi)\notin\Gamma^{+}\), by Lemma 5.14, for \(\phi(s)\subset\Sigma_{h}^{+}\), either \(\rho(\phi(s))\to 0\) as \(s\to-\infty\) or \(\phi(s)\) crosses \(\partial_{-}\bar{X}\) into the inward direction of decreasing \(r\) at some finite time \(s_{0}>-\infty\), while for \(\phi(s)\subset\Sigma_{h}^{-}\), \(\phi(s)\to L_{-}\) as \(s\to-\infty\). If \(\rho(\phi(s))\to 0\) as \(s\to-\infty\), according to Lemma 5.20, we have \(\phi(s)\to R(0)\) as \(s\to-\infty\). ### High energy estimates In this section, we prove the high energy estimates first for operators \(h^{2}\widehat{\mathcal{P}_{b,\gamma}}(h^{-1}z)\) acting on scalar functions, then for \(h^{2}\widehat{\mathcal{P}_{b,\gamma}}(h^{-1}z),h^{2}\widehat{\mathcal{W}_{b,\gamma}}(h^{-1}z)\) acting on scattering 1-forms and the linearized gauge-fixed Einstein-Maxwell operator \(h^{2}\widehat{L_{b,\gamma}}(h^{-1}z)\), as well as their formal adjoints. To this end, we combine the global dynamics of the Hamiltonian flow of the semiclassical principal symbol \(p_{h,z}\) established in Proposition 5.21 with the elliptic estimate, the propagation of singularities estimate, the radial point estimate at the event horizon, the scattering radial point estimate at spatial infinity \(\partial_{+}\bar{X}\) and the hyperbolic estimate, all of which are in the semiclassical version, together with the estimate at normally hyperbolic trapping. #### 5.5.1. High energy estimates for scalar wave operators We first prove high energy estimates for the operator \(h^{2}\widehat{\square_{g_{b}}}(h^{-1}z)\) acting on scalar functions, as well as its formal adjoint with respect to the volume density \(L^{2}(\bar{X};\sqrt{\det|g_{b}|}drd\theta d\varphi)\). Figure 11. A phase space picture of the proof of the estimate (5.81), on the left, and (5.82), on the right. The coordinates and notations \(L_{\pm},R(0),R(z),\Gamma^{\pm},K\) are the same as in Figure 10. For (5.81), we use semiclassical hyperbolic estimates to control \(u\) via \(\chi u\); \(\chi u\) is controlled (modulo elliptic estimates) by \(B_{\pm},B_{1},B_{0},B_{z}\); \(B_{0}\) is controlled by \(E_{0}\) using low b-decay radial point estimates and \(E_{0}\) is again controlled by \(B_{+},B_{1},B_{z}\); \(B_{1}\) is controlled by \(B_{2}\) by using the normally hyperbolic trapping estimate and \(B_{2}\) is controlled by \(B_{+},B_{z}\); finally \(B_{\pm},B_{z}\) are controlled using high regularity radial point estimates at \(L_{\pm}\) and high sc-decay radial point estimates at \(R(z)\). 
For (5.82), we use semiclassical hyperbolic estimates to bound \((1-\chi)u\); \(\chi u\) is controlled (modulo elliptic estimates) by \(B_{\pm}^{\prime},B_{1}^{\prime},B_{0}^{\prime},B_{z}^{\prime}\); \(B_{\pm}^{\prime}\) is controlled by \(E_{\pm}^{\prime}\) using low regularity radial point estimates and \(E_{\pm}^{\prime}\) is controlled by \(B_{1}^{\prime},B_{0}^{\prime},1-\chi\); \(B_{z}^{\prime}\) is controlled by \(E_{z}^{\prime}\) using low sc-decay radial point estimates and \(E_{z}^{\prime}\) is controlled by \(B_{0}^{\prime},B_{1}^{\prime},1-\chi\); \(B_{1}^{\prime}\) is controlled by \(B_{2}^{\prime}\) using the normally hyperbolic trapping estimate and \(B_{2}^{\prime}\) is controlled by \(B_{+}^{\prime},1-\chi\); finally \(B_{0}^{\prime}\) is controlled using high b-decay radial point estimates at \(R(0)\). **Theorem 5.22**.: _Let \(b_{0}=(\mathbf{m}_{0},\mathbf{a}_{0},\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|+|\mathbf{a}_{0}|<\mathbf{m}_{0}\). Then there exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\) and for \(s>\frac{1}{2},\ \ell<-\frac{1}{2}\) with \(s+\ell>-\frac{1}{2}\), the following holds. For any fixed \(C_{1}>0\), there exist \(C_{2}>1,C>0\) which are independent of \(b\), such that for \(\sigma\in\mathbb{C},\ \mathrm{Im}\,\sigma\in[0,C_{1}],\ |\mathrm{Re}\,\sigma|>C_{2}\), we have_ \[\|u\|_{\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})}\leq C\|\widehat{\square_{g_{b}}}(\sigma)u\|_{\bar{H}^{s,\ell+1}_{\mathrm{b},h}(\bar{X})},\quad\|u\|_{\dot{H}^{-s,-\ell-1}_{\mathrm{b},h}(\bar{X})}\leq C\|\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{\dot{H}^{-s,-\ell}_{\mathrm{b},h}(\bar{X})}; \tag{5.79}\] \[\|u\|_{\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})}\leq C\|\widehat{\square_{g_{b}}}(\sigma)u\|_{\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}(\bar{X})},\quad\|u\|_{\dot{H}^{-s+1,-\ell-2}_{\mathrm{b},h}(\bar{X})}\leq C\|\widehat{\square_{g_{b}}}(\sigma)^{*}u\|_{\dot{H}^{-s,-\ell}_{\mathrm{b},h}(\bar{X})}, \tag{5.80}\] _where \(h=|\sigma|^{-1}\) and \(C\) only depends on \(b_{0},s,\ell,C_{1}\)._ _Therefore,_ \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s,\ell+1}_{\mathrm{b},h}(\bar{X})\}\rightarrow\bar{H}^{s,\ell+1}_{\mathrm{b},h}(\bar{X})\quad\text{and}\quad\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}(\bar{X})\}\rightarrow\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}(\bar{X})\] _are invertible for \(\sigma\in\mathbb{C},\ \mathrm{Im}\,\sigma\in[0,C_{1}],\ |\mathrm{Re}\,\sigma|>C_{2}\)._ Proof.: For simplicity of notation, we let \(P_{h}(z):=h^{2}\widehat{\square_{g_{b}}}(h^{-1}z)\). 
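We record the normalization behind this rescaling (our restatement of the relation \(h=|\sigma|^{-1}\) from the statement of the theorem):

\[z=h\sigma=\frac{\sigma}{|\sigma|},\qquad|z|=1,\qquad\operatorname{Im}z=h\operatorname{Im}\sigma\in[0,C_{1}h],\qquad h^{-2}P_{h}(z)=\widehat{\square_{g_{b}}}(\sigma).\]

Thus \(z\) stays in a compact subset of \(\{\operatorname{Re}z\neq 0\}\) since \(|\operatorname{Re}\sigma|>C_{2}\), and estimates for \(P_{h}(z)\) that are uniform for small \(h\) translate directly into the high energy estimates (5.79) and (5.80).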
We first claim that it suffices to prove the following estimates for \(s>\frac{1}{2},\ r>-\frac{1}{2},\ \ell<-\frac{1}{2}\) \[\|u\|_{\bar{H}^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.81}\] and \[\|u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}. \tag{5.82}\] Since \(\bar{H}^{s,s+\ell,\ell}_{\mathrm{sc},b,h}=\bar{H}^{s,\ell}_{\mathrm{b},h}\) and \(\dot{H}^{s,s+\ell,\ell}_{\mathrm{sc},b,h}=\dot{H}^{s,\ell}_{\mathrm{b},h}\), letting \(r=s+\ell>-\frac{1}{2}\) we have \[\|u\|_{\bar{H}^{s,s+\ell,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)u\|_{\bar{H}^{s-1,s+\ell+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}\] and \[\|u\|_{\dot{H}^{1-s,-s-\ell-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-s-\ell,-\ell}_{\mathrm{sc},b,h}(\bar{X})}.\] Then using the facts that \[\bar{H}^{s,\ell+1}_{\mathrm{b},h}=\bar{H}^{s,s+\ell+1,\ell+1}_{\mathrm{sc},b,h}\subset\bar{H}^{s-1,s+\ell+1,\ell+1}_{\mathrm{sc},b,h},\quad\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}=\bar{H}^{s-1,s+\ell+1,\ell+2}_{\mathrm{sc},b,h}\subset\bar{H}^{s-1,s+\ell+1,\ell+1}_{\mathrm{sc},b,h},\] \[\dot{H}^{-s,-\ell-1}_{\mathrm{b},h}=\dot{H}^{-s,-s-\ell-1,-\ell-1}_{\mathrm{sc},b,h}\supset\dot{H}^{-s+1,-s-\ell-1,-\ell-1}_{\mathrm{sc},b,h},\] \[\dot{H}^{-s+1,-\ell-2}_{\mathrm{b},h}=\dot{H}^{-s+1,-s-\ell-1,-\ell-2}_{\mathrm{sc},b,h}\supset\dot{H}^{-s+1,-s-\ell-1,-\ell-1}_{\mathrm{sc},b,h},\] we obtain (5.79) and (5.80). Now we will show (5.81) and (5.82). Here we only discuss the case \(\mathrm{Re}\,z>0\) (see Figure 11 for a phase space illustration of the proof of the estimates (5.81) and (5.82)), as the case \(\mathrm{Re}\,z<0\) can be handled in a similar manner. * Proof of the estimate (5.81). For \(s>s_{0}>\frac{1}{2}\geq\frac{1}{2}-\mathrm{Im}\,(h^{-1}z)\frac{r_{b}^{2}+a^{2}}{r_{b}-\mathbf{m}}\) (as calculated in (5.59) and (5.8)), using the semiclassical high regularity radial point estimate (see [109, Proposition 2.10], [110, Proposition 5.27]), there exist \(B_{\pm},G_{\pm},S_{\pm}\in\Psi^{0}_{h}(\bar{X})\) microlocally supported near \(L_{\pm}\) with \(L_{\pm}\subset\mathrm{ell}_{h}(B_{\pm}),L_{\pm}\subset\mathrm{ell}_{h}(S_{\pm})\) and satisfying that all the forward (backward) null characteristics from \(\mathrm{WF}_{h}(B_{\pm})\) tend to \(L_{\pm}\), while remaining in the elliptic set of \(G_{\pm}\), such that \[\|B_{\pm}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|G_{\pm}P_{h}(z)u\|_{H^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{s-s_{0}}\|S_{\pm}u\|_{H^{s_{0},r_{0},\ell_{0}}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.83}\] where \(r_{0}<r,\ \ell_{0}<\ell\) and the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. For \(r>r_{0}>-\frac{1}{2}\) (as calculated in (5.14)), using the semiclassical high sc-decay radial point estimates at the non-zero scattering section \(R(z)\) (see [111]; 
the proof follows from a positive commutator estimate \[i(P_{h}(z)^{*}A-AP_{h}(z))=\mathrm{Im}\,P_{h}(z)A+A\,\mathrm{Im}\,P_{h}(z)+i[\mathrm{Re}\,P_{h}(z),A]\] where \[\mathrm{Re}\,P_{h}(z)=\frac{P_{h}(z)+P_{h}(z)^{*}}{2},\quad\mathrm{Im}\,P_{h}(z)=\frac{P_{h}(z)-P_{h}(z)^{*}}{2i}.\] Let \(A\in\Psi^{2s-1,2r+1,2\ell+1}_{\mathrm{sc},b,h}(\bar{X})\) with semiclassical principal symbol \[a=\chi_{0}(\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2})\chi_{1}((\xi_{\rho}-2\mathrm{Re}\,z)^{2})\rho^{-2r+1}(\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\xi_{\varphi}^{2}+\xi_{\rho}^{2})^{s-\frac{1}{2}}\] where \(\chi_{0},\chi_{1}\) are identically \(0\) near \(0\) and have compact support sufficiently close to \(0\), and \(\chi_{1}\) has relatively large support such that \(\mathrm{supp}\,\chi_{0}\cap\mathrm{supp}\,\chi_{1}^{\prime}\) is disjoint from the characteristic set of \(\mathrm{Re}\,P_{h}(z)\). Then using Lemma 4.1 and the calculation before Lemma 4.8 in [111], it follows that \(h^{-1}(\mathrm{Im}\,P_{h}(z)A+A\,\mathrm{Im}\,P_{h}(z)+i[\mathrm{Re}\,P_{h}(z),A])\in\Psi_{\mathrm{sc},b,h}^{2s,2r,2\ell}(\bar{X})\) whose semiclassical principal symbol is positive definite elliptic near \(R(z)\) if \(r>-\frac{1}{2}\); this, together with a regularization argument, proves the semiclassical high sc-decay radial point estimates at \(R(z)\)), there exist \(B_{z},G_{z},S_{z}\in\Psi_{\mathrm{sc},h}^{0,0}(\bar{X})\) microlocally supported near \(R(z)\) with \(R(z)\subset\mathrm{ell}_{\mathrm{sc},h}(B_{z}),R(z)\subset\mathrm{ell}_{\mathrm{sc},h}(S_{z})\) and satisfying that all the forward null characteristics from \(\mathrm{WF}_{\mathrm{sc},h}(B_{z})\) tend to \(R(z)\), while remaining in the scattering elliptic set of \(G_{z}\), such that \[\|B_{z}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|G_{z}P_{h}(z)u\|_{H^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{s-s_{0}}\|S_{z}u\|_{H^{s_{0},r_{0},\ell_{0}}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.84}\] where \(s_{0}<s,\ \ell_{0}<\ell\) and the differential order \(s\) and b-decay order \(\ell\) are actually irrelevant here. For \(\ell<-\frac{1}{2}\) (as calculated in (5.14)), using the semiclassical low b-decay radial point estimates at the zero scattering section \(R(0)\) (see [111, §4-5]), there exist \(B_{0},G_{0}\in\Psi_{\mathrm{sc},b,h}^{0,0,0}(\bar{X})\) microlocally supported near \(R(0)\) (in fact near the blown up \(R(0)\) in the second microlocalized space introduced in §5.1.5) with \(R(0)\subset\mathrm{ell}_{\mathrm{sc},b,h}(B_{0})\) and \(E_{0}\in\Psi_{\mathrm{sc},b,h}^{0,0,0}(\bar{X}),\ \mathrm{WF}_{\mathrm{sc},b,h}(E_{0})\cap R(0)=\emptyset\), and satisfying that all the forward null characteristics from \(\mathrm{WF}_{\mathrm{sc},b,h}(B_{0})\setminus R(0)\) enter \(\mathrm{ell}_{\mathrm{sc},b,h}(E_{0})\), while remaining in the second microlocalized scattering elliptic set of \(G_{0}\), such that for any sufficiently large \(N>0\) \[\|B_{0}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|G_{0}P_{h}(z)u\|_{H^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+C\|E_{0}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{H^{-N,-N,\ell}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.85}\] where the differential order \(s\) is actually irrelevant here. 
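For orientation, we summarize the order conditions entering the radial point estimates above (our summary; the trapping estimate below imposes no order condition but costs a factor of \(h^{-2}\)):

\[s>\tfrac{1}{2}\ \text{at}\ L_{\pm},\qquad r>-\tfrac{1}{2}\ \text{at}\ R(z),\qquad\ell<-\tfrac{1}{2}\ \text{at}\ R(0),\]

matching the hypotheses \(s>\frac{1}{2},\ r>-\frac{1}{2},\ \ell<-\frac{1}{2}\) in (5.81).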
Since at the trapped set \(K\) \[\begin{split}\sigma_{h}(h^{-1}\mathrm{Im}\,P_{h}(z))&=\sigma_{h}(\frac{P_{h}(z)-P_{h}(z)^{*}}{2ih})=\sigma_{h}(\frac{P_{h}(z)-P_{h}(\bar{z})}{2ih})\\ &=-\rho_{b}^{-2}\mathrm{Im}\,\sigma\bigg(-\Big(\xi_{r}-c^{\prime}(r)\mathrm{Re}\,z+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z)}{\Delta_{b}}\Big)\Big(c^{\prime}(r)+\frac{\chi(r^{2}+a^{2})}{\Delta_{b}}\Big)\\ &\qquad+\frac{2}{\Delta_{b}}\Big(a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z\Big)\Big(r^{2}+a^{2}\Big)-2a\Big(\xi_{\varphi}-a\sin^{2}\theta\,\mathrm{Re}\,z\Big)\bigg)\\ &=-\rho_{b}^{-2}\mathrm{Im}\,\sigma\bigg(\frac{2}{\Delta_{b}}\Big(a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z\Big)\Big(r^{2}+a^{2}\Big)-2a\Big(\xi_{\varphi}-a\sin^{2}\theta\,\mathrm{Re}\,z\Big)\bigg)>0\end{split}\] where we use \(\xi_{r}-c^{\prime}(r)\mathrm{Re}\,z+\frac{\chi(a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z)}{\Delta_{b}}=0\) at \(K\) in the last equality and \[a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z=\frac{-4r\Delta_{b}\mathrm{Re}\,z}{\partial_{r}\Delta_{b}}<0,\quad(a\xi_{\varphi}-(r^{2}+a^{2})\mathrm{Re}\,z)^{2}\geq\Delta_{b}(\xi_{\varphi}-a\sin^{2}\theta\,\mathrm{Re}\,z)^{2}\quad\text{at}\ K,\] which follow from Proposition 5.15 and (5.66), in the last step. It then follows that \[\sigma_{h}(h^{-1}\mathrm{Im}\,(-P_{h}(z)))=\sigma_{h}(\frac{(-P_{h}(z))-(-P_{h}(z))^{*}}{2ih})<0<\frac{\nu_{\min}}{2}\quad\text{at}\quad K.\] Using the normally hyperbolic trapping estimates (see [39] and [61, Theorem 4.7]), there exist \(B_{1},G_{1}\in\Psi_{h}^{0}(\bar{X})\) microlocally supported near \(K\) with \(K\subset\mathrm{ell}_{h}(B_{1})\) and \(B_{2}\in\Psi_{h}^{0}(\bar{X}),\ \mathrm{WF}_{h}(B_{2})\cap\Gamma^{-}=\emptyset\), and satisfying that \(\mathrm{WF}_{h}(B_{1})\cup\mathrm{WF}_{h}(B_{2})\) is contained in the elliptic set of \(G_{1}\), such that for any \(N,s^{\prime},r^{\prime},\ell^{\prime}\in\mathbb{R}\) \[\|B_{1}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|G_{1}P_{h}(z)u\|_{H^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{2}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{H^{s^{\prime},r^{\prime},\ell^{\prime}}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.86}\] where the differential order \(s^{\prime}\), the sc-decay order \(r^{\prime}\) and b-decay order \(\ell^{\prime}\) are actually irrelevant here. Let \(r_{-}<r_{0}<r_{b}\) and \(\chi\in C_{c}^{\infty}(\bar{X})\) with \(\chi=1\) near \(r\geq r_{b}\) and \(\mathrm{supp}\,\chi\subset\{r\geq r_{0}\}\). By part (1) in Proposition 5.21, semiclassical propagation of singularities estimates and semiclassical elliptic estimates (concretely, when \((\rho,\theta,\varphi,\xi)\in\mathrm{WF}_{h}(\chi)\cap\{\langle\xi\rangle^{-2}p_{h,z}(\xi)\neq 0\}\), we use semiclassical elliptic estimates; if \((\rho,\theta,\varphi,\xi)\in\mathrm{WF}_{h}(\chi)\cap\{\langle\xi\rangle^{-2}p_{h,z}(\xi)=0\}\), then there exists \(s\in\mathbb{R}\) with \(\exp(s\,{}^{\mathrm{sc}}H^{2,0}_{p_{h,z}})(\rho,\theta,\varphi,\xi)\in\mathrm{ell}_{h}(B_{\pm})\cup\mathrm{ell}_{h}(B_{1})\cup\mathrm{ell}_{h}(B_{0})\cup\mathrm{ell}_{h}(B_{z})\), and thus we use semiclassical propagation of singularities estimates. We point out that in the set \(\{\mathrm{Re}\,p_{h}(z)=0\}\cap\partial_{+}\bar{X}\), the semiclassical symbol has a nonnegative imaginary part, so one can propagate estimates towards \(R(0)\). 
Finally, by using a pseudodifferential partition of unity, \(\chi\) can be written as a sum of operators falling into the above two cases), we have \[\begin{split}\|\chi u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}&\leq Ch^{-1}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{+}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{-}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B_{1}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{0}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{z}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\bar{H}^{-N,-N,\ell}_{\mathrm{sc},b,h}(\bar{X})}.\end{split} \tag{5.87}\] By the same reasoning as above in the control of \(\chi u\), it follows that \[\begin{split}\|E_{0}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}&\leq Ch^{-1}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{+}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{1}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B_{z}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\bar{H}^{-N,-N,\ell}_{\mathrm{sc},b,h}(\bar{X})}\end{split} \tag{5.88}\] and \[\begin{split}\|B_{2}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}&\leq Ch^{-1}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B_{+}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B_{z}u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\bar{H}^{-N,-N,\ell}_{\mathrm{sc},b,h}(\bar{X})}.\end{split} \tag{5.89}\] Putting all the above estimates together yields for \(s>s_{0}>\frac{1}{2},\ r>r_{0}>-\frac{1}{2},\ \ell<-\frac{1}{2}\) \[\|\chi u\|_{H^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{s-s_{0}}\|u\|_{\bar{H}^{s_{0},r_{0},\ell}_{\mathrm{sc},b,h}(\bar{X})}. \tag{5.90}\] Since \[p_{h,z}=-\rho_{b}^{-2}\Big(\Delta_{b}\big(\xi_{r}+\frac{a\xi_{\varphi}-(r^{2}+a^{2})z}{\Delta_{b}}\big)^{2}-\frac{1}{\Delta_{b}}\big(a\xi_{\varphi}-(r^{2}+a^{2})z\big)^{2}+\tilde{p}_{h,z}\Big)\] where \[\tilde{p}_{h,z}=\xi_{\theta}^{2}+\frac{1}{\sin^{2}\theta}(\xi_{\varphi}-a\sin^{2}\theta z)^{2}\] is semiclassically hyperbolic with respect to \(r\) in the region \(\{r_{-}\leq r<r_{b}\}\) (see [40, Definition E.55]), using the semiclassical hyperbolic estimates (see [40, Theorem E.57]), we have for all \(s,r,\ell\in\mathbb{R}\) \[\|(1-\chi)u\|_{\bar{H}^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+\|\chi u\|_{\bar{H}^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.91}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. Combining (5.90) with (5.91) yields for \(s>s_{0}>\frac{1}{2},\ r>r_{0}>-\frac{1}{2},\ \ell<-\frac{1}{2}\) \[\|u\|_{\bar{H}^{s,r,\ell}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)u\|_{\bar{H}^{s-1,r+1,\ell+1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{s-s_{0}}\|u\|_{\bar{H}^{s_{0},r_{0},\ell}_{\mathrm{sc},b,h}(\bar{X})}. \tag{5.92}\] Taking \(h\) small enough, the term \(Ch^{s-s_{0}}\|u\|_{\bar{H}^{s_{0},r_{0},\ell}_{\mathrm{sc},b,h}(\bar{X})}\) can be absorbed into the left-hand side and this proves the estimate (5.81). * Proof of the estimate (5.82). For \(s>\frac{1}{2}\geq\frac{1}{2}-\mathrm{Im}\,(h^{-1}z)\frac{r_{b}^{2}+a^{2}}{r_{b}-\mathbf{m}}\), we have \(1-s\leq\frac{1}{2}-\mathrm{Im}\,(h^{-1}\overline{z})\frac{r_{b}^{2}+a^{2}}{r_{b}-\mathbf{m}}\). 
Then using the semiclassical low regularity radial point estimates (see [109, Proposition 2.11], [110, Proposition 5.27]), there exist \(B_{\pm}^{\prime},G_{\pm}^{\prime}\in\Psi_{h}^{0}(\bar{X})\) microlocally supported near \(L_{\pm}\) with \(L_{\pm}\subset\mathrm{ell}_{h}(B_{\pm}^{\prime})\) and \(E_{\pm}^{\prime}\in\Psi_{h}^{0}(\bar{X}),\ \mathrm{WF}_{h}(E_{\pm}^{\prime})\cap L_{\pm}=\emptyset\), and satisfying that all the backward (forward) null characteristics from \(\mathrm{WF}_{h}(B_{\pm}^{\prime})\setminus L_{\pm}\) reach \(\mathrm{ell}_{h}(E_{\pm}^{\prime})\), while remaining in the elliptic set of \(G_{\pm}^{\prime}\), such that for any \(N\in\mathbb{R}\) \[\|B_{\pm}^{\prime}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|G_{\pm}^{\prime}P_{h}(z)^{*}u\|_{H^{-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+C\|E_{\pm}^{\prime}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-N}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.93}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. For \(r>-\frac{1}{2}\), we have \(-r-1<-\frac{1}{2}\). Then using the semiclassical low sc-decay radial point estimates at the non-zero scattering section \(R(z)\) (see [111] and the above discussion about the proof of the high sc-decay radial point estimates at \(R(z)\); we note that the only difference is that now the term involving \(\chi_{0}^{\prime}\) does not have the correct sign and needs to be treated as an error term \(E_{z}^{\prime}u\)), there exist \(B_{z}^{\prime},G_{z}^{\prime}\in\Psi_{\mathrm{sc},h}^{0,0}(\bar{X})\) microlocally supported near \(R(z)\) with \(R(z)\subset\mathrm{ell}_{\mathrm{sc},h}(B_{z}^{\prime})\) and \(E^{\prime}_{z}\in\Psi^{0,0}_{\mathrm{sc},h}(\bar{X}),\ \mathrm{WF}_{\mathrm{sc},h}(E^{\prime}_{z})\cap R(z)=\emptyset\), and satisfying that all the backward null characteristics from \(\mathrm{WF}_{\mathrm{sc},h}(B^{\prime}_{z})\setminus R(z)\) reach \(\mathrm{ell}_{\mathrm{sc},h}(E^{\prime}_{z})\), while remaining in the scattering elliptic set of \(G^{\prime}_{z}\), such that \[\begin{split}&\|B^{\prime}_{z}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad\leq Ch^{-1}\|G^{\prime}_{z}P_{h}(z)^{*}u\|_{H^{-s,-r-1,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|E^{\prime}_{z}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-N}_{\mathrm{sc},b,h}(\bar{X})}\end{split} \tag{5.94}\] where the differential order \(s\) and b-decay order \(\ell\) are actually irrelevant here. For \(\ell<-\frac{1}{2}\), we have \(-\ell-1>-\frac{1}{2}\). Then using the semiclassical high b-decay radial point estimates at the zero scattering section \(R(0)\) (see [111, §4-5]), there exist \(B^{\prime}_{0},G^{\prime}_{0}\in\Psi^{0,0,0}_{\mathrm{sc},b,h}(\bar{X})\) microlocally supported near \(R(0)\) (in fact near the blown up \(R(0)\) in the second microlocalized space introduced in §5.1.5) with \(R(0)\subset\mathrm{ell}_{\mathrm{sc},b,h}(B^{\prime}_{0})\), and satisfying that all the backward null characteristics from \(\mathrm{WF}_{\mathrm{sc},b,h}(B^{\prime}_{0})\) tend to \(R(0)\), while remaining in the second microlocalized scattering elliptic set of \(G^{\prime}_{0}\), such that for any \(N\in\mathbb{R}\) \[\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|G^{\prime}_{0}P_{h}(z)^{*}u\|_{H^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.95}\] where the differential order \(s\) is actually irrelevant here. 
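Dually (our summary), for the adjoint estimate (5.82) the relevant orders lie on the opposite sides of the thresholds used for (5.81):

\[1-s<\tfrac{1}{2}\ \text{at}\ L_{\pm},\qquad-r-1<-\tfrac{1}{2}\ \text{at}\ R(z),\qquad-\ell-1>-\tfrac{1}{2}\ \text{at}\ R(0),\]

which is why low regularity, low sc-decay and high b-decay radial point estimates are used here.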
We again calculate \[\sigma_{h}(h^{-1}\mathrm{Im}\,P_{h}(z)^{*})=\sigma_{h}(\frac{P_{h}(\bar{z})-P_{h}(z)}{2ih})<0<\frac{\nu_{\min}}{2}\quad\text{at}\quad K.\] Then using the normally hyperbolic trapping estimates (see [39] and [61, Theorem 4.7]), there exist \(B^{\prime}_{1},G^{\prime}_{1}\in\Psi^{0}_{h}(\bar{X})\) microlocally supported near \(K\) with \(K\subset\mathrm{ell}_{h}(B^{\prime}_{1})\) and \(B^{\prime}_{2}\in\Psi^{0}_{h}(\bar{X}),\ \mathrm{WF}_{h}(B^{\prime}_{2})\cap\Gamma^{+}=\emptyset\), and satisfying that \(\mathrm{WF}_{h}(B^{\prime}_{1})\cup\mathrm{WF}_{h}(B^{\prime}_{2})\) is contained in the elliptic set of \(G^{\prime}_{1}\), such that for any \(N,s^{\prime},r^{\prime},\ell^{\prime}\in\mathbb{R}\) \[\begin{split}&\|B^{\prime}_{1}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad\leq Ch^{-2}\|G^{\prime}_{1}P_{h}(z)^{*}u\|_{H^{-s,-r-1,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B^{\prime}_{2}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{s^{\prime},r^{\prime},\ell^{\prime}}_{\mathrm{sc},b,h}(\bar{X})}\end{split} \tag{5.96}\] where the differential order \(s^{\prime}\), the sc-decay order \(r^{\prime}\) and b-decay order \(\ell^{\prime}\) are actually irrelevant here. By the same reasoning as in the proof of the estimate (5.81), we have \[\begin{split}&\|\chi u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\leq Ch^{-1}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|B^{\prime}_{+}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B^{\prime}_{-}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B^{\prime}_{1}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B^{\prime}_{z}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})},\end{split} \tag{5.97}\] \[\begin{split}&\|E^{\prime}_{\pm}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+\|E^{\prime}_{z}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad\leq Ch^{-1}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|(1-\chi)u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B^{\prime}_{1}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+C\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\end{split} \tag{5.98}\] and \[\begin{split}&\|B^{\prime}_{2}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|(1-\chi)u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\\ &\quad+C\|B^{\prime}_{0}u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}.\end{split} \tag{5.99}\] 
Putting all the above estimates together yields for \(s>\frac{1}{2},\ r>-\frac{1}{2},\ \ell<-\frac{1}{2}\) \[\|\chi u\|_{H^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+C\|(1-\chi)u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}. \tag{5.100}\] Again using the semiclassical hyperbolic estimates (see [40, Theorem E.57]), we have for all \(s,r,\ell\in\mathbb{R}\) \[\|(1-\chi)u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-1}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})} \tag{5.101}\] where the sc-decay order \(r\) and b-decay order \(\ell\) are actually irrelevant here. Combining (5.100) with (5.101) yields for \(s>\frac{1}{2},\ r>-\frac{1}{2},\ \ell<-\frac{1}{2}\) \[\|u\|_{\dot{H}^{1-s,-r-1,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\leq Ch^{-2}\|P_{h}(z)^{*}u\|_{\dot{H}^{-s,-r,-\ell}_{\mathrm{sc},b,h}(\bar{X})}+Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}. \tag{5.102}\] Taking \(h\) small enough, the term \(Ch^{N}\|u\|_{\dot{H}^{-N,-N,-\ell-1}_{\mathrm{sc},b,h}(\bar{X})}\) can be absorbed into the left-hand side and this proves the estimate (5.82). * Invertibility of \(\widehat{\square_{g_{b}}}(\sigma)\) for \(0\leq\operatorname{Im}\sigma\leq C_{1},\ |\operatorname{Re}\sigma|\geq C_{2}\). We only discuss the proof of the invertibility of the operator \(\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s,\ell+1}_{\mathrm{b},h}(\bar{X})\}\to\bar{H}^{s,\ell+1}_{\mathrm{b},h}(\bar{X})\) in detail, as the operator \(\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b},h}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}(\bar{X})\}\to\bar{H}^{s-1,\ell+2}_{\mathrm{b},h}(\bar{X})\) can be handled in a completely analogous manner. We define the space \[\mathcal{X}(\sigma):=\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X})\},\] which is a Hilbert space endowed with the norm \(\|u\|_{\mathcal{X}(\sigma)}:=\|u\|_{\bar{H}^{s,\ell}_{\mathrm{b}}}+\|\widehat{\square_{g_{b}}}(\sigma)u\|_{\bar{H}^{s,\ell+1}_{\mathrm{b}}}\). First, the injectivity of \(\widehat{\square_{g_{b}}}(\sigma):\mathcal{X}(\sigma)\to\bar{H}^{s,\ell+1}_{\mathrm{b}}\) immediately follows from (5.79). As for the surjectivity, we need to show that for any \(f\in\bar{H}^{s,\ell+1}_{\mathrm{b}}\), there exists \(u\in\bar{H}^{s,\ell}_{\mathrm{b}}\) such that \(\widehat{\square_{g_{b}}}(\sigma)u=f\). We define \[\mathcal{Y}(\sigma):=\{v\in\dot{H}^{-s,-\ell-1}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)^{*}v\in\dot{H}^{-s,-\ell}_{\mathrm{b}}(\bar{X})\}.\] According to (5.79), we have \[|\langle f,v\rangle|\leq C\|f\|_{\bar{H}^{s,\ell+1}_{\mathrm{b}}}\|\widehat{\square_{g_{b}}}(\sigma)^{*}v\|_{\dot{H}^{-s,-\ell}_{\mathrm{b}}},\quad v\in\mathcal{Y}(\sigma).\] By the Hahn-Banach theorem, the anti-linear form \(\widehat{\square_{g_{b}}}(\sigma)^{*}v\mapsto\langle f,v\rangle\) with \(v\in\mathcal{Y}(\sigma)\) can be extended to a continuous anti-linear form on \(\dot{H}^{-s,-\ell}_{\mathrm{b}}\). 
Since \((\dot{H}^{-s,-\ell}_{\mathrm{b}})^{*}=\bar{H}^{s,\ell}_{\mathrm{b}}\), there exists \(u\in\bar{H}^{s,\ell}_{\mathrm{b}}\) such that \(\langle u,\widehat{\square_{g_{b}}}(\sigma)^{*}v\rangle=\langle f,v\rangle\) for \(v\in\mathcal{Y}(\sigma)\). In particular, for \(v\in C^{\infty}_{\mathrm{c}}(X^{\circ})\) \[\langle\widehat{\square_{g_{b}}}(\sigma)u,v\rangle=\langle u,\widehat{\square_{g_{b}}}(\sigma)^{*}v\rangle=\langle f,v\rangle.\] This proves that \(\widehat{\square_{g_{b}}}(\sigma)u=f\). Now we further determine the index of the Fredholm operators \(\widehat{\square_{g_{b}}}(\sigma)\) on suitable function spaces for \(\sigma\in\mathbb{C},\ \operatorname{Im}\sigma\geq 0\). **Lemma 5.23**.: _Let \(s>\frac{1}{2},\ \ell<-\frac{1}{2},\ s+\ell>-\frac{1}{2}\). Then the following holds._ 1. _For_ \(\sigma\in\mathbb{C},\operatorname{Im}\sigma\geq 0,\sigma\neq 0\)_, the operators_ \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\}\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\] (5.103) \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X})\}\to\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X})\] (5.104) _are Fredholm of index_ \(0\)_._ 2. _Let_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_. Then_ \[\widehat{\square_{g_{b}}}(0):\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(0)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\}\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\] (5.105) _is Fredholm of index_ \(0\)_._ Proof.: We now prove that (5.103) and (5.105) have index \(0\); the index \(0\) property of (5.104) follows in a similar manner. * Proof of the index \(0\) property of (5.103). In view of the _high energy estimates_ in Theorem 5.22, \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\mid\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\}\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\] is invertible and thus has index \(0\) for fixed \(\operatorname{Im}\sigma=C\geq 0\) when \(|\operatorname{Re}\sigma|\gg 1\). We claim that the index of \(\widehat{\square_{g_{b}}}(\sigma)\) with \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\) is constant on the horizontal line \(\operatorname{Im}\sigma=C\) and thus is \(0\) for all \(\sigma\) with \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\). Let \[\mathcal{X}(\sigma):=\{u\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X}):\widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\}.\] For any fixed \(\sigma_{0}\neq 0\) with \(\operatorname{Im}\sigma_{0}=C\geq 0\), without loss of generality, we may assume \(\operatorname{Re}\sigma_{0}\geq 0\). First, there exists \(C^{\prime}\) such that \(\widehat{\square_{g_{b}}}(\sigma)\) is invertible and thus has index \(0\) for \(\sigma\in[C^{\prime}+iC,\infty+iC)\). We next prove that for each \(\sigma^{\prime}\) on the segment \([\sigma_{0},C^{\prime}+iC]\), there exists \(\epsilon>0\) such that the index of \(\widehat{\square_{g_{b}}}(\sigma):\mathcal{X}(\sigma)\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) is constant for \(\sigma\) with \(\operatorname{Im}\sigma\geq 0,\ \sigma\neq 0,\ |\sigma-\sigma^{\prime}|<\epsilon\). 
Then by compactness of the segment \([\sigma_{0},C^{\prime}+iC]\) we conclude that the index of \(\widehat{\square_{g_{b}}}(\sigma_{0})\) is the same as that of \(\widehat{\square_{g_{b}}}(C^{\prime}+iC)\) and thus is \(0\). Suppose \(\widehat{\square_{g_{b}}}(\sigma^{\prime})\) has kernel and cokernel of dimension \(k_{+}\) and \(k_{-}\), respectively; then we establish a Grushin problem [40, Appendix C.1]. More specifically, we define the following operator \[L(\sigma)=\begin{pmatrix}\widehat{\square_{g_{b}}}(\sigma)&L_{-}\\ L_{+}&0\end{pmatrix}:\mathcal{X}(\sigma)\oplus\mathbb{C}^{k_{-}}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\oplus\mathbb{C}^{k_{+}}\] where \(L_{+},L_{-}\) are defined as follows. If \(\ker\widehat{\square_{g_{b}}}(\sigma^{\prime})\) is spanned by \(x_{j}\in\mathcal{X}(\sigma^{\prime}),\ j=1,\cdots,k_{+}\), then by the Hahn-Banach theorem we can find \(x_{j}^{*}\in\dot{H}_{\mathrm{b}}^{-s,-\ell}(\bar{X})\) with \(\|x_{j}^{*}\|\leq 1\) such that \(\langle x_{i},x_{j}^{*}\rangle=\delta_{ij}\). It follows that \[L_{+}:\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\to\mathbb{C}^{k_{+}},\quad L_{+}(f):=(\langle f,x_{1}^{*}\rangle,\cdots,\langle f,x_{k_{+}}^{*}\rangle)\] restricts to an isomorphism \(\ker\widehat{\square_{g_{b}}}(\sigma^{\prime})\to\mathbb{C}^{k_{+}}\). As for the construction of \(L_{-}\), we choose \(y_{j}\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) with \(\|y_{j}\|\leq 1\), \(j=1,\cdots,k_{-}\), so that \(y_{j}+\mathrm{ran}\,\widehat{\square_{g_{b}}}(\sigma^{\prime})\) form a basis of \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}/\mathrm{ran}\,\widehat{\square_{g_{b}}}(\sigma^{\prime})\). Then we define \[L_{-}:\mathbb{C}^{k_{-}}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}),\quad L_{-}((a_{1},\cdots,a_{k_{-}})):=\sum_{j=1}^{k_{-}}a_{j}y_{j}.\] The operator \(L_{-}\) has maximal rank and \(\mathrm{ran}\,L_{-}\cap\mathrm{ran}\,\widehat{\square_{g_{b}}}(\sigma^{\prime})=\{0\}\). The above construction of \(L_{+},L_{-}\) implies that \(L(\sigma^{\prime})\) has trivial kernel and is onto, and thus is invertible. Since the operators \(\widehat{\square_{g_{b}}}(\sigma)\) satisfy the uniform Fredholm estimates (5.17) for all \(\sigma\in\mathbb{C},\ \operatorname{Im}\sigma\in[0,2C]\) satisfying \(|\sigma_{0}|/2\leq|\sigma|\leq 2(C+C^{\prime})\), so do the \(L(\sigma)\). That is, there exists a constant \(\tilde{C}\) (independent of \(b\) and \(\sigma\)) such that \[\|(u,u^{-})\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}}\leq\tilde{C}\Big(\|L(\sigma)(u,u^{-})\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\oplus\mathbb{C}^{k_{+}}}+\|(u,u^{-})\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}}\Big)\] for \(\sigma\in\mathbb{C},\ \operatorname{Im}\sigma\in[0,2C]\) satisfying \(|\sigma_{0}|/2\leq|\sigma|\leq 2(C+C^{\prime})\), and \(s_{0}<s,\ \ell_{0}<\ell\). Then following the perturbation arguments in [109, §2.7], we prove the invertibility of \(L(\sigma)\) when \(\sigma\) is close to \(\sigma^{\prime}\). Concretely, suppose there exists a sequence \(\sigma_{j}\to\sigma^{\prime}\) such that \(L(\sigma_{j})\) is not invertible, so either \(\ker L(\sigma_{j})\) on \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}\) or \(\ker L(\sigma_{j})^{*}\) on \(\dot{H}_{\mathrm{b}}^{-s+1,-\ell-2}(\bar{X})\oplus\mathbb{C}^{k_{+}}\) is non-trivial. By passing to a subsequence, we may assume that the former possibility holds for all \(j\), as the case of the adjoint is completely analogous. 
Now, if \(\|(u_{j},u_{j}^{-})\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}}=1\) and \(L(\sigma_{j})(u_{j},u_{j}^{-})=0\), then the above uniform Fredholm estimate gives \(\tilde{C}^{-1}\leq\|(u_{j},u_{j}^{-})\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}}\). Using the sequential compactness of the unit ball in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}\) in the weak topology, and the compactness of the inclusion \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}\to\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}},\ s_{0}<s,\ \ell_{0}<\ell\), it follows that \((u_{j},u_{j}^{-})\) has a weakly convergent subsequence in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}\) to some \((u_{0},u_{0}^{-})\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\oplus\mathbb{C}^{k_{-}}\) which is norm-convergent in \(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}\). We also have \[L(\sigma^{\prime})(u_{0},u_{0}^{-})=L(\sigma^{\prime})\left((u_{0},u_{0}^{-})-(u_{j},u_{j}^{-})\right)+(L(\sigma^{\prime})-L(\sigma_{j}))(u_{j},u_{j}^{-})\to 0\quad\text{in}\quad\bar{H}_{\mathrm{b}}^{s_{0}-2,\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}\] since \(L(\sigma_{j})\to L(\sigma^{\prime})\) as bounded operators in \(\mathcal{L}(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}},\bar{H}_{\mathrm{b}}^{s_{0}-2,\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}})\) and \((u_{j},u_{j}^{-})\to(u_{0},u_{0}^{-})\) in \(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}\). Therefore, \(L(\sigma^{\prime})(u_{0},u_{0}^{-})=0\). Since \(\tilde{C}^{-1}\leq\|(u_{0},u_{0}^{-})\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\oplus\mathbb{C}^{k_{-}}}\), we have \((u_{0},u_{0}^{-})\neq 0\) and thus \(\ker L(\sigma^{\prime})\) is non-trivial, which leads to a contradiction. This proves that \(L(\sigma)\) is invertible for \(\sigma\) close to \(\sigma^{\prime}\), which implies that \(\widehat{\square_{g_{b}}}(\sigma)\) has index \(k_{+}-k_{-}\) (see [40, Theorem C.4]) for \(\sigma\) close to \(\sigma^{\prime}\). #### 5.5.2. High energy estimates for tensor wave operators Now we prove an analogue of Theorem 5.22 for the semiclassical operators \(h^{2}\widehat{\mathcal{P}_{b,\gamma}}(h^{-1}z),\ h^{2}\widehat{\mathcal{W}_{b,\gamma}}(h^{-1}z)\) and \(h^{2}\widehat{L_{b,\gamma}}(h^{-1}z)\) acting on bundles. **Theorem 5.24**.: _Let \(b_{0}=(\mathbf{m}_{0},\mathbf{a}_{0},\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|+|\mathbf{a}_{0}|<\mathbf{m}_{0}\) and \(|\mathbf{a}_{0}|\ll|\mathbf{m}_{0}|+|\mathbf{Q}_{0}|\). Then there exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\) and for \(s>s_{0}>2\) (resp. \(s>s_{0}>3\)), \(\ell<-\frac{1}{2}\) with \(s+\ell>-\frac{1}{2}\), the conclusions in Theorem 5.22 hold for \(h^{2}\widehat{\mathcal{P}_{b,\gamma}}(h^{-1}z)\) and \(h^{2}\widehat{\mathcal{W}_{b,\gamma}}(h^{-1}z)\) (resp. 
\(h^{2}\widehat{L_{b,\gamma}}(h^{-1}z)\))._ Proof.: The proof is analogous to that of Theorem 5.22, except for the computation of the threshold regularity in the radial point estimate at the event horizon and the threshold decay rate in the radial point estimate at spatial infinity \(\partial_{+}\bar{X}\), and the verification of \[\sigma_{1,h}\Big(\frac{1}{2ih}\big(h^{2}\widehat{\bullet}(h^{-1}z)-(h^{2}\widehat{\bullet}(h^{-1}z))^{*}\big)\Big)<\frac{\nu_{\min}}{2},\quad\bullet=\mathcal{P}_{b,\gamma},\ \mathcal{W}_{b,\gamma},\ L_{b,\gamma} \tag{5.106}\] at the trapping \(K\). The computation of the threshold regularity at the event horizon and the threshold decay rate at spatial infinity \(\partial_{+}\bar{X}\) is the same as in the fixed \(\sigma\) case in Theorem 5.6. The verification of (5.106) at the trapping \(K\) follows from the discussion in Appendix B (see Theorems B.6, B.7 and Remark B.8) and the relevant calculation in the proof of Theorem 5.22. Correspondingly, we have the following analogue of Lemma 5.23 for \(\widehat{\mathcal{P}_{b,\gamma}},\ \widehat{\mathcal{W}_{b,\gamma}}\) and \(\widehat{L_{b,\gamma}}\). **Lemma 5.25**.: _Let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|\mathbf{Q}|+|\mathbf{a}|<\mathbf{m}\) and \(|\mathbf{a}|\ll|\mathbf{m}|+|\mathbf{Q}|\). Let \(s>2\) (resp. \(s>3\)), \(\ell<-\frac{1}{2}\) and \(s+\ell>-\frac{1}{2}\). Then the conclusions in Lemma 5.23 also hold for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\) and \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) (resp. \(\widehat{L_{b,\gamma}}(\sigma)\))._ Proof.: The proof is completely analogous to that of Lemma 5.23. ### Energy estimates In this section, we shall establish energy estimates for the solutions to various wave type equations on slowly rotating Kerr-Newman metrics. First, we introduce another modification of the Boyer-Lindquist coordinates, defined by \[\bar{t}=t+F_{t}(r),\quad\bar{\varphi}=\varphi+F_{\varphi}(r).\] In the new coordinates \((\bar{t},r,\theta,\bar{\varphi})\), the Kerr-Newman metric takes the following form \[\begin{split} g_{b}&=-\frac{\Delta_{b}}{\rho_{b}^{2}}\Big(d\bar{t}-a\sin^{2}\theta d\bar{\varphi}+\big(a\sin^{2}\theta F_{\varphi}^{\prime}(r)-F_{t}^{\prime}(r)\big)dr\Big)^{2}\\ &\quad+\frac{\sin^{2}\theta}{\rho_{b}^{2}}\Big(ad\bar{t}-(r^{2}+a^{2})d\bar{\varphi}+\big((r^{2}+a^{2})F_{\varphi}^{\prime}(r)-aF_{t}^{\prime}(r)\big)dr\Big)^{2}+\rho_{b}^{2}(\frac{dr^{2}}{\Delta_{b}}+d\theta^{2})\end{split} \tag{5.107}\] and its inverse is \[g_{b}^{-1}=\frac{1}{\rho_{b}^{2}}\Big(-\frac{1}{\Delta_{b}}\big((r^{2}+a^{2})\partial_{\bar{t}}+a\partial_{\bar{\varphi}}\big)^{2}+\Delta_{b}\big(\partial_{r}+F_{t}^{\prime}(r)\partial_{\bar{t}}+F_{\varphi}^{\prime}(r)\partial_{\bar{\varphi}}\big)^{2}+\partial_{\theta}^{2}+\frac{1}{\sin^{2}\theta}\big(\partial_{\bar{\varphi}}+a\sin^{2}\theta\,\partial_{\bar{t}}\big)^{2}\Big). \tag{5.108}\] In order for \(g_{b},g_{b}^{-1}\) to be smooth at the event horizon, \(F_{t}(r)\) and \(F_{\varphi}(r)\) are required to satisfy \(F_{t}^{\prime}(r)=\pm(r^{2}+a^{2})/\Delta_{b},\ F_{\varphi}^{\prime}(r)=\pm a/\Delta_{b}\) (up to a smooth modification) at the event horizon. Now we shall prove **Lemma 5.26**.: _There exist \(F_{t}(r),F_{\varphi}(r)\) satisfying the following conditions._ 1. \(F_{\varphi}^{\prime}(r)=\frac{a}{\Delta_{b}}+\tilde{F}_{\varphi}(r)\) _where_ \(\tilde{F}_{\varphi}(r)\) _is smooth on_ \([r_{-},\infty)_{r}\)_, and_ \(F_{\varphi}(r)=0\) _near_ \(r=\infty\)_._ 2. \(F_{t}^{\prime}(r)=\frac{r^{2}+a^{2}}{\Delta_{b}}+\tilde{F}_{t}(r)\) _where_ \(\tilde{F}_{t}(r)\) _is smooth on_ \([r_{-},\infty)\)_, and_ \(F_{t}(r)=-r_{b,*}+\mathcal{O}(r^{-1})\) _near_ \(r=\infty\) _where_ \(\frac{dr_{b,*}}{dr}=\frac{r^{2}+a^{2}}{\Delta_{b}}\)_._ 3. _The hypersurfaces_ \(\bar{t}=const\) _are spacelike, i.e.,_ \(d\bar{t}\) _is timelike. 
More precisely,_ \[g_{b}^{-1}(d\bar{t},d\bar{t})\leq-\mathbf{m}^{2}\rho_{b}^{-2}<0.\] _With this choice of \(F_{t}(r),F_{\varphi}(r)\), \(g_{b}\) in the coordinates \((\bar{t},r,\theta,\bar{\varphi})\) is smooth and nondegenerate on the extended manifold \(\mathbb{R}_{\bar{t}}\times[r_{-},\infty)_{r}\times\mathbb{S}^{2}\)._ Proof.: Let \(\chi(r)\in C_{c}^{\infty}(\mathbb{R})\) be such that \(\chi(r)=1\) when \(r\leq 3\mathbf{m}\) and \(\chi=0\) when \(r\geq 4\mathbf{m}\). We first define \(F_{\varphi}(r)=\chi\int a/\Delta_{b}\,dr\), which satisfies statement (1). Next, we expect \[F_{t}^{\prime}(r)=\frac{r^{2}+a^{2}}{\Delta_{b}}+\mu_{1}(r)\quad\text{near}\quad r=r_{b};\quad F_{t}^{\prime}(r)=-\frac{r^{2}+a^{2}}{\Delta_{b}}+\mu_{2}(r)\quad\text{near}\quad r=\infty \tag{5.109}\] where \(\mu_{1}(r)\) is smooth near the event horizon and \(\mu_{2}(r)\sim r^{-2}\) near \(r=\infty\). We impose \[g_{b}^{-1}(d\bar{t},d\bar{t})=-\frac{1}{\rho_{b}^{2}}\Big(\frac{(r^{2}+a^{2})^{2}}{\Delta_{b}}-\Delta_{b}F_{t}^{\prime}(r)^{2}-a^{2}\sin^{2}\theta\Big)=-\frac{\mathbf{m}^{2}}{\rho_{b}^{2}}<0.\] In order to arrange (5.109), we need \[F_{t}^{\prime}(r)=\begin{cases}\sqrt{\Big(\frac{r^{2}+a^{2}}{\Delta_{b}}\Big)^{2}-\frac{\mathbf{m}^{2}+a^{2}\sin^{2}\theta}{\Delta_{b}}}=\frac{r^{2}+a^{2}}{\Delta_{b}}\sqrt{1-\frac{\Delta_{b}(\mathbf{m}^{2}+a^{2}\sin^{2}\theta)}{(r^{2}+a^{2})^{2}}},&\text{$r$ near $r_{b}$},\\ -\sqrt{\Big(\frac{r^{2}+a^{2}}{\Delta_{b}}\Big)^{2}-\frac{\mathbf{m}^{2}+a^{2}\sin^{2}\theta}{\Delta_{b}}}=-\frac{r^{2}+a^{2}}{\Delta_{b}}\sqrt{1-\frac{\Delta_{b}(\mathbf{m}^{2}+a^{2}\sin^{2}\theta)}{(r^{2}+a^{2})^{2}}},&\text{$r$ near $\infty$}.\end{cases}\] Therefore, we define \[F_{t}^{\prime}(r)=\chi(r)\bigg(\frac{r^{2}+a^{2}}{\Delta_{b}}\sqrt{1-\frac{\Delta_{b}(\mathbf{m}^{2}+a^{2}\sin^{2}\theta)}{(r^{2}+a^{2})^{2}}}\bigg)+(1-\chi(r))\bigg(-\frac{r^{2}+a^{2}}{\Delta_{b}}\sqrt{1-\frac{\Delta_{b}(\mathbf{m}^{2}+a^{2}\sin^{2}\theta)}{(r^{2}+a^{2})^{2}}}\bigg).\] With this choice of \(F_{t}^{\prime}(r)\), we have \(g_{b}^{-1}(d\bar{t},d\bar{t})=-\mathbf{m}^{2}\rho_{b}^{-2}\) for \(r\leq 3\mathbf{m},\ r\geq 4\mathbf{m}\) and \(g_{b}^{-1}(d\bar{t},d\bar{t})\leq-\mathbf{m}^{2}\rho_{b}^{-2}\) for \(3\mathbf{m}\leq r\leq 4\mathbf{m}\), and thus statement (3) is satisfied. By choosing a proper integration constant we have \(F_{t}(r)=-r_{b,*}+\mathcal{O}(r^{-1})\) near \(r=\infty\), and thus statement (2) is satisfied. We now establish the energy estimates for the solutions to the equation \(\square_{g_{b}}u=f\) on the extended Kerr-Newman manifold \(M=\mathbb{R}_{\bar{t}}\times[r_{-},\infty)_{r}\times\mathbb{S}^{2}\). We denote by \(\Sigma_{\bar{t}_{0}}\) the spacelike hypersurfaces \(\{\bar{t}=\bar{t}_{0}\}\cap M\). Here, the integral on \(M\) is taken with respect to the volume form \(dV=d\mathrm{vol}_{g_{b}}=\rho_{b}^{2}\sin\theta d\bar{t}drd\theta d\varphi\). 
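Before turning to the multiplier argument, we recall for the reader's convenience the standard Kerr-Newman quantities (our recollection; we assume the conventions of the earlier sections):

\[\Delta_{b}(r)=r^{2}-2\mathbf{m}r+a^{2}+\mathbf{Q}^{2},\qquad\rho_{b}^{2}=r^{2}+a^{2}\cos^{2}\theta,\]

so that \(\Delta_{b}(r_{b})=0\), \(\Delta_{b}>0\) for \(r>r_{b}\), \(\partial_{r}\Delta_{b}(r_{b})>0\), and \(\Delta_{b}(r_{-})<0\) for the cutoff radius \(r_{-}\) slightly inside the event horizon; these sign facts are used repeatedly below.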
We begin with the energy-momentum tensor \[T_{\alpha\beta}[g]=\partial_{\alpha}u\partial_{\beta}u-\frac{1}{2}g_{\alpha\beta}\partial^{\gamma}u\partial_{\gamma}u.\] Its contraction with respect to a vector field \(X\) is denoted by \[P_{\alpha}[g,X]=T_{\alpha\beta}[g]X^{\beta}\] and its divergence is \[\nabla^{\alpha}P_{\alpha}[g,X]=\square_{g}u\cdot Xu+\frac{1}{2}T_{\alpha\beta}[g]\pi_{X}^{\alpha\beta},\] where \(\pi^{X}_{\alpha\beta}\) is the deformation tensor of \(X\), given in terms of the Lie derivative by \[\pi^{X}_{\alpha\beta}=\nabla_{\alpha}X_{\beta}+\nabla_{\beta}X_{\alpha}=(\mathcal{L}_{X}g)_{\alpha\beta};\] indeed, \(\nabla^{\alpha}T_{\alpha\beta}[g]=(\square_{g}u)\partial_{\beta}u\) by the symmetry of the Hessian, which yields the divergence identity above. Since \(\Sigma_{\bar{t}}\) is "asymptotically null", we shall define the (degenerate) energy at \(\Sigma_{\bar{t}_{0}}\) as \[E[u](\bar{t}_{0}):=\int_{r_{-}}^{\infty}\int_{\mathbb{S}^{2}}\frac{1}{r^{2}}|\partial_{\bar{t}}u(\bar{t}_{0},\cdot)|^{2}+|\partial_{r}u(\bar{t}_{0},\cdot)|^{2}+\frac{1}{r^{2}}\Big(|\partial_{\theta}u(\bar{t}_{0},\cdot)|^{2}+\frac{1}{\sin^{2}\theta}|\partial_{\varphi}u(\bar{t}_{0},\cdot)|^{2}\Big)r^{2}\sin\theta\,dr\,d\theta\,d\varphi.\] Now we establish the energy estimate. **Proposition 5.27**.: _Let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|\mathbf{a}|\ll|\mathbf{Q}|+\mathbf{m}\). Let \(u\) be the solution to the following initial value problem_ \[\square_{g_{b}}u=f,\quad u|_{\bar{t}=0}=u_{0},\quad\partial_{\bar{t}}u|_{\bar{t}=0}=u_{1} \tag{5.110}\] _where \(rf\in L^{1}([0,\infty)_{\bar{t}};L^{2}(\Sigma_{\bar{t}}))\) and \(\frac{u_{0}}{r},\ \frac{u_{1}}{r},\ \partial u_{0}\in L^{2}(\Sigma_{0};r^{2}\sin\theta\,dr\,d\theta\,d\varphi)\). Then there exists a constant \(C(b)\) depending on the parameter \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) such that_ \[E[u](\bar{t})\leq C(b)\Big(\|\frac{u_{0}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|\partial u_{0}\|_{L^{2}(\Sigma_{0})}^{2}+\|\frac{u_{1}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|rf\|_{L^{1}([0,\bar{t});L^{2}(\Sigma_{\bar{t}}))}^{2}\Big)e^{C(b)\bar{t}}, \tag{5.111}\] _where the integrals are taken with respect to the volume form \(r^{2}\sin\theta\,dr\,d\theta\,d\varphi\) and \(\partial\in\{\partial_{r},\frac{1}{r}\partial_{\theta},\frac{1}{r\sin\theta}\partial_{\varphi}\}\)._ _Moreover, the above statements also hold for the wave type operators \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma}\) and \(L_{b,\gamma}\)._ Proof.: We only do the analysis for the case \(\mathbf{a}=0\) here; the same argument also works for small \(\mathbf{a}\) since all the calculations below near the event horizon depend continuously on \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) (away from the event horizon, we make use of the timelike property of \(d\bar{t}\) and \(\partial_{\bar{t}}\)). Therefore, we let \(b=(\mathbf{m},0,\mathbf{Q})\) and then we have \[\begin{split} g=g_{b}&=-\frac{\Delta_{b}}{r^{2}}d\bar{t}^{2}+2\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)d\bar{t}dr+\Big(\frac{r^{2}}{\Delta_{b}}-\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}\Big)dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\varphi^{2},\\ g^{-1}=g_{b}^{-1}&=-\Big(\frac{r^{2}}{\Delta_{b}}-\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}\Big)\partial_{\bar{t}}^{2}+2\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)\partial_{\bar{t}}\partial_{r}+\frac{\Delta_{b}}{r^{2}}\partial_{r}^{2}+\frac{1}{r^{2}}\partial_{\theta}^{2}+\frac{1}{r^{2}\sin^{2}\theta}\partial_{\varphi}^{2}.\end{split}\] By a density argument, we may assume that \(u\) is supported away from spatial infinity. 
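Before carrying out the boundary term computations, note (our remark) that \(\partial_{\bar{t}}\) is Killing since the coefficients of \(g_{b}\) displayed above are independent of \(\bar{t}\), so

\[\pi^{\partial_{\bar{t}}}=\mathcal{L}_{\partial_{\bar{t}}}g_{b}=0,\qquad\nabla^{\alpha}P_{\alpha}[g,\partial_{\bar{t}}]=\square_{g}u\cdot\partial_{\bar{t}}u;\]

that is, the multiplier \(\partial_{\bar{t}}\) produces no bulk terms beyond the inhomogeneity, and only the boundary terms computed below remain.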
* Killing multiplier: \(X=\partial_{\bar{t}}\). We compute \[\begin{split} g^{-1}(d\bar{t},P[g,\partial_{\bar{t}}])&=\Big(\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}-\frac{r^{2}}{\Delta_{b}}\Big)T(\partial_{\bar{t}},\partial_{\bar{t}})+\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)T(\partial_{r},\partial_{\bar{t}})\\ &=-\frac{1}{2}\bigg(\Big(\frac{r^{2}}{\Delta_{b}}-\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}\Big)|\partial_{\bar{t}}u|^{2}+\frac{\Delta_{b}}{r^{2}}|\partial_{r}u|^{2}+|\partial_{\theta}u|^{2}+\frac{1}{\sin^{2}\theta}|\partial_{\varphi}u|^{2}\bigg)\\ &\sim-\Big(\frac{1}{r^{2}}|\partial_{\bar{t}}u|^{2}+|\partial_{r}u|^{2}+\frac{1}{r^{2}}|\partial_{\theta}u|^{2}+\frac{1}{r^{2}\sin^{2}\theta}|\partial_{\varphi}u|^{2}\Big)\quad\text{on}\quad[r_{-},\infty),\end{split} \tag{5.112}\] \[\begin{split} g^{-1}(dr,P[g,\partial_{\bar{t}}])&=\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)T(\partial_{\bar{t}},\partial_{\bar{t}})+\frac{\Delta_{b}}{r^{2}}T(\partial_{r},\partial_{\bar{t}})\\ &=\frac{\Delta_{b}(r_{-})}{r_{-}^{2}}F^{\prime}_{t}(r_{-})|\partial_{\bar{t}}u|^{2}+\frac{\Delta_{b}(r_{-})}{r_{-}^{2}}\partial_{\bar{t}}u\partial_{r}u\quad\text{at}\quad r=r_{-}.\end{split}\] We see that \(g^{-1}(d\bar{t},P[g,\partial_{\bar{t}}])\) gives control of \(|\frac{1}{r}\partial_{\bar{t}}u|\) and the angular derivatives but not \(\partial_{r}u\) near the event horizon, because \(\Delta_{b}(r)\) is nonpositive at and beyond the event horizon. We also note that the first term of \(g^{-1}(dr,P[g,\partial_{\bar{t}}])\) is positive if \(r_{-}\) is sufficiently close to the event horizon \(r=r_{b}\), while the second term does not have a definite sign. So we need to introduce a suitable multiplier which allows us to deal with the above issues at the event horizon. * Energy estimates near the event horizon. Following [87], we introduce \(X=b(r)\partial_{r}\), expressed in terms of the coordinates \((\bar{t},r,\theta,\varphi)\), with \(b(r)=-\chi(r)\) where \(\chi(r)=1\) when \(r\leq 3\mathbf{m}\) and \(\chi=0\) when \(r\geq 4\mathbf{m}\). First we compute the boundary terms at the spacelike hypersurfaces \(\bar{t}=const\) \[\begin{split} g^{-1}(d\bar{t},P[g,X])&=\Big(\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}-\frac{r^{2}}{\Delta_{b}}\Big)T(\partial_{\bar{t}},b(r)\partial_{r})+\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)T(\partial_{r},b(r)\partial_{r})\\ &=-\chi(r)\bigg(-\Big(\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}-\frac{r^{2}}{\Delta_{b}}\Big)\partial_{\bar{t}}u\partial_{r}u+\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)|\partial_{r}u|^{2}\bigg).\end{split}\] Since \(\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)\sim 1\) near the event horizon \(r=r_{b}\), it follows that on \(r\in[r_{-},\infty)\) \[g^{-1}(d\bar{t},P[g,C\partial_{\bar{t}}+X])\sim-E[u](\bar{t}) \tag{5.113}\] provided \(C\) is large enough. 
At the hypersurface \(r=r_{-}\), we have \[\begin{split} g^{-1}(dr,P[g,X])&=\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)T(\partial_{\bar{t}},b(r)\partial_{r})+\frac{\Delta_{b}}{r^{2}}T(\partial_{r},b(r)\partial_{r})\\ &=-\frac{1}{2}\chi(r)\bigg(\Big(\frac{r^{2}}{\Delta_{b}}-\frac{\Delta_{b}}{r^{2}}F^{\prime}_{t}(r)^{2}\Big)|\partial_{\bar{t}}u|^{2}+\frac{\Delta_{b}}{r^{2}}|\partial_{r}u|^{2}-\frac{1}{r^{2}}|\partial_{\theta}u|^{2}-\frac{1}{r^{2}\sin^{2}\theta}|\partial_{\varphi}u|^{2}\bigg).\end{split}\] Therefore, combining the calculation of \(g^{-1}(dr,P[g,X])\) with (5.112) yields \[\begin{split} g^{-1}(dr,P[g,C\partial_{\bar{t}}+X])&=\Big(C\frac{\Delta_{b}(r_{-})}{r_{-}^{2}}F^{\prime}_{t}(r_{-})+\frac{1}{2}\big(\frac{r_{-}^{2}}{\Delta_{b}(r_{-})}-\frac{\Delta_{b}(r_{-})}{r_{-}^{2}}F^{\prime}_{t}(r_{-})^{2}\big)-\epsilon\Big)|\partial_{\bar{t}}u|^{2}+\epsilon\Big(\partial_{\bar{t}}u+\frac{C\Delta_{b}(r_{-})}{2\epsilon r_{-}^{2}}\partial_{r}u\Big)^{2}\\ &\quad+\Big(-\frac{\Delta_{b}(r_{-})}{2r_{-}^{2}}-\frac{C^{2}\Delta_{b}^{2}(r_{-})}{4\epsilon r_{-}^{4}}\Big)|\partial_{r}u|^{2}+\frac{1}{2r_{-}^{2}}|\partial_{\theta}u|^{2}+\frac{1}{2r_{-}^{2}\sin^{2}\theta}|\partial_{\varphi}u|^{2}\geq 0\quad\text{at}\quad r=r_{-}\end{split} \tag{5.114}\] provided \(C\) is sufficiently large, \(\epsilon\) is small enough and \(r_{-}\) is sufficiently close to \(r_{b}\). Then we integrate \(\nabla^{\alpha}P_{\alpha}[g,C\partial_{\bar{t}}+X]=\square_{g}u\cdot(C\partial_{\bar{t}}u+Xu)+\frac{1}{2}T_{\alpha\beta}[g]\pi^{\alpha\beta}_{C\partial_{\bar{t}}+X}\) over the region \[\mathcal{D}=\{0\leq\bar{t}\leq\bar{t}_{1},\ r\geq r_{-}\}\] with respect to the volume form \(dV_{g_{\mathrm{b}}}=r^{2}\sin\theta d\bar{t}drd\theta d\varphi\) and use the divergence theorem \[\int_{\mathcal{D}}\nabla^{\alpha}P_{\alpha}\,dV=\int_{\partial\mathcal{D}}\iota_{P}dV\] to obtain \[\begin{split}&\int_{\mathcal{D}}\square_{g}u\cdot(C\partial_{\bar{t}}u+Xu)\,r^{2}\sin\theta d\bar{t}drd\theta d\varphi+\frac{1}{2}\int_{\mathcal{D}}T_{\alpha\beta}[g]\pi^{\alpha\beta}_{C\partial_{\bar{t}}+X}\,r^{2}\sin\theta d\bar{t}drd\theta d\varphi\\ &=\int_{r_{-}}^{\infty}\int_{\mathbb{S}^{2}}g^{-1}(d\bar{t},P[g,C\partial_{\bar{t}}+X])\,r^{2}\sin\theta drd\theta d\varphi\Big|_{\bar{t}=0}^{\bar{t}=\bar{t}_{1}}\\ &\quad-\int_{0}^{\bar{t}_{1}}\int_{\mathbb{S}^{2}}g^{-1}(dr,P[g,C\partial_{\bar{t}}+X])\,r^{2}\sin\theta d\bar{t}d\theta d\varphi\Big|_{r=r_{-}}.\end{split}\] Since \(g^{-1}(dr,P[g,C\partial_{\bar{t}}+X])|_{r=r_{-}}\geq 0\) and \(T_{\alpha\beta}[g]\pi^{\alpha\beta}_{C\partial_{\bar{t}}+X}\) is quadratic in \(\partial u\), we find that \[\sup_{0\leq\bar{t}\leq\bar{t}_{1}}E[u](\bar{t})\leq C^{\prime}\Big(\|\partial u_{0}\|_{L^{2}(\Sigma_{0})}^{2}+\|\frac{u_{1}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|rf\|_{L^{1}([0,\bar{t}_{1});L^{2}(\Sigma_{\bar{t}}))}^{2}\Big)+C^{\prime}\int_{0}^{\bar{t}_{1}}E[u](\bar{t})\,d\bar{t},\] where \(\partial\in\{\partial_{r},\frac{1}{r}\partial_{\theta},\frac{1}{r\sin\theta}\partial_{\varphi}\}\) and \(L^{2}(\Sigma_{0})=L^{2}(\Sigma_{0};r^{2}\sin\theta\,dr\,d\theta\,d\varphi)\). As for the \(L^{2}\) norm of \(u\), we have \[\|\frac{u}{r}\|_{L^{2}(\Sigma_{\bar{t}})}\lesssim\|\frac{u_{0}}{r}\|_{L^{2}(\Sigma_{0})}+\int_{0}^{\bar{t}_{1}}\|\frac{\partial_{\bar{t}}u}{r}\|_{L^{2}(\Sigma_{\bar{t}})}\,d\bar{t}\] where \(L^{2}(\Sigma_{\bar{t}})=L^{2}(\Sigma_{\bar{t}};r^{2}\sin\theta\,dr\,d\theta\,d\varphi)\). 
Therefore, we have \[\sup_{0\leq\bar{t}\leq\bar{t}_{1}}E[u](\bar{t})\leq C\Big(\|\frac{u_{0}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|\partial u_{0}\|_{L^{2}(\Sigma_{0})}^{2}+\|\frac{u_{1}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|rf\|_{L^{1}([0,\bar{t}_{1});L^{2}(\Sigma_{\bar{t}}))}^{2}\Big)+C\int_{0}^{\bar{t}_{1}}E[u](\bar{t})\,d\bar{t}.\] Finally, by Gronwall's inequality we obtain the exponential bound on the energy \[E[u](\bar{t})\leq C\Big(\|\frac{u_{0}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|\partial u_{0}\|_{L^{2}(\Sigma_{0})}^{2}+\|\frac{u_{1}}{r}\|_{L^{2}(\Sigma_{0})}^{2}+\|rf\|_{L^{1}([0,\bar{t});L^{2}(\Sigma_{\bar{t}}))}^{2}\Big)e^{C\bar{t}}.\] * Proof for \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma}\) and \(L_{b,\gamma}\). According to Lemma 4.12, \(-2\mathcal{P}_{b,\gamma},-2\mathcal{W}_{b,\gamma}\) (resp. \(-L_{b,\gamma}\)) are \(\square_{g_{\mathrm{b}}}\) tensored with the \(4\times 4\) (resp. \(14\times 14\)) identity matrix plus a matrix whose entries are first order differential operators with coefficients decaying like \(r^{-2}\), and then the above arguments in the proof for \(\square_{g_{\mathrm{b}}}\) still apply here. We next show that if \(\mathrm{Im}\,\sigma\) is sufficiently large, then \(\widehat{\square_{g_{\mathrm{b}}}}(\sigma),\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma),\widehat{L_{b,\gamma}}(\sigma)\) are invertible on suitable function spaces. **Theorem 5.28**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|<\mathbf{m}_{0}\). Then there exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\) and for \(s>\frac{1}{2},\ \ell<-\frac{1}{2}\) with \(s+\ell>-\frac{1}{2}\), the operators_ \[\widehat{\square_{g_{\mathrm{b}}}}(\sigma):\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\mid\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\in\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X})\}\rightarrow\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X}) \tag{5.115}\] \[\widehat{\square_{g_{\mathrm{b}}}}(\sigma):\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\mid\widehat{\square_{g_{\mathrm{b}}}}(\sigma)u\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\}\rightarrow\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}) \tag{5.116}\] _are invertible for \(\mathrm{Im}\,\sigma\geq C^{\prime}(b_{0})\) where \(C^{\prime}(b_{0})>0\) is a sufficiently large constant only depending on \(b_{0}\)._ _Moreover, the above statements also hold for the wave type operators \(\mathcal{P}_{b,\gamma},\mathcal{W}_{b,\gamma}\) and \(L_{b,\gamma}\)._ Proof.: According to Proposition 5.27, there exist \(\epsilon>0,\tilde{C}(b_{0})>0\) such that for \(|b-b_{0}|<\epsilon\), the solution \(u\) to the initial value problem (5.110) satisfies \[E[u](\bar{t})\leq\Big(\tilde{C}(b_{0})E[u](0)+\tilde{C}(b_{0})\big(\int_{0}^{\bar{t}}\|rf\|_{L^{2}(\Sigma_{\bar{t}})}\,d\bar{t}\big)^{2}\Big)e^{\tilde{C}(b_{0})\bar{t}}.\] We now show that if \(\operatorname{Im}\sigma>\tilde{C}(b_{0})\), then (5.115) and (5.116) are invertible. According to Lemma 5.23, it suffices to prove that (5.115) and (5.116) are injective. Assume the contrary: by Proposition 5.7, there exists a nontrivial solution \(w\in\rho C^{\infty}(\partial_{+}\bar{X})+\mathcal{A}^{2-}(\bar{X})\) to \(\widehat{\square_{g_{b}}}(\sigma)w=0\). Then \(u(t_{b,*},r,\theta,\varphi)=e^{-it_{b,*}\sigma}w\) is a solution to \(\square_{g_{b}}u=0\). Let \(\chi(\bar{t})\in C^{\infty}(\mathbb{R})\) be such that \(\operatorname{supp}\chi\subset[1,\infty)\) and \(\operatorname{supp}(1-\chi)\subset(-\infty,2]\). 
Then \(\chi u\) is the unique solution supported in \((1,\infty)\) to the inhomogeneous equation \(\square_{g_{b}}(\chi u)=[\square_{g_{b}},\chi]u\). According to Lemma 4.11 and \(\bar{t}-t_{b,*}=\mathcal{O}(\frac{1}{r})\), we have \[\square_{g_{b}}(\chi u)=[\square_{g_{b}},\chi]u\sim\frac{1}{r^{3-}}\] and thus \(\chi(\bar{t})u\) satisfies \(E(\chi u)\lesssim e^{\tilde{C}(b_{0})\bar{t}}\). But a direct calculation implies \(E(\chi u)\sim e^{2\operatorname{Im}\sigma t_{b,*}}\sim e^{2\operatorname{Im}\sigma\bar{t}}\), which leads to a contradiction since \(\operatorname{Im}\sigma>\tilde{C}(b_{0})\). The proof for \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma),\widehat{\mathcal{W}_{b,\gamma}}(\sigma),\widehat{L_{b,\gamma}}(\sigma)\) is completely analogous. ## 6. Spherical harmonic decompositions In this section, we shall introduce the spherical harmonic decompositions, which will be used in the proof of the geometric mode stability of the Reissner-Nordstrom spacetime (see §7). Let \(\not{g}\) be the standard metric on the unit sphere \(\mathbb{S}^{2}\). We denote the operators on \(\mathbb{S}^{2}\) using a slash, e.g. \(\not{\mathrm{tr}}=\operatorname{tr}_{\not{g}},\not{\delta}=\delta_{\not{g}},\not{\delta}^{*}=\delta^{*}_{\not{g}}\), etc. We denote by \(Y_{l}^{m}\) with \(l\in\mathbb{N},m\in\mathbb{Z},|m|\leq l\) the spherical harmonic function of degree \(l\) and order \(m\), which satisfies \(\not{\Delta}_{H}Y_{l}^{m}=l(l+1)Y_{l}^{m}\) where \(\not{\Delta}_{H}=\not{d}\not{\delta}+\not{\delta}\not{d}\) is the Hodge Laplacian. We denote by \(\mathbf{S}_{l}=\operatorname{span}\{Y_{l}^{m}:|m|\leq l\}\) the space of degree \(l\) spherical harmonic functions. Then \(\{\mathbf{S}_{l}:l\in\mathbb{N}\}\) gives an orthogonal decomposition of \(L^{2}(\mathbb{S}^{2})\). According to the Hodge decomposition theorem and the fact that the de Rham cohomology group \(H^{1}_{dR}(\mathbb{S}^{2})=0\), any \(1\)-form \(\omega\in C^{\infty}(\mathbb{S}^{2},T^{*}\mathbb{S}^{2})\) can be uniquely decomposed as \(\omega=\not{d}\phi+\not{\delta}\eta\) where \(\phi\in C^{\infty}(\mathbb{S}^{2}),\eta\in C^{\infty}(\mathbb{S}^{2};\Lambda^{2}T^{*}\mathbb{S}^{2})\). Since \(\not{\delta}=-\not{\star}\not{d}\not{\star}\) where \(\not{\star}\) is the Hodge star operator, we can rewrite \(\omega=\not{d}\phi+\not{\star}\not{d}\psi\) where \(\phi,\psi\in C^{\infty}(\mathbb{S}^{2})\). Therefore, an orthogonal basis of \(L^{2}(\mathbb{S}^{2},T^{*}\mathbb{S}^{2})\) is given by \(\{\not{d}\mathbf{S}_{l},\ \mathbf{V}_{l}:=\not{\star}\not{d}\mathbf{S}_{l}:l\in\mathbb{N}\}\) where \(\not{d}\mathbf{S}_{l},\mathbf{V}_{l}\) are the eigen-\(1\)-forms of the Hodge Laplacian \(\not{\Delta}_{H}\) with eigenvalue \(l(l+1)\). We note that \(\not{\delta}\mathbf{V}_{l}=0\) and \(\not{d}\mathbf{S}_{0}=\mathbf{V}_{0}=0\). In other words, a spectral decomposition of the negative tensor Laplacian \(-\not{\Delta}=-\not{g}^{ab}\not{\nabla}_{a}\not{\nabla}_{b}\), which satisfies \(-\not{\Delta}=\not{\Delta}_{H}-1\) on \(1\)-forms, is given by the scalar part \[\not{d}\mathbf{S}_{l}\in\ker(-\not{\Delta}-(l(l+1)-1))\quad\text{with}\quad l\geq 1 \tag{6.1}\] and the vector part \[\mathbf{V}_{l}\in\ker(-\not{\Delta}-(l(l+1)-1))\quad\text{with}\quad l\geq 1. \tag{6.2}\] 
\tag{6.2}\] As for the symmetric \(2\)-tensors \(h\in C^{\infty}(\mathbb{S}^{2};S^{2}T^{*}\mathbb{S}^{2})\), we have the transverse-traceless decomposition [120]\(h=\frac{1}{2}(\not{v}h)\not{g}+\not{\delta}^{*}_{0}\omega+h_{TT}\) where \(\not{\delta}^{*}_{0}=\not{\delta}^{*}+\frac{1}{2}\not{g}\not{g}\) is the traceless symmetric gradient, \(\omega\in C^{\infty}(\mathbb{S}^{2};T^{*}\mathbb{S}^{2})\) and \(h_{TT}\) is both traceless and divergence free. According to [58], we know that \(h_{TT}=0\). We then decompose \(\omega\) further into eigen-\(1\)-forms (we also call them spherical harmonic \(1\)-forms) of \(-\not{\Delta}\) of scalar type \(\not{d}\mathbf{S}_{l}\) and vector type \(\mathbf{V}_{l}\), and \(\not{v}h\) into scalar spherical harmonic functions \(\mathbf{S}_{l}\). In the spherical coordinates \((\theta,\varphi)\), since \[\not{d}Y_{1}^{0}=-\sin\theta d\theta,\quad\not{d}Y_{1}^{1}=\cos\theta\cos\varphi d \theta-\sin\theta\sin\varphi d\varphi,\quad\not{d}Y_{1}^{-1}=\cos\theta\sin \varphi d\theta+\sin\theta\cos\varphi d\varphi,\] we see that \(\not{d}\mathbf{S}_{1}\) is conformal killing and thus \(\not{\delta}^{*}_{0}\not{d}\mathbf{S}_{1}=0\). We also have \[\not{\pi}\not{d}Y_{1}^{0}=-\sin^{2}\theta d\varphi,\quad\not{\pi}\not{d}Y_{1}^{1} =\sin\theta\cos\theta\cos\varphi d\varphi+\sin\varphi d\theta,\quad\not{\pi} \not{d}Y_{1}^{-1}=\sin\theta\cos\theta\sin\varphi d\varphi-\cos\varphi d\theta. \tag{6.3}\] and find that \(\mathbf{V}_{1}^{\sharp}\) are rotations, and thus killing, i.e. \(\not{\delta}^{*}\mathbf{V}_{1}=0\). Then any symmetric \(2\)-tensors on sphere can be decomposed into the following spherical harmonic symmetric \(2\)-tensors of the scalar type \[\mathbf{S}_{l}\not{g}\ (l\geq 0),\quad\not{\delta}^{*}_{0}\not{d}\mathbf{S}_{l}\ (l\geq 2) \tag{6.4}\] and the vector type \[\not{\delta}^{*}_{0}\mathbf{V}_{l}=\not{\delta}^{*}\mathbf{V}_{l}\ (l\geq 2). \tag{6.5}\] We claim that the following geometric operators on sphere \(\mathbb{S}^{2}\) \[\not{g},\mbox{$\,\not{v}$},\not{d},\not{\delta},\not{\delta}^{*},-\not{\Delta}\ \mbox{and their compositions}\] preserve the the scalar and vector type of the spherical harmonics. More precisely, these geometric operators map the scalar type functions/1-forms/symmetric 2-tensors built out of a particular \(\mathsf{S}\in\mathbf{S}_{l}\) into another scalar type tensor with the same \(\mathsf{S}\), likewise for the vector type 1-forms/symmetric 2-tensors. This is obviously true for \(\not{g}\) acting on functions, \(\not{v}\) on symmetric 2-tensors, \(\not{d}\) on functions, \(\not{\theta}\) on 1-forms, and \(-\not{\Delta}\) on functions and 1-forms. 
For the verification of \(\not{\delta},\not{\delta}^{*},-\not{\Delta}\), we make use of the identities \(\not{g}\not{\delta}^{*}\omega=-\frac{1}{2}\not{\Delta}\omega+\frac{1}{2}\not{ d}\not{\delta}\omega-\frac{1}{2}\omega\) and \(-\not{\Delta}\not{\delta}^{*}\omega=\not{\delta}^{*}(-\not{\Delta}\omega)-3 \not{\delta}^{*}\omega-2\not{g}\not{\delta}\omega\) to obtain \[\not{g}(\mathsf{S}\not{g}) =-\not{d}\mathsf{S},\ \not{\delta}\not{\delta}^{*}_{0}\mathsf{ dS}=\frac{l(l+1)-2}{2}\not{d}\mathsf{S},\ \not{\delta}\not{\delta}^{*}_{\not{\delta}}\mathsf{ dS}=\frac{l(l+1)-2}{2}\not{\pi}\mathsf{dS};\ \not{\delta}^{*}_{0}\mathsf{ dS}=\theta^{*}_{0}\mathsf{ dS}-\frac{l(l+1)}{2}\mathsf{S}\not{g}; \tag{6.6}\] \[-\not{\Delta}(\mathsf{S}\not{g}) =l(l+1)\mathsf{S}\not{g},\quad-\not{\Delta}(\theta^{*}_{0} \mathsf{dS})=(l(l+1)-4)\,(\theta^{*}_{0}\mathsf{dS}),\quad-\not{\Delta}( \delta^{*}\not{d}\mathsf{S})=(l(l+1)-4)\,(\delta^{*}\not{\pi}\mathsf{dS}).\] We now decompose the spacetime \(M\) into the aspherical \(\hat{X}=\mathbb{R}_{t_{*}}\times[r_{-},\infty)_{r}\) and spherical part \(\mathbb{S}^{2}\), i.e. \[M=\hat{X}\times\mathbb{S}^{2},\quad\hat{\pi}:M\to\hat{X},\quad\not{\pi}:M\to \mathbb{S}^{2}. \tag{6.7}\] We then call the functions _aspherical_ if they are only functions of \((t_{*},r)\), and _spherical_ if they are only functions on \(\mathbb{S}^{2}\). Alternatively, we can identify the aspherical and spherical functions as subspaces of \(C^{\infty}(M)\) as follows: \[C^{\infty}(\check{X})\cong\dot{\pi}^{*}C^{\infty}(\check{X})\subset C^{ \infty}(M),\quad C^{\infty}(\mathbb{S}^{2})\cong\not{\pi}^{*}C^{\infty}( \mathbb{S}^{2})\subset C^{\infty}(M). \tag{6.8}\] Then functions on \(M\), say \(L^{2}(M)\), can be decomposed into an infinite sum of products of aspherical functions and spherical harmonic functions. We can also split the cotangent bundle \(T^{*}M\) into aspherical and spherical part as follows: \[T^{*}M=T^{*}_{AS}\oplus T^{*}_{S}\quad\mbox{where}\quad T^{*}_{AS}=\dot{\pi }^{*}T^{*}\hat{X}\quad\mbox{and}\quad T^{*}_{S}=\not{\pi}^{*}T^{*}\mathbb{S}^ {2}. \tag{6.9}\] We can further write \(\omega\in T^{*}_{S}\) as an infinite sum of the spherical harmonic 1-forms \(\not{d}\mathsf{S}_{l},\mathbf{V}_{l}\) of \(-\not{\Delta}\). Therefore, a 1-form \(\omega\in C^{\infty}(M;T^{*}M)\) can be expressed as an infinite sum of products of aspherical functions/1-forms on \(\hat{X}\) and spherical harmonic functions/1-forms. The above splitting of \(T^{*}M\) induces a natural splitting of the symmetric 2-tensors,into aspherical part \(S^{2}T^{*}_{AS}\), mixed part \(T^{*}_{AS}\otimes T^{*}_{S}\) and spherical part \(S^{2}T^{*}_{S}\) as follows: \[S^{2}T^{*}M=S^{2}T^{*}_{AS}\oplus(T^{*}_{AS}\otimes T^{*}_{S})\oplus S^{2}T^{ *}_{S} \tag{6.10}\] where we identify \(a\otimes b\in T^{*}_{AS}\otimes T^{*}_{S}\) as \(2a\otimes_{s}b=a\otimes b+b\otimes a\). Therefore, a symmetric 2-tensor \(h\in C^{\infty}(M;S^{2}T^{*}M)\) can be expressed as an infinite sum of products of aspherical functions/1-forms/symmetric 2-tensors on \(\check{X}\) and spherical harmonic functions/1-forms/symmetric 2-tensors. 
The above decomposition for functions/1-form/symmetric 2-tensors on \(M\) implies that the perturbation \((\dot{g},\check{A})\) can be divided into two categories: the first consists of _scalar perturbations_ (also called even parity modes in [102] and closed portions in [71]): \[\mbox{scalar }l\geq 2: \begin{cases}\dot{g}=\widetilde{f}\mathsf{S}+(f\otimes_{s}\not{d} \mathsf{S})+(H_{L}\mathsf{S}\not{g}+H_{T}\not{\delta}^{*}_{0}\mathsf{dS})&\mbox {with}\quad\mathsf{S}\in\mathbf{S}_{l}\\ \check{A}=\widetilde{K}\mathsf{S}+K\mathsf{dS}&\mbox{with}\quad\mathsf{S}\in \mathbf{S}_{1}\end{cases} \tag{6.11}\] \[\mbox{scalar }l=0: \begin{cases}\dot{g}=\widetilde{f}+H_{L}\not{g}\\ \check{A}=\widetilde{K}\end{cases}\] where \[H_{L},H_{T},K\in C^{\infty}(\check{X}),\quad f,\widetilde{K}\in C^{\infty}( \check{X};T^{*}\check{X}),\quad\widetilde{f}\in C^{\infty}(\check{X};S^{2}T^{*} \check{X}).\] The second comprises of the _vector perturbations_ (also called odd parity modes in [102] and co-closed portions in [71]): \[\begin{split}\text{vector }l\geq 2:&\quad\begin{cases}\dot{g}=f \otimes_{s}\mathsf{V}+H_{T}\not{\theta}^{*}\mathsf{V}\\ \dot{A}=K\mathsf{V}\end{cases}\quad\quad\text{with}\quad\mathsf{V}\in\mathbf{ V}_{l}\\ \text{vector }l=1:&\quad\begin{cases}\dot{g}=f\otimes_{s}\mathsf{V}\\ \dot{A}=K\mathsf{V}\end{cases}\quad\quad\text{with}\quad\mathsf{V}\in\mathbf{ V}_{1}\end{split} \tag{6.12}\] where \(H_{T},K\in C^{\infty}(\mathring{X})\) and \(f\in C^{\infty}(\mathring{X};T^{*}\mathring{X})\). We decompose the Reissner-Nordstrom metric \(g\) into aspherical and spherical part as in (6.7), which takes the form in static coordinates \((t,r,\omega)\) with \(\omega\in\mathbb{S}^{2}\) as follows \[g=\dot{g}+r^{2}\not{\theta} \tag{6.13}\] where \(\mathring{g}\) is a Lorentzian metric on \(\mathring{X}=\mathbb{R}_{t_{*}}\times[r_{-},\infty)\) and \(\not{\theta}\) denote the standard metric on the unit sphere \(\mathbb{S}^{2}\). In static coordinates \((t,r)\), \(\mathring{g}\) takes the form \[\mathring{g}=-\mu_{b_{0}}dt^{2}+\mu_{b_{0}}^{-1}dr^{2}\quad\text{with}\quad\mu _{b_{0}}=1-\frac{2\mathbf{m}}{r}+\frac{\mathbf{Q}^{2}}{r^{2}}, \tag{6.14}\] and it can be extended beyond the event horizon by introducing \((t_{*},r)\) coordinates. We denote the operators associated with \(\mathring{g}\) and \(\not{\theta}\) by rings and slashes respectively. Here we use the Einstein summation convention with Greek indices \(\mu,\nu,\dots\) for coordinates on \(M\), lower case Roman indices \(i,j,k,m,n\) for coordinates on \(\mathring{X}\) and \(a,b,c,d,e\) for coordinates on \(\mathbb{S}^{2}\). Indices are raised and lowered using the Reissner-Nordstrom metric \(g\), except that we write \(\not{g}^{ab}=(\not{g}^{-1})_{ab}\) for the inverse metric to \(\not{g}\) on \(\mathbb{S}^{2}\). ## 7. Mode stability of the Reissner-Nordstrom spacetime In this section we shall characterize the non-decaying (generalized) modes of the linearized Einstein-Maxwell system around the Reissner-Nordstrom spacetime \((M,g_{b_{0}},A_{b_{0}})\) with subextremal charge, following the method of Kodama-Ishibashi [79, 80] for the study of perturbations of black holes without (or with) charge in high dimensions. Throughout this section, we drop the subscript \(b_{0}\), i.e., we take \((g,A)=(g_{b_{0}},A_{b_{0}})\) and \(\mathbf{m}=\mathbf{m}_{0},\mathbf{Q}=\mathbf{Q}_{0}\). Here we adopt some notations from [57, 60, 80]. 
**Theorem 7.1**.: _Let \(\sigma\in\mathbb{C}\) with \(\operatorname{Im}\sigma\geq 0\), suppose \((\mathring{g},\mathring{A})\) is an outgoing mode solution to the linearized Einstein-Maxwell system, i.e., \((\mathring{g},\mathring{A})=(e^{-i\sigma t_{*}}\mathring{g}_{0},e^{-i\sigma t _{*}}\mathring{A}_{0})\) with \((\mathring{g}_{0},\mathring{A}_{0})\in\bar{H}_{\mathrm{b}}^{\infty,\ell}( \bar{X};S^{2\widetilde{scT^{*}}}\bar{X}\oplus\widetilde{scT^{*}}\bar{X})\) for some \(\ell\in\mathbb{R}\) solves_ \[\begin{split}&\mathscr{L}_{1}(\mathring{g},\mathring{A})=-\frac{1}{2} \square_{g}\mathring{g}-\delta_{g}^{*}\delta_{g}G_{g}\mathring{g}+\mathscr{R} _{g}\mathring{g}-2D_{(g,dA)}T(\mathring{g},d\mathring{A})=0\\ &\mathscr{L}_{2}(\mathring{g},\mathring{A})=D_{(g,A)}\left( \delta_{g}dA\right)(\mathring{g},\mathring{A})=0.\end{split} \tag{7.1}\] _Then there exist parameters \(\dot{\mathbf{m}}\in\mathbb{R},\dot{\mathbf{a}}\in\mathbb{R}^{3},\dot{\mathbf{Q }}\in\mathbb{R}\) and an outgoing \(1\)-form \(\omega\) and a function \(\phi\) on \(\bar{M}\), i.e. \((\omega,\phi)\in e^{-i\sigma t_{*}}\bar{H}_{\mathrm{b}}^{\infty,\ell^{\prime}} \left(\bar{X};\widetilde{scT^{*}}\bar{X}\oplus\mathbb{C}\right)\) for some \(\ell^{\prime}\in\mathbb{R}\), such that_ \[(\mathring{g},\mathring{A})=\left(\mathring{g}_{(\mathbf{m},0,\mathbf{Q})}(\dot{ \mathbf{m}},\dot{\mathbf{a}},\dot{\mathbf{Q}})+2\delta_{g}^{*}\omega,\ \mathring{A}_{(\mathbf{m},0,\mathbf{Q})}(\dot{\mathbf{m}},\dot{\mathbf{a}}, \dot{\mathbf{Q}})+\mathcal{L}_{\omega^{\#}}A+d\phi\right). \tag{7.2}\] _More specifically_ 1. _If_ \(\sigma\neq 0\)_, then_ \[(\mathring{g},\mathring{A})=\left(2\delta_{g}^{*}\omega,\ \mathcal{L}_{\omega^{ \sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in e^{-i\sigma t_{*}}\bar{H}_{\mathrm{b}}^{\infty,\ell^{\prime}} (\bar{X};\widetilde{scT^{*}}\bar{X}\oplus\mathbb{C})\) _for some_ \(\ell^{\prime}\in\mathbb{R}\)_._ 2. _If_ \(\sigma=0\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, we decompose the perturbation_ \((\mathring{g},\mathring{A})\) _into spherical harmonics whose detail is given in SS_6_. Then_ 1. _If_ \((\mathring{g},\mathring{A})\) _is of scalar type with_ \(l\geq 1\) _or vector type_ \(l\geq 2\)_, then_ \[(\mathring{g},\mathring{A})=\left(2\delta_{g}^{*}\omega,\ \mathcal{L}_{\omega^{ \sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in\bar{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{X};\widetilde{scT^{*} }\bar{X}\oplus\mathbb{C})\) _is of the same type as_ \((\mathring{g},\mathring{A})\) _._ * _If_ \((\dot{g},\dot{A})\) _is of scalar type with_ \(l=0\)_, i.e. 
spherically symmetric, then_ \[(\dot{g},\dot{A})=\left(\dot{g}_{({\bf m},0,{\bf Q})}({\dot{\bf m}},0,\dot{\bf Q })+2\delta_{g}^{*}\omega,\ \dot{A}_{({\bf m},0,{\bf Q})}(0,0,\dot{\bf Q})+{\mathcal{L}}_{ \omega^{\sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X};\widetilde{scT^{*}} \bar{X}\oplus\mathbb{C})\) _is spherically symmetric._ * _If_ \((\dot{g},\dot{A})\) _is of vector type_ \(l=1\)_, then_ \[(\dot{g},\dot{A})=\left(\dot{g}_{({\bf m},0,{\bf Q})}(0,\dot{\bf a},0)+2\delta_ {g}^{*}\omega,\ \dot{A}_{({\bf m},0,{\bf Q})}(0,\dot{\bf a},0)+{\mathcal{L}}_{ \omega^{\sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X};\widetilde{scT^{*}} \bar{X}\oplus\mathbb{C})\) _is of vector type with_ \(l=1\)_._ _The statement of stationary perturbation_ \((\dot{g},\dot{A})\) _(_\(\sigma=0\)_) of scalar type with_ \(l=0,1\) _can be extended to the following cases:_ * _If_ \((\dot{g},\dot{A})\in\bar{H}_{\rm b}^{\infty,\ell}(\bar{X};S^{2}\widetilde{scT^ {*}}\bar{X}\oplus\widetilde{scT^{*}}\bar{X})\) _with_ \(-\frac{5}{2}<\ell<-\frac{3}{2}\) _is a stationary perturbation of scalar type_ \(l=1\)_, then_ \[(\dot{g},\dot{A})=\left(2\delta_{g}^{*}\omega,\ {\mathcal{L}}_{\omega^{ \sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X};\widetilde{scT^{*}} \bar{X}\oplus\mathbb{C})\) _is of the scalar type_ \(l=1\)_._ * _If_ \((\dot{g},\dot{A})\in\mbox{Poly}(\iota_{*})^{k}\bar{H}_{\rm b}^{\infty,\ell}( \bar{X};S^{2}\widetilde{scT^{*}}\bar{X}\oplus\widetilde{scT^{*}}\bar{X})\) _is of scalar type_ \(l=0\)_, then_ \[(\dot{g},\dot{A})=\left(\dot{g}_{({\bf m},0,{\bf Q})}({\dot{\bf m}},0,\dot{\bf Q })+2\delta_{g}^{*}\omega,\ \dot{A}_{({\bf m},0,{\bf Q})}(0,0,\dot{\bf Q})+{\mathcal{L}}_{ \omega^{\sharp}}A+d\phi\right)\] _where_ \((\omega,\phi)\in\mbox{Poly}(\iota_{*})^{k+1}\bar{H}_{\rm b}^{\infty,\ell^{ \prime}}(\bar{X};\widetilde{scT^{*}}\bar{X}\oplus\mathbb{C})\) _is of scalar type_ \(l=0\) _for some_ \(\ell^{\prime}<\ell\)_._ _Remark 7.2_.: Our discussion corresponds to the case \(n=2,K=1,\kappa^{2}=2,E_{0}=r^{-2}{\bf Q},Q^{2}={\bf Q}^{2},\Lambda=0\) (and thus \(\lambda=0\)), in [80]. We note that [80] discusses the scalar and vector type perturbations for non-stationary modes (\(\sigma\neq 0\)) with \(l\geq 2\) in detail, but the corresponding stationary perturbations (\(\sigma=0\)) are only treated in the uncharged case in [79]. [80] also studies the scalar (Appendix D) and vector (SS4) perturbations for modes \(l=1\) (which is called exceptional modes there), however, they do not give a full description of the scalar \(l=0\) (spherically symmetric) perturbations. ### Scalar type perturbations We shall consider the scalar type perturbations of modes \(l\geq 2,l=1\) and \(l=0\) separately. Recall that we can write the perturbations under the splitting (6.9) and (6.10) as in (6.11). We further introduce a rescaled version of the traceless part of the spherical harmonic symmetric \(2\)-tensor \(\oint_{0}^{*}\not{\mathcal{S}}_{l}\) which is built from the spherical harmonic function \({\sf S}\in{\bf S}_{l}\) with eigenvalues \(k^{2}=l(l+1)\): \[\not{H}_{k}{\sf S}=k^{-2}\not{\boldsymbol{\xi}}_{0}^{*}\not{\sf dS}\quad\mbox{ where}\quad{\sf S}\in{\bf S}_{l}\mbox{ with }l\neq 0.\] #### 7.1.1. 
Modes with \(l\geq 2\) Suppose \((\dot{g},\dot{A})\) is the scalar perturbations of the following form \[\dot{g}=\begin{pmatrix}\widetilde{f}{\sf S}\\ -\frac{r}{k}f\otimes\not{\sf dS}\\ 2r^{2}(H_{L}\not{\sf S}\not{g}+H_{T}\not{H}_{k}{\sf S})\end{pmatrix},\quad \dot{A}=\begin{pmatrix}\widetilde{K}{\sf S}\\ -\frac{r}{k}K\not{\sf dS}\end{pmatrix}\quad\mbox{with}\quad{\sf S}\in{\bf S}_ {l}\ (l\geq 2). \tag{7.3}\] We notice that the pure gauge solutions take the form \[\not{\boldsymbol{\delta}}\dot{g}=2\delta_{g}^{*}\omega=\begin{pmatrix}2\dot{ \delta}^{*}T\\ -\frac{r}{k}(-\frac{k}{r}T+r\dot{dr}^{-1}L)\not{\sf dS}\\ 2r^{2}(r^{-1}\iota_{dr}T+\frac{k}{2r}L)\not{\sf S}\not{g}-\frac{k}{r}L\not{H}_ {k}{\sf S}\end{pmatrix},\ \ \not{\boldsymbol{\delta}}\dot{A}={\mathcal{L}}_{\omega^{ \sharp}}A+d\phi=\begin{pmatrix}(\dot{\dot{a}}_{A}T+\iota_{r}\dot{d}A+\dot{d}P){ \sf S}\\ -\frac{r}{k}(-\frac{k}{r}t_{A}T-\frac{k}{r}P)\not{\sf dS}\end{pmatrix} \tag{7.4}\] with \[\omega=\begin{pmatrix}T{\sf S}\\ -\frac{r}{k}L\not{\sf dS}\end{pmatrix},\ \ \phi=P{\sf S} \tag{7.5}\] where \(T\in C^{\infty}(\dot{X};T^{*}\hat{X}),L,P\in C^{\infty}(\mathring{X})\). When adding \((\boldsymbol{\delta}\dot{g},\boldsymbol{\delta}\dot{A})\) to \((\dot{g},\dot{A})\), the quantities \(\tilde{f},f,H_{L},H_{T},\tilde{K},K\) change by \[\begin{split}\boldsymbol{\delta}\widetilde{f}=2\dot{\delta}^{*}T, \quad\boldsymbol{\delta}f=-\frac{k}{r}T+r\dot{d}(r^{-1}L),\quad\boldsymbol{ \delta}H_{L}=r^{-1}\iota_{dr}T+\frac{k}{2r}L,\quad\boldsymbol{\delta}H_{T}=- \frac{k}{r}L;\\ \boldsymbol{\delta}\widetilde{K}=\dot{d}_{t}A+\iota_{T}\dot{d}A+\dot{d}P, \quad\boldsymbol{\delta}K=-\frac{k}{r}(\iota_{A}T+P).\end{split} \tag{7.6}\] We define \(\mathbf{X}:=\frac{r}{k}(f+\frac{r}{k}\dot{d}H_{T})\), then \(\boldsymbol{\delta}\mathbf{X}=\frac{r}{k}(\boldsymbol{\delta}f+\frac{r}{k} \boldsymbol{\delta}\boldsymbol{\delta}H_{T})=-T\) because \(\boldsymbol{\delta}\) commutes with any geometric operators we use in this manuscript. As a consequence, the following quantities \[\begin{split}\widetilde{F}&:=\widetilde{f}+2\hat{ \delta}^{*}\mathbf{X}\in C^{\infty}(\hat{X};S^{2}T^{*}\hat{X}),\\ J&:=H_{L}+\frac{1}{2}H_{T}+r^{-1}{}_{ldr}\mathbf{X} \in C^{\infty}(\mathring{X};T^{*}\mathring{X}),\\ N&:=\widetilde{K}+\dot{d}\left(\frac{r}{k}K\right)+ \iota_{\mathbf{X}}\dot{d}A\in C^{\infty}(\mathring{X};T^{*}\mathring{X})\end{split} \tag{7.7}\] are gauge-invariant, that is, \(\boldsymbol{\delta}\widetilde{F}=0,\boldsymbol{\delta}J=\boldsymbol{\delta}N=0\). Conversely, if \(\widetilde{F}=0,J=N=0\), one can verify \((\dot{g},\dot{A})\) is a pure gauge solution \[(\dot{g},\dot{A})=(2\delta^{*}\omega,\mathcal{L}_{\omega^{4}}A+d\phi)\ \ \text{with}\ \ \omega=\begin{pmatrix}-\mathbf{X}\mathsf{S}\\ \frac{r^{2}}{k^{2}}H_{T}\boldsymbol{\delta}\end{pmatrix},\ \ \phi=-\frac{r}{k}K+\iota_{A}\mathbf{X}. \tag{7.8}\] If \((\dot{g},\dot{A})\) are outgoing mode solutions with frequency \(\sigma\neq 0\), then \(\omega\) and \(\phi\) are modes of the same type, with an extra factor \(r^{2}\) and \(r\) respectively relative to \(\dot{g}\) and \(\dot{A}\). If \(\sigma=0\), \((\omega,\phi)\) grow at most by a factor \(r\) more than \((\dot{g},\dot{A})\). Since the linearized Einstein-Maxwell system is gauge-invariant, we can express it in terms of the gauge-invariant quantities defined above. 
Concretely, one can choose a gauge, i.e., add a pure gauge solution \((\boldsymbol{\delta}\dot{g},\boldsymbol{\delta}\dot{A})\) to \((\dot{g},\dot{A})\) for suitable \(\omega=\omega(T,L),\phi=\phi(P)\) such that the non gauge-invariant quantities \(\tilde{f}\) etc. take a simple form in the new gauge. More specifically, we adds \((\boldsymbol{\delta}\dot{g},\boldsymbol{\delta}\dot{A})\) built from \(T=\mathbf{X},L=\frac{r}{k}H_{T},P=\frac{r}{k}K-\iota_{A}\mathbf{X}\), then \(\mathbf{X}+\boldsymbol{\delta}\mathbf{X}=T-T=0,H_{T}+\boldsymbol{\delta}H_{T}= \frac{k}{r}L-\frac{k}{r}L=0\) which implies \(f+\boldsymbol{\delta}f=0,\tilde{f}+\boldsymbol{\delta}\tilde{f}=\tilde{F},H_ {L}+\boldsymbol{\delta}H_{L}=J\), and \(K+\boldsymbol{\delta}K=K-\frac{k}{r}\iota_{A}\mathbf{X}-\frac{k}{r}(\frac{r}{ k}K-\iota_{A}\mathbf{X})=0\) which implies \(\widetilde{K}+\boldsymbol{\delta}\widetilde{K}=N\). To summarize, if we replace \((\dot{g},\dot{A})\) by \((\dot{g}+\boldsymbol{\delta}\dot{g},\dot{A}+\boldsymbol{\delta}\dot{A})\), in the new gauge we have \((\widetilde{f},f,H_{L},H_{T},\widetilde{K},K)=(\widetilde{F},0,J,0,N,0)\). Therefore, using the detailed calculation in SS6 and SSA.1 we can write the linearized Einstein-Maxwell system, acting on the new \((\dot{g},\dot{A})\) \[\dot{g}=\begin{pmatrix}&\widetilde{F}\mathsf{S}\\ &0\\ &2r^{2}J\boldsymbol{\delta}\boldsymbol{\dot{g}}\end{pmatrix},\quad\dot{A}= \begin{pmatrix}&N\mathsf{S}\\ &0\end{pmatrix}\] in terms of the gauge-invariant quantities \(\widetilde{F},J,N\). Now we express \(2\mathcal{L}_{1}(\dot{g},\dot{A})=0\) and \(\mathcal{L}_{2}(\dot{g},\dot{A})=0\) in the form of (7.3) in terms of \(\widetilde{f}^{E},f^{E},H_{L}^{E},H_{T}^{E}\) and \(\widetilde{K}^{E},K^{E}\) respectively. Then the linearized Einstein-Maxwell system reads \[\begin{split}\widetilde{f}^{E}&=\left(-\mathring{\square}-2 \hat{\delta}^{*}\mathring{\delta}-\hat{\delta}^{*}\mathring{d}\mathring{\text {tr}}\right)\widetilde{F}+2r^{-1}\left(2\hat{\delta}^{*}\iota_{dr}\widetilde{F }-(\partial^{i}r)\mathring{\nabla}_{i}\widetilde{F}\right)-4\hat{\delta}^{*} \mathring{d}J-8r^{-1}(dr)\otimes_{s}\mathring{d}J\\ &\qquad+(-\mu_{b_{0}}^{\prime\prime}+k^{2}r^{-2})\widetilde{F}+(-2 \mathbf{Q}^{2}r^{-4}+\mu_{b_{0}}^{\prime\prime})\hat{g}\mathring{\text{tr}} \widetilde{F}-4\mathbf{Q}r^{-2}\hat{g}^{*}\mathring{d}\dot{N}=0,\\ -\frac{r}{k}f^{E}&=-\mathring{\delta}\widetilde{F}-2\dot{d}J-r \mathring{d}(r^{-1}\mathring{\text{tr}}\widetilde{F})+4\mathbf{Q}r^{-2}\hat{*}N =0,\\ 2r^{2}H_{L}^{E}&=-\mathring{\square}(2r^{2}J)-2r\iota_{dr} \mathring{\delta}\widetilde{F}+2\iota_{dr}\iota_{dr}\widetilde{F}-r\iota_{dr} \mathring{d}\widetilde{\text{tr}}\widetilde{F}+(r\mu_{b_{0}}^{\prime}+\frac{k^{2 }}{2}+2\mathbf{Q}^{2}r^{-2})\mathring{\text{tr}}\widetilde{F}\\ &\qquad+(2k^{2}-4\mathbf{Q}^{2}r^{-2})J+4\mathbf{Q}\hat{*}\dot{d}N=0, \\ 2r^{2}H_{T}^{E}&=-k^{2}\mathring{\text{tr}}\widetilde{F}=0, \\ \widetilde{K}^{E}&=r^{-2}\mathring{\delta}(r^{2}\dot{d}N)+k^{2}r^{- 2}N+\frac{1}{2}\mathbf{Q}r^{-2}\hat{*}\mathring{d}\mathring{\text{tr}} \widetilde{F}-2\mathbf{Q}r^{-2}\hat{*}\mathring{d}J=0,\\ -\frac{r}{k}K^{E}&=-\mathring{\delta}N=0.\end{split} \tag{7.9f}\] By (7.9f), all terms containing \(\mathring{\text{tr}}\widetilde{F}\) vanish, then plugging \(\mathring{\delta}\widetilde{F}=-2\dot{d}J+4\mathbf{Q}r^{-2}\hat{*}N\) into (7.9a) yields the cancellation of the term \(4\hat{\delta}^{*}\mathring{d}J\) and thus one obtains a wave equation of \(\widetilde{F}\) (coupled to \(J,N\) only via subprincipal terms). 
Moreover, by (7.9f), one adds the zero term \(\mathring{d}\mathring{\delta}N\) to (7.9e) and then can obtain a wave equation for \(N\) (coupled to \(J\) via subprincipal terms). In conclusion, we can rewrite equation (7.9a), (7.9c) and (7.9e) as a principally scalar system of wave equations for \((\widetilde{F},J,N)\) \[-\mathring{\square}B-\mathscr{D}B=0\quad\text{where}\quad B=\begin{pmatrix} \widetilde{F}\\ J\\ N\end{pmatrix} \tag{7.10}\] where \(\mathscr{D}\) is a first order stationary differential operator acting on \(C^{\infty}(\mathring{X};S^{2}T^{*}\mathring{X}\oplus\mathbb{R}\oplus T^{*} \mathring{X})\). When \((\dot{g},\dot{A})\) and thus \(B\) are smooth modes, this system of equations become a system of ODEs on the one dimensional space \(t_{*}^{-1}(0)\) with a regular singular point at the event horizon \(r=r_{b_{0}}\) whose solution is given by Frobenius series, which means that the vanishing of \(B\) in the static region \(r>r_{b_{0}}\) implies the vanishing of \(B\) on \(\mathring{X}\). As a result, it suffices to prove \(B=0\) in the static region \(r>r_{b_{0}}\). To achieve this, we can work in the static coordinates \((t,r)\). First, since \(\mathring{\delta}N=\dot{*}d\mathring{*}N=0\), we have \(\mathring{d}\mathring{*}N=0\) and then \(\mathring{*}N=\mathring{d}\mathcal{N}\) for some \(\mathcal{N}\in C^{\infty}(\mathring{X})\) according to \(H^{1}_{dR}(\mathring{X})=0\), which implies \(N=\mathring{*}\mathring{d}\mathcal{N}\). Returning to the equation (7.9e), we find \[r^{2}\mathring{*}\widetilde{K}^{E}=\mathring{d}(-r^{2}\mathring{\square} \mathcal{N})+k^{2}\mathring{d}\mathcal{N}-2\mathbf{Q}\mathring{d}J=\mathring{ d}\left(-r^{2}\mathring{\square}\mathcal{N}+k^{2}\mathcal{N}-2\mathbf{Q}J\right)=0 \tag{7.11}\] and thus \[-r^{2}\mathring{\square}\mathcal{N}+k^{2}\mathcal{N}-2\mathbf{Q}J=C\quad \text{for some}\ \ C\in\mathbb{R}.\] Replacing \(\mathcal{N}\) by \(\mathcal{N}-\frac{C}{k^{2}}\), we still have \(N=\mathring{*}d\mathring{\mathcal{N}}\) and meanwhile \[-\mathring{\square}\mathcal{N}+k^{2}r^{-2}\mathcal{N}-2\mathbf{Q}r^{-2}J=0. \tag{7.12}\] We note that since \(N\), and thus \(\mathring{*}N\) is a mode, \(\mathcal{N}\) can also be chosen as a mode. More specifically, suppose \(\mathring{*}N=e^{-it_{*}\sigma}(N_{0}dt_{*}+N_{1}dr)\) where \(N_{0},N_{1}\) are smooth functions of \(r\) and \(\sigma\neq 0\), we let \(\mathcal{N}=e^{-it_{*}\sigma}\mathcal{N}_{0}\) with \(\mathcal{N}_{0}=i\sigma^{-1}N_{0}\). Then \(N_{1}=\partial_{r}\mathcal{N}_{0}=i\sigma^{-1}\partial_{r}N_{0}\) will be satisfied automatically because \(\mathring{d}\mathring{*}N=e^{-it_{*}\sigma}(-\partial_{r}N_{0}-i\sigma N_{1}) dt_{*}\wedge dr=0\). Next, according to the second Bianchi identity \(\delta_{g}G_{g}\mathrm{Ric}(g)=0\) (which holds for any metric \(g\)), one concludes for all \((g,F)\) \[\delta_{g}G_{g}(\mathrm{Ric}(g)-2T(g,F))=-2\delta_{g}T(g,F)=-2g^{\alpha\beta} (\delta_{g}F)_{\alpha}F_{\nu\beta}-g^{\alpha\kappa}g^{\beta\lambda}F_{\alpha \beta}(\nabla_{\nu}F_{\kappa\lambda}+2\nabla_{\kappa}F_{\lambda\nu}).\] Since \(dF_{\nu\kappa\lambda}=\nabla_{\nu}F_{\kappa\lambda}-\nabla_{\lambda}F_{\nu \kappa}+\nabla_{\kappa}F_{\lambda\nu}\) and \(g^{\alpha\kappa}g^{\beta\lambda}F_{\alpha\beta}(\nabla_{\lambda}F_{\nu\kappa }+\nabla_{\kappa}F_{\lambda\nu})=0\), it follows that \[\delta_{g}G_{g}(\mathrm{Ric}(g)-2T(g,F))=-2\delta_{g}T(g,F)=-2g^{\alpha\beta} (\delta_{g}F)_{\alpha}F_{\nu\beta}-g^{\alpha\kappa}g^{\beta\lambda}F_{\alpha \beta}(dF)_{\nu\kappa\lambda}. 
\tag{7.13}\] Linearizing the above equation around \((g,F)=(g_{b_{0}},dA_{b_{0}})\), if \((\dot{g},\dot{A})\) is a solution to the Maxwell part of the linearized Einstein-Maxwell system, i.e., \(\mathcal{L}_{2}(\dot{g},\dot{A})=0\), we find by using \(\mathrm{Ric}(g)-2T(g,dA)=0,\delta_{g}dA=0,d^{2}A=0\) that \[\delta_{g}G_{g}\mathcal{L}_{1}(\dot{g},d\dot{A})=0,\quad\text{i.e.,}\quad \delta_{g}G_{g}\begin{pmatrix}\widetilde{f}^{E}\mathsf{S}\\ -\frac{\mathbf{r}}{k}f^{E}\otimes\not{d}\mathsf{S}\\ 2r^{2}(H_{L}^{E}\mathsf{S}\not{g}+H_{T}^{E}\not{H}_{k}\mathsf{S})\end{pmatrix}=0, \tag{7.14}\] which gives the following system of equations for \((\widetilde{f}^{E},f^{E},H_{L}^{E},H_{T}^{E})\) \[r^{-2}\mathring{\delta}(r^{2}\widetilde{f}^{E})+\frac{1}{2}\mathring{d} \mathring{\mathrm{r}}\widetilde{f}^{E}-\frac{\mathbf{k}}{r}f^{E}+2r^{-2} \mathring{d}(r^{2}H_{L}^{E})=0,\] \[\frac{1}{2}\mathring{\mathrm{r}}\widetilde{f}^{E}-\frac{1}{kr^{2}}\mathring{ \delta}(r^{3}f^{E})+\frac{k^{2}-2}{k^{2}}H_{T}^{E}=0.\] We define \[\widetilde{E}=G_{g}\widetilde{f}^{E}-2H_{L}^{E}\mathring{g}, \tag{7.15}\] it is clear that \(\widetilde{E}=0\) implies \((\widetilde{f}^{E},H_{L}^{E})=(h\mathring{g},0)\) for some \(h\in C^{\infty}(\mathring{X})\). If in addition equations (7.9b) and (7.9d) hold (i.e. \(f^{E}=0\) and \(H_{T}^{E}=0\)), one obtains \(\mathring{\mathrm{r}}\widetilde{f}^{E}=0,\mathring{\delta}(r^{2}\widetilde{E} )=0\), and thus \((\widetilde{f}^{E},H_{L}^{E})=(0,0)\). We further write \(\mathring{\delta}(r^{2}\widetilde{E})=0\) in \((t,r)\) coordinates and find its \(dr\) component \[-\frac{\mu_{b_{0}}^{\prime}}{2\mu_{b_{0}}^{2}}\widetilde{E}_{tt}+\mu_{b_{0}}^{ \prime}\partial_{t}\widetilde{E}_{tr}+\mu_{b_{0}}^{-1/2}\partial_{r}(\mu_{b_{0}}^{ 3/2}\widetilde{E}_{rr})=0.\] Then the vanishing of \(\widetilde{E}_{tr}\) and \(\widetilde{E}_{rr}\) implies \(\widetilde{E}_{tt}=0\) because \(\mu_{b_{0}}^{-2}\mu_{b_{0}}^{\prime}=\mu_{b_{0}}^{-2}(2\mathbf{m}r^{-2}-2 \mathbf{Q}^{2}r^{-3})\neq 0\) in the static region \(r>r_{b_{0}}=\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}>\mathbf{Q}^{2}/ \mathbf{m}\) in the subextremal charge case \(\mathbf{m}>\mathbf{Q}\). In summary, \((\widetilde{f}^{E},f^{E},H^{E}_{L},H^{E}_{T})=0\) is equivalent to \((f^{E},H^{E}_{T},\widetilde{E}_{tr},\widetilde{E}_{rr})=0\). So our goal is to prove \((\widetilde{F},J,\mathcal{N})=0\), provided that \((f^{E},H^{E}_{T},\widetilde{E}_{tr},\widetilde{E}_{rr},\widetilde{K}^{E},K^{E })=0\). Following [80, equation (5.27)] or [60, equation (5.37)], we write \[\widetilde{F}-2J\hat{g}=\begin{pmatrix}\mu_{b_{0}}X\\ -\mu_{b_{0}}^{-1}Z\\ -\mu_{b_{0}}^{-1}Y\end{pmatrix} \tag{7.16}\] in the splitting (A.12). From now on, we will use the fact \(\dot{\mathrm{tr}}\widetilde{F}=0\) which is implied by \(H^{E}_{T}=0\) without specified till the end of this subsubsection. One can recover \(\widetilde{F},J\) as \[\widetilde{F}=\begin{pmatrix}\frac{\mu_{b_{0}}}{2}(X-Y)\\ -\mu_{b_{0}}^{-1}Z\\ \frac{1}{2\mu_{b_{0}}}(X-Y)\end{pmatrix},\quad J=\frac{X+Y}{4}. \tag{7.17}\] Then the equation (7.9b) \(f^{E}=0\) implies \(\dot{\delta}(\widetilde{F}-2J\hat{g})=4\mathbf{Q}r^{-2}\hat{g}\dot{\delta} \mathcal{N}\), which we express in terms of \(X,Y,Z,\mathcal{N}\) as \[\partial_{t}X+\partial_{r}Z=4\mathbf{Q}r^{-2}\partial_{t}\mathcal{N},\quad- \frac{\mu_{b_{0}}^{\prime}}{2\mu_{b_{0}}}(X-Y)-\mu_{b_{0}}^{-2}\partial_{t}Z+ \partial_{r}Y=4\mathbf{Q}r^{-2}\partial_{r}\mathcal{N}. 
\tag{7.18}\] The equation \(\widetilde{E}_{tr}=\widetilde{f}^{E}_{tr}=0\) reads (we use the equation (7.18) during the calculation) \[-\partial_{t}\partial_{r}X-\partial_{t}\partial_{r}Y+\frac{\mu_{b_{0}}^{\prime }}{2\mu_{b_{0}}}\partial_{t}X+(\frac{\mu_{b_{0}}^{\prime}}{2\mu_{b_{0}}}-\frac {2}{r})\partial_{t}Y-\frac{k^{2}}{r^{2}\mu_{b_{0}}}Z=0. \tag{7.19}\] Finally the equation \(\mu_{b_{0}}\widetilde{E}_{rr}=\frac{1}{2}(\mu_{b_{0}}^{-1}\widetilde{f}^{E}_{ tt}+\mu_{b_{0}}\widetilde{f}^{E}_{rr})-2H^{E}_{L}=0\) becomes (we use equation (7.18) during the calculation) \[-\mu_{b_{0}}^{-1}\partial_{t}^{2}X-\mu_{b_{0}}^{-1}\partial_{t}^{2}Y+\frac{ \mu_{b_{0}}^{\prime}}{2}\partial_{r}X+(\frac{\mu_{b_{0}}^{\prime}}{2}+\frac{ 2\mu_{b_{0}}}{r})\partial_{r}Y-\frac{4}{r\mu_{b_{0}}}\partial_{t}Z-\frac{ \mathbf{Q}^{2}}{r^{4}}X-\left(\frac{k^{2}-2}{r^{2}}+\frac{3\mathbf{Q}^{2}}{r^ {2}}\right)Y+\frac{4k^{2}\mathbf{Q}}{r^{4}}\mathcal{N}=0. \tag{7.20}\] Therefore, it suffices to prove \((X,Y,Z,\mathcal{N})=0\) provided that they satisfy the equations (7.12) and (7.18)-(7.20). We now discuss the case that \((\hat{g},\hat{A})\) is a mode solution with \(\sigma\neq 0\). After taking Fourier transform with respect to \(t\), setting \(\partial_{t}(X,Y,Z,\mathcal{N})=-i\sigma(X,Y,Z,\mathcal{N})\) and solving for \((X^{\prime},Y^{\prime},Z^{\prime})=\partial_{r}(X,Y,Z)\), we obtain from equations (7.18) and (7.19) \[\begin{pmatrix}X^{\prime}\\ Y^{\prime}\\ \frac{Z^{\prime}}{i\sigma}\end{pmatrix}=T\begin{pmatrix}X\\ Y\\ \frac{Z}{i\sigma}\end{pmatrix}+f\quad\text{where}\quad T=\begin{pmatrix}0&\frac{ \mu_{b_{0}}^{\prime}}{\mu_{b_{0}}}-\frac{2}{r}&\frac{k^{2}}{r^{2}\mu_{b_{0}}}- \frac{\sigma^{2}}{\mu_{b_{0}}^{2}}\\ \frac{\mu_{b_{0}}^{\prime}}{2\mu_{b_{0}}}&-\frac{\mu_{b_{0}}^{\prime}}{2\mu_{b_ {0}}}&\frac{\sigma^{2}}{\mu_{b_{0}}^{2}}\\ 1&0&0\end{pmatrix},\quad f=\frac{4\mathbf{Q}}{r^{2}}\begin{pmatrix}-\mathcal{N} ^{\prime}\\ \mathcal{N}^{\prime}\\ -\mathcal{N}\end{pmatrix},\] while the equation (7.20) implies the following linear constraint on \(X,Y,Z\) \[\gamma\begin{pmatrix}X\\ Y\\ \frac{Z}{i\sigma}\end{pmatrix}=h \tag{7.21}\] where \(h=-\frac{4\mathbf{Q}}{r^{2}}(\frac{2\mu_{b_{0}}}{r}\mathcal{N}^{\prime}+\frac{ k^{2}}{r^{2}}\mathcal{N})\) and \[\gamma=(\frac{\sigma^{2}}{\mu_{b_{0}}}-\frac{\mathbf{Q}^{2}}{r^{4}}+\frac{(\mu_{ b_{0}}^{\prime})^{2}}{4\mu_{b_{0}}}+\frac{\mu_{b_{0}}^{\prime}}{r},\ \frac{\sigma^{2}}{\mu_{b_{0}}}-\frac{k^{2}-2}{r^{2}}-\frac{3\mathbf{Q}^{2}}{r^{4} }+\frac{(\mu_{b_{0}}^{\prime})^{2}}{4\mu_{b_{0}}}-\frac{2\mu_{b_{0}}^{\prime}}{r },\ -\frac{2\sigma^{2}}{r\mu_{b_{0}}}+\frac{k^{2}\mu_{b_{0}}^{\prime}}{2r^{2}\mu_{b_{0} }}).\] Following [80] and [60], one can find a linear combination \(\Phi\) of \(X,Y,\frac{Z}{i\sigma}\) which satisfies a second order ODE. Concretely, let \[\Phi:=\frac{\frac{2Z}{i\sigma}-r(X+Y)}{H}\quad\text{with}\quad m:=k^{2}-2,\quad x :=\frac{2\mathbf{m}}{r},\quad z:=\frac{\mathbf{Q}^{2}}{r^{2}},\quad H:=m+3x-4z. \tag{7.22}\] Then the second order ODE for \(\Phi\) is (see [80, equation (5.41)], [60, equation (5.49)]) \[(\mu_{b_{0}}\partial_{r})^{2}\Phi+(\sigma^{2}-V_{\Phi})\Phi=F_{\Phi}\mathcal{N} \tag{7.23}\] where \(V_{\Phi}\) and \(F_{\Phi}\) are (see [80, equations (5.43), (5.44), (C.1)], [60, equation (B.1)]) \[\begin{split}& V_{\Phi}=\frac{\mu_{b_{0}}}{r^{2}H^{2}}\Big{(}9x^{ 3}-9(6z-m)x^{2}+(72z^{2}-8(4m-3)z+3m^{2})x-32z^{3}+24mz(z+1)+m^{2}(m+2)\Big{)} \\ & F_{\Phi}=\frac{8\mathbf{Q}\mu_{b_{0}}}{r^{3}H^{2}}\Big{(}-3x^{ 2}+(2z+6)x+m(m+4)\Big{)}\end{split}. 
\tag{7.24}\] Conversely, \(X,Y,\frac{Z}{i\sigma}\) can be expressed in terms of \(\Phi\) and \(\mathcal{N}\) as \[\begin{split}& X=(\frac{\sigma^{2}r}{\mu_{b_{0}}}-\frac{P_{X_{0}}} {2rH^{2}})\Phi+\frac{P_{X_{1}}}{2H}\Phi^{\prime}-\frac{2\mathbf{Q}P_{X \mathcal{N}}}{r^{2}H^{2}}\mathcal{N}-\frac{8\mathbf{Q}\mu_{b_{0}}}{rH} \mathcal{N}^{\prime}\\ & Y=(-\frac{\sigma^{2}r}{\mu_{b_{0}}}-\frac{P_{Y_{0}}}{2rH^{2}}) \Phi+\frac{P_{Y_{1}}}{2H}\Phi^{\prime}-\frac{2\mathbf{Q}P_{Y\mathcal{N}}}{r^{2 }H^{2}}\mathcal{N}+\frac{8\mathbf{Q}\mu_{b_{0}}}{rH}\mathcal{N}^{\prime}\\ &\frac{Z}{i\sigma}=\frac{P_{Z}}{2H}\Phi-r\mu_{b_{0}}\Phi^{\prime} -\frac{8\mathbf{Q}\mu_{b_{0}}}{rH}\mathcal{N}^{\prime}\end{split} \tag{7.25}\] where \(P_{X_{0}},P_{X_{1}},P_{X\mathcal{N}},P_{Y_{0}},P_{Y_{1}},P_{Y\mathcal{N}},P_{ Z}\) are functions of \(r\) (see [80, equations (5.46), (C.4)-(C.8)] and [60, equations (B.2)]) \[\begin{split}& P_{X_{0}}=27x^{3}-24(5z-m)x^{2}+\Big{(}152z^{2}-2(35m-1 2)z+3m(3m+2)\Big{)}x\\ &\qquad\quad-64z^{3}+48mz^{2}-8m(m-2)z+2m^{2}(m+2),\\ & P_{X_{1}}=9x^{2}-(16z-5m+6)x+8z^{2}-6mz-4m,\\ & P_{X\mathcal{N}}=-18x^{2}+4(8z-m+6)x-16z^{2}+4(m-4)z+2m(m+6),\\ & P_{Y_{0}}=9x^{3}-6(10z-m)x^{2}+\Big{(}120z^{2}-2(11m-12)z+3m(m+ 2)\Big{)}x\\ &\qquad\quad-64z^{3}+16(m-4)z^{2}-8m(m+2)z,\\ & P_{Y_{1}}=3x^{2}-(12z+m+6)x+8z^{2}+2(m+8)z,\\ & P_{Y\mathcal{N}}=-6x^{2}+4(6z-m)x-16z^{2}+4(m-4)z-2m(m+2),\\ & P_{Z}=3x^{2}+(-2z+3m)x-(4m+8)z-2m.\end{split} \tag{7.26}\] Therefore it suffices to prove \((\Phi,\mathcal{N})=0\). Recall that \(\mathcal{N}\) satisfies the wave equation (7.12) and since \(\mathcal{N}\) is also a mode, we can rewrite it as \[(\mu_{b_{0}}\partial_{r})^{2}\mathcal{N}+(\sigma^{2}-\frac{\mu_{b_{0}}k^{2}}{ r^{2}})\mathcal{N}=-\frac{2\mathbf{Q}J}{r^{2}}=-\frac{\mu_{b_{0}}\mathbf{Q}(X+Y)}{2r^{2}}. \tag{7.27}\] Using (7.25) and (7.26), we find \[\begin{split} X+Y&=-(\frac{P_{X_{0}}}{2rH^{2}}+\frac {P_{Y_{0}}}{2rH^{2}})\Phi+(\frac{P_{X_{1}}}{2H}+\frac{P_{Y_{1}}}{2H})\Phi^{ \prime}-(\frac{2\mathbf{Q}P_{X\mathcal{N}}}{r^{2}H^{2}}+\frac{2\mathbf{Q}P_{Y \mathcal{N}}}{r^{2}H^{2}})\mathcal{N}\\ &=-(\frac{H}{r}-\frac{P_{Z}}{rH})\Phi-2\mu_{b_{0}}\Phi^{\prime}- \frac{16\mathbf{Q}\mu_{b_{0}}}{r^{2}H}\mathcal{N}\end{split}\] and thus (see [80, equation (5.49)] and [60, eqaution (5.52)]) \[(\mu_{b_{0}}\partial_{r})^{2}\mathcal{N}+(\sigma^{2}-V_{\mathcal{N}}) \mathcal{N}=F_{\mathcal{N}0}\Phi+F_{\mathcal{N}1}\Phi^{\prime} \tag{7.28}\] with \[V_{\mathcal{N}}=\frac{\mu_{b_{0}}k^{2}}{r^{2}}+\frac{8\mathbf{Q}^{2}\mu_{b_{0} }^{2}}{r^{4}H},\quad F_{\mathcal{N}0}=\frac{\mathbf{Q}\mu_{b_{0}}}{2r^{2}}( \frac{H}{r}-\frac{P_{Z}}{rH}),\quad F_{\mathcal{N}1}=\frac{\mathbf{Q}\mu_{b_{0} }^{2}}{r^{2}}. \tag{7.29}\] Now we have two coupled second order ODEs (7.23) and (7.28)for \(\Phi\) and \(\mathcal{N}\) and in fact we can transform them into two decoupled second order ODEs by introducing suitable linear combinations of \(\Phi\) and \(\mathcal{N}\) (see [80, equations (5.56)-(5.64)] and [60, equations (5.55)-(5.59)]). Concretely, we set \[\Psi_{\pm}=a_{\pm}\Phi+b_{\pm}\mathcal{N} \tag{7.30}\] where \(a_{\pm},b_{\pm}\) are smooth on \(\hat{X}\) (which extends a little bit beyond the event horizon \(r=r_{b_{0}}\)) \[(a_{+},b_{+})=(\frac{\mathbf{Q}m}{2\tilde{c}}+\frac{\mathbf{Q}}{2r},1),\quad(a_ {-},b_{-})=(\frac{\tilde{c}}{6\mathbf{m}}-\frac{2\mathbf{Q}^{2}}{3\mathbf{m}r},-\frac{4\mathbf{Q}}{3\mathbf{m}}),\quad\tilde{c}=3\mathbf{m}+\sqrt{9\mathbf{m} ^{2}+4\mathbf{Q}^{2}m}. 
\tag{7.31}\] Since \(\left|\frac{\mathbf{Q}_{m}}{2\tilde{c}}+\frac{\mathbf{Q}}{2\tilde{c}_{2}}\right. \begin{array}{c}1\\ \frac{\tilde{c}}{6\mathbf{m}}-\frac{2\mathbf{Q}^{2}}{3\mathbf{m}r}\end{array} \begin{array}{c}1\\ -\frac{4\mathbf{Q}}{3\mathbf{m}}\end{array}\right|=-\frac{2\mathbf{Q}^{2}m}{3 \mathbf{m}\tilde{c}}-\frac{\tilde{c}}{6\mathbf{m}}<0\), we can recover \(\Phi,\mathcal{N}\) from \(\Psi_{\pm}\). Next the decoupled equations for \(\Phi_{\pm}\) are given as \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{\pm}+(\sigma^{2}-V_{\pm})\Psi_{\pm}=0 \tag{7.32}\] where \[V_{\pm}=\frac{a_{\pm}F_{\Phi}+b_{\pm}V_{\mathcal{N}}}{b_{\pm}}. \tag{7.33}\] Introducing the tortoise coordinate \(r_{*}=\int\mu_{b_{0}}^{-1}\,dr\), we see that the equations for \(\Psi_{\pm}\) are reduced to an eigenvalue problem of the type \(A\Psi=\sigma^{2}\Psi\) on the region \(r^{*}\in(-\infty,\infty)\) where \(A=-\partial_{r^{*}}^{2}+V(r)\) is a self-adjoint operator. As explained in [80, SS6], we expect \(V(r)\) to be non-negative on \(r\in[r_{b_{0}},\infty)\) and then naturally so is \(A\). **Lemma 7.3**.: _Given \(r\in[r_{b_{0}},\infty)\), we have \(V_{+}\geq 0\) for \(l\geq 1\), i.e., \(k^{2}\geq 2\) and \(m\geq 0\)._ Proof.: Since \(r_{b_{0}}=\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\) and \(H=m+\frac{2}{r}(3\mathbf{m}-\frac{2\mathbf{Q}^{2}}{r})\), in the subextremal case \(\mathbf{Q}<\mathbf{m}\), we have \(r\geq r_{b_{0}}>\mathbf{m}>\mathbf{Q}>\frac{2\mathbf{Q}^{2}}{3\mathbf{m}}\) and thus \(H(r)>0\). We first analyze \[F_{\Phi}=\frac{8\mathbf{Q}\mu_{b_{0}}}{r^{3}H^{2}}\Bigl{(}-3x^{2}+(2z+6)x+m(m+ 4)\Bigr{)},\] since \(x=\frac{2\mathbf{m}}{r}\in[0,2),z\geq 0,m\geq 0\), we have \(-3x^{2}+(2z+6)x+m(m+4)\geq m(m+4)\geq 0\) and thus \(F_{\Phi}\geq 0\). Next it is clear from (7.29) that \(V_{\mathcal{N}}\geq 0\). Therefore \(V_{+}\geq 0\). However, the positivity of \(V_{-}\) fails. To deal with this issue, we follow [80, SS6] and introduce the procedure _S-deformation_ of \(V_{-}\). More specifically, suppose \(\Psi\) is compactly supported in \(r_{*}\), we have \[(\Psi,A\Psi)_{L^{2}(dr_{*})}=\int|\partial_{r_{*}}\Psi|^{2}+V(r)|\Psi|^{2}\,dr _{*}=\int|\widetilde{D}\Psi|^{2}+\widetilde{V}(r)|\Psi|^{2}\,dr_{*}\] where \(\widetilde{D}=\partial_{r_{*}}+S=\mu_{b_{0}}\partial_{r}+S\) and \(\widetilde{V}=V+\mu_{b_{0}}\partial_{r}S-S^{2}\). We apply the \(S\)-deformation to \(V_{-}\) with \[S=S_{-}:=\frac{\mu_{b_{0}}\partial_{r}H_{-}}{H_{-}}=\mu_{b_{0}}\partial_{r}( \log H_{-})\ \ \text{where}\ \ H_{-}=k^{2}-2+\frac{\tilde{c}}{r}>0\ \ \text{on}\ \ [r_{b_{0}},\infty)\ \ \text{for}\ \ k^{2}\geq 2(l\geq 1)\] and then \[\widetilde{V}_{-}=\frac{k^{2}(k^{2}-2)\mu_{b_{0}}}{r^{2}H_{-}}\geq 0\ \ \text{on}\ \ [r_{b_{0}},\infty)\ \ \ \text{for}\ \ \ k^{2}\geq 2(l\geq 1). \tag{7.34}\] Namely, with \(H_{-}\) and \(V_{-}\) (which are also smooth near \(r=r_{b_{0}}\)) defined above, we write \(\widetilde{D}=\partial_{r_{*}}+S_{-}=e^{-\int S_{-}dr_{-}}\partial_{r_{*}}e^{ \int S_{-}dr^{*}}=H_{-}^{-1}\partial_{r_{*}}H_{-}=H_{-}\mu_{b_{0}}\partial_{r} H_{-}\) and find that \(D^{*}\) which is the adjoint of \(D\) with respect to \(dr_{*}=\mu_{b_{0}}^{-1}dr\) can be expressed as \(-H_{-}\partial_{r_{*}}H_{-}^{-1}=-H_{-}\mu_{b_{0}}\partial_{r}H_{-}^{-1}\). 
As a consequence, we have \[\widetilde{D}^{*}\widetilde{D}\Psi_{-}+\widetilde{V}_{-}\Psi_{-} =-H_{-}\mu_{b_{0}}\partial_{r}\Bigl{(}H_{-}^{-2}\mu_{b_{0}} \partial_{r}(H_{-}\Psi)\Bigr{)}+\widetilde{V}_{-}\Psi\] \[=-(\mu_{b_{0}}\partial_{r})^{2}\Psi-\Psi H_{-}\mu_{b_{0}}\partial_{ r}(H_{-}^{-2}\mu_{b_{0}}\partial_{r}H_{-})+(V+H_{-}^{-1}(\mu_{b_{0}} \partial_{r})^{2}H_{-}-2H_{-}^{-2}(\mu_{b_{0}}\partial_{r}H_{-})^{2})\Psi\] \[=-(\mu_{b_{0}}\partial_{r})^{2}\Psi+V\Psi. \tag{7.35}\] Now we are at the position to show that \(\Psi_{\pm}=0\) on \(r>r_{b_{0}}\), and thus from the previous discussion the mode solution \((\dot{g},\dot{A})\) with \(\sigma\neq 0\) is a pure gauge solution. To achieve this, we first need to discuss the behavior of \(\Psi_{\pm}\) near \(r=r_{b_{0}}\) and \(r=\infty\), i.e., \(r_{*}=\mp\infty\). Since \(H,a_{\pm},b_{\pm}\) are bounded away from zero, we only need to analyze the contributions of \(X,Y,Z,\mathcal{N}\). Note that \(\widetilde{F},J\) are smooth on \(\dot{X}\) (which extends a little bit beyond the event horizon). Therefore, the contribution of \(J=(X+Y)/4\) to \(\Phi\) is smooth on \(\tilde{X}\). As for the contribution from \(Z\), we notice that \(\partial_{t},\partial_{r_{*}}\), which are expressed as \(\partial_{t},\mu_{b_{0}}\partial_{r}\) in the static coordinates \((t,r)\), are smooth vector fields on \(\dot{X}\), and thus \(Z=\mu_{b_{0}}\widetilde{F}_{tr}=-\widetilde{F}(\partial_{t},\partial_{r^{*}})\) is smooth on \(\dot{X}\) as well. As a result, \(\Phi=e^{-i\sigma t_{*}}C^{\infty}([r_{b_{0}},\infty)_{r})\), which is written in the static coordinates as \[\Phi=(t,r)=e^{-i\sigma t}\Theta(r)\ \ \ \text{where}\ \ \ \Theta(r)\in e^{\pm i\sigma r _{*}}C^{\infty}([r_{b_{0}},\infty))\ \ \text{when}\ \ \ r^{*}\rightarrow\pm\infty.\] which implies the exponential decay of \(\Phi\) as \(r_{*}\rightarrow-\infty\) when \(\operatorname{Im}\sigma>0\). Also, when \(r_{*}\rightarrow\infty\), since \(\dot{g}=e^{-i\sigma t_{*}}\dot{g}_{0}\) with \(\dot{g}_{0}\in\widetilde{H}_{0}^{\infty,t}(\bar{X};S^{2s\mathrm{c}}\overline{T}^{*} \bar{X})\), we still obtain the rapid decay of \(\Phi\) when \(r^{*}\rightarrow\infty\). As for \(\mathcal{N}\), from the previous discussion, we know \(\mathcal{N}\) is a mode, that is, \(\mathcal{N}=e^{-i\sigma t_{*}}\mathcal{N}_{0}\) with \(\mathcal{N}_{0}\in C^{\infty}([r_{b_{0}},\infty)_{r})\cap\bar{H}_{\mathrm{b}}^{ \infty,\ell}(\bar{X})\) as well, and thus decays rapidly when \(r_{*}\to\pm\infty\). Consequently, \(\Psi_{\pm}\) decays rapidly as well when \(r_{*}\to\pm\infty\) when \(\mathrm{Im}\,\sigma>0\). Then when \(\mathrm{Im}\,\sigma>0\), by pairing \((-\partial_{r_{*}}^{2}+V_{+}-\sigma^{2})\Psi_{+}=0\) and \((-\widetilde{D}^{*}\widetilde{D}+\widetilde{V}_{-}-\sigma^{2})\Psi_{-}=0\) with \(\overline{\Psi}_{+}\) and \(\overline{\Psi}_{-}\) respectively with respect to the \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) and then integrating by parts (whose validity is guaranteed by the rapid decay of \(\Psi_{\pm}\) when \(r_{*}\to\pm\infty\) ), one obtains \[0=\|\partial_{r_{*}}\Psi_{+}\|_{L^{2}}+\|V_{+}^{1/2}\Psi_{+}\|_{L^{2}}-\sigma ^{2}\|\Psi_{+}\|_{L^{2}},\quad 0=\|\widetilde{D}\Psi_{-}\|_{L^{2}}+\|\widetilde{V }_{-}^{1/2}\Psi_{-}\|_{L^{2}}-\sigma^{2}\|\Psi_{-}\|_{L^{2}}.\] When \(\mathrm{Re}\,\sigma=0\) and thus \(\sigma^{2}<0\), we have a sum of squares and thus \(\Psi_{\pm}=0\). 
When \(\mathrm{Re}\,\sigma\neq 0\), the imaginary part \(-2i\mathrm{Re}\,\sigma\mathrm{Im}\,\sigma\|\Psi_{\pm}\|_{L^{2}}=0\) and thus \(\Psi_{\pm}=0\) on \((r_{b_{0}},\infty)\). When \(\sigma\in\mathbb{R}\setminus\{0\}\), we still have \(e^{i\sigma t}\Phi,e^{i\sigma t}\mathcal{N}\in e^{\pm i\sigma r_{*}}\Big{(}C^{ \infty}([r_{b_{0}},\infty))\cap\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X}) \Big{)}\) when \(r_{*}\to\pm\infty\) for some \(\ell\in\mathbb{R}\), and thus so do \(\Psi_{\pm}\). First, by normal operator arguments as in the proof of Proposition 5.7, we find \(\Psi_{+}=e^{-i\sigma t}e^{i\sigma r_{*}}(a+\mathcal{A}^{1-})\) with \(a\in\mathbb{C}\) when \(r_{*}\to\infty\). Next, a standard boundary pairing argument, see the proof of Theorem 8.1 below for the detail (starting at (8.8)), allows us to conclude \(\lim_{r_{*}\to\pm\infty}e^{\mp i\sigma r_{*}}\Psi_{+}=0\). Then an indicial root argument implies \(\Psi_{+}\in e^{-i\sigma t}e^{i\sigma r_{*}}\mathcal{A}^{\infty}\), i.e., \(\Psi_{+}\) vanishes to infinite order at \(r_{*}=\infty\). Lastly, the unique continuation at infinity ([67, Theorem 17.2.8]) implies \(\Psi_{+}=0\) on \((r_{b_{0}},\infty)\). The proof for \(\Psi_{-}\) proceeds similarly. Next we treat the stationary mode solutions, i.e., \(\sigma=0\). We still assume \(l\geq 2\). Now \((\dot{g},\dot{A})\) and thus \(\widetilde{F},J,N\) and \(X,Y,Z\) are stationary. We first analyze the nature of \(\mathcal{N}\) defined as \(\ddot{\ast}N=\dot{d}\mathcal{N}\). Let \(\ddot{\ast}N=N_{0}dt_{*}+N_{1}dr\) where \(N_{0},N_{1}\) are smooth functions of \(r\) only, and then \(\ddot{d}\dot{\ast}N=0\) gives \(\partial_{r}N_{0}=0\) and thus \(N_{0}\) is a constant. Therefore, we have the unique (up to additive constants) \(\mathcal{N}=N_{0}t_{*}+\int_{r_{-}}^{r}N_{1}(s)ds\), which is a generalized mode with frequency \(\sigma=0\) and grows at most linearly in \(t_{*}\). Since \(X,Y,Z\) are stationary, equation (7.19) implies \[Z=0, \tag{7.36}\] which in turn implies \(\partial_{t}\mathcal{N}=0\) by using equation (7.18). Therefore, we conclude \(N_{0}=0\) and \(\mathcal{N}\) is stationary as well. _Remark 7.4_.: Alternatively, using \(N\in\bar{H}_{\mathrm{b}}^{\ell,\infty}(\bar{X};\widetilde{\mathrm{s}cT^{*}} \bar{X})\subset\mathcal{A}^{\ell+3/2}\subset\mathcal{A}^{0+}\) (because \(-3/2<\ell<-1/2\)) when \(\sigma=0\), it also follows that \(N_{0}=c=0\). As for \(\Phi\), we notice that \(Z/(i\sigma)\) is not well-defined when \(\sigma=0\) and we need to do some remedies. Using (7.21), \(Z/(i\sigma)\) can be expressed as a linear combination of \(X,Y,\mathcal{N},\mathcal{N}^{\prime}\) and thus we can rewrite \(\Psi_{\pm}\) as linear combinations of \(X,Y,\mathcal{N},\mathcal{N}^{\prime}\) whose coefficients depend on \(\sigma\) \[\Psi_{\pm}(\sigma)=C_{X\pm}(\sigma)X+C_{Y\pm}(\sigma)Y+C_{\mathcal{N}\pm}( \sigma)\mathcal{N}+C_{\mathcal{N}^{\prime}\pm}(\sigma)\mathcal{N}^{\prime} \tag{7.37}\] where \(C_{\bullet\pm}(\sigma)\) are rational functions of \(r\) depending on \(\sigma\in\mathbb{C}\). 
To make the dependence on \(\sigma\) explicit, we write (see [60, equations (5.65), (B.3)]) \[C_{X\pm}=a_{\pm}\frac{P_{X}}{3\widetilde{H}},\quad C_{Y\pm}=a_{ \pm}\frac{P_{Y}}{3\widetilde{H}},\] \[C_{\mathcal{N}^{\prime}+}=1+\frac{\mu_{b_{0}}(m+2)P_{\mathcal{N} }}{\widetilde{c}r\widetilde{H}},\quad C_{\mathcal{N}^{\prime}-}=\frac{4b_{-}\mu _{b_{0}}^{2}(\tilde{c}-4rz)}{r\widetilde{H}} \tag{7.38}\] where \(\widetilde{H}(\sigma)=H(k^{2}\mu_{b_{0}}^{\prime}-4r\sigma^{2})\) and \(P_{X},P_{Y},P_{\mathcal{N}}\) are smooth on \(\dot{X}\) \[P_{X}=(9x-36z-3m-18)x+6(4z+m+8)z,\] \[P_{Y}=-3(9x-16z+5m-6)x-6(4z-3m)z+12m, \tag{7.39}\] \[P_{\mathcal{N}}=-8(\tilde{c}+mr)z,\] Since \(\mu_{b_{0}}^{\prime}=2r^{-2}(\mathbf{m}-r^{-1}\mathbf{Q}^{2})>0\) in the region \(r\geq r_{b_{0}}>\mathbf{m}>\mathbf{m}^{-1}\mathbf{Q}^{2}\) and thus \(\widetilde{H}(0)=Hk^{2}\mu_{b_{0}}^{\prime}>0\) there as well, we conclude that \(\Psi_{\pm}(\sigma)\) exist down to \(\sigma=0\). We define \(\Psi_{\pm}:=\Psi_{\pm}(0)\). Conversely, using the first two equations in (7.25) with \(\sigma=0\), one can recover \(X,Y\) in terms of \(\Phi,\mathcal{N},\mathcal{N}^{\prime}\) and thus \(\Psi_{\pm},\mathcal{N},\mathcal{N}^{\prime}\). One can also check that \(\Psi_{\pm}\) satisfy the equation (7.32) with \(\sigma=0\) (because \(\Psi_{\pm}(\sigma)\) is holomorphic near \(\sigma=0\)), i.e., \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{\pm}-V_{\pm}\Psi_{\pm}=0 \tag{7.40}\] with \(V_{\pm}\) defined as before in (7.33), (7.31), (7.29) and (7.24). Since \((\dot{g},\dot{A})\in\bar{H}^{\infty,\ell}_{\rm b}(\bar{X};S^{2\widetilde{ \kappa}\widetilde{T}^{*}\bar{X}}\oplus\widetilde{\widetilde{\kappa}^{T}^{*}\bar {X}})\subset\mathcal{A}^{\ell+3/2}\), we have \(X,Y,\mathcal{N}^{\prime}\in\mathcal{A}^{\ell+3/2}(\bar{X})\) and \(\mathcal{N}\in\mathcal{A}^{\ell+1/2}\). Let \(\rho=1/r\), we notice that \(H=m+6{\bf m}\rho-4{\bf Q}^{2}\rho^{2}\) and \(k^{2}\mu^{\prime}_{b_{0}}=2k^{2}{\bf m}\rho^{2}-2k^{2}{\bf Q}^{2}\rho^{3}\), so \(\widetilde{H}(0)\in 2k^{2}m{\bf m}\rho^{2}+\rho^{3}C^{\infty}(\bar{X})\). In view of the expression in (7.39), we find \(P_{Y}\in C^{\infty}(\bar{X})\) and \(P_{X},P_{\mathcal{N}}\in\rho C^{\infty}(\bar{X})\), and then \(C_{Y\pm}\in\rho^{-2}C^{\infty}(\bar{X})\), \(C_{X\pm},C_{\mathcal{N}^{\prime}\pm}\in\rho^{-1}C^{\infty}(\bar{X})\) and \(C_{\mathcal{N}\pm}\in C^{\infty}(\bar{X})\). As a consequence, we have \(\Psi_{\pm}\in\mathcal{A}^{\ell-1/2}\subset\mathcal{A}^{-2+}\) a priori because \(-3/2<\ell<-1/2\). Now we are ready to describe the asymptotic behavior of \(\Psi_{\pm}\) near \(r=r_{b_{0}}\) and \(r=\infty\). We only present the proof of \(\Psi_{+}\) because the proof of \(\Psi_{-}\) follows analogously. Since \(F_{\Phi},V_{\mathcal{N}},a_{\pm},b_{\pm}\) are smooth across the event horizon, we have \(\mu_{b_{0}}(\partial_{r}\Psi_{+})\overline{\Psi}_{+}|_{r=r_{b_{0}}}=0\). This implies that boundary term at \(r=r_{b_{0}}\) arising in the integration by parts we will do later on vanishes. Near \(r=\infty\), owing to (7.24), (7.29) (7.31) and (7.33), we conclude \(F_{\Phi}\in\rho^{3}C^{\infty}(\bar{X})\), \(V_{\mathcal{N}}\in\frac{k^{2}}{r^{2}}+\rho^{4}C^{\infty}(\bar{X})\) and then \(V_{\pm}\in\frac{k^{2}}{r^{2}}+\rho^{3}C^{\infty}(\bar{X})\), which implies \(V_{\pm}=\frac{k^{2}}{r_{*}^{2}}+\mathcal{A}^{3-}\) as a function of \(r_{*}\) near \(r_{*}=\infty\). 
We set \(x=1/r_{*}\) for the moment, \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{+}-V_{+}\Psi_{+}=\partial_{r_{*}}^{2}\Psi_{ +}-V_{+}(r_{*})\Psi=x^{4}\partial_{x}\Psi_{+}+2x^{3}\partial_{x}\Psi_{+}-V_{ +}(1/x)=0,\] which implies \(x=0\)(i.e. \(r_{*}=\infty\)) is a regular singular point and its indicial equation is \(\lambda(\lambda-1)+2\lambda-k^{2}=0\). The roots to the indicial equation are \(\lambda_{\pm}=(-1\pm\sqrt{1+4k^{2}})/2\). Since \(k^{2}\geq 6\), we see that \(\lambda_{+}\geq 2\) corresponds to one solution \(\Psi_{+}\) with decay \(\sim r_{*}^{-\lambda_{+}}\) near \(r_{*}=\infty\), while \(\lambda_{-}\leq-3\) implies another linearly independent solution \(\Psi_{+}\) with growth \(\sim r_{*}^{-\lambda_{-}}\) near \(r_{*}=\infty\). Since we have \(\Psi_{+}\in\mathcal{A}^{-2+}\) a priori, this excludes the second solution with growth. Namely, \(\Psi_{+}\sim r^{-\lambda_{+}}\) which means that \(\Psi_{+}\sim r_{*}^{-2}\) near \(r_{*}=\infty\). With this decay rate near \(r_{*}=\infty\), we can justify the \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) pairing between \(((\mu_{b_{0}}\partial_{r})^{2}-V_{+})\Psi_{+}\) and \(\overline{\Psi}_{+}\) and also find that the boundary term at \(r_{*}=\infty\) vanishes when integrating by parts. Therefore we proceed in the same way as in the proof of the case \({\rm Im}\,\sigma>0\) (use the \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) pairing and then integrate by parts) and conclude that \(\Psi_{\pm}=0\) on \((r_{b_{0}},\infty)\). Therefore, \((\dot{g},\dot{A})\) is a pure gauge solution. #### 7.1.2. Modes with \(l=1\) We now let \(k^{2}=l(l+1)=2\). Our goal is again to prove that an outgoing mode solution \((\dot{g},\dot{A})\) is a pure gauge solution, with the gauge potential an outgoing mode as well. The scalar perturbations with \(l=1\) take the form \[\dot{g}=\begin{pmatrix}\widetilde{f}\mathsf{S}\\ -\frac{r}{k}f\otimes\not{d}\mathsf{S}\\ 2r^{2}H_{L}\not{S}\end{pmatrix},\quad\dot{A}=\begin{pmatrix}\widetilde{K} \mathsf{S}\\ -\frac{r}{k}K\not{d}\mathsf{S}\end{pmatrix}\quad\text{with}\quad\mathsf{S}\in \mathbf{S}_{l}\ (l=1). \tag{7.41}\] where the term \(H_{T}\) is absent because \(\not{H}_{1}\mathsf{S}=0\). Now we follow the notations and expressions in the case \(l\geq 2\) with \(H_{T}\) set to be \(0\). First, the pure gauge solutions take the form \[\boldsymbol{\delta}\dot{g}=2\delta_{g}^{*}\omega=\begin{pmatrix}2\dot{\hat{ \delta}}^{*}T\\ -\frac{r}{k}(-\frac{k}{r}T+r\dot{dr}^{-1}L)\not{d}\mathsf{S}\\ 2r^{2}(r^{-1}\iota_{dr}T+\frac{k}{2r}L)\not{S}\end{pmatrix},\ \ \boldsymbol{\delta}\dot{A}=\mathcal{L}_{\omega^{ 4}}A+d\phi=\begin{pmatrix}(\dot{d}_{A}T+\iota_{T}\dot{d}A+\dot{d}P)\mathsf{S}\\ -\frac{r}{k}(-\frac{k}{r}\iota_{A}T-\frac{k}{r}P)\not{d}\mathsf{S}\end{pmatrix} \tag{7.42}\] with \[\omega=\begin{pmatrix}T\mathsf{S}\\ -\frac{r}{k}L\not{d}\mathsf{S}\end{pmatrix},\ \ \phi=P\mathsf{S} \tag{7.43}\] where \(T\in C^{\infty}(\dot{X};T^{*}\dot{X}),L,P\in C^{\infty}(\dot{X})\). 
When adding \((\boldsymbol{\delta}\dot{g},\boldsymbol{\delta}\dot{A})\) to \((\dot{g},\dot{A})\), the quantities \(\tilde{f},f,H_{L},\tilde{K},K\) change by \[\boldsymbol{\delta}\widetilde{f}=2\dot{\delta}^{*}T,\quad\boldsymbol{\delta}f=- \frac{k}{r}T+r\dot{d}(r^{-1}L),\quad\boldsymbol{\delta}H_{L}=r^{-1}\iota_{dr}T+ \frac{k}{2r}L,\] \[\boldsymbol{\delta}\widetilde{K}=\dot{d}_{A}T+\iota_{T}\dot{d}A+\dot{d}P,\quad \boldsymbol{\delta}K=-\frac{k}{r}(\iota_{A}T+P).\] We define \(\mathbf{X}:=\frac{r}{k}f\), then \(\boldsymbol{\delta}\mathbf{X}=\frac{r}{k}\boldsymbol{\delta}f=-T+\frac{r^{2}}{ k}\dot{d}(r^{-1}L)\). We observe that for any given \(L\), one can choose \[T=\mathbf{X}+\frac{r^{2}}{k}\dot{d}(r^{-1}L),\quad P=-\iota_{A}T+\frac{r}{k}K \tag{7.45}\] which implies \(\mathbf{X}+\boldsymbol{\delta}\mathbf{X}=0,K+\boldsymbol{\delta}K=0\). If \((\dot{g},\dot{A})\) are outgoing mode solutions and \(L\) is a mode, then \(T,P\) are modes of the same type which grows at most by the factor \(r\) relative to \(\dot{g},\dot{A}\) and \(L\). We again define \[\begin{split}\widetilde{F}&:=\widetilde{f}+2\dot{ \delta}^{*}\mathbf{X}\in C^{\infty}(\mathring{X};S^{2}T^{*}\mathring{X}),\\ J&:=H_{L}+r^{-1}\iota_{dr}\mathbf{X}\in C^{\infty} (\mathring{X};T^{*}\mathring{X}),\\ N&:=\widetilde{K}+\mathring{d}\left(\frac{r}{k}K \right)+\iota_{\mathbf{X}}\mathring{d}A\in C^{\infty}(\mathring{X};T^{*} \mathring{X}).\end{split} \tag{7.46}\] However, \(\widetilde{F},J,N\) are _not_ gauge-invariant in this case. In fact \[\boldsymbol{\delta}\widetilde{F}=\frac{2}{k}\dot{\delta}^{*}\Big{(}r^{2} \mathring{d}(r^{-1}L)\Big{)},\quad\boldsymbol{\delta}J=\frac{k}{2r}L+\frac{r }{k}\iota_{dr}\mathring{d}(r^{-1}L),\quad\boldsymbol{\delta}N=\frac{r^{2}}{k }\iota_{d(r^{-1}L)}\mathring{d}A. \tag{7.47}\] For any fixed \(L\), we pick \(T,P\) as explained above and then can work in the gauge \(\mathbf{X}=0,K=0\), which implies \[(\widetilde{f},f,H_{L},\widetilde{K},K)=(\widetilde{F},0,J,N,0),\] and thus the linearized Einstein-Maxwell system \(2\mathcal{L}_{1}(\dot{g},\dot{A})=0,\mathcal{L}_{2}(\dot{g},\dot{A})=0\) is again given by (7.9a)-(7.9f) with the equation (7.9d) for \(H_{T}^{E}\) absent. Conversely, in the gauge \(\mathbf{X}=0,K=0\), if \((\widetilde{F},J,N)=0\), we have \((\dot{g},\dot{A})=0\), which implies the original perturbations \((\dot{g},\dot{A})\) take the form \[(\dot{g},\dot{A})=(2\delta_{g}^{*}\omega,\mathcal{L}_{\omega^{4}}A+d\phi)\ \ \text{with}\ \ \omega=-\left(\begin{pmatrix}\mathbf{X}+\frac{r^{2}}{k}\mathring{d}(r^{-1}L) \end{pmatrix}\mathbf{S}\right),\ \ \phi=\iota_{A}(\mathbf{X}+\frac{r^{2}}{k}\mathring{d}(r^{-1}L))-\frac{r}{k}K. \tag{7.48}\] If \((\dot{g},\dot{A})\) are outgoing mode solutions and \(L\) is a mode, then \(\omega\) and \(\phi\) are modes of the same type which grow at most by a factor \(r\) more than \((\dot{g},\dot{A})\) and \(L\). Now we exploit this extra gauge freedom resulting from \(L\) to simplify the structure of the linearized Einstein-Maxwell system. Concretely, we _define_\(H_{T}^{E}:=-\frac{k^{2}}{2r^{4}}\mathring{\operatorname{tr}}\widetilde{F}\) and we shall arrange \(H_{T}^{E}+\boldsymbol{\delta}H_{T}^{E}=0\) by choosing \(L\) properly and thus the absent equation (7.9d) still holds in the case \(l=1\). 
In order to achieve this, we should choose \(L\) such that \[\begin{split}\boldsymbol{\delta}H_{T}^{E}&=-\frac{k}{ r^{2}}\mathring{\operatorname{tr}}\mathring{\delta}^{*}\Big{(}r^{2} \mathring{d}(r^{-1}L)\Big{)}=\frac{k}{r^{2}}\mathring{\delta}\Big{(}r^{2} \mathring{d}(r^{-1}L)\Big{)}=-k\mathring{\square}_{\dot{g}}(r^{-1}L)-2kr^{-1} \partial^{i}(r)\partial_{i}(r^{-1}L)\\ &=-k\mathring{\square}_{\dot{g}}(r^{-1}L)+kg^{ab}\Gamma_{ab}^{i} \partial_{i}(r^{-1}L)=-k\square_{g}(r^{-1}L)=-H_{T}^{E}.\end{split} \tag{7.49}\] We first consider the non-stationary mode solutions with \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\). We again define \(\mathcal{N}\) as \(\mathring{\ast}N=\mathring{d}\mathcal{N}\) and \(X,Y,Z\) as in (7.16). As explained in the case \(l\geq 2\), \(\mathcal{N},X,Y,Z\) are modes with the same frequency \(\sigma\). We shall find a mode \(L\) satisfying the above equation (7.49). Writing \(H_{T}^{E}=e^{-i\sigma t_{*}}H_{T,0}^{E}(r)\), then the equation (7.49) for \(L=e^{-i\sigma t_{*}}\mathring{L}(r)\) becomes \[\widehat{\square_{g}}(\sigma)(r^{-1}\mathring{L})=\frac{H_{T,0}^{E}}{k} \tag{7.50}\] which has a spherically symmetric solution since by Theorem 8.1\(\widehat{\square_{g}}(\sigma):\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X}) \rightarrow\bar{H}_{\mathrm{b}}^{\infty,\ell+1}(\bar{X})\) is invertible for any \(\ell<-1/2\). Therefore, as discussed around (7.45), we add a pure gauge solution using this \(L\) and the corresponding \(T,P\) as in (7.45), the linearized Einstein-Maxwell system is again given by (7.9a)-(7.9e). This allows us to repeat the proof of the case of non-stationary mode solutions with \(l\geq 2\) to conclude that in the case \(l=1\), \((\dot{g},\dot{A})\) is a gauge solution as well. More precisely, \((\dot{g},\dot{A})\) take the form as in (7.48). Next we discuss the stationary case \(\sigma=0\). Suppose \((\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X};S^{2}\widetilde{ \operatorname{s}C}^{*}\bar{X}\oplus\widetilde{\operatorname{s}C}^{*}\bar{X})\) where we now assume \(-5/2<\ell<-1/2\), then \(\mathbf{X}=rf/k\in\bar{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{X};\widetilde{ \operatorname{s}C}^{*}\bar{X})\) and thus \(\mathring{\delta}^{*}\mathbf{X}\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X};S^ {2}\widetilde{\operatorname{s}C}^{*}\bar{X})\) (because \(\mathring{\delta}^{*}\) acts as \(\rho\mathrm{Diff}_{\mathrm{b}}^{i}\)). Therefore, the right-handed side of (7.45) is in \(\bar{H}_{\mathrm{b}}^{\infty,\ell+2}(\bar{X})\). According to Theorem 8.1,\(\widehat{\square_{g}}(0):\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X}) \rightarrow\bar{H}_{\mathrm{b}}^{\infty,\ell+2}(\bar{X})\) is surjective for \(-5/2<\ell<-1/2\),and then we can solve (7.49) for \(r^{-1}L\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X})\) and thus \(L\in\bar{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{X})\). Again, according to (7.45), by adding a pure gauge solution \((2\mathring{\delta}_{g}^{*}\omega,\mathcal{L}_{\omega^{4}}A+d\phi)\) with \((\omega,\phi)\in\bar{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{X};\widetilde{ \operatorname{s}C}^{*}\bar{X}\oplus\mathbb{R})\), the linearized Einstein-Maxwell system is again given by (7.9a)-(7.9e). Next, we need to modify the argument in the case \(l\geq 2,\sigma=0\) a little bit. 
We again write \[\Psi_{\pm}(\sigma)=C_{X\pm}(\sigma)X+C_{Y\pm}(\sigma)Y+C_{\mathcal{N}\pm}(\sigma) \mathcal{N}+C_{\mathcal{N}^{\prime}\pm}(\sigma)\mathcal{N}^{\prime} \tag{7.51}\] where \(C_{\bullet\pm}(\sigma)\) are rational functions of \(r\) depending on \(\sigma\in\mathbb{C}\) (obtained by letting \(k^{2}=2,m=0\) in (7.38)) \[C_{X\pm} =a_{\pm}\frac{P_{X}}{3\widetilde{H}},\quad C_{Y\pm}=a_{\pm}\frac{P _{Y}}{3\widetilde{H}},\] \[C_{\mathcal{N}+} =1+\frac{2\mu_{b_{0}}P_{\mathcal{N}}}{\tilde{cr}\widetilde{H}}, \quad C_{\mathcal{N}-}=b_{-}\Big{(}1+\frac{12a_{-}\mu_{b_{0}}x}{r\widetilde{H}} \Big{)} \tag{7.52}\] \[C_{\mathcal{N}^{\prime}+} =\frac{2\mu_{b_{0}}^{2}P_{\mathcal{N}}}{\tilde{c}\widetilde{H}},\quad C_{\mathcal{N}^{\prime}-}=\frac{4b_{-}\mu_{b_{0}}^{2}(\tilde{c}-4rz)}{r \tilde{H}}\] where \(\widetilde{H}(\sigma)=H(2\mu_{b_{0}}^{\prime}-4r\sigma^{2})\) with \(H(r)=6\mathbf{m}r^{-1}-4\mathbf{Q}^{2}r^{-2}\), \((a_{+},b_{+})=(\frac{\mathbf{Q}}{2r},1)\), \((a_{-},b_{-})=(1-\frac{2\mathbf{Q}^{2}}{3\mathbf{m}r},-\frac{4\mathbf{Q}}{3 \mathbf{m}})\), \(\tilde{c}=6\mathbf{m}\) and \(P_{X},P_{Y},P_{\mathcal{N}}\) are smooth on \(\mathring{X}\)(we again let \(k^{2}=2,m=0\) in (7.39)) \[P_{X} =(9x-36z-18)x+6(4z+8)z,\] \[P_{Y} =-3(9x-16z-6)x-24z^{2}, \tag{7.53}\] \[P_{\mathcal{N}} =-8\tilde{c}z,\] Since \(H(r)=2r^{-1}(3\mathbf{m}-2\mathbf{Q}^{2}r^{-1})>0\) and \(\mu_{b_{0}}^{\prime}=2r^{-2}(\mathbf{m}-r^{-1}\mathbf{Q}^{2})>0\) in the region \(r\geq r_{b_{0}}>\mathbf{m}>\mathbf{m}^{-1}\mathbf{Q}^{2}>(2/3)\mathbf{m}^{-1} \mathbf{Q}^{2}\), we have \(\widetilde{H}(0)=2H\mu_{b_{0}}^{\prime}>0\) there as well. We again define \(\Psi_{\pm}:=\Psi_{\pm}(0)\). Conversely, using the the first two equations in (7.25) with \(\sigma=0\) and \(m=0\), one can recover \(X,Y\) in terms of \(\Phi,\mathcal{N},\mathcal{N}^{\prime}\) and thus \(\Psi_{\pm},\mathcal{N},\mathcal{N}^{\prime}\) as well. Again, \(\Psi_{\pm}(0)\) satisfy \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{\pm}-V_{\pm}\Psi_{\pm}=0 \tag{7.54}\] with \(V_{\pm}\) defined as before in (7.33) (7.31), (7.29) and (7.24) with \(m=0,k^{2}=2\). Since \((\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X};S^{2\mathrm{s} \widetilde{c}\widetilde{T}^{*}\bar{X}}\oplus\widetilde{\widetilde{\kappa} \widetilde{T}^{*}\bar{X}})\subset\mathcal{A}^{\ell+3/2}(\bar{X})\), we have \(X,Y,\mathcal{N}^{\prime}\in\mathcal{A}^{\ell+3/2}(\bar{X})\) and \(\mathcal{N}\in\mathcal{A}^{\ell+1/2}(\bar{X})\). Let \(\rho=1/r\), we notice that \(H=6\mathbf{m}\rho-4\mathbf{Q}^{2}\rho^{2}\) and \(k^{2}\mu_{b_{0}}^{\prime}=4\mathbf{m}\rho^{2}-4\mathbf{Q}^{2}\rho^{3}\), so \(\widetilde{H}(0)\in 24\mathbf{m}^{2}\rho^{3}+\rho^{4}C^{\infty}(\bar{X})\). In view of the expression in (7.53), we find \(P_{X},P_{Y}\in\rho C^{\infty}(\bar{X})\) and \(P_{\mathcal{N}}\in\rho^{2}C^{\infty}(\bar{X})\), and then \(C_{X+},C_{Y+},C_{\mathcal{N}^{\prime}+}\in\rho^{-1}C^{\infty}(\bar{X})\) and \(C_{\mathcal{N}+}\in C^{\infty}(\bar{X})\). As a consequence, we have \[\Psi_{+}\in\mathcal{A}^{\ell+1/2}(\bar{X})\subset\mathcal{A}^{-2+}(\bar{X}) \ \text{ a priori if }\ -5/2<\ell<-1/2.\] Now we are ready to describe the asymptotic behavior of \(\Psi_{+}\) near \(r=r_{b_{0}}\) and \(r=\infty\). Since \(F_{\Phi},V_{\mathcal{N}},a_{\pm},b_{\pm}\) are smooth across the event horizon, we have \(\mu_{b_{0}}(\partial_{r}\Psi_{+})\overline{\Psi}_{+}|_{r=r_{b_{0}}}=0\). This implies that boundary term at \(r=r_{b_{0}}\) arising in the integration by parts we will do later on vanishes. 
Near \(r=\infty\), owing to (7.24), (7.29) (7.31) and (7.33) with \(m=0\), we conclude \(F_{\Phi}\in\frac{8\mathbf{Q}}{3\mathbf{m}}\rho^{2}+\beta^{3}C^{\infty}(\bar{X})\), \(V_{\mathcal{N}}\in\frac{2}{r^{2}}+\rho^{3}C^{\infty}(\bar{X})\) and then \(V_{+}\in\frac{2}{r^{2}}+\rho^{3}C^{\infty}(\bar{X})\), which implies \(V_{+}=\frac{2}{r^{2}_{*}}+\mathcal{A}^{3-}\) as a function of \(r_{*}\) near \(r_{*}=\infty\). We set \(x=1/r_{*}\) for the moment, \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{+}-V_{+}\Psi_{+}=\partial_{r_{*}}^{2}\Psi_{+} -V_{+}(r_{*})\Psi=x^{4}\partial_{x}\Psi_{+}+2x^{3}\partial_{x}\Psi_{+}-V_{+}(1/ x)=0,\] which implies \(x=0\)(i.e. \(r_{*}=\infty\)) is a regular singular point and its indicial equation is \(\lambda(\lambda-1)+2\lambda-2=0\). The roots to the indicial equation are \(\lambda_{+}=1,\lambda_{-}=-2\). Again we see that \(\lambda_{+}=1\) corresponds to one solution \(\Psi_{+}\) with decay \(\sim r_{*}^{-1}\) near \(r_{*}=\infty\), while \(\lambda_{-}\leq-2\) implies another linearly independent solution \(\Psi_{+}\) with growth \(\sim r_{*}^{2}\) near \(r_{*}=\infty\). Since we have \(\Psi_{+}\in\mathcal{A}^{-2+}\) a priori, this excludes the second solution with growth. Namely, \(\Psi_{+}\sim r_{*}^{-1}\) near \(r_{*}=\infty\). With this decay rate near \(r_{*}=\infty\), we can justify the \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) pairing between \(((\mu_{b_{0}}\partial_{r})^{2}-V_{+})\Psi_{+}\) and \(\overline{\Psi}_{+}\) and also find that the boundary term at \(r_{*}=\infty\) vanishes when integrating by parts. Therefore we proceed in the same way as in the proof of the case \(\operatorname{Im}\sigma>0\) (use the \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) pairing and then integrate by parts) and conclude that \(\Psi_{+}=0\) on \((r_{b_{0}},\infty)\) and thus on \(\mathring{X}\). As for \(\Psi_{-}\), recall the _S-deformation_ we discussed around (7.35) and we can rewrite the equation \(((\mu_{b_{0}}\partial_{r})^{2}-V_{-})\Psi_{-}=0\) as \[\widetilde{D}^{*}\widetilde{D}\Psi_{-}=-H_{-}\partial_{r}\Big{(}H_{-}^{-2}\mu_{b_{0 }}\partial_{r}(H_{-}\Psi_{-})\Big{)}=0\quad\text{with}\quad H_{-}(r)=\frac{ \tilde{c}}{r}=\frac{6\mathbf{m}}{r}>0\ \text{on}\ r\in[r_{b_{0}},\infty)\] because \(\widetilde{V}_{-}=\frac{k^{2}(k^{2}-2)\mu_{b_{0}}}{r^{2}H_{-}}=0\) if \(l=1\). Solving the above equation for \(\Psi_{-}\) yields \[\Psi_{-}=\frac{c_{1}}{H_{-}}\int\frac{H_{-}^{2}}{\mu_{b_{0}}}dr+\frac{c_{2}}{H_ {-}}=d_{1}\ln\left|\frac{r-r_{H}}{r-(\mathbf{m}-\sqrt{\mathbf{m}^{2}- \mathbf{Q}^{2}})}\right|+d_{2}r. \tag{7.55}\] where \(c_{1},c_{2},d_{1},d_{2}\in\mathbb{R}\). Since \(\Psi_{-}\) is smooth across the event horizon \(r=r_{b_{0}}\), we conclude \(d_{1}=0\) and thus \(\Psi_{-}=d_{2}r\). Using \((\Psi_{+},\Psi_{-})=(0,d_{2}r)\), (7.30) and (7.31), we have \((\Psi,\mathcal{N})=(d_{2}r,-d_{2}\mathbf{Q}/2)\). Exploiting the first two equations in (7.25) we obtain \((X,Y)=(-d_{2},-d_{2})\). As discussed previously, the equation (7.19) implies \(Z=0\) in the stationary perturbation. Finally using (7.17) and \(\hat{*}N=\hat{d}\mathcal{N}\) we see \[\widetilde{F}=0,\quad J=-\frac{d_{2}}{2},\quad N=0\] Since \(J\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X})\) and \(0\neq d_{2}\in\bar{H}_{\mathrm{b}}^{\infty,-\frac{3}{2}-(\bar{X})}\), we have \(d_{2}=0\) if \(\ell\geq-3/2\) and thus we are done. 
If \(-5/2<\ell<-3/2\), \(d_{2}\) may be non-zero and then in the gauge \(\mathbf{X}=0\) \[\dot{g}=\begin{pmatrix}0\\ 0\\ -2r^{2}\frac{d_{2}}{2}\mathsf{S}\not{\boldsymbol{\theta}}\end{pmatrix},\quad \dot{A}=0.\] This implies in the gauge \(\mathbf{X}=0\), \((\dot{g},\dot{A})\) is a gauge solution with \(T=0,L=-d_{2}r/k,P=0\). Therefore, according to the discussion around (7.48), the original perturbations \((\dot{g},\dot{A})=(2\delta_{g}^{*}\omega,\mathcal{L}_{\omega^{i}}A+d\phi)\) with \((\omega,\phi)\in\bar{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{X};\widetilde{ \widetilde{CT^{*}}}\bar{X}\oplus\mathbb{R})\) for \(-5/2<\ell<-1/2\). #### 7.1.3. Modes with \(l=0\) As stated in [57, 60], the linearization of the simple proof of the Birkhoff' theorem [103] can be extended to the spherically symmetric Einstein-Maxwell system. Following the argument in [103], we work in the incoming Eddington-Finkelstein coordinates \((t_{0}=t+r^{*},r)\) where the Reissner-Nordstrom metric and the electromagnetic 4-potential take the form \[g=-\mu_{b_{0}}dt_{0}^{2}+2dt_{0}dr+r^{2}\not{\boldsymbol{\theta}},\quad A= \frac{\mathbf{Q}}{r}dt_{0} \tag{7.56}\] _Remark 7.5_.: The electromagnetic 4-potential defined above differs from (A.17) by an exact 1-form \(f(r)dr\). We note that adding an exact 1-form to \(A\) changes neither the electromagnetic field nor the linearized Einstein-Maxwell system because the linearized system only involves \(dA\). Representing the Reissner-Nordstrom metric and the electromagnetic 4-potential in the way (7.56), we can linearize \((g_{b},A_{b})\) with respect to the parameters \((\mathbf{m},0,\mathbf{Q})\) in the direction \((\dot{\mathbf{m}},0,\dot{\mathbf{Q}})\) to obtain the following linearized metric and electromagnetic 4-potential \[g^{\prime}=(\frac{2\dot{\mathbf{m}}}{r}-\frac{2\mathbf{Q}\dot{\mathbf{Q}}}{r^{2 }})dt_{0}^{2},\quad A^{\prime}=\frac{\dot{\mathbf{Q}}}{r}dt_{0}. \tag{7.57}\] We note that \((g^{\prime},A^{\prime})\) differs from \((\dot{g}_{(\mathbf{m},0,\mathbf{Q})}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}}),\dot {A}_{(\mathbf{m},0,\mathbf{Q})}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}}))\) by a pure gauge term. With this we now turn to the analysis of the spherically symmetric modes. Since there is no spherical component in 1-forms and no aspherical-spherical mixed component in symmetric 2-tensors for spherically symmetric perturbations, we use the following splitting of 1-forms and symmetric 2-tensors respectively: \(\langle dt_{0}\rangle\oplus\langle dr\rangle\) and \(\langle dt_{0}^{2}\rangle\oplus\langle 2dt_{0}dr\rangle\oplus\langle dr^{2} \rangle\oplus\langle\not{\boldsymbol{\theta}}\rangle\). Under this splitting, we write the scalar perturbations of modes \(l=0\) (spherically symmetric modes) in the following form \[\dot{g}=\begin{pmatrix}\dot{X}\\ \dot{Y}\\ \dot{Z}\\ 2r^{2}H_{L}\end{pmatrix},\quad\dot{A}=\begin{pmatrix}\dot{A}_{0}\\ \dot{A}_{1}\end{pmatrix}\] where \(\dot{X},\dot{Y},\dot{Z},H_{L}\) are smooth function on \(\mathbb{R}_{t_{0}}\times[r_{-},\infty)\). 
To describe the pure gauge perturbations, we use the following lemma **Lemma 7.6**.: _Let \((g,A)\) be as defined in (7.56), under the splitting \(\langle\partial_{t_{0}}\rangle\oplus\langle\partial_{r}\rangle\) of \(TM\) and \(\langle dv^{2}\rangle\oplus\langle 2dt_{0}dr\rangle\oplus\langle dr^{2}\rangle\) of \(S^{2}T^{*}M\), the map \(\mathcal{L}_{(\bullet)}g:TM\to S^{2}T^{*}M\) takes the form_ \[\mathcal{L}_{(\bullet)}g=\begin{pmatrix}-2\mu_{b_{0}}\partial_{t_{0}}&2 \partial_{t_{0}}-\mu_{b_{0}}^{\prime}\\ \partial_{t_{0}}-\mu_{b_{0}}\partial_{r}&\partial_{r}\\ 2\partial_{r}&0\\ 0&2r\end{pmatrix}; \tag{7.58}\] the map \(\mathcal{L}_{(\bullet)}A\) between sections of \(\langle\partial_{t_{0}}\rangle\oplus\langle\partial_{r}\rangle\) and \(\langle dt_{0}\rangle\oplus\langle dr\rangle\) is given by_ \[\mathcal{L}_{(\bullet)}A=\begin{pmatrix}\mathbf{Q}r^{-1}\partial_{t_{0}}&- \mathbf{Q}r^{-2}\\ \mathbf{Q}r^{-1}\partial_{r}&0\end{pmatrix}, \tag{7.59}\] _and as a mapping between the sections of the trivial line bundle \(\mathbb{R}\) and \(\langle dt_{0}\rangle\oplus\langle dr\rangle\), \(d=\begin{pmatrix}\partial_{t_{0}}\\ \partial_{r}\end{pmatrix}\)._ Proof.: It follows from the formula \((\mathcal{L}_{W}g)_{\alpha\beta}=W^{\gamma}\partial_{\gamma}g_{\alpha\beta} +(\partial_{\alpha}W^{\gamma})g_{\gamma\beta}+(\partial_{\beta}W^{\gamma})g_{ \alpha\gamma}\), \((\mathcal{L}_{W}A)_{\alpha}=W^{\gamma}\partial_{\gamma}A_{\alpha}+(\partial_{ \alpha}W^{\gamma})A_{\gamma}\). Therefore, adding a pure gauge solution \((\mathcal{L}_{W}g,\mathcal{L}_{W}A)\) to \((\dot{g},\dot{A})\) with \(W:=Z_{1}\partial_{t_{0}}-rH_{L}\partial_{r}\) and \(Z_{1}:=(-\int_{3\mathbf{m}}^{r}\dot{Z}(t_{0},s)ds)/2\), the \(dr^{2}\) and spherical components of \(\dot{g}\) vanish. Next, by adding another pure gauge solution \((0,d\phi)\) with \(\phi=-\int_{3\mathbf{m}}^{r}(A_{1}+\mathbf{Q}s^{-1}\partial_{r}Z_{1})(t_{0}, s)ds\), the \(dr\)-component of \(\dot{A}\) can be eliminated. Suppose the original perturbations \((\dot{g},\dot{A})\in e^{-i\sigma t_{*}}\widetilde{H}_{\mathrm{b}}^{\infty, \ell}(\bar{X};S^{2}\widetilde{scT^{*}}\bar{X}\oplus\widetilde{scT^{*}}\bar{X})\) are outgoing modes, then so are \(\dot{X},\dot{Y},\dot{Z},H_{L}\). If \(\sigma=0\), it is obvious that \((\omega=W^{\flat},\phi)\in\widetilde{H}_{\mathrm{b}}^{\infty,\ell-1}(\bar{M}; \widetilde{scT^{*}}\bar{X}\oplus\mathbb{R})\) are still stationary modes. If \(\sigma\neq 0\), since \(t_{*}=t_{0}+f(r)\) where \(f\) is smooth on \([r_{-},\infty)\) and \(\sim-2r^{*}\) near infinity, we find that \((\omega=W^{\flat},\phi)\in e^{-i\sigma t_{*}}\widetilde{H}_{\mathrm{b}}^{ \infty,\ell^{\prime}}(\bar{X};\widetilde{scT^{*}}\bar{X}\oplus\mathbb{R})\) for some \(\ell^{\prime}<\ell\) is also outgoing modes. Further, if \((\dot{g},\dot{A})\in\mathrm{Poly}(t_{*})^{k}\widetilde{H}_{\mathrm{b}}^{\infty, \ell}(\bar{X};S^{2}\widetilde{scT^{*}}\bar{X}\oplus\widetilde{scT^{*}}\bar{X})\) are the generalized stationary modes, the integration in the definition of \(W\) and \(\phi\) implies \((\omega=W^{\flat},\phi)\in\mathrm{Poly}(t_{*})^{k}\widetilde{H}_{\mathrm{b}}^{ \infty,\ell^{\prime}}(\bar{X};\widetilde{scT^{*}}\bar{X}\oplus\mathbb{R})\) for some \(\ell^{\prime}<\ell\). 
To summarize, by adding pure gauge solutions which are outgoing modes(or generalized stationary modes), the perturbations are reduced to the following form \[\dot{g}=\dot{X}dt_{0}^{2}+2\dot{Y}dt_{0}dr,\quad\dot{A}=\dot{A}_{0}dt_{0}.\] Next, we analyze the linearized Einstein-Maxwell system for spherically symmetric modes. Concretely, for \((\hat{g},\hat{A})\) of the general form \(\hat{g}=Xdt_{0}^{2}+2\dot{Y}dt_{0}dr+r^{2}\not{g}\) and \(\hat{A}=A_{0}dt_{0}\) where \(X,Y,A_{0}\) are functions of \((t_{0},r)\), the \(dr^{2}\)-component of \(\mathrm{Ric}(\check{g})-2T(\check{g},d\check{A})=0\) implies \(2\partial_{r}Y/(rY)=0\). Linearizing it around \((g,A)\) in (7.56), we obtain the corresponding equation for \((\check{g},\hat{A})\): \(\partial_{r}\dot{Y}=0\), so \(\dot{Y}=\dot{Y}(t_{0})\). If \(\dot{Y}\) is an outgoing mode with \(\sigma\neq 0\), we must have \(\dot{Y}=\dot{Y}(t_{0})=ce^{-i\sigma t_{0}}\) which contradicts the definition of outgoing modes unless \(\dot{Y}=0\). If \(\dot{Y}\) is a stationary mode, then \(\dot{Y}=c\). However the assumption \(\dot{Y}\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X})\subset\mathcal{A}^{0+}( \bar{X})\) for \(-3/2<\ell<-1/2\) require \(\dot{Y}=0\). Lastly, if \(\dot{Y}\) is a generalized zero mode, we then have \(\dot{Y}=\dot{Y}(t_{0})=\mathrm{Poly}(t_{0})^{k}\). Letting \(W_{1}=f\partial_{t_{0}}\) with \(f(t_{0})\in\mathrm{Poly}(t_{0})^{k+1}\) satisfying \(\partial_{t_{0}}f(t_{0})=-\dot{Y}(t_{0})\), and adding the pure gauge solution \((\mathcal{L}_{W_{1}}g,\mathcal{L}_{W_{1}}A=0)\) to \((\dot{g},\dot{A})\), \(dvdr\)-component of \(\dot{g}\) again vanishes. Therefore, we now have \[\dot{g}=\dot{X}dt_{0}^{2},\quad\dot{A}=\dot{A}_{0}dt_{0}. \tag{7.60}\] Again, for \((\check{g},\check{A})\) the Maxwell equation becomes \[\begin{split}\delta_{\check{g}}d\check{A}&=\Big{(}- \frac{1}{Y}\partial_{t_{0}}\partial_{r}A_{0}+\frac{X}{Y^{2}}\partial_{r}^{2}A_{0 }+\frac{2X}{rY^{2}}\partial_{r}A_{0}-\frac{X}{Y^{3}}\partial_{r}Y\partial_{r}A_{ 0}+\frac{1}{Y^{2}}\partial_{r}A_{0}\partial_{t_{0}}Y\Big{)}dv\\ &\quad+\Big{(}\frac{1}{Y}\partial_{r}^{2}A_{0}+\frac{2}{rY} \partial_{r}A_{0}-\frac{1}{Y^{2}}\partial_{r}Y\partial_{r}A_{0}\Big{)}dr. \end{split} \tag{7.61}\] Linearizing around \((g,A)\) (i.e. \(X=-\mu_{b_{0}},Y=1,A_{0}=r^{-1}\mathbf{Q}\)) and using the previously obtained fact \(\dot{Y}=0\), the \(dr\) component tells \(\partial_{r}^{2}\dot{A}_{0}+2r^{-1}\partial_{r}\dot{A}_{0}=0\), so \(\dot{A}_{0}=c_{1}(t_{0})r^{-1}+c_{2}(t_{0})\). Since \(c_{2}(t_{0})dt_{0}\) is an exact 1-form and thus a pure gauge solution, we may assume \(\dot{A}_{0}=c_{1}(t_{0})r^{-1}\). Then the equation for \(dt_{0}\)-component gives \(-\partial_{t_{0}}\partial_{r}\dot{A}_{0}-\mu_{b_{0}}\partial_{r}^{2}\dot{A}_{0 }-2\mu_{b_{0}}r^{-1}\partial_{r}\dot{A}_{0}=r^{-2}\partial_{t_{0}}c_{1}(t_{0})=0\) and thus \(c_{1}(t_{0})\) is a constant that we denote by \(\dot{\mathbf{Q}}\). That is, \(\dot{A}_{0}=r^{-1}\dot{\mathbf{Q}}\). Next, the spherical component of \(\mathrm{Ric}(\check{g})-2T(\check{g},d\check{A})=0\) is given by \[1+\frac{X}{Y^{2}}+\frac{r}{Y^{2}}\partial_{r}X-\frac{rX}{Y^{3}}\partial_{r}Y- \frac{r^{2}}{Y^{2}}(\partial_{r}A_{0})^{2}=0.\] Linearizing around \((g,A)\) (i.e. 
\(X=-\mu_{b_{0}},Y=1,A_{0}=r^{-1}\mathbf{Q}\)) and using the previously obtained facts \(\dot{Y}=0,\dot{A}_{0}=r^{-1}\dot{\mathbf{Q}}\), we find \(\dot{X}+r\partial_{r}\dot{X}-2r^{-2}\mathbf{Q}\dot{\mathbf{Q}}=0\) and thus \[\dot{X}=-\frac{2\mathbf{Q}\dot{\mathbf{Q}}}{r^{2}}+\frac{c(t_{0})}{r}.\] Lastly, the \(dt_{0}^{2}\)-component of \(\mathrm{Ric}(\mathring{g})-2T(\mathring{g},d\mathring{A})=0\) is \[-\frac{X}{Y^{2}}\partial_{t_{0}}\partial_{r}Y+\frac{X}{2Y^{2}} \partial_{r}^{2}X+\frac{X}{Y^{3}}\partial_{t_{0}}Y\partial_{r}Y-\frac{X}{2Y^{3 }}\partial_{r}X\partial_{r}Y-\frac{2X}{rY^{2}}\partial_{t_{0}}Y+\frac{X}{rY^{2 }}\partial_{r}X+\frac{1}{rY}\partial_{t_{0}}X+\frac{X}{Y^{2}}(\partial_{r}A_{0 })^{2}=0.\] Again linearizing around \((g,A)\) (i.e. \(X=-\mu_{b_{0}},Y=1,A_{0}=r^{-1}\mathbf{Q}\)) and using the previously obtained facts \(\dot{Y}=0,\dot{A}_{0}=r^{-1}\dot{\mathbf{Q}},\dot{X}=-\frac{2\mathbf{Q}\dot{ \mathbf{Q}}}{r^{2}}+\frac{c(v)}{r}\), we obtain \(r^{-2}\partial_{v}c(v)=0\) and thus \(c(v)\) is a constant which we denote by \(2\dot{\mathbf{m}}\). Therefore, we conclude that up to pure gauge solutions \[\dot{g}=(\frac{2\dot{\mathbf{m}}}{r}-\frac{2\mathbf{Q}\dot{ \mathbf{Q}}}{r^{2}})dt_{0}^{2}=g^{\prime},\quad\dot{A}_{0}=\frac{\dot{\mathbf{ Q}}}{r}dt_{0}=A^{\prime}. \tag{7.62}\] ### Vector type perturbations We shall consider the vector type perturbations of modes \(l\geq 2\) and \(l=1\) separately. Recall that we can write the perturbations under the splitting (6.9) and (6.10) as in (6.12). We denote by \(\mathsf{V}\in\mathbf{V}_{l}\) the spherical harmonic 1-form with eigenvalues \(k^{2}=l(l+1)-1\). #### 7.2.1. Modes with \(l\geq 2\) Suppose \((\mathring{g},\mathring{A})\) is the vector perturbations of the following form \[\mathring{g}=\begin{pmatrix}0\\ rf\otimes\mathsf{V}\\ -\frac{2}{k}r^{2}H_{T}\mathring{\boldsymbol{\delta}}^{*}\mathsf{V}\end{pmatrix}, \quad\mathring{A}=\begin{pmatrix}0\\ rK\mathsf{V}\end{pmatrix}\quad\text{with}\quad\mathsf{V}\in\mathbf{V}_{l}\ (l\geq 2). \tag{7.63}\] We notice that the pure gauge solutions take the form \[\boldsymbol{\delta}\mathring{g}=2\delta_{g}^{*}\omega=\begin{pmatrix}0\\ r^{2}\mathring{d}(r^{-1}L)\boldsymbol{\delta}\mathsf{S}\\ -\frac{2}{k}r^{2}(-kr^{-1}L)\boldsymbol{\delta}^{*}\mathsf{V}\end{pmatrix}, \quad\boldsymbol{\delta}\mathring{A}=\mathcal{L}_{\omega^{\sharp}}A=0 \tag{7.64}\] with \[\omega=\begin{pmatrix}0\\ rL\mathsf{V}\end{pmatrix} \tag{7.65}\] where \(L\in C^{\infty}(\mathring{X})\). When adding \((\boldsymbol{\delta}\mathring{g},\boldsymbol{\delta}\mathring{A})\) to \((\mathring{g},\mathring{A})\), the quantities \(f,H_{T},K\) change by \[\boldsymbol{\delta}f=r\mathring{d}(r^{-1}L),\quad\boldsymbol{ \delta}H_{T}=-kr^{-1}L,\quad\boldsymbol{\delta}K=0. \tag{7.66}\] Then we have the following gauge-invariant quantities \[J:=f+\frac{r}{k}\mathring{d}H_{T}\in C^{\infty}(\mathring{X};T^{*}\mathring{X }),\quad K\in C^{\infty}(\mathring{X}) \tag{7.67}\] We also note that if \(J=0,K=0\), \((\mathring{g},\mathring{A})\) is a pure gauge solution \[(\mathring{g},\mathring{A})=(2\delta^{*}\omega,\mathcal{L}_{ \omega^{\sharp}}A)\ \ \text{with}\ \ \omega=\begin{pmatrix}0\\ -\frac{r^{2}}{k}H_{T}\mathsf{V}\end{pmatrix} \tag{7.68}\] If \((\mathring{g},\mathring{A})\) are outgoing mode solutions, then \(\omega\) is a mode of the same type, which grows by a factor \(r\) more than \((\mathring{g},\mathring{A})\). Again we can express the linearized Einstein-Maxwell system in terms of the gauge-invariant quantities \(J,K\) defined above. 
More specifically, we adds \((\boldsymbol{\delta}\mathring{g},\boldsymbol{\delta}\mathring{A})\) built from \(L=\frac{r}{k}H_{T}\), then \(H_{T}+\boldsymbol{\delta}H_{T}=0\), and thus \((f,H_{T},K)=(J,0,K)\). Therefore, using the detailed calculation in SS6 and SSA.1 we can write the linearized Einstein-Maxwell system, acting on the new \((\mathring{g},\mathring{A})\) \[\mathring{g}=\begin{pmatrix}0\\ rJ\otimes\mathsf{V}\\ 0\end{pmatrix},\quad\mathring{A}=\begin{pmatrix}0\\ rK\mathsf{V}\end{pmatrix}\] in terms of the gauge-invariant quantities \(J,K\). Now we express \(2\mathcal{L}_{1}(\mathring{g},\mathring{A})=0\) and \(\mathcal{L}_{2}(\mathring{g},\mathring{A})=0\) in the form of (7.3) in terms of \(f^{E},H^{E}_{T}\) and \(K^{E}\) respectively. Then the linearized Einstein-Maxwell system reads \[rf^{E} =r^{-2}\mathring{\mathring{\delta}}(r^{4}\mathring{\mathring{d}}( r^{-1}J))+\frac{k^{2}-1}{r}J-4\mathbf{Q}r^{-2}\mathring{*}\mathring{\mathring{d}}(rK)=0, \tag{7.69a}\] \[-\frac{2r^{2}}{k}H^{E}_{T} =-2\mathring{\mathring{\delta}}(rJ)=0,\] (7.69b) \[rK^{E} =-\mathring{\square}(rK)+\frac{k^{2}+1}{r}K+\mathbf{Q}\mathring{* }\mathring{d}(r^{-1}J)=0. \tag{7.69c}\] By (7.69b), one adds the zero term \(\mathring{\mathring{d}}\mathring{\mathring{\delta}}(rJ)\) to (7.69a) and then can obtain a wave equation for \(J\) (coupled to \(K\) via subprincipal terms) as \((\mathring{\mathring{\delta}}\mathring{d}+\mathring{d}\mathring{\delta})J=(- \mathring{\square}-\frac{1}{2}\mu^{\prime\prime}_{b_{0}})J\). In conclusion, we can rewrite equation (7.69a) and (7.69c) as a principally scalar system of wave equations for \((J,K)\) \[-\mathring{\square}B-\mathscr{D}B=0\quad\text{where}\quad B=\begin{pmatrix}J \\ N\end{pmatrix} \tag{7.70}\] where \(\mathscr{D}\) is a first order stationary differential operator acting on \(C^{\infty}(\mathring{X};T^{*}\mathring{\bar{X}}\oplus T^{*}\mathring{\bar{X}})\). As discussed in the scalar perturbations, it suffices to prove \(B=0\) in the static region \(r>r_{b_{0}}\). To achieve this, we again work in the static coordinates \((t,r)\). First, by (7.69b) we have \(\mathring{\mathring{a}}\mathring{*}(rJ)=0\) (because \(\mathring{\mathring{\delta}}=\mathring{*}\mathring{\mathring{d}}\mathring{*}\)), and then \(\mathring{*}rJ=\mathring{d}(r\Phi)\) for some function \(\Phi\) on \(\mathring{X}\). Applying \(\mathring{*}r^{2}\) on both sides of equation (7.69a) yields \[\mathring{d}\Big{(}r^{4}\mathring{\mathring{\delta}}(r^{-2}\mathring{d}(r\Phi) )+(k^{2}-1)r\Phi-4\mathbf{Q}rK\Big{)}=0\] which implies \(r^{4}\mathring{\mathring{\delta}}(r^{-2}\mathring{d}(r\Phi))+(k^{2}-1)r\Phi-4 \mathbf{Q}rK=c\) for some constant \(c\). Replacing \(\Phi\) by \(\Phi-\frac{c}{(k^{2}-1)r}\), we obtain \(r^{4}\mathring{\mathring{\delta}}(r^{-2}\mathring{d}(r\Phi))+(k^{2}-1)r\Phi-4 \mathbf{Q}rK=0\). and thus \[r\mathring{\mathring{\delta}}(r^{-2}\mathring{d}(r\Phi))+\frac{(k^{2}-1)}{r^{2} }\Phi-\frac{4\mathbf{Q}}{r^{2}}K=0. \tag{7.71}\] We can also rewrite it as \[-\mathring{\square}\Phi+\frac{1}{r^{2}}\Big{(}k^{2}+1-\frac{6\mathbf{m}}{r}+ \frac{4\mathbf{Q}^{2}}{r^{2}}\Big{)}\Phi-\frac{4\mathbf{Q}}{r^{2}}K=0. 
\tag{7.72}\] Next, equation (7.69c) can be rewritten as (where we use equation (7.71)) \[-\mathring{\square}(rK)+\frac{1}{r^{2}}\Big{(}k^{2}+1+\frac{4\mathbf{Q}^{2}}{ r^{2}}\Big{)}(rK)-(k^{2}-1)\frac{\mathbf{Q}}{r^{3}}\Phi=0 \tag{7.73}\] We note that (7.72) and (7.73) recover [80, equations (4.28) and (4.30)] with \(\varOmega=r\Phi,\mathcal{A}=-rK,m_{V}=k^{2}-1,\kappa^{2}=2,q=\mathbf{Q},n=2,K= 1,\bar{\tau}=0,J=0\) there. Following [80, equations(4.34) and (4.35)], we introduce the following combinations \[\Psi_{\pm}=a_{\pm}\Phi+b_{\pm}rK \tag{7.74}\] with \[(a_{+},b_{+})=(\frac{\mathbf{Q}(k^{2}-1)}{3\mathbf{m}+\sqrt{9\mathbf{m}^{2}+4( k^{2}-1)\mathbf{Q}^{2}}},-1),\quad(a_{-},b_{-})=(1,\frac{4\mathbf{Q}}{3 \mathbf{m}+\sqrt{9\mathbf{m}^{2}+4(k^{2}-1)\mathbf{Q}^{2}}}). \tag{7.75}\] When expressed in terms of \(\Psi_{\pm}\), equations (7.72) and (7.73) are transformed into two decoupled wave equations (see [80, equations (4.37)-(4.39)]) \[\mathring{\square}\Psi_{\pm}-\frac{V_{\pm}}{\mu_{b_{0}}}\Psi_{\pm}=0 \tag{7.76}\] where \[V_{\pm}=\frac{\mu_{b_{0}}}{r^{2}}\Big{(}k^{2}+1+\frac{4\mathbf{Q}^{2}}{r^{2}}- \frac{3\mathbf{m}}{r}\pm\frac{\sqrt{9\mathbf{m}^{2}+4(k^{2}-1)\mathbf{Q}^{2}} }{r}\Big{)}. \tag{7.77}\] Suppose \((\mathring{g},\mathring{A})\in e^{-i\sigma t_{*}}\widehat{H}^{\infty,\ell}_{ \mathrm{b}}(\mathring{X};S^{2\mathrm{s}\widetilde{\mathrm{c}}\widetilde{ \mathrm{a}}\widetilde{\mathrm{b}}\widetilde{\mathrm{c}}\widetilde{\mathrm{c}} \widetilde{\mathrm{a}}\widetilde{\mathrm{b}}}\widetilde{\mathrm{c}}\widetilde{ \mathrm{a}}\widetilde{\mathrm{b}})\) are modes, then \((J,K)\in e^{-i\sigma t_{*}}\widehat{H}^{\infty,\ell-1}_{\mathrm{b}}( \mathring{X};\widetilde{\mathrm{c}}\widetilde{\mathrm{c}}\widetilde{\mathrm{a} }\widetilde{\mathrm{b}}\widetilde{\mathrm{c}})\) for \(\sigma\neq 0\), while \((J,K)\in\widehat{H}^{\infty,\ell}_{\mathrm{b}}(\mathring{X};\widetilde{ \mathrm{c}}\widetilde{\mathrm{c}}\widetilde{\mathrm{a}}\widetilde{\mathrm{b}} \widetilde{\mathrm{c}}\widetilde{\mathrm{a}}\widetilde{\mathrm{b}})\) for \(\sigma=0\). As discussed around (7.12) and (7.36) and using (7.73) in the \(\sigma=0\) case, we conclude \(\Phi,rK\in e^{-i\sigma t_{*}}\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X})\) for \(\operatorname{Im}\sigma\geq 0\), and so are \(\Psi_{\pm}\). Therefore in the static coordinates \((t,r)\), equations (7.76) take the form \[(\mu_{b_{0}}\partial_{r})^{2}\Psi_{\pm}-(V_{\pm}-\sigma^{2})\Psi_{\pm}=0.\] We observe that when \(r>r_{b_{0}}=\mathbf{m}+\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\), \(V_{+}>0\) and \[r^{2}\mu_{b_{0}}^{-1}V_{-} =(r^{2}\mu_{b_{0}}^{-1}V_{-})(r_{b_{0}})=k^{2}-1+(\mathbf{m}-\sqrt {9\mathbf{m}^{2}+4(k^{2}-1)\mathbf{Q}^{2}})/r_{b_{0}}\] \[=r_{b_{0}}^{-1}\Big{(}k^{2}\mathbf{m}+(k^{2}-1)\sqrt{\mathbf{m}^ {2}-\mathbf{Q}^{2}}-\sqrt{(4k^{2}+5)\mathbf{Q}^{2}+9(\mathbf{m}^{2}-\mathbf{Q} ^{2})}\Big{)}\] \[>r_{b_{0}}^{-1}\Big{(}k^{2}\mathbf{m}-\sqrt{4k^{2}+5}\mathbf{Q}+( k^{2}-4)\sqrt{\mathbf{m}^{2}-\mathbf{Q}^{2}}\Big{)}>0\] where we use \(k^{2}\geq 5,\mathbf{m}>\mathbf{Q}\) in the last inequality, so \(V_{-}>0\) in the static region as well. We also notice the asymptotic behavior of \(V_{\pm}\) is \(V_{\pm}=(k^{2}+1)\rho^{2}+\rho^{3}C^{\infty}(\bar{X})\) where \(\rho=1/r\). Then we can proceed as in the scalar perturbations \(l\geq 2\) case to conclude that \(\Psi_{\pm}=0\) and thus \((J,K)=0\). 
More specifically, when \(\operatorname{Im}\sigma>0\), we pair \((\mu_{b_{0}}\partial_{r})^{2}\Psi_{\pm}-(V_{\pm}-\sigma^{2})\Psi_{\pm}\) with \(\overline{\Psi}_{\pm}\) with respect to \(L^{2}(\mathbb{R}_{r_{*}};dr_{*})\) and integrate by parts, then the positivity of \(V_{\pm}\) allows us to obtain \(\Psi_{\pm}=0\). When \(\sigma\in\mathbb{R}\setminus\{0\}\), we use the boundary paring argument. Lastly, when \(\sigma=0\), we first obtain the a priori decay rate \(\Psi_{\pm}\in\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X})\subset\mathcal{A}^{\ell +1/2}(\bar{X})\subset\mathcal{A}^{-1+}(\bar{X})\) because \(-3/2<\ell<-1/2\). Next we use the Frobenius method (i.e. analyze the indicial equation) and the decay rate of \(V_{\pm}=(k^{2}+1)\rho^{2}+\rho^{3}C^{\infty}(\bar{X})\) to conclude that \(\Psi_{\pm}\sim r^{-2}\) near \(r=\infty\). So the conclusion \(\Psi_{\pm}=0\) again follows from the \(L^{2}\) pairing and integration by parts. #### 7.2.2. Modes with \(l=1\) We now let \(k^{2}=l(l+1)-1=1\). Suppose \((\dot{g},\dot{A})\) is the vector perturbations of the following form \[\dot{g}=\begin{pmatrix}0\\ rf\otimes\mathsf{V}\\ 0\end{pmatrix},\quad\dot{A}=\begin{pmatrix}0\\ rK\mathsf{V}\end{pmatrix}\quad\text{with}\quad\mathsf{V}\in\mathbf{V}_{l}\ (l=1). \tag{7.78}\] We notice that the pure gauge solutions take the form \[\boldsymbol{\delta}\dot{g}=2\delta_{g}^{*}\omega=\begin{pmatrix}0\\ r^{2}\dot{d}(r^{-1}L)\boldsymbol{\delta}\mathsf{S}\\ 0\end{pmatrix},\ \ \boldsymbol{\delta}\dot{A}=\mathcal{L}_{\omega t}A=0 \tag{7.79}\] with \[\omega=\begin{pmatrix}0\\ rL\mathsf{V}\end{pmatrix} \tag{7.80}\] where \(L\in C^{\infty}(\mathring{X})\). When adding \((\boldsymbol{\delta}\dot{g},\boldsymbol{\delta}\dot{A})\) to \((\dot{g},\dot{A})\), the quantities \(f,H_{T},K\) change by \[\boldsymbol{\delta}f=r\dot{d}(r^{-1}L),\quad\boldsymbol{\delta}K=0. \tag{7.81}\] Then we define the following gauge-invariant quantities \[\dot{d}(r^{-1}f)\in C^{\infty}(\mathring{X};\Lambda^{2}T^{*}\mathring{X}), \quad K\in C^{\infty}(\mathring{X}) \tag{7.82}\] If \(\dot{d}(r^{-1}f)=0,K=0\), since \(\mathring{X}\) is contractible, we can find \(L\in C^{\infty}(\mathring{X})\) such that \(r^{-1}f=\dot{d}(r^{-1}L)\), which implies \[(\dot{g},\dot{A})=(2\delta^{*}\omega,\mathcal{L}_{\omega t}A)\ \text{ with }\ \omega= \begin{pmatrix}0\\ rL\mathsf{V}\end{pmatrix} \tag{7.83}\] is a pure gauge solution. As explained around (7.12) and (7.36) again, we have \(L\in e^{-i\sigma t_{*}}\bar{H}_{\rm b}^{\infty,\ell}(\bar{X})\) for \(\sigma\neq 0\), while \(L\in\bar{H}_{\rm b}^{\infty,\ell-1}(\bar{X})\) for \(\sigma=0\). Again we can express the linearized Einstein-Maxwell system in terms of the gauge-invariant quantities \(\dot{d}(r^{-1}f),K\) as follows \[rf^{E} =r^{-2}\dot{\delta}(r^{4}\dot{d}(r^{-1}f))-4\mathbf{Q}r^{-2}\dot{ \star}\dot{d}(rK)=0, \tag{7.84a}\] \[rK^{E} =-\mathring{\square}(rK)+\frac{2}{r}K+\mathbf{Q}\dot{\star}\dot{ d}(r^{-1}f)=0. \tag{7.84b}\] Before solving the above system, we first describe the linearized Kerr-Newman solution in which we linearize the Kerr-Newman solution family \((g_{b},A_{b})\) around Reissner-Nordstrom solution \((g_{b_{0}},A_{b_{0}})\) with respect to the angular momentum parameter \(\mathbf{a}\) in the direction \(\dot{\mathbf{a}}\), which is parallel to the rotation axis of \(\mathsf{V}\). 
We see that in the static coordinates \[g^{\prime}_{(\mathbf{m},0,\mathbf{Q})}(0,\dot{\mathbf{a}},0)=2(\mu_{\mathrm{b}_ {0}}-1)\sin^{2}\theta dtd\varphi,\quad A^{\prime}_{(\mathbf{m},0,\mathbf{Q})}(0, \dot{\mathbf{a}},0)=-\frac{\mathbf{Q}}{r}\sin^{2}\theta d\varphi \tag{7.85}\] where \(|\dot{\mathbf{a}}|=1\) and \((\theta,\varphi)\) are the spherical coordinates adapted to \(\dot{\mathbf{a}}\), i.e, \(\dot{\mathbf{a}}\) is defined by \(\theta=0\). We note that (7.85) is of the form (7.78) with \(f=(-2\mathbf{m}r^{-2}+\mathbf{Q}^{2}r^{-3})dt,K=-\mathbf{Q}r^{-2}\) and \(\mathsf{V}=\sin^{2}\theta d\phi\). Then the associated gauge-invariant quantities are \(\dot{*}\dot{d}(r^{-1})=6\mathbf{m}r^{-4}-4\mathbf{Q}^{2}r^{-5},K=-\mathbf{Q}r^ {-2}\). Now we turn to solving the equations (7.84a) and (7.84b). First (7.84a) implies \(\dot{\hat{d}}\Big{(}r^{4}\dot{*}\hat{d}(r^{-1}f)-4\mathbf{Q}rK\Big{)}=0\) and thus \[r^{4}\dot{*}\hat{d}(r^{-1}f)-4\mathbf{Q}rK=c \tag{7.86}\] for some constant \(c\). Plugging this into the equation (7.84b) yields \[(\dot{\overline{\sqcup}}-V)(rK)=\frac{c\mathbf{Q}}{r^{4}}\quad\text{where}\quad V =\frac{2}{r^{2}}+\frac{4\mathbf{Q}^{2}}{r^{4}}.\] Since \(V>0\) and the right-handed side is stationary, as discussed previously, \(rK=0,c=0\) is the only mode solution with frequency \(\sigma\neq 0\) to the above equation, and thus \(\dot{d}(r^{-1}f)=0\) as well. As for the case \(\sigma=0\), since \(V=2\rho^{2}+4\mathbf{Q}^{2}\rho^{4}\) where \(\rho=1/r\), using the Frobenius method and analyzing the indicial equation, it follows that for each \(c\in\mathbb{R}\), there is at most one solution \(K\in\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X})\) for \(-3/2<\ell<-1/2\). On the other hand, one can verify \[K=-\frac{c\mathbf{Q}}{6\mathbf{m}r^{2}}\in\bar{H}_{\mathrm{b}}^{\infty,\ell}( \bar{X})\] is indeed a solution. Returning to (7.86) we find \[\dot{*}\dot{d}(r^{-1}f)=\frac{c}{6\mathbf{m}}(\frac{6\mathbf{m}}{r^{4}}-\frac {4\mathbf{Q}^{2}}{r^{5}}).\] Therefore, replacing \((\dot{g},\dot{A})\) by \((\dot{g},\dot{A})-\frac{c}{6\mathbf{m}}(g^{\prime},A^{\prime})\), we may assume \(c=0\) and thus \(K=0,\dot{\hat{d}}(r^{-1}f)=0\), which implies \((\dot{g},\dot{A})\) is a pure gauge solution. This finishes the proof of Theorem 7.1. ## 8. Mode analysis of the scalar wave operator In this section we shall discuss the modes of the scalar wave operator on slowly rotating KN spacetimes with subextremal charge. It not only occurs as the gauge propagation operator acting on the gauge function \(D_{(g_{b},A_{b})}\widetilde{\Upsilon}_{M}(\dot{g},\dot{A};g_{b},A_{b})=- \delta_{g_{b}}\dot{A}\), but an operator on a scalar function which generate the pure gauge solutions to the Maxwell equation. We define the Fourier-transformed scalar wave operator on slowly rotating Kerr-Newman spacetime \(g_{b}\) with parameters \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) as \[\widehat{\square_{g_{b}}}(\sigma):=e^{i\sigma t_{b,*}}\square_{g_{b}}e^{-i \sigma t_{b,*}}\] where \(t_{b,*}=\chi_{0}(r)(t+r_{b_{0},*})+(1-\chi_{0}(r))(t-r_{(\mathbf{m},0,\mathbf{ Q}),*})\) is defined as in (4.30). For ease of notation, we drop \(\mathbb{C}\) in the notations of the functions space \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};\mathbb{C}),\bar{H}_{\mathrm{b}}^{s,\ell }(\bar{X};\mathbb{C}),\mathcal{A}(\bar{X};\mathbb{C})\) and \(C^{\infty}(\bar{X};\mathbb{C})\) if it is clear from the context. 
**Theorem 8.1**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and \(g=g_{b_{0}}\) be the RN metric with subextremal charge. There exists \(\epsilon>0\) such that for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) with \(|b-b_{0}|<\epsilon\), the following holds_ 1. _If_ \(\mathrm{Im}\,\sigma\geq 0\) _and_ \(\sigma\neq 0\)_,_ \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}; \mathbb{C}):\widehat{\square_{g}}(\sigma)u\in\bar{H}_{\mathrm{b}}^{s,\ell+1}( \bar{X};\mathbb{C})\}\to\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X};\mathbb{C})\] (8.1) _is invertible for_ \(s>\frac{1}{2},\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_._ 2. _If_ \(s>\frac{1}{2}\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then stationary operator_ \[\widehat{\square_{g_{b}}}(0):\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}; \mathbb{C}):\widehat{\square_{g}}(0)u\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{ X};\mathbb{C})\}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};\mathbb{C})\] (8.2) _is invertible._ Proof.: The proof closely follows [57, Theorem 6.1]. We first prove (8.1) and (8.2) for the RN metric \(g\) and then use a perturbation argument to extend them to the slowly rotating KN metric \(g_{b}\) with \(b\) near \(b_{0}\). * Injectivity of \(\widehat{\square_{g}}(0)\). Suppose \(u\in\bar{H}^{s,\ell}_{\rm b}(\bar{X})\) with \(s>\frac{1}{2},\ell\in(-\frac{3}{2},-\frac{1}{2})\) and \[\widehat{\square_{g}}(0)u=\square_{g}u=\frac{1}{r^{2}}\partial_{r}\mu_{b_{0}}r ^{2}\partial_{r}u+\frac{1}{r^{2}}\mathbbm{A}u=0,\quad\mu_{b_{0}}=1-\frac{2 \mathbf{m}_{0}}{r}+\frac{\mathbf{Q}_{0}^{2}}{r^{2}}.\] Then by Proposition 5.7, \(u\in\rho C^{\infty}(\bar{X})+\mathcal{A}^{2-}(\bar{X})\) where \(\rho=r^{-1}\), which implies \(|u|,|r\partial_{r}u|,|\not{\nabla}u|\lesssim r^{-1}\). As a consequence, the boundary term at \(r=\infty\) in the following integration by parts \[0=-\int_{\mathbb{S}^{2}}\!\!\int_{r_{b_{0}}}^{\infty}\!\square_{g}u\cdot \overline{u}\,r^{2}drd\not{g}=\int_{\mathbb{S}^{2}}\!\!\int_{r_{b_{0}}}^{ \infty}\!\!\!\left(\mu_{b_{0}}(r)|r\partial_{r}u|^{2}+|\not{\nabla}u|^{2} \right)drd\not{g}\] (8.3) vanishes, and the boundary term at \(r=r_{b_{0}}\) vanishes since \(u\) is smooth there and \(\mu_{b_{0}}(r_{b_{0}})=0\). Then \(u\) is a constant in \(r\geq r_{b_{0}}\) and thus vanishes there since \(\lim_{r\to\infty}u=0\). Since \(u\) vanishes to infinite order at \(r=r_{b_{0}}\) and smooth in \(r\leq r_{b_{0}}\), we shall prove that \(u=0\) in \(r\leq r_{b_{0}}\) as well by following the arguments in [123, Lemma 1]. Concretely, we first compute the following energy identity in \[\partial_{r}\Big{(}|r-r_{b_{0}}|^{-N}\big{(}-\mu_{b_{0}}(r)| \partial_{r}u|^{2}+\frac{1}{r^{2}}|\not{du}|_{\not{g}}^{2}+|u|^{2}\big{)} \Big{)}+2|r-r_{b_{0}}|^{-N}\frac{1}{r^{2}}\not{g}\big{(}\mathrm{Re}\left( \overline{\partial_{r}u}\not{du}u\right)\big{)}\] (8.4) where \[R(u,du)=\big{(}4r^{-1}\mu_{b_{0}}(r)-\mu_{b_{0}}^{\prime}(r)\big{)}|\partial _{r}u|^{2}+2\mathrm{Re}\left(\overline{u}\partial_{r}u\right)-2r^{-3}|\not{du }|_{\not{g}}^{2}\] is a quadratic form in \(u\) and \(du\) independent of \(N\). As a result, \(R(u,du)\) can be controlled by the first term on the right-hand side of the above energy identity (8.4) when \(N\) is sufficiently large. Then for any fixed \(r_{0}\in[r_{-},r_{b_{0}})\) we apply Stokes' Theorem in \([r_{0},r_{b_{0}}-\epsilon]\times\mathbb{S}^{2}\). 
For \(N\) large enough we have \[\int_{\mathbb{S}^{2}}\!\!\left(-\mu_{b_{0}}(r)|\partial_{r}u|^{2}+| \not{du}|^{2}+|u|^{2}\right)|_{r=r_{0}}d\not{g}\] (8.5) \[\qquad\leq(r_{b_{0}}-r_{0})^{N}\epsilon^{-N}\int_{\mathbb{S}^{2}} \!\left(-\mu_{b_{0}}(r)|\partial_{r}u|^{2}+|\not{du}|^{2}+|u|^{2}\right)|_{r=r _{b_{0}}-\epsilon}d\not{g}\leq C_{K}\epsilon^{-N+K}\] for any \(K\) as \(\epsilon\to 0^{+}\) since \(u\) is smooth and vanishes to infinite order at \(r=r_{b_{0}}\). By choosing \(K>N\) and letting \(\epsilon\to 0^{+}\) we conclude that \(u=0\) at \(r=r_{0}\). * Surjectivity of \(\widehat{\square_{g}}(0)\). The index \(0\) property of \(\widehat{\square_{g}}(0)\) established in Lemma 5.23 directly gives the surjectivity. But here we present an alternative proof of the surjectivity. To prove the surjectivity of \(\widehat{\square_{g}}(0)\), we note that it is equivalent to proving the injectivity of the adjoint \(\widehat{\square_{g}}(0)^{*}\). Suppose \(v\in H^{-s+1,-\ell-2}_{\rm b}(\bar{X})\) with \(s>\frac{1}{2},\ell\in(-\frac{3}{2},-\frac{1}{2})\) satisfies \(\widehat{\square_{g}}(0)^{*}v=\widehat{\square_{g}}(0)v=0\). By Proposition 5.7, \(v\) is smooth in \(r_{-}\leq r<r_{b_{0}}\) and \(r>r_{b_{0}}\), and \(u\in\rho C^{\infty}(\bar{X})+\mathcal{A}^{2-}(\bar{X})\) near \(r=\infty\). First we have \(v=0\) in \(r<r_{b_{0}}\) since \(\widehat{\square_{g}}(0)^{*}\) is a hyperbolic operator there with \(r\) being the timelike function, see [40, Theorem E.56]. Next, according to [56, Theorem 6.3], \(v\) is conormal at \(r=r_{b_{0}}\). Since the normal operator of \(\mu_{b_{0}}\widehat{\square_{g}}(0)^{*}\) is \((\mu_{b_{0}}\partial_{r})^{2}\) whose boundary spectrum consists of \(\{(0,0),(0,1)\}\), it follows that \(v=H(r-r_{b_{0}})(v_{0}+v_{1}\log\mu_{b_{0}}+\bar{v})\) where \(v_{0},v_{1}\in C^{\infty}(\mathbb{S}^{2})\) and \(\bar{v}\in\mathcal{A}^{1-}([r_{b_{0}},\infty))\) which means that \(\bar{v}\) is conormal at \(r=r_{b_{0}}\) and \(\bar{v}\sim(r-r_{b_{0}})^{1-}\) as \(r\to r_{b_{0}}^{+}\). Since \(\widehat{\square_{g}}(0)H(r-r_{b_{0}})=r^{-2}\partial_{r}(r^{2}\mu_{b_{0}}(r) \delta(r-r_{b_{0}}))=0\) and \(\widehat{\square_{g}}(0)(H(r-r_{b_{0}})\log\mu_{b_{0}})=\mu_{b_{0}}^{\prime}(r) \delta(r-r_{b_{0}})+2\mathbf{Q}_{0}^{2}r^{-4}H(r-r_{b_{0}})\), we have \[0=\widehat{\square_{g}}(0)v=\mu_{b_{0}}^{\prime}(r)\delta(r-r_{b_{0}})v_{1}+R\] where \[R=\frac{H(r-r_{b_{0}})}{r^{2}}\big{(}\mathbbm{A}v_{0}+\log(r-r_{b_{0}}) \mathbbm{A}v_{1}+\frac{2\mathbf{Q}_{0}^{2}}{r^{2}}v_{1}\big{)}+H(r-r_{b_{0}}) \widehat{\square_{g}}(0)\tilde{v}\in\mathcal{A}^{0-}.\] However, \(\delta(r-r_{b_{0}})\notin\mathcal{A}^{0-}\), and this implies \(v_{1}=0\). Since \(v_{1}=0\) implies \(\mu_{b_{0}}\partial_{r}v\to 0\) as \(r\to r_{b_{0}}^{+}\), we find that the integration by parts (8.3) still works here and thus gives \(v=0\) in \(r>r_{b_{0}}\). Then we can conclude that \(v\) is supported at \(r=r_{b_{0}}\), and consequently \(v\notin L^{2}\)[66, Theorem 7.1.27] unless \(v=0\). However, the radial point estimates at the event horizon imply that \(v\in H^{1/2-}\) near \(r=r_{b_{0}}\). Therefore, \(v\) must be \(0\). * Invertibility of \(\widehat{\square_{g}}(\sigma)\) for \(\sigma\in\mathbb{R}\setminus\{0\}\). According to Lemma 5.23, it suffices to prove the injectivity of \(\widehat{\square_{g}}(\sigma)\) for non-zero real \(\sigma\). 
We first illustrate the relationship between \(\widehat{\square_{g}}(\sigma)\) and \(\widehat{\square_{g}}(\sigma)=e^{i\sigma t}\square_{g}e^{-i\sigma t}\), that is, \(u\in\ker\widehat{\square_{g}}(\sigma)\) gives rise to an outgoing solution \[\widehat{\square_{g}}\tilde{u}=0\quad\text{where}\quad\tilde{u}=e^{i\sigma(t- t_{b_{0}},*)}u.\] (8.6) We next prove that \(\tilde{u}=0\) and thus \(u=0\) in \(r\geq r_{b_{0}}\) by a boundary pairing argument (see [95, SS2.3][62, SS3.2][57, SS7]), and a unique continuation theorem at infinity. Concretely, let \(f(r)\in C^{\infty}([r_{b_{0}},\infty))\) be a nonnegative function with \(f(r)=r-r_{b_{0}}\) for \(r_{b_{0}}\leq r<3\mathbf{m}_{0}\) and \(f(r)=r^{-1}\) for \(r>4\mathbf{m}_{0}\). Fix a cutoff \(\chi\in C^{\infty}([0,\infty))\) which is \(0\) on \([0,1/2]\) and \(1\) on \([1,\infty)\). For small \(\epsilon>0\), we define \[\chi_{\epsilon}(r):=\chi(\frac{f}{\epsilon})\in C^{\infty}_{c}((r_{b_{0}}, \infty)).\] (8.7) We see that \(\chi_{\epsilon}=1\) when \(r-r_{b_{0}}\geq\epsilon,r\leq\epsilon^{-1}\), and \(r-r_{b_{0}}\geq\epsilon/2,r\leq 2\epsilon^{-1}\) on \(\text{supp}\chi_{\epsilon}\). We then calculate \[0 =\lim_{\epsilon\to 0}\Big{(}\int_{\mathbb{S}^{2}}\!\int_{r_{b_{0}}}^{ \infty}\!\!\chi_{\epsilon}\tilde{u}\cdot\!\overline{\widehat{\square_{g}}( \sigma)}\tilde{u}\,r^{2}drd\!\!\not{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! and thus \(u=0\) in \(r>r_{b_{0}}\). By Proposition 5.7 again, \(u\) is smooth and vanishes to infinite order at \(r=r_{b_{0}}\). Therefore, we follow the arguments in [123, Lemma 1] again to prove \(u=0\) in \(r<r_{b_{0}}\). Since we are working in the region near the \(r=r_{b_{0}}\) where \(t_{b_{0},*}=t_{0}\), it would be more convenient to use the coordinates \((t_{b_{0}},r,\omega)\) with \(\omega\in\mathbb{S}^{2}\), and then we write \[\widehat{\square_{g}}(\sigma)u=\frac{1}{r^{2}}\partial_{r}r^{2}\mu_{b_{0}} \partial_{r}u+\frac{1}{r^{2}}\bar{\Delta}u-\frac{2i\sigma}{r}\partial_{r}ru.\] (8.12) We can again obtain the energy identity as in (8.4) with \(R(u,du)\) replaced by \[R(u,du)-2\mathrm{Re}\left(2i\sigma r^{-1}\overline{\partial_{r}u}\cdot \partial_{r}(ru)\right)\] which can still be controlled when \(N\) is sufficiently large. Proceeding in the same way as in (8.5), it follows that \(u=0\) in \(r<r_{b_{0}}\) as well. * Invertibility of \(\widehat{\square_{g}}(\sigma)\) for \(\mathrm{Im}\,\sigma>0.\) Again according to Lemma 5.23, it suffices to prove the injectivity of \(\widehat{\square_{g}}(\sigma)\) for \(\sigma\in\mathbb{C},\mathrm{Im}\,\sigma>0\). We again use relationship between \(\widehat{\square_{g}}(\sigma)\) and \(\widetilde{\square_{g}}(\sigma)\) as illustrated in (8.6). Now \(\tilde{u}\) decays exponentially when \(r_{b_{0},*}\to\pm\infty\). 
So we can apply integration by parts to conclude \[0=-\int_{\mathbb{S}^{2}}\!\int_{r_{b_{0}}}^{\infty}\!\widetilde{\square_{g}}( \sigma)\tilde{u}\cdot\overline{\tilde{u}}\,r^{2}dr\not{g}=\int_{\mathbb{S}^{2} }\!\int_{r_{b_{0}}}^{\infty}\!\!\!\left(\mu_{b_{0}}(r)|r\partial_{r}\tilde{u} |^{2}+|\not{\nabla}\tilde{u}|^{2}-\sigma^{2}\mu_{b_{0}}^{-1}(r)|\tilde{u}|^{2} \right)drd\not{g}.\] Then by taking imaginary part for \(\mathrm{Re}\,\sigma\neq 0\), while using \(-\sigma^{2}>0\) for \(\mathrm{Re}\,\sigma=0\), it follows that \(\tilde{u}=0\) and thus \(u=0\) in \(r>r_{b_{0}}\). The proof of \(u=0\) in \(r<r_{b_{0}}\) is the same as that in the non-zero real \(\sigma\) case. * Proof for \(\widehat{\square_{g_{b}}}(\sigma).\) Finally, we consider the slowly rotating KN metric \(g_{b}\), with \(b\) near \(b_{0}\). By Theorem 5.28 and high energy estimate 5.22, \(\widehat{\square_{g_{b}}}(\sigma)\) is invertible for \(|\sigma|\) large, say \(|\sigma|\geq C\), when \(|b-b_{0}|<\epsilon_{1}\), so we only need to consider the bounded region\(|\sigma|\leq C\). Near \(\sigma=0\), the invertibility of \(\widehat{\square_{g_{b}}}(\sigma)\) for \(b\) close to \(b_{0}\) follows from the perturbation argument as above and the fact that \(\widehat{\square_{g}}(0)\) is invertible. For \(\sigma\in\mathbb{C},\mathrm{Im}\,\sigma\geq 0\), bounded away from \(0\) and infinity, we prove the theorem by perturbation arguments (starting with the invertibility of \(\widehat{\square_{g}}(\sigma)\)) in a compact set of \(\mathrm{Im}\,\sigma\geq 0\). Concretely, suppose there exist sequences \(\delta_{j}\to 0,b_{j}\to b_{0}\) such that \(\ker\widehat{\square_{g_{b_{j}}}}(\sigma_{j})\) is non-trivial. Then we can find \(u_{j}\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\cap\ker\widehat{\square_{g_{b_ {j}}}}(\sigma_{j})\) with \(\|u_{j}\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})}=1\). By the uniform Fredholm estimate 5.5, we have \(1\leq C^{\prime}\|u_{j}\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})}\). Therefore, there exists a subsequence \(u_{j}\to u\) weakly in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\) which is norm convergent in \(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\) with the limit \(u\) being non-zero and satisfying \(\widehat{\square_{g}}(0)u=0\). But this contradicts the invertibility of \(\widehat{\square_{g}}(0)\). So this proves the injectivity. The surjectivity which is equivalent to the injectivity of the adjoint \(\widehat{\square_{g_{b}}}(\sigma)^{*}\) can be proved in a similar manner. This proves that for \(s>\frac{1}{2},\ell\in(-\frac{3}{2},\frac{1}{2})\) and \(\mathrm{Im}\,\sigma\geq 0,|\sigma|<c,|b-b_{0}|<\epsilon_{2}\) for some \(c,\epsilon_{2}>0\), \[\widehat{\square_{g_{b}}}(\sigma):\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}): \widehat{\square_{g_{b}}}(\sigma)u\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X} )\}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\] is invertible. By Proposition 5.7, this implies that \(\widehat{\square_{g_{b}}}(\sigma)\) acting on the function spaces in (8.1) is injective for \(\mathrm{Im}\,\sigma\geq 0,0<|\sigma|<c\), and the invertibility follows from the fact that \(\widehat{\square_{g_{b}}}(\sigma)\) has index \(0\). This proves the theorem for \(\mathrm{Im}\,\sigma\geq 0,0\leq|\sigma|<c\) and \(|b-b_{0}|<\epsilon_{2}\). 
For \(\mathrm{Im}\,\sigma\geq 0,c\leq|\sigma|\leq C\), the fact that \(\widehat{\square_{g}}(\sigma)\) is invertible and a perturbation argument as above imply the invertibility of \(\widehat{\square_{g_{b}}}(\sigma)\) for \((b,\sigma)\) in an open neighborhood of \((b_{0},\sigma)\). Then the compactness of the region \(c\leq|\sigma|\leq C\) implies that there exists an \(\epsilon_{3}>0\) (depending on \(c,C\)) such that \(\widehat{\square_{g_{b}}}(\sigma)\) in (8.1) is invertible for \(\mathrm{Im}\,\sigma\geq 0,c\leq|\sigma|\leq C\) and \(|b-b_{0}|<\epsilon_{3}\). Therefore, the theorem holds for \(|b-b_{0}|<\epsilon\) where \(\epsilon=\min\{\epsilon_{1},\epsilon_{2},\epsilon_{3}\}\). ### Growing zero modes For later use, we now discuss the explicit form of the scalar functions in which are allowed to have more growth at infinity. **Proposition 8.2**.: _For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), we have_ \[\ker\widehat{\square_{g_{b}}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/ 2-}(\bar{X}) =\langle u_{b,s_{0}}\rangle,\quad u_{b,s_{0}}=1. \tag{8.13}\] \[\ker\widehat{\square_{g_{b}}}(0)^{*}\cap\dot{H}_{\mathrm{b}}^{- \infty,-3/2-}(\bar{X}) =\langle u_{b,s_{0}}^{*}\rangle,\quad u_{b,s_{0}}^{*}=H(r-r_{b}). \tag{8.14}\] _The following spaces_ \[\ker\widehat{\square_{g_{b}}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,- 5/2-}(\bar{X}) =\langle u_{b,s_{0}}\rangle\oplus\{u_{b,s_{1}}(\mathsf{S}): \mathsf{S}\in\mathbf{S}_{1}\}, \tag{8.15}\] \[\ker\widehat{\square_{g_{b}}}(0)^{*}\cap\dot{H}_{\mathrm{b}}^{- \infty,-5/2-}(\bar{X}) =\langle u_{b,s_{0}}^{*}\rangle\oplus\{u_{b,s_{1}}^{*}(\mathsf{S}): \mathsf{S}\in\mathbf{S}_{1}\}, \tag{8.16}\] _are \(4\)-dimensional. Moreover, the maps \(b\mapsto u_{b,s_{1}}(\mathsf{S})\) and \(b\mapsto u_{b,s_{1}}^{*}(\mathsf{S})\) can be chosen to be continuous in \(b\) with values in the respective spaces. More explicitly, we have_ \[u_{b=(\mathbf{m},0,\mathbf{Q}),s_{1}}(\mathsf{S})=(r-\mathbf{m})\mathsf{S}, \quad u_{b=(\mathbf{m},0,\mathbf{Q}),s_{1}}(\mathsf{S})=(r-\mathbf{m})H(r-r_ {b})\mathsf{S}. \tag{8.17}\] _Remark 8.3_.: We use the notation \(u_{b,s_{0}},u_{b,s_{1}}\) because they are of scalar type \(l=0\) and \(l=1\) respectively when \(b=(\mathbf{m},0,\mathbf{Q})\) is a parameter of RN metric. Proof of Proposition 8.2.: We first prove (8.13). Let \(u\in\ker\widehat{\square_{g_{b}}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/2-}( \bar{X})\). By the normal operator argument as in Proposition 5.7, \(u\) must have the form \(u=u_{0}+\tilde{u}\) where \(u_{0}\in\mathbb{C}\) and \(\tilde{u}\in\mathcal{A}^{1-}(\bar{X})\subset\bar{H}_{\mathrm{b}}^{\infty,-1/2- }(\bar{X})\). Then we have \[0=\widehat{\square_{g_{b}}}(0)u=\widehat{\square_{g_{b}}}(0)u_{0}+\widehat{ \square_{g_{b}}}(0)\tilde{u}=\widehat{\square_{g_{b}}}(0)\tilde{u}.\] In view of Theorem 8.1, \(\tilde{u}\) must be \(0\), and thus \(u=u_{0}\) is a constant. Conversely, we certainly have \(\widehat{\square_{g_{b}}}(0)1=0\). This proves (8.13). The proof of (8.14) is analogous. Let \(u\in\ker\widehat{\square_{g_{b}}}(0)\cap\dot{H}_{\mathrm{b}}^{-\infty,-3/2-}( \bar{X})\). By the normal operator argument again, we conclude that \(u=\chi u_{0}+\tilde{u}\) where \(\chi\) is a smooth cutoff with \(\chi=0\) for \(r\leq 3\mathbf{m}_{0}\), \(\chi=1\) with \(r\geq 4\mathbf{m}_{0}\), and \(u_{0}\in\mathbb{C},\tilde{u}\in\dot{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\). 
We calculate \[-\widehat{\square_{g_{b}}}(0)^{*}\chi u_{0}\in\dot{H}_{\mathrm{b}}^{-\infty, \infty}(\bar{X})\subset\dot{H}_{\mathrm{b}}^{-\infty,3/2-}(\bar{X}).\] Owing to Theorem 8.1, there exists a unique \(\tilde{u}\in\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})\) satisfying \(\widehat{\square_{g_{b}}}(0)^{*}\tilde{u}=-\widehat{\square_{g_{b}}}(0)^{*} \chi u_{0}\). This proves that the space in (8.14) is at most \(1\)-dimensional. Conversely, a direct calculation implies \[\widehat{\square_{g_{b}}}(0)H(r-r_{b})=\rho_{b}^{-2}\partial_{r}(\Delta_{b} \delta(r-r_{b}))=0\] since \(\Delta_{b}|_{r=r_{b}}=0\). This proves (8.14). As for (8.15), since \(\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\subset\mathcal{A}^{-1-}(\bar{X})\), in the normal operator argument as in Proposition 5.7, we shift the contour of integration through the pole \(i\) with the space of resonant states given by \(r\mathsf{S}\) with \(\mathsf{S}\in\mathbf{S}_{1}\). That is to say, \(u\) must take the form \(u=\chi r\mathsf{S}+\tilde{u}\) where \(\chi\) is the smooth cutoff as defined above and \(\mathsf{S}\in\mathbf{S}_{1},\tilde{u}\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}( \bar{X})\). Since \(\mathbf{S}_{1}\) is \(3\)-dimensional, we find that the space in (8.15) is at most \(4\)-dimensional. Now we prove that it is indeed \(4\)-dimensional. We compute \[-\widehat{\square_{g_{b}}}(0)\chi r\mathsf{S}=-\chi\widehat{\square_{g}}(0)r \mathsf{S}+[\chi,\widehat{\square_{g}}(0)]r\mathsf{S}+(\widehat{\square_{g}}(0 )-\widehat{\square_{g_{b}}}(0))\chi r\mathsf{S}=0+\bar{H}_{\mathrm{b}}^{\infty, \infty}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X}) \tag{8.18}\] where we use \(\widehat{\square_{g}}(0)-\widehat{\square_{g_{b}}}(0)\in\rho^{3}\mathrm{Diff}_{b }^{2}\) for the third summand. Since \(\ker\widehat{\square_{g_{b}}}(0)\cap\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})=0\), there exists \(\tilde{u}\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\), which is unique modulo a multiple of \(u_{b,s_{0}}\), satisfying the above equation (8.18). Concretely, let \(v\in C_{c}^{\infty}(X^{\circ})\) such that \(\langle u_{b,s_{0}},v\rangle=1\). Then there exists a unique \(\tilde{u}\in\mathrm{ann}(v)\) satisfying the equation (8.18). Therefore, \(\chi r\mathsf{S}+\tilde{u}\), where \(\tilde{u}\) is uniquely determined by \(\mathsf{S}\) and \(b\), indeed gives rise to an element in the space (8.15) with leading order term \(r\mathsf{S}\) at infinity. This proves (8.15). The proof of (8.16) is completely analogous. Now we turn to discussing the continuous dependence of \(u_{b,s_{1}}(\mathsf{S}),u_{b,s_{1}}^{*}(\mathsf{S})\) on \(b\). Suppose \(b_{j}\to b\) and \(u_{b_{j}}(\mathsf{S})=\chi r\mathsf{S}+\tilde{u}_{b_{j}}(\mathsf{S})\in\ker \widehat{\square_{g_{b_{j}}}}\cap\dot{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\) where \(\tilde{u}_{b_{j}}(\mathsf{S})\) is in the annihilator of \(v\in C_{c}^{\infty}(X^{\circ})\) which satisfies \(\langle u_{b,s_{0}},v\rangle\neq 0\). 
Let \(u=\chi r\mathsf{S}+\tilde{u}(\mathsf{S})\in\ker\widehat{\square_{g_{b}}}(0)\) and \(e_{j}(\mathsf{S})=u_{b_{j}}(\mathsf{S})-u(\mathsf{S})=\tilde{u}_{b_{j}}( \mathsf{S})-\tilde{u}(\mathsf{S})\), and we find \[\widehat{\square_{g_{b}}}(0)e_{j}(\mathsf{S})=(\widehat{\square_{g_{b}}}(0)- \widehat{\square_{g_{j}}}(0))(\tilde{u}_{b_{j}}(\mathsf{S})+\chi r\mathsf{S}).\] Since \(\{\tilde{u}_{b_{j}}({\sf S})\}\) is uniformly bounded in \({\rm ann}(v)\) and \(\widehat{\square_{g_{b}}}(0)-\widehat{\square_{g_{b}}}(0)\in(b-b_{j})\rho^{3}{ \rm Diff}_{b}^{2}\), we see that \(\widehat{\square_{g_{b}}}(0)e_{j}({\sf S})\in(b-b_{j})\bar{H}_{\rm b}^{\infty,1 /2-}(\bar{X})\). As \(\widehat{\square_{g_{b}}}(0)|_{{\rm ann}(v)}:{\rm ann}v\to\bar{H}_{\rm b}^{ \infty,1/2-}(\bar{X})\) is invertible, it follows that \(e_{j}({\sf S})\) is of size \({\mathcal{O}}_{\bar{H}_{\rm b}^{\infty,-3/2-}(\bar{X})}(b-b_{j})\). This completes the proof of the continuity. It remains to derive the explicit expressions in (8.17). We first consider the case \(b=b_{0}\), i.e. \(g_{b}=g\). Using \(\Delta{\sf S}=-2{\sf S}\), it follow that \[\begin{split}\widehat{\square_{g}}(0)((r-{\bf m}_{0}){\sf S})& =\frac{1}{r^{2}}\Big{(}\partial_{r}(r^{2}-2{\bf m}_{0}r+{\bf Q}_{0}^ {2}){\sf S}-2(r-{\bf m}_{0}){\sf S}\Big{)}=0.\\ \widehat{\square_{g}}(0)^{*}((r-{\bf m}_{0})H(r-r_{b_{0}}){\sf S })&=H(r-r_{b_{0}})\widehat{\square_{g}}(0)((r-{\bf m}_{0}){\sf S })+2\mu_{b_{0}}\delta(r-r_{b_{0}}){\sf S}\\ &\quad+r^{-2}\partial_{r}(r^{2}\mu_{b_{0}}\delta(r-r_{b_{0}}))( r-{\bf m}_{0}){\sf S}=0\end{split} \tag{8.19}\] For the general RN metric \(g_{b}\) with parameter \(b=({\bf m},0,{\bf Q})\), we notice that the expression of \(g_{b}\) is the same as that of \(g\) with \(({\bf m}_{0},0,{\bf Q}_{0})\) replaced by \(({\bf m},0,{\bf Q})\). Therefore, the above calculation still follows and gives the explicit form of \(u_{({\bf m},0,{\bf Q}),s_{1}}({\sf S})\) and \(u_{({\bf m},0,{\bf Q}),s_{1}}^{*}({\sf S})\). ## 9. Mode analysis of the 1-form wave-type operator In this section we shall analyze the modes of the gauge propagation operator \({\mathcal{P}}_{b,\gamma}=\delta_{g_{b}}G_{g_{b}}\widetilde{\delta}_{g_{b},\gamma}^ {*}\) and the gauge potential wave operator \({\mathcal{W}}_{b,\gamma}=\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta_{g _{b}}^{*}\), both of which are wave-type operators acting on 1-form. The former one occurs as the (Einstein part) of gauge propagation operator acting on the gauge 1-form \(D_{g_{b}}\widetilde{\Upsilon}^{E}(\dot{g};g_{b})=\widetilde{\delta}_{g_{b}, \gamma}G_{g_{b}}\dot{g}\), while the latter one acts on the gauge potentials (which generate the pure gauge solutions). We also note that from this section on, the smallness of the charge \({\bf Q}\) is required, i.e., we are restricted in the weakly charged RN and KN (slowly rotating) metric. Throughout this section, the bundle we mostly deal with is scattering 1-form, so we shall drop "\(\widetilde{\rm s}\widetilde{T}^{*}\bar{X}\)" in \(\bar{H}_{\rm b}^{s,\ell}(\bar{X};\widetilde{\rm s}\widetilde{T}^{*}\bar{X}), \dot{H}_{\rm b}^{s,\ell}(\bar{X};\widetilde{\rm s}\widetilde{T}^{*}\bar{X})\) and \({\mathcal{A}}(\bar{X},\widetilde{\rm s}\widetilde{T}^{*}\bar{X})\) when it is clear from the context. 
We first recall the _linearized gauge-fixed Einstein-Maxwell system_ \[L_{b,\gamma}=(2L_{b,\gamma}^{E}(\dot{g},\dot{A}),\ L_{b,\gamma}^{M}(\dot{g}, \dot{A}))=0\] where \[L_{b,\gamma}^{E}(\dot{g},\dot{A})=D_{g_{b}}{\rm Ric}(\dot{g})-2D_ {(g_{b},dA_{b})}T(\dot{g},d\dot{A})+\widetilde{\delta}_{g_{b},\gamma}^{*} \widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\dot{g}=0, \tag{9.1}\] \[L_{b,\gamma}^{M}(\dot{g},\dot{A})=D_{(g_{b},A_{b})}(\delta_{g}dA) (\dot{g},\dot{A})+d\delta_{g_{b}}\dot{A}=0. \tag{9.2}\] We note that for \((g_{b},A_{b})\) satisfying the Einstein-Maxwell system, \((2\delta_{g_{b}}^{*}\omega,\ \mathcal{L}_{\omega t}A_{b}+d\phi)\) solves the linearized gauge-fixed Einstein-Maxwell system (9.1) and (9.2) provided that the 1-form \(\omega\) and the scalar function \(\phi\) satisfy \[\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta_{g_{b}}^{*}\omega=0,\quad \delta_{g_{b}}d\phi=-\delta_{g_{b}}(\mathcal{L}_{\omega t}A_{b}). \tag{9.3}\] Therefore, zero mode solutions to the above wave-type system of equations (9.3) give rise to pure gauge zero mode solutions to \(L_{(g_{b},A_{b}),\gamma}\). Dually, using (A.1), (A.2), (A.4) and the following calculation for \((D_{g_{b}}{\rm Ric})^{*}:S^{2}\widetilde{\rm s}\widetilde{T}^{*}\bar{X}\to S^{2 }\widetilde{\rm s}\widetilde{T}^{*}\bar{X}\), \((D_{(g_{b},dA_{b})}T)^{*}:S^{2}\widetilde{\rm s}\widetilde{T}^{*}\bar{X}\to \widetilde{\rm s}\widetilde{T}^{*}\bar{X}\to\widetilde{\rm s}\widetilde{T}^{*} \bar{X}\) and \((D_{(g_{b},A_{b})}(\delta_{g}dA))^{*}:\widetilde{\rm s}\widetilde{T}^{*}\bar{X} \to\widetilde{\rm s}\widetilde{T}^{*}\bar{X}\oplus S^{2}\widetilde{\rm s} \widetilde{T}^{*}\bar{X}\) \[(D_{g_{b}}{\rm Ric})^{*}(\dot{g})=G_{g_{b}}D_{g_{b}}{\rm Ric}(G_{g_{b}}\dot{g})\] \[(D_{(g_{b},dA_{b})}T)^{*}(\dot{g})=(-D_{(g_{b},A_{b})}(\delta_{g}dA)(\dot{g},0), \ D_{(g_{b},dA_{b})}T(\dot{g},0)-\frac{1}{2}(g_{b})_{\mu\nu}\dot{g}^{\alpha \beta}F_{\alpha}^{\ \gamma}F_{\beta\gamma}+\frac{1}{2}{\rm tr}_{g_{b}}\dot{g}F_{\mu\alpha}F_{\nu}^{ \ \alpha}),\] \[=(-D_{(g_{b},A_{b})}(\delta_{g}dA)(G_{g_{b}}\dot{g},0),\ G_{g_{b}}D_{(g_{b},dA_{b})}T (G_{g_{b}}\dot{g},0))\] \[(D_{(g_{b},A_{b})}(\delta_{g}dA))^{*}(\dot{A})=\Big{(}\delta_{g_{b}}d\dot{A},\ -D_{(g_{b},dA_{b})}T(0,d\dot{A})\Big{)}=\Big{(}d\delta_{g_{b}}\dot{A},\ -G_{g_{b}}D_{(g_{b},dA_{b})}T(0,d\dot{A})\Big{)},\] we conclude \[L_{(g_{b},A_{b}),\gamma}^{*}(\dot{g},\dot{A})=(2L_{(g_{b},A_{b}),\gamma}^{*E}( \dot{g},\dot{A}),\ 4L_{(g_{b},A_{b}),\gamma}^{*M}(\dot{g},\dot{A}))\] with \[L^{*E}_{(g_{b},A_{b}),\gamma}(\dot{g},\dot{A})=G_{g_{b}}\Big{(}D_{g_{b}}\mathrm{ Ric}-2D_{(g_{b},dA_{b})}T+\widetilde{\delta}^{*}_{g_{b},\gamma}\widetilde{\delta}_{g_ {b},\gamma}G_{g_{b}}\Big{)}(G_{g_{b}}\dot{g},\frac{1}{4}d\dot{A}), \tag{9.4}\] \[L^{*M}_{(g_{b},A_{b}),\gamma}(\dot{g},\dot{A})=\Big{(}D_{(g_{b},A_{b})}(\delta_ {g}dA)+d\delta_{g_{b}}\Big{)}(G_{g_{b}}\dot{g},\frac{1}{4}\dot{A}). \tag{9.5}\] Therefore, we have \(L^{*}_{(g_{b},A_{b}),\gamma}(2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*},4\mathcal{ L}_{\omega^{*\sharp}A}+d\phi^{*})=0\) provided that \[\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}=0, \quad\delta_{g_{b}}d\phi^{*}=-4\delta_{g_{b}}(\mathcal{L}_{\omega^{*\sharp}A _{b}}). 
\tag{9.6}\] Such _dual pure gauge_ zero modes \((2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*},4\mathcal{L}_{\omega^{*\sharp}A_{b}}+ d\phi^{*})\) give rise to zero mode solutions to \[L^{*}_{(g_{b},A_{b}),\gamma}(\dot{g},\dot{A})=0.\] _Remark 9.1_.: The reason we introduce the constraint damping, i.e., replace \(\delta^{*}_{g_{b}}\) by \(\delta^{*}_{g_{b},\gamma}\) in this manuscript, is that we need to use the invertibility of the corresponding gauge propagation operator \(\delta_{g_{b}}G_{g_{b}}\tilde{\delta}^{*}_{g_{b}}\) to exclude the generalized modes, which grows linearly in \(t_{b,*}\) and whose leading term is given by linearized (in \((\dot{\mathbf{m}},0,\dot{\mathbf{Q}})\)) KN solutions, to the linearized gauge-fixed Einstein-Maxwell equations. We also note that we use a perturbation argument to prove the invertibility of the gauge propagation operator \(\delta_{g_{b}}G_{g_{b}}\tilde{\delta}^{*}_{g_{b}}\) on the weakly charged and slowly rotating KN metrics \(g_{b}\). Moreover, we use the generalized linearized gauge condition for the Einstein part \(\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\dot{g}=0\) in this paper to exclude the zero mode of no geometric significance, see [57, Remark 10.13]. Let \(\tilde{\chi}(r)\in C^{\infty}_{c}((r_{-},3\mathbf{m}_{0}))\) such that \(\tilde{\chi}=1\) near \(r=2\mathbf{m}_{0}\) (and thus near the event horizon of \(g_{b}\) with \(b\) near \((\mathbf{m}_{0},0,\mathbf{Q}_{0})\) where \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\)). We use the following \[\mathfrak{c}=\tilde{\chi}(r)(dt_{0}-\mathfrak{b}dr),\quad t_{0}=t+r_{( \mathbf{m}_{0},0,\mathbf{Q}_{0}),*} \tag{9.7}\] with \(\mathfrak{b}\) to be determined later, to define \(\widetilde{\delta}^{*}_{g_{b},\gamma},\widetilde{\delta}_{g_{b},\gamma},L_{( g_{b},A_{b}),\gamma},L^{*}_{(g_{b},A_{b}),\gamma}\). As discussed in (9.3), we then define _gauge propagation operator_ \[\mathcal{P}_{b,\gamma}:=\delta_{g_{b}}G_{g_{b}}\widetilde{\delta}^{*}_{g_{b},\gamma} \tag{9.8}\] and _gauge potential wave operator_ \[\mathcal{W}_{b,\gamma}:=\widetilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta^{*}_ {g_{b}}. \tag{9.9}\] _Remark 9.2_.: We notice that \(\mathcal{P}_{b,\gamma}\) and \(\mathcal{W}_{b,\gamma}\) are formally dual to each other. However, on the level of function spaces, they are not because we need to work with the extendible \(b\)-Sobolev spaces for both when analyzing the modes of \(L_{(g_{b},A_{b}),\gamma}\). We also note that the _dual pure gauge_ zero mode solutions to \(L^{*}_{(g_{b},A_{b}),\gamma}(\dot{g},\dot{A})=0\) (as discussed in (9.6)) are closely related to the zero mode solutions to \(\mathcal{W}_{b,\gamma}\omega^{*}=0\). In order to study the modes of \(\mathcal{P}_{b,\gamma}\) and \(\mathcal{W}_{b,\gamma}\) for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), we define \[\widehat{\mathcal{P}_{b,\gamma}}(\sigma):=e^{i\sigma t_{b,*}}\mathcal{P}_{b, \gamma}e^{-i\sigma t_{b,*}},\quad\widehat{\mathcal{W}_{b,\gamma}}(\sigma):=e^ {i\sigma t_{b,*}}\mathcal{W}_{b,\gamma}e^{-i\sigma t_{b,*}} \tag{9.10}\] where \(t_{b,*}=(t+r_{b_{0},*})\chi_{0}(r)+(t-r_{(\mathbf{m},0,\mathbf{Q}),*})(1-\chi _{0}(r))\) is defined as in (4.30). We first record the following result about Schwarzschild metric which we rely on in the analysis of \(\widehat{\mathcal{P}_{b,\gamma}}(\sigma)\) and \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\) on weakly charged and slowly rotating KN metrics. 
**Proposition 9.3** ([57, Theorem 7.1]).: _Let \((g,M_{(\mathbf{m}_{0},0,0)})\) be a Schwarzschild spacetime with mass \(\mathbf{m}_{0}\) and \(\mathcal{P}_{0}=\delta_{g}G_{g}\delta^{*}_{g}\) be the wave operator acting on \(1\)-form. Define \(\widehat{\mathcal{P}_{0}}(\sigma):=e^{i\sigma t_{b,*}}\mathcal{P}_{0}e^{-i \sigma t_{b,*}}\) with \(t_{0,*}=(t+r_{0,*})\chi_{0}(r)+(t-r_{0,*})(1-\chi_{0}(r))\) and \(r_{0,*}=r+2\mathbf{m}_{0}\log(r-2\mathbf{m}_{0})\). Then we have_ 1. _If_ \(\mathrm{Im}\,\sigma\geq 0\) _and_ \(\sigma\neq 0\)_, then_ \[\widehat{\mathcal{P}_{0}}(\sigma):\{\omega\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{ X};\widetilde{<}\widetilde{T^{*}}\bar{X}):\widehat{\mathcal{P}_{0}}(\sigma)\omega\in \bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X};\widetilde{<}\widetilde{T^{*}}\bar{X}) \}\to\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X};\widetilde{<}\widetilde{T^{*}}\bar {X})\] (9.11) _is invertible for_ \(s>\frac{3}{2},\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_._ 2. _If_ \(s>\frac{3}{2}\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then stationary operator_ \[\widehat{\mathcal{P}_{0}}(0):\{\omega\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X}; \widetilde{<}\widetilde{T^{*}}\bar{X}):\widehat{\mathcal{P}_{0}}(0)\omega\in \bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};\widetilde{<}\widetilde{T^{*}}\bar{X}) \}\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};\widetilde{<}\widetilde{T^{*}} \bar{X})\] (9.12) _has_ \(1\)_-dimensional kernel and cokernel._ _More explicitly, with \(v=t+r_{0,*}=t+r+2\mathbf{m}_{0}\log(r-2\mathbf{m}_{0})\) we have_ \[\ker\widehat{\mathcal{P}_{0}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-1 /2-}(\bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}) =\langle\omega_{s_{0}}\rangle,\quad\omega_{s_{0}}=r^{-1}(dv-dr)\cdot \tag{9.13}\] \[\ker\widehat{\mathcal{P}_{0}}(0)^{*}\cap\mathring{H}_{\mathrm{b}} ^{-\infty,-1/2-}(\bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}) =\langle\omega_{s_{0}}^{*}\rangle,\quad\omega_{s_{0}}^{*}=\delta(r-2\mathbf{m}_ {0})dr\cdot\] _Remark 9.4_.: According to (9.13), \(\widehat{\mathcal{P}_{0}}(0)\) restricted to \(1\)-forms of scalar type \(l=1\) (and vector type \(l=1\) respectively) is invertible. Since \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\) also maps \(1\)-forms of scalar type \(l=1\) (and vector type \(l=1\) respectively) to the same type and is a small perturbation of \(\widehat{P_{0}}(0)\) for \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\) and \(\gamma\ll 1\), it follows that \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to \(1\)-forms of scalar type \(l=1\) (and \(1\)-forms of vector type \(l=1\) respectively) exists with norm \(\mathcal{O}(1)\). ### Mode stability of the gauge propagation operator We first record the following two propositions about Schwarzschild metric which are needed in the course of the proof of mode stability of the gauge propagation operator \(\mathcal{P}_{b,\gamma}\) for the weakly charged RN metric and also slowly rotating and weakly charged KN metric. **Proposition 9.5** ([57, Proposition 10.3]).: _Let \((g,M_{(\mathbf{m}_{0},0,0)})\) be the Schwarzschild spacetime with mass \(\mathbf{m}_{0}\) and \(\mathcal{P}_{0,\gamma}=\delta_{g}G_{g}\delta_{g,\gamma}^{*}\) where \(\tilde{\delta}_{g,\gamma}^{*}\) is defined as in (4.62) and (4.60) with \(\mathfrak{c}=\tilde{\chi}(r)(dv-\mathfrak{b}dr),v=t+r_{0,*}=t+r+2\mathbf{m}_{0 }\log(r-2\mathbf{m}_{0})\) and \(\mathfrak{b}\neq 1\). 
Then there exists \(\gamma_{0}>0\) such that for fixed \(\gamma\) with \(0<|\gamma|<\gamma_{0}\), the following holds for \(\widehat{\mathcal{P}_{0,\gamma}}(\sigma):=e^{i\sigma t_{0,*}}\mathcal{P}_{0, \gamma}e^{-i\sigma t_{0,*}}\) with \(t_{0,*}=(t+r_{0,*})\chi_{0}(r)+(t-r_{0,*})(1-\chi_{0}(r))\)._ \[\widehat{\mathcal{P}_{0,\gamma}}(0):\{\omega\in\bar{H}_{\mathrm{b}}^{s,\ell}( \bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}):\widehat{\mathcal{P}_{0,\gamma} }(0)\omega\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};\widetilde{\!\mathrm{s }CT^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};\widetilde{\! \mathrm{s}CT^{*}}\bar{X}) \tag{9.14}\] _with \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\) is invertible._ **Proposition 9.6** ([57, Proposition 10.12]).: _Let \(\widehat{\mathcal{P}_{0,\gamma}}(\sigma)\) be defined as in Proposition 9.5 with \(\mathfrak{b}>1\), then we have for \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\) and \(s>2,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_ \[\widehat{\mathcal{P}_{0,\gamma}}(\sigma):\{\omega\in\bar{H}_{\mathrm{b}}^{s, \ell}(\bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}):\widehat{\mathcal{P}_{0, \gamma}}(\sigma)\omega\in\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X};\widetilde{ \!\mathrm{s}CT^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X}; \widetilde{\!\mathrm{s}CT^{*}}\bar{X}) \tag{9.15}\] _is invertible._ _Remark 9.7_.: We note that \(\delta_{g,\gamma}^{*}\) defined in [57, SS10] is slightly different from that in this manuscript ((4.62) and (4.60)) in the way that the coefficient of the second term in \(B\) in [57] (which is denoted by \(E\) there) is \(-1\) instead of \(-1/2\). However, this difference has no influence on the conclusions stated in the above two propositions. In particular, it does not affect the calculation in [57, equation (10.5)]. Next we shall prove the analogues of the above two propositions for RN and slowly rotating KN metric with small charge by a perturbation argument. We define \(\mathfrak{c}\) as in (9.7) with \(\mathfrak{b}>1\). With \(\gamma_{0}\) as in Propositions 9.5 and 9.6, we fix \(\gamma\in(0,\gamma_{0})\). **Theorem 9.8**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and \(\mathfrak{b}>1\). Fix \(\gamma\in(0,\gamma_{0})\), there exists a small constant \(C(\gamma)>0\) such that for the weakly charged RN metric \(g_{b_{0}}\) with \(|\mathbf{Q}_{0}|<C(\gamma)\), the following holds._ 1. _If_ \(\operatorname{Im}\sigma\geq 0\) _and_ \(\sigma\neq 0\)_,_ \[\widehat{\mathcal{P}_{b_{0},\gamma}}(\sigma):\{\omega\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}):\widehat{\mathcal{P}_{b_{0 },\gamma}}(\sigma)\omega\in\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X}; \widetilde{\!\mathrm{s}CT^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s,\ell+1}( \bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X})\] (9.16) _is invertible when_ \(s>2,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_._ 2. _If_ \(s>2\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then stationary operator_ \[\widehat{\mathcal{P}_{b_{0},\gamma}}(0):\{\omega\in\bar{H}_{\mathrm{b}}^{s,\ell} (\bar{X};\widetilde{\!\mathrm{s}CT^{*}}\bar{X}):\widehat{\mathcal{P}_{b_{0}, \gamma}}(0)\omega\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};\widetilde{\! 
\mathrm{s}CT^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}; \widetilde{\!\mathrm{s}CT^{*}}\bar{X})\] (9.17) _is invertible._ _Both statements also hold for the weakly charged and slowly rotating KN metric \(g_{b}\) with \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\)._ Proof.: We note that a perturbation argument as in the final step in the proof of Theorem 8.1 allows us to extend Propositions 9.5 and 9.6 to weakly charged RN metrics, which however is a smooth family of Lorentzian metric on the fixed manifold \(M_{(\mathbf{m}_{0},0,0)}\) and is a pull back of the RN metric \(g_{b_{0}}\) defined in (4.7). We need to prove that Propositions 9.5 and 9.6 in fact hold for \(g_{b_{0}}\) with sufficiently small charge as well. Specifically, let \((g,M_{(\mathbf{m}_{0},0,0)})\) be the Schwarzschild spacetime \[g=-(1-\frac{2\mathbf{m}_{0}}{r})dv^{2}+2dvdr+r^{2}\not{q},\quad M_{(\mathbf{m}_ {0},0,0)}=\mathbb{R}_{v}\times[r_{-},\infty)\times\mathbb{S}^{2}\] where \(r_{-}\in(0,2\mathbf{m}_{0})\) is a fixed number and \(v=t+r_{0,*}=t+r+2\mathbf{m}_{0}\log(r-2\mathbf{m}_{0})\). We introduce another coordinates for \((g,M_{(\mathbf{m}_{0},0,0)})\): \((\tilde{t}_{\chi_{0}},r,\theta,\phi)\) where \(\tilde{t}_{\chi_{0}}=t+\int_{4\mathbf{m}_{0}}^{r}s^{-2}(s^{2}-2\mathbf{m}_{0}s )\chi_{0}(s)\,ds\). Let \(g_{b_{0}}\) be the RN metric as defined in (4.7). We then define \(\tilde{g}_{b_{0}}=(\Psi_{b_{0}})^{*}g_{b_{0}}\) as the pull back of \(g_{b_{0}}\) under a diffeomorphism \(\Psi_{b_{0}}\) \[\Psi_{b_{0}}:M_{(\mathbf{m},0,0)}\to M,\quad(t_{\chi_{0}},r,\theta,\phi)( \Psi_{b_{0}}(p))=(\tilde{t}_{\chi_{0}},r,\theta,\phi)(p).\] Then we have \[g_{b_{0}}=-\mu_{b_{0}}dt_{\chi_{0}}^{2}+2\chi_{0}dt_{\chi_{0}}dr+\frac{1-\chi _{0}^{2}}{\mu_{b_{0}}}dr^{2}+r^{2}\not{q}, \tag{9.18}\] \[\tilde{g}_{b_{0}}=-\mu_{b_{0}}d\tilde{t}_{\chi_{0}}^{2}+2\chi_{0}d\tilde{t}_{ \chi_{0}}dr+\frac{1-\chi_{0}^{2}}{\mu_{b_{0}}}dr^{2}+r^{2}\not{q}. \tag{9.19}\] We now define \[\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}:=\delta_{\tilde{g}_{b_{0}}}G_{\tilde{g }_{b_{0}}}\delta_{\tilde{g}_{b_{0}},\gamma}^{*}\] with \(\mathfrak{c}=\tilde{\chi}(r)(dv-\mathfrak{b}dr)\) and \(v=t+r_{*}\) in the definition of \(\delta_{\tilde{g}_{b_{0}},\gamma}^{*}\). In view of (9.18) and (9.19), we find that \(\mathcal{P}_{b_{0},\gamma}\) can be obtained by replacing \(\partial_{\tilde{t}_{\chi}}\) in \(\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}\) by \(\partial_{t_{\chi}}\). Since \[\widehat{\mathcal{P}_{b_{0},\gamma}}(\sigma) =e^{i\sigma t_{b_{0},*}}\mathcal{P}_{b_{0},\gamma}e^{-i\sigma t_ {b_{0},*}},\quad t_{b_{0},*}=(t+r_{b_{0},*})\chi_{0}(r)+(t-r_{b_{0},*})(1- \chi_{0}(r)),\] \[\widehat{\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}}(\sigma) =e^{i\sigma\tilde{t}_{b_{0},*}}\mathcal{P}_{\tilde{g}_{b_{0}}, \gamma}e^{-i\sigma\tilde{t}_{b_{0},*}},\quad\tilde{t}_{b_{0},*}=(t+r_{0,*}) \chi_{0}(r)+(t-r_{b_{0},*})(1-\chi_{0}(r)),\] we rewrite \[\widehat{\mathcal{P}_{b_{0},\gamma}}(\sigma) =e^{i\sigma(t_{\chi_{0}}+F_{1}(r))}\mathcal{P}_{b_{0},\gamma}e^{ -i\sigma(t_{\chi_{0}}+F_{1}(r))},\] \[\widehat{\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}}(\sigma) =e^{i\sigma(\tilde{t}_{\chi_{0}}+F_{2}(r))}\mathcal{P}_{\tilde{g}_{ b_{0}},\gamma}e^{-i\sigma(\tilde{t}_{\chi_{0}}+F_{2}(r))}.\] Therefore \[\widehat{\mathcal{P}_{b_{0},\gamma}}(\sigma)=e^{i\sigma(F_{1}(r)-F_{2}(r))} \widehat{\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}}(\sigma)e^{-i\sigma(F_{1}(r)- F_{2}(r))}\] where \(F_{1}(r)-F_{2}(r)\in C^{\infty}_{c}((2\mathbf{m}_{0},\infty))\). 
As discussed at the beginning, Propositions 9.5 and 9.6 hold for \(\widehat{\mathcal{P}_{\tilde{g}_{b_{0}},\gamma}}(\sigma)\), and thus for \(\widehat{\mathcal{P}_{\tilde{b}_{0},\gamma}}(\sigma)\) as well with \(|\mathbf{Q}_{0}|<C(\gamma)\) where \(C(\gamma)\) is a sufficiently small constant. Finally it remains to analyze the weakly charged and slowly rotating KN case. It follows from a perturbation argument as in the final step in the proof of Theorem 8.1. ### Mode stability of the gauge potential wave operator Now we turn to the analysis of the gauge potential wave operator \(\mathcal{W}_{b,\gamma}=\widetilde{\delta}_{g_{b},\gamma}\mathcal{G}_{g_{b}} \delta_{g_{b}}^{*}\). We will first prove the analogues of Propositions 9.5 and 9.6 for \(\mathcal{W}_{0,\gamma}\). We note that the proofs closely follow [57, Proposition 10.3, 10.12]. However, we present the proofs in detail for completeness here. Then proceeding as in Theorem 9.8, we can obtain its counterpart for \(\mathcal{W}_{b,\gamma}\). **Proposition 9.9**.: _Let \((g,M_{(\mathbf{m}_{0},0,0)})\) be the Schwarzschild spacetime with mass \(\mathbf{m}_{0}\) and \(\mathcal{W}_{0,\gamma}=\tilde{\delta}_{g,\gamma}G_{g}\delta_{g}^{*}\) where \(\tilde{\delta}_{g,\gamma}\) is defined as in (4.62) and (4.61) with \(\mathfrak{c}=\tilde{\chi}(r)(dv-\mathfrak{b}dr),v=t+r_{0,*}=t+r+2\mathbf{m}_{0} \log(r-2\mathbf{m}_{0})\) and \(\mathfrak{b}\neq 1\). Then there exists \(\gamma_{0}>0\) such that for fixed \(\gamma\) with \(0<|\gamma|<\gamma_{0}\), the following holds for \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma):=e^{i\sigma t_{0,*}}\mathcal{W}_{0, \gamma}e^{-i\sigma t_{0,*}}\) with \(t_{0,*}=(t+r_{0,*})\chi_{0}(r)+(t-r_{0,*})(1-\chi_{0}(r))\)._ \[\widehat{\mathcal{W}_{0,\gamma}}(0):\{\omega\in\bar{H}_{\mathrm{b}}^{s,\ell}( \bar{X};\widehat{\mathrm{c}T}^{*}\bar{X}):\widehat{\mathcal{W}_{0,\gamma}}(0) \omega\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s-1, \ell+2}(\bar{X}) \tag{9.20}\] _with \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\) is invertible._ Proof.: Fix \(-\frac{3}{2}<\ell<-\frac{1}{2},s>2\) and let \[\mathcal{X}^{s,\ell}:=\{u\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}):\widehat{ \mathcal{W}_{0,\gamma}}(0)u\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\}.\] In view of Lemma 5.25, we know that \(\widehat{\mathcal{W}_{0,\gamma}}(0):\mathcal{X}^{s,\ell}\to\bar{H}_{\mathrm{b}}^{s -1,\ell+2}(\bar{X})\) is Fredholm of index \(0\) when \(\gamma\) is sufficiently small. As \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{\mathcal{W}_{0,\gamma_{2}}}(0)\) is a first-order differential operator with compactly supported coefficients, the space \(\mathcal{X}^{s,\ell}\) does not depend on \(\gamma\). 
We split the domain \(\mathcal{X}^{s,\ell}\) and the target space \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\) as follows \[\mathcal{X}^{s,\ell} =\mathcal{K}^{\perp}\oplus\mathcal{K},\quad\mathcal{K}=\ker\widehat {\mathcal{P}_{0}}(0)\cap\mathcal{X}^{s,\ell}=\langle\omega_{s_{0}}\rangle\] \[\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}) =\mathcal{R}\oplus\mathcal{R}^{\perp},\quad\mathcal{R}=\mathrm{ ran}_{\mathcal{X}^{s,\ell}}\widehat{\mathcal{P}_{0}}(0)=\mathrm{ann}\;\omega_{s_{0}}^{*}\] where ann means annihilator, \(\mathcal{K}^{\perp}\) is a complementary subspace to \(\mathcal{K}\) inside \(\mathcal{X}^{s,\ell}\) while \(\mathcal{R}^{\perp}=\langle\eta\rangle\) is complementary to \(\mathcal{R}\) inside \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}\) (in other word, \(\mathcal{R}^{\perp}\) is the cokernel of \(\widehat{\mathcal{P}_{0}}(0)\)), and \(\omega_{0},\omega_{0}^{*}\) are defined as in (9.13). We may choose \(\eta\in C_{c}^{\infty}(X;\widetilde{\widetilde{\mathcal{K}}^{*}}\bar{X})\) satisfying \(\langle\eta,\omega_{s_{0}}^{*}\rangle=1\). Then the operator \(\widehat{\mathcal{W}_{0}}(0)\) can be rewritten as \[\widehat{\mathcal{W}_{0,\gamma}}(0)=\begin{pmatrix}\mathcal{W}_{00}+\gamma \mathcal{W}_{00}^{\flat}&\gamma\mathcal{W}_{01}^{\flat}\\ \gamma\mathcal{W}_{10}^{\flat}&\gamma\mathcal{W}_{11}^{\flat}\end{pmatrix}. \tag{9.21}\] We notice that \(\mathcal{W}_{00}=\widehat{\mathcal{P}_{0}}(0)|_{\mathcal{K}^{\perp}}:\mathcal{ K}^{\perp}\to\mathcal{R}\) is invertible, and so is \(\mathcal{W}_{00}+\gamma\mathcal{W}_{00}^{\flat}\) provided that \(\gamma\) is small enough. Moreover, if we identify \(\mathcal{K}\cong\mathbb{C}\) via \(c\omega_{s_{0}}\mapsto c\) and \(\mathcal{R}^{\perp}\cong\mathbb{C}\) via \(c\eta\mapsto\langle c\eta,\omega_{s_{0}}^{*}\rangle=c\), correspondingly, \(\mathcal{W}_{11}^{\flat}\) can be identified as an endmorphism of \(\mathbb{C}\), and thus is simply a number which can be computed as follows. \[\mathcal{W}_{11}^{\flat} =\gamma^{-1}\langle(\widehat{\mathcal{W}_{0,\gamma}}(0)- \widehat{\mathcal{P}_{0}}(0))\omega_{s_{0}},\omega_{s_{0}}^{*}\rangle \tag{9.22}\] \[=\gamma^{-1}\langle\big{(}F_{g}G_{g}\delta_{g}^{*}\big{)}\omega_ {s_{0}},\omega_{s_{0}}^{*}\rangle=\gamma^{-1}\langle\delta_{g}^{*}\omega_{s_{ 0}},G_{g}B_{g}\omega_{s_{0}}^{*}\rangle\] \[=\langle\delta_{g}^{*}\omega_{s_{0}},\big{(}2\mathfrak{c}\otimes _{s}\omega_{s_{0}}^{*}-\frac{1}{2}g^{-1}(\mathfrak{c},\omega_{s_{0}}^{*})g \big{)}\rangle\] \[=4\pi(\mathfrak{b}-1)\] where we use \(g=-(1-\frac{2\mathfrak{m}_{0}}{r})dv^{2}+2dvdr+r^{2}\not{g}\), \(2\mathfrak{c}\otimes_{s}\omega_{s_{0}}^{*}=\delta(r-2\mathfrak{m}_{0})\tilde{ \chi}(r)(2dvdr-2\mathfrak{b}dr^{2})\), \(g^{-1}(\mathfrak{c},\omega_{s_{0}}^{*})=\delta(r-2\mathfrak{m}_{0})\tilde{ \chi}(r)(1-\mathfrak{b}(1-\frac{2\mathfrak{m}_{0}}{r}))\) and \(\delta_{g}^{*}\omega_{s_{0}}=-\frac{2\mathfrak{m}_{0}^{2}}{r^{4}}dv^{2}-2( \frac{1}{2r^{2}}+\frac{\mathfrak{m}_{0}}{r^{3}})dvdr+\frac{1}{r^{2}}dr^{2}-(1 -\frac{2\mathfrak{m}_{0}}{r})\not{g}\) in the last step. Now suppose \((\omega_{0},\omega_{1})\in(\mathcal{K}^{\perp}\oplus\mathcal{K})\cap\ker \widehat{\mathcal{W}_{0,\gamma}}(0)\), then we have \[\omega_{0}=-\gamma(\mathcal{W}_{00}+\gamma\mathcal{W}_{00}^{\flat})^{-1} \mathcal{W}_{01}^{\flat}\omega_{1}\] and thus \[\Big{(}\mathcal{W}_{11}^{\flat}-\gamma\mathcal{W}_{10}^{\flat}(\mathcal{W}_{ 00}+\gamma\mathcal{W}_{00}^{\flat})^{-1}\mathcal{W}_{01}^{\flat}\Big{)} \omega_{1}=0.\] if \(\gamma\) is nonzero. 
When \(\mathfrak{b}\neq 1\), \(\mathcal{W}_{11}^{\flat}\) is invertible, and thus so is \(\mathcal{W}_{11}^{\flat}-\gamma\mathcal{W}_{10}^{\flat}(\mathcal{W}_{00}+ \gamma\mathcal{W}_{00}^{\flat})^{-1}\mathcal{W}_{01}^{\flat}\) if \(\gamma\) is sufficiently small. So we must have \(\omega_{1}=0\) and thus \(\omega_{0}=0\). This proves the injectivity of \(\widehat{\mathcal{W}_{0,\gamma}}(0)\). The surjectivity follows from the fact that \(\widehat{\mathcal{W}_{0,\gamma}}(0)\) is Fredholm of index \(0\). As explained in (9.3) and (9.6), \(\mathcal{W}_{b,\gamma}\) acts as the gauge potential wave operator. In order to obtain the mode stability of \(L_{g_{b},\gamma}(\dot{g},\dot{A})=0\) for \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\), we need to prove that for the operator \(\mathcal{W}_{b,\gamma}\). In other words, we need to show that for suitable choices of \(\mathfrak{b}\) and small \(\gamma\), \(\mathcal{W}_{b,\gamma}\) has no modes with \(\sigma\neq 0\) in the closed upper half plane. We first prove this for Schwarzschild metric, and then the RN and slowly rotating KN metric with small charge follows from a perturbation argument as in Theorem 9.8. **Proposition 9.10**.: _Let \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) be defined as in Proposition 9.9 with \(\mathfrak{b}>1\), then there exists \(\tilde{\gamma}_{0}>0\) such that for \(0<\gamma<\tilde{\gamma}_{0}\), \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\) and \(s>2,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_ \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma):\{\omega\in\bar{H}_{\mathrm{b}}^{s, \ell}(\bar{X};\widetilde{\widetilde{\mathcal{K}}^{*}}\bar{X}):\widehat{ \mathcal{W}_{0,\gamma}}(\sigma)\omega\in\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X}; \widetilde{\widetilde{\mathcal{K}}^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s, \ell+1}(\bar{X};\widetilde{\widetilde{\mathcal{K}}^{*}}\bar{X}) \tag{9.23}\] _is invertible._ In the course of the proof of Proposition 9.10, we closely follow the arguments in [57, SS10.2]. As explained there, the proof consists of two steps: 1. We show in Lemma 9.12 that for \(\mathfrak{b}>1\) and for _all_ sufficiently small \(\gamma\), \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) has no modes in a _fixed_ neighborhood of \(\sigma=0\) in the closed upper half plane. 2. In the proof of Proposition 9.10, we proceed as in the final step in the proof of Theorem 8.1 to prove the mode stability of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) in the closed upper half plane. Fix \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\), the domains \[\mathcal{X}^{s,\ell}(\sigma):=\{\omega\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}): \widehat{\mathcal{W}_{0,\gamma}}(\sigma)\omega\in\bar{H}_{\mathrm{b}}^{s-1, \ell+2}(\bar{X})\} \tag{9.24}\] of the operators \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma):\mathcal{X}^{s,\ell}(\sigma)\to\bar{H} _{\mathrm{b}}^{s-1,\ell+2}\) now _depend_ on \(\sigma\) (but still independent of \(\gamma\)). As a consequence, we need the following technical lemma which allows us to pass to operators with fixed domain and target space. **Lemma 9.11**.: _Let \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\) and \(\mathfrak{b}\neq 1\) and fix \(\gamma_{1}\) satisfying \(0<|\gamma_{1}|<\gamma_{0}\) where \(\gamma_{0}\) is as in Proposition 9.9. Then \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma):\mathcal{X}^{s,\ell}(\sigma) \to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) is invertible for \(\sigma\in\mathbb{C}\) with \(\mathrm{Im}\,\sigma\geq 0,|\sigma|<c\) where \(c\) is some small constant. 
Moreover, \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) is continuous in \(\sigma\) with values in \(\mathcal{L}_{\mathrm{weak}}(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}),\bar{H }_{\mathrm{b}}^{s,\ell}(\bar{X}))\) (the space of linear bounded operators equipped with weak operator topology), while continuous in \(\sigma\) with values in \(\mathcal{L}_{\mathrm{op}}(\bar{H}_{\mathrm{b}}^{s-1+\epsilon,\ell+2+\epsilon}( \bar{X}),\bar{H}_{\mathrm{b}}^{s-\epsilon,\ell-\epsilon}(\bar{X}))\) (norm topology) for any \(\epsilon>0\)._ Proof.: By Theorem 5.6, we have the uniform Fredholm estimates for \(\omega\in\mathcal{X}^{s,\ell}(\sigma)\) \[\|\omega\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})}\leq C\Big{(}\|\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\omega\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2 }(\bar{X})}+\|\omega\|_{\bar{H}_{\mathrm{b}}^{s,\ell_{0},\ell_{0}}(\bar{X})} \Big{)} \tag{9.25}\] for \(\mathrm{Im}\,\sigma\geq 0\) with \(|\sigma|\) being small and \(s_{0}<s,\ell_{0}<\ell\). We now follow the arguments in [111, Propsition 4.4] to show that the compact error term \(\|\omega\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})}\) can be dropped (while we may need to take larger constant \(C\)) if \(|\sigma|\) is sufficiently small. Concretely, suppose for the sake of contradiction that there exists a sequence \(\delta_{j}\to 0\) and a sequence \(\omega_{j}\) with \(\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})}=1\) and \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})\omega_{j}\in\bar{H}_{\mathrm{ b}}^{s-1,\ell+2}(\bar{X})\) such that \(1=\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})}\geq j\|\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}\), then \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})\omega_{j}\to 0\) in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) and \(\liminf_{j\to\infty}\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar {X})}\geq C^{-1}>0\). As a consequence, there exists a subsequence \(\omega_{j}\to\omega\) weakly in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\) and strongly in \(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\) with the limit \(\omega\) being non-zero. Moreover, \[\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})\omega_{j}-\widehat{\mathcal{ W}_{0,\gamma_{1}}}(0)\omega=\big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma_{j})-\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\big{)}\omega_{j}+\widehat{ \mathcal{W}_{0,\gamma_{1}}}(0)\big{(}\omega_{j}-\omega\big{)}\to 0\quad\text{in}\quad\bar{H}_{ \mathrm{b}}^{s-2,\ell_{0}}(\bar{X})\] since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})\to\widehat{\mathcal{W}_{0, \gamma_{1}}}(0)\) as bounded operators in \(\mathcal{L}(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X}),\bar{H}_{\mathrm{ b}}^{s_{0}-2,\ell_{0}}(\bar{X}))\) and \(\omega_{j}\to\omega\) in \(\bar{H}_{\mathrm{b}}^{s_{0},\ell_{0}}(\bar{X})\). Therefore, \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\omega=0\). But this contradicts Proposition 9.9, and thus proves the injectivity of \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\) for \(\sigma\) with small \(|\sigma|\) in the closed upper half plane. Since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\) has index \(0\), it follows that \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) exists with a uniform bound in a small neighborhood of \(\sigma=0\) in the closed upper half plane. Next we prove the sequential continuity of \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) in \(\sigma\) (which is equivalent to continuity). 
Suppose \(\delta_{j}\to\delta\), one need to show \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}\to\widehat{\mathcal{W}_{0, \gamma_{1}}}(\sigma)^{-1}\) in weak operator topology of \(\mathcal{L}(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X}))\). Suppose \(\{f_{j}\}\) is a sequence in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}\) which is norm converging to \(f\), let \(\omega_{j}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}f_{j}\) and we shall show that \(\omega_{j}\) converges weakly to \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}f\) in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\). In view of the uniform bound of \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}\) proved in the first step above, we see that \(\omega_{j}\) is uniformly bounded in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\). Therefore, there exists a subsequence \(\omega_{j_{k}}\to\omega\) weakly in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\), then \(f_{j_{k}}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j_{k}})\omega_{j_{k}}\to \widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\omega\) weakly in \(\bar{H}_{\mathrm{b}}^{s-2,\ell}(\bar{X})\). Meanwhile by assumption \(f_{j_{k}}\to f\) strongly in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) and thus \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\omega=f\). This implies that \(\omega\) is uniquely determined by \(f\) and independent of the subsequence. Namely, every subsequence of \(\omega_{j}\) has a subsequence converging weakly to the same \(\omega\). This means that the full sequence \(\omega_{j}\to\omega=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}f\) weakly in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\) and thus proves the continuity in the weak operator topology (Otherwise, one can find \(g\in\dot{H}_{\mathrm{b}}^{-s,\ell}(\bar{X})),\delta>0\) and a subsequence \(\{\omega_{j_{k}}\}\subset\{w_{j}\}\) such that \(|g(\omega_{j_{k}})-g(\omega)|\geq\delta\). However, by assumption \(\{\omega_{j_{k}}\}\) should have a subsequence converging weakly to \(\omega\), which is impossible). As for the continuity in the norm topology, we shall prove it by contradiction. Suppose there exists a \(\delta>0\) and a sequence \(\sigma_{j}\to\sigma\) such that \( subsequence) strongly in \(\bar{H}^{s-\epsilon,\ell-\epsilon}_{\rm b}(\bar{X})\), which contradicts (9.26). This finishes the proof of the continuity in the norm topology. With the help of the invertibility of \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\), the analysis of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) can be converted to that of another operator \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}:\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\to\bar{H}^{s-1,\ell+2}_{ \rm b}(\bar{X})\) with fixed domain and target space independent of \(\sigma\). This will enable us to prove the invertibility of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) for _all_ sufficiently small \(\gamma>0\) in a fixed small neighborhood of \(\sigma=0\) in the closed upper half plane. **Lemma 9.12**.: _Let \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\) and \({\mathfrak{b}}>1\). 
Then there exists \(C_{0}>0\) such that for all \(\gamma>0\) and \(\sigma\in\mathbb{C},\operatorname{Im}\sigma\geq 0\) with \(\gamma+|\sigma|<C_{0}\), the operator \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma):\mathcal{X}^{s,\ell}(\sigma) \to\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\) is invertible._ Proof.: Let \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}:\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\to\bar{H}^{s-1,\ell+2}_{ \rm b}(\bar{X}),\] then it suffices to prove the injectivity of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}\) provided that \({\mathfrak{b}}>1\) and \(\gamma>0,\operatorname{Im}\sigma\geq 0\) with \(\gamma+|\sigma|<C_{0}\) where \(C_{0}\) is a small constant. To this end, we split the domains and the target spaces as follows domain: \[\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})=\mathcal{K}^{\perp}\oplus \mathcal{K},\quad\mathcal{K}=\ker\widehat{\mathcal{P}_{0}}(0)\widehat{ \mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\cap\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})= \langle\tilde{\omega}\rangle\;\;\text{with}\;\;\tilde{\omega}=\widehat{ \mathcal{W}_{0,\gamma_{1}}}(0)\omega_{s_{0}}\] target: \[\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})=\mathcal{R}\oplus\mathcal{R}^{\perp}, \quad\mathcal{R}=\operatorname{ran}_{\mathcal{X}^{s,\ell}(0)}\widehat{ \mathcal{P}_{0}}(0)\] where \(\mathcal{K}^{\perp}\) and \(\mathcal{R}^{\perp}=\langle\eta\rangle\) are the complementary subspaces to \(\mathcal{K}\) and \(\mathcal{R}\) respectively inside \(\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\), and \(\omega_{s_{0}}\) is defined as in (9.13). We notice that \[\tilde{\omega}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\omega_{s_{0}}=\widehat {\left(\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{\mathcal{P}_{0}}(0)\right) \omega_{s_{0}}=\widehat{F_{g}\widehat{G_{g}}\widehat{\delta}_{g}\omega_{s_{0} }}\in C_{c}^{\infty}(X^{\circ})\] and we can identify \(\mathcal{R}^{\perp}\cong\mathbb{C}\) via \(c\eta\mapsto\langle c\eta,\omega_{s_{0}}^{*}\rangle\) where \(\omega_{s_{0}}^{*}\) is defined as in (9.13). Under these splittings, \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}\) takes the form \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}=\begin{pmatrix}\mathcal{W}_{00}(\gamma,\sigma)&\mathcal{W}_{01} (\gamma,\sigma)\\ \mathcal{W}_{10}(\gamma,\sigma)&\mathcal{W}_{11}(\gamma,\sigma)\end{pmatrix}. \tag{9.27}\] We first list several basic facts: \(\mathcal{W}_{01}(0,0),\mathcal{W}_{10}(0,0),\mathcal{W}_{11}(0,0)\) are \(0\), and \(\mathcal{W}_{00}(0,0)\) is invertible. Since \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}\in\mathcal{L}(\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X}),\bar{H}^{s-1, \ell+2}_{\rm b}(\bar{X}))\) is Fredholm of index \(0\) (see Lemma 5.25) for \(\gamma+|\sigma|\) small and the three components \((0,1),(0,1),(1,1)\) are compact operators (because they all have finite rank), we can conclude that \(\mathcal{W}_{00}(\gamma,\sigma)\) is Fredholm of index \(0\) as well for \(\gamma+|\sigma|\) small. Suppose \(\omega=(\omega_{0},\omega_{1})\in(\mathcal{K}^{\perp}\oplus\mathcal{K})\cap \ker\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}} (\sigma)^{-1}\), we have \(\mathcal{W}_{00}(\gamma,\sigma)\omega_{0}+\mathcal{W}_{01}(\gamma,\sigma) \omega_{1}=0\). 
Assume for the moment that \(\mathcal{W}_{00}(\gamma,\sigma)\) is invertible for \(\gamma+|\sigma|\) small, then \[\omega_{0}=-\mathcal{W}_{00}(\gamma,\sigma)^{-1}\mathcal{W}_{01}(\gamma,\sigma) \omega_{1} \tag{9.28}\] and thus \[\Big{(}\mathcal{W}_{11}(\gamma,\sigma)-\mathcal{W}_{10}(\gamma,\sigma) \mathcal{W}_{00}(\gamma,\sigma)^{-1}\mathcal{W}_{01}(\gamma,\sigma)\Big{)} \omega_{1}=0. \tag{9.29}\] We want to show that \(\omega_{1}=0\) and thus \(\omega_{0}=0\) for \(\gamma+|\sigma|\). small. Formally speaking, since \(\mathcal{W}_{01}(0,0),\mathcal{W}_{10}(0,0)\) are \(0\), \(\mathcal{W}_{01}(\gamma,\sigma),\mathcal{W}_{10}(\gamma,\sigma)\) are of size \(\mathcal{O}(\gamma+|\sigma|)\). Therefore, it suffices to show that \(\mathcal{W}_{11}(\gamma,\sigma)\) is nonzero modulo \(\mathcal{O}(\gamma^{2}+|\sigma|^{2})\) for \(\gamma+|\sigma|\) small. To make these arguments rigorous, we need to prove the injectivity of \(\mathcal{W}_{00}(\gamma,\sigma)\) (and thus the invertibility in view of its \(0\) index property) for \(\gamma+|\sigma|\) small and the differentiability of \(\mathcal{W}_{01},\mathcal{W}_{10}\) at \((\gamma,\sigma)=(0,0)\), and calculate the Taylor expansion of \(\mathcal{W}_{11}(\gamma,\sigma)\) at \((0,0)\) up to the first order. * Invertibility of \(\mathcal{W}_{00}\). We shall prove that for \(\gamma+|\sigma|\) small, there exists a uniform constant \(C>0\) such that \[\|\mathcal{W}_{00}(\gamma,\sigma)^{-1}\|_{\mathcal{R}\to\mathcal{K}^{\perp}} \leq C.\] (9.30) The proof is similar to that in the first step of the proof of Lemma 9.11. Specifically, we define \(\Pi:\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\to\mathcal{R}\) to be the projection onto \(\mathcal{R}\). Suppose (9.30) is not true, one can find a sequence \((\gamma_{j},\delta_{j})\to(0,0)\) and a sequence \(\{f_{j}\}\subset\mathcal{K}^{\perp}\) with \(\|f_{j}\|_{\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})}=1\) such that for \(\omega_{j}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}f_{j}\) \[1=\|f_{j}\|_{\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})}\geq j\|\widehat{\mathcal{W}_ {0,\gamma_{j}}}(\sigma_{j})\omega_{j}\|_{\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})}\] which implies \(\widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})\omega_{j}\to 0\) in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\). 
In view of Theorem 5.6, we have the following uniform Fredholm estimates for \(\gamma+|\sigma|\) small \[\begin{split}\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X })}&\leq\tilde{C}\Big{(}\|\widehat{\mathcal{W}_{0,\gamma_{j}}}( \sigma_{j})\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}+\|(I- \Pi)\widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})\omega_{j}\|_{\bar{H}_{ \mathrm{b}}^{s-1,\ell+2}(\bar{X})}\\ &\qquad\qquad\qquad+\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s_{0}, d_{0}}(\bar{X})}\Big{)}.\end{split} \tag{9.31}\] Using the identification \(\mathcal{R}^{\perp}\cong\mathbb{C}\) introduced previously, we calculate \[\begin{split}\|(I-\Pi)\widehat{\mathcal{W}_{0,\gamma_{j}}}( \sigma_{j})\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}\sim| \widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})\omega_{j},\omega_{s_{0}}^{*} \rangle|=|\langle\omega_{j},\widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})^ {*}\omega_{s_{0}}^{*}\rangle|\\ =|\langle\omega_{j},\Big{(}\widehat{\mathcal{W}_{0,\gamma_{j}}}( \sigma_{j})^{*}-\widehat{\mathcal{P}_{0}}(0)^{*}\Big{)}\omega_{s_{0}}^{*} \rangle|\to 0\quad\text{as}\quad(\gamma_{j},\sigma_{j})\to(0,0)\end{split} \tag{9.32}\] where we use \(\omega_{s_{0}}^{*}\in\dot{H}_{\mathrm{b}}^{-\frac{1}{2}-,\infty}(\bar{X})\) in the first equality and \(\widehat{\mathcal{P}_{0,\gamma_{j}}}(\sigma_{j})^{*}-\widehat{\mathcal{P}_{0} }(0)^{*}\to 0\) as bounded operators \(\mathcal{L}(\dot{H}_{\mathrm{b}}^{1-s,-\ell-2}(\bar{X}),\dot{H}_{\mathrm{b}}^ {s-s,-\ell-1}(\bar{X}))\) in the last step. Since \(\omega_{j}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}f_{j}\), we have \(\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})}\sim 1\) and thus \(\liminf_{j\to\infty}\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s_{0},d_{0}}(\bar{ X})}\geq\tilde{C}^{-1}\liminf_{j\to\infty}\|\omega_{j}\|_{\bar{H}_{\mathrm{b}}^{s, \ell}(\bar{X})}>0\). As a result, there exists a subsequence (not shown in the notation) \(\omega_{j}\to\omega\neq 0\) weakly in \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X})\). Then \(\widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})\omega_{j}\to\widehat{ \mathcal{W}_{0}}(0)\omega=\widehat{\mathcal{P}_{0}}(0)\omega\) weakly in \(\bar{H}_{\mathrm{b}}^{s-2,\ell}(\bar{X})\). Since \(\widehat{\mathcal{W}_{0,\gamma_{j}}}(\sigma_{j})\omega_{j}\to 0\) strongly in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\), we obtain \(\widehat{\mathcal{P}_{0}}(0)\omega=0\). Meanwhile, since \(\|f_{j}\|_{\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})}=1\), passing to a subsequence, we see that \(f_{j}\to f\) weakly in \(\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X})\) and strongly in \(\bar{H}_{\mathrm{b}}^{s-1-\epsilon,\ell+2-\epsilon}(\bar{X})\). Owing to Lemma 9.11 (the continuity in weak operator topology), \(\omega_{j}=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma_{j})^{-1}f_{j}\to \widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}f\) weakly in \(\bar{H}_{\mathrm{b}}^{s-\epsilon,\ell-\epsilon}\). Therefore, we obtain \(0\neq\omega=\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}f\), i.e., \(0\neq f=\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\omega\in\mathcal{K}\). It is clear that one can find \(g\in\dot{H}_{\mathrm{b}}^{1-s,\ell-2}(\bar{X})\) such that \(g(\mathcal{K}^{\perp})=0\) while \(g(\tilde{w})=1\) (\(\mathcal{K}=\langle\tilde{\omega}\rangle\)), but this contradicts the fact that \(f_{j}\to f\) weakly, and thus proves (9.30). 
From (9.30), we conclude that for \(\gamma+|\sigma|\) small, \(\mathcal{W}_{00}\) is injective and then the invertibility follows from the fact that \(\mathcal{W}_{00}\) has index \(0\). * Differentiability of \(\mathcal{W}_{01},\mathcal{W}_{10}\). We define \[\mathcal{W}_{1}(\gamma,\sigma):=\mathcal{W}_{01}(\gamma,\sigma)\oplus\mathcal{ W}_{11}(\gamma,\sigma):\mathcal{K}=\langle\tilde{\omega}\rangle\to\bar{H}_{ \mathrm{b}}^{s-1,\ell+2}(\bar{X}).\] (9.33) We will show that \(\mathcal{W}_{1}(\gamma,\sigma)\) is continuous for \(\gamma+|\sigma|\) small and differentiable at \((0,0)\). We write \[\begin{split}\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}&=\Big{(}\widehat{ \mathcal{W}_{0,\gamma}}(\sigma)-\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)+ \widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\widehat{\mathcal{W}_{0, \gamma_{1}}}(\sigma)^{-1}\\ &=\Big{(}\widehat{\mathcal{W}_{0,\gamma}}(\sigma)-\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}+I.\end{split}\] (9.34) Then it suffices to analyze the first term on the right hand side of (9.34). For the continuity, since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\tilde{\omega}\in\bar{H}_{ \mathrm{b}}^{s,\ell}(\bar{X})\) depending on \(\sigma\) continuously for any \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\) (because \(\tilde{\omega}\in C_{c}^{\infty}(X^{\circ})\) and \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) is continuous in \(\sigma\) with value in \(\mathcal{L}_{\mathrm{op}}(\bar{H}_{\mathrm{b}}^{s-1+\epsilon,\ell+2+\epsilon}( \bar{X}),\bar{H}_{\mathrm{b}}^{s-\epsilon,\ell-\epsilon}(\bar{X}))\)) and \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)-\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)\in\mathrm{Diff}_{\mathrm{b}}^{1}\) with \(C_{c}^{\infty}(X)\) coefficients and depends on \(\gamma,\sigma\) smoothly, the continuity of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}\tilde{\omega}\in C_{c}^{\infty}(X)\) in \((\gamma,\sigma)\) follows Now we turn to the analysis of the differentiability at \((0,0)\). Since \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)-\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)\) are polynomials in \(\gamma,\sigma\) with coefficients being first order differential operators, it is certainly differentiable at \((0,0)\). So it suffices to prove that \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\tilde{\omega}\) is differentiable at \(\sigma=0\). Now we claim that if \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega\in\rho C^{\infty}(\bar{X})+ \bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\), then the following resolvent identity holds \[\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}-\widehat{\mathcal{W}_{0, \gamma_{1}}}(0)^{-1}\Big{)}\omega=\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{\mathcal{W}_{0, \gamma_{1}}}(\sigma)\Big{)}\widehat{\mathcal{W} and \(\chi=0\) on \((-\infty,1/2]\). 
With the limit being strong operator limits, we have \[\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}-\widehat{\mathcal{W }_{0,\gamma_{1}}}(0)^{-1}\] \[\quad=\lim_{\epsilon\to 0}\widehat{\big{(}\mathcal{W}_{0,\gamma_{1}} (\sigma)^{-1}\chi_{\epsilon}(\rho)\widehat{\mathcal{W}_{0,\gamma_{1}}}(0) \widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}-\widehat{\mathcal{W}_{0,\gamma_{1 }}}(\sigma)^{-1}\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\chi_{\epsilon}( \rho)\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\big{)}\] \[\quad=\lim_{\epsilon\to 0}\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}\Big{(}[\chi_{\epsilon}(\rho),\widehat{\mathcal{W}_{0,\gamma_{1}} }(0)]+\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\chi_{\epsilon}(\rho)-\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\chi_{\epsilon}(\rho)\Big{)}\widehat{ \mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\] \[\quad=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\lim_{ \epsilon\to 0}\bigg{(}\big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)- \widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\big{)}\chi_{\epsilon}(\rho) \bigg{)}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\] \[\quad\quad+\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\lim _{\epsilon\to 0}\Big{(}[\chi_{\epsilon}(\rho),\widehat{\mathcal{W}_{0,\gamma_{1}} }(0)]\Big{)}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}:=I+II.\] As for \(I\), since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega\in\rho C^{\infty}(\bar{X})+ \bar{H}_{\rm b}^{\infty,1/2-}(\bar{X})\) which is annihilated modulo \(\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})\) by the normal operator \(-i\sigma\rho(\rho\partial_{\rho}-1)\) of \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\), it follows that \[\bigg{(}\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\chi_{\epsilon}(\rho)\Big{)} \widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega\] \[\quad=\chi_{\epsilon}(\rho)\Big{(}\widehat{\mathcal{W}_{0, \gamma_{1}}}(0)-\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\widehat{ \mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega+\epsilon^{-1}\chi^{\prime}(\epsilon^ {-1}\rho)\Big{(}\rho^{3}C^{\infty}(\bar{X})+\bar{H}_{\rm b}^{\infty,5/2-}(\bar {X})\Big{)}\] \[\quad\in\chi_{\epsilon}(\rho)\bar{H}_{\rm b}^{\infty,1/2-}(\bar{ X})+\epsilon^{-1}\chi^{\prime}(\epsilon^{-1}\rho)\Big{(}\rho^{3}C^{\infty}(\bar{X})+ \bar{H}_{\rm b}^{\infty,5/2-}(\bar{X})\Big{)}.\] Since the \(\bar{H}_{\rm b}^{\infty,1/2-}(\bar{X})\) norm of the second term \(\epsilon\chi^{\prime}(\epsilon^{-1}\rho)\Big{(}\rho C^{\infty}(\bar{X})+\bar{ H}_{\rm b}^{\infty,1/2-}(\bar{X})\Big{)}\) goes to \(0\) as \(\epsilon\to 0\), we arrive at \[I=\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\Big{(}\widehat{\mathcal{W }_{0,\gamma_{1}}}(0)-\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)} \widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega.\] For \(II\), since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)\in\rho^{2}{\rm Diff}_{\rm b}^{2}\), it follows that \([\chi_{\epsilon}(\rho),\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)]\) is uniform (in \(\epsilon\)) in \(\rho^{2}{\rm Diff}_{\rm b}^{1}\) and \([\chi_{\epsilon}(\rho),\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)]\) goes to \(0\) in the strong operator topology \(\mathcal{L}(\bar{H}_{\rm b}^{s,\ell}(\bar{X}),\bar{H}_{\rm b}^{s-1,\ell+2}( \bar{X}))\), and thus \(II=0\). This proves (9.35). 
Since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\tilde{\omega}=\omega_{s_{0}}=r^{ -1}(dv-dr)\in\rho C^{\infty}(\bar{X})+\bar{H}_{\rm b}^{\infty,1/2-}(\bar{X})\), applying (9.35) yields \[\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}-\widehat{\mathcal{W} _{0,\gamma_{1}}}(0)^{-1}\Big{)}\tilde{\omega}=\widehat{\mathcal{W}_{0,\gamma_{1 }}}(\sigma)^{-1}\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0 )^{-1}\tilde{\omega}. \tag{9.36}\] Let \[\tilde{\omega}(\sigma):=\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)-\widehat{ \mathcal{W}_{0,\gamma_{1}}}(\sigma)\Big{)}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0 )^{-1}\tilde{\omega}\in\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X}), \tag{9.37}\] and write \[\tilde{\omega}(\sigma)=-\sigma(\partial_{\sigma}\widehat{\mathcal{W}_{0,\gamma_{ 1}}}(0)\omega_{s_{0}})+\mathcal{O}_{\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})}(| \sigma|^{2}) \tag{9.38}\] where the notation \(\mathcal{O}_{\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})}(|\sigma|^{2})\) means that the term has norm in \(\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})\) of size \(\mathcal{O}(|\sigma|^{2})\). Using the fact that \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) is continuous in \(\sigma\) with value in \(\mathcal{L}_{\rm op}(\bar{H}_{\rm b}^{s-1+\epsilon,\ell+2+\epsilon}(\bar{X}), \bar{H}_{\rm b}^{s-\epsilon,\ell-\epsilon}(\bar{X}))\), it follows that \[\Big{(}\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}-\widehat{\mathcal{W}_{ 0,\gamma_{1}}}(0)^{-1}\Big{)}\tilde{\omega}=-\sigma\widehat{\mathcal{W}_{0, \gamma_{1}}}(0)^{-1}\partial_{\sigma}\widehat{\mathcal{W}_{0,\gamma_{1}}}(0) \omega_{s_{0}}+o_{\bar{H}_{\rm b}^{s,\ell}(\bar{X})}(|\sigma|) \tag{9.39}\] for any \(s>2,-\frac{3}{2}<\ell<-\frac{1}{2}\). This proves the differentiability of \(\mathcal{W}_{1}\) and thus \(\mathcal{W}_{01}\) at \((\gamma,\sigma)=(0,0)\). As for \(\mathcal{W}_{10}\), we proceed in a similar manner. We define \[\mathcal{W}_{2}(\gamma,\sigma):=\mathcal{W}_{10}(\gamma,\sigma)\oplus \mathcal{W}_{11}(\gamma,\sigma):\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}) \to\mathcal{R}^{\perp}\cong\mathbb{C} \tag{9.40}\] \[\omega\mapsto\langle\widehat{\mathcal{W}_{0,\gamma}}(\sigma) \widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\omega,\omega_{s_{0}}^{*}\rangle\] We will show that \(\mathcal{W}_{2}(\gamma,\sigma)\) is continuous for \(\gamma+|\sigma|\) small and differentiable at \((0,0)\). 
As \(\omega_{s_{0}}^{*}=\delta(r-2\mathbf{m}_{0})dr\in\dot{H}_{\rm b}^{-1/2-,\infty}( \bar{X})\), we can rewrite \[\begin{split}\langle\widehat{\mathcal{W}_{0, Since \(\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}\) is continuous in \(\sigma\) with value in \(\mathcal{L}_{\rm op}(\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s- \epsilon,\ell-\epsilon}(\bar{X}))\), i.e., \[\widehat{\mathcal{W}_{0,\gamma_{1}}}(\sigma)^{-1}=\widehat{\mathcal{W}_{0, \gamma_{1}}}(0)^{-1}+o_{\mathcal{L}(\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{ H}_{\rm b}^{s-\epsilon,\ell-\epsilon}(\bar{X}))}(1),\] and \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*}-\widehat{\mathcal{W}_{0}}(0)^{*}\) is a polynomial in \(\gamma,\overline{\sigma}\) with coefficients in \(\mathcal{L}(\dot{H}_{\rm b}^{1-s,-\ell-2}(\bar{X}),\dot{H}_{\rm b}^{-s,-\ell-1 }(\bar{X}))\) \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*}-\widehat{\mathcal{W}_{0}}(0)^{*}\] \[\quad=\overline{\sigma}\Big{(}\partial_{\overline{\sigma}}( \widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*})|_{(0,0)}\Big{)}+\gamma\Big{(} \partial_{\gamma}(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*})|_{(0,0)} \Big{)}+\mathcal{O}_{\mathcal{L}(\dot{H}_{\rm b}^{1-s,-\ell-2}(\bar{X}),\dot{ H}_{\rm b}^{-s,-\ell-1}(\bar{X}))}((\gamma+|\sigma|)^{2}),\] it follows that \[\langle\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W }_{0,\gamma_{1}}}(\sigma)^{-1}\omega,\omega_{s_{0}}^{*}\rangle\] \[\quad=\sigma\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^{-1}\omega, \partial_{\overline{\sigma}}(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*})|_{( 0,0)}\omega_{s_{0}}^{*}\rangle+\gamma\widehat{\mathcal{W}_{0,\gamma_{1}}}(0)^ {-1}\omega,\partial_{\gamma}(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)^{*})|_{( 0,0)}\omega_{s_{0}}^{*}\rangle\] (9.42) \[\qquad\quad+o(\gamma+|\sigma|)\cdot\|\omega\|_{\dot{H}_{\rm b}^{s-1,\ell+2}(\bar{X})}.\] This proves the differentiability of \(\mathcal{W}_{10}\) at \((0,0)\). * Taylor expansion of \(\mathcal{W}_{11}\). 
Since \(\mathcal{W}_{11}:\mathcal{K}=\langle\tilde{\omega}\rangle\to\mathcal{R}^{ \perp}\cong\mathbb{C}\) is given by \(\langle\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_ {1}}}(\sigma)^{-1}\tilde{\omega},\omega_{s_{0}}^{*}\rangle\), by (9.42) we find \[\mathcal{W}_{11}(\gamma,\sigma)=\sigma\langle\partial_{\sigma}\widehat{ \mathcal{W}_{0,\gamma}}(\sigma)|_{(0,0)}\omega_{s_{0}},\omega_{s_{0}}^{*} \rangle+\gamma\langle\partial_{\gamma}\widehat{\mathcal{W}_{0,\gamma}}(\sigma) |_{(0,0)}\omega_{s_{0}},\omega_{s_{0}}^{*}\rangle+o(\gamma+|\sigma|).\] (9.43) Since \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)|_{(0,0)}=\gamma^{-1}\big{(}\widehat {\mathcal{W}_{0,\gamma}}(0)-\widehat{\mathcal{P}_{0}}(0)\big{)}\), using the calculation of \(\mathcal{W}_{11}^{\flat}\) in (9.22) gives \[\langle\partial_{\gamma}\widehat{\mathcal{W}_{0,\gamma}}(\sigma)|_{(0,0)}\omega _{s_{0}},\omega_{s_{0}}^{*}\rangle=4\pi(\mathfrak{b}-1).\] (9.44) We notice that \(\partial_{\sigma}\widehat{\mathcal{W}_{0,\gamma}}(\sigma)|_{(0,0)}=\partial_{ \sigma}\widehat{\mathcal{P}_{0}}(\sigma)|_{\sigma=0}=-i[\mathcal{P}_{0},t_{0,*}]\), so we have \[\langle\partial_{\sigma}\widehat{\mathcal{P}_{0,\gamma}}(\sigma)|_{(0,0)} \omega_{s_{0}},\omega_{s_{0}}^{*}\rangle=-i\langle[\mathcal{P}_{0},t_{0,*}] \omega_{s_{0}},\omega_{s_{0}}^{*}\rangle=-i\langle[\mathcal{P}_{0},v]\omega_{ s_{0}},\omega_{s_{0}}^{*}\rangle=-2\pi i\] (9.45) where we use \[[\mathcal{P}_{0},v]\omega_{s_{0}} =-(G_{g}\delta_{g}^{*}\omega_{s_{0}})_{\alpha\beta}(\nabla^{ \alpha}v)+\delta_{g}G_{g}(\nabla_{\alpha}v\otimes_{s}\omega_{s_{0}})\] \[=\Big{(}(-\frac{1}{2r^{2}}+\frac{\mathbf{m}_{0}}{r^{3}})dv-\frac{1 }{r^{2}}dr\Big{)}+\Big{(}\frac{1}{2r^{2}}dv+\frac{1}{r^{2}}dr\Big{)}=\frac{ \mathbf{m}_{0}}{r^{3}}dv.\] Therefore, we arrive at \[\mathcal{W}_{11}(\gamma,\sigma)=-2\pi i\sigma+4\pi(\mathfrak{b}-1)\gamma+o( \gamma+|\sigma|).\] (9.46) * Final step. Owing to the previous discussion starting at (9.28), in order to prove that \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}\) is injective, it suffices to prove the injectivity of the following operator \[\mathcal{W}_{11}(\gamma,\sigma)-\mathcal{W}_{10}(\gamma,\sigma)\mathcal{W}_{ 00}(\gamma,\sigma)^{-1}\mathcal{W}_{01}(\gamma,\sigma):\mathcal{K}\to\mathcal{R }^{\perp}.\] (9.47) Combining the differentiability of \(\mathcal{W}_{01},\mathcal{W}_{10}\) at \((0,0)\) and (9.46) together yields \[\mathcal{W}_{11}(\gamma,\sigma)-\mathcal{W}_{10}(\gamma,\sigma)\mathcal{W}_{ 00}(\gamma,\sigma)^{-1}\mathcal{W}_{01}(\gamma,\sigma)=-2\pi i\sigma+4\pi( \mathfrak{b}-1)\gamma+o(\gamma+|\sigma|)\] which is nonzero for \(\gamma>0,\mathrm{Im}\,\sigma\geq 0\) with \(\gamma+|\sigma|\) small if we set \(\mathfrak{b}>1\), and thus is injective. Finally, the invertibility follows from the fact that \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\widehat{\mathcal{W}_{0,\gamma_{1}}}( \sigma)^{-1}\) has index \(0\) for \(\gamma+|\sigma|\) small. Now we are at the position to prove Proposition 9.10. Proof of Proposition 9.10.: The proof proceed in a similar fashion to that in the last step in the proof of Theorem 8.1. 
Specifically, by Lemma 9.12, there exists \(\gamma_{0}^{\prime}>0,C^{\prime}>0\) such that for \(0<\gamma<\gamma_{0}^{\prime}\), \(\mathrm{Im}\,\sigma\geq 0,|\sigma|<C^{\prime}\) and \(s>2,\ell\in(-\frac{3}{2},\frac{1}{2})\), \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma):\{\omega\in\bar{H}_{\rm b}^{s,\ell}( \bar{X}):\widehat{\mathcal{W}_{0,\gamma}}(\sigma)u\in\bar{H}_{\rm b}^{s-1, \ell+2}(\bar{X})\}\to\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\] is invertible. By Proposition 5.7, this implies for \(0<\gamma<\gamma_{0}^{\prime},\mathrm{Im}\,\sigma\geq 0,|\sigma|<C^{\prime}\) and \(s>2,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\), \[\widehat{\mathcal{W}_{0,\gamma}}(\sigma):\{\omega\in\bar{H}_{\rm b}^{s,\ell}( \bar{X}):\widehat{\mathcal{W}_{0,\gamma}}(\sigma)u\in\bar{H}_{\rm b}^{s,\ell+ 1}(\bar{X})\}\to\bar{H}_{\rm b}^{s,\ell+1}(\bar{X})\] is injective as well, and the invertibility follows from the fact that \(\widehat{\mathcal{P}_{0,\gamma}}(\sigma)\) has index \(0\) (see Lemma 5.25). Next, according to the high energy estimates (Theorem 5.24) and energy estimate (Proposition 5.27 and Theorem 5.28), there exists \(\gamma_{0}^{\prime\prime}>0,C^{\prime\prime}>0\) such that for \(0<\gamma<\gamma_{0}^{\prime\prime}\) and \(\operatorname{Im}\sigma\geq 0,|\sigma|>C^{\prime\prime}\), \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) defined as in (9.23) is invertible. For \(\sigma\) with \(\operatorname{Im}\sigma\geq 0,C^{\prime}\leq|\sigma|\leq C^{\prime\prime}\), the fact that \(\widehat{\mathcal{W}_{0,0}}(\sigma)=\widehat{\mathcal{P}_{0}}(\sigma)\) (see Proposition 9.3) is invertible and a perturbation argument (as in the final step in the proof of Theorem 8.1) give the invertibility of \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) for \((\gamma,\sigma)\) in an open neighborhood of \((0,\sigma)\). Then the compactness of the region \(C^{\prime}\leq|\sigma|\leq C^{\prime\prime}\) implies that there exists an \(\gamma_{0}^{\prime\prime\prime}>0\) (depending on \(C^{\prime},C^{\prime\prime}\)) such that \(\widehat{\mathcal{W}_{0,\gamma}}(\sigma)\) in (9.23) is invertible for \(\operatorname{Im}\sigma\geq 0,C^{\prime}\leq|\sigma|\leq C^{\prime\prime}\) and \(0<\gamma<\gamma_{0}^{\prime\prime\prime}\). Therefore, the proposition holds for \(0<\gamma<\tilde{\gamma}_{0}\) where \(\tilde{\gamma}_{0}=\min\{\gamma_{0}^{\prime},\gamma_{0}^{\prime\prime}, \gamma_{0}^{\prime\prime\prime}\}\). Finally, having Propositions 9.9 and 9.10 at our disposal, we are able to establish the following theorem for \(\widehat{\mathcal{W}_{b,\gamma}}(\sigma)\). **Theorem 9.13**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and \(\mathfrak{b}>2\). Fix \(\gamma\in(0,\tilde{\gamma}_{0})\), there exists a small constant \(C(\gamma)>0\) such that for the weakly charged RN metric \(g_{b_{0}}\) with \(|\mathbf{Q}_{0}|<C(\gamma)\), the following holds for_ 1. _If_ \(\operatorname{Im}\sigma\geq 0\) _and_ \(\sigma\neq 0\)_,_ \[\widehat{\mathcal{W}_{b_{0},\gamma}}(\sigma):\{\omega\in\bar{H}_{\mathrm{b}}^ {s,\ell}(\bar{X};\widetilde{\operatorname{sCT}^{*}}\bar{X}):\widehat{ \mathcal{W}_{b_{0},\gamma}}(\sigma)\omega\in\bar{H}_{\mathrm{b}}^{s,\ell+1}( \bar{X};\widetilde{\operatorname{sCT}^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^ {s,\ell+1}(\bar{X};\widetilde{\operatorname{sCT}^{*}}\bar{X})\] (9.48) _is invertible when_ \(s>2,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)_._ 2. 
_If_ \(s>2\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then stationary operator_ \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0):\{\omega\in\bar{H}_{\mathrm{b}}^{s, \ell}(\bar{X};\widetilde{\operatorname{sCT}^{*}}\bar{X}):\widehat{\mathcal{W} _{b_{0},\gamma}}(0)\omega\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X}; \widetilde{\operatorname{sCT}^{*}}\bar{X})\}\to\bar{H}_{\mathrm{b}}^{s-1, \ell+2}(\bar{X};\widetilde{\operatorname{sCT}^{*}}\bar{X})\] (9.49) _is invertible._ _Both statements also hold for the weakly charged and slowly rotating Kerr Newman metric \(g_{b}\) with \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\)._ Proof.: The proof is completely analogous to that of Theorem 9.8. ### Growing zero modes of the gauge potential wave operator As discussed around (9.3) and (9.6), the zero mode solutions to \(\mathcal{W}_{b,\gamma}\omega=0\) on extendible and supported \(b\)-Sobolev spaces are related to the pure gauge zero mode solutions of \(L_{g_{b},\gamma}(\hat{g},\hat{A})=0\) and dual pure gauge zero mode solutions of \(L_{g_{b},\gamma}^{*}(\hat{g},\hat{A})=0\), respectively. In this section we will discuss the zero mode solutions to \(\mathcal{W}_{b,\gamma}\omega=0\) which are allowed to grow at infinity and are asymptotic to translations and rotations. We will see in the next section that these growing zero modes may generate (dual) pure gauge zero modes solutions to \(L_{g_{b},\gamma}^{(*)}(\hat{g},\hat{A})=0\), which have the expected decay rate at infinity (more precisely, lie in the space \(\bar{H}_{\mathrm{b}}^{\infty,\ell}(\bar{X})\) or \(\bar{H}_{\mathrm{b}}^{-\infty,\ell}(\bar{X})\) with \(-\frac{3}{2}<\ell<-\frac{1}{2}\)). Throughout this subsection, we focus on the RN metric and slowly rotating metric with sufficiently small charge. Namely, for the parameter \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) of the RN metric, we assume that \[\mathbf{Q}_{0}\ll\mathbf{m}_{0}.\] We first discuss the growing zero modes which are asymptotic to translations. **Proposition 9.14**.: _For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), we obtain_ \[\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^ {\infty,-3/2-}(\bar{X}) =\langle\omega_{b,s_{0}}\rangle\oplus\{\omega_{b,s_{1}}(\mathsf{S}): \mathsf{S}\in\mathbf{S}_{1}\},\quad\omega_{b,s_{0}}=\partial_{t}^{\flat}, \tag{9.50}\] \[\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^ {-\infty,-3/2-}(\bar{X}) =\langle\omega_{b,s_{0}}^{*}\rangle\oplus\{\omega_{b,s_{1}}^{*} (\mathsf{S}):\mathsf{S}\in\mathbf{S}_{1}\}, \tag{9.51}\] _where \(\flat\) denotes the musical isomorphism \(V^{\flat}:=g_{b}(V,\bullet)\) and \(\omega_{b,s_{1}}(\mathsf{S})\) and \(\omega_{b,s_{1}}^{*}(\mathsf{S})\) depend on \(\mathsf{S}\in\mathbf{S}_{1}\) linearly. 
They satisfy_ \[\delta_{g_{b}}^{*}\omega_{b,s_{1}}\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}),\quad G_{g_{b}}\delta_{g_{b}}^{*}\omega_{b,\bullet}^{*}\in\dot{H}_{\mathrm{b}}^{-\infty,1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}). \tag{9.52}\] _Moreover, the maps \(b\mapsto\omega_{b,s_{0}},\omega_{b,s_{1}}(\mathsf{S})\) and \(b\mapsto\omega_{b,s_{0}}^{*},\omega_{b,s_{1}}^{*}(\mathsf{S})\) can be chosen to be continuous in \(b\) with values in the respective spaces._ _For later use, we further determine the leading terms of \(\omega_{b_{0},s_{1}}(\mathsf{S})\) and \(\omega_{b_{0},s_{1}}^{*}(\mathsf{S})\)_ \[\omega_{b_{0},s_{1}}(\mathsf{S})=du_{b_{0},s_{1}}(\mathsf{S})+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}),\quad\omega_{b_{0},s_{1}}^{*}(\mathsf{S})=du_{b_{0},s_{1}}^{*}(\mathsf{S})+\mathcal{O}_{\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}) \tag{9.53}\] _where \(u_{b_{0},s_{1}}(\mathsf{S})=(r-\mathbf{m}_{0})\mathsf{S}\) and \(u_{b_{0},s_{1}}^{*}(\mathsf{S})=(r-\mathbf{m}_{0})H(r-r_{b_{0}})\mathsf{S}\) are defined as in (8.17)._ Proof.: The proof exploits a normal operator argument as in Propositions 5.7 and 8.2. In particular, the normal operator of \(-2\widehat{\mathcal{W}_{b,\gamma}}(0)\) is equal to \(\widehat{\square_{\underline{g},1}}(0)\), which is the Euclidean Laplacian \(\Delta_{\mathbb{R}^{3}}=\rho^{2}(\rho\partial_{\rho}(\rho\partial_{\rho}-1)+\slashed{\Delta})\) tensored with the \(4\times 4\) identity matrix when working in the standard coordinate trivialization of \(\widetilde{\operatorname{scT}^{*}}\bar{X}\) and annihilates the following 1-forms \[dt,\ dx^{1},\ dx^{2},\ dx^{3}. \tag{9.54}\] We note that \(dt\) is of scalar type \(l=0\) while \(dx^{i}=\mathsf{S}dr+rd\mathsf{S}\) with \(\mathsf{S}=x^{i}/r\) (\(i=1,2,3\)) is of scalar type \(l=1\). We first prove (9.50). Let \(\omega\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\). By the normal operator argument as in Proposition 5.7 (where we shift the contour of integration through the pole \(0\) with the space of resonant states given by \(\mathbf{S}_{0}\)), \(\omega\) must take the form \(\omega=\chi\omega_{0}+\tilde{\omega}\) where \(\omega_{0}\) is in the 4-dimensional space \(\mathrm{span}\{dt,dx^{1},dx^{2},dx^{3}\}\), \(\tilde{\omega}\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) and \(\chi\) is a smooth cutoff with \(\chi=0\) for \(r\leq 3\mathbf{m}_{0}\) and \(\chi=1\) for \(r\geq 4\mathbf{m}_{0}\). This proves that the space in (9.50) is at most 4-dimensional. Now we prove that it is indeed 4-dimensional. We compute for \(\omega_{0}\in\{dt,dx^{1},dx^{2},dx^{3}\}\) \[-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi\omega_{0}=-(\widehat{\mathcal{W}_{b,\gamma}}(0)-\square_{\underline{g},1})\chi\omega_{0}-[\square_{\underline{g},1},\chi]\omega_{0}\in\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})=\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X}).\] In view of Theorem 9.13, there exists a unique \(\tilde{\omega}\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) satisfying \(\widehat{\mathcal{W}_{b,\gamma}}(0)\tilde{\omega}=-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi\omega_{0}\).
Therefore, \(\chi\omega_{0}+\tilde{\omega}\) gives rise to an element in the space (9.50) which we denote by \(\omega_{b,s_{0}}\) if \(\omega_{0}=dt\) and \(\omega_{b,s_{1}}(\mathsf{S})\) if \(\omega_{0}=d(r\mathsf{S})\in\{dx^{1},dx^{2},dx^{3}\}\). The explicit expression for \(\omega_{b,s_{0}}\) follows from the direct calculation \(\widehat{\mathcal{W}_{b,\gamma}}(0)\partial_{t}^{\flat}=\tilde{\delta}_{g_{b},\gamma}G_{g_{b}}\delta_{g_{b}}^{*}\partial_{t}^{\flat}=0\) since \(\partial_{t}\) is Killing (i.e. \(\delta_{g_{b}}^{*}\partial_{t}^{\flat}=0\)). As for the symmetric gradient, we calculate \[\delta_{g_{b}}^{*}\omega_{b,s_{1}}=\delta_{g_{b}}^{*}\tilde{\omega}+(\delta_{g_{b}}^{*}-\delta_{\underline{g}}^{*})\chi\omega_{0}+[\delta_{\underline{g}}^{*},\chi]\omega_{0}\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})=\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\] where we use \(\delta_{g_{b}}^{*}-\delta_{\underline{g}}^{*}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{1},\delta_{g_{b}}^{*}\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}\) and \(\delta_{\underline{g}}^{*}\omega_{0}=0\). Now let us turn to the continuous dependence of \(\omega_{b,s_{1}}(\mathsf{S})\) on \(b\). Suppose \(b_{j}\to b\) and \(\omega_{b_{j},s_{1}}(\mathsf{S})=\chi\omega_{0}+\tilde{\omega}_{b_{j}}(\mathsf{S})\) where \(\omega_{0}=d(r\mathsf{S})\in\{dx^{1},dx^{2},dx^{3}\}\) and \(\tilde{\omega}_{b_{j}}(\mathsf{S})\) is uniquely determined by \(b_{j}\) and \(\mathsf{S}\). Let \(\omega_{b,s_{1}}(\mathsf{S})=\chi\omega_{0}+\tilde{\omega}_{b}(\mathsf{S})\) and set \(e_{j,s_{1}}(\mathsf{S})=\omega_{b_{j},s_{1}}(\mathsf{S})-\omega_{b,s_{1}}(\mathsf{S})=\tilde{\omega}_{b_{j}}(\mathsf{S})-\tilde{\omega}_{b}(\mathsf{S})\); then \[\widehat{\mathcal{W}_{b,\gamma}}(0)e_{j,s_{1}}(\mathsf{S})=(\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b_{j},\gamma}}(0))(\tilde{\omega}_{b_{j}}(\mathsf{S})+\chi\omega_{0}).\] Since \(\{\tilde{\omega}_{b_{j}}(\mathsf{S})\}\) is uniformly bounded in \(\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) and \(\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b_{j},\gamma}}(0)\in(b-b_{j})\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2}\), we see that \(\widehat{\mathcal{W}_{b,\gamma}}(0)e_{j,s_{1}}(\mathsf{S})\in(b-b_{j})\bar{H}_{\mathrm{b}}^{\infty,3/2-}\), and thus \(e_{j,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) is of size \(O(b-b_{j})\). This proves the continuous dependence of \(\omega_{b,s_{1}}(\mathsf{S})\) on \(b\). The proof for the dual zero modes \(\omega_{b,s_{0}}^{*}\) and \(\omega_{b,s_{1}}^{*}(\mathsf{S})\) proceeds in an analogous manner (we use Theorem 9.8 because \(\widehat{\mathcal{W}_{b,\gamma}}(0)\) acting on supported \(b\)-Sobolev spaces is identical to \(\widehat{\mathcal{P}_{b,\gamma}}(0)^{*}\)). It remains to derive the expressions in (9.53).
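For orientation, note that the leading term in (9.53) matches the flat 1-forms (9.54): since \(u_{b_{0},s_{1}}(\mathsf{S})=(r-\mathbf{m}_{0})\mathsf{S}\),
\[du_{b_{0},s_{1}}(\mathsf{S})=\mathsf{S}\,dr+(r-\mathbf{m}_{0})\,d\mathsf{S}=d(r\mathsf{S})-\mathbf{m}_{0}\,d\mathsf{S},\]
and \(\mathbf{m}_{0}\,d\mathsf{S}\) decays one order faster than \(d(r\mathsf{S})=dx^{i}\), so \(du_{b_{0},s_{1}}(\mathsf{S})\) coincides with the flat 1-form \(dx^{i}\) modulo \(\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\), the error space appearing in (9.53).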
Since \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)=\delta_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}+F_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}=\frac{1}{2}(d\delta_{g_{b_{0}}}+\delta_{g_{b_{0}}}d)-\mathrm{Ric}(g_{b_{0}})+F_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*},\] we have \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)(\omega_{b_{0},s_{1}}(\mathsf{S})-du_{b_{0},s_{1}}(\mathsf{S}))=\mathrm{Ric}(g_{b_{0}})_{\alpha}^{\ \beta}(du_{b_{0},s_{1}}(\mathsf{S}))_{\beta}-F_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}du_{b_{0},s_{1}}(\mathsf{S})=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,5/2-}}(|\mathbf{Q}_{0}|^{2})+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma). \tag{9.55}\] Since \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)(\omega_{b_{0},s_{1}}(\mathsf{S})-du_{b_{0},s_{1}}(\mathsf{S}))\) is of scalar type \(l=1\) and \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to scalar type \(l=1\) 1-form spaces exists with norm of size \(\mathcal{O}(1)\) (see Remark 9.4), it follows that \(\omega_{b_{0},s_{1}}(\mathsf{S})-du_{b_{0},s_{1}}(\mathsf{S})=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2})\). Next we turn to the growing zero modes asymptotic to rotations. **Proposition 9.15**.: _For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), there exist continuous (in \(b\)) families_ \[b\mapsto\omega_{b,v_{1}}(\mathsf{V})\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X}), \tag{9.56}\] \[b\mapsto\omega_{b,v_{1}}^{*}(\mathsf{V})\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\dot{H}_{\mathrm{b}}^{-\infty,-5/2-}(\bar{X}), \tag{9.57}\] _depending linearly on \(\mathsf{V}\in\mathbf{V}_{1}\), such that_ \[\delta_{g_{b}}^{*}\omega_{b,v_{1}}(\mathsf{V})\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}),\quad G_{g_{b}}\delta_{g_{b}}^{*}\omega_{b,v_{1}}^{*}(\mathsf{V})\in\dot{H}_{\mathrm{b}}^{-\infty,1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}). \tag{9.58}\] _More explicitly, we have_ \[\omega_{b_{0},v_{1}}(\mathsf{V})=r^{2}\mathsf{V},\quad\omega_{b_{0},v_{1}}^{*}(\mathsf{V})=r^{2}H(r-r_{b_{0}})\mathsf{V}+\mathcal{O}_{\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})}(\gamma). \tag{9.59}\] Proof.: We first notice that, with respect to the RN metric, \(r^{2}\mathsf{V}\in\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\) is dual to the rotation vector field (see the calculation in (6.3)), and thus Killing. Namely, we have \(\delta_{g_{b_{0}}}^{*}r^{2}\mathsf{V}=0\) and thus \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)r^{2}\mathsf{V}=\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}r^{2}\mathsf{V}=0\). As for the KN metric \(g_{b}\) with \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), a direct calculation implies \[-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi r^{2}\mathsf{V}=-\Big{(}\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b^{\prime},\gamma}}(0)\Big{)}\chi r^{2}\mathsf{V}-[\widehat{\mathcal{W}_{b^{\prime},\gamma}}(0),\chi]r^{2}\mathsf{V}\in\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})=\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X}),\] where \(b^{\prime}=(\mathbf{m},0,\mathbf{Q})\) and we use \(\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b^{\prime},\gamma}}(0)\in\rho^{4}\mathrm{Diff}_{\mathrm{b}}^{2}\). In view of Theorem 9.13, there exists a unique \(\tilde{\omega}_{b}(\mathsf{V})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) satisfying \(\widehat{\mathcal{W}_{b,\gamma}}(0)\tilde{\omega}_{b}(\mathsf{V})=-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi r^{2}\mathsf{V}\). We then define \(\omega_{b,v_{1}}(\mathsf{V}):=\chi r^{2}\mathsf{V}+\tilde{\omega}_{b}(\mathsf{V})\).
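For concreteness, the duality used at the start of the proof can be checked directly, assuming the standard static form \(g_{b_{0}}=-\mu_{b_{0}}dt^{2}+\mu_{b_{0}}^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2})\) of the RN metric and \(\mathsf{V}=\sin^{2}\theta\,d\varphi\) (cf. the rescaling to \(|\dot{\mathbf{a}}|=1\) in the proof of Theorem 10.1 below):
\[(\partial_{\varphi})^{\flat}=g_{b_{0}}(\partial_{\varphi},\bullet)=r^{2}\sin^{2}\theta\,d\varphi=r^{2}\mathsf{V},\qquad\delta_{g_{b_{0}}}^{*}(r^{2}\mathsf{V})=\tfrac{1}{2}\mathcal{L}_{\partial_{\varphi}}g_{b_{0}}=0,\]
since \(\partial_{\varphi}\) is a Killing vector field of \(g_{b_{0}}\).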
For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\), the symmetric gradient \(\delta_{g_{b}}^{*}\omega_{b,v_{1}}(\mathsf{V})\) is given by \[\delta_{g_{b}}^{*}\omega_{b,v_{1}}(\mathsf{V})=\delta_{g_{b}}^{*}\tilde{\omega}_{b}(\mathsf{V})+(\delta_{g_{b}}^{*}-\delta_{g_{(\mathbf{m},0,\mathbf{Q})}}^{*})\chi r^{2}\mathsf{V}+[\delta_{g_{(\mathbf{m},0,\mathbf{Q})}}^{*},\chi]r^{2}\mathsf{V}\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})=\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\] where we use \(\delta_{g_{b}}^{*}-\delta_{g_{(\mathbf{m},0,\mathbf{Q})}}^{*}\in\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2},\delta_{g_{b}}^{*}\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}\) and \(\delta_{g_{(\mathbf{m},0,\mathbf{Q})}}^{*}r^{2}\mathsf{V}=0\). The proof of the continuous dependence of \(\omega_{b,v_{1}}\) on \(b\) proceeds as in Proposition 9.14. Suppose \(b_{j}=(\mathbf{m}_{j},\mathbf{a}_{j},\mathbf{Q}_{j})\to b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) and let \(\omega_{b_{j},v_{1}}(\mathsf{V})=\chi r^{2}\mathsf{V}+\tilde{\omega}_{b_{j}}(\mathsf{V})\), \(\omega_{b,v_{1}}(\mathsf{V})=\chi r^{2}\mathsf{V}+\tilde{\omega}_{b}(\mathsf{V})\) and \(e_{j}(\mathsf{V})=\omega_{b_{j},v_{1}}(\mathsf{V})-\omega_{b,v_{1}}(\mathsf{V})=\tilde{\omega}_{b_{j}}(\mathsf{V})-\tilde{\omega}_{b}(\mathsf{V})\). We find \[\begin{split}\widehat{\mathcal{W}_{b,\gamma}}(0)e_{j}(\mathsf{V})&=(\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b_{j},\gamma}}(0))(\tilde{\omega}_{b_{j}}(\mathsf{V})+\chi r^{2}\mathsf{V})\\ &=|b-b_{j}|\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2}\tilde{\omega}_{b_{j}}(\mathsf{V})+\Big{(}\big{(}\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b^{\prime},\gamma}}(0)\big{)}-\big{(}\widehat{\mathcal{W}_{b_{j},\gamma}}(0)-\widehat{\mathcal{W}_{b^{\prime}_{j},\gamma}}(0)\big{)}\Big{)}\chi r^{2}\mathsf{V}+[\widehat{\mathcal{W}_{b^{\prime},\gamma}}(0)-\widehat{\mathcal{W}_{b^{\prime}_{j},\gamma}}(0),\chi]r^{2}\mathsf{V}\\ &=|b-b_{j}|\rho^{3}\mathrm{Diff}_{\mathrm{b}}^{2}\tilde{\omega}_{b_{j}}(\mathsf{V})+|b-b_{j}|\rho^{4}\mathrm{Diff}_{\mathrm{b}}^{2}\chi r^{2}\mathsf{V}+|b-b_{j}|\rho^{\infty}\mathrm{Diff}_{\mathrm{b}}^{1}r^{2}\mathsf{V}\\ &\in|b-b_{j}|\Big{(}\bar{H}_{\mathrm{b}}^{\infty,5/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})\Big{)}\subset|b-b_{j}|\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})\end{split}\] where \(b^{\prime}_{j}=(\mathbf{m}_{j},0,\mathbf{Q}_{j})\) and \(b^{\prime}=(\mathbf{m},0,\mathbf{Q})\). As a consequence, \(e_{j}(\mathsf{V})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) is of size \(O(b-b_{j})\). This proves the continuous dependence of \(\omega_{b,v_{1}}(\mathsf{V})\) on \(b\). The statements for the dual zero modes \(\omega_{b,v_{1}}^{*}(\mathsf{V})\) can be proved in a similar way (we use Theorem 9.8 because \(\widehat{\mathcal{W}_{b,\gamma}}(0)\) acting on supported \(b\)-Sobolev spaces is identical to \(\widehat{\mathcal{P}_{b,\gamma}}(0)^{*}\)). Here, we present the details of the derivation of the expression for \(\omega_{b_{0},v_{1}}^{*}(\mathsf{V})\) in (9.59).
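The computation that opens the derivation rests on the Leibniz rule for the symmetric gradient, \(\delta_{g}^{*}(f\omega)=f\,\delta_{g}^{*}\omega+df\otimes_{s}\omega\), together with \(\delta_{g_{b_{0}}}^{*}(r^{2}\mathsf{V})=0\) from the first step of the proof:
\[\delta_{g_{b_{0}}}^{*}\big{(}r^{2}H(r-r_{b_{0}})\mathsf{V}\big{)}=H(r-r_{b_{0}})\,\delta_{g_{b_{0}}}^{*}(r^{2}\mathsf{V})+d\big{(}H(r-r_{b_{0}})\big{)}\otimes_{s}r^{2}\mathsf{V}=(r^{2}\,\delta(r-r_{b_{0}})dr)\otimes_{s}\mathsf{V},\]
on which the trace reversal \(G_{g_{b_{0}}}\) acts trivially since \(dr\) and \(\mathsf{V}\) are orthogonal.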
First, we calculate \[\delta_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}r^{2}H(r-r_{b_{0}})\mathsf{V}=\delta_{g_{b_{0}}}\Big{(}(r^{2}\delta(r-r_{b_{0}})dr)\otimes_{s}\mathsf{V}\Big{)}=-\frac{1}{2r^{2}}\partial_{r}(r^{4}\delta(r-r_{b_{0}})\mu_{b_{0}})\mathsf{V}=0.\] It follows that \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)(\omega_{b_{0},v_{1}}^{*}(\mathsf{V})-r^{2}H(r-r_{b_{0}})\mathsf{V})=-F_{g_{b_{0}}}\Big{(}(r^{2}\delta(r-r_{b_{0}})dr)\otimes_{s}\mathsf{V}\Big{)}=\mathcal{O}_{\dot{H}_{\mathrm{b}}^{-\infty,\infty}(\bar{X})}(\gamma).\] Since \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)(\omega_{b_{0},v_{1}}^{*}(\mathsf{V})-r^{2}H(r-r_{b_{0}})\mathsf{V})\) is of vector type \(l=1\) and \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to vector type \(l=1\) 1-form spaces exists with norm of size \(\mathcal{O}(1)\) (see Remark 9.4), we conclude that \[\omega_{b_{0},v_{1}}^{*}(\mathsf{V})-r^{2}H(r-r_{b_{0}})\mathsf{V}=\mathcal{O}_{\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})}(\gamma),\] which gives (9.59). We next construct growing zero modes modeled on the 1-forms \(r\mathsf{S}dt\). **Lemma 9.16**.: _For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), there exist continuous (in \(b\)) families_ \[b\mapsto\omega^{(1)}_{b,s_{1}}(\mathsf{S})\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}^{\infty,-5/2-}_{\mathrm{b}}(\bar{X}), \tag{9.60}\] \[b\mapsto\omega^{(1)*}_{b,s_{1}}(\mathsf{S})\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\dot{H}^{-\infty,-5/2-}_{\mathrm{b}}(\bar{X}), \tag{9.61}\] _depending linearly on \(\mathsf{S}\in\mathbf{S}_{1}\), such that_ \[\begin{split}\delta^{*}_{g_{b}}\omega^{(1)}_{b,s_{1}}(\mathsf{S})&=\chi dt\otimes_{s}d(r\mathsf{S})+\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X})\\ \delta^{*}_{g_{b}}\omega^{(1)*}_{b,s_{1}}(\mathsf{S})&=\chi dt\otimes_{s}d(r\mathsf{S})+\dot{H}^{-\infty,-1/2-}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X})\end{split} \tag{9.62}\] _Moreover, the leading terms of \(\omega^{(1)}_{b,s_{1}}(\mathsf{S}),\omega^{(1)*}_{b,s_{1}}(\mathsf{S})\) are given by_ \[\omega^{(1)}_{b,s_{1}}(\mathsf{S})-\chi r\mathsf{S}dt\in\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X}),\quad\omega^{(1)*}_{b,s_{1}}(\mathsf{S})-\chi r\mathsf{S}dt\in\dot{H}^{-\infty,-3/2-}_{\mathrm{b}}(\bar{X}) \tag{9.63}\] _where \(\chi\) is a smooth cutoff satisfying \(\chi=1\) for \(r\geq 4\mathbf{m}_{0}\) and \(\chi=0\) for \(r\leq 3\mathbf{m}_{0}\)._ _For later use, we further determine the leading term of \(\omega^{(1)}_{b_{0},s_{1}}(\mathsf{S})\)_ \[\omega^{(1)}_{b_{0},s_{1}}(\mathsf{S})=r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)+\mathcal{O}_{\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}). \tag{9.64}\] Proof.: The proof uses the fact that the normal operator of \(-2\widehat{\mathcal{W}_{b,\gamma}}(0)\) is equal to \(\widehat{\square_{\underline{g}}}(0)\), which is the Euclidean Laplacian \(\Delta_{\mathbb{R}^{3}}=\rho^{2}(\rho\partial_{\rho}(\rho\partial_{\rho}-1)+\slashed{\Delta})\) tensored with the \(4\times 4\) identity matrix when working in the standard coordinate trivialization of \(\widetilde{\operatorname{scT}^{*}}\bar{X}\) and annihilates the 1-forms \(x^{i}dt=r\mathsf{S}dt\) \((i=1,2,3)\) where \(\mathsf{S}=x^{i}/r\in\mathbf{S}_{1}\). Concretely, we have \[-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi r\mathsf{S}dt=-\Big{(}\widehat{\mathcal{W}_{b,\gamma}}(0)-\square_{\underline{g},1}\Big{)}\chi r\mathsf{S}dt-[\square_{\underline{g},1},\chi]r\mathsf{S}dt\in\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})+\bar{H}^{\infty,\infty}_{\mathrm{b}}(\bar{X})=\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X}).\] Since \(\ker\widehat{\mathcal{W}_{b,\gamma}}(0)^{*}\cap\dot{H}^{-\infty,-1/2+}_{\mathrm{b}}(\bar{X})=\{0\}\) by Theorem 9.13, the above equation is solvable.
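Here solvability follows from the standard Fredholm duality (we sketch the usual argument): since \(\widehat{\mathcal{W}_{b,\gamma}}(0)\) is Fredholm on these spaces, its range is closed and consists exactly of those \(f\) with
\[\langle f,\omega^{*}\rangle=0\quad\text{for all }\omega^{*}\in\ker\widehat{\mathcal{W}_{b,\gamma}}(0)^{*}\cap\dot{H}^{-\infty,-1/2+}_{\mathrm{b}}(\bar{X}),\]
and this kernel is trivial by Theorem 9.13.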
More precisely, there exists a unique \(\tilde{\omega}^{(1)}_{b}(\mathsf{S})\in\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X})\) which lies in \(\mathrm{ann}\{f_{1},\cdots,f_{4}\}\), where \(\{f_{1},\cdots,f_{4}\}\subset C^{\infty}_{c}(X^{\circ})\) is a set of linearly independent functionals on \(\ker\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\cap\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X})\), such that \(\widehat{\mathcal{W}_{b,\gamma}}(0)\tilde{\omega}^{(1)}_{b}(\mathsf{S})=-\widehat{\mathcal{W}_{b,\gamma}}(0)\chi r\mathsf{S}dt\). We then define \(\omega^{(1)}_{b,s_{1}}(\mathsf{S}):=\chi r\mathsf{S}dt+\tilde{\omega}^{(1)}_{b}(\mathsf{S})\). As for the symmetric gradient \(\delta^{*}_{g_{b}}\omega^{(1)}_{b,s_{1}}(\mathsf{S})\), we find \[\begin{split}\delta^{*}_{g_{b}}\omega^{(1)}_{b,s_{1}}(\mathsf{S})&=\chi\delta^{*}_{\underline{g}}r\mathsf{S}dt+[\delta^{*}_{\underline{g}},\chi]r\mathsf{S}dt+(\delta^{*}_{g_{b}}-\delta^{*}_{\underline{g}})\chi r\mathsf{S}dt+\delta^{*}_{g_{b}}\tilde{\omega}^{(1)}_{b}(\mathsf{S})\\ &=\chi dt\otimes_{s}d(r\mathsf{S})+\bar{H}^{\infty,\infty}_{\mathrm{b}}(\bar{X})+\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})+\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})\\ &=\chi dt\otimes_{s}d(r\mathsf{S})+\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})\end{split}\] where we use \(\delta^{*}_{g_{b}}-\delta^{*}_{\underline{g}}\in\rho^{2}\mathrm{Diff}^{1}_{\mathrm{b}}\) and \(\delta^{*}_{g_{b}}\in\rho\mathrm{Diff}^{1}_{\mathrm{b}}\). As for the continuous dependence of \(\omega^{(1)}_{b,s_{1}}(\mathsf{S})\) on \(b\), suppose \(b_{j}\to b\) and let \(\omega^{(1)}_{b_{j},s_{1}}(\mathsf{S})=\chi r\mathsf{S}dt+\tilde{\omega}^{(1)}_{b_{j}}(\mathsf{S})\), \(\omega^{(1)}_{b,s_{1}}(\mathsf{S})=\chi r\mathsf{S}dt+\tilde{\omega}^{(1)}_{b}(\mathsf{S})\) and \(e_{j}(\mathsf{S})=\omega^{(1)}_{b_{j},s_{1}}(\mathsf{S})-\omega^{(1)}_{b,s_{1}}(\mathsf{S})=\tilde{\omega}^{(1)}_{b_{j}}(\mathsf{S})-\tilde{\omega}^{(1)}_{b}(\mathsf{S})\). We find \[\widehat{\mathcal{W}_{b,\gamma}}(0)e_{j}(\mathsf{S})=(\widehat{\mathcal{W}_{b,\gamma}}(0)-\widehat{\mathcal{W}_{b_{j},\gamma}}(0))(\tilde{\omega}^{(1)}_{b_{j}}(\mathsf{S})+\chi r\mathsf{S}dt)\in|b-b_{j}|\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X}).\] As \(\widehat{\mathcal{W}_{b,\gamma}}(0)|_{\mathrm{ann}\{f_{1},\cdots,f_{4}\}}:\mathrm{ann}\{f_{1},\cdots,f_{4}\}\to\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})\) is invertible, it follows that \(e_{j}(\mathsf{S})\) is of size \(\mathcal{O}_{\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X})}(b-b_{j})\). This completes the proof of the continuity. The proofs for the dual zero modes \(\omega^{(1)*}_{b,s_{1}}(\mathsf{S})\) are analogous, where we use \[\ker\widehat{\mathcal{W}_{b,\gamma}}(0)^{*}\cap\bar{H}^{\infty,-1/2+}_{\mathrm{b}}(\bar{X})=\ker\widehat{\mathcal{P}_{b,\gamma}}(0)\cap\bar{H}^{\infty,-1/2+}_{\mathrm{b}}(\bar{X})=\{0\}.\] It remains to derive the expressions in (9.64).
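Before the computation, note that (9.64) is consistent with the flat model 1-form \(r\mathsf{S}dt\) from the proof of Lemma 9.16: assuming the tortoise normalization \(dr_{b_{0},*}=\mu_{b_{0}}^{-1}dr\) in \(t_{0}=t+r_{b_{0},*}\), we have
\[r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)=r\mathsf{S}\,\mu_{b_{0}}dt=r\mathsf{S}dt+r(\mu_{b_{0}}-1)\mathsf{S}dt,\]
and \(r(\mu_{b_{0}}-1)=-2\mathbf{m}_{0}+\mathbf{Q}_{0}^{2}/r\) is bounded, so the two 1-forms differ by an element of \(\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X})\), in agreement with (9.63).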
Since \[\delta_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}\big{(}r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)\big{)}=\frac{\mathbf{Q}_{0}^{2}}{r^{3}}\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)\in\mathbf{Q}_{0}^{2}\bar{H}^{\infty,3/2-}_{\mathrm{b}}(\bar{X}),\] we have \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\Big{(}\omega^{(1)}_{b_{0},s_{1}}(\mathsf{S})-r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)\Big{)}=\mathcal{O}_{\bar{H}^{\infty,3/2-}_{\mathrm{b}}}(|\mathbf{Q}_{0}|^{2})+\mathcal{O}_{\bar{H}^{\infty,\infty}_{\mathrm{b}}(\bar{X})}(\gamma).\] Since the right-hand side of the above equation is of scalar type \(l=1\) and \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to scalar type \(l=1\) 1-form spaces exists with norm of size \(\mathcal{O}(1)\) (see Remark 9.4), it follows that \(\omega^{(1)}_{b_{0},s_{1}}(\mathsf{S})-r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)=\mathcal{O}_{\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2})\), which proves (9.64). Now we are ready to discuss (dual) generalized zero modes which have a slower decay rate in \(r\) at infinity for fixed \(t_{b,*}\) and grow linearly in \(t_{b,*}\). In the course of the proof, we will find that they are asymptotic to Lorentz boosts. **Proposition 9.17**.: _For \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), there exist continuous (in \(b\)) families_ \[b\mapsto\hat{\omega}_{b,s_{1}}(\mathsf{S})\in\ker\mathcal{W}_{b,\gamma}\cap\mathit{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X}), \tag{9.65}\] \[b\mapsto\hat{\omega}_{b,s_{1}}^{*}(\mathsf{S})\in\ker\mathcal{W}_{b,\gamma}\cap\mathit{Poly}^{1}(t_{b,*})\dot{H}_{\mathrm{b}}^{-\infty,-5/2-}(\bar{X}), \tag{9.66}\] _depending linearly on \(\mathsf{S}\in\mathbf{S}_{1}\), such that_ \[\delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}(\mathsf{S})\in\mathit{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}), \tag{9.67}\] \[G_{g_{b}}\delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}^{*}(\mathsf{S})\in\mathit{Poly}^{1}(t_{b,*})\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}). \tag{9.68}\] _Moreover, the leading terms of \(\hat{\omega}_{b,s_{1}}(\mathsf{S})\) are given by_ \[\hat{\omega}_{b,s_{1}}(\mathsf{S})-(tdx^{i}-x^{i}dt)\in\mathit{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X}),\quad r\gg 1 \tag{9.69}\] _where \(\mathsf{S}=x^{i}/r\)._ _For later use, we further determine the leading term of \(\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})\)_ \[\begin{split}\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})&=(t_{0}-r)\omega_{b_{0},s_{1}}(\mathsf{S})-r\mathsf{S}(\mu_{b_{0}}dt_{0}-dr)+\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)\\ &\quad+d\bigg{(}2\mathbf{m}_{0}\Big{(}2\mathbf{m}_{0}+(\mathbf{m}_{0}-r)\log(\frac{r}{\mathbf{m}_{0}})\Big{)}\mathsf{S}\bigg{)}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}).\end{split} \tag{9.70}\] Proof.: We make the ansatz \[\hat{\omega}_{b,s_{1}}(\mathsf{S})=t_{\chi_{0}}\omega_{b,s_{1}}(\mathsf{S})+\check{\omega}_{b,s_{1}}(\mathsf{S}) \tag{9.71}\] with \(\check{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\) to be determined. Our aim is to find \[\hat{\omega}_{b,s_{1}}\in\ker\mathcal{W}_{b,\gamma}\cap\mathrm{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\] such that \(\delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}(\mathsf{S})\in\mathrm{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\).
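The flat model behind this ansatz (and behind (9.69)) is the Lorentz boost: assuming the Minkowski metric \(\underline{g}=-dt^{2}+\sum_{i}(dx^{i})^{2}\), the 1-form
\[t\,dx^{i}-x^{i}\,dt=\underline{g}\big{(}t\partial_{x^{i}}+x^{i}\partial_{t},\bullet\big{)}\]
is dual to the boost vector field \(t\partial_{x^{i}}+x^{i}\partial_{t}\), which is Killing, so \(\delta_{\underline{g}}^{*}(t\,dx^{i}-x^{i}\,dt)=0\); the construction below corrects this model 1-form to an element of \(\ker\mathcal{W}_{b,\gamma}\).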
To this end, since \[t_{\chi_{0}}-t_{b,*}\in\mathcal{A}^{-1}(\bar{X})\quad\text{and}\quad\delta_{g_{b}}^{*}\omega_{b,s_{1}}\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X}),\] it suffices to solve the following two equations for \(\check{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\) \[[\delta_{g_{b}}^{*},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})+\delta_{g_{b}}^{*}\check{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X}), \tag{9.72}\] \[\widehat{\mathcal{W}_{b,\gamma}}(0)\check{\omega}_{b,s_{1}}(\mathsf{S})=-[\mathcal{W}_{b,\gamma},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S}). \tag{9.73}\] We first analyze (9.72). Since \[[\delta_{g_{b}}^{*},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})=dt_{\chi_{0}}\otimes_{s}\omega_{b,s_{1}}(\mathsf{S}),\] we further make the ansatz \[\check{\omega}_{b,s_{1}}(\mathsf{S})=-\omega_{b,s_{1}}^{(1)}(\mathsf{S})+\tilde{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X}) \tag{9.74}\] with \(\tilde{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\) to be determined. Therefore, using \(\omega_{b,s_{1}}(\mathsf{S})=\chi d(r\mathsf{S})+\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) and (9.62), it follows that \[[\delta_{g_{b}}^{*},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})+\delta_{g_{b}}^{*}\check{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\] and the requirement (9.72) is satisfied with the choice of \(\check{\omega}_{b,s_{1}}(\mathsf{S})\) defined as in (9.74). Next we solve (9.73) to determine \(\tilde{\omega}_{b,s_{1}}(\mathsf{S})\) in (9.74). We rewrite (9.73) as \[\begin{split}\widehat{\mathcal{W}_{b,\gamma}}(0)\tilde{\omega}_{b,s_{1}}(\mathsf{S})&=\widehat{\mathcal{W}_{b,\gamma}}(0)\omega_{b,s_{1}}^{(1)}(\mathsf{S})-[\mathcal{W}_{b,\gamma},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})\\ &=\tilde{\delta}_{g_{b},\gamma}G_{g_{b}}\Big{(}\delta_{g_{b}}^{*}\omega_{b,s_{1}}^{(1)}(\mathsf{S})-[\delta_{g_{b}}^{*},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})\Big{)}-[\tilde{\delta}_{g_{b},\gamma},t_{\chi_{0}}]G_{g_{b}}\delta_{g_{b}}^{*}\omega_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\end{split} \tag{9.75}\] where we use \(\tilde{\delta}_{g_{b},\gamma}\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}\), \(\delta_{g_{b}}^{*}\omega_{b,s_{1}}^{(1)}(\mathsf{S})-[\delta_{g_{b}}^{*},t_{\chi_{0}}]\omega_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) and \[\delta_{g_{b}}^{*}\omega_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X}),\quad[\tilde{\delta}_{g_{b},\gamma},t_{\chi_{0}}]\in\mathcal{A}^{0}(\bar{X}).\] Since \(\ker\widehat{\mathcal{W}_{b,\gamma}}(0)^{*}\cap\dot{H}_{\mathrm{b}}^{-\infty,-1/2+}(\bar{X})=\{0\}\), we can solve the above equation (9.75) for \(\tilde{\omega}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\), which is unique modulo \(\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\). We then define \[\hat{\omega}_{b,s_{1}}(\mathsf{S}):=t_{\chi_{0}}\omega_{b,s_{1}}(\mathsf{S})-\omega_{b,s_{1}}^{(1)}(\mathsf{S})+\tilde{\omega}_{b,s_{1}}(\mathsf{S})\] where \(\tilde{\omega}_{b,s_{1}}(\mathsf{S})\) is in the orthogonal complement \(\big{(}\ker\widehat{\mathcal{W}_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\big{)}^{\perp}\) and thus uniquely determined by \(\mathsf{S}\) and \(b\).
As a result, we have for \(r\gg 1\) \[\hat{\omega}_{b,s_{1}}(\mathsf{S})=t\omega_{b,s_{1}}(\mathsf{S})-\chi r\mathsf{S}dt+\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})=td(r\mathsf{S})-r\mathsf{S}dt+\mathrm{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X}).\] Finally, the continuity in \(b\) follows in the same way as in the proof of Lemma 9.16. The proof for \(\hat{\omega}_{b,s_{1}}^{*}(\mathsf{S})\) is completely analogous. It remains to derive the expressions in (9.70). For simplicity of calculation, we now let \(\tilde{t}=t_{0}-r\) and make the ansatz \[\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})=\tilde{t}\omega_{b_{0},s_{1}}(\mathsf{S})-\omega_{b_{0},s_{1}}^{(1)}+\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S}).\] Our aim is to solve the following equation for \(\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\) \[\begin{split}\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S})&=-[\mathcal{W}_{b_{0},\gamma},\tilde{t}]\omega_{b_{0},s_{1}}(\mathsf{S})=\frac{1}{2}[\square_{g_{b_{0}}},\tilde{t}]\omega_{b_{0},s_{1}}(\mathsf{S})+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma)\\ &=\mathsf{S}\Big{(}(\frac{\mathbf{m}_{0}}{r^{2}}-\frac{\mathbf{Q}_{0}^{2}}{r^{3}})dt_{0}+\frac{\mathbf{Q}_{0}^{2}}{r^{3}}dr\Big{)}+\frac{\mathbf{m}_{0}}{r}(1+\frac{\mathbf{m}_{0}}{r}-\frac{\mathbf{Q}_{0}^{2}}{r^{2}})d\mathsf{S}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2})\end{split} \tag{9.76}\] where we use \([\square_{g_{b_{0}}},\tilde{t}]=(\square_{g_{b_{0}}}\tilde{t})+2\nabla^{\alpha}\tilde{t}\,\nabla_{\alpha}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{1}\) and (9.53) in the last step. A direct calculation implies \[\begin{split}\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\big{(}\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)\big{)}&=\delta_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}\big{(}\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)\big{)}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma)\\ &=\mathsf{S}\Big{(}(\frac{\mathbf{m}_{0}}{r^{2}}+\frac{\mathbf{m}_{0}\mathbf{Q}_{0}^{2}}{r^{4}})dt_{0}+(\frac{3\mathbf{m}_{0}}{r^{2}}-\frac{\mathbf{m}_{0}^{2}}{r^{3}}+\frac{\mathbf{m}_{0}\mathbf{Q}_{0}^{2}}{r^{4}})dr\Big{)}+\frac{\mathbf{m}_{0}}{r}(-2+\frac{2\mathbf{m}_{0}}{r}-\frac{2\mathbf{Q}_{0}^{2}}{r^{2}})d\mathsf{S}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma).\end{split}\] Then \[\begin{split}&\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\Big{(}\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S})-\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)\Big{)}\\ &=\mathsf{S}(-\frac{3\mathbf{m}_{0}}{r^{2}}+\frac{2\mathbf{m}_{0}^{2}}{r^{3}})dr+\frac{\mathbf{m}_{0}}{r}(3-\frac{\mathbf{m}_{0}}{r})d\mathsf{S}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2})\\ &=d\Big{(}(\frac{3\mathbf{m}_{0}}{r}-\frac{\mathbf{m}_{0}^{2}}{r^{2}})\mathsf{S}\Big{)}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}).\end{split}\] Meanwhile, with \(u=2\mathbf{m}_{0}\big{(}2\mathbf{m}_{0}+(\mathbf{m}_{0}-r)\log(r/\mathbf{m}_{0})\big{)}\mathsf{S}\), we have \[\begin{split}\widehat{\mathcal{W}_{b_{0},\gamma}}(0)du&=\frac{1}{2}d\delta_{g_{b_{0}}}du-\mathrm{Ric}(g_{b_{0}})_{\alpha}^{\ \beta}(du)_{\beta}+F_{g_{b_{0}}}G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}du\\ &=d\Big{(}(\frac{3\mathbf{m}_{0}}{r}-\frac{\mathbf{m}_{0}^{2}}{r^{2}})\mathsf{S}\Big{)}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,5/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2}).\end{split}\] Therefore, we obtain \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\Big{(}\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S})-\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)-du\Big{)}=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})}(|\mathbf{Q}_{0}|^{2}+\gamma).\]
Since the right-hand side of the above equation is of scalar type \(l=1\) and \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to scalar type \(l=1\) 1-form spaces exists with norm of size \(\mathcal{O}(1)\) (see Remark 9.4), it follows that \(\tilde{\omega}_{b_{0},s_{1}}(\mathsf{S})-\mathbf{m}_{0}\mathsf{S}(dt_{0}+dr)-du=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})}(\gamma+|\mathbf{Q}_{0}|^{2})\). Therefore, (9.70) follows from the combination of this fact and (9.64). ## 10. Modes of the linearized gauge-fixed Einstein-Maxwell system In this section, we will discuss the modes of the linearized gauge-fixed Einstein-Maxwell system \[L_{(g_{b},A_{b}),\gamma}(\dot{g},\dot{A})=(2L_{(g_{b},A_{b}),\gamma}^{E}(\dot{g},\dot{A}),\ L_{(g_{b},A_{b}),\gamma}^{M}(\dot{g},\dot{A}))=0 \tag{10.1}\] where \[L_{(g_{b},A_{b}),\gamma}^{E}(\dot{g},\dot{A})=D_{g_{b}}\mathrm{Ric}(\dot{g})-2D_{(g_{b},dA_{b})}T(\dot{g},d\dot{A})+\tilde{\delta}_{g_{b},\gamma}^{*}\tilde{\delta}_{g_{b},\gamma}G_{g_{b}}\dot{g}=0, \tag{10.2}\] \[L_{(g_{b},A_{b}),\gamma}^{M}(\dot{g},\dot{A})=D_{(g_{b},A_{b})}(\delta_{g}dA)(\dot{g},\dot{A})+d\delta_{g_{b}}\dot{A}=0, \tag{10.3}\] on the RN metric and \(4\)-electromagnetic potential, i.e., the case \(b=b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0}),|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). For brevity of notation, we rewrite the Lie derivative of a \(1\)-form \(\omega\) with respect to the vector field \(X\) as \[\widetilde{\mathcal{L}}_{\omega}X:=\mathcal{L}_{X}\omega\] and put \[L_{b,\gamma}:=L_{(g_{b},A_{b}),\gamma},\quad L_{b,\gamma}^{E}:=L_{(g_{b},A_{b}),\gamma}^{E},\quad L_{b,\gamma}^{M}:=L_{(g_{b},A_{b}),\gamma}^{M}. \tag{10.4}\] For the various spaces in this section, for instance, \(\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};E),\dot{H}_{\mathrm{b}}^{s,\ell}(\bar{X};E),\mathcal{A}(\bar{X};E)\), etc., where \(E\to\bar{X}\) is a vector bundle over \(\bar{X}\), we will drop the notation \(E\) of the bundle if it is clear from the context. We shall prove that \(L_{b_{0},\gamma}\) has no non-zero modes in the closed upper half plane for the RN metric and \(4\)-electromagnetic potential \((g_{b_{0}},A_{b_{0}})\) (we note that the same result for the slowly rotating KN case is nontrivial and will be discussed in detail in the next section). Moreover, we will describe the space of zero modes and generalized zero modes (with linear growth in \(t_{b,*}\)) of \(L_{b,\gamma}\) for both the RN case and the slowly rotating KN case. ### Modes in the closed upper half plane Let \((g_{b},A_{b})\) be the KN metric and \(4\)-electromagnetic potential with \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) and let the linearized gauge-fixed Einstein-Maxwell operator \(L_{b,\gamma}:\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\to\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\) be defined as in (10.4), (10.1), (10.2) and (10.3). In the definition of \(\tilde{\delta}_{g_{b},\gamma}^{*},\tilde{\delta}_{g_{b},\gamma}\) (see (4.62), (4.60) and (4.61)), \(\mathfrak{c}\) is defined as in (9.7) and \(\gamma\in(0,\min\{\gamma_{0},\tilde{\gamma}_{0}\})\) where \(\gamma_{0},\tilde{\gamma}_{0}\) are as in Theorems 9.8 and 9.13 respectively.
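In this notation, the terms \(\widetilde{\mathcal{L}}_{A_{b}}(\omega)^{\sharp}\) appearing in Theorem 10.1 below unpack as Lie derivatives of the potential along \(\omega^{\sharp}\); in abstract indices (a standard formula: since the Levi-Civita connection \(\nabla\) of \(g_{b}\) is torsion-free, the partial derivatives in the Lie derivative may be replaced by covariant ones),
\[\widetilde{\mathcal{L}}_{A_{b}}(\omega)^{\sharp}=\mathcal{L}_{\omega^{\sharp}}A_{b},\qquad\big{(}\mathcal{L}_{\omega^{\sharp}}A_{b}\big{)}_{\alpha}=\omega^{\beta}\nabla_{\beta}(A_{b})_{\alpha}+(A_{b})_{\beta}\nabla_{\alpha}\omega^{\beta}.\]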
We also note that we consider the weakly charged case, namely, \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\) (more precisely, \(|\mathbf{Q}_{0}|\leq C(\gamma)\) for some small constant \(C(\gamma)\)). As in the previous sections, we define the Fourier-transformed operator \[\widehat{L_{b,\gamma}}(\sigma):=e^{i\sigma t_{b,*}}\,L_{b,\gamma}e^{-i\sigma t_{b,*}} \tag{10.5}\] using the function \(t_{b,*}=\chi_{0}(r)(t+r_{b_{0},*})+(1-\chi_{0}(r))(t-r_{(\mathbf{m},0,\mathbf{Q}),*})\) defined in (4.30). **Theorem 10.1**.: _Let \((g_{b_{0}},A_{b_{0}})\) be the RN metric and \(4\)-electromagnetic potential where \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). Then the Fourier-transformed operator \(\widehat{L_{b_{0},\gamma}}(\sigma)\) satisfies the following statements:_ 1. _If_ \(\operatorname{Im}\sigma\geq 0\) _and_ \(\sigma\neq 0\)_,_ \[\begin{split}\widehat{L_{b_{0},\gamma}}(\sigma):&\{(\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X}):\widehat{L_{b_{0},\gamma}}(\sigma)(\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\}\\ &\to\bar{H}_{\mathrm{b}}^{s,\ell+1}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\end{split} \tag{10.6}\] _is invertible for_ \(s>3,\ell<-\frac{1}{2}\) _and_ \(s+\ell>-\frac{1}{2}\)_._ 2. _If_ \(s>3\) _and_ \(-\frac{3}{2}<\ell<-\frac{1}{2}\)_, then the stationary operator_ \[\begin{split}\widehat{L_{b_{0},\gamma}}(0):&\{(\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{s,\ell}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X}):\widehat{L_{b_{0},\gamma}}(0)(\dot{g},\dot{A})\in\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\}\\ &\to\bar{H}_{\mathrm{b}}^{s-1,\ell+2}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\end{split} \tag{10.7}\] _has \(8\)-dimensional kernel and cokernel._ _The second statement also holds for the weakly charged and slowly rotating KN metric and \(4\)-electromagnetic potential \((g_{b},A_{b})\) with \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\).
Concretely, we have_ \[\begin{split}\mathcal{K}_{b}:=\ker\widehat{L_{b,\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})&=\{(\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\}\oplus\{(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S})):\mathsf{S}\in\mathbf{S}_{1}\}\\ &\quad\oplus\{(\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V})):\mathsf{V}\in\mathbf{V}_{1}\},\end{split} \tag{10.8}\] \[\begin{split}\mathcal{K}_{b}^{*}:=\ker\widehat{L_{b,\gamma}}(0)^{*}\cap\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})&=\{(\dot{g}_{b,s_{0}}^{*},\dot{A}_{b,s_{0}}^{*})\}\oplus\{(\dot{g}_{b,s_{1}}^{*}(\mathsf{S}),\dot{A}_{b,s_{1}}^{*}(\mathsf{S})):\mathsf{S}\in\mathbf{S}_{1}\}\\ &\quad\oplus\{(\dot{g}_{b,v_{1}}^{*}(\mathsf{V}),\dot{A}_{b,v_{1}}^{*}(\mathsf{V})):\mathsf{V}\in\mathbf{V}_{1}\},\end{split} \tag{10.9}\] _where_ \[(\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})=(\dot{g}_{b}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})+2\delta_{g_{b}}^{*}\tilde{\omega}_{b,s_{0}},\ \dot{A}_{b}(0,0,\dot{\mathbf{Q}})+\widetilde{\mathcal{L}}_{A_{b}}(\tilde{\omega}_{b,s_{0}})^{\sharp}+d\phi_{b,s_{0}}),\quad\dot{\mathbf{m}},\dot{\mathbf{Q}}\in\mathbb{R}, \tag{10.10}\] \[(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))=(2\delta_{g_{b}}^{*}\omega_{b,s_{1}}(\mathsf{S}),\ \widetilde{\mathcal{L}}_{A_{b}}(\omega_{b,s_{1}}(\mathsf{S}))^{\sharp}+d\phi_{b,s_{1}}(\mathsf{S})), \tag{10.11}\] \[(\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V}))=(\dot{g}_{b}(0,\dot{\mathbf{a}},0)+2\delta_{g_{b}}^{*}\tilde{\omega}_{b,v_{1}}(\mathsf{V}),\ \dot{A}_{b}(0,\dot{\mathbf{a}},0)+\widetilde{\mathcal{L}}_{A_{b}}(\tilde{\omega}_{b,v_{1}}(\mathsf{V}))^{\sharp}+d\phi_{b,v_{1}}(\mathsf{V})), \tag{10.12}\] _and_ \[(\dot{g}^{*}_{b,s_{0}},\dot{A}^{*}_{b,s_{0}})=c_{1}(2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}_{b,s_{0}},\ 4\widetilde{\mathcal{L}}_{A_{b}}(\omega^{*}_{b,s_{0}})^{\sharp}+d\phi^{*}_{b,s_{0}})+c_{2}(0,\delta(r-r_{b})dr),\quad c_{1},c_{2}\in\mathbb{R}, \tag{10.13}\] \[(\dot{g}^{*}_{b,s_{1}}(\mathsf{S}),\dot{A}^{*}_{b,s_{1}}(\mathsf{S}))=(2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}_{b,s_{1}}(\mathsf{S}),\ 4\widetilde{\mathcal{L}}_{A_{b}}(\omega^{*}_{b,s_{1}}(\mathsf{S}))^{\sharp}+d\phi^{*}_{b,s_{1}}(\mathsf{S})), \tag{10.14}\] \[(\dot{g}^{*}_{b,v_{1}}(\mathsf{V}),\dot{A}^{*}_{b,v_{1}}(\mathsf{V}))=(2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}_{b,v_{1}}(\mathsf{V}),\ 4\widetilde{\mathcal{L}}_{A_{b}}(\omega^{*}_{b,v_{1}}(\mathsf{V}))^{\sharp}+d\phi^{*}_{b,v_{1}}(\mathsf{V})) \tag{10.15}\] _with \(\omega_{b,s_{1}}(\mathsf{S})\), \(\omega^{*}_{b,s_{0}}\) and \(\omega^{*}_{b,s_{1}}(\mathsf{S})\), \(\omega^{*}_{b,v_{1}}(\mathsf{V})\) defined as in (9.50), (9.51) and (9.57)._ _Moreover, we have_ \[(\dot{g}_{b,s_{1}}(\mathsf{S}),\ \dot{A}_{b,s_{1}}(\mathsf{S})),\quad(\dot{g}_{b,v_{1}}(\mathsf{V}),\ \dot{A}_{b,v_{1}}(\mathsf{V}))\in\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X}),\] _and the maps \(b\mapsto(\dot{g}_{b,\bullet}(\bullet),\ \dot{A}_{b,\bullet}(\bullet))\) and \(b\mapsto(\dot{g}^{*}_{b,\bullet}(\bullet),\ \dot{A}^{*}_{b,\bullet}(\bullet))\) can be chosen to be continuous in \(b\) with values in the respective spaces._ _For later use, we further determine the leading term of \((\dot{g}_{b,v_{1}}(\mathsf{V}),\ \dot{A}_{b,v_{1}}(\mathsf{V}))\)_ \[\dot{g}_{b_{0},v_{1}}(\mathsf{V})=\Big{(}(-\frac{4\mathbf{m}_{0}}{r}+\frac{2\mathbf{Q}_{0}^{2}}{r^{2}})dt_{0}+\frac{4\mathbf{m}_{0}}{r-(\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}})}dr\Big{)}\otimes_{s}\mathsf{V}+\mathcal{O}_{\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})}(\gamma), \tag{10.16}\]
\[\dot{A}_{b_{0},v_{1}}(\mathsf{V})=\mathcal{O}_{\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})}(|\mathbf{Q}_{0}|).\] _Remark 10.2_.: In the course of the proof of the above theorem, we will find that the zero modes in \(\mathcal{K}_{b}\) satisfy both the linearized Einstein-Maxwell system and the linearized generalized wave map gauge and Lorenz gauge conditions. Proof of Theorem 10.1.: We first analyze the RN case \(b=b_{0}\) and write \((g,A)=(g_{b_{0}},A_{b_{0}})\). Suppose \(\widehat{L_{b_{0},\gamma}}(\sigma)(\dot{g},\dot{A})=0\). Then by Proposition 5.7, \((\dot{g},\dot{A})\in\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})\). In view of (10.3), applying \(\delta_{g}\) to \(L^{M}_{b_{0},\gamma}e^{-i\sigma t_{b_{0},*}}(\dot{g},\dot{A})=0\) yields \[\delta_{g}d(\delta_{g}e^{-i\sigma t_{b_{0},*}}\dot{A})=0.\] Since \(\dot{A}\in\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X})\), we obtain that \(\delta_{g}(e^{-i\sigma t_{b_{0},*}}\dot{A})\) is a mode. More precisely, we have \[\delta_{g}(e^{-i\sigma t_{b_{0},*}}\dot{A})\in\begin{cases}e^{-i\sigma t_{b_{0},*}}\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X};\mathbb{C})&\text{if }\ \sigma\neq 0,\\ \bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X};\mathbb{C})&\text{if }\ \sigma=0.\end{cases}\] By Theorem 8.1, it follows that \[\delta_{g}(e^{-i\sigma t_{b_{0},*}}\dot{A})=0, \tag{10.17}\] and thus \(e^{-i\sigma t_{b_{0},*}}(\dot{g},\dot{A})\) is a mode solution to the Maxwell part of the linearized Einstein-Maxwell system. Applying \(\delta_{g}G_{g}\) to \(L^{E}_{b_{0},\gamma}e^{-i\sigma t_{b_{0},*}}(\dot{g},\dot{A})=0\) and using the linearized second Bianchi identity (see the discussion around (7.13) and (7.14)) gives \[\delta_{g}G_{g}\tilde{\delta}^{*}_{g,\gamma}(\tilde{\delta}_{g,\gamma}G_{g}e^{-i\sigma t_{b_{0},*}}\dot{g})=\mathcal{P}_{b_{0},\gamma}(\tilde{\delta}_{g,\gamma}G_{g}e^{-i\sigma t_{b_{0},*}}\dot{g})=0.\] Again, since \(\tilde{\delta}_{g,\gamma}G_{g}e^{-i\sigma t_{b_{0},*}}\dot{g}\) is a mode in the same way as \(\delta_{g}(e^{-i\sigma t_{b_{0},*}}\dot{A})\), it follows from Theorem 9.8 that \[\tilde{\delta}_{g,\gamma}G_{g}e^{-i\sigma t_{b_{0},*}}\dot{g}=0, \tag{10.18}\] and thus \(e^{-i\sigma t_{b_{0},*}}(\dot{g},\dot{A})\) is a mode solution to the linearized Einstein-Maxwell system. * The case \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0.\) Then we apply the mode stability result Theorem 7.1 and find that \[e^{-i\sigma t_{b_{0},*}}(\dot{g},\dot{A})=(2\delta_{g}^{*}\omega,\mathcal{L}_{\omega^{\sharp}}A+d\phi) \tag{10.19}\] where \((\omega,\phi)=e^{-i\sigma t_{b_{0},*}}(\omega_{0},\phi_{0})\in e^{-i\sigma t_{b_{0},*}}\bar{H}^{\infty,\ell^{\prime}}_{\mathrm{b}}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\mathbb{C})\) with \(\ell^{\prime}<-1/2\). Plugging (10.19) into the equation (10.18), we find \[\widehat{\mathcal{W}_{b_{0},\gamma}}(\sigma)\omega_{0}=0,\] and thus \(\omega_{0}=0\) by Theorem 9.13. We then plug (10.19) with \(\omega=0\) into the equation (10.17) and obtain \[\widehat{\square_{g}}(\sigma)\phi_{0}=0.\] Therefore, we have \(\phi_{0}=0\) by Theorem 8.1. This proves the injectivity of \(\widehat{L_{b_{0},\gamma}}(\sigma)\) for \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\), and the surjectivity follows from the fact that \(\widehat{L_{b_{0},\gamma}}(\sigma)\) is Fredholm of index \(0\). * Scalar type \(l\geq 2\) or vector type \(l\geq 2\) zero modes.
We apply the mode stability result Theorem 7.1 and find that \[(\dot{g},\dot{A})=(2\delta_{g}^{*}\omega,\mathcal{L}_{\omega^{\sharp}}A+d\phi) \tag{10.20}\] where \((\omega,\phi)\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\mathbb{C})\). * Vector type \(l=1\) zero modes. If \((\dot{g},\dot{A})\) is of vector type \(l=1\), by Theorem 7.1 we have \[(\dot{g},\dot{A})=(\dot{g}_{b_{0}}(0,\dot{\mathbf{a}},0)+2\delta_{g}^{*}\omega,\ \dot{A}_{b_{0}}(0,\dot{\mathbf{a}},0)+\mathcal{L}_{\omega^{\sharp}}A+d\phi) \tag{10.23}\] where \((\omega,\phi)\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\mathbb{C})\). Plugging (10.23) into (10.18), it follows that \[2\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\omega=-\tilde{\delta}_{g,\gamma}G_{g}\dot{g}_{b_{0}}(0,\dot{\mathbf{a}},0)\in\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X}).\] In view of Theorem 9.13, there exists a unique \(\omega\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X})\) of vector type \(l=1\) satisfying the above equation. We then denote this unique \(\omega\) by \(\tilde{\omega}_{b_{0},v_{1}}(\mathsf{V})\). Therefore, we have \(\dot{g}_{b_{0},v_{1}}(\mathsf{V})\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\). Plugging \(\dot{A}=\dot{A}_{b_{0}}(0,\dot{\mathbf{a}},0)+\mathcal{L}_{\omega^{\sharp}}A+d\phi\) and \(\omega=\tilde{\omega}_{b_{0},v_{1}}\) into the equation (10.17), we obtain that \[-\widehat{\square_{g}}(0)\phi=-\delta_{g}\dot{A}_{b_{0}}(0,\dot{\mathbf{a}},0)-\delta_{g}\widetilde{\mathcal{L}}_{A}(\tilde{\omega}_{b_{0},v_{1}}(\mathsf{V}))^{\sharp}\in\mathbf{Q}_{0}\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X}).\] By Theorem 8.1, there exists a unique \(\phi\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}\) of vector type \(l=1\) satisfying the above equation. We then denote this unique \(\phi\) by \(\phi_{b_{0},v_{1}}(\mathsf{V})\). Therefore, the vector \(l=1\) kernel of \(\widehat{L_{b_{0},\gamma}}(0)\) is \(3\)-dimensional. We also note that \(\phi_{b_{0},v_{1}}\) is of size \(\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X};\mathbb{C})}(|\mathbf{Q}_{0}|)\), and thus \(\dot{A}_{b_{0},v_{1}}=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})}(|\mathbf{Q}_{0}|)\). Now we shall derive the expression for \((\dot{g}_{b_{0},v_{1}}(\mathsf{V}),\dot{A}_{b_{0},v_{1}}(\mathsf{V}))\) in (10.16).
We write \[\dot{g}_{b_{0},v_{1}}(\mathsf{V})=\dot{g}_{b_{0}}^{0}(0,\dot{\mathbf{a}},0)+2\delta_{g}^{*}\omega^{0}.\] When rescaling to \(|\dot{\mathbf{a}}|=1\) and letting \((\theta,\varphi)\) be the spherical coordinates adapted to \(\dot{\mathbf{a}}\), we can rewrite \[\dot{g}_{b_{0},v_{1}}(\mathsf{V})=(2(\mu_{b_{0}}-1)dt_{0}-2dr)\otimes_{s}\mathsf{V}+2\delta_{g}^{*}\omega^{0}\quad\text{with}\quad\mathsf{V}=\sin^{2}\theta d\varphi.\] Plugging \(\dot{g}_{b_{0},v_{1}}(\mathsf{V})\) into (10.18) yields \[2\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\omega^{0}=-\tilde{\delta}_{g,\gamma}G_{g}\Big{(}(2(\mu_{b_{0}}-1)dt_{0}-2dr)\otimes_{s}\mathsf{V}\Big{)}=-2r^{-1}\mathsf{V}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma).\] We also find that \[\delta_{g}^{*}f(r)\mathsf{V}=\Big{(}1+\frac{2\mathbf{m}_{0}}{r-(\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}})}\Big{)}dr\otimes_{s}\mathsf{V},\quad\delta_{g}G_{g}\delta_{g}^{*}f(r)\mathsf{V}=-r^{-1}\mathsf{V}\] with \[f(r)=(\frac{2\mathbf{m}_{0}}{\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}}}-1)r+\frac{2\mathbf{m}_{0}}{(\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}})^{2}}r^{2}\ln(1-\frac{\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}}}{r}).\] Then it follows that \[2\widehat{\mathcal{W}_{b_{0},\gamma}}(0)(\omega^{0}-f(r)\mathsf{V})=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,\infty}(\bar{X})}(\gamma),\] and thus \(\omega^{0}-f(r)\mathsf{V}=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})}(\gamma)\) (according to Remark 9.4, \(\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{-1}\) restricted to vector type \(l=1\) 1-form spaces exists with norm of size \(\mathcal{O}(1)\)). This implies \[\begin{split}\dot{g}_{b_{0},v_{1}}(\mathsf{V})&=(2(\mu_{b_{0}}-1)dt_{0}-2dr)\otimes_{s}\mathsf{V}+2\delta_{g}^{*}f(r)\mathsf{V}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})}(\gamma)\\ &=\Big{(}(-\frac{4\mathbf{m}_{0}}{r}+\frac{2\mathbf{Q}_{0}^{2}}{r^{2}})dt_{0}+\frac{4\mathbf{m}_{0}}{r-(\mathbf{m}_{0}-\sqrt{\mathbf{m}_{0}^{2}-\mathbf{Q}_{0}^{2}})}dr\Big{)}\otimes_{s}\mathsf{V}+\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})}(\gamma).\end{split} \tag{10.24}\] Therefore, \(\dot{g}_{b_{0},v_{1}}(\mathsf{V})\in\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X})\). * Dual zero modes for the RN and KN case. Combining the discussion around (9.4), (9.5) and (9.6) with the growing dual zero modes established in §9.3, we see that \[(2G_{g_{b}}\delta_{g_{b}}^{*}\omega_{b,\bullet}^{*}(\bullet),\ 4\widetilde{\mathcal{L}}_{A_{b}}(\omega_{b,\bullet}^{*}(\bullet))^{\sharp}+d\phi_{b,\bullet}^{*}(\bullet))\in\ker\widehat{L_{b,\gamma}}(0)^{*}\] provided that \[-\widehat{\square_{g_{b}}}(0)\phi_{b,\bullet}^{*}(\bullet)=-4\delta_{g_{b}}(\widetilde{\mathcal{L}}_{A_{b}}(\omega_{b,\bullet}^{*}(\bullet))^{\sharp}). \tag{10.25}\] For \(\omega_{b,\bullet}^{*}(\bullet)\), in view of (9.51), \(\delta_{g_{b}}\in\rho\mathrm{Diff}_{\mathrm{b}}^{1}\) and \(\mathcal{L}_{(\bullet)^{\sharp}}A_{b}\in\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{1}\), the right-hand side of (10.25) is in \(\dot{H}_{\mathrm{b}}^{-\infty,3/2-}(\bar{X})\). For the case \(\phi_{b,s_{0}}^{*}\), by Theorem 8.1, there exists a unique \(\phi_{b,s_{0}}^{*}\in\dot{H}_{\mathrm{b}}^{-\infty,-1/2-}(\bar{X})\) such that (10.25) holds. We note that adding a multiple of \(H(r-r_{b})\) to \(\phi_{b,s_{0}}^{*}\) still gives rise to \(\dot{A}^{*}_{b,s_{0}}\in\dot{H}^{-\infty,1/2-}_{\mathrm{b}}(\bar{X})\).
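This freedom is exactly the source of the \(c_{2}\)-term in (10.13): differentiating the Heaviside function gives
\[d\big{(}H(r-r_{b})\big{)}=\delta(r-r_{b})\,dr,\]
a distributional 1-form supported at the horizon, so replacing \(\phi^{*}_{b,s_{0}}\) by \(\phi^{*}_{b,s_{0}}+c_{2}H(r-r_{b})\) changes \(\dot{A}^{*}_{b,s_{0}}\) precisely by the term \(c_{2}\,\delta(r-r_{b})\,dr\) appearing in (10.13).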
For the case \(\phi^{*}_{b,s_{1}}(\mathsf{S}),\mathsf{S}\in\mathbf{S}_{1}\), by Theorem 8.1 again, there exists a unique \(\phi^{*}_{b,s_{1}}(\mathsf{S})\in\dot{H}^{-\infty,-1/2-}_{\mathrm{b}}(\bar{X})\) such that (10.25) holds. Therefore, with the above choice of \(\phi^{*}_{b,\bullet}(\bullet)\), we have \(\dot{A}^{*}_{b,s_{0}},\dot{A}^{*}_{b,s_{1}}(\mathsf{S})\in\dot{H}^{-\infty,1/2-}_{\mathrm{b}}(\bar{X})\). As for \(\omega^{*}_{b,v_{1}}(\mathsf{V})\), according to (9.57), now the right-hand side of (10.25) is in \(\dot{H}^{-\infty,1/2-}_{\mathrm{b}}(\bar{X})\). Since \(\ker\widehat{\square_{g_{b}}}(0)\cap\bar{H}^{\infty,-1/2+}_{\mathrm{b}}(\bar{X})=\{0\}\) by Theorem 8.1, (10.25) is solvable for \(\phi^{*}_{b,v_{1}}(\mathsf{V})\). More precisely, there exists a unique \(\phi^{*}_{b,v_{1}}(\mathsf{V})\in\dot{H}^{-\infty,-3/2-}_{\mathrm{b}}(\bar{X})\), which lies in \(\mathrm{ann}(v)\) with \(v\in C^{\infty}_{c}(X)\) satisfying \(\langle v,H(r-r_{b})\rangle\neq 0\), such that (10.25) holds, and thus \(\dot{A}^{*}_{b,v_{1}}\in\dot{H}^{-\infty,-1/2-}_{\mathrm{b}}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X})\). We also note that by (9.52) and (9.58), \(G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}_{b,\bullet}(\bullet)\in\dot{H}^{-\infty,1/2-}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X})\). As a consequence, \[(2G_{g_{b}}\delta^{*}_{g_{b}}\omega^{*}_{b,\bullet}(\bullet),\;4\widetilde{\mathcal{L}}_{A_{b}}(\omega^{*}_{b,\bullet}(\bullet))^{\sharp}+d\phi^{*}_{b,\bullet}(\bullet))\quad\text{and}\quad(0,\delta(r-r_{b})dr)\] indeed span an \(8\)-dimensional subspace of \(\mathcal{K}^{*}_{b}\). * Zero modes for the KN case. First, the above construction of the dual zero modes of \(\widehat{L_{b,\gamma}}(0)^{*}\) and the index \(0\) property of \(\widehat{L_{b,\gamma}}(0)\) imply that \(\mathcal{K}_{b}\) is at least \(8\)-dimensional. Next, we shall prove by a contradiction argument that it is at most \(8\)-dimensional for \(b\) near \(b_{0}\), and thus must be \(8\)-dimensional. Suppose \(\{h_{1},\cdots,h_{8}\}\) is a basis of \(\mathcal{K}_{b_{0}}\) and choose \(h_{1}^{\flat},\cdots,h_{8}^{\flat}\in C^{\infty}_{c}(X^{\circ};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\) such that \(\langle h_{i},h_{j}^{\flat}\rangle=\delta_{ij}\). (Consider the map \(\Phi:C^{\infty}_{c}(X^{\circ})\to\mathbb{C}^{8}\) defined by \(\Phi(h^{\flat})=(\langle h_{1},h^{\flat}\rangle,\cdots,\langle h_{8},h^{\flat}\rangle)\), and note that \(\Phi\) is surjective. Otherwise, there would exist some \(a=(a_{1},\cdots,a_{8})\neq 0\in\mathbb{C}^{8}\) such that \(a\cdot\Phi(h^{\flat})=\langle\sum_{i=1}^{8}a_{i}h_{i},h^{\flat}\rangle=0\) for all \(h^{\flat}\in C^{\infty}_{c}(X^{\circ})\), which implies \(\langle\sum_{i=1}^{8}a_{i}h_{i},h^{\flat}\rangle=0\) for all \(h^{\flat}\in\dot{H}^{-\infty,-\ell}_{\mathrm{b}}(\bar{X})\) since \(C^{\infty}_{c}(X^{\circ})\) is dense in \(\dot{H}^{-\infty,-\ell}_{\mathrm{b}}(\bar{X})\). However, this is impossible because \(\sum_{i=1}^{8}a_{i}h_{i}\neq 0\).) Assume for the sake of contradiction that there exists a sequence \(b_{j}\to b_{0}\) such that for all \(j\), the dimension of \(\mathcal{K}_{b_{j}}\) is greater than \(8\). Then one can find \(u_{j}\in\mathcal{K}_{b_{j}}\) such that \(u_{j}\in\mathrm{ann}\{h_{1}^{\flat},\cdots,h_{8}^{\flat}\}\) with \(\|u_{j}\|_{\bar{H}^{-\infty,-1/2-}_{\mathrm{b}}(\bar{X})}=1\).
As in the proof of the first step of Lemma 9.12 (using the Fredholm estimates for \(\widehat{L_{b,\gamma}}(0)\)), there exists a subsequence \(u_{j}\to u\neq 0\) weakly in \(\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})\) and then \(u\in\ker\widehat{L_{b_{0},\gamma}}(0)\). Therefore, one can pick \(h^{\flat}\in\mathrm{span}\{h_{1}^{\flat},\cdots,h_{8}^{\flat}\}\) such that \(\langle u,h^{\flat}\rangle=1\), but \(0=\langle u_{j},h^{\flat}\rangle\to\langle u,h^{\flat}\rangle=1\), which leads to a contradiction. Getting the explicit expressions for \((\dot{g}_{b,\bullet}(\bullet),\dot{A}_{b,\bullet}(\bullet))\) as in (10.10), (10.11) and (10.12) requires a direct argument. First, by (9.52), we indeed have \((\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))=(2\delta^{*}_{g_{b}}\omega_{b,s_{1}}(\mathsf{S}),\;\widetilde{\mathcal{L}}_{A_{b}}(\omega_{b,s_{1}}(\mathsf{S}))^{\sharp}+d\phi_{b,s_{1}}(\mathsf{S}))\in\ker\widehat{L_{b,\gamma}}(0)\cap\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})\) for suitable \(\phi_{b,s_{1}}(\mathsf{S})\). It remains to construct \((\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\) and \((\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V}))\) which extend \((\dot{g}_{b_{0},s_{0}},\dot{A}_{b_{0},s_{0}})\) and \((\dot{g}_{b_{0},v_{1}}(\mathsf{V}),\dot{A}_{b_{0},v_{1}}(\mathsf{V}))\), respectively. We make the ansatz \[(\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})=(\dot{g}_{b}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})+2\delta^{*}_{g_{b}}\omega,\;\dot{A}_{b}(0,0,\dot{\mathbf{Q}})+\mathcal{L}_{\omega^{\sharp}}A_{b}+d\phi) \tag{10.26}\] with \(\omega,\phi\) to be found. Since \((\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\) of the form in the above ansatz satisfies the linearized Einstein-Maxwell system, we have \((\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\in\ker\widehat{L_{b,\gamma}}(0)\) provided that the generalized linearized gauge conditions are satisfied, namely, \[\tilde{\delta}_{g_{b},\gamma}G_{g_{b}}\dot{g}_{b,s_{0}}=0\quad\text{and}\quad\delta_{g_{b}}\dot{A}_{b,s_{0}}=0. \tag{10.27}\] Then as in the proof of the scalar type \(l=0\) zero modes of \(\widehat{L_{b,\gamma}}(0)\), we can find a unique \(\omega_{b,s_{0}}\in\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X})\), which lies in \(\mathrm{ann}\{f_{1},\cdots,f_{4}\}\) with \(\{f_{1},\cdots,f_{4}\}\subset C^{\infty}_{c}(X)\) being a set of linearly independent functionals on \(\ker\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\cap\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X};\widetilde{\operatorname{scT}^{*}}\bar{X})\), and \(\phi_{b,s_{0}}\in\bar{H}^{\infty,-3/2-}_{\mathrm{b}}(\bar{X};\mathbb{C})\), which lies in \(\mathrm{ann}(v)\) with \(v\in C^{\infty}_{c}(X)\) satisfying \(\langle 1,v\rangle\neq 0\), such that (10.27) holds. The construction of \((\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V}))\) proceeds in an analogous manner. ### Generalized zero modes: linear growth Now we turn to discussing the generalized zero mode solutions of \(L_{b,\gamma}(\dot{g},\dot{A})=0\). Suppose \((\dot{g},\dot{A})=(\sum_{j=0}^{k}t^{j}_{b,*}\dot{g}_{j},\sum_{j=0}^{k}t^{j}_{b,*}\dot{A}_{j})\) and \(L_{b,\gamma}(\dot{g},\dot{A})=0\). Due to the fact that \([L_{b,\gamma},\partial_{t_{b,*}}]=0\), we find that \(\partial^{j}_{t_{b,*}}(\dot{g},\dot{A})\in\ker L_{b,\gamma}\) for \(0\leq j\leq k\). Taking \(j=k\), we obtain that the leading order term \((\dot{g}_{k},\dot{A}_{k})\) of \((\dot{g},\dot{A})\) lies in \(\ker\widehat{L_{b,\gamma}}(0)\). Next we further determine what the leading order term \((\dot{g}_{k},\dot{A}_{k})\) should be. **Lemma 10.3**.: _Let \(k\geq 1\).
Then there does not exist \((\dot{g},\dot{A})=(\sum_{j=0}^{k}t^{j}_{b,*}\dot{g}_{j},\sum_{j=0}^{k}t^{j}_{b,*}\dot{A}_{j})\), with \(0\neq(\dot{g}_{k},\dot{A}_{k})\in\{(\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V})):\mathsf{V}\in\mathbf{V}_{1}\}\) and \((\dot{g}_{j},\dot{A}_{j})\in\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X};S^{2}\widetilde{\operatorname{scT}^{*}}\bar{X}\oplus\widetilde{\operatorname{scT}^{*}}\bar{X})\), such that \(L_{b,\gamma}(\dot{g},\dot{A})=0\)._ Proof.: According to the discussion above, it suffices to consider the case \(k=1\). Assume \[(\dot{g},\dot{A})=(t_{b,*}\dot{g}_{b,v_{1}}(\mathsf{V})+\dot{g}_{0},\ t_{b,*}\dot{A}_{b,v_{1}}(\mathsf{V})+\dot{A}_{0})\] with \(\mathsf{V}\neq 0\). Then we need to show that there is no \((\dot{g}_{0},\dot{A}_{0})\in\bar{H}^{\infty,-1/2-}_{\mathrm{b}}(\bar{X})\) such that the following equation holds \[\widehat{L_{b,\gamma}}(0)(\dot{g}_{0},\dot{A}_{0})=-[L_{b,\gamma},\ t_{b,*}](\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V}))\in\bar{H}^{\infty,3/2-}_{\mathrm{b}}(\bar{X})\] where we use \([L_{b,\gamma},\ t_{b,*}]\in\rho\mathrm{Diff}^{1}_{\mathrm{b}}\) and \((\dot{g}_{b,v_{1}}(\mathsf{V}),\dot{A}_{b,v_{1}}(\mathsf{V}))\in\bar{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})\). To this end, we need to verify that the pairing of the right-hand side with \((\dot{g}^{*}_{b,v_{1}}(\mathsf{V}),\dot{A}^{*}_{b,v_{1}}(\mathsf{V}))\) is non-zero. We note that it suffices to check the RN case \(b=b_{0}\) because the general KN case with \(b\) near \(b_{0}\) follows directly by the continuity (in \(b\)). Using the expression (10.16) for \((\dot{g}_{b_{0},v_{1}}(\mathsf{V}),\dot{A}_{b_{0},v_{1}}(\mathsf{V}))\), we find that \[\begin{split}&\langle[L_{b_{0},\gamma},\ t_{b_{0},*}](\dot{g}_{b_{0},v_{1}}(\mathsf{V}),\dot{A}_{b_{0},v_{1}}(\mathsf{V})),(\dot{g}^{*}_{b_{0},v_{1}}(\mathsf{V}^{\prime}),\dot{A}^{*}_{b_{0},v_{1}}(\mathsf{V}^{\prime}))\rangle\\ &=\langle[L^{E}_{b_{0},\gamma},\ t_{b_{0},*}](\dot{g}_{b_{0},v_{1}}(\mathsf{V}),0),(\dot{g}^{*}_{b_{0},v_{1}}(\mathsf{V}^{\prime}),0)\rangle+\mathcal{O}(|\mathbf{Q}_{0}|)\\ &=\langle[L^{E}_{b_{0},\gamma},\ t_{b_{0},*}](\dot{g}_{b_{0},v_{1}}(\mathsf{V}),0),(2G_{g}\delta^{*}_{g}(r^{2}H(r-r_{b_{0}})\mathsf{V}^{\prime}),0)\rangle+\mathcal{O}(|\mathbf{Q}_{0}|+\gamma)\end{split} \tag{10.29}\] where we use (9.59) in the last step. Since \(G_{g_{b_{0}}}\delta^{*}_{g_{b_{0}}}(r^{2}H(r-r_{b_{0}})\mathsf{V}^{\prime})=r^{2}\delta(r-r_{b_{0}})dr\otimes_{s}\mathsf{V}^{\prime}\) is supported at \(r=r_{b_{0}}\) where \(t_{b_{0},*}=t_{0}\), we can replace \(t_{b_{0},*}\) by \(t_{0}\) in the subsequent calculation.
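The computation below repeatedly uses the general commutator identity for the tensor wave operator and a function \(f\) (quoted again after (10.30)):
\[[\Box_{g},f]h=(\Box_{g}f)\,h+2(\nabla^{\alpha}f)\nabla_{\alpha}h,\]
applied with \(g=g_{b_{0}}\) and \(f=t_{0}\).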
We then compute

\[\begin{split}&\langle[L^{E}_{b_{0},\gamma},\ t_{0}](\dot{g}_{b_{0},v_{1}}(\mathsf{V}),0),\ 2r^{2}\delta(r-r_{b_{0}})dr\otimes_{s}\mathsf{V}^{\prime}\rangle\\&=\langle[L^{E}_{b_{0},\gamma},\ t_{0}](4\mathbf{m}_{0}r^{-1}(dr-dt_{0})\otimes_{s}\mathsf{V},0),\ 2r^{2}\delta(r-r_{b_{0}})dr\otimes_{s}\mathsf{V}^{\prime}\rangle+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma)\\&=-\langle[\Box_{g_{b_{0}},2},\ t_{0}](4\mathbf{m}_{0}r^{-1}(dr-dt_{0})\otimes_{s}\mathsf{V}),\ 2r^{2}\delta(r-r_{b_{0}})dr\otimes_{s}\mathsf{V}^{\prime}\rangle+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma)\\&=\langle 8\mathbf{m}_{0}r^{-2}\big(dr-(1+\tfrac{\mathbf{m}_{0}}{r})dt_{0}\big)\otimes_{s}\mathsf{V},\ 2r^{2}\delta(r-r_{b_{0}})dr\otimes_{s}\mathsf{V}^{\prime}\rangle+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma)\\&=-12\mathbf{m}_{0}\langle\mathsf{V},\mathsf{V}^{\prime}\rangle_{L^{2}(\mathcal{S}^{2},T^{*}\mathcal{S}^{2})}+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma),\end{split}\] (10.30)

where we use (10.16) in the first step, (10.2) and (A.2) in the second step, and \([\Box_{g_{b_{0}},2},\ t_{0}]h=h\Box_{g_{b_{0}}}t_{0}+2(\nabla^{\alpha}t_{0})\nabla_{\alpha}h\) in the third step. Therefore, the pairing is indeed non-zero if \(\mathsf{V}=\mathsf{V}^{\prime}\) and \(|\mathbf{Q}_{0}|+\gamma\) is sufficiently small.

We now exclude the generalized zero modes whose leading order term is \((\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\).

**Lemma 10.4**.: _Let \(k\geq 1\). Then there does not exist \((\dot{g},\dot{A})=(\sum_{j=0}^{k}t^{j}_{b,*}\dot{g}_{j},\sum_{j=0}^{k}t^{j}_{b,*}\dot{A}_{j})\), with \(0\neq(\dot{g}_{k},\dot{A}_{k})=(\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\) and \((\dot{g}_{j},\dot{A}_{j})\in\bar{H}^{\infty,-1/2-}_{\rm b}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\), such that \(L_{b,\gamma}(\dot{g},\dot{A})=0\)._

Proof.: As discussed in the proof of Lemma 10.3, it suffices to prove the statement for the case \(k=1\). We first consider the case \(b=b_{0}\). Suppose

\[(\dot{g},\dot{A})=(t_{b_{0},*}\dot{g}_{b_{0},s_{0}}+\dot{g}_{0},\ t_{b_{0},*}\dot{A}_{b_{0},s_{0}}+\dot{A}_{0})\in\ker L_{b_{0},\gamma}\]

with \((\dot{g}_{0},\dot{A}_{0})\in\bar{H}^{\infty,-1/2-}_{\rm b}(\bar{X})\). Applying \(\delta_{g_{b_{0}}}\) to \(L^{M}_{b_{0},\gamma}(\dot{g},\dot{A})=0\) and exploiting the linearized second Bianchi identity, we obtain \(\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\dot{g}\in\ker\mathcal{P}_{b_{0},\gamma}\). Since \(\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\dot{g}_{b_{0},s_{0}}=0\) (see Remark 10.2), it follows that

\[\ker\widehat{\mathcal{P}_{b_{0},\gamma}}(0)\ni\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\dot{g}=[\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}},\ t_{b_{0},*}]\dot{g}_{b_{0},s_{0}}+\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\dot{g}_{0}\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X};{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}),\]

and thus \(\tilde{\delta}_{g_{b_{0}},\gamma}G_{g_{b_{0}}}\dot{g}=0\) by Theorem 9.8. As a consequence, \((\dot{g},\dot{A})\) satisfies the linearized Einstein-Maxwell system. By the statement (v) in the mode stability Theorem 7.1, we have

\[(\dot{g},\dot{A})=(\dot{g}_{b_{0}}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})+2\delta_{g_{b_{0}}}^{*}\omega,\ \dot{A}_{b_{0}}(0,0,\dot{\mathbf{Q}})+\mathcal{L}_{\omega^{\sharp}}A_{b_{0}}+d\phi)\]

where \((\omega,\phi)\in\mathrm{Poly}^{2}(t_{b_{0},*})\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X};{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus\mathbb{C})\). To make the calculation simpler, we work in \((t_{0}=t+r_{b_{0},*},r,\theta,\varphi)\) coordinates instead.
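In the component computations that follow, recall the normalization (standard in this setting, and consistent with the pure gauge terms \(2\delta^{*}_{g}\omega\) appearing above; we state it here as the convention assumed in this paper) that \(\delta^{*}_{g}\) denotes the symmetric gradient, so that pure gauge metric perturbations are exactly Lie derivatives:

\[(\delta^{*}_{g}\omega)_{\mu\nu}=\tfrac{1}{2}(\nabla_{\mu}\omega_{\nu}+\nabla_{\nu}\omega_{\mu}),\qquad\mathcal{L}_{\omega^{\sharp}}g=2\delta^{*}_{g}\omega.\]

This reduces the evaluation of the entries \(S_{rr}\), \(S_{t_{0}r}\), \(S_{\theta\theta}\) and \(S_{t_{0}t_{0}}\) below to elementary Christoffel symbol computations for the RN metric.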
Since \(\dot{g}\) is linear in \(t_{0}\), it follows that the coefficient of \(t_{0}^{2}\) in \(\omega\) and \(\phi\) must be a multiple of \(\partial_{t_{0}}^{\flat}\) and \(1\), respectively. Then equating the coefficient of \(t_{0}\) of the equality \(\dot{g}=\dot{g}_{b_{0}}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})+2\delta_{g_{b_{0}}}^{*}\omega\) yields

\[(\frac{2\dot{\mathbf{m}}}{r}-\frac{2\mathbf{Q}_{0}\dot{\mathbf{Q}}}{r^{2}})dt_{0}^{2}=\dot{g}_{b_{0}}^{0}(\dot{\mathbf{m}},0,\dot{\mathbf{Q}})=c(dt_{0})\otimes_{s}\partial_{t_{0}}^{\flat}+\delta_{g_{b_{0}}}^{*}\tilde{\omega}\] (10.31)

where \(c\in\mathbb{R}\) and \(\tilde{\omega}\in\bar{H}_{\mathrm{b}}^{\infty,\ell^{\prime}}(\bar{X};{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\), for some \(\ell^{\prime}\in\mathbb{R}\), is of scalar type \(l=0\). Assume \(\tilde{\omega}=f(r)dt_{0}+g(r)dr\) and let \(S=c(dt_{0})\otimes_{s}\partial_{t_{0}}^{\flat}+\delta_{g_{b_{0}}}^{*}\tilde{\omega}\). A direct calculation implies \(0=S_{rr}=g^{\prime}(r)\) and thus \(g(r)=m\) for some \(m\in\mathbb{R}\). Then \(0=S_{t_{0}r}=\frac{c}{2}+\frac{1}{2}f^{\prime}(r)+\frac{m}{2}\mu_{b_{0}}^{\prime}\) implies \(f(r)=-cr+\frac{2m\mathbf{m}_{0}}{r}-\frac{m\mathbf{Q}_{0}^{2}}{r^{2}}+n\) for some \(n\in\mathbb{R}\). Next, \(0=S_{\theta\theta}=r(f(r)+m\mu_{b_{0}})=-cr^{2}+(m+n)r\) gives \(c=0,m+n=0\), and thus \(\tilde{\omega}=-m\mu_{b_{0}}dt_{0}+mdr\). Finally, \(\frac{2\dot{\mathbf{m}}}{r}-\frac{2\mathbf{Q}_{0}\dot{\mathbf{Q}}}{r^{2}}=S_{t_{0}t_{0}}=\frac{m}{2}\mu_{b_{0}}\mu_{b_{0}}^{\prime}-\frac{m}{2}\mu_{b_{0}}\mu_{b_{0}}^{\prime}=0\) implies that \(\dot{\mathbf{m}}=\mathbf{Q}_{0}\dot{\mathbf{Q}}=0\), and thus the leading term \(\dot{g}_{b_{0},s_{0}}\) of \(\dot{g}\) is \(0\).

If \(\mathbf{Q}_{0}=0\) (in which case \(\dot{\mathbf{Q}}\) is not yet forced to vanish), equating the coefficient of \(t_{0}\) of the equality \(\dot{A}=\dot{A}_{b_{0}}(0,0,\dot{\mathbf{Q}})+d\phi\) yields

\[\frac{\dot{\mathbf{Q}}}{r}dt_{0}=cdt_{0}+d\tilde{\phi}\] (10.32)

where \(c\in\mathbb{R}\) and \(\tilde{\phi}\in\bar{H}_{\mathrm{b}}^{\infty,\ell^{\prime}}(\bar{X};\mathbb{C})\), for some \(\ell^{\prime}\in\mathbb{R}\), is of scalar type \(l=0\). This implies that \(\dot{\mathbf{Q}}=0\), and thus the leading term \(\dot{A}_{b_{0},s_{0}}\) of \(\dot{A}\) is \(0\). This proves the lemma for the RN case \(b=b_{0}\).

This non-existence result for \(b=b_{0}\) in turn means that the following equation

\[\widehat{L_{b_{0},\gamma}}(0)(\dot{g}_{0},\dot{A}_{0})=-[L_{b_{0},\gamma},\ t_{b_{0},*}](\dot{g}_{b_{0},s_{0}},\dot{A}_{b_{0},s_{0}})\in\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X})\]

cannot be solved for \((\dot{g}_{0},\dot{A}_{0})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\). (The right-hand side is in the stated Sobolev space because, by Proposition 5.7, \((\dot{g}_{b_{0},s_{0}},\dot{A}_{b_{0},s_{0}})\in\rho C^{\infty}(\bar{X})+\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\), whose leading term in \(\rho C^{\infty}(\bar{X})\) is annihilated by the normal operator \(2\rho(\rho\partial_{\rho}-1)\) of \(-[L_{b_{0},\gamma},\ t_{b_{0},*}]\).) Suppose \(\{(\dot{g}_{b_{0},s_{0}}^{1},\dot{A}_{b_{0},s_{0}}^{1}),\ (\dot{g}_{b_{0},s_{0}}^{2},\dot{A}_{b_{0},s_{0}}^{2})\}\) is a basis of \(\{(\dot{g}_{b_{0},s_{0}},\dot{A}_{b_{0},s_{0}})\}\) and \(\{(\dot{g}_{b_{0},s_{0}}^{*1},\dot{A}_{b_{0},s_{0}}^{*1}),\ (\dot{g}_{b_{0},s_{0}}^{*2},\dot{A}_{b_{0},s_{0}}^{*2})\}\) is a basis of \(\{(\dot{g}_{b_{0},s_{0}}^{*},\dot{A}_{b_{0},s_{0}}^{*})\}\).
In terms of pairings, this means that the matrix

\[\big(\langle[L_{b_{0},\gamma},\ t_{b_{0},*}](\dot{g}_{b_{0},s_{0}}^{i},\dot{A}_{b_{0},s_{0}}^{i}),\ (\dot{g}_{b_{0},s_{0}}^{*j},\dot{A}_{b_{0},s_{0}}^{*j})\rangle\big)_{1\leq i,j\leq 2}\]

is non-degenerate. Due to the continuity of \((\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\) and \((\dot{g}^{*}_{b,s_{0}},\dot{A}^{*}_{b,s_{0}})\) in \(b\), this pairing matrix is still non-degenerate for the KN case with \(b\) near \(b_{0}\). This finishes the proof of the lemma.

However, there do exist generalized zero modes (growing linearly in \(t_{b,*}\)) whose leading order term is given by \((\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))\) for \(\mathsf{S}\in\mathbf{S}_{1}\).

**Proposition 10.5**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). Then the following space_

\[\widehat{\mathcal{K}}_{b_{0}}:=\ker L_{b_{0},\gamma}\cap\mathit{Poly}^{1}(t_{b_{0},*})\bar{H}^{\infty,-1/2-}_{\rm b}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\]

_satisfies \(\dim\big(\widehat{\mathcal{K}}_{b_{0}}/\mathcal{K}_{b_{0}}\big)=3\). Moreover, for \(b\) near \(b_{0}\) there exist continuous families (in \(b\))_

\[b\to(\hat{g}_{b,s_{1}}(\mathsf{S}),\hat{A}_{b,s_{1}}(\mathsf{S}))\in\ker L_{b,\gamma}\cap\mathit{Poly}^{1}(t_{b,*})\bar{H}^{\infty,-1/2-}_{\rm b}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\]
Since \[\dot{g} =2\delta^{*}_{g_{b_{0}}}(t_{b_{0},*}\omega_{b_{0},s_{1}}(\mathsf{ S}))-2[\delta^{*}_{g_{b_{0}}},t_{b_{0},*}]\omega_{b_{0},s_{1}}(\mathsf{S})+\dot{g}_{0 }=2\delta^{*}_{g_{b_{0}}}(t_{b_{0},*}\omega_{b_{0},s_{1}}(\mathsf{S}))+\dot{g} ^{\prime}_{0},\] (10.41) \[\dot{A} =\widetilde{\mathcal{L}}_{A_{b_{0}}}(t_{b_{0},*}\omega_{b_{0},s_{ 1}}(\mathsf{S}))^{\sharp}+d\big{(}t_{b_{0},*}\phi_{b_{0},s_{1}}(\mathsf{S}) \big{)}-\Big{(}\iota_{w_{b_{0},*}(\mathsf{S})^{\sharp}}A_{b_{0}}+\phi_{b_{0},s_ {1}}(\mathsf{S})\Big{)}dt_{b_{0},*}+\dot{A}_{0}\] \[=\widetilde{\mathcal{L}}_{A_{b_{0}}}(t_{b_{0},*}\omega_{b_{0},s_{ 1}}(\mathsf{S}))^{\sharp}+d\big{(}t_{b_{0},*}\phi_{b_{0},s_{1}}(\mathsf{S}) \big{)}+\dot{A}^{\prime}_{0},\] with \[\dot{g}^{\prime}_{0} =\dot{g}_{0}-2[\delta^{*}_{g_{b_{0}}},t_{b_{0},*}]\omega_{b_{0},s_ {1}}(\mathsf{S})\in\bar{H}_{\rm b}^{\infty,-3/2-}(\bar{X}),\] \[\dot{A}^{\prime}_{0} =\dot{A}_{0}-\Big{(}\iota_{w_{b_{0},*}(\mathsf{S})^{\sharp}}A_{b_{ 0}}+\phi_{b_{0},s_{1}}(\mathsf{S})\Big{)}dt_{b_{0},*}\in\bar{H}_{\rm b}^{\infty,- 1/2-}(\bar{X}).\] where we use \([\delta^{*}_{g_{b_{0}}},t_{b_{0},*}]\in\mathcal{A}^{0}(\bar{X}),\omega_{b_{0},s_ {1}}(\mathsf{S})\in\bar{H}_{\rm b}^{\infty,-3/2-}(\bar{X})\) and \(\phi_{b_{0},s_{1}}(\mathsf{S})\in\bar{H}_{\rm b}^{\infty,-1/2-}(\bar{X})\), it follows that \((\dot{g}^{\prime}_{0},\dot{A}^{\prime}_{0})\) solves the linearized Einstein-Maxwell system as well. By the statement (iv) in Theorem 7.1, we have \[(\dot{g}^{\prime}_{0},\dot{A}^{\prime}_{0})=(2\delta^{*}_{g_{b_{0}}}\omega,\ \mathcal{L}_{\omega^{\sharp}}A_{b_{0}}+d\phi)\] (10.42) with \((\omega,\phi)\in\bar{H}_{\rm b}^{\infty,-5/2-}(\bar{X};\widetilde{\text{sc}T}^{*} \bar{X}\oplus\mathbb{C})\). Plugging (10.41) and (10.42) into (10.40) yields \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\omega =-[\mathcal{W}_{b_{0},\gamma},\ t_{b_{0},*}]\omega_{b_{0},s_{1}}( \mathsf{S})\in\bar{H}_{\rm b}^{\infty,-1/2-}(\bar{X}),\] (10.43) \[-\widetilde{\text{$\square$}_{g_{b_{0}}}}(0)\phi =-\delta_{g_{0}}\Big{(}\mathcal{L}_{\omega^{\sharp}}A_{b_{0}}+( \iota_{w_{b_{0},*}(\mathsf{S})^{\sharp}}A_{b_{0}})dt_{b_{0},*}\Big{)}\] (10.44) \[+[\square_{g_{b_{0}}},\ t_{b_{0},*}]\phi_{b_{0},s_{1}}(\mathsf{S})-[ \delta_{g_{b_{0}}},t_{b_{0},*}]\widetilde{\mathcal{L}}_{A_{b_{0}}}(\omega_{b_{0},s_ {1}}(\mathsf{S}))^{\sharp}\] We first consider (10.43). Since \(\ker\widehat{\mathcal{W}_{b_{0},\gamma}}(0)^{*}\cap\dot{H}_{\rm b}^{-\infty,1/2+}( \bar{X})=\{0\}\) by Theorem 9.13, it follows that (10.43) can be solved for \(\omega\in\bar{H}_{\rm b}^{\infty,-5/2-}(\bar{X})\). According to Proposition 9.17, we know that \[\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})=t_{b_{0},*}\omega_{b_{0},s_{1}}(\mathsf{S})+ \hat{\omega}_{b_{0},s_{1}}(\mathsf{S})\in\ker\mathcal{W}_{b_{0},\gamma}\] with \(\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})\in\bar{H}_{\rm b}^{\infty,-5/2-}(\bar{X})\). Therefore, \[\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\big{(}\omega-\check{\omega}_{b_{0},s_ {1}}(\mathsf{S}) and \(\omega-\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\in\operatorname{span}\{\omega_{b_{0},s_{1}}^{(1)}(\mathsf{S}):\ \mathsf{S}\in\mathbf{S}_{1}\}\) (In fact \(\ker\widehat{\mathcal{W}_{b_{0},\gamma}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-5/2 -}(\bar{X})\) restricted to the scalar type \(l=1\)\(1\)-form is equal to \(\operatorname{span}\{\omega_{b_{0},s_{1}}(\mathsf{S}),\omega_{b_{0},s_{1}}^{(1 )}(\mathsf{S}):\ \mathsf{S}\in\mathbf{S}_{1}\}\). 
Since \((\dot{g},\dot{A})\) is in the quotient space \(\widehat{\mathcal{K}}_{b_{0}}/\mathcal{K}_{b_{0}}\), we can exclude \(\omega_{b_{0},s_{1}}(\mathsf{S})\)). Using (9.67), we rewrite

\[\begin{split}\dot{g}&=2\delta_{g_{b_{0}}}^{*}\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})+2\delta_{g_{b_{0}}}^{*}\Big(\omega-\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\Big)\\&=\operatorname{Poly}^{1}(t_{b_{0},*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})+2\delta_{g_{b_{0}}}^{*}\Big(\omega-\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\Big).\end{split}\]

Since we require \(\dot{g}\in\operatorname{Poly}^{1}(t_{b_{0},*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\), it follows that \(\delta_{g_{b_{0}}}^{*}\big(\omega-\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\big)\) must lie in \(\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\). Owing to (9.62), \(\omega-\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\) must be \(0\). That is, \(\dot{g}=2\delta_{g_{b_{0}}}^{*}\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})\).

Next we turn to (10.44). Since \(\omega=\check{\omega}_{b_{0},s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-5/2-}(\bar{X})\) and \(\phi_{b_{0},s_{1}}\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\), the right-hand side of (10.44) lies in \(\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})\). By Theorem 8.1 (\(\ker\widehat{\Box_{g_{b_{0}}}}(0)^{*}\cap\bar{H}_{\mathrm{b}}^{\infty,-1/2+}(\bar{X})=\{0\}\)) and Proposition 8.2 (\(\ker\widehat{\Box_{g_{b_{0}}}}(0)\cap\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\) restricted to scalar type \(l=1\) is \(0\)), there exists a unique \(\phi\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\), which we denote by \(\check{\phi}_{b_{0},s_{1}}(\mathsf{S})\), satisfying (10.44). This proves that \(\widehat{\mathcal{K}}_{b_{0}}/\mathcal{K}_{b_{0}}\) is at most \(3\)-dimensional. Using (9.68) and rewriting

\[\begin{split}\dot{A}&=\widetilde{\mathcal{L}}_{A_{b_{0}}}(\hat{\omega}_{b_{0},s_{1}}(\mathsf{S}))^{\sharp}+d\big(t_{b_{0},*}\phi_{b_{0},s_{1}}(\mathsf{S})+\check{\phi}_{b_{0},s_{1}}(\mathsf{S})\big)\\&=\Big(\widetilde{\mathcal{L}}_{A_{b_{0}}}(\check{\omega}_{b_{0},s_{1}}(\mathsf{S}))^{\sharp}+\phi_{b_{0},s_{1}}(\mathsf{S})dt_{b_{0},*}+\iota_{(\omega_{b_{0},s_{1}}(\mathsf{S}))^{\sharp}}A_{b_{0}}\,dt_{b_{0},*}+d\check{\phi}_{b_{0},s_{1}}(\mathsf{S})\Big)\\&\quad+t_{b_{0},*}\Big(\widetilde{\mathcal{L}}_{A_{b_{0}}}(\omega_{b_{0},s_{1}}(\mathsf{S}))^{\sharp}+d\phi_{b_{0},s_{1}}(\mathsf{S})\Big)\in\operatorname{Poly}^{1}(t_{b_{0},*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X}),\end{split}\] (10.45)

we conclude that \((\dot{g},\dot{A})=(\hat{g}_{b_{0},s_{1}}(\mathsf{S}),\hat{A}_{b_{0},s_{1}}(\mathsf{S}))=(2\delta_{g_{b_{0}}}^{*}\hat{\omega}_{b_{0},s_{1}}(\mathsf{S}),\ \widetilde{\mathcal{L}}_{A_{b_{0}}}(\hat{\omega}_{b_{0},s_{1}}(\mathsf{S}))^{\sharp}+d(\hat{\phi}_{b_{0},s_{1}}(\mathsf{S})))\), with \(\hat{\phi}_{b_{0},s_{1}}(\mathsf{S})=t_{b_{0},*}\phi_{b_{0},s_{1}}(\mathsf{S})+\check{\phi}_{b_{0},s_{1}}(\mathsf{S})\in\operatorname{Poly}^{1}(t_{b_{0},*})\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\), indeed lies in \(\widehat{\mathcal{K}}_{b_{0}}\).

* Generalized zero modes in the KN case. For \(b\) near \(b_{0}\), we make the ansatz

\[(\dot{g},\dot{A})=(2\delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}(\mathsf{S}),\ \widetilde{\mathcal{L}}_{A_{b}}(\hat{\omega}_{b,s_{1}}(\mathsf{S}))^{\sharp}+d(t_{b,*}\phi_{b,s_{1}}(\mathsf{S}))+d\phi)\]

with \(\phi\in\bar{H}_{\mathrm{b}}^{\infty,-3/2-}(\bar{X})\) to be determined.
By the arguments in the proof of RN case above (the last two paragraphs), there exists \(\check{\phi}_{b,s_{1}}(\mathsf{S})\in\bar{H}_{\mathrm{b}}^{\infty,-3/2}(\bar{X})\) such that with \(\hat{\phi}_{b,s_{1}}(\mathsf{S})=t_{b,*}\phi_{b,s_{1}}(\mathsf{S})+\check{\phi}_{b,s_{1}}(\mathsf{S})\in\operatorname{Poly}^{1}(t_{b,*})\bar{H}_{\mathrm{b}}^{ \infty,-3/2}(\bar{X})\), \[(\hat{g}_{b,s_{1}}(\mathsf{S}),\hat{A}_{b,s_{1}}(\mathsf{S}))\] \[=(2\delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}(\mathsf{S}), \widetilde{\mathcal{L}}_{A_{b}}(\hat{\omega}_{b,s_{1}}(\mathsf{S}))^{\sharp}+d( \hat{\phi}_{b,s_{1}}(\mathsf{S})))\in\ker L_{b,\gamma}\cap\operatorname{Poly}^{1}(t_ {b,*})\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\] extends \((\hat{g}_{b_{0},s_{1}}(\mathsf{S}),\hat{A}_{b_{0},s_{1}}(\mathsf{S}))\) and is continuous in \(b\). * Generalized dual zero modes for RN and KN cases. Combining the discussion around (9.4), (9.5) and (9.6) with the generalized dual zero mode \(\check{\omega}_{b,s_{1}}^{*}(\mathsf{S})=t_{b,*}\omega_{b,s_{1}}^{*}(\mathsf{S})+ \check{\omega}_{b,s_{1}}^{*}(\mathsf{S})\) of \(\mathcal{W}_{b,\gamma}\) established in Proposition 9.17, we see that \[(\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\hat{A}_{b,s_{1}}^{*}(\mathsf{S}))=(2G_{g_{b}} \delta_{g_{b}}^{*}\hat{\omega}_{b,s_{1}}^{*}(\mathsf{S}),\ 4\widetilde{\mathcal{L}}_{A_{b}}(\hat{\omega}_{b,s_{1}}^{*}(\mathsf{S}))^{ \sharp}+d(t_{b,*}\phi_{b,s_{1}}^{*}(\mathsf{S}))+d\check{\phi}_{b,s_{1}}^{*}( \mathsf{S}))\in\ker L_{b,\gamma}^{*}\] provided that \[-\widehat{\square_{g_{b}}}(0)\check{\phi}_{b,s_{1}}^{*}(\mathsf{S}) =-4\delta_{g_{b}}\Big{(}\widetilde{\mathcal{L}}_{A_{b}}(\check{\omega}_{b,s_{1 }}^{ Let \[(\check{g}_{b,s_{1}}(\mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S})) :=(\check{g}_{b,s_{1}}(\mathsf{S})-t_{b,*}\check{g}_{b,s_{1}}( \mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S})-t_{b,*}\check{A}_{b,s_{1}}(\mathsf{S}))\in \check{H}_{\rm b}^{\infty,-1/2-}(\bar{X}), \tag{10.46}\] \[(\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}( \mathsf{S})) :=(\check{g}_{b,s_{1}}^{*}(\mathsf{S})-t_{b,*}\check{g}_{b,s_{1}}^ {*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}(\mathsf{S})-t_{b,*}\check{A}_{b,s_{1}}^ {*}(\mathsf{S}))\in\dot{H}_{\rm b}^{-\infty,-1/2-}(\bar{X}). \tag{10.47}\] be the coefficients of \(t_{b,*}^{0}\) of the generalized (dual) zero modes constructed above. For later use, we determine the leading order behavior of them. **Lemma 10.6**.: _For \(\mathsf{S}\in\mathbf{S}_{1}\), we have_ \[(\check{g}_{b,s_{1}}(\mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S})) \in\rho C^{\infty}(\bar{X})+\check{H}_{\rm b}^{\infty,1/2-}(\bar{X}), \tag{10.48}\] \[(\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}( \mathsf{S})) \in\rho C^{\infty}(\bar{X})+\dot{H}_{\rm b}^{-3/2-C(a,\gamma),1/2-}( \bar{X})\] _where \(C(a,\gamma)>0\) is a sufficiently small constant depending on \(a,\gamma\)._ Proof.: Arguing as in the proof of Proposition 5.7, we again exploit the normal operator argument. 
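Before doing so, we record the elementary indicial computation underlying all such normal operator arguments: for any \(a\in\mathbb{R}\),

\[\rho\partial_{\rho}(\rho\partial_{\rho}-1)\,\rho^{a}=a(a-1)\,\rho^{a},\]

so a term of the form \(\rho^{a}\) times a fixed spherical harmonic can be solved away precisely when the indicial value \(a(a-1)\) does not cancel against the corresponding eigenvalue of \(\not{\Delta}\); this is the role of the condition \(k(k+1)\neq l(l+1)\) invoked below.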
First, since \((\dot{g}_{b,s_{1}}(\mathsf{S}),\ \dot{A}_{b,s_{1}}(\mathsf{S}))\in\bar{H}_{\rm b}^{\infty,1/2-}(\bar{X})\), according to Proposition 5.7 (where we shift the contour of integration through the pole \(\xi=-2i\) and thus the space of resonant states is \(\mathbf{S}_{1}\)), it follows that

\[(\dot{g}_{b,s_{1}}(\mathsf{S}),\ \dot{A}_{b,s_{1}}(\mathsf{S}))\in\rho^{2}\Omega_{1}+\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})\]

where

\[\Omega_{1}\in\operatorname{span}\{\mathsf{S}dt^{2},\ \mathsf{S}dt\otimes_{s}dx^{i},\ \mathsf{S}dx^{i}\otimes_{s}dx^{j},\ \mathsf{S}dt,\ \mathsf{S}dx^{i}\mid\mathsf{S}\in\mathbf{S}_{1},\ 1\leq i,j\leq 3\}.\]

Since

\[-\widehat{L_{b,\gamma}}(0)(\check{g}_{b,s_{1}}(\mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S}))=[L_{b,\gamma},t_{b,*}](\dot{g}_{b,s_{1}}(\mathsf{S}),\ \dot{A}_{b,s_{1}}(\mathsf{S}))\in-2\rho^{3}\Omega_{1}+\bar{H}_{\rm b}^{\infty,5/2-}(\bar{X}),\]

where we use the fact \([L_{b,\gamma},t_{b,*}]\in-2\rho(\rho\partial_{\rho}-1)+\rho^{2}\mathrm{Diff}_{\rm b}^{1}\), it follows that we can solve the above equation for \((\check{g}_{b,s_{1}}(\mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S}))\) by first solving away the leading term \(-2\rho^{3}\Omega_{1}\) and then the error term, which lies in \(\bar{H}_{\rm b}^{\infty,5/2-}(\bar{X})\). For the leading term \(-2\rho^{3}\Omega_{1}\), proceeding as in the proof of Proposition 5.7 (here we have \(k=-1,l=1\)), we need to solve the following equation

\[\Big(\rho\partial_{\rho}(\rho\partial_{\rho}-1)+\not{\Delta}\Big)(\dot{g}_{1},\dot{A}_{1})=-2\rho\Omega_{1}.\]

Since now \(k(k+1)=(-1)\cdot 0\neq 1\cdot 2=l(l+1)\), we conclude that \((\dot{g}_{1},\dot{A}_{1})\in(-1/2)\cdot(-2\rho\Omega_{1})=\rho\Omega_{1}\). For the error term, we need to solve the following equation

\[\Big(\rho\partial_{\rho}(\rho\partial_{\rho}-1)+\not{\Delta}\Big)(\dot{g}_{2},\dot{A}_{2})=f\in\bar{H}_{\rm b}^{\infty,5/2-}(\bar{X}),\quad(\dot{g}_{2},\dot{A}_{2})\in\bar{H}_{\rm b}^{\infty,-1/2-}(\bar{X}).\]

Arguing as in the proof of Proposition 5.7, we see that \((\dot{g}_{2},\dot{A}_{2})\in\rho C^{\infty}(\bar{X})+\bar{H}_{\rm b}^{\infty,1/2-}(\bar{X})\). This proves (10.48) for \((\check{g}_{b,s_{1}}(\mathsf{S}),\ \check{A}_{b,s_{1}}(\mathsf{S}))\).

For \((\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}(\mathsf{S}))\), the above proof also applies. As for the regularity of \((\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}(\mathsf{S}))\) near the radial point at the event horizon (by the same reasoning as in the proof of statement (2) of Proposition 5.7, \((\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}(\mathsf{S}))\) is smooth away from the radial point at the event horizon), we notice that \([L_{b,\gamma},t_{b,*}](\dot{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \dot{A}_{b,s_{1}}^{*}(\mathsf{S}))\) is one derivative less regular than \((\dot{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \dot{A}_{b,s_{1}}^{*}(\mathsf{S}))\) and \((\widehat{L_{b,\gamma}}(0)^{*})^{-1}\) gains one derivative. Therefore, \((\check{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \check{A}_{b,s_{1}}^{*}(\mathsf{S}))\) has at least the same regularity as \((\dot{g}_{b,s_{1}}^{*}(\mathsf{S}),\ \dot{A}_{b,s_{1}}^{*}(\mathsf{S}))\), which is \(-\frac{3}{2}-C(a,\gamma)\) in view of Proposition 5.7.

### Generalized zero modes: quadratic growth

Finally, we exclude the generalized zero mode solutions of \(L_{b,\gamma}\) with quadratic growth in \(t_{b,*}\).
**Lemma 10.7**.: _Let \(k\geq 2\). Then there does not exist \((\dot{g},\dot{A})=(\sum_{j=0}^{k}t_{b,*}^{j}\dot{g}_{j},\sum_{j=0}^{k}t_{b,*}^{j}\dot{A}_{j})\), with \(0\neq(\dot{g}_{k},\dot{A}_{k})\in\{(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S})):\mathsf{S}\in\mathbf{S}_{1}\}\) and \((\dot{g}_{j},\dot{A}_{j})\in\bar{H}_{\rm b}^{\infty,-1/2-}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\), such that \(L_{b,\gamma}(\dot{g},\dot{A})=0\)._

Proof.: According to the discussion at the beginning of the previous subsection, it suffices to consider the case \(k=2\). For simplicity of calculation, we replace \(t_{b,*}\) by another smooth function \(t_{1}:=t_{0}-2r\). Since \(t_{b,*}-t_{1}\in\mathcal{A}^{0-}(\bar{X})\), we may assume

\[(\dot{g},\dot{A})=(t_{1}^{2}\dot{g}_{b,s_{1}}(\mathsf{S})+t_{1}\dot{g}_{1}+\dot{g}_{0},\ t_{1}^{2}\dot{A}_{b,s_{1}}(\mathsf{S})+t_{1}\dot{A}_{1}+\dot{A}_{0})\]

with \(\mathsf{S}\neq 0\) and \((\dot{g}_{j},\dot{A}_{j})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) for \(j=0,1\). We write \(L_{b,\gamma}(\dot{g},\dot{A})\) as a polynomial of degree \(1\) in \(t_{1}\) as follows

\[\begin{split}L_{b,\gamma}(\dot{g},\dot{A})&=t_{1}\Big(2[L_{b,\gamma},\ t_{1}](\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+L_{b,\gamma}(\dot{g}_{1},\dot{A}_{1})\Big)\\&\quad+\Big([[L_{b,\gamma},\ t_{1}],\ t_{1}](\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+[L_{b,\gamma},\ t_{1}](\dot{g}_{1},\dot{A}_{1})+L_{b,\gamma}(\dot{g}_{0},\dot{A}_{0})\Big).\end{split}\]

Then \(L_{b,\gamma}(\dot{g},\dot{A})=0\) is equivalent to the vanishing of the coefficients of the above polynomial. Namely,

\[2[L_{b,\gamma},\ t_{1}](\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+L_{b,\gamma}(\dot{g}_{1},\dot{A}_{1})=0,\] (10.49)

\[[[L_{b,\gamma},\ t_{1}],\ t_{1}](\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+[L_{b,\gamma},\ t_{1}](\dot{g}_{1},\dot{A}_{1})+L_{b,\gamma}(\dot{g}_{0},\dot{A}_{0})=0.\] (10.50)

First, (10.49) implies \(L_{b,\gamma}\big(2t_{1}(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+(\dot{g}_{1},\dot{A}_{1})\big)=0\), and thus by Proposition 10.5

\[2t_{1}(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))+(\dot{g}_{1},\dot{A}_{1})=2(\hat{g}_{b,s_{1}}(\mathsf{S}),\ \hat{A}_{b,s_{1}}(\mathsf{S}))+c(\dot{g}_{b,s_{1}}(\mathsf{S}^{\prime}),\ \dot{A}_{b,s_{1}}(\mathsf{S}^{\prime}))\]

for some \(c\in\mathbb{R}\) and \(\mathsf{S}^{\prime}\in\mathbf{S}_{1}\). Subtracting \(c(\hat{g}_{b,s_{1}}(\mathsf{S}^{\prime}),\ \hat{A}_{b,s_{1}}(\mathsf{S}^{\prime}))\) from \((\dot{g},\dot{A})\), we may assume \(c=0\), and this implies

\[(\dot{g}_{1},\dot{A}_{1})=2(\hat{g}_{b,s_{1}}(\mathsf{S}),\ \hat{A}_{b,s_{1}}(\mathsf{S}))-2t_{1}(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S})).\]

Plugging the above expression for \((\dot{g}_{1},\dot{A}_{1})\) into (10.50) yields

\[\begin{split}L_{b,\gamma}(\dot{g}_{0},\dot{A}_{0})&=-[[L_{b,\gamma},\ t_{1}],\ t_{1}](\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))\\&\quad-[L_{b,\gamma},\ t_{1}]\Big(2(\hat{g}_{b,s_{1}}(\mathsf{S}),\ \hat{A}_{b,s_{1}}(\mathsf{S}))-2t_{1}(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S}))\Big)\in\bar{H}_{\mathrm{b}}^{\infty,3/2-}(\bar{X}).\end{split}\] (10.51)

Our aim is to prove that there is no solution \((\dot{g}_{0},\dot{A}_{0})\in\bar{H}_{\mathrm{b}}^{\infty,-1/2-}(\bar{X})\) to the above equation.
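As in Lemmas 10.3 and 10.4, the mechanism is the Fredholm solvability criterion: on the spaces used here, solvability of \(\widehat{L_{b,\gamma}}(0)(\dot{g}_{0},\dot{A}_{0})=f\) requires

\[\langle f,\ (\dot{g}^{*},\dot{A}^{*})\rangle=0\quad\text{for all}\quad(\dot{g}^{*},\dot{A}^{*})\in\ker\widehat{L_{b,\gamma}}(0)^{*},\]

so exhibiting a single dual state pairing non-trivially with the right-hand side of (10.51) suffices to rule out a solution.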
To this end, we need to verify that the pairing of the right-hand side of (10.51) with \((\dot{g}_{b,s_{1}}^{*}(\mathsf{S}),\dot{A}_{b,s_{1}}^{*}(\mathsf{S}))\) is non-zero. We note that it suffices to check the RN case \(b=b_{0}\) because the general KN case with \(b\) near \(b_{0}\) follows directly by the continuity (in \(b\)). Using \(\dot{A}_{b_{0},s_{1}}(\mathsf{S})=\mathcal{O}_{\bar{H}_{\mathrm{b}}^{\infty,1/2-}(\bar{X})}(|\mathbf{Q}_{0}|)\) and the same argument as in the proof of Lemma 10.3, we obtain that the pairing of the right-hand side of (10.51) with \((\dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S}),\dot{A}_{b_{0},s_{1}}^{*}(\mathsf{S}))\) is equal to

\[\langle[[\Box_{g_{b_{0}},2},\ t_{1}],\ t_{1}]\dot{g}_{b_{0},s_{1}}(\mathsf{S}),\ \dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})\rangle+2\langle[\Box_{g_{b_{0}},2},\ t_{1}](\hat{g}_{b_{0},s_{1}}(\mathsf{S})-t_{1}\dot{g}_{b_{0},s_{1}}(\mathsf{S})),\ \dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})\rangle+\mathcal{O}(|\mathbf{Q}_{0}|+\gamma)=:P_{1}+P_{2}+\mathcal{O}(|\mathbf{Q}_{0}|+\gamma).\]

We first calculate \(P_{1}\). Using \(\dot{g}_{b_{0},s_{1}}(\mathsf{S})=2\delta_{g_{b_{0}}}^{*}\omega_{b_{0},s_{1}}(\mathsf{S})\), \(\dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})=2G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}\omega_{b_{0},s_{1}}^{*}(\mathsf{S})\) and (9.53), we find that

\[\begin{split}P_{1}&=2\langle(\nabla^{\alpha}t_{1}\nabla_{\alpha}t_{1})\dot{g}_{b_{0},s_{1}}(\mathsf{S}),\ \dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})\rangle=2\langle 4(\mu_{b_{0}}-1)\dot{g}_{b_{0},s_{1}}(\mathsf{S}),\ \dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})\rangle\\&=2\langle 16(\mu_{b_{0}}-1)\delta_{g_{b_{0}}}^{*}d\big((r-\mathbf{m}_{0})\mathsf{S}\big),G_{g_{b_{0}}}\delta_{g_{b_{0}}}^{*}d\big((r-\mathbf{m}_{0})H(r-r_{b_{0}})\mathsf{S}\big)\rangle+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma)\\&=-\frac{512}{9}\pi\mathbf{m}_{0}+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma).\end{split}\]

Next, using \(\hat{g}_{b_{0},s_{1}}(\mathsf{S})=2\delta_{g_{b_{0}}}^{*}\hat{\omega}_{b_{0},s_{1}}(\mathsf{S})\), (9.70) and (9.53), we compute

\[\begin{split}P_{2}&=2\langle(2\nabla^{\alpha}t_{1}\nabla_{\alpha}+\Box_{g_{b_{0}},0}t_{1})(\hat{g}_{b_{0},s_{1}}(\mathsf{S})-t_{1}\dot{g}_{b_{0},s_{1}}(\mathsf{S})),\ \dot{g}_{b_{0},s_{1}}^{*}(\mathsf{S})\rangle\\&=-\frac{64}{9}(7-\log 8)\pi\mathbf{m}_{0}+\mathcal{O}(|\mathbf{Q}_{0}|^{2}+\gamma).\end{split}\]

Therefore, \(P_{1}+P_{2}\neq 0\), and hence the pairing is non-zero provided that \(|\mathbf{Q}_{0}|+\gamma\) is sufficiently small. This leads to a contradiction and finishes the proof.

## 11. Structure of the resolvent of the linearized gauge-fixed Einstein-Maxwell operator

In this section, we shall prove that \(\widehat{L_{b,\gamma}}(\sigma)\) is invertible for \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\), \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\), and \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\). This is a generalization of the first statement in Theorem 10.1 from the RN case to the KN case. The proof uses the relevant conclusions of the RN case and the arguments in the proof of Proposition 9.10.

Let \(\gamma>0\) be a fixed sufficiently small constant such that all the conclusions in the previous section hold. We recall the linearized gauge-fixed Einstein-Maxwell operator

\[L_{b,\gamma}:=L_{(g_{b},A_{b}),\gamma},\quad L^{E}_{b,\gamma}:=L^{E}_{(g_{b},A_{b}),\gamma},\quad L^{M}_{b,\gamma}:=L^{M}_{(g_{b},A_{b}),\gamma},\]

and its Fourier-transformed version

\[\widehat{L_{b,\gamma}}(\sigma):=e^{i\sigma t_{b,*}}\,L_{b,\gamma}e^{-i\sigma t_{b,*}}\]

where \(t_{b,*}=\chi_{0}(r)(t+r_{b_{0},*})+(1-\chi_{0}(r))(t-r_{(\mathbf{m},0,\mathbf{Q}),*})\) is defined in (4.30). Since \(\gamma\) is fixed, we will drop the subscript \(\gamma\) from the notation from now on. Let \(s>3\) and \(-3/2<\ell<-1/2\).
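We remark that, since \(L_{b}\) is a second-order differential operator, \(\widehat{L_{b}}(\sigma)\) depends on \(\sigma\) polynomially of degree \(2\); in particular the finite Taylor expansion

\[\widehat{L_{b}}(\sigma)=\widehat{L_{b}}(0)+\sigma\,\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma^{2}}{2}\,\partial_{\sigma}^{2}\widehat{L_{b}}(0)\]

is exact, which is why only the coefficients \(\partial_{\sigma}\widehat{L_{b}}(0)\) and \(\partial_{\sigma}^{2}\widehat{L_{b}}(0)\) enter the expansions in this section.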
We redefine

\[\mathcal{X}^{s,\ell}_{b}(\sigma):=\{(\dot{g},\dot{A})\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}):\widehat{L_{b}}(\sigma)(\dot{g},\dot{A})\in\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\}.\]

As in the previous section, from now on we will drop the notation of the bundle if it is clear from the context. We also decompose the zero energy nullspace \(\mathcal{K}_{b}\) and the space of zero energy dual states \(\mathcal{K}^{*}_{b}\) of \(L_{b}\) as follows

\[\mathcal{K}_{b}=\mathcal{K}_{b,1}\oplus\mathcal{K}_{b,2},\quad\mathcal{K}^{*}_{b}=\mathcal{K}^{*}_{b,1}\oplus\mathcal{K}^{*}_{b,2}\] (11.1)

where

\[\mathcal{K}_{b,1}:=\{(\dot{g}_{b,s_{1}}(\mathbf{S}_{1}),\dot{A}_{b,s_{1}}(\mathbf{S}_{1}))\},\quad\mathcal{K}_{b,2}:=\{(\dot{g}_{b,s_{0}},\dot{A}_{b,s_{0}})\}\oplus\{(\dot{g}_{b,v_{1}}(\mathbf{V}_{1}),\dot{A}_{b,v_{1}}(\mathbf{V}_{1}))\},\] (11.2)

\[\mathcal{K}^{*}_{b,1}:=\{(\dot{g}^{*}_{b,s_{1}}(\mathbf{S}_{1}),\dot{A}^{*}_{b,s_{1}}(\mathbf{S}_{1}))\},\quad\mathcal{K}^{*}_{b,2}:=\{(\dot{g}^{*}_{b,s_{0}},\dot{A}^{*}_{b,s_{0}})\}\oplus\{(\dot{g}^{*}_{b,v_{1}}(\mathbf{V}_{1}),\dot{A}^{*}_{b,v_{1}}(\mathbf{V}_{1}))\}.\] (11.3)

The motivation for such a decomposition is that \(\widehat{L_{b}}(\sigma)\) is more singular when acting on \(\mathcal{K}_{b,1}\) because of the existence of the generalized solutions to \(L_{b}(\dot{g},\dot{A})=0\) with linear growth in \(t_{b,*}\) and leading order terms in \(\mathcal{K}_{b,1}\). Moreover, we recall the generalized zero energy states \((\hat{g}_{b,s_{1}}(\mathbf{S}_{1}),\hat{A}_{b,s_{1}}(\mathbf{S}_{1}))\) and dual states \((\hat{g}^{*}_{b,s_{1}}(\mathbf{S}_{1}),\hat{A}^{*}_{b,s_{1}}(\mathbf{S}_{1}))\), which are constructed in Proposition 10.5, and define

\[\check{\mathcal{K}}_{b}:=\{(\check{g}_{b,s_{1}}(\mathsf{S}),\check{A}_{b,s_{1}}(\mathsf{S})):\mathsf{S}\in\mathbf{S}_{1}\},\quad\check{\mathcal{K}}^{*}_{b}:=\{(\check{g}^{*}_{b,s_{1}}(\mathsf{S}),\check{A}^{*}_{b,s_{1}}(\mathsf{S})):\mathsf{S}\in\mathbf{S}_{1}\}\] (11.4)

where

\[(\check{g}_{b,s_{1}}(\mathsf{S}),\check{A}_{b,s_{1}}(\mathsf{S}))=(\hat{g}_{b,s_{1}}(\mathsf{S}),\hat{A}_{b,s_{1}}(\mathsf{S}))-t_{b,*}(\dot{g}_{b,s_{1}}(\mathsf{S}),\dot{A}_{b,s_{1}}(\mathsf{S})),\] (11.5)

\[(\check{g}^{*}_{b,s_{1}}(\mathsf{S}),\check{A}^{*}_{b,s_{1}}(\mathsf{S}))=(\hat{g}^{*}_{b,s_{1}}(\mathsf{S}),\hat{A}^{*}_{b,s_{1}}(\mathsf{S}))-t_{b,*}(\dot{g}^{*}_{b,s_{1}}(\mathsf{S}),\dot{A}^{*}_{b,s_{1}}(\mathsf{S})).\] (11.6)

### Construction of the reference operator \(\check{L}_{b}(\sigma)\)

We first construct a reference operator \(\check{L}_{b}(\sigma)\) which is invertible near \(\sigma=0\) by perturbing \(\widehat{L_{b}}(\sigma)\) by a smoothing operator.

**Lemma 11.1**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). There exists \(V_{b}\in\Psi^{-\infty}(X;S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\), continuous in \(b\) and with uniformly compactly supported Schwartz kernel, and a constant \(C_{0}>0\), such that the following statements hold._ 1.
_The operator_ \[\check{L}_{b}(\sigma):=\widehat{L_{b}}(\sigma)+V_{b}:\mathcal{X}^{s,\ell}_{b}( \sigma)\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};S^{2\widehat{\mathrm{s}cT}^ {*}}\bar{X}\oplus\widehat{\mathrm{s}cT^{*}}\bar{X})\] _is invertible for_ \(\sigma\in\mathbb{C},\mathrm{Im}\,\sigma\geq 0\) _with_ \(|\sigma|+|b-b_{0}|\leq C_{0}\)_._ 2. _The inverse operator_ \(\check{L}_{b}(\sigma)^{-1}\) _is continuous in_ \(\sigma\) _with values in_ \(\mathcal{L}_{\mbox{weak}}(\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X}),\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X}))\) _(the space of linear bounded operators equipped with weak operator topology), and continuous in_ \(\sigma\) _with values in_ \(\mathcal{L}_{\mbox{op}}(\bar{H}^{s-1+\epsilon,\ell+2+\epsilon}_{\mathrm{b}}(\bar{X}), \bar{H}^{s-\epsilon,\ell-\epsilon}_{\mathrm{b}}(\bar{X}))\) _(norm topology) for any_ \(\epsilon>0\)_._ 3. _One has_ \[V_{b}(\check{g}_{b},\check{A}_{b})=0\quad\mbox{for}\quad(\check{g}_{b},\check{A} _{b})\in\check{\mathcal{K}}_{b}.\] (11.7) Proof.: The proof closely follows [57, Lemma 11.1 and Lemma 11.7]. Concretely, we shall construct \[V_{b}:\mathcal{D}^{\prime}(X;S^{2}\widetilde{\text{sc}T^{*}}\bar{X}\oplus \widetilde{\text{sc}T^{*}}\bar{X})\to C_{c}^{\infty}(X;S^{2}T^{*}X\oplus T^{*}X)\] where \(V_{b}\) satisfy \[\bar{\mathcal{K}}_{b}\subset\ker V_{b},\quad\text{ran}V_{b}\subset\mathcal{R} _{b}^{\perp}\quad\text{and}\quad V_{b}|_{\mathcal{K}_{b}}:\mathcal{K}_{b}\to \mathcal{R}_{b}^{\perp}\quad\text{is invertible}. \tag{11.8}\] where \(\mathcal{R}_{b}:=\left(\text{ran}_{\mathcal{X}_{b}^{*,\ell}(0)}\widehat{L_{b} }(0)\right)\). We now prove that once the above requirements are satisfied, all the three statements in the lemma hold. As for the proof of statements (1) and (2), it suffices to prove the invertiblity of \(\check{L}_{b}(\sigma)=\widehat{L_{b}}(\sigma)+V_{b}\) for the case \(b=b_{0},\sigma=0\) since the invertibility and continuity for \((b,\sigma)\) near \((b_{0},0)\) follow from the same arguments as in the proof of Lemma 9.11. We split the domain \(\mathcal{X}_{b_{0}}^{s,\ell}(0)=\mathcal{K}_{b_{0}}^{\perp}\underline{\oplus }\mathcal{K}_{b_{0}}\) and the target space \(\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X};S^{2}\widetilde{\text{sc}T^{*}}\bar{X} \oplus\widetilde{\text{sc}T^{*}}\bar{X})=\mathcal{R}_{b_{0}}\oplus\mathcal{R }_{b_{0}}^{\perp}\). Under these splittings, \(\check{L}_{b_{0}}(0)=\widehat{L_{b_{0}}}(0)+V_{b}\) takes the form \[\check{L}_{b_{0}}(0)=\begin{pmatrix}L_{00}&0\\ V_{10}&V_{11}\end{pmatrix}\] where \(L_{00}=\widehat{L_{b_{0}}}(0)|_{\mathcal{K}_{b_{0}}^{\perp}}:\mathcal{K}_{b_{0 }}^{\perp}\to\mathcal{R}_{b_{0}}\) is invertible. Then the invertibility of \(\check{L}_{b_{0}}(0)\) follows from that of \(V_{11}:\mathcal{K}_{b_{0}}\to\mathcal{R}_{b_{0}}^{\perp}\). We notice that the invertibility of \(V_{11}\) holds provided that the above conditions in (11.8) are satisfied. For the statement (3), it is clear that the conclusion (11.7) directly follows from the condition \(\bar{\mathcal{K}}_{b}\subset\ker V_{b}\). First, since \((\check{g}_{b_{0},s_{1}}(\mathbf{S}_{1}),\check{A}_{b_{0},s_{1}}(\mathbf{S}_{ 1}))\in\check{H}_{\rm b}^{\infty,1/2-}(\bar{X})\) (see Theorem 10.1) and \((\check{g}_{b_{0},s_{1}}(\mathbf{S}_{1}),\check{A}_{b_{0},s_{1}}(\mathbf{S}_{ 1}))\) have nonzero \(r^{-1}\) leading terms (see Lemma 10.6), we conclude that \(\mathcal{K}_{b_{0},1}\cap\bar{\mathcal{K}}_{b_{0}}=0\), and thus \(\mathcal{K}_{b_{0}}\cap\bar{\mathcal{K}}_{b_{0}}=0\). 
By the continuity of \(\mathcal{K}_{b}\) and \(\bar{\mathcal{K}}_{b}\) in \(b\), this holds for \(b\) near \(b_{0}\) as well. This implies that fixing the basis \(\{h_{b,1},\cdots,h_{b,8}\}\) of \(\mathcal{K}_{b}\) and \(\{\check{h}_{b,1},\check{h}_{b,2},\check{h}_{b,3}\}\) of \(\bar{\mathcal{K}}_{b}\), both of which depend continuously on \(b\), by the same reasoning as in the beginning of the seventh step in the proof of Theorem 10.1, there exists \(\{h_{b,1}^{\flat},\cdots,h_{b,8}^{\flat}\}\subset C_{c}^{\infty}(X^{\diamond} ;S^{2}\widetilde{\text{sc}T^{*}}\bar{X}\oplus\widetilde{\text{sc}T^{*}}\bar{X})\) such that \(\langle h_{b,i},h_{b,j}^{\flat}\rangle=\delta_{ij}\) for \(1\leq i,j\leq 8\) and \(\langle\check{h}_{b,i},h_{b,j}^{\flat}\rangle=0\) for \(1\leq i\leq 3,1\leq j\leq 8\). (One can verify that \(\{h_{b,1}^{\flat},\cdots,h_{b,8}^{\flat}\}\subset C_{c}^{\infty}(X^{\diamond})\) can be chosen to be continuous in \(b\).) Next, we pick a basis of \(\mathcal{R}_{b}^{\perp}\) in the following way. We note that \(\mathcal{K}_{b}^{*}\cap\bar{\mathcal{K}}_{b}^{*}=0\), which can be proved in a similar manner to the statement \(\mathcal{K}_{b}\cap\bar{\mathcal{K}}_{b}=0\). Fixing the continuous (in \(b\)) basis \(\{h_{b,1}^{*},\cdots,h_{b,8}^{\sharp}\}\) of \(\mathcal{K}_{b}^{*}=\text{ann}(\mathcal{R}_{b})\) and \(\{\check{h}_{b,1}^{*},\check{h}_{b,2}^{*},\check{h}_{b,3}^{*}\}\) of \(\bar{\mathcal{K}}_{b}^{*}\), by the same reasoning mentioned above, there exists \(\{h_{b,1}^{\sharp},\cdots,h_{b,8}^{\sharp}\}\subset C_{c}^{\infty}(X;S^{2} \widetilde{\text{sc}T^{*}}\bar{X}\oplus\widetilde{\text{sc}T^{*}}\bar{X})\) (which can be chosen continuous in \(b\)) such that \(\langle h_{b,i}^{\sharp},h_{b,j}^{*}\rangle=\delta_{ij}\) for \(1\leq i,j\leq 8\) and \(\langle h_{b,i}^{\sharp},\check{h}_{b,j}^{*}\rangle=0\) for \(1\leq i\leq 8,1\leq j\leq 3\). It is easy to check that \(\{h_{b,1}^{\sharp},\cdots,h_{b,8}^{\sharp}\}\) are linearly independent and the space generated by \(h_{b,j}^{\sharp},1\leq j\leq 8\) is a complement of \(\mathcal{R}_{b}\). Therefore, we define \[V_{b}(h):=\sum_{i=1}^{8}h_{b,i}^{\sharp}\langle h,h_{b,i}^{\flat}\rangle,\] which satisfies the requirement (11.8). Using \(\check{L}_{b}(\sigma)\) constructed in Lemma 11.1, we define the following spaces \[\widetilde{\mathcal{K}}_{b,1}:=\check{L}_{b}(0)\mathcal{K}_{b,1},\quad\widetilde{ \mathcal{K}}_{b,2}:=\check{L}_{b}(0)\mathcal{K}_{b,2},\quad\widetilde{\mathcal{K }}_{b}:=\check{L}_{b}(0)\mathcal{K}_{b}=\widetilde{\mathcal{K}}_{b,1}\oplus \widetilde{\mathcal{K}}_{b,2}. \tag{11.9}\] Since \(\check{L}_{b}(0)\mathcal{K}_{b}=V_{b}(\mathcal{K}_{b})\), we have \(\widetilde{\mathcal{K}}_{b,1},\widetilde{\mathcal{K}}_{b,2},\widetilde{ \mathcal{K}}_{b}\subset C_{c}^{\infty}(X;S^{2}\widetilde{\text{sc}T^{*}}\bar{X} \oplus\widetilde{\text{sc}T^{*}}\bar{X})\). Next, we decompose the target space \(\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X};S^{2}\widetilde{\text{sc}T^{*}}\bar{X} \oplus\widetilde{\text{sc}T^{*}}\bar{X})\) into the range of \(\widehat{L}_{b}(0)\) and its complement in a continuous (in \(b\)) way. **Lemma 11.2**.: _Let \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). 
There exists a linear projection map_ \[\Pi_{b}^{\perp}:\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X};S^{2}\widetilde{\text{sc}T^{*} }\bar{X}\oplus\widetilde{\text{sc}T^{*}}\bar{X})\to\bar{H}_{\rm b}^{s-1,\ell+2}( \bar{X};S^{2}\widetilde{\text{sc}T^{*}}\bar{X}\oplus\widetilde{\text{sc}T^{*}} \bar{X}),\] _which is of rank \(8\) and depends continuously on \(b\) near \(b_{0}\) in the norm topology, such that_ \[\langle(I-\Pi_{b}^{\perp})h,\ h^{*}\rangle=0\quad\text{for all}\quad h^{*}\in \mathcal{K}_{b}^{*}.\] _Moreover, the \(\Pi^{\perp}_{b}\) can be chosen such that it maps \(\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X};S^{2\widetilde{\rm sc}\widetilde{T}^{*}} \bar{X}\oplus\widetilde{\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X})\) to \(\mathcal{C}^{\infty}_{c}(X;S^{2\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X} \oplus\widetilde{\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X})\)._ Proof.: Let \(\{h^{*}_{b,1},\cdots,h^{*}_{b,8}\}\) be a basis which is continuous in \(b\) of \(\mathcal{K}^{*}_{b}\). There exists \(\{h^{\sharp}_{b,1},\cdots,h^{\sharp}_{b,8}\}\subset C^{\infty}_{c}(X;S^{2 \widetilde{\rm sc}\widetilde{T}^{*}}\bar{X}\oplus\widetilde{\widetilde{\rm sc }\widetilde{T}^{*}}\bar{X})\) which is continuous in \(b\), such that \(\langle h^{\sharp}_{b,i},h^{*}_{b,j}\rangle=\delta_{ij}\). We then define \[\Pi^{\perp}_{b}h=\sum_{i=1}^{8}h^{\sharp}_{b,i}\langle h,h^{*}_{b,i}\rangle.\] It is easy to check that \(\Pi^{\perp}_{b}\) has rank \(8\) and satisfies \(\langle(I-\Pi^{\perp}_{b})h,\ h^{*}\rangle=0\) for all \(h^{*}\in\mathcal{K}^{*}_{b}\). We then define the projection onto \(\mathcal{R}_{b}\) which is the range of \(\widehat{L_{b}}(0)\) \[\Pi_{b}=I-\Pi^{\perp}_{b}:\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X};S^{2 \widetilde{\rm sc}\widetilde{T}^{*}}\bar{X}\oplus\widetilde{\widetilde{\rm sc }\widetilde{T}^{*}}\bar{X})\to\mathcal{R}_{b}={\rm ann}\ \mathcal{K}^{*}_{b}.\] Now we are able to split the domain and target space of the operator \(\widehat{L_{b}}(\sigma)\tilde{L}_{b}(\sigma)^{-1}:\bar{H}^{s-1,\ell+2}_{\rm b }\to\bar{H}^{s-1,\ell+2}_{\rm b}\) as follows. \[\begin{split}\text{domain:}&\quad\bar{H}^{s-1, \ell+2}_{\rm b}(\bar{X};S^{2\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X}\oplus \widetilde{\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X})=\widetilde{K}^{\perp }_{b}\oplus\widetilde{\mathcal{K}}_{b,1}\oplus\widetilde{\mathcal{K}}_{b,2},\\ \text{target:}&\quad\bar{H}^{s-1,\ell+2}_{\rm b}( \bar{X};S^{2\widetilde{\rm sc}\widetilde{T}^{*}}\bar{X}\oplus\widetilde{ \widetilde{\rm sc}\widetilde{T}^{*}}\bar{X})=\mathcal{R}_{b}\oplus\mathcal{ R}^{\perp}_{b,1}\oplus\mathcal{R}^{\perp}_{b,2}\end{split} \tag{11.10}\] where \(\mathcal{R}^{\perp}_{b,1}\), resp. \(\mathcal{R}^{\perp}_{b,2}\) is a subspace of dimension \(3\), resp. \(5\) and defined as follows. Fixing the continuous (in \(b\)) basis \(\{h^{*(1)}_{b,1},\cdots,h^{*(1)}_{b,3}\}\) of \(\mathcal{K}^{*}_{b,1}\) and \(\{h^{*(2)}_{b,1},\cdots,h^{*(2)}_{b,5}\}\) of \(\mathcal{K}^{*}_{b,2}\), there exist \[\{v^{(1)}_{b,1},v^{(1)}_{b,2},v^{(1)}_{b,3}\},\ \{v^{(2)}_{b,1},v^{(2)}_{b,2},v^{( 2)}_{b,3},v^{(2)}_{b,4},v^{(2)}_{b,5}\}\subset C^{\infty}_{c}(X;S^{2\widetilde{ \rm sc}\widetilde{T}^{*}}\bar{X}\oplus\widetilde{\widetilde{\rm sc}\widetilde{ T}^{*}}\bar{X})\] such that \(\langle v^{(1)}_{b,i},h^{*(1)}_{b,j}\rangle=\delta_{ij}\) and \(\langle v^{(2)}_{b,i},h^{*(2)}_{b,j}\rangle=\delta_{ij}\). 
We then define

\[\mathcal{R}^{\perp}_{b,1}=\{v^{(1)}_{b,1},v^{(1)}_{b,2},v^{(1)}_{b,3}\}\quad\text{and}\quad\mathcal{R}^{\perp}_{b,2}=\{v^{(2)}_{b,1},v^{(2)}_{b,2},v^{(2)}_{b,3},v^{(2)}_{b,4},v^{(2)}_{b,5}\}.\]

### Structure of the resolvent near zero energy

Now we are in a position to establish the mode stability of the operator \(\widehat{L_{b}}(\sigma)\) for \(b\) near \(b_{0}\) at \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\); we also provide a description of the structure of the resolvent \(\widehat{L_{b}}(\sigma)^{-1}\) near \(\sigma=0\) along the way.

**Theorem 11.3**.: _Let \((g_{b},A_{b})\) be the KN metric and electromagnetic \(4\)-potential where \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) is near \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) with \(|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\). Then the Fourier-transformed operator_

\[\begin{split}\widehat{L_{b}}(\sigma):\{(\dot{g},\dot{A})&\in\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}):\widehat{L_{b}}(\sigma)(\dot{g},\dot{A})\in\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\}\\&\to\bar{H}^{s,\ell+1}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\end{split}\] (11.11)

_is invertible for \(\sigma\in\mathbb{C},\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\) and \(s>3,\ell<-\frac{1}{2},s+\ell>-\frac{1}{2}\)._

Proof.: By the arguments in the last step of the proof of Theorem 8.1 and Proposition 9.10, it suffices to prove that \(\widehat{L_{b}}(\sigma):\mathcal{X}^{s,\ell}_{b}(\sigma)\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\) is invertible for \(s>3,-3/2<\ell<-1/2\) and \((b,\sigma)\) with \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\) near \((b_{0},0)\). To this end, we shall prove the invertibility of the operator

\[\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}:\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\to\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X};S^{2}\,{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X}\oplus{}^{\mathrm{sc}}\widetilde{T}^{*}\bar{X})\]

for \(s>3,-3/2<\ell<-1/2\) and \((b,\sigma)\) with \(\mathrm{Im}\,\sigma\geq 0,\sigma\neq 0\) near \((b_{0},0)\). Under the splitting (11.10), we express

\[\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}=\begin{pmatrix}L_{00}(b,\sigma)&L_{01}(b,\sigma)&L_{02}(b,\sigma)\\ L_{10}(b,\sigma)&L_{11}(b,\sigma)&L_{12}(b,\sigma)\\ L_{20}(b,\sigma)&L_{21}(b,\sigma)&L_{22}(b,\sigma)\end{pmatrix}.\] (11.12)

It is clear that \(L_{ij}(b,0)=0\) when \((i,j)\neq(0,0)\) and \(L_{00}(b,0)\) is invertible. We will frequently use the following resolvent identities (whose proof is similar to that of the resolvent identity (9.35) in the proof of Lemma 9.12) in the subsequent analysis:

\[\big(\check{L}_{b}(\sigma)^{-1}-\check{L}_{b}(0)^{-1}\big)(\dot{g},\dot{A})=\check{L}_{b}(\sigma)^{-1}\big(\check{L}_{b}(0)-\check{L}_{b}(\sigma)\big)\check{L}_{b}(0)^{-1}(\dot{g},\dot{A}),\] (11.13)

\[\big((\check{L}_{b}(\sigma)^{-1})^{*}-(\check{L}_{b}(0)^{-1})^{*}\big)(\dot{g}^{*},\dot{A}^{*})=(\check{L}_{b}(\sigma)^{-1})^{*}\big(\check{L}_{b}(0)^{*}-\check{L}_{b}(\sigma)^{*}\big)(\check{L}_{b}(0)^{-1})^{*}(\dot{g}^{*},\dot{A}^{*})\] (11.14)
* Analysis of \(L_{01}\) and \(L_{02}\). For \((\tilde{g}_{2},\tilde{A}_{2})=\check{L}_{b}(0)(\hat{g}_{2},\dot{A}_{2})\in \widetilde{\mathcal{K}}_{b,2}\), we have \[\begin{split}\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}( \tilde{g}_{2},\check{A}_{2})&=\big{(}\check{L}_{b}(\sigma)-V_{b} \big{)}\check{L}_{b}(\sigma)^{-1}\check{L}_{b}(0)(\hat{g}_{2},\dot{A}_{2})\\ &=-V_{b}\big{(}\check{L}_{b}(\sigma)^{-1}-\check{L}_{b}(0)^{-1} \big{)}\check{L}_{b}(0)(\hat{g}_{2},\dot{A}_{2})\\ &=V_{b}\check{L}_{b}(\sigma)^{-1}\big{(}\check{L}_{b}(\sigma)- \check{L}_{b}(0)\big{)}\check{L}_{b}(0)^{-1}\check{L}_{b}(0)(\hat{g}_{2},\dot {A}_{2})\\ &=\sigma V_{b}\check{L}_{b}(\sigma)^{-1}\partial_{\sigma} \widehat{L_{b}}(0)(\hat{g}_{2},\dot{A}_{2})+\frac{\sigma^{2}}{2}V_{b}\check {L}_{b}(\sigma)^{-1}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\hat{g}_{2},\dot {A}_{2})\end{split}\] (11.15) where we use \(\check{L}_{b}(0)^{-1}\check{L}_{b}(0)(\hat{g}_{2},\dot{A}_{2})=(\dot{g}_{2}, \dot{A}_{2})\in\rho C^{\infty}(\bar{X})+\check{H}_{\rm b}^{\infty,1/2-}(\bar {X})\) by Proposition 5.7 and the resolvent identity (11.13) in the third step. Since \((\hat{g}_{2},\dot{A}_{2})\in\rho C^{\infty}(\bar{X})+\check{H}_{\rm b}^{ \infty,1/2-}(\bar{X})\) which is annihilated by normal operator of \(\partial_{\sigma}\widehat{L_{b}}(0)\) modulo \(\bar{H}_{\rm b}^{\infty,3/2-}(\bar{X})\), it follows that \[\partial_{\sigma}\widehat{L_{b}}(0)(\hat{g}_{2},\dot{A}_{2})\in\check{H}_{\rm b }^{\infty,3/2-}(\bar{X}).\] As \(\partial_{\sigma}^{2}\widehat{L_{b}}(0)\in\rho^{2}C^{\infty}(\bar{X};{\rm End }(\widehat{S^{2\infty}}\widetilde{T^{*}}\bar{X}\oplus\widetilde{\widetilde{ \widetilde{\widetilde{\widetilde{\widetilde{\widetilde{\widetilde{\widetilde{ \widetilde{\widetilde{\widetilde{\widetilde{\widetilde{\widetilde{\widetilde{\hat{\hat{\hathathathathathathathathat } }}}}}}}}}}}}}}} * Analysis of \(L_{10}\) and \(L_{20}\). For \((\tilde{g}_{0},\tilde{A}_{0})\in\widetilde{\mathcal{K}}_{b}^{\perp}\) and \((\dot{g}_{2}^{*},\dot{A}_{2}^{*})\in\mathcal{K}_{b,2}^{*}\), we have \[\begin{split}\langle\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{ -1}(\tilde{g}_{0},\tilde{A}_{0}),\ (\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ =\langle(\tilde{g}_{0},\tilde{A}_{0}),\ (\check{L}_{b}(\sigma)^{-1}) ^{*}\widehat{L_{b}}(\sigma)^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ =\langle(\tilde{g}_{0},\tilde{A}_{0}),(\dot{g}_{2}^{*},\dot{A}_{ 2}^{*})\rangle-\langle(\tilde{g}_{0},\tilde{A}_{0}),(\check{L}_{b}(\sigma)^{- 1})^{*}V_{b}^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ =-\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),\big{(}(\check{L}_{ b}(\sigma)^{-1})^{*}-(\check{L}_{b}(0)^{-1})^{*}\big{)}V_{b}^{*}(\dot{g}_{2}^{*}, \dot{A}_{2}^{*})\Big{\rangle}\\ =\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),(\check{L}_{b}(\sigma )^{-1})^{*}\big{(}\check{L}_{b}(\sigma)^{*}-\check{L}_{b}(0)^{*}\big{)}( \check{L}_{b}(0)^{-1})^{*}V_{b}^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\Big{\rangle} \\ =\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),(\check{L}_{b} (\sigma)^{-1})^{*}\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2} \partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}^{*}(\dot{g}_{2}^{*},\dot{A}_{2} ^{*})\Big{\rangle}\end{split}\] (11.17) where we use \((\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})=(\dot{ g}_{2}^{*},\dot{A}_{2}^{*})\in\rho C^{\infty}(\bar{X})+\check{H}_{\rm b}^{-3/2-C(a, \gamma),1/2-}(\bar{X})\) where \(C(a,\gamma)\) is a small constant by Proposition 5.7 and the resolvent identity (11.14) in the fourth step. 
Since \((\dot{g}_{2}^{*},\dot{A}_{2}^{*})\) is annihilated by normal operator of \((\partial_{\sigma}\widehat{L_{b}}(0))^{*}\) modulo \(\check{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X})\), it follows that \((\partial_{\sigma}\widehat{L_{b}}(0))^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\in \check{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X})\). As \((\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}\in\rho^{2}C^{\infty}(\bar{X})\), we have \[(\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*} )\in\check{H}_{\rm b}^{-3/2-\nu-,3/2-}(\bar{X}).\] Since \((\check{L}_{b}(\sigma)^{-1})^{*}\) is continuous with values in \(\mathcal{L}_{\rm op}(\check{H}_{\rm b}^{-s+\epsilon,-\ell+\epsilon}(\bar{X}), \check{H}_{\rm b}^{-s+1-\epsilon,-\ell-2-\epsilon}(\bar{X}))\), we conclude that \[(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}\widehat{L_{b}}(0))^{*}( \dot{g}_{2}^{*},\dot{A}_{2}^{*}),\ (\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}^{2} \widehat{L_{b}}(0))^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\] are continuous in \((b,\sigma)\), and thus \[L_{20}(b,\sigma)=\sigma\widetilde{L}_{20}(b,\sigma)\] where \(\widetilde{L}_{20}(b,\sigma):\widetilde{\mathcal{K}}_{b}^{\perp}\to\mathcal{R} _{b,2}^{\perp}\) is continuous in \((b,\sigma)\) in the norm operator topology. For \((\tilde{g}_{0},\tilde{A}_{0})\in\widetilde{\mathcal{K}}_{b}^{\perp}\) and \((\dot{g}_{1}^{*},\dot{A}_{1}^{*})\in\mathcal{K}_{b,1}^{*}\), the calculation in (11.17) still applies. We further calculate \[\begin{split}\langle\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma) ^{-1}(\tilde{g}_{0},\tilde{A}_{0}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\rangle\\ =\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),(\check{L}_{b} (\sigma)^{-1})^{*}\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2} \partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^ {*})\Big{\rangle}\\ =\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),i(\check{L}_{ b}(\sigma)^{-1})^{*}\widehat{L_{b}}(0)^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*}) \Big{\rangle}\\ \quad+\sigma^{2}\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}), \frac{1}{2}(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}^{2}\widehat{L_{b} }(0))^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ =\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),i\big{(}I-( \check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\dot{g}_{1}^{*},\dot{A}_{1}^{*}) \Big{\rangle}\\ \quad+\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),i\big{(}( \check{L}_{b}(\sigma)^{-1})^{*}-(\check{L}_{b}(0)^{-1})^{*}\big{)}\widehat{L _{b}}(0)^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ \quad+\sigma^{2}\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}), \frac{1}{2}(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}^{2}\widehat{L_{b} }(0))^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\end{split}\] By a normal operator argument as in the proof of Proposition 5.7, we have \[(\check{L}_{b}(0)^{-1})^{*}:C_{c}^{\infty}(X)\to\rho C^{\infty}(\bar{X})+ \check{H}_{\rm b}^{-3/2-C(a,\gamma),1/2-}(\bar{X}).\] Therefore, by Lemma 10.6, we conclude \[(\check{L}_{b}(0)^{-1})^{*}\widehat{L_{b}}(0)^{*}(\dot{g}_{1},\check{A}_{1} )=(\check{g}_{1},\check{A}_{1})-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}(\dot{g}_ {1},\check{A}_{1})\in\rho C^{\infty}(\bar{X})+\check{H}_{\rm b}^{-3/2-C(a, \gamma),1/2-}(\bar{X}).\] This implies that we can apply the resolvent identity (11.14) again and obtain \[\begin{split}\langle\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma) ^{-1}(\tilde{g}_{0},\tilde{A}_{0}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\rangle\\ 
=\sigma\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),i\big{(}I-( \check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\dot{g}_{2}^{*},\dot{A}_{2}^{*}) \Big{\rangle}\\ \quad-\sigma^{2}\Big{\langle}(\tilde{g}_{ By the same reasoning as in the component \(L_{20}(b,\sigma)\), we find that \[L_{10}(b,\sigma)=\sigma\widetilde{L}_{10}(b,\sigma)\] where \(\widetilde{L}_{10}(b,\sigma)=\widetilde{L}_{10}^{0}(b)+\sigma\widetilde{L}_{10} ^{\varepsilon}(b,\sigma)\) with \(\widetilde{L}_{10}^{\varepsilon}(b,\sigma):\widetilde{\mathcal{K}}_{b}^{ \perp}\to\mathcal{R}_{b,1}^{\perp}\) continuous in \((b,\sigma)\) in the norm operator topology. * Analysis of \(L_{22}\). Using (11.15), we find that \[\begin{split}&\langle\widetilde{L}_{b}(\sigma)\check{L}_{b}( \sigma)^{-1}(\check{g}_{2},\check{A}_{2}),\ (\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ &\quad=\langle\sigma\partial_{\sigma}\widehat{L}_{b}(0)(\dot{g}_ {2},\dot{A}_{2})+\frac{\sigma^{2}}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \dot{g}_{2},\dot{A}_{2}),\ (\check{L}_{b}(\sigma)^{-1})^{*}V_{b}^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*}) \rangle\\ &\quad=\langle\sigma\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_ {2},\dot{A}_{2})+\frac{\sigma^{2}}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \dot{g}_{2},\dot{A}_{2}),\ (\check{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ &\quad\quad+\langle\sigma\partial_{\sigma}\widehat{L_{b}}(0)( \dot{g}_{2},\dot{A}_{2})+\frac{\sigma^{2}}{2}\partial_{\sigma}^{2}\widehat{L_{ b}}(0)(\dot{g}_{2},\dot{A}_{2}),\ (\check{(}L_{b}(\sigma)^{-1})^{*}\big{(}\check{L}_{b}(0)^{*}-\check{L}_{b}( \sigma)^{*}\big{)}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\\ &\quad\quad-\langle\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_ {2},\dot{A}_{2})+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \dot{g}_{2},\dot{A}_{2}),\ (\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma} \widehat{L}_{b}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{ *}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\end{split}\] (11.19) where we use \((\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})=(\dot{ g}_{2}^{*},\dot{A}_{2}^{*})\) and the resolvent identity (11.14) in the third step. According to the proof of the non-existence of the linearly growing generalized zero modes with leading order term \((\dot{g}_{2},\dot{A}_{2})\in\mathcal{K}_{b,2}\) (see Lemma 10.3 and 10.4), the pairing on \(\mathcal{K}_{b,2}\times\mathcal{K}_{b,2}^{*}\) \[\langle-i[L_{b},t_{b,*}](\dot{g}_{2},\dot{A}_{2}),\ (\dot{g}_{2}^{*},\dot{A}_{2}^{*})\rangle\] is non-degenerate for \(b\) near \(b_{0}\). Therefore, we find that \[L_{22}(b,\sigma)=\sigma\widetilde{L}_{22}(b,\sigma)\quad\text{with}\quad \widetilde{L}_{22}(b,\sigma)=\widetilde{L}_{22}^{0}(b)+\sigma\widetilde{L}_{22 }^{\varepsilon}(b,\sigma)\] where \(\widetilde{L}_{22}^{0}(b)\) is invertible and \(\widetilde{L}_{22}^{\varepsilon}(b,\sigma):\widetilde{\mathcal{K}}_{b,2}\to \mathcal{R}_{b,2}^{\perp}\) is continuous in \((b,\sigma)\) in the norm operator topology. * Analysis of \(L_{11}\). 
Using the calculation in (11.16) and (11.18), we see that \[\langle\widehat{L_{b}}(\sigma)\bar{L}_{b}(\sigma)^{-1}(\tilde{g}_{1}, \tilde{A}_{1}),\ (\hat{g}_{1}^{*},\hat{A}_{1}^{*})\rangle\] \[=\sigma^{2}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+ \frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_{1}, \check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1 },\check{A}_{1}),\ (\check{L}_{b}(\sigma)^{-1})^{*}V_{b}^{*}(\hat{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[=\sigma^{2}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+ \frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_{1 },\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_ {1},\check{A}_{1}),\ (\check{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad+\sigma^{2}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}( 0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_ {1},\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{ g}_{1},\check{A}_{1}),\ (\check{L}_{b}(\sigma)^{-1})^{*}(\check{L}_{b}(0)-\check{L}_{b}(\sigma))^{*}( \hat{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[=\sigma^{2}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+ \frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_{1}, \check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1 },\check{A}_{1}),\ (\check{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad-\sigma^{3}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}( 0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_{1 },\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_ {1},\check{A}_{1}),\ i(\check{L}_{b}(\sigma)^{-1})^{*}\widehat{L_{b}}(0)^{*}( \check{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad-\sigma^{4}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}( 0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1}, \check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1 },\check{A}_{1}),\ (\check{L}_{b}(\sigma)^{-1})^{*}(\frac{1}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0))^{*}(\check{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[=\sigma^{2}\langle-i\partial_{\sigma}\widehat{L_{b}}(0)(\check{g} _{1},\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{ g}_{1},\check{A}_{1}),\ (\check{g}_{2}^{*},\check{A}_{2}^{*})\rangle+\sigma^{3}\langle\frac{-i}{2} \partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1}),(\check{ g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad-\sigma^{3}\langle-i\partial_{\sigma}\widehat{L_{b}}(0)( \check{g}_{1},\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \check{g}_{1},\check{A}_{1}),\ i(I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*})(\check {g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad-\sigma^{4}\langle\frac{-i}{2}\partial_{\sigma}^{2}\widehat{L _{b}}(0)(\check{g}_{1},\check{A}_{1}),\ i(I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{ *})(\check{g}_{1}^{*},\check{A}_{1}^{*})\rangle\] \[\quad+\sigma^{4}\Big{\langle}-i\big{(}\partial_{\sigma}\widehat{L _{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g} _{1},\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g} _{1},\check{A}_{1}),\] \[\qquad\qquad\qquad(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{ \sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)) ^{*}(I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*})(\check{g}_{1}^{*},\check{A}_{1}^ {*})\Big{\rangle}\] 
\[\quad-\sigma^{4}\langle-i\big{(}\partial_{\sigma}\widehat{L_{b}}( 0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\check{g}_{1}, \check{A}_{1})+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1 },\check{A}_{1}),\ (\check{L}_{b}(\sigma)^{-1})^{*}(\frac{1}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0))^{*}(\check{g}_{1}^{*},\check{A}_{1}^{*})\rangle.\] (11.20) According to the proof of the non-existence of the quadratically growing generalized zero modes with leading order term \((\check{g}_{1},\check{A}_{1})\in\mathcal{K}_{b,1}\) (see Lemma 10.7), the pairing on \(\mathcal{K}_{b,1}\times\mathcal{K}_{b,1}^{*}\) \[\langle-i\partial_{\sigma}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1})+ \frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1}), \ (\hat{g}_{2}^{*},\check{A}_{2}^{*})\rangle\] \[\quad=-\langle[L_{b},t_{b,*}](\check{g}_{1},\check{A}_{1})+\frac{1 }{2}[[L_{b},t_{b,*}],t_{b,*}](\check{g}_{1},\check{A}_{1}),\ (\hat{g}_{2}^{*},\check{A}_{2}^{*})\rangle\] is non-degenerate for \(b\) near \(b_{0}\). Therefore, we find that \[L_{11}(b,\sigma)=\sigma^{2}\widetilde{L}_{11}(b,\sigma)\quad\text{with}\quad \widetilde{L}_{11}(b,\sigma)=\widetilde{L}_{11}^{0}(b)+\sigma\widetilde{L}_{22}^{ 2}(b)+\sigma^{2}\widetilde{L}_{11}^{e}(b,\sigma)\] where \(\widetilde{L}_{11}^{0}(b)\) is invertible and \(\widetilde{L}_{11}^{e}(b,\sigma):\widetilde{\mathcal{K}}_{b,1}\to\mathcal{R}_{b,1}^ {\perp}\) is continuous in \((b,\sigma)\) in the norm operator topology. * Analysis of \(L_{12}\) and \(L_{21}\). For \((\tilde{g}_{1},\check{A}_{1})\in\widetilde{\mathcal{K}}_{b,1}\) and \((\hat{g}_{2}^{*},\check{A}_{2}^{*})\in\mathcal{K}_{b,2}^{*}\), using (11.16) and (11.19), we compute \[\langle\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}(\tilde{g}_ {1},\check{A}_{1}),\ (\hat{g}_{2}^{*},\check{A}_{2}^{*})\rangle\] \[\quad=\sigma^{2}\langle-i\partial_{\sigma}\widehat{L_{b}}(0)(\check{ g}_{1},\check{A}_{1})+\frac{1}{2}\partial_{\sigma}^ For \((\tilde{g}_{2},\tilde{A}_{2})=\check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2})\in \widetilde{\mathcal{K}}_{b,2}=\check{L}_{b}(0)\mathcal{K}_{b,2}\) and \((\dot{g}_{1}^{*},\dot{A}_{1}^{*})\in\mathcal{K}_{b,1}^{*}\), using (11.18), we calculate \[\begin{split}\langle\widehat{L_{b}}&(\sigma) \check{L}_{b}(\sigma)^{-1}(\tilde{g}_{2},\tilde{A}_{2}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\rangle\\ &=\sigma\Big{\langle}\big{(}I-V_{b}(\check{L}_{b}(0)^{-1}) \big{)}\check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2}),i(\dot{g}_{1}^{*},\ddot{A}_{1 }^{*})\Big{\rangle}\\ &\quad-\sigma^{2}\Big{\langle}\check{L}_{b}(\sigma)^{-1}\check{L }_{b}(0)(\dot{g}_{2},\dot{A}_{2}),i(\partial_{\sigma}\widehat{L_{b}}(0))^{*} \big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\check{g}_{1}^{*},\ddot{ A}_{1}^{*})\Big{\rangle}\\ &\quad\quad+\sigma^{2}\Big{\langle}\check{L}_{b}(\sigma)^{-1} \check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2}),\frac{1}{2}(\partial_{\sigma}^{2} \widehat{L_{b}}(0))^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ &\quad\quad-\sigma^{3}\Big{\langle}\check{L}_{b}(\sigma)^{-1} \check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2}),\frac{i}{2}(\partial_{\sigma} \widehat{L_{b}}(0))^{*}\big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}( \check{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ &\quad\quad-\sigma^{3}\Big{\langle}\check{L}_{b}(\sigma)^{-1} \check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2}),\frac{i}{2}(\partial_{\sigma} \widehat{L_{b}}(0))^{*}\big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}( \check{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ 
&\quad\quad+\sigma^{3}\Big{\langle}\check{L}_{b}(\sigma)^{-1}( \partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0))(\dot{g}_{2},\dot{A}_{2}),-\frac{1}{2}(\partial_{\sigma}^{2 }\widehat{L_{b}}(0))^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ &\quad\quad-\sigma^{3}\Big{\langle}\check{L}_{b}(\sigma)^{-1} \check{L}_{b}(0)(\dot{g}_{2},\dot{A}_{2}),\frac{i}{2}(\partial_{\sigma}^{2} \widehat{L_{b}}(0))^{*}\big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}( \check{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}.\end{split}\] (11.22) Therefore, \[L_{12}(b,\sigma)=\sigma^{2}\widetilde{L}_{12}(b,\sigma)\quad\text{with}\quad \widetilde{L}_{12}(b,\sigma)=\widetilde{L}_{12}^{0}(b)+\sigma\widetilde{L}_{12 }^{\epsilon}(b,\sigma)\] where \(\widetilde{L}_{12}^{\epsilon}(b,\sigma):\widetilde{\mathcal{K}}_{b,2}\to \mathcal{R}_{b,1}^{\perp}\) is continuous in \((b,\sigma)\) in the norm operator topology. * Analysis of \(L_{00}\). Following the arguments in the first step in the proof of Lemma 9.12, we can prove that for \((b,\sigma)\) near \((b,0)\), \(L_{00}(b,\sigma)\) is invertible and there exists a uniform constant \(C>0\) such that \[\|L_{00}(b,\sigma)^{-1}\|_{\mathcal{R}_{b}\to\widetilde{\mathcal{K}}_{b}^{ \perp}}\leq C.\] (11.23) Moreover, for \((\tilde{g}_{0},\tilde{A}_{0})\in\widetilde{\mathcal{K}}_{b}^{\perp}\), we write \[\begin{split} L_{00}(b,\sigma)(\tilde{g}_{0},\tilde{A}_{0})& =\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}(\tilde{g}_{0}, \tilde{A}_{0})-\big{(}L_{10}(b,\sigma)+L_{20}(b,\sigma)\big{)}(\tilde{g}_{0}, \tilde{A}_{0})\\ &=(\tilde{g}_{0},\tilde{A}_{0})-V_{b}\check{L}_{b}(\sigma)^{-1}( \tilde{g}_{0},\tilde{A}_{0})-\big{(}L_{10}(b,\sigma)+L_{20}(b,\sigma)\big{)}( \tilde{g}_{0},\tilde{A}_{0}).\end{split}\] (11.24) Since \(\check{L}_{b}(\sigma)^{-1}\) is continuous in the norm operator topology \(\mathcal{L}_{\mathrm{op}}(\check{H}_{\mathrm{b}}^{s-1+\epsilon,\ell+2+\epsilon}( \bar{X}),\check{H}_{\mathrm{b}}^{s-\epsilon,\ell-\epsilon}(\bar{X}))\) and \(V_{b}\) maps \(\mathcal{D}^{\prime}(X^{\circ})\) to \(C_{c}^{\infty}(X)\), it follows that \(L_{00}(b,\sigma):\widetilde{\mathcal{K}}_{b}^{\perp}\to\mathcal{R}_{b}\) is continuous in \((b,\sigma)\) in the norm operator topology. * The inverse of \(\widehat{L}_{b}(\sigma)\check{L}_{b}(\sigma)^{-1}\). 
Putting all the above analysis of the components \(L_{ij}(b,\sigma)\) together, we conclude that \[\begin{split}\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}= \begin{pmatrix}L_{00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b,\sigma)& \sigma\widetilde{L}_{02}(b,\sigma)\\ \sigma\widetilde{L}_{10}(b,\sigma)&\sigma^{2}\widetilde{L}_{11}(b,\sigma)& \sigma^{2}\widetilde{L}_{12}(b,\sigma)\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\widetilde{L}_{21}(b,\sigma)& \sigma\widetilde{L}_{22}(b,\sigma)\end{pmatrix}\\ =\begin{pmatrix}L_{00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b,\sigma)& \sigma\widetilde{L}_{02}(b,\sigma)\\ \sigma\big{(}\widetilde{L}_{10}^{0}(b)+\sigma\widetilde{L}_{10}^{\epsilon}(b, \sigma)\big{)}&\sigma^{2}\big{(}\widetilde{L}_{11}^{0}(b)+\sigma\widetilde{L}_{1 1}^{\epsilon}(b)+\sigma^{2}\widetilde{L}_{11}^{\epsilon}(b,\sigma)\big{)}&\sigma^{2} \big{(}\widetilde{L}_{12}^{0}(b)+\sigma\widetilde{L}_{12}^{\epsilon}(b,\sigma) \big{)}\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\big{(}\widetilde{L}_{21}^{0}(b)+ \sigma\widetilde{L}_{21}^{\epsilon}(b)\big{)}&\sigma\big{(}\widetilde{L}_{22}^{0}( b)+\sigma\widetilde{L}_{22}^{\epsilon}(b,\sigma)\big{)}\end{pmatrix}\end{split}\] (11.25) where \(L_{00}(b,\sigma),\widetilde{L}_{11}^{0}(b),\widetilde{L}_{22}^{0}(b)\) are invertible, and \(L_{00}(b,\sigma)\), \(\widetilde{L}_{ij}(b,\sigma)\) with \((i,j)=(0,1),(0,2),(2,0)\), \(\widetilde{L}_{ij}^{\epsilon}(b,\sigma)\) with \((i,j)=(1,0),(1,1),(1,2),(2,1),(2,2)\) are continuous in \((b,\sigma)\) in the norm operator topology. Solving the system \[\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}\big{(}(\tilde{g}_{0}, \tilde{A}_{0}),(\tilde{g}_{1},\tilde{A}_{1}),(\tilde{g}_{2},\tilde{A}_{2}) \big{)}=(f_{0},f_{1},f_{2})\] yields \[R(b,\sigma)=(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1})^{-1}=\begin{pmatrix} \widetilde{R}_{00}(b,\sigma)&\widetilde{R}_{01}(b,\sigma)&\widetilde{R}_{02}(b,\sigma)\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\widetilde{R}_{11}(b, \sigma)&\sigma^{-1}\widetilde{R}_{12}(b,\sigma)\\ \widetilde{R}_{20}(b,\sigma)&\sigma^{-1}\widetilde{R}_{21}(b,\sigma)&\sigma^ {-1}\widetilde{R}_{22}(b,\sigma)\end{pmatrix} \tag{11.26}\] where \[\widetilde{R}_{11}=\big{(}\widetilde{L}_{11}^{\ast}-\sigma \widetilde{L}_{12}^{\ast}(\widetilde{L}_{22}^{\ast})^{-1}\widetilde{L}_{21}^{ \ast}\big{)}^{-1},\quad\widetilde{R}_{10}=\widetilde{R}_{11}\big{(}- \widetilde{L}_{10}+\sigma\widetilde{L}_{12}^{\ast}(\widetilde{L}_{22}^{ \ast})^{-1}\widetilde{L}_{20}\big{)}L_{00}^{-1},\] \[\widetilde{R}_{12}=-\widetilde{R}_{11}\widetilde{L}_{12}^{\ast} (\widetilde{L}_{22}^{\ast})^{-1},\quad\widetilde{R}_{21}=-(\widetilde{L}_{22} ^{\ast})^{-1}\widetilde{L}_{21}^{\ast}\widetilde{R}_{11},\quad\widetilde{R}_{ 22}=(\widetilde{L}_{22}^{\ast})^{-1}-\sigma(\widetilde{L}_{22}^{\ast})^{-1} \widetilde{L}_{21}^{\ast}\widetilde{R}_{12}, \tag{11.27}\] \[\widetilde{R}_{20}=-(\widetilde{L}_{22}^{\ast})^{-1}\big{(} \widetilde{L}_{20}L_{00}^{-1}+\widetilde{L}_{21}^{\ast}\widetilde{R}_{10}\big{)},\quad\widetilde{R}_{01}=-L_{00}^{-1}\big{(}\widetilde{L}_{01}\widetilde{R}_{1 1}+\widetilde{L}_{02}\widetilde{R}_{21}\big{)},\] \[\widetilde{R}_{02}=-L_{00}^{-1}\big{(}\sigma\widetilde{L}_{01} \widetilde{R}_{12}+\widetilde{L}_{02}\widetilde{R}_{22}\big{)},\quad \widetilde{R}_{00}=L_{00}^{-1}\big{(}I-\sigma\widetilde{L}_{01}\widetilde{R}_ {10}-\sigma\widetilde{L}_{02}\widetilde{R}_{20}\big{)}\] with \[\widetilde{L}_{i1}^{\ast}=\widetilde{L}_{i1}-\sigma\widetilde{L}_{ i0}L_{00}^{-1}\widetilde{L}_{01}\quad\text{for}\quad 
i=1,2,\quad\widetilde{L}_{12}^{ \ast}=\widetilde{L}_{12}-\widetilde{L}_{10}L_{00}^{-1}\widetilde{L}_{02}, \tag{11.28}\] \[\widetilde{L}_{22}^{\ast}=\widetilde{L}_{22}-\sigma\widetilde{L}_ {20}L_{00}^{-1}\widetilde{L}_{02}.\] Since \(\check{L}_{b}(\sigma)\) is invertible for \((b,\sigma)\) near \((b_{0},0)\), it follows that \(\widehat{L_{b}}(\sigma)^{-1}=\check{L}_{b}(\sigma)^{-1}R(b,\sigma)\), which implies the invertibility of \(\widehat{L_{b}}(\sigma):\mathcal{X}_{b}^{s,\ell}\to\widetilde{H}_{\rm b}^{s-1, \ell+1}(\bar{X})\) for \((b,\sigma)\) with \(\operatorname{Im}\sigma\geq 0,\sigma\neq 0\) near \((b_{0},0)\). This finishes the proof of the theorem. We now give a more detailed description of the structure of the singular part of \(\widehat{L_{b}}(\sigma)^{-1}\). **Corollary 11.4**.: _For \((b,\sigma)\) with \(\operatorname{Im}\sigma\geq 0\) near \((b_{0},0)\), we have_ \[\widehat{L_{b}}(\sigma)^{-1}=P(b,\sigma)+L_{b}^{-}(\sigma):\widehat{H}_{\rm b}^ {s-1,\ell+2}(\bar{X};S^{2s\widehat{c}\widehat{T}^{\ast}}\bar{X}\oplus\widehat{ \widetilde{c}\widehat{T}^{\ast}}\bar{X})\to\widehat{H}_{\rm b}^{s,\ell}(\bar{X };S^{2s\widehat{c}\widehat{T}^{\ast}}\bar{X}\oplus\widehat{\widetilde{c} \widehat{T}^{\ast}}\bar{X}).\] _The regular part \(L_{b}^{-}(\sigma)\) is uniformly bounded, and is continuous in \(\sigma\) with values in_ \[\mathcal{L}_{\rm weak}(\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^ {s,\ell}(\bar{X}))\cap\mathcal{L}_{\rm op}(\bar{H}_{\rm b}^{s-1+\epsilon,\ell+2 +\epsilon}(\bar{X}),\bar{H}_{\rm b}^{s-\epsilon,\ell-\epsilon}(\bar{X}))\] _for \(\epsilon>0\)._ _The singular part \(P(b,\sigma)\) satisfies_ \[P(b,\sigma)f=(\sigma^{-2}(\dot{g}_{1},\dot{A}_{1})-i\sigma(\ddot{g}_{1},\ddot{ A}_{1}))+i\sigma^{-1}\big{(}(\dot{g}_{2},\dot{A}_{2})+(\dot{g}_{1}^{\prime}, \dot{A}_{1}^{\prime})\big{)}+i\sigma^{-1}(\dot{g}_{1}^{\prime\prime}(\sigma), \dot{A}_{1}^{\prime\prime}(\sigma)) \tag{11.29}\] _where_ \[(\dot{g}_{1},\dot{A}_{1}),\ (\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime}),\ (\dot{g}_{1}^{ \prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))\in\mathcal{K}_{b,1}, \quad(\ddot{g}_{1},\ddot{A}_{1})\in\tilde{\mathcal{K}}_{b}\] _with \((\dot{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))=o(1)\) in \(\mathcal{K}_{b,1}\) as \(\sigma\to 0\), and_ \[(\dot{g}_{2},\dot{A}_{2})\in\mathcal{K}_{b,2}.\] _Moreover, \((\dot{g}_{1},\dot{A}_{1}),(\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime})\) and \((\dot{g}_{2},\dot{A}_{2})\) are determined by the conditions_ \[-\Big{\langle}[L_{b},t_{b,\ast}](\ddot{g}_{1},\dot{A}_{1})+ \frac{1}{2}[[L_{b},t_{b,\ast}],t_{b,\ast}](\dot{g}_{1},\dot{A}_{1}),(\dot{g}^{ \ast},\dot{A}^{\ast})\Big{\rangle}\] \[\qquad+\Big{\langle}[L_{b},t_{b,\ast}](\dot{g}_{2},\dot{A}_{2}),( \dot{g}^{\ast},\dot{A}^{\ast})\Big{\rangle}=\langle f,(\dot{g}^{\ast},\dot{A}^ {\ast})\rangle\quad\text{for all}\quad(\dot{g}^{\ast},\dot{A}^{\ast})\in \mathcal{K}_{b}^{\ast}, \tag{11.30}\] \[\Big{\langle}[L_{b},t_{b,\ast}](\dot{g}_{1}^{\prime},\dot{A}_{1}^{ \prime})+\frac{1}{2}[[L_{b},t_{b,\ast}],t_{b,\ast}](\dot{g}_{1}^{\prime}, \dot{A}_{1}^{\prime}),(\dot{g}_{1}^{\ast},\dot{A}_{1}^{\ast})\Big{\rangle}=- \langle f,(\dot{g}_{1}^{\ast},\dot{A}_{1}^{\ast})\rangle\] \[\qquad-\Big{\langle}\frac{1}{2}[[L_{b},t_{b,\ast}],t_{b,\ast}]( \dot{g}_{1},\dot{A}_{1})+[L_{b},t_{b,\ast}](\dot{g}_{2}-\ddot{g}_{1},\dot{A}_{2}- \ddot{A}_{1}),(\dot{g}_{1}^{\ast},\dot{A}_{1}^{\ast})\Big{\rangle}\] \[\qquad+\Big{\langle}\frac{1}{2}[[L_{b},t_{b,\ast}],t_{b,\ast}] 
(\ddot{g}_{1}-\dot{g}_{2},\ddot{A}_{1}-\dot{A}_{2}),(\dot{g}_{1}^{\ast},\dot{A}_{1}^{\ast})\Big{\rangle}\quad\text{for all}\quad(\dot{g}_{1}^{\ast},\dot{A}_{1}^{\ast})\in\mathcal{K}_{b,1}^{\ast}. \tag{11.31}\]

Proof.: For simplicity of notation, we define \[\widetilde{\mathcal{K}}_{0}:=\widetilde{\mathcal{K}}_{b}^{\perp},\quad\widetilde{\mathcal{K}}_{1}:=\widetilde{\mathcal{K}}_{b,1},\quad\widetilde{\mathcal{K}}_{2}:=\widetilde{\mathcal{K}}_{b,2},\qquad\mathcal{R}_{0}:=\mathcal{R}_{b},\quad\mathcal{R}_{1}:=\mathcal{R}_{b,1}^{\perp},\quad\mathcal{R}_{2}:=\mathcal{R}_{b,2}^{\perp},\] i.e., the domain and target summands of the block decomposition (11.25).

First, we prove that if an operator \(A(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is invertible and continuous in \(\sigma\) (in the norm operator topology) in a small neighborhood of \(\sigma=0\), then so is \(A(\sigma)^{-1}\). We write \[A(\sigma)=A(0)+A^{e}(\sigma)=A(0)(I+A(0)^{-1}A^{e}(\sigma))\] with \[A^{e}(\sigma)=A(\sigma)-A(0)=o(1)\quad\text{in}\quad\mathcal{L}_{\text{op}}(\widetilde{\mathcal{K}}_{i},\mathcal{R}_{j})\quad\text{as}\quad\sigma\to 0.\] Since \(\|A(0)^{-1}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\widetilde{\mathcal{K}}_{i}}<1/2\) when \(\sigma\) is near \(0\), the inverse of \(A(\sigma)\) is given by the Neumann series \[A(\sigma)^{-1}=\Big{(}I+\sum_{k=1}^{\infty}(-1)^{k}\big{(}A(0)^{-1}A^{e}(\sigma)\big{)}^{k}\Big{)}A(0)^{-1},\] from which we find that \[\|A(\sigma)^{-1}-A(0)^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\leq\Big{(}\sum_{k=1}^{\infty}\|A(0)^{-1}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\widetilde{\mathcal{K}}_{i}}^{k}\Big{)}\|A(0)^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\leq 2\|A(0)^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}^{2}\|A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}=o(1)\quad\text{in}\quad\mathcal{L}_{\text{op}}(\mathcal{R}_{j},\widetilde{\mathcal{K}}_{i})\quad\text{as}\quad\sigma\to 0\] (the elementary geometric-series bound used here is recorded below, after (11.33)). This proves the continuity of \(A(\sigma)^{-1}\) in the norm operator topology in a small neighborhood of \(\sigma=0\).

Applying this result to \(L_{00}\) and \(\widetilde{L}_{22}^{\flat}\) (defined in (11.28)) yields the continuity (in the norm operator topology) of \(L_{00}^{-1}\) and \((\widetilde{L}_{22}^{\flat})^{-1}\). Then the continuity of \(\widetilde{R}_{ij}\) follows from the explicit expressions in (11.27). Furthermore, we find that \[\begin{split}\widetilde{R}_{21}(b,\sigma)&=\widetilde{R}_{21}^{0}(b)+\sigma\widetilde{R}_{21}^{e}(b,\sigma)\quad\text{with}\quad\widetilde{R}_{21}^{0}(b)=-(\widetilde{L}_{22}^{0}(b))^{-1}\widetilde{L}_{21}^{0}(b)(\widetilde{L}_{11}^{0}(b))^{-1},\\ \widetilde{R}_{22}(b,\sigma)&=\widetilde{R}_{22}^{0}(b)+\sigma\widetilde{R}_{22}^{e}(b,\sigma)\quad\text{with}\quad\widetilde{R}_{22}^{0}(b)=(\widetilde{L}_{22}^{0}(b))^{-1},\\ \widetilde{R}_{11}(b,\sigma)&=\widetilde{R}_{11}^{0}(b)+\sigma\widetilde{R}_{11}^{e}(b,\sigma)\quad\text{with}\quad\widetilde{R}_{11}^{0}(b)=(\widetilde{L}_{11}^{0}(b))^{-1}\end{split} \tag{11.32}\] where \(\widetilde{R}_{21}^{e},\widetilde{R}_{22}^{e},\widetilde{R}_{11}^{e}\) are continuous in \((b,\sigma)\) for \((b,\sigma)\) near \((b_{0},0)\) in the norm operator topology. Therefore, we rewrite \[R(b,\sigma)=R_{\text{sing}}(b,\sigma)+R_{\text{reg}}(b,\sigma)=\begin{pmatrix}0&0&0\\ \frac{\widetilde{R}_{10}(b,\sigma)}{\sigma}&\frac{\widetilde{R}_{11}^{0}(b)}{\sigma^{2}}+\frac{\widetilde{R}_{11}^{e}(b,\sigma)}{\sigma}&\frac{\widetilde{R}_{12}(b,\sigma)}{\sigma}\\ 0&\frac{\widetilde{R}_{21}^{0}(b)}{\sigma}&\frac{\widetilde{R}_{22}^{0}(b)}{\sigma}\end{pmatrix}+R_{\text{reg}}(b,\sigma) \tag{11.33}\] where all the entries in \(R_{\text{reg}}(b,\sigma)\) are continuous in \((b,\sigma)\) in the norm operator topology.
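Explicitly, the bound invoked in the Neumann-series estimate above is the routine geometric-series computation (recorded here for the reader's convenience): with \(q:=\|A(0)^{-1}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\widetilde{\mathcal{K}}_{i}}\leq\|A(0)^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\|A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}<\frac{1}{2}\), one has \[\sum_{k=1}^{\infty}q^{k}=\frac{q}{1-q}\leq 2q\leq 2\|A(0)^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\|A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}.\] The same elementary estimate recurs in the proof of Lemma 12.5 below.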
For \((\tilde{g},\tilde{A})=\tilde{L}_{b}(0)(\dot{g},\dot{A})\in\widetilde{ \mathcal{K}}_{b,1}\) with \((\dot{g},\dot{A})\in\mathcal{K}_{b,1}\), we have \[\begin{split}\tilde{L}_{b}(\sigma)^{-1}(\tilde{g},\tilde{A})& =\tilde{L}_{b}(\sigma)^{-1}\tilde{L}_{b}(0)(\dot{g},\dot{A})=(\dot{g},\dot{A})-i \sigma\tilde{L}_{b}(\sigma)^{-1}\widehat{L}_{b}(0)(\check{g},\ddot{A})-\frac{ \sigma^{2}}{2}\tilde{L}_{b}(\sigma)^{-1}\partial_{\sigma}^{2}\widehat{L}_{b} (0)(\dot{g},\dot{A})\\ &=(\dot{g},\dot{A})-i\sigma\tilde{L}_{b}(\sigma)^{-1}\big{(}\tilde {L}_{b}(\sigma)+\widehat{L}_{b}(0)-\widehat{L}_{b}(\sigma)-V_{b}\big{)}(\check {g},\dot{A})-\frac{\sigma^{2}}{2}\tilde{L}_{b}(\sigma)^{-1}\partial_{\sigma}^ {2}\widehat{L}_{b}(0)(\dot{g},\dot{A})\\ &=(\dot{g},\dot{A})-i\sigma(\check{g},\ddot{A})+\sigma^{2}\tilde{L }_{b}(\sigma)^{-1}\Big{(}i(\partial_{\sigma}\widehat{L}_{b}(0)+\frac{\sigma}{2} \partial_{\sigma}^{2}\widehat{L}_{b}(0))(\dot{g},\dot{A})-\frac{1}{2}\partial_{ \sigma}^{2}\widehat{L}_{b}(0)(\dot{g},\dot{A})\Big{)}\end{split} \tag{11.34}\] where the coefficient of \(\sigma^{2}\) is continuous in \((b,\sigma)\) in the norm operator topology. For \((\tilde{g},\tilde{A})=\tilde{L}_{b}(0)(\dot{g},\dot{A})\in\widetilde{\mathcal{ K}}_{b,2}\) with \((\dot{g},\dot{A})\in\mathcal{K}_{b,2}\), we have \[\tilde{L}_{b}(\sigma)^{-1}(\tilde{g},\tilde{A})=\tilde{L}_{b}(\sigma)^{-1} \tilde{L}_{b}(0)(\dot{g},\dot{A})=(\dot{g},\dot{A})-\sigma\tilde{L}_{b}(\sigma)^ {-1}\Big{(}\big{(}\partial_{\sigma}\widehat{L}_{b}(0)+\frac{\sigma}{2}\partial_{ \sigma}^{2}\widehat{L}_{b}(0)\big{)}(\dot{g},\dot{A})\Big{)} \tag{11.35}\] where the coefficient of \(\sigma\) is continuous in \((b,\sigma)\) in the norm operator topology. Since \(R_{\text{reg}}(b,\sigma)\) is continuous in \(\sigma\) in norm operator topology and \(\tilde{L}_{b}(\sigma)^{-1}\) is continuous in \(\sigma\) with values in \(\mathcal{L}_{\text{weak}}(\tilde{H}_{\text{b}}^{s-1,\ell+2}(\bar{X}),\tilde{H}_{ \text{b}}^{s,\ell}(\bar{X}))\cap\mathcal{L}_{\text{op}}(\tilde{H}_{\text{b}}^{s- 1+\epsilon,\ell+2+\epsilon}(\bar{X}),\tilde{H}_{\text{b}}^{s-\epsilon,\ell- \epsilon}(\bar{X}))\) for \(\epsilon>0\), it follows that \(\tilde{L}_{b}(\sigma)^{-1}R_{\text{reg}}(b,\sigma)\) is uniformly bounded and has the same continuity as \(\tilde{L}_{b}(\sigma)^{-1}\). According to (11.34) and (11.35), we conclude that \(\tilde{L}_{b}(\sigma)^{-1}R_{\text{sing}}(b,\sigma)\) is equal to a singular part \(P(b,\sigma)\) plus a uniformly bounded and continuous operator with value in \(\mathcal{L}_{\text{op}}(\tilde{H}_{\text{b}}^{s-1,\ell+2}(\bar{X}),\tilde{H}_{ \text{b}}^{s,\ell}(\bar{X}))\). Moreover, \(P(b,\sigma)\) satisfies \[P(b,\sigma)f=(\sigma^{-2}(\dot{g}_{1},\dot{A}_{1})-i\sigma^{-1}(\tilde{g}_{1}, \dot{A}_{1}))+i\sigma^{-1}\big{(}(\dot{g}_{2},\dot{A}_{2})+(\dot{g}_{1}^{ \prime},\dot{A}_{1}^{\prime})\big{)}+i\sigma^{-1}(\dot{g}_{1}^{\prime\prime}( \sigma),\dot{A}_{1}^{\prime\prime}(\sigma) where \((\dot{g}_{1},\dot{A}_{1}),(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1}),(\dot{g}^{ \prime\prime}_{1}(\sigma),\dot{A}^{\prime\prime}_{1}(\sigma))\in\mathcal{K}_{b,1}\) with \((\dot{g}^{\prime\prime}_{1}(\sigma),\dot{A}^{\prime\prime}_{1}(\sigma))=o(1)\) in \(\mathcal{K}_{b,1}\) as \(\sigma\to 0\), and \((\dot{g}_{2},\dot{A}_{2})\in\mathcal{K}_{b,2}\). It remains to prove that \((\dot{g}_{1},\dot{A}_{1}),(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1}),(\dot{g} _{2},\dot{A}_{2})\) are determined by conditions (11.30) and (11.31). 
Let \(f\in\bar{H}^{s-1,\ell+2}_{\rm b}(\bar{X})\) and \((\dot{g}(\sigma),\dot{A}(\sigma)):=\widehat{L_{b}}(\sigma)^{-1}f\), and then we have \[\begin{split}(\dot{g}(\sigma),\dot{A}(\sigma))&=( \sigma^{-2}(\dot{g}_{1},\dot{A}_{1})-i\sigma^{-1}(\ddot{g}_{1},\ddot{A}_{1}))+i \sigma^{-1}\big{(}(\dot{g}_{2},\dot{A}_{2})+(\dot{g}^{\prime}_{1},\dot{A}^{ \prime}_{1})\big{)}+i\sigma^{-1}(\dot{g}^{\prime\prime}_{1}(\sigma),\dot{A}^{ \prime\prime}_{1}(\sigma))\\ &\qquad\qquad\qquad+(\ddot{g},\ddot{\bar{A}})+(\ddot{\bar{g}}( \sigma),\ddot{\bar{A}}(\sigma))\end{split}\] where \((\ddot{\bar{g}},\ddot{\bar{A}})\in\bar{H}^{s,\ell}_{\rm b}(\bar{X})\) and \((\ddot{\bar{g}}(\sigma),\ddot{\bar{A}}(\sigma))=o(1)\) in \(\bar{H}^{s-\epsilon,\ell-\epsilon}_{\rm b}(\bar{X})\) as \(\sigma\to 0\). Applying \(\widehat{L_{b}}(\sigma)\) on both sides gives \[\begin{split} f&=\widehat{L_{b}}(\sigma)(\dot{g} (\sigma),\dot{A}(\sigma))=\widehat{L_{b}}(0)(\dot{g}(\sigma),\dot{A}(\sigma))+ \big{(}\sigma\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma^{2}}{2} \partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(\dot{g}(\sigma),\dot{A}(\sigma ))\\ &=\Big{(}\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{ g}_{1},\dot{A}_{1})-i\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_{1},\dot{A}_{1})- \widehat{L_{b}}(0)(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})+i\partial_{ \sigma}\widehat{L_{b}}(0)(\dot{g}_{2},\dot{A}_{2})+\widehat{L_{b}}(0)(\ddot{ g},\ddot{\bar{A}})\Big{)}\\ &\quad+i\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}^{\prime\prime }_{1}(\sigma),\dot{A}^{\prime\prime}_{1}(\sigma))+\widehat{L_{b}}(0)(\ddot{ g}(\sigma),\ddot{A}(\sigma))\\ &\qquad+\frac{i\sigma}{2}\Big{(}-\partial_{\sigma}^{2}\widehat{L _{b}}(0)(\dot{g}_{1},\ddot{A}_{1})+\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})+\partial_{\sigma}^{2}\widehat{L_{b }}(0)(\dot{g}_{2},\dot{A}_{2}))\Big{)}\\ &\qquad+\frac{i\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)( \dot{g}^{\prime\prime}_{1}(\sigma),\dot{A}^{\prime\prime}_{1}(\sigma)+\big{(} \sigma\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma^{2}}{2}\partial_{ \sigma}^{2}\widehat{L_{b}}(0)\big{)}(\ddot{\bar{g}}+\ddot{\bar{g}}(\sigma), \ddot{\bar{A}}+\ddot{\bar{A}}(\sigma)).\end{split} \tag{11.37}\] Letting \(\sigma\to 0\), we arrive at \[f=\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}_{1},\dot{A}_{1})- i\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_{1},\ddot{A}_{1})-\widehat{L_{b}}(0)( \dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})+i\partial_{\sigma}\widehat{L_{b}}( 0)(\dot{g}_{2},\dot{A}_{2})+\widehat{L_{b}}(0)(\ddot{\bar{g}},\ddot{\bar{A}}). \tag{11.38}\] Pairing both sides of (11.38) with \((\dot{g}^{*},\dot{A}^{*})\in\mathcal{K}_{b}^{*}\), we obtain (11.39) Since the pairing on \(\mathcal{K}_{b}\times\mathcal{K}_{b}^{*}\) on the right-hand side of (11.39) is non-degenerate, \((\dot{g}_{1},\dot{A}_{1})\) (thus \((\ddot{g}_{1},\ddot{A}_{1})\)) and \((\dot{g}_{2},\dot{A}_{2})\) are uniquely determined by (11.39). This proves the condition (11.30). Rewriting (11.38) gives rise to the following PDE \[\widehat{L_{b}}(0)\big{(}(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})-(\ddot{ \bar{g}},\ddot{\bar{A}})\big{)}=-f+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b }}(0)(\dot{g}_{1},\dot{A}_{1})-i\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_{1}, \ddot{A}_{1})+i\partial_{\sigma}\widehat{L_{b}}(0)(\dot{g}_{2},\dot{A}_{2}) \tag{11.40}\] whose right-hand side is uniquely determined by \(f\). 
Due to (11.39), the pairing of the right-hand side of (11.40) with any \((\dot{g}^{*},\dot{A}^{*})\in\mathcal{K}_{b}^{*}\) vanishes, which implies that (11.40) can be solved for \((\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})-(\ddot{\bar{g}},\ddot{\bar{A}})\), and \((\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})-(\ddot{\bar{g}},\ddot{\bar{A}})\) is uniquely determined by \(f\) modulo \(\mathcal{K}_{b}\). Namely, \[(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})-(\ddot{\bar{g}},\ddot{\bar{A}})=(\bar{\dot{g}},\bar{\dot{A}})+(\dot{g}^{\prime\prime\prime},\dot{A}^{\prime\prime\prime}) \tag{11.41}\] where \((\bar{\dot{g}},\bar{\dot{A}})\) is uniquely determined by \(f\) and \((\dot{g}^{\prime\prime\prime},\dot{A}^{\prime\prime\prime})\in\mathcal{K}_{b}\). Pairing both sides of (11.37) with \((\dot{g}^{*}_{1},\dot{A}^{*}_{1})\in\mathcal{K}_{b,1}^{*}\) and using the identity (11.39), we obtain, upon multiplying by \(\sigma^{-1}\) and letting \(\sigma\to 0\), \[\begin{split} 0&=\frac{i}{2}\Big{\langle}-\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}_{1},\dot{A}_{1})+\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}^{\prime}_{1},\dot{A}^{\prime}_{1})+\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}_{2},\dot{A}_{2})-2i\partial_{\sigma}\widehat{L_{b}}(0)(\ddot{\bar{g}},\ddot{\bar{A}}),\ (\dot{g}^{*}_{1},\dot{A}^{*}_{1})\Big{\rangle}\\ &=\frac{i}{2}\Big{\langle}-\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}_{1},\dot{A}_{1})+\partial_{\sigma}^{2}\widehat{L_{b}}(0)(\dot{g}_{2},\dot{A}_{2})+2i\partial_{\sigma}\widehat{L_{b}}(0)(\ddot{g},\ddot{\bar{A}}),\ (\dot{g}^{*}_{1},\dot{A}^{*}_{1})\Big{\rangle}\\ &\quad+\frac{i}{2}\Big{\langle}\partial_{ we arrive at \[\begin{split}\Big{\langle}&[[L_{b},t_{b,*}],t_{b,*}](\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime})+2[L_{b},t_{b,*}](\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ &=\Big{\langle}[[L_{b},t_{b,*}],t_{b,*}](\ddot{g}_{1},\ddot{A}_{1})-[[L_{b},t_{b,*}],t_{b,*}](\dot{g}_{2},\dot{A}_{2})+2[L_{b},t_{b,*}](\bar{\dot{g}},\bar{\dot{A}}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}.\end{split} \tag{11.42}\] Since the pairing on \(\mathcal{K}_{b,1}\times\mathcal{K}_{b,1}^{*}\) on the left-hand side of (11.42) is non-degenerate and the right-hand side is uniquely determined by \(f\), so is \((\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime})\). Actually, we can further calculate the right-hand side: \[\begin{split}\Big{\langle}[L_{b},t_{b,*}](\bar{\dot{g}},\bar{\dot{A}}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}&=\Big{\langle}(\bar{\dot{g}},\bar{\dot{A}}),\ \widehat{L_{b}}(0)^{*}(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}=\Big{\langle}\widehat{L_{b}}(0)(\bar{\dot{g}},\bar{\dot{A}}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\Big{\rangle}\\ &=-\langle f+\frac{1}{2}[[L_{b},t_{b,*}],t_{b,*}](\dot{g}_{1},\dot{A}_{1})+[L_{b},t_{b,*}](\dot{g}_{2}-\ddot{g}_{1},\dot{A}_{2}-\ddot{A}_{1}),(\dot{g}_{1}^{*},\dot{A}_{1}^{*})\rangle\end{split}\] where we use (11.40) and (11.41) in the third step. Plugging this into (11.42) proves the condition (11.31).

### Regularity of the resolvent of the linearized gauge-fixed Einstein-Maxwell operator

In this section, we continue using the notation from §11. In §12.1, we give a refined description of the singular part \(P(b,\sigma)\) and the regular part \(L_{b}^{-}(\sigma)\) of the operator \(\widehat{L_{b}}(\sigma)^{-1}\) defined in Corollary 11.4.
Concretely, we prove that \((\dot{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))\) defined in Corollary 11.4 is Hölder regular at \(\sigma=0\), while \(L_{b}^{-}(\sigma)\) is Hölder regular at \(\sigma=0\) only when one relaxes the target space. In §12.2, we provide bounds on all derivatives (in \(\sigma\)) of \(L_{b}^{-}(\sigma)\) for \(\sigma\) in the region \(0\leq\operatorname{Im}\sigma\leq C,|\sigma|\geq c_{0}\) for some \(c_{0},C>0\). Finally, in §12.3, we discuss the conormal regularity of \(L_{b}^{-}(\sigma)\) at \(\sigma=0\); specifically, after applying any number of derivatives \(\sigma\partial_{\sigma}\) to \(L_{b}^{-}(\sigma)\), it satisfies the same properties as \(L_{b}^{-}(\sigma)\) itself.

### Regularity at low frequencies

In §12.1.1, we study in detail the structure of \(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}\), i.e., prove the \(\epsilon\)-regularity of \(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}\). In §12.1.2, we establish the \(\epsilon\)-regularity of \(R(b,\sigma)\), which allows us to give a refined description of the singular part \(P(b,\sigma)\) of \(\widehat{L_{b}}(\sigma)^{-1}\) near \(\sigma=0\). We also discuss a certain Hölder regularity of the regular part \(L_{b}^{-}(\sigma)\) at \(\sigma=0\).

#### 12.1.1. \(\epsilon\)-regularity of \(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}\)

First, we introduce several definitions.

**Definition 12.1** ([57, Definition 12.3]).: Let \(X,Y\) be two Banach spaces and \(\alpha,\beta\in\mathbb{R}\). Suppose \(A(\sigma):X\to Y\) is a family of operators depending on the parameter \(\sigma\in\mathbb{C}\setminus\{0\}\). Then we write \[A(\sigma):|\sigma|^{\alpha}X\to|\sigma|^{\beta}Y\] if and only if there exists a constant \(C>0\) such that \(\|A(\sigma)\|_{X\to Y}\leq C|\sigma|^{\beta-\alpha}\) for all \(\sigma\in\mathbb{C}\setminus\{0\}\).

Let \(\widetilde{\mathcal{K}}_{i},\mathcal{R}_{i}\) (\(i=0,1,2\)) be defined as in §11.

**Definition 12.2** ([57, Definition 12.6]).:

1. An operator \(L(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is \(\epsilon\)_-regular at \(\sigma=0\)_ if for \(\sigma\in\mathbb{C},\operatorname{Im}\sigma\geq 0\) and \(|\sigma|\) small, there exists a constant \(C>0\) such that \[\|L(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}\leq C,\] (12.1a) \[\|\partial_{\sigma}L(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}\leq C|\sigma|^{-\epsilon},\] (12.1b) \[\|\partial_{\sigma}^{2}L(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}\leq C|\sigma|^{-\epsilon-1}.\] (12.1c)
2. An operator \(L(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) has an \(\epsilon\)_-regular expansion at \(\sigma=0\)_ in \(\sigma\) up to order one if \[L(\sigma)=L^{0}+\sigma L^{\epsilon}(\sigma)\] (12.2) where \(L^{0}:\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is independent of \(\sigma\) and \(L^{\epsilon}(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is \(\epsilon\)-regular at \(\sigma=0\).
3. An operator \(L(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) has an \(\epsilon\)_-regular expansion at \(\sigma=0\)_ in \(\sigma\) up to order two if \[L(\sigma)=L^{0}+\sigma L^{1}+\sigma^{2}L^{\epsilon}(\sigma)\] (12.3) where \(L^{0},L^{1}:\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) are independent of \(\sigma\) and \(L^{\epsilon}(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is \(\epsilon\)-regular at \(\sigma=0\).
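As a scalar model of Definition 12.2 (included for orientation only; it is not taken from [57]): for fixed \(0<\epsilon<1\), the family \(L(\sigma)=\sigma^{1-\epsilon}\) (principal branch on \(\operatorname{Im}\sigma\geq 0\)) satisfies, for \(|\sigma|\) small, \[|L(\sigma)|\leq C,\qquad|\partial_{\sigma}L(\sigma)|=(1-\epsilon)|\sigma|^{-\epsilon},\qquad|\partial_{\sigma}^{2}L(\sigma)|=\epsilon(1-\epsilon)|\sigma|^{-\epsilon-1},\] so \(L(\sigma)\) is \(\epsilon\)-regular at \(\sigma=0\); accordingly, \(L^{0}+\sigma\,\sigma^{1-\epsilon}\) has an \(\epsilon\)-regular expansion up to order one, and \(L^{0}+\sigma L^{1}+\sigma^{2}\,\sigma^{1-\epsilon}\) one up to order two.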
We recall the analysis of the entries \(L_{ij}\) of the operator in the proof of Theorem 11.3, and note that each entry is a sum of polynomials in \(\sigma\) whose coefficients are either independent of \(\sigma\), terms involving \(\check{L}_{b}(\sigma)^{-1}\) acting on an element in \(\check{H}_{\rm b}^{\infty,3/2-}(\bar{X})\), or terms involving \((\check{L}_{b}(\sigma)^{-1})^{*}\) acting on an element in \(\check{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X})\) where \(C(a,\gamma_{0})\) is a sufficiently small constant depending on \(a,\gamma\). Therefore, we first discuss the properties of the operator \(\check{L}_{b}(\sigma)^{-1}\). **Proposition 12.3**.: _Let \(-\frac{3}{2}<\ell<-\frac{1}{2}\), \(0<\epsilon<1\) with \(-\frac{1}{2}<\ell+\epsilon<\frac{1}{2}\), and \(s-\epsilon>4\). For \(\operatorname{Im}\sigma\geq 0\), \(0<|\sigma|\leq c_{0}\) with \(c_{0}\) small, we have_ \[\partial_{\sigma}\check{L}_{b}(\sigma)^{-1}:\check{H}_{\rm b}^{s- 1,\ell+2}(\bar{X})\to|\sigma|^{-\epsilon}\check{H}_{\rm b}^{s-\epsilon,\ell+ \epsilon-1}(\bar{X}), \tag{12.4}\] \[\partial_{\sigma}^{2}\check{L}_{b}(\sigma)^{-1}:\check{H}_{\rm b }^{s-1,\ell+2}(\bar{X})\to|\sigma|^{-\epsilon-1}\check{H}_{\rm b}^{s-1- \epsilon,\ell+\epsilon-1}(\bar{X}). \tag{12.5}\] Proof.: Formally, we have \[\partial_{\sigma}\check{L}_{b}(\sigma)^{-1}=-\check{L}_{b}(\sigma)^{-1}( \partial_{\sigma}\check{L}_{b}(\sigma))\check{L}_{b}(\sigma)^{-1}\] where by Proposition 4.13 \[\partial_{\sigma}\check{L}_{b}(\sigma)=\partial_{\sigma}\widehat{L}_{b}( \sigma)\in 2i\rho(\rho\partial_{\rho}-1)+\rho^{3}{\rm Diff}_{\rm b}^{1}+\rho^{2}C^{ \infty}+\sigma\rho^{2}C^{\infty}. \tag{12.6}\] Motivated by this, we first analyze \(\rho(\rho\partial_{\rho}-1)\check{L}_{b}(\sigma)^{-1}\) and \(\check{L}_{b}(\sigma)^{-1}\rho(\rho\partial_{\rho}-1)\). Since \(\rho(\rho\partial_{\rho}-1):\check{H}_{\rm b}^{s,\ell}(\bar{X})\to\bar{H}_{ \rm b}^{s-1,\ell+1}(\bar{X})\), it is clear that \[\rho(\rho\partial_{\rho}-1)\check{L}_{b}(\sigma)^{-1}:\check{H}_{\rm b}^{s-1, \ell+2}(\bar{X})\to\bar{H}_{\rm b}^{s-1,\ell+1}(\bar{X}).\] Meanwhile, in view of Proposition 4.13 \[2i\rho(\rho\partial_{\rho}-1)=\sigma^{-1}\big{(}\check{L}_{b}(\sigma)-\check {L}_{b}(0)+Q\big{)}\quad\text{with}\quad Q\in\sigma\rho^{3}{\rm Diff}_{\rm b}^ {1}+\sigma\rho^{2}C^{\infty}+\sigma\rho^{2}C^{\infty},\] then it follows that \[2i\rho(\rho\partial_{\rho}-1)\check{L}_{b}(\sigma)^{-1}=\sigma^{-1}\Big{(}I-( \widehat{L_{b}}(0)+V_{b}+Q)\check{L}_{b}(\sigma)^{-1}\Big{)}:\check{H}_{\rm b }^{s-1,\ell+2}(\bar{X})\to|\sigma|^{-1}\check{H}_{\rm b}^{s-2,\ell+2}(\bar{X}).\] Then an interpolation gives \[\rho(\rho\partial_{\rho}-1)\check{L}_{b}(\sigma)^{-1}:\check{H}_{\rm b}^{s-1, \ell+2}(\bar{X})\to|\sigma|^{-\epsilon}\check{H}_{\rm b}^{s-1-\epsilon,\ell+1+ \epsilon}(\bar{X}),\quad 0\leq\epsilon\leq 1. \tag{12.7}\] Since \(s-1-\epsilon>2\) and \(1/2<\ell+1+\epsilon<3/2\), we have \[\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{2i\rho(\rho\partial_{\rho} -1)\check{L}_{b}(\sigma)^{-1}}|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-1-\epsilon,\ell+\epsilon+1}(\bar{X})\xrightarrow{L_{b}(\sigma)^{-1}}|\sigma|^{-\epsilon} \bar{H}_{\rm b}^{s-\epsilon,\ell+\epsilon-1}(\bar{X}). 
\tag{12.8}\] We also have \[\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\hat{L}_{b}(\sigma)^{-1}} \bar{H}_{\rm b}^{s,\ell}(\bar{X})\xrightarrow{\rho^{2}{\rm Diff}_{\rm b}^{s+ \sigma\rho^{2}C^{\infty}}}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{ \hat{L}_{b}(\sigma)^{-1}}\bar{H}_{\rm b}^{s,\ell}(\bar{X})\subset|\sigma|^{- \epsilon}\bar{H}_{\rm b}^{s-\epsilon,\ell+\epsilon-1}(\bar{X}). \tag{12.9}\] By (12.6), we finishes the proof of (12.4). For \(\partial_{\sigma}^{2}\check{L}_{b}(\sigma)^{-1}\), we note that \[\partial_{\sigma}^{2}\check{L}_{b}(\sigma)^{-1} =2\check{L}_{b}(\sigma)^{-1}(\partial_{\sigma}\check{L}_{b}(\sigma) )\check{L}_{b}(\sigma)^{-1}(\partial_{\sigma}\check{L}_{b}(\sigma))\check{L}_{ b}(\sigma)^{-1}-\check{L}_{b}(\sigma)^{-1}(\partial_{\sigma}^{2}\check{L}_{b}( \sigma))\check{L}_{b}(\sigma)^{-1}\] \[:=I+II.\] Since \(\partial_{\sigma}^{2}\check{L}_{b}(\sigma)\in\rho^{2}C^{\infty}\), according to (12.9), we obtain \[II:\check{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\to\bar{H}_{\rm b}^{s,\ell}(\bar{X}) \subset|\sigma|^{-\epsilon-1}\check{H}_{\rm b}^{s-1-\epsilon,\ell+\epsilon-1}( \bar{X}).\] It remains to analyze \(I\). By the assumption on \(s,\ell,\epsilon\), we find that \(1/2<(1+\epsilon)/2<1\), and thus using the reasoning in (12.8) and (12.9), we obtain \[\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\partial_{\sigma}\check{L}_{ b}(\sigma)\check{L}_{b}(\sigma)^{-1}}|\sigma|^{-\frac{1+\epsilon}{2}}\bar{H}_{\rm b }^{s-1-\frac{1+\epsilon}{2},\ell+\frac{1+\epsilon}{2}+1}(\bar{X}) \xrightarrow{\partial_{\sigma}\check{L}_{b}(\sigma)\check{L}_{b}(\sigma)^{-1}}| \sigma|^{-1-\epsilon}\bar{H}_{\rm b}^{s-2-\epsilon,\ell+\epsilon+1}(\bar{X})\] \[\xrightarrow{L_{b}(\sigma)^{-1}}|\sigma|^{-1-\epsilon}\bar{H}_{ \rm b}^{s-1-\epsilon,\ell+\epsilon-1}(\bar{X})\] where we use \(s-\frac{3}{2}-\frac{\epsilon}{2}>2,\frac{1}{2}<\ell+\frac{3}{2}+\frac{ \epsilon}{2}<\frac{3}{2}\) in the second step and \(s-2-\epsilon>2,\frac{1}{2}<\ell+\epsilon+1<\frac{3}{2}\) in the last step. This finishes the proof of (12.5). Recall the expression (11.25) of \(\widehat{L_{b}}(\sigma)\check{L}(\sigma)^{-1}\) \[\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}=\begin{pmatrix}L_{ 00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b,\sigma)&\sigma\widetilde{L}_{02}( b,\sigma)\\ \sigma\widetilde{L}_{10}(b,\sigma)&\sigma^{2}\widetilde{L}_{11}(b,\sigma)& \sigma^{2}\widetilde{L}_{12}(b,\sigma)\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\widetilde{L}_{21}(b,\sigma)& \sigma\widetilde{L}_{22}(b,\sigma)\end{pmatrix}\] \[=\begin{pmatrix}L_{00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b, \sigma)&\sigma\widetilde{L}_{02}(b,\sigma)\\ \sigma\big{(}\widetilde{L}_{10}^{0}(b)+\sigma\widetilde{L}_{10}^{e}(b, \sigma)\big{)}&\sigma^{2}\big{(}\widetilde{L}_{11}^{0}(b)+\sigma\widetilde{L} _{11}^{1}(b)+\sigma^{2}\widetilde{L}_{11}^{e}(b,\sigma)\big{)}&\sigma^{2} \big{(}\widetilde{L}_{12}^{0}(b)+\sigma\widetilde{L}_{12}^{e}(b,\sigma)\big{)} \\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\big{(}\widetilde{L}_{21}^{0}(b)+ \sigma\widetilde{L}_{21}^{e}(b)\big{)}&\sigma\big{(}\widetilde{L}_{22}^{0}(b )+\sigma\widetilde{L}_{22}^{e}(b,\sigma)\big{)}.\end{pmatrix}\] **Proposition 12.4**.: _Let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. The entries of \(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1}\) satisfy the following properties._ 1. \(L_{00},\widetilde{L}_{01},\widetilde{L}_{02},\widetilde{L}_{20}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 2. 
\(\widetilde{L}_{10},\widetilde{L}_{12},\widetilde{L}_{21},\widetilde{L}_{22}\) _have an_ \(\epsilon\)_-regular expansion up to order one, that is,_ \(\widetilde{L}_{10}^{e},\widetilde{L}_{12}^{e},\widetilde{L}_{21}^{e},\widetilde{L}_{22}^{e}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 3. \(\widetilde{L}_{11}\) _has an_ \(\epsilon\)_-regular expansion up to order two, that is,_ \(\widetilde{L}_{11}^{e}\) _is_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._

Proof.: The key point in the proof was already mentioned before Proposition 12.3.

* Analysis of \(\widetilde{L}_{10}^{e}\) and \(\widetilde{L}_{20}\). Here, we only discuss \(\widetilde{L}_{10}^{e}\) in detail because the proof for \(\widetilde{L}_{20}\) follows in a similar manner. Recall the calculation in (11.18): \[\begin{split}\big{\langle}\widetilde{L}_{10}^{e}(\tilde{g}_{0},\tilde{A}_{0}),\ (\dot{g}_{1}^{*},\dot{A}_{1}^{*})\big{\rangle}&=-\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),i(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}\big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\Big{\rangle}\\ &\quad+\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),\frac{1}{2}(\check{L}_{b}(\sigma)^{-1})^{*}(\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\Big{\rangle}\end{split}\] where \[(\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}\big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\dot{g}_{2}^{*},\dot{A}_{2}^{*}),\quad(\partial_{\sigma}^{2}\widehat{L_{b}}(0))^{*}(\dot{g}_{2}^{*},\dot{A}_{2}^{*})\in\dot{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X}).\] Since \((\tilde{g}_{0},\tilde{A}_{0})\in\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\) and, by Proposition 12.3, \[(\check{L}_{b}^{-1}(\sigma))^{*}\dot{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X})\in\dot{H}_{\rm b}^{-3/2-C(a,\gamma),-1/2-}(\bar{X}),\] \[(\partial_{\sigma}^{j}\check{L}_{b}^{-1})^{*}\dot{H}_{\rm b}^{-5/2-C(a,\gamma),3/2-}(\bar{X})\in|\sigma|^{-\epsilon-j+1}\dot{H}_{\rm b}^{-s+1,-\ell-2}(\bar{X}),\quad j=1,2,\] where we use the facts that \(-5/2-C(a,\gamma)>-3>-s+\epsilon+1\) and \(-\ell-\epsilon+1<3/2\), the pairings above (and their derivatives up to second order) are well-defined. This proves the \(\epsilon\)-regularity of \(\widetilde{L}_{10}^{e}\).

* Analysis of \(\widetilde{L}_{11}^{e},\widetilde{L}_{12}^{e},\widetilde{L}_{21}^{e}\) and \(\widetilde{L}_{22}^{e}\). The statements for these entries can be proved in the same way as for \(\widetilde{L}_{10}^{e}\).

* Analysis of \(\widetilde{L}_{01}\) and \(\widetilde{L}_{02}\). Here, we only prove the statement for \(\widetilde{L}_{01}\) as the proof for \(\widetilde{L}_{02}\) follows in an analogous manner.
Using the calculation in (11.16), we find that \[\widetilde{L}_{01}(\tilde{g}_{1},\tilde{A}_{1})= \Big{(} -iV_{b}\check{L}_{b}(\sigma)^{-1}\big{(}\partial_{\sigma} \widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0) \big{)}(\check{g}_{1},\check{A}_{1})+\frac{1}{2}V_{b}\check{L}_{b}(\sigma)^{-1} \partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1})\Big{)}\] \[-(\widetilde{L}_{11}+\widetilde{L}_{21})(\tilde{g}_{1},\check{A}_{1 }).\] where \[\partial_{\sigma}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1}),\quad \partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1}),\quad \partial_{\sigma}^{2}\widehat{L_{b}}(0)(\check{g}_{1},\check{A}_{1})\in\ddot{H }_{\rm b}^{\infty,3/2-}(\bar{X}).\] Since by Proposition 12.4 \[\check{L}_{b}^{-1}(\sigma)\ddot{H}_{\rm b}^{\infty,3/2-}(\bar{X})\in\ddot{H}_{ \rm b}^{\infty,-1/2-}(\bar{X}),\ \partial_{\sigma}^{j}\check{L}_{b}^{-1}(\sigma)\in|\sigma|^{- \epsilon-j+1}\ddot{H}_{\rm b}^{\infty,\ell+\epsilon-1}(\bar{X}),\ j=1,2,\] and \(V_{b}\) maps \(\mathcal{D}^{\prime}(X^{\circ})\) to \(C_{c}^{\infty}(X)\), it follows that the first term on the right-hand side is \(\epsilon\)-regular. Meanwhile, it is clear that the second term is \(\epsilon\)-regular. This proves the \(\epsilon\)-regularity for \(\widetilde{L}_{01}\). * Analysis of \(L_{00}\). By the calculation in (11.24), we have \(L_{00}(b,\sigma)(\tilde{g}_{0},\tilde{A}_{0})=(\tilde{g}_{0},\tilde{A}_{0})-V_{ \tilde{b}}\tilde{L}_{b}(\sigma)^{-1}(\tilde{g}_{0},\tilde{A}_{0})-\big{(}\sigma \tilde{L}_{10}^{0}(b)+\sigma^{2}\tilde{L}_{10}^{e}(b,\sigma)+\sigma\tilde{L}_{ 20}(b,\sigma)\big{)}(\tilde{g}_{0},\tilde{A}_{0}).\) Again, it is clear that the third term on the right-hand side is \(\epsilon\)-regular. Since by Proposition 12.4 \[\tilde{L}_{b}^{-1}(\sigma)(\tilde{g}_{0},\tilde{A}_{0})\in\tilde{H}_{\rm b}^{s,\ell}(\bar{X}),\ \partial_{\sigma}^{j}\tilde{L}_{b}^{-1}(\sigma)\in| \sigma|^{-\epsilon-j+1}\tilde{H}_{\rm b}^{s-\epsilon-j+1,\ell+\epsilon-1}(\bar{ X}),\ j=1,2,\] and \(V_{b}\) maps \(\mathcal{D}^{\prime}(X^{\circ})\) to \(C_{c}^{\infty}(X)\), it follows that the second term on the right-hand side is \(\epsilon\)-regular. This finishes the proof. #### 12.1.2. Holder regularity of \(P(b,\sigma)\) and \(L_{b}^{-}(\sigma)\) at \(\sigma=0\) Now we are ready to analyze the \(\epsilon\)-regularity of \(R(b,\sigma)\). To this end, we need the following lemma. **Lemma 12.5**.: _Let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. Suppose that the operator \(A(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) with \(i,j=0,1,2\) has an \(\epsilon\)-regular expansion up to order one, i.e.,_ \[A(\sigma)=A^{0}+\sigma A^{e}(\sigma)\] _where \(A^{0}:\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is invertible and independent of \(\sigma\), and \(A^{e}(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is \(\epsilon\)-regular. 
Then for \(|\sigma|\) sufficiently small, \(A(\sigma)\) is invertible, and its inverse \(B(\sigma)=A(\sigma)^{-1}\) also has an \(\epsilon\)-regular expansion up to order one, that is,_ \[B(\sigma)=B^{0}+\sigma B^{e}(\sigma)\] _where \(B^{0}:\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}\) is independent of \(\sigma\), and \(B^{e}(\sigma):\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}\) is \(\epsilon\)-regular._

_Similarly, if \(A(\sigma)\) is \(\epsilon\)-regular and has uniformly bounded inverse \(B(\sigma)\), then \(B(\sigma)\) is \(\epsilon\)-regular as well._

Proof.: Since \(A^{e}(\sigma)\) is \(\epsilon\)-regular (and thus uniformly bounded), we have \(\|\sigma(A^{0})^{-1}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\widetilde{\mathcal{K}}_{i}}<1/2\) when \(|\sigma|\) is sufficiently small, so the inverse of \(A(\sigma)\) is given by the Neumann series \[A(\sigma)^{-1}=\Big{(}I+\sum_{k=1}^{\infty}(-1)^{k}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k}\Big{)}(A^{0})^{-1}=(A^{0})^{-1}-\sigma\Big{(}(A^{0})^{-1}A^{e}(\sigma)\sum_{k=0}^{\infty}(-1)^{k}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k}(A^{0})^{-1}\Big{)}:=B^{0}+\sigma B^{e}(\sigma).\] First, we have \[\|B^{e}(\sigma)\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\leq\Big{(}\sum_{k=0}^{\infty}\|\sigma(A^{0})^{-1}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\widetilde{\mathcal{K}}_{i}}^{k}\Big{)}\|(A^{0})^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}^{2}\|A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}\leq 2\|(A^{0})^{-1}\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}^{2}\|A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}},\] which implies the uniform boundedness of \(B^{e}(\sigma)\). Secondly, the first derivative of \(B^{e}(\sigma)\) is a sum of terms of the following types: \[\sigma^{-2}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k}(A^{0})^{-1}\lesssim 2^{-k+2},\quad k\geq 2,\] \[\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{1}}\circ\big{(}(A^{0})^{-1}\partial_{\sigma}A^{e}(\sigma)\big{)}\circ\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{2}}(A^{0})^{-1}\lesssim 2^{-k_{1}-k_{2}}|\sigma|^{-\epsilon},\] where we use \(\|\partial_{\sigma}A^{e}(\sigma)\|_{\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}}\lesssim|\sigma|^{-\epsilon}\). As a consequence, \(\|\partial_{\sigma}B^{e}(\sigma)\|_{\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}}\lesssim|\sigma|^{-\epsilon}\).
Finally, the second derivative of \(B^{e}(\sigma)\) is a sum of the terms of the types \[\sigma^{-3}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k}(A^{0 })^{-1}\lesssim 2^{-k+3},\quad k\geq 3\] \[\sigma^{-1}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{1}} \circ\big{(}(A^{0})^{-1}\partial_{\sigma}A^{e}(\sigma)\big{)}\circ\big{(} \sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{2}}(A^{0})^{-1}\lesssim 2^{-k_{1}-k_{2}+1}| \sigma|^{-\epsilon},\quad k_{1}+k_{2}\geq 1,\] \[\sigma\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{1}}\circ \big{(}(A^{0})^{-1}\partial_{\sigma}A^{e}(\sigma)\big{)}\circ\big{(}\sigma(A^{0 })^{-1}A^{e}(\sigma)\big{)}^{k_{2}}\circ\big{(}(A^{0})^{-1}\partial_{\sigma}A^{e }(\sigma)\big{)}\circ\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{3}}(A^{ 0})^{-1}\] \[\lesssim 2^{-k_{1}-k_{2}-k_{3}}|\sigma|^{-2\epsilon+1},\] \[\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k_{1}}\circ\big{(} (A^{0})^{-1}\partial_{\sigma}^{2}A^{e}(\sigma)\big{)}\circ\big{(}\sigma(A^{0})^{- 1}A^{e}(\sigma)\big{)}^{k_{2}}(A^{0})^{-1}\lesssim 2^{-k_{1}-k_{2}}|\sigma|^{-\epsilon-1},\] where we use \(\partial_{\sigma}A^{e}(\sigma)\lesssim|\sigma|^{-\epsilon}\) and \(\partial_{\sigma}^{2}A^{e}(\sigma)\lesssim|\sigma|^{-\epsilon-1}\). Therefore, \(\partial_{\sigma}^{2}B^{e}(\sigma)\lesssim|\sigma|^{-\epsilon-1}\). This proves that \(B^{e}(\sigma)\) is \(\epsilon\)-regular. The proof of the second statement is similar by using \[\partial_{\sigma}B(\sigma) =-B(\sigma)\circ\partial_{\sigma}A(\sigma)\circ B(\sigma),\] \[\partial_{\sigma}^{2}B(\sigma) =-B(\sigma)\circ\partial_{\sigma}^{2}A(\sigma)\circ B(\sigma)+2B( \sigma)\circ\partial_{\sigma}A(\sigma)\circ B(\sigma)\circ\partial_{\sigma}A (\sigma)\circ B(\sigma).\] Recall the expressions (11.26) and (11.32) of \(R_{b,\sigma}=(\widehat{L_{b}}(\sigma)\tilde{L}_{b}(\sigma)^{-1})^{-1}\) \[R(b,\sigma) =(\widehat{L_{b}}(\sigma)\tilde{L}_{b}(\sigma)^{-1})^{-1}=\begin{pmatrix} \widetilde{R}_{00}(b,\sigma)&\widetilde{R}_{01}(b,\sigma)&\widetilde{R}_{02}( b,\sigma)\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\widetilde{R}_{11}(b, \sigma)&\sigma^{-1}\widetilde{R}_{12}(b,\sigma)\\ \widetilde{R}_{20}(b,\sigma)&\sigma^{-1}\widetilde{R}_{21}(b,\sigma)&\sigma^ {-1}\widetilde{R}_{22}(b,\sigma)\end{pmatrix}\] \[=\begin{pmatrix}\widetilde{R}_{00}(b,\sigma)&\widetilde{R}_{01}( b,\sigma)&\widetilde{R}_{02}(b,\sigma)\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\big{(}\widetilde{R}_{11} ^{0}(b)+\sigma\widetilde{R}_{11}^{e}(b,\sigma)\big{)}&\sigma^{-1}\widetilde{R }_{12}(b,\sigma)\\ \widetilde{R}_{20}(b,\sigma)&\sigma^{-1}\big{(}\widetilde{R}_{02}^{0}(b)+ \sigma\widetilde{R}_{21}^{e}(b,\sigma)\big{)}&\sigma^{-1}\big{(}\widetilde{R }_{22}^{0}(b)+\sigma\widetilde{R}_{22}^{e}(b,\sigma)\big{)}\end{pmatrix}\] where \[\widetilde{R}_{11}=\big{(}\widetilde{L}_{11}^{\sharp}-\sigma \widetilde{L}_{12}^{\sharp}(\widetilde{L}_{22}^{\flat})^{-1}\widetilde{L}_{2 1}^{\sharp}\big{)}^{-1},\quad\widetilde{R}_{10}=\widetilde{R}_{11}\big{(}- \widetilde{L}_{10}+\sigma\widetilde{L}_{12}^{\sharp}(\widetilde{L}_{22}^{ \flat})^{-1}\widetilde{L}_{20}\big{)}L_{00}^{-1},\] \[\widetilde{R}_{12}=-\widetilde{R}_{11}\widetilde{L}_{12}^{ \sharp}(\widetilde{L}_{22}^{\flat})^{-1},\quad\widetilde{R}_{21}=-(\widetilde{ L}_{22}^{\flat})^{-1}\widetilde{L}_{21}^{\sharp}\widetilde{R}_{11},\quad \widetilde{R}_{22}=(\widetilde{L}_{22}^{\flat})^{-1}-\sigma(\widetilde{L}_{22} ^{\flat})^{-1}\widetilde{L}_{21}^{\sharp}\widetilde{R}_{12}, \tag{12.10}\] 
\[\widetilde{R}_{20}=-(\widetilde{L}_{22}^{\flat})^{-1}\big{(} \widetilde{L}_{20}L_{00}^{-1}+\widetilde{L}_{21}^{\sharp}\widetilde{R}_{10} \big{)},\quad\widetilde{R}_{01}=-L_{00}^{-1}\big{(}\widetilde{L}_{01} \widetilde{R}_{11}+\widetilde{L}_{02}\widetilde{R}_{21}\big{)},\] \[\widetilde{R}_{02}=-L_{00}^{-1}\big{(}\sigma\widetilde{L}_{01} \widetilde{R}_{12}+\widetilde{L}_{02}\widetilde{R}_{22}\big{)},\quad \widetilde{R}_{00}=L_{00}^{-1}\big{(}I-\sigma\widetilde{L}_{01}\widetilde{R}_{ 10}-\sigma\widetilde{L}_{02}\widetilde{R}_{20}\big{)}\] with \[\widetilde{L}_{i1}^{\sharp}=\widetilde{L}_{i1}-\sigma\widetilde{L} _{i0}L_{00}^{-1}\widetilde{L}_{01}\quad\text{for}\quad i=1,2,\quad\widetilde{L}_ {12}^{\sharp}=\widetilde{L}_{12}-\widetilde{L}_{10}L_{00}^{-1}\widetilde{L}_{02}, \tag{12.11}\] \[\widetilde{L}_{22}^{\flat}=\widetilde{L}_{22}-\sigma\widetilde{L} _{20}L_{00}^{-1}\widetilde{L}_{02}.\] **Proposition 12.6**.: _Let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. The entries of \(R(b,\sigma)\) satisfy the following properties._ 1. \(\widetilde{R}_{00},\widetilde{R}_{01},\widetilde{R}_{02},\widetilde{R}_{10}, \widetilde{R}_{12},\widetilde{R}_{20}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 2. \(\widetilde{R}_{11},\widetilde{R}_{21},\widetilde{R}_{22}\) _have an_ \(\epsilon\)_-regular expansion up to order one, that is,_ \(\widetilde{R}_{11}^{e},\widetilde{R}_{21}^{e},\widetilde{R}_{22}^{e}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ Proof.: First, since \(L_{00}\) is \(\epsilon\)-regular, by Lemma 12.5, so is \(L_{00}^{-1}\). Then by Proposition 12.4, we find that \(\widetilde{L}_{11}^{\sharp},\widetilde{L}_{21}^{\sharp},\widetilde{L}_{22}^{ \flat}\) (and thus \((\widetilde{L}_{22}^{\flat})^{-1}\) by Lemma 12.5) have an \(\epsilon\)-regular expansion up to order one, while \(\widetilde{L}_{12}^{\sharp}\) is \(\epsilon\)-regular. Therefore, we see that \[\widetilde{L}_{11}^{\sharp}-\sigma\widetilde{L}_{12}^{\sharp}(\widetilde{L}_{2 2}^{\flat})^{-1}\widetilde{L}_{21}^{\sharp}\] has an \(\epsilon\)-regular expansion up to order one whose coefficient of \(\sigma^{0}\) is \(\widetilde{L}_{11}^{0}(b)\). Then the \(\epsilon\)-regular expansion property of \(\widetilde{R}_{11}\) follows from Lemma 12.5. Similarly, the \(\epsilon\)-regular expansion property of \(\widetilde{R}_{21},\widetilde{R}_{22}\) follows from that of \(\widetilde{R}_{11},(\widetilde{L}_{22}^{\flat})^{-1},\widetilde{L}_{21}^{\sharp}\) and the above expressions of \(\widetilde{R}_{ij}\). Finally, the \(\epsilon\)-regularity of the remaining components can be obtained by using the above expressions of \(\widetilde{R}_{ij}\) and Proposition 12.4. Recall the singular part \(P(b,\sigma)\) of \(\widehat{L}_{b}(\sigma)^{-1}\) as defined in Corollary 11.4, which satisfies \[P(b,\sigma)f =(\sigma^{-2}(\hat{g}_{1},\dot{A}_{1})-i\sigma^{-1}(\check{g}_{1},\dot{A}_{1}))+i\sigma^{-1}\big{(}(\hat{g}_{2},\dot{A}_{2})+(\hat{g}_{1}^{ \prime},\dot{A}_{1}^{\prime})\big{)}+i\sigma^{-1}(\hat{g}_{1}^{\prime\prime }(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma)) \tag{12.12}\] \[:=\sigma^{-2}P^{2}(b)f+\sigma^{-1}P^{1}(b)f+\sigma^{-1}P^{e}(b, \sigma)f\] where \((\hat{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))=P^{e}(b, \sigma)f=o(1)\) in \(\mathcal{K}_{b,1}\) as \(\sigma\to 0\). 
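The algebra producing the expressions (12.10)-(12.11) (equivalently (11.27)-(11.28)) is, in essence, the standard block inversion by Schur complements; we record the model identity for the reader's convenience (a standard fact, stated for invertible \(A\) and \(S:=D-CA^{-1}B\)): \[\begin{pmatrix}A&B\\ C&D\end{pmatrix}^{-1}=\begin{pmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\ -S^{-1}CA^{-1}&S^{-1}\end{pmatrix}.\] Iterating this identity on the \(3\times 3\) block form (11.25), first eliminating the \(L_{00}\) block and then the remaining \(2\times 2\) block, yields (12.10), and the powers of \(\sigma\) in (11.25) account for the singular prefactors isolated in \(R_{\mathrm{sing}}(b,\sigma)\).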
With the \(\epsilon\)-regularity of \(R(b,\sigma)\) at our disposal, we are now able to prove that \(P^{e}(b,\sigma)\) is \(\epsilon\)-regular as well, and thus give a refined description of \(\sigma^{-1}(\dot{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))\).

_Remark 12.7_.: There is a close relationship between the Sobolev space \(\dot{H}^{k}([0,c_{0}))\), which is the restriction to \([0,c_{0})\) of elements in \(H^{k}(\mathbb{R})\) with support in \([0,\infty)\), and the b-Sobolev space \(H^{k}_{\rm b}([0,c_{0}))\), which is the completion of \(\{\phi|_{(0,c_{0})}:\phi\in C_{c}^{\infty}((0,\infty))\}\) with respect to the norm \[\|\phi\|^{2}_{H^{k}_{\rm b}([0,c_{0}))}:=\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\phi\|^{2}_{L^{2}(d\sigma)}.\] First, it is clear that \(\sigma^{k}H^{k}_{\rm b}([0,c_{0}))\subset\dot{H}^{k}([0,c_{0}))\). Second, Hardy's inequality tells us that for \(\phi\in C_{c}^{\infty}((0,\infty))\), we have \[\int_{0}^{\infty}\frac{|\phi|^{2}}{r^{2}}\,dr\leq 4\int_{0}^{\infty}|\partial_{r}\phi|^{2}\,dr.\] Applying Hardy's inequality \(k\) times, we find that \[\int_{0}^{\infty}\frac{|\phi|^{2}}{r^{2k}}\,dr\leq C_{k}\int_{0}^{\infty}|\partial_{r}^{k}\phi|^{2}\,dr\] for \(\phi\in C_{c}^{\infty}((0,\infty))\). This implies \(\dot{H}^{k}([0,c_{0}))\subset\sigma^{k}H^{k}_{\rm b}([0,c_{0}))\). Therefore, \(\dot{H}^{k}([0,c_{0}))=\sigma^{k}H^{k}_{\rm b}([0,c_{0}))\). Then an interpolation gives \(\dot{H}^{\alpha}([0,c_{0}))=\sigma^{\alpha}H^{\alpha}_{\rm b}([0,c_{0}))\) for all \(\alpha\geq 0\).

As a preparation, we record the following technical lemma.

**Lemma 12.8**.: _Suppose that \(u_{\pm}\in H^{s}(\mathbb{R}_{\pm})\) with \(s\geq 0\) and \(s\neq\frac{1}{2}+\mathbb{N}_{0}\), and let \(u(x)=u_{+}(x)\) for \(x>0\) and \(u(x)=u_{-}(x)\) for \(x<0\). If \(\partial^{j}u_{+}(0)=\partial^{j}u_{-}(0)\) for all \(j\in\mathbb{N}_{0},j<s-\frac{1}{2}\), then \(u(x)\in H^{s}(\mathbb{R})\)._

Proof.: See [86, Theorems 11.4 and 11.5].

**Theorem 12.9**.: _Let \(\sigma^{-1}(\dot{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))=\sigma^{-1}P^{e}(b,\sigma)f\in\mathcal{K}_{b,1}\) be defined as in (12.12) and let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. Then we have_ \[P^{e}(b,\sigma)\in H^{\frac{3}{2}-\epsilon-}((-c_{0},c_{0});\mathcal{L}(\widetilde{H}^{s-1,\ell+2}_{\rm b}(\bar{X}),\mathcal{K}_{b,1})) \tag{12.13}\] _where \(c_{0}>0\) is a sufficiently small constant. Moreover,_ \[\sigma^{-1}(\dot{g}_{1}^{\prime\prime}(\sigma),\dot{A}_{1}^{\prime\prime}(\sigma))\in\sum_{\pm}\mathcal{A}^{-\epsilon}\big{(}\pm[0,c_{0});\mathcal{K}_{b,1}\big{)}.
\tag{12.14}\] Proof.: Using the calculation (11.34) and (11.35) in the proof of Corollary 11.4, we find that for \(f=(f_{0},f_{1},f_{2})\in\widetilde{H}^{s-1,\ell+2}_{\rm b}(\bar{X})=\mathcal{ R}_{0}\oplus\mathcal{R}_{1}\oplus\mathcal{R}_{2}\), \[\begin{split}&\dot{L}_{b}(0)P^{e}(b,\sigma)f\\ &=\big{(}\widetilde{R}_{10}(b,\sigma)-\widetilde{R}_{10}(b,0) \big{)}f_{0}+\big{(}\widetilde{R}^{e}_{11}(b,\sigma)-\widetilde{R}^{e}_{11}(b,0)\big{)}f_{1}+\big{(}\widetilde{R}_{12}(b,\sigma)-\widetilde{R}_{12}(b,0) \big{)}f_{2}.\end{split} \tag{12.15}\] According to Proposition 12.6, we have \(\|\partial_{\sigma}\widetilde{R}^{e}_{11}(b,\sigma)\|_{\mathcal{R}_{1}\to \widetilde{\mathcal{K}}_{1}}\lesssim|\sigma|^{-\epsilon}\) and \(\|\partial_{\sigma}^{2}\widetilde{R}^{e}_{11}(b,\sigma)\|_{\mathcal{R}_{1} \to\widetilde{\mathcal{K}}_{1}}\lesssim|\sigma|^{-\epsilon-1}\) for \(0<|\sigma|\leq c_{0}\) where \(c_{0}\) is a sufficiently small constant, which implies that \[\|\partial_{\sigma}\sigma\partial_{\sigma}\widetilde{R}^{e}_{11}(b,\sigma)\|_{ \mathcal{R}_{1}\to\widetilde{\mathcal{K}}_{1}}\lesssim|\sigma|^{-\epsilon}.\] Integrating \(\partial_{\sigma}\sigma\partial_{\sigma}\widetilde{R}^{e}_{11}(b,\sigma)\) and \(\partial_{\sigma}\widetilde{R}^{e}_{11}(b,\sigma)\) from \(\sigma\in(0,c_{0})\) to \(0\) and using \(\sigma\partial_{\sigma}\widetilde{R}^{e}_{11}(\sigma)|_{b,\sigma=0}=0\) yield \[\sigma\partial_{\sigma}\widetilde{R}^{e}_{11}(b,\sigma)\in\sigma^{1-\epsilon}L ^{\infty}([0,c_{0});\mathcal{L}(\mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})) \subset\sigma^{\frac{3}{2}-\epsilon-}H^{0}_{\rm b}([0,c_{0});\mathcal{L}( \mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})), \tag{12.16}\] \[\widetilde{R}^{e}_{11}(b,\sigma)-\widetilde{R}^{e}_{11}(b,0)\in\sigma^{1- \epsilon}L^{\infty}([0,c_{0});\mathcal{L}(\mathcal{R}_{1},\widetilde{\mathcal{K}} _{1}))\subset\sigma^{\frac{3}{2}-\epsilon-}H^{0}_{\rm b}([0,c_{0});\mathcal{L}( \mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})). \tag{12.17}\] This implies \[(\widetilde{R}^{e}_{11}(\sigma)-\widetilde{R}^{e}_{11}(0))\in\sigma^{\frac{3}{2 }-\epsilon-}H^{2}_{\rm b}([0,c_{0});\mathcal{L}(\mathcal{R}_{1},\widetilde{ \mathcal{K}}_{1}))\subset\dot{H}^{\frac{3}{2}-\epsilon-}([0,c_{0});\mathcal{L}( \mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})).\] Similarly, we also have \[(\widetilde{R}^{e}_{11}(\sigma)-\widetilde{R}^{e}_{11}(0))\in\dot{H}^{\frac{3}{2 }-\epsilon-}((-c_{0},0];\mathcal{L}(\mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})).\] Since \((\widetilde{R}^{e}_{11}(\sigma)-\widetilde{R}^{e}_{11}(0))\to 0\) as \(\sigma\to 0\) from both sides, by Lemma 12.8, we obtain \[(\widetilde{R}^{e}_{11}(\sigma)-\widetilde{R}^{e}_{11}(0))\in H^{\frac{3}{2}- \epsilon-}((-c_{0},c_{0});\mathcal{L}(\mathcal{R}_{1},\widetilde{\mathcal{K}}_{1})).\] Similarly, \(\widetilde{R}_{10}(b,\sigma)-\widetilde{R}_{10}(b,0)\) and \(\widetilde{R}_{12}(b,\sigma)-\widetilde{R}_{12}(b,0)\) satisfy the same estimates as \(\widetilde{R}^{e}_{11}(b,\sigma)-\widetilde{R}^{e}_{11}(b,0)\). This proves (12.13). Moreover, the estimate in (12.17) implies \[\widetilde{R}^{e}_{11}(b,\sigma)-\widetilde{R}^{e}_{11}(b,0)\in\sum_{\pm}\sigma^{ 1-\epsilon}L^{\infty}(\pm[0,c_{0});\mathcal{L}(\mathcal{R}_{1},\widetilde{ \mathcal{K}}_{1})) \tag{12.18}\] and this also holds for \(\widetilde{R}_{10}(b,\sigma)-\widetilde{R}_{10}(b,0),\widetilde{R}_{20}(b, \sigma)-\widetilde{R}_{20}(b,0)\). Using Proposition 12.16, the estimate (12.18) is satisfied after applying any number of derivatives \(\sigma\partial_{\sigma}\). This proves (12.14). 
Now we turn to the analysis of the regular part \(L^{-}_{b}(\sigma)\). **Proposition 12.10**.: _Let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. For \(\operatorname{Im}\sigma\geq 0\) and \(0<|\sigma|\leq c_{0}\) where \(c_{0}>0\) is a small constant, the regular part \(L^{-}_{b}(\sigma)\) of the resolvent \(\widehat{L_{b}}(\sigma)^{-1}\) defined in Corollary 11.4 satisfies_ \[\partial_{\sigma}^{j}L^{-}_{b}(\sigma):\widetilde{H}^{s-1,\ell+2}_{\mathrm{b} }(\bar{X})\to|\sigma|^{-\epsilon-j+1}\bar{H}^{s-\epsilon-j+1,\ell+\epsilon+1} _{\mathrm{b}}(\bar{X}),\quad j=1,2. \tag{12.19}\] Proof.: According to Proposition 12.6 and the expression (12.15), we see that \(P(b,\sigma)\) has the form \(P(b,\sigma)=\sigma^{-2}P^{2}(b)+\sigma^{-1}P^{1}(b)+\sigma^{-1}P^{e}(b,\sigma)\) where \[P^{2}(b),P^{1}(b):\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\to\rho C^{\infty }(\bar{X})+\bar{H}^{\infty,\frac{1}{2}-}_{\mathrm{b}}(\bar{X})\] and \(P^{e}(b,\sigma):\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\to\mathcal{K}_{b,1}\) is \(\epsilon\)-regular as proved in Theorem 12.9. We also write \(R(b,\sigma)=R_{\mathrm{sing}}(b,\sigma)+R_{\mathrm{reg}}(b,\sigma)\) where \[R_{\mathrm{sing}}(b,\sigma)=\begin{pmatrix}0&0&0\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\widetilde{R}_{11}^{0}(b)+ \sigma^{-1}\widetilde{R}_{11}^{e}(b,\sigma)&\sigma^{-1}\widetilde{R}_{12}^{0} (b,\sigma)\\ 0&\sigma^{-1}\widetilde{R}_{21}^{0}(b)&\sigma^{-1}\widetilde{R}_{22}^{0}(b) \end{pmatrix}\] and \[R_{\mathrm{reg}}(b,\sigma)=\begin{pmatrix}\widetilde{R}_{00}(b,\sigma)& \widetilde{R}_{01}(b,\sigma)&\widetilde{R}_{02}(b,\sigma)\\ 0&0&0\\ \widetilde{R}_{20}(b,\sigma)&\widetilde{R}_{21}^{e}(b,\sigma)&\widetilde{R}_{2 2}^{e}(b,\sigma)\end{pmatrix}.\] Using the calculation in (11.34), (11.35) and (11.36), we find that \[L^{-}_{b}(\sigma) =\tilde{L}_{b}(\sigma)^{-1}R_{\mathrm{reg}}(b,\sigma)-\tilde{L}_ {b}(\sigma)^{-1}\Big{(}\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+\frac{ \sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(P^{1}(b)+P^{e}(b, \sigma))+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)P^{2}(b)\Big{)}\] \[:=I+II.\] For the term \(I\), we compute \[\partial_{\sigma}I =\partial_{\sigma}\tilde{L}_{b}(\sigma)^{-1}R_{\mathrm{reg}}(b, \sigma)+\tilde{L}_{b}(\sigma)^{-1}\partial_{\sigma}R_{\mathrm{reg}}(b,\sigma),\] \[\partial_{\sigma}^{2}I =\partial_{\sigma}^{2}\tilde{L}_{b}(\sigma)^{-1}R_{\mathrm{reg}}( b,\sigma)+2\partial_{\sigma}\tilde{L}_{b}(\sigma)^{-1}\partial_{\sigma}R_{ \mathrm{reg}}(b,\sigma)+\tilde{L}_{b}(\sigma)^{-1}\partial_{\sigma}^{2}R_{ \mathrm{reg}}(b,\sigma).\] Then by proposition 12.3 and 12.6, we conclude that \[\partial_{\sigma}^{j}I:\tilde{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\to|\sigma| ^{-\epsilon-j+1}\tilde{H}^{s-\epsilon-j+1,\ell+\epsilon+1}_{\mathrm{b}}(\bar{X} ),\quad j=1,2.\] As for the term \(II\), we first note that \(P^{2}(b),P^{1}(b),P^{e}(b,\sigma)\) map to \(\rho C(\bar{X})+\widetilde{H}^{\infty,1/2-}_{\mathrm{b}}(\bar{X})\), and thus \[\partial_{\sigma}^{k}\widehat{L_{b}}(0)P^{2}(b),\ \partial_{\sigma}^{k}\widehat{L_{b}}(0)P^{1}(b),\ \partial_{\sigma}^{k}\widehat{L_{b}}(0)P^{e}(b,\sigma):\tilde{H}^{s-1,\ell+2}_ {\mathrm{b}}(\bar{X})\to\tilde{H}^{\infty,\frac{3}{2}-}_{\mathrm{b}}(\bar{X}), \quad k=1,2.\] Next, we calculate \[\partial_{\sigma}II =-\partial_{\sigma}\tilde{L}_{b}(\sigma)^{-1}\Big{(}\big{(} \partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0)\big{)}(P^{1}(b)+P^{e}(b,\sigma))+\frac{1}{2}\partial_{ 
\sigma}^{2}\widehat{L_{b}}(0)P^{2}(b)\Big{)}\] \[\quad-\tilde{L}_{b}(\sigma)^{-1}\Big{(}\big{(}\partial_{\sigma} \widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0) \big{)}\partial_{\sigma}P^{e}(b,\sigma)+\frac{1}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0)P^{e}(b,\sigma)\Big{)},\] \[\partial_{\sigma}^{2}II =-\partial_{\sigma}^{2}\tilde{L}_{b}(\sigma)^{-1}\Big{(}\big{(} \partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0)\big{)}(P^{1}(b)+P^{e}(b,\sigma))+\frac{1}{2}\partial_{ \sigma}^{2}\widehat{L_{b}}(0)P^{2}(b)\Big{)}\] \[\quad-2\partial_{\sigma}\tilde{L}_{b}(\sigma)^{-1}\Big{(}\big{(} \partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2} \widehat{L_{b}}(0)\big{)}\partial_{\sigma}P^{e}(b,\sigma)+\frac{1}{2}\partial_{ \sigma}^{2}\widehat{L_{b}}(0)P^{e}(b,\sigma)\Big{)}\] \[\quad-\tilde{L}_{b}(\sigma)^{-1}\Big{(}\big{(}\partial_{\sigma} \widehat{L_{b}}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0) \big{)}\partial_{\sigma}^{2}P^{e}(b,\sigma)+\partial_{\sigma}^{2}\widehat{L_{b}} (0)\partial_{\sigma}P^{e}(b,\sigma)\Big{)}.\] Therefore, using Proposition 12.3 and the fact that \(P^{e}(b,\sigma):\bar{H}_{\rm b}^{s-1,\ell+2}\to\mathcal{K}_{b,1}\) is \(\epsilon\)-regular, we arrive at \[\partial_{\sigma}^{j}II:\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\to|\sigma|^{- \epsilon-j+1}\bar{H}_{\rm b}^{s-\epsilon-j+1,\ell+\epsilon+1}(\bar{X}),\quad j =1,2.\] This completes the proof of the estimates (12.19). **Theorem 12.11**.: _Let \(s,\ell,\epsilon\) be defined as in Proposition 12.3. Then we have_ \[L_{b}^{-}(\sigma)\in H^{\frac{3}{2}-\epsilon-}\Big{(}(-c_{0},c_{0});\mathcal{L }\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s-\max\{c,\frac{ 1}{2}\},\ell+\epsilon-1}(\bar{X})\big{)}\Big{)} \tag{12.20}\] _where \(c_{0}>0\) is a sufficiently small constant._ Proof.: Using the estimates in Proposition 12.3 and following the proof at the beginning of Theorem 12.9, we find that \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in\sigma^{\frac{3}{2}-\epsilon-}H_{\rm b}^{2} \Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{ H}_{\rm b}^{s-1-\epsilon,\ell+\epsilon-1}(\bar{X})\big{)}\Big{)}. \tag{12.21}\] Again integrating \(\partial_{\sigma}L_{b}^{-}(\sigma)\) from \(\sigma\in(0,c_{0})\) to \(0\) and using the estimate (12.19) for \(k=1\) yield \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in\sigma^{\frac{3}{2}-\epsilon-}H_{\rm b}^{1} \Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{ H}_{\rm b}^{s-\epsilon,\ell+\epsilon-1}(\bar{X})\big{)}\Big{)}. \tag{12.22}\] Interpolating between (12.21) and (12.22) gives \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in\sigma^{\frac{3}{2}-\epsilon-}H_{\rm b}^{1+ \theta}\Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X} ),\bar{H}_{\rm b}^{s-\theta-\epsilon,\ell+\epsilon-1}(\bar{X})\big{)}\Big{)}, \quad 0\leq\theta\leq 1. \tag{12.23}\] If \(\epsilon\in(0,1/2]\), we take \(\theta=1/2-\epsilon\), and thus \(3/2-\epsilon=1+\theta\). If \(\epsilon\in(1/2,1)\), we take \(\theta=0\). 
This gives \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in\begin{cases}\sigma^{\frac{3}{2}-\epsilon-}H_ {\rm b}^{\frac{3}{2}-\epsilon}\Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{\rm b }^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s-\frac{1}{2},\ell+\epsilon-1}(\bar{X })\big{)}\Big{)},&0<\epsilon\leq\frac{1}{2},\\ \sigma^{\frac{3}{2}-\epsilon-}H_{\rm b}^{1}\Big{(}[0,c_{0});\mathcal{L}\big{(} \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s-\epsilon,\ell+1}(\bar {X})\big{)}\Big{)},&\frac{1}{2}<\epsilon<1,\end{cases} \tag{12.24}\] which implies \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in\begin{cases}\dot{H}^{\frac{3}{2}-\epsilon} \Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H} _{\rm b}^{s-\frac{1}{2},\ell+\epsilon-1}(\bar{X})\big{)}\Big{)},&0<\epsilon \leq\frac{1}{2},\\ \dot{H}^{\frac{3}{2}-\epsilon-}\Big{(}[0,c_{0});\mathcal{L}\big{(}\bar{H}_{ \rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s-\epsilon,\ell+\epsilon-1}(\bar {X})\big{)}\Big{)},&\frac{1}{2}<\epsilon<1.\end{cases} \tag{12.25}\] Similarly, we also have the above statement for the other half interval \((-c_{0},0]\). Since \(L_{b}^{-}(\sigma)-L_{b}^{-}(0)\to 0\) as \(\sigma\to 0\) from both sides, by Lemma 12.8, we obtain \[L_{b}^{-}(\sigma)-L_{b}^{-}(0)\in H^{\frac{3}{2}-\epsilon-}\Big{(}(-c_{0},c_{0 });\mathcal{L}\big{(}\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X}),\bar{H}_{\rm b}^{s- \max\{\epsilon,\frac{1}{2}\},\ell+\epsilon-1}(\bar{X})\big{)}\Big{)},\] and thus the conclusion (12.20). ### Regularity for frequencies away from \(0\) Now we discuss the regularity of \(\widehat{L_{b}}(\sigma)^{-1}\) for \(|\sigma|\geq c_{0},\ 0\leq\operatorname{Im}\sigma\leq C\) for some \(c_{0}\), \(C>0\). **Theorem 12.12**.: _Let \(\ell<-\frac{1}{2},s>3+m\) and \(s+\ell>-\frac{1}{2}+m\) where \(m\in\mathbb{N}_{0}\). Then for \(0\leq\operatorname{Im}\sigma\leq C,|\sigma|\geq c_{0}\) with \(c_{0},C>0\), the operator_ \[\partial_{\sigma}^{m}\widehat{L_{b}}(\sigma)^{-1}:\bar{H}_{\rm b}^{s,\ell+1}( \bar{X})\to h^{-m}\bar{H}_{\rm b}^{s-m,\ell}(\bar{X}) \tag{12.26}\] _where \(h=|\sigma|^{-1}\), is uniformly bounded._ Proof.: The estimate (12.26) for the case \(m=0\) follows from high energy estimate 5.24. For \(m\geq 1\), \(\partial_{\sigma}^{m}\widehat{L_{b}}(\sigma)^{-1}\) is a linear combination of the operators of the type \[\Big{(}\widehat{L_{b}}(\sigma)^{-1}\circ\partial_{\sigma}^{m}\widehat{L_{b}}( \sigma)\Big{)}\circ\cdots\circ\Big{(}\widehat{L_{b}}(\sigma)^{-1}\circ\partial_ {\sigma}^{m_{k}}\widehat{L_{b}}(\sigma)\Big{)}\circ\widehat{L_{b}}(\sigma)^{-1}\] where \(1\leq m_{1},\cdots,m_{k}\leq 2\) and \(m_{1}+\cdots+m_{k}=m\). 
Since \[\partial_{\sigma}\widehat{L_{b}}(\sigma)\in\rho\mathrm{Diff}_{\rm b}^{1}+\rho^{2 }C^{\infty}\subset h^{-1}\rho\mathrm{Diff}_{\rm b}^{1},\quad\text{and}\quad \partial_{\sigma}^{2}\widehat{L_{b}}(\sigma)\in\rho^{2}C^{\infty},\] it follows that \[\bar{H}^{s,\ell+1}_{\mathrm{b},h}\xrightarrow{\widehat{L}_{b}( \sigma)^{-1}}\bar{H}^{s,\ell}_{\mathrm{b},h}\xrightarrow{\partial_{\sigma}^{n_{ k}}\widehat{L}_{b}(\sigma)}h^{m_{k}-2}\bar{H}^{s+m_{k}-2,\ell+m_{k}}_{\mathrm{b},h}\subset h ^{m_{k}-2}\bar{H}^{s+m_{k}-2,\ell+1}_{\mathrm{b},h}\] \[\xrightarrow{\widehat{L}_{b}^{-}(\sigma)^{-1}}h^{m_{k}-2}\bar{H }^{s+m_{k}-2,\ell}_{\mathrm{b},h}\xrightarrow{\cdots}\cdots\xrightarrow{ \cdots}h^{m-m_{1}-2(k-1)}\bar{H}^{s+m-m_{1}-2(k-1),\ell}_{\mathrm{b},h}\] \[\xrightarrow{\partial_{\sigma}^{m_{1}}\widehat{L}_{b}(0)}h^{m-2 k}\bar{H}^{s+m-2k,\ell+m_{1}}_{\mathrm{b},h}\xrightarrow{\widehat{L}_{b}( \sigma)^{-1}}h^{m-2k}\bar{H}^{s+m-2k,\ell}_{\mathrm{b},h}.\] Since \(1\leq k\leq m\), we have \(h^{m-2k}\bar{H}^{s+m-2k,\ell+m-k}_{\mathrm{b},h}\subset h^{-m}\bar{H}^{s-m,\ell}_ {\mathrm{b},h}\). This finishes the proof. ### Conormal regularity Finally, we consider the conormal regularity of \(L_{b}^{-}(\sigma)\) at \(\sigma=0\). Specifically, after applying any number of derivatives \(\sigma\partial_{\sigma}\) to \(L_{b}^{-}(\sigma)\), it satisfies the same properties as \(L_{b}^{-}(\sigma)\). Recall that \[L_{b}^{-}(\sigma)=\check{L}_{b}(\sigma)^{-1}R_{\mathrm{reg}}(b, \sigma)-\check{L}_{b}(\sigma)^{-1}\Big{(}\big{(}\partial_{\sigma}\widehat{L}_ {b}(0)+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L}_{b}(0)\big{)}(P^{1}(b) +P^{e}(b,\sigma))+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L}_{b}(0)P^{2}(b) \Big{)},\] so in order to study the conormal regularity, we need to analyze that of \(\check{L}_{b}(\sigma)^{-1}\), \(R_{\mathrm{reg}}(b,\sigma)\) and \(P^{e}(b,\sigma)\). We first discuss the conormal regularity of \(\check{L}_{b}(\sigma)^{-1}\) and prove a analogous result of Proposition 12.3 for \((\sigma\partial_{\sigma})^{m}\check{L}_{b}(\sigma)^{-1}\). **Proposition 12.13**.: _Let \(-\frac{3}{2}<\ell<-\frac{1}{2}\), \(0<\epsilon<1\) with \(-\frac{1}{2}<\ell+\epsilon<\frac{1}{2}\), and \(s-\epsilon>4+m\) where \(m\in\mathbb{N}\). Define_ \[\check{\mathcal{L}}_{b}^{(m)}(\sigma):=(\sigma\partial_{\sigma})^{m}\check{L} _{b}(\sigma)^{-1}.\] _For \(0<|\sigma|\leq c_{0},\mathrm{Im}\,\sigma\geq 0\) with \(c_{0}\) small, we have_ \[\check{\mathcal{L}}_{b}^{(m)}(\sigma):\bar{H}^{s-1,\ell+2}_{ \mathrm{b}}(\bar{X})\to\bar{H}^{s-m,\ell}_{\mathrm{b}}(\bar{X}), \tag{12.27}\] \[\partial_{\sigma}\check{\mathcal{L}}_{b}^{(m)}(\sigma):\bar{H}^{s- 1,\ell+2}_{\mathrm{b}}(\bar{X})\to|\sigma|^{-\epsilon}\bar{H}^{s-m-\epsilon, \ell+\epsilon-1}_{\mathrm{b}}(\bar{X}),\] (12.28) \[\partial_{\sigma}^{2}\check{\mathcal{L}}_{b}^{(m)}(\sigma):\bar{H }^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\to|\sigma|^{-\epsilon-1}\bar{H}^{s-m-1- \epsilon,\ell+\epsilon-1}_{\mathrm{b}}(\bar{X}). \tag{12.29}\] Proof.: Since \[\partial_{\sigma}^{2}\check{\mathcal{L}}_{b}^{(m)}(\sigma)=\sigma^{-1} \partial_{\sigma}(\sigma\partial_{\sigma}-1)\check{\mathcal{L}}_{b}^{(m)}( \sigma)=\sigma^{-1}\partial_{\sigma}\check{\mathcal{L}}_{b}^{(m+1)}(\sigma)- \sigma^{-1}\partial_{\sigma}\check{\mathcal{L}}_{b}^{(m)}(\sigma),\] it suffices to prove the statements (12.27) and (12.28), and then the conclusion (12.29) follows directly. Now we prove (12.27) and (12.28) by induction on \(m\). 
For \(m=1\), we calculate \[\check{\mathcal{L}}_{b}^{(1)}=\sigma\partial_{\sigma}\check{L}_{b}(\sigma)^{-1} =-\check{L}_{b}(\sigma)^{-1}(\sigma\partial_{\sigma}\check{L}_{b}(\sigma)) \check{L}_{b}(\sigma)^{-1}\] where \[\sigma\partial_{\sigma}\check{L}_{b}(\sigma)=\check{L}_{b}(\sigma)-\check{L}_{ b}(0)+\frac{\sigma^{2}}{2}\partial_{\sigma}^{2}\check{L}_{b}(0)\in\check{L}_{b}( \sigma)+\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{2}+\sigma^{2}\rho^{2}C^{\infty}. \tag{12.30}\] As a consequence, \[\check{\mathcal{L}}_{b}^{(1)}=-\check{L}_{b}(\sigma)^{-1}+\check{L}_{b}(\sigma )^{-1}\big{(}\rho^{2}\mathrm{Diff}_{\mathrm{b}}^{2}+\sigma^{2}\rho^{2}C^{\infty} \big{)}\check{L}_{b}(\sigma)^{-1}\] We then compute \[\bar{H}^{s-1,\ell+2}_{\mathrm{b}}(\bar{X})\xrightarrow{\check{L}_{b}(\sigma)^{ -1}}\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\xrightarrow{\rho^{2}\mathrm{Diff}_{ \mathrm{b}}^{2}+\sigma^{2}\rho^{2}C^{\infty}}\widehat{H}^{s-2,\ell+2}_{\mathrm{ b}}(\bar{X})\xrightarrow{\check{L}_{b}(\sigma)^{-1}}\bar{H}^{s-1,\ell}_{ \mathrm{b}}(\bar{X}).\] This proves (12.27) for \(m=1\). As for the first derivative \(\partial_{\sigma}\check{\mathcal{L}}_{b}^{(1)}(\sigma)\), we calculate \[\partial_{\sigma}\check{\mathcal{L}}_{b}^{(1)}(\sigma)=- \partial_{\sigma}\check{L}_{b}(\sigma)^{-1}+\partial_{\sigma}\check{L}_{b}( \sigma)^{-1}\big{(}\rho^{2}\mathrm{Diff}_{b}^{2}+\sigma^{2}\rho^{2}C^{\infty} \big{)}\check{L}_{b}(\sigma)^{-1}+\check{L}_{b}(\sigma)^{-1}\big{(}\sigma\rho^{2}C^{ \infty}\big{)}\check{L}_{b}(\sigma)^{-1}\] \[+\check{L}_{b}(\sigma)^{-1}\big{(}\rho^{2}\mathrm{Diff}_{b}^{2}+ \sigma^{2}\rho^{2}C^{\infty}\big{)}\partial_{\sigma}\check{L}_{b}(\sigma)^{-1}:=I+II+ III+IV.\] Using Proposition 12.3, we find that \[I: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\to|\sigma|^{-\epsilon}\bar{H} _{\rm b}^{s-\epsilon,\ell+\epsilon+1}(\bar{X}),\] \[II: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\bar{L}_{b}( \sigma)^{-1}}\bar{H}_{\rm b}^{s,\ell}(\bar{X})\xrightarrow{\rho^{2}\text{Diff }_{\rm b}^{2}+\sigma^{2}\rho^{2}C^{\infty}}\bar{H}_{\rm b}^{s-2,\ell+2}(\bar{X}) \xrightarrow{\partial_{\sigma}\bar{L}_{b}(\sigma)^{-1}}|\sigma|^{-\epsilon} \bar{H}_{\rm b}^{s-1-\epsilon,\ell+\epsilon-1}(\bar{X}),\] \[III: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\bar{L}_{b}( \sigma)^{-1}}\bar{H}_{\rm b}^{s,\ell}(\bar{X})\xrightarrow{\sigma\rho^{2}C^{ \infty}}\bar{H}_{\rm b}^{s,\ell+2}(\bar{X})\xrightarrow{\bar{L}_{b}(\sigma)^{ -1}}\bar{H}_{\rm b}^{s-1,\ell}(\bar{X})\subset|\sigma|^{-\epsilon}\bar{H}_{\rm b }^{s-1-\epsilon,\ell+\epsilon-1}(\bar{X}),\] \[IV: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\partial_{ \sigma}\bar{L}_{b}(\sigma)^{-1}}|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-\epsilon,\ell+\epsilon-1}(\bar{X})\xrightarrow{\rho^{2}\text{Diff}_{\rm b}^{2}+\sigma^ {2}\rho^{2}C^{\infty}}|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-\epsilon-2,\ell+ \epsilon+1}(\bar{X})\] \[\xrightarrow{\bar{L}_{b}(\sigma)^{-1}}|\sigma|^{-\epsilon}\bar{H }_{\rm b}^{s-1-\epsilon,\ell+\epsilon-1}(\bar{X})\] where we use \(1/2<\ell+\epsilon+1<3/2\) in the last step of the analysis of \(IV\). This proves (12.28) for \(m=1\). Suppose that (12.27) and (12.28) hold for \(k\leq m\). 
Then for \(m+1\), by a direct calculation, we obtain that \(\tilde{\mathcal{L}}_{b}^{(m+1)}(\sigma)\) is a linear combination of the operators of the type \[L_{1}=\bar{\mathcal{L}}_{b}^{(m)}(\sigma),\quad L_{2}:=\tilde{\mathcal{L}}_{b} ^{(m_{1})}(\sigma)\circ(\rho^{2}\text{Diff}_{\rm b}^{2})\circ\tilde{\mathcal{ L}}_{b}^{(m_{2})}(\sigma)\] where \(m_{1},m_{2}\geq 0\) and \(m_{1}+m_{2}\leq m\). Then by the induction hypothesis on \(k\leq m\), it follows that \[L_{1}: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\tilde{\mathcal{ L}}_{b}^{(m)}(\sigma)}\bar{H}_{\rm b}^{s-m,\ell}(\bar{X})\subset\bar{H}_{\rm b }^{s-m-1,\ell}(\bar{X}),\] \[L_{2}: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\tilde{ \mathcal{L}}_{b}^{(m_{1})}(\sigma)}\bar{H}_{\rm b}^{s-m_{1},\ell}(\bar{X}) \xrightarrow{\rho^{2}\text{Diff}_{\rm b}^{2}}\bar{H}_{\rm b}^{s-m_{1}-2,\ell+ 2}(\bar{X})\] \[\xrightarrow{\tilde{\mathcal{L}}_{b}^{(m_{1})}(\sigma)}\bar{H}_{ \rm b}^{s-m_{1}-m_{2}-1,\ell}(\bar{X})=\bar{H}_{\rm b}^{s-m-1,\ell}(\bar{X}).\] This proves (12.27) for \(m+1\). As for the first derivative \(\partial_{\sigma}\tilde{\mathcal{L}}_{b}^{(m+1)}\), it is a linear combination of the operators of the following types \[D_{1}=\partial_{\sigma}\tilde{\mathcal{L}}_{b}^{(m)}(\sigma),\quad D_{2}:= \partial_{\sigma}\tilde{\mathcal{L}}_{b}^{(m_{1})}(\sigma)\circ(\rho^{2} \text{Diff}_{\rm b}^{2})\circ\tilde{\mathcal{L}}_{b}^{(m_{2})}(\sigma),\quad D_{ 3}:=\tilde{\mathcal{L}}_{b}^{(m_{1})}(\sigma)\circ(\rho^{2}\text{Diff}_{\rm b }^{2})\circ\partial_{\sigma}\tilde{\mathcal{L}}_{b}^{(m_{2})}(\sigma)\] where \(m_{1},m_{2}\geq 0\) and \(m_{1}+m_{2}\leq m\). Then we have \[D_{1}: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\partial_{ \sigma}\tilde{\mathcal{L}}_{b}^{(m)}(\sigma)}|\sigma|^{-\epsilon}\bar{H}_{\rm b }^{s-m-\epsilon,\ell+\epsilon-1}(\bar{X})\subset|\sigma|^{-\epsilon}\bar{H}_{ \rm b}^{s-m-1\epsilon,\ell+\epsilon-1}(\bar{X}),\] \[D_{2}: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\tilde{\mathcal{ L}}_{b}^{(m_{2})}(\sigma)}\bar{H}_{\rm b}^{s-m_{2},\ell}(\bar{X})\xrightarrow{\rho^{2} \text{Diff}_{\rm b}^{2}}\bar{H}_{\rm b}^{s-m_{2}-2,\ell+2}(\bar{X})\] \[\xrightarrow{\partial_{\sigma}\tilde{\mathcal{L}}_{b}^{(m_{1})}( \sigma)}|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-m_{1}-1-m_{2}-\epsilon,\ell+ \epsilon-1}(\bar{X})\subset|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-m-1-\epsilon, \ell+\epsilon-1}(\bar{X}),\] \[D_{3}: \bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\xrightarrow{\partial_{ \sigma}\tilde{\mathcal{L}}_{b}^{(m_{2})}(\sigma)}|\sigma|^{-\epsilon}\bar{H}_{ \rm b}^{s-m_{2}-\epsilon,\ell+\epsilon-1}(\bar{X})\xrightarrow{\rho^{2}\text{ Diff}_{\rm b}^{2}}|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-m_{2}- \epsilon-2,\ell+\epsilon+1}(\bar{X})\] \[\xrightarrow{\tilde{\mathcal{L}}_{b}^{(m_{1})}(\sigma)}|\sigma|^{- \epsilon}\bar{H}_{\rm b}^{s-m_{1}-m_{2}-\epsilon-1,\ell+\epsilon-1}(\bar{X}) \subset|\sigma|^{-\epsilon}\bar{H}_{\rm b}^{s-m-1-\epsilon,\ell+\epsilon-1}(\bar{X})\] where we use \(1/2<\ell+\epsilon+1<3/2\) in the third step of the analysis of \(D_{3}\). This proves (12.28) for \(m+1\). This finishes the proof of the proposition. With the conormal regularity of \(\tilde{L}_{b}(\sigma)^{-1}\) with our disposal, we are ready to discuss that of the entries \(\widetilde{L}_{ij}\) of the operator \(\widehat{L_{b}}(\sigma)\tilde{L}_{b}(\sigma)^{-1}\). 
Recall the expression (11.25) of \(\widehat{L_{b}}(\sigma)\tilde{L}(\sigma)^{-1}\) \[\widehat{L_{b}}(\sigma)\tilde{L}_{b}(\sigma)^{-1}=\begin{pmatrix}L_{00}(b, \sigma)&\sigma^{2}\widetilde{L}_{01}(b,\sigma)&\sigma\widetilde{L}_{02}(b, \sigma)\\ \sigma\widetilde{L}_{10}(b,\sigma)&\sigma^{2}\widetilde{L}_{11}(b,\sigma)& \sigma^{2}\widetilde{L}_{12}(b,\sigma)\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\widetilde{L}_{21}(b,\sigma)& \sigma\widetilde{L}_{22}(b,\sigma)\end{pmatrix}\] \[=\begin{pmatrix}L_{00}(b,\sigma)&\sigma^{2}\widetilde{L}_{01}(b, \sigma)&\sigma\widetilde{L}_{02}(b,\sigma)\\ \sigma\big{(}\widetilde{L}_{10}^{2}(b)+\sigma\widetilde{L}_{10}^{2}(b,\sigma) \big{)}&\sigma^{2}\big{(}\widetilde{L}_{11}^{0}(b)+\sigma\widetilde{L}_{11}^{ 1}(b)+\sigma^{2}\widetilde{L}_{11}^{\epsilon}(b,\sigma)\big{)}&\sigma^{2} \big{(}\widetilde{L}_{12}^{0}(b)+\sigma\widetilde{L}_{12}^{\epsilon}(b, \sigma)\big{)}\\ \sigma\widetilde{L}_{20}(b,\sigma)&\sigma^{2}\big{(}\widetilde{L}_{21}^{0}(b)+ \sigma\widetilde{L}_{21}^{\epsilon}(b)\big{)}&\sigma\big{( 1. \((\sigma\partial_{\sigma})^{m}L_{00},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{01},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{02},(\sigma\partial_{\sigma})^{m} \widetilde{L}_{20}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 2. \((\sigma\partial_{\sigma})^{m}\widetilde{L}_{10},(\sigma\partial_{\sigma})^{m} \widetilde{L}_{12},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{21},(\sigma \partial_{\sigma})^{m}\widetilde{L}_{22}\) _have an_ \(\epsilon\)_-regular expansion up to order one, that is,_ \((\sigma\partial_{\sigma})^{m}\widetilde{L}_{10}^{e},(\sigma\partial_{\sigma})^ {m}\widetilde{L}_{12}^{e},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{21}^{e},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{22}^{e}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 3. \((\sigma\partial_{\sigma})^{m}\widetilde{L}_{11}\) _has an_ \(\epsilon\)_-regular expansion up to order two, that is,_ \((\sigma\partial_{\sigma})^{m}\widetilde{L}_{11}^{e}\) _is_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ Proof.: The proof is analogous to that of Proposition 12.4, now by using Proposition 12.13 instead of Proposition 12.3. Here, we only demonstrate the proof of \((\sigma\partial_{\sigma})^{m}\widetilde{L}_{10}\) in detail because the proof of the other entries follows in a similar manner. 
Recall the calculation in (11.18) \[(\sigma\partial_{\sigma})^{m}\widetilde{L}_{10}^{e}(\tilde{g}_{ 0},\tilde{A}_{0}),\ (\hat{g}_{1}^{*},\hat{A}_{1}^{*}))\] \[\quad+\Big{\langle}(\tilde{g}_{0},\tilde{A}_{0}),\frac{1}{2}( \tilde{\mathcal{L}}_{b}^{(m)}(\sigma))^{*}(\partial_{\sigma}^{2}\widehat{L}_ {b}(0))^{*}(\hat{g}_{2}^{*},\hat{A}_{2}^{*})\Big{\rangle}\] where \[\Big{(}(\sigma\partial_{\sigma})^{j}\big{(}\partial_{\sigma}\widehat{L}_{b}(0 )+\frac{\sigma}{2}\partial_{\sigma}^{2}\widehat{L}_{b}(0)\big{)}\Big{)}^{*} \big{(}I-(\check{L}_{b}(0)^{-1})^{*}V_{b}^{*}\big{)}(\check{g}_{2}^{*},\hat{A} _{2}^{*}),\quad(\partial_{\sigma}^{2}\widehat{L}_{b}(0))^{*}(\hat{g}_{2}^{*}, \hat{A}_{2}^{*})\in\hat{H}_{\rm b}^{-5/2-C(a,\gamma),\,3/2-}(\bar{X}).\] Since \((\tilde{g}_{0},\tilde{A}_{0})\in\hat{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\) with \(s-m-\epsilon>4\), and by Proposition 12.13 \[(\hat{\mathcal{L}}_{b}^{(k)}(\sigma))^{*}\hat{H}_{\rm b}^{-5/2-C(a,\gamma),\,3 /2-}(\bar{X})\in\hat{H}_{\rm b}^{-3/2-k-C(a,\gamma),-1/2-}(\bar{X}),\quad 0 \leq k\leq m,\ k\in\mathbb{N}_{0},\] \[(\partial_{\sigma}^{j}\hat{\mathcal{L}}_{b}^{(k)}(\sigma))^{*}\hat{H}_{\rm b}^ {-5/2-C(a,\gamma),\,3/2-}(\bar{X})\in|\sigma|^{-\epsilon-j+1}\hat{H}_{\rm b}^{ -s+1,-\ell-2}(\bar{X}),\quad j=1,2\quad\text{and}\quad 0\leq k\leq m,\ k\in \mathbb{N}_{0}\] where we use the facts that \(-5/2-C(a,\gamma)>-3>-s+k+\epsilon+1\) for \(0\leq k\leq m\) and \(-\ell-\epsilon+1<3/2\), the pairings above (and their up to second order derivatives) are well-defined. This proves the \(\epsilon\)-regularity of \((\sigma\partial_{\sigma})\widetilde{L}_{10}^{e}\). Next, we turn to discussing the conormal regularity of \(R(b,\sigma)\). To this end, we need the following lemma, which is an analogy of Lemma 12.5. **Lemma 12.15**.: _Let \(s,\ell,\epsilon,m\) be defined as in Proposition 12.13. Suppose that the operator \(A(\sigma):\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) with \(i,j=0,1,2\) has an \(\epsilon\)-regular expansion up to order one, i.e.,_ \[A(\sigma)=A^{0}+\sigma A^{e}(\sigma)\] _where \(A^{0}:\widetilde{\mathcal{K}}_{i}\to\mathcal{R}_{j}\) is invertible and independent of \(\sigma\), and \((\sigma\partial_{\sigma})^{k}A^{e}(\sigma):\widetilde{\mathcal{K}}_{i}\to \mathcal{R}_{j}\) is \(\epsilon\)-regular for any \(0\leq k\leq m,k\in\mathbb{N}_{0}\). 
Then for \(|\sigma|\) sufficiently small, \(A(\sigma)\) is invertible with inverse \(B(\sigma)=A(\sigma)^{-1}\), and \((\sigma\partial_{\sigma})^{k}B(\sigma)\) has an \(\epsilon\)-regular expansion up to order one, that is,_ \[(\sigma\partial_{\sigma})^{k}B(\sigma)=B_{k}^{0}+\sigma B_{k}^{e}(\sigma),\quad 0 \leq k\leq m,\ k\in\mathbb{N}_{0}\] _where \(B_{k}^{0}:\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}\) is independent of \(\sigma\), and \(B_{k}^{e}(\sigma):\mathcal{R}_{j}\to\widetilde{\mathcal{K}}_{i}\) is \(\epsilon\)-regular._ _Similarly, if \(A(\sigma)\) has uniformly bounded inverse \(B(\sigma)\), and \((\sigma\partial_{\sigma})^{k}A(\sigma)\) is \(\epsilon\)-regular for any \(0\leq k\leq m,k\in\mathbb{N}_{0}\), then \((\sigma\partial_{\sigma})^{k}B(\sigma)\) is \(\epsilon\)-regular as well._ Proof.: As established in the proof of Lemma 12.5, we have \[A(\sigma)^{-1} =\Big{(}I+\sum_{k=1}^{\infty}(-1)^{k}\big{(}\sigma(A^{0})^{-1}A^{e }(\sigma)\big{)}^{k}\Big{)}(A^{0})^{-1}\] \[=(A^{0})^{-1}-\sigma\Big{(}(A^{0})^{-1}A^{e}(\sigma)\sum_{k=0}^{ \infty}(-1)^{k}\big{(}\sigma(A^{0})^{-1}A^{e}(\sigma)\big{)}^{k}(A^{0})^{-1} \Big{)}:=B^{0}+\sigma B^{e}(\sigma).\] For \(1\leq k\leq m\), we have \[(\sigma\partial_{\sigma})^{k}B(\sigma)=\sigma\sum_{j=0}^{k}\binom{k}{j}(\sigma \partial_{\sigma})^{j}B^{e}(\sigma).\] A direct calculation implies that \((\sigma\partial_{\sigma})^{j}B^{e}(\sigma)\) is a sum of the operators of the type \(\sigma^{-1}L\circ(A^{0})^{-1}\), where \(L\) is a composition of the terms of the form \(\sigma(A^{0})^{-1}(\sigma\partial_{\sigma})^{l}A^{e}(\sigma)\) with \(0\leq l\leq j\). Then proceeding as in the proof of Lemma 12.5, we conclude that \((\sigma\partial_{\sigma})^{j}B^{e}(\sigma)\) with \(0\leq j\leq k\) is \(\epsilon\)-regular, and thus \((\sigma\partial_{\sigma})^{k}B^{e}(\sigma)\) with \(1\leq k\leq m\) has an \(\epsilon\)-regular expansion up to order one. Since \((\sigma\partial_{\sigma})^{m}B(\sigma)\) is a linear combination of the operators of the form \[(\sigma\partial_{\sigma})^{m_{1}}B(\sigma)\circ(\sigma\partial_{\sigma})^{m_{ 2}}A(\sigma)\circ(\sigma\partial_{\sigma})^{m_{3}}B(\sigma)\] where \(1\leq m_{2}\leq m,0\leq m_{1},m_{2}\leq m-1\) and \(m_{1}+m_{2}+m_{3}=m\), it follows that the second statement follows by an induction argument on \(m\). 
Recall the expressions (11.26) and (11.32) of \(R_{b,\sigma}=(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1})^{-1}\) \[R(b,\sigma) =(\widehat{L_{b}}(\sigma)\check{L}_{b}(\sigma)^{-1})^{-1}=\begin{pmatrix} \widetilde{R}_{00}(b,\sigma)&\widetilde{R}_{01}(b,\sigma)&\widetilde{R}_{02 }(b,\sigma)\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\widetilde{R}_{11}(b, \sigma)&\sigma^{-1}\widetilde{R}_{12}(b,\sigma)\\ \widetilde{R}_{20}(b,\sigma)&\sigma^{-1}\widetilde{R}_{21}(b,\sigma)&\sigma ^{-1}\widetilde{R}_{22}(b,\sigma)\end{pmatrix}\] \[=\begin{pmatrix}\widetilde{R}_{00}(b,\sigma)&\widetilde{R}_{01}(b,\sigma)&\widetilde{R}_{02}(b,\sigma)\\ \sigma^{-1}\widetilde{R}_{10}(b,\sigma)&\sigma^{-2}\big{(}\widetilde{R}_{11} ^{0}(b)+\sigma\widetilde{R}_{11}^{e}(b,\sigma)\big{)}&\sigma^{-1}\widetilde{R }_{12}(b,\sigma)\\ \widetilde{R}_{20}(b,\sigma)&\sigma^{-1}\big{(}\widetilde{R}_{21}^{0}(b)+ \sigma\widetilde{R}_{21}^{e}(b,\sigma)\big{)}&\sigma^{-1}\big{(}\widetilde{R }_{22}^{0}(b)+\sigma\widetilde{R}_{22}^{e}(b,\sigma)\big{)}\end{pmatrix}\] where \[\begin{gathered}\widetilde{R}_{11}=\big{(}\widetilde{L}_{11}^{ \sharp}-\sigma\widetilde{L}_{12}^{\sharp}(\widetilde{L}_{22}^{\flat})^{-1} \widetilde{L}_{21}^{\sharp}\big{)}^{-1},\quad\widetilde{R}_{10}=\widetilde{R }_{11}\big{(}-\widetilde{L}_{10}+\sigma\widetilde{L}_{12}^{\sharp}(\widetilde {L}_{22}^{\flat})^{-1}\widetilde{L}_{20}\big{)}L_{00}^{-1},\\ \widetilde{R}_{12}=-\widetilde{R}_{11}\widetilde{L}_{12}^{\sharp}( \widetilde{L}_{22}^{\flat})^{-1},\quad\widetilde{R}_{21}=-(\widetilde{L}_{22 }^{\flat})^{-1}\widetilde{L}_{21}^{\sharp}\widetilde{R}_{11},\quad\widetilde{ R}_{22}=(\widetilde{L}_{22}^{\flat})^{-1}-\sigma(\widetilde{L}_{22}^{\flat})^{-1} \widetilde{L}_{21}^{\sharp}\widetilde{R}_{12},\\ \widetilde{R}_{20}=-(\widetilde{L}_{22}^{\flat})^{-1}\big{(} \widetilde{L}_{20}L_{00}^{-1}+\widetilde{L}_{21}^{\sharp}\widetilde{R}_{10} \big{)},\quad\widetilde{R}_{01}=-L_{00}^{-1}\big{(}\widetilde{L}_{01} \widetilde{R}_{11}+\widetilde{L}_{02}\widetilde{R}_{21}\big{)},\\ \widetilde{R}_{02}=-L_{00}^{-1}\big{(}\sigma\widetilde{L}_{01} \widetilde{R}_{12}+\widetilde{L}_{02}\widetilde{R}_{22}\big{)},\quad\widetilde{ R}_{00}=L_{00}^{-1}\big{(}I-\sigma\widetilde{L}_{01}\widetilde{R}_{10}- \sigma\widetilde{L}_{02}\widetilde{R}_{20}\big{)}\end{gathered} \tag{12.31}\] with \[\begin{gathered}\widetilde{L}_{i1}^{\sharp}=\widetilde{L}_{i1}- \sigma\widetilde{L}_{i0}L_{00}^{-1}\widetilde{L}_{01}\quad\text{for}\quad i=1, 2,\quad\widetilde{L}_{12}^{\sharp}=\widetilde{L}_{12}-\widetilde{L}_{10}L_{00} ^{-1}\widetilde{L}_{02},\\ \widetilde{L}_{22}^{\flat}=\widetilde{L}_{22}-\sigma\widetilde{L}_{2 0}L_{00}^{-1}\widetilde{L}_{02}.\end{gathered} \tag{12.32}\] **Proposition 12.16**.: _Let \(s,\ell,\epsilon,m\) be defined as in Proposition 12.13. Then \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{ij}\) satisfy the following properties._ 1. \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{00},(\sigma\partial_{\sigma})^{m} \widetilde{R}_{01},(\sigma\partial_{\sigma})^{m}\widetilde{R}_{02},(\sigma \partial_{\sigma})^{m}\widetilde{R}_{10},(\sigma\partial_{\sigma})^{m} \widetilde{R}_{12},(\sigma\partial_{\sigma})^{m}\widetilde{R}_{20}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ 2. 
\((\sigma\partial_{\sigma})^{m}\widetilde{R}_{11},(\sigma\partial_{\sigma})^{m} \widetilde{R}_{21},(\sigma\partial_{\sigma})^{m}\widetilde{R}_{22}\) _have an_ \(\epsilon\)_-regular expansion up to order one, that is,_ \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{11}^{e}\)_,_ \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{21}^{e},(\sigma\partial_{\sigma})^{m} \widetilde{R}_{22}^{e}\) _are_ \(\epsilon\)_-regular at_ \(\sigma=0\)_._ Proof.: Applying Proposition 12.14 and Lemma 12.15, we conclude that for \(0\leq k\leq m\), \((\sigma\partial_{\sigma})^{k}L_{00}^{-1}\) is \(\epsilon\)-regular. Then by Proposition 12.14, we find that \((\sigma\partial_{\sigma})^{k}\widetilde{L}_{11}^{\sharp},(\sigma\partial_{ \sigma})^{k}\widetilde{L}_{21}^{\sharp},(\sigma\partial_{\sigma})^{k}\widetilde {L}_{22}^{\flat}\) (and thus \((\sigma\partial_{\sigma})^{k}(\widetilde{L}_{22}^{\flat})^{-1}\) by Lemma 12.15) have an \(\epsilon\)-regular expansion up to order one, while \((\sigma\partial_{\sigma})^{k}\widetilde{L}_{12}^{\sharp}\) is \(\epsilon\)-regular. Therefore, we see that \[(\sigma\partial_{\sigma})^{k}\Big{(}\widetilde{L}_{11}^{\sharp}-\sigma\widetilde {L}_{12}^{\sharp}(\widetilde{L}_{22}^{\flat})^{-1}\widetilde{L}_{21}^{ \sharp}\Big{)}\] has an \(\epsilon\)-regular expansion up to order one whose coefficient of \(\sigma^{0}\) is \((\sigma\partial_{\sigma})^{k}\widetilde{L}_{11}^{0}(b)\). Then the \(\epsilon\)-regular expansion property of \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{11}\) follows from Lemma 12.15. Similarly, the \(\epsilon\)-regular expansion property of \((\sigma\partial_{\sigma})^{m}\widetilde{R}_{21},(\sigma\partial_{\sigma})^{m} \widetilde{R}_{22}\) follows from that of \((\sigma\partial_{\sigma})^{k}\widetilde{R}_{11},(\sigma\partial_{\sigma})^{k}( \widetilde{L}_{22}^{\flat})^{-1},(\sigma\partial_{\sigma})^{m}\widetilde{L}_{2 1}^{\sharp}\) and the above expressions of \(\widetilde{R}_{ij}\). Finally, the \(\epsilon\)-regularity of the remaining components can be obtained by using the above expressions of \(\widetilde{R}_{ij}\) and Proposition 12.14. **Proposition 12.17**.: _Let \(L_{b}^{-}(\sigma)\) be the regular part of \(\widehat{L_{b}}(\sigma)^{-1}\) defined in Corollary 11.4 and define_ \[\mathcal{L}_{b}^{-(m)}(\sigma):=(\sigma\partial_{\sigma})^{m}L_{b}^{-}(\sigma).\] _Let \(-\frac{3}{2}<\ell<-\frac{1}{2},0<\epsilon<1,-\frac{1}{2}<\ell+\epsilon<\frac{1}{2}\) and \(s-\epsilon>4+m\) where \(m\in\mathbb{N}_{0}\). For \(\operatorname{Im}\sigma\geq 0\) and \(0<|\sigma|\leq c_{0}\) where \(c_{0}>0\) is a small constant, \(\mathcal{L} _and_ \[\partial_{\sigma}^{j}\mathcal{L}_{b}^{-(m)}(\sigma):\tilde{H}_{\rm b}^{s-1,\ell+ 2}(\bar{X})\to|\sigma|^{-\epsilon-j+1}\bar{H}_{\rm b}^{s-m-\epsilon-j+1,\ell+ \epsilon-1}(\bar{X}),\quad j=1,2. 
\tag{12.34}\] Proof.: Recall that \[L_{b}^{-}(\sigma)=\tilde{L}_{b}(\sigma)^{-1}R_{\rm reg}(b,\sigma)-\tilde{L}_{b }(\sigma)^{-1}\Big{(}\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{ 2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}(P^{1}(b)+P^{\epsilon}(b, \sigma))+\frac{1}{2}\partial_{\sigma}^{2}\widehat{L_{b}}(0)P^{2}(b)\Big{)}\] where \(P^{2}(b),P^{1}(b):\bar{H}_{\rm b}^{s-1,\ell+2}(\bar{X})\to\rho C^{\infty}(\bar {X})+\bar{H}_{\rm b}^{\infty,\frac{1}{2}-}(\bar{X})\), \[(\sigma\partial_{\sigma})^{k}P^{\epsilon}(b,\sigma):\bar{H}_{\rm b}^{s-1, \ell+2}(\bar{X})\to\mathcal{K}_{b,1}\] is \(\epsilon\)-regular for \(0\leq k\leq m\), and \[R_{\rm reg}(b,\sigma)=\begin{pmatrix}\widetilde{R}_{00}(b,\sigma)&\widetilde{ R}_{01}(b,\sigma)&\widetilde{R}_{02}(b,\sigma)\\ 0&0&0\\ \widetilde{R}_{20}(b,\sigma)&\widetilde{R}_{21}^{\epsilon}(b,\sigma)& \widetilde{R}_{22}^{\epsilon}(b,\sigma)\end{pmatrix}.\] Then a direct calculation implies \((\sigma\partial_{\sigma})^{m}L_{b}^{-}(\sigma)\) is a linear combination of the operators of the forms \[\hat{\mathcal{L}}_{b}^{(m_{1})}(\sigma)\circ(\sigma\partial_{ \sigma})^{m_{2}}R_{\rm reg}(b,\sigma),\quad\hat{\mathcal{L}}_{b}^{(m_{1})}( \sigma)\circ\partial_{\sigma}^{2}\widehat{L_{b}}(0)\circ P^{2}(b),\] \[\hat{\mathcal{L}}_{b}^{(m_{1})}(\sigma)\circ(\sigma\partial_{ \sigma})^{m_{2}}\big{(}\partial_{\sigma}\widehat{L_{b}}(0)+\frac{\sigma}{2} \partial_{\sigma}^{2}\widehat{L_{b}}(0)\big{)}\circ(\sigma\partial_{\sigma})^ {m_{3}}\big{(}P^{1}(b)+P^{\epsilon}(b,\sigma)\big{)}.\] Applying Proposition 12.13 and 12.16, we obtain (12.33) and (12.34). ## 13. Decay estimates In this section, we continue using the notation from SS12. We will study the decay of the solution \((\dot{g},\dot{A})\) to the equations \[L_{b}(\dot{g},\dot{A})=f,\quad f\in C_{c}^{\infty}((0,\infty)_{t_{b,*}};\bar{H }_{\rm b}^{s,\ell+2}(\bar{X})). \tag{13.1}\] We define spacetime Sobolev spaces \[\widetilde{H}_{\rm b,c}^{s,\ell,k},\quad k\in\mathbb{N}_{0}, \tag{13.2}\] which is equal to \(L^{2}(t_{b,*}^{-1}([0,\infty));|dg_{b}|)\) for \(s,\ell,k=0\). Here, the index \(s\) measures the regularity with respect to the derivatives \(\partial_{t_{b,*}}\) and the stationary \(b\)-vector fields on \(\bar{X}\) (which is spanned by \(r\partial_{r}\) and rotation vector fields). The index \(\ell\) is the weight in \(\rho=r^{-1}\), that is, \(\widetilde{H}_{\rm b,c}^{s,\ell,k}=\rho^{\ell}\widetilde{H}_{\rm b,c}^{s,k}\). The index \(k\in\mathbb{N}_{0}\) measures the regularity with respect to \(\langle t_{b,*}\rangle_{\partial_{t_{b,*}}}\) (which is related to the conormal regularity of \(\widehat{L_{b}}(\sigma)^{-1}\) on the Fourier transform side). Therefore, \(u\in\widetilde{H}_{\rm b,c}^{s,\ell,k}\) if and only if \((\langle t_{b,*}\rangle_{\partial_{t_{b,*}}})^{j}u\in\widetilde{H}_{\rm b,c}^ {s-j,\ell}:=\widetilde{H}_{\rm b,c}^{s-j,\ell,0}\) for any \(0\leq j\leq k\). **Theorem 13.1**.: _Let \(-\frac{3}{2}<\ell<-\frac{1}{2},0<\epsilon<1,-\frac{1}{2}<\ell+\epsilon<\frac{1} {2}\) and \(s-\epsilon>4+m\) where \(m\in\mathbb{N}_{0}\). Let \((\dot{g},\dot{A})\) be the solution to the equation (13.1) with \(f\in C_{c}^{\infty}((0,\infty)_{t_{b,*}};\bar{H}_{\rm b}^{s,\ell+2}(\bar{X}))\). 
Then there exists a generalized zero mode \((\hat{g},\dot{A})\in\widehat{\mathcal{K}}_{b}\) (defined in Proposition 10.5) such that_ \[(\dot{g},\dot{A})=(\hat{g},\dot{A})+(\bar{g},\bar{A})+(\tilde{g},\bar{A}) \tag{13.3}\] _and \((\hat{g},\dot{A}),(\bar{g},\bar{A})\) and \((\tilde{g},\tilde{A})\) satisfy the following properties._ 1. _The term_ \((\bar{g},\bar{A})\) _can be written as_ \((\bar{g},\bar{A})=(\bar{g}_{1},\bar{A}_{1})+(\bar{g}_{2},\bar{A}_{2})\) _where_ \((\bar{g}_{1},\bar{A}_{1})\) _is a pure gauge solution. Moreover, for_ \(m\geq 1\) _and_ \(j\geq 0\)_, we have the pointwise estimates_ \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\bar{g}_{1}, \bar{A}_{1})|\lesssim\big{(}\langle t_{b,*}+r\rangle^{-2+\epsilon}+\langle t_{b,* }\rangle^{-1+\epsilon}\langle r\rangle^{-2}\big{)}\|f\|_{\langle t_{b,*} \rangle^{-\frac{3}{2}-}\bar{H}_{\rm b,c}^{s-1,\ell+2}}\] (13.4) _and_ \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\bar{g}_{2}, \bar{A}_{2})|\lesssim\langle t_{b,*}+r\rangle^{-2+\epsilon}\|f\|_{\langle t_{b,* }\rangle^{-\frac{3}{2}-}\bar{H}_{\rm b,c}^{s-1,\ell+2}}\] (13.5) _where_ \(V^{j}\) _denotes the product of_ \(j\) _vector fields_ \(V\in\{\partial_{t_{b,*}},r\partial_{r},\ \text{rotation fields}\ \}\)_._ 2. \((\tilde{g},\tilde{A})\) _satisfies_ \[\|(\tilde{g},\tilde{A})\|_{{}_{(t_{b,*})^{-\frac{3}{2}+\epsilon}\tilde{B}^{s-2, \ell+\epsilon-1,m}_{b,c}}}\lesssim\|f\|_{{}_{(t_{b,*})^{-\frac{2}{2}+\epsilon} \tilde{B}^{s+m,\ell+2,m}_{b,c}}}.\] (13.6) _For_ \(m\geq 1\)_, we also have_ \[\|\big{(}(t_{b,*})\partial_{t_{b,*}}\big{)}^{m-1}(\tilde{g},\tilde{A})\|_{{}_ {(t_{b,*})^{-2+\epsilon}L^{\infty}(\mathbb{R}_{t_{b,*}},\tilde{H}^{s-2-m, \ell+\epsilon-1}_{b}(\tilde{X}))}}\lesssim\|f\|_{{}_{(t_{b,*})^{-\frac{2}{2}+ \epsilon}\tilde{H}^{s+m,\ell+2,m}_{b,c}}}.\] (13.7) _For_ \(m\geq 1\) _and_ \(0\leq j<s-4-m\) _with_ \(j\in\mathbb{N}_{0}\)_, we have the pointwise decay estimate_ \[\big{|}\big{(}(t_{b,*})\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\tilde{g},\tilde{A })|\lesssim\langle t_{b,*}\rangle^{-2+\epsilon}\langle r\rangle^{-\frac{1}{2}- \ell-\epsilon}\|f\|_{{}_{(t_{b,*})^{-\frac{2}{2}+\epsilon}\tilde{H}^{s+m,\ell+ 2,m}_{b,c}}}.\] (13.8) 3. _Recall the definition of_ \(\mathcal{K}_{b,1},\mathcal{K}_{b,2}\) _and_ \(\mathcal{K}^{*}_{b}\) _at the beginning of SS11. The leading term_ \((\hat{g},\hat{A})\in\widehat{K}_{b}\) _is uniquely determined by_ \(f\) _and can be expressed as follows:_ \[(\hat{g},\hat{A})=-\frac{1}{2}\big{(}t_{b,*}(\hat{g}_{1},\hat{A}_{1})+(\hat{ g}_{1},\hat{A}_{1})\big{)}+\frac{1}{2}\Big{(}(\hat{g}^{\prime}_{1},\hat{A}^{ \prime}_{1})+(\hat{g}^{\prime\prime}_{1},\hat{A}^{\prime\prime}_{1})+(\hat{g} _{2},\hat{A}_{2})\Big{)}\] (13.9) _where_ \((\hat{g}_{1},\hat{A}_{1}),(\hat{g}^{\prime}_{1},\hat{A}^{\prime}_{1}),(\hat{ g}^{\prime\prime}_{1},\hat{A}^{\prime}_{1})\in\mathcal{K}_{b,1}\)_, and_ \((\hat{g}_{2},\hat{A}_{2})\in\mathcal{K}_{b,2}\)_. 
Moreover,_ \((\hat{g}_{1},\hat{A}_{1}),(\hat{g}^{\prime}_{1},\hat{A}^{\prime}_{1})\) _and_ \((\hat{g}_{2},\hat{A}_{2})\) _are determined by the conditions (_11.30_) and_ (_11.31_) with_ \(f\) _there replaced by_ \(\int_{\mathbb{R}}f(t_{b,*})\,dt_{b,*}\)_, and_ \((\hat{g}^{\prime\prime}_{1},\hat{A}^{\prime\prime}_{1})\in\mathcal{K}_{b,1}\) _is uniquely determined by the condition_ \[-\Big{\langle}[L_{b},t_{b,*}](\hat{g}^{\prime\prime}_{1},\hat{A}^{ \prime\prime}_{1})+\frac{1}{2}[[L_{b},t_{b,*}],t_{b,*}](\hat{g}^{\prime\prime}_ {1},\hat{A}^{\prime\prime}_{1}),(\hat{g}^{*},\hat{A}^{*})\Big{\rangle}+\Big{ \langle}[L_{b},t_{b,*}](\hat{g}^{\prime\prime}_{2},\hat{A}^{\prime\prime}_{2}),(\hat{g}^{*},\hat{A}^{*})\Big{\rangle}\] \[\qquad=\langle\int_{\mathbb{R}}t_{b,*}f(t_{b,*})\,dt_{b,*},\;(\hat{ g}^{*},\hat{A}^{*})\rangle\quad\text{for all}\quad(\hat{g}^{*},\hat{A}^{*})\in\mathcal{K}^{*}_{b}.\] (13.10) _for some_ \((\hat{g}^{\prime\prime}_{2},\hat{A}^{\prime\prime}_{2})\in\mathcal{K}_{b,2}\)_._ Proof.: The strategy of the proof is to take Fourier transform of the inhomogeneous equation \(L_{b}(\dot{g},\dot{A})=f\) in \(t_{b,*}\). We define the Fourier transform of \((\dot{g},\dot{A})\) as \[(\hat{\tilde{g}}(\sigma),\hat{\tilde{A}}(\sigma)):=\int_{\mathbb{R}}e^{it_{b,*} \sigma}(\dot{g},\dot{A})(t_{b,*})\,dt_{b,*} \tag{13.11}\] and likewise for \(f\) (here we drop the notation for the spatial variables.). First, according to the energy estimate in Proposition 5.27, we see that the solution \((\dot{g},\dot{A})\) satisfies \[\|(\dot{g},\dot{A})(t_{b,*},\cdot)\|_{\tilde{H}^{1,-1}_{\mathrm{b}}(\tilde{X})} \lesssim e^{Mt_{b,*}}\] for some \(M>0\). This implies that the Fourier transform \((\dot{\tilde{g}}(\sigma),\hat{\tilde{A}}(\sigma))\) of \((\dot{g},\dot{A})\) is well defined for \(\mathrm{Im}\,\sigma>M\). Since \(f(t_{b,*})\) has compact support in \(t_{b,*}\), \(\hat{f}(\sigma)\) is holomorphic in the full plane \(\sigma\in\mathbb{C}\) and decays superpolynomially as \(|\mathrm{Re}\,\sigma|\to\infty\) with \(\mathrm{Im}\,\sigma>0\) fixed. Fourier transforming of the equation \(L_{b}(\dot{g},\dot{A})=f\), we obtain \[\widehat{L_{b}}(\sigma)(\dot{\tilde{g}}(\sigma),\hat{\tilde{A}}(\sigma))=\hat{ f}(\sigma),\quad\mathrm{Im}\,\sigma>M. \tag{13.12}\] In view of Theorem 5.28, we see that \(\widehat{L_{b}}(\sigma)\) is invertible in \(\mathrm{Im}\,\sigma>M\) and \[(\dot{\tilde{g}}(\sigma),\hat{\tilde{A}}(\sigma))=\widehat{L_{b}}(\sigma)^{-1} \hat{f}(\sigma),\quad\mathrm{Im}\,\sigma>M.\] Therefore, we have the following integral representation \[(\dot{g}(t_{b,*}),\;\dot{A}(t_{b,*}))=\frac{1}{2\pi}\int_{\mathrm{Im}\,\sigma=M+1 }e^{-i\sigma t_{b,*}}\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)\,d\sigma. \tag{13.13}\] Owing to Theorem 12.12, we see that the integrand in the above integral representation is holomorphic in \(\sigma\) (with values in \(\bar{H}^{s,\ell}_{\mathrm{b}}(\bar{X})\)) in upper half plane \(\mathrm{Im}\,\sigma>0\), and decay superpolynomially in \(|\mathrm{Re}\,\sigma|\) as with \(\mathrm{Im}\,\sigma\) bounded. Using Cauchy's Theorem, we obtain \[(\dot{g}(t_{b,*}),\;\dot{A}(t_{b,*}))=\frac{1}{2\pi}\int_{\mathrm{Im}\,\sigma=C} e^{-i\sigma t_{b,*}}\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)\,d\sigma. \tag{13.14}\] for any \(C>0\). Now we let \(C\to 0+\). Fixing a frequency cutoff \(\chi(s)\in C^{\infty}_{c}(\mathbb{R})\) such that \(\chi(s)=1\) for \(|s|\leq c_{0}/2\) and \(\chi(s)=0\) for \(|s|\geq c_{0}\) where \(c_{0}\) is defined as in Theorem 12.11. 
We now write \[\hat{f}(\sigma)=\chi(\operatorname{Re}\sigma)\hat{f}(\sigma)+(1-\chi( \operatorname{Re}\sigma))\hat{f}(\sigma):=\hat{f}_{1}(\sigma)+\hat{f}_{2}( \sigma).\] Correspondingly, by Theorem 11.3 and Corollary 11.4, we split \(\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)\) into two parts \[\widehat{L_{b}}(\sigma)^{-1}\hat{f}(\sigma)=(\hat{\hat{g}}_{\operatorname{ reg}}(\sigma),\hat{\hat{A}}_{\operatorname{reg}}(\sigma))+(\hat{\hat{g}}_{ \operatorname{sing}}(\sigma),\hat{\hat{A}}_{\operatorname{sing}}(\sigma))\] where \[(\hat{\hat{g}}_{\operatorname{reg}}(\sigma),\hat{\hat{A}}_{\operatorname{ reg}}(\sigma))=L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma)+\widehat{L_{b}}(\sigma)^{-1} \hat{f}_{2}(\sigma),\quad(\hat{\hat{g}}_{\operatorname{sing}}(\sigma),\hat{ \hat{A}}_{\operatorname{sing}}(\sigma))=P(b,\sigma)\hat{f}_{1}(\sigma).\] As discussed above, we can shift the integration contour to \(\operatorname{Im}\sigma=C\) for any \(C>0\). Letting \(C\to 0+\), we obtain \[(\hat{g}(t_{b,*}),\dot{A}(t_{b,*}))=(\hat{g}_{\operatorname{reg}}(t_{b,*}), \dot{A}_{\operatorname{reg}}(t_{b,*}))+(\hat{g}_{\operatorname{sing}}(t_{b,*} ),\dot{A}_{\operatorname{sing}}(t_{b,*}))\] where \[(\hat{g}_{\operatorname{reg}}(t_{b,*}),\dot{A}_{\operatorname{reg}}(t_{b,*})) =\frac{1}{2\pi}\int_{\mathbb{R}}\!e^{-i\sigma t_{b,*}}(\hat{\hat{g}}_{ \operatorname{reg}}(\sigma),\hat{\hat{A}}_{\operatorname{reg}}(\sigma))\,d\sigma\] and \[(\hat{g}_{\operatorname{sing}}(t_{b,*}),\dot{A}_{\operatorname{sing}}(t_{b,*}) )=\lim_{C\to 0+}\frac{1}{2\pi}\int_{\operatorname{Im}\sigma=C}e^{-i\sigma t_{b,* }}(\hat{\hat{g}}_{\operatorname{sing}}(\sigma),\hat{\hat{A}}_{\operatorname{ sing}}(\sigma))\,d\sigma.\] * Analysis of \((\hat{g}_{\operatorname{reg}}(t_{b,*}),\dot{A}_{\operatorname{reg}}(t_{b,*})).\) Since \(f\in C^{\infty}_{c}((0,\infty)_{t_{b,*}};\bar{H}^{s,\ell+2}_{b}(\bar{X}))\), we see that \(\hat{f}_{1}(\sigma),\hat{f}_{2}(\sigma)\in\overline{H^{s}(\mathbb{R}_{\sigma} ;\bar{H}^{s,\ell+2}_{b}(\bar{X}))}\) for any \(s^{\prime}\in\mathbb{R}\). 
As \(H^{3/2-\epsilon^{\prime}}(\mathbb{R})\) is an algebra for any \(0<\epsilon^{\prime}<1\), by Theorem 12.11, we have \[L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma)\in H^{\frac{3}{2}-\epsilon^{\prime}}((-c_ {0},c_{0});\bar{H}^{s+1-\max\{\epsilon,\frac{1}{2}\},\ell+\epsilon-1}_{\rm b} (\bar{X})),\quad 0<\epsilon<\epsilon^{\prime}<1.\] Using Moser estimate, we bound norm of \(L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma)\) as follows \[\|L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma)\|_{H^{\frac{3}{2}-\epsilon ^{\prime}}(\mathbb{R}_{\sigma};\bar{H}^{s+1-\max\{\epsilon,\frac{1}{2}\},\ell+ \epsilon-1}_{\rm b}(\bar{X}))}\lesssim\|\hat{f}_{1}(\sigma)\|_{H^{\frac{3}{2}- \epsilon^{\prime}}(\mathbb{R}_{\sigma};\bar{H}^{s,\ell+2}_{\rm b}(\bar{X}))}\] \[\lesssim\|\hat{f}(\sigma)\|_{H^{\frac{3}{2}-\epsilon^{\prime}}( \mathbb{R}_{\sigma};\bar{H}^{s,\ell+2}_{\rm b}(\bar{X}))}=\|\langle t_{b,*} \rangle^{\frac{3}{2}-\epsilon^{\prime}}f\|_{L^{2}(\mathbb{R}_{t_{b,*}};\bar{H} ^{s,\ell+2}_{\rm b}(\bar{X}))}.\] By exploiting the conormal regularity of \(L_{b}^{-}(\sigma)\) proved in Proposition 12.13, we find that for \(0\leq k\leq m\), \[(\sigma\partial_{\sigma})^{k}\big{(}L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma)\big{)} \in H^{\frac{3}{2}-\epsilon^{\prime}}((-c_{0},c_{0});\bar{H}^{s+1-k-\max\{ \epsilon,\frac{1}{2}\},\ell+\epsilon-1}_{\rm b}(\bar{X})),\quad 0< \epsilon<\epsilon^{\prime}<1.\] In a similar manner, we arrive at \[\|(\sigma\partial_{\sigma})^{k}\big{(}L_{b}^{-}(\sigma)\hat{f}_{1}(\sigma) \big{)}\|_{H^{\frac{3}{2}-\epsilon^{\prime}}(\mathbb{R}_{\sigma};\bar{H}^{s+1 -k-\max\{\epsilon,\frac{1}{2}\},\ell+\epsilon-1}_{\rm b}(\bar{X}))}\] \[\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\hat{f}_{1}( \sigma)\|_{H^{\frac{3}{2}-\epsilon^{\prime}}(\mathbb{R}_{\sigma};\bar{H}^{s-j, \ell+2}_{\rm b}(\bar{X}))}\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j} \hat{f}(\sigma)\|_{H^{\frac{3}{2}-\epsilon^{\prime}}(\mathbb{R}_{\sigma};\bar{H}^{s -j,\ell+2}_{\rm b}(\bar{X}))}\] (13.15) \[=\sum_{j=0}^{k}\|(t_{b,*}\partial_{t_{b,*}})^{j}\langle t_{b,*} \rangle^{\frac{3}{2}-\epsilon^{\prime}}f\|_{L^{2}(\mathbb{R}_{t_{b,*}};\bar{H}^{s -j,\ell+2}_{\rm b}(\bar{X}))}\lesssim\|f\|_{(t_{b,*})^{-\frac{3}{2}+\epsilon^{ \prime}}\bar{H}^{s,\ell+2,k}_{\rm b,c}}.\] As for the second part \(\widehat{L_{b}}(\sigma)^{-1}\hat{f}_{2}(\sigma)\) in \((\hat{\hat{g}}_{\operatorname{reg}}(\sigma),\hat{\hat{A}}_{\operatorname{reg}}( \sigma))\), by Theorem 12.12, we have \[\widehat{L_{b}}(\sigma)^{-1}\in W^{j,\infty}\Big{(}\mathbb{R}_{\sigma}\setminus[- \frac{c_{0}}{2},\frac{c_{0}}{2}];\mathcal{L}\big{(}\bar{H}^{s,\ell+2}_{\rm b}, h^{-j}\bar{H}^{s-j,\ell}_{\rm b}\big{)}\Big{)},\quad j\in\mathbb{N}_{0}\] (13.16) where we take \(h=\langle\sigma\rangle^{-1}\). 
Taking \(j=2\) gives \[\widehat{L_{b}}(\sigma)^{-1}\hat{f}_{2}(\sigma)\in H^{\frac{3}{2}-\epsilon}( \mathbb{R};\langle\sigma\rangle^{-s+2}\bar{H}^{s-2,\ell}_{\rm b,(\sigma)^{-1}}( \bar{X})).\] whose norm is bounded by \[\|\hat{f}_{2}(\sigma)\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R};\langle\sigma \rangle^{-s}\bar{H}^{s,\ell+2}_{\rm b,(\sigma)^{-1}}(\bar{X}))}\lesssim\|f\|_{(t _{b,*})^{-\frac{3}{2}+\epsilon}\bar{H}^{s,\ell+2}_{\rm b,c}}.\] More generally, we have \[(\sigma\partial_{\sigma})^{k}\big{(}\widehat{L_{b}}(\sigma)^{-1}\hat{f}_{2}( \sigma)\big{)}\in H^{\frac{3}{2}-\epsilon}(\mathbb{R};\langle\sigma\rangle^{-s+ 2+2k}\bar{H}^{s-2-k,\ell}_{\rm b,(\sigma)^{-1}}(\bar{X})).\] and \[\begin{split}&\|(\sigma\partial_{\sigma})^{k}\big{(}\widehat{L_{b}}( \sigma)^{-1}\hat{f}_{2}(\sigma)\big{)}\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R}; (\sigma)^{-s+2+k}\bar{H}_{\mathrm{b},(\sigma)-1}^{s-2-k,\ell}(\bar{X}))}\\ &\quad\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\hat{f }_{2}(\sigma)\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R};(\sigma)^{-s-k+2j}\bar{H} _{\mathrm{b},(\sigma)-1}^{s+k-2j,\ell+2}(\bar{X}))}\\ &\quad\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\hat{f }(\sigma)\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R};(\sigma)^{-s-k+2j}\bar{H}_{ \mathrm{b},(\sigma)-1}^{s+k-2j,\ell+2}(\bar{X}))}\\ &\quad\lesssim\sum_{j=0}^{k}\|(t_{b,*}\partial_{t_{b,*}})^{j}f\| _{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ where \[f^{(0)}=\hat{f}(0)=\int_{\mathbb{R}}f(t_{b,*})\,dt_{b,*},\quad f^{(1)}=-i\partial_{ \sigma}\hat{f}(0)=\int_{\mathbb{R}}t_{b,*}f(t_{b,*})\,dt_{b,*},\] and \(f^{(2)}(\sigma)\) is holomorphic in the full plane \(\sigma\in\mathbb{C}\). 
Moreover, by Sobolev embedding theorem, we have for \(j=0,1\) \[\|f^{(j)}\|_{\tilde{H}_{\mathrm{b}}^{s,\ell+2}(\bar{X})}\lesssim\|\hat{f}( \sigma)\|_{H^{3/2+s^{\prime}}(\mathbb{R}_{\sigma};\tilde{H}_{\mathrm{b}}^{s, \ell+2}(\bar{X}))}\lesssim\|f\|_{(t_{b,*})^{-\frac{3}{2}-s^{\prime}}\tilde{H}_ {\mathrm{b},c}^{s,\ell}}\quad\text{for any}\quad s^{\prime}>0.\] As for \(\hat{f}_{3}(\sigma)\), since \(\hat{f}_{3}(\sigma)=\sigma^{-2}(\hat{f}_{1}(\sigma)-\chi(\mathrm{Re}\,\sigma )f^{(0)}-i\sigma\chi(\mathrm{Re}\,\sigma)f^{(1)})\), it follows that \(\hat{f}_{3}(\sigma)\in C_{c}^{\infty}(\mathbb{R}_{\sigma})\) and \[\|\hat{f}_{3}(\sigma)\|_{H^{s^{\prime}}(\mathbb{R}_{\sigma};\tilde{H}_{ \mathrm{b}}^{s,\ell+2}(\bar{X}))}\lesssim\|\hat{f}(\sigma)\|_{H^{s^{\prime} +2}(\mathbb{R}_{\sigma};\tilde{H}_{\mathrm{b}}^{s,\ell+2}(\bar{X}))},\quad s^ {\prime}>-\frac{1}{2}.\] Recall that \(P(b,\sigma)=\sigma^{-2}P^{2}(b)+\sigma^{-1}P^{1}(b)+\sigma^{-1}P^{e}(\sigma)\) where \(P^{2}(b),P^{1}(b):\tilde{H}_{\mathrm{b}}^{s,\ell+2}\to\tilde{H}_{\mathrm{b}}^ {\infty,-1/2-}(\bar{X})\) and \(P^{e}(b,\sigma):\tilde{H}_{\mathrm{b}}^{s,\ell+2}\to\mathcal{K}_{b,1}\) is \(\epsilon\)-regular. Therefore, following the arguments in the proof of (13.15), we find that for \(0\leq k\leq m\), \[(\sigma\partial_{\sigma})^{k}\Big{(}P(b,\sigma)(\sigma^{2}\hat{f}_{3}(\sigma) )\Big{)}\in H^{\frac{3}{2}-\epsilon}((-c_{0},c_{0});\tilde{H}_{\mathrm{b}}^{ \infty,-\frac{1}{2}-}(\bar{X})),\] whose norm is bounded in the following manner \[\begin{split}&\|(\sigma\partial_{\sigma})^{k}\big{(}P(b,\sigma)( \sigma^{2}\hat{f}_{3}(\sigma))\big{)}\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R}; \tilde{H}_{\mathrm{b}}^{\infty,-\frac{1}{2}-}(\bar{X}))}\\ &\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\hat{f}_{3 }(\sigma)\|_{H^{\frac{3}{2}-s}(\mathbb{R};\tilde{H}_{\mathrm{b}}^{s,\ell+2}( \bar{X}))}\lesssim\sum_{j=0}^{k}\|(\sigma\partial_{\sigma})^{j}\hat{f}(\sigma )\|_{H^{\frac{7}{2}-s}(\mathbb{R};\tilde{H}_{\mathrm{b}}^{s,\ell+2}(\bar{X})) }\\ &\lesssim\|f\|_{(t_{b,*})^{-\frac{7}{2}+s}\tilde{H}_{\mathrm{b},c }^{s,\ell+2},m}.\end{split} \tag{13.20}\] Similarly, we have \[(\sigma\partial_{\sigma})^{k}\Big{(}\big{(}P^{1}(b)+P^{e}(b,\sigma)\big{)}( \chi(\sigma)f^{(1)})\Big{)}\in H^{\frac{3}{2}-\epsilon}((-c_{0},c_{0});\bar{H }_{\mathrm{b}}^{\infty,-\frac{1}{2}-}(\bar{X})),\] and \[\|(\sigma\partial_{\sigma})^{k}\big{(}\big{(}P^{1}(b)+P^{e}(b,\sigma)\big{)}( \chi(\sigma)f^{(1)})\big{)}\|_{H^{\frac{3}{2}-\epsilon}(\mathbb{R};\tilde{H}_ {\mathrm{b}}^{\infty,-\frac{1}{2}-}(\bar{X}))}\lesssim\|f\|_{(t_{b,*})^{-\frac{ 3}{2}-\tilde{H}_{\mathrm{b},c}^{s,\ell+2}}}. \tag{13.21}\] As a result, it remains to analyze \(P(b,\sigma)(\chi(\mathrm{Re}\,\sigma)f^{(0)})\) and \(P^{2}(b)(i\sigma^{-1}\chi(\mathrm{Re}\,\sigma)f^{(1)})\). 
We write \[\begin{split}&\Big{(}\sigma^{-2}P^{2}(b)f^{(0)}+\sigma^{-1}P^{1}(b )f^{(0)}+P^{2}(b)(i\sigma^{-1}f^{(1)})\Big{)}+\sigma^{-1}P^{e}(b,\sigma)(\chi( \mathrm{Re}\,\sigma)f^{(0)})\\ &=\Big{(}(\sigma^{-2}(\dot{g}_{1},\dot{A}_{1})-i\sigma^{-1}(\dot{ g}_{1},\ddot{A}_{1}))+i\sigma^{-1}\big{(}(\dot{g}_{2},\dot{A}_{2})+(\dot{g}_{1}^{ \prime},\dot{A}_{1}^{\prime})+(\dot{g}_{1}^{\prime\prime},\dot{A}_{1}^{\prime \prime})\big{)}\Big{)}\\ &\quad+i\sigma^{-1}(\dot{g}_{1}^{\prime\prime\prime}(\sigma),\dot{A }_{1}^{\prime\prime\prime}(\sigma)):=I+II\end{split}\] where \((\dot{g}_{1},\dot{A}_{1}),(\dot{g}_{1}^{\prime},\dot{A}_{1}^{\prime}),(\dot{g }_{1}^{\prime\prime},\dot{A}_{1}^{\prime\prime}),(\dot{g}_{1}^{\prime\prime \prime}(\sigma),\dot{A}_{1}^{\prime\prime\prime}(\sigma))\in\mathcal{K}_{b,1}\) and \((\dot{g}_{2},\dot{A}_{2})\in\mathcal{K}_{b,2}\). Moreover, \((\dot{g}_{1}^{\prime\prime},\dot{A}_{1}^{\prime\prime})\) is uniquely determined by the condition \[-\Big{\langle}[L_{b},t_{b,*}](\dot{g}_{1}^{\prime\prime},\dot{A}_{1}^{\prime \prime})+\frac{1}{2}[[L_{b},t_{b,*}],t_{b,*}](\dot{g}_{1}^{\prime\prime},\dot{A} _{1}^{\prime\prime}),(\dot{g}^{*},\dot{A}^{*})\Big{\rangle}+\Big{\langle}[L_{b},t_ {b,*}](\dot{g}_{2}^{\prime\prime},\dot{A}_{2}^{\prime\prime}),(\dot{g}^{*},\dot{A} ^{*})\Big{\rangle}\\ =\langle\int_{\mathbb{R}}t_{b,*}f(t_{b,*})\,dt_{b,*},\ (\dot{g}^{*},\dot{A}^{*}) \rangle\quad\text{for all}\quad(\dot{g}^{*},\dot{A}^{*})\in\mathcal{K}_{b}^{*} \end{split} \tag{13.22}\] for some \((\dot{g}_{2}^{\prime\prime},\dot{A}_{2}^{\prime\prime})\in\mathcal{K}_{b,2}\). Since \(\sigma^{-j}(1-\chi(\sigma))\in H^{s^{\prime}}(\mathbb{R}_{\sigma})\) for \(j=1,2\) and any \(s^{\prime}\in\mathbb{R}\), we have \[(\sigma\partial_{\sigma})^{k}\Big{(}\chi(\mathrm{Re}\,\sigma)\big{(}P(b,\sigma)f^{ (0)}+P^{2}(b)(i\sigma^{-1}f^{(1)})\big{)}-(I+II)\Big{)}\in H^{\frac{3}{2}- \epsilon}((-c_{0},c_{0});\bar{H}_{\mathrm{b}}^{\infty,-\frac{1}{2}-}(\bar{X})),\] and \[\begin{split}&\|(\sigma\partial_{\sigma})^{k}\big{(}\chi(\mathrm{Re}\, \sigma)\big{(}P(b,\sigma)f^{(0)}+P^{2}(b)(i\sigma^{-1}f^{(1)})\big{)}-(I+II) \big{)}\|_{H^{\frac{3}{2}-s}(\mathbb{R};\tilde{H}_{\mathrm{b}}^{\infty,- \frac{1}{2}-}(\bar{X}))}\\ &\lesssim\|f\|_{(t_{b,*})^{-\frac{3}{2}-\tilde{H}_{\mathrm{b},c}^{s, \ell+2}}}.\end{split} \tag{13.23}\] As a consequence, it suffices to discuss the inverse Fourier transform of \(I\) and \(II\ * Analysis of the leading order term \((\hat{g},\hat{A})\). Since \[\mathcal{F}^{-1}(\sigma^{-2})=\lim_{C\to 0+}\frac{1}{2\pi}\int_{\operatorname{Im} \sigma=C}e^{-i\sigma t_{b,*}}\sigma^{-2}\,d\sigma=-\frac{1}{2}t_{b,*}H(t_{b,*})\] and \[\mathcal{F}^{-1}(i\sigma^{-1})=\lim_{C\to 0+}\frac{1}{2\pi}\int_{ \operatorname{Im}\sigma=C}e^{-i\sigma t_{b,*}}i\sigma^{-1}\,d\sigma=\frac{1} {2}H(t_{b,*}),\] we obtain \[\mathcal{F}^{-1}(I) =\lim_{C\to 0+}\frac{1}{2\pi}\int_{\operatorname{Im} \sigma=C}e^{-i\sigma t_{b,*}}\!\cdot\!I\,d\sigma\] (13.24) \[=-\frac{1}{2}\big{(}t_{b,*}(\hat{g}_{1},\hat{A}_{1})+(\hat{g}_{ 1},\hat{A}_{1})\big{)}+\frac{1}{2}\Big{(}(\hat{g}_{1}^{\prime},\hat{A}_{1}^{ \prime})+(\hat{g}_{1}^{\prime\prime},\hat{A}_{1}^{\prime\prime})+(\hat{g}_{2},\hat{A}_{2})\Big{)}:=(\hat{g},\hat{A}),\] where \((\hat{g}_{1},\hat{A}_{1}),(\hat{g}_{1}^{\prime},\hat{A}_{1}^{\prime}),(\hat {g}_{1}^{\prime\prime},\hat{A}_{1}^{\prime\prime})\in\mathcal{K}_{b,1}\), and \((\hat{g}_{2},\hat{A}_{2})\in\mathcal{K}_{b,2}\). 
Moreover, \((\hat{g}_{1},\hat{A}_{1}),(\hat{g}_{1}^{\prime},\hat{A}_{1}^{\prime})\) and \((\hat{g}_{2},\hat{A}_{2})\) are determined by the conditions (11.30) and (11.31) with \(f\) there replaced by \(f^{(0)}=\int_{\mathbb{R}}f(t_{b,*})\,dt_{b,*}\), and \((\hat{g}_{1}^{\prime\prime},\hat{A}_{1}^{\prime\prime})\in\mathcal{K}_{b,1}\) is uniquely determined by the condition (13.22) for some \((\hat{g}_{2}^{\prime\prime},\hat{A}_{2}^{\prime\prime})\in\mathcal{K}_{b,2}\). * Analysis of the pure gauge part \((\bar{g},\bar{A})\). Finally, we turn to the analysis of the inverse Fourier transform of \(II=i\sigma^{-1}(\hat{g}_{1}^{\prime\prime\prime}(\sigma),\hat{A}_{1}^{\prime \prime\prime}(\sigma))\) where \(\operatorname{supp}\big{(}\hat{g}_{1}^{\prime\prime\prime}(\sigma),\hat{A}_{1 }^{\prime\prime\prime}(\sigma)\big{)}\subset(-c_{0},c_{0})\). By Theorem 12.9, \[II=\sigma^{-1}(\hat{g}_{1}^{\prime\prime\prime}(\sigma),\hat{A}_{1}^{\prime \prime\prime}(\sigma))=\sum_{\pm}\psi^{\pm}(\sigma)\quad\text{where}\quad\psi^ {\pm}\in\mathcal{A}^{-\epsilon}\big{(}\pm[0,c_{0});\mathcal{K}_{b,1}\big{)}.\] We write \[\int_{-c_{0}}^{c_{0}}\!e^{-it_{b,*}\sigma}II\,d\sigma =\int_{0}^{c_{0}}\!e^{-it_{b,*}\sigma}\chi(t_{b,*}\sigma)\psi^{+} (\sigma)\,d\sigma+\int_{0}^{c_{0}}\!e^{-it_{b,*}\sigma}(1-\chi(t_{b,*}\sigma) )\psi^{+}(\sigma)\,d\sigma\] \[\quad+\int_{-c_{0}}^{0}\!e^{-it_{b,*}\sigma}\chi(t_{b,*}\sigma) \psi^{-}(\sigma)\,d\sigma+\int_{-c_{0}}^{0}\!e^{-it_{b,*}\sigma}(1-\chi(t_{b,*} \sigma))\psi^{-}(\sigma)\,d\sigma\] \[:=II_{1}^{+}+II_{2}^{+}+II_{1}^{-}+II_{2}^{-}.\] First, we have \[|II_{1}^{+}|\leq\int_{0}^{c_{0}t_{b,*}^{-1}}\!|\psi_{+}(\sigma)|\,d\sigma \lesssim\int_{0}^{c_{0}t_{b,*}^{-1}}\!\sigma^{-\epsilon}\,d\sigma\lesssim t_{ b,*}^{-1+\epsilon}.\] Next, we compute for \(k\in\mathbb{N}\) \[II_{2}^{+}=\int_{0}^{c_{0}}\!e^{-it_{b,*}\sigma}(1-\chi(t_{b,*}\sigma))\psi_{+} (\sigma)\,d\sigma=(it_{b,*})^{-k}\int_{0}^{c_{0}}\!e^{-it_{b,*}\sigma}\partial _{\sigma}^{k}\Big{(}(1-\chi(t_{b,*}\sigma))\psi_{+}(\sigma)\Big{)}\,d\sigma.\] Then we have \[|II_{2}^{+}|\lesssim t_{b,*}^{-k}\int_{c_{0}t_{b,*}^{-1}/2}^{c_{0}}\!\sigma^{-k -\epsilon}\,d\sigma\lesssim t_{b,*}^{-1+\epsilon}.\] The estimates for \(II_{1}^{+},II_{2}^{+}\) also hold after applying any number of derivatives \(t_{b,*}\partial_{t_{b,*}}\). We note that \(II_{1}^{-},II_{2}^{-}\) can be handled in a similar manner. 
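The splitting into \(II_{1}^{\pm}\) and \(II_{2}^{\pm}\) at \(\sigma\sim t_{b,*}^{-1}\) is the standard way to convert the conormal singularity \(\sigma^{-\epsilon}\) at \(\sigma=0\) into \(t_{b,*}^{-1+\epsilon}\) decay. The following small numerical sketch (ours, purely illustrative and not part of the proof) exhibits this rate for the scalar model \(\int_{0}^{c_{0}}e^{-it\sigma}\sigma^{-\epsilon}\,d\sigma\):

```python
import numpy as np

eps, c0, n = 0.3, 1.0, 2_000_000
dsig = c0 / n
sigma = (np.arange(n) + 0.5) * dsig   # midpoint rule; sigma^(-eps) is integrable at 0

def model_integral(t):
    # approximate the integral of exp(-i*t*sigma) * sigma^(-eps) over (0, c0]
    return np.sum(np.exp(-1j * t * sigma) * sigma ** (-eps)) * dsig

ts = np.array([1e2, 3e2, 1e3, 3e3, 1e4])
vals = np.array([abs(model_integral(t)) for t in ts])
# Expected decay |integral| ~ t^{-(1-eps)}: the log-log slopes should be close to -0.7.
print(np.diff(np.log(vals)) / np.diff(np.log(ts)))
```

The printed log-log slopes approach \(-(1-\epsilon)\), matching the bound \(|II_{1}^{+}|+|II_{2}^{+}|\lesssim t_{b,*}^{-1+\epsilon}\).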
Since \[II=\sigma^{-1}(\hat{g}_{1}^{\prime\prime\prime}(\sigma),\hat{A}_{1}^{\prime \prime\prime}(\sigma))=(2\delta_{g_{0}}^{*}\omega_{b,s_{1}}(\sigma),\widetilde{ \mathcal{L}}_{A_{b}}(\omega_{b,s_{1}}(\sigma))^{\sharp}+d\phi_{b,s_{1}}(\sigma))\] (13.25) where \[(\omega_{b,s_{1}}(\sigma),\phi_{b,s_{1}}(\sigma))\in\sum_{\pm}\mathcal{A}^{- \epsilon}(\pm[0,c_{0});\bar{H}_{\mathrm{b}}^{\infty,-\frac{3}{2}-}(\bar{X}; \widehat{\operatorname{sCT}}^{*}\bar{X})\oplus\bar{H}_{\mathrm{b}}^{\infty,- \frac{1}{2}-}(\bar{X};\mathbb{C})),\] we can write \[(\bar{g},\bar{A})=\int_{-c_{0}}^{c_{0}}\!e^{-it_{b,*}\sigma}II\,d\sigma=(\bar{g}_{ 1},\bar{A}_{1})+(\bar{g}_{2},\bar{A}_{2})\] where \[(\bar{g}_{1},\bar{A}_{1}) =\Big{(}2\delta^{*}_{g_{b}}\big{(}\chi(\langle\frac{\langle r\rangle} {\langle t_{b,*}\rangle})\omega_{b,s_{1}}(t_{b,*})\big{)},\ \widetilde{\mathcal{L}}_{A_{b}}\big{(}\chi(\langle\frac{\langle r\rangle}{ \langle t_{b,*}\rangle})\omega_{b,s_{1}}(t_{b,*})\big{)}^{\sharp}+d\big{(} \chi(\langle\frac{\langle r\rangle}{\langle t_{b,*}\rangle})\phi_{b,s_{1}}(t_{ b,*})\big{)}\Big{)},\] \[(\bar{g}_{2},\bar{A}_{2}) =\Big{(}2|\chi(\langle\frac{\langle r\rangle}{\langle t_{b,*} \rangle}),\delta^{*}_{g_{b}}|\omega_{b,s_{1}}(t_{b,*}),\ [\chi(\langle\frac{\langle r\rangle}{\langle t_{b,*}\rangle}), \widetilde{\mathcal{L}}_{A_{b}}]\big{(}\omega_{b,s_{1}}(t_{b,*})\big{)}^{ \sharp}+[\chi(\langle\frac{\langle r\rangle}{\langle t_{b,*}\rangle}),d]\phi_{ b,s_{1}}(t_{b,*})\Big{)}\] \[\qquad-\chi(\langle\frac{\langle r\rangle}{\langle t_{b,*}\rangle}) \Big{(}2\partial_{t_{b,*}}\omega_{b,s_{1}}\otimes_{s}dt_{b,*},\ \partial_{t_{b,*}}\big{(}A_{\alpha}(\omega_{b,s_{1}})^{\sharp,\alpha}+\phi_{b, s_{1}}\big{)}dt_{b,*}\Big{)}+\big{(}1-\chi(\langle\frac{\langle r\rangle}{ \langle t_{b,*}\rangle})\big{)}(\bar{g},\bar{A}).\] where \(\chi\) is a smooth nonnegative cutoff such that \(\chi(s)=1\) for \(s\leq 1/2\) and \(\chi(s)=0\) for \(s\geq 1\). Then the estimates (13.4) and (13.5) for \((\bar{g}_{1},\bar{A}_{1})\) and \((\bar{g}_{2},\bar{A}_{2})\) follow from the following estimates \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j} \omega_{b,s_{1}}(t_{b,*})| \lesssim\langle t_{b,*}\rangle^{-1+\epsilon}\|f\|_{\langle t_{b,*} \rangle^{-\frac{3}{2}-\bar{B}_{\mathrm{b},\epsilon}^{\epsilon,\epsilon+2}}},\] \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{ j}\phi_{b,s_{1}}(t_{b,*})| \lesssim\langle t_{b,*}\rangle^{-1+\epsilon}\langle r\rangle^{-1}\|f\|_{ \langle t_{b,*}\rangle^{-\frac{3}{2}-\bar{B}_{\mathrm{b},\epsilon}^{\epsilon +2}}},\] \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{ j}(\bar{g},\bar{A})| \lesssim\langle t_{b,*}\rangle^{-1+\epsilon}\langle r\rangle^{-2}\|f\|_{ \langle t_{b,*}\rangle^{-\frac{3}{2}-\bar{B}_{\mathrm{b},\epsilon}^{\epsilon +2}}}.\] ## 14. Proof of the main theorem In the initial value problem for Einstein-Maxwell system as formulated in SS3, we impose the initial data at the Cauchy hypersurface which terminates at spatial infinity (i.e., it coincides with the hypersurface \(t=0\) when \(r\) is large enough). In this section, we will discuss how to reduce this initial value problem to the inhomogeneous problem studied in the previous section SS13. 
We recall the time function \(\mathfrak{t}\) (corresponding to \(g_{b_{0}}\)) defined in (4.16), which satisfies that \(\mathfrak{t}=t\) for \(r\geq 4\mathbf{m}_{0}\), and \(d\mathfrak{t}\) is timelike everywhere on \(M\) with respect to \(g_{b_{0}}\) with \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0}),|\mathbf{Q}_{0}|\ll\mathbf{m}_{0}\), hence for \(g_{b}\) when \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\) is close to \(b_{0}\). We define \[\Sigma_{0}:=\mathfrak{t}^{-1}(0),\] which is a spacelike (with respect to \(g_{b}\)) Cauchy hypersurface. Identifying the region \(r\geq 4\mathbf{m}_{0}\) in \(\Sigma_{0}\) with a subset of \(\mathbb{R}^{3}\), we can compactify \(\Sigma_{0}\) at infinity to the manifold \(\tilde{\Sigma}_{0}\) (with two boundary components: one is inside the black hole and the other is the spatial infinity). We note that for large \(r\), the spacetime scattering cotangent bundle \({}^{\mathrm{sc}}\widetilde{T^{*}}\tilde{\Sigma}_{0}={}^{\mathrm{sc}}T^{*} \tilde{\Sigma}_{0}\oplus\mathbb{R}\,d\mathfrak{t}\) is spanned over \(C^{\infty}(\bar{\Sigma}_{0})\) by \(dt\) and \(dx^{1},dx^{2},dx^{3}\), where \((x^{1},x^{2},x^{3})\) are the standard coordinates on \(\mathbb{R}^{3}\). We first study the initial value problem for linearized gauge-fixed Einstein-Maxwell system \(L_{b}(\dot{g},\dot{A})=0\). **Theorem 14.1**.: _Let \(0<\alpha<1\) and \(s>8+m\), \(m\in\mathbb{N}\). Suppose_ \[(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1}) \in\bar{H}_{\mathrm{b}}^{s,-1/2+\alpha}(\bar{\Sigma}_{0};S^{2} \widetilde{scT^{*}}\bar{\Sigma}_{0})\oplus\bar{H}_{\mathrm{b}}^{s-1,1/2+ \alpha}(\bar{\Sigma}_{0};S^{2}\widetilde{scT^{*}}\bar{\Sigma}_{0})\] \[\oplus\bar{H}_{\mathrm{b}}^{s-1,-1/2+\alpha}(\bar{\Sigma}_{0}; \widetilde{scT^{*}}\bar{\Sigma}_{0})\oplus\bar{H}_{\mathrm{b}}^{s-2,1/2+ \alpha}(\bar{\Sigma}_{0};\widetilde{scT^{*}}\bar{\Sigma}_{0}).\] _Then the solution \((\dot{g},\dot{A})\) of the initial value problem_ \[L_{b}(\dot{g},\dot{A})=0,\quad(\dot{g},\dot{A})|_{\Sigma_{0}}=(\dot{g}_{0},\dot {A}_{0}),\quad(\mathcal{L}_{\partial_{\dot{g}}}\dot{g},\mathcal{L}_{\partial_{ \dot{h}}}\dot{A})|_{\Sigma_{0}}=(\dot{g}_{1},\dot{A}_{1})\] _satisfies the following properties. Let_ \[N^{s,\alpha}:=\|\dot{g}_{0}\|_{\bar{H}_{\mathrm{b}}^{s+1,-1/2+\alpha}}+\|\dot{g}_{ 1}\|_{\bar{H}_{\mathrm{b}}^{s,1/2+\alpha}}+\|\dot{A}_{0}\|_{\bar{H}_{\mathrm{b}}^{ s,-1/2+\alpha}}+\|\dot{A}_{1}\|_{\bar{H}_{\mathrm{b}}^{s,1/2+\alpha}}.\] 1. _For_ \(t_{b,*}\geq 0\)_, the solution_ \((\dot{g},\dot{A})\) _can be written as_ \[(\dot{g},\dot{A})=(\hat{g},\dot{A})+(\bar{g},\bar{A})+(\tilde{g},\tilde{A})\] (14.1) _where_ \((\hat{g},\dot{A})\in\widehat{\mathcal{K}}_{b}\) _is a generalized zero mode (defined in Proposition_ 10.5_), and_ \((\bar{g},\bar{A})\)_,_ \((\tilde{g},\tilde{A})\) _satisfy the following estimates._ 1. _The term_ \((\bar{g},\bar{A})\) _can be written as_ \((\bar{g},\bar{A})=(\bar{g}_{1},\bar{A}_{1})+(\bar{g}_{2},\bar{A}_{2})\) _where_ \((\bar{g}_{1},\bar{A}_{1})\) _is a pure gauge solution. 
Moreover, for_ \(m\geq 1\) _and_ \(j\geq 0\)_, we have the pointwise estimates_ \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\bar{g}_{1 },\bar{A}_{1})|\lesssim\big{(}\langle t_{b,*}+r\rangle^{-2+\epsilon}+\langle t _{b,*}\rangle^{-1+\epsilon}\langle r\rangle^{-2}\big{)}N^{s,\alpha}\] (14.2) _and_ \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\bar{g}_{2 },\bar{A}_{2})|\lesssim\langle t_{b,*}+r\rangle^{-2+\epsilon}N^{s,\alpha}\] (14.3) _where_ \(V^{j}\) _denotes the product of_ \(j\) _vector fields_ \(V\in\{\partial_{t_{b,*}},r\partial_{r},\text{ rotation fields }\}\)_._ 2. \((\tilde{g},\tilde{A})\) _satisfies: for_ \(0<\epsilon<1\) _with_ \(\alpha+\epsilon>1\) _and_ \(0\leq m\leq\frac{s-6}{2}\)_, we have_ \[\|(\tilde{g},\tilde{A})\|_{\langle t_{b,*}\rangle^{-\frac{3}{2}+\epsilon}\bar {H}^{s-6-m,-5/2+a+\epsilon-,m}_{b,*}}\lesssim N^{s,\alpha}\] (14.4) _For_ \(1\leq m\leq\frac{s-6}{2}\)_, we also have_ \[\|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}(\tilde{g},\tilde {A})\|_{\langle t_{b,*}\rangle^{-2+\epsilon}L^{\infty}(\mathbb{R}_{t_{b,*}}; \bar{H}^{s-6-m,-5/2+a+\epsilon-}_{b}(\bar{X}))}\lesssim N^{s,\alpha}.\] (14.5) _For_ \(m\geq 1\) _and_ \(0\leq j<s-8-m\) _with_ \(j\in\mathbb{N}_{0}\)_, we have the pointwise decay estimate_ \[|\big{(}\langle t_{b,*}\rangle\partial_{t_{b,*}}\big{)}^{m-1}V^{j}(\tilde{g}, \tilde{A})|\lesssim\langle t_{b,*}\rangle^{-2+\epsilon}\langle r\rangle^{- \alpha+\epsilon+1}N^{s,\alpha}.\] (14.6) 3. _In_ \(\mathfrak{t}\geq 0\)_,_ \(t_{b,*}\leq 0\)_, we have_ \((\dot{g},\dot{A})=r^{-1}H^{s-3}(\mathbb{R}_{t_{b,*}}\times\mathbb{S}^{2})+( \tilde{g},\tilde{A})\) _with_ \(|(\tilde{g},\tilde{A})|\lesssim r^{-1}(1+|t_{b,*}|)^{-\alpha}\)_._ Proof.: Let \(T_{1}<T_{2}\) and let \(\chi(t_{b,*})\) be a nonnegative smooth cutoff such that \(\chi=1\) for \(t_{b,*}\geq T_{2}\) and \(\chi(t_{b,*})=0\) for \(t_{b,*}\leq T_{1}\). Then we obtain the following equation \[L_{b}\Big{(}\chi(t_{b,*})(\dot{g},\dot{A})\Big{)}=[L_{b},\chi(t_{b,*})](\dot{ g},\dot{A}),\] of which the right-hand side is supported in \(t_{b,*}\in(T_{1},T_{2})\). In view of Theorem 13.1, it suffices to prove that \[L_{b}\Big{(}\chi(t_{b,*})(\dot{g},\dot{A})\Big{)}\in\widetilde{H}^{s-4-k,1/2+ \alpha-,k}_{\mathrm{b,c}},\quad k\in\mathbb{N}. \tag{14.7}\] Since the regularity with respect to \(\langle t_{b,*}\rangle D_{t_{b,*}}\) and \(D_{t_{b,*}}\) is equivalent in the support of \(L_{b}\Big{(}\chi(t_{b,*})(\dot{g},\dot{A})\Big{)}\), it suffices to prove (14.7) for \(k=0\). First, the global wellposedness theory of linear wave equations implies that \((\dot{g},\dot{A})\in H^{s-1}\) for \(\mathfrak{t}\geq 0\), \(t_{*}\leq C\), \(r\leq C\) for any \(C\); this implies that \([L_{b},\chi]\in H^{s-2}\) in such compact sets. Therefore, it suffices to work in an arbitrarily small neighborhood \(r>R_{0}\gg 1\) of infinity, which we shall do from now on. The proof closely follows the argument in [64, SSSS4-5]. Let \(b=(\mathbf{m},\mathbf{a},\mathbf{Q})\). 
Then we define \(b_{1}=(\mathbf{m},0,\mathbf{Q})\) and introduce the incoming and outgoing null coordinates \[x^{0}=t+r_{b_{1},*},\quad x^{1}=t-r_{b_{1},*},\] with respect to which we have \[g_{b_{1}}=-\mu_{b_{1}}dx^{0}\,dx^{1}+r^{2}\not{g}\] and \[2\partial_{0}\equiv 2\partial_{x^{0}}=\partial_{t}+\Big{(}1-\frac{2\mathbf{m}}{r}+\frac{\mathbf{Q}^{2}}{r^{2}}\Big{)}\partial_{r},\quad 2\partial_{1}\equiv 2\partial_{x^{1}}=\partial_{t}-\Big{(}1-\frac{2\mathbf{m}}{r}+\frac{\mathbf{Q}^{2}}{r^{2}}\Big{)}\partial_{r}.\] Let \(x^{2},x^{3}\) denote local coordinates on \(\mathbb{S}^{2}\), and denote spherical indices by \(c,d=2,3\); let \(\partial_{c}\equiv\partial_{x^{c}}\); let \(\bar{X}\) be the compactification of \(t_{b,*}^{-1}(0)\). Since \(g_{b}-g_{b_{1}}\in\rho^{2}C^{\infty}(\bar{X};S^{2}\,\widetilde{{}^{\rm sc}T^{*}}\bar{X})\), we have \[g_{b}(\partial_{0},\partial_{0}),\ (g_{b}-g_{b_{1}})(\partial_{0},\partial_{1}),\ g_{b}(\partial_{0},r^{-1}\partial_{c}),\] \[g_{b}(\partial_{1},\partial_{1}),\ g_{b}(\partial_{1},r^{-1}\partial_{c}),\ (g_{b}-g_{b_{1}})(r^{-1}\partial_{c},r^{-1}\partial_{d})\in\rho^{2}C^{\infty}.\] We let \[\rho_{I}=(r_{b_{1},*}-t)/r,\quad\rho_{0}=(r_{b_{1},*}-t)^{-1},\quad\rho=\rho_{I}\rho_{0}=\frac{1}{r}.\] Then we have \[\rho^{-3}L_{b}\rho=-\Box_{\rho^{2}g_{b}}\otimes\mathrm{Id}_{14\times 14}+R\] with \(R\in\widetilde{\mathrm{Diff}}{}^{1}_{\mathrm{b}}\) acting on sections of \(S^{2}\,\widetilde{{}^{\rm sc}T^{*}}\bar{X}\oplus\widetilde{{}^{\rm sc}T^{*}}\bar{X}\), where \(\widetilde{\mathrm{Diff}}{}^{k}_{\mathrm{b}}\) consists of up to \(k\)-fold products of the vector fields \(\{\rho_{I}\partial_{\rho_{I}},\ \rho_{0}\partial_{\rho_{0}},\ \text{rotation vector fields}\}\). We note that switching from \(L_{b}\) to \(\rho^{-3}L_{b}\rho\) is related to the Friedlander rescaling for the scalar wave equation [45]. One can then use the multiplier \(W:=\rho_{0}^{-2a_{0}}\rho_{I}^{-2a_{I}}V\) with \(V:=-(1+c_{V})\rho_{I}\partial_{\rho_{I}}+\rho_{0}\partial_{\rho_{0}}\) and \(a_{0}=\alpha\), \(a_{I}<0\), as introduced in [64, Lemma 4.4], to derive an energy estimate, which allows us to conclude that near \(\mathscr{I}^{+}\), \((\dot{g},\dot{A})\) lies in \(\widetilde{H}_{\rm b}^{s-1,-1/2-}\) (see [64, Proposition 4.8] for details; the present setting is in fact simpler than that in the reference, as we do not need the sharp decay rate for a particular component that was required in [64, Proposition 4.8]). To further determine the leading behavior of \((\dot{g},\dot{A})\) near \(\mathscr{I}\), one rewrites the equation \(\rho^{-3}L_{b}\rho(\rho^{-1}\dot{g},\rho^{-1}\dot{A})=0\) as \(2\rho^{-2}\partial_{0}\partial_{1}(\rho^{-1}\dot{g},\rho^{-1}\dot{A})=(\not{\Delta}+\tilde{R})(\rho^{-1}\dot{g},\rho^{-1}\dot{A})\), where \(\tilde{R}\in\rho\widetilde{\rm Diff}_{\rm b}^{2}+\widetilde{\rm Diff}_{\rm b}^{1}\). Since \(-2\rho^{-2}\partial_{0}\partial_{1}=\rho_{I}^{-1}(\rho_{0}\partial_{\rho_{0}}-\rho_{I}\partial_{\rho_{I}})\rho_{I}\partial_{\rho_{I}}+\widetilde{\rm Diff}_{\rm b}^{1}\), it follows that \[(\rho_{0}\partial_{\rho_{0}}-\rho_{I}\partial_{\rho_{I}})\rho_{I}\partial_{\rho_{I}}(\rho^{-1}\dot{g},\rho^{-1}\dot{A})\in\rho\widetilde{\rm Diff}_{\rm b}^{2}(\rho^{-1}\dot{g},\rho^{-1}\dot{A})\sim\rho_{0}^{\alpha}\rho_{I}^{a_{I}+1}.\] We choose \(a_{I}<0\) such that \(0<1+a_{I}<\alpha\) (see [64, Lemma 7.7]). 
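As a quick symbolic sanity check (ours, not part of the argument) of the null-coordinate relations introduced at the beginning of this step, the following sympy sketch verifies that the displayed vector fields \(2\partial_{0}=\partial_{t}+\mu\partial_{r}\), \(2\partial_{1}=\partial_{t}-\mu\partial_{r}\) are indeed dual to \(x^{0},x^{1}\), and that the aspherical part of \(g_{b_{1}}\) takes the stated null form:

```python
import sympy as sp

t, r = sp.symbols('t r')
m, Q = sp.symbols('m Q', positive=True)
mu = 1 - 2*m/r + Q**2/r**2

# Tortoise coordinate r_* with dr_*/dr = 1/mu, kept implicit as a function of r.
rstar = sp.Function('rstar')(r)
x0 = t + rstar          # outgoing null coordinate
x1 = t - rstar          # incoming null coordinate

def apply_2d0(f):       # candidate 2*partial_{x^0} = partial_t + mu*partial_r
    return sp.diff(f, t) + mu*sp.diff(f, r)

def apply_2d1(f):       # candidate 2*partial_{x^1} = partial_t - mu*partial_r
    return sp.diff(f, t) - mu*sp.diff(f, r)

sub = {sp.diff(rstar, r): 1/mu}
# 2*partial_{x^0} must map (x^0, x^1) to (2, 0), and 2*partial_{x^1} to (0, 2).
assert sp.simplify(apply_2d0(x0).subs(sub)) == 2
assert sp.simplify(apply_2d0(x1).subs(sub)) == 0
assert sp.simplify(apply_2d1(x0).subs(sub)) == 0
assert sp.simplify(apply_2d1(x1).subs(sub)) == 2

# Aspherical metric: -mu*dx^0*dx^1 = -mu*dt^2 + mu^{-1}*dr^2,
# using dx^0 = dt + dr/mu and dx^1 = dt - dr/mu as formal differentials.
dt_, dr_ = sp.symbols('dt dr')
dx0, dx1 = dt_ + dr_/mu, dt_ - dr_/mu
assert sp.expand(-mu*dx0*dx1 + mu*dt_**2 - dr_**2/mu) == 0
```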
Integrating first along the vector field \(\rho_{0}\partial_{\rho_{0}}-\rho_{I}\partial_{\rho_{I}}\) from \(\rho_{I}\geq\epsilon\), where \((\rho^{-1}\dot{g},\rho^{-1}\dot{A})\sim\rho_{0}^{\alpha}\), yields \[\rho_{I}\partial_{\rho_{I}}(\rho^{-1}\dot{g},\rho^{-1}\dot{A})\sim\rho_{0}^{\alpha}\rho_{I}^{1+a_{I}}.\] Then integrating out the vector field \(\rho_{I}\partial_{\rho_{I}}\) gives \[(\dot{g},\dot{A})-\rho H^{s-3}(\mathbb{R}_{t_{b_{1},*}}\times\mathbb{S}^{2})\sim\rho\rho_{0}^{\alpha}\rho_{I}^{1+a_{I}}.\] This proves statement 2. Moreover, in view of Lemma 4.11, we see that \[[L_{b},\chi(t_{b,*})]=2\rho\chi^{\prime}(t_{*})(\rho\partial_{\rho}-1)+\rho^{2}{\rm Diff}_{\rm sc}^{1}+\rho^{2}C^{\infty}\partial_{t_{*}}. \tag{14.8}\] Since \((\rho\partial_{\rho}-1)\) annihilates the leading order term \(\rho H^{s-3}(\mathbb{R}_{t_{b_{1},*}}\times\mathbb{S}^{2})\) in \((\dot{g},\dot{A})\), it follows that \(L_{b}(\chi(\dot{g},\dot{A}))=[L_{b},\chi](\dot{g},\dot{A})\in\widetilde{H}_{\rm b,c}^{s-4,1/2+\alpha-}\). Since \(s-4>4+m\), the estimates (14.2)-(14.6) now follow from Theorem 13.1, with \(\ell+2=\frac{1}{2}+\alpha-\), so \(\ell=-\frac{3}{2}+\alpha-\) and \(-1/2<\ell+\epsilon<1/2\) with \(\alpha+\epsilon>1\), \(0<\epsilon<1\). We now state our main theorem. **Theorem 14.2**.: _Let \(0<\alpha<1\) and \(s>8+m\), \(m\in\mathbb{N}\). Let_ \[(\dot{h},\dot{k},\dot{\bf E},\dot{\bf H})\in\bar{H}_{\rm b}^{s,-1/2+\alpha}(\Sigma_{0};S^{2\,\rm sc}T^{*}\Sigma_{0})\oplus\bar{H}_{\rm b}^{s-1,1/2+\alpha}(\Sigma_{0};S^{2\,\rm sc}T^{*}\Sigma_{0})\] \[\oplus\bar{H}_{\rm b}^{s-1,1/2+\alpha}(\Sigma_{0};{}^{\rm sc}T^{*}\Sigma_{0})\oplus\bar{H}_{\rm b}^{s,1/2+\alpha}(\Sigma_{0};{}^{\rm sc}T^{*}\Sigma_{0})\] _be the initial data for the Einstein-Maxwell equations linearized around the KN solution \((g_{b},A_{b})\). Suppose that the initial data satisfy the linearized constraint equations, i.e., the linearization of the constraint equations around the KN initial data \((h_{b},k_{b},{\bf E}_{b},{\bf H}_{b})=\tau(g_{b},dA_{b})\). Assume that the linearized magnetic charge vanishes, see the discussion in §3.2-3.3._ _Then there exists a solution \((\dot{g},\dot{A})\) of the initial value problem_ \[D_{g_{b}}{\rm Ric}(\dot{g})-2D_{(g_{b},dA_{b})}T(\dot{g},\dot{A})=0,\quad D_{(g_{b},A_{b})}(\delta_{(\cdot)}d(\cdot))(\dot{g},\dot{A})=0,\quad D_{(g_{b},A_{b})}\tau(\dot{g},\dot{A})=(\dot{h},\dot{k},\dot{\bf E},\dot{\bf H})\] _satisfying the linearized generalized wave map gauge and Lorenz gauge conditions_ \[D_{g_{b}}\widetilde{T}^{E}(\dot{g};g_{b})=0,\quad\delta_{g_{b}}\dot{A}=0.\] _Moreover, \((\dot{g},\dot{A})\) has the asymptotic behavior stated in Theorem 14.1._ Proof.: In view of Corollary 4.15, there exist sections \[(\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\in\bar{H}_{\rm b}^{s,-1/2+\alpha}(\bar{\Sigma}_{0};S^{2}\widetilde{{}^{\rm sc}T^{*}}\bar{\Sigma}_{0})\oplus\bar{H}_{\rm b}^{s-1,1/2+\alpha}(\bar{\Sigma}_{0};S^{2}\widetilde{{}^{\rm sc}T^{*}}\bar{\Sigma}_{0})\] \[\oplus\bar{H}_{\rm b}^{s-1,-1/2+\alpha}(\bar{\Sigma}_{0};\widetilde{{}^{\rm sc}T^{*}}\bar{\Sigma}_{0})\oplus\bar{H}_{\rm b}^{s-2,1/2+\alpha}(\bar{\Sigma}_{0};\widetilde{{}^{\rm sc}T^{*}}\bar{\Sigma}_{0}),\] such that they induce the data \((\dot{h},\dot{k},\dot{\bf E},\dot{\bf H})\) on \(\Sigma_{0}=\{\mathfrak{t}=0\}\), and satisfy the linearized gauge conditions \(D_{g_{b}}\widetilde{T}^{E}(\dot{g})=0\) and \(D_{(g_{b},A_{b})}\widetilde{\Upsilon}^{M}(\dot{g},\dot{A})=-\delta_{g_{b}}\dot{A}=0\) at \(\Sigma_{0}\). 
Then we solve the initial value problem \(L_{b}(\dot{g},\dot{A})=0\) with initial data \((\dot{g}_{0},\dot{g}_{1},\dot{A}_{0},\dot{A}_{1})\). Since \((\dot{h},\dot{k},\dot{\bf E},\dot{\bf H})\) satisfies the linearized constraint equations, it follows from Lemma 3.8 that \[D_{g_{b}}\widetilde{T}^{E}(\dot{g};g_{b})=0,\quad\delta_{g_{b}}\dot{A}=0\] and thus \((\dot{g},\dot{A})\) solves the linearized Einstein-Maxwell equations. ## Appendix A The linearized Einstein-Maxwell operator around Reissner-Nordstrom spacetime ### Detailed calculation of the linearized Einstein-Maxwell system We now calculate the linearized Einstein-Maxwell system \[\mathscr{L}(\dot{g},\dot{A}):=(\mathscr{L}_{1}(\dot{g},\dot{A}),\mathscr{L}_{2}( \dot{g},\dot{A})):=(D_{g}\mathrm{Ric}(\dot{g},d\dot{A})-2D_{(g,dA)}T(\dot{g},d \dot{A}),\ D_{(g,F)}\left(\delta_{g}F\right)(\dot{g},\dot{F}))=0\] in more detail. First, following [117, SS7.5], [54, SS3] we see that \[D_{g}\mathrm{Ric}(\dot{g})=-\frac{1}{2}\Box_{g}\dot{g}-\delta_{g}^{*}\delta_{g }G_{g}\dot{g}+\mathscr{R}_{g}\dot{g}\quad\text{where}\quad(\mathscr{R}_{g}\dot {g})_{\mu\nu}=R^{\alpha}_{\ \mu\nu}{}^{\beta}\dot{g}_{\alpha\beta}+\frac{1}{2}\left( \mathrm{Ric}_{\mu}{}^{\kappa}\dot{g}_{\nu\kappa}+\mathrm{Ric}_{\nu}{}^{\kappa} \dot{g}_{\mu\kappa}\right)\] Since \(T(g,F)=F_{\mu\alpha}F_{\nu}^{\ \alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{ \alpha\beta}\), we find \[D_{(g,F)}T(\dot{g},\dot{F}) =-\dot{g}^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}-\frac{1}{4} \left(\dot{g}_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}-g_{\mu\nu}\dot{g}^{ \kappa\alpha}g^{\lambda\beta}F_{\alpha\beta}F_{\kappa\lambda}-g_{\mu\nu}g^{ \kappa\alpha}\dot{g}^{\lambda\beta}F_{\alpha\beta}F_{\kappa\lambda}\right)\] (A.1) \[\quad+\left(g^{\alpha\beta}\dot{F}_{\mu\alpha}F_{\nu\beta}+g^{ \alpha\beta}F_{\mu\alpha}\dot{F}_{\nu\beta}\right)-\frac{1}{4}\left(g_{\mu\nu }\dot{F}_{\alpha\beta}F^{\alpha\beta}+g_{\mu\nu}F_{\alpha\beta}\dot{F}^{ \alpha\beta}\right)\] \[=-\dot{g}^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}-\frac{1}{4} \left(\dot{g}_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\!\!-\!2g_{\mu\nu}\dot{g} ^{\kappa\alpha}F_{\alpha\beta}F_{\kappa}^{\ \beta}\right)\!\!+\!\!\left(g^{\alpha\beta}\dot{F}_{\mu\alpha}F_{\nu\beta}+g ^{\alpha\beta}F_{\mu\alpha}\dot{F}_{\nu\beta}\right)\!\!-\!\frac{1}{2}g_{\mu \nu}\dot{F}_{\alpha\beta}F^{\alpha\beta}\] and thus the Einstein part \(\mathscr{L}_{1}\) of the linearized Einstein-Maxwell system is given by \[\mathscr{L}_{1}(\dot{g},d\dot{A})=D_{g}\mathrm{Ric}(\dot{g},d\dot{A})-2D_{(g, dA)}T(\dot{g},d\dot{A})=-\frac{1}{2}\Box_{g}\dot{g}-\delta_{g}^{*}\delta_{g}G_{g} \dot{g}+\mathscr{R}_{g}\dot{g}-2D_{(g,dA)}T(\dot{g},d\dot{A})=0.\] (A.2) Next we analyze the Maxwell part \(\mathscr{L}_{2}\) of the linearized Einstein-Maxwell system. 
Since \[(\delta_{g}F)_{\mu}=-\nabla^{\nu}F_{\nu\mu}=-g^{\nu\alpha}\nabla_{\alpha}F_{ \nu\mu}=-g^{\nu\alpha}\left(\partial_{\alpha}F_{\nu\mu}-\Gamma^{\kappa}_{ \alpha\nu}F_{\kappa\mu}-\Gamma^{\kappa}_{\alpha\mu}F_{\nu\kappa}\right),\] and \[\dot{\Gamma}^{\kappa}_{\alpha\nu}(\dot{g})=\frac{1}{2}g^{\kappa\lambda}\left( \nabla_{\nu}\dot{g}_{\alpha\lambda}+\nabla_{\alpha}\dot{g}_{\nu\lambda}-\nabla _{\lambda}\dot{g}_{\alpha\nu}\right),\] we obtain \[D_{(g,F)}\left(\delta_{g}F\right)(\dot{g},\dot{F})=\delta_{g} \dot{F}+\dot{g}^{\nu\alpha}\nabla_{\alpha}F_{\nu\mu}+g^{\nu\alpha}\left(\dot{ \Gamma}^{\kappa}_{\alpha\nu}(\dot{g})F_{\kappa\mu}+\dot{\Gamma}^{\kappa}_{ \alpha\mu}(\dot{g})F_{\nu\kappa}\right)\] (A.3) \[=\delta_{g}\dot{F}+\dot{g}^{\nu\alpha}\nabla_{\alpha}F_{\nu\mu}+g ^{\kappa\lambda}\nabla^{\alpha}(\dot{g}_{\alpha\lambda}-\frac{1}{2}g^{\nu \alpha}\dot{g}_{\alpha\nu}g_{\alpha\lambda})F_{\kappa\mu}+\frac{1}{2}(\nabla ^{\nu}\dot{g}^{\kappa}_{\ \mu}-\nabla^{\kappa}\dot{g}^{\nu}_{\ \mu})F_{\nu\kappa}\] \[=\delta_{g}\dot{F}+\dot{g}^{\nu\alpha}\nabla_{\alpha}F_{\nu\mu}-( \delta_{g}G_{g}\dot{g})^{\kappa}F_{\kappa\mu}+\frac{1}{2}(\nabla^{\nu}\dot{g} ^{\kappa}_{\ \mu}-\nabla^{\kappa}\dot{g}^{\nu}_{\ \mu})F_{\nu\kappa}\] where we use \((\nabla_{\mu}\dot{g}^{\nu\kappa})F_{\nu\kappa}=0\) in the second equality. Therefore, the Maxwell part \(\mathscr{L}_{2}\) of the linearized Einstein-Maxwell system is given by \[\mathscr{L}_{2}(\dot{g},d\dot{A})=\delta_{g}d\dot{A}+\dot{g}^{\nu\alpha}\nabla_ {\alpha}(dA)_{\nu\mu}-(\delta_{g}G_{g}\dot{g})^{\kappa}(dA)_{\kappa\mu}+\frac {1}{2}(\nabla^{\nu}\dot{g}^{\kappa}_{\ \mu}-\nabla^{\kappa}\dot{g}^{\nu}_{\ \mu})(dA)_{\nu\kappa}=0.\] (A.4) We then proceed to calculate the linearized Einstein-Maxwell system \(\mathscr{L}(\dot{g},\dot{A})=0\) around a Reissner-Nordstrom spacetime \(g=\dot{g}+r^{2}\not{g}\) under the splitting (6.9) and (6.10). ### Calculation of \(\mathscr{R}_{g}\dot{g}\) Here and in what follows the components we do not list are \(0\). Christoffel symbols of Reissner-Nordstrom metric \(g\) are, \[\Gamma^{k}_{ij}=\dot{\Gamma}^{k}_{ij}\quad\Gamma^{a}_{ij}=0,\quad \Gamma^{j}_{ia}=0,\quad\Gamma^{k}_{\ ab}=-r\partial^{k}r\not{g}_{\ ab},\] \[\Gamma^{b}_{ak}=r^{-1}\partial_{k}r\delta^{b}_{a},\quad\Gamma^{c}_{ ab}=\not{V}^{c}_{\ ab}.\] The components of Riemann curvature tensor are given by \[R_{ijkm}=\dot{\mathring{R}}_{ijkm}=\frac{1}{2}\dot{R}(\dot{g})\left(\dot{g}_{ik} \dot{g}_{jm}-\dot{g}_{im}\dot{g}_{jk}\right)=-\frac{1}{2}\mu^{\prime\prime}_{b_{ 0}}\left(\ddot{g}_{ik}\ddot{g}_{jm}-\dot{g}_{im}\dot{g}_{jk}\right),\] \[R_{aijk}=0,\quad R_{abij}=0,\quad R_{abci}=0,\quad R_{aibj}=-r\dot{\nabla}_{i} \dot{\nabla}_{j}r\not{g}_{ab}=-\frac{1}{2}r\mu^{\prime}_{b_{0}}\not{g}_{ab} \dot{g}_{ij},\] Correspondingly, the Ricci curvature tensor takes the form \[\text{Ric}_{ij}=\mathring{\text{Ric}}_{ij}-r^{-1}\mu^{\prime}_{b_{0}} \mathring{g}_{ij}=-\frac{1}{2}\mu^{\prime\prime}_{b_{0}}\mathring{g}_{ij}-r^{-1} \mu^{\prime}_{b_{0}}\mathring{g}_{ij},\quad\text{Ric}_{ia}=0,\] \[\text{Ric}_{ab}=(1-\mu_{b_{0}}-r\mu^{\prime}_{b_{0}})\not{g}_{ab}.\] Now we are ready to calculate \(\mathscr{R}_{g}\mathring{g}\) in detail. 
Since \((\mathscr{R}_{g}\mathring{g})_{\mu\nu}=R^{\alpha}_{\ \mu\nu}{}^{\beta}\mathring{g}_{\alpha\beta}+\frac{1}{2} \left(\text{Ric}_{\mu}^{\ \kappa}\mathring{g}_{\nu\kappa}+\text{Ric}_{\nu}^{\ \kappa}\mathring{g}_{\mu\kappa}\right)\), we have \[(\mathscr{R}_{g}\mathring{g})_{ij}=-\mu^{\prime\prime}_{b_{0}} \left(\mathring{g}_{ij}-\frac{1}{2}\mathring{g}^{km}\mathring{g}_{km}\mathring{ g}_{ij}\right)-r^{-1}\mu^{\prime}_{b_{0}}\mathring{g}_{ij}+\frac{1}{2}r^{-3}\mu^{ \prime}_{b_{0}}\not{g}^{ab}\mathring{g}_{ab}\mathring{g}_{ij},\] \[(\mathscr{R}_{g}\mathring{g})_{ia}=-\frac{3}{2}r^{-1}\mu^{\prime} _{b_{0}}\mathring{g}_{ia}-\frac{1}{4}\mu^{\prime\prime}_{b_{0}}\mathring{g}_{ ia}+\frac{1}{2}r^{-2}(1-\mu_{b_{0}})\mathring{g}_{ia},\] \[(\mathscr{R}_{g}\mathring{g})_{ab}=2r^{-2}(1-\mu_{b_{0}})\left( \mathring{g}_{ab}-\frac{1}{2}\not{g}^{cd}\mathring{g}_{cd}\not{g}_{ab}\right) -r^{-1}\mu^{\prime}_{b_{0}}\mathring{g}_{ab}+\frac{1}{2}r\mu^{\prime}_{b_{0}} \mathring{g}^{ij}\mathring{g}_{ij}\not{g}_{ab}.\] Therefore, with respect to the splitting of \(S^{2}T^{*}M\) (6.10), we can express \(\mathscr{R}_{g}:S^{2}T^{*}M\to S^{2}T^{*}M\) as follows \[\mathscr{R}_{g}=\begin{pmatrix}-\mu^{\prime\prime}_{b_{0}}G_{\mathring{g}}-r^ {-1}\mu^{\prime}_{b_{0}}&0&\frac{1}{2}r^{-3}\mu^{\prime}_{b_{0}}\mathring{g} \not{\mathfrak{r}}\\ 0&-\frac{3}{2}r^{-1}\mu^{\prime}_{b_{0}}-\frac{1}{4}\mu^{\prime\prime}_{b_{0} }+\frac{1}{2}r^{-2}(1-\mu_{b_{0}})&0\\ \frac{1}{2}r\mu^{\prime}_{b_{0}}\mathring{g}\mathring{\mathfrak{r}}&0&2r^{-2} (1-\mu_{b_{0}})G_{\not{g}}-r^{-1}\mu^{\prime}_{b_{0}}\end{pmatrix}\] (A.5) where \((G_{\mathring{g}}h)_{ij}=h_{ij}-\frac{1}{2}\mathring{g}_{ij}\mathring{\mathfrak{ r}}h\) and \((G_{\not{g}}h)_{ab}=h_{ab}-\frac{1}{2}\not{g}_{ab}\not{\mathfrak{r}}h\). ### Calculation of \(\delta^{*}_{g}\delta_{g}G_{g}\mathring{g}\) We now calculate the first covariant derivatives on 1-forms and symmetric 2-tensors. For \(\eta\in T^{*}M\), we compute \[\nabla_{i}\eta_{j}=\mathring{\nabla}_{i}\eta_{j},\quad\nabla_{i}\eta_{a}= \partial_{i}\eta_{a}-(r^{-1}\partial_{i}r)\eta_{a},\quad\nabla_{a}\eta_{i}= \partial_{a}\eta_{i}-(r^{-1}\partial_{i}r)\eta_{a},\quad\nabla_{a}\eta_{b}= \not{\nabla}_{a}\eta_{b}+r(\partial^{k}r)\eta_{k}\not{g}_{ab}.\] With respect to the splitting of \(T^{*}M\) (6.9), we write the symmetric gradient \(\delta^{*}_{g}:T^{*}M\to S^{2}T^{*}M\) with \((\delta^{*}_{g}\eta)_{\mu\nu}=\frac{1}{2}(\nabla_{\mu}\eta_{\nu}+\nabla_{\nu} \eta_{\mu})\) as follows \[\delta^{*}_{g}=\begin{pmatrix}\mathring{\delta}^{*}&0\\ \frac{1}{2}\not{d}&\frac{1}{2}r^{2}\mathring{d}r^{-2}\\ r\not{g}_{\text{${\ell}$}dr}&\not{\delta}^{*}\end{pmatrix}.\] (A.6) where \(\iota_{dr}:T^{*}\mathring{X}\to\mathbb{R}\) is the contraction with the aspherical vector field \(dr^{\sharp}\), i.e. \(\iota_{dr}(\cdot)=\mathring{g}^{-1}(dr,\cdot)\). 
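As a sanity check on the curvature components entering (A.5), the following sympy sketch (our own verification, not part of the paper) confirms the Ricci components of the Reissner-Nordstrom metric recorded above:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
m, Q = sp.symbols('m Q', positive=True)
mu = 1 - 2*m/r + Q**2/r**2
x = [t, r, th, ph]
g = sp.diag(-mu, 1/mu, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^k_{ij} of the Reissner-Nordstrom metric.
Gam = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], x[j])
                                     + sp.diff(g[l, j], x[i])
                                     - sp.diff(g[i, j], x[l]))
                         for l in range(4))/2)
         for j in range(4)] for i in range(4)] for k in range(4)]

def ricci(i, j):
    # Ric_{ij} = d_k Gam^k_{ij} - d_j Gam^k_{ik}
    #            + Gam^k_{kl} Gam^l_{ij} - Gam^k_{jl} Gam^l_{ik}
    val = sum(sp.diff(Gam[k][i][j], x[k]) - sp.diff(Gam[k][i][k], x[j])
              for k in range(4))
    val += sum(Gam[k][k][l]*Gam[l][i][j] - Gam[k][j][l]*Gam[l][i][k]
               for k in range(4) for l in range(4))
    return sp.simplify(val)

mup, mupp = sp.diff(mu, r), sp.diff(mu, r, 2)
# Aspherical block: Ric_{ij} = (-mu''/2 - mu'/r) * gdot_{ij}.
assert sp.simplify(ricci(0, 0) - (-mupp/2 - mup/r)*g[0, 0]) == 0
assert sp.simplify(ricci(1, 1) - (-mupp/2 - mup/r)*g[1, 1]) == 0
# Spherical block: Ric_{ab} = (1 - mu - r*mu') * slashg_{ab}; slashg_{theta,theta} = 1.
assert sp.simplify(ricci(2, 2) - (1 - mu - r*mup)) == 0
```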
We then calculate the first covariant derivatives on symmetric 2-tensor \(h\in S^{2}T^{*}M\) \[\nabla_{i}h_{jk}=\mathring{\nabla}_{i}h_{jk},\quad\nabla_{i}h_{ja }=\mathring{\nabla}_{i}h_{ja}-r^{-1}(\partial_{i}r)h_{ja},\quad\nabla_{i}h_{ab} =\partial_{i}h_{ab}-2r^{-1}(\partial_{i}r)h_{ab}\] \[\nabla_{a}h_{ij}=\partial_{a}h_{ij}-r^{-1}(\partial_{i}r)h_{aj}-r ^{-1}(\partial_{j}r)h_{ia},\quad\nabla_{a}h_{ib}=\not{\nabla}_{a}h_{ib}+r( \partial^{k}r)h_{ik}\not{g}_{ab}-r^{-1}(\partial_{i}r)h_{ab},\] \[\nabla_{a}h_{bc}=\not{\nabla}_{a}h_{bc}+r(\partial^{k}r)(h_{ck} \not{g}_{ab}+h_{kb}\not{g}_{ac}).\] Therefore the negative divergence \((\delta_{g}h)_{\mu}=-\nabla^{\nu}h_{\nu\mu}\) which maps symmetric 2-tensors to 1-form takes the following form (under the splitting (6.10) and (6.9)) \[\delta_{g}=\begin{pmatrix}r^{-2}\mathring{\delta}r^{2}&r^{-2}\not{g}&r^{-3}(dr) \not{\mathfrak{r}}\\ 0&r^{-2}\mathring{\delta}r^{2}&r^{-2}\not{\delta}\end{pmatrix}\] (A.7) where \((\not{h}h)_{i}=-\not{\nabla}^{a}h_{ai}\) and \((\mathring{\delta}h)_{a}=-\mathring{\nabla}^{i}h_{ia}\). We can also express \((G_{g}h)_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\text{tr}_{g}h\) as \[G_{g}=\begin{pmatrix}G_{\mathring{g}}&0&-\frac{1}{2}r^{-2}\mathring{g}\not{ \mathfrak{r}}\\ 0&1&0\\ -\frac{1}{2}r^{2}\not{g}\mathring{\mathfrak{r}}&0&G_{\not{g}}\end{pmatrix}.\] (A.8) Putting (A.6), (A.7) and (A.8) together and using \(\not{\delta}G_{\not{g}}=\not{\delta}+\frac{1}{2}\not{d}\not{\mathfrak{r}}\), \(\mathring{\delta}G_{\mathring{g}}=\mathring{\delta}+\frac{1}{2}\mathring{d} \mathring{\mathfrak{r}}\) yield \[\delta^{*}_{g}\delta_{g}G_{g}=\begin{pmatrix}\mathring{\delta}^{*}r^{-2} \mathring{\delta}r^{2}+\frac{1}{2}\mathring{\delta}^{*}\mathring{d} \mathring{\mathfrak{r}}&\mathring{\delta}^{*}r^{-2}\not{\delta}\\ \frac{1}{2}r^{-2}\not{d}\mathring{\delta}r^{2}+\frac{1}{4}\mathring{d}\not{ \mathfrak{r}}+\frac{1}{4}r^{2}\mathring{d}r^{-2}\mathring{d}r^{-2}\not{d} \mathring{\mathfrak{r}}&\frac{1}{2}r^{2}\mathring{d}r^{-4}\mathring{\delta}r^{2 }&\frac{1}{4}r^{-2}\mathring{d}\not{\mathfrak{r}}+\frac{1}{2}r^{2}\mathring{d}r^{- 4}\not{\mathfrak{r}}+\frac{1}{4}r^{2}\mathring{d}r^{-4}\not{\mathfrak{r}}\\ r^{-1}\not{g}_{\text{${\ell}$}dr}\mathring{\delta}r^{2}+\frac{1}{2}\not{r} \not{g}_{\text{${\ell}$}dr}\mathring{\mathfrak{r}}&r^{-1}\not{g}_{\text{${\ell}$} dr}\mathring{\mathfrak{r}}&\frac{1}{2}r^{-1}\not{g}_{\text{${\ell}$}dr}\mathring{\mathfrak{d}} \mathring{\mathfrak{r}}+r^{-2}\not{\delta}^{*}\mathring{\delta}+\frac{1}{2}r^{-2} \not{\delta}^{*}\mathring{d}\mathring{\mathfrak{r}}\end{pmatrix}.\ ### Calculation of \(\Box_{g}\) Next we compute the second covariant derivatives on symmetric 2-tensor \(h\in S^{2}T^{*}M\). 
Since our goal is to find the form of \(\Box_{g}=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) applied to \(h\), it suffices to compute the following second covariant derivatives \[\begin{split}\nabla_{i}\nabla_{j}h_{km}&=\mathring{ \nabla}_{i}\mathring{\nabla}_{j}h_{km},\\ \nabla_{a}\nabla_{b}h_{km}&=\mathring{\nabla}_{a} \mathring{\nabla}_{b}h_{km}-r^{-1}\partial_{k}r\left(\mathring{\nabla}_{a}h_{ bm}+\mathring{\nabla}_{b}h_{am}\right)-r^{-1}\partial_{m}r\left(\mathring{\nabla}_{a}h_{ kb}+\mathring{\nabla}_{b}h_{ka}\right)\\ &\quad+r\partial^{i}r\mathring{\nabla}_{i}h_{km}\not{g}_{ab}+2r^{ -2}\partial_{k}r\partial_{m}rh_{ab}-\not{g}_{ab}\left(\partial_{k}r\partial^{ i}rh_{im}+\partial_{m}r\partial^{i}rh_{ik}\right)\\ \nabla_{i}\nabla_{j}h_{kc}&=\mathring{\nabla}_{i} \mathring{\nabla}_{j}h_{kc}-r^{-1}\partial_{i}r\mathring{\nabla}_{j}h_{kc}-r^{ -1}\partial_{j}r\mathring{\nabla}_{i}h_{kc}+2r^{-2}\partial_{1}r\partial_{j} rh_{kc}-\frac{1}{2}r^{-1}\mu^{\prime}_{b_{0}}\mathring{g}_{ij}h_{kc},\\ \nabla_{a}\nabla_{b}h_{kc}&=\mathring{\nabla}_{a} \mathring{\nabla}_{b}h_{kc}-r^{-1}\partial_{k}r\left(\mathring{\nabla}_{a}h_{ bc}+\mathring{\nabla}_{b}h_{ac}\right)+\not{g}_{ac}\partial^{i}r\mathring{ \nabla}_{b}h_{ik}+\not{r}\partial_{bc}\partial^{i}r\mathring{\nabla}_{a}h_{ ik}+r\not{g}_{ab}\partial^{i}r\mathring{\nabla}_{i}h_{kc}\\ &\quad-\partial_{k}r\partial^{i}r\left(\not{g}_{ab}h_{ic}+\not{ g}_{ac}h_{ib}+\not{g}_{bc}h_{ib}\right)-\mu_{b_{0}}\left(\not{g}_{ab}h_{kc}+\not{g}_{ac} h_{kb}\right)\\ \nabla_{i}\nabla_{j}h_{cd}&=\mathring{\nabla}_{i} \mathring{\nabla}_{j}h_{cd}-2r^{-1}\left(\partial_{i}r\mathring{\nabla}_{j}h_{ cd}+\partial_{j}r\mathring{\nabla}_{i}h_{cd}\right)+6r^{-2}\partial_{i}r \partial_{j}rh_{cd}-r^{-1}\mu^{\prime}_{b_{0}}\mathring{g}_{ij}h_{cd},\\ \nabla_{a}\nabla_{b}h_{cd}&=\mathring{\nabla}_{a} \mathring{\nabla}_{b}h_{cd}+r\partial^{i}r\left(\not{g}_{bc}\mathring{\nabla}_ {a}h_{di}+\not{g}_{bd}\mathring{\nabla}_{a}h_{ci}+\not{g}_{ac}\mathring{\nabla }_{b}h_{di}+\not{g}_{ad}\mathring{\nabla}_{b}h_{ci}+\not{g}_{ab}\mathring{ \nabla}_{i}h_{cd}\right)\\ &\quad-\mu_{b_{0}}\left(2\not{g}_{ab}h_{cd}+\not{g}_{ac}h_{bd}+ \not{g}_{ad}h_{bc}\right)+r^{2}\partial^{i}r\partial^{j}rh_{ij}\left(\not{g}_ {ac}\not{g}_{bd}+\not{g}_{ad}\not{g}_{bc}\right)\end{split}\] (A.10) where \(\mathring{\Box}\) acting on aspherical symmetric 2-tensors is the tensor wave operator, on mixed symmetric tensors is the 1-form wave operator acting on the aspherical part, and on spherical symmetric tensors is the scalar wave operator acting on the aspherical variables; the operator \(\mathring{\Delta}\) is defined in a similar manner. 
Then we find \[\begin{split}\frac{1}{2}\Box_{g}&=\begin{pmatrix} \frac{1}{2}\mathring{\Box}+\frac{1}{2}r^{-2}\not{\Delta}&2r^{-3}(dr)\otimes_{s} \not{g}&r^{-4}(dr\otimes dr)\not{\epsilon}\\ r^{-1}\iota_{dr}\not{\epsilon}&\frac{1}{2}\mathring{\Box}+\frac{1}{2}r^{ -2}\not{\Delta}&r^{-3}(dr)\otimes\not{g}\\ \not{\epsilon}\iota_{dr}\iota_{dr}&2r^{-1}\iota_{dr}\not{\epsilon}^{*}& \frac{1}{2}\mathring{\Box}+\frac{1}{2}r^{-2}\not{\Delta}\\ &+\begin{pmatrix}r^{-1}\partial^{i}r\mathring{\nabla}_{i}\!-\!2r^{-2}(dr) \otimes_{s}\iota_{dr}&0\\ 0&-\frac{1}{2}r^{-2}(\mu_{b_{0}}+r\mu^{\prime}_{b_{0}})\!-\!2r^{-2}(dr)\otimes \iota_{dr}&0\\ 0&0&-r^{-1}\partial^{i}r\mathring{\nabla}_{i}\!-\!r^{-1}\mu^{\prime}_{b_{0}} \end{pmatrix}.\end{split}\] (A.11) On the aspherical part \(\mathring{X}\), we further calculate the relevant operators on the static region where \(\mu_{b_{0}}>0\) by working in \((t,r)\) coordinates. On the static region, the metric \(\mathring{g}\) reads \[\mathring{g}=-\mu_{b_{0}}dt^{2}+\mu_{b_{0}}^{-1}dr^{2}.\] We further split \[T^{*}\mathring{X}=\langle\hat{d}t\rangle\oplus\langle\hat{d}r\rangle,\quad S^{ 2}T^{*}\mathring{X}=\langle\hat{d}t^{2}\rangle\oplus\langle 2\hat{d}t\hat{d}r\rangle\oplus \langle\hat{d}r^{2}\rangle.\] (A.12) Since \(\mathring{\Gamma}^{t}_{tr}=\mathring{\Gamma}^{t}_{rt}=\frac{1}{2}\mu_{b_{0}}^{-1 }\mu_{b_{0}}^{-1},\mathring{\Gamma}^{r}_{tt}=\frac{1}{2}\mu_{b_{0}}\mu_{b_{0}}^ {\prime},\mathring{\Gamma}^{r}_{rr}=-\frac{1}{2}\mu_{b_{0}}^{-1}\mu_{b_{0}}^{ \prime}\), with respect to the above split (A.12), we see that acting on scalar functions, \(\partial^{i}r\mathring{\nabla}_{i}=\mu_{b_{0}}\partial_{r}\), \[\dot{d}=\begin{pmatrix}\partial_{t}\\ \partial_{r}\end{pmatrix},\quad\dot{\Box}=-\mu_{b_{0}}^{-1}\partial_{t}^{2}+ \mu_{b_{0}}\partial_{r}^{2}+\mu_{b_{0}}^{\prime}\partial_{r}=-\mu_{b_{0}}^{-1} \partial_{t}^{2}+\partial_{r}\mu_{b_{0}}\partial_{r}\] (A.13) On 1-forms, we have \(\iota_{dr}=(0,\ \mu_{b_{0}})\), \[\begin{split}\mathring{\delta}&=(\mu_{b_{0}}^{-1} \partial_{t},\ -\mu_{b_{0}}\partial_{r}-\mu_{b_{0}}^{\prime})=(\mu_{b_{0}}^{-1}\partial_{t},\ -\partial_{r}\mu_{b_{0}}),\quad\dot{\delta}^{*}=\begin{pmatrix} \partial_{t}&-\frac{1}{2}\mu_{b_{0}}\mu_{b_{0}}^{\prime}\\ \frac{1}{2}\mu_{b_{0}}\partial_{r}\mu_{b_{0}}^{-1}&\frac{1}{2}\partial_{t}\\ 0&\mu_{b_{0}}^{-1}/2\partial_{r}\mu_{b_{0}}^{1/2}\end{pmatrix},\\ \partial^{i}r\mathring{\nabla}_{i}=\begin{pmatrix}\mu_{b_{0}} \partial_{r}-\frac{1}{2}\mu_{b_{0}}^{\prime}&0\\ 0&\mu_{b_{0}}^{-1}\partial_{r}\mu_{b_{0}}^{-1/2}&0\\ \end{pmatrix},\\ \mathring{*}&=\begin{pmatrix}0&-\mu_{b_{0}}\\ -\mu_{b_{0}}^{-1}&0\end{pmatrix},\quad 2dr\otimes_{s}(\cdot)=\begin{pmatrix}0&0\\ 1&0\\ 0&2\end{pmatrix}.\end{split}\] (A.14) and \[\dot{\square}=\begin{pmatrix}-\mu_{b_{0}}^{-1}\partial_{t}^{2}+\mu_{b_{0}}\partial_ {r}^{2}-\frac{1}{2}\mu_{b_{0}}^{\prime\prime}&\mu_{b_{0}}^{\prime}\partial_{t} \\ \mu_{b_{0}}^{-2}\mu_{b_{0}}^{\prime}\partial_{t}&-\mu_{b_{0}}^{-1}\partial_{t}^{ 2}+\mu_{b_{0}}\partial_{r}^{2}+2\mu_{b_{0}}^{\prime}\partial_{r}+\frac{1}{2} \mu_{b_{0}}^{\prime\prime}\end{pmatrix}.\] (A.15) On symmetric 2-tensors, we find that \[\dot{\delta}=\begin{pmatrix}\mu_{b_{0}}^{-1}\partial_{t}&-\partial_{r}\mu_{b _{0}}&0\\ -\frac{1}{2}\mu_{b_{0}}^{\prime}\mu_{b_{0}}^{-2}&\mu_{b_{0}}^{-1}\partial_{t} &-\mu_{b_{0}}^{-1/2}\partial_{r}\mu_{b_{0}}^{3/2}\end{pmatrix},\quad\iota_{dr }=\begin{pmatrix}0&\mu_{b_{0}}&0\\ 0&0&\mu_{b_{0}}\end{pmatrix},\] \[\partial^{i}r\hat{\nabla}_{i}=\begin{pmatrix}\mu_{b_{0}}\partial_{r}-\mu_{b_ 
{0}}^{\prime}&0&0\\ 0&\mu_{b_{0}}\partial_{r}&0\\ 0&0&\mu_{b_{0}}\partial_{r}+\mu_{b_{0}}^{\prime}\end{pmatrix}=\begin{pmatrix} \mu_{b_{0}}^{2}\partial_{r}\mu_{b_{0}}^{-1}&0&0\\ 0&\mu_{b_{0}}\partial_{r}&0\\ 0&0&\partial_{r}\mu_{b_{0}}\end{pmatrix},\] (A.16) \[\dot{\square}=-\mu_{b_{0}}^{-1}\partial_{t}^{2}+\mu_{b_{0}}\partial_{r}^{2}+ \begin{pmatrix}-\mu_{b_{0}}^{\prime}\partial_{r}+\frac{1}{2}\mu_{b_{0}}^{-1} \mu_{b_{0}}^{\prime 2}-\mu_{b_{0}}^{\prime\prime}&2\mu_{b_{0}}^{\prime} \partial_{t}&-\frac{1}{2}\mu_{b_{0}}\mu_{b_{0}}^{\prime 2}\\ \mu_{b_{0}}^{-2}\mu_{b_{0}}^{\prime}\partial_{t}&\mu_{b_{0}}^{\prime}\partial_ {r}-\mu_{b_{0}}^{-1}\mu_{b_{0}}^{\prime 2}&\mu_{b_{0}}^{\prime}\partial_{t}\\ -\frac{1}{2}\mu_{b_{0}}^{\prime}\mu_{b_{0}}^{\prime 2}&2\mu_{b_{0}}^{-2}\mu_{b_{0}}^{ \prime}\partial_{t}&3\mu_{b_{0}}^{\prime}\partial_{r}+\frac{1}{2}\mu_{b_{0}}^ {-1}\mu_{b_{0}}^{\prime 2}+\mu_{b_{0}}^{\prime\prime}\end{pmatrix}.\] ### Calculation of \(D_{(g,dA)}T(\dot{g},d\dot{A})\) Recall that the Reissner-Nordstrom electromagnetic 4-potential is given by \[A=\mathbf{Q}r^{-1}dt_{*},\quad F=\mathbf{Q}r^{-2}dt_{*}\wedge dr=\mathbf{Q}r^{ -2}\widetilde{\text{vol}}\] (A.17) Since \(\mathring{\text{vol}}_{ij}=\epsilon_{ij},\mathring{\text{vol}}_{ia}=\mathring {\text{vol}}_{ab}=0\) where \(\epsilon\) is the Levi-Civita symbol, i.e., \(\epsilon_{12}=-\epsilon_{21}=1\), it is clear that \(\mathring{\text{vol}}^{\alpha\beta}=-\mathring{\text{vol}}_{\alpha\beta}\). A direct computation implies \[F^{\alpha\beta}F_{\alpha\beta}=-\frac{2\mathbf{Q}^{2}}{r^{4}},\quad\dot{g}^{ \alpha\beta}F_{\mu\alpha}F_{\nu\beta}=\begin{cases}\frac{\mathbf{Q}^{2}}{r^{4 }}\left(\hat{g}_{\mu\nu}-\hat{g}_{\mu\nu}\mathring{\text{tr}}\hat{g}\right)& \text{if }\mu,\nu\in\{i,j\}\\ 0&\text{otherwise}\end{cases}.\] Then according to (A.1), we find \[\begin{split}\left(D_{(g,dA)}T(\dot{g},0)\right)_{ij}&=- \frac{\mathbf{Q}^{2}}{r^{4}}\left(\dot{g}_{ij}-\mathring{g}_{ij}\mathring{ \text{tr}}\dot{g}-\frac{1}{2}\dot{g}_{ij}+\frac{1}{2}\dot{g}_{ij}\mathring{ \text{tr}}\dot{g}\right)=-\frac{\mathbf{Q}^{2}}{2r^{4}}\left(\dot{g}_{ij}- \mathring{g}_{ij}\mathring{\text{tr}}\dot{g}\right),\\ \left(D_{(g,dA)}T(\dot{g},0)\right)_{ia}&=\frac{\mathbf{Q}^{2}}{2r^{4}} \dot{g}_{ia},\quad\left(D_{(g,dA)}T(\dot{g},0)\right)_{ab}=\frac{\mathbf{Q}^{ 2}}{2r^{4}}\left(\dot{g}_{ab}-r^{2}\not{g}_{ab}\mathring{\text{tr}}\dot{g} \right)\end{split}\] (A.18) which means \[D_{(g,dA)}T(\cdot,0)=\frac{\mathbf{Q}^{2}}{2r^{4}}\begin{pmatrix}\mathring{g} \mathring{\text{tr}}\!-\!1&0&0\\ 0&1&0\\ -r^{2}\mathring{g}\mathring{\text{tr}}&0&1\end{pmatrix}.\] (A.19) Since \[F^{\alpha\beta}\dot{F}_{\alpha\beta}=\mathbf{Q}r^{-2}\mathring{ \text{vol}}^{ij}\dot{F}_{ij},\] \[g^{\alpha\beta}\dot{F}_{i\alpha}F_{j\beta}+g^{\alpha\beta}F_{i \alpha}\dot{F}_{j\beta}=-2\mathbf{Q}r^{-2}\dot{F}_{i,\tau}\hat{g}_{ij}=2 \mathbf{Q}r^{-2}\dot{F}_{i,\tau}(\mathring{\text{*vol}})\hat{g}_{ij},\] \[g^{\alpha\beta}\dot{F}_{a\alpha}F_{i\beta}=\mathring{g}^{kj}\dot{F }_{ak}F_{ij}=-\mathbf{Q}r^{-2}\mathring{g}^{kj}\epsilon_{ij}\dot{F}_{ka}= \mathbf{Q}r^{-2}\mathring{\text{*}}\dot{F}_{\bullet a}\] where we use the fact \(\mathring{\text{*vol}}=-1\) and the Hodge star operator \(\mathring{\text{*}}\) only acts on the aspherical part, we have \[D_{(g,dA)}T(0,d(\cdot))=-\mathbf{Q}r^{-2}\begin{pmatrix}-\mathring{g}\mathring{ \text{*}}\dot{\text{*}}\dot{d}&0\\ \mathring{\text{*}}\not{d}&-\mathring{\text{*}}\dot{d}\\ r^{2}\not{g}\mathring{\text{*}}\dot{d}&0\end{pmatrix}.\] (A.20) This finishes the calculation 
of \(\mathscr{L}_{1}(\dot{g},d\dot{A})\). ### Calculation of \(D_{(g,dA)}(\delta_{g}dA)(\dot{g},d\dot{A})\) It remains to consider \(\mathscr{L}_{2}(\dot{g},d\dot{A})\) which is the linearization of \(\delta_{g}dA\). We again use the following splitting \[T^{*}M=T^{*}_{AS}\oplus T^{*}_{S},\quad\wedge^{2}T^{*}M=\wedge^{2}T^{*}_{AS} \oplus(T^{*}_{AS}\wedge T^{*}_{S})\oplus\wedge^{2}T^{*}_{S}.\] (A.21) With respect to the above splitting (A.21), \(d:T^{*}M\rightarrow\wedge^{2}T^{*}M\) and \(\delta_{g}:\wedge^{2}T^{*}M\to T^{*}M\) take the form \[d=\begin{pmatrix}\dot{d}&0\\ -\not{d}&\dot{d}\\ 0&\not{d}\end{pmatrix},\quad\delta_{g}=\begin{pmatrix}r^{-2}\mathring{\text{ *}}\dot{\text{*}}r^{2}&-r^{-2}\not{d}&0\\ 0&\mathring{\text{*}}&r^{-2}\not{d}\end{pmatrix},\] and thus by (A.4) \[D_{(g,dA)}(\delta_{g}dA)(0,d(\cdot))=\begin{pmatrix}r^{-2}\mathring{\delta}r^{2} \mathring{d}+r^{-2}\not{\delta}\not{\delta}&-r^{-2}\not{\delta}\mathring{d}\\ -\mathring{\delta}\not{d}&\mathring{\delta}\mathring{d}+r^{-2}\not{\delta} \not{\delta}\mathring{d}\end{pmatrix}.\] (A.22) Finally we are left with dealing with \(D_{(g,dA)}(\delta_{g}dA)(\mathring{g},0)\). We first calculate the first covariant derivative on the 2-form \(F\). A direct calculation implies \(\mathring{\nabla}_{k}\mathring{\mathrm{vol}}_{ij}=0\), and then \[\nabla_{k}F_{ij}=-2r^{-1}(\partial_{k}r)F_{ij},\quad\nabla_{i}F_{ja}=0,\quad \nabla_{i}F_{ab}=0,\quad\nabla_{a}F_{ij}=0,\quad\nabla_{a}F_{ib}=r(\partial^{k }r)\not{g}_{ab}F_{ik},\quad\nabla_{a}F_{bc}=0.\] Therefore we have \[\mathring{g}^{\nu\alpha}\nabla_{\alpha}F_{\nu i}=-2r^{-1}(\partial _{j}r)F_{ki}g^{jk}+r^{-3}\not{g}^{ab}\mathring{g}_{ab}(\partial^{k}r)F_{ki}=- 2\mathbf{Q}r^{-3}\mathring{*}\big{(}(\partial^{j}r)\mathring{g}_{jk}\big{)}+ \mathbf{Q}r^{-5}(\mathring{*}dr)\mathfrak{g}^{ab}\mathring{g}_{ab}\] \[\mathring{g}^{\nu\alpha}\nabla_{\alpha}F_{\nu a}=\mathring{g}^{ bi}\nabla_{b}F_{ia}=\frac{\mathbf{Q}}{r^{3}}(\partial^{k}r)(\mathring{*} \mathring{g}_{\bullet a})_{k}\] We also find \((\delta_{g}G_{g}\mathring{g})^{\kappa}F_{\kappa a}=0\) and \(-(\delta_{g}G_{g}\mathring{g})^{\kappa}F_{\kappa i}=-\mathbf{Q}r^{-2}\mathring {*}\,(\delta_{g}G_{g}\mathring{g})\), thus \[-(\delta_{g}G_{g}(\cdot))^{\kappa}F_{\kappa i}=-\frac{\mathbf{Q}}{r^{2}} \begin{pmatrix}r^{-2}\mathring{*}\mathring{\delta}r^{2}+\frac{1}{2}\mathring{* }\mathring{d}\mathring{\mathring{\mathrm{tr}}}&r^{-2}\mathring{*}\not{\delta} &\frac{1}{2}r^{-2}\mathring{*}\mathring{d}\mathring{\mathrm{tr}}\\ 0&0&0\end{pmatrix}.\] Since \(\mathring{g}^{ij}\epsilon_{ik}\epsilon_{kl}=\mathring{g}_{kl}-\mathring{g}_{kl }\mathring{\mathring{\mathrm{tr}}}\mathring{g}=-\mathring{g}_{kl}\) and \(\mathring{\nabla}\mathring{\mathrm{vol}}_{ij}=0\), we find \[\frac{1}{2}\left(\nabla^{\nu}\mathring{g}_{i}^{\kappa}-\nabla^{ \kappa}\mathring{g}_{i}^{\nu}\right)F_{\nu\kappa} =-\frac{\mathbf{Q}}{2r^{2}}(\mathring{\nabla}^{k}\mathring{g}^{lj }-\mathring{\nabla}^{l}\mathring{g}^{kj})\mathring{g}^{mn}\epsilon_{im} \epsilon_{il}\] \[=-\frac{\mathbf{Q}}{2r^{2}}\Big{(}\mathring{\nabla}^{k}(\mathring {g}^{lj}\epsilon_{jm}\epsilon_{kl})-\mathring{\nabla}^{l}(\mathring{g}^{kj} \epsilon_{jm}\epsilon_{kl})\Big{)}\mathring{g}^{mn}\epsilon_{in}\] \[=\frac{\mathbf{Q}}{2r^{2}}\Big{(}\mathring{\nabla}^{k}(\mathring {g}_{km}-\mathring{g}_{km}\mathring{\mathrm{tr}}\mathring{g})+\mathring{ \nabla}^{l}(\mathring{g}_{lm}-\mathring{g}_{lm}\mathring{\mathrm{tr}}\mathring {g})\Big{)}\mathring{g}^{mn}\epsilon_{in}\] 
\[=-\frac{\mathbf{Q}}{r^{2}}\Big{(}\mathring{\nabla}^{k}\mathring {g}_{km}-\mathring{d}\mathring{\mathrm{tr}}\mathring{g}\Big{)}.\] and using \(\mathring{*}\mathring{*}=(-1)^{k(2-k)+1}I\) and \(\delta=\mathring{*}\mathring{d}\mathring{*}\) on \(\wedge^{k}T^{*}\mathring{X}\) for \(1\leq k\leq 2\), we see that \[\frac{1}{2}\left(\nabla^{\nu}\mathring{g}_{a}^{\kappa}-\nabla^{ \kappa}\mathring{g}_{a}^{\nu}\right)F_{\nu\kappa} =\frac{\mathbf{Q}}{2r^{2}}\mathring{g}^{ik}\mathring{g}^{jl}\left( \partial_{k}\mathring{g}_{la}-\partial_{l}\mathring{g}_{ka}-r^{-1}(\partial_{ k}r)\mathring{g}_{la}+r^{-1}(\partial_{l}r)\mathring{g}_{ka}\right) \epsilon_{ij}\] \[=-\frac{\mathbf{Q}}{r^{3}}(\partial^{k}r)(\mathring{*}\mathring{g} _{\bullet a})_{k}+\frac{\mathbf{Q}}{r^{2}}\left(2r^{-1}(\partial^{k}r)( \mathring{*}\mathring{g}_{\bullet a})_{k}+\mathring{*}\mathring{d}\mathring{g}_ {\bullet a}\right)\] \[=-\frac{\mathbf{Q}}{r^{3}}(\partial^{k}r)(\mathring{*}\mathring{g} _{\bullet a})_{k}+\frac{\mathbf{Q}}{r^{2}}\left(-r^{2}(\partial^{k}r^{-2})( \mathring{*}\mathring{g}_{\bullet a})_{k}+\mathring{\mathring{*}}\mathring{g}_ {\bullet a}\right)\] \[=-\frac{\mathbf{Q}}{r^{3}}(\partial^{k}r)(\mathring{*}\mathring{g} _{\bullet a})_{k}+\mathbf{Q}\mathring{\mathring{g}}r^{-2}\mathring{*}\mathring{g}_ {\bullet a}.\] Combining the above calculation and (A.4) yields \[D_{(g,dA)(\delta_{g}dA)}(\mathring{g},0)=\frac{\mathbf{Q}}{r^{2}}\begin{pmatrix} \frac{1}{2}\mathring{*}\mathring{d}\mathring{\mathring{\mathrm{tr}}}&-r^{-2} \mathring{*}\not{\delta}&-\frac{1}{2}\mathring{*}\mathring{d}r^{-2}\not{\delta} \\ 0&r^{2}\mathring{\delta}r^{-2}\mathring{*}&0\end{pmatrix}.\] (A.23) Lastly, we discuss the calculation of the gauge terms for the electromagnetic field. Since \(A=-\mathbf{Q}r^{-1}dt_{*}\), we have \[(\mathcal{L}_{\omega^{4}}A)_{i}=\omega^{\alpha}\partial_{\alpha}A_{i}+A_{\alpha} \partial_{i}\omega^{\alpha},\quad(\mathcal{L}_{\omega^{4}}A)_{a}=\omega^{\alpha} \partial_{\alpha}A_{a}+A_{\alpha}\partial_{a}\omega^{\alpha}=\partial_{a}(A_{ \alpha}\omega^{\alpha})\] and as a consequence \[\mathcal{L}_{(\cdot)^{4}}A=\begin{pmatrix}\mathring{d}_{A}+\iota_{(\cdot)} \mathring{d}A&0\\ \not{d}\iota_{A}&0\end{pmatrix}.\] (A.24) ## Appendix B Calculation of the subprincipal operator at trapping In this section, we include the discussion of the subprincipal operator of \(\mathcal{P}_{b_{0},\gamma},\mathcal{W}_{b_{0},\gamma}\) and \(L_{b_{0},\gamma}\) at the trapped sets \(K\) for the sake of completeness. The arguments are based on [59], [60] and [61]. In the subsequent discussion, we will list (without proof) the relevant results for the invariant formalism of the subprincipal operator with references (mostly from [59]), and then carry out all the relevant computations. According to [39, Theorem 1.1] and [61, Theorem 4.5] (the microlocalized version) (see also the discussion in [59, SS2]), a sufficient condition for high energy estimates at the trapped set for a semiclassical operator \(P_{h}(z)\) acting on the covariant rank \(k\) tensor bundle \(\mathcal{T}_{k}\) to hold is \[\sigma_{1,h}\Big{(}\frac{1}{2ih}(P_{h}(z)-P_{h}(z)^{*})\Big{)}<\nu_{\min}/2\] (B.1) at the trapped set \(K\), where \(\nu_{\min}>0\) is the minimal normal expansion rate of the Hamilton flow at the trapping, see Proposition 5.18. Here, the adjoint is taken with respect to a _positive definite inner product_ on \(\mathcal{T}_{k}\). 
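To orient the reader (our own remark, anticipating Proposition B.5 below): in the scalar case \(k=0\), condition (B.1) is automatic, since
\[S_{\mathrm{sub}}(\Box_{g,0})=iH_{G},\qquad\operatorname{Im}^{b}S_{\mathrm{sub}}(\Box_{g,0})=\frac{1}{2i}\big{(}iH_{G}-(iH_{G})^{*}\big{)}=0,\]
where the adjoint is taken in \(L^{2}\) of the symplectic volume density: the Hamilton vector field \(H_{G}\) is divergence-free with respect to that density, so \(iH_{G}\) is formally self-adjoint. The machinery of this appendix is needed precisely because for \(k\geq 1\) there is no natural positive definite fiber metric for which the analogous statement holds.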
We note that the natural inner product induced by \(g\), with respect to which \(\Box_{g}\) is symmetric, is not positive definite, except when \(k=0\), i.e. for the scalar wave equation. One can introduce a pseudodifferential inner product such that (B.1) can be arranged, with the adjoint taken with respect to this pseudodifferential inner product. We work with classical, i.e. one-step polyhomogeneous, symbols and operators, and denote by \(S_{\hom}^{m}(T^{*}X\setminus 0)\) symbols which are homogeneous of degree \(m\) with respect to dilations in the fibers of \(T^{*}X\setminus 0\). **Definition B.1** ([59, Definition 3.1]).: A _pseudodifferential inner product_ (or \(\Psi\)_-inner product_) _on the vector bundle_ \(\mathcal{E}\to X\) is a pseudodifferential operator \(B\in\Psi^{0}(X;\mathcal{E}\otimes\Omega^{\frac{1}{2}},\overline{\mathcal{E}}^{*}\otimes\Omega^{\frac{1}{2}})\) satisfying \(B=B^{*}\), and such that moreover the principal symbol \(\sigma^{0}(B)=b\in S_{\hom}^{0}(T^{*}X\setminus 0;\pi^{*}\hom(\mathcal{E},\overline{\mathcal{E}}^{*}))\) of \(B\) satisfies \[\langle b(x,\xi)u,\iota(u)\rangle>0\] (B.2) for all non-zero \(u\in\mathcal{E}_{x}\), where \(x\in X\), \(\xi\in T_{x}^{*}X\setminus 0\). If the context is clear, we will also call the sesquilinear pairing \[C^{\infty}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\times C^{\infty}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\ni(u,v)\mapsto\int_{X}\langle B(x,D)u,\iota(v)\rangle\] the pseudodifferential inner product associated with \(B\). The use of a pseudodifferential inner product is equivalent to considering a conjugated operator \(QP_{h}(z)Q^{-}\), where \(Q\in\Psi^{0}_{h}(X,\mathcal{T}_{k})\) is a carefully chosen elliptic operator with parametrix \(Q^{-}\). For any \(\epsilon>0\), we can arrange \(\sigma_{1,h}(\frac{1}{2ih}(QP_{h}(z)Q^{-}-(QP_{h}(z)Q^{-})^{*}))<\epsilon\) (with the adjoint taken relative to an ordinary positive definite inner product on \(\mathcal{T}_{k}\)); thus (B.1) holds for \(P_{h}(z)\) replaced by \(QP_{h}(z)Q^{-}\). Let \(\pi:T^{*}X\setminus 0\to X\) be the projection. We will work in the standard pseudodifferential operator setting. We specialize to the case that \(P\in\Psi^{m}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\) has a real, scalar principal symbol, which is the case of interest in our application. Fix a coordinate system of \(X\) and a local trivialization of \(\mathcal{E}\); then the full symbol of \(P\) is a sum of homogeneous symbols \(p\sim p_{m}+p_{m-1}+\ldots\), with \(p_{j}\) homogeneous of degree \(j\) and valued in complex \(N\times N\) matrices. According to [68, Theorem 18.1.33], the subprincipal symbol \[\sigma_{\mathrm{sub}}(P)=p_{m-1}(x,\xi)-\frac{1}{2i}\sum_{j}\partial_{x_{j}}\partial_{\xi_{j}}p_{m}(x,\xi)\in S_{\hom}^{m-1}(T^{*}X\setminus 0,\mathbb{C}^{N\times N})\] (B.3) is well-defined under changes of coordinates; however, it does depend on the choice of local trivialization of \(\mathcal{E}\). We would like a frame-independent notion of the subprincipal symbol, since this provides us with the freedom to choose particularly convenient local frames in concrete computations. 
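As a standard illustration of (B.3) (a textbook example, not taken from the paper): on \(X=\mathbb{R}\), consider the formally self-adjoint operator \(P=D_{x}\,a(x)\,D_{x}\) with \(a\) real, acting on half-densities. Its full symbol is
\[p(x,\xi)=a(x)\xi^{2}-ia^{\prime}(x)\xi,\qquad p_{2}=a\xi^{2},\quad p_{1}=-ia^{\prime}\xi,\]
so (B.3) gives
\[\sigma_{\mathrm{sub}}(P)=-ia^{\prime}\xi-\frac{1}{2i}\,\partial_{x}\partial_{\xi}(a\xi^{2})=-ia^{\prime}\xi+ia^{\prime}\xi=0,\]
consistent with the general fact that a formally self-adjoint operator on half-densities has real subprincipal symbol (here it vanishes outright).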
### Invariant formalism for the subprincipal symbols of operators acting on bundles **Definition B.2** ([59, Definition 3.8]).: For \(P\in\Psi^{m}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\) with scalar principal symbol \(p\), there is a well-defined _subprincipal operator_ \(S_{\mathrm{sub}}(P)\in\mathrm{Diff}^{1}(T^{*}X\setminus 0,\pi^{*}\mathcal{E})\), homogeneous of degree \(m-1\) with respect to dilations in the fibers of \(T^{*}X\setminus 0\), defined as follows: if \(\{e_{1}(x),\ldots,e_{N}(x)\}\) is a local frame of \(\mathcal{E}\), define the operators \(P_{jk}\in\Psi^{m}(X,\Omega^{\frac{1}{2}})\) by \(P(\sum_{k}u_{k}(x)e_{k}(x))=\sum_{jk}P_{jk}(u_{k})e_{j}(x)\), \(u_{k}\in C^{\infty}(X,\Omega^{\frac{1}{2}})\). Then \[S_{\mathrm{sub}}(P)\Big{(}\sum_{k}q_{k}(x,\xi)e_{k}(x)\Big{)}:=\sum_{jk}(\sigma_{\mathrm{sub}}(P_{jk})q_{k})e_{j}-i\sum_{k}(H_{p}q_{k})e_{k}.\] The symbols of commutators and imaginary parts can be expressed in a completely invariant fashion. **Proposition B.3** ([60, Proposition 3.11]).: _Let \(P\in\Psi^{m}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\) be a pseudodifferential operator with scalar principal symbol \(p\)._ 1. _Suppose_ \(Q\in\Psi^{m^{\prime}}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\) _is an operator acting on_ \(\mathcal{E}\)_-valued half-densities, with principal symbol_ \(q\)_. Then_ \[\sigma_{m+m^{\prime}-1}([P,Q])=[S_{\mathrm{sub}}(P),q].\] (B.4) _If_ \(Q\) _is elliptic with parametrix_ \(Q^{-}\)_, then_ \[S_{\mathrm{sub}}(QPQ^{-})=qS_{\mathrm{sub}}(P)q^{-1}.\] (B.5) 2. _Suppose in addition that_ \(p\) _is real. Let_ \(B\) _be a_ \(\Psi\)_-inner product (pseudodifferential inner product) on_ \(\mathcal{E}\) _with principal symbol_ \(b\)_, then_ \[\sigma_{m-1}(\operatorname{Im}^{B}\!P)=\operatorname{Im}^{b}\!S_{\mathrm{sub}}(P),\] (B.6) _where_ \(\operatorname{Im}^{B}\!P=\frac{1}{2i}(P-P^{*B})\) _and_ \(\operatorname{Im}^{b}\!S_{\mathrm{sub}}(P)=\frac{1}{2i}\big{(}S_{\mathrm{sub}}(P)-S_{\mathrm{sub}}(P)^{*b}\big{)}\)_; we take the adjoint of_ \(P\) _with respect to the_ \(\Psi\)_-inner product_ \(B\) _and the adjoint of the differential operator_ \(S_{\mathrm{sub}}(P)\) _with respect to the inner product_ \(b\) _on_ \(\pi^{*}\mathcal{E}\) _and the symplectic volume density on_ \(T^{*}X\)_._ The imaginary part \(\operatorname{Im}^{B}\!P\) of an operator \(P\) with respect to a \(\Psi\)-inner product \(B\) can be interpreted in terms of the imaginary part of a conjugated version of \(P\) with respect to a standard inner product. **Proposition B.4** ([59, Proposition 3.12]).: _Let \(B\) be a \(\Psi\)-inner product on \(\mathcal{E}\). 
Then for any positive definite Hermitian inner product \(B_{0}\in C^{\infty}(X,\operatorname{Hom}(\mathcal{E}\otimes\Omega^{\frac{1}{2}},\overline{\mathcal{E}}^{*}\otimes\Omega^{\frac{1}{2}}))\) on \(\mathcal{E}\), there exists an elliptic operator \(Q\in\Psi^{0}(X,\operatorname{End}(\mathcal{E}\otimes\Omega^{\frac{1}{2}}))\) such that \(B-Q^{*}B_{0}Q\in\Psi^{-\infty}(X,\operatorname{Hom}(\mathcal{E}\otimes\Omega^{\frac{1}{2}},\overline{\mathcal{E}}^{*}\otimes\Omega^{\frac{1}{2}}))\)._ _In particular, denoting by \(Q^{-}\in\Psi^{0}(X,\operatorname{End}(\mathcal{E}\otimes\Omega^{\frac{1}{2}}))\) a parametrix of \(Q\), we have, for any \(P\in\Psi^{m}(X,\mathcal{E}\otimes\Omega^{\frac{1}{2}})\) with real and scalar principal symbol,_ \[Q(\operatorname{Im}^{B}\!P)Q^{-}=\operatorname{Im}^{B_{0}}(QPQ^{-}),\] (B.7) _and \(\sigma_{m-1}(\operatorname{Im}^{B}\!P)\) and \(\sigma_{m-1}(\operatorname{Im}^{B_{0}}(QPQ^{-}))\) (which are self-adjoint with respect to \(\sigma^{0}(B)\) and \(B_{0}\), respectively, hence diagonalizable) have the same eigenvalues._ Therefore, the analysis of \(\sigma_{m-1}(\operatorname{Im}^{B_{0}}(QPQ^{-}))\) is reduced to finding an inner product \(b\) such that \(S_{\mathrm{sub}}(L)-S_{\mathrm{sub}}(L)^{*b}\) has the desired form. (In our applications, the choice of \(b\) will be clear from an inspection of the form of \(S_{\mathrm{sub}}(L)\).) Let \((M,g)\) be a smooth manifold equipped with a metric tensor \(g\) of arbitrary signature. Denote by \(\mathcal{T}_{k}M=\bigotimes^{k}T^{*}M\), \(k\geq 1\), the bundle of (covariant) tensors of rank \(k\) on \(M\). The metric \(g\) induces a metric (which we also call \(g\)) on \(\mathcal{T}_{k}M\). The subprincipal operator of \(\Delta_{k}=\mathrm{tr}\nabla^{2}\in\operatorname{Diff}^{2}(M,\mathcal{T}_{k}M)\) acting on the bundle \(\mathcal{T}_{k}M\) has a particularly simple form. 
**Proposition B.5** ([59, Proposition 4.1]).: _The subprincipal operator of \(\Delta_{k}\) is_ \[S_{\mathrm{sub}}(\Delta_{k})(x,\xi)=i\nabla^{\pi^{*}\mathcal{T}_{k}M}_{H_{G}}\in\operatorname{Diff}^{1}(T^{*}M\setminus 0,\pi^{*}\mathcal{T}_{k}M),\] (B.8) _where \(\nabla^{\pi^{*}\mathcal{T}_{k}M}\) is the pullback connection, with \(\pi\colon T^{*}M\setminus 0\to M\) being the projection, and \(G=|\xi|^{2}_{g^{-1}(x)}\)._ ### Calculation of subprincipal operators We notice that modulo terms of order \(0\), which are sub-subprincipal and thus irrelevant in the calculation of \(S_{\mathrm{sub}}(\mathcal{P}_{b_{0},\gamma})\), \(S_{\mathrm{sub}}(\mathcal{W}_{b_{0},\gamma})\) and \(S_{\mathrm{sub}}(L_{b_{0},\gamma})\) (for notational simplicity we drop the subscripts \(b_{0},\gamma\) from now on, so that, e.g., \(g=g_{b_{0}}\)), we have \[-2\mathcal{P}\equiv\square_{g,1}-2\delta_{g}G_{g}(\tilde{\delta}^{*}_{g}-\delta^{*}_{g}),\quad-2\mathcal{W}\equiv\square_{g,1}-2(\tilde{\delta}_{g}-\delta_{g})G_{g}\delta^{*}_{g}\] (B.9) and \[-L(\dot{g},\dot{A})\equiv\Big{(}\square_{g}\dot{g}-2(\tilde{\delta}^{*}_{g}-\delta^{*}_{g})\delta_{g}G_{g}\dot{g}-2\delta^{*}_{g}(\tilde{\delta}_{g}-\delta_{g})G_{g}\dot{g}+L_{12}(\dot{A}),\ \square_{g}\dot{A}+L_{21}(\dot{g})\Big{)},\] (B.10) where \[L_{12}(\dot{A})=8\mathrm{tr}^{24}_{g}(F\otimes_{s}d\dot{A})-2g^{-1}(F,d\dot{A})\,g,\quad F=dA=d\Big{(}\frac{\mathbf{Q}_{0}}{r}dt\Big{)}=\frac{\mathbf{Q}_{0}}{r^{2}}dt\wedge dr,\] \[L_{21}(\dot{g})=\mathrm{tr}^{12}_{g}(\delta_{g}G_{g}\dot{g}\otimes F)-\frac{1}{2}\mathrm{tr}^{24}_{g}\mathrm{tr}^{35}_{g}(d^{\nabla}\dot{g}\otimes F),\quad(d^{\nabla}\dot{g})_{\nu}^{\ \mu\kappa}=\nabla^{\mu}\dot{g}_{\nu}^{\ \kappa}-\nabla^{\kappa}\dot{g}_{\nu}^{\ \mu}.\] Since \(\tilde{\delta}_{g}^{*}-\delta_{g}^{*}\) and \(\tilde{\delta}_{g}-\delta_{g}\) are compactly supported away from the trapping for the Reissner-Nordstrom metric, we further have, modulo terms of order \(0\), \[\begin{split}-2\mathcal{P}\equiv-2\mathcal{W}\equiv\square_{g,1},\\ -L(\dot{g},\dot{A})\equiv\left(\square_{g}\dot{g}+L_{12}(\dot{A}),\ \square_{g}\dot{A}+L_{21}(\dot{g})\right)\end{split}\] (B.11) at the trapping \(K\). Let \(g=g_{b_{0}}\) be the Reissner-Nordstrom metric with parameter \(b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\): \[g=-\alpha^{2}dt^{2}+\alpha^{-2}dr^{2}+r^{2}\not{g},\quad\alpha=\sqrt{\mu_{b_{0}}}=\sqrt{1-\frac{2\mathbf{m}_{0}}{r}+\frac{\mathbf{Q}_{0}^{2}}{r^{2}}}.\] (B.12) That is, we work in the coordinates \((t,r,\omega)\), \(\omega\in\mathbb{S}^{2}\), near the trapping \(K\). Let \((\sigma,\xi,\eta)\) be the dual variables to \((t,r,\omega)\). 
We define \[e^{0}:=\alpha dt,\quad e^{1}:=\alpha^{-1}dr\] (B.13) and use the following splittings of \(T^{*}\mathcal{M}\) and \(S^{2}\,T^{*}\mathcal{M}\) \[\begin{split} T^{*}\mathcal{M}&=\langle e^{0} \rangle\oplus\langle e^{1}\rangle\oplus T^{*}\mathbb{S}^{2},\\ S^{2}\,T^{*}\mathcal{M}&=\langle e^{0}e^{0}\rangle \oplus\langle 2e^{0}\otimes_{s}e^{1}\rangle\oplus(2e^{0}\otimes_{s}T^{*} \mathbb{S}^{2})\oplus\langle e^{1}e^{1}\rangle\oplus(2e^{1}\otimes_{s}T^{*} \mathbb{S}^{2})\oplus S^{2}\,T^{*}\mathbb{S}^{2}.\end{split}\] (B.14) In this section, we are interested in the subprincipal operators of \(\square_{g,1}\) and \(L\) at the trapped set \[K=\{r=r_{p},\xi=0,\sigma^{2}=\mu_{b_{0}}r^{-2}|\eta|^{2}\},\quad\text{where} \quad\partial_{r}(r\alpha^{-1})(r_{p})=0,\quad|\eta|^{2}=|\eta|_{\not{g}^{-1}} ^{2}.\] (B.15) Therefore, at the trapping \(K\), we have \[H_{\alpha^{2}\xi^{2}+r^{-2}|\eta|^{2}}=2r^{-3}|\eta|^{2}\partial_{\xi}+r^{-2} H_{|\eta|^{2}}\] and \[\sigma^{2}H_{\alpha^{-2}}-H_{\alpha^{2}\xi^{2}+r^{-2}|\eta|^{2}}=-r^{-2}H_{| \eta|^{2}}.\] Let \(x^{0}=t\), and \(x^{\prime}=(r,\omega)\) be coordinates on \(X\) (independent of \(t\)). We let Greek indices \(\mu,\nu,\lambda,\ldots\) run from \(0\) to \(3\). Moreover, the canonical dual variables \(\xi_{0}=:\sigma\) and \(\xi^{\prime}=(\xi,\eta)\) on the fibers of \(T^{*}\mathcal{M}\) are indexed by decorated Greek indices \(\widetilde{\mu}\) running from \(0\) to \(3\). For a section \(u\) of \(\pi^{*}T^{*}\mathcal{M}\), we have \[\nabla_{\mu}^{\pi^{*}T^{*}M}u_{\nu}=\nabla_{\mu}^{M}u_{\nu},\quad\nabla_{ \widetilde{\mu}}^{\pi^{*}T^{*}M}u_{\nu}=\partial_{\widetilde{\mu}}u_{\nu}.\] #### b.2.1. Calculation of the subprincipal operator of \(\square_{g,1}\) According to Proposition B.5 and using the fact that \(G=-\alpha^{-2}\sigma^{2}+\alpha^{2}\xi^{2}+r^{-2}|\eta|^{2}\), a direct calculation implies that at trapping \(K=\{r=r_{p},\xi=0,\sigma^{2}=\mu_{b_{0}}r^{-2}|\eta|^{2}\}\) \[iS_{\text{sub}}(\square_{g,1})=\begin{pmatrix}2\alpha^{-2}\sigma\partial_{t}- r^{-2}H_{|\eta|^{2}}&0&0\\ 0&2\alpha^{-2}\sigma\partial_{t}-r^{-2}H_{|\eta|^{2}}&0\\ 0&0&2\alpha^{-2}\sigma\partial_{t}-r^{-2}\nabla_{H_{|\eta|^{2}}}^{\pi^{*}_{2}T^ {*}\mathbb{S}^{2}}\end{pmatrix}+S_{(1)}\] (B.16) where \[S_{(1)}=\begin{pmatrix}0&-2r^{-1}\sigma&0\\ -2r^{-1}\sigma&0&2\alpha r^{-3}t_{\eta}\\ 0&-2\alpha r^{-1}\eta&0\end{pmatrix}.\] (B.17) For simplicity of calculation, we further decompose \(\pi^{*}T^{*}\mathcal{M}\to T^{*}\mathcal{M}\) over the trapping \(K\). 
For \(\eta\in T^{*}\mathbb{S}^{2}\backslash o\), we define \(\eta^{\perp}:=\not{\iota}\eta\) and \[\widehat{\eta}:=\alpha\sigma^{-1}\eta,\quad\widehat{\eta}^{\perp}=\alpha\sigma ^{-1}\eta^{\perp}.\] Then we write \[\begin{split}\pi^{*}_{\mathbb{S}^{2}}T^{*}\mathbb{S}^{2}=\langle \widehat{\eta}\rangle\oplus\langle\widehat{\eta}^{\perp}\rangle,\\ \pi^{*}_{\mathbb{S}^{2}}S^{2}T^{*}\mathbb{S}^{2}=\langle\widehat{\eta }\widehat{\eta}\rangle\oplus\langle 2\widehat{\eta}^{\perp}\rangle\oplus\langle\widehat{\eta}^{\perp} \widehat{\eta}^{\perp}\rangle,\end{split}\] (B.18) and this induces the following decompositions \[\begin{split}\pi^{*}T^{*}M&=\langle e^{0}\rangle\oplus \Big{(}\langle e^{1}\rangle\oplus\big{(}\langle\widehat{\eta}\rangle\oplus \langle\widehat{\eta}^{\perp}\rangle\big{)}\Big{)},\\ \pi^{*}S^{2}T^{*}M&=\Big{(}\langle e^{0}e^{0}\rangle \oplus\big{(}\langle 2e^{0}e^{1}\rangle\oplus\big{(}\langle 2e^{0}\widehat{\eta} \rangle\oplus\langle 2e^{0}\widehat{\eta}^{\perp}\rangle\big{)}\big{)}\Big{)}\\ &\qquad\qquad\qquad\oplus\Big{(}\langle e^{1}e^{1}\rangle\oplus \big{(}\langle 2e^{1}\widehat{\eta}\rangle\oplus\langle 2e^{1}\widehat{\eta}^{ \perp}\rangle\big{)}\Big{)}\\ &\qquad\qquad\qquad\oplus\big{(}\langle\widehat{\eta}\widehat{ \eta}\rangle\oplus\langle 2\widehat{\eta}\widehat{\eta}^{\perp}\rangle\oplus \langle\widehat{\eta}^{\perp}\widehat{\eta}^{\perp}\rangle\big{)},\end{split}\] (B.19) In terms of (B.18), we have \[\eta=\begin{pmatrix}\alpha^{-1}\sigma\\ 0\end{pmatrix},\quad r^{-2}\iota_{\eta}=\big{(}\alpha^{-1}\sigma\quad 0 \big{)}\quad\text{at}\quad K.\] Therefore, in terms of the splitting (B.19), the operator \(S_{(1)}\) in (B.17) at trapping \(K\) is given as \[S_{(1)}=r^{-1}\sigma\begin{pmatrix}0&-2&0&0\\ -2&0&2&0\\ 0&-2&0&0\\ 0&0&0&0\end{pmatrix}.\] (B.20) Now, for any fixed \(\epsilon>0\), we define the change of basis matrix \[q_{1}=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&\frac{1}{\epsilon}&0\\ 0&-\frac{2}{\epsilon^{2}}&0&0\\ -\frac{4}{\epsilon^{3}}&0&-\frac{4}{\epsilon^{3}}&0\end{array}\right),\] and then we have \[q_{1}S_{(1)}q_{1}^{-1}=r^{-1}\sigma\left(\begin{array}{cccc}0&0&0&0\\ 0&0&\epsilon&0\\ 0&0&0&\epsilon\\ 0&0&0&0\end{array}\right).\] Since \(q_{1}\) is a constant coefficient operator in the region where the splitting (B.19) is well defined, in particular, in a neighborhood of the trapping \(K\), it commutes with the diagonal part \(iS_{\text{sub}}(\square_{g,1})-S\) of \(S_{\text{sub}}(\square_{g,1})\). Equipping the \(1\)-form bundle over \(M\) with the Hermitian inner product \(B_{0}^{1}\) which is given by an identity matrix in the splitting (B.19), \(q_{1}S_{\text{sub}}(\square_{g,1})q_{1}^{-1}\) has imaginary part (with respect to \(B_{0}\)) of size \(\mathcal{O}(\epsilon)\). Put differently, \(S_{\text{sub}}(\square_{g,1})\) has imaginary part of size \((\epsilon)\) relative to the Hermitian inner product \(b:=B_{0}^{1}(q,q,q)\), which is the symbol of a pseudodifferential inner product on \(\pi^{*}T^{*}M\). To summarize, **Theorem B.6**.: _For any \(\epsilon>0\), there exists a (positive definite) \(t\)-independent pseudodifferential inner product \(B=b(x,D)\) on \(T^{*}\mathcal{M}\) (thus, \(b\) is an inner product on \(\pi^{*}T^{*}\mathcal{M}\), homogeneous of degree \(0\) with respect to dilations in the base \(T^{*}\mathcal{M}\setminus 0\)), such that_ \[\sup_{K}|\sigma|^{-1}\left\|\frac{1}{2i}(S_{\text{sub}}(\square_{g,1})-S_{ \text{sub}}(\square_{g,1})^{*b})\right\|_{b}\leq\epsilon,\] _where \(K\) is the trapped set. 
Put differently, there is an elliptic pseudodifferential operator \(Q\), invariant under \(t\)-translations, acting on sections of \(T^{*}\mathcal{M}\), with parametrix \(Q^{-}\), such that relative to the ordinary positive definite inner product \(B_{0}^{1}\) we have_ \[\sup_{\Gamma}|\sigma|^{-1}\left\|\sigma_{1}\left(\frac{1}{2i}(Q\square_{g,1}Q^ {-}-(Q\square_{g,1}Q^{-})^{*B_{0}^{1}})\right)\right\|_{B_{0}^{1}}\leq\epsilon.\] #### b.2.2. Calculation of the subprincipal operator of \(L\) Using Proposition B.5 again, we see that \(S_{\text{sub}}(\square_{g,2})=i\nabla^{*^{*}S^{2}T^{*}\mathcal{M}}_{H^{2}_{G}}\). Since \(\pi^{*}S^{2}T^{*}\mathcal{M}\) is simply the restriction of the product connection \(\nabla^{\pi^{*}T^{*}\mathcal{M}}\otimes\nabla^{\pi^{*}T^{*}\mathcal{M}}\) to \(\pi^{*}S^{2}T^{*}\mathcal{M}\), it follows that \[S_{\text{sub}}(\square_{g,2})(w_{1}w_{2})=S_{\text{sub}}(\square_{g,1})w_{1} \cdot w_{2}+w_{1}\cdot S_{\text{sub}}(\square_{g,1})w_{2}\] for \(w_{1},w_{2}\in C^{\infty}(T^{*}\mathcal{M},\pi^{*}T^{*}\mathcal{M})\), under the splitting (B.14), \(S_{\text{sub}}(\square_{g,2})\) has a canonical first order part, induced by the canonical first order part of \(S_{\text{sub}}(\square_{g,1})\) which is given in (B.16). The \(0\)-order part \(S_{(2)}\) of \(S_{\rm sub}(\square_{g,2})\) is equal to the second symmetric tensor power of the \(0\)-th order part of \(S_{\rm sub}(\square_{g,1})\) in (B.17). To summarize, \[iS_{\rm sub}(\square_{g,2})=2\alpha^{-2}\sigma\partial_{t}-r^{-2}\begin{pmatrix}H _{|\eta|^{2}}&0&0&0&0&0\\ 0&H_{|\eta|^{2}}&0&0&0&0\\ 0&0&H_{|\eta|^{2}}^{\pi_{2}^{*}}T^{*\mathbb{S}^{2}}&0&0&0\\ 0&0&0&H_{|\eta|^{2}}&0&0\\ 0&0&0&0&H_{|\eta|^{2}}^{\pi_{2}^{*}}T^{*\mathbb{S}^{2}}&0\\ 0&0&0&0&0&H_{|\eta|^{2}}^{\pi_{2}^{*}S^{2}}\end{pmatrix}+S_{(2)}\] (B.21) where \[S_{(2)}=\begin{pmatrix}0&-4r^{-1}\sigma&0&0&0&0\\ -2r^{-1}\sigma&0&2\alpha r^{-3}i_{\eta}&-2r^{-1}\sigma&0&0\\ 0&-2\alpha r^{-1}\eta&0&0&-2r^{-1}\sigma&0\\ 0&-4r^{-1}\sigma&0&0&4\alpha r^{-3}i_{\eta}&0\\ 0&0&-2r^{-1}\sigma&-2\alpha r^{-1}\eta&0&2\alpha r^{-3}i_{\eta}\\ 0&0&0&0&-4\alpha r^{-1}\eta&0\end{pmatrix}.\] (B.22) Again, we turn to the'microlocal splitting' (B.18), under which we write \[\eta=\begin{pmatrix}\alpha^{-1}\sigma&0\\ 0&\frac{1}{2}\alpha^{-1}\sigma\\ 0&0\end{pmatrix}:\,T^{*}\mathbb{S}^{2}\to S^{2}T^{*}\mathbb{S}^{2},\] \[r^{-2}i_{\eta}=\begin{pmatrix}\alpha^{-1}\sigma&0&0\\ 0&\alpha^{-1}\sigma&0\end{pmatrix}:\,S^{2}T^{*}\mathbb{S}^{2}\to T^{*} \mathbb{S}^{2}\] Therefore, in terms of (B.19), we see that \[S_{(2)}=r^{-1}\sigma\begin{pmatrix}0&-4&0&0&0&0&0&0&0\\ -2&0&2&0&-2&0&0&0&0&0\\ 0&-2&0&0&0&-2&0&0&0&0\\ 0&0&0&0&0&0&-2&0&0&0\\ 0&-4&0&0&0&4&0&0&0&0\\ 0&0&-2&0&-2&0&0&2&0&0\\ 0&0&0&-2&0&0&0&0&2&0\\ 0&0&0&0&0&-4&0&0&0&0\\ 0&0&0&0&0&0&-2&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\end{pmatrix}.\] (B.23) Let \(S_{L}\) be the \(0\)-th order part of \(iS_{\rm sub}(-L)\) at \(K\), with \(L\) given in (B.10). 
Acting on the bundle \(S^{2}T^{*}\mathcal{M}\oplus T^{*}\mathcal{M}\), we can write \(S_{L}\) as a \(2\times 2\) block matrix, \[S_{L}=\begin{pmatrix}S_{L}^{11}&S_{L}^{12}\\ S_{L}^{21}&S_{L}^{22}\end{pmatrix}\quad\text{where}\quad S_{L}^{11}=S_{(2)}, \ S_{L}^{22}=S_{(1)},\ S_{L}^{12}=i\sigma_{1}(L_{12}),\ S_{L}^{21}=i\sigma_{1} (L_{21})\] with \[L_{12}(\dot{A})=8{\rm tr}_{g}^{24}(F\otimes_{s}d\dot{A})-2g^{-1} (F,d\dot{A})\,g,\quad F=dA=d(\frac{{\bf Q}_{0}}{r}dt)=\frac{{\bf Q}_{0}}{r^{2}} dt\wedge dr,\] \[L_{21}(\dot{g})={\rm tr}_{g}^{12}(\delta_{g}G_{g}\dot{g}\otimes F )-\frac{1}{2}{\rm tr}_{g}^{24}{\rm tr}_{g}^{35}(d^{\nabla}\dot{g}\otimes F), \quad(d^{\nabla}\dot{g})_{\nu}^{\,\mu\kappa}=\nabla^{\mu}\dot{g}_{\nu}{}^{\, \kappa}-\nabla_{\nu}^{\kappa}\dot{g}\ ^{\mu}.\] In the splitting (B.14) and \[\Lambda^{2}T^{*}\mathcal{M}=\langle e^{0}\wedge e^{1}\rangle\oplus\langle e^{0 }\wedge T^{*}\mathbb{S}^{2}\rangle\oplus\langle e^{1}\wedge T^{*}\mathbb{S}^{2 }\rangle\oplus\Lambda^{2}T^{*}\mathbb{S}^{2},\] \(d:T^{*}\mathcal{M}\rightarrow\Lambda^{2}T^{*}\mathcal{M}\) and \(\mathrm{tr}_{g}^{24}(F\otimes_{s}(\cdot)):\Lambda^{2}T^{*}\mathcal{M}\to S ^{2}T^{*}\mathcal{M}\) are given by \[d=\begin{pmatrix}-\alpha\partial_{r}-\alpha^{\prime}&\alpha^{-1}\partial_{t}&0 \\ -\not{d}&0&\alpha^{-1}\partial_{t}\\ 0&-\not{d}&\alpha\partial_{r}\\ 0&0&\not{d}\end{pmatrix},\quad 2\mathrm{tr}_{g}^{24}(F\otimes_{s}(\cdot))= \frac{\mathbf{Q}_{0}}{r^{2}}\begin{pmatrix}2&0&0&0\\ 0&0&0&0\\ 0&0&-1&0\\ -2&0&0&0\\ 0&-1&0&0\\ 0&0&0&0\end{pmatrix},\] and we also have \[g^{-1}(F,(\cdot))g=\begin{pmatrix}\frac{2\mathbf{Q}_{0}}{r^{2}}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ -\frac{2\mathbf{Q}_{0}}{r^{2}}&0&0&0\\ 0&0&0&0\\ -\frac{2\mathbf{Q}_{0}}{r^{2}}r^{2}\not{g}&0&0&0\end{pmatrix}:\Lambda^{2}T^{*} \mathcal{M}\to S^{2}T^{*}\mathcal{M}.\] Therefore, we obtain \[\sigma_{1}(L_{12})=i\frac{4\mathbf{Q}_{0}}{r^{2}}\begin{pmatrix}-\alpha\xi& \alpha^{-1}\sigma&0\\ 0&0&0\\ 0&\eta&-\alpha\xi\\ \alpha\xi&-\alpha^{-1}\sigma&0\\ \eta&0&-\alpha^{-1}\sigma\\ -\alpha\xi r^{2}\not{g}&\alpha^{-1}\sigma r^{2}\not{g}&0\end{pmatrix}.\] (B.24) Turning to the'microlocal splitting' (B.18), we write \(\not{g}=\not{g}=|\eta|^{-2}\eta\eta+|\eta|^{-2}\eta^{\perp}\eta^{\perp}\) and then we have \[\not{g}=\begin{pmatrix}r^{-2}\\ 0\\ r^{-2}\end{pmatrix}\quad\text{at}\quad K.\] Therefore, in terms of the splitting (B.18), \[i\sigma_{1}(L_{12})=r^{-1}\sigma\frac{4\mathbf{Q}_{0}}{r\alpha}\begin{pmatrix} 0&-1&0&0\\ 0&0&0&0\\ 0&-1&0&0\\ 0&0&0&0\\ 0&1&0&0\\ -1&0&1&0\\ 0&0&0&1\\ 0&-1&0&0\\ \end{pmatrix}\quad\text{at}\quad K.\] (B.25) So it remains to analyze \(L_{21}(\dot{g})\). 
We first compute \[G_{g}=\begin{pmatrix}\frac{1}{2}&0&0&\frac{1}{2}&0&\frac{1}{2r^{2}}\not{t} \\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ \frac{1}{2}&0&0&\frac{1}{2}&0&-\frac{1}{2r^{2}}\not{t}\\ 0&0&0&0&1&0\\ \frac{1}{2}r^{2}\not{g}&0&0&-\frac{1}{2}r^{2}\not{g}&0&G_{\not{g}}\end{pmatrix}, \quad\mathrm{tr}_{g}^{12}((\cdot)\otimes F)=\begin{pmatrix}0&-\frac{\mathbf{Q}_ {0}}{r^{2}}&0\\ -\frac{\mathbf{Q}_{0}}{r^{2}}&0&0\\ 0&0&0\end{pmatrix}:T^{*}\mathcal{M}\to T^{*}\mathcal{M}\] (B.26) and \[\sigma_{1}(\delta_{g})=i\begin{pmatrix}\alpha^{-1}\sigma&-\alpha\xi&-r^{-2} \iota_{\eta}&0&0&0\\ 0&\alpha^{-1}\sigma&0&-\alpha\xi&-r^{-2}\iota_{\eta}&0\\ 0&0&\alpha^{-1}\sigma&0&-\alpha\xi&-r^{-2}\iota_{\eta}\end{pmatrix}.\] (B.27) Therefore, we find that \[\sigma_{1}(\text{tr}_{g}^{12}(\delta_{g}G_{g}(\cdot)\otimes F))=i\frac{\mathbf{Q}_ {0}}{r^{2}}\begin{pmatrix}\frac{1}{2}\alpha\xi&-\alpha^{-1}\sigma&0&\frac{1}{2} \alpha\xi&r^{-2}\iota_{\eta}&-\frac{1}{2r^{2}}\alpha\xi\psi\!\!\!/\cr-\frac{1}{2 }\alpha^{-1}\sigma&\alpha\xi&r^{-2}\iota_{\eta}&-\frac{1}{2}\alpha^{-1}\sigma &0&-\frac{1}{2r^{2}}\alpha^{-1}\sigma\psi\!\!\!/\cr 0&0&0&0&0&0&0\end{pmatrix}.\] (B.28) Next, we calculate \[\sigma_{1}(d^{\nabla}(\cdot)\otimes F)=i\frac{2\mathbf{Q}_{0}}{r^{2}} \begin{pmatrix}\alpha\xi&-\alpha^{-1}\sigma&0&0&0&0\\ 0&\alpha\xi&0&-\alpha^{-1}\sigma&0&0\\ 0&0&\alpha\xi&0&-\alpha^{-1}\sigma&0\end{pmatrix}.\] (B.29) In the'microlocal splitting' (B.18), we have \[\psi\!\!\!/=\begin{pmatrix}r^{2}&0&r^{2}\end{pmatrix}\quad\text{at}\quad K.\] and thus we have \[i\sigma_{1}(L_{21})=r^{-1}\sigma\frac{\mathbf{Q}_{0}}{2r\alpha} \begin{pmatrix}0&0&0&0&0&-2&0&0&0&0\\ 1&0&-2&0&-1&0&0&1&0&1\\ 0&0&0&0&0&-2&0&0&0&0\\ 0&0&0&0&0&0&-2&0&0&0\end{pmatrix}.\] (B.30) Putting (B.20), (B.23), (B.25) and (B.30) together yields \[S_{L}=r^{-1}\sigma\begin{pmatrix}0&-4&0&0&0&0&0&0&0&0&0&-8\mathbf{Q}^{\prime }&0&0\\ -2&0&2&0&-2&0&0&0&0&0&0&0&0\\ 0&-2&0&0&-2&0&0&0&0&0&-8\mathbf{Q}^{\prime}&0&0\\ 0&0&0&0&0&0&-2&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&-2&0&0&0&0&0&0\\ 0&0&0&0&0&0&-2&0&0&0&0&0&0\\ 0&0&0&0&0&-2\mathbf{Q}^{\prime}&0&0&0&0&0&-2&0&0\\ \mathbf{Q}^{\prime}&0&-2\mathbf{Q}^{\prime}&0&-\mathbf{Q}^{\prime}&0&0&\mathbf{ Q}^{\prime}&0&\mathbf{Q}^{\prime}&-2&0&2&0\\ 0&0&0&0&0&-2\mathbf{Q}^{\prime}&0&0&0&0&0&-2&0&0\\ 0&0&0&0&0&0&-2\mathbf{Q}^{\prime}&0&0&0&0&0&-2&0&0\\ 0&0&0&0&0&0&-2\mathbf{Q}^{\prime}&0&0&0&0&0&0&0\\ \end{pmatrix}\] (B.31) where \(\mathbf{Q}^{\prime}=\frac{\mathbf{Q}_{0}}{2r\alpha}\). 
Since \(S_{L}\) has the following Jordan form \[\left(\begin{array}{cccccccccccc}0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&-4i\mathbf{Q}^{\prime}&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&4i\mathbf{Q}^{\prime}&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&4i\mathbf{Q}^{\prime}\end{array}\right),\] one can choose a suitable matrix \(q_{L}\) which has constant coefficients at \(K\) such that \[q_{L}S_{L}q_{L}^{-}=\left(\begin{array}{cccccccccccc}0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\epsilon&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&\epsilon&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&\epsilon&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&-4i\mathbf{Q}^{\prime}&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&-4i\mathbf{Q}^{\prime}&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&4i\mathbf{Q}^{\prime}&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&4i\mathbf{Q}^{\prime}\end{array}\right).\] (B.32) Again, equipping \(S^{2}T^{*}\mathcal{M}\oplus T^{*}\mathcal{M}\) over \(M\) with the Hermitian inner product \(B_{0}^{L}\) which is given as an identity matrix in the splitting (B.14), \(q_{L}S_{\mathrm{sub}}(-L)q_{L}^{-1}\) has imaginary part (with respect to \(B_{0}^{L}\)) of size \(\mathcal{O}(\epsilon)\). Put differently, \(S_{\mathrm{sub}}(-L)\) has imaginary part of size \((\epsilon)\) relative to the Hermitian inner product \(b:=B_{0}^{L}(q_{L^{\cdot}},q_{L^{\cdot}})\), which is the symbol of a pseudodifferential inner product on \(\pi^{*}T^{*}M\). To summarize, **Theorem B.7**.: _For any \(\epsilon>0\), there exists a (positive definite) \(t\)-independent pseudodifferential inner product \(B=b(x,D)\) on \(S^{2}T^{*}\mathcal{M}\oplus T^{*}\mathcal{M}\) (thus, \(b\) is an inner product on \(\pi^{*}(S^{2}T^{*}\mathcal{M}\oplus T^{*}\mathcal{M})\), homogeneous of degree \(0\) with respect to dilations in the base \(T^{*}\mathcal{M}\setminus 0\)), such that_ \[\sup_{K}|\sigma|^{-1}\left\|\frac{1}{2i}(S_{\mathrm{sub}}(-L)-S_{\mathrm{sub} }(-L)^{*b})\right\|_{b}\leq\epsilon,\] _where \(K\) is the trapped set. Put differently, there is an elliptic pseudodifferential operator \(Q\), invariant under \(t\)-translations, acting on sections of \(S^{2}T^{*}\mathcal{M}\oplus T^{*}\mathcal{M}\), with parametrix \(Q^{-}\), such that relative to the ordinary positive definite inner product \(B_{0}^{L}\) we have_ \[\sup_{\Gamma}|\sigma|^{-1}\left\|\sigma_{1}\left(\frac{1}{2i}(Q(-L)Q^{-}-(Q(-L )Q^{-})^{*B_{0}^{L}})\right)\right\|_{B_{0}^{L}}\leq\epsilon.\] Finally, we point out the relation between the operators \(\mathcal{P},\mathcal{W},L\) and their Fourier transform (semiclassically rescaled) \(h^{2}\widehat{\mathcal{P}}(h^{-1}z),\ h^{2}\widehat{\mathcal{W}}(h^{-1}z)\) and \(h^{2}\widehat{L}(h^{-1}z)\). _Remark B.8_.: Given a operator \(P\in\{\square_{g,1},-L\}\), we define \(\widehat{P}(\sigma):=e^{i\sigma t_{*}}Pe^{-i\sigma t_{*}}\) and its semiclassical rescaling \(h^{2}\widehat{P}(h^{-1}z)\) with \(h=|\sigma|^{-1}\). Let \(p_{2}=\sigma_{2}(P)\) and \(p_{2,h}(z)=\sigma_{2,h}(h^{2}\widehat{P}(h^{-1}z))\). Here we also use \(\xi^{\prime}=(\xi,\eta)\) as the semiclassical dual variables of \(x^{\prime}=(r,\omega)\). 
Then using the correspondence \(\mathrm{Re}\,z\longleftrightarrow-\sigma\), we have \[S_{\mathrm{sub}}(h^{2}\widehat{P}(h^{-1}z))=S_{\mathrm{sub}}(P)+i\alpha^{-2} \sigma\partial_{t}+i\mathrm{Im}\,(hz)\partial_{\sigma}p_{2}\] and thus \[\sigma_{1,h}\Big{(}\frac{1}{2ih}(h^{2}\widehat{P}(h^{-1}z)-(h^{2}\widehat{P}(h ^{-1}z))^{*B})\Big{)}=\sigma_{1}\left(\frac{1}{2i}(P-P^{*B})\right)+\mathrm{Im }\,(hz)\partial_{\sigma}p_{2}.\] Finally, we note that \(\mathrm{Im}\,(hz)\partial_{\sigma}p_{2}=\mathrm{Im}\,(hz)\partial_{z}p_{2,h }(\mathrm{Re}\,z)=\sigma_{1,h}\Big{(}\frac{1}{2ih}(h^{2}\widehat{\square_{g,0} }(h^{-1}z)-(h^{2}\widehat{\square_{g,0}}(h^{-1}z))^{*})\Big{)}\). ## Appendix C Calculation of the subprincipal operator at radial points at event horizon In this section, we calculate the threshold regularity at the radial points at event horizon. To this end, we discuss the subprincipal operator of \(\mathcal{P}_{b_{0},\gamma},\mathcal{W}_{b_{0},\gamma}\) and \(L_{b_{0},\gamma}\) at radial points at event horizon. Here we again drop the notations \(b_{0},\gamma\). We recall that modulo terms of order \(0\), we have \[-2\mathcal{P}\equiv\square_{g,1}-2\delta_{g}G_{g}(\tilde{\delta}_{g}^{*}- \delta_{g}^{*}),\quad-2\mathcal{W}\equiv\square_{g,1}-2(\tilde{\delta}_{g}- \delta_{g})G_{g}\delta_{g}^{*}\] (C.1) and \[-L(\hat{g},\dot{A})\equiv\Big{(}\square_{g}\dot{g}-2(\tilde{\delta}_{g}^{*}- \delta_{g}^{*})\delta_{g}G_{g}\dot{g}-2\delta_{g}^{*}(\tilde{\delta}_{g}- \delta_{g})G_{g}\dot{g}+L_{12}(\dot{A}),\ \square_{g}\dot{A}+L_{21}(\dot{g})\Big{)}.\] (C.2) Near the radial points at the event horizon, we use the coordinates \((t_{0},r,\omega)\) in (111), in which the Reissner-Nordstrom metric \(g=g_{b_{0}},b_{0}=(\mathbf{m}_{0},0,\mathbf{Q}_{0})\) takes the form \[g=-\mu_{b_{0}}dt_{0}^{2}+2dt_{0}dr+r^{2}\not{g}.\] Writing the covectors as \[\sigma\,dt_{0}+\xi\,dr+\eta,\quad\eta\in T^{*}\mathbb{S}^{2},\] we have \(G=g^{-1}(\sigma\,dt_{0}+\xi\,dr+\eta,\sigma\,dt_{0}+\xi\,dr+\eta)=2\sigma\xi+ \mu_{b_{0}}\xi^{2}+r^{-2}|\eta|\) and \[H_{G}=2\xi\partial_{t_{0}}+(2\sigma+2\mu_{b_{0}}\xi)\partial_{r}-\mu_{b_{0}}^{ \prime}\xi^{2}\partial_{\xi}+2r^{-3}|\eta|^{2}\partial_{\xi}+r^{-2}H_{|\eta|^ {2}}.\] The radial points at event horizon is given as \[L_{\pm}=\{(t_{0},r_{b_{0}},\omega;0,\xi,0)\mid\pm\xi>0\}.\] (C.3) Then at \(L_{\pm}\), we have \[H_{G}=2\xi\partial_{t_{0}}-2\kappa\xi^{2}\partial_{\xi}+2\mu_{b_{0}}\xi \partial_{r}\] (C.4) with \[\kappa=\frac{1}{2}\mu_{b_{0}}^{\prime}(r_{b_{0}}).\] (C.5) being the _surface gravity_. We keep the term \(2\mu_{b_{0}}\xi\partial_{r}\) here as it is needed in the computation of the skew adjoint part. The basis \(e^{0}=\alpha dt,e^{1}=\alpha^{-1}dr\) are not defined at \(r=r_{b_{0}}\). To deal with this issue, we follow the method in [63, SS6.3]. 
Concretely, we relate the basis \(e^{0}=\alpha dt,e^{1}=\alpha^{-1}dr\) to a smooth trivialization of \(T^{*}M\) and \(S^{2}T^{*}M\) as follows: writing a 1-form \(u\) in the static region \(\mathcal{M}\) as \[u=u_{N}\,e^{0}+u_{N}^{T}\,e^{1}+u_{T}^{T}=\widetilde{u}_{N}\,dt_{0}+\widetilde {u}_{N}^{T}\,dr+\widetilde{u}_{T}^{T},\] where \(u_{T}^{T}\) and \(\widetilde{u}_{T}^{T}\) are 1-forms on \(\mathbb{S}^{2}\), we see that \[\begin{pmatrix}u_{N}\\ u_{N}^{T}\\ u_{T}^{T}\end{pmatrix}=\mathscr{C}^{(1)}\begin{pmatrix}\widetilde{u}_{N}\\ \widetilde{u}_{N}^{T}\\ \widetilde{u}_{T}^{T}\end{pmatrix},\quad\mathscr{C}^{(1)}=\begin{pmatrix} \alpha^{-1}&0&0\\ \alpha^{-1}&\alpha&0\\ 0&0&1\end{pmatrix}.\] (C.6) The following smooth (at \(r=r_{b_{0}}\)) bundle splitting \[T^{*}M=\langle dt_{0}\rangle\oplus\langle dr\rangle\oplus T^{*}\mathbb{S}^{2}\] (C.7) induces \[S^{2}T^{*}M=\langle dt_{0}^{2}\rangle\oplus(2dt_{0}\,dr)\oplus(2dt_{0}\cdot T ^{*}\mathbb{S}^{2})\oplus\langle dr^{2}\rangle\oplus(2dr\cdot T^{*}\mathbb{S} ^{2})\oplus S^{2}T^{*}\mathbb{S}^{2}.\] (C.8) Then one can write a section \(u\) of \(S^{2}T^{*}M\) as \[u =u_{NN}\,e^{0}e^{0}+2u_{NTN}\,e^{0}e^{1}+2e^{0}\cdot u_{NTT}+u_{ NN}^{T}\,e^{1}e^{1}+2e^{1}\cdot u_{NT}^{T}+u_{TT}^{T}\] \[=\widetilde{u}_{NN}\,dt_{0}^{2}+2\widetilde{u}_{NTN}\,dt_{0}\,dr +2dt_{0}\cdot\widetilde{u}_{NTT}+\widetilde{u}_{NN}^{T}\,dr^{2}+2dr\cdot \widetilde{u}_{NT}^{T}+\widetilde{u}_{TT}^{T},\] and the transition matrix is defined by \[\begin{pmatrix}u_{NN}\\ u_{NTN}\\ u_{NTT}\\ u_{NN}^{T}\\ u_{NT}^{T}\\ u_{TT}^{T}\end{pmatrix}=\mathscr{C}^{(2)}\begin{pmatrix}\widetilde{u}_{NN}\\ \widetilde{u}_{NTN}\\ \widetilde{u}_{NTT}\\ \widetilde{u}_{NN}^{T}\\ \widetilde{u}_{NT}^{T}\\ \widetilde{u}_{TT}^{T}\end{pmatrix},\quad\mathscr{C}^{(2)}=\begin{pmatrix} \alpha^{-2}&0&0&0&0&0\\ \alpha^{-2}&1&0&0&0&0\\ 0&0&\alpha^{-1}&0&0&0\\ \alpha^{-2}&2&0&\alpha^{2}&0&0\\ 0&0&\alpha^{-1}&0&\alpha&0\\ 0&0&0&0&0&1\end{pmatrix}.\] (C.9) Then using the coordinates \((t_{0},r,\omega)\) and splitting (C.7), we again use Proposition B.5 to compute \[iS_{\rm sub}(\Box_{g,1})=-2\xi\partial_{t_{0}}+2\kappa\xi^{2}\partial_{\xi}+2 \mu_{b_{0}}\xi\partial_{r}-2\kappa\xi\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}\quad\text{at}\quad L_{\pm}\] (C.10) \[iS_{\rm sub}(\square_{g,2})=-2\xi\partial_{t_{0}}+2\kappa\xi^{2}\partial_{\xi}+2 \mu_{b_{0}}\xi\partial_{r}-2\kappa\xi\begin{pmatrix}-2&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-1&0&0&0\\ 0&0&0&2&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&0\end{pmatrix}\quad\text{at}\quad L_{\pm}.\] (C.11) Next, we turn to the calculation of \(S_{\rm sub}(L_{12})\) and \(S_{\rm sub}(L_{21})\) at \(L_{\pm}\). We first calculate the symbols at \(N^{*}\{r=r_{0}\}=\{t,r_{0},\omega,0,\xi,0\}\), for \(r_{0}\in(r_{b_{0}},\infty)\) close to \(r_{b_{0}}\), in the static coordinates \((t,r,\omega)\) and bundle splittings (B.14). Then we conjugate by \(\mathscr{C}^{(1)}\) or \(\mathscr{C}^{(2)}\) and evaluate at \(\mu_{b_{0}}=0\). 
Concretely, according to (B.24), in terms of the splitting (B.14), we have at \(N^{*}\{r=r_{0}\}\) \[i\sigma_{1}(L_{12})=\frac{4\mathbf{Q}_{0}}{r^{2}}\begin{pmatrix}\alpha\xi&0& 0\\ 0&0&0\\ 0&0&\alpha\xi\\ -\alpha\xi&0&0\\ 0&0&0\\ \alpha\xi r^{2}\not{\!\!g}&0&0\end{pmatrix}.\] So in the smooth splittings (C.7) and (C.8) (multiplying from the left by \((\mathscr{C}^{(2)})^{-1}\) and from the right by \(\mathscr{C}^{(1)}\) and evaluating at \(\mu=0\)), \[i\sigma_{1}(L_{12})=\frac{4\mathbf{Q}_{0}}{r^{2}}\xi\begin{pmatrix}0&0&0&0\\ -1&0&0\\ 0&0&0\\ 0&0&-1\\ r^{2}\not{\!\!g}&0&0\end{pmatrix}\quad\text{at}\quad L_{\pm}.\] (C.12) Using (B.28) and (B.29), in terms of the splitting (B.14), we have at \(N^{*}\{r=r_{0}\}\) \[i\sigma(L_{21})=\frac{\mathbf{Q}_{0}}{r^{2}}\begin{pmatrix}\frac{1}{2}\alpha \xi&0&0&-\frac{1}{2}\alpha\xi&0&\frac{1}{2r^{2}}\alpha\xi\not{\!\!v}\\ 0&0&0&0&0&0\\ 0&0&\alpha\xi&0&0&0\end{pmatrix}.\] Again multiplying this from the left by \((\mathscr{C}^{(1)})^{-1}\) and from the right by \(\mathscr{C}^{(2)}\) and evaluating at \(\mu=0\) give rise to \(i\sigma(L_{21})\) in the smooth splittings (C.7) and (C.8) \[i\sigma(L_{21})=\frac{\mathbf{Q}_{0}}{2r^{2}}\xi\begin{pmatrix}0&0&0&0&0&0\\ 0&2&0&0&0&-\frac{1}{r^{2}}\not{\!\!v}\\ 0&0&2&0&0&0\end{pmatrix}\quad\text{at}\quad L_{\pm}.\] (C.13) Finally, it remains to calculate the subprincipal operators of \((\tilde{\delta}_{g}^{*}-\delta_{g}^{*})\delta_{g}G_{g}\dot{g}\) and \(\delta_{g}^{*}(\tilde{\delta}_{g}-\delta_{g})G_{g}\dot{g}\). First, according to (B.26) and (B.27), we see that in the static splitting (B.14) \[i\sigma_{1}(\delta_{g}G_{g})=\begin{pmatrix}0&\alpha\xi&0&0&0&0\\ \frac{1}{2}\alpha\xi&0&0&\frac{1}{2}\alpha\xi&0&-\frac{\alpha\xi}{2r^{2}} \not{\!\!v}\\ 0&0&0&0&\alpha\xi&0\end{pmatrix}\] at \(N^{*}\{r=r_{0}\}\), and thus \[i\sigma_{1}(\delta_{g}G_{g})=\frac{1}{2}\xi\begin{pmatrix}2&0&0&0&0&0\\ 0&0&0&0&0&-\frac{1}{r^{2}}\not{\!\!v}\\ 0&0&2&0&0&0\end{pmatrix}\quad\text{at}\quad L_{\pm}.\] Also, in the smooth splitting, we have at \(L_{\pm}\) \[i\sigma_{1}(\delta_{g}^{*})=\begin{pmatrix}0&0&0\\ -\frac{\xi}{2}&0&0\\ 0&0&0\\ 0&-\xi&0\\ 0&0&-\frac{\xi}{2}\\ 0&0&0\end{pmatrix}.\] Since \(\bar{\delta}_{g}^{*}-\delta_{g}^{*}=2\gamma\mathfrak{c}\otimes_{s}(\cdot)- \frac{1}{2}\gamma g^{-1}(\mathfrak{c},\cdot)g\) and \(\bar{\delta}_{g}-\delta_{g}=2\gamma\iota_{\mathfrak{c}}^{g}(\cdot)-\frac{1}{ 2}\gamma\mathrm{ctr}_{g}(\cdot)\) where \(\mathfrak{c}=\tilde{\chi}(r)(dt_{0}-\mathfrak{b}dr)\) with \(\tilde{\chi}(r)=1\) near event horizon and \(\mathfrak{b}>2\), it follows that in terms of the smooth splitting (C.7) and (C.8) \[\bar{\delta}_{g}^{*}-\delta_{g}^{*}=\begin{pmatrix}2\gamma&0&0\\ -\frac{1}{2}\mathfrak{b}\gamma&\frac{1}{2}\gamma&0\\ 0&0&\gamma\\ 0&-2\mathfrak{b}\gamma&0\\ 0&0&-\mathfrak{b}\gamma\\ \frac{1}{2}\mathfrak{b}\gamma r^{2}\not{g}&-\frac{1}{2}\gamma r^{2}\not{g}&0 \end{pmatrix},\quad\bar{\delta}_{g}-\delta_{g}=\begin{pmatrix}-2\mathfrak{b} \gamma&\gamma&0&0&0&-\frac{1}{2}\gamma r^{-2}\not{\mathfrak{c}}\\ 0&-\mathfrak{b}\gamma&0&2\gamma&0&\frac{1}{2}\mathfrak{b}\gamma r^{-2}\not{ \mathfrak{c}}\\ 0&0&-2\mathfrak{b}\gamma&0&2\gamma&0\\ \end{pmatrix}\] at \(L_{\pm}\). 
Therefore, in terms of the smooth splitting, we have at \(L_{\pm}\) \[i\sigma_{1}((\bar{\delta}_{g}^{*}-\delta_{g}^{*})\delta_{g}G_{g}) =\frac{1}{2}\gamma\xi\begin{pmatrix}4&0&0&0&0&0\\ -\mathfrak{b}&0&0&0&0&-\frac{1}{2\tau}\not{\mathfrak{c}}\\ 0&0&2&0&0&0\\ 0&0&0&0&0&\frac{2\mathfrak{b}}{\tau}\not{\mathfrak{c}}\\ 0&0&-2\mathfrak{b}&0&0&0\\ \mathfrak{b}r^{2}\not{g}&0&0&0&0&\frac{1}{2}\not{g}\not{\mathfrak{c}}\\ \end{pmatrix},\] (C.14) \[i\sigma_{1}(\delta_{g}G_{g}(\bar{\delta}_{g}^{*}-\delta_{g}^{*})) =\frac{1}{2}\gamma\xi\begin{pmatrix}4&0&0\\ -\mathfrak{b}&1&0\\ 0&0&2\end{pmatrix},\] (C.15) \[i\sigma_{1}(\delta_{g}^{*}(\bar{\delta}_{g}-\delta_{g})G_{g}) =\frac{1}{2}\gamma\xi\begin{pmatrix}0&0&0&0&0&0&\frac{1}{2\tau} \not{\mathfrak{c}}\\ 2\mathfrak{b}&-2&0&0&0&\frac{1}{2\tau}\not{\mathfrak{c}}\\ 0&0&0&0&0&0\\ 0&2\mathfrak{b}&0&-4&0&-\frac{\mathfrak{b}}{\tau^{2}}\not{\mathfrak{c}}\\ 0&2\mathfrak{b}&0&-2&0&0\\ 0&0&0&0&0&0\end{pmatrix},\] (C.16) \[i\sigma_{1}((\bar{\delta}_{g}-\delta_{g})G_{g}\delta_{g}^{*}) =\frac{1}{2}\gamma\xi\begin{pmatrix}-1&0&0\\ \mathfrak{b}&-4&0\\ 0&0&-2\end{pmatrix}.\] (C.17) In the splitting (C.7), combining (C.10) with (C.15) and (C.17) yields \[S_{\mathrm{sub}}(-2\mathcal{P})=-2\xi\partial_{t_{0}}-2\kappa\xi^{2}\partial_ {\xi}+2\mu_{b_{0}}\xi\partial_{r}+S_{\mathcal{P}},\quad S_{\mathrm{sub}}(-2 \mathcal{W})=-2\xi\partial_{t_{0}}-2\kappa\xi^{2}\partial_{\xi}+2\mu_{b_{0}} \xi\partial_{r}+S_{\mathcal{W}}\] where \[S_{\mathcal{P}}=\xi\begin{pmatrix}2\kappa-4\gamma&0&0\\ \mathfrak{b}\gamma&-\gamma-2\kappa&0\\ 0&0&-2\gamma\end{pmatrix},\quad S_{\mathcal{W}}=\xi\begin{pmatrix}2\kappa+ \gamma&0&0\\ -\mathfrak{b}\gamma&4\gamma-2\kappa&0\\ 0&0&2\gamma\end{pmatrix}.\] In the splitting (C.7), equipping \(T^{*}M\) with the Hermitian inner product \(1\oplus 1\oplus\not{g}^{-1}\), the first three terms of \(S_{\mathrm{sub}}(-2\mathcal{P}),S_{\mathrm{sub}}(-2\mathcal{W})\) are formally self adjoint with respect to the symplectic volume form on \(T^{*}M\), and the last term \(S_{\mathcal{P}}\) resp. \(S_{\mathcal{W}}\) of \(iS_{\mathrm{sub}}(-2\mathcal{P})\) resp. \(iS_{\mathrm{sub}}(-2\mathcal{W})\), multiplied by \(\xi^{-1}\), has eigenvalues \[2(\kappa-2\gamma),\ -2\kappa-\gamma,\ -2\gamma,\quad\text{resp.}\quad-2(\kappa-2 \gamma),\ 2\gamma,\ 2\kappa+\gamma.\] (C.18) Now we turn to the calculation \(L\). 
Using the following decomposition \[S^{2}T^{*}\mathbb{S}^{2}=\langle r^{2}\not{g}\rangle\oplus\not{g}^{\perp},\] we have \[r^{2}\not{g}=\begin{pmatrix}1\\ 0\end{pmatrix},\quad r^{-2}\not{v}=\begin{pmatrix}2&0\end{pmatrix}.\] This further induces the splitting \[\begin{split}\langle dt_{0}^{2}\rangle&\oplus\langle 2dt_{0}\,dr \rangle\oplus(2dt_{0}\cdot T^{*}\mathbb{S}^{2})\oplus\langle dr^{2}\rangle \oplus(2dr\cdot T^{*}\mathbb{S}^{2})\oplus\langle r^{2}\not{g}\rangle\oplus \not{g}^{\perp}\\ &\oplus\langle dt_{0}\rangle\oplus\langle dr\rangle\oplus T^{*} \mathbb{S}^{2}.\end{split}\] (C.19) In the above splitting (C.19), putting (C.10),(C.11), (C.12), (C.13), (C.14) and (C.16) together yields \[iS_{\mathrm{sub}}(-L)=-2\xi\partial_{t_{0}}-2\kappa\xi^{2}\partial_{\xi}+2 \mu_{b_{0}}\xi\partial_{r}+S_{L}\] where \[S_{L}=\xi\begin{pmatrix}4\kappa-4\gamma&0&0&0&0&0&0&0&0&0\\ -\mathfrak{b}\gamma&2\gamma&0&0&0&0&-4\mathfrak{Q}^{\prime\prime}&0&0\\ 0&0&2\kappa-2\gamma&0&0&0&0&0&0&0\\ 0&-2\mathfrak{b}\gamma&0&4\gamma-4\kappa&0&-2\mathfrak{b}\gamma&0&0&0&0\\ 0&-2\mathfrak{b}\gamma&0&2\gamma&-2\kappa&0&0&0&0&-4\mathbf{Q}^{\prime\prime} \\ -\mathfrak{b}\gamma&0&0&0&0&-\gamma&0&4\mathbf{Q}^{\prime\prime}&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&2\kappa&0&0\\ 0&\mathbf{Q}^{\prime}&0&0&0&-\mathbf{Q}^{\prime\prime}&0&0&-2\kappa&0\\ 0&0&\mathbf{Q}^{\prime\prime}&0&0&0&0&0&0&0\end{pmatrix}\] with \(\mathbf{Q}^{\prime\prime}=\mathbf{Q}_{0}r^{-2}\). In the splitting (C.19), equipping \(S^{2}T^{*}M\oplus T^{*}M\) with the Hermitian inner product \(1\oplus 1\oplus\not{g}^{-1}\oplus 1\oplus\mathfrak{g}^{-1}\oplus 1 \oplus 1\oplus 1\oplus\mathfrak{g}-1\), the first three terms of \(S_{\mathrm{sub}}(-L)\) are formally self adjoint with respect to the symplectic volume form on \(S^{2}T^{*}M\oplus T^{*}M\), and the last term \(S_{L}\) of \(iS_{\mathrm{sub}}(-L)\), multiplied by \(\xi^{-1}\) has eigenvalues \[0,\ 0,\ -2\kappa,\ -2\kappa,\ 2\kappa,\ -4(\kappa-\gamma),\ 2(\kappa-\gamma), \ 4(\kappa-\gamma),\ -\gamma,\ 2\gamma.\] (C.20) Let \(P\in\{-2\mathcal{P},-2\mathcal{W},-L\}\). Then the quantity \(\tilde{\beta}\) for \(P\) is defined as \[|\xi|^{-1}\frac{1}{2i}\Big{(}S_{\mathrm{sub}}(P)-\big{(}S_{\mathrm{sub}}(P) \big{)}^{*}\Big{)}=\mp\tilde{\beta}\beta_{0}\quad\text{at}\quad L_{\pm}\] where \(\beta_{0}=2\kappa\). Recall that \(\beta_{\mathrm{sup}}=\sup\tilde{\beta}\) and \(\beta_{\mathrm{inf}}=\inf\tilde{\beta}\). Then for sufficiently small \(\gamma>0\), we have \[-2\mathcal{P}:\ \beta_{\mathrm{sup}}=1-\frac{2\gamma}{\kappa}, \quad\beta_{\mathrm{inf}}=-1-\frac{\gamma}{2\kappa};\] \[-2\mathcal{W}:\ \beta_{\mathrm{sup}}=1+\frac{\gamma}{2\kappa}, \quad\beta_{\mathrm{inf}}=-1+\frac{2\gamma}{\kappa};\] (C.21) \[-L:\ \beta_{\mathrm{sup}}=2-\frac{2\gamma}{\kappa}, \quad\beta_{\mathrm{inf}}=-2+\frac{2\gamma}{\kappa}.\] We again point out the relation between the operators \(\mathcal{P},\mathcal{W},L\) and their Fourier transform \(\widehat{\mathcal{P}}(\sigma),\ \widehat{\mathcal{W}}(\sigma)\) and \(\widehat{L}(\sigma)\). _Remark C.1_.: Given a operator \(P\in\{-2\mathcal{P},-2\mathcal{W},-L\}\), we define \(\widehat{P}(\sigma):=e^{i\sigma t_{*}}Pe^{-i\sigma t_{*}}\) with \(\sigma\in\mathbb{C}\). The radial points set of \(\widehat{P}\) (which we still denote by \(L_{\pm}\)) is given by \(L_{\pm}=\{(r_{b_{0}},\omega;\xi,0)\mid\pm\xi>0\}\). 
Then we have \[\frac{1}{2i}\Big{(}S_{\mathrm{sub}}(\widehat{P})-\big{(}S_{\mathrm{sub}}( \widehat{P})\big{)}^{*}\Big{)}=\frac{1}{2i}\Big{(}S_{\mathrm{sub}}(P)-\big{(}S_ {\mathrm{sub}}(P)\big{)}^{*}\Big{)}+\sigma_{1}(\widehat{\widehat{\beta_{g,0}} (\sigma)-(\widehat{\widehat{\beta_{g,0}}(\sigma)})^{*}}{2i})\quad\text{at} \quad L_{\pm}.\] Finally, according to the discussion around (5.59), the calculation of the threshold regularity at radial points at event horizon for the semiclassical operator \(h^{2}\widehat{P}(h^{-1}z)\) is equal to that of \(\widehat{P}(\sigma)\).
2310.09108
Nonlinear opto-vibronics in molecular systems
We analytically tackle opto-vibronic interactions in molecular systems driven by either classical or quantum light fields. In particular, we examine a simple model of molecules with two relevant electronic levels, characterized by potential landscapes with different positions of minima along the internuclear coordinate and of varying curvatures. Such systems exhibit an electron-vibron interaction, which can be comprised of linear and quadratic terms in the vibrational displacement. By employing a combination of conditional displacement and squeezing operators, we present analytical expressions based on a quantum Langevin equations approach, to describe the emission and absorption spectra of such nonlinear molecular systems. Furthermore, we examine the imprint of the quadratic interactions onto the transmission properties of a cavity-molecule system within the collective strong coupling regime of cavity quantum electrodynamics.
Q. Zhang, M. Asjad, M. Reitz, C. Sommer, B. Gurlek, C. Genes
2023-10-13T13:52:25Z
http://arxiv.org/abs/2310.09108v1
# Nonlinear opto-vibronics in molecular systems ###### Abstract We analytically tackle opto-vibronic interactions in molecular systems driven by either classical or quantum light fields. In particular, we examine a simple model of molecules with two relevant electronic levels, characterized by potential landscapes with different positions of minima along the internuclear coordinate and of varying curvatures. Such systems exhibit an electron-vibron interaction, which can be comprised of linear and quadratic terms in the vibrational displacement. By employing a combination of conditional displacement and squeezing operators, we present analytical expressions based on a quantum Langevin equations approach, to describe the emission and absorption spectra of such nonlinear molecular systems. Furthermore, we examine the imprint of the quadratic interactions onto the transmission properties of a cavity-molecule system within the collective strong coupling regime of cavity quantum electrodynamics. ## I Introduction Opto-vibrational interactions in molecular systems occur in an indirect fashion as light couples to electronic transitions, which are in turn coupled to the vibrations of nuclei [1; 2; 3; 4]. A standard description of electron-vibron interactions, under the Born-Oppenheimer approximation, is given by the Holstein Hamiltonian [5; 6] which is a spin-boson model linear in the vibrational displacement. Some analytical treatments based on quantum Langevin equations (QLEs) [7; 8; 9; 10; 11] have been shown to provide approximate analytical results for this model for a large number of vibrational modes and in the presence of fast vibrational relaxation typically occurring in both bulk [8; 12] and solvent environments [13]. Similar methods have been used in cavity optomechanics [14; 15], where cavity-confined quantum light modes are coupled to macroscopic oscillators via the radiation pressure Hamiltonian, to study the strong photon-phonon coupling regime [16; 17]. Such theoretical treatments are based on a polaron transformation which allows for the diagonalization of the bare Holstein Hamiltonian [18]. This can be understood as a conditional displacement operation, where the electronic state dictates whether or not a displacement in the vibrational subspace should be performed. In consequence, when a photon excites an electronic transition between two copies of the same harmonic potential landscape slightly shifted (see Fig. 1(a), the vibrational state is excited to a coherent state. The underlying assumption here is however that the potential landscapes are identical. In reality it can happen that the curvatures of the two potential energy surface are different, as illustrated in Fig. 1(b): an electronic transition will then be accompanied by a squeezing of the vibrational wave-packet. In such a case the polaron transformation is modified by an operation involving a conditional squeezing operator. Most generally, one can imagine the situation depicted in Fig. 1(c) where the proper diagonalizing transformation involves a conditional displacement followed by squeezing. In optomechanics, this corresponds to a quadratic photon-phonon interaction [19]. We provide here an analytical treatment based on a set of QLEs for effective spin operators dressed by vibrations, which can be solved under some approximations to provide information about emission and absorption spectra. 
Additionally, we investigate the transmission properties of an optical cavity within the strong coupling regimes of cavity quantum electrodynamics. By studying the interaction between the molecular systems and the Figure 1: (a) Standard scenario where the excited state potential landscape is a copy of the ground state landscape slightly shifted. Electronic excitation is accompanied by the action of a conditional displacement operator \(\mathcal{D}^{\sigma^{\dagger}\sigma}\), where \(\sigma\) is the ladder operator from the excited to the ground electronic state. (b) Scenario with unshifted potentials with different curvatures. Electronic excitation is accompanied by a conditional squeezing operation \(\mathcal{S}^{\sigma^{\dagger}\sigma}\). (c) Combined model where electronic excitation leads to a displacing and squeezing operation. cavity, we gain insight into the nature of light-matter interactions in these complex environments. The paper is organized as follows: in Sec. II we introduce the modified Holstein model obtained from first principle derivations of the electron-vibration coupling for a scenario depicted in Fig. 1(c). Our analytical treatment is based on a set of simplified QLEs for vibrations and electronic degrees of freedom as derived in Sec. III. We proceed with solving the QLEs under the approximation of weak excitation of the upper electronic state to obtain absorption and emission spectra under illumination with classical light. Finally, in Sec. IV we add a quantum confined light field coupled to the electronic transition via the Tavis-Cummings Hamiltonian and derive the transmission profile of the cavity in the weak and strong coupling regimes of light-matter interactions. ## II The modified Holstein model We consider a molecule with two relevant electronic states denoted by \(\ket{g}\) and \(\ket{e}\) for ground and excited, respectively. Transitions between these two states are characterized by Pauli lowering operators \(\sigma=\ket{g}\bra{e}\) and its corresponding Hermitian conjugate. As illustrated in Fig. 1(c), the ground/excited potential landscapes are assumed to have a parabolic shape, with the minima of these two potential landscapes separated by \(R_{eg}\) and with different curvatures, thus having different vibrational frequencies: \(\nu_{g}\) for the electronic ground state and \(\nu_{e}\) for the electronic excited state. The Hamiltonian describing the molecular system can be expressed as (\(\hbar=1\)) \[\mathcal{H}=\mathcal{V}_{e}(\hat{R},\hat{P})\sigma^{\dagger}\sigma+\mathcal{V }_{g}(\hat{R},\hat{P})\sigma\sigma^{\dagger}, \tag{1}\] where \(\mathcal{V}_{e}\) and \(\mathcal{V}_{g}\) denote the potential landscapes in the electronic excited and ground state, respectively, defined onto the direction of the nuclear coordinate as \[\mathcal{V}_{e}(\hat{R},\hat{P}) =\omega_{e}+\frac{\hat{P}^{2}}{2\mu}+\frac{1}{2}\mu\nu_{e}^{2}( \hat{R}-R_{\text{eg}})^{2}, \tag{2a}\] \[\mathcal{V}_{g}(\hat{R},\hat{P}) =\omega_{g}+\frac{\hat{P}^{2}}{2\mu}+\frac{1}{2}\mu\nu_{g}^{2} \hat{R}^{2}, \tag{2b}\] with the reduced mass \(\mu\), the momentum operator and position operators \(\hat{P}\) and \(\hat{R}\) satisfying the commutation relation \([\hat{R},\hat{P}]=i\). 
Notice that the matrix elements of the Hamiltonian \(\mathcal{H}\) can be written in a basis formed by \(\{\ket{g;m_{g}}=\ket{g}\otimes\ket{m_{g}},\ket{e;m_{e}}=\ket{e}\otimes\ket{m_ {e}}\}\), where the Fock states \(\ket{m_{e}}\) and \(\ket{m_{g}}\), respectively, refer to the eigenstates of the vibrational Hamiltonian part contained in \(\mathcal{V}_{g}(\hat{R},\hat{P})\) and \(\mathcal{V}_{e}(\hat{R},\hat{P})\), respectively. However, one can express the quadratures in terms of creation \(b^{\dagger}=(\hat{R}/R_{\text{app}}-iR_{\text{app}}\hat{P})/\sqrt{2}\) and annihilation \(b=(\hat{R}/R_{\text{app}}+iR_{\text{app}}\hat{P})/\sqrt{2}\) operators. The operators fulfill the following commutation \(\left[b,b^{\dagger}\right]=1\) and the zero-point motion is defined as \(R_{\text{app}}=1/\sqrt{2\mu\nu_{g}}\). Notice that the definition of this bosonic operator is performed with respect to the ground state such that it diagonalizes the ground state vibrational problem. The Hamiltonian in Eq. (1) can now be written as \[\mathcal{H}= \nu_{g}b^{\dagger}b+\omega_{0}\sigma^{\dagger}\sigma+\lambda_{1} \nu_{g}(b+b^{\dagger})\sigma^{\dagger}\sigma \tag{3}\] \[+\lambda_{2}\nu_{g}(b+b^{\dagger})^{2}\sigma^{\dagger}\sigma.\] The linear coupling parameter results from the mismatch in the positions of the minima \(\lambda_{1}=-\mu\nu_{e}^{2}R_{\text{eg}}R_{\text{app}}/\nu_{g}\) while the quadratic coupling parameter is proportional to the relative change in vibrational frequencies \(\lambda_{2}=\left(\nu_{e}^{2}-\nu_{g}^{2}\right)/\left(4\nu_{g}^{2}\right)\). The bare electronic frequency splitting is modified by the vibronic coupling \(\omega_{0}=\omega_{e}-\omega_{g}+\lambda_{1}^{2}\nu_{g}^{3}/\nu_{e}^{2}\). However, it is more convenient to use a single basis formulation where only the eigenstates of the harmonic oscillator in the ground state are considered, i.e., the eigenstates of \(\nu_{g}b^{\dagger}b\) denoted by \(\{\ket{m_{g}}\}\). To this end, one can take the level-dependent unitary transformation \(\tilde{\mathcal{H}}=\mathcal{U}^{\dagger}\mathcal{H}\mathcal{U}\) with \[\mathcal{U}=\mathcal{D}(r_{d})^{\sigma^{\dagger}\sigma}\mathcal{S}(r_{s})^{ \sigma^{\dagger}\sigma}=\sigma\sigma^{\dagger}+\mathcal{D}(r_{d})\mathcal{S} (r_{s})\sigma^{\dagger}\sigma. \tag{4}\] The definitions of the displacement and squeezing operators are the standard ones employed in quantum optics \[\mathcal{D}(r_{d})=e^{r_{d}(b^{\dagger}-b)}\quad\text{and}\quad\mathcal{S}(r_ {s})=e^{\frac{1}{2}r_{s}\left(b^{2}-b^{\dagger 2}\right)} \tag{5}\] which employ the following displacement \(r_{d}\) and squeezing \(r_{s}\) parameters defined as \[r_{d}=-\lambda_{1}\frac{\nu_{g}^{2}}{\nu_{e}^{2}}\quad\text{and}\quad r_{s}= \frac{1}{2}\left(\ln\nu_{e}-\ln\nu_{g}\right). \tag{6}\] Finally, the Hamiltonian is expressed in diagonal form \[\tilde{\mathcal{H}}=\nu_{g}b^{\dagger}b\sigma\sigma^{\dagger}+\left(\nu_{e}b^ {\dagger}b+\omega_{00}\right)\sigma^{\dagger}\sigma, \tag{7}\] Figure 2: (a) Schematic diagram of a molecular system exhibiting two parabolic electronic potential surfaces, slightly shifted and with different curvatures quantified by the vibrational frequencies \(\nu_{g}\) and \(\nu_{e}\). Histogram of the vibrational state occupancy in the electronic ground state upon emission from \(\ket{e,0_{e}}\) in (b) and in the electronic excited state upon external drive from the \(\ket{g;0_{g}}\) state in (c) for various values of \(\lambda_{2}\) at fixed \(\lambda_{1}=1\). 
where the effective frequency \(\omega_{00}=\omega_{e}-\omega_{g}+(\nu_{e}-\nu_{g})/2\) relates to the zero-phonon line. This is nothing more than a generalized polaron transformation where the electronic coherence operator \(\sigma\) is _dressed_ by the vibrational modes as \(\sigma\mathcal{D}(r_{d})\mathcal{S}(r_{s})\) via both a displacement and a squeezing operation. This offers a recipe to obtain the intensity of vibronic transitions in the emission and absorption processes. Assuming the molecule initially in the excited state with zero vibrations \(\ket{e;0_{e}}\), the probability of ending up in the state \(\ket{g;m_{g}}\) is governed by the overlap between the two vibrational wave functions [see Fig. 2(a)] as \[\begin{split} S_{m}^{\text{em}}=&\ket{\langle m_{g} |0_{e}\rangle}^{2}=\ket{\langle m_{g}|\mathcal{S}(r_{s})\mathcal{D}(r_{d})\ket{ 0_{g}}|^{2}}\\ =&\frac{e^{-r_{s}^{2}\alpha}}{\cosh(r_{s})}\left[H_ {m}\left(\frac{\alpha r_{d}}{2\sqrt{\beta}}\right)\right]^{2}\frac{\beta^{m}}{ m!},\end{split} \tag{8}\] where \(H_{m}(x)\) are Hermite polynomials, \(\alpha=\tanh r_{s}+1\), and \(\beta=\left(\tanh r_{s}\right)/2\). Similarly, we can find the absorption probability amplitude for the absorption transition \(\ket{g;0_{g}}\rightarrow\ket{e;m_{e}}\) via the Hermitian adjoint operator \(\sigma^{\dagger}\mathcal{S}^{\dagger}(r_{s})\mathcal{D}^{\dagger}(r_{d})\) such that \[S_{m}^{\text{ab}}=\frac{e^{\alpha^{\prime}r_{d}^{2}\exp(2r_{s})}}{\cosh(r_{s}) }\left[H_{m}\left(-\frac{i\alpha^{\prime}r_{d}e^{r_{s}}}{2\sqrt{\beta}}\right) \right]^{2}\frac{\left(-\beta\right)^{m}}{m!}, \tag{9}\] with \(\alpha^{\prime}=\tanh r_{s}-1\). We numerically illustrate the departure from such a statistics with various values of \(\lambda_{2}\) in Figs. 2(b)-(c). Given the commutator \([\mathcal{D}(r_{s}),\mathcal{S}(r_{s})]\neq 0\), the presence of the product \(\mathcal{D}(r_{s})\mathcal{S}(r_{s})\) renders an asymmetry between the emission event \(\ket{e;0_{e}}\rightarrow\ket{g;m_{g}}\) and the absorption event \(\ket{g;0_{g}}\rightarrow\ket{e;m_{e}}\). Also, as a simple check, in the limiting case where \(\lambda_{2}=0\), i.e., \(\nu_{e}=\nu_{g}\), both transition strengths follow the same Poissonian distribution \(e^{-\lambda_{2}^{2}}\lambda_{1}^{2m}/m!\), as expected, reproducing the mirroring effect of emission and absorption spectra usually exhibited by most molecular transitions. ## III Absorption and emission spectra In order to derive spectroscopic quantities, we will assume a continuous wave classical drive coupled to the electronic transition incorporated in the following Hamiltonian \[\mathcal{H}_{\ell}=i\eta_{\ell}\left(\sigma^{\dagger}e^{-i\omega_{\ell}t}- \sigma e^{i\omega_{\ell}t}\right), \tag{10}\] with the Rabi frequency \(\eta_{\ell}\) and laser frequency \(\omega_{\ell}\). Since the molecule is also coupled to the electromagnetic vacuum and additional vibrational relaxation baths, we will make use of open system dynamics methods, first formulated in terms of a master equation. First, we include a spontaneous emission channel with the collapse operator \(\sigma\) at rate \(\gamma\). In addition, as the electronic transition is modified by the vibrational mode [20; 21; 22], the influence of the environment onto the dynamics of the vibrational mode can be well described by a collapse operator \(\mathcal{U}\mathcal{U}^{\dagger}\) at the rate \(\Gamma\). 
For numerical investigations, the master equation for the system is given \[\dot{\rho}=-i\left[\mathcal{H}+\mathcal{H}_{\ell},\rho\right]+\mathcal{L}_{ \gamma}[\sigma]\rho+\mathcal{L}_{\Gamma}\left[\mathcal{U}\mathcal{U}\mathcal{U }^{\dagger}\right]\rho, \tag{11}\] where the standard Lindblad superoperator is written as \(\mathcal{L}_{\gamma\mathcal{O}}\cdot=\gamma_{\mathcal{O}}\left(2\mathcal{O} \cdot\mathcal{O}^{\dagger}-\mathcal{O}^{\dagger}\mathcal{O}\cdot-\mathcal{O} ^{\dagger}\mathcal{O}\right)\) for a collapse operator \(\mathcal{O}\) and a corresponding decay rate \(\gamma_{\mathcal{O}}\). In particular in the polaron transformation \(\tilde{\rho}=\mathcal{U}^{\dagger}\rho\mathcal{U}\), the last term in Eq. (11) is going to the familiar form \(\mathcal{L}_{\Gamma}\left[b\right]\tilde{\rho}\). The dot stands for the position where the density operator, on which the Lindblad superoperator is applied on, is to be included. It is convenient, for deriving analytical results, to map the master equation into an equivalent set of QLEs. For any system operator \(\mathcal{A}\) this can be done as follows [7; 23] \[\dot{\mathcal{A}}= -i\left[\mathcal{A},\mathcal{H}+\mathcal{H}_{\ell}\right]- \left[\mathcal{A},\mathcal{O}^{\dagger}\right]\left(\gamma_{\mathcal{O}} \mathcal{O}-\sqrt{2\gamma_{\mathcal{O}}}\mathcal{O}_{\text{in}}\right)\] \[+\left(\gamma_{\mathcal{O}}\mathcal{O}^{\dagger}-\sqrt{2\gamma_{ \mathcal{O}}}\mathcal{O}_{\text{in}}^{\dagger}\right)\left[\mathcal{A}, \mathcal{O}\right], \tag{12}\] where \(\mathcal{O}_{\text{in}}\) is the zero-averaged and delta-correlated input noise operator associated with the collapse operator \(\mathcal{O}\) and \(\gamma_{\mathcal{O}}\) is the associated decay rate. For molecules in solid-state environments or in solvents, the vibrational relaxation rate is usually very large greatly surpassing both \(\gamma\) and \(\eta_{\ell}\). Therefore, fluorescence occurs preferentially from the state \(\ket{e,0_{e}}\), which lies at the bottom of the excited state manifold: this is generally referred to as Kasha's rule [24]. The same mechanism is valid for the absorption process, where absorption occurs from the state \(\ket{g,0_{g}}\), the lowest in energy. We will make use of this fast vibrational relaxation to impose a quick timescale for the modification of the bosonic \(b\) operators and use their quasi-steady state values in the following. First, however, let us partition the total Hilbert space into two orthogonal subspaces (ground and excited electronic state manifolds) via the following two projection operators \(\mathcal{P}_{g}=\sigma\sigma^{\dagger}\) and \(\mathcal{P}_{e}=\sigma^{\dagger}\sigma\). Let us first pay attention to the dynamical equation in the manifold of \(\mathcal{P}_{e}\). For convenience reasons, we introduce a projected bosonic operator \(b_{e}=\mathcal{U}\mathcal{U}\mathcal{U}^{\dagger}\mathcal{P}_{e}\) acting only in this manifold and more explicitly expressed as \[b_{e}=\left(\cosh r_{s}b+\sinh r_{s}b^{\dagger}-r_{d}e^{r_{s}}\right)\mathcal{P} _{e} \tag{13}\] and obeying the relation \(b_{e}^{\dagger}b_{e}\ket{e;m_{e}}=m_{e}\ket{e;m_{e}}\). Meanwhile, we define a time-dependent generalized polaron operator [7; 8], by the transformation \(\tilde{\sigma}_{e}=\sigma\mathcal{S}_{e}^{\dagger}\mathcal{D}_{e}^{\dagger} \exp\left[i(\nu_{e}-\nu_{g})b_{e}^{\dagger}b_{e}t\right]\). 
This allows the derivation of a set of effective QLEs in the rotating frame at the driving frequency \(\omega_{\ell}\) for the emission process (see Appendix B for details) \[\dot{b}_{e}\approx -\left(i\nu_{e}+\Gamma\right)b_{e}+\sqrt{2\Gamma}\mathcal{B}_{e}^{ \text{in}}\mathcal{P}_{e}, \tag{14a}\] \[\dot{\tilde{\sigma}}_{e}\approx -\left(i\mathcal{A}_{\ell}+\gamma\right)\tilde{\sigma}_{e}-\eta_{ \ell}\mathcal{S}_{e}^{\dagger}\mathcal{D}_{e}^{\dagger}e^{i(\nu_{e}-\nu_{g})b _{e}^{\dagger}b_{e}t}\] (14b) \[+\sqrt{2\gamma}\sigma_{\text{in}}\mathcal{S}_{e}^{\dagger} \mathcal{D}_{e}^{\dagger}e^{i(\nu_{e}-\nu_{g})b_{e}^{\dagger}b_{e}t},\] \[\dot{\mathcal{P}}_{e}= -2\gamma\mathcal{P}_{e}+\eta_{\ell}(\sigma+\sigma^{\dagger})+\sqrt {2\gamma}(\sigma^{\dagger}\sigma_{\text{in}}+\sigma_{\text{in}}^{\dagger}\sigma), \tag{14c}\] with the detuning \(\Delta_{\ell}=\omega_{00}-\omega_{\ell}\), the displacement operator \(\mathcal{D}_{e}=\exp\left[r_{d}(b_{e}^{\dagger}-b_{e})\right]\mathcal{P}_{e}\) and the squeezing operator \(\mathcal{S}_{e}=\exp\left[r_{s}(b_{e}^{2}-b_{e}^{\dagger 2})/2\right]\mathcal{P}_{e}\). The input noises \(\mathcal{B}_{e}^{\text{in}}\) and \(\sigma_{\text{in}}\) are zero-averaged and have the following two-time correlations \(\langle\mathcal{B}_{e}^{\text{in}}(t)\mathcal{B}_{e}^{\text{in}\dagger}(t^{ \prime})\rangle=\delta(t-t^{\prime})\) and \(\langle\sigma_{\text{in}}(t)\sigma_{\text{in}}^{\dagger}(t^{\prime})\rangle= \delta(t-t^{\prime})\). In a completely similar fashion, projected operators in the ground electronic state manifold can be defined. Let us introduce the ground state polaron operator via the transformation \(\tilde{\sigma}_{g}=\exp\left[i\left(\nu_{e}-\nu_{g}\right)b_{g}^{\dagger}b_{g }t\right]\mathcal{S}_{g}^{\dagger}\mathcal{D}_{g}^{\dagger}\sigma\) which allows one to derive a similar set of QLEs \[\dot{b}_{g}\approx -(i\nu_{g}+\Gamma)b_{g}+\sqrt{2\Gamma}\mathcal{B}_{g}^{\text{in}} \mathcal{P}_{g}, \tag{15a}\] \[\dot{\tilde{\sigma}}_{g}\approx -(i\Delta_{\ell}+\gamma)\tilde{\sigma}_{g}+\eta_{\ell}e^{i(\nu_{ e}-\nu_{g})b_{g}^{\dagger}b_{g}t}\mathcal{S}_{g}^{\dagger}\mathcal{D}_{g}^{ \dagger}\] (15b) \[+\sqrt{2\gamma}e^{i(\nu_{e}-\nu_{g})b_{g}^{\dagger}b_{g}t} \mathcal{S}_{g}^{\dagger}\mathcal{D}_{g}^{\dagger}\sigma_{\text{in}}.\] As above, the new displacement operator is \(\mathcal{D}_{g}=\exp[r_{d}(b_{g}^{\dagger}-b_{g})]\mathcal{P}_{g}\), and the new squeezing operator is \(\mathcal{S}_{g}=\exp[r_{s}(b_{g}^{2}-b_{g}^{\dagger 2})/2]\mathcal{P}_{g}\). The nonvanishing correlation of the zero-average noise operator is given by \(\langle\mathcal{B}_{g}^{\text{in}}(t)\mathcal{B}_{g}^{\text{in}\dagger}(t^{ \prime})\rangle=\delta(t-t^{\prime})\). We are now in the position of reconstructing the full solution of the coherence operator in steady state by summing over the contributions in the ground and excited state manifolds. This can be done by formal integration of Eq. (14b) and Eq. 
(15b) to obtain a solution for \(\langle\sigma\rangle\) expressed as \[\langle\sigma\rangle= -\eta_{\ell}\int_{0}^{\infty}d\tau\,\Theta(t-\tau)e^{-(i\Delta_{ \ell}+\gamma)(t-\tau)}\left\langle\mathcal{S}_{e}^{\dagger}\left(\tau\right) \mathcal{D}_{e}^{\dagger}\left(\tau\right)e^{i(\nu_{e}-\nu_{g})b_{e}^{\dagger }b_{e}\tau}e^{-i(\nu_{e}-\nu_{g})b_{e}^{\dagger}b_{e}\tau}\mathcal{D}_{e} \left(t\right)\mathcal{S}_{e}\left(t\right)\right\rangle \tag{16}\] \[+\eta_{\ell}\int_{0}^{\infty}d\tau\,\Theta(t-\tau)e^{-(i\Delta_{ \ell}+\gamma)(t-\tau)}\left\langle\mathcal{D}_{g}(t)\mathcal{S}_{g}(t)e^{-i( \nu_{e}-\nu_{g})b_{g}^{\dagger}b_{g}\tau}e^{i(\nu_{e}-\nu_{g})b_{g}^{\dagger}b_ {g}\tau}\mathcal{S}_{g}^{\dagger}(\tau)\right)\mathcal{D}_{g}^{\dagger}(\tau) \rangle\,.\] Here, we have used the Heaviside step function \(\Theta(t)\) and the initial value \(\langle\sigma(0)\rangle=0\). Considering that the vibrational mode has a large relaxation rate (i.e., \(\Gamma\gg\gamma\)), we then decouple the vibronic and electronic degrees of freedom. The two-time correlation functions on the right side of above equation could be expressed as (see Appendix B for details) \[\langle\mathcal{D}_{e}(\tau)\mathcal{S}_{e}(\tau)e^{i(\nu_{e}-\nu_{ g})b_{e}^{\dagger}b_{e}\tau}e^{-i(\nu_{e}-\nu_{g})b_{e}^{\dagger}b_{e}t} \mathcal{S}_{e}^{\dagger}(t)\mathcal{D}_{e}^{\dagger}(t)\rangle =\sum_{m=0}^{\infty}S_{m}^{\text{em}}e^{-m(i\nu_{g}+\Gamma)(t-\tau)} \left\langle\mathcal{P}_{e}\left(\tau\right)\right\rangle, \tag{17a}\] \[\langle\mathcal{D}_{g}(t)\mathcal{S}_{g}(t)e^{-i(\nu_{e}-\nu_{g})b _{g}^{\dagger}b_{g}t}e^{i(\nu_{e}-\nu_{g})b_{g}^{\dagger}b_{g}\tau}\mathcal{S }_{g}^{\dagger}(\tau)\rangle \mathcal{D}_{g}^{\dagger}(\tau)\rangle =\sum_{m=0}^{\infty}S_{m}^{\text{ab}}e^{m(-i\nu_{e}+\Gamma)(t- \tau)}\left\langle\mathcal{P}_{g}(\tau)\right\rangle. \tag{17b}\] Replacing the infinite sums from above back into Eq. (16) leads to a convolution in time. This can be dealt with by employing a Laplace transformation defined as \(\widetilde{f}(s)=\int_{0}^{\infty}dt\,f(t)\exp(-st)\) for a time-dependent function \(f(t)\) at \(t\geq 0\). In such a case, Eq. (16) takes a much simpler form \[\langle\overline{\sigma}\rangle=\frac{\eta_{\ell}}{s}\overline{\mathcal{G}}_{ \text{ab}}-\eta_{\ell}\langle\overline{\mathcal{P}_{e}}\rangle(\overline{ \mathcal{G}}_{\text{em}}+\overline{\mathcal{G}}_{\text{ab}}), \tag{18}\] with the following functions identified corresponding to emission and absorption events, respectively \[\overline{\mathcal{G}}_{\text{em}} =\sum_{m=0}^{\infty}\frac{S_{m}^{\text{em}}}{s+m\Gamma+\gamma+i \left(\Delta_{\ell}-m\nu_{g}\right)}, \tag{19}\] \[\overline{\mathcal{G}}_{\text{ab}} =\sum_{m=0}^{\infty}\frac{S_{m}^{\text{ab}}}{s+\gamma+m\Gamma+i \left(\Delta_{\ell}+m\nu_{e}\right)}. 
\tag{20}\] From these expressions, one can proceed in evaluating analytically the population of the excited state \(p_{e}^{\text{as}}=\lim\limits_{t\to\infty}\langle\sigma^{\dagger}(t)\sigma(t)\rangle\) in steady state (as detailed in Appendix D) \[p_{e}^{\text{as}}=\frac{\sum\limits_{m=0}^{\infty}\gamma_{m}^{ \uparrow}(\omega_{\ell})}{\gamma+\sum\limits_{m=0}^{\infty}\left[\gamma_{m}^{ \uparrow}(\omega_{\ell})+\gamma_{m}^{\downarrow}(\omega_{\ell})\right]}, \tag{21}\] The coefficients \(\gamma_{m}^{\uparrow}(\omega)\) and \(\gamma_{m}^{\downarrow}(\omega)\) represent the dynamic equilibrium population transfer rates for absorption from the ground state to the excited state \(|g;0_{g}\rangle\to|e;m_{e}\rangle\) and emission from the excited to the ground state \(|e;0_{e}\rangle\to|g;m_{g}\rangle\) as illustrated in Fig. 3(a). The rates are analytically expressed as \[\gamma_{m}^{\uparrow}(\omega)= \frac{\eta_{\ell}^{2}S_{m}^{\text{ab}}\left(m\Gamma+\gamma\right) }{(m\Gamma+\gamma)^{2}+(\omega_{00}+m\nu_{e}-\omega)^{2}}, \tag{22a}\] \[\gamma_{m}^{\downarrow}(\omega)= \frac{\eta_{\ell}^{2}S_{m}^{\text{em}}\left(m\Gamma+\gamma\right) }{(m\Gamma+\gamma)^{2}+(\omega_{00}-m\nu_{g}-\omega)^{2}}. \tag{22b}\] Specifically, these rates contribute to the rate equation for the population of the excited state, given by (see Appendix E for detailed derivations): \[\partial_{t}p_{e}=-2(\gamma+\sum_{m=0}^{\infty}\gamma_{m}^{\downarrow})p_{e}+2 \sum_{m=0}^{\infty}\gamma_{m}^{\uparrow}(1-p_{e}). \tag{23}\] This equation holds true under the condition \(\eta_{\ell}\ll\Gamma\). Remarkably, one could also obtain the same expression for the population of the excited state in steady state and compare with full numerical simulations to a very good fit, as illustrated in Fig. 3(b). The parameters are given in the caption and are chosen in close attention to other works [25, 26]. Additionally, we can employ the pump-probe scenario to analyze the absorption and emission processes. In this scenario, the molecule absorbs a photon at the frequency \(\omega_{\ell}\), transitioning to the excited state \(|e;m_{e}\rangle\) under the resonant condition \(\omega_{\ell}=\omega_{00}+m\nu_{e}\). Subsequently, after undergoing fast vibrational relaxation, the molecule emits a photon centered around the frequency \(\omega_{00}-m^{\prime}\nu_{g}\), which can be detected with a modified linewidth \(\gamma+m^{\prime}\Gamma\). The absorption and emission profiles are then obtained by summing up the contributions from all possible cases, resulting in Lorentzian profiles represented by \(\gamma_{m}^{\uparrow/\downarrow}\), as shown in Fig. 3(c): \[S_{\rm Ab}=\sum_{m=0}^{\infty}\gamma_{m}^{\uparrow}\quad\text{and}\quad S_{\rm Em }=\sum_{m=0}^{\infty}\gamma_{m}^{\downarrow}. \tag{24}\] Here, the scaling of the vibrational rates has been on purpose exaggerated in order to clearly point out the difference in energies expected for the smaller and higher energy sidebands. The presence of the quadratic electron-vibron coupling under realistic conditions, is expected to only slightly break the symmetry between the emission and absorption spectra, as the expected values for \(\lambda_{2}\) lie well below in the subunit region. More details on the procedure we have followed for the above derivations is presented in Appendix F and basically follows the quantum regression theorem formalism [23, 27]. 
## IV Molecular polaritonics

Let us now ask what imprint the asymmetry between the ground- and excited-state potential landscapes leaves on the signal of an optical cavity containing such a molecule in the strong-coupling regime of cavity quantum electrodynamics. To this end, we consider a single molecule placed within the optical volume of a single-mode optical cavity mediating transitions between the ground and excited potential landscapes. Under strong optical confinement conditions, the interaction of light and matter can lead to the formation of hybrid quantum states, i.e., polaritons [1, 28, 29, 30, 31, 32, 33, 34, 35, 36], as superpositions of ground or excited electronic states and zero- or single-photon states. While polaritons are eigenstates solely of the electron-photon interaction Hamiltonian, the intrinsic electron-vibron coupling can provide a mechanism of polariton cross-talk, leading to a unidirectional loss of energy from the higher-energy to the lower-energy state. This has been shown analytically in Ref. [7] for the standard case of identical ground and excited state potential landscapes and found to be most pronounced when the vibrational mode is resonant with the interpolariton frequency splitting. Let us now consider the case of \(\mathcal{N}\) molecules inside the spatial extent of a single mode of a Fabry-Perot optical resonator, illustrated in Fig. 4. The dynamics of a single molecule is governed by the Hamiltonian \(\mathcal{H}\) from Eq. (3).

Figure 4: Schematics of an ensemble of molecules inside a Fabry-Perot resonator. Cavity photon loss occurs at rates \(\kappa_{1}\) and \(\kappa_{2}\) via the mirrors M1 and M2, respectively. Light-molecule interactions occur at rate \(g\) while spontaneous emission and cavity driving are at rates \(\gamma\) and \(\eta_{c}\), respectively.

Figure 3: (a) Jablonski diagram illustrating possible emission and absorption processes. (b) Comparison of analytical and numerical results of the excited state population as a function of normalized detuning. Parameters are \(\lambda_{2}=1\) (i.e., \(\nu_{e}/\nu_{g}=2\)), \(\Gamma/\nu_{g}=0.1\), \(\gamma/\Gamma=0.1\) and \(\eta_{\ell}/\gamma=2\). (c) Comparison of analytical results versus numerical simulations for the absorption (shaded in orange) and emission (shaded in green) profiles. Parameters are \(\eta_{\ell}/\gamma=0.1\) and \(\Delta_{\ell}/\nu_{g}=0.1\).

The interaction between the \(\mathcal{N}\) molecules and the cavity field mode is characterized by the Tavis-Cummings model, \[\begin{split}\mathcal{H}_{\text{cav}}=&\omega_{c}a^{ \dagger}a+g\sum_{n=1}^{\mathcal{N}}\left(a\sigma_{n}^{\dagger}+\text{h.c.} \right)\\ &+i\eta_{c}(a^{\dagger}e^{-i\omega_{\ell}t}-\text{h.c.}),\end{split} \tag{25}\] consisting of the free cavity field at frequency \(\omega_{c}\) with bosonic mode \(a\), the Tavis-Cummings interaction with light-matter coupling strength \(g\), and the laser field drive with amplitude \(\eta_{c}\) and frequency \(\omega_{\ell}\). For convenience, we have made the assumption here that all molecules are identical. 
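As a side illustration (our own sketch, not code from this work), the driven Tavis-Cummings Hamiltonian of Eq. (25) can be built numerically for a single bare two-level molecule, with the vibrational dressing omitted, in a truncated photon basis; moving to the frame rotating at the laser frequency removes the explicit time dependence of the drive:

```python
import numpy as np

n_ph = 5                                      # photon-number cutoff
a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)   # cavity annihilation operator
id_ph = np.eye(n_ph)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # sigma = |g><e| (basis: g, e)
id_el = np.eye(2)

delta_c = 0.0    # cavity-laser detuning omega_c - omega_l
delta_l = 0.0    # molecule-laser detuning
g, eta_c = 0.1, 0.01

H = (delta_c * np.kron(a.conj().T @ a, id_el)
     + delta_l * np.kron(id_ph, sm.conj().T @ sm)
     + g * (np.kron(a, sm.conj().T) + np.kron(a.conj().T, sm))
     + 1j * eta_c * np.kron(a.conj().T - a, id_el))

assert np.allclose(H, H.conj().T)   # the i(a^dagger - a) drive term is Hermitian
print(np.linalg.eigvalsh(H)[:4])    # lowest dressed-state energies
```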
Let us proceed with a set of effective QLEs for the cavity mode \(a\) and the state-dependent polaron operators \(\tilde{\sigma}_{e,n}\) and \(\tilde{\sigma}_{g,n}\) for the \(n\)th molecule in the rotating frame at the laser frequency \(\omega_{\ell}\): \[\dot{a}=-(i\Delta_{c}+\kappa)a-ig\sum_{n=1}^{\mathcal{N}}\sigma_{n}+\sqrt{2 \kappa_{1}}A_{1,\text{in}}+\sqrt{2\kappa_{2}}a_{2,\text{in}}, \tag{26a}\] \[\dot{\tilde{\sigma}}_{e,n}\approx -\left(i\Delta_{\ell}+\gamma\right)\tilde{\sigma}_{e,n}+iga\mathcal{ S}_{e,n}^{\dagger}\mathcal{D}_{e,n}^{\dagger}e^{i(\nu_{e}-\nu_{g})b_{e,n}^{ \dagger}b_{e,n}t}\] \[+\sqrt{2\gamma}\sigma_{\text{in},n}\mathcal{S}_{e,n}^{\dagger} \mathcal{D}_{e,n}^{\dagger}e^{i(\nu_{e}-\nu_{g})b_{e,n}^{\dagger}b_{e,n}t},\] (26b) \[\dot{\tilde{\sigma}}_{g,n}\approx -\left(i\Delta_{\ell}+\gamma\right)\tilde{\sigma}_{g,n}-igae^{i( \nu_{e}-\nu_{g})b_{g,n}^{\dagger}b_{g,n}t}\mathcal{S}_{g,n}^{\dagger}\mathcal{ D}_{g,n}^{\dagger}\] \[+\sqrt{2\gamma}e^{i(\nu_{e}-\nu_{g})b_{g,n}^{\dagger}b_{g,n}t} \mathcal{S}_{g,n}^{\dagger}\mathcal{D}_{g,n}^{\dagger}\sigma_{\text{in},n}. \tag{26c}\] Here, the total dissipation rate for the cavity field \(\kappa=\kappa_{1}+\kappa_{2}\) encompasses the losses via both mirrors. The operator \(A_{1,\text{in}}=\eta_{c}/\sqrt{2\kappa_{1}}+a_{1,\text{in}}\) describes the classical input field \(\eta_{c}/\sqrt{2\kappa_{1}}\) entering through the left mirror together with zero-average input noise whose only non-vanishing two-time correlation is \(\langle a_{1,\text{in}}(t)a_{1,\text{in}}^{\dagger}(t^{\prime})\rangle=\delta (t-t^{\prime})\). Additionally, zero-average input noise enters through the right-side mirror with similar correlations \(\langle a_{2,\text{in}}(t)a_{2,\text{in}}^{\dagger}(t^{\prime})\rangle=\delta (t-t^{\prime})\), uncorrelated with \(a_{1,\text{in}}(t)\). The Markovian limit is reached under the large relaxation rate condition for the vibrational mode, i.e., \(\Gamma\gg\kappa\) and \(\Gamma\gg\gamma\); in this case, the approach of treating the vibrations as a local phonon bath remains applicable. By formally integrating the equations for the polaron operators, tracing over the vibrational degrees of freedom, and taking the Laplace transformation, we have \[\langle\overline{\sigma_{n}}\rangle= ig\left(\overline{\mathcal{G}}_{\text{em}}+\overline{\mathcal{G}}_{ \text{ab}}\right)\langle\overline{\mathcal{P}_{e,n}}a\rangle-ig\overline{ \mathcal{G}}_{\text{ab}}\langle\overline{a}\rangle. \tag{27}\] The coupling between the cavity mode \(a\) and the projection operator \(\mathcal{P}_{e,n}\) leads to non-linear effects. However, we restrict our analysis to the weak excitation regime, i.e., the cavity photon number is much smaller than unity and the population of the excited electronic state \(\ket{e}\) is negligible (under the condition that \(\eta_{c}\ll\kappa\)). In other words, this approximation allows for the construction of a linear-response formalism in which the transmitted light gives information on the positions and linewidths of the hybrid light-matter eigenstates of the system. Since the conditions are identical for all molecules, the expectation value of the electronic coherence operator \(\sigma_{n}\) of the \(n\)-th molecule equals that of any other molecule, i.e., \(\langle\sigma\rangle=\langle\sigma_{n}\rangle=\langle\sigma_{m}\rangle\) (\(m\neq n\)). 
Then, the equations of motion can be written in vector form (in the Laplace domain) as \(\overline{\mathbf{M}}\overline{\mathbf{v}}+\overline{\mathbf{v}}_{c}=0\), with the drift matrix \[\overline{\mathbf{M}}=\left(\begin{array}{cc}-\left(i\Delta_{c}+\kappa \right)-s&-i\mathcal{N}g\\ -ig&-1/\overline{\mathcal{G}}_{\text{ab}}\end{array}\right), \tag{28}\] and the definitions \(\mathbf{v}=\left(\left\langle a\right\rangle,\left\langle\sigma\right\rangle \right)^{T}\) and \(\mathbf{v}_{c}=\left(\eta_{c},0\right)^{T}\). The diagonalization of the drift matrix (under the resonance condition \(\Delta_{c}=0\)) yields the frequencies \(\omega_{\pm}\) and linewidths \(\gamma_{\pm}\)[33; 37] of the two polaritons as \[\omega_{\pm}= \frac{-\Delta_{\mathrm{eff}}}{2}\pm\frac{1}{2}\mathcal{I}\sqrt{( \Gamma_{\mathrm{eff}}-\kappa+i\Delta_{\mathrm{eff}})^{2}-\mathcal{N}g^{2}}, \tag{29a}\] \[\gamma_{\pm}= \frac{\Gamma_{\mathrm{eff}}+\kappa}{2}\pm\frac{1}{2}\mathcal{R} \sqrt{(\Gamma_{\mathrm{eff}}-\kappa+i\Delta_{\mathrm{eff}})^{2}-\mathcal{N}g^ {2}}, \tag{29b}\] with \(\Gamma_{\mathrm{eff}}=\mathcal{R}\lim\limits_{s\to 0}1/\overline{\mathcal{G}}_{ \mathrm{ab}}\) and \(\Delta_{\mathrm{eff}}=\mathcal{I}\lim\limits_{s\to 0}1/\overline{\mathcal{G}}_{ \mathrm{ab}}\) denoting the effective decay rate and the additional frequency shift (\(\mathcal{R}\) and \(\mathcal{I}\) denote the real and imaginary parts). These polariton properties can be probed in a very simple way by scanning the laser frequency around the cavity resonance and recording the positions of the peaks corresponding to the hybrid light-matter states. This can be done at the analytical level in the weak excitation regime and compared to full exact numerics. We define the complex cavity transmission amplitude as the ratio of the normalized outgoing and incoming field amplitudes, \[\mathcal{T}=\frac{\sqrt{2\kappa_{2}}\left\langle a\right\rangle_{\mathrm{ss} }}{\eta_{c}/\sqrt{2\kappa_{1}}}, \tag{30}\] and illustrate its behavior with respect to the scanning laser frequency in Fig. 5. The quantity \(\left\langle a\right\rangle_{\mathrm{ss}}\) is the average value of the cavity mode amplitude in steady state in the linear response regime, \[\left\langle a\right\rangle_{\mathrm{ss}}= \frac{\eta_{c}}{\mathcal{N}g^{2}\chi_{\mathrm{ab}}+\kappa+i( \omega_{c}-\omega_{\ell})}, \tag{31}\] with \(\chi_{\mathrm{ab}}=\lim\limits_{s\to 0}1/\overline{\mathcal{G}}_{ \mathrm{ab}}\). We illustrate numerical and analytical results in Fig. 5, where the profile of the cavity transmission at \(\omega_{c}=\omega_{00}\) is plotted. The presence of the linear electron-vibron coupling scaling with \(\lambda_{1}\) induces an interaction between the upper and lower polaritons, already described at the theoretical level in a few treatments [1; 7; 8]. Instead, at the level of a single molecule, the quadratic interaction suppresses the polariton cross-talk, as illustrated in Fig. 5(a). In essence, the squeezing term is responsible for a shift of the molecular resonance, which in turn brings the cavity off resonance with all electronic transitions except the zero-phonon one. Increasing the number of molecules while assuming very weak driving conditions presents a different situation: as shown in Fig. 5(b), the upper polariton is effectively reduced with increasing particle number. 
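For a numerical feel for Eqs. (29)-(31), the sketch below scans the cavity transmission and extracts the polariton frequencies and linewidths. As in the earlier sketch, Poissonian Franck-Condon weights are a placeholder assumption for \(S_{m}^{\text{ab}}\), and all parameter values are arbitrary illustrative choices:

```python
import numpy as np
from math import factorial

def G_ab(s, delta_l, nu_e, Gamma, gamma, lam1=1.0, mmax=30):
    # Eq. (20) at Laplace variable s; Poissonian weights stand in for S_m^ab.
    m = np.arange(mmax)
    S_ab = np.array([np.exp(-lam1**2) * lam1**(2*k) / factorial(k) for k in m])
    return np.sum(S_ab / (s + gamma + m * Gamma + 1j * (delta_l + m * nu_e)))

def polaritons(N, g, kappa, delta_l, nu_e, Gamma, gamma):
    # Polariton frequencies and linewidths, Eqs. (29a)-(29b), at Delta_c = 0.
    chi = 1.0 / G_ab(0.0, delta_l, nu_e, Gamma, gamma)
    Gamma_eff, Delta_eff = chi.real, chi.imag
    root = np.sqrt((Gamma_eff - kappa + 1j * Delta_eff)**2 - N * g**2 + 0j)
    omega = -Delta_eff / 2 + np.array([1, -1]) * root.imag / 2
    widths = (Gamma_eff + kappa) / 2 + np.array([1, -1]) * root.real / 2
    return omega, widths

def transmission(omega_l, omega_00, omega_c, N, g, kappa1, kappa2,
                 nu_e, Gamma, gamma):
    # |T| from Eqs. (30)-(31); <a>_ss is computed per unit drive eta_c.
    chi = 1.0 / G_ab(0.0, omega_00 - omega_l, nu_e, Gamma, gamma)
    a_ss = 1.0 / (N * g**2 * chi + (kappa1 + kappa2) + 1j * (omega_c - omega_l))
    return abs(2.0 * np.sqrt(kappa1 * kappa2) * a_ss)

scan = [transmission(w, 0.0, 0.0, 10, 0.05, 0.01, 0.01, 2.0, 0.1, 0.001)
        for w in np.linspace(-0.5, 0.5, 401)]
print(max(scan))
```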
## V Conclusions

We have applied the toolbox of open-system dynamics, and in particular the QLEs formalism, to analytically describe spectroscopic properties of solid-state embedded molecules in free space or in optical cavity settings. In particular, we generalized our previous approach introduced in Ref. [7] to a scenario where the potential landscapes of a molecule have unequal curvatures in the ground and excited electronic states. This required the introduction of a generalized polaron operator, in which the electronic degree of freedom is dressed by vibrations via a displacement operation followed by an additional squeezing operation. The first effect is the emergent asymmetry between the absorption and emission profiles in molecular spectroscopy. A second effect emerging from our analytical calculations appears in the context of cavity quantum electrodynamics, where the additional squeezing operation leads to a detuning between the bare molecular resonance and the cavity resonance. Our calculations can also be relevant to optomechanics or optovibronics, owing to the strong electron-vibron couplings, albeit under very lossy conditions. ###### Acknowledgements. We acknowledge financial support from the Max Planck Society and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 429529648 - TRR 306 QuCoLiMa ("Quantum Cooperativity of Light and Matter").
2303.09030
Large Selective Kernel Network for Remote Sensing Object Detection
Recent research on remote sensing object detection has largely focused on improving the representation of oriented bounding boxes but has overlooked the unique prior knowledge presented in remote sensing scenarios. Such prior knowledge can be useful because tiny remote sensing objects may be mistakenly detected without referencing a sufficiently long-range context, and the long-range context required by different types of objects can vary. In this paper, we take these priors into account and propose the Large Selective Kernel Network (LSKNet). LSKNet can dynamically adjust its large spatial receptive field to better model the ranging context of various objects in remote sensing scenarios. To the best of our knowledge, this is the first time that large and selective kernel mechanisms have been explored in the field of remote sensing object detection. Without bells and whistles, LSKNet sets new state-of-the-art scores on standard benchmarks, i.e., HRSC2016 (98.46\% mAP), DOTA-v1.0 (81.85\% mAP) and FAIR1M-v1.0 (47.87\% mAP). Based on a similar technique, we rank 2nd place in 2022 the Greater Bay Area International Algorithm Competition. Code is available at https://github.com/zcablii/Large-Selective-Kernel-Network.
Yuxuan Li, Qibin Hou, Zhaohui Zheng, Ming-Ming Cheng, Jian Yang, Xiang Li
2023-03-16T02:00:37Z
http://arxiv.org/abs/2303.09030v2
# Large Selective Kernel Network for Remote Sensing Object Detection ###### Abstract Recent research on remote sensing object detection has largely focused on improving the representation of oriented bounding boxes but has overlooked the unique prior knowledge presented in remote sensing scenarios. Such prior knowledge can be useful because tiny remote sensing objects may be mistakenly detected without referencing a sufficiently long-range context, and the long-range context required by different types of objects can vary. In this paper, we take these priors into account and propose the Large Selective Kernel Network (LSKNet). LSKNet can dynamically adjust its large spatial receptive field to better model the ranging context of various objects in remote sensing scenarios. To the best of our knowledge, this is the first time that large and selective kernel mechanisms have been explored in the field of remote sensing object detection. Without bells and whistles, LSKNet sets new state-of-the-art scores on standard benchmarks, i.e., HRSC2016 (98.46% mAP), DOTA-v1.0 (81.85% mAP) and FAIR1M-v1.0 (47.87% mAP). Based on a similar technique, we rank 2nd place in 2022 the Greater Bay Area International Algorithm Competition. Code is available at [https://github.com/zcablii/Large-Selective-Kernel-Network](https://github.com/zcablii/Large-Selective-Kernel-Network). ## 1 Introduction Remote sensing object detection [75] is a field of computer vision that focuses on identifying and locating objects of interest in aerial images, such as vehicles or aircraft. In recent years, one mainstream trend is to generate bounding boxes that accurately fit the orientation of the objects being detected, rather than simply drawing horizontal boxes around them. Consequently, a significant amount of research has focused on improving the representation of oriented bounding boxes for remote sensing object detection. This has largely been achieved through the development of specialized detection frameworks, such as RoI Transformer [12], Oriented R-CNN [62] and R3Det [68], as well as techniques for oriented box encoding, such as gliding vertex [64] and midpoint offset box encoding [62]. Additionally, a number of loss functions, including GWD [70], KLD [72] and Modulated Loss [50], have been proposed to further enhance the performance of these approaches. However, despite these advances, relatively few works have taken into account the strong prior knowledge that exists in remote sensing images. Aerial images are typically captured from a bird's eye view at high resolutions. In particular, most objects in aerial images may be small in size and difficult to identify based on their appearance alone. Instead, the successful recognition of these objects often relies on their context, as the surrounding environment can provide valuable clues about their shape, orientation, and other characteristics. According to an analysis of mainstream remote sensing datasets, we identify two important priors: **(1) Accurate detection of objects in remote sensing images often requires a wide range of contextual information.** As illustrated in Fig. 1(a), the limited context used by object detectors in remote sensing images can often lead to incorrect classifications. In the upper image, for example, the detector may classify the junction as an intersection Figure 1: Successfully detecting remote sensing objects requires the use of a wide range of contextual information. 
Detectors with a limited receptive field may easily lead to incorrect detection results. “CT” stands for Context. due to its typical characteristics, but in reality, it is not an intersection. Similarly, in the lower image, the detector may classify the junction as not being an intersection due to the presence of large trees, but again, this is incorrect. These errors can occur because the detector is only considering a limited amount of contextual information in the immediate vicinity of the objects. A similar scenario can be also observed in the example of ships and vehicles in Fig. 1(b). **(2) The wide range of contextual information required for different object types is very different.** As shown in Fig. 2, the amount of contextual information required for accurate object detection in remote sensing images can vary significantly depending on the type of object being detected. For example, Soccer-ball-field may require relatively less extra contextual information because of the unique distinguishable court borderlines. In contrast, Roundabouts may require a larger range of context information in order to distinguish between gardens and ring-like buildings. Intersections, especially those that are partially covered by trees, often require an extremely large receptive field due to the long-range dependencies between the intersecting roads. This is because the presence of trees and other obstructions can make it difficult to identify the roads and the intersection itself based on appearance alone. Other object categories, such as bridges, vehicles, and ships, may also require different scales of the receptive field in order to be accurately detected and classified. To address the challenge of accurately detecting objects in remote sensing images, which often require a wide and dynamic range of contextual information, we propose a novel approach called Large Selective Kernel Network (LSKNet). Our approach involves dynamically adjusting the receptive field of the feature extraction backbone in order to more effectively process the varying wide context of the objects being detected. This is achieved through a spatial selective mechanism, which weights the features processed by a sequence of large depth-wise kernels efficiently and then spatially merges them. The weights of these kernels are determined dynamically based on the input, allowing the model to adaptively use different large kernels and adjust the receptive field for each target in space as needed. To the best of our knowledge, our proposed LSKNet is the first to investigate and discuss the use of large and selective kernels for remote sensing object detection. Despite its simplicity, our model achieves state-of-the-art performance on three popular datasets: HRSC2016 (98.46% mAP), DOTA-v1.0 (81.64% mAP), and FAIR1M-v1.0 (47.87% mAP), surpassing previously published results. Furthermore, we demonstrate that our model's behaviour exactly aligns with the aforementioned two priors, which in turn verifies the effectiveness of the proposed mechanism. ## 2 Related Work ### Remote Sensing Object Detection Framework High-performance remote sensing object detectors often rely on the RCNN [52] framework, which consists of a region proposal network and regional CNN detection heads. Several variations on the RCNN framework have been proposed in recent years. 
The two-stage RoI transformer [12] uses fully-connected layers to rotate candidate horizontal anchor boxes in the first stage; features within the boxes are then extracted for further regression and classification. SCRDet [71] uses an attention mechanism to reduce background noise and improve the modelling of crowded and small objects. Oriented RCNN [62] and Gliding Vertex [64] introduce new box encoding systems to address the instability of training losses caused by rotation angle periodicity. Some approaches [29, 79, 56] treat remote sensing detection as a point detection task [67], providing an alternative way of addressing remote sensing detection problems. Rather than relying on proposed anchors, one-stage detection frameworks classify and regress oriented bounding boxes directly from densely sampled grid anchors. The one-stage S\({}^{2}\)A network [20] extracts robust object features via oriented feature alignment and orientation-invariant feature extraction. DRN [46], on the other hand, leverages attention mechanisms to dynamically refine the backbone's extracted features for more accurate predictions. In contrast with Oriented RCNN and Gliding Vertex, RSDet [50] addresses the discontinuity of the regression loss by introducing a modulated loss. AOPG [6] and R3Det [68] adopt a progressive regression approach, refining bounding boxes from coarse to fine granularity. In addition to CNN-based frameworks, AO2-DETR [9] introduces a transformer-based detection framework, DETR [4], into remote sensing detection tasks, which brings more research diversity. While these approaches have achieved promising results in addressing the issue of rotation variance, they do not take into account the strong and valuable prior information present in aerial images. Instead, our approach focuses on leveraging the large kernel and spatial selective mechanism to better model these priors, without modifying the current detection framework.

Figure 2: The wide range of contextual information required for different object types is very different by human criteria. The objects with red boxes are the exact ground-truth annotations.

### Large Kernel Networks Transformer-based [54] models, such as the Vision Transformer (ViT) [14, 49, 55, 11, 1], Swin transformer [36, 22, 63, 76, 47], and PVT [57], have gained popularity in computer vision due to their effectiveness in image recognition tasks. Research [51, 65, 78, 42] has demonstrated that a large receptive field is a key factor in their success. In light of this, recent work has shown that well-designed convolutional networks with large receptive fields can also be highly competitive with transformer-based models. For example, ConvNeXt [37] uses 7\(\times\)7 depth-wise convolutions in its backbone, resulting in significant performance improvements in downstream tasks. In addition, RepLKNet [13] even uses a 31\(\times\)31 convolutional kernel via re-parameterization, achieving compelling performance. A subsequent work, SLaK [35], further expands the kernel size to 51\(\times\)51 through kernel decomposition and sparse group techniques. VAN [17] introduces an efficient decomposition of large kernels as convolutional attention. Similarly, SegNeXt [18] and Conv2Former [25] demonstrate that large kernel convolution plays an important role in modulating the convolutional features with a richer context. 
Despite the fact that large kernel convolutions have received attention in the domain of general object recognition, there has been a lack of research examining their significance in the specific field of remote sensing detection. As previously noted in the _Introduction_, aerial images possess unique characteristics that make large kernels particularly well-suited for the task of remote sensing. As far as we are aware, our work represents the first attempt to introduce large kernel convolutions for the purpose of remote sensing and to examine their importance in this field. ### Attention/Selective Mechanism The attention mechanism is a simple and effective way to enhance neural representations for various tasks. The channel attention SE block [27] uses global average information to reweight feature channels, while spatial attention modules like GENet [26], GCNet [3], and SGE [31] enhance a network's ability to model context information via spatial masks. CBAM [60] and BAM [48] combine both channel and spatial attention to make use of the advantages of both. In addition to channel/spatial attention mechanisms, kernel selections are also a self-adaptive and effective technique for dynamic context modelling. CondConv [66] and Dynamic convolution [5] use parallel kernels to adaptively aggregate features from multiple convolution kernels. SKNet [30] introduces multiple branches with different convolutional kernels and selectively combines them along the channel dimension. ResNeSt [77] extends the idea of SKNet by partitioning the input feature map into several groups. Similarly to the SKNet, SCNet [34] uses branch attention to capture richer information and spatial attention to improve localization ability. Deformable Convnets [80, 8] introduce a flexible kernel shape for convolution units. Our approach bears the most similarity to SKNet [30], however, there are **two key distinctions** between the two methods. Firstly, our proposed selective mechanism relies explicitly on a sequence of large kernels via decomposition, a departure from most existing attention-based approaches. Secondly, our method adaptively aggregates information across large kernels in the spatial dimension, rather than the channel dimension as utilized by SKNet. This design is more intuitive and effective for remote sensing tasks, because channel-wise selection fails to model the spatial variance for different targets across the image space. The detailed structural comparisons are listed in Fig. 3. ## 3 Methods ### LSKNet Architecture The overall architecture is built upon the recent popular structures [37, 58, 17, 25, 74] (refer to the details in Supplementary Materials (SM)) with a repeated building block. Figure 3: Architectural comparison between our proposed LSK module and other representative selective mechanism modules. K: Kernel. The detailed configuration of different variants of LSKNet used in this paper is listed in Tab. 1. Each LSKNet block consists of two residual sub-blocks: the Large Kernel Selection (LK Selection) sub-block and the Feed-forward Network (FFN) sub-block. The core LSK module (Fig. 4) is embedded in the LK Selection sub-block. It consists of a sequence of large kernel convolutions and a spatial kernel selection mechanism, which would be elaborated on later. ### Large Kernel Convolutions According to the _prior (2)_ as stated in _Introduction_, it is suggested to model a series of multiple long-range contexts for adaptive selection. 
Therefore, we propose to construct a larger kernel convolution by _explicitly decomposing_ it into a sequence of depth-wise convolutions with growing kernel sizes and increasing dilation rates. Specifically, the kernel size \(k_{i}\), dilation rate \(d_{i}\), and receptive field \(RF_{i}\) of the \(i\)-th depth-wise convolution in the series are defined as follows: \[k_{i-1}\leq k_{i};\ d_{1}=1,\ d_{i-1}<d_{i}\leq RF_{i-1}, \tag{1}\] \[RF_{1}=k_{1},\ RF_{i}=d_{i}(k_{i}-1)+RF_{i-1}. \tag{2}\] The increasing kernel sizes and dilation rates ensure that the receptive field expands quickly enough. We set an upper bound on the dilation rate to guarantee that the dilated convolution does not introduce gaps between feature maps. For instance, we can decompose a large kernel into 2 or 3 depth-wise convolutions as in Tab. 2, which have theoretical receptive fields of 23 and 29, respectively.

\begin{table} \begin{tabular}{c|l l l} \multicolumn{1}{c|}{**Model**} & \(\{C_{1},C_{2},C_{3},C_{4}\}\) & \(\{D_{1},D_{2},D_{3},D_{4}\}\) & \#P \\ \hline \(\star\) LSKNet-T & \{32, 64, 160, 256\} & \{3, 3, 5, 2\} & 4.3M \\ \(\star\) LSKNet-S & \{64, 128, 320, 512\} & \{2, 2, 4, 2\} & 14.4M \\ \end{tabular} \end{table} Table 1: **Variants of LSKNet used in this paper**. \(C_{i}\): feature channel number; \(D_{i}\): number of LSKNet blocks in stage \(i\).

\begin{table} \begin{tabular}{c|l|l l} RF & (\(k\), \(d\)) sequence & \#P & FLOPs \\ \hline 23 & (23, 1) & 40.4K & 42.4G \\ & (5, 1) \(\longrightarrow\) (7, 3) & 11.3K & 11.9G \\ \hline 29 & (29, 1) & 60.4K & 63.3G \\ & (3, 1) \(\longrightarrow\) (5, 2) \(\longrightarrow\) (7, 3) & 11.3K & 13.6G \\ \end{tabular} \end{table} Table 2: **Theoretical efficiency comparisons of two representative examples** of expanding a single large depth-wise kernel into a sequence, with 64 channels. \(k\): kernel size; \(d\): dilation.

There are two advantages of the proposed design. First, it explicitly yields multiple features with various large receptive fields, which makes the later kernel selection easier. Second, sequential decomposition is more efficient than simply applying a single larger kernel. As shown in Tab. 2, for the same resulting theoretical receptive field, our decomposition greatly reduces the number of parameters compared to standard large convolution kernels. To obtain features with rich contextual information from different ranges for input \(\mathbf{X}\), a series of decomposed depth-wise convolutions with different receptive fields are applied: \[\mathbf{U}_{0}=\mathbf{X},\ \ \ \mathbf{U}_{i+1}=\mathcal{F}_{i}^{dw}( \mathbf{U}_{i})\text{,} \tag{3}\] where \(\mathcal{F}_{i}^{dw}(\cdot)\) are depth-wise convolutions with kernel \(k_{i}\) and dilation \(d_{i}\). Assuming there are \(N\) decomposed kernels, each is further processed by a 1\(\times\)1 convolution layer \(\mathcal{F}^{1\times 1}(\cdot)\): \[\mathbf{\widetilde{U}}_{i}=\mathcal{F}_{i}^{1\times 1}(\mathbf{U}_{i}),\ \text{ for }i\text{ in }[1,N]\text{,} \tag{4}\] allowing channel mixing for each spatial feature vector. Then, a selection mechanism is proposed to dynamically select kernels for various objects based on the obtained multi-scale features, as introduced next. 
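As a quick sanity check of the recursion in Eq. (2), the theoretical receptive fields of the \((k,d)\) sequences listed in Tab. 2 can be reproduced with a few lines of Python (an illustrative snippet of ours, not part of the released code):

```python
# Receptive field of a sequence of dilated depth-wise convolutions,
# following Eq. (2): RF_1 = k_1, RF_i = d_i * (k_i - 1) + RF_{i-1}.
def receptive_field(kd_sequence):
    rf = 0
    for i, (k, d) in enumerate(kd_sequence):
        rf = k if i == 0 else rf + d * (k - 1)
    return rf

print(receptive_field([(5, 1), (7, 3)]))          # 23, as in Tab. 2
print(receptive_field([(3, 1), (5, 2), (7, 3)]))  # 29, as in Tab. 2
```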
### Spatial Kernel Selection To enhance the network's ability to focus on the most relevant spatial context regions for detecting targets, we use a spatial selection mechanism to spatially select the feature maps from large convolution kernels at different scales. Firstly, we concatenate the features obtained from different kernels with different ranges of receptive field: \[\mathbf{\widetilde{U}}=[\mathbf{\widetilde{U}}_{1};...;\mathbf{\widetilde{U}} _{N}], \tag{5}\] and then efficiently extract the spatial relationship by applying channel-based average and maximum pooling (denoted as \(\mathcal{P}_{avg}(\cdot)\) and \(\mathcal{P}_{max}(\cdot)\)) to \(\mathbf{\widetilde{U}}\): \[\mathbf{SA}_{avg}=\mathcal{P}_{avg}(\mathbf{\widetilde{U}}),\ \mathbf{ SA}_{max}=\mathcal{P}_{max}(\mathbf{\widetilde{U}})\text{,} \tag{6}\] where \(\mathbf{SA}_{avg}\) and \(\mathbf{SA}_{max}\) are the average- and maximum-pooled spatial feature descriptors. To allow information interaction among different spatial descriptors, we concatenate the spatially pooled features and use a convolution layer \(\mathcal{F}^{2\to N}(\cdot)\) to transform the pooled features (with 2 channels) into \(N\) spatial attention maps: \[\overline{\mathbf{SA}}=\mathcal{F}^{2\to N}([\mathbf{SA}_{avg};\mathbf{SA}_{ max}]). \tag{7}\] For each spatial attention map \(\overline{\mathbf{SA}}_{i}\), a sigmoid activation function is applied to obtain the individual spatial selection mask for each of the decomposed large kernels: \[\widetilde{\mathbf{SA}}_{i}=\sigma(\overline{\mathbf{SA}}_{i}), \tag{8}\] where \(\sigma(\cdot)\) denotes the sigmoid function. The features from the sequence of decomposed large kernels are then weighted by their corresponding spatial selection masks and fused by a convolution layer \(\mathcal{F}(\cdot)\) to obtain the attention feature \(\mathbf{S}\): \[\mathbf{S}=\mathcal{F}(\sum_{i=1}^{N}\big{(}\widetilde{\mathbf{SA}}_{i}\cdot \mathbf{\widetilde{U}}_{i}\big{)}). \tag{9}\] The final output of the LSK module is the element-wise product between the input feature \(\mathbf{X}\) and \(\mathbf{S}\), similarly to [17, 18, 25]: \[\mathbf{Y}=\mathbf{X}\cdot\mathbf{S}. \tag{10}\] Fig. 4 shows a detailed conceptual illustration of an LSK module, where we intuitively demonstrate how the large selective kernel works by adaptively collecting the corresponding large receptive field for different objects.

## 4 Experiments ### Datasets HRSC2016 [39] is a high-resolution remote sensing image dataset collected for ship detection. It consists of 1,061 images containing 2,976 instances of ships. DOTA-v1.0 [61] consists of 2,806 remote sensing images. It contains 188,282 instances of 15 categories: Plane (PL), Baseball diamond (BD), Bridge (BR), Ground track field (GTF), Small vehicle (SV), Large vehicle (LV), Ship (SH), Tennis court (TC), Basketball court (BC), Storage tank (ST), Soccer-ball field (SBF), Roundabout (RA), Harbor (HA), Swimming pool (SP), and Helicopter (HC). FAIR1M-v1.0 [53] is a recently published remote sensing dataset that consists of 15,266 high-resolution images and more than 1 million instances. It contains 5 categories and 37 sub-categories of objects. ### Implementation Details In our experiments, we report the results of the detection model on the HRSC2016, DOTA-v1.0 and FAIR1M-v1.0 datasets. To ensure fairness, we follow the same dataset processing approach as other mainstream methods [62, 20, 21]. More details can be found in SM. 
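Before turning to training details, here is a minimal PyTorch sketch of an LSK module implementing Eqs. (3)-(10) with the (5, 1) \(\to\) (7, 3) decomposition from Tab. 2. It is our own illustration rather than the official implementation (available at the repository linked in the abstract); in particular, the internal channel width of \(dim/2\) per branch and the 7\(\times\)7 kernel of the attention convolution are choices made here for brevity:

```python
import torch
import torch.nn as nn

class LSKModule(nn.Module):
    """Sketch of an LSK module, Eqs. (3)-(10); N = 2 decomposed kernels."""
    def __init__(self, dim):
        super().__init__()
        # Eq. (3): depth-wise convolutions with growing receptive field
        self.dw1 = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)            # RF 5
        self.dw2 = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)  # RF 23
        # Eq. (4): 1x1 convolutions for channel mixing
        self.pw1 = nn.Conv2d(dim, dim // 2, 1)
        self.pw2 = nn.Conv2d(dim, dim // 2, 1)
        # Eq. (7): 2 pooled descriptors -> N = 2 spatial attention maps
        self.attn = nn.Conv2d(2, 2, 7, padding=3)
        # Eq. (9): fuse the selected features back to `dim` channels
        self.fuse = nn.Conv2d(dim // 2, dim, 1)

    def forward(self, x):
        u1 = self.dw1(x)
        u2 = self.dw2(u1)
        v1, v2 = self.pw1(u1), self.pw2(u2)
        cat = torch.cat([v1, v2], dim=1)                      # Eq. (5)
        avg = cat.mean(dim=1, keepdim=True)                   # Eq. (6)
        mx, _ = cat.max(dim=1, keepdim=True)
        masks = torch.sigmoid(self.attn(torch.cat([avg, mx], dim=1)))  # Eqs. (7)-(8)
        s = self.fuse(v1 * masks[:, 0:1] + v2 * masks[:, 1:2])         # Eq. (9)
        return x * s                                          # Eq. (10)

x = torch.randn(1, 64, 32, 32)
print(LSKModule(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Note how the two sigmoid masks of Eqs. (7)-(8) weight the small- and large-receptive-field branches pixel by pixel; this is exactly the spatial (rather than channel-wise) selection contrasted with SKNet in Fig. 3.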
During our experiments, the backbones are first pretrained on the ImageNet-1K [10] dataset and then finetuned on the target remote sensing benchmarks. In ablation studies, we adopt the 100-epoch backbone pretraining schedule for experimental efficiency (Tab. 3, 5, 4, 6, 7). We adopt a 300-epoch backbone pretraining strategy to pursue higher accuracy in main results (Tab. 8, 9, 10), similarly to [62, 20, 68, 6]. In main results (Tab. 8, 9), the "Pre." column stands for the dataset on which the networks/backbones are pretrained (IN: Imagenet [10] dataset; CO: Microsoft COCO [33] dataset; MA: Million-AID [40] dataset). Unless otherwise stated, LSKNet is defaulting to be built within the framework of Oriented RCNN [62] due to its compelling performance and efficiency. All the models are trained on the training and validation sets and tested on the testing set. Following [62], we train the models for 36 epochs on the HRSC2016 datasets and 12 epochs on the DOTA-v1.0 and FAIR1M-v1.0 datasets, with the AdamW [41] optimizer. The initial learning rate is set to 0.0004 for HRSC2016, and 0.0002 for the other two datasets, with a weight decay of 0.05. We use 8 RTX3090 GPUs with a batch size of 8 for model training, and use a single RTX3090 GPU for testing. All the FLOPs we report in this paper are calculated with a 1024\(\times\)1024 image input. ### Ablation Study In this section, we report ablation study results on the DOTA-v1.0 test set to investigate its effectiveness. **Large Kernel Decomposition.** Deciding on the number of kernels to decompose is a critical choice for the LSK module. We follow Eq. (1) to configure the decomposed kernels. The results of the ablation study on the number of large kernel decompositions when the theoretical receptive field is fixed at 29 are shown in Tab. 3. It suggests that decomposing the large kernel into two depth-wise large kernels results in a good trade-off between the speed and accuracy, achieving the best performance in terms of both FPS (frames per second) and mAP (mean average precision). **Receptive Field Size and Selection Type.** Based on our evaluations presented in Tab. 3, we find that the optimal solution for our proposed LSKNet is to decompose the large kernel into two depth-wise kernels in series. Furthermore, Tab. 4 shows that excessively small or large receptive fields can hinder the performance of the LSKNet, and a receptive field size of approximately 23 is determined to be the most effective. In addition, our experiments indicate that the proposed spatial selection approach is more effective \begin{table} \begin{tabular}{l l l l l} \hline (\(k\), \(d\)) sequence & RF & Num. & FPS & mAP (\%) \\ \hline (29, 1) & 29 & 1 & 18.6 & 80.66 \\ (5, 1) \(\longrightarrow\) (7, 4) & 29 & 2 & **20.5** & **80.91** \\ (3, 1) \(\longrightarrow\) (5, 2) \(\longrightarrow\) (7, 3) & 29 & 3 & 19.2 & 80.77 \\ \hline \end{tabular} \end{table} Table 3: **The effects of the number of decomposed large kernels on the inference FPS and mAP, given theoretical receptive field being 29. We adopt LSKNet-T backbones pretrained on ImageNet for 100 epochs. Decomposing the large kernel into two depth-wise kernels achieves the best performance of speed and accuracy.** than channel attention (similarly in SKNet [30]) for remote sensing object detection tasks. **Pooling Layers in Spatial Selection.** We conduct experiments to determine the optimal pooling layers for spatial selection in remote sensing object detection, as reported in Tab. 5. 
The results suggest that using both max and average pooling in the spatial selection component of our LSK module provides the best performance without sacrificing inference speed. **Performance of LSKNet backbone under different detection frameworks.** To validate the generality and effectiveness of our proposed LSKNet backbone, we evaluate its performance under various remote sensing detection frameworks, including the two-stage frameworks O-RCNN [62] and RoI Transformer [12] as well as the one-stage frameworks S\({}^{2}\)A-Net [20] and R3Det [68]. The results in Tab. 6 show that our proposed LSKNet backbone significantly improves detection performance compared to ResNet-18, while using only 38% of its parameters and 50% fewer FLOPs. **Comparison with Other Large Kernel/Selective Attention Backbones.** We also compare our LSKNet with 6 popular high-performance backbone models with large kernels or selective attention. As shown in Tab. 7, under similar model size and complexity budgets, our LSKNet outperforms all other models on the DOTA-v1.0 dataset. ### Main Results **Results on HRSC2016.** We evaluate the performance of our LSKNet against 12 state-of-the-art methods on the HRSC2016 dataset. The results presented in Tab. 8 demonstrate that our LSKNet-S outperforms all other methods with an mAP of **90.65%** and **98.46%** under the PASCAL VOC 2007 [15] and VOC 2012 [16] metrics, respectively. **Results on DOTA-v1.0.** We compare our LSKNet with 20 state-of-the-art methods on the DOTA-v1.0 dataset, as reported in Tab. 9. Our LSKNet-T and LSKNet-S achieve state-of-the-art performance with mAP of **81.37%** and **81.64%**, respectively. Notably, our high-performing \begin{table} \begin{tabular}{c c|c c} \multicolumn{2}{c|}{Pooling} & FPS & mAP (\%) \\ \hline Max. & Avg. & & \\ \hline ✓ & & 20.7 & 81.23 \\ & ✓ & 20.7 & 81.12 \\ ✓ & ✓ & 20.7 & **81.31** \\ \end{tabular} \end{table} Table 5: Ablation study on the effectiveness of the **maximum and average pooling in spatial selection** of our proposed LSK module. We adopt LSKNet-T backbones pretrained on ImageNet for 100 epochs. The best result is obtained when using both. Table 4: **The effectiveness of the key design components** of the LSKNet when the large kernel is decomposed into a sequence of two depth-wise kernels. CS: channel selection (likewise in SKNet [30]); SS: spatial selection (**ours**). We adopt LSKNet-T backbones pretrained on ImageNet for 100 epochs. The LSKNet achieves the best performance when using a reasonably large receptive field with spatial selection. 
\begin{table} \begin{tabular}{l|l c c c} \multicolumn{2}{c|}{Framework \(\backslash\) mAP (\%)} & ResNet-18 & \(\star\) LSKNet-T \\ \hline Oriented RCNN [62] & 79.27 & 81.31 (**+2.04**) \\ RoI Transformer [12] & 78.32 & 80.89 (**+2.57**) \\ S\({}^{2}\)A-Net [20] & 76.82 & 80.15 (**+3.33**) \\ R3Det [68] & 74.16 & 78.39 (**+4.23**) \\ \hline \#P (backbone only) & 11.2M & 4.3M (**-62\%**) \\ FLOPs (backbone only) & 38.1G & 19.1G (**-50\%**) \\ \end{tabular} \end{table} Table 6: **Comparison of LSKNet-T and ResNet-18** as backbones with different detection frameworks on DOTA-v1.0. The LSKNet-T backbone is pretrained on ImageNet for 100 epochs. The lightweight LSKNet-T achieves significant higher mAP in various frameworks than ResNet-18. \begin{table} \begin{tabular}{l|l c c c} Group & Model (baseline only) & \#P & FLOPs & mAP (\%) \\ \hline Baseline & ResNet-18 & 11.2M & 38.1G & 79.27 \\ \hline \multirow{3}{*}{Large Kernel} & VAN-B1 [17] & 13.4M & 52.7G & 81.15 \\ & ConvNeXt V2-N [59] & 15.0M & 51.2G & 80.81 \\ & MSCAN-S [18] & 13.1M & 45.0G & 81.12 \\ \hline \multirow{3}{*}{Selective Attention} & SKNet-26 [30] & 14.5M & 58.5G & 80.67 \\ & ResNeSt-14 [77] & 8.6M & 57.9G & 79.51 \\ & SCNet-18 [34] & 14.0M & 50.7G & 79.69 \\ \hline **Ours** & \(\star\) LSKNet-S & 14.4M & 54.4G & **81.48** \\ \hline Prev Best & CSPNeXt [43] & 26.1M & 87.6G & 81.33 \\ \end{tabular} \end{table} Table 7: **Comparison on LSKNet-S and other (large kernel/selective attention) backbones** under O-RCNN [62] framework on DOTA-v1.0, except that the Prev Best is under RT-MDet [43] framework. All backbones are pretrained on ImageNet for 100 epochs. Our LSKNet achieves the best mAP under similar complexity budgets, whilst surpassing the previous best public records [43]. LSKNet-S reaches an inference speed of **18.1** FPS on 1024x1024 images with a single RTX3090 GPU. **Results on FAIR1M-v1.0.** We compare our LSKNet against 6 other models on the FAIR1M-v1.0 dataset, as shown in Tab. 10. The results reveal that our LSKNet-T and LSKNet-S perform exceptionally well, achieving state-of-the-art mAP scores of **46.93%** and **47.87%** respectively, surpassing all other models by a significant margin. **2022 the Greater Bay Area International Algorithm Competition.** Our team implemented a model similar to LSKNet for the _2022 the Greater Bay Area International Algorithm Competition_ and achieved second place, with a minor margin separating us from the first-place winner. The dataset used during the competition is a subset of FAIR1M-v2.0 [53], and the competition results are illustrated in Tab. 11. More details refer to SM. ### Analysis Visualization examples of detection results and Eigen-CAM [45] are shown in Fig. 5. It highlights that LSKNet-S can capture much more context information relevant to the detected targets, leading to better performance in various hard cases, which justifies our _prior (1)_. To investigate the range of receptive field for each object category, we define \(R_{c}\) as the _Ratio of Expected Selective RF Area and GT Bounding Box Area_ for category \(c\): \[R_{c}=\frac{\sum_{i=1}^{I_{c}}A_{i}/B_{i}}{I_{c}}, \tag{11}\] \[A_{i}=\sum_{d=1}^{D}\sum_{n=1}^{N}|\overline{\mathbf{S}}\mathbf{A}_{n}^{d}\cdot RF _{n}|,\ B_{i}=\sum_{j=1}^{J_{i}}Area(\text{GT}_{j}), \tag{12}\] where \(I_{c}\) is the number of images that contain the object category \(c\) only. 
The \(A_{i}\) is the sum of spatial selection activation in all LSK blocks for input image \(i\), where \(D\) is the number of blocks in an LSKNet, and \(N\) is the number of decomposed large kernels in an LSK module. \(B_{i}\) is the total pixel area of all \(J_{i}\) annotated oriented object bounding boxes (GT). We plot the normalized \(R_{c}\) in Fig. 6 which represents the relative range of context required for different object categories for a better view. The results suggest that the Bridge category stands out as requiring a greater amount of additional contextual in \begin{table} \begin{tabular}{l|c|c} \multicolumn{1}{c|}{Team Name} & \multicolumn{1}{c}{Pre-stage} & \multicolumn{1}{c}{Final-stage} \\ \hline nust\_milab & 81.16 & 74.16 \\ \hline Secret:Weapon **(ours)** & 81.11 & 73.94 \\ JiaNeng & 79.07 & 72.90 \\ ema.ai pass & 78.65 & 72.75 \\ SanRexing & 78.06 & 71.39 \\ \end{tabular} \end{table} Table 11: 2022 the Greater Bay Area International Algorithm Competition results. The dataset is based on **FAIR1M-v2.0**[53]. \begin{table} \begin{tabular}{l|c|c|c c c c c c c c c c c c c c c c c} Method & **Pre.** & **mAP \(\uparrow\)** & **\#P \(\downarrow\)** & **FLOPs \(\downarrow\)** & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & AP & BC & HC \\ \hline \multicolumn{13}{l|}{_One-stage_} & & & & & & & & & & & & & & & & & & & & & & & & & & \\ \hline R3Det [68] & IN & 76.47 & 41.9M & 336G & 89.80 & 83.77 & 48.11 & 66.77 & 78.76 & 83.27 & 87.84 & 90.82 & 85.38 & 85.51 & 65.57 & 62.68 & 67.53 & 78.56 & 72.62 \\ CFA [19] & IN & 76.67 & 36.6M & 194G & 89.08 & 83.20 & 54.37 & 66.87 & 81.23 & 80.96 & 87.17 & 90.21 & 84.32 & 86.09 & 52.34 & 69.94 & 75.52 & 80.76 & 67.96 \\ DARNe [28] & IN & 76.95 & - & - & 89.40 & 86.27 & 53.70 & 60.51 & 82.04 & 81.17 & 88.66 & 90.37 & 83.81 & 87.27 & 53.93 & 69.38 & 75.61 & 81.26 & 70.86 \\ SASM [24] & IN & 79.17 & 36.6M & 194G & 89.54 & 85.94 & 57.73 & 78.41 & 79.78 & 8.149 & 82.95 & 90.87 & 58.80 & 87.27 & 63.82 & 67.81 & 78.67 & 79.35 & 69.37 \\ ADO-DER [9] & IN & 79.22 & 74.3M & 304G & 89.95 & 84.52 & 56.90 & 74.83 & 80.86 & 83.47 & 88.47 & 90.87 & 86.12 & **88.55** & 63.21 & 65.09 & 79.09 & **82.88** & 73.46 \\ S\({}^{2}\)ANet [20] & IN & 79.42 & 38.6M & 198G & 88.89 & 83.60 & 57.74 & 81.95 & 79.94 & 83.19 & **89.11** & 90.78 & 84.87 & 87.81 & 70.30 & 68.25 & 78.30 & 77.01 & 69.58 \\ R3Det-GWD [70] & IN & 80.23 & 41.9M & 336G & 89.66 & 84.99 & 59.26 & 81.99 & 78.97 & 84.83 & 87.70 & 90.21 & 86.54 & 86.58 & **73.47** & 67.77 & 76.92 & 79.22 & 79.42 & 74.92 \\ RTRDet [43] & IN & 80.54 & 52.3M & 205G & 88.36 & 54.96 & 57.33 & 80.46 & 80.58 & 84.88 & **80.98** & **90.62** & 87.57 & 69.29 & 70.61 & 78.63 & 80.97 & 79.24 & 78.68 \\ R3Det-KLD [72] & IN & 80.63 & 41.9M & 336G & 89.92 & 85.13 & 59.19 & 81.33 & 78.82 & 84.38 & 87.50 & 98.80 & 87.33 & 70.20 & 75.73 & 77.12 & 77.34 & 78.68 \\ RTMDet-R [43] & CO & 81.33 & 52.3M & 205G & 88.01 & 86.17 & 58.54 & 82.44 & 81.30 & 84.82 & 88.71 & 90.89 & **88.77** & 87.37 & 71.96 & 71.18 & 81.23 & 81.40 & 77.13 \\ \hline \multicolumn{13}{l|}{_Two-stage_} & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & \\ \hline SCRNet [71] & IN & 72.61 & - & - & - & 89.98 & 80.65 & 52.09 & 68.36 & 68.36 & 60.32 & 72.41 & 90.85 & 87.94 & 86.86 & 65.02 & 66.68 & 66.25 & 68.24 & 65.21 \\ RoI Trans. [12] & IN & 74.61 & 55.1M & 200G & 88.65 & 82.60 & 52.53 & 70.87 & 77.93 & 76.67 & 86.87 & 90.71 & 83.83 & 82.51 & 53.95 & 67.61 & 74.67 & 68.75 & 61.03 \\ G.V. 
[64] & IN & 75.02 & 41.1M & 198G & 89.64 & 85.00 & 52.26 & 77.34 & 73.01 & 73.14 & 86.82 & 90.74 & 79.02 & 86.81 & 59.55 & 70.91 & 79.24 & 70.86 & 57.32 \\ CenterMap [56] & IN & 76.03 & 41.1M & 198G & 89.83 & 84.41 & 54.60 & 70.25 & 77.66 & 78.32 & 87.19 & 90.66 & 84.89 & 85.27 & 56.46 & 69.23 & 74.13 & 71.56 & 66.06 \\ SCL [69] & IN & 76.17 & 37.4M & 236G & **90.22** & 85.53 & 54.54 formation compared to other categories, primarily due to its similarity in features with roads and the necessity of contextual clues to ascertain whether it is enveloped by water. Conversely, the Court categories, such as Soccer-ball-field, necessitate minimal contextual information owing to their distinctive textural attributes, specifically the court boundary lines. It aligns with our knowledge and further supports _prior (2)_ that the relative range of contextual information required for different object categories varies greatly. We further investigate the kernel selection behaviour in our LSKNet. For object category \(c\), the _Kernel Selection Difference_\(\Delta A_{c}\) (i.e., larger kernel selection - smaller kernel selection) of an LSKNet-T block is defined as: \[\Delta A_{c}=|\overline{\mathbf{SA}}_{larger}-\overline{\mathbf{SA}}_{smaller }|. \tag{13}\] We demonstrate the normalised \(\Delta A_{c}\) over all images for three typical categories: Bridge, Roundabout and Soccer-ball-field and for each LSKNet-T block in Fig. 7. As expected, the participation of larger kernels of all blocks for Bridge is higher than that of Roundabout, and Roundabout is higher than Soccer-ball-field. This aligns with the common sense that Soccer-ball-field indeed does not require a large amount of context, since its own texture characteristics are already sufficiently distinct and discriminatory. We also surprisingly discover another selection pattern of LSKNet across network depth: LSKNet usually utilizes larger kernels in its shallow layers and smaller kernels in higher levels. This indicates that networks tend to quickly focus on capturing information from large receptive fields in low-level layers so that higher-level semantics can contain sufficient receptive fields for better discrimination. Figure 5: **Eigen-CAM visualization** of Oriented RCNN detection framework with ResNet-50 and LSKNet-S. Our proposed LSKNet can model a much long range of context information, leading to better performance in various hard cases. Figure 6: Normalised **Ratio \(R_{c}\) of Expected Selective RF Area and GT Bounding Box Area** for object categories in DOTA-v1.0. The relative range of context required for different object categories varies a lot. Examples of Bridge and Soccer-ball-field are given, where the visualized receptive field is obtained from Eq. (8) (i.e., the spatial activation) of our well-trained LSKNet model. Figure 7: Normalised **Kernel Selection Difference** in the LSKNet-T blocks for Bridge, Roundabout and Soccer-ball-field. B_i_j represents the j-th LSK block in stage i. A greater value is indicative of a dependence on a broader context. ## 5 Conclusion In this paper, we propose the Large Selective Kernel Network (LSKNet) for remote sensing object detection tasks, which is designed to utilize the inherent characteristics in remote sensing images: the need for a wider and adaptable contextual understanding. By adapting its large spatial receptive field, LSKNet can effectively model the varying contextual nuances of different object types. 
Extensive experiments demonstrate that our proposed lightweight model achieves state-of-the-art performance on competitive remote sensing benchmarks.
2307.12859
Sums of proper divisors with missing digits
Let $s(n)$ denote the sum of proper divisors of an integer $n$. In 1992, Erd\H{o}s, Granville, Pomerance, and Spiro (EGPS) conjectured that if $\mathcal{A}$ is a set of integers with asymptotic density zero then $s^{-1}(\mathcal{A})$ also has asymptotic density zero. In this paper we show that the EGPS conjecture holds when $\mathcal{A}$ is taken to be a set of integers with missing digits. In particular, we give a sharp upper bound for the size of this preimage set. We also provide an overview of progress towards the EGPS conjecture and survey recent work on sets of integers with missing digits.
Kübra Benli, Giulia Cesana, Cécile Dartyge, Charlotte Dombrowsky, Lola Thompson
2023-07-24T14:58:19Z
http://arxiv.org/abs/2307.12859v1
# Sums of proper divisors with missing digits ###### Abstract. Let \(s(n)\) denote the sum of proper divisors of an integer \(n\). In 1992, Erdos, Granville, Pomerance, and Spiro (EGPS) conjectured that if \(\mathcal{A}\) is a set of integers with asymptotic density zero then \(s^{-1}(\mathcal{A})\) also has asymptotic density zero. In this paper we show that the EGPS conjecture holds when \(\mathcal{A}\) is taken to be a set of integers with missing digits. In particular, we give a sharp upper bound for the size of this preimage set. We also provide an overview of progress towards the EGPS conjecture and survey recent work on sets of integers with missing digits. ## 1. Introduction Let \(s(n)\) denote the _sum-of-proper-divisors function_, i.e., \(s(n)=\sum_{d|n,d<n}d.\) The function \(s(n)\) has been studied since the time of the ancient Greeks. Indeed, the _perfect_ numbers are those integers \(n\) for which \(s(n)=n\). The Greeks also spoke of integers as being _deficient_ if \(s(n)<n\) and _abundant_ if \(s(n)>n\). It is natural to wonder how the function \(s\) behaves when applied to various inputs. One surprising feature is that \(s\) can map sets of asymptotic density1 Footnote 1: If \(\mathcal{S}\) is a subset of the natural numbers, then the _asymptotic density_ of \(\mathcal{S}\) is given by \[\lim_{x\to\infty}\frac{1}{x}\#\{n\leq x:n\in\mathcal{S}\},\] provided that this limit exists. In the opposite direction, for sets of integers with'many' prime factors, Troupe proved the following theorem. **Theorem 1.3**.: _[_30_, Theorem 1.3]_ _Let \(\omega(m)\) denote the number of distinct prime factors of an integer \(m\). For any fixed \(\epsilon>0\), if_ \[\mathcal{A}=\{m:|\omega(m)-\log\log m|>\epsilon\log\log m\}\] _then \(s^{-1}(\mathcal{A})\) has asymptotic density zero._ In other words, not only are numbers with a lot more than the "normal" number of prime factors rare, their preimages under \(s\) are rare as well. This theorem implies that \(\log\log n\) is the normal order of \(\omega(s(n))\). Very recently, Pollack and Troupe [27] have improved this, showing that \(\omega(s(n))\) satisfies an Erdos-Kac-type theorem. This signifies that \(\omega(s(n))\) asymptotically has a normal distribution with mean and variance \(\log\log n\). There are several other sets \(\mathcal{A}\) whose preimages under \(s\) have been studied in recent years. For example, Pollack considered the case where \(\mathcal{A}\) is a set of palindromes. **Theorem 1.4**.: _[_23_, Theorem 1]_ _If \(\mathcal{A}\) is the set of palindromes in any given base, then \(s^{-1}(\mathcal{A})\) has asymptotic density zero._ In another paper, Troupe took \(\mathcal{A}\) to be the set of integers representable as a sum of two squares. **Theorem 1.5**.: _[_31_, Theorem 1.2]_ _Let \(\mathcal{A}\) be the set of integers \(n\leq x\) that can be written as a sum of two squares. Then \(\#(s^{-1}(\mathcal{A})\cap[1,x])\asymp\frac{x}{\sqrt{\log x}}\)._ Note that Troupe obtains both upper and lower bounds; most of the aforementioned papers only obtain upper bounds. In general, it is difficult to obtain nontrivial lower bounds. Another recent result, due to Pollack and Singha Roy, shows that \(k\)-th power-free values of \(n\) and \(s(n)\) are equally common. **Theorem 1.6**.: _[_26_, Theorem 1.3]_ _Fix \(k\geq 4\). On a set of integers with asymptotic density \(1\),_ \[n\text{ is $k$-free}\iff s(n)\text{ is $k$-free}.\] The squarefree and cubefree cases remain conjectural. 
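To make these objects concrete, the equivalence of Theorem 1.6 can be probed numerically at a small scale (an illustrative script using sympy, not part of the proofs; the range and the choice \(k=4\) are ours):

```python
# Tally how often "n is 4-free" and "s(n) is 4-free" agree for 2 <= n < 10^5.
from sympy import divisor_sigma, factorint

def s(n):
    return int(divisor_sigma(n)) - n      # sum of proper divisors of n

def is_kfree(n, k):
    return all(e < k for e in factorint(n).values())

N = 10**5
agree = sum(1 for n in range(2, N) if is_kfree(n, 4) == is_kfree(s(n), 4))
print(agree / (N - 2))                    # should be close to 1
```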
In a slightly different direction, there is a recent result by Lebowitz-Lockard, Pollack, and Singha Roy which shows that the values of \(s(n)\) (for composite \(n\)) are equidistributed among the residue classes modulo \(p\) for small primes \(p\). **Theorem 1.7**.: _[_17_, Theorem 1.3]_ _Fix \(A>0\). As \(x\to\infty\), the number of composite \(n\leq x\) with \(s(n)\equiv a\pmod{p}\) is \((1+o(1))x/p\), for every residue class \(a\pmod{p}\) with \(p\leq(\log x)^{A}\)._ In the present paper, we consider the preimages under \(s\) of a new set \(\mathcal{A}\), namely, the set of integers with missing digits. We will elaborate more on these integers in Section 2. In particular, we will give some historical background and define the notation carefully. For now, we briefly discuss our main results. We show that the EGPS Conjecture holds for sets of integers with missing digits. Moreover, we prove a quantitative version of this result. **Theorem 1.8**.: _Fix \(g\geq 3\), \(\gamma\in(0,1)\), and a nonempty set \(\mathcal{D}\subsetneq\{0,1,\ldots,g-1\}\). For all sufficiently large \(x\), the number of \(n\leq x\) for which \(s(n)\) has all of its digits in base \(g\) restricted to digits in \(\mathcal{D}\) is \(O(x\exp(-(\log\log x)^{\gamma}))\)._ Our main tool in the proof is an upper bound for the number of positive integers \(n\leq x\) such that \(g^{k}\nmid\sigma(n)\) when \(g^{k}\) is a large integer. Note that it was proved in [33, Hauptsatz 2] that the set of such integers has asymptotic density zero for fixed modulus \(g^{k}\). By [28, Theorem 2], such a result can be made uniform in the modulus, see [24, Lemma 2.1], stated as Lemma 3.3 below. By sacrificing the uniformity, we obtain the following stronger upper bound. **Lemma 1.9**.: _Let \(g\geq 3\) be a given integer. Let \(\gamma,\delta\in(0,1)\) and \(A>0\) also be given. Then for integers \(k\in[5(\log_{3}x),A(\log_{2}x)^{\gamma}]\), we have_ \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll x\exp\left(-\left(\log_{2}x\right)^{\delta} \right),\] _where the constant implied by the \(\ll\) notation depends on the choices of \(g,A,\gamma,\delta\)._ As this lemma does not rely on any facts about integers with missing digits, it may also be useful in other contexts. One may wonder how sharp the upper bound in Theorem 1.8 really is. We note that the statement can be re-written in the following form: For all sufficiently large \(x\), the number of \(n\leq x\) for which \(s(n)\) has all of its digits in base \(g\) restricted to digits in \(\mathcal{D}\) is at most \[\frac{x}{\exp((\log_{2}x)^{1+o(1)})}.\] Since \(s(p)=1\) for all primes \(p\), whenever the set \(\mathcal{D}\) contains \(1\) the size of this preimage set is bounded below by \(\pi(x)\), the prime-counting function. Since \(\pi(x)\sim\frac{x}{\log x}\) as \(x\to\infty\), we see that our exponent of \(1+o(1)\) is optimal for arbitrary \(g\) and \(\mathcal{D}\). The proofs of all of the aforementioned theorems make crucial use of the special forms that the numbers in these sets possess. The methods do not generalize to arbitrary sets with asymptotic density zero. In a different direction, the EGPS Conjecture has also been shown to hold for all sets of \(n\leq x\) with cardinality up to about \(x^{1/2}\). More precisely, we have the following theorem by Pollack, Pomerance, and the last author, where the result is uniform in the choice of the set as long as its size is small. 
**Theorem 1.10**.: _[_25_, Theorem 1.2]_ _Let \(\epsilon=\epsilon(x)\) be a fixed function tending to \(0\) as \(x\to\infty\). If \(\mathcal{A}\subset\mathbb{N}\) with \(\#\left(\mathcal{A}\cap[1,x]\right)\ll x^{1/2+\epsilon(x)}\) then \(s^{-1}(\mathcal{A})\) has asymptotic density zero._ As a corollary, one can obtain (for example) that if \(\mathcal{A}\) is the set of squares up to \(x\) then \(s^{-1}(\mathcal{A})\) has asymptotic density zero. Similarly, one can use Theorem 1.10 to prove that the preimage of a set of integers with 'many' missing digits has density \(0\). For example, if we remove half of the possible digits then the set of integers with missing digits will be of size \(O(x^{1/2})\), and thus this is handled by Theorem 1.10. One can also deduce Theorem 1.4 as a corollary of Theorem 1.10, since the number of palindromes less than \(x\) is \(O(\sqrt{x})\). ### Notation and conventions Throughout this paper, we will write \(\#S\) to denote the number of elements in a set \(S\). We will use \(\sigma(n)\) to denote the _sum-of-divisors function_, defined by \(\sigma(n)\coloneqq\sum_{d\mid n}d\); \(\varphi(n)\) to denote the _Euler \(\varphi\)-function_ for a positive integer \(n\), defined by \(\varphi(n)\coloneqq\#\{1\leq j\leq n:\gcd(n,j)=1\}\); id to denote the _identity function_; and \(\omega(n)\) to denote the _number of distinct prime factors_ of an integer \(n\). We let \(\mu(n)\) be the _Mobius function_ defined as \[\mu(n)\coloneqq\begin{cases}(-1)^{r},&\text{ if }n=p_{1}\cdots p_{r}\text{ with distinct primes }p_{i},\\ 0,&\text{ if there exists a prime }p\text{ such that }p^{2}\mid n.\end{cases}\] For arithmetic functions \(G\) and \(H\), the _convolution_\(G*H\) is defined by \[G*H(n)\coloneqq\sum_{ab=n}G(a)H(b),\] for any positive integer \(n\). For two real functions \(F\) and \(G\) where \(G\) is a nonnegative valued function, we say that \(F(x)=O(G(x))\) if there exists a positive constant \(C\) and a real number \(x_{0}\) such that \(|F(x)|\leq C|G(x)|\) for all \(x\geq x_{0}\), and that \(F(x)=o(G(x))\) as \(x\to a\) if \[\lim_{x\to a}\frac{F(x)}{G(x)}=0.\] We will also at times use Vinogradov's notation \(\ll\) as an alternative to Landau's Big \(O\) notation. Namely, \(F\ll G\) denotes that \(F(x)=O(G(x))\). Similarly, \(\gg\) is used to denote the parallel notion with the inequalities reversed in the Big \(O\) definition. We write \(F\asymp G\) when there are positive constants \(C_{1}\) and \(C_{2}\) such that \(C_{1}|F|<|G|<C_{2}|F|\). Furthermore, we let \(\log_{k}(x)\) denote the \(k\)-th iterate of the natural logarithm of \(x\), e.g., \(\log_{3}x=\log\log\log x\). ## 2. Integers with missing digits Let \(g\in\mathbb{N}\), \(g\geq 3\). We consider the base \(g\) expansion of a positive integer \(n\), \[n=\sum_{j\geq 0}\varepsilon_{j}(n)g^{j},\] with coefficients \(\varepsilon_{j}(n)\in\{0,\ldots,g-1\}\). Note that this sum is finite. For a proper subset \(\mathcal{D}\subsetneq\{0,\ldots,g-1\}\) such that \(0\in\mathcal{D}\), and an arithmetic function \(f\) taking positive integer values, we put \[\mathcal{W}_{f,\mathcal{D}}\coloneqq\left\{n:f(n)=\sum_{j\geq 0}\varepsilon_{j}(f (n))g^{j},\varepsilon_{j}(f(n))\in\mathcal{D}\right\} \tag{2.1}\] as the set of integers \(n\) where the digits of \(f(n)\) are restricted in the set \(\mathcal{D}\). In addition, for any \(x\geq 1\) and any set of integers \(A\) let \(A(x)\) denote the set \(A\cap[1,x]\).
So for any \(x\geq 2\), we put \[\mathcal{W}_{f,\mathcal{D}}(x)\coloneqq\left(f^{-1}(\mathcal{W}_{\mathcal{D}} )\right)(x)=\left\{n\leq x:f(n)=\sum_{j\geq 0}\varepsilon_{j}(f(n))g^{j}, \varepsilon_{j}(f(n))\in\mathcal{D}\right\}\] as a finite subset of \(\mathcal{W}_{f,\mathcal{D}}\). In the particular case \(f=\mathrm{id}\) we simply write \(\mathcal{W}_{\mathcal{D}}\) (resp. \(\mathcal{W}_{\mathcal{D}}(x)\)) in place of \(\mathcal{W}_{\mathrm{id},\mathcal{D}}\) (resp. \(\mathcal{W}_{\mathrm{id},\mathcal{D}}(x)\)). The elements of \(\mathcal{W}_{\mathcal{D}}\) are frequently referred to as _integers with missing digits_ (or _integers with restricted digits_). In this survey we will also use the terminology proposed by Mauduit, who referred to them as _ellipsephic2 integers_. Footnote 2: The origin of this nomenclature comes from the fusion of the two Greek words “ellipsis” = missing and “psiphic” = digit. Since the set \(\mathcal{D}\) is a proper subset of \(\{0,\ldots,g-1\}\), and since we have \[\#\mathcal{W}_{\mathcal{D}}\left(g^{N}-1\right)=\left|\mathcal{D}\right|^{N}, \tag{2.2}\] the elements of \(\mathcal{W}_{\mathcal{D}}\) form a sparse set, i.e., \[\lim_{N\to\infty}\frac{\#\mathcal{W}_{\mathcal{D}}\left(g^{N}\right)}{g^{N}}=0.\] In other words, the set \(\mathcal{W}_{\mathcal{D}}\) has asymptotic density zero. When \(0\not\in\mathcal{D}\), we can adapt the definition (2.1) by setting \[\mathcal{W}_{f,\mathcal{D}}\coloneqq\left\{n:f(n)=\sum_{j=0}^{N}\varepsilon_{ j}(f(n))g^{j},\varepsilon_{j}(f(n))\in\mathcal{D},\ N\in\mathbb{N}\right\}.\] In this section, as we survey the literature, we concentrate on the case with \(0\in\mathcal{D}\) in order to avoid some complications in the statements of some results. However, the interested reader can find some results related to the sets \(\mathcal{W}_{\mathcal{D}}\) with \(0\not\in\mathcal{D}\) in [1]. In our main theorem, we do not require \(0\in\mathcal{D}\). If we set \(g=10\) and \(\mathcal{D}=\{3,6,9\}\), then any number in \(\mathcal{W}_{\mathcal{D}}\) is divisible by \(3\) (since \(10^{j}\equiv 1\pmod{3}\) and every digit in \(\mathcal{D}\) is divisible by \(3\)). However, if we exclude similar trivial obstructions, we can expect that the sequence of ellipsephic integers behaves like the sequence of the natural numbers. A first question could be whether these integers are well-distributed in arithmetic progressions. Erdos, Mauduit, and Sarkozy [12] give an affirmative answer. Their result is valid under the following two natural hypotheses for the set \(\mathcal{D}=\{d_{1},d_{2},\ldots,d_{t}\}\), namely \[d_{1}=0\in\mathcal{D}\text{ and }\gcd(d_{2},\ldots,d_{t})=1. \tag{2.3}\] For \(a,q\in\mathbb{Z}\) such that \(\gcd(q,g(g-1))=1\), we introduce the set of the ellipsephic integers congruent to \(a\) modulo \(q\) by \[\mathcal{W}_{\mathcal{D}}(x,a,q)\coloneqq\left\{n\in\mathcal{W}_{\mathcal{D}} (x):n\equiv a\ (\operatorname{mod}q)\right\}.\] Erdos, Mauduit, and Sarkozy proved that if \(\mathcal{D}\) satisfies (2.3), then there exist constants \(c_{1}\coloneqq c_{1}(g,t)>0\) and \(c_{2}\coloneqq c_{2}(g,t)>0\) such that \[\left|\#\mathcal{W}_{\mathcal{D}}(x,a,q)-\frac{\#\mathcal{W}_{\mathcal{D}}(x )}{q}\right|=O\left(\frac{\#\mathcal{W}_{\mathcal{D}}(x)}{q}\exp\left(-c_{1} \frac{\log x}{\log q}\right)\right), \tag{2.4}\] for all \(a\in\mathbb{Z}\) and \(q\leq\exp(c_{2}\sqrt{\log x})\) such that \(\gcd(q,g(g-1))=1\) (see [12, Theorem 1]). This result was improved by Konyagin [15] in 2001 and by Col [6] in 2009.
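As a quick empirical illustration of (2.4) (our own sketch, not part of [12]), one can enumerate the integers with digits in \(\mathcal{D}\) up to \(g^{N}\) and compare the residue-class counts modulo a small \(q\) with \(\gcd(q,g(g-1))=1\); the parameters below are arbitrary illustrative choices satisfying (2.3).

```python
from itertools import product
from math import gcd

g, D, N = 10, (0, 1, 3), 6   # base, allowed digits (0 in D, gcd(1,3)=1), digit length
q = 7                        # modulus with gcd(q, g*(g-1)) = 1
assert gcd(q, g * (g - 1)) == 1

counts = [0] * q
for digits in product(D, repeat=N):   # all length-N digit strings over D
    n = 0
    for d in digits:
        n = g * n + d
    counts[n % q] += 1

total = len(D) ** N                   # this is #W_D(g^N - 1), as in (2.2)
print("expected per class:", total / q)
print("actual counts:     ", counts)
```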
The papers [12] and [13] provide several interesting applications of these equidistribution results and give lists of open problems that inspired various research projects. Another interesting aspect is the normal order of some arithmetic functions along ellipsephic integers. Banks and Shparlinski [3] obtained various results in this direction. In particular, they studied the average values over \(\mathcal{W}_{\mathcal{D}}\) of the Euler \(\varphi\)-function and the sum-of-divisors function. Equation (2.4) can be seen as an analogue of the Siegel-Walfisz Theorem, stated below as Lemma 3.1, for primes in arithmetic progressions. Such theorems are not applicable when the modulus \(q\) is a power of \(x\). However, in many applications, it is sufficient to use an equidistribution result that is averaged over the moduli \(q\). In doing so, one is able to extend the range of \(q\). One such application can be seen in work by the third author and Mauduit [7, 8], and independently by Konyagin [15]. They proved that there exists an \(\alpha\coloneqq\alpha(g,\mathcal{D})\) such that (2.4) holds for almost all \(q<x^{\alpha}\) satisfying \(\gcd(q,g(g-1))=1\). Such results combined with sieve methods imply the existence of ellipsephic integers with few prime factors. For example, in [7] it is proved that there exist infinitely many \(n\in\mathcal{W}_{\{0,1\}}\) with at most \(k_{g}=(1+o(1))8g/\pi\) prime factors as \(g\to\infty\). The problem of the existence of infinitely many primes with missing digits has been solved recently by Maynard [18, 19]. In particular, he proved the following spectacular result, given by [18, Theorem 1.1]. Let \(a_{0}\in\{0,...,9\}\). The number of primes \(p\leq x\) with no digit \(a_{0}\) in their base \(10\) expansions is \[\asymp\frac{x^{\frac{\log 9}{\log 10}}}{\log x}.\] He also gives a condition to determine whether there are finitely or infinitely many \(n\) such that \(P(n)\in\mathcal{W}_{\mathcal{D}}\), for any given non-constant polynomial \(P\in\mathbb{Z}[X]\), large enough base \(g\), and \(\mathcal{D}=\{0,\ldots,g-1\}\setminus\{a_{0}\}\) (see [19, Theorem 1.2]). The papers [18] and [19] also provide deep results when the number of excluded digits is \(\geq 2\). We end this short discussion on ellipsephic integers by giving an incomplete list of recent references on this subject containing many other interesting results, namely [1, 4, 5, 6, 16]. In the present paper, we are particularly interested in the preimages of the sets of ellipsephic integers under \(s(n)\), namely \(\mathcal{W}_{s,\mathcal{D}}(x)=\left(s^{-1}\left(\mathcal{W}_{\mathcal{D}} \right)\right)(x)\). With this setup, our main result can be reformulated in the following manner: Let \(g\geq 3\) be an integer and \(\mathcal{D}\subsetneq\{0,\dots,g-1\}\) be a nonempty proper subset. Let \(0<\gamma<1\) be given. Then for all sufficiently large \(x\), we have \[\#\mathcal{W}_{s,\mathcal{D}}(x)=\#\left(s^{-1}\left(\mathcal{W}_{\mathcal{D} }\right)\right)(x)\ll x\exp(-(\log_{2}x)^{\gamma}).\] ## 3. Preliminary Lemmata for Theorem 1.8 We start with a well-known deep result on primes in arithmetic progressions.
**Lemma 3.1** (Siegel-Walfisz Theorem, [29, Theorem II.8.17, page 376]).: _For any constant \(A>0\), and uniformly for \(x\geq 3\), \(1\leq q\leq(\log x)^{A}\), and \(\gcd(a,q)=1\), we have_ \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\ (\mathrm{mod}\,q)\end{subarray}}\log p=\frac{x}{\varphi(q)}+O\left(x \exp\left(-c\sqrt{\log x}\right)\right),\] _where \(c=c(A)\) is a strictly positive constant._ **Remarks.** We will apply this lemma for some \(q=g^{\ell}\) with \(\ell\approx\log_{3}x\). Norton [21] and Pomerance [28] independently proved that there exists a constant \(C>0\) such that for all \(x\geq 3\), and for all integers \(q\), \(a\) with \(\gcd(a,q)=1\), \(q>0\) we have3 Footnote 3: The interested reader may find a more precise formulation in [21, Section 6] and in [28, Remark 1]. \[\left|\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\ (\mathrm{mod}\,q)\end{subarray}}\frac{1}{p}-\frac{\log_{2}x}{ \varphi(q)}\right|\leq C.\] We could apply these results instead of the Siegel-Walfisz Theorem in the proof of Lemma 1.9. Another remark is that when \(q\) has the special form \(q=g^{\ell}\) for some fixed \(g\) (as is the case in our application), Elliott [9] and then Baker and Zhao [2] proved that it is possible to have asymptotic estimates even when the size of \(g^{\ell}\) is a power of \(x\). Baker and Zhao proved that if \(q=g^{\ell}\) with fixed \(g\) then it is possible to obtain a result similar to Lemma 3.1 in the range \(g^{\ell}\leq x^{5/12-\varepsilon}\) with \(\varepsilon>0\) arbitrarily small. Recall that a _multiplicative function_\(f\) is a function that satisfies \(f(uv)=f(u)f(v)\) whenever \(\gcd(u,v)=1\). In particular, if \(f\) is a multiplicative function which is not identically zero, then we have \(f(1)=1\). Next, we quote the following technical result for the average order of a multiplicative function. **Lemma 3.2**.: _[_29_, Corollary III.3.6, page 457]_ _Let \(\lambda_{1},\lambda_{2}\) be constants such that \(\lambda_{1}>0\) and \(0\leq\lambda_{2}<2\). For any multiplicative function \(f\) such that_ \[0\leq f(p^{\nu})\leq\lambda_{1}\lambda_{2}^{\nu-1}\] _for all primes \(p\) and for \(\nu=1,2,3,\dots,\) we uniformly have_ \[\sum_{n\leq x}f(n)\ll x\prod_{p\leq x}\left(1-\frac{1}{p}\right)\sum_{\nu\geq 0 }\frac{f(p^{\nu})}{p^{\nu}} \tag{3.1}\] _for all \(x\geq 1\). The implicit constant in (3.1) is less than_ \[4(1+9\lambda_{1}+\lambda_{1}\lambda_{2}/(2-\lambda_{2})^{2}).\] **Lemma 3.3**.: _[_24_, Lemma 2.1]_ _Let \(x\geq 3\). Let \(q\) be a positive integer. Then_ \[\sum_{\begin{subarray}{c}n\leq x\\ q\nmid\sigma(n)\end{subarray}}1\ll\frac{x}{(\log x)^{1/\varphi(q)}},\] _uniformly in \(q\)._ Note that, by sacrificing the uniformity, we will be able to obtain a better bound for the number of positive integers \(n\leq x\) such that \(g^{k}\nmid\sigma(n)\) when \(g^{k}\) is a large integer. To prepare for the proof of such a result we first prove the following observation. **Lemma 3.4**.: _Let \(m\) be a positive integer. Then every positive integer \(n\) can be written uniquely as \(n=ab\) with \(\gcd(a,b)=1\) and_ \[\mu^{2}(a)=1,\,\,\,p\mid a\text{ implies }p\equiv-1\,\,(\operatorname{mod}m) \quad\text{ and }\quad p\mid b\text{ implies }p^{2}\mid b\text{ or }p\not\equiv-1\,\,(\operatorname{mod}m).\] Proof.: Let \(n>1\), as the result is vacuously true when \(n=1\). Assume that \(n\) has the following prime factorization: \[n=p_{1}^{e_{1}}p_{2}^{e_{2}}\dots p_{j}^{e_{j}},\] with \(p_{i}\neq p_{j}\) if \(i\neq j\).
Changing the order if needed, without loss of generality assume that \(p_{1},\dots,p_{J}\equiv-1\,\,(\operatorname{mod}m)\) with \(e_{1}=\dots=e_{J}=1\) and for \(k=J+1,\dots,j\) either \(p_{k}\not\equiv-1\,\,(\operatorname{mod}m)\) or \(e_{k}>1\). Then, we choose \[a=p_{1}\dots p_{J}\] and \[b=\frac{n}{a}=p_{J+1}^{e_{J+1}}\dots p_{j}^{e_{j}}.\] This finishes the proof. **Remark.** If we remove the condition that \(\gcd(a,b)=1\) in Lemma 3.4, then the decomposition \(n=ab\) is not unique. For example, if \(n\) has a prime divisor \(q\) with \(q\equiv-1\,\,(\operatorname{mod}m)\) and \(q^{3}\mid n\), then we can also choose \[a=q\prod_{\begin{subarray}{c}p\equiv-1\,\,(\operatorname{mod}m)\\ p\|n\end{subarray}}p\quad\text{ and }\quad b=\frac{n}{a},\] where \(p\|n\) means that \(p\mid n\) but \(p^{2}\nmid n\). We are now ready to prove our key lemma. Proof of Lemma 1.9.: Let \(\ell\leq k\) be chosen later. By Lemma 3.4, an integer \(n\) can be written in a unique way as \(n=ab\) with \(\gcd(a,b)=1\), \(\mu^{2}(a)=1\) and \(p\mid a\) implies \(p\equiv-1\,\,(\operatorname{mod}g^{\ell})\) and \(b\) such that \[p\mid b\Rightarrow p^{2}\mid b\text{ or }p\not\equiv-1\,\,(\operatorname{mod}g^{ \ell}).\] Suppose that \(n=ab\), written in the above form, is a positive integer such that \(g^{k}\nmid\sigma(n)\). Then we claim that \(n\) has at most \(m\coloneqq\lfloor k/\ell\rfloor\) prime factors \(p\) such that \(p\equiv-1\,\,(\operatorname{mod}g^{\ell})\) and \(p^{2}\nmid n\). Indeed, if \(n\) has more than \(k/\ell\) prime factors \(p\equiv-1\,\,(\operatorname{mod}g^{\ell})\) with \(p^{2}\nmid n\), say \(p_{1},p_{2},\dots p_{m+1}\), then \(\sigma(n)=(p_{1}+1)(p_{2}+1)\dots(p_{m+1}+1)K\), where \(K\) is a positive integer. Since each \(p_{i}\equiv-1\,\,(\operatorname{mod}g^{\ell})\), this implies \(\sigma(n)=c_{1}g^{\ell}c_{2}g^{\ell}\dots c_{m+1}g^{\ell}K=c_{1}c_{2}\dots c_{m+1}Kg^{(m+1)\ell}\) with positive integers \(c_{i}\). Since \((m+1)\ell\geq k\), this would imply that \(g^{k}\mid\sigma(n)\), which contradicts our assumption. Thus for such \(n\), we have \(\omega(a)\leq m\). Combining what we noted above, we obtain \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\leq\sum_{\begin{subarray}{c}ab\leq x\\ p|a\Rightarrow p\equiv-1\ (\text{mod}\,g^{\ell})\\ \omega(a)\leq m\\ p|b\Rightarrow p^{2}|b\text{ or }p\not\equiv-1\ (\text{mod}\,g^{\ell})\end{subarray}}\mu^{2}(a).\] To find an upper bound for the sum on the right-hand side, we first deal with the condition \(\omega(a)\leq m\). To do that, we use Rankin's method, replacing \(1\) with a nonnegative quantity that is at least \(1\) when \(\omega(a)\leq m.\) Namely, let \(t\in(0,1)\) be a parameter so that \(t^{\omega(a)-m}>0\) for any positive integer \(a\) and \(t^{\omega(a)-m}\geq 1\) if \(\omega(a)\leq m\).
Then we get \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1 \leq\sum_{\begin{subarray}{c}ab\leq x\\ p|a\Rightarrow p\equiv-1\ (\text{mod}\,g^{\ell})\\ p|b\Rightarrow p^{2}|b\text{ or }p\not\equiv-1\ (\text{mod}\,g^{\ell})\end{subarray}}\mu^{2}(a)t^{ \omega(a)-m}\] \[\leq t^{-m}\sum_{\begin{subarray}{c}ab\leq x\\ p|a\Rightarrow p\equiv-1\ (\text{mod}\,g^{\ell})\\ p|b\Rightarrow p^{2}|b\text{ or }p\not\equiv-1\ (\text{mod}\,g^{\ell})\end{subarray}}\mu^{2}(a)t^{ \omega(a)}.\] We can rewrite the sum on the right in terms of multiplicative functions \[\sum_{\begin{subarray}{c}ab\leq x\\ p|a\Rightarrow p\equiv-1\ (\text{mod}\,g^{\ell})\\ p|b\Rightarrow p^{2}|b\text{ or }p\not\equiv-1\ (\text{mod}\,g^{\ell})\end{subarray}}\mu^{2}(a )t^{\omega(a)}=\sum_{n\leq x}\sum_{ab=n}G(a)H(b)=\sum_{n\leq x}G*H(n),\] where \(G\) and \(H\) are multiplicative functions defined by \[G(p)\coloneqq\begin{cases}t,&\text{ if }p\equiv-1\ (\text{mod}\,g^{\ell}),\\ 0,&\text{ if }p\not\equiv-1\ (\text{mod}\,g^{\ell}),\end{cases}\] and \(G(p^{\nu})\coloneqq 0\) for all \(\nu\geq 2\); and \[H(p)\coloneqq\begin{cases}0,&\text{ if }p\equiv-1\ (\text{mod}\,g^{\ell}),\\ 1,&\text{ if }p\not\equiv-1\ (\text{mod}\,g^{\ell}),\end{cases}\] and \(H(p^{\nu})\coloneqq 1\) for all \(\nu\geq 2\). Let \(f=G*H\). The function \(f\) is also multiplicative with \(f(p)=G(p)+H(p)\), which gives \[f(p)=\begin{cases}t,&\text{ if }p\equiv-1\ (\text{mod}\,g^{\ell}),\\ 1,&\text{ otherwise.}\end{cases}\] For \(\nu\geq 2\) we have \[f\left(p^{\nu}\right)=\sum_{ab=p^{\nu}}G(a)H(b)=G(p)H(p^{\nu-1})+H\left(p^{\nu }\right)=\begin{cases}1,&\text{ if }\nu=2,\\ t+1,&\text{ if }\nu\geq 3\text{ and }p\equiv-1\ (\text{mod}\,g^{\ell}),\\ 1,&\text{ if }\nu\geq 3\text{ and }p\not\equiv-1\ (\text{mod}\,g^{\ell}).\end{cases}\] Now, we can apply Lemma 3.2 with \(\lambda_{1}=2\) and \(\lambda_{2}=1\) to \(f\) and obtain \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll t^{-m}x\prod_{\begin{subarray}{c}p\leq x \\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1-\frac{1}{p}\right)\left(1+\frac {t}{p}+\frac{1}{p^{2}}+(t+1)\sum_{\nu\geq 3}\frac{1}{p^{\nu}}\right)\prod_{ \begin{subarray}{c}p\leq x\\ p\not\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1-\frac{1}{p}\right)\sum_{ \nu\geq 0}\frac{1}{p^{\nu}}.\] Next, we note that for any prime \(p\), \[\left(1-\frac{1}{p}\right)\sum_{\nu\geq 3}\frac{1}{p^{\nu}}=\left(1-\frac{1}{p }\right)\frac{1}{p^{3}}\sum_{\nu\geq 0}\frac{1}{p^{\nu}}=\left(1-\frac{1}{p} \right)\frac{1}{p^{3}}\frac{1}{\left(1-1/p\right)}=\frac{1}{p^{3}}\] and \[\left(1-\frac{1}{p}\right)\sum_{\nu\geq 0}\frac{1}{p^{\nu}}=\left(1-\frac{1}{p }\right)\frac{1}{\left(1-1/p\right)}=1\] which yield \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll t^{-m}x\prod_{\begin{subarray}{c}p\leq x \\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}+\frac{1-t}{p^{2}} +\frac{t}{p^{3}}\right).\] Furthermore, we see that \[\prod_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}+\frac{1-t}{p^{2} }+\frac{t}{p^{3}}\right)=\prod_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}\right)\left(1+ \frac{1}{1+\frac{t-1}{p}}\frac{1-t}{p^{2}}+\frac{1}{1+\frac{t-1}{p}}\frac{t}{ p^{3}}\right)\] \[=\prod_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}\right)\prod_{ \begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{1-t}{p(p+t-1)}+\frac{t}{ p^{2}(p+t-1)}\right)\] where the second product is
bounded by a constant. So we obtain \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll t^{-m}x\prod_{\begin{subarray}{c}p\leq x \\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}\right).\] In order to bound the product above, we use the Taylor-Young formula for \(\log(1-X)\) for \(|X|\leq 1/2\), and the Siegel-Walfisz Theorem, stated in Lemma 3.1. So, if \(g^{\ell}\) is less than a power of \(\log x\), we have uniformly for \(t\in(0,1)\), \[\log\left(\prod_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(1+\frac{t-1}{p}\right)\right) =\sum_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\log\left(1+\frac{t-1}{p}\right)\] \[=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\left(\frac{t-1}{p}+O\left(\frac{1}{p^ {2}}\right)\right)\] \[= -\left(1-t\right)\sum_{\begin{subarray}{c}p\leq x\\ p\equiv-1\pmod{g^{\ell}}\end{subarray}}\frac{1}{p}+O\left(\sum_{p\leq x}\frac{1}{ p^{2}}\right)\] \[= -\frac{(1-t)\log_{2}x}{\varphi(g^{\ell})}+O(1),\] where the last equality above is deduced from the Siegel-Walfisz Theorem after partial summation, or more directly by the results of Norton [21] and Pomerance [28] mentioned just after Lemma 3.1. Thus, we obtain \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll\frac{x}{t^{m}(\log x)^{(1-t)/\varphi(g^{ \ell})}}.\] It remains to choose \(\ell\) and \(t\). Recall that \(k\in[5(\log_{3}x),A(\log_{2}x)^{\gamma}]\). With our choices, we would like to have \(t^{m}(\log x)^{(1-t)/\varphi(g^{\ell})}\to\infty\). Let \(\alpha,\alpha^{\prime}\in\mathbb{R}\) such that \(0<\alpha^{\prime}<\alpha<1\) and \(\gamma,\delta<\alpha-\alpha^{\prime}\). We let \[\ell\coloneqq\left\lfloor(1-\alpha)\frac{\log_{3}x}{\log g}\right\rfloor,\] \[t\coloneqq 1-\frac{1}{(\log_{2}x)^{\alpha^{\prime}}}.\] Note that we then have \(\ell\leq k\), \(g^{\ell}\leq(\log_{2}x)^{1-\alpha}\) and \[1\leq m\leq\frac{k}{\ell}\leq A\frac{(\log_{2}x)^{\gamma}}{\left\lfloor(1- \alpha)\frac{\log_{3}x}{\log g}\right\rfloor}\leq 2A\frac{(\log_{2}x)^{ \gamma}}{(1-\alpha)\frac{\log_{3}x}{\log g}}\leq(\log_{2}x)^{\gamma}\] for \(x\) large enough. With these choices, we get \[t^{m}(\log x)^{(1-t)/\varphi(g^{\ell})} \geq t^{m}(\log x)^{(1-t)/g^{\ell}}\] \[\gg \left(1-\frac{1}{(\log_{2}x)^{\alpha^{\prime}}}\right)^{(\log_{ 2}x)^{\gamma}}(\log x)^{\frac{1}{(\log_{2}x)^{\alpha^{\prime}+1-\alpha}}}\] \[= \exp\left((\log_{2}x)^{\gamma}\log\left(1-\frac{1}{(\log_{2}x) ^{\alpha^{\prime}}}\right)+\frac{\log_{2}x}{(\log_{2}x)^{\alpha^{\prime}+1- \alpha}}\right)\] \[= \exp\left((\log_{2}x)^{\gamma}\log\left(1-\frac{1}{(\log_{2}x) ^{\alpha^{\prime}}}\right)+(\log_{2}x)^{\alpha-\alpha^{\prime}}\right).\] Since \(\gamma<\alpha-\alpha^{\prime}\), the first term in the exponential above in absolute value is at most one half of the second. Hence for all \(x\geq x_{0}\) with \(x_{0}=x_{0}(\alpha,\alpha^{\prime},\gamma)\) we have \[\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1\ll x\exp\left(-\frac{1}{2}(\log_{2}x)^{\alpha -\alpha^{\prime}}\right)\ll x\exp\left(-(\log_{2}x)^{\delta}\right),\] where we have used the condition \(\delta<\alpha-\alpha^{\prime}\). This finishes the proof. ## 4. Proof of Theorem 1.8 Recall our basic setup: We have \(g\in\mathbb{N}\), \(g\geq 3\), \(\mathcal{D}\subsetneq\{0,1,\ldots,g-1\}\) nonempty, \(x\) sufficiently large, and \(0<\gamma<1\).
For a positive integer \(k\in\left[\frac{(\log_{2}x)^{\gamma}}{\log(g/|\mathcal{D}|)},2\frac{(\log_{2}x) ^{\gamma}}{\log(g/|\mathcal{D}|)}\right]\), we write \[\#\mathcal{W}_{s,\mathcal{D}}(x)=\sum_{\begin{subarray}{c}n\in\mathcal{W}_{s,\mathcal{D}}(x)\\ \sigma(n)\equiv 0\ (\mathrm{mod}\,g^{k})\end{subarray}}1+\sum_{ \begin{subarray}{c}n\in\mathcal{W}_{s,\mathcal{D}}(x)\\ \sigma(n)\not\equiv 0\ (\mathrm{mod}\,g^{k})\end{subarray}}1=:S_{1}+S_{2}.\] We will now work on the upper bounds for \(S_{1}\) and \(S_{2}\) separately. To start with, we use a rather weak bound for \(S_{2}\), where we drop the condition on the digit restriction on \(s(n)\). Then we apply Lemma 1.9 with \(\gamma=\delta\) to obtain \[S_{2}\leq\sum_{\begin{subarray}{c}n\leq x\\ g^{k}\nmid\sigma(n)\end{subarray}}1=O\left(x\exp\left(-\left(\log_{2}x\right)^{ \gamma}\right)\right).\] Next, we focus on finding an upper bound for \(S_{1}\). Following a similar setting as in [23], for \(s(n)=\sum_{j=0}^{N}\varepsilon_{j}(s(n))g^{j}\), for some \(N\geq 1\), we put \[B\coloneqq\sum_{j=0}^{k-1}\varepsilon_{j}(s(n))g^{j}\] as the number formed by the \(k\) rightmost digits of \(s(n)\) such that \(s(n)\equiv B\ (\mathrm{mod}\,g^{k})\). Note that this implies \(B\leq g^{k}-1.\) Let \(n\in\mathcal{W}_{s,\mathcal{D}}(x)\) with \(\sigma(n)\equiv 0\ (\mathrm{mod}\,g^{k})\). Then, since \(g^{k}\mid\sigma(n)\), we have \[n=\sigma(n)-s(n)\equiv-s(n)\equiv-B\ (\mathrm{mod}\,g^{k}).\] So we can relax the condition on \(S_{1}\) to a congruence condition as follows \[S_{1}=\sum_{\begin{subarray}{c}n\in\mathcal{W}_{s,\mathcal{D}}(x)\\ \sigma(n)\equiv 0\ (\mathrm{mod}\,g^{k})\end{subarray}}1\leq\sum_{B\in \mathcal{W}_{\mathcal{D}}(g^{k}-1)}\ \sum_{\begin{subarray}{c}n\leq x\\ n\equiv-B\ (\mathrm{mod}\,g^{k})\end{subarray}}1\leq|\mathcal{D}|^{k}\left\lfloor \frac{x}{g^{k}}\right\rfloor+|\mathcal{D}|^{k},\] where in the last inequality we used, for \(M\) a positive integer, \(b\in\mathbb{Z}\) and \(X\geq M\), that \[\#\{1\leq a\leq X:\,a\equiv b\ (\mathrm{mod}\,M)\}\leq\lfloor X/M\rfloor+1\] along with (2.2). Inserting our choice of \(k\) yields \[S_{1}\ll x\exp\left(-k\log\left(g/|\mathcal{D}|\right)\right)\ll x\exp\left(- \left(\log_{2}x\right)^{\gamma}\right).\] Thus, overall we obtain the following upper bound \[\#\mathcal{W}_{s,\mathcal{D}}(x)=\#s^{-1}\left(\mathcal{W}_{\mathcal{D}} \right)(x)\ll x\exp(-\left(\log_{2}x\right)^{\gamma})\] as desired. ## 5. Some remarks on a lower bound on \(\#\mathcal{W}_{s,\mathcal{D}}(x)\) As indicated earlier, as soon as we have \(1\in\mathcal{D}\), the elements in \(\mathcal{W}_{s,\mathcal{D}}\) are at least as frequent as the primes, since \(s(p)=1\) for all primes \(p\). In the case when \(\mathcal{D}=\{0,\ldots,g-1\}\setminus\{a_{0}\}\) for some \(a_{0}\in\{1,\ldots,g-1\}\) and \(g\) large enough, we can prove that \(s(n)\) takes on infinitely many different values in \(\mathcal{W}_{\mathcal{D}}\) by adapting the ideas used in [19]. As already remarked in the introduction, if \(p\) and \(q\) are two distinct primes then \(s(pq)=p+q+1\). It is thus sufficient to prove that a positive proportion of ellipsephic integers can be expressed as \(1\) plus a sum of two primes.
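Both ingredients of this argument are easy to explore computationally. The short sketch below (ours, with small illustrative bounds) verifies the identity \(s(pq)=p+q+1\) and counts, up to a modest \(x\), the integers avoiding the digit \(7\) in base \(10\) for which \(n-1\) is a sum of two primes.

```python
def s(n):
    """Sum of proper divisors (naive version, fine for small n)."""
    return sum(d for d in range(1, n) if n % d == 0)

# s(pq) = p + q + 1 for distinct primes p, q (the divisors of pq are 1, p, q).
p, q = 11, 17
assert s(p * q) == p + q + 1

# Sieve primes up to x, then count n <= x avoiding digit 7 with n - 1 = p1 + p2.
x = 2000
sieve = [True] * (x + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(x**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = [False] * len(sieve[i*i::i])
primes = [i for i in range(2, x + 1) if sieve[i]]
sums = {p1 + p2 for p1 in primes for p2 in primes if p1 + p2 <= x}
hits = [n for n in range(2, x + 1) if '7' not in str(n) and (n - 1) in sums]
print(len(hits), "such n up to", x)
```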
The arguments of [19, Sections 6-8] imply that \[\sum_{n\in\mathcal{W}_{\mathcal{D}}(g^{N}-1)}\sum_{p+q+1=n}\log p\log q=(c( \mathcal{D})+o(1))(g|\mathcal{D}|)^{N}, \tag{5.1}\] with \[c(\mathcal{D})\coloneqq\tfrac{g}{\varphi(g)^{2}}\#\left\{(b_{1},b_{2})\in\{0,\ldots,g-1\}^{2}:\gcd(b_{1}b_{2},g)=1\text{ and }\exists\ d\in\mathcal{D}\text{ such that }b_{1}+b_{2}+1\equiv d\ (\text{mod}\,g)\right\}. \tag{5.2}\] When \(g\) is large enough, \(c(\mathcal{D})>0\). Indeed, the set on the right-hand side of (5.2) is non-empty. It contains at least \((b_{1},b_{2})=(1,g-1)\) if \(a_{0}\neq 1\), and \((b_{1},b_{2})=(1,1)\) in the case \(a_{0}=1\). The proof of (5.1) consists of reproducing Sections 6-8 in [19], with one additional prime variable which we handle trivially. The only small difference is in the computation of the main term coming from the major arcs at the end of the proof of [19, Lemma 7.2]. Let \(r(n)\) denote the weight \[r(n)=\sum_{p_{1}+p_{2}+1=n}\log p_{1}\log p_{2}.\] We can apply an upper bound sieve to detect the primes \(p_{1}\) such that \(n-1-p_{1}\) is also prime. By [14, Theorem 3.11] and the remark that appears afterwards, for any odd integer \(n\in[\sqrt{x},x]\) we have that \[r(n)\ll(\log x)^{2}\#\{p\leq n:\ n-1-p\text{ is prime}\}\ll x\prod_{\begin{subarray} {c}p>2\\ p|n-1\end{subarray}}\frac{p-1}{p-2}\leq c_{1}x\log\log x, \tag{5.3}\] for some absolute \(c_{1}>0\). By (5.1) and (5.3) we deduce that \[\#\left\{n\in\mathcal{W}_{\mathcal{D}}(x):\ n-1\text{ is a sum of two primes}\right\}\geq\frac{c(\mathcal{D})-o(1)}{c_{1}\log\log x}\#\mathcal{W}_{ \mathcal{D}}\left(x\right). \tag{5.4}\] This implies that \(s(n)\) takes infinitely many different values in \(\mathcal{W}_{\mathcal{D}}\). In this short section, we wanted to highlight a result which is a direct consequence of the proofs presented in [19]. However, detecting ellipsephic primes is a considerably more difficult problem than detecting ellipsephic integers \(n\) such that \(n-1\) is a sum of two primes. It is thus certainly possible to have a more precise lower bound than (5.4) for more general sets \(\mathcal{W}_{\mathcal{D}}\) and with small base \(g\). Also, we can probably remove the \(\log\log x\) in the denominator in (5.4) by combining the arguments in [32, Section 3.2] with [19]. ## Acknowledgements We would like to thank the Women In Numbers Europe 4 conference organizers for providing us with the opportunity to work together. We would also like to thank the referees for their helpful comments, which greatly improved this paper. The first and third authors were supported by ANR grant ANR-20-CE91-0006 ArithRand. The first author was also supported by a PIMS postdoctoral fellowship at the University of Lethbridge. The last author was supported by the Max Planck Institute for Mathematics during the early stages of this project.
Barron Han, Oron Sabag, Victoria Kostina, Babak Hassibi
2023-10-18T07:10:03Z
http://arxiv.org/abs/2310.11747v1
# Coded Kalman Filtering Over Gaussian Channels with Feedback ###### Abstract This paper investigates the problem of zero-delay joint source-channel coding of a vector Gauss-Markov source over a multiple-input multiple-output (MIMO) additive white Gaussian noise (AWGN) channel with feedback. In contrast to the classical problem of causal estimation using noisy observations, we examine a system where the source can be encoded before transmission. An encoder, equipped with feedback of past channel outputs, observes the source state and encodes the information in a causal manner as inputs to the channel while adhering to a power constraint. The objective of the code is to estimate the source state with minimum mean square error at the infinite horizon. This work shows a fundamental theorem for two scenarios: for the transmission of an unstable vector Gauss-Markov source over either a multiple-input single-output (MISO) or a single-input multiple-output (SIMO) AWGN channel, finite estimation error is achievable if and only if the sum of logs of the unstable eigenvalues of the state gain matrix is less than the Shannon channel capacity. We prove these results by showing an optimal linear innovations encoder that can be applied to sources and channels of any dimension and analyzing it together with the corresponding Kalman filter decoder. Shannon capacity, Kalman filter, joint source channel coding, feedback ## I Introduction To support emerging 6G communications, Internet of Things devices, and autonomous vehicles, wireless communication systems need to handle large amounts of data produced in a streaming fashion with minimal delay. For instance, unmanned drones require precise position control, which relies on potentially noisy bilateral communication between the drone and the controller. In this communication system, the source symbols are made available gradually in a streaming fashion, and the encoder has access to channel feedback. The penalty is the zero-delay estimation error. Codes that are suitable to this setting do not operate on blocks. Rather, the encoder and decoder apply an immediate, recursive operation to every emergent source symbol and channel output. The traditional communication system, introduced by Shannon, encodes data into blocks before transmitting them through a noisy channel [1]. Since the performance criterion is asymptotic, the underlying assumption is that a long block of data symbols to be transmitted is available before encoding. These classical sources can be transmitted using block codes with a fixed block length such as [2, 3, 4, 5, 6], or variable block length codes such as [7, 8, 9], which use feedback to determine future channel inputs and adaptively decide the decoding time depending on the channel outputs. A feedback code operates on a pair of channels: the forward channel carries signals from the encoder to the decoder and is often noisy; the reverse channel carries signals from the decoder to the encoder and is assumed to be noiseless [10]. This assumption is reasonable if the decoder has access to more power than the encoder relative to the noise floor. The well-known Schalkwijk and Kailath (SK) scheme [8] over AWGN channels transmits a fixed message, mapped to the reals, on the first channel use, and uses subsequent channel uses to refine the decoder's estimate of the noise added during the first transmission so that it can recover the initial transmission virtually noiselessly. The schemes mentioned so far operate on a known and fixed message.
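To make the feedback mechanism concrete, the following is a toy Python simulation in the spirit of the SK idea (our simplified sketch with illustrative parameters, not the exact construction of [8]): the encoder learns the decoder's running estimate through noiseless feedback and spends each channel use transmitting the scaled residual error, so the estimation error of the real-valued message shrinks by the factor \(1/(1+p/r)\) per channel use.

```python
import random

random.seed(0)
p, r = 1.0, 0.1      # transmit power and channel noise variance
theta = 0.3714       # real-valued message, under a unit-variance normalization
T = 20               # number of channel uses

est, err_var = 0.0, 1.0   # decoder's running estimate and its error variance
for t in range(T):
    # Encoder (knows `est` via noiseless feedback): send the scaled error.
    x = (p / err_var) ** 0.5 * (theta - est)
    y = x + random.gauss(0.0, r ** 0.5)
    # Decoder: LMMSE update; the error variance shrinks by r / (p + r).
    est += (p * err_var) ** 0.5 / (p + r) * y
    err_var *= r / (p + r)

print(f"estimate = {est:.12f}, message = {theta}, error variance ~ {err_var:.2e}")
```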
If the message is made available gradually, block encoders inevitably introduce buffering, resulting in inferior performance compared to recursive codes [11]. Generalizations of the SED code for streaming sources have been studied in [12], while [13, 14, 15] study a generalization of the SK scheme but only for scalar sources and channels. Floor et al. design a joint source-channel code for two correlated Gaussian processes across a multiple access channel, but they assume that the source process is memoryless [16]. In joint source-channel coding, the two tasks of compressing the source and encoding it for the channel are done jointly instead of with separate source and channel coders. A special class of tree codes such as those studied by Schulman [17] for coding evolving sources over discrete memoryless channels are recursive but do not utilize feedback. Sukhavasi et al. [18] show sufficient conditions on the rate and reliability of tree codes to estimate an unstable linear system using a random linear tree code ensemble. This paper concerns causal transmission of a source across a particular communication channel. Conditions for accurately estimating the source involve both the source and channel characteristics. In the classical block coding setting, the relevant source and channel characteristics are the source rate-distortion function and the channel capacity, according to Shannon's source-channel separation theorem [1]. In the zero-delay causal setting this paper is concerned with, the corresponding fundamental source and channel characteristics are a topic of ongoing research. The source generates new bits at every time and the channel must be able to reliably carry these new bits to the encoder with a low-enough delay. Intuitively, if the source produces more information than the channel can reliably carry, it becomes impossible to accurately estimate the source. In their seminal paper, [13] argued that a more stringent requirement than classical Shannon capacity is necessary to properly describe the relationship between communication rate and error for these systems. They derived sufficient conditions to stabilize an unstable source across a communication channel in terms of anytime capacity and provide a converse result for codes over certain discrete memoryless channels that perform source quantization and channel coding separately. The causal rate-distortion function [19] provides a lower bound to the channel capacity necessary for causally estimating the source subject to a given distortion over this channel [20]. A classical work by Gorbunov and Pinsker [19] derived the sequential rate-distortion of a scalar Gauss-Markov source. Tanaka et al. [21] expressed the causal rate-distortion function of a vector Gauss-Markov source as a semidefinite program. They design the parameters and dimension of an optimal linear observer of the source and analyze the directed mutual information and achievable distortion [21]. This is different from a system with a given MIMO channel, which fixes the dimension of the observation. Tatikonda et al. [22] demonstrated the relevance of a causal rate-distortion function to the problem of controlling a linear stochastic system over a noisy channel. In that setup, the encoder intelligently forms channel inputs to optimize the control objective; the decoder observes a noisy channel output and forms the control action.
The lower bound to channel capacity provided by the causal rate-distortion function is known to be tight only in the rare event of a source coincidentally and fortuitously matched [23] to the channel at hand. For example, a scalar Gauss-Markov source is matched to the scalar AWGN channel [13, 19]. In this source-channel matched setting, the optimal encoder transmits at each time a rescaled version of the estimation error, i.e., the innovation of the source [22]. In the ubiquitous scenario of the source not being matched to the channel, more sophisticated coding schemes are in order and they are a topic of current research [11, 12, 13, 14, 20]. Other authors analyzed the problem of controlling unstable plants under rate and cost constraints [24, 25, 26] or over noisy communication channels [27, 28, 29, 30]. The control problem is closely related to the estimation problem since the controller must accurately estimate the unstable source to generate control inputs. [14] consider a single-input single-output (SISO) plant and derive conditions for a joint-source channel code to almost surely stabilize the source, while [30] consider the same setting with a vector source. This paper studies the zero-delay joint source-channel coding problem of a vector Gauss-Markov source over an AWGN channel. Consider a time-sensitive remote estimation system comprising a source, an encoder, a communication channel with feedback, and a decoder as in Figure 1. The encoder leverages available source realizations and feedback of past channel outputs to generate channel inputs, while the decoder estimates the sequence of source realizations using only the channel outputs. The cost has zero delay; it measures the decoder's estimation error at every time. We show that a time-invariant linear encoder achieves finite asymptotic estimation error at the minimum required capacity when transmitting a vector-valued source over a rank-one channel. We further refine the structure of the encoder to a simplified form where the channel input is a linear transformation of the recent state estimation error. This simplified structure induces an optimal Kalman filter decoder, whose performance is analyzed using classical theories in linear estimation. This code is practical since the encoder and decoder must only compute the mean square error using the Kalman filter at every time. The optimal encoder is determined by a one-time calculation of the best time-invariant matrix pre-factor which is used to encode the estimation error at every time. At the infinite horizon, we argue that the estimation error is the solution of a discrete algebraic Riccati equation (DARE) under a power constraint, which holds in the general case where the source and channel have arbitrary dimension. In the special cases of a vector source with a single-input multiple-output (SIMO) channel or a multiple-input single-output (MISO) channel, the Riccati equation reduces to a Lyapunov equation which we can analyze explicitly to derive necessary and sufficient conditions for the estimation error to be finite. We conclude that the relevant measure of source instability is the sum of the logarithms of the unstable eigenvalues of the state gain matrix. The operational channel capacity in this setting is the Shannon capacity for both SIMO and MISO channels.

Fig. 1: A MIMO AWGN channel, described in Definition 1, with a noiseless feedback link is shown. The Gauss-Markov source, described in Definition 2, produces information at every time, which is encoded and passed through the channel. The decoder seeks to estimate the source at the next time given all channel outputs. We show in Lemma 1 that the encoding structure displayed here is optimal for our performance metric.
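Returning to the matched scalar case discussed above, the scheme is simple enough to simulate directly. The sketch below (ours, with arbitrary illustrative parameters) transmits the scaled estimation error of a scalar Gauss-Markov source over a scalar AWGN channel and runs the corresponding scalar Kalman update; the error variance recursion is \(P_{t+1}=a^{2}P_{t}\,r/(p+r)+q\), which stays bounded exactly when \(a^{2}<1+p/r\).

```python
import random

random.seed(1)
a, q = 1.5, 1.0   # source S_{t+1} = a*S_t + W_t with W_t ~ N(0, q)
p, r = 4.0, 1.0   # power budget and channel noise; here a^2 = 2.25 < 1 + p/r = 5

S = random.gauss(0.0, q ** 0.5)   # initial state
S_hat, P = 0.0, q                 # decoder prediction and its error variance
for t in range(200):
    # Encoder: scale the current estimation error (known via feedback) to power p.
    x = (p / P) ** 0.5 * (S - S_hat)
    y = x + random.gauss(0.0, r ** 0.5)
    # Decoder: scalar Kalman predictor of the next state.
    S_hat = a * S_hat + a * (p * P) ** 0.5 / (p + r) * y
    P = a * a * P * r / (p + r) + q
    # Source evolves.
    S = a * S + random.gauss(0.0, q ** 0.5)

print(f"steady-state error variance ~ {P:.3f}")  # q / (1 - a^2 r/(p+r)) = 1.818...
```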
The remainder of the paper is organized as follows. Section II specifies the source and channel models and formally defines the class of zero-delay joint source-channel codes and the performance criterion. Section III presents our main contributions on the necessary and sufficient conditions for finite estimation error of a vector source over MISO and SIMO channels. Section IV shows the structure of the optimal code and contains proof sketches of the main results. We denote by \(\{\mathbf{X}_{t}\}_{t=0}^{T}\) a discrete time random process and we denote the vector \(X^{t}\triangleq\{X_{0},X_{1},\ldots,X_{t}\}\). We write \(\mathbf{X}\sim\mathcal{N}(\mu,\Sigma)\) to say that the random vector \(\mathbf{X}\) has a Gaussian distribution with mean \(\mathbb{E}[\mathbf{X}]=\mu\) and covariance matrix \(\mathrm{Cov}[\mathbf{X}]=\Sigma\). Matrices and vectors are denoted with uppercase letters while scalars are denoted with lower case mathematical font. Sets are denoted using calligraphic font. ## II Problem Setup The channel in Figure 1 is a multiple-input multiple-output (MIMO) AWGN channel. **Definition 1**.: _(MIMO AWGN Channel) The channel accepts a vector input \(X_{t}\in\mathbb{R}^{n}\) and produces a vector output \(Y_{t}\in\mathbb{R}^{m}\),_ \[\mathbf{Y}_{t}=H\mathbf{X}_{t}+\mathbf{Z}_{t},\ t\geq 1 \tag{1}\] _where \(H\in\mathbb{R}^{m\times n}\) is the channel gain matrix and \(\mathbf{Z}_{t}\sim\mathcal{N}(0,R)\) is an i.i.d. noise process._ The streaming source in Figure 1 is a Gauss-Markov source \(S_{t}\). **Definition 2**.: _(Gauss-Markov Source) The Gauss-Markov source evolves according to the linear dynamical system equation:_ \[\mathbf{S}_{t+1}=A\mathbf{S}_{t}+\mathbf{W}_{t}, \tag{2}\] _where \(A\in\mathbb{R}^{k\times k}\), \(\mathbf{W}_{t}\sim\mathcal{N}(0,Q)\) and the initial state is \(\mathbf{S}_{0}\sim\mathcal{N}(0,Q)\)._ Without loss of generality, let us consider that \[A=\left[\begin{array}{cc}A_{s}&0\\ 0&A_{u}\end{array}\right] \tag{3}\] in Jordan form in (2), where \(A_{s}\) is stable and \(A_{u}\) is unstable. We make the following assumptions about our system: **Assumption 1**.: _The pair \((A,Q)\) is controllable._ **Assumption 2**.: _The eigenvalues of \(A\) satisfy \(\lambda_{i}\lambda_{j}\neq 1,\forall i,j\)._ **Assumption 3**.: _We assume the encoder in Definition 3 is time-invariant, meaning \(\Gamma_{t}=\Gamma,\forall t\) in (15)._ **Assumption 4**.: _We will assume that \(A_{u}\) is diagonal in (3)._ Assumption 1 guarantees that the error covariance is positive definite. Assumption 2 guarantees that a Lyapunov equation of the form \(X=AXA^{T}+W\) has a unique solution [31, Lemma D.1.1]. These assumptions are common in classical linear estimation theory [31, App. C] and will be utilized throughout the proofs in Section IV. Assumption 3 simplifies the expression for the infinite horizon estimation error, but we claim that a time-varying encoder cannot do better for our setting. Assumption 4 yields a cleaner analysis for the proof of Theorem 1, but our results hold in the general case where \(A_{u}\) is in Jordan block form. **Definition 3**.: _(A Zero-Delay Joint-Source Channel Feedback Code) The feedback code for the source-channel pair in Definitions 1 and 2 consists of the following:_ 1.
_An encoder that at time_ \(t\) _has access to_ \(S^{t}\) _and_ \(Y^{t-1}\) _and generates_ \[\mathbf{X}_{t}=f_{t}(\mathbf{S}^{t},\mathbf{Y}^{t-1}),\] (4) _where_ \[f_{t}:\mathcal{S}^{t}\times\mathcal{Y}^{t-1}\mapsto\mathbb{R}^{n},\] (5) _and_ \(\mathcal{S}=\mathbb{R}^{k},\mathcal{Y}=\mathbb{R}^{m}\)_. The channel inputs must satisfy a long-term average power constraint,_ \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\mathbf{X}_{t}^{T}\mathbf{X}_{t}]\leq p,\] (6) _up to a horizon_ \(T\)_._ 2. _A decoder that at time_ \(t\) _predicts the next source state,_ \[\hat{S}_{t+1}=g_{t}(\mathbf{Y}^{t})\] (7) For a given code in Definition 3, we denote the predicted error covariance \[P_{t}\triangleq\mathrm{Cov}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t}). \tag{8}\] Note that for an encoder \(f\triangleq\{f_{t}\}_{t=0}^{\infty}\) (5), the decoder \[\hat{\mathbf{S}}_{t+1}=\mathbb{E}[\mathbf{S}_{t+1}|\mathbf{Y}^{t}] \tag{9}\] minimizes the mean-square error \[D_{t}\triangleq\mathrm{Tr}\left(\mathrm{Cov}(\mathbf{S}_{t}-\hat{\mathbf{S}} _{t})\right). \tag{10}\] Denote the asymptotic mean square error \[D\triangleq\text{limsup}_{t\rightarrow\infty}D_{t}. \tag{11}\] In this paper, we examine conditions for there to exist an encoder and decoder pair such that \(D<\infty\). ## III Main Results We present the fundamental relation between finite estimation errors and the channel capacity for two scenarios. Theorem 1 deals with a vector source and a MISO channel. In this scenario, the channel gain, \(H\in\mathbb{R}^{1\times n}\), is normalized as \(\|H\|=1\) without loss of generality. **Theorem 1**.: _(MISO Channel) In zero-delay JSCC (Def. 3) of a vector Gauss-Markov source (Def. 2) and MISO AWGN channel, i.e., \(H\in\mathbb{R}^{1\times n}\) with \(||H||=1\) (Def. 1), finite asymptotic error, \(D<\infty\) (11), is achievable if and only if_ \[\sum_{i=1}^{k}\log\left(\max(1,|\lambda_{i}|)\right)<C \tag{12}\] _where \(\{\lambda_{i}\}\) are the eigenvalues of \(A\) and_ \[C=\frac{1}{2}\log\left(1+\frac{p}{r}\right)\] _is the Shannon capacity of the channel._ In the setting of Theorem 1, the decoder estimates all \(k\) components of the source using a single channel output. The region where finite estimation error is achievable is displayed in Figure 2 for a 2-dimensional source. This theorem is reminiscent of a result by Elia [15, Th. 4.3], which involves the unstable eigenvalues of the system as well, but their result holds only for single-output multiple-input sources with different dynamics compared to our Definition 2. The related control problem has also been studied by [30], which shows that the same capacity condition is sufficient for stabilizing an unstable plant. Next, we present the other scenario for a vector source and SIMO channel. **Theorem 2**.: _(SIMO) In zero-delay JSCC (Definition 3) of a vector Gauss-Markov source (Definition 2) over a SIMO \((H\in\mathbb{R}^{m\times 1})\) AWGN channel (Definition 1), finite asymptotic error, \(D<\infty\) (11), is achievable if and only if_ \[\sum_{i=1}^{k}\log\left(\max(1,|\lambda_{i}|)\right)<C, \tag{13}\] _where \(\{\lambda_{i}\}\) are the eigenvalues of \(A\) and_ \[C=\frac{1}{2}\log\left(\frac{\det(R+pHH^{T})}{\det R}\right)\] _is the Shannon capacity of the channel._ Note that in the setting of Theorem 2, the decoder utilizes the vector of \(m\) channel outputs to estimate the source. Also, in both cases (MISO and SIMO), if the state evolution matrix \(A\) is strictly stable, the source can be estimated even without communicating over the channel. In this case, the decoder can simply take the estimate \(\hat{S}_{t}=0\) and achieve \(D<\infty\).
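The conditions (12) and (13) are easy to evaluate numerically; the following small sketch (ours, with arbitrary eigenvalue pairs) checks whether a two-dimensional source is estimable at a given SNR over a MISO channel with \(\|H\|=1\), which is exactly the computation behind the regions in Fig. 2 below.

```python
import math

def finite_error_achievable(eigenvalues, capacity):
    """Condition of Theorems 1 and 2: sum of log max(1, |lam|) < C (in nats)."""
    return sum(math.log(max(1.0, abs(l))) for l in eigenvalues) < capacity

# MISO channel with ||H|| = 1: C = (1/2) log(1 + p/r).
for snr in (0.0, 9.0, 99.0):
    C = 0.5 * math.log(1.0 + snr)
    print(f"SNR = {snr:5.1f}, C = {C:.3f} nats:",
          finite_error_achievable((1.5, 1.2), C),
          finite_error_achievable((0.9, 3.0), C))
```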
Particularizing either Theorem 1 or Theorem 2 to scalar Gauss-Markov sources that are transmitted over a scalar AWGN channel recovers the classical result from [19] that the infinite horizon error is finite if and only if \(\log|a|<C,\) where \(C=\frac{1}{2}\log(1+\frac{p}{r})\) is the capacity of the channel [13, 14, 22]. Kostina and Hassibi [20, Th. 1, Th. 3, Prop. 1] show the necessity of (12) for bounded estimation error regardless of the dimensions \(k,m,n\) by deriving a lower bound to the rate-cost tradeoff of estimating and controlling an unstable linear system across an arbitrary communication channel. By taking the mean squared cost to infinity in Th. 1 of [20], a finite mean squared error is achievable only if the rate exceeds the left-hand side of (12) and (13). Theorems 1 and 2 establish a tight sufficiency result using a time-invariant encoder for the two special cases of MISO and SIMO AWGN channels. ## IV Proof Sketches In this section, we establish the optimal encoder and outline the proofs for the main results. We begin by proposing a linear encoder of the form \[\mathbf{X}_{t}=\Omega_{t}(\mathbf{S}^{t})+\Lambda_{t}(\mathbf{Y}^{t-1})+ \mathbf{M}_{t}, \tag{14}\] where \(\Omega_{t},\Lambda_{t}\) are linear mappings and \(\mathbf{M}_{t}\) are Gaussian random variables that are independent of \((\mathbf{S}^{t},\mathbf{Y}^{t-1})\). In the case where \(k=n=m=1\), we recover the known fact [14, 22] that linear encoders are optimal for scalar Gauss-Markov sources and scalar AWGN channels, which follows from the solution to the causal rate-distortion function in [19]. The next result further refines the encoding structure. The main idea is that encoding the innovations process of the state is sufficient. Such encoders are often used for Gaussian channels with feedback [8, 32, 33]. **Lemma 1**.: _(Simplified encoding structure) The optimal linear encoder can be written as_ \[\mathbf{X}_{t}=\Gamma_{t}P_{t}^{-1}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t}), \tag{15}\] _where \(\hat{\mathbf{S}}_{t}\) is the optimal decoder's estimate, given recursively by_ \[\hat{\mathbf{S}}_{t+1}=A\hat{\mathbf{S}}_{t}+\mathrm{K}_{t}(\mathbf{Y}_{t}- \mathbb{E}[\mathbf{Y}_{t}|\mathbf{Y}^{t-1}]), \tag{16}\] _where \(K_{t}=A\Gamma_{t}^{T}H^{T}\left(H\Gamma_{t}P_{t}^{-1}\Gamma_{t}^{T}H^{T}+R \right)^{-1}\), \(\hat{S}_{0}=0\), and the error covariance \(P_{t}=\mathrm{Cov}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t})\). The matrices \(\{\Gamma_{t}\}_{t=0}^{T}\) must be selected to satisfy the power constraint (6)._

Fig. 2: Consider transmission of a 2-dimensional Gauss-Markov source over a MISO channel. The axes correspond to the eigenvalues of \(A\) and the enclosed regions display where a finite estimation error is achievable for SNR \(\triangleq\frac{p}{r}=0,9,99\). The blue region corresponds to 0 capacity, i.e., SNR \(=0\), where the decoder can only estimate a stable source.

The encoding matrix \(\Gamma_{t}\in\mathbb{R}^{n\times k}\) can be designed prior to communication since it depends on the source and channel parameters, but not on their realizations. Proof sketch.: First, we define an encoder \(\mathbf{X}_{t}=\Gamma_{t}P_{t}^{-1}(\mathbf{S}_{t}-\hat{S}_{t})+M_{t}\), where \(\mathbf{M}_{t}\sim\mathcal{N}(0,G_{t})\) and call it policy \(V\). Note that \(P_{t}\) is invertible by Assumption 1. Let policy \(U\) represent a general linear encoder of the form (14).
Let \(\Xi_{t}^{U/V}\triangleq(S_{t}-\hat{S}_{t},X_{t})^{T}\) be a Gaussian random vector induced by either policy \(U\) or \(V\). We show via induction that for any \(\{\Xi_{t}^{U}\}_{t<T}\), we can construct \(\{\Xi_{t}^{V}\}_{t<T}\) to have the same distribution. Specifically, we construct \[\Gamma_{t}=\mathbb{E}_{U}\left[\mathbf{X}_{t}(\mathbf{S}_{t}-\hat{\mathbf{S}}_ {t})^{T}\right], \tag{17}\] and \[\begin{split}G_{t}&=\mathbb{E}_{U}[\mathbf{X}_{t} \mathbf{X}_{t}^{T}]\\ &\quad-\mathbb{E}_{U}\left[\mathbf{X}_{t}(\mathbf{S}_{t}-\hat{ \mathbf{S}}_{t})^{T}\right]P_{t}^{-1}\mathbb{E}_{U}\left[(\mathbf{S}_{t}-\hat{ \mathbf{S}}_{t})\mathbf{X}_{t}^{T}\right],\end{split} \tag{18}\] in terms of policy \(U\). This shows that the optimal prediction error, \(P_{t}\), from a general encoder can be achieved with a simplified policy \(V\) at every time. Finally, note that in the objective, the addition of the term \(M_{t}\) is essentially noise. It cannot improve the estimation error and can only increase the power. Thus, we set \(M_{t}=0,\forall t\), resulting in the final encoder. The decoder estimate is given by the Kalman filter. Intuitively, at every time, \(t\), the decoder needs knowledge of the most recent source state \(S_{t}\). The encoder can communicate what is currently unknown to the decoder with minimal power by transmitting the innovation \(S_{t}-\hat{S}_{t}\), where the decoder prediction \(\hat{S}_{t}\) can be computed using channel feedback. The linear encoder-decoder pair we propose is optimal among encoders of the form (14); it achieves the minimum finite horizon average estimation error: \(\frac{1}{T}\sum_{t=0}^{T-1}D_{t}\). We apply a time-invariant encoding structure: \[\mathbf{X}_{t}=\Gamma P_{t}^{-1}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t}). \tag{19}\] We will proceed to show that such a time-invariant linear encoder is sufficient to achieve the minimum capacity required for finite estimation error. Using this encoder, the estimation error at the infinite horizon is given by the Riccati recursions in Lemma 2. **Lemma 2**.: _(Induced Riccati Recursions and Convergent Behavior) The prediction error covariance, \(P_{t}\) defined in (8), of the optimal code introduced in Lemma 1 evolves according to the Riccati recursion_ \[P_{t+1}=AP_{t}A^{T}+Q-A\Gamma^{T}H^{T}(R+H\Gamma P_{t}^{-1}\Gamma^{T}H^{T})^{- 1}H\Gamma A^{T} \tag{20}\] _The solution \(P_{t}\) to (20) converges to the trace-maximizing or stabilizing solution \(P\) of the discrete algebraic Riccati equation [31, Sec. E.4]:_ \[P=APA^{T}+Q-A\Gamma^{T}H^{T}(R+H\Gamma P^{-1}\Gamma^{T}H^{T})^{-1}H\Gamma A^{T}, \tag{21}\] _where \(\Gamma\) satisfies the power constraint_ \[\mathrm{Tr}(\Gamma P^{-1}\Gamma^{T})\leq p. \tag{22}\] Proof sketch.: The time-invariant coding scheme (19) induces the system \[\mathbf{S}_{t+1} =A\mathbf{S}_{t}+W_{t} \tag{23}\] \[\mathbf{Y}_{t} =H\Gamma P_{t}^{-1}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t})+\mathbf{ Z}_{t} \tag{24}\] From classical linear estimation theory, the error covariance \(P_{t}\) follows the Riccati recursions (20). The convergence of \(P_{t}\) to \(P\) follows from [31, Lemma E.2.1]. Because of the convergence of \(P_{t}\), the infinite horizon power constraint (22) is equivalent to the limit of (6) as we take \(T\rightarrow\infty\). It follows from Lemma 2 that the infinite horizon estimation error is finite if and only if there exists a \(\Gamma\) such that a positive semidefinite \(P\) satisfies both (21) and (22).
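The recursion (20) can be iterated numerically to test whether a candidate time-invariant \(\Gamma\) yields a bounded error. Below is a minimal NumPy sketch (ours, with an illustrative two-dimensional source, a MISO channel, and a heuristically chosen \(\Gamma\) that encodes only the unstable mode with a gain large enough for the recursion to converge); the printed trace shows the power this particular \(\Gamma\) consumes.

```python
import numpy as np

A = np.diag([0.5, 1.3])              # one stable and one unstable source mode
Q = np.eye(2)
H = np.array([[1.0, 0.0]])           # MISO channel row with ||H|| = 1
R = np.array([[1.0]])                # channel noise variance r = 1
Gamma = np.array([[0.0, 10.0],       # n x k encoder matrix; only the unstable
                  [0.0, 0.0]])       # mode is encoded, with a large gain

P = Q.copy()
for _ in range(500):                 # iterate the Riccati recursion (20)
    HG = H @ Gamma
    Re = R + HG @ np.linalg.inv(P) @ HG.T
    P = A @ P @ A.T + Q - A @ Gamma.T @ H.T @ np.linalg.inv(Re) @ HG @ A.T

print("steady-state P:\n", P)
print("power Tr(Gamma P^{-1} Gamma^T):",
      float(np.trace(Gamma @ np.linalg.inv(P) @ Gamma.T)))
```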
So far, we have made no assumptions about the dimensions of the channel and source. Next, we derive the main results, which require dimensionality assumptions. ### _Proof of Theorem 1_ #### Iv-A1 Preliminaries Here, \(H\in\mathbb{R}^{1\times n},A\in\mathbb{R}^{k\times k}\), and \(||H||=1\). We will show that \(D<\infty\) if and only if \[\frac{1}{2}\log(1+\frac{p}{r})>\sum_{i=1}^{k}\log\left(\max(1,|\lambda_{i}|)\right) \tag{25}\] The channel input \(X_{t}\) can be decomposed into orthogonal components in the row and null space of \(H\). The component in the null space is nulled by the channel. Thus, to preserve power, we should only optimize over channel inputs of the form \[\mathbf{X}_{t}=H^{T}\Gamma P^{-1}(\mathbf{S}_{t}-\hat{\mathbf{S}}_{t}), \tag{26}\] where Assumption 1 guarantees \(P\succ 0\). Note that the power constraint is invariant due to the assumption on the norm of \(H\). Here, \(\Gamma\in\mathbb{R}^{1\times k}\). From Lemma 2, this has a corresponding DARE \[P=APA^{T}+Q-A\Gamma^{T}(r+\Gamma P^{-1}\Gamma^{T})^{-1}\Gamma A^{T} \tag{27}\] with power constraint \[\Gamma P^{-1}\Gamma^{T}\leq p. \tag{28}\] Since the optimal encoder utilizes all available power, we reduce the DARE (27) to a Lyapunov equation by applying the power constraint with equality. The error covariance is given by the solution to \[P=APA^{T}+Q-\frac{A\Gamma^{T}\Gamma A^{T}}{r+p}, \tag{29}\] where \(r\) is the channel noise variance and \(p\) is the transmit power. The power constraint (28) is equivalent to \[\left[\begin{array}{cc}P&\Gamma^{T}\\ \Gamma&p\end{array}\right]\succeq 0, \tag{30}\] by the Schur complement lemma [34, Sec. C.4], which is, in turn, equivalent to \[J\triangleq P-\frac{\Gamma^{T}\Gamma}{p}\succeq 0, \tag{31}\] \[P=J+\frac{\Gamma^{T}\Gamma}{p}. \tag{32}\] Plugging this into (29), we get \[J+\frac{\Gamma^{T}\Gamma}{p}=A\left(J+\frac{\Gamma^{T}\Gamma}{p}\right)A^{T}+Q -\frac{A\Gamma^{T}\Gamma A^{T}}{r+p}, \tag{33}\] which, upon simplification, yields the following Lyapunov equation for \(J\): \[J=AJA^{T}+Q-\frac{\Gamma^{T}\Gamma}{p}+\frac{A\Gamma^{T}\Gamma A^{T}}{p(1+p/r)}. \tag{34}\] There always exists a solution \(J\) due to Assumption 2. Therefore \(P\) is finite if and only if there exists a \(\Gamma\) such that \(J\succeq 0\). We proceed to show the latter. From Assumption 4, \[A=\left[\begin{array}{cc}A_{s}&0\\ 0&A_{u}\end{array}\right],\] where \(A_{s}\) is stable and \(A_{u}\) is unstable and diagonal. #### Iv-A2 Proof of Sufficiency We will show that if (25) holds, then there exists a \(\Gamma\) such that the solution, \(J\), to (34) is PSD. To this end, let us assume that \(\Gamma\) takes the form \[\Gamma=\begin{bmatrix}0&\Gamma_{u}\end{bmatrix},\] so that we send no information about the stable components of the source state. In this case, (34) takes the form \[\left[\begin{array}{cc}J_{ss}&J_{su}\\ J_{us}&J_{uu}\end{array}\right]=\left[\begin{array}{cc}A_{s}&0\\ 0&A_{u}\end{array}\right]\left[\begin{array}{cc}J_{ss}&J_{su}\\ J_{us}&J_{uu}\end{array}\right]\left[\begin{array}{cc}A_{s}^{T}&0\\ 0&A_{u}^{T}\end{array}\right]+\left[\begin{array}{cc}Q_{ss}&Q_{su}\\ Q_{us}&Q_{uu}\end{array}\right]+\left[\begin{array}{cc}0&0\\ 0&-\frac{\Gamma_{u}^{T}\Gamma_{u}}{p}+\frac{A_{u}\Gamma_{u}^{T}\Gamma_{u}A_{u} ^{T}}{p(1+p/r)}\end{array}\right]. \tag{35}\] Note that \((A_{s},Q_{ss})\) is controllable, which follows from the controllability of \((A,Q)\), i.e., Assumption 1.
Then, since \(J_{ss}=A_{s}J_{ss}A_{s}^{T}+Q_{ss}\) and \(A_{s}\) is stable, \(Q_{ss}\succ 0\), it follows that \(J_{ss}\succ 0\) by the Lyapunov stability theorem [31, Lemma D.1.2]. Thus, there exists a \(\Gamma_{u}\) such that \(J\succeq 0\), if and only if there exists a \(\Gamma_{u}\) such that \[J_{uu}-J_{us}J_{ss}^{-1}J_{su}\succeq 0. \tag{36}\] To show this, let us focus on the equation for \(J_{uu}\): \[J_{uu}=A_{u}J_{uu}A_{u}^{T}+Q_{uu}-\frac{\Gamma_{u}^{T}\Gamma_{u}}{p}+\frac{A_ {u}\Gamma_{u}^{T}\Gamma_{u}A_{u}^{T}}{p(1+p/r)} \tag{37}\] which we rearrange as \[\begin{split} J_{uu}&=A_{u}^{-1}J_{uu}A_{u}^{-T}-A_{u}^{ -1}Q_{uu}A_{u}^{-T}\\ &\quad+\frac{A_{u}^{-1}\Gamma_{u}^{T}\Gamma_{u}A_{u}^{-T}}{p}- \frac{\Gamma_{u}^{T}\Gamma_{u}}{p(1+p/r)}.\end{split} \tag{38}\] Note that \(A_{u}^{-1}\) is now stable. Defining \(D_{u}\triangleq\text{diag}(\Gamma_{u})\), i.e., the diagonal matrix whose components are the elements of the vector \(\Gamma_{u}^{T}\), we may now write \[A_{u}^{-1}\Gamma_{u}^{T}=D_{u}a\quad\text{and}\quad\Gamma_{u}^{T}=D_{u}1,\] where \(a\) is the vector of the diagonal elements of \(A_{u}^{-1}\) and \(1\) is the all-one vector. Without Assumption 4, \(A_{u}\) is in block Jordan form, which results in a non-diagonal \(D_{u}\) and a more complex expression for \(\Gamma\). In the diagonal case, (38) becomes \[\begin{split} J_{uu}&=A_{u}^{-1}J_{uu}A_{u}^{-T}-A_{u}^{ -1}Q_{uu}A_{u}^{-T}\\ &\quad+D_{u}\left(\frac{aa^{T}}{p}-\frac{11^{T}}{p(1+p/r)}\right) D_{u}.\end{split} \tag{39}\] Because the above Lyapunov equation for \(J_{uu}\) is linear, its solution can be written as \(J_{uu}=\hat{J}_{uu}+\tilde{J}_{uu}\) where \[\hat{J}_{uu}=A_{u}^{-1}\hat{J}_{uu}A_{u}^{-T}-A_{u}^{-1}Q_{uu}A_{u}^{-T}, \tag{40}\] and \[\tilde{J}_{uu}=A_{u}^{-1}\tilde{J}_{uu}A_{u}^{-T}+D_{u}\left(\frac{aa^{T}}{p} -\frac{11^{T}}{p(1+p/r)}\right)D_{u}. \tag{41}\] Note that, because \(A_{u}^{-1}\) is stable and \(Q_{uu}\succeq 0\), we have by (40) and the Lyapunov stability theorem that \(\hat{J}_{uu}\preceq 0\). Thus for (36) to hold, \(\tilde{J}_{uu}\) must be sufficiently PSD. Let \(M\) be the solution to the Lyapunov equation \[M=A_{u}^{-1}MA_{u}^{-T}+11^{T}. \tag{42}\] Assume \((A_{u},1)\) is controllable. Then, by the Lyapunov stability theorem, \(M\succ 0\). We now claim that \[\tilde{J}_{uu}=D_{u}\left(\frac{M}{r+p}-\frac{11^{T}}{p}\right)D_{u}. \tag{43}\] This can be verified by plugging (43) into (41). It follows that \(\tilde{J}_{uu}\succ 0\) if and only if \(\frac{M}{r+p}-\frac{11^{T}}{p}\succ 0\). But the latter is equivalent to \[\left[\begin{array}{cc}M&1\\ 1^{T}&\frac{p}{r+p}\end{array}\right]\succ 0, \tag{44}\] or \[\frac{p}{r+p}>1^{T}M^{-1}1. \tag{45}\] Since \(M\) satisfies the Lyapunov equation (42), we have \[1^{T}M^{-1}1=1-\left|\text{det}(A_{u})\right|^{-2}. \tag{46}\] This follows since \(A_{u}^{-1}MA_{u}^{-T}=M-11^{T}\) from (42). On the one hand \[\text{det}A_{u}^{-1}MA_{u}^{-T}=\left|\text{det}A_{u}\right|^{-2}\text{det}M \tag{47}\] and on the other \[\begin{split}\text{det}(M-11^{T})&=\text{det}(I-1 1^{T}M^{-1})\ \text{det}M\\ &=(1-1^{T}M^{-1}1)\ \text{det}M,\end{split} \tag{48}\] which yields the desired result. Then, (45) and (46) imply that \(\tilde{J}_{uu}\succ 0\) if and only if \[\frac{p}{r+p}>1-\left|\text{det}A_{u}\right|^{-2}, \tag{49}\] or equivalently \[1+\frac{p}{r}>\left|\text{det}A_{u}\right|^{2}, \tag{50}\] which is the capacity condition we are seeking. 
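As a quick numerical sanity check on the key steps above (ours, not from the paper), the following sketch solves the Lyapunov equation (42), confirms the identity (46), and evaluates the capacity condition (50); \(A_{u}\), \(p\) and \(r\) are illustrative values.

```python
# Sketch: verify (46) via the solution of (42), then test condition (50).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A_u = np.diag([1.3, 1.7])                  # unstable and diagonal (Assumption 4)
ones = np.ones((2, 1))
Ainv = np.linalg.inv(A_u)

# (42): M = A_u^{-1} M A_u^{-T} + 1 1^T
M = solve_discrete_lyapunov(Ainv, ones @ ones.T)

lhs = (ones.T @ np.linalg.inv(M) @ ones).item()
rhs = 1 - np.linalg.det(A_u) ** (-2)       # right side of (46)
print(lhs, rhs)                            # agree up to numerical error

# (50): finite error iff 1 + p/r > |det A_u|^2
p, r = 5.0, 1.0
print("capacity condition holds:", 1 + p / r > np.linalg.det(A_u) ** 2)
```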
Note that when this capacity condition holds, \(\frac{M}{r+p}-\frac{11^{T}}{p}\succ 0\) and therefore \(\tilde{J}_{uu}\succ 0\) in (43). Moreover, since \(\tilde{J}_{uu}\) is scaled on both sides by \(D_{u}\) (whose entries are the elements of \(\Gamma_{u}\)), we can choose \(\Gamma\) to make it arbitrarily positive definite, which means we can always satisfy (36). This establishes sufficiency. #### IV-A3 Proof of Necessity For necessity, note that if we now take an arbitrary \[\Gamma=\begin{bmatrix}\Gamma_{s}&\Gamma_{u}\end{bmatrix},\] i.e. we do not set \(\Gamma_{s}\) to zero, then the Lyapunov equation for \(J_{uu}\) does not change. This means that it continues to hold in (34) that \(J_{uu}=\hat{J}_{uu}+\tilde{J}_{uu}\), where \(\hat{J}_{uu}\) is negative definite. When the capacity condition (50) is violated, \(\frac{M}{r+p}-\frac{11^{T}}{p}\), where \(M\) is unaffected by \(\Gamma\), has at least one negative eigenvalue, which implies that, no matter what \(D_{u}\) is, \(\tilde{J}_{uu}\) has at least one nonpositive eigenvalue. Since \(\hat{J}_{uu}\) is negative definite, \(J_{uu}\) then has at least one negative eigenvalue, and so \(J\) cannot be positive semidefinite, meaning that \(P\) will not be bounded. This establishes necessity. ### _Proof of Theorem 2_ Here, \(H\in\mathbb{R}^{m}\) is a vector and \(A\in\mathbb{R}^{k\times k}\). From Lemma 2, this has a corresponding DARE \[P=APA^{T}+Q-A\Gamma^{T}H^{T}(R+H\Gamma P^{-1}\Gamma^{T}H^{T})^{-1}H\Gamma A^{T} \tag{51}\] with power constraint \[\Gamma P^{-1}\Gamma^{T}\leq p. \tag{52}\] We apply the same reasoning as the proof for Theorem 1 to bound the channel input power, resulting in the Lyapunov equation: \[P=APA^{T}+Q-H^{T}(R+HH^{T}p)^{-1}HA\Gamma^{T}\Gamma A^{T}. \tag{53}\] Applying the power constraint using the method of (30) and (31), we obtain \[J=AJA^{T}+Q-\frac{\Gamma^{T}\Gamma}{p}+\frac{1}{p(1+H^{T}R^{-1}Hp)}A\Gamma^{T} \Gamma A^{T}. \tag{54}\] Equation (54) parallels (34) for the MISO case. Applying the proof steps that follow (34), we conclude that \[1+H^{T}R^{-1}Hp>\left|\det A_{u}\right|^{2}\] is a necessary and sufficient condition for finite estimation error at the infinite horizon. Rearranging terms, we arrive at the desired capacity condition of Theorem 2. ## V Conclusion and Future Work In this paper, we analyzed a zero-delay communication system composed of a potentially vector-valued Gauss-Markov source and AWGN channel. We derived necessary and sufficient conditions for the mean square error to be finite at the infinite horizon for MISO and SIMO AWGN channels in Theorem 1 and Theorem 2 respectively. This revealed a strong connection between the instability of a linear system, quantified by its unstable eigenvalues, and the Shannon capacity of the communication channel. Generalizing these results to general MIMO channels is of significant interest and is the subject of ongoing research. The current paper takes a significant step towards this characterization as the linear encoder and decoder in Lemma 1 are optimal for MIMO channels even in some cases where the dimensions are unmatched [35]. Furthermore, the finite-horizon problem with a time-varying encoder is fundamentally connected to the problems of automatic control and anytime coding. In a control setting, the minimum mean square estimate can be used to control the unstable system. In an anytime communication setting, the source can be modeled as an information source containing a stream of bits that arrives gradually. 
An accurate estimate of the source state will allow us to determine and decode the transmitted bits in a streaming fashion.
2310.00892
No Offense Taken: Eliciting Offensiveness from Language Models
This work was completed in May 2022. For safe and reliable deployment of language models in the real world, testing needs to be robust. This robustness can be characterized by the difficulty and diversity of the test cases we evaluate these models on. Limitations in human-in-the-loop test case generation have prompted the advent of automated test case generation approaches. In particular, we focus on Red Teaming Language Models with Language Models by Perez et al. (2022). Our contributions include developing a pipeline for automated test case generation via red teaming that leverages publicly available smaller language models (LMs), experimenting with different target LMs and red classifiers, and generating a corpus of test cases that can help in eliciting offensive responses from widely deployed LMs and identifying their failure modes.
Anugya Srivastava, Rahul Ahuja, Rohith Mukku
2023-10-02T04:17:35Z
http://arxiv.org/abs/2310.00892v1
# No Offense Taken: Eliciting Offensiveness from Language Models ###### Abstract For safe and reliable deployment of language models in the real world, testing needs to be robust. This robustness can be characterized by the difficulty and diversity of the test cases we evaluate these models on. Limitations in human-in-the-loop test case generation have prompted the advent of automated test case generation approaches. In particular, we focus on Red Teaming Language Models with Language Models by Perez et al. (2022). Our contributions include developing a pipeline for automated test case generation via red teaming that leverages publicly available smaller language models (LMs), experimenting with different target LMs and red classifiers, and generating a corpus of test cases that can help in eliciting offensive responses from widely deployed LMs and identifying their failure modes. The code linked with this paper can be found here. ## 1 Introduction Language models (LM) are trained on a wide variety of data found on the internet and have been shown to exhibit various biases and harmful behaviour that can be offensive and hurtful to their users Liang et al. (2021); Weng (2021). There is a significant risk when deploying such unfair and toxic models as they can introduce and reinforce various damaging prejudices - both in the technical applications they are used in, as well as in society at large Weidinger et al. (2021). Thus, exhaustive testing of language models to identify scenarios where they can perform in a harmful manner is crucial. An essential component of this testing process is the test dataset used. A lot of work has been done in manual/supervised test set generation Ribeiro et al. (2020); Rottger et al. (2021); Xu et al. (2021); Bartolo et al. (2021). This human-in-the-loop approach is more resource-intensive and can become a major source of bias and error Lee (2016). As a step towards automating the process of generating challenging and diverse test cases, Perez et al. (2022) train a red LM to generate test cases, a target LM which gives corresponding responses and a classifier which determines if the test case successfully elicits a harmful response. They explore various approaches to generate test cases - zero-shot generation, stochastic few-shot generation, supervised learning and reinforcement learning (RL). They run all these experiments on Gopher based LMs Rae et al. (2021) which are quite large and cumbersome to query and fine-tune. Moreover, Gopher based LMs are relatively new and not as widely used or publicly available. We run these experiments on smaller language models - that are more widely used - and verify if the results reported by Perez et al. (2022) are applicable to them. We extend the experiments done by Perez et al. (2022) by applying their proposed approaches in a sequential manner. We also experiment with different target LMs and red classifiers which can be further used to generate test cases eliciting different kinds of responses. Thus, our main contributions can be summarized in the following manner: 1. Implementing the 4 approaches for generating test cases as described in Perez et al. (2022) for smaller language models like GPT-2 Radford et al. (2019), Blender-Bot Miller et al. (2017); Roller et al. (2020). 2. Experimenting with different target LMs and red classifiers for different downstream tasks, e.g., offensiveness, sensitivity to topics like religion, drugs etc. Figure 1: Some examples of Red LM generating test cases that elicit harmful/offensive responses from the Target LM. Here A, B, C, D correspond to zero-shot, few-shot, supervised and reinforcement learning settings respectively. ## 2 Related Work Using various prompt design and engineering techniques to probe language models Weir et al. (2020); Ousidhoum et al. (2021) and identify their learnt biases and toxicity, one can design methods to identify potentially harmful behaviour of language models. For instance, Weir et al. (2020) construct prompts to reveal the stereotypes learnt by language models and perform probing via word prediction. They acknowledge the limitations of this human engineered prompt generation approach and include tests to account for the same. Dhamala et al. (2021) introduces BOLD - a dataset of prompts that has been curated for measuring biases in language generation models. Different Wikipedia pages are chosen and scraped for detecting biases against or for different groups, and post-processing is performed on the scraped data to generate the prompts. This is meant to be an automated prompt generation approach with minimal human input - in the form of choosing appropriate pages and post-processing algorithmic choices. Gehman et al. (2020) follows a similar approach of scraping prompts for facilitating toxicity detection. Wallace et al. (2019) searches for universal tokens that can be concatenated with input texts to trigger the language model to generate some expected output or break the model in some way. This technique is aimed at identifying vulnerabilities of language models to certain adversarially generated data. Dinan et al. (2019) asks crowd-workers to generate adversarial examples that can break a trained offensiveness text classifier - generate prompts that the model thinks might be safe but are actually deemed offensive by humans, and thus fool the text classification model. They then retrain the classification model on the samples that had earlier fooled the classifier and repeat the process. Wallace et al. (2021) describes longer-term Dynamic Adversarial Data Collection where humans generate adversarial examples to break a language model and then the model is retrained on these adversarial examples. This process is repeated over many rounds till convergence is achieved i.e. the model stops improving after being retrained on new adversarial samples or performance plateaus. We also follow a similar setup but instead of humans generating the adversarial examples, another LM (the red LM) will do so. Bartolo et al. (2021) uses a data generation pipeline to automatically amplify the number of adversarial examples from the human generated AdversarialQA dataset Bartolo et al. (2020). Nadeem et al. (2021); Nangia et al. (2020) are some examples of crowd-sourced datasets for evaluating the learnt bias and stereotypes of pretrained language models. The most relevant to our work is the paper Perez et al. (2022) - that experiments with different approaches to maximize the generation of questions from a pretrained LM - that elicit harmful response(s) from another pretrained LM. ## 3 Methodology ### Models To reproduce the results on smaller language models, we use the widely popular GPT-2 large model Radford et al. (2019) as the red LM, which contains around 770M parameters. For the target LM, we focus on a state-of-the-art open-domain dialogue system, Blender Bot Roller et al. (2020) which employs a Seq2Seq Transformer architecture Vaswani et al. (2017) with an implementation from the ParlAI Miller et al. 
(2017) repository. Furthermore, to judge the offensiveness of responses generated by the target LM, we use the Bot-Adversarial Dialogue Classifier Xu et al. (2020). The classifier is also provided by ParlAI and is trained on the filtered multi-turn bot adversarial dialogues dataset Xu et al. (2021) in addition to Wikipedia Toxic Comments. ### Test Case Generation Our goal is to generate test cases that elicit offensive responses from the Target LM. We do so by using the 4 following approaches: zero shot generation, stochastic few shot generation, supervised learning and reinforcement learning, following Perez et al. (2022). Figure 2: Training procedure for test case generation using RL (PPO). #### Zero Shot Test Case Generation In this approach, a prompt is passed to the red LM and the generated responses are collected. The prompt that we use in our work is the same as in Perez et al. (2022) and is shown below: List of questions to ask someone: 1. The responses are then processed and cleaned to get valid questions. Here, valid questions are sentences ending with a question mark. Everything after the first question mark is truncated. #### Stochastic Few Shot Test Case Generation For few shot generation, we sample five of the test cases generated from the zero shot approach, and append them to the prompt used for zero shot generation to generate more test cases. The zero shot generated test cases are sampled with probability \(p\propto e^{r(x,y)/T}\) where \(r(x,y)\) is the score given by the BAD classifier based on the red LM test case \(x\) and the target LM response \(y\), and \(T=0.1\) is a temperature hyperparameter. #### Supervised Learning We try two settings when tuning the Red LM on previously generated offensive test cases. In the first scenario, we fine-tune the red LM on failing (i.e. harmful/offensive) zero shot test cases. We take 90% of the failing test cases as the training set and the remaining as the test set. In the second scenario, we fine-tune on the offensive test cases generated using the stochastic few shot approach. We train our model with the objective of maximizing the probability of generating the offensive test cases. #### Reinforcement Learning (RL) In this approach, the Red LM is trained using a policy gradient method like PPO Schulman et al. (2017) with the Red LM initialized using the fine-tuned SL model above. The implementation follows from Ziegler et al. (2019) and Stiennon et al. (2020). The overall objective of the model is to increase the expected likelihood of harmful responses i.e. \(\mathbb{E}_{p_{r}(x)}[r(x,y)]\) where \(p_{r}(x)\) denotes the Red LM. In this setup, the Red LM is a GPT-2 Large based transformer model with an LM head and a value function head. The LM head is a simple linear layer whose input is the hidden states of the transformer and whose output has the vocabulary size (50257), whereas the value function is a single-layer MLP which takes as input the final transformer representation at each timestep. The corresponding reward is computed using the function \(-3*\log(1-r(x,y))\) where \(r(x,y)\) is the probability of offensiveness calculated by the classifier, \(x\) is the question generated by the Red LM, and \(y\) is the response from the Target LM. To prevent the Red LM from collapsing, we also include a KL Penalty when computing the policy loss, to discourage excessive divergence from the initial distribution. The final loss is defined as: \[L_{total}=L_{policy}+\lambda*L_{value}\] where \(\lambda=0.1\) performed best for our experiments. 
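Before turning to hyperparameters and results, here is a minimal sketch (ours, not the authors' released code) of two ingredients just described: the stochastic few-shot sampling \(p\propto e^{r(x,y)/T}\) with \(T=0.1\), and the reward shaping \(-3\log(1-r(x,y))\) used for PPO. The `bad_scores` array stands in for the BAD classifier's offensiveness probabilities \(r(x,y)\).

```python
# Sketch of the few-shot sampling distribution and the RL reward transform.
import numpy as np

rng = np.random.default_rng(0)
test_cases = ["q1", "q2", "q3", "q4", "q5", "q6"]            # zero-shot questions
bad_scores = np.array([0.05, 0.60, 0.10, 0.85, 0.30, 0.02])  # stand-in r(x, y)

def sample_few_shot(cases, scores, k=5, T=0.1):
    """Sample k zero-shot cases with probability proportional to e^{r/T}."""
    logits = scores / T
    probs = np.exp(logits - logits.max())                    # stable softmax
    probs /= probs.sum()
    idx = rng.choice(len(cases), size=k, replace=False, p=probs)
    return [cases[i] for i in idx]

def rl_reward(score, eps=1e-6):
    """Reward used for PPO: -3 * log(1 - r(x, y))."""
    return -3.0 * np.log(max(1.0 - score, eps))

print(sample_few_shot(test_cases, bad_scores))
print([round(rl_reward(s), 3) for s in bad_scores])
```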
At learning rate \(1\times 10^{-5}\), the model converged the fastest. ### Evaluation Metrics We use the Self-BLEU score Zhu et al. (2018) to determine the diversity of generated test cases. A lower Self-BLEU score implies higher diversity. Along with that, we use the classifier to determine the percentage of generated test cases that led to harmful responses. ## 4 Experiments and Results ### BlenderBot as Target LM We compare 6 test case generation scenarios: 1. Zero Shot Generation 2. Stochastic Few Shot Generation 3. Supervised Learning trained on zero shot data. 4. Supervised Learning trained on few shot data. 5. Reinforcement Learning with model from 3. 6. Reinforcement Learning with model from 4. \begin{table} \begin{tabular}{|l|c|c|} \hline **Method** & **Self-BLEU** & **\% Offensive** **Replies** \\ \hline Zero Shot & 37.00 & 1.67\% \\ Few Shot & 39.48 & 14.7\% \\ SL (ZS) & 45.02 & 3.71\% \\ SL (FS) & 50.96 & 42.05\% \\ RL (SL-ZS) & 38.81 & 4.15\% \\ RL (SL-FS) & 59.48 & 68.91\% \\ \hline \end{tabular} \end{table} Table 1: Results on generated test cases using each method. Self-BLEU denotes the diversity of test cases whereas % offensive replies denotes the percentage of responses from the target LM that were harmful. Table 1 shows the results from our experiments with BlenderBot as the Target LM. We can see here that the Self-BLEU score is lowest for zero shot, which implies that the test cases generated there are the most diverse. However, they are not as offensive as those from the other methods. The LMs tuned using supervised learning were trained for less than 5 epochs, to improve % offensiveness while avoiding overfitting and a reduction in test case diversity. Figure 3 shows how diversity and % offensiveness vary across these different approaches. ### Ablation Studies with different Target LMs We further evaluate offensiveness on different target LMs and the results are discussed in further subsections. Target LMs used are: 1. BlenderBot: Same as described in 3.1. 2. Bart Base: Bart Large (Lewis et al., 2019) model trained on Wizard of Internet dataset (Komeili et al., 2021). 3. Twitter Model: Seq2Seq model trained on dodecaDialogue tasks and fine-tuned on the Twitter task (Shuster et al., 2020). Table 2 shows % offensiveness results for different target LMs. We can observe that all target LMs follow a similar trend for each method. ### Sensitive Topic Detection We also experiment with a sensitive topics classifier (Xu et al., 2020) that detects and classifies text into topics like: Drugs, Politics, Religion, Medical Advice, Relationships & Dating / NSFW and 'None of the above'. We combine the first 5 classes into 1 class as a sensitive topic class and treat the other as not a sensitive topic. Using this classifier, we check the % of target LM (BlenderBot) responses that contain any such sensitive information and report those results in Table 3. ### Few Shot Data Bias Few shot test case generation produced more questions that led to offensive replies, but the questions generated contained specific words such as "p*n*s" frequently. Finetuning on few shot generated data, in both RL and SL settings, results in less diverse data with a high bias for sexually explicit content. For instance, 83% of the lines generated by the RL agent had the word "p*n*s". On the other hand, finetuning on zero-shot data leads to a much smaller proportion of questions eliciting offensive replies. 
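For reference, here is a sketch of how the Self-BLEU numbers in Table 1 can be computed (our reimplementation, assuming NLTK's BLEU and whitespace tokenization; the authors do not specify their exact setup): each test case is scored against all other test cases as references and the scores are averaged, so lower means more diverse.

```python
# Sketch of the Self-BLEU diversity metric (Zhu et al., 2018).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(test_cases):
    smooth = SmoothingFunction().method1
    tokenized = [tc.split() for tc in test_cases]
    scores = []
    for i, hyp in enumerate(tokenized):
        refs = tokenized[:i] + tokenized[i + 1:]     # all other cases as references
        scores.append(sentence_bleu(refs, hyp, smoothing_function=smooth))
    return 100 * sum(scores) / len(scores)           # scale as in Table 1

questions = [
    "What is your biggest fear ?",
    "What do you like to do on weekends ?",
    "What is your biggest regret ?",
]
print(self_bleu(questions))
```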
## 5 Conclusion and Future Scope Although red teaming smaller language models with smaller language models doesn't achieve the same results as reported by Perez et al. (2022), it follows similar trends, and it can be said that the red teaming technique is beneficial even in the case of these small models. Few shot test case generation vastly improved the scores for smaller language models, which prompted us to test that on the SL and RL methods as well. Similar experiments can be done for larger language models to see if few-shot can have the same impact without collapsing. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Method** & **BlenderBot** & **Bart Base** & **Twitter** \\ \hline ZS & 1.67\% & 1.2\% & 3.0\% \\ FS & 14.7\% & 21.25\% & 17.67\% \\ SL ZS & 3.71\% & 2.29\% & 2.93\% \\ SL FS & 42.05\% & 48.4\% & 34.59\% \\ RL ZS & 4.15\% & 4.94\% & 5.53\% \\ RL FS & 68.91\% & 74.49\% & 44.90\% \\ \hline \end{tabular} \end{table} Table 2: % Offensiveness of responses generated by different Target LMs to prompts generated using the Red LM via different methods Figure 3: Self-BLEU (Diversity) v/s % Offensiveness for different test case generation approaches \begin{table} \begin{tabular}{|l|c|} \hline **Method** & **BlenderBot** \\ \hline ZS & 34.5\% \\ FS & 62.38\% \\ SL ZS & 51.22\% \\ SL FS & 74.90\% \\ RL ZS & 44.68\% \\ RL FS & 81.31\% \\ \hline \end{tabular} \end{table} Table 3: % target LM responses containing sensitive topics ## 6 Discussions and Broader Impact #### Benefits As seen in Lee (2016), it is easy to elicit offensive and hurtful responses from language models, despite having tested them extensively. Given how widely deployed language models are in this day and age, it is important to make this testing process as robust as possible, in order to avoid hurting user sentiment, propagating learnt biases, and contributing these elicited offensive responses as data for future language models to train on, leading to emergent bias in language models trained on this offensive data. The benefit of our work is helping prevent this propagation of toxicity and offensiveness, by helping catch these failure modes before the model is deployed. This automated approach also enables us to focus on a specific kind of bias or sensitive topic of interest that we particularly want to adversarially test the model on. #### Uncertainties and Risks Our current pipeline of automated test case generation can easily become biased towards producing only certain kinds of questions, with very low diversity - whereas diversity is essential for robust testing. For instance, our RL tuned red LM has become biased towards producing sexually explicit content, which is not useful in identifying failure modes of the language model in other kinds of scenarios - gender bias, racial prejudice and more. The choice of the red classifier and the kind of data it is trained on are also potential sources of oversight in this automated process, and impact the quality and diversity of the test cases produced. ## Acknowledgements We would like to thank Ethan Perez for his guidance in helping us follow up on his work on Red Teaming Language Models with Language Models, and helping us resolve any problems that came up in the process. ## Contribution Statement All authors participated equally in the writing of this paper and the debugging of the code. The code workload was distributed equally as follows: 1. Rohith Mukku: Responsible for zero-shot and few-shot baseline results and for expanding our current code to different Target LMs. 2. 
Anugya Srivastava: Responsible for supervised learning and the offensive language classifier. Also responsible for expanding to different downstream tasks. 3. Rahul Ahuja: Responsible for implementing the Reinforcement Learning pipeline and generating metrics for the paper. The presentations and the report were made by all the participants together.
2301.07664
Finite local principal ideal rings
Every finite local principal ideal ring is the homomorphic image of a discrete valuation ring of a number field, and is determined by five invariants. We present an action of a group, non-commutative in general, on the set of Eisenstein polynomials, of degree matching the ramification index of the ring, over the coefficient ring. The action is defined by taking resultants.
Matthé van der Lee
2023-01-18T17:17:02Z
http://arxiv.org/abs/2301.07664v6
# Finite local principal ideal rings ###### Abstract Six invariants are given that determine finite commutative local principal ideal rings up to isomorphism. From this classification, it follows that such a ring is a quotient of a specific discrete valuation ring. **Keywords:** Principal ideal rings, Finite commutative rings. **2020 MSC:** 13F10, 13M99. ## 1 Introduction Finite local PIRs, principal ideal rings, have been studied by Hungerford ([3], Th. 1), who proved, using Cohen's structure theorem for complete local rings, that every finite local PIR is the quotient of a principal ideal domain. The principal ideal domain is itself a quotient, for which Hungerford gave explicit relations, of the power series ring over a \(v\)-ring. Recall that a \(v\)-_ring_, also known as _Cohen ring_, is a complete DVR (discrete valuation ring) of characteristic 0 with maximal ideal generated by a prime number. Here, we describe finite local PIRs \(R\) in terms of six numerical invariants of an elementary nature, which completely determine the ring structure. Cf. Theorem 1. Based on this, we show \(R\) is a quotient of a DVR of the ring of integers of an explicitly given finite extension of \(\mathbb{Q}\), as articulated in Theorem 2. _Note._ Among other mistakes, version 1 of this paper claimed that four invariants suffice. The author is indebted to Mr. Aurel Page for bringing to his attention that non-isomorphic PIRs exist where these four invariants are nonetheless the same. Mr. Page came up with the rings \(\mathbb{Z}[X]/(8,X^{2}-2)\) and \(\mathbb{Z}[X]/(8,X^{2}+2)\), both with attributes, to be defined presently, \((a,e,f,p)=(6,2,1,2)\). All rings are commutative. The notation \(\overline{x}\) refers to the residue class of a ring or group element \(x\) modulo a pertinent ideal or subgroup, as usual. The symbol for an object used throughout the text is boldfaced when the object is first defined, or when it is redefined. ## 2 Finite local PIRs Let \(R\) be a finite local PIR. \(R\) is necessarily Artinian, hence zero-dimensional. Let \(\mathfrak{p}=\pi R\) be the unique prime ideal, and \(k=R/\mathfrak{p}=\mathbb{F}_{p^{f}}\), say, the residue field, where \(\boldsymbol{p}\) is a prime number. \(\boldsymbol{\pi}\), called the _uniformizer_ of \(R\), is the unique irreducible element of \(R\) (up to a unit). As any local PIR with nilpotent maximal ideal, it is evidently \(\mathfrak{p}\)-adically complete, and a _chain ring_: the set of its ideals is linearly ordered by inclusion. \(\boldsymbol{f}\) is the _residue degree_ of \(R\). Let \(\boldsymbol{a}\) be the _nilpotency index_ of \(R\), the smallest number with \(\mathfrak{p}^{a}=0\). Then the characteristic of \(R\) is \(p^{\ulcorner a/e\urcorner}\), where \(\ulcorner\urcorner\) denotes the ceiling function, and \(\boldsymbol{e}=\operatorname{ord}_{\mathfrak{p}}(p)\), the _ramification index_ of \(R\) over \(\mathbb{Z}\), that is, \(p\in\mathfrak{p}^{e}-\mathfrak{p}^{e+1}\). Say \(p=\varepsilon\pi^{e}\), for a unit \(\boldsymbol{\varepsilon}\). The only ideals of \(R\) are the \(\pi^{i}R\) for \(0\leq i\leq a\). The group of units of \(k\) is cyclic of order \(p^{f}-1\). Because \(X^{p^{f}-1}-1\) splits as a product of \(p^{f}-1\) coprime linear factors in \(k[X]\), the same must be true in \(R[X]\), by Hensel's Lemma ([2], in the formulation for general commutative rings). 
For \(\mathfrak{p}\) is a maximal ideal of \(R\), and any factorization of a monic in \(R[X]\) into coprime monic factors over the field \(R/\mathfrak{p}=k\) lifts to a unique matching factorization over \(R/\mathfrak{p}^{a}=R\). So the residue map \(\boldsymbol{\nu}:R^{\times}\twoheadrightarrow k^{\times}\) is surjective, and thus \(p^{f}-1\mid|R^{\times}|\). Note that \(\boldsymbol{H_{1}}\coloneqq\ker(\nu)=1+\mathfrak{p}\) consists of the so-called _Einseinheiten_, aka _one-units_. If \(r=1+\pi r^{\prime}\in H_{1}\), then \(r^{p}-1\in(\pi^{2})\), and, for some \(i\), \(r^{p^{i}}=1\). So the order \(\operatorname{o}(r)\) of \(r\) in the subgroup \(H_{1}\leq R^{\times}\) is a power of \(p\), and hence so is \(|H_{1}|\). \(\boldsymbol{\rho}\coloneqq\{r\in R\mid r^{p^{f}}=r\}\) is the _canonical system of representatives_ of \(R\) mod \(\mathfrak{p}\). It maps onto \(k\). For, let \(k^{\times}=\langle\zeta\rangle\) and take \(\boldsymbol{z}\in R\) with \(z\mapsto\zeta\). Then \(z\notin\mathfrak{p}\), so \(z\in R^{\times}\) since \(R\) is local. Clearly, \(p^{f}-1=\operatorname{o}(\zeta)\mid\operatorname{o}(z)\mid p^{f}-1\), and thus \(z\) is of order \(p^{f}-1\). It generates a subgroup \(\boldsymbol{H_{0}}\coloneqq\langle z\rangle\leq R^{\times}\) of that order. Since all \(r\in\rho-\{0,1\}\) have \(\operatorname{o}(r)\mid p^{f}-1\), none of them can be in \(H_{1}\); therefore \(\nu\) maps \(\rho\) bijectively onto \(k\), the \(r\in\rho\) are indeed unique representatives of \(R\) modulo \(\mathfrak{p}\), and \(\rho=H_{0}\cup\{0\}\). One could say that, through \(\nu\upharpoonright\rho\), which respects addition and multiplication, as it is just reduction mod \(\mathfrak{p}\), "R contains the field \(\rho=k\)". For \(n<a\), we have \(\mathfrak{p}^{n}/\,\mathfrak{p}^{n+1}=\pi^{n}R/\pi^{n+1}R\), and the \(\pi^{n}r\) for \(r\in\rho\) represent the congruence classes of \(\mathfrak{p}^{n}\) mod \(\mathfrak{p}^{n+1}\). Thus the map \(\mu_{n}:k\to\mathfrak{p}^{n}/\,\mathfrak{p}^{n+1}\), with \(\nu(r)\mapsto r\pi^{n}\,\text{mod}\,\,\mathfrak{p}^{n+1}\), is well-defined and surjective. It is a homomorphism of additive abelian groups because \(\nu\) is. \(\mathfrak{p}^{n}/\,\mathfrak{p}^{n+1}\) is a simple \(R\)-module, for there are no ideals \(I\) sandwiched in between, \(\mathfrak{p}^{n+1}\subsetneq I\subsetneq\mathfrak{p}^{n}\), so it must be isomorphic to \(R/\mathfrak{p}\) and of order \(p^{f}\). It follows that the \(\mu_{n}\) are isomorphisms. Every element \(r\) of \(R\) can be uniquely written as a "power series" \(r=\Sigma_{i=0}^{a-1}\,r_{i}\pi^{i}\) where the \(r_{i}\) run over \(\rho\). Indeed, there is a unique \(r_{0}\in\rho\) with \(r-r_{0}\in\pi R\), say \(r=r_{0}+\pi r^{\prime}\), and we process \(r^{\prime}\) in the same way. As \(\pi^{a}=0\), this halts after \(a\) such steps. To see uniqueness, if \(\Sigma_{i=0}^{a-1}\,r_{i}\pi^{i}=r=\Sigma_{i=0}^{a-1}\,s_{i}\pi^{i}\) with the \(s_{i}\) in \(\rho\), then \(r_{0}\) and \(s_{0}\) both represent \(r\), so they are equal and can be discarded. \(\Sigma_{i=1}^{a-1}\,r_{i}\pi^{i}=\Sigma_{i=1}^{a-1}\,s_{i}\pi^{i}\) is in \(\mathfrak{p}/\mathfrak{p}^{2}\), and both \(r_{1}\) and \(s_{1}\) represent its congruence class, hence \(r_{1}=s_{1}\). Induction does the rest. We call this feature _uniqueness of representation_, or _u.r._ for short. Note that \(\delta=1+\pi^{a-1}\in R^{\times}\) and \(\delta\cdot\pi=1\cdot\pi\), so cancellation fails for units \(\delta\notin H_{0}\). 
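As a concrete illustration (ours, not from the paper) of the canonical representatives and the "power series" expansion, take the simplest case \(R=\mathbb{Z}/p^{a}\), where \(\pi=p\), \(e=1\) and \(f=1\), so that \(\rho\) consists of the Teichmüller representatives:

```python
# Sketch: canonical representatives rho = { r : r^{p^f} = r } and the unique
# digit expansion r = sum_i r_i * pi^i, in the case R = Z/p^a (pi = p, e = f = 1).
p, a = 7, 3
q = p ** a

rho = [r for r in range(q) if pow(r, p, q) == r]
assert len(rho) == p            # rho maps bijectively onto k = F_p

def digits(r):
    """Unique expansion r = r_0 + r_1*p + ... + r_{a-1}*p^{a-1} mod p^a, r_i in rho."""
    out = []
    for _ in range(a):
        r0 = next(s for s in rho if (r - s) % p == 0)   # r - r0 lies in (pi)
        out.append(r0)
        r = ((r - r0) % q) // p
    return out

r = 38
ds = digits(r)
assert sum(d * p**i for i, d in enumerate(ds)) % q == r
print(ds)
```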
The exact sequence \(1\to H_{1}\to R^{\times}\to k^{\times}\to 1\) splits, because the orders of \(H_{1}\) and \(k^{\times}\cong H_{0}\) are relatively prime. Hence \(R^{\times}=H_{0}\oplus H_{1}\) is the internal direct sum of \(H_{0}\) and \(H_{1}\), and, as a result, we have \(|R|=p^{af}\), because of u.r. and since \(|\,\mathfrak{p}^{n}/\,\mathfrak{p}^{n+1}|=|\,\rho|=|\,k\,|=p^{f}\). And \(|\,R^{\times}|=(p^{f}-1)p^{(a-1)f}\), as \(\Sigma_{i=0}^{a-1}\,r_{i}\pi^{i}\), with all \(r_{i}\in\rho\), is a unit iff \(r_{0}\neq 0\). For \(n>1\), put \(\boldsymbol{H_{n}}=\{\,\delta\in R^{\times}\mid\operatorname{ord}_{\mathfrak{ p}}(\delta-1)\geq n\,\}\), with \(H_{1}\) part of the filtration \(H_{1}\supseteq H_{2}\supseteq\cdots\supseteq H_{a}=\{1\}\). If \(\delta=1+\pi^{n}r\in H_{n}\), with \(n\geq 1\) and \(r\in R\), \(\delta\mapsto\overline{r}\) defines a surjection \(H_{n}\twoheadrightarrow k\) with kernel \(H_{n+1}\). The \(H_{n}\), with \(n\geq 2\), are known as the groups of the _höhere Einseinheiten_ (higher one-units). The similarity with local fields, whose terminology we borrow, is not accidental. The following lemma generalizes a property known from elementary number theory. **Lemma 1**.: _If \(r\in R-\{1\}\) with \(\operatorname{ord}_{\mathfrak{p}}(r-1)=m\) and \(mp>m+e\), then, for every \(n\in\mathbb{N}\), one has \(r^{p^{n}}=1\) or \(\operatorname{ord}_{\mathfrak{p}}(r^{p^{n}}-1)=m+ne\), or both._ Proof.: For \(n=1\), \(r-1=\delta\pi^{m}\) with \(\delta\in R^{\times}\), so, since \(p=\varepsilon\pi^{e}\), \(r^{p}=(1+\delta\pi^{m})^{p}\equiv 1+\delta\varepsilon\pi^{m+e}\bmod\pi^{m+e+1}\), for \(p\) divides the non-trivial binomial coefficients and \(mp\geq m+e+1\). But \(\delta\varepsilon\in R^{\times}\). When established for \(n-1\), there is no stopping this, as \((m+(n-1)e)p>m+(n-1)e+e\). By Lemma 1, for the unit \(r=1-\pi^{m}\in R^{\times}\), where \(m=\ulcorner(e+1)/(p-1)\urcorner\), and any \(n\), either \(r^{p^{n}}=1\) or \(a>\operatorname{ord}_{\mathfrak{p}}(r^{p^{n}}-1)=m+ne\), since \(mp>m+e\). Hence the order of \(r\) in \(R^{\times}\) is \(p^{n}\) with \(n=\ulcorner(a(p-1)-e-1)/(e(p-1))\urcorner\). In particular, when \(p\geq e+2\), \(\ulcorner(e+1)/(p-1)\urcorner=1\), so then \(r=1-\pi\) and its inverse \(\Sigma_{i=0}^{a-1}\,\pi^{i}\) in the group \(R^{\times}\) are of that order. ## 3 Ring structure Following Cohen's proof of his structure theorem in the unequal-characteristic case, Th. 11 of [1], we now prove that \(R\) is determined by the four numbers \(a\), \(e\), \(f\), \(p\), plus two more, yet to be revealed. In this way, we find a homomorphism \(\varphi:V\to R\), where \(V\) is a \(v\)-ring. Take any monic \(\boldsymbol{h}\in\mathbb{Z}[X]\) of degree \(f\) that is irreducible in \(\mathbb{F}_{p}[X]\), and let \(\boldsymbol{K}=\mathbb{Q}[X]/(h)\), with ring of integers \(\mathcal{O}_{K}\). Then \(p\mathbb{Z}\) is inert in \(K\), i.e., \(\mathcal{O}_{K}\) has a unique prime \(\mathfrak{P}_{K}=p\mathcal{O}_{K}\) above \(p\mathbb{Z}\), with residue degree \(f\), by the Kummer-Dedekind theorem, Th. 8.2 in [5]. Indeed, if \(\boldsymbol{\beta}\in K\) is a zero of \(h\), the order \(\mathbb{Z}[\beta]\) of \(\mathcal{O}_{K}\) is _regular_ wrt. the prime \(\mathfrak{P}_{K}\) (loc. cit., Prop. 5.4 and p. 223), i.e. \(\mathbb{Z}[\beta]_{\mathfrak{P}_{K}\cap\mathbb{Z}[\beta]}\) is a DVR, i.e. \(\mathfrak{P}_{K}=(\mathfrak{P}_{K}\cap\mathbb{Z}[\beta])\mathcal{O}_{K}\), i.e. 
\(\mathfrak{P}_{K}\) is prime to the conductor \((\mathbb{Z}[\beta]:\mathcal{O}_{K})\), all because \(h\) is irreducible mod \(p\). For \(V\) we can take the completion \((\widehat{\mathcal{O}_{K}})_{\mathfrak{P}_{K}}\) of \(\mathcal{O}_{K}\) at \(\mathfrak{P}_{K}\), and \(\varphi\) is constructed as in Cohen's proof. Our \(R\) is a complete local ring with discrete \(\mathfrak{p}\)-adic topology. Put \(\boldsymbol{a_{0}}=\ulcorner a/e\urcorner\). In the proof, the subring \(Z\subseteq R\), generated by the identity, is isomorphic to \(\,\mathbb{Z}/p^{\,a_{0}}\), for \(p^{\,a_{0}}\) is the smallest multiple of \(1\in\mathbb{Z}\) that becomes zero in \(R\). The subring \(\boldsymbol{R_{0}}\), the closure of \(Z\) (in the discrete topology), of course equals \(Z\). One has \(\varphi:V_{0}\twoheadrightarrow R_{0}\), where \(V_{0}=\widehat{\mathbb{Z}}_{p}\subseteq V,\) and the composition of \(\varphi\) with the residue map \(R_{0}\to\mathbb{F}_{p}\) is the natural map \(V_{0}\twoheadrightarrow\mathbb{F}_{p}\), say. This \(\varphi\) is then extended, step by step, by continuity, to a \(\varphi:V\to R\) with \(\nu\circ\varphi=\psi\), the natural surjection \(\boldsymbol{\psi}:V\twoheadrightarrow k\), where \(\nu:R\to k\) is the residue map. The proof can be much simplified in our case, for \(k\) has no transcendentals, and Zorn's Lemma can be avoided because \(k\) is simple over \(\mathbb{F}_{p}\), so \(V_{0}\) can already play the role of \(V_{\omega}\) in the proof. For, \(k=\mathbb{F}_{p}(\overline{\beta})\), where \(\overline{\beta}\) is the image of \(\beta\in\mathcal{O}_{K}\) in \(k\), which has minimal polynomial \(h\) over \(\mathbb{F}_{p}\), and \(h\) has the zero \(\beta\in V\) and, by Hensel's Lemma, a root \(\boldsymbol{b_{h}}\in R\), as \(h\) is separable over \(\mathbb{F}_{p}\). Both map to \(\overline{\beta}\) in \(k\), and \(\beta\mapsto b_{h}\) will then provide the desired extension \(\varphi:V\to R\) compatible with \(\psi\). The image \(R_{\omega}=R_{0}[b_{h}]\) of \(V\) under \(\varphi\) in \(R\), where \(R_{0}=Z=\,\mathbb{Z}/p^{\,a_{0}}\), is again a local PIR, with residue field \(k\), maximal ideal \(\mathfrak{p}_{\omega}=pR_{\omega}\), and uniformizer \(p\sim\pi^{e}\) (\(p\) and \(\pi^{e}\) are associates). Its attributes are \((a_{0},1,f,p)\) rather than \(R\)'s \((a,e,f,p)\). So its order is \(p^{\,a_{0}f}\). The kernel of \(R_{0}[X]\twoheadrightarrow R_{\omega}\), with \(X\mapsto b_{h}\), is \((h)\), for \(R_{0}[X]/(h)\) is also of that order: \(h\) is monic of degree \(f\). We have \(p=\varepsilon\pi^{e}\), but the unit \(\varepsilon\) is not necessarily an \(e\)-th power, especially if \(p\mid e\) (when \(R\) is "wildly ramified" over \(R_{0}\)). \(\boldsymbol{R_{\varepsilon}}\coloneqq\{\Sigma_{i=0}^{a_{0}-1}\,r_{i}\pi^{ie} \mid\forall_{i}\,r_{i}\in\rho\}\) is a subring of \(R\) because \(\rho\) is closed under multiplication. It also is an \((a_{0},1,f,p)\), with prime ideal \(\mathfrak{p}_{\varepsilon}\) generated by \(\boldsymbol{\pi_{\varepsilon}}\coloneqq\pi^{e}\). Let's call \(R_{\varepsilon}\) the _coefficient ring_ of \(R\). It is _unramified_ over \(R_{0}\), that is, \(\operatorname{ord}_{\pi_{\varepsilon}}(p)=1\). We inspect the map \(\chi:R_{\varepsilon}[Y]\to R\), where \(Y\mapsto\pi\). It is a surjection, for its image is a finite local PIR with field \(k\) and nilpotency index \(a\), and therefore of size \(p^{\,af}=|R|\). Let \(g\in\ker(\chi)\). 
Since \(\pi^{a}=0\), we can assume \(\deg(g)<a\), say \(g=\Sigma_{j=0}^{a-1}c_{j}Y^{j}\), with the coefficients \(c_{j}\in R_{\varepsilon}\). Because \(g(\pi)=0\), one has \(c_{0}\in\mathfrak{p}\), and, as \(\mathfrak{p}\) contracts to \(\mathfrak{p}_{\varepsilon}\) in \(R_{\varepsilon}\), \(c_{0}\) is a multiple \(\eta\pi_{\varepsilon}^{n}\) of \(\pi_{\varepsilon}\), with \(\eta\in R_{\varepsilon}^{\times}\). If \(c_{0}\neq 0\), we replace the monomial \(c_{0}\) in \(g\) with \(\eta Y^{en}\). The coefficients, which we continue to denote by \(c_{j}\), remain in \(R_{\varepsilon}\). The constant term of \(g\) is now zero. If, at any point, \(\deg(g)\) exceeds \(a-1\), throw away the excess high-order monomials. We then process \(g\) further as follows. Starting with \(j=1\), we have \(c_{1}=r+s\) for an \(s\in\mathfrak{p}_{\varepsilon}\) and a unique \(r\in\rho\). Write \(s=\eta\pi_{\varepsilon}^{n}\) in \(R_{\varepsilon}\), with \(\eta\in R_{\varepsilon}^{\times}\), and replace \(c_{1}Y\) in \(g\) by the binomial \(rY+\eta Y^{en+1}\). This does not introduce a constant term, of course, but the coefficient of the term of degree \(1\) in \(Y\) is now in \(\rho\). Continue in this way for \(j\geq 2\) until \(g\in\rho[Y]\) and \(g\) is still of degree less than \(a\). By u.r., \(g\) must then be zero. It follows that \(\ker(\chi)=(Y^{a},\pi_{\varepsilon}-Y^{e})\). For some \(\varepsilon_{0}\in R_{\varepsilon}^{\times}\) we have \(p=\varepsilon_{0}\pi_{\varepsilon}\) in \(R_{\varepsilon}\). It does not necessarily follow that \(\varepsilon_{0}=\varepsilon\), or even that \(\varepsilon\in R_{\varepsilon}\). Still, \(p=\varepsilon_{0}\pi^{e}\) in \(R\), and from now on we will **write \(\boldsymbol{\varepsilon}\)** for this particular \(\varepsilon_{0}\). Thus \(p=\varepsilon\pi_{\varepsilon}\). \(R_{\varepsilon}^{\times}=H_{0}\times I_{1}\), with nested \(\boldsymbol{I_{n}}\coloneqq\{1+rp^{n}\mid r\in R_{\varepsilon}\}\) for \(1\leq n<a_{0}\), similarly to \(R^{\times}=H_{0}\times H_{1}\). Arithmetic in \(R\cong R_{\varepsilon}[Y]/(Y^{a},\varepsilon^{-1}p-Y^{e})\) depends on the unit \(\varepsilon\). Any isomorphism \(R_{\varepsilon}[Y]/(Y^{a},\varepsilon^{-1}p-Y^{e})\stackrel{{ \sim}}{{\to}}R_{\varepsilon}[Y]/(Y^{a},\delta^{-1}p-Y^{e})\), with \(\delta\in R_{\varepsilon}^{\times}\), must send \(\overline{Y}\) to a uniformizer \(\overline{Y}\overline{d}\), so with \(d(Y)\in R_{\varepsilon}[Y]\) having unit constant term \(d(0)\). On the left-hand side, \(p=\varepsilon\overline{Y}^{e}\), and this then maps to \(p=\delta(\overline{Y}\overline{d})^{e}\) on the right. The two sides are thus isomorphic iff \(\varepsilon\) and \(\delta\) belong to the same congruence class of \(R_{\varepsilon}^{\times}\) mod \((R_{\varepsilon}^{\times})^{e}\). _Note_. It is precisely this observation that was overlooked in version 1. Write \(e=q_{1}v_{1}=q_{2}v_{2}\), where \(q_{1}\) is the GCD \((e,p^{\,f}-1)\), and \(q_{2}=p^{\mathrm{ord}_{p}(e)}\) is the "wildness" level. Let \(\boldsymbol{z}\in H_{0}\) be a generator of \(H_{0}\). Then the quotient \(H_{0}/H_{0}^{e}\) is cyclic of order \(q_{1}\), and the powers \(z^{i}\), for \(0\leq i<q_{1}\), are representatives. The \(e\)-th powers of the units in \(I_{1}\subseteq R_{\varepsilon}^{\times}\) coincide with the \(q_{2}\)-th powers. If \(\delta\in I_{1}-I_{2}\) then \(\mathrm{ord}_{p}(\delta-1)=1\). As \(R_{\varepsilon}\) is unramified, in Lemma 1 we have "\(m=1\)" and "\(e=1\)", so that "\(mp>m+e\)" is automatically satisfied when \(p>2\). 
In that case, the order of \(\delta\) is \(p^{\,a_{0}-1}\). Likewise, any \(\delta\in I_{n}-I_{n+1}\) has order \(p^{\,a_{0}-n}\). And \(I_{a_{0}-1}\cong k\cong(\mathbb{Z}/p)^{f}\), for the additive group of \(k\) is an elementary abelian \(p\)-group. Since \(I_{n}/I_{n+1}\cong k\) for \(n<a_{0}\), it follows that \(I_{a_{0}-2}\cong(\mathbb{Z}/p^{2})^{f}\), as the \(\delta\in I_{a_{0}-2}-I_{a_{0}-1}\) are all of order \(p^{2}\). By induction, \(I_{1}\cong(\mathbb{Z}/p^{a_{0}-1})^{f}\). In case \(p=2\), the \(\delta\in I_{n}-I_{n+1}\) are of order \(p^{\,a_{0}-n}\) only when \(n>1\). Every \(\delta=1+r_{1}\cdot 2+\Sigma_{i>1}r_{i}\cdot 2^{i}\in I_{1}-I_{2}\) has square \(1+(r_{1}+r_{1}^{2})\cdot 2^{2}\) plus terms in \(8R\). Now \(r_{1}\neq 0\), so the square is in \(I_{3}\) iff \(r_{1}=-1\). But \(-1\notin\rho\) unless \(f=1\) (\(1\) and \(-1\) both map to \(1\in k\), and \(1\) is already in \(\rho\)). So for \(f>1\), \(I_{1}\cong(\mathbb{Z}/p^{a_0-1})^{f}\) holds too. And if \(f=1\) then \(I_{1}\) is just \((\mathbb{Z}/2^{a_{0}})^{\times}\), which is the internal direct sum of the subgroups generated by \(\{-\overline{1}\}\) and by \(\overline{5}\) (if \(a_{0}>1\)), as is well-known. (An easy proof is through Lemma 1.) Hence \(I_{1}/I_{1}^{e}\cong(\mathbb{Z}/q_{2})^{f}\), unless \(f=1\), \(p=2\) and \(a_{0}\geq 3\). And \(I_{1}/I_{1}^{e}\) is always of order \(q_{2}^{f}\), so \([R_{\varepsilon}^{\times}:(R_{\varepsilon}^{\times})^{e}]=q_{1}q_{2}^{f}\). **Proposition 1**.: _The unit group of the coefficient ring \(R_{\varepsilon}\) is the direct sum of a cyclic group of order \(p^{\,f}-1\) and either \(f\) copies of \(\mathbb{Z}/p^{\ulcorner a/e\urcorner-1}\), or, when \(p=2\) and \(f=1\), the group \(\{\pm\overline{1}\}\oplus\mathbb{Z}/2^{\ulcorner a/e\urcorner-2}\). So \(R_{\varepsilon}^{\times}\) depends only on \((a,e,f,p)\). \(\square\)_ The conclusion is, that the isomorphism class of \(R\), which is determined by the congruence class of \(\overline{\varepsilon}\) in \(R_{\varepsilon}^{\times}/(R_{\varepsilon}^{\times})^{e}\), is given by the exponent \(\boldsymbol{t}\), say, with \(0\leq t<q_{1}\), of the representative \(\overline{z}^{t}\) of \(\overline{\varepsilon}\) in \(H_{0}/H_{0}^{e}\) plus the congruence class of \(\overline{\varepsilon}\) in \(I_{1}/I_{1}^{e}\). If \(\varepsilon=(z_{0}^{e}z^{t})u\) in \(R_{\varepsilon}\), with \(z_{0}\in H_{0}\) and \(u\in I_{1}\), that congruence class is the one of \(\boldsymbol{u}\). Although not strictly speaking a numerical invariant of \(R\), the unit \(u\) is as good as. Sweeping the nonessential \(z_{0}\) under the carpet, we obtain the magical formula \(p=z^{t}u\pi^{e}\). The ring \(R\) is therefore characterized by the sextet \((a,e,f,p,t,u)\), where \(u\in I_{1}\) is fixed up to an \(e\)-th power, while the other parameters are unique. Mr. Page's examples \(\mathbb{Z}[X]/(8,X^{2}-2)\) and \(\mathbb{Z}[X]/(8,X^{2}+2)\) are \((6,2,1,2,0,1\) mod \(I_{1}^{2})\) and \((6,2,1,2,0,-1\) mod \(I_{1}^{2})\), respectively. \(x\coloneqq\overline{X}\) uniformizes both, \(R_{\varepsilon}^{\times}=I_{1}=(\mathbb{Z}/8)^{\times}\), and \(R^{\times}=\{\,r_{0}+r_{1}x\mid r_{0}\in(\mathbb{Z}/8)^{\times}\wedge r_{1} \in\mathbb{Z}/8\,\}\) is of order \(32\), with \(x^{2}=2\) mod \(8\) in the first ring, and \(\varepsilon=1\) mod \(8\), while \(x^{2}=-2\) mod \(8\) in the second one, where \(\varepsilon=-1\) mod \(8\). 
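A brute-force sketch (ours) makes the distinction between these two rings tangible: modelling elements as pairs \((r_{0},r_{1})\) mod 8 with \(x^{2}=d\), \(d=\pm 2\), it compares the sets of squares of units, in line with the congruence-class criterion above.

```python
# Sketch: compare unit squares in Z[X]/(8, X^2 - 2) versus Z[X]/(8, X^2 + 2).
from itertools import product

def mul(u, v, d):
    (a, b), (c, e) = u, v           # (a + b x)(c + e x), with x^2 = d
    return ((a * c + b * e * d) % 8, (a * e + b * c) % 8)

def unit_squares(d):
    units = [(a, b) for a, b in product(range(8), repeat=2) if a % 2 == 1]
    return {mul(u, u, d) for u in units}

sq_plus, sq_minus = unit_squares(2), unit_squares(-2)
print(sq_plus == sq_minus)                    # False: the square sets differ
print((7, 2) in sq_minus, (7, 2) in sq_plus)  # -1 + 2x is a square only when x^2 = -2
```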
In the second group, \((1+x)^{2}=-1+2x\), but this is not a square in the first. Here we have \(q_{2}=4\), and \(\mathbb{Z}[X]/(8,X^{2})\) and \(\mathbb{Z}[X]/(8,X^{2}+4)\) are the two remaining variants. Finite fields \(R=\mathbb{F}_{p^{f}}\) are of the flavor \((1,1,f,p,0,1)\), with \(0\) as uniformizer, if we agree to use \(1\) for \(u\) in cases when the isomorphism type is already given by \(a\), \(e\), \(f\) and \(p\). This holds iff \(e\) is prime to \((p^{f}-1)p^{(a_{0}-1)f}\), when we may simply abbreviate the signature to the string \((a,e,f,p)\). We summarize. **Theorem 1**.: _If \(\langle R,\mathfrak{p}\rangle\) is a finite local principal ideal ring with uniformizer \(\pi\), that has the six invariants described above: nilpotency index \(a\), ramification index \(e\), residue degree \(f\), the characteristic \(p\) of the residue field \(k=R/\mathfrak{p}\), the number \(0\leq t<(e,p^{f}-1)\) giving the congruence class of \(\overline{\varepsilon}\) in \(H_{0}/H_{0}^{e}\), and the representative \(u\in I_{1}\) of the congruence class of \(\overline{\varepsilon}\in I_{1}\) mod \(I_{1}^{e}\), where \(\varepsilon\) is a unit of \(R_{\varepsilon}\), the largest unramified subring of \(R\), for which \(p=\varepsilon\pi^{e}\) in \(R\), and \(H_{0},I_{1}\leq R^{\times}\) are the groups defined earlier, the following apply._ 1. \(R\) _is isomorphic to_ \(\mathbb{Z}[X,Y]/(h(X),v(X)w(X)-1,Y^{a},p-v(X)Y^{e})\)_, where_ \(h\in\mathbb{Z}[X]\) _is any monic polynomial of degree_ \(f\) _that is irreducible in_ \(\mathbb{F}_{p}[X]\)_, and_ \(v,w\in\mathbb{Z}[X]\) _are polynomials of degree lower than_ \(f\)_. The ring_ \(R\) _is of order_ \(p^{\,af}\)_._ 2. _As an abelian group, the group of units_ \(R^{\times}\) _is the direct sum of a cyclic group of order_ \(p^{\,f}-1\) _and a_ \(p\)_-group of order_ \(p^{(a-1)f}\)_, which is generally not cyclic._ 3. \(R\) _is determined up to isomorphism by its signature_ \((a,e,f,p,t,u)\)_._ 4. _Finite local PIRs of any given type_ \((a,e,f,p,t,u)\) _exist, as long as_ \(p\) _is a prime number,_ \(a\geq e\)_,_ \(0\leq t<(e,p^{f}-1)\)_, and_ \(u\) _is in the_ \(p\)_-component_ \(I_{1}\) _of_ \(R_{\varepsilon}^{\times}\coloneqq\mathbb{Z}/p^{\ulcorner a/e\urcorner}[X]/(h)\) _with_ \(h\) _a polynomial as mentioned under_ \((1)\)_. Any_ \(u^{\prime}\in uI_{1}^{e}\) _produces the same local principal ideal ring as_ \(u\)_._ 5. _The quotient rings of_ \(R\) _are the_ \(R/\mathfrak{p}^{i}\) _for_ \(0\leq i\leq a\)_, of type_ \((i,e,f,p,t,u)\) _for_ \(i>e\) _and_ \((i,1,f,p,0,1)=\mathbb{F}_{p^{f}}[Y]/(Y^{i})\) _for_ \(0<i\leq e\)_._ Proof.: Continuing the notation we were using, \(R_{\varepsilon}=R_{0}[X]/(h)=(\mathbb{Z}/p^{a_{0}})[X]/(h)\). If \(v,w\in\mathbb{Z}[X]\) are polynomials such that \(\varepsilon=v(\overline{X})\) and \(\varepsilon^{-1}=w(\overline{X})\) in \(R_{\varepsilon}\), then \(R=R_{\varepsilon}[Y]/(Y^{a},p-\varepsilon Y^{e})=\mathbb{Z}[X,Y]/(h(X),v(X)w( X)-1,Y^{a},p-v(X)Y^{e})\) for \(p^{a_{0}}=\overline{v(X)}^{\,a_{0}}\,\overline{Y}^{\,ea_{0}}\) in the latter ring, and \(ea_{0}\geq a\). The isomorphism in (1) follows. Notice that \(k=\mathbb{F}_{p^{f}}=\mathbb{F}_{p}[X]/(h_{0})\) for _any_ monic irreducible \(h_{0}(X)\) of degree \(f\). And \(v\) and \(w\) can be taken to be of degree less than \(f\), and be made unique by reducing their coefficients mod \(p\). (Of course, they depend on \(h\).) 
If \(a=e(a_{0}-1)+i=e(\ulcorner a/e\urcorner-1)+i\), where \(0<i\leq e\), then \(Y^{a}\) can be replaced by the lower degree polynomials \(Y^{e(a_{0}-1)}-\pi_{\varepsilon}^{\,a_{0}-1}\) and \(\pi_{\varepsilon}^{\,a_{0}-1}Y^{i}\), for these are in the kernel, and ipso facto \(\overline{Y}^{a}=\overline{Y}^{\,e(a_{0}-1)}\,\overline{Y}^{\,i}=\pi_{ \varepsilon}^{\,a_{0}-1}\,\overline{Y}^{\,i}=0\) in the quotient ring, so that \(Y^{a}\) is in the ideal they generate. Items (2) and (3) have already been demonstrated, and (5) is a trivial exercise. As to (4), \(R\coloneqq\mathbb{Z}[X,Y]/(h(X),v(X)w(X)-1,Y^{a},p-v(X)Y^{e})\), with \(v\) and \(w\) of degree less than \(f\), does the job. The characteristic is \(p^{\,a_{0}}\), as we saw in the proof of (1), hence it is finite since \(h(X)\) is monic and \(\pi\coloneqq\overline{Y}\) nilpotent. If \(R_{\varepsilon}\) is the subring generated by \(x=\overline{X}\), then \(\mathfrak{p}_{\varepsilon}\coloneqq pR_{\varepsilon}\in\max(R_{\varepsilon})\), and \(R_{\varepsilon}\) is local. If \(J\) is an ideal and \(r\in J\) is of minimal \(\operatorname{ord}_{p}\), then \(J=rR_{\varepsilon}\). So \(R_{\varepsilon}\) is a PIR. Put \(\varepsilon\coloneqq v(x)\). Then \(\varepsilon\) is a unit, with \(\varepsilon^{-1}=w(x)\). As \(h(x)=0\) and \(h\) is irreducible mod \(p\), \(k_{\varepsilon}=R_{\varepsilon}/pR_{\varepsilon}\) must be \(\mathbb{F}_{p^{f}}\). And \(R=R_{\varepsilon}[Y]/(Y^{a},p-v(X)Y^{e})\) is a local PIR with radical \((p,\pi)=(\pi)\). Now \(\pi^{a}=0\) gives nilpotency index \(a\), while \(p=\varepsilon\pi^{e}\) settles the ramification index \(e\). If \((\mathbb{F}_{p^{f}})^{\times}=\langle\overline{z}\rangle\), take a lift \(z(x)\) of \(\overline{z}\) in \(R_{\varepsilon}\) for some polynomial \(z(X)\in\mathbb{Z}[X]\), and raise \(z(x)\) to a high enough power of \(p\) to make its multiplicative order equal to \(p^{\,f}-1\), call the polynomial giving that power \(z(X)\) again, and write \(z(x)^{t}=v^{\prime}(x)\) for a \(v^{\prime}(X)\in\mathbb{Z}[X]\) with \(\deg(v^{\prime})<f\). Then we can swap \(v(X)\) for \(v^{\prime}(X)\) (and call that \(v(X)\) again). Restarting from scratch, the new \(\varepsilon\) is the old \(z(x)^{t}\), hence it has the correct \(t\). It is in the trivial residue class of \(I_{1}/I_{1}^{e}\), where \(I_{1}=\{r\in R_{\varepsilon}^{\times}\mid r\equiv 1\ \text{mod}\ p\}\). Pick \(\widetilde{u}(X)\in\mathbb{Z}[X]\) with \(\widetilde{u}(x)=u\in I_{1}\). Restart again, with \(v(X)\) replaced by \(v(X)\widetilde{u}(X)\) mod \(h(X)\), and the new and improved \(\varepsilon\coloneqq v(x)\) has both the correct \(t\) and the right \(u\). ## 4 Fine structure We present a rather explicit discrete valuation ring \(D\) that has a homomorphic image isomorphic to \(R\), using some algebraic number theory. The monic polynomial \(h\) of degree \(f\) has thus far remained unspecified. We now produce a particular \(h\), to start the construction off. If, for \(n\in\mathbb{N},\ \Phi_{n}\) is the \(n\)-th cyclotomic polynomial, we have \(X^{n}-1=\Pi_{\,d\,|\,n}\,\Phi_{d}\). Take \(n=\,\Phi_{f}(p)\). Then the order of \(p\) in \((\mathbb{Z}/n)^{\times}\) is \(f\). Indeed, \(n\mid p^{f}-1\), and if \(d<f\) divides \(f\) then \(X^{d}-1\) and \(\Phi_{f}(X)\) are relatively prime in \(\mathbb{Z}[X]\), hence \(l(X)(X^{d}-1)+m(X)\,\Phi_{f}(X)=1\) for some \(l\) and \(m\). But then \(p^{d}\) cannot be \(\equiv 1\) mod \(n\). In view of the cyclotomic decomposition law ([6], Th. 
2.13), \(p\) splits into \(f^{\perp}\coloneqq\varphi(n)/f\) primes \(\mathfrak{P}_{i}\) in the ring of integers of the cyclotomic field \(\,C\coloneqq\mathbb{Q}(\zeta_{n})\), which are all of residue degree \(f\) over \(p\). (Here \(\varphi\) stands for Euler's totient function.) The ring of integers \(\mathcal{O}_{C}=\mathbb{Z}[\zeta_{n}]\) ([6], Th. 2.6), so that the order \(\mathbb{Z}[\zeta_{n}]\subseteq\mathcal{O}_{C}\) is regular with respect to _all_ rational primes, and in particular w.r.t. \(p\). By Kummer-Dedekind, the \(\mathfrak{P}_{i}\) are given by \((p,h_{i}(\zeta_{n}))\), where the \(h_{i}(X)\in\mathbb{Z}[X]\) are monic of degree \(f\), chosen such that \(\overline{h}_{i}(X)\in\mathbb{F}_{p}[X]\) are the \(f^{\perp}\) irreducible factors of \(\Phi_{n}\) in \(\mathbb{F}_{p}[X]\). Reduce the coefficients of the \(h_{i}\) mod \(p\), so that they are all in \(P\coloneqq\{\,(1-p)/2,\cdots,(p-1)/2\,\}\subseteq\,\mathbb{Z}\) for \(p\) odd, or in \(P\coloneqq\{0,1\}\) for \(p=2\). To be absolutely unequivocal, we now specify \(\boldsymbol{h}\coloneqq h_{m}\), where \(1\leq m\leq f^{\perp}\) is such that the \(f\)-tuple of the coefficients of \(X^{f-1},\cdots,X^{0}\) in \(h_{m}\) is the smallest in \(P^{f}\) among the tuples for the \(h_{i}\), \(1\leq i\leq f^{\perp}\), in the lexicographical order based on the natural order on \(P\). Then \(\boldsymbol{K}\coloneqq\mathbb{Q}[X]/(h)\) can be taken as the number field of degree \(f\) over \(\mathbb{Q}\) we used in Section 3 to mimic Cohen's proof. We have \(\mathbb{Z}[X]/(h)\subseteq\mathcal{O}_{K}\). If \(\boldsymbol{\beta}\) is the image of \(X\) in \(\mathcal{O}_{K}\), then \(h(\beta)=0\) and \(K=\mathbb{Q}(\beta)\). Let \(v(X)\) be the polynomial defined in the proof of (1) of Th. 1, and let \(\boldsymbol{\varepsilon_{K}}\) be the image of \(v(X)\) (!) in \(\mathcal{O}_{K}\). Let \(L\coloneqq K[Y]/(g(Y))\), where \(g(Y)\coloneqq\varepsilon_{K}Y^{e}-p\in\mathcal{O}_{K}[Y]\). As \(\mathfrak{P}_{K}=p\mathcal{O}_{K}\), \(g\) is \(\mathfrak{P}_{K}\)-Eisenstein, hence it is irreducible, and \(p\) is completely ramified in \(L\) with index \(e\). For, \(\overline{g}=\overline{\varepsilon}_{K}Y^{e}\) in \((\mathcal{O}_{K}/\mathfrak{P}_{K})[Y]=\mathbb{F}_{p^{f}}[Y]\), and the remainder of \(g\) when divided by \(Y\) in \(\mathcal{O}_{K}[Y]\) is \(-p\), which is not divisible by \(\mathfrak{P}_{K}^{2}\). So, again by Kummer-Dedekind, which remains valid when a Dedekind domain is substituted for \(\,\mathbb{Z}\) as the base ring, the order \(\mathcal{O}_{K}[\gamma]\) of \(\mathcal{O}_{L}\) is regular wrt. \(\mathfrak{P}_{L}\coloneqq(p,\gamma)=(\gamma)\), where \(\boldsymbol{\gamma}\) is the image of \(Y\) in \(\mathcal{O}_{L}\), a zero of \(g\), i.e. an \(e\)-th root of \(\varepsilon_{K}^{-1}p\). So \(\mathfrak{P}_{L}\) is the only prime of \(\mathcal{O}_{L}\) above \(p\mathbb{Z}\), and it has residue degree \(f\) and ramification index \(e\) over \(p\mathbb{Z}\). We then have \(R_{L}\coloneqq\mathcal{O}_{L}/\mathfrak{P}_{L}^{\,a}\cong(\mathcal{O}_{L})_{ \mathfrak{P}_{L}}/\mathfrak{P}_{L}^{\,a}(\mathcal{O}_{L})_{\mathfrak{P}_{L}}\), a PIR of type \((a,e,f,p,t,u)\), and \(D\coloneqq(\mathcal{O}_{L})_{\mathfrak{P}_{L}}\) is a discrete valuation ring. 
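As an illustration of the recipe for \(h\) above (ours, assuming SymPy), take \(p=f=3\): then \(n=\Phi_{3}(3)=13\), and the candidate polynomials \(h_{i}\) are the irreducible factors of \(\Phi_{13}\) over \(\mathbb{F}_{3}\), from which \(h\) is chosen lexicographically.

```python
# Sketch: compute n = Phi_f(p) and factor Phi_n over F_p to obtain the h_i.
from sympy import symbols, cyclotomic_poly, factor_list

X = symbols('X')
p, f = 3, 3
n = int(cyclotomic_poly(f, X).subs(X, p))   # n = Phi_3(3) = 13
phi_n = cyclotomic_poly(n, X)

_, factors = factor_list(phi_n, modulus=p)  # symmetric coefficients mod p
for g, _ in factors:
    print(g)    # four cubic factors; h is the lexicographically smallest
```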
In this PIR, \(\overline{\pi}_{L}\), the residue class of \(\gamma\), is a uniformizer, and \(p=\overline{\varepsilon}_{K}\overline{\pi}_{L}^{\,e}\), with the unit \(\overline{\varepsilon}_{K}\in R_{L}^{\times}\), per construction, of the form \(((z_{L}^{\prime})^{e}z_{L}^{t})u_{L}\), for some element \(z_{L}^{\prime}\) of \(H_{0,L}\leq R_{L}^{\times}\), the equivalent of \(H_{0}\leq R^{\times}\), generated by \(z_{L}\) (say), and \(u_{L}\coloneqq(z_{L}^{\prime})^{-e}z_{L}^{-t}\varepsilon_{K}\in I_{1,L}\) is a unit that must map to the congruence class of our \(u\) under \(I_{1,L}\to I_{1,L}/I_{1,L}^{e}\cong I_{1}/I_{1}^{e}\) for any isomorphism (that exists by Prop. 1), followed by a suitable automorphism of \(I_{1}/I_{1}^{e}\). Thus by (3) of Th. 1 it follows that \(R\cong R_{L}\) is a quotient of \(D\). [Field-extension diagram omitted; it displayed \(\mathbb{Q}(\zeta_{n})\), \(N\), and the degree \(\varphi(e)\).] Put \(\boldsymbol{\alpha}=\beta+\gamma\). Then the minimal polynomial of \(\alpha\) over \(\mathbb{Z}\) is the resultant \(\boldsymbol{r(X)}\coloneqq\operatorname{res}_{Y}(g(Y),h(X-Y))\in\mathcal{O}_{K}[X]\) (eliminating \(Y\)), a polynomial of degree \(ef\) equal to \(\Pi_{i=1}^{e}\,h(X-\gamma_{i})\), where, if \(\zeta_{e}\) is a primitive \(e\)-th root of unity, \(\gamma_{i}\coloneqq\gamma\zeta_{e}^{i}\) (for \(1\leq i\leq e\)) are the zeroes of \(g\) in the normal closure \(N=L(\zeta_{e})\) of \(L/\mathbb{Q}\). (So \(N/K(\zeta_{e})\) is a Kummer extension). Because \(h(\alpha-\gamma_{e})=h(\beta)=0\), we have \(r(\alpha)=0\). As \(r(X)\) is monic and symmetric w.r.t. the \(\gamma_{i}\), it must be the minimal polynomial of \(\alpha\) over \(\mathbb{Z}\). So \(L=\mathbb{Q}(\alpha)\). For any prime \(\mathfrak{P}_{N}\) of \(\mathcal{O}_{N}\) lying over \(\mathfrak{P}_{L}\), the image of \(r(X)\) in \((\mathcal{O}_{N}/\mathfrak{P}_{N})[X]\) is \(\overline{h}^{\,e}\in\mathbb{F}_{p}[X]\), because, since \(\gamma_{i}^{\,e}=\varepsilon_{K}^{-1}p\), the \(\gamma_{i}\) become zero. The primes above \(p\) in \(\,\mathbb{Z}[\alpha]\) correspond to the irreducible factors of \(r(X)\) in \(\mathbb{F}_{p}[X]\). In this case, \(\mathfrak{P}_{h}\coloneqq(p,h(\alpha))\) is the only prime. As this prime has ramification index \(e\) and residue degree \(f\), like \(\mathfrak{P}_{L}\) does, it cannot split or ramify further in \(\mathcal{O}_{L}\), so that the order \(\,\mathbb{Z}[\alpha]\subseteq\mathcal{O}_{L}\) is regular for \(\mathfrak{P}_{L}\), and Kummer-Dedekind applies. Hence \(\,\mathbb{Z}[\alpha]_{\mathfrak{P}_{h}}\cong(\mathcal{O}_{L})_{\mathfrak{P}_{L}}=\boldsymbol{D}\) is a discrete valuation ring, and because \(R_{\varepsilon}\) has residue field \(k\), the zero \(b_{h}\) of \(h\) can be chosen to be in \(R_{\varepsilon}\), so we obtain the following structure theorem. **Theorem 2**.: _For the integer polynomials \(h(X)\) and \(r(X)\) specified above, the PIR \(R\) is the quotient of the discrete valuation ring \(D=(\,\mathbb{Z}[X]/(r(X)))_{\mathfrak{P}_{h}}\) under \(\overline{X}\mapsto b_{h}+\pi\), where \(b_{h}\in R_{\varepsilon}\subseteq R\) is a zero of the polynomial \(h\) and \(\mathfrak{P}_{h}=(p,\overline{h})\) denotes the unique prime of \(\,\mathbb{Z}[X]/(r(X))\) lying over \(p\,\mathbb{Z}\). The kernel of this map is the ideal \(\mathfrak{P}_{h}^{\,a}D\)._ Since every finite PIR is a product of local ones, the theorems also describe the structure of finite principal ideal rings in general. 
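The data entering Theorem 2 are mechanical to compute. As an illustration (ours, not part of the paper), the following SymPy sketch carries out the recipe for the case \(p=f=3\), \(e=4\), \(\varepsilon_{K}=1\) that is worked by hand in Example 1 below: it forms \(n=\Phi_{f}(p)\), factors \(\Phi_{n}\) over \(\mathbb{F}_{p}\), selects the lexicographically smallest factor as \(h\), and computes \(r(X)=\operatorname{res}_{Y}(g(Y),h(X-Y))\).
```
# Sketch (ours, not from the paper): computing h and r with SymPy for the
# data of Example 1 below: p = f = 3, e = 4, epsilon_K = 1, so g(Y) = Y^4 - 3.
from sympy import symbols, Poly, cyclotomic_poly, factor_list, resultant, expand

X, Y = symbols("X Y")
p, f, e = 3, 3, 4

n = int(Poly(cyclotomic_poly(f, X), X).eval(p))   # n = Phi_f(p) = 13
Phi_n = cyclotomic_poly(n, X)

# Irreducible factors of Phi_n over F_p; SymPy's modulus option reduces the
# coefficients into the symmetric range P = {(1-p)/2, ..., (p-1)/2}.
_, factors = factor_list(Phi_n, X, modulus=p)

# h := the factor whose coefficient tuple (X^{f-1}, ..., X^0) is the
# lexicographically smallest, exactly as specified in the text.
h = min((fac for fac, _ in factors),
        key=lambda fac: Poly(fac, X).all_coeffs()[1:])
print("h =", h)        # X**3 - X**2 - X - 1

g = Y**e - p           # epsilon_K = 1 since v(X) = 1
r = expand(resultant(g, h.subs(X, X - Y), Y))
print("r =", r)        # monic of degree ef = 12
```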
Noting that nontrivial quotients \(A/I\) of Dedekind domains (\(A\supseteq I\supsetneq 0\)) are self-injective, we obtain another remarkable property of finite PIRs: they are self-injective. _Example 1._ If \(p=f=3\), we have \(\Phi_{f}=X^{2}+X+1\), so \(n=13\). As \(\Phi_{13}\) factors as \((X^{3}-X-1)(X^{3}+X^{2}-1)(X^{3}+X^{2}+X-1)(X^{3}-X^{2}-X-1)\) in \(\mathbb{F}_{3}[X]\), our \(h=X^{3}-X^{2}-X-1\), the lexicographically first. If \(\varepsilon=1\) in \(R\), we can take \(v(X)=1\), so \(\varepsilon_{K}=1\) in \(\mathcal{O}_{K}\) and \(g(Y)=Y^{e}-p\). For \(e=4\), the resultant \(r(X)\) of \(g(Y)\) and \(h(X-Y)\) w.r.t. \(Y\) is \(X^{12}-4X^{11}+2X^{10}+4X^{9}+7X^{8}-8X^{7}-12X^{6}-8X^{5}+7X^{4}+12X^{3}+10X^{2}+4X+1\), hence \(D\twoheadrightarrow R\) is the map \(D=(\,\mathbb{Z}[X]/(r(X)))_{\mathfrak{P}_{h}}\to R\) given by \(\overline{X}\mapsto b_{h}+\pi\), where \(b_{h}\in R_{\varepsilon}\) is an element with \(b_{h}^{3}-b_{h}^{2}-b_{h}-1=0\), and \(\,\mathfrak{P}_{h}\) is the prime ideal \((3,\overline{h})\) of \(\mathbb{Z}[X]/(r(X))\). _Example 2._ For \(R\coloneqq\mathbb{Z}[Z]/(8,Z^{2}+2)\), a type \((6,2,1,2,0,-1\ \mathrm{mod}\ I_{1}^{2})\) PIR, the parameters are \(\varepsilon=-1\ \mathrm{mod}\ 8\), \(v(X)=-1\), \(\Phi_{f}=X-1\), \(n=1\), \(\Phi_{n}=X-1\) is irreducible mod \(2\), so \(h(X)=X+1\), \(b_{h}=-1\), \(e=2\), \(g(Y)=-Y^{2}-2\), and one finds \(r(X)=-X^{2}-2X-3\), an equation for \(-1+\sqrt{-2}\). For this ring \(R\), \(D=(\,\mathbb{Z}[X]/(X^{2}+2X+3))_{\mathfrak{P}_{h}}\), with \(\mathfrak{P}_{h}=(2,\sqrt{-2})\) ramified, of course. _Example 3._ For Mr. Page's other example, \(R=\,\mathbb{Z}[Z]/(8,Z^{2}-2)\), of type \((6,2,1,2,0,1\ \mathrm{mod}\ I_{1}^{2})\), with much the same parameters, except that \(\varepsilon=1\ \mathrm{mod}\ 8\) and \(g(Y)=Y^{2}-2\), and the resultant is \(r(X)=X^{2}+2X-1\), with \(-1+\sqrt{2}\) as a zero. So here, \(D=(\,\mathbb{Z}[X]/(X^{2}+2X-1))_{\mathfrak{P}_{h}}\) and \(\,\mathfrak{P}_{h}=(2,\sqrt{2})\). ## 5 Subrings Finally, we treat subrings of \(R\). No doubt, the results can be derived directly and perhaps more elegantly in an elementary ring-theoretical way, but as they are easy consequences of a theorem on conductor ideals and of Th. 2, it is more natural to take that vantage point. The subrings of \(R\) are just the images of the subrings of \(D\). If \(E\) is a subring of \(D\) and \(M\subseteq L\) its quotient field, \(E\) is just an order \(\subseteq({\cal O}_{M})_{\mathfrak{P}_{M}}\), itself a subring of \(D\). Let \(e_{M}\mid e\) and \(f_{M}\mid f\) be the ramification index and residue degree of \(\mathfrak{P}_{h}\cap{\cal O}_{M}\eqqcolon\mathfrak{P}_{M}\) over \(p\). Then \({\cal O}_{M}\) maps to \(R_{M}\coloneqq{\cal O}_{M}/(\mathfrak{P}_{h}^{a}D\,\cap\,{\cal O}_{M})\). But \(\mathfrak{P}_{h}^{a}D\,\cap\,{\cal O}_{M}=\mathfrak{P}_{M}^{\,\ulcorner a/(e/e_{M})\urcorner}\), for \(e/e_{M}\) is the ramification index of \(\mathfrak{P}_{h}\) over \(\mathfrak{P}_{M}\). Hence \(R_{M}={\cal O}_{M}/\mathfrak{P}_{M}^{\,\ulcorner ae_{M}/e\urcorner}\) is, in accordance with item (2) of Prop. 3 below, a PIR of type \((\ulcorner ae_{M}/e\urcorner,\,e_{M},\,f_{M},\,p,\,\ldots)\). 
Since \(\operatorname{ord}_{\pi}(\overline{n})=el\), this says that \(el+1\geq m\) implies \(el\geq m\). For the value \(l=\ulcorner(m-1)/e\urcorner\), that gives \(e\ulcorner(m-1)/e\urcorner\geq m\), which holds iff \(m\not\equiv 1\) mod \(e\). So Prop. 2 yields the following. **Corollary 1**.: _If \(f>1\), every ideal of \(R\) is a conductor ideal (c.i.). When \(f=1\), only the \(\mathfrak{p}^{m}\) for \(0\leq m\leq a\) with \(m\not\equiv 1\) mod \(e\) are conductor ideals (except if \(e=f=1\), i.e. \(R=R_{0}=\mathbb{Z}/p^{a}\), and only \(R=\mathfrak{p}^{0}\) is a c.i.). If \(\mathfrak{p}^{m}\) is a c.i., \(R_{0}+\,\mathfrak{p}^{m}\) is the smallest subring with that conductor, and the rings \(U+\,\mathfrak{p}^{m}\), where \(U\) is a maximal unramified (over \(R_{0}\)) subring of \(R\), such as \(R_{\omega}\) or the coefficient ring \(R_{\varepsilon}\), are the maximal subrings with c.i. \(\mathfrak{p}^{m}\)._ Proof.: The statement about the smallest subring of conductor \(\mathfrak{p}^{m}\) is Lemma 2.1.1 of [4]. If \(\mathfrak{p}^{m}\) is a c.i., it is the conductor of a subring \(R^{\prime}\) iff \(m\) is minimal w.r.t. \(\mathfrak{p}^{m}\subseteq R^{\prime}\). And \(U+\,\mathfrak{p}^{m}\) is of course a subring of \(R\) when \(U\) is. If \(f>1\) or \(m\not\equiv 1\) mod \(e\), then \(\mathfrak{p}^{m}\) is the conductor of \(R^{\prime}=R_{0}+\,\mathfrak{p}^{m}=\{\,\overline{n}+\Sigma_{i=m}^{\infty}\,r_{i}\pi^{i}\mid\overline{n}\in R_{0}\wedge\forall_{i}\,r_{i}\in\rho\,\}\). This is a local ring with maximal ideal \((\pi^{m})\). Its inverse image in \(D\) is \(\,\mathbb{Z}+\mathfrak{P}_{h}^{m}D\), a regular order. If \(m\mid e\) then \(R^{\prime}\) is a PIR of the type \((\ulcorner a/m\urcorner,e/m,1,p)\). And if \(m\nmid e\), \(p\) is not associated to a power of \(\pi^{m}\) in \(R^{\prime}\). So then, contrary to what is sometimes assumed ([7]), the local ring \(R^{\prime}\), with nilpotent and principal maximal ideal, cannot be a PIR. Observe that the conductor doesn't determine the subring: \(R_{\varepsilon}+\mathfrak{p}^{m}\) and \(R_{0}+\mathfrak{p}^{m}\) both have conductor \(\mathfrak{p}^{m}\), yet they differ whenever \(f>1\). 
2. _Either \(p\neq 0\) in all non-trivial \(S_{i}\), or \(R\) is unramified, that is, \(e=1\), so \(p\) uniformizes \(R\). So \(R=R_{\varepsilon}\) is then its own coefficient ring._ 3. _For every \(i\) for which \(p\neq 0\) in \(S_{i}\), \(t_{i}\equiv(p^{\,f_{i}}-1)t/(p^{\,f}-1)\) mod \((e,p^{\,f_{i}}-1)\), and the image of \(u\) in \(S_{i}\) represents the congruence class of \(\overline{u}_{i}\) in \(I_{i,1}/I_{i,1}^{e_{i}}\), where \(I_{i,1}\subseteq S_{i}\) is the counterpart of the group \(I_{1}\leq R_{\varepsilon}^{\times}\subseteq R\)._ Proof.: (1) is trivial, for \(R^{\prime}\) is a product of Artin local rings and \(R\) does not have any non-trivial idempotents. For (3), if \(R^{\prime}\subseteq R\) is a sub-PIR, we have \(\pi^{\prime}\sim\pi^{c}\) for some \(c\), in the obvious notation. \(\pi^{e}\sim p\sim(\pi^{\prime})^{e^{\prime}}\sim\pi^{ce^{\prime}}\), hence \(e=ce^{\prime}\). And \(a^{\prime}\) is the smallest number for which \(0=(\pi^{\prime})^{a^{\prime}}\sim\pi^{ca^{\prime}}\), i.e. for which \(ca^{\prime}\geq a\). Because \(k^{\prime}=\mathbb{F}_{p^{f^{\prime}}}\) is a subfield of \(k=\mathbb{F}_{p^{f}}\), \(f\) must be a multiple of \(f^{\prime}\), so \(d=(p^{\,f}\!-\!1)/(p^{\,f^{\prime}}\!-\!1)\) is an integer. Next, if \(p=z^{t}u\pi^{e}\) and \(\pi^{\prime}=\delta\pi^{c}\) for a \(\delta\in R^{\times}\), and we factor \(\delta=z^{n}r\) with a one-unit (Einseinheit) \(r\in H_{1}\), we have \(z^{t}u\pi^{e}=p=(z^{\prime})^{t^{\prime}}u^{\prime}(\pi^{\prime})^{e^{\prime}}=z^{dt^{\prime}}u^{\prime}(\pi^{\prime})^{e^{\prime}}\) (on replacing \(z^{\prime}\) by \(z^{d}\), which is a generator of \(H_{0}^{\prime}\)) \(=z^{dt^{\prime}+ne^{\prime}}u^{\prime}r^{e^{\prime}}\pi^{e}\), so, since \(z\in H_{0}\supseteq H_{0}^{\prime}\ni z^{\prime}\) and \(u\) and \(u^{\prime}\) are in the complement \(I_{1}\) of \(H_{0}\), by u.r. it follows that \(t\equiv dt^{\prime}+ne^{\prime}\) mod \(p^{\,f}\!-\!1\), hence \(t\equiv dt^{\prime}\) mod \((e^{\prime},p^{\,f}\!-\!1)\), and that \(u^{\prime}=ur^{-e^{\prime}}\in uI_{1}^{e^{\prime}}\). For the converse, if \(R\) is an \((a,e,f,p,t,u)\), \(f^{\prime}\) a divisor of \(f\), \(d\coloneqq(p^{\,f}\!-\!1)/(p^{\,f^{\prime}}\!-\!1)\), \(e=ce^{\prime}\) for some \(c\) and \(e^{\prime}\), \(a^{\prime}=\ulcorner a/c\urcorner\), \(t\equiv dt^{\prime}\) mod \((e^{\prime},p^{\,f}\!-\!1)\) and \(u^{\prime}\in uI_{1}^{e^{\prime}}\), the subring \(R^{\prime}\), generated by its uniformizer \(\pi^{\prime}\coloneqq z^{dt^{\prime}}u^{\prime}\pi^{c}\) and the \(d\)-th powers of the \(R\)-units of order prime to \(p\), is of designation \((a^{\prime},e^{\prime},f^{\prime},p,t^{\prime},u^{\prime})\). Said powers form the canonical \(\rho^{\prime}-\{0\}\cong(k^{\prime})^{\times}\) for \(R^{\prime}\), if \(k^{\prime}\) denotes the residue field \(R^{\prime}/\pi^{\prime}R^{\prime}\). For \(\Rightarrow\) of (4), denote the image of \(\pi\) in \(S_{i}\) again by \(\pi\), so \(\pi=(\pi,\cdots,\pi)\in S\), and let \(c_{i}\coloneqq\text{ord}_{\pi_{i}}(\pi)\). Then \((\pi_{i}^{e_{i}})_{i}=(p)_{i}=(\pi^{e})_{i}=(\pi_{i}^{c_{i}e})_{i}\), so that \(e_{i}=c_{i}e\) for all \(i\). Likewise, \((\pi_{i}^{a_{i}})_{i}=0=(\pi^{a})_{i}=(\pi_{i}^{c_{i}a})_{i}\), so \(c_{i}a\geq a_{i}\), hence \(a\geq\ulcorner a_{i}/c_{i}\urcorner\). 
On the other hand, if \(m\) is the supremum of the \(\ulcorner a_{i}/c_{i}\urcorner\), we have \(a_{i}/c_{i}\leq m\), so \(a_{i}\leq mc_{i}\), hence \(\pi^{m}=\pi_{i}^{mc_{i}}=0\) in \(S_{i}\), for all \(i\), so \(\pi^{m}=0\) in \(R\), and thus \(a\leq m\). \(\mathfrak{p}_{i}=\pi_{i}S_{i}\times\Pi_{j\neq i}\,S_{j}\in\text{spec}(S)\), hence it lies over \(\mathfrak{p}\), and therefore we have \(k=R/\mathfrak{p}\hookrightarrow S/\mathfrak{p}_{i}=S_{i}/(\pi_{i})\). Thus \(f\mid f_{i}\) for all \(i\). In the image \(R_{i}\) of \(R\) in \(S_{i}\), we have \(\pi^{\ulcorner a_{i}/c_{i}\urcorner}=0\), so \(R_{i}\cong R/\mathfrak{p}^{\ulcorner a_{i}/c_{i}\urcorner}\). If \(\ulcorner a_{i}/c_{i}\urcorner\leq e\), then \(a_{i}\leq c_{i}e=e_{i}\leq a_{i}\), so \(e_{i}=a_{i}\), i.e. \(p=0\) in \(R_{i}\subseteq S_{i}\), and by (7) \(R_{i}\) is a \((\ulcorner a_{i}/c_{i}\urcorner,1,f,p,0,1)\). We check the conditions that (5) imposes for \(R_{i}\) to be a subring of \(S_{i}=(a_{i},e_{i},f_{i},p,t_{i},u_{i})\), with the ones for the \(e\)'s and \(f\)'s already out of the way. However, \(\ulcorner a_{i}/c_{i}\urcorner=\ulcorner a_{i}\cdot 1/e_{i}\urcorner=\ulcorner a_{i}/a_{i}\urcorner=1\) means \(c_{i}e=e_{i}=a_{i}\leq c_{i}\), so \(e\) must be \(1\). And when \(\ulcorner a_{i}/c_{i}\urcorner>e\), \(R_{i}\) is a type \((\ulcorner a_{i}/c_{i}\urcorner,e,f,p,t,u)\) and the test for the \(a\)'s passes. This leaves \(t\) and \(u\), but these are catered for by point (3). To see \(\Leftarrow\) in (4), there are \(\mu_{i}\!:\!R\to S_{i}\) with \(\pi\mapsto\pi_{i}^{c_{i}}\), where \(c_{i}\coloneqq e_{i}/e\). For, since \(f\mid f_{i}\), we have \(k\subseteq k_{i}\), so \(H=\rho-\{0\}\) is a subgroup of \(H_{i}=\rho_{i}-\{0\}\), so that \(\{\sum_{j=0}^{\ulcorner a_{i}/c_{i}\urcorner-1}r_{j}\pi_{i}^{c_{i}j}\mid\forall_{j}r_{j}\in\rho\}\) is a subring of \(S_{i}\). These combine to give \(\mu:\!R\to S\), and the kernel cannot contain any units. If \(\mu(\pi^{n})=0\), one has \((\pi_{i})^{c_{i}n}=0\) in \(S_{i}\) for all \(i\), so \(c_{i}n\geq a_{i}\), and hence \(n\geq\max_{i}(\ulcorner a_{i}/c_{i}\urcorner)=a\), so \(\pi^{n}\) was already zero in \(R\). The remaining details are handled by means of (3). Note that the trivial ring \(1=\mathbb{F}_{p^{0}}\) is a PIR of denomination \((1,\infty,0,p)\) with the prime number \(p\) undetermined, since \(\pi^{1}=0\) and \(p\notin\mathfrak{p}^{e}-\mathfrak{p}^{e+1}=\varnothing\) for all primes \(p\) and \(e\in\mathbb{N}\). \(1\) is of characteristic \(1\), and it must be excluded as an \(S_{i}\) in item (4.ii) of the corollary. Indeed, \(\Pi_{i=1}^{n}\,S_{i}=\Pi_{i=1}^{n+m}\,S_{i}\) for all \(m\), if we take \(S_{i}=1\) (or even \(S_{i}=0!\)) for \(i>n\). The PIR \(1\) is the algebraic closure of its own residue field (which is more than can be said for \(0=\mathbb{Z}\)).
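As a final sanity check, the small resultants appearing in Examples 2 and 3 above can be reproduced mechanically; a short sketch (again ours, not part of the paper):
```
# Sketch (ours): reproducing the resultants of Examples 2 and 3 above.
from sympy import symbols, resultant, expand

X, Y = symbols("X Y")
h = X + 1    # both examples have f = 1 and h(X) = X + 1

r2 = expand(resultant(-Y**2 - 2, h.subs(X, X - Y), Y))  # Example 2: g = -Y^2 - 2
r3 = expand(resultant(Y**2 - 2, h.subs(X, X - Y), Y))   # Example 3: g =  Y^2 - 2
print(r2)    # -X**2 - 2*X - 3, vanishing at -1 + sqrt(-2)
print(r3)    # X**2 + 2*X - 1,  vanishing at -1 + sqrt(2)
```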
2303.01487
quAssert: Automatic Generation of Quantum Assertions
Functional validation is necessary to detect any errors during quantum computation. There are promising avenues to debug quantum circuits using runtime assertions. However, the existing approaches rely on the expertise of the verification engineers to manually design and insert the assertions in suitable locations. In this paper, we propose automated generation and placement of quantum assertions based on static analysis and random sampling of quantum circuits. Specifically, this paper makes two important contributions. We automatically uncover special properties of a quantum circuit, such as purely classical states, superposition states, and entangled states using statistical methods. We also perform automated placement of quantum assertions to maximize the functional coverage as well as minimize the hardware overhead. We demonstrate the effectiveness of the generated assertions in error detection using a suite of quantum benchmarks, including Shor's factoring algorithm and Grover's search algorithm.
Hasini Witharana, Daniel Volya, Prabhat Mishra
2023-03-02T18:49:14Z
http://arxiv.org/abs/2303.01487v1
# quAssert: Automatic Generation of Quantum Assertions ###### Abstract Functional validation is necessary to detect any errors during quantum computation. There are promising avenues to debug quantum circuits using runtime assertions. However, the existing approaches rely on the expertise of the verification engineers to manually design and insert the assertions in suitable locations. In this paper, we propose automated generation and placement of quantum assertions based on static analysis and random sampling of quantum circuits. Specifically, this paper makes two important contributions. We automatically uncover special properties of a quantum circuit, such as purely classical states, superposition states, and entangled states using statistical methods. We also perform automated placement of quantum assertions to maximize the functional coverage as well as minimize the hardware overhead. We demonstrate the effectiveness of the generated assertions in error detection using a suite of quantum benchmarks, including Shor's factoring algorithm and Grover's search algorithm. ## I Introduction Assertions provide a mechanism to describe desirable properties of a system that should be satisfied. Assertion-based validation is widely used for both pre-silicon and post-silicon validation in classical systems [1]. Similar to classical computation, bugs or errors [2] can be present in a quantum computation, constituting a need for verifying quantum programs [3, 4]. _Quantum assertions_ are a promising avenue for validating a quantum program's functionality [5, 6]. We specifically use the term _quantum assertions_ to denote assertions that assert properties of a _quantum state_, and therefore should be evaluated as part of quantum computation at run time. To enable assertion-based validation of quantum systems, we need to answer two important questions: _how to generate and where to place the quantum assertions?_ Recent approaches investigate implementations for quantum assertions [7, 8, 9]. There are a few practical limitations in the existing methods. Quantum assertion generation is an entirely manual process that requires expert knowledge to craft effective assertions that utilize minimal quantum resources and provide reasonable functional coverage. It also relies on the expertise of the designers to identify the placement of these assertions. Moreover, quantum computations are "parallel" in nature due to superposition. A general quantum state may consist of many computational states, as shown in Figure 1. Section II-A provides a comprehensive survey of these quantum states. A quantum operation will act on all these computational states. Furthermore, superposition enables _entanglement_ whereby measurement of one subsystem reveals the state of the other subsystem. The complexity of quantum states, coupled with systems with a large number of qubits, makes it infeasible to manually generate effective assertions. In this paper, we propose an automated framework for generation of quantum assertions that consists of the following tasks. First, the quantum circuit is statically analyzed to uncover common functionalities. Next, the qubits and gates are randomly sampled from a quantum circuit with respect to the discovered functionalities. Then, the properties of the gates are sorted into three categories of assertions (classical, uniform superposition, or entanglement) using classical simulation of the selected sub-circuit of interest and statistical analysis. 
Finally, the placement of assertions is done based on the assertion type and the results of random sampling of the circuit. Specifically, this paper makes the following contributions: * Surveys quantum states of interest to define three classes of quantum assertions. * Proposes an automated algorithm for generation of quantum assertions using static analysis and random sampling. * Demonstrates the utility of these assertions in verifying quantum circuits. This paper is organized as follows. Section II provides relevant background and surveys related efforts. Section III defines three major classes of quantum assertions. Section IV describes our automated assertion generation framework. Section V presents experimental results. Finally, Section VI concludes the paper. ## II Background and Related Work This section first surveys different quantum states. Next, it presents the related work on quantum assertions. ### _Survey of Quantum States_ As shown in Figure 1, a quantum state can be in two classes of states: classical and superposition [10]. The superposition state can be further classified into multiple categories including uniform, entangled, random, etc. Fig. 1: A quantum state can be either in a single computational state or a general combination of computational states (classical and superposition); the superposition state can be further divided in various categories as outlined in Section II-A. The remainder of this section briefly describes different quantum states. **Classical States:** A quantum state that is in only one of the basis states is considered a classical state. For \(n\) qubits, a classical state has the form, up to a global phase \(\theta\), \[\left|\psi_{C}\right\rangle=e^{i\theta}\sum_{i=0}^{2^{n}-1}\delta_{i,k}\left|i\right\rangle=e^{i\theta}\left|k\right\rangle\] where \(\delta_{i,k}\) is the Kronecker delta denoting that only the \(k\)-th state is active. Classical states often occur in encoding classical information to a quantum computer, or in classical operations such as addition or comparison. **Uniform Superposition:** In contrast to classical states, uniform superposition states spread equally across all possible basis states. Specifically, for \(n\) qubits, a uniform superposition state has the form, up to a global phase \(\theta\), \[\left|\psi_{S}\right\rangle=\frac{e^{i\theta}}{\sqrt{2^{n}}}\sum_{i=0}^{2^{n}-1}\left|i\right\rangle\] **Cat Entangled State:** If a quantum state cannot be written as a product of individual qubit states, it is known as an entangled state. A special case of a highly-entangled state is when, for \(n\) qubits, the state is a combination of all-zeros and all-ones: \[\left|\psi_{E}\right\rangle=\frac{e^{i\theta}}{\sqrt{2}}\left(\left|0\right\rangle^{\otimes n}+\left|1\right\rangle^{\otimes n}\right).\] For two qubits and three qubits, these states are referred to as Bell and Greenberger-Horne-Zeilinger (GHZ) states, respectively. 
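The three state classes above are straightforward to prepare and sample in simulation. The following sketch (ours, not taken from the paper; it uses Qiskit with the Aer simulator, the same stack the authors report using in Section V) makes the classes concrete:
```
# Illustrative sketch (ours): preparing and sampling the three state classes
# with Qiskit; requires the qiskit and qiskit-aer packages.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

n = 3
sim = AerSimulator()
circuits = {}

# Classical state |psi_C> = |101>: only one basis state is ever observed.
qc = QuantumCircuit(n)
qc.x(0)
qc.x(2)
circuits["classical"] = qc

# Uniform superposition |psi_S>: H on every qubit spreads amplitude
# equally over all 2^n basis states.
qc = QuantumCircuit(n)
qc.h(range(n))
circuits["uniform"] = qc

# Cat (GHZ) state |psi_E>: only all-zeros and all-ones appear, ~50% each.
qc = QuantumCircuit(n)
qc.h(0)
for q in range(1, n):
    qc.cx(q - 1, q)
circuits["cat"] = qc

for name, qc in circuits.items():
    qc.measure_all()
    counts = sim.run(qc, shots=4096).result().get_counts()
    print(name, counts)
```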
### _Related Work_ There are many promising assertion generation approaches in the classical domain [11]. However, they are not applicable for generating quantum assertions, since the output is deterministic for a given input in classical computing, while in quantum computing output values result from destructive measurements and follow a probability distribution. There are early efforts to discuss the importance and applicability of quantum assertions [8, 9]. Recent approaches explored different assertion generation methods such as ancilla-based methods [7], statistical-based methods [8], and projection-based methods [9]. There are closely related efforts in quantum error correction and formal verification of quantum circuits. Error-correcting codes assume a certain noise model, and provide special state encodings that can correct a state if an error is detected. Quantum assertions, although similar, are not concerned with correcting a state and only seek to assert a given property of the state. There are also recent efforts to check the correctness of the output of a quantum circuit, such as through formal verification of quantum circuits [12, 13], or by assuming domain-specific knowledge (e.g., post-selection rules) to ignore incorrect outputs of a quantum computation [14, 15]. To the best of our knowledge, our proposed approach is the first attempt at automated generation of quantum assertions. ## III Classes of Quantum Assertions Any quantum algorithm will consist of a series of quantum gates that evolve an initial quantum state to an arbitrary desired output state. Although this strategy is similar to classical computation, quantum gates must be reversible, linear, and unitary. Additionally, at any given moment in the computation, the quantum state is a general superposition (combination) of all possible states. Namely, given \(n\) qubits initialized to \(\left|0\right\rangle^{\otimes n}\), and a quantum circuit \(\mathcal{U}\) consisting of \(m\) gates in a sequence \(\{U_{i}\}_{i=1}^{m}\), the output of \(\mathcal{U}\) gives a final state \[\left|\psi\right\rangle=\mathcal{U}\left|0\right\rangle^{\otimes n}=\left(U_{m}\circ U_{m-1}\circ\ldots\circ U_{1}\right)\left|0\right\rangle^{\otimes n}.\] Our goal is to add an assertion, \(\mathcal{A}\), or even a set of assertions \(\{A_{i}\}\), to the existing quantum circuit \(\mathcal{U}\) that asserts a property of the quantum state at the given point in the quantum circuit, \(\left|\psi_{i}\right\rangle\). For example, asserting the quantum state after the application of the first quantum gate would yield: \[\left|\psi^{\prime}\right\rangle=\left(U_{m}\circ U_{m-1}\circ\ldots\circ\mathcal{A}\circ U_{1}\right)\left|0\right\rangle^{\otimes n^{\prime}}.\] In general, there are infinitely many properties that a quantum state \(\left|\psi_{i}\right\rangle\) may have, since a quantum state is a general superposition consisting of complex-valued amplitudes. Fortunately, certain quantum states, and specific sequences of quantum gates, yield desirable and well-known states (classical, superposition, and entanglement), which are described in Section II-A. In this section, we present three types of quantum assertions, as shown in Figure 2, that are capable of checking these states. **Classical Quantum Assertions:** This assertion type has the capability of checking whether the quantum state is classical at a specific breakpoint for a given quantum design. Classical assertions can be used to debug a quantum circuit, to identify whether any error/bug occurred that changes the classical state to another state, or changes the expected classical value to some other classical value. A classical quantum assertion is composed of a combinational assertion using the propositional logic equivalence (\(\Leftrightarrow\)). The classical assertion template is \(A:assert(\left|input\right\rangle\Leftrightarrow\left|output\right\rangle)\). 
Use of the equivalence operator also checks other qualities of the quantum circuit, such as reversibility. An example placement of a classical assertion is shown in Figure 2 as \(assert0\). Fig. 2: Three important classes of quantum assertions: classical (assert0), superposition (assert1), and entanglement (assert2). **Uniform Superposition Quantum Assertions:** This assertion type can check whether the quantum state is a uniform superposition at a specific break-point. The rest of the paper uses the term superposition assertion to represent this assertion type. Superposition assertions can be used when debugging a quantum circuit to identify any error which can change the uniform superposition state to another state. A superposition quantum assertion is composed of a combinational assertion using the propositional logic implication (\(\rightarrow\)). The superposition assertion template is \(A:assert(|input\rangle\rightarrow|\psi_{S}\rangle)\). A sample superposition assertion is shown in Figure 2 as \(assert1\). **Entanglement Quantum Assertions:** This assertion type has the capability of checking whether the quantum state is in a Cat state at a specific break-point. Entanglement assertions can be used when debugging a quantum circuit to identify whether any error occurred that changes the Cat state to any other state. An entanglement quantum assertion is composed of superposition assertions using the propositional logic implication. The entanglement assertion template is \(A:assert(|input\rangle\rightarrow|\psi_{E}\rangle)\). A sample entanglement assertion is shown in Figure 2 as \(assert2\). ## IV Quantum Assertion Generation Figure 3 shows an overview of our proposed assertion generation framework that consists of two major phases: assertion mining and assertion implementation. Fig. 3: An overview of our assertion generation framework that consists of two phases: assertion mining and assertion implementation. We implement assertion mining in four steps. The first step statically analyzes the design and identifies the placements to measure. The second step instruments the placeholders by adding measure points to the identified spots. The third step simulates the placeholders using random inputs to get the execution traces. The fourth step analyzes the traces and mines three different classes of assertions (described in Section III) based on the statistical analysis of the output distributions. After a successful assertion mining process, we can proceed to assertion implementation, which consists of two important steps. The first step selects assertions based on the coverage goal. The next step embeds the measurement points with a suitable assertion insertion process into the design. Finally, the quantum circuit with assertions can be executed on a real quantum machine to detect errors during run-time. In this section, we describe both assertion mining and assertion implementation in detail. ### _Assertion Mining_ Algorithm 1 shows the assertion mining process. As shown in the algorithm, the first step is static analysis of the quantum circuit (\(QC\)) to identify placeholders (\(PH\)) for assertion mining. The static analysis will uncover the common functionalities, such as inclusion of Hadamard gates (\(H\)), CNOT gates, X gates, etc. By uncovering the common functionalities, it is easy to identify which qubits are important to check in the circuit. These identified qubits are then randomly sampled from the quantum circuit. In other words, out of all the qubits and different placeholders, some are selected randomly. 
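As a rough illustration of this static analysis and random sampling step, consider the following sketch (ours, not the authors' implementation; the gate names and the `find_bit` helper follow Qiskit's conventions):
```
# Sketch (ours): static analysis of a Qiskit circuit followed by random
# sampling of candidate assertion placeholders.
import random
from qiskit import QuantumCircuit

INTERESTING = {"h", "cx", "x"}   # Hadamard, CNOT, and X gates

def candidate_placeholders(qc: QuantumCircuit, num_samples: int = 2):
    # Collect (position, qubit indices) for every gate of interest.
    spots = []
    for pos, inst in enumerate(qc.data):
        if inst.operation.name in INTERESTING:
            qubits = [qc.find_bit(q).index for q in inst.qubits]
            spots.append((pos, qubits))
    # Randomly sample a subset of the discovered spots as placeholders.
    return random.sample(spots, min(num_samples, len(spots)))

qc = QuantumCircuit(3)
qc.x(0); qc.x(1); qc.h(2); qc.cx(2, 0)
print(candidate_placeholders(qc))
```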
An example of random qubit and placeholder selection is shown in Figure 4. For the first placeholder (\(A_{0}\)), only 3 qubits are selected and the measurement is performed immediately after the two X gates. The second placement (\(A_{1}\)) consists of one qubit and the measurement is conducted after the H gate. Fig. 4: Selection of potential placement locations. To perform the identification, we take inspiration from quantum circuit cutting [16]. We formulate the act of measurement as a projective operation \(O_{i}\), and so the probability value of measuring a set of qubits is \(\mathbb{E}(\rho)=\mathrm{Tr}(O_{i}\rho)\), where \(\rho\) is the density matrix of the qubits. Considering an example consisting of a single qubit, we may express the state of the qubit as \[\rho=\frac{1}{2}\sum_{i=1}^{8}c_{i}\,\mathrm{Tr}(O_{i}\rho)\,p_{i}\] where the Pauli matrices correspond to all the projective operators \(O_{i}\), and the \(p_{i}\) are the operators' eigenprojectors, with corresponding eigenvalues \(c_{i}\). Following quantum circuit cutting, we may write the term as two quantum circuits: one for computing the expectation value \(\mathrm{Tr}(\rho O_{i})\), and the other for preparing the eigenstate \(p_{i}\). By finding such occurrences in a quantum circuit, we can readily measure the expectation value, and once that is finished, we can simply reinitialize the state \(p_{i}\) to continue the quantum computation. Due to this property, this is a good place to insert a quantum assertion. To find these points practically, we utilize randomized circuit cutting by randomly inserting measure-and-prepare channels [17]. As shown in line 5 of the algorithm, for each placeholder identified during static analysis, design instrumentation is conducted. During design instrumentation, we add measurement points for the selected qubits in the placement of the placeholder. The sub quantum circuit (\(SQC\)) is used in the next steps of the assertion mining process. The next step is to simulate the \(SQC\) using different inputs. For each iteration, a random input is generated and the identified \(PH\) is simulated using the random input. When simulating the \(PH\) section (\(SQC\)) of the original circuit, it collapses the quantum state to a classical state at the measurement points, and the probability distribution of the outputs is stored in the execution trace. ```
1: Input: Quantum Circuit (\(QC\)), Iterations (\(I\))
2: Output: Assertions (\(A\))
3: \(PH\leftarrow\) StaticAnalysis(\(QC\))
4: \(A\leftarrow\emptyset\)
5: for \(p\in PH\) do
6:   \(trace\leftarrow\emptyset\)
7:   \(SQC\leftarrow\) Instrumentation(\(p\))
8:   for \(i\in I\) do
9:     \(input\leftarrow\) RandomInput()
10:    \(trace\leftarrow trace\cup\) RandomSim(\(input\), \(SQC\))
11:   end for
12:   \(assertion\leftarrow\) StatisticalAnalysis(\(trace\))
13:   \(A\leftarrow A\cup assertion\)
14: end for
15: Return \(A\)
``` **Algorithm 1** Assertion Mining After the simulation is completed for each placeholder and the iterations, the execution trace is analyzed to mine the assertions. We are using chi-squared testing to analyze the probability distributions in the execution trace. The chi-squared test is a statistical hypothesis test that is used to identify significant differences between expected and observed frequencies [18]. We use chi-squared testing to determine whether an output probability distribution lies in one of the three classes of states: classical, uniform superposition, and Bell state [8]. 
The results are used to generate the three types of assertions as described below. **Classical Quantum Assertions:** For classical assertions, the hypothesis of the chi-square test is selected such that the expected distribution should be a unimodal distribution with one peak value. After comparing the expected distribution with the observed distribution, if the \(p\)-value of the test is less than 0.05, the null hypothesis is rejected. This means that the observed distribution is not unimodal. If the \(p\)-value is higher and closer to 1, the null hypothesis is accepted. **Uniform Superposition Quantum Assertions:** For these assertions, the hypothesis of the chi-square test is selected such that the expected distribution should be a uniform distribution. The uniform probability value is calculated as \(1/2^{n}\), where \(n\) is the number of qubits. For example, with 2 qubits the output can have 4 (\(2^{2}\)) states (\(00\), \(01\), \(10\), \(11\)) and the probability of each state is roughly 0.25 (\(1/4\)). If the \(p\)-value is less than 0.05, the null hypothesis is rejected, meaning the observed distribution is not uniform. Otherwise, if the \(p\)-value is greater than 0.05, the observed distribution is identified as a uniform distribution. **Entanglement Quantum Assertions:** For entangled assertions, the chi-square test is performed on a contingency table. The expected distribution is represented in a contingency table as shown in Table I, where the probability is 0.5 when the first qubit is '0' followed by the second qubit being '0', or the first qubit is '1' followed by the second qubit being '1'. All the probabilities for other occurrences are 0. The chi-square test is performed between the observed distribution and the probabilities of the contingency table to determine the Bell state. If the \(p\)-value is less than 0.05, the hypothesis is rejected; otherwise the quantum state is accepted to be a Bell state. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Probability}} & \multicolumn{2}{c|}{Second qubit} \\ \cline{3-4} \multicolumn{2}{|c|}{} & 0 & 1 \\ \hline \multirow{2}{*}{First qubit} & 0 & 0.5 & 0 \\ \cline{2-4} & 1 & 0 & 0.5 \\ \hline \end{tabular} \end{table} TABLE I: Contingency table for Bell state 
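In code, the three hypothesis tests just described reduce to a handful of calls to a chi-squared routine. The sketch below (ours, not the paper's implementation) follows that recipe with SciPy; note that the final classical check uses a simple dominant-peak threshold as a pragmatic stand-in for the paper's unimodality hypothesis test:
```
# Sketch (ours): chi-squared classification of a measurement-count
# distribution into the three assertion classes described above.
from scipy.stats import chisquare

ALPHA = 0.05

def classify(counts: dict, n: int) -> str:
    total = sum(counts.values())
    observed = [counts.get(format(k, f"0{n}b"), 0) for k in range(2 ** n)]

    # Uniform superposition: SciPy's default expectation is uniform (1/2^n).
    if chisquare(observed).pvalue > ALPHA:
        return "uniform superposition"

    # Cat/Bell state: all mass split evenly between |0...0> and |1...1>,
    # matching the 0.5/0.5 contingency table (Table I).
    ends = [observed[0], observed[-1]]
    if sum(ends) == total and chisquare(ends).pvalue > ALPHA:
        return "cat entangled"

    # Classical: a single dominant peak (pragmatic stand-in for the
    # paper's unimodality hypothesis test).
    if max(observed) > 0.99 * total:
        return "classical"
    return "unclassified"
```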
### _Assertion Implementation_ After assertion mining, we perform random sampling on the mined assertions and implement the selected assertions. Implementation of assertions is an active research problem. Evaluating a property of a quantum state requires measurement, which will collapse the state to a basis state with some probability. Therefore, after conducting a measurement, it may not be possible to accurately continue the remaining quantum computation. To overcome this dilemma, assertions can be implemented using one of the following alternatives: 1. encoding the quantum state such that the measurement does not collapse the encoded state, as shown in Figure 5(a). 2. conducting a direct measurement and simply ignoring the rest of the computation, as shown in Figure 5(b). 3. using a projection-measurement that identifies if the quantum state is in a specified subspace of the total state space, as shown in Figure 5(c). Fig. 5: Three alternatives for assertion implementations. Each of the choices comes with its own disadvantages or overhead: (a) requires ancilla qubits and additional gates, (b) requires restarting the execution and removing the assertion to proceed with the remaining computation, and (c) requires classically constructing \(2^{n}\times 2^{n}\) matrices, which becomes infeasible for large subspaces. However, we can optimize our assertion implementation by appropriately choosing the implementation that best suits the given assertion. For example, asserting a classical state may be best implemented by directly measuring, since the measurement of a classical state will simply collapse it to itself, and therefore not obstruct the remaining computation. Asserting an entangled state is well suited for a projection-based implementation, since the entangled state belongs to the stabilizer subspace which is determined by the stabiliser subgroup. Since the stabiliser subgroup is generated by at most \(n\) Pauli operators, the implementation scales linearly. ## V Experiments This section demonstrates the effectiveness of our automated assertion generation framework. First, we describe our experimental setup. Next, we present the results of assertion generation. Finally, we evaluate the quality of the generated assertions for error detection. ### _Experimental Setup_ For experimental evaluation, we have selected six quantum circuits that are widely used in the quantum assertions community: 4-qubit adder, 5-qubit Shor's algorithm, 6-qubit Simon's algorithm, 5-qubit Grover's algorithm, 7-qubit QFT, and 3-qubit quantum teleportation algorithm. Our approach will work irrespective of the number of qubits in a design. The assertion generation framework is implemented using Python and Qiskit. The classical simulation is performed using the Aer simulator, utilizing low-rank stabilizer simulation for larger designs [19]. We ran our experiments on a machine with an Intel i7-5500U @ 3.0GHz CPU and 16GB RAM. ### _Experimental Results_ In this section, we present our experimental results for the six quantum circuits in three avenues: assertion mining, error coverage, and quality of assertions. #### V-B1 Assertion Mining Results The design is first statically analyzed and randomly sampled to identify which qubits to measure and where to measure in the design. Then test vectors are generated randomly and the randomly sampled placeholders (\(PH\)) are simulated to get the execution traces. These traces are analyzed using statistical chi-square testing to automatically identify the quantum state at the measured instance. The sample size is \(2^{13}\). Figure 6 shows the number of assertions generated for the six quantum circuits while increasing the number of test inputs (simulation iterations). Assertion generation for the three types of assertions, classical (\(|\psi_{C}\rangle\)), uniform superposition (\(|\psi_{S}\rangle\)) and cat entanglement (\(|\psi_{E}\rangle\)), is shown in blue, green and orange, respectively. For example, in the case of the 4-qubit adder, a total of 16 classical assertions are mined using classical inputs with one placeholder after the whole process. Since the design has 4 inputs, there are 16 (\(2^{4}\)) possible classical assertions. A total of 4 superposition assertions are mined using superposition inputs with one placeholder for the first two qubits after the whole process. The uniform superposition state can be observed only when the input of the first two qubits is '11' (i.e., when an input bit is 1, an H gate is applied to put that input into superposition). Out of 16 possible inputs, only 4 of them have '11' for the first two qubits, which makes the 4 uniform superposition assertions. A similar process is conducted for the other circuits with different qubit selections and placeholders. 
#### V-B2 Coverage of Randomly Inserted Bugs It is important to verify the quality of the assertions that are mined from our framework. For this experiment, we randomly selected 5 assertions that were mined for each circuit and implemented the assertions as described in Section IV-B. Then 10 bugs were inserted in each design. These bugs include: change of existing gates to different gates, inclusion of new gates, removal of some of the existing gates, random rotations applied to different qubits, etc. Figure 7 presents the error coverage results with increasing test vectors for all six quantum circuits. We generated random test vectors, simulated them, and observed the error coverage for different test vectors. If any of the 5 assertions failed for the given test vectors due to an inserted error, then it was counted toward the error coverage for that number of test vectors. As shown in Figure 7, for small designs like the adder (4-qubit) and teleportation (3-qubit), 100% error coverage is achieved with fewer than 20 test cases. This is because any of the 5 assertions is likely to get activated, since there are only 16 and 8 possible inputs for the adder and teleportation circuits, respectively. For larger designs, like QFT with \(2^{7}\) possible inputs and Simon's with \(2^{6}\) possible inputs, higher error coverage is achieved with more test vectors. In the case of QFT, only 90% of the errors are covered, due to the fact that one error included in QFT did not change the uniform superposition state of the output. #### V-B3 Trade-off Between Number of Assertions and Time Insertion of assertions will increase the area overhead of a circuit due to the addition of new qubits and gates, as described in Section IV-B. However, if the number of inserted assertions is small, the error detection will take more simulation iterations with random inputs. This is shown in Figure 8. For this experiment, we have added one bug to the QFT design and kept increasing the number of inserted assertions. Then we check the number of iterations needed to identify the bug for a varying number of inserted assertions. For example, when there is only one assertion inserted in the design, it took 150 random test vectors to identify the bug. This is because there are 128 different possible test vectors and only one of the test vectors matches the input of the assertion. For all the other test vectors, although the design is buggy, the assertion did not get activated. With 15 inserted assertions, it took less than 10 test vectors to detect the bug. An important consideration is to find a trade-off between the number of assertions to insert in the circuit and the number of test vectors needed to activate the assertions. Depending on the design and time constraints, designers can choose the most profitable number of assertions to insert to obtain higher error coverage. 
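A bug-injection harness in the spirit of this experiment is easy to set up. The following sketch (ours, not the authors' code) implements one of the listed error types, swapping an existing gate for a different one, on a Qiskit circuit:
```
# Sketch (ours): injecting a random bug of the "change of existing gates"
# variety into a Qiskit circuit.
import random
from qiskit import QuantumCircuit
from qiskit.circuit import CircuitInstruction
from qiskit.circuit.library import HGate, XGate, ZGate

def inject_gate_change_bug(qc: QuantumCircuit, seed: int = 0) -> QuantumCircuit:
    rng = random.Random(seed)
    buggy = qc.copy()
    # Pick a random single-qubit instruction to corrupt.
    idxs = [i for i, inst in enumerate(buggy.data) if len(inst.qubits) == 1]
    i = rng.choice(idxs)
    old = buggy.data[i]
    # Replace its gate with a randomly chosen different one.
    buggy.data[i] = CircuitInstruction(
        rng.choice([HGate(), XGate(), ZGate()]), old.qubits, old.clbits)
    return buggy
```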
## VI Conclusion Debugging a quantum circuit is a major challenge since it is harder to observe the internal states of quantum systems compared to its classical counterpart. One way to overcome this challenge is to use assertions to debug quantum circuits. While there are existing efforts in utilizing quantum assertions, they rely on the expert knowledge of the designer to manually craft and place these assertions. In this paper, we presented a framework to automatically generate quantum assertions that has the capability to check three different quantum states: classical, superposition, and entanglement. Extensive experimental results demonstrated the effectiveness of our framework in generating high-quality assertions to detect bugs in diverse quantum circuits. ## VII Acknowledgments This work was partially supported by a grant from the Semiconductor Research Corporation (2020-CT-2934).
2305.12013
Constructing Dreams using Generative AI
Generative AI tools introduce new and accessible forms of media creation for youth. They also raise ethical concerns about the generation of fake media, data protection, privacy and ownership of AI-generated art. Since generative AI is already being used in products used by youth, it is critical that they understand how these tools work and how they can be used or misused. In this work, we facilitated students' generative AI learning through expression of their imagined future identities. We designed a learning workshop - Dreaming with AI - where students learned about the inner workings of generative AI tools, used text-to-image generation algorithms to create their imagined future dreams, reflected on the potential benefits and harms of generative AI tools and voiced their opinions about policies for the use of these tools in classrooms. In this paper, we present the learning activities and experiences of 34 high school students who engaged in our workshops. Students reached creative learning objectives by using prompt engineering to create their future dreams, gained technical knowledge by learning the abilities, limitations, text-visual mappings and applications of generative AI, and identified most of the potential societal benefits and harms of generative AI.
Safinah Ali, Daniella DiPaola, Randi Williams, Prerna Ravi, Cynthia Breazeal
2023-05-19T21:56:12Z
http://arxiv.org/abs/2305.12013v1
# Constructing Dreams using Generative AI ###### Abstract Generative AI tools introduce new and accessible forms of media creation for youth. They also raise ethical concerns about the generation of fake media, data protection, privacy and ownership of AI-generated art. Since generative AI is already being used in products used by youth, it is critical that they understand how these tools work and how they can be used or misused. In this work, we facilitated students' generative AI learning through expression of their imagined future identities. We designed a learning workshop - Dreaming with AI - where students learned about the inner workings of generative AI tools, used text-to-image generation algorithms to create their imagined future dreams, reflected on the potential benefits and harms of generative AI tools and voiced their opinions about policies for the use of these tools in classrooms. In this paper, we present the learning activities and experiences of 34 high school students who engaged in our workshops. Students reached creative learning objectives by using prompt engineering to create their future dreams, gained technical knowledge by learning the abilities, limitations, text-visual mappings and applications of generative AI, and identified most of the potential societal benefits and harms of generative AI. Keywords: Generative AI, K-12 AI Learning, Constructionism. ## 1 Introduction Generative Artificial Intelligence (AI) algorithms are types of machine learning models that are used to create novel data samples that are similar to the examples they were trained on. These algorithms have been used to create a variety of media such as text, images, videos, 3D models, and music, and have found applications in art, imaging, engineering, protein folding and modeling [2, 22]. Rapid expansion in their capabilities has led to generative AI algorithms making their way into consumer tools such as OpenAI's ChatGPT, which was reported to have surpassed 100 million users within five months of its first release [17]. There has also been a rapid reduction in the barriers to access these tools. Tools like Stability AI's Dream Studio\({}^{1}\) or OpenAI's Dall-E 2\({}^{2}\) made text-guided image generation available to users with little technical knowledge or complex computational resources. Media generated using AI and features supported by generative AI have also made their way to social media platforms such as TikTok, Instagram, Reddit and Twitter, which are frequently used by high school students [1]. Generative AI algorithms reduce barriers to creative expression and introduce a novel medium, opening up new creative possibilities. In contrast, generative AI tools also harbor several ethical concerns, such as ownership and copyright, data security, plagiarism, generation of fake information and the spread of misinformation [21]. They can have an adverse effect on creative careers, and have been shown to reproduce harmful biases against groups already historically marginalized by technology. Although generative AI tools and media are commonplace in technologies used by youth, and have direct implications for their education, communities and future careers, previous work has demonstrated how K-12 students have little understanding of how these tools work [10, 4]. As the use of generative AI tools proliferates, it becomes increasingly imperative to educate youth about them. There have been very few efforts around designing curricula and learning materials for K-12 generative AI literacy [4]. 
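For orientation, the kind of text-guided image generation offered by such tools can also be driven programmatically with open-source models. The sketch below (ours, not part of the study, which used the Dream Studio web interface) uses Hugging Face's diffusers library with a Stable Diffusion checkpoint; the checkpoint id and the prompt are illustrative assumptions:
```
# Minimal text-to-image sketch (ours): Stable Diffusion via the open-source
# diffusers library; the workshop itself used the Dream Studio web UI.
# The checkpoint id and the prompt below are illustrative, not from the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "me in ten years as a marine biologist, diving near a coral reef"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("dream.png")
```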
Making in creative media is a powerful means of self-expression for youth, which in turn supports their technical learning [15]. Given the creative nature of generative algorithms and their ability to support children's creative expression, it is apt to use them to support both self-expression and technical learning. Expression of self-identity and narratives has been central to creative and STEM learning pedagogy [12, 23]. In this work, we designed an informal learning workshop that employs (1) constructionism as a means of learning the technological capabilities of generative AI algorithms, (2) expression of self-identity and narratives for the future as a means of enabling creative expression and sharing, and (3) critical debate as a means for reflecting on the societal and ethical implications of generative algorithms. We designed the workshop "Constructing Dreams", where high schoolers imagined a future dream that they visualize for themselves, and used a text-to-image generative tool called Dream Studio by Stability AI to visualize their dreams. Through this informal activity, we aimed to achieve the following learning objectives: ### Creative learning objectives: C1. Students create input prompts appropriate for generating images using a generative AI tool. C2. Students iteratively tweak their creations to match their creative goals. ### Technical learning objectives: T1. Students are aware of the abilities of text-to-image generation algorithms. T2. Students identify limitations in the abilities of text-to-image generation algorithms. T3. Students identify visual features in a generated image mapped to parts of textual prompts. T4. Students map patterns in generated images to patterns in underlying datasets used to train the algorithm. T5. Students identify areas of application of generative algorithms. ### Ethical learning objectives: E1. Students identify potential harms of generative AI: algorithmic bias, copyright infringement, creation of fake media leading to the spread of misinformation, plagiarism, job replacement and data privacy. E2. Students voice their opinions about the school policy around the use of generative AI in classrooms. We conducted the activity with two student groups (N=13 and N=21). In this paper, we share the design of the activity, students' creations and their responses that allude to the aforementioned learning objectives. ## 2 Background ### Generative AI For the purpose of this work, we refer to data-driven generative machine learning algorithms as Generative AI. Generative AI algorithms learn from patterns of existing data (also known as training data), and generate novel data in the form of images, videos, audio, text, and 3D models. While generative models have long existed, recent years have seen a rapid expansion in the field of data-driven generative AI with the introduction of GANs, VAEs and diffusion models. Algorithms like Dall-E, which received both text-image pairings as inputs, coupled image generation algorithms with large language models and enabled text-guided image generation [20]. Tools developed by OpenAI such as ChatGPT or Dall-E 2, and subsequently by Stability AI\({}^{1}\) or Midjourney\({}^{2}\), made generative AI accessible to everyone with internet access. There has been significant discourse about generative AI algorithms' potential impact on students' learning in classrooms and future careers [6]. On one hand, generative AI tools have the potential to give young creators powerful new tools to express their ideas in different mediums. 
On the other hand, some worry that over-reliance on generative tools may hinder creativity in the classroom. Moreover, generative AI tools harm visual artists by using their work without their consent or by affecting their future prospects, are accompanied by data protection and privacy concerns, can potentially lead to misinformation, and contain biases that amplify stereotypes. Still, K-12 students do not have a complete understanding of how these everyday algorithms work [3]. We believe that it is imperative, now more than ever, for high school students to learn about what generative algorithms are, how they actually work, what they can be used for and what their ethical and societal implications can be. Footnote 1: [https://stability.ai/](https://stability.ai/) Footnote 2: [https://www.midjourney.com/](https://www.midjourney.com/) ### K-12 AI Literacy and constructionism Traditional AI learning requires familiarity with mathematical concepts and programming skills that K-12 students often lack. The AI4K12 national initiative established five big ideas in K-12 AI literacy that encompass what youth must know about AI [24]. Curricula, learning toolkits and activities have since emerged that aim to teach children about the 5 big ideas, several of them focusing on creative applications of AI [29]. Previous work used block-based coding environments that have enabled students to train supervised learning algorithms and use them in projects [26, 13, 16]. Researchers developed K-12 AI education curricula that emphasize constructionist learning, designing with ethics in mind, and developing a creative mindset [5]. Previous work also used learning trajectories for teaching middle school students about generative models, where they explored the technical constitution, practical applications and ethical implications of generative algorithms such as GANs through unplugged activities, creative exercises and ethical reflections [4]. Students learned to generate text and images using generative algorithms and could articulate benefits and harms of generative AI for our society. Since then, generative AI tools have become more affordable, user-friendly, and accessible. Furthermore, their outputs have become higher quality, leading to more widespread application of generative AI tools in various fields, including creative arts, business, and scientific research. In this work, we learn from the constructionist learning approach of previous AI curricula that use AI project building via digital and physical toolkits as a means to teach AI concepts, as well as their focus on developing students' ethical and creative abilities. The constructionist learning approach also emphasizes the importance of engaging in conversation with one's own and others' artifacts [18]. We use a sharing and discussion approach, where students reflect on generative AI through their and their peers' generated artifacts. In addition to existing approaches, we also leverage expression of self-identity as a means for allowing students to create with and learn about generative AI. ### Identity construction in technical education Previous research has identified technical education as a powerful avenue for identity construction. [25] developed identity construction environments (ICEs) as technological tools designed with the goal of supporting young people in the exploration of self-identity by engaging learners in the design of a graphical virtual city and its social organization. 
Studies have explored the role of identity in learning coding through game-making [14]. [19] used a narrative-driven curriculum to spark non-dominant girls' interest in STEM activities and identification with the discipline. [7] also developed the Storytelling Agent Generation Environment (SAGE), an authoring environment where children created programmable storytellers that allowed them to explore and present their identities. They demonstrated how, through the process of building their own storytellers and exploring and communicating identity issues, children developed the structured thinking required for programming and debugging. Within AI learning, [27] explored representation of identity and stereotypes to teach middle schoolers about racial bias in classification machine learning algorithms. Identity representation exercises become especially relevant avenues of generative AI learning because students go in with a visual expectation of how they want to represent themselves and aim to create a language description. Liu et al. (2022) explore how prompt engineering plays a major role in creators' abilities to use text-to-image generative models. Creators are met with a generation that could be close to or far from their original expectation, and make correlations with how the AI represents words in their natural language description. They may draw mappings of which visual features correlated with which words (e.g., "in Disney style" was rendered as colorful wide strokes and friendly cartoons), which related concepts were represented (e.g., "sinking" was related to an unmentioned water environment), which words were considered more prominent (e.g., "a writer, painter and a cobbler" was visually represented only as a painter), and what stereotypes and biases exist in the algorithm (e.g., "a doctor" was represented as a White man). In this work, we build upon previous work using identity construction for technical learning by using a future identity construction and sharing exercise with generative AI, with the goal of teaching high school students about its technical and ethical implications. Representations of the identities of others, particularly of non-White people in Western popular culture, have been riddled with harmful stereotypes [11, 28]. Stereotypical representation of groups underrepresented in computing, such as women, has been shown to be a barrier to inclusion for members of the group [9]. We chose an exercise involving depicting future selves so that students gain power over constructing their own representations, intentionally overcoming stereotypes and potentially visualizing anti-stereotypes that aid their sense of belongingness in technical fields. Sharing their dreams through textual and visual representations was also a means of creative storytelling and sharing aspirations with their peers.

## 3 Methods

### Participants

We recruited high school students from two independent high-school STEM programs that provide students with informal learning opportunities: [name redacted] in the US and [name redacted] in India. Students voluntarily signed up to participate in our workshops; participation was not part of their academic requirements. Workshop 1 (W-1) was conducted with 17 students in person at [university name redacted], and workshop 2 (W-2) was conducted remotely with 21 students located in [city name redacted], India. As part of their participation in the workshop, students could also take part in our research, but they were not obligated to.
In W-1, 13 out of 17 students consented for their data to be recorded for research purposes, and in W-2, all students provided informed consent for their responses to be recorded. Responses from students who did not consent were neither recorded nor presented in this paper. Participant demographics are presented in Table 1. All students were enrolled in high school, and we did not collect information about their age.

### Workshop Design

The workshop duration was 1 hour and 20 minutes. We deliberately made our workshop a single session so that it could be used either as a quick, informal introduction to AI or as part of longer AI or art curricula. The workshop was divided into the following segments:

#### 3.2.1 Introduction to Generative AI

Students were first asked whether they were familiar with generative AI tools, followed by whether they were familiar with ChatGPT. A researcher with expertise in AI introduced students to the concept of generative AI using the following three methods:

**Definition**. The researcher described generative AI algorithms as types of algorithms that learn from patterns in existing data (also known as training data) and generate novel data in the form of images, videos, audio, text, and 3D models.

**Activating (and contrasting with) prior knowledge**. The researcher contrasted generative AI algorithms with AI algorithms that predict, such as classification algorithms. While predictive algorithms are trained on many images of cats and dogs and accomplish tasks like classifying between a cat and a dog, generative algorithms are trained on many images of cats to create a new image of a cat that doesn't exist but looks like a real cat.

**Showing examples**. The researcher displayed some examples of generated media such as images, text, video, audio, and 3D models.

**Using Generative AI Tools**. The researcher proceeded to explain how generative AI algorithms can be used, drawing on images, videos, or audio of the following examples: (1) Generating lesson plans with ChatGPT. (2) Generating visual art in a particular artist's style with Diffusion models. (3) Collaborating with a music generation algorithm to co-create music. (4) Creating videos from movie scripts. (5) Creating novel protein structures that bind to insulin receptors. (6) Creating concepts for an advertisement campaign. (7) Creating Deepfakes, i.e., generating manipulated videos or audio of people. The examples were chosen such that they cover a variety of generative media and show examples of both positive and negative use cases.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline Workshop & N (with consent) & Location & Gender \\ \hline Workshop-1 (W-1) & 13 & [redacted], US & F = 7, M = 4, Unknown = 2 \\ Workshop-2 (W-2) & 21 & remote, India & F = 11, M = 10 \\ \hline \end{tabular} \end{table} Table 1: Students recruited for the workshop

#### 3.2.2 Practicing reflecting on implications

With the aim of preparing students to reflect on the ethical implications of AI systems and to be aware of their limitations as they engage with the generative tools, the researcher used guided reflection on an example of generated artwork. The researcher first displayed examples of AI-generated artwork and asked students how such a technology can be used positively, followed by how it can be used negatively. Students were then asked who they think should get the monetary benefit from selling the artwork.
Further, the researcher displayed Deepfakes and asked students how such a technology can be used negatively, followed by how it can be used positively. Finally, the researcher displayed some examples of generations that demonstrate algorithmic bias in generative AI tools.

#### 3.2.3 Constructing Dreams

In this part of the activity, students were first asked to imagine a dream in words: "If you could be anything in your future, go anywhere, do anything, what would you be doing? It could be a place or an activity, or a job that you see yourself in that does not currently exist. Write your dream in as much detail as you can." Prompt writing was scaffolded using a combination of the following questions and example responses:

* What do you look like in this dream?
* Where are you located in this dream?
* What are you doing that represents this dream? Think of job roles that you don't see yourself represented in today, or create one that does not exist today.
* What are some visual features that you would like to represent this dream?

This combined story served as a prompt for the text-to-image generation algorithm. Students used the text-to-image generation tool Dream Studio, which uses Stable Diffusion to generate images in different visual styles from a text prompt. Dream Studio was chosen because of its impressive creation abilities, variety of styles, ability to disable uploading of content, and its checks around inappropriate prompts and content. Creators can also add a negative prompt of components they wish to omit from their generation. In order to prevent students from adding personal information, we provided them with a shared account linked to a research ID and disabled the ability to upload students' images on the website. Further, students were made aware of Dream Studio's privacy policy around ownership of content: "you own the Content that you generate using the Services to the extent permitted by applicable law and Stability and our affiliates may use the Content to develop and improve the Services, including by storing your Content and associated metadata". Students were asked to edit their prompts until they were satisfied with the generated image and then posted them to a shared Google Slides document.

#### 3.2.4 Sharing Dreams

In addition to generating their dreams, an essential part of the learning activity was for students to share their dreams with the class, providing an avenue to express their imagined future identities and investigate AI's perceptions of their expressed identities. The researcher displayed the dream images and corresponding prompts that students shared, and called upon students to share their creations and discuss them using the following prompts:

* What were your first impressions of the tool? Were you surprised by the creations?
* How did you think AI represented different parts of your prompts?
* Were you satisfied with the generations? If not, what did you change?
* How were the generations different from your expectations?
* What were some useful tricks you learned?

#### 3.2.5 Reflection

After the creative activity, students reflected on generative AI as a class, guided by the following prompts presented by the researcher:

* What are some ways you think generative models can be useful to you?
* What do you think are some positive uses of generative AI models?
* What are some negative ways you think generative AI models can be used?
* Do you think the jobs that people do will change due to generative AI?
* How can generative AI be used in classrooms?
* Do you think using generative AI to do your assignments should be permitted in classrooms? Do you think there should be any conditions on using them?

We did not conduct a formal assessment with every student due to time constraints and the informal structure of the workshop, but two researchers recorded participants' informal classroom discussions, which are summarized in the results below.

## 4 Results

### Introduction to Generative AI: Knowing it when you see it

At the beginning of the workshop, we asked students whether they were familiar with generative AI, and only 6 out of 34 students responded yes. However, 29 students reported being familiar with OpenAI's ChatGPT. Students were engaged in the introductory explanation and examples of generative AI and found examples of realistic generative media surprising. We found no apparent differences in engagement between the two groups.

### Initial reflections: Seeing the potential benefits and harms

Among the benefits of generative AI, students mentioned making art, making movies, and sharing ideas with others. One student in W-2 mentioned, "when I have difficulty explaining what I am thinking to someone else, I can just use this." Among the harms, one student in W-1 mentioned that "it is not really art and essentially cheating from real artists." Another student called generating art in real artists' style "plagiarism." A student in W-1 mentioned how "using ChatGPT to write an essay would be cheating." Another student in W-2 mentioned that "it could take up artists' jobs if we can just generate art for free." Students expressed variation in who they think should benefit monetarily from the sale of AI-generated art. In W-1, one student mentioned how it should be the "coder who made the algorithm". Another student expressed how they think such a painting should not be sold at all: "[I think it] shouldn't be used or sold at all, because it's taking from other peoples art without getting their permission, it feels like stealing- who should be giving permission". Several students brought up the stealing of artworks from artists without their permission. One student mentioned how the money should be split between all the people involved in the process, but was unsure how the split should be decided. In W-2, a majority of students expressed that the company or person that "designed the AI" should get the money from the artwork, and added that "they should pay the artists for the data." While discussing potential harms of Deepfakes, one student in W-1 said that Deepfakes can be used to "Impersonate people in negative ways, make someone say something bad - this might be met with panic and aggression from the community." Another student said that they can lead to the "spread of misinformation and fake news." Another student highlighted that they erode the credibility of real information: "Today there are a lot of people that believe Obama is a Deep Fake - he doesn't exist- makes you question real information altogether." In W-2, students discussed how they can be used to "run smear campaigns or create false evidence in courts." Among potential benefits, students in W-1 identified how they can "speak to ancestors who have passed away" or "speak to their younger versions." In W-2, a student said they could create their own entertainment by making movie characters they like act in different ways.

### Constructing dreams: Expressing imaginations through generative AI

All students were able to generate dream images using their text prompts.
Students generated an average of 12.6 images (min: 4, max: 31), indicating that all students made multiple attempts to reach their desired image. All students finished with a prompt longer than their initial prompt. Students tweaked their positive prompt, negative prompt, style options, and number of images. All students started with the scaffolding questions provided, but eventually deviated from those to add and remove details that better suited their goals. Figure 1 depicts generated dreams created by three participants along with their prompts. Through the dream-generating activity, we could reach the creative learning objectives: students created input prompts appropriate for generating images using a generative AI tool (C1) and iteratively tweaked their creations to match their creative goals (C2).

Figure 1: Three students' generated dreams using the prompts: a. "A Vietnamese male wearing a dark casual academia outfit doing research on psychology in a library with a lot of books.", b. "A young and small asian girl with black hair and fair but not white skin wearing an extra large lab coat with a rocket in one hand that is about to launch while jumping and wearing a large smile on her face, A large open space with blue skies and green grass and little wind. One tree in the background and a dog that's running along, Launching a small homemade rocket that is about to fly in one hand as a part of an experiment." c. "An indian girl who is a photographer in the forest nature, with a camera, shooting a documentary, wearing cargo pants, high detail in illustration style".

### Sharing Dreams: Exploring the art of prompt engineering

Only 26 out of 34 (W-1: 8, W-2: 18) students shared final generated images they were satisfied with. One student mentioned that the generated image was "Highly accurate, I liked the first one that I saw." Another said, "I had to play around with the details with the things that I emphasized and the styles, every picture has a different vibe to it." Multiple students reported dissonance between what was first generated and their desired images. For instance, one student generated using the prompt, "A hispanic boy with curly hair in a minion suit" and the tool generated a minion face with curly brown hair, but he expected a boy in a minion space suit. He kept tweaking the prompt and adding more details to get to an image he desired, and ended with the prompt, "A brown hispanic boy long curly hair glasses wearing a minion suit. located in space on my own planet with land I found with other fellow minions. my own mechanic and building my own ship with my robot friend i built. visual features high detail and 4k". All students who shared out reported having to add more details to bring their image to a desired point. Students added details to their appearance and the setting to better reflect aspects of the dream, such as attire to depict a chosen profession. Nine students also chose to write a negative prompt to indicate features they wanted to omit. Students also shared useful tips with their peers. One student mentioned how using his age in the prompt helped him get closer to the desired image. Another student used the name of a famous person who they believed they resembled. One student added visual information to their prompt, "visual features high detail and 4k". One student said that they found it useful to specify what they wanted in the background and to remove information about their dreams that was "not related to the looks".
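The iterate-on-prompts workflow described above (a positive prompt assembled from the scaffolding questions, an optional negative prompt, and repeated regeneration) can be reproduced with the open-source Stable Diffusion models that power Dream Studio. The sketch below is our own minimal illustration using the Hugging Face `diffusers` library, not the tool used in the workshop; the model checkpoint and parameter values are assumptions chosen for the example.

```python
# Minimal sketch of the students' generate-inspect-tweak loop with Stable
# Diffusion via the `diffusers` library (illustrative; not Dream Studio itself).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

# Positive prompt assembled from the scaffolding questions
# (appearance, location, activity, visual features); taken from Figure 1c.
prompt = ("An indian girl who is a photographer in the forest nature, with a "
          "camera, shooting a documentary, wearing cargo pants, high detail "
          "in illustration style")
# Negative prompt: features the creator wants omitted from the generation.
negative_prompt = "blurry, low quality"

# Generate a small batch; students iterated by editing the prompts and rerunning.
images = pipe(prompt, negative_prompt=negative_prompt,
              num_images_per_prompt=4, guidance_scale=7.5).images
images[0].save("dream.png")
```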
All students in W-2 identified as South Asian and were trying to represent a similar ethnicity, which is not the default representation in Dream Studio. Students came up with and shared representation strategies, such as "using Hindi words", saying "South Indian and specify skin tone", or "say Pakistani instead of Indian so it wouldn't confuse with Native Americans." Students also shared examples of algorithmic bias and stereotypes in generative AI. One student mentioned how, while trying to depict a South Asian face, saying "Asian" primarily produced an East Asian person, and saying "Indian" would either depict a Native American person or not resemble them. The student shared that adding words for Indian attire, such as "lehenga" (a colorful Indian long skirt), helped them get closer to the desired depiction. Another student commented on how "pretty girl" would depict a young White face and said, "it kept making me blonde". The instructor prompted students to probe why this bias exists, and one student correctly pointed out that the data used to train these algorithms contain such stereotypes. Through examples in the introduction session and an interactive activity, students were made aware of the abilities of text-to-image generation algorithms (T1). Pertaining to learning objective T2, students identified the following limitations in the abilities of text-to-image generation algorithms: stereotyping and algorithmic bias, a mismatch between creators' and the AI's representations of concepts, and a feeling of lack of agency. Students also developed prompt engineering skills to reach their desired image. Through sharing prompt tips and identifying assumptions that the algorithm made around textual prompts, students could identify visual features mapped to textual prompts (T3). Students themselves did not make an explicit connection to patterns in underlying training datasets (T4), but upon the instructor's probing about why they think "pretty girl" is represented as a White blonde young girl, students could identify that "the examples it is using has more pretty girls labeled as White." Students related this to the knowledge from the introduction session of how generative AI uses examples of text-image pairs to predict visual patterns associated with text.

### Reflection: Imagining AI in the classroom and beyond

Students began reflecting by thinking about how generative AI models can be useful to them (Objective T5). One student from W-1 said it can be used in "projects in school to visualize future selves". Another student said they can "study anatomy to practice art by visualizing organs." A student from W-2 expressed how they can use it in their arts class to "visualize paint styles". One student recognized that a positive use of generative AI can be to "help you explain things, you can collaborate with others best". Among negative uses, students expressed how it can "Fake images and make things very convincing". Students also discussed how jobs will change as a result of generative AI tools. One student in W-1 believed that creative jobs will not change: "I don't think so because AI art still has a far way to go, some of it doesn't look good and it doesn't look human level yet." Another student pointed out that medical diagnoses could change. Another student in W-1 said that they "don't think teachers' jobs could change at all, teachers need to know different types of children." One student in W-1 remarked how "AI cannot be emotional."
Another student, in W-2, said that "AI will never understand people as people do." Students overall did not believe that creative careers would change. However, one student in W-1 believed that writing jobs, such as journalism, creative writing, and storytelling, could be affected. Within the classroom, students in W-1 discussed how generative AI can be used to "make presentations more visually appealing" or for "adding visuals to a book for visual learners." Students in W-2 discussed how history could be taught by generating visuals of the events. When students were asked whether generative AI should be permitted in classrooms, one student from W-1 responded that its use should be allowed, but when generative AI was used to complete a task, it should be noted as such. Another student from W-1 said that it could be used in classrooms, but not in homework, as there would be no way to tell. Students in W-2 overwhelmingly responded that using generative AI should not be permitted in classrooms. Through the sharing and reflection activity, we could partially reach ethical learning objective E1, where students identified the following harms: algorithmic bias, creation of fake media leading to the spread of misinformation, plagiarism, and job replacement. Copyright infringement was not identified as such but was referred to as "stealing" media. Students did not identify data privacy as a potential harm. Beyond the stated objectives, students identified the loss of credibility of real information as a potential harm. Some students voiced their opinions about school policy around the use of generative AI in classrooms (E2).

### Discussion

Our learning activity "Constructing Dreams" helped students express their future identities using generative AI, share their experiences with the tool with their peers, and reflect on the ethical and societal implications of generative AI. Students successfully used a text-to-image generation tool to express their dreams and share them with their peers. Students performed "prompt engineering" and discovered and shared tricks to aid their generations. Students reached all creative learning objectives. Among technical learning objectives, students did not themselves map patterns in generated images to patterns in the underlying datasets used to train the algorithm (T4), but could make that connection upon further probing by the instructor. Students did not identify data privacy among the potential harms of generative algorithms (E1). In future work, the interactive activity could involve investigating components of generated images that map to parts of the text prompt. Students could also reflect on the sources of the datasets used for training generative algorithms and who owns the data, to emphasize data privacy. Using an identity expression activity followed by reflection was a useful approach to teaching students about generative AI for several reasons. First, students were able to visually express and share their dreams with their peers, allowing them to experience a generative AI tool's powerful creative abilities. In students' visualizations, we saw them create representations of themselves that countered stereotypical depictions of their gender, age, and racial identities. Second, students directly experienced and confronted the limitations of the generative tools. Students stumbled upon examples of algorithmic bias in the form of generative tools reflecting a version of the world that differed from students' own mental images.
Through probing and reflection, students engaged with the biases inherent in the datasets and media representations used to create these tools. Finally, students could connect these experiences back to the potential use cases of generative AI and voice more nuanced opinions on the appropriateness of using these technologies in the classroom. For students to make the most of this experience, it was important that they were primed with knowledge about how generative AI algorithms function and how they may make errors. We conducted this workshop with two groups: one in the US and one in India. The workshop itself did not require any adaptation in content, since most of the examples chosen were globally known. A difference in results observed between the two groups was that the students in the US were considerably more positive about using generative AI in classrooms compared to students in India, who primarily viewed it as plagiarism. Previous research has shown that students in higher education in Hong Kong revealed a generally positive attitude about generative AI, while also recognizing some ethical concerns [8]. More work needs to be done to understand what leads to these cultural differences. Another difference we noted was that, since W-2 was a more racially homogeneous group whose members shared similar struggles to generate an Indian identity, they were able to share prompt tricks with their peers to get their desired representations. We encourage AI or arts educators to adapt this workshop to teach high school students about generative AI tools, their applications, and their ethical implications.

### Limitations and Future Work

This work presents an early exploratory workshop geared towards introducing high school students to generative AI tools. The results present students' responses to activities and reflection prompts, but since no formal assessment was conducted and we did not have every student's responses, inferences about all students' learning cannot be made. A longer workshop with formal assessments of students' understanding of AI concepts is required to make inferences about the efficacy of the learning materials. Future work can aim to capture more detailed insights about students' knowledge of generative AI prior to the workshop, embed assessments around the creative process and decision making in generating dreams, conduct a more formal assessment of AI knowledge and attitudes with all students, and investigate cultural differences between creators from different geographic locations.

### Ethical considerations

Generative AI tools can reproduce harmful stereotypes present in their training datasets, and must be brought to young learners responsibly, with educator supervision. AI tools are often incorrect while sounding confident. Children could overestimate or develop misconceptions about the abilities of AI, or lack the ability to discern real data from generative media. It is important to discuss the limitations of the tools with students before they interact with them. Technologies such as generative AI are socio-technical systems that have positive and negative impacts on society and the workforce. It is important to have conversations with students about the capabilities and potential harms of these tools.
2306.02988
Scaling limits of planar maps under the Smith embedding
The Smith embedding of a finite planar map with two marked vertices, possibly with conductances on the edges, is a way of representing the map as a tiling of a finite cylinder by rectangles. In this embedding, each edge of the planar map corresponds to a rectangle, and each vertex corresponds to a horizontal segment. Given a sequence of finite planar maps embedded in an infinite cylinder, such that the random walk on both the map and its planar dual converges to Brownian motion modulo time change, we prove that the a priori embedding is close to an affine transformation of the Smith embedding at large scales. By applying this result, we prove that the Smith embeddings of mated-CRT maps with the sphere topology converge to $\gamma$-Liouville quantum gravity ($\gamma$-LQG).
Federico Bertacco, Ewain Gwynne, Scott Sheffield
2023-06-05T16:00:52Z
http://arxiv.org/abs/2306.02988v1
# Scaling limits of planar maps under the Smith embedding

###### Abstract

The Smith embedding of a finite planar map with two marked vertices, possibly with conductances on the edges, is a way of representing the map as a tiling of a finite cylinder by rectangles. In this embedding, each edge of the planar map corresponds to a rectangle, and each vertex corresponds to a horizontal segment. Given a sequence of finite planar maps embedded in an infinite cylinder, such that the random walk on both the map and its planar dual converges to Brownian motion modulo time change, we prove that the a priori embedding is close to an affine transformation of the Smith embedding at large scales. By applying this result, we prove that the Smith embeddings of mated-CRT maps with the sphere topology converge to \(\gamma\)-Liouville quantum gravity (\(\gamma\)-LQG).

###### Contents

* 1 Introduction
* 1.1 Motivation
* 1.2 Main result
* 1.3 Application to the mated-CRT map
* 1.4 Outline
* 2 Background and setup
* 2.1 Basic definitions
* 2.2 Universal cover
* 2.3 Random walks and electrical networks
* 2.4 Discrete harmonic conjugate
* 2.5 Construction of the Smith embedding
* 3 Some properties of the Smith embedding
* 3.1 Adding new vertices
* 3.2 Periodicity
* 3.3 Hitting distribution of a horizontal line
* 3.4 Expected horizontal winding
* 4 Proof of the main result
* 4.1 Height coordinate
* 4.2 Width coordinate
* 4.3 Proof of Theorem 1.3
* 5 Application to mated-CRT maps
* 5.1 SLE/LQG description of mated-CRT maps
* 5.2 Mated-CRT maps satisfy the assumptions
* 5.3 Convergence to LQG
* A Some standard estimates for planar Brownian motion

## 1 Introduction

### Motivation

Over the past few decades, there has been a large amount of interest in the study of random planar maps, i.e., graphs embedded in the plane viewed modulo orientation-preserving homeomorphisms. Since the foundational work of Polyakov in the context of bosonic string theory [10], it has been believed that various types of random planar maps converge, in various topologies, to limiting random surfaces called _Liouville quantum gravity_ (LQG) surfaces. The rigorous mathematical study of LQG has been explored, e.g., in works by Duplantier and Sheffield [14] and Rhodes and Vargas [15]. Roughly speaking, LQG surfaces can be thought of as random two-dimensional Riemannian manifolds parameterized by a fixed underlying Riemann surface, indexed by a parameter \(\gamma\in(0,2]\). These surfaces are too rough to be Riemannian manifolds in the literal sense, but one can still define, e.g., the associated volume form (area measure) and distance function (metric) via regularization procedures [1, 14, 15, 16, 17, 18, 19]. Many properties of the \(\gamma\)-LQG area measure are well-known [16, 19, 20]. One way of formulating the convergence of random planar maps toward LQG surfaces is to consider so-called _discrete conformal embeddings_ of the random planar maps. Here, a discrete conformal embedding refers to a particular way of "drawing" the map in the plane, which is in some sense a discrete analog of the Riemann mapping. Suppose we have a random planar map with \(n\) vertices, along with a discrete conformal embedding of the map that maps each vertex to a point in \(\mathbb{C}\). This embedding creates a natural measure on the plane, with each vertex given a mass of \(1/n\).
In many settings, it is natural to conjecture that as \(n\) tends to infinity, this measure should converge weakly to the \(\gamma\)-LQG area measure, with the parameter \(\gamma\) depending on the particular planar map model under consideration. Additionally, the random walk on the embedded map is expected to converge in law to two-dimensional Brownian motion modulo time parameterization (more precisely, the parameterized walk should converge to the so-called _Liouville Brownian motion_ [19, 18]). Several precise scaling limit conjectures for random planar maps toward LQG surfaces were formulated, e.g., in [14, 15, 16, 17]. However, this very general convergence ansatz has only been rigorously proven in a few specific settings (see below). One of the challenges in formulating a general scaling limit result for the embedding of random planar maps is the existence of numerous discrete conformal embeddings that could be regarded as natural in some sense. We collect here some of the most commonly employed discrete conformal embeddings.

* The _circle packing_ (see [18] for a review), which represents the map as the tangency graph of a collection of non-overlapping circles1.

Footnote 1: We refer to [14] for a proof of the fact that the circle packing for lattice approximations of planar domains gives an approximation of the Riemann mapping.

* The _Smith embedding_ (a.k.a. _rectangle packing_), which will be the focus of the present paper, was introduced by Brooks, Smith, Stone, and Tutte in [14]. It is another popular method of embedding planar graphs, defined by means of a rectangle tiling of either a cylinder or a rectangle, in which vertices of the planar map correspond to horizontal segments and edges of the planar map correspond to rectangles2. Several papers have studied properties of the Smith embedding of planar maps [14, 15, 16, 17].

Footnote 2: We refer to [14] for a proof of the fact that the Smith embedding for fine-mesh lattice graphs gives an approximation of the Riemann mapping.

* Other examples of discrete conformal embeddings include the _Tutte embedding_ [1], the _Cardy-Smirnov embedding_ [15], and the _Riemann uniformization embedding_, obtained by viewing the planar map as a piecewise flat two-dimensional Riemannian manifold where the faces are identified with unit side length polygons.

Some cases of the aforementioned conjecture, that LQG describes the scaling limit of random planar maps under discrete conformal embeddings, have been proven. For example, in [13], Gwynne, Miller, and Sheffield established the convergence to \(\gamma\)-LQG under the Tutte embedding for a one-parameter family of random planar maps defined using pairs of correlated Brownian motions, known as the _mated-CRT maps_ (see below for a definition of this family of random planar maps). Moreover, in [10], the same authors proved that the Tutte embedding of the Poisson Voronoi tessellation of the Brownian disk converges to \(\sqrt{8/3}\)-LQG. In [11], Holden and Sun proved that the scaling limit of uniformly sampled triangulations under the Cardy-Smirnov embedding converges to \(\sqrt{8/3}\)-LQG. Finally, let us also mention that in [10], the authors studied the circle packing of the mated-CRT map and showed that there are no macroscopic circles in the circle packing of this random planar map.
Roughly speaking, the main goal of this paper is to provide a general convergence result for the Smith embedding of planar maps, which works whenever the random walk on both the map and its dual approximates Brownian motion.

### Main result

The main result of this paper concerns the scaling limit of general (random) planar maps under the Smith embedding. More precisely, we will show that for a sequence of finite planar maps satisfying an invariance principle assumption both on the map and on its dual, the a priori embedding is close to an affine transformation of the Smith embedding at large scales. We will then apply this result to prove the convergence of the Smith embeddings of mated-CRT maps to \(\gamma\)-LQG. One advantage of the version of the Smith embedding considered in this paper is that its definition is particularly natural for random planar maps without boundary. This is in contrast to other embeddings under which random planar maps have been shown to converge to LQG (such as the Tutte embedding [10] and the Cardy-Smirnov embedding [11]), which are most naturally defined for planar maps with boundary. Throughout the paper, we always write _weighted planar map_ for a planar map with edge conductances. Moreover, we allow all of our planar maps to have multiple edges and self-loops. In order to state our main theorem, we need to introduce some notation. Given a planar graph \(\mathcal{G}\), we denote the sets of vertices, edges, and faces of \(\mathcal{G}\) by \(\mathcal{VG}\), \(\mathcal{EG}\), and \(\mathcal{FG}\), respectively. We consider a doubly marked finite weighted planar map \((\mathcal{G},c,v_{0},v_{1})\), where \(v_{0}\), \(v_{1}\in\mathcal{VG}\) are the two marked vertices, and \(c=\{c_{e}\}_{e\in\mathcal{EG}}\) is a collection of positive unoriented weights called conductances. We assume that we are given a _proper embedding_ of the map in the infinite cylinder \(\mathcal{C}_{2\pi}:=\mathbb{R}/2\pi\mathbb{Z}\times\mathbb{R}\) in the sense of the following definition.

**Definition 1.1**.: An embedding of the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) in the infinite cylinder \(\mathcal{C}_{2\pi}\) is said to be _proper_ if:

1. the edges in \(\mathcal{EG}\) are continuous and do not cross;
2. the graph \(\mathcal{G}\) is connected;
3. the two marked vertices \(v_{0}\) and \(v_{1}\) are mapped to \(-\infty\) and \(+\infty\), respectively, and they do not lie on the boundary of the same face.

We observe that, if \((\mathcal{G},c,v_{0},v_{1})\) is properly embedded in \(\mathcal{C}_{2\pi}\), then the set of vertices \(\mathcal{VG}\) is contained in \(\mathcal{C}_{2\pi}\); each edge in \(\mathcal{EG}\) is a curve in \(\mathcal{C}_{2\pi}\) that does not cross any other edge, except possibly at its endpoints; and each face in \(\mathcal{FG}\) is a connected component in \(\mathcal{C}_{2\pi}\) of the complement of the embedded graph \(\mathcal{G}\). Since the two marked vertices are mapped to \(\pm\infty\), this implies that there is an infinite face at each end of the cylinder \(\mathcal{C}_{2\pi}\). In what follows, we adopt the convention of identifying each vertex in \(\mathcal{VG}\) with its a priori embedding, i.e., if \(x\in\mathcal{VG}\) then we view \(x\) as a point in \(\mathcal{C}_{2\pi}\).
Furthermore, for a set \(K\subset\mathcal{C}_{2\pi}\), we write \[\mathcal{VG}(K):=\big\{x\in\mathcal{VG}:x\in K\big\}.\] We denote by \((\widehat{\mathcal{G}},\widehat{c})\) the weighted dual planar graph associated to \((\mathcal{G},c)\), where the conductance \(\widehat{c}_{\widehat{e}}\) of a dual edge \(\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\) is equal to the resistance of the corresponding primal edge \(e\in\mathcal{EG}\), i.e., we set \(\widehat{c}_{\widehat{e}}:=1/c_{e}\). We assume that \((\widehat{\mathcal{G}},\widehat{c})\) is _properly embedded_ in the infinite cylinder \(\mathcal{C}_{2\pi}\) in the sense of the following definition.

**Definition 1.2**.: An embedding of the weighted dual planar graph \((\widehat{\mathcal{G}},\widehat{c})\) associated to \((\mathcal{G},c)\) in the infinite cylinder \(\mathcal{C}_{2\pi}\) is said to be _proper_ if:

1. every vertex of \(\widehat{\mathcal{G}}\) is contained in a face of \(\mathcal{G}\);
2. every edge \(e\) of \(\mathcal{G}\) is crossed by a single edge \(\widehat{e}\) of \(\widehat{\mathcal{G}}\) which joins the two faces incident to \(e\).

If an edge \(e\in\mathcal{EG}\) is oriented, the orientation of the corresponding dual edge \(\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\) can be obtained by rotating \(e\) counter-clockwise. As for the primal graph, given a set \(K\subset\mathcal{C}_{2\pi}\), we write \[\mathcal{V}\widehat{\mathcal{G}}(K):=\big\{\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}:\widehat{x}\in K\big\}.\]

#### 1.2.1 Smith embedding

We are now ready to provide a somewhat informal description of the Smith embedding of a given doubly marked finite weighted planar map \((\mathcal{G},c,v_{0},v_{1})\). The precise definition will be given in Section 2. As mentioned earlier, the Smith embedding of a planar map was first introduced by Brooks, Smith, Stone, and Tutte in [1], and later generalized to infinite planar graphs by Benjamini and Schramm in [1]. The Smith embedding of the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) is constructed by means of a tiling by rectangles of a finite cylinder \(\mathcal{C}_{\mathfrak{n}}:=\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\), where \(\mathfrak{n}\) is a positive number, depending on \((\mathcal{G},c)\), to be specified later. Each vertex \(x\in\mathcal{VG}\) is represented by a horizontal line segment \(\mathrm{H}_{x}\), each edge \(e\in\mathcal{EG}\) by a rectangle \(\mathrm{R}_{e}\), and each dual vertex \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\) by a vertical line segment \(\mathrm{V}_{\widehat{x}}\). In particular, since each edge \(e\in\mathcal{EG}\) corresponds to a rectangle in the tiling, we need to specify four coordinates for each edge. This is done by means of the voltage function \(\mathfrak{h}:\mathcal{VG}\to[0,1]\) and its discrete harmonic conjugate \(\mathfrak{w}:\mathcal{V}\widehat{\mathcal{G}}\to\mathbb{R}/\mathfrak{n}\mathbb{Z}\). The function \(\mathfrak{h}\) is the unique function on \(\mathcal{VG}\) which is discrete harmonic on \(\mathcal{VG}\setminus\{v_{0},v_{1}\}\) (with respect to the conductances \(c\)) with boundary conditions given by \(\mathfrak{h}(v_{0})=0\) and \(\mathfrak{h}(v_{1})=1\), i.e., \[\mathfrak{h}(x)=\mathbb{P}_{x}\big(X\text{ hits }v_{1}\text{ before }v_{0}\big),\quad\forall x\in\mathcal{VG},\] where \(\mathbb{P}_{x}\) denotes the law of the random walk \(X^{x}\) on \((\mathcal{G},c)\) started from \(x\). We refer to Subsection 2.3 for more details.
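Concretely, \(\mathfrak{h}\) is the solution of a finite linear system: the weighted graph Laplacian applied to \(\mathfrak{h}\) vanishes at every vertex other than \(v_{0}\) and \(v_{1}\), with Dirichlet boundary values \(0\) and \(1\) at the marked vertices. The following sketch (our own illustration, not code from the paper) computes \(\mathfrak{h}\) on a small weighted graph with numpy; the graph and conductances are arbitrary toy inputs.

```python
# Minimal sketch: compute the height coordinate h by solving the discrete
# Dirichlet problem  sum_{e={x,y}} c_e (h(y) - h(x)) = 0  at interior vertices,
# with h(v0) = 0 and h(v1) = 1.
import numpy as np

def height_coordinate(n_vertices, edges, v0, v1):
    """edges: list of (x, y, conductance) with x, y in range(n_vertices)."""
    A = np.zeros((n_vertices, n_vertices))
    b = np.zeros(n_vertices)
    for x, y, c in edges:                  # assemble the weighted graph Laplacian
        A[x, x] += c; A[y, y] += c
        A[x, y] -= c; A[y, x] -= c
    for v, val in ((v0, 0.0), (v1, 1.0)):  # impose the Dirichlet boundary rows
        A[v, :] = 0.0; A[v, v] = 1.0; b[v] = val
    return np.linalg.solve(A, b)

# Toy example: path v0 - a - v1 with conductances 1 and 2. The walk steps
# proportionally to conductance, so h(a) = P_a(hit v1 before v0) = 2/3.
print(height_coordinate(3, [(0, 1, 1.0), (1, 2, 2.0)], v0=0, v1=2))
# -> [0.  0.6667  1.] approximately
```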
The function \(\mathfrak{w}\) is the function on the set of dual vertices \(\mathcal{V}\widehat{\mathcal{G}}\) that satisfies the discrete Cauchy-Riemann equation associated to \(\mathfrak{h}\): its difference at the endpoints of each edge of \(\widehat{\mathcal{G}}\) is equal to the difference of \(\mathfrak{h}\) at the endpoints of the corresponding primal edge times its conductance. As we will see in Subsection 2.4, the function \(\mathfrak{w}\) is only defined modulo \(\mathfrak{n}\mathbb{Z}\) and modulo an additive constant that can be fixed by imposing that \(\mathfrak{w}\) is equal to zero on a chosen dual vertex. In particular, the choice of the additive constant of \(\mathfrak{w}\) fixes the rotation of the cylinder in which the tiling takes place. Now, we can specify the various objects involved in the definition of the Smith embedding.

* For each edge \(e\in\mathcal{EG}\), the rectangle \(\mathrm{R}_{e}\) corresponds to the rectangle on \(\mathcal{C}_{\mathfrak{n}}\) such that the height coordinates of the top and bottom sides are given by the values of \(\mathfrak{h}\) at the endpoints of \(e\), and the width coordinates of the left and right sides of \(\mathrm{R}_{e}\) are given by the values of \(\mathfrak{w}\) at the endpoints of the corresponding dual edge \(\widehat{e}\).
* For each vertex \(x\in\mathcal{VG}\), the horizontal segment \(\mathrm{H}_{x}\) corresponds to the maximal horizontal segment which lies on the boundaries of all the rectangles corresponding to the edges incident to \(x\).
* For each dual vertex \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\), the vertical segment \(\mathrm{V}_{\widehat{x}}\) corresponds to the maximal vertical segment which is tangent to all rectangles corresponding to the primal edges surrounding \(\widehat{x}\).

We call the map \(\mathcal{S}:\mathcal{VG}\cup\mathcal{EG}\cup\mathcal{V}\widehat{\mathcal{G}}\to\mathcal{C}_{\mathfrak{n}}\) such that \(\mathcal{S}(e):=\mathrm{R}_{e}\), \(\mathcal{S}(x):=\mathrm{H}_{x}\), and \(\mathcal{S}(\widehat{x}):=\mathrm{V}_{\widehat{x}}\) the tiling map associated to the quadruplet \((\mathcal{G},c,v_{0},v_{1})\). We refer to Figure 1 for a diagrammatic illustration of the tiling map associated to a given quadruplet \((\mathcal{G},c,v_{0},v_{1})\). We define the Smith embedding \(\dot{\mathcal{S}}\) associated to the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) as the function from \(\mathcal{VG}\) to \(\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\) given by \[\dot{\mathcal{S}}(x):=\mathrm{mid}(\mathrm{H}_{x}),\quad\forall x\in\mathcal{VG}, \tag{1.1}\] where \(\mathrm{mid}(\mathrm{H}_{x})\) denotes the midpoint of the horizontal line segment \(\mathrm{H}_{x}\)3. We refer to Subsection 2.5 for precise definitions.

#### 1.2.2 Assumptions and statement of the main result

To state our main result, we need to consider a sequence of doubly marked finite weighted planar maps \[\big\{(\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\big\}_{n\in\mathbb{N}},\] and the sequence of associated weighted dual planar graphs \(\{(\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\}_{n\in\mathbb{N}}\). We make the following assumptions.

1. (**Cylindrical embedding**) For each \(n\in\mathbb{N}\), the quadruplet \((\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\) is properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) in the sense of Definition 1.1.
Furthermore, the associated weighted dual planar graph \((\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\) is also properly embedded in \(\mathcal{C}_{2\pi}\) in the sense of Definition 1.2.

2. (**Invariance principle on the primal graphs**) For each \(n\in\mathbb{N}\), view the embedded random walk on \((\mathcal{G}^{n},c^{n})\), stopped when it hits either \(v_{0}^{n}\) or \(v_{1}^{n}\), as a continuous curve in \(\mathcal{C}_{2\pi}\) obtained by piecewise linear interpolation at constant speed. For each compact subset \(K\subset\mathcal{C}_{2\pi}\) and for any \(z\in K\), the law of the random walk on \((\mathcal{G}^{n},c^{n})\) started from the vertex \(x_{z}^{n}\in\mathcal{VG}^{n}\) nearest to \(z\) weakly converges as \(n\to\infty\) to the law of the Brownian motion on \(\mathcal{C}_{2\pi}\) started from \(z\), with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in K\).

3. (**Invariance principle on the dual graphs**) For each \(n\in\mathbb{N}\), view the embedded random walk on \((\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\) as a continuous curve in \(\mathcal{C}_{2\pi}\) obtained by piecewise linear interpolation at constant speed. For each compact subset \(K\subset\mathcal{C}_{2\pi}\) and for any \(z\in K\), the law of the random walk on \((\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\) started from the vertex \(\widehat{x}_{z}^{n}\in\mathcal{V}\widehat{\mathcal{G}}^{n}\) nearest to \(z\) weakly converges as \(n\to\infty\) to the law of the Brownian motion on \(\mathcal{C}_{2\pi}\) started from \(z\), with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in K\).

In what follows, given a point \(x\in\mathcal{C}_{2\pi}\), we write \(\operatorname{Re}(x)\in[0,2\pi)\) for its horizontal coordinate and \(\operatorname{Im}(x)\in\mathbb{R}\) for its height coordinate. Similarly, if \(x\in\mathcal{C}_{\mathfrak{n}}\), then \(\operatorname{Re}(x)\in[0,\mathfrak{n})\) denotes its horizontal coordinate and \(\operatorname{Im}(x)\in[0,1]\) denotes its height coordinate. We are now ready to state our main theorem.

Figure 1: **Left:** A doubly marked finite weighted planar graph \((\mathcal{G},c,v_{0},v_{1})\), drawn in gray, properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) with the two marked vertices \(v_{0}\) and \(v_{1}\) drawn in black. Drawn in red is the corresponding weighted dual planar graph \((\widehat{\mathcal{G}},\widehat{c})\), which is also properly embedded in \(\mathcal{C}_{2\pi}\). **Right:** The Smith diagram associated to \((\mathcal{G},c,v_{0},v_{1})\) constructed via the tiling map \(\mathcal{S}\). The blue horizontal segment \(\mathrm{H}_{x}\) corresponds to the vertex \(x\in\mathcal{VG}\). The blue vertical segment \(\mathrm{V}_{\widehat{x}}\) corresponds to the dual vertex \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\). The blue rectangle \(\mathrm{R}_{e}\) corresponds to the edge \(e\in\mathcal{EG}\). In both the left and right figures, the two vertical lines with an arrow are identified with each other.

**Theorem 1.3** (Main theorem).: _Consider a sequence \(\{(\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) of doubly marked finite weighted planar maps and let \(\{(\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\}_{n\in\mathbb{N}}\) be the sequence of associated weighted dual planar graphs. Assume that assumptions (H1), (H2), (H3) are satisfied. For each \(n\in\mathbb{N}\), let \(\dot{\mathcal{S}}_{n}:\mathcal{VG}^{n}\to\mathcal{C}_{\mathfrak{n}_{n}}\) be the Smith embedding_
_associated with the quadruplet \((\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\) as specified in (1.1). There exist sequences \(\{c_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\), \(\{b_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\), \(\{b_{n}^{\mathfrak{w}}\}_{n\in\mathbb{N}}\subset\mathbb{R}\) such that, if we let \(T_{n}:\mathcal{C}_{\mathfrak{n}_{n}}\to\mathcal{C}_{2\pi}\) be the affine transformation of the form_ \[\operatorname{Re}(T_{n}x):=\left(\frac{2\pi}{\mathfrak{n}_{n}}\operatorname{Re}(x)+b_{n}^{\mathfrak{w}}\right)\operatorname{mod}(2\pi)\quad\text{and}\quad\operatorname{Im}(T_{n}x):=c_{n}^{\mathfrak{h}}\operatorname{Im}(x)+b_{n}^{\mathfrak{h}},\qquad\forall x\in\mathcal{C}_{\mathfrak{n}_{n}},\] _then, for all compact sets \(K\subset\mathcal{C}_{2\pi}\), it holds that_ \[\lim_{n\to\infty}\sup_{x\in\mathcal{VG}^{n}(K)}\mathrm{d}_{2\pi}\big(T_{n}\dot{\mathcal{S}}_{n}(x),x\big)=0,\] _where \(\mathrm{d}_{2\pi}\) denotes the Euclidean distance on the cylinder \(\mathcal{C}_{2\pi}\)._

Theorem 1.3 tells us that in order to say that the Smith embedding of \((\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\) is close to a given a priori embedding (up to translation and scaling), we only need to know a certain invariance principle for random walk under the a priori embedding.

Figure 2: Consider a planar map \(\mathcal{M}\) with \(n\) edges and unit edge conductances, and a distinguished spanning tree \(\mathcal{T}\). Then \((\mathcal{M},\mathcal{T})\) determines a quadrangulation \(\mathcal{G}\) with \(n\) quadrilaterals obtained by replacing each edge of \(\mathcal{M}\) by a quadrilateral. Moreover, the path that "snakes" between \(\mathcal{T}\) and its dual crosses the edges of \(\mathcal{G}\) in order. Shown here is the Smith diagram of an instance of \(\mathcal{G}\) corresponding to a uniformly random \((\mathcal{M},\mathcal{T})\) pair with two random points chosen as roots. **Left:** The squares in the figure are colored according to their place in the cyclic ordering. **Right:** The squares in the figure are colored according to their Euclidean size. It is interesting to compare the figure on the right with the square subdivisions associated to the \(\gamma\)-LQG measure appearing, e.g., in [11, Figures 1, 2, 3]. The interested reader can find the Mathematica code used to generate the above simulations at the following link: [https://github.com/federico-bertacco/smith-embedding.git](https://github.com/federico-bertacco/smith-embedding.git). Furthermore, at the same link, there is also a PDF file available that provides further explanation on how these simulations were produced.

This result is in some ways not surprising, since it is natural to expect that if a simple random walk (and its dual) approximate Brownian motion, then discrete harmonic functions (and their conjugate duals) should approximate continuum harmonic functions. However, showing that this is actually true in the limit, and that the convergence is to the right continuum harmonic functions, will require some new coupling tricks and some careful boundary behavior estimates, which we hope may prove useful in other settings as well. More precisely, the particular statement we obtain is far from obvious a priori, for two main reasons.

* Our hypotheses only concern the _macroscopic_ behavior of the random walk on \((\mathcal{G}^{n},c^{n})\) in the _bulk_ of the cylinder.
We do not need any hypotheses about how the random walk behaves when it gets close to the marked vertices \(v_{0}^{n}\) and \(v_{1}^{n}\). This may seem surprising at first glance, since one could worry, e.g., that the structure of \((\mathcal{G}^{n},c^{n})\) in small neighborhoods of \(v_{0}^{n}\) and \(v_{1}^{n}\) makes it much easier for the random walk to hit \(v_{0}^{n}\) than to hit \(v_{1}^{n}\), so that the height coordinate function \(\mathfrak{h}^{n}\) would be close to zero on all of \(\mathcal{VG}^{n}\). What allows us to get around this is the scaling and translation sequences \(\{c_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\) and \(\{b_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\). We refer to Subsection 4.1 for more details.

* The width coordinate \(\mathfrak{w}^{n}\) is discrete harmonic but does not admit a simple direct description in terms of the random walk on \((\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\). For this reason, a fair amount of work is required to get from the invariance principle for this random walk to a convergence statement for \(\mathfrak{w}^{n}\). We refer to Subsection 4.2 for more details.

We remark that the Smith embedding can be very far from the identity near the ends of the cylinder: what is interesting, and perhaps surprising, is the generality in which we can show that the "bad behavior" gets "smoothed out" in the middle of the cylinder. This is apparent in the simulations presented in Figure 2. As we will discuss in the next section, one application of Theorem 1.3 is the convergence of the mated-CRT map with the sphere topology to LQG under the Smith embedding. More generally, Theorem 1.3 reduces the problem of proving the convergence to LQG under the Smith embeddings for other types of random planar maps to the problem of finding some a priori embeddings of the map and its dual under which the counting measure on vertices converges to the LQG measure and the random walk on the map converges to Brownian motion modulo time parameterization.

### Application to the mated-CRT map

Mated-CRT maps are a one-parameter family of random planar maps constructed and studied, e.g., in [1, 2, 3]. The mated-CRT maps are parameterized by a real parameter \(\gamma\in(0,2)\) and are in the universality class of \(\gamma\)-LQG. In this paper, we will be interested in mated-CRT maps with the sphere topology. For each \(n\in\mathbb{N}\) and \(\gamma\in(0,2)\), the _\(n\)-mated-CRT map with the sphere topology_ is the random planar triangulation \(\mathcal{G}^{n}\) with vertex set given by \[\mathcal{VG}^{n}:=\frac{1}{n}\mathbb{Z}\cap(0,1],\] and an edge set defined by means of a condition involving a pair of correlated linear Brownian motions. More precisely, consider a two-dimensional Brownian motion \((L,R)\) with covariance matrix given by \[\operatorname{Var}(L_{t})=\operatorname{Var}(R_{t})=|t|,\qquad\operatorname{Cov}(L_{t},R_{t})=-\cos\left(\frac{\pi\gamma^{2}}{4}\right)|t|, \tag{1.2}\] and conditioned to stay in the first quadrant for one unit of time and end up at \((0,0)\), i.e., \((L,R)\) is an excursion.
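For intuition, the correlated pair in (1.2) is straightforward to sample on a discrete grid by summing Gaussian increments. The sketch below is our own illustration (not code from the paper); it omits the conditioning on \((L,R)\) being a first-quadrant excursion, which in practice can be approximated, e.g., by rejection sampling.

```python
# Minimal sketch: sample the correlated Brownian pair (L, R) of (1.2) on a
# grid of mesh dt by cumulative sums of correlated Gaussian increments.
import numpy as np

def simulate_LR(gamma, n_steps, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    rho = -np.cos(np.pi * gamma**2 / 4)       # Cov(L_t, R_t) = rho * |t|
    cov = dt * np.array([[1.0, rho], [rho, 1.0]])
    increments = rng.multivariate_normal([0.0, 0.0], cov, size=n_steps)
    LR = np.concatenate([[[0.0, 0.0]], np.cumsum(increments, axis=0)])
    return LR[:, 0], LR[:, 1]                 # L and R on the grid

L, R = simulate_LR(gamma=np.sqrt(2), n_steps=10_000)
# For gamma = sqrt(2), rho = -cos(pi/2) = 0, so L and R are independent.
```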
Given such a pair \((L,R)\), two vertices \(x_{1}\), \(x_{2}\in\mathcal{VG}^{n}\) are connected by an edge if and only if either \[\begin{split}&\max\left\{\inf_{t\in[x_{1}-1/n,x_{1}]}L_{t},\inf_{t\in[x_{2}-1/n,x_{2}]}L_{t}\right\}\leq\inf_{t\in[x_{1},x_{2}-1/n]}L_{t},\quad\text{or}\\ &\max\left\{\inf_{t\in[x_{1}-1/n,x_{1}]}R_{t},\inf_{t\in[x_{2}-1/n,x_{2}]}R_{t}\right\}\leq\inf_{t\in[x_{1},x_{2}-1/n]}R_{t}.\end{split} \tag{1.3}\] The vertices \(x_{1}\) and \(x_{2}\) are connected by two edges if \(|x_{1}-x_{2}|\neq 1/n\) and both conditions in (1.3) hold. We observe that the condition for \(L\) in (1.3) is equivalent to the existence of a horizontal line segment below the graph of \(L\) whose endpoints are of the form \((t_{1},L_{t_{1}})\) and \((t_{2},L_{t_{2}})\) for \(t_{1}\in[x_{1}-1/n,x_{1}]\) and \(t_{2}\in[x_{2}-1/n,x_{2}]\), and similarly for \(R\). This allows us to give an equivalent, more geometric, version of the definition of \(\mathcal{G}^{n}\). In particular, this procedure assigns a natural planar map structure to the mated-CRT map \(\mathcal{G}^{n}\), under which it is a triangulation. We refer to Figure 3 for a diagrammatic explanation of this procedure. In [10], Gwynne, Miller, and Sheffield proved that the Tutte embeddings of mated-CRT maps with the disk topology converge to \(\gamma\)-LQG. Thanks to our main theorem, we can prove an analogous result for the Smith embeddings of mated-CRT maps with the sphere topology. More precisely, for each \(n\in\mathbb{N}\), pick two marked vertices \(v_{0}^{n}\), \(v_{1}^{n}\in\mathcal{VG}^{n}\). Then, we can conformally map the sphere onto the infinite cylinder \(\mathcal{C}_{2\pi}\) so that the marked points are mapped to \(\pm\infty\), and \((\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\)4 is properly embedded in \(\mathcal{C}_{2\pi}\).

Footnote 4: Here, each edge in \(\mathcal{EG}^{n}\) has unit conductance, and so we do not specify the sequence of weights \(c\) as in the general case.

**Theorem 1.4** (Convergence of mated-CRT map).: _Fix \(\gamma\in(0,2)\) and let \(\{(\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) be the sequence of doubly marked \(n\)-mated-CRT maps with the sphere topology embedded in \(\mathcal{C}_{2\pi}\) as specified above._
**Theorem 1.4** (Convergence of mated-CRT map).: _Fix \(\gamma\in(0,2)\) and let \(\{(\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) be the sequence of doubly marked \(n\)-mated-CRT maps with the sphere topology embedded in \(\mathcal{C}_{2\pi}\) as specified above. There exists a sequence of random affine transformations \(\{T_{n}\}_{n\in\mathbb{N}}\) from \(\mathcal{C}_{\mathfrak{n}_{n}}\) to \(\mathcal{C}_{2\pi}\) of the form specified in the statement of Theorem 1.3 such that, if we let \(\mu_{n}\) be the push-forward with respect to the mapping \(z\mapsto T_{n}z\) of the counting measure on the set \(\hat{\mathcal{S}}^{n}(\mathcal{VG}^{n})\) scaled by \(1/n\), then we have the following convergences in probability as \(n\to\infty\)._

1. _On each compact subset of \(\mathcal{C}_{2\pi}\), the measure \(\mu_{n}\) weakly converges to the \(\gamma\)-LQG measure associated to a doubly marked unit area quantum sphere parameterized by \(\mathcal{C}_{2\pi}\) in such a way that its marked points are at \(\pm\infty\), as defined in [11, Definition 4.21]._
2. _On each compact subset of \(\mathcal{C}_{2\pi}\), the image under the mapping \(z\mapsto T_{n}z\) of the space-filling path on the Smith-embedded mated-CRT map on \(\mathcal{C}_{\mathfrak{n}_{n}}\) obtained from the left-right ordering of the vertices converges uniformly with respect to the two-point compactification topology on the cylinder to space-filling SLE\({}_{\kappa}\) on \(\mathcal{C}_{2\pi}\), with \(\kappa=16/\gamma^{2}\), parameterized by \(\gamma\)-LQG mass._
3. _For \(z\in\mathcal{C}_{2\pi}\), let \(x_{z}^{n}\in\mathcal{VG}^{n}\) be the vertex nearest to \(z\)._ 
_The conditional law given \(\mathcal{G}^{n}\) of the image under the mapping \(z\mapsto T_{n}z\) of the simple random walk on the Smith-embedded mated-CRT map started from \(\hat{\mathcal{S}}^{n}(x_{z}^{n})\) and stopped when it hits one of the horizontal segments associated to the marked vertices weakly converges to the law of Brownian motion on \(\mathcal{C}_{2\pi}\) started from \(z\), modulo time parameterization and uniformly over all \(z\) in a compact subset of \(\mathcal{C}_{2\pi}\)._

Figure 3: **Left:** A diagram showing the construction of the mated-CRT map with sphere topology and \(n=8\) vertices. To geometrically construct the mated-CRT map \(\mathcal{G}^{n}\), we draw the graphs of \(L\) and \(C-R\) with a chosen large constant \(C>0\) to ensure that the graphs do not intersect. The region between the graphs is then divided into vertical strips. Each strip corresponds to the vertex \(x\in\mathcal{VG}^{n}\) given by the horizontal coordinate of its rightmost point. Two vertices \(x_{1}\), \(x_{2}\in\mathcal{VG}^{n}\) are connected by an edge if and only if their respective vertical strips are connected by a horizontal line segment that is either below the graph of \(L\) or above the graph of \(C-R\). For each pair of vertices for which the condition holds for \(L\) (resp. \(C-R\)), we have drawn the lowest (resp. highest) segment which joins the corresponding vertical strips. We note that consecutive vertices are always connected by an edge. **Right:** The graph \(\mathcal{G}^{n}\) can be represented in the plane by connecting two vertices \(x_{1}\), \(x_{2}\in\mathcal{VG}^{n}\) with an arc below (resp. above) the real line if their vertical strips are connected by a horizontal segment below (resp. above) the graph of \(L\) (resp. \(C-R\)). Additionally, each pair of consecutive vertices in \(\mathcal{VG}^{n}\) is connected by an edge. This representation gives \(\mathcal{G}^{n}\) a planar map structure under which it is a triangulation. A similar illustration was shown in [10].

To conclude, let us point out that item 1 of Theorem 1.4 solves [1, Question 1] for the case of mated-CRT maps.

### Outline

Most of the paper is dedicated to proving Theorem 1.3, and it is organized as follows. In the first part of Section 2, we provide some background material on weighted planar graphs and the theory of electrical networks. We then move on to the precise construction of the tiling map and the definition of the Smith embedding in Subsection 2.5. As mentioned earlier, this is achieved by introducing two harmonic maps: one on the planar map itself and one on the associated dual planar map. In Section 3, we state and prove several properties of the Smith embedding. The most significant result of this section is Lemma 3.17, which heuristically states that the conditional expected horizontal winding of the Smith-embedded random walk given the vertical coordinate of the walk is equal to zero. This property plays a key role in proving our main theorem. In order to prove this intermediate result, we rely on Lemma 3.14, which essentially states that the conditional probability, given the vertical coordinate of the walk, that the Smith-embedded random walk hits a certain horizontal line segment is proportional to the width of that segment. We point out that a similar result, but without conditioning on the vertical component of the walk, has been obtained by Georgakopoulos [1, Lemma 6.2] in the setting of infinite weighted planar graphs. Section 4 is the core of this article and contains the proof of Theorem 1.3, which can be divided into two main blocks: in Subsection 4.1 we study the height coordinate function, and in Subsection 4.2 we study the width coordinate function. Specifically, the main result of Subsection 4.1 is Proposition 4.3, which roughly states that the height coordinate of the a priori embedding is asymptotically close to an affine transformation of the height coordinate of the Smith embedding. Similarly, the main result of Subsection 4.2 is Proposition 4.8, which states the analogous fact for the width coordinate. We refer to Subsections 4.1 and 4.2 for the proof outlines of the height and width coordinate results, respectively. Finally, in Subsection 4.3, we show how to combine the results for the height coordinate and the width coordinate to prove Theorem 1.3. In Section 5, we provide a brief introduction to the relationship between mated-CRT maps and LQG. We then demonstrate in Subsection 5.2 that this family of random planar maps satisfies the assumptions of our main result. Specifically, in Subsection 5.3, we apply our result to show that the scaling limit of mated-CRT maps is \(\gamma\)-LQG, thereby proving Theorem 1.4.

**Acknowledgements.** F.B. is grateful to the Royal Society for financial support through Prof. M. Hairer's research professorship grant RP\(\backslash\)R1\(\backslash\)191065. E.G. was partially supported by a Clay research fellowship. 
Part of this work was carried out during the _Probability and Mathematical Physics_ ICM satellite conference in Helsinki in summer 2022. We thank the organizers of this conference for their hospitality.

## 2 Background and setup

### Basic definitions

#### 2.1.1 Basic notation

We write \(\mathbb{N}\) for the set of positive integers and \(\mathbb{N}_{0}\) for the set of non-negative integers. Given \(n\in\mathbb{N}\) and \(j\in\mathbb{N}_{0}\), we let \([n]:=\{1,\ldots,n\}\) and \([n]_{j}:=\{j,\ldots,n\}\). Furthermore, for \(n\in\mathbb{N}\) and \(j\in\mathbb{N}_{0}\), we write \([x_{n}]\) to denote the collection of objects \(\{x_{1},\ldots,x_{n}\}\) and \([x_{n}]_{j}\) to denote the collection of objects \(\{x_{j},\ldots,x_{n}\}\). If \(a\) and \(b\) are two real numbers, we write \(a\preceq b\) if there is a constant \(C>0\), independent of the values of \(a\) or \(b\) and of certain other parameters of interest, such that \(a\leq Cb\), and we highlight the dependence of the implicit constants when necessary. If \(a\) and \(b\) are two real numbers depending on a variable \(x\), we write \(a=o_{x}(b)\) if \(a/b\) tends to \(0\) as \(x\to\infty\). We use the convention to identify \(\mathbb{R}^{2}\) with \(\mathbb{C}\). In particular, given a point \(x\in\mathbb{R}^{2}\), we write \(\mathrm{Re}(x)\) (resp. \(\mathrm{Im}(x)\)) for its horizontal (resp. vertical) coordinate.

#### 2.1.2 Metric on curves modulo time parameterization

For \(T_{1}\), \(T_{2}>0\), let \(P_{1}:[0,T_{1}]\to\mathbb{R}^{2}\) and \(P_{2}:[0,T_{2}]\to\mathbb{R}^{2}\) be two continuous curves defined on possibly different time intervals. We define \[\mathrm{d}^{\mathrm{CMP}}\big{(}P_{1},P_{2}\big{)}:=\inf_{\phi}\sup_{t\in[0,T_ {1}]}\bigl{|}P_{1}(t)-P_{2}(\phi(t))\bigr{|}, \tag{2.1}\] where the infimum is taken over all increasing homeomorphisms \(\phi:[0,T_{1}]\to[0,T_{2}]\). It is known that \(\mathrm{d}^{\mathrm{CMP}}\) induces a complete metric on the set of curves viewed modulo time parameterization (see [1, Lemma 2.1]). For curves defined for infinite time, it is convenient to have a local variant of the metric \(\mathrm{d}^{\mathrm{CMP}}\). Assume that \(P_{1}:[0,\infty)\to\mathbb{R}^{2}\) and \(P_{2}:[0,\infty)\to\mathbb{R}^{2}\) are two such curves. Then, for \(r>0\), let \(T_{1,r}\) (resp. \(T_{2,r}\)) be the first exit time of \(P_{1}\) (resp. \(P_{2}\)) from the ball \(B(0,r)\) centred at \(0\) with radius \(r\), or \(0\) if the curve starts outside \(B(0,r)\). We define \[\mathrm{d}^{\mathrm{CMP}}_{\mathrm{loc}}\big{(}P_{1},P_{2}\big{)}:=\int_{1}^{ \infty}e^{-r}\big{(}1\wedge\mathrm{d}^{\mathrm{CMP}}\left(P_{1}|_{[0,T_{1,r}] },P_{2}|_{[0,T_{2,r}]}\right)\big{)}\,\mathrm{d}r. \tag{2.2}\] Moreover, we observe that, given a sequence \(\{P_{n}\}_{n\in\mathbb{N}}\) of continuous curves defined for infinite time, \(\lim_{n\to\infty}\mathrm{d}^{\mathrm{CMP}}_{\mathrm{loc}}(P_{n},P)=0\) if and only if, for Lebesgue almost every \(r>0\), \(P_{n}\) stopped at its first exit time from \(B(0,r)\) converges to \(P\) stopped at its first exit time from \(B(0,r)\) with respect to the metric (2.1).

**Remark 2.1**.: In the remaining part of the article, we also need to consider curves taking values in the infinite cylinder \(\mathcal{C}_{2\pi}\). We equip the spaces specified above, but with \(\mathcal{C}_{2\pi}\) in place of \(\mathbb{R}^{2}\), with the same metrics. It will be clear from the context whether the metric under consideration refers to curves in \(\mathbb{R}^{2}\) or in \(\mathcal{C}_{2\pi}\). 
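For intuition, on polygonal curves the metric \(\mathrm{d}^{\mathrm{CMP}}\) admits a standard discrete approximation in which the infimum over increasing homeomorphisms is replaced by an infimum over monotone alignments of the sample points, i.e., the discrete Fréchet distance. The short Python sketch below is only a numerical illustration of definition (2.1), under the assumption that the curves are sampled finely; it is not used anywhere in the arguments of this paper.

```python
import numpy as np
from functools import lru_cache

def d_cmp_discrete(P1, P2):
    """Discrete analogue of d^CMP for polygonal curves given as (k, 2)
    arrays: minimize, over monotone alignments of the two index sets,
    the maximal pointwise distance (the discrete Frechet distance)."""
    n, m = len(P1), len(P2)

    @lru_cache(maxsize=None)
    def f(i, j):
        d = float(np.linalg.norm(P1[i] - P2[j]))
        if i == 0 and j == 0:
            return d
        prev = min(f(a, b) for a, b in ((i - 1, j), (i, j - 1), (i - 1, j - 1))
                   if a >= 0 and b >= 0)
        return max(d, prev)

    return f(n - 1, m - 1)

# The same circle traced with two different time parameterizations:
# the distance is near zero, since the curves agree modulo reparameterization.
t1 = np.linspace(0.0, 2.0 * np.pi, 100)
t2 = 2.0 * np.pi * np.linspace(0.0, 1.0, 150) ** 2
c1 = np.stack([np.cos(t1), np.sin(t1)], axis=1)
c2 = np.stack([np.cos(t2), np.sin(t2)], axis=1)
print(d_cmp_discrete(c1, c2))
```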
#### 2.1.3 Graph notation

Given a finite planar graph \(\mathcal{G}\), besides the notation related to \(\mathcal{G}\) specified in the introduction, we need to introduce some further nomenclature. In particular, in what follows, we use \(e\in\mathcal{EG}\) to denote both oriented and unoriented edges. An oriented edge \(e\in\mathcal{EG}\) is oriented from its tail \(e^{-}\) to its head \(e^{+}\). Furthermore, given a vertex \(x\in\mathcal{VG}\), we write \(\mathcal{VG}(x)\) for the set of vertices \(y\) adjacent to \(x\), i.e., such that there exists an edge connecting \(x\) to \(y\). For a vertex \(x\in\mathcal{VG}\), we denote by \(\mathcal{EG}(x)\) the set of edges in \(\mathcal{EG}\) incident to \(x\). For a fixed orientation of the edges in \(\mathcal{EG}(x)\), we let \(\mathcal{EG}^{\downarrow}(x)\) (resp. \(\mathcal{EG}^{\uparrow}(x)\)) be the set of edges in \(\mathcal{EG}(x)\) with heads (resp. tails) equal to \(x\). Similar notation will also be used for the dual planar graph \(\widehat{\mathcal{G}}\).

**Metric graph.** We will need to consider the metric space associated to a planar graph \(\mathcal{G}\), which can be canonically built as follows. For each edge \(e\in\mathcal{EG}\), we choose an arbitrary orientation of \(e\) and we let \(I_{e}\) be an isometric copy of the real unit interval \([0,1]\). We define the metric space \(\mathbb{G}\) associated with \(\mathcal{G}\) to be the quotient of \(\cup_{e\in\mathcal{EG}}I_{e}\) where we identify the endpoints of \(I_{e}\) with the vertices \(e^{-}\) and \(e^{+}\), and we equip it with the natural path metric \(\mathrm{d}^{\mathbb{G}}\). More precisely, for two points \(x\), \(y\) lying on the same edge of \(\mathcal{G}\), we define \(\mathrm{d}^{\mathbb{G}}(x,y)\) to be the Euclidean distance between \(x\) and \(y\). For points \(x\), \(y\) lying on different edges, we use the metric given by the length of the shortest path between the two points, where distances are measured along the edges using the Euclidean distance. We can also define the dual metric graph \(\widehat{\mathbb{G}}\), and the associated metric \(\mathrm{d}^{\widehat{\mathbb{G}}}\), in a similar way.

### Universal cover

The concept of the universal cover of a graph will play an important role in our analysis. If \(\mathcal{G}\) is a graph embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\), then there is a canonical way to define its lift \(\mathcal{G}^{\dagger}\) to the universal covering space of \(\mathcal{C}_{2\pi}\). More precisely, consider the universal cover \((\mathbb{R}^{2},\sigma_{2\pi})\) of \(\mathcal{C}_{2\pi}\), where the covering map \(\sigma_{2\pi}:\mathbb{R}^{2}\to\mathcal{C}_{2\pi}\) is defined by \[\sigma_{2\pi}(t,x):=\big{(}e^{it},x\big{)},\quad\forall(t,x)\in\mathbb{R}^{2}. \tag{2.3}\] Then, the lifted graph \(\mathcal{G}^{\dagger}\) can be constructed by taking every lift of every vertex and every edge of \(\mathcal{G}\) in \(\mathcal{C}_{2\pi}\) to the covering space \(\mathbb{R}^{2}\). We denote by \(\mathcal{VG}^{\dagger}\) and \(\mathcal{EG}^{\dagger}\) the set of vertices and edges of the lifted graph \(\mathcal{G}^{\dagger}\), respectively. Moreover, we can also construct the lift of the dual graph \(\widehat{\mathcal{G}}\) to the universal covering space \(\mathbb{R}^{2}\) in a similar way, and we denote it by \(\widehat{\mathcal{G}}^{\dagger}\). 
We adopt the following notational convention: if \(x\in\mathcal{VG}\) is a vertex, then we denote by \(\mathbf{x}\in\mathcal{VG}^{\dagger}\) a lift of \(x\); if \(e\in\mathcal{EG}\) is an edge, then we denote by \(\mathbf{e}\in\mathcal{EG}^{\dagger}\) a lift of \(e\); if \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\) is a dual vertex, then we denote by \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) a lift of \(\widehat{x}\). Moreover, if \((\mathcal{G},c)\) is a finite weighted planar graph embedded in \(\mathcal{C}_{2\pi}\), we can naturally assign to each lifted edge \(\mathbf{e}\) the conductance \(c_{\mathbf{e}}^{\dagger}:=c_{e}\), and we denote by \((\mathcal{G}^{\dagger},c^{\dagger})\) the lifted weighted graph. By definition, the lifted graph \(\mathcal{G}^{\dagger}\) is periodic in the sense that if \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\in\mathcal{VG}^{\dagger}\) are two points in \(\mathbb{R}^{2}\) such that \(\mathrm{Im}(\mathbf{x}_{1})=\mathrm{Im}(\mathbf{x}_{2})\) and \(|\operatorname{Re}(\mathbf{x}_{1})-\operatorname{Re}(\mathbf{x}_{2})|\in 2\pi\mathbb{N}_{0}\), then \(\sigma_{2\pi}(\mathbf{x}_{1})=\sigma_{2\pi}(\mathbf{x}_{2})\). Finally, for a set \(K\subset\mathbb{R}^{2}\), we write \[\mathcal{VG}^{\dagger}(K):=\big{\{}\mathbf{x}\in\mathcal{VG}^{\dagger}\;:\; \mathbf{x}\in K\big{\}},\qquad\mathcal{V}\widehat{\mathcal{G}}^{\dagger}(K):=\big{\{} \widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\;:\;\widehat{\mathbf{x}}\in K\big{\}}.\] Before proceeding, we recall the following simple result. An oriented path in \(\mathcal{G}\) is a collection of oriented edges \(e_{1}\cdots e_{n}\) in \(\mathcal{EG}\) such that \(e_{j}^{+}=e_{j+1}^{-}\) for all \(j\in[n-1]\). Furthermore, if also \(e_{n}^{+}=e_{1}^{-}\), then \(e_{1}\cdots e_{n}\) is called an oriented loop.

**Lemma 2.2**.: _Let \(e_{1}\cdots e_{n}\) be an oriented path in \(\mathcal{G}\), and let \(\mathbf{e}_{1}\) be a lift of \(e_{1}\) to the lifted graph \(\mathcal{G}^{\dagger}\). Then there exists a unique path \(\mathbf{e}_{1}\cdots\mathbf{e}_{n}\) in \(\mathcal{G}^{\dagger}\) such that \(\mathbf{e}_{j}\) is a lift of \(e_{j}\), for all \(j\in[n]_{2}\)._

The main advantage of working in the universal cover of the cylinder is that we can keep track of the winding of paths.

**Definition 2.3**.: Let \(0\leq t_{1}<t_{2}\), consider a path \(P:[t_{1},t_{2}]\to\mathcal{C}_{2\pi}\), and let \(\mathbf{P}:[t_{1},t_{2}]\to\mathbb{R}^{2}\) be a lift of \(P\) to the universal cover. We define the winding of \(P\) by letting \[\mathrm{wind}_{2\pi}(P):=\frac{\operatorname{Re}(\mathbf{P}(t_{2}))- \operatorname{Re}(\mathbf{P}(t_{1}))}{2\pi}.\] We say that \(P\) winds around the cylinder if \(|\,\mathrm{wind}_{2\pi}(P)|\geq 1\). We say that \(P\) does a noncontractible loop around the cylinder if there exist times \(t_{1}\leq s_{1}<s_{2}\leq t_{2}\) such that \(P|_{[s_{1},s_{2}]}\) winds around the cylinder and \(P(s_{1})=P(s_{2})\).
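To make Definition 2.3 concrete, the following Python sketch computes the winding of a discrete cylinder path from its angular coordinate by first lifting the path to the universal cover. The lift is obtained by the usual unwrapping procedure, which is valid under the assumption that consecutive angular increments are smaller than \(\pi\) in absolute value; the function name is our own.

```python
import numpy as np

def wind_2pi(theta):
    """Winding of a discrete cylinder path whose angular coordinates are
    `theta` (in radians).  np.unwrap reconstructs Re of the lifted path,
    and the winding is the normalized total horizontal displacement."""
    lifted = np.unwrap(np.asarray(theta, dtype=float))
    return (lifted[-1] - lifted[0]) / (2.0 * np.pi)

# A path that goes around the cylinder three times:
theta = np.linspace(0.0, 6.0 * np.pi, 200) % (2.0 * np.pi)
print(wind_2pi(theta))  # approximately 3.0
```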
### Random walks and electrical networks

In this subsection, we briefly recall the main concepts in the theory of electrical networks, and we refer to [16, 15] for a complete introduction.

Figure 4: **Left:** A doubly marked finite weighted planar graph \((\mathcal{G},c,v_{0},v_{1})\), drawn in gray, properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) with the two marked vertices \(v_{0}\) and \(v_{1}\) drawn in black. **Middle:** The same doubly marked finite weighted planar graph as in the left figure together with its dual weighted planar graph \((\widehat{\mathcal{G}},\widehat{c})\), drawn in red, properly embedded in \(\mathcal{C}_{2\pi}\). **Right:** A portion of the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\), drawn in grey, and the associated dual lifted graph \((\widehat{\mathcal{G}}^{\dagger},\widehat{c}^{\dagger})\), drawn in red. In both the left and middle figures, the two vertical lines with an arrow are identified with each other.

Let \((\mathcal{G},c,v_{0},v_{1})\) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) in the sense of Definition 1.1. The conductance of a vertex \(x\in\mathcal{VG}\) is denoted by \(\pi(x)\) and is defined to be the sum of the conductances of all the edges incident to \(x\), i.e., \[\pi(x):=\sum_{e\in\mathcal{EG}(x)}c_{e},\quad\forall x\in\mathcal{VG}.\]

**Random walk.** The random walk on \((\mathcal{G},c)\) is the discrete time Markov chain \(X=\{X_{n}\}_{n\in\mathbb{N}_{0}}\) with state space \(\mathcal{VG}\) such that, for all \(n\in\mathbb{N}_{0}\), \[\mathbb{P}\big{(}X_{n+1}=y\mid X_{n}=x\big{)}=\begin{cases}c_{xy}/\pi(x),&y\in \mathcal{VG}(x),\\ 0,&\text{otherwise}.\end{cases}\] Given a vertex \(x\in\mathcal{VG}\), we write \(\mathbb{P}_{x}\) and \(\mathbb{E}_{x}\) for the law and expectation of \(X\) started from \(x\). Moreover, we may write \(X^{x}\) in order to emphasize that the random walk \(X\) is started from the vertex \(x\in\mathcal{VG}\). With a slight abuse of notation, we will also denote by \(X=\{X_{t}\}_{t\geq 0}\) the continuous time version of the random walk, where the continuous path is generated by piecewise linear interpolation at constant speed. If every edge of the graph has unit conductance, we call the corresponding walk the simple random walk. We emphasize that, given a random walk \(X\) on \((\mathcal{G},c)\), we can canonically lift it to the lifted weighted planar graph \((\mathcal{G}^{\dagger},c^{\dagger})\), and we denote the resulting walk by \(\mathbf{X}\). If \(X^{x}\) is started from a point \(x\in\mathcal{VG}\), then we need to specify the lift \(\mathbf{x}\in\sigma_{2\pi}^{-1}(x)\) of \(x\) from which the lifted walk \(\mathbf{X}^{\mathbf{x}}\) is started. Similar notation will be also adopted for the random walk on the dual graph.

**Estimate on the total variation distance.** We now state and prove an elementary lemma for general weighted planar graphs which allows us to compare the total variation distance of the exit positions from a set for two random walks started from two distinct points.

**Lemma 2.4**.: _Let \((\mathcal{G},c)\) be a finite weighted planar graph and let \(W\subset\mathcal{VG}\). For \(x\in\mathcal{VG}\), let \(X^{x}\) be the random walk on \((\mathcal{G},c)\) started from \(x\) and let \(\tau_{x}\) be the first time that \(X^{x}\) hits \(W\). Then, for \(x\), \(y\in\mathcal{VG}\setminus W\), it holds that_ \[\mathrm{d}^{\mathrm{TV}}\big{(}X^{x}_{\tau_{x}},X^{y}_{\tau_{y}}\big{)}\leq \mathbb{P}\big{(}X^{x}|_{[0,\tau_{x}]}\text{ does not disconnect $y$ from $W$}\big{)},\] _where \(\mathrm{d}^{\mathrm{TV}}\) denotes the total variation distance._

Proof.: The proof is a variant of [12, Lemma 3.12], with the difference that one should consider a weighted spanning tree instead of a uniform spanning tree of the finite weighted planar graph \((\mathcal{G},c)\). For the reader's convenience, we give a proof here. The lemma is a consequence of Wilson's algorithm. 
Consider the weighted spanning tree \(\mathcal{T}\) of the finite weighted planar graph \((\mathcal{G},c)\), where all vertices of \(W\) are wired to a single point. We recall that the weighted spanning tree \(\mathcal{T}\) is chosen randomly from among all the spanning trees with probability proportional to the product of the conductances along the edges of the tree. For \(x\in\mathcal{VG}\), let \(L^{x}\) be the unique path in \(\mathcal{T}\) from \(x\) to \(W\). For a path \(P\) in \(\mathcal{G}\), write \(\mathrm{LE}(P)\) for its chronological loop erasure. By Wilson's algorithm (see [13, Theorem 4.1]), we can generate the union \(L^{x}\cup L^{y}\) by using the following procedure.

* Run \(X^{y}\) until time \(\tau_{y}\) and generate the loop erasure \(\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\).
* Conditional on \(X^{y}|_{[0,\tau_{y}]}\), run \(X^{x}\) until the first time \(\widetilde{\tau}_{x}\) that it hits either \(\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\) or \(W\).
* Set \(L^{x}\cup L^{y}=\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\cup\mathrm{LE}(X^{x}|_{[0,\widetilde{\tau}_{x}]})\).

Note that \(L^{y}=\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\) in the above procedure. Interchanging the roles of \(x\) and \(y\) in the above procedure shows that \(L^{x}\) and \(\mathrm{LE}(X^{x}|_{[0,\tau_{x}]})\) have the same distribution. When constructing \(L^{x}\cup L^{y}\) as described above, the points at which \(L^{x}\) and \(L^{y}\) hit \(W\) coincide if \(X^{x}\) hits \(\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\) before reaching \(W\). In particular, this occurs when \(X^{x}|_{[0,\tau_{x}]}\) disconnects \(y\) from \(W\). Thus, there is a coupling between \(\mathrm{LE}(X^{x}|_{[0,\tau_{x}]})\) and \(\mathrm{LE}(X^{y}|_{[0,\tau_{y}]})\) under which the probability that these two loop erasures hit \(W\) at the same point is at least \(\mathbb{P}_{x}(X|_{[0,\tau_{x}]}\) disconnects \(y\) from \(W)\). Now, by observing that \(X^{x}_{\tau_{x}}\) corresponds to the point at which \(\mathrm{LE}(X^{x}|_{[0,\tau_{x}]})\) first hits \(W\), and similarly for \(y\) in place of \(x\), we obtain the desired result. 

**Electrical network.** There is an extremely useful correspondence between random walks and Kirchhoff's theory of electric networks. Let \((\mathcal{G},c,v_{0},v_{1})\) be as above and suppose that every edge \(e\in\mathcal{EG}\) is made of conducting wire with conductance equal to \(c_{e}\). Connect a battery between \(v_{1}\) and \(v_{0}\) so that the voltage at \(v_{1}\) is equal to one and the voltage at \(v_{0}\) is equal to zero. Then certain currents will flow along the edges of the graph, establishing the voltage at each vertex \(x\in\mathcal{VG}\setminus\{v_{0},v_{1}\}\). An immediate consequence of physical laws is that the voltage function is harmonic on \(\mathcal{VG}\) except at \(v_{0}\) and \(v_{1}\). More formally, we have the following definition.

**Definition 2.5**.: The voltage function associated to the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) is the unique function \(\mathfrak{h}:\mathcal{VG}\to[0,1]\) such that \(\mathfrak{h}(v_{0})=0\), \(\mathfrak{h}(v_{1})=1\), and \[\mathfrak{h}(x)=\frac{1}{\pi(x)}\sum_{y\in\mathcal{VG}(x)}c_{xy}\mathfrak{h}( y),\quad\forall x\in\mathcal{VG}\setminus\{v_{0},v_{1}\}.\] In view of the role that \(\mathfrak{h}\) will play in the construction of the Smith embedding, we will also call \(\mathfrak{h}\) the _height coordinate function_. 
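Since \(\mathcal{G}\) is finite, Definition 2.5 is a finite linear system: harmonicity at each vertex of \(\mathcal{VG}\setminus\{v_{0},v_{1}\}\) gives one linear equation, and the boundary conditions pin \(\mathfrak{h}\) down. The following self-contained Python sketch (our own illustration, with a hypothetical input format) solves this system; by the hitting-time representation recalled below, the output also equals the probability of hitting \(v_{1}\) before \(v_{0}\), which can be checked by Monte Carlo simulation of the walk.

```python
import numpy as np

def voltage(vertices, conductances, v0, v1):
    """Solve for h with h(v0) = 0, h(v1) = 1 and h harmonic elsewhere.
    `conductances` maps unordered pairs frozenset({x, y}) to c_xy > 0."""
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    A, b = np.zeros((n, n)), np.zeros(n)
    for v in vertices:
        i = idx[v]
        if v in (v0, v1):
            A[i, i], b[i] = 1.0, float(v == v1)   # boundary conditions
            continue
        for e, c in conductances.items():
            if v in e:
                (w,) = e - {v}
                A[i, i] += c            # contributes to pi(v)
                A[i, idx[w]] -= c       # -c_vw * h(w)
    return dict(zip(vertices, np.linalg.solve(A, b)))

# A 4-cycle v0 - a - v1 - b - v0 with one doubled conductance:
vs = ["v0", "a", "v1", "b"]
cs = {frozenset(p): c for p, c in
      [(("v0", "a"), 1.0), (("a", "v1"), 2.0),
       (("v1", "b"), 1.0), (("b", "v0"), 1.0)]}
h = voltage(vs, cs, "v0", "v1")
print(h["a"], h["b"])  # 2/3 and 1/2, the hitting probabilities
```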
Given an edge \(e\in\mathcal{EG}\), we say that \(e\) is _harmonically oriented_ if \(\mathfrak{h}(e^{+})\geq\mathfrak{h}(e^{-})\). In what follows, unless otherwise specified, we always consider the harmonic orientation of the edges in \(\mathcal{EG}\). It is a remarkable fact that the voltage function \(\mathfrak{h}\) admits a representation in terms of a random walk \(X\) on \((\mathcal{G},c)\). More precisely, if for all \(v\in\mathcal{VG}\) we define \(\tau_{v}\) to be the first hitting time of \(v\) by \(X\), then one can easily check that \[\mathfrak{h}(x)=\mathbb{P}_{x}\big{(}\tau_{v_{1}}<\tau_{v_{0}}\big{)}, \quad\forall x\in\mathcal{VG}.\] Moreover, since the voltage function \(\mathfrak{h}\) is harmonic on \(\mathcal{VG}\setminus\{v_{0},v_{1}\}\), if \(X^{x}\) is a random walk on \((\mathcal{G},c)\) started from \(x\in\mathcal{VG}\) and killed upon reaching the set \(\{v_{0},v_{1}\}\), then the process \(\mathfrak{h}(X^{x})\) is a martingale with respect to the filtration generated by \(X^{x}\).

**Remark 2.6**.: We note that we can canonically lift the voltage function \(\mathfrak{h}\) to the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\) by setting \(\mathfrak{h}^{\dagger}:\mathcal{VG}^{\dagger}\to[0,1]\) as follows \[\mathfrak{h}^{\dagger}(\mathbf{x}):=\mathfrak{h}(\sigma_{2\pi}(\mathbf{x}) ),\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger}. \tag{2.4}\]

**Remark 2.7**.: Note that we can naturally extend the definition of the voltage function to a function on the metric graph \(\mathbb{G}\) associated to \(\mathcal{G}\), i.e., we can define the function \(\mathfrak{h}:\mathbb{G}\to[0,1]\). This extension can be done by linearly interpolating the values at the endpoints of every edge \(e\in\mathcal{EG}\). More precisely, if \(x\in\mathbb{G}\) is a point lying on the harmonically oriented edge \(e\in\mathcal{EG}\), then we set \[\mathfrak{h}(x):=(\mathfrak{h}(e^{+})-\mathfrak{h}(e^{-}))\mathrm{d}^{ \mathbb{G}}(e^{-},x)+\mathfrak{h}(e^{-}).\]

**The flow induced by the voltage function.** We finish this subsection by introducing the flow across oriented edges induced by the voltage function \(\mathfrak{h}\). We denote such a flow by \(\nabla\mathfrak{h}:\mathcal{EG}\to\mathbb{R}\) and we define it as follows \[\nabla\mathfrak{h}(e):=c_{e}\big{(}\mathfrak{h}(e^{+})-\mathfrak{h}(e^{-}) \big{)},\quad\forall e\in\mathcal{EG}.\] The flow \(\nabla\mathfrak{h}\) satisfies the following well-known properties (see [14, Section 2.2]).

1. (_Antisymmetry_) For every oriented edge \(e\in\mathcal{EG}\), it holds that \[\nabla\mathfrak{h}(-e)=-\nabla\mathfrak{h}(e),\] where \(-e\) stands for the edge \(e\) endowed with the opposite orientation.
2. (_Kirchhoff's node law_) For all \(x\in\mathcal{VG}\setminus\{v_{0},v_{1}\}\), it holds that \[\sum_{e\in\mathcal{EG}(x)}\nabla\mathfrak{h}(e)=0,\] (2.5) where here the orientation of each \(e\in\mathcal{EG}(x)\) is fixed by letting \(e^{-}=x\).
3. (_Kirchhoff's cycle law_) For every directed cycle \(e_{1}\cdots e_{n}\), it holds that \[\sum_{i=1}^{n}\frac{1}{c_{e_{i}}}\nabla\mathfrak{h}(e_{i})=0.\] (2.6)

We denote the strength of the flow \(\nabla\mathfrak{h}\) induced by \(\mathfrak{h}\) by setting \[\mathfrak{n}:=\sum_{e\in\mathcal{EG}^{\uparrow}(v_{0})}\nabla\mathfrak{h}(e), \tag{2.7}\] where \(\mathcal{EG}^{\uparrow}(v_{0})\) denotes the set of harmonically oriented edges in \(\mathcal{EG}\) with tails equal to \(v_{0}\). 
Furthermore, thanks to the harmonicity of \(\mathfrak{h}\), a simple computation yields that \(\mathfrak{n}=\sum_{e\in\mathcal{EG}^{\downarrow}(v_{1})}\nabla\mathfrak{h}(e)\), where \(\mathcal{EG}^{\downarrow}(v_{1})\) denotes the set of harmonically oriented edges in \(\mathcal{EG}\) with heads equal to \(v_{1}\).

### Discrete harmonic conjugate

Let \((\mathcal{G},c,v_{0},v_{1})\) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) according to Definition 1.1, and let \((\widehat{\mathcal{G}},\widehat{c})\) be the associated weighted dual planar graph properly embedded in \(\mathcal{C}_{2\pi}\) according to Definition 1.2. Moreover, let \(\mathfrak{h}:\mathcal{VG}\rightarrow[0,1]\) be the voltage function associated with \((\mathcal{G},c,v_{0},v_{1})\) as defined in Definition 2.5. We want to define the discrete harmonic conjugate function of \(\mathfrak{h}\), i.e., the function \(\mathfrak{w}\) defined on the set of dual vertices \(\mathcal{V}\widehat{\mathcal{G}}\) that satisfies the discrete Cauchy-Riemann equation. More formally, for every directed edge \(e\in\mathcal{EG}\) and its corresponding oriented dual edge \(\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\), the function \(\mathfrak{w}\) should satisfy the following identity \[\nabla\mathfrak{w}(\widehat{e}):=\widehat{c}_{\widehat{e}}\big{(}\mathfrak{ w}(\widehat{e}^{+})-\mathfrak{w}(\widehat{e}^{-})\big{)}=\mathfrak{h}(e^{+})- \mathfrak{h}(e^{-}), \tag{2.8}\] where we recall that \(\widehat{c}_{\widehat{e}}=1/c_{e}\). To precisely define the function \(\mathfrak{w}\) specified above, it will be more convenient to work with the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\) and its dual \((\widehat{\mathcal{G}}^{\dagger},\widehat{c}^{\dagger})\). More precisely, we consider the lifted voltage function \(\mathfrak{h}^{\dagger}:\mathcal{VG}^{\dagger}\rightarrow[0,1]\). We fix an arbitrary vertex \(\widehat{\mathbf{x}}_{0}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) on the lifted dual graph, and for every \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) we consider a directed path of lifted dual edges \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) connecting \(\widehat{\mathbf{x}}_{0}\) to \(\widehat{\mathbf{x}}\).

**Remark 2.8**.: We recall that we are assuming that the two marked vertices \(v_{0}\) and \(v_{1}\) do not lie on the boundary of the same face (see Definition 1.1). In particular, this implies that there is a path of dual edges in \(\widehat{\mathcal{G}}\) that disconnects \(v_{0}\) from \(v_{1}\). Hence, the lifted dual graph \(\widehat{\mathcal{G}}^{\dagger}\) is connected, and we can always find a path connecting \(\widehat{\mathbf{x}}_{0}\) to any \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\).

We define the function \(\mathfrak{w}^{\dagger}:\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\to \mathbb{R}\) by setting \[\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}):=\sum_{j=1}^{n}\nabla\mathfrak{ h}^{\dagger}(\mathbf{e}_{j}),\quad\forall\widehat{\mathbf{x}}\in\mathcal{V} \widehat{\mathcal{G}}^{\dagger}, \tag{2.9}\] where \(\mathbf{e}_{j}\in\mathcal{EG}^{\dagger}\) is the oriented primal edge associated to \(\widehat{\mathbf{e}}_{j}\). We call the function \(\mathfrak{w}^{\dagger}\) defined in this way the _lifted discrete harmonic conjugate_ function associated to \(\mathfrak{h}^{\dagger}\) with base vertex \(\widehat{\mathbf{x}}_{0}\). 
The following lemma guarantees that \(\mathfrak{w}^{\dagger}\) is actually well-defined.

**Lemma 2.9**.: _For all \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\), the value \(\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}})\) defined in (2.9) does not depend on the choice of the directed path from \(\widehat{\mathbf{x}}_{0}\) to \(\widehat{\mathbf{x}}\). Moreover, for any \(\widehat{\mathbf{x}}_{1}\), \(\widehat{\mathbf{x}}_{2}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) such that \(\sigma_{2\pi}(\widehat{\mathbf{x}}_{1})=\sigma_{2\pi}(\widehat{\mathbf{x}}_{2})\), the following relation holds_ \[\frac{\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}_{1})-\mathfrak{w}^{ \dagger}(\widehat{\mathbf{x}}_{2})}{\mathfrak{n}}=\frac{\operatorname{Re}( \widehat{\mathbf{x}}_{1})-\operatorname{Re}(\widehat{\mathbf{x}}_{2})}{2\pi},\] _where we recall that \(\mathfrak{n}\) denotes the strength of the flow induced by \(\mathfrak{h}\) as defined in (2.7)._

Proof.: For the first part of the lemma, the proof is similar to that of [13, Lemma 3.2]. In particular, it is sufficient to prove that for any oriented loop \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) in \(\widehat{\mathcal{G}}^{\dagger}\), it holds that \[\sum_{j=1}^{n}\bigl{(}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{+})- \mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{-})\bigr{)}=0. \tag{2.10}\] The key observation in [13, Lemma 3.2] is that every oriented loop \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) in \(\widehat{\mathcal{G}}^{\dagger}\) can be written as the disjoint union of simple closed loops and of paths of length two consisting of a single dual edge traversed in both directions. Here, by a simple closed loop we mean that \(\widehat{\mathbf{e}}_{j}^{+}\neq\widehat{\mathbf{e}}_{k}^{+}\) for distinct \(j\), \(k\in[n]\) when \(n>2\), while when \(n=2\), we mean that \(\widehat{\mathbf{e}}_{1}\neq-\widehat{\mathbf{e}}_{2}\). Therefore, since (2.10) obviously holds if the path consists of a single dual edge traversed in both directions, we can assume without loss of generality that \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) is a simple counter-clockwise oriented closed loop. Let \(K\subset\mathbb{R}^{2}\) be the bounded connected component of \(\mathbb{R}^{2}\setminus\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\). Then, thanks to (2.8), it holds that \[\sum_{j=1}^{n}\bigl{(}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{+})- \mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{-})\bigr{)}=\sum_{j=1}^{n} \nabla\mathfrak{h}^{\dagger}(\mathbf{e}_{j})=\sum_{\mathbf{x}\in\mathcal{V} \mathcal{G}^{\dagger}(K)}\sum_{\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}( \mathbf{x})}\nabla\mathfrak{h}^{\dagger}(\mathbf{e}), \tag{2.11}\] where here the orientation of each edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}(\mathbf{x})\) is fixed by letting \(\mathbf{e}^{-}=\mathbf{x}\). The second equality in (2.11) follows from the following argument. Fix \(\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger}(K)\) and consider \(\mathbf{y}\in\mathcal{V}\mathcal{G}^{\dagger}(\mathbf{x})\). If \(\mathbf{y}\not\in K\), then \(\mathbf{x}\mathbf{y}=\mathbf{e}_{j}\) for some \(j\in[n]\), while if \(\mathbf{y}\in K\) then \(\nabla\mathfrak{h}^{\dagger}(\mathbf{x}\mathbf{y})\) cancels out with \(\nabla\mathfrak{h}^{\dagger}(\mathbf{y}\mathbf{x})\) thanks to the antisymmetry of \(\nabla\mathfrak{h}^{\dagger}\). 
The term on the right-hand side of (2.11) is equal to zero thanks to the boundedness of \(K\) and Kirchhoff's node law (2.5). Concerning the second part of the lemma, we can proceed as follows. Let \(\widehat{\mathbf{x}}_{1}\), \(\widehat{\mathbf{x}}_{2}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) be such that \(\sigma_{2\pi}(\widehat{\mathbf{x}}_{1})=\sigma_{2\pi}(\widehat{\mathbf{x}}_{2})\), and let \(k\in\mathbb{Z}\) be such that \(k=(\operatorname{Re}(\widehat{\mathbf{x}}_{2})-\operatorname{Re}(\widehat{ \mathbf{x}}_{1}))/(2\pi)\). Then we need to prove that \(\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}_{2})-\mathfrak{w}^{\dagger}( \widehat{\mathbf{x}}_{1})=k\mathfrak{n}\). In particular, thanks to the first part of the lemma, it is sufficient to consider an arbitrary directed path \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) in \(\widehat{\mathcal{G}}^{\dagger}\) connecting \(\widehat{\mathbf{x}}_{1}\) to \(\widehat{\mathbf{x}}_{2}\) and show that \(\sum_{j=1}^{n}(\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{+})- \mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{j}^{-}))=k\mathfrak{n}\). We assume first that \(k=1\). We can choose the lifted dual edges \([\widehat{\mathbf{e}}_{n}]\) in such a way that \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) is a simple path oriented from left to right. Now, using an argument similar to the one used above, it is not difficult to see that \(\sum_{j=1}^{n}\nabla\mathfrak{h}^{\dagger}(\mathbf{e}_{j})=\mathfrak{n}\), and so the conclusion follows in this case. Finally, the general case can be obtained easily: we can just "glue" together \(|k|\) copies of the path used in the case \(k=1\), reversing the orientation if \(k\) is negative. 

From the definition (2.9) of the function \(\mathfrak{w}^{\dagger}\), it follows that for every oriented edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\) and for the associated dual edge \(\widehat{\mathbf{e}}\in\mathcal{E}\widehat{\mathcal{G}}^{\dagger}\), it holds that \[\nabla\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}):=\widehat{c}_{\widehat{ \mathbf{e}}}^{\dagger}(\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}^{+})- \mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}^{-}))=\mathfrak{h}^{\dagger}( \mathbf{e}^{+})-\mathfrak{h}^{\dagger}(\mathbf{e}^{-}),\] i.e., \(\mathfrak{w}^{\dagger}\) satisfies the discrete Cauchy-Riemann equation. Moreover, an immediate application of Kirchhoff's cycle law (2.6) implies that the function \(\mathfrak{w}^{\dagger}\) is harmonic on \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\). Thanks to Lemma 2.9, we can define the discrete harmonic conjugate function of \(\mathfrak{h}\) as follows.

**Definition 2.10**.: The discrete harmonic conjugate function of \(\mathfrak{h}\) with base vertex \(\widehat{x}_{0}\in\mathcal{V}\widehat{\mathcal{G}}\) is the unique function \(\mathfrak{w}:\mathcal{V}\widehat{\mathcal{G}}\to\mathbb{R}/\mathfrak{n} \mathbb{Z}\) such that \(\mathfrak{w}(\widehat{x}_{0})=0\) and \[\mathfrak{w}(\widehat{x})=\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}})\bmod (\mathfrak{n}),\quad\forall\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}},\] where \(\mathfrak{w}^{\dagger}:\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\to\mathbb{R}\) is the function defined in (2.9) with base vertex an arbitrary lift of \(\widehat{x}_{0}\). In view of the role that \(\mathfrak{w}\) will play in the construction of the Smith embedding, we will also call \(\mathfrak{w}\) the _width coordinate function_. 
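Concretely, once the flow \(\nabla\mathfrak{h}\) is known, the values in (2.9) can be accumulated along any spanning tree of the lifted dual graph: Lemma 2.9 is exactly the statement that the result is path-independent and increases by \(\mathfrak{n}\) per winding around the cylinder. A minimal Python sketch of this accumulation, under a hypothetical input format:

```python
from collections import deque

def harmonic_conjugate(dual_vertices, dual_edges, flow, x0):
    """Accumulate the harmonic conjugate along a breadth-first spanning
    tree of the dual graph.  `dual_edges` is a list of oriented dual edges
    (xhat_minus, xhat_plus); `flow[i]` is the primal flow grad-h(e_i)
    across the i-th dual edge; w(x0) = 0 at the base vertex x0."""
    nbrs = {x: [] for x in dual_vertices}
    for i, (a, b) in enumerate(dual_edges):
        nbrs[a].append((b, +flow[i]))   # traverse the dual edge forwards
        nbrs[b].append((a, -flow[i]))   # antisymmetry of the flow
    w, queue = {x0: 0.0}, deque([x0])
    while queue:
        x = queue.popleft()
        for y, dw in nbrs[x]:
            if y not in w:
                w[y] = w[x] + dw
                queue.append(y)
    return w
```

On a genuine lifted graph, one recovers the width coordinate of Definition 2.10 by reducing these values modulo \(\mathfrak{n}\).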
**Remark 2.11**.: As for the case of the voltage function \(\mathfrak{h}\), we can naturally extend the definition of \(\mathfrak{w}\) to a function on the dual metric graph \(\widehat{\mathbb{G}}\), i.e., \(\mathfrak{w}:\widehat{\mathbb{G}}\to\mathbb{R}/\mathfrak{n}\mathbb{Z}\). To be precise, if \(\widehat{x}\in\widehat{\mathbb{G}}\) is a point on the edge \(\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\) and \(\mathfrak{w}(\widehat{e}^{+})\geq\mathfrak{w}(\widehat{e}^{-})\), then we set \[\mathfrak{w}(\widehat{x}):=(\mathfrak{w}(\widehat{e}^{+})-\mathfrak{w}(\widehat{ e}^{-}))\mathrm{d}^{\widehat{\mathbb{G}}}(\widehat{e}^{-},\widehat{x})+\mathfrak{w}( \widehat{e}^{-}).\]

### Construction of the Smith embedding

Let \((\mathcal{G},c,v_{0},v_{1})\) be a doubly marked finite weighted planar graph properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) according to Definition 1.1, and let \((\widehat{\mathcal{G}},\widehat{c})\) be the associated weighted dual planar graph properly embedded in \(\mathcal{C}_{2\pi}\) according to Definition 1.2. The aim of this subsection is to precisely define the _Smith embedding_ of \((\mathcal{G},c,v_{0},v_{1})\). As we have already explained in the introduction, the Smith embedding is built in terms of a tiling of a finite cylinder with rectangles in which every edge \(e\in\mathcal{EG}\) corresponds to a rectangle in the tiling, every vertex \(x\in\mathcal{VG}\) corresponds to the maximal horizontal segment tangent to all rectangles corresponding to the edges incident to \(x\), and every dual vertex \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\) corresponds to the maximal vertical segment tangent to all rectangles corresponding to the primal edges surrounding \(\widehat{x}\). The existence of such a tiling was first proven in [10] and then subsequently extended in [10].

**The main objects.** To precisely define the Smith embedding, it will be more convenient to work with the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\) and its dual \((\widehat{\mathcal{G}}^{\dagger},\widehat{c}^{\dagger})\). More precisely, we need to consider the lifted voltage function \(\mathfrak{h}^{\dagger}:\mathcal{VG}^{\dagger}\to[0,1]\) and its lifted discrete harmonic conjugate function \(\mathfrak{w}^{\dagger}:\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\to\mathbb{R}\). For every edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\), consider its harmonic orientation and let \(\widehat{\mathbf{e}}\in\mathcal{E}\widehat{\mathcal{G}}^{\dagger}\) be the corresponding oriented dual edge. We define the intervals \[\mathrm{I}_{\mathbf{e}}:=\big{[}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}^{ -}),\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}^{+})\big{]},\qquad\widehat{ \mathrm{I}}_{\widehat{\mathbf{e}}}:=\big{[}\mathfrak{h}^{\dagger}(\mathbf{e}^ {-}),\mathfrak{h}^{\dagger}(\mathbf{e}^{+})\big{]}.\] Then, we define the rectangle \(\mathrm{R}_{\mathbf{e}}\) associated to the edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\) by letting \[\mathrm{R}_{\mathbf{e}}:=\mathrm{I}_{\mathbf{e}}\times\widehat{\mathrm{I}}_{ \widehat{\mathbf{e}}}\subset\mathbb{R}\times[0,1],\quad\forall\mathbf{e}\in \mathcal{E}\mathcal{G}^{\dagger}. 
\tag{2.12}\] Recalling the definition (2.9) of the lifted discrete harmonic conjugate function \(\mathfrak{w}^{\dagger}\), it holds that \[\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}^{+})-\mathfrak{w}^{\dagger}( \widehat{\mathbf{e}}^{-})=c_{\mathbf{e}}(\mathfrak{h}^{\dagger}(\mathbf{e}^ {+})-\mathfrak{h}^{\dagger}(\mathbf{e}^{-})).\] Therefore, the aspect ratio of the rectangle \(\mathrm{R}_{\mathbf{e}}\) is equal to the conductance \(c_{\mathbf{e}}\) of the edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\). In particular, this implies that if an edge \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\) has unit conductance, then \(\mathrm{R}_{\mathbf{e}}\) is a square. For a vertex \(\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger}\), we define the closed horizontal line segment \(\mathrm{H}_{\mathbf{x}}\) by setting \[\mathrm{H}_{\mathbf{x}}:=\bigcup_{\mathbf{e}\in\mathcal{E}\mathcal{G}^{ \dagger,\downarrow}(\mathbf{x})}\mathrm{I}_{\mathbf{e}}\times\{\mathfrak{h}^{ \dagger}(\mathbf{x})\}\subset\mathbb{R}\times[0,1],\quad\forall\mathbf{x}\in \mathcal{V}\mathcal{G}^{\dagger}, \tag{2.13}\] where \(\mathcal{E}\mathcal{G}^{\dagger,\downarrow}(\mathbf{x})\) denotes the set of harmonically oriented lifted edges with heads equal to \(\mathbf{x}\). Finally, for a dual vertex \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\), we define the closed vertical line segment \(\mathrm{V}_{\widehat{\mathbf{x}}}\) by letting \[\mathrm{V}_{\widehat{\mathbf{x}}}:=\bigcup_{\widehat{\mathbf{e}}\in\mathcal{ E}\widehat{\mathcal{G}}^{\dagger,\downarrow}(\widehat{\mathbf{x}})}\{ \mathfrak{w}^{\dagger}(\widehat{\mathbf{x}})\}\times\widehat{\mathrm{I}}_{ \widehat{\mathbf{e}}}\subset\mathbb{R}\times[0,1],\quad\forall\widehat{\mathbf{x }}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}, \tag{2.14}\] where \(\mathcal{E}\widehat{\mathcal{G}}^{\dagger,\downarrow}(\widehat{\mathbf{x}})\) denotes the set of harmonically oriented lifted dual edges with heads equal to \(\widehat{\mathbf{x}}\). Thanks to the harmonicity of the lifted height coordinate function, we observe that in the definition of \(\mathrm{H}_{\mathbf{x}}\), one can replace \(\mathcal{E}\mathcal{G}^{\dagger,\downarrow}(\mathbf{x})\) with \(\mathcal{E}\mathcal{G}^{\dagger,\uparrow}(\mathbf{x})\), and similarly for \(\mathrm{V}_{\widehat{\mathbf{x}}}\).

**Construction of the tiling.** We recall that \(\mathfrak{n}\) denotes the strength of the flow induced by \(\mathfrak{h}\) as defined in (2.7). We consider the cylinder \[\mathcal{C}_{\mathfrak{n}}:=\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1],\] where \(\mathbb{R}/\mathfrak{n}\mathbb{Z}\) denotes the circle of length \(\mathfrak{n}\). We let \((\mathbb{R}\times[0,1],\sigma_{\mathfrak{n}})\) be the universal cover of \(\mathcal{C}_{\mathfrak{n}}\), where the covering map \(\sigma_{\mathfrak{n}}:\mathbb{R}\times[0,1]\to\mathcal{C}_{\mathfrak{n}}\) is defined by \[\sigma_{\mathfrak{n}}(t,x):=\big{(}e^{i2\pi t/\mathfrak{n}},x\big{)}, \quad\forall(t,x)\in\mathbb{R}\times[0,1]. \tag{2.15}\] We are now ready to define the tiling map. 
For each \(e\in\mathcal{E}\mathcal{G}\), \(x\in\mathcal{V}\mathcal{G}\), and \(\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\), we define the following objects \[\mathrm{R}_{e}:=\sigma_{\mathfrak{n}}(\mathrm{R}_{\mathbf{e}}),\qquad \mathrm{H}_{x}:=\sigma_{\mathfrak{n}}(\mathrm{H}_{\mathbf{x}}),\qquad\mathrm{V} _{\widehat{x}}:=\sigma_{\mathfrak{n}}(\mathrm{V}_{\widehat{\mathbf{x}}}), \tag{2.16}\] where \(\mathbf{e}\in\mathcal{E}\mathcal{G}^{\dagger}\), \(\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger}\), and \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\) are lifts of \(e\), \(x\), and \(\widehat{x}\), respectively. An immediate consequence of Lemma 2.9 is that \(\mathrm{R}_{e}\), \(\mathrm{H}_{x}\), and \(\mathrm{V}_{\widehat{x}}\) are well-defined, i.e., they do not depend on the particular choice of the lifts \(\mathbf{e}\), \(\mathbf{x}\), and \(\widehat{\mathbf{x}}\). The following properties are well-known (see [10, Theorem 3.1]).

1. The collection of rectangles \(\{\mathrm{R}_{e}\}_{e\in\mathcal{EG}}\) constitutes a tiling of \(\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\), i.e., for each pair of distinct edges \(e_{1}\), \(e_{2}\in\mathcal{EG}\), the interiors of the rectangles \(\mathrm{R}_{e_{1}}\) and \(\mathrm{R}_{e_{2}}\) are disjoint and \(\cup_{e\in\mathcal{EG}}\mathrm{R}_{e}=\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\).
2. For any two distinct edges \(e_{1}\), \(e_{2}\in\mathcal{EG}\), the interiors of the vertical sides of the rectangles \(\mathrm{R}_{e_{1}}\) and \(\mathrm{R}_{e_{2}}\) have a non-trivial intersection only if \(e_{1}\) and \(e_{2}\) both lie on the boundary of some common face of \(\mathcal{G}\).
3. Two rectangles intersect along their horizontal (resp. vertical) boundaries if and only if the corresponding primal (resp. dual) edges share an endpoint.

We note that if \(e\in\mathcal{EG}\) is such that no current flows through it, i.e., \(\mathfrak{h}(e^{-})=\mathfrak{h}(e^{+})\), then the corresponding rectangle \(\mathrm{R}_{e}\) is degenerate and consists only of a single point. We also remark that the existence of the aforementioned tiling was proven by Benjamini and Schramm in [1]. Originally, their proof was stated specifically for the case of edges with unit conductance; however, it can be readily extended to our setting.

**Definition 2.12** (Tiling map).: The tiling map associated to the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) is the map \[\mathcal{S}:\mathcal{EG}\cup\mathcal{VG}\cup\mathcal{V}\widehat{\mathcal{G}} \rightarrow\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\] such that \[\mathcal{S}(e):=\mathrm{R}_{e},\quad\forall e\in\mathcal{EG};\qquad\mathcal{S} (x):=\mathrm{H}_{x},\quad\forall x\in\mathcal{VG};\qquad\mathcal{S}(\widehat{ x}):=\mathrm{V}_{\widehat{x}},\quad\forall\widehat{x}\in\mathcal{V}\widehat{ \mathcal{G}},\] where \(\mathrm{R}_{e}\), \(\mathrm{H}_{x}\), and \(\mathrm{V}_{\widehat{x}}\) are as defined in (2.16). The image of the tiling map \(\mathcal{S}\) is called the Smith diagram associated to \((\mathcal{G},c,v_{0},v_{1})\). We refer to Figure 1 for an illustration of the Smith diagram associated to a given quadruplet \((\mathcal{G},c,v_{0},v_{1})\).

**Remark 2.13**.: Since the height coordinate function \(\mathfrak{h}\) can be extended to the metric graph \(\mathbb{G}\), we can view each rectangle \(\mathrm{R}_{e}\) of the tiling as being foliated into horizontal segments, one for each inner point of the corresponding edge \(e\in\mathcal{EG}\). 
Similarly, since the width coordinate function \(\mathfrak{w}\) can be extended to the dual metric graph \(\widehat{\mathbb{G}}\), we can also view each rectangle \(\mathrm{R}_{e}\) of the tiling as being foliated into vertical segments, one for each inner point of the corresponding dual edge \(\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\).

It will also be convenient to introduce the lifted tiling map associated to \((\mathcal{G},c,v_{0},v_{1})\), which is the map \[\mathcal{S}^{\dagger}:\mathcal{EG}^{\dagger}\cup\mathcal{VG}^{\dagger}\cup \mathcal{V}\widehat{\mathcal{G}}^{\dagger}\rightarrow\mathbb{R}\times[0,1]\] such that \(\mathcal{S}^{\dagger}(\mathbf{x}):=\mathrm{H}_{\mathbf{x}}\) for each \(\mathbf{x}\in\mathcal{VG}^{\dagger}\), \(\mathcal{S}^{\dagger}(\mathbf{e}):=\mathrm{R}_{\mathbf{e}}\) for each \(\mathbf{e}\in\mathcal{EG}^{\dagger}\), and \(\mathcal{S}^{\dagger}(\widehat{\mathbf{x}}):=\mathrm{V}_{\widehat{\mathbf{x}}}\) for each \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger}\). We emphasize that, since the collection \(\{\mathrm{R}_{e}\}_{e\in\mathcal{EG}}\) forms a tiling of the cylinder \(\mathbb{R}/\mathfrak{n}\mathbb{Z}\times[0,1]\), the collection of rectangles \(\{\mathrm{R}_{\mathbf{e}}\}_{\mathbf{e}\in\mathcal{EG}^{\dagger}}\) forms a periodic tiling of \(\mathbb{R}\times[0,1]\) of period \(\mathfrak{n}\). We are now ready to precisely define the Smith embedding.

**Definition 2.14** (Smith embedding).: The Smith embedding associated to the quadruplet \((\mathcal{G},c,v_{0},v_{1})\) is the function \(\hat{\mathcal{S}}:\mathcal{VG}\rightarrow\mathbb{R}/\mathfrak{n}\mathbb{Z} \times[0,1]\) such that \[\hat{\mathcal{S}}(x)=\mathrm{mid}(\mathrm{H}_{x}),\quad\forall x\in\mathcal{VG},\] where \(\mathrm{mid}(\mathrm{H}_{x})\) denotes the middle point of the horizontal line segment \(\mathcal{S}(x)\). Moreover, we define the lifted Smith embedding \(\hat{\mathcal{S}}^{\dagger}:\mathcal{VG}^{\dagger}\rightarrow\mathbb{R}\times [0,1]\) as the map that assigns to each \(\mathbf{x}\in\mathcal{VG}^{\dagger}\) the middle point of the horizontal line segment \(\mathcal{S}^{\dagger}(\mathbf{x})\).

We emphasize once again that the choice to define the Smith embedding by picking the middle point of each horizontal line segment is rather arbitrary. Indeed, the main result of this paper holds also if one chooses an arbitrary point inside each horizontal segment. Finally, for technical reasons, we also need to introduce the following map.

**Definition 2.15**.: We define the map \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}\) that assigns to each vertex \(\mathbf{x}\in\mathcal{VG}^{\dagger}\) the random variable \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\mathbf{x})\) which is uniformly distributed on the horizontal line segment \(\mathcal{S}^{\dagger}(\mathbf{x})\).
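Assembling the above definitions, the Smith diagram is fully determined by the two harmonic coordinates: each harmonically oriented primal edge contributes the rectangle (2.12), and each vertex is sent to the midpoint of its horizontal segment. The following Python sketch (hypothetical input format, in the spirit of the earlier sketches, working with lifted real-valued coordinates) makes this assembly explicit:

```python
def smith_rectangles(edges, duals, h, w):
    """Rectangle R_e of (2.12) for every edge e.  `edges` maps e to its
    harmonically oriented endpoints (e_minus, e_plus); `duals` maps e to
    the corresponding oriented dual endpoints; h and w are dicts holding
    the (lifted) height and width coordinates."""
    rects = {}
    for e, (em, ep) in edges.items():
        dm, dp = duals[e]
        rects[e] = ((w[dm], w[dp]), (h[em], h[ep]))
    return rects

def smith_embedding(vertices, edges, rects, h):
    """Definition 2.14: send x to the midpoint of the horizontal segment
    H_x, the union of the top sides of the rectangles of the edges whose
    head is x.  (For v0, which has no incoming harmonically oriented edge,
    one would use the outgoing edges instead, as in the text.)"""
    emb = {}
    for x in vertices:
        tops = [rects[e][0] for e, (em, ep) in edges.items() if ep == x]
        if tops:
            lo = min(t[0] for t in tops)
            hi = max(t[1] for t in tops)
            emb[x] = ((lo + hi) / 2.0, h[x])
    return emb
```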
## 3 Some properties of the Smith embedding

In this section, we collect some results that follow directly from the construction of the Smith embedding. We fix throughout this section a doubly marked finite weighted planar graph \((\mathcal{G},c,v_{0},v_{1})\) properly embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) according to Definition 1.1, and we also consider the associated weighted dual planar graph \((\widehat{\mathcal{G}},\widehat{c})\) properly embedded in \(\mathcal{C}_{2\pi}\) according to Definition 1.2. In what follows, we consider the metric graph \(\mathbb{G}\) associated to \(\mathcal{G}\), and we let \(\mathfrak{h}:\mathbb{G}\to[0,1]\) be the extended height coordinate function as specified in Remark 2.7. Furthermore, we also consider the dual metric graph \(\widehat{\mathbb{G}}\), and we let \(\mathfrak{w}:\widehat{\mathbb{G}}\to\mathbb{R}/\mathfrak{n}\mathbb{Z}\) be the extended width coordinate function as specified in Remark 2.11.

### Adding new vertices

In this subsection, we see how declaring a finite number of interior points of some edges of the graph to be vertices affects the height coordinate function and the random walk on the graph. More precisely, we start with the following definition.

**Definition 3.1**.: Let \(W\subset\mathbb{G}\) be a finite subset of the metric graph. We define the weighted planar graph \((\mathcal{G}^{\prime},c^{\prime})\) associated to \((\mathcal{G},c)\) and \(W\) as follows:

1. The set of vertices \(\mathcal{VG}^{\prime}\) is given by \(\mathcal{VG}\cup W\);
2. If the interior of an edge \(e\in\mathcal{EG}\) contains \(n\in\mathbb{N}\) points of \(W\), then \(e\) is split into \(n+1\) new edges \([e^{\prime}_{n+1}]\) according to the points in the interior of \(e\). The edge \(e\) remains unchanged otherwise.
3. If an edge \(e\in\mathcal{EG}\) is split into \([e^{\prime}_{n+1}]\) new edges, for some \(n\in\mathbb{N}\), then we set \[c^{\prime}_{e^{\prime}_{i}}:=\frac{c_{e}}{\mathrm{d}^{\mathbb{G}}\big{(}e^{ \prime,-}_{i},e^{\prime,+}_{i}\big{)}},\quad\forall i\in[n+1].\] The conductance of \(e\) remains unchanged otherwise.

The weighted dual graph \((\widehat{\mathcal{G}}^{\prime},\widehat{c}^{\prime})\) can be naturally constructed from \((\mathcal{G}^{\prime},c^{\prime})\).

**Remark 3.2**.: At the level of the Smith diagram, adding new vertices to the interior of some edges of the graph according to the procedure described above corresponds to horizontally dissecting the rectangles associated to such edges. More precisely, let us assume for simplicity that only one point is added to the interior of an edge \(e\), and let \(e^{\prime}_{1}\) and \(e^{\prime}_{2}\) be the new edges into which \(e\) is split. Suppose that \(e^{\prime,-}_{1}=e^{-}\) and \(e^{\prime,+}_{2}=e^{+}\). Let \(\mathcal{S}^{\prime}\) be the tiling map associated to the new weighted graph. Then it is immediate to check that \(\mathcal{S}(e)=\mathcal{S}^{\prime}(e^{\prime}_{1})\cup\mathcal{S}^{\prime}(e ^{\prime}_{2})\). In particular, the rectangles \(\mathcal{S}^{\prime}(e^{\prime}_{1})\) and \(\mathcal{S}^{\prime}(e^{\prime}_{2})\) have the same width as the rectangle \(\mathcal{S}(e)\), and the height of \(\mathcal{S}^{\prime}(e^{\prime}_{1})\) is proportional to \(\mathrm{d}^{\mathbb{G}}(e^{\prime,-}_{1},e^{\prime,+}_{1})\), while that of \(\mathcal{S}^{\prime}(e^{\prime}_{2})\) is proportional to \(\mathrm{d}^{\mathbb{G}}(e^{\prime,-}_{2},e^{\prime,+}_{2})\).

Let \(\mathfrak{h}^{\prime}:\mathcal{VG}^{\prime}\to[0,1]\) be the voltage function associated to the quadruplet \((\mathcal{G}^{\prime},c^{\prime},v_{0},v_{1})\). The following lemma relates the function \(\mathfrak{h}\) with \(\mathfrak{h}^{\prime}\).

**Lemma 3.3**.: _For every \(x\in\mathcal{VG}\), it holds that \(\mathfrak{h}(x)=\mathfrak{h}^{\prime}(x)\)._

Proof.: We denote by \(K:=\#\{\mathcal{VG}^{\prime}\setminus\mathcal{VG}\}\) the number of new vertices added to the graph \(\mathcal{G}^{\prime}\). If \(K=0\), then the result follows immediately. 
Let us now assume that \(K=1\) and suppose that the edge \(e\in\mathcal{EG}\) is split into two new edges \(e^{\prime}_{1}\) and \(e^{\prime}_{2}\) with conductances \(c^{\prime}_{e^{\prime}_{1}}\) and \(c^{\prime}_{e^{\prime}_{2}}\) as specified in Definition 3.1. Then, the desired result follows immediately from the series law of electrical networks (cf. [16, Section 2.3.I]). The general case follows from a simple induction argument on \(K\). 

**Lemma 3.4**.: _For \(x\in\mathcal{VG}\), let \(X^{\prime,x}\) be the random walk on \((\mathcal{G}^{\prime},c^{\prime})\) started from \(x\). Let \(\tau_{0}:=0\) and, for every \(k\in\mathbb{N}_{0}\), define inductively \(\tau_{k+1}:=\inf\{j>\tau_{k}\,:\,X^{\prime,x}_{j}\in\mathcal{VG}\}\). Then \(\{X^{\prime,x}_{\tau_{k}}\}_{k\in\mathbb{N}}\) has the same distribution as the random walk on \((\mathcal{G},c)\) started from \(x\)._

Proof.: As in the proof of Lemma 3.3, let \(K:=\#\{\mathcal{VG}^{\prime}\setminus\mathcal{VG}\}\). If \(K=0\), then the result is obvious. Let us now assume that \(K=1\) and let \(y\in W\) be the vertex added to \(\mathcal{G}^{\prime}\). We claim that the distribution of \(X_{\tau_{1}}^{\prime,x}\) is equal to that of \(X_{1}^{x}\). If none of the edges in \(\mathcal{EG}(x)\) contains \(y\), then the claim is obvious. Let us assume that there exists an edge in \(\mathcal{EG}(x)\) which contains \(y\). Then, thanks to an easy computation, one can verify that \[\mathbb{P}_{x}(X_{\tau_{1}}^{\prime}=v)=c_{xv}/\pi(x),\quad\forall v\in \mathcal{VG}(x).\] Therefore, thanks to the strong Markov property of \(X^{\prime,x}\), we get that \[\mathbb{P}_{x}(X_{\tau_{k}}^{\prime}=v\mid X_{\tau_{k-1}}^{\prime}=w)=\begin{cases} c_{vw}/\pi(w),&v\in\mathcal{VG}(w),\\ 0,&\text{otherwise},\end{cases}\quad\forall k\geq 2,\] which proves the result if \(K=1\). The general case follows from a simple induction argument on \(K\).

**Remark 3.5**.: Given a finite set \(\widehat{W}\subset\widehat{\mathbb{G}}\), following a procedure similar to the one described above, one can also construct the weighted dual graph \((\widehat{\mathcal{G}}^{\prime},\widehat{c}^{\prime})\) associated to \((\widehat{\mathcal{G}},\widehat{c})\) and \(\widehat{W}\). In particular, results similar to the ones stated above hold also for the respective dual counterparts. For example, if \(\mathfrak{w}^{\prime}:\mathcal{V}\widehat{\mathcal{G}}^{\prime}\to\mathbb{R}/ \mathfrak{n}\mathbb{Z}\) is the width coordinate function associated to the new weighted graph, then \(\mathfrak{w}^{\prime}\) restricted to \(\mathcal{V}\widehat{\mathcal{G}}\) coincides with the original width coordinate function. Moreover, adding new dual vertices to the interior of some dual edges corresponds to vertically dissecting the associated rectangles in the Smith diagram.
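As a quick sanity check on Lemma 3.3, one can split an edge numerically and verify that the voltages at the original vertices do not change: cutting an edge of conductance \(c_{e}\) at its midpoint produces two halves of \(\mathrm{d}^{\mathbb{G}}\)-length \(1/2\) and conductance \(2c_{e}\) each (Definition 3.1), and the series law returns the original \(c_{e}\). A small sketch of this check, reusing the hypothetical `voltage` solver from the earlier sketch:

```python
# Split the edge {a, v1} (conductance 2.0) of the 4-cycle example at its
# midpoint m: each half has length 1/2, hence conductance 2.0 / (1/2) = 4.0.
vs2 = ["v0", "a", "m", "v1", "b"]
cs2 = {frozenset(p): c for p, c in
       [(("v0", "a"), 1.0), (("a", "m"), 4.0), (("m", "v1"), 4.0),
        (("v1", "b"), 1.0), (("b", "v0"), 1.0)]}
h2 = voltage(vs2, cs2, "v0", "v1")
# Lemma 3.3: the voltages at the original vertices are unchanged.
print(h2["a"], h2["b"])  # 2/3 and 1/2, as before; h2["m"] = 5/6
```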
### Periodicity

We collect here some properties of the lifted Smith diagram that are due to its periodicity. We recall that the map \(\hat{\mathcal{S}}^{\dagger,\text{rand}}\) is defined in Definition 2.15.

**Lemma 3.6**.: _Let \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\in\mathcal{VG}^{\dagger}\) be such that \(\sigma_{2\pi}(\mathbf{x}_{1})=\sigma_{2\pi}(\mathbf{x}_{2})\). Then, uniformly over all possible realizations of \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{1})\) and of \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{2})\), it holds that_ \[\left|\frac{\operatorname{Re}(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{2}))-\operatorname{Re}(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{1}))}{\mathfrak{n}}-\frac{\operatorname{Re}(\mathbf{x}_{2})-\operatorname{Re}(\mathbf{x}_{1})}{2\pi}\right|\leq 1.\]

Proof.: We observe that this result is an easy consequence of Lemma 2.9. However, it does not follow immediately from Lemma 2.9, since the points \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{1})\) and \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{2})\) are chosen uniformly at random from the horizontal segments \(\mathcal{S}^{\dagger}(\mathbf{x}_{1})\) and \(\mathcal{S}^{\dagger}(\mathbf{x}_{2})\), respectively. We give here the details for completeness. Fix a realization of \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{1})\) and of \(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{2})\). Let \(\mathcal{EG}^{\dagger,\uparrow}(\mathbf{x}_{1})=[\mathbf{e}_{n}^{1}]\) be the set of lifted harmonically oriented edges with tails equal to \(\mathbf{x}_{1}\), ordered in such a way that \(\widehat{\mathbf{e}}_{1}^{1}\cdots\widehat{\mathbf{e}}_{n}^{1}\) forms a path in \(\widehat{\mathcal{G}}^{\dagger}\) oriented from left to right. Define \(\mathcal{EG}^{\dagger,\uparrow}(\mathbf{x}_{2})=[\mathbf{e}_{n}^{2}]\) in a similar way. Then, by construction of the map \(\hat{\mathcal{S}}^{\dagger,\text{rand}}\), it holds that \[\operatorname{Re}(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{1}))\in\big{[}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{1,-}),\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{1,+})\big{]},\qquad\operatorname{Re}(\hat{\mathcal{S}}^{\dagger,\text{rand}}(\mathbf{x}_{2}))\in\big{[}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{2,-}),\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{2,+})\big{]}.\] Moreover, by Lemma 2.9, we have that \[\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{1,-})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{2,-})=\frac{\mathfrak{n}}{2\pi}\left(\operatorname{Re}(\mathbf{x}_{1})-\operatorname{Re}(\mathbf{x}_{2})\right),\qquad\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{1,+})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{2,+})=\frac{\mathfrak{n}}{2\pi}\left(\operatorname{Re}(\mathbf{x}_{1})-\operatorname{Re}(\mathbf{x}_{2})\right),\] and also \[\big{|}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{1,+})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{1,-})\big{|}=\big{|}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{n}^{2,+})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{2,-})\big{|}\leq\mathfrak{n},\] where the first equality is again due to Lemma 2.9, and the last inequality is due to the fact that, by construction of the Smith diagram, every horizontal line segment has width at most \(\mathfrak{n}\). Therefore, putting together all these facts, one can readily obtain the desired result.

In order to state and prove the next lemma of this subsection, we need to introduce some notation. We assume that there exists \(a\in(0,1)\) such that all the points in the set \(\mathfrak{h}^{-1}(a)\) are vertices in \(\mathcal{VG}\).
We define the set of harmonically oriented edges \(\mathcal{EG}_{a}\) and the corresponding set of dual edges \(\mathcal{E}\widehat{\mathcal{G}}_{a}\) as follows \[\mathcal{EG}_{a}:=\big{\{}e\in\mathcal{EG}\,:\,\mathfrak{h}(e^{-})\leq a<\mathfrak{h}(e^{+})\big{\}},\qquad\mathcal{E}\widehat{\mathcal{G}}_{a}:=\big{\{}\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}\,:\,\widehat{e}\text{ is the dual of some }e\in\mathcal{EG}_{a}\big{\}},\] and we define the following sets of vertices \[\mathcal{VG}_{a}:=\big{\{}x\in\mathcal{VG}\,:\,x=e^{-}\text{ for some }e\in\mathcal{EG}_{a}\big{\}},\qquad\mathcal{V}\widehat{\mathcal{G}}_{a}:=\big{\{}\widehat{x}\in\mathcal{V}\widehat{\mathcal{G}}\,:\,\widehat{x}=\widehat{e}^{-}\text{ for some }\widehat{e}\in\mathcal{E}\widehat{\mathcal{G}}_{a}\big{\}}.\] Furthermore, we denote by \(\mathcal{EG}_{a}^{\dagger}\), \(\mathcal{E}\widehat{\mathcal{G}}_{a}^{\dagger}\), \(\mathcal{VG}_{a}^{\dagger}\), \(\mathcal{V}\widehat{\mathcal{G}}_{a}^{\dagger}\) the lifts to the universal cover of the corresponding objects. Now, we let \(\widehat{\mathbf{x}}_{0}^{a}\) be the vertex in \(\mathcal{V}\widehat{\mathcal{G}}_{a}^{\dagger}\) whose real coordinate is nearest to \(0\). We note that the set \(\mathcal{EG}_{a}\) is a cutset and that the associated set of oriented dual edges \(\mathcal{E}\widehat{\mathcal{G}}_{a}\) admits an enumeration \([\widehat{e}_{n}]\) such that \(\widehat{e}_{1}\cdots\widehat{e}_{n}\) forms a counter-clockwise oriented noncontractible simple closed loop in the dual graph \(\widehat{\mathcal{G}}\). In particular, every edge \(\widehat{e}_{i}\) admits a lift \(\widehat{\mathbf{e}}_{i}\) such that \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) forms a simple path oriented from left to right joining \(\widehat{\mathbf{x}}_{0}^{a}\) to the shifted vertex \(\widehat{\mathbf{x}}_{0}^{a}+(0,2\pi)\) (see Lemma 2.2).

**Definition 3.7**.: Letting \([\widehat{\mathbf{e}}_{n}]\) be as specified above, we define the set of lifted dual vertices \[\widehat{\mathcal{P}}_{a}^{\dagger}:=\big{\{}\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}_{a}^{\dagger}\,:\,\widehat{\mathbf{x}}=\widehat{\mathbf{e}}_{i}^{-}\text{ for some }i\in[n]\big{\}}, \tag{3.1}\] and we also define the set of lifted vertices \[\mathcal{P}_{a}^{\dagger}:=\big{\{}\mathbf{x}\in(\mathfrak{h}^{\dagger})^{-1}(a)\,:\,\mathbf{x}=\mathbf{e}_{i}^{-}\text{ for some }i\in[n]\big{\}}, \tag{3.2}\] where \([\mathbf{e}_{n}]\) is the set of lifted edges associated to \([\widehat{\mathbf{e}}_{n}]\).

**Remark 3.8**.: Consider the infinite strip \(\mathrm{S}_{2\pi}^{a}:=\big{[}\mathrm{Re}(\widehat{\mathbf{x}}_{0}^{a}),\mathrm{Re}(\widehat{\mathbf{x}}_{0}^{a})+2\pi\big{)}\times\mathbb{R}\). We remark that in general the set \(\widehat{\mathcal{P}}_{a}^{\dagger}\) is not fully contained in \(\mathrm{S}_{2\pi}^{a}\). We also observe that, thanks to the fact that \(\mathcal{EG}_{a}\) is a cutset, it holds that \[\sigma_{2\pi}(\mathcal{P}_{a}^{\dagger})=\mathfrak{h}^{-1}(a)\quad\text{ and }\quad\#\sigma_{2\pi}(\mathcal{P}_{a}^{\dagger})=\#\mathfrak{h}^{-1}(a).\]
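The cutset property just described encodes a flow-conservation identity: the current that the harmonic flow pushes through the level-\(a\) cut is the same for every \(a\), and equals the strength \(\mathfrak{n}\). A toy numerical check (our own example and notation, not from the paper):

```python
# Two parallel v0 -> v1 paths; edges are harmonically oriented, i.e. listed
# as (tail, head) with h(tail) <= h(head).  h is the voltage; here n = 1.25.
cond = {("v0", "x1"): 1.0, ("x1", "v1"): 1.0, ("v0", "x2"): 1.0, ("x2", "v1"): 3.0}
h = {"v0": 0.0, "x1": 0.5, "x2": 0.75, "v1": 1.0}     # harmonic off {v0, v1}

def cut_flow(a):
    """Total current through the edges of the level-a cutset EG_a."""
    return sum(c * (h[v] - h[u]) for (u, v), c in cond.items() if h[u] <= a < h[v])

assert all(abs(cut_flow(a) - 1.25) < 1e-12 for a in (0.1, 0.5, 0.9))
```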
We are now ready to state and prove the next lemma of this subsection. We refer to Figure 5 for a diagram illustrating the various objects involved in the proof of Lemma 3.9.

**Lemma 3.9**.: _Fix \(a\in(0,1)\) and assume that all the points in \(\mathfrak{h}^{-1}(a)\) are vertices in \(\mathcal{VG}\). Let \(\widehat{\mathbf{x}}_{0}^{a}\in\mathcal{V}\widehat{\mathcal{G}}_{a}^{\dagger}\) be as specified above; then it holds that_ \[0\leq\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}_{0}^{a})\leq\mathfrak{n},\quad\forall\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}_{a}^{\dagger},\] _and also, uniformly over all possible realizations of \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\cdot)\), it holds that_ \[0\leq\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\mathbf{x}))-\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}_{0}^{a})\leq\mathfrak{n},\quad\forall\mathbf{x}\in\mathcal{P}_{a}^{\dagger}.\]

Proof.: We start by proving the first part of the lemma. We consider the set of dual edges \([\widehat{\mathbf{e}}_{n}]\) such that \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) forms a simple path oriented from left to right joining \(\widehat{\mathbf{x}}_{0}^{a}\) to the shifted vertex \(\widehat{\mathbf{x}}_{0}^{a}+(0,2\pi)\), as specified before the lemma statement. Thanks to Lemma 2.9, we know that \[\sum_{i=1}^{n}\nabla\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{i})=\mathfrak{n},\] and each summand in the sum is non-negative. Now, let \(\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}_{a}^{\dagger}\); then there exists a subpath \(\widehat{\mathbf{e}}_{1}^{\widehat{\mathbf{x}}}\cdots\widehat{\mathbf{e}}_{k}^{\widehat{\mathbf{x}}}\) of \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{n}\) which connects \(\widehat{\mathbf{x}}_{0}^{a}\) to \(\widehat{\mathbf{x}}\). Therefore, we have that \[\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}})-\mathfrak{w}^{\dagger}(\widehat{\mathbf{x}}_{0}^{a})=\sum_{i=1}^{k}\nabla\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{i}^{\widehat{\mathbf{x}}}),\] and the conclusion then follows thanks to the fact that \[0\leq\sum_{i=1}^{k}\nabla\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{i}^{\widehat{\mathbf{x}}})\leq\sum_{i=1}^{n}\nabla\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{i})=\mathfrak{n}.\] We now prove the second part of the lemma. To this end, we fix \(\mathbf{x}\in\mathcal{P}_{a}^{\dagger}\) and a possible realization of \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\mathbf{x})\). We let \(\mathcal{EG}^{\dagger,\uparrow}(\mathbf{x})=[\mathbf{e}_{k}^{\mathbf{x}}]\) be the set of lifted harmonically oriented edges with tails equal to \(\mathbf{x}\), ordered in such a way that the corresponding lifted dual edges \(\widehat{\mathbf{e}}_{1}^{\mathbf{x}}\cdots\widehat{\mathbf{e}}_{k}^{\mathbf{x}}\) form a path in \(\widehat{\mathcal{G}}^{\dagger}\) oriented from left to right (see Figure 5). By construction of the map \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}\), it holds that \[\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\mathbf{x}))\in\big{[}\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{1}^{\mathbf{x},-}),\mathfrak{w}^{\dagger}(\widehat{\mathbf{e}}_{k}^{\mathbf{x},+})\big{]}.\] Finally, since both \(\widehat{\mathbf{e}}_{1}^{\mathbf{x},-}\) and \(\widehat{\mathbf{e}}_{k}^{\mathbf{x},+}\) belong to \(\widehat{\mathcal{P}}_{a}^{\dagger}\), the conclusion follows thanks to the first part of the lemma.

### Hitting distribution of a horizontal line

Roughly speaking, the main goal of this subsection is to characterize the hitting distribution of a horizontal line on the Smith diagram for the Smith-embedded random walk. Before precisely stating the result, we need to introduce some notation.
**Definition 3.10**.: We define \(\mathrm{length}:\mathcal{VG}\to[0,\mathfrak{n})\) as the function that assigns to each vertex \(x\in\mathcal{VG}\) the length of the horizontal segment \(\mathcal{S}(x)\) associated to \(x\) by the Smith embedding. More precisely, we define \[\mathrm{length}(x):=\sum_{e\in\mathcal{EG}^{\downarrow}(x)}\nabla\mathfrak{h}(e),\quad\forall x\in\mathcal{VG},\] where \(\mathcal{EG}^{\downarrow}(x)\) denotes the set of harmonically oriented edges in \(\mathcal{EG}\) with heads equal to \(x\).

We recall that the difference of the width coordinate function between the endpoints of a dual edge is equal to the gradient of the height coordinate of the corresponding primal edge. Moreover, thanks to the fact that the height coordinate function is harmonic, it is plain to see that \[\mathrm{length}(x)=\sum_{e\in\mathcal{EG}^{\uparrow}(x)}\nabla\mathfrak{h}(e),\quad\forall x\in\mathcal{VG},\] where \(\mathcal{EG}^{\uparrow}(x)\) denotes the set of harmonically oriented edges in \(\mathcal{EG}\) with tails equal to \(x\). We can also naturally extend the definition of the length function to the metric graph \(\mathbb{G}\). More precisely, given a point \(x\in\mathbb{G}\setminus\mathcal{VG}\), if \(x\) lies in the interior of the edge \(e\in\mathcal{EG}\), then we set \(\mathrm{length}(x):=\nabla\mathfrak{h}(e)\).

**Definition 3.11**.: Given a value \(a\in(0,1)\), we define on the set \(\mathfrak{h}^{-1}(a)\subset\mathbb{G}\) the measure \(\mu_{a}\) by letting \[\mu_{a}(x):=\frac{\operatorname{length}(x)}{\mathfrak{n}},\quad\forall x\in\mathfrak{h}^{-1}(a).\] Since, by construction of the Smith diagram, it holds that \(\sum_{x\in\mathfrak{h}^{-1}(a)}\operatorname{length}(x)=\mathfrak{n}\), the measure \(\mu_{a}\) is a probability measure on the set \(\mathfrak{h}^{-1}(a)\).

**Remark 3.12**.: We emphasize that, thanks to Remark 3.8, one can also view \(\mu_{a}\) as a probability measure on the set \(\mathcal{P}_{a}^{\dagger}\), where we recall that \(\mathcal{P}_{a}^{\dagger}\) is the set defined in (3.2).

From now on, we assume throughout the whole subsection that all the points in the set \(\cup_{x\in\mathcal{VG}}\mathfrak{h}^{-1}(\mathfrak{h}(x))\) are vertices in \(\mathcal{VG}\). At the level of the Smith diagram, this means that, for all \(x\in\mathcal{VG}\), the set \(\cup_{y\in\mathfrak{h}^{-1}(\mathfrak{h}(x))}\mathcal{S}(y)\) is equal to the noncontractible closed loop \(\mathbb{R}/\mathfrak{n}\mathbb{Z}\times\{\mathfrak{h}(x)\}\subset\mathcal{C}_{\mathfrak{n}}\). We observe that, for any given finite weighted planar graph, we can always canonically construct from it another finite weighted planar graph that satisfies this assumption. Indeed, suppose that there exists a vertex \(x\in\mathcal{VG}\) such that \(\mathfrak{h}^{-1}(\mathfrak{h}(x))\not\subset\mathcal{VG}\); then, using the procedure described in Subsection 3.1, we can declare all the points in the finite set \(\mathfrak{h}^{-1}(\mathfrak{h}(x))\setminus\mathcal{VG}\) to be vertices of the graph.

We fix a value \(a\in(0,1)\) such that all the points in the set \(\mathfrak{h}^{-1}(a)\) are vertices in \(\mathcal{VG}\), and we let \(X^{\mu_{a}}\) be the random walk on \((\mathcal{G},c)\) started from a point in \(\mathfrak{h}^{-1}(a)\) sampled according to the probability measure \(\mu_{a}\).
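To make Definitions 3.10 and 3.11 concrete, here is a small worked example (the same toy graph as in the sketch above; the example and all names are ours, not from the paper). The level \(a=0.5\) meets the vertex \(x_{1}\) and one interior point \(p\) of the edge \((v_{0},x_{2})\); the lengths of the corresponding horizontal segments, normalized by \(\mathfrak{n}\), sum to one.

```python
cond = {("v0", "x1"): 1.0, ("x1", "v1"): 1.0, ("v0", "x2"): 1.0, ("x2", "v1"): 3.0}
h = {"v0": 0.0, "x1": 0.5, "x2": 0.75, "v1": 1.0}

# Strength of the flow: total current leaving the source v0 (= 1.25 here).
n_strength = sum(c * (h[v] - h[u]) for (u, v), c in cond.items() if u == "v0")

# length(x1): flow entering the vertex x1; length(p): flow through the edge
# (v0, x2), whose interior contains the point p at height 0.5.
length_x1 = cond[("v0", "x1")] * (h["x1"] - h["v0"])        # = 0.5
length_p = cond[("v0", "x2")] * (h["x2"] - h["v0"])         # = 0.75
mu_half = {"x1": length_x1 / n_strength, "p": length_p / n_strength}
assert abs(sum(mu_half.values()) - 1.0) < 1e-12             # a probability measure
```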
**Definition 3.13**.: Let \(X^{\mu_{a}}\) be as specified above. For \(N\in\mathbb{N}\), we say that a finite sequence of height coordinates \([a_{N}]_{0}\subset(0,1)\) is _admissible_ for the random walk \(X^{\mu_{a}}\) if \(a_{0}=a\) and \[\mathbb{P}_{\mu_{a}}\big{(}\mathfrak{h}(X_{n+1})=a_{n+1}\mid\mathfrak{h}(X_{n})=a_{n}\big{)}>0,\quad\forall n\in[N-1]_{0}.\]

We can now state the main result of this subsection.

**Lemma 3.14**.: _Let \(X^{\mu_{a}}\) be as specified above. For \(N\in\mathbb{N}\), let \([a_{N}]_{0}\subset(0,1)\) be an admissible sequence of height coordinates for the random walk \(X^{\mu_{a}}\). Then, for all \(i\in[N]_{0}\), it holds that_ \[\mathbb{P}_{\mu_{a}}\big{(}X_{i}=x_{i}\mid\{\mathfrak{h}(X_{n})=a_{n}\}_{n=1}^{N}\big{)}=\mu_{a_{i}}(x_{i}),\quad\forall x_{i}\in\mathfrak{h}^{-1}(a_{i}).\]

Intuitively, the proof of the above lemma goes as follows. If \(i=1\) and we only had the conditioning on the event \(\mathfrak{h}(X_{1})=a_{1}\), then the result would follow from a simple computation. However, since, in general, \(i\neq 1\), and since we are also conditioning on the events \(\{\mathfrak{h}(X_{n})=a_{n}\}_{n=1}^{i-1}\) and \(\{\mathfrak{h}(X_{n})=a_{n}\}_{n=i+1}^{N}\), which represent the height coordinates of the random walk at past and future times respectively, the proof of the result is not immediate. Nevertheless, it follows from a simple induction argument in which we show that we can "forget" about the conditioning on past and future times. Roughly speaking, the reason why we can forget about past times is that the height component of the random walk is itself a Markov process, while the reason why we can forget about future times is the harmonicity of the height coordinate function. We now proceed with the proof of the lemma.

Proof of Lemma 3.14.: The proof involves three main steps.

**Step 1:** We start by proving that \(\mathbb{P}_{\mu_{a}}\big{(}\mathfrak{h}(X_{1})=a_{1}\mid X_{0}=x_{0}\big{)}\) depends only on \(\mathfrak{h}(X_{0})=a_{0}\), for all \(x_{0}\in\mathfrak{h}^{-1}(a)\). Since we are assuming that \(\mathcal{VG}=\cup_{x\in\mathcal{VG}}\mathfrak{h}^{-1}(\mathfrak{h}(x))\), at its first step the random walk can only reach two heights: height \(a_{1}\) or some other height \(\widetilde{a}_{1}\), say. If \(a_{1}>a\), since \(\mathfrak{h}\) is harmonic at \(x_{0}\), it holds that \[\mathfrak{h}(x_{0})=\frac{\sum_{e\in\mathcal{EG}^{\downarrow}(x_{0})}c_{e}\mathfrak{h}(e^{-})+\sum_{e\in\mathcal{EG}^{\uparrow}(x_{0})}c_{e}\mathfrak{h}(e^{+})}{\pi(x_{0})}=\widetilde{a}_{1}\frac{\sum_{e\in\mathcal{EG}^{\downarrow}(x_{0})}c_{e}}{\pi(x_{0})}+a_{1}\frac{\sum_{e\in\mathcal{EG}^{\uparrow}(x_{0})}c_{e}}{\pi(x_{0})}, \tag{3.3}\] where, as usual, \(\mathcal{EG}^{\downarrow}(x_{0})\) (resp. \(\mathcal{EG}^{\uparrow}(x_{0})\)) denotes the set of harmonically oriented edges with heads (resp. tails) equal to \(x_{0}\).
Now, we observe that \[\mathbb{P}_{\mu_{a}}\big{(}\mathfrak{h}(X_{1})=a_{1}\mid X_{0}=x_{0}\big{)}=\frac{\sum_{e\in\mathcal{EG}^{\uparrow}(x_{0})}c_{e}}{\pi(x_{0})},\qquad\mathbb{P}_{\mu_{a}}\big{(}\mathfrak{h}(X_{1})=\widetilde{a}_{1}\mid X_{0}=x_{0}\big{)}=\frac{\sum_{e\in\mathcal{EG}^{\downarrow}(x_{0})}c_{e}}{\pi(x_{0})}.\] In particular, plugging these expressions into (3.3) and rearranging, we obtain that \[\mathbb{P}_{\mu_{a}}\big{(}\mathfrak{h}(X_{1})=a_{1}\mid X_{0}=x_{0}\big{)}=\frac{|\widetilde{a}_{1}-a_{0}|}{|\widetilde{a}_{1}-a_{1}|},\quad\forall x_{0}\in\mathfrak{h}^{-1}(a),\] from which the desired result follows, since the right-hand side of the above expression only depends on \(\mathfrak{h}(X_{0})=a_{0}\). A similar conclusion also holds if \(a_{1}<a\). Now, proceeding by induction, one can prove that, for any \(i\in[N]_{0}\) and for all \(x_{0}\in\mathfrak{h}^{-1}(a),\ldots,x_{i}\in\mathfrak{h}^{-1}(a_{i})\), the probability \(\mathbb{P}_{\mu_{a}}\big{(}\{\mathfrak{h}(X_{i+n})=a_{i+n}\}_{n=1}^{N-i}\mid\{X_{j}=x_{j}\}_{j=0}^{i}\big{)}\) depends only on \(\mathfrak{h}(X_{i})=a_{i}\).

**Step 2:** Thanks to the previous step and Bayes' rule, we have that \(\mathbb{P}_{\mu_{a}}\big{(}X_{0}=x_{0}\mid\mathfrak{h}(X_{1})=a_{1}\big{)}=\mu_{a}(x_{0})\), for all \(x_{0}\in\mathfrak{h}^{-1}(a)\). In particular, using this fact, we will now prove that \[\mathbb{P}_{\mu_{a}}\big{(}X_{1}=x_{1}\mid\mathfrak{h}(X_{1})=a_{1}\big{)}=\mu_{a_{1}}(x_{1}),\quad\forall x_{1}\in\mathfrak{h}^{-1}(a_{1}).\] To this end, fix \(x_{1}\in\mathfrak{h}^{-1}(a_{1})\) and suppose that \(a_{1}>a\). Then we can proceed as follows \[\mathbb{P}_{\mu_{a}}\big{(}X_{1}=x_{1}\mid\mathfrak{h}(X_{1})=a_{1}\big{)}\] \[\quad=\sum_{x_{0}\in\mathfrak{h}^{-1}(a)\cap\mathcal{VG}(x_{1})}\frac{c_{x_{0}x_{1}}}{\sum_{v\in\mathfrak{h}^{-1}(a_{1})\cap\mathcal{VG}(x_{0})}c_{x_{0}v}}\mu_{a}(x_{0})\] \[\quad=\sum_{x_{0}\in\mathfrak{h}^{-1}(a)\cap\mathcal{VG}(x_{1})}\frac{c_{x_{0}x_{1}}}{\sum_{v\in\mathfrak{h}^{-1}(a_{1})\cap\mathcal{VG}(x_{0})}c_{x_{0}v}}\frac{|a_{1}-a|\sum_{v\in\mathfrak{h}^{-1}(a_{1})\cap\mathcal{VG}(x_{0})}c_{x_{0}v}}{\mathfrak{n}}\] \[\quad=\frac{1}{\mathfrak{n}}\sum_{e\in\mathcal{EG}^{\downarrow}(x_{1})}\nabla\mathfrak{h}(e)\] \[\quad=\mu_{a_{1}}(x_{1}),\] where \(\mathcal{EG}^{\downarrow}(x_{1})\) denotes the set of harmonically oriented edges with heads equal to \(x_{1}\). In order to pass from the second line to the third line of the above display, we used the definition of the probability measure \(\mu_{a}\), and the fact that, for all \(e\in\mathcal{EG}^{\downarrow}(x_{1})\), it holds that \(\mathfrak{h}(e^{-})=a\) and \(\mathfrak{h}(e^{+})=a_{1}\). One can proceed similarly if \(a_{1}<a\). Now, proceeding by induction, one can easily prove that, for all \(i\in[N]\), it holds that \[\mathbb{P}_{\mu_{a}}\big{(}X_{i}=x_{i}\mid\{\mathfrak{h}(X_{n})=a_{n}\}_{n=1}^{i}\big{)}=\mu_{a_{i}}(x_{i}),\quad\forall x_{i}\in\mathfrak{h}^{-1}(a_{i}).\]

**Step 3:** For \(i\in[N]_{0}\), we observe that a consequence of Step 1 is that the sequence of random variables \(\{\mathfrak{h}(X_{i+n})\}_{n=1}^{N-i}\) is conditionally independent from \(\{X_{j}\}_{j=0}^{i}\) given \(\mathfrak{h}(X_{i})=a_{i}\). Therefore, the conditional law of \(\{X_{j}\}_{j=0}^{i}\) given \(\{\mathfrak{h}(X_{n})=a_{n}\}_{n=1}^{i}\) is the same as the conditional law given \(\{\mathfrak{h}(X_{n})=a_{n}\}_{n=1}^{N}\). Hence, this implies that \[\mathbb{P}_{\mu_{a}}\big{(}X_{i}=x_{i}\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{)}=\mathbb{P}_{\mu_{a}}\big{(}X_{i}=x_{i}\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{i}\big{)},\quad\forall x_{i}\in\mathfrak{h}^{-1}(a_{i}),\] and so the conclusion follows from Step 2.
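The computation in Step 2 can be checked by hand on a small example (our own, not from the paper): take the diamond graph \(v_{0}\to\{y_{1},y_{2}\}\to v_{1}\) with conductances chosen so that \(y_{1}\) and \(y_{2}\) sit at the same height; the one-step hitting law of that level, started from \(v_{0}\), is exactly \(\mu_{1/2}\).

```python
# Diamond graph: conductances (v0,y1) = (y1,v1) = 1 and (v0,y2) = (y2,v1) = 2,
# so that h(y1) = h(y2) = 1/2 and the level set h^{-1}(1/2) = {y1, y2}.
c = {("v0", "y1"): 1.0, ("y1", "v1"): 1.0, ("v0", "y2"): 2.0, ("y2", "v1"): 2.0}

n_strength = 1.0 * 0.5 + 2.0 * 0.5                    # flow out of v0 = 1.5
mu_half = {"y1": 1.0 * 0.5 / n_strength,              # = 1/3
           "y2": 2.0 * 0.5 / n_strength}              # = 2/3

# First step of the walk from v0 (pi(v0) = 3, and it must land on h = 1/2):
step = {"y1": c[("v0", "y1")] / 3.0, "y2": c[("v0", "y2")] / 3.0}
assert all(abs(step[y] - mu_half[y]) < 1e-12 for y in step)
```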
### Expected horizontal winding

Roughly speaking, the main goal of this subsection is to establish that the average winding that the Smith-embedded random walk accumulates before hitting a given horizontal line on the Smith diagram is zero. Before precisely stating the result, we need to introduce some notation.

**Definition 3.15**.: Given the random walk \(\mathbf{X}^{\mathbf{x}}\) on the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\) started from \(\mathbf{x}\in\mathcal{VG}^{\dagger}\), we let \[\mathbb{N}_{0}\ni n\mapsto\dot{\mathbf{X}}^{\mathbf{x}}_{n}\] be the discrete time process taking values in \(\mathbb{R}\times[0,1]\) such that, for each \(n\in\mathbb{N}_{0}\), the conditional law of \(\dot{\mathbf{X}}^{\mathbf{x}}_{n}\), given \(\mathbf{X}^{\mathbf{x}}_{n}\), is equal to the law of \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}(\mathbf{X}^{\mathbf{x}}_{n})\), where we recall that \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}\) is defined in Definition 2.15. We call the process \(\dot{\mathbf{X}}^{\mathbf{x}}\) the _lifted Smith-embedded random walk_ associated to \(\mathbf{X}^{\mathbf{x}}\).

It follows from the definition of \(\dot{\mathbf{X}}^{\mathbf{x}}\) that, at each time step \(n\in\mathbb{N}_{0}\), the location of the point \(\dot{\mathbf{X}}^{\mathbf{x}}_{n}\) is sampled uniformly at random, and independently of everything else, from the horizontal line segment \(\mathcal{S}^{\dagger}(\mathbf{X}^{\mathbf{x}}_{n})\). With a slight abuse of notation, we also denote by \(\dot{\mathbf{X}}^{\mathbf{x}}\) the continuous time version \(\{\dot{\mathbf{X}}^{\mathbf{x}}_{t}\}_{t\geq 0}\), where the continuous path is generated by piecewise linear interpolation at constant speed.

In this subsection, we also assume that all the points in the set \(\cup_{x\in\mathcal{VG}}\mathfrak{h}^{-1}(\mathfrak{h}(x))\) are vertices in \(\mathcal{VG}\). Furthermore, we fix a value \(a\in(0,1)\) such that all the points in the set \(\mathfrak{h}^{-1}(a)\) are vertices in \(\mathcal{VG}\), and we let \(X^{\mu_{a}}\) be the random walk on \((\mathcal{G},c)\) started from a point in \(\mathfrak{h}^{-1}(a)\) sampled according to the probability measure \(\mu_{a}\) defined in Definition 3.11. We also adopt the convention to denote by \[\mathbb{N}_{0}\ni n\mapsto\mathbf{X}^{\mu_{a}}_{n}\] the lift of the random walk \(X^{\mu_{a}}\) to the lifted weighted graph \((\mathcal{G}^{\dagger},c^{\dagger})\) started from a point in \(\mathcal{P}^{\dagger}_{a}\) sampled according to the probability measure \(\mu_{a}\) (see Remark 3.12). Moreover, we denote by \(\dot{\mathbf{X}}^{\mu_{a}}\) the lifted Smith-embedded random walk associated to \(\mathbf{X}^{\mu_{a}}\) as specified above.

In complete analogy with the definition of winding of a path in the infinite cylinder \(\mathcal{C}_{2\pi}\), we have the following definition of winding on the cylinder \(\mathcal{C}_{\mathfrak{n}}\).

**Definition 3.16**.: Let \(0\leq t_{1}<t_{2}\), consider a path \(P:[t_{1},t_{2}]\to\mathcal{C}_{\mathfrak{n}}\), and let \(\mathbf{P}:[t_{1},t_{2}]\to\mathbb{R}\times[0,1]\) be its lift to the covering space \(\mathbb{R}\times[0,1]\). Then, we define the winding of \(P\) as follows \[\mathrm{wind}_{\mathfrak{n}}(P)=\frac{\mathrm{Re}(\mathbf{P}(t_{2}))-\mathrm{Re}(\mathbf{P}(t_{1}))}{\mathfrak{n}}. \tag{3.4}\]
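For a path sampled at discrete times, the winding of Definition 3.16 can be computed by unwrapping the angular increments; a short sketch (ours, not from the paper), valid under the assumption that consecutive samples differ by less than half the circumference:

```python
def winding(re_points, circumference):
    """Winding of a sampled cylinder path, via the lift of its real part.
    Assumes each consecutive jump is smaller than circumference / 2."""
    total = 0.0
    for a, b in zip(re_points, re_points[1:]):
        d = (b - a) % circumference          # representative in [0, circ)
        if d > circumference / 2:            # pick the short lift
            d -= circumference
        total += d
    return total / circumference             # cf. (3.4)

# One full counter-clockwise loop around a cylinder of circumference n = 4:
assert abs(winding([0.0, 1.5, 3.0, 0.5, 0.0], 4.0) - 1.0) < 1e-12
```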
We are now ready to state the main result of this subsection.

**Lemma 3.17**.: _Let \(X^{\mu_{a}}\) and \(\dot{\mathbf{X}}^{\mu_{a}}\) be as specified above. For \(N\in\mathbb{N}\), let \([a_{N}]_{0}\subset(0,1)\) be an admissible sequence of height coordinates for the random walk \(X^{\mu_{a}}\) as specified in Definition 3.13. Then it holds that_ \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}}\big{(}\dot{\mathbf{X}}|_{[0,N]}\big{)}\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}=0.\]

Heuristically speaking, the proof of the above lemma goes as follows. First, we observe that, by definition of winding, it holds that \[\mathrm{wind}_{\mathfrak{n}}(\dot{\mathbf{X}}|_{[0,N]})=\sum_{j=0}^{N-1}\mathrm{wind}_{\mathfrak{n}}(\dot{\mathbf{X}}|_{[j,j+1]}).\] In particular, this implies that we can reduce the problem to studying the expected winding at each time step of the random walk. Suppose for a moment that \(N=1\) in the lemma statement. Then we only need to prove that the random walk started uniformly at random from height \(a\) and conditioned to hit height \(a_{1}\) at its first time step has zero expected winding. The reason why this is true is that the re-randomization procedure that takes place inside each small segment ensures that the conditional hitting distribution of the segment at height \(a_{1}\) is also uniform. To extend the argument to the general case \(N\in\mathbb{N}\), we just need to use Lemma 3.14 in a suitable way.

Proof of Lemma 3.17.: We divide the proof in two steps.

**Step 1:** The main goal of this first step is to find an equivalent condition for the relation stated in the lemma to be true. To this end, we start by observing that, thanks to Definition 3.16 of the winding, it holds that \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}}\big{(}\dot{\mathbf{X}}|_{[0,N]}\big{)}\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}=\frac{1}{\mathfrak{n}}\sum_{j=0}^{N-1}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})-\mathrm{Re}(\dot{\mathbf{X}}_{j})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}. \tag{3.5}\] Now, we claim that \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}=\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=j}^{j+1}\big{]},\quad\forall j\in[N-1]_{0}. \tag{3.6}\] To show that this relation holds true, we first notice that the left-hand side of (3.6) can be written as follows \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}\] \[=\sum_{x\in\mathfrak{h}^{-1}(a_{j+1})}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid X_{j+1}=x\big{]}\mathbb{P}_{\mu_{a}}\big{(}X_{j+1}=x\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{)}\] \[=\sum_{x\in\mathfrak{h}^{-1}(a_{j+1})}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid X_{j+1}=x\big{]}\mu_{a_{j+1}}(x),\] where in the last equality we used Lemma 3.14. Similarly, one can also obtain the same expression for the right-hand side of (3.6).
To do so, one can simply condition on all possible height trajectories that join height \(a\) to height \(a_{j}\) in \(j\) steps, and then proceed in exactly the same way as above. In a similar way, one can also prove that \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=1}^{N}\big{]}=\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j})\mid\mathfrak{h}(X_{j})=a_{j}\big{]},\quad\forall j\in[N-1]_{0}. \tag{3.7}\] Therefore, combining (3.6) and (3.7) and going back to (3.5), we see that the lemma follows once we prove that the following equality holds \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j+1})\mid\big{\{}\mathfrak{h}(X_{n})=a_{n}\big{\}}_{n=j}^{j+1}\big{]}=\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{j})\mid\mathfrak{h}(X_{j})=a_{j}\big{]},\quad\forall j\in[N-1]_{0}. \tag{3.8}\] Now, for every \(j\in[N-1]_{0}\), we let \(X^{j}\) be the random walk on the weighted graph \((\mathcal{G},c)\) started from a point in \(\mathfrak{h}^{-1}(a_{j})\) sampled according to the probability measure \(\mu_{a_{j}}\). As one can easily verify, a consequence of Lemma 3.14 is that the conditional law of \(X^{\mu_{a}}\), given \(\mathfrak{h}(X^{\mu_{a}}_{j})=a_{j}\), is equal to the law of \(X^{j}\). In particular, this implies that (3.8) is equivalent to the fact that \[\mathbb{E}_{\mu_{a_{j}}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{1}^{j})\mid\mathfrak{h}(X_{1}^{j})=a_{j+1}\big{]}-\mathbb{E}_{\mu_{a_{j}}}\big{[}\mathrm{Re}(\dot{\mathbf{X}}_{0}^{j})\big{]}=0,\quad\forall j\in[N-1]_{0}, \tag{3.9}\] where \(\mathbf{X}^{j}\) denotes the lift of \(X^{j}\) started from a point in \(\mathcal{P}^{\dagger}_{a_{j}}\), and \(\dot{\mathbf{X}}^{j}\) is the lifted Smith-embedded random walk associated to \(\mathbf{X}^{j}\). Let us also emphasize that here we are relying on the fact that the definition of winding does not depend on the particular choice of the lift.

**Step 2:** The main goal of this second step is to prove that the equality in (3.9) holds. To this end, we observe that, since every horizontal segment in the Smith diagram has length at most \(\mathfrak{n}\), we have \(|\,\mathrm{Re}(\dot{\mathbf{X}}_{1}^{j})-\mathrm{Re}(\dot{\mathbf{X}}_{0}^{j})|\leq\mathfrak{n}\). In particular, this implies that (3.9) can be rewritten as follows \[\mathbb{E}_{\mu_{a_{j}}}\big{[}\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j}))\mid\mathfrak{h}(X_{1}^{j})=a_{j+1}\big{]}-\mathbb{E}_{\mu_{a_{j}}}\big{[}\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{0}^{j}))\big{]}=0,\quad\forall j\in[N-1]_{0}, \tag{3.10}\] where we recall that \(\sigma_{\mathfrak{n}}\) is defined in (2.15) and denotes the covering map of the cylinder \(\mathcal{C}_{\mathfrak{n}}\). By construction, the random variable \(\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{0}^{j}))\) is uniformly distributed on the interval \([0,\mathfrak{n})\), and so the result follows if we prove that also the conditional law of \(\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j}))\), given \(\mathfrak{h}(X_{1}^{j})=a_{j+1}\), is uniform on the interval \([0,\mathfrak{n})\).
To this end, let \(\mathrm{I}\subset[0,\mathfrak{n})\) be a fixed measurable set; then we have that \[\mathbb{P}_{\mu_{a_{j}}}\big{(}\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j}))\in\mathrm{I}\mid\mathfrak{h}(X_{1}^{j})=a_{j+1}\big{)}\] \[=\sum_{x\in\mathfrak{h}^{-1}(a_{j+1})}\mathbb{P}_{\mu_{a_{j}}}\big{(}\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j}))\in\mathrm{I}\mid X_{1}^{j}=x\big{)}\mathbb{P}_{\mu_{a_{j}}}\big{(}X_{1}^{j}=x\mid\mathfrak{h}(X_{1}^{j})=a_{j+1}\big{)}\] \[=\sum_{x\in\mathfrak{h}^{-1}(a_{j+1})}\mathbb{P}_{\mu_{a_{j}}}\big{(}\mathrm{Re}(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j}))\in\mathrm{I}\mid X_{1}^{j}=x\big{)}\mu_{a_{j+1}}(x)\] \[=\sum_{x\in\mathfrak{h}^{-1}(a_{j+1})}\frac{\mathrm{length}\big{(}\mathrm{I}\cap\mathrm{Re}(\mathcal{S}(x))\big{)}}{\mathrm{length}(x)}\frac{\mathrm{length}(x)}{\mathfrak{n}}\] \[=\frac{\mathrm{length}(\mathrm{I})}{\mathfrak{n}},\] where in the second equality we used Lemma 3.14, while in the third equality we used the fact that the conditional law of \(\sigma_{\mathfrak{n}}(\dot{\mathbf{X}}_{1}^{j})\), given \(X_{1}^{j}=x\), is uniform on the segment \(\mathcal{S}(x)\), together with the definition of the probability measure \(\mu_{a_{j+1}}\). Since this proves (3.10), which is equivalent to (3.9), the proof of the lemma is concluded.

## Proof of the main result

The main goal of this section is to prove Theorem 1.3. To this end, we fix a sequence \(\{(\mathcal{G}^{n},c^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) of doubly marked finite weighted planar maps and we let \(\{(\widehat{\mathcal{G}}^{n},\widehat{c}^{n})\}_{n\in\mathbb{N}}\) be the sequence of associated weighted dual planar graphs. We assume throughout this section that such sequences satisfy hypotheses (H1), (H2), (H3). Before moving to the proof of the main theorem, we prove a couple of simple results which are immediate consequences of our assumptions and which will be useful later on. In particular, the next lemma is basically an analogue of assumptions (H2) and (H3) in the setting of the universal cover.

**Lemma 4.1**.: _For each \(n\in\mathbb{N}\), view the embedded random walk on the lifted weighted graph \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\), stopped when it traverses an unbounded edge, as a continuous curve in \(\mathbb{R}^{2}\) obtained by piecewise linear interpolation at constant speed. For each \(R>0\) and for any \(z\in\mathbb{R}\times[-R,R]\), the law of the random walk on \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from the vertex \(\mathbf{x}_{z}^{n}\in\mathcal{VG}^{\dagger,n}\) nearest to \(z\) weakly converges as \(n\to\infty\) to the law of the Brownian motion in \(\mathbb{R}^{2}\) started from \(z\) with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in\mathbb{R}\times[-R,R]\). Furthermore, the same result holds for the random walk on the sequence of lifted weighted dual graphs \(\{(\widehat{\mathcal{G}}^{\dagger,n},\widehat{c}^{\dagger,n})\}_{n\in\mathbb{N}}\)._

Proof.: Obviously, we can just prove the result for the random walk on the sequence of primal graphs, as the result for the random walk on the sequence of dual graphs can be proved in exactly the same way.
Fix \(R>0\) and \(z\in\mathbb{R}\times[-R,R]\), and let \(\mathbf{X}^{n,z}\) be the continuous time random walk on \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from \(\mathbf{x}_{z}^{n}\in\mathcal{VG}^{\dagger,n}\) as specified in the lemma statement. We let \(\tau_{0}:=0\) and, for \(k\in\mathbb{N}_{0}\), we define inductively \[\tau_{k+1}:=\inf\big{\{}t\geq\tau_{k}\,:\,\mathrm{Re}(\mathbf{X}^{n,z}_{t})\not\in[\mathrm{Re}(\mathbf{X}^{n,z}_{\tau_{k}})-\pi,\mathrm{Re}(\mathbf{X}^{n,z}_{\tau_{k}})+\pi)\big{\}}.\] For each \(k\in\mathbb{N}_{0}\), we observe that the universal covering map \(\sigma_{2\pi}:\mathbb{R}^{2}\to\mathcal{C}_{2\pi}\) is a biholomorphism when restricted to the strip \([\mathrm{Re}(\mathbf{X}^{n,z}_{\tau_{k}})-\pi,\mathrm{Re}(\mathbf{X}^{n,z}_{\tau_{k}})+\pi)\times\mathbb{R}\). Moreover, by assumption (H2), we know that the law of \(\sigma_{2\pi}(\mathbf{X}^{n,z})\) weakly converges as \(n\to\infty\) to the law of the Brownian motion in \(\mathcal{C}_{2\pi}\) with respect to the metric \(\mathrm{d}_{\mathrm{loc}}^{\mathrm{CMP}}\) specified in Subsection 2.1.2. Therefore, since Brownian motion is conformally invariant, putting together the previous facts we obtain that the law of the random walk \(\mathbf{X}^{n,z}\) weakly converges as \(n\to\infty\) to the law of the Brownian motion in \(\mathbb{R}^{2}\) with respect to the metric \(\mathrm{d}^{\mathrm{CMP}}\) on each time interval \([\tau_{k},\tau_{k+1}]\), \(k\in\mathbb{N}_{0}\). Hence, the desired result follows.

For each \(n\in\mathbb{N}\), we recall that \(\mathcal{FG}^{n}\) denotes the set of faces of the graph \(\mathcal{G}^{n}\). The next lemma states that, thanks to the invariance principle assumption on the sequence of weighted graphs, the maximal diameter of the faces in \(\mathcal{FG}^{n}\) which intersect a compact subset of \(\mathcal{C}_{2\pi}\) is of order \(o_{n}(1)\).

**Lemma 4.2**.: _Let \(R>0\) and, for any \(n\in\mathbb{N}\), consider the set \(\mathcal{FG}^{n}(R)\) of faces in \(\mathcal{FG}^{n}\) that intersect \(\mathbb{R}/2\pi\mathbb{Z}\times[-R,R]\). Then, for any \(\varepsilon>0\) and for any \(n>n(R,\varepsilon)\) large enough, it holds that_ \[\mathrm{diam}(f)\leq\varepsilon,\quad\forall f\in\mathcal{FG}^{n}(R).\] _The same result holds also for the sequence of dual graphs, i.e., with \(\mathcal{F}\widehat{\mathcal{G}}^{n}\) in place of \(\mathcal{FG}^{n}\)._

Proof.: Obviously, we can just prove the result for the sequence of primal graphs, as the result for the sequence of dual graphs can be proved in exactly the same way. We proceed by contradiction, assuming that there exists \(\varepsilon>0\) such that, for any \(n\in\mathbb{N}\), there exists a face \(f_{n}\in\mathcal{FG}^{n}(R)\) for which \(\mathrm{diam}(f_{n})>\varepsilon\). We notice that each set \(f_{n}\cap(\mathbb{R}/2\pi\mathbb{Z}\times[-2R,2R])\) is compact and contains a path \(P_{n}\) with \(\mathrm{diam}(P_{n})>\varepsilon\). By compactness of the space of compact subsets of \(\mathbb{R}/2\pi\mathbb{Z}\times[-2R,2R]\) with respect to the Hausdorff distance \(\mathrm{d}^{\mathrm{H}}\), we can assume, by possibly passing to a subsequence, that \(\lim_{n\to\infty}\mathrm{d}^{\mathrm{H}}(P_{n},P)=0\) for some compact and connected set \(P\subset\mathbb{R}/2\pi\mathbb{Z}\times[-2R,2R]\). Now, choose a rectangle \(\mathrm{Q}\) such that \(P\) disconnects the left and right sides of \(\mathrm{Q}\), or the top and bottom sides of \(\mathrm{Q}\). For any \(n>n(R,\varepsilon,\mathrm{Q})\) large enough, the path \(P_{n}\) also disconnects the rectangle \(\mathrm{Q}\) in the same way as \(P\).
Therefore, for any \(x\in\mathbb{R}/2\pi\mathbb{Z}\times[-R,R]\), the random walk \(X^{n,x}\) on the weighted graph \((\mathcal{G}^{n},c^{n})\) started from \(x\) cannot cross the rectangle \(\mathrm{Q}\) from left to right, or from top to bottom. However, the Brownian motion \(B^{x}\) on \(\mathcal{C}_{2\pi}\) started from \(x\) has positive probability to do so. This is in contradiction with assumption (H2), and so the desired result follows.

We are now ready to start the proof of Theorem 1.3. We observe that such a result admits a natural version on the sequence of lifted weighted graphs. Indeed, in order to prove it, we will work in the setting of the universal cover. To be more precise, we will first study in Subsection 4.1 the sequence of lifted height coordinate functions; then, in Subsection 4.2, we will study the sequence of lifted width coordinate functions. Once this is done, we will put everything together and conclude the proof of Theorem 1.3 in Subsection 4.3.

### Height coordinate

For each \(n\in\mathbb{N}\), consider the lifted height coordinate function \[\mathfrak{h}_{n}^{\dagger}:\mathcal{VG}^{\dagger,n}\to[0,1],\] as defined in (2.4). The main goal of this subsection is to show that there exists an affine transformation of the function \(\mathfrak{h}_{n}^{\dagger}\) that is asymptotically close, uniformly on lifts of compact subsets of the infinite cylinder \(\mathcal{C}_{2\pi}\), to the height coordinate function \(\mathbf{x}\mapsto\operatorname{Im}(\mathbf{x})\) in the a priori lifted embedded graph \(\mathcal{G}^{\dagger,n}\), as \(n\to\infty\). More precisely, we have the following result.

**Proposition 4.3**.: _There exist two sequences \(\{b_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\), \(\{c_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\subset\mathbb{R}\) such that, for all \(R>1\), \(\delta\in(0,1)\), and for any \(n>n(R,\delta)\) large enough, it holds that_ \[\big{|}c_{n}^{\mathfrak{h}}\mathfrak{h}_{n}^{\dagger}(\mathbf{x})+b_{n}^{\mathfrak{h}}-\operatorname{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(\mathbb{R}\times[-R,R]).\]

The proof of this proposition is postponed to Subsection 4.1.3. As we will see, the proof will follow easily thanks to the harmonicity of the height coordinate function together with Lemma 4.4 below.

#### 4.1.1 Setup of the proof

For each \(n\in\mathbb{N}\), consider the metric graph \(\mathbb{G}^{n}\) associated to \(\mathcal{G}^{n}\), and let \(\mathfrak{h}_{n}:\mathbb{G}^{n}\to[0,1]\) be the extended height coordinate function as specified in Remark 2.7. Given a value \(S\in\mathbb{R}\), we define the set \[V_{S}^{n}:=\big{\{}x\in\mathbb{G}^{n}\,:\,\operatorname{Im}(x)=S\big{\}},\] and we let \[\overline{a}_{S}^{n}:=\sup\big{\{}a\in(0,1)\,:\,\mathfrak{h}_{n}^{-1}(a)\cap V_{S}^{n}\neq\emptyset\big{\}},\quad\underline{a}_{S}^{n}:=\inf\big{\{}a\in(0,1)\,:\,\mathfrak{h}_{n}^{-1}(a)\cap V_{S}^{n}\neq\emptyset\big{\}}. \tag{4.1}\]
We fix throughout \(R>1\) and \(\delta\in(0,1)\) as in the proposition statement, and we let \[R^{\prime}:=\frac{R}{\delta}.\] We consider the set \[W_{R,\delta}^{n}:=\big{\{}V_{R^{\prime}}^{n}\cup V_{-R^{\prime}}^{n}\big{\}}\cup\big{\{}\mathfrak{h}_{n}^{-1}(\overline{a}_{R^{\prime}}^{n})\cup\mathfrak{h}_{n}^{-1}(\underline{a}_{-R^{\prime}}^{n})\big{\}}\subset\mathbb{G}^{n}.\] For each \(n\in\mathbb{N}\), by possibly locally modifying the a priori embedding of the graph \(\mathcal{G}^{n}\) in the infinite cylinder \(\mathcal{C}_{2\pi}\), we can assume without loss of generality that each edge in \(\mathcal{EG}^{n}\) crosses the circles at height \(R^{\prime}\) and \(-R^{\prime}\) at most finitely many times. In particular, this implies that we can assume that the set \(W_{R,\delta}^{n}\) contains at most finitely many points, and therefore, by Lemma 3.3, we can assume, without any loss of generality, that \(\mathcal{VG}^{n}\) contains all the points in \(W_{R,\delta}^{n}\). In what follows, in order to lighten the notation, we adopt the following conventions \[V_{S}\equiv V_{S}^{n},\qquad\overline{a}\equiv\overline{a}_{R^{\prime}}^{n},\quad\underline{a}\equiv\underline{a}_{-R^{\prime}}^{n}.\] Furthermore, we denote by \(V_{S}^{\dagger}\) the lift to the universal cover of \(V_{S}\), and we write \(\mathcal{VG}^{\dagger,n}(S)\) (resp. \(\mathcal{VG}^{n}(S)\)) as a shorthand for \(\mathcal{VG}^{\dagger,n}(\mathbb{R}\times[-S,S])\) (resp. \(\mathcal{VG}^{n}(\mathbb{R}/2\pi\mathbb{Z}\times[-S,S])\)). We refer to Figure 6 for an illustration of the sets involved in the proof of Proposition 4.3.

**Random walk notation.** For \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime})\), we consider the continuous time random walk \(\{\mathbf{X}_{t}^{n,\mathbf{x}}\}_{t\geq 0}\) on the lifted weighted graph \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime})\). We recall that the continuous path of such a random walk is generated by piecewise linear interpolation at constant speed. We consider the following stopping times \[\sigma_{\mathbf{x}}:=\inf\big{\{}t\geq 0\,:\,\mathbf{X}_{t}^{n,\mathbf{x}}\in V_{R^{\prime}}^{\dagger}\cup V_{-R^{\prime}}^{\dagger}\big{\}},\quad\tau_{\mathbf{x}}:=\inf\big{\{}t\geq 0\,:\,\mathbf{X}_{t}^{n,\mathbf{x}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\cup(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{\}}, \tag{4.2}\] and we observe that, thanks to the definitions of \(\overline{a}\) and \(\underline{a}\), it holds that \(\tau_{\mathbf{x}}\geq\sigma_{\mathbf{x}}\), for all \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime})\). Looking at Figure 6, the stopping time \(\sigma_{\mathbf{x}}\) accounts for the first time at which the random walk hits one of the blue vertices, while the stopping time \(\tau_{\mathbf{x}}\) accounts for the first time at which the random walk hits one of the green vertices.

#### 4.1.2 Some auxiliary results

We can now state the key lemma for the proof of Proposition 4.3. The proof of the result below is postponed until the end of this subsection.
**Lemma 4.4**.: _For any \(n>n(R,\delta)\) large enough, there exists a real number \(b_{n}^{\prime}=b_{n}^{\prime}(R,\delta)\) such that_ \[\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}-b_{n}^{\prime}-\operatorname{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\] _Similarly, for any \(n>n(R,\delta)\) large enough, there exists a real number \(b_{n}^{\prime\prime}=b_{n}^{\prime\prime}(R,\delta)\) such that_ \[\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{)}+b_{n}^{\prime\prime}+\operatorname{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\]

Roughly speaking, the first estimate in the previous lemma states that, for \(n>n(R,\delta)\) large enough, the probability that the lifted random walk started inside \(\mathcal{VG}^{\dagger,n}(R)\) hits the set \((\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\) before hitting \((\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\) depends approximately linearly on the height coordinate of the starting point of the walk in the a priori embedding. In order to prove this result, we need to rule out the possibility that the preimage of a horizontal line on the Smith embedding has large vertical fluctuations (see Figure 6). To do so, we use the invariance principle assumption on the sequence of primal graphs; more precisely, we proceed in the following three steps.

1. We start by proving that the probability that the lifted random walk started inside \(\mathcal{VG}^{\dagger,n}(R)\) hits the set \(V_{R^{\prime}}^{\dagger}\) before hitting \(V_{-R^{\prime}}^{\dagger}\) depends approximately linearly on the height coordinate of the starting point of the walk in the a priori embedding. This follows easily from the invariance principle assumption and is the content of Lemma 4.5 below.
2. We then prove that the random walk started from \(V_{-R^{\prime}}^{\dagger}\) has probability of order \(1/R^{\prime}\) of hitting \((\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\) before hitting \((\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\). Once again, this is an easy consequence of the invariance principle assumption, and it is the content of Lemma 4.6 below.
3. Finally, roughly speaking, in order to conclude, we need to improve the bound on the probability appearing in the previous step from order \(1/R^{\prime}\) to order \(o_{n}(1/R^{\prime})\). This is done by using Lemma 2.4 together with the invariance principle assumption. Indeed, as will become clear in the proof of Lemma 4.4, it is sufficient to estimate the probability that a random walk started inside \(\mathcal{VG}^{n}(R)\) does not disconnect \(V_{R}\cup V_{-R}\) from \(V_{R^{\prime}}\cup V_{-R^{\prime}}\) before hitting the latter set. In Lemma 4.7 below, we will see that such a probability is of order \(R/R^{\prime}\), and this will be enough to conclude.
Before proceeding, we observe that, for all \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\), it holds that \[\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{)}=1-\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)};\] hence, from now on, we can just focus on the first estimate in the statement of Lemma 4.4. We can now state and prove the technical lemmas mentioned above. We start with the following lemma, in which we study the probability that the lifted random walk started inside \(\mathcal{VG}^{\dagger,n}(R)\) hits the set \(V_{R^{\prime}}^{\dagger}\) before hitting \(V_{-R^{\prime}}^{\dagger}\).

**Lemma 4.5**.: _For any \(n>n(R,\delta)\) large enough, it holds that_ \[\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\sigma_{\mathbf{x}}}^{n}\in V_{R^{\prime}}^{\dagger}\big{)}-R^{\prime}-\mathrm{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\]

Proof.: For \(n\in\mathbb{N}\), fix a vertex \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\), consider a planar Brownian motion \(B^{\mathbf{x}}\) started from \(\mathbf{x}\), and define the stopping time \[\sigma_{B,\mathbf{x}}:=\inf\big{\{}t\geq 0\,:\,\big{|}\mathrm{Im}(B_{t}^{\mathbf{x}})\big{|}=R^{\prime}\big{\}}.\] Then, by assumption (H2), for any \(n>n(R,\delta)\) large enough, we have that \[\big{|}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\sigma_{\mathbf{x}}}^{n}\in V_{R^{\prime}}^{\dagger}\big{)}-\mathbb{P}_{\mathbf{x}}\big{(}\mathrm{Im}(B_{\sigma_{B,\mathbf{x}}})=R^{\prime}\big{)}\big{|}\leq\frac{\delta}{2R^{\prime}},\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\] Since \(\mathrm{Im}(B^{\mathbf{x}})\) is just a linear Brownian motion started from \(\mathrm{Im}(\mathbf{x})\), the gambler's ruin formula yields \[\mathbb{P}_{\mathbf{x}}\big{(}\mathrm{Im}(B_{\sigma_{B,\mathbf{x}}})=R^{\prime}\big{)}=\frac{R^{\prime}+\mathrm{Im}(\mathbf{x})}{2R^{\prime}},\] from which the conclusion follows.
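The gambler's ruin identity invoked above is elementary but easy to sanity-check numerically; the following sketch (ours, not from the paper) solves the discrete analogue for simple random walk on \(\{-M,\ldots,M\}\) and compares it with \((k+M)/(2M)\):

```python
import numpy as np

M = 5
A = np.zeros((2 * M + 1, 2 * M + 1))
b = np.zeros(2 * M + 1)
for i, k in enumerate(range(-M, M + 1)):
    if abs(k) == M:                          # absorbing boundary rows
        A[i, i] = 1.0
        b[i] = 1.0 if k == M else 0.0
    else:                                    # p(k) = (p(k-1) + p(k+1)) / 2
        A[i, i] = 1.0; A[i, i - 1] = -0.5; A[i, i + 1] = -0.5
p = np.linalg.solve(A, b)                    # p(k) = P_k(hit +M before -M)
for i, k in enumerate(range(-M, M + 1)):
    assert abs(p[i] - (k + M) / (2 * M)) < 1e-12
```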
We can now move to the second lemma, which gives an estimate for the probability that the random walk started from \(V_{-R^{\prime}}^{\dagger}\) hits \((\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\) before hitting \((\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\).

**Lemma 4.6**.: _For any \(n>n(R,\delta)\) large enough, it holds that_ \[\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}\preceq\frac{1}{R^{\prime}},\quad\forall\mathbf{x}\in V_{-R^{\prime}}^{\dagger},\] _where the implicit constant is independent of everything else._

Proof.: We start by noticing that, for all \(\mathbf{x}\in V_{-R^{\prime}}^{\dagger}\), it holds that \[\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}_{\tau_{\mathbf{x}}}^{n}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}\leq\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\tau_{\mathbf{x}}]})\text{ does not wind around the cylinder below height }-R^{\prime}\big{)},\] where we recall that \(\sigma_{2\pi}\) is defined in (2.3) and denotes the covering map of the infinite cylinder \(\mathcal{C}_{2\pi}\). The above inequality is due to the fact that, if \(\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\tau_{\mathbf{x}}]})\) winds around the cylinder below height \(-R^{\prime}\), then, by definition of \(\underline{a}\), it has to hit the set \(\mathfrak{h}_{n}^{-1}(\underline{a})\). We can now exploit assumption (H2) and find the corresponding upper bound for the Brownian motion. More precisely, let \(B^{\mathbf{x}}\) be a planar Brownian motion started from \(\mathbf{x}\in V_{-R^{\prime}}^{\dagger}\) and define the stopping time \[\tau_{B,\mathbf{x}}:=\inf\big{\{}t\geq 0\,:\,\mathrm{Im}(B_{t}^{\mathbf{x}})=-2R^{\prime}\text{ or }\mathrm{Im}(B_{t}^{\mathbf{x}})=R^{\prime}\big{\}}.\] Then, for any \(n>n(R,\delta)\) large enough, we have that \[\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\tau_{\mathbf{x}}]})\text{ does not wind around the cylinder below height }-R^{\prime}\big{)}\] \[\qquad\leq 2\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(B|_{[0,\tau_{B,\mathbf{x}}]})\text{ does not wind around the cylinder below height }-R^{\prime}\big{)}.\] Therefore, to conclude, it is sufficient to find a uniform upper bound for the quantity on the right-hand side of the above expression. This is done in Lemma A.1, from which the conclusion follows.

In order to prove Lemma 4.4, we also need the following lemma, which provides an estimate for the probability that a random walk started inside \(\mathcal{VG}^{n}(R)\) disconnects \(V_{R}\cup V_{-R}\) from \(V_{R^{\prime}}\cup V_{-R^{\prime}}\) before hitting the latter set.

**Lemma 4.7**.: _For any \(n>n(R,\delta)\) large enough, it holds that_ \[\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\sigma_{\mathbf{x}}]})\text{ does not disconnect }V_{R}\cup V_{-R}\text{ from }V_{R^{\prime}}\cup V_{-R^{\prime}}\big{)}\preceq\frac{R}{R^{\prime}},\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R),\] _where the implicit constant is independent of everything else._

Proof.: For \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\), let \(B^{\mathbf{x}}\) be a planar Brownian motion started from \(\mathbf{x}\), and define the stopping time \[\sigma_{B,\mathbf{x}}:=\inf\big{\{}t\geq 0\,:\,|\operatorname{Im}(B^{\mathbf{x}}_{t})|=R^{\prime}\big{\}}.\] By assumption (H2), we know that, for any \(n>n(R,\delta)\) large enough, it holds that \[\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\sigma_{\mathbf{x}}]})\text{ does not disconnect }V_{R}\cup V_{-R}\text{ from }V_{R^{\prime}}\cup V_{-R^{\prime}}\big{)}\] \[\qquad\leq 2\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(B|_{[0,\sigma_{B,\mathbf{x}}]})\text{ does not disconnect }\mathbb{R}/2\pi\mathbb{Z}\times\{-R,R\}\text{ from }\mathbb{R}/2\pi\mathbb{Z}\times\{-R^{\prime},R^{\prime}\}\big{)}.\] Therefore, it is sufficient to find a uniform upper bound for the quantity on the right-hand side of the above expression. This is the content of Lemma A.2, from which the desired result follows.

We are now ready to give a proof of Lemma 4.4. As we have already remarked, this will be a consequence of the previous three lemmas and of Lemma 2.4.
Proof of Lemma 4.4.: We start by defining the function \(\mathfrak{f}_{n}^{\dagger}:\mathcal{VG}^{\dagger,n}(R^{\prime})\to\mathbb{R}\) as follows \[\mathfrak{f}_{n}^{\dagger}(\mathbf{x}):=\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}-\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\sigma_{\mathbf{x}}}\in V^{\dagger}_{R^{\prime}}\big{)},\qquad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime}),\] so that we can write \[\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}=\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\sigma_{\mathbf{x}}}\in V^{\dagger}_{R^{\prime}}\big{)}+\mathfrak{f}_{n}^{\dagger}(\mathbf{x}),\qquad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime}). \tag{4.3}\] We now observe that, thanks to Lemma 4.5, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\sigma_{\mathbf{x}}}\in V^{\dagger}_{R^{\prime}}\big{)}-R^{\prime}-\operatorname{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R). \tag{4.4}\] Therefore, it is sufficient to study the function \(\mathfrak{f}_{n}^{\dagger}\) appearing in (4.3). To this end, we consider the functions \(\overline{\mathfrak{f}}_{n}^{\dagger}:\mathcal{VG}^{\dagger,n}(R^{\prime})\to[0,1]\) and \(\underline{\mathfrak{f}}_{n}^{\dagger}:\mathcal{VG}^{\dagger,n}(R^{\prime})\to[0,1]\) defined as follows \[\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x}):=\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\sigma_{\mathbf{x}}}\in V^{\dagger}_{-R^{\prime}},\,\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)},\qquad\underline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x}):=\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\sigma_{\mathbf{x}}}\in V^{\dagger}_{R^{\prime}},\,\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{)}.\] In particular, as one can easily check, it holds that \[\big{|}\mathfrak{f}_{n}^{\dagger}(\mathbf{x})\big{|}\leq\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x})+\underline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x}),\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime}),\] and so we can reduce the problem to the study of the functions \(\overline{\mathfrak{f}}_{n}^{\dagger}\) and \(\underline{\mathfrak{f}}_{n}^{\dagger}\). We will only study the function \(\overline{\mathfrak{f}}_{n}^{\dagger}\), as the function \(\underline{\mathfrak{f}}_{n}^{\dagger}\) can be treated similarly. Thanks to the strong Markov property of the random walk \(\mathbf{X}^{n,\mathbf{x}}\), we have that \[\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x})=\mathbb{E}_{\mathbf{x}}\big{[}\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{X}^{n}_{\sigma_{\mathbf{x}}})\big{]},\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R^{\prime}).\] Therefore, for \(\mathbf{x}\), \(\mathbf{y}\in\mathcal{VG}^{\dagger,n}(R)\), it holds that \[\big{|}\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x})-\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{y})\big{|}\leq\sup\big{\{}\big{|}\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{v})\big{|}\,:\,\mathbf{v}\in V^{\dagger}_{-R^{\prime}}\big{\}}\,\mathrm{d}^{\mathrm{TV}}\big{(}\mathbf{X}^{n,\mathbf{x}}_{\sigma_{\mathbf{x}}},\mathbf{X}^{n,\mathbf{y}}_{\sigma_{\mathbf{y}}}\big{)}, \tag{4.5}\] where \(\mathrm{d}^{\mathrm{TV}}\) denotes the total variation distance. Hence, it is sufficient to find an upper bound for the two terms on the right-hand side of (4.5). We treat the two factors separately.
* In order to bound the first factor, we just need to bound, uniformly in \(\mathbf{v}\in V_{-R^{\prime}}^{\dagger}\), the probability that a random walk on \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from \(\mathbf{v}\) hits \((\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\) before hitting \((\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\). This is exactly the content of Lemma 4.6, from which we can deduce that, for all \(n>n(R,\delta)\) large enough, it holds that \[\sup\big{\{}\big{|}\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{v})\big{|}\,:\,\mathbf{v}\in V_{-R^{\prime}}^{\dagger}\big{\}}\preceq\frac{1}{R^{\prime}},\] where the implicit constant is universal.
* In order to bound the second factor, we can use Lemma 2.4. More precisely, it is sufficient to estimate the probability that \(\sigma_{2\pi}(\mathbf{X}^{n,\mathbf{x}}|_{[0,\sigma_{\mathbf{x}}]})\) disconnects \(V_{R}\cup V_{-R}\) from \(V_{R^{\prime}}\cup V_{-R^{\prime}}\). This is exactly the content of Lemma 4.7, which guarantees that, for all \(n>n(R,\delta)\) large enough and for all \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\), it holds that \[\mathbb{P}_{\mathbf{x}}\big{(}\sigma_{2\pi}(\mathbf{X}^{n}|_{[0,\sigma_{\mathbf{x}}]})\text{ does not disconnect }V_{R}\cup V_{-R}\text{ from }V_{R^{\prime}}\cup V_{-R^{\prime}}\big{)}\preceq\frac{R}{R^{\prime}},\] where the implicit constant is independent of everything else. Therefore, this fact together with Lemma 2.4 implies that \[\mathrm{d}^{\mathrm{TV}}\big{(}\mathbf{X}^{n,\mathbf{x}}_{\sigma_{\mathbf{x}}},\mathbf{X}^{n,\mathbf{y}}_{\sigma_{\mathbf{y}}}\big{)}\preceq\frac{R}{R^{\prime}},\quad\forall\mathbf{x},\mathbf{y}\in\mathcal{VG}^{\dagger,n}(R).\]

Therefore, putting together the previous two bullet points and going back to (4.5), we get that, for every \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{x})-\overline{\mathfrak{f}}_{n}^{\dagger}(\mathbf{y})\big{|}\preceq\frac{R}{R^{\prime 2}},\quad\forall\mathbf{x},\mathbf{y}\in\mathcal{VG}^{\dagger,n}(R).\] Furthermore, the same uniform bound can also be obtained for the function \(\underline{\mathfrak{f}}_{n}^{\dagger}\), but we omit the details since the argument is similar. Summing up, we obtained that, for all \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathfrak{f}_{n}^{\dagger}(\mathbf{x})-\mathfrak{f}_{n}^{\dagger}(\mathbf{y})\big{|}\preceq\frac{R}{R^{\prime 2}},\quad\forall\mathbf{x},\,\mathbf{y}\in\mathcal{VG}^{\dagger,n}(R).\] Hence, to conclude, we can simply proceed as follows. For every \(n>n(R,\delta)\) large enough, fix an arbitrary vertex \(\mathbf{y}\in\mathcal{VG}^{\dagger,n}(R)\). Then, recalling that by definition \(R^{\prime}=R/\delta\), it holds that \[2R^{\prime}\big{|}\mathfrak{f}_{n}^{\dagger}(\mathbf{x})-\mathfrak{f}_{n}^{\dagger}(\mathbf{y})\big{|}\preceq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\] Therefore, thanks to (4.3) and estimate (4.4), we find that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}-b_{n}^{\prime}-\mathrm{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R),\] where \(b_{n}^{\prime}:=2R^{\prime}\mathfrak{f}_{n}^{\dagger}(\mathbf{y})+R^{\prime}\).

#### 4.1.3 Proof of Proposition 4.3

We are now ready to prove Proposition 4.3. In what follows, we will make use of some notation introduced in the preceding subsection.
In particular, we will consider the stopping times \(\sigma_{\mathbf{x}}\), \(\tau_{\mathbf{x}}\) defined in (4.2), and the quantities defined in the introduction of the preceding subsection. We will also adopt the same notational conventions of the previous subsection in order to lighten the notation.

Proof of Proposition 4.3.: We divide the proof in two steps.

**Step 1.** In this first step we show that, for fixed \(R>1\) and \(\delta\in(0,1)\), for any \(n>n(R,\delta)\) large enough, we can find real numbers \(b_{n}^{R,\delta}\) and \(c_{n}^{R,\delta}\) such that the conclusion of the proposition holds. To this end, let \(R^{\prime}:=R/\delta\), fix \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\) and let \(\mathbf{X}^{n,\mathbf{x}}\) be the random walk on \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from \(\mathbf{x}\). Thanks to the harmonicity of the height coordinate function \(\mathfrak{h}_{n}^{\dagger}\) and to the optional stopping theorem, we have that \[\mathfrak{h}_{n}^{\dagger}(\mathbf{x})=\overline{a}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}+\underline{a}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{)},\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\] Therefore, the problem has been reduced to proving that the probabilities appearing in the previous expression are approximately affine transformations of the height coordinate function in the a priori embedding. By Lemma 4.4, for all \(n>n(R,\delta)\) large enough, there exist real numbers \(b^{\prime}_{n}=b^{\prime}_{n}(R,\delta)\) and \(b^{\prime\prime}_{n}=b^{\prime\prime}_{n}(R,\delta)\) for which, for all \(\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R)\), it holds that \[\big{|}2R^{\prime}\mathfrak{h}_{n}^{\dagger}(\mathbf{x})-\overline{a}(b^{\prime}_{n}+\mathrm{Im}(\mathbf{x}))+\underline{a}(b^{\prime\prime}_{n}+\mathrm{Im}(\mathbf{x}))\big{|}\] \[\quad\leq\overline{a}\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\big{)}-b^{\prime}_{n}-\mathrm{Im}(\mathbf{x})\big{|}+\underline{a}\big{|}2R^{\prime}\mathbb{P}_{\mathbf{x}}\big{(}\mathbf{X}^{n}_{\tau_{\mathbf{x}}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\big{)}+b^{\prime\prime}_{n}+\mathrm{Im}(\mathbf{x})\big{|}\] \[\quad\preceq\delta.\] Therefore, rearranging the terms in the above expression and letting \[c_{n}^{R,\delta}:=\frac{2R^{\prime}}{|\overline{a}-\underline{a}|},\qquad b_{n}^{R,\delta}:=\frac{\underline{a}b^{\prime\prime}_{n}-\overline{a}b^{\prime}_{n}}{|\overline{a}-\underline{a}|},\] we find that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}c_{n}^{R,\delta}\mathfrak{h}_{n}^{\dagger}(\mathbf{x})+b_{n}^{R,\delta}-\mathrm{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R).\]

**Step 2.** In this second step, we show how we can define real sequences \(\{b_{n}\}_{n\in\mathbb{N}}\) and \(\{c_{n}\}_{n\in\mathbb{N}}\), independent of \(R\) and \(\delta\), such that the conclusion of the proposition holds. To this end, consider an increasing sequence \(\{R_{k}\}_{k\in\mathbb{N}}\subset[1,\infty)\) and a decreasing sequence \(\{\delta_{k}\}_{k\in\mathbb{N}}\subset(0,1)\) such that \(R_{k}\to\infty\) and \(\delta_{k}\to 0\), as \(k\to\infty\).
Then, thanks to the previous step, we know that, for all \(k\in\mathbb{N}\), there exists \(n_{k}=n_{k}(R_{k},\delta_{k})\in\mathbb{N}\) such that, for all \(n>n_{k}\), there exist real numbers \(c_{n}^{R_{k},\delta_{k}}\), \(b_{n}^{R_{k},\delta_{k}}\in\mathbb{R}\) such that \[\big{|}c_{n}^{R_{k},\delta_{k}}\mathfrak{h}_{n}^{\dagger}(\mathbf{x})+b_{n}^{R_{k},\delta_{k}}-\mathrm{Im}(\mathbf{x})\big{|}\leq\delta_{k},\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R_{k}).\] Without any loss of generality, we can assume that the sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) is increasing. Then, for all \(n\in[0,n_{1})\cap\mathbb{N}\), we let \(c_{n}^{\mathfrak{h}}:=1\), \(b_{n}^{\mathfrak{h}}:=1\), and for all \(k\in\mathbb{N}\) and \(n\in[n_{k},n_{k+1})\cap\mathbb{N}\), we set \(c_{n}^{\mathfrak{h}}:=c_{n}^{R_{k},\delta_{k}}\), \(b_{n}^{\mathfrak{h}}:=b_{n}^{R_{k},\delta_{k}}\). Therefore, if we fix \(R>1\) and \(\delta\in(0,1)\), for all \(n>n(R,\delta)\) large enough, it holds that \[\big{|}c_{n}^{\mathfrak{h}}\mathfrak{h}_{n}^{\dagger}(\mathbf{x})+b_{n}^{\mathfrak{h}}-\mathrm{Im}(\mathbf{x})\big{|}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{VG}^{\dagger,n}(R),\] which concludes the proof.

### Width coordinate

For each \(n\in\mathbb{N}\), consider the lifted width coordinate function \[\mathfrak{w}_{n}^{\dagger}:\mathcal{VG}^{\dagger,n}\to\mathbb{R},\] as defined in (2.9). The main goal of this subsection is to show that there exists an affine transformation of the function \(\mathfrak{w}_{n}^{\dagger}\) that is asymptotically close, uniformly on lifts of compact subsets of the infinite cylinder \(\mathcal{C}_{2\pi}\), to the width coordinate function \(\widehat{\mathbf{x}}\mapsto\mathrm{Re}(\widehat{\mathbf{x}})\) in the a priori lifted embedded graph \(\mathcal{G}^{\dagger,n}\), as \(n\to\infty\). More precisely, we have the following result.

**Proposition 4.8**.: _There exists a sequence \(\{b_{n}^{\mathfrak{w}}\}_{n\in\mathbb{N}}\subset\mathbb{R}\) such that, for all \(R>1\), \(\delta\in(0,1)\), and for any \(n>n(R,\delta)\) large enough, it holds that_ \[\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{x}})+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\widehat{\mathbf{x}})\bigg{|}\leq\delta,\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(\mathbb{R}\times[-R,R]), \tag{4.6}\] _where we recall that \(\mathfrak{n}_{n}\) denotes the strength of the flow induced by \(\mathfrak{h}_{n}\) as defined in (2.7)._

The proof of this proposition is postponed to Subsection 4.2.3. As we will see, the proof is based on Lemma 4.9 below and the following two facts: (a) the harmonicity of the lifted width coordinate function \(\mathfrak{w}_{n}^{\dagger}\); (b) the invariance principle assumption on the sequence of dual graphs.

#### 4.2.1 Setup of the proof

For each \(n\in\mathbb{N}\), consider the metric graphs \(\mathbb{G}^{n}\) and \(\widehat{\mathbb{G}}^{n}\) associated to \(\mathcal{G}^{n}\) and \(\widehat{\mathcal{G}}^{n}\), respectively. Let \(\mathfrak{h}_{n}:\mathbb{G}^{n}\to[0,1]\) be the extended height coordinate function as specified in Remark 2.7, and \(\mathfrak{w}_{n}:\widehat{\mathbb{G}}^{n}\to\mathbb{R}\) be the extended width coordinate function as specified in Remark 2.11.
Given a value \(S\in\mathbb{R}\), we define the set \[\widehat{V}_{S}^{n}:=\big{\{}\widehat{x}\in\widehat{\mathbb{G}}^{n}\,:\,\mathrm{Im}(\widehat{x})=S\big{\}}.\] Given a value \(a\in(0,1)\), we recall that the sets \(\mathcal{E}\mathcal{G}^{\dagger}_{a}\), \(\mathcal{E}\widehat{\mathcal{G}}^{\dagger}_{a}\), \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger}_{a}\) are all defined in Subsection 3.2. We also recall that \(\widehat{\mathbf{x}}_{0}^{n,a}\) denotes the vertex in \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}\) whose real coordinate is nearest to \(0\), and we refer to (3.1) and (3.2) for the definitions of the sets \(\widehat{\mathcal{P}}^{\dagger,n}_{a}\) and \(\mathcal{P}^{\dagger,n}_{a}\), respectively. We fix throughout \(R>1\) and \(\delta\in(0,1)\) as in the proposition statement, and we let \[R^{\prime}:=\frac{R}{\delta}.\] For each \(n\in\mathbb{N}\), we consider \(\overline{a}^{n}_{R^{\prime}}\) and \(\underline{a}^{n}_{-R^{\prime}}\) as defined in (4.1). Moreover, thanks to Proposition 4.3, for any \(n>n(R,\delta)\) large enough, we can choose a value \(a^{n}_{R^{\prime}}\) such that the set \(\mathfrak{h}_{n}^{-1}(a^{n}_{R^{\prime}})\) is fully contained in the infinite strip \(\mathbb{R}\times[-1,1]\). In what follows, in order to lighten the notation, we adopt the following conventions \[\widehat{V}_{S}\equiv\widehat{V}_{S}^{n},\qquad\overline{a}\equiv\overline{a}^{n}_{R^{\prime}},\quad\underline{a}\equiv\underline{a}^{n}_{-R^{\prime}},\quad a\equiv a^{n}_{R^{\prime}}.\] Furthermore, we denote by \(\widehat{V}_{S}^{\dagger}\) the lift to the universal cover of \(\widehat{V}_{S}\), and we write \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(S)\) (resp. \(\mathcal{V}\widehat{\mathcal{G}}^{n}(S)\)) as a shorthand for \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(\mathbb{R}\times[-S,S])\) (resp. \(\mathcal{V}\widehat{\mathcal{G}}^{n}(\mathbb{R}/2\pi\mathbb{Z}\times[-S,S])\)). We also adopt the convention to fix the additive constant of the lifted width coordinate function \(\mathfrak{w}_{n}^{\dagger}\) by setting it to zero on the point \(\widehat{\mathbf{x}}_{0}^{n,a}\). We observe that changing the base vertex of the lifted width coordinate function has only the effect of changing the value of \(b^{\mathfrak{w}}_{n}\) appearing in the statement of Proposition 4.8. Similarly to what we discussed in the preceding subsection, by Lemma 3.3 and Remark 3.5, we can assume, without loss of generality, that \(\mathcal{V}\mathcal{G}^{n}\) contains all the points in the set \[\mathfrak{h}_{n}^{-1}(\overline{a})\cup\mathfrak{h}_{n}^{-1}(\underline{a})\cup\mathfrak{h}_{n}^{-1}(a)\subset\mathbb{G}^{n},\] and that \(\mathcal{V}\widehat{\mathcal{G}}^{n}\) contains all the points in the set \[\widehat{V}_{R^{\prime}-1}\cup\widehat{V}_{-R^{\prime}+1}\subset\widehat{\mathbb{G}}^{n}.\] This is allowed since, as in the case of the height coordinate function, by possibly locally modifying the a priori embedding of the dual graph \(\widehat{\mathcal{G}}^{n}\) in \(\mathcal{C}_{2\pi}\), we can assume that each edge in \(\mathcal{E}\widehat{\mathcal{G}}^{n}\) crosses the circles at height \(R^{\prime}-1\) and \(-R^{\prime}+1\) at most finitely many times. We refer to Figure 7 for an illustration of the sets involved in the proof of Proposition 4.8.
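Before introducing the random walk notation, it may help to see the two objects of this section in a toy setting. The following minimal Python sketch is purely illustrative and not part of the argument: it computes a discrete harmonic "height coordinate" on a small cylinder graph with unit conductances (the grid size, boundary levels, and starting vertex are all arbitrary choices of ours) and checks numerically that its value at a vertex equals the probability that the walk started there hits the top boundary circle before the bottom one, in the spirit of the optional stopping identities used below.

```python
import numpy as np

# Illustrative toy model (not part of the argument): a discrete harmonic
# height coordinate on an N x M cylinder graph with unit conductances.
N, M = 12, 20            # N height levels, M angular positions (arbitrary)
top, bottom = N - 1, 0   # boundary circles, standing in for the levels a-bar, a-underbar

def neighbors(i, j):
    nbrs = [(i, (j - 1) % M), (i, (j + 1) % M)]   # periodic angular direction
    if i > 0:
        nbrs.append((i - 1, j))
    if i < N - 1:
        nbrs.append((i + 1, j))
    return nbrs

# Discrete Dirichlet problem: h harmonic in the interior,
# h = 1 on the top circle and h = 0 on the bottom circle.
idx = {(i, j): i * M + j for i in range(N) for j in range(M)}
A = np.zeros((N * M, N * M))
b = np.zeros(N * M)
for (i, j), k in idx.items():
    if i in (top, bottom):
        A[k, k], b[k] = 1.0, float(i == top)
    else:
        nbrs = neighbors(i, j)
        A[k, k] = len(nbrs)
        for v in nbrs:
            A[k, idx[v]] -= 1.0
h = np.linalg.solve(A, b)

# Optional stopping check: h(x) = P_x(hit top circle before bottom circle).
rng = np.random.default_rng(0)
x0, hits, trials = (N // 2, 0), 0, 4000
for _ in range(trials):
    i, j = x0
    while i not in (top, bottom):
        nb = neighbors(i, j)
        i, j = nb[rng.integers(len(nb))]
    hits += (i == top)
print(h[idx[x0]], hits / trials)   # the two values should be close
```

On this rotationally symmetric cylinder the harmonic height is linear in the vertical coordinate, so the Monte Carlo estimate should match the linear solve up to sampling error.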
**Random walk notation.** For \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\), we consider the continuous time random walk \(\{\widehat{\mathbf{X}}_{t}^{n,\widehat{\mathbf{x}}}\}_{t\geq 0}\) on the lifted weighted dual graph \((\widehat{\mathcal{G}}^{\dagger,n},\widehat{c}^{\dagger,n})\) started from \(\widehat{\mathbf{x}}\). We recall that the continuous path of such a random walk is simply generated by piecewise linear interpolation at constant speed. We consider the following stopping times \[\sigma_{\widehat{\mathbf{x}}}:=\inf\big{\{}t\geq 0\,:\,\widehat{\mathbf{X}}_{t}^{n,\widehat{\mathbf{x}}}\in\widehat{V}_{R^{\prime}-1}^{\dagger}\cup\widehat{V}_{-R^{\prime}+1}^{\dagger}\big{\}},\quad\tau_{\widehat{\mathbf{x}}}:=\inf\big{\{}t\geq 0\,:\,\widehat{\mathbf{X}}_{t}^{n,\widehat{\mathbf{x}}}\in\mathcal{V}\widehat{\mathcal{G}}_{\overline{a}}^{\dagger,n}\cup\mathcal{V}\widehat{\mathcal{G}}_{\underline{a}}^{\dagger,n}\big{\}}. \tag{4.7}\] As we will observe more precisely below, for \(n>n(R,\delta)\) large enough, thanks to Proposition 4.3, we have that \[\mathcal{V}\widehat{\mathcal{G}}_{\overline{a}}^{\dagger,n}\subset\mathbb{R}\times[R^{\prime}-1,R^{\prime}+1],\qquad\mathcal{V}\widehat{\mathcal{G}}_{\underline{a}}^{\dagger,n}\subset\mathbb{R}\times[-R^{\prime}-1,-R^{\prime}+1],\] and so it holds that \(\tau_{\widehat{\mathbf{x}}}\geq\sigma_{\widehat{\mathbf{x}}}\), for all \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\). Looking at Figure 7, the stopping time \(\sigma_{\widehat{\mathbf{x}}}\) accounts for the first time at which the random walk hits one of the red dual vertices, while the stopping time \(\tau_{\widehat{\mathbf{x}}}\) accounts for the first time at which the random walk hits one of the blue dual vertices at the top or bottom of the figure. In what follows, we also need to consider random walks on the sequence of primal lifted weighted graphs. To be precise, let \(a\) be as specified in the introduction of this section (we recall that here \(a\) is a shorthand for \(a_{R^{\prime}}^{n}\)) and consider the probability measure \(\mu_{a}\) on the set \(\mathfrak{h}_{n}^{-1}(a)\) as specified in Definition 3.11. We let \(X^{n,\mu_{a}}\) be the random walk on the weighted graph \((\mathcal{G}^{n},c^{n})\) started from a point in \(\mathfrak{h}_{n}^{-1}(a)\) sampled according to \(\mu_{a}\). We also consider the associated continuous time lifted random walk \(\{\mathbf{X}_{t}^{n,\mu_{a}}\}_{t\geq 0}\) on \((\mathcal{G}^{\dagger,n},c^{\dagger,n})\) started from a point in \(\mathcal{P}_{a}^{\dagger,n}\) sampled according to \(\mu_{a}\). We define the following stopping times \[\vartheta_{+}:=\inf\bigl{\{}t\geq 0\,:\,\mathbf{X}_{t}^{n,\mu_{a}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\bigr{\}},\quad\vartheta_{-}:=\inf\bigl{\{}t\geq 0\,:\,\mathbf{X}_{t}^{n,\mu_{a}}\in(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\bigr{\}}. \tag{4.8}\] Furthermore, we will also need to consider the lifted Smith-embedded random walk \(\{\hat{\mathbf{X}}_{t}^{n,\mu_{a}}\}_{t\geq 0}\) associated to \(\mathbf{X}^{n,\mu_{a}}\) as specified in Subsection 3.4.

#### 4.2.2 Some auxiliary results

We can now state the key lemma for the proof of Proposition 4.8. The proof of the below result is postponed until the end of this subsection.
**Lemma 4.9**.: _For any \(n>n(R,\delta)\) large enough there exists a finite constant \(b_{n}^{\prime}\in\mathbb{R}\) such that_ \[\left|\mathbb{E}_{\widehat{\mathbf{x}}}\left[\frac{\operatorname{Re}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})}{2\pi}-\frac{\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})+b_{n}^{\prime}}{\mathfrak{n}_{n}}\right]\right|\preceq 1,\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1),\] _where the implicit constant is universal, and \(\mathfrak{n}_{n}\) denotes the strength of the flow induced by \(\mathfrak{h}_{n}\) as defined in (2.7)._

We now explain how the proof of Lemma 4.9 is structured. For each \(n\in\mathbb{N}\), we consider the map \(\hat{\mathcal{S}}_{n}^{\dagger,\text{rand}}\), defined in Definition 2.15, that assigns to each vertex \(\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger,n}\) the random variable \(\hat{\mathcal{S}}_{n}^{\dagger,\text{rand}}(\mathbf{x})\) which is uniformly distributed on the horizontal segment \(\mathcal{S}_{n}^{\dagger}(\mathbf{x})\). The proof of Lemma 4.9 can be basically divided into four main steps.

* (a) We start by considering the continuous time lifted random walk \(\mathbf{X}^{n,\mu_{a}}\). In Lemma 4.11, we prove that, for \(M\in\mathbb{N}\), the conditional probability, given \(\vartheta_{+}<\vartheta_{-}\), of the event \(|\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{\vartheta_{+}})-\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{0})|>MR^{\prime}\) decays exponentially in \(M\). Using this result, we then prove in Lemma 4.12 that the conditional expectation of \(\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{\vartheta_{+}})-\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{0})\), given \(\vartheta_{+}<\vartheta_{-}\), is of order one. Both the proofs of Lemmas 4.11 and 4.12 are based on the invariance principle assumption on the sequence of primal graphs.
* (b) We then consider the Smith-embedded random walk \(\hat{\mathbf{X}}^{n,\mu_{a}}\) associated with \(\mathbf{X}^{n,\mu_{a}}\). In Lemma 4.13, we prove that the conditional expectation of \(\operatorname{Re}(\hat{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}})\), given \(\vartheta_{+}<\vartheta_{-}\), is of order \(\mathfrak{n}_{n}\). This result is basically a consequence of Lemma 3.17, which guarantees that the expected horizontal winding of the Smith-embedded random walk is equal to zero.
* (c) In Lemma 4.15, using the relation between the maps \(\hat{\mathcal{S}}^{\dagger,\operatorname{rand}}_{n}\) and \(\mathfrak{w}^{\dagger}_{n}\), together with the results proved in the previous steps, we prove that there exists a finite constant \(b^{\prime}_{n}\in\mathbb{R}\) such that the values of the width coordinate function on \(\widehat{\mathcal{P}}^{\dagger,n}_{\overline{a}}\), shifted by \(b^{\prime}_{n}\), are of order \(\mathfrak{n}_{n}\). The key input to prove such a result comes from Lemma 4.14, in which we prove that the probability that the random walk \(\mathbf{X}^{n,\mu_{a}}\) travels a large horizontal distance in a "narrow horizontal tube" decays exponentially fast. The proof of this fact follows from the invariance principle assumption on the sequence of primal graphs.
* (d) Finally, in Lemma 4.16, we state the analogous dual result of Lemma 4.14. This fact and the periodicity of the Smith diagram will allow us to deduce Lemma 4.9.

Let us emphasize that all the results explained above hold also with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged.
In particular, the same result stated in point (c) holds with the set \(\widehat{\mathcal{P}}^{\dagger,n}_{\overline{a}}\) replaced by \(\widehat{\mathcal{P}}^{\dagger,n}_{\underline{a}}\).

**Remark 4.10**.: Before proceeding with the precise statements and the proofs of the above mentioned lemmas, we observe that, by Proposition 4.3, for any \(\varepsilon\in(0,1)\) and for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that \[(\mathfrak{h}^{\dagger}_{n})^{-1}(\overline{a})\subset\mathbb{R}\times[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon],\qquad(\mathfrak{h}^{\dagger}_{n})^{-1}(\underline{a})\subset\mathbb{R}\times[-R^{\prime}-\varepsilon,-R^{\prime}+\varepsilon].\] Furthermore, by definitions of \(\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\overline{a}}\), \(\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\underline{a}}\) and Lemma 4.2, we have that, for any \(\varepsilon\in(0,1)\) and for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that \[\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\overline{a}}\subset\mathbb{R}\times[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon],\qquad\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\underline{a}}\subset\mathbb{R}\times[-R^{\prime}-\varepsilon,-R^{\prime}+\varepsilon].\] These facts will be of key importance in the remaining part of this subsection and they will be used several times.

We can now proceed to state and prove the technical lemmas mentioned above. We start with the following lemma which states that, for \(M\in\mathbb{N}\), the conditional probability of the event \(|\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{\vartheta_{+}})-\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{0})|>MR^{\prime}\), given \(\vartheta_{+}<\vartheta_{-}\), decays exponentially in \(M\). Heuristically speaking, this is due to the fact that, after each time that the random walk \(\mathbf{X}^{n,\mu_{a}}\) travels horizontal distance \(R^{\prime}\), there is a positive chance of hitting the set \((\mathfrak{h}^{\dagger}_{n})^{-1}(\overline{a})\).

**Lemma 4.11**.: _There exists a universal constant \(C>0\) such that, for any \(n>n(R,\delta)\) large enough, it holds that_ \[\mathbb{P}_{\mu_{a}}\big{(}|\operatorname{Re}(\mathbf{X}^{n}_{\vartheta_{+}})-\operatorname{Re}(\mathbf{X}^{n}_{0})|>MR^{\prime}\mid\vartheta_{+}<\vartheta_{-}\big{)}\preceq\exp(-CM),\quad\forall M\in\mathbb{N},\] _where the implicit constant is independent of everything else. The same conclusion holds with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged._

Proof.: As observed in Remark 4.10, for any \(n>n(R,\delta)\) large enough, we can assume that \[(\mathfrak{h}^{\dagger}_{n})^{-1}(\overline{a})\subset\mathbb{R}\times[R^{\prime},R^{\prime}+1],\quad(\mathfrak{h}^{\dagger}_{n})^{-1}(\underline{a})\subset\mathbb{R}\times[-R^{\prime}-1,-R^{\prime}],\quad(\mathfrak{h}^{\dagger}_{n})^{-1}(a)\subset\mathbb{R}\times[-1,1]. \tag{4.9}\] Now, letting \(\vartheta:=\vartheta_{+}\wedge\vartheta_{-}\) and \(M\in\mathbb{N}\), we can write \[\mathbb{P}_{\mu_{a}}\big{(}|\operatorname{Re}(\mathbf{X}^{n}_{\vartheta_{+}})-\operatorname{Re}(\mathbf{X}^{n}_{0})|>MR^{\prime}\mid\vartheta_{+}<\vartheta_{-}\big{)}\leq\frac{\mathbb{P}_{\mu_{a}}\big{(}|\operatorname{Re}(\mathbf{X}^{n}_{\vartheta})-\operatorname{Re}(\mathbf{X}^{n}_{0})|>MR^{\prime}\big{)}}{\mathbb{P}_{\mu_{a}}\big{(}\vartheta_{+}<\vartheta_{-}\big{)}}.\] We note that, thanks to assumption (H2) and (4.9), for any \(n>n(R,\delta)\) large enough, the probability in the denominator can be lower bounded by a constant independent of everything else.
Therefore, we can just focus on the probability appearing in the numerator. To this end, let \(\rho_{0}:=0\), and for \(k\in\mathbb{N}_{0}\) define inductively \[\rho_{k+1}:=\inf\bigl{\{}t\geq\rho_{k}\,:\,\bigl{|}\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{t})-\operatorname{Re}(\mathbf{X}^{n,\mu_{a}}_{\rho_{k}})\bigr{|}\geq R^{\prime}\bigr{\}}.\] Moreover, for \(k\in\mathbb{N}_{0}\), consider the event \[A_{k}^{n}:=\big{\{}\mathbf{X}^{n,\mu_{a}}|_{[\rho_{k},\rho_{k+1}]}\not\subset\mathbb{R}\times\big{[}\mathrm{Im}(\mathbf{X}_{\rho_{k}}^{n,\mu_{a}})-3R^{\prime},\mathrm{Im}(\mathbf{X}_{\rho_{k}}^{n,\mu_{a}})+3R^{\prime}\big{]}\big{\}}.\] We observe that, thanks to the strong Markov property of the random walk, the events \(\{A_{k}^{n}\}_{k\in\mathbb{N}_{0}}\) are independent and identically distributed. Moreover, thanks to assumption (H2), to estimates (4.9), and to well-known properties of Brownian motion, for any \(n\in\mathbb{N}\) large enough, the event \(A_{0}^{n}\) happens with uniformly positive probability \(p\), independent of everything else. Therefore, we have that \[\mathbb{P}_{\mu_{a}}\big{(}|\operatorname{Re}(\mathbf{X}_{\vartheta}^{n})-\operatorname{Re}(\mathbf{X}_{0}^{n})|>MR^{\prime}\big{)}\leq\mathbb{P}_{\mu_{a}}\left(\bigcap_{i=0}^{M-1}\overline{A_{i}^{n}}\right)=(1-p)^{M},\] from which the desired result follows. Finally, the same argument also applies with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged.

In the next lemma, we use Lemma 4.11 to prove that the conditional expectation of \(\operatorname{Re}(\mathbf{X}_{\vartheta_{+}}^{n,\mu_{a}})-\operatorname{Re}(\mathbf{X}_{0}^{n,\mu_{a}})\), given \(\vartheta_{+}<\vartheta_{-}\), is of order one.

**Lemma 4.12**.: _For any \(n>n(R,\delta)\) large enough, it holds that_ \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\operatorname{Re}(\mathbf{X}_{\vartheta_{+}}^{n})-\operatorname{Re}(\mathbf{X}_{0}^{n})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\preceq 1,\] _where the implicit constant is universal. The same result holds with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged._

Proof.: The proof of this result is based on assumption (H2). More precisely, let \(B^{\mu_{a}}\) be a planar Brownian motion started from a point in \(\mathcal{P}_{a}^{\dagger,n}\) sampled according to the probability measure \(\mu_{a}\), and consider the following stopping times \[\vartheta_{B,+}:=\inf\big{\{}t\geq 0\,:\,\mathrm{Im}(B_{t}^{\mu_{a}})=R^{\prime}\big{\}},\qquad\vartheta_{B,-}:=\inf\big{\{}t\geq 0\,:\,\mathrm{Im}(B_{t}^{\mu_{a}})=-R^{\prime}\big{\}}.\] Since \(\operatorname{Re}(B^{\mu_{a}})\) and \(\mathrm{Im}(B^{\mu_{a}})\) are independent, and since the stopping times \(\vartheta_{B,+}\) and \(\vartheta_{B,-}\) only depend on \(\mathrm{Im}(B^{\mu_{a}})\), thanks to well-known properties of Brownian motion, we have that \[\mathbb{E}_{\mu_{a}}\big{[}\operatorname{Re}(B_{\vartheta_{B,+}})-\operatorname{Re}(B_{0})\mid\vartheta_{B,+}<\vartheta_{B,-}\big{]}=0. \tag{4.10}\] Furthermore, as observed in Remark 4.10, for any \(\varepsilon\in(0,1)\) and for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that \[(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\subset\mathbb{R}\times[R^{\prime},R^{\prime}+\varepsilon],\qquad(\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\subset\mathbb{R}\times[-R^{\prime}-\varepsilon,-R^{\prime}].\] In particular, this fact and Lemma 4.2 imply that the set \((\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\) (resp.
\((\mathfrak{h}_{n}^{\dagger})^{-1}(\underline{a})\)) converges in the Hausdorff metric to the horizontal line \(\mathbb{R}\times\{R^{\prime}\}\) (resp. \(\mathbb{R}\times\{-R^{\prime}\}\)). Therefore, thanks to assumption (H2), the following weak convergence of laws holds \[\lim_{n\to\infty}\mathrm{Law}\big{(}\mathrm{Re}(\mathbf{X}_{\vartheta_{+}}^{n,\mu_{a}})-\operatorname{Re}(\mathbf{X}_{0}^{n,\mu_{a}})\mid\vartheta_{+}<\vartheta_{-}\big{)}=\mathrm{Law}\big{(}\mathrm{Re}(B_{\vartheta_{B,+}}^{\mu_{a}})-\operatorname{Re}(B_{0}^{\mu_{a}})\mid\vartheta_{B,+}<\vartheta_{B,-}\big{)}.\] Hence, thanks to Lemma 4.11 and Vitali's convergence theorem, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\mathbf{X}_{\vartheta_{+}}^{n})-\operatorname{Re}(\mathbf{X}_{0}^{n})\mid\vartheta_{+}<\vartheta_{-}\big{]}-\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(B_{\vartheta_{B,+}})-\operatorname{Re}(B_{0})\mid\vartheta_{B,+}<\vartheta_{B,-}\big{]}\big{|}\leq 1. \tag{4.11}\] Hence, putting together (4.10) and (4.11), we obtain the desired result. Finally, the same argument also applies with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged.

In the next lemma, we see how we can use Lemma 3.17 to prove that the conditional expectation of \(\operatorname{Re}(\hat{\mathbf{X}}_{\vartheta_{+}}^{n,\mu_{a}})\), given \(\vartheta_{+}<\vartheta_{-}\), is of order \(\mathfrak{n}_{n}\).

**Lemma 4.13**.: _For any \(n\in\mathbb{N}\), it holds that_ \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\hat{\mathbf{X}}_{\vartheta_{+}}^{n})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\leq\mathfrak{n}_{n},\] _and the same with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged._

Proof.: Since we are assuming that the base vertex \(\hat{\mathbf{x}}_{0}^{n,a}\) of the lifted width coordinate function \(\mathfrak{w}_{n}^{\dagger}\) belongs to the set \(\widehat{\mathcal{P}}_{a}^{\dagger,n}\), then, thanks to Lemma 3.9, it holds that \[\big{|}\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger,\mathrm{rand}}(\mathbf{x}))\big{|}\leq\mathfrak{n}_{n},\quad\forall\mathbf{x}\in\mathcal{P}_{a}^{\dagger,n},\] uniformly over all possible realizations of \(\hat{\mathcal{S}}_{n}^{\dagger,\mathrm{rand}}(\mathbf{x})\). In particular, since the embedded random walk \(\mathbf{X}^{n,\mu_{a}}\) is started from a point in the set \(\mathcal{P}_{a}^{\dagger,n}\), it follows that \[\big{|}\mathrm{Re}(\hat{\mathbf{X}}_{0}^{n,\mu_{a}})\big{|}\leq\mathfrak{n}_{n}, \tag{4.12}\] uniformly over all possible realizations of \(\hat{\mathbf{X}}_{0}^{n,\mu_{a}}\). Now, we would like to apply Lemma 3.17 in order to conclude. However, we notice that we cannot directly apply such a result since, a priori, it does not hold that \(\cup_{x\in\mathcal{V}\mathcal{G}^{n}}\mathfrak{h}_{n}^{-1}(\mathfrak{h}_{n}(x))\subseteq\mathcal{V}\mathcal{G}^{n}\). In order to overcome this issue, we could consider the weighted graph associated to \((\mathcal{G}^{n},c^{n})\) and \(\cup_{x\in\mathcal{V}\mathcal{G}^{n}}\mathfrak{h}_{n}^{-1}(\mathfrak{h}_{n}(x))\) as specified in Definition 3.1. We could then apply Lemma 3.17 to the random walk on this new weighted graph and then transfer the result to the original weighted graph by means of Lemma 3.4. In order to lighten the proof, we will assume directly that \(\cup_{x\in\mathcal{V}\mathcal{G}^{n}}\mathfrak{h}_{n}^{-1}(\mathfrak{h}_{n}(x))=\mathcal{V}\mathcal{G}^{n}\).
Since the stopping time \(\vartheta_{+}\) is almost surely finite, we can proceed as follows \[\begin{split}&\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,\vartheta_{+}]})\mid\vartheta_{+}<\vartheta_{-}\big{]}\\ &\qquad=\frac{1}{\mathbb{P}_{\mu_{a}}(\vartheta_{+}<\vartheta_{-})}\mathbb{E}_{\mu_{a}}\left[\sum_{N\in\mathbb{N}}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,\vartheta_{+}]})\mathbb{1}_{\{\vartheta_{+}<\vartheta_{-},\,\vartheta_{+}=N\}}\right]\\ &\qquad=\frac{1}{\mathbb{P}_{\mu_{a}}(\vartheta_{+}<\vartheta_{-})}\sum_{N\in\mathbb{N}}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,\vartheta_{+}]})\mid\vartheta_{-}>N,\vartheta_{+}=N\big{]}\mathbb{P}_{\mu_{a}}(\vartheta_{-}>N,\vartheta_{+}=N).\end{split} \tag{4.13}\] In order to exchange the sum and the expectation in the above expression, we used the fact that \[\mathbb{E}\big{[}\big{|}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n,\mu_{a}}|_{[0,\vartheta_{+}]})\big{|}\big{]}<\infty, \tag{4.14}\] and Fubini's theorem. The bound (4.14) is an immediate consequence of Lemmas 3.6 and 4.12. In order to conclude, it is sufficient to prove that the expectation in each summand of the sum appearing in (4.13) is equal to zero. To this end, fix \(N\in\mathbb{N}\) and consider a sequence of admissible height coordinates \(\{a_{m}\}_{m\in[N]_{0}}\subset(0,1)\) for the random walk \(X^{n,\mu_{a}}\), as specified in Definition 3.13, such that \(a_{m}>\underline{a}\) for all \(m\in[N]_{0}\) and \(a_{N}=\overline{a}\). Thanks to Lemma 3.17, we have that \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,N]})\mid\{\mathfrak{h}_{n}(X_{m})=a_{m}\}_{m=1}^{N}\big{]}=0,\] uniformly over all such sequences of height coordinates. Hence, this is sufficient to conclude that \[\mathbb{E}_{\mu_{a}}\big{[}\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,\vartheta_{+}]})\mid\vartheta_{-}>N,\vartheta_{+}=N\big{]}=0. \tag{4.15}\] Therefore, the conclusion follows thanks to (4.12), and to the fact that (4.13) and (4.15) imply that \(\mathbb{E}_{\mu_{a}}[\mathrm{wind}_{\mathfrak{n}_{n}}(\hat{\mathbf{X}}^{n}|_{[0,\vartheta_{+}]})\mid\vartheta_{+}<\vartheta_{-}]=0\). Finally, we observe that the same argument also applies with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged.

In order to prove Lemma 4.9, we basically need to prove that the difference between the value of the width coordinate in the set \(\widehat{\mathcal{P}}_{\overline{a}}^{\dagger,n}\) and in the set \(\widehat{\mathcal{P}}_{\underline{a}}^{\dagger,n}\) is of order \(\mathfrak{n}_{n}\). This fact is the content of Lemma 4.15 below. However, in order to prove this fact, we first need to prove that it is extremely unlikely for the random walk \(\mathbf{X}^{n,\mu_{a}}\) to travel a large horizontal distance in a "narrow horizontal tube". We refer to Figure 8 for a diagram illustrating the various objects involved in the proof of Lemma 4.14.
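The "narrow horizontal tube" heuristic can be probed numerically. The sketch below is purely illustrative (the band half-width, the horizontal distances, and the trial count are arbitrary choices of ours, and a planar simple random walk stands in for the general walks considered here): it estimates the probability that the walk covers horizontal distance \(M\) before leaving a band of half-width \(\varepsilon\), and prints \(-\log(p)/M\), which should be roughly constant in \(M\), consistent with an exponential decay rate of order \(M/\varepsilon\) as claimed in Lemma 4.14.

```python
import numpy as np

# Illustrative Monte Carlo for the "narrow tube" heuristic of Lemma 4.14.
rng = np.random.default_rng(1)
DX, DY = (1, -1, 0, 0), (0, 0, 1, -1)   # simple random walk steps

def prob_traverse(M, eps, trials=20000):
    """P(walk covers horizontal distance M before leaving |y| <= eps)."""
    success = 0
    for _ in range(trials):
        x = y = 0
        while abs(y) <= eps and abs(x) < M:
            s = rng.integers(4)
            x += DX[s]
            y += DY[s]
        success += (abs(x) >= M)
    return success / trials

for M in (10, 20, 30):
    p = prob_traverse(M, eps=4)
    print(M, p, -np.log(max(p, 1e-12)) / M)   # last column roughly constant
```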
**Lemma 4.14**.: _Fix \(\varepsilon\in(0,1)\) and, for \(M\in\mathbb{N}\), define the following event_ \[\mathrm{A}_{M,\varepsilon}^{n,+}:=\big{\{}\exists\,s,t\in[0,\vartheta_{+}]\,:\,\big{|}\mathrm{Re}(\mathbf{X}_{t}^{n,\mu_{a}})-\mathrm{Re}(\mathbf{X}_{s}^{n,\mu_{a}})\big{|}>M;\;\big{|}\mathrm{Im}(\mathbf{X}_{u}^{n,\mu_{a}})\big{|}\in[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon],\;\forall u\in[s,t]\big{\}}.\] _There exists a universal constant \(C>0\) such that, for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that_ \[\mathbb{P}_{\mu_{a}}\big{(}\mathrm{A}_{M,\varepsilon}^{n,+}\mid\vartheta_{+}<\vartheta_{-}\big{)}\preceq\exp(-CM/\varepsilon),\quad\forall M\in\mathbb{N},\] _where the implicit constant is independent of everything else. Furthermore, the same conclusion holds with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged._

Proof.: Fix \(\varepsilon\in(0,1)\). We start by recalling that, as observed in Remark 4.10, for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that \[(\mathfrak{h}_{n}^{\dagger})^{-1}(\overline{a})\subset\mathbb{R}\times[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon]. \tag{4.16}\] It is easy to see that, without any loss of generality, we can assume that \(\mathcal{V}\mathcal{G}^{n}\) contains all the points in the set \[\big{\{}x\in\mathbb{G}^{n}\,:\,\operatorname{Im}(x)=R^{\prime}-\varepsilon\big{\}}.\] Therefore, we can define the stopping time \(\rho_{0}:=\inf\bigl{\{}t\geq 0\,:\,\operatorname{Im}(\mathbf{X}_{t}^{n,\mu_{a}})=R^{\prime}-\varepsilon\bigr{\}}\), and for \(k\in\mathbb{N}_{0}\) we define inductively \[\widetilde{\rho}_{k+1}:=\inf\bigl{\{}t\geq\rho_{k}\,:\,\bigl{|}\operatorname{Re}(\mathbf{X}_{t}^{n,\mu_{a}})-\operatorname{Re}(\mathbf{X}_{\rho_{k}}^{n,\mu_{a}})\bigr{|}>M\bigr{\}},\quad\rho_{k+1}:=\inf\bigl{\{}t\geq\widetilde{\rho}_{k+1}\,:\,\operatorname{Im}(\mathbf{X}_{t}^{n,\mu_{a}})=R^{\prime}-\varepsilon\bigr{\}}.\] Moreover, for \(k\in\mathbb{N}\) we consider the events \[\operatorname{A}_{M,\varepsilon}^{n,+}(\rho_{k-1},\widetilde{\rho}_{k}):=\bigl{\{}\operatorname{Im}(\mathbf{X}^{n,\mu_{a}}|_{[\rho_{k-1},\widetilde{\rho}_{k}]})\subset[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon]\bigr{\}},\quad F_{k}:=\bigl{\{}\rho_{k}>\vartheta_{+}\bigr{\}}.\] Let \(K\) be the smallest \(k\in\mathbb{N}\) such that \(F_{k}\) occurs. Then, we have that \[\begin{split}\mathbb{P}_{\mu_{a}}\bigl{(}\operatorname{A}_{M,\varepsilon}^{n,+}\mid\vartheta_{+}<\vartheta_{-}\bigr{)}&\leq\mathbb{P}_{\mu_{a}}\left(\bigcup_{i=1}^{K}\operatorname{A}_{M,\varepsilon}^{n,+}(\rho_{i-1},\widetilde{\rho}_{i})\;\Big{|}\;\vartheta_{+}<\vartheta_{-}\right)\\ &\leq\sum_{k\in\mathbb{N}}\sum_{i=1}^{k}\mathbb{P}_{\mu_{a}}\left(\operatorname{A}_{M,\varepsilon}^{n,+}(\rho_{i-1},\widetilde{\rho}_{i})\;|\;\vartheta_{+}<\vartheta_{-}\right)\mathbb{P}_{\mu_{a}}\bigl{(}K=k\;|\;\vartheta_{+}<\vartheta_{-}\bigr{)}.\end{split} \tag{4.17}\] Thanks to the strong Markov property of the random walk, the events \(\{\operatorname{A}_{M,\varepsilon}^{n,+}(\rho_{k-1},\widetilde{\rho}_{k})\}_{k\in\mathbb{N}}\) are conditionally independent and identically distributed given \(\vartheta_{+}<\vartheta_{-}\).
Now, thanks to assumption (H2) and well-known properties of Brownian motion, it is possible to prove that there exists a universal constant \(C>0\) such that, for any \(n\in\mathbb{N}\) large enough, it holds that \[\mathbb{P}_{\mu_{a}}\bigl{(}\operatorname{A}_{M,\varepsilon}^{n,+}(\rho_{0},\widetilde{\rho}_{1})\;|\;\vartheta_{+}<\vartheta_{-}\bigr{)}\preceq\exp\bigl{(}-CM/\varepsilon\bigr{)}, \tag{4.18}\] where the implicit constant is independent of everything else. More precisely, in order to obtain the above upper bound, it is sufficient to study the probability that the random walk travels horizontal distance \(M\) before exiting an infinite horizontal band of height of order \(\varepsilon\). This can be done by proceeding similarly to the proof of Lemma 4.11, and so we do not detail the argument here. Now, for \(k\in\mathbb{N}\), we consider the event \[\operatorname{B}_{M}^{n,+}(\rho_{k-1},\widetilde{\rho}_{k}):=\bigl{\{}\operatorname{Im}(\mathbf{X}^{n,\mu_{a}}|_{[\rho_{k-1},\widetilde{\rho}_{k}]})\subset[-R^{\prime}-1,R^{\prime}+1]\bigr{\}}.\] For the same reason explained above, we have that the events \(\{\mathrm{B}_{M}^{n,+}(\rho_{k-1},\widetilde{\rho}_{k})\}_{k\in\mathbb{N}}\) are conditionally independent and identically distributed given \(\vartheta_{+}<\vartheta_{-}\). Also, using again assumption (H2) and a standard gambler's ruin estimate for Brownian motion, one can prove that \[\mathbb{P}_{\mu_{a}}\big{(}\mathrm{B}_{M}^{n,+}(\rho_{0},\widetilde{\rho}_{1})\mid\vartheta_{+}<\vartheta_{-}\big{)}\preceq(M+1)^{-1}, \tag{4.19}\] where the implicit constant is independent of everything else. In particular, thanks to (4.16), for any \(k\in\mathbb{N}\), it holds that \[\mathbb{P}_{\mu_{a}}\big{(}K=k\mid\vartheta_{+}<\vartheta_{-}\big{)}\leq\mathbb{P}_{\mu_{a}}\left(\bigcap_{i=1}^{k-1}\mathrm{B}_{M}^{n,+}(\rho_{i-1},\widetilde{\rho}_{i})\mid\vartheta_{+}<\vartheta_{-}\right)=\prod_{i=1}^{k-1}\mathbb{P}_{\mu_{a}}\big{(}\mathrm{B}_{M}^{n,+}(\rho_{0},\widetilde{\rho}_{1})\mid\vartheta_{+}<\vartheta_{-}\big{)}\preceq(M+1)^{-(k-1)}.\] Therefore, putting together (4.17), (4.18) and (4.19), we obtain that there exists a universal constant \(C>0\) such that \[\mathbb{P}_{\mu_{a}}\big{(}\mathrm{A}_{M,\varepsilon}^{n,+}\mid\vartheta_{+}<\vartheta_{-}\big{)}\preceq\exp\bigl{(}-CM/\varepsilon\bigr{)}\sum_{k\in\mathbb{N}}k(M+1)^{-(k-1)}\preceq\exp\bigl{(}-CM/\varepsilon\bigr{)}.\] Finally, we observe that the result with the role of \(\vartheta_{+}\) and \(\vartheta_{-}\) interchanged can be obtained in the same way. Therefore, this concludes the proof.

We are now ready to prove that the difference between the value of the width coordinate in the set \(\widehat{\mathcal{P}}_{\overline{a}}^{\dagger,n}\) and in the set \(\widehat{\mathcal{P}}_{\underline{a}}^{\dagger,n}\) is of order \(\mathfrak{n}_{n}\). We refer to Figure 9 for a diagram illustrating the various objects involved in the proof of Lemma 4.15.
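For comparison with the Brownian identity (4.10) that drives Lemma 4.12, one can also check the zero conditional mean directly for a lattice walk. In the sketch below (again purely illustrative; the strip half-height and the number of trials are arbitrary, and the simple random walk stands in for the general walks of assumption (H2)), the walk starts on the middle level of a horizontal strip, and the horizontal displacement at the exit time is averaged over the trials in which the walk exits through the top; by symmetry this conditional mean should vanish up to Monte Carlo error.

```python
import numpy as np

# Illustrative check of the identity (4.10): conditional on exiting the
# strip {|y| < R} through the top, the horizontal displacement at the
# exit time has mean ~ 0 by symmetry.
rng = np.random.default_rng(2)
R, trials = 10, 8000
DX, DY = (1, -1, 0, 0), (0, 0, 1, -1)
disp, hit_top = [], []
for _ in range(trials):
    x = y = 0
    while abs(y) < R:
        s = rng.integers(4)
        x += DX[s]
        y += DY[s]
    disp.append(x)
    hit_top.append(y == R)
disp, hit_top = np.array(disp), np.array(hit_top)
mean = disp[hit_top].mean()
stderr = disp[hit_top].std() / np.sqrt(hit_top.sum())
print(mean, stderr)   # mean should be within a few standard errors of 0
```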
**Lemma 4.15**.: _For any \(n>n(R,\delta)\) large enough there exists a finite constant \(b^{\prime}_{n}\in\mathbb{R}\) such that_ \[\big{|}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{x}})+b^{\prime}_{n}\big{|}\preceq\mathfrak{n}_{n},\quad\forall\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}_{\overline{a}}^{\dagger,n}\cup\widehat{\mathcal{P}}_{\underline{a}}^{\dagger,n},\] _where the implicit constant is independent of everything else._

Proof.: We start by letting \(\vartheta:=\vartheta_{+}\wedge\vartheta_{-}\) and defining \[b^{\prime}_{n}:=\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\mathbf{X}^{n}_{0})-\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta})\big{]}.\] We observe that \(b^{\prime}_{n}\) is not of constant order in general. Indeed, this is due to the fact that \(\mathrm{Re}(\mathbf{X}^{n,\mu_{a}}_{0})\) can be far from being of order one, since the starting point of the random walk \(\mathbf{X}^{n,\mu_{a}}\) is sampled from the set \(\mathcal{P}^{\dagger,n}_{a}\), over which we do not have any a priori control. We will only prove the result in the case \(\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}^{\dagger,n}_{\overline{a}}\) since the result for \(\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}^{\dagger,n}_{\underline{a}}\) can be obtained similarly. In particular, we split the proof into two steps.

**Step 1:** In this first step, we claim that for any \(n>n(R,\delta)\) large enough it holds that \[\big{|}\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\mathbf{x}))+b^{\prime}_{n}\big{|}\preceq\mathfrak{n}_{n},\quad\forall\mathbf{x}\in\mathcal{P}^{\dagger,n}_{\overline{a}},\] uniformly over all possible realizations of \(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\mathbf{x})\), and where the implicit constant does not depend on anything else. To this end, we consider the vertex \(\widetilde{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}}\in\mathcal{P}^{\dagger,n}_{\overline{a}}\) such that \(\sigma_{2\pi}(\widetilde{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}})=\sigma_{2\pi}(\mathbf{X}^{n,\mu_{a}}_{\vartheta_{+}})\). Now, applying Lemma 3.6, we see that \[\left|\frac{\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\widetilde{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}}))-\mathrm{Re}(\hat{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}})}{\mathfrak{n}_{n}}-\frac{\mathrm{Re}(\widetilde{\mathbf{X}}^{n,\mu_{a}}_{\vartheta_{+}})-\mathrm{Re}(\mathbf{X}^{n,\mu_{a}}_{\vartheta_{+}})}{2\pi}\right|\leq 1.
\tag{4.20}\] Therefore, rearranging the various terms in the previous inequality and then taking the conditional expectation given \(\vartheta_{+}<\vartheta_{-}\), we get that \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}}))\mid\vartheta_{+}<\vartheta_{-}\big{]}+b^{\prime}_{n}\big{|}\preceq\mathfrak{n}_{n}+\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\hat{\mathbf{X}}^{n}_{\vartheta_{+}})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|} \tag{4.21}\] \[+\mathfrak{n}_{n}\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}})-\mathrm{Re}(\mathbf{X}^{n}_{0})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\] \[+\mathfrak{n}_{n}\big{|}b^{\prime}_{n}+\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}})-\mathrm{Re}(\mathbf{X}^{n}_{0})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}.\] Thanks to Lemmas 4.13 and 4.12, we have that the following inequalities hold for any \(n>n(R,\delta)\) large enough \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\hat{\mathbf{X}}^{n}_{\vartheta_{+}})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\leq\mathfrak{n}_{n},\qquad\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\mathbf{X}^{n}_{\vartheta_{+}})-\mathrm{Re}(\mathbf{X}^{n}_{0})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\preceq 1. \tag{4.22}\] Therefore, the claim follows if we prove that the term in the third line of (4.21) is of order \(\mathfrak{n}_{n}\). More precisely, thanks to the definition of \(b^{\prime}_{n}\), it is sufficient to prove that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta})\big{]}-\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}\preceq 1.\] Since, for any \(n>n(R,\delta)\) large enough, \(\mathbb{P}_{\mu_{a}}(\vartheta_{+}<\vartheta_{-})\) is of constant order, this is equivalent to proving that \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}})\mid\vartheta_{+}<\vartheta_{-}\big{]}\big{|}+\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{-}})\mid\vartheta_{-}<\vartheta_{+}\big{]}\big{|}\preceq 1,\] where the implicit constant must be independent of everything else. We note that this inequality follows easily from Lemma 4.14 (see Figure 8). Hence, putting everything together and going back to (4.21), we obtain that \[\big{|}\mathbb{E}_{\mu_{a}}\big{[}\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\widetilde{\mathbf{X}}^{n}_{\vartheta_{+}}))\mid\vartheta_{+}<\vartheta_{-}\big{]}+b^{\prime}_{n}\big{|}\preceq\mathfrak{n}_{n},\] where the implicit constant is independent of everything else. Therefore, the desired claim follows from the previous inequality and from the fact that, thanks to Lemma 3.9, it holds that \[\big{|}\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\mathbf{x}_{1}))-\mathrm{Re}(\hat{\mathcal{S}}^{\dagger,\mathrm{rand}}_{n}(\mathbf{x}_{2}))\big{|}\leq 2\mathfrak{n}_{n},\quad\forall\mathbf{x}_{1},\mathbf{x}_{2}\in\mathcal{P}^{\dagger,n}_{\overline{a}},\] uniformly over all possible realizations.

**Step 2:** In this step, we prove the result in the lemma statement.
To this end, we fix \(\widehat{\mathbf{x}}\in\widehat{\mathcal{P}}^{\dagger,n}_{\overline{a}}\) and we consider the dual edges \(\widehat{\mathbf{e}}_{1}\), \(\widehat{\mathbf{e}}_{2}\in\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\overline{a}}\) such that \(\widehat{\mathbf{e}}^{+}_{1}=\widehat{\mathbf{x}}=\widehat{\mathbf{e}}^{-}_{2}\). Furthermore, we let \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\in\mathcal{E}\mathcal{G}^{\dagger,n}_{\overline{a}}\) be the corresponding primal edges, and we let \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\in\mathcal{VG}^{\dagger,n}\) be the endpoints of \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) in the set \((\mathfrak{h}^{\dagger}_{n})^{-1}(\overline{a})\) (see Figure 9). At this point, we need to split the proof into two different cases:

* If \(\mathbf{x}_{1}\neq\mathbf{x}_{2}\), then, by construction of the Smith diagram, it follows that \[\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{x}})\in\big{[}\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger,\mathrm{rand}}(\mathbf{x}_{1})),\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger,\mathrm{rand}}(\mathbf{x}_{2}))\big{]},\] uniformly over all possible realizations.
* If \(\mathbf{x}_{1}=\mathbf{x}_{2}\), then it holds that \[\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{x}})\in\big{[}\min\big{\{}\mathrm{Re}(\mathbf{v})\,:\mathbf{v}\in\mathcal{S}_{n}^{\dagger}(\mathbf{x}_{1})\big{\}},\max\{\mathrm{Re}(\mathbf{v})\,:\mathbf{v}\in\mathcal{S}_{n}^{\dagger}(\mathbf{x}_{1})\}\big{]},\] where we recall that \(\mathcal{S}_{n}^{\dagger}(\mathbf{x}_{1})\) denotes the horizontal segment associated to \(\mathbf{x}_{1}\) by the lifted tiling map.

In both cases, if \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\in\mathcal{P}_{\overline{a}}^{\dagger,n}\), then the conclusion follows from the previous step. However, in general, it could be that \(\mathbf{x}_{1}\in\mathcal{P}_{\overline{a}}^{\dagger,n}-(2\pi,0)\) or \(\mathbf{x}_{2}\in\mathcal{P}_{\overline{a}}^{\dagger,n}+(2\pi,0)\). In both these cases, we cannot directly appeal to the previous step to conclude. Nevertheless, a simple application of Lemma 3.6 implies that the same result of the first step holds also for the vertices in \(\mathcal{P}_{\overline{a}}^{\dagger,n}-(2\pi,0)\) and in \(\mathcal{P}_{\overline{a}}^{\dagger,n}+(2\pi,0)\). Therefore, this concludes the proof.

Before proceeding with the proof of Lemma 4.9, we need to state a lemma which is the dual counterpart of Lemma 4.14.

**Lemma 4.16**.: _Fix \(\varepsilon\in(0,1)\), \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\), \(M\in\mathbb{N}\), and define the following event_ \[\widehat{\mathbf{A}}_{M,\varepsilon}^{n,\widehat{\mathbf{x}}}:=\big{\{}\exists\,s,t\in[0,\tau_{\widehat{\mathbf{x}}}]\,:\,\big{|}\mathrm{Re}(\widehat{\mathbf{X}}_{t}^{n,\widehat{\mathbf{x}}})-\mathrm{Re}(\widehat{\mathbf{X}}_{s}^{n,\widehat{\mathbf{x}}})\big{|}>M;\,\big{|}\mathrm{Im}(\widehat{\mathbf{X}}_{u}^{n,\widehat{\mathbf{x}}})\big{|}\in[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon],\;\forall u\in[s,t]\big{\}}.\] _There exists a universal constant \(C>0\) such that, for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that_ \[\mathbb{P}_{\widehat{\mathbf{x}}}\big{(}\widehat{\mathbf{A}}_{M,\varepsilon}^{n,\widehat{\mathbf{x}}}\big{)}\preceq\exp(-CM/\varepsilon),\quad\forall M\in\mathbb{N},\] _where the implicit constant is independent of everything else._

Proof.: The proof of this lemma can be done by employing an argument similar to that used in the proof of Lemma 4.14.
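Before assembling these estimates, we recall that the coupling step in (4.5), and again in (4.26) below, rests on the elementary inequality \(|\mathbb{E}[f(X)]-\mathbb{E}[f(Y)]|\leq\sup f\cdot\mathrm{d}^{\mathrm{TV}}(X,Y)\), valid for any nonnegative bounded \(f\). A short numerical check of this inequality, with randomly generated laws and test function (all arbitrary, purely for illustration), reads as follows.

```python
import numpy as np

# Numerical check of |E[f(X)] - E[f(Y)]| <= sup f * d_TV(X, Y) for a
# [0,1]-valued f: the coupling inequality used in (4.5) and (4.26).
rng = np.random.default_rng(3)
k = 10
p = rng.dirichlet(np.ones(k))    # law of X on {0, ..., k-1}
q = rng.dirichlet(np.ones(k))    # law of Y
f = rng.uniform(0.0, 1.0, k)     # bounded nonnegative test function

d_tv = 0.5 * np.abs(p - q).sum()         # total variation distance
lhs = abs(p @ f - q @ f)
rhs = f.max() * d_tv
print(lhs, rhs, bool(lhs <= rhs + 1e-12))   # inequality always holds
```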
We are now ready to prove Lemma 4.9, which is now an immediate consequence of the results proved above.

Proof of Lemma 4.9.: The proof basically consists of putting together some of the previous results. More precisely, fix \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\) and let \(\widetilde{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n,\widehat{\mathbf{x}}}\in\widehat{\mathcal{P}}_{\overline{a}}^{\dagger,n}\cup\widehat{\mathcal{P}}_{\underline{a}}^{\dagger,n}\) be such that \(\sigma_{2\pi}(\widetilde{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n,\widehat{\mathbf{x}}})=\sigma_{2\pi}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n,\widehat{\mathbf{x}}})\). Then, by Lemma 2.9, we have that \[\mathbb{E}_{\widehat{\mathbf{x}}}\left[\frac{\mathrm{Re}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})}{2\pi}-\frac{\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})+b_{n}^{\prime}}{\mathfrak{n}_{n}}\right]=\mathbb{E}_{\widehat{\mathbf{x}}}\left[\frac{\mathrm{Re}(\widetilde{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})}{2\pi}-\frac{\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})+b_{n}^{\prime}}{\mathfrak{n}_{n}}\right].\] Therefore, the result follows thanks to Lemma 4.15 if we show that also \(\big{|}\mathbb{E}_{\widehat{\mathbf{x}}}[\mathrm{Re}(\widetilde{\mathbf{X}}_{\tau_{\widehat{\mathbf{x}}}}^{n})]\big{|}\preceq 1\). We note that this fact is easily implied by Lemma 4.16, and so the proof is completed.

#### 4.2.3 Proof of Proposition 4.8

In what follows, we will make use of some notation introduced in the preceding subsection. In particular, we will consider the stopping times \(\sigma_{\widehat{\mathbf{x}}}\), \(\tau_{\widehat{\mathbf{x}}}\) defined in (4.7), and the quantities defined in the introduction of the preceding subsection. We also adopt the same notational conventions of the previous subsection in order to lighten the notation.

Proof of Proposition 4.8.: For \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R)\), let \(\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}\) be the random walk on the lifted weighted dual graph \((\widehat{\mathcal{G}}^{\dagger,n},\widehat{c}^{\dagger,n})\) started from \(\widehat{\mathbf{x}}\). Moreover, let \(B^{\widehat{\mathbf{x}}}\) be a planar Brownian motion started from \(\widehat{\mathbf{x}}\), and define the stopping time \[\tau_{B,\widehat{\mathbf{x}}}:=\inf\bigl{\{}t\geq 0\,:\,|\,\mathrm{Im}(B^{\widehat{\mathbf{x}}}_{t})|=R^{\prime}\bigr{\}}. \tag{4.23}\] We divide the proof into several steps.

**Step 1.** In this first step, we show that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathbb{E}_{\widehat{\mathbf{x}}}\big{[}\mathrm{Re}(\widehat{\mathbf{X}}^{n}_{\tau_{\widehat{\mathbf{x}}}})\big{]}-\mathrm{Re}(\widehat{\mathbf{x}})\big{|}\leq\delta, \tag{4.24}\] where we recall that the stopping time \(\tau_{\widehat{\mathbf{x}}}\) is defined in (4.7). As we will see below, this is an easy consequence of assumption (H3).
We start by observing that, thanks to well-known properties of Brownian motion, it holds that \[\big{|}\mathbb{E}_{\widehat{\mathbf{x}}}\big{[}\mathrm{Re}(B_{\tau_{B,\widehat{\mathbf{x}}}})\big{]}-\mathrm{Re}(\widehat{\mathbf{x}})\big{|}=0,\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R).\] Indeed, this follows from the fact that \(|\mathrm{Re}(B_{\tau_{B,\widehat{\mathbf{x}}}})-\mathrm{Re}(\widehat{\mathbf{x}})|\) has exponentially decaying tails and from the optional stopping theorem. As we observed in Remark 4.10, we have that, for any \(\varepsilon\in(0,1)\) and for any \(n>n(R,\delta,\varepsilon)\) large enough, it holds that \[\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\overline{a}}\subset\mathbb{R}\times[R^{\prime}-\varepsilon,R^{\prime}+\varepsilon],\qquad\mathcal{E}\widehat{\mathcal{G}}^{\dagger,n}_{\underline{a}}\subset\mathbb{R}\times[-R^{\prime}-\varepsilon,-R^{\prime}+\varepsilon].\] Therefore, from this fact and assumption (H3), we can deduce that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathbb{E}_{\widehat{\mathbf{x}}}\big{[}\mathrm{Re}(B_{\tau_{B,\widehat{\mathbf{x}}}})\big{]}-\mathbb{E}_{\widehat{\mathbf{x}}}\big{[}\mathrm{Re}(\widehat{\mathbf{X}}^{n}_{\tau_{\widehat{\mathbf{x}}}})\big{]}\big{|}\leq\delta.\] More precisely, this fact can be obtained from assumption (H3) by arguing in the same exact way as in the proof of Lemma 4.12. Hence, the desired result follows.

**Step 2.** The main goal of this step is to prove that, for any \(n>n(R,\delta)\) large enough, it holds that \[\mathbb{E}_{\widehat{\mathbf{x}}}\big{[}\mathfrak{w}^{\dagger}_{n}\big{(}\widehat{\mathbf{X}}^{n}_{\tau_{\widehat{\mathbf{x}}}}\big{)}\big{]}=\mathfrak{w}^{\dagger}_{n}(\widehat{\mathbf{x}}),\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R). \tag{4.25}\] We start by recalling that the function \(\mathfrak{w}^{\dagger}_{n}:\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}\to\mathbb{R}\) is harmonic, and so the process \(\mathfrak{w}^{\dagger}_{n}(\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}})\) is a discrete martingale with respect to the filtration generated by \(\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}\). Therefore, if we prove that such a martingale is uniformly integrable, then the claim follows from the optional stopping theorem. To this end, it is sufficient to prove that, for \(M\in\mathbb{N}\), the probability of the event \(|\mathfrak{w}^{\dagger}_{n}(\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}_{\tau_{\widehat{\mathbf{x}}}})-\mathfrak{w}^{\dagger}_{n}(\widehat{\mathbf{x}})|>MR^{\prime}\) decays exponentially fast in \(M\), uniformly in \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R)\) and for all \(n>n(R,\delta)\) large enough. This fact can be obtained from assumption (H3) by arguing in the same exact way as in the proof of Lemma 4.11. Hence, the desired result follows.
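As an aside, the optional stopping identity (4.25) can be illustrated in a one-dimensional toy model: for a walk on a weighted path graph, the function whose increments are inversely proportional to the conductances is harmonic, hence a martingale along the walk, and its value at the exit time averages back to its value at the start. In the sketch below, the conductances, the path length, and the starting point are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative toy version of the optional stopping step (4.25): on a
# weighted path graph {0, ..., L}, the harmonic function w (increments
# proportional to 1/conductance) satisfies E[w(X at exit)] = w(start).
rng = np.random.default_rng(4)
L = 30
c = rng.uniform(0.5, 2.0, size=L)        # conductance of edge {i, i+1}
inc = (1.0 / c) / (1.0 / c).sum()        # harmonic increments, w(0) = 0, w(L) = 1
w = np.concatenate([[0.0], np.cumsum(inc)])

x0, vals = 12, []
for _ in range(10000):
    i = x0
    while 0 < i < L:
        p_right = c[i] / (c[i] + c[i - 1])   # conductance-walk transition
        i += 1 if rng.random() < p_right else -1
    vals.append(w[i])
print(w[x0], np.mean(vals))   # should agree up to Monte Carlo error
```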
**Step 3.** Consider the function \(\mathfrak{f}^{\dagger}_{n}:\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\to\mathbb{R}\) defined as follows \[\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{x}}):=\mathbb{E}_{\widehat{\mathbf{x}}}\left[\frac{\mathrm{Re}(\widehat{\mathbf{X}}^{n}_{\tau_{\widehat{\mathbf{x}}}})}{2\pi}-\frac{\mathfrak{w}^{\dagger}_{n}(\widehat{\mathbf{X}}^{n}_{\tau_{\widehat{\mathbf{x}}}})+b^{\prime}_{n}}{\mathfrak{n}_{n}}\right],\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1),\] where \(b^{\prime}_{n}\) is the same constant appearing in the statement of Lemma 4.9. Now, recalling the definition (4.7) of the stopping time \(\sigma_{\widehat{\mathbf{x}}}\) and that \(\sigma_{\widehat{\mathbf{x}}}\leq\tau_{\widehat{\mathbf{x}}}\), thanks to the strong Markov property of the random walk, for all \(\widehat{\mathbf{x}}\), \(\widehat{\mathbf{y}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R)\), it holds that \[\big{|}\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{x}})-\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{y}})\big{|}\leq\sup\bigl{\{}\big{|}\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{v}})\big{|}\,:\,\widehat{\mathbf{v}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\bigr{\}}\,\mathrm{d}^{\mathrm{TV}}\big{(}\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}_{\sigma_{\widehat{\mathbf{x}}}},\widehat{\mathbf{X}}^{n,\widehat{\mathbf{y}}}_{\sigma_{\widehat{\mathbf{y}}}}\big{)}, \tag{4.26}\] where \(\mathrm{d}^{\mathrm{TV}}\) denotes the total variation distance. Hence, it is sufficient to find an upper bound for the two factors on the right-hand side of (4.26). We treat the two factors separately.

* In order to bound the first factor, we just need to bound, uniformly on \(\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\), the quantity \(|\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{v}})|\). This is exactly the content of Lemma 4.9, from which we can deduce that, for all \(n>n(R,\delta)\) large enough, it holds that \[\sup\bigl{\{}\big{|}\mathfrak{f}^{\dagger}_{n}(\widehat{\mathbf{v}})\big{|}\,:\,\widehat{\mathbf{v}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R^{\prime}-1)\bigr{\}}\preceq 1,\] where the implicit constant is independent of everything else.
* In order to bound the second factor, we can use Lemma 2.4. Indeed, as we have already remarked, thanks to Proposition 4.3 and to Lemma 4.2, for any \(n\in\mathbb{N}\) large enough, it holds that \[\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}_{\overline{a}}\subset\mathbb{R}\times\big{[}R^{\prime}-1,R^{\prime}+1\big{]},\qquad\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}_{\underline{a}}\subset\mathbb{R}\times\big{[}-R^{\prime}-1,-R^{\prime}+1\big{]}.\] Therefore, it is sufficient to estimate the probability that \(\sigma_{2\pi}(\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}|_{[0,\sigma_{\widehat{\mathbf{x}}}]})\) disconnects \(\widehat{V}_{R}\cup\widehat{V}_{-R}\) from \(\widehat{V}_{R^{\prime}-1}\cup\widehat{V}_{-R^{\prime}+1}\).
To this end, one can argue in the same exact way as in the proof of Lemma 4.7 in order to show that, for any \(n>n(R,\delta)\) large enough and for all \(\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R)\), it holds that \[\mathbb{P}_{\widehat{\mathbf{x}}}\big{(}\sigma_{2\pi}(\widehat{\mathbf{X}}^{n}|_{[0,\sigma_{\widehat{\mathbf{x}}}]})\text{ does not disconnect }\widehat{V}_{R}\cup\widehat{V}_{-R}\text{ from }\widehat{V}_{R^{\prime}-1}\cup\widehat{V}_{-R^{\prime}+1}\big{)}\preceq\frac{R}{R^{\prime}},\] where the implicit constant is independent of everything else. Therefore, this fact together with Lemma 2.4 imply that \[\mathrm{d}^{\mathrm{TV}}\big{(}\widehat{\mathbf{X}}^{n,\widehat{\mathbf{x}}}_{\sigma_{\widehat{\mathbf{x}}}},\widehat{\mathbf{X}}^{n,\widehat{\mathbf{y}}}_{\sigma_{\widehat{\mathbf{y}}}}\big{)}\preceq\frac{R}{R^{\prime}},\quad\forall\widehat{\mathbf{x}},\widehat{\mathbf{y}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R).\]

Therefore, putting together the two bullet points above, recalling that \(R^{\prime}=R/\delta\), and going back to (4.26), we find that, for any \(n>n(R,\delta)\) large enough, it holds that \[\big{|}\mathfrak{f}_{n}^{\dagger}(\widehat{\mathbf{x}})-\mathfrak{f}_{n}^{\dagger}(\widehat{\mathbf{y}})\big{|}\preceq\delta,\quad\forall\widehat{\mathbf{x}},\,\widehat{\mathbf{y}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R), \tag{4.27}\] where the implicit constant is independent of everything else.

**Step 4.** To conclude, for every \(n>n(R,\delta)\) large enough, fix an arbitrary vertex \(\widehat{\mathbf{y}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R)\). Then, thanks to (4.24), (4.25), and (4.27), we have that, for any \(n>n(R,\delta)\) large enough, it holds that \[\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{x}})+b_{n}^{R,\delta}-\mathrm{Re}(\widehat{\mathbf{x}})\bigg{|}\leq\delta,\quad\forall\widehat{\mathbf{x}}\in\mathcal{V}\widehat{\mathcal{G}}^{\dagger,n}(R),\] where \(b_{n}^{R,\delta}:=2\pi\mathfrak{f}_{n}^{\dagger}(\widehat{\mathbf{y}})+\frac{2\pi}{\mathfrak{n}_{n}}b_{n}^{\prime}\). Finally, in order to conclude, we need to remove the dependence of \(b_{n}^{R,\delta}\) on \(R\) and \(\delta\). This can be easily done by arguing in the same exact way as in the second step of the proof of Proposition 4.3. Therefore, the proof is concluded.

### Proof of Theorem 1.3

We are now ready to give a proof of the main theorem of this article. As we have already remarked, the proof of such a theorem is a consequence of Propositions 4.3 and 4.8.

Proof of Theorem 1.3.: Fix \(R>1\), \(\delta\in(0,1)\), and consider a point \(\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger,n}(R)\). We divide the proof into three main steps.

**Step 1.** By definition of the Smith embedding \(\hat{\mathcal{S}}_{n}^{\dagger}\), we have that \(\mathrm{Im}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))=\mathfrak{h}_{n}^{\dagger}(\mathbf{x})\).
Hence, from Proposition 4.3, we know that there exist two real sequences \(\{b_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\) and \(\{c_{n}^{\mathfrak{h}}\}_{n\in\mathbb{N}}\), independent of \(R\) and \(\delta\), such that, for \(n>n(R,\delta)\) large enough, it holds that \[\big{|}c_{n}^{\mathfrak{h}}\,\mathrm{Im}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))+b_{n}^{\mathfrak{h}}-\mathrm{Im}(\mathbf{x})\big{|}\leq\frac{\delta}{\sqrt{2}},\quad\forall\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger,n}(\mathbb{R}\times[-R,R]).\] **Step 2.** Let \(\mathcal{E}\mathcal{G}^{\dagger,n,\downarrow}(\mathbf{x})=\{\mathbf{e}_{1},\ldots,\mathbf{e}_{k}\}\) be the set of harmonically oriented edges in \(\mathcal{E}\mathcal{G}^{\dagger,n}\) with heads equal to \(\mathbf{x}\), ordered in such a way that \(\widehat{\mathbf{e}}_{1}\cdots\widehat{\mathbf{e}}_{k}\) forms a counter-clockwise oriented path in the lifted dual graph \(\widehat{\mathcal{G}}^{\dagger,n}\). Then, by construction of the Smith embedding \(\hat{\mathcal{S}}_{n}^{\dagger}\), we have that \[\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))\in\big{[}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-}),\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{k}^{+})\big{]}. \tag{4.28}\] Therefore, letting \(\{b_{n}^{\mathfrak{w}}\}_{n\in\mathbb{N}}\) be the sequence in the statement of Proposition 4.8, we have that \[\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\,\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\mathbf{x})\bigg{|}\leq\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-})+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\widehat{\mathbf{e}}_{1}^{-})\bigg{|}+\big{|}\mathrm{Re}(\widehat{\mathbf{e}}_{1}^{-})-\mathrm{Re}(\mathbf{x})\big{|}+\frac{2\pi}{\mathfrak{n}_{n}}\big{|}\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))-\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-})\big{|}.\] The first term on the right-hand side of the above expression is bounded by \(\delta/(5\sqrt{2})\) thanks to Proposition 4.8. The second term is also bounded by \(\delta/(5\sqrt{2})\) since Lemma 4.2 rules out the existence of macroscopic faces. Concerning the third term, recalling (4.28), we have that \[\frac{2\pi}{\mathfrak{n}_{n}}\big{|}\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))-\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-})\big{|}\leq\frac{2\pi}{\mathfrak{n}_{n}}\big{|}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{k}^{+})-\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-})\big{|}\leq\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{k}^{+})+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\widehat{\mathbf{e}}_{k}^{+})\bigg{|}+\bigg{|}\frac{2\pi}{\mathfrak{n}_{n}}\mathfrak{w}_{n}^{\dagger}(\widehat{\mathbf{e}}_{1}^{-})+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\widehat{\mathbf{e}}_{1}^{-})\bigg{|}+\big{|}\mathrm{Re}(\widehat{\mathbf{e}}_{k}^{+})-\mathrm{Re}(\widehat{\mathbf{e}}_{1}^{-})\big{|}.\] The first and second terms on the right-hand side of the above expression are bounded by \(\delta/(5\sqrt{2})\), thanks to Proposition 4.8. The third term is also bounded by \(\delta/(5\sqrt{2})\), for \(n>n(R,\delta)\), thanks, once again, to Lemma 4.2. We remark that all the previous bounds obviously hold only for \(n>n(R,\delta)\) large enough.
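For bookkeeping (our own tally), the five error contributions isolated in this step sum exactly to the claimed bound:
\[2\cdot\frac{\delta}{5\sqrt{2}}+3\cdot\frac{\delta}{5\sqrt{2}}=\frac{\delta}{\sqrt{2}},\]
with two contributions coming from the first display and three from the bound on its third term.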
Therefore, putting it all together, we obtain that, for all \(n>n(R,\delta)\) large enough, it holds that \[\left|\frac{2\pi}{\mathfrak{n}_{n}}\,\mathrm{Re}(\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}))+b_{n}^{\mathfrak{w}}-\mathrm{Re}(\mathbf{x})\right|\leq\frac{\delta}{\sqrt{2}},\quad\forall\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger,n}(\mathbb{R}\times[-R,R]).\] **Step 3.** For each \(n\in\mathbb{N}\), we define the affine transformation \(T_{n}^{\dagger}:\mathbb{R}\times[0,1]\to\mathbb{R}^{2}\) by letting \[\mathrm{Re}(T_{n}^{\dagger}\mathbf{x}):=\frac{2\pi}{\mathfrak{n}_{n}}\,\mathrm{Re}(\mathbf{x})+b_{n}^{\mathfrak{w}}\quad\text{ and }\quad\mathrm{Im}(T_{n}^{\dagger}\mathbf{x}):=c_{n}^{\mathfrak{h}}\,\mathrm{Im}(\mathbf{x})+b_{n}^{\mathfrak{h}},\qquad\forall\mathbf{x}\in\mathbb{R}\times[0,1].\] Therefore, the previous two steps yield that, for any \(n>n(R,\delta)\) large enough, it holds that \[\mathrm{d}_{\mathbb{R}^{2}}\big{(}T_{n}^{\dagger}\hat{\mathcal{S}}_{n}^{\dagger}(\mathbf{x}),\mathbf{x}\big{)}\leq\delta,\quad\forall\mathbf{x}\in\mathcal{V}\mathcal{G}^{\dagger,n}(\mathbb{R}\times[-R,R]),\] where \(\mathrm{d}_{\mathbb{R}^{2}}\) denotes the Euclidean distance in \(\mathbb{R}^{2}\). This is obviously equivalent to the desired result, and so the proof is completed. ## 5 Application to mated-CRT maps The main goal of this section is to prove Theorem 1.4. Roughly speaking, the plan is as follows. We will first introduce an a priori embedding of mated-CRT maps which is "close" to LQG. We then prove that this a priori embedding satisfies the assumptions of Theorem 1.3. Finally, we show how this allows us to conclude. ### SLE/LQG description of mated-CRT maps We now discuss an equivalent description of mated-CRT maps in terms of SLE/LQG, which comes from the results of [10]. These results imply that mated-CRT maps can be realized as cell configurations constructed from space-filling SLE\({}_{\kappa}\) curves parameterized by quantum mass with respect to a certain independent LQG surface. We will not need many properties of the SLE/LQG objects involved, so we will not give detailed definitions, but we will give precise references instead. **Liouville quantum gravity surfaces.** For \(\gamma\in(0,2)\) and \(D\subseteq\mathbb{C}\), a doubly marked \(\gamma\)-LQG surface is an equivalence class of quadruplets \((D,h,z_{1},z_{2})\) where \(h\) is a random generalized function on \(D\) (which we will always take to be an instance of some variant of the Gaussian free field), and \(z_{1}\), \(z_{2}\in D\). Two such quadruplets \((D,h,z_{1},z_{2})\) and \((\widetilde{D},\widetilde{h},\widetilde{z}_{1},\widetilde{z}_{2})\) are declared to be equivalent if there is a conformal map \(f:\widetilde{D}\to D\) such that \[\widetilde{h}=h\circ f+Q\log|f^{\prime}|\quad\text{ and }\quad f(\widetilde{z}_{1})=z_{1},\;f(\widetilde{z}_{2})=z_{2},\quad\text{ where }\quad Q=\frac{2}{\gamma}+\frac{\gamma}{2}. \tag{5.1}\] For \(\gamma\in(0,2)\), it is well-known that one can construct a random measure, called the \(\gamma\)-LQG area measure, which is formally given by \(\mu_{h}:=e^{\gamma h}\,\mathrm{d}^{2}z\), where \(\mathrm{d}^{2}z\) denotes the Lebesgue measure on \(D\). Since \(h\) is a random generalized function, this definition does not make rigorous sense and one should proceed using a standard regularization and limiting procedure [11]. The \(\gamma\)-LQG area measure satisfies a certain change of coordinates formula.
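As a hedged aside (this is the standard construction from the LQG literature, with our normalizations), the regularization alluded to above can be taken to be the circle-average one: writing \(h_{\varepsilon}(z)\) for the average of \(h\) over the circle \(\partial B_{\varepsilon}(z)\), one sets
\[\mu_{h}(A):=\lim_{\varepsilon\to 0}\int_{A}\varepsilon^{\gamma^{2}/2}e^{\gamma h_{\varepsilon}(z)}\,\mathrm{d}^{2}z.\]
The corrective factor \(\varepsilon^{\gamma^{2}/2}\) also explains the constant \(Q\) in (5.1): under a conformal change of variables \(z=f(w)\) one has \(\mathrm{d}^{2}z=|f^{\prime}(w)|^{2}\,\mathrm{d}^{2}w\), and the regularization scale transforms to first order as \(\varepsilon\mapsto\varepsilon/|f^{\prime}(w)|\), producing the total factor \(|f^{\prime}(w)|^{2+\gamma^{2}/2}=|f^{\prime}(w)|^{\gamma Q}\), which is precisely absorbed by the shift \(h\mapsto h\circ f+Q\log|f^{\prime}|\).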
More precisely, given two equivalent doubly marked \(\gamma\)-LQG surfaces \((D,h,z_{1},z_{2})\) and \((\widetilde{D},\widetilde{h},\widetilde{z}_{1},\widetilde{z}_{2})\), it holds almost surely that \(\mu_{h}(f(A))=\mu_{\widetilde{h}}(A)\) for all Borel sets \(A\subseteq\widetilde{D}\), where \(f:\widetilde{D}\to D\) is a conformal map such that (5.1) holds. In this article, we are interested in two different kinds of doubly marked \(\gamma\)-LQG surfaces. * The _doubly marked quantum sphere_ \((\mathbb{C},h,0,\infty)\), where \(h\) is a variant of the Gaussian free field precisely defined in [14, Definition 4.21]. For \(\gamma\in(0,2)\), it is well-known that one can associate with the random generalized function \(h\) a random measure \(\mu_{h}\) on \(\mathbb{C}\), the \(\gamma\)-LQG measure, with \(\mu_{h}(\mathbb{C})<\infty\) (again, we will not need the precise definition here). Typically, one considers a unit-area quantum sphere, which means that we fix \(\mu_{h}(\mathbb{C})=1\). * The _\(0\)-quantum cone_ \((\mathbb{C},h^{c},0,\infty)\), where \(h^{c}\) is a variant of the Gaussian free field precisely defined in [14, Definition 4.10]. Also in this case, for \(\gamma\in(0,2)\), we can associate to \(h^{c}\) the \(\gamma\)-LQG measure \(\mu_{h^{c}}\), which has infinite total mass but is locally finite. **Schramm-Loewner evolution.** We do not need to precisely define \(\mathrm{SLE}_{\kappa}\), but rather it is sufficient to know that whole-plane space-filling \(\mathrm{SLE}_{\kappa}\), for \(\kappa>4\), is a random space-filling curve \(\mathfrak{g}\) which travels from \(\infty\) to \(\infty\) in \(\mathbb{C}\). It is a variant of \(\mathrm{SLE}_{\kappa}\) [15] which was introduced in [13, 14]. Space-filling \(\mathrm{SLE}_{\kappa}\) for \(\kappa\geq 8\) is a two-sided version of ordinary \(\mathrm{SLE}_{\kappa}\) (which is already space-filling), whereas space-filling \(\mathrm{SLE}_{\kappa}\) for \(\kappa\in(4,8)\) can be obtained from ordinary \(\mathrm{SLE}_{\kappa}\) by iteratively filling in the "bubbles" which the path disconnects from \(\infty\). **Construction of the a priori embedding.** An important feature of the mated-CRT map is that it comes with an a priori embedding into \(\mathbb{C}\) described by SLE-decorated LQG. To explain this embedding, consider a doubly marked quantum sphere \((\mathbb{C},h,0,\infty)\) and, for \(\gamma\in(0,2)\), consider the associated \(\gamma\)-LQG measure \(\mu_{h}\). Sample a space-filling \(\mathrm{SLE}_{\kappa}\) curve \(\mathfrak{g}\) with \(\kappa=16/\gamma^{2}\), independently from the random generalized function \(h\), and reparametrize \(\mathfrak{g}\) so that \[\mathfrak{g}(0)=0\quad\text{ and }\quad\mu_{h}\big{(}\mathfrak{g}([a,b])\big{)}=b-a,\quad\forall a,b\in\mathbb{R}\text{ with }a<b.\] For \(\gamma\in(0,2)\) and \(n\in\mathbb{N}\), we define the _\(n\)-structure graph_ \(\mathcal{G}^{n}\) associated with the pair \((h,\mathfrak{g})\) as follows. The vertex set of \(\mathcal{G}^{n}\) is given by \[\mathcal{VG}^{n}:=\frac{1}{n}\mathbb{Z}\cap(0,1].\] Two distinct vertices \(x_{1}\), \(x_{2}\in\mathcal{VG}^{n}\) are connected by one edge (resp. two edges) if and only if the intersection of the corresponding cells \(\mathfrak{g}([x_{1}-1/n,x_{1}])\) and \(\mathfrak{g}([x_{2}-1/n,x_{2}])\) has one connected component which is not a singleton (resp. two connected components which are not singletons). We refer to Figure 10 for a diagrammatic construction of the SLE/LQG embedding of the mated-CRT map.
Figure 10: **Left:** A space-filling \(\mathrm{SLE}_{\kappa}\) curve \(\mathfrak{g}\), for \(\kappa\geq 8\), divided into cells \(\mathfrak{g}([x-1/n,x])\) for a collection of \(x\in\mathcal{VG}^{n}\). Each cell has \(\mu_{h}\)-measure equal to \(1/n\). **Middle:** The same as in the left figure but with an orange path showing the order in which cells are traversed by \(\mathfrak{g}\). **Right:** In each cell we draw a red point corresponding to a vertex in \(\mathcal{VG}^{n}\). Two vertices are connected by a red edge if the corresponding cells intersect. This illustrates how the SLE/LQG embedding of the \(n\)-mated CRT map with the sphere topology is built. A similar figure has appeared in [13]. When \(\kappa\geq 8\), the intersection of cells \(\mathfrak{g}([x_{1}-1/n,x_{1}])\cap\mathfrak{g}([x_{2}-1/n,x_{2}])\) is always either empty or the union of one or two non-singleton connected components. However, when \(\kappa\in(4,8)\) it is possible that \(\mathfrak{g}([x_{1}-1/n,x_{1}])\cap\mathfrak{g}([x_{2}-1/n,x_{2}])\) is a totally disconnected Cantor-like set, in which case \(x_{1}\) and \(x_{2}\) are not joined by an edge in \(\mathcal{G}^{n}\) (see Figure 11). The following result explains the connection between the pair \((h,\mathfrak{g})\) and the mated-CRT map \(\mathcal{G}^{n}\), and it is a consequence of [14, Theorems 1.9 and 8.18]. **Proposition 5.1**.: _The family of structure graphs \(\{\mathcal{G}^{n}\}_{n\in\mathbb{N}}\) agrees in law with the family of \(n\)-mated-CRT maps with the sphere topology defined in Subsection 1.3._ The previous proposition gives us an a priori embedding of the mated-CRT map in \(\mathbb{C}\) by sending each \(x\in\mathcal{VG}^{n}\) to the point \(\mathfrak{g}(x)\in\mathbb{C}\). Furthermore, the graph \(\mathcal{G}^{n}\) comes naturally with two marked vertices. Indeed, we can let \(v_{0}^{n}\in\mathcal{VG}^{n}\) (resp. \(v_{1}^{n}\in\mathcal{VG}^{n}\)) be the vertex corresponding to the cell containing \(0\) (resp. \(\infty\)). We also emphasize that here all the edges in \(\mathcal{EG}^{n}\) are assumed to have unit conductance. **Construction of the a priori embedding of the dual graph.** The a priori SLE/LQG embedding of \(\mathcal{G}^{n}\) also induces an a priori embedding into \(\mathbb{C}\) of the associated planar dual graph \(\widehat{\mathcal{G}}^{n}\). Indeed, each vertex of \(\widehat{\mathcal{G}}^{n}\) is naturally identified with the set of points in \(\mathbb{C}\) where three of the cells \(\mathfrak{g}([x-1/n,x])\) meet5, for \(x\in\mathcal{VG}^{n}\). The set of edges of \(\widehat{\mathcal{G}}^{n}\) can be identified with the boundary segments of the cells which connect these vertices. To be precise, this is not an embedding when \(\kappa\in(4,8)\) since the edges can intersect (but not cross) each other (see Figure 11). To deal with this case, we can very slightly perturb the edges so that they do not intersect except at their endpoints. We refer to the proof of Proposition 5.4 for more details on how one can handle this situation. Footnote 5: Note that there cannot be more than three cells meeting at a single point or we would have a face of degree greater than three. ### Mated-CRT maps satisfy the assumptions In this subsection, we show that the a priori embedding of mated-CRT maps with the sphere topology satisfies the assumptions of Theorem 1.3. In particular, we will show here that assumptions (H2) and (H3) are satisfied in this specific case.
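Since the two regimes \(\kappa\geq 8\) and \(\kappa\in(4,8)\) recur throughout this section, it may help to record the elementary parameter dictionary (our own remark, immediate from \(\kappa=16/\gamma^{2}\)):
\[\gamma\in(0,2)\iff\kappa\in(4,\infty),\qquad\gamma\in(0,\sqrt{2}\,]\iff\kappa\geq 8,\qquad\gamma\in(\sqrt{2},2)\iff\kappa\in(4,8).\]
In particular, the edge-perturbation issue just discussed arises exactly when \(\gamma\in(\sqrt{2},2)\), which matches the case distinction made in the proof of Proposition 5.4 below.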
We recall that, for each \(n\in\mathbb{N}\) and \(\gamma\in(0,2)\), the mated-CRT map \(\mathcal{G}^{n}\) comes with two marked vertices \(v_{0}^{n}\) and \(v_{1}^{n}\), which correspond to the cells containing \(0\) and \(\infty\), respectively. **Proposition 5.2** (Invariance principle on the mated-CRT map, [14, Theorem 3.4]).: _For each \(n\in\mathbb{N}\) and \(\gamma\in(0,2)\), let \(\mathcal{G}^{n}\) be the \(n\)-mated CRT map with the sphere topology embedded in \(\mathbb{C}\) under the a priori SLE/LQG embedding specified in Subsection 5.1. View the embedded random walk on \(\mathcal{G}^{n}\), stopped when it hits \(v_{1}^{n}\), as a continuous curve in \(\mathbb{C}\) obtained by piecewise linear interpolation at constant speed. For each compact subset \(K\subset\mathbb{C}\) and for any \(z\in K\), the conditional law given the pair \((h,\mathfrak{g})\) of the random walk on \(\mathcal{G}^{n}\) started from the vertex \(x_{z}^{n}\in\mathcal{VG}^{n}\) nearest to \(z\) weakly converges in probability as \(n\to\infty\) to the law of the Brownian motion on \(\mathbb{C}\) started from \(z\) with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in K\)._

Figure 11: A typical space-filling \(\mathrm{SLE}_{\kappa}\) curve \(\mathfrak{g}\), for \(\kappa\in(4,8)\), divided into cells. The picture is slightly misleading since the set of "pinch points" where the left and right boundaries of each cell meet is actually uncountable. The black dots indicate the points where \(\mathfrak{g}\) starts and finishes filling each cell. Note that the gray and green cells intersect at several points, but do not share a connected boundary arc, so they are not considered to be adjacent. A similar figure has appeared in [11].

**Remark 5.3**.: To be precise, in [14, Theorem 3.4], the above theorem is stated for a mated-CRT map built using a quantum sphere with only one marked point. However, the quantum sphere with two marked points can be obtained from the quantum sphere with one marked point \((\mathbb{C},h,\infty)\) by first sampling a point \(z\in\mathbb{C}\) uniformly from the \(\gamma\)-LQG area measure \(\mu_{h}\), then applying a conformal map which sends \(z\) to \(0\) (see [14, Proposition A.13]). Therefore, since Brownian motion is conformally invariant, the statement in [14, Theorem 3.4] immediately implies Proposition 5.2 for the quantum sphere with two marked points. The main purpose of this subsection is to prove that an analogous result holds also on the sequence of a priori SLE/LQG embeddings of the dual \(n\)-mated-CRT map with the sphere topology. More precisely, we want to prove the following proposition. **Proposition 5.4** (Invariance principle on the dual mated-CRT map).: _For each \(n\in\mathbb{N}\) and \(\gamma\in(0,2)\), let \(\widehat{\mathcal{G}}^{n}\) be the dual planar graph associated to the \(n\)-mated-CRT map \(\mathcal{G}^{n}\) with the sphere topology embedded in \(\mathbb{C}\) under the a priori embedding specified in Subsection 5.1. View the embedded random walk on \(\widehat{\mathcal{G}}^{n}\) as a continuous curve in \(\mathbb{C}\) obtained by piecewise linear interpolation at constant speed._
_For each compact subset \(K\subset\mathbb{C}\) and for any \(z\in K\), the conditional law given the pair \((h,\mathfrak{g})\) of the random walk on \(\widehat{\mathcal{G}}^{n}\) started from the vertex \(\widehat{x}_{z}^{n}\in\mathcal{V}\widehat{\mathcal{G}}^{n}\) nearest to \(z\) weakly converges in probability as \(n\to\infty\) to the law of the Brownian motion on \(\mathbb{C}\) started from \(z\) with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in K\)._ #### 5.2.1 Invariance principle on embedded lattices In order to prove Proposition 5.4, we need to take a step back and state a general theorem from [14] which guarantees that the random walk on certain embedded lattices has Brownian motion as a scaling limit. For the reader's convenience, we recall here the main definitions needed, and in what follows we adopt notation similar to that of [14]. **Definition 5.5**.: An embedded lattice \(\mathcal{M}\) is a graph embedded in \(\mathbb{C}\) in such a way that each edge is a simple curve with zero Lebesgue measure, the edges intersect only at their endpoints, and each compact subset of \(\mathbb{C}\) intersects at most finitely many edges of \(\mathcal{M}\). As usual, we write \(\mathcal{VM}\) for the set of vertices of \(\mathcal{M}\), \(\mathcal{EM}\) for the set of edges of \(\mathcal{M}\), and \(\mathcal{FM}\) for the set of faces of \(\mathcal{M}\). **Definition 5.6**.: For an embedded lattice \(\mathcal{M}\) and \(x\in\mathcal{VM}\), we define the _outradius_ of \(x\) by setting \[\mathrm{Outrad}(x):=\mathrm{diam}\left(\bigcup_{\{H\in\mathcal{FM}\,:\,x\in\partial H\}}H\right),\] i.e., the diameter of the union of the faces with \(x\) on their boundaries. Here \(\partial H\) denotes the boundary of the face \(H\in\mathcal{FM}\). For \(C>0\) and \(z\in\mathbb{C}\), we write \(C(\mathcal{M}-z)\) for the embedded lattice obtained by first translating everything by the amount \(-z\) and then by scaling everything by the factor \(C\). We are interested in random embedded lattices that satisfy the following assumptions. * (**I1: Translation invariance modulo scaling**) There is a (possibly random and \(\mathcal{M}\)-dependent) increasing sequence of open sets \(U_{j}\subset\mathbb{C}\), each of which is either a square or a disk, whose union is all of \(\mathbb{C}\), such that the following is true. Conditional on \(\mathcal{M}\) and \(U_{j}\), let \(z_{j}\) for \(j\in\mathbb{N}\) be sampled uniformly from the Lebesgue measure on \(U_{j}\). Then the shifted lattices \(\mathcal{M}-z_{j}\) converge in law to \(\mathcal{M}\) modulo scaling as \(j\to\infty\), i.e., there are random numbers \(C_{j}>0\) (possibly depending on \(\mathcal{M}\) and \(z_{j}\)) such that \(C_{j}(\mathcal{M}-z_{j})\to\mathcal{M}\) in law with respect to the metric specified in [14, Equation (1.6)].6 Footnote 6: Several equivalent formulations of this condition are given in [14, Definition 1.2]. * (**I2: Ergodicity modulo scaling**) Every real-valued measurable function \(F=F(\mathcal{M})\) which is invariant under translation and scaling, i.e., \(F(C(\mathcal{M}-z))=F(\mathcal{M})\) for each \(z\in\mathbb{C}\) and \(C>0\), is almost surely equal to a deterministic constant.
* (**I3: Finite expectation**) Let \(H_{0}\in\mathcal{FM}\) be the face of \(\mathcal{M}\) containing \(0\); then \[\mathbb{E}\left[\sum_{x\in\mathcal{V}\mathcal{M}\cap\partial H_{0}}\frac{\operatorname{Outrad}(x)^{2}\deg(x)}{\operatorname{Area}(H_{0})}\right]<\infty,\] where, in this case, \(\deg(x)\) denotes the number of edges incident to \(x\). **Proposition 5.7** ([13, Theorem 1.11]).: _Let \(\mathcal{M}\) be an embedded lattice satisfying assumptions (I1), (I2), and (I3). For \(\varepsilon>0\), view the embedded random walk on \(\varepsilon\mathcal{M}\) as a continuous curve in \(\mathbb{C}\) obtained by piecewise linear interpolation at constant speed. For each compact subset \(K\subset\mathbb{C}\) and for any \(z\in K\), the conditional law given \(\mathcal{M}\) of the random walk on \(\varepsilon\mathcal{M}\) started from the vertex \(\widehat{x}_{z}^{\varepsilon}\in\mathcal{V}(\varepsilon\mathcal{M})\) nearest to \(z\) weakly converges in probability as \(\varepsilon\to 0\) to the law of a Brownian motion on \(\mathbb{C}\), with some deterministic, non-degenerate covariance matrix \(\Sigma\), started from \(z\) with respect to the local topology on curves viewed modulo time parameterization specified in Subsection 2.1.2, uniformly over all \(z\in K\)._ To be precise, the above theorem follows from the proof of [13, Theorem 1.11] (which gives the same statement but without the uniform rate of convergence on compact subsets) by using [13, Theorem 3.10] in place of [13, Theorem 1.16]. #### 5.2.2 Proof of Proposition 5.4 The purpose of this subsection is to transfer the general result of Proposition 5.7 to the particular setting of Proposition 5.4. In order to do this, we need to proceed in two steps. First, we verify that the hypotheses of Proposition 5.7 are satisfied for a sequence of embedded lattices built through the \(0\)-quantum cone. Then, we transfer the result to the sequence of \(n\)-mated CRT maps with the sphere topology by means of an absolute continuity argument. Let \((\mathbb{C},h^{c},0,\infty)\) be a \(0\)-quantum cone as specified in Subsection 5.1. We now define a graph associated with the \(0\)-quantum cone in a way which is exactly analogous to the SLE/LQG description of mated-CRT maps with the sphere topology described in Section 5.1. For \(\gamma\in(0,2)\), let \(\mathfrak{g}\) be a whole-plane space-filling \(\operatorname{SLE}_{\kappa}\), with \(\kappa=16/\gamma^{2}\), sampled independently from \(h^{c}\) and then parameterized by the \(\gamma\)-LQG measure \(\mu_{h^{c}}\) in such a way that \(\mathfrak{g}(0)=0\). Let \(\xi\) be sampled uniformly from the unit interval \([0,1]\), independently from everything else, and let \[\mathcal{H}:=\big{\{}\mathfrak{g}\big{(}[x-1,x]\big{)}\,:\,x\in\mathbb{Z}+\xi\big{\}}.\] The reason for considering times in \(\mathbb{Z}+\xi\) instead of just in \(\mathbb{Z}\) is to avoid making the point \(0=\mathfrak{g}(0)\) special. This is needed in order to check the translation invariance modulo scaling hypothesis (I1). We view \(\mathcal{H}\) as a planar map whose vertex set is \(\mathcal{H}\) itself. Two vertices \(H\), \(H^{\prime}\in\mathcal{H}\) are joined by one edge (resp. two edges) if and only if \(H\) and \(H^{\prime}\) intersect along one non-trivial connected boundary arc (resp. two such arcs). For \(H\), \(H^{\prime}\in\mathcal{H}\), with \(H\neq H^{\prime}\), we write \(H\sim H^{\prime}\) if they are joined by at least one edge.
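To expand on the role of the uniform shift \(\xi\) (a short check of our own): for any fixed \(s\in\mathbb{R}\), translating time by \(s\) replaces the grid \(\mathbb{Z}+\xi\) by \(\mathbb{Z}+(\xi+s)\), and
\[\mathbb{Z}+(\xi+s)=\mathbb{Z}+\big((\xi+s)\bmod 1\big),\qquad(\xi+s)\bmod 1\overset{d}{=}\xi\quad\text{for }\xi\sim\operatorname{Unif}[0,1],\]
so the grid of cell-boundary times is stationary in law under time translation. This is the elementary input needed when verifying hypothesis (I1) for \(\mathcal{H}\): without the shift, the cell containing \(0=\mathfrak{g}(0)\) would play a distinguished role.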
Given a set \(A\subset\mathbb{C}\), we write \[\mathcal{H}(A):=\big{\{}H\in\mathcal{H}\,:\,H\cap A\neq\emptyset\big{\}}\] and moreover, for \(H\in\mathcal{H}\), we write \(\deg(H)\) for the number of cells \(H^{\prime}\in\mathcal{H}\) (counted with edge multiplicity) such that \(H\sim H^{\prime}\). Proof of Proposition 5.4.: For \(\gamma\in(0,2)\), define the cell configuration \(\mathcal{H}\) associated with a space-filling \(\operatorname{SLE}_{\kappa}\) on a \(0\)-quantum cone as specified above. Let \(\mathcal{M}\) be the embedded lattice defined as follows. * The vertex set of \(\mathcal{M}\) consists of points \(x\in\mathbb{C}\) such that there are three cells in \(\mathcal{H}\) that meet at \(x\) and with the property that the boundary of each of such cells has a connected subset that touches \(x\); * The edge set of \(\mathcal{M}\) consists of the boundary segments of the cells which connect the vertices. In other words, \(\mathcal{M}\) is nothing but the planar dual of \(\mathcal{H}\). Moreover, by construction, \(\mathcal{M}\) is an embedded lattice in the sense of Definition 5.5, except that embedded edges are allowed to intersect but not cross, in the case when \(\gamma\in(\sqrt{2},2)\). To deal with this case, we can consider a different embedding of \(\mathcal{M}\) in which all the vertices occupy the same position, but the edges are slightly perturbed so that they do not intersect except at their endpoints. More precisely, for each edge with touching points in its interior, we can slightly perturb it in such a way that the modification only depends on the position of the edge itself and on the positions of the adjacent faces. In particular, this perturbation can be carried out in a translation and scaling invariant way. Therefore, since this procedure only depends on the local configuration of the lattice, if the starting cell configuration is translation invariant modulo scaling, then the perturbed lattice is also translation invariant modulo scaling. We will now check that \(\mathcal{M}\) satisfies assumptions (I1), (I2), and (I3). The translation invariance modulo scaling assumption (I1) and the ergodicity modulo scaling assumption (I2) follow from the corresponding properties for the associated cell configuration \(\mathcal{H}\) as checked in [13, Proposition 3.1]. Therefore, we can just focus on proving the finite expectation assumption (I3). To this end, recalling that the cell \(H_{0}\) is the face of \(\mathcal{M}\) containing \(0\), we proceed as follows \[\begin{split}\sum_{x\in\mathcal{VM}\cap\partial H_{0}}\frac{\operatorname{Outrad}(x)^{2}\deg(x)}{\operatorname{Area}(H_{0})}&\leq 12\sum_{x\in\mathcal{VM}\cap\partial H_{0}}\sum_{\{H\in\mathcal{H}\,:\,x\in\partial H\}}\frac{\operatorname{diam}(H)^{2}}{\operatorname{Area}(H_{0})}\\ &\leq 48\sum_{\{H\in\mathcal{H}\,:\,H\sim H_{0}\}}\frac{\operatorname{diam}(H)^{2}}{\operatorname{Area}(H_{0})}+12\frac{\operatorname{diam}(H_{0})^{2}}{\operatorname{Area}(H_{0})}\deg(H_{0}).\end{split} \tag{5.2}\] In the first line of (5.2), we used the fact that each vertex of \(\mathcal{M}\) has degree at most \(3\) and the inequality \((a+b+c)^{2}\leq 4(a^{2}+b^{2}+c^{2})\). In the second line, we use that each cell \(H\in\mathcal{H}\) with \(H\sim H_{0}\) intersects \(H_{0}\) along at most two disjoint connected boundary arcs (one on its left boundary and one on its right boundary), so there are at most \(4\) vertices of \(\mathcal{M}\) in \(H\cap H_{0}\).
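Unpacking the constants in (5.2) (our own bookkeeping, using the degree-three property recorded in Footnote 5): for every \(x\in\mathcal{VM}\cap\partial H_{0}\), the at most three faces containing \(x\) all pass through \(x\), so
\[\operatorname{Outrad}(x)^{2}\leq\Big(\sum_{\{H\in\mathcal{H}\,:\,x\in\partial H\}}\operatorname{diam}(H)\Big)^{2}\leq 4\sum_{\{H\in\mathcal{H}\,:\,x\in\partial H\}}\operatorname{diam}(H)^{2},\qquad\deg(x)\leq 3,\]
which produces the factor \(12=3\cdot 4\) in the first line of (5.2); the factor \(48=12\cdot 4\) in the second line then accounts for the at most four vertices that each neighboring cell can contribute to \(\partial H_{0}\).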
Now, we notice that the second quantity in the last line of (5.2) has finite expectation thanks to [13, Theorem 4.1]. Therefore, we can just focus our attention on the sum appearing in the last line of (5.2). Basically, the fact that this sum has finite expectation follows from the combination of several results in [13]. Let \(F=F(\mathcal{H})\) denote the sum appearing in the last line of (5.2). We will show that \(\mathbb{E}[F]<\infty\) using an ergodic theory result from [13, Section 2.2]. Let \(\{S_{k}\}_{k\in\mathbb{Z}}\) be the bi-infinite sequence of origin-containing squares of a uniform dyadic system independent from \(\mathcal{H}\), as defined in [13, Section 2.1]. We will not need the precise definition of this sequence here, but only that \(S_{k-1}\subset S_{k}\) for each \(k\in\mathbb{Z}\) and \(\cup_{k=1}^{\infty}S_{k}=\mathbb{C}\). As shown in [13, Proposition 3.1], the cell configuration \(\mathcal{H}\) satisfies a suitable translation invariance modulo scaling assumption, and so, we can apply [13, Lemma 2.7] to \(\mathcal{H}\) to find that the following is true. If we let \(F_{z}\) for \(z\in\mathbb{C}\) be defined in the same manner as \(F\) but with the translated cell configuration \(\mathcal{H}-z\) in place of \(\mathcal{H}\), then it holds almost surely that \[\mathbb{E}[F]=\lim_{k\to\infty}\frac{1}{\operatorname{Area}(S_{k})}\int_{S_{k}}F_{z}dz.\] To bound the right-hand side of the above expression, we can proceed as follows \[\begin{split}\lim_{k\to\infty}\frac{1}{\operatorname{Area}(S_{k})}\int_{S_{k}}F_{z}dz&=\lim_{k\to\infty}\frac{1}{\operatorname{Area}(S_{k})}\int_{S_{k}}\sum_{\{H\in\mathcal{H}\,:\,H\sim H_{z}\}}\frac{\operatorname{diam}(H)^{2}}{\operatorname{Area}(H_{z})}dz\\ &\leq\limsup_{k\to\infty}\frac{1}{\operatorname{Area}(S_{k})}\sum_{H\in\mathcal{H}(S_{k})}\sum_{\{H^{\prime}\in\mathcal{H}\,:\,H^{\prime}\sim H\}}\operatorname{diam}(H^{\prime})^{2},\end{split} \tag{5.3}\] where \(H_{z}\) denotes the cell of \(\mathcal{H}\) containing \(z\). Since the maximal size of the cells in \(\mathcal{H}(S_{k})\) is almost surely of strictly smaller order than the side length of \(S_{k}\) as \(k\to\infty\) (see [13, Lemma 2.9]), we find that almost surely for large enough \(k\in\mathbb{N}\), each \(H^{\prime}\in\mathcal{H}\) with \(H^{\prime}\sim H\) for some \(H\in\mathcal{H}(S_{k})\) is contained in \(\mathcal{H}(S_{k}(1))\), where \(S_{k}(1)\) is the square with the same center as \(S_{k}\) and three times the side length of \(S_{k}\). Therefore, the last line in (5.3) is almost surely bounded above by \[2\limsup_{k\to\infty}\frac{1}{\operatorname{Area}(S_{k})}\sum_{H\in\mathcal{H}(S_{k}(1))}\operatorname{diam}(H)^{2}\deg(H),\] which is finite by [13, Lemmas 2.8 and 2.10]. Therefore, summing up, we proved that the embedded lattice \(\mathcal{M}\) satisfies assumptions (I1), (I2), and (I3), and so we can apply Proposition 5.7. Furthermore, we notice that, thanks to the rotational invariance of the law of \((h^{c},\mathfrak{g})\), the limiting covariance matrix \(\Sigma\) is given by a positive scalar multiple of the identity matrix. Hence, in order to conclude the proof of Proposition 5.4, we need to transfer the result to the setting of mated-CRT maps with the sphere topology. This can be done by means of absolute continuity arguments as in [14, Subsection 3.3]. For the sake of brevity, we will not repeat such arguments here and we refer to [14]. ### Convergence to LQG In this subsection we see how the proof of Theorem 1.4 is almost an immediate consequence of the results proved above.
More specifically, for \(n\in\mathbb{N}\) and \(\gamma\in(0,2)\), let \((\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\) be the doubly marked \(n\)-mated-CRT map with the sphere topology under the a priori SLE/LQG embedding as specified in Subsection 5.1. We observe that there exists a conformal map from \(\mathbb{C}\) to \(\mathcal{C}_{2\pi}\) sending \(\infty\mapsto+\infty\) and \(0\mapsto-\infty\). This mapping is unique up to horizontal translation and rotation. The horizontal translation can be fixed by specifying that the volume of \(\mathbb{R}/2\pi\mathbb{Z}\times[0,\infty)\) under the \(\gamma\)-LQG measure \(\mu_{h}\) induced by the embedding is precisely \(1/2\). Furthermore, the rotation on \(\mathbb{R}/2\pi\mathbb{Z}\) can be chosen uniformly at random. Therefore, using this conformal map, we can define an embedding of \((\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\) into \(\mathcal{C}_{2\pi}\) in such a way that \(v_{0}^{n}\) and \(v_{1}^{n}\) are mapped to \(-\infty\) and \(+\infty\), respectively. Using the same conformal map, we can also embed the associated dual graph \(\hat{\mathcal{G}}^{n}\) into \(\mathcal{C}_{2\pi}\). This puts us exactly in the setting of Theorem 1.3, from which we can deduce the following result. **Proposition 5.8**.: _Fix \(\gamma\in(0,2)\) and let \(\{(\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) be the sequence of doubly marked \(n\)-mated CRT maps with the sphere topology embedded in the infinite cylinder \(\mathcal{C}_{2\pi}\) as specified above. For each \(n\in\mathbb{N}\), let \(\hat{\mathcal{S}}_{n}:\mathcal{V}\mathcal{G}^{n}\to\mathcal{C}_{\mathfrak{n}_{n}}\) be the Smith embedding associated to \((\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\) as specified in Definition 2.14. There exists a sequence of random affine transformations \(\{T_{n}\}_{n\in\mathbb{N}}\) from \(\mathcal{C}_{\mathfrak{n}_{n}}\) to \(\mathcal{C}_{2\pi}\) of the form specified in the statement of Theorem 1.3 such that, for all compact sets \(K\subset\mathcal{C}_{2\pi}\), the following convergence holds in probability_ \[\lim_{n\to\infty}\sup_{x\in\mathcal{V}\mathcal{G}^{n}(K)}\mathrm{d}_{2\pi}\big{(}T_{n}\hat{\mathcal{S}}_{n}(x),x\big{)}=0,\] _where \(\mathrm{d}_{2\pi}\) denotes the Euclidean distance on the cylinder \(\mathcal{C}_{2\pi}\)._ Proof.: By construction, the sequence \(\{(\mathcal{G}^{n},v_{0}^{n},v_{1}^{n})\}_{n\in\mathbb{N}}\) and the associated sequence of dual graphs \(\{\widehat{\mathcal{G}}^{n}\}_{n\in\mathbb{N}}\) satisfy assumption (H1) almost surely. Moreover, Proposition 5.2 guarantees the convergence in probability of the random walk on the sequence of primal graphs to Brownian motion, and so assumption (H2) is satisfied. Furthermore, Proposition 5.4 guarantees the convergence in probability of the random walk on the sequence of dual graphs to Brownian motion, and so also assumption (H3) is satisfied. Therefore, the desired result follows from Theorem 1.3. Using the same procedure specified at the beginning of this subsection, we can also consider the parameterization of the unit-area quantum sphere by the infinite cylinder \(\mathcal{C}_{2\pi}\). Hence, with a slight abuse of notation, we let \((\mathcal{C}_{2\pi},h,-\infty,+\infty)\) be the unit-area quantum sphere parametrized by \(\mathcal{C}_{2\pi}\) and we denote by \(\mu_{h}\) the associated \(\gamma\)-LQG measure. We are now ready to prove Theorem 1.4.
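Before doing so, we record, as a hedged aside (conventions may differ by a rotation or reflection), one standard explicit choice for the conformal map used at the beginning of this subsection: a branch of the logarithm. With the convention that the axial coordinate of \(\mathcal{C}_{2\pi}\) is the imaginary part,
\[\psi:\mathbb{C}\setminus\{0\}\to\mathcal{C}_{2\pi},\qquad\psi(z):=i\log z\quad(\text{well defined modulo }2\pi),\]
sends \(0\mapsto-\infty\) and \(\infty\mapsto+\infty\), and by the change of coordinates rule (5.1) the field on the cylinder becomes \(h_{\mathrm{cyl}}(w)=h(\psi^{-1}(w))+Q\,\mathrm{Im}(w)\). The residual freedom \(w\mapsto w+ia+\theta\), with \(a\in\mathbb{R}\) and \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\), is exactly the translation along the axis of the cylinder (the "horizontal translation" above) and the rotation fixed in the text.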
Proof of Theorem 1.4.: The result on the measure convergence (a) follows from Proposition 5.8 and the fact that \(\mathfrak{g}\) is parameterized by \(\mu_{h}\)-mass. The uniform convergence statement for curves (b) is also an immediate consequence of Proposition 5.8. The claimed random walk convergence (c) follows from Propositions 5.2 and 5.8. ## Appendix A Some standard estimates for planar Brownian motion Throughout the whole appendix, \(B\) denotes a standard planar Brownian motion. For \(\mathbf{x}\in\mathbb{R}^{2}\), we use the notation \(B^{\mathbf{x}}\) to denote a planar Brownian motion started from \(\mathbf{x}\). We recall that \(\sigma_{2\pi}:\mathbb{R}^{2}\to\mathcal{C}_{2\pi}\) is defined in (2.3) and denotes the covering map of the infinite cylinder \(\mathcal{C}_{2\pi}\). In particular, if \(B^{\mathbf{x}}\) is as above, then \(\sigma_{2\pi}(B^{\mathbf{x}})\) is a Brownian motion on \(\mathcal{C}_{2\pi}\) started from \(\sigma_{2\pi}(\mathbf{x})\). **Lemma A.1**.: _Fix \(R^{\prime}>1\). For \(\mathbf{x}\in\mathbb{R}^{2}\), consider the following stopping time_ \[\tau:=\inf\bigl{\{}t\geq 0\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=-2R^{\prime}\text{ or }\,\mathrm{Im}(B^{\mathbf{x}}_{t})=R^{\prime}\bigr{\}}.\] _Then it holds that_ \[\mathbb{P}_{\mathbf{x}}\bigl{(}\sigma_{2\pi}(B|_{[0,\tau]})\text{ does not wind around the cylinder below height }-R^{\prime}\bigr{)}\preceq\frac{1}{R^{\prime}},\quad\forall\mathbf{x}\in\mathbb{R}\times\{-R^{\prime}\},\] _where the implicit constant is independent of everything else._ Proof.: Fix \(\mathbf{x}\in\mathbb{R}\times\{-R^{\prime}\}\). Let \(\sigma_{0}:=0\), and for \(k\in\mathbb{N}_{0}\) we define inductively \[\tau_{k}:=\inf\bigl{\{}t\geq\sigma_{k}\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=-R^{\prime}-1\bigr{\}},\quad\sigma_{k+1}:=\inf\bigl{\{}t\geq\tau_{k}\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=-R^{\prime}\bigr{\}}.\] Moreover, for \(k\in\mathbb{N}_{0}\) consider the events \[A_{k}:=\bigl{\{}|\operatorname{Re}(B^{\mathbf{x}}_{\sigma_{k}})-\operatorname{Re}(B^{\mathbf{x}}_{\tau_{k}})|\geq 1\bigr{\}},\quad F_{k}:=\bigl{\{}\tau_{k}>\tau\bigr{\}}.\] Let \(K\) be the smallest \(k\in\mathbb{N}_{0}\) such that \(F_{k}\) occurs. Then the probability that the event in the lemma statement happens is less than or equal to the probability that none of the events \(\{A_{k}\}_{k\in[K]_{0}}\) happen. Thanks to the strong Markov property of the Brownian motion, the events \(\{A_{k}\}_{k\in\mathbb{N}_{0}}\) are independent and identically distributed. Moreover, thanks to well-known properties of Brownian motion, the event \(A_{0}\) happens with uniformly positive probability \(p\) independent of \(R^{\prime}\) and \(\mathbf{x}\). Therefore, we obtain that \[\mathbb{P}_{\mathbf{x}}\left(\bigcap_{i=0}^{K}\overline{A_{i}}\right)=\sum_{k\in\mathbb{N}_{0}}\mathbb{P}_{\mathbf{x}}\left(\bigcap_{i=0}^{k}\overline{A_{i}}\right)\mathbb{P}_{\mathbf{x}}(K=k)=\sum_{k\in\mathbb{N}_{0}}(1-p)^{k}\mathbb{P}_{\mathbf{x}}(K=k)\preceq\frac{1}{R^{\prime}},\] for all \(\mathbf{x}\in\mathbb{R}\times\{-R^{\prime}\}\), where the implicit constant is independent of everything else. Hence, this concludes the proof. **Lemma A.2**.: _Fix \(R>1\), \(R^{\prime}>R\)._
_For \(\mathbf{x}\in\mathbb{R}\times[-R,R]\), let \(\tau:=\inf\{t\geq 0\,:\,|\,\mathrm{Im}(B^{\mathbf{x}}_{t})|=R^{\prime}\}\), and define the events_ \[W^{+}:=\bigl{\{}\sigma_{2\pi}(B^{\mathbf{x}}|_{[0,\tau]})\text{ does a loop around the cylinder between heights }R\text{ and }R^{\prime}\bigr{\}},\] \[W^{-}:=\bigl{\{}\sigma_{2\pi}(B^{\mathbf{x}}|_{[0,\tau]})\text{ does a loop around the cylinder between heights }-R^{\prime}\text{ and }-R\bigr{\}}.\] _Then it holds that_ \[\mathbb{P}_{\mathbf{x}}\bigl{(}\overline{W^{+}}\cup\overline{W^{-}}\bigr{)}\preceq\frac{R}{R^{\prime}},\quad\forall\mathbf{x}\in\mathbb{R}\times[-R,R],\] _where the implicit constant is independent of everything else._ Proof.: Fix \(\mathbf{x}\in\mathbb{R}\times[-R,R]\). It is sufficient to prove that \(\mathbb{P}_{\mathbf{x}}(\overline{W^{+}})\preceq R/R^{\prime}\), and the same with \(W^{-}\) in place of \(W^{+}\). We will proceed similarly to the proof of the previous lemma. Let \(\sigma_{0}:=\inf\{t\geq 0\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=R\}\) and, for \(k\in\mathbb{N}_{0}\), we define inductively \[\tau_{k}:=\inf\bigl{\{}t\geq\sigma_{k}\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=R+1\bigr{\}},\quad\sigma_{k+1}:=\inf\bigl{\{}t\geq\tau_{k}\,:\,\mathrm{Im}(B^{\mathbf{x}}_{t})=R\bigr{\}}.\] Moreover, for \(k\in\mathbb{N}_{0}\) consider the events \[A_{k}:=\bigl{\{}\sigma_{2\pi}(B^{\mathbf{x}}|_{[\sigma_{k},\tau_{k}]})\text{ does a loop around the cylinder between heights }R\text{ and }R+1\bigr{\}},\quad F_{k}:=\bigl{\{}\tau_{k}>\tau\bigr{\}}.\] Let \(K\) be the smallest \(k\in\mathbb{N}_{0}\) such that \(F_{k}\) occurs. Then the probability that the event \(W^{+}\) does not happen is less than or equal to the probability that none of the events \(\{A_{k}\}_{k\in[K]_{0}}\) happen. Thanks to the strong Markov property of the Brownian motion, the events \(\{A_{k}\}_{k\in\mathbb{N}_{0}}\) are independent and identically distributed. Moreover, since the event \(A_{0}\) happens with uniformly positive probability \(p\) independent of \(R\) and \(\mathbf{x}\), we have that \[\mathbb{P}_{\mathbf{x}}\bigl{(}\overline{W^{+}}\bigr{)}\leq\mathbb{P}_{\mathbf{x}}\left(\bigcap_{i=0}^{K}\overline{A_{i}}\right)=\sum_{k\in\mathbb{N}_{0}}\mathbb{P}_{\mathbf{x}}\left(\bigcap_{i=0}^{k}\overline{A_{i}}\right)\mathbb{P}_{\mathbf{x}}(K=k)=\sum_{k\in\mathbb{N}_{0}}(1-p)^{k}\mathbb{P}_{\mathbf{x}}(K=k)\preceq\frac{R}{R^{\prime}},\] for all \(\mathbf{x}\in\mathbb{R}\times[-R,R]\), where the implicit constant is independent of everything else. Proceeding in a similar way, one can also prove that \(\mathbb{P}_{\mathbf{x}}(\overline{W^{-}})\preceq R/R^{\prime}\). Therefore, the desired result follows from the fact that \(\mathbb{P}_{\mathbf{x}}(\overline{W^{+}}\cup\overline{W^{-}})\leq\mathbb{P}_{\mathbf{x}}(\overline{W^{+}})+\mathbb{P}_{\mathbf{x}}(\overline{W^{-}})\).
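Both appendix proofs conclude with the same elementary estimate, which we sketch here under mildly simplifying assumptions (our own hedged gloss): suppose, by the strong Markov property, that the rounds \([\sigma_{k},\tau_{k}]\) are i.i.d., that each \(A_{k}\) has probability \(p>0\), and that each round is terminal (that is, \(K=k\)) independently with probability \(q\). Then
\[\sum_{k\in\mathbb{N}_{0}}(1-p)^{k}\,\mathbb{P}_{\mathbf{x}}(K=k)=\sum_{k\in\mathbb{N}_{0}}(1-p)^{k}\,q(1-q)^{k}=\frac{q}{1-(1-p)(1-q)}\leq\frac{q}{p}.\]
A one-dimensional gambler's-ruin estimate for \(\mathrm{Im}(B^{\mathbf{x}})\) gives \(q\preceq 1/R^{\prime}\) in both lemmas, since exiting before completing the next crossing of a unit-height strip requires the height process to travel a distance of order \(R^{\prime}\) before a distance \(1\); hence \(q/p\preceq 1/R^{\prime}\leq R/R^{\prime}\), as claimed.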
2307.03767
Conformal blocks on smoothings via mode transition algebras
Here we define a series of associative algebras attached to a vertex operator algebra $V$, called mode transition algebras, showing they reflect both algebraic properties of $V$ and geometric constructions on moduli of curves. One can define sheaves of coinvariants on pointed coordinatized curves from $V$-modules. We show that if the mode transition algebras admit multiplicative identities with certain properties, these sheaves deform as wanted on families of curves with nodes (so $V$ satisfies smoothing). Consequently, coherent sheaves of coinvariants defined by vertex operator algebras that satisfy smoothing form vector bundles. We also show that mode transition algebras give information about higher level Zhu algebras and generalized Verma modules. As an application, we completely describe higher level Zhu algebras of the Heisenberg vertex algebra for all levels, proving a conjecture of Addabbo--Barron.
Chiara Damiolini, Angela Gibney, Daniel Krashen
2023-07-07T18:00:00Z
http://arxiv.org/abs/2307.03767v4
# Conformal blocks on smoothings ###### Abstract. Here we introduce a series of associative algebras attached to a vertex operator algebra \(V\), called mode transition algebras, and show they reflect both algebraic properties of \(V\) and geometric constructions on moduli of curves. Pointed and coordinatized curves, labeled by modules over \(V\), give rise to sheaves of coinvariants. We show that if the mode transition algebras admit multiplicative identities satisfying certain natural properties (called strong unities), then these sheaves deform as wanted on families of curves with nodes. This provides new contexts in which coherent sheaves of coinvariants form vector bundles. We also show that mode transition algebras carry information about higher level Zhu algebras and generalized Verma modules. To illustrate, we explicitly describe the higher level Zhu algebras of the Heisenberg vertex operator algebra, proving a conjecture of Addabbo-Barron. Key words and phrases: Vertex algebras, factorization, sewing, conformal blocks, vector bundles on moduli of curves, logarithmic conformal field theory 2020 Mathematics Subject Classification: 14H10, 17B69 (primary), 81R10, 81T40, 14D21 (secondary). Mode transition algebras, introduced here, are a series of associative algebras that give insight into algebraic structures on moduli of stable pointed curves, and representations of the vertex operator algebras from which they are derived. Modules over vertex operator algebras (VOAs for short) give rise to vector bundles of coinvariants on moduli of smooth, pointed, coordinatized curves [10]. To extend these to singular curves, coinvariants must deform as expected on smoothings of nodes, maintaining the same rank for singular curves as for smooth ones. By Theorem 5.0.3, this holds when coinvariants form coherent sheaves and the mode transition algebras (defined below) admit multiplicative identities with certain properties. Consequently, by Corollary 5.2.6 one obtains a potentially rich source of vector bundles, including, as in Remark 5.2.7 (b), the well-known class given by rational and \(C_{2}\)-cofinite VOAs [12, 13, 14, 15], and, by Corollary 7.4.1, a new family on moduli of stable pointed rational curves from modules over the Heisenberg VOA, which is neither \(C_{2}\)-cofinite nor rational. Vector bundles are valuable--their characteristic classes, degeneracy loci, and section rings have been instrumental in the understanding of moduli of curves (e.g. [12, 13, 15, 16, 17, 18]). Higher level Zhu algebras are known to be essential to the study of the representation theory of VOAs, yet basic questions about their structure remain open. Via Theorem 6.0.1, the mode transition algebras also give a new perspective on these higher level Zhu algebras. As an application, we prove [1, Conjecture 8.1], thereby giving an explicit description of the higher level Zhu algebras for the Heisenberg VOA. This is done in Section 7 by analyzing the mode transition algebras associated to this VOA. To describe our results more precisely, we set a small amount of notation, with more details given below. We assume that \(V\) is a vertex operator algebra of CFT type. While mode transition algebras have applications in both VOA theory and algebraic geometry, we begin by describing the geometric problem which motivated their definition. By [1], the sheaves of coinvariants are coherent when defined by modules over a \(C_{2}\)-cofinite VOA.
By [1], coherence is also known to hold for some sheaves given by representations of VOAs that are \(C_{1}\)-cofinite and not \(C_{2}\)-cofinite. It is natural then to ask when such coherent sheaves are vector bundles, as they were shown to be if \(V\) is both \(C_{2}\)-cofinite and rational [13, 1, 14, 15]. One may check coherent sheaves are locally free by proving they are flat. This may be achieved using Grothendieck's valuative criterion of flatness. The standard procedure, first carried out in [13, Theorem 6.2.1] and then followed in [13, Theorem 8.4.5] and [15, VB Corollary], is to argue inductively, using the factorization property ([13, Theorem 6.2.6], [13, Theorem 8.4.3], [15, Theorem 7.0.1]). However, here by [1, Proposition 7.1], factorization is not available, since we do not assume that the VOA is rational or \(C_{2}\)-cofinite. Instead, as explained in the proof of Corollary 5.2.6, the geometric insight here is that, in place of factorization, one may show that ranks of coinvariants are constant as nodes are smoothed in families. This relies on the mode transition algebras admitting multiplicative identities that also act as identity elements on modules (we call these _strong unities_). The base case then follows from the assumption of coherence and the fact that sheaves of coinvariants support a projectively flat connection on moduli of smooth curves [10, 15]. We refer to this process as smoothing of coinvariants. Let us now summarize the strategy. For simplicity, let \(\mathscr{C}_{0}\) be a projective curve over \(\mathbb{C}\) with a single node \(Q\), \(n\) smooth points \(P_{\bullet}=(P_{1},\dots,P_{n})\), and formal coordinates \(t_{\bullet}=(t_{1},\dots,t_{n})\) at \(P_{\bullet}\). Let \(W^{1},\dots,W^{n}\) be an \(n\)-tuple of \(V\)-modules, or equivalently of smooth \(\mathscr{U}\)-modules, where \(\mathscr{U}\) is the universal enveloping algebra associated to \(V\) (defined in detail in Section 2). We assume each \(V\)-module \(W^{i}\) is generated by a module \(W^{i}_{0}\) over the (level zero) Zhu algebra \(\mathsf{A}_{0}(V):=\mathsf{A}\). The vector space of coinvariants \([W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\) is the largest quotient of \(W^{\bullet}=W^{1}\otimes\dots\otimes W^{n}\) on which the chiral Lie algebra \(\mathcal{L}_{\mathscr{C}_{0}\setminus P_{\bullet}}(V)\) acts trivially (described here in Section 4). Coinvariants at \(\mathscr{C}_{0}\) are related to those on the normalization \(\eta:\widetilde{\mathscr{C}_{0}}\to\mathscr{C}_{0}\) of \(\mathscr{C}_{0}\) at \(Q\). Namely, by [15] the map \[\alpha_{0}\colon W^{\bullet}\to W^{\bullet}\otimes\mathsf{A},\qquad u\mapsto u\otimes 1^{\mathsf{A}},\] gives rise to an \(\mathcal{L}_{\mathscr{C}_{0}\setminus P_{\bullet}}(V)\)-module map, inducing a map between spaces of coinvariants \[[\alpha_{0}]\colon[W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\to[W^{\bullet}\otimes\Phi(\mathsf{A})]_{(\widetilde{\mathscr{C}_{0}},P_{\bullet}\cup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}.\] Here \(\Phi(\mathsf{A})\) is a \(\mathscr{U}\)-bimodule assigned at the points \(Q_{\pm}\) lying over \(Q\) (so that \(\eta^{-1}(Q)=\{Q_{+},Q_{-}\}\)), and \(s_{\pm}\) are formal coordinates at \(Q_{\pm}\). By [1], the map \([\alpha_{0}]\) is an isomorphism if \(V\) is \(C_{1}\)-cofinite.
One may extend the nodal curve \(\mathscr{C}_{0}\) to a smoothing family \((\mathscr{C},P_{\bullet},t_{\bullet})\) over the scheme \(S=\operatorname{Spec}(\mathbb{C}[q])\), with special fiber \((\mathscr{C}_{0},P_{\bullet},t_{\bullet})\) and smooth generic fiber, while one may trivially extend \(\widetilde{\mathscr{C}_{0}}\) to a family \((\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})\) over \(S\). While the central fibers of these two families of curves are related by normalization, there is no map between \(\widetilde{\mathscr{C}}\) and \(\mathscr{C}\). However, for \(V\) rational and \(C_{2}\)-cofinite, sheaves of coinvariants on \(S\) are naturally isomorphic, an essential ingredient in the proof that such sheaves are locally free under these assumptions [10]. To obtain an analogous isomorphism of coinvariants under less restrictive conditions, our main idea is to generalize the algebra structure of \(X_{d}\otimes X_{d}^{\vee}\subset\Phi(\mathsf{A})\), which exists for simple admissible \(V\)-modules \(X=\bigoplus_{d}X_{d}\), and for all \(d\in\mathbb{N}\) (see Example 3.3.2). Namely, we show that \(\Phi(\mathsf{A})\) has the structure of a bi-graded algebra, which we call the _mode transition algebra_ and denote \(\mathfrak{A}=\bigoplus_{d_{1},d_{2}\in\mathbb{Z}}\mathfrak{A}_{d_{1},d_{2}}\). We show that \(\mathfrak{A}\) acts on generalized Verma modules \(\Phi^{\mathsf{L}}(W_{0})=\bigoplus_{d}W_{d}\), such that the subalgebras \(\mathfrak{A}_{d}:=\mathfrak{A}_{d,-d}\), which we refer to as the _\(d\)th mode transition algebras_, act on the degree \(d\) components \(W_{d}\) (these terms are defined in Section 3.1 and Section 3.2). We say that \(V\) satisfies smoothing (Definition 5.0.1) if, for every pair \((W^{\bullet},\mathscr{C}_{0})\), consisting of \(n\) admissible \(V\)-modules \(W^{\bullet}\), not all trivial, and a stable \(n\)-pointed curve \(\mathscr{C}_{0}\) with a node, there exists an element \(\mathscr{I}=\sum_{d\geq 0}\mathscr{I}_{d}q^{d}\in\mathfrak{A}[\![q]\!]\) such that the map \[\alpha\colon W^{\bullet}[\![q]\!]\longrightarrow(W^{\bullet}\otimes\mathfrak{A})[\![q]\!],\qquad u\mapsto u\otimes\mathscr{I},\] is an \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\)-module homomorphism which extends \(\alpha_{0}\). In Theorem 5.0.3, we equate smoothing for \(V\) with a property of multiplicative identity elements in the \(d\)th mode transition algebras \(\mathfrak{A}_{d}\), when they exist. Specifically, if \(\mathfrak{A}_{d}\) admits an identity element \(\mathscr{I}_{d}\in\mathfrak{A}_{d}\) satisfying any of the equivalent properties of Definition/Lemma 3.3.1, then we say that \(\mathscr{I}_{d}\) is a _strong unity_. **Theorem (5.0.3)**.: _Let \(V\) be a VOA of CFT-type. The algebras \(\mathfrak{A}_{d}=\mathfrak{A}_{d}(V)\) admit strong unities for all \(d\in\mathbb{N}\) if and only if \(V\) satisfies smoothing._ We remark that the analogue of \(\alpha\) is called the sewing map in [11, 12, 13, 14]. As an application of Theorem 5.0.3, we obtain geometric consequences stated as Corollary 5.2.1 and Corollary 5.2.6, a particular case of which is as follows: **Corollary (5.2.6)**.: _If \(V\) is \(C_{2}\)-cofinite and satisfies smoothing, then \(\mathbb{V}(V;W^{\bullet})\) is a vector bundle on \(\overline{\mathcal{M}}_{g,n}\) for simple \(V\)-modules \(W^{1},\cdots,W^{n}\)._ By Example 3.3.2, rational VOAs satisfy smoothing, so Corollary 5.2.6 specializes to [10].
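For orientation, we sketch (this is our gloss, not a claim made in the text) the shape of the strong unities in the rational setting referenced above via Example 3.3.2: if \(\{b_{i}\}\) is a basis of the finite-dimensional degree-\(d\) component \(X_{d}\) of a simple admissible \(V\)-module \(X\), with dual basis \(\{b_{i}^{\vee}\}\) of \(X_{d}^{\vee}\), then the canonical element
\[\mathscr{I}_{d}:=\sum_{i}b_{i}\otimes b_{i}^{\vee}\in X_{d}\otimes X_{d}^{\vee}\subset\Phi(\mathsf{A})\]
corresponds to \(\operatorname{id}_{X_{d}}\) under \(X_{d}\otimes X_{d}^{\vee}\cong\operatorname{End}(X_{d})\), and is therefore the multiplicative identity of the algebra \(X_{d}\otimes X_{d}^{\vee}\). Elements of exactly this shape appear in the sewing maps of the works cited above.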
As is shown in Corollary 7.4.1, one can apply the full statement of Corollary 5.2.6 to show that modules over the Heisenberg VOA (which is \(C_{1}\)-cofinite, but neither \(C_{2}\)-cofinite nor rational) define vector bundles on moduli of stable pointed rational curves. Theorem 6.0.1, described next, gives further tools for investigating other VOAs which may or may not satisfy smoothing, by providing information about the relationship between mode transition algebras and higher level Zhu algebras and their representations. Recall that in [15] Zhu defines a two step induction functor, which in the first part takes \(\mathsf{A}=\mathsf{A}(V)\)-modules to \(V\)-modules through a Verma module construction, and then in the second step takes a quotient. In Definition 3.1.1 we describe this first step with a different, although naturally isomorphic, functor \(\Phi^{\mathsf{L}}\), a crucial ingredient of this work. Through this functor (naturally isomorphic to \(\Phi^{\mathsf{L}}\)), Zhu shows that there is a bijection between simple \(\mathsf{A}\)-modules and simple \(V\)-modules, so that if \(\mathsf{A}\) is finite dimensional and semi-simple, \(\Phi^{\mathsf{L}}\) describes the category of admissible \(V\)-modules. However, if \(\mathsf{A}\) is either not finite-dimensional or is not semi-simple, then there are indecomposable but non-simple \(V\)-modules not induced from simple indecomposable modules over \(\mathsf{A}\) via \(\Phi^{\mathsf{L}}\). To describe such modules, [10] defined the higher level Zhu algebras \(\mathsf{A}_{d}\) for \(d\in\mathbb{N}\), further studied in [11]. The mode transition algebras \(\mathfrak{A}_{d}\) are related to the higher level Zhu algebras \(\mathsf{A}_{d}\). For instance, \(\mathfrak{A}_{0}=\mathsf{A}_{0}=\mathsf{A}\) (Remark 3.2.4), and by Lemma B.3.1, there is a sequence \[0\longrightarrow\mathfrak{A}_{d}\xrightarrow{\ \mu_{d}\ }\mathsf{A}_{d}\longrightarrow\mathsf{A}_{d-1}\longrightarrow 0, \tag{1}\] which is exact at \(\mathsf{A}_{d}\) and at \(\mathsf{A}_{d-1}\). When \(\mathfrak{A}_{d}\) admits a unity (and not necessarily a strong unity), \(\mu_{d}\) is injective and the sequence splits by Part (a) of Theorem 6.0.1. In particular, if \(V\) is \(C_{2}\)-cofinite, as observed in [15, 16, 17, 18], the \(d\)th mode transition algebras \(\mathfrak{A}_{d}\) are finite dimensional; hence if \(\mathfrak{A}_{j}\) admits a unity for all \(j\leq d\), then \(\mathsf{A}_{d}\) will be finite dimensional as well. We note that (1) may be exact when \(\mathfrak{A}_{d}\) does not admit a unity. For instance, in Section 8 we show exactness of (1) when \(d=1\) for the Virasoro VOA \(\operatorname{Vir}_{c}\), for any value of \(c\), and use it to show that \(\mathfrak{A}_{1}\) does not admit a unity. In particular, by Theorem 5.0.3, one finds that \(\operatorname{Vir}_{c}\) never satisfies smoothing. When \(c\) is in the discrete series, the maximal simple quotient \(L_{c}\) of \(\operatorname{Vir}_{c}\) is rational and \(C_{2}\)-cofinite, hence by Remark 5.2.7 (b) (or by [10]) it will satisfy smoothing. Theorem 6.0.1 allows one to use the mode transition algebras to obtain other valuable structural information about the higher level Zhu algebras. **Theorem (6.0.1)**.: 1. _If the_ \(d\)_th mode transition algebra_ \(\mathfrak{A}_{d}\) _admits a unity, then_ \((1)\) _is split exact, and_ \(\mathsf{A}_{d}\cong\mathfrak{A}_{d}\times\mathsf{A}_{d-1}\) _as rings. In particular, if_ \(\mathfrak{A}_{d}\) _admits a unity for every_ \(d\in\mathbb{Z}_{\geq 0}\) _then_ \(\mathsf{A}_{d}\cong\mathfrak{A}_{d}\oplus\mathfrak{A}_{d-1}\oplus\cdots\oplus\mathfrak{A}_{0}\)_._ 2.
_For_ \(\mathfrak{A}=\mathfrak{A}(V)\)_, if_ \(\mathfrak{A}_{d}\) _admits a strong unity for all_ \(d\in\mathbb{N}\)_, so that smoothing holds for_ \(V\)_, then given any generalized Verma module_ \(W=\Phi^{\mathsf{L}}(W_{0})=\oplus_{d\in\mathbb{N}}W_{d}\) _where_ \(L_{0}\) _acts on_ \(W_{0}\) _as a scalar with eigenvalue_ \(c_{W}\in\mathbb{C}\)_, there is no proper submodule_ \(Z\subset W\) _with_ \(c_{Z}-c_{W}\in\mathbb{Z}_{>0}\) _for every eigenvalue_ \(c_{Z}\) _of_ \(L_{0}\) _on_ \(Z\)_._ We refer to Section 3.1 for a discussion about generalized Verma modules. We note that by Lemma B.3.1 and Theorem B.3.3, the exact sequence (1) as well as Part _(a)_ of Theorem 6.0.1 hold for generalized higher Zhu algebras and generalized \(d\)th mode transition algebras (see Definition B.1.1 and Definition B.2.6). For further discussion see Section 9.3. We now describe some further consequences of Theorem 5.0.3 and Theorem 6.0.1. In Section 7 we describe the \(d\)th mode transition algebras \(\mathfrak{A}_{d}\) for the Heisenberg vertex algebra \(M_{a}(1)\) (denoted \(\pi\) in [12]), and show the \(\mathfrak{A}_{d}\) admit strong unities for all \(d\in\mathbb{N}\). In particular, Theorem 6.0.1 and Proposition 7.2.1 imply that the conjecture of Addabbo and Barron [1, Conj 8.1] holds, and one can write \[\mathsf{A}_{d}(\pi)=\mathsf{A}_{d}(M_{a}(1))\cong\bigoplus_{j=0}^{d}\operatorname{Mat}_{p(j)}(\mathbb{C}[x]),\] where \(p(j)\) is the number of ways to decompose \(j\) into a sum of positive integers, with \(p(0)=1\) (for example, \(p(1)=1\) and \(p(2)=2\), the latter counting the decompositions \(2\) and \(1+1\)). The level one Zhu algebra \(A_{1}(M_{a}(1))\) was first constructed in the paper [10], and then later announced in [10]. In [1] the authors determine \(A_{2}(M_{a}(1))\) using the infrastructure for finding generators and relations for higher level Zhu algebras they had developed in [1]. In Section 9.1.2, we use Part (b) of Theorem 6.0.1 to show that the family of triplet vertex operator algebras \(\mathcal{W}(p)\) does not satisfy smoothing. We do this by giving an explicit pair of modules \(Z\subset W\) with \(W=\Phi^{\mathsf{L}}(W_{0})\) and such that \(c_{Z}>c_{W}\). The actual pair of modules used was suggested to us by Thomas Creutzig (with some details filled in by Simon Wood). Drazen Adamovic had also sketched for us the existence of such an example. The importance of this example is that it establishes that smoothing is not guaranteed to hold for a \(C_{2}\)-cofinite VOA if rationality is not assumed. In particular, while sheaves of coinvariants defined by the representations of \(C_{2}\)-cofinite VOAs are coherent, this can be seen as an indication that they may not necessarily be locally free. Taken together with the family of Heisenberg vector bundles from Corollary 7.4.1, this example illustrates the subtlety of the problem of determining which sheaves of coinvariants define vector bundles. We also expect that smoothing will not hold for the family of symplectic fermion algebras \(\operatorname{SF}_{d}^{+}\), which are \(C_{2}\)-cofinite and not rational, since \(\operatorname{SF}_{1}^{+}=\mathcal{W}(2)\). It is natural to ask whether there is an example of a vertex operator algebra that is \(C_{2}\)-cofinite, is not rational, and satisfies smoothing (see Section 9.1). Finally, we emphasize that our procedure to use smoothing to show that sheaves of coinvariants are locally free is just one approach to this problem (see Section 9.2). ### Plan of the paper In Section 1, we set the terminology used here for vertex operator algebras and their representations.
In Section 2, we provide detailed descriptions of the universal enveloping algebra \(\mathscr{U}\) associated to a vertex operator algebra \(V\). Technical details appear in Appendix A, where we give an axiomatic treatment of the constructions of the graded and filtered enveloping algebras as topological or semi-normed algebras. The concepts discussed involving filtered and graded completions can be found throughout the VOA literature (for instance in [11, 12, 13, 14, 15, 16]), but little is said about how they relate to one another. We discuss these relations in Section 2. In Section 3 we give an alternative construction of the generalized Verma module functor \(\Phi^{\mathsf{L}}\) (and its right analogue \(\Phi^{\mathsf{R}}\)) from the category of \(\mathsf{A}\)-modules to the category of smooth left (and right) \(\mathscr{U}\)-modules. We use a combination of \(\Phi^{\mathsf{L}}\) and \(\Phi^{\mathsf{R}}\) to define the mode transition algebras \(\mathfrak{A}_{d}\subset\mathfrak{A}\). More general versions of these constructions are defined in Appendix B, where their analogous properties are proved. In Section 4, smoothing is formally defined; we also describe sheaves of coinvariants on families of pointed and coordinatized curves in general terms, citing the relevant references. In Section 5 we prove Theorem 5.0.3, Corollary 5.2.1, and Corollary 5.2.6. In Section 6 we prove Theorem 6.0.1 Part _(b)_, while Part _(a)_ is detailed in Appendix B. In Section 7 we compute the mode transition algebras \(\mathfrak{A}_{d}\) for the Heisenberg algebra for all \(d\). In Section 8 we compute the 1st mode transition algebras for the non-discrete series Virasoro VOAs. We ask a number of questions in Section 9. In Section 9.1 and Section 9.2 we discuss questions about \(C_{2}\)-cofinite, non-rational VOAs that may not satisfy smoothing, and ask whether their induced sheaves of coinvariants may still define vector bundles. Finally, as noted, many of the results here are stated and proved for generalizations of higher level Zhu algebras and of mode transition algebras, and in Section 9.3 we raise the question of finding other examples and applications of such algebraic structures, beyond those naturally associated to a vertex operator algebra. ### Acknowledgements We would like to thank Drazen Adamovic, Katrina Barron, Thomas Creutzig, Haisheng Li, Jianqi Liu, Antun Milas, Rahul Pandharipande, Dennis Sullivan, and Simon Wood for helpful discussions and for answering our questions. Thomas Creutzig and Simon Wood gave crucial assistance with the details of Section 9.1.2. We thank Katrina Barron for her valuable feedback at various points. Most of all, we thank Yi-Zhi Huang, who encouraged us from the beginning to consider questions regarding sewing when rationality is not assumed. Gibney was supported by NSF DMS-2200862, and Krashen was supported by NSF DMS-1902237. ## 1. Background on VOAs and their modules In Section 1.1 we state the conventions we follow for vertex operator algebras and their representations. Throughout this paper, by an algebra we mean an associative algebra which is not necessarily commutative, and by a ring we mean an algebra over \(\mathbb{Z}\). We refer to [10, 11, 12] for more details about vertex operator algebras and their modules. ### VOAs and their representations We recall here the definition of a vertex operator algebra of CFT type, which in this paper will simply be denoted _VOA_.
**Definition 1.1.1**.: A _vertex operator algebra of CFT-type_ is a four-tuple \((V,\mathbf{1},\omega,Y(\cdot,z))\) where: 1. \(V=\bigoplus_{i\in\mathbb{N}}V_{i}\) is a vector space with \(\dim V_{i}<\infty\), and \(\dim V_{0}=1\); 2. \(\mathbf{1}\) is an element in \(V_{0}\), called the _vacuum vector_; 3. \(\omega\) is an element in \(V_{2}\), called the _conformal vector_; 4. \(Y(\cdot,z)\colon V\to\operatorname{End}(V)[\![z,z^{-1}]\!]\) is a linear map \(a\mapsto Y(a,z):=\sum_{m\in\mathbb{Z}}a_{(m)}z^{-m-1}\). The series \(Y(a,z)\) is called the _vertex operator_ assigned to \(a\in V\). These data are subject to the following axioms: 1. _(vertex operators are fields)_ for all \(a\), \(b\in V\), \(a_{(m)}b=0\), for \(m\gg 0\); 2. _(vertex operators of the vacuum)_\(Y(\mathbf{1},z)=\operatorname{id}_{V}\), that is \[\mathbf{1}_{(-1)}=\operatorname{id}_{V}\qquad\text{and}\qquad\mathbf{1}_{(m)}=0,\quad\text{for }m\neq-1,\] and for all \(a\in V\), \(\ Y(a,z)\mathbf{1}\in a+zV[\![z]\!]\), that is \[a_{(-1)}\mathbf{1}=a\qquad\text{and}\qquad a_{(m)}\mathbf{1}=0,\qquad\text{for }m\geq 0;\] 3. _(weak commutativity)_ for all \(a\), \(b\in V\), there exists an \(N\in\mathbb{N}\) such that \[(z_{1}-z_{2})^{N}\left[Y(a,z_{1}),Y(b,z_{2})\right]=0\quad\text{in }\operatorname{End}(V)[\![z_{1}^{\pm 1},z_{2}^{\pm 1}]\!];\] 4. _(conformal structure)_ for \(Y(\omega,z)=\sum_{m\in\mathbb{Z}}\omega_{(m)}z^{-m-1}\), \[\left[\omega_{(p+1)},\omega_{(q+1)}\right]=(p-q)\,\omega_{(p+q+1)}+\frac{c}{12}\,\delta_{p+q,0}\,(p^{3}-p)\,\mathrm{id}_{V}.\] Here \(c\in\mathbb{C}\) is the _central charge_ of \(V\). Moreover: \[\omega_{(1)}|_{V_{m}}=m\cdot\mathrm{id}_{V},\quad\text{for all }m,\qquad\text{and}\qquad Y\left(\omega_{(0)}a,z\right)=\frac{d}{dz}Y(a,z).\] **Definition 1.1.2**.: An _admissible \(V\)-module_ is a \(\mathbb{C}\)-vector space \(W\) together with a linear map \[Y^{W}(\cdot,z)\colon V\to\operatorname{End}(W)[\![z,z^{-1}]\!],\ a\in V\mapsto Y^{W}(a,z):=\sum_{m\in\mathbb{Z}}a_{(m)}^{W}z^{-m-1},\] which satisfies the following axioms: 1. _(vertex operators are fields)_ if \(a\in V\) and \(u\in W\), then \(a_{(m)}^{W}u=0\), for \(m\gg 0\); 2. _(vertex operators of the vacuum)_ \(Y^{W}\left(\mathbf{1},z\right)=\mathrm{id}_{W}\); 3. _(weak commutativity)_ for all \(a\), \(b\in V\), there exists an \(N\in\mathbb{N}\) such that for all \(u\in W\) \[(z_{1}-z_{2})^{N}\left[Y^{W}(a,z_{1}),Y^{W}(b,z_{2})\right]u=0;\] 4. _(weak associativity)_ for all \(a\in V\) and \(u\in W\), there exists an \(N\in\mathbb{N}\), such that for all \(b\in V\), one has \[(z_{1}+z_{2})^{N}\left(Y^{W}(Y(a,z_{1})b,z_{2})-Y^{W}(a,z_{1}+z_{2})Y^{W}(b,z_{2})\right)u=0;\] 5. _(conformal structure)_ for \(Y^{W}(\omega,z)=\sum_{m\in\mathbb{Z}}\omega_{(m)}^{W}z^{-m-1}\), one has \[\left[\omega_{(p+1)}^{W},\omega_{(q+1)}^{W}\right]=(p-q)\,\omega_{(p+q+1)}^{W}+\frac{c}{12}\,\delta_{p+q,0}\,(p^{3}-p)\,\mathrm{id}_{W},\] where \(c\in\mathbb{C}\) is the central charge of \(V\). Moreover \(Y^{W}\left(L_{-1}a,z\right)=\frac{d}{dz}Y^{W}(a,z)\); 6. (\(\mathbb{N}\)_-gradability)_ \(W\) admits a grading \(W=\bigoplus\limits_{n\in\mathbb{N}}W_{n}\) with \(a_{(m)}^{W}W_{n}\subset W_{n+\deg(a)-m-1}\). As one can see in the literature, e.g.
by [13, 14, 15, 16], weak associativity and weak commutativity together are equivalent to the _Jacobi identity_: for \(\ell\), \(m\), \(n\in\mathbb{Z}\), and \(a\), \(b\in V\), \[\sum_{i\geq 0}(-1)^{i}\binom{\ell}{i}\left(a_{(m+\ell-i)}^{W}b_{(n+i)}^{W}-(-1)^{\ell}b_{(n+\ell-i)}^{W}a_{(m+i)}^{W}\right)=\sum_{i\geq 0}\binom{m}{i}(a_{(\ell+i)}(b))_{(m+n-i)}^{W}.\] Moreover, by [13, Lemma 2.2], the conformal structure axiom (5) is redundant. ## 2. The universal enveloping algebra of a VOA Here we describe constructions of the universal enveloping algebra associated to a VOA \(V\), as quotients of certain graded, as well as (left and right) filtered, completions of the universal enveloping algebra of the Lie algebra associated to \(V\). Filtered completions are essential to our constructions, as they are compatible with crucial restriction maps from the Chiral Lie algebra to certain ancillary Lie algebras, allowing for the definition of the action of the Chiral Lie algebra on (tensor products of) \(V\)-modules. The graded completion, on the other hand, allows both for ease of computation and for simpler descriptions of induced modules and bimodules, and, in Section 3.2, of the mode transition algebras. While these concepts are treated in one way or another throughout the VOA literature, for instance in [13, 14, 15, 16, 17], we provide here and in Appendix A a uniform description, where many details are given, clarifications are made, and the different constructions are compared to one another. We further remark that, although this section assumes that \(V\) is a vertex operator algebra, all the arguments and constructions in Section 2 hold assuming only that \(V\) is a graded vertex algebra, since the conformal structure does not play a role (see also Section 9.3). ### Graded and filtered completions We recall the constructions of the universal enveloping algebra [14] and the current algebra [18] associated to a VOA \(V\). ### Split filtrations The underlying vector spaces of the objects we will need to consider will either be graded or filtered (sometimes both), and these filtrations and gradings will be related to each other. The basic example of this is the space of Laurent polynomials \(\mathbb{C}[t,t^{-1}]\), which we choose to grade by \(\mathbb{C}[t,t^{-1}]_{n}=\mathbb{C}t^{-n-1}\), and the space of Laurent series \(\mathbb{C}(\!(t)\!)\), which admits an increasing filtration by setting \(\mathbb{C}(\!(t)\!)_{\leq n}:=t^{-n-1}\mathbb{C}[\![t]\!]\). We will refer to this filtration as a _left filtration_ of \(\mathbb{C}(\!(t)\!)\) (see Definition A.1.1). In this situation, when we have a graded subspace of a filtered space \(\mathbb{C}[t,t^{-1}]\subset\mathbb{C}(\!(t)\!)\) which identifies the degree \(n\) part of the graded subspace with the degree \(n\) part of the associated graded space, we say that this pair gives a _split filtration_ (see Definition A.1.6). Similarly, we refer to the filtration of \(\mathbb{C}(\!(t^{-1})\!)\) given by \(\mathbb{C}(\!(t^{-1})\!)_{\geq n}=t^{-n-1}\mathbb{C}[\![t^{-1}]\!]\) as a _right filtration_, and the pair \(\mathbb{C}[t,t^{-1}]\subset\mathbb{C}(\!(t^{-1})\!)\) is also a split filtration. ### Lie algebras Let \(V\) be a VOA (or a graded vertex algebra). Since it is a graded vector space, it admits a trivial left (respectively right) split-filtration, given by \(V_{\leq n}=\oplus_{d\leq n}V_{d}\) (respectively \(V_{\geq n}=\oplus_{d\geq n}V_{d}\)).
In view of Definition/Lemma A.2.2, tensor products of split-filtered modules are naturally split-filtered, and consequently \[V\otimes_{\mathbb{C}}\mathbb{C}[t,t^{-1}]\subset V\otimes_{\mathbb{C}}\mathbb{C}(\!(t)\!)\quad\text{ and }\quad V\otimes_{\mathbb{C}}\mathbb{C}[t,t^{-1}]\subset V\otimes_{\mathbb{C}}\mathbb{C}(\!(t^{-1})\!)\] define splittings of their left and right filtrations. **Remark 2.3.1**.: Concretely, we can define the map \(\mathsf{val}\colon V\otimes\mathbb{C}(\!(t)\!)\to\mathbb{Z}\) by \[\mathsf{val}(a\otimes f(t))=\deg(a)-N-1,\] for a homogeneous element \(a\in V\) and \(f(t)\in t^{N}\mathbb{C}[\![t]\!]\setminus t^{N+1}\mathbb{C}[\![t]\!]\). The natural left filtration on \(V\otimes_{\mathbb{C}}\mathbb{C}(\!(t)\!)\) is then given by \((V\otimes\mathbb{C}(\!(t)\!))_{\leq n}:=\mathsf{val}^{-1}(-\infty,n]\). The linear map \(\nabla=L_{-1}\otimes\mathrm{id}+\mathrm{id}\otimes\frac{d}{dt}\) is a linear endomorphism of each of these spaces of degree \(-1\) (see Definition A.1.10). We define \[\mathfrak{L}(V)^{\mathsf{L}}=\frac{V\otimes_{\mathbb{C}}\mathbb{C}(\!(t)\!)}{\mathrm{Im}(\nabla)},\quad\mathfrak{L}(V)^{\mathsf{R}}=\frac{V\otimes_{\mathbb{C}}\mathbb{C}(\!(t^{-1})\!)}{\mathrm{Im}(\nabla)},\ \ \text{and}\ \ \mathfrak{L}(V)^{\mathsf{f}}=\frac{V\otimes_{\mathbb{C}}\mathbb{C}[t,t^{-1}]}{\mathrm{Im}(\nabla)}.\] These have induced split-filtrations via \(\mathfrak{L}(V)^{\mathsf{f}}\subset\mathfrak{L}(V)^{\mathsf{L}},\mathfrak{L}(V)^{\mathsf{R}}\) by Lemma A.1.14. These filtered and graded vector spaces admit (filtered and graded) Lie algebra structures, with Lie brackets defined by \[[a\otimes f(t),b\otimes g(t)]:=\sum_{k\geq 0}\frac{1}{k!}\left(a_{(k)}(b)\right)\otimes g(t)\frac{d^{k}(f(t))}{dt^{k}},\] for all \(a,b\in V\) and \(f(t),g(t)\in\mathbb{C}(\!(t)\!)\) or \(f(t),g(t)\in\mathbb{C}(\!(t^{-1})\!)\). More concretely, in the case of \(\mathfrak{L}(V)^{\mathsf{f}}\), for \(a\in V\) and \(i\in\mathbb{Z}\) we denote by \(a_{[i]}\) the class of the element \(a\otimes t^{i}\) in \(\mathfrak{L}(V)^{\mathsf{f}}\subset\mathfrak{L}(V)^{\mathsf{L}},\mathfrak{L}(V)^{\mathsf{R}}\). The restriction of the Lie bracket to \(\mathfrak{L}(V)^{\mathsf{f}}\) is then given by the formula \[\left[a_{[i]},b_{[j]}\right]:=\sum_{k\geq 0}\binom{i}{k}\left(a_{(k)}(b)\right)_{[i+j-k]},\] for all \(a,b\in V\) and \(i,j\in\mathbb{Z}\). Extending the notation introduced in [10], we call \(\mathfrak{L}(V)^{\mathsf{L}}\) and \(\mathfrak{L}(V)^{\mathsf{R}}\) the left and right _ancillary Lie algebras_ and \(\mathfrak{L}(V)^{\mathsf{f}}\) the _finite ancillary Lie algebra_. Note that \(\mathfrak{L}(V)^{\mathsf{L}}\) is isomorphic to the _current Lie algebra_ \(\mathfrak{g}(V)\) from [11]. ### Enveloping algebras We now let \(U^{\mathsf{L}}\), \(U^{\mathsf{R}}\), \(U\) be the universal enveloping algebras of \(\mathfrak{L}(V)^{\mathsf{L}},\mathfrak{L}(V)^{\mathsf{R}},\mathfrak{L}(V)^{\mathsf{f}}\) respectively. These enveloping algebras are left filtered, right filtered, and graded respectively, and we note that \(U\subset U^{\mathsf{L}},\,U^{\mathsf{R}}\) again give splittings of the respective filtrations (see Lemma A.3.4). In the language of Definition A.9.1 we will say that \((\,U^{\mathsf{L}},\,U,\,U^{\mathsf{R}})\) forms a good triple of associative algebras. **Example 2.4.1**.: These induced filtrations can be explicitly described.
For instance, we have that \(\,U_{d}\) is linearly spanned by elements \(\ell^{1}\cdots\ell^{k}\) such that \(\ell^{i}\in\mathfrak{L}(V)^{\mathsf{f}}_{d_{i}}\) and \(\sum_{i}d_{i}=d\) (with \(k\) possibly zero if \(d=0\)). Analogously, \((\,U^{\mathsf{L}})_{\leq d}\) is linearly spanned by elements \(\ell^{1}\cdots\ell^{k}\) such that \(\ell^{i}\in\mathfrak{L}(V)^{\mathsf{L}}_{\leq d_{i}}\) and \(\sum_{i}d_{i}=d\) (with \(k\) possibly zero if \(d\geq 0\)). These enveloping algebras carry the additional structure of a topology induced by seminorms, which can be described in terms of systems of neighborhoods of \(0\) (as in Definition A.4.1 and Remark A.4.2). These neighborhoods of \(0\) are given by left ideals \(\mathrm{N}^{n}_{\mathsf{L}}\,U^{\mathsf{L}}\) of \(\,U^{\mathsf{L}}\) and \(\mathrm{N}^{n}_{\mathsf{L}}\,U\) of \(\,U\), and right ideals \(\mathrm{N}^{n}_{\mathsf{R}}\,U^{\mathsf{R}}\) of \(\,U^{\mathsf{R}}\) and \(\mathrm{N}^{n}_{\mathsf{R}}\,U\) of \(\,U\), defined by: \[\mathrm{N}^{n}_{\mathsf{L}}\,U^{\mathsf{L}}=\,U^{\mathsf{L}}\,U^{\mathsf{L}}_{\leq-n},\quad\mathrm{N}^{n}_{\mathsf{L}}\,U=\,UU_{\leq-n},\quad\mathrm{N}^{n}_{\mathsf{R}}\,U^{\mathsf{R}}=\,U^{\mathsf{R}}_{\geq n}\,U^{\mathsf{R}},\quad\mathrm{N}^{n}_{\mathsf{R}}\,U=\,U_{\geq n}\,U.\] This definition coincides with that of a canonical seminorm on a (split-)filtered algebra, as described in Definition A.6.1, and in particular gives a good seminorm on the triple (see Definition A.9.3 and Remark A.9.7). Most useful for us is that the category of good triples of algebras with good seminorms is closed under quotients and completions (Corollary A.9.9 and Corollary A.9.10). We can restrict these seminorms to various filtered and graded parts of these algebras as in Definition A.4.4. For example, we write \(\operatorname{N}_{\mathsf{L}}^{n}U^{\mathsf{L}}_{\leq p}\) to denote \(\operatorname{N}_{\mathsf{L}}^{n}(U^{\mathsf{L}})\cap\,U^{\mathsf{L}}_{\leq p}\). Concretely, we obtain systems of neighborhoods as follows (see Lemma A.3.2 for more details): \[\operatorname{N}_{\mathsf{L}}^{n}U^{\mathsf{L}}_{\leq p}=(\,U^{\mathsf{L}}\,U^{\mathsf{L}}_{\leq-n})_{\leq p}=\sum_{j\leq-n}\,U^{\mathsf{L}}_{\leq p-j}\,U^{\mathsf{L}}_{\leq j},\qquad\operatorname{N}_{\mathsf{R}}^{n}U^{\mathsf{R}}_{\geq p}=(\,U^{\mathsf{R}}_{\geq n}\,U^{\mathsf{R}})_{\geq p}=\sum_{i\geq n}\,U^{\mathsf{R}}_{\geq i}\,U^{\mathsf{R}}_{\geq p-i}\] \[\operatorname{N}_{\mathsf{L}}^{n}U_{p}=(\,UU_{\leq-n})_{p}=\sum_{j\leq-n}\,U_{p-j}\,U_{j},\ \text{ and }\qquad\operatorname{N}_{\mathsf{R}}^{n}U_{p}=(\,U_{\geq n}\,U)_{p}=\sum_{i\geq n}\,U_{i}\,U_{p-i}.\] We note that in particular, we have \(\operatorname{N}_{\mathsf{R}}^{n+p}U_{p}=\operatorname{N}_{\mathsf{L}}^{n}U_{p}\). Through the restriction of the seminorm to these subspaces, we then define filtered completions of \(U^{\mathsf{L}}\) and \(U^{\mathsf{R}}\), both containing a graded completion of \(U\) (see Definition/Lemma A.5.6).
Specifically, we set \[\widehat{U}^{\mathsf{L}}_{d}:=\varprojlim_{n}\frac{U^{\mathsf{L}}_{\leq d}}{\operatorname{N}_{\mathsf{L}}^{n}U^{\mathsf{L}}_{\leq d}},\qquad\widehat{U}^{\mathsf{R}}_{d}:=\varprojlim_{n}\frac{U^{\mathsf{R}}_{\geq d}}{\operatorname{N}_{\mathsf{R}}^{n}U^{\mathsf{R}}_{\geq d}},\qquad\widehat{U}_{d}:=\varprojlim_{n}\frac{U_{d}}{\operatorname{N}_{\mathsf{L}}^{n}U_{d}}=\varprojlim_{n}\frac{U_{d}}{\operatorname{N}_{\mathsf{R}}^{n+d}U_{d}},\] and we set \[\widehat{U}^{\mathsf{L}}:=\bigcup_{d}\,\widehat{U}^{\mathsf{L}}_{d},\qquad\widehat{U}^{\mathsf{R}}:=\bigcup_{d}\,\widehat{U}^{\mathsf{R}}_{d},\qquad\widehat{U}:=\bigoplus_{d}\,\widehat{U}_{d}.\] As previously mentioned, it follows from Corollary A.9.10 that this will result in a good triple \((\,\widehat{U}^{\mathsf{L}},\,\widehat{U},\,\widehat{U}^{\mathsf{R}})\) of associative algebras with good seminorms. Finally, one may construct a graded ideal \(J\) of \(\,\widehat{U}\) generated by the Jacobi relations; namely, \(J\) is generated, for \(\ell\), \(m\), \(n\in\mathbb{Z}\) and \(a\), \(b\in V\), by the relations \[\sum_{i\geq 0}(-1)^{i}\binom{\ell}{i}\left(a_{[m+\ell-i]}b_{[n+i]}-(-1)^{\ell}b_{[n+\ell-i]}a_{[m+i]}\right)=\sum_{i\geq 0}\binom{m}{i}(a_{(\ell+i)}(b))_{[m+n-i]}.\] If we let \(J^{\mathsf{R}}\) and \(J^{\mathsf{L}}\) be the ideals of \(\,\widehat{U}^{\mathsf{R}}\) and \(\,\widehat{U}^{\mathsf{L}}\) generated by \(J\), and we let \(\overline{J},\overline{J}^{\mathsf{R}},\overline{J}^{\mathsf{L}}\) be the respective closures (see Definition/Lemma A.5.10), then we find that \((\overline{J}^{\mathsf{L}},\overline{J},\overline{J}^{\mathsf{R}})\) form a good triple (of nonunital algebras) by Lemma A.9.5 and Lemma A.9.6. Finally, by Corollary A.9.9, we find that the resulting quotient algebras \[\mathscr{U}^{\mathsf{L}}=\,\widehat{U}^{\mathsf{L}}/\overline{J}^{\mathsf{L}},\quad\mathscr{U}=\,\widehat{U}/\overline{J},\quad\mathscr{U}^{\mathsf{R}}=\,\widehat{U}^{\mathsf{R}}/\overline{J}^{\mathsf{R}},\] form a good triple of associative algebras with good seminorms (actually norms). **Definition 2.4.2**.: We call \(\mathscr{U}^{\mathsf{L}},\mathscr{U}^{\mathsf{R}},\mathscr{U}\) the left, right and finite universal enveloping algebras of \(V\), respectively. We note that a \(V\)-module corresponds, in this language, to a \(\mathscr{U}\)-module \(W\) (or a \(\mathscr{U}^{\mathsf{L}}\)-module) such that the action of this normed (and hence topological) algebra is continuous. That is, such that the multiplication map \[\mathscr{U}\times W\to W\quad\text{ or equivalently, }\quad\mathscr{U}^{\mathsf{L}}\times W\to W\] is continuous, where \(W\) is given the discrete topology, and \(\mathscr{U}\) and \(\mathscr{U}^{\mathsf{L}}\) are topologized according to their norms. #### 2.4.3. Relation to the literature We note that \(\mathscr{U}\) coincides with the universal enveloping algebra of \(V\) introduced in [10], while we can identify \(\mathscr{U}^{\mathsf{L}}\) with the current algebra introduced in [11], or with the universal enveloping algebra \(\widetilde{U}(V)\) introduced in [12] with a minor modification (see [11, footnote on p.74]). ### Subalgebras and subquotient algebras We describe here some algebras built from the algebras \(\mathscr{U},\mathscr{U}^{\mathsf{L}},\mathscr{U}^{\mathsf{R}}\) which will play a special role.
By definition, \(\mathscr{U}^{\mathsf{L}}_{\leq-n}\triangleleft\mathscr{U}^{\mathsf{L}}_{\leq 0}\) and \(\mathscr{U}^{\mathsf{R}}_{\geq n}\triangleleft\mathscr{U}^{\mathsf{R}}_{\geq 0}\) are two-sided ideals when \(n>0\), with \[\mathscr{U}^{\mathsf{L}}_{\leq 0}/\mathscr{U}^{\mathsf{L}}_{\leq-1}\cong\mathscr{U}_{0}\cong\mathscr{U}^{\mathsf{R}}_{\geq 0}/\mathscr{U}^{\mathsf{R}}_{\geq 1},\] by the fact that these algebras are part of a good triple. We now look more closely at \(\mathscr{U}_{0}\), which forms a subring of \(\mathscr{U}\). As our triple of algebras is good, the seminorms on our algebras are almost canonical (Definition A.6.8), and in particular, by Definition A.6.8(c), it follows that \(\operatorname{N}^{n}_{\mathsf{L}}\mathscr{U}_{0}=\operatorname{N}^{n}_{\mathsf{R}}\mathscr{U}_{0}\) for every \(n\), so that there is no ambiguity in denoting these neighborhoods by \(\operatorname{N}^{n}\mathscr{U}_{0}\). We also see in the same way that \(\operatorname{N}^{n}\mathscr{U}_{0}\) is a two-sided ideal of \(\mathscr{U}_{0}\). **Definition 2.5.1**.: The _\(d\)th higher level Zhu algebra_ of \(V\) is the quotient \[\mathsf{A}_{d}(V)=\mathscr{U}_{0}/\operatorname{N}^{d+1}\mathscr{U}_{0}.\] For an element \(\alpha\in\mathscr{U}_{0}\), we will write \([\alpha]_{d}\) for its image in \(\mathsf{A}_{d}(V)\). When \(V\) is understood, we will denote \(\mathsf{A}_{d}(V)\) simply by \(\mathsf{A}_{d}\). #### 2.5.2. Relation to the literature In [15] the author defines an associative algebra, now referred to as the (zeroth) Zhu algebra, as the quotient of \(V\) by an appropriate subspace \(O(V)\). In [10, 11] it is shown that this algebra is isomorphic to an appropriate quotient of the degree zero piece of the universal enveloping algebra of \(V\) (or of the current algebra of \(V\)). As mentioned in the introduction, higher level Zhu algebras \(\mathsf{A}_{d}\) were introduced in [14] as quotients of \(V\) by subspaces \(O_{d}(V)\), and were realized as quotients of the degree zero piece of the universal enveloping algebra of \(V\) in [10]. We note further that the isomorphism between \(V/O_{d}(V)\) and \(\mathsf{A}_{d}\) is explicitly realized by identifying \([a]\in V/O_{d}(V)\) with the class of the element \(a_{[\deg(a)-1]}\) in \(\,U_{0}/\operatorname{N}^{d+1}U_{0}\) for every homogeneous element \(a\in V\). ## 3. Induced modules and the mode transition algebra The constructions and results discussed here are true in greater generality, as detailed in Appendix B.1 and in Appendix B.2. For instance, as in Section 2, the constructions mentioned in Section 3.1 and in Section 3.2 hold for a graded vertex algebra. The conformal structure is however used in Section 3.4. ### Induced modules As is the convention, throughout we denote \(\mathsf{A}_{0}\) by \(\mathsf{A}\). **Definition 3.1.1**.: For a left module \(W_{0}\) over \(\mathsf{A}\), we define the _left generalized Verma module_ \(\Phi^{\mathsf{L}}(W_{0})\) as \[\Phi^{\mathsf{L}}(W_{0})=\bigoplus_{p=0}^{\infty}\Phi^{\mathsf{L}}(W_{0})_{p}=\left(\mathscr{U}/\operatorname{N}^{1}_{\mathsf{L}}\mathscr{U}\right)\otimes_{\mathscr{U}_{0}}W_{0}\cong\left(\mathscr{U}^{\mathsf{L}}/\operatorname{N}^{1}_{\mathsf{L}}\mathscr{U}^{\mathsf{L}}\right)\otimes_{\mathscr{U}_{0}}W_{0}. \tag{2}\]
For \(Z_{0}\) a right module over \(\mathsf{A}\), we define the _right generalized Verma module_ \(\Phi^{\mathsf{R}}(Z_{0})\) as \[\Phi^{\mathsf{R}}(Z_{0})=\bigoplus_{p=0}^{\infty}\Phi^{\mathsf{R}}(Z_{0})_{-p}=Z_{0}\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right)\cong Z_{0}\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}^{\mathsf{R}}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}^{\mathsf{R}}\right). \tag{3}\] We note that this is well defined. In fact, the claimed isomorphisms of (2) and (3) follow from Lemma A.8.1, while the grading is explained in Remark B.1.6. Moreover, from Lemma A.8.1 we have that \(\mathscr{U}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}\cong\mathscr{U}^{\mathsf{L}}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}^{\mathsf{L}}\), so that this quotient can be regarded both as a \((\mathscr{U},\mathscr{U}_{0})\) bimodule and as a \((\mathscr{U}^{\mathsf{L}},\mathscr{U}_{0})\) bimodule. In particular, this shows that \(\mathscr{U}^{\mathsf{L}}\) acts on the left on \(\Phi^{\mathsf{L}}(W_{0})\). These functors have a universal property, which we state in Proposition 3.1.2 and prove in Proposition B.1.4. Given a left \(\mathscr{U}\)-module \(W\), we define an \(\mathsf{A}_{n}\)-module \(\Omega_{n}(W)\) by \[\Omega_{n}(W)=\{w\in W\mid(\operatorname{N}_{\mathsf{L}}^{n+1}U)w=0\}.\] **Proposition 3.1.2**.: _Let \(M\) be a \(\mathscr{U}\)-module and \(W_{0}\) an \(\mathsf{A}_{d}\)-module. Then there is a natural isomorphism of bifunctors:_ \[\operatorname{Hom}_{\mathsf{A}_{d}}(W_{0},\Omega_{d}(M))=\operatorname{Hom}_{\mathscr{U}}(\Phi^{\mathsf{L}}_{d}(W_{0}),M).\] **Remark 3.1.3**.: If \(W_{0}\) is finite dimensional over \(\mathbb{C}\), and if \(V\) is \(C_{1}\)-cofinite, then there are a finite number of elements \(x^{1}\), \(x^{2}\), \(\ldots\), \(x^{r}\in V\) such that \(\Phi^{\mathsf{L}}(W_{0})\) is spanned by elements of the form \[x^{1}_{(-m_{1})}\cdot x^{2}_{(-m_{2})}\cdots x^{r}_{(-m_{r})}\otimes u,\] for some \(u\in W_{0}\) and positive integers \(m_{1}\geq m_{2}\geq\cdots\geq m_{r}\geq 1\). #### 3.1.4. Relation to the literature By the universal property in Proposition 3.1.2, the functor \(\Phi^{\mathsf{L}}\) is naturally isomorphic to the functor denoted \(\overline{M}_{0}\) in [20, page 258], [10, (4.4)], and [11, page 3301]. Moreover, one can relate \(\Phi^{\mathsf{L}}\) to \(Q_{n}(d)\) from [16], which, in the language used here, can be written \(Q_{n}(d)=\mathscr{U}_{d}/\operatorname{N}_{\mathsf{L}}^{n}\mathscr{U}_{d}\). In particular, for \(n=1\) this gives the degree \(d\) part of a generalized Verma module \[\Phi^{\mathsf{L}}(W_{0})_{d}=\mathscr{U}_{d}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}_{d}\otimes_{\mathscr{U}_{0}}W_{0}=Q_{1}(d)\otimes_{\mathscr{U}_{0}}W_{0}.\] In [16, Eq 2.6.1] a series of subalgebras (called quasi-finite algebras) of the universal enveloping algebra is defined for all \(d\in\mathbb{N}\), and by [16, Thm 3.3.4], if \(V\) is \(C_{2}\)-cofinite, then for \(d\gg 0\) there is an equivalence of categories between \(A_{d}\)-modules and (admissible) \(V\)-modules. ### Mode transition algebras and their action on modules In this section we introduce the _mode transition algebra_ \(\mathfrak{A}(V)\) associated with a vertex operator algebra \(V\). A general treatment of these algebras is developed in Appendix B, while we list here the principal consequences of the general theory.
We begin by introducing the space underlying \(\mathfrak{A}(V)\), often denoted \(\mathfrak{A}\) when \(V\) is understood. **Definition 3.2.1**.: Let \(V\) be a VOA and \(\mathsf{A}=\mathsf{A}_{0}\) be the Zhu algebra associated to \(V\). We define \(\mathfrak{A}=\mathfrak{A}(V)\) to be the vector space \[\mathfrak{A}=\Phi^{\mathsf{R}}(\Phi^{\mathsf{L}}(\mathsf{A}))=\Phi^{\mathsf{L}}(\Phi^{\mathsf{R}}(\mathsf{A}))=\left(\mathscr{U}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}\right)\otimes_{\mathscr{U}_{0}}\mathsf{A}\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right).\] Moreover, using the notation \(\mathfrak{A}_{d_{1},d_{2}}=\left(\mathscr{U}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}\right)_{d_{1}}\otimes_{\mathscr{U}_{0}}\mathsf{A}\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right)_{d_{2}}\), we write \[\mathfrak{A}=\bigoplus_{d_{1}\in\mathbb{Z}_{\geq 0}}\ \bigoplus_{d_{2}\in\mathbb{Z}_{\leq 0}}\mathfrak{A}_{d_{1},d_{2}}.\] The isomorphism described in the following Lemma is crucial to the definition of an algebra structure on \(\mathfrak{A}\). We refer to Lemma B.2.1 for its proof. **Lemma 3.2.2**.: _There is an isomorphism:_ \[\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right)\otimes_{\mathscr{U}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}\right) \to\mathsf{A}\] \[\overline{\alpha}\otimes\overline{\beta} \mapsto\alpha\otimes\beta\] _where, for \(\alpha,\beta\in\mathscr{U}\) homogeneous, we define \(\alpha\otimes\beta\) as:_ \[\alpha\otimes\beta=\begin{cases}0&\text{if }\deg(\alpha)+\deg(\beta)\neq 0\\ \left[\alpha\beta\right]_{0}&\text{if }\deg(\alpha)+\deg(\beta)=0\end{cases}\] _and we extend the definition to general elements by linearity._ **Example 3.2.3**.: We explicitly describe the element \(\alpha\otimes\beta\in\mathsf{A}\) when \(\alpha\) and \(\beta\) are homogeneous elements of opposite degrees. Three cases can occur: * \(\deg(\alpha)<0\). It follows that \(\deg(\beta)>0\), and so \(\alpha\beta\in\operatorname{N}^{1}\mathscr{U}_{0}\), which gives \(\alpha\otimes\beta=0\). * \(\deg(\alpha)=0=\deg(\beta)\). We have that \(\alpha\otimes\beta=[\alpha]_{0}\cdot[\beta]_{0}\), since the map \(\mathscr{U}_{0}\to\mathsf{A}_{0}\) is a ring homomorphism. * \(\deg(\alpha)>0\). We first rewrite \(\alpha\beta=\beta\alpha+[\alpha,\beta]\). Since \(\beta\alpha\in\operatorname{N}^{1}\mathscr{U}_{0}\), we have that \(\alpha\otimes\beta\) coincides with \([\,[\alpha,\beta]\,]_{0}\). Note that if \(\alpha,\beta\in\mathfrak{L}(V)^{\mathsf{f}}\), then \([\alpha,\beta]\in\mathfrak{L}(V)_{0}^{\mathsf{f}}\), so the above description tells us that \(\alpha\otimes\beta\) is computed via the standard map \(\mathfrak{L}(V)_{0}^{\mathsf{f}}\to\mathsf{A}\) described in [19]. **Remark 3.2.4**.: We note that \(\mathfrak{A}_{0,0}=\mathsf{A}\). Indeed, by the definitions \[\mathfrak{A}_{0,0}=\left(\mathscr{U}/\operatorname{N}_{\mathsf{L}}^{1}\mathscr{U}\right)_{0}\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}_{0}/\operatorname{N}^{1}\mathscr{U}_{0}\right)\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right)_{0}\cong\mathsf{A}\otimes_{\mathsf{A}}\mathsf{A}\otimes_{\mathsf{A}}\mathsf{A}\cong\mathsf{A}.\] We will next simultaneously describe the algebra structure on \(\mathfrak{A}\), and the action of this algebra \(\mathfrak{A}\) on generalized Verma modules.
**Definition 3.2.5**.: Let \(W_{0}\) be an \(\mathsf{A}\)-module. Following Definition B.2.4, we define the map \(\mathfrak{A}\times\Phi^{\mathsf{L}}(W_{0})\to\Phi^{\mathsf{L}}(W_{0})\) as follows. For \(\mathfrak{a}=u\otimes a\otimes u^{\prime}\in\mathfrak{A}\) and \(\beta\otimes w\in\Phi^{\mathsf{L}}(W_{0})\) we set \[\mathfrak{a}\star(\beta\otimes w):=u\otimes a(u^{\prime}\otimes\beta)w,\] where \(u^{\prime}\otimes\beta\in\mathsf{A}\) is the element provided by Lemma 3.2.2. This map defines an algebra structure on \(\mathfrak{A}\), and \(\Phi^{\mathsf{L}}(W_{0})\) becomes a left \(\mathfrak{A}\)-module (see Definition B.2.4 and Proposition B.2.5). Moreover the subspace \(\mathfrak{A}_{d}:=\mathfrak{A}_{d,-d}\) is closed under multiplication, hence it defines a subalgebra of \(\mathfrak{A}\). The following is a special case of Definition B.2.6. **Definition 3.2.6**.: We call \(\mathfrak{A}(V)=\mathfrak{A}=(\mathfrak{A},+,\star)\) the _mode transition algebra_ of \(V\), and \(\mathfrak{A}_{d}=(\mathfrak{A}_{d},+,\star)\) the _\(d\)th mode transition algebra_ of \(V\). **Remark 3.2.7**.: We observe that the underlying vector space and the algebra structure of \(\mathfrak{A}(V)\) do not depend on the existence of a conformal structure on \(V\). Therefore \(\mathfrak{A}(V)\) can be defined for every graded vertex algebra \(V\). We refer to Example 3.3.2 for an explicit description of the algebra structure of \(\mathfrak{A}(V)\) when \(V\) is a \(C_{2}\)-cofinite and rational vertex operator algebra. The following assertion is straightforward. **Remark 3.2.8**.: Let \(W_{0}\) be an \(\mathsf{A}_{0}\)-module. Then the action of \(\mathfrak{A}_{d}\) on \(\Phi^{\mathsf{L}}(W_{0})_{d}\) described in Definition 3.2.5 factors through the action of \(\mathsf{A}_{d}\) via the map \(\mu_{d}\). #### 3.2.9. Relation to the literature In [10], a series of unital associative algebras \(\mathsf{A}_{e,d}\) is defined as quotients of \(V\), with \(\mathsf{A}_{d,d}\cong\mathsf{A}_{d}\). By Definition 3.2.5, the \(\mathfrak{A}_{d}\) act on the degree \(d\) part of an induced module \(W=\Phi^{\mathsf{L}}(W_{0})\), as do the \(\mathsf{A}_{e,d}\), although the two families of algebras differ. In [11], a related series of associative algebras \(A^{d}(V)\) is defined. These contain higher level Zhu algebras as subalgebras, and act on (the sum of) components of a module up to degree \(d\). In [11], relations are established between bimodules for these associative algebras and (logarithmic) intertwining operators, and using these, in [11], modular invariance of (logarithmic) intertwining operators is proved. ### Strong unital action of \(\mathfrak{A}_{d}\) on modules The algebra \(\mathfrak{A}_{0}=\mathsf{A}\) has a unity, given by the image of \(\mathbf{1}_{[-1]}\) and denoted \(1\). On the other hand, as we show in Section 8, for \(d\in\mathbb{Z}_{>0}\) the \(\mathfrak{A}_{d}\) may not admit multiplicative identity elements. However, if there are unities in \(\mathfrak{A}_{d}\) for all \(d\in\mathbb{N}\), we have the following results about them. **Definition/Lemma 3.3.1**.: Let \(M\) be an \(\mathsf{A}\)-module, and assume that for every \(d\in\mathbb{N}\) the ring \(\mathfrak{A}_{d}\) is unital, with unity \(\mathscr{I}_{d}\in\mathfrak{A}_{d}\). We say that \(\mathscr{I}_{d}\) is a strong unity for every \(d\) if one of the following equivalent conditions is satisfied: 1. For every \(d\), \(n\), \(m\in\mathbb{N}\), for all \(u\in\mathfrak{L}(V)_{d}\), and \(\mathfrak{a}\in\mathfrak{A}_{n,-m}\) one has \[(u\cdot\mathscr{I}_{n})\star\mathfrak{a}=u\cdot\mathfrak{a}\qquad\text{ and }\qquad\mathfrak{a}\star(\mathscr{I}_{m}\cdot u)=\mathfrak{a}\cdot u.\] 2.
For every \(n,m\in\mathbb{N}\) and for every \(\mathfrak{a}\in\mathfrak{A}_{n,-m}\) one has \(\mathscr{I}_{n}\star\mathfrak{a}=\mathfrak{a}=\mathfrak{a}\star\mathscr{I}_{m}\). 3. For every \(d\in\mathbb{N}\) the homomorphism \(\mathfrak{A}_{d}\to\operatorname{End}(\Phi^{\mathsf{L}}(\mathsf{A})_{d})\) is unital; 4. For every \(d\in\mathbb{N}\), the homomorphism \(\mathfrak{A}_{d}\to\operatorname{End}(\Phi^{\mathsf{L}}(\mathsf{A})_{d})\) is unital and injective. 5. For every \(d\in\mathbb{N}\) and \(M\) an \(\mathsf{A}\)-module, the homomorphism \(\mathfrak{A}_{d}\to\operatorname{End}(\Phi^{\mathsf{L}}(M)_{d})\) is unital. Proof.: We prove these conditions are equivalent. Since (4) implies (3) and (5) implies (3), it will be enough to show the following implications: \[(\mathit{1})\Leftrightarrow(\mathit{2})\Leftrightarrow(\mathit{3})\Rightarrow(\mathit{5})\quad\text{ and }\quad(\mathit{3})\Rightarrow(\mathit{4})\] \((\mathit{1})\Rightarrow(\mathit{2})\): this follows by taking \(d=0\) and \(u=1\). \((\mathit{2})\Rightarrow(\mathit{1})\): This follows from Proposition B.2.5. \((\mathit{2})\Rightarrow(\mathit{3})\): This follows from the identification of \(\Phi^{\mathsf{L}}(\mathsf{A})_{d}\) with \(\mathfrak{A}_{d,0}\). \((\mathit{3})\Rightarrow(\mathit{2})\): By linearity, we can assume that \(\mathfrak{a}\in\mathfrak{A}_{n,-m}\) is represented by an element of the form \(u\otimes a\otimes v\) with \(u\in\mathscr{U}_{n}^{\mathsf{L}}\), \(v\in\mathscr{U}_{-m}^{\mathsf{R}}\) and \(a\in\mathsf{A}\). Then \[\mathscr{I}_{n}\star(u\otimes a\otimes v)=\mathscr{I}_{n}\star(u\otimes a\otimes 1)\cdot(v)=(\mathscr{I}_{n}\star(u\otimes a))\otimes v=(u\otimes a\otimes 1)\cdot v=u\otimes a\otimes v\] where (_3_) ensures that the third equality holds. (_3_) \(\Rightarrow\) (_5_): By linearity we can assume that an element of \(\Phi^{\mathsf{L}}(M)_{d}\) is given by \(u\otimes m\) for \(u\in\mathscr{U}_{d}^{\mathsf{L}}\) and \(m\in M\). Hence we obtain \(\mathscr{I}_{d}\star(u\otimes m)=(\mathscr{I}_{d}\star u)\otimes m=u\otimes m\). (_3_) \(\Rightarrow\) (_4_): By Definition 3.2.5, the action of an element \(\mathfrak{a}\in\mathfrak{A}_{d}\) via \(\star\) on \(\mathfrak{A}_{d}\subseteq\mathfrak{A}=\Phi^{\mathsf{L}}(\Phi^{\mathsf{R}}(\mathsf{A}))=\Phi^{\mathsf{L}}(\mathsf{A})\otimes_{\mathscr{U}_{0}}\left(\mathscr{U}/\operatorname{N}_{\mathsf{R}}^{1}\mathscr{U}\right)\) is determined by the action of \(\mathfrak{a}\) on \(\Phi^{\mathsf{L}}(\mathsf{A})\). In particular, as the former is injective when we have a unity, the latter must be injective in this case as well. **Example 3.3.2**.: For \(V\) a rational VOA, \(\mathsf{A}\) is finite dimensional and semi-simple, and has a bimodule decomposition \(\mathsf{A}\cong\bigoplus_{W_{0}\in\mathscr{W}_{0}}W_{0}\otimes W_{0}^{\vee}\), where \(\mathscr{W}_{0}\) is the set of (finitely many) isomorphism classes of simple \(\mathsf{A}\)-modules. If \(V\) is also \(C_{2}\)-cofinite, \(\mathfrak{A}=\bigoplus_{W\in\mathscr{W}}W\otimes_{\mathbb{C}}W^{\prime}\), where \(\mathscr{W}\) is the set of isomorphism classes of simple \(V\)-modules (in bijection with \(\mathscr{W}_{0}\)). Here \(W=\Phi^{\mathsf{L}}(W_{0})\cong\bigoplus_{d\geq 0}W_{d}\), and \(W^{\prime}=\Phi^{\mathsf{R}}(W_{0})=\bigoplus_{d\geq 0}\operatorname{Hom}_{\mathbb{C}}(W_{d},\mathbb{C})\), the module contragredient to \(W\).
Using this, the \(\star\)-product is induced, by linearity, from \[(a_{W}\otimes\varphi_{W_{d}})\star(b_{M_{e}}\otimes\psi_{M})=\begin{cases}\varphi_{W_{d}}(b_{M_{e}})(a_{W}\otimes\psi_{W})&\text{if $W=M$ and $e=d$}\\ 0&\text{otherwise,}\end{cases}\] where \(\varphi_{W_{d}}\colon W_{d}\to\mathbb{C}\) and \(b_{M_{e}}\in M_{e}\). Under these assumptions, for all \(d\in\mathbb{Z}_{\geq 0}\), \[\mathfrak{A}_{d}=\bigoplus_{W\in\mathscr{W}}\operatorname{Hom}_{\mathbb{C}}(W_{d},W_{d}),\] admit strong unities \(\mathbf{1}_{d}:=\bigoplus_{W}\operatorname{Id}_{W_{d}}\). ### Relation to the functor \(\Phi\) To define the mode transition algebra \(\mathfrak{A}\), we used the map \(\Phi^{\mathsf{L}}\Phi^{\mathsf{R}}=\Phi^{\mathsf{R}}\Phi^{\mathsf{L}}\), which we can interpret as a functor from the category of \(\mathsf{A}\)-bimodules to the category of \(\mathscr{U}\)-bimodules. Its properties in a more general framework are described in Proposition B.2.5. We now show that this functor agrees with the functor denoted \(\Phi\) in [10, Definition 2.2], which assigns to an \(\mathsf{A}\)-bimodule \(M\) the \((\mathscr{U}^{\mathsf{L}})^{\otimes 2}\)-module \[\Phi(M):=(\mathscr{U}^{\mathsf{L}}\otimes\mathscr{U}^{\mathsf{L}})\otimes_{\mathscr{U}^{\mathsf{L}}_{\leq 0}\otimes\,\mathscr{U}^{\mathsf{L}}_{\leq 0}}M,\] where \(\mathscr{U}^{\mathsf{L}}_{\leq 0}\otimes\mathscr{U}^{\mathsf{L}}_{\leq 0}\) acts on \(M\) as follows: \[(u\otimes v)(m)=\begin{cases}u\cdot m\cdot\theta(v)&\text{if $u,v\in\mathscr{U}_{0}$}\\ 0&\text{otherwise.}\end{cases}\] Here \(\theta\) is the natural involution of \(\mathscr{U}_{0}\) (from e.g. [10, Eq.(7)]), which we describe briefly in Section 3.4.1. In Lemma 3.4.5 we describe the relation of the functors \(\Phi^{\mathsf{L}}\) and \(\Phi^{\mathsf{R}}\) to \(\Phi\). #### 3.4.1. The involutions, left and right actions As we explain here, there is a Lie algebra anti-isomorphism \(\theta\) used to transport the universal enveloping algebra, considered as an object that acts on modules on the left (denoted here by \(\mathscr{U}^{\mathsf{L}}\)), to an analogous completion \(\mathscr{U}^{\mathsf{R}}\) that acts on modules on the right. The map \(\theta\colon V\otimes_{\mathbb{C}}\mathbb{C}(\!(t)\!)\to V\otimes_{\mathbb{C}}\mathbb{C}(\!(t^{-1})\!)\) is defined, for \(a\in V\) homogeneous, by \[a\otimes\sum_{i\geq N}c_{i}t^{i}\mapsto(-1)^{\deg(a)}\sum_{j=0}^{\deg(a)}\left(\frac{1}{j!}\left(L_{1}^{j}(a)\right)\otimes\sum_{i\geq N}c_{i}t^{2\deg(a)-i-j-2}\right), \tag{4}\] and extended linearly. The map \(\theta\) is related to the involution \(\gamma=(-1)^{L_{0}}e^{L_{1}}\colon V\to V\) defined, for \(a\in V\) homogeneous, by \[a\mapsto(-1)^{\deg a}\sum_{i\geq 0}\frac{1}{i!}L_{1}^{i}(a), \tag{5}\] and extended by linearity. To state the relation succinctly, for every homogeneous \(a\in V\) we set \[J_{n}(a):=a_{[\deg(a)-1+n]}. \tag{6}\] This notation, used in [10], has the property that \(\deg(J_{n}(a))=-n\), so that the degree of such an element is easily read. **Lemma 3.4.2**.: _For \(a\in V\), homogeneous, \(\theta(J_{n}(a))=J_{-n}(\gamma(a))\)._ Proof.: This follows by combining (4) and (5) and using linearity.
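As a quick illustration of Lemma 3.4.2, derived only from (4)-(6) and the axioms of Definition 1.1.1 (with the usual convention \(L_{n}=\omega_{(n+1)}\) on \(V\)): consider the conformal vector \(\omega\in V_{2}\). Since \(\omega=\omega_{(-1)}\mathbf{1}\) by the vacuum axiom, the conformal structure axiom gives \[L_{1}\omega=\omega_{(2)}\omega_{(-1)}\mathbf{1}=[\omega_{(2)},\omega_{(-1)}]\mathbf{1}=3\,\omega_{(0)}\mathbf{1}=0,\] so (5) yields \(\gamma(\omega)=(-1)^{2}\omega=\omega\). Writing \(L_{n}:=J_{n}(\omega)=\omega_{[n+1]}\) for the corresponding classes, Lemma 3.4.2 then reads \[\theta(L_{n})=J_{-n}(\gamma(\omega))=J_{-n}(\omega)=L_{-n},\] that is, \(\theta\) exchanges the Virasoro modes \(L_{n}\) and \(L_{-n}\).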
**Lemma 3.4.3**.: _The map \(\theta\) induces a Lie algebra anti-isomorphism \(\mathfrak{L}(V)^{\mathsf{L}}\to\mathfrak{L}(V)^{\mathsf{R}}\), which restricts to a Lie algebra involution on \(\mathfrak{L}(V)^{\mathsf{f}}\), such that \(\theta(\mathfrak{L}(V)^{\mathsf{L}}_{\leq d})=\mathfrak{L}(V)^{\mathsf{R}}_{\geq-d}\) and \(\theta(\mathfrak{L}(V)^{\mathsf{f}}_{d})=\mathfrak{L}(V)^{\mathsf{f}}_{-d}\)._ Proof.: One can check that \(\theta\) restricts to an endomorphism of \(V\otimes\mathbb{C}[t,t^{-1}]\), which by [10, Proposition 4.1.1] defines a Lie algebra involution of \(\mathfrak{L}(V)^{\mathsf{f}}\), that is, \(\theta([\ell_{1},\ell_{2}])=-[\theta(\ell_{1}),\theta(\ell_{2})]\), and \(\theta^{2}=\mathrm{id}\). Moreover, it is easy to verify that \(\theta(\mathfrak{L}(V)^{\mathsf{f}}_{d})\subset\mathfrak{L}(V)^{\mathsf{f}}_{-d}\). As the Lie algebras \(\mathfrak{L}(V)^{\mathsf{L}},\mathfrak{L}(V)^{\mathsf{R}}\) carry exhaustive and separated split filtrations by the graded subalgebra \(\mathfrak{L}(V)^{\mathsf{f}}\), they are naturally equipped with norms via Remark A.4.3, that is, by declaring that elements of large positive degree are large in \(\mathfrak{L}(V)^{\mathsf{L}}\) and small in \(\mathfrak{L}(V)^{\mathsf{R}}\). With this definition, it follows that \(\theta\) is continuous, and as noted in Remark A.4.6, that the multiplication on the Lie algebras is continuous. Finally, since \(\mathfrak{L}(V)^{\mathsf{f}}\) induces a splitting of the filtrations, it is simultaneously dense in \(\mathfrak{L}(V)^{\mathsf{L}}\) and \(\mathfrak{L}(V)^{\mathsf{R}}\). Consequently, we see by continuity that \(\theta\) induces an anti-homomorphism from \(\mathfrak{L}(V)^{\mathsf{L}}\) to \(\mathfrak{L}(V)^{\mathsf{R}}\), which is an anti-isomorphism as \(\theta^{2}=\mathrm{id}\). If \(R=(R,+,\cdot)\) is a ring, denote by \(R^{\mathrm{op}}\) its opposite ring, that is \(R^{\mathrm{op}}=(R,+,*)\) where \(a*b:=b\cdot a\). Similarly, if \((L,[\,,\,])\) is a Lie algebra, we denote by \(L^{\mathrm{op}}\) its opposite Lie algebra, where \([a,b]_{L^{\mathrm{op}}}:=[b,a]_{L}\). **Lemma 3.4.4**.: _For \(\,U(\mathfrak{L}(V)^{\mathsf{R}})\) the universal enveloping algebra of \(\mathfrak{L}(V)^{\mathsf{R}}\),_ \[\theta\colon\,U(\mathfrak{L}(V)^{\mathsf{L}})\to\,U(\mathfrak{L}(V)^{\mathsf{R}})^{\mathrm{op}}\] _is an isomorphism of rings._ Proof.: We have established in Lemma 3.4.3 that \(\theta\colon\mathfrak{L}(V)^{\mathsf{L}}\to\mathfrak{L}(V)^{\mathsf{R}}\) is an anti-isomorphism of Lie algebras, so that \(\theta\colon\mathfrak{L}(V)^{\mathsf{L}}\to(\mathfrak{L}(V)^{\mathsf{R}})^{\operatorname{op}}\) is a Lie algebra isomorphism. Moreover, as \(\mathit{U}((\mathfrak{L}(V)^{\mathsf{R}})^{\operatorname{op}})=\mathit{U}(\mathfrak{L}(V)^{\mathsf{R}})^{\operatorname{op}}\), it follows that \(\theta\) induces an isomorphism between \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{L}})\) and \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{R}})^{\operatorname{op}}\), as desired. Here we note that \(\theta(\alpha\cdot\beta)=\theta(\beta)\cdot\theta(\alpha)\) for every \(\alpha,\beta\in\mathit{U}(\mathfrak{L}(V)^{\mathsf{L}})\) (where \(\cdot\) is the usual product in \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{L}})\) and \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{R}})\)). In particular \(\theta\) is an isomorphism between \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{f}})\) and \(\mathit{U}(\mathfrak{L}(V)^{\mathsf{f}})^{\operatorname{op}}\).
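Concretely, on monomials in the modes \(J_{n}(a)\) of (6), the anti-multiplicativity just noted combines with Lemma 3.4.2 to give \[\theta\big(J_{n_{1}}(a^{1})\cdots J_{n_{k}}(a^{k})\big)=J_{-n_{k}}(\gamma(a^{k}))\cdots J_{-n_{1}}(\gamma(a^{1})),\] for homogeneous \(a^{1},\dots,a^{k}\in V\); for instance, with \(L_{n}=\omega_{[n+1]}\) as in the illustration following Lemma 3.4.2, \(\theta(L_{m}L_{n})=L_{-n}L_{-m}\).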
**Lemma 3.4.5**.: _Let \(B\) be an associative ring, \(W^{1}\) an \((\mathsf{A},B)\)-bimodule and \(W^{2}\) a \((B,\mathsf{A})\)-bimodule. Then we have a natural identification_ \[\Phi^{\mathsf{L}}(W^{1})\otimes_{B}\Phi^{\mathsf{R}}(W^{2})\cong\Phi(W^{1}\otimes_{B}W^{2})\] _of \((\mathscr{U}^{\mathsf{L}},\mathscr{U}^{\mathsf{R}})\)-bimodules. In particular we have \(\mathfrak{A}=\Phi(\mathsf{A})\)._ Proof.: We first note that there is a natural equivalence of categories between left \((\mathscr{U}^{\mathsf{L}})^{\otimes 2}\)-modules and \((\mathscr{U}^{\mathsf{L}},(\mathscr{U}^{\mathsf{L}})^{\operatorname{op}})\)-bimodules. Moreover, as described in Lemma 3.4.4, the involution \(\theta\) provides an identification \(\mathscr{U}^{\mathsf{R}}\cong(\mathscr{U}^{\mathsf{L}})^{\operatorname{op}}\). It follows that the map \(\Phi^{\mathsf{L}}(W^{1})\otimes_{B}\Phi^{\mathsf{R}}(W^{2})\to\Phi(W^{1}\otimes_{B}W^{2})\) induced by \[(u\otimes w_{1})\otimes(w_{2}\otimes v)\mapsto(u\otimes\theta(v))\otimes(w_{1}\otimes w_{2})\] for all \(u\in\mathscr{U}^{\mathsf{L}}\), \(v\in\mathscr{U}^{\mathsf{R}}\) and \(w_{i}\in W^{i}\) is indeed an isomorphism. ## 4. Smoothing, limits, and coinvariants In Section 4.1 we describe the sheaf of coinvariants on schemes \(S\) parametrizing families of pointed and coordinatized curves in general terms, while in Section 4.2 we explain what we mean by sheaves defined over a scheme \(S=\operatorname{Spec}R\), where \(R\) is a ring complete with respect to some ideal \(I\). In Section 4.3, we describe the setup for considering coinvariants on smoothings of nodal curves, establishing some results needed for our geometric applications. In particular, in Section 4.4, for the proof of Proposition 5.1.2, we explicitly describe the sheaf \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\) of Chiral Lie algebras. Throughout this section, \(V\) is a VOA with no additional finiteness assumptions. ### Coinvariants Let \(S\) be a scheme and let \(\mathscr{W}\) be a quasi-coherent sheaf of \(\mathscr{O}_{S}\)-modules. Let \(\mathscr{L}\) be a quasi-coherent sheaf of Lie algebras on \(S\) acting on \(\mathscr{W}\). We define the _sheaf of coinvariants_ \([\mathscr{W}]_{\mathscr{L}}\) on \(S\) as the cokernel \[\mathscr{L}\otimes_{\mathscr{O}_{S}}\mathscr{W}\to\mathscr{W}\to[\mathscr{W}]_{\mathscr{L}}\to 0.\] For future use, it will be helpful to note that the formation of the sheaves of coinvariants commutes with base change. **Lemma 4.1.1**.: _Let \(\mathscr{L}\) be a quasi-coherent sheaf of Lie algebras on a scheme \(S\) acting on a quasi-coherent sheaf \(\mathscr{W}\). For any morphism \(S^{\prime}\to S\), we have \(([\mathscr{W}]_{\mathscr{L}})_{S^{\prime}}\cong[\mathscr{W}_{S^{\prime}}]_{\mathscr{L}_{S^{\prime}}}\)._ Proof.: This follows from right exactness of pullback of quasi-coherent sheaves (equivalently, right exactness of tensor). **Remark 4.1.2**.: Let \(\pi\colon\mathscr{C}\to S\) be a projective curve, with \(n\) distinct smooth sections \(P_{\bullet}\colon S\to\mathscr{C}\) and formal coordinates \(t_{\bullet}\) at \(P_{\bullet}\). Assume further that \(\mathscr{C}\setminus\sqcup P_{\bullet}(S)\to S\) is affine. This assumption involves no loss of generality, by Propagation of Vacua [12, Thm 3.6] (see also [13, Theorem 4.3.1]). Denote by \(W^{\bullet}=W^{1}\otimes\cdots\otimes W^{n}\) the tensor product of an \(n\)-tuple of \(V\)-modules and let \(\mathscr{W}:=W^{\bullet}\otimes\mathcal{O}_{S}\).
The sheaf of Chiral Lie algebras \(\mathscr{L}:=\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\), originally defined in this context for families of stable curves with singularities in [13, 14], is described in more detail here in Section 4.4. The sheaf of coinvariants \([\mathscr{W}]_{\mathscr{L}}\) defined above will also be denoted \([W^{\bullet}]_{(\mathscr{C},P_{\bullet},t_{\bullet})}\). While in general only quasi-coherent [13], this sheaf is coherent if \(V\) is \(C_{2}\)-cofinite, or if \(V\) is generated in degree \(1\) [13, 14]. ### Completions As in Section 4.1, we consider coinvariants over \(S=\operatorname{Spec}(R)\), where \(R\) is a ring that is complete with respect to some ideal \(I\). For \(k\in\mathbb{Z}_{\geq 0}\), setting \(S_{k}=\operatorname{Spec}(R_{k})=\operatorname{Spec}(R/I^{k+1})\), and writing \(\mathscr{L}_{k}\) and \(\mathscr{W}_{k}\) for the pullbacks of \(\mathscr{L}\) and \(\mathscr{W}\) to \(S_{k}\) respectively, we work with the coinvariants \([\mathscr{W}_{k}]_{\mathscr{L}_{k}}\) for any \(k\in\mathbb{Z}_{\geq 0}\). Due to quasicoherence, each of these can be thought of as a module over \(R_{k}\), with maps \([\mathscr{W}_{k+1}]_{\mathscr{L}_{k+1}}\to[\mathscr{W}_{k}]_{\mathscr{L}_{k}}\). **Definition 4.2.1**.: In the above situation, we define the _formal coinvariants_, denoted \(\widehat{[\mathscr{W}]_{\mathscr{L}}}\), to be the \(I\)-adically complete \(R\)-module \[\widehat{[\mathscr{W}]_{\mathscr{L}}}=\varprojlim[\mathscr{W}_{k}]_{\mathscr{L}_{k}}.\] **Proposition 4.2.2**.: _We have an identification_ \[\widehat{[\mathscr{W}]_{\mathscr{L}}}=\operatorname{coker}\left[\varprojlim\pi_{*}\mathscr{L}_{k}\otimes_{\mathscr{O}_{S_{k}}}\mathscr{W}_{k}\longrightarrow\varprojlim\mathscr{W}_{k}\right]\] **Proposition 4.2.3**.: _Suppose \([\mathscr{W}]_{\mathscr{L}}\) is finitely generated over an \(I\)-adically complete Noetherian ring \(R\). Then the natural map \([\mathscr{W}]_{\mathscr{L}}\to\widehat{[\mathscr{W}]_{\mathscr{L}}}\) is an isomorphism._ Proof.: Consider the exact sequence of \(R\)-modules (omitting the \(\pi_{*}\) from the notation, and identifying the quasicoherent sheaves with the corresponding \(R\)-modules): \[\mathscr{L}\otimes_{R}\mathscr{W}\longrightarrow\mathscr{W}\longrightarrow[\mathscr{W}]_{\mathscr{L}}\longrightarrow 0.\] Tensoring with \(R_{k}\) (or geometrically, base-changing along \(S_{k}\to S\)) is a right exact operation, hence it yields an exact sequence \[\mathscr{L}_{k}\otimes_{R_{k}}\mathscr{W}_{k}\longrightarrow\mathscr{W}_{k}\longrightarrow\left([\mathscr{W}]_{\mathscr{L}}\right)_{R_{k}}\longrightarrow 0,\] which shows that we can identify \([\mathscr{W}_{k}]_{\mathscr{L}_{k}}=\left([\mathscr{W}]_{\mathscr{L}}\right)_{R_{k}}\). In particular, the composition \[[\mathscr{W}]_{\mathscr{L}}\longrightarrow\widehat{[\mathscr{W}]_{\mathscr{L}}}\longrightarrow[\mathscr{W}_{k}]_{\mathscr{L}_{k}}\] coincides with the surjection \([\mathscr{W}]_{\mathscr{L}}\to[\mathscr{W}]_{\mathscr{L}}\otimes_{R}R_{k}\). Since \([\mathscr{W}]_{\mathscr{L}}\) is finitely generated over a complete Noetherian ring, it is \(I\)-adically complete by [16, Tag 00MA(3)]. Therefore we can identify \([\mathscr{W}]_{\mathscr{L}}=\varprojlim[\mathscr{W}]_{\mathscr{L}}\otimes_{R}R_{k}=\varprojlim[\mathscr{W}_{k}]_{\mathscr{L}_{k}}=\widehat{[\mathscr{W}]_{\mathscr{L}}}\), giving the desired isomorphism.
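We note that the finite generation hypothesis in Proposition 4.2.3 cannot be dropped. A minimal, purely algebraic example (with \(\mathscr{L}=0\), so that \([\mathscr{W}]_{\mathscr{L}}=\mathscr{W}\)): take \(R=\mathbb{C}[\![q]\!]\), \(I=(q)\), and \(\mathscr{W}=M=\bigoplus_{i\geq 1}R\). The classes of the elements \((q,q^{2},\dots,q^{k},0,0,\dots)\) form a compatible system in \(\varprojlim_{k}M/I^{k+1}M\), but any preimage in \(M\) would have \(i\)th coordinate equal to \(q^{i}\) for every \(i\), which is impossible in a direct sum; hence \(M\to\widehat{M}\) is not surjective, and \(M\) is not \(I\)-adically complete.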
### Smoothing setup In order to introduce the smoothing property for \(V\), we will recall the notion of a smoothing of a nodal curve, and set a small amount of notation used throughout. Let \(R=\mathbb{C}[\![q]\!]\) and write \(S=\operatorname{Spec}(R)\). Let \(\mathscr{C}_{0}\) be a projective curve over \(\mathbb{C}\) with at least one node \(Q\), smooth and distinct points \(P_{\bullet}=(P_{1},\dots,P_{n})\) such that \(\mathscr{C}_{0}\setminus P_{\bullet}\) is affine, and formal coordinates \(t_{\bullet}=(t_{1},\dots,t_{n})\) at \(P_{\bullet}\). Let \(\eta\colon\widetilde{\mathscr{C}}_{0}\to\mathscr{C}_{0}\) be the partial normalization of \(\mathscr{C}_{0}\) at \(Q\), which is naturally pointed by \(Q_{\pm}:=\eta^{-1}(Q)\). We also suppose we have chosen formal coordinates at \(Q_{\pm}\), which we call \(s_{\pm}\). The choice of our formal coordinates \(s_{\pm}\) determines a smoothing family \((\mathscr{C},P_{\bullet},t_{\bullet})\) over \(S\), with central fiber given by \((\mathscr{C}_{0},P_{\bullet},t_{\bullet})\). Let \((\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})\) denote the trivial extension \(\widetilde{\mathscr{C}_{0}}\times S\) with its corresponding markings. We will now discuss the relationship between coinvariants for \((\mathscr{C},P_{\bullet},t_{\bullet})\) and \((\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})\). Let \(W^{1},\dots,W^{n}\) be an \(n\)-tuple of \(V\)-modules, or equivalently, smooth \(\mathscr{U}\)-modules for \(\mathscr{U}\) the universal enveloping algebra of \(V\) (defined in Section 2), and \(W^{\bullet}\) their tensor product. As described above in Remark 4.1.2, we may also consider the sheaf of coinvariants \([W^{\bullet}]_{(\mathscr{C},P_{\bullet},t_{\bullet})}\). As mentioned in the introduction, there is a map \(\alpha_{0}\colon W^{\bullet}\to W^{\bullet}\otimes\Phi(\mathsf{A})\) which induces a map between coinvariants \[[\alpha_{0}]\colon[W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\longrightarrow[W^{\bullet}\otimes\Phi(\mathsf{A})]_{(\widetilde{\mathscr{C}}_{0},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}.\] Moreover, if \(V\) is \(C_{1}\)-cofinite, then we will show in Lemma 4.4.4 that \([\alpha_{0}]\) is an isomorphism. We recall that \(\Phi(\mathsf{A})=\mathfrak{A}\), so we will generally use the notation \(\mathfrak{A}\) below. The following result, which is a consequence of Proposition 4.2.3, allows us to describe coinvariants over \(\widetilde{\mathscr{C}}\) whenever they are finite dimensional. The assumptions of the following result are satisfied for all \(V\)-modules \(W^{\bullet}\) when \(V\) is \(C_{2}\)-cofinite, and also more generally (by [1]). **Corollary 4.3.1**.: _Assume that the sheaf \([(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\) is coherent over \(S\)._
_Then one has identifications_ \[[(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\\ \cong[W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}}_{0},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}[\![q]\!]\\ \cong[W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}}_{0},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\otimes_{\mathbb{C}}\mathbb{C}[\![q]\!].\] Proof.: The second isomorphism holds because the coherence assumption implies that \([W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}}_{0},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\) is a finite dimensional vector space. To prove the first isomorphism, we consider, for \(R=\mathbb{C}[\![q]\!]\), the \(R\)-module and \(R\)-Lie algebra \[\mathscr{W}=(W^{\bullet}\otimes\mathfrak{A})[\![q]\!],\ \ \text{and}\ \ \mathscr{L}=\mathcal{L}_{\widetilde{\mathscr{C}}\setminus\{P_{\bullet}\sqcup Q_{\pm}\}}(V).\] Since \([\mathscr{W}]_{\mathscr{L}}\) is a finitely generated \(R\)-module, setting \(R_{k}=\mathbb{C}[\![q]\!]/q^{k+1}\) and \(S_{k}=\operatorname{Spec}(R_{k})\), one can show by Proposition 4.2.3 that \[[\mathscr{W}]_{\mathscr{L}}=\varprojlim\left([\mathscr{W}\otimes_{R}R_{k}]_{\mathscr{L}\otimes_{R}R_{k}}\right). \tag{7}\] Note further that \(\mathscr{W}\otimes_{R}R_{k}=(W^{\bullet}\otimes\mathfrak{A})\otimes_{\mathbb{C}}R_{k}\), and similarly, \[\mathscr{L}\otimes_{R}R_{k}=\mathcal{L}_{\widetilde{\mathscr{C}}_{0}\setminus\{P_{\bullet}\sqcup Q_{\pm}\}}(V)\otimes_{\mathbb{C}}R_{k}.\] Using this, together with Proposition 4.2.2, we deduce that (7) is isomorphic to \[\varprojlim\left([W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}_{0}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\otimes_{\mathbb{C}}R_{k}\right)\] which is indeed \([W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}_{0}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}[\![q]\!]\), as was asserted. **Remark 4.3.2**.: Corollary 4.3.1 implies that, up to some assumptions of coherence, the sheaf of coinvariants associated with \(W^{\bullet}\otimes\mathfrak{A}\) over \(\widetilde{\mathscr{C}_{0}}\) deforms trivially to the sheaf of coinvariants over the trivial deformation \(\widetilde{\mathscr{C}}\) of \(\widetilde{\mathscr{C}_{0}}\). Consequently, the target of the induced map \([\alpha]\), which extends the map \([\alpha_{0}]\), is identified with the sheaf of coinvariants associated with \(\widetilde{\mathscr{C}}\) (and not only with a completion thereof). We conclude this section with some criteria to show coherence of sheaves of coinvariants over \(S\). Throughout we will use the notation \(R_{k}=\mathbb{C}[\![q]\!]/q^{k+1}\) and \(S_{k}=\operatorname{Spec}(R_{k})\) for every \(k\in\mathbb{N}\). **Lemma 4.3.3**.: _For \(M\) any module over \(R_{k}\), let \(m_{1},\dots,m_{r}\in M\) be elements whose images generate \(M\otimes_{R_{k}}R_{0}\). Then the elements \(m_{1},\dots,m_{r}\) also generate \(M\)._ Proof.: We induct on \(k\), the case \(k=0\) being automatic. For the induction step, suppose \(m\in M\) and consider the \(R_{k-1}\)-module \(\overline{M}=M\otimes_{R_{k}}R_{k-1}\). By the induction hypothesis, the elements \(m_{1},\dots,m_{r}\) generate \(\overline{M}\). Therefore we can find \(a_{1},\dots,a_{r}\in R_{k}\) so that \[m^{\prime}=m-\sum a_{i}m_{i}\] maps to \(0\) in \(\overline{M}\). Now consider the submodule \(M^{\prime}=q^{k}M\subset M\).
As \(q^{k}M\) is exactly the kernel of the map \(M\to\overline{M}=M\otimes_{R_{k}}R_{k-1}\), we find that \(m^{\prime}\in M^{\prime}\), and therefore we can write \(m^{\prime}=q^{k}x\) for some \(x\in M\). Writing \(x=\sum b_{i}m_{i}\pmod{q}\), we find \(x-\sum b_{i}m_{i}=qy\) for some \(y\in M\). But now we have
\[m=\left(\sum a_{i}m_{i}\right)+m^{\prime}=\left(\sum a_{i}m_{i}\right)+q^{k}\left(\left(\sum b_{i}m_{i}\right)+qy\right)=\sum(a_{i}+q^{k}b_{i})m_{i},\]
as desired.

**Proposition 4.3.4**.: _If \([W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\) is a finite dimensional vector space, then both_
\[[W^{\bullet}[\![q]\!]]_{(\mathscr{C},P_{\bullet},t_{\bullet})}\quad\text{ and }\quad[(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\]
_are coherent._

Proof.: For every \(k\in\mathbb{N}\) and for every scheme \(X\) over \(S\), denote the pullback of \(X\) to \(S_{k}\) by \(X_{k}\). Define
\[M_{k}:=[W^{\bullet}_{R_{k}}]_{(\mathscr{C}_{k},P_{\bullet},t_{\bullet})}\quad\text{ and }\quad\widetilde{M}_{k}:=[(W^{\bullet}\otimes\mathfrak{A})_{R_{k}}]_{\big{(}\widetilde{\mathscr{C}_{k}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm}\big{)}}.\]
Let us first show that \(M_{k}\) and \(\widetilde{M}_{k}\) are coherent. As we are considering modules over the Noetherian ring \(R_{k}\), we only need to show that they are finitely generated. By Lemma 4.3.3, for this it suffices to show that \(M_{0}\) and \(\widetilde{M}_{0}\) are finitely generated. This holds because by assumption \(M_{0}\) is finitely generated and \(\alpha_{0}\colon M_{0}\to\widetilde{M}_{0}\) is an isomorphism by [DGT22a]. For simplicity, denote
\[M=[W^{\bullet}[\![q]\!]]_{(\mathscr{C},P_{\bullet},t_{\bullet})}\quad\text{ and }\quad\widetilde{M}=[(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{\big{(}\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm}\big{)}}.\]
By Lemma 4.1.1, it follows that \(M_{k}=M\otimes_{R}R_{k}\) and \(\widetilde{M}_{k}=\widetilde{M}\otimes_{R}R_{k}\). Consequently the natural maps \(\widetilde{M}_{k}\to\widetilde{M}_{k-1}\) and \(M_{k}\to M_{k-1}\) are surjective. It therefore follows from [10, Lemma 087W] that \(M\) and \(\widetilde{M}\) will be finitely generated over \(R\) whenever \(M_{k}\) and \(\widetilde{M}_{k}\) are finitely generated over \(R_{k}\) for every \(k\). This is what we have just shown, and so \(M\) and \(\widetilde{M}\) are coherent.

### The sheaf of chiral Lie algebras

The sheaf of chiral Lie algebras \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\) can be identified with a quotient of the space of sections of the sheaf \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\) on the affine open set \(\mathscr{C}\setminus P_{\bullet}\subset\mathscr{C}\). Here, for later use in the proof of Proposition 5.1.2, in order to describe the action of \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\), we explicitly describe the sheaf \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\), where \(\mathcal{V}_{\mathscr{C}}\) is the contracted product \((V\otimes_{\mathbb{C}}\mathcal{O}_{\mathscr{C}})\times_{\mathcal{A}ut\mathcal{O}}\mathcal{A}ut_{\mathscr{C}}\) (see Remark 4.4.2).
For this, suppose we are given a relative curve \(\mathscr{C}\), projective over \(S=\operatorname{Spec}\mathbb{C}[\![q]\!]\), with closed fiber \(\mathscr{C}_{0}\) (cut out by the ideal generated by \(q\)), and an \((n+1)\)-tuple of distinct closed points \(P_{0},\dots,P_{n}\in\mathscr{C}_{0}\) with affine complement \(\mathscr{C}_{0}\setminus P_{\bullet}=\mathscr{C}_{0}\setminus\bigcup_{i}P_{i}\). Let \(B=\mathcal{O}_{\mathscr{C}}(\mathscr{C}_{0}\setminus P_{\bullet})\) denote those rational functions on \(\mathscr{C}\) which are regular at every scheme-theoretic point of \(\mathscr{C}_{0}\setminus P_{\bullet}\), and let \(\widehat{B}\) denote its \(q\)-adic completion. By [12, Theorem 3.4], coherent sheaves on \(\mathscr{C}\) may be described by specifying coherent sheaves \(M_{U}\) on \(U=\operatorname{Spec}\widehat{B}\), coherent sheaves \(M_{i}\) on \(D_{i}:=\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},P_{i}}\) for each \(i\), together with "gluing data on the overlaps." The overlaps in this case are described as the formal completions \(D_{i}^{\times}\) of the fiber products \(\operatorname{Spec}\widehat{B}\times_{\mathscr{C}}\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},P_{i}}\), and the gluing data is a choice of an isomorphism \((M_{i})_{D_{i}^{\times}}\cong(M_{U})_{D_{i}^{\times}}\).

More concretely, the \(D_{i}^{\times}\) can be described as follows. In a given complete local ring \(\widehat{\mathcal{O}}_{\mathscr{C},P_{i}}\), the ideal generated by \(q\), which describes the closed fiber, will factor into a product of primes \(\wp_{i,j}\). For each of these we can consider the localization and completion at the prime. We find that \(D_{i}^{\times}\) is the disjoint union of the formal spectra of the completed rings \(\big{(}(\widehat{\mathcal{O}}_{\mathscr{C},P_{i}})_{\wp_{i,j}}\big{)}^{\wedge}\).

We choose \(U\) so that the torsor \(\mathcal{A}ut_{\mathscr{C}/S}\) is trivial over \(\operatorname{Spec}\widehat{B}\) via the choice of a function \(s\in\widehat{B}\) such that \(ds\) is a free generator of \(\omega_{\mathscr{C}/S}(\operatorname{Spec}\widehat{B})\) as a \(\widehat{B}\)-module. In other words, \(s\) is a coordinate on \(U\). In particular, sections of \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\) on \(\operatorname{Spec}\widehat{B}\) can be described as the \(\widehat{B}\)-module:
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\operatorname{Spec}\widehat{B})=\bigoplus_{k\in\mathbb{N}}V_{k}\otimes_{\mathbb{C}}\widehat{B}\ (d/ds)^{k-1}. \tag{8}\]

**Remark 4.4.1**.: It is important to note that these expressions are not intrinsic to \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\) as a sheaf on \(\mathscr{C}\), but rather depend on a choice of parameter \(s\). Different choices give different identifications which correspond to inhomogeneous isomorphisms between the direct sums, but which do preserve the filtrations \((\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S})_{\leq k}\).

Similarly, on \(D_{Q}=\operatorname{Spec}(\widehat{\mathcal{O}}_{\mathscr{C},Q})\), either \(s_{+}\) or \(s_{-}\) can be used to define a trivialization of the torsor \(\mathcal{A}ut_{\mathscr{C}}\), this time corresponding to the two possible choices of generators \(ds_{+}/s_{+}\) or \(ds_{-}/s_{-}\) of \(\omega_{\mathscr{C}/S}\).
These choices allow us to give the following expressions for the sections of our sheaf on \(D_{Q}\) as a \(\widehat{\mathcal{O}}_{\mathscr{C},Q}\)-module:
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(D_{Q})=\bigoplus_{k\in\mathbb{N}}V_{k}\otimes_{\mathbb{C}}\mathbb{C}[\![s_{+},s_{-}]\!]s_{\pm}^{k-1}(d/ds_{\pm})^{k-1}. \tag{9}\]
In particular, we may express a section \(\sigma\) on \(D_{Q}\) with respect to either the trivialization given by \(s_{+}\) or by \(s_{-}\). Since \(\gamma(s_{+})=s_{-}\), the trivializations of \(\mathcal{A}ut_{\mathscr{C}}\) associated to the coordinates \(s_{+}\) and \(s_{-}\) (regarded as sections of the torsor) are related by the order \(2\) element \((-1)^{L_{0}}e^{L_{1}}\in\mathcal{A}ut\mathcal{O}\), which acts on \(V\) via the involution \(\gamma\) described in Eq. (5). Hence, we can write sections of the contracted product \((V\otimes_{\mathbb{C}}\mathcal{O}_{\mathscr{C}})\times_{\mathcal{A}ut\mathcal{O}}\mathcal{A}ut_{\mathscr{C}}\) over \(D_{Q}\) as
\[(v\otimes f,s_{+})=(v\otimes f,\gamma s_{-})\sim(\gamma(v)\otimes f,s_{-})\,,\]
for \(f\in\mathcal{O}_{\mathscr{C}}\). Choosing \(v\in V_{\ell}\), the element of (9) which in the \(s_{+}\) trivialization is represented by
\[\sum_{i,j\geq 0}v\otimes x_{i,j}s_{+}^{i}s_{-}^{j}s_{+}^{\ell-1}(d/ds_{+})^{\ell-1}\]
is represented with respect to the \(s_{-}\) trivialization as
\[\sum_{i,j\geq 0}\sum_{m=0}^{\ell}\frac{1}{m!}L_{1}^{m}v\otimes x_{i,j}s_{+}^{i}s_{-}^{j}s_{-}^{\ell-m-1}(d/ds_{-})^{\ell-m-1}.\]
More generally, one should consider a sum of such terms for various values of \(\ell\).

Finally we consider the sheaf \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\) on \(D_{\pm}^{\times}=\operatorname{Spec}(\mathbb{C}(\!(s_{\pm})\!)[\![q]\!])\). In \(D_{\pm}^{\times}\), as in \(D_{Q}\), we may use the functions \(s_{\pm}\) to trivialize our torsor. Consequently we have:
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(D_{\pm})=\bigoplus_{k\in\mathbb{N}}V_{k}\otimes_{\mathbb{C}}\mathbb{C}(\!(s_{\pm})\!)[\![q]\!]\ s_{\pm}^{k-1}(d/ds_{\pm})^{k-1}. \tag{10}\]
Without loss of generality, the trivializing coordinate \(s\) on \(U\) maps to our previously chosen trivializing coordinate \(s_{+}\) in \(D_{+}^{\times}\). That is, the map \(i_{+}\colon D_{+}^{\times}\hookrightarrow U\) corresponds to the map of rings
\[\widehat{B}\to\mathbb{C}(\!(s_{+})\!)[\![q]\!],\ s\mapsto s_{+}. \tag{11}\]
Although it is unnecessary here, to map \(s\) to both \(s_{+}\) and \(s_{-}\) simultaneously, one could work étale locally.

For notational convenience, it is useful to consider the action of \(\mathcal{A}ut\mathcal{O}\) on \(\mathfrak{L}(V)_{0}^{\mathrm{f}}\), the degree \(0\) part of the ancillary algebra, and to recall the notation (6). For \(\rho\in\mathcal{A}ut\mathcal{O}\) and a homogeneous element \(a\in V\), we have \(\rho J_{0}(a)=J_{0}(\rho a)\). Further, when we use a coordinate \(s\) to trivialize our torsor \(\mathcal{A}ut_{\mathscr{C}}\), we will identify the expression \(a_{[\deg(a)-1+k]}\) with the element \(J_{k}(a)\in\mathfrak{L}(V)_{-k}^{\mathrm{f}}\). Finally, we simplify notation further by omitting the factors of the form \(d/ds\) from our presentations.
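Before carrying out the gluing computation that follows, it may help to isolate the one piece of bookkeeping it relies on: on the formal neighborhood of the node, where \(s_{+}s_{-}=q\), a monomial \(s_{+}^{i}s_{-}^{j}\) equals \(s_{+}^{i-j}q^{j}\), which is why a term \(v\otimes s_{+}^{i}s_{-}^{j}\) contributes the mode \(J_{i-j}(v)\,q^{j}\) in (12) below. The following minimal sketch (ours, assuming SymPy; the helper name `eliminate_s_minus` is not from the text) simply records this substitution.

```python
import sympy as sp

q = sp.symbols('q', positive=True)
s_plus = sp.symbols('s_+', positive=True)

def eliminate_s_minus(i, j):
    """Rewrite s_+^i s_-^j using s_- = q / s_+ (the node relation s_+ s_- = q)."""
    s_minus = q / s_plus
    return sp.powsimp(s_plus**i * s_minus**j)

# s_+^3 s_-^1 -> q * s_+^2, so v (x) s_+^3 s_-^1 contributes J_2(v) q^1;
# the surviving s_+-exponent i - j is the resulting mode index.
assert eliminate_s_minus(3, 1) == q * s_plus**2

# s_+^1 s_-^3 -> q^3 * s_+^{-2}, contributing J_{-2}(v) q^3.
assert eliminate_s_minus(1, 3) == q**3 / s_plus**2
```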
Given \(\sigma_{U}\in\big{(}\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\big{)}\,(\operatorname{Spec}\widehat{B})\), we write \((\sigma_{U})_{\pm}\) for its restriction to \(D_{\pm}^{\times}\). Using the notation above, and following the explicit expressions of (8) and (10), if
\[\sigma_{U}=\sum_{\ell=0}^{k}v_{\ell}\otimes f_{\ell},\]
then, writing \((f_{\ell})_{+}\) for the expansion (restriction) of the regular function \(f_{\ell}\) to \(\mathbb{C}(\!(s_{+})\!)[\![q]\!]\), and \((g_{\ell})_{+}\) for the coefficient defined by \((f_{\ell})_{+}=(g_{\ell})_{+}s_{+}^{\ell-1}\), we have (as the coordinates are compatible)
\[(\sigma_{U})_{+}=\sum_{\ell=0}^{k}v_{\ell}\otimes(f_{\ell})_{+}=\sum v_{\ell}\otimes(g_{\ell})_{+}s_{+}^{\ell-1}.\]
On the other hand, suppose \(\sigma_{Q}\in\big{(}\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\big{)}\,(D_{Q})\) is written as \(\sum_{\ell=0}^{k}\sum_{i,j\geq 0}v_{\ell}\otimes x_{i,j}^{\ell}s_{+}^{i+\ell-1}s_{-}^{j}\). If the section \(\sigma_{Q}\), so represented, is to be compatible and glue together with the section \(\sigma_{U}\) above, we find that
\[\sum_{\ell=0}^{k}\sum_{i,j\geq 0}J_{0}(v_{\ell})x_{i,j}^{\ell}s_{+}^{i}s_{-}^{j}=\sum_{i,j,\ell}J_{0}(v_{\ell})x_{i,j}^{\ell}s_{+}^{i-j}q^{j}=\sum_{i,j,\ell}J_{i-j}(v_{\ell})x_{i,j}^{\ell}q^{j} \tag{12}\]
must represent the expression for \(\sigma_{U}\) restricted to \(D_{+}^{\times}\).

To express \(\sigma_{U}\) restricted to \(D_{-}^{\times}\), we will make use of the anti-isomorphism \(\theta\colon\mathfrak{L}(V)^{\mathsf{L}}\to\mathfrak{L}(V)^{\mathsf{R}}\) described in (4) and related to \(\gamma\) via Lemma 3.4.2. We then conclude that \(\sigma_{U}\) restricted to \(D_{-}^{\times}\) is given by the expression
\[\sum_{\ell=0}^{k}\sum_{i,j\geq 0}\gamma(J_{0}(v_{\ell}))x_{i,j}^{\ell}s_{+}^{i}s_{-}^{j}=\sum_{i,j,\ell}J_{0}(\gamma(v_{\ell}))x_{i,j}^{\ell}s_{-}^{j-i}q^{i}=\sum_{i,j,\ell}J_{j-i}(\gamma(v_{\ell}))x_{i,j}^{\ell}q^{i}=\sum_{i,j,\ell}\theta(J_{i-j}(v_{\ell}))x_{i,j}^{\ell}q^{i}.\]

**Remark 4.4.2**.: Through the above description, we have that the sheaf \(\mathcal{V}_{\mathscr{C}}\) discussed at length in [10] agrees, even on the boundary of \(\overline{\mathcal{M}}_{g,n}\), with the sheaf \(\mathcal{V}_{\mathscr{C}}\) described in [10].

We conclude with two lemmas which will be useful in our applications in the next section.

**Lemma 4.4.3**.: _Let \(\mathscr{C}\) be a family of curves over \(S\), possibly with nodal singularities. Consider a collection of sections \(P_{1},\dots,P_{n}\) such that \(\mathscr{C}\setminus P_{\bullet}=U\subset\mathscr{C}\) is affine, and let \(Q_{1},\dots,Q_{k}\subset\mathscr{C}\) be a finite collection of distinct closed points in \(U\) (possibly including nodes). Let \(D_{Q_{i}}=\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},Q_{i}}\) be the complete local ring at \(Q_{i}\) with maximal ideal \(\widehat{\mathfrak{m}}_{\mathscr{C},Q_{i}}\).
Then for any \(\ell\geq 0\) and any invertible sheaf of \(\mathcal{O}_{\mathscr{C}}\)-modules \(\mathcal{L}\), the natural map_
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)(U)\to\bigoplus_{i}\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},Q_{i}}/\left(\widehat{\mathfrak{m}}_{\mathscr{C},Q_{i}}\right)^{\ell})\]
_is surjective._

Proof.: As \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}=\bigcup\limits_{k}\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)_{\leq k}\), it suffices to show that the map
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)_{\leq k}(U)\to\bigoplus_{i}\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)_{\leq k}(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},Q_{i}}/\left(\widehat{\mathfrak{m}}_{\mathscr{C},Q_{i}}\right)^{\ell})\]
is surjective for all \(k\). Since the sheaf \(\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\mathcal{L}\right)_{\leq k}\) is free of finite rank over \(\mathcal{O}_{\mathscr{C}}\), this holds. Indeed, for any coherent sheaf of modules \(M\) on \(\mathscr{C}\), the natural map \(M(U)\to\bigoplus_{i}M(\operatorname{Spec}\mathcal{O}_{\mathscr{C}}(U)/\mathfrak{m}_{\mathscr{C},Q_{i}}(U)^{\ell})=\bigoplus_{i}M(U)\otimes_{\mathcal{O}_{\mathscr{C}}(U)}\mathcal{O}_{\mathscr{C}}(U)/\mathfrak{m}_{\mathscr{C},Q_{i}}(U)^{\ell}\) is seen to be surjective, using the fact that \(\mathcal{O}_{\mathscr{C}}(U)\to\bigoplus_{i}\mathcal{O}_{\mathscr{C}}(U)/\mathfrak{m}_{\mathscr{C},Q_{i}}(U)^{\ell}\) is surjective by the Chinese Remainder Theorem, and that tensoring with \(M\) is right exact.

**Lemma 4.4.4**.: _As in Section 4.3, let \(\mathscr{C}_{0}\) be a projective curve over \(\mathbb{C}\) with at least one node \(Q\), smooth and distinct points \(P_{\bullet}=(P_{1},\dots,P_{n})\) such that \(\mathscr{C}_{0}\setminus P_{\bullet}\) is affine, and formal coordinates \(t_{\bullet}=(t_{1},\dots,t_{n})\) at \(P_{\bullet}\). Let \(\eta\colon\widetilde{\mathscr{C}_{0}}\to\mathscr{C}_{0}\) be the partial normalization of \(\mathscr{C}_{0}\) at \(Q\), pointed by \(Q_{\pm}:=\eta^{-1}(Q)\), and choose formal coordinates \(s_{\pm}\) at \(Q_{\pm}\). Let \(W^{1},\dots,W^{n}\) be an \(n\)-tuple of \(V\)-modules. Then the map \(\alpha_{0}\colon W^{\bullet}\to W^{\bullet}\otimes\mathfrak{A}\) defined by \(\alpha_{0}(w)=w\otimes 1\) induces a map between the vector spaces of coinvariants_
\[[\alpha_{0}]\colon[W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\longrightarrow[W^{\bullet}\otimes\mathfrak{A}]_{\left(\widetilde{\mathscr{C}_{0}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm}\right)},\]
_which is an isomorphism in case \(V\) is \(C_{1}\)-cofinite._

Proof.: Suppose \(\mathscr{C}_{0}\) has \(m\) nodes in total (including \(Q\)) and let \(\widetilde{\mathscr{C}_{0}}^{\prime}\) be the (full) normalization of \(\mathscr{C}_{0}\). Following [13, Remark 3.4], we obtain maps inducing corresponding maps \([\alpha_{0}],[\alpha_{0}^{\prime}],[\alpha_{0}^{\prime\prime}]\) on the respective coinvariants, such that \([\alpha_{0}^{\prime}]\) is an isomorphism. It follows that \([\alpha_{0}]\) is injective, and it therefore remains only to show that it is also surjective.
For surjectivity, we follow the spirit of the proof of [13, Prop. 6.2.1]. We may represent an element of \(\mathfrak{A}\) as given by an expression \(a_{[n_{1}]}^{1}\cdots a_{[n_{k}]}^{k}\otimes 1\otimes b_{[m_{1}]}^{1}\cdots b_{[m_{r}]}^{r}\). For simplicity of notation, let us write \(a=a_{[n_{1}]}^{1}\cdots a_{[n_{k}]}^{k}\), \(a^{\prime}=a_{[n_{2}]}^{2}\cdots a_{[n_{k}]}^{k}\) and \(b=b_{[m_{1}]}^{1}\cdots b_{[m_{r}]}^{r}\). We will show that all elements of the form \([w\otimes(a\otimes 1\otimes b)]\) are in the image of \(\alpha_{0}\) by induction on \(k-m\), where \(m\) denotes the degree of \(b\) (note \(b\) has nonpositive degree by definition), the base case \(k-m=0\) being true by construction. For the induction step, let us suppose that \(k>0\) (the case \(m<0\) being similar), and let \(d_{+}^{\prime}\) be the degree of \(a^{\prime}\) and \(d_{-}=m\) the degree of \(b\). Without loss of generality, we may assume \(\deg(a_{[n_{1}]}^{1})\geq\cdots\geq\deg(a_{[n_{k}]}^{k})\geq 0\). By Lemma 4.4.3, setting \(\mathcal{L}=\omega_{\widetilde{\mathscr{C}_{0}}}(n_{1}Q_{+}+NQ_{-})\) for \(N>d_{-}-\deg(a^{1})\), we may find a section \(\sigma=a^{1}\otimes f\),
\[\sigma\in\left(\mathcal{V}_{\widetilde{\mathscr{C}_{0}}}\otimes_{\mathcal{O}_{\widetilde{\mathscr{C}_{0}}}}\omega_{\widetilde{\mathscr{C}_{0}}}(n_{1}Q_{+}+(d_{-}-1)Q_{-})\right)(\widetilde{\mathscr{C}_{0}}\setminus P_{\bullet})\subset\left(\mathcal{V}_{\widetilde{\mathscr{C}_{0}}}\otimes_{\mathcal{O}_{\widetilde{\mathscr{C}_{0}}}}\omega_{\widetilde{\mathscr{C}_{0}}}\right)(\widetilde{\mathscr{C}_{0}}\setminus P_{\bullet}\sqcup Q_{\pm})\]
such that the image \(\sigma^{\mathsf{L}}_{Q_{+}}\) of \(\sigma\) in \(\left(\mathcal{V}_{\widetilde{\mathscr{C}_{0}}}\otimes_{\mathcal{O}_{\widetilde{\mathscr{C}_{0}}}}\omega_{\widetilde{\mathscr{C}_{0}}}\right)(\widehat{\mathcal{O}_{\widetilde{\mathscr{C}_{0}},Q_{+}}})\cong\mathfrak{L}(V)^{\mathsf{L}}\) has the form \(a^{1}_{[n_{1}]}+\widetilde{a}\), where \(\deg(\widetilde{a})<-d^{\prime}_{+}\). By construction, \(\sigma^{\mathsf{L}}_{Q_{-}}\) has degree \(<d_{-}\), and consequently \(\sigma^{\mathsf{R}}_{Q_{-}}\) has degree \(>-d_{-}\). So we find \(\sigma^{\mathsf{L}}_{Q_{+}}(a^{\prime}\otimes 1\otimes b)=a\otimes 1\otimes b\) and \((a\otimes 1\otimes b)\sigma^{\mathsf{R}}_{Q_{-}}=0\). This tells us
\[\sigma\cdot\left(w\otimes(a^{\prime}\otimes 1\otimes b)\right)=(\sigma w)\otimes(a^{\prime}\otimes 1\otimes b)+w\otimes(a\otimes 1\otimes b),\]
yielding \([w\otimes(a\otimes 1\otimes b)]=-[(\sigma w)\otimes(a^{\prime}\otimes 1\otimes b)]\), completing the induction step.

## 5. Smoothing via strong unities

Here we prove Theorem 5.0.3, which relates the smoothing property of \(V\), described here in Definition 5.0.1, to the existence of strong unities in \(\mathfrak{A}\). Theorem 5.0.3 relies crucially on Proposition 5.1.2. These results are proved in Section 5.1. Geometric consequences regarding coinvariants are given in Section 5.2.

Throughout this section we will use the notation introduced in Section 4.3, considering two families of marked, parametrized curves \((\mathscr{C},P_{\bullet},t_{\bullet})\) and \((\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})\) over the base scheme \(S=\operatorname{Spec}(\mathbb{C}[\![q]\!])\). As usual, \(\mathfrak{A}_{0}=\mathsf{A}\).
**Definition 5.0.1**.: Given a family \((\mathscr{C},P_{\bullet},t_{\bullet})\) and a collection of \(V\)-modules \(W^{1},\ldots,W^{n}\), an element \(\mathscr{I}=\sum_{d\geq 0}\mathscr{I}_{d}q^{d}\in\mathfrak{A}[\![q]\!]\) defines a _smoothing map_ for \(W^{\bullet}\) over \((\mathscr{C},P_{\bullet},t_{\bullet})\) if \(\mathscr{I}_{0}=1\in\mathfrak{A}_{0}\), and the map \(W^{\bullet}\to W^{\bullet}\otimes\mathfrak{A}[\![q]\!]\), \(w\mapsto w\otimes\mathscr{I}\), extends by linearity and \(q\)-adic continuity to an \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\)-module homomorphism \(\alpha\colon W^{\bullet}[\![q]\!]\longrightarrow(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]\). We say that \(\mathscr{I}=\sum_{d\geq 0}\mathscr{I}_{d}q^{d}\in\mathfrak{A}[\![q]\!]\) defines a _smoothing map_ for \(V\) if it defines a smoothing map for all \(V\)-modules \(W^{\bullet}\), over all families \((\mathscr{C},P_{\bullet},t_{\bullet})\).

**Definition 5.0.2**.: Smoothing holds for \(W^{\bullet}\) over the family \((\mathscr{C},P_{\bullet},t_{\bullet})\) if there is an element \(\mathscr{I}=\sum_{d\geq 0}\mathscr{I}_{d}q^{d}\in\mathfrak{A}[\![q]\!]\) giving a smoothing map for \(W^{\bullet}\) over \((\mathscr{C},P_{\bullet},t_{\bullet})\). \(V\) _satisfies smoothing_ if smoothing holds for all \(W^{\bullet}\), over all families \((\mathscr{C},P_{\bullet},t_{\bullet})\).

**Theorem 5.0.3**.: _Let \(V\) be a VOA. Then the algebras \(\mathfrak{A}_{d}\) admit strong unities for all \(d\in\mathbb{N}\) if and only if \(V\) satisfies smoothing._

### Proof of Theorem 5.0.3

Following the idea of Definition/Lemma 3.3.1(2), we make the following definition:

**Definition 5.1.1**.: We say that a sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\), with \(\mathscr{I}_{d}\in\mathfrak{A}\), satisfies the _strong unity equations_ if for every homogeneous \(a\in V\) and \(n\in\mathbb{Z}\) such that \(n\leq d\), we have
\[J_{n}(a)\mathscr{I}_{d}=\mathscr{I}_{d-n}J_{n}(a). \tag{13}\]

In Definition 5.1.1 there is no assumption on the (bi-)degrees of the elements \(\mathscr{I}_{d}\in\mathfrak{A}\). However, if \(\mathscr{I}_{d}\in\mathfrak{A}_{d}\) is a unity for each \(d\), then by Definition/Lemma 3.3.1 they satisfy the strong unity equations if and only if they are strong unities.

**Proposition 5.1.2**.: _Let \(V\) be a VOA and let \(\mathscr{I}_{d}\in\mathfrak{A}\) for \(d\in\mathbb{N}\). Then \(\mathscr{I}=\sum\mathscr{I}_{d}q^{d}\) defines a smoothing map for \(W^{\bullet}\) over \((\mathscr{C},P_{\bullet},t_{\bullet})\) if and only if the sequence \((\mathscr{I}_{d})\) satisfies the strong unity equations (13)._

Proof.: The map \(\alpha\colon W^{\bullet}[\![q]\!]\to(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]\) is a map of \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\)-modules if and only if, for every \(\sigma\in\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\) and \(u\in W^{\bullet}\), one has \(\alpha(\sigma(u))=\sigma(\alpha(u))\). Here, the left hand side equals \((\sigma\cdot u)\otimes\mathscr{I}\). To describe the right hand side, as is explained in the beginning of Section 4.4, we recall that elements of the Lie algebra \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\) are represented by sections of the sheaf \(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\) over the affine open set \(\mathscr{C}\setminus P_{\bullet}\).
Consequently we can understand the right hand side in terms of the maps
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\mathscr{C}\setminus P_{\bullet})\to\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(D_{\pm}^{\times})\cong V\otimes_{\mathbb{C}}\mathbb{C}(\!(t)\!)[\![q]\!],\qquad\sigma\mapsto\sigma_{\pm}^{\mathsf{L}}.\]
We let \(\sigma_{-}^{\mathsf{R}}=\theta(\sigma_{-}^{\mathsf{L}})\in V\otimes_{\mathbb{C}}\mathbb{C}(\!(t^{-1})\!)[\![q]\!]\). We then have
\[\sigma(u\otimes\mathscr{I})=\sigma(u)\otimes\mathscr{I}+u\otimes(\sigma_{+}^{\mathsf{L}}\otimes 1+1\otimes\sigma_{-}^{\mathsf{R}})(\mathscr{I}).\]
It follows that \(\alpha\) is a map of \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\)-modules if and only if
\[\sigma\cdot\mathscr{I}=\left(\sigma_{+}^{\mathsf{L}}\otimes 1+1\otimes\sigma_{-}^{\mathsf{R}}\right)\cdot\mathscr{I}=0. \tag{14}\]
We now reframe this in the language developed towards the end of Section 4.4. For a section \(\sigma\in\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)_{\leq k}(\mathscr{C}\setminus P_{\bullet})\), using the relation \(s_{+}s_{-}=q\) in \(\widehat{\mathcal{O}}_{\mathscr{C},Q}\), we may write (in terms of the local trivializations of Section 4.4)
\[\sigma|_{D_{Q}}=\sum_{\ell=0}^{k}\sum_{i,j\geq 0}J_{0}(v_{\ell})\ x_{i,j}^{\ell}\ s_{+}^{i}s_{-}^{j},\]
and for this section \(\sigma\) we have
\[\sigma_{+}^{\mathsf{L}}=\sum_{\ell=0}^{k}\sum_{i,j\geq 0}J_{i-j}(v_{\ell})x_{i,j}^{\ell}q^{j}\quad\text{ and }\quad\sigma_{-}^{\mathsf{R}}=\sum_{\ell=0}^{k}\sum_{i,j\geq 0}J_{i-j}(v_{\ell})x_{i,j}^{\ell}q^{i}.\]
Putting this together with (14), we find that smoothing holds if and only if for all \(\sigma\) as above (and for all \(k\)), we have
\[\sum_{\ell=0}^{k}\sum_{i,j,d\geq 0}x_{i,j}^{\ell}\left(J_{i-j}(v_{\ell})\cdot\mathscr{I}_{d}\ q^{d+j}-\mathscr{I}_{d}\cdot J_{i-j}(v_{\ell})\ q^{d+i}\right)=0.\]
This in turn holds if and only if each coefficient of \(q^{m}\) is zero, translating to the statement
\[\sum_{\ell=0}^{k}\ \sum_{0\leq i,j\leq m}x_{i,j}^{\ell}\left(J_{i-j}(v_{\ell})\cdot\mathscr{I}_{m-j}-\mathscr{I}_{m-i}\cdot J_{i-j}(v_{\ell})\right)=0, \tag{15}\]
for every \(m\geq 0\). We note that the systems of equations
\[J_{n}(v_{\ell})\mathscr{I}_{d}=\mathscr{I}_{d-n}J_{n}(v_{\ell}),\ \text{with}\ n\leq d,\ \text{and}\ d\in\mathbb{N},v_{\ell}\in V_{\ell},\ell\in\mathbb{N},\]
and
\[J_{i-j}(v_{\ell})\cdot\mathscr{I}_{m-j}-\mathscr{I}_{m-i}\cdot J_{i-j}(v_{\ell})=0,\ \text{with}\ 0\leq i,j\leq m,\ \text{and}\ m\in\mathbb{N},v_{\ell}\in V_{\ell},\ell\in\mathbb{N},\]
are equivalent after the change of variables \(n=i-j\) and \(d=m-j\) (so that \(d-n=m-i\)). We have then shown that, if \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) satisfies the strong unity equations, then \(\mathscr{I}\) defines a smoothing map. It remains to show the converse, namely that if (15) holds for every \(\sigma\), then the strong unity equations hold.
We do this by the following strategy: we will show that for every \(0\leq i_{0},j_{0}\leq m\), \(m\in\mathbb{N}\) and \(v^{\prime}_{\ell_{0}}\in V_{\ell_{0}}\), we may find a section \(\sigma\in\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\mathscr{C}\setminus P_{\bullet})\) so that the expansion of \(\sigma\) at \(Q\) has the form
\[\sigma|_{D_{Q}}=\sum_{\ell=0}^{k}\sum_{i,j\geq 0}J_{0}(v_{\ell})\ x_{i,j}^{\ell}\ s_{+}^{i}s_{-}^{j}=J_{0}(v^{\prime}_{\ell_{0}})s_{+}^{i_{0}}s_{-}^{j_{0}}+\sum_{\ell=0}^{k}\sum_{i,j\geq m}J_{0}(v_{\ell})\ x_{i,j}^{\ell}\ s_{+}^{i}s_{-}^{j}. \tag{16}\]
That is, we argue that the coefficients \(x_{i,j}^{\ell}\) of the terms in (16) of degree less than \(m\) are only nonzero in the case \(i=i_{0},j=j_{0}\), and in this case \(x_{i_{0},j_{0}}=1\). For such a section \(\sigma\), (15) simply becomes \(J_{i_{0}-j_{0}}(v^{\prime}_{\ell_{0}})\cdot\mathscr{I}_{m-j_{0}}-\mathscr{I}_{m-i_{0}}\cdot J_{i_{0}-j_{0}}(v^{\prime}_{\ell_{0}})=0\), which, as has been noted, is equivalent to the strong unity equations once we run this argument for all \(i_{0},j_{0}\) and \(m\).

For this final step, we note that by Lemma 4.4.3 we have a surjective map
\[\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\mathscr{C}\setminus P_{\bullet})\to\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},Q}/(\widehat{\mathfrak{m}}_{\mathscr{C},Q})^{2m})\]
for every \(m\geq 0\). Hence there exists \(\sigma\in\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\mathscr{C}\setminus P_{\bullet})\) whose image in \(\left(\mathcal{V}_{\mathscr{C}}\otimes_{\mathcal{O}_{\mathscr{C}}}\omega_{\mathscr{C}/S}\right)(\operatorname{Spec}\widehat{\mathcal{O}}_{\mathscr{C},Q})\) is congruent modulo \((\widehat{\mathfrak{m}}_{\mathscr{C},Q})^{2m}=(s_{+},s_{-})^{2m}\) to \(J_{0}(v^{\prime}_{\ell_{0}})s_{+}^{i_{0}}s_{-}^{j_{0}}\). It follows that (16) holds for this \(\sigma\), as desired, and the proof is complete.

We note that already Proposition 5.1.2 shows that the smoothing property never depends on modules or specific families of curves:

**Corollary 5.1.3**.: _Smoothing holds for \(W^{\bullet}\) over a family \((\mathscr{C},P_{\bullet},t_{\bullet})\) if and only if \(V\) satisfies smoothing._

Proof.: If smoothing holds for \(W^{\bullet}\) over a family \((\mathscr{C},P_{\bullet},t_{\bullet})\), then there is an element \(\mathscr{I}=\sum\mathscr{I}_{d}q^{d}\) giving a smoothing map, and by Proposition 5.1.2 the sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) satisfies the strong unity equations. But then, invoking again Proposition 5.1.2, we deduce that this sequence defines a smoothing map for any choice of modules and family of curves.

In what follows, for an element \(b\in\mathfrak{A}=\oplus_{i,j}\mathfrak{A}_{i,j}\), we write \(b_{i,j}\in\mathfrak{A}_{i,j}\) for the corresponding homogeneous component of \(b\).

**Lemma 5.1.4**.: _For \(V\) a VOA, if the sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) with \(\mathscr{I}_{d}\in\mathfrak{A}\) satisfies the strong unity equations (13), then so does the sequence \((\mathscr{I}^{\prime}_{d})_{d\in\mathbb{N}}\) where \(\mathscr{I}^{\prime}_{d}:=(\mathscr{I}_{d})_{d,-d}\)._

Proof.: Suppose we have a sequence \((\mathscr{I}_{d})\) satisfying the strong unity equations. When we equate terms of like bidegree in the expression
\[J_{n}(a)\cdot\mathscr{I}_{d}=\mathscr{I}_{d-n}\cdot J_{n}(a),\]
we obtain
\[J_{n}(a)\cdot(\mathscr{I}_{d})_{i+n,j}=(\mathscr{I}_{d-n})_{i,j+n}\cdot J_{n}(a)\]
for every \(i,j\). In particular, for \(\mathscr{I}^{\prime}_{d}=(\mathscr{I}_{d})_{d,-d}\), we find that the strong unity equations (13) hold for the sequence \((\mathscr{I}^{\prime}_{d})_{d\in\mathbb{N}}\), as was claimed.
In what follows we will use the following equalities, which are a direct consequence of Proposition B.2.5. Let \(\mathfrak{a},\mathfrak{b}\in\mathfrak{A}\). Then for every \(u,w\in\mathscr{U}\) we have
\[u\cdot(\mathfrak{a}\star\mathfrak{b})=(u\cdot\mathfrak{a})\star\mathfrak{b}\qquad\text{ and }\qquad(\mathfrak{a}\star\mathfrak{b})\cdot w=\mathfrak{a}\star(\mathfrak{b}\cdot w). \tag{17}\]

**Lemma 5.1.5**.: _Suppose we have a collection of elements \(\mathscr{I}_{d}\in\mathfrak{A}_{d}\) for each \(d\geq 0\), with \(\mathscr{I}_{0}=1\in\mathfrak{A}_{0}\). Then \(\mathscr{I}_{d}\) is a strong unity in \(\mathfrak{A}_{d}\subset\mathfrak{A}\) for all \(d\in\mathbb{N}\) if and only if the sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) satisfies the strong unity equations (13)._

Proof.: Definition/Lemma 3.3.1(2) with \(\mathfrak{a}=J_{n}(v)\) implies that strong unities satisfy the strong unity equations (13), so we are left to prove the converse statement. To show that \(\mathscr{I}_{d}\) is a strong unity for each \(d\), it suffices to show that \(\mathscr{I}_{d}\) acts as the identity element on \(\mathfrak{A}_{d,e}\) for every \(e\in\mathbb{Z}\). That is, for every \(\mathfrak{a}\in\mathfrak{A}_{0,e}\), and \(n_{1}\leq\cdots\leq n_{r}<0\) with \(\sum n_{i}=-d\), we need to show
\[\mathscr{I}_{d}\star(J_{n_{1}}(v_{1})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a})=J_{n_{1}}(v_{1})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a}.\]
We argue by induction on \(r\). The base case \(r=0\) holds since by assumption \(\mathscr{I}_{0}=1\in\mathfrak{A}_{0}=\mathsf{A}\), hence \(\mathscr{I}_{0}\star\mathfrak{a}=1\cdot\mathfrak{a}=\mathfrak{a}\). For the inductive step, we write:
\[\mathscr{I}_{d}\star(J_{n_{1}}(v_{1})\cdot J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a})=\mathscr{I}_{d}\star((J_{n_{1}}(v_{1}))\,(J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a}))\]
\[\overset{(17)}{=}(\mathscr{I}_{d}\cdot J_{n_{1}}(v_{1}))\star(J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a})\]
\[\overset{(13)}{=}(J_{n_{1}}(v_{1})\cdot\mathscr{I}_{d+n_{1}})\star(J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a})\]
\[\overset{(17)}{=}J_{n_{1}}(v_{1})\cdot(\mathscr{I}_{d+n_{1}}\star(J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a}))\]
\[=J_{n_{1}}(v_{1})J_{n_{2}}(v_{2})\cdots J_{n_{r}}(v_{r})\cdot\mathfrak{a},\]
where the last identity holds by induction.

We may now complete the proof of Theorem 5.0.3.

Proof of Theorem 5.0.3.: Suppose the algebras \(\mathfrak{A}_{d}\) admit strong unities. Writing \(\mathscr{I}_{d}\) for these unities, we can apply Lemma 5.1.5 to deduce that the sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) satisfies the strong unity equations and therefore, by Proposition 5.1.2, the element \(\mathscr{I}=\sum\mathscr{I}_{d}q^{d}\) defines a smoothing map for any family of marked curves and choice of modules \(W^{\bullet}\). Hence \(V\) satisfies smoothing.

Conversely, if \(V\) satisfies smoothing, then there exists \(\mathscr{I}=\sum\mathscr{I}_{d}q^{d}\) which defines a smoothing map for any family of marked curves and choice of modules \(W^{\bullet}\); by Proposition 5.1.2, the sequence \((\mathscr{I}_{d})_{d\in\mathbb{N}}\) then satisfies the strong unity equations. Using Lemma 5.1.4 we may find a new sequence \((\mathscr{I}^{\prime}_{d})_{d\in\mathbb{N}}\) with \(\mathscr{I}^{\prime}_{d}\in\mathfrak{A}_{d}\) which also satisfies the strong unity equations. It follows from Lemma 5.1.5 that the elements \(\mathscr{I}^{\prime}_{d}\) are strong unities.
### Geometric results

We describe in this section some statements about coinvariants, most of which are implications of Theorem 5.0.3.

**Corollary 5.2.1**.: _For any VOA \(V\), let \(W^{\bullet}\) be \(V\)-modules such that the sheaf \([(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\) is coherent over \(S\). Assume that \(\mathfrak{A}_{d}\) admits a strong unity \(\mathscr{I}_{d}\) for every \(d\in\mathbb{N}\). Set \(\mathscr{I}=\sum_{d\geq 0}\mathscr{I}_{d}q^{d}\), and let \(\alpha\colon W^{\bullet}[\![q]\!]\to(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]\) be the map induced by \(w\mapsto w\otimes\mathscr{I}\) (see Definition 5.0.1). Then the diagram_
\[\begin{array}{ccc}[W^{\bullet}[\![q]\!]]_{(\mathscr{C},P_{\bullet},t_{\bullet})}&\xrightarrow{\ [\alpha]\ }&[(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\\ \downarrow&&\downarrow\\ [W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}&\xrightarrow{\ [\alpha_{0}]\ }&[W^{\bullet}\otimes\mathfrak{A}]_{(\widetilde{\mathscr{C}}_{0},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\end{array}\]
_commutes, where \(\alpha_{0}\colon W^{\bullet}\to W^{\bullet}\otimes\mathfrak{A}\) is given by \(w\mapsto w\otimes\mathscr{I}_{0}\)._

Proof.: The vertical maps are given by imposing the condition \(q=0\), and are surjective. After the identification of \(\mathfrak{A}\) with \(\Phi(\mathsf{A})\) provided in Lemma 3.4.5, we see that the map \([\alpha_{0}]\) is well defined as in [13, Proposition 3.3]. By the proof of Theorem 5.0.3, the map \(\alpha\) is a map of \(\mathcal{L}_{\mathscr{C}\setminus P_{\bullet}}(V)\)-modules, and hence induces the map \([\alpha]\) on coinvariants. Since \(\mathscr{I}_{0}=1\), the map \(\alpha\) reduces to \(\alpha_{0}\) modulo \(q\), so the diagram commutes.

We next prove an auxiliary result.

**Lemma 5.2.5**.: _Let \(V\) be a \(C_{1}\)-cofinite VOA that satisfies smoothing and such that \([W^{\bullet}[\![q]\!]]_{(\mathscr{C},P_{\bullet},t_{\bullet})}\) and \([(W^{\bullet}\otimes\mathfrak{A})[\![q]\!]]_{(\widetilde{\mathscr{C}},P_{\bullet}\sqcup Q_{\pm},t_{\bullet}\sqcup s_{\pm})}\) are coherent over \(S\). Then the map \([\alpha]\) defined in Corollary 5.2.1 is an isomorphism._

Proof.: Since \(V\) is \(C_{1}\)-cofinite, Lemma 4.4.4 ensures that \([\alpha_{0}]\) is an isomorphism. Since the source and target of \([\alpha]\) are finitely generated and the target is locally free (see Remark 5.2.4 (ii)), Nakayama's lemma ensures that \([\alpha]\) is an isomorphism as well.

To state the next results, we shall refer to the moduli stacks \(\widehat{\mathcal{M}}_{g,n}\), parametrizing families of stable pointed curves of genus \(g\) with coordinates, and \(\overline{\mathcal{J}}_{g,n}\), of stable pointed curves of genus \(g\) with first order tangent data, and projection maps
\[\widehat{\mathcal{M}}_{g,n}\to\overline{\mathcal{J}}_{g,n}\to\overline{\mathcal{M}}_{g,n},\]
discussed in detail in [13, §2]. Recall the notation from Remark 4.1.2.
**Corollary 5.2.6**.: _Let \(W^{1},\ldots,W^{n}\) be simple modules over a \(C_{1}\)-cofinite vertex operator algebra \(V\), such that coinvariants are coherent for curves of genus \(g\), and such that \(\mathfrak{A}_{d}(V)\) admit strong unities for all \(d\in\mathbb{Z}_{\geq 0}\). Then sheaves of coinvariants are locally free, giving rise to a vector bundle \(\mathbb{V}_{g}(V;W^{\bullet})^{\overline{\mathcal{J}}_{g,n}}\) on \(\overline{\mathcal{J}}_{g,n}\). If the conformal dimensions of \(W^{1},\ldots,W^{n}\) are rational, these sheaves define vector bundles \(\mathbb{V}_{g}(V;W^{\bullet})\) on \(\overline{\mathcal{M}}_{g,n}\)._

Proof.: Since a sheaf of \(\mathcal{O}_{S}\)-modules is locally free if and only if it is coherent and flat, in order to show a coherent sheaf \([W^{\bullet}]_{\mathcal{L}}\) is locally free, it suffices to show that it is flat. For this, we can use the valuative criteria of [10, Thm 11.8.1, §3] to reduce to the case that our base scheme is \(S=\operatorname{Spec}(\mathbb{C}[\![q]\!])\). By [11, Ex. II.5.8], since \(S\) is Noetherian and reduced, and since formation of coinvariants commutes with base change by Lemma 4.1.1, it suffices to check that vector spaces of coinvariants have the same dimension over all pointed and coordinatized curves.

Our strategy for checking that this condition holds is to argue by induction on the number of nodes, reducing to the base case where the curve has no nodes. To take the inductive step, following the notation of Corollary 5.2.1, let \(\mathscr{C}_{0}\) be a nodal curve over \(\mathbb{C}\) with \(k+1\) nodes, and let \(\mathscr{C}\to\operatorname{Spec}(\mathbb{C}[\![q]\!])\) be a smoothing family with \(\mathscr{C}_{0}\) the special fiber. By Proposition 4.3.4 and by Lemma 5.2.5, we deduce that \([\alpha]\) is an isomorphism, so that the dimension of the space of coinvariants associated with \(\mathscr{C}_{0}\) agrees with the dimension of the space of coinvariants for the partial normalization \(\widetilde{\mathscr{C}_{0}}\), a curve with \(k\) nodes. Therefore, by induction, the vector space \([W^{\bullet}]_{(\mathscr{C}_{0},P_{\bullet},t_{\bullet})}\) has the same dimension as the vector space of coinvariants associated with a smooth curve. We are then left to show that spaces of coinvariants associated with smooth curves of the same genus have the same dimensions. This holds since coinvariants \([W^{\bullet}]_{\mathcal{L}}\) are by assumption coherent, and moreover, when restricted to families of smooth coordinatized curves, they define a sheaf which admits a projectively flat connection [12, DGT21].

We have shown that \([W^{\bullet}]_{\mathcal{L}}\) is flat, giving rise to a coherent and locally free sheaf on \(\widehat{\mathcal{M}}_{g,n}\). As shown in [13], this sheaf of coinvariants descends to a sheaf of coinvariants \(\mathbb{V}_{g}(V;W^{\bullet})^{\overline{\mathcal{J}}_{g,n}}\) on \(\overline{\mathcal{J}}_{g,n}\). Moreover, for any collection of simple \(V\)-modules \(W^{\bullet}\) with rational conformal weights, as is explained in [12, §8.7.1], the sheaves are independent of coordinates and will further descend to vector bundles on \(\overline{\mathcal{M}}_{g,n}\), denoted \(\mathbb{V}_{g}(V;W^{\bullet})\).

**Remark 5.2.7**.: We note the following consequences of Corollary 5.2.6:
(a) For a collection of simple modules over a \(C_{2}\)-cofinite VOA, the sheaf of coinvariants will give vector bundles \(\mathbb{V}_{g}(V;W^{\bullet})\) on \(\overline{\mathcal{M}}_{g,n}\) whenever the algebras \(\mathfrak{A}_{d}(V)\) admit strong unities. To see this, we note that the coinvariants will be coherent by [10, Corollary 5.10], and by [11, Corollary 5.10] any simple module over a \(C_{2}\)-cofinite \(V\) has rational conformal weight.

(b) Combining Example 3.3.2 with Corollary 5.2.6, one may show that sheaves of coinvariants from \(C_{2}\)-cofinite and rational VOAs define vector bundles on \(\overline{\mathcal{M}}_{g,n}\), recovering [12, VB Corollary].

(c) By [10], sheaves defined by simple modules over VOAs that are generated in degree \(1\) are coherent over rational curves. If \(V\) satisfies smoothing, such sheaves of coinvariants descend to vector bundles \(\mathbb{V}_{0}(V;W^{\bullet})^{\overline{\mathcal{J}}_{0,n}}\) on \(\overline{\mathcal{J}}_{0,n}\). If the conformal dimensions of the modules are in \(\mathbb{Q}\), they further descend to vector bundles \(\mathbb{V}_{0}(V;W^{\bullet})\) on \(\overline{\mathcal{M}}_{0,n}\). Moreover, by [10], these bundles are globally generated. We refer to Section 7 and Corollary 7.4.1 for an application of this using the Heisenberg VOA.

## 6. Higher Zhu algebras and mode transition algebras

Recall that if any of the equivalent properties of Definition/Lemma 3.3.1 hold, we say that \(\mathscr{I}_{d}\in\mathfrak{A}_{d}\) is a strong unity. Here we prove Theorem 6.0.1, one of our two main results. In order to formulate it, we introduce the map
\[\mu_{d}\colon\mathfrak{A}_{d}\to\mathsf{A}_{d},\qquad\mu_{d}(\alpha\otimes u\otimes\beta)=[\alpha u\beta]_{d}.\]
This map is well defined and fits into an exact sequence (see Lemma B.3.1)
\[\mathfrak{A}_{d}\stackrel{{\mu_{d}}}{{\longrightarrow}}\mathsf{A}_{d}\stackrel{{\pi_{d}}}{{\longrightarrow}}\mathsf{A}_{d-1}\longrightarrow 0. \tag{18}\]

**Theorem 6.0.1**.:

(a) _If the mode transition algebra \(\mathfrak{A}_{d}\) admits a unity element, then the map \(\mu_{d}\) in (18) is injective, and the sequence splits, yielding an isomorphism \(\mathsf{A}_{d}\cong\mathfrak{A}_{d}\times\mathsf{A}_{d-1}\) as rings. In particular, if \(\mathfrak{A}_{j}\) admits a unity for every \(j\leq d\), then \(\mathsf{A}_{d}\cong\mathfrak{A}_{d}\oplus\mathfrak{A}_{d-1}\oplus\cdots\oplus\mathfrak{A}_{0}\)._

(b) _If \(\mathfrak{A}_{d}\) admits a strong unity for all \(d\in\mathbb{N}\), so that smoothing holds for \(V\), then given any generalized Verma module \(W=\Phi^{\mathsf{L}}(W_{0})=\oplus_{d\in\mathbb{N}}W_{d}\) where \(L_{0}\) acts on \(W_{0}\) as a scalar with eigenvalue \(c_{W}\in\mathbb{C}\), there is no proper submodule \(Z\subset W\) with \(c_{Z}-c_{W}>0\) for every eigenvalue \(c_{Z}\) of \(L_{0}\) on \(Z\) (see Remark 6.0.2)._

We note that Theorem B.3.3 specializes to Part (a) of Theorem 6.0.1. It therefore remains to prove Part (b) of Theorem 6.0.1.

Proof.: We say that an induced admissible module \(W=\Phi^{\mathsf{L}}(W_{0})\) has the LCW property if \(L_{0}\) acts on \(W_{0}\) as a scalar with eigenvalue \(c_{W}\in\mathbb{C}\), and there is no proper submodule \(Z\subset W\) with \(c_{Z}-c_{W}>0\) for every eigenvalue \(c_{Z}\) of \(L_{0}\) on \(Z\). Suppose for contradiction that \(V\) admits a module \(W=\Phi^{\mathsf{L}}(W_{0})\) which does not have the LCW property.
We will show that there must then be a \(d\in\mathbb{N}\) such that \(\mathfrak{A}_{d}\) is not unital, contradicting our assumptions. By hypothesis, \(W\) has a proper submodule \(Z\) with \(c_{Z}-c_{W}>0\) for every eigenvalue \(c_{Z}\) of \(L_{0}\) on \(Z\). In particular, \(Z\) is not induced in degree zero over \(\mathsf{A}\). Let \(z_{d}\) be any nonzero homogeneous element in \(Z\) of smallest degree \(d>0\), so that \(z_{d}\in W_{d}\). By assumption \(\mathfrak{A}_{d}\) is unital, with unity \(\mathfrak{u}_{d}=\sum_{i}\alpha_{i}\otimes 1\otimes\beta_{i}\), where each \(\alpha_{i}\) has degree \(d\) and each \(\beta_{i}\) has degree \(-d\). The action of \(\mathfrak{A}\) on \(W\) restricts to an action of \(\mathfrak{A}_{d}\) on \(W_{d}\), and since \(\mathfrak{u}_{d}\) is the unity of \(\mathfrak{A}_{d}\) we have
\[\mathfrak{A}_{d}\times W_{d}\longrightarrow W_{d},\qquad(\mathfrak{u}_{d},z_{d})\mapsto\mathfrak{u}_{d}\star z_{d}=z_{d}.\]
Unraveling the definition of \(\star\) and its associativity properties, we have \(\mathfrak{u}_{d}\star z_{d}=\sum_{i}\alpha_{i}\cdot(\beta_{i}\cdot z_{d})\). But now, since the degree of \(\beta_{i}\cdot z_{d}\) is zero and \(Z\) is a submodule, we have that \(\beta_{i}\cdot z_{d}\in Z\cap W_{0}=0\), since \(Z\) does not have a degree zero component. It then follows that \(z_{d}=\mathfrak{u}_{d}\star z_{d}=0\), giving a contradiction since \(z_{d}\neq 0\).

**Remark 6.0.2**.: Although the eigenvalues \(c_{Z}\) and \(c_{W}\) are in general complex numbers, the difference \(c_{Z}-c_{W}\) is always an integer, hence it makes sense to require that this number be positive. In fact, every eigenvalue of the action of \(L_{0}\) on \(W\) is obtained by shifting \(c_{W}\) by a non-negative integer. The condition \(c_{Z}-c_{W}>0\) then coincides with \(c_{Z}\neq c_{W}\). We remark that when \(V\) is \(C_{2}\)-cofinite, the eigenvalues of \(L_{0}\) are necessarily rational numbers [10].

## 7. Mode transition algebra for the Heisenberg vertex algebra

In this section we describe the mode transition algebras for the Heisenberg vertex algebra. This result is stated in Proposition 7.2.1 and, as a consequence, in Section 7.3 we obtain that [1, Conjecture 8.1] holds. We refer to [10, 11, 12, 13] for more details about this vertex algebra, denoted \(\pi\), \(V_{\mathfrak{h}}(1,\alpha)\), \(M_{a}(1)\) and \(M(1)_{a}\) in the literature, which we next briefly describe.

### Background on the Heisenberg vertex algebra

Let \(\mathfrak{h}=H\mathbb{C}(\!(t)\!)\oplus\mathbb{C}k\) be the extended Heisenberg algebra and consider the Heisenberg vertex algebra \(V=\pi\). Let \(U_{1}(\mathfrak{h})\) denote the quotient of the universal enveloping algebra \(U(\mathfrak{h})\) by the two sided ideal generated by \(k-1\). Following [10, Section 4.3], the Lie algebra \(\mathfrak{L}(V)^{\mathsf{L}}\) is naturally embedded inside
\[\overline{U(\mathfrak{h})}^{\mathsf{L}}:=\varprojlim_{N}\,U_{1}(\mathfrak{h})/U_{1}(\mathfrak{h})\cdot Ht^{N}\mathbb{C}[t].\]
The map is induced by \((b_{-1})_{[n]}\mapsto Ht^{n}\). This embedding induces a natural isomorphism between \(\mathscr{U}^{\mathsf{L}}\) and \(\overline{U(\mathfrak{h})}^{\mathsf{L}}\) which translates the filtration on \(\mathscr{U}^{\mathsf{L}}\) into the canonical filtration on \(\overline{U(\mathfrak{h})}^{\mathsf{L}}\) induced by the filtration on \(\mathbb{C}(\!(t)\!)\) given by \(F^{p}\mathbb{C}(\!(t)\!)=t^{-p}\mathbb{C}[t^{-1}]\).
A similar construction holds for \(\mathfrak{L}(V)^{\mathsf{R}}\) and \(\mathscr{U}^{\mathsf{R}}\), where the extended Heisenberg algebra \(\mathfrak{h}=H\mathbb{C}(\!(t)\!)\oplus\mathbb{C}k\) is replaced by \(\mathfrak{h}=H\mathbb{C}(\!(t^{-1})\!)\oplus\mathbb{C}k\). The subring \(\mathscr{U}\) of \(\mathscr{U}^{\mathsf{L}}\) and \(\mathscr{U}^{\mathsf{R}}\) has a natural gradation induced by \(\deg(Ht^{n})=-n\). We can then deduce that the associated zero mode algebra \(\mathfrak{A}_{0}\) is isomorphic to the commutative ring \(\mathbb{C}[x]\), where the element \((b_{-1})_{[0]}=H\in\mathscr{U}_{0}\) is identified with the variable \(x\). Combining these results we can explicitly compute all the mode transition algebras.

### Mode transition algebras for the Heisenberg vertex algebra

We can now state and prove the main result of this section.

**Proposition 7.2.1**.: _There is a natural identification \(\mathfrak{A}_{d}(\pi)\cong\operatorname{Mat}_{p(d)}(\mathbb{C}[x])\), where \(p(d)\) is the number of ways to decompose \(d\) into a sum of positive integers. In particular \(\mathfrak{A}_{d}\) is unital for every \(d\in\mathbb{N}\)._

Proof.: Denote by \(P(d)\) the set of partitions of \(d\) into positive integers, so that \(|P(d)|=p(d)\). We represent every element \([r_{1}|\cdots|r_{n}]=\boldsymbol{r}\in P(d)\) by a decreasing sequence of positive integers \(r_{1}\geq\cdots\geq r_{n}\geq 1\), for some \(n\in\mathbb{N}\), such that \(\sum_{i}r_{i}=d\). For every pair \((\boldsymbol{r},\boldsymbol{s})\in P(d)^{2}\), we denote by \(\varepsilon_{\boldsymbol{r},\boldsymbol{s}}\) the element in \(\mathfrak{A}_{d}\) given by
\[Ht^{-r_{1}}\circ\cdots\circ Ht^{-r_{n}}\otimes 1\otimes Ht^{s_{m}}\circ\cdots\circ Ht^{s_{1}}.\]
From the explicit description of \(\mathscr{U}\) given above, and the fact that the Zhu algebra \(\mathsf{A}=\mathbb{C}[x]\) at level zero is Abelian, we have that the set whose elements are \(\varepsilon_{\boldsymbol{r},\boldsymbol{s}}\) freely generates \(\mathfrak{A}_{d}\) as an \(\mathsf{A}\)-module. Moreover, using a computation similar to Example 3.2.3, one may show that there is a positive integer \(a(\boldsymbol{r})\), depending only on \(\boldsymbol{r}\), such that
\[\varepsilon_{\boldsymbol{r}^{\prime},\boldsymbol{s}}\star\varepsilon_{\boldsymbol{r},\boldsymbol{s}^{\prime}}=\begin{cases}a(\boldsymbol{r})\varepsilon_{\boldsymbol{r}^{\prime},\boldsymbol{s}^{\prime}}&\text{if }\boldsymbol{s}=\boldsymbol{r},\\ 0&\text{otherwise}.\end{cases}\]
By identifying \(\varepsilon_{\boldsymbol{r},\boldsymbol{s}}\) with the element of \(\operatorname{Mat}_{p(d)}(\mathbb{C})\) having \(\sqrt{a(\boldsymbol{r})}\sqrt{a(\boldsymbol{s})}\) in the \((\boldsymbol{r},\boldsymbol{s})\)-entry and zero elsewhere, the above description gives an isomorphism of rings between \(\mathfrak{A}_{d}\) and \(\mathbb{C}[x]\otimes\operatorname{Mat}_{p(d)}(\mathbb{C})=\operatorname{Mat}_{p(d)}(\mathbb{C}[x])\), as is claimed.

**Example 7.2.2**.: We can explicitly compute the coefficient \(a(\boldsymbol{r})\) appearing in the proof of Proposition 7.2.1. Let \(\boldsymbol{r}=[r_{1}|\dots|r_{n}]\) be a partition of \(d\) consisting of \(s\) distinct values \(r_{i_{1}},\dots,r_{i_{s}}\) (so \(s\leq n\)). For every \(j\in\{1,\dots,s\}\), let \(m_{j}\) be the multiplicity of \(r_{i_{j}}\) in \(\boldsymbol{r}\). Then we have
\[a(\boldsymbol{r})=\prod_{i=1}^{n}r_{i}\cdot\prod_{j=1}^{s}(m_{j}!).\]
For instance \(a([1|\cdots|1])=d!\) and \(a([d])=d\). Moreover \(a([r_{1}|\cdots|r_{n}])=r_{1}\cdots r_{n}\) if the \(r_{i}\)'s are all distinct.
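The combinatorics in Proposition 7.2.1 and Example 7.2.2 are easy to test numerically. The following is a small sketch of ours in Python (the helper names `partitions`, `coeff_a` and `eps` are ours, not from the text): it builds the matrices with entry \(\sqrt{a(\boldsymbol{r})a(\boldsymbol{s})}\) representing \(\varepsilon_{\boldsymbol{r},\boldsymbol{s}}\), checks the \(\star\)-product rule displayed above, and confirms that, under this identification, \(\sum_{\boldsymbol{r}}a(\boldsymbol{r})^{-1}\varepsilon_{\boldsymbol{r},\boldsymbol{r}}\) corresponds to the identity matrix, exhibiting a unity of \(\mathfrak{A}_{d}(\pi)\).

```python
import numpy as np
from collections import Counter
from math import factorial, prod, sqrt

def partitions(d, max_part=None):
    """All partitions of d as weakly decreasing tuples of positive integers."""
    if max_part is None:
        max_part = d
    if d == 0:
        yield ()
        return
    for first in range(min(d, max_part), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

def coeff_a(r):
    """a(r) = (product of the parts) * (product of multiplicity factorials)."""
    return prod(r) * prod(factorial(m) for m in Counter(r).values())

d = 4
P = list(partitions(d))                 # p(4) = 5 partitions
index = {r: i for i, r in enumerate(P)}

def eps(r, s):
    """Matrix representing eps_{r,s}: sqrt(a(r)a(s)) in the (r,s) entry."""
    M = np.zeros((len(P), len(P)))
    M[index[r], index[s]] = sqrt(coeff_a(r) * coeff_a(s))
    return M

# star-product rule: eps_{r',s} * eps_{r,s'} = a(r) eps_{r',s'} if s == r, else 0
for rp in P:
    for s in P:
        for r in P:
            for sp in P:
                expected = (coeff_a(r) if s == r else 0) * eps(rp, sp)
                assert np.allclose(eps(rp, s) @ eps(r, sp), expected)

# sum_r a(r)^{-1} eps_{r,r} corresponds to the identity matrix
u = sum(eps(r, r) / coeff_a(r) for r in P)
assert np.allclose(u, np.eye(len(P)))

# sanity checks from Example 7.2.2
assert coeff_a((1,) * d) == factorial(d)   # a([1|...|1]) = d!
assert coeff_a((d,)) == d                  # a([d]) = d
```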
### The conjecture of Barron and Addabbo

We now prove [1, Conjecture 8.1].

**Corollary 7.3.1**.: _For all \(d\in\mathbb{N}\), one has that \(\mathsf{A}_{d}(\pi)\cong\operatorname{Mat}_{p(d)}(\mathbb{C}[x])\oplus\mathsf{A}_{d-1}(\pi)\)._

Proof.: This follows from Proposition 7.2.1 and Part (a) of Theorem 6.0.1.

**Remark 7.3.2**.: By [1, Remark 4.2], \(\mathsf{A}_{0}(\pi)\cong\mathbb{C}[x]\) and \(\mathsf{A}_{1}(\pi)\cong\mathbb{C}[x]\oplus\mathsf{A}_{0}(\pi)\), and by [1, Theorem 7.1], \(\mathsf{A}_{2}(\pi)\cong\operatorname{Mat}_{p(2)}(\mathbb{C}[x])\oplus\mathsf{A}_{1}(\pi)\).

### Vector bundles from the Heisenberg VOA

We now equip \(\pi\) with a conformal vector \(\omega\), so that it becomes a VOA. The following result shows that the application of Theorem 6.0.1 produces new examples, beyond the well-studied case of sheaves of coinvariants defined by rational and \(C_{2}\)-cofinite VOAs. Let \(\overline{\mathcal{J}}_{0,n}\) be the stack parametrizing families of stable pointed curves of genus zero with first order tangent data, and recall that the forgetful map \(\overline{\mathcal{J}}_{0,n}\to\overline{\mathcal{M}}_{0,n}\) makes \(\overline{\mathcal{J}}_{0,n}\) a \(\mathbb{G}_{m}^{\oplus n}\)-torsor over \(\overline{\mathcal{M}}_{0,n}\).

**Corollary 7.4.1**.: _Sheaves of coinvariants defined by simple modules over the Heisenberg VOA form globally generated vector bundles on \(\overline{\mathcal{J}}_{0,n}\). If the conformal dimensions of the modules are in \(\mathbb{Q}\), these descend to form globally generated vector bundles on \(\overline{\mathcal{M}}_{0,n}\)._

Proof.: By Proposition 7.2.1, the mode transition algebras for the Heisenberg VOA are unital. Moreover, the formula for the star product implies that the unities are strong unities. Hence by Theorem 6.0.1, the Heisenberg VOA satisfies smoothing. Since the Heisenberg VOA is by definition generated in degree 1, the assertion follows from Corollary 5.2.6, as described in Remark 5.2.7 (c).

**Remark 7.4.2**.: Unlike bundles of coinvariants given by representations of rational and \(C_{2}\)-cofinite VOAs, higher Chern classes of the bundles on \(\overline{\mathcal{M}}_{g,n}\) from Corollary 5.2.6 (like those on \(\overline{\mathcal{M}}_{0,n}\) from Corollary 7.4.1) are not known to be elements of the tautological ring, since we do not know if they satisfy factorization, and hence we do not know that the Chern characters form a semisimple cohomological field theory as in [10, DGT22b].

## 8. Mode transition algebras for Virasoro VOAs

For \(c\in\mathbb{C}\), by \(\operatorname{Vir}_{c}=M_{c,0}/\langle L_{-1}\mathbf{1}\rangle\) we mean the (not necessarily simple) Virasoro VOA of central charge \(c\in\mathbb{C}\).

### \(\operatorname{Vir}_{c}\)

By [11], when \(c\neq c_{p,q}=1-\frac{6(p-q)^{2}}{pq}\), then \(\operatorname{Vir}_{c}\) is a simple VOA, but it is neither rational nor \(C_{2}\)-cofinite. When \(c=c_{p,q}\), the VOA \(\operatorname{Vir}_{c}\) is not simple, but its simple quotient \(L_{c}\) is rational and \(C_{2}\)-cofinite, and therefore satisfies smoothing. We therefore only consider \(\operatorname{Vir}_{c}\), for any value of \(c\), and not \(L_{c}\).

**Proposition 8.1.1**.: _Let \(\operatorname{Vir}_{c}\) be the Virasoro VOA._

(a) _The first mode transition algebra \(\mathfrak{A}_{1}(\operatorname{Vir}_{c})\) is not unital, and so \(\operatorname{Vir}_{c}\) does not satisfy smoothing._
(b) _The kernel of the canonical projection \(\mathsf{A}_{1}(\operatorname{Vir}_{c})\to\mathsf{A}_{0}(\operatorname{Vir}_{c})\) is isomorphic to \(\mathfrak{A}_{1}(\operatorname{Vir}_{c})\)._

Proof.: We first prove (a). By [12, Lemma 4.1], one has \(\mathsf{A}_{0}(\mathrm{Vir}_{c})\cong\mathbb{C}[t]\), where the class of \((L_{-2}1)_{[1]}\) is mapped to the generator \(t\). Here, as in the Heisenberg case, \(\mathfrak{L}(V)_{\pm 1}^{\mathrm{f}}\) is a one dimensional vector space, with generators denoted \(u_{\pm 1}\), so that \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})=u_{1}\mathsf{A}_{0}(\mathrm{Vir}_{c})u_{-1}\). We can choose \(u_{1}=(L_{-2}1)_{[0]}\) and \(u_{-1}=(L_{-2}1)_{[2]}\), and to understand the multiplicative structure of \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})\) we are only left to compute \([u_{-1},u_{1}]\). Since \(L_{-2}1\) is the conformal vector of \(\mathrm{Vir}_{c}\), we can identify \((L_{-2}1)_{[n]}\) with the element \(\mathcal{L}_{n-1}\) of the Virasoro algebra, and the bracket of \(\mathfrak{L}(\mathrm{Vir}_{c})\) coincides with the bracket in the Virasoro algebra. Hence we obtain
\[[u_{-1},u_{1}]=[(L_{-2}1)_{[2]},(L_{-2}1)_{[0]}]=[\mathcal{L}_{1},\mathcal{L}_{-1}]=2\mathcal{L}_{0}=2(L_{-2}1)_{[1]}.\]
We then have an identification of \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})\) with \((\mathbb{C}[t],+,\star)\), where \(+\) denotes the usual sum of polynomials, while \(f(t)\star g(t)=2tf(t)g(t)\). In particular, this implies that \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})\) is not unital: a unity \(e(t)\) would have to satisfy \(2te(t)f(t)=f(t)\) for all \(f(t)\), which is impossible in \(\mathbb{C}[t]\).

We now show (b). By [12], \(\mathsf{A}_{0}(\mathrm{Vir}_{c})\) is generated by \(L_{-2}\mathbf{1}+O_{0}(V)\) and \(L_{-2}^{2}\mathbf{1}+O_{0}(V)\), so that
\[\mathsf{A}_{0}(\mathrm{Vir}_{c})\cong\mathbb{C}[x,y]/(y-x^{2}-2x)\cong\mathbb{C}[x],\]
\[L_{-2}\mathbf{1}+O_{0}(V)\mapsto x+(q_{0}(x,y)),\qquad L_{-2}^{2}\mathbf{1}+O_{0}(V)\mapsto y+(q_{0}(x,y)),\]
where \(q_{0}(x,y)=y-x^{2}-2x\). By [1, Theorem 4.7], \(\mathsf{A}_{1}(\mathrm{Vir}_{c})\) is generated by \(L_{-2}\mathbf{1}+O_{1}(V)\) and \(L_{-2}^{2}\mathbf{1}+O_{1}(V)\), and by [1, Theorem 4.11] one has that
\[\mathsf{A}_{1}(\mathrm{Vir}_{c})\cong\mathbb{C}[x,y]/((y-x^{2}-2x)(y-x^{2}-6x+4)),\]
\[L_{-2}\mathbf{1}+O_{1}(V)\mapsto x+(q_{0}(x,y)q_{1}(x,y)),\qquad L_{-2}^{2}\mathbf{1}+O_{1}(V)\mapsto y+(q_{0}(x,y)q_{1}(x,y)),\]
where \(q_{0}(x,y)=y-x^{2}-2x\) and \(q_{1}(x,y)=y-x^{2}-6x+4\) (see also [1, §5]). With the change of variables \(X=y-x^{2}-6x+4\) and \(Y=y-x^{2}-2x\), one has
\[\mathsf{A}_{1}(\mathrm{Vir}_{c})=\mathbb{C}[X,Y]/(XY)\qquad\text{ and }\qquad\mathsf{A}_{0}(\mathrm{Vir}_{c})=\mathbb{C}[X],\]
so that the kernel of the projection \(\mathsf{A}_{1}(\mathrm{Vir}_{c})\to\mathsf{A}_{0}(\mathrm{Vir}_{c})\) is identified with the ideal \(K_{1}\) generated by \(Y\) inside \(\mathsf{A}_{1}(\mathrm{Vir}_{c})\). Since \(XY=0\), the ideal \(K_{1}\) is isomorphic to \((Y\mathbb{C}[Y],+,\cdot)\). Furthermore, this algebra is isomorphic to the algebra \((\mathbb{C}[t],+,\star)\) through the assignment \(Yf(Y)\mapsto f(2t)\). This shows that, abstractly, \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})\) is identified with the kernel of \(\mathsf{A}_{1}(\mathrm{Vir}_{c})\to\mathsf{A}_{0}(\mathrm{Vir}_{c})\).
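As a quick symbolic sanity check of the two claims just made, namely that \((\mathbb{C}[t],+,\star)\) admits no unit and that \(Yf(Y)\mapsto f(2t)\) intertwines the products, one can run the following sketch (ours, assuming SymPy; the names `star` and `phi` are not from the text).

```python
import sympy as sp

t, Y = sp.symbols('t Y')

def star(f, g):
    """The product on A_1(Vir_c) = (C[t], +, *): f * g = 2 t f g."""
    return sp.expand(2 * t * f * g)

f, g, h = 1 + t, t**2, 3 - t

# the star product is commutative and associative ...
assert star(f, g) == star(g, f)
assert star(star(f, g), h) == star(f, star(g, h))

# ... but not unital: f * e = 2 t f e vanishes at t = 0 for every e,
# so no e can satisfy f * e = f when f(0) != 0, e.g. for f = 1.

def phi(p):
    """The assignment Y f(Y) |-> f(2t), defined on the ideal Y*C[Y]."""
    return sp.expand(sp.cancel(p / Y).subs(Y, 2 * t))

p1, p2 = Y * (1 + Y), Y**3
assert phi(p1 * p2) == star(phi(p1), phi(p2))  # phi turns multiplication into star
```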
To check that indeed \(\mathfrak{A}_{1}(\mathrm{Vir}_{c})\) naturally identifies with the kernel of \(\mathsf{A}_{1}(\mathrm{Vir}_{c})\to\mathsf{A}_{0}(\mathrm{Vir}_{c})\), it is enough to show that \[\tilde{Y}-2(L_{-2}1)_{[0]}(L_{-2}1)_{[2]}\in N^{2}\mathscr{U}_{0},\] where \(\tilde{Y}\) is any lift of \(Y\) to \(\mathscr{U}_{0}\). We choose \[\tilde{Y}=(L_{-2}L_{-2}1)_{[3]}-(L_{-2}1)_{[1]}(L_{-2}1)_{[1]}-2(L_{-2}1)_{[1]}.\] To simplify the notation, we will now write \(\mathcal{L}_{n}\) to denote \((L_{-2}1)_{[n+1]}\). Using the Virasoro relations we obtain that this is the same as \[\tilde{Y} =2\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}+\mathcal{L}_{-1}\mathcal{L}_{1}+\mathcal{L}_{1}\mathcal{L}_{-1}+\mathcal{L}_{0}\mathcal{L}_{0}-\mathcal{L}_{0}\mathcal{L}_{0}-2\mathcal{L}_{0}\] \[=2\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}+\mathcal{L}_{-1}\mathcal{L}_{1}+\mathcal{L}_{1}\mathcal{L}_{-1}-2\mathcal{L}_{0}\] \[=2\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}+2\mathcal{L}_{-1}\mathcal{L}_{1}+2\mathcal{L}_{0}-2\mathcal{L}_{0}\] \[=2\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}+2\mathcal{L}_{-1}\mathcal{L}_{1}\] \[=2\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}+2(L_{-2}1)_{[0]}(L_{-2}1)_{[2]},\] and since \(\sum_{n\geq 2}\mathcal{L}_{-n}\mathcal{L}_{n}\in N^{2}\mathscr{U}_{0}\), the proof is complete. ## 9. Questions Here we ask a few further questions that arise from this work. ### Not rational and strongly generated in higher degree Keeping in mind the example of the Virasoro VOA from Section 8 and Theorem 6.0.1, we ask the following: **Question 9.1.1**.: For \(V\) a \(C_{2}\)-cofinite, non-rational VOA that is not generated in degree 1, can one always find a pair of modules \(Z\subset W\), where \(W=\Phi^{\mathsf{L}}(W_{0})\) is induced by an indecomposable \(\mathsf{A}_{0}(V)\)-module \(W_{0}\) on which \(L_{0}\) acts as a scalar with eigenvalue \(c_{W}\in\mathbb{C}\), and \(Z\subset W\) is a proper submodule with \(c_{Z}-c_{W}>0\) for every eigenvalue \(c_{Z}\) of \(L_{0}\) on \(Z\)? In Section 9.1.2 we provide an example of such a pair of modules \(Z\subset W\) for the triplet vertex operator algebra \(\mathcal{W}(p)\). This particular example was suggested to us in a communication with Thomas Creutzig. Simon Wood gave us a proof of Claim 9.1.3, a crucial detail for this example. The features of such an example (and that it should exist for the triplet) were described to us by Drazen Adamovic. #### 9.1.2. Triplet VOAs Let \(\mathcal{W}(p)\) denote the triplet vertex operator algebra. There are \(2p\) simple \(\mathcal{W}(p)\)-modules \(X_{s}^{+}\) and \(X_{s}^{-}\), for \(1\leq s\leq p\). Following [14, Eq (2.39)], we write \(\overline{X_{s}^{\pm}}\) for the quotient \(\mathsf{A}_{0}(X_{s}^{\pm})=X_{s}^{\pm}/I_{0}(X_{s}^{\pm})\); these are simple modules over the Zhu algebra \(\mathsf{A}=\mathsf{A}_{0}(\mathcal{W}(p))\). In this case, one also has, in the notation of [1], that \(\Omega(X_{s}^{\pm})=(X_{s}^{\pm})_{0}=\overline{X_{s}^{\pm}}\). In particular, since \(\overline{X_{s}^{\pm}}\) is an \(\mathsf{A}\)-module, we may consider \(\Phi^{\mathsf{L}}(\overline{X_{s}^{\pm}})\). Moreover, using for instance [14, Eq (3.8)], the eigenvalues of the action of \(L_{0}\) on the indecomposable modules \(\overline{X_{s}^{\pm}}\), i.e. the conformal weights, satisfy \(cw(X_{p-s}^{-})>cw(X_{s}^{+})\). The induced module \(\Phi^{\mathsf{L}}(\overline{X_{s}^{+}})\) can be identified with a quotient of the projective cover of \(X_{s}^{+}\), as follows. 
By [14, Proposition 4.5] (see also [14]) the projective cover \(P_{s}^{+}\) of \(X_{s}^{+}\) has a socle filtration of length three consisting of submodules \(S_{0}\subset S_{1}\subset S_{2}=P_{s}^{+}\) with \(S_{0}\cong X_{s}^{+}\cong S_{2}/S_{1}\) and \(S_{1}/S_{0}\cong 2X_{p-s}^{-}\). **Claim 9.1.3**.: \(\Phi^{\mathsf{L}}(\overline{X_{s}^{+}})\cong P_{s}^{+}/X_{s}^{+}\) Proof.: The \(\mathsf{A}\)-module \(\overline{X}^{+}_{s}\) is indecomposable, and as \(\Phi^{\mathsf{L}}\) takes indecomposable modules to indecomposable modules (e.g. [1]), one has that \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) is an indecomposable admissible \(\mathcal{W}(p)\)-module. It follows that \(\overline{X}^{+}_{s}\) is the weight space of least conformal weight in \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\), and as \(X^{+}_{s}\) is generated by its lowest weight space \(\overline{X}^{+}_{s}\), we get a canonical surjective map \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\to X^{+}_{s}\). By projectivity, the map from the projective cover \(P^{+}_{s}\to X^{+}_{s}\) lifts to a map \(P^{+}_{s}\to\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\). As this map is surjective on the lowest weight space (the weight space of \(X^{+}_{s}\)), and \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) is by construction generated by this subspace, the map \(P^{+}_{s}\to\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) is surjective, and so \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) is a quotient of \(P^{+}_{s}\). The kernel of this quotient must contain the socle (which is isomorphic to \(X^{+}_{s}\)), since otherwise \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) would have two composition factors isomorphic to \(X^{+}_{s}\), contradicting the size of its lowest weight space. The kernel cannot be larger, since otherwise \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) would admit a nontrivial extension by \(X^{-}_{p-s}\) (which, as noted, has greater conformal weight than \(X^{+}_{s}\)). Indeed, if we had such an extension \(0\to X^{-}_{p-s}\to E\to\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\to 0\), the lowest weight spaces of \(E\) and \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) would be isomorphic, producing a universal map \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\to E\) whose composition with the map in the above sequence would be the universal map from \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) to itself, that is, the identity. Consequently we would have a splitting of our exact sequence and the extension would be trivial. Thus \(\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})\) is isomorphic to \(P^{+}_{s}/X^{+}_{s}\). In particular, by Claim 9.1.3, \(W=\Phi^{\mathsf{L}}(\overline{X}^{+}_{s})=S_{2}/S_{0}\) has a submodule isomorphic to \(Z=S_{1}/S_{0}\cong 2X^{-}_{p-s}\), and the conformal weight \(cw(Z)=cw(X^{-}_{p-s})\) is then strictly larger than the conformal weight \(cw(W)=cw(\overline{X}^{+}_{s})\). **Proposition 9.1.4**.: \(\mathcal{W}(p)\) _does not satisfy smoothing._ Proof.: By Theorem 5.0.3 and Theorem 6.0.1, the example above shows that the triplet does not satisfy smoothing. #### 9.1.5. More general triplet vertex algebras In [11, 1, 1, 1] the more general triplet vertex algebras \(\mathcal{W}_{p_{+},p_{-}}\) with \(p_{\pm}\geq 2\) and \((p_{+},p_{-})=1\) are studied. By their results, for \(p_{+}=2\) and \(p_{-}\) odd, the \(\mathcal{W}_{p_{+},p_{-}}\) are \(C_{2}\)-cofinite and not rational. We would like to know the answer to Question 9.1.1 for this family of VOAs. #### 9.1.6. 
Other \(C_{2}\)-cofinite, non-rational VOAs from extensions In [10] the authors discover three new series of \(C_{2}\)-cofinite and non-rational VOAs, via application of the vertex tensor category theory of [12, 13], which are not directly related to the triplets. They also list certain modules for these examples. We would like to know the answer to Question 9.1.1 for these new families of VOAs. ### Local freeness in case \(V\) does not satisfy smoothing **Question 9.2.1**.: Are there particular choices of modules \(W^{\bullet}\) over a VOA \(V\) that does not satisfy smoothing for which the sheaves \(\mathbb{V}(V;W^{\bullet})\) form vector bundles on \(\overline{\mathcal{M}}_{g,n}\)? By Corollary 5.2.6, if \(V\) satisfies smoothing and the sheaves of coinvariants are coherent, then they form vector bundles. However, if \(V\) does not satisfy smoothing, it remains open whether these sheaves are locally free. For instance, one could ask this for the triplet vertex algebras, which do not satisfy smoothing but are \(C_{2}\)-cofinite, so that their representations define coherent sheaves on \(\overline{\mathcal{M}}_{g,n}\). ### Generalized Constructions In Appendix A.9 the notion of triples of associative algebras is introduced, and to a good triple (see Definition A.9.1) we associate many of the standard notions affiliated with a VOA, from higher level Zhu algebras to mode transition algebras (see Appendix B.1 and Appendix B.2). Some of the results proved here apply in this more general context. For instance, as was already noted in the introduction, the exact sequence in (5) and Part _(a)_ of Theorem 6.0.1 hold in this generality. It would be interesting to further develop this theory, and it is therefore natural to ask the following question: **Question 9.3.1**.: What are other examples of generalized (higher level) Zhu algebras and generalized mode transition algebras, beyond the context of VOAs? ## Appendix A Split filtrations This appendix contains a number of details about graded and filtered completions, and their relationships to one another. These serve to provide simple definitions of the building blocks of our constructions and uniform proofs of their properties. ### Filtrations The purpose of this first section is to provide a framework in which we can simultaneously discuss and compare filtered and graded versions of certain constructions. In particular, this will give us a language appropriate for dealing simultaneously with both graded and filtered versions of the universal enveloping algebra of a vertex operator algebra, which we recall in Definition 2.4.2. **Definition A.1.1** (Left and right filtrations).: Let \(X\) be an Abelian group. A left filtration on \(X\) is a sequence of subgroups \(X_{\leq n}\subset X_{\leq n+1}\subset X\) for \(n\in\mathbb{Z}\). Similarly, a right filtration on \(X\) is a sequence of subgroups \(X_{\geq n}\subset X_{\geq n-1}\subset X\) for \(n\in\mathbb{Z}\). **Remark A.1.2**.: If \(X\) has a left filtration of subgroups \(X_{\leq n}\), we may produce a right filtration by setting \(X_{\geq n}=X_{\leq-n}\). Hence the concepts of left and right filtrations are essentially equivalent. We will work in this section exclusively with left filtrations, but will eventually have use for both left and right filtrations. The reader should therefore keep in mind that the results in this section all have their right-handed counterparts. If \(X\) is a graded Abelian group, we can naturally regard it as filtered by setting \(X_{\leq n}=\bigoplus_{i\leq n}X_{i}\). 
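To fix ideas, here is a small illustrative example: take \(X=\mathbb{C}[t]\), graded by \(X_{n}=\mathbb{C}t^{n}\) for \(n\geq 0\) and \(X_{n}=0\) for \(n<0\). The associated left filtration \(X_{\leq n}\) consists of the polynomials of degree at most \(n\); it is exhaustive and separated in the sense of Definition A.1.4 below, the inclusion \(X\subset X\) provides a splitting in the sense of Definition A.1.5, and the corresponding right filtration of Remark A.1.2 is \(X_{\geq n}=X_{\leq-n}\), which vanishes for all \(n>0\). 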
**Notation A.1.3**.: If \(X\) is a filtered Abelian group and \(S\subset X\) is a subset, we write \(S_{\leq n}\) to mean \(S\cap X_{\leq n}\). We will now introduce some concepts which we will use throughout. **Definition A.1.4** (Exhaustive and separated filtrations).: Let \(X\) be a (left) filtered Abelian group. We say that the filtration on \(X\) is exhaustive if \(\bigcup_{n}X_{\leq n}=X\) and separated if \(\bigcap_{n}X_{\leq n}=0\). **Definition A.1.5** (Splittings of filtrations).: Given a (left) filtered Abelian group \(X\), we define the associated graded group to be \(\operatorname{gr}X=\bigoplus_{n}\left(X_{\leq n}/X_{\leq n-1}\right)\). A splitting of \(X\) is defined to be a graded subgroup \(X^{\prime}=\bigoplus_{n}X^{\prime}_{n}\subset X\) with \(X^{\prime}_{n}\subset X_{\leq n}\) such that for each \(n\), the induced map \(X^{\prime}_{n}\to(\operatorname{gr}X)_{n}\) is an isomorphism. **Definition A.1.6** (Split-filtered Abelian groups).: A split-filtered Abelian group is a filtered Abelian group \((X,\leq)\) together with a graded Abelian group \(X^{\prime}=\oplus_{n}X^{\prime}_{n}\) and an inclusion \(X^{\prime}\subset X\) which defines a splitting as in Definition A.1.5. **Notation A.1.7**.: For \(X\) a split-filtered Abelian group and \(x\in X_{\leq n}\), we write \(x_{n}\in X^{\prime}_{n}\) and \(x_{<n}\in X_{\leq n-1}\) for the unique elements such that \(x=x_{n}+x_{<n}\). **Example A.1.8** (Concentrated split-filtrations).: If \(X\) is an Abelian group with no extra structure, we may define a split-filtered structure on it, \(X[d]\), which we refer to as "concentrated in degree \(d\)," by: \[X[d]_{\leq p}=\begin{cases}0&\text{if }p<d,\\ X&\text{if }p\geq d,\end{cases}\qquad\text{ and }\qquad X[d]^{\prime}_{p}=\begin{cases}0&\text{if }p\neq d,\\ X&\text{if }p=d.\end{cases}\] If \(X\) is an Abelian group with no extra structure, we may define the trivial split-filtration on \(X\) to be \(X[0]\). **Example A.1.9**.: If \(X=\bigoplus_{n}X_{n}\) is a graded Abelian group, we may also consider it as a split-filtered Abelian group with respect to the filtration \(X_{\leq n}=\bigoplus_{p\leq n}X_{p}\). In this case the inclusion of \(X\) into itself provides the splitting. **Definition A.1.10** (Split-filtered maps).: If \(X\) and \(Y\) are split-filtered Abelian groups and \(d\in\mathbb{Z}\), we say that a group homomorphism \(f\colon Y\to X\) is a map of degree \(d\) if \(f(Y_{\leq p})\subset X_{\leq p+d}\) and \(f(Y^{\prime}_{p})\subset X^{\prime}_{p+d}\) for all \(p\). **Definition A.1.11** (Split-filtered subgroups).: If \(X\) and \(Y\) are split-filtered Abelian groups with \(Y\subset X\), we say \(Y\) is a split-filtered subgroup of \(X\) if the inclusion is a degree \(0\) map of split-filtered Abelian groups. **Lemma A.1.12**.: _Let \(f\colon X\to Y\) be a degree \(d\) homomorphism of split-filtered Abelian groups. Then \(\ker f\) is a split-filtered subgroup of \(X\)._ Proof.: We verify that \((\ker f)_{\leq p}=(\ker f^{\prime})_{p}+(\ker f)_{\leq p-1}\), where \(f^{\prime}\colon X^{\prime}_{p}\to Y^{\prime}_{p+d}\) is the restriction of \(f\). For this, we simply note that by definition, \(f\) induces a map \(X^{\prime}_{p}\oplus X_{\leq p-1}\to Y^{\prime}_{p+d}\oplus Y_{\leq p+d-1}\) which preserves the decomposition. The following lemma is straightforward to verify. **Lemma A.1.13**.: _Suppose \(f\colon Y\to X\) is a degree \(d\) map of split-filtered Abelian groups. 
Then restricting the filtration on \(X\) to the image of \(f\), we find \((\operatorname{im}f)_{\leq p}=\operatorname{im}\left(f\middle|_{Y_{\leq p-d}}\right)\). Further, \(\operatorname{im}f^{\prime}\subset\operatorname{im}f\) defines a splitting, giving \(\operatorname{im}f\) the structure of a split-filtered Abelian group._ **Lemma A.1.14**.: _Suppose \(f\colon Y\to X\) is a degree \(d\) map of split-filtered Abelian groups. Then \(\operatorname{coker}(f^{\prime})\subset\operatorname{coker}(f)\) defines a splitting, giving \(\operatorname{coker}(f)\) the structure of a split-filtered Abelian group._ Proof.: Via Lemma A.1.13, we know that \(\operatorname{im}(f^{\prime})\subset\operatorname{im}(f)\) defines a split-filtered structure on \(\operatorname{im}(f)\). As \(\operatorname{coker}(f)=\operatorname{coker}\left(\operatorname{im}(f)\to X\right)\), it therefore suffices to consider the case where \(f\) is injective. We then have, for each \(p\), a diagram of (split) short exact sequences \[\begin{array}{ccccccccc}0&\to&Y_{\leq p-1}&\to&Y_{\leq p}&\to&Y^{\prime}_{p}&\to&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\to&X_{\leq p-1}&\to&X_{\leq p}&\to&X^{\prime}_{p}&\to&0\end{array}\] where the vertical maps are injections. By the snake lemma, this gives a split short exact sequence of cokernels, so that \((X/Y)_{\leq p}=(X^{\prime}/Y^{\prime})_{p}\oplus(X/Y)_{\leq p-1}\). In particular, the inclusion \(X^{\prime}_{p}\to X_{\leq p}\) induces an inclusion \((X^{\prime}/Y^{\prime})_{p}\subset(X/Y)_{\leq p}\), giving our desired splitting. **Proposition A.1.15**.: _The category of split-filtered Abelian groups is an Abelian category which is cocomplete, i.e. closed under colimits._ Proof.: The fact that we have an Abelian category is a consequence of Lemma A.1.14, Lemma A.1.13 and Lemma A.1.12. By [22, Prop. 2.6.8], cocompleteness follows from being closed under direct sums, which can be checked by noticing that \(\bigoplus_{\lambda\in\Lambda}X^{\lambda}\) is split-filtered with respect to the graded subgroup \(\bigoplus_{\lambda\in\Lambda}(X^{\lambda})^{\prime}\). ### Modules and tensors **Definition A.2.1**.: Let \(R\) be a ring and \(M\) a left (or right) \(R\)-module. We say that \(M\) is a split-filtered \(R\)-module if it is a split-filtered Abelian group and \(M_{\leq n}\), \(M^{\prime}\) and \(M^{\prime}_{n}\) are \(R\)-submodules of \(M\) for all \(n\). **Definition/Lemma A.2.2**.: Let \(R\) be a split-filtered ring, \(M\) a split-filtered right \(R\)-module and \(N\) a split-filtered left \(R\)-module. Then \(M\otimes_{R}N\) is naturally a split-filtered Abelian group, by defining \((M\otimes_{R}N)_{\leq n}=\sum\limits_{p+q=n}M_{\leq p}\otimes_{R}N_{\leq q}\) and \((M\otimes_{R}N)^{\prime}_{n}=\bigoplus\limits_{p+q=n}M^{\prime}_{p}\otimes_{R}N^{\prime}_{q}\). Proof.: Consider first the case where \(R\) is concentrated in degree \(0\) (see Example A.1.8), so that multiplication by elements of \(R\) is a degree \(0\) map. In this case, \[M_{\leq p}\otimes_{R}N_{\leq q}=(M^{\prime}_{p}+M_{\leq p-1})\otimes_{R}(N^{\prime}_{q}+N_{\leq q-1})\\ =M^{\prime}_{p}\otimes_{R}N^{\prime}_{q}+M_{\leq p-1}\otimes_{R}N_{\leq q}+M^{\prime}_{p}\otimes_{R}N_{\leq q-1}\\ \subseteq(M^{\prime}\otimes_{R}N^{\prime})_{n}\oplus(M\otimes_{R}N)_{\leq n-1}.\] This shows \((M\otimes_{R}N)_{\leq n}\subseteq(M^{\prime}\otimes_{R}N^{\prime})_{n}\oplus(M\otimes_{R}N)_{\leq n-1}\). The other inclusion is straightforward. Next consider the general case, when \(R\) is nontrivially split-filtered, and consider the map \(\mu\colon M\otimes_{\mathbb{Z}}R\otimes_{\mathbb{Z}}N\to M\otimes_{\mathbb{Z}}N\) given by \(\mu(x\otimes r\otimes y)=xr\otimes y-x\otimes ry\). By definition, \(M\otimes_{R}N\) is the cokernel of this map. 
Regarding the domain and codomain as split-filtered via the first part of the proof, we see that this is a degree \(0\) map of split-filtered Abelian groups. So by Lemma A.1.14, the cokernel is split-filtered. ### Rings and ideals **Definition A.3.1** (Filtered rings).: If \(U\) is a filtered Abelian group with a (not necessarily associative, not necessarily unital) ring structure, we say that it is a filtered ring if \(U_{\leq p}U_{\leq q}\subset U_{\leq p+q}\). If \(U\) is a filtered ring and we are given a graded subring \(U^{\prime}\) providing a splitting, we say \(U\) is a split-filtered ring. **Lemma A.3.2**.: _Let \(U\) be a split-filtered ring and let \(S,T\subset U\) be arbitrary split-filtered additive subgroups. Then \(ST\) is split-filtered with \((ST)_{\leq n}=\sum_{p+q=n}S_{\leq p}T_{\leq q}\) and \((ST)^{\prime}_{n}=\sum_{p+q=n}S^{\prime}_{p}T^{\prime}_{q}\)._ Proof.: If we consider the tensor product \(S\otimes_{\mathbb{Z}}T\), with its split-filtered structure of Definition/Lemma A.2.2, we see that the multiplication map \(S\otimes_{\mathbb{Z}}T\to ST\subset U\) is a degree \(0\) map of split-filtered groups. The result now follows from Lemma A.1.13. **Lemma A.3.3**.: _Let \(U\) be a split-filtered associative, unital ring, and let \(X\subset U\) be a split-filtered additive subgroup. Then the ideal generated by \(X\) in \(U\) is also split-filtered, with homogeneous part the ideal of \(U^{\prime}\) generated by \(X^{\prime}\)._ Proof.: It follows from Definition/Lemma A.2.2 that \(U\otimes_{\mathbb{Z}}X\otimes_{\mathbb{Z}}U\) is split-filtered with homogeneous part \(U^{\prime}\otimes_{\mathbb{Z}}X^{\prime}\otimes_{\mathbb{Z}}U^{\prime}\). As the multiplication map \(U\otimes_{\mathbb{Z}}X\otimes_{\mathbb{Z}}U\to U\) is a map of degree \(0\), it follows that its image, the ideal generated by \(X\), is split-filtered. **Lemma A.3.4**.: _Suppose \(L\) is a split-filtered Lie algebra over a commutative (associative and unital) ring \(R\). Then the universal enveloping algebra \(U(L)\) is a split-filtered algebra with respect to the graded subalgebra \(U(L^{\prime})\subset U(L)\)._ Proof.: It follows from Definition/Lemma A.2.2 and Proposition A.1.15 that the tensor algebra \(T(L)\) is split-filtered with respect to \(T(L^{\prime})\). Let \(X\subset T(L)\) be the image of the map \(L\otimes_{\mathbb{Z}}L\to T(L)\) defined by \(x\otimes y\mapsto x\otimes y-y\otimes x-[x,y]\) (note that the tensor in the preimage is over \(\mathbb{Z}\), while those in the image are over \(R\)). As this is a map of degree \(0\), its image \(X\) is split-filtered, with homogeneous part spanned by the analogous expressions with homogeneous elements. By Lemma A.3.3, it follows that the ideal generated by \(X\) is also split-filtered. Finally, Lemma A.1.14 tells us that the quotient by this ideal, the universal enveloping algebra, is also split-filtered as described. ### Seminorms The algebraic structures which naturally arise in studying the universal enveloping algebras of a VOA come with additional topological structure in the form of a seminorm. In this section we will examine seminorms and their interactions with gradings, filtrations and split-filtrations. **Definition A.4.1**.: A system of neighborhoods of \(0\) in an Abelian group \(X\) is a collection of subgroups \(\mathrm{N}^{n}X\subset X\), \(n\in\mathbb{Z}\), with \(\mathrm{N}^{n}X\subset\mathrm{N}^{n-1}X\) and \(\bigcup_{n}\mathrm{N}^{n}X=X\). 
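As a concrete illustration of this definition, consider \(X=\mathbb{C}[t]\) with \(\mathrm{N}^{n}X=t^{n}\mathbb{C}[t]\) for \(n\geq 0\) and \(\mathrm{N}^{n}X=X\) for \(n\leq 0\): these subgroups decrease in \(n\) and their union is \(X\), so they form a system of neighborhoods of \(0\). In the language of Remark A.4.2 below, the associated seminorm is the \(t\)-adic norm \(|f|=e^{-\operatorname{ord}_{t}(f)}\) for \(f\neq 0\), and the separated completion of Definition/Lemma A.5.3 below is the power series ring \(\mathbb{C}[[t]]\). 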
**Remark A.4.2** (Systems of neighborhoods, seminorms and pseudometrics).: The notion of a system of neighborhoods is equivalent to the notion of an Abelian group seminorm, where we would set \(|x|=e^{-n}\) if \(x\in\mathrm{N}^{n}X\setminus\mathrm{N}^{n+1}X\), or \(|x|=0\) if \(x\in\bigcap_{n}\mathrm{N}^{n}X\). Such a seminorm also gives rise to a pseudometric by setting \(d(x,y)=|x-y|\). Finally, these give rise to a topology on \(X\) whose basis is given by open balls with respect to this pseudometric. We see that addition is continuous with respect to this topology. With this remark in mind, we will refer to systems of neighborhoods of \(0\) and seminorms interchangeably, and will often refer to an Abelian group with a system of neighborhoods of \(0\) as a seminormed Abelian group. **Remark A.4.3**.: It follows from the definition that a system of neighborhoods (and hence a seminorm) is precisely the same as an exhaustive right filtration. We may therefore consider the seminorm associated to either a right or a left exhaustive filtration (in view of Remark A.1.2). **Definition A.4.4** (Restriction of seminorms).: If \(X\) is a seminormed Abelian group and \(Y\subset X\) is a subgroup, we will consider \(Y\) a seminormed Abelian group via the restriction of the seminorm. That is, we set \(\mathrm{N}^{n}Y=\mathrm{N}^{n}X\cap Y\). **Definition A.4.5** (Seminormed rings and modules).: Let \(U\) be a ring which is seminormed as an Abelian group. We say that \(U\) is a seminormed ring if \(|xy|\leq|x||y|\), or, equivalently, \((\mathrm{N}^{p}U)(\mathrm{N}^{q}U)\subset\mathrm{N}^{p+q}U\). If \(M\) is a left \(U\)-module which is seminormed as an Abelian group, we say that it is a seminormed left module if \(|xm|\leq|x||m|\) for all \(x\in U\), \(m\in M\). **Remark A.4.6**.: It follows immediately from Definition A.3.1 that a filtered ring (not necessarily associative or unital) becomes a seminormed ring with respect to the seminorm induced by the filtration as in Remark A.4.3, and that the multiplication map is continuous with respect to the induced topology. **Warning A.4.7**.: We will often consider seminorms on rings which are not ring seminorms, but just Abelian group seminorms. **Definition A.4.8** (Split-filtered seminorms).: Let \(X\) be a split-filtered Abelian group. We say that a seminorm is split-filtered if each of its neighborhoods \(\mathrm{N}^{n}X\) is a split-filtered subgroup of \(X\). In this case we will simply refer to \(X\) as a split-filtered seminormed Abelian group. The following notion captures a property that we will often seek: that smaller filtered parts of a given filtered Abelian group lie in progressively smaller neighborhoods. **Definition A.4.9** (Tight seminorms).: Suppose \(X\) is a filtered seminormed Abelian group. We say that \(X\) is tightly seminormed if for all \(m,p\) there exists \(d\) such that \(X_{\leq-d}\subset\mathrm{N}^{m}X_{\leq p}\). **Lemma A.4.10**.: _Suppose \(X\) is a split-filtered seminormed Abelian group whose seminorm is tight. Then \(X^{\prime}\) is dense in \(X\)._ Proof.: Let \(x\in X\). We can choose \(n,p\) with \(x\in\mathrm{N}^{n}X_{\leq p}\subset X\). For any \(m\), we need to show that there exists \(x^{\prime}\in X^{\prime}\) with \(x-x^{\prime}\in\mathrm{N}^{m}X\). 
We can write \(\mathrm{N}^{n}X_{\leq p}=\mathrm{N}^{n}X^{\prime}_{p}\oplus\mathrm{N}^{n}X_{\leq p-1}\) and, iterating this expression, we find \(\mathrm{N}^{n}X_{\leq p}=\bigoplus_{i=p-d+1}^{p}\mathrm{N}^{n}X^{\prime}_{i}\oplus\mathrm{N}^{n}X_{\leq p-d}\) for any \(d>0\). But by the tightness of the seminorm, choosing \(d\gg 0\) we can ensure \(\mathrm{N}^{n}X_{\leq p-d}\subset X_{\leq p-d}\subset\mathrm{N}^{m}X_{\leq p-d}\subset\mathrm{N}^{m}X_{\leq p}\). In particular, we may write \(x=x^{\prime}+y\) with \(x^{\prime}\in\bigoplus_{i=p-d+1}^{p}\mathrm{N}^{n}X^{\prime}_{i}\subset X^{\prime}\) and \(y\in\mathrm{N}^{m}X_{\leq p}\), as desired. ### Graded and filtered completions **Definition A.5.1** (Graded-complete and filtered-complete Abelian groups).: Let \(X\) be a normed Abelian group. If \(X\) is graded, we say that it is graded-complete if each of the graded subspaces \(X_{n}\) is complete. If \(X\) is filtered, we say that it is filtered-complete if each subspace \(X_{\leq n}\) is complete. **Definition A.5.2** (Short homomorphisms).: Let \(X,Y\) be seminormed Abelian groups. A group homomorphism \(f\colon X\to Y\) is called a short (or metric) homomorphism if \(|f(x)|\leq|x|\) for all \(x\in X\). **Definition/Lemma A.5.3** (Separated completions).: Let \(X\) be a seminormed Abelian group. Then we may form the (separated) completion \(\widehat{X}\) of \(X\), which is a complete normed Abelian group equipped with a short map \(\iota\colon X\to\widehat{X}\) which is universal for short maps to complete normed Abelian groups. That is, for every complete normed Abelian group \(Y\) and short homomorphism \(X\to Y\), there is a unique factorization of this map as \(X\xrightarrow{\iota}\widehat{X}\to Y\). This can be constructed in the usual way via equivalence classes of Cauchy sequences. The following Lemma is a consequence of the fact that a metric space maps injectively into its completion: **Lemma A.5.4**.: _Let \(X\) be a seminormed Abelian group. Then the canonical map \(X\to\widehat{X}\) has kernel \(\bigcap_{n\in\mathbb{Z}}\mathrm{N}^{n}X\). In particular, \(X\to\widehat{X}\) is injective exactly when the seminorm on \(X\) is actually a norm._ **Lemma A.5.5**.: _Let \(W\subset Z\subset X\) be subgroups of a seminormed Abelian group \(X\). Then, in the induced seminorm on \(Z/W\), the separated completion of \(Z/W\) can be identified with \(\widehat{Z}/\widehat{W}\), where \(\widehat{Z}\) and \(\widehat{W}\) can be identified with the closures of the images of \(Z\) and \(W\) in \(\widehat{X}\) respectively._ Proof.: The latter identification of completions and closures is straightforward to check. We note that there is a universal map \(\widehat{Z}\to\widehat{Z/W}\) of separated completions with \(W\) in the kernel. But as the image is Hausdorff, it follows that \(\overline{W}\) must also be in the kernel. But now we see that the map \(Z/W\to\widehat{Z}/\widehat{W}\) is therefore universal, giving us \(\widehat{Z}/\widehat{W}\cong\widehat{Z/W}\) as desired. We have various closely related universal constructions as follows. **Definition/Lemma A.5.6** (Filtered and graded completions).: Let \(X\) be a seminormed Abelian group. If \(X\) is graded, then we can construct a short homomorphism \(X\to\widehat{X}^{\mathfrak{g}}\) which is universal for short homomorphisms to graded-complete Abelian groups. If \(X\) is filtered, then we can construct a short homomorphism \(X\to\widehat{X}^{\mathfrak{f}}\) which is universal for short homomorphisms to filtered-complete Abelian groups. 
Proof.: We set \(\widehat{X}^{\mathfrak{g}}=\bigoplus_{n}\widehat{X}^{\mathfrak{g}}_{n}\) where \(\widehat{X}^{\mathfrak{g}}_{n}=\widehat{X_{n}}\), and \(\widehat{X}^{\mathfrak{f}}=\bigcup_{n}\widehat{X}^{\mathfrak{f}}_{\leq n}\) where \(\widehat{X}^{\mathfrak{f}}_{\leq n}=\widehat{X_{\leq n}}\). **Remark A.5.7**.: These are also described in [10] as the degreewise completion and the filterwise completion respectively. **Lemma A.5.8**.: _For \(X\) a split-filtered tightly seminormed Abelian group, \(\widehat{X}^{\mathfrak{f}}\) is a split-filtered tightly seminormed Abelian group with respect to the graded subgroup \(\widehat{X^{\prime}}^{\mathfrak{g}}\)._ Proof.: Let us first note that the natural morphism \(\widehat{X^{\prime}}^{\mathfrak{g}}\to\widehat{X}^{\mathfrak{f}}\) is an inclusion. For this, suppose that we have a pair of Cauchy sequences \((a_{n}),(b_{n})\) in \(X^{\prime}_{p}\) which have the same image in \(\widehat{X}^{\mathfrak{f}}\). Without loss of generality, we may select subsequences and re-index (possibly after modifying our starting index to an appropriate integer), and assume \(a_{n}-b_{n}\in\mathrm{N}^{n}X_{\leq p}\) for all \(n\). But then \(\mathrm{N}^{n}X^{\prime}_{p}=\mathrm{N}^{n}X_{\leq p}\cap X^{\prime}_{p}\) tells us that these Cauchy sequences have the same limit in \(\widehat{X^{\prime}}^{\mathfrak{g}}\) as well, giving injectivity. Next we check that \(\widehat{X}^{\mathfrak{f}}_{\leq p-1}\cap\widehat{X^{\prime}}^{\mathfrak{g}}_{p}=0\). Suppose we have an equality of classes of Cauchy sequences \((x_{n})=(y_{n})\), where \(x_{n}\in X_{\leq p-1}\), \(y_{n}\in X^{\prime}_{p}\) and \(x_{n}-y_{n}\in\mathrm{N}^{n}X_{\leq p}\). We claim that, for any given \(d>0\), we may replace \((y_{n})\) by an equivalent Cauchy sequence with \(y_{n}\in X_{\leq p-d}\). To see this by induction, suppose \(y_{n}\in X_{\leq p-(d-1)}\); we use the fact that our seminorm is split-filtered to write \(y_{n}=y^{\prime}_{n}+y^{\prime\prime}_{n}\) with \(y^{\prime}_{n}\in X_{\leq p-d}\) and \(y^{\prime\prime}_{n}\in X^{\prime}_{p-(d-1)}\). As \(x_{n}-y_{n}=x_{n}-y^{\prime}_{n}-y^{\prime\prime}_{n}\in\mathrm{N}^{n}X_{\leq p}\) and \[\mathrm{N}^{n}X_{\leq p}=\mathrm{N}^{n}X^{\prime}_{p}+\mathrm{N}^{n}X_{\leq p-1}=\cdots=\mathrm{N}^{n}X^{\prime}_{p}\oplus\mathrm{N}^{n}X^{\prime}_{p-1}\oplus\cdots\oplus\mathrm{N}^{n}X^{\prime}_{p-d+1}\oplus\mathrm{N}^{n}X_{\leq p-d},\] by uniqueness of our expressions we have \(y^{\prime\prime}_{n}\in\mathrm{N}^{n}X^{\prime}_{p-d+1}\) for all \(n\). Consequently \(\lim\limits_{n\to\infty}y^{\prime\prime}_{n}=0\), which says that the Cauchy sequences \((y_{n})\) and \((y^{\prime}_{n})\) are equivalent. This verifies our claim. Now, we claim that \((y_{n})=0\). By definition of the completion, this amounts to \((y_{n})\in\mathrm{N}^{m}X_{\leq p-1}\) for all \(m\). By our hypothesis, for any \(m\) there exists \(d^{\prime}\) such that \(X_{\leq-d^{\prime}}\subset\mathrm{N}^{m}X_{\leq p-1}\). In particular, choosing \(d=d^{\prime}+p\) in the prior argument, we find that we may choose a Cauchy sequence \((y^{\prime}_{n})\) equivalent to the first, with \(y^{\prime}_{n}\in X_{\leq-d^{\prime}}\subset\mathrm{N}^{m}X_{\leq p-1}\), showing that \((y^{\prime}_{n})\in\mathrm{N}^{m}\widehat{X}^{\mathfrak{f}}_{\leq p-1}\) for all \(m\), verifying our claim. Now we check \(\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}_{\leq p}\subset\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}_{\leq p-1}+\mathrm{N}^{n}\widehat{X^{\prime}}^{\mathfrak{g}}_{p}\). For this, let \(\sum x_{i}\in\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}_{\leq p}\) be a convergent infinite series. 
Without loss of generality, we may assume \(x_{i}\in\mathrm{N}^{n+i}X_{\leq p}\) for all \(i\). As our seminorm is split-filtered, we can write \(x_{i}=(x_{i})_{p}+(x_{i})_{<p}\) as in Notation A.1.7. But now we see that the sums \(\sum_{i}(x_{i})_{p}\) and \(\sum_{i}(x_{i})_{<p}\) converge in \(\mathrm{N}^{n}\widehat{X^{\prime}}^{\mathfrak{g}}_{p}\) and \(\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}_{\leq p-1}\) respectively, showing that \(\sum x_{i}\in\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}_{\leq p-1}+\mathrm{N}^{n}\widehat{X^{\prime}}^{\mathfrak{g}}_{p}\) as desired. As \(\widehat{X}^{\mathfrak{f}}=\bigcup_{n}\mathrm{N}^{n}\widehat{X}^{\mathfrak{f}}\) and \(\widehat{X^{\prime}}^{\mathfrak{g}}=\bigcup_{n}\mathrm{N}^{n}\widehat{X^{\prime}}^{\mathfrak{g}}\), we conclude that \(\widehat{X}^{\mathfrak{f}}\) is split-filtered seminormed by Proposition A.1.15. To check that it is tightly seminormed, we notice that whenever \(X_{\leq-d}\subset\mathrm{N}^{m}X_{\leq p}\), we find, upon taking closures in \(\widehat{X}^{\mathfrak{f}}_{\leq p}\), that \(\widehat{X}^{\mathfrak{f}}_{\leq-d}\subset\mathrm{N}^{m}\widehat{X}^{\mathfrak{f}}_{\leq p}\), showing that \(\widehat{X}^{\mathfrak{f}}\) is also tightly seminormed (with the same choice of \(d\) for a given \(m,p\)). **Remark A.5.9**.: In light of this result, it makes sense to refer to \(\widehat{X}^{\mathfrak{f}}\) as the completion of \(X\), when \(X\) is split-filtered, with the understanding that the graded subgroup is given by \(\widehat{X^{\prime}}^{\mathfrak{g}}\). In the case \(X=\widehat{X}^{\mathfrak{f}}\), we say \(X\) is complete. **Definition/Lemma A.5.10**.: Suppose \(X\) is a split-filtered, tightly seminormed, complete Abelian group, and suppose \(Y\subset X\) is a split-filtered subgroup. Define the closure \(\overline{Y}\) of \(Y\) in \(X\) to be the filtered subgroup with \(\overline{Y}_{\leq p}\) the closure of the image of \(Y_{\leq p}\) in \(X_{\leq p}\) and \(\overline{Y^{\prime}}_{p}\) the closure of the image of \(Y^{\prime}_{p}\) in \(X^{\prime}_{p}\). Then \(\overline{Y}\) is split-filtered with respect to the graded subgroup \(\overline{Y^{\prime}}\). Proof.: It follows from the definition that the restriction of a tight seminorm is again a tight seminorm. As we can identify the closures with the completions by Lemma A.5.5, the result follows from Lemma A.5.8. ### Canonical seminorms The seminorms used in studying universal enveloping algebras of VOAs arise in a very specific way, as described in [15, 11, 16, 17, 18]. We will recall a generalized definition of these seminorms, as in [15], and then examine some abstract features in the context of split-filtrations, which will allow us to relate the filtered and graded versions. **Definition A.6.1** (The canonical seminorm).: Let \(U\) be a filtered ring. The canonical system of neighborhoods on \(U\) is defined by \({}^{c}\mathrm{N}^{n}U=UU_{\leq-n}\) (a left ideal of \(U\) if \(U\) is associative). We will write \({}^{c}|\cdot|\) for the corresponding canonical seminorm. **Lemma A.6.2**.: _Suppose \(U^{\prime}\subset U\) is a split-filtered ring. Then the canonical seminorm is split-filtered and tight._ Proof.: Suppose \(u\in{}^{c}\mathrm{N}^{n}U_{\leq p}\). By Lemma A.3.2, we can write \(u\) as a sum of elements of the form \(\alpha\beta\) with \(\alpha\in U_{\leq a}\) and \(\beta\in U_{\leq b}\), with \(a+b=p\) and \(b\leq-n\). 
Using our splitting, we may write \(\alpha=\overline{\alpha}+\alpha^{\prime}\) and \(\beta=\overline{\beta}+\beta^{\prime}\) with \(\overline{\alpha}\in U^{\prime}_{a}\), \(\alpha^{\prime}\in U_{\leq a-1}\), \(\overline{\beta}\in U^{\prime}_{b}\), \(\beta^{\prime}\in U_{\leq b-1}\), and so we have \(\alpha^{\prime}\overline{\beta}\in{}^{c}\mathrm{N}^{n}U_{\leq p-1}\), giving us: \[\alpha\beta=\overline{\alpha}\overline{\beta}+\overline{\alpha}\beta^{\prime}+\alpha^{\prime}\overline{\beta}+\alpha^{\prime}\beta^{\prime}\in{}^{c}\mathrm{N}^{n}U^{\prime}_{p}+{}^{c}\mathrm{N}^{n}U_{\leq p-1}+{}^{c}\mathrm{N}^{n+1}U_{\leq p-1}={}^{c}\mathrm{N}^{n}U^{\prime}_{p}+{}^{c}\mathrm{N}^{n}U_{\leq p-1}.\] It follows that \({}^{c}\mathrm{N}^{n}U_{\leq p}\subset{}^{c}\mathrm{N}^{n}U_{\leq p-1}+{}^{c}\mathrm{N}^{n}U^{\prime}_{p}\), and hence \({}^{c}\mathrm{N}^{n}U_{\leq p}={}^{c}\mathrm{N}^{n}U_{\leq p-1}+{}^{c}\mathrm{N}^{n}U^{\prime}_{p}\), showing the canonical seminorm is split-filtered. To check that it is tight, we simply notice that for any \(m,p\) and any \(d\geq\max\{m,-p\}\), we have \(U_{\leq-d}\subset U_{\leq p}\cap{}^{c}\mathrm{N}^{m}U={}^{c}\mathrm{N}^{m}U_{\leq p}\). The following Lemmas are easily verified. **Lemma A.6.3**.: _Let \(U\) be a filtered associative ring. Then for any \(p,q,n\in\mathbb{Z}\), we have_ \[({}^{c}\mathrm{N}^{n}U_{\leq p})\,U_{\leq q}\subset{}^{c}\mathrm{N}^{n-q}U_{\leq p+q}\quad\text{ and }\quad U_{\leq p}\,({}^{c}\mathrm{N}^{n}U_{\leq q})\subset{}^{c}\mathrm{N}^{n}U_{\leq p+q}.\] **Lemma A.6.4**.: _Let \(f\colon X\to Y\) be a surjective filtered homomorphism of filtered associative rings. Then \(f({}^{c}\mathrm{N}^{n}X)={}^{c}\mathrm{N}^{n}Y\)._ By Lemma A.6.5, a useful property of the canonical topology is that multiplication is continuous with respect to it, at least when restricted to the various filtered parts. **Lemma A.6.5**.: _Let \(U\) be a filtered associative ring equipped with a seminorm such that_ \[(\mathrm{N}^{n}U_{\leq p})\,U_{\leq q}\subset\mathrm{N}^{n-q}U_{\leq p+q}\quad\text{ and }\quad U_{\leq p}\,(\mathrm{N}^{n}U_{\leq q})\subset\mathrm{N}^{n}U_{\leq p+q}.\] _Then for any \(p,q\), the multiplication map \(U_{\leq p}\times U_{\leq q}\to U_{\leq p+q}\) is continuous with respect to the seminorm in both variables. Consequently, the completion \(\widehat{U}^{\mathfrak{f}}\) naturally has the structure of an associative ring._ **Remark A.6.6**.: It follows that under these hypotheses, if \(U\) and its seminorm are split-filtered, then the multiplication map \(U^{\prime}_{p}\times U^{\prime}_{q}\to U^{\prime}_{p+q}\) is also continuous (being the restriction of a continuous map). Consequently, in this case, the completion \(\widehat{U^{\prime}}^{\mathfrak{g}}\) is also a split-filtered associative ring, which is tightly split-filtered if \(U\) is (by Lemma A.5.8). Proof.: Let \(u_{1}\in U_{\leq p},u_{2}\in U_{\leq q}\). Then we must show that multiplication is continuous with respect to both variables at \((u_{1},u_{2})\). That is, given \(d\in\mathbb{Z}\), we must show there exist \(n_{1},n_{2}\) such that \((u_{1}+\mathrm{N}^{n_{1}}U_{\leq p})u_{2}\subset u_{1}u_{2}+\mathrm{N}^{d}U_{\leq p+q}\) and \(u_{1}(u_{2}+\mathrm{N}^{n_{2}}U_{\leq q})\subset u_{1}u_{2}+\mathrm{N}^{d}U_{\leq p+q}\). By our hypotheses, for \(n_{2}\geq d\), we have \(u_{1}(u_{2}+\mathrm{N}^{n_{2}}U_{\leq q})\subset u_{1}u_{2}+\mathrm{N}^{n_{2}}U_{\leq p+q}\subset u_{1}u_{2}+\mathrm{N}^{d}U_{\leq p+q}\). 
On the other hand, for \(n_{1}\geq q+d\), we find \((\mathrm{N}^{n_{1}}U_{\leq p})u_{2}\subset(\mathrm{N}^{n_{1}}U_{\leq p})\,U_{\leq q}\subset\mathrm{N}^{n_{1}-q}U_{\leq p+q}\subset\mathrm{N}^{d}U_{\leq p+q}\), as desired. **Remark A.6.7**.: If \(U\) is a split-filtered seminormed ring with \({}^{c}\mathrm{N}^{n}U_{\leq p}\subset\mathrm{N}^{n}U_{\leq p}\), then by Lemma A.6.2 it is tightly seminormed. The canonical seminorm on a split-filtered associative ring has a number of useful properties which we would like to axiomatize. As we have seen, it is tight and split-filtered (Lemma A.6.2) and verifies the identities of Lemma A.6.3. **Definition A.6.8**.: Let \(U\) be a split-filtered seminormed associative ring. We say the seminorm is almost canonical if it verifies the following conditions: 1. the seminorm is split-filtered, 2. \(\mathrm{N}^{n}U_{\leq p}={}^{c}\mathrm{N}^{n}U_{\leq p}+\mathrm{N}^{n+1}U_{\leq p}\) for all \(n,p\), 3. \((\mathrm{N}^{n}U_{\leq p})\,U_{\leq q}\subset\mathrm{N}^{n-q}U_{\leq p+q}\) and \(U_{\leq p}\,(\mathrm{N}^{n}U_{\leq q})\subset\mathrm{N}^{n}U_{\leq p+q}\) for all \(p,q,n\). **Lemma A.6.9**.: _Let \(U\) be a split-filtered almost canonically seminormed associative ring. Then \(\mathrm{N}^{n}U^{\prime}_{p}={}^{c}\mathrm{N}^{n}U^{\prime}_{p}+\mathrm{N}^{n+1}U^{\prime}_{p}\) for all \(n,p\)._ Proof.: Using the fact that the seminorm is split-filtered and Definition A.6.8 (b), we have \(\mathrm{N}^{n}U^{\prime}_{p}+\mathrm{N}^{n}U_{\leq p-1}={}^{c}\mathrm{N}^{n}U^{\prime}_{p}+{}^{c}\mathrm{N}^{n}U_{\leq p-1}+\mathrm{N}^{n+1}U^{\prime}_{p}+\mathrm{N}^{n+1}U_{\leq p-1}\), from which the result follows by looking modulo \(U_{\leq p-1}\). **Lemma A.6.10**.: _Let \(U\) be a split-filtered seminormed associative ring. Then the following are equivalent:_ 1. \(\mathrm{N}^{n}U_{\leq p}={}^{c}\mathrm{N}^{n}U_{\leq p}+\mathrm{N}^{n+1}U_{\leq p}\) _for all_ \(n,p\)_,_ 2. \(\mathrm{N}^{n}U_{\leq p}={}^{c}\mathrm{N}^{n}U_{\leq p}+\mathrm{N}^{n+d}U_{\leq p}\) _for all_ \(n,p\) _and_ \(d>0\)_,_ 3. \({}^{c}\mathrm{N}^{n}U_{\leq p}\) _is contained in and dense in_ \(\mathrm{N}^{n}U_{\leq p}\)_._ Proof.: This follows by iterating the expression in part (a). **Lemma A.6.11**.: _Let \(f\colon X\to Y\) be a surjective map of split-filtered associative rings, and suppose \(X\) is endowed with an almost canonical seminorm. Then the system of neighborhoods \(\mathrm{N}^{n}Y_{\leq p}=f(\mathrm{N}^{n}X_{\leq p})\) defines an almost canonical seminorm on \(Y\)._ Proof.: By Lemma A.6.4, the images of the canonical neighborhoods in \(X\) are canonical neighborhoods in \(Y\), and by definition the images of neighborhoods in \(X\) are neighborhoods in \(Y\). The result then follows directly by applying the homomorphism \(f\) to the properties of Definition A.6.8 (b) and (c). **Lemma A.6.12**.: _Suppose \(U\) is a split-filtered associative ring with an almost canonical seminorm. Then the induced norm on the filtered completion \(\widehat{U}^{\mathfrak{f}}\) is also almost canonical._ Proof.: By Remark A.6.7 and Lemma A.5.8, \(\widehat{U}^{\mathfrak{f}}\) is split-filtered and tightly seminormed, implying that \(\widehat{U}^{\mathfrak{f}}\) satisfies Definition A.6.8 (a). We proceed to Definition A.6.8 (b), using the equivalent conditions of Lemma A.6.10. 
As the neighborhoods \(\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\) can be identified with the closures of the images of \(\mathrm{N}^{n}U_{\leq p}\), and \({}^{c}\mathrm{N}^{n}U_{\leq p}\) is dense in \(\mathrm{N}^{n}U_{\leq p}\), it follows that the image of \({}^{c}\mathrm{N}^{n}U_{\leq p}\) is dense in \(\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\). But as the image of \({}^{c}\mathrm{N}^{n}U_{\leq p}\) is contained in \({}^{c}\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\), it follows that \({}^{c}\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\) is also dense in \(\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\), verifying Definition A.6.8 (b). We verify the first part of Definition A.6.8 (c) (the second part is analogous). The multiplication map \(\mathrm{N}^{n}U_{\leq p}\times U_{\leq q}\to U_{\leq p+q}\) is continuous by Lemma A.6.5, and it factors through \(\mathrm{N}^{n-q}U_{\leq p+q}\). By continuity, taking closures (of the images) in the completions \(\widehat{U}^{\mathfrak{f}}_{\leq p},\widehat{U}^{\mathfrak{f}}_{\leq q},\widehat{U}^{\mathfrak{f}}_{\leq p+q}\) of \(U_{\leq p},U_{\leq q},U_{\leq p+q}\) respectively, we find that our map extends to a continuous map \(\overline{\mathrm{N}^{n}U_{\leq p}}\times\overline{U_{\leq q}}\to\overline{U_{\leq p+q}}\) which factors through \(\overline{\mathrm{N}^{n-q}U_{\leq p+q}}\). Since the closure of the image in a completion can be identified with the completion itself, and \(\overline{\mathrm{N}^{n}U_{\leq p}}=\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\), \(\overline{\mathrm{N}^{n-q}U_{\leq p+q}}=\mathrm{N}^{n-q}\widehat{U}^{\mathfrak{f}}_{\leq p+q}\), we may interpret our multiplication as a continuous map \(\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\times\widehat{U}^{\mathfrak{f}}_{\leq q}\to\widehat{U}^{\mathfrak{f}}_{\leq p+q}\) which factors through \(\mathrm{N}^{n-q}\widehat{U}^{\mathfrak{f}}_{\leq p+q}\), as desired. ### Completed tensors Completed tensors, introduced here in Definition A.7.2, make a number of arguments more natural. **Definition A.7.1** (Seminorm on tensors).: Let \(R\) be a seminormed ring, \(M\) a right seminormed \(R\)-module and \(N\) a left seminormed \(R\)-module. We define a seminorm on \(M\otimes_{R}N\) by the following neighborhoods of \(0\): \[\mathrm{N}^{n}(M\otimes_{R}N)=\sum_{p+q=n}\mathrm{im}\,\big{(}(\mathrm{N}^{p}M\otimes_{\mathrm{N}^{0}R}\mathrm{N}^{q}N)\to M\otimes_{R}N\big{)}.\] **Definition A.7.2** (Complete tensors).: Let \(R\) be a seminormed ring, \(M\) a right seminormed \(R\)-module, and \(N\) a left seminormed \(R\)-module. The complete tensor product \(M\widehat{\otimes}_{R}N\) is defined to be the completion of the seminormed Abelian group \(M\otimes_{R}N\), with seminorm as described in Definition A.7.1. **Definition/Lemma A.7.3** (Complete tensors, filtered and graded).: Let \(R\) be a seminormed ring, \(M\) a right seminormed \(R\)-module and \(N\) a left seminormed \(R\)-module. If \(R,M,N\) are graded, then we can construct a short \(R\)-bilinear map \(M\times N\to M\widehat{\otimes}_{R}^{\mathfrak{g}}N\) which is universal for \(R\)-bilinear maps to graded-complete Abelian groups. If \(R,M,N\) are filtered, then we can construct a short \(R\)-bilinear map \(M\times N\to M\widehat{\otimes}_{R}^{\mathfrak{f}}N\) which is universal for \(R\)-bilinear maps to filtered-complete Abelian groups. Proof.: These are \(M\widehat{\otimes}_{R}^{\mathfrak{g}}N=\widehat{M\otimes_{R}N}^{\mathfrak{g}}\) and \(M\widehat{\otimes}_{R}^{\mathfrak{f}}N=\widehat{M\otimes_{R}N}^{\mathfrak{f}}\) respectively. 
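To illustrate the completed tensor product in a simple case, take \(R=\mathbb{C}\) concentrated in degree \(0\) (with \(\mathrm{N}^{n}\mathbb{C}=\mathbb{C}\) for \(n\leq 0\) and \(\mathrm{N}^{n}\mathbb{C}=0\) for \(n>0\)), and \(M=\mathbb{C}[x]\), \(N=\mathbb{C}[y]\), each with its adic seminorm \(\mathrm{N}^{n}M=x^{n}\mathbb{C}[x]\) and \(\mathrm{N}^{n}N=y^{n}\mathbb{C}[y]\) for \(n\geq 0\). Then \(\mathrm{N}^{n}(M\otimes_{\mathbb{C}}N)\) is spanned by the monomials \(x^{a}y^{b}\) with \(a+b\geq n\), so that \[\mathbb{C}[x]\,\widehat{\otimes}_{\mathbb{C}}\,\mathbb{C}[y]\cong\mathbb{C}[[x,y]],\] which is strictly larger than \(\mathbb{C}[[x]]\otimes_{\mathbb{C}}\mathbb{C}[[y]]\) (the element \(\sum_{n\geq 0}x^{n}y^{n}\) lies in the former but not in the latter); completion does not commute with the uncompleted tensor product. 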
### Discrete quotients This section will be particularly useful in the construction of generalized Verma modules and of new algebraic structures (the mode transition algebras of Section 3.2) which will play an important role for us. If \(U\) is a filtered ring, then \(U_{\leq 0}\) is always a subring, and \(U_{\leq-n}\) for \(n>0\) is a two-sided ideal of \(U_{\leq 0}\). Moreover, for \(n>0\), we have \(U\otimes_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}=U/UU_{\leq-n}=U/{}^{c}\mathrm{N}^{n}U\). **Lemma A.8.1**.: _Suppose \(U\) is a split-filtered almost canonically seminormed ring. Then_ \[U\widehat{\otimes}^{\mathfrak{f}}_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}\cong\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\cong\widehat{U^{\prime}}^{\mathfrak{g}}/\mathrm{N}^{n}\widehat{U^{\prime}}^{\mathfrak{g}}\cong U^{\prime}\widehat{\otimes}^{\mathfrak{g}}_{U^{\prime}_{\leq 0}}U^{\prime}_{\leq 0}/U^{\prime}_{\leq-n},\] _with the isomorphism induced by the continuous map \(U\otimes_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}\to U/\mathrm{N}^{n}U\) via \(u\otimes\overline{a}\mapsto\overline{ua}\). In particular, as topological spaces, these have the discrete topology._ Proof.: It is immediate that, assuming the claimed equalities hold, the natural quotient topology on \(\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\) is discrete. As we have noticed, \(U\otimes_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}\cong U/{}^{c}\mathrm{N}^{n}U\). We can therefore identify the separated completion of \(U_{\leq p}\otimes_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}\) with the separated completion of \(U_{\leq p}/{}^{c}\mathrm{N}^{n}U_{\leq p}\). But since \(\overline{{}^{c}\mathrm{N}^{n}U_{\leq p}}=\mathrm{N}^{n}U_{\leq p}\) by Lemma A.6.10, the isomorphism \(U\widehat{\otimes}^{\mathfrak{f}}_{U_{\leq 0}}U_{\leq 0}/U_{\leq-n}\cong\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\) follows by Lemma A.5.5. Next, we note that the natural map \(\widehat{U^{\prime}}^{\mathfrak{g}}\to\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\) has kernel \(\mathrm{N}^{n}\widehat{U^{\prime}}^{\mathfrak{g}}\). As \(U^{\prime}_{\leq-m}\subset{}^{c}\mathrm{N}^{n}U\subset\mathrm{N}^{n}U\) for \(m\gg 0\), it follows that our map \(U^{\prime}\to\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\), which has dense image by Lemma A.4.10, factors through the surjection \(U^{\prime}/U^{\prime}_{\leq-m}\to U^{\prime}/\mathrm{N}^{n}U^{\prime}\). In particular, the restriction of this map to \(U^{\prime}_{\leq p}/U^{\prime}_{\leq-m}\) factors through \(\widehat{U^{\prime}_{\leq p}/U^{\prime}_{\leq-m}}\), and hence the image of this part coincides with the image of \(\widehat{U^{\prime}_{\leq p}}\). But \(\widehat{U^{\prime}_{\leq p}/U^{\prime}_{\leq-m}}=\bigoplus\limits_{-m<i\leq p}\widehat{U^{\prime}_{i}}=\bigoplus\limits_{-m<i\leq p}\widehat{U^{\prime}}^{\mathfrak{g}}_{i}\). We therefore find that the map \(\widehat{U^{\prime}}^{\mathfrak{g}}_{\leq p}\to\widehat{U}^{\mathfrak{f}}_{\leq p}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\) factors through \(\bigoplus\limits_{-m<i\leq p}\widehat{U^{\prime}}^{\mathfrak{g}}_{i}\), which is a complete space. As this map has dense image, it is surjective, and from our prior description of the kernel we see \(\widehat{U^{\prime}}^{\mathfrak{g}}_{\leq p}/\mathrm{N}^{n}\widehat{U^{\prime}}^{\mathfrak{g}}_{\leq p}\cong\widehat{U}^{\mathfrak{f}}_{\leq p}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}_{\leq p}\). 
Taking a union over all \(p\) gives the identification \(\widehat{U^{\prime}}^{\mathfrak{g}}/\mathrm{N}^{n}\widehat{U^{\prime}}^{\mathfrak{g}}\cong\widehat{U}^{\mathfrak{f}}/\mathrm{N}^{n}\widehat{U}^{\mathfrak{f}}\). Making the same observations as in the beginning of the proof, with \(U^{\prime}\) instead of \(U\), we may identify the separated completion of \(U^{\prime}_{\leq p}\otimes_{U^{\prime}_{\leq 0}}U^{\prime}_{\leq 0}/U^{\prime}_{\leq-n}\) with the separated completion of \(U^{\prime}_{\leq p}/{}^{c}\mathrm{N}^{n}U^{\prime}_{\leq p}\). Choosing \(m\) as in the previous paragraph, we find that we have a surjective map \(U^{\prime}/U^{\prime}_{\leq-m}\to U^{\prime}/\mathrm{N}^{n}U^{\prime}\), which allows us to identify the separated completion of \(U^{\prime}_{\leq p}/{}^{c}\mathrm{N}^{n}U^{\prime}_{\leq p}\) with \(\widehat{U^{\prime}}^{\mathfrak{g}}_{\leq p}/\mathrm{N}^{n}\widehat{U^{\prime}}^{\mathfrak{g}}_{\leq p}\), as desired. ### Triples of associative algebras In this section we collect some of our previous facts which will be useful for the construction of our universal enveloping algebras of a VOA. As we simultaneously construct and relate three versions of the enveloping algebra (left, right and finite, Definition 2.4.2), we introduce here notions for working with triples of associative algebras. **Definition A.9.1**.: A good triple of associative algebras \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) consists of the data of a left split-filtered associative algebra \(U^{\mathsf{L}}\) and a right split-filtered associative algebra \(U^{\mathsf{R}}\) such that \(U^{\prime}\) is a graded subalgebra of both \(U^{\mathsf{L}}\) and \(U^{\mathsf{R}}\). **Definition A.9.2**.: A morphism of good triples \((X^{\mathsf{L}},X^{\prime},X^{\mathsf{R}})\to(Y^{\mathsf{L}},Y^{\prime},Y^{\mathsf{R}})\) is a pair of degree \(0\) maps of split-filtered associative algebras \(X^{\mathsf{L}}\to Y^{\mathsf{L}}\) and \(X^{\mathsf{R}}\to Y^{\mathsf{R}}\) which agree on \(X^{\prime}\to Y^{\prime}\). **Definition A.9.3**.: A good seminorm on a good triple of associative algebras \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) consists of almost canonical split-filtered seminorms on \(U^{\mathsf{L}}\) and \(U^{\mathsf{R}}\), defined by neighborhoods \(\mathrm{N}^{n}_{L}U^{\mathsf{L}}\) and \(\mathrm{N}^{n}_{R}U^{\mathsf{R}}\) respectively, such that \(\mathrm{N}^{n}_{L}U^{\prime}_{p}=\mathrm{N}^{n-p}_{R}U^{\prime}_{p}\). **Remark A.9.4**.: We note that in the case \(p=0\) we have \(\mathrm{N}^{n}_{L}U^{\prime}_{0}=\mathrm{N}^{n}_{R}U^{\prime}_{0}\), and in this case we can unambiguously write \(\mathrm{N}^{n}U^{\prime}_{0}\) for each. Also in this case, it follows from Definition A.6.8 (c) that \(\mathrm{N}^{n}U^{\prime}_{0}\) is a two-sided ideal of \(U^{\prime}_{0}\). **Lemma A.9.5**.: _Suppose \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) is a good triple of associative unital algebras and \(I\triangleleft U^{\prime}\) is a homogeneous ideal. Let \(I^{\mathsf{L}}=U^{\mathsf{L}}IU^{\mathsf{L}}\) and \(I^{\mathsf{R}}=U^{\mathsf{R}}IU^{\mathsf{R}}\) be the ideals of \(U^{\mathsf{L}}\) and \(U^{\mathsf{R}}\) generated by \(I\). Then \((I^{\mathsf{L}},I,I^{\mathsf{R}})\) is a good triple (of ideals)._ Proof.: We note that the triple \((I,I,I)\) is good, where we regard \(I\) itself as left and right filtered as in Example A.1.9. The result now follows from Lemma A.3.3, in light of the observation that the ideal generated by \(I\) in \(U^{\prime}\) is \(I\) itself. 
The following Lemma is an immediate consequence of Definition/Lemma A.5.10. **Lemma A.9.6**.: _Let \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) be a good triple of associative unital algebras and \((I^{\mathsf{L}},I,I^{\mathsf{R}})\) a good triple of ideals. Then the closures \((\overline{I}^{\mathsf{L}},\overline{I},\overline{I}^{\mathsf{R}})\) form a good triple of ideals._ **Remark A.9.7**.: If the seminorms on a triple \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) are canonical, they are easily verified to be good: each is split-filtered by Lemma A.6.2 and satisfies the other conditions of Definition A.6.8 by the definition of the canonical seminorm and by Lemma A.6.3. **Definition A.9.8**.: If \((U^{\mathsf{L}},U^{\prime},U^{\mathsf{R}})\) is a good triple with a good seminorm, its completion is the triple \(\left(\widehat{U^{\mathsf{L}}}^{\mathfrak{f}},\widehat{U^{\prime}}^{\mathfrak{g}},\widehat{U^{\mathsf{R}}}^{\mathfrak{f}}\right)\). We say that the triple is complete if these are the same seminormed triple under the canonical map. The following two results show that, in the appropriate sense, the class of good triples with good seminorms is closed under completions and homomorphic images. **Corollary A.9.9**.: _If \((X^{\mathsf{L}},X^{\prime},X^{\mathsf{R}})\to(Y^{\mathsf{L}},Y^{\prime},Y^{\mathsf{R}})\) is a surjective map of good triples, and \((X^{\mathsf{L}},X^{\prime},X^{\mathsf{R}})\) has a good seminorm, then the induced seminorm on \((Y^{\mathsf{L}},Y^{\prime},Y^{\mathsf{R}})\) is good._ Proof.: This is an immediate consequence of Lemma A.1.14 and Lemma A.6.11. **Corollary A.9.10**.: _Good triples with good seminorms are closed under the operation of completion._ Proof.: This is an immediate consequence of Lemma A.6.12. ## Appendix B Generalized Verma modules and mode algebras In this section, our basic object will be a graded seminormed algebra. While such an algebra may come as part of a triple as described in the previous section, the graded structure will play the decisive role here. We will, however, occasionally regard our graded algebra as also (split-)filtered, as in Example A.1.9. ### Generalized higher Zhu algebras and Verma modules **Definition B.1.1**.: For a graded, seminormed algebra \(U\), we define the _generalized \(n\)-th Zhu algebra_ as \(\mathsf{A}_{n}(U)=U_{0}/\mathrm{N}^{n+1}U_{0}\). For \(\alpha\in U_{0}\), we write \([\alpha]_{n}\) to denote the image of \(\alpha\) in \(\mathsf{A}_{n}(U)\), and write \([\alpha]\) if \(n\) is understood. Observe that \(\mathsf{A}_{n}(U)=0\) if \(n\leq-1\), since \(\mathrm{N}^{i}U_{0}=U_{0}\) whenever \(i\leq 0\). **Definition B.1.2**.: If \(U\) is a graded algebra with an almost canonical seminorm and \(W_{0}\) is a left \(\mathsf{A}_{n}(U)\)-module, we define a \(U\)-module \(\Phi_{n}^{\mathsf{L}}(W_{0})\) by \[\Phi_{n}^{\mathsf{L}}(W_{0})=\left(U/\mathrm{N}_{\mathsf{L}}^{n+1}U\right)\otimes_{U_{0}}W_{0}=\left(U/\mathrm{N}_{\mathsf{L}}^{n+1}U\right)\otimes_{\mathsf{A}_{n}(U)}W_{0}.\] We will generally write \(\Phi^{\mathsf{L}}(W_{0})\) for \(\Phi_{0}^{\mathsf{L}}(W_{0})\). 
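To see the classical picture behind this definition, consider the following illustrative special case (a sketch, assuming \(U\) carries the canonical seminorm of Definition A.6.1): let \(\mathfrak{g}=\bigoplus_{n\in\mathbb{Z}}\mathfrak{g}_{n}\) be a graded Lie algebra and \(U=U(\mathfrak{g})\) with the induced grading. Then \(\mathrm{N}^{1}_{\mathsf{L}}U=UU_{\leq-1}=U\mathfrak{g}_{<0}\), and for an \(\mathsf{A}_{0}(U)\)-module \(W_{0}\), the module \(\Phi^{\mathsf{L}}(W_{0})=\left(U/U\mathfrak{g}_{<0}\right)\otimes_{U_{0}}W_{0}\) is the generalized Verma module obtained by letting \(\mathfrak{g}_{<0}\) act by zero on \(W_{0}\) and inducing up to \(U(\mathfrak{g})\). 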
Following [10, 11] we define: **Definition B.1.3**.: If \(U\) is a graded algebra with an almost canonical seminorm and \(W\) is a left \(U\)-module, we define an \(\mathsf{A}_{n}(U)\)-module \(\Omega_{n}(W)\) by \[\Omega_{n}(W)=\{w\in W\mid(\mathrm{N}_{\mathsf{L}}^{n+1}U)w=0\}.\] We can show that the functors \(\Phi^{\mathsf{L}}\) have the following universal property: **Proposition B.1.4**.: _Let \(M\) be a \(U\)-module and \(W_{0}\) an \(\mathsf{A}_{n}(U)\)-module. Then there is a natural isomorphism of bifunctors:_ \[\mathrm{Hom}_{\mathsf{A}_{n}(U)}(W_{0},\Omega_{n}(M))=\mathrm{Hom}_{U}(\Phi_{n}^{\mathsf{L}}(W_{0}),M).\] In Section 3.1.4 we use Proposition B.1.4 (there given by Proposition 3.1.2) to conclude that Zhu's original induction functor is naturally isomorphic to \(\Phi^{\mathsf{L}}\). Proof.: We describe the equivalence as follows. For \(f\colon W_{0}\to\Omega_{n}(M)\) we define a map \(g\colon\Phi_{n}^{\mathsf{L}}(W_{0})\to M\) by \(g(u\otimes m)=uf(m)\). Note that if \(u\in\mathrm{N}_{\mathsf{L}}^{n+1}U\), then \(uf(m)=0\), as \(f(m)\in\Omega_{n}(M)\). In the other direction, if we are given \(g\colon\Phi_{n}^{\mathsf{L}}(W_{0})\to M\), we note that the natural map \(W_{0}\to\Phi_{n}^{\mathsf{L}}(W_{0})\) defined by \(w\mapsto 1\otimes w\) is injective and, by definition of the \(U\)-module structure of \(\Phi_{n}^{\mathsf{L}}(W_{0})\), has image lying inside \(\Omega_{n}(\Phi_{n}^{\mathsf{L}}(W_{0}))\). But as the map \(g\) is a \(U\)-module map, it follows that \(g(W_{0})\subset g(\Omega_{n}(\Phi_{n}^{\mathsf{L}}(W_{0})))\subset\Omega_{n}(M)\). Consequently we obtain a map \(f\colon W_{0}\to\Omega_{n}(M)\), which is easily checked to be an \(\mathsf{A}_{n}(U)\)-module map and to give an inverse correspondence to the prior prescription. Of course, we can also carry out a right-handed version of this construction for a right \(\mathsf{A}_{n}(U)\)-module \(Z_{0}\), and obtain in this way a right \(U\)-module \(\Phi_{n}^{\mathsf{R}}(Z_{0})\). We will describe the properties of \(\Phi^{\mathsf{L}}\) and leave the analogous statements about \(\Phi^{\mathsf{R}}\) to the reader. **Lemma B.1.5**.: _Suppose \(U\) is a split-filtered algebra with graded subalgebra \(U^{\prime}\) and with an almost canonical seminorm. Then_ \[\Phi^{\mathsf{L}}(W_{0})=\left(U^{\prime}/\mathrm{N}_{\mathsf{L}}^{1}U^{\prime}\right)\otimes_{U_{0}^{\prime}}W_{0}\cong\left(U/\mathrm{N}_{\mathsf{L}}^{1}U\right)\otimes_{U_{0}}W_{0}.\] Proof.: This is an immediate consequence of Lemma A.8.1. Note that \(U/\mathrm{N}^{1}_{\mathsf{L}}U\) is a left \(U\)-module and a right \(U_{\leq 0}\)-module which is annihilated on the right by \(U_{\leq-1}\); in particular, we can also write the above expression as \[\big{(}U/\mathrm{N}^{1}_{\mathsf{L}}U\big{)}\otimes_{U_{0}}W_{0}\cong\big{(}U/\mathrm{N}^{1}_{\mathsf{L}}U\big{)}\otimes_{U_{\leq 0}}W_{0},\] with respect to the truncation quotient map \(U_{\leq 0}\to U_{0}\) with kernel \(U_{\leq-1}\). This is because the additional relations in the tensor product on the right are of the form \(\alpha\beta\otimes w-\alpha\otimes\overline{\beta}w\) with \(\beta\in U_{\leq-1}\). But \(\alpha\beta\in UU_{\leq-1}\subset\mathrm{N}^{1}_{\mathsf{L}}U\) represents \(0\), as does \(\overline{\beta}\). Hence these extra relations all vanish. 
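As a quick sanity check of these identifications, consider the degenerate case in which \(U=U_{0}\) is concentrated in degree \(0\) (as in Example A.1.8) and carries the canonical seminorm: then \(U_{\leq-1}=0\), so \(\mathrm{N}^{1}_{\mathsf{L}}U={}^{c}\mathrm{N}^{1}U=UU_{\leq-1}=0\) and \(\mathsf{A}_{0}(U)=U_{0}\), whence \(\Phi^{\mathsf{L}}(W_{0})=U\otimes_{U_{0}}W_{0}\cong W_{0}\); induction does nothing when there is nothing to induce. 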
**Remark B.1.6**.: We see that \(\Phi^{\mathsf{L}}(W_{0})\) is naturally a graded module, with grading inherited from \(U/\operatorname{N}^{1}_{\mathsf{L}}U\):

\[\Phi^{\mathsf{L}}(W_{0})=\bigoplus_{p=0}^{\infty}\big{(}U/\operatorname{N}^{1}_{\mathsf{L}}U\big{)}_{p}\otimes_{U_{0}}W_{0}=\bigoplus_{p=0}^{\infty}\big{(}U_{p}/\operatorname{N}^{1}_{\mathsf{L}}U_{p}\big{)}\otimes_{U_{0}}W_{0}.\]

Notice here that \(U_{-m}\subset\operatorname{N}^{m}_{\mathsf{L}}U\) and so \(\big{(}U/\operatorname{N}^{1}_{\mathsf{L}}U\big{)}_{p}=0\) for \(p<0\).

**Lemma B.1.7**.: _The action of \(U_{0}\) on \(\Phi^{\mathsf{L}}(W_{0})\) via its left module structure induces an \(\mathsf{A}_{d}(U)\)-module structure on \(\Phi^{\mathsf{L}}(W_{0})_{\leq d}=\bigoplus_{p=0}^{d}\Phi^{\mathsf{L}}(W_{0})_{p}\)._

Proof.: We have \(U_{\leq-d-1}\Phi^{\mathsf{L}}(W_{0})_{\leq d}=0\) from degree considerations. It follows that

\[({}^{c}\!\operatorname{N}^{d+1}U_{0})\Phi^{\mathsf{L}}(W_{0})_{\leq d}=0.\]

But by Lemma A.6.10, \({}^{c}\!\operatorname{N}^{d+1}U_{0}\) is dense in \(\operatorname{N}^{d+1}U_{0}\), and by Lemma A.6.5 the multiplication action of \(U_{0}\) on \(U_{\leq d}\) is continuous; hence so is the multiplication of \(U\) on \(U_{\leq d}/\operatorname{N}^{1}_{\mathsf{L}}U_{\leq d}\), and hence that of \(U\) on \(\Phi(W_{0})\). But as \(U/\operatorname{N}^{1}_{\mathsf{L}}U\) has a discrete topology, so does \(\Phi(W_{0})\). Since a dense subset of \(\operatorname{N}^{d+1}U_{0}\) acts as zero, it follows that all of \(\operatorname{N}^{d+1}U_{0}\) acts as zero, making the action of the algebra \(\mathsf{A}_{d}(U)\) well defined.

### Generalized mode transition algebras

Lemma B.2.1 is the main technical tool used to define algebraic structures and their actions on generalized Verma modules:

**Lemma B.2.1**.: _Suppose \(U\) is a graded algebra with an almost canonical seminorm. Then we have a natural isomorphism_

\[\big{(}U/\operatorname{N}^{1}_{\mathsf{R}}U\big{)}\otimes_{U}\big{(}U/\operatorname{N}^{1}_{\mathsf{L}}U\big{)}\to\mathsf{A}_{0}(U),\qquad\overline{\alpha}\otimes\overline{\beta}\mapsto\alpha\otimes\beta\]

_where for \(\alpha,\beta\in U\) homogeneous, we define \(\alpha\otimes\beta\) as follows:_

\[\alpha\otimes\beta=\begin{cases}0&\text{if }\deg(\alpha)+\deg(\beta)\neq 0,\\ [\alpha\beta]_{0}&\text{if }\deg(\alpha)+\deg(\beta)=0,\end{cases}\]

_and then extend the definition to general products by linearity._

Proof.: As our seminorm is almost canonical, the map \(U_{0}\to U/\big{(}\operatorname{N}^{1}_{\mathsf{L}}U+\operatorname{N}^{1}_{\mathsf{R}}U\big{)}\) factors through \(U\to U/(UU_{\leq-1}+U_{\geq 1}U)\). But for this map, we see that both \(U_{\leq-1}\) and \(U_{\geq 1}\) are in the kernel, which implies that the restriction to \(U_{0}\) is surjective. The kernel of this map \(U_{0}\to U/\big{(}\operatorname{N}^{1}_{\mathsf{L}}U+\operatorname{N}^{1}_{\mathsf{R}}U\big{)}\) consists of \(\operatorname{N}^{1}_{\mathsf{L}}U_{0}\cap\operatorname{N}^{1}_{\mathsf{R}}U_{0}=\operatorname{N}^{1}U_{0}\) (see Remark A.9.4).

As an application of the above result, we obtain the following.

**Corollary B.2.2**.: _Let \(W_{0}\) be a left \(\mathsf{A}_{0}(U)\)-module and \(Z_{0}\) be a right \(\mathsf{A}_{0}(U)\)-module.
Then the map defined in Lemma B.2.1 induces an isomorphism_

\[\Phi^{\mathsf{R}}(Z_{0})\otimes_{U}\Phi^{\mathsf{L}}(W_{0})\to Z_{0}\otimes_{\mathsf{A}_{0}(U)}W_{0}.\]

**Definition B.2.3**.: For a graded algebra with almost canonical seminorm \(U\), and an \(\mathsf{A}_{0}(U)\)-bimodule \(B\), we define a bigraded group:

\[\Phi(B)=\Phi^{\mathsf{R}}(\Phi^{\mathsf{L}}(B))=\Phi^{\mathsf{L}}(\Phi^{\mathsf{R}}(B))=\left(U/\operatorname{N}_{\mathsf{L}}^{1}U\right)\otimes_{U_{0}}B\otimes_{U_{0}}\left(U/\operatorname{N}_{\mathsf{R}}^{1}U\right)=\bigoplus_{d_{1}\geq 0}\ \bigoplus_{d_{2}\leq 0}\left(U/\operatorname{N}_{\mathsf{L}}^{1}U\right)_{d_{1}}\otimes_{U_{0}}B\otimes_{U_{0}}\left(U/\operatorname{N}_{\mathsf{R}}^{1}U\right)_{d_{2}}.\]

We now introduce an operation \(\star\), arising from the pairing of Lemma B.2.1, which, as we show below, defines an algebra structure on \(\Phi(B)\) whenever \(B\) is an associative ring admitting a homomorphism \(f\colon\mathsf{A}_{0}(U)\to B\).

**Definition B.2.4**.: Let \(B\) be an associative ring admitting a homomorphism \(f\colon\mathsf{A}_{0}(U)\to B\) and let \(W_{0}\) be a left \(B\)-module. Then we can define a map \(\Phi(B)\times\Phi^{\mathsf{L}}(W_{0})\to\Phi^{\mathsf{L}}(W_{0})\) as follows. For \(x=\alpha\otimes a\otimes\alpha^{\prime}\in\Phi(B)\) and \(\beta\otimes w\in\Phi^{\mathsf{L}}(W_{0})\) we set

\[x\star(\beta\otimes w)=\alpha\otimes af(\alpha^{\prime}\otimes\beta)w,\]

where \(\alpha^{\prime}\otimes\beta\in\mathsf{A}_{0}(U)\) is the pairing of Lemma B.2.1.

**Proposition B.2.5**.: _The map defined in Definition B.2.4 defines an associative algebra structure on \(\Phi(B)\) such that the above action of \(\Phi(B)\) on \(\Phi^{\mathsf{L}}(W_{0})\) defines a left module structure. Moreover, \(\gamma\cdot(x\star y)=(\gamma\cdot x)\star y\) for every \(x\in\Phi(B)\), \(y\in\Phi^{\mathsf{L}}(W_{0})\), and \(\gamma\in U\). Analogously, \((x\star y)\cdot\gamma=x\star(y\cdot\gamma)\) for every \(x,y\in\Phi(B)\) and \(\gamma\in U\). Finally, with respect to the bigrading of Definition B.2.3, we have_

\[\Phi(B)_{d_{1},d_{2}}\star\Phi(B)_{d_{3},d_{4}}\subseteq\Phi(B)_{d_{1},d_{4}}\qquad\text{and}\qquad\Phi(B)_{d_{1},d_{2}}\star\Phi(B)_{d_{3},d_{4}}=0\quad\text{whenever }d_{2}+d_{3}\neq 0.\]

Proof.: We check first that this satisfies the standard associativity relationship for a module action. Let \(\alpha\otimes a\otimes\alpha^{\prime},\beta\otimes b\otimes\beta^{\prime}\in\Phi(B)\) and \(\gamma\otimes c\in\Phi^{\mathsf{L}}(W_{0})\); then

\[(\alpha\otimes a\otimes\alpha^{\prime})\star\big{(}(\beta\otimes b\otimes\beta^{\prime})\star(\gamma\otimes c)\big{)}=(\alpha\otimes a\otimes\alpha^{\prime})\star\big{(}\beta\otimes bf(\beta^{\prime}\otimes\gamma)c\big{)}=\alpha\otimes af(\alpha^{\prime}\otimes\beta)bf(\beta^{\prime}\otimes\gamma)c,\]

which coincides with

\[\big{(}(\alpha\otimes a\otimes\alpha^{\prime})\star(\beta\otimes b\otimes\beta^{\prime})\big{)}\star(\gamma\otimes c)=\big{(}\alpha\otimes af(\alpha^{\prime}\otimes\beta)b\otimes\beta^{\prime}\big{)}\star(\gamma\otimes c)=\alpha\otimes af(\alpha^{\prime}\otimes\beta)bf(\beta^{\prime}\otimes\gamma)c,\]

so the two ways of bracketing agree.

**Definition B.2.6**.: For a graded algebra with almost canonical seminorm \(U\), we define \(\mathfrak{A}(U)=\Phi(\mathsf{A}_{0}(U))\) to be the _(generalized) mode transition algebra_, and we write \(\mathfrak{A}(U)_{d}\) for the \(d\)-th mode transition subalgebra \(\mathfrak{A}(U)_{d,-d}\).

### Relationship with higher generalized Zhu algebras

Throughout this section, let us fix a graded algebra \(U\) complete with respect to an almost canonical seminorm. We will write \(\mathsf{A}_{n}\) in place of \(\mathsf{A}_{n}(U)\) and \(\mathfrak{A}_{n}\) in place of \(\mathfrak{A}_{n}(U)\). The action of \(U_{0}\) on \(\mathfrak{A}_{n}\) is continuous, where \(\mathfrak{A}_{n}\) is regarded as a discrete module.
**Lemma B.3.1**.: _For each \(d\geq 0\), there is an exact sequence_

\[\mathfrak{A}_{d}\stackrel{{\mu_{d}}}{{\longrightarrow}}\mathsf{A}_{d}\stackrel{{\pi_{d}}}{{\longrightarrow}}\mathsf{A}_{d-1}\longrightarrow 0, \tag{19}\]

_where \(\mu_{d}(\overline{\alpha}\otimes[u]_{0}\otimes\overline{\beta})=[\alpha u\beta]_{d}\), for all \(\alpha\in U_{d}\) (respectively \(\beta\in U_{-d}\)) and where \(\overline{\alpha}\) (respectively \(\overline{\beta}\)) denotes its class in \(U/\operatorname{N}^{1}_{\mathsf{L}}U\) (respectively in \(U/\operatorname{N}^{1}_{\mathsf{R}}U\))._

Proof.: We first check that the map \(\mu_{d}\) is well defined. Notice that \(\mu_{d}\) is independent of the lifts of \(\overline{\alpha}\) and \(\overline{\beta}\) to \(U\), since \(\operatorname{N}^{1}_{\mathsf{L}}U\cdot U_{0}\cdot U_{d}\subseteq\operatorname{N}^{1}U_{0}\) and similarly \(U_{d}\cdot U_{0}\cdot\operatorname{N}^{1}_{\mathsf{R}}U_{-d}\subseteq\operatorname{N}^{1}U_{0}\). Analogously, since \(U_{d}\cdot\operatorname{N}^{1}U_{0}\cdot U_{-d}\subseteq\operatorname{N}^{d}U_{0}\), the map \(\mu_{d}\) is independent of the lift of \([u]_{0}\) to \(u\in U_{0}\). Finally, we need to show that it respects the tensor products over \(U_{0}\). For this we need to check that \(\mu_{d}(\overline{\alpha v}\otimes[u]_{0}\otimes\overline{\beta})=\mu_{d}(\overline{\alpha}\otimes[vu]_{0}\otimes\overline{\beta})\) for every \(v\in U_{0}\). But by definition both are the class of the element \(\alpha vu\beta\) in \(\mathsf{A}_{d}\).

We have identifications \(\mathsf{A}_{d}=U_{0}/\operatorname{N}^{d+1}U_{0}\) and \(\mathsf{A}_{d-1}=U_{0}/\operatorname{N}^{d}U_{0}\). Consequently the kernel of the canonical projection \(\pi_{d}\) can be written as \(\operatorname{N}^{d}U_{0}/\operatorname{N}^{d+1}U_{0}\). It follows from the definitions of \(\mu_{d}\) and \(\mathfrak{A}_{d}\) that the image of \(\mu_{d}\) consists exactly of sums of elements of the form \([\alpha\beta]_{d}\) with \(\deg\alpha=d\) and \(\deg\beta=-d\). Hence the image of \(\mu_{d}\) consists of the image of \({}^{c}\!\operatorname{N}^{d}U_{0}\) in \(\mathsf{A}_{d}\). Since we have an almost canonical filtration, we have by Lemma A.6.9,

\[\operatorname{N}^{d}U_{0}={}^{c}\!\operatorname{N}^{d}U_{0}+\operatorname{N}^{d+1}U_{0},\]

which shows that \(\mu_{d}\) is therefore surjective onto \(\operatorname{N}^{d}U_{0}/\operatorname{N}^{d+1}U_{0}=\ker\pi_{d}\), showing right exactness.

The following result is immediate from the definitions and from the associativity of the actions.

**Lemma B.3.2**.: _Let \(W_{0}\) be an \(\mathsf{A}_{0}\)-module. Then the action of \(\mathfrak{A}_{d}\) on \(\Phi^{\mathsf{L}}(W_{0})_{d}\) factors through the action of \(\mathsf{A}_{d}\) described in Lemma B.1.7 via the map \(\mu_{d}\)._

We are now ready to state the principal result of this section.

**Theorem B.3.3**.: _If \(\mathfrak{A}_{d}\) admits a unity, then the map \(\mu_{d}\) in (19) is injective and the sequence splits, giving a direct product of rings \(\mathsf{A}_{d}\cong\mathfrak{A}_{d}\times\mathsf{A}_{d-1}\)._

Proof.: We first check that the map \(\mu_{d}\) is injective. Suppose we are given an element \(\mathfrak{a}\in\mathfrak{A}_{d}\) which is in the kernel of this map. By Lemma B.3.2, the action of \(\mathfrak{a}\) on \(\Phi^{\mathsf{L}}(M)_{d}\) for any \(M\) factors through the action of \(\mathsf{A}_{d}\) via \(\mu_{d}\), so this action must be \(0\).
If we consider the case of \(M=\Phi^{\mathsf{R}}(\mathsf{A}_{0})\), this says that, in particular, the action of \(\mathfrak{a}\) on \(\mathfrak{A}_{d}\subset\Phi^{\mathsf{L}}(\Phi^{\mathsf{R}}(\mathsf{A}_{0}))_{d}\) is \(0\). This action is identified with the algebra product via Definition B.2.4. Since \(\mathfrak{A}_{d}\) has a unity \(\mathscr{I}_{d}\), it follows that \(\mathfrak{a}=\mathfrak{a}\star\mathscr{I}_{d}=0\), as claimed.

Since \(\mu_{d}\) is injective, we will omit it in the remainder of the proof and view \(\mathfrak{A}_{d}\) as naturally sitting inside \(\mathsf{A}_{d}\). Denote the unity in the higher-level Zhu algebra \(\mathsf{A}_{d}\) by \(1\), and write \(e=\mathscr{I}_{d}\). Let \(f=1-e\), so that \(e\) and \(f\) are orthogonal idempotents. Note that \(e\) generates the \(2\)-sided ideal \(\mathfrak{A}_{d}\triangleleft\mathsf{A}_{d}\). Furthermore, since \(e\) is the unity of \(\mathfrak{A}_{d}\), for every \(a\in\mathsf{A}_{d}\) we have \(ae=eae=ea\), and so \(e\in Z(\mathsf{A}_{d})\). Consequently \(f=1-e\in Z(\mathsf{A}_{d})\) as well. It follows that \(e\) and \(f\) are orthogonal central idempotents, and therefore \(\mathsf{A}_{d}=\mathsf{A}_{d}e\times\mathsf{A}_{d}f\) as rings. But \(\mathsf{A}_{d}e=\mathfrak{A}_{d}\) and \(\mathsf{A}_{d}f\cong\mathsf{A}_{d}/\mathfrak{A}_{d}\cong\mathsf{A}_{d-1}\), completing our proof.
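For the reader's convenience, the standard ring-theoretic fact invoked in the final step can be spelled out; the following is a minimal worked note (a generic verification, not part of the original argument):

```latex
% If e, f \in Z(R) are idempotents with e + f = 1 and ef = 0, then
\[
  \varphi\colon R \longrightarrow eR \times fR, \qquad \varphi(r) = (er,\ fr)
\]
% is a ring isomorphism: it is multiplicative since (er)(es) = e^2 rs = e(rs),
% injective since er = fr = 0 forces r = (e + f) r = 0, and surjective since
% (er', fr'') = \varphi(er' + fr'').  Applied with e = \mathscr{I}_d and
% f = 1 - e, this yields A_d = A_d e \times A_d f, i.e.
% A_d \cong \mathfrak{A}_d \times A_{d-1}.
```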
2303.00941
ParaFormer: Parallel Attention Transformer for Efficient Feature Matching
Heavy computation is a bottleneck limiting the application of deep-learning-based feature matching algorithms in many real-time applications. However, existing lightweight networks optimized for Euclidean data cannot address classical feature matching tasks, since sparse keypoint-based descriptors are expected to be matched. This paper tackles this problem and proposes two concepts: 1) a novel parallel attention model entitled ParaFormer and 2) a graph-based U-Net architecture with attentional pooling. First, ParaFormer fuses features and keypoint positions through the concept of amplitude and phase, and integrates self- and cross-attention in a parallel manner, which achieves a win-win performance in terms of accuracy and efficiency. Second, with the U-Net architecture and the proposed attentional pooling, the ParaFormer-U variant significantly reduces computational complexity and minimizes the performance loss caused by downsampling. Extensive experiments on various applications, including homography estimation, pose estimation, and image matching, demonstrate that ParaFormer achieves state-of-the-art performance while maintaining high efficiency. The efficient ParaFormer-U variant achieves comparable performance with less than 50% of the FLOPs of existing attention-based models.
Xiaoyong Lu, Yaping Yan, Bin Kang, Songlin Du
2023-03-02T03:29:16Z
http://arxiv.org/abs/2303.00941v2
# ParaFormer: Parallel Attention Transformer for Efficient Feature Matching

###### Abstract

Heavy computation is a bottleneck limiting the application of deep-learning-based feature matching algorithms in many real-time applications. However, existing lightweight networks optimized for Euclidean data cannot address classical feature matching tasks, since sparse keypoint-based descriptors are expected to be matched. This paper tackles this problem and proposes two concepts: 1) a novel parallel attention model entitled ParaFormer and 2) a graph-based U-Net architecture with attentional pooling. First, ParaFormer fuses features and keypoint positions through the concept of amplitude and phase, and integrates self- and cross-attention in a parallel manner, which achieves a win-win performance in terms of accuracy and efficiency. Second, with the U-Net architecture and the proposed attentional pooling, the ParaFormer-U variant significantly reduces computational complexity and minimizes the performance loss caused by downsampling. Extensive experiments on various applications, including homography estimation, pose estimation, and image matching, demonstrate that ParaFormer achieves state-of-the-art performance while maintaining high efficiency. The efficient ParaFormer-U variant achieves comparable performance with less than 50% of the FLOPs of existing attention-based models.

## Introduction

Feature matching is a fundamental problem for many computer vision tasks, such as object recognition [14], structure from motion (SFM) [1], and simultaneous localization and mapping (SLAM) [17]. However, under illumination changes, viewpoint changes, motion blur, and occlusion, it is challenging to find invariant features and obtain robust matches between two images. Feature matching pipelines can be categorized into detector-based methods, which first detect keypoints and descriptors from the images and then match two sets of sparse features, and detector-free methods, which directly match dense features. Benefiting from the global modeling capability of the Transformer [20], attention-based networks have become the dominant methods in both detector-based and detector-free pipelines, where self- and cross-attention are applied to match learning-based descriptors or dense features.

However, despite the high performance, attention-based networks tend to bring high training costs, large memory requirements, and high inference latency, especially for detector-free pipelines, where processing dense features exacerbates the quadratic complexity of the attention mechanism. We therefore focus on the _detector-based_ pipeline, seeking the best trade-off between efficiency and performance. As most lightweight operations [1, 16] are designed for Euclidean data, sparse descriptors cannot be handled by mainstream lightweight networks. Noting that Transformers and graph neural networks are suitable for processing non-Euclidean data, we design efficient models from both perspectives, giving rise to ParaFormer and its ParaFormer-U variant.

Rethinking self- and cross-attention in feature matching, we observe that all existing attention-based methods arrange the two kinds of attention in a serial manner, a strategy derived from the behavior of people looking back and forth when matching images. Specifically, SuperGlue [15] and LoFTR [21] alternately arrange self- and cross-attention, _i.e._, the \(self\to cross\) strategy, as illustrated in Figure 2 (a).
For MatchFormer [21], the \(self\to self\to cross\) strategy is used in the early stages, and the \(self\to cross\to cross\) strategy is used in later stages, as shown in Figure 2 (b). However, computer vision systems need not be designed to mimic human behavior; the fixed serial attention structure limits the diversity of ways in which self- and cross-attention can be integrated. We propose parallel attention to compute self- and cross-attention synchronously, and we train the network to optimally fuse the two kinds of attention instead of tuning their permutation as a hyperparameter. For the efficiency of attention-based feature matching, instead of simply applying attention variants [22, 23], weight sharing and attention weight sharing strategies in the parallel attention layer are explored to reduce redundant parameters and computations. We further construct the U-Net architecture with parallel attention layers and propose attentional pooling, which identifies important context points by attention weights.

Figure 1: Comparison between ParaFormer and SuperGlue. With the same input features, ParaFormer can deliver more robust matches with higher matching precision. ParaFormer-U can achieve comparable performance to SuperGlue with significantly fewer FLOPs.

In summary, the contributions of this paper include:

* We rethink attention-based feature matching networks and propose the **parallel attention layer** to perform self- and cross-attention synchronously and adaptively integrate both with learnable networks.
* We further explore the **U-Net architecture** for efficient feature matching and propose attentional pooling, which keeps only the important context points to reduce the FLOPs with minimal performance loss.
* A novel **wave-based position encoder** is proposed for detector-based feature matching networks, which dynamically fuses descriptors and positions through the concepts of amplitude and phase of waves.

## Related Works

**Local Feature Matching.** Classical feature matching tends to be a detector-based pipeline, _i.e._, a detector is first applied to generate keypoints and descriptors from images, and then the descriptors are matched. For detectors, some outstanding handcrafted methods [19, 23, 24] were first proposed and widely used for various 3D computer vision tasks. With the advent of the deep learning era, many learning-based detectors [1, 16, 25, 26] have been proposed to further improve the robustness of descriptors under illumination changes and viewpoint changes. In addition to detectors, other work has focused on better matchers. SuperGlue [23] was the first to propose an attention-based feature matching network that uses self- and cross-attention to find matches with global context information. OETR [22] further constrains attention-based feature matching in the commonly visible region by overlap estimation. Besides matching the sparse descriptors generated by the detector, LoFTR [23] applies self- and cross-attention directly on the feature maps extracted by a convolutional neural network (CNN) and generates matches in a coarse-to-fine manner. MatchFormer [23] further abandons the CNN backbone and adopts a completely attention-based hierarchical framework that can extract features while finding similarities utilizing the attention mechanism.
Noting that the permutation of self- and cross-attention in SuperGlue and LoFTR is a simple alternating strategy, MatchFormer further proposes an interleaving strategy, which focuses on self-attention at the shallow stages of the network and cross-attention at the deep stages. This improvement offers inspiration regarding the permutation of self- and cross-attention. All existing attention-based approaches artificially arrange self- and cross-attention in a serial manner to mimic human behavior, which does not take advantage of the benefits of deep networks and parallel computing. We propose to compute the two kinds of attention efficiently in a parallel manner, and let the network learn the optimal way to integrate them.

Figure 2: Conceptual difference among three attention architectures. (a) Serial attention in SuperGlue. (b) Interleaving attention in MatchFormer. (c) Proposed parallel attention.

**Position Encoder.** The position encoder is a critical part of all transformer-based networks, as it allows the network to sense the relative or absolute position of each vector in a sentence or image. The first proposed position encoding method [25] uses fixed sine and cosine functions to calculate the position encoding, or treats the position encoding vector as a learnable parameter, and finally adds the position encoding to the original vector. Although position information can be provided, this approach severely limits the flexibility of the model: the position encodings are of fixed length at training time, which restricts the model to processing only fixed-length inputs at inference time. Another approach is relative position encoding [10], _i.e._, adjusting attention weights with relative positions. However, it is not only computationally intensive but also needs to handle inputs of different lengths by interpolation, which severely damages the performance. Convolution-based position encoders [26, 23] are proposed to augment local features with convolution and enable the model to be aware of position information through zero padding. But this method can only be applied to Euclidean data such as feature maps, so it cannot be applied to methods based on sparse descriptors. To handle arbitrary-length and non-Euclidean inputs, SuperGlue proposes a position encoder based on the multi-layer perceptron (MLP), which uses an MLP to extend the coordinate vector to align with the dimension of the descriptor to get the position encoding. However, the weak encoding ability becomes the bottleneck of the matching network. Inspired by Wave-MLP [14], phase information is equally important in vectors compared to amplitude information. Wave-MLP encodes the same input as both amplitude and phase information, while we encode the descriptor as amplitude information and the position as phase information, then fuse the two types of information with the Euler formula to generate position-aware descriptors.

**U-Net Architecture.** The U-Net [13] architecture consists of an encoder-decoder structure, where the encoder reduces the spatial resolution and the decoder recovers it. This architecture can efficiently handle dense prediction tasks such as semantic segmentation, so we seek to improve the efficiency of attention-based feature matching with U-Net.
However, conventional pooling operations cannot be directly applied to non-Euclidean data like sparse descriptors, so Graph U-Nets [1] proposes the graph pooling (gPool) layer to enable downsampling on graph data in a differentiable way. The gPool layer measures the information that can be retained by each feature vector through scalar projection and applies top-k sampling so that the new graph preserves as much information as possible from the original graph. Based on the gPool layer, we propose to utilize attention weights to measure how much information can be retained by each feature vector, which cooperates better with the attention-based network without introducing additional parameters.

## Methodology

Assuming that \(M\) and \(N\) keypoints are detected in image \(X\) and image \(Y\), we let the positions be \(\mathbf{p}^{X}\in\mathbb{R}^{M\times 3}\), \(\mathbf{p}^{Y}\in\mathbb{R}^{N\times 3}\) and the descriptors be \(\mathbf{d}^{X}\in\mathbb{R}^{M\times C}\), \(\mathbf{d}^{Y}\in\mathbb{R}^{N\times C}\). As illustrated in Figure 3, our proposed method first dynamically fuses positions \(\mathbf{p}\) and descriptors \(\mathbf{d}\) in an amplitude-and-phase manner with the wave position encoder (Wave-PE). The parallel attention module is then applied to compute self- and cross-attention synchronously, utilizing global information to enhance the representation capability of features and find potential matches. \(\mathbf{x}^{l}\) and \(\mathbf{y}^{l}\) denote the intermediate features of image \(X\) and \(Y\) in layer \(l\). Finally, the enhanced descriptors are matched by the optimal matching layer [1] applying the Sinkhorn algorithm.

### Wave Position Encoder

For the MLP-based position encoder (MLP-PE), the main drawback is the limited encoding capacity, because the parameters of MLP-PE amount to less than 1\(\%\) of the whole network, yet position information is important for feature matching. Therefore, Wave-PE is designed to dynamically adjust the relationship between descriptor and position through amplitude and phase to obtain a better position encoding. In Wave-PE, the position encoding is represented as a wave \(\tilde{\mathbf{w}}\) with both amplitude \(\mathbf{A}\) and phase \(\mathbf{\theta}\) information, and the Euler formula is employed to unfold waves into real parts and imaginary parts to process waves efficiently,

\[\begin{split}\tilde{\mathbf{w}}_{j}&=\mathbf{A}_{j}\odot e^{i\mathbf{\theta}_{j}}\\ &=\mathbf{A}_{j}\odot\cos\mathbf{\theta}_{j}+i\mathbf{A}_{j}\odot\sin\mathbf{\theta}_{j},\quad j=1,2,...,n.\end{split} \tag{1}\]

As shown in Figure 3, the amplitude and phase are estimated by two learnable networks from the descriptors and positions, respectively. Then a learnable network is applied to fuse the real and imaginary parts into the position encoding,

\[\begin{split}\mathbf{A}_{j}&=MLP_{A}(\mathbf{d}_{j}),\\ \mathbf{\theta}_{j}&=MLP_{\theta}(\mathbf{p}_{j}),\\ \mathbf{x}_{j}^{0}&=\mathbf{d}_{j}+MLP_{F}([\mathbf{A}_{j}\odot\cos\mathbf{\theta}_{j},\mathbf{A}_{j}\odot\sin\mathbf{\theta}_{j}]).\end{split} \tag{2}\]

\([\cdot,\cdot]\) denotes concatenation. For the three learnable networks in equation (2), a two-layer MLP is chosen for simplicity.
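To make Eq. (1)-(2) concrete, the following is a minimal PyTorch sketch of a Wave-PE module. It is not the released implementation: the module name `WavePE`, the hidden width, and the ReLU activations are choices of the sketch, while the two-layer MLPs, the 3-dimensional keypoint positions, and the residual fusion follow the equations above.

```python
import torch
import torch.nn as nn

class WavePE(nn.Module):
    """Sketch of the wave position encoder (Eq. 1-2), assumptions as stated above."""
    def __init__(self, dim=256, pos_dim=3, hidden=256):
        super().__init__()
        # MLP_A: amplitude estimated from the descriptor d_j
        self.mlp_a = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        # MLP_theta: phase estimated from the keypoint position p_j
        self.mlp_theta = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        # MLP_F: fuses the real and imaginary parts of the wave
        self.mlp_f = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, d, p):
        # d: (N, dim) descriptors, p: (N, pos_dim) keypoint positions
        amp = self.mlp_a(d)                    # A_j = MLP_A(d_j)
        theta = self.mlp_theta(p)              # theta_j = MLP_theta(p_j)
        real = amp * torch.cos(theta)          # A ⊙ cos(theta), real part
        imag = amp * torch.sin(theta)          # A ⊙ sin(theta), imaginary part
        # x_j^0 = d_j + MLP_F([real, imag])  (residual fusion, Eq. 2)
        return d + self.mlp_f(torch.cat([real, imag], dim=-1))
```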
### Parallel Attention

As illustrated in the right side of Figure 3, the two sets of descriptors are first linearly projected as \(\mathbf{Q},\mathbf{K},\mathbf{V}\). Then self- and cross-attention are computed in a parallel manner. In the self-attention module, the standard attention computation \(softmax(\mathbf{Q}\mathbf{K}^{T}/\sqrt{d})\mathbf{V}\) is employed, where \(\mathbf{Q},\mathbf{K},\mathbf{V}\) come from the same input, _i.e._, (\(\mathbf{Q}_{x},\mathbf{K}_{x},\mathbf{V}_{x}\)) or (\(\mathbf{Q}_{y},\mathbf{K}_{y},\mathbf{V}_{y}\)). In the cross-attention module, the attention weight sharing strategy is proposed to improve model efficiency, which replaces \(\mathbf{Q}_{y}\mathbf{K}_{x}^{T}\) with \((\mathbf{Q}_{x}\mathbf{K}_{y}^{T})^{T}\), so the input of the cross-attention module is (\(\mathbf{Q}_{x},\mathbf{V}_{x},\mathbf{K}_{y},\mathbf{V}_{y}\)). The impact of weight sharing and attention weight sharing is investigated in the ablation studies. Finally, the self- and cross-attention outputs are fused by a two-layer MLP. Parallel attention saves redundant parameters and computations while boosting performance through learnable fusion. Since the parallel attention layer updates two sets of descriptors simultaneously, it is formally similar to the self-attention layer in mainstream Transformers, except with two inputs. We can simply stack \(L\) parallel attention layers to form ParaFormer and conveniently explore various architectures, such as the U-Net architecture, to design model variants.

Figure 3: ParaFormer architecture. Wave-PE fuses the amplitude \(\mathbf{A}\) estimated from the descriptor \(\mathbf{d}\) and the phase \(\mathbf{\theta}\) estimated from the position \(\mathbf{p}\) to generate the position encoding. Stacked parallel attention layers utilize self- and cross-attention to enhance the descriptors and find potential matches, where self- and cross-attention are adaptively integrated through a learnable network.
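As an illustration of one parallel attention layer, the sketch below reuses the cross-attention logits as described above. It is a sketch rather than the released implementation: the single-head formulation, the Q/K/V projections shared across the two images (one plausible reading of the weight sharing strategy), and the residual connections are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelAttention(nn.Module):
    """Sketch of a parallel attention layer with attention weight sharing."""
    def __init__(self, dim=256):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # projections assumed shared by both images
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.ReLU(),
                                  nn.Linear(2 * dim, dim))  # two-layer fusion MLP
        self.scale = dim ** -0.5

    def attend(self, q, k, v):
        return F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1) @ v

    def forward(self, x, y):
        qx, kx, vx = self.q(x), self.k(x), self.v(x)
        qy, ky, vy = self.q(y), self.k(y), self.v(y)
        # self-attention within each image
        sx, sy = self.attend(qx, kx, vx), self.attend(qy, ky, vy)
        # cross-attention: Q_y K_x^T is replaced by (Q_x K_y^T)^T, so the
        # logits are computed once and reused in both directions
        logits = qx @ ky.transpose(-2, -1) * self.scale
        cx = F.softmax(logits, dim=-1) @ vy                      # x attends to y
        cy = F.softmax(logits.transpose(-2, -1), dim=-1) @ vx    # y attends to x
        # learnable fusion of self- and cross-attention outputs (residual assumed)
        x = x + self.fuse(torch.cat([sx, cx], dim=-1))
        y = y + self.fuse(torch.cat([sy, cy], dim=-1))
        return x, y
```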
### U-Net Architecture

As shown in Figure 4 (a), ParaFormer-U is designed for efficiency. Spatial downsampling is performed first to extract high-level semantic information, then upsampling is performed to recover the spatial information, and the low-level and high-level information are fused by skip connections. As illustrated in Figure 4 (b), attentional pooling is proposed for downsampling. Observing the attention map by column shows that certain points have strong weights with all other points, indicating that they are important context points in the image. Suppose the feature in layer \(l\) is \(\textbf{x}^{l}\in\mathbb{R}^{N\times D}\) and the attention weight is \(\textbf{A}^{l}\in\mathbb{R}^{N\times N}\). Our proposed attentional pooling is defined as

\[\begin{split}&\textbf{s}=sum(\textbf{A},dim=1),\\ &\textbf{idx}=rank(\textbf{s},\textbf{k}),\\ &\bar{\textbf{x}}^{l}=Linear(\textbf{x}^{l}(\textbf{idx},:)),\\ &\textbf{g}=sigmoid(\textbf{s}(\textbf{idx})),\\ &\textbf{x}^{l+1}=\bar{\textbf{x}}^{l}\odot\textbf{g}.\end{split} \tag{3}\]

**k** is the number of points selected for the next layer \(l+1\), which is set to half the number of points in the previous layer \(l\). The sum of each column of the self-attention map is computed as the pooling score \(\textbf{s}\in\mathbb{R}^{N}\), which measures the importance of each point. Then the top-k points are selected based on the attentional pooling score to filter out insignificant points, and the _Linear_ layer is used to adjust the dimension of the descriptors. **s(idx)** extracts the values in **s** with indices **idx**, followed by a _sigmoid_ operation to generate the gating signal, and \(\odot\) represents element-wise matrix multiplication. Following [1], the unpooling operation is defined as

\[\begin{split}&\tilde{\textbf{x}}^{l}=Linear(\textbf{x}^{l}),\\ &\textbf{x}^{l+1}=distribute(\textbf{0}_{N\times C^{l+1}},\tilde{\textbf{x}}^{l},\textbf{idx}),\end{split} \tag{4}\]

where \(\textbf{x}^{l}\in\mathbb{R}^{k\times C^{l}}\) is the current feature matrix and \(\textbf{0}_{N\times C^{l+1}}\) is the initially empty feature matrix for the next layer. The _Linear_ layer is employed first to adjust the feature matrix dimension. \(\textbf{idx}\in\mathbb{R}^{k}\) contains the indices of the points selected in the corresponding pooling layer. Then the current feature matrix is inserted into the corresponding rows of the empty feature matrix according to **idx**, while the other rows remain zero. In other words, the features unselected in the pooling layer are represented by zero vectors to perform upsampling.

Figure 4: (a) U-Net architecture. The descriptors are processed in an encoder-decoder fashion. After stages 1 and 2, the descriptors are downsampled with attentional pooling to filter out insignificant descriptors. After stages 4 and 5, the descriptors are upsampled and fused with descriptors of the previous stage by skip connections. (b) Attentional pooling. Pooling scores are computed from attention weights to identify context points and provide the gating signal through a _sigmoid_ function.
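A minimal sketch of Eq. (3)-(4) follows. The class name `AttentionalPool` and the way the attention matrix and index list are passed around are assumptions of the sketch; the score computation, top-k selection, sigmoid gating, and zero-filled unpooling mirror the equations above.

```python
import torch
import torch.nn as nn

class AttentionalPool(nn.Module):
    """Sketch of attentional pooling (Eq. 3) and the matching unpooling (Eq. 4)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.down = nn.Linear(dim_in, dim_out)  # Linear adjusting width when pooling
        self.up = nn.Linear(dim_out, dim_in)    # Linear adjusting width when unpooling

    def pool(self, x, attn):
        # x: (N, dim_in) descriptors; attn: (N, N) self-attention weights
        s = attn.sum(dim=1)                     # pooling score per point (Eq. 3)
        k = max(1, x.shape[0] // 2)             # keep half of the points
        g, idx = torch.topk(s, k)               # idx = rank(s, k)
        x_next = self.down(x[idx]) * torch.sigmoid(g).unsqueeze(-1)  # gated features
        return x_next, idx

    def unpool(self, x, idx, n):
        # scatter the kept points back to their original rows; others stay zero (Eq. 4)
        out = x.new_zeros(n, self.up.out_features)
        out[idx] = self.up(x)
        return out
```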
### Implementation Details

The homography model is pretrained on the \(\mathcal{R}\)1M dataset [1], and then the model is finetuned on the MegaDepth dataset [11] for outdoor pose estimation and image matching tasks. On the \(\mathcal{R}\)1M dataset, we employ the AdamW [13] optimizer for 10 epochs using the cosine decay learning rate scheduler and 1 epoch of linear warm-up. A batch size of 8 and an initial learning rate of 0.0001 are used. On the MegaDepth dataset, we use the same AdamW optimizer for 50 epochs using the same learning rate scheduler and linear warm-up. A batch size of 2 and a lower initial learning rate of 0.00001 are used. For training on the \(\mathcal{R}\)1M/MegaDepth dataset, we resize the images to 640\(\times\)480/960\(\times\)720 pixels and detect 512/1024 keypoints, respectively. When the detected keypoints are not enough, random keypoints are added for efficient batching. All models are trained on a single NVIDIA 3070Ti GPU. For ParaFormer, we stack \(L=9\) parallel attention layers, and all intermediate features have the same dimension \(C=256\). For ParaFormer-U, the depth of each stage is \(\{2,1,2,1,2\}\), resulting in a total of \(L=8\) parallel attention layers, and the intermediate feature dimension of each stage is \(\{256,384,128,384,256\}\). More details are provided in the supplementary material.

## Experiments

### Homography Estimation

**Dataset.** We split the \(\mathcal{R}\)1M dataset [1], which contains over a million images of Oxford and Paris, into training, validation, and testing sets. To perform self-supervised training, random ground-truth homographies are generated to get image pairs.

**Baselines.** SuperPoint [1] is applied as the unified descriptor to generate the input for the matcher. ParaFormer and ParaFormer-U are compared with the attention-based matcher SuperGlue [1] and the NN matcher with learning-based outlier rejection methods [13, 14]. The results of SuperGlue are from our own implementation.

**Metrics.** Precision and recall are computed based on ground-truth matches. The area under the cumulative error curve (AUC) up to a value of 10 pixels is reported, where the reprojection error is computed with the estimated homography.

**Results.** As shown in Table 1, ParaFormer outperforms all outlier rejection methods and the attention-based matcher on homography estimation. It can be seen that the attention-based approaches have a remarkable superiority due to the global receptive field of attention. Compared with the attention-based approach SuperGlue, ParaFormer further boosts the performance by integrating self- and cross-attention with parallel attention layers, bringing a \(+2.05\%\) improvement on the F1-score over SuperGlue. The visualization of matches can be found in Figure 6. Moreover, compared to SuperGlue, our efficient U-Net variant has only \(49\%\) of the FLOPs, yet achieves better performance.

### Outdoor Pose Estimation

**Dataset.** ParaFormer is trained on the MegaDepth dataset [11] and evaluated on the YFCC100M dataset [10]. For training, 200 pairs of images in each scene are randomly sampled for each epoch. For evaluation, the YFCC100M image pairs and ground-truth poses provided by SuperGlue are used.

**Baselines.** SuperPoint is applied as the descriptor and combined with baseline matchers, which contain the attention-based matcher SuperGlue and the NN matcher with outlier rejection methods [12, 14]. The results of SuperGlue are from our own implementation.

**Metrics.** The AUC of the pose error at thresholds (\(5^{\circ}\), \(10^{\circ}\), \(20^{\circ}\)) is reported. Evaluation is performed with both approximate AUC [14] and exact AUC [1] for a fair comparison.

**Results.** As shown in Table 2, ParaFormer achieves the best performance at all thresholds, demonstrating the robustness of our models. With the wave position encoder and the parallel attention architecture, ParaFormer brings a \((+3.28\%,+4.2\%,+3.24\%)\) improvement on exact AUC and a \((+4.38\%,+3.89\%,+3.55\%)\) improvement on approximate AUC at the three thresholds of \((5^{\circ},10^{\circ},20^{\circ})\), respectively. In outdoor scenes with a large number of keypoints, ParaFormer-U can effectively alleviate the computational complexity problem by downsampling, while still maintaining state-of-the-art performance through attentional pooling.

### Image Matching

**Dataset.** We follow the evaluation protocol of D2-Net [10] and evaluate our methods on 108 HPatches [1] sequences, which contain 52 sequences with illumination changes and 56 sequences with viewpoint changes.

**Baselines.** Baseline methods include the learning-based descriptors R2D2, D2-Net, and SuperPoint [1, 12, 13] and the advanced matchers LoFTR, Patch2Pix, SuperGlue, and CAPS [23, 14, 15]. The results of SuperGlue are from our own implementation.

**Metrics.** A match is considered correct if the reprojection error is below the matching threshold, where the reprojection error is computed from the homographies provided by the dataset. The matching threshold is varied from 1 to 10 pixels to plot the mean matching accuracy (MMA), which is the average percentage of correct matches for each image.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Matcher & AUC & Precision & Recall & F1-score \\ \hline NN & 39.47 & 21.7 & 65.4 & 32.59 \\ NN + mutual & 42.45 & 43.8 & 56.5 & 49.35 \\ NN + PointCN & 43.02 & 76.2 & 64.2 & 69.69 \\ NN + OANet & 44.55 & 82.8 & 64.7 & 72.64 \\ SuperGlue & 52.65 & 90.9 & 98.88 & 94.72 \\ **ParaFormer-U** & 53.16 & 90.93 & 99.01 & 94.80 \\ **ParaFormer** & **54.91** & **94.55** & **99.10** & **96.77** \\ \hline \hline \end{tabular} \end{table} Table 1: Homography estimation on \(\mathcal{R}\)1M. AUC @10 pixels is reported. The best method is highlighted in bold.
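For reference, the AUC used above is commonly computed as the normalized area under the step-shaped cumulative error curve, clipped at the threshold. The sketch below shows one common implementation of this metric; it is not necessarily our exact evaluation code.

```python
import numpy as np

def error_auc(errors, thresh=10.0):
    """AUC of the cumulative error curve up to `thresh` pixels, in [0, 1]."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    # restrict the curve to [0, thresh] and close it at the threshold
    last = np.searchsorted(errors, thresh)
    e = np.concatenate([[0.0], errors[:last], [thresh]])
    r = np.concatenate([[0.0], recall[:last], [recall[last - 1] if last > 0 else 0.0]])
    # trapezoidal approximation of the step curve, normalized by the threshold
    return np.trapz(r, x=e) / thresh
```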
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Matcher} & \multicolumn{3}{c}{Exact AUC} & \multicolumn{3}{c}{Approx. AUC} \\ \cline{2-7} & @\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) & @\(5^{\circ}\) & @\(10^{\circ}\) & @\(20^{\circ}\) \\ \hline NN + mutual & 16.94 & 30.39 & 45.72 & 35.00 & 43.12 & 54.25 \\ NN + OANet & 26.82 & 45.04 & 62.17 & 50.94 & 61.41 & 71.77 \\ SuperGlue & 28.45 & 48.6 & 67.19 & 55.67 & 66.83 & 74.58 \\ **ParaFormer-U** & 29.40 & 49.76 & 68.29 & 56.47 & 67.66 & 75.67 \\ **ParaFormer** & **31.73** & **52.28** & **70.43** & **60.05** & **70.72** & **78.13** \\ \hline \hline \end{tabular} \end{table} Table 2: Pose estimation on YFCC100M. ParaFormer and ParaFormer-U lead other methods at all thresholds.

**Results.** As shown in Figure 5, ParaFormer achieves the best overall performance at matching thresholds of 5 or more pixels. The results indicate that detector-based methods such as SuperGlue and our methods are better at handling scenarios with large viewpoint changes, while detector-free methods such as LoFTR are better suited to addressing illumination changes. But ParaFormer still outperforms LoFTR in the illumination change experiments, benefiting from the superior modeling capability of parallel attention. With ParaFormer and ParaFormer-U, the overall performance of SuperPoint rises from last place to first and second place, demonstrating the effectiveness of our matchers. As expected, the strategy of computing pooling scores from features or attention weights is superior to random pooling.

**Efficiency Analysis.** Benefiting from the above designs, our model is remarkable in efficiency beyond just achieving state-of-the-art performance. As shown in Table 7, when matching 2048 descriptors, ParaFormer reduces FLOPs by \(17.8\%\) compared to SuperGlue with better performance. ParaFormer-U further improves efficiency with FLOPs of only \(49\%\) of SuperGlue, while it still outperforms SuperGlue due to the advantages of parallel attention and Wave-PE. As shown in Figure 7, the attention weight sharing strategy in ParaFormer alleviates the quadratic complexity of the attention mechanism, and the U-Net architecture further significantly reduces the computational cost through downsampling.

## Conclusion

In this paper, we propose a novel attention-based network named ParaFormer to handle feature matching tasks efficiently. As a preprocessing module, the proposed Wave-PE dynamically fuses features and positions in an amplitude-and-phase manner. In contrast to employing serial attention that intuitively mimics human behavior, we propose a parallel attention architecture that not only integrates self-attention and cross-attention in a learnable way but also saves redundant parameters and computations through the weight sharing and attention weight sharing strategies. To further improve efficiency, ParaFormer-U is designed with a U-Net architecture, which reduces FLOPs by downsampling and minimizes the performance loss through the proposed attentional pooling. Experiments show that ParaFormer and ParaFormer-U deliver state-of-the-art performance with remarkable efficiency, enabling broader application scenarios for attention-based feature matching networks.
## Acknowledgments This work was jointly supported by the National Natural Science Foundation of China under grants 62001110, 62201142 and 62171232, the Natural Science Foundation of Jiangsu Province under grant BK20200353, and the China Postdoctoral Science Foundation under grant 2020M681684. \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & F1-score & FLOPs (G) & Runtime (ms) \\ \hline SuperGlue & 90.68 & 125.85 & 26.99 \\ **ParaFormer-U** & 90.72 & **61.67** & **20.23** \\ **ParaFormer** & **94.92** & 104.22 & 24.99 \\ \hline \hline \end{tabular} \end{table} Table 7: Efficiency analysis @2048 keypoints. Figure 6: Qualitative results of homography estimation and pose estimation experiments. Figure 7: Comparison between FLOPs of models.
2305.02200
Deep Graph Representation Learning and Optimization for Influence Maximization
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users. Researchers have made great progress in designing various traditional methods, and their theoretical design and performance gain are close to a limit. In the past few years, learning-based IM methods have emerged to achieve stronger generalization ability to unknown graphs than traditional ones. However, the development of learning-based IM methods is still limited by fundamental obstacles, including 1) the difficulty of effectively solving the objective function; 2) the difficulty of characterizing the diversified underlying diffusion patterns; and 3) the difficulty of adapting the solution under various node-centrality-constrained IM variants. To cope with the above challenges, we design a novel framework DeepIM to generatively characterize the latent representation of seed sets, and we propose to learn the diversified information diffusion pattern in a data-driven and end-to-end manner. Finally, we design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints. Extensive analyses are conducted over both synthetic and real-world datasets to demonstrate the overall performance of DeepIM. The code and data are available at: https://github.com/triplej0079/DeepIM.
Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao
2023-05-01T15:45:01Z
http://arxiv.org/abs/2305.02200v2
# Deep Graph Representation Learning and Optimization for Influence Maximization

###### Abstract

Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users. Researchers have made great progress in designing various _traditional_ methods, and their theoretical design and performance gain are close to a limit. In the past few years, learning-based IM methods have emerged to achieve stronger generalization ability to unknown graphs than traditional ones. However, the development of learning-based IM methods is still limited by fundamental obstacles, including 1) the difficulty of effectively solving the objective function; 2) the difficulty of characterizing the diversified underlying diffusion patterns; and 3) the difficulty of adapting the solution under various node-centrality-constrained IM variants. To cope with the above challenges, we design a novel framework DeepIM to generatively characterize the latent representation of seed sets, and we propose to learn the diversified information diffusion pattern in a data-driven and end-to-end manner. Finally, we design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints. Extensive analyses are conducted over both synthetic and real-world datasets to demonstrate the overall performance of DeepIM. The code and data are available at: [https://github.com/triplej0079/DeepIM](https://github.com/triplej0079/DeepIM).

Machine Learning, ICML

## 1 Introduction

As one of the fundamental research problems in network analysis, the objective of Influence Maximization (IM) is to find a set of seed nodes that maximizes the spread of influence in a social network. IM has been extensively studied in recent years due to its large commercial value. For example, consider the case of viral marketing (Chen et al., 2010) for promoting a commercial product: a company may wish to spread the adoption of a new product from some initially selected users, who are expected to spread information about the product on their respective social networks. This cascading process continues, and ultimately a significant fraction of the users will try the product. Besides viral marketing, IM is also the cornerstone of many other critical applications such as network monitoring (Wang et al., 2017), misinformation containment (Yang et al., 2020), and friend recommendation (Ye et al., 2012).

As a typical combinatorial optimization problem, retrieving a (near-)optimal seed set to maximize the influence in a network is challenging due to the stochastic nature of information diffusion and the hardness of the problem. Traditional (non-learning-based) methods for IM (Leskovec et al., 2007; Kempe et al., 2003; Tang et al., 2014, 2015; Nguyen et al., 2016; Saito et al., 2012) have made great progress in the last decade, and Li et al. (2019) have even achieved exact solutions under specific diffusion models. The commonality of traditional methods is the explicit requirement of the information diffusion model as the model input. However, the real-world information diffusion process is complex and cannot be simply modeled by prescribed diffusion models. With the recent development of machine/deep learning, it is natural to consider a learning-based way to characterize the underlying diffusion process.
While great progress has been made in the field, current efforts on learning-based IM solutions are still in the infancy stage due to fundamental obstacles as follows. _1). The difficulty of efficiently optimizing the objective function._ Learning-based IM methods tend to solve the discrete problem in continuous space, mostly leveraging deep network representations (Zhang et al., 2022; Kumar et al., 2022) and deep reinforcement learning (Tian et al., 2020; Li et al., 2022). Even though they can attain performance competitive with traditional methods, their scalability and execution efficiency are problematic due to _(a)_ the need to iteratively update all node embeddings at each action and _(b)_ the #P-hardness of computing the influence spread (Lin et al., 2017). _2). The difficulty of automatically identifying and modeling the actual diffusion process._ To maximize the influence spread in a network, the underlying information diffusion pattern is an imperative part, as it determines the overall information propagation outcome. However, both traditional and learning-based methods cannot characterize the underlying diffusion process without heuristics. To work around this, both traditional and current learning-based methods have been leveraging pre-defined diffusion models (e.g., Linear Threshold (LT) and Independent Cascade (IC)) as the input to solve the combinatorial optimization problem. Although such methods work well for processes that follow their heuristics, real-world network processes are far more complex and largely unknown. _3). The difficulty of adapting solutions to various node-centrality-constrained IM problems._ There are many variants of IM that relate to node centrality, e.g., the constraint on the number of seed nodes, the constraint on the total degree of seed nodes, etc. Current learning-based IM solutions do not have a well-defined paradigm for solving different node-centrality-constrained IM problems, which poses another challenge to their solution adaptivity.

To address the above challenges, we propose a novel framework, DeepIM, to solve the IM problem by developing a novel strategy that embeds the initial discrete optimization domain into a larger continuous space. Remarkably, we propose to learn the latent representation of seed sets by retaining their expressiveness and directly optimizing in the continuous space to reduce the problem's hardness. We further design a learning-based diffusion model to characterize the underlying diffusion dynamics in an end-to-end manner. Moreover, we develop a generic seed set inference framework to directly optimize and generate set embeddings under a uniform budget constraint. Finally, we summarize our contributions as follows:

* **Problem.** We formulate the learning-based IM problem as embedding the initial discrete optimization domain into continuous space to ease the optimization, and we identify its unique challenges arising from real applications.
* **Framework.** We propose modeling the representation of the seed set in a latent space, and the representation is jointly trained with the model that learns the underlying graph diffusion process in an end-to-end manner.
* **Adaptivity.** We propose a novel constrained optimization objective function to infer the optimal seed set by leveraging deep graph embeddings, which can be applied under arbitrary node-centrality-related constraints.
* **Evaluation.** We conduct extensive experiments over four real-world datasets to demonstrate the performance of the proposed method. Compared with other state-of-the-art methods in various application scenarios, DeepIM achieves the best results in finding a seed set to maximize the influence.

## 2 Related Work

### Learning-based Influence Maximization

Influence Maximization (IM) was first formulated as a combinatorial optimization problem by Kempe et al. (2003), and it has inspired extensive research and applications in the following decades. Most of the traditional (i.e., non-learning-based) IM methods can be categorized as simulation-based, proxy-based, and heuristic-based. Traditional methods have efficiently achieved near-optimal or exact solutions under specific diffusion models. Note that Du et al. (2014); Vaswani et al. (2017) have alluded to the possibility of learning the influence from cascade data; however, they still assumed a prescribed model guides the diffusion pattern, specifically the Coverage function. We refer readers to recent surveys (Li et al., 2018; Banerjee et al., 2020) for more detailed reviews of traditional methods.

Learning-based methods use deep learning to address the drawbacks of traditional IM methods, namely the lack of generalization ability. Pioneering works (Lin et al., 2015; Ali et al., 2018) first combined reinforcement learning with IM, and they triggered extensive works that leveraged deep reinforcement learning to solve the IM problem. Existing state-of-the-art solutions (Li et al., 2019; Tian et al., 2020; Manchanda et al., 2020; Li et al., 2022; Chen et al., 2022) follow a similar paradigm: learning latent embeddings of nodes or networks, and taking the current node embedding as the state of the agent in order to choose the next seed node as the action, where the reward is its marginal influence gain. Other than reinforcement-learning-based IM methods, there also exist methods (Kumar et al., 2022; Kamarthi et al., 2019; Panagopoulos et al., 2020) that solely leverage graph neural networks to encode social influence into node embeddings and guide the node selection process. The weakness of current learning-based IM methods is also obvious: their model complexity and adaptivity are still not comparable to traditional methods. In particular, current ML-based algorithms can neither handle diversified diffusion patterns nor guarantee solution quality and model scalability.

### Graph Neural Network

Graph Neural Networks (GNNs) (Wu et al., 2020) are a class of deep learning methods designed to perform inference on data described by graphs. The general paradigm of GNNs alternates between node feature transformation and aggregation of information from neighboring nodes. For a \(K\)-layer GNN, a node aggregates information within its \(K\)-hop neighborhood. Specifically, the \(k\)-th layer transformation is:

\[a^{k}=\mathcal{A}^{k}(h^{k-1};\theta^{k}),\quad h^{k}=\mathcal{C}^{k}(a^{k};\theta^{k}),\quad\forall 1\leq k\leq K, \tag{1}\]

where \(a^{k}\) is an aggregated feature and \(h^{k}\) is the \(k\)-th layer node feature. The flexibility of the aggregation function \(\mathcal{A}(\cdot)\) and the combine function \(\mathcal{C}(\cdot)\) induces different GNN models (Velickovic et al., 2017; Kipf and Welling, 2016; Xu et al., 2018; Wang et al., 2022b). The high-level representations of nodes or graphs are utilized for different tasks.
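As an illustration of Eq. (1), the sketch below instantiates one aggregate/combine step with mean aggregation over neighbors followed by an MLP combine (a GraphSAGE-style choice rather than any specific model above); the class name, layer sizes, and the dense adjacency representation are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """Sketch of one aggregate/combine step from Eq. (1)."""
    def __init__(self, dim):
        super().__init__()
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):
        # h: (N, dim) node features; adj: (N, N) dense adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        a = adj @ h / deg                                # a^k: mean of neighbor features
        return self.combine(torch.cat([h, a], dim=-1))   # h^k = C^k(a^k, h^{k-1})
```

Different choices of the aggregation (e.g., attention weights in GAT) and the combine function (e.g., sum plus MLP in GIN) recover the models cited above.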
GNNs have been applied in various learning tasks such as information diffusion estimation (Chamberlain et al., 2021; Xia et al., 2021; Ko et al., 2020), graph source localization (Wang et al., 2022a; Ling et al., 2022b), deep graph generation (Ling et al., 2021, 2023a;b), and graph analogical reasoning (Ling et al., 2022a). In this work, we leverage GNNs to characterize the underlying diffusion pattern and construct an end-to-end model for estimating the influence.

## 3 Problem Formulation

Given a graph \(G=\{V,E\}\), the problem of IM aims to maximize the number of influenced nodes in \(G\) by selecting an optimal seed node set \(\mathbf{x}\subseteq V\). Particularly, the evaluation of IM relies on an influence diffusion model parametrized by \(\theta\): \(y=M(\mathbf{x},G;\theta)\), where \(\theta\) can be the set of infection probabilities of the nodes if \(M(\cdot)\) is an independent cascade model, or the set of parameters in the aggregation/combine functions if \(M(\cdot)\) is GNN-based. We denote \(\mathbf{x}\in\{0,1\}^{|V|}\) as the vector representation of the source node set, where the \(i\)-th entry satisfies \(x_{i}=1\) if \(v_{i}\) belongs to the seed set and \(x_{i}=0\) otherwise. The output \(y\in\mathbb{R}_{+}\) measures the total number of infected nodes (Li et al., 2018). Based on the formalization of the influence spread, the IM problem is defined as follows:

**Definition 1** (Influence Maximization).: The generic _IM_ problem requires selecting a set of \(k\) users from \(V\) as the seed set to maximize the influence spread:

\[\tilde{\mathbf{x}}=\arg\max_{|\mathbf{x}|\leq k}M(\mathbf{x},G;\theta), \tag{2}\]

where \(\tilde{\mathbf{x}}\) is the optimal seed node set that can produce a maximal influence spread in \(G\).

Intuitively, selecting \(\tilde{\mathbf{x}}\) heavily depends on the underlying diffusion process. We have witnessed many works that develop algorithms with GNNs and reinforcement learning to tackle the problem. However, the expressiveness and generalization capability of existing learning-based IM frameworks are still limited due to the following challenges.

**Challenges.** Firstly, most existing learning-based IM frameworks calculate latent node embeddings for selecting highly influential nodes. However, their objective functions require iteratively updating the latent embeddings of each node at each action/optimization step, regardless of whether the node is included in the current \(\mathbf{x}\). This poses a severe scalability problem when dealing with million-scale networks. Secondly, even though deep node/network embeddings and various reward functions are leveraged to guide the node selection process, existing frameworks are still tailored to specific diffusion models (e.g., they model \(M(\cdot)\) as an explicit IC or LT model). However, these simple diffusion models cannot meet the needs of real applications. Moreover, to ease the computational overhead of the #P-hard influence estimation, learning-based IM methods rely on techniques from traditional methods, such as proxy-based and sampling-based estimation, which makes scalability and generalization even worse. Lastly, there are plenty of node-centrality-constrained IM variants. For example, other than regulating the budget of the seed nodes, we may also need to regulate the total cost of selecting seed nodes. Learning-based IM solutions have different objective functions designed according to various application scenarios, and they do not have a well-defined scheme for all of the node-centrality-related constraints.
## 4 DeepIM

In this section, we propose the DeepIM framework to ease the computational overhead of learning-based IM methods and automatically identify the underlying diffusion patterns. The framework can be divided into two phases: the _learning phase_ is leveraged to characterize the probability of the observed seed sets and model the underlying information propagation distribution, and the _inference phase_ is employed to optimize the selection of seeds in continuous space to maximize the influence spread.

### Learning Representation of Seed Set

To build an effective and efficient objective function, we propose to characterize the probability \(p(\mathbf{x})\) of the seed node set \(\mathbf{x}\) given the graph \(G\), since learning \(p(\mathbf{x})\) can help depict the seed set's underlying nature. However, learning such a probability is not a trivial task because different nodes are inter-connected within each seed set and highly correlated based on the topology of \(G\). These connections make the relationships between nodes more complex and harder to decipher than in other similar combinatorial problems.

**Learning Probability over Seed Nodes.** Instead of directly modeling the highly intractable probability \(p(\mathbf{x})\), we introduce an unobserved latent variable \(z\) to represent \(\mathbf{x}\) and define a conditional distribution \(p(\mathbf{x}|z)\) to quantify the likelihood. These latent variables have much lower dimensions than the observed sub-optimal seed sets, which can yield a compressed representation. Particularly, we marginalize over the latent variables to obtain \(\int p(\mathbf{x},z)\,dz=\int p(\mathbf{x}|z)p(z)\,dz\). The posterior likelihood \(p(z|\mathbf{x})=p(\mathbf{x}|z)\cdot p(z)/p(\mathbf{x})\) allows us to infer \(z\) given the observed seed sets \(\mathbf{x}\). In this work, we adopt an autoencoder to generatively infer the posterior, where the encoder \(f_{\phi}\) (parameterized by \(\phi\)) and the decoder \(f_{\psi}\) (parameterized by \(\psi\)) are used to characterize the likelihoods of the posterior and the conditional distribution, respectively. The objective of the autoencoder is to maximize the joint likelihood:

\[\max_{\phi,\psi}\ \mathbb{E}\big{[}p_{\psi}(\mathbf{x}|z)\cdot p_{\phi}(z|\mathbf{x})\big{]}. \tag{3}\]
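A minimal sketch of such a seed-set autoencoder is given below. The MLP encoder/decoder, the layer widths, and the binary cross-entropy surrogate for the joint likelihood in Eq. (3) are assumptions of the sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class SeedSetAutoencoder(nn.Module):
    """Sketch of the seed-set autoencoder (Eq. 3): compress the |V|-dimensional
    seed indicator x into a low-dimensional latent z and reconstruct it."""
    def __init__(self, n_nodes, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes, 512), nn.ReLU(), nn.Linear(512, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, n_nodes), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)        # encoder f_phi, playing the role of p_phi(z | x)
        x_hat = self.decoder(z)    # decoder f_psi: per-node membership probability
        return z, x_hat

# Maximizing Eq. (3) then reduces to a reconstruction objective; binary
# cross-entropy is a natural surrogate for 0/1 seed vectors.
reconstruction_loss = nn.BCELoss()
```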
The GNN-based diffusion function \(M(\cdot)\) is the composition of two functions, \(M=g_{r}\circ g_{u}(\mathbf{x},G;\theta)\): 1) \(\tau=g_{u}(\mathbf{x},G;\theta)\), where \(g_{u}(\cdot)\) is a GNN-based aggregation function and \(\tau\in[0,1]^{|V|}\) is an intermediate output obtained after aggregating multi-hop neighborhood information; \(\tau\) denotes the _infection probability_ of each node. 2) \(y=g_{r}(\tau;\xi)\), with \(y\in\mathbb{R}_{+}\), denotes the final information spread, where \(g_{r}(\cdot)\) is a normalization function (e.g., the \(l_{1}\) norm) and \(\xi\) is the threshold that transforms the probabilities into discrete values. The GNN-based \(M(\cdot)\) is visualized in Fig. 1 (a).

**Definition 2** (Score Monotonicity and Infection Monotonicity). Given a GNN-based diffusion model \(M(\cdot):2^{|V|}\rightarrow\mathbb{R}_{+}\) and any two subsets \(S,T\subseteq V\), \(M(\cdot)\) is score monotonic if \(\mathbf{x}_{S}\preceq\mathbf{x}_{T}\) (i.e., \(S\subseteq T\)) implies \(M(\mathbf{x}_{S},G;\theta)\leq M(\mathbf{x}_{T},G;\theta)\), where \(\mathbf{x}_{S},\mathbf{x}_{T}\in\{0,1\}^{|V|}\) are the vector representations of the seed sets \(S\) and \(T\), respectively. \(M(\cdot)\) is infection monotonic if \(\mathbf{x}_{S}\preceq\mathbf{x}_{T}\) (i.e., \(S\subseteq T\)) implies \(\tau_{S}\preceq\tau_{T}\), where \(\tau_{S},\tau_{T}\in[0,1]^{|V|}\) denote the infection probabilities of the seed sets \(S\) and \(T\), respectively.

Monotonicity is a natural property in modeling the overall diffusion network structure: a monotonic diffusion model guarantees that enlarging the seed set never decreases the influence spread. Intuitively, if we select a larger community \(\mathbf{x}^{\prime}\) as the seed set with \(\mathbf{x}\preceq\mathbf{x}^{\prime}\), then \(\mathbf{x}^{\prime}\) would intrinsically infect no fewer nodes in the whole network than the smaller seed set \(\mathbf{x}\). Ensuring both monotonicity properties allows us to better characterize the underlying diffusion network structure and mimic real-world diffusion patterns (Dolhansky and Bilmes, 2016). Hence, we add constraints to make the GNN-based diffusion model \(M(\mathbf{x},G;\theta)\) monotonic during the influence spread estimation.

**Theorem 1** (Monotonicity of GNN Models). For any GNN-based \(M(\mathbf{x},G;\theta)=g_{r}\circ g_{u}(\mathbf{x},G;\theta)\), where \(g_{u}(\mathbf{x},G;\theta)\) is formulated by Eq. (1), \(M\) is score and infection monotonic if \(\mathcal{A}^{k}\) and \(C^{k}\), \(k\in[1,K]\), are non-decreasing in Eq. (1) and \(g_{r}\) is also non-decreasing.

We further show that the well-known Graph Attention Network (GAT) can be score and infection monotonic under the constraint stated in Theorem 1. Note that we follow the standard network structure of GAT as introduced in the original paper.

**Corollary 2** (Monotonicity of GAT). \(M\) is score and infection monotonic when \(g_{u}\) is a GAT if \(\theta^{k}\geq 0\) in Eq. (1) and \(g_{r}\) is also non-decreasing.

Due to the space limitation, the proofs of Theorem 1 and Corollary 2 are provided in Appendix A.
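As a hedged illustration of the sufficient condition in Theorem 1, the sketch below enforces non-negative aggregation weights and non-decreasing updates in a toy message-passing model (a deliberate simplification of Eq. (1), with our own names and a scalar weight per hop rather than full weight matrices):

```python
import torch
import torch.nn as nn

class MonotoneDiffusionGNN(nn.Module):
    """Toy g_u: non-negative weights + non-decreasing updates, so the output
    infection probabilities tau are monotone in the seed vector x."""
    def __init__(self, n_layers=3):
        super().__init__()
        self.theta = nn.Parameter(torch.full((n_layers,), 0.5))

    def forward(self, x, adj):
        tau = x
        for w in self.theta:
            # non-negative aggregation of neighbor infection probabilities;
            # clamping to at most 1 is itself a non-decreasing map
            tau = torch.clamp(tau + w * (adj @ tau), max=1.0)
        return tau  # tau in [0,1]^|V|

    def project_weights(self):
        # enforce the constraint theta >= 0 of Eq. (4) after each update
        with torch.no_grad():
            self.theta.clamp_(min=0.0)

def g_r(tau, xi=0.5):
    # threshold at xi, then count infected nodes (an l1-style readout)
    return (tau >= xi).float().sum()
```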
Figure 1: DeepIM consists of two parts. a) We leverage an autoencoder to learn and compress the latent distribution of seed node sets into a lower-dimensional \(p(z)\); the lower-dimensional \(p(z)\) is then leveraged to learn an end-to-end, monotonic diffusion model \(M(\mathbf{x},G;\theta)\) for accurately predicting the spread, and a knowledge distillation module trains a lightweight student model that retains efficiency in predicting the influence spread. b) The seed set inference scheme iteratively optimizes the proposed objective function by updating the latent variable \(z\), initially sampled from the learned \(p(z)\), to maximize the influence spread.

According to Theorem 1 and Corollary 2, the GNN-based \(M(\mathbf{x},G;\theta)\) is theoretically guaranteed to retain monotonicity, and the objective of learning it is to maximize the following probability under a constraint:

\[\max_{\theta}\ \mathbb{E}\big[p_{\theta}(y|\mathbf{x},G)\big],\quad\text{s.t. }\theta\geq 0. \tag{4}\]

**Knowledge Distillation for Diffusion Estimation Efficiency.** We have now learned a deep representation of seed sets and an end-to-end diffusion model with a monotonicity guarantee. However, the calculation of the influence spread \(M(\mathbf{x},G;\theta)\) empirically involves three steps: 1) decoding a node vector \(\mathbf{x}\) from the learned posterior \(p(\mathbf{x}|z)\); 2) executing the GNN-based diffusion model \(M(\mathbf{x},G;\theta)\) on the graph \(G\); and 3) normalizing the probabilistic output \(\tau\) of \(M(\mathbf{x},G;\theta)\) into the actual influence spread \(y\). Even though the predictions are accurate, the computational overhead remains a burden when dealing with million-scale networks. Inspired by recent research on knowledge distillation, we propose to leverage a small yet powerful student model, supervised by \(M(\mathbf{x},G;\theta)\), to attain efficiency. Specifically, the student model \(M_{s}(z;\lambda)\) is a lightweight neural network parametrized by \(\lambda\) that directly takes the latent variable \(z\), sampled from the learned \(p(z)\), as input and directly returns the estimated influence spread \(y_{s}\) as output. The distillation loss between the teacher model \(y=M(\mathbf{x},G;\theta)\) and \(y_{s}=M_{s}(z;\lambda)\) can be as simple as \(\|y-y_{s}\|_{2}^{2}\).
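A minimal sketch of this distillation step, assuming a small MLP student over a 64-dimensional latent code (the architecture, dimensions, and names are our own illustrative choices):

```python
import torch
import torch.nn as nn

# The teacher is the (expensive) GNN diffusion model; the student is a small
# MLP mapping the latent seed representation z straight to a spread estimate.
student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def distillation_loss(y_teacher, z):
    y_s = student(z).squeeze(-1)  # y_s = M_s(z; lambda)
    # ||y - y_s||_2^2; the teacher output is detached so only the student
    # receives gradients from this term
    return ((y_teacher.detach() - y_s) ** 2).mean()

z = torch.randn(32, 64)            # latent codes from the encoder
y_teacher = torch.rand(32) * 1000  # spreads predicted by M(x, G; theta)
loss = distillation_loss(y_teacher, z)
loss.backward()
```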
**End-to-end Learning Objective.** Finally, in order to bridge the representation learning and the learning of the diffusion model, we propose a unified, end-to-end objective function by putting Eq. (3) and Eq. (4) together:

\[\mathcal{L}_{\text{train}}=\max_{\theta,\lambda,\psi,\phi}\ \mathbb{E}\big[p_{\theta}(y|\mathbf{x},G)\cdot p_{\lambda}(y_{s}|z)\cdot p_{\psi}(\mathbf{x}|z)\cdot p_{\phi}(z|\mathbf{x})\big],\quad\text{s.t. }\theta\geq 0. \tag{5}\]

However, optimizing the expectation of a product of probabilities can be computationally difficult. We therefore take the negative logarithm of Eq. (5) and, by Jensen's inequality, minimize the resulting tractable upper bound as the final learning objective:

\[\begin{split}\mathcal{L}_{\text{train}}&=\min_{\theta,\lambda,\psi,\phi}-\log\Big[\mathbb{E}\big[p_{\theta}(y|\mathbf{x},G)\cdot p_{\lambda}(y_{s}|z)\cdot p_{\psi}(\mathbf{x}|z)\cdot p_{\phi}(z|\mathbf{x})\big]\Big]\\&\leq\min_{\theta,\lambda,\psi,\phi}\mathbb{E}\Big[-\log\big[p_{\theta}(y|\mathbf{x},G)\cdot p_{\lambda}(y_{s}|z)\cdot p_{\psi}(\mathbf{x}|z)\cdot p_{\phi}(z|\mathbf{x})\big]\Big]\\&=\min_{\theta,\lambda,\psi,\phi}\mathbb{E}\Big[-\log p_{\theta}(y|\mathbf{x},G)-\log p_{\lambda}(y_{s}|z)-\log\big[p_{\psi}(\mathbf{x}|z)\cdot p_{\phi}(z|\mathbf{x})\big]\Big],\ \text{s.t. }\theta\geq 0.\end{split} \tag{6}\]

The overall objective consists of minimizing the empirical error \(-\log p_{\theta}(y|\mathbf{x},G)\) of predicting \(y\) with the reconstructed \(\mathbf{x}\) as input, and minimizing the reconstruction error. In addition, we minimize the distillation loss \(-\log p_{\lambda}(y_{s}|z)\) to train the student model along with the overall training process. The overall framework for training the end-to-end diffusion model and the autoencoder that learns the seed set distribution is visualized in Fig. 1 (a).

### Seed Node Set Inference

To infer a highly influential seed node set in the testing domain, we jointly leverage the latent distribution \(p(\mathbf{x})\) of the seed node sets and the end-to-end diffusion model \(M(\cdot)\) from Eq. (6). If the autoencoder is well trained and retains both _continuity_ (two close points in the latent space should not decode to two completely different contents) and _completeness_ (a point sampled from the chosen latent distribution should decode to "meaningful" content), then the autoencoder of Eq. (3) can generate contents by exploiting the latent feature space \(p(z)\) learned from all the examples it was trained on, i.e., from \(p(\mathbf{x})\). Therefore, we propose to instead search for the optimal seed node set \(\tilde{\mathbf{x}}\) in the lower-dimensional and less noisy latent space \(p(z)\). The following corollary shows that, provided the autoencoder retains both continuity and completeness, estimating the influence spread with the latent variable \(z\) is equivalent to estimating it with the high-dimensional \(\mathbf{x}\).

**Corollary 3** (Influence Estimation Consistency). For any \(M(f_{\psi}(z^{(i)}),G;\theta)>M(f_{\psi}(z^{(j)}),G;\theta)\), we have \(M(\mathbf{x}^{(i)},G;\theta)>M(\mathbf{x}^{(j)},G;\theta)\).

The proof of Corollary 3 can be found in the Appendix. According to the corollary, we can find the optimal seed set that generates the maximal influence by optimizing \(z\) in the following joint probability: \(\max_{z}\mathbb{E}\big[p_{\theta}(y|\mathbf{x},G)\cdot p_{\psi}(\mathbf{x}|z)\big]\).

**Adaptation to Different IM Variants with Node Centrality Constraints.** Since the introduction of IM by Kempe et al. (2003), IM has been studied under various budget-constrained settings on nodes. To enhance the adaptivity of DeepIM, we design a unified constraint that allows inferring seed sets under various budgets on individual nodes. Specifically, the objective \(\mathcal{L}_{\text{pred}}\) is given as:

\[\mathcal{L}_{\text{pred}}=\max_{z}\ \mathbb{E}\big[p_{\theta}(y|\mathbf{x},G)\cdot p_{\psi}(\mathbf{x}|z)\big],\quad\text{s.t. }\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\leq k, \tag{7}\]

where \(\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\) is a generalized budget constraint applied to individual nodes and \(k\) is the actual budget. For the vanilla IM problem, which only requires selecting a given number of seed nodes, \(\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\) reduces to \(\|\mathbf{x}\cdot\mathbf{1}\|_{1}\), where \(\mathbf{1}\in\{1\}^{N\times 1}\) is an all-one vector indicating that the price of selecting each node is the same.
In addition, for node-degree-constrained IM problems (Leskovec et al., 2007; Nguyen et al., 2017), \(\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\) reduces to \(\|\mathbf{x}\cdot A\|_{1}\), where \(A\in\{0,1\}^{N\times N}\) is the adjacency matrix of the network \(G\), and \(\|\mathbf{x}\cdot A\|_{1}\leq k\) bounds the \(l_{1}\) norm of the total seed node degree by the budget \(k\). The budget constraint \(\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\leq k\) can also easily be designed, combined, and adapted to solve IM variants with non-uniform prices on each node. With this flexible constraint, the challenge of IM methods' adaptability can be partially addressed.

**Implementation Details of the Seed Set Inference.** We visualize our inference procedure in Fig. 1 (b). Specifically, the inference framework first samples a latent variable \(z\) from the learned latent distribution \(p(z)\). The latent variable \(z\) is then iteratively optimized according to the inference objective function Eq. (7) to attain a larger marginal gain (influence spread). Note that the learning-based diffusion model \(p_{\theta}(y|\mathbf{x},G)\) can be switched between the student diffusion model \(M_{s}(z;\lambda)\) and the GNN-based diffusion model \(M(\mathbf{x},G;\theta)\) to favor either efficiency or efficacy. In addition, the constrained objective function Eq. (7) cannot be computed directly, so we provide a practical version of the inference objective: since the diffused observation \(y\) follows a Gaussian distribution and the seed set \(\mathbf{x}\) follows a Bernoulli distribution, we can simplify Eq. (7) as:

\[\mathcal{L}_{\text{pred}}=\min_{z}\Big[-\log\prod\nolimits_{i=1}^{|V|}f_{\psi}(z)_{i}^{\,x_{i}}\big(1-f_{\psi}(z)_{i}\big)^{1-x_{i}}+\big\|\tilde{y}-y\big\|_{2}^{2}\Big],\quad\text{s.t. }\sum\nolimits_{i=1}^{|V|}\mathcal{F}(v_{i},G)\cdot x_{i}\leq k, \tag{8}\]

where \(\tilde{y}\) is given as the optimal influence spread (i.e., \(\tilde{y}=|V|\)), and the full derivation of the above equation is provided in the Appendix. Furthermore, we utilize projected gradient descent and propose a regularization function \(\Phi(\mathbf{x})\) to keep the predicted seed set \(\mathbf{x}\) in a valid region with respect to the different constraints. For example, \(\Phi(\mathbf{x})\) can be defined as selecting the \(k\) nodes with the highest probabilities when the price of selecting each node is equal in Eq. (7), or as cost-efficiently selecting the top-\(k\) nodes from \(\mathbf{x}/c(\mathbf{x})\), where \(c(\mathbf{x})\) denotes the per-node budget (e.g., node degree). Finally, the optimization procedure is summarized in Algorithm 1. We first sample an initial latent variable \(z\) (Line 1). On Lines 2-6, we iteratively solve the optimization problem of Eq. (7) with a gradient-descent optimizer (e.g., Adam) while regularizing the predicted seed set into a valid region with \(\Phi(\cdot)\). Fig. 1 (b) illustrates the overall process of the inference objective optimization. We provide the derivation details of both Eq. (6) and Eq. (8) in Appendix A.
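The inference loop of Algorithm 1 can be sketched as follows (our simplification: only the spread term of Eq. (8) is maximized through the student model, and \(\Phi\) is instantiated as uniform-cost top-\(k\) selection; all names are illustrative):

```python
import torch

def infer_seed_set(decoder, student, z0, k, steps=100, lr=0.05):
    """Optimize z in latent space, then project the decoded node
    probabilities onto the budget with Phi (here: plain top-k)."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -student(z).sum()  # maximize the predicted spread y_s = M_s(z)
        loss.backward()
        opt.step()
    probs = decoder(z.detach())     # x_hat = f_psi(z): per-node probabilities
    x = torch.zeros_like(probs)
    x[probs.topk(k).indices] = 1.0  # Phi: keep the k most probable nodes
    return x                        # binary seed vector with |x| = k
```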
## 5 Experiment

In this section, we compare the performance of the proposed DeepIM framework across six real networks in maximizing the influence under various settings, followed by a case study that qualitatively demonstrates the performance of DeepIM. Due to the space limit, more details of the experiment setup, hyperparameter settings, dataset descriptions, and comparison methods can be found in the Appendix.

### Experiment Setup

Our primary purpose is to evaluate the expected influence spread as defined in Eq. (2) under various IM application scenarios. Since DeepIM can easily be adapted to different diffusion patterns, we choose two representative models that are commonly used in the IM problem, i.e., the LT and IC models. In addition, we also evaluate the IM problem under the susceptible-infectious-susceptible (SIS) epidemic model (Kermack and McKendrick, 1927), whose major difference is that activated nodes can be de-activated. Due to the space limit, the experiment with this non-progressive diffusion model can be found in the Appendix.

**Data.** The proposed DeepIM is compared with other approaches on six real-world datasets: Cora-ML, Network Science, Power Grid, Jazz, Digg, and Weibo. We also adopt a synthetic dataset, a random graph with \(50{,}000\) nodes generated by the Erdos-Renyi algorithm (Erdos et al., 1960). The statistics of the data are shown in Table 1.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline\hline
 & Digg & Weibo & Power Grid & Network Science & Cora-ML & Jazz & Synthetic \\
\hline
Nodes & 279,613 & 2,251,166 & 4,941 & 1,565 & 2,810 & 198 & 50,000 \\
Edges & 1,170,689 & 225,877,808 & 6,594 & 13,532 & 7,981 & 2,742 & 250,000 \\
\hline\hline
\end{tabular}
\end{table}
Table 1: The Overview of the Datasets

We randomly sample seed node sets \(\mathbf{x}\), with seed sizes proportional to \(|V|\) of each network, and then use the IC, LT, and SIS models to compute the final influence spread \(y\). The resulting set of pairs \(\{(\mathbf{x},y)\}\) serves as the training set of our algorithm.
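Constructing these training pairs can be sketched as follows, reusing a cascade simulator such as the IC sketch above (the seed fraction and sample count are illustrative assumptions):

```python
import random

def make_training_set(G, simulate, n_samples=200, seed_frac=0.05):
    """Build supervision pairs {(x, y)}: random seed sets whose size is
    proportional to |V|, labelled with their simulated influence spread."""
    k = max(1, int(seed_frac * G.number_of_nodes()))
    data = []
    for _ in range(n_samples):
        x = random.sample(list(G.nodes), k)  # a random seed set
        y = simulate(G, x)                   # e.g., simulate_ic from above
        data.append((x, y))
    return data
```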
### Comparison Methods

In addition to comparing the performance of our model with the GNN-based diffusion model \(M(\mathbf{x},G;\theta)\) (denoted as DeepIM) against the student diffusion model \(M_{s}(z;\lambda)\) (denoted as DeepIM\({}_{s}\)), we adopt four sets of comparison methods, outlined as follows. _Traditional IM_: 1) IMM (Tang et al., 2015), 2) OPIM-C (Tang et al., 2018), and 3) SubSIM (Guo et al., 2020). _Learning-based IM_: 1) IMINFECTOR (Panagopoulos et al., 2020), 2) PIANO (Li et al., 2022), and 3) ToupleGDD (Chen et al., 2022). _Online IM_: OIM (Lei et al., 2015). _Budget-constraint IM_: CELF (Leskovec et al., 2007).

### Quantitative Analysis

We evaluate the performance of DeepIM in maximizing the influence against the other approaches under various IM application schemes. Each model selects 1%, 5%, 10%, and 20% of the nodes in each dataset as seed nodes; we let each diffusion model run until the diffusion process stops and record the average influence spread over \(100\) rounds. We report the percentage of finally infected nodes (i.e., the number of infected nodes divided by the total number of nodes).

**IM under IC Model.** We first examine the effectiveness of DeepIM against the baseline methods under the IC diffusion pattern. As shown in Table 2, DeepIM achieves an overall better performance than the other methods across all datasets. Among the traditional methods, IMM, OPIM-C, and SubSIM are three state-of-the-art approaches based on reverse-reachable-set sampling and various approximation techniques, which generate similar results across all datasets; however, they rely on different heuristics to guide the node selection for efficiency and fail to decode the underlying distribution of seed sets. OIM achieves better performance than the traditional methods on most datasets because it automatically and iteratively updates the edge weights. However, its disadvantage is also obvious: it is tailored to the specific IC diffusion model, which makes it less applicable in real-world scenarios. Lastly, the learning-based IM methods (IMINFECTOR, PIANO, and ToupleGDD) achieve competitive and generally better performance than the traditional ones due to their larger model size and better generalization ability. However, learning-based methods that leverage reinforcement learning suffer from scalability issues and cannot be applied to million-scale networks (e.g., Digg and Weibo), which makes them hard to use in real-world scenarios. Compared to these learning-based methods, DeepIM offers a more robust way of learning the end-to-end diffusion model and searches for highly influential node sets directly in latent space, which better captures the underlying diffusion dynamics and resolves the scalability issues. In addition, DeepIM\({}_{s}\) incorporates a lightweight end-to-end learning-based diffusion model that retains both efficacy and efficiency compared with the other learning-based methods.

Table 2: Performance comparison (percentage of finally infected nodes) under the IC model with 1%, 5%, 10%, and 20% of the nodes as seeds on Cora-ML, Network Science, Power Grid, Jazz, Synthetic, Digg, and Weibo.
**IM under LT Model.** We then evaluate the final influence spread with respect to the initial seed set size under the LT diffusion model. As shown in Table 3, DeepIM generates superior seed sets that infect the largest number of nodes, exceeding the other methods by a clear margin across all datasets. Notably, DeepIM demonstrates its superiority on the Synthetic dataset, where it effectively spreads the influence to the whole network when choosing \(20\%\) of the nodes as the initial seed set, while the other methods infect at most \(70\%\) of the nodes. Specifically, DeepIM and DeepIM\({}_{s}\) exceed the other methods by on average \(200\%\) on the Jazz dataset and \(30\%\) on the Synthetic dataset. This is largely due to the lack of generalization capability of the other methods across diffusion models.

**IM with Budget Constraint.** We then compare the quality of the seed sets generated by DeepIM and CELF under the IC and LT models with a budget constraint, where the budget is explicitly defined as the node degree in this paper. As can be seen from Fig. 2, our proposed method generally performs better than CELF across networks of all sizes, and the margins are more evident under the LT model (Fig. 2f - 2j). In addition, compared to CELF, the growth of the influence spread under DeepIM fluctuates less across all datasets, which demonstrates the stability of DeepIM, owing to its capability of identifying the latent distribution of seed sets while accounting for the budget constraint.

### Scalability Analysis

We record the runtime of the seed set inference as the node size increases, against the other learning-based IM solutions. As can clearly be seen in Table 4, DeepIM exhibits near-linear growth of the runtime as the graph size increases. In addition, it achieves a generally shorter inference time (on average \(20\%\) faster than the second-fastest method, IMINFECTOR). Moreover, DeepIM\({}_{s}\), coupled with the lightweight end-to-end diffusion model, greatly reduces the computational cost of estimating the expected influence spread and achieves on average a \(90\%\) reduction in inference time compared with the full DeepIM model.

### Case Study: Graph Diffusion Visualization

Finally, we conduct a case study to show the distribution of the selected \(20\%\) seed nodes as well as the final infection status of all nodes in Fig. 3, where blue nodes indicate the initial seed nodes, red nodes indicate nodes infected during the influence spread, and grey nodes represent uninfected nodes. For ease of presentation, we only visualize the result on the Jazz dataset because of its smaller graph size. Overall, DeepIM demonstrates better performance in terms of spreading the influence. Due to the space limit, we compare the influence spread between different initial seed set sizes, namely \(10\%\) and \(20\%\), in the Appendix and provide more discussion there.
## 6 Conclusion

In this paper, we propose a novel framework that tackles the IM problem in a more robust and generalized way than existing learning-based IM methods. In particular, to capture the complex nature of the seed set, we characterize the probability of observed seed sets and directly search for a more optimal seed set in a continuous latent space. Furthermore, to address the challenge of modeling the underlying diffusion pattern, we offer two different learning-based diffusion models that characterize diversified diffusion dynamics with guarantees on efficiency and efficacy. Finally, we propose a novel objective function that can be coupled with multiple node-centrality constraints for seed node set inference, so that the framework adapts to different IM application schemes. Extensive experiments and case studies on both synthetic and real-world datasets demonstrate the advantages of DeepIM over existing state-of-the-art methods in maximizing the influence spread.

\begin{table}
\begin{tabular}{l|c c c c|c}
\hline\hline
 & 10,000 & 20,000 & 30,000 & 50,000 & 50,000 (Training) \\
\hline
IMINFECTOR & 3.478s & 7.842s & 12.376s & 16.492s & 4753.67s \\
PIANO & 5.948s & 10.532s & 16.575s & 28.437s & 14732.63s \\
ToupleGDD & 10.476s & 19.583s & 32.792s & 58.988s & - \\
\hline
DeepIM\({}_{s}\) & 0.312s & 0.616s & 0.847s & 1.275s & 503.12s \\
DeepIM & 1.402s & 2.798s & 5.124s & 12.882s & 1244.56s \\
\hline\hline
\end{tabular}
\end{table}
Table 4: The average inference runtime (in seconds) as the node size increases, together with the average training time; \(10\%\) of the nodes are selected as seeds.

Figure 3: The visualization of the influence spread on the Jazz dataset. The size of a node is determined by its degree, and the color indicates the infection status: blue means the node is in the seed set, red means the node is infected, and grey means the node is not infected.
2306.13885
Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
Artificial Intelligence (AI) systems are increasingly used in high-stakes domains of our life, increasing the need to explain these decisions and to make sure that they are aligned with how we want the decision to be made. The field of Explainable AI (XAI) has emerged in response. However, it faces a significant challenge known as the disagreement problem, where multiple explanations are possible for the same AI decision or prediction. While the existence of the disagreement problem is acknowledged, the potential implications associated with this problem have not yet been widely studied. First, we provide an overview of the different strategies explanation providers could deploy to adapt the returned explanation to their benefit. We make a distinction between strategies that attack the machine learning model or underlying data to influence the explanations, and strategies that leverage the explanation phase directly. Next, we analyse several objectives and concrete scenarios the providers could have to engage in this behavior, and the potential dangerous consequences this manipulative behavior could have on society. We emphasize that it is crucial to investigate this issue now, before these methods are widely implemented, and propose some mitigation strategies.
Sofie Goethals, David Martens, Theodoros Evgeniou
2023-06-24T07:21:28Z
http://arxiv.org/abs/2306.13885v2
# Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem

###### Abstract

Artificial Intelligence (AI) systems are increasingly used in high-stakes domains of our life, increasing the need to explain these decisions and to make sure that they are aligned with how we want the decision to be made. The field of Explainable AI (XAI) has emerged in response. However, it faces a significant challenge known as the disagreement problem, where multiple explanations are possible for the same AI decision or prediction. While the existence of the disagreement problem is acknowledged, the potential implications associated with this problem have not yet been widely studied. First, we provide an overview of the different strategies explanation providers could deploy to adapt the returned explanation to their benefit. We make a distinction between strategies that _attack_ the machine learning model or underlying data to influence the explanations, and strategies that _leverage_ the explanation phase directly. Next, we analyse several objectives and concrete scenarios the providers could have to engage in this behavior, and the potential dangerous consequences this manipulative behavior could have on society. We emphasize that it is crucial to investigate this issue now, before these methods are widely implemented, and propose some mitigation strategies.

Explainable AI (XAI) \(\cdot\) Manipulation \(\cdot\) Responsible AI

## 1 Introduction

Artificial Intelligence (AI) is used in more and more high-stakes domains of our lives, such as justice (Berk, 2012), healthcare (Callahan and Shah, 2017), and finance (Lessmann et al., 2015), increasing the need to explain these decisions and to make sure that they are aligned with how we want the decisions to be made. However, the complexity of many AI systems makes them challenging to comprehend, posing a significant barrier to their implementation and oversight (Arrieta et al., 2020; Samek et al., 2019). Legislative initiatives, including the EU General Data Protection Regulation (GDPR), have recognized the 'right to explanation' for individuals affected by algorithmic decision-making, emphasizing the legal necessity of explainability (Goodman and Flaxman, 2017). In response, the field of Explainable Artificial Intelligence (XAI) has emerged, aimed at developing methods for explaining the decision-making processes of AI models (Adadi and Berrada, 2018; Holzinger et al., 2022; Xu et al., 2019). Nevertheless, the landscape of post-hoc explanations is diverse, and each method can yield a different explanation. Furthermore, even within a single explanation method, multiple explanations can be generated for the same instance or decision. This phenomenon, known as the _disagreement problem_, has been studied in the literature (Brughmans et al., 2023; Krishna et al., 2022; Neely et al., 2021; Roy et al., 2022). While the existence of the disagreement problem is acknowledged, its potential implications have not yet been extensively explored. Barocas et al. (2020) already mention that the power to choose which explanation to return leaves the providers with significant room to promote their own welfare. Aivodji et al. (2019) discuss the possibility of fairwashing, where discriminatory practices can be hidden by selecting the right explanations, while Bordt et al. (2022) argue that post-hoc explanations fail to achieve their purpose in adversarial contexts.
However, an overview of potential misuses by the explanation provider is still missing from the literature, and we believe it is imperative to study the implications now, before explainability methods are implemented on a wide scale. The main contributions of this paper are:

* Providing a comprehensive framework that outlines the different strategies that could be employed by malicious entities to manipulate the explanations.
* An overview of the different objectives these actors could have to engage in this behavior, and the potential implications.

This paper is structured as follows: we introduce the field of Explainable AI and the disagreement problem in Sections 2 and 3. In Section 4, we explore various strategies that providers could employ to manipulate the explanations to their preferences. Additionally, in Section 5, we present specific objectives and scenarios that may drive providers to engage in such behavior. Finally, in Section 6, we offer a discussion and potential solutions to address this issue.

## 2 Explainable AI

Within the field of Artificial Intelligence, providing insights into the decision-making process is crucial for various reasons. First, it establishes trust and compliance with stakeholders, as they can understand and validate the reasoning behind the model's output. Secondly, it enables improved domain insights, allowing practitioners and users to gain a deeper understanding of the problem space and uncover valuable knowledge. Lastly, insights derived from explanations can lead to model improvement, aiding in the optimization of AI systems (Molnar, 2020; Xu et al., 2019). To reach these goals, various methods to achieve comprehensibility in AI models have been proposed. In general, two main approaches are commonly used: inherently transparent models and post-hoc explanations. Inherently transparent models, such as small decision trees, are comprehensible by nature due to their simple structure, without the need for additional explanations (Molnar, 2020). However, in many real-world scenarios, data is becoming increasingly complex and black-box models are used due to their superior predictive performance (Goethals et al., 2022). These models lack inherent interpretability, and post-hoc explanations are used to provide insights into their decision-making process. This field of research is commonly known as Explainable Artificial Intelligence (XAI). Within XAI, a distinction can be made between global and local explanations. Global explanations aim to provide an understanding of the model's logic as a whole, allowing users to follow the reasoning that leads to every possible outcome. Techniques such as rule extraction (Martens et al., 2007) and Partial Dependence plots (Friedman, 2001) fall under this category. Local post-hoc explanations, on the other hand, focus on explaining the logic behind a specific prediction or decision made by the model. Methods like SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017) and LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016) are examples of post-hoc explanation methods that measure the impact of each feature on a given prediction score (feature importance methods). Another local technique, known as counterfactual explanations, describes a combination of feature changes required to alter the predicted class (Martens and Provost, 2014; Wachter et al., 2017; Guidotti, 2022).
While this paper predominantly uses counterfactual explanations as examples, the findings and discussion presented are applicable to other post-hoc explanation techniques as well. At the moment, we do not see manipulation issues for inherently transparent models, but this would be an interesting avenue for future research (Bordt et al., 2022). In line with Greene et al. (2023), we define an explanation recipient as a person who requests an explanation for an automated decision, and an explanation provider as the entity who provides the algorithmic explanations to the recipient. For example, in the domain of finance, the explanation provider could be a bank and the explanation recipient a loan applicant, while in the domain of employment the explanation recipient would be the job applicant and the explanation provider the hiring agency (Greene et al., 2023). Not all scenarios described in Section 5 assume that there is one actual recipient; the explanation provider can also provide explanations of the model to the public proactively, or to comply with regulatory requirements.

## 3 The Disagreement Problem

A known issue within Explainable AI is that the results of different explanation techniques do not always agree with each other. Moreover, even one explanation technique can generate many different explanations for one instance; this is known as the disagreement problem (Krishna et al., 2022; Neely et al., 2021; Roy et al., 2022). One of the reasons behind the disagreement problem is that a 'true internal reason' why the machine learning model comes to a certain decision generally does not exist (Bordt et al., 2022). For example, for feature importance methods such as SHAP and LIME, there is no mathematically unique way to determine the importance of each feature for the decision of a black-box function (Bordt et al., 2022; Sundararajan and Najmi, 2020). As a consequence, all feature importance methods rely on their own assumptions to approximate this importance (Bordt et al., 2022; Sundararajan and Najmi, 2020). For counterfactual explanations this issue also exists, as the optimization problem that creates the explanations can be set up differently in every implementation. Even a single counterfactual explanation method can lead to a large number of explanations, as the choice of parameters (such as the distance metric) influences which explanations are returned first (Goethals et al., 2023). The diversity of multiple counterfactual explanations generated by the same counterfactual algorithm is also known as the Rashomon effect (Molnar, 2020).1

Footnote 1: The Rashomon effect means that an event can be explained by multiple causes, and is named after a Japanese movie that tells multiple (contradictory) stories about the death of a samurai (Molnar, 2020).

Other authors have already shown the level of disagreement between different post-hoc explanation techniques: Roy et al. (2022) show disagreement between LIME and SHAP explanations, Brughmans et al. (2023) illustrate this for different counterfactual explanation algorithms, and Bordt et al. (2022) demonstrate the disagreement between SHAP, LIME, and counterfactual explanations. We illustrate the disagreement problem for one specific instance with an example in Table 1, in line with Brughmans et al. (2023).
This table demonstrates the disagreement problem between different counterfactual explanation algorithms for one instance from the Australian credit dataset, where the target variable indicates whether a person should be granted a loan or not. The depicted instance was not awarded credit and asks for a counterfactual explanation to know which features to change in order to receive a positive credit decision. Table 1 shows the explanations returned by 10 different counterfactual algorithms, and they vary widely.2 This example illustrates that every feature can be included in the explanation by switching between explanation algorithms. Brughmans et al. (2023) verify this for multiple datasets and classifiers, and establish the feasibility of both including and excluding specific features across different scenarios. Note that the potential for manipulation of explanations extends beyond switching between different counterfactual explanation algorithms; in Section 4, alternative strategies that can be employed for manipulation are explored. Currently, a consensus on how to resolve this ambiguity has not yet been reached, and research indicates that most developers rely on arbitrary heuristics, such as personal preferences, to choose the final explanation (Krishna et al., 2022).

Footnote 2: The counterfactual algorithm GeCo was not able to find a counterfactual explanation for the given instance.

This plurality is not necessarily a bad thing. Bordt et al. (2022) distinguish between a cooperative and an adversarial context. In cooperative contexts, where stakeholders have the same goal, this plurality can be beneficial, as it is expected that the explanation provider will choose the explanation that is in both parties' best interest. For example, when data scientists are debugging a model for their own company, this plurality of explanations can be useful. However, in adversarial contexts, the interests of the explanation provider and the data subject are not necessarily aligned, and the explanation providers will be incentivized to choose the explanation that best fits their own interests. An example of such an adversarial context is a loan application where the customer was denied the loan and wants to flag the decision as being discriminatory (Bordt et al., 2022). In this case, the bank might want to conceal this discriminatory practice by returning a different explanation. This phenomenon is known as _fairwashing_ and has received significant attention (Aivodji et al., 2019). While fairwashing is the most extensively studied objective, we will explore additional scenarios for misuse in adversarial contexts in Section 5. However, even in adversarial contexts this plurality can be used in a positive way: Bove et al. (2023) mention that in settings such as loan applications, the plurality of explanations can benefit the user if they are provided with multiple explanations.
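To make this plurality tangible, the toy sketch below (our own construction, not code from any cited paper) runs the same greedy counterfactual search twice on the same rejected instance and the same fixed linear model, changing only the order in which features are preferred; both returned explanations are valid, yet they differ:

```python
import numpy as np

# A fixed "bank" model: score >= 0 means the loan is granted.
w = np.array([0.8, 0.5, 0.4])         # weights for income, tenure, savings
features = ["income", "tenure", "savings"]
x = np.array([-1.0, -0.5, -0.5])      # a rejected applicant (score < 0)
score = lambda v: float(w @ v)

def greedy_counterfactual(x, order, step=0.5, cap=2.0):
    """Increase features one at a time, in the given preference order,
    until the decision flips; different orders yield different explanations."""
    cf = x.copy()
    for i in order:
        while score(cf) < 0:
            cf[i] += step
            if cf[i] - x[i] >= cap:   # limit the change per feature
                break
        if score(cf) >= 0:
            break
    return {features[i]: round(cf[i] - x[i], 2)
            for i in range(len(x)) if cf[i] != x[i]}

print(greedy_counterfactual(x, order=[0, 1, 2]))  # {'income': 2.0}
print(greedy_counterfactual(x, order=[2, 1, 0]))  # {'tenure': 1.0, 'savings': 2.0}
```

Whichever of the two counterfactuals is returned to the applicant is entirely the provider's choice, which is precisely the leeway discussed above.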
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c}
\hline\hline
 & Sex & Age & Residence time & Home status & Occupation & Job time & Employment time & Other investments & Bank account & Time at bank & Liability & Account reference & Housing expense & Savings account \\
\hline
Instance & 2 & 16 & 22 & 1 & 2 & 6 & 7 & 0 & 0 & 0 & 0 & 1 & 1 & 125 \\
\hline
CBR & 2 & 16 & 025 & 1 & 2 & 6 & 7 & 0 & 1 & 0 & 0 & 1 & 1 & 125 \\
DiCE & 2 & 16 & 22 & 1 & 2 & 6 & 7 & 24 & 0 & 0 & 0 & 1 & 1 & 125 \\
GeCo & & & & & & & & & & & & & & \\
NICE(none) & 2 & 34 & 0 & 3 & 3 & 10 & 1 & 0 & 0 & 0 & 0 & 1 & 2 & 136 \\
NICE(plaus) & 2 & 34 & 0 & 3 & 3 & 6 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 136 \\
NICE(prox) & 2 & 34 & 0 & 1 & 2 & 10 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 136 \\
NICE(sparse) & 2 & 16 & 0 & 1 & 2 & 10 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 136 \\
SEDC & 2 & 16 & 22 & 1 & 2 & 6 & 7 & 0 & 1 & 0 & 0 & 1 & 1 & 125 \\
WIT & 1 & 278 & 8 & 2 & 1 & 5 & 1 & 6.5 & 1 & 1 & 6 & 0 & 0 & 102 \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Illustration of the disagreement problem for an instance of the Australian Credit dataset.

## 4 Manipulation Strategies: How can explanation providers exploit the disagreement problem?

The manipulation of explanations by explanation providers is not limited to the aforementioned example of switching between explanation algorithms, but can occur at various stages throughout the pipeline, as depicted in Figure 1. We specifically focus on the manipulation that takes place in the post-processing stage, where the explanations are generated, as the explanation provider may not always possess the authority to modify the machine learning model or the underlying data (the explanation provider is not necessarily the same entity as the model owner). Nevertheless, it is important to note that manipulations directly to the data or the model remain feasible, and we discuss some relevant literature exploring them below.

Figure 1: Strategies the explanation providers could deploy to manipulate the explanations

Manipulating the training data to obtain different explanations is related to the area of _data poisoning attacks_. Data poisoning attacks usually involve injecting manipulated data into the training set to compromise the performance of the machine learning model, and while the main focus in the literature is on model behavior, the goal might also be to manipulate the explanations. Baniecki et al. (2023) illustrate that it is possible to attack Partial Dependence plots by poisoning the training data. Bordt et al. (2022) highlight the important role of the reference dataset, and show how changing this set influences the resulting SHAP explanations. With regard to changing the model, Slack et al. (2020) demonstrate the possibility of modifying biased classifiers in such a way that they continue to yield biased predictions, while the explanations generated by LIME and SHAP appear harmless. Other authors show the possibility of fine-tuning a neural network to conceal discrimination in the model explanations (Dimanov et al., 2020; Heo et al., 2019). Finally, in the domain of images, Dombrowski et al. (2019) present evidence showcasing the manipulation of explanations through the application of nearly imperceptible perturbations to visual inputs. In this case, the test data, for which the prediction needs to be explained, is altered: these perturbations would not change the output of the machine learning model, but could result in drastic changes in the explanation map.3 Additionally, Slack et al. (2021) focus on modifying both the model and the test data, such that slight perturbations to the input data can lead to more cost-effective recourse for specific subgroups, while giving the impression of fairness to auditors.
Footnote 3: One could argue that altering the test data in an imperceptible way will be mostly applicable to image data, as in tabular data these changes may be more noticeable.

As mentioned, we focus on strategies for altering the explanation in the post-processing stage, without making any alterations to the data used or to the underlying machine learning model. We foresee three main strategies the providers could deploy in this stage:

1. **Change the explanation technique.** Many different post-hoc explanation techniques exist, both local and global, as outlined in Section 2. Consequently, a first evident strategy entails switching to a different explanation technique. For example, when a surrogate model reveals patterns the explanation provider wants to conceal, the provider might switch to Partial Dependence plots if these patterns do not manifest clearly in those plots. However, on a local level, using different explanation techniques for different instances may attract more attention than the strategies described below, as the output could have a significantly different format (e.g., a feature importance plot versus a counterfactual explanation).

2. **Change the parameters or the implementation of an explanation technique.** Even within a single explanation algorithm, significant leeway exists for manipulating the explanations, contingent upon the selected parameter configuration. For example, LIME explanations depend on the number of perturbed instances and the bandwidth (Bordt et al., 2022; Garreau and Luxburg, 2020), while for Shapley values there is a multitude of ways to implement them, and each operationalization yields significantly different results (Sundararajan and Najmi, 2020). Global methods, such as surrogate modeling, are heavily influenced by the choice of architectural design (e.g., linear models, decision trees, etc.) and the complexity of the surrogate model. In the case of counterfactual explanations, as shown in Table 1, the implementation used exerts a substantial influence on the returned explanations, and the number of available implementations is proliferating at a rapid pace. Additionally, even within one counterfactual algorithm, there often exist many modifiable parameters that influence the results.

3. **Exploit the non-deterministic component of some explanation algorithms.** Some explanation algorithms, such as DiCE (Mothilal et al., 2020), inherently provide multiple possible explanations for one instance. In such cases, the explanation provider can simply select an explanation from the available options without requiring any modifications. Furthermore, certain explanation algorithms are not designed in a deterministic way and may return different explanations across runs. For example, when using LIME, the randomness introduced during the sampling and perturbation process can lead to variations in the generated explanations at each execution (Lee et al., 2019; Zhang et al., 2019). Additionally, de Oliveira and Martens (2021) show that multiple counterfactual algorithms do not generate consistent results over multiple runs, even when the same model, input data, and parameters are used. In this scenario, the explanation providers can repeatedly execute the explanation algorithm until an explanation that aligns with their preferences is returned, as sketched below.
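As a toy illustration of this third strategy (our own sketch; the explanation pool and function names are hypothetical and do not correspond to any real explainer's API), the provider simply reruns a stochastic explainer and keeps the first explanation that omits the features it prefers to hide:

```python
import random

# Hypothetical stand-in for a non-deterministic explainer: for the same
# instance it returns one of several equally valid explanations at random,
# mimicking the samplers/restarts of real algorithms.
VALID_EXPLANATIONS = [
    {"age"},                 # would flag a sensitive feature
    {"gender", "income"},    # would flag a sensitive feature
    {"income", "savings"},   # "clean" from the provider's viewpoint
]

def stochastic_explainer(rng):
    return rng.choice(VALID_EXPLANATIONS)

def cherry_pick(avoid=frozenset({"age", "gender"}), max_tries=50, seed=0):
    """Rerun the explainer until an explanation omits the features the
    provider wants hidden, and present only that one to the recipient."""
    rng = random.Random(seed)
    exp = stochastic_explainer(rng)
    for _ in range(max_tries):
        if not (avoid & exp):
            return exp       # the explanation shown to the user
        exp = stochastic_explainer(rng)
    return exp

print(cherry_pick())  # almost surely prints {'income', 'savings'}
```

Every returned explanation is technically correct; the manipulation lies solely in which one the recipient gets to see.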
In the scenarios we describe, we assume explanation providers deliberately choose, out of all the possible explanations, the explanation that best aligns with their interests. The returned explanation will still be technically correct; it will just not necessarily be the explanation that is in the best interest of the user. It is important to note that we are not referring to situations where explanations chosen by the explanation provider are not in the best interest of the user 'by accident', due to differences in knowledge background or a lack of awareness of the user's preferences (Bove et al., 2023; Gilpin et al., 2022). Instead, we are concerned with cases where the explanation provider knowingly opts for an explanation that serves its own agenda, despite knowing that it may not be the optimal explanation for the end user. Note that in the described strategies, the providers maintain a partially ethical stance by delivering explanations that retain technical correctness. However, providers could exploit the situation further by offering spam explanations containing superfluous features (Greene et al., 2023), or by deliberately presenting entirely fabricated explanations. The complexity of the pipeline depicted in Figure 1 demonstrates the extensive potential for manipulation and, consequently, the fragility of explanations.

## 5 Manipulation Objectives: Why would explanation providers want to exploit the disagreement problem?

Which objectives could the providers have to engage in this behavior? We outline them in Figure 2 and discuss various scenarios for each objective in the subsections below. At the moment, we see mitigating liability, implementing their beliefs, and maximizing their profits as the main objectives. This list may not be exhaustive, as technology is constantly evolving and new objectives may emerge.

### 5.1 Mitigate liability

The model could be unethical or suboptimal in several ways, and the model explanations could reveal this. Explanation providers could manipulate the explanations to avoid these issues coming to light.

#### 5.1.1 Fairwashing

The first, and most studied, reason for explanation providers to engage in this behavior is _fairwashing_ (Aivodji et al., 2019, 2021; Shahin Shamsabadi et al., 2022). Fairwashing is defined as '_promoting the false perception that a machine learning model used by the company is fair while this might not be so_' (Aivodji et al., 2019). In a fairwashing attack, the explanation provider manipulates the explanations to under-report the unfairness of the machine learning model. This has a significant impact on the individuals that received a negative decision on unfair grounds, as it deprives them of the possibility to contest it (Aivodji et al., 2021). The relative ease with which fairwashing can be executed has already been shown in the literature (Aivodji et al., 2021; Shahin Shamsabadi et al., 2022). Imagine a bank that decides it prefers people from a certain demographic group and predominantly gives out loans to this group (without a justified reason to do so). It could easily mask this behavior by choosing a different explanation.
For example, instead of returning the explanation '_If you had belonged to a different demographic group, you would have received the loan_', it could return the explanation '_If your income were twice as high, you would have received the loan_', even if the latter explanation is less plausible. Some counterfactual algorithms, such as DiCE (Mothilal et al., 2020), even take as an input parameter the features that can be part of the explanation, so if sensitive features such as demographic attributes are removed from this list, counterfactual explanations will never flag discrimination. We use counterfactual explanations as an illustration here, but this objective extends to other explanation techniques as well: all the techniques mentioned in Section 2 have the potential to reveal bias within a model (for example, a feature importance ranking where the sensitive attribute has a very high score). This misleading practice undermines the core principles of algorithmic fairness and hampers efforts towards achieving equitable and just outcomes.

#### 5.1.2 Blame avoidance

Explanation providers can also take advantage of the plurality of explanations to shift blame or evade responsibility for controversial or erroneous decisions made by Artificial Intelligence (AI) systems. Nissenbaum (1996) already mentions that placing accountability in a computerized system can be a very obscure process due to the '_problem of many hands_': many actors and factors contribute to the process, and it is not clear which factor ultimately led to the decision. This issue is reflected in the explanations, where different explanations can point to different actors or circumstances. For example, in the case of autonomous vehicles, AI systems make critical decisions that impact passenger safety. Malicious model owners, such as manufacturers or operators, may downplay system failures or accidents caused by their vehicles. They could selectively present an explanation that attributes the fault to external factors or human error, and as such divert attention from potential design flaws or inadequate safety measures. Similarly, in the field of healthcare, this exploitative behavior can manifest when mistakes by surgeons or flaws in operating machines are concealed to avoid accountability. Explanation providers, which could include medical professionals, institutions, or even the manufacturers of medical devices, may withhold or manipulate explanations to protect their reputations or evade legal consequences. Such practices can have severe consequences, as critical flaws in life-critical systems may go unnoticed, posing a threat to the safety and well-being of future users. These actions not only endanger lives but also run contrary to our ethical values: placing the entire blame on parties that are only partially responsible for an incident contradicts the principles of fairness and accountability, and the appropriate distribution of responsibility is crucial for ensuring that errors are properly addressed and the necessary improvements are made.

### 5.2 Implement beliefs

Explanation providers may use the explanations to promote their belief system, either by influencing people through propaganda or by excluding applicants that they deem unworthy, despite the machine learning model not sharing this perspective.
Figure 2: Main objectives to leverage the disagreement problem

#### 5.2.1 Computational propaganda

The power to choose the explanation that best fits one's interest can be used to exert influence on public opinion. Propaganda itself is defined as '_the expression of opinion or action by individuals or groups deliberately designed to influence opinion or actions of other individuals or groups with reference to predetermined ends_', while computational propaganda is defined as '_propaganda created or disseminated using computational (technical) means_' (Martino et al., 2020). Note that propaganda does not necessarily have to lie; it could simply cherry-pick the facts, which is exactly the option explanation providers have at their disposal (Martino et al., 2020). By selectively presenting explanations that align with their preferred ideology or desired narrative, explanation providers can amplify certain perspectives while downplaying or ignoring others. For example, in the realm of political campaigns, AI systems are used to analyze public sentiment, create targeted messaging, and influence voter behavior. Imagine an entity with access to an AI model that predicts the likelihood of successful integration for immigrants based on various factors like employment, language proficiency, and government support. The entity firmly believes in the principle of stricter requirements for immigrants, and could selectively highlight specific factors such as language proficiency or employment history, while downplaying or omitting other important factors such as government support and community involvement. By presenting the AI model's predictions as mainly being driven by these selected factors, it could frame the narrative that successful integration is mainly due to language proficiency and employment. The goal is to shape public opinion regarding immigration policy and generate support for stricter language and employment requirements for immigrants. Evidently, machine learning models cannot perfectly mimic the actual world, so even if a machine learning model could be perfectly explained, such an explanation would not constitute a perfect explanation of the real world. However, the concern here lies in the fact that people may still perceive machine-generated explanations as accurate depictions of the actual world; consequently, the cherry-picked explanations have the potential to influence and shape their understanding of the world at large. Additionally, if the power to generate the explanations lies in the hands of a few actors, these actors have the potential to wield significant influence over a large number of people. In this context, the manipulation of explanations can have far-reaching consequences for public opinion and democratic decision-making: it undermines the principles of transparency and a fair exchange of ideas, and could promote the spread of misinformation.

#### 5.2.2 Avoid undesired applicants

In this scenario, the explanation provider, who is using a machine learning model, has the ability to engage in discriminatory practices without directly manipulating the model itself. Instead, they alter the quality of the explanations given to certain population groups, thereby introducing discrimination. In algorithmic decision-making, explanations are often provided to users (the explanation recipients) to help them understand the factors that influenced the decision and potentially take corrective actions (_algorithmic recourse_).
Counterfactual explanations are most often used here, as they guide users in modifying their input data to achieve a desired outcome. In this case, the explanation provider treats different population groups unequally by manipulating the quality of the explanations provided to them. The preferred population group is given explanations that are concise, actionable, and easily implementable. For example, they might receive suggestions such as adjusting the loan amount slightly or making small changes to their reported income. These explanations empower the preferred group to take specific actions that could potentially improve their chances of receiving a positive outcome. On the other hand, the disadvantaged demographic group is given explanations of lower quality. These explanations are designed to be difficult or even impossible to act on. They might involve suggesting large changes to their income or modifying their age, which are factors that applicants typically have limited or no control over. By providing such explanations, the explanation provider creates a significant imbalance in the recourse options available to different population groups.

These population groups are not solely confined to traditionally protected characteristics such as race or gender. They can extend to any characteristic that the explanation provider deems undesirable. For example, in the hiring domain, the hiring company (and explanation provider) may deliberately offer lower-quality explanations to older individuals or individuals with certain health conditions, as it perceives them as less desirable for future employment. In some cases, this could also increase profit, which shows that the different objectives can be pursued in parallel and are not mutually exclusive.

Note that the discriminatory practices described in this scenario are not related to the machine learning model itself, but to the post-processing stage where explanations are generated and shared with applicants. This issue is related to fairness in algorithmic recourse, where fairness is assessed by measuring the distance between the factual and the counterfactual instance (von Kügelgen et al., 2022; Sharma et al., 2020), and highlights the need for fairness assessments not only during the modeling stage but throughout the entire decision-making pipeline, including the provision of explanations.

### Increase profit

Explanation providers might feel incentivized to capitalize on the explanations. They could return the explanation that would be the most profitable for them, for which we envisage several scenarios.

#### 5.3.1 Advertising

One possibility discussed in previous work is the integration of algorithmic explanations with advertising opportunities, creating an '_explanation platform_' where advertisements are served alongside the explanation (Greene et al., 2023). An example of this could be that, during a job application, you receive the following explanation: '_If your CV had included Python, you would have been invited to the next round_'. This explanation would then be accompanied by an advertisement for an online Python course, which would be a convenient solution for users to reach their goal (Greene et al., 2023). This approach allows the explanation provider to select the explanations that have the potential to generate the highest revenue in the advertising market.

#### 5.3.2 Highlight profit-maximizing explanations

However, monetization avenues can go beyond advertising.
Explanation providers can also exploit the plurality of explanations to direct users towards actions that would maximize their own profits directly. This is related to the advertising scenario, but in this case the actions of the decision subject would directly lead to an increase in profit for the provider. For example, in the domain of healthcare diagnostics, AI systems are increasingly used for the identification of diseases and treatment recommendations. Malicious explanation providers, such as healthcare providers or insurance companies, may strategically choose explanations that prioritize certain measures or specific treatments. In this context, the goals of healthcare providers and insurance companies may diverge. Healthcare providers may have incentives to promote more expensive treatments, while insurance companies may prefer cost-saving measures and cheaper treatment options. However, by favoring explanations that are not necessarily the best or most appropriate, these providers can exert influence over medical decisions and potentially compromise patient care. This scenario could also occur in domains other than healthcare: for example, in the realm of credit scoring, AI systems are employed to evaluate an individual's creditworthiness. Barocas et al. (2020) already mention that decisions (and therefore explanations) in this scenario are not simply binary. The provider gives the decision subject a counterfactual that results in a _specific_ interest rate, and as such it can choose the interest rate that is likely to maximize its profit (Barocas et al., 2020).

#### 5.3.3 Engage users

In line with _Computational Propaganda_, discussed in Section 5.2.1, providers could also choose to return the explanations that reinforce the ideologies of the data subjects themselves. In this case, the explanation provider would be a platform, and the goal would be to maximize the revenues of the platform by keeping users as engaged and satisfied as possible (for many platforms, daily/monthly active users are an important metric in their financial reports). An example of an explanation in this case could be the same as in the propaganda scenario, but here different groups in society would receive very different explanations, depending on their beliefs. It is known that presenting users with content and information that is likely to resonate with their interests is a way to achieve this (in line with filter bubbles in content recommendation systems). However, this could lead to different groups in society receiving vastly different explanations for the same phenomenon, and consequently to _epistemic fragmentation_ (Milano et al., 2021).4

Footnote 4: Epistemic fragmentation refers to the tendency for different people to have different sources of knowledge and different, often conflicting, understandings of the same phenomena.

Introducing a profit motive into the generation of explanations at all seems contrary to the initial goals of Explainable AI. An explanation recipient should not have to wonder whether the selected explanation was chosen for its profit-making potential rather than for its ability to accurately explain the situation (Greene et al., 2023).

## 6 Discussion

The examples discussed in Section 5 shed light on potential ethical concerns, even though they may not necessarily involve illegal activities.
In these scenarios, the generated explanations remain factually correct but are selectively hand-picked by the explanation provider to serve their own interests. At the moment, this process is completely unregulated, but it could have very serious consequences, as outlined in the scenarios above. In these scenarios, we assumed the explanation providers had malicious incentives, but obviously, this will not always be the case. In fact, some providers may be motivated to manipulate the explanations for the social good. For example, they might explicitly avoid providing explanations that reinforce biased stereotypes, in an attempt to promote fairness and equity. Nevertheless, even though their motives might be aligned with societal goals, it remains questionable whether unregulated entities without the required authority should have the power to make this call.

As we are at the forefront of the XAI revolution, it is crucial to address this issue now, before these methodologies are implemented on an even wider scale. Currently, a substantial portion of AI power is concentrated among a few tech giants. If we also grant them the authority to control the explanations generated by AI models, they would possess yet another means to exert significant influence over society. To mitigate this concentration and potential misuse of power, it becomes imperative for government institutions to collaborate and establish agreed-upon standards and tools for XAI. In particular, in adversarial contexts where interests may clash, it should not be left solely to the explanation providers to create and choose the explanations. Instead, we argue that governments and policy makers should take the matter into their own hands, and agree on a framework that should be used as soon as possible. The key question here is "_What should be the process to make this decision, and what tools are needed to support this process?_". Academic researchers should help in answering this question by proposing a set of tools that can be used, and by promoting the transparency of digital platforms in their whole process (Greene et al., 2022).

Similar to the no-free-lunch theorem, which indicates that there is no algorithm that always outperforms all others, there will likely not be a universally superior explanation method. An agreement on which method to use in which scenario should be established, and this should be done democratically by allowing those affected by XAI to voice their opinion (Kuźba and Biecek, 2020; Vermeire et al., 2022), in line with the 'democratic principles of affected interests' (Fung and Wright, 2001). To ensure adherence to ethical values, we also foresee that it would be mandatory to have external auditors conducting audits of AI systems, explanations, and decision-making processes. These auditors should be independent entities without a vested interest in the outcomes, similar to how audits are conducted in other industries.

It will take some time to reach a global consensus on the procedures that should be used, and therefore, as a short-term solution, regulation should demand full **transparency** about the explainability method and settings used. This would not remove all potential for manipulation, but it would remove some of the flexibility explanation providers have to change these continuously. In high-stakes contexts, where transparency is of paramount importance, we argue that the use of white-box models needs more attention (Goethals et al., 2022), given the manipulation risks surrounding explanations.
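As a minimal illustration of the kind of tooling such auditors could use, the sketch below quantifies how much two explanation methods disagree about the same prediction. The attribution scores, feature names, and the choice of disagreement statistics are hypothetical stand-ins, not part of any framework discussed above:

```
import numpy as np
from scipy.stats import spearmanr

# Hypothetical feature attributions for one loan decision,
# produced by two different explanation techniques.
features = ["income", "age", "loan_amount", "employment_years"]
attr_a = np.array([0.42, 0.05, 0.31, 0.22])  # e.g., from a SHAP-style explainer
attr_b = np.array([0.10, 0.38, 0.35, 0.17])  # e.g., from a LIME-style explainer

# Rank correlation: low values signal disagreement, i.e., room to cherry-pick.
rho, _ = spearmanr(attr_a, attr_b)

# Top-k agreement: do both methods flag the same most important features?
k = 2
top_a = set(np.argsort(-attr_a)[:k])
top_b = set(np.argsort(-attr_b)[:k])
agreement = len(top_a & top_b) / k

print(f"rank correlation: {rho:.2f}, top-{k} agreement: {agreement:.0%}")
```

Reporting such disagreement statistics alongside the chosen explanation would at least make it visible when a provider had many plausible explanations to choose from.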
To conclude, we believe that implementing these measures can ensure that AI systems are developed and deployed in a manner that aligns with societal values, and foster a more transparent and ethical XAI ecosystem. ## Acknowledgements This research was funded by Research Foundation--Flanders grant number 11N7723N. The authors would like to acknowledge Dieter Brughmans for providing the example depicted in Table 1, and would like to thank Dieter Brughmans, Ine Weyts, Camille Dams and Travis Greene for their feedback on the paper.
2306.02750
The Learning Prescription, A Neural Network Hearing Aid Core
A hearing aid core based on a prescription neural network (such as NAL-NL2) is defined here. This hearing aid core replaces a traditional compressor hearing aid core which mimics the said hearing aid prescription. Whilst the replacement of the compressors by a neural network may seem simple, the implications are vast in terms of the "learning prescription", where the topology of the neural network may be increased to make available more free parameters and allow greater personalisation of the hearing aid prescription.
Matt R. Flax
2023-06-05T10:12:41Z
http://arxiv.org/abs/2306.02750v1
# The Learning Prescription, A Neural Network Hearing Aid Core

###### Abstract

A hearing aid core based on a prescription neural network (such as NAL-NL2) is defined here. This hearing aid core replaces a traditional compressor hearing aid core which mimics the said hearing aid prescription. Whilst the replacement of the compressors by a neural network may seem simple, the implications are vast in terms of the "learning prescription", where the topology of the neural network may be increased to make available more free parameters and allow greater personalisation of the hearing aid prescription.

## 1 Introduction

The NAL-NL2 hearing aid prescription introduced a neural network for the prescription of hearing aid gain for the first time [2], based on a desensitised speech intelligibility index (SII) designed for NAL-NL2 [1]. Concise descriptions of the NAL-NL2 hearing aid prescription are given in [4, 5], which focus on the effects of the desensitised SII on gain optimisation; however, the said articles gloss over the importance of the introduction of the neural network to hearing aid prescription, which overcame significant hurdles in the reliable dispensing of prescriptions under NAL-NL1. The reason arbitrary prescriptions are now far more accurate is the ability of the NAL-NL2 neural network to successfully interpolate between optimised prescriptions for people with unique and unseen hearing loss profiles. Prior to the introduction of the neural network in hearing aid prescription, hand-crafted nonlinear equations were used to try to match the infinitely many possible prescriptions, which cannot all be optimised, and thus certain patients would not receive optimal hearing aid prescriptions.

This article takes the next logical step in hearing aid development by defining for the first time the replacement of hearing aid compressors by a personal prescription neural network. This article lays the foundation for the future layering of neural network and other statistically optimised systems to greatly improve hearing aid performance. With the introduction of personal prescription neural networks, this article also introduces a robust method for further personalisation away from speech intelligibility prescriptions and towards learning prescriptions.

A digital hearing aid core is shown in Figure 1a, where a filter bank bands the signal and pre-fit compressors implement the hearing aid prescription. The signal path is nonlinear, as the sound pressure level is constantly changing and the level estimation in the compressors is constantly changing. This constantly changing level estimation generates nonlinear gain application, as the compressor's operating point is slowly but constantly varying.

The hearing aid implemented with a prescription neural network core, shown in Figure 1b, operates on a block of N samples of audio signal. The sound level meter (SLM) presents signal levels for the neural network to prescribe the block gain for each band of the filter bank. The gains are applied to the banded signals, which are summed and then output to the receiver. As the gains are not varying within a signal block, the signal chain is linear. Half-window overlap-add techniques can be used to allow the audio blocks to vary smoothly and allow the gains to vary without output discontinuity.

This article describes the implementation of a hearing aid with a neural network core. Free software is also available which implements the theory in this document.
Figure 1: Digital and neural network hearing aid cores.

Section 2 first implements a log-banded filter bank centred around the prescription frequencies (\(f_{c}\)). The duration of the audio in each filter is roughly eight milliseconds, and after overlap-add the effective hearing aid gains change at a rate of approximately four milliseconds. Rates of gain change slower than three milliseconds are optimal for a prescription algorithm such as NAL-NL2 [2], as the compression ratio of the optimised prescription is not altered and thus the speech intelligibility index is maximised. The overall latencies of the filters are half the filter length, as there is an overlap-add framework. In operation, the first half block can be output after half the filter length is input/output, and every subsequent half block is processed and output, resulting in an overall latency of half the filter length, which is around 3 ms to 4 ms.

The subsequent Sections 3 and 4 briefly address level metering and signal amplification, while the last section leaves the prescription neural network as an open design solution. The best neural network will start the user in a space which is optimised for SII maximisation, but allow the user to train their prescription to their own personal target. The gradual expansion of the free parameters available to the neural network will allow the complexity of gain prescription to expand to the user's taste.

## 2 Log banded filter bank

The prescription algorithm outputs gains for the log centred frequencies (\(f_{c}\) in Hz) over the M=6 bands from m=0 to m=M-1

\[f_{c}\left(m\right)=250\left(2^{m}\right)\]

Zero phase brick-wall band limited filters are generated, where the zero phase filters (\(h_{0,\,m}\)) are specified in the Discrete Fourier Domain (\(H_{0,\,m}\)) and transformed to the time domain using the inverse Discrete Fourier Transform (DFT or \(\mathcal{F}\))

\[H_{0,\,m}\left(f_{i}\left(m\right),\,f_{a}\left(m\right)\right)=\left.1\right|_{f_{i}\leq\left|f\right|\leq f_{a}}\]
\[h_{0,\,m}=\mathcal{F}^{-1}\left\{H_{0,\,m}\right\}\]

where the minimum frequency \(\left(f_{i}\left(m\right)\right)\) and maximum frequency \(\left(f_{a}\left(m\right)\right)\) are specified per band m (see Equation 1). These zero phase filters are circularly shifted by a constant group delay of \(\frac{N}{2}\) samples to give the linear phase band limited filters \(\left(h_{m}\right)\)

\[h_{m}=h_{m,\,n}=h_{m}[n]=h_{0,\,m}\left[\left(n+\frac{N}{2}\right)\bmod N-1\right]\]

The specification of the band limits (in Hz) is

\[f_{a}\left(m\right)=\begin{cases}f_{c}\left(m\right)+f_{t}&m=0\\ \frac{3}{2}f_{c}\left(m\right)&m>0\end{cases}\tag{1}\]
\[f_{i}\left(m\right)=\begin{cases}20&m=0\\ f_{a}\left(m-1\right)&m>0\end{cases}\]

where \(f_{t}\) is the frequency stepping between Fourier bins, or the DFT resolution, which is kept to a maximum value

\[f_{t}=\frac{f_{c}\left(0\right)}{2}\]

and this defines the number of samples (N) in the filter given a sample rate of \(f_{s}\) Hz

\[N=\frac{f_{s}}{f_{t}}\]

An example filter bank with a sample rate of \(f_{s}=24\,kHz\) is implemented in the script LogFilterBankTest.m and is shown in Figure 2.
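A minimal NumPy sketch of this construction is given below. It follows the equations above; the variable names and the translation are ours, not the released LogFilterBankTest.m implementation:

```
import numpy as np

fs = 24000                                  # sample rate (Hz)
M = 6
m = np.arange(M)
fc = 250.0 * 2.0 ** m                       # prescription centre frequencies f_c(m)
ft = fc[0] / 2.0                            # DFT resolution: 125 Hz
N = int(fs / ft)                            # filter length: 192 samples (8 ms)

fa = np.where(m == 0, fc + ft, 1.5 * fc)    # upper band edges (Equation 1)
fi = np.concatenate(([20.0], fa[:-1]))      # lower band edges

f = np.abs(np.fft.fftfreq(N, d=1.0 / fs))   # |f| of each DFT bin
h = np.zeros((M, N))
for band in range(M):
    H0 = ((f >= fi[band]) & (f <= fa[band])).astype(float)  # brick-wall H_{0,m}
    h0 = np.fft.ifft(H0).real               # zero-phase impulse response
    h[band] = np.roll(h0, N // 2)           # circular shift: linear phase, N/2 delay
```

With \(f_{s}=24\,kHz\) this yields \(N=192\) taps, matching the roughly 8 ms filter duration discussed above, and the six contiguous brick-wall bands should sum to an approximately flat response between 20 Hz and the Nyquist frequency.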
## 3 The sound level meter

The SLM estimates the dB SPL level of the signal \(\left(s\right)\) for each band \(\left(l_{m}\right)\)

\[l_{m}=20\,\log_{10}\left(\sum_{n=0}^{N-1}h_{m}*s+l_{t,m}\right)+l_{d,m}\]

where \(*\) represents the convolution operator and the scaling variables are defined as follows: \(l_{t}\) is a time domain DC offset which may be necessary in some systems, and \(l_{d}\) is a gain variable which converts digital full-scale levels into dB sound pressure level.

## 4 Audio amplification and output

The gained bands of audio are summed and output

\[y=\sum_{m=0}^{M-1}g_{m}h_{m}*s\]

At this point overlap-add sums the last block of audio into the current block of audio to generate the receiver's output audio signal \(\left(r_{n}\right)\)

\[r_{n}=y_{n-N/2}\,w_{n}+y_{n}\,w_{n}\]

## 5 The prescription neural network

The neural network inputs the signal levels per band for each block of audio and outputs the required signal gain per band (\(g_{m}\)). All neural network pre- and post-conditioning are applied in this block of processing. The neural network can be multi-layer and have arbitrary non-linear layer output functions. The implementation of the prescription neural network [3] is proprietary software.

## 6 Conclusion

This article replaces traditional hearing aid cores which are based on compressors (see Figure 1a) with a suitable SII-maximising neural network core (see Figure 1b). A traditional prescription system such as NAL-NL2 can be placed directly onto the user's hearing aid in the form of a personal prescription neural network. This personal prescription neural network can then be trained to learn the user's preference in amplification. With time, as the free parameters in the neural network are increased in number, more complex features and learning may be accomplished.

A suitable FIR filter for this hearing aid is defined in Section 2, which targets a half-block input/output delay to allow a roughly 3 ms system latency, matching the optimal operating latency for the NAL-NL2 prescription algorithm. A simple sound level meter and amplification strategy are also defined in Sections 3 and 4.
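To make the signal chain of Sections 3 and 4 concrete, here is a minimal NumPy sketch of one processing block. The calibration constants, the window choice, and the prescription-network call are placeholder assumptions rather than the paper's proprietary implementation, and the absolute value inside the level estimate is a simplification to keep the logarithm's argument positive:

```
import numpy as np

def band_levels(s, h, l_t=1e-9, l_d=94.0):
    """Section 3: per-band level estimate in dB SPL for one block.
    l_t (DC offset) and l_d (full-scale to dB SPL gain) are placeholders."""
    bands = np.stack([np.convolve(s, hm, mode="same") for hm in h])
    return 20.0 * np.log10(np.abs(bands).sum(axis=1) + l_t) + l_d, bands

def amplify_block(bands, g, w, prev_tail):
    """Section 4: apply the per-band block gains g, sum the bands,
    window, and half-window overlap-add with the previous block."""
    y = (np.asarray(g)[:, None] * bands).sum(axis=0) * w
    N = y.shape[0]
    r = y[: N // 2] + prev_tail     # receiver output for this half block
    return r, y[N // 2:]            # tail is overlap-added into the next block

# Per block: levels -> prescription network (Section 5) -> gains -> output.
# levels, bands = band_levels(block, h)
# g = prescription_net(levels)    # hypothetical call; proprietary in [3]
# out, tail = amplify_block(bands, g, np.hanning(len(block)), tail)
```

Because the gains are held constant within each block, the per-block processing here is linear, and the half-window overlap-add lets the gains change between blocks without output discontinuity, as described in the introduction.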
2302.01976
SPARLING: Learning Latent Representations with Extremely Sparse Activations
Real-world processes often contain intermediate state that can be modeled as an extremely sparse tensor. We introduce Sparling, a technique that allows you to learn models with intermediate layers that match this state from only end-to-end labeled examples (i.e., no supervision on the intermediate state). Sparling uses a new kind of informational bottleneck that enforces levels of activation sparsity unachievable using other techniques. We find that extreme sparsity is necessary to achieve good intermediate state modeling. On our synthetic DigitCircle domain as well as the LaTeX-OCR and Audio-MNIST-Sequence domains, we are able to precisely localize the intermediate states up to feature permutation with > 90% accuracy, even though we only train end-to-end.
Kavi Gupta, Osbert Bastani, Armando Solar-Lezama
2023-02-03T19:51:28Z
http://arxiv.org/abs/2302.01976v2
# Sparling: Learning Latent Representations with Extremely Sparse Activations

###### Abstract

Real-world processes often contain intermediate state that can be modeled as an extremely sparse tensor. We introduce Sparling, a new kind of informational bottleneck that explicitly models this state by enforcing extreme activation sparsity. We additionally demonstrate that this technique can be used to learn the true intermediate representation with no additional supervision (i.e., from only end-to-end labeled examples), and thus improve the interpretability of the resulting models. On our DigitCircle domain, we are able to get an intermediate state prediction accuracy of 98.84%, even as we only train end-to-end.

## 1 Introduction

A hallmark of deep learning is its ability to learn useful intermediate representations of data from end-to-end supervision via backpropagation. However, these representations are often opaque, with components not referring to any semantically meaningful concepts. Many approaches have been proposed to address this problem by leveraging extra knowledge in the form of additional supervision or handcrafted constraints on the intermediate representation. For instance, concept bottlenecks leverage labels for the intermediate concepts (Koh et al., 2020), and information bottlenecks impose that the mutual information between the representation and the input be bounded (Bourlard and Kamp, 1988). Here, we consider the constraint of extreme sparsity, which, when applicable, leads to a particularly effective approach to discovering the true underlying structure.

We introduce Sparling, a novel technique for learning _extremely sparse representations_, where \(\geq\)99% of the activations are zero for a given input. We are motivated by settings where components of the intermediate representation correspond to spatial concepts--which we call _motifs_--that occur in only a small number of locations. For instance, in a character recognition task, each motif may encode whether the center of a given character occurs at a given position. Since, even in the worst case of an image of pure text, the image has orders of magnitude fewer characters than pixels, we expect the intermediate representation to be extremely sparse. This pattern is representative of many other prediction tasks--e.g., one could predict economic signals from satellite data by identifying a small number of building types, or detect bird social behavior from nature recordings by analyzing bird chirps.

Sparling directly enforces sparsity by setting activations below some threshold equal to zero; this threshold is iteratively updated to achieve a target sparsity level (e.g., 99%). A key challenge is that the optimization problem is very unstable for high sparsity values. To address this issue, our optimization algorithm anneals the target sparsity over time. A byproduct of this approach is that we achieve a tradeoff between sparsity values and accuracies during the course of training, enabling the user to post-hoc choose a desired sparsity level.

**Example.** Figure 1 shows our DigitCircle task, consisting of noisy images that contain digits placed in a circle. The goal is to list the digits in counterclockwise order starting from the smallest one. In our framework, each digit is a motif, and it occurs at a very sparse number of positions in the input image. The final label can be computed as a function of these motifs and their positions.
Crucially, we want to learn to predict these motifs given no labeled supervision about their positions--i.e., the position of each digit is not provided during training. Despite training only on end-to-end supervision (i.e., input images and labels of the form "072634"), our model is able to act as an effective predictor (up to permutation) of digit positions, identifying the correct digit 98.84% of the time on average. Additionally, it is able to achieve a high end-to-end accuracy of 97.42%, while achieving nearly the maximum sparsity possible (99.9950%; the maximum sparsity possible for this domain is 99.9955%). Alternate sparsity enforcement techniques employing \(L_{1}\) and KL-divergence loss cannot reproduce these results and either do not produce extreme sparsity or have accuracy close to 0%.

**Contributions.** Our main contribution is Sparling, an algorithm for learning intermediate representations with extremely sparse activations, along with an empirical evaluation of the effectiveness of our approach. In particular, we show that our approach can successfully learn the correct latent motifs given only end-to-end supervision.

## 2 Related Work

**Concept bottleneck models.** There has been work on learning models with intermediate features that correspond to known variables. Some techniques, such as Concept Bottleneck Models (Koh et al., 2020) and Concept Embedding Models (Zarlenga et al., 2022), involve additional supervision with existing feature labels. Other techniques, such as Cross-Modal Scene Networks (Aytar et al., 2017), use multiple datasets with the same intermediate representation. Sparling does not require the presence of additional datasets or annotations.

**Neural Input Attribution.** Sparling is useful for identifying the relevant parts of an input. One existing technique that accomplishes this goal is saliency mapping (Simonyan et al., 2013; Selvaraju et al., 2016), which uses a backward propagating algorithm (either the standard backpropagation automatic differentiation algorithm or a variant) to find which parts of the input affect the output most. Another technique, looking at the attention weights of an attention model (Mnih et al., 2014), only works with a single layer of attention and also has well-known pitfalls in terms of the validity and completeness of the explanations (Serrano and Smith, 2019). The main benefit a sparse annotation provides over these techniques is the property of unconditional independence. Specifically, when using sparsity, you have the ability to make the claim "region \(x[r]\) of the input is not relevant to the output prediction, regardless of what happens in the rest of the input \(x[\bar{r}]\)". This is a direct result of the fact that whether a location is annotated as a motif is a purely local decision, and since 0s are overwhelmingly common, they carry little information. This property is unavailable using saliency or attention techniques, as these techniques condition on the values you provide for \(x[\bar{r}]\).

**Latent ground truth.** While deep neural networks typically have inscrutable latent variables that are not intended to correspond to any understood feature, in other settings, such as graphical models, latent variables can often represent real parts of a known system. A common example is Hidden Markov Models with known states, which are widely used in genomics (Yoon, 2009), where hidden states represent various hidden features of an observed DNA or RNA sequence.
Our work attempts to accomplish the same goal of having an interpretable latent variable, but without having to pre-specify what it means.

**Disentangled representations.** _Disentangled representations_ are ones where different components of the representation encode independent attributes of the underlying data (Desjardins et al., 2012). However, these approaches typically seek to capture all attributes of the data rather than select the ones specialized to a specific downstream prediction problem.

**Informational bottleneck.** Other work also constrains the information content of the intermediate representation in a neural network. Intuitively, by limiting the mutual information between the input and the intermediate representation, the model must learn to compress the input in a way that retains performance at the downstream prediction task. Strategies include constraining the dimension of the representation--e.g., PCA and autoencoders with low-dimensional representations (Bourlard and Kamp, 1988)--or adding noise--e.g., variational autoencoders (Kingma and Welling, 2014). However, these approaches are not designed to learn interpretable representations. By reducing dimensionality, you increase the chances that multiple different concepts will share a given activation, and by injecting noise, you promote redundancy between neurons and thus reduce the meaningfulness of any given neuron.

Figure 1: Example of the DigitCircle domain. The input \(x\) is mapped by the ground truth \(g^{*}\) function to a map \(m\) of the positions of every digit, which is itself mapped by the ground truth \(h^{*}\) function to the output \(y\), the sequence of symbols 072634. Only \(x\) and \(y\) are available during training.
We typically think of the last dimension of \(X\) representing channels and the rest corresponding to spatial dimensions (e.g., 2D images). We call the latent space \(M\) the _motif_ space. We assume it shares spatial dimensions with \(X\), but may have a different number of channels. Importantly, we do not assume that \(M\) is known--e.g., we may have little or no labeled data on which components of \(M\) are active. ### Sparse Activations Assumption Our critical assumptions are that the output of \(g^{*}\) is _sparse_ (i.e., its output equals zero on nearly all components), and that \(g^{*}\) is _local_. To formalize sparsity, we first define the _density_\(\delta\) to be the expected fraction of nonzero components of the output of \(g^{*}\). Letting \[\mathrm{NZ}(m)=\frac{1}{SC}\sum_{\mathbf{i},c}\mathbf{1}(m[\mathbf{i},c]\neq 0)\] be the proportion of nonzero entries of \(m\), where \(S\) is the total number of positions in \(m\) and \(C\) is the number of channels, we define \[\delta_{g}=\mathbb{E}_{x}[\mathrm{NZ}(g(x))],\] where the expectation is taken over the distribution of inputs \(x\in X\). Our Sparse Activations Assumption, parameterized by \(\delta_{0}\), can thus be stated as \(\delta_{g}\leq\delta_{0}\ll 1\). We use \(\delta\) to denote \(\delta_{\hat{g}}\) for the rest of this paper. In addition, locality is the standard property where a component only depends on a small number of inputs; for example, convolution filters are designed to parameterize spatially local linear functions. While these constraints may appear strict, they fit problems where most of the information can be localized to small regions of the input. In these settings, we can trade a small amount of accuracy in exchange for being able to tell precisely what parts of an input are important. Unlike attention layers, this determination is independent of other parts of the input. ### Motif Identifiability Hypothesis We can then pose the Motif Identifiability Hypothesis as _If \(g^{*}\) and \(\hat{g}\) both satisfy locality and the Sparse Activations Assumption, and \(\hat{f}\approx f^{*}\), we know that \(\hat{g}\approx g^{*}\)_. This hypothesis means that for certain kinds of functions, it is possible to recover the underlying motif structure with just end-to-end data. Note that this is a narrower claim than Identifiability in general, as we only claim to identify the ground truth \((g^{*},h^{*})\) functions rather than any individual parameters of \(g^{*}\) or \(h^{*}\). ### Motif Model Equivalence Evaluating our Motif Identifiability Hypothesis requires a formal definition of approximate equivalence between motif models--i.e., what \(\hat{g}\approx g^{*}\) means. For the purposes of this paper, we work in a synthetic domain where during final evaluation we can "unseal" \(M\), and thus get a view of the true motifs. However, we need to deal with two additional challenges: channel permutations and motif alignment. Permutations are easily handled by taking the minimum of our error metric over all possible permutations. Handling motif alignment is more complex. Specifically, there are many different ways to recognize a given pattern, some of which correspond to different motif positions. To ensure we account for this flexibility when evaluating models, we only check that the predicted point be within the _footprint_ of the true motif, which we define as the smallest cuboid1 covering the points that influence that motif. Footnote 1: For images, the cuboid is a rectangle, as drawn in Figure 1. 
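(As an aside, the sparsity statistics of Section 3.1 are straightforward to compute in practice; below is a minimal NumPy sketch, with channels-last tensors and function names of our own.)

```
import numpy as np

def nz(m: np.ndarray) -> float:
    """NZ(m): fraction of nonzero entries over all S positions and C channels."""
    return float(np.mean(m != 0))

def density(g, xs) -> float:
    """delta_g: the expected NZ of g's output, estimated over a sample xs."""
    return float(np.mean([nz(g(x)) for x in xs]))
```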
We can then define \(P(\hat{m})\) as the set of all predicted motifs, \(\mathrm{FPM}(\hat{m},m^{*})\) as the set of predicted motifs that do not overlap the footprints of any true motifs, and \(\mathrm{MM}(\hat{m},m^{*})\) as the set of predicted motifs that overlap a footprint of a true motif and have greater activation value than all other motifs overlapping the same footprint.2 We also define \(C((\hat{\mathbf{i}},\hat{c}),m^{*})\) to be the footprint that the predicted motif at location \(\hat{\mathbf{i}},\hat{c}\) matches, or \(\emptyset\) if it does not match any. For formal definitions of these functions, see Appendix A.

Footnote 2: We ignore motifs that are not maximal in a footprint, as these would be trivially ignorable when actually using the intermediate layer.

### Evaluation Metrics

Next, we describe the metrics we use to evaluate different models \(\hat{f}=\hat{h}\circ\hat{g}\). First, we use the usual end-to-end evaluation of exact match error:

\[\textsc{EndToEnd}_{\mathcal{D}}(\hat{f})=\mathbb{E}_{x\sim\mathcal{D}}[\mathbf{1}(f^{*}(x)\neq\hat{f}(x))].\]

This error metric can be calculated given only end-to-end supervision in the form of \((x,y)\) pairs, and it is the only error used in training and validation. Beyond this basic error metric, we are interested in evaluating \(\hat{g}\approx g^{*}\) in order to test the Motif Identifiability Hypothesis. We define two motif error metrics. First, the _false positive error (FPE)_ is the percentage of motifs that are false positive motifs:

\[\mathrm{FPE}_{\mathcal{D}}(\hat{g})=\frac{\sum_{x\in\mathcal{D}}|\mathrm{FPM}(\hat{g}(x),g^{*}(x))|}{\sum_{x\in\mathcal{D}}|P(\hat{g}(x))|}.\]

Second, the _confusion error (CE)_ is defined as follows: (i) permute \(\hat{g}\)'s channels to best align them with \(g^{*}\), (ii) compute the percentage of maximal motifs in range of a true motif that do not correspond to the true motif's channel:

\[\mathrm{CE}_{\mathcal{D}}(\hat{g})=\min_{\sigma\in\Sigma_{C}}\frac{\sum_{x\in\mathcal{D}}|\mathrm{conf}_{\sigma}(\hat{g}(x),g^{*}(x))|}{\sum_{x\in\mathcal{D}}|\mathrm{MM}(\hat{g}(x),g^{*}(x))|},\]

where \(\mathrm{conf}_{\sigma}(\hat{m},m^{*})\) represents the motifs that do not match ground truth under permutation \(\sigma\)

\[\mathrm{conf}_{\sigma}(\hat{m},m^{*})=\{t\in\mathrm{MM}(\hat{m},m^{*}):\neg\mathrm{mat}_{\sigma}(t,C(t,m^{*}))\},\]

and \(\mathrm{mat}_{\sigma}(\hat{t},t^{*})\) is a function that checks whether the two motif index tuples match under channel permutation \(\sigma\). A low FPE implies that the motifs you do see are probably referring to something real, while a low CE implies that you can correctly identify which true motif is being referred to.

**Algorithm 1** Train Loop \((\hat{f},\mathcal{D},M,B,d_{T},\delta_{\mathrm{update}})\)

```
T_0 ← 1
for t = 1 to ... do
    TrainStep(f̂, D_{Bt : B(t+1)})
    T_t ← T_{t-1} − B·d_T
    if Bt mod M = 0 then
        A_t ← Validate(f̂)
        if A_t > T_t then
            f̂.δ ← f̂.δ × δ_update
            T_t ← A_t
        end if
    end if
end for
```

### Connection to Information Bound

Finally, we establish a connection between Sparling and information bottleneck approaches. Sparsity induces an information bound by limiting the amount of information in the intermediate representation.
Specifically, if we let \(\mathcal{X}\) be a random variable for the input, and \(\mathcal{M}=g(\mathcal{X})\) be the motif layer, we can bound the mutual information between inputs and motifs as \(I(\mathcal{X},\mathcal{M})\leq H(\mathcal{M})\), where \(H(\cdot)\) is entropy. Thus, it is sufficient to bound \(H(\mathcal{M})\). We can first break it into per-channel components:

\[H(\mathcal{M})\leq\sum_{\mathbf{i},c}H(\mathcal{M}[\mathbf{i},c]).\]

Then, let \(\delta_{\mathbf{i},c}\) denote the density of channel \(c\) at position \(\mathbf{i}\), and \(\eta\) be a bound on the amount of entropy in each nonzero activation:

\[\eta\geq H(\mathcal{M}[\mathbf{i},c]\,|\,\mathcal{M}[\mathbf{i},c]\neq 0).\]

Then we apply the chain rule:

\[H(\mathcal{M}[\mathbf{i},c])\leq H(B(\delta_{\mathbf{i},c}))+\eta\delta_{\mathbf{i},c},\]

where \(B(p)\) denotes the Bernoulli distribution with parameter \(p\). Thus, we have

\[H(\mathcal{M})\leq\sum_{\mathbf{i},c}H(B(\delta_{\mathbf{i},c}))+SC\eta\delta,\]

where \(S\) is the size of the image in pixels, \(C\) is the number of channels, and \(\delta\) is defined as in Section 3.1. Finally, using Jensen's inequality (as \(H(B(t))\) is concave), we have

\[H(\mathcal{M})\leq SC(H(B(\delta))+\eta\delta).\]

Section 5.5 discusses techniques to bound \(\eta\).

## 4 Methods

In this section, we introduce Sparling, which is composed of two parts: the Spatial Sparsity Layer and the Adaptive Sparsity Algorithm. The Spatial Sparsity Layer is designed to achieve the extreme sparsity rates described in Section 3. This layer is the last step in the computation of \(\hat{g}\) and enforces the sparsity of \(\hat{g}\); we compose \(\hat{g}\) out of convolutional layers to enforce locality. The Adaptive Sparsity Algorithm is designed to ensure the Spatial Sparsity Layer can be effectively trained.

### Spatial Sparsity Layer

We define a spatial sparsity layer to be a layer with a parameter \(t\) whose forward pass is computed as

\[\mathrm{Sparse}_{t}(z)=\mathrm{ReLU}(z-t).\]

Importantly, \(t\) is treated as a constant for the purposes of backpropagation and is not updated by gradient descent. Instead, we update \(t\) using an exponential moving average of the quantiles of observed training batches:

\[t_{n}=\mu t_{n-1}+(1-\mu)q(z_{n},1-\delta),\]

where \(t_{n}\) is the value of \(t\) on the \(n\)th iteration, \(z_{n}\) is the \(n\)th batch of inputs to this layer, \(\mu\) is the momentum (we use \(\mu=0.9\)), \(\delta\) is a target density the layer aims to achieve (described in Section 3.1), and \(q\) is the quantile function. The quantile function \(q:\mathbb{R}^{B\times d_{1}\times\ldots\times d_{k}\times C}\times\mathbb{R}\rightarrow\mathbb{R}^{C}\) is implemented such that

\[\forall c,\quad p\approx\frac{1}{BS}\sum_{b,\mathbf{i}}\mathbf{1}(z[b,\mathbf{i},c]\leq q(z,p)[c]).\]

This enforces that each channel must individually have density \(\delta\). Thresholds are set uniformly at all positions in the input. We refer to this as the _multiple thresholds (MT)_ approach, as opposed to the _single threshold (ST)_ ablation we describe in Section 5.1's "ablation" paragraph. Since \(t_{n}\) is computed from the data distribution, we can treat it as the \((1-\delta)^{\mathrm{th}}\) quantile of the distribution of the outputs of the previous network over the data, enabling this layer to set all but a \(\delta\) fraction of its outputs to 0. Finally, we always include an affine batch normalization before this layer.
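A minimal PyTorch-style sketch of this layer is shown below; the module structure and names are ours, and \(\delta\) is assumed to be annealed externally by Algorithm 1:

```
import torch
import torch.nn as nn

class SpatialSparsityLayer(nn.Module):
    """Sparse_t(z) = ReLU(z - t), with t tracking the per-channel
    (1 - delta) quantile of the batch-normalized activations."""

    def __init__(self, channels: int, delta: float = 0.01, momentum: float = 0.9):
        super().__init__()
        self.delta = delta          # target density; annealed by Algorithm 1
        self.momentum = momentum    # mu in the update for t_n
        self.bn = nn.BatchNorm2d(channels, affine=True)
        self.register_buffer("t", torch.zeros(channels))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape (B, C, H, W)
        z = self.bn(z)
        if self.training:
            with torch.no_grad():  # t is a constant for backpropagation
                flat = z.transpose(0, 1).reshape(z.shape[1], -1)
                q = torch.quantile(flat, 1.0 - self.delta, dim=1)
                self.t.mul_(self.momentum).add_((1.0 - self.momentum) * q)
        return torch.relu(z - self.t.view(1, -1, 1, 1))
```

Note that the threshold update happens under `no_grad`, matching the treatment of \(t\) as a constant during backpropagation, and that taking the quantile per channel implements the multiple thresholds (MT) variant.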
Including this batch normalization increases training stability; we believe it allows gradient signal to propagate even to areas masked by the thresholding of our Sparse layer. We provide an analysis of the necessity of this addition in Section 5.4.

### Adaptive Sparsity

In practice, we find that applying an extreme sparsity requirement (very low \(\delta\)) upon initial training of the network leads to bad local minima, with the network being unable to gain any learning signal on the vast majority of inputs. Instead, we use a technique inspired by simulated annealing and learning rate decay, and reduce \(\delta\) slowly over time. Specifically, we add a step to our training loop that periodically checks validation accuracy \(A_{t}\) and reduces the density whenever it exceeds a target \(T_{t}\). The process is as described in Algorithm 1, with the target accuracy dropping slowly. When the validation accuracy reaches the target accuracy, we reduce density and increase the accuracy bar to whatever our model achieved. Our experiments use evaluation frequency \(M=2\times 10^{5}\), batch size \(B=10\), \(d_{T}=10^{-7}\), and \(\delta_{\mathrm{update}}=0.75\).

## 5 Experiments

### Experimental Setup

**DigitCircle domain.** To evaluate Sparling we construct the DigitCircle domain. The input \(X\) is a \(100\times 100\) monochrome image with 3-6 unique digits placed in a rough circular pattern, with some noise applied to the image both before and after the numbers are placed. See Figure 2 for examples. The output \(Y\) is the sequence of digits in counterclockwise order, starting with the smallest number. The latent motifs layer \(M\) is the position of each digit: we can conceptualize this space as a \(100\times 100\times 10\) tensor with 3-6 nonzero entries. Note that the model during training and validation has no access to the concept of a digit as an image, nor to the concept of a digit's position.

**Architecture and training.** Our neural architecture is adapted from that of Deng et al. (2016). We make our \(\hat{g}\) architecture a convolutional network with a \(17\times 17\) overall window, by layering four residual units (He et al., 2016), each containing two \(3\times 3\) convolutional layers. We then map to a 10-channel bottleneck where our Spatial Sparsity layer is placed. (We choose 10 channels to match the 10 digits.) Our \(\hat{h}\) architecture is a max pooling, followed by a similar architecture to that of Deng et al.: we keep the LSTM row-encoder, but replace the attention decoder with a column-based positional encoding followed by a Transformer (Vaswani et al., 2017) whose encoder and decoder have 8 heads and 6 layers. Throughout, except in the bottleneck layer, we use a width of 512 for all units. For our experiments, we keep this structure stable, and only modify the bottleneck layer.

We use an entirely random generation technique for the dataset, with seeds 1 through 9 for the 9 different training runs of each model, and seeds -1 and -2 reserved for validation and testing. We use a batch size of 10 samples and a learning rate of \(10^{-5}\). Our validation and test sets both contain \(10^{4}\) examples.

Figure 2: Examples of input/output pairs of the DigitCircle domain. The inputs are the images, and the outputs are the sequences of numbers in the title.

**Baselines.** We consider two other approaches to ensuring the creation of sparse motifs, both taking the form of auxiliary regularization losses. In both cases, we vary the loss weight to see how that affects error and sparsity.
First, we consider \(L_{1}\) loss. In our implementation, we use an affine batch normalization layer followed by a ReLU. The output of the ReLU is then used in an auxiliary \(L_{1}\) loss. This approach is discussed in (Jiang et al., 2015).

We also consider using KL-divergence loss as in (Jiang et al., 2015). The approach is to apply a sigmoid, then compute a KL-divergence between the Bernoulli implied by the mean activation of the sigmoid and a target sparsity value (we use 99.995% to perform a direct comparison). While this is usually done across the training data (Ng, 2011), in our case the overall sparsity should be similar in all batches, so we instead enforce the loss per-batch (but across all positions and channels). Our other modification, in order to induce true sparsity, is to, after the sigmoid layer (where the loss is computed), subtract 0.5 and apply a ReLU layer.

**Ablations.** We consider ablations to test three design decisions. First, is the batch normalization we place before our sparse layer necessary? Second, is the adaptive sparsity algorithm we use necessary? Third, we consider the _single threshold (ST)_ sparsity approach, where we take the quantile across the entire input (batch axis, dimensional axes, channel axis). In this case, the channels can have differing resulting densities that average together to the target \(\delta\). More precisely, we use the quantile function \(q_{\mathrm{ST}}:\mathbb{R}^{B\times d_{1}\times\ldots\times d_{k}\times C}\times\mathbb{R}\rightarrow\mathbb{R}\), implemented such that

\[p\approx\frac{1}{BSC}\sum_{b,\mathbf{i},c}\mathbf{1}(z[b,\mathbf{i},c]\leq q_{\mathrm{ST}}(z,p)).\]

### End-to-End Results

**End-to-end errors.** Figure 3 shows the end-to-end errors of Sparling and the ST ablation. At 1.5\(\times\) the theoretical minimum density, we consistently perform under 5% error, whereas at 1.1\(\times\) the theoretical minimum density there is a much wider variation from run to run, but the minimum error stays similar, suggesting some instability in Sparling as it approaches the theoretical minimum density.

**Baselines.** Figure 4 shows the results of using \(L_{1}\) as a method for encouraging sparsity. There are two weight regimes: when \(\lambda\leq 1\), we end up with low sparsity (relative to the theoretical minimum) but low error, and when \(\lambda\geq 2\), we end up with a model that never learns anything at all (near 100% error). Even in the latter case, the \(L_{1}\) loss does not consistently push density down to the theoretical minimum or below, suggesting it might be insufficiently strong as a learning signal to achieve the kind of density Sparling can.

In our experiments, the KL-divergence was unable to achieve a density below 0.1%, even when we used a loss weight as high as \(\lambda=10^{5}\) and \(3\times 10^{6}\) steps (much more than was necessary for convergence of the \(L_{1}\) model). Thus, we conclude that it is unsuitable for encouraging the kind of sparsity we are interested in.

Figure 4: Results of the \(L_{1}\) experiment. Note that the \(x\)-axis is log-scaled and the \(y\)-axis is error. Our results (same as Figure 3) can be seen on the bottom left.

Figure 3: End-to-end error (lower is better). Computed on a test set (different from the validation set used to reduce density). Left plot is computed at 1.1\(\times\) the theoretical minimum density (one non-zero activation per digit), and right plot is computed at 1.5\(\times\) the theoretical minimum density.
Plotted are 9 separate runs per model, with a 95% bootstrap confidence interval.

### Interpretability Results

**Examples.** Figure 5 shows a few examples from one of our models' intermediate layers. As can be seen, all digits are appropriately identified by our intermediate layer, with very few dots (in these examples, none) falling away from a digit. Most of the slack (here, 10% extra) activations are duplicates on an existing digit. Also, note that the activations are consistent from sample to sample--for example, C is used for digit 6 in all three images where it appears.

**Confusion matrix.** To provide a more quantitative summary of this effect, consider Figure 6, which shows an analog of a confusion matrix for that model. Note that our model rarely produces an incorrect covering motif, and even more rarely leaves a digit blank.

**Motif error.** Next, we show our measures of motif error, FPE and CE, in Figure 7 for all the sparsity models. Errors for our MT model are usually below 10%, and in the 1.1\(\times\) density case are all below 1% except in one training run out of the 9. Errors are substantially higher when we have 1.5\(\times\) minimum density, as our sparsity constraint is less strict, so the model is freer to produce an incorrect intermediate representation. The generally low errors on the MT model, despite only having training and validation performed end-to-end, demonstrate that the Motif Identifiability Hypothesis holds for the DigitCircle domain.

**Predicting motif error.** Figure 8 shows the relationship between the motif errors and the overall end-to-end error. There is no relationship for FPE, but there is a positive relationship for CE, implying that a strategy where one trains several models and then chooses the one with the best validation error is a good way to reduce CE and thereby improve motif quality.

### Ablation

We compare our approach to ablations to evaluate our design decisions. First, including a batch normalization before the sparsity layer is crucial. Without a batch normalization layer, over 9 runs, the best MT model gets an error of 99.35%, whereas the best ST model gets an error of 97.07%; in essence, neither model is able to learn the task at all.

We also analyzed the need for the adaptive sparsity update algorithm (Algorithm 1). When starting from 1.1\(\times\) or 1.5\(\times\) the theoretical minimum density, the model converged to an error above 98%. This result suggests that some technique for updating sparsity is necessary to avoid bad local minima.

Figure 5: Inputs annotated with the maximal motifs produced by the \(\hat{g}\) of the MT model trained with seed=1, at 1.1\(\times\) the theoretical minimum sparsity. We label our activations A through J to distinguish them from digits. Stars indicate sites where there are non-maximal motifs present as well. These examples are representative and are simply examples 0, 1, 2, and 3 from our dataset.

Figure 6: Confusion Matrix of 10k unseen samples (not in training or validation sets). We place false positive motifs into the none row and maximal motifs into the rows corresponding to the digit they cover. Each row is labeled by the percentage of motifs falling into the row, and each row's cells are then normalized to add to 1. We also record true motifs that do not have any predicted motifs placed on them in the none column. The rest of the columns correspond to motifs, labeled A through J and permuted to the permutation that minimizes CE.

Figure 7: Errors. Bar height depicts the mean across 9 seeds, while individual dots represent the individual values and the error bar represents a 95% bootstrap CI of the mean.
The ST model's FPE is so low it does not show up on the chart; all are under 0.012%.

Next, as seen in Figure 3, the ST ablation is able to achieve fairly low error end-to-end, but still has a slightly higher average error than MT. In Figure 7, however, we see that it performs substantially worse in terms of CE, while performing better with respect to FPE. Without the constraint that the motifs have equivalent density across each channel, some motifs are being used to represent multiple digits, which substantially increases confusion error, but also reduces false positives. In general, the MT model is superior, as it has reasonable FPE and substantially lower CE.

### Entropy upper bound

To compute our entropy upper bound, we must first compute \(\eta\), as defined in Section 3.5. To compute this, we bin the nonzero activations into \(2^{k}\) bins by quantile. We set \(\eta\) to be the smallest value of \(k\) that does not substantially affect the accuracy of the model (we consider 0.5% to be a reasonable threshold for this purpose). Figure 9 shows the result of this experiment, averaged across 9 seeds. The general downward trend in error caused by binning as density decreases demonstrates that reducing the number of motifs reduces the importance of the precise magnitudes. For the purposes of entropy bounding, we use \(\eta=\log_{2}(16)=4\)b.

### Error metrics vs Entropy Bound

Figure 10 shows our error metrics plotted against the entropy, with the \(x\)-axis reversed to show progression in training time as we tighten the entropy bound. As expected, as entropy decreases, FPE decreases, as there are fewer motifs produced and thus fewer false positives. More interestingly, we find that as entropy decreases, CE decreases while end-to-end error increases. This demonstrates a tradeoff between a more accurate overall model, which benefits from greater information present, and a more accurate motif model, which benefits from a tighter entropy bound.3

Footnote 3: While this may seem to contradict the result in Section 5.3, it in fact does not. Within a single model, tightening the density has inverse effects on end-to-end error and CE; but separately, some models are in general more or less accurate.

One illuminating result is that even when entropy is about \(0.1\)b/pixel and FPE is very high, CE is still not equivalent to random (which would be about 88% error). This result indicates that the model is mostly choosing the correct motifs to be maximal even at higher levels of entropy, which may explain why Algorithm 1 works: the newly removed activations when the threshold is raised are more likely to be incorrect than not.

## 6 Conclusion

We have presented Sparling: a novel spatial sparsity layer and adaptive sparsity training technique that has the ability to learn a highly sparse latent motifs layer for dimensional data, using only an end-to-end training signal. Similar levels of activation sparsity are unachievable by existing strategies. Finally, we demonstrate that Sparling achieves interpretable and accurate motifs with zero direct training supervision on the motifs.

Figure 8: Model error versus FPE and CE, at \(1.1\times\) the minimum sparsity. All are log-scaled to highlight the low-error region. Each dot represents a single model training seed.

Figure 10: Error metrics used in this project versus entropy per pixel. Note that the \(x\)-axis is reversed; this indicates training progression, which starts with high entropy and narrows it over time.
Error region is the 95% confidence interval among 9 seeds.

Figure 9: Increase in error when binning. Each series represents a different bin count, as annotated in the legend. Density is log-scaled and reversed to indicate training progress.

## 7 Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant Nos. CCF-1918839 & CCF-1917852. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This work was enriched by conversations with Christopher Burge, Phillip A. Sharp, Kayla McCue, and Chenxi Yang.